Hybrid Media Workshop

 

In early October this year I gave a small workshop at the GUC for the media design department’s Reflections on… series. Reflections on… is a lecture series initiated by my former colleague Magdalena Kallenberger that tries to highlight various aspects of cinematography and media production/conception. Artists from various fields are invited on a weekly basis for an open discussion of their projects with students and staff. I greatly enjoyed visiting the series before, because the sessions often turned out to be forums in which students and artists could learn from each other: the artists, most often coming from abroad and visiting Cairo only for a short period, would gain a different point of view on their projects and work, while the students got a glimpse over the fence of the already familiar.

Reflections on ... Lecture series @GUC Media Design, Cairo

At the time I was asked to give the lecture, the invited speakers were supposed to take a stance on the term Hybrid Media. Most of the previous lecturers dealt with cinematography in one way or another and approached the topic in a conceptual manner. I decided to do something different: I would tackle the topic from a technological point of view, focusing on the hybridization not of the content of media, but of the media itself.

In past projects I have often dealt with this kind of intermedial transformation. In fogpatch, for example, this involved the transformation of seismic activity in the San Francisco Bay Area into algorithmic poetry, while in CairoRoundabout it meant building a specific tool to achieve a visual effect highlighting the cross-medial integration of man, media and the city of Cairo.

For the workshop I created some examples to demonstrate this fluidity and also to highlight its limitations and the special properties that need to be addressed when designing such transformations. They were built using Processing and Max5.

 

time-based video-transformation

example of moving image slitscan technique

Demonstrating the reflexive transformation abilities of a medium (basically a re-arrangement of itself), this patch implements a time-based slit-scan technique. The outcome is a continuous stream of images in which time flows from top to bottom.
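
To give an idea of the mechanics, here is a minimal Processing re-interpretation of the technique (the workshop version itself is a Max5 patch and is included in the archive below; the webcam input and the choice of the middle scanline are just assumptions for this sketch):

```processing
// Minimal slit-scan sketch (assumption: a webcam via the built-in video library
// stands in for the source footage used in the Max5 patch).
import processing.video.*;

Capture cam;
int row = 0;  // the output row that receives the next slit; time flows top to bottom

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  background(0);
}

void draw() {
  if (cam.available()) {
    cam.read();
    // copy one horizontal line from the middle of the current frame into the
    // next output row, so each row of the canvas is a later moment in time
    copy(cam, 0, cam.height / 2, cam.width, 1, 0, row, width, 1);
    row = (row + 1) % height;
  }
}
```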

 

graphic-to-text

intermedial workshop example graphic to text

This example, written in Processing, assembles a typographic spiral based on poetry/lyrics by Rilke, Frost and the Rolling Stones and on the pixel values of an image.
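
Stripped down to its core, the idea can be sketched like this (the text snippet and the file name source.jpg are placeholders; the actual workshop code in the archive handles the typography in more detail):

```processing
// Stripped-down graphic-to-text sketch (assumptions: "source.jpg" and the text
// are placeholders; the workshop version uses the full poems/lyrics).
PImage img;
String txt = "Two roads diverged in a yellow wood ";

void setup() {
  size(600, 600);
  img = loadImage("source.jpg");
  img.resize(width, height);
  background(255);
  fill(0);
  textAlign(CENTER, CENTER);

  float angle = 0;
  float radius = 5;
  int i = 0;
  while (radius < width / 2) {
    char c = txt.charAt(i % txt.length());
    float x = width / 2 + cos(angle) * radius;
    float y = height / 2 + sin(angle) * radius;
    // the brightness of the underlying pixel decides how large the glyph is drawn
    float b = brightness(img.get(int(x), int(y)));
    float s = map(b, 0, 255, 24, 4);
    textSize(s);
    text(c, x, y);
    // advance along the spiral by roughly one glyph width
    angle += s / radius;
    radius += 0.15;
    i++;
  }
}
```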

 

text-to-graphic

intermedial workshop example text to image

Inspired by Boris Mueller’s Poetry on the Road visualization of poetry, this small program demonstrates how to build a simple system for transforming texts into interesting graphical representations.
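
As a sketch of what such a text-to-graphic mapping can look like, every word below is reduced to a single number that deterministically drives position, size and colour (these particular rules are only illustrative assumptions, not the ones used by Poetry on the Road or by the workshop example):

```processing
// Illustrative text-to-graphic mapping (assumption: these rules are a simple
// stand-in; the placeholder line of poetry would be replaced by a full text).
String poem = "Whose woods these are I think I know";

void setup() {
  size(600, 600);
  background(255);
  noStroke();
  String[] words = split(poem, ' ');
  for (String w : words) {
    int sum = 0;
    for (int i = 0; i < w.length(); i++) {
      sum += w.charAt(i);            // reduce the word to a single number
    }
    // the same word always lands at the same spot with the same look
    float x = (sum * 31) % width;
    float y = (sum * 17) % height;
    float d = map(w.length(), 1, 12, 5, 60);
    fill((sum * 7) % 256, 100, 180, 160);
    ellipse(x, y, d, d);
  }
}
```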

 

video-to-sound

intermedial workshop graphic to sound

Using Max5 and Jitter it is relatively easy to transform arbitrary image material into music: each pixel value of a downsampled source picture is used to drive a specific instrument in a MIDI drum set. This was probably the most fun example to develop, because nice drum patterns emerge easily. With a webcam one can manipulate the process in a tangible way and create interesting results. I am thinking about extending it into a small project of its own when I find the time. Maybe in combination with my newly developed interest in Lego Mindstorms robots …
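
A rough Processing sketch of the same principle could look like the following (assumptions: The MidiBus library is installed, MIDI output device 0 points at a General MIDI drum kit, and the grid size, threshold and note numbers are placeholders; the workshop version is a Max5/Jitter patch):

```processing
// Pixel-to-drum sketch (assumptions: The MidiBus library is installed and
// output device 0 is a GM drum kit; thresholds and notes are placeholders).
import processing.video.*;
import themidibus.*;

Capture cam;
MidiBus midi;
int grid = 4;                          // downsample the frame to grid x grid cells
int[] drums = {36, 38, 42, 46};        // kick, snare, closed & open hi-hat (GM)

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  midi = new MidiBus(this, -1, 0);     // no MIDI input, first available output
  frameRate(8);                        // one "step" of the pattern per frame
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  image(cam, 0, 0);
  int cw = width / grid, ch = height / grid;
  for (int gy = 0; gy < grid; gy++) {
    for (int gx = 0; gx < grid; gx++) {
      // the average brightness of a cell decides whether its drum fires
      PImage cell = cam.get(gx * cw, gy * ch, cw, ch);
      cell.loadPixels();
      float sum = 0;
      for (int p : cell.pixels) sum += brightness(p);
      float avg = sum / cell.pixels.length;
      if (avg > 140) {
        int note = drums[(gy * grid + gx) % drums.length];
        // channel index 9 is MIDI channel 10, the conventional drum channel
        midi.sendNoteOn(9, note, int(map(avg, 140, 255, 60, 127)));
      }
    }
  }
}
```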

 

sound-to-graphic

intermedial Workshop audio to graphic example

Equalizer-style visualizations in media players are one of the most common approaches to hybrid media and intermedial transformations: the amplitude of a given sound file is taken and transformed into some kind of visual output. This little program gives a simple demonstration of this, focusing on highlighting patterns in the audio stream used as input.
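
A minimal version of such a sound-to-graphic mapping can be sketched with the Minim library that ships with Processing (the file name track.mp3 is a placeholder; the workshop example in the archive uses its own mapping):

```processing
// Minimal sound-to-graphic sketch using Minim (assumption: "track.mp3" is a
// placeholder file in the sketch's data folder).
import ddf.minim.*;

Minim minim;
AudioPlayer player;

void setup() {
  size(600, 300);
  minim = new Minim(this);
  player = minim.loadFile("track.mp3", 1024);
  player.play();
}

void draw() {
  background(0);
  stroke(255);
  // draw the current buffer of the mixed signal as a waveform, so repeating
  // patterns in the audio become visible as repeating shapes
  for (int i = 0; i < player.bufferSize() - 1; i++) {
    float y1 = map(player.mix.get(i), -1, 1, height, 0);
    float y2 = map(player.mix.get(i + 1), -1, 1, height, 0);
    line(i * width / (float) player.bufferSize(), y1,
         (i + 1) * width / (float) player.bufferSize(), y2);
  }
}
```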

 


Download all of the examples in an archive

How to use a Kinect in Processing

Before I can start coding and use the Kinect in my own projects, I need to find a way to access it from inside a familiar programming environment. I decided to use Processing/Java for it.

Mac OS X:

Daniel Shiffman wraps libfreenect for Processing as an easy-to-use, off-the-shelf library. Just download it, put it into your sketchbook’s libraries folder and start working.
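
As a quick sanity check after installation, a sketch along the following lines should display the depth image (the method names follow the examples bundled with the library at the time of writing and may differ in other versions of the wrapper):

```processing
// Basic depth-image test for Shiffman's openkinect wrapper (assumption: method
// names match the library's bundled examples of this era).
import org.openkinect.*;
import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.start();
  kinect.enableDepth(true);     // ask the device for depth frames
}

void draw() {
  image(kinect.getDepthImage(), 0, 0);
}

void stop() {
  kinect.quit();                // shut the device down cleanly
  super.stop();
}
```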

Windows:

For easy access inside Processing, Windows users seem to have two choices: CLNUI 4 Java or dLibs. Both appear to be essentially JNA wrappers for C++/C# libraries. Victor Martins’ clnui4j is based on the original CLNUI by Codelaboratories (the same people who hacked the PS3 Eye camera drivers for Windows). Thomas Diewald’s dLibs ships with a precompiled version of libfreenect for Windows (and requires the Microsoft Visual C++ 2010 Redistributable Package to be installed). Both of them seem to work fine on my machine; however, dLibs seems to be the better choice at this moment in time, for two reasons. First, it worked off the shelf from inside the Processing IDE, which was not the case for clnui4j; this makes it more handy when it comes to including the Kinect in student projects. Second, it seems that the future of the original CLNUI project is uncertain and that it might die, which would logically mean the end for the wrapper library based on it as well.

Update: A third, much easier option has emerged, called simple-openni. It allows the use of scene analysis and skeleton tracking directly from the OpenNI and NITE frameworks (which have to be installed as well). As the only function missing when I tested it was tilting the camera, I decided to work with this solution for all my projects.
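
A minimal simple-openni sketch, assuming OpenNI/NITE and the library are installed, boils down to enabling the depth stream and drawing it:

```processing
// Minimal simple-openni sketch (assumption: OpenNI/NITE and the SimpleOpenNI
// library are installed; showing the depth map is the most basic use case).
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  context = new SimpleOpenNI(this);
  context.enableDepth();                 // RGB and user/skeleton streams can be enabled similarly
  size(context.depthWidth(), context.depthHeight());
}

void draw() {
  context.update();                      // fetch the next frame from the Kinect
  image(context.depthImage(), 0, 0);     // draw the grayscale depth map
}
```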

Ubuntu Linux:

Although it is actually very easy to get the Kinect to work on Ubuntu, and JNA/JNI wrappers for libfreenect exist, the only pure Processing IDE solution I stumbled upon was this workaround/port of Shiffman’s Processing library by Nikolaus Gradwohl. However, using the provided wrappers inside Eclipse has worked fine so far.