In early October this year I gave a small workshop at the GUC for the media design department's Reflections on… series. Reflections on… is a lecture series initiated by my former colleague Magdalena Kallenberger that aims to highlight various aspects of cinematography and media production and conception. Artists from various fields are invited on a weekly basis to have an open discussion of their projects with students and staff. I had greatly enjoyed visiting the series before, because the sessions often turned out to be forums in which students and artists could learn from one another: the artists (most often coming from abroad and visiting Cairo only for a short period) would gain a different point of view on their projects and work, while the students got a glimpse over the fence of the already familiar.
At the time I was asked to give the lecture, the invited speakers were supposed to take a stance on the term Hybrid Media. Most of the previous lecturers dealt with cinematography in one way or another and approached the topic in a conceptual manner. I decided to do something different: I would tackle the topic from a technological point of view, focusing on the hybridization not of the content of media, but of the media themselves.
In past projects I have often dealt with these kinds of intermedial transformations. In fogpatch, for example, this involved transforming seismic activity in the San Francisco Bay Area into algorithmic poetry, while in CairoRoundabout it meant building a specific tool to achieve a visual effect highlighting the cross-medial integration of man, media and the city of Cairo.
For the workshop I created some examples to demonstrate this fluidity and also to highlight its limitations and the special properties that need to be addressed when designing such transformations. They were built using Processing and Max5.
Demonstrating the reflexive transformation abilities of a medium (basically a re-arrangement of itself), this patch implements a time-based slit-scan technique. The outcome is a continuous stream of images in which time flows from top to bottom.
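The original patch was built in Max5, but the core idea is compact enough to sketch in a few lines of Python with NumPy: grab one pixel row (the "slit") from every incoming frame and stack the rows on top of each other, so that the vertical axis of the result is time. The frame data here is synthetic; in the patch the frames would come from a camera.

```python
import numpy as np

def slit_scan(frames, row=None):
    """Time-based slit scan: take one horizontal slit (a single
    pixel row) from each frame and stack the slits top to bottom,
    so the vertical axis of the output represents time."""
    height = frames[0].shape[0]
    row = height // 2 if row is None else row  # default: centre slit
    return np.stack([frame[row] for frame in frames])

# ten synthetic greyscale "frames", 8x8 pixels, brightness = frame index
frames = [np.full((8, 8), t, dtype=np.uint8) for t in range(10)]
out = slit_scan(frames)
print(out.shape)  # one output row per input frame: (10, 8)
```

With a live camera feed one would append a new slit per frame and scroll the image, which produces the continuous stream described above.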
This example, written in Processing, assembles a typographic spiral based on poetry and lyrics by Rilke, Frost and the Rolling Stones and the pixel values of an image.
Inspired by Boris Mueller's poetry visualizations for Poetry on the road, this small program demonstrates how to build a simple system for transforming texts into interesting graphical representations.
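The original sketch is in Processing, but the layout logic can be outlined in Python: walk the characters of a text along an Archimedean spiral and attach a "weight" to each glyph taken from a parallel list of pixel values. The parameter names and the stand-in pixel data below are my own; the actual sketch couples the text with a real image.

```python
import math

def spiral_layout(text, pixels, a=0.0, b=2.0, step=8.0):
    """Lay out the characters of a text along an Archimedean spiral
    (r = a + b * theta); each glyph also carries a pixel value that
    a renderer could map to size or brightness."""
    theta, placed = 2.0, []
    for ch, value in zip(text, pixels):
        r = a + b * theta
        x, y = r * math.cos(theta), r * math.sin(theta)
        placed.append((ch, round(x, 1), round(y, 1), value))
        theta += step / r  # roughly constant arc length between glyphs
    return placed

poem = "The woods are lovely, dark and deep"
gray = [(i * 7) % 256 for i in range(len(poem))]  # stand-in pixel values
glyphs = spiral_layout(poem, gray)
print(len(glyphs), glyphs[0])
```

In Processing the resulting (character, x, y, value) tuples would simply be drawn with text() calls, with the pixel value controlling textSize() or fill().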
Using Max5 and Jitter it is relatively easy to transform arbitrary image material into music: each pixel value of a downsampled source picture drives a specific instrument of a MIDI drum set. This was probably the most fun example to develop, because nice drum patterns emerge easily. With a webcam one can manipulate the process in a tangible way and create interesting results. I am thinking about extending it into a small project of its own when I find the time. Maybe in combination with my newly developed interest in Lego Mindstorms robots …
Equalizers in media players are one of the most common approaches to hybrid media and intermedial transformations: the amplitude of a given sound file is transformed into some kind of visual output. This little program gives a simple demonstration of this, focusing on highlighting patterns in the audio stream used as input.
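The underlying sound-to-image step can be sketched without any audio library: split the signal into blocks, take each block's RMS amplitude, and render it as a bar. The block size and bar width below are arbitrary demo values, and the signal is a synthetic fading sine tone rather than a real sound file.

```python
import math

def amplitude_bars(samples, block=64, width=20):
    """Split a signal into fixed-size blocks, compute each block's
    RMS amplitude and render it as a horizontal text bar -- the
    simplest visual transformation behind media-player equalizers."""
    bars = []
    for i in range(0, len(samples), block):
        chunk = samples[i:i + block]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        bars.append('#' * int(rms * width))
    return bars

# a sine tone that fades out over four blocks
signal = [math.sin(0.2 * n) * (1 - n / 256) for n in range(256)]
bars = amplitude_bars(signal)
for bar in bars:
    print(bar)
```

A real equalizer would additionally split each block into frequency bands (e.g. with an FFT), but the amplitude-per-block version already makes the temporal patterns in the stream visible.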