IN-CYCLE

The process of designing interactions between humans and machines has for decades been dominated by the creation of complex patterns of user actions performed with simple input devices. This has started to change in recent years: systems that allow the creation of interactive environments making use of the human body as a whole are spreading. They are also no longer detached systems of information processing, but are aware of the wider context in which they are situated and include it by design. One of the major challenges in current interaction design is therefore to transform such systems from mere objects of curiosity into designs usable in everyday life.

The project IN-CYCLE is a contribution to that effort: building and evaluating a prototypical setup for the documentation of human interactions involving the whole body.

It is based on multi-perspective slit-scan recordings rendered into three-dimensional objects and combined with various sensor data.

It enhances the process of analyzing users’ interactions with gesture-based interfaces by providing a unique perspective on them, highlighting properties not visible through traditional means of documentation.
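To make the underlying slit-scan principle more concrete, here is a minimal sketch in Python (using numpy): it stacks one pixel column per video frame into a single space-time image. This is only an illustration of the basic technique, not the actual IN-CYCLE pipeline, which combines multiple camera perspectives, 3D rendering, and sensor data; the function name and array shapes are my own assumptions.

```python
import numpy as np

def slit_scan(frames, slit_x=None):
    """Stack one pixel column per frame into a space-time image.

    frames: iterable of H x W x 3 arrays, one per video frame.
    slit_x: column index to sample; defaults to the frame centre.
    Returns an H x T x 3 array whose horizontal axis is time.
    """
    columns = []
    for frame in frames:
        x = frame.shape[1] // 2 if slit_x is None else slit_x
        # One vertical slice of the current frame becomes one time step.
        columns.append(frame[:, x, :])
    return np.stack(columns, axis=1)
```

Repeating this for several synchronized cameras and extruding the resulting slices along the time axis is, roughly speaking, how such recordings can be turned into three-dimensional objects.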

Methodology

In his paper on the Video Streamer (1999), Elliot highlights a number of criteria that have to be addressed when “rendering time”:

  • How to portray motion in a still image.
  • How to display video frames so they portray their own temporal characteristics.
  • How to associate a view of motion within the frame with a view of motion beyond the frame.
  • Different ways to effectively render different time scales and how to relate them.

These criteria hint at a number of problems that IN-CYCLE has to address as well.

The core problem is already stated in the first criterion: how to transcend the limitations of the still image while at the same time being confined to it? Various possible solutions come to mind. Some are elaborated in more detail below, mainly as an evaluation of previous attempts that use similar technical ways of transforming video footage. Another approach would be to look at how illustration and other creative practices dealing with still images have tried to depict motion. Even though animation is concerned with recreating the illusion of motion through still images (in a way, quite the opposite of what I am trying to do here), there may be some interesting insights to be gained from it as well.

In order to truly highlight the development of an interaction process, motion needs to remain motion. I think no simple reduction of the interaction process to mere atomic events will do justice to its emergent properties. A solution for this could lie in Elliot’s fourth criterion: if the connection between the currently visualized, limited amount of data and the overall process can be maintained in an appropriate way, the emergent character of the interaction might be preserved.

Other projects have tried to tackle these issues in various ways before, so it seems a plausible first step to build some prototypes similar to these existing works and evaluate how well they fare in addressing these critical aspects of “rendering time”.

Related Work

Salient Still from Teodosio & Bender (2004)
SALIENT STILLS (Laura Teodosio & Walter Bender)

 

Video Streamer screenshot
VIDEO STREAMER (Edward Elliot)

 

Time crystal sculpture
TIME CRYSTALS (Tamás Waliczky)

Experiencing interactivity – part 1

After resigning from my job at the GUC because of my plans to pursue a master’s degree, I finally have the chance to continue working on my own projects. I kept telling myself that I was doing just that for the last two years – and indeed everything I did was connected to my research interests in one way or another – but somehow there was just never enough room to get beyond the I-will-start-now-or-latest-tomorrow-but-in-the-worst-case-the-day-after-tomorrow phase. Well, I’ve got the opportunity to change that now.

I have been thinking for a while about following up on my bachelor thesis on the phenomenology of interactivity. Hugely inspired by Dag Svanaes’ groundbreaking PhD work on interactivity, I asked myself a simple question in it (although I still have problems formulating it in an equally simple way): What are the effects of interactivity on our experience while engaging with interactive systems?

I think it is easiest to explain it by using a small example.

What do you see here? Most probably you recognize what is depicted in the image above as symbols/icons in a desktop environment. However, you would not try to use them in the same manner as you would if this were a desktop, despite them looking exactly the same. In fact, I bet you are a bit confused right now because of the last sentence. The very thought of this comparison seems absurd to you: of course you wouldn’t try to use them – it’s a screenshot!

However, if you stop for a second and give it a thought, you will understand what I am aiming at. They look exactly the same as they do in any standard Windows 7 desktop environment all over the planet. Yet the context of them being displayed in your browser makes you not perceive them as objects to interact with. If I had not started to elaborate on them, you most probably would not even have regarded them as separate objects on top of a background, but just as another screenshot in a blog post. This is because the phenomenon in question is so much a part of our existence that we usually don’t think about it.

What makes the graphics displayed in the context of a browser different from the same graphics displayed in the context of a desktop environment is that in the former you know that your actions on them have no reaction attached to them, while in the latter case you have learned that they do (provided you are familiar with operating systems that make use of any kind of desktop metaphor at all). The relation between the icon displayed on the desktop and the icon depicted in the screenshot is of the same kind as the relation between an object and an image taken of it: only one of them is the real thing. The possibility for action is what distinguishes it from a mere graphical element and makes it what it is.

Now this insight about a desktop icon alone may not seem terribly exciting, but it is a small example that in its triviality highlights an important point, one which has long been neglected in interface design. The point is that interactivity is not only a result of changing system feedback of some kind, but a first-class entity of our experience in dealing with artifacts/designs/applications/systems/whatever-might-be-an-appropriate-term-for-the-things-we-both-design-and-experience-using-computer-based-technology. Usually we do not focus on this aspect of our experiences of interfaces, because they are so tightly bound to the visual sense and to the quasi-standardized structures of systems.

This goes beyond mere conventions in the sense of Norman, however. It is not a functional indicator that we infer by logical deduction or from prior knowledge. Possibilities for action are not some kind of third thing that we add to an object’s appearance. What things ARE is tightly connected to what we can do with them (or cannot do with them, for that matter). Thus it takes the context, the appearance, and the possibility for action for a thing to emerge as what it is. Previous experiences shape, saturate, and even define future ones: a very important insight for interaction design.