interactive side and code//

my part in the future cinema project was working on the interactivity aspect of it.
we decided to create a video installation type piece with the possibility to communicate with / trigger the visual narrative on the walls.
in general we will have a prototype structure of 3 walls positioned like a cube on a table (in the real case those walls would be human-sized, representing a small room which can be entered and experienced by people).
firstly, we had to decide what sort of videos we would project and how we would go about the narrative. right now i can clearly say that we have left the narrative for the engaged person to decide what this room is about, and we have focused more on creating the visual aspect of it. we went for a city theme, but not only. our idea was to shoot scenes from real life and then juxtapose them with edited stop motion clips and 3-d animations. the main theme, i can tell, is about the things which define spaces and environments. it is about being thrown into different places, like traffic, fields, tunnels and rooms, suggesting a sort of virtual reality. because the person is going to move about the space, he/she will be able to experience all sorts of places in one small enclosure, and because it is projected on 3 walls simultaneously, we expect a person to respond naturally by engaging one’s sensual reflexes based on what is seen. we like to play with real and unreal environments, bind them together on three platforms and let a person engage and create their own narrative. it is about surprise and learning how this structure works and, hopefully, also a cinematic enjoyment.

…how this thing actually works:

there are 3 projectors, one for each wall, and they will be projecting from the outside. the reason for that is mainly convenience: there will be no shadows cast on the walls by the person inside (if the projectors were up high, facing a wall and projecting onto its inside, there is a slight chance that the person would at some point disrupt the flow of the image). we have made frames stretched with lycra, and the image will be seen very well from the inside. each projector is connected to one of three computers which listen for commands from the main computer about which videos to fire up and when. the main computer gathers data from a web camera which is mounted on top of the installation and covers its floor (the red area where the person stands). a flash program running on the main computer calculates the motion-tracked data and sends the information on to max/msp, which also runs on the main computer as well as on the others. the main computer sorts the data from flash and sends it over to all 3 computers simultaneously, which causes them to react according to what is happening on the stage. i will explain each step in detail below.
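the whole pipeline can be condensed into a few lines. this is a bird's-eye sketch only, with every stage passed in as a stub (in reality the tracking is flash, the mapping is max/msp, and the broadcast is the network layer):

```python
# a hypothetical outline of the main computer's job: one tracking point in,
# per-wall frame numbers out. all function arguments are stand-ins for the
# real flash/max/msp components, not actual APIs.

def main_computer_loop(get_tracking_point, compute_frames, broadcast):
    """webcam/flash gives a point, max turns it into per-wall frame
    numbers, and those go out to the three wall computers."""
    point = get_tracking_point()   # flash motion tracking
    frames = compute_frames(point) # max/msp range conversion
    broadcast(frames)              # network send to the 3 machines
    return frames
```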

1) we have a default animation on the 3 walls which represents 3 tunnels. these animations will be on the 3 walls at all times.
these are three different tunnels, and they have been made from stop animation shots we took at the big roundabout by the asda train station. in order to introduce some interactivity, these animations will be triggered depending on how far from or how close to the walls a person stands.

i had to look into max/msp patches to find out how to define a single frame and how it can be triggered with an array of changing values. i found this example in the tutorials and tested it with my animation.

the main element of this patch is the message “frame $1”, which represents a particular frame within the film clip; by entering a value above it i can define exactly which frame, and the clip will jump to the frame i have defined. i changed this patch, refined it to just the basics and then built a much bigger one around it.

here is a part of the new updated patch:
this patch runs on the 3 computers which deal with the projections on the walls.

the blue highlighted area deals with the default tunnel animation and sets the frame it should be at. the reason why i need to trigger frames is to create an illusion of moving into a tunnel: the closer one moves towards a wall, the further into the tunnel the video takes him/her. this effect applies to all 3 walls. if a person moves closer to the left wall, the image will visually recede on the right wall; if a person moves towards the right wall, the animation will progress forwards on the facing wall but backwards on the wall behind. it will not be exactly as in reality, but the main principle of this interactivity will create a false illusion of movement. i personally like the idea of projecting something which has depth and playing with the perception of moving inside it, when actually it is a flat plane which doesn’t make any sense if looked at from the side. one must be INSIDE this artificial tunnel try-out to experience that a chosen movement can make this environment correspond and play some silly tricks with one’s mind. surely it doesn’t have to be super realistic and mind-blowingly immersive; i like to see this installation as a soft and joyful mind game people have fun with.
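the closer-means-deeper principle can be sketched in a few lines. this is a minimal illustration only: the floor size, clip length and function names are assumptions, not values from the actual flash or max/msp patches.

```python
# hypothetical sketch of the tunnel-depth illusion: a tracking point on a
# square floor (coordinates 0-100) drives the frame of each wall's clip.

TOTAL_FRAMES = 50  # assumed length of each tunnel clip

def wall_frame(distance, floor_size=100, total_frames=TOTAL_FRAMES):
    """map distance to a wall onto a frame number:
    frame 0 = furthest away, last frame = right at the wall."""
    nearness = 1.0 - min(max(distance / floor_size, 0.0), 1.0)
    return round(nearness * total_frames)

def tunnel_frames(x, y, floor_size=100):
    """frames for the left, middle (facing) and right walls from one
    tracking point; moving towards one wall advances its clip while
    the opposite wall's clip runs backwards."""
    return {
        "left": wall_frame(x, floor_size),                # distance to left wall
        "middle": wall_frame(y, floor_size),              # distance to facing wall
        "right": wall_frame(floor_size - x, floor_size),  # distance to right wall
    }
```

standing near the left wall (small x) gives a high frame number on the left clip and a low one on the right clip, which is exactly the forwards/backwards pairing described above.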

now, back to the main computer and to the flash and max/msp programs running on it.
so, the web camera tracks the movement of the person within the scene and this data is processed in flash; flash consequently sends the data on to max/msp, and using some range conversion objects max has a clear mapping to the exact frame each video should be at at a given time. here are examples showing 3 different positions of the tracking point and how they are reflected in the values.

in this example the tracking point is near wall number 1, which is circled in green in the right-hand window (the web camera feed in flash), whereas on the left there is a max patch receiving 3 types of values. the green areas in the max patch are the true values representing the frame numbers of the video clips. for example, the first highlighted green box says “41”, which means the animation on the right-side wall has progressed to frame 41, while the animation on the middle wall is at frame 14 and the animation on the opposite wall is at the very beginning, at frame 8. if we consider that frame 0 is the beginning of the animation, that would mean the furthest point one can be from the wall, and as the animation progresses it indicates movement towards it. here are more examples with different positions of the tracking point and how the values in the green rectangles change accordingly:
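the range conversion itself is just a linear mapping from one number range to another, which is what max's [scale] object does. here is a sketch of that idea; the webcam resolution and frame range used are illustrative assumptions, not the values from the actual patch:

```python
# a hypothetical version of the range conversion max does with [scale]:
# an incoming tracking coordinate is mapped onto a clip's frame range.

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """linearly map value from [in_lo, in_hi] to [out_lo, out_hi],
    like max/msp's [scale] object (without its exponential mode)."""
    span_in = in_hi - in_lo
    span_out = out_hi - out_lo
    return out_lo + (value - in_lo) * span_out / span_in

# e.g. a flash x-coordinate from a 320-pixel-wide feed mapped onto frames 0-50:
frame = round(scale(240, 0, 320, 0, 50))  # → 38
```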

here is an example video which shows how the distance of the motion tracking point interacts with the progress of the animation on the left-side wall only; in the 3-wall case these animations will play simultaneously, each at a different set of frames.


but there won’t be just one kind of interactivity. we decided to introduce more videos, which will be triggered when a person steps on invisible trigger points. in total we want to have 8 or 9 different animations created in panoramic view and displayed on the 3 walls simultaneously. these trigger points are not visible inside the installation; people will have to discover them for themselves. they can, however, easily be seen in the flash web camera preview and dragged across the window to be placed in the best spots. every time flash reopens it places them in random positions, but they can be readjusted with the mouse.
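the trigger-point logic amounts to a simple hit test between the tracked point and a set of hotspots. a minimal sketch, assuming circular hotspots in webcam pixel coordinates (all names and sizes here are hypothetical; the real hit-testing lives in the flash code):

```python
# hypothetical trigger points: placed at random on reopen, draggable in
# the real flash preview, and hit-tested against the tracking point.
import random

def random_triggers(n=9, width=320, height=240, radius=15):
    """place n invisible trigger points at random, as flash does on reopen."""
    return [(random.uniform(0, width), random.uniform(0, height), radius)
            for _ in range(n)]

def hit_trigger(px, py, triggers):
    """return the 1-based index of the trigger the tracked point collides
    with, or None; this index is what gets sent on to max/msp."""
    for i, (tx, ty, r) in enumerate(triggers, start=1):
        if (px - tx) ** 2 + (py - ty) ** 2 <= r ** 2:
            return i
    return None
```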

when a collision with a trigger point is made, max sends out a signal to start animation 1, 2 or 3, depending on which trigger point has been touched. all 3 computers are listening for signals all the time, whether they are distance signals for the default tunnel animation or a start signal for a particular animation. in order for my computer to communicate with the 3 others at the same time, i had to look into the max/msp tutorials and website. i found a very useful article on how to make max/msp communicate across different machines.

i looked through different protocols for transmitting data and decided to use net.udp.send and net.udp.recv.
“UDP and TCP are the two dominant protocols that computers use to communicate in today’s networks. To communicate with the internet they both use what is called the IP (Internet Protocol) layer, which is why you sometimes see acronyms like TCP/IP. UDP (User Datagram Protocol) is the simpler of the two protocols. To send some data via UDP, all that is required is a destination port and IP address.” (extract taken from the article on this webpage: ) the only issue with using udp is that there is no feedback and no way to test whether the signal has reached its destination, but in my case a lost signal would be visible anyway as a failure in running the whole thing. the udp protocol also seemed simpler, requiring just two identifiers: an ip address and a port number. here is the part of the main max/msp patch which sends the distance values (= frame numbers) to the 3 different computers using udp.send:
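the udp principle from the quote fits in a few lines of python: the sender needs only an ip address and a port, and gets no confirmation of delivery. port 7777 is from the patch; the message format here is my own assumption, not what max actually sends on the wire:

```python
# a hypothetical fire-and-forget udp pair: the main computer sends a
# frame number, a wall computer receives it. no delivery confirmation.
import socket

def send_frame(frame, ip="127.0.0.1", port=7777):
    """fire a frame number at a listening computer, udp-style."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(str(frame).encode(), (ip, port))
    sock.close()

def listen_once(port=7777):
    """what each wall computer does: wait for one datagram and use it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    data, addr = sock.recvfrom(1024)
    sock.close()
    return int(data.decode())
```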

in this case only the first one is connecting to the computer with the ip address of and is sending data on port number 7777. this part only deals with the default tunnel animation. here is another part of the same patch which sends bang messages to start the animations on trigger points:

gate 9 listens for the numbers 1-9 sent out of flash (which represent the 9 trigger points) and then sends a bang message to the corresponding outlet. each object with the letter “p” is a subpatch, which means there is a separate patch underneath it that can be opened by double-clicking “p 1” or “p 2”. here i have opened “p 1”:

this subpatch sends out the trigger signal to the 3 computers at the same time; they all listen on the same port, but each is addressed by an individual ip address. so, if flash detects a collision with trigger point 1, it sends a bang message to subpatch 1 in the main max/msp patch, and subpatch 1 consequently sends the trigger signal to all 3 computers to project animation number one on all 3 walls at the same time. each computer has a different clip of the same animation, and when they are played simultaneously they bring the picture together.
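what each “p” subpatch does can be sketched as a fan-out: one trigger number goes to all three wall computers on the same port. the ip addresses and the message format below are placeholders, not the real ones from the patch:

```python
# hypothetical fan-out of a trigger bang to the three wall computers:
# same port everywhere, one destination ip per machine.
import socket

WALL_IPS = ["192.168.0.11", "192.168.0.12", "192.168.0.13"]  # placeholders
PORT = 7777  # same listening port on every computer

def fire_animation(number, ips=WALL_IPS, port=PORT):
    """send a start signal for animation `number` to every wall at once,
    so the three clips of one panoramic animation begin together."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    message = f"start {number}".encode()
    for ip in ips:
        sock.sendto(message, (ip, port))
    sock.close()
```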

here i tested how sending signals to another computer via max/msp works and how the trigger points switch to a different animation (sorry for the bad quality - it was filmed on my cell phone):

here is a small take (with a visual effect) of two walls with the traffic animation on them:

on the whole it was not so difficult to program this installation. i had full support from Adam, who wrote the entire flash code to my design of the action. i structured and planned out all the processes and wrote them into max/msp patches. it is very fiddly work writing similar objects with different ip addresses, which will need to be typed in just before launching the whole project, but i think it pays off the effort and the huge amount of work each of us has put into it. we are very close to realising the original concept, and if not in scale, everything else is according to plan.

i think this installation is quite intelligent and at the same time fun. i hope people will enjoy it and play with it. it is a shame we can only present a prototype-size version, but hopefully we can build the intended-size interactive visual cube in the future.
i would like to see how people react to discovering the trigger points and how they move around the space once they know about them. it is also interesting to observe people’s moods and reactions when facing the different visuals, and i am very happy we have sound accompanying all the animations. we hope that overall this piece of work will reflect our intentions and that we will receive good feedback.

