Living abroad, away from my family and fellowship, I have been doing lots of video communication, and it is a challenge.
At first there was a honeymoon novelty phase, but now we are starting to get used to the rhythm of it, and to be annoyed by the shortcomings. It is somewhere in the uncanny valley of communication… it feels a little like a face-to-face conversation, but then lag, A/V dropouts and freezes, and the lack of eye contact really drive home that this isn’t natural. Have you ever tried to sing together over video chat?
Lag seriously messes with the conversational flow, especially when multiple parties are experiencing different lags from each other. In casual video chatting these delays don’t matter so much, but in a meeting where people are trying to make decisions it is a bit of a battle.
I hear myself rambling sometimes, rewording my point a few times to make up for the lack of the normal subtle cues that everyone is on the same page. I catch myself looking the camera “in the eye,” which is really backwards.
We’ll get used to it, and develop systems for explicitly communicating the subtle, subconscious things that are lost in compression. I’m noticing “sounds good to me” verbally filling in what might have been understood without words in person. Lag will decrease, A/V quality will be better, and maybe depth cameras will adjust the angle of our gaze to better approximate eye contact.
My daughters don’t know a world without video chatting. In 2009 I was 14,300 km away and my first daughter was able to communicate with me using sign language; that wouldn’t have happened with audio alone. My second daughter didn’t meet my folks in person for her first three months, but when she did there was immediate recognition.
Some thought-provoking writing on the video-communication-centric future we might be heading towards:
Your children will know a very different way of relating to people who are not physically present. It will change the way they work, maintain friendships, relate to family members, fall in love, and experience the world. It will change their sense of self, and self-worth. It may be a boon, or it may be harmful. Most likely, it’ll be a bit of both, because after all, it’s still about people.
I’ve been in NYC for a few days to meet with Eyebeam and Mozilla people in person and talk about future plans and potentials for Meemoo.org. From Thursday to Saturday was Art Hack Day at 319 Scholes in Brooklyn.
This diagram shows a concept that has been in my mind for one way to conceptualize the project: a frictionless digital/analog converter/synthesizer.
The idea is that any of these should be easily mashed together to experiment with aesthetic possibilities. WebGL textures projected on clay forms, paper puppets animated with your voice, finger-paint textures on 3D graphics… This concept is extremely broad, and I still need to find ways to narrow the focus on a few easy activities that can introduce newbies to the toolset. I’m making a collection of “hack-tivities” as I come up with them.
Today Mozilla and Eyebeam are announcing the Open(Art) 2013 fellows, and I’m one of them!
I’ll be using this opportunity to expand the capabilities of Meemoo and extend it into an open source web art community.
Coder analogy: Meemoo is a dataflow framework, where apps are made by connecting modules which encapsulate functionality like visual effects. The output of the apps could be a generative animation, stop-motion GIF, web cam effect, or (in the future) audio composition. These apps can be built, shared, and forked without leaving the browser.
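For the coders, here is a minimal sketch of the dataflow idea (in Python for brevity rather than Meemoo’s actual JavaScript module API; the module and method names are made up): modules encapsulate a bit of functionality, and wiring one module’s output to another’s input is what builds the app.

```python
# Minimal dataflow sketch: modules are wired into a graph, and data flows
# from one module's output into the next module's input.
# Illustration only, not Meemoo's real JavaScript API.

class Module:
    def __init__(self, fn):
        self.fn = fn          # the functionality this module encapsulates
        self.wires = []       # downstream modules that receive our output

    def connect(self, other):
        self.wires.append(other)

    def receive(self, value):
        result = self.fn(value)
        for target in self.wires:
            target.receive(result)

# Two tiny "modules": an invert effect and a sink that prints its input.
invert = Module(lambda frame: [255 - px for px in frame])
printer = Module(lambda frame: print(frame))

invert.connect(printer)
invert.receive([0, 128, 255])   # prints [255, 127, 0]
```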
Artist analogy: The community will be like a game of exquisite corpse, where the rules of the game as well as the media can be transformed with each step.
we didn’t plan this… jyri accused me of being a little over-specced (o_o-)
Now, I understand the appeal of things like this. They tickle the recognition part of the brain, and since many children play this game, many people will have that recognition and click the Skinner button. Lately, image memes have been spreading in Facebook as well, but at least there you can connect the idea to a person in your social circle. With this one, 134 random G+ Herpderps +1ed it, and that is enough for the algorithm to consider it “hot.” If this is what’s hot on G+, count me out.
On the web, curation is important. If I feel like wasting some time getting exposed to random ideas, I’d like them to have some redeeming social value*.
Boingboing is curated by a handful of people who make it their job to post interesting stuff. Reddit’s curation algorithm, working together with subcommunity curators, seems to work better than Google’s (at least this crappy image would never have made it to Reddit’s frontpage).
YouTube’s frontpage is a wasteland. There is plenty of good content on YouTube, but with 60 hours of video uploaded every minute, most of it is bound to be crap. In this random screenshot it looks like two of the clips were recorded by pointing a camera at a TV. Couldn’t their algorithm at least weed those out?
Compare YouTube’s frontpage video selection to Vimeo’s:
I would be interested in 5/6 of the videos on Vimeo. I would avoid 6/6 of YouTube’s top videos like the plague, unless morbid curiosity compelled me to click one of them, and then I would feel bad about the decision.
Most of the videos on Vimeo’s homepage are also on YouTube, so the good content is there, but I almost never find it through YouTube itself. It is always a link from a blog or another service. This decentralized curation isn’t a bad thing, but within the site there is obviously a lot of work to be done.
Matti Niinimäki and I created this installation in response to Michihito Mizutani’s “poetic interaction” prompt at the Winter 2010 Demo Day.
Two people enter a darkened room with two swings. Classical music plays softly, and a cryptic symbol is projected between the curtains. They sit on the swings, and the music gets louder. They begin to swing together, and the symbol starts to move. When their swinging gets out of sync, the music’s pitch bends uncomfortably. If they shift their weight correctly while swinging, the symbol assembles itself and the mystery is solved!
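As a rough illustration of that out-of-sync-to-pitch-bend mapping (a hypothetical Python sketch, not the installation’s actual patch), the amount of detuning can be driven by how far the two swings drift out of phase:

```python
# Hypothetical sketch of the swing-sync mapping: the further out of phase
# the two swings are, the more the music's pitch is bent away from normal.
# Not the actual installation code.
import math

def pitch_bend(phase_a, phase_b, max_bend_semitones=3.0):
    """phase_a and phase_b are each swing's phase in radians."""
    # Wrap the phase difference into [0, pi]; 0 means perfectly in sync.
    diff = abs(math.atan2(math.sin(phase_a - phase_b),
                          math.cos(phase_a - phase_b)))
    out_of_sync = diff / math.pi            # 0.0 (in sync) .. 1.0 (opposite)
    return out_of_sync * max_bend_semitones  # semitones to detune the music

print(pitch_bend(0.0, 0.0))        # in sync      -> 0.0
print(pitch_bend(0.0, math.pi))    # fully out    -> 3.0
```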
Computational photography is a wide concept with many possible interpretations and directions. I couldn’t choose one project for the course, so I made two, and showed them with my classmates at Pixelache in Augusta Gallery. One project was an extension of some earlier computational photography experiments with Flash+webcam, called PaintCam. The other was a collaboration with Timo Wright called Slithering.
We shot two dance scenes, one with Anna Mustonen and the other with Lucía Merlo, Charlotte Lovera, Elise Giordano and myself. Thanks to Lume studio for helping us set up the lights.
We shot HD video with the Canon 5D Mark III. We thought that it would be important for the dancers to have a preview image to see how different movements would affect the final image, so I made a Processing sketch that would approximate the aspect ratio of the 5D’s output.
We used a Kinect for the preview, with the idea that the depth data might be interesting to use somehow in the final piece. The slitscan video (three dimensions shuffled) ended up interesting enough that I left the depth data for future experiments. What could four shuffled dimensions look like?
This is the description that Timo came up with for our piece, back when we were thinking that the final product would be still images:
Slithering is an alternative dance documentation in which the dancer dances and reacts with the Slithering program. The program scans a one-pixel-wide segment from the camera and orders these segments into one long picture in time. In this project the dancer has to find a completely new kind of movement if she wants to control the visual end result. It also changes the documentation of dance from something in time and space to something that happens only in time. What is dance minus space?
To make the first still images, I wrote a Photoshop script that would take one column of pixels from each frame of video. It took ages. I noticed that changing the column variable resulted in a very different image. What would they look like animated? With the help of Cinder, I managed to write my first C++ application to shuffle the billions of pixels from one video into an output video. The source for Redimensionator is available freely, without warranty. Here are some experiments with the software, Redimensionating some videos found on YouTube: http://www.youtube.com/view_play_list?p=B2540182DE868E85
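If you are curious how the basic slit-scan step works, here is a minimal sketch in Python with OpenCV and NumPy (an illustration only, not the Cinder/C++ Redimensionator source; the file names are hypothetical): take one column of pixels from every frame and stack the columns side by side, so time becomes the horizontal axis.

```python
# Minimal slit-scan sketch: grab one column of pixels from each video frame
# and lay the columns side by side to form a single image over time.
# Illustration only, not the Redimensionator source. Requires opencv-python.
import cv2
import numpy as np

def slitscan(video_path, column=None):
    cap = cv2.VideoCapture(video_path)
    slits = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        col = frame.shape[1] // 2 if column is None else column
        slits.append(frame[:, col])      # one-pixel-wide vertical slice
    cap.release()
    return np.stack(slits, axis=1)       # frame index becomes the new x axis

image = slitscan("dance.mp4")            # hypothetical input file
cv2.imwrite("slitscan.png", image)
```

Changing `column` picks a different slice of the scene, which is why varying that one variable produced such different images.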
Redimensionator wasn’t written until after the dances were filmed. We had the still-image preview while dancing, but had no idea what it would look like in Redimensionated video form. In the future, it would be fun to choreograph a dance piece or music video with the output in mind. Putting some planning into costumes, props, and choreography could make for very interesting output.
I joined the Computational Photography course in November of last year. We were working towards presenting at Pixelache, and it was nice to have the goal of showing some work outside of our little Media Lab family here. I couldn’t decide on one project, so I took on two: Slithering and PaintCam.
PaintCam is an extension of the MegaCam app that I started last year while living in beautiful Tomahawk, NZ. Some of these webcam toys were inspired by Lomo cameras, but this one goes beyond imitating analog photographic effects. My goal was to make a single-purpose application that allows you to composite short video loops in real-time. It works like a paint application, but instead of a color chooser you get the color, image, texture and motion from the webcam.
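To make the painting idea concrete, here is a rough sketch of the core operation (in Python/NumPy for illustration; PaintCam itself is a Flash app and these names are made up): wherever the brush touches, pixels are copied from the live webcam frame onto the canvas, so the “paint” carries the camera’s color, texture, and motion.

```python
# Rough sketch of the PaintCam idea: stamp webcam pixels onto the canvas
# wherever the brush is, so color/texture/motion come from the camera
# instead of a color picker. Not the actual Flash implementation.
import numpy as np

def paint(canvas, webcam_frame, brush_x, brush_y, radius=20):
    h, w = canvas.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - brush_x) ** 2 + (yy - brush_y) ** 2 <= radius ** 2
    canvas[mask] = webcam_frame[mask]    # copy webcam pixels under the brush
    return canvas

# Hypothetical usage: start from a black canvas and stamp one stroke.
canvas = np.zeros((480, 640, 3), dtype=np.uint8)
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in for a webcam frame
canvas = paint(canvas, frame, 320, 240)
```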
If you would like to try PaintCam, go to sembiki.com/megacam. I don’t have a one-click method for saving the animations yet, but if this form doesn’t scare you off, you can upload the 3×4 frame image to your favorite image host and paste the image URL in. Send me any nice ones that you come up with.
It was fun to see people interacting with PaintCam and the other webcam toys in the gallery setting.
I took part in Transmediale Festival in Berlin last week with a few other people from Media Lab Helsinki. Here are a few of the things that I enjoyed:
I have not seen many multiscreen film installations, so I was glad to see Reynold Reynolds’ Secrets Trilogy in installation form. One shot that particularly interested me was stop-motion/pixilation of a woman playing a piano that became smashed as she played; then she began to climb into (or be devoured by) the piano. Part of the installation was a simultaneous behind-the-scenes view, with the stated goal of shattering the illusion of film. I would have preferred the illusion to remain intact, at least for that shot. It was painful to see a piano smashed with a sledgehammer over and over. Without that view the emotion would have been much more subtle, as it was quite a beautiful image.
I went to see the Finnish film Where Is Where? knowing nothing about the film or filmmaker. The story is set in a mixture of Algeria and Finland, told in Finnish, and visually arranged in a grid of four screens. Sometimes two adjacent screens became a panoramic image, which was nice. The whole thing was quite beautifully shot, and I was hoping to meet the filmmaker, Eija-Liisa Ahtila, but she wasn’t there.
I’m working on a multiscreen live video editor/sequencer, stemming from the Interactive Cinema workshop at the beginning of the year. Seeing these films with multiple screens was a good insight into the storytelling possibilities of this format used in a linear way.
I saw a few of the “modules” presented in Cinechamber, a 360° 10-HD-screen 8.1-channel audio space. These works tended to be more abstract. I think that this system has more potential for interactive content, since there could be the possibility of moving around in the space. People sat on the floor to watch these works, but choosing your vantage point and staying there meant that you couldn’t see all of the screens, and the ceiling was open. If the works somehow encouraged movement, I think that the space would be more interesting. A planetarium/Omnimax system would be better for this kind of immersive+passive viewing.
This system would be better suited to a dance party, and would solve the “let’s all stare at the back of the DJ’s laptop” problem.
I was especially taken by Ho Tzu Nyen’s self-effacing charm in talking about his films. For Earth, he prefaced the screening by saying that it would be hard to stay awake, and even encouraged the audience to sleep, because it would add to the experience. It did! The film started by panning from vignette to vignette of one person each, dead or sleeping, in a richly textured environment. I watched intently for quite a while, but eventually the pace, dark lighting, and music soothed me to sleep. When I woke up the camera had pulled back to reveal the entire tableau, which seemed like a pile of fifty people… quite a dramatic jump.
He said that there have been seven different soundtracks, and I imagine that they would give quite a different feel to the film.
As an aspiring mad scientist, I appreciated Deconstructing Dad, a personal look into Raymond Scott’s life by his son. Scott was described as the Frank Zappa of the 1930s and 40s because of the “weirdness” that he inserted into the popular consciousness through music. He went on to be a pioneer of electronic music, making machines to play, sequence, and even compose music. One thing that he seemed to regret later in his life was the secrecy in which he operated. I plan to give away everything that I make with the hope that my ideas will be fruitful and multiply and divide and become new things that I never imagined.
Check out some of Scott’s “descriptive jazz” numbers, one set to the wonderful weirdness of Betty Boop, and one with some sweet tap:
Himalaya Variations was a good reminder that an old-school overhead projector is higher definition than digital video when it comes to colors and textures (and frame rate). I had seen Daito Manabe’s Face Visualizer on YouTube, and it was cool to see it as a live performance. The Braun Tube Jazz Band was fun to see as well: a performance involving drumming on pulsing TV screens and letting the electromagnetic energy flow through the body to make music.
The talks that I joined made me realize that I read too many articles online… not much of it was news to me. I thought that the idea of making a film (or, ahem, “an immediated autodocumentary”) at the festival was cool, and they articulated a lot of the thoughts that were floating around:
After the festival we joined some Universität der Künste students for a couple of days to make some “paper-based electroacoustical instruments.” I overheard that UDK is an evil lair of SuperCollider, which is a programming language for making sounds. Since I have been getting comfortable with PureData, I decided to stay on that track; one esoteric generative noisemaking system is probably enough for me.
I wanted to make a monotone instrument that could be used as the root note in a chording harmonizer. I made an oboe out of cardboard and a drinking straw, which sounded like a (loud) duck call. A glass bottle sounded nicer. I was able to get it working on my iPod at the last moment, and it made people laugh, so I consider it a success. I’ll make the Rjdj scene public once I polish it a bit. I’ll record a song once I learn how to play it.
It was cool to see the wide variety of instruments and sounds that came from the open prompt of “paper.”