NoFlo clock demo

I have been working on the NoFlo graph editor, building on my experience making Meemoo.

noflo clock demo
press play on the iframe to start the clock

This graph runs the analog clock in the corner. We’re discussing a few ways that the graph might be simpler. For example, most of the components could be encapsulated into one subgraph, which would output just the numbers and rotation percentages needed to draw the clock.

Also, we’re talking about allowing equations like “=x/(60*1000)” in input ports that expect numbers. When x is the time in milliseconds, that equation gives you the correct rotation of the second hand (one rotation per 60 seconds × 1000 milliseconds). This will be a powerful feature, and will reduce the number of nodes in the graph.
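To see why the equation works, here is the same idea as a few lines of Python (the function name is mine for illustration, not part of NoFlo; the fractional part of x/(60*1000) is the second hand’s position as a fraction of a full turn):

```python
def second_hand_rotation(ms):
    """Rotation of the second hand as a fraction of a full turn.

    ms is a timestamp in milliseconds; one full rotation takes
    60 seconds * 1000 milliseconds, so we keep only the fractional
    part of x/(60*1000), like the "=x/(60*1000)" port equation.
    """
    return (ms / (60 * 1000)) % 1.0

# 30 seconds past any whole minute is half a rotation.
print(second_hand_rotation(30 * 1000))  # 0.5
```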

If you have ideas for the graph editor or future demos, please chime in.

lost in compression

Living abroad, I have been doing lots of video communication with my family and my fellowship, and it is a challenge.

At first there was a honeymoon novelty phase, but now I feel like we are starting to get used to the rhythm of it, and annoyed by the shortcomings. It is somewhere in the uncanny valley of communication… feels a little like a face-to-face conversation, but then lag, A/V dropouts and freezes, and the lack of eye contact really drive home that this isn’t natural. Have you ever tried to sing together?

Lag seriously messes with the conversational flow, especially with multiple parties experiencing different lags from each other. With casual video chatting these don’t matter so much, but in a meeting where people are trying to make decisions it is a bit of a battle.

I hear myself rambling sometimes, rewording my point a few times, to fill in the lack of the normal subtle cues that all parties are on the same page. I catch myself looking at the camera “in the eye,” which is really backwards.

We’ll get used to it, and develop systems for explicitly communicating the subtle, subconscious things that are lost in compression. I’m noticing “sounds good to me” verbally filling in what might have been understood without words in person. Lag will decrease, A/V quality will be better, and maybe depth cameras will adjust the angle of our gaze to better approximate eye contact.

fam hangout

My daughters don’t know a world without video chatting. In 2009 I was 14300km away and my first was able to communicate with me with sign language. This wouldn’t have happened with just audio. My second didn’t meet my folks in person for 3 months, but when she did there was immediate recognition.

Some thought-provoking writing on the video-communication-centric future we might be heading towards:

Your children will know a very different way of relating to people who are not physically present. It will change the way they work, maintain friendships, relate to family members, fall in love, and experience the world. It will change their sense of self, and self-worth. It may be a boon, or it may be harmful. Most likely, it’ll be a bit of both, because after all, it’s still about people.

Alex Payne (via Henri Bergius)

DACS: digital/analog converter/synthesizer

I’ve been in NYC for a few days to talk directly with Eyebeam and Mozilla people about future plans and potentials. From Thursday to Saturday was Art Hack Day at 319 Scholes in Brooklyn.

This diagram shows one way that I have been conceptualizing the project: a frictionless digital/analog converter/synthesizer.

digital analog

(Edit 10/01: I realized that the old version of the diagram wasn’t really what I was thinking.)

The idea is that any of these should be easily mashed together to experiment with aesthetic possibilities. WebGL textures projected on clay forms, paper puppets animated with your voice, finger-paint textures on 3D graphics… This concept is extremely broad, and I still need to find ways to narrow the focus on a few easy activities that can introduce newbies to the toolset. I’m making a collection of “hack-tivities” as I come up with them.

We made this animated font in a rainy-day workshop. I snuck in some easy HTML hacking so that anybody can mix the letters into messages:
animated A

These “physical gif” animated images are now easy to make with Meemoo:

I’ll be making more of these introductory pathways to Meemoo as the project progresses.

Open(Art) fellowship announced today

Today Mozilla and Eyebeam are announcing the Open(Art) 2013 fellows, and I’m one of them! 😀
I’ll be using this opportunity to expand the capabilities of Meemoo and extend it into an open source web art community.

Coder analogy: Meemoo is a dataflow framework, where apps are made by connecting modules which encapsulate functionality like visual effects. The output of the apps could be a generative animation, stop-motion GIF, web cam effect, or (in the future) audio composition. These apps can be built, shared, and forked without leaving the browser.

Artist analogy: The community will be like a game of exquisite corpse, where the rules of the game as well as the media can be transformed with each step.

Input welcome!

we didn’t plan this… jyri accused me of being a little over-specced (o_o-)

isometric ascii impossible shapes

     /\  _____________ \
    /  \ \___________/\ \
   / /\ \ \       / /\ \ \  w
  / / /\ \ \_____/ / /\ \ \  a
 / /_/__\ \ \____\/_/  \ \ \  t
 \ \ \___\ \ \___/\ \   \ \ \
  \ \ \   \ \ \__\ \ \___\_\ \
   \ \ \   \ \____\ \ \_______\
    \ \ \  / / ____\ \ \____  /
     \ \ \/ / /     \ \ \/ / /
      \ \/ / /_______\_\/ / /
       \  / /___________\/ /

isometric ascii impossible cube (related: the best java applet online)

no accounting for algorithmic taste

I went to check out G+’s new layout, and saw this poorly-done meme image at the top of my list.

Now, I understand the appeal of things like this. They tickle the recognition part of the brain, and since many children play this game, many people will have that recognition and click the Skinner button. Lately, image memes have been spreading in Facebook as well, but at least there you can connect the idea to a person in your social circle. With this one, 134 random G+ Herpderps +1ed it, and that is enough for the algorithm to consider it “hot.” If this is what’s hot on G+, count me out.

On the web, curation is important. If I feel like wasting some time getting exposed to random ideas, I’d like them to have some redeeming social value*.

Boingboing is curated by a handful of people who make it their job to post interesting stuff. Reddit’s curation algorithm, combined with subcommunity curators, seems to work better than Google’s (at least this crappy image would never have made it to Reddit’s frontpage).

YouTube’s frontpage is a wasteland. There is plenty of good content on YouTube, but with 60 hours of video uploaded every minute, most of it is bound to be crap. In this random screenshot it looks like two of the clips were recorded by pointing a camera at a TV. Couldn’t their algorithm at least weed those out?

Compare YouTube’s frontpage video selection to Vimeo’s:

I would be interested in 5/6 of the videos on Vimeo. I would avoid 6/6 of YouTube’s top videos like the plague, unless morbid curiosity compelled me to click one of them, and then I would feel bad about the decision.

Most of the videos on Vimeo’s homepage are also on YouTube, so there is plenty of good content, but I almost never find it through YouTube itself. It is always via a link from a blog or another service. This decentralized curation isn’t a bad thing, but within the site there is obviously a lot of work to be done.

* Thanks Dad.

Meemoo: Hackable Web App Framework (the thesis)

That’s it. That is all of the words. The thesis is done. If I successfully defend it on May 2nd, I’ll be an MA.

Check out the awesome art that my buddy Jyri Pieniniemi designed for the project and thesis cover:

Oh no wait, I was zoomed way in. It is actually:
Talk about attention to detail. The screws symbolize hackability, which is the main theme of the research project.

words on paper

If you want to read the paper, be my guest:
Meemoo: Hackable Web App Framework, Forrest Oliphant
It is kind of lame to have a thesis about the web locked in a PDF, so I’ll make an HTML version soon.

The project:

Swing Thing: playful interaction

Matti Niinimäki and I created this installation in response to Michihito Mizutani’s “poetic interaction” prompt, and showed it at Winter 2010 Demo Day.

swing thing poster

Two people enter a darkened room with two swings. Classical music plays softly, and a cryptic symbol is projected between the curtains. They sit on the swings, and the music gets louder. They begin to swing together, and the symbol starts to move. When their swinging gets out of sync, the music’s pitch bends uncomfortably. If they shift their weight correctly while swinging, the symbol assembles itself and the mystery is solved!

Source (Arduino, Pure Data, and Quartz Composer). One technical discovery that was especially useful was figuring out how to route all system audio through Pure Data.

(Crossposted to the Media Lab Helsinki Students Blog)

Slithering: trading dimensions in video

Computational photography is a wide concept with many possible interpretations and directions. I couldn’t choose one project for the course, so I made two, and showed them with my classmates at Pixelache in Augusta Gallery. One project was an extension of some earlier computational photography experiments with Flash+webcam, called PaintCam. The other was a collaboration with Timo Wright called Slithering.

We shot two dance scenes, one with Anna Mustonen and the other with Lucía Merlo, Charlotte Lovera, Elise Giordano and myself. Thanks to Lume studio for helping us set up the lights.

We shot HD video with the Canon 5D Mark III. We thought that it would be important for the dancers to have a preview image to see how different movements would affect the final image, so I made a Processing sketch that would approximate the aspect ratio of the 5D’s output.

Now... how to turn this into a self-texturewrapped 3d model?

We used the Kinect for the preview, with the idea that the depth data might be interesting to use somehow in the final piece. The slitscan video (three dimensions shuffled) ended up interesting enough that I left the depth data for future experiments. What could four shuffled dimensions look like?

This is the description that Timo came up with when we were still thinking that the final product of our piece would be still images:

Slithering is an alternative dance documentation where the dancer dances and reacts with the Slithering program. The program scans a one-pixel-wide segment from a camera and orders these segments to become one long picture in time. In this project the dancer has to find a completely new kind of movement, if she wants to control the visual end result. It also changes the documentation of dance in time and space to now happen only in time. What is dance minus space?

Slithering single still 2

Slithering single still 3

Slithering group still 3

Slithering group still 4

To make the first still images, I wrote a Photoshop script that would take one column of pixels from each frame of video. It took ages. I noticed that changing the column variable produced a very different image. What would they look like animated? I managed to write my first C++ application, with the help of Cinder, to shuffle the billions of pixels from one video to an output video. The source for Redimensionator is available freely, without warranty. Here are some experiments with the software, Redimensionating some videos found on YouTube:
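The column-per-frame idea can be sketched in a few lines of NumPy. This is a toy stand-in for the Photoshop script and the Cinder code, not the actual Redimensionator source: it takes the same pixel column from every frame and stacks the columns side by side, so the horizontal axis of the output is time.

```python
import numpy as np

def slitscan(frames, column):
    """Slitscan: one pixel column per video frame, stacked side by side.

    frames: array of shape (num_frames, height, width, channels)
    Returns an image of shape (height, num_frames, channels),
    trading the width dimension for the time dimension.
    """
    return np.stack([frame[:, column] for frame in frames], axis=1)

# Tiny synthetic "video": 5 frames of 4x3 pixels, 3 channels,
# with the red channel brightening over time.
video = np.zeros((5, 4, 3, 3), dtype=np.uint8)
for t in range(5):
    video[t, :, :, 0] = t * 50

image = slitscan(video, column=1)
print(image.shape)  # (4, 5, 3)
```

Changing the `column` argument picks a different slice of space to sweep through time, which is why each column variable produced such a different image.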

Redimensionator wasn’t written until after the dances were filmed. We had the still-image preview while dancing, but had no idea what it would look like in Redimensionated video form. In the future, it would be fun to choreograph a dance piece or music video with the output in mind. Putting some planning into costumes, props, and choreography could make for very interesting output.

(Cross-posted to the new Media Lab Helsinki students blog.)