using zoom, motion, and depth to explore graphs and levels of abstraction
Novemberween resolution: take (at least) a couple of months’ vacation from social networking.
I’m moving my online activity here for public communication and (e)mail for private. Ideally, I’ll spend my extra time writing real letters. If you would like to get in touch in a way you can touch, write me:
Haapaniemenkatu 12 A 40
Helsinki 00530 Finland
I finally summoned the courage to watch my JSConf.eu talk, and (besides the 30 ums per minute) it isn’t so bad.
I talk about:
Meditate, direct our love indiscriminately and our condemnation exclusively at those with power. Revolt in whatever way we want, with the spontaneity of the London rioters, with the certainty and willingness to die of religious fundamentalists or with the twinkling mischief of the trickster. We should include everyone, judging no one, without harming anyone. The Agricultural Revolution took thousands of years, the Industrial Revolution took hundreds of years, the Technological Revolution took tens, the Spiritual Revolution has come and we have only an instant to act.
This could be the first in a series of UX rage comics.
Simon St. Laurent connects some conceptual components in “Transformative Programming: Flow-based, functional, and more.” He explains the connections between web services, punch cards, Unix pipes, functional programming, and flow-based programming. I have been thinking about these connections for some time, and I’m glad somebody articulated them.
After years of watching and wondering, I’m starting to see a critical mass of developers working within approaches that value loose connections. …they share an approach of applying encapsulation to transformations, rather than data.
I think that people want tools to solve problems. It is amazing to see the lengths to which computer novices will go to get the wrong tools to do what they want. (I talked about this in my JSConf.eu talk this year. It isn’t on YouTube yet, so I have no idea how coherent I was.)
If we make it easier to stitch together minimal tools, then we could make our own environments for different kinds of tasks. Wire in a spreadsheet when we need tabular data, a timeline when we have linear media, a dataflow graph when we want to do A/V synthesis and filtering… building software should be the practice of recognizing how to stitch these components together.
More people should have this skill. My main research interest is in making this skill (really, superpower) more accessible. I want to do this for myself, but also for my parents, kids, friends, and myself as a 9-year-old.
Monolithic software suites try to solve every problem in a broad domain with a giant toolbox, and then get abused to solve problems in other domains. Photoshop isn’t a web design tool, and it isn’t a dynamic web/mobile app design tool, but it is bent to solve those problems. Toolboxes are useful things to have and understand, but every digital media challenge is different.
At JSConf.eu Jason Frame demoed a prototype window manager made for slicing up a screen and stitching together tools like this, using dataflow wires to connect widgets. I imagine a system like that which can also zoom in (open up?) to make more complex logic in a NoFlo graph.
This is the long tail of tool design. Five billion people will come online for the first time in the next 10 years. What problems will they be interested in solving? How many of those problems will be too obscure or not profitable enough for a company to attempt to solve?
Right now, today, we can’t see the thing, at all, that’s going to be the most important 100 years from now.
– Carver Mead
St. Laurent also writes about “Humans as Transformers.” Lauren McCarthy made a project where she farmed out all decisions during some dates to strangers on the internet in real time. This got me imagining a “Mechanical Turk” component for NoFlo: its inputs would be the data, directions as text, and a price per item processed; its output would be the human-transformed data. You could run these in parallel to compare answers, and use NoFlo’s normal flow controls to handle the asynchronous nature of the component. This would be a quick and dirty way to encapsulate any programming challenge too complex or difficult to express in code.
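As a rough sketch of the idea (not real NoFlo code), such a component could fan each item out to a human worker in parallel and gather the answers as they resolve. Everything here is hypothetical: `askHuman` is a made-up stand-in for whatever crowdsourcing API would actually deliver the work.

```javascript
// Hypothetical "human transformer": fan items out to people in parallel,
// and collect the transformed answers back in their original order.
// `askHuman` is an injected stand-in for a real crowdsourcing API call.
function humanTransform(items, directions, pricePerItem, askHuman) {
  return Promise.all(
    items.map((item) => askHuman({ item, directions, pricePerItem }))
  );
}

// To compare answers, run the same items through two workers in parallel:
function compareAnswers(items, directions, price, workerA, workerB) {
  return Promise.all([
    humanTransform(items, directions, price, workerA),
    humanTransform(items, directions, price, workerB),
  ]);
}
```

Because the result is just a promise, the asynchronous, unpredictable timing of human work is handled the same way as any other slow component.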
This graph runs the analog clock in the corner. We’re discussing a few ways that the graph might be simpler. For example, most of the components could be encapsulated into one subgraph, which would output just the numbers and rotation percentages needed to draw the clock.
Also, we’re talking about allowing equations like “=x/(60*1000)” in input ports that expect numbers. When x is the date in milliseconds, that equation gives the rotation of the second hand, because the hand makes one full turn every 60 seconds × 1000 milliseconds. This will be a powerful feature, and it will reduce the number of nodes in the graph.
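The same arithmetic extends to all three hands. Here is a minimal sketch (plain JavaScript, not the graph itself) of each rotation as a fraction of a full turn, given a time x in milliseconds; note that deriving the hour hand straight from epoch milliseconds ignores time zones, so it is only correct in UTC.

```javascript
// Each hand's rotation is the fractional part of x divided by the
// number of milliseconds in one full turn of that hand.
function handRotations(x) {
  const frac = (n) => n - Math.floor(n); // keep only the fractional turn
  return {
    second: frac(x / (60 * 1000)),         // one turn per minute
    minute: frac(x / (60 * 60 * 1000)),    // one turn per hour
    hour: frac(x / (12 * 60 * 60 * 1000)), // one turn per 12 hours (UTC)
  };
}
```

For example, 30 seconds past the minute gives a second-hand rotation of 0.5, i.e. the hand pointing straight down.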
If you have ideas for the graph editor or future demos, please chime in.
Living abroad, away from my family and my fellowship, I have been doing lots of video communication, and it is a challenge.
At first there was a honeymoon novelty phase, but now I feel like we are starting to get used to the rhythm of it, and annoyed by the shortcomings. It is somewhere in the uncanny valley of communication… it feels a little like a face-to-face conversation, but then lag, A/V dropouts and freezes, and the lack of eye contact really drive home that this isn’t natural. Have you ever tried to sing together over video chat?
Lag seriously messes with the conversational flow, especially with multiple parties experiencing different lags from each other. With casual video chatting these don’t matter so much, but in a meeting where people are trying to make decisions it is a bit of a battle.
I hear myself rambling sometimes, rewording my point a few times, to fill in the lack of the normal subtle cues that all parties are on the same page. I catch myself looking at the camera “in the eye,” which is really backwards.
We’ll get used to it, and develop systems for explicitly communicating the subtle, subconscious things that are lost in compression. I’m noticing “sounds good to me” verbally filling in what might have been understood without words in person. Lag will decrease, A/V quality will be better, and maybe depth cameras will adjust the angle of our gaze to better approximate eye contact.
My daughters don’t know a world without video chatting. In 2009 I was 14,300 km away and my first was able to communicate with me in sign language. This wouldn’t have happened with just audio. My second didn’t meet my folks in person for 3 months, but when she did there was immediate recognition.
Some thought-provoking writing on the video-communication-centric future we might be heading towards:
Your children will know a very different way of relating to people who are not physically present. It will change the way they work, maintain friendships, relate to family members, fall in love, and experience the world. It will change their sense of self, and self-worth. It may be a boon, or it may be harmful. Most likely, it’ll be a bit of both, because after all, it’s still about people.
I’ve been in NYC for a few days to meet with Eyebeam and Mozilla people and talk about future plans and potentials for Meemoo.org. From Thursday to Saturday was Art Hack Day at 319 Scholes in Brooklyn.
This diagram shows one way of thinking about the project that has been on my mind: a frictionless digital/analog converter/synthesizer.
(Edit 10/01: I realized that the old version of the diagram wasn’t really what I was thinking.)
The idea is that any of these should be easily mashed together to experiment with aesthetic possibilities. WebGL textures projected on clay forms, paper puppets animated with your voice, finger-paint textures on 3D graphics… This concept is extremely broad, and I still need to find ways to narrow the focus to a few easy activities that can introduce newbies to the toolset. I’m making a collection of “hack-tivities” as I come up with them.
These “physical gif” animated images are now easy to make with Meemoo:
I’ll be making more of these introductory pathways to Meemoo as the project progresses.