visual / spatial / tangible programming

My thesis and current open-source focus is on visual dataflow programming. I have been imagining the possibilities of bringing the graphs and their output into real spaces:

  • Use spatial perception and memory to keep track of dataflow code.
  • Collaborate across office walls (= unlimited virtual resolution), together or remotely. Bluescape explores the concept with huge screens.
  • Smart home programming: baby monitor flashes lamp in living room. MIT’s Fluid Interfaces group has been exploring AR controls for devices in the home.
  • Stretch a canvas across the wall to make generative wallpaper. View and edit source in situ.
  • Set up 100 virtual speakers around an exhibition space, then feed them with a/v synths. The speakers can skitter around the walls too, or flock (if they evolve wings ;-) ).

To do this, we’ll port the graph editor to Unity, a game engine that makes it possible to target a variety of hardware. Our current graph editor is browser-based (Polymer custom elements + SVG) and makes use of zoomable UI concepts. Zoom out to see the big-picture shape of the graph; zoom in to see details like port names. People are used to this interaction pattern from maps. It works with mouse (wheel/scroll gesture), touch (pinch), and AR/VR (get closer).

flowhub zui
Flowhub’s Zoomable UI (work in progress)

I’m a backer of Structure Sensor, and I’m applying for Google’s Project Tango. These portable 3D cameras with screens will map the real space and provide a “magic window” into the augmented space. Combining them with VR/AR glasses would free the hands to explore interactions beyond poking glass or waving in the air.

Tegu blocks are sustainably made wooden blocks with magnets inside. They are lovely to hold and addictive to play with. I have been imagining using them as a tangible interface for dataflow programming. Snap together to connect, then pull apart and see the virtual wire. Arrange them on a wall (or drafting table). Turn them like knobs to tweak values.

I hope to design augmented reality interactions that free programming (and computing) from the desktop metaphor.

idea: open fashion design tool

My partner is showing me the ropes (threads) of designing and making clothes. I’m not a fan of shopping for clothes; I don’t think I’ve ever seen something in a store and thought “this is so me.” The things that I design and make for myself quickly become my favorites.

3d fab
The most practical, economical, and accessible machines for 3D fabrication.

As a bonus, sustainable/recycled materials to feed into the machine are really easy to find. Designs can be easily ripped and remixed from anything that you have that fits well. The design and fabrication process is tactile, and quite forgiving.

Fashion pattern design is essentially low-polygon 3D modeling, with constraints for body measurements.

There are general purpose tools for pattern design that look cool, but they wouldn’t even tell me the price. They are not interested in the DIY community.

Continuum Fashion’s D.dress is impressive, drawing shapes to generate the triangles needed to sew that 3D form. But the output is more of a New Aesthetic experiment than practical clothing. Their Constrvct project outputs more everyday-friendly designs, but only for women, and they keep control of the pattern and production steps.

I’d really like to use a design tool that lets you:

  • start with time-tested standard patterns, and customize from there
  • tweak any of the pieces or constraints
  • see a live-updating 3d preview
  • specify fabric type / weight / stretchiness
  • use a projector to transfer your pattern directly onto cloth (printing paper patterns is annoying)
  • add colors, photos, or patterns (these could be printed with the lines to cut directly on the cloth with services like Spoonflower)

Getting measurements and building on a Kinect-scanned 3D model of yourself would be a fun stretch goal. (Though taking measurements might be a little less invasive than stripping down for a Microsoft camera.)

It should obviously be Free, open-source, and available to anybody with a web browser.

What are the technical pieces that I need to put together to make this tool?

  1. Bret Victor’s Drawing Dynamic Visualizations shows a well-considered UX for constraint-based vector design. This line = 1/2 (waist circumference + breathing space).
  2. Then we need some way to specify which lines are sewn together, and in which order. (Order of sewing is really important and still somewhat magical to me. But I’m starting to see the logic in it.) The live 3D visualization will help there.
  3. To turn the polygons into a cloth-like 3D shape, we’ll need to triangulate and subdivide them into many smaller triangles. Each line in this mesh will want to keep its length, but will stretch as much as the cloth allows.
  4. When we “sew” those polygons together, we need some way to inflate the model. This demo of a force-directed graph layout algorithm illustrates how this might work.
  5. 3D rendering the calculated meshes, probably with Three.js.
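Step 3 above (every line in the mesh trying to keep its length while the cloth stretches) can be sketched as a simple spring-relaxation pass: nudge each edge’s endpoints toward the edge’s rest length until the mesh settles. This is a hypothetical illustration, not the project’s actual code; the names and data layout are made up.

```javascript
// Sketch of the "each line keeps its length" idea: repeatedly move
// every edge's endpoints toward the edge's rest length, the way cloth
// resists stretching. Vertices are {x, y, z}; edges are {a, b, rest},
// where a and b index into the vertex array.

function relax(vertices, edges, iterations) {
  for (let k = 0; k < iterations; k++) {
    for (const e of edges) {
      const va = vertices[e.a], vb = vertices[e.b];
      const dx = vb.x - va.x, dy = vb.y - va.y, dz = vb.z - va.z;
      const len = Math.sqrt(dx * dx + dy * dy + dz * dz);
      if (len === 0) continue;
      // How far the edge is from its rest length, split between both ends.
      const diff = (len - e.rest) / len / 2;
      va.x += dx * diff; va.y += dy * diff; va.z += dz * diff;
      vb.x -= dx * diff; vb.y -= dy * diff; vb.z -= dz * diff;
    }
  }
}

// Two points 2 units apart, joined by an edge with rest length 1:
const verts = [{x: 0, y: 0, z: 0}, {x: 2, y: 0, z: 0}];
relax(verts, [{a: 0, b: 1, rest: 1}], 50);
// After relaxation the points sit 1 unit apart.
```

The “sewing” of step 4 would add extra edges between pieces, and step 5 would hand the relaxed vertex positions to Three.js for rendering.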

The implications of reimagining production chains have been explored by the folks behind Matter Machine. I’ll expand on those ideas in a future post, but I’ll just say that I think it could be a good thing.

So far, this project is only a collection of conversations, sketches, and this blog post. If you’re interested in joining, or have some pointers, please leave a comment or get in touch.


Edit 02/23: Trying to express the low-level question of designing a UI for constraint-based vector drawing, I made this diagram for a friend:

Novemberween resolution: take (at least) a couple of months’ vacation from social networking.

I’m moving my online activity here for public communication and (e)mail for private. Ideally, I’ll spend my extra time writing real letters. If you would like to get in touch in a way you can touch, write me:

Haapaniemenkatu 12 A 40
Helsinki 00530 Finland

Design for Hackability talk at JSConf.eu

I finally summoned the courage to watch my JSConf.eu talk, and (besides the 30 ums per minute) it isn’t so bad.

I talk about:

  • software eating the world
  • metamedia / metanetwork
  • hacking the wrong tools to get things done
  • potential for ux design in coding:
  • text interfaces gave way to GUIs and then touchscreens, each step inviting more people to use software; how can we invite more people to hack software?
  • design not for hackability
    (in this section I talk about how consumer electronics are not hackable, but I was happy to find out how I was wrong about that in this TED talk: Vinay Venkatraman: Technology crafts for the digitally underserved.)
  • layers of abstraction:
    making it easy to dive from gui to dataflow to code and back
  • demos, and the long tail of tool design
  • artistic potential
  • future plans and NoFlo

Meditate, direct our love indiscriminately and our condemnation exclusively at those with power. Revolt in whatever way we want, with the spontaneity of the London rioters, with the certainty and willingness to die of religious fundamentalists or with the twinkling mischief of the trickster. We should include everyone, judging no one, without harming anyone. The Agricultural Revolution took thousands of years, the Industrial Revolution took hundreds of years, the Technological Revolution took tens, the Spiritual Revolution has come and we have only an instant to act.

Russell Brand on Revolution

The long-tail of tool design and transformative programming

Simon St. Laurent connects some conceptual components in Transformative Programming: Flow-based, functional, and more. He explains the connections between web services, punch cards, Unix pipes, functional programming, and flow-based programming. I have been thinking about these connections for some time, and I’m glad somebody articulated them.

After years of watching and wondering, I’m starting to see a critical mass of developers working within approaches that value loose connections. …they share an approach of applying encapsulation to transformations, rather than data.

I think that people want tools to solve problems. It is amazing to see the lengths that computer novices will go to in order to get the wrong tools to do what they want. (I talked about this in my JSConf.eu talk this year. It isn’t on YouTube yet, so I have no idea how coherent I was.)

If we make it easier to stitch together minimal tools, then we could make our own environment for different kinds of tasks. Wire in a spreadsheet when we need tabular data, a timeline when we have linear media, a dataflow graph when we want A/V synthesis and filtering… building software should be the practice of recognizing how to stitch these components together.

More people should have this skill. My main research interest is in making this skill (really, superpower) more accessible. I want to do this for myself, but also for my parents, kids, friends, and myself as a 9-year-old.

Monolithic software suites try to solve every problem in a broad domain with a giant toolbox, and then get abused to solve problems in other domains. Photoshop isn’t a web design tool, and it isn’t a dynamic web/mobile app design tool, but it is bent to solve those problems. Toolboxes are useful things to have and understand, but every digital media challenge is different.

meemoo-illo-by-jyri-pieniniemi
I think that the upcoming custom elements web standard + NoFlo is going to be a powerful combination for making tools that get stuff done. I agree with St. Laurent that making the high-level picture dataflow is an effective way to make this work. My Meemoo.org and Mozilla App Maker are two potentially-compatible concepts for building toy programs like this. NoFlo is bringing this method to general-purpose JavaScript, which now spans many spheres of possibility: server, browser, audio, video, 3D.

At JSConf.eu Jason Frame demoed a prototype window manager made for slicing up a screen and stitching together tools like this, using dataflow wires to connect widgets. I imagine a system like that which can also zoom in (open up?) to make more complex logic in a NoFlo graph.

This is the long-tail of tool design. 5 billion people will come online for the first time in the next 10 years. What problems will they be interested in solving? How many of these problems will be too obscure or not profitable enough to be suitable for a company to attempt to solve?

Right now, today, we can’t see the thing, at all, that’s going to be the most important 100 years from now.
Carver Mead

***

St. Laurent also writes about “Humans as Transformers.” Lauren McCarthy made a project where she farmed out all decisions during some dates to strangers on the internet in real-time. This got me to imagine a “Mechanical Turk” component for NoFlo: inputs data, text for directions, and price per item processed; outputs the human-transformed data. You could run these in parallel to compare answers, and use NoFlo’s normal flow controls to handle the asynchronous nature of the component. This would be a quick and dirty way to encapsulate any programming challenge too complex or difficult to express in code.
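The “Mechanical Turk” component idea above boils down to: fan the same item out to several human workers in parallel, then collect and compare their answers. Here is a hypothetical sketch in plain JavaScript (not NoFlo’s actual API); `askHuman` is a stand-in for whatever crowdsourcing call would really be made.

```javascript
// Fan one item out to `workers` people in parallel and gather answers.
// askHuman(item, directions) is a hypothetical async crowdsourcing call.

async function humanTransform(item, directions, workers, askHuman) {
  const answers = await Promise.all(
    Array.from({length: workers}, () => askHuman(item, directions))
  );
  return answers; // compare or vote on these downstream
}

// Toy stand-in worker that "answers" immediately:
const fakeWorker = async (item, directions) => item.toUpperCase();

humanTransform('hello', 'Shout this.', 3, fakeWorker)
  .then(answers => console.log(answers)); // → [ 'HELLO', 'HELLO', 'HELLO' ]
```

In a real graph, the promise-per-worker structure maps naturally onto dataflow’s asynchronous packets: each answer can be emitted as it arrives.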

NoFlo clock demo

I have been working on the NoFlo graph editor, building on my experience making Meemoo.

noflo clock demo
press play on the iframe to start the clock

This graph runs the analog clock in the corner. We’re discussing a few ways that the graph might be simpler. For example, most of the components could be encapsulated into one subgraph, which would output just the numbers and rotation percentages needed to draw the clock.

Also, we’re talking about allowing equations like “=x/(60*1000)” in input ports that expect numbers. When x is the current time in milliseconds, that equation gives the rotation of the second hand (60 seconds × 1000 milliseconds per revolution). This will be a powerful feature, and it will reduce the number of nodes in the graph.
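As a plain-JavaScript reading of that equation (with a modulo added so the value wraps back to zero each minute, which the fractional rotation implies):

```javascript
// "=x/(60*1000)": with x in milliseconds, the second hand's rotation
// is the fraction of the current minute that has elapsed.
// 60 seconds * 1000 ms = 60000 ms per full revolution.

function secondHandRotation(ms) {
  return (ms % 60000) / 60000;
}

console.log(secondHandRotation(15000)); // 0.25 (a quarter turn)
```

The minute and hour hands would be the same shape of expression with larger divisors, which is why one equation per port can replace a chain of arithmetic nodes.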

If you have ideas for the graph editor or future demos, please chime in.