
visual / spatial / tangible programming

My thesis and current open-source focus is on visual dataflow programming. I have been imagining the possibilities of bringing the graphs and their output into real spaces:

  • Use spatial perception and memory to keep track of dataflow code.
  • Collaborate across office walls (= unlimited virtual resolution), together or remotely. Bluescape explores the concept with huge screens.
  • Smart home programming: baby monitor flashes lamp in living room. MIT’s Fluid Interfaces group has been exploring AR controls for devices in the home.
  • Stretch a canvas across the wall to make generative wallpaper. View and edit source in situ.
  • Set up 100 virtual speakers around an exhibition space, then feed them with a/v synths. The speakers can skitter around the walls too, or flock (if they evolve wings 😉).

To do this we’ll port the graph editor to Unity, a game engine that will make it possible to deploy to all kinds of hardware. Our current graph editor is browser-based (Polymer custom elements + SVG) and makes use of zoomable UI concepts. Zoom out to see the big-picture shape of the graph; zoom in to see details like port names. People are used to this interaction pattern from maps. It works with mouse (wheel/scroll gesture), touch (pinch), and AR/VR (get closer).
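To make the zoomable-UI idea concrete, here is a minimal browser-side sketch in TypeScript (not Flowhub’s actual code; the `.port-label` class and the threshold value are assumptions for illustration). Zooming scales an SVG group, and a detail layer such as port names only renders past a scale cutoff:

```typescript
// Semantic zoom sketch: fine detail (port names) appears only when
// the user is zoomed in far enough to read it.
const PORT_LABEL_THRESHOLD = 1.5; // assumed cutoff scale

interface ViewState {
  scale: number; // 1.0 = 100%
  x: number;     // pan offsets in screen pixels
  y: number;
}

function applyZoom(view: ViewState, canvas: SVGGElement): void {
  canvas.setAttribute(
    "transform",
    `translate(${view.x},${view.y}) scale(${view.scale})`
  );
  // Toggle the detail layer based on the current zoom level
  for (const label of canvas.querySelectorAll<SVGTextElement>(".port-label")) {
    label.style.display =
      view.scale >= PORT_LABEL_THRESHOLD ? "inline" : "none";
  }
}

// Wheel/scroll maps to zoom, keeping the point under the cursor fixed,
// which is the interaction people already know from maps.
function onWheel(view: ViewState, e: WheelEvent, canvas: SVGGElement): void {
  e.preventDefault();
  const factor = e.deltaY < 0 ? 1.1 : 1 / 1.1;
  view.x = e.clientX - (e.clientX - view.x) * factor;
  view.y = e.clientY - (e.clientY - view.y) * factor;
  view.scale *= factor;
  applyZoom(view, canvas);
}
```

The same ViewState maps naturally to pinch gestures on touch, and to physical distance in AR/VR.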

Flowhub’s Zoomable UI (work in progress)

I’m a backer of the Structure Sensor, and am applying for Google’s Project Tango. These portable 3D cameras with screens will map the real space and provide a “magic window” into the augmented space. Combining them with VR/AR glasses would free the hands to explore interactions beyond poking glass or waving in the air.

Tegu blocks are sustainably made wooden blocks with magnets inside. They are lovely to hold and addictive to play with. I have been imagining using them as a tangible interface for dataflow programming. Snap them together to connect, then pull apart and see the virtual wire. Arrange them on a wall (or a drafting table). Turn them like knobs to tweak values.

I hope to design augmented reality interactions that free programming (and computing) from the desktop metaphor.

idea: open fashion design tool

My partner is showing me the ropes (threads) of designing and making clothes. I’m not a fan of shopping for clothes; I don’t think I’ve ever seen something in a store and thought “this is so me.” The things that I design and make for myself quickly become my favorites.

The most practical, economical, and accessible machines for 3D fabrication.

As a bonus, sustainable/recycled materials to feed into the machine are really easy to find. Designs can be easily ripped and remixed from anything that you have that fits well. The design and fabrication process is tactile, and quite forgiving.

Fashion pattern design is essentially low-polygon 3D modeling, with constraints for body measurements.
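To make that claim concrete, here is a toy TypeScript sketch (the panel shape, quarter-circumference rule, and ease value are illustrative assumptions, not a real drafting standard) where a pattern piece’s outline is a function of body measurements rather than fixed coordinates:

```typescript
// A pattern piece as constrained geometry: the outline is computed
// from measurements plus ease, not drawn as fixed points.
interface Measurements {
  waist: number;  // circumference in cm
  hip: number;    // circumference in cm
  length: number; // waist to hem in cm
}

type Point = [number, number];

// Front panel of a simple A-line skirt, cut on the fold, so it spans
// a quarter of the body circumference.
function skirtFrontPanel(m: Measurements, ease = 2): Point[] {
  const waistWidth = (m.waist + ease) / 4;
  const hemWidth = (m.hip + ease) / 4; // flare out to hip width
  return [
    [0, 0],               // center front at the waist
    [waistWidth, 0],      // side seam at the waist
    [hemWidth, m.length], // side seam at the hem
    [0, m.length],        // center front at the hem
  ];
}

console.log(skirtFrontPanel({ waist: 76, hip: 96, length: 55 }));
```

Change a measurement and every dependent line updates, which is exactly the low-polygon-with-constraints model.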

There are general-purpose tools for pattern design that look cool, but their makers wouldn’t even tell me the price. They are not interested in the DIY community.

Continuum Fashion’s D.dress is impressive, drawing shapes to generate the triangles needed to sew that 3D form. But the output is more of a New Aesthetic experiment than practical clothing. Their Constrvct project outputs more everyday-friendly designs, but only for women, and they keep control of the pattern and production steps.

I’d really like to use a design tool that lets you:

  • start with time-tested standard patterns, and customize from there
  • tweak any of the pieces or constraints
  • see a live-updating 3D preview
  • specify fabric type / weight / stretchiness
  • use a projector to transfer your pattern directly onto cloth (printing paper patterns is annoying)
  • add colors, photos, or patterns (these could be printed, along with the cutting lines, directly on the cloth with services like Spoonflower)

Getting measurements and building on a Kinect-scanned 3D model of yourself would be a fun stretch goal. (Though taking measurements might be a little less invasive than stripping down for a Microsoft camera.)

It should obviously be Free, open-source, and available to anybody with a web browser.

What are the technical pieces that I need to put together to make this tool?

  1. Bret Victor’s Drawing Dynamic Visualizations shows a well-considered UX for constraint-based vector design. This line = 1/2 (waist circumference + breathing space).
  2. Then we need some way to specify which lines are sewn together, and in which order. (Order of sewing is really important and still somewhat magical to me. But I’m starting to see the logic in it.) The live 3D visualization will help there.
  3. To turn the polygons into a cloth-like 3D shape, we’ll need to triangulate and subdivide them into many smaller triangles. Each line in this mesh will want to keep its length, but will stretch as much as the cloth allows (see the relaxation sketch after this list).
  4. When we “sew” those polygons together, we need some way to inflate the model. This demo of a force-directed graph layout algorithm illustrates how this might work.
  5. 3D rendering the calculated meshes, probably with Three.js.
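To give a feel for steps 3 and 4, here is a minimal position-based relaxation sketch in TypeScript (my own toy formulation, not the algorithm from the linked demo): every edge of the subdivided mesh tries to return to its rest length from the flat pattern, with a per-edge stretch factor standing in for the fabric’s give:

```typescript
type Vec3 = [number, number, number];

interface Edge {
  a: number;       // indices into the vertex array
  b: number;
  rest: number;    // rest length measured on the flat pattern
  stretch: number; // 0 = rigid thread, higher = stretchier cloth
}

// Repeatedly nudge vertex pairs toward satisfying each edge's length
// constraint; a seam is just an edge with rest length 0 that joins
// vertices belonging to two different pattern pieces.
function relax(verts: Vec3[], edges: Edge[], iterations = 50): void {
  for (let it = 0; it < iterations; it++) {
    for (const e of edges) {
      const [ax, ay, az] = verts[e.a];
      const [bx, by, bz] = verts[e.b];
      const dx = bx - ax, dy = by - ay, dz = bz - az;
      const len = Math.hypot(dx, dy, dz) || 1e-9;
      // Positive when the edge is too long, negative when too short;
      // softened by the fabric's stretchiness.
      const diff = ((len - e.rest) / len) * 0.5 * (1 - e.stretch);
      verts[e.a] = [ax + dx * diff, ay + dy * diff, az + dz * diff];
      verts[e.b] = [bx - dx * diff, by - dy * diff, bz - dz * diff];
    }
  }
}
```

Actual inflation would also need an outward pressure force or a body model to drape over, which is omitted here. The relaxed vertex positions could then feed straight into a Three.js BufferGeometry for step 5.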

The implications of reimagining production chains have been explored by the folks behind Matter Machine. I’ll expand on those ideas in a future post, but I’ll just say that I think it could be a good thing.

So far, this project is only a collection of conversations, sketches, and this blog post. If you’re interested in joining, or have some pointers, please leave a comment or get in touch.


Edit 02/23: Trying to express the low-level question of designing a UI for constraint-based vector drawing, I made this diagram for a friend: