
visual / spatial / tangible programming

My thesis and my current open-source focus are both on visual dataflow programming. I have been imagining the possibilities of bringing the graphs and their output into real spaces:

  • Use spatial perception and memory to keep track of dataflow code.
  • Collaborate across office walls (= unlimited virtual resolution), together or remotely. Bluescape explores the concept with huge screens.
  • Smart home programming: baby monitor flashes lamp in living room. MIT’s Fluid Interfaces group has been exploring AR controls for devices in the home.
  • Stretch a canvas across the wall to make generative wallpaper. View and edit source in situ.
  • Set up 100 virtual speakers around an exhibition space, then feed them with a/v synths. The speakers can skitter around the walls too, or flock (if they evolve wings 😉 ).

To do this, we’ll port the graph editor to Unity, a game engine that will make it possible to target many kinds of hardware. Our current graph editor is browser-based (Polymer custom elements + SVG) and makes use of zoomable UI concepts. Zoom out to see the big-picture shape of the graph; zoom in to see details like port names. People are used to this interaction pattern from maps. It works with mouse (wheel/scroll gesture), touch (pinch), and AR/VR (get closer).

[Image: Flowhub’s Zoomable UI (work in progress)]

I’m a backer of Structure Sensor, and I’m applying for Google’s Project Tango. These portable 3D cameras with screens will map the real space and provide a “magic window” into the augmented space. Combining them with VR/AR glasses would free the hands to explore interactions beyond poking glass or waving in the air.

Tegu blocks are sustainably made wooden blocks with magnets inside. They are lovely to hold and addictive to play with. I have been imagining using them as a tangible interface for dataflow programming. Snap them together to connect, then pull them apart and see the virtual wire. Arrange them on a wall (or a drafting table). Turn them like knobs to tweak values.

I hope to design augmented reality interactions that free programming (and computing) from the desktop metaphor.

The long tail of tool design and transformative programming

Simon St. Laurent connects some conceptual components in “Transformative Programming: Flow-based, functional, and more.” He explains the connections between web services, punch cards, Unix pipes, functional programming, and flow-based programming. I have been thinking about these connections for some time, and I’m glad somebody has articulated them.

After years of watching and wondering, I’m starting to see a critical mass of developers working within approaches that value loose connections. …they share an approach of applying encapsulation to transformations, rather than data.

I think that people want tools to solve problems. It is amazing to see the lengths that computer novices will go to in order to make the wrong tools do what they want. (I talked about this in my JSConf.eu talk this year. It isn’t on YouTube yet, so I have no idea how coherent I was.)

If we make it easier to stitch together minimal tools, then we could each assemble our own environment for different kinds of tasks: wire in a spreadsheet when we need tabular data, a timeline when we have linear media, a dataflow graph when we want A/V synthesis and filtering… Building software should be the practice of recognizing how to stitch these components together.

More people should have this skill. My main research interest is in making this skill (really, superpower) more accessible. I want to do this for myself, but also for my parents, kids, friends, and myself as a 9-year-old.

Monolithic software suites try to solve every problem in a broad domain with a giant toolbox, and then get abused to solve problems in other domains. Photoshop isn’t a web design tool, and it isn’t a dynamic web/mobile app design tool, but it is bent to solve those problems. Toolboxes are useful things to have and understand, but every digital media challenge is different.

[Meemoo illustration by Jyri Pieniniemi]

I think that the upcoming custom elements web standard + NoFlo is going to be a powerful combination for making tools to get stuff done. I agree with St. Laurent that making the high-level picture dataflow is an effective way to make this work. My Meemoo.org and Mozilla’s App Maker are two potentially compatible concepts for building toy programs like this. NoFlo is bringing this method to general-purpose JavaScript, which now includes many spheres of possibility: server, browser, audio, video, 3D.
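To make the “wireable tool brick” idea concrete, here is a minimal sketch of one brick as a standalone custom element. It uses today’s Custom Elements v1 API rather than the Polymer-era one this post refers to, and the element name and its ports are made up: one input arrives as an attribute, one output leaves as a DOM event, a shape a dataflow layer like NoFlo could wire against.

// A hypothetical "brick": one input (the step attribute), one output (a
// "count" event). A dataflow graph could wire that event into another brick
// without knowing anything about this element's internals.
class CounterBrick extends HTMLElement {
  connectedCallback() {
    this.count = 0;
    this.textContent = 'count: 0';
    this.addEventListener('click', () => {
      this.count += Number(this.getAttribute('step') || 1);
      this.textContent = `count: ${this.count}`;
      // Output port: emit the new value for whatever is wired downstream.
      this.dispatchEvent(new CustomEvent('count', { detail: this.count, bubbles: true }));
    });
  }
}
customElements.define('counter-brick', CounterBrick);

// Usage: <counter-brick step="5"></counter-brick>
// document.addEventListener('count', (event) => console.log(event.detail));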

At JSConf.eu, Jason Frame demoed a prototype window manager made for slicing up a screen and stitching together tools like this, using dataflow wires to connect widgets. I imagine a system like that which could also zoom in (open up?) to build more complex logic as a NoFlo graph.

This is the long tail of tool design. Five billion people will come online for the first time in the next 10 years. What problems will they be interested in solving? How many of those problems will be too obscure, or not profitable enough, for a company to attempt to solve?

Right now, today, we can’t see the thing, at all, that’s going to be the most important 100 years from now.
Carver Mead

***

St. Laurent also writes about “Humans as Transformers.” Lauren McCarthy made a project in which she farmed out all of the decisions during some of her dates, in real time, to strangers on the internet. This got me imagining a “Mechanical Turk” component for NoFlo: it takes data, text directions, and a price per item processed as inputs, and outputs the human-transformed data. You could run several of these in parallel to compare answers, and use NoFlo’s normal flow controls to handle the asynchronous nature of the component. This would be a quick and dirty way to encapsulate any programming challenge too complex or difficult to express in code.
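For concreteness, here is a rough sketch of what such a component could look like. It uses NoFlo’s newer Process API rather than the API from when this was written, the port names are guesses based on the description above, and submitToHumans() is a hypothetical stand-in for whatever call (Mechanical Turk or otherwise) would actually post the task and collect the answer.

const noflo = require('noflo');

// Hypothetical stand-in for a crowdsourcing call: post one task, resolve with
// the worker's answer. The real thing would talk to an API like Mechanical Turk.
const submitToHumans = (item, directions, price) =>
  Promise.reject(new Error('not implemented: wire this to a crowdsourcing API'));

exports.getComponent = () => {
  const c = new noflo.Component({
    description: 'Send each item to a human worker and forward the result',
    inPorts: {
      in: { datatype: 'all', description: 'Items for humans to transform' },
      directions: { datatype: 'string', control: true },
      price: { datatype: 'number', control: true, description: 'Payment per item' },
    },
    outPorts: {
      out: { datatype: 'all', description: 'Human-transformed items' },
      error: { datatype: 'object' },
    },
  });
  c.process((input, output) => {
    if (!input.hasData('in', 'directions', 'price')) { return; }
    const [item, directions, price] = input.getData('in', 'directions', 'price');
    submitToHumans(item, directions, price)
      .then((result) => output.sendDone({ out: result }))
      .catch((err) => output.done(err));
  });
  return c;
};

Running several of these nodes in parallel, as suggested above, would then just be a matter of wiring the same input into more than one of them.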

NoFlo clock demo

I have been working on the NoFlo graph editor, building on my experience making Meemoo.

[Embedded demo: NoFlo clock. Press play on the iframe to start the clock.]

This graph runs the analog clock in the corner. We’re discussing a few ways the graph could be made simpler. For example, most of the components could be encapsulated into one subgraph that outputs just the numbers and rotation percentages needed to draw the clock.

We’re also talking about allowing equations like “=x/(60*1000)” in input ports that expect numbers. With the current time in milliseconds as x, that equation gives the correct rotation of the second hand (60 seconds * 1000 milliseconds per full turn). This will be a powerful feature, and it will reduce the number of nodes in the graph.
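To make that concrete, here is the same arithmetic in plain JavaScript, assuming x is a millisecond timestamp such as Date.now() returns:

// x / (60 * 1000) counts minutes since the epoch. Whole turns leave the
// second hand where it started, so only the fractional part matters:
const x = Date.now();
const secondHandTurns = x / (60 * 1000);
const secondHandDegrees = (secondHandTurns % 1) * 360;
console.log(secondHandDegrees); // e.g. 162 at 27 seconds past the minute

The minute and hour hands could use the same pattern with larger divisors, adjusted for the local timezone.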

If you have ideas for the graph editor or future demos, please chime in.