
Project Introduction


Photo being taken of a light installation at Lumiere, 2018

Updated on 21/1/18

Light art installations, including pieces put up for London’s Lumiere festival last weekend, can be conceptualised as a union of hardware and software. The general technique of streaming visual media onto addressable lights is analogous to pixel-based displays (like the screen this blog is being read on); however, tinkerers often have to build custom software to make light-art objects work.

A typical approach to creating light art starts with choosing a creative coding language to write a generative sketch in, then finding and including a suitable LED control library, and finally programmatically mapping colours to the output. Overall, the process demands non-trivial technical ability from creators, and it limits who may extend pieces in the future, and how.

The aim of this project is to streamline light art development and interaction to make it far easier to take advantage of the expressivity, energy efficiency and discreteness that new LED technology offers.

It will be realised as an application to display sketches programmed in any creative coding language on arbitrary LED configurations, and will be remotely controlled via a web interface. By “arbitrary” I mean that LEDs do not have to be arranged in a regular grid but can be located to suit the constraints of their physical environment.
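To make “arbitrary” placement concrete, here is a minimal Python sketch of the colour-mapping step. The function name and the frame representation are my own illustrative assumptions, not part of the project: each LED carries a normalised (x, y) position, which is mapped onto the nearest pixel of a rendered frame.

```python
def sample_leds(frame, width, height, led_positions):
    """Map normalised LED coordinates (0..1) to nearest-pixel colours.

    `frame` is a row-major list of (r, g, b) tuples of length width * height.
    LED positions are arbitrary: they need not form a regular grid, so each
    LED can sit wherever its physical environment demands.
    """
    colours = []
    for nx, ny in led_positions:
        # Clamp to the last pixel so a position of exactly 1.0 stays in range.
        x = min(int(nx * width), width - 1)
        y = min(int(ny * height), height - 1)
        colours.append(frame[y * width + x])
    return colours
```

The same lookup works whatever produced the frame, which is why the capture methods discussed below matter more than the mapping itself.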

Features for live coding, automation and an online sketch-sharing platform will be made available in the web GUI. The rest of this post discusses the first task: capturing video from programmed sketches.


Visual sketches written in Processing and openFrameworks (OF) can be displayed on LEDs by using language-specific functions to read local colour (e.g. “getColor(x, y)” in OF). This could be useful for the project, as libraries could be written for both. However, other visual languages lack similar functions, notably WebGL, which would be incompatible with this approach.

A more widely applicable method could involve ‘framebuffers’ rather than application-specific functions, because a framebuffer stores visual data from any graphical application and allows external processes to view it. A schematic of this approach to grabbing colours from sketches could look like this:

  1. Virtual framebuffer e.g. Xvfb *
  2. Windowed graphical application e.g. Firefox
  3. Lossless digital media e.g. WebGL/Processing sketch**

* Virtual framebuffers run in memory alone, skipping video-out interfaces like HDMI, which will remain unused by this project
** Processing sketches, like openFrameworks apps, are standalone applications (they span levels 2 and 3)
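The layered setup above can be sketched in Python. This is an illustrative outline only: it assumes Xvfb (the standard virtual X framebuffer server) is installed, and the helper names are my own, not part of any existing tool.

```python
import os
import subprocess

def xvfb_command(display=":99", size="800x600", depth=24):
    """Build an Xvfb invocation: an X display held in memory alone,
    so no HDMI/video-out interface is ever involved (level 1)."""
    return ["Xvfb", display, "-screen", "0", f"{size}x{depth}"]

def launch_on_virtual_display(app_cmd, display=":99"):
    """Start the virtual framebuffer, then run a windowed graphical
    application (level 2), e.g. Firefox rendering a WebGL sketch or a
    standalone Processing sketch (levels 2 and 3 together)."""
    xvfb = subprocess.Popen(xvfb_command(display))
    env = dict(os.environ, DISPLAY=display)  # point the app at the virtual display
    app = subprocess.Popen(app_cmd, env=env)
    return xvfb, app
```

An external capture process could then read the virtual display’s contents without the sketch ever knowing it is not on a real screen.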

A lower-level, possibly higher-performance approach is to access a sketch’s raw graphical data via the device file /dev/fb0. It is suggested that you can grab images this way by copying /dev/fb0 into a file, or by using the fbdump tool [link].
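As an illustration of what reading that raw data might involve, the following Python sketch extracts one pixel from a raw dump (e.g. the file produced by copying /dev/fb0). It assumes the common 32-bit BGRX pixel layout; the real geometry varies between devices and should be checked (for instance with fbset or under /sys/class/graphics/fb0/) before relying on it.

```python
def pixel_at(fb_bytes, x, y, line_length, bytes_per_pixel=4):
    """Read the (r, g, b) colour at pixel (x, y) from a raw framebuffer dump.

    Assumes 32bpp BGRX, a common /dev/fb0 layout. `line_length` is the
    number of bytes per screen row, which may exceed width * bytes_per_pixel
    due to padding.
    """
    offset = y * line_length + x * bytes_per_pixel
    b, g, r = fb_bytes[offset:offset + 3]  # stored blue-first in BGRX
    return (r, g, b)
```

Sampling every LED position this way skips the X server entirely, which is what makes the approach potentially faster, at the cost of being tied to the framebuffer’s device-specific pixel format.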

In order to test the research covered so far, I’ve ordered new storage for my Raspberry Pi 3. I’ve also ordered LEDs, which may take a month to arrive from their source in Asia; they will be used to build physical objects for the software to power.