Images showing how a 96-LED artefact was built to instantiate the system developed for this project…
Prototype components: 5V 8A DC power adaptor, stripboard, wire, 7A automotive fuse w/ fuse holder, 2 voltage regulators, 10,000μF capacitor and a 60LED/metre roll of addressable APA102s, (not pictured:) Raspberry Pi, Arduino, USB cables, terminal block to connect the power supply to, and a single LED + fuse to indicate power
Voltage regulators and the capacitor soldered at a distance from the stripboard so they can be easily desoldered to use in the final piece, with insulating sleeving to prevent a short circuit…
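The supply and fuse ratings listed above can be sanity-checked with a rough power budget. A common rule-of-thumb figure for APA102s is around 60 mA per LED at full white; that number is an assumption here, not a measurement of this particular strip:

```python
# Rough power budget for the 96-LED artefact.
# Assumes ~60 mA per APA102 at full white -- a rule-of-thumb
# figure, not a measured value for this exact strip.
LED_COUNT = 96
MA_PER_LED = 60          # worst case: every channel fully on

peak_amps = LED_COUNT * MA_PER_LED / 1000
peak_watts = peak_amps * 5.0   # 5 V supply

print(f"peak draw ~{peak_amps:.2f} A ({peak_watts:.1f} W)")
# The 7 A fuse and 8 A supply both sit above this worst case.
```

Under that assumption the worst-case draw is about 5.8 A, so the 7 A fuse and 8 A adaptor leave some headroom.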
In the first weeks of this project, in which I am piecing software together to aid people creating light art, ‘dependency hell’ has been overcome to demonstrate Processing, openFrameworks and WebGL sketches running on the Raspberry Pi 3 without a monitor attached.
However, the issue of performance, affecting the quality of the LED output, is the biggest challenge so far. The system should ideally run as fast as digital video (usually at least 30 frames per second).…
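At 30 frames per second, each frame has a budget of roughly 33 ms. A generic pacing loop (a sketch of the idea, not the project's actual render loop) makes that budget explicit:

```python
import time

TARGET_FPS = 30
FRAME_BUDGET = 1.0 / TARGET_FPS   # ~33.3 ms per frame

def run_frames(n_frames, render=lambda: None):
    """Render n_frames, sleeping off any time left in each frame's budget."""
    for _ in range(n_frames):
        start = time.perf_counter()
        render()                          # draw + push pixels to the LEDs
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)

# Timing 30 frames of an empty render should take close to one second.
start = time.perf_counter()
run_frames(30)
print(f"30 frames in {time.perf_counter() - start:.2f} s")
```

If `render()` ever takes longer than the budget, the loop simply runs late, which is exactly the dropped-frame symptom that shows up on the LEDs.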
Coiled strip of new LEDs (APA102 chipset “hotness” link)
Last week I identified the issue of framerates from the prototype that displayed a Processing sketch on LEDs using the Raspberry Pi 3. I also identified several possible options for refactoring the code to achieve the ~30FPS performance needed for perceptually smooth animation. However I’m choosing not to go too deep into performance yet, as another consideration in this project is supporting sketches in other visual programming languages, which may inform how to optimise the system later on.…
Portable storage suitable for running Raspberry Pi applications
The last milestone this project reached was compiling and running Processing sketches on a headless Raspberry Pi 3 using X virtual framebuffer (Xvfb). To present the sketches on LED arrays we need to do colour sampling. This post starts with a summary of my research into X regarding this task.
X Window System is common on UNIX-like operating systems and seems to be standard on Linux distributions in 2018 - just over thirty years after X11 (the latest version) came out.…
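Whatever mechanism ends up grabbing frames out of X, the colour-sampling step itself just reduces a full frame to one colour per LED. A minimal sketch of that step, assuming the frame arrives as a height × width × 3 RGB array (NumPy stands in here for whatever the actual capture path produces):

```python
import numpy as np

def sample_strip(frame, led_count):
    """Pick one pixel colour per LED, evenly spaced across the frame's width.

    frame: (height, width, 3) uint8 RGB array; led_count: LEDs in the strip.
    Returns a list of (r, g, b) tuples, one per LED.
    """
    height, width, _ = frame.shape
    y = height // 2                                  # sample along the middle row
    xs = np.linspace(0, width - 1, led_count).astype(int)
    return [tuple(int(c) for c in frame[y, x]) for x in xs]

# A test frame that fades from black to red, left to right:
frame = np.zeros((64, 128, 3), dtype=np.uint8)
frame[:, :, 0] = np.linspace(0, 255, 128).astype(np.uint8)[None, :]
colours = sample_strip(frame, 8)
print(colours[0], colours[-1])   # first LED near black, last near full red
```

A real mapping for a 2D array of LEDs would sample a grid of (x, y) positions instead of a single row, but the principle is the same.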
Raspberry Pi 3 with storage
I discussed different ways of capturing video from programmed visual applications, known as ‘sketches,’ in my last project blog. Now I will break down the ‘virtual framebuffer’ approach on the Raspberry Pi 3.
The sketch used for this demonstration is written in the Processing language, although the approach could also apply to other languages. There are two benefits to starting with Processing: firstly, its sketches are a single file of code by default (unlike openFrameworks, which starts with three); secondly, the sketches are standalone applications (unlike sketches written in shader languages such as GLSL, which WebGL uses), which means fewer dependencies.…
Photo being taken of a light installation at Lumiere, 2018
Updated on 21/1/18
Light art-installations, including pieces put up for London’s Lumiere festival last weekend, can be conceptualised as a union of hardware and software. The general technique of streaming visual media onto addressable lights is analogous to pixel-based displays (like the device this blog is being read on); however, tinkerers often have to build custom software to make light art-inspired objects work.…
One habit of mine is listening to the same songs so much they become less enjoyable, in spite of a world of other related music out there to enjoy as well. This problem leads to the project’s first goal: mapping a music collection so a recommendation system can be built from it, which is complemented by a second goal: producing visual media using album artwork in the collection. The minimum viable product is an application to recommend music while involving generative imagery in the process.…
In this blog I walk through the process of realising my ‘pulse light’ starting with a video demonstrating its features in action, followed by a summary of the outcome and then captioned pictures and videos to illustrate the development from beginning to end. I was inspired by LED artworks (like this: link) and smart home devices.
The start of the video shows a light-dependent resistor (LDR sensor) embedded into the enclosure.…
The venture of computational creativity interests me because it brings together fields of research that otherwise seem only sparsely related. Particle swarm optimisation (PSO) likewise came from the work of a social psychologist and an electrical engineer: James Kennedy and Russell C. Eberhart respectively.
I will discuss my thoughts around applying PSO in computational creativity, first listing three general aims described on Wikipedia:
- To construct a program or computer capable of human-level creativity
- To better understand human creativity and to formulate an algorithmic perspective on creative behaviour in humans
- To design programs that can enhance human creativity without necessarily being creative themselves

Source: Computational creativity - Wikipedia…
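Before relating PSO to those aims, it helps to have the algorithm itself in view. This is a bare-bones, textbook particle swarm minimising a toy sphere function; the inertia and cognitive/social weights are common default values, not tuned for any particular problem:

```python
import random

def pso(fitness, dim=2, swarm=20, iters=200, bounds=(-10, 10)):
    """Minimise `fitness` with a textbook particle swarm.

    w is the inertia weight; c1 and c2 weight the pull towards each
    particle's personal best and the swarm's global best respectively.
    """
    w, c1, c2 = 0.7, 1.5, 1.5
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                      # each particle's best position
    pbest_f = [fitness(p) for p in pos]
    gbest = pbest[min(range(swarm), key=lambda i: pbest_f[i])][:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < fitness(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso(lambda p: sum(x * x for x in p))
print(best)   # should land near the origin
```

Swapping the sphere function for a fitness that scores some creative artefact is exactly where the computational-creativity questions begin: the optimiser is simple, but the fitness function carries all the aesthetics.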
Understanding an optimisation problem helps thinking about how best to approach solving it. This post follows the process of analysing a maximisation scenario and implementing a genetic algorithm to solve it. Chromosomes in the scenario are encoded with the structure:
ch = {x0, x1, x2, x3, x4, x5, x6, x7} where x0-7 can be any digit between zero and nine
Fitness is calculated using the formula:
f(ch) = (x0+x1) - (x2+x3) + (x4+x5) - (x6+x7)

I: Calculate fitnesses and sort a sample population…

ch2 = {8, 7, 1, 2, 6, 6, 0, 1}   f(ch2) = (8+7) - (1+2) + (6+6) - (0+1) = 23
ch1 = {6, 5, 4, 1, 3, 5, 3, 2}   f(ch1) = (6+5) - (4+1) + (3+5) - (3+2) = 9
ch3 = {2, 3, 9, 2, 1, 2, 8, 5}   f(ch3) = (2+3) - (9+2) + (1+2) - (8+5) = -16
ch4 = {4, 1, 8, 5, 2, 0, 9, 4}   f(ch4) = (4+1) - (8+5) + (2+0) - (9+4) = -19

II: Apply crossovers…

Single point crossover (at the middle) on the best two chromosomes……
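The fitness calculations and the crossover step above can be reproduced directly in code. This sketch (Python standing in for whichever language the full GA is written in) encodes each chromosome as a list of digits:

```python
def fitness(ch):
    """f(ch) = (x0+x1) - (x2+x3) + (x4+x5) - (x6+x7)"""
    return (ch[0] + ch[1]) - (ch[2] + ch[3]) + (ch[4] + ch[5]) - (ch[6] + ch[7])

def single_point_crossover(a, b, point=4):
    """Swap tails at `point` (the middle of an 8-digit chromosome by default)."""
    return a[:point] + b[point:], b[:point] + a[point:]

population = {
    "ch1": [6, 5, 4, 1, 3, 5, 3, 2],
    "ch2": [8, 7, 1, 2, 6, 6, 0, 1],
    "ch3": [2, 3, 9, 2, 1, 2, 8, 5],
    "ch4": [4, 1, 8, 5, 2, 0, 9, 4],
}

# I: calculate fitnesses and sort the sample population, best first
ranked = sorted(population, key=lambda k: fitness(population[k]), reverse=True)
print([(k, fitness(population[k])) for k in ranked])
# [('ch2', 23), ('ch1', 9), ('ch3', -16), ('ch4', -19)]

# II: single point crossover (at the middle) on the best two chromosomes
child_a, child_b = single_point_crossover(population["ch2"], population["ch1"])
print(child_a, child_b)
# [8, 7, 1, 2, 3, 5, 3, 2] [6, 5, 4, 1, 6, 6, 0, 1]
```

Note that crossing ch2 and ch1 at the middle hands ch2's high-scoring head to one child and its high-scoring tail to the other, which is exactly why selection pairs the fittest chromosomes before crossover.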