
Testing LED Input/Output Performance

In the first weeks of this project, in which I am piecing together software to aid people creating light art, I have overcome ‘dependency hell’ to demonstrate Processing, openFrameworks and WebGL sketches running on the Raspberry Pi 3 without a monitor attached.

However, the issue of performance, which affects the quality of the LED output, is the biggest one so far. The system should ideally run as fast as digital video (usually at least 30 frames per second).

Here is a list of the approaches to grabbing window colours I have tried to date, with my comments:

Approach | Speed | Comment
ImageMagick CLI | N/A | good for checking pixels but too slow to use in real time
Python Wand library | 2 fps |
Python Xlib and Pillow | 10 fps |
UV4L-raspidisp and UV4L-xscreen | N/A | streams the sketch window to /dev/video0 in YU12 format; there should be a C++ library to interface with V4L2, but it was less straightforward than I hoped (a V4L2 sketch follows this table)
Libav/avconv | N/A | encodes the sketch window to a video file, potentially to a localhost web stream too; however it was very slow, so it would need to be compiled from source with the RPi hardware-acceleration flag
Xutils and Xlib in C++ with OpenCV | 10 fps |
Xutils and Xlib in C++ without OpenCV | 15 fps | an Xlib sketch follows this table
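To make the Xlib rows concrete, here is a minimal sketch along the lines of what I timed. The 256x256 size matches the Xvfb screen used later in this post, and the pixel unpacking assumes a standard 24-bit true-colour layout:

#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <cstdio>

int main() {
  // Connect to the display named in $DISPLAY (the Xvfb framebuffer here)
  Display *display = XOpenDisplay(nullptr);
  if (!display) {
    fprintf(stderr, "cannot open display\n");
    return 1;
  }

  // Grab a 256x256 region of the root window into client memory;
  // ZPixmap gives packed pixels that can be indexed directly
  XImage *img = XGetImage(display, DefaultRootWindow(display),
                          0, 0, 256, 256, AllPlanes, ZPixmap);
  if (!img) {
    fprintf(stderr, "XGetImage failed\n");
    return 1;
  }

  // Unpack one pixel, assuming the usual 0xRRGGBB channel order
  unsigned long p = XGetPixel(img, 128, 128);
  printf("pixel(128,128) = %lu %lu %lu\n",
         (p & img->red_mask) >> 16,
         (p & img->green_mask) >> 8,
         p & img->blue_mask);

  XDestroyImage(img);
  XCloseDisplay(display);
  return 0;
}

Compiled with g++ grab.cpp -lX11 and run in a loop over every pixel, this style of grab is roughly where the 15 fps figure comes from.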
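As for the UV4L row, below is a sketch of the kind of plain V4L2 code I was attempting. The device path matches what UV4L gave me, and it assumes the driver supports the simple read() I/O method, which not every V4L2 device does:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstdio>
#include <vector>

int main() {
  // Open the device UV4L exposes the sketch window on
  int fd = open("/dev/video0", O_RDWR);
  if (fd < 0) {
    perror("open /dev/video0");
    return 1;
  }

  // Query the current capture format; UV4L reports YU12 (planar YUV 4:2:0)
  v4l2_format fmt = {};
  fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
  if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0) {
    perror("VIDIOC_G_FMT");
    return 1;
  }
  printf("%ux%u, fourcc %.4s\n", fmt.fmt.pix.width, fmt.fmt.pix.height,
         (const char *) &fmt.fmt.pix.pixelformat);

  // Read one frame, assuming the driver supports read() I/O
  std::vector<unsigned char> frame(fmt.fmt.pix.sizeimage);
  if (read(fd, frame.data(), frame.size()) < 0) {
    perror("read");
    return 1;
  }

  // In YU12 the Y (luma) plane comes first at one byte per pixel,
  // so frame[y * bytesperline + x] is the brightness at (x, y)
  printf("luma at (0,0): %u\n", frame[0]);

  close(fd);
  return 0;
}

Even with this working, the YU12 planes would still need converting to RGB for the LEDs, which is where a proper capture library would have helped.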

Image-processing tasks lend themselves to hardware-accelerated methods because they are highly parallel, and the Raspberry Pi 3 does in fact have hardware-accelerated graphics capabilities. The difficult part is taking advantage of the GPU, because the drivers are closed source and currently in transition (legacy and experimental drivers can be selected).

Only the UV4L approach above, which is developed specifically for the Pi, uses hardware acceleration. It emulates windows as webcams, so I installed Processing’s Pi-accelerated video library (link) to capture the stream. The library requires the P2D renderer (OpenGL mode) in Processing, which I set in the setup function. But these errors came up when running the sketch:

root@bdd360b:/app# Xvfb :1 -screen 0 256x256x24 -fbdir /var/tmp +extension GLX +render -noreset & export DISPLAY=":1"
[1] 663
root@bdd360b:/app# processing-java --sketch=./display_sketch  --present
Could not parse -1 for --display
OpenGL error 1280 at bot beginDraw(): invalid enumerant
GLVideo: Created compatibility symlinks
GLVideo requires the P2D or P3D renderer.

(java:1125): GLib-CRITICAL **: g_error_free: assertion 'error != NULL' failed
display_sketch.pde:7:0:7:0: RuntimeException: Could not load GStreamer

The errors are confusing because they suggest the P2D renderer isn’t enabled. Furthermore, the separate error that appears on systems incompatible with P2D rendering doesn’t show up, which suggests the system itself is compatible.

So I looked at the source code relevant to the error, and the failure appears to come from EGL (a graphics library) or X (the windowing system):

// save the current EGL context
#ifdef __APPLE__
context = gst_gl_context_get_current_gl_context (GST_GL_PLATFORM_CGL);
#elif GLES2
display = eglGetCurrentDisplay ();
surface = eglGetCurrentSurface (0);
context = eglGetCurrentContext ();
#else
display = glXGetCurrentDisplay ();
context = glXGetCurrentContext ();
#endif

if (!context) {
  g_printerr ("GLVideo requires the P2D or P3D renderer.\n");
  g_error_free (error);
  return JNI_FALSE;
}
source: link
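One way to test that diagnosis in isolation is a small program that creates a GLX context on the same Xvfb display (started above with +extension GLX) and then makes the same glXGetCurrentContext() call as the #else branch of the excerpt. This is only a sketch, with the standard GLX boilerplate and no X error handler:

#include <GL/glx.h>
#include <X11/Xlib.h>
#include <cstdio>

int main() {
  Display *dpy = XOpenDisplay(nullptr);
  if (!dpy) {
    fprintf(stderr, "cannot open display\n");
    return 1;
  }

  // Ask GLX for a true-colour, double-buffered visual
  int attrs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 16, None };
  XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attrs);
  if (!vi) {
    fprintf(stderr, "no suitable GLX visual\n");
    return 1;
  }

  // Create a throwaway window with that visual and bind a context to it
  XSetWindowAttributes swa = {};
  swa.colormap = XCreateColormap(dpy, DefaultRootWindow(dpy),
                                 vi->visual, AllocNone);
  Window win = XCreateWindow(dpy, DefaultRootWindow(dpy), 0, 0, 64, 64, 0,
                             vi->depth, InputOutput, vi->visual,
                             CWColormap, &swa);
  GLXContext ctx = glXCreateContext(dpy, vi, nullptr, True);
  glXMakeCurrent(dpy, win, ctx);

  // The same check the GLVideo native code performs
  printf("current GLX context: %p\n", (void *) glXGetCurrentContext());

  glXMakeCurrent(dpy, None, nullptr);
  glXDestroyContext(dpy, ctx);
  XCloseDisplay(dpy);
  return 0;
}

Built with g++ glxcheck.cpp -lGL -lX11, a null pointer here (or a failure to create the context at all) would point at GLX on the virtual display rather than at Processing’s renderer setting.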

Currently it seems that Processing isn’t connecting properly to the accelerated graphics drivers. I also noticed that Processing development is less active now (the last official release was a year ago), which is off-putting when it comes to debugging.

To conclude this post, it has to be said that progress on the project has slowed down. I think this is due to performance-related details, some of which (e.g. the RPi graphics drivers) seem out of my control. Luckily there are many other approaches still available to try, and there are some projects similar to mine that may offer more insight.