Classification in Wekinator

Week Two:

  • Online quiz on learn.gold
  • Wekinator Assignment 2 group work on learn.gold
  • openFrameworks app that responds to a Wekinator classifier on learn.gold
  • Code for the app demonstrated below: link

Reflection:

The third task this week, creating a piece of code that responds to a classifier, was surprising in several ways. As an early experiment, I made a simple program to demonstrate classification: clicking the mouse sends two features (the current X and Y position within the two-dimensional input space) to Wekinator. Setting up inter-process communication, despite being new to me, required less code than I anticipated thanks to the OSC protocol (eased further by Wekinator and ofxOsc). There were two hiccups, however. The first was that Wekinator could not listen on its default port because another instance was already using it (I closed the second Wekinator instance, though I could have opened it on a different port instead). The second was an unequal number of OSC arguments being sent and received: console errors usefully showed that three inputs were being sent while two were expected. I had initially added the third input as if it were the output, but realised how my intuition needed to change for machine learning as opposed to standard problem solving.

Training model

After getting my head around Wekinator and openFrameworks, I illustrated the feature space with graphics to highlight the target decision boundaries for three different classes (seen while adding training examples above). This is meant to show the mutual exclusivity of classification: there are no points which are both red triangles and blue hexagons. After training, the model was evaluated by categorising mouse positions and sending the output back to the demo (triggering the animated response I made below). kNN worked well provided the training data contained no noise, which would otherwise fragment the decision boundaries and cause false classifications. Adding examples did improve accuracy close to the boundaries, but examples could equally be removed without much effect as long as the remaining ones were regularly distributed in the middle of each slice. Next I want to apply classification to higher-dimensional data, such as from a depth camera or the Leap Motion controller, which would make for more interesting and complex experiments. I also wonder what the maximum number of inputs is that Wekinator and/or my hardware can handle, and how pre-processing the data in openFrameworks to reduce its dimensionality might change the model.
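Wekinator does the k-NN classification itself, but its behaviour with noisy or sparse examples is easier to reason about from the algorithm: a query point takes the majority label among its k nearest training examples, so one mislabelled point deep inside another class's region can flip nearby predictions and fragment the boundary. The standalone sketch below illustrates this with the same two-feature, three-class setup as the demo; the struct fields and class numbering are my own, not Wekinator's.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct Example {
    float x, y;   // the two features (normalised mouse position)
    int label;    // class index, e.g. 0 = red triangle, 1 = blue hexagon, 2 = third class
};

// Classify a query point by majority vote among its k nearest training examples.
int knnClassify(const std::vector<Example>& train, float qx, float qy, int k) {
    // Squared Euclidean distance to every training example, paired with its label.
    std::vector<std::pair<float, int>> dists;
    for (const Example& e : train) {
        float dx = e.x - qx, dy = e.y - qy;
        dists.push_back({dx * dx + dy * dy, e.label});
    }
    // Only the k smallest distances need to be in order.
    std::partial_sort(dists.begin(), dists.begin() + k, dists.end());
    std::vector<int> votes(3, 0);  // three classes, as in the demo
    for (int i = 0; i < k; ++i) votes[dists[i].second]++;
    return static_cast<int>(std::max_element(votes.begin(), votes.end()) - votes.begin());
}
```

Because only the nearest neighbours vote, removing examples from the interior of a class's region barely moves the boundary, which matches what I observed when thinning out the training set.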

Model running

Step 3 code:

Code references (OSC sending and listening):

[1] Wekinator example (Simple_Mouse_2Inputs)

[2] Wekinator example (Wekinator_Squirrel_Rotator)


For Data and Machine Learning for Creative Practice (IS53055A)