
Swarm-led Image Processing


The first swarm intelligence (SI) algorithm to be described was stochastic diffusion search (SDS). I heard from its author that current research may prove SDS to be Turing complete. If so, any given algorithm could in principle be executed within it, which is thought-provoking even if presumably impractical for many computational tasks. SI can also be used to create new media, tangential to academic research.

Algorithmic art has an obscure history, but it seems to be an emerging movement. I think over-predictability has been a constraint on digital art; however, chaos and randomness can be simulated by using many agents/iterations in the algorithmic process. Andy Lomas’ morphogenesis-themed Cellular Forms, involving tens of millions of particles processed in parallel, certainly doesn’t lack expression. The algorithmically generated structures are so complex that they couldn’t be made in any other (traditional) way.

Inspired by algorithmic research and creativity, I finally implemented Dispersive Flies Optimisation (DFO) in openFrameworks, using sample C++ code kindly shared by Josh Hodge. This produced my first visual demonstration, below:

DFO Experiment

The apparent vertical and horizontal lines are caused by dispersing the agents in just one dimension at a time (a visual effect I liked). I played with the number of agents and the disturbance threshold, noticing perceptible effects which I tried to accentuate in the moving clips below:

[Instagram clips shared by unsignedint (@untitledart)]
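For context, the core of the DFO update step is small: each fly compares its two ring-topology neighbours, drifts towards the fitter one while being pulled by the swarm’s best fly, and with some probability (the disturbance threshold) has a dimension reset at random. Below is a minimal sketch of one generation, not my exact code: it assumes a 2D population and a matching fitness vector (lower is better) as in the snippets further down, and the variable names are my own.

// one DFO generation over a 2D population in [-1, 1]^2
// (requires <algorithm>; 'population' and 'fitness' as in the snippets below)
size_t bestIndex = std::min_element( fitness.begin(), fitness.end() ) - fitness.begin();

for ( size_t i = 0; i < population.size(); ++i )
{
    // pick the fitter of the two ring-topology neighbours
    size_t left  = ( i + population.size() - 1 ) % population.size();
    size_t right = ( i + 1 ) % population.size();
    size_t nb    = ( fitness[left] < fitness[right] ) ? left : right;

    for ( int d = 0; d < 2; ++d )
    {
        if ( ofRandom( 1.f ) < disturbanceThreshold )
            // disturbance restarts one dimension at a time, which is what
            // draws out the vertical/horizontal streaks
            population[i][d] = ofRandom( -1.f, 1.f );
        else
            // drift towards the best neighbour, pulled by the swarm's best fly
            population[i][d] = population[nb][d]
                             + ofRandom( 1.f ) * ( population[bestIndex][d] - population[i][d] );
    }
}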

To create the animations above, source photos are first ‘glitched’ in Notepad, then loaded into an openFrameworks application for keypoint detection using OpenCV’s ORB implementation:

// load the (pre-glitched) source image
ofImage image;
image.load( "glitch.jpg" );

// detect ORB keypoints via the ofxCv bridge to OpenCV
// (cv::ORB::create() is the OpenCV 3+ replacement for the old
// FeatureDetector::create( "ORB" ) string factory)
std::vector<cv::KeyPoint> keypoints;
cv::Ptr<cv::FeatureDetector> fd = cv::ORB::create();
fd->detect( ofxCv::toCv( image ), keypoints );

A nearest-neighbour search (kNN with k = 1) is used in the fitness function, letting agents seek out the image’s points of interest:

// fitness function: an agent's fitness is its distance
// to the nearest keypoint (lower is better)
for ( auto& agent : population )
{
    float tempFitness = 2.; // cap; also the upper bound used when normalising
    ofVec2f agentPosition = ofVec2f( agent[0], agent[1] );
    for ( auto& keypoint : keypoints )
    {
        // map the keypoint from pixel coordinates into the agents' [-1, 1] space
        ofVec2f keypntPosition = ofVec2f( ofMap( keypoint.pt.x, 0, ofGetWidth(), -1., 1. ),
                                          ofMap( keypoint.pt.y, 0, ofGetHeight(), -1., 1. ) );
        // keep the smallest euclidean distance found so far
        float distance = keypntPosition.distance( agentPosition );
        if ( distance < tempFitness )
            tempFitness = distance;
    }
    // normalise fitness to [0, 1]
    fitness.push_back( ofMap( tempFitness, 0., 2., 0., 1. ) );
}

Random numbers and sine functions are used elsewhere for added effect, such as a gradually and periodically changing disturbance threshold:

// oscillate the threshold smoothly between 0.0001 and 0.2
// (float division; integer division would make it step every 10 frames)
disturbanceThreshold = ofMap( sin( ofGetFrameNum() / 10.f ), -1., 1., 0.0001, 0.2 );

Overall the test ‘works’, but it isn’t doing anything advanced, and my combination of SI with computer vision (CV) is very rudimentary. Considered as a task of locating an image’s regions of interest, there are many improvements to be made, such as using more than one keypoint in the kNN fitness (k > 1), which would make the search space more complex; a sketch of that idea follows.
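As a hedged illustration of that improvement (not something I have implemented): the fitness could average the distances to the k nearest keypoints instead of taking only the closest, for example using std::nth_element. The function and its name are hypothetical, and it assumes the keypoints have already been mapped into the agents’ [-1, 1] space:

#include <algorithm>
#include <numeric>

// hypothetical fitness using the k nearest keypoints
// (k = 1 reduces to the nearest-neighbour version above)
float knnFitness( const ofVec2f& agentPosition,
                  const std::vector<ofVec2f>& keypointPositions, size_t k )
{
    std::vector<float> distances;
    distances.reserve( keypointPositions.size() );
    for ( const auto& kp : keypointPositions )
        distances.push_back( kp.distance( agentPosition ) );

    if ( distances.empty() )
        return 1.f; // no keypoints: worst possible fitness

    k = std::max<size_t>( 1, std::min( k, distances.size() ) );
    // partition so the k smallest distances sit at the front
    std::nth_element( distances.begin(), distances.begin() + k, distances.end() );

    // average distance to the k nearest keypoints, normalised as before
    float sum = std::accumulate( distances.begin(), distances.begin() + k, 0.f );
    return ofMap( sum / k, 0., 2., 0., 1. );
}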

There is more learning for me to catch up on, but I am curious about other ways SI could be used. Further experiments could involve solving more well-defined problems, like symmetry detection, and applying SI to entirely different types of data.

For Natural Computing (IS53052A)