The venture of computational creativity interests me because it brings together fields of research that otherwise seem only loosely related. Particle swarm optimisation (PSO) likewise came from the work of a social psychologist and an electrical engineer: James Kennedy and Russell C. Eberhart, respectively.
I will discuss my thoughts on applying PSO in computational creativity, starting with the three general aims described on Wikipedia:
- To construct a program or computer capable of human-level creativity
- To better understand human creativity and to formulate an algorithmic perspective on creative behaviour in humans
- To design programs that can enhance human creativity without necessarily being creative themselves

Source: Computational creativity - Wikipedia…
Understanding an optimisation problem helps when thinking about how best to approach solving it. This post follows the process of analysing a maximisation scenario and implementing a genetic algorithm to solve it. Chromosomes in the scenario are encoded with the structure:
ch = {x0, x1, x2, x3, x4, x5, x6, x7}, where each of x0 to x7 can be any digit between zero and nine
Fitness is calculated using the formula:
f(ch) = (x0 + x1) - (x2 + x3) + (x4 + x5) - (x6 + x7)

I: Calculate fitnesses and sort a sample population…

ch2 = {8, 7, 1, 2, 6, 6, 0, 1}    f(ch2) = (8+7) - (1+2) + (6+6) - (0+1) = 23
ch1 = {6, 5, 4, 1, 3, 5, 3, 2}    f(ch1) = (6+5) - (4+1) + (3+5) - (3+2) = 9
ch3 = {2, 3, 9, 2, 1, 2, 8, 5}    f(ch3) = (2+3) - (9+2) + (1+2) - (8+5) = -16
ch4 = {4, 1, 8, 5, 2, 0, 9, 4}    f(ch4) = (4+1) - (8+5) + (2+0) - (9+4) = -19

II: Apply crossovers…

Single point crossover (at the middle) on the best two chromosomes…
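Those steps translate directly into code. Below is a minimal Python sketch of the fitness function and the middle single-point crossover, applied to the sample population above; the function and variable names are my own rather than taken from any actual implementation.

```python
def fitness(ch):
    """f(ch) = (x0 + x1) - (x2 + x3) + (x4 + x5) - (x6 + x7)"""
    return (ch[0] + ch[1]) - (ch[2] + ch[3]) + (ch[4] + ch[5]) - (ch[6] + ch[7])

def single_point_crossover(parent_a, parent_b, point=4):
    """Swap the tails of two chromosomes at the given point (the middle by default)."""
    return parent_a[:point] + parent_b[point:], parent_b[:point] + parent_a[point:]

# Sample population from the worked example above
population = [
    [6, 5, 4, 1, 3, 5, 3, 2],  # ch1
    [8, 7, 1, 2, 6, 6, 0, 1],  # ch2
    [2, 3, 9, 2, 1, 2, 8, 5],  # ch3
    [4, 1, 8, 5, 2, 0, 9, 4],  # ch4
]

# I: calculate fitnesses and sort (best first)
ranked = sorted(population, key=fitness, reverse=True)

# II: single point crossover (at the middle) on the best two chromosomes
child_a, child_b = single_point_crossover(ranked[0], ranked[1])
```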
The knapsack problem deals with a set of items we want to put in a bag, each with an associated weight and value. Crucially, the bag has a weight capacity that the selected items can’t exceed. The solution is therefore the subset of items that fits in the bag with the optimum ‘profit’. A few variations of the problem exist (and even more approaches to solving them), but this post is about applying the genetic algorithm metaheuristic to 0-1 knapsack:…
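As a rough illustration of how the encoding can be set up (a sketch of my own, not code from the post), a candidate solution can be a bit string with one gene per item, and a simple fitness function can total the selected items' value while rejecting overweight selections; the item list, capacity and zero-fitness penalty below are all assumed for the example.

```python
import random

# Hypothetical item set as (weight, value) pairs, and an assumed bag capacity
items = [(12, 4), (2, 2), (1, 2), (1, 1), (4, 10)]
capacity = 15

def fitness(chromosome):
    """Total value of the selected items, or 0 if the weight capacity is exceeded."""
    weight = sum(w for gene, (w, _) in zip(chromosome, items) if gene)
    value = sum(v for gene, (_, v) in zip(chromosome, items) if gene)
    return value if weight <= capacity else 0

# A chromosome holds one bit per item: 1 = packed, 0 = left out
chromosome = [random.randint(0, 1) for _ in items]
print(chromosome, fitness(chromosome))
```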
The first problem I wanted to focus on in my nature-inspired metaheuristics project was circle packing in a circle, which aims to fit N unit circles within a larger circle as tightly as possible. It seems that optimal solutions which leave no circles loose (moveable without overlap) are known for just 25 values of N. From what I found, N = 91 is the largest. [?]
Above: Set of optimal circle packings with loose circles coloured pink (source)…
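To make the problem constraints concrete (my own sketch, not tied to any particular packing above), a candidate packing can be represented by the centres of the N unit circles; validity only requires that no two centres are closer than two units, and the quality can be estimated from the furthest centre, assuming the enclosing circle is centred at the origin.

```python
from itertools import combinations
from math import hypot

def enclosing_radius(centres):
    """Radius of the smallest origin-centred circle containing all unit circles."""
    return max(hypot(x, y) for x, y in centres) + 1.0

def is_valid_packing(centres, tolerance=1e-9):
    """True if no two unit circles overlap (every pair of centres is at least 2 apart)."""
    return all(hypot(ax - bx, ay - by) >= 2.0 - tolerance
               for (ax, ay), (bx, by) in combinations(centres, 2))

# Trivial example: three unit circles touching in a row
centres = [(-2.0, 0.0), (0.0, 0.0), (2.0, 0.0)]
print(is_valid_packing(centres), enclosing_radius(centres))  # True 3.0
```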
The Travelling Salesman Problem (TSP) is an optimisation problem in algorithmic research based on a simple scenario:
- A salesman must travel between N cities
- Cities are linked to other cities with a cost (e.g. journey distance or duration)
- The salesman does not care about order, just that they visit all cities

To solve TSP, our aim is to search for the lowest-cost path connecting every node (city) in the graph (map).…
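Here is a minimal sketch of how a candidate route can be scored, assuming a symmetric cost matrix and a closed tour (whether the salesman returns to the start varies between formulations); the four-city costs are made up for illustration.

```python
# Hypothetical symmetric cost matrix for four cities
costs = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def tour_cost(tour, costs):
    """Total cost of visiting the cities in the given order and returning to the start."""
    return sum(costs[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

print(tour_cost([0, 1, 3, 2], costs))  # 10 + 25 + 30 + 15 = 80
```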
Stochastic diffusion search (SDS) is a swarm intelligence algorithm that fundamentally differs from DFO (implemented last week) in the following ways:
- Agents’ features define a hypothesis in the search space: a set of ‘microfeatures’ which together form a candidate solution
- In the ‘test phase’, random microfeatures of the hypotheses are used for partial evaluation instead of calculating the fitness of the whole
- Boolean (true/false) information is shared directly between random agents during the ‘diffusion phase’, unlike DFO agents’ results, which are continuous

When each negatively-evaluated agent communicates with a single random agent in the diffusion phase of SDS, it either adopts the partner’s positively-evaluated hypothesis or (if the partner is also negative) sets a new random hypothesis.…
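To make the two phases concrete, here is a minimal sketch using the classic string-search illustration of SDS, where each hypothesis is a candidate start position of a model string inside a larger search string; the task, agent count and iteration count are my own choices for the example rather than anything from the post.

```python
import random

search_space = "xxxhelloxxhelpxxhello"
model = "hello"
n_agents = 20
n_positions = len(search_space) - len(model) + 1

# Each agent holds a hypothesis (a candidate start position) and an activity flag
hypotheses = [random.randrange(n_positions) for _ in range(n_agents)]
active = [False] * n_agents

for _ in range(30):
    # Test phase: each agent checks one random microfeature (character) of its hypothesis
    for i, hyp in enumerate(hypotheses):
        feature = random.randrange(len(model))
        active[i] = search_space[hyp + feature] == model[feature]

    # Diffusion phase: each negatively-evaluated agent polls a single random agent
    for i in range(n_agents):
        if not active[i]:
            partner = random.randrange(n_agents)
            if active[partner]:
                hypotheses[i] = hypotheses[partner]            # adopt the partner's hypothesis
            else:
                hypotheses[i] = random.randrange(n_positions)  # set a new random hypothesis

# The position most agents cluster on is the best candidate match
print(max(set(hypotheses), key=hypotheses.count))
```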
The first swarm intelligence algorithm described was stochastic diffusion search (SDS). I heard from its author that current research on SDS may prove it to be Turing complete. If this is the case, any given algorithm could be executed within it, which is thought-provoking even if presumably impractical for many computational tasks. Swarm intelligence may also be used to create new media, tangential to academic research.
Algorithmic art has an obscure history but it seems to be an emerging movement.…
Swarm intelligence occurs in structures composed of simple ‘agents’ interacting at a local level, where intelligent behaviour emerges from the system as a whole. Foraging ants, flocking birds and swarming flies are examples of this behaviour that have each inspired different optimisation algorithms. Dispersive Flies Optimisation (DFO) is based on how flies hover over food sources in nature [1].
In each iteration of DFO, each of the randomly initialised agents updates its position based on the feature vectors of its two neighbours (left and right in the array), the best agent in the swarm, and its own.
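Below is a sketch of a single position update for one fly, following the description above; the disturbance threshold (delta here), the uniform random restart and the exact update form follow my reading of the original DFO paper and should be treated as assumptions rather than the post's implementation.

```python
import random

def update_fly(flies, i, fitness, lower, upper, delta=0.001):
    """One DFO-style position update for fly i (minimisation).

    Each dimension moves from the better of the two ring neighbours towards the
    swarm's best fly; with probability delta the dimension is restarted at random.
    """
    left, right = flies[(i - 1) % len(flies)], flies[(i + 1) % len(flies)]
    neighbour_best = min(left, right, key=fitness)
    swarm_best = min(flies, key=fitness)

    new_position = []
    for d, x in enumerate(flies[i]):
        if random.random() < delta:
            # Disturbance: restart this dimension uniformly within the search bounds
            new_position.append(random.uniform(lower[d], upper[d]))
        else:
            new_position.append(neighbour_best[d] + random.random() * (swarm_best[d] - x))
    return new_position
```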
The old saying “There ain’t no such thing as a free lunch” suggests that you can’t get something for nothing. It also lends its name to the topic of this post: the “no free lunch” theorems (NFLT). The NFLT revolve around the difficult problem of optimisation: finding the best result out of a set of alternatives.
As an example, I previously came across optimisation for computer vision in the task of separating an image’s foreground from its background (known as ‘background subtraction’).