# James Kennedy & Russell Eberhart

Kennedy and Eberhart present the origins of particle swarm optimization (PSO) through the simulation of a simplified social behavior: the search for food by a flock of birds or a school of fish. The approach is related to evolutionary algorithms, which draw on natural mechanisms to solve problems of all kinds. In PSO, the idea is to make a swarm of candidate solutions evolve towards an optimal solution. These are stochastic algorithms, i.e. they iteratively apply random processes. PSO relies only on basic mathematical operations and is therefore inexpensive in terms of memory and computing speed. The algorithm is based on simple concepts and rules, yet it solves complex problems: to find an optimal solution, one only needs to understand the question correctly and then identify the right parameters to apply to the model. The authors summarize this situation elegantly: "This algorithm [...] allows wisdom to emerge rather than trying to impose it".

The authors point out that, until now, the methods used to simulate flocks of birds have been based on sets of rules that maintain an optimal distance between individuals during their journey. They also observe that, according to sociobiologist E. O. Wilson [20], a school of fish takes advantage of its large number of members to find food systematically, thanks to individual discoveries and shared experiences. In other words, a member solves the problem of "finding food" through social and cognitive factors.

### Summary of the PSO’s development

Kennedy and Eberhart remind us that the particle system was originally used to simulate the foraging of a flock of birds, using rules that are now familiar to us. At each iteration of the original algorithm, the members of the flock copied the speed of their neighbors, following Reynolds' "velocity matching" rule. This rule homogenizes the speed and direction of movement of the flock, giving it an unrealistic behavior; to overcome this problem, a random variation of the agents' speed and direction was introduced to perturb the flock.

A new approach, called the "Cornfield Vector", was then introduced to make the swarm gravitate and converge towards a point symbolizing food. The principle is very close to the final PSO algorithm: at each iteration, the individuals compare their position with that of the target using a cost function. A cost function measures how far a candidate is from an optimum, to be minimized or maximized; here we minimize the distance separating the particles from their objective. Each particle remembers its best position so far (Pbest), and this information is pooled across the flock to obtain the best global position (Gbest). The best personal position comes from cognitive factors specific to the particle, while the best global position is acquired socially. The particle then moves a random distance in the general direction of the best global position while staying in the vicinity of its best personal position; keep in mind that these values may change at each iteration. It is also possible to apply weights to the Pbest and Gbest information to moderate their respective contributions to the particle's movement. Using this algorithm, the flock converges towards the target as expected.

Then a last element was added to the algorithm, called "Acceleration by Distance", which consists in varying the range of a particle's displacement as a function of its distance from the target. Indeed, if the particle is far from the objective, we want it to move quickly to cover a larger distance; as it gets closer to the target, it must slow down to explore smaller areas and refine the best global position. This example of foraging is a particle swarm optimization because Gbest can be interpreted as the solution to the "feeding" problem. Indeed, the particle swarm combined with the algorithm presented earlier allows the position of the food source to be approximated.
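"Acceleration by Distance" can be sketched as a step-size schedule (a hypothetical sketch; the parameter names `near`, `far` and `radius` and their values are invented, the paper does not give concrete numbers):

```python
def step_size(pos, target, near=1.0, far=10.0, radius=50.0):
    """Scale a particle's move with its distance to the target:
    roughly `far` units when far away, shrinking towards `near`
    units close to the target (all values are illustrative)."""
    d = ((pos[0] - target[0]) ** 2 + (pos[1] - target[1]) ** 2) ** 0.5
    scale = min(d / radius, 1.0)  # 1 far from the target, 0 on top of it
    return near + (far - near) * scale
```

A particle far from the target thus takes large exploratory steps, while one near the target takes small refining steps.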

### Multidimensional search and Neural Networks

We know from the previous example that particle swarm optimization is able to solve a linear problem in two dimensions. The authors indicate that it is possible to handle N-dimensional problems simply by changing the dimensions of the position vectors. Indeed, our position vectors have for the moment two components, the x and y positions of the particle in space, and we can easily imagine adding a z component to move to a 3-dimensional problem. Kennedy and Eberhart then introduce an application of the PSO: the training of a neural network (NN). The algorithm was used to train a NN to solve the XOR logic function, using the solutions obtained by a particle swarm evolving in a 13-dimensional search space, allowing the neural network to achieve its goal with a success rate of 95%.
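The paper does not give the network code; below is a minimal Python sketch under assumed details. A 2-3-1 sigmoid network is one way to obtain a 13-dimensional weight space (3 × (2 weights + 1 bias) for the hidden layer, plus 3 weights + 1 bias for the output); the swarm size, iteration count, and coefficient values are common defaults, not the paper's settings:

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def sig(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x1, x2):
    """2-3-1 feedforward net: weights w[0:9] feed the hidden layer,
    w[9:13] feed the output unit (13 weights in total)."""
    h = [sig(w[3*i] * x1 + w[3*i + 1] * x2 + w[3*i + 2]) for i in range(3)]
    return sig(w[9] * h[0] + w[10] * h[1] + w[11] * h[2] + w[12])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def cost(w):
    """Summed squared error over the four XOR patterns."""
    return sum((forward(w, a, b) - t) ** 2 for (a, b), t in XOR)

# Minimal PSO over the 13-dimensional weight space.
n, dim = 20, 13
pos = [[random.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=cost)[:]
initial_cost = cost(gbest)

for _ in range(200):
    for k in range(n):
        for i in range(dim):
            r1, r2 = random.random(), random.random()
            vel[k][i] = (0.7 * vel[k][i]
                         + 1.5 * r1 * (pbest[k][i] - pos[k][i])   # cognitive
                         + 1.5 * r2 * (gbest[i] - pos[k][i]))     # social
            pos[k][i] += vel[k][i]
        if cost(pos[k]) < cost(pbest[k]):
            pbest[k] = pos[k][:]
    gbest = min(pbest, key=cost)[:]
```

After the loop, `gbest` holds the best weight vector found by the swarm, i.e. the trained network.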

We now know that particle swarms use very simple displacement rules to converge towards a global extremum, hence the notion of optimization: an optimization is the search for an extremum of a function. In our simulation, we use a swarm of particles to discover the location of a randomly placed migration point. At the beginning, the particles appear at random positions in the search space. Each particle's position is a potential solution to our problem, and we try to make these solutions converge to the global minimum, which is the location of our goal, the migration point.

In our example, we know the position of this point, so we can trivially use a simple distance test to make the algorithm evolve. On the other hand, this global extremum will not be known when optimizing the parameters of a neural network, which we will present later.

As explained in the previous chapter, the search for an optimal position is based on three concepts: the velocity of the particle, the best position reached individually (Pbest), and the best global position (Gbest).

### Flocking Simulation using a cellular automaton

### Simulation's controls:

- 'p' to start or stop the simulation
- 's' to play the simulation step by step
- 'd' to show or hide the recorded path of each particle
- 'r' to reinitialize the simulation

### A few details on the PSO’s Algorithm

Each particle starts its search by moving in a random direction, then tests whether its new position is closer to the migration point than before. If so, this position becomes the best personal position reached, the Pbest. The swarm then updates, if necessary, the best global position Gbest based on the best positions reached by individual particles. Finally, the velocity, and thus the direction, is updated according to the following formula:

*Formula: Update of the particle's velocity*

$$v_i \leftarrow v_i + c_1 r_1 \,(Pbest_i - x_i) + c_2 r_2 \,(Gbest - x_i)$$

where $r_1$ and $r_2$ are random numbers drawn in $[0, 1]$ at each iteration, and $c_1$ and $c_2$ weight the cognitive and social contributions.

and then we can update the position of the particles:

*Formula: Update of the particle's position*

$$x_i \leftarrow x_i + v_i$$

Let’s break down these formulas with the figure below as support; we can see that the velocity formula is composed of three terms:

*PSO's search strategy*

A swarm of particles subjected to this algorithm is then able to converge towards a common migration point.
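The two update formulas can be sketched as a small Python function (an illustrative sketch, not the authors' code; the inertia weight `w` is a later refinement by Shi and Eberhart, and the coefficient values are common defaults, used here so the sketch converges without a velocity clamp):

```python
import random

def update_particle(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO step per dimension: previous velocity (damped by the
    inertia weight w), a cognitive pull towards Pbest, and a social
    pull towards Gbest; the position then moves by the new velocity."""
    new_pos, new_vel = [], []
    for i in range(len(pos)):
        r1, r2 = random.random(), random.random()
        v = (w * vel[i]
             + c1 * r1 * (pbest[i] - pos[i])   # cognitive term
             + c2 * r2 * (gbest[i] - pos[i]))  # social term
        new_vel.append(v)
        new_pos.append(pos[i] + v)
    return new_pos, new_vel
```

Iterating this update with a fixed Gbest pulls a particle into an ever smaller neighborhood of that point, which is exactly the convergence behavior described above.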