Complex Adaptive Systems
Complex adaptive systems are a recurring theme in fields such as biology, computer science, neuroscience, and economics. While such systems may manifest themselves in very different ways, from the organizational behavior of ants to cars in traffic to patterns of neuronal activation, the underlying concept is the same: simple rule-based parts interact to produce complex emergent behavior.
An adaptive system contains variables which are subject to change. These changes occur as the system "adapts" to a problem, attempting to find a good solution. There are many kinds of optimization strategies for finding the best values for all the variables in the system. The quantified ability of a system to solve a problem is its "fitness". Simulated annealing, the strategy demonstrated in our two simulations, recognizes that in a "fitness landscape" (the set of all possible system states and their fitnesses), a system can get stuck in a sub-optimal state (see "balldropper" for a more detailed explanation). Simulated annealing causes the system to take random jumps to other locations in the landscape, in an attempt to find a better solution.
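The core loop behind simulated annealing can be sketched in a few lines of Python. This is a generic illustration, not the code of either Breve simulation; the names (`fitness`, `step`, `temperature`, `cooling`) and the toy landscape are our own choices.

```python
import math
import random

def simulated_annealing(fitness, state, step, temperature=10.0,
                        cooling=0.95, iterations=1000):
    """Minimal annealing loop that tries to maximize `fitness`."""
    best = current = state
    for _ in range(iterations):
        candidate = step(current)  # a random jump to a nearby state
        delta = fitness(candidate) - fitness(current)
        # Always accept improvements; accept a worse state with a probability
        # that shrinks as the temperature "cools", so early jumps can escape
        # sub-optimal regions while late ones settle into a good solution.
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate
        if fitness(current) > fitness(best):
            best = current
        temperature *= cooling
    return best

# Toy one-dimensional fitness landscape with a local peak near x = 1.35
# and a higher global peak near x = -1.47.
fitness = lambda x: -x ** 4 + 4 * x ** 2 - x
best = simulated_annealing(fitness, 1.35, lambda x: x + random.uniform(-1, 1))
```

Starting at the local peak, the early high-temperature jumps give the search a chance to cross the dip and discover the higher peak, which a strictly greedy search never would.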
We use the multi-agent simulator Breve to demonstrate two examples of such systems, and allow manipulation within the simulation so that questions about the composition of such systems can be explored. The user also has direct access to the code through Breve's setup, so that making changes to the simulation is easy.
To try the following simulations, launch Breve and open the file of the simulation you want to run.
The balldropper is a very simple demonstration of the concept behind the optimization strategy known as "simulated annealing." Think of a fitness landscape in terms of a terrain of hills and valleys. Now, imagine a ball somewhere in that landscape, representing the actual system with its specific variable values. In many strategies, the ball can roll around the landscape (figuratively speaking), but the problem is that once it finds a valley, there is no reason for it to look for a deeper valley (even though that valley may be a better solution), because it would first have to go uphill and lose much of its fitness.
The simulation gives a simple example of getting balls out of their valleys so they can explore other parts of the terrain. Randomness is introduced by way of an "earthquake", which knocks balls out of their stable wells and relocates them to other valleys.
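A rough numeric analogue of the balldropper, assuming a one-dimensional terrain; `roll_downhill` and `earthquake` are hypothetical stand-ins for Breve's physics and for clicking the spheres, not the simulation's actual code.

```python
import random

def height(x):
    # Toy terrain with a shallow valley near x = 1.35 and a deeper
    # (better) one near x = -1.47.
    return x ** 4 - 4 * x ** 2 + x

def roll_downhill(x, step=0.01, iterations=2000):
    """Greedy rolling: move only to a neighbouring point that is lower.
    Once the ball reaches the bottom of any valley, it stops for good."""
    for _ in range(iterations):
        for candidate in (x - step, x + step):
            if height(candidate) < height(x):
                x = candidate
    return x

def earthquake(x, magnitude):
    """Kick the ball a random distance; the large sphere in the
    simulation corresponds to a larger magnitude."""
    return x + random.uniform(-magnitude, magnitude)

ball = roll_downhill(1.0)                      # settles in the shallow valley
kicked = roll_downhill(earthquake(ball, 3.0))  # may land in the deeper valley
```

Without the earthquake the ball starting at x = 1.0 stays in the shallow valley forever; a big enough kick can drop it into the basin of the deeper valley, from which it rolls to the better solution.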
Click on the small sphere for a small earthquake and on the large sphere for a large one.
This simulation demonstrates a simplified annealing strategy used to find a path through a random set of obstacles. The "state" of the system begins randomly (the balls find a random path through the obstacles). When one of the small or large spheres at the top right is clicked, each of the balls is given a velocity of random intensity that sets them all moving. The resulting path may be very different from the starting one or relatively similar, but because the balls are linked, not only do they influence each other when they begin moving, there will also always be a path, since the links cannot be broken.
One can observe in this simulation that while the number of stable states is nearly infinite, even a small random jump in state space can find a much better (shorter) path through the obstacles. Or the path may get worse before it gets better; accepting such temporary setbacks is a fundamental feature of the strategy.
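The mechanics of the jump can be sketched numerically. This is a simplification under our own assumptions, not the Breve code: the path is a list of 2-D points, `jump` plays the role of clicking a sphere, and `relax` is a crude stand-in for the physics settling afterward (with no obstacles, so the chain simply tightens).

```python
import math
import random

def path_length(points):
    """Total length of the chain of linked balls -- the quantity a
    better path makes smaller."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def jump(points, intensity):
    """Kick every interior ball by a random offset. The endpoints stay
    anchored and neighbours stay linked, so a path always remains."""
    interior = [(x + random.uniform(-intensity, intensity),
                 y + random.uniform(-intensity, intensity))
                for x, y in points[1:-1]]
    return [points[0], *interior, points[-1]]

def relax(points, rounds=50):
    """Settle the chain: each interior ball repeatedly moves halfway
    toward the midpoint of its two linked neighbours."""
    pts = list(points)
    for _ in range(rounds):
        for i in range(1, len(pts) - 1):
            (ax, ay), (bx, by) = pts[i - 1], pts[i + 1]
            x, y = pts[i]
            pts[i] = ((x + (ax + bx) / 2) / 2, (y + (ay + by) / 2) / 2)
    return pts

# A zigzag path from (0, 0) to (4, 0): a stable but far-from-optimal state.
path = [(0, 0), (1, 2), (2, -2), (3, 2), (4, 0)]
shaken = relax(jump(path, 1.0))
```

Immediately after the kick the path is typically longer (worse) than before; only once the linked balls settle does it become shorter, which mirrors the "worse before better" behavior seen in the simulation.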
Think of systems where this strategy might be very useful, and ones where it might not work well. What would be the best way to implement this strategy? Is it best for the random jumps to be large or small? Should all the variables be affected during a jump, or just some of them?
|© 2005 Indiana University - All Rights Reserved.|