# 4. Self-Organization and Emergent Complex Behavior

# By Luis M. Rocha

Lecture notes for I485/I585/I601: Biologically Inspired Computing. School of Informatics, Indiana University.

*Self-organization* is usually understood as the process by which systems of many components tend to reach a particular state, a set of cycling states, or a small volume of their state space (attractor basins), with no external interference. This *attractor behavior* is often recognized at a different level of observation as the spontaneous formation of well-organized structures, patterns, or behaviors from random initial conditions (emergent behavior). The systems used to study this behavior are referred to as dynamical systems or *state-determined systems*, since every trajectory is perfectly determined by its initial state. Dynamical systems are traditionally studied over continuous variables, using sets of discrete-time difference equations (such as the logistic map) or continuous-time differential equations (such as models of the motion of bodies under gravitational forces). However, self-organization is more easily studied computationally with *discrete dynamical systems* (DDS) such as Boolean networks or cellular automata.

The state-determined transition rules of DDS are interpreted as the laws of some physical system [Langton, 1986], where the state of each component depends on the states of its neighbor (or related) components at the previous time instant. DDS possess a large number of components or variables, and thus very large state spaces. However, when started with random initial conditions (note: not from special initial conditions), they tend to converge, or self-organize, into small sets of attractor states in this space. Attractors may be chaotic, in which case the emergent behavior is sensitive to initial conditions. But even chaotic attractors tend to be restricted to small volumes of the state space (e.g. chaotic in only a subset of its dimensions), so we still consider the convergence of a dynamical system into a chaotic basin of attraction to be a form of self-organization.
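This convergence is easy to observe in code. The sketch below iterates an elementary cellular automaton (Wolfram's rule 110) on a small periodic lattice from random initial conditions; the lattice size, rule, and seeds are arbitrary choices for illustration. Every trajectory falls into a short attractor cycle, tiny compared to the 2^12 available states:

```python
import random

def eca_step(state, rule):
    """One synchronous step of an elementary cellular automaton (Wolfram
    numbering) on a periodic lattice: each cell's next value depends only
    on its own and its two neighbors' current values."""
    n = len(state)
    return tuple(
        (rule >> (4 * state[(i - 1) % n] + 2 * state[i] + state[(i + 1) % n])) & 1
        for i in range(n)
    )

def attractor_length(state, rule):
    """Iterate the state-determined rule until some state recurs;
    the recurring stretch of the trajectory is the attractor cycle."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = eca_step(state, rule)
        t += 1
    return t - seen[state]

rng = random.Random(3)  # arbitrary seed
n = 12                  # arbitrary small lattice
lengths = [attractor_length(tuple(rng.randrange(2) for _ in range(n)), 110)
           for _ in range(30)]
print(sorted(set(lengths)))  # a few short cycle lengths out of 2**12 possible states
```

Because the state space is finite and the rule deterministic, every trajectory must eventually revisit a state, so cycle detection by storing visited states always terminates.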

Since material systems are accurately modeled by dynamical systems, it follows from the observed attractor behavior [Wuensche and Lesser, 1992] of these systems that there is a propensity for matter to self-organize (e.g., [Kauffman, 1993]). In this sense, matter is described by the (micro-level) dynamics of state transitions and the observed (emergent or macro-level) attractor behavior of self-organization. In general, attractors manifest or emerge as global patterns that involve many of the components of the dynamical system, and are not easily describable in terms of their state-determined transition rules. For instance, the simple transition rules of the automata in Conway's *Game of Life* cannot describe what the emergent patterns of "blinkers" and "gliders" are. These emergent patterns pertain to a different, complementary level of observation of the same system [Pattee, 1978].
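The "blinker" illustrates this gap between levels: the rules below only count live neighbors cell by cell, yet a three-cell bar emerges as a period-2 oscillator. A minimal sketch (the set-of-live-cells representation is just one convenient implementation choice):

```python
from itertools import product

def life_step(live):
    """One synchronous step of Conway's Game of Life over a set of live cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {c for c, k in counts.items() if k == 3 or (k == 2 and c in live)}

blinker = {(1, 0), (1, 1), (1, 2)}   # vertical three-cell bar
after_one = life_step(blinker)       # becomes a horizontal bar
after_two = life_step(after_one)     # back to the vertical bar
print(after_two == blinker)          # True: a period-2 attractor
```

Nothing in `life_step` mentions "blinkers"; the oscillator is a description at the emergent, macro level of observation.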

The process of self-organization is often interpreted as the evolution of order from random initial conditions. However, notice that this evolution is limited to the specific attractor landscape of a given dynamical system. Unless its parameters are changed (structural perturbation), no dynamical system can escape its own attractor landscape. This limitation will become more apparent when we approach the problem of self-replication.

# Life on the Edge of Chaos?

Another interesting aspect of the behavior of dynamical systems concerns the concept of *bifurcation* or *phase transition*. When the parameters of a dynamical system are changed gradually, its trajectories and attractors typically change gradually as well; however, for certain critical parameter values sudden changes in the dynamic behavior can occur (e.g. from a steady state to a limit-cycle attractor). It is at these critical points that complicated spatio-temporal organization may emerge. Close to bifurcations, the system also becomes increasingly sensitive to changes in parameters and initial conditions. It is often proposed that bifurcations offer a selection mechanism [Prigogine, 1985], since a dynamical system may respond very differently to very small changes in its parameters.
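These sudden changes can be observed numerically with the logistic map mentioned earlier, x[t+1] = r·x[t]·(1 − x[t]): as r is increased the attractor's period abruptly doubles at critical values. A rough sketch (the transient length, tolerance, and sampled r values are arbitrary choices):

```python
def logistic(r, x):
    return r * x * (1.0 - x)

def attractor_period(r, x0=0.2, transient=2000, max_period=64, tol=1e-6):
    """Discard a transient, then find the smallest p with x[t+p] ~= x[t]
    over a stretch of the orbit; None means no short cycle was found
    (a chaotic or very long attractor)."""
    x = x0
    for _ in range(transient):
        x = logistic(r, x)
    orbit = [x]
    for _ in range(2 * max_period):
        orbit.append(logistic(r, orbit[-1]))
    for p in range(1, max_period + 1):
        if all(abs(orbit[t + p] - orbit[t]) < tol for t in range(max_period)):
            return p
    return None

for r in (2.8, 3.2, 3.5, 3.9):
    print(r, attractor_period(r))  # periods 1, 2, 4, then None (chaotic)
```

Between r = 2.8 and r = 3.2 the steady state gives way to a two-cycle, and by r = 3.5 to a four-cycle: small parameter changes that cross a bifurcation produce qualitatively different attractor behavior.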

However, if the parameter space is divided by many bifurcations, the system becomes increasingly sensitive to initial conditions and small parameter changes; in this sense its behavior becomes chaotic. It has been argued that the most useful behavior lies instead in between full order and chaos. Langton [1990, 1992] has shown (for one-dimensional cellular automata) that it is in this range of behavior that dynamical systems can carry out the most complicated computations. Computation here is used in a loose sense, not as the rate-independent symbolic manipulation of Turing machines, but meaning that information exchange between elements of these systems is maximized in this range. In other words, Langton showed that the highest correlations among the automata in a cellular lattice occur in this regime.
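Langton parameterized this range with his λ parameter: the fraction of entries in a CA rule table that map to a non-quiescent state (λ near 0 gives frozen order, λ near its maximum gives chaos, with complex behavior clustering in between). A minimal sketch for elementary (two-state, three-neighbor) cellular automata in Wolfram's numbering; Langton's original study used CAs with more states and larger neighborhoods:

```python
def lambda_parameter(rule):
    """Langton's lambda for an elementary CA: the fraction of the 8
    rule-table entries that map to a non-quiescent (non-zero) state."""
    outputs = [(rule >> i) & 1 for i in range(8)]
    return sum(outputs) / 8

for rule in (0, 250, 30, 110):
    print(rule, lambda_parameter(rule))  # 0.0, 0.75, 0.5, 0.625
```

Note that λ alone does not determine the behavior class: rule 30 (λ = 0.5) is chaotic, while rule 110 (λ = 0.625) supports long-lived interacting structures; λ is a statistical indicator, not an exact classifier.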

Kauffman [1993] likewise hypothesized that "living systems exist in the [ordered] regime near the edge of chaos, and natural selection achieves and sustains such a poised state". This hypothesis is based on Packard's [1988] work showing that when natural selection algorithms are applied to dynamical systems, with the goal of achieving higher discriminative power, the parameters generally change so as to lead these systems into this transitional area between order and chaos. This idea is very intuitive, since chaotic dynamical systems are too sensitive to parameter changes: a single perturbation or mutation (structural perturbation) leads the system into a completely different behavior (sensitivity to damage). By contrast, ordered systems are more resilient to damage, and a small parameter change will usually result in a small behavior change, which is ideal for smooth adaptation. However, even though very ordered systems can adapt by accumulating useful variations (because damage does not propagate widely), they may not be able to 'step out' of their particular organization in the presence of novel demands from their environment.

It is here that systems at the edge of chaos were thought to enter the scene; they are not as sensitive to damage as chaotic systems, but they are still more sensitive than fully ordered systems. Thus, most mutations cause only minor structural changes and can accumulate, while a few others may cause major changes in the dynamics, enabling a few dramatic changes in behavior. These characteristics of simultaneous mutation buffering (to small changes) and dramatic alteration of behavior (in response to larger changes) are ideal for evolvability [Conrad, 1983, 1990]. However, many of the real gene networks that have been successfully modeled with dynamical systems (e.g. the network of segment polarity genes in Drosophila melanogaster [Albert and Othmer, 2003]) exist in a very ordered regime, being very robust to structural changes [Chaves, Albert and Sontag, 2005; Willadsen and Wiles, 2007; Kauffman et al., 2003]. Still, other genetic regulatory network models do operate close to criticality [Balleza et al., 2008]. It appears that evolution favors ordered, very robust regimes of self-organization in gene networks (at least those involved in highly conserved regulatory pathways), though there is also evidence of near-critical regimes for increased evolvability.

# Complex Self-organization

We have studied several computational systems said to be self-organizing in the sense described above. The discrete logistic equation exhibits several distinct regimes of behavior depending on its parameter *r*. For *r* ≤ 3, the system converges to a single-point steady state (independently of its initial value). As *r* increases beyond 3, the system undergoes a series of bifurcations, meaning that it changes its attractor behavior: first from a steady state into a two-state limit cycle, and then progressively doubling the number of states in the attractor limit cycle as *r* increases. This period-doubling cascade accumulates near *r* ≈ 3.57, beyond which (up to *r* = 4) the attractor becomes chaotic, interspersed with windows of periodic behavior. In the chaotic range, the slightest change in the initial value will lead to a completely different trajectory (though a similarly chaotic one). The system goes from being independent of initial conditions to strongly dependent on them, though, in each range, the attractor behavior of the equation is the same for random initial conditions. Thus, we can see the logistic equation as self-organizing.
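The contrast between the two regimes can be checked directly by tracking two trajectories started a tiny distance apart. A small sketch (the parameter values, initial condition, and perturbation size are arbitrary illustrations):

```python
def logistic(r, x):
    return r * x * (1.0 - x)

def gap_history(r, x0, eps=1e-10, steps=60):
    """Iterate two trajectories started eps apart and record their gap."""
    a, b = x0, x0 + eps
    gaps = []
    for _ in range(steps):
        a, b = logistic(r, a), logistic(r, b)
        gaps.append(abs(a - b))
    return gaps

print(gap_history(2.8, 0.2)[-1])   # ordered regime: the gap shrinks toward zero
print(max(gap_history(4.0, 0.2)))  # chaotic regime: the gap blows up to order one
```

In the ordered regime both trajectories are pulled onto the same fixed point, erasing the perturbation; in the chaotic regime the gap roughly doubles per step until it saturates at the size of the state space.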

But there is another aspect of the logistic equation that should be understood. In all of its ranges of behavior, from full order to full chaos, the system is (fairly) reversible. That is, one can always obtain the specific initial condition which caused some behavior by formally running the system backwards. This means the system is deterministic in both temporal directions. Formally, this means the state transition function is invertible. (This is actually only true if we decide to work on the lower half of its state space: since the logistic equation is a quadratic function, it always has two possible solutions for the previous value of the current state, and these values are symmetric about the middle point of the state space.) Some resist calling this kind of reversible system self-organizing because it is not sufficiently complex. They reason that if a system is self-organizing, then when run backwards it should be self-disorganizing, that is, it should lead to random initial conditions, or to an incomplete knowledge of possible initial states. Indeed, complexity is typically equated with the inability to describe the behavior of a system from the behavior of its components or predecessors. On this view, we ought to reserve the term self-organization for those irreversible systems whose behaviors must be evaluated statistically. The logistic map shows "hints" of this backwards self-disorganization, but we can still effectively work out its backward trajectory to an initial condition by restricting the quadratic solutions to half of its state space.
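Solving r·x·(1 − x) = y for x makes the two symmetric preimages explicit: x = (1 ± √(1 − 4y/r))/2. A short sketch (the parameter and state values are arbitrary): choosing the '−' branch recovers any predecessor that lay in the lower half of the state space.

```python
import math

def logistic(r, x):
    return r * x * (1.0 - x)

def preimages(r, y):
    """The two solutions x of r*x*(1-x) = y, symmetric about 1/2."""
    d = math.sqrt(1.0 - 4.0 * y / r)
    return (1.0 - d) / 2.0, (1.0 + d) / 2.0

r, x0 = 3.7, 0.3            # x0 chosen in the lower half of the state space
y = logistic(r, x0)         # run forward one step
lo, hi = preimages(r, y)    # lo recovers x0; hi is the mirror image 1 - x0
print(lo, hi)
```

Iterating `preimages` and keeping the lower branch runs the system backwards step by step, which is exactly the restricted invertibility discussed above.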

*Random Boolean Networks* are much more complicated than this. They are completely deterministic, since a given state will always lead to the same next state (state-determinacy); however, we cannot usually know exactly what the predecessor of a current state was. Systems like this are usually studied with statistical tools. Even though the rules that dictate the next state of each component are simple and deterministic, the overall behavior of the system is generally too complicated to predict, and statistical analysis has to be performed. For instance, Kauffman has shown that when *K* = 2 (the number of inputs to each node), his networks will have on average √N basins of attraction with a length of √N states; if the output of one node is switched to the other Boolean value (perturbation), the trajectory returns to the same cycle 85% of the time, while the remaining 15% of the time it will "jump" into a different basin of attraction. *Cellular automata* (CA) fall into this same category of deterministic, irreversible self-organization.
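A minimal K = 2 random Boolean network can be sketched in the spirit of Kauffman's experiments (the network size, wiring, and seeds below are arbitrary illustrations, and no attempt is made to reproduce his exact statistics): sampling many random initial states finds only a handful of attractors, and single-bit perturbations of an attractor state sometimes return to the same cycle and sometimes jump to another basin.

```python
import random

def make_rbn(n, k, seed):
    """A random Boolean network: each node reads K random inputs
    through a random K-input Boolean function (a truth table)."""
    rng = random.Random(seed)
    ins = [tuple(rng.randrange(n) for _ in range(k)) for _ in range(n)]
    fns = [tuple(rng.randrange(2) for _ in range(2 ** k)) for _ in range(n)]

    def step(s):
        # State-determined transition: each node's next value depends only
        # on its inputs' current values.
        return tuple(fns[i][sum(s[j] << b for b, j in enumerate(ins[i]))]
                     for i in range(n))
    return step

def attractor_of(step, s):
    """Iterate until a state repeats; return the attractor cycle as a set."""
    seen = {}
    while s not in seen:
        seen[s] = len(seen)
        s = step(s)
    start = seen[s]
    return frozenset(t for t, i in seen.items() if i >= start)

step = make_rbn(n=12, k=2, seed=7)
rng = random.Random(0)
attractors = {attractor_of(step, tuple(rng.randrange(2) for _ in range(12)))
              for _ in range(100)}
print(len(attractors))  # far fewer attractors than the 4096 possible states

# Single-bit perturbation of an attractor state: does the trajectory return?
att = next(iter(attractors))
s = next(iter(att))
returns = sum(attractor_of(step, s[:i] + (1 - s[i],) + s[i + 1:]) == att
              for i in range(12))
print(returns, "of 12 flips return to the same attractor")
```

Note what the code cannot do cheaply: computing a state's *predecessors* would require searching all 2^12 states, which is why such networks are characterized statistically rather than by running them backwards.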

# Further Readings and References

Albert, R., H.G. Othmer [2003]. "The topology of the regulatory interactions predicts the expression pattern of the segment polarity genes in Drosophila melanogaster." *J Theor Biol*. 223(1):1-18.

Balleza, E., E.R. Alvarez-Buylla, A. Chaos, S. Kauffman, I. Shmulevich, and M. Aldana [2008]. "Critical Dynamics in Genetic Regulatory Networks: Examples from Four Kingdoms." *PLoS ONE* 3(6): e2456.

Chaves, M., R. Albert, and E.D. Sontag [2005]. "Robustness and fragility of Boolean models for genetic regulatory networks." *J Theor Biol.* 235(3):431-49.

Conrad, M. [1983]. *Adaptability*. Plenum Press.

Conrad, M. [1990]. "The geometry of evolution." *BioSystems* 24: 61-81.

Forrest, S. (Ed.) [1990]. *Emergent Computation*. MIT Press/North-Holland. Special issue of *Physica D*, Vol. 42.

Kauffman, Stuart A. [1993]. *The Origins of Order: Self-Organization and Selection in Evolution*. Oxford University Press.

Kauffman, S., C. Peterson, B. Samuelsson, and C. Troein [2003]. "Random Boolean Network Models and the Yeast Transcriptional Network." *Proceedings of the National Academy of Sciences USA* 100(25): 14796-14799.

Klir, George J. [1991]. *Facets of Systems Science*. Plenum Press.

Langton, C. [1990]. "Computation at the edge of chaos: phase transitions and emergent computation." In Forrest [1990], pp. 12-37.

Langton, C. [1992]. "Life at the edge of chaos." In *Artificial Life II*, C. Langton (Ed.), pp. 41-91. Addison-Wesley.

Packard, N. [1988]. "Adaptation toward the edge of chaos." In *Complexity in Biological Modelling*, S. Kelso and M. Shlesinger (Eds.).

Pattee, Howard H. [1978]. "The complementarity principle in biological and social structures." *Journal of Social and Biological Structures* 1: 191-200.

Prigogine, I. [1985]. "New Perspectives on Complexity." In *The Sciences and Praxis of Complexity*. The United Nations University, pp. 107-118. Reprinted in Klir [1991].

Willadsen, K. and Wiles, J. [2007] "Robustness and state-space structure of Boolean gene regulatory models". *Journal of Theoretical Biology*, **249** (4), 749-765.

Wuensche, A., and M. Lesser [1992]. *The Global Dynamics of Cellular Automata: An Atlas of Basin of Attraction Fields of One-Dimensional Cellular Automata*. Reading, MA: Addison-Wesley.