This issue of CC-AI attempts to present a combined view of current ideas concerning the understanding and development of systems with self-referential and evolutionary characteristics. In short, both of the traditional paradigms of Artificial Intelligence (AI), leading to the construction of symbolic or dynamic models of the mind-brain, are considered insufficient to tackle the understanding of cognitive systems. The modelling of biological systems is believed to pose the same sort of problem for the field of Artificial Life (ALife).
Traditionally, AI has been associated with a computational approach to cognition. Its models involve systems that manipulate symbols which stand for external observables, a representational scheme. This manipulation follows computational rules and has led to the idea of mind as a program. In this approach, the particular substrate which implements symbols and rules is irrelevant as long as the desired computation is achieved. It is usually referred to as the Symbolic or Computational Paradigm.
The alternative view, brought back to life in the 1980s with the revival of earlier cybernetic ideas, uses physics as the basis of its models. It attempts a dynamic explanation of cognition by building models inspired by the dynamics of the brain. The functions that the symbolic paradigm attempts to represent and compute directly are seen in this alternative approach as emergent properties of the dynamics. It is usually referred to as a Connectionist, Emergent, Dynamic, Subsymbolic, or Self-Organizing Paradigm:
"One of the most interesting aspects of this alternative approach in cognitive science is that symbols, in their conventional sense, play no role. In the connectionist approach, symbolic computations are replaced by numerical operations _ for example, the differential equations that govern a dynamical system. These operations are more fine grained than those performed using symbols; in other words, a single, discrete symbolic computation would, in a connectionist model, be performed as a result of a large number of numerical operations that govern a network of simple units. In such a system, the meaningful items are not symbols; they are complex patterns of activity among the numerous units that make up the network." [Varela, Thompson, and Rosch, 1991 page 99]
The early 1990s were prolific in debates on the differences between the Symbolic and Connectionist paradigms [e.g. Ramsey, Stich, and Rumelhart, 1991 (eds); Dinsmore, 1992 (ed)]. Today, in most of AI, these distinctions have boiled down to the opportunistic utilization of techniques from both camps according to particular aspects of designed applications. Applied AI journals and magazines will just as likely publish the latest advances in expert systems driven by logic engines (classical, modal, or fuzzy) as the latest neural network, cellular automata, or genetic algorithm schemes. In fact, hybrid architectures tend to be the hottest items found in these publications.
On a more conceptual level, it has been generally accepted that looking at cognition with symbolic or dynamic tools is a question of finding a comfortable level of description for the properties we desire to model. Those working within a symbolic paradigm are most often preoccupied with models of natural language and human reasoning and categorization processes, while those utilizing dynamic tools focus on pattern recognition and dynamic classification models.
However, in the wake of the success of the dynamic paradigm, the very interesting problem of emergent properties has developed. Are emergent properties nothing but a different way to look at the complicated dynamics responsible for this emergence, or do these properties establish a genuinely novel and distinct organization? At first glance, and if we desire to maintain the non-dualist, materialist tenets of modern science, we have no choice but to accept the pragmatic, reductionist stance expressed in the first interpretation of emergent properties: everything is ultimately explained by a lower dynamic level, and distinctions between levels exist only in the eye of the beholder who chooses to work on a particular level of description. The question is thus one of explanation and reference.
In the realm of cognitive science and AI these views have often led to the idea that mind and consciousness do not really exist [Stich, 1983; Churchland, 1986], even if our experience seems to indicate otherwise. Naturally, this divorce between science and human experience has been observed and criticized by unsatisfied cognitive scientists [Varela, Thompson, and Rosch, 1991; Searle, 1992] who consider such explanations of cognition unsatisfactory.
The present compilation is based on the idea that emergent properties, though embedded in some lower-level dynamics, present a truly novel organization which is not completely explainable by the lower-level descriptions. In particular, if some properties are genuinely emergent, their distinctive nature is not based on some observer's particular interest; rather, they present the dynamics with a truly distinct domain, and thus establish a two-level organization to which the emergent properties can be said to be self-referent. Furthermore, some contributors will ponder the next logical question: why, and in which circumstances, do systems observing more than one distinct organizational level, namely a symbolic and a dynamic level, have selective advantage and evolutionary potential?
Several authors were invited to present essays regarding the understanding and modelling of such autonomous organizations with self-referential attributes. The articles range from the discussion of relevant philosophical aspects of such systems and the laying out of conceptual avenues for improved modelling, to the more specific, yet related, implementation of artificial systems observing particular autonomous features.
In "Evolving Self-Reference: Matter, Symbols, and Semantic Closure", Howard Pattee presents in detail his Semantic Closure Principle as a required self-referential relation between the physical and symbolic aspects of material organizations with open-ended evolutionary potential: only material organizations capable of performing autonomous classifications in a self- referential closure, where matter assumes both physical and symbolic attributes, can maintain functional value required for evolution. The function of symbols is described as the communication of the material aspects they symbolize from one material structure to another; thus, symbols are not disentangled from matter, and syntax is not separated from semantics as in the extreme case of formal systems. Additionally, in light of this principle, a purely material, reductionist, conceptual approach to modelling systems with these self-referential properties is rendered ineffective, as it overlooks the symbolic traits of matter necessary to describe function or significance. The proposed alternative is based on a complementary model where the dualaspects of matter, physical and symbolic, can be usefully described.
In "The Mind-Brain Problem and the Physics of Reductionism", Robert Rosen presents cognitive systems (or the mind-brain problem) as a subset of what is referred to as complex systems. These systems observe non-reducible, or emergent, properties which cannot be explained by an algorithmic list of its constituents, and are rather described by impredicativities within themselves, that is, semantic self-referents entirely dependent on the context created by causal loops within one such system itself. A material reductionist approach, in the spirit of Church's Thesis, denies emergent properties by definition, since all systems' properties must be found in the context-independent algorithmic component analysis, hence it is rendered ineffective in dealing with complex systems which abound in self-referents. Similarly, the conclusion is that syntax alone cannot describe these complex systems as it destroys their closed causal loops; any approach to conceptualize life or the mind-brain problem must be one in which syntax and semantics coexist in an unfractionable closure and a relational framework based on the complememtarity of different predicative (formalizable) fragments forming an impredicative (self-referential) whole is used.
In "Universal Grammar", Charles Henry searches for the underlying organizational principle of natural language in the biological and evolutionary principles of the brain, as opposed to the traditional linguistic theory which aims to study grammar as the set of syntactic rules of language separated from semantics. Biological and cognitive life is seen as the ability to transform matter into symbols contained in evolvable closures where meaning is entailed. Symbols are not however abstract entities, but material carriers utilized by other material organizations; in the case of natural language, the brain utilizes the brain by appropriation of its own neurology. The idea is not to reduce language to the neuro-chemistry of the brain, but rather to use language as a window into the organization of the brain itself and living organisms in general. For this purpose, the concept of metaphor is presented not as an aberrant aspect of language, but the very means to discover new knowledge, rather than a mere recombination of elements: not simply an aspect of language but an aspect of life itself. The necessity of symbols is found in the necessity of stable meanings in the fractal-like organization of the brain, with metaphor the process of creating further meanings. If life is the ability to utilize material and symbolic aspects of matter, then existence is imbedded in a surprising symbolic and metaphorical value.
In "Artificial Semantically Closed Objects", Luis Rocha takes Heinz Von Foerster's concept of eigen-values or eigen-states as precisely those stable meanings that matter needs to symbolize in order to form evolvable closures. Von Foerster's conceptualization of a minimum memory element, a cognitive tile, is presented as a semantically closed building block for complex systems (in Rosen's sense). These constructions can be organized into context and time- dependent tessellations, in other words, into autonomous classifiers; simulations of such classifiers are also proposed. In order to establish what is understood by autonomous classification, the Church-Turing Thesis, the issue of computability, and the reductionist approach are discussed vis a vis a functional emergent theory of modelling based on the semantic closure principle.
In "Computability, Self-Reference, and Self-Amendement", George Kampis discusses theproblems of circularity, recursiveness, and formal theories of self-reference. Based on this analysis, self-modification is sugested as a primary concept from which self-reference derives. A biologically motivated mechanism for achieving both is outlined. The conclusion reached is that no computable model of complete self-reference is possible, though constructive definitions offer a framework where self-modification can be formulated. A general causal model for such processes is outlined, and its application to the autonomous definition of semantic relations in symbol systems is discussed as a possible avenue to model a certain kind of "symbol grounding".
In "Metalogues: An Abridge of a Genetic Psychology of Non-Natural Systems", Pedro Medina Martins conceptualizes a computational system whose goal is to simulate the mental behavior of infants. This system is based on the extension of Gordon Pask's Conversation Theory through the utilization of Zadeh's Fuzzy Set and Approximate Reasoning theories. A Principle of Wholes' Completion is presented as the underlying idea for this development of Pask's autonomous associative processes and it aims at the system's own generation of arbitrary internal symbols standing for aggregates (Pask's Coherencies) of related lower level objects, e.g., simulations of physiological states. Medina Martins proposes the stretching of computational tools to a possible close simulation of autonomous systems, again observing a certain kind of "symbol grounding".
In "As If Time Really Mattered: Temporal Strategies for Neural Coding of Sensory Information", Peter Cariani presents an extensive overview of current brain research on potential strategies for temporal neural processing, as well as possible avenues for the implementation of artificial neural networks with these characteristics. The importance of coding is stressed as a functional usage of signs by other material organizations to effect a perceptual discrimination or classification. The significance of temporal coding in particular turns on the observation that the tuning of resonances in elements in a network can introduce new temporal pattern primitives, thus increasing the classification space. This amounts to the introduction of new observables in a model, in other words, the emergence of new functional properties. The utilization of temporal neural processing is critical to the understanding and modelling of evolvable autonomous classifiers.
I would like to thank Gertrudis Van de Vijver for her immediate interest in this project, and Howard Pattee for his day-to-day support while compiling this edition. I further thank all the contributors for their prompt interest in participating, and Jon Umerez for most valuable comments, revisions, and ideas. Finally, I would like to especially thank Chuck Henry for his friendship and, along with Pedro Medina Martins, Gordon Pask, Gerard de Zeeuw, Heinz Von Foerster, and George Klir, for encouraging me to pursue this line of research.
Luis Mateus Rocha
References to the Introduction
Churchland, Patricia S. (1986). Neurophilosophy. MIT Press.
Dinsmore, J. (ed.) (1992). The Symbolic and Connectionist Paradigms: Closing the Gap. Lawrence Erlbaum Associates, New Jersey.
Ramsey, William M., S.P. Stich, and D.E. Rumelhart (eds.) (1991). Philosophy and Connectionist Theory. Lawrence Erlbaum, New Jersey.
Searle, J.R. (1992). The Rediscovery of the Mind. MIT Press.
Stich, S. (1983). From Folk Psychology to Cognitive Science: The Case Against Belief. MIT Press.
Varela, F., E. Thompson, and E. Rosch (1991). The Embodied Mind: Cognitive Science and Human Experience. MIT Press.