Ned Gardiner, Coweeta LTER GIS Coordinator University of Georgia
During the month of June 1997, the Santa Fe Institute (SFI) hosted 84 students, researchers, and professionals from around the world for the Tenth Annual Complex Systems Summer School. The SFI pursues interdisciplinary research leading to syntheses in science through on-site research, off-site support of established scholars, and undergraduate, graduate, and post-doctoral programs. Many of its past successes have been in computer science and simulation modeling. The summer program was designed to introduce students from diverse fields to the theories and applications of "complex systems", with specific examples drawn from mathematical, physical, and biological sciences. "Complex systems" loosely describes the set of real and simulated phenomena whose nonlinear, emergent properties are not predictable from the individual behaviors of constituent elements.
SFI invited two graduate students from the LTER network to attend the school in 1997: Heidi Dierssen of McMurdo Dry Valleys, Antarctica, and Ned Gardiner of Coweeta, North Carolina. Their participation represented a small aspect of the SFI’s ecology program; it also provided LTER with increased access to the SFI. Experienced modelers, simulation novices, and pure empiricists will each find value in the summer program at SFI. Here’s why.
The informal educational experience of the SFI Summer School is perhaps its greatest asset. Participants can interact with individuals from many disciplines, including physics, medicine, operations research, computer science, neurobiology, ecology, logic, philosophy, meteorology, and more. The socio-cultural diversity also makes for interesting interactions outside the classroom.
The curriculum includes:
- Eight week-long lecture series by visiting researchers
- Shorter series or one-time sessions by resident and visiting scholars
The week-long lectures included such diverse topics as geology and geomorphology, the human brain, a crash course in mathematics and statistics, complexity and evolution, genetic algorithms in computer science, and mathematical representations of two-dimensional forms. Brief discussions included spin glasses, computation theory and the evolution of language, demonstrations of nonlinearity from physics, modeling mass extinctions, and biogeography.
Lecturers shared hard-won insights into modeling real systems. The central problem of how and why to create a model raises immediate practical and epistemological hurdles. One must first determine the purpose of the simulation. For example, is the purpose
- Application or validation of established theory (First Principles)
- Developing general concepts from observed patterns (Synthesis)
- Description (Hierarchy and Realism)
Below are examples which demonstrate the interdependence of those goals and some procedural issues to address when formulating simulations.
My first example demonstrates a rationale, based on established theory, for increasing the number of parameters in a previously accepted model in order to obtain accurate descriptions of observed behaviors. Geologist Susan Kieffer has explored, tested, and applied well-established physical theories and principles from fluid mechanics in her study of supercritical flow in constricted rivers, geysers, volcanoes, and eruptions on other planets. Her research has required her to revisit the basic assumptions made by hydraulicists and aerodynamicists when representing their systems mathematically. In hydraulics, it is common to ignore the effects of the compressibility of water. In aeronautics, one ignores the effects of gravity. These simplifications are quantitatively justified in light of the reasonable approximations they have yielded in natural, experimental, and applied settings (Kieffer 1989). The Mach and Froude numbers express the ratio of a flow's velocity to its critical velocity: the speed of sound for gases and the speed of surface gravity waves for liquids, respectively. If a medium exceeds its own critical velocity, then supercritical flow may arise, characterized by
"...a complicated flow field consisting of oblique and normal shocks and mixed regions of subsonic and supersonic flow within the jet. Because the decelerating waves are nonlinear, the jet 'overshoots' ambient conditions, and multiple shock and rarefaction waves are required to achieve the pressure balance." (Kieffer 1989)
In her model of standing waves in the Colorado River, Kieffer showed that one must consider both gravity and compressibility in applications involving supercritical flow. Kieffer further argued that eruptions of Old Faithful geyser and the Mt. St. Helens blast were both supercritical and that models of their behavior must consider compressibility and gravity. More generally, Kieffer demonstrated that unpredictable behaviors emerge when otherwise well-understood general theories are applied to real phenomena.
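The supercritical-flow criterion itself is easy to compute. The sketch below is my own illustration, with made-up flow values rather than Kieffer's data; it flags a river reach as supercritical when its Froude number exceeds 1, and a gas flow as supersonic when its Mach number does.

```python
import math

def mach_number(v, c_sound):
    """Mach number: flow speed relative to the local speed of sound in a gas."""
    return v / c_sound

def froude_number(v, depth, g=9.81):
    """Froude number: flow speed relative to the speed of shallow-water
    gravity waves, sqrt(g * depth)."""
    return v / math.sqrt(g * depth)

# Flow is supercritical when the ratio exceeds 1: disturbances can no
# longer propagate upstream, and standing waves or shocks may form.
# Flow values below are invented for illustration.
rapid = froude_number(v=6.0, depth=1.0)   # fast, shallow reach
calm = froude_number(v=1.0, depth=4.0)    # slow, deep reach
print(rapid > 1.0, calm > 1.0)
```

The same one-line ratio, with the critical velocity swapped from wave speed to sound speed, covers the compressible case, which is why Kieffer could carry intuition between rivers, geysers, and volcanic blasts.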
The second example treats modeling as a tool for synthesis and theory building. Borrowing from electrical engineering, Frank Hoppensteadt has developed an operationally simple, but realistic, representation of the behaviors of interacting neurons, namely the Voltage-Controlled Oscillating Network (VCON) model (Hoppensteadt 1997). Hoppensteadt was a strong advocate of mathematically representing ordinary language models; such "canonical models" (his definition) characteristically describe components throughout a complex system which share a similar function or definition. In this sense, he argued, mathematical representations are meant to facilitate a scientist’s intuition regarding system behavior. Concurring with Hoppensteadt’s approach, Shepherd suggested that modeling focus on realistic, observable suppositions.
In practice this means the relevance of a model for a real biological system depends on the extent to which its subcomponents are not arbitrary, but represent properties that can be tested in the biological system (Shepherd 1990). Shepherd discussed behavior within the nervous system as the expression of emergent properties by "functional units", or hierarchical levels of biological organization (Shepherd et al. 1990; Shepherd 1990). Hierarchy theory, familiar to ecologists, is a convenient simplification and abstraction to the scientist. Neurobiologists have described the nervous system in reasonable detail at many of these abstracted levels. At the cellular and subcellular levels, for example, we now understand firing patterns, ionization-deionization, inhibition, and excitatory feedbacks in neuronal systems. Many have studied brain function at broader physiological hierarchies which may span laterally-connected component systems. At that hierarchical level, one infers the functional roles of distinct areas of the brain. Syntheses in neuroscience have been born from merging reductionist and hierarchical approaches to modeling. The former has provided a mechanistic description of processes; the latter allowed researchers to focus on particular behaviors or properties which emerge out of the complex interactions among entire networks of cells, regions of the brain, and interactions with other organ systems.
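To give a concrete flavor of the VCON idea of neurons as voltage-controlled oscillators, here is a toy pair of coupled phase oscillators, each advancing at its natural frequency plus a voltage-like forcing from its partner. This is my own sketch, not Hoppensteadt's formulation; the coupling form and all parameter values are assumed for illustration.

```python
import math

def simulate_vcon_pair(omega1=1.0, omega2=1.2, coupling=0.5,
                       dt=0.01, steps=5000):
    """Toy pair of coupled phase oscillators in the spirit of VCON:
    each phase advances at its natural frequency plus a nonnegative,
    voltage-like forcing driven by the other unit's phase.
    Simple Euler integration; all parameters are illustrative."""
    th1, th2 = 0.0, 0.0
    for _ in range(steps):
        dth1 = omega1 + coupling * (1.0 + math.cos(th2))
        dth2 = omega2 + coupling * (1.0 + math.cos(th1))
        th1 += dt * dth1
        th2 += dt * dth2
    return th1, th2
```

The appeal of such a canonical form is exactly the one Hoppensteadt argued for: one equation template describes every unit that shares the oscillator function, so intuition built on the template transfers across the network.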
The third example treats the case many ecologists face: behavior observed at a given hierarchical level often cannot be derived from an understanding of each component contributing to it. In this case, geomorphologist Brad Werner argued for descriptive accuracy at one hierarchical level over realism at a lower one. For example, it is nonsensical to model the angular momentum of individual sand grains when studying dune formation over ranges spanning several kilometers. Because fully realistic modeling of natural systems exceeds computational limits, one must at some point be comfortable abstracting and describing emergent behavior per se, without complete understanding of the physical properties underlying that behavior. Werner presented several geomorphological models for rivers, hillslopes, and periglacial and aeolian landforms. Primarily, he reasoned, the researcher should observe the system of interest in close detail. Because geomorphological systems are driven by processes operating across many scales of time and space, he suggested that one formulate a priori the functional relationship between dominant variables and the pattern of interest. To do so, one must determine the appropriate spatial and temporal hierarchy of concern. Next, construct a simple mathematical model to explore whether or not the chosen approach would yield realistic results. By this method, modeling and field observations go hand in hand as hypotheses are developed and refined.
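Werner's prescription (choose the hierarchy, identify the dominant variables, then test the simplest model that could produce the pattern) can be illustrated with a toy one-dimensional sand-slab model. The rules below are my own illustrative assumptions, loosely in the spirit of slab models of aeolian transport rather than any of Werner's published models: slabs are eroded at random sites, hop a fixed distance downwind, and deposit more readily on sandy cells than on bare ones.

```python
import random

def simulate_slabs(width=100, slabs_per_cell=3, hops=20000,
                   hop_len=5, p_deposit_sand=0.6, p_deposit_bare=0.4,
                   seed=42):
    """Toy 1-D slab model: random pickup, fixed downwind hop,
    probabilistic deposition (stickier on sand-covered cells).
    Periodic boundary. All rules and parameters are illustrative."""
    rng = random.Random(seed)
    h = [slabs_per_cell] * width   # slab count per cell
    for _ in range(hops):
        i = rng.randrange(width)
        if h[i] == 0:
            continue               # nothing to erode here
        h[i] -= 1                  # pick up one slab
        j = i
        while True:                # carry it downwind until it sticks
            j = (j + hop_len) % width
            p = p_deposit_sand if h[j] > 0 else p_deposit_bare
            if rng.random() < p:
                h[j] += 1
                break
    return h
```

Note what the model does and does not track: no grain physics, no momentum, only slab counts at the hierarchical level where the dune-scale pattern lives, which is precisely Werner's trade of low-level realism for descriptive accuracy at the level of interest.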
Whether your research is purely theoretical or entirely empirical, modeling can be used
- To synthesize knowledge
- To compare expectations, simulation results, and observable phenomena
The necessary investment of effort by ecologists is simple to justify: simulation offers insights into patterns and behaviors that would otherwise be experimentally elusive. Further, modeling is the principal avenue by which ecologists may draw on parallel research in complex systems by physicists, economists, and computer scientists. If describing complex systems is your goal, I recommend the Santa Fe Institute as host to a rich store of modeling examples and tools.
Hoppensteadt, F. 1997. An Introduction to the Mathematics of Neurons. Cambridge: Cambridge University Press.
Kieffer, S.W. 1989. Geologic nozzles. Reviews of Geophysics 27: 3-38.
Shepherd, G.M. 1990. The significance of real neuron architectures for neural network simulations. In E.L. Schwartz, editor. Computational Neuroscience. Cambridge: MIT Press.
Shepherd, G.M., T.B. Woolf, and N.T. Carnevale. 1990. Comparisons between active properties of distal dendritic branches and spines: implications for neuronal computations. Journal of Cognitive Neuroscience 1:273-286.