
The Concept of Law in Natural, Technical and Social Systems

Published online by Cambridge University Press:  13 May 2014

Klaus Mainzer*
Affiliation:
Lehrstuhl für Philosophie und Wissenschaftstheorie, Director of the Munich Center for Technology in Society, Technische Universität München, Arcisstr. 21, D-80333 München, Germany. E-mail: mainzer@tum.de

Abstract

In the classical tradition, natural laws were considered as eternal truths of the world. Galileo and Newton even proclaimed them as ‘thoughts of God’ represented by mathematical equations. In the Kantian tradition, they became categories of the human mind. David Hume criticized their ontological status and demanded their reduction to habituations of sentiments and statistical correlations of observations. In mainstream twentieth-century science, laws were often understood as convenient instruments only, or even deconstructed in Feyerabend's ‘anything goes’. However, the Newtonian paradigm of mathematical laws and models also seems to extend to the life sciences (e.g. systems biology). Parallel to the developments in the natural sciences, a change of public meaning regarding laws in society can be observed over the last few centuries. In economics, experimental, statistical, and behavioural approaches are favoured. In any case, the ontological basis of laws, sometimes dismissed as ‘Platonism’, seems to be lost. At the beginning of the twenty-first century, the question arises: Are laws still important concepts of science? What is their contemporary meaning and task in different disciplines? Are there already alternative concepts or do laws remain an essential concept of science? In the following, we consider (1) the universal concept of laws; (2) the dynamical concept of laws; (3) their applications in natural and technical systems; (4) their applications in social and economic systems; and finally (5), we emphasize the instrumental concept of laws.

The online version of this article is published within an Open Access environment subject to the conditions of the Creative Commons Attribution licence http://creativecommons.org/licenses/by/3.0/

Copyright © Academia Europaea 2014

1. The Universal Concept of Laws

1.1. Newtonian Concept of Natural Laws

At the beginning of modern times, laws of nature were introduced as axioms of classical mechanics. Isaac Newton (1642–1727) explained physical effects by mechanical forces as causes. In his Philosophiae naturalis principia mathematica (1687), he assumed three axioms as laws of forces (in modern version):

  1. law of inertia (lex inertiae)

  2. F = ma (F = force, m = mass, a = acceleration)

  3. law of interaction (actio = reactio)

His prominent example of a force is gravity. In the Newtonian sense, all natural laws should be derived from these fundamental axioms according to logical-mathematical rules, as in Euclidean geometry (‘more geometrico’).Reference Mittelstrass and Mainzer14 Unlike mathematical axioms, the fundamental axioms of mechanics must be justified by experience. The inference from observations and measurements to general laws was called induction, which, unlike the deduction of predictions and explanations from the general laws, is not a logically rigorous rule. Therefore, from a methodological point of view, the Newtonian laws are, in principle, hypotheses that are justified by overwhelming experience, but not generally verified and true like mathematical propositions. In analytical mechanics (L. Euler, J.L. Lagrange, and so on), laws of causes and effects are reduced to differential equations with solutions under certain initial conditions and constraints. The search for causes is replaced by the computation of solutions.
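
The computation of solutions from force laws can be illustrated with a minimal numerical sketch. The falling body with linear air resistance below is an illustrative example with assumed parameter values and a crude Euler scheme, not a case discussed in the article.

```python
# Minimal sketch: solving Newton's second law F = m*a numerically for a body
# falling under gravity with linear air resistance, using explicit Euler steps.
# All parameter values are illustrative assumptions, not taken from the article.

def simulate_fall(m=1.0, g=9.81, drag=0.1, dt=0.001, t_end=5.0):
    """Integrate dv/dt = F/m with F = -m*g - drag*v (drag opposes the motion)."""
    t, x, v = 0.0, 0.0, 0.0
    while t < t_end:
        force = -m * g - drag * v   # sum of gravity and linear air resistance
        a = force / m               # Newton's second law: a = F / m
        v += a * dt                 # update velocity
        x += v * dt                 # update position
        t += dt
    return t, x, v

if __name__ == "__main__":
    t, x, v = simulate_fall()
    print(f"t={t:.2f} s  x={x:.2f} m  v={v:.2f} m/s")
```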

1.2. Invariance of Natural Laws

Natural laws are assumed to be universal and to hold always and everywhere in nature. In physical terms, they hold independently of spatio-temporal reference systems. In classical mechanics, inertial systems are reference systems for force-free bodies moving along straight lines with uniform velocity (Newton's lex inertiae). Therefore, mechanical laws are preserved (invariant) with respect to all inertial systems moving uniformly relative to one another (Galilean invariance). The spatio-temporal interchange of these uniformly moving reference systems is defined by the Galilean transformations, which satisfy the laws of a mathematical group (the Galilean group). Thus, in a mathematically rigorous sense, the Newtonian laws are universal (invariant) with respect to the Galilean transformation group.

1.3. Symmetries and Conservation Laws

Symmetries of space and time are also defined by invariant transformation groups. For example, homogeneity of space means that the structure of space does not change with respect to spatial translation from one point to another point as a reference system. In other words, no point of space is distinguished. Isotropy of space means that the spatial structure does not change with respect to spatial rotation. In this case, no direction in space is distinguished. According to Emmy Noether's famous proof, geometrical symmetries of this kind imply physical conservation laws. For example, spatial homogeneity implies the conservation of linear momentum, and spatial isotropy implies the conservation of angular momentum. Homogeneity of time means that no point of time is distinguished. In this case, we can derive the conservation law of energy.
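
As a hedged illustration of Noether's connection between symmetry and conservation, the simplest textbook case (assuming a Lagrangian formulation, which the article does not spell out) is translation invariance implying conservation of linear momentum:

```latex
% Simplest special case of Noether's theorem (sketch): if the Lagrangian L(q, \dot{q})
% does not depend on the coordinate q (translation invariance, i.e. homogeneity of
% space in that direction), the Euler-Lagrange equation
%   d/dt (\partial L / \partial \dot{q}) - \partial L / \partial q = 0
% immediately yields a conserved momentum p:
\frac{\partial L}{\partial q} = 0
\quad\Longrightarrow\quad
\frac{\mathrm{d}}{\mathrm{d}t}\,\underbrace{\frac{\partial L}{\partial \dot{q}}}_{p} = 0 .
```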

1.4. PCT-Theorem

Obviously, symmetry and invariance of natural laws have fundamental consequences in the natural sciences.Reference Mainzer7 In twentieth-century physics, these insights led to the important PCT-theorem: laws of classical physics are invariant with respect to the discrete symmetry transformations of parity P, charge C, and time T. Parity P relates to the exchange of left and right orientation in space, which does not violate the natural laws of classical physics. Analogously, charge C means the exchange of positive and negative charge (in general: the exchange of matter and anti-matter). Time T reflects the reversal of the arrow of time from forward to backward, again without violating the natural laws of classical physics. But quantum systems are in general only invariant with respect to the combination PCT. For example, in weak interactions of elementary particles, there are only left-handed neutrinos, but right-handed antineutrinos. This means a violation of parity P. Only the combined PC symmetry holds.

1.5. Global Symmetry of Natural Laws

Global invariance of natural laws means that the form of a natural law is preserved with respect to transformation of all coordinates (e.g. change from a uniformly moving reference system to another one). Examples are Galilean invariance (classical mechanics) and Lorentzian invariance (special relativity). Global invariance can be illustrated by the shape of a sphere that remains unchanged (invariant) after a global symmetry transformation, since all points were rotated by the same angle.

1.6. Local Symmetry of Natural Laws

For a local symmetry, the sphere must also retain its shape when the points on the sphere are rotated independently of one another by different angles. Thereby, forces of tension occur on the surface of the sphere between the points, compensating the local changes and preserving (‘saving’) the symmetry of the sphere. In relativistic space-time, local accelerations of reference systems are equivalent to local gravitation fields, maintaining the invariance (‘covariance’) of the relativistic field equations.

1.7. Unification of Natural LawsReference Mainzer8

In quantum field theory, the strong, weak, and electromagnetic interactions have been reduced to fundamental symmetries. This reduction belongs to a research programme that starts with Newton's unification of Kepler's celestial mechanics and Galileo's terrestrial mechanics in the theory of gravitation (later Einstein's relativistic version), continues with Maxwell's unification of electricity and magnetism into electrodynamics and its relativistic quantum version, quantum electrodynamics, and proceeds to its unification with the theory of the weak interaction and the theory of the strong interaction. The mathematical framework of these unifications is formed by gauge theories, in which the physical forces are determined by the transition from global to local symmetries.

The experimentally confirmed and physically accepted cases include the abelian local gauge theory U(1) of electrodynamics, the non-abelian local gauge theory SU(3) of the strong interaction, and the non-abelian local gauge theory SU(2) × U(1) of the weak and electromagnetic forces. The so-called standard theory aims at the grand unification of the strong, weak, and electromagnetic interactions in the framework of quantum field theory. Einstein's relativistic theory of gravitation can also be characterized as a non-abelian local gauge theory. A final goal would be the unification of all natural forces in a universal force of gigantic power with universal local symmetry – the fundamental interaction to which all observable physical interactions could be reduced. For the grand unification of the strong, weak, and electromagnetic forces, it is mathematically natural to start with the smallest special unitary group SU(5) into which the SU(3)-group of the strong interaction and the SU(2) × U(1)-group of the electroweak interaction (the unification of the electromagnetic and weak interactions) can be embedded. After the grand unification of the weak, strong and electromagnetic forces, gravitation must also be included. In a supersymmetry of all four forces, general relativity theory would have to be combined with quantum mechanics into a single theory.

1.8. Symmetry Breaking of Universal Laws

At the moment, supersymmetry may be only a mathematical model. At least, however, the theories of the weak, strong and electromagnetic interactions with their symmetries and symmetry breaking have been convincingly confirmed by experiments in high-energy laboratories. If we imagine the universe itself as a gigantic high-energy laboratory, the phases of expansion of the universe can also be explained in a physical cosmogony. In particular, a unified quantum field theory of the four fundamental interactions supplies the physical basis for an explanation of the present state of the universe as a consequence of symmetry breakings from a unified original state. Immediately after the Big Bang, the universe was at such a high temperature that a unified symmetry prevailed. In this initial state, all particles were nearly equal and all interactions were indistinguishable with respect to quantum fluctuations. This first epoch of the universe would have to be described by the symmetry of a unified quantum field theory which we still lack. In this epoch, fermions and bosons were still being transformed into one another.

Only upon cooling did this symmetry break apart into ever new subsymmetries, and the individual particles crystallized out in stages. For example, it would be conceivable that after the end of the first epoch, the original symmetry had dissolved into the local subsymmetry of gravitation and the symmetry of the other forces. In the SU(5) symmetry of the weak, strong and electromagnetic forces, quarks and leptons are still transformed into one another. The SU(5) symmetry decays into the SU(3) symmetry of the strong forces and the SU(2) × U(1) symmetry of the electroweak force. Quarks and leptons became identifiable particles. Ultimately, the SU(2) × U(1) symmetry also decays into the SU(2) subsymmetry of the weak interaction and the U(1) subsymmetry of the electromagnetic interaction. Atomic nuclei, electrons, neutrinos, and so on become identifiable particles.

Symmetry breaking is explained by the so-called Higgs mechanism. For example, in the state of SU(2) × U(1) symmetry, the field quanta of the gauge fields are initially massless and of unlimited range. Spontaneous symmetry breaking happens if the vacuum in which the gauge fields propagate is not symmetrical. In this case, scalar field quanta (Goldstone particles) occur which, according to the Higgs mechanism, are absorbed by the gauge fields, thus providing massive gauge bosons. In this sense, the Higgs mechanism yields new massive particles if an asymmetric vacuum state is given.

Returning to the initial unification of all forces, supergravity theory has the great disadvantage that it predicts too many new particles that have not been confirmed. Furthermore, quantum field theories in general seem to have difficulties with divergent and infinite quantities. Procedures of renormalization can eliminate these quantities with great accuracy, but without a theoretical foundation in the principles of quantum theory. These difficulties motivated many physicists to think about deeper substructures of matter than elementary particles. In string theory, these substructures are so-called strings that oscillate and produce elementary particles, as the strings of a violin produce tones. Actually, the oscillations produce energetic packets (quanta), which are equivalent to particle mass according to Einstein's equivalence of energy and mass. In any case, all these research strategies of unification aim at a fundamental universal concept of natural laws.

2. The Dynamical Concept of Laws

2.1. Natural Laws of Dynamical Systems

A dynamical system is characterized by its elements and the time-dependent development of their states. A dynamical system is called complex if many (more than two) elements interact in causal feedback loops generating unstable states, chaos or other kinds of attractors. The states can refer to moving planets, molecules in a gas, gene expressions of proteins in cells, excitations of neurons in a neural net, nutrition of populations in an ecological system, or products in a market system. The dynamics of a system, i.e. the change of system states depending on time, is mathematically described by differential equations. A conservative (Hamiltonian) system, e.g. an ideal pendulum, is determined by the reversibility of the time direction and conservation of energy. Dissipative systems, e.g. a real pendulum with friction, are irreversible.

In classical physics, the dynamics of a system is considered a continuous process. But continuity is only a mathematical idealization. Actually, a scientist has single observations or measurements at discrete time points, which are chosen equidistantly or defined by the measurement devices. In discrete processes, there are finite differences between the measured states and no infinitely small differences (differentials), which are assumed in a continuous process. Thus, discrete processes are mathematically described by difference equations.
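
A minimal sketch of such a difference equation is the logistic map, a standard textbook example (not taken from the article) of a discrete process defined by finite differences rather than differentials:

```python
# Minimal sketch of a difference equation: the logistic map, a standard
# discrete-time model (an illustrative assumption, not the article's example)
# in which the next state is computed from finite differences.

def logistic_map(r=3.9, x0=0.2, steps=20):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

if __name__ == "__main__":
    for n, x in enumerate(logistic_map()):
        print(f"n={n:2d}  x={x:.6f}")
```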

Random events (e.g. Brownian motion in a fluid, mutations in evolution, innovations in an economy) are represented by additional fluctuation terms. Classical stochastic processes, e.g. the billions of unknown molecular states in a fluid, are defined by time-dependent differential equations with distribution functions of probabilistic states. In quantum systems of elementary particles, the dynamics of quantum states is defined by Schrödinger's equation with observables (e.g. position and momentum of a particle) subject to Heisenberg's principle of uncertainty, which allows only probabilistic forecasts of future states.

2.2. Computability of Dynamical Laws

Historically, in the age of classical physics, the universe was considered a deterministic and conservative system. The astronomer and mathematician P.S. Laplace (1814) assumed, for example, the total computability and predictability of nature if all natural laws and initial states of celestial bodies are well known. Laplace's demon expressed the belief of philosophers in the determinism and computability of the world during the eighteenth and nineteenth centuries. Laplace was right about linear and conservative dynamical systems. In general, a linear relation means that the rate of change in a system is proportional to its cause: small changes cause small effects while large changes cause large effects. Changes of a dynamical system can be modelled in one dimension by changing values of a time-dependent quantity along the time axis (time series). Mathematically, linear equations are completely computable. This is the deeper reason why Laplace's philosophical assumption is right for linear and conservative systems.

In systems theory, the complete information about a dynamical system at a certain time is determined by its state at that time. The state of a complex system is determined by more than two quantities. Then, a higher dimensional phase space is needed to study the dynamics of a system. From a methodological point of view, time series and phase spaces are important instruments to study systems dynamics. The state space of a system contains the complete information of its past, present and future behaviour.

At the end of the nineteenth century, H. Poincaré (1892) discovered that celestial mechanics is not completely computable, even if it is considered as a deterministic and conservative system. The mutual gravitational interactions of more than two celestial bodies (the ‘many-body problem’) can be represented by causal feedback loops corresponding to nonlinear and non-integrable equations with instabilities and irregularities. In a strict dynamical sense, the degree of complexity depends on the degree of nonlinearity of a dynamical system.Reference Mainzer9 According to the Laplacean view, similar causes effectively determine similar effects. Thus, in the phase space, trajectories that start close to each other also remain close to each other during time evolution. Dynamical systems with deterministic chaos exhibit an exponential dependence on initial conditions for bounded orbits: the separation of trajectories with close initial states increases exponentially.

Thus, tiny deviations of initial data lead to exponentially increasing computational efforts for future data, limiting long-term predictions, even though the dynamics is in principle uniquely determined. This is known as the ‘butterfly effect’: initial, small and local causes soon lead to unpredictable, large and global effects. According to the famous KAM-Theorem of A.N. Kolmogorov (1954), V.I. Arnold (1963), and J. K. Moser (1967), trajectories in the phase space of classical mechanics are neither completely regular, nor completely irregular, but depend sensitively on the chosen initial conditions.Reference Mainzer10
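
The exponential separation of nearby trajectories can be made concrete with a small numerical sketch. The Lorenz system below uses the classical parameter values, while the crude Euler integration scheme, step size and initial states are illustrative assumptions:

```python
# Sketch of the 'butterfly effect': two Lorenz trajectories started a tiny
# distance apart separate roughly exponentially until the distance saturates
# at the size of the attractor. Classical Lorenz parameters; Euler steps and
# initial states are illustrative assumptions.

import math

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

def separation(steps=3000, delta0=1e-8):
    a = (1.0, 1.0, 1.0)
    b = (1.0 + delta0, 1.0, 1.0)      # tiny deviation in the initial condition
    for n in range(steps):
        a, b = lorenz_step(a), lorenz_step(b)
        if n % 500 == 0:
            print(f"step {n:5d}: separation = {math.dist(a, b):.3e}")

if __name__ == "__main__":
    separation()
```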

2.3. Dynamical Laws and Attractors

Dynamical systems can be classified on the basis of the effects of the dynamics on a region of the phase space. A conservative system is defined by the fact that, during time evolution, the volume of a region remains constant, although its shape may be transformed. In a dissipative system, dynamics causes a volume contraction. An attractor is a region of a phase space into which all trajectories departing from an adjacent region, the so-called basin of attraction, tend to converge. There are different kinds of attractors. The simplest class of attractors contains the fixed points. In this case, all trajectories of adjacent regions converge to a point. An example is a dissipative harmonic oscillator with friction: the oscillating system is gradually slowed by frictional forces and finally comes to rest at an equilibrium point. Conservative harmonic oscillators without friction belong to the second class of attractors with limit cycles, which can be classified as being periodic or quasi-periodic. A periodic orbit is a closed trajectory into which all trajectories departing from an adjacent region converge. For a simple dynamical system with only two degrees of freedom and continuous time, the only possible attractors are fixed points or periodic limit cycles. An example is a Van der Pol oscillator modelling a simple vacuum-tube oscillator circuit.
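
A hedged numerical sketch of such a limit-cycle attractor, using the Van der Pol equation with assumed parameter values and a simple Euler scheme (any nearby initial state converges to the same cycle):

```python
# Sketch of a limit-cycle attractor: the Van der Pol oscillator
#   x'' - mu * (1 - x^2) * x' + x = 0,
# integrated with Euler steps. The value of mu, the initial state and the step
# size are illustrative assumptions.

def van_der_pol(mu=1.0, x=0.1, v=0.0, dt=0.001, steps=50000):
    for _ in range(steps):
        a = mu * (1.0 - x * x) * v - x   # acceleration from the Van der Pol equation
        x, v = x + v * dt, v + a * dt
    return x, v

if __name__ == "__main__":
    print("state after the transient (a point on the limit cycle):", van_der_pol())
```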

In continuous systems with a phase space of dimension n > 2, more complex attractors are possible. Dynamical systems with quasi-periodic limit cycles show a time evolution that can be decomposed into different periodic parts without a unique periodic regime. The corresponding time series consist of periodic parts of oscillation without a common structure. Nevertheless, closely starting trajectories remain close to each other during time evolution. The third class contains dynamical systems with chaotic attractors that are non-periodic, with an exponential dependence on initial conditions for bounded orbits. A famous example is the chaotic attractor of a Lorenz system simulating the chaotic development of weather caused by local events, which cannot be forecast in the long run (butterfly effect).

3. Application in Natural and Technical Systems

3.1. Natural Laws of Self-organization

Laws of nonlinear dynamics do not only exhibit instability and chaos, but also self-organization of structure and order. The intuitive idea is that global patterns and structures emerge from locally interacting elements such as atoms in laser beams, molecules in chemical reactions, proteins in cells, cells in organs, neurons in brains, agents in markets, and so on. Complexity phenomena have been reported from many disciplines (e.g. biology, chemistry, ecology, physics, sociology, economics, and so on) and analysed from various perspectives such as Schrödinger's order from disorder, Prigogine's dissipative structure, Haken's synergetics, Langton's edge of chaos, etc. But concepts of complexity are often based on examples or metaphors only. We argue for a mathematically precise and rigorous definition of local activity as the genesis of complexity, which can be represented by equations of natural laws.

The principle of local activity originated in electronic circuits, but can easily be translated into other, non-electrical homogeneous media. The transistor is a technical example of a locally-active device, whereby a ‘small’ (low-power) input signal can be converted into a ‘large’ (high-power) output signal at the expense of an energy supply (namely a battery). No radios, televisions, or computers can function without locally-active devices such as transistors. For the formation of complex biological and chemical patterns, Schrödinger and Prigogine demanded nonlinear dynamics and an energy source as necessary conditions. But for the exhibition of patterns in an electronic circuit (i.e. non-uniform voltage distributions), the requirement of nonlinearity and an energy source is too crude. In fact, no patterns can emerge from circuits with cells made of only batteries and nonlinear circuit elements that are not locally active.

In general, a spatially continuous or discrete medium made of identical cells interacting with all cells located within a neighbourhood is said to manifest complexity if and only if the homogeneous medium can exhibit a non-homogeneous static or spatio-temporal pattern under homogeneous initial and boundary conditions. The principle of local activity can be formulated mathematically in an axiomatic way without mentioning any circuit models. Moreover, any proposed unified theory on complexity should not be based on observations from a particular collection of examples and explained in terms that make sense only for a particular discipline, say chemistry. Rather it must be couched in discipline-free concepts, which means mathematics, being the only universal scientific language. In that sense, we extend Newton's research programme of natural laws as ‘principia mathematica’ to chemistry, biology, and even technology.

However, in order to keep physical intuition and motivation behind this concept, we start with a special class of spatially-extended dynamical systems, namely the reaction-diffusion equations that are familiar in physics and chemistry. Our first definition of local activity refers to a discretized spatial model that can be easily illustrated by cellular nonlinear networks. These networks are any spatial arrangement of locally-coupled cells, where each cell is a dynamical system that has an input, an output, and a state evolving according to some prescribed dynamical laws, and whose dynamics are coupled only among the neighbouring cells lying within some prescribed sphere of influence centred at the cell's location. In short: the dynamical laws are defined by the state equations of isolated cells and the cell coupling laws (in addition to boundary and initial conditions).

3.2. Natural Laws of Reaction-Diffusion SystemsReference Mainzer and Chua12

In a reaction-diffusion system, each cell located at a three-dimensional grid point with coordinates (α, β, γ) is defined by a state equation $\dot{\mathbf{x}}_{\alpha\beta\gamma} = \mathbf{f}(\mathbf{x}_{\alpha\beta\gamma})$ and a ‘diffusion’ cell coupling law $\mathbf{i}_{\alpha\beta\gamma} = \mathbf{D}\,\nabla^2 \mathbf{x}_{\alpha\beta\gamma}$, where D is a constant diffusion matrix whose diagonal elements (diffusion coefficients) are associated with the state variables of $\mathbf{x}_{\alpha\beta\gamma}$, and $\nabla^2 \mathbf{x}_{\alpha\beta\gamma}$ denotes the discrete Laplacian operator applied to $\mathbf{x}_{\alpha\beta\gamma}$. If we add the cell coupling law to the cell state equation, we obtain the discrete reaction-diffusion equation $\dot{\mathbf{x}}_{\alpha\beta\gamma} = \mathbf{f}(\mathbf{x}_{\alpha\beta\gamma}) + \mathbf{D}\,\nabla^2 \mathbf{x}_{\alpha\beta\gamma}$. If the discrete Laplacian operator is replaced by its limiting continuum version, we get the standard reaction-diffusion partial differential equations (PDEs). According to extensive numerical simulations, discrete reaction-diffusion equations and their associated PDEs have similar qualitative behaviour. For example, various spatio-temporal patterns of Prigogine's Brusselator and the Oregonator PDE have been simulated by the corresponding discrete reaction-diffusion systems.
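
A minimal computational sketch of this discrete scheme on a one-dimensional ring of cells; the local kinetics f and all parameter values are illustrative assumptions, not the article's cell model:

```python
# Minimal sketch of the discrete reaction-diffusion equation
#   x_dot_i = f(x_i) + D * laplacian(x)_i
# on a one-dimensional ring of cells. The toy kinetics f and all parameters are
# illustrative assumptions chosen only to show the scheme.

def step(x, dt=0.01, D=0.1):
    """One Euler step of x_dot_i = f(x_i) + D * (x_{i-1} - 2*x_i + x_{i+1})."""
    n = len(x)
    new = []
    for i in range(n):
        f = x[i] * (1.0 - x[i])                              # toy local kinetics f(x) = x(1-x)
        lap = x[(i - 1) % n] - 2.0 * x[i] + x[(i + 1) % n]   # discrete Laplacian on the ring
        new.append(x[i] + dt * (f + D * lap))
    return new

if __name__ == "__main__":
    cells = [0.0] * 50
    cells[25] = 0.5                 # a single local perturbation that spreads by diffusion
    for _ in range(2000):
        cells = step(cells)
    print(" ".join(f"{c:.2f}" for c in cells[20:31]))
```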

A locally active cell must behave exactly like a transistor: at a cell equilibrium point Q there exists a local (i.e. infinitesimal) input current signal such that, by applying this small information-bearing signal about the DC operating point Q imposed by the battery, we can extract more local (infinitesimal) information-bearing energy δε(Q) at Q over some time interval, at the expense of the battery, which supplies the energy for the signal amplification. A cell is said to be locally passive if there is no cell equilibrium point at which it is locally active. There are well-known sets of necessary and sufficient conditions for testing whether a cell is locally passive. A violation of any one of these conditions makes a cell locally active. Thus, there are constructive procedures for testing local activity.

For reaction-diffusion equations, the parameters must be chosen so as to make the cells locally active, in order to make it possible for complex phenomena such as pattern formation to emerge. These parameters can be derived numerically from the set of local activity criteria. In a so-called linear stability analysis of, for example, Prigogine's dissipative structures, at least one eigenvalue of the Jacobian matrix of f(·) associated with the kinetic laws of the reaction-diffusion equations must have a positive real part for patterns to exist. But this is only one of four independent conditions, each of which is sufficient for local activity. Another one can be satisfied with all eigenvalues having negative real parts. This is the standard operating mode of most useful transistor amplifiers.
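
The eigenvalue condition just mentioned can be checked numerically. The sketch below tests a toy two-variable kinetic law (an assumption for illustration, not the article's cell model) for an eigenvalue of the Jacobian with positive real part at an assumed operating point:

```python
# Hedged sketch: checking one sufficient condition for local activity by asking
# whether the Jacobian of a toy kinetic law f at an equilibrium point Q has an
# eigenvalue with positive real part. Kinetics and parameters are illustrative
# assumptions.

import numpy as np

def jacobian(f, x, eps=1e-6):
    """Numerical Jacobian of f: R^n -> R^n at the point x (forward differences)."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((x.size, x.size))
    fx = np.asarray(f(x))
    for j in range(x.size):
        dx = np.zeros(x.size)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx)) - fx) / eps
    return J

def toy_kinetics(x, b=0.5):
    # Toy two-variable reaction terms with an equilibrium at the origin.
    u, v = x
    return np.array([u - u ** 3 - v, u - b * v])

if __name__ == "__main__":
    Q = np.array([0.0, 0.0])                      # assumed cell equilibrium point
    eigvals = np.linalg.eigvals(jacobian(toy_kinetics, Q))
    print("eigenvalues at Q:", eigvals)
    print("locally active by this criterion:", bool(np.any(eigvals.real > 0)))
```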

The key insight is that locally-active cells with only negative eigenvalues appear to be dead when uncoupled, because each is asymptotically stable, but together they can generate patterns when they are coupled by a dissipative environment. This is counter-intuitive because dissipation should tend to damp out any non-uniformity. A mathematical theory that explains this perplexing phenomenon is what Turing and Smale had sought in vain. Prigogine was so puzzled that he could only remark that a new term, ‘instability of the homogeneous’, needed to be introduced. We claim that the principle of local activity is precisely this missing concept and foundation.

The fundamental new concept here is that if the cells are not locally active, no passive coupling circuits of any complexity can be found to create patterns in the resulting system, which may be chemical, biological, or even technical. The universality of the concept of local activity is manifested clearly by its local activity criteria, which are independent of the nature of the cells. The only assumption we demand is that any system exhibiting complexity be represented by a mathematical model that can be decomposed into elements and coupling laws.

3.3. Natural Laws of Chemistry

In open (dissipative) chemical systems, phase transitions lead to complex macroscopic structures (attractors) of increasing complexity, which are initiated by nonlinear chemical reactions depending on the input and output of substances (control parameters): e.g. the oscillatory patterns of the Belousov-Zhabotinsky (BZ) reaction. Autocatalytic reaction laws in chemistry are mathematically represented by nonlinear differential equations. Chemical oscillations (e.g. the BZ reaction) can be represented by the trajectories of a limit cycle (attractor) in a phase space, as oscillating time series, or as a dynamic process of bifurcation.
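
A hedged sketch of such an autocatalytic chemical oscillation, using the standard Brusselator rate equations with assumed parameter values (an illustrative stand-in, not the BZ kinetics themselves):

```python
# Sketch of a chemical limit-cycle oscillation: the Brusselator rate equations
#   du/dt = a - (b+1)*u + u^2*v,   dv/dt = b*u - u^2*v,
# a textbook autocatalytic model. Parameters a, b and the time step are
# illustrative assumptions (b > 1 + a^2 gives sustained oscillations).

def brusselator(a=1.0, b=3.0, u=1.0, v=1.0, dt=0.001, steps=20000, report=2000):
    for n in range(steps):
        du = a - (b + 1.0) * u + u * u * v
        dv = b * u - u * u * v
        u, v = u + du * dt, v + dv * dt
        if n % report == 0:
            print(f"t={n * dt:5.1f}  u={u:.3f}  v={v:.3f}")

if __name__ == "__main__":
    brusselator()
```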

3.4. Natural Laws of Systems Biology

Since antiquity, scientists and philosophers have found it a hard nut to crack whether life can be reduced to its atomic and molecular constituents or whether it needs some additional holistic assumptions about the whole organism. In modern terms, the question arises whether cellular life can be explained by a mixture of disciplines from physics and chemistry to molecular biology or whether it needs its own unique foundations and methodology. In systems biology, the traditional contradiction between reductionism and holism is resolved by the nonlinear dynamics of complex systems. In the framework of complex systems science, the functional properties and behaviour of living organisms emerge from the nonlinear interactions of their constituents, which are modelled by nonlinear mathematical equations.

3.5. State Spaces in the Life Sciences

In general, the dynamics of complex systems are analysed in state spaces. Components of states are represented by coordinates of state spaces defining points that represent system states: e.g. the state of health of a patient defined by several medical indicators, the chemical states of a cell with particular concentrations of chemical substances, or the genetic states of a cell defined by particular gene expressions. In systems science, state spaces are used to model the attractor dynamics of complex systems.Reference Knights6 The differentiation process of a stem cell is modelled by the orbits of the chemical cellular concentrations. Attractors correspond to new cell types. The transition steps of cellular division can also be illustrated by bifurcation trees with branching points at critical values of chemical concentrations. The transitions in a bifurcation tree correspond to formal transition rules, which can be programmed to computers.

After atomic and molecular self-organization, we observe cellular (genetically-coded) self-organization in nature. Complex cellular systems, e.g. a human embryo, grow in a self-organizing manner by cellular self-replication, mutation, selection, and metabolism according to the genetic codes. Again, this kind of self-organization generates a bifurcation tree with points of instability and branches. It is the well-known evolutionary tree of Darwin with bifurcating points of instability characterized by mutations as random changes of the DNA-code, leading to new species at the bifurcating branches. Selection is the driving force in the branches, generating the well-known variety of life.

Against the background of complexity research, we now consider the paradigm shifts in systems biology. Systems biology integrates the molecular, cellular, organic, human, and ecological levels of life with models of complex systems, which are represented by mathematical (differential) equations. In bioinformatics, mathematics and informatics grow together with biology, in order to explain and forecast the complexity of life and to construct artificial organisms in synthetic biology.

3.6. Differential Equations of Systems Biology

From a methodological point of view, several models can be distinguished.Reference Alon1 The example X + A → 2X describes a simple reaction through which molecules of some species X are reproduced. In this case, the change in the number $N_X$ of molecules of this species is described by the differential equation $\mathrm{d}N_X/\mathrm{d}t = aN_X$. If the number of molecules is increased, then the rate of its synthesis also increases. A simple reaction network can be modelled by a pathway consisting of, for example, six species A, B, C, D, E and F and three reactions with rates $\nu_{A,B}$, $\nu_{B,C}$, and $\nu_{B,D}$. The branch at B shows two independent paths by which B is degraded. The degradation of B and C requires two molecules of E, which are consumed in the process to produce a molecule of F. The corresponding analytical model consists of linear differential equations describing the time rate of change of the reactants and products from the stoichiometry and the rate of advancement of the reaction.
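
For the simplest autocatalytic case above, the rate equation can be integrated in closed form; a brief worked step in the article's notation (assuming a constant rate coefficient a and abundant A):

```latex
% Worked step for the autocatalytic reaction X + A -> 2X (constant rate
% coefficient a, species A assumed abundant):
\frac{\mathrm{d}N_X}{\mathrm{d}t} = a\,N_X
\quad\Longrightarrow\quad
N_X(t) = N_X(0)\,e^{a t},
% i.e. the number of X molecules grows exponentially as long as A is not depleted.
```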

In general, the temporal evolution of the variables is due to the reactions among chemical substances, which change their concentrations in time. For macroscopic descriptions, the change of chemical concentrations is modelled by rate equations for the chemical reactions. They are given by a set of equations $\mathrm{d}x_i/\mathrm{d}t = f_i(x_1, x_2, \ldots, x_n)$ $(i = 1, 2, \ldots, n)$ with state variables $x_1, x_2, \ldots, x_n$. In this case, the state variables at a later time are uniquely determined by the set of variables at a given time. The temporal change of the state is characterized by a flow in the n-dimensional state space. The temporal evolution is illustrated by an orbit in the n-dimensional state space.

3.7. Attractors of Biological Systems

The macroscopic dynamics of concentrations of biochemicals given by the rate equations of chemical reactions are generally dissipative. In this case, information on initial conditions is lost through temporal evolution. This phenomenon can be represented in the corresponding state space. A set of initial conditions of points covers a volume in the n-dimensional state space. During the temporal evolution of all orbits from the points, the volume covered by the points shrinks in time, in contrast to a Hamiltonian system without dissipation. In dissipative systems, an orbit does not necessarily return to the neighbourhood of the starting point. The region within which the orbit recurrently returns to its neighbourhood is called an attractor. An attractor is illustrated by the set of points to which the orbit is attracted as time passes.

The abundance of gene expressions or chemical substances is not completely governed by the rate equations mentioned above, which are deterministic. There can be fluctuations that deviate from the deterministic equations. But even in the case of perturbing fluctuations, there can be a tendency to recover the original state, when the attraction to that state works. Therefore, the attractor can explain why a state is stable against molecular fluctuations.

In the simplest case, the attractor is a fixed point. Sometimes, it is a periodic cycle that leads to regular oscillations in time. In some other cases, the orbit in the state space is on a torus leading to a quasi-periodic oscillation, which consists of combined periodic motions with several frequencies. In the case of strange attractors, the orbit is neither periodic nor a combination of some cycles, but characterized by orbital instability. A small difference in the initial condition of an orbit is amplified exponentially in time. The Lyapunov exponent, measuring the exponential instability, is defined by taking the limit of the initial deviation to zero and of time to infinity.
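
In the usual notation, this verbal definition corresponds to the standard formula (a sketch, with δx(0) the initial deviation between two nearby orbits):

```latex
% Lyapunov exponent (standard definition, sketch): \delta x(0) is the initial
% deviation between two nearby orbits, \delta x(t) its value after time t.
\lambda \;=\; \lim_{t \to \infty}\;\lim_{\lVert \delta x(0) \rVert \to 0}\;
\frac{1}{t}\,\ln\frac{\lVert \delta x(t) \rVert}{\lVert \delta x(0) \rVert}
% \lambda > 0 indicates orbital instability (chaos); \lambda < 0 indicates
% attraction to a stable orbit.
```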

Compatibility between instability and stability is an important feature of biological systems. Diversity in a biological system is made possible by some instability so that the original state is not maintained. But, at least at the macroscopic level, a biological system should be stable. Cells can differentiate, but the collection of cells in an organism should be robust against perturbations. Therefore, the analysis of attractors with orbital instability may give helpful insights for the dynamics of complex biochemical systems.

Systems biology aims at developing models to describe and predict cellular behaviour at the whole-system level. The genome project was still a reductionist research programme with the automatic analysis of DNA-sequences by high speed supercomputers (e.g. 2000 bases per second). The paradigm shift from molecular reductionism to the whole-system level of cells, organs and organisms needs an immense increase of computational capacity in order to reconstruct integrated metabolic and regulatory networks at different molecular levels and to understand complex functions of regulation, control, adaption, and evolution (e.g. the computational metabolic network of E.coli with power law connection degree distribution and scale-free property).

3.8. Biological Complexity and Natural Laws

We yearn for simplifying principles, but biology is astoundingly complex. Every biochemical interaction is exquisitely crafted, and cells contain networks of many such interactions. These networks are the result of billions of years of evolution, which works by making random changes and selecting the organisms that survive. Therefore, the structures found by evolution depend on historical chance and are laden with biochemical detail that requires special description in many respects.

Despite this complexity, scientists have attempted to find generalized principles of biology. Actually, general mathematical laws referring to biological networks and circuits are confirmed by lab experiments and measurements. In systems biology, a cell is considered a complex system of interacting proteins. Each protein is a kind of nanometre-size molecular machine that carries out specific tasks. For example, the bacterium Escherichia (E.) coli is a cell containing several million proteins of 4000 different types. With respect to a changing situation, cells generate appropriate proteins. When damaged, for example, the cell produces repair proteins. Therefore, the cell monitors its environment and calculates the amount of each type of protein that is needed.

3.9. Invariance of Biological Power Laws

In cells, complex metabolic networks of proteins are characterized by a power law degree distribution, which has a low connection degree for most of the nodes, but a very high degree for a few nodes (‘hubs’). The average path length between nodes and hubs is invariant to the network scale (‘scale free’). The scale-free property makes the network robust against random errors, because most errors on the less connected nodes do not affect the network connectivity very heavily. Robust structures are the result of a long evolutionary selection process. Nevertheless, there are structural differences between the metabolic networks of different organisms, which are also a result of a long evolutionary development.

Power law connection degree distribution and the scale-free property only show the local connectivity, not the global network structure. There are networks that may indicate a power law degree distribution, but with several disconnected sub-graphs as well as fully connected ones. In general, metabolic networks are not fully connected, but there are fully connected sub-networks. Fully connected sub-networks are called strong components of a network, in which all metabolites can be converted into each other. In E. coli the largest component is much larger than the other components and is thus called a ‘giant strong component’ (GSC). A GSC may be important for a complex network to be robust and evolvable under changing environmental conditions. With respect to the local activity principle, a GSC is a locally active component dominating the metabolic flux in the whole network of the bacterium.
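
A hedged computational sketch of extracting the giant strong component of a directed network; the random graph below is an illustrative stand-in for a metabolic network, not E. coli data:

```python
# Hedged sketch: identifying the largest strongly connected component
# ('giant strong component') of a directed network with networkx. The random
# directed graph is an illustrative assumption, not real metabolic data.

import networkx as nx

def giant_strong_component(graph):
    """Return the subgraph induced by the largest strongly connected component."""
    largest = max(nx.strongly_connected_components(graph), key=len)
    return graph.subgraph(largest)

if __name__ == "__main__":
    g = nx.gnp_random_graph(200, 0.02, directed=True, seed=1)
    gsc = giant_strong_component(g)
    print(f"nodes in network: {g.number_of_nodes()}, nodes in GSC: {gsc.number_of_nodes()}")
```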

These connectivity structures can be discovered in different types of networks. For example, they were found in the metabolic networks of different organisms as well as in the web page graph, in which web pages represent nodes and hyperlinks represent edges. They seem to be common structures in large-scale networks. Understanding and manipulation of complex metabolic fluxes are important in metabolic engineering of organisms and therapy of metabolic diseases. Therefore, clusters with different degree distributions are analysed to determine their functions in large-scale metabolic networks (e.g. control, regulation of metabolic fluxes). Structural analysis of networks should uncover common laws of different applications with modular organization of different functions in networks.

Systems biology aims at mathematical laws to describe and predict cellular behaviour at the whole-system level. The macroscopic structure of metabolic networks with scale-free and modular organization can only be uncovered by analysis of the whole complex system. Such networks are, obviously, governed by common laws of interacting cellular components, e.g. enzymes and metabolites. With respect to systems theory, they are the blueprints for large-scale networks, which can be applied in the life sciences as well as in technology.

3.10. Computational Power and Natural Laws of Systems Biology

A remarkable paradigm shift in methodology is the new role of computer experiments and computer simulations in systems biology. In systems biology, computational modelling and simulation and technology-driven high-throughput lab experiments are combined to generate new knowledge, which is used to fine-tune models and design new experiments. Thus, ‘in vivo’ experiments in labs must be supplemented by ‘in silico’ experiments on computers, in order to handle the huge amount of data in systems biology. Increasing accumulation of biological data ranging from DNA and protein sequences to metabolic pathways results in the development of computational models of cells, organs, and organisms with complex metabolic and gene regulatory networks.

The mind of a single scientist is no longer sufficient to find new hypotheses of research. Computational modelling must be supported by intelligent machine learning. Machine learning algorithms are powerful tools for identifying causal gene regulatory networks from observational gene expression data. Dynamic Bayesian network (DBN) algorithms (implemented, for example, in C++) infer cyclic feedback loops as well as the strength and the direction of regulatory influence.

Nodes represent genes, directed links represent conditional statistical dependence of the child node on the parent node. Parents may be activators (arrow), repressors (flat heads), or neutral. Search heuristics are, for example, genetic algorithms, simulated annealing, or greedy search. In bioinformatics, true genetic causal systems are simulated by GeneSim with gene expression data. They are compared with recovered DBN networks.

3.11. Neural Self-organization of Brains

In the neural networks of brains, self-organization on the cellular and subcellular level is determined by the information processing in and between neurons. Chemical transmitters can effect neural information processing with direct and indirect mechanisms of great plasticity. The information is assumed to be stored in the synaptic connections of neural cell assemblies with typical macroscopic patterns. But while an individual neuron does not see or reason or remember, brains are able to do so. Vision, reasoning, and remembrance are understood as higher-level functions.

Scientists who prefer a bottom-up strategy maintain that higher-level functions of the brain can be neither addressed nor understood until each particular property of each neuron and synapse is explored and explained. An important insight of the complex systems approach is that emergent effects of the whole system are synergetic system effects that cannot be reduced to the single elements. They are the results of nonlinear interactions. Therefore, the whole is more than the (linear) sum of its parts. Thus, from a methodological point of view, a purely bottom-up strategy of exploring brain functions must fail. On the other hand, the advocates of a purely top-down strategy, proclaiming that cognition is completely independent of the nervous system, are caught in the old Cartesian dilemma: ‘How does the ghost drive the machine?’

3.12. Natural Laws of Brain Research

What are the laws of neural interaction? In the complex systems approach, the microscopic level of interacting neurons can be modelled by coupled differential equations describing the transmission of nerve impulses by each neuron. The Hodgkin-Huxley equation is an example of a nonlinear reaction-diffusion equation for a travelling wave of action potentials, which gives a precise prediction of the speed and shape of the nerve impulse of electric voltage. The domains of local activity can be precisely determined in the associated parameter spaces. In general, nerve impulses emerge as new dynamical entities, like the concentric waves in BZ reactions or fluid patterns in non-equilibrium dynamics. Technical and computational models of the brain can be constructed on the blueprint of the Hodgkin-Huxley equations.Reference Cronin2
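
The full Hodgkin-Huxley model couples four variables; as a much-reduced qualitative stand-in (the FitzHugh-Nagumo equations, an assumption chosen here for illustration rather than the article's own model), the following sketch reproduces the all-or-nothing character of the nerve impulse:

```python
# Reduced sketch of nerve-impulse dynamics: the FitzHugh-Nagumo model, a
# two-variable simplification in the spirit of the Hodgkin-Huxley equations.
# Parameter values and the stimulus current I are illustrative assumptions.

def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01, steps=10000):
    v, w = -1.0, -0.5                     # membrane potential and recovery variable
    spikes, above = 0, False
    for _ in range(steps):
        dv = v - v ** 3 / 3.0 - w + I     # fast voltage dynamics
        dw = (v + a - b * w) / tau        # slow recovery dynamics
        v, w = v + dv * dt, w + dw * dt
        if v > 1.0 and not above:         # crude spike detection by threshold crossing
            spikes += 1
            above = True
        elif v < 0.0:
            above = False
    print(f"spikes in {steps * dt:.0f} time units: {spikes}")

if __name__ == "__main__":
    fitzhugh_nagumo()
```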

However, the local activity of a single nerve impulse is not sufficient to understand the complex brain dynamics and the emergence of cognitive and mental abilities. The brain, with its more than 10¹¹ neurons, can be considered a huge nonlinear lattice, where any two points (neurons) can interact with neural impulses. How can we bridge the gap between the neurophysiology of local neural activities and the psychology of mental states? A single neuron can neither think nor feel, but only fire or not fire. Neurons are only the atoms of the complex neural dynamics.

3.13. Differential Equations of Brain StatesReference Mainzer and Chua12

In this way, we get a hierarchy of emerging levels of cognition, starting with the microdynamics of firing neurons. The dynamics of each level are assumed to be characterized by differential equations with order parameters. For example, on the first level of macrodynamics, order parameters characterize a visual perception. On the following level, the observer becomes conscious of the perception. Then the cell assembly of perception is connected with the neural area that is responsible for states of consciousness. In a next step, a conscious perception can become the goal of planning activities. In this case, cell assemblies of cell assemblies are connected with neural areas in the planning cortex, and so on. They are represented by coupled nonlinear equations with the firing rates of the corresponding cell assemblies. Even high-level concepts such as self-consciousness can be explained by self-reflections of self-reflections, connected with a personal memory, which is represented in corresponding cell assemblies of the brain. Brain states emerge, persist for a small fraction of time, then disappear and are replaced by other states. It is the flexibility and creativity of this process that makes a brain so successful for animals in their adaptation to rapidly changing and unpredictable environments.

From a mathematical point of view, the interactions of n assemblies are described by a system of n coupled differential equations depending on their specific common firing rates $F_j$ $(j = 1, 2, \ldots, n)$ with

$$\begin{aligned}
\frac{\mathrm{d}F_1}{\mathrm{d}t} &= F_1(1 - F_1) - \alpha F_2 - \alpha F_3 - \ldots - \alpha F_n \\
\frac{\mathrm{d}F_2}{\mathrm{d}t} &= -\alpha F_1 + F_2(1 - F_2) - \alpha F_3 - \ldots - \alpha F_n \\
&\;\;\vdots \\
\frac{\mathrm{d}F_n}{\mathrm{d}t} &= -\alpha F_1 - \alpha F_2 - \alpha F_3 - \ldots + F_n(1 - F_n)
\end{aligned}$$

where α is a parameter that is positive for inhibitory interactions among assemblies.

Cell assemblies behave like individual neurons. Thus, an assembly of randomly interconnected neurons has a threshold firing level for the onset of global activity. If this level is not attained, the assembly will not ignite and will fall back to a quiescent state. If the threshold level is exceeded, the firing activity of the assembly will rise rapidly to a maximum level. These two conditions ensure that assemblies of neurons can form assemblies of assemblies. Assemblies emerge from the nonlinear interactions of individual neurons; assemblies of assemblies emerge from the nonlinear interactions of assemblies. Repeating this several times, one gets a model of the brain as an emergent dynamic hierarchy.
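
A hedged numerical sketch of the competing-assembly equations quoted above, with assumed initial firing rates and inhibition strength α; typically one assembly ‘wins’ and suppresses the others:

```python
# Sketch of the competing cell-assembly equations quoted above,
#   dF_i/dt = F_i * (1 - F_i) - alpha * sum_{j != i} F_j,
# integrated with Euler steps. Initial firing rates, alpha and the clamp at
# zero (firing rates cannot be negative) are illustrative assumptions.

def assemblies(F, alpha=0.3, dt=0.01, steps=5000):
    for _ in range(steps):
        total = sum(F)
        F = [f + dt * (f * (1.0 - f) - alpha * (total - f)) for f in F]
        F = [max(f, 0.0) for f in F]       # keep firing rates non-negative
    return F

if __name__ == "__main__":
    # The initially strongest assembly typically dominates the others.
    print([round(f, 3) for f in assemblies([0.5, 0.3, 0.2, 0.1])])
```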

3.14. Dynamical Laws of Technical NetworksReference Mainzer and Chua12

With cellular organisms, neural networks and brains have emerged in biological evolution. Their laws have inspired the construction of technical neural networks that are also governed by the local activity principle. Technical neural networks are applied in simulations of brain activities and humanoid robots. In a next step, they are globalized in information and communication networks. The local activity principle is applied in the router dynamics as well as in multi-agent technologies. We can find it in the smart grids of energy supply and in cyberphysical systems. The local activity principle has become a dominating organizational principle of global technical infrastructures. The innovation of computational systems seems to continue the natural evolution of organisms and ecologies in a technical co-evolution.

Intelligent systems like brains are only subclasses of complex dynamical systems. Different disciplines are growing together in a unified theory of complex networks: systems and evolutionary biology, brain and cognition research, software and hardware engineering, robotics, information and communication networking, and the construction of cyberphysical systems and societal infrastructures. The common laws of this unified theory of complex networks are those of the theory of complex dynamical systems. Applications are self-organizing gene and protein networks, cellular organisms, agent and robot populations, cyberphysical systems and communication networks. They are all typical examples of networks with locally active agents amplifying and transforming low-energy input signals into new patterns, structures, and behaviour. The unified theory of networks in these fields is not yet completely accomplished, and, sometimes, we only know certain aspects and main features. But, at least in the natural sciences, it is not unusual to work successfully with incomplete theories: in elementary particle physics, the standard theory of unified forces still has many open questions, but it is nevertheless successfully applied to measure and solve problems.

Mathematically, the structures of networks are described by cluster coefficients and the degree distributions of connections between nodes. Gene and protein networks, cyberphysical systems and the Internet are characterized by power laws of the degree distributions. The oriented edges are determined by two degree distributions of incoming and outgoing connections following power laws, i.e. $P_{\mathrm{out}}(k) \sim k^{-\gamma_{\mathrm{out}}}$ and $P_{\mathrm{in}}(k) \sim k^{-\gamma_{\mathrm{in}}}$. Power laws are scale-invariant. Therefore, structures and patterns of networks are invariant if their dimensions are changed. They hint at general laws of pattern formation in complex dynamical systems.

4. Applications in Social and Economic Systems

4.1. Nonlinear Dynamics of Social and Economic SystemsReference Mainzer11

Nonlinear laws of complex systems with locally active units can also be observed in social groups. An application of social dynamics is the emergence of behavioural patterns generated by car drivers as locally active units. In automobile traffic systems, a phase transition from non-jamming to jamming phases depends on the averaged car density as a control parameter. The spontaneous emergence of chaotic patterns of traffic is a famous self-organizing effect of nonlinear interactions that can often not be reduced to single causes. At a critical value, fluctuations with fractal or self-similar features can be observed. The term self-similarity states that the time series of measured traffic flow looks the same at different time scales, at least from a qualitative point of view with small statistical deviations. This phenomenon is also called fractality. In the theory of complex systems, self-similarity is a (not sufficient) hint about chaotic dynamics. These signals can be used for controlling traffic systems.
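
The jamming transition can be reproduced qualitatively with the standard Nagel-Schreckenberg cellular automaton; the sketch below is a generic illustration with assumed parameter values, not the article's own traffic model:

```python
# Sketch of a jamming transition in traffic flow: the standard
# Nagel-Schreckenberg cellular automaton on a ring road. Road length, densities
# and the dawdling probability are illustrative assumptions.

import random

def simulate(density=0.35, length=200, v_max=5, p_dawdle=0.3, steps=500, seed=1):
    random.seed(seed)
    cars = {pos: 0 for pos in random.sample(range(length), int(density * length))}
    flow = 0
    for _ in range(steps):
        occupied = sorted(cars)
        new_cars = {}
        for i, pos in enumerate(occupied):
            gap = (occupied[(i + 1) % len(occupied)] - pos - 1) % length
            v = min(cars[pos] + 1, v_max, gap)            # accelerate, keep a safe gap
            if v > 0 and random.random() < p_dawdle:      # random dawdling
                v -= 1
            new_cars[(pos + v) % length] = v
            flow += v
        cars = new_cars
    print(f"density={density:.2f}  mean flow per cell and step={flow / (steps * length):.3f}")

if __name__ == "__main__":
    for d in (0.1, 0.2, 0.35, 0.5):
        simulate(density=d)   # flow first rises with density, then drops as jams form
```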

In a political community, collective trends or majorities of opinion can be considered as patterns that are produced by the mutual discussions and interactions of the locally-active people in a more or less ‘heated’ situation. They can even be initiated by a few active people in a critical and unstable (‘revolutionary’) situation of the whole community. There may be a competition of opinions during heavy political debates and uncertainties. The essential point is that the winning opinion will dominate the collective (global) behaviour of the people. In this case, stable modes are dominated by a few unstable (locally active) modes. A mathematically rigorous law of this dominating behaviour is the Hopf bifurcation theorem, where the attractor in a two-dimensional centre manifold dominates the remaining state space, which usually has much higher dimension. Thus, there is a lawful kind of feedback: on one side, the behaviour of the elements is dominated by the collective order; on the other, people have their individual intentions to influence collective trends of society. Nevertheless, they are also driven by lawful attractors of collective behaviour.

4.2. Uncertainty and Information Incompleteness

However, in economics as well as in financial theory, uncertainty and information incompleteness seem to prevent exact predictions. A widely accepted belief in financial theory is that time series of asset prices are unpredictable. Chaos theory has shown that unpredictable time series can arise from deterministic nonlinear systems. The results obtained in the study of physical, chemical, and biological systems raise the question of whether the time evolution of asset prices in financial markets might be due to underlying nonlinear deterministic laws of a finite number of variables. If we analyse financial markets with the tools of nonlinear dynamics, we may be interested in the reconstruction of an attractor. In time series analysis, it is rather difficult to reconstruct an underlying attractor and its dimension. For chaotic systems, it is a challenge to distinguish between a chaotic time evolution and a random process, especially if the underlying deterministic dynamics are unknown. From an empirical point of view, the discrimination between randomness and chaos is often impossible. Time evolution of an asset price depends on all the information affecting the investigated asset. It seems unlikely that all this information can easily be described by a limited number of nonlinear deterministic equations.

Mathematical laws of finance and insurance can be traced back for centuries. Insurance of risks against the hazards of life is an old topic for mankind. Commercial insurance dates back to the Renaissance, when great trading cities introduced bets on the safe passage of ships. In the seventeenth century, the great British insurance company Lloyds arose from this system of bookmakers. The philosopher and mathematician Gottfried Wilhelm Leibniz (1646–1716) had already suggested a health insurance scheme in which people would pay according to their income. In Germany, the ingenious idea of Leibniz was not realized until the nineteenth century, by Bismarck. In the time of Leibniz, life insurances were the first applications of probabilistic laws.

4.3. The Law of Small Numbers

In 1898 the Russian economist and statistician Ladislaus Josephovich Bortkiewicz (1868–1931) published a book about the Poisson distribution, entitled The Law of Small Numbers. In this book he first noted that events with low frequency in a large population follow a Poisson distribution even when the probabilities of the events vary. Modern insurance mathematics started with the thesis of the Swedish mathematician Filip Lundberg (1876–1965), who introduced the collective risk model for insurance claim data. Lundberg showed that the homogeneous Poisson process, after a suitable time transformation, is the key model for insurance liability data. Risk theory deals with the modelling of claims that arrive in an insurance business, and gives advice on how much premium has to be charged in order to avoid the ruin of the insurance company. Lundberg started with a simple model describing the basic dynamics of a homogeneous insurance portfolio.
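For reference, the Poisson law behind Bortkiewicz's 'law of small numbers' gives the probability of observing k rare events in a fixed period when their mean rate is λ:

```latex
P(N = k) \;=\; e^{-\lambda}\,\frac{\lambda^{k}}{k!}, \qquad k = 0, 1, 2, \dots
```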

4.4. Lundberg's Law of Insurance

His model of a homogeneous insurance portfolio, i.e. a portfolio of contracts for similar risks (e.g. car or household insurance), rests on three assumptions:

  • Claims happen at times Ti satisfying 0 ≤ T1 ≤ T2 ≤ T3 ≤ …, which are called claim arrivals.

  • The ith claim arriving at time Ti causes the claim size Xi. The sequence (Xi) is an iid (independent and identically distributed) sequence of non-negative random variables.

  • The claim size process (Xi) and the claim arrival process (Ti) are mutually independent.

According to Lundberg's model, the risk process U(t) of an insurance company is determined by the initial capital u, the loaded premium rate c, and the total claim amount S(t) of the claims Xi: U(t) = u + ct − S(t) with $S(t) = \sum_{i=1}^{N(t)} X_i$, t ≥ 0. Here N(t) is the number of claims that occur up to time t. Lundberg assumed that N(t) is a homogeneous Poisson process, independent of (Xi). Figure 1 illustrates a realization of the risk process U(t).

Figure 1 An illustration of Lundberg's risk law [3].

Lundberg's law works well for small claims. But the question arises how the global behaviour of U(t) is influenced by individual extreme events with large claims. Under Lundberg's small-claim condition, Harald Cramér derived bounds for the ruin probability of an insurance company that decay exponentially in the initial capital u. In practice, however, claims are mostly modelled by heavy-tailed distributions, such as the Pareto distribution, whose tails are much heavier than exponential.
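A small Monte-Carlo sketch can make the contrast between exponential and Pareto claims concrete. It only illustrates the risk process U(t) = u + ct − S(t); the initial capital, premium rate, claim rate and sample sizes are illustrative assumptions.

```python
import numpy as np

def ruin_probability(claim_sampler, u=10.0, c=1.2, lam=1.0,
                     horizon=200.0, n_paths=2000, seed=1):
    """Estimate P(ruin before `horizon`) for the risk process
    U(t) = u + c*t - S(t), with claims arriving as a homogeneous
    Poisson process of rate `lam`."""
    rng = np.random.default_rng(seed)
    ruined = 0
    for _ in range(n_paths):
        t, capital = 0.0, u
        while True:
            dt = rng.exponential(1.0 / lam)   # waiting time to the next claim
            t += dt
            if t > horizon:
                break
            capital += c * dt                 # premium income since the last claim
            capital -= claim_sampler(rng)     # pay the incoming claim
            if capital < 0.0:                 # ruin: capital becomes negative
                ruined += 1
                break
    return ruined / n_paths

# Small claims: exponential with mean 1 (Cramér's exponential bound applies);
# the premium loading c = 1.2 > lam * mean claim gives a positive safety margin.
exp_claims = lambda rng: rng.exponential(1.0)
# Heavy-tailed claims: classical Pareto with tail index 1.5, rescaled to mean 1.
pareto_claims = lambda rng: (1.0 / 3.0) * (1.0 + rng.pareto(1.5))

print("ruin probability, exponential claims:", ruin_probability(exp_claims))
print("ruin probability, Pareto claims:     ", ruin_probability(pareto_claims))
# With these illustrative numbers, the heavy-tailed portfolio is typically
# ruined far more often than the portfolio with exponential claims.
```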

4.5. Bachelier's Law of Risk at Stock Markets

With the upcoming stock markets during the period of industrialization, people became more and more interested in their risky dynamics. Asset price dynamics are assumed to be stochastic processes. An early key concept for understanding stochastic processes was the random walk. The first theoretical description of a random walk in the natural sciences was given in 1905 by Einstein's analysis of molecular interactions. However, the first mathematization of a random walk was realized not in physics but in the social sciences, by the French mathematician Louis Jean Bachelier (1870–1946). In 1900 he published his doctoral thesis entitled 'Théorie de la Spéculation'. At that time, most market analysis looked at stock and bond prices in a causal way: something happens as cause and prices react as effect. In complex markets with thousands of actions and reactions, such a causal analysis is difficult to work out even in hindsight, and impossible as a forecast: one can never know everything. Instead, Bachelier tried to estimate the odds that prices will move. He was inspired by an analogy between the diffusion of heat through a substance and the way a bond price wanders up and down. In his view, both are processes that cannot be forecast precisely. At the level of particles in matter or of individuals in markets, the details are too complicated. One can never analyse exactly how every relevant factor interrelates to spread energy or to energize spreads. But in both fields, the broad pattern of probability describing the whole system can be seen.

Bachelier introduced a stochastic law by looking at the bond market as a fair game. In tossing a coin, the probability of heads or tails remains 1/2 at each toss, regardless of what happened on the prior toss. In that sense, tossing coins is said to have no memory. Even during long runs of heads or tails, at each toss the run is as likely to end as to continue. In the thick of trading, price changes can certainly look that way. Bachelier assumed that the market had already taken account of all relevant information, and that prices were in equilibrium, with supply matched to demand and seller paired with buyer. Unless some new information came along to change that balance, one would have no reason to expect any change in price. The next move would be as likely up as down.

4.6. Risk Law of Normal (‘Gaussian’) Distribution

In Bachelier's model, prices thus follow a random walk. In order to illustrate this distribution, Bachelier plotted all of a bond's price changes over a month or year onto a graph. In the case of independent and identically distributed (iid) price changes, they spread out in the well-known bell-curve shape of a normal (Gaussian) distribution: the many small changes cluster in the centre of the bell, and the few big changes sit at the edges. Bachelier assumed that price changes behave like the random walk of molecules in Brownian motion. Long before Bachelier and Einstein, the Scottish botanist Robert Brown had studied the way tiny pollen grains jiggle about in a sample of water. Einstein explained it by molecular interactions and developed equations very similar to Bachelier's equation of bond-price probability, although Einstein never knew of Bachelier's work. It is a remarkable interdisciplinary coincidence that the movement of security prices, the motion of molecules, and the diffusion of heat are described by mathematically analogous laws.

In short, Bachelier's law of price changes on stock markets depends on the three hypotheses of (1) statistical independence (i.e. each change in price appears independently from the last); (2) statistical stationarity of price changes; and (3) normal distribution (i.e. price changes follow the proportions of the Gaussian bell curve).
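Under these three hypotheses, a price path is nothing but the cumulative sum of iid Gaussian increments. The following sketch is an illustration with arbitrary parameters (initial price, number of steps, volatility), not a calibration to any market data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Bachelier-style random walk: iid, stationary, Gaussian price changes.
n_steps, sigma = 10_000, 1.0
changes = rng.normal(loc=0.0, scale=sigma, size=n_steps)
price = 100.0 + np.cumsum(changes)            # the simulated price path

# The empirical distribution of the changes follows the bell curve:
print("mean change:            ", changes.mean())                       # close to 0
print("std of changes:         ", changes.std())                        # close to sigma
print("fraction beyond 3 sigma:", np.mean(np.abs(changes) > 3 * sigma))
# For a Gaussian this last fraction is about 0.0027: big moves are very rare,
# which is precisely the assumption challenged in Sections 4.9 and 4.10 below.
```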

4.7. Efficient Market Hypothesis

It took a long time for economists to recognize the practical virtues of describing markets by the laws of chance and Brownian motion. In 1956, Bachelier's idea of a fair game was used by Paul A. Samuelson and his school to formulate the 'efficient market hypothesis'. They argued that in an ideal market, security prices fully reflect all relevant information. A financial market is a fair game in which buyer balances seller. As traders read price charts, analyse public information, and act on inside information, the market quickly discounts the resulting new information. Prices rise or fall to reach a new equilibrium of buyer and seller, and the next price change is, once again, as likely to be up as down. Therefore, one can expect to win half the time and lose half the time. If one has special insights into a stock, one could profit from being the first in the market to act on them. But one cannot be sure of being right or first, because there are many people in the market as intelligent as oneself.

4.8. Black-Scholes Formula of Risk

A practical consequence of the Bachelier-Samuelson theory is the Black-Scholes formula for valuing options contracts and assessing risk, invented by Fischer Black and Myron S. Scholes in the early 1970s. The Black-Scholes formula aims at constructing risk-free portfolios. Black and Scholes assumed several conditions of financial markets:

  (1) The change of price Y(t) at each step t can be described by the stochastic differential equation of a Brownian motion. This assumption implies that the changes in the (logarithm of) price are Gaussian distributed.

  (2) Security trading is continuous.

  (3) Selling of securities is possible at any time.

  (4) There are no transaction costs.

  (5) The market interest rate r is constant.

  (6) There are no dividends between t = 0 and t = T.

  (7) There are no arbitrage opportunities. Arbitrage is a key concept for the understanding of markets. It means the purchase and sale of the same or equivalent security in order to profit from price discrepancies.

4.9. Black-Scholes Formula and Physical Laws

Now, in the absence of arbitrage opportunities, the change in the value of a portfolio must equal the gain obtained by investing the same amount of money in a riskless security providing a return r per unit of time. This assumption allows one to derive the Black-Scholes partial differential equation, which is valid for both call and put European options. Under suitable boundary conditions and substitutions, the Black-Scholes partial differential equation becomes formally equivalent to the heat-transfer law of physics, which is analytically solvable.
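For reference, the equation and its closed-form solution for a European call can be stated in standard notation (option value V, stock price S, strike K, volatility σ, riskless rate r, maturity T, standard normal distribution function Φ):

```latex
% Black-Scholes partial differential equation for the option value V(S, t):
\frac{\partial V}{\partial t}
  + \tfrac{1}{2}\,\sigma^{2} S^{2}\,\frac{\partial^{2} V}{\partial S^{2}}
  + r S\,\frac{\partial V}{\partial S} - r V \;=\; 0 .
% Closed-form price of a European call with strike K and maturity T (no dividends):
C(S,t) \;=\; S\,\Phi(d_{1}) - K\,e^{-r(T-t)}\,\Phi(d_{2}),
\qquad
d_{1,2} \;=\; \frac{\ln(S/K) + \bigl(r \pm \tfrac{1}{2}\sigma^{2}\bigr)(T-t)}{\sigma\sqrt{T-t}} .
```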

Brownian motion is mathematically more manageable than any alternative. But, unfortunately, it is an extremely poor approximation to financial reality. Since the end of the 1980s, we have observed financial crashes and turbulences deviating significantly from the normal distribution. Investment portfolios collapsed and hedging with options à la Black-Scholes failed. From the viewpoint of dynamical systems, the patterns of time series analysis illustrate the failures of traditional financial theory.

4.10. Non-Gaussian Laws of Financial Dynamics

Financial markets share some properties with fluid turbulence. Like turbulent fluctuations in fluids, financial fluctuations show intermittency at all scales. In fluid turbulence, a cascade of energy flux is known to occur from the large scale of injection to the small scales of dissipation. In the nonlinear and fractal approach to the financial system, randomness can no longer be restricted to the 'normal' Gaussian distribution of price changes. Non-Gaussian distributions such as Lévy and Pareto distributions are more appropriate to the wild turbulence of today's financial markets [13]. They are caused by locally active centres or agents [12]. Agents may be any elements of the financial system, i.e. people, banks, governments, countries, or other units under consideration.
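The practical difference between Gaussian and heavy-tailed models of price changes is easy to see in simulation. The tail index and sample size below are illustrative assumptions; the point is only the relative frequency of extreme moves.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

gauss = rng.normal(0.0, 1.0, n)                     # 'mild' Gaussian returns
# Symmetric Pareto-type returns with tail index a = 3, rescaled to unit variance.
a = 3.0
heavy = (1.0 + rng.pareto(a, n)) * rng.choice([-1.0, 1.0], size=n)
heavy /= heavy.std()

for name, x in (("Gaussian", gauss), ("Pareto-tailed", heavy)):
    print(f"{name:13s}  P(|return| > 4 std) = {np.mean(np.abs(x) > 4.0):.1e}")
# For a Gaussian the 4-sigma tail probability is about 6e-05; for the
# heavy-tailed sample it is far larger, mirroring the frequency of crashes
# and turbulences observed in real markets.
```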

4.11. Mathematical Laws and Model Ambiguity

From a philosophical point of view, it is important to realize that the assumption that past distribution patterns carry robust inferences for the probability of future patterns is methodologically insecure. It involves applying to the world of social and economic relationships a technique drawn from the world of physics, in which a random sample of a definitively existing universe of possible events is used to determine the probability characteristics that govern future random samples. This transfer is doubtful when applied to economic and social decisions with inherent uncertainty. Economists sometimes refer to it as 'Knightian' uncertainty, a reference to the classical distinction between risk and uncertainty in Frank Knight's PhD thesis 'Risk, Uncertainty, and Profit' of 1921 [6]. It also suggests that no system of regulation could ever guard against all risks and uncertainties.

Classical economic models are mainly built upon the two assumptions of rational expectations with well-known probabilities of utilities and a representative agent (homo oeconomicus). They imply a complete understanding of the economic laws governing the world. These models leave no place for the imperfect knowledge revealed in empirical psychological studies of real humans, whose behaviour in financial markets is strongly influenced even by emotional and hormonal reactions. Thus, economic modelling has to take bounded rationality seriously. But model ambiguity does not mean the collapse of mathematical modelling. Mathematically, a fixed probability measure of expected utilities should be replaced by a convex risk measure that simultaneously considers a whole class of possible stochastic models with different penalties.Reference Föllmer and Schied4 Financial practice has warned us not to rely on a fixed model, but to vary possible models in a flexible way and to pay attention to the worst case. This is also the mathematical meaning of a convex risk measure [4].
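In the notation of Föllmer and Schied, this idea has a compact robust representation, where Q ranges over a class of candidate probability models and α(Q) is the penalty attached to model Q:

```latex
% Robust representation of a convex risk measure \rho for a financial position X:
\rho(X) \;=\; \sup_{Q \in \mathcal{Q}} \Bigl( \mathbb{E}_{Q}[-X] \;-\; \alpha(Q) \Bigr).
% Instead of trusting one fixed probabilistic model, the worst case over a whole
% class \mathcal{Q} of stochastic models is taken, each model being discounted
% by its penalty \alpha(Q).
```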

5. The Instrumental Concept of Laws

5.1. Explanatory and Predictive Power of Laws

The classical scheme of natural laws (e.g. Newton, Einstein) starts with data and the discovery of data patterns, which are generalized ('induction') to all possible events ('natural law') and then justified and explained by deduction from fundamental theories according to fundamental principles of science (e.g. invariance, symmetry, conservation) (Figure 2). The explanatory and predictive power of laws depends on their capacity to deduce past events (explanation) and future events (prediction). Actually, there is a complex network of interdependent theories and laws driven by new experience and research dynamics.

Figure 2 Classical scheme of natural laws.

5.2. Computational Power of Laws

However, the predictive and explanatory power of laws is not sufficient. In modern research with increasing masses of data (e.g. systems biology), computational modelling and simulation (in silico experiments) and technology-driven high-throughput laboratory ('wet') experiments are combined to generate new knowledge and laws, which are used to fine-tune models and design new experiments. The increasing accumulation of data (e.g. in biology), ranging from DNA and protein sequences to metabolic pathways, results in the development of computational models of cells, organs, and organisms with complex metabolic and gene regulatory networks. In that sense, laws must also have computational power.
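As a toy illustration of such computational power, a single negatively autoregulated gene, one of the standard network motifs of systems biology, can be modelled and simulated in a few lines; all rate constants below are illustrative assumptions, not measured values.

```python
import numpy as np

def negative_autoregulation(beta=1.0, K=0.5, n=2.0, alpha=1.0,
                            x0=0.0, dt=0.01, t_end=20.0):
    """Integrate dx/dt = beta / (1 + (x/K)**n) - alpha * x by explicit Euler.
    x is the protein concentration; the Hill term models self-repression."""
    steps = int(t_end / dt)
    x = np.empty(steps + 1)
    x[0] = x0
    for i in range(steps):
        production = beta / (1.0 + (x[i] / K) ** n)   # repressed transcription
        x[i + 1] = x[i] + dt * (production - alpha * x[i])
    return x

trajectory = negative_autoregulation()
print("steady-state protein level:", round(trajectory[-1], 3))
# Such small in silico modules are the building blocks of larger cell models
# that are fine-tuned against high-throughput 'wet' experiments.
```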

5.3. From Laws to Early Warning Systems

However, because of their complexity, nature, environment, and life cannot be computed and controlled completely. Nevertheless, their system laws can be analysed and understood in order to support the self-organization of desired effects and sustainable developments. Extreme events in anthropogenic systems initiate global cascades of nonlinear effects (e.g. tsunamis, nuclear catastrophes, financial crises). Therefore, we need lawful and science-based policies of early warning systems and crisis management.

5.4. From Laws to Sustainable Innovations

In the end, the question arises: what are laws for? There is also an instrumental concept of laws, which has become increasingly important in a technical civilization. Laws and theories should open avenues to new innovations. Innovation in science and technology, growth of industry, growth of population, natural resources (e.g. water, energy), nutrition, biodiversity, and climate are only subsystems of the global Earth system, connected by nonlinear causal feedback loops and highly sensitive to local perturbations leading to global effects of instability and chaos. Sustainable innovations should take care of the whole Earth system and find appropriate policies.

In doing so, we cannot trust a single risk model, but must consider a class of more or less appropriate models, supplemented by experimental behavioural case studies. The lack of methodological understanding of laws and the lack of ethical responsibility to warn the public about the limitations of the underlying models of the world were main reasons for past crises. It is the task of the philosophy of science to analyse scientific modelling, to determine its limitations, and to demand the ethical responsibility of scientists even in these methodological affairs.

Klaus Mainzer studied mathematics, physics, and philosophy (1968–1972), receiving his PhD (1973) and habilitation (1979) at the University of Münster, and a Heisenberg scholarship (1980). From 1981 to 1988 he was professor for philosophy of science at the University of Constance and vice-president of the University of Constance (1985–1988). From 1988 to 2008 he was chair for philosophy of science, dean (1999–2000), director of the Institute of Philosophy (1989–2008) and founding director of the Institute of Interdisciplinary Informatics (1998–2008) at the University of Augsburg. Since 2008 he has been chair for philosophy of science and technology, director of the Carl von Linde-Academy and founding director of the Munich Center for Technology in Society (MCTS), member of the advisory board of the Institute for Advanced Study (IAS) at the Technical University of Munich, and member of several academies and interdisciplinary organizations (e.g. Academia Europaea in London, European Academy of Sciences and Arts in Salzburg, German National Academy of Science and Engineering (acatech)). His work is focused on philosophy and the mathematical foundations of science and he is well-known as a researcher of complex systems and artificial intelligence.

References

1. Alon, U. (2007) An Introduction to Systems Biology. Design Principles of Biological Circuits (London: Chapman & Hall/CRC).
2. Cronin, J. (1987) Mathematical Aspects of Hodgkin-Huxley Neural Theory (Cambridge: Cambridge University Press).
3. Embrechts, P., Klüppelberg, C. and Mikosch, T. (2003) Modelling Extremal Events for Insurance and Finance (Berlin: Springer).
4. Föllmer, H. and Schied, A. (2008) Convex and coherent risk measures. Working Paper, Institute for Mathematics, Humboldt-University Berlin, October.
5. Kaneko, K. (2006) Life: An Introduction to Complex Systems Biology (Berlin: Springer).
6. Knight, F. (1921) Risk, uncertainty, and profit. PhD dissertation, Yale University.
7. Mainzer, K. (1996) Symmetries in Nature (Berlin: De Gruyter) (German: Symmetrien der Natur, Berlin: De Gruyter, 1988).
8. Mainzer, K. (2005) Symmetry and complexity in dynamical systems. European Review, 13, supplement 2, pp. 29–48.
9. Mainzer, K. (2007) Thinking in Complexity. The Computational Dynamics of Matter, Mind, and Mankind, 5th edn (Berlin: Springer).
10. Mainzer, K. (Ed.) (2009) Complexity. European Review, 17.
11. Mainzer, K. (2009) Challenges of complexity in economics. Evolutionary and Institutional Economics Review, 6(1), pp. 1–22.
12. Mainzer, K. and Chua, L. (2013) Local Activity Principle. The Cause of Complexity and Symmetry Breaking (London: Imperial College Press).
13. Mandelbrot, B.B. and Hudson, R.L. (2004) The (mis)Behavior of Markets. A Fractal View of Risk, Ruin, and Reward (New York: Basic Books).
14. Mittelstrass, J. and Mainzer, K. (1984) Newton, Isaac. In: J. Mittelstrass (Ed.) Enzyklopädie Philosophie und Wissenschaftstheorie Bd. 2 (Mannheim: B.I. Wissenschaftsverlag), pp. 997–1005.