Using Python to visualize chaos, fractals, and self-similarity to better understand the limits of knowledge and prediction.
Strange Stars Pulse to the Golden Mean: Nature has revealed peculiar mathematical objects that connect order and chaos.
The question "What is Complexity?" has occupied a great deal of time and paper over the last 20 or so years. There are myriad different perspectives and definitions but still no consensus. In this paper I take a phenomenological approach, identifying several factors that discriminate well between systems that would be consensually agreed to be simple versus others that would be consensually agreed to be complex: biological systems and human languages. I argue that a crucial component is that of structural building block hierarchies that, in the case of complex systems, correspond also to a functional hierarchy. I argue that complexity is an emergent property of this structural/functional hierarchy, induced by a property (fitness in the case of biological systems and meaning in the case of languages) that links the elements of this hierarchy across multiple scales. Additionally, I argue that non-complex systems "are" while complex systems "do", so that the latter, in distinction to physical systems, must be described not only in a space of states but also in a space of update rules (strategies), which we do not know how to specify. Further, the existence of structural/functional building block hierarchies allows for the functional specialisation of structural modules, as amply observed in nature. Finally, we argue that there is at least one measuring apparatus capable of measuring complexity as characterised in the paper: the human brain itself.
During the 1960s, but mainly in the 1970s, large mathematical dynamic global models were implemented in computers to simulate the entire world, or large portions of it. Several different but interrelated subjects were considered simultaneously, and their variables evolved over time in an attempt to forecast the future, considering decades as time horizons. Global models continued to be developed while evidencing an increasing bias towards environmental aspects, or at least the public impact of models with such a focus became prevalent. In this paper we analyze the early evolution of computer-based global modeling and provide insights on lesser-known pioneering works by South American modelers in the 1960s (Varsavsky and collaborators). We revisit relevant methodological aspects and discuss how they influenced different modeling endeavors. Finally, we overview how distinctive systemic approaches in global modeling evolved into the currently well-established discipline of complex systems.
Motivated by the possibility of optimizing models of population evolution, we postulate a generalization of the well-known logistic map. The generalized difference equation reads: x_{n+1} = r x_n^p (1 − x_n^q), where x ∈ [0, 1], p, q > 0, n = 0, 1, 2, ..., and the two new parameters p and q may assume any positive values. The standard logistic map thus corresponds to the case p = q = 1. For such a generalized equation we illustrate the character of the transition from regularity to chaos as a function of r for the whole spectrum of p and q parameters. As an example we consider the case p = 1 and q = 2 in both the periodic and chaotic regimes. We focus on the character of the corresponding bifurcation sequence and on the quantitative nature of the resulting attractor, as well as its universal attribute (the Feigenbaum constant).
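The generalized map is easy to explore numerically. Below is a minimal sketch for the p = 1, q = 2 case discussed in the abstract; the sample r values and the orbit-summary heuristic are illustrative choices, not taken from the paper.

```python
# Iterate the generalized logistic map x_{n+1} = r * x^p * (1 - x^q)
# and report post-transient orbits for a few values of r (p = 1, q = 2).
# The chosen r values are illustrative: for this case the fixed point
# x* = sqrt(1 - 1/r) loses stability at r = 2, after which a
# period-doubling cascade toward chaos sets in.

def generalized_logistic(r, p=1.0, q=2.0, x0=0.5, transient=1000, keep=64):
    """Return `keep` post-transient iterates of the map."""
    x = x0
    for _ in range(transient):
        x = r * x**p * (1.0 - x**q)
    orbit = []
    for _ in range(keep):
        x = r * x**p * (1.0 - x**q)
        orbit.append(x)
    return orbit

for r in (1.5, 2.2, 2.55):
    orbit = generalized_logistic(r)
    # number of distinct values visited: 1 = fixed point, small = cycle,
    # many = (apparently) chaotic
    print(r, len({round(v, 6) for v in orbit}))
```

At r = 1.5 the fixed point sqrt(1 − 1/1.5) = sqrt(1/3) is superstable (the derivative 3 − 2r vanishes there), so the orbit collapses to a single value almost immediately.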
It has been hypothesized that in the era just before the last universal common ancestor emerged, life on earth was fundamentally collective. Ancient life forms shared their genetic material freely through massive horizontal gene transfer (HGT). At a certain point, however, life made a transition to the modern era of individuality and vertical descent. Here we present a minimal model for this hypothesized "Darwinian transition." The model suggests that HGT-dominated dynamics may have been intermittently interrupted by selection-driven processes during which genotypes became fitter and decreased their inclination toward HGT. Stochastic switching in the population dynamics with three-point (hypernetwork) interactions may have destabilized the HGT-dominated collective state and led to the emergence of vertical descent and the first well-defined species in early evolution. A nonlinear analysis of a stochastic model dynamics covering key features of evolutionary processes (such as selection, mutation, drift and HGT) supports this view. Our findings thus suggest a viable route from early collective evolution to the start of individuality and vertical Darwinian evolution, enabling the emergence of the first species.
A potent theory has emerged explaining a mysterious statistical law that arises throughout physics and mathematics.
Richard Cook, Royal Institute of Technology, Stockholm. Dr. Richard Cook is Professor of Healthcare Systems Safety and Chairman of the Department of Patien...
Novelties are a familiar part of daily life. They are also fundamental to the evolution of biological systems, human society, and technology. By opening new possibilities, one novelty can pave the way for others in a process that Kauffman has called “expanding the adjacent possible”. The dynamics of correlated novelties, however, have yet to be quantified empirically or modeled mathematically. Here we propose a simple mathematical model that mimics the process of exploring a physical, biological, or conceptual space that enlarges whenever a novelty occurs. The model, a generalization of Polya's urn, predicts statistical laws for the rate at which novelties happen (Heaps' law) and for the probability distribution on the space explored (Zipf's law), as well as signatures of the process by which one novelty sets the stage for another. We test these predictions on four data sets of human activity: the edit events of Wikipedia pages, the emergence of tags in annotation systems, the sequence of words in texts, and listening to new songs in online music catalogues. By quantifying the dynamics of correlated novelties, our results provide a starting point for a deeper understanding of the adjacent possible and its role in biological, cultural, and technological evolution.
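The triggering mechanism behind these predictions can be sketched in a few lines. The following is a minimal, hedged version of an urn model with triggering in the spirit of the generalized Polya urn the abstract describes; the parameter names rho (reinforcement) and nu (number of new colors triggered by a novelty) and the initial urn size are illustrative choices, not the paper's exact formulation.

```python
import random

# Urn model with triggering: drawing a ball reinforces it (rho copies),
# and drawing a never-before-seen color adds nu + 1 brand-new colors,
# "expanding the adjacent possible". With nu < rho, the number of
# distinct colors seen grows sublinearly, Heaps-law style.

def urn_with_triggering(steps, rho=4, nu=2, seed=0):
    rng = random.Random(seed)
    urn = list(range(nu + 1))          # initial colors
    next_color = nu + 1
    seen = set()
    distinct_over_time = []
    for _ in range(steps):
        ball = rng.choice(urn)
        urn.extend([ball] * rho)       # reinforcement
        if ball not in seen:           # a novelty...
            seen.add(ball)
            # ...triggers nu + 1 brand-new "adjacent possible" colors
            urn.extend(range(next_color, next_color + nu + 1))
            next_color += nu + 1
        distinct_over_time.append(len(seen))
    return distinct_over_time

d = urn_with_triggering(20000)
print(d[-1], "novelties in", len(d), "draws")   # far fewer than draws
```

With these parameters the novelty count grows roughly like t^(nu/rho), i.e. much slower than the number of draws, which is the Heaps-law signature the abstract refers to.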
It has been nearly 25 years since the publication of Infectious Disease of Humans, the "vade mecum" of mathematical modeling of infectious disease; the proliferation of epidemiological careers that it initiated is now in its fourth generation. Epidemiological models have proved very powerful in shaping health policy discussions. The complex interactions that lead to pathogen (and pest) outbreaks make it necessary to use models to provide quantitative insights into the counterintuitive outcomes that are the rule of most nonlinear systems. Thus, epidemic models are most interesting when they suggest unexpected outcomes; they are most powerful when they describe the conditions that delineate the worst-case unexpected scenario, and provide a framework in which to compare alternative control strategies. But what are the limits of mathematical models, and what kinds provide insight into emerging disease? Mathematical Models for Emerging Disease. Andy Dobson. Science, 12 December 2014: Vol. 346, no. 6215, pp. 1294-1295. http://dx.doi.org/10.1126/science.aaa3441
Via Complexity Digest
We consider biological individuality in terms of information theoretic and graphical principles. Our purpose is to extract through an algorithmic decomposition system-environment boundaries supporting individuality. We infer or detect evolved individuals rather than assume that they exist. Given a set of consistent measurements over time, we discover a coarse-grained or quantized description on a system, inducing partitions (which can be nested). Legitimate individual partitions will propagate information from the past into the future, whereas spurious aggregations will not. Individuals are therefore defined in terms of ongoing, bounded information processing units rather than lists of static features or conventional replication-based definitions, which tend to fail in the case of cultural change. One virtue of this approach is that it could expand the scope of what we consider adaptive or biological phenomena, particularly in the microscopic and macroscopic regimes of molecular and social phenomena. The Information Theory of Individuality. David Krakauer, Nils Bertschinger, Eckehard Olbrich, Nihat Ay, Jessica C. Flack. http://arxiv.org/abs/1412.2447
Via Complexity Digest
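The core criterion above — legitimate partitions propagate information from past to future, spurious ones do not — can be illustrated with a toy example. This is not the paper's algorithm, just a sketch: two candidate coarse-grainings of a process are compared by their estimated mutual information I(X_t; X_{t+1}). The "individual" here is a persistent (sticky) bit; the "spurious aggregation" is fresh noise each step.

```python
import math
import random

def mutual_information(pairs):
    """Plug-in estimate of I(X; Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy, px, py = {}, {}, {}
    for a, b in pairs:
        pxy[(a, b)] = pxy.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    mi = 0.0
    for (a, b), c in pxy.items():
        p = c / n
        mi += p * math.log2(p * n * n / (px[a] * py[b]))
    return mi

rng = random.Random(0)
persistent, noise = [], []
state = 0
for _ in range(20000):
    prev = state
    state = state if rng.random() < 0.9 else 1 - state   # sticky bit
    persistent.append((prev, state))
    noise.append((rng.randint(0, 1), rng.randint(0, 1)))  # fresh coin flips

mi_persistent = mutual_information(persistent)
mi_noise = mutual_information(noise)
print(round(mi_persistent, 3), round(mi_noise, 3))
```

The sticky bit carries substantial information forward (about 1 − H(0.1) ≈ 0.53 bits per step), while the noise channel carries essentially none — the kind of asymmetry the authors use to distinguish individuals from spurious aggregations.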
This is the first in a series of interviews highlighting the work of experts in the field of complex systems science. Dr. Ben Althouse, an Omidyar Fellow at the Santa Fe Institute, is a mathematical epidemiologist focusing on the dynamics of infectious disease transmission. Ben holds both an ScM in Biostatistics and a PhD in Epidemiology from the Johns Hopkins Bloomberg School of Public Health, where he focused on understanding Dengue fever and other sylvatic mosquito-borne viruses (arboviruses) in Senegal using mechanistic modeling and the SIR model. Dr. Althouse also attended the Santa Fe Institute's Complex Systems Summer School during his graduate studies.
Via Jorge Louçã
We introduce a nonpartisan probability distribution on congressional redistricting of North Carolina which emphasizes the equal partition of the population and the compactness of districts. When random districts are drawn and the results of the 2012 election are retabulated under the drawn districtings, we find that an average of 7.6 Democratic representatives are elected. 95% of the randomly sampled redistrictings produced between 6 and 9 Democrats. Both of these facts are in stark contrast with the 4 Democrats elected in the 2012 elections with the same vote counts. This brings into serious question the idea that such elections represent the "will of the people." It underlines the ability of redistricting to undermine the democratic process, while on its face allowing democracy to proceed.
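The retabulation idea — hold the votes fixed, vary only the district lines, and see how the seat count moves — is simple to sketch. The code below uses synthetic precinct data, not the North Carolina counts, and random equal-size partitions rather than the study's distribution; real districtings must also respect contiguity and compactness, which this toy ignores.

```python
import random

# Toy retabulation: fix precinct-level vote counts, draw many random
# equal-size districtings, and count majority-Democratic seats each yields.
# All vote numbers below are synthetic illustrations.

def seats_under_random_districting(dem_votes, total_votes, n_districts, rng):
    precincts = list(range(len(dem_votes)))
    rng.shuffle(precincts)                     # random equal-size partition
    size = len(precincts) // n_districts
    seats = 0
    for d in range(n_districts):
        block = precincts[d * size:(d + 1) * size]
        dem = sum(dem_votes[i] for i in block)
        tot = sum(total_votes[i] for i in block)
        seats += dem * 2 > tot                 # majority-Democratic district
    return seats

rng = random.Random(42)
# 130 synthetic precincts, a bit under half Democratic statewide
dem = [rng.randint(200, 800) for _ in range(130)]
tot = [v + rng.randint(300, 900) for v in dem]
samples = [seats_under_random_districting(dem, tot, 13, rng)
           for _ in range(500)]
print(min(samples), max(samples), sum(samples) / len(samples))
```

The study's point is the comparison: if an enacted plan's seat count falls far outside the distribution such sampling produces, the districting, not the votes, is doing the work.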

In this paper, we propose a novel methodology for automatically finding new chaotic attractors through a computational intelligence technique known as multi-gene genetic programming (MGGP). We apply this technique to the case of the Lorenz attractor and evolve several new chaotic attractors based on the basic Lorenz template. The MGGP algorithm automatically finds new nonlinear expressions for the different state variables starting from the original Lorenz system. The Lyapunov exponents of each of the attractors are calculated numerically based on the time series of the state variables using time delay embedding techniques. The MGGP algorithm searches the functional space of attractors, aiming to maximise the largest Lyapunov exponent (LLE) of the evolved attractors. To demonstrate the potential of the proposed methodology, we report over one hundred new chaotic attractor structures, along with their parameters, which are evolved from just the Lorenz system alone.
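The fitness signal here is the largest Lyapunov exponent. The paper estimates it from time series via delay embedding; as a simpler stand-in, the sketch below applies the Benettin two-trajectory method directly to the classical Lorenz system (sigma = 10, rho = 28, beta = 8/3). Step size and run length are illustrative choices.

```python
import math

# Estimate the largest Lyapunov exponent (LLE) of the Lorenz system by
# tracking two nearby trajectories and renormalizing their separation
# (Benettin's method), integrating with a hand-rolled RK4 step.

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt):
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(tuple(s[i] + 0.5 * dt * k1[i] for i in range(3)))
    k3 = lorenz_rhs(tuple(s[i] + 0.5 * dt * k2[i] for i in range(3)))
    k4 = lorenz_rhs(tuple(s[i] + dt * k3[i] for i in range(3)))
    return tuple(s[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

def largest_lyapunov(steps=20000, dt=0.01, d0=1e-8):
    a = (1.0, 1.0, 1.0)
    for _ in range(1000):                  # discard transient
        a = rk4_step(a, dt)
    b = (a[0] + d0, a[1], a[2])            # nearby companion trajectory
    total = 0.0
    for _ in range(steps):
        a, b = rk4_step(a, dt), rk4_step(b, dt)
        d = math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))
        total += math.log(d / d0)
        # renormalize the separation back to d0 along its current direction
        b = tuple(a[i] + (b[i] - a[i]) * d0 / d for i in range(3))
    return total / (steps * dt)

lle = largest_lyapunov()
print(round(lle, 2))   # a positive value, near the textbook ~0.9
```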
By performing a systematic study of the Hénon map, we find low-period sinks for parameter values extremely close to the classical ones. This raises the question whether or not the well-known Hénon attractor—the attractor of the Hénon map existing for the classical parameter values—is a strange attractor, or simply a stable periodic orbit. Using results from our study, we conclude that even if the latter were true, it would be practically impossible to establish this by computing trajectories of the map.
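For context, the object in question takes only a few lines to generate. The sketch below iterates the Hénon map at the classical parameters a = 1.4, b = 0.3; as the abstract notes, a finite computation like this cannot distinguish a genuine strange attractor from a stable periodic orbit of very long period.

```python
# Iterate the classical Hénon map (x, y) -> (1 - a*x^2 + y, b*x).
# Long trajectories from the origin stay on the familiar boomerang-shaped
# set; whether that set is truly a strange attractor is the open question
# the paper addresses.

def henon_orbit(n, a=1.4, b=0.3, x0=0.0, y0=0.0):
    x, y = x0, y0
    pts = []
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        pts.append((x, y))
    return pts

orbit = henon_orbit(10000)[100:]     # drop a short transient
xs = [p[0] for p in orbit]
print(min(xs), max(xs))              # bounded, roughly within [-1.3, 1.3]
```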
We analyze the replicator-mutator equations for the Rock-Paper-Scissors game. Various graph-theoretic patterns of mutation are considered, ranging from a single unidirectional mutation pathway between two of the species, to global bidirectional mutation among all the species. Our main result is that the coexistence state, in which all three species exist in equilibrium, can be destabilized by arbitrarily small mutation rates. After it loses stability, the coexistence state gives birth to a stable limit cycle solution created in a supercritical Hopf bifurcation. This attracting periodic solution exists for all the mutation patterns considered, and persists arbitrarily close to the limit of zero mutation rate and a zero-sum game. Nonlinear Dynamics of the Rock-Paper-Scissors Game with Mutations. Danielle F. P. Toupo, Steven H. Strogatz. http://arxiv.org/abs/1502.03370
Via Complexity Digest
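The kind of flow being analyzed above can be sketched numerically. The code below integrates replicator dynamics for the standard zero-sum Rock-Paper-Scissors payoff with one of the mutation patterns the abstract mentions (a single unidirectional pathway, rock to paper), in a simple additive form. This is an illustrative variant, not the paper's exact equations, and it only checks simplex invariants rather than reproducing the Hopf-bifurcation result.

```python
# Replicator dynamics with a unidirectional mutation term for
# Rock-Paper-Scissors, integrated with a hand-rolled RK4 step.

A = [[0.0, -1.0, 1.0],    # rock's payoff vs (rock, paper, scissors)
     [1.0, 0.0, -1.0],    # paper
     [-1.0, 1.0, 0.0]]    # scissors

def rhs(x, mu):
    f = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    phi = sum(x[i] * f[i] for i in range(3))       # mean fitness
    dx = [x[i] * (f[i] - phi) for i in range(3)]
    dx[0] -= mu * x[0]     # unidirectional mutation: rock -> paper
    dx[1] += mu * x[0]
    return dx

def rk4_step(x, dt, mu):
    def shift(y, k, h):
        return [y[i] + h * k[i] for i in range(3)]
    k1 = rhs(x, mu)
    k2 = rhs(shift(x, k1, dt / 2), mu)
    k3 = rhs(shift(x, k2, dt / 2), mu)
    k4 = rhs(shift(x, k3, dt), mu)
    return [x[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(3)]

x = [0.5, 0.3, 0.2]
for _ in range(2500):           # integrate to t = 50 with dt = 0.02
    x = rk4_step(x, 0.02, mu=0.01)
print([round(v, 4) for v in x])
```

Frequencies stay on the simplex (the dynamics conserve the total) and all three species persist over this horizon, consistent with oscillatory coexistence rather than fixation.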
In social dilemmas, punishment costs resources, not only for the one who is punished but often also for the punisher and society. Reciprocity, on the other hand, is known to lead to cooperation without the costs of punishment. The questions at hand are whether punishment brings advantages besides its costs, and how its negative side-effects can be reduced to a minimum in an environment populated by agents adopting a form of reciprocity. Various punishment mechanisms have been studied in the economic literature, such as unrestricted punishment, legitimate punishment, cooperative punishment, and the hired gun mechanism. In this study all these mechanisms are implemented in a simulation where agents can share resources and may decide to punish other agents that do not share. Through evolutionary learning, agents adapt their sharing/punishing policy. When the availability of resources was restricted, punishment mechanisms in general performed better than no punishment, although unrestricted punishment performed worse. When resource availability was high, performance was better in no-punishment conditions with indirect reciprocity. Unrestricted punishment was always the worst-performing mechanism. In summary, this paper shows that, in certain environments, some punishment mechanisms can improve the efficiency of cooperation even if the cooperating system is already based on indirect reciprocity.
The multi-agent-based programming, modeling, and simulation environment NetLogo has been used extensively during the last fifteen years for educational purposes, among others. The learning subject, upon interacting with the User Interface of NetLogo, can easily study properties of the simulated natural systems, as well as observe the latter's response when altering their parameters. In this research, NetLogo was used under the perspective that the learning subject (student or prospective teacher) interacts with the model in a deeper way, taking on the role of an agent. This is achieved not by obliging the learner to program (write NetLogo code) but by interviewing them and applying the choices that they make to the model. The scheme was carried out, as part of a broader research project, with interviews and web-page-like interface menu selections, on a sample of 17 university students in Athens (prospective primary school teachers), and the results were judged to be encouraging. At a further stage, the computers were set up as a network, where all the agents performed together. In this way the learners could watch on-screen the overall outcome of their choices and actions on the modeled ecosystem. This seems to open a new, small area of research in NetLogo educational applications.
In a study published in the journal Nature, a team from the Institut des sciences de l'évolution de Montpellier (CNRS/IRD/Université de Montpellier 2) experimentally confirmed the hypothesis that the size of a population directly influences its capacity to transmit cultural traits. The larger a population is, the better it can transmit knowledge and techniques, and the more it can innovate; the smaller it is, the greater the risk that it loses its know-how and regresses.
Via cyberlabe
The philosophy of complexity is developing as a field of philosophical inquiry to accompany, support, and question advances in the science of complex systems.
Via Christophe Bredillet, Philippe Vallat
The Lyapunov exponent characterizes an exponential growth rate of the difference of nearby orbits. A positive Lyapunov exponent is a manifestation of chaos. Here, we propose the Lyapunov pair, which is based on the generalized Lyapunov exponent, as a unified characterization of nonexponential and exponential dynamical instabilities in one-dimensional maps. Chaos is classified into three different types, i.e., superexponential, exponential, and subexponential dynamical instabilities. Using one-dimensional maps, we demonstrate superexponential and subexponential chaos and quantify the dynamical instabilities by the Lyapunov pair. In subexponential chaos, we show superweak chaos, which means that the growth of the difference of nearby orbits is slower than a stretched exponential growth. The scaling of the growth is analytically studied by a recently developed theory of a continuous accumulation process, which is related to infinite ergodic theory.
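For reference, the ordinary (exponential) Lyapunov exponent that the Lyapunov pair generalizes is a time average of log|f'(x_n)| along an orbit. The sketch below computes it for the fully chaotic logistic map f(x) = 4x(1 − x), where the exact value is ln 2; the iterate count is an illustrative choice.

```python
import math

# Lyapunov exponent of a one-dimensional map as the orbit average of
# log|f'(x_n)|, shown for the logistic map f(x) = r*x*(1-x) at r = 4,
# where f'(x) = r*(1 - 2x) and the exact exponent is ln 2.

def lyapunov_logistic(r=4.0, x0=0.3, n=100000, transient=100):
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

lam = lyapunov_logistic()
print(round(lam, 3), "vs ln 2 =", round(math.log(2.0), 3))
```

Super- and subexponential instabilities are precisely the cases where this time average fails to capture the growth, which is what motivates the generalized exponent in the paper.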
The last decade and a half has seen an ardent development of self-organised criticality (SOC), a new approach to complex systems, which has become important in many domains of natural as well as social science, such as geology, biology, astronomy, and economics, to mention just a few. This has led many to adopt a generalist stance towards SOC, which is now repeatedly claimed to be a universal theory of complex behaviour. The aim of this paper is twofold. First, I provide a brief and nontechnical introduction to SOC. Second, I critically discuss the various bold claims that have been made in connection with it. Throughout, I will adopt a rather sober attitude and argue that some people have been too readily carried away by fancy contentions. My overall conclusion will be that none of these bold claims can be maintained. Nevertheless, stripped of exaggerated expectations and daring assertions, many SOC models are interesting vehicles for promising scientific research.
Progress in science often begins with verbal hypotheses meant to explain why certain biological phenomena exist. An important purpose of mathematical models in evolutionary research, as in many other fields, is to act as “proofofconcept” tests of the logic in verbal explanations, paralleling the way in which empirical data are used to test hypotheses. Because not all subfields of biology use mathematics for this purpose, misunderstandings of the function of proofofconcept modeling are common. In the hope of facilitating communication, we discuss the role of proofofconcept modeling in evolutionary biology. Servedio MR, Brandvain Y, Dhole S, Fitzpatrick CL, Goldberg EE, et al. (2014) Not Just a Theory—The Utility of Mathematical Models in Evolutionary Biology. PLoS Biol 12(12): e1002017. http://dx.doi.org/10.1371/journal.pbio.1002017
Via Complexity Digest
The robustness of complex networks against node failure and malicious attack has been of interest for decades, while most of the research has focused on random attack or hub-targeted attack. In many real-world scenarios, however, attacks are neither random nor hub-targeted, but localized, where a group of neighboring nodes in a network are attacked and fail. In this paper we develop a percolation framework to analytically and numerically study the robustness of complex networks against such localized attack. In particular, we investigate this robustness in Erdős-Rényi networks, random-regular networks, and scale-free networks. Our results provide insight into how to better protect networks, enhance cybersecurity, and facilitate the design of more robust infrastructures.
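A localized attack is easy to simulate directly, which complements the paper's percolation analysis. The sketch below removes a seed node and successive BFS shells around it from an Erdős-Rényi graph until a target fraction has failed, then measures the relative size of the largest surviving component; graph size, mean degree, and removal fractions are illustrative choices.

```python
import random
from collections import deque

def er_graph(n, avg_deg, rng):
    """Erdős-Rényi G(n, p) as an adjacency list, p chosen for avg_deg."""
    adj = [set() for _ in range(n)]
    p = avg_deg / (n - 1)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def localized_attack(adj, fraction, seed=0):
    """Remove BFS shells around `seed` until `fraction` of nodes fail."""
    n = len(adj)
    target = int(fraction * n)
    removed = set()
    queue = deque([seed])
    while len(removed) < target:
        if not queue:   # seed's component exhausted: restart elsewhere
            queue.append(next(v for v in range(n) if v not in removed))
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        queue.extend(sorted(adj[v] - removed))
    return removed

def giant_component(adj, removed):
    """Largest surviving component, as a fraction of the original n."""
    n, seen, best = len(adj), set(removed), 0
    for s in range(n):
        if s in seen:
            continue
        comp, stack = 0, [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            comp += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, comp)
    return best / n

rng = random.Random(1)
adj = er_graph(1000, 4.0, rng)
results = {}
for frac in (0.1, 0.5, 0.8):
    results[frac] = giant_component(adj, localized_attack(adj, frac))
    print(frac, round(results[frac], 3))
```

The giant component shrinks as the attacked ball grows and collapses once the surviving mean degree drops below the percolation threshold, the transition the paper's framework locates analytically.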
An increasing number of dissident voices claim that the standard neo-Darwinian view of genes as 'leaders' and phenotypes as 'followers' during the process of adaptive evolution should be turned on its head. This idea is older than the rediscovery of Mendel's laws of inheritance and has been given several names before its final 'Baldwin effect' label. A condition for this effect is that environmentally induced variation such as phenotypic plasticity or learning is crucial for the initial establishment of a population. This gives the necessary time for natural selection to act on genetic variation, and the adaptive trait can be eventually encoded in the genotype. An influential paper published in the late 1980s showed the Baldwin effect to happen in computer simulations, and claimed that it was crucial to solve a difficult adaptive task. This generated much excitement among scholars in various disciplines that regard neo-Darwinian accounts to explain the evolutionary emergence of high-order phenotypic traits such as consciousness or language almost hopeless. Here, we use analytical and computational approaches to show that a standard population genetics treatment can easily crack what the scientific community has granted as an unsolvable adaptive problem without learning. The Baldwin effect is once again in need of convincing theoretical foundations. Phenotypic Plasticity, the Baldwin Effect, and the Speeding up of Evolution: the Computational Roots of an Illusion. Mauro Santos, Eörs Szathmáry, José F. Fontanari. http://arxiv.org/abs/1411.6843
Via Complexity Digest
