Complex systems present problems in both mathematical modelling and philosophical foundations. The study of complex systems represents a new approach to science that investigates how relationships between parts give rise to the collective behaviors of a system and how the system interacts and forms relationships with its environment. The equations from which models of complex systems are developed generally derive from statistical physics, information theory and non-linear dynamics, and represent organized but unpredictable behaviors of natural systems that are considered fundamentally complex.
We review some of the history and early work in the area of synchronization in chaotic systems. We start with our own discovery of the phenomenon, but go on to establish the historical timeline of this topic back to the earliest known paper. The topic of synchronization of chaotic systems has always been intriguing, since chaotic systems are known to resist synchronization because of their positive Lyapunov exponents. The convergence of two such systems to identical trajectories is a surprise. We show how people originally thought about this process and how the concept of synchronization changed over the years to a more geometric view using synchronization manifolds. We also show that building synchronizing systems leads naturally to engineering more complex systems whose constituents are chaotic, but which can be tuned to output various chaotic signals. We end with a topic that is still under very active exploration today: the synchronization of dynamical systems in networks of oscillators.
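The drive-response idea behind early chaotic synchronization can be sketched in a few lines of Python. This is an illustrative sketch, not any specific paper's code: it assumes the classical Lorenz parameters, naive Euler stepping, and Pecora-Carroll x-coupling, in which a response copy of the (y, z) subsystem is fed the drive's x signal and collapses onto the drive's trajectory despite a different initial condition.

```python
# Minimal Pecora-Carroll drive-response sketch with the Lorenz system.
# The driven (y, z) subsystem has negative conditional Lyapunov exponents,
# so the response converges to the drive's trajectory.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt = 0.001

x, y, z = 1.0, 1.0, 1.0   # drive system
y2, z2 = -5.0, 20.0       # response (y, z) subsystem, deliberately far away

for _ in range(30000):    # 30 time units of simple Euler stepping
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    dy2 = x * (rho - z2) - y2   # response sees the drive's x, not its own
    dz2 = x * y2 - beta * z2
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    y2, z2 = y2 + dt * dy2, z2 + dt * dz2

print(abs(y2 - y), abs(z2 - z))  # both differences shrink toward zero
```

Despite each system being chaotic on its own, the coupling makes the trajectory difference contract rather than grow, which is exactly the surprise the abstract describes.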
I frequently talk to groups of managers on the nature of systems thinking and its radical implications for management. In doing so I use several case studies involving prominent American corporations. At the end of the presentation I am almost always asked, "If this way of thinking is as good as you say it is, why don't more organizations use it?" It is easy to reply by saying that organizations naturally resist change. This of course is a tautology. I once asked a vice president of marketing why consumers used his product. He answered, "Because they like it." I then asked him how he knew this. He answered, "Because they use it." Our answer to the question about the failure of organizations to adopt systems thinking is seldom any better than this. There may be many reasons why any particular organization fails to adopt systems thinking, but I believe two are the most important: one general and one specific. By a general reason I mean one that is responsible for organizations failing to adopt any transforming idea, let alone systems thinking. By a specific reason I mean one responsible for the failure to adopt systems thinking in particular.
The Constitution of the United States empowers the Congress to pass copyright laws to promote knowledge creation in society, and more specifically scientific knowledge. Many interesting economic studies have been conducted on copyright law, but very little research has been done to study the impact of the law on knowledge creation. In this paper we develop and analyze an agent-based model to investigate the impact of copyright on the creation and discovery of new knowledge. The model suggests that, for the most part, the extension of the copyright term hinders scholars in producing new knowledge. Furthermore, extending the copyright term tends to harm everyone, including scholars who have access to all published articles in the research field. However, we also identify situations where extending the copyright term promotes rather than hinders knowledge creation. Additionally, scholars who publish copyrighted materials tend to outperform those who do not, creating a potential tension between individual incentives and the public good.
Scholars and urban planners have suggested that the key characteristic of leading world cities is that they attract the highest-quality human talent through educational and professional opportunities. They offer enabling environments for productive human interactions and for the growth of knowledge-based industries, which drives economic growth through innovation. Through both hard and soft infrastructure, they offer physical connectivity, which fosters human creativity and results in higher income levels. When combined with population density, socio-economic diversity and societal tolerance, the elevated interaction intensity diffuses creativity and improves productivity. In many developing-country cities, however, rapid urbanization is increasing sprawl and causing deterioration in public services. We operationalize these insights by creating a stylized agent-based model where heterogeneous and independent decision-making agents interact under the following three scenarios: (1) improved urban transportation investments; (2) mixed land-use regulations; and (3) reduced residential segregation. We find that any combination of these scenarios results in greater population density and enables the diffusion of creativity, thus resulting in economic growth. However, the results demonstrate a clear trade-off between rapid economic progress and socioeconomic equity, mainly due to the crowding out of low- and middle-income households from clusters of creativity.
The question "What is Complexity?" has occupied a great deal of time and paper over the last 20 or so years. There are myriad different perspectives and definitions, but still no consensus. In this paper I take a phenomenological approach, identifying several factors that discriminate well between systems that would be consensually agreed to be simple and others that would be consensually agreed to be complex - biological systems and human languages. I argue that a crucial component is that of structural building-block hierarchies that, in the case of complex systems, correspond also to a functional hierarchy. I argue that complexity is an emergent property of this structural/functional hierarchy, induced by a property - fitness in the case of biological systems and meaning in the case of languages - that links the elements of this hierarchy across multiple scales. Additionally, I argue that non-complex systems "are" while complex systems "do", so that the latter, in distinction to physical systems, must be described not only in a space of states but also in a space of update rules (strategies), which we do not know how to specify. Further, the existence of structural/functional building-block hierarchies allows for the functional specialisation of structural modules, as amply observed in nature. Finally, I argue that there is at least one measuring apparatus capable of measuring complexity as characterised in the paper - the human brain itself.
During the 1960s, but mainly in the 1970s, large mathematical dynamic global models were implemented on computers to simulate the entire world, or large portions of it. Several different but interrelated subjects were considered simultaneously, and their variables evolved over time in an attempt to forecast the future, with time horizons of decades. Global models continued to be developed while evidencing an increasing bias towards environmental aspects, or at least the public impact of models with such a focus became prevalent. In this paper we analyze the early evolution of computer-based global modeling and provide insights into lesser-known pioneering work by South American modelers in the 1960s (Varsavsky and collaborators). We revisit relevant methodological aspects and discuss how they influenced different modeling endeavors. Finally, we overview how distinctive systemic approaches in global modeling evolved into the currently well-established discipline of complex systems.
Motivated by the possibility of optimizing the modelling of population evolution, we postulate a generalization of the well-known logistic map. The generalized difference equation reads:

x_{n+1} = r x_n^p (1 - x_n^q), x ∈ [0,1], p, q > 0, n = 0, 1, 2, ...,

where the two new parameters p and q may assume any positive values. The standard logistic map corresponds to the case p = q = 1. For this generalized equation we illustrate the character of the transition from regularity to chaos as a function of r for the whole spectrum of p and q parameters. As an example we consider the case p = 1, q = 2 in both the periodic and the chaotic regime. We focus on the character of the corresponding bifurcation sequence and on the quantitative nature of the resulting attractor, as well as its universal attribute (the Feigenbaum constant).
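A minimal numerical sketch of such a generalized map, assuming the form x_{n+1} = r·x^p·(1 - x^q) (an assumption consistent with the stated special case p = q = 1 reducing to the standard logistic map):

```python
def gmap(x, r, p, q):
    # One step of the assumed generalized logistic map:
    # x_{n+1} = r * x^p * (1 - x^q); p = q = 1 recovers r*x*(1-x)
    return r * x**p * (1.0 - x**q)

def iterate(x0, r, p, q, n):
    x = x0
    for _ in range(n):
        x = gmap(x, r, p, q)
    return x

# Sanity check: the p = q = 1 case is the standard logistic map
assert abs(gmap(0.3, 3.5, 1, 1) - 3.5 * 0.3 * (1 - 0.3)) < 1e-12

# p = 1, q = 2 below the chaotic regime: the orbit settles on the
# fixed point x* = sqrt(1 - 1/r), obtained by solving x = r*x*(1 - x**2)
x_star = (1 - 1 / 1.8) ** 0.5
print(iterate(0.3, 1.8, 1, 2, 2000), x_star)
```

Sweeping r while recording the long-run iterates of gmap produces the bifurcation diagrams whose structure the abstract discusses.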
It has been hypothesized that in the era just before the last universal common ancestor emerged, life on earth was fundamentally collective. Ancient life forms shared their genetic material freely through massive horizontal gene transfer (HGT). At a certain point, however, life made a transition to the modern era of individuality and vertical descent. Here we present a minimal model for this hypothesized "Darwinian transition." The model suggests that HGT-dominated dynamics may have been intermittently interrupted by selection-driven processes during which genotypes became fitter and decreased their inclination toward HGT. Stochastic switching in the population dynamics with three-point (hypernetwork) interactions may have destabilized the HGT-dominated collective state and led to the emergence of vertical descent and the first well-defined species in early evolution. A nonlinear analysis of a stochastic model dynamics covering key features of evolutionary processes (such as selection, mutation, drift and HGT) supports this view. Our findings thus suggest a viable route from early collective evolution to the start of individuality and vertical Darwinian evolution, enabling the emergence of the first species.
Novelties are a familiar part of daily life. They are also fundamental to the evolution of biological systems, human society, and technology. By opening new possibilities, one novelty can pave the way for others in a process that Kauffman has called “expanding the adjacent possible”. The dynamics of correlated novelties, however, have yet to be quantified empirically or modeled mathematically. Here we propose a simple mathematical model that mimics the process of exploring a physical, biological, or conceptual space that enlarges whenever a novelty occurs. The model, a generalization of Polya's urn, predicts statistical laws for the rate at which novelties happen (Heaps' law) and for the probability distribution on the space explored (Zipf's law), as well as signatures of the process by which one novelty sets the stage for another. We test these predictions on four data sets of human activity: the edit events of Wikipedia pages, the emergence of tags in annotation systems, the sequence of words in texts, and listening to new songs in online music catalogues. By quantifying the dynamics of correlated novelties, our results provide a starting point for a deeper understanding of the adjacent possible and its role in biological, cultural, and technological evolution.
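The generalized Polya urn described here is straightforward to simulate. The sketch below is a hedged illustration of an urn "with triggering": the parameter names rho (reinforcement per draw) and nu (new colors introduced by each novelty) are assumptions for this sketch, and the exact parameterization in the paper may differ.

```python
import random

def urn_with_triggering(steps, rho=4, nu=2, seed=1):
    # Polya urn with triggering: every draw adds rho copies of the drawn
    # color (reinforcement); drawing a never-seen color (a novelty) adds
    # nu + 1 brand-new colors, expanding the "adjacent possible".
    random.seed(seed)
    urn = list(range(nu + 1))        # initial colors
    seen = set()                     # colors drawn at least once
    next_color = nu + 1
    discovered = []                  # D(t): distinct colors seen up to draw t
    for _ in range(steps):
        c = random.choice(urn)
        urn.extend([c] * rho)        # reinforcement
        if c not in seen:            # novelty: enlarge the space
            seen.add(c)
            urn.extend(range(next_color, next_color + nu + 1))
            next_color += nu + 1
        discovered.append(len(seen))
    return discovered

D = urn_with_triggering(5000)
print(D[-1])   # distinct colors after 5000 draws: sublinear, Heaps-like growth
```

Plotting D(t) against t on log-log axes exhibits the Heaps'-law power-law growth that the model predicts, and the color frequencies follow a Zipf-like distribution.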
The rapid changes occurring in the higher education domain are placing increasing pressure on the actors in this space to focus efforts on identifying and adopting strategies for success. One particular group of interest is academics or scientists, and the ways that these individuals, or collectives such as institutional or discipline-based science systems, make decisions about how best to achieve success in their chosen field. The agent-based model and simulation that we present draws on the hypothetical "strategic publication model" proposed by Mölders, Fink and Weyer (2011), and extends this work by defining experimental settings to implement a prototype ABMS in NetLogo. While considerable work remains to fully resolve theoretical issues relating to the scope, calibration and validation of the model, this work goes some way toward resolving some of the details associated with defining appropriate experimental settings. Also presented are the results of four experiments that explore the emergent effects that result from varying the strategic mix of actors in the system.
Cooperation lies at the foundations of human societies, yet why people cooperate remains a conundrum. The issue of whether population structure can foster cooperative behavior in social dilemmas, known as network reciprocity, has been addressed by many, but theoretical studies have so far yielded contradictory results, as the problem is very sensitive to how players adapt their strategy. However, recent experiments with the prisoner's dilemma game played on different networks and in a specific range of payoffs suggest that humans, at least in those experimental setups, do not consider neighbors' payoffs when making their decisions, and that the network structure does not influence the final outcome. In this work we carry out an extensive analysis of different evolutionary dynamics, taking into account most of the alternatives that have been proposed so far to implement players' strategy-updating process. In this manner we show that the absence of network reciprocity is a general feature of those dynamics (among the ones we consider) that do not take neighbors' payoffs into account. Our results, together with the experimental evidence, hint at how to properly model real people's behavior.
The world is changing at an ever-increasing pace. And it has changed in a much more fundamental way than one would think, primarily because it has become more connected and interdependent than in our entire history. Every new product, every new invention can be combined with those that existed before, thereby creating an explosion of complexity: structural complexity, dynamic complexity, functional complexity, and algorithmic complexity. How to respond to this challenge?
Responding to Complexity in Socio-Economic Systems: How to Build a Smart and Resilient Society?
In this paper, we propose a novel methodology for automatically finding new chaotic attractors through a computational intelligence technique known as multi-gene genetic programming (MGGP). We apply this technique to the case of the Lorenz attractor and evolve several new chaotic attractors based on the basic Lorenz template. The MGGP algorithm automatically finds new nonlinear expressions for the different state variables starting from the original Lorenz system. The Lyapunov exponents of each of the attractors are calculated numerically from the time series of the state variables using time-delay embedding techniques. The MGGP algorithm searches the functional space of attractors, aiming to maximise the largest Lyapunov exponent (LLE) of the evolved attractors. To demonstrate the potential of the proposed methodology, we report over one hundred new chaotic attractor structures, along with their parameters, evolved from the Lorenz system alone.
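The fitness signal used here, the largest Lyapunov exponent, can be estimated cheaply when the equations are known. The sketch below is not the paper's delay-embedding estimator; it is a simpler two-trajectory, Benettin-style estimate on the classical Lorenz system, with assumed Euler stepping and renormalization interval.

```python
import math

def lorenz_step(s, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One Euler step of the classical Lorenz system
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

dt, d0 = 0.001, 1e-8
a = (1.0, 1.0, 1.0)
for _ in range(20000):           # discard the transient onto the attractor
    a = lorenz_step(a, dt)

b = (a[0] + d0, a[1], a[2])      # perturbed companion trajectory
acc, steps, renorm = 0.0, 100000, 10
for k in range(steps):
    a = lorenz_step(a, dt)
    b = lorenz_step(b, dt)
    if (k + 1) % renorm == 0:
        d = math.dist(a, b)
        acc += math.log(d / d0)
        # pull b back to distance d0 along the current separation direction
        b = tuple(a[i] + (b[i] - a[i]) * (d0 / d) for i in range(3))

lle = acc / (steps * dt)
print(lle)   # roughly 0.9 for the classical Lorenz parameters
```

An evolutionary search like MGGP can score each candidate system with such an estimate and keep the candidates whose LLE is largest.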
By performing a systematic study of the Hénon map, we find low-period sinks for parameter values extremely close to the classical ones. This raises the question whether or not the well-known Hénon attractor—the attractor of the Hénon map existing for the classical parameter values—is a strange attractor, or simply a stable periodic orbit. Using results from our study, we conclude that even if the latter were true, it would be practically impossible to establish this by computing trajectories of the map.
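The two numerical facts behind this conclusion are easy to reproduce: Hénon orbits at the classical parameters a = 1.4, b = 0.3 stay on a bounded attracting set, yet nearby trajectories separate exponentially, so finite-precision computation cannot distinguish a strange attractor from an extremely long-period sink. A minimal sketch:

```python
def henon(x, y, a=1.4, b=0.3):
    # One step of the Hénon map at (by default) the classical parameters
    return 1.0 - a * x * x + y, b * x

# A long orbit from the origin stays on the familiar bounded set
x, y = 0.0, 0.0
pts = []
for _ in range(10000):
    x, y = henon(x, y)
    pts.append((x, y))
print(max(abs(px) for px, py in pts), max(abs(py) for px, py in pts))

# Sensitive dependence: a 1e-8 offset is amplified to macroscopic size
x1, y1 = 0.1, 0.1
x2, y2 = 0.1 + 1e-8, 0.1
sep = 0.0
for _ in range(100):
    x1, y1 = henon(x1, y1)
    x2, y2 = henon(x2, y2)
    sep = max(sep, abs(x1 - x2))
print(sep)   # order-one separation despite the tiny initial offset
```

Since round-off plays the role of that tiny offset, any computed trajectory shadows the true dynamics for only a short stretch, which is the practical obstruction the abstract points out.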
We analyze the replicator-mutator equations for the Rock-Paper-Scissors game. Various graph-theoretic patterns of mutation are considered, ranging from a single unidirectional mutation pathway between two of the species, to global bidirectional mutation among all the species. Our main result is that the coexistence state, in which all three species exist in equilibrium, can be destabilized by arbitrarily small mutation rates. After it loses stability, the coexistence state gives birth to a stable limit cycle solution created in a supercritical Hopf bifurcation. This attracting periodic solution exists for all the mutation patterns considered, and persists arbitrarily close to the limit of zero mutation rate and a zero-sum game.
Nonlinear Dynamics of the Rock-Paper-Scissors Game with Mutations, by Danielle F. P. Toupo and Steven H. Strogatz
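The flavor of these dynamics can be sketched with plain replicator dynamics plus a simple symmetric mutation term. This is an assumed stand-in, not the paper's equations: the replicator-mutator system studied there has a richer mutation structure and, unlike the diffusive term below (which damps oscillations toward the coexistence point), can destabilize coexistence into a limit cycle.

```python
# Replicator dynamics for zero-sum Rock-Paper-Scissors with a symmetric
# mutation flow mu*(x_j - x_i) between every pair of strategies.
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]          # rock, paper, scissors payoffs (zero-sum)

def deriv(x, mu):
    f = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    phi = sum(x[i] * f[i] for i in range(3))        # mean payoff
    # replicator term + global bidirectional mutation:
    # sum over j != i of (x_j - x_i) equals 1 - 3*x_i on the simplex
    return [x[i] * (f[i] - phi) + mu * (1 - 3 * x[i]) for i in range(3)]

x = [0.5, 0.3, 0.2]
dt, mu = 0.01, 0.05
for _ in range(5000):                                # Euler-integrate to t = 50
    d = deriv(x, mu)
    x = [x[i] + dt * d[i] for i in range(3)]

print(x)   # stays on the simplex, near the coexistence point (1/3, 1/3, 1/3)
```

The contrast is instructive: whether small mutation stabilizes or destabilizes the coexistence state depends entirely on the mutation structure, which is the sensitivity the paper analyzes.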
In social dilemmas, punishment costs resources, not only for the one who is punished but often also for the punisher and for society. Reciprocity, on the other hand, is known to lead to cooperation without the costs of punishment. The questions at hand are whether punishment brings advantages despite its costs, and how its negative side effects can be reduced to a minimum in an environment populated by agents adopting a form of reciprocity. Various punishment mechanisms have been studied in the economic literature, such as unrestricted punishment, legitimate punishment, cooperative punishment, and the hired-gun mechanism. In this study all these mechanisms are implemented in a simulation where agents can share resources and may decide to punish other agents who do not share. Through evolutionary learning, agents adapt their sharing/punishing policy. When the availability of resources was restricted, punishment mechanisms in general performed better than no punishment, although unrestricted punishment performed worse. When resource availability was high, performance was better in no-punishment conditions with indirect reciprocity. Unrestricted punishment was always the worst-performing mechanism. In summary, this paper shows that, in certain environments, some punishment mechanisms can improve the efficiency of cooperation even if the cooperating system is already based on indirect reciprocity.
The multi-agent-based programming, modeling and simulation environment NetLogo has been used extensively during the last fifteen years for educational purposes, among others. The learning subject, upon interacting with the user interface of NetLogo, can easily study properties of the simulated natural systems, as well as observe the latter's response when altering their parameters. In this research, NetLogo was used from the perspective that the learning subject (student or prospective teacher) interacts with the model in a deeper way, taking on the role of an agent. This is achieved not by obliging the learner to program (write NetLogo code) but by interviewing them and applying the choices that they make to the model. The scheme was carried out, as part of a broader research project, with interviews and web-page-like interface menu selections, on a sample of 17 university students in Athens (prospective primary school teachers), and the results were judged encouraging. At a further stage, the computers were set up as a network where all the agents performed together. In this way the learners could watch on screen the overall outcome of their choices and actions on the modeled ecosystem. This seems to open a new, small area of research in NetLogo educational applications.
In a study published in the journal Nature, a team from the Institut des sciences de l'évolution de Montpellier (CNRS/IRD/Université de Montpellier 2) experimentally confirmed the hypothesis that the size of a population directly influences its capacity to transmit cultural traits. The larger a population, the better it can transmit knowledge and techniques, and also innovate; the smaller it is, the more it risks losing its know-how and regressing.
The Lyapunov exponent characterizes an exponential growth rate of the difference of nearby orbits. A positive Lyapunov exponent is a manifestation of chaos. Here, we propose the Lyapunov pair, which is based on the generalized Lyapunov exponent, as a unified characterization of non-exponential and exponential dynamical instabilities in one-dimensional maps. Chaos is classified into three different types, i.e., super-exponential, exponential, and sub-exponential dynamical instabilities. Using one-dimensional maps, we demonstrate super-exponential and sub-exponential chaos and quantify the dynamical instabilities by the Lyapunov pair. In sub-exponential chaos, we show super-weak chaos, which means that the growth of the difference of nearby orbits is slower than a stretched exponential growth. The scaling of the growth is analytically studied by a recently developed theory of a continuous accumulation process, which is related to infinite ergodic theory.
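The standard (exponential) Lyapunov exponent that the Lyapunov pair generalizes can be estimated for a one-dimensional map by averaging log-derivatives along an orbit. A sketch for the fully chaotic logistic map at r = 4, where the exact value is ln 2:

```python
import math

def lyapunov_logistic(r, x0=0.2, n=100000, burn=1000):
    # Finite-time estimate of lambda = lim (1/n) * sum_k ln|f'(x_k)|
    # for the logistic map f(x) = r*x*(1-x), with f'(x) = r*(1-2x)
    x = x0
    for _ in range(burn):        # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        s += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return s / n

lam = lyapunov_logistic(4.0)
print(lam)   # close to ln 2 ≈ 0.693: exponential instability, i.e. chaos
```

In sub- or super-exponential chaos this single number degenerates to zero or infinity, which is exactly why the generalized construction described in the abstract is needed.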