Scale independence is a ubiquitous feature of complex systems which implies a highly skewed distribution of resources with no characteristic scale. Research has long focused on why systems as varied as protein networks, evolution and stock markets all feature scale independence. Assuming that they simply do, we focus here on describing exactly how this behavior emerges. We show that growing towards scale independence implies strict constraints: the first is the well-known preferential attachment principle and the second is a new form of temporal scaling. These constraints pave a precise evolution path, such that an instantaneous snapshot of a distribution is enough to reconstruct its past and to predict its future. We validate our approach on diverse spheres of human activity, ranging from scientific and artistic productivity to sexual relations and online traffic.
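The preferential attachment principle the abstract names can be illustrated with a short simulation (a generic textbook sketch, not the authors' model): new nodes link to existing nodes with probability proportional to degree, and hubs emerge.

```python
import random

def preferential_attachment(n, m=2, seed=0):
    """Grow a network one node at a time; each new node links to m
    existing nodes chosen with probability proportional to degree."""
    rng = random.Random(seed)
    # Start from a small clique so every node has nonzero degree.
    edges = [(0, 1), (0, 2), (1, 2)]
    # Listing each endpoint once per edge makes a uniform draw from
    # this list equivalent to degree-proportional sampling.
    targets = [u for e in edges for u in e]
    for new in range(3, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges
```

Running this for a few thousand nodes yields a heavy-tailed degree distribution: most nodes keep the minimum degree while a few early nodes accumulate orders of magnitude more links.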
Is our relentless quest for economic growth killing the planet? Climate scientists have seen the data, and they are coming to some incendiary conclusions. Complex-systems scientist Brad Werner says his research shows that our entire economic paradigm is a threat to ecological stability, and indeed that challenging this paradigm, through mass-movement counter-pressure, is humanity's best shot at avoiding catastrophe.
Neuroscientists have come up with a mathematical equation that may help predict calamities such as financial crashes in economic systems and epileptic seizures in the brain.
Adaptive networks are a novel class of dynamical networks whose topologies and states coevolve. Many real-world complex systems can be modeled as adaptive networks, including social networks, transportation networks, neural networks and biological networks. In this paper, we introduce fundamental concepts and unique properties of adaptive networks through a brief, non-comprehensive review of recent literature on mathematical/computational modeling and analysis of such networks. We also report our recent work on several applications of computational adaptive network modeling and analysis to real-world problems, including temporal development of search and rescue operational networks, automated rule discovery from empirical network evolution data, and cultural integration in corporate merger. Modeling complex systems with adaptive networks. Hiroki Sayama, Irene Pestov, Jeffrey Schmidt, Benjamin James Bush, Chun Wong, Junichi Yamanoi, Thilo Gross. Computers & Mathematics with Applications, In Press, Corrected Proof. http://dx.doi.org/10.1016/j.camwa.2012.12.005
Via Complexity Digest, NESS, Complejidad y Economía
There is mounting evidence of the apparent ubiquity of scale-free networks among complex systems. Many natural and physical systems exhibit patterns of interconnection that conform, approximately, to the structure expected of a scale-free network. We propose an efficient algorithm to generate representative samples from the space of all networks defined by a particular (scale-free) degree distribution. Using this algorithm we are able to systematically explore that space with some surprising results: in particular, we find that preferential attachment growth models do not yield typical realizations and that there is a certain latent structure among such networks, which we loosely term "hub-centric". We provide a method to generate or remove this latent hub-centric bias, thereby demonstrating exactly which features of preferential attachment networks are atypical of the broader class of scale-free networks. Based on these results we are also able to statistically determine whether experimentally observed networks are really typical realizations of a given degree distribution (scale-free degree being the example which we explore). In so doing we propose a surrogate generation method for complex networks, exactly analogous to the widely used surrogate tests of nonlinear time series analysis.
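The simplest way to sample a network with a prescribed degree sequence is the classic configuration model: pair up edge "stubs" uniformly at random. The paper's algorithm is more careful (it handles self-loops, multi-edges and uniformity over simple graphs), but a minimal sketch of the idea looks like this:

```python
import random

def configuration_model(degrees, seed=0):
    """Sample a random (multi)graph with exactly the given degree
    sequence by pairing up edge 'stubs' uniformly at random.
    Self-loops and multi-edges are not filtered out here."""
    rng = random.Random(seed)
    stubs = [node for node, k in enumerate(degrees) for _ in range(k)]
    assert len(stubs) % 2 == 0, "degree sum must be even"
    rng.shuffle(stubs)
    # Pair consecutive stubs to form edges.
    return list(zip(stubs[::2], stubs[1::2]))
```

Feeding in a power-law degree sequence gives one surrogate network; repeating with different seeds explores the space of networks sharing that degree distribution, which is exactly the comparison the abstract describes.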
Around 1970, Stafford Beer developed the viable system model to diagnose the faults in any organizational system, programme, institution, nation, or enterprise. It uses cybernetics, which – according to its originator Norbert Wiener – is the study of control and communication in the animal and the machine, but there are many other interesting definitions. The central question could be formulated as: How can organizational efficacy be maintained? How can organizations sustain their own existence? Or: How do organizations create viability?
World food supply is crucial to the wellbeing of every human on the planet in the basic sense that we need food to live. It also has a profound impact on the world economy, international trade and global political stability. Furthermore, consumption of certain types and amounts of food can affect health, and the choice of livestock and plants for food production can impact sustainable use of global resources. There are communities where insufficient food causes nutritional deficiencies, while at the same time other communities eat too much food, leading to obesity and accompanying diseases. These aspects reflect the utmost importance of agricultural production and conversion of commodities to food products. Moreover, all factors contributing to the food supply are interdependent, and they are an integral part of the continuously changing, adaptive and interdependent systems in the world around us. The properties of such interdependent systems usually cannot be inferred from the properties of their parts. In addressing current challenges, like the apparent incongruence of obesity and hunger, we have to account for the complex interdependencies among areas as distant as physics and sociology. This is possible using the complex system approach, which encompasses an integrative multiscale and interdisciplinary perspective. A complex system approach that accounts for the needs of stakeholders in the agriculture and food domain, and that determines which research programs will enable these stakeholders to better anticipate emerging developments in the world around them, will allow them to devise effective intervention strategies that simultaneously optimise and safeguard their interests and those of the environment. Using a complex system approach to address world challenges in Food and Agriculture. H.G.J. van Mil, E.A. Foegeding, E.J. Windhab, N. Perrot, E. van der Linden. http://arxiv.org/abs/1309.0614
Via Complexity Digest
How did human societies evolve from small groups, integrated by face-to-face cooperation, to the huge anonymous societies of today, typically organized as states? Why is there so much variation in the ability of different human populations to construct viable states? Existing theories are usually formulated as verbal models and, as a result, do not yield sharply defined, quantitative predictions that could be unambiguously tested with data. Here we develop a cultural evolutionary model that predicts where and when the largest-scale complex societies arose in human history. The central premise of the model, which we test, is that costly institutions that enabled large human groups to function without splitting up evolved as a result of intense competition between societies, primarily warfare. Warfare intensity, in turn, depended on the spread of historically attested military technologies (e.g., chariots and cavalry) and on geographic factors (e.g., rugged landscape). The model was simulated within a realistic landscape of the Afroeurasian landmass and its predictions were tested against a large dataset documenting the spatiotemporal distribution of historical large-scale societies in Afroeurasia between 1,500 BCE and 1,500 CE. The model-predicted pattern of spread of large-scale societies was very similar to the observed one. Overall, the model explained 65% of the variance in the data. An alternative model, omitting the effect of diffusing military technologies, explained only 16% of the variance. Our results support theories that emphasize the role of institutions in state-building and suggest a possible explanation of why a long history of statehood is positively correlated with political stability, institutional quality, and income per capita.
While new forms of attacks are developed every day to compromise essential infrastructures, service providers are also expected to develop strategies to mitigate the risk of extreme failures. In this context, tools of Network Science have been used to evaluate network robustness and propose resilient topologies against attacks. We present here a new rewiring method to modify the network topology improving its robustness, based on the evolution of the network's largest component during a sequence of targeted attacks. In comparison to previous strategies, our method lowers by several orders of magnitude the computational effort necessary to improve robustness. Our rewiring also drives the formation of layers of nodes with similar degree while keeping a highly modular structure. This "modular onion-like structure" is a particular class of the onion-like structure previously described in the literature. We apply our rewiring strategy to an unweighted representation of the World Air Transportation network and show that an improvement of 30% in its overall robustness can be achieved through smart swaps of around 9% of its links.
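The robustness score such rewiring methods optimize is commonly the area under the largest-component curve during a degree-targeted attack (a standard measure from the network-robustness literature; the paper's exact variant may differ). A minimal sketch:

```python
from collections import defaultdict

def largest_component(nodes, adj):
    """Size of the largest connected component among `nodes`."""
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

def attack_robustness(edges):
    """Mean fraction of nodes in the largest component as nodes are
    removed in order of decreasing (initial) degree."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = set(adj)
    n = len(nodes)
    order = sorted(nodes, key=lambda u: -len(adj[u]))
    total = 0.0
    for u in order:
        nodes.discard(u)
        if nodes:
            total += largest_component(nodes, adj) / n
    return total / n
```

A star network scores poorly (removing the hub shatters it at once), while a path of the same size degrades gradually; rewiring toward onion-like layers raises this score.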
Why are fast–slow oscillations ubiquitous in biology? We can think of this as homeostasis of a higher order. Biology is replete with negative feedback. Almost every reaction is inhibited directly or indirectly by its product, which keeps concentrations within narrow bounds. However, if the feedback is slow and some source of positive feedback is available, the system can undergo transients before returning to rest, and under the right conditions this can result in repeated oscillations. When these oscillations are useful, they can be fixed by evolution.
None of the ideas presented here are new. They are old hat to mathematical biologists, although little known to non-mathematical biologists. Physiology, especially electrophysiology, has had a long symbiotic relationship with dynamic modeling because of the early development of techniques for monitoring time-dependent behavior with high time resolution. There was not much of a field of calcium modeling before the invention of imaging techniques, which revealed a wealth of dynamic phenomena such as oscillations and waves. When experimentalists turned to theorists for help in understanding these phenomena, a large repertoire of models was ready to hand, one further enriched by new examples and by the challenge of integrating the calcium and electrical subsystems in cells.
As biology forges ahead and live-cell imaging techniques reveal the temporal complexity of more and more cell-signaling mechanisms, dynamical systems theory will be essential. Luckily, there are many accessible sources aimed at bringing this theory within the grasp of experimentalists (Keener and Sneyd, 1998; Rinzel and Ermentrout, 1998; Fall et al., 2002; Tyson et al., 2003; Izhikevich, 2010) and many theorists with the deep grounding in biology needed to lend a hand. Even if your own work has yet to be touched by these developments, be on the lookout: they are coming to a biological system near you.
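The slow-negative-plus-fast-positive feedback mechanism described above is captured by the classic FitzHugh-Nagumo model, a stock example from the dynamical-systems texts cited here (parameters below are the textbook defaults, not taken from this essay):

```python
def fitzhugh_nagumo(steps=100_000, dt=0.01, I=0.5):
    """Euler integration of the FitzHugh-Nagumo relaxation oscillator.
    The cubic term in v is the fast positive feedback; the recovery
    variable w is the slow negative feedback."""
    v, w = -1.0, 1.0
    trace = []
    for _ in range(steps):
        dv = v - v ** 3 / 3 - w + I          # fast excitable variable
        dw = 0.08 * (v + 0.7 - 0.8 * w)      # slow recovery variable
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace
```

With a constant drive I in the oscillatory range, v spikes repeatedly: each fast upstroke is eventually pulled down by the slowly accumulating w, exactly the "transients before returning to rest" that repeat as a limit cycle.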

Power laws are theoretically interesting probability distributions that are also frequently used to describe empirical data. In recent years effective statistical methods for fitting power laws have been developed, but appropriate use of these techniques requires significant programming and statistical insight. In order to greatly decrease the barriers to using good statistical methods for fitting power-law distributions, we developed the powerlaw Python package. This software package provides easy commands for basic fitting and statistical analysis of distributions. Notably, it also seeks to support a variety of user needs by being exhaustive in the options available to the user. The source code is publicly available and easily extensible.
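The statistical core the package wraps is the continuous maximum-likelihood estimator for the exponent (the Clauset-Shalizi-Newman estimator). A minimal pure-Python sketch, not the package's own API:

```python
import math

def fit_alpha(data, xmin):
    """MLE for a continuous power law p(x) ~ x^(-alpha), x >= xmin:
    alpha = 1 + n / sum(ln(x / xmin)) over the tail."""
    tail = [x for x in data if x >= xmin]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / xmin) for x in tail)
```

The package goes well beyond this one formula: it also selects xmin automatically and provides likelihood-ratio comparisons against alternative heavy-tailed distributions.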
Many real-world, complex phenomena have underlying structures of evolving networks where nodes and links are added and removed over time. A central scientific challenge is the description and explanation of network dynamics, with a key test being the prediction of short- and long-term changes. For the problem of short-term link prediction, existing methods attempt to determine neighborhood metrics that correlate with the appearance of a link in the next observation period. Recent work has suggested that the incorporation of topological features and node attributes can improve link prediction. We provide an approach to predicting future links by applying the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to optimize weights which are used in a linear combination of sixteen neighborhood and node similarity indices. We examine a large dynamic social network with over 10^6 nodes (Twitter reciprocal reply networks), both as a test of our general method and as a problem of scientific interest in itself. Our method exhibits fast convergence and high levels of precision for the top twenty predicted links, and to our knowledge, strongly outperforms all extant methods. Based on our findings, we suggest possible factors which may be driving the evolution of Twitter reciprocal reply networks.
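The scoring scaffold the abstract describes, a weighted linear combination of similarity indices over candidate node pairs, can be sketched with just two of the classic indices (the paper uses sixteen and learns the weights with CMA-ES; the two indices and hand-set weights below are illustrative only):

```python
from itertools import combinations

def link_scores(adj, weights):
    """Score each non-edge (u, v) by a weighted sum of two classic
    neighborhood indices: common neighbors and Jaccard similarity.
    `adj` maps each node to its set of neighbors."""
    scores = {}
    for u, v in combinations(sorted(adj), 2):
        if v in adj[u]:
            continue  # already linked; nothing to predict
        common = len(adj[u] & adj[v])
        union = len(adj[u] | adj[v])
        jaccard = common / union if union else 0.0
        scores[(u, v)] = weights[0] * common + weights[1] * jaccard
    return scores
```

Link prediction then amounts to ranking non-edges by score and proposing the top few; the evolutionary strategy's job is simply to find the weight vector that maximizes precision on held-out snapshots.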
Power laws and distributions with heavy tails are common features of many experimentally studied complex systems, like the distribution of the sizes of earthquakes and solar flares, or the duration of neuronal avalanches in the brain. It had been tempting to surmise that a single general concept may act as a unifying underlying generative mechanism, with the theory of self-organized criticality being a weighty contender. On the theory side there has been lively activity in developing new and extended models, and three classes of models have emerged. The first line of models is based on a separation between the time scales of drive and dissipation, and includes the original sandpile model and its extensions, like the dissipative earthquake model. Within this approach the steady state is close to criticality in terms of an absorbing phase transition. The second line of approach is based on external drives and internal dynamics competing on similar time scales and includes the coherent noise model, which has a non-critical steady state characterized by heavy-tailed distributions. The third line of modeling proposes a non-critical state which is self-organizing, being guided by an optimization principle, such as the concept of highly optimized tolerance. We present a comparative overview of these distinct modeling approaches, together with a discussion of their potential relevance as underlying generative models for real-world phenomena. The complexity of physical and biological scaling phenomena has been found to transcend the explanatory power of individual paradigmatic concepts, like the theory of self-organized criticality; nevertheless, the interaction between theoretical development and experimental observations has been very fruitful, leading to a series of novel concepts and insights.
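The "original sandpile model" of the first class is the Bak-Tang-Wiesenfeld model, and it fits in a few lines (a standard textbook implementation, not code from the review):

```python
import random

def sandpile_avalanches(L=20, grains=5000, seed=0):
    """Bak-Tang-Wiesenfeld sandpile on an L x L grid: drop grains one
    at a time, topple any site holding >= 4 grains (one grain to each
    neighbor; grains fall off the edge), and record the number of
    topplings each drop triggers."""
    rng = random.Random(seed)
    z = [[0] * L for _ in range(L)]
    sizes = []
    for _ in range(grains):
        i, j = rng.randrange(L), rng.randrange(L)
        z[i][j] += 1
        size = 0
        unstable = [(i, j)] if z[i][j] >= 4 else []
        while unstable:
            x, y = unstable.pop()
            if z[x][y] < 4:
                continue  # already relaxed by an earlier topple
            z[x][y] -= 4
            size += 1
            for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                if 0 <= nx < L and 0 <= ny < L:
                    z[nx][ny] += 1
                    if z[nx][ny] >= 4:
                        unstable.append((nx, ny))
            if z[x][y] >= 4:
                unstable.append((x, y))
        sizes.append(size)
    return sizes
```

The slow drive (one grain per step) and fast dissipation (complete relaxation between drops) are exactly the time-scale separation the review identifies; in the steady state the avalanche sizes are heavy-tailed up to a system-size cutoff.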
The urinary system evolved to eject fluids from the body quickly and efficiently. Despite a long history of successful urology treatments in humans and animals, the physics of urination has received comparatively little attention. In this combined experimental and theoretical investigation, we elucidate the hydrodynamics of urination across five orders of magnitude in animal mass, from mice to elephants. Using high-speed fluid dynamics videos and flow-rate measurements at Zoo Atlanta, we discover the "Law of Urination", which states that animals empty their bladders over a nearly constant duration averaging 21 seconds (standard deviation 13 seconds), despite a difference in bladder volume from 100 mL to 100 L. This feat is made possible by the increasing urethra length of large animals, which amplifies gravitational force and flow rate. We also demonstrate the challenges faced by the urinary system of rodents and other small mammals, for which urine flow is limited to single drops. Our findings reveal that the urethra evolved as a flow-enhancing device, enabling the urinary system to be scaled up without compromising its function. This study may help in the diagnosis of urinary problems in animals and inspire the design of scalable hydrodynamic systems based on those in nature.
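A back-of-the-envelope version of why the duration is nearly mass-independent (my own sketch from standard isometric scaling assumptions, not the paper's full model): take bladder volume $V \propto M$, urethra length $L \propto M^{1/3}$ and cross-sectional area $A \propto M^{2/3}$, with a gravity-driven Torricelli outflow speed $u \propto \sqrt{2gL}$. Then

```latex
T \;\sim\; \frac{V}{A\,u} \;\propto\; \frac{M}{M^{2/3}\,\sqrt{M^{1/3}}} \;=\; M^{1/6},
```

so the thousand-fold range in bladder volume quoted above changes the emptying time by only a factor of roughly $1000^{1/6} \approx 3$, consistent with the reported 21 ± 13 seconds.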
One of the basic principles of science is that good theories extrapolate well. The laws of gravity hold both here and on the moon, and this universality enables us to land a spacecraft on Mars. Darwin drew the theory of evolution from studies of a remote island in the Pacific, but today we use it to explain the emergence of drug-resistant tuberculosis in a city hospital. The theories of gravity and of evolution are two of our greatest scientific achievements. But in contrast to the universal nature of these laws, our understanding of the human world — the messy realm of newspapers and cafes, traffic jams and gossip, governments and social movements — is remarkably limited. Take, for example, some of the most basic questions in politics. How do societies resolve conflict? How do new methods for resolving conflict emerge? A newspaper story can give us incredible detail on a particular fight — the war in Syria, say, or the political brinksmanship over “Obamacare.”
Scientific data sets are becoming more dynamic, requiring new mathematical techniques on par with the invention of calculus.
Complex systems are systems that exhibit several defining characteristics (Kastens et al., 2009), including: Feedback loops, where change in a variable results in either an amplification (positive feedback) or a ...
What features make the international banking network fragile or robust against major shocks, such as the failure of Lehman Brothers in fall 2008? Could our understanding of marine ecosystems help us better understand the human body and lead to better health and well-being? How do the strategies of attacker and defender coevolve in a network attack, and how can examining this coevolution help make online networks more secure? Graduate students and postdocs participating in SFI's 2013 Complex Systems Summer School collaborated to develop some 15 original research papers.
The hallmark of deterministic chaos is that it creates information, the rate being given by the Kolmogorov-Sinai metric entropy. Since its introduction half a century ago, the metric entropy has been used as a unitary quantity to measure a system's intrinsic unpredictability. Here, we show that it naturally decomposes into two structurally meaningful components: a portion of the created information (the ephemeral information) is forgotten, and a portion (the bound information) is remembered. The bound information is a new kind of intrinsic computation that differs fundamentally from information creation: it measures the rate of active information storage. We show that it can be directly and accurately calculated via symbolic dynamics, revealing a hitherto unknown richness in how dynamical systems compute.
Can Complexity Thinking Advance Management and Fix Capitalism?
The scaling exponent of a hierarchy of cities used to be regarded as a fractal parameter. The Pareto exponent was treated as the fractal dimension of the size distribution of cities, while the Zipf exponent was treated as the reciprocal of the fractal dimension. However, this viewpoint is not exact. In this paper, I present a new interpretation of the scaling exponent of rank-size distributions. The ideas of the fractal measure relation and the principle of dimensional consistency are employed to explore the essence of Pareto's and Zipf's scaling exponents. The Pareto exponent proves to be a ratio of the fractal dimension of a network of cities to the average dimension of city population; accordingly, the Zipf exponent is the reciprocal of this dimension ratio. On a digital map, the Pareto exponent can be defined by the scaling relation between a map scale and the corresponding number of cities based on that scale. The cities of the United States of America in 1900, 1940, 1960, and 1980 and Indian cities in 1981, 1991, and 2001 are used to illustrate the geographical meaning of Pareto's exponent. The results suggest that the Pareto exponent of the city-size distribution is not a fractal dimension, but a ratio of the urban network dimension to the city population dimension. This conclusion should help scientists understand Zipf's law and the fractal structure of hierarchies of cities.
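The rank-size rule behind the Zipf exponent, $S(r) \sim r^{-q}$, is usually estimated by a log-log regression of size against rank (a standard estimation sketch, not the paper's method):

```python
import math

def zipf_exponent(sizes):
    """Estimate the Zipf exponent q from the rank-size rule
    S(r) ~ r^(-q): sort sizes descending, then fit the slope of
    log(size) against log(rank) by ordinary least squares."""
    ordered = sorted(sizes, reverse=True)
    xs = [math.log(r) for r in range(1, len(ordered) + 1)]
    ys = [math.log(s) for s in ordered]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # q; the corresponding Pareto exponent is 1/q
```

For classic city-size data q is close to 1, and under the paper's interpretation that value is the ratio of the city-population dimension to the urban-network dimension rather than a fractal dimension itself.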
In early 2013, SFI External Professor Melanie Mitchell taught the Institute’s first Massive Open Online Course (MOOC). The 16-week course, “Introduction to Complexity,” drew nearly 7,100 students. It marked the debut of a series of free courses and resources for complexity science SFI is providing through the online Complexity Explorer. Approximately 1,200 participants finished the course successfully, a 17 percent completion rate (much higher than the MOOC average). SFI is reoffering "Introduction to Complexity" beginning September 30, 2013. See the announcement for more information. As Mitchell prepares to begin the course again, she offers her thoughts to SFI science writer Jenna Marshall about the first SFI MOOC and what the future holds for online courses in complexity.
