Nature’s large-scale patterns emerge from incomplete surveys, thanks to ideas borrowed from information theory.
Long-term behavior of nonlinear deterministic continuous-time signals can be studied in terms of their reconstructed attractors. Reconstructed attractors of a continuous signal are meant to be topologically equivalent representations of the dynamics of the unknown dynamical system which generates the signal. Sometimes, the geometry of the attractor or its complexity may give important information about the system of interest. However, if the trajectories of the attractor behave as if they do not come from a continuous system, or if there are many spike-like structures along the system trajectories, then there is no way to characterize the shape of the attractor. In this article, the traditional attractor reconstruction method is first applied to two types of ECG signals: normal healthy persons (NHP) and congestive heart failure patients (CHFP). As is common in such a framework, the reconstructed attractors are not at all well formed, and hence it is not possible to adequately characterize their geometrical features. Thus, we incorporate frequency-domain information into the given time signals. This is done by transforming the signals to a time-frequency domain by means of suitable wavelet transforms (WT). The transformed signal involves two non-homogeneous variables and is still quite difficult to use for reconstructing dynamics. By applying a suitable mapping, this signal is further converted into the integer domain, and a new type of 3D plot, called the integer lag plot, which characterizes and distinguishes the ECG signals of NHP and CHFP, is finally obtained.
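The "traditional attractor reconstruction" referred to above is time-delay embedding. A minimal sketch of that first step (the embedding dimension and delay here are illustrative choices, and a toy sine stands in for an ECG lead; the paper's integer lag plot is not reproduced):

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Reconstruct a dim-dimensional attractor from a scalar series x
    using delay coordinates [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy signal standing in for one ECG lead.
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)
attractor = delay_embed(x, dim=3, tau=25)
print(attractor.shape)  # (1950, 3)
```

Each row of `attractor` is one point of the reconstructed trajectory; for a clean periodic signal the rows trace a closed loop in 3D.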
Extreme events, a type of collective behavior in complex networked dynamical systems, can often have catastrophic consequences. Developing effective strategies to control extreme events is of fundamental importance and practical interest. Utilizing transportation dynamics on complex networks as a prototypical setting, we find that making the network “mobile” can effectively suppress extreme events. A striking, resonance-like phenomenon is uncovered, where an optimal degree of mobility exists for which the probability of extreme events is minimized. We derive an analytic theory to understand the mechanism of control at a detailed and quantitative level, and validate the theory numerically. Implications of our finding for current areas such as cybersecurity are discussed. Controlling extreme events on complex networks. Yu-Zhong Chen, Zi-Gang Huang & Ying-Cheng Lai. Scientific Reports 4, Article number: 6121. http://dx.doi.org/10.1038/srep06121
Via Claudia Mihai, Complexity Digest
Complex systems are increasingly being viewed as distributed information processing systems, particularly in the domains of computational neuroscience, bioinformatics and Artificial Life. This trend has resulted in a strong uptake in the use of (Shannon) information-theoretic measures to analyse the dynamics of complex systems in these fields. We introduce the Java Information Dynamics Toolkit (JIDT): a Google Code project which provides a standalone, open-source (GNU GPL v3 licensed) code implementation for empirical estimation of information-theoretic measures from time-series data. While the toolkit provides classic information-theoretic measures (e.g. entropy, mutual information, conditional mutual information), it ultimately focusses on implementing higher-level measures for information dynamics. That is, JIDT focusses on quantifying information storage, transfer and modification, and the dynamics of these operations in space and time. For this purpose, it includes implementations of the transfer entropy and active information storage, their multivariate extensions and local or pointwise variants. JIDT provides implementations for both discrete and continuous-valued data for each measure, including various types of estimator for continuous data (e.g. Gaussian, box-kernel and Kraskov-Stoegbauer-Grassberger) which can be swapped at runtime due to Java's object-oriented polymorphism. Furthermore, while written in Java, the toolkit can be used directly in MATLAB, GNU Octave and Python. We present the principles behind the code design, and provide several examples to guide users. "JIDT: An information-theoretic toolkit for studying the dynamics of complex systems". Joseph T. Lizier, arXiv:1408.3270, 2014. http://arxiv.org/abs/1408.3270
Via Complexity Digest
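JIDT itself is a Java library; as a language-agnostic illustration of the simplest class of measure it estimates, here is a minimal plain-Python sketch of discrete mutual information (the function and toy data are mine, not part of the JIDT API):

```python
import numpy as np

def mutual_information(x, y):
    """Discrete mutual information I(X;Y) in bits from two symbol series,
    estimated from plug-in (empirical) probabilities."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

# Perfectly coupled binary series carry exactly 1 bit.
x = np.array([0, 1, 0, 1, 0, 1, 0, 1])
print(mutual_information(x, x))      # 1.0
print(mutual_information(x, 1 - x))  # 1.0
```

JIDT's higher-level measures (transfer entropy, active information storage) condition quantities like this on past states; this sketch only shows the static building block.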
The idea is advanced that self-organization in complex systems can be treated as decision making (as it is performed by humans) and, vice versa, decision making is nothing but a kind of self-organization in the decision maker's nervous system. A mathematical formulation is suggested based on the definition of probabilities of system states, whose particular cases characterize the probabilities of structures, patterns, scenarios, or prospects. In this general framework, it is shown that the mathematical structures of self-organization and of decision making are identical. This makes it clear how self-organization can be seen as an endogenous decision making process and, reciprocally, how decision making occurs via an endogenous self-organization. The approach is illustrated by phase transitions in large statistical systems, crossovers in small statistical systems, evolutions and revolutions in social and biological systems, structural self-organization in dynamical systems, and by the probabilistic formulation of classical and behavioral decision theories. In all these cases, self-organization is described as the process of evaluating the probabilities of macroscopic states or prospects in the search for a state with the largest probability. The general way of deriving the probability measure for classical systems is the principle of minimal information, that is, conditional entropy maximization under given constraints. Behavioral biases of decision makers can be characterized in the same way, as analogous to quantum fluctuations in natural systems. Self-organization in complex systems as decision making. V.I. Yukalov, D. Sornette. arXiv:1408.1529, 2014. http://arxiv.org/abs/1408.1529
Via Complexity Digest
His weather forecasts changed the world. Could his predictions of war?
We show that dynamical systems with spatial degrees of freedom naturally evolve into a self-organized critical point. Flicker noise, or 1/f noise, can be identified with the dynamics of the critical state. This picture also yields insight into the origin of fractal objects.
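The mechanism described above can be illustrated with the standard two-dimensional sandpile model: grains are dropped at random, any site holding four or more grains topples one grain to each neighbour, and grains fall off the open boundary (a minimal sketch; grid size and drop count are arbitrary choices of mine):

```python
import numpy as np

def topple(grid):
    """Relax the pile: any site with >= 4 grains topples, sending one grain
    to each neighbour; grains leave at the open boundary.
    Returns the avalanche size (total number of topplings)."""
    size = 0
    while (grid >= 4).any():
        for i, j in zip(*np.where(grid >= 4)):
            grid[i, j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    grid[ni, nj] += 1
    return size

rng = np.random.default_rng(0)
grid = np.zeros((20, 20), dtype=int)
sizes = []
for _ in range(5000):
    i, j = rng.integers(0, 20, size=2)
    grid[i, j] += 1
    sizes.append(topple(grid))
```

After a transient, the pile hovers at the critical state and `sizes` contains avalanches of wildly varying magnitude, whose distribution approaches a power law.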
Support is growing for a decades-old physics idea suggesting that localized episodes of disordered brain activity help keep the overall system in healthy balance.
The character of the time-asymptotic evolution of physical systems can have complex, singular behavior with variation of a system parameter, particularly when chaos is involved. A perturbation of the parameter by a small amount ϵ can convert an attractor from chaotic to nonchaotic or vice versa. We call a parameter value where this can happen ϵ-uncertain. The probability that a random choice of the parameter is ϵ-uncertain commonly scales like a power law in ϵ. Surprisingly, two seemingly similar ways of defining this scaling, both of physical interest, yield different numerical values for the scaling exponent. We show why this happens and present a quantitative analysis of this phenomenon.
We study the two-particle annihilation reaction A+B→∅ on interconnected scale-free networks. We show that the mixing of particles and the evolution of the process are influenced by the number of interconnecting links and by their functional properties, while, surprisingly, when the interconnecting links have the same function as the links within the networks, the dynamics are not affected by the interconnectivity strategies in use. Due to the better mixing, which suppresses the segregation effect, we show that the reaction rates are faster than those observed in other topologies, in line with previous studies performed on single scale-free networks.
CLC interviewed Prof. Geoffrey West, Distinguished Professor and Past President of the Santa Fe Institute, at the World Cities Summit 2014 on the study of cities in relation to complexity science...
Via Roger D. Jones, PhD
To further advance our understanding of the brain, new concepts and theories are needed. In particular, the ability of the brain to create information flows must be reconciled with its propensity for synchronization and mass action. The theoretical and empirical framework of Coordination Dynamics, a key aspect of which is metastability, is presented as a starting point to study the interplay of integrative and segregative tendencies that are expressed in space and time during the normal course of brain and behavioral function. Some recent shifts in perspective are emphasized that may ultimately lead to a better understanding of brain complexity. Enlarging the scope: grasping brain complexity. Emmanuelle Tognoli and J. A. Scott Kelso. Front. Syst. Neurosci., 25 June 2014. http://dx.doi.org/10.3389/fnsys.2014.00122
Via Complexity Digest

Being a Short Treatise on the Nature of Failure; How Failure is Evaluated; How Failure is Attributed to Proximate Cause; and the Resulting New Understanding of Patient Safety
Signal analysis is one of the finest scientific techniques in communication theory. Some quantitative and qualitative measures that describe the pattern of a music signal vary from one signal to another. The same musical recital, when played by different instrumentalists, generates different types of music patterns. The reason behind the various patterns is that the psychoacoustic measures (dynamics, timbre, tonality and rhythm) vary each time. However, the psychoacoustic study of the music signals does not reveal anything about the similarity between the signals. For such cases, a study of the synchronization of long-term nonlinear dynamics may provide effective results. In this context, phase synchronization (PS) is one measure that can show synchronization between two non-identical signals. In fact, it is very difficult to investigate any other kind of synchronization under experimental conditions, because these are completely non-identical signals. Also, there exists an equivalence between the phases and the distances of the diagonal lines in the recurrence plot (RP) of the signals, which is quantifiable by the recurrence quantification measure τ-recurrence rate. This paper considers two nonlinear music signals, based on the same raga played by two eminent sitar instrumentalists, as two non-identical sources. The psychoacoustic study shows how the dynamics, timbre, tonality and rhythm vary between the two music signals. Then, long-term analysis in the form of phase-space reconstruction is performed, which reveals chaotic phase spaces for both signals. From the RPs of both phase spaces, the τ-recurrence rate is calculated. Finally, the correlation between the normalized τ-recurrence rates of their 3D phase spaces and the PS of the two music signals is established. The numerical results support the analysis well.
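The τ-recurrence rate can be sketched for a scalar series as follows (the paper computes it on reconstructed 3D phase spaces; this scalar version, with my threshold and lag choices, only illustrates the definition: the fraction of state pairs separated by lag τ that recur within a tolerance ε):

```python
import numpy as np

def tau_recurrence_rate(x, eps, max_lag):
    """Fraction of pairs with |x_i - x_j| < eps at each lag tau = j - i."""
    x = np.asarray(x)
    rates = []
    for tau in range(1, max_lag + 1):
        close = np.abs(x[:-tau] - x[tau:]) < eps
        rates.append(close.mean())
    return np.array(rates)

# A periodic signal recurs most strongly at lags equal to its period.
t = np.arange(0, 200)
x = np.sin(2 * np.pi * t / 20)
rr = tau_recurrence_rate(x, eps=0.1, max_lag=40)
print(rr.argmax() + 1)  # 20, the period in samples
```

For chaotic signals like the sitar recordings, the peaks of this curve decay and broaden, and comparing the normalized curves of two signals is what the correlation step above quantifies.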
Measures of nonlinearity and complexity, and in particular the study of Lyapunov exponents, have been increasingly used to characterize dynamical properties of a wide range of biological nonlinear systems, including cardiovascular control. In this work, we present a novel methodology able to effectively estimate the Lyapunov spectrum of a series of stochastic events in an instantaneous fashion. The paradigm relies on a novel point-process high-order nonlinear model of the event series dynamics. The long-term information is taken into account by expanding the linear, quadratic, and cubic Wiener-Volterra kernels with the orthonormal Laguerre basis functions. Applications to synthetic data such as the Hénon map and the Rössler attractor, as well as two experimental heartbeat interval datasets (i.e., healthy subjects undergoing postural changes and patients with severe heart failure), focus on estimation and tracking of the Instantaneous Dominant Lyapunov Exponent (IDLE). The novel cardiovascular assessment demonstrates that our method is able to effectively and instantaneously track the nonlinear autonomic control dynamics, allowing for complexity variability estimations.
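The paper's estimator is a point-process Volterra-Laguerre model; as a much simpler baseline for what a dominant Lyapunov exponent is, here is the standard tangent-space (Benettin-style) estimate for the Hénon map mentioned in the abstract (my sketch, not the authors' method):

```python
import numpy as np

def henon_largest_lyapunov(a=1.4, b=0.3, n=100_000):
    """Largest Lyapunov exponent of the Henon map, estimated by iterating
    a tangent vector through the Jacobian and renormalizing each step."""
    x, y = 0.1, 0.1
    v = np.array([1.0, 0.0])
    lyap_sum = 0.0
    for _ in range(n):
        # Jacobian of the map (x, y) -> (1 - a*x^2 + y, b*x)
        J = np.array([[-2 * a * x, 1.0], [b, 0.0]])
        x, y = 1 - a * x * x + y, b * x
        v = J @ v
        norm = np.linalg.norm(v)
        lyap_sum += np.log(norm)
        v /= norm
    return lyap_sum / n

print(round(henon_largest_lyapunov(), 2))  # ~0.42 at the classic parameters
```

The average log stretching rate of the tangent vector converges to the dominant exponent; a positive value, as here, is the signature of chaos that the IDLE tracks instantaneously for heartbeat data.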
Herding of sheep by dogs is a powerful example of one individual causing many unwilling individuals to move in the same direction. Similar phenomena are central to crowd control, cleaning the environment and other engineering problems. Despite single dogs solving this ‘shepherding problem’ every day, it remains unknown which algorithm they employ or whether a general algorithm exists for shepherding. Here, we demonstrate such an algorithm, based on adaptive switching between collecting the agents when they are too dispersed and driving them once they are aggregated. Our algorithm reproduces key features of empirical data collected from sheep–dog interactions and suggests new ways in which robots can be designed to influence movements of living and artificial agents.
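A toy version of the adaptive switching rule described above (the threshold and positioning offsets are illustrative guesses of mine, not the paper's fitted model):

```python
import numpy as np

def shepherd_action(sheep, target, dispersal_threshold):
    """Decide the dog's next move: 'collect' the furthest straggler when
    the flock is too dispersed, otherwise 'drive' the flock centre toward
    the target. Returns the mode and a suggested dog position."""
    sheep = np.asarray(sheep, dtype=float)
    centre = sheep.mean(axis=0)
    dists = np.linalg.norm(sheep - centre, axis=1)
    if dists.max() > dispersal_threshold:
        straggler = sheep[dists.argmax()]
        # Stand beyond the straggler, on the far side from the centre.
        return "collect", straggler + (straggler - centre) * 0.5
    # Stand behind the flock, on the far side from the target.
    return "drive", centre + (centre - np.asarray(target, dtype=float)) * 0.5

mode, pos = shepherd_action([[0, 0], [1, 0], [0, 9]], target=[10, 10],
                            dispersal_threshold=3.0)
print(mode)  # collect: one sheep has strayed far from the group
```

Alternating between these two modes, step by step, is the core of the algorithm: gather while dispersed, push while aggregated.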
Complexity science has proliferated across academic domains in recent years. A question arises as to whether any useful sense of ‘generalized complexity’ can be abstracted from the various versions of complexity to be found in the literature, and whether it could prove fruitful in a scientific sense. Most attempts at defining complexity center around two kinds of notions: structural, and temporal or dynamic. Neither of these is able to provide a foundation for the intuitive or generalized notion when taken separately; structure is often a derivative notion, dependent on prior notions of complexity, and dynamic notions such as entropy are often indefinable. The philosophical notion of process may throw light on the tensions and contradictions within complexity. Robustness, for instance, a key quality of complexity, is quite naturally understood within a process-theoretical framework. Understanding complexity as process also helps one align complexity science with holistically oriented predecessors such as General System Theory, while allowing for the reductionist perspective of complexity. These results, however, have the further implication that it may be futile to search for general laws of complexity, or to hope that investigations of complex objects in one domain may throw light on complexity in unrelated domains.
Submitted by mf344 on August 13, 2014 Artur Avila has been awarded the Fields Medal, the most prestigious prize in maths, at this year's International Congress of Mathematicians in Seoul.
Using the effective complexity measure proposed by M. Gell-Mann and S. Lloyd, we give a quantitative definition of an emergent property. We use several previous results and properties of this particular information measure, closely related to the random features of the entity and its regularities. Complexity and the Emergence of Physical Properties. Miguel Angel Fuentes. Entropy 2014, 16(8), 4489-4496. http://dx.doi.org/10.3390/e16084489
Via Complexity Digest
Relatively recent work has reported that networks of neurons can produce avalanches of activity whose sizes follow a power law distribution. This suggests that these networks may be operating near a critical point, poised between a phase where activity rapidly dies out and a phase where activity is amplified over time. The hypothesis that the electrical activity of neural networks in the brain is critical is potentially important, as many simulations suggest that information processing functions would be optimized at the critical point. This hypothesis, however, is still controversial. Here we will explain the concept of criticality and review the substantial objections to the criticality hypothesis raised by skeptics. Points and counterpoints are presented in dialogue form.
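A standard way to test the power-law claim for avalanche sizes is the continuous maximum-likelihood estimator popularized by Clauset, Shalizi and Newman (a sketch on synthetic data; the exponent 1.5 echoes the value often reported for neuronal avalanche sizes, but nothing below comes from the review itself):

```python
import numpy as np

def powerlaw_mle(sizes, xmin=1.0):
    """Maximum-likelihood exponent alpha for a continuous power law
    p(x) ~ x^(-alpha), x >= xmin."""
    sizes = np.asarray(sizes, dtype=float)
    sizes = sizes[sizes >= xmin]
    return 1.0 + len(sizes) / np.sum(np.log(sizes / xmin))

# Synthetic avalanche sizes drawn from a pure power law with alpha = 1.5,
# via inverse-CDF sampling: X = (1 - U)^(-1/(alpha - 1)).
rng = np.random.default_rng(1)
u = rng.random(200_000)
sizes = (1 - u) ** (-1 / 0.5)
print(round(powerlaw_mle(sizes), 2))  # ~1.5
```

In practice the skeptics' objections turn exactly on steps this sketch glosses over: choosing xmin, handling finite-size cutoffs, and comparing the power law against alternatives such as lognormals.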
The concept of stigmergy has been used to analyze self-organizing activities in an ever-widening range of domains, from social insects via robotics and social media to human society. Yet, it is still poorly understood, and as such its full power remains underappreciated. The present paper clarifies the issue by defining stigmergy as a mechanism of indirect coordination in which the trace left by an action in a medium stimulates a subsequent action. It then analyses the fundamental components of the definition: action, agent, medium, trace and coordination. Stigmergy enables complex, coordinated activity without any need for planning, control, communication, simultaneous presence, or even mutual awareness. This makes the concept applicable to a very broad variety of cases, from chemical reactions to individual cognition and Internet-supported collaboration in Wikipedia. The paper classifies different varieties of stigmergy according to general aspects (number of agents, scope, persistence, sematectonic vs. marker-based, and quantitative vs. qualitative), while emphasizing the fundamental continuity between these cases. This continuity can be understood from a nonlinear, self-organizing dynamic that lets more complex forms of coordination evolve out of simpler ones. The paper concludes with two specifically human applications in cognition and cooperation, suggesting that without stigmergy these phenomena may never have evolved. Heylighen, F. (2015). Stigmergy as a Universal Coordination Mechanism: components, varieties and applications. To appear in T. Lewis & L. Marsh (Eds.), Human Stigmergy: Theoretical Developments and New Applications, Studies in Applied Philosophy, Epistemology and Rational Ethics. Springer. http://pespmc1.vub.ac.be/papers/stigmergy-varieties.pdf
Via Complexity Digest, NESS, Jorge Louçã
We present an optimization process to estimate parameters in systems of ordinary differential equations from chaotic time series. The optimization technique is based on a variational approach, and numerical studies on noisy time series demonstrate that it is very robust and appropriate for reducing the complexity of the model. The proposed process also allows one to discard the parameters with scant influence on the dynamics.
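The paper treats ODE systems with a variational method; as a minimal stand-in for the general idea of recovering a parameter from a chaotic series, here is one-step least squares on the logistic map (entirely a toy example of mine, not the paper's technique):

```python
import numpy as np

def fit_logistic_r(series):
    """Least-squares estimate of r in x_{n+1} = r * x_n * (1 - x_n)
    from a scalar time series."""
    x, y = series[:-1], series[1:]
    basis = x * (1 - x)
    return float(np.dot(basis, y) / np.dot(basis, basis))

# Generate a chaotic orbit at r = 3.9 and add small observation noise.
rng = np.random.default_rng(2)
x = np.empty(2000)
x[0] = 0.2
for n in range(1999):
    x[n + 1] = 3.9 * x[n] * (1 - x[n])
noisy = x + rng.normal(0, 1e-3, size=x.shape)
print(round(fit_logistic_r(noisy), 2))  # recovers ~3.9
```

Chaos is no obstacle here because the fit only relies on the one-step map being deterministic; the variational approach generalizes this idea to continuous-time systems where derivatives must be handled with care.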
We present theoretical and empirical results demonstrating the usefulness of voting rules for participatory democracies. We first give algorithms which efficiently elicit ε-approximations to two prominent voting rules: the Borda rule and the Condorcet winner. This result circumvents previous prohibitive lower bounds and is surprisingly strong: even if the number of ideas is as large as the number of participants, each participant will only have to make a logarithmic number of comparisons, an exponential improvement over the linear number of comparisons previously needed. We demonstrate the approach in an experiment in Finland's recent off-road traffic law reform, observing that the total number of comparisons needed to achieve a fixed ε-approximation is linear in the number of ideas and that the constant is not large. Finally, we note a few other experimental observations which support the use of voting rules for aggregation. First, we observe that rating, one of the common alternatives to ranking, manifested effects of bias in our data. Second, we show that very few of the topics lacked a Condorcet winner, one of the prominent negative results in voting. Finally, we show data hinting at a potential future direction: the use of partial rankings as opposed to pairwise comparisons to further decrease the elicitation time.
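For concreteness, the two voting rules the paper approximates can be computed exactly on small profiles (a plain brute-force sketch; the paper's contribution is the ε-approximate elicitation with few comparisons, not this computation):

```python
def borda_winner(rankings):
    """Borda winner from complete rankings (best first): a candidate
    scores m-1 points for first place, m-2 for second, and so on."""
    m = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for place, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (m - 1 - place)
    return max(scores, key=scores.get)

def condorcet_winner(rankings):
    """Candidate beating every other in a pairwise majority, or None
    (a Condorcet winner need not exist)."""
    for c in rankings[0]:
        if all(sum(r.index(c) < r.index(d) for r in rankings) * 2 > len(rankings)
               for d in rankings[0] if d != c):
            return c
    return None

votes = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(borda_winner(votes))      # a
print(condorcet_winner(votes))  # a
```

With m ideas, exact computation needs every voter's full ranking; the elicitation results above show that logarithmically many comparisons per participant suffice for an ε-approximation.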
We study the capillary rise of wetting liquids in the corners of different geometries and show that the meniscus rises without limit following the universal law: h(t)/a ≈ (γt/ηa)^(1/3), where γ and η stand for the surface tension and viscosity of the liquid, while a = √(γ/ρg) is the capillary length, based on the liquid density ρ and gravity g. This law is universal in the sense that it does not depend on the geometry of the corner. © 2011 Cambridge University Press.
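A quick numeric check of the quantities in the law, using standard room-temperature values for water (the material constants are textbook numbers, not from the paper):

```python
import math

# Capillary length a = sqrt(gamma / (rho * g)) and the corner-rise law
# h(t)/a ~ (gamma * t / (eta * a))**(1/3) from the abstract above.
gamma = 0.072   # surface tension of water, N/m
rho = 1000.0    # density, kg/m^3
g = 9.81        # gravity, m/s^2
eta = 1.0e-3    # viscosity of water, Pa*s

a = math.sqrt(gamma / (rho * g))
print(round(a * 1000, 1))  # capillary length in mm: 2.7

t = 10.0  # seconds of rise
h = a * (gamma * t / (eta * a)) ** (1 / 3)
```

The t^(1/3) growth means the meniscus keeps climbing, ever more slowly, with no equilibrium height, which is what "rises without limit" refers to.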

Review of recent contributions of network theory at different levels and domains within the Cognitive Sciences