We live in interesting times as we watch diverse effects of human activities on Earth's climate emerge from natural variability. In predicting the outcome of this evolving inadvertent experiment, climate science faces many challenges, some of which have been outlined in this series of Science Perspectives (1–6): reducing the uncertainty in climate sensitivity; explaining the recent slowdown in the rate of warming and its implications for understanding internal variability; uncovering the factors that control how and where the land will become drier as it warms; quantifying the cooling due to anthropogenic aerosols; explaining the curious evolution of atmospheric methane; and predicting changes in extreme weather. In addition to these challenges, the turbulent and chaotic atmospheric and oceanic flows seemingly limit predictability on various time scales. Is the climate system just too complex for useful prediction?
Context: At present, we lack a common understanding of both the process of cognition in living organisms and the construction of knowledge in embodied, embedded cognizing agents in general, including future artifactual cognitive agents under development, such as cognitive robots and softbots. Purpose: This paper aims to show how the info-computational approach (IC) can reinforce constructivist ideas about the nature of cognition and knowledge and, conversely, how constructivist insights (such as that the process of cognition is the process of life) can inspire new models of computing. Method: The info-computational constructive framework is presented for the modeling of cognitive processes in cognizing agents. Parallels are drawn with other constructivist approaches to cognition and knowledge generation. We describe how cognition as a process of life itself functions based on info-computation and how the process of knowledge generation proceeds through interactions with the environment and among agents. Results: Cognition and knowledge generation in a cognizing agent are understood as interaction with the world (potential information), which by processes of natural computation becomes actual information. That actual information, after integration, becomes knowledge for the agent. Heinz von Foerster is identified as a precursor of natural computing, in particular biocomputing. Implications: IC provides a framework for the unified study of cognition in living organisms (from the simplest ones, such as bacteria, to the most complex ones) as well as in artifactual cognitive systems. Constructivist content: It supports the constructivist view that knowledge is actively constructed by cognizing agents and shared in a process of social cognition. IC argues that this process can be modeled as info-computation.
Info-computational Constructivism and Cognition Gordana Dodig-Crnkovic
Constructivist Foundations Volume 9 · Number 2 · Pages 223–231
On television and in scientific journals, the story of how carnivores influence ecosystems has seized imaginations. From wolves in North America to lions in Africa and dingoes in Australia, top predators are thought to exert tight control over the populations and behaviours of other animals, shaping the entire food web down to the vegetation through a ‘trophic cascade’. This story is popular in part because it supports calls to conserve large carnivores as ‘keystone species’ for whole ecosystems. It also offers the promise of a robust rule within ecology, a field in which researchers have yearned for more predictive power.
But several studies in recent years have raised questions about the top-predator rule in the high-profile cases of the wolf and the dingo. That has led some scientists to suggest that the field’s fascination with top predators stems not from their relative importance, but rather from society’s interest in the big, the dangerous and the vulnerable. “Predators can be important,” says Oswald Schmitz, an ecologist at Yale University in New Haven, Connecticut, “but they aren’t a panacea.”
We explain how specific dynamical properties give rise to the limit distribution of sums of deterministic variables at the transition to chaos via the period-doubling route. We study the sums of successive positions generated by an ensemble of initial conditions uniformly distributed in the entire phase space of a unimodal map as represented by the logistic map. We find that these sums acquire their salient, multiscale features from the repellor preimage structure that dominates the dynamics toward the attractors along the period-doubling cascade, and we explain how these properties transmit from the sums to their distribution. Specifically, we show how the stationary distribution of sums of positions at the Feigenbaum point is built up from those associated with the supercycle attractors, forming a hierarchical structure with multifractal and discrete scale invariance properties.
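The numerical setup described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the ensemble size, number of iterations, and the truncated value of the Feigenbaum accumulation point are assumptions for the sketch.

```python
import random

def logistic_sums(mu, n_steps, n_ensemble, seed=0):
    """Sums of successive positions of the logistic map x -> mu*x*(1-x)
    for an ensemble of initial conditions uniform in (0, 1)."""
    rng = random.Random(seed)
    sums = []
    for _ in range(n_ensemble):
        x = rng.random()          # uniform initial condition in phase space
        s = 0.0
        for _ in range(n_steps):
            x = mu * x * (1.0 - x)
            s += x                # accumulate successive positions
        sums.append(s)
    return sums

# Period-doubling accumulation (Feigenbaum) point, mu_inf ~ 3.5699456...
MU_FEIGENBAUM = 3.5699456718
sums = logistic_sums(MU_FEIGENBAUM, n_steps=2**10, n_ensemble=1000)
```

A histogram of `sums` (after centering and rescaling) is what the paper analyzes; the multiscale structure emerges only for large `n_steps`.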
The pervasive presence of location-sharing services has made it possible for researchers to gain unprecedented access to direct records of human activity in space and time. This article analyses geo-located Twitter messages in order to uncover global patterns of human mobility. Based on a dataset of almost a billion tweets recorded in 2012, we estimate the volume of international travelers by country of residence. Mobility profiles of different nations were examined based on such characteristics as mobility rate, radius of gyration, diversity of destinations, and inflow–outflow balance. Temporal patterns reveal universally valid seasons of increased international mobility and the particular character of international travel of different nations. Our analysis of the community structure of the Twitter mobility network reveals spatially cohesive regions that follow the regional division of the world. We validate our results using global tourism statistics and mobility models provided by other authors and argue that Twitter is exceptionally useful for understanding and quantifying global mobility patterns.
Geo-located Twitter as proxy for global mobility patterns
Bartosz Hawelka*, Izabela Sitko, Euro Beinat, Stanislav Sobolevsky, Pavlos Kazakopoulos & Carlo Ratti
Power grids, road networks, and river systems are examples of infrastructural networks that are highly vulnerable to external perturbations. An abrupt local change of load (voltage, traffic density, or water level) might propagate in a cascading way and affect a significant fraction of the network. Almost discontinuous perturbations can be modeled by shock waves, which can eventually interfere constructively and endanger the normal functionality of the infrastructure. We study their dynamics by solving the Burgers equation under random perturbations on several real and artificial directed graphs. Even for graphs with a narrow distribution of node properties (e.g., degree or betweenness), a steady state is reached exhibiting a heterogeneous load distribution, with a difference of one order of magnitude between the highest and average loads. Unexpectedly, we find for the European power grid and for finite Watts-Strogatz networks a pronounced bimodal distribution for the loads. To identify the most vulnerable nodes, we introduce the concept of node-basin size, a purely topological property which we show to be strongly correlated with the average load of a node.
The combination of the network theoretic approach with recently available abundant economic data leads to the development of novel analytic and computational tools for modelling and forecasting key economic indicators. The main idea is to introduce a topological component into the analysis, consistently taking into account all higher-order interactions. We present three basic methodologies to demonstrate different approaches to harness the resulting network gain. First, a multiple linear regression optimisation algorithm is used to generate a relational network between individual components of national balance of payment accounts. This model describes annual statistics with high accuracy and delivers good forecasts for the majority of indicators. Second, an early-warning mechanism for global financial crises is presented, which combines network measures with standard economic indicators. From the analysis of the cross-border portfolio investment network of long-term debt securities, the proliferation of a wide range of over-the-counter-traded financial derivative products, such as credit default swaps, can be described in terms of gross-market values and notional outstanding amounts, which are associated with increased levels of market interdependence and systemic risk. Third, considering the flow-network of goods traded between G-20 economies, network statistics provide better proxies for key economic measures than conventional indicators. For example, it is shown that a country's gate-keeping potential, as a measure of local power, predicts its annual change of GDP far better than the volume of its imports or exports.
Netconomics: Novel Forecasting Techniques from the Combination of Big Data, Network Science and Economics Andreas Joseph, Irena Vodenska, Eugene Stanley, Guanrong Chen
We present a nondeterministic, recursive algorithm for updating a Kripke model so as to satisfy a given formula of computation-tree logic (CTL). Recursive algorithms for model update face two dual difficulties: (1) Removing transitions from a Kripke model to satisfy a universal subformula may dissatisfy some existential subformulas. Conversely, (2) adding transitions to satisfy an existential subformula may dissatisfy some universal subformulas. To overcome these difficulties, we employ protections of the form 〈E,A,L〉, recording information about the satisfaction of subformulas previously treated by the algorithm. Intuitively, (1) E is the set of transitions that we cannot remove without compromising the satisfaction of previously treated subformulas. Conversely, (2) A is the set of transitions that we can add. Hence, update proceeds without diminishing E and without augmenting A. Finally, (3) L is a set of literals protecting the model labels. We illustrate our algorithm through several examples: Emerson and Clarke's mutual-exclusion problem, Clarke's microwave-oven example, synchronous counters, and randomly generated models and formulas. In addition, we compare our method with other update approaches for either CTL or fragments of CTL. Lastly, we provide proofs of soundness and completeness and a complexity analysis.
CTL update of Kripke models through protections Miguel Carrillo, David A. Rosenblueth
Despite the successes of genome-wide association studies (GWAS) in identifying genetic connections with human disease, it has become clear that interpreting these data requires a clear understanding of how these new risk genes are regulated. On pages 1118 and 1119 of this issue, Fairfax et al. (1) and Lee et al. (2), respectively, elucidate networks of genetic regulation in the context of the human innate immune system and show how this information can be directly applied to understanding the genetics of autoimmune disorders.
A Genomic Road Map for Complex Human Disease Peter K. Gregersen
We propose a bare-bones stochastic model that takes into account both the geographical distribution of people within a country and their complex network of connections. The model, which is designed to give rise to a scale-free network of social connections and to visually resemble the geographical spread seen in satellite pictures of the Earth at night, gives rise to a power-law distribution for the ranking of cities by population size (except for the largest cities) and reflects the notion that highly connected individuals tend to live in highly populated areas. It also yields some interesting insights regarding Gibrat’s law for the rates of city growth (by population size), in partial support of the findings in a recent analysis of real data [Rozenfeld et al., Proc. Natl. Acad. Sci. U.S.A. 105, 18702 (2008)]. The model produces a nontrivial relation between city population and city population density and a superlinear relationship between social connectivity and city population, both of which seem quite in line with real data. DOI: http://dx.doi.org/10.1103/PhysRevX.4.011008
Spatially Distributed Social Complex Networks Phys. Rev. X 4, 011008 – Published 28 January 2014 Gerald F. Frasco, Jie Sun, Hernán D. Rozenfeld, and Daniel ben-Avraham
[...] Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it.
Researchers, policymakers and law enforcement agencies across the globe struggle to find effective strategies to control criminal networks. The effectiveness of disruption strategies is known to depend on both network topology and network resilience. However, as these criminal networks operate in secrecy, data-driven knowledge concerning the effectiveness of different criminal network disruption strategies is very limited. By combining computational modeling and social network analysis with unique criminal network intelligence data from the Dutch Police, we discovered, in contrast to common belief, that criminal networks might even become ‘stronger’ after targeted attacks. On the other hand, increased efficiency within criminal networks decreases their internal security, thus offering opportunities for law enforcement agencies to target these networks more deliberately. Our results emphasize the importance of criminal network interventions at an early stage, before the network gets a chance to (re-)organize to maximum resilience. In the end, disruption strategies force criminal networks to become more exposed, which makes successful network disruption a long-term effort.
The Relative Ineffectiveness of Criminal Network Disruption Paul A. C. Duijn, Victor Kashirin & Peter M. A. Sloot
Social networks readily transmit information, albeit with less than perfect fidelity. We present a large-scale measurement of this imperfect information copying mechanism by examining the dissemination and evolution of thousands of memes, collectively replicated hundreds of millions of times in the online social network Facebook. The information undergoes an evolutionary process that exhibits several regularities. A meme's mutation rate characterizes the population distribution of its variants, in accordance with the Yule process. Variants further apart in the diffusion cascade have greater edit distance, as would be expected in an iterative, imperfect replication process. Some text sequences can confer a replicative advantage; these sequences are abundant and transfer "laterally" between different memes. Subpopulations of the social network can preferentially transmit a specific variant of a meme if the variant matches their beliefs or culture. Understanding the mechanism driving change in diffusing information has important implications for how we interpret and harness the information that reaches us through our social networks.
Information Evolution in Social Networks Lada A. Adamic, Thomas M. Lento, Eytan Adar, Pauline C. Ng
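The Yule-process mechanism invoked above, in which a meme copy either mutates into a new variant or replicates an existing one in proportion to its popularity, can be sketched directly. This is a hypothetical toy simulation, not the authors' analysis pipeline; the mutation rate and population size are assumed values.

```python
import random
from collections import Counter

def yule_meme_copies(n_copies, mutation_rate, seed=42):
    """Imperfect replication: each new copy either mutates into a fresh
    variant (prob. mutation_rate) or duplicates an existing copy chosen
    uniformly, i.e., a variant is copied with prob. proportional to its
    current popularity (rich-get-richer, as in the Yule process)."""
    rng = random.Random(seed)
    copies = [0]                  # variant id of each copy; one seed copy
    next_variant = 1
    for _ in range(n_copies - 1):
        if rng.random() < mutation_rate:
            copies.append(next_variant)        # new variant enters
            next_variant += 1
        else:
            copies.append(rng.choice(copies))  # replicate existing copy
    return copies

copies = yule_meme_copies(50000, mutation_rate=0.05)
popularity = Counter(copies)      # variant id -> number of copies
```

For small mutation rates the resulting popularity distribution is heavy-tailed: a few early variants accumulate most of the copies, which is the population-level regularity the paper fits to observed meme variants.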
In February 2013, Google Flu Trends (GFT) made headlines, but not for a reason that Google executives or the creators of the flu tracking system would have hoped. Nature reported that GFT was predicting more than double the proportion of doctor visits for influenza-like illness (ILI) reported by the Centers for Disease Control and Prevention (CDC), which bases its estimates on surveillance reports from laboratories across the United States (1, 2). This happened despite the fact that GFT was built to predict CDC reports. Given that GFT is often held up as an exemplary use of big data (3, 4), what lessons can we draw from this error?
The Parable of Google Flu: Traps in Big Data Analysis David Lazer, Ryan Kennedy, Gary King, Alessandro Vespignani
Johan Bollen caused a stir in January when he and his colleagues proposed an alternative science-funding model (J. Bollen et al. EMBO Rep. http://doi.org/f2pz34; 2014). Bollen, an informatician at Indiana University Bloomington, explains how the proposal developed, and how the idea of resource allocation became part of his research agenda.
Scientists are gearing up for a battle with the food industry after the World Health Organization (WHO) moved to halve its recommendation on sugar intake.
Nutrition researchers fear a backlash similar to that seen in 2003, when the WHO released its current guidelines stating that no more than 10% of an adult’s daily calories should come from ‘free’ sugars. That covers those added to food, as well as natural sugars in honey, syrups and fruit juice. In 2003, the US Sugar Association, a powerful food-industry lobby group based in Washington DC, pressed the US government to withdraw funding for the WHO if the organization did not modify its recommendations. The WHO did not back down, and has now mooted cutting the level to 5%.
In online social media systems, users are not only posting, consuming, and resharing content, but also creating new and destroying existing connections in the underlying social network. While each of these two types of dynamics has individually been studied in the past, much less is known about the connection between the two. How does user information posting and seeking behavior interact with the evolution of the underlying social network structure? Here, we study ways in which network structure reacts to users posting and sharing content. We examine the complete dynamics of the Twitter information network, where users post and reshare information while they also create and destroy connections. We find that the dynamics of network structure can be characterized by steady rates of change, interrupted by sudden bursts. Information diffusion in the form of cascades of post re-sharing often creates such sudden bursts of new connections, which significantly change users' local network structure. These bursts transform users' networks of followers to become structurally more cohesive as well as more homogeneous in terms of follower interests. We also explore the effect of the information content on the dynamics of the network and find evidence that the appearance of new topics and real-world events can lead to significant changes in edge creations and deletions. Lastly, we develop a model that quantifies the dynamics of the network and the occurrence of these bursts as a function of the information spreading through the network. The model can successfully predict which information diffusion events will lead to bursts in network dynamics.
The Bursty Dynamics of the Twitter Information Network Seth A. Myers, Jure Leskovec
After a traumatic brain injury, it sometimes happens that the brain can repair itself, building new brain cells to replace damaged ones. But the repair doesn't happen quickly enough to allow recovery from degenerative conditions like motor neuron disease (also known as Lou Gehrig's disease or ALS). Siddharthan Chandran walks through some new techniques using special stem cells that could allow the damaged brain to rebuild faster.
Scientometrics: Untangling the topics Adam Szanto-Varnagy, Peter Pollner, Tamas Vicsek, Illes J. Farkas
Network epidemiology has become a core framework for investigating the role of human contact patterns in the spreading of infectious diseases. Network epidemiology represents the contact structure as a network of nodes (individuals) connected by links (sometimes as a temporal network where the links are not continuously active) and the disease as a compartmental model (where individuals are assigned states with respect to the disease and follow certain transition rules between the states). In this paper, we discuss fast algorithms for such simulations and also compare two commonly used versions - one where there is a constant recovery rate (the number of individuals that stop being infectious per unit time is proportional to the number of infectious individuals), the other where the duration of the disease is constant. We find that, for most practical purposes, these versions are qualitatively the same.
Model versions and fast algorithms for network epidemiology Petter Holme
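The two model versions compared above can be sketched in a discrete-time SIR simulation. This is a minimal illustration of the distinction (exponentially distributed versus fixed infectious periods), not the fast algorithms of the paper; the network, transmission probability, and recovery parameters are assumed toy values.

```python
import random

def sir_on_network(neighbors, beta, recovery, seed=1, source=0):
    """Discrete-time SIR on a contact network (adjacency list).
    recovery = ('rate', r): each infectious node recovers with prob. r
    per step (geometric, i.e., constant recovery rate), or
    recovery = ('fixed', d): every node is infectious for exactly d steps."""
    rng = random.Random(seed)
    n = len(neighbors)
    state = ['S'] * n
    state[source] = 'I'
    infected_at = {source: 0}
    t = 0
    while 'I' in state:
        t += 1
        newly_infected = []
        for u in range(n):                     # transmission step
            if state[u] != 'I':
                continue
            for v in neighbors[u]:
                if state[v] == 'S' and rng.random() < beta:
                    newly_infected.append(v)
        kind, par = recovery
        for u in range(n):                     # recovery step
            if state[u] == 'I':
                if kind == 'rate' and rng.random() < par:
                    state[u] = 'R'
                elif kind == 'fixed' and t - infected_at[u] >= par:
                    state[u] = 'R'
        for v in newly_infected:               # new infections take effect
            if state[v] == 'S':
                state[v] = 'I'
                infected_at[v] = t
    return state.count('R')                    # final outbreak size

# Toy contact structure: a ring of 100 individuals
ring = [[(i - 1) % 100, (i + 1) % 100] for i in range(100)]
size_rate  = sir_on_network(ring, beta=0.5, recovery=('rate', 0.25))
size_fixed = sir_on_network(ring, beta=0.5, recovery=('fixed', 4))
```

Both versions use the same mean infectious period (four steps here); averaging either variant over many seeds illustrates the paper's point that the two are qualitatively similar for most practical purposes.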
We address the question to what extent the success of scientific articles is due to social influence. Analyzing a data set of over 100,000 publications from the field of Computer Science, we study how centrality in the coauthorship network differs between authors who have highly cited papers and those who do not. We further show that a machine learning classifier, based only on coauthorship network centrality measures at the time of publication, is able to predict with high precision whether an article will be highly cited five years after publication. We thereby provide quantitative insight into the social dimension of scientific publishing, challenging the perception of citations as an objective, socially unbiased measure of scientific success.
Predicting Scientific Success Based on Coauthorship Networks Emre Sarigöl, Rene Pfitzner, Ingo Scholtes, Antonios Garas, Frank Schweitzer
We show that numerical approximations of Kolmogorov complexity (K) of graphs and networks capture some group-theoretic and topological properties of empirical networks, ranging from metabolic to social networks, and of small synthetic networks that we have produced. That K and the size of the group of automorphisms of a graph are correlated opens up interesting connections to problems in computational geometry, and thus connects several measures and concepts from complexity science. We derive these results via two different Kolmogorov complexity approximation methods applied to the adjacency matrices of the graphs and networks. The methods used are the traditional lossless compression approach to Kolmogorov complexity, and a normalized version of a Block Decomposition Method (BDM) based on algorithmic probability theory.
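The lossless-compression approach mentioned above rests on a simple idea: the better a graph's adjacency matrix compresses, the lower its approximate Kolmogorov complexity. A minimal sketch using `zlib` follows; it is a hypothetical illustration of the compression method only (the BDM variant and the graph sizes used in the paper are not reproduced).

```python
import random
import zlib

def compression_complexity(adj_matrix):
    """Approximate the Kolmogorov complexity of a graph by losslessly
    compressing its flattened adjacency matrix: a smaller compressed
    size indicates a more regular (lower-K) graph."""
    bits = ''.join(str(b) for row in adj_matrix for b in row)
    return len(zlib.compress(bits.encode(), 9))

n = 32
# Highly regular graph: the complete graph (all ones off the diagonal)
complete = [[1 if i != j else 0 for j in range(n)] for i in range(n)]

# Irregular graph: pseudo-random symmetric adjacency matrix, no self-loops
rng = random.Random(7)
rand = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        rand[i][j] = rand[j][i] = rng.randint(0, 1)

k_complete = compression_complexity(complete)   # compresses very well
k_random = compression_complexity(rand)         # nearly incompressible
```

The complete graph, with its large automorphism group, yields a far smaller compressed size than the random graph, which is the correlation between symmetry and approximated K that the paper quantifies.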
Social networks pervade our everyday lives: we interact, influence, and are influenced by our friends and acquaintances. With the advent of the World Wide Web, large amounts of data on social networks have become available, allowing the quantitative analysis of the distribution of information on them, including behavioral traits and fads. Recent studies of correlations among members of a social network who exhibit the same trait have shown that individuals influence not only their direct contacts but also friends’ friends, up to a network distance extending beyond their closest peers. Here, we show how such patterns of correlations between peers emerge in networked populations. We use standard models (yet reflecting intrinsically different mechanisms) of information spreading to argue that empirically observed patterns of correlation among peers emerge naturally from a wide range of dynamics, being essentially independent of the type of information, of how it spreads, and even of the class of underlying network that interconnects individuals. Finally, we show that the sparser and more clustered the network, the more far-reaching the influence of each individual will be. DOI: http://dx.doi.org/10.1103/PhysRevLett.112.098702
Origin of Peer Influence in Social Networks Phys. Rev. Lett. 112, 098702 – Published 6 March 2014 Flávio L. Pinheiro, Marta D. Santos, Francisco C. Santos, and Jorge M. Pacheco
As information thunders through the digital economy, it’s easy to miss valuable “weak signals” often hidden amid the noise. Arising primarily from social media, they represent snippets—not streams—of information and can help companies to figure out what customers want and to spot looming industry and market disruptions before competitors do. Sometimes, companies notice them during data-analytics number-crunching exercises. Or employees who apply methods more akin to art than to science might spot them and then do some further number crunching to test anomalies they’re seeing or hypotheses the signals suggest. In any case, companies are just beginning to recognize and capture their value. Here are a few principles that companies can follow to grasp and harness the power of weak signals.
We discuss models and data of crowd disasters, crime, terrorism, war and disease spreading to show that conventional recipes, such as deterrence strategies, are neither effective nor sufficient to contain them. The failure of many conventional approaches results from their neglect of feedback loops, instabilities and/or cascade effects, because of which equilibrium models often fail to provide a good picture of the actual system behavior. However, the complex and often counter-intuitive behavior of social systems and their macro-level collective dynamics can be understood by means of complexity science, which enables one to address the aforementioned problems more successfully. We highlight that suitable system design and management can help to stop undesirable cascade effects and to enable favorable kinds of self-organization in the system. In this way, complexity science can help to save human lives.
How to Save Human Lives with Complexity Science Dirk Helbing, Dirk Brockmann, Thomas Chadefaux, Karsten Donnay, Ulf Blanke, Olivia Woolley-Meza, Mehdi Moussaid, Anders Johansson, Jens Krause, Sebastian Schutte, Matjaz Perc