What are the neural signatures of consciousness? This remains an elusive yet fascinating challenge for cognitive neuroscience, but it takes on immediate clinical and societal significance in patients diagnosed as vegetative or minimally conscious. In these patients, the question becomes whether we can test for the presence of these signatures in the absence of any external signs of awareness. Recent conceptual advances suggest that consciousness requires a dynamic balance between integrated and differentiated networks of information exchange between brain regions. Here we apply this insight to study such networks in patients and compare them with those of healthy adults. Using graph theory, we show that the rich and diversely connected networks that support awareness are characteristically impaired in patients, lacking the ability to efficiently integrate information across disparate regions via well-connected hubs. We find that the quality of patients' networks correlates with their degree of behavioural responsiveness, and that some vegetative patients who show signs of hidden awareness have remarkably well-preserved networks, similar to those of healthy adults.
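The graph-theoretic notions this abstract leans on, efficient integration via well-connected hubs, can be made concrete. Below is a minimal, self-contained sketch (illustrative only, not the study's analysis pipeline; the toy graphs and the `hubs` helper are invented for the example) contrasting the global efficiency of a densely interconnected network with that of a fragmented chain:

```python
from collections import deque

def global_efficiency(adj):
    """Average of 1/shortest-path-length over all ordered node pairs;
    1.0 means every node reaches every other in a single hop."""
    n = len(adj)
    total = 0.0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != src)
    return total / (n * (n - 1))

def hubs(adj, top_k=2):
    """Highest-degree nodes: candidate integration hubs."""
    return sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:top_k]

# A densely interconnected "intact" network vs. a fragmented chain.
clique = {i: [j for j in range(5) if j != i] for i in range(5)}
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(global_efficiency(clique), global_efficiency(chain), hubs(chain))
```

The clique scores a global efficiency of 1.0, while the chain scores markedly lower: the toy analogue of the impaired integration described above.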
The Western Ghats in India rise like a wall between the Arabian Sea and the heart of the subcontinent to the east. The 1,000-mile-long chain of coastal mountains is dense with lush rainforest and grasslands, and each year, clouds bearing monsoon rains blow in from the southwest and break against the mountains’ flanks, unloading water…
Complex Adaptive Systems Modeling welcomes submissions to the new thematic series on Modeling large-scale communication networks using complex networks and agent-based modeling techniques. This thematic series intends to publish high-quality original research as well as review articles on case studies, models, and methods for the modeling and simulation of large-scale computer communication networks using either of the following two approaches:
- Complex networks (modeled with tools such as Gephi, Network Workbench, and others)
- Agent-based models (built with NetLogo, Repast, Mason, Swarm, and others)
Recently much attention has been paid to the robustness of interdependent and multiplex networks and, in particular, of networks of networks. The robustness of interdependent networks can be evaluated by the size of the mutually connected component when a fraction of nodes has been removed. Here we characterize the emergence of the mutually connected component in a network of networks in which every node of a network (layer) alpha is connected with q_alpha of its randomly chosen replicas in other networks and is interdependent with these nodes with probability r. We find that when the superdegrees q_alpha of the different layers are distributed heterogeneously, multiple percolation phase transitions can occur. We show that, depending on the value of r, these transitions are continuous or discontinuous.
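The mutually connected component can be illustrated with a small simulation. The sketch below is a toy (assuming full interdependence and two layers sharing one node set, not the paper's general q_alpha/r model): it prunes nodes until every survivor sits in the largest cluster of both layers simultaneously.

```python
def largest_cluster(edges, alive):
    """Largest connected component of the subgraph induced on `alive`."""
    adj = {u: set() for u in alive}
    for u, v in edges:
        if u in alive and v in alive:
            adj[u].add(v)
            adj[v].add(u)
    best, seen = set(), set()
    for s in alive:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

def mutual_component(layers, nodes):
    """Nodes lying in the giant cluster of EVERY layer at once:
    prune until the surviving set is self-consistent."""
    alive = set(nodes)
    while True:
        new_alive = alive
        for edges in layers:
            new_alive = largest_cluster(edges, new_alive)
        if new_alive == alive:
            return alive
        alive = new_alive

layer1 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]  # a chain
layer2 = [(0, 1), (1, 2), (4, 5)]                  # fragmented
print(sorted(mutual_component([layer1, layer2], range(6))))  # → [0, 1, 2]
```

Although all six nodes are connected in layer 1, interdependence with the fragmented layer 2 shrinks the mutually connected component to three nodes, which is the mechanism behind the abrupt transitions the abstract describes.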
Transfer entropy is a recently introduced information-theoretic measure quantifying directed statistical coherence between spatiotemporal processes, and is widely used in diverse fields ranging from finance to neuroscience. However, its relationships to fundamental limits of computation, such as Landauer's limit, remain unknown. Here we show that in order to increase transfer entropy (predictability) by one bit, heat flow must match or exceed Landauer's limit. Importantly, we generalise Landauer's limit to bi-directional information dynamics for non-equilibrium processes, revealing that the limit applies to prediction, in addition to retrodiction (information erasure). Furthermore, the results are related to negentropy, and to Bremermann's limit and the Bekenstein bound, producing, perhaps surprisingly, lower bounds on the computational deceleration and information loss incurred during an increase in predictability about the process. The identified relationships set new computational limits in terms of fundamental physical quantities, and establish transfer entropy as a central measure connecting information theory, thermodynamics and theory of computation.
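As a concrete illustration of the quantities involved, here is a minimal plug-in estimator of transfer entropy for binary series with history length one, together with the Landauer cost of the predictability it measures. This is illustrative only: the paper's results are analytical, and the temperature and the coupled series below are arbitrary choices of ours.

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of T_{Y->X} in bits (history length 1):
    sum over (x_next, x_prev, y_prev) of
    p(.) * log2[ p(x_next | x_prev, y_prev) / p(x_next | x_prev) ]."""
    n = len(x) - 1
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))
    pairs_xy = Counter(zip(x[:-1], y[:-1]))
    pairs_xx = Counter(zip(x[1:], x[:-1]))
    singles = Counter(x[:-1])
    te = 0.0
    for (xn, xp, yp), c in triples.items():
        p_cond_xy = c / pairs_xy[(xp, yp)]
        p_cond_x = pairs_xx[(xn, xp)] / singles[xp]
        te += (c / n) * math.log2(p_cond_xy / p_cond_x)
    return te

random.seed(1)
y = [random.randint(0, 1) for _ in range(5000)]
x = [0] + y[:-1]                 # x copies y with a one-step lag
te_fwd = transfer_entropy(x, y)  # close to 1 bit: y drives x
te_bwd = transfer_entropy(y, x)  # close to 0: no influence backwards

# Landauer: gaining one bit of predictability costs at least k_B*T*ln(2) heat.
k_B, T = 1.380649e-23, 300.0     # T = 300 K is an arbitrary room-temperature choice
min_heat = te_fwd * k_B * T * math.log(2)
print(te_fwd, te_bwd, min_heat)
```

The asymmetry between the forward and backward estimates is what makes transfer entropy a *directed* measure of statistical coherence.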
The intelligence phenomenon continues to fascinate scientists and engineers, remaining an elusive moving target. Following numerous past observations (e.g., Hofstadter, 1985, p. 585), it can be pointed out that several attempts to construct “artificial intelligence” have turned to designing programs with discriminative power. These programs would allow computers to discern between meaningful and meaningless in similar ways to how humans perform this task. Interestingly, as noted by de Looze (2006) among others, such discrimination is based on etymology of “intellect” derived from Latin “intellego” (inter-lego): to choose between, or to perceive/read (a core message) between (alternatives). In terms of computational intelligence, the ability to read between the lines, extracting some new essence, corresponds to mechanisms capable of generating computational novelty and choice, coupled with active perception, learning, prediction, and post-diction. When a robot demonstrates a stable control in presence of a priori unknown environmental perturbations, it exhibits intelligence. When a software agent generates and learns new behaviors in a self-organizing rather than a predefined way, it seems to be curiosity-driven. When an algorithm rapidly solves a hard computational problem, by efficiently exploring its search-space, it appears intelligent.
The availability of human genome sequence has transformed biomedical research over the past decade. However, an equivalent map for the human proteome with direct measurements of proteins and peptides does not exist yet. Here we present a draft map of the human proteome using high-resolution Fourier-transform mass spectrometry. In-depth proteomic profiling of 30 histologically normal human samples, including 17 adult tissues, 7 fetal tissues and 6 purified primary haematopoietic cells, resulted in identification of proteins encoded by 17,294 genes accounting for approximately 84% of the total annotated protein-coding genes in humans. A unique and comprehensive strategy for proteogenomic analysis enabled us to discover a number of novel protein-coding regions, which includes translated pseudogenes, non-coding RNAs and upstream open reading frames. This large human proteome catalogue (available as an interactive web-based resource at http://www.humanproteomemap.org ) will complement available human genome and transcriptome data to accelerate biomedical research in health and disease.
A draft map of the human proteome • Min-Sik Kim, et al.
It is commonly believed that information spreads between individuals like a pathogen, with each exposure by an informed friend potentially resulting in a naive individual becoming infected. However, empirical studies of social media suggest that individual response to repeated exposure to information is far more complex. As a proxy for intervention experiments, we compare user responses to multiple exposures on two different social media sites, Twitter and Digg. We show that the position of exposing messages on the user-interface strongly affects social contagion. Accounting for this visibility significantly simplifies the dynamics of social contagion. The likelihood an individual will spread information increases monotonically with exposure, while explicit feedback about how many friends have previously spread it increases the likelihood of a response. We provide a framework for unifying information visibility, divided attention, and explicit social feedback to predict the temporal dynamics of user behavior.
Time plays an essential role in the diffusion of information, influence, and disease over networks. In many cases we can only observe when a node is activated by a contagion—when a node learns about a piece of information, makes a decision, adopts a new behavior, or becomes infected with a disease. However, the underlying network connectivity and transmission rates between nodes are unknown. Inferring the underlying diffusion dynamics is important because it leads to new insights and enables forecasting, as well as influencing or containing information propagation. In this paper we model diffusion as a continuous temporal process occurring at different rates over a latent, unobserved network that may change over time. Given information diffusion data, we infer the edges and dynamics of the underlying network. Our model naturally imposes sparse solutions and requires no parameter tuning. We develop an efficient inference algorithm that uses stochastic convex optimization to compute online estimates of the edges and transmission rates. We evaluate our method by tracking information diffusion among 3.3 million mainstream media sites and blogs, and experiment with more than 179 million different instances of information spreading over the network in a one-year period. We apply our network inference algorithm to the top 5,000 media sites and blogs and report several interesting observations. First, information pathways for general recurrent topics are more stable across time than for on-going news events. Second, clusters of news media sites and blogs often emerge and vanish in a matter of days for on-going news events. Finally, major events, for example, large scale civil unrest as in the Libyan civil war or Syrian uprising, increase the number of information pathways among blogs, and also increase the network centrality of blogs and social media sites.
Uncovering the structure and temporal dynamics of information propagation. Manuel Gomez Rodriguez, Jure Leskovec, David Balduzzi, Bernhard Schölkopf. Network Science, Volume 2, Issue 01, April 2014, pp. 26–65. http://dx.doi.org/10.1017/nws.2014.3
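To make the inference task concrete, here is a deliberately crude toy, far simpler than the paper's stochastic convex optimization: given cascades recorded as node-activation times, score a candidate edge i→j by how often j activates within a short window after i. The cascades, window, and node names are invented for the example.

```python
from collections import Counter

def infer_edges(cascades, window=1.0, top=3):
    """Score candidate edges by co-activation within `window` time units."""
    score = Counter()
    for times in cascades:              # each cascade: {node: activation time}
        for i, ti in times.items():
            for j, tj in times.items():
                if 0 < tj - ti <= window:
                    score[(i, j)] += 1
    return [edge for edge, _ in score.most_common(top)]

cascades = [
    {"a": 0.0, "b": 0.5, "c": 1.2},
    {"a": 0.0, "b": 0.4, "d": 2.0},
    {"b": 0.0, "c": 0.6},
]
print(infer_edges(cascades))  # → [('a', 'b'), ('b', 'c')]
```

Even this naive counting recovers the two pathways a→b and b→c that repeat across cascades; the paper's method additionally handles transmission rates, sparsity, and networks that change over time.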
We have often observed unexpected state transitions in complex systems, which raises the question of how to steer a complex system from an unexpected state to a desired one. Here we introduce the concept of transittability of complex networks, and derive a new necessary and sufficient condition for state transittability that can be efficiently verified. We define the steering kernel as a minimal set of steering nodes to which control signals must be applied directly for transition between two specific states of a network, and propose a graph-theoretic algorithm to identify it. We applied our algorithm to 27 real complex networks, finding that the steering kernels required for transittability are much smaller than the node sets required for complete controllability. Furthermore, applications to regulatory biomolecular networks not only validated our method but also identified the steering kernels for their phenotype transitions.
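The complete-controllability baseline the authors compare against can be computed with the classic maximum-matching construction (Liu et al.'s structural controllability): the minimum number of driver nodes equals N minus the size of a maximum matching over the directed edges. Below is a minimal sketch of that baseline (not the paper's steering-kernel algorithm; the toy graphs are ours):

```python
def max_matching(adj_out, nodes):
    """Maximum bipartite matching (out-copies vs. in-copies) via augmenting paths."""
    match = {}                               # right node -> matched left node
    def augment(u, seen):
        for v in adj_out.get(u, []):
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False
    return sum(augment(u, set()) for u in nodes)

def min_driver_nodes(edges, nodes):
    """Minimum driver nodes for complete structural controllability."""
    adj_out = {}
    for u, v in edges:
        adj_out.setdefault(u, []).append(v)
    return max(len(nodes) - max_matching(adj_out, nodes), 1)

chain = [(1, 2), (2, 3), (3, 4)]   # fully matchable: one driver suffices
star = [(1, 2), (1, 3), (1, 4)]    # one hub cannot drive three leaves alone
print(min_driver_nodes(chain, [1, 2, 3, 4]),
      min_driver_nodes(star, [1, 2, 3, 4]))  # → 1 3
```

Transittability between two *specific* states needs only a subset of this machinery, which is why the paper's steering kernels come out much smaller.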
Power grids, road maps, and river streams are examples of infrastructural networks which are highly vulnerable to external perturbations. An abrupt local change of load (voltage, traffic density, or water level) might propagate in a cascading way and affect a significant fraction of the network. Almost discontinuous perturbations can be modeled by shock waves which can eventually interfere constructively and endanger the normal functionality of the infrastructure. We study their dynamics by solving the Burgers equation under random perturbations on several real and artificial directed graphs. Even for graphs with a narrow distribution of node properties (e.g., degree or betweenness), a steady state is reached exhibiting a heterogeneous load distribution, having a difference of one order of magnitude between the highest and average loads. Unexpectedly we find for the European power grid and for finite Watts-Strogatz networks a broad pronounced bimodal distribution for the loads. To identify the most vulnerable nodes, we introduce the concept of node-basin size, a purely topological property which we show to be strongly correlated to the average load of a node.
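The node-basin size introduced at the end is purely topological. A minimal reading, sketched here (our interpretation for illustration, not code from the paper), counts the upstream nodes whose flow can eventually reach a given node:

```python
from collections import deque

def basin_size(edges, target):
    """Number of upstream nodes: nodes with a directed path INTO target."""
    rev = {}
    for u, v in edges:
        rev.setdefault(v, []).append(u)
    seen, q = {target}, deque([target])
    while q:                               # BFS over reversed edges
        for u in rev.get(q.popleft(), []):
            if u not in seen:
                seen.add(u)
                q.append(u)
    return len(seen) - 1                   # exclude the target itself

# Toy river network: two tributaries merge at c, which feeds d alongside e.
river = [("a", "c"), ("b", "c"), ("c", "d"), ("e", "d")]
print(basin_size(river, "d"), basin_size(river, "c"), basin_size(river, "a"))
# → 4 2 0
```

Nodes with large basins collect perturbations from many upstream sources, which is the intuition behind their vulnerability to cascading shocks.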
Following last year's successful edition, we have once more decided to organize a summer school coinciding with the European Conference on Complex Systems, taking advantage of the presence in Lucca of a wide variety of experts on different topics. The school aims to offer young researchers the opportunity to learn new methods, present their work, and meet fellow researchers; it is also a good opportunity for them to prepare for the main ECCS conference in an informal and relaxed environment. In keeping with our policy of showcasing local talent, three renowned Italian researchers will each present a different aspect of complex networks in three-hour sessions. Names such as Dr. Roberta Sinatra, Dr. Ciro Cattuto and Prof. Stefano Battiston should sound familiar to any interested student. Furthermore, we plan a meeting where each participant can share their work with the others, organized as a flash-presentation workshop. Of course, a major social event is also included, to stimulate networking and “prepare” for the official ECCS conference.
Core percolation is a fundamental structural transition in complex networks, related to a wide range of important problems. Recent advances have provided an analytical framework for core percolation in uncorrelated random networks with arbitrary degree distributions. Here we apply these tools to the analysis of network controllability. We confirm analytically that the emergence of the bifurcation in control coincides with the formation of the core, and that the structure of the core determines the control mode of the network. We also derive an analytical expression for controllability robustness by extending the core-percolation derivation. These findings help us better understand the interplay between the structural and dynamical properties of complex networks.
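The core in core percolation is found by greedy leaf removal: repeatedly delete a degree-1 node together with its neighbor, discard isolated nodes, and whatever survives is the core. A minimal sketch (illustrative; the paper's framework is analytical and for random networks, while the toy graphs below are ours):

```python
def core(edge_list, nodes):
    """Greedy leaf removal: delete each degree-1 node with its neighbor;
    drop isolated nodes; the remainder is the core."""
    adj = {u: set() for u in nodes}
    for u, v in edge_list:
        adj[u].add(v)
        adj[v].add(u)

    def delete(node):
        for nb in adj.pop(node, set()):
            if nb in adj:
                adj[nb].discard(node)

    while True:
        leaf = next((u for u in adj if len(adj[u]) == 1), None)
        if leaf is None:
            break
        neighbor = next(iter(adj[leaf]))
        delete(leaf)
        delete(neighbor)
    return {u for u in adj if adj[u]}

k4 = [(a, b) for a in range(4) for b in range(a + 1, 4)]  # complete graph: no leaves
path = [(0, 1), (1, 2), (2, 3)]                           # a chain: peels to nothing
print(sorted(core(k4, range(4))), sorted(core(path, range(4))))
```

The complete graph survives intact as its own core, while the path is peeled away entirely, mirroring how core formation coincides with the change of control mode described above.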
Over the course of human history, thousands of languages have developed from what was once a much smaller number. How did we end up with so many? And how do we keep track of them all? Alex Gendler explains how linguists group languages into language families, demonstrating how these linguistic trees give us crucial insights into the past.
Never before have politicians, business leaders, and scientists been more urgently needed to master the challenges ahead of us. We are in the middle of a third industrial revolution. While we see the symptoms, such as the financial and economic crisis, cybercrime, and cyberwar, we have not yet understood the implications well. But at the end of this socio-economic transformation, we will live in a digital society. This comes with breathtaking opportunities and challenges of a kind that occur only every 100 years.
Social networks pervade our everyday lives: we interact with, influence, and are influenced by our friends and acquaintances. With the advent of the World Wide Web, large amounts of data on social networks have become available, allowing the quantitative analysis of the distribution of information on them, including behavioral traits and fads. Recent studies of correlations among members of a social network who exhibit the same trait have shown that individuals influence not only their direct contacts but also friends’ friends, up to a network distance extending beyond their closest peers. Here, we show how such patterns of correlation between peers emerge in networked populations. We use standard models of information spreading (yet reflecting intrinsically different mechanisms) to argue that empirically observed patterns of correlation among peers emerge naturally from a wide range of dynamics, being essentially independent of the type of information, of how it spreads, and even of the class of underlying network that interconnects individuals. Finally, we show that the sparser and more clustered the network, the more far-reaching the influence of each individual will be. DOI: http://dx.doi.org/10.1103/PhysRevLett.112.098702
Origin of Peer Influence in Social Networks Phys. Rev. Lett. 112, 098702 – Published 6 March 2014 Flávio L. Pinheiro, Marta D. Santos, Francisco C. Santos, and Jorge M. Pacheco
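How correlations at a distance can emerge from spreading alone is easy to see in a toy simulation (ours, not one of the paper's models): run a simple SI-style cascade on a ring and measure how often pairs of nodes at network distance d share the adopted trait. The ring size and transmission probability are arbitrary choices.

```python
import random

random.seed(42)
n = 200            # ring size (arbitrary)
p_transmit = 0.45  # per-contact adoption probability (arbitrary)

# Simple SI cascade from node 0: each new adopter tries its ring neighbors once.
adopted, frontier = {0}, [0]
while frontier:
    nxt = []
    for u in frontier:
        for v in ((u - 1) % n, (u + 1) % n):
            if v not in adopted and random.random() < p_transmit:
                adopted.add(v)
                nxt.append(v)
    frontier = nxt

trait = [1 if i in adopted else 0 for i in range(n)]

def agreement(d):
    """Fraction of node pairs at ring-distance d that share the trait."""
    return sum(trait[i] == trait[(i + d) % n] for i in range(n)) / n

print([round(agreement(d), 3) for d in (1, 2, 5, 10)])
```

Agreement is highest between close neighbors and decays with distance, even though no node ever "intended" to correlate with anyone beyond its direct contacts.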
The compartmental models used to study epidemic spreading often assume the same susceptibility for all individuals and are therefore agnostic about the effects that differences in susceptibility can have on epidemic spreading. Here we show that, for the SIS model, differential susceptibility can make networks more vulnerable to the spread of diseases when the correlation between a node's degree and susceptibility is positive, and less vulnerable when this correlation is negative. Moreover, we show that networks become more likely to contain a pocket of infection when individuals are more likely to connect with others of similar susceptibility (i.e., the network is segregated). These results show that failing to include differential susceptibility in epidemic models can lead to systematic over- or underestimation of fundamental epidemic parameters when the structure of the network is not independent of the susceptibility of the nodes, or when there are correlations between the susceptibilities of connected individuals.
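The direction of the degree–susceptibility effect can be seen in a back-of-the-envelope annealed mean-field calculation (a heuristic of ours, not the paper's derivation): with per-node susceptibility sigma_i, the effective spreading factor scales like <sigma * k^2> / <k>, so placing high susceptibility on hubs raises it and placing it on leaves lowers it. The degree sequence and susceptibility values are invented for the example.

```python
# Heterogeneous degree sequence and a fixed pool of susceptibilities.
degrees = sorted([1, 1, 2, 2, 3, 3, 8, 10])
sigmas = sorted([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])

def spreading_factor(degrees, sigmas):
    """Annealed mean-field heuristic: <sigma * k^2> / <k>."""
    mean_k = sum(degrees) / len(degrees)
    mean_sk2 = sum(s * k * k for s, k in zip(sigmas, degrees)) / len(degrees)
    return mean_sk2 / mean_k

pos = spreading_factor(degrees, sigmas)        # high susceptibility on hubs
neg = spreading_factor(degrees, sigmas[::-1])  # high susceptibility on leaves
print(pos, neg)  # pos > neg: positive correlation eases spreading
```

The same pool of susceptibilities yields a markedly larger spreading factor when it is positively correlated with degree, matching the vulnerability ordering stated above.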
Recently, the impact of network structure on evolutionary dynamics has been at the center of attention in studies of structured populations. This paper aims to identify the key structural feature of a network that captures its impact on evolutionary dynamics. To this end, a novel concept called heat heterogeneity is introduced to characterize the structural heterogeneity of a network, and the correlation between the heat heterogeneity of a structure and the outcome of evolutionary dynamics is investigated on various networks. It is found that heat heterogeneity largely determines the impact of network structure on evolutionary dynamics on complex networks. Specifically, heat heterogeneity readjusts the selection effect: networks with high heat heterogeneity amplify selection under the birth-death process and suppress it under the death-birth process. Based on these results, an effective algorithm is proposed to generate selection adjusters of desired size and average degree.
While studies of aging are widely framed in terms of degenerative processes, the brain provides a unique opportunity to uncover the adaptive effects of getting older. Though it is intuitively reasonable that life experience and wisdom should reside somewhere in the human cortex, these features have eluded neuroscientific explanation. The present study uses a “Bayesian brain” framework to motivate an analysis of cortical circuit processing. From a Bayesian perspective, the brain represents a model of its environment and offers predictions about the world, while responding, through changing synaptic strengths, to novel interactions and experiences. We hypothesized that these predictive and updating processes are modified as we age, representing an optimization of neuronal architecture. Using novel sensory stimuli, we demonstrate that the synaptic connections of older brains resist trial-by-trial learning in order to provide a robust model of their sensory environment. These older brains are capable of processing a wider range of sensory inputs: they represent experienced generalists. We thus explain how, contrary to a singularly degenerative point of view, the neurobiological effects of aging may be understood, in sanguine terms, as adaptive and useful.
Moran RJ, Symmonds M, Dolan RJ, Friston KJ (2014) The Brain Ages Optimally to Model Its Environment: Evidence from Sensory Learning over the Adult Lifespan. PLoS Comput Biol 10(1): e1003422. http://dx.doi.org/10.1371/journal.pcbi.1003422
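The intuition that a tighter (more precise) prior resists trial-by-trial updating can be captured in a two-line precision-weighted update. This is an illustrative toy of ours, not the study's dynamic causal modelling; the precision values are arbitrary.

```python
def update(mu, prior_precision, obs, obs_precision=1.0):
    """Move the belief mu toward obs by a precision-weighted gain:
    the more precise the prior, the smaller the step."""
    gain = obs_precision / (prior_precision + obs_precision)
    return mu + gain * (obs - mu)

young_mu, old_mu = 0.0, 0.0
for obs in (1.0, 1.0, 1.0):                      # three identical surprises
    young_mu = update(young_mu, prior_precision=1.0, obs=obs)
    old_mu = update(old_mu, prior_precision=9.0, obs=obs)
print(young_mu, old_mu)  # the tighter "older" prior has shifted less
```

After identical evidence, the low-precision belief has moved most of the way to the observation while the high-precision belief has barely budged: a robust model that resists trial-by-trial learning, as the study reports for older brains.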