An information-theoretic perspective on coarse-graining is presented. It starts with an information characterization of configurations at the micro-level using a local information quantity that has a spatial average equal to a microscopic entropy. With a reversible micro dynamics, this entropy is conserved. In the micro-macro transition, it is shown how this local information quantity is transformed into a macroscopic entropy, as the local states are aggregated into macroscopic concentration variables. The information loss in this transition is identified, and the connection to the irreversibility of the macro dynamics and the second law of thermodynamics is discussed. This is then connected to a process of further coarse-graining towards higher characteristic length scales in the context of chemical reaction-diffusion dynamics capable of pattern formation. On these higher levels of coarse-graining, information flows across length scales and across space are defined. These flows obey a continuity equation for information, and they are connected to the thermodynamic constraints of the system, via an outflow of information from macroscopic to microscopic levels in the form of entropy production, as well as an inflow of information, from an external free energy source, if a spatial chemical pattern is to be maintained.
An Information-Theoretic Perspective on Coarse-Graining, Including the Transition from Micro to Macro Kristian Lindgren
Research in the field of machine intelligence is seeing a resurgence. Big conceptual breakthroughs in artificial neural networks and access to powerful processors have led to applications that can process information in a human-like way. In addition, the creation of robots that can safely assist us with different tasks may soon become a reality. The Reviews in this Insight discuss the exciting developments in these fields and the opportunities for further research.
Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot’s prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury.
Robots that can adapt like animals Antoine Cully, Jeff Clune, Danesh Tarapore & Jean-Baptiste Mouret
There is an increasing trend of people leaving digital traces through social media. This reality opens new horizons for urban studies. With this kind of data, researchers and urban planners can detect many aspects of how people live in cities and can also suggest how to transform cities into more efficient and smarter places to live in. In particular, their digital trails can be used to investigate tastes of individuals, and what attracts them to live in a particular city or to spend their vacation there. In this paper we propose an unconventional way to study how people experience the city, using information from geotagged photographs that people take at different locations. We compare the spatial behavior of residents and tourists in the 10 most photographed cities around the world. The study was conducted on both a global and local level. On the global scale we analyze the 10 most photographed cities and measure how attractive each city is for people visiting it from other cities within the same country or from abroad. For the purpose of our analysis we construct the users’ mobility network and measure the strength of the links between each pair of cities as a level of attraction of people living in one city (i.e., origin) to the other city (i.e., destination). On the local level we study the spatial distribution of user activity and identify the photographed hotspots inside each city. The proposed methodology and the results of our study offer a low-cost means of characterizing touristic activity within a certain location and can help cities strengthen their touristic potential.
Urban magnetism through the lens of geo-tagged photography Silvia Paldino, Iva Bojic, Stanislav Sobolevsky, Carlo Ratti and Marta C González
The gut microbiota is central to human health, but its establishment in early life has not been quantitatively and functionally examined. Applying metagenomic analysis on fecal samples from a large cohort of Swedish infants and their mothers, we characterized the gut microbiome during the first year of life and assessed the impact of mode of delivery and feeding on its establishment. In contrast to vaginally delivered infants, the gut microbiota of infants delivered by C-section showed significantly less resemblance to their mothers. Nutrition had a major impact on early microbiota composition and function, with cessation of breast-feeding, rather than introduction of solid food, being required for maturation into an adult-like microbiota. Microbiota composition and ecological network had distinctive features at each sampled stage, in accordance with functional maturation of the microbiome. Our findings establish a framework for understanding the interplay between the gut microbiome and the human body in early life.
Dynamics and Stabilization of the Human Gut Microbiome during the First Year of Life Fredrik Bäckhed, et al.
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
Deep learning • Yann LeCun, Yoshua Bengio & Geoffrey Hinton
The world of bees is fascinating and varied. The common honeybee is the most well-known and well-studied species, but there are thousands of wild bee species that enliven our landscapes and help to pollinate crops and wildflowers. The widely reported threats to honeybees, which cause their colonies to collapse, also jeopardize the lives of these lesser-known and under-appreciated bee species.
The organization of interactions in complex systems can be described by networks connecting different units. These graphs are useful representations of the local and global complexity of the underlying systems. The origin of their topological structure can be diverse, resulting from different mechanisms including multiplicative processes and optimization. In spatial networks or in graphs where cost constraints are at work, as occurs in a plethora of situations from power grids to the wiring of neurons in the brain, optimization plays an important part in shaping their organization. In this paper we study network designs resulting from a Pareto optimization process, where different simultaneous constraints are the targets of selection. We analyze three variations of the problem, finding phase transitions of different kinds. Distinct phases are associated with different arrangements of the connections; but the need for drastic topological changes determines neither the presence nor the nature of the phase transitions encountered. Instead, the functions under optimization do play a determinant role. This reinforces the view that phase transitions do not arise from intrinsic properties of a system alone, but from the interplay of that system with its external constraints.
Phase transitions in Pareto optimal complex networks Luís F Seoane, Ricard Solé
The Web has made it possible to harness human cognition en masse to achieve new capabilities. Some of these successes are well known; for example, Wikipedia has become the go-to place for basic information on all things; Duolingo engages millions of people in real-life translation of text, while simultaneously teaching them to speak foreign languages; and fold.it has enabled public-driven scientific discoveries by recasting complex biomedical challenges into popular online puzzle games. These and other early successes hint at the tremendous potential for future crowd-powered capabilities for the benefit of health, education, science, and society. In the process, a new field called Human Computation has emerged to better understand, replicate, and improve upon these successes through scientific research. Human Computation refers to the science that underlies online crowd-powered systems and was the topic of a recent visioning activity in which a representative cross-section of researchers, industry practitioners, visionaries, funding agency representatives, and policy makers came together to understand what makes crowd-powered systems successful. Teams of experts considered past, present, and future human computation systems to explore which kinds of crowd-powered systems have the greatest potential for societal impact and which kinds of research will best enable the efficient development of new crowd-powered systems to achieve this impact. This report summarizes the products and findings of those activities as well as the unconventional process and activities employed by the workshop, which were informed by human computation research.
A U.S. Research Roadmap for Human Computation Pietro Michelucci, Lea Shanley, Janis Dickinson, Haym Hirsh
We propose a new approach to analyzing massive transportation systems that leverages traffic information about individual travelers. The goals of the analysis are to quantify the effects of shocks in the system, such as line and station closures, and to predict traffic volumes. We conduct an in-depth statistical analysis of the Transport for London railway traffic system. The proposed methodology is unique in the way that past disruptions are used to predict unseen scenarios, by relying on simple physical assumptions of passenger flow and a system-wide model for origin–destination movement. The method is scalable, more accurate than blackbox approaches, and generalizable to other complex transportation systems. It therefore offers important insights to inform policies on urban transportation.
Predicting traffic volumes and estimating the effects of shocks in massive transportation systems Ricardo Silva, Soong Moon Kang, and Edoardo M. Airoldi
We develop a quantum information protocol that models the biological behaviors of individuals living in a natural selection scenario. The artificially engineered evolution of the quantum living units shows the fundamental features of life in a common environment, such as self-replication, mutation, interaction of individuals, and death. We propose how to mimic these bio-inspired features in a quantum-mechanical formalism, which allows for an experimental implementation achievable with current quantum platforms. This result paves the way for the realization of artificial life and embodied evolution with quantum technologies.
Artificial Life in Quantum Technologies U. Alvarez-Rodriguez, M. Sanz, L. Lamata, E. Solano
The 2014 Ebola outbreak in west Africa raised many questions about the control of infectious disease in an increasingly connected global society. Limited availability of contact information has made contact tracing difficult or impractical in combating the outbreak. We consider the development of multi-scale public health strategies and simulate policies for community-level response aimed at early screening of communities rather than individuals, as well as travel restrictions to prevent community cross-contamination. Our analysis shows community screening to be effective even at a relatively low level of compliance. In our simulations, 40% of individuals conforming to this policy is enough to stop the outbreak. Simulations with a 50% compliance rate are consistent with the case counts in Liberia during the period of rapid decline after mid September, 2014. We also find the travel restriction policies to be effective at reducing the risks associated with compliance substantially below the 40% level, shortening the outbreak and enabling efforts to be focused on affected areas. Our results suggest that the multi-scale approach could be applied to help end the outbreaks in Guinea and Sierra Leone, and the generality of our model can be used to further evolve public health strategy for defeating emerging epidemics.
D. Cooney, V. Wong, Y. Bar-Yam, Beyond contact tracing: Community-based early detection for Ebola response, ArXiv:1505.07020 [physics.soc-ph] (May 26, 2014); New England Complex Systems Institute Report 15-05-01
The relationship between information and complexity is analyzed using a detailed literature analysis. Complexity is a multifaceted concept, with no single agreed definition. There are numerous approaches to defining and measuring complexity and organization, all involving the idea of information. Conceptions of complexity, order, organization, and “interesting order” are inextricably intertwined with those of information. Shannon's formalism captures information's unpredictable creative contributions to organized complexity; a full understanding of information's relation to structure and order is still lacking. Conceptual investigations of this topic should enrich the theoretical basis of the information science discipline, and create fruitful links with other disciplines that study the concepts of information and complexity.
“Waiting for Carnot”: Information and complexity David Bawden and Lyn Robinson
Journal of the Association for Information Science and Technology Early View
Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks---the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain---and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations, and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness.
How random are complex networks? Chiara Orsini, Marija Mitrović Dankulov, Almerima Jamakovic, Priya Mahadevan, Pol Colomer-de-Simón, Amin Vahdat, Kevin E. Bassler, Zoltán Toroczkai, Marián Boguñá, Guido Caldarelli, Santo Fortunato, Dmitri Krioukov
The firm is a fundamental economic unit of contemporary human societies. Studies on the general quantitative and statistical character of firms have produced mixed results regarding their lifespans and mortality. We examine a comprehensive database of more than 25 000 publicly traded North American companies, from 1950 to 2009, to derive the statistics of firm lifespans. Based on detailed survival analysis, we show that the mortality of publicly traded companies manifests an approximately constant hazard rate over long periods of observation. This regularity indicates that mortality rates are independent of a company's age. We show that the typical half-life of a publicly traded company is about a decade, regardless of business sector. Our results shed new light on the dynamics of births and deaths of publicly traded companies and identify some of the necessary ingredients of a general theory of firms.
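The abstract's two headline figures fit together: a constant hazard rate implies exponential survival, S(t) = exp(-λt), whose half-life is ln(2)/λ. A minimal sketch, assuming nothing beyond the reported ten-year half-life:

```python
import math

def survival(t, half_life=10.0):
    """Survival probability under a constant hazard rate.

    A constant hazard lam implies exponential decay,
    S(t) = exp(-lam * t), with half-life ln(2)/lam.
    The 10-year half-life is the abstract's headline figure.
    """
    lam = math.log(2) / half_life
    return math.exp(-lam * t)

# Half of publicly traded firms are expected to survive 10 years,
# a quarter to survive 20 -- age-independent mortality.
print(round(survival(10), 3))  # 0.5
print(round(survival(20), 3))  # 0.25
```

The point of the constant-hazard finding is exactly this memorylessness: a 30-year-old firm faces the same per-year mortality risk as a 3-year-old one.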
Grounding autonomous behavior in the nervous system is a fundamental challenge for neuroscience. In particular, self-organized behavioral development provides more questions than answers. Are there special functional units for curiosity, motivation, and creativity? This paper argues that these features can be grounded in synaptic plasticity itself, without requiring any higher level constructs. We propose differential extrinsic plasticity (DEP) as a new synaptic rule for self-learning systems and apply it to a number of complex robotic systems as a test case. Without specifying any purpose or goal, seemingly purposeful and adaptive behavior is developed, displaying a certain level of sensorimotor intelligence. These surprising results require no system specific modifications of the DEP rule but arise rather from the underlying mechanism of spontaneous symmetry breaking due to the tight brain-body-environment coupling. The new synaptic rule is biologically plausible and would be an interesting target for a neurobiological investigation. We also argue that this neuronal mechanism may have been a catalyst in natural evolution.
A novel plasticity rule can explain the development of sensorimotor intelligence Ralf Der, Georg Martius
The time variation of contacts in a networked system may fundamentally alter the properties of spreading processes and affect the condition for large-scale propagation, as encoded in the epidemic threshold. Despite the great interest in the problem for the physics, applied mathematics, computer science, and epidemiology communities, a full theoretical understanding is still missing and currently limited to the cases where the time-scale separation holds between spreading and network dynamics or to specific temporal network models. We consider a Markov chain description of the susceptible-infectious-susceptible process on an arbitrary temporal network. By adopting a multilayer perspective, we develop a general analytical derivation of the epidemic threshold in terms of the spectral radius of a matrix that encodes both network structure and disease dynamics. The accuracy of the approach is confirmed on a set of temporal models and empirical networks and against numerical results. In addition, we explore how the threshold changes when varying the overall time of observation of the temporal network, so as to provide insights on the optimal time window for data collection of empirical temporal networked systems. Our framework is of both fundamental and practical interest, as it offers novel understanding of the interplay between temporal networks and spreading dynamics.
Analytical Computation of the Epidemic Threshold on Temporal Networks Eugenio Valdano, Luca Ferreri, Chiara Poletto, and Vittoria Colizza Phys. Rev. X 5, 021005 (2015)
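The matrix the abstract refers to can be sketched concretely. For a discrete-time SIS process with infection probability λ per contact and recovery probability μ, one period of the temporal network yields a propagator whose spectral radius marks the threshold. A hedged NumPy sketch (the toy adjacency matrices and parameter values are illustrative, not from the paper):

```python
import numpy as np

def infection_propagator_radius(adjacency_seq, lam, mu):
    """Spectral radius of the SIS infection propagator
    P = prod_t [(1 - mu) * I + lam * A_t]
    over one period of a temporal network. The epidemic
    threshold is where this radius crosses 1: below it,
    infections die out; above it, they can spread system-wide.
    """
    n = adjacency_seq[0].shape[0]
    P = np.eye(n)
    for A in adjacency_seq:
        P = ((1 - mu) * np.eye(n) + lam * A) @ P
    return max(abs(np.linalg.eigvals(P)))

# Toy temporal network on 3 nodes: the single edge moves each step.
A1 = np.zeros((3, 3)); A1[0, 1] = A1[1, 0] = 1
A2 = np.zeros((3, 3)); A2[1, 2] = A2[2, 1] = 1
rho = infection_propagator_radius([A1, A2], lam=0.5, mu=0.8)
print(rho < 1)  # below threshold: the epidemic dies out
```

Note how the product over time steps is what makes the threshold depend on the *ordering* of contacts, not just on the aggregated network.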
High-dimensional computational challenges are frequently explained via the curse of dimensionality, i.e., increasing the number of dimensions leads to exponentially growing computational complexity. In this commentary, we argue that thinking on a different level helps to understand why we face the curse of dimensionality. We introduce as a guiding principle the curse of instability, which triggers the classical curse of dimensionality. Furthermore, we claim that the curse of instability is a strong indicator of analytical difficulties and multiscale complexity. Finally, we suggest some practical conclusions for the analysis of mathematical models and formulate several conjectures.
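The classical curse of dimensionality the commentary starts from is easy to make concrete: covering a unit cube at fixed per-axis resolution costs exponentially many grid points as the dimension grows. A one-function illustration (the resolution of 10 points per axis is an arbitrary choice):

```python
def grid_points(d, per_axis=10):
    """Number of grid points needed to cover [0,1]^d at a fixed
    per-axis resolution: per_axis ** d, exponential in d."""
    return per_axis ** d

for d in (1, 3, 6, 10):
    print(d, grid_points(d))
# 1 -> 10, 3 -> 1000, 6 -> a million, 10 -> ten billion
```

Any grid-based method thus becomes infeasible after a handful of dimensions, which is the baseline phenomenon the "curse of instability" is proposed to explain.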
Understanding human mobility patterns -- how people move in their everyday lives -- is an interdisciplinary research field. It is a question with roots back to the 19th century that has been dramatically revitalized with the recent increase in data availability. Models of human mobility often take the population distribution as a starting point. Another, sometimes more accurate, data source is land-use maps. In this paper, we discuss how the intra-city movement patterns, and consequently population distribution, can be predicted from such data sources. As a link between land use and mobility, we show that the purposes of people's trips are strongly correlated with the land use of the trip's origin and destination. We calibrate, validate and discuss our model using survey data.
Relating land use and human intra-city mobility Minjin Lee, Petter Holme
A Sleeping Beauty (SB) in science refers to a paper whose importance is not recognized for several years after publication. Its citation history exhibits a long hibernation period followed by a sudden spike of popularity. Previous studies suggest a relative scarcity of SBs. The reliability of this conclusion is, however, heavily dependent on identification methods based on arbitrary threshold parameters for sleeping time and number of citations, applied to small or monodisciplinary bibliographic datasets. Here we present a systematic, large-scale, and multidisciplinary analysis of the SB phenomenon in science. We introduce a parameter-free measure that quantifies the extent to which a specific paper can be considered an SB. We apply our method to 22 million scientific papers published in all disciplines of natural and social sciences over a time span longer than a century. Our results reveal that the SB phenomenon is not exceptional. There is a continuous spectrum of delayed recognition where both the hibernation period and the awakening intensity are taken into account. Although many cases of SBs can be identified by looking at monodisciplinary bibliographic data, the SB phenomenon becomes much more apparent with the analysis of multidisciplinary datasets, where we can observe many examples of papers achieving delayed yet exceptional importance in disciplines different from those where they were originally published. Our analysis emphasizes a complex feature of citation dynamics that so far has received little attention, and also provides empirical evidence against the use of short-term citation metrics in the quantification of scientific impact.
Defining and identifying Sleeping Beauties in science Qing Ke, Emilio Ferrara, Filippo Radicchi, Alessandro Flammini
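A parameter-free measure of the kind the abstract describes can be sketched as follows: compare a paper's yearly citation counts against the straight line joining its citations at publication with those at its peak year, summing the normalized gaps. This is a hedged reconstruction for illustration, not necessarily the paper's exact formula:

```python
def beauty_coefficient(c):
    """Parameter-free 'sleeping beauty' score of a citation history.

    c : list of yearly citation counts, c[0] = publication year.
    Sums, up to the peak year t_m, the gap between the reference
    line from (0, c[0]) to (t_m, c[t_m]) and the actual counts,
    normalized by max(1, c[t]). Long hibernation followed by a
    late spike yields a large score; steady growth yields ~0.
    """
    t_m = max(range(len(c)), key=lambda t: c[t])
    if t_m == 0:
        return 0.0
    line = lambda t: (c[t_m] - c[0]) / t_m * t + c[0]
    return sum((line(t) - c[t]) / max(1, c[t]) for t in range(t_m + 1))

flat_then_spike = [0] * 20 + [50]   # long hibernation, sudden awakening
steady_riser = list(range(21))      # recognized immediately
print(beauty_coefficient(flat_then_spike) > beauty_coefficient(steady_riser))
```

Because the score is a sum rather than a threshold test, it yields the continuous spectrum of delayed recognition the abstract emphasizes, instead of a binary SB/non-SB label.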
Spatial variations in the distribution and composition of populations inform urban development, health-risk analyses, disaster relief, and more. Despite the broad relevance and importance of such data, acquiring local census estimates in a timely and accurate manner is challenging because population counts can change rapidly, are often politically charged, and suffer from logistical and administrative challenges. These limitations necessitate the development of alternative or complementary approaches to population mapping. In this paper we develop an explicit connection between telecommunications data and the underlying population distribution of Milan, Italy. We go on to test the scale invariance of this connection and use telecommunications data in conjunction with high-resolution census data to create easily updated and potentially real time population estimates in time and space.
High resolution population estimates from telecommunications data Rex W Douglass, David A Meyer, Megha Ram, David Rideout and Dongjin Song
Cascades are ubiquitous in various network environments. How to predict these cascades is highly nontrivial in several vital applications, such as viral marketing, epidemic prevention and traffic management. Most previous works mainly focus on predicting the final cascade sizes. As cascades are typical dynamic processes, it is always interesting and important to predict the cascade size at any time, or predict the time when a cascade will reach a certain size (e.g., a threshold for outbreak). In this paper, we unify all these tasks into a fundamental problem: cascading process prediction. That is, given the early stage of a cascade, how to predict its cumulative cascade size at any later time? For such a challenging problem, it is essential to understand the micro mechanism that drives and generates the macro phenomena (i.e., cascading processes). Here we introduce behavioral dynamics as the micro mechanism to describe the dynamic process by which a node's neighbors get infected by a cascade after the node itself is infected (i.e., one-hop sub-cascades). Through data-driven analysis, we find the common principles and patterns underlying behavioral dynamics and propose a novel Networked Weibull Regression model for behavioral dynamics modeling. After that we propose a novel method for predicting cascading processes by effectively aggregating behavioral dynamics, and propose a scalable solution to approximate the cascading process with a theoretical guarantee. We extensively evaluate the proposed method on a large scale social network dataset. The results demonstrate that the proposed method can significantly outperform other state-of-the-art baselines in multiple tasks including cascade size prediction, outbreak time prediction and cascading process prediction.
From Micro to Macro: Uncovering and Predicting Information Cascading Process with Behavioral Dynamics Linyun Yu, Peng Cui, Fei Wang, Chaoming Song, Shiqiang Yang
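The Weibull family the abstract builds on is a natural choice for the time until a neighbor gets infected, because its shape parameter lets the infection hazard rise or fall with time since the node's own infection. A minimal sketch of the Weibull survival function (parameter values are made up for illustration; this is not the paper's fitted model):

```python
import math

def weibull_survival(t, scale, shape):
    """Weibull survival S(t) = exp(-(t/scale)**shape): the
    probability that a neighbor is still uninfected at time t
    after the focal node's infection. shape < 1 means the
    hazard decreases over time -- most one-hop sub-cascades
    happen shortly after a node is infected.
    """
    return math.exp(-((t / scale) ** shape))

# Probability a neighbor remains uninfected one time unit in,
# with an early-peaked (shape < 1) infection hazard:
print(round(weibull_survival(1.0, scale=2.0, shape=0.8), 3))
```

Aggregating many such per-node survival curves over the network is, in spirit, how a cumulative cascade size at a later time can be predicted from the early stage.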
The dynamics of attention in social media tend to obey power laws. Attention concentrates on a relatively small number of popular items and neglects the vast majority of content produced by the crowd. Although popularity can be an indication of the perceived value of an item within its community, previous research has hinted at the fact that popularity is distinct from intrinsic quality. As a result, content with low visibility but high quality lurks in the tail of the popularity distribution. This phenomenon can be particularly evident in the case of photo-sharing communities, where valuable photographers who are not highly engaged in online social interactions contribute high-quality pictures that remain unseen. We propose to use a computer vision method to surface beautiful pictures from the immense pool of near-zero-popularity items, and we test it on a large dataset of creative-commons photos on Flickr. By gathering a large crowdsourced ground truth of aesthetics scores for Flickr images, we show that our method retrieves photos whose median perceived beauty score is equal to that of the most popular ones, and whose average is lower by only 1.5%.
An Image is Worth More than a Thousand Favorites: Surfacing the Hidden Beauty of Flickr Pictures Rossano Schifanella, Miriam Redi, Luca Aiello
This special issue brings together articles that illustrate the recent advances of studying complex adaptive systems in industrial ecology (IE). The authors explore the emergent behavior of sociotechnical systems, including product systems, industrial symbiosis (IS) networks, cities, resource consumption, and co-authorship networks, and offer applications of complex systems models and analyses. The articles demonstrate the links, relevance, and implications of many (often emerging) fields of study to IE, including network analysis, participatory modeling, nonequilibrium thermodynamics, and agent-based modeling. Together, these articles show that IE itself is a complex adaptive system, where knowledge, frameworks, methods, and tools evolve with and by their applications and use in small and large case studies: a multidisciplinary knowledge ecology.
Complexity in Industrial Ecology: Models, Analysis, and Actions Gerard P.J. Dijkema, Ming Xu, Sybil Derrible and Reid Lifset
Journal of Industrial Ecology Special Issue: Advances in Complex Adaptive Systems and Industrial Ecology Volume 19, Issue 2, pages 189–194, April 2015