The time variation of contacts in a networked system may fundamentally alter the properties of spreading processes and affect the condition for large-scale propagation, as encoded in the epidemic threshold. Despite the great interest in the problem from the physics, applied mathematics, computer science, and epidemiology communities, a full theoretical understanding is still missing and currently limited to cases where time-scale separation holds between spreading and network dynamics, or to specific temporal network models. We consider a Markov chain description of the susceptible-infectious-susceptible process on an arbitrary temporal network. By adopting a multilayer perspective, we develop a general analytical derivation of the epidemic threshold in terms of the spectral radius of a matrix that encodes both network structure and disease dynamics. The accuracy of the approach is confirmed on a set of temporal models and empirical networks and against numerical results. In addition, we explore how the threshold changes when varying the overall time of observation of the temporal network, so as to provide insights on the optimal time window for data collection of empirical temporal networked systems. Our framework is of both fundamental and practical interest, as it offers novel understanding of the interplay between temporal networks and spreading dynamics.
Analytical Computation of the Epidemic Threshold on Temporal Networks Eugenio Valdano, Luca Ferreri, Chiara Poletto, and Vittoria Colizza Phys. Rev. X 5, 021005 (2015)
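The abstract's central object — a matrix whose spectral radius sets the epidemic threshold — can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration (random snapshots, network size `N`, recovery probability `mu`), not the paper's data or exact construction: the idea is to multiply per-snapshot transmission matrices and locate the transmissibility at which the spectral radius of the product crosses 1.

```python
import numpy as np

# Toy temporal network: T random symmetric snapshots on N nodes (illustrative).
# mu is the assumed SIS recovery probability per time step.
rng = np.random.default_rng(0)
N, T, mu = 20, 10, 0.5
snapshots = []
for _ in range(T):
    A = (rng.random((N, N)) < 0.2).astype(float)
    A = np.triu(A, 1)
    snapshots.append(A + A.T)  # symmetric adjacency, no self-loops

def spectral_radius(lam):
    """Spectral radius of the product of per-snapshot transmission matrices."""
    P = np.eye(N)
    for A in snapshots:
        P = ((1.0 - mu) * np.eye(N) + lam * A) @ P
    return max(abs(np.linalg.eigvals(P)))

# Threshold: the transmissibility at which the spectral radius crosses 1,
# located by bisection (the radius is nondecreasing in lam).
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if spectral_radius(mid) < 1.0:
        lo = mid
    else:
        hi = mid
lam_c = 0.5 * (lo + hi)
```

Below `lam_c` an outbreak dies out in this toy model; above it, the product matrix amplifies perturbations, which is the multilayer analogue of the familiar static-network condition on the adjacency spectral radius.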
High-dimensional computational challenges are frequently explained via the curse of dimensionality, i.e., increasing the number of dimensions leads to exponentially growing computational complexity. In this commentary, we argue that thinking at a different level helps us understand why we face the curse of dimensionality. We introduce as a guiding principle the curse of instability, which triggers the classical curse of dimensionality. Furthermore, we claim that the curse of instability is a strong indicator for analytical difficulties and multiscale complexity. Finally, we suggest some practical conclusions for the analysis of mathematical models and formulate several conjectures.
Understanding human mobility patterns -- how people move in their everyday lives -- is an interdisciplinary research field. It is a question with roots back to the 19th century that has been dramatically revitalized with the recent increase in data availability. Models of human mobility often take the population distribution as a starting point. Another, sometimes more accurate, data source is land-use maps. In this paper, we discuss how intra-city movement patterns, and consequently population distribution, can be predicted from such data sources. As a link between land use and mobility, we show that the purposes of people's trips are strongly correlated with the land use of the trip's origin and destination. We calibrate, validate and discuss our model using survey data.
Relating land use and human intra-city mobility Minjin Lee, Petter Holme
A Sleeping Beauty (SB) in science refers to a paper whose importance is not recognized for several years after publication. Its citation history exhibits a long hibernation period followed by a sudden spike of popularity. Previous studies suggest a relative scarcity of SBs. The reliability of this conclusion is, however, heavily dependent on identification methods based on arbitrary threshold parameters for sleeping time and number of citations, applied to small or monodisciplinary bibliographic datasets. Here we present a systematic, large-scale, and multidisciplinary analysis of the SB phenomenon in science. We introduce a parameter-free measure that quantifies the extent to which a specific paper can be considered an SB. We apply our method to 22 million scientific papers published in all disciplines of natural and social sciences over a time span longer than a century. Our results reveal that the SB phenomenon is not exceptional. There is a continuous spectrum of delayed recognition where both the hibernation period and the awakening intensity are taken into account. Although many cases of SBs can be identified by looking at monodisciplinary bibliographic data, the SB phenomenon becomes much more apparent with the analysis of multidisciplinary datasets, where we can observe many examples of papers achieving delayed yet exceptional importance in disciplines different from those where they were originally published. Our analysis emphasizes a complex feature of citation dynamics that so far has received little attention, and also provides empirical evidence against the use of short-term citation metrics in the quantification of scientific impact.
Defining and identifying Sleeping Beauties in science Qing Ke, Emilio Ferrara, Filippo Radicchi, Alessandro Flammini
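The "parameter-free measure" described in the abstract can be sketched as a beauty coefficient that sums the normalized gap between a paper's yearly citation curve and the straight line joining its citations at publication and at its citation peak. This is a simplified reading of the measure, and the two toy citation histories are invented for illustration:

```python
def beauty_coefficient(c):
    """Beauty coefficient for a citation history c, where c[t] is the
    number of citations received in year t after publication."""
    tm = max(range(len(c)), key=lambda t: c[t])  # year of peak citations
    if tm == 0:
        return 0.0  # peak at publication: no hibernation
    # reference line from (0, c[0]) to (tm, c[tm])
    ref = lambda t: (c[tm] - c[0]) / tm * t + c[0]
    return sum((ref(t) - c[t]) / max(1, c[t]) for t in range(tm + 1))

sleeper = [0] * 9 + [100]                 # long hibernation, then a spike
steady = [10 * (t + 1) for t in range(10)]  # linear, uninterrupted growth
```

A paper that sleeps for years before spiking accumulates a large positive coefficient, while a steadily growing citation curve sits on its own reference line and scores near zero — which is why the measure needs no arbitrary threshold on sleeping time or citation counts.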
Spatial variations in the distribution and composition of populations inform urban development, health-risk analyses, disaster relief, and more. Despite the broad relevance and importance of such data, acquiring local census estimates in a timely and accurate manner is challenging because population counts can change rapidly, are often politically charged, and suffer from logistical and administrative challenges. These limitations necessitate the development of alternative or complementary approaches to population mapping. In this paper we develop an explicit connection between telecommunications data and the underlying population distribution of Milan, Italy. We go on to test the scale invariance of this connection and use telecommunications data in conjunction with high-resolution census data to create easily updated and potentially real-time population estimates in time and space.
High resolution population estimates from telecommunications data Rex W Douglass, David A Meyer, Megha Ram, David Rideout and Dongjin Song
Cascades are ubiquitous in various network environments. How to predict these cascades is highly nontrivial in several vital applications, such as viral marketing, epidemic prevention and traffic management. Most previous works mainly focus on predicting the final cascade sizes. As cascades are typical dynamic processes, it is always interesting and important to predict the cascade size at any time, or predict the time when a cascade will reach a certain size (e.g., a threshold for outbreak). In this paper, we unify all these tasks into a fundamental problem: cascading process prediction. That is, given the early stage of a cascade, how to predict its cumulative cascade size at any later time? For such a challenging problem, understanding the micro mechanisms that drive and generate the macro phenomena (i.e., cascading processes) is essential. Here we introduce behavioral dynamics as the micro mechanism describing how a node's neighbors become infected by a cascade after the node itself is infected (i.e., one-hop subcascades). Through data-driven analysis, we identify the common principles and patterns underlying behavioral dynamics and propose a novel Networked Weibull Regression model for modeling them. We then propose a novel method for predicting cascading processes by effectively aggregating behavioral dynamics, along with a scalable solution that approximates the cascading process with a theoretical guarantee. We extensively evaluate the proposed method on a large-scale social network dataset. The results demonstrate that the proposed method can significantly outperform other state-of-the-art baselines in multiple tasks including cascade size prediction, outbreak time prediction and cascading process prediction.
From Micro to Macro: Uncovering and Predicting Information Cascading Process with Behavioral Dynamics Linyun Yu, Peng Cui, Fei Wang, Chaoming Song, Shiqiang Yang
The dynamics of attention in social media tend to obey power laws. Attention concentrates on a relatively small number of popular items, neglecting the vast majority of content produced by the crowd. Although popularity can be an indication of the perceived value of an item within its community, previous research has hinted at the fact that popularity is distinct from intrinsic quality. As a result, content with low visibility but high quality lurks in the tail of the popularity distribution. This phenomenon can be particularly evident in the case of photo-sharing communities, where valuable photographers who are not highly engaged in online social interactions contribute high-quality pictures that remain unseen. We propose to use a computer vision method to surface beautiful pictures from the immense pool of near-zero-popularity items, and we test it on a large dataset of creative-commons photos on Flickr. By gathering a large crowdsourced ground truth of aesthetics scores for Flickr images, we show that our method retrieves photos whose median perceived beauty score is equal to that of the most popular ones, and whose average is lower by only 1.5%.
An Image is Worth More than a Thousand Favorites: Surfacing the Hidden Beauty of Flickr Pictures Rossano Schifanella, Miriam Redi, Luca Aiello
This special issue brings together articles that illustrate the recent advances of studying complex adaptive systems in industrial ecology (IE). The authors explore the emergent behavior of sociotechnical systems, including product systems, industrial symbiosis (IS) networks, cities, resource consumption, and co-authorship networks, and offer application of complex systems models and analyses. The articles demonstrate the links, relevance, and implications of many (often emerging) fields of study to IE, including network analysis, participatory modeling, nonequilibrium thermodynamics, and agent-based modeling. Together, these articles show that IE itself is a complex adaptive system, where knowledge, frameworks, methods, and tools evolve with and by their applications and use in small and large case studies: a multidisciplinary knowledge ecology.
Complexity in Industrial Ecology: Models, Analysis, and Actions Gerard P.J. Dijkema, Ming Xu, Sybil Derrible and Reid Lifset
Journal of Industrial Ecology Special Issue: Advances in Complex Adaptive Systems and Industrial Ecology Volume 19, Issue 2, pages 189–194, April 2015
Yes—and you're probably suffering from phantom text syndrome, too.
First it was radio. Then it was television. Now doomsayers are offering scary predictions about the consequences of smartphones and all the other digital devices to which we’ve all grown so attached. So why should you pay any attention to the warnings this time?
Apart from portability, the big difference between something like a traditional TV and your tablet is the social component, says Dr. David Strayer, a professor of cognition and neural science at the University of Utah. “Through Twitter or Facebook or email, someone in your social network is contacting you in some way all the time,” Strayer says.
The abundance of a species' population in an ecosystem is rarely stationary, often exhibiting large fluctuations over time. Using historical data on marine species, we show that the year-to-year fluctuations of population growth rate obey a well-defined double-exponential (Laplace) distribution. This striking regularity allows us to devise a stochastic model despite seemingly irregular variations in population abundances. The model identifies the effect of reduced growth at low population density as a key factor missed in current approaches of population variability analysis and without which extinction risks are severely underestimated. The model also allows us to separate the effect of demographic stochasticity and show that single-species growth rates are dominantly determined by stochasticity common to all species. This dominance—and the implications it has for interspecies correlations, including co-extinctions—emphasizes the need for ecosystem-level management approaches to reduce the extinction risk of the individual species themselves.
Regularity underlies erratic population abundances in marine ecosystems Jie Sun, Sean P. Cornelius, John Janssen, Kimberly A. Gray, Adilson E. Motter J. R. Soc. Interface 2015 12 20150235; http://dx.doi.org/10.1098/rsif.2015.0235
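The double-exponential (Laplace) regularity of growth-rate fluctuations is easy to probe numerically. The sketch below uses synthetic data with invented location/scale values, not the paper's marine time series; in a real analysis `r` would be the yearly log growth rates log(N_{t+1}/N_t). For a Laplace distribution the maximum-likelihood fit is the median for location and the mean absolute deviation from the median for scale, and its tails show a positive excess kurtosis of about 3 relative to a Gaussian:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic year-to-year log growth rates, Laplace-distributed (illustrative)
r = rng.laplace(loc=0.0, scale=0.3, size=5000)

loc_hat = np.median(r)                     # Laplace MLE for location
scale_hat = np.mean(np.abs(r - loc_hat))   # Laplace MLE for scale
# excess kurtosis: ~3 for a Laplace, 0 for a Gaussian
excess_kurtosis = np.mean(((r - r.mean()) / r.std()) ** 4) - 3.0
```

The heavy (relative to Gaussian) tails picked up by the excess kurtosis are precisely why a Gaussian fluctuation model would understate the frequency of large population swings and hence extinction risk.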
We examine all possible statistical pictures of violent conflicts over common era history with a focus on dealing with incompleteness and unreliability of data. We apply methods from extreme value theory on log-transformed data to remove compact support, then, owing to the boundedness of maximum casualties, retransform the data and derive expected means. We find the estimated mean likely to be at least three times larger than the sample mean, meaning severe underestimation of the severity of conflicts from naive observation. We check for robustness by sampling between high and low estimates and jackknifing the data. We study inter-arrival times between tail events and find (first-order) memorylessness of events. The statistical pictures obtained are at variance with claims of a "long peace".
On the tail risk of violent conflict and its underestimation Pasquale Cirillo, Nassim Nicholas Taleb
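The headline finding — that the naive sample mean severely underestimates the true mean of a fat-tailed quantity — is easy to reproduce on synthetic data. The sketch below uses a classical Pareto with tail exponent alpha = 1.3 (an assumed value for illustration, not the paper's estimate): the mean exists but is dominated by rare extremes, so the typical (median) sample mean across finite samples sits well below the true mean.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, xm = 1.3, 1.0                   # tail exponent and scale (illustrative)
true_mean = alpha / (alpha - 1) * xm   # mean of the classical Pareto

# distribution of the naive sample mean over many finite samples
# (numpy's pareto draws a Lomax variate; +1 shifts it to classical Pareto)
sample_means = np.array([((rng.pareto(alpha, size=500) + 1.0) * xm).mean()
                         for _ in range(500)])
typical = np.median(sample_means)      # what a naive observer usually sees
```

Because the sampling distribution of the mean is strongly right-skewed, most samples miss the rare giant events and the "typical" estimate is biased low — the same mechanism behind the underestimation of conflict severity from raw historical counts.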
Understanding how the brain works requires a delicate balance between the appreciation of the importance of a multitude of biological details and the ability to see beyond those details to general principles. As technological innovations vastly increase the amount of data we collect, the importance of intuition into how to analyze and treat these data may, paradoxically, become more important.
In November 2011, the Financial Stability Board, in collaboration with the International Monetary Fund, published a list of 29 "systemically important financial institutions" (SIFIs). This designation reflects a concern that the failure of any one of them could have dramatic negative consequences for the global economy and is based on "their size, complexity, and systemic interconnectedness". While the characteristics of "size" and "systemic interconnectedness" have been the subject of a good deal of quantitative analysis, less attention has been paid to measures of a firm's "complexity." In this paper we take on the challenges of measuring the complexity of a financial institution and to that end explore the use of the structure of an individual firm's control hierarchy as a proxy for institutional complexity. The control hierarchy is a network representation of the institution and its subsidiaries. We show that this mathematical representation (and various associated metrics) provides a consistent way to compare the complexity of firms with often very disparate business models and as such may provide the foundation for determining a SIFI designation. By quantifying the level of complexity of a firm, our approach also may prove useful should firms need to reduce their level of complexity either in response to business or regulatory needs. Using a data set containing the control hierarchies of many of the designated SIFIs, we find that in the past two years, these firms have decreased their level of complexity, perhaps in response to regulatory requirements.
The Intrafirm Complexity of Systemically Important Financial Institutions Robin L. Lumsdaine, Daniel N. Rockmore, Nicholas Foti, Gregory Leibon, J. Doyne Farmer
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
Deep learning • Yann LeCun, Yoshua Bengio & Geoffrey Hinton
The world of bees is fascinating and varied. The common honeybee is the most well-known and well-studied species, but there are thousands of wild bee species that enliven our landscapes and help to pollinate crops and wildflowers. The widely reported threats to honeybees, which cause their colonies to collapse, also jeopardize the lives of these lesser-known and under-appreciated bee species.
The organization of interactions in complex systems can be described by networks connecting different units. These graphs are useful representations of the local and global complexity of the underlying systems. The origin of their topological structure can be diverse, resulting from different mechanisms including multiplicative processes and optimization. In spatial networks, or in graphs where cost constraints are at work, as occurs in a plethora of situations from power grids to the wiring of neurons in the brain, optimization plays an important part in shaping their organization. In this paper we study network designs resulting from a Pareto optimization process, where different simultaneous constraints are the targets of selection. We analyze three variations of the problem, finding phase transitions of different kinds. Distinct phases are associated with different arrangements of the connections; but the need for drastic topological changes does not determine the presence, nor the nature, of the phase transitions encountered. Instead, the functions under optimization do play a determinant role. This reinforces the view that phase transitions do not arise from intrinsic properties of a system alone, but from the interplay of that system with its external constraints.
Phase transitions in Pareto optimal complex networks Luís F Seoane, Ricard Solé
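Pareto optimization, the selection principle in this abstract, keeps every design that is not dominated in all objectives simultaneously. A minimal sketch of that selection step (the two cost functions and the candidate set below are invented placeholders, not the paper's objectives):

```python
def pareto_front(candidates):
    """Return the candidates not dominated by any other, minimizing both
    costs. A point p is dominated if some other point q is at least as
    good in both coordinates."""
    return [p for p in candidates
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                       for q in candidates)]

# e.g. (wiring cost, average path length) of candidate network designs
designs = [(1, 3), (2, 2), (3, 1), (2, 3), (3, 3)]
```

Selection along the resulting front is what lets simultaneous constraints trade off against each other; in the paper's setting, moving along the front is where the phase transitions between distinct network arrangements appear.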
The Web has made it possible to harness human cognition en masse to achieve new capabilities. Some of these successes are well known; for example, Wikipedia has become the go-to place for basic information on all things; Duolingo engages millions of people in real-life translation of text, while simultaneously teaching them to speak foreign languages; and fold.it has enabled public-driven scientific discoveries by recasting complex biomedical challenges into popular online puzzle games. These and other early successes hint at the tremendous potential for future crowd-powered capabilities for the benefit of health, education, science, and society. In the process, a new field called Human Computation has emerged to better understand, replicate, and improve upon these successes through scientific research. Human Computation refers to the science that underlies online crowd-powered systems and was the topic of a recent visioning activity in which a representative cross-section of researchers, industry practitioners, visionaries, funding agency representatives, and policy makers came together to understand what makes crowd-powered systems successful. Teams of experts considered past, present, and future human computation systems to explore which kinds of crowd-powered systems have the greatest potential for societal impact and which kinds of research will best enable the efficient development of new crowd-powered systems to achieve this impact. This report summarizes the products and findings of those activities as well as the unconventional process and activities employed by the workshop, which were informed by human computation research.
A U.S. Research Roadmap for Human Computation Pietro Michelucci, Lea Shanley, Janis Dickinson, Haym Hirsh
We propose a new approach to analyzing massive transportation systems that leverages traffic information about individual travelers. The goals of the analysis are to quantify the effects of shocks in the system, such as line and station closures, and to predict traffic volumes. We conduct an in-depth statistical analysis of the Transport for London railway traffic system. The proposed methodology is unique in the way that past disruptions are used to predict unseen scenarios, by relying on simple physical assumptions of passenger flow and a system-wide model for origin–destination movement. The method is scalable, more accurate than blackbox approaches, and generalizable to other complex transportation systems. It therefore offers important insights to inform policies on urban transportation.
Predicting traffic volumes and estimating the effects of shocks in massive transportation systems Ricardo Silva, Soong Moon Kang, and Edoardo M. Airoldi
We develop a quantum information protocol that models the biological behaviors of individuals living in a natural selection scenario. The artificially engineered evolution of the quantum living units shows the fundamental features of life in a common environment, such as self-replication, mutation, interaction of individuals, and death. We propose how to mimic these bio-inspired features in a quantum-mechanical formalism, which allows for an experimental implementation achievable with current quantum platforms. This result paves the way for the realization of artificial life and embodied evolution with quantum technologies.
Artificial Life in Quantum Technologies U. Alvarez-Rodriguez, M. Sanz, L. Lamata, E. Solano
The 2014 Ebola outbreak in west Africa raised many questions about the control of infectious disease in an increasingly connected global society. Limited availability of contact information has made contact tracing difficult or impractical in combating the outbreak. We consider the development of multi-scale public health strategies and simulate policies for community-level response aimed at early screening of communities rather than individuals, as well as travel restrictions to prevent community cross-contamination. Our analysis shows community screening to be effective even at a relatively low level of compliance. In our simulations, 40% of individuals conforming to this policy is enough to stop the outbreak. Simulations with a 50% compliance rate are consistent with the case counts in Liberia during the period of rapid decline after mid September, 2014. We also find the travel restriction policies to be effective at reducing the risks associated with compliance substantially below the 40% level, shortening the outbreak and enabling efforts to be focused on affected areas. Our results suggest that the multi-scale approach could be applied to help end the outbreaks in Guinea and Sierra Leone, and the generality of our model can be used to further evolve public health strategy for defeating emerging epidemics.
D. Cooney, V. Wong, Y. Bar-Yam, Beyond contact tracing: Community-based early detection for Ebola response, ArXiv:1505.07020 [physics.soc-ph] (May 26, 2015); New England Complex Systems Institute Report 15-05-01
The relationship between information and complexity is analyzed using a detailed literature analysis. Complexity is a multifaceted concept, with no single agreed definition. There are numerous approaches to defining and measuring complexity and organization, all involving the idea of information. Conceptions of complexity, order, organization, and “interesting order” are inextricably intertwined with those of information. Shannon's formalism captures information's unpredictable creative contributions to organized complexity; a full understanding of information's relation to structure and order is still lacking. Conceptual investigations of this topic should enrich the theoretical basis of the information science discipline, and create fruitful links with other disciplines that study the concepts of information and complexity.
“Waiting for Carnot”: Information and complexity David Bawden and Lyn Robinson
Journal of the Association for Information Science and Technology Early View
Social structure influences ecological processes such as dispersal and invasion, and affects survival and reproductive success. Recent studies have used static snapshots of social networks, thus neglecting their temporal dynamics, and focused primarily on a limited number of variables that might be affecting social structure. Here, instead we modelled effects of multiple predictors of social network dynamics in the spotted hyena, using observational data collected during 20 years of continuous field research in Kenya. We tested the hypothesis that the current state of the social network affects its long-term dynamics. We employed stochastic agent-based models that allowed us to estimate the contribution of multiple factors to network changes. After controlling for environmental and individual effects, we found that network density and individual centrality affected network dynamics, but that social bond transitivity consistently had the strongest effects. Our results emphasise the significance of structural properties of networks in shaping social dynamics.
Topological effects of network structure on long-term social network dynamics in a wild mammal Amiyaal Ilany, Andrew S. Booms and Kay E. Holekamp
When we build complex technologies, despite our best efforts and our desire for clean logic, they often end up being far messier than we intend. They often end up kluges: inelegant solutions that work just well enough. And a reason they end up being messy—despite being designed and engineered—is because fundamentally the way they grow and evolve is often more similar to biological systems than we realize.
Jeff Hawkins recently re-read his 2004 book On Intelligence, in which the founder of Palm Computing – the company that gave us the first handheld computer and, later, first-generation smartphones – explains how the human brain learns. An electrical engineer by training, Hawkins had taken a deep interest in how the brain works and in 2002 founded the Redwood Neuroscience Institute at UC Berkeley, a private, nonprofit research organization focused on understanding how the neocortex processes information. The big surprise? “There was very little I would change about that book,” Hawkins says. “There’s a lot I would add. There’s a ton of stuff where I know exactly how it works, that I didn’t know when I wrote it.”
Bacteria regulate gene expression in response to changes in cell density in a process called quorum sensing. To synchronize their gene-expression programs, these bacteria need to glean as much information as possible about their cell density. Our study is the first to physically model the flow of information in a quorum-sensing microbial community, wherein the internal regulator of the individual's response tracks the external cell density via an endogenously generated shared signal. Combining information theory and Lagrangian formalism, we find that quorum-sensing systems can improve their information capabilities by tuning circuit feedbacks. Our analysis suggests that achieving information benefit via feedback requires dedicated systems to control gene expression noise, such as sRNA-based regulation.
Optimal Census by Quorum Sensing Thibaud Taillefumier, Ned S. Wingreen