Questions of values, ontologies, ethics, aesthetics, discourse, origins, language, literature, and meaning do not lend themselves readily, or traditionally, to equations, probabilities, and models. However, with the increased adoption of natural science tools in economics, anthropology, and political science—to name only a few social scientific fields highlighted in this volume—quantitative methods in the humanities are becoming more common.
The theory of complexity holds significant promise for better understanding social and human phenomena based on interactions among the participating "agents," whatever they may be: a thought, a person, a conversation, a sentence, or an email. Such systems can exhibit phase transitions, feedback loops, self-organization, and emergent properties. These dynamic systems lend themselves naturally to the kind of analysis made possible by models and simulations developed with complex science tools. This volume offers a tour of quantitative analyses, models, and simulations of humanities and social science phenomena that have been historically the purview of qualitative methods.
Emergence is a common phenomenon, and it is also a general and important concept in complex dynamic systems such as artificial societies. Artificial societies are commonly used, with the aid of computer science, to help resolve complex social issues (e.g., emergency management, intelligent transportation systems). The level of an emergence may affect decision making, and the occurrence and degree of an emergence are generally perceived by human observers. However, because human observers are ambiguous and imprecise, proposing a quantitative method to measure emergences in artificial societies is a meaningful and challenging task. This article concentrates on three kinds of emergence in artificial societies: emergence of attribute, emergence of behavior, and emergence of structure. Based on information entropy, three metrics are proposed to measure emergences quantitatively. The correctness of these metrics is verified through three case studies (the spread of an infectious influenza, a dynamic microblog network, and a flock of birds) with several experimental simulations on the NetLogo platform. The experimental results confirm that the metrics increase with the rising degree of emergence. The article also discusses the limitations and extended applications of these metrics.
Information Entropy-Based Metrics for Measuring Emergences in Artificial Societies Mingsheng Tang and Xinjun Mao
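The flavor of an entropy-based emergence metric can be illustrated with a minimal sketch (not the authors' implementation): the Shannon entropy of the distribution of a single agent attribute, which falls as agents converge on shared values. The function name and the SIR-style state labels are illustrative assumptions.

```python
from collections import Counter
from math import log2

def attribute_entropy(attributes):
    """Shannon entropy (bits) of the distribution of one agent attribute.

    A drop in entropy over time signals agents converging on shared
    attribute values, one plausible sign of emergent order.
    """
    counts = Counter(attributes)
    n = len(attributes)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Disordered population: four equally common states -> 2 bits.
disordered = attribute_entropy(["S", "I", "R", "D"] * 25)
# Ordered population: everyone in the same state -> 0 bits.
ordered = attribute_entropy(["R"] * 100)
```

A metric of this kind would be tracked over simulation time; a sustained decrease marks an attribute-level ordering that no single agent rule prescribes.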
Core percolation is a fundamental structural transition in complex networks, related to a wide range of important problems. Recent advances have provided us with an analytical framework for core percolation in uncorrelated random networks with arbitrary degree distributions. Here we apply these tools to the analysis of network controllability. We confirm analytically that the emergence of the bifurcation in control coincides with the formation of the core, and that the structure of the core determines the control mode of the network. We also derive an analytical expression related to controllability robustness by extending the derivation in core percolation. These findings help us better understand the interesting interplay between the structural and dynamical properties of complex networks.
Connecting Core Percolation and Controllability of Complex Networks • Tao Jia & Márton Pósfai
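The core in core percolation has a simple algorithmic characterization: it is what survives greedy leaf removal, i.e., repeatedly deleting any degree-one node together with its neighbour (and dropping isolated nodes). A stdlib-only sketch over adjacency sets (the function name and toy graphs are mine):

```python
def glr_core(adj):
    """Greedy leaf removal on an undirected graph given as a dict of
    adjacency sets. Repeatedly delete a degree-1 node together with its
    neighbour, and drop isolated nodes; the survivors form the core."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if u not in adj:            # already deleted this pass
                continue
            if len(adj[u]) == 0:        # isolated node: drop it
                del adj[u]
                changed = True
            elif len(adj[u]) == 1:      # leaf: drop it and its neighbour
                (v,) = adj[u]
                for w in adj[v]:
                    if w != u:
                        adj[w].discard(v)
                del adj[u]
                del adj[v]
                changed = True
    return set(adj)

triangle = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
core1 = glr_core(triangle)   # the triangle survives: it has no leaves
core2 = glr_core(path)       # a path is eaten entirely: empty core
```

In random networks, core percolation is the transition at which `glr_core` first returns a non-vanishing fraction of the nodes.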
A number of predictors have been suggested to detect the most influential spreaders of information in online social media across various domains such as Twitter or Facebook. In particular, degree, PageRank, k-core and other centralities have been adopted to rank the spreading capability of users in information dissemination media. So far, validation of the proposed predictors has been done by simulating the spreading dynamics rather than following real information flow in social networks. Consequently, the results obtained so far for the best predictor are model-dependent and contradictory. Here, we address this issue directly. We search for influential spreaders by following the real spreading dynamics in a wide range of networks. We find that the widely used degree and PageRank fail in ranking users' influence. We find that the best spreaders are consistently located in the k-core across dissimilar social platforms such as Twitter, Facebook, LiveJournal and scientific publishing in the American Physical Society. Furthermore, when the complete global network structure is unavailable, we find that the sum of the nearest neighbors' degree is a reliable local proxy for a user's influence. Our analysis provides practical instructions for optimal design of strategies for viral information dissemination in relevant applications.
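The local proxy mentioned in the abstract, the sum of the nearest neighbours' degrees, needs only one-hop information. A minimal sketch (the toy network and names are mine): a node with few links that sit next to hubs outranks degree-one periphery nodes.

```python
def neighbor_degree_sum(adj, u):
    """Local influence proxy when the global structure is unknown:
    the sum of the degrees of u's nearest neighbours."""
    return sum(len(adj[v]) for v in adj[u])

# Toy network: "h" has only two links, but both point at hubs,
# so the local proxy ranks it with the hubs, above the periphery.
adj = {
    "hub1": {"hub2", "h", "a", "b"},
    "hub2": {"hub1", "h", "c", "d"},
    "h":    {"hub1", "hub2"},
    "a": {"hub1"}, "b": {"hub1"},
    "c": {"hub2"}, "d": {"hub2"},
}
ranking = sorted(adj, key=lambda u: neighbor_degree_sum(adj, u),
                 reverse=True)
```

Degree alone would rank "h" just above the periphery; the neighbour-degree sum captures its proximity to the densely connected region, which is the intuition behind the k-core result.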
Crowd simulation is rapidly becoming a standard tool for evacuation planning and evaluation. However, the many crowd models in the literature are structurally different, and few have been rigorously calibrated against real-world egress data, especially in emergency situations. In this paper we describe a procedure to quantitatively compare different crowd models, or to compare models against real-world data. We simulated three models: (1) the lattice gas model, (2) the social force model, and (3) the RVO2 model, and obtained the distributions of six observables: (1) evacuation time, (2) zoned evacuation time, (3) passage density, (4) total distance traveled, (5) inconvenience, and (6) flow rate. We then used the DISTATIS procedure to compute the compromise matrix of statistical distances between the three models. Projecting the three models onto the first two principal components of the compromise matrix, we find the lattice gas and RVO2 models are similar in terms of the evacuation time, passage density, and flow rates, whereas the social force and RVO2 models are similar in terms of the total distance traveled. Most importantly, we find the zoned evacuation times of the three models to be very different from each other. Thus we propose to use this variable, if it can be measured, as the key test between different models, and also between models and the real world. Finally, we compared the model flow rates against the flow rate of an emergency evacuation during the May 2008 Sichuan earthquake, and found that the social force model agrees best with this real data.
Investigating congestion in train rapid transit systems (RTS) in today's cities is a challenge compounded by limited data availability and difficulties in model validation. Here, we integrate information from travel smart card data, a mathematical model of route choice, and a full-scale agent-based model of the Singapore RTS to provide a more comprehensive understanding of the congestion dynamics than can be obtained through analytical modelling alone. Our model is empirically validated, and allows for close inspection of the dynamics including station crowdedness, average travel duration, and frequency of missed trains, all highly pertinent factors in service quality. Using current data, the crowdedness in all 121 stations appears to be distributed log-normally. In our preliminary scenarios, we investigate the effect of population growth on service quality. We find that the current population (2 million) lies below a critical point, and that increasing it by more than approximately 10% leads to an exponential deterioration in service quality. We also predict that incentivizing commuters to avoid the most congested hours can bring modest improvements to the service quality, provided the population remains under the critical point. Finally, our model can be used to generate simulated data for statistical analysis when such data are not empirically available, as is often the case.
We describe an exercise of using Big Data to predict the Michigan Consumer Sentiment Index, a widely used indicator of the state of confidence in the US economy. We carry out the exercise from a pure ex ante perspective. We use the methodology of algorithmic text analysis of an archive of brokers' reports over the period June 2010 through June 2013. The search is directed by a social-psychological theory of agent behaviour, namely conviction narrative theory. We compare one-month-ahead forecasts generated this way over a 15-month period with the forecasts reported for the consensus predictions of Wall Street economists. The former give much more accurate predictions, getting the direction of change correct on 12 of the 15 occasions, compared to only 7 for the consensus predictions. We show that the approach retains significant predictive power even over a four-month-ahead horizon.
Power grids, road maps, and river streams are examples of infrastructural networks which are highly vulnerable to external perturbations. An abrupt local change of load (voltage, traffic density, or water level) might propagate in a cascading way and affect a significant fraction of the network. Almost discontinuous perturbations can be modeled by shock waves which can eventually interfere constructively and endanger the normal functionality of the infrastructure. We study their dynamics by solving the Burgers equation under random perturbations on several real and artificial directed graphs. Even for graphs with a narrow distribution of node properties (e.g., degree or betweenness), a steady state is reached exhibiting a heterogeneous load distribution, having a difference of one order of magnitude between the highest and average loads. Unexpectedly we find for the European power grid and for finite Watts-Strogatz networks a broad pronounced bimodal distribution for the loads. To identify the most vulnerable nodes, we introduce the concept of node-basin size, a purely topological property which we show to be strongly correlated to the average load of a node.
Shock waves on complex networks • Enys Mones, Nuno A. M. Araújo, Tamás Vicsek & Hans J. Herrmann
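The paper solves the Burgers equation under random perturbations on directed graphs. As a simpler illustration of the same dynamics, here is an explicit scheme for the 1-D viscous Burgers equation u_t + u·u_x = ν·u_xx on a periodic grid, where a smooth profile steepens into a shock front. Grid size, time step, and viscosity are arbitrary choices of mine, not the paper's setup.

```python
import math

def burgers_step(u, dx, dt, nu):
    """One explicit time step of the viscous Burgers equation on a
    periodic 1-D grid: upwind differencing for the nonlinear advection
    term, central differencing for the diffusion term."""
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        um, up = u[i - 1], u[(i + 1) % n]
        # upwind difference: look against the local flow direction
        ux = (u[i] - um) / dx if u[i] >= 0 else (up - u[i]) / dx
        uxx = (up - 2 * u[i] + um) / dx ** 2
        new[i] = u[i] + dt * (-u[i] * ux + nu * uxx)
    return new

# A smooth sine wave steepens toward a shock while viscosity
# dissipates its amplitude.
n, L = 100, 2 * math.pi
dx = L / n
u = [math.sin(i * dx) for i in range(n)]
for _ in range(200):
    u = burgers_step(u, dx, dt=0.01, nu=0.05)
```

The network version replaces the 1-D neighbours with a node's in- and out-neighbours on the directed graph, which is where the heterogeneous steady-state loads arise.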
Game theory--the study of how people make choices while interacting with others--is one of the most popular technical approaches in social science today. But as Michael Chwe reveals in his insightful new book, Jane Austen explored game theory's core ideas in her six novels roughly two hundred years ago--over a century before its mathematical development during the Cold War. Jane Austen, Game Theorist shows how this beloved writer theorized choice and preferences, prized strategic thinking, and analyzed why superiors are often strategically clueless about inferiors. Exploring a diverse range of literature and folktales, this book illustrates the wide relevance of game theory and how, fundamentally, we are all strategic thinkers.
Online social media influence the flow of news and other information, potentially altering collective social action while generating a large volume of data useful to researchers. Mapping these networks may make it possible to predict the course of social and political movements, technology adoption, and economic behavior. Here, we map the network formed by Twitter users sharing British Broadcasting Corporation (BBC) articles. The global audience of the BBC is primarily organized by language, with the largest linguistic groups receiving news in English, Spanish, Russian, and Arabic. Members of the network primarily “follow” members sharing articles in the same language, and these audiences are primarily located in geographical regions where the languages are native. The one exception to this rule is a cluster interested in Middle East news, which includes both Arabic and English speakers. We further analyze English-speaking users, who differentiate themselves into four clusters: one interested in sports, two interested in United Kingdom (UK) news (with word usage suggesting this reflects political polarization into Conservative and Labour party leanings), and a fourth group that is the English-speaking part of the group interested in Middle East news. Unlike the previously studied New York Times news-sharing network, the largest-scale structure of the BBC network does not include a densely connected group of globally interested and globally distributed users. The political polarization is similar to that found for liberal and conservative groups in the New York Times study. The observation of a primary organization of the BBC audience around languages is consistent with the BBC's unique role in history as an alternative source of local news in regions outside the UK where high-quality uncensored news was not available.
An exploration of social identity: The structure of the BBC news-sharing community on Twitter Julius Adebayo, Tiziana Musso, Kawandeep Virdee, Casey Friedman and Yaneer Bar-Yam
Evolution of online social networks is driven by the need of their members to share and consume content, resulting in a complex interplay between individual activity and attention received from others. In a context of increasing information overload and limited resources, discovering the most successful behavioral patterns for attracting attention is very important. To shed light on the matter, we look into the patterns of activity and popularity of users in the Yahoo Meme microblogging service. We observe that a combination of different types of social and content-producing activity is necessary to attract attention, and that for many users efficiency, namely the average attention received per piece of content published, follows a well-defined trend over time. The analysis of user efficiency time series reveals different classes of users whose activity patterns give insights into the type of behavior that pays off best in terms of attention gathering. In particular, sharing content with high spreading potential and then supporting the attention raised by it with social activity emerges as a frequent pattern for users gaining efficiency over time.
Modeling dynamics of attention in social media with user efficiency Carmen Vaca Ruiz, Luca Maria Aiello and Alejandro Jaimes
An important class of economic models involves agents whose wealth changes due to transactions with other agents. Several authors have pointed out an analogy with kinetic theory, which describes molecules whose momentum and energy change due to interactions with other molecules. We pursue this analogy and derive a Boltzmann equation for the time evolution of the wealth distribution of a population of agents for the so-called Yard-Sale Model of wealth exchange. We examine the solutions to this equation by a combination of analytical and numerical methods and investigate its long-time limit. We study an important limit of this equation for small transaction sizes and derive a partial integrodifferential equation governing the evolution of the wealth distribution in a closed economy. We then describe how this model can be extended to include features such as inflation, production, and taxation. In particular, we show that the model with taxation exhibits the basic features of the Pareto law, namely, a lower cutoff to the wealth density at small values of wealth, and approximate power-law behavior at large values of wealth.
Kinetics of wealth and the Pareto law Phys. Rev. E 89, 042804 – Published 8 April 2014 Bruce M. Boghosian
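The Yard-Sale Model that the paper treats analytically is easy to simulate directly: two random agents stake a fixed fraction of the poorer agent's wealth, and a fair coin decides who wins the stake. A minimal Monte Carlo sketch (the parameter values are arbitrary choices of mine):

```python
import random

def yard_sale(n_agents=1000, steps=200_000, beta=0.1, seed=42):
    """Yard-Sale Model: each transaction stakes a fraction beta of the
    poorer party's wealth; a fair coin decides who wins the stake.
    Total wealth is conserved, and the stake rule keeps wealth positive."""
    rng = random.Random(seed)
    w = [1.0] * n_agents
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        stake = beta * min(w[i], w[j])
        if rng.random() < 0.5:
            w[i] += stake
            w[j] -= stake
        else:
            w[i] -= stake
            w[j] += stake
    return w

w = sorted(yard_sale())
top_share = sum(w[-10:]) / sum(w)   # wealth share of the richest 1%
```

Starting from perfect equality, `top_share` drifts well above the uniform value of 0.01, the wealth-condensation behavior whose long-time limit the Boltzmann treatment makes precise.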
This book opens new ground in the study of financial crises. It treats the financial system as a complex adaptive system and shows how lessons from network disciplines - such as ecology, epidemiology, and statistical mechanics - shed light on our understanding of financial stability. Using tools from network theory and economics, it suggests that financial systems are robust-yet-fragile, with knife-edge properties that are greatly exacerbated by the hoarding of funds and the fire sale of assets by banks. The book studies the damaging network consequences of the failure of large inter-connected institutions, explains how key funding markets can seize up across the entire financial system, and shows how the pursuit of secured finance by banks in the wake of the global financial crisis can generate systemic risks. The insights are then used to model banking systems calibrated to data to illustrate how financial sector regulators are beginning to quantify financial system stress.
Complex systems are increasingly being viewed as distributed information processing systems, particularly in the domains of computational neuroscience, bioinformatics and Artificial Life. This trend has resulted in a strong uptake in the use of (Shannon) information-theoretic measures to analyse the dynamics of complex systems in these fields. We introduce the Java Information Dynamics Toolkit (JIDT): a Google Code project which provides a standalone, GNU GPL v3 licensed, open-source code implementation for empirical estimation of information-theoretic measures from time-series data. While the toolkit provides classic information-theoretic measures (e.g. entropy, mutual information, conditional mutual information), it ultimately focusses on implementing higher-level measures for information dynamics. That is, JIDT focusses on quantifying information storage, transfer and modification, and the dynamics of these operations in space and time. For this purpose, it includes implementations of the transfer entropy and active information storage, their multivariate extensions and local or pointwise variants. JIDT provides implementations for both discrete and continuous-valued data for each measure, including various types of estimator for continuous data (e.g. Gaussian, box-kernel and Kraskov-Stoegbauer-Grassberger) which can be swapped at run-time due to Java's object-oriented polymorphism. Furthermore, while written in Java, the toolkit can be used directly in MATLAB, GNU Octave and Python. We present the principles behind the code design, and provide several examples to guide users.
"JIDT: An information-theoretic toolkit for studying the dynamics of complex systems" Joseph T. Lizier, arXiv:1408.3270, 2014 http://arxiv.org/abs/1408.3270
Brian Skyrms presents eighteen essays which apply adaptive dynamics (of cultural evolution and individual learning) to social theory. Altruism, spite, fairness, trust, division of labor, and signaling are treated from this perspective. Correlation is seen to be of fundamental importance. Interactions with neighbors in space, on static networks, and on co-evolving dynamic networks are investigated. The spontaneous emergence of social structure and of signaling systems is examined in the context of learning dynamics.
This article introduces a special issue of Complexity dedicated to the increasingly important element of complexity science that engages with social policy. We introduce and frame an emerging research agenda that seeks to enhance social policy by working at the interface between the social sciences and the physical sciences (including mathematics and computer science), and term this research area the “social science interface” by analogy with research at the life sciences interface. We locate and exemplify the contribution of complexity science at this new interface before summarizing the contributions collected in this special issue and identifying some common themes that run through them.
Complexity at the social science interface Nigel Gilbert and Seth Bullock
Thomas W. Malone and Michael S. Bernstein (Editors)
Collective intelligence has existed at least as long as humans have, because families, armies, countries, and companies have all--at least sometimes--acted collectively in ways that seem intelligent. But in the last decade or so a new kind of collective intelligence has emerged: groups of people and computers, connected by the Internet, collectively doing intelligent things. In order to understand the possibilities and constraints of these new kinds of intelligence, a new interdisciplinary field is emerging.
The abundances of predators and their prey can oscillate in time. Mathematical theory of predator–prey systems predicts that in predator–prey cycles, peaks in prey abundance precede peaks in predator abundance. However, these models do not consider how the evolution of predator and prey traits related to offense and defense will affect the ordering and timing of peaks. Here we show that predator–prey coevolution can effectively reverse the ordering of peaks in predator–prey cycles, i.e., peaks in predator abundance precede peaks in prey abundance. We present examples from three distinct systems that exhibit reversed cycles, suggesting that coevolution may be an important driver of cycles in those systems.
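For reference, in the classic (non-coevolutionary) Lotka-Volterra model the prey peak always precedes the predator peak, which is the ordering the paper shows coevolution can reverse. A minimal Euler-integration sketch (parameter values and initial conditions are arbitrary choices of mine):

```python
def lotka_volterra(x0=1.5, y0=0.5, a=1.0, b=1.0, c=1.0, d=1.0,
                   dt=0.001, steps=7000):
    """Euler integration of the classic predator-prey equations:
    dx/dt = a*x - b*x*y   (prey x grows, is eaten)
    dy/dt = c*x*y - d*y   (predator y feeds, dies off)."""
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(steps):
        x, y = (x + dt * (a * x - b * x * y),
                y + dt * (c * x * y - d * y))
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = lotka_volterra()
prey_peak = xs.index(max(xs))   # prey abundance peaks first...
pred_peak = ys.index(max(ys))   # ...then predator abundance peaks
```

Reversing `prey_peak < pred_peak` requires extra state, e.g. evolving offense/defense traits, which is exactly the mechanism the paper adds.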
Requests are at the core of many social media systems such as question & answer sites and online philanthropy communities. While the success of such requests is critical to the success of the community, the factors that lead community members to satisfy a request are largely unknown. Success of a request depends on factors like who is asking, how they are asking, when they are asking, and, most critically, what is being requested, ranging from small favors to substantial monetary donations. We present a case study of altruistic requests in an online community where all requests ask for the very same contribution and do not offer anything tangible in return, allowing us to disentangle what is requested from textual and social factors. Drawing from the social psychology literature, we extract high-level social features from text that operationalize social relations between recipient and donor and demonstrate that these extracted relations are predictive of success. More specifically, we find that clearly communicating need through the narrative is essential, and that linguistic indications of gratitude, evidentiality, and generalized reciprocity, as well as high status of the asker, further increase the likelihood of success. Building on this understanding, we develop a model that can predict the success of unseen requests, significantly improving over several baselines. We link these findings to research in psychology on helping behavior, providing a basis for further analysis of success in social media systems.
How to Ask for a Favor: A Case Study on the Success of Altruistic Requests Tim Althoff, Cristian Danescu-Niculescu-Mizil, Dan Jurafsky
In 2011, the wrath of the 99% kindled Occupy movements around the world. The protests petered out, but in their wake an international conversation about inequality has arisen, with tens of thousands of speeches, articles, and blogs engaging everyone from President Barack Obama on down. Ideology and emotion drive much of the debate. But increasingly, the discussion is sustained by a tide of new data on the gulf between rich and poor. This special issue uses these fresh waves of data to explore the origins, impact, and future of inequality around the world.
What the numbers tell us Gilbert Chin, Elizabeth Culotta
Following Holland, complex adaptive systems (CASs) are collections of interacting, autonomous, learning decision makers embedded in an interactive environment. Modeling CASs is challenging for a variety of reasons including the presence of heterogeneity, spatial relationships, nonlinearity, and, of course, adaptation. The challenges of modeling CASs can largely be overcome by using the individual-level focus of agent-based modeling. Agent-based modeling has been used successfully to model CASs in many disciplines. Many of these models were implemented using agent-based modeling software such as Swarm, Repast 3, Repast Simphony, Repast for High-Performance Computing, MASON, NetLogo, or StarLogo. All of these options use modular imperative architectures with factored agents, spaces, a scheduler, logs, and an interface. Many custom agent-based models also use this kind of architecture. This paper’s contribution is to introduce and apply a theoretical formalism for analyzing modular imperative agent-based models of CASs. This paper includes an analysis of three example models to show how the formalism is useful for predicting the execution time and space requirements for representations of common CASs.
This paper’s contribution is to introduce, analyze, and apply a theoretical formalism for proving findings about agent-based models with modular agent scheduler architectures. Given that this kind of modeling is both computationally optimal and a natural structural match for many modeling problems, it follows that it is the best modeling method for such problems.
A theoretical formalism for analyzing agent-based models Michael J North
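The "modular imperative architecture" the paper analyzes (factored agents, a scheduler, logs) can be caricatured in a few lines. This toy is my own sketch, not the paper's formalism; the class names and the copy-a-random-neighbor update rule are illustrative assumptions.

```python
import random

class Agent:
    """A factored agent: identity plus state, updated by a step rule."""
    def __init__(self, aid, state):
        self.aid, self.state = aid, state

    def step(self, model):
        # toy adaptation rule: copy the state of a random other agent
        other = model.rng.choice(model.agents)
        self.state = other.state

class Model:
    """Minimal modular imperative ABM: agents + scheduler + log."""
    def __init__(self, n, seed=0):
        self.rng = random.Random(seed)
        self.agents = [Agent(i, self.rng.randint(0, 1)) for i in range(n)]
        self.log = []

    def step(self):
        # random-order activation, a common scheduler policy
        order = self.agents[:]
        self.rng.shuffle(order)
        for a in order:
            a.step(self)
        self.log.append(sum(a.state for a in self.agents))

m = Model(50)
for _ in range(100):
    m.step()
```

The execution-time and space analysis in the paper concerns exactly this shape of program: how the scheduler visits agents, and what each agent's step touches.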
Social media systems rely on user feedback and rating mechanisms for personalization, ranking, and content filtering. However, when users evaluate content contributed by fellow users (e.g., by liking a post or voting on a comment), these evaluations create complex social feedback effects. This paper investigates how ratings on a piece of content affect its author's future behavior. By studying four large comment-based news communities, we find that negative feedback leads to significant behavioral changes that are detrimental to the community. Not only do authors of negatively evaluated content contribute more, but their future posts are also of lower quality and are perceived by the community as such. Moreover, these authors are more likely to subsequently evaluate their fellow users negatively, percolating these effects through the community. In contrast, positive feedback does not carry similar effects: it neither encourages rewarded authors to write more nor improves the quality of their posts. Interestingly, the authors who receive no feedback are the most likely to leave a community. Furthermore, a structural analysis of the voter network reveals that evaluations polarize the community most when positive and negative votes are equally split.
How Community Feedback Shapes User Behavior Justin Cheng, Cristian Danescu-Niculescu-Mizil, Jure Leskovec
This volume contains eight papers written by Adam Brandenburger and his co-authors over a period of 25 years. These papers are part of a program to reconstruct game theory in order to make how players reason about a game a central feature of the theory. The program now called epistemic game theory extends the classical definition of a game model to include not only the game matrix or game tree, but also a description of how the players reason about one another (including their reasoning about other players' reasoning). With this richer mathematical framework, it becomes possible to determine the implications of how players reason for how a game is played. Epistemic game theory includes traditional equilibrium-based theory as a special case, but allows for a wide range of non-equilibrium behavior.
In the last few years there have been many efforts in econophysics studying how network theory can facilitate our understanding of complex financial markets. These efforts consist mainly of studies of correlation-based hierarchical networks. This is somewhat surprising: research on financial markets starts from the assumption that they are complex systems and thus behave nonlinearly, an assumption confirmed by numerous studies, which makes the reliance on correlations, which capture only linear dependencies, puzzling. In this paper we introduce a way to incorporate nonlinear dynamics and dependencies into hierarchical networks for studying financial markets, using mutual information and its dynamical extension, the mutual information rate. Using minimal spanning trees and planar maximally filtered graphs on 91 companies listed on the New York Stock Exchange 100 between 2003 and 2013, we show that this approach leads to different results than the correlation-based approach used in most studies.
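The pipeline described above (estimate pairwise mutual information, convert it to a distance, filter with a minimal spanning tree) can be sketched as follows. The binned plug-in MI estimator, the 1/(1+MI) distance transform, and the synthetic series are illustrative choices of mine, not the paper's estimators.

```python
from collections import Counter
from math import log2
import random

def mutual_information(x, y, bins=4):
    """Plug-in mutual information (bits) between two series,
    discretized into equal-frequency bins."""
    def discretize(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        lab = [0] * len(v)
        for rank, i in enumerate(order):
            lab[i] = rank * bins // len(v)
        return lab
    xb, yb = discretize(x), discretize(y)
    n = len(xb)
    cxy, cx, cy = Counter(zip(xb, yb)), Counter(xb), Counter(yb)
    return sum((c / n) * log2(c * n / (cx[a] * cy[b]))
               for (a, b), c in cxy.items())

def mst_edges(nodes, dist):
    """Prim's algorithm over a complete graph with a symmetric
    pairwise distance dict dist[(u, v)]."""
    in_tree, edges = {nodes[0]}, []
    while len(in_tree) < len(nodes):
        u, v = min(((u, v) for u in in_tree
                    for v in nodes if v not in in_tree),
                   key=lambda e: dist[e])
        edges.append((u, v))
        in_tree.add(v)
    return edges

# Synthetic "returns": B tracks A closely, C is independent.
rng = random.Random(1)
a = [rng.gauss(0, 1) for _ in range(2000)]
b = [ai + 0.1 * rng.gauss(0, 1) for ai in a]
c = [rng.gauss(0, 1) for _ in range(2000)]
series = {"A": a, "B": b, "C": c}
names = list(series)
dist = {(u, v): 1.0 / (1.0 + mutual_information(series[u], series[v]))
        for u in names for v in names if u != v}
tree = mst_edges(names, dist)   # the strong A-B dependency is kept
```

Substituting a correlation-based distance here and comparing the resulting trees is, in miniature, the comparison the paper carries out on the NYSE data.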