The concept of stigmergy has been used to analyze self-organizing activities in an ever-widening range of domains, from social insects via robotics and social media to human society. Yet, it is still poorly understood, and as such its full power remains underappreciated. The present paper clarifies the issue by defining stigmergy as a mechanism of indirect coordination in which the trace left by an action in a medium stimulates a subsequent action. It then analyzes the fundamental components of the definition: action, agent, medium, trace and coordination. Stigmergy enables complex, coordinated activity without any need for planning, control, communication, simultaneous presence, or even mutual awareness. This makes the concept applicable to a very broad variety of cases, from chemical reactions to individual cognition and Internet-supported collaboration in Wikipedia. The paper classifies different varieties of stigmergy according to general aspects (number of agents, scope, persistence, sematectonic vs. marker-based, and quantitative vs. qualitative), while emphasizing the fundamental continuity between these cases. This continuity can be understood from a non-linear, self-organizing dynamic that lets more complex forms of coordination evolve out of simpler ones. The paper concludes with two specifically human applications in cognition and cooperation, suggesting that without stigmergy these phenomena may never have evolved.
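The core mechanism defined above, a trace that stimulates (and is reinforced by) subsequent actions, can be sketched in a few lines. The toy simulation below is purely illustrative, not from the paper: agents choose between two paths in proportion to the pheromone-like trace on each, deposit their own trace on the path they take, and the trace slowly evaporates.

```python
import random

def stigmergy_sim(steps=2000, deposit=1.0, evaporation=0.01, seed=42):
    """Marker-based stigmergy: each agent picks a path in proportion to the
    trace left in the medium by earlier agents, then adds its own trace."""
    random.seed(seed)
    trace = [1.0, 1.0]                     # traces on path A and path B
    for _ in range(steps):
        p_a = trace[0] / (trace[0] + trace[1])
        choice = 0 if random.random() < p_a else 1
        trace[choice] += deposit           # the action modifies the medium ...
        trace = [t * (1 - evaporation) for t in trace]  # ... and the trace decays
    return trace

trace = stigmergy_sim()
# positive feedback breaks the initial symmetry: most of the trace, and hence
# most of the traffic, ends up concentrated on one path
print(trace)
```

No agent communicates with, or is even aware of, any other; coordination arises solely through the shared medium, which is what makes stigmergic coordination so cheap.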
Heylighen, F. (2015). Stigmergy as a Universal Coordination Mechanism: components, varieties and applications. To appear in T. Lewis & L. Marsh (Eds.), Human Stigmergy: Theoretical Developments and New Applications, Studies in Applied Philosophy, Epistemology and Rational Ethics. Springer. http://pespmc1.vub.ac.be/papers/stigmergy-varieties.pdf
Prior to the upcoming "5th Lindau Meeting on Economic Sciences" (19-23 August 2014), Nobel Laureates in Economic Sciences and young economists were asked: "What makes a good economist?". Some of their answers have been compiled for this film.
Leaders must be able to act in a complex world and under uncertainty. This course is a first step toward developing into one of the future’s key decision-makers, or toward enhancing your existing decision-making skills.
The need for a multidisciplinary approach will be emphasized throughout the course. Guest lecturers from different faculties will explain concepts and provide applications and examples from their respective fields of study, giving a more comprehensive understanding of the different elements of complexity. For example, complexity can be beautifully illustrated with insect colonies, brains, and many other natural and social phenomena. The same ideas can be applied to economic and financial systems.
(Medical Xpress)—The first indication that you're sick is typically one or more symptoms: perhaps a cough, fever, abdominal pain, etc. Symptoms are high-level clinical manifestations of a disease that, at a lower level, is caused by molecular-level components, such as genes and proteins. Understanding ...
Economic models of animal behaviour assume that decision-makers are rational, meaning that they assess options according to intrinsic fitness value and not by comparison with available alternatives. This expectation is frequently violated, but the significance of irrational behaviour remains controversial. One possibility is that irrationality arises from cognitive constraints that necessitate short cuts like comparative evaluation. If so, the study of whether and when irrationality occurs can illuminate cognitive mechanisms. We applied this logic in a novel setting: the collective decisions of insect societies. We tested for irrationality in colonies of Temnothorax ants choosing between two nest sites that varied in multiple attributes, such that neither site was clearly superior. In similar situations, individual animals show irrational changes in preference when a third relatively unattractive option is introduced. In contrast, we found no such effect in colonies. We suggest that immunity to irrationality in this case may result from the ants’ decentralized decision mechanism. A colony's choice does not depend on site comparison by individuals, but instead self-organizes from the interactions of multiple ants, most of which are aware of only a single site. This strategy may filter out comparative effects, preventing systematic errors that would otherwise arise from the cognitive limitations of individuals.
Ever since the Internet became a mass social phenomenon in the 1990s, people have worried about its effects on their privacy. From time to time, a major scandal has erupted, focusing attention on those anxieties; last year’s revelations concerning the U.S.
National Security Agency’s surveillance of electronic communications are only the most recent example. In most cases, the subsequent debate has been about who should be able to collect and store personal data and how they should be able to go about it. When people hear or read about the issue, they tend to worry about who has access to information about their health, their finances, their relationships, and their political activities.
But those fears and the public conversations that articulate them have not kept up with the technological reality. Today, the widespread and perpetual collection and storage of personal data have become practically inevitable. (...)
Distributed intelligence is an ability to solve problems and process information that is not localized inside a single person or computer, but that emerges from the coordinated interactions between a large number of people and their technological extensions. The Internet and in particular the World-Wide Web form a nearly ideal substrate for the emergence of a distributed intelligence that spans the planet, integrating the knowledge, skills and intuitions of billions of people supported by billions of information-processing devices. This intelligence becomes increasingly powerful through a process of self-organization in which people and devices selectively reinforce useful links, while rejecting useless ones. This process can be modeled mathematically and computationally by representing individuals and devices as agents, connected by a weighted directed network along which "challenges" propagate. Challenges represent problems, opportunities or questions that must be processed by the agents to extract benefits and avoid penalties. Link weights are increased whenever agents extract benefit from the challenges propagated along them. My research group is developing such a large-scale simulation environment in order to better understand how the web may boost our collective intelligence. The anticipated outcome of that process is a "global brain", i.e. a nervous system for the planet that would be able to tackle both global and personal problems.
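The reinforcement dynamic described in this abstract can be illustrated with a toy model (all names and parameters below are my own, not those of the actual simulation environment): a "challenge" hops along weighted links until it reaches an agent able to process it, and every link on the successful path is then strengthened.

```python
import random

def propagate(weights, skills, start, need, reward=0.5, max_hops=10):
    """Forward a challenge along the weighted directed network; when an agent
    with the required skill is reached, reinforce every link on the path."""
    path, node = [], start
    for _ in range(max_hops):
        if need in skills[node]:
            for a, b in path:              # benefit extracted: strengthen links
                weights[a][b] += reward
            return node
        nbrs = list(weights[node])
        nxt = random.choices(nbrs, [weights[node][n] for n in nbrs])[0]
        path.append((node, nxt))
        node = nxt
    return None                            # challenge dropped after max_hops

random.seed(1)
w = {0: {1: 1.0, 2: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {0: 1.0, 1: 1.0}}
skills = {0: set(), 1: set(), 2: {"math"}}
for _ in range(50):
    propagate(w, skills, start=0, need="math")
# routes that lead toward the skilled agent have accumulated extra weight
print(w)
```

Over repeated challenges the network self-organizes: links that helped deliver challenges to competent agents grow, so later challenges find their way faster.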
Tackling complex problems often requires coordinated group effort and can consume significant resources, yet our understanding of how teams form and succeed has been limited by a lack of large-scale, quantitative data. We analyze activity traces and success levels for ~150,000 self-organized, online team projects. While larger teams tend to be more successful, the distribution of activity is highly skewed across the team, with only small subsets of members performing most work. This focused centralization in activity indicates that larger teams succeed not simply by distributing workload, but by acting as a support system for a smaller set of core members. High-impact teams are significantly more focused than average teams of the same size, yet are more likely to consist of members with diverse experiences, and these members, even non-core members, are more likely to themselves be core members of other teams. This mixture of size, focus, experience, and diversity points to underlying mechanisms that can be used to maximize the success of collaborative endeavors.
Core percolation is a fundamental structural transition in complex networks, related to a wide range of important problems. Recent advances have provided an analytical framework for core percolation in uncorrelated random networks with arbitrary degree distributions. Here we apply these tools to the analysis of network controllability. We confirm analytically that the emergence of the bifurcation in control coincides with the formation of the core, and that the structure of the core determines the control mode of the network. We also derive an analytical expression for controllability robustness by extending the derivation used in core percolation. These findings help us better understand the interesting interplay between the structural and dynamical properties of complex networks.
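The core referred to here is what survives greedy leaf removal. A minimal sketch for an undirected network, with the adjacency given as a dict of neighbor sets (the analytical framework itself is beyond a code snippet):

```python
def core(adj):
    """Greedy leaf removal: repeatedly delete any degree-1 node together with
    its neighbor, and drop isolated nodes. The survivors form the core."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if u not in adj:
                continue
            if not adj[u]:                         # isolated node: not in core
                del adj[u]
                changed = True
            elif len(adj[u]) == 1:                 # leaf: remove it + neighbor
                v = next(iter(adj[u]))
                for w in adj[v]:
                    adj[w].discard(v)
                del adj[u], adj[v]
                changed = True
    return set(adj)

cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(core(cycle))   # a cycle has no leaves, so every node survives
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(core(star))    # leaf removal dismantles the star: empty core
```

The contrast between the two examples shows why the core is a structural (not local) property: individual degrees do not determine membership, the global pattern of leaves does.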
A number of predictors have been suggested to detect the most influential spreaders of information in online social media across various domains such as Twitter or Facebook. In particular, degree, PageRank, k-core and other centralities have been adopted to rank the spreading capability of users in information dissemination media. So far, validation of the proposed predictors has been done by simulating the spreading dynamics rather than following real information flow in social networks. Consequently, only model-dependent contradictory results have been achieved so far for the best predictor. Here, we address this issue directly. We search for influential spreaders by following the real spreading dynamics in a wide range of networks. We find that the widely used degree and PageRank fail in ranking users' influence. We find that the best spreaders are consistently located in the k-core across dissimilar social platforms such as Twitter, Facebook, Livejournal and scientific publishing in the American Physical Society. Furthermore, when the complete global network structure is unavailable, we find that the sum of the nearest neighbors' degree is a reliable local proxy for a user's influence. Our analysis provides practical instructions for optimal design of strategies for "viral" information dissemination in relevant applications.
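The local proxy mentioned at the end, the sum of the nearest neighbors' degrees, is trivial to compute from local information alone. A minimal sketch for an undirected network:

```python
def neighbor_degree_sum(adj):
    """For each node, the sum of its neighbors' degrees: a purely local
    proxy for spreading influence, usable when the global structure
    (and hence the k-core) is unknown."""
    degree = {u: len(vs) for u, vs in adj.items()}
    return {u: sum(degree[v] for v in vs) for u, vs in adj.items()}

# a path 0-1-2-3: the interior nodes get the highest scores
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(neighbor_degree_sum(path))
```

Unlike the k-core, this score needs only each node's immediate neighborhood, which is why the paper recommends it when the complete network is unavailable.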
The European Commission has launched a public consultation on ‘Science 2.0’, in order to gauge the trend towards a more open, data-driven and people-focused way of doing research and innovation. Researchers are using digital tools to get thousands of people participating.
A new module on the Étoile Platform, by Jeffrey Johnson
Based on the course presented at the 4th Ph.D. Summer School-Conference on “Mathematical Modeling of Complex Systems”, Cultural Foundation “Kritiki Estia”, 14-25 July 2014, Athens.
The modern world is complex beyond human understanding and control. The science of complex systems aims to find new ways of thinking about the many interconnected networks of interaction that defy traditional approaches. Thus far, research into networks has largely been restricted to pairwise relationships represented by links between two nodes.
This course marks a major extension of networks to multidimensional hypernetworks for modeling multi-element relationships, such as companies making up the stock market, the neighborhoods forming a city, people making up committees, divisions making up companies, computers making up the internet, men and machines making up armies, or robots working as teams. This course makes an important contribution to the science of complex systems by: (i) extending network theory to include dynamic relationships between many elements; (ii) providing a mathematical theory able to integrate multilevel dynamics in a coherent way; (iii) providing a new methodological approach to analyze complex systems; and (iv) illustrating the theory with practical examples in the design, management and control of complex systems taken from many areas of application.
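As a concrete (and deliberately minimal) illustration of the multi-element relationships described above, a hypernetwork relation binds any number of elements at once rather than just two. The class and method names below are my own sketch, not the course's formalism:

```python
class Hypernetwork:
    """Toy hypernetwork: each relation links an arbitrary set of elements,
    generalizing a network's pairwise links."""
    def __init__(self):
        self.relations = []

    def add(self, name, *elements):
        self.relations.append((name, frozenset(elements)))

    def star(self, element):
        """All relations a given element takes part in."""
        return [(n, es) for n, es in self.relations if element in es]

h = Hypernetwork()
h.add("committee", "alice", "bob", "carol")   # one single 3-ary relation,
h.add("team", "bob", "carol", "dave")         # not a set of pairwise links
print(len(h.star("bob")))
```

The point of the representation is that "alice, bob and carol form a committee" is one relation among three elements; reducing it to three pairwise links would lose the fact that the committee exists as a whole.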
What does a city look like? To Franz-Josef Ulm, an engineering professor at the Massachusetts Institute of Technology, it looks like a material with a molecular structure. With colleagues, Ulm has begun analyzing cities based on factors such as building arrangement, each building’s center of mass, and how they’re ordered around each other. He has concluded that Boston’s structure looks like an “amorphous liquid.” Seattle is another liquid, and so is Los Angeles. Chicago, which was designed on a grid, looks like glass, he says; New York resembles a highly ordered crystal. If the analogy does hold up, Ulm hopes it will give planners a new tool to understand a city’s structure, its energy use, and possibly even its resilience to climate change.
Interdisciplinary research is starting to attract more and more attention — and funding. This year, for example, the US National Science Foundation (NSF) has requested US$63 million (210% more than in 2012) for its INSPIRE (Integrated NSF Support Promoting Interdisciplinary Research and Education) awards programme, which supports research into complex scientific problems such as space-weather monitoring, groundwater restoration and epigenomic analysis of single cells. In an era of stagnant, even shrinking, research funds, such budding fields can be a shrewd choice, especially for early-career researchers.
Interdisciplinary research pulls together disparate expertise to advance an emerging field or solve a multifaceted problem. Nanotechnology, for example, requires knowledge of chemistry, biology and physics, and disease control can involve molecular biologists, biostatisticians, public-health officials and sociologists. Environmental science, with its study of entangled ecosystems and policy impacts, is the quintessential interdisciplinary field. (...)
In most natural and engineered systems, a set of entities interact with each other in complicated patterns that can encompass multiple types of relationships, change in time and include other types of complications. Such systems include multiple subsystems and layers of connectivity, and it is important to take such ‘multilayer’ features into account to try to improve our understanding of complex systems. Consequently, it is necessary to generalize ‘traditional’ network theory by developing (and validating) a framework and associated tools to study multilayer systems in a comprehensive fashion. The origins of such efforts date back several decades and arose in multiple disciplines, and now the study of multilayer networks has become one of the most important directions in network science. In this paper, we discuss the history of multilayer networks (and related concepts) and review the exploding body of work on such networks. To unify the disparate terminology in the large body of recent work, we discuss a general framework for multilayer networks, construct a dictionary of terminology to relate the numerous existing concepts to each other and provide a thorough discussion that compares, contrasts and translates between related notions such as multilayer networks, multiplex networks, interdependent networks, networks of networks and many others. We also survey and discuss existing data sets that can be represented as multilayer networks. We review attempts to generalize single-layer-network diagnostics to multilayer networks. We also discuss the rapidly expanding research on multilayer-network models and notions like community structure, connected components, tensor decompositions and various types of dynamical processes on multilayer networks. We conclude with a summary and an outlook.
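A deliberately minimal sketch of the multiplex case, where a shared node set carries a different edge set in each layer (e.g. one layer per relationship type). This is my own illustration; full-featured frameworks such as pymnet exist for serious multilayer work:

```python
from collections import defaultdict

class Multiplex:
    """Minimal multiplex network: one undirected edge set per layer,
    over a shared set of nodes."""
    def __init__(self):
        self.layers = defaultdict(lambda: defaultdict(set))

    def add_edge(self, layer, u, v):
        self.layers[layer][u].add(v)
        self.layers[layer][v].add(u)

    def degree(self, layer, u):
        return len(self.layers[layer][u])

    def aggregate(self):
        """Project all layers onto a single-layer network. Diagnostics run on
        this projection discard the layer information, which is exactly why
        genuinely multilayer generalizations are needed."""
        agg = defaultdict(set)
        for adj in self.layers.values():
            for u, vs in adj.items():
                agg[u] |= vs
        return dict(agg)

m = Multiplex()
m.add_edge("colleagues", "ann", "bea")
m.add_edge("friends", "ann", "bea")       # same pair, different layer
m.add_edge("friends", "ann", "cal")
print(m.degree("friends", "ann"), len(m.aggregate()["ann"]))
```

Note how the aggregate cannot tell that ann and bea are linked twice, once per layer; that loss is the simplest demonstration of why single-layer diagnostics do not carry over unchanged.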
‘Causal’ direction is of great importance when dealing with complex systems. Often big volumes of data in the form of time series are available, and it is important to develop methods that can inform us about possible causal connections between the different observables. Here we investigate the ability of the Transfer Entropy measure to identify causal relations embedded in emergent coherent correlations. We do this by first applying Transfer Entropy to an amended Ising model. In addition we use a simple Random Transition model to test the reliability of Transfer Entropy as a measure of ‘causal’ direction in the presence of stochastic fluctuations. In particular, we systematically study the effect of the finite size of data sets.
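Transfer entropy itself is straightforward to estimate for discrete time series. Below is a plug-in estimator with history length 1; it is a sketch only, and real applications need bias corrections, especially for the finite-data-size effects this paper studies:

```python
from collections import Counter
from math import log2
import random

def transfer_entropy(x, y):
    """Plug-in estimate of T(Y -> X), in bits, with history length 1:
    how much knowing y_t reduces uncertainty about x_{t+1} beyond x_t."""
    n = len(x) - 1
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_{t+1}, x_t, y_t)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))          # (x_t, y_t)
    pairs_xx = Counter(zip(x[1:], x[:-1]))           # (x_{t+1}, x_t)
    singles = Counter(x[:-1])                        # x_t
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_cond_xy = c / pairs_xy[(x0, y0)]           # p(x_{t+1} | x_t, y_t)
        p_cond_x = pairs_xx[(x1, x0)] / singles[x0]  # p(x_{t+1} | x_t)
        te += (c / n) * log2(p_cond_xy / p_cond_x)
    return te

random.seed(0)
y = [random.randint(0, 1) for _ in range(1000)]
x = [0] + y[:-1]                 # x just copies y with a one-step lag
print(transfer_entropy(x, y))    # ~1 bit: y drives x
print(transfer_entropy(y, x))    # ~0 bits: x adds nothing about y's future
```

The asymmetry of the two calls is the whole point of the measure: unlike correlation, transfer entropy distinguishes the driver from the driven series.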
Campaigners against the use of journal impact factors as a proxy for research excellence received a shot in the arm last night with the launch of the San Francisco Declaration on Research Assessment (DORA). With an impressive line-up of founding signatories, including individual scientists, research funders and journal editors, DORA states in no uncertain terms that journal impact factors (which rank journals by the average number of citations their articles receive over a given period) should not be used "as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contribution, or in hiring, promotion or funding decisions."