Edgar Analytics & Complex Systems
Rescooped by Nuno Edgar Fernandes from Papers
onto Edgar Analytics & Complex Systems

How Natural Selection Can Create Both Self- and Other-Regarding Preferences, and Networked Minds


Biological competition is widely believed to result in the evolution of selfish preferences. The related concept of the ‘homo economicus’ is at the core of mainstream economics. However, there is also experimental and empirical evidence for other-regarding preferences. Here we present a theory that explains both self-regarding and other-regarding preferences. Assuming conditions promoting non-cooperative behaviour, we demonstrate that intergenerational migration determines whether evolutionary competition results in a ‘homo economicus’ (showing self-regarding preferences) or a ‘homo socialis’ (having other-regarding preferences). Our model assumes spatially interacting agents playing prisoner's dilemmas, who inherit a trait determining ‘friendliness’, but mutations tend to undermine it. Reproduction is ruled by fitness-based selection without a cultural modification of reproduction rates. Our model calls for a complementary economic theory for ‘networked minds’ (the ‘homo socialis’) and lays the foundations for an evolutionarily grounded theory of other-regarding agents, explaining individually different utility functions as well as conditional cooperation.
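The mechanism in the abstract, an inherited cooperation trait under fitness-based selection in repeated prisoner's dilemmas, can be sketched in a few lines. This is a deliberately simplified, well-mixed toy (the paper's model is spatial and includes intergenerational migration); all parameter values here are invented for illustration:

```python
import random

def play_pd(coop_a, coop_b, R=3.0, S=0.0, T=5.0, P=1.0):
    """Prisoner's dilemma payoff to player A, given both players' choices."""
    if coop_a and coop_b:
        return R
    if coop_a and not coop_b:
        return S
    if not coop_a and coop_b:
        return T
    return P

def evolve(pop_size=200, generations=100, rounds=10, mut_sd=0.05, seed=1):
    """Evolve a 'friendliness' trait (probability of cooperating) under
    fitness-proportional selection; mutation tends to undermine the trait."""
    rng = random.Random(seed)
    friendliness = [rng.random() * 0.1 for _ in range(pop_size)]  # start unfriendly
    for _ in range(generations):
        fitness = [0.0] * pop_size
        for i in range(pop_size):
            for _ in range(rounds):
                j = rng.randrange(pop_size)
                a = rng.random() < friendliness[i]
                b = rng.random() < friendliness[j]
                fitness[i] += play_pd(a, b)
        # fitness-based reproduction with Gaussian mutation of the trait
        parents = rng.choices(range(pop_size), weights=fitness, k=pop_size)
        friendliness = [
            min(1.0, max(0.0, friendliness[p] + rng.gauss(0.0, mut_sd)))
            for p in parents
        ]
    return sum(friendliness) / pop_size

mean_f = evolve()
print(round(mean_f, 3))
```

In this well-mixed setting cooperation is costly and selection keeps friendliness low; the paper's point is that spatial structure plus low intergenerational migration can flip this outcome toward the ‘homo socialis’.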

 

How Natural Selection Can Create Both Self- and Other-Regarding Preferences, and Networked Minds

Thomas Grund, Christian Waloszek & Dirk Helbing

Scientific Reports 3, Article number: 1480 http://dx.doi.org/10.1038/srep01480


Via Complexity Digest
A space to Scoop about Big Data and Complexity
Rescooped by Nuno Edgar Fernandes from Papers

Serendipity and strategy in rapid innovation


Innovation is to organizations what evolution is to organisms: it is how organizations adapt to environmental change and improve. Yet despite advances in our understanding of evolution, what drives innovation remains elusive. On the one hand, organizations invest heavily in systematic strategies to accelerate innovation. On the other, historical analysis and individual experience suggest that serendipity plays a significant role. To unify these perspectives, we analysed the mathematics of innovation as a search for designs across a universe of component building blocks. We tested our insights using data from language, gastronomy and technology. By measuring the number of makeable designs as we acquire components, we observed that the relative usefulness of different components can cross over time. When these crossovers are unanticipated, they appear to be the result of serendipity. But when we can predict crossovers in advance, they offer opportunities to strategically increase the growth of the product space.
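The paper's central object, the number of makeable designs as a function of acquired components, is easy to illustrate. Below is a toy universe of designs (entirely made up, not the paper's data) showing how the marginal usefulness of two components can cross over as the component repertoire grows:

```python
# Hypothetical toy universe: each design is the set of components it needs.
designs = [
    {"a"}, {"a", "b"}, {"b", "c"}, {"c", "d"},
    {"a", "c", "d"}, {"b", "c", "d"}, {"c", "d", "e"}, {"d", "e"},
]

def makeable(components, designs):
    """Number of designs buildable from the components acquired so far."""
    have = set(components)
    return sum(1 for d in designs if d <= have)

def usefulness(candidate, components, designs):
    """Marginal designs unlocked by acquiring one more component."""
    return makeable(set(components) | {candidate}, designs) - makeable(components, designs)

# Early on, the simple component "a" unlocks more than "d"...
print(usefulness("a", {"b"}, designs), usefulness("d", {"b"}, designs))            # → 2 0
# ...but once the repertoire grows, "d" (required by many complex designs)
# overtakes it: a crossover in relative usefulness.
print(usefulness("a", {"b", "c", "e"}, designs), usefulness("d", {"b", "c", "e"}, designs))  # → 2 4
```

When such crossovers can be predicted, the paper argues, they become strategic opportunities; when they cannot, they look like serendipity.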

 

Serendipity and strategy in rapid innovation
T. M. A. Fink, M. Reeves, R. Palma & R. S. Farr
Nature Communications 8, Article number: 2002 (2017)
doi:10.1038/s41467-017-02042-w


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

Quantifying China’s regional economic complexity


• China’s regional economic complexity is quantified by modeling 25 years of public firm data.
• A high positive correlation between economic complexity and macroeconomic indicators is shown.
• Economic complexity has explanatory power for economic development and income inequality.
• Multivariate regressions suggest these results are robust when controlling for socioeconomic factors.
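A common way to compute economic complexity from such data (standard in this literature, though the paper's exact pipeline may differ) is the Hidalgo-Hausmann "method of reflections" on a binary region-by-activity matrix. A minimal sketch on an invented toy matrix:

```python
# Toy region-by-industry binary matrix (assumed data):
# M[r][i] = 1 if region r is competitive in industry i.
M = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
]

def reflections(M, iterations=6):
    """Method of reflections: alternately average industry ubiquity over a
    region's industries and region diversity over an industry's regions."""
    n_r, n_i = len(M), len(M[0])
    k_r0 = [sum(row) for row in M]                                  # diversity
    k_i0 = [sum(M[r][i] for r in range(n_r)) for i in range(n_i)]   # ubiquity
    k_r, k_i = list(k_r0), list(k_i0)
    for _ in range(iterations):
        k_r_next = [
            sum(M[r][i] * k_i[i] for i in range(n_i)) / k_r0[r] for r in range(n_r)
        ]
        k_i_next = [
            sum(M[r][i] * k_r[r] for r in range(n_r)) / k_i0[i] for i in range(n_i)
        ]
        k_r, k_i = k_r_next, k_i_next
    return k_r  # higher-order diversity: a complexity-like score per region

scores = reflections(M)
print([round(s, 2) for s in scores])
```

The even-order region measures converge toward a ranking in which diversified regions holding low-ubiquity activities score as more complex.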

 

Quantifying China’s regional economic complexity
Jian Gao, Tao Zhou

Physica A: Statistical Mechanics and its Applications
Volume 492, 15 February 2018, Pages 1591-1603


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Digital Delights - Digital Tribes

Artificial Intelligence: A Free Online Course from MIT


Today we're adding MIT's course on Artificial Intelligence to our ever-growing collection.

Via Ana Cristina Pratas
Nik Peachey's curator insight, December 21, 2017 1:56 AM

Given the way things are going, this might well be worth checking out.

Stephen Dale's curator insight, December 22, 2017 5:15 AM
Rescooped by Nuno Edgar Fernandes from Edgar Analytics & Complex Systems

An Evolutionary Game Theoretic Approach to Multi-Sector Coordination and Self-Organization

Coordination games provide ubiquitous interaction paradigms to frame human behavioral features, such as information transmission, conventions and languages as well as socio-economic processes and institutions. By using a dynamical approach, such as Evolutionary Game Theory (EGT), one is able to follow, in detail, the self-organization process by which a population of individuals coordinates into a given behavior. Real socio-economic scenarios, however, often involve the interaction between multiple co-evolving sectors, with specific options of their own, that call for generalized and more sophisticated mathematical frameworks. In this paper, we explore a general EGT approach to deal with coordination dynamics in which individuals from multiple sectors interact. Starting from a two-sector, consumer/producer scenario, we investigate the effects of including a third co-evolving sector that we call public. We explore the changes in the self-organization process of all sectors, given the feedback that this new sector imparts on the other two.
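For readers unfamiliar with the machinery, the core of an EGT treatment of coordination is the replicator equation. Here is a minimal two-population sketch for a symmetric two-strategy coordination game; the payoffs are a toy choice, not the paper's consumer/producer/public model:

```python
def replicator_step(x, y, A, dt=0.01):
    """One Euler step of two-population replicator dynamics.
    x: fraction of population 1 playing strategy 0; y: same for population 2;
    A[i][j]: payoff for using strategy i against an opponent using j."""
    # payoffs to population-1 strategies against population 2's current mix
    f0 = A[0][0] * y + A[0][1] * (1 - y)
    f1 = A[1][0] * y + A[1][1] * (1 - y)
    fbar = x * f0 + (1 - x) * f1
    x_new = x + dt * x * (f0 - fbar)
    # symmetric game: population 2 faces the same payoffs against x
    g0 = A[0][0] * x + A[0][1] * (1 - x)
    g1 = A[1][0] * x + A[1][1] * (1 - x)
    gbar = y * g0 + (1 - y) * g1
    y_new = y + dt * y * (g0 - gbar)
    return x_new, y_new

# Pure coordination payoffs: matching pays 1, mismatching pays 0 (assumed toy game).
A = [[1.0, 0.0], [0.0, 1.0]]
x, y = 0.6, 0.7   # both populations lean slightly toward strategy 0
for _ in range(5000):
    x, y = replicator_step(x, y, A)
print(round(x, 2), round(y, 2))  # both populations self-organize onto strategy 0
```

Adding a third co-evolving sector, as the paper does, means a third frequency vector whose payoffs feed back into the other two update rules.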

 

An Evolutionary Game Theoretic Approach to Multi-Sector Coordination and Self-Organization
Fernando P. Santos, Sara Encarnação, Francisco C. Santos, Juval Portugali and Jorge M. Pacheco

Entropy 2016, 18(4), 152; http://dx.doi.org/10.3390/e18040152


Via Complexity Digest, Nuno Edgar Fernandes
Rescooped by Nuno Edgar Fernandes from CxConferences

Call for Satellites: Conference on Complex Systems 2017

As usual with the Conferences on Complex Systems, apart from the main tracks of the conference, there will be two full days of satellites (Wednesday, September 20th and Thursday, September 21st). We therefore call for satellite proposals for half-day or full-day events. Satellite organizers are responsible for promoting, organizing, reviewing, and scheduling their session.

Scientifically sound proposals should be less than 1000 words, including scope of the satellite, goals, tentative program (format, half-day/full day, invited speakers), estimated attendance, and organizers.

Proposals should be submitted in PDF format to satellites@ccs17.unam.mx

Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from CxConferences

GEFENOL Summer School 2017


Statistical Physics, which was born as an attempt to explain the thermodynamic properties of systems from their atomic and molecular components, has evolved into a solid body of knowledge that allows for the understanding of macroscopic collective phenomena. The tools of Statistical Physics, together with the Theory of Dynamical Systems, are of key importance in the understanding of Complex Systems, which are characterized by the emergent and collective phenomena of many interacting units. While the basic body of knowledge of Statistical Physics and Dynamical Systems is well described in textbooks at undergraduate or master level, the applications to open problems in the context of Complex Systems are well beyond the scope of those textbooks. Aiming to bridge this gap, the Topical Group on Statistical and Non Linear Physics (GEFENOL) of the Royal Spanish Physical Society is promoting the Summer School on Statistical Physics of Complex Systems series, open to PhD students and young postdocs worldwide.
Following the spirit and concept of preceding successful editions (Palma de Mallorca 2011, 2013, 2014; Benasque 2012; Barcelona 2015; and Pamplona 2016), the 7th edition will take place from June 19 to 30, 2017. During these two weeks there will be a total of six courses.

 

VII GEFENOL Summer School on
Statistical Physics of Complex Systems
IFISC, Palma de Mallorca, Spain, June 19-30, 2017


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

Minorities report: optimal incentives for collective intelligence

Collective intelligence is the ability of a group to perform more effectively than any individual alone. Diversity among group members is a key condition for the emergence of collective intelligence, but maintaining diversity is challenging in the face of social pressure to imitate one's peers. We investigate the role incentives play in maintaining useful diversity through an evolutionary game-theoretic model of collective prediction. We show that market-based incentive systems produce herding effects, reduce information available to the group and suppress collective intelligence. In response, we propose a new incentive scheme that rewards accurate minority predictions, and show that this produces optimal diversity and collective predictive accuracy. We conclude that real-world systems should reward those who have demonstrated accuracy when majority opinion has been in error.
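The contrast between the two incentive schemes can be made concrete with a stylized payoff rule. This toy version (a made-up splitting rule, not the authors' exact scheme) shows why rewarding accurate minority predictions pays contrarians a premium:

```python
def payoffs(predictions, outcome, scheme):
    """Per-agent rewards under two stylized incentive schemes.
    'market':   every accurate prediction pays 1, regardless of the crowd.
    'minority': accurate predictions pay only when they were in the minority,
                and the pot is split among those contrarians (assumed toy rule)."""
    n = len(predictions)
    correct = [p == outcome for p in predictions]
    if scheme == "market":
        return [1.0 if c else 0.0 for c in correct]
    n_outcome = sum(1 for p in predictions if p == outcome)
    in_minority = n_outcome <= n / 2
    winners = [c and in_minority for c in correct]
    n_win = sum(winners)
    return [n / n_win if w else 0.0 for w in winners] if n_win else [0.0] * n

preds = [1, 1, 1, 0, 0]               # majority predicts 1, outcome is 0
print(payoffs(preds, 0, "market"))    # → [0.0, 0.0, 0.0, 1.0, 1.0]
print(payoffs(preds, 0, "minority"))  # → [0.0, 0.0, 0.0, 2.5, 2.5]
```

Under the market rule there is no penalty for herding with the majority; under the minority rule, being right *against* the crowd is what pays, which is the diversity-preserving pressure the abstract describes.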

 

Minorities report: optimal incentives for collective intelligence

Richard P. Mann, Dirk Helbing


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Teacher's corner

Our Brains Have a Basic Algorithm That Enables Our Intelligence

Neuroscience News has recent neuroscience research articles, brain research news, neurology studies and neuroscience resources for neuroscientists, students, and science fans and is always free to join. Our neuroscience social network has science groups, discussion forums, free books, resources, science videos and more.

Via Suvi Salo
Suvi Salo's curator insight, November 21, 2016 3:33 PM
via @NeuroscienceNew
Rescooped by Nuno Edgar Fernandes from Papers

Transfer entropy in continuous time, with applications to jump and neural spiking processes

Transfer entropy has been used to quantify the directed flow of information between source and target variables in many complex systems. Originally formulated in discrete time, we provide a framework for considering transfer entropy in continuous time systems. By appealing to a measure theoretic formulation we generalise transfer entropy, describing it in terms of Radon-Nikodym derivatives between measures of complete path realisations. The resulting formalism introduces and emphasises the idea that transfer entropy is an expectation of an individually fluctuating quantity along a path, in the same way we consider the expectation of physical quantities such as work and heat. We recognise that transfer entropy is a quantity accumulated over a finite time interval, whilst permitting an associated instantaneous transfer entropy rate. We use this approach to produce an explicit form for the transfer entropy for pure jump processes, and highlight the simplified form in the specific case of point processes (frequently used in neuroscience to model neural spike trains). We contrast our approach with previous attempts to formulate information flow between continuous time point processes within a discrete time framework, which incur issues that our continuous time approach naturally avoids. Finally, we present two synthetic spiking neuron model examples to exhibit the pertinent features of our formalism, namely that the information flow for point processes consists of discontinuous jump contributions (at spikes in the target) interrupting a continuously varying contribution (relating to waiting times between target spikes).
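The discrete-time transfer entropy that the paper generalises has a compact plug-in estimator. A sketch with history length 1 (the paper's continuous-time, measure-theoretic formulation is far more general; the data here are synthetic):

```python
import random
from collections import Counter
from math import log2

def transfer_entropy(source, target):
    """Plug-in estimate of discrete-time transfer entropy T(source -> target)
    with history length 1: sum over p(y', y, x) * log p(y'|y,x) / p(y'|y)."""
    triples = Counter()
    for t in range(1, len(target)):
        triples[(target[t], target[t - 1], source[t - 1])] += 1
    n = sum(triples.values())
    pairs_yx = Counter()   # counts of (y, x)
    pairs_yy = Counter()   # counts of (y', y)
    singles_y = Counter()  # counts of y
    for (y1, y0, x0), c in triples.items():
        pairs_yx[(y0, x0)] += c
        pairs_yy[(y1, y0)] += c
        singles_y[y0] += c
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]          # p(y' | y, x)
        p_cond_hist = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y' | y)
        te += p_joint * log2(p_cond_full / p_cond_hist)
    return te

rng = random.Random(0)
src = [rng.randint(0, 1) for _ in range(400)]
tgt = [0] + src[:-1]   # target copies the source with one step of lag
te_fwd = transfer_entropy(src, tgt)
te_bwd = transfer_entropy(tgt, src)
print(round(te_fwd, 2), round(te_bwd, 2))  # strong forward flow (~1 bit), ~0 backward
```

For point processes binned in discrete time this estimator incurs exactly the discretisation issues the paper's continuous-time formalism avoids.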

 

Transfer entropy in continuous time, with applications to jump and neural spiking processes

Richard E. Spinney, Mikhail Prokopenko, Joseph T. Lizier


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from CxConferences

8th Conference on Complex Networks


CompleNet 2017 - 8th Conference on Complex Networks
http://complenet.weebly.com/
 
Where and When:
Dubrovnik, Croatia, March 21st-24th, 2017.
 
Important dates:
* Abstract/Paper submission deadline: November 27, 2016
* Notification of acceptance: December 23, 2016
* Submission of Camera-Ready (papers): January 8, 2017
* Early registration ends on: January 20, 2017


Via Complexity Digest
Scooped by Nuno Edgar Fernandes

Digitization of Industrie - Plattform Industrie 4.0

Available as PDF "Digitization of Industrie - Plattform Industrie 4.0"
Rescooped by Nuno Edgar Fernandes from Papers

How complexity originates: Examples from history reveal additional roots to complexity

Most scientists will characterize complexity as the result of one or more factors out of three: (i) high dimensionality, (ii) interaction networks, and (iii) nonlinearity. High dimensionality alone need not give rise to complexity. The best known cases come from linear algebra: To determine the eigenvalues and eigenvectors of a large quadratic matrix, for example, is complicated but not complex. Every mathematician, physicist or economist, and most scholars from other disciplines can write down an algorithm that would work provided infinite resources in computer time and storage space are given. (...) 

 

How complexity originates: Examples from history reveal additional roots to complexity
Peter Schuster
Complexity
DOI: 10.1002/cplx.21841


Via Complexity Digest
Marcelo Errera's curator insight, October 28, 2016 2:23 PM
It's reasonable to assume there is an underlying physics principle that drives systems to complexity.  Once the principle is identified, one will be able to discover when complexity emerges or not.
Rescooped by Nuno Edgar Fernandes from Papers

Compression and the origins of Zipf's law for word frequencies

Here we sketch a new derivation of Zipf's law for word frequencies based on optimal coding. The structure of the derivation is reminiscent of Mandelbrot's random typing model but it has multiple advantages over random typing: (1) it starts from realistic cognitive pressures, (2) it does not require fine tuning of parameters, and (3) it sheds light on the origins of other statistical laws of language and thus can lead to a compact theory of linguistic laws. Our findings suggest that the recurrence of Zipf's law in human languages could originate from pressure for easy and fast communication.
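The empirical regularity at stake, word frequency falling off as roughly the inverse of rank, is easy to verify numerically. A sketch that samples tokens from an ideal Zipfian source and recovers the exponent by log-log regression (synthetic data, not a real corpus):

```python
import random
from math import log

def zipf_sample(n_words, n_tokens, s=1.0, seed=0):
    """Draw tokens from a Zipfian rank-frequency distribution f(r) ~ r**(-s)."""
    rng = random.Random(seed)
    weights = [1.0 / (r ** s) for r in range(1, n_words + 1)]
    return rng.choices(range(1, n_words + 1), weights=weights, k=n_tokens)

def rank_frequency_slope(tokens):
    """Least-squares slope of log-frequency versus log-rank."""
    freq = {}
    for t in tokens:
        freq[t] = freq.get(t, 0) + 1
    counts = sorted(freq.values(), reverse=True)
    xs = [log(r) for r in range(1, len(counts) + 1)]
    ys = [log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

tokens = zipf_sample(n_words=500, n_tokens=50_000)
slope = rank_frequency_slope(tokens)
print(round(slope, 2))  # close to -1, the classic Zipf exponent
```

The paper's contribution is a derivation of *why* real speakers end up near this exponent, from coding-efficiency pressures rather than from Mandelbrot-style random typing.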

 

Compression and the origins of Zipf's law for word frequencies
Ramon Ferrer-i-Cancho

Complexity


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

A Mathematician Who Decodes the Patterns Stamped Out by Life

Corina Tarnita deciphers bizarre patterns in the soil created by competing life-forms. She’s found that they can reveal whether an ecosystem is thriving or on the verge of collapse.

Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Deep_In_Depth: Deep Learning, ML & DS

arXiv - Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning

Deep artificial neural networks (DNNs) are typically trained via gradient-based learning algorithms, namely backpropagation. Evolution strategies (ES) can rival backprop-based algorithms such as Q-learning and policy gradients on challenging deep reinforcement learning (RL) problems. However, ES can be considered a gradient-based algorithm because it performs stochastic gradient descent via an operation similar to a finite-difference approximation of the gradient. That raises the question of whether non-gradient-based evolutionary algorithms can work at DNN scales. Here we demonstrate they can: we evolve the weights of a DNN with a simple, gradient-free, population-based genetic algorithm (GA) and it performs well on hard deep RL problems, including Atari and humanoid locomotion. The Deep GA successfully evolves networks with over four million free parameters, the largest neural networks ever evolved with a traditional evolutionary algorithm. These results (1) expand our sense of the scale at which GAs can operate, (2) suggest intriguingly that in some cases following the gradient is not the best choice for optimizing performance, and (3) make immediately available the multitude of techniques that have been developed in the neuroevolution community to improve performance on RL problems. To demonstrate the latter, we show that combining DNNs with novelty search, which was designed to encourage exploration on tasks with deceptive or sparse reward functions, can solve a high-dimensional problem on which reward-maximizing algorithms (e.g. DQN, A3C, ES, and the GA) fail. Additionally, the Deep GA parallelizes better than ES, A3C, and DQN, and enables a state-of-the-art compact encoding technique that can represent million-parameter DNNs in thousands of bytes.
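The gradient-free recipe can be demonstrated at toy scale. Below, a simple population-based GA (truncation selection plus Gaussian mutation, no crossover, echoing the spirit of the paper's Deep GA) evolves the nine weights of a tiny network on XOR; the architecture and hyperparameters are illustrative choices, nowhere near the paper's four-million-parameter scale:

```python
import math
import random

def forward(w, x):
    """Tiny 2-2-1 tanh network; w is a flat list of 9 weights and biases."""
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    """Negative total squared error on the XOR task."""
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def genetic_algorithm(pop_size=100, generations=300, sigma=0.3, seed=3):
    """Gradient-free GA: keep the top 20% each generation, refill the
    population with Gaussian-mutated copies of the elite."""
    rng = random.Random(seed)
    pop = [[rng.gauss(0.0, 1.0) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]
        pop = elite + [
            [w + rng.gauss(0.0, sigma) for w in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)

best = genetic_algorithm()
err = -fitness(best)
print(round(err, 3))  # remaining squared error on XOR after evolution
```

No gradient is ever computed: selection pressure on the population does all the work, which is exactly the property the paper scales up to deep RL.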

Via Eric Feuilleaubois
Rescooped by Nuno Edgar Fernandes from Deep_In_Depth: Deep Learning, ML & DS

How To Ace Data Science Interviews: Statistics

For someone working or trying to work in data science, statistics is probably the biggest and most intimidating area of knowledge you need to develop. The goal of this post is to reduce what you need to know to a finite number of concrete ideas, techniques, and equations.

Of course, that’s an ambitious goal — if you plan to be in data science for the long term I’d still expect you to continue learning statistical concepts and techniques throughout your career. But what I’m aiming for is to provide you with a baseline to get you through your interviews and into practicing data science with as short and painless a process as possible. I’ll end each section with key terms and resources for further reading. Let’s dive in.

Via Eric Feuilleaubois
Rescooped by Nuno Edgar Fernandes from Edgar Analytics & Complex Systems

The Complexity of Dynamics in Small Neural Circuits

The mesoscopic level of brain organization, describing the organization and dynamics of small circuits of neurons, from a few tens to a few thousand cells, has recently received considerable experimental attention. It is useful for describing small neural systems of invertebrates, and in mammalian neural systems it is often seen as a middle ground that is fundamental to link single neuron activity to complex functions and behavior. However, and somewhat counter-intuitively, the behavior of neural networks of small and intermediate size can be much more difficult to study mathematically than that of large networks, and appropriate mathematical methods to study the dynamics of such networks have not been developed yet. Here we consider a model of a network of firing-rate neurons with arbitrary finite size, and we study its local bifurcations using an analytical approach. This analysis, complemented by numerical studies for both the local and global bifurcations, shows the emergence of strong and previously unexplored finite-size effects that are particularly hard to detect in large networks. This study advances the tools available for the comprehension of finite-size neural circuits, going beyond the insights provided by the mean-field approximation and the current techniques for the quantification of finite-size effects.
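The kind of bifurcation the paper analyses can be seen in the smallest possible firing-rate circuit. A two-neuron sketch (toy parameters, not the paper's model): below a critical coupling the quiescent state is stable, above it the circuit settles on a nontrivial fixed point instead.

```python
import math

def simulate(W, I, steps=5000, dt=0.01, r0=(0.1, -0.1)):
    """Euler-integrate a two-neuron firing-rate model:
    dr_i/dt = -r_i + tanh(sum_j W[i][j] * r_j + I[i])."""
    r = list(r0)
    for _ in range(steps):
        drive = [
            math.tanh(sum(W[i][j] * r[j] for j in range(2)) + I[i])
            for i in range(2)
        ]
        r = [r[i] + dt * (-r[i] + drive[i]) for i in range(2)]
    return r

# Weak recurrent coupling: activity decays back to the quiescent fixed point.
weak = simulate([[0.5, -0.2], [-0.2, 0.5]], [0.0, 0.0])
# Strong self-excitation: the quiescent state loses stability (a local
# bifurcation) and the rates settle on a high-activity fixed point.
strong = simulate([[2.0, -0.2], [-0.2, 2.0]], [0.0, 0.0])
print([round(x, 3) for x in weak], [round(x, 3) for x in strong])
```

For two neurons the stability boundary can be read off the Jacobian at the origin; the paper's contribution is doing this analytically for networks of arbitrary finite size.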

 

Fasoli D, Cattani A, Panzeri S (2016) The Complexity of Dynamics in Small Neural Circuits. PLoS Comput Biol 12(8): e1004992. doi:10.1371/journal.pcbi.1004992


Via Complexity Digest, Nuno Edgar Fernandes
Rescooped by Nuno Edgar Fernandes from CxConferences

1st Call for Abstracts: Conference on Complex Systems 2017

The flagship conference of the Complex Systems Society will go to Latin America for the first time in 2017. The Mexican complex systems community is enthusiastic to welcome colleagues to one of our richest destinations: Cancun.

The conference will include presentations by Mario Molina (Environment, Nobel Prize in Chemistry), Ranulfo Romo (neuroscience), Antonio Lazcano (origins of life), Marta González (human mobility), Dirk Brockmann (epidemiology), Stefano Battiston (economics), John Quackenbush (computational biology), and many more.

 

Important dates:
* Abstract deadline: March 10
* Notifications of acceptance: April 21
* Conference: September 17-22


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from CxBooks

An Introduction to Transfer Entropy: Information Flow in Complex Systems


T. Bossomaier, L. Barnett, M. Harré, J.T. Lizier
"An Introduction to Transfer Entropy: Information Flow in Complex Systems"
Springer, 2016.

This book considers a relatively new measure in complex systems, transfer entropy, derived from a series of measurements, usually a time series. After a qualitative introduction and a chapter that explains the key ideas from statistics required to understand the text, the authors then present information theory and transfer entropy in depth. A key feature of the approach is the authors' work to show the relationship between information flow and complexity. The later chapters demonstrate information transfer in canonical systems, and applications, for example in neuroscience and in finance.
 
The book will be of value to advanced undergraduate and graduate students and researchers in the areas of computer science, neuroscience, physics, and engineering.

 

SpringerLink access to PDFs: http://bit.ly/te-book-2016

Springer hard copy listing: http://bit.ly/te-book-2016-hardcopy

Amazon listing: http://amzn.to/2f5YdYW


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

Percolation in real multiplex networks

We present an exact mathematical framework able to describe site-percolation transitions in real multiplex networks. Specifically, we consider the average percolation diagram valid over an infinite number of random configurations where nodes are present in the system with given probability. The approach relies on the locally treelike ansatz, so that it is expected to accurately reproduce the true percolation diagram of sparse multiplex networks with negligible number of short loops. The performance of our theory is tested in social, biological, and transportation multiplex graphs. When compared against previously introduced methods, we observe improvements in the prediction of the percolation diagrams in all networks analyzed. Results from our method confirm previous claims about the robustness of real multiplex networks, in the sense that the average connectedness of the system does not exhibit any significant abrupt change as its individual components are randomly destroyed.
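The quantity being predicted, the size of the mutually connected component under random node removal, can be estimated by direct Monte Carlo on small graphs. A sketch using iterative largest-component pruning (a standard heuristic for mutual connectedness; the toy duplex and all parameters are invented, and the paper's method is an exact message-passing theory, not simulation):

```python
import random

def components(nodes, edges):
    """Connected components of the subgraph induced by `nodes`."""
    nodes = set(nodes)
    adj = {v: [] for v in nodes}
    for a, b in edges:
        if a in nodes and b in nodes:
            adj[a].append(b)
            adj[b].append(a)
    seen, comps = set(), []
    for v in nodes:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u])
        seen |= comp
        comps.append(comp)
    return comps

def mcc_size(nodes, layer_edges):
    """Restrict to the largest component of each layer in turn until stable:
    the surviving set is connected within every layer simultaneously."""
    active = set(nodes)
    while active:
        changed = False
        for edges in layer_edges:
            biggest = max(components(active, edges), key=len)
            if biggest != active:
                active, changed = biggest, True
        if not changed:
            return len(active)
    return 0

def site_percolation(n, layer_edges, p, trials=200, seed=0):
    """Average relative mutually connected component size when each node
    is present independently with probability p."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        occupied = [v for v in range(n) if rng.random() < p]
        if occupied:
            total += mcc_size(occupied, layer_edges)
    return total / (trials * n)

# Toy duplex: two independent random layers over the same 30 nodes.
rng = random.Random(42)
n = 30
layers = [
    [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.15]
    for _ in range(2)
]
s_hi = site_percolation(n, layers, 0.9)
s_lo = site_percolation(n, layers, 0.2)
print(round(s_hi, 2), round(s_lo, 2))
```

Averaging over many random occupation configurations, as above, is precisely the "average percolation diagram" that the paper's locally treelike theory reproduces analytically.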

 

Percolation in real multiplex networks

Ginestra Bianconi, Filippo Radicchi


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

Learning to Perform Physics Experiments via Deep Reinforcement Learning

When encountering novel objects, humans are able to infer a wide range of physical properties such as mass, friction and deformability by interacting with them in a goal-driven way. This process of active interaction is in the same spirit as a scientist performing an experiment to discover hidden facts. Recent advances in artificial intelligence have yielded machines that can achieve superhuman performance in Go, Atari, natural language processing, and complex control problems, but it is not clear that these systems can rival the scientific intuition of even a young child. In this work we introduce a basic set of tasks that require agents to estimate hidden properties such as mass and cohesion of objects in an interactive simulated environment where they can manipulate the objects and observe the consequences. We found that state-of-the-art deep reinforcement learning methods can learn to perform the experiments necessary to discover such hidden properties. By systematically manipulating the problem difficulty and the cost incurred by the agent for performing experiments, we found that agents learn different strategies that balance the cost of gathering information against the cost of making mistakes in different situations.

 

Learning to Perform Physics Experiments via Deep Reinforcement Learning

Misha Denil, Pulkit Agrawal, Tejas D Kulkarni, Tom Erez, Peter Battaglia, Nando de Freitas


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from CxConferences

GECCO 2017

The Genetic and Evolutionary Computation Conference (GECCO) in 2017 will present the latest high-quality results in genetic and evolutionary computation. Topics include: genetic algorithms, genetic programming, ant colony optimization and swarm intelligence, complex systems (artificial life/robotics/evolvable hardware/generative and developmental systems/artificial immune systems), digital entertainment technologies and arts, evolutionary combinatorial optimization and metaheuristics, evolutionary machine learning, evolutionary multiobjective optimization, evolutionary numerical optimization, real world applications, search-based software engineering (including self-* search), theory and more.

 

2017 Genetic and Evolutionary Computation Conference (GECCO 2017)
July, 2017, Berlin, Germany
http://gecco-2017.sigevo.org/


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

Evidence of Shared Aspects of Complexity Science and Quantum Phenomena

Complexity science concepts of emergence, self-organization, and feedback suggest that descriptions of systems and events are subjective, incomplete, and impermanent, similar to what we observe in quantum phenomena. Complexity science evinces an increasingly compelling alternative to reductionism for describing physical phenomena, now that shared aspects of complexity science and quantum phenomena are being scientifically substantiated. Establishment of a clear connection between chaotic complexity and quantum entanglement in small quantum systems indicates the presence of common processes involved in thermalization in large- and small-scale systems. Recent findings in the fields of quantum physics, quantum biology, and quantum cognition demonstrate evidence of the complexity science characteristics of sensitivity to initial conditions and emergence of self-organizing systems. Efficiencies in quantum superposition suggest a new paradigm in which our very notion of complexity depends on which information theory we choose to employ.

 

Evidence of Shared Aspects of Complexity Science and Quantum Phenomena
Cynthia Larson

Cosmos and History: The Journal of Natural and Social Philosophy, Vol 12, No 2 (2016)


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

Multimodel agent-based simulation environment for mass-gatherings and pedestrian dynamics

• A multimodel agent-based simulation environment (PULSE) is presented.
• Model integration techniques suggested: common space and commonly controlled agents.
• Crowd pressure metrics for simulating crushing and asphyxia in crowds are proposed.
• Simulations of evacuation from cinema building to the city streets are carried out.

 

Multimodel agent-based simulation environment for mass-gatherings and pedestrian dynamics
Vladislav Karbovskii, Daniil Voloshin, Andrey Karsakov, Alexey Bezgodov, Carlos Gershenson

Future Generation Computer Systems


Via Complexity Digest