Rescooped by Nuno Edgar Fernandes from CxAnnouncements
onto Edgar Analytics & Complex Systems

Open Economics Principles

Economic research is based on building on, reusing and openly criticising the published body of economic knowledge. Furthermore, empirical economic research and data play a central role for policy-making in many important areas of our economies and societies.
Openness enables and underpins scholarly enquiry and debate, and is crucial in ensuring the reproducibility of economic research and analysis. For economics to function effectively, and for society to reap the full benefits from economic research, it is therefore essential that economic research results, data and analysis be openly and freely available wherever possible.

 

http://openeconomics.net/principles/


Via Complexity Digest

Edgar Analytics & Complex Systems
A space to Scoop about Big Data and Complexity
Rescooped by Nuno Edgar Fernandes from Papers

The future cities agenda

Suddenly, ‘cities’ have become the hottest topic on the planet. National research institutes, local governments and various global agencies are all scrambling to get a piece of the action as cities come to be seen as the places where future economic prosperity firmly lies, and as offering the prospect of rescuing a developed world mired in recession.

 

Batty M, 2013, "The future cities agenda", Environment and Planning B: Planning and Design 40(2), 191–194

http://dx.doi.org/10.1068/b4002ed


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

From Entropy to Information: Biased Typewriters and the Origin of Life

The origin of life can be understood mathematically to be the origin of information that can replicate. The likelihood that entropy spontaneously becomes information can be calculated from first principles, and depends exponentially on the amount of information that is necessary for replication. We do not know what the minimum amount of information for self-replication is because it must depend on the local chemistry, but we can study how this likelihood behaves in different known chemistries, and we can study ways in which this likelihood can be enhanced. Here we present evidence from numerical simulations (using the digital life chemistry "Avida") that using a biased probability distribution for the creation of monomers (the "biased typewriter") can exponentially increase the likelihood of spontaneous emergence of information from entropy. We show that this likelihood may depend on the length of the sequence that the information is embedded in, but in a non-trivial manner: there may be an optimum sequence length that maximizes the likelihood. We conclude that the likelihood of spontaneous emergence of self-replication is much more malleable than previously thought, and that the biased probability distributions of monomers that are the norm in biochemistry may significantly enhance these likelihoods.
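The exponential gain from a biased monomer distribution can be illustrated with a toy calculation. This is only a sketch: the four-letter alphabet and the target "replicator" sequence below are invented for illustration, not taken from the Avida chemistry used in the paper. Drawing a fixed target sequence monomer by monomer, a distribution biased toward the target's own composition beats the uniform distribution by a factor that grows exponentially with sequence length.

```python
import math

def seq_probability(seq, monomer_probs):
    """Probability of drawing `seq` monomer by monomer from `monomer_probs`."""
    p = 1.0
    for m in seq:
        p *= monomer_probs[m]
    return p

# Hypothetical 4-monomer chemistry; the "replicator" sequence is illustrative.
replicator = "aaabbacdaa"
uniform = {m: 0.25 for m in "abcd"}
# Biased typewriter: monomer frequencies matched to the replicator's composition.
biased = {m: replicator.count(m) / len(replicator) for m in "abcd"}

p_uniform = seq_probability(replicator, uniform)
p_biased = seq_probability(replicator, biased)
print(f"uniform: {p_uniform:.2e}  biased: {p_biased:.2e}  gain: {p_biased / p_uniform:.1f}x")
```

Doubling the sequence length roughly squares the gain, which is the exponential dependence the abstract refers to.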

 

From Entropy to Information: Biased Typewriters and the Origin of Life
Christoph Adami, Thomas LaBar

http://arxiv.org/abs/1506.06988


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

Cascades in multiplex financial networks with debts of different seniority


A model of a banking network predicts the balance of high- and low-priority debts that ensures financial stability.

 

Synopsis: http://physics.aps.org/synopsis-for/10.1103/PhysRevE.91.062813

 

Cascades in multiplex financial networks with debts of different seniority

 

The seniority of debt, which determines the order in which a bankrupt institution repays its debts, is an important and sometimes contentious feature of financial crises, yet its impact on systemwide stability is not well understood. We capture seniority of debt in a multiplex network, a graph of nodes connected by multiple types of edges. Here an edge between banks denotes a debt contract of a certain level of seniority. Next we study cascading default. There exist multiple kinds of bankruptcy, indexed by the highest level of seniority at which a bank cannot repay all its debts. Self-interested banks would prefer that all their loans be made at the most senior level. However, mixing debts of different seniority levels makes the system more stable in that it shrinks the set of network densities for which bankruptcies spread widely. We compute the optimal ratio of senior to junior debts, which we call the optimal seniority ratio, for two uncorrelated Erdős-Rényi networks. If institutions erode their buffer against insolvency, then this optimal seniority ratio rises; in other words, if default thresholds fall, then more loans should be senior. We generalize the analytical results to arbitrarily many levels of seniority and to heavy-tailed degree distributions.
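The flavour of such cascade models can be sketched in a few lines. The code below is a deliberately stripped-down, single-seniority caricature with invented balance sheets, not the paper's multiplex model: a bank defaults once its losses on loans to already-defaulted counterparties exceed its capital buffer, and defaults propagate until a fixed point is reached.

```python
def cascade(loans, buffers, seed):
    """Iterate defaults to a fixed point. loans[i][j] is the amount bank i
    lent to bank j, written off entirely when j defaults."""
    n = len(buffers)
    defaulted = {seed}
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i in defaulted:
                continue
            loss = sum(loans[i][j] for j in defaulted)
            if loss > buffers[i]:
                defaulted.add(i)
                changed = True
    return defaulted

# Four hypothetical banks lending in a ring; every buffer is 1.5.
loans = [
    [0, 2, 0, 0],
    [0, 0, 2, 0],
    [0, 0, 0, 2],
    [2, 0, 0, 0],
]
buffers = [1.5, 1.5, 1.5, 1.5]
print(sorted(cascade(loans, buffers, seed=0)))  # [0, 1, 2, 3]: one failure topples the ring
```

Raising the buffers above the loan exposure (e.g. to 3) stops the cascade at the seed bank, which is the kind of stability question the paper studies per seniority layer.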

 

Charles D. Brummitt and Teruyoshi Kobayashi

Phys. Rev. E 91, 062813 (2015)

Published June 24, 2015


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

Nature Physics Focus on Complex networks in finance

The 2008 financial crisis has highlighted major limitations in the modelling of financial and economic systems. However, an emerging field of research at the frontiers of both physics and economics aims to provide a more fundamental understanding of economic networks, as well as practical insights for policymakers. In this Nature Physics Focus, physicists and economists consider the state-of-the-art in the application of network science to finance.

 

http://www.nature.com/nphys/journal/v9/n3/index.html


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

The evolution of lossy compression

In complex environments, there are costs to both ignorance and perception. An organism needs to track fitness-relevant information about its world, but the more information it tracks, the more resources it must devote to memory and processing. Rate-distortion theory shows that, when errors are allowed, remarkably efficient internal representations can be found by biologically-plausible hill-climbing mechanisms. We identify two regimes: a high-fidelity regime where perceptual costs scale logarithmically with environmental complexity, and a low-fidelity regime where perceptual costs are, remarkably, independent of the environment. When environmental complexity is rising, Darwinian evolution should drive organisms to the threshold between the high- and low-fidelity regimes. Organisms that code efficiently will find themselves able to make, just barely, the most subtle distinctions in their environment.
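The trade-off the abstract describes is the subject of rate-distortion theory. As a minimal sketch, the Blahut-Arimoto iteration below computes one point (R, D) on a rate-distortion curve; the binary source and Hamming distortion are illustrative assumptions, not the paper's perceptual model.

```python
import math

def blahut_arimoto(p_x, d, beta, iters=200):
    """One point on the rate-distortion curve via Blahut-Arimoto.

    p_x: source distribution, d[x][y]: distortion matrix,
    beta: Lagrange slope trading rate against distortion."""
    nx, ny = len(p_x), len(d[0])
    r = [1.0 / ny] * ny                        # output marginal, start uniform
    q = []
    for _ in range(iters):
        q = []
        for x in range(nx):
            row = [r[y] * math.exp(-beta * d[x][y]) for y in range(ny)]
            z = sum(row)
            q.append([v / z for v in row])     # optimal test channel given r
        r = [sum(p_x[x] * q[x][y] for x in range(nx)) for y in range(ny)]
    D = sum(p_x[x] * q[x][y] * d[x][y] for x in range(nx) for y in range(ny))
    R = sum(p_x[x] * q[x][y] * math.log2(q[x][y] / r[y])
            for x in range(nx) for y in range(ny) if q[x][y] > 0)
    return R, D

# Binary uniform source with Hamming distortion, where analytically R(D) = 1 - H2(D).
p = [0.5, 0.5]
hamming = [[0, 1], [1, 0]]
R, D = blahut_arimoto(p, hamming, beta=2.0)
print(f"R = {R:.3f} bits at distortion D = {D:.3f}")
```

Sweeping beta traces the whole curve: large beta buys low distortion at a high rate (the "high-fidelity" end), small beta the reverse.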

 

The evolution of lossy compression
Sarah E. Marzen, Simon DeDeo

http://arxiv.org/abs/1506.06138


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from RJI links

'Native advertising is the future of newspaper advertising.'


Via Brian Steffens
Brian Steffens's curator insight, June 10, 12:32 AM

"... I learned that I want to be part of finding the solution for the future growth of our industry. I’m no longer content to sit by the sidelines with the ‘woe is newspapers’ crowd." -- 2014-2015 RJI Fellow Jaci Smith.

Rescooped by Nuno Edgar Fernandes from CxConferences

Satellite Meeting INFORMATION PROCESSING IN COMPLEX SYSTEMS (IPCS'15)


CCS'15 Satellite Meeting: Information Processing in Complex Systems (IPCS'15)

Abstracts due:     June 20
Decision of admission:     June 25
Satellite meeting:     October 1

 

All systems in nature have one thing in common: they process information. Information is registered in the state of a system and its elements, implicitly and invisibly. As elements interact, information is transferred. Indeed, bits of information about the state of one element will travel – imperfectly – to the state of the other element, forming its new state. This storage and transfer of information, possibly between levels of a multilevel system, is imperfect due to randomness or noise. From this viewpoint, a system can be formalized as a collection of bits that is organized according to its rules of dynamics and its topology of interactions. Mapping out exactly how these bits of information percolate through the system could reveal new fundamental insights in how the parts orchestrate to produce the properties of the system. A theory of information processing would be capable of defining a set of universal properties of dynamical multilevel complex systems, which describe and compare the dynamics of diverse complex systems ranging from social interaction to brain networks, from financial markets to biomedicine. Each possible combination of rules of dynamics and topology of interactions, with disparate semantics, would reduce to a single language of information processing.


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

An Information-Theoretic Perspective on Coarse-Graining, Including the Transition from Micro to Macro

An information-theoretic perspective on coarse-graining is presented. It starts with an information characterization of configurations at the micro-level using a local information quantity that has a spatial average equal to a microscopic entropy. With a reversible micro dynamics, this entropy is conserved. In the micro-macro transition, it is shown how this local information quantity is transformed into a macroscopic entropy, as the local states are aggregated into macroscopic concentration variables. The information loss in this transition is identified, and the connection to the irreversibility of the macro dynamics and the second law of thermodynamics is discussed. This is then connected to a process of further coarse-graining towards higher characteristic length scales in the context of chemical reaction-diffusion dynamics capable of pattern formation. On these higher levels of coarse-graining, information flows across length scales and across space are defined. These flows obey a continuity equation for information, and they are connected to the thermodynamic constraints of the system, via an outflow of information from macroscopic to microscopic levels in the form of entropy production, as well as an inflow of information, from an external free energy source, if a spatial chemical pattern is to be maintained.
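The information loss in the micro-to-macro transition can be made concrete with a toy example (an invented two-site system, not the paper's reaction-diffusion setting): aggregating micro-states into a concentration-like macro variable can only lower the Shannon entropy of the description, and the difference is exactly the information discarded.

```python
import math
from collections import defaultdict

def shannon(dist):
    """Shannon entropy in bits of a {state: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical two-site micro-system with four equally likely micro-states;
# coarse-grain by keeping only the total occupancy (a "concentration" variable).
micro = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
macro = defaultdict(float)
for state, p in micro.items():
    macro[sum(state)] += p        # micro-states with the same total are merged

H_micro, H_macro = shannon(micro), shannon(macro)
print(H_micro, H_macro)  # 2.0 1.5 -- half a bit is lost in the micro-to-macro step
```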

 

An Information-Theoretic Perspective on Coarse-Graining, Including the Transition from Micro to Macro
Kristian Lindgren

Entropy 2015, 17(5), 3332-3351; http://dx.doi.org/10.3390/e17053332


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Computational Tinkering

No God In The Machine - InformationWeek

Artificial intelligence cannot replicate human consciousness, say Irish researchers in new study.

 

In a recently published paper, "Is Consciousness Computable? Quantifying Integrated Information Using Algorithmic Information Theory," Phil Maguire, co-director of the BSc degree in computational thinking at National University of Ireland, Maynooth, and his co-authors demonstrate that, within the model of consciousness proposed by Giulio Tononi, the integrated information in our brains cannot be modeled by computers.


Via Susan Einhorn
Rescooped by Nuno Edgar Fernandes from Papers

The mortality of companies

The firm is a fundamental economic unit of contemporary human societies. Studies on the general quantitative and statistical character of firms have produced mixed results regarding their lifespans and mortality. We examine a comprehensive database of more than 25 000 publicly traded North American companies, from 1950 to 2009, to derive the statistics of firm lifespans. Based on detailed survival analysis, we show that the mortality of publicly traded companies manifests an approximately constant hazard rate over long periods of observation. This regularity indicates that mortality rates are independent of a company's age. We show that the typical half-life of a publicly traded company is about a decade, regardless of business sector. Our results shed new light on the dynamics of births and deaths of publicly traded companies and identify some of the necessary ingredients of a general theory of firms.
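A constant hazard rate implies exponential survival, so the reported half-life pins down the rate. A quick sketch, taking the half-life as exactly ten years for illustration:

```python
import math

# A constant hazard rate lam implies exponential survival S(t) = exp(-lam * t),
# so the reported half-life of roughly a decade fixes lam = ln(2) / 10 per year.
half_life = 10.0                      # years; illustrative round figure
lam = math.log(2) / half_life

def survival(t):
    """Fraction of a cohort of firms still alive after t years."""
    return math.exp(-lam * t)

print(round(survival(10), 3))  # 0.5   -- half gone after one half-life
print(round(survival(30), 3))  # 0.125 -- one eighth left after three half-lives
```

Age-independence is exactly what makes this curve a single exponential: a fifty-year-old firm faces the same annual mortality as a five-year-old one.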

 

Madeleine I. G. Daepp, Marcus J. Hamilton, Geoffrey B. West, Luís M. A. Bettencourt. The mortality of companies. Royal Society Interface, 2015. http://dx.doi.org/10.1098/rsif.2015.0120

 


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

“Waiting for Carnot”: Information and complexity

The relationship between information and complexity is analyzed using a detailed literature analysis. Complexity is a multifaceted concept, with no single agreed definition. There are numerous approaches to defining and measuring complexity and organization, all involving the idea of information. Conceptions of complexity, order, organization, and “interesting order” are inextricably intertwined with those of information. Shannon's formalism captures information's unpredictable creative contributions to organized complexity; a full understanding of information's relation to structure and order is still lacking. Conceptual investigations of this topic should enrich the theoretical basis of the information science discipline, and create fruitful links with other disciplines that study the concepts of information and complexity.

 

“Waiting for Carnot”: Information and complexity
David Bawden and Lyn Robinson

Journal of the Association for Information Science and Technology
Early View

http://dx.doi.org/10.1002/asi.23535


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Global Brain

Jeff Hawkins on Firing Up the Silicon Brain | WIRED

#maketechhuman Jeff Hawkins recently re-read his 2004 book On Intelligence, where the founder of Palm computing – the company that gave us the first handheld computer and later, first-generation smartphones – explains how the human brain learns. An electrical engineer by training, Hawkins had taken a deep interest in how the brain works and founded…

Via Spaceweaver
Rescooped by Nuno Edgar Fernandes from Papers

Understanding Brains: Details, Intuition, and Big Data


Understanding how the brain works requires a delicate balance between the appreciation of the importance of a multitude of biological details and the ability to see beyond those details to general principles. As technological innovations vastly increase the amount of data we collect, the importance of intuition into how to analyze and treat these data may, paradoxically, become more important.

 

Marder E (2015) Understanding Brains: Details, Intuition, and Big Data. PLoS Biol 13(5): e1002147. http://dx.doi.org/10.1371/journal.pbio.1002147 


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

25 Years of Self-Organized Criticality: Numerical Detection Methods

The detection and characterization of self-organized criticality (SOC), in both real and simulated data, has undergone many significant revisions over the past 25 years. The explosive advances in the many numerical methods available for detecting, discriminating, and ultimately testing, SOC have played a critical role in developing our understanding of how systems experience and exhibit SOC. In this article, methods of detecting SOC are reviewed; from correlations to complexity to critical quantities. A description of the basic autocorrelation method leads into a detailed analysis of application-oriented methods developed in the last 25 years. In the second half of this manuscript space-based, time-based and spatial-temporal methods are reviewed and the prevalence of power laws in nature is described, with an emphasis on event detection and characterization. The search for numerical methods to clearly and unambiguously detect SOC in data often leads us outside the comfort zone of our own disciplines - the answers to these questions are often obtained by studying the advances made in other fields of study. In addition, numerical detection methods often provide the optimum link between simulations and experiments in scientific research. We seek to explore this boundary where the rubber meets the road, to review this expanding field of research of numerical detection of SOC systems over the past 25 years, and to iterate forwards so as to provide some foresight and guidance into developing breakthroughs in this subject over the next quarter of a century.
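One workhorse among the numerical methods this kind of review covers is maximum-likelihood estimation of a power-law exponent from event sizes. The sketch below uses the standard Hill-type estimator for a continuous tail; the sample size and true exponent are arbitrary illustrative choices.

```python
import math, random

def powerlaw_alpha(xs, xmin):
    """Maximum-likelihood (Hill-type) estimate of the exponent alpha for a
    continuous power-law tail p(x) ~ x**(-alpha), x >= xmin."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Sanity check on synthetic data drawn by inverse-CDF sampling from a
# power law with alpha = 2.5 and xmin = 1 (arbitrary illustrative choices).
random.seed(0)
alpha_true = 2.5
xs = [(1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0)) for _ in range(100_000)]
print(round(powerlaw_alpha(xs, xmin=1.0), 2))  # close to 2.5
```

In practice the hard part, as the review stresses, is not this formula but choosing xmin and testing whether a power law is warranted at all.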

 

 

25 Years of Self-Organized Criticality: Numerical Detection Methods
R.T. James McAteer, Markus J. Aschwanden, Michaila Dimitropoulou, Manolis K. Georgoulis, Gunnar Pruessner, Laura Morales, Jack Ireland, Valentyna Abramenko

http://arxiv.org/abs/1506.08142


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

A Neural Conversational Model

Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require hand-crafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence to sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to extract knowledge from both a domain specific dataset, and from a large, noisy, and general domain dataset of movie subtitles. On a domain-specific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model.

 

A Neural Conversational Model
Oriol Vinyals, Quoc Le

http://arxiv.org/abs/1506.05869


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

The evolutionary advantage of cooperation

The present study asks how cooperation and consequently structure can emerge in many different evolutionary contexts. Cooperation, here, is a persistent behavioural pattern of individual entities pooling and sharing resources. Examples are: individual cells forming multicellular systems whose various parts pool and share nutrients; pack animals pooling and sharing prey; families, firms, or modern nation states pooling and sharing financial resources. In these examples, each atomistic decision, at a point in time, of the better-off entity to cooperate poses a puzzle: the better-off entity will book an immediate net loss -- why should it cooperate? For each example, specific explanations have been put forward. Here we point out a very general mechanism -- a sufficient null model -- whereby cooperation can evolve. The mechanism is based on the following insight: natural growth processes tend to be multiplicative. In multiplicative growth, ergodicity is broken in such a way that fluctuations have a net-negative effect on the time-average growth rate, although they have no effect on the growth rate of the ensemble average. Pooling and sharing resources reduces fluctuations, which leaves ensemble averages unchanged but -- contrary to common perception -- increases the time-average growth rate for each cooperator.
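The pooling argument can be checked in a few lines of simulation. The multiplicative factors below are invented for illustration, in the spirit of the paper's null model: the gamble grows on ensemble average (mean factor 1.05) yet shrinks in time average, and pooling raises the time-average growth rate without touching the ensemble average.

```python
import math, random

# Multiplicative gamble: each step wealth is multiplied by 0.6 or 1.5 (fair coin).
# Ensemble average grows (mean factor 1.05), but the time-average growth rate
# exp(E[ln f]) = sqrt(0.9) < 1 shrinks. Pooling and sharing damps fluctuations
# and raises the time-average growth rate.
random.seed(1)
STEPS = 100_000
factors = (0.6, 1.5)

log_loner = 0.0
log_pool = 0.0
for _ in range(STEPS):
    f1, f2 = random.choice(factors), random.choice(factors)
    log_loner += math.log(f1)                 # one entity on its own
    log_pool += math.log((f1 + f2) / 2)       # two cooperators pool, then split

g_loner = log_loner / STEPS   # ~ 0.5*ln(0.9) = -0.053 per step
g_pool = log_pool / STEPS     # ~ -0.002 per step: much closer to break-even
print(g_loner, g_pool)
```

Each cooperator's wealth grows faster over time than a loner's even though, at every single step, the better-off partner books an expected loss by sharing.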

 

The evolutionary advantage of cooperation
Ole Peters, Alexander Adamou

http://arxiv.org/abs/1506.03414


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

John Forbes Nash Jr. (1928–2015)


In the fall of 1949, many graduate students at Princeton University were assigned rooms in the Graduate College. In one suite, John Nash inhabited a single room, while I shared the double with Lloyd Shapley. John and Lloyd were the mathematicians and I was the economist, and together we pursued our interest in game theory. John was one of the youngest students at the Graduate College. He was from West Virginia, where his father was an engineer and his mother a Latin teacher. He graduated from the Carnegie Institute of Technology with bachelor's and master's degrees in mathematics, and arrived at the math department in Princeton in 1948.

 

John Forbes Nash Jr. (1928–2015)
Martin Shubik

Science 19 June 2015:
Vol. 348 no. 6241 p. 1324
http://dx.doi.org/10.1126/science.aac7085


Via Complexity Digest
Marcelo Errera's curator insight, June 23, 10:46 PM

His legacy went beyond Math. I wonder if game theory leads to configurations (organizations) that facilitate the flow.

Configurations would be otherwise unbalanced, unstable and likely to be surpassed.

Rescooped by Nuno Edgar Fernandes from Papers

The non-linear health consequences of living in larger cities

Urbanization promotes economy, mobility, access and availability of resources, but on the other hand, generates higher levels of pollution, violence, crime, and mental distress. The health consequences of the agglomeration of people living close together are not fully understood. Particularly, it remains unclear how variations in the population size across cities impact the health of the population. We analyze the deviations from linearity of the scaling of several health-related quantities, such as the incidence and mortality of diseases, external causes of death, wellbeing, and health-care availability, with respect to the population size of cities in Brazil, Sweden and the USA. We find that deaths by non-communicable diseases tend to be relatively less common in larger cities, whereas the per-capita incidence of infectious diseases is relatively larger for increasing population size. Healthier life style and availability of medical support are disproportionally higher in larger cities. The results are connected with the optimization of human and physical resources, and with the non-linear effects of social networks in larger populations. An urban advantage in terms of health is not evident and using rates as indicators to compare cities with different population sizes may be insufficient.
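Deviations from linearity in such studies are usually assessed against a fitted scaling exponent beta in y ≈ a·N^beta, estimated by ordinary least squares on log-log axes (beta > 1 superlinear, beta < 1 sublinear). A minimal sketch on invented, noiseless city data:

```python
import math

def scaling_exponent(pops, ys):
    """OLS slope of log y on log N: the exponent beta in y ~ a * N**beta."""
    lx = [math.log(p) for p in pops]
    ly = [math.log(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((x - mx) * (y - my) for x, y in zip(lx, ly))
    den = sum((x - mx) ** 2 for x in lx)
    return num / den

# Hypothetical city data generated with superlinear scaling beta = 1.15:
pops = [1e4, 1e5, 1e6, 1e7]
cases = [2.0 * p ** 1.15 for p in pops]
print(round(scaling_exponent(pops, cases), 2))  # 1.15
```

The paper's point is precisely that per-capita rates (implicitly assuming beta = 1) can mislead when the fitted beta deviates from 1.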

 

The non-linear health consequences of living in larger cities
Luis E. C. Rocha, Anna E. Thorson, Renaud Lambiotte

http://arxiv.org/abs/1506.02735


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

Exposure to ideologically diverse news and opinion on Facebook

Exposure to news, opinion, and civic information increasingly occurs through social media. How do these online networks influence exposure to perspectives that cut across ideological lines? Using deidentified data, we examined how 10.1 million U.S. Facebook users interact with socially shared news. We directly measured ideological homophily in friend networks and examined the extent to which heterogeneous friends could potentially expose individuals to cross-cutting content. We then quantified the extent to which individuals encounter comparatively more or less diverse content while interacting via Facebook’s algorithmically ranked News Feed and further studied users’ choices to click through to ideologically discordant content. Compared with algorithmic ranking, individuals’ choices played a stronger role in limiting exposure to cross-cutting content.

 

Exposure to ideologically diverse news and opinion on Facebook
Eytan Bakshy, Solomon Messing, Lada A. Adamic

Science 5 June 2015:
Vol. 348 no. 6239 pp. 1130-1132
http://dx.doi.org/10.1126/science.aaa1160


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from RJI links

21 Essential Data Visualization Tools

If you are a data lover, then you must check the 21 data visualization tools we curated for you. Most of them are free and easy to use.

Via Brian Steffens
Brian Steffens's curator insight, June 2, 6:26 PM

You're all skilled computer graphics creators, right? No, then check out this list of tools.

Rescooped by Nuno Edgar Fernandes from Papers

How random are complex networks

Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks---the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain---and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations, and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness.
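A minimal cousin of dk-series randomization is the degree-preserving double-edge swap, which fixes every node's degree (the "1k" level of the hierarchy) while randomizing everything else. A plain-Python sketch on an invented toy graph (real analyses of this kind would also constrain degree correlations and clustering, the "2k" and higher levels):

```python
import random

def double_edge_swap(edges, n_swaps, rng, max_tries=100_000):
    """Degree-preserving rewiring: pick two edges (a,b) and (c,d) and replace
    them with (a,d) and (c,b) whenever that creates no self-loop or multi-edge.
    Every node keeps its degree; the rest of the structure is randomized."""
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    done = 0
    for _ in range(max_tries):
        if done >= n_swaps:
            break
        i, j = rng.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        if rng.random() < 0.5:
            c, d = d, c                       # try both rewiring orientations
        if len({a, b, c, d}) < 4:
            continue                          # swap would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue                          # swap would create a multi-edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

def degrees(edges):
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

rng = random.Random(42)
g = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2), (1, 3)]
randomized = double_edge_swap(g, n_swaps=20, rng=rng)
print(degrees(g) == degrees(randomized))  # True: the degree sequence survives
```

Comparing properties of the real network against an ensemble of such randomized graphs is how one tests which properties are "free" consequences of the constrained structure.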

 

How random are complex networks
Chiara Orsini, Marija Mitrović Dankulov, Almerima Jamakovic, Priya Mahadevan, Pol Colomer-de-Simón, Amin Vahdat, Kevin E. Bassler, Zoltán Toroczkai, Marián Boguñá, Guido Caldarelli, Santo Fortunato, Dmitri Krioukov

http://arxiv.org/abs/1505.07503


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

Robotics: Ethics of artificial intelligence


Four leading researchers share their concerns and solutions for reducing societal risks from intelligent machines.

Stuart Russell: Take a stand on AI weapons
Sabine Hauert: Shape the debate, don't shy from it
Russ Altman: Distribute AI benefits fairly
Manuela Veloso: Embrace a robot–human world

 

http://www.nature.com/news/robotics-ethics-of-artificial-intelligence-1.17611  


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from Papers

Why Our Genome and Technology Are Both Riddled With “Crawling Horrors”


When we build complex technologies, despite our best efforts and our desire for clean logic, they often end up being far messier than we intend. They often end up kluges: inelegant solutions that work just well enough. And a reason they end up being messy—despite being designed and engineered—is because fundamentally the way they grow and evolve is often more similar to biological systems than we realize.

 

http://nautil.us/blog/why-our-genome-and-technology-are-both-riddled-with-crawling-horrors


Via Complexity Digest
Rescooped by Nuno Edgar Fernandes from The Praxis of Research

What Makes A Good Data Visualization? - Information Is Beautiful

What are the four elements necessary for a good data visualization? Information is Beautiful founder David McCandless shares his experience.

Via Antonio Figueiredo
Antonio Figueiredo's curator insight, May 19, 6:21 AM

This data visualization model stresses four elements that help scientists make their data more visual.

Rescooped by Nuno Edgar Fernandes from COMPUTATIONAL THINKING and CYBERLEARNING

This Incredible 3D Printed Robotic Lamp Follows Objects As They Move

The future of 3D printing will undoubtedly play a major role in the development of robotic devices. The custom aspects that the technology provides make it a pe

Via Suvi Salo, Bonnie Bracey Sutton
Rescooped by Nuno Edgar Fernandes from Papers

Optimal Census by Quorum Sensing

Bacteria regulate gene expression in response to changes in cell density in a process called quorum sensing. To synchronize their gene-expression programs, these bacteria need to glean as much information as possible about their cell density. Our study is the first to physically model the flow of information in a quorum-sensing microbial community, wherein the internal regulator of the individual's response tracks the external cell density via an endogenously generated shared signal. Combining information theory and Lagrangian formalism, we find that quorum-sensing systems can improve their information capabilities by tuning circuit feedbacks. Our analysis suggests that achieving an information benefit via feedback requires dedicated systems to control gene-expression noise, such as sRNA-based regulation.
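The information flow described here can be caricatured as a discrete noisy channel from external cell density to the internal regulator. The binary channel below is a toy stand-in, not the paper's Lagrangian treatment; it shows the basic point that lowering expression noise (the crossover probability) buys information about density.

```python
import math

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) in bits for a discrete input distribution and channel matrix."""
    nx, ny = len(p_x), len(p_y_given_x[0])
    p_y = [sum(p_x[x] * p_y_given_x[x][y] for x in range(nx)) for y in range(ny)]
    info = 0.0
    for x in range(nx):
        for y in range(ny):
            pxy = p_x[x] * p_y_given_x[x][y]
            if pxy > 0:
                info += pxy * math.log2(p_y_given_x[x][y] / p_y[y])
    return info

# Two cell-density states read through a binary symmetric channel whose
# crossover probability eps stands in for gene-expression noise.
p_density = [0.5, 0.5]
for eps in (0.25, 0.1, 0.01):
    channel = [[1 - eps, eps], [eps, 1 - eps]]
    print(eps, round(mutual_information(p_density, channel), 3))
```

For this symmetric toy channel the printed values follow the textbook formula 1 - H2(eps), rising toward 1 bit as noise vanishes.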

 

Optimal Census by Quorum Sensing
Thibaud Taillefumier, Ned S. Wingreen

PLoS Comput Biol 11(5): e1004238. http://dx.doi.org/10.1371/journal.pcbi.1004238


Via Complexity Digest
Pablo Vicente Munuera's curator insight, May 17, 4:15 AM

Quorum sensing is an interesting concept!