Noise permeates biology on all levels, from the most basic molecular and sub-cellular processes to the dynamics of tissues, organs, organisms and populations. The functional roles of noise in biological processes can vary greatly. Along with its standard, entropy-increasing effects (producing random mutations, diversifying phenotypes in isogenic populations, limiting the information capacity of signaling relays), noise occasionally plays more surprising, constructive roles: accelerating the pace of evolution, providing a selective advantage in dynamic environments, enhancing the intracellular transport of biomolecules and increasing the information capacity of signaling pathways. This short review covers recent progress in understanding the mechanisms and effects of fluctuations in biological systems of different scales, and the basic approaches to their mathematical modeling.
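The molecular noise the review starts from is easy to see in simulation. Below is a minimal exact stochastic (Gillespie-type) simulation of a birth-death process, a standard toy model of noisy gene expression; the rate constants here are illustrative choices, not values from the review:

```python
import random

def gillespie_birth_death(k_plus=10.0, k_minus=1.0, x0=0, t_max=50.0, seed=1):
    """Exact stochastic simulation of a birth-death process:
    production at constant rate k_plus, degradation at rate k_minus * x."""
    random.seed(seed)
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_max:
        birth, death = k_plus, k_minus * x
        total = birth + death
        t += random.expovariate(total)          # waiting time to next event
        if random.random() * total < birth:
            x += 1                              # production
        else:
            x -= 1                              # degradation
        trajectory.append((t, x))
    return trajectory

traj = gillespie_birth_death()
mean_x = sum(x for _, x in traj) / len(traj)    # fluctuates around k_plus/k_minus
```

Even at steady state the copy number keeps fluctuating around the mean `k_plus / k_minus`, which is exactly the intrinsic noise a deterministic rate equation would miss.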
Dynamical ideas are beginning to have a major impact on cognitive science, from foundational debates to daily practice. In this article, I review three contrasting examples of work in this area that address the lexical and grammatical structure of language, Piaget’s classic ‘A-not-B’ error, and active categorical perception in an embodied, situated agent. From these three examples, I then attempt to articulate the major differences between dynamical approaches and more traditional symbolic and connectionist approaches. Although the three models reviewed here vary considerably in their details, they share a focus on the unfolding trajectory of a system’s state and the internal and external forces that shape this trajectory, rather than the representational content of its constituent states or the underlying physical mechanisms that instantiate the dynamics. In some work, this dynamical viewpoint is augmented with a situated and embodied perspective on cognition, forming a promising unified theoretical framework for cognitive science broadly construed.
The confluence of new approaches in recording patterns of brain connectivity and quantitative analytic tools from network science has opened new avenues toward understanding the organization and function of brain networks. Descriptive network models of brain structural and functional connectivity have made several important contributions; for example, in the mapping of putative network hubs and network communities. Building on the importance of anatomical and functional interactions, network models have provided insight into the basic structures and mechanisms that enable integrative neural processes. Network models have also been instrumental in understanding the role of structural brain networks in generating spatially and temporally organized brain activity. Despite these contributions, network models are subject to limitations in methodology and interpretation, and they face many challenges as brain connectivity data sets continue to increase in detail and complexity.
Three years ago, researchers at the secretive Google X lab in Mountain View, California, extracted some 10 million still images from YouTube videos and fed them into Google Brain — a network of 1,000 computers programmed to soak up the world much as a human toddler does. After three days of looking for recurring patterns, Google Brain decided, all on its own, that there were certain repeating categories it could identify: human faces, human bodies and … cats.
Google Brain's discovery that the Internet is full of cat videos provoked a flurry of jokes from journalists. But it was also a landmark in the resurgence of deep learning: a three-decade-old technique in which massive amounts of data and processing power help computers to crack messy problems that humans solve almost intuitively, from recognizing faces to understanding language.
Why should the future resemble the past? Well, for one thing, it always has. But that is itself an observation from the past. As the philosopher David Hume pointed out in the middle of the 18th century, we can’t use our experience in the past to argue that the future will resemble it, without descending into circular logic. What’s more, physicists remain unable to explain why certain fundamental constants of nature have the values that they do, or why those values should remain constant over time. The question is a troubling one, especially for scientists. For one thing, the scientific method of hypothesis, test, and revision would falter if the fundamental nature of reality were constantly shifting. And scientists could no longer make predictions about the future or reconstructions of the past, or rely on past experiments with complete confidence. But science also has an ace up its sleeve: Unlike philosophy, it can try to measure whether the laws of nature and the constants that parameterize those laws are changing.
We review attempts that have been made towards understanding the computational properties and mechanisms of input-driven dynamical systems like RNNs, and reservoir computing networks in particular. We provide details on methods that have been developed to give quantitative answers to these questions. Following this, we show how self-organization may be used to improve reservoirs for better performance, in some cases guided by the measures presented before. We also present a possible way to quantify task performance using an information-theoretic approach, and finally discuss promising future directions aimed at a better understanding of how these systems perform their computations and how to best guide self-organized processes for their optimization.
Guided Self-Organization of Input-Driven Recurrent Neural Networks Oliver Obst, Joschka Boedecker
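A core property behind reservoir computing is the echo state property: after enough driven steps, the reservoir state depends on the input history rather than on its initial condition. The pure-Python sketch below (network size, weight scaling and input signal are arbitrary illustrative choices, not taken from the paper) drives two different initial states with the same input and checks that they converge:

```python
import math
import random

def reservoir_step(state, u, W, W_in, leak=1.0):
    """One leaky-tanh update of a recurrent reservoir driven by scalar input u."""
    return [(1.0 - leak) * s + leak * math.tanh(
                W_in[i] * u + sum(W[i][j] * state[j] for j in range(len(state))))
            for i, s in enumerate(state)]

random.seed(0)
N = 20
# weak random recurrent weights: small enough to be contractive on average
W = [[random.uniform(-1.0, 1.0) / N for _ in range(N)] for _ in range(N)]
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]

a = [random.uniform(-1.0, 1.0) for _ in range(N)]   # two different initial states
b = [random.uniform(-1.0, 1.0) for _ in range(N)]
for t in range(200):
    u = math.sin(0.1 * t)                           # same input for both copies
    a = reservoir_step(a, u, W, W_in)
    b = reservoir_step(b, u, W, W_in)

dist = math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))   # shrinks toward 0
```

Because the two trajectories become indistinguishable, a trained readout can rely on the state encoding the recent input, which is what makes the self-organized tuning of reservoirs discussed in the paper worthwhile.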
Ramón y Cajal may have changed neuroscience forever, but he always maintained his original childhood passion for art. (RT @SmithsonianMag: The father of modern neuroscience was also a talented artist, sketching hundreds of medical illustrations.)
Linguists used to think the human brain had a specific region devoted to understanding language. But brain scans now indicate that regions controlling vision, movement, taste, smell and touch are all called into action when we think of a word, too.
I wanted to create a series of pictures representing mathematical shapes on white background, like a "tribute to mathematics" that I often use in my work. I chose the "strange attractors" for their dynamic forms and "chaotic feel".
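For readers who want to generate similar imagery, a strange attractor's point cloud takes only a few lines to compute. Here is a sketch using the Lorenz attractor with its classic parameters (the artist does not say which attractors were used, so this is just one well-known example):

```python
def lorenz_trajectory(n=10000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with simple Euler steps and
    return the visited (x, y, z) points."""
    x, y, z = 1.0, 1.0, 1.0
    pts = []
    for _ in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        pts.append((x, y, z))
    return pts

pts = lorenz_trajectory()
# plot e.g. the (x, z) projection of pts with any plotting library
# to get the familiar butterfly shape on a white background
```

The trajectory never settles or repeats, yet stays confined to a bounded region, which is exactly the "dynamic form and chaotic feel" the post describes.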
A Computable Universe: Understanding and Exploring Nature as Computation. Edited by Hector Zenil.
This volume, with a foreword by Sir Roger Penrose, discusses the foundations of computation in relation to nature.
It focuses on two main questions:
What is computation? How does nature compute? The contributors are world-renowned experts who have helped shape a cutting-edge computational understanding of the universe. They discuss computation in the world from a variety of perspectives, ranging from foundational concepts to pragmatic models to ontological conceptions and philosophical implications.
The volume provides a state-of-the-art collection of technical papers and non-technical essays, representing a field that assumes information and computation to be key in understanding and explaining the basic structure underpinning physical reality. It also includes a new edition of Konrad Zuse's “Calculating Space” (the MIT translation), and a panel discussion transcription on the topic, featuring worldwide experts in quantum mechanics, physics, cognition, computation and algorithmic complexity.
The volume is dedicated to the memory of Alan M. Turing, the inventor of universal computation, on the 100th anniversary of his birth, and is part of the Turing Centenary celebrations.
We study explosive synchronization of network-coupled oscillators. Despite recent advances, it remains unclear how robust explosive synchronization is to realistic structural and dynamical properties. Here we show that explosive synchronization can be induced simply by adding uncorrelated noise to the oscillators' frequencies, demonstrating that it is not only robust to, but moreover promoted by, this natural mechanism. We support these results numerically and analytically, presenting simulations of a real neural network as well as a self-consistency theory used to study synthetic networks.
Noise induces explosive synchronization Per Sebastian Skardal, Alex Arenas
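As a starting point for experimenting with such transitions, here is a mean-field Kuramoto simulation in plain Python. This baseline version exhibits the ordinary, continuous synchronization transition; reproducing the explosive transition would require the frequency-noise setup of the paper, and all parameters below are illustrative choices:

```python
import math
import random

def kuramoto_order(K, N=100, dt=0.05, steps=2000, seed=2):
    """Euler-integrate N globally coupled Kuramoto oscillators and
    return the final order parameter r = |mean(exp(i*theta))|."""
    random.seed(seed)
    omega = [random.gauss(0.0, 1.0) for _ in range(N)]       # natural frequencies
    theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
    for _ in range(steps):
        # mean field: r * exp(i*psi) = (1/N) * sum_j exp(i*theta_j)
        rx = sum(math.cos(t) for t in theta) / N
        ry = sum(math.sin(t) for t in theta) / N
        r, psi = math.hypot(rx, ry), math.atan2(ry, rx)
        # d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    rx = sum(math.cos(t) for t in theta) / N
    ry = sum(math.sin(t) for t in theta) / N
    return math.hypot(rx, ry)

r_strong = kuramoto_order(K=5.0)    # coupling well above threshold: r near 1
r_weak = kuramoto_order(K=0.1)      # weak coupling: incoherent, r near 0
```

Sweeping `K` up and down and plotting `r` against it is the standard way to distinguish a continuous transition from the hysteretic, explosive one the paper studies.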
We give a tutorial for the study of dynamical systems on networks, and we focus in particular on "simple" situations that are tractable analytically. We briefly motivate why examining dynamical systems on networks is interesting and important. We then give several fascinating examples and discuss some theoretical results. We also discuss dynamical systems on dynamical (i.e., time-dependent) networks, overview software implementations, and give our outlook on the field.
Dynamical Systems on Networks: A Tutorial Mason A. Porter, James P. Gleeson
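One of the simplest analytically tractable examples in this spirit is linear consensus (diffusive) dynamics: each node relaxes toward its neighbours' states, and on any connected undirected graph every node converges to the global average. The ring graph and step size below are arbitrary illustration choices:

```python
def consensus_step(x, adj, eps=0.1):
    """One step of linear consensus dynamics on an undirected graph:
    x_i <- x_i + eps * sum_{j in N(i)} (x_j - x_i)."""
    return [xi + eps * sum(x[j] - xi for j in adj[i])
            for i, xi in enumerate(x)]

# a 5-node ring; eps * max_degree < 1 keeps the update stable
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
x = [0.0, 1.0, 2.0, 3.0, 4.0]
avg = sum(x) / len(x)              # 2.0; the dynamics conserve this average
for _ in range(500):
    x = consensus_step(x, adj)
```

The convergence rate is governed by the second-smallest eigenvalue of the graph Laplacian, which is one concrete way the tutorial's theme — network structure shaping dynamics — shows up in a closed-form result.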
Neural ensembles oscillate across a broad range of frequencies and are transiently coupled or “bound” together when people attend to a stimulus, perceive, think, and act. This is a dynamic, self-assembling process, with parts of the brain engaging and disengaging in time. But how is it done? The theory of Coordination Dynamics proposes a mechanism called metastability, a subtle blend of integration and segregation. Tendencies for brain regions to express their individual autonomy and specialized functions (segregation, modularity) coexist with tendencies to couple and coordinate globally for multiple functions (integration). Although metastability has garnered increasing attention, it has yet to be demonstrated and treated within a fully spatiotemporal perspective. Here, we illustrate metastability in continuous neural and behavioral recordings, and we discuss theory and experiments at multiple scales, suggesting that metastable dynamics underlie the real-time coordination necessary for the brain’s dynamic cognitive, behavioral, and social functions.
Handedness and brain asymmetry are widely regarded as unique to humans, and associated with complementary functions such as a left-brain specialization for language and logic and a right-brain specialization for creativity and intuition. In fact, asymmetries are widespread among animals, and support the gradual evolution of asymmetrical functions such as language and tool use. Handedness and brain asymmetry are inborn and under partial genetic control, although the gene or genes responsible are not well established. Cognitive and emotional difficulties are sometimes associated with departures from the “norm” of right-handedness and left-brain language dominance, more often with the absence of these asymmetries than their reversal.
The PyCX Project aims to develop an online repository of simple, crude, yet easy-to-understand Python sample codes for dynamic complex systems simulations, including iterative maps, cellular automata, dynamical networks and agent-based models.
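In that same spirit (simple, crude, yet easy to understand), here is a minimal iterative-map example; it is illustrative only and not taken from the PyCX repository itself:

```python
# Logistic map x <- r * x * (1 - x), the classic one-line model of how
# a simple iterative map can go from a stable fixed point to chaos.
def iterate_logistic(r, x0=0.1, n=100):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

fixed = iterate_logistic(2.0)     # converges to the fixed point 1 - 1/r = 0.5
chaos = iterate_logistic(3.9)     # chaotic regime: bounded but never settling
```

Plotting `chaos` against the iteration index, or sweeping `r` to draw the bifurcation diagram, is the kind of one-screen experiment the project's sample codes are built around.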
ScienceCareers.org: Questioning the Validity of Neuroscience Results. A recent analysis of scientific studies in neuroscience, which was published online in Nature Reviews Neuroscience earlier this month, urges caution both in...
NIMH Director Thomas Insel discusses recent advances in neuroscience in a TED Talk presentation delivered at Caltech in January 2013. (Great TEDx talk by the NIMH Director on mental health and recent advances in neuroscience.)
Science 1 February 2013: Vol. 339 no. 6119 pp. 574-576 DOI: 10.1126/science.1225883
The capacity for groups to exhibit collective intelligence is an often-cited advantage of group living. Previous studies have shown that social organisms frequently benefit from pooling imperfect individual estimates. However, in principle, collective intelligence may also emerge from interactions between individuals, rather than from the enhancement of personal estimates. Here, we reveal that this emergent problem solving is the predominant mechanism by which a mobile animal group responds to complex environmental gradients. Robust collective sensing arises at the group level from individuals modulating their speed in response to local, scalar, measurements of light and through social interaction with others. This distributed sensing requires only rudimentary cognition and thus could be widespread across biological taxa, in addition to being appropriate and cost-effective for robotic agents.
Fascinating paper published in February's edition of Science. We often consider intelligence as an emergent phenomenon at the scale of individual organisms. Yet complex social systems and structures may also exhibit behavior reflecting the predispositions of their members as a whole. Perhaps we can view the dynamics of societies from this scaled perspective to better understand the issues facing our modern society.
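The speed-modulation ingredient of the paper can be sketched in a few lines: agents that take larger random steps where the light is bright, with no gradient sensing at all, accumulate in the dark. This toy deliberately omits the social interactions that the paper identifies as the key emergent component, and its arena, step sizes and light field are invented for illustration:

```python
import random

def light(x):
    """Scalar illumination on the arena [0, 1]: darkest at the centre."""
    return 2.0 * abs(x - 0.5)

def simulate_kinesis(n=200, steps=5000, seed=4):
    """Each agent takes a random-direction step whose size grows with
    the local light level; positions are clamped to the arena."""
    random.seed(seed)
    xs = [random.random() for _ in range(n)]
    for _ in range(steps):
        xs = [min(1.0, max(0.0,
                  x + random.choice((-1, 1)) * 0.05 * (0.1 + light(x))))
              for x in xs]
    return xs

xs = simulate_kinesis()
# fraction of agents that end up in the dark middle half of the arena
frac_dark = sum(1 for x in xs if abs(x - 0.5) < 0.25) / len(xs)
```

Because slow movement in dark regions acts like a low local diffusivity, agents pile up there — a kinesis that requires only a scalar, local measurement, matching the "rudimentary cognition" point in the abstract.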
Dynamical criticality has been shown to enhance information processing in dynamical systems, and there is evidence for self-organized criticality in neural networks. A plausible mechanism for such self-organization is activity-dependent synaptic plasticity. Here, we model neurons as discrete-state nodes on an adaptive network following stochastic dynamics. At a threshold connectivity, this system undergoes a dynamical phase transition at which persistent activity sets in. In a low-dimensional representation of the macroscopic dynamics, this corresponds to a transcritical bifurcation. We show analytically that adding activity-dependent rewiring rules, inspired by homeostatic plasticity, leads to the emergence of an attractive steady state at criticality and present numerical evidence for the system's evolution to such a state.
Analytical investigation of self-organized criticality in neural networks Felix Droste, Anne-Ly Do and Thilo Gross
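The logic of the result can be caricatured in a few lines. The sketch below is emphatically not the paper's adaptive-network model: it collapses the network to a single mean-connectivity variable, with activity switching on above a threshold and a homeostatic rewiring rule that removes links from active networks and grows them in quiescent ones. The point it illustrates is only that such feedback makes the critical connectivity an attractor:

```python
import random

def self_organize(K0=5.0, K_c=2.0, steps=5000, eta=0.01, seed=3):
    """Toy model: mean connectivity K evolves by activity-dependent
    rewiring. Activity is high above the threshold K_c and low below
    it, so the rewiring rule drives K toward K_c from either side."""
    random.seed(seed)
    K = K0
    for _ in range(steps):
        activity = 1.0 if K > K_c else 0.0
        activity = min(1.0, max(0.0, activity + random.gauss(0.0, 0.05)))
        K += eta * (0.5 - activity)   # homeostatic rewiring rule
    return K

K_final = self_organize()             # settles near the critical value K_c
```

Starting from a supercritical (`K0=5.0`) or subcritical (`K0=0.5`) network, the connectivity ends up hovering at the transition — the attractive critical steady state the paper establishes analytically for the full model.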
Network structure is a product of both its topology and interactions between its nodes. We explore this claim using the paradigm of distributed synchronization in a network of coupled oscillators. As the network evolves to a global steady state, nodes synchronize in stages, revealing the network's underlying community structure. Traditional models of synchronization assume that interactions between nodes are mediated by a conservative process similar to diffusion. However, social and biological processes are often nonconservative. We propose a model of synchronization in a network of oscillators coupled via nonconservative processes. We study the dynamics of synchronization of synthetic and real-world networks and show that the traditional and nonconservative models of synchronization reveal different structures within the same network.
"Network structure, topology, and dynamics in generalized models of synchronization"