Cognitive science is the interdisciplinary scientific study of the mind and its processes. It examines what cognition is, what it does, and how it works. It includes research on intelligence and behavior, especially focusing on how information is represented, processed, and transformed (in faculties such as perception, language, memory, reasoning, and emotion) within nervous systems (human or other animal) and machines (e.g. computers). Cognitive science consists of multiple research disciplines, including psychology, artificial intelligence, philosophy, neuroscience, linguistics, and anthropology. The fundamental concept of cognitive science is "that thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures." (Wikipedia, en)
Relatively recent work has reported that networks of neurons can produce avalanches of activity whose sizes follow a power law distribution. This suggests that these networks may be operating near a critical point, poised between a phase where activity rapidly dies out and a phase where activity is amplified over time. The hypothesis that the electrical activity of neural networks in the brain is critical is potentially important, as many simulations suggest that information processing functions would be optimized at the critical point. This hypothesis, however, is still controversial. Here we will explain the concept of criticality and review the substantial objections to the criticality hypothesis raised by skeptics. Points and counterpoints are presented in dialogue form.
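The critical-point idea above can be illustrated with a toy branching process: each active neuron triggers descendants with a tunable branching ratio, and a branching ratio of 1 is the critical point separating activity that dies out from activity that is amplified. This is a minimal sketch under assumed parameters, not any specific model from the literature reviewed here; the function name and all numbers are illustrative.

```python
import random

def avalanche_size(sigma=1.0, max_size=10_000, rng=random):
    """Size of one avalanche in a simple branching process.

    Each active unit gets two chances to activate a descendant, each
    with probability sigma / 2, so the mean offspring per unit is the
    branching ratio sigma. sigma = 1 is the critical point.
    """
    active, size = 1, 1
    while active and size < max_size:
        offspring = sum(1 for _ in range(2 * active)
                        if rng.random() < sigma / 2)
        active = offspring
        size += offspring
    return size

random.seed(0)
# critical (sigma = 1): heavy-tailed avalanche sizes
sizes = [avalanche_size(sigma=1.0) for _ in range(2000)]
# subcritical (sigma = 0.5): activity dies out quickly
small = [avalanche_size(sigma=0.5) for _ in range(2000)]
```

At the critical point the empirical size distribution is heavy-tailed (approaching a power law over a growing range), while subcritical avalanches stay small; supercritical ones would hit the size cap almost every time.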
Neuroscientists frequently use a certain statistical reasoning to establish the existence of distinct neuronal processes in the brain. We show that this reasoning is flawed and that the large corresponding literature needs reconsideration. We illustrate the fallacy with a recent study that received enormous press coverage because it concluded that humans detect deceit better if they use unconscious processes instead of conscious deliberation. The study was published under a new open-data policy that enabled us to reanalyze the data with more appropriate methods. We found that unconscious performance was close to chance, as was conscious performance. This illustrates the flaws of this widely used statistical reasoning, the benefits of open-data practices, and the need for careful reconsideration of studies using the same rationale.
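The flawed reasoning can be made concrete with invented numbers. Suppose one condition scores 60/100 and another 55/100 against a 50% chance level: testing each condition against chance separately labels the first "significant" and the second not, yet a direct test of the difference between them is far from significant. The accuracies and the normal-approximation z-tests below are illustrative assumptions, not the data or the reanalysis methods of the study discussed.

```python
import math

def binom_p_vs_chance(k, n, p0=0.5):
    """Two-sided normal-approximation p-value for k successes out of n
    against chance level p0."""
    z = (k / n - p0) / math.sqrt(p0 * (1 - p0) / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def two_prop_p(k1, n1, k2, n2):
    """p-value for the DIFFERENCE between two proportions
    (pooled two-proportion z-test) -- the test that actually matters."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    z = (p1 - p2) / math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

p_a = binom_p_vs_chance(60, 100)       # "significant" vs chance
p_b = binom_p_vs_chance(55, 100)       # not significant vs chance
p_diff = two_prop_p(60, 100, 55, 100)  # but the conditions don't differ
```

The fallacy is concluding that the conditions differ because one p-value crosses 0.05 and the other does not; only `p_diff` addresses that question, and here it is far above 0.05.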
In 1953, at the dawn of modern computing, Nils Aall Barricelli played God. Clutching a deck of playing cards in one hand and a stack
of punched cards in the other, Barricelli hovered over one of the world’s earliest and most influential computers, the IAS machine, at the Institute for Advanced Study in Princeton, New Jersey. During the day the computer was used to make weather forecasting calculations; at night it was commandeered by the Los Alamos group to calculate ballistics for nuclear weaponry. Barricelli, a maverick mathematician, part Italian and part Norwegian, had finagled time on the computer to model the origins and evolution of life.
Models of neural networks have proven their utility in the development of learning algorithms in computer science and in the theoretical study of brain dynamics in computational neuroscience. We propose in this paper a spatial neural network model to analyze the important class of functional networks, which are commonly employed in computational studies of clinical brain imaging time series. We developed a simulation framework inspired by multichannel brain surface recordings (more specifically, EEG -- electroencephalogram) in order to link the mesoscopic network dynamics (represented by sampled functional networks) and the microscopic network structure (represented by an integrate-and-fire neural network located in a 3D space -- hence the term spatial neural network). Functional networks are obtained by computing pairwise correlations between time series of mesoscopic electric potential dynamics, which allows the construction of a graph where each node represents one time series. The spatial neural network model is central in this study in the sense that it allowed us to characterize sampled functional networks in terms of what features they are able to reproduce from the underlying spatial network. Our modeling approach shows that, in specific conditions of sample size and edge density, it is possible to precisely estimate several network measurements of spatial networks by just observing functional samples.
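A minimal sketch of the functional-network construction described above: pairwise Pearson correlations between channel time series are thresholded to define the edges of a graph. The threshold value, the toy signals, and the function name are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def functional_network(signals, threshold=0.5):
    """Build a functional network from multichannel time series.

    signals: array of shape (n_channels, n_timepoints), e.g. EEG-like.
    Returns a boolean adjacency matrix: an edge links channels whose
    absolute Pearson correlation exceeds the threshold.
    """
    corr = np.corrcoef(signals)      # pairwise Pearson correlations
    adj = np.abs(corr) > threshold   # threshold into graph edges
    np.fill_diagonal(adj, False)     # no self-loops
    return adj

rng = np.random.default_rng(0)
common = rng.standard_normal(500)                 # shared driving signal
signals = np.vstack([
    common + 0.3 * rng.standard_normal(500),      # channels 0-1: coupled
    common + 0.3 * rng.standard_normal(500),
    rng.standard_normal(500),                     # channel 2: independent
])
adj = functional_network(signals, threshold=0.5)
```

In this toy data the two channels driven by the shared signal end up connected, while the independent channel stays isolated; network measures (degree, density, clustering) would then be computed on `adj`.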
Social dilemmas are central to human society. Depletion of natural resources, climate protection, security of energy supply, and workplace collaborations are all issues that give rise to social dilemmas. Since cooperative behaviour in a social dilemma is individually costly, Nash equilibrium predicts that humans should not cooperate. Yet experimental studies show that people do cooperate even in anonymous one-shot situations. However, in spite of the large number of participants in many modern social dilemmas, little is known about the effect of group size on cooperation. Does larger group size favour or prevent cooperation? We address this problem both experimentally and theoretically. Experimentally, we have found that there is no general answer: it depends on the strategic situation. Specifically, we have conducted two experiments, one on a one-shot Public Goods Game and one on a one-shot N-person Prisoner's Dilemma. We have found that larger group size favours the emergence of cooperation in the Public Goods Game, but prevents it in the Prisoner's Dilemma. On the theoretical side, we have shown that this behaviour is not consistent with either the Fehr and Schmidt model or (a one-parameter version of) the Charness and Rabin model. Looking for models explaining our findings, we have extended the cooperative equilibrium model from two-player social dilemmas to some N-person social dilemmas and we have shown that it indeed predicts the above-mentioned regularities. Since the cooperative equilibrium is parameter-free, we have also made a direct comparison between its predictions and experimental data. We have found that the predictions are neither strikingly close nor dramatically far from the experimental data.
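The two games can be sketched with textbook payoff functions (an assumed parametrization, not necessarily the one used in the experiments). In both, defection dominates individually even though universal cooperation is socially better, which is what makes them social dilemmas.

```python
def pgg_payoff(cooperate, k, n, endowment=10, multiplier=2.0):
    """Public Goods Game payoff. k = total number of contributors
    (including you, if you cooperate); the pooled contributions are
    multiplied and shared equally among all n players."""
    share = multiplier * k * endowment / n
    return share + (0 if cooperate else endowment)

def npd_payoff(cooperate, k_others, n, b=5.0, c=3.0):
    """N-person Prisoner's Dilemma payoff: each cooperator pays cost c
    and confers benefit b/(n-1) on every other player; k_others is the
    number of OTHER players who cooperate."""
    received = b * k_others / (n - 1)
    return received - (c if cooperate else 0)

# In a 4-player group, defecting while the others cooperate pays more
# than cooperating in both games, yet all-cooperate beats all-defect.
```

The group-size question in the abstract amounts to asking how these payoffs, and players' responses to them, change as `n` grows while the multiplier or benefit structure is held fixed.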
Within the Flow Machines project we are organizing a workshop on "creativity and universal features in language and music".
This workshop will gather prominent specialists in mathematical physics, linguistics, and computer science. Over 2.5 days, we will seek possible convergence between different viewpoints on these fascinating issues.
WHEN: 18-20 June 2014
WHERE: Central Tower, Last Floor, Université Pierre et Marie Curie, 4 place Jussieu, 75252 Paris
This paper describes some biologically inspired processes that could be used to build the sort of networks that we associate with the human brain. New to this paper, a 'refined' neuron is proposed: a group of neurons that, by joining together, can produce a more analogue system, but with the same level of control and reliability that a binary neuron would have. With this new structure, it becomes possible to think of an essentially binary system in terms of a more variable set of values. The paper also shows how recent research can be combined with established theories to produce a more complete picture. The propositions are largely in line with conventional thinking, but possibly with one or two more radical suggestions. An earlier cognitive model can be filled in with more specific details, based on the new research results, where the components appear to fit together almost seamlessly. The intention of the research has been to describe plausible 'mechanical' processes that can produce the appropriate brain structures and mechanisms, but without the magical 'intelligence' part that is still not fully understood. There are also some important updates from an earlier version of this paper.
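One way to picture the 'refined' neuron is as a pool of binary units with staggered thresholds: each unit remains strictly binary, but the fraction of the pool that fires gives a graded output. This is a speculative sketch of the idea as described, with an assumed pooling scheme and illustrative parameters, not the paper's actual construction.

```python
def refined_neuron(inputs, weights, pool_size=8, threshold=0.0):
    """Graded output from a pool of binary units.

    All units in the pool see the same weighted input 'drive', but each
    has a slightly different threshold; the fraction of units that fire
    is an analogue value in [0, 1], while every unit stays binary.
    """
    drive = sum(w * x for w, x in zip(weights, inputs))
    # thresholds staggered evenly around the base threshold
    fires = [drive > threshold + (k - pool_size / 2) / pool_size
             for k in range(pool_size)]
    return sum(fires) / pool_size
```

A strongly positive drive fires the whole pool (output 1.0), a strongly negative one fires none (0.0), and intermediate drives produce intermediate fractions, giving the variable set of values the abstract alludes to.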
It is not just a manner of speaking: “Mind reading,” or working out what others are thinking and feeling, is markedly similar to print reading. Both of these distinctly human skills recover meaning from signs, depend on dedicated cortical areas, are subject to genetically heritable disorders, show cultural variation around a universal core, and regulate how people behave. But when it comes to development, the evidence is conflicting. Some studies show that, like learning to read print, learning to read minds is a long, hard process that depends on tuition. Others indicate that even very young, nonliterate infants are already capable of mind reading. Here, we propose a resolution to this conflict. We suggest that infants are equipped with neurocognitive mechanisms that yield accurate expectations about behavior (“automatic” or “implicit” mind reading), whereas “explicit” mind reading, like literacy, is a culturally inherited skill; it is passed from one generation to the next by verbal instruction.
The science of consciousness has made great strides by focusing on the behavioral and neuronal correlates of experience. However, correlates are not enough if we are to understand even basic neurological facts; nor are they of much help in cases where we would like to know if consciousness is present: patients with a few remaining islands of functioning cortex, pre-term infants, non-mammalian species, and machines that are rapidly outperforming people at driving, recognizing faces and objects, and answering difficult questions. To address these issues, we need a theory of consciousness that specifies what experience is and what type of physical systems can have it. Integrated Information Theory (IIT) does so by starting from conscious experience via five phenomenological axioms: existence, composition, information, integration, and exclusion. From these it derives five postulates about the properties required of physical mechanisms to support consciousness. The theory provides a principled account of both the quantity and the quality of an individual experience, and a calculus to evaluate whether or not a particular system of mechanisms is conscious and of what. IIT explains a range of clinical and laboratory findings, makes testable predictions, and extrapolates to unusual conditions. The theory vindicates some panpsychist intuitions: consciousness is an intrinsic, fundamental property, is graded, is common among biological organisms, and even some very simple systems have some. However, unlike panpsychism, IIT implies that not everything is conscious; for example, groups of individuals or feed-forward networks are not. In sharp contrast with widespread functionalist beliefs, IIT implies that digital computers, even if their behavior were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.
The mammalian brain is one of the most complex objects in the known universe, governing every aspect of animal and human behavior. It is fair to say that we have very limited knowledge of how the brain operates and functions. Computational Neuroscience is a scientific discipline that attempts to understand and describe the brain in terms of mathematical modeling. This user-friendly review tries to introduce this relatively new field to mathematicians and physicists by showing examples of recent trends. It also briefly discusses future prospects for constructing an integrated theory of brain function.
In this paper we lay out an argument that generically the psychological arrow of time should align with the thermodynamic arrow of time where that arrow is well-defined. This argument applies to any physical system that can act as a memory, in the sense of preserving a record of the state of some other system. This result follows from two principles: the robustness of the thermodynamic arrow of time to small perturbations in the state, and the principle that a memory should not have to be fine-tuned to match the state of the system being recorded. This argument applies even if the memory system itself is completely reversible and non-dissipative. We make the argument with a paradigmatic system, then formulate it more broadly for any system that can be considered a memory. We illustrate these principles for a few other example systems, and compare our criteria to earlier treatments of this problem.
Individuals in groups, whether composed of humans or other animal species, often make important decisions collectively, including avoiding predators, selecting a direction in which to migrate and electing political leaders. Theoretical and empirical work suggests that collective decisions can be more accurate than individual decisions, a phenomenon known as the ‘wisdom of crowds’. In these previous studies, it has been assumed that individuals make independent estimates based on a single environmental cue. In the real world, however, most cues exhibit some spatial and temporal correlation, and consequently, the sensory information that near neighbours detect will also be, to some degree, correlated. Furthermore, it may be rare for an environment to contain only a single informative cue, with multiple cues being the norm. We demonstrate, using two simple models, that taking this natural complexity into account considerably alters the relationship between group size and decision-making accuracy. In only a minority of environments do we observe the typical wisdom of crowds phenomenon (whereby collective accuracy increases monotonically with group size). When the wisdom of crowds is not observed, we find that a finite, and often small, group size maximizes decision accuracy. We reveal that, counterintuitively, it is the noise inherent in these small groups that enhances their accuracy, allowing individuals in such groups to avoid the detrimental effects of correlated information while exploiting the benefits of collective decision-making. Our results demonstrate that the conventional view of the wisdom of crowds may not be informative in complex and realistic environments, and that being in small groups can maximize decision accuracy across many contexts.
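A toy simulation (not the authors' models) shows the basic effect: with independent private noise, majority-vote accuracy grows with group size, but adding a noise term shared by the whole group, i.e. correlated information, caps the benefit of larger groups. Signal strengths, noise levels, and group sizes below are illustrative assumptions.

```python
import random

def majority_accuracy(group_size, shared_noise=0.0, trials=3000, rng=None):
    """Probability that a majority vote identifies the true state (+1).

    Each individual observes signal = +1 + shared + private noise and
    votes for its sign. shared_noise sets the correlation between
    group members' information (0 = fully independent estimates).
    """
    rng = rng or random.Random(0)
    correct = 0
    for _ in range(trials):
        shared = rng.gauss(0, shared_noise)   # common to the whole group
        votes = sum(1 if 1 + shared + rng.gauss(0, 1) > 0 else -1
                    for _ in range(group_size))  # odd sizes avoid ties
        correct += votes > 0
    return correct / trials

acc_small_indep = majority_accuracy(3, shared_noise=0.0)
acc_large_indep = majority_accuracy(31, shared_noise=0.0)  # crowds help
acc_large_corr = majority_accuracy(31, shared_noise=2.0)   # correlation hurts
```

With independent noise, larger groups average away individual errors (the classic wisdom of crowds); with a strong shared component, everyone is misled together and extra voters add little, which is the regime where small groups can do as well or better.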
Recovery of society after a large-scale disaster generally consists of two phases: short-term and long-term recovery. The main goal of short-term recovery is to bring the damaged system back to operating standards that enable residents of damaged cities to survive, and fast supply of vital resources to them is one of its key elements. We propose a general principle by which the required redistribution of vital resources between the affected and neighbouring cities can be efficiently implemented. Short-term recovery is a rescue operation in which uncertainty about the state of the damaged region is highly probable. To accommodate such an operation, the proposed principle involves two basic components. The first, of an ethical nature, is the triage concept, which determines a city's current priority in resource delivery. The second is the minimization of delivery time subject to this priority. A resource-redistribution plan is then generated according to this principle. Several specific examples are studied numerically; they elucidate, in particular, the effects of system characteristics such as the cities' capacity limits on resource delivery, the type of initial resource allocation among the cities, the number of cities able to participate in the redistribution, and the damage level in the affected cities. To address uncertainty about the state of the damaged region, we also studied cases in which the initial communication system has crashed and the formation of a new one proceeds synchronously with the resource redistribution. The results allow us to regard the redistribution plan produced by the proposed method as semi-optimal and rather efficient, especially under uncertainty.
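A highly simplified greedy sketch of such a plan, assuming triage priorities and donor-city stocks are given: cities are served in priority order, drawing surplus from donors. The paper's method minimizes delivery time subject to priority; here the iteration order of `stocks` merely stands in for delivery distance, and all names and quantities are hypothetical.

```python
def redistribute(needs, stocks, priority):
    """Greedy redistribution plan respecting triage priority.

    needs:    {city: amount of resource required}
    stocks:   {donor city: surplus available} (mutated as it is drawn down)
    priority: {city: triage priority, higher served first}
    Returns a list of shipments (donor, city, amount).
    """
    plan = []
    for city in sorted(needs, key=lambda c: priority[c], reverse=True):
        remaining = needs[city]
        for donor, available in stocks.items():
            if remaining <= 0:
                break
            sent = min(available, remaining)
            if sent > 0:
                plan.append((donor, city, sent))
                stocks[donor] -= sent
                remaining -= sent
    return plan

plan = redistribute(needs={'A': 5, 'B': 3},
                    stocks={'X': 4, 'Y': 6},
                    priority={'A': 2, 'B': 1})
```

The higher-priority city A is fully supplied first (4 units from X, 1 from Y) before B receives anything, mirroring the triage component; a real planner would additionally order donors by delivery time and respect per-city delivery capacity limits.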
Decisions in a group often result in imitation and aggregation, which are enhanced in panic, dangerous, stressful or negative situations. Current explanations of this enhancement are restricted to particular contexts, such as anti-predatory behavior, deflection of responsibility in humans, or cases in which the negative situation is associated with an increase in uncertainty. But this effect is observed across taxa and in very diverse conditions, suggesting that it may arise from a more general cause, such as a fundamental characteristic of social decision-making. Current decision-making theories do not explain it, but we noted that they concentrate on estimating which of the available options is the best one, implicitly neglecting the cases in which several options can be good at the same time. We explore a more general model of decision-making that instead estimates the probability that each option is good, allowing several options to be good simultaneously. This model predicts with great generality the enhanced imitation in negative situations. Fish and human behavioral data showing increased imitation in negative circumstances are well described by this type of decision to choose a good option.
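The key modelling move, estimating for each option independently the probability that it is good, can be sketched with a simple Bayesian update: observing more individuals choose an option raises its posterior probability of being good, without lowering any other option's. The prior and choice likelihoods below are illustrative assumptions, not the paper's fitted model.

```python
def prob_good(n_chose, n_total, prior=0.5,
              p_choose_if_good=0.7, p_choose_if_bad=0.3):
    """Posterior probability that one option is 'good', given that
    n_chose of n_total observed individuals picked it.

    Each option is evaluated independently, so several options can be
    judged good at the same time (unlike best-option estimation).
    """
    def likelihood(p):
        # binomial likelihood up to the common combinatorial factor
        return p ** n_chose * (1 - p) ** (n_total - n_chose)
    like_good = likelihood(p_choose_if_good)
    like_bad = likelihood(p_choose_if_bad)
    return prior * like_good / (prior * like_good + (1 - prior) * like_bad)
```

With no observations the estimate stays at the prior; each additional individual seen choosing the option pushes its probability of being good upward, which is the informational core of imitation in this framework.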
The Informative Herd: why humans and other animals imitate more when conditions are adverse
Alfonso Pérez-Escudero, Gonzalo G. de Polavieja