Cognitive science is the interdisciplinary scientific study of the mind and its processes. It examines what cognition is, what it does and how it works. It includes research on intelligence and behavior, especially focusing on how information is represented, processed, and transformed (in faculties such as perception, language, memory, reasoning, and emotion) within nervous systems (human or other animal) and machines (e.g. computers). Cognitive science consists of multiple research disciplines, including psychology, artificial intelligence, philosophy, neuroscience, linguistics, and anthropology. The fundamental concept of cognitive science is "that thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."
• Literature suggests negativity bias might underlie variations in political views
• fMRI responses to disgusting images accurately predict political orientation
• Self-reports about affective images are not predictive of viewers' political views
• Single-stimulus data can reliably classify conservatives from liberals
An agent-based simulation model (ABM) is developed and implemented using Python to explore the emergence of intragenerational and intergenerational skill inequality at the societal level that results from differences in parental investment behavior at the household level during early stages of the life course. Parental behavior is modeled as optimal, heuristic-based, or norm-oriented. Skills grow according to the technology of skill formation developed in the field of economics, calibrated with empirically estimated parameters from existing research. Agents go through a simplified life course. During childhood and adolescence, skills are produced through parental investments. In adulthood, individuals find a partner, give birth to the next generation, and invest in their offspring. The number and spacing of children and the available resources are treated as exogenous factors and are varied experimentally. Simulation experiments suggest that parental decisions at the household level play a role in the emergence of inequality at the societal level. Being egalitarian or not is the most important distinction in parental investment behavior, while optimizing parents generate results similar to those of egalitarian parents. Furthermore, there is a tradeoff between equality at home and inequality at the macro-level. Changes in the environment reduce or exacerbate inequality depending on parental investment behavior. One prediction of the model on intragenerational inequality in cognitive skills was validated with the use of empirical data. The simulation can best be described as a middle-range model, informed by research on skill formation and the intrahousehold allocation of resources. It is a first step toward more complex ABMs on inequality from a life course perspective. Possible model extensions are suggested. The Overview, Design Concepts, and Details (ODD) protocol and Design of Experiments (DOE) were used to document the model and to set up the experimental design, respectively.
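The "technology of skill formation" the abstract refers to is typically modeled as a CES production function in which a child's next-period skill depends on current skill, parental investment, and parental skill. A minimal sketch of one such period is shown below; the functional form follows the economics literature the abstract cites, but all parameter values here are illustrative placeholders, not the calibrated estimates used in the actual model.

```python
def skill_next(skill, investment, parent_skill,
               delta=1.0, a=0.5, b=0.3, phi=0.4):
    """One period of a CES-style technology of skill formation.

    Parameters are illustrative: a and b weight current skill and
    parental investment, phi < 1 gives imperfect substitutability,
    and delta scales overall productivity.
    """
    ces = (a * skill ** phi
           + b * investment ** phi
           + (1 - a - b) * parent_skill ** phi)
    return delta * ces ** (1.0 / phi)

# Two hypothetical children who differ only in parental investment:
low  = skill_next(1.0, investment=0.5, parent_skill=1.0)
high = skill_next(1.0, investment=2.0, parent_skill=1.0)
```

Iterating this function over the childhood periods of many agents, with investment rules that are optimal, heuristic-based, or norm-oriented, is the kind of mechanism through which household-level behavior aggregates into societal-level skill inequality.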
This paper describes some biologically-inspired processes that could be used to build the sort of networks that we associate with the human brain. New to this paper, a 'refined' neuron is proposed: a group of neurons that, by joining together, can produce a more analogue system, but with the same level of control and reliability as a binary neuron. With this new structure, it becomes possible to think of an essentially binary system in terms of a more variable set of values. The paper also shows how recent research can be combined with established theories to produce a more complete picture. The propositions are largely in line with conventional thinking, but possibly with one or two more radical suggestions. An earlier cognitive model can be filled in with more specific details, based on the new research results, where the components appear to fit together almost seamlessly. The intention of the research has been to describe plausible 'mechanical' processes that can produce the appropriate brain structures and mechanisms, but that could be used without the magical 'intelligence' part that is still not fully understood. There are also some important updates from an earlier version of this paper.
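One reading of the 'refined' neuron idea, that a pool of all-or-nothing units can collectively behave like a graded unit, can be sketched as follows. This is an illustrative interpretation of the abstract, not the paper's actual design: the staggered thresholds and pooled-fraction output are assumptions made here for demonstration.

```python
def binary_neuron(x, threshold):
    """A simple all-or-nothing threshold unit: fires (1) or not (0)."""
    return 1 if x >= threshold else 0

def refined_neuron(x, thresholds):
    """A pool of binary units with staggered thresholds.

    The pooled output is the fraction of units that fire, so the
    group responds in graded steps while each member stays binary.
    """
    fired = sum(binary_neuron(x, t) for t in thresholds)
    return fired / len(thresholds)

# Ten units with evenly staggered thresholds:
pool = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
outputs = [refined_neuron(x, pool) for x in (0.05, 0.45, 0.95)]
```

The output rises from 0.0 through 0.4 to 0.9 as the input grows, which captures the claimed property: an essentially binary system that can be thought of in terms of a more variable set of values.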
It is not just a manner of speaking: “Mind reading,” or working out what others are thinking and feeling, is markedly similar to print reading. Both of these distinctly human skills recover meaning from signs, depend on dedicated cortical areas, are subject to genetically heritable disorders, show cultural variation around a universal core, and regulate how people behave. But when it comes to development, the evidence is conflicting. Some studies show that, like learning to read print, learning to read minds is a long, hard process that depends on tuition. Others indicate that even very young, nonliterate infants are already capable of mind reading. Here, we propose a resolution to this conflict. We suggest that infants are equipped with neurocognitive mechanisms that yield accurate expectations about behavior (“automatic” or “implicit” mind reading), whereas “explicit” mind reading, like literacy, is a culturally inherited skill; it is passed from one generation to the next by verbal instruction.
The science of consciousness has made great strides by focusing on the behavioral and neuronal correlates of experience. However, correlates are not enough if we are to understand even basic neurological facts; nor are they of much help in cases where we would like to know whether consciousness is present: patients with a few remaining islands of functioning cortex, pre-term infants, non-mammalian species, and machines that are rapidly outperforming people at driving, recognizing faces and objects, and answering difficult questions. To address these issues, we need a theory of consciousness that specifies what experience is and what type of physical systems can have it. Integrated Information Theory (IIT) does so by starting from conscious experience via five phenomenological axioms: existence, composition, information, integration, and exclusion. From these it derives five postulates about the properties required of physical mechanisms to support consciousness. The theory provides a principled account of both the quantity and the quality of an individual experience, and a calculus to evaluate whether or not a particular system of mechanisms is conscious and of what. IIT explains a range of clinical and laboratory findings, makes testable predictions, and extrapolates to unusual conditions. The theory vindicates some panpsychist intuitions: consciousness is an intrinsic, fundamental property, is graded, is common among biological organisms, and even some very simple systems have some. However, unlike panpsychism, IIT implies that not everything is conscious; groups of individuals and feed-forward networks, for example, are not. In sharp contrast with widespread functionalist beliefs, IIT implies that digital computers, even if their behavior were functionally equivalent to ours, and even if they ran faithful simulations of the human brain, would experience next to nothing.
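The core intuition behind "integration", that a whole can carry information beyond its parts taken separately, can be illustrated with a toy measure. To be clear, this is only a crude mutual-information proxy assumed here for illustration, not IIT's actual Phi calculus, which involves perturbing mechanisms and searching over partitions.

```python
import math

def mutual_information(joint):
    """Mutual information (in bits) between two binary variables,
    given their 2x2 joint distribution joint[a][b]."""
    pa = [sum(row) for row in joint]          # marginal of variable A
    pb = [sum(col) for col in zip(*joint)]    # marginal of variable B
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p = joint[a][b]
            if p > 0:
                mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

# Two units locked together (only states 00 and 11 occur): the
# whole carries 1 bit beyond what the parts reveal separately.
coupled = [[0.5, 0.0], [0.0, 0.5]]
# Two independent units: the "whole" adds nothing over the parts.
independent = [[0.25, 0.25], [0.25, 0.25]]
```

In IIT proper, this kind of irreducibility is evaluated causally and over all possible partitions of the system, which is what lets the theory assign zero integration to purely feed-forward networks.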
The mammalian brain is one of the most complex objects in the known universe, governing every aspect of animal and human behavior. It is fair to say that we have very limited knowledge of how the brain operates and functions. Computational neuroscience is a scientific discipline that attempts to understand and describe the brain in terms of mathematical modeling. This user-friendly review introduces this relatively new field to mathematicians and physicists through examples of recent trends. It also briefly discusses future prospects for constructing an integrated theory of brain function.
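As a flavor of what "describing the brain in terms of mathematical modeling" looks like in practice, the leaky integrate-and-fire neuron is the standard first model in computational neuroscience: a single differential equation, dV/dt = (-(V - V_rest) + I) / tau, integrated numerically with a reset at threshold. The sketch below uses simple Euler integration with illustrative parameter values.

```python
def simulate_lif(current, t_max=100.0, dt=0.1,
                 tau=10.0, v_rest=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron via Euler integration.

    Integrates dV/dt = (-(V - v_rest) + current) / tau and resets
    the membrane potential to rest whenever it crosses threshold.
    Returns the number of spikes fired over t_max milliseconds.
    """
    v, spikes = v_rest, 0
    for _ in range(int(t_max / dt)):
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:
            spikes += 1
            v = v_rest  # reset after a spike
    return spikes

# A subthreshold input never fires; a stronger input fires regularly.
quiet  = simulate_lif(current=0.9)
active = simulate_lif(current=2.0)
```

Even this minimal model exhibits a hallmark of real neurons, a threshold-gated input-output (f-I) curve, which is why it serves as a building block for the large network models the review surveys.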
This is the second article in a series, How we make decisions, which explores our decision-making processes. How well do we consider all factors involved in a decision, and what helps and what holds us…
During the last thirty years, education researchers have developed models for judging the comparative performance of schools, in studies of what has become known as "differential school effectiveness". A great deal of empirical research has been carried out to understand why differences between schools might emerge, with variable-based models being the preferred research tool. The use of more explanatory models, such as agent-based models (ABM), has been limited. This paper describes an ABM that addresses this topic, using data from the London Educational Authority's Junior Project. To compare the results and performance with more traditional modelling techniques, the same data are also fitted to a multilevel model (MLM), one of the preferred variable-based models used in the field. The paper reports the results of both models and compares their performance in terms of predictive and explanatory power. Although the fitted MLM outperforms the proposed ABM, the latter still offers a reasonable fit and provides a causal mechanism, absent from the MLM, to explain the identified differences in school performance. Since MLM and ABM stress different aspects, the two are compatible rather than conflicting methods.
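The quantity at the heart of the multilevel approach to school effectiveness is the share of pupil-score variance that lies between schools rather than within them (the intraclass correlation). A minimal sketch on synthetic data is shown below; real MLM software estimates this by maximum likelihood with covariates, so this crude method-of-moments decomposition is only meant to convey the idea.

```python
def intraclass_correlation(schools):
    """Share of total score variance lying *between* schools.

    `schools` is a list of lists of pupil scores (synthetic data).
    Returns between-school variance / (between + within), a crude
    version of the variance partition behind a two-level model.
    """
    means = [sum(s) / len(s) for s in schools]
    n = sum(len(s) for s in schools)
    grand = sum(sum(s) for s in schools) / n
    between = sum(len(s) * (m - grand) ** 2
                  for s, m in zip(schools, means)) / n
    within = sum((x - m) ** 2
                 for s, m in zip(schools, means) for x in s) / n
    return between / (between + within)

# Two schools with very different means: most variance is between schools.
icc = intraclass_correlation([[10, 11, 12], [20, 21, 22]])
```

An MLM stops at estimating this partition (plus covariate effects), whereas an ABM tries to generate it from explicit pupil-level and school-level mechanisms, which is the complementarity the paper argues for.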
This paper examines the phenomenon of daydreaming: spontaneously recalling or imagining personal or vicarious experiences in the past or future. The following important roles of daydreaming in human cognition are postulated: plan preparation and rehearsal, learning from failures and successes, support for processes of creativity, emotion regulation, and motivation. A computational theory of daydreaming and its implementation as the program DAYDREAMER are presented. DAYDREAMER consists of 1) a scenario generator based on relaxed planning, 2) a dynamic episodic memory of experiences used by the scenario generator, 3) a collection of personal goals and control goals which guide the scenario generator, 4) an emotion component in which daydreams initiate, and are initiated by, emotional states arising from goal outcomes, and 5) domain knowledge of interpersonal relations and common everyday occurrences. The role of emotions and control goals in daydreaming is discussed. Four control goals commonly used in guiding daydreaming are presented: rationalization, failure/success reversal, revenge, and preparation. The role of episodic memory in daydreaming is considered, including how daydreamed information is incorporated into memory and later used. An initial version of DAYDREAMER which produces several daydreams (in English) is currently running.
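The control-goal loop described in the abstract, where a stored episode is retrieved and transformed under a goal such as rationalization or failure/success reversal, can be sketched very loosely as follows. Only the control-goal names come from the paper; the data structures, the transformation strategies, and the retrieval rule here are all invented for illustration and bear no relation to DAYDREAMER's actual relaxed-planning implementation.

```python
# Hypothetical episodic memory entries (invented for this sketch).
EPISODES = [{"event": "job interview", "outcome": "failure"},
            {"event": "chess game", "outcome": "success"}]

def rationalization(ep):
    """Downplay a bad outcome to regulate emotion."""
    return f"The {ep['event']} didn't matter anyway."

def reversal(ep):
    """Replay the episode with the opposite outcome."""
    flipped = "success" if ep["outcome"] == "failure" else "failure"
    return f"Imagine the {ep['event']} ending in {flipped}."

def preparation(ep):
    """Rehearse a future repetition of the episode."""
    return f"Rehearse the next {ep['event']} step by step."

CONTROL_GOALS = {"rationalization": rationalization,
                 "reversal": reversal,
                 "preparation": preparation}

def daydream(control_goal, memory=EPISODES):
    """Retrieve an episode and transform it under a control goal."""
    episode = memory[0]  # a real scenario generator would search memory
    return CONTROL_GOALS[control_goal](episode)
```

Even this caricature shows the architecture's key move: the same stored experience yields different daydreams depending on which control goal is active.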
Relatively recent work has reported that networks of neurons can produce avalanches of activity whose sizes follow a power law distribution. This suggests that these networks may be operating near a critical point, poised between a phase where activity rapidly dies out and a phase where activity is amplified over time. The hypothesis that the electrical activity of neural networks in the brain is critical is potentially important, as many simulations suggest that information processing functions would be optimized at the critical point. This hypothesis, however, is still controversial. Here we explain the concept of criticality and review the substantial objections to the criticality hypothesis raised by skeptics. Points and counterpoints are presented in dialogue form.
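The "poised between dying out and amplification" picture corresponds to a branching process with branching ratio 1. The toy simulation below (an assumed illustration, not any specific model from the literature) shows the qualitative signature: subcritical dynamics produce small avalanches, while critical dynamics produce heavy-tailed sizes whose rare huge events inflate the mean.

```python
import random

def avalanche_size(sigma, rng, cap=10_000):
    """Size of one avalanche in a simple branching process.

    Each active unit independently activates each of 2 successors
    with probability sigma / 2, so the branching ratio (mean number
    of offspring per unit) is sigma. sigma = 1 is the critical point.
    """
    active, size = 1, 1
    while active and size < cap:
        nxt = sum(1 for _ in range(2 * active) if rng.random() < sigma / 2)
        size += nxt
        active = nxt
    return size

rng = random.Random(0)
subcritical = [avalanche_size(0.5, rng) for _ in range(2000)]
critical = [avalanche_size(1.0, rng) for _ in range(2000)]
mean_sub = sum(subcritical) / len(subcritical)
mean_crit = sum(critical) / len(critical)
```

At the critical point the size distribution approaches a power law (exponent -3/2 for this process), which is exactly the kind of evidence, and the kind of evidence the skeptics dispute, since power-law-like tails can also arise without criticality.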
Neuroscientists frequently use a certain statistical reasoning to establish the existence of distinct neuronal processes in the brain. We show that this reasoning is flawed and that the large corresponding literature needs reconsideration. We illustrate the fallacy with a recent study that received enormous press coverage because it concluded that humans detect deceit better if they use unconscious processes instead of conscious deliberations. The study was published under a new open-data policy that enabled us to reanalyze the data with more appropriate methods. We found that unconscious performance was close to chance, just like conscious performance. This illustrates the flaws of this widely used statistical reasoning, the benefits of open-data practices, and the need for careful reconsideration of studies using the same rationale.
In 1953, at the dawn of modern computing, Nils Aall Barricelli played God. Clutching a deck of playing cards in one hand and a stack of punched cards in the other, Barricelli hovered over one of the world's earliest and most influential computers, the IAS machine, at the Institute for Advanced Study in Princeton, New Jersey. During the day the computer was used to make weather forecasting calculations; at night it was commandeered by the Los Alamos group to calculate ballistics for nuclear weaponry. Barricelli, a maverick mathematician, part Italian and part Norwegian, had finagled time on the computer to model the origins and evolution of life.