Even for simple perceptual decisions, the mechanisms that the brain employs are still under debate. Although current consensus states that the brain accumulates evidence extracted from noisy sensory information, open questions remain about how this simple model relates to other perceptual phenomena such as flexibility in decisions, decision-dependent modulation of sensory gain, or confidence about a decision. We propose a novel account of how perceptual decisions are made, combining two influential formalisms into a new model. Specifically, we embed an attractor model of decision making into a probabilistic framework that treats decision making as Bayesian inference. We show that the new model can explain decision-making behaviour by fitting it to experimental data. In addition, the new model combines, for the first time, three important features: First, the model can update decisions in response to switches in the underlying stimulus. Second, the probabilistic formulation accounts for top-down effects that may explain recent experimental findings of decision-related gain modulation of sensory neurons. Finally, the model computes an explicit measure of confidence which we relate to recent experimental evidence for confidence computations in perceptual decision tasks.
The other day I was reading Miranda Mulligan's article on the subtle messages conveyed through map color choice, since we seem to associate certain attitudes with certain colors. I figured that I could at least do a dry run of testing some of the associations I've heard before: orange's association with appetite is something I heard years ago, along with the more tenuous-sounding association of purple with royalty (which sounds more like a regional cultural convention).
Matching an infrared image of a face to its visible light counterpart is a difficult task, but one that deep neural networks are now coming to grips with.
One problem with infrared surveillance videos or infrared CCTV images is that it is hard to recognize the people in them. Faces look different in the infrared and matching these images to their normal appearance is a significant unsolved challenge.
The problem is that the link between the way people look in infrared and visible light is highly nonlinear. This is particularly tricky for footage taken in the mid- and far-infrared, which tends to use passive sensors that detect emitted light rather than the reflected variety.
The chances are your large corporate is pretty elderly; companies don't generally get big enough to be deemed "corporate" without a few years under their belt. With that in mind, it's probably a good bet that your institution existed long before the internet did. Banking, retail, government, whatever it may be, your company has had to adapt to digital at some point in the last 15 years or so. Disruption on that scale to large business is incredibly rare, so much so that the digital boom seemed mor
For over a century, the neuron doctrine — which states that the neuron is the structural and functional unit of the nervous system — has provided a conceptual foundation for neuroscience. This viewpoint reflects its origins in a time when the use of single-neuron anatomical and physiological techniques was prominent. However, newer multineuronal recording methods have revealed that ensembles of neurons, rather than individual cells, can form physiological units and generate emergent functional properties and states. As a new paradigm for neuroscience, neural network models have the potential to incorporate knowledge acquired with single-neuron approaches to help us understand how emergent functional states generate behaviour, cognition and mental disease.
Social network analysis provides a broad and complex perspective on animal sociality that is widely applicable to almost any species. Recent applications demonstrate the utility of network analysis for advancing our understanding of the dynamics, selection pressures, development, and evolution of complex social systems. However, most studies of animal social networks rely primarily on a descriptive approach. To propel the field of animal social networks beyond exploratory analyses and to facilitate the integration of quantitative methods that allow for the testing of ecologically and evolutionarily relevant hypotheses, we review methodological and conceptual advances in network science, which are underutilized in studies of animal sociality. First, we highlight how the use of statistical modeling and triadic motif analysis can advance our understanding of the processes that structure networks. Second, we discuss how the consideration of temporal changes and spatial constraints can shed light on the dynamics of social networks. Third, we consider how the study of variation at multiple scales can potentially transform our understanding of the structure and function of animal networks. We direct readers to analytical tools that facilitate the adoption of these new concepts and methods. Our goal is to provide behavioral ecologists with a toolbox of current methods that can stimulate novel insights into the ecological influences and evolutionary pressures structuring networks and advance our understanding of the proximate and ultimate processes that drive animal sociality.
Studies of friendships in online social networks that involve geographic distance have so far relied on the city location provided in users' profiles. Consequently, most research on friendships has designated a user's location with city-level accuracy at best. This study analyzes a Twitter dataset because it provides the exact geographic distance between corresponding users. We start by introducing a strict definition of "friend" on Twitter (i.e., a definition of bidirectional friendship) that requires bidirectional communication. Next, we utilize geo-tagged mentions delivered by users (i.e., tweets containing "@username" anywhere in the body) to determine their locations. To produce our analysis results, we first introduce a friend-counting algorithm. Based on the observation that Twitter users are likely to post consecutive tweets from a fixed location, we also introduce a two-stage distance estimation algorithm. As the first of our main contributions, we verify that the number of friends of a particular Twitter user follows a well-known power-law distribution (i.e., a Zipf or Pareto distribution). Our study also reveals a newly discovered spatial property of friendship degree: the number of friends as a function of distance follows a double power-law (i.e., double Pareto) distribution, indicating that the probability of befriending a particular Twitter user drops sharply beyond a certain geographic distance between users, which we term the separation point. Our analysis provides concrete evidence that Twitter can be a useful platform for assigning a more accurate scalar value to the degree of friendship between two users.
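The double power-law behaviour described above can be illustrated with a small sketch: estimate the power-law exponent on either side of a candidate separation point by least-squares regression in log-log space. The separation point of 100 and the synthetic counts below are purely illustrative, not values from the paper's Twitter data.

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) against log(x): the power-law exponent."""
    lx = [math.log(v) for v in xs]
    ly = [math.log(v) for v in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic double power law with a separation point at 100 (units arbitrary):
# exponent -0.5 below it, -2.0 beyond it, matched for continuity at 100.
dists = [1, 3, 10, 30, 100, 300, 1000, 3000]
counts = [d ** -0.5 if d <= 100 else 100 ** 1.5 * d ** -2.0 for d in dists]

near_exponent = loglog_slope(dists[:5], counts[:5])  # shallow decay before the break
far_exponent = loglog_slope(dists[4:], counts[4:])   # steep decay beyond it
```

Fitting the two segments separately recovers the two exponents; on real data the separation point itself would be found by scanning candidate break points for the best joint fit.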
Silicon Valley giants such as Google and Facebook have been trying to harness artificial intelligence by training brain-inspired neural networks to better represent the real world. Digital Reasoning, a cognitive computing company based in Franklin, Tenn., recently announced that it has trained a neural network consisting of 160 billion parameters—more than 10 times larger than previous neural networks.
The Digital Reasoning neural network easily surpassed previous records held by Google’s 11.2-billion-parameter system and Lawrence Livermore National Laboratory’s 15-billion-parameter system. It also showed improved accuracy over previous neural networks on an “industry-standard dataset” consisting of 20,000 word analogies: Digital Reasoning’s model achieved an accuracy of almost 86 percent, significantly higher than Google’s previous record of just over 76 percent and Stanford University’s 75 percent.
“We are extremely proud of the results we have achieved, and the contribution we are making daily to the field of deep learning,” said Matthew Russell, chief technology officer for Digital Reasoning, in a press release.
WebAssembly, an effort to diversify language support on the Web and speed up applications, is drawing both applause and skepticism.
Frequency-dependent selection and demographic fluctuations play important roles in evolutionary and ecological processes. Under frequency-dependent selection, the average fitness of the population may increase or decrease based on interactions between individuals within the population. This should be reflected in fluctuations of the population size even in constant environments. Here, we propose a stochastic model that naturally combines these two evolutionary ingredients by assuming frequency-dependent competition between different types in an individual-based model. In contrast to previous game theoretic models, the carrying capacity of the population, and thus the population size, is determined by pairwise competition of individuals mediated by evolutionary games and demographic stochasticity. In the limit of infinite population size, the averaged stochastic dynamics is captured by deterministic competitive Lotka–Volterra equations. In small populations, demographic stochasticity may instead lead to the extinction of the entire population. Because the population size is driven by fitness in evolutionary games, a population of cooperators is less prone to go extinct than a population of defectors, whereas in the usual systems of fixed size the population would thrive regardless of its average payoff.
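For reference, the competitive Lotka–Volterra equations mentioned in the infinite-population limit take the standard form (generic notation assumed here, not taken from the paper):

```latex
\frac{\mathrm{d}x_i}{\mathrm{d}t} = x_i \Big( r_i - \sum_j \alpha_{ij}\, x_j \Big)
```

where x_i is the abundance of type i, r_i its intrinsic growth rate, and alpha_{ij} the competition coefficients, which in this model are determined by the pairwise payoffs of the underlying evolutionary game rather than fixed externally.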
Prosocial behavior is fundamental to the sustainability of society, enabling people to work in groups, to create larger and more successful social structures, and to contribute to the common welfare. However, despite the importance of altruism, science has only a limited understanding of how prosocial behaviors and selfish behaviors are represented in the brain. Additionally, individual transition between self-benefiting behavior and altruistic behavior is not well understood.
Recently, a mysterious photo appeared on Reddit showing a monstrous mutant: an iridescent, multi-headed, slug-like creature covered with melting animal faces. Soon, the image’s true origins surfaced, in the form of a blog post by a Google research team. It turned out the otherworldly picture was, in fact, inhuman. It was the product of an artificial neural network—a computer brain—built to recognize images. And it looked like it was on drugs.
Many commenters on Reddit and Hacker News noticed immediately that the images produced by the neural network were strikingly similar to what one sees on psychedelic substances such as mushrooms or LSD. “The level of resemblance with a psychotropics trip is simply fascinating,” wrote Hacker News commenter joeyspn. User henryl agreed: “I'll be the first to say it... It looks like an acid/shroom trip.”
Scientists have developed a brain-inspired computer chip that mimics the neurons inside your brain.
The chip consumes just 70 milliwatts of power and can perform 46 billion synaptic operations per second.
Since 2008, scientists from IBM have been working with DARPA's Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) programme.
They have developed the chip, or processor, called TrueNorth, which is claimed to be an efficient, scalable, and flexible non-von Neumann architecture built with contemporary silicon technology.
TrueNorth has 5.4 billion transistors and 4,096 neurosynaptic cores interconnected via an intra-chip network, integrating 1 million programmable spiking neurons and 256 million configurable synapses.
It can be tiled in two dimensions through an inter-chip communication interface and can be scaled up to a cortex-like sheet of arbitrary size.
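Taken together, the power and throughput figures quoted above imply an energy cost per synaptic operation; a back-of-the-envelope check:

```python
# Energy cost per synaptic operation, derived only from the figures
# quoted above (70 mW, 46 billion synaptic operations per second).
power_watts = 70e-3
ops_per_second = 46e9

joules_per_op = power_watts / ops_per_second
picojoules_per_op = joules_per_op * 1e12  # on the order of 1.5 pJ per operation
```

That picojoule-scale figure is what makes the architecture attractive compared with running spiking-network simulations on conventional processors.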
Division of labor is ubiquitous in biological systems, as evidenced by various forms of complex task specialization observed in both animal societies and multicellular organisms. Although clearly adaptive, the way in which division of labor first evolved remains enigmatic, as it requires the simultaneous co-occurrence of several complex traits to achieve the required degree of coordination. Recently, evolutionary swarm robotics has emerged as an excellent test bed to study the evolution of coordinated group-level behavior. Here we use this framework for the first time to study the evolutionary origin of behavioral task specialization among groups of identical robots. The scenario we study involves an advanced form of division of labor, common in insect societies and known as “task partitioning”, whereby two sets of tasks have to be carried out in sequence by different individuals. Our results show that task partitioning is favored whenever the environment has features that, when exploited, reduce switching costs and increase the net efficiency of the group, and that an optimal mix of task specialists is achieved most readily when the behavioral repertoires aimed at carrying out the different subtasks are available as pre-adapted building blocks. Nevertheless, we also show for the first time that self-organized task specialization could be evolved entirely from scratch, starting only from basic, low-level behavioral primitives, using a nature-inspired evolutionary method known as Grammatical Evolution. Remarkably, division of labor was achieved merely by selecting on overall group performance, and without providing any prior information on how the global object retrieval task was best divided into smaller subtasks. We discuss the potential of our method for engineering adaptively behaving robot swarms and interpret our results in relation to the likely path that nature took to evolve complex sociality and task specialization.
The financial crisis illustrated the need for a functional understanding of systemic risk in strongly interconnected financial structures. Dynamic processes on complex networks being intrinsically difficult to model analytically, most recent studies of this problem have relied on numerical simulations. Here we report analytical results in a network model of interbank lending based on directly relevant financial parameters, such as interest rates and leverage ratios. We obtain a closed-form formula for the “critical degree” (the number of creditors per bank below which an individual shock can propagate throughout the network), and relate failure distributions to network topologies, in particular scale-free ones. Our criterion for the onset of contagion turns out to be isomorphic to the condition for cooperation to evolve on graphs and social networks, as recently formulated in evolutionary game theory. This remarkable connection supports recent calls for a methodological rapprochement between finance and ecology.
Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth’s surface; however, in modern contagions long-range edges—for example, due to airline transportation or communication media—allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct ‘contagion maps’ that use multiple contagions on a network to map the nodes as a point cloud. By analysing the topology, geometry and dimensionality of manifold structure in such point clouds, we reveal insights to aid in the modelling, forecasting and control of spreading processes. Our approach also highlights contagion maps as a viable tool for inferring low-dimensional structure in networks.
Donald trawls Google Fonts for ten minutes. There are a few candidates, but Don’s not so confident in his font evaluation skills. He lands on Open Sans. Good ol’ faithful. He sets the actionable headline text to 48pt using the light weight in white. It’s perfectly centred over the stock photo of anonymous hands fondling an electronic device. The semi-transparent overlay assures the marketing team that the headline will always be readable. Because who knows what the headline will be next month—or what stock photo will appear underneath.
We are sometimes asked what the added value of applied neuroscience is. In the bid or preparation for projects, we sometimes hear: “why can’t we just do an eye-tracking study?” or “we are mainly interested in biometrics.” Granted, neuroscience may seem more cumbersome, and it is traditionally a rather intrusive measure, not least when using methods such as functional Magnetic Resonance Imaging (fMRI), in which people have to lie perfectly still inside a claustrophobic, extremely noisy tube, doing very repetitive tasks. Even with electroencephalography (EEG), the traditional method involves odd-looking bathing caps, not something you’d like to be wearing when you meet your neighbour strolling down the street…
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
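The core of the method described above, normalizing each layer input over the mini-batch and then re-scaling with learnable parameters, can be sketched as follows. This is a minimal NumPy sketch of the forward pass for a fully-connected layer only; the running statistics used at inference time are omitted.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Batch Normalization forward pass for a fully-connected layer.

    x: (batch, features) mini-batch of layer inputs.
    gamma, beta: learnable per-feature scale and shift parameters.
    """
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize to ~zero mean, unit variance
    return gamma * x_hat + beta            # scale and shift restore expressiveness

# Toy mini-batch drawn from a shifted, scaled distribution.
np.random.seed(0)
x = np.random.randn(64, 3) * 10 + 5
y = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
# Each feature of y now has approximately zero mean and unit variance.
```

Because the normalization statistics are recomputed for every mini-batch, each layer sees inputs with a stable distribution regardless of how the preceding layers' parameters change during training.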
Neuroscientists at Duke University have introduced a new paradigm for brain-machine interfaces that investigates the physiological properties and adaptability of brain circuits, and how the brains of two or more animals can work together to complete simple tasks.
These brain networks, or Brainets, are described in two articles to be published in the July 9, 2015, issue of Scientific Reports. In separate experiments reported in the journal, the brains of monkeys and the brains of rats are linked, allowing the animals to exchange sensory and motor information in real time to control movement or complete computations.
In one example, scientists linked the brains of rhesus macaque monkeys, who worked together to control the movements of the arm of a virtual avatar on a digital display in front of them. Each animal controlled two of three dimensions of movement for the same arm as they guided it together to touch a moving target.
You know your cellphone can distract you and that you shouldn’t be texting or surfing the Web while walking down a crowded street or driving a car. Augmented reality—in the form of Google Glass, Sony’s SmartEyeglass, or Microsoft HoloLens—may appear to solve that problem. These devices present contextual information transparently or in a way that obscures little, seemingly letting you navigate the world safely, in the same way head-up displays enable fighter pilots to maintain situational awareness.
If you like Raspberry Pis and would like to get into distributed computing and big data processing, what could be better than creating your own Raspberry Pi Hadoop cluster?
The tutorial does not assume that you have any previous knowledge of Hadoop. Hadoop is a framework for the storage and processing of large amounts of data, or “Big Data,” which is a pretty common buzzword these days. The performance of running Hadoop on a Raspberry Pi is probably terrible, but I hope to build a small, fully functional cluster to see how it works and how it performs.
In this tutorial we start with a single Raspberry Pi and then add two more once we have a working single node. We will also run some simple performance tests to compare the impact of adding more nodes to the cluster. Lastly, we try to improve and optimize Hadoop for the Raspberry Pi cluster.
This paper presents a step-by-step methodology for Twitter sentiment analysis with application to retail brands. Two approaches are tested to measure variations in public opinion about particular products and brands. The first, a lexicon-based method, uses a dictionary of words with semantic scores assigned to them to calculate the final polarity of a tweet, and incorporates part-of-speech tagging. The second, a machine learning approach, tackles the problem as a text classification task employing two supervised classifiers: Naive Bayes and Support Vector Machines. We show that combining the lexicon and machine learning approaches, by using the lexicon score as one of the features in the Naive Bayes and SVM classifications, improves the accuracy of classification by 5%.
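The hybrid step, feeding the lexicon score into the classifier as an extra feature, can be sketched as follows. The toy lexicon and its scores are illustrative only; the paper's actual dictionary, tagger, and trained classifiers are not reproduced here.

```python
# Toy sentiment lexicon; words and scores are illustrative, not the paper's.
LEXICON = {"love": 2, "great": 2, "good": 1, "bad": -1, "awful": -2, "hate": -2}

def lexicon_score(tweet):
    """Sum the semantic scores of known words; the sign gives the polarity."""
    return sum(LEXICON.get(word, 0) for word in tweet.lower().split())

def features(tweet, vocabulary):
    """Bag-of-words counts with the lexicon score appended as an extra
    feature -- the combination step described in the abstract."""
    words = tweet.lower().split()
    return [words.count(term) for term in vocabulary] + [lexicon_score(tweet)]

vocab = sorted(LEXICON)
vec = features("I love this brand but the service was bad", vocab)
# vec would feed a Naive Bayes or SVM classifier; its last entry is the
# lexicon score, so the learner can weigh it against the word counts.
```

Letting the classifier learn a weight for the lexicon score, rather than thresholding it directly, is what allows the supervised model to correct the lexicon's mistakes on ambiguous tweets.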
When remembering something from our past, we often vividly re-experience the whole episode in which it occurred. New UCL research funded by the Medical Research Council and Wellcome Trust has now revealed how this might happen in the brain.
The study, published in Nature Communications, shows that when someone tries to remember one aspect of an event, such as who they met yesterday, the representation of the entire event can be reactivated in the brain, including incidental information such as where they were and what they did.