Omega is an invention platform for the Internet of Things. It comes WiFi-enabled and supports popular languages such as Python and Node.js. Omega makes hardware prototyping as easy as creating and installing software apps.
With the growing amount and accessibility of data, data visualisation is becoming increasingly important. Visualised data not only represents large quantities of data coherently, it also avoids distorting what the data has to say and helps the user discern relationships in the data. According to the writers of A Tour Through the Visualization Zoo, “The goal of visualization is to aid our understanding of data by leveraging the human visual system’s highly-tuned ability to see patterns, spot trends, and identify outliers.” In general, there are two basic types of data visualisation: exploration, which helps you find the story the data is telling, and explanation, which tells that story to an audience. Both types of data visualisation must take the audience’s expectations into account. Within these two basic categories, there are many different ways data can be made visual. In this article we’ll go through the 15 most common types of data visualisation, which fall under the 2D area, temporal, multidimensional, hierarchical and network categories.
The successful applicant does not necessarily need to know their bricks from their baseplates. Instead, they will head a new research center that focuses on children’s relationships with play in education, development and learning. They will also investigate how unrestricted play can help improve a child’s experience of education.
This unusually titled position was created by the university after receiving $6.2 million (£4 million) in donations from the Lego Foundation, which aims "to make children's lives better – and communities stronger – by making sure the fundamental value of play is understood, embraced and acted upon." The Lego Foundation owns 25% of the Danish toy company Lego.
If you’re like me, you’ve probably bought a wearable that tracks your steps in hopes it will inspire you to walk more. And if you’re also like me, you learned quickly what it takes to reach your daily goal and started leaving your gadget collecting dust in a drawer.
This is not a unique scenario. According to Endeavor Partners, a market researcher, while at least 1 in 10 Americans over the age of 18 owns a tracking device like the Fitbit or Nike Fuelband, more than a third of those who get them abandon them within a few months. The reasons given include meaningless stats, poor design, and loss of interest.
Elizabeth Churchill, a specialist in user experience with a background in experimental psychology, is studying how to go beyond the data wearables gather to motivate people on a subconscious level to take charge of their health. She is coauthor of “Wellth Creation: Using Computer Science to Support Proactive Health,” published last November in IEEE’s Computer magazine, which can be downloaded from the IEEE Xplore Digital Library.
Deception is a central component of the personality 'Dark Triad' (Machiavellianism, Psychopathy and Narcissism). However, whether individuals exhibiting high scores on Dark Triad measures have a heightened deceptive ability has received little experimental attention. The present study tested whether the ability to lie effectively, and to detect lies told by others, was related to Dark Triad, Lie Acceptability, or Self-Deceptive measures of personality using an interactive group-based deception task. At a group level, lie detection accuracy was correlated with the ability to deceive others—replicating previous work. No evidence was found to suggest that Dark Triad traits confer any advantage either to deceive others, or to detect deception in others. Participants who considered lying to be more acceptable were more skilled at lying, while self-deceptive individuals were generally less credible and less confident when lying. Results are interpreted within a framework in which repeated practice results in enhanced deceptive ability.
Teaching computers how to interpret visual data is going to be essential to the development of things like self-driving cars and mood-sensing technologies. Humans assess the world around us mainly through vision, and computers are starting to do the same. Digital artist Adam Ferriss works with facial recognition software and pushes it to the extreme, creating portraits that reveal the strange beauty in a machine's search for meaning.
For his project, Ferriss took an algorithm called SURF (Speeded-Up Robust Features), which is used to identify "interesting" parts of an image, a task known as feature detection. By turning up the settings so that the feature detection is far more sensitive than necessary, the portraits become a mere backdrop for thousands of lines and circles indicating features where none are apparent.
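Ferriss hasn't published his exact pipeline, but the core trick, turning a feature detector's sensitivity up until it reports features almost everywhere, is easy to sketch. SURF itself is patented and often missing from stock library builds, so here is a minimal Harris corner detector in NumPy instead; the synthetic image, window size and thresholds are all illustrative choices of mine:

```python
import numpy as np

def harris_response(img, k=0.05):
    """Compute a Harris corner response map for a 2D grayscale image."""
    # Image gradients via finite differences (np.gradient returns
    # the axis-0 (row/y) gradient first, then axis-1 (col/x)).
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # Smooth with a simple 3x3 box filter (sum of shifted copies).
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    # Harris response: det(M) - k * trace(M)^2.
    return (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2

# Synthetic image: a bright square on a dark background (4 true corners).
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
R = harris_response(img)

strict = np.sum(R > 0.5 * R.max())    # conservative threshold: few features
loose = np.sum(R > 0.001 * R.max())   # oversensitive threshold: many more
print(strict, loose)
```

Lowering the detection threshold is exactly the "turning up the settings" move: the same image suddenly yields far more "features", most of them visually meaningless.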
Cognitive stability and flexibility are core functions in the successful pursuit of behavioral goals. While there is evidence for a common frontoparietal network underlying both functions and for a key role of dopamine in the modulation of flexible versus stable behavior, the exact neurocomputational mechanisms underlying those executive functions and their adaptation to environmental demands are still unclear. In this work we study the neurocomputational mechanisms underlying cue-based task switching (flexibility) and distractor inhibition (stability) in a paradigm specifically designed to probe both functions. We develop a physiologically plausible, explicit model of neural networks that maintain the currently active task rule in working memory and implement the decision process. We simplify the four-choice decision network to a nonlinear drift-diffusion process that we canonically derive from a generic winner-take-all network model. By fitting our model to the behavioral data of individual subjects, we can reproduce their full behavior in terms of decisions and reaction time distributions in baseline as well as distractor inhibition and switch conditions. Furthermore, we predict the individual hemodynamic response timecourse of the rule-representing network and localize it to a frontoparietal network including the inferior frontal junction area and the intraparietal sulcus, using functional magnetic resonance imaging. This refines the understanding of task-switch-related frontoparietal brain activity as reflecting attractor-like working memory representations of task rules. Finally, we estimate the subject-specific stability of the rule-representing attractor states in terms of the minimal action associated with a transition between different rule states in the phase-space of the fitted models.
This stability measure correlates with switching-specific thalamocorticostriatal activation, i.e., with a system associated with flexible working memory updating and dopaminergic modulation of cognitive flexibility. These results show that stochastic dynamical systems can implement the basic computations underlying cognitive stability and flexibility and explain neurobiological bases of individual differences.
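The authors' full four-choice network model is beyond a short snippet, but the nonlinear drift-diffusion process it reduces to can be illustrated in its simplest two-choice form: noisy evidence accumulates until it hits a decision threshold. All parameter values below are illustrative, not fitted values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_trial(drift=0.3, noise=1.0, threshold=1.0, dt=0.002, max_t=5.0):
    """One drift-diffusion trial: evidence x drifts and diffuses until it
    crosses +threshold (choice 1) or -threshold (choice 0)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t

trials = [ddm_trial() for _ in range(500)]
choices = np.array([c for c, _ in trials])
rts = np.array([t for _, t in trials])
print(choices.mean())  # most choices follow the positive drift
print(rts.mean())      # mean reaction time in seconds
```

Fitting drift and threshold per subject is what lets such models reproduce both choice proportions and full reaction-time distributions.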
Author Summary: Decision making is partly random: a person can make different decisions at different times based on the same information. The theory of probability matching says that one reason for this randomness is that people usually choose the response that they think is most likely to be correct, but they sometimes intentionally choose the response that they think is less likely to be correct. Probability matching is a theory that was developed to describe how people try to predict the outcomes of uncertain events.
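A quick worked example makes the cost of probability matching concrete. For an outcome that occurs on 70% of trials, always predicting it yields 70% accuracy, while matching the base rates yields only 58%:

```python
# With an outcome that occurs with probability p = 0.7:
p = 0.7
maximizing = max(p, 1 - p)            # always predict the likelier outcome
matching = p * p + (1 - p) * (1 - p)  # predict each outcome at its base rate
print(maximizing, matching)  # 0.7 vs 0.58: matching sacrifices accuracy
```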
During flexible behavior, multiple brain regions encode sensory inputs, the current task, and choices. It remains unclear how these signals evolve. We simultaneously recorded neuronal activity from six cortical regions [middle temporal area (MT), visual area four (V4), inferior temporal cortex (IT), lateral intraparietal area (LIP), prefrontal cortex (PFC), and frontal eye fields (FEF)] of monkeys reporting the color or motion of stimuli. After a transient bottom-up sweep, there was a top-down flow of sustained task information from frontoparietal to visual cortex. Sensory information flowed from visual to parietal and prefrontal cortex. Choice signals developed simultaneously in frontoparietal regions and travelled to FEF and sensory cortex. This suggests that flexible sensorimotor choices emerge in a frontoparietal network from the integration of opposite flows of sensory and task information.
A revolution in artificial intelligence is currently sweeping through computer science. The technique is called deep learning and it’s affecting everything from facial and voice recognition to fashion and economics.
But one area that has not yet benefitted is natural language processing—the ability to read a document and then answer questions about it. That’s partly because deep learning machines must first learn their trade from vast databases that are carefully annotated for the purpose. However, these simply do not exist in sufficient size to be useful.
Today, that changes thanks to the work of Karl Moritz Hermann at Google DeepMind in London and a few pals. These guys say the special way that the Daily Mail and CNN write online news articles allows them to be used in this way. And the sheer volume of articles available online creates, for the first time, a database that computers can use to learn from and then answer related questions about. In other words, DeepMind is using Daily Mail and CNN articles to teach computers to read.
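The construction pairs an article with its bullet-point summary, anonymises named entities, and blanks one entity out to form a cloze-style question. Here is a toy sketch of that idea; the token names and regex-based substitution are mine, not DeepMind's actual preprocessing:

```python
import re

def make_cloze(summary, entities):
    """Build an anonymised cloze question from a summary sentence:
    replace each named entity with an id token, then blank one out."""
    mapping = {e: f"@entity{i}" for i, e in enumerate(entities)}
    text = summary
    for name, token in mapping.items():
        text = re.sub(re.escape(name), token, text)
    # Blank out the first entity token to form the question.
    answer = mapping[entities[0]]
    question = text.replace(answer, "@placeholder", 1)
    return question, answer

q, a = make_cloze("Alice met Bob in Paris.", ["Alice", "Bob", "Paris"])
print(q)  # "@placeholder met @entity1 in @entity2."
print(a)  # "@entity0"
```

Anonymising the entities matters: it forces a model to answer from the article itself rather than from background knowledge about the named people and places.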
The deep learning revolution has come about largely because of two breakthroughs. The first is related to neural networks, where computer scientists have developed new techniques to train networks with many layers, a task that has been tricky because of the number of parameters that must be fine-tuned. The new techniques essentially produce “ready-made” nets that are ready to learn.
Cities and their transportation systems become increasingly complex and multimodal as they grow, and it is natural to wonder whether it is possible to quantitatively characterize the difficulty of navigating them and whether such navigation exceeds our cognitive limits. A transition between different search strategies for navigating metropolitan maps has been observed for large, complex metropolitan networks. This evidence suggests the existence of another limit, associated with cognitive overload and caused by the large amount of information to process. In this light, we analyzed the world's 15 largest metropolitan networks and estimated the information limit for determining a trip in a transportation system to be on the order of 8 bits. Similar to the "Dunbar number," which represents a limit to the size of an individual's friendship circle, our cognitive limit suggests that maps should not consist of more than about 250 connection points to be easily readable. We also show that including connections with other transportation modes dramatically increases the information needed to navigate multilayer transportation networks: in large cities such as New York, Paris, and Tokyo, more than 80% of trips exceed the 8-bit limit. Multimodal transportation systems in large cities have thus already exceeded human cognitive limits, and consequently the traditional view of navigation in cities has to be revised substantially.
Information measures and cognitive limits in multilayer navigation, by Riccardo Gallotti, Mason A. Porter, Marc Barthelemy
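The 8-bit figure and the ~250-connection threshold are two views of the same quantity: selecting one of C equally likely connection points carries log2(C) bits of information.

```python
import math

# Choosing one of C equally likely connection points carries log2(C) bits.
def info_bits(connections):
    return math.log2(connections)

print(info_bits(250))  # ~7.97 bits: about 250 connections saturate the limit
print(2 ** 8)          # 256: the connection count an 8-bit budget allows
```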
We present a model for the growth of the transportation network inside nests of the social insect subfamily Termitinae (Isoptera, Termitidae). These nests consist of large chambers (nodes) connected by tunnels (edges). The model, based on the empirical nest network description combined with pruning (edge removal) and a memory effect (preferential growth from the most recently added chambers), successfully predicts emergent nest properties (degree distribution, average path lengths and backbone link ratios). A sensitivity analysis of the pruning and memory parameters indicates that Termitinae networks favor fast internal transportation over efficient defense strategies against ant predators. Our results provide an example of how complex network organization and efficient network properties can be generated from simple building rules and local interactions, and contribute to our understanding of the mechanisms that come into play in the formation of termite networks and of biological transportation networks in general.
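As a rough illustration of how such local building rules generate a network, here is a toy growth model with a memory effect (new chambers attach to recently added ones) and pruning (a fraction of extra tunnels is removed). It is a sketch in the spirit of the paper, not the authors' calibrated model; all parameters are invented:

```python
import random

random.seed(1)

def grow_nest(n=200, memory=10, shortcut_p=0.3, prune_frac=0.5):
    """Toy growth model: chambers are nodes, tunnels are edges.
    Each new chamber attaches to one of the `memory` most recently
    added chambers; occasional shortcut tunnels are added, then a
    fraction of the shortcuts is pruned (edge removal)."""
    backbone, shortcuts = [], []
    for new in range(1, n):
        recent = list(range(max(0, new - memory), new))
        backbone.append((random.choice(recent), new))  # keeps nest connected
        if new > 1 and random.random() < shortcut_p:
            shortcuts.append((random.randrange(new - 1), new))
    random.shuffle(shortcuts)
    kept = shortcuts[int(len(shortcuts) * prune_frac):]  # prune a fraction
    return backbone + kept

edges = grow_nest()
nodes = {v for e in edges for v in e}
print(len(nodes), len(edges))
```

Tuning the memory length and pruning fraction changes path lengths and redundancy, the same trade-off (transport efficiency versus defensibility) the sensitivity analysis explores.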
The emotional state of being moved, though frequently referred to in both classical rhetoric and current language use, is far from established as a well-defined psychological construct. In a series of three studies, we investigated eliciting scenarios, emotional ingredients, appraisal patterns, feeling qualities, and the affective signature of being moved and related emotional states. The great majority of the eliciting scenarios can be assigned to significant relationship and critical life events (especially death, birth, marriage, separation, and reunion). Sadness and joy turned out to be the two preeminent emotions involved in episodes of being moved. Both the sad and the joyful variants of being moved showed a coactivation of positive and negative affect and can thus be ranked among the mixed emotions. Moreover, being moved, while featuring only low-to-mid arousal levels, was experienced as an emotional state of high intensity; this applied to responses to fictional artworks no less than to own-life and other real, but media-represented, events. The most distinctive findings regarding cognitive appraisal dimensions were very low ratings for causation of the event by oneself and for having the power to change its outcome, along with very high ratings for appraisals of compatibility with social norms and self-ideals. Putting together the characteristics identified and discussed throughout the three studies, the paper ends with a sketch of a psychological construct of being moved.
A startup called MetaMind has developed a new, improved algorithm for processing language.
Talking to a machine over the phone or through a chat window can be an infuriating experience. However, several research groups, including some at large technology companies like Facebook and Google, are making steady progress toward improving computers’ language skills by building upon recent advances in machine learning.
The latest advance in this area comes from a startup called MetaMind, which has published details of an algorithm that is more accurate than other techniques at answering questions about several lines of text that tell a story. MetaMind is developing technology designed to be capable of a range of different artificial-intelligence tasks and hopes to sell it to other companies. The startup was founded by Richard Socher, a prominent machine-learning expert who earned a PhD at Stanford.
Most brain activity is not directly evoked by specific external events. This ongoing activity is correlated across distant brain regions within large-scale networks. This correlation or functional connectivity may reflect communication across brain regions. Strength and spatial organization of functional connectivity changes dynamically over seconds to minutes. Using functional MRI, we show that these ongoing changes correlate with behavior. The connectivity state before playback of a faint sound predicted whether the participant was going to perceive the sound on that trial. Connectivity states preceding missed sounds showed weakened modular structure, in which connectivity was more random and less organized across brain regions. These findings suggest that ongoing brain connectivity dynamics contribute to explaining behavioral variability.
The subtitle of this post can be “How to plot multiple elements on interactive web maps in R“. In this experiment I will show how to include multiple elements in interactive maps created using both plotGoogleMaps and leafletR. To complete the work presented here you would need the following packages: sp, raster, plotGoogleMaps and leafletR.
I am going to use data from OpenStreetMap, which can be downloaded for free from this website: weogeo.com. In particular, I downloaded the shapefile with the stores, the one with the tourist attractions, and the polyline shapefile with all the roads in London. I will assume that you want to spend a day or two walking around London, and for this you would need the locations of some hotels and of all the Greggs in the area, for lunch. The goal is to create a web map, containing all these customized elements, that you can take with you as you walk around the city.
The growing interest in studying the social behaviours of swarming fruit flies, Drosophila melanogaster, has heightened the need for tools that provide quantitative motion data. To achieve this goal, multi-camera three-dimensional tracking technology is the key experimental gateway. We have developed a novel system for tracking hundreds of fruit flies flying in a confined cubic flight arena. In addition to the proposed tracking algorithm, this work offers contributions in three further aspects: body detection, orientation estimation, and data validation. To demonstrate the opportunities the proposed system offers for generating high-throughput quantitative motion data, we conducted experiments on five experimental configurations. We also performed quantitative analysis on the kinematics, spatial structure, and motion patterns of fruit fly swarms. We found that there exists an asymptotic distance between fruit flies in swarms as the population density increases. Further, we discovered evidence of a repulsive response when the distance between fruit flies approached this asymptotic distance. Overall, the proposed tracking system presents a powerful method for studying the flight behaviours of fruit flies in a three-dimensional environment.
For many years, interacting with artificial intelligence has been the stuff of science fiction and academic projects, but as smart systems take over more and more responsibilities, replace jobs, and become involved with complex emotionally charged decisions, figuring out how to collaborate with these systems has become a pragmatic problem that needs pragmatic solutions.
Machine learning and cognitive systems are now a major part of many products people interact with every day, but to fully exploit the potential of artificial intelligence, people need much richer ways of communicating with the systems they use. The role of designers is to figure out how to build collaborative relationships between people and machines that help smart systems enhance human creativity and agency rather than simply replacing them.
Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable “feeling of knowing” or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. 
This ability cannot be reduced to simple heuristics; rather, it seems to be a core property of the learning process.
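One simple formalisation of "confidence in a learned probability" is the spread of a Bayesian posterior: the posterior mean gives the estimate, and the posterior standard deviation shrinks as observations accumulate. A minimal sketch for a single transition probability (not the authors' full normative model, which also tracks environmental change):

```python
import math

def estimate_with_confidence(successes, failures):
    """Beta(1,1)-prior posterior over a transition probability:
    return the posterior mean and a confidence score (negative log of
    the posterior standard deviation; higher means more confident)."""
    a, b = successes + 1, failures + 1
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, -math.log(math.sqrt(var))

m_few, c_few = estimate_with_confidence(7, 3)
m_many, c_many = estimate_with_confidence(70, 30)
print(m_few, m_many)   # similar mean estimates (~0.67 vs ~0.70)
print(c_few < c_many)  # True: confidence rises with more observations
```

This captures the paper's key qualitative finding: two observers can report the same probability estimate yet hold appropriately different levels of confidence in it.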
Mapping the human brain's network of interconnections, known as the connectome, is typically done with help from computational tools because recreating interconnections between different brain regions has been challenging in the lab. Researchers at the Okinawa Institute of Science and Technology Graduate University have developed a method to recreate connections between neurons from two different brain areas in a dish.
The science of allometry, the study of the relationship between body size and shape, is more than 100 years old. It dates to the late 19th century, when anatomists became fascinated by the link between the size and strength of appendages such as arms and legs in creatures of varying size.
In recent years, various researchers have begun to think of cities as “living” entities in which activity patterns change over regular 24-hour periods and which also vary dramatically depending on city size. That’s lead to a new science of city-related allometry—how various aspects of life vary with the size of the conurbation they take place in.
Today, we get a new insight into this emerging science thanks to the work of Luis Rocha at the University of Namur in Belgium and a couple of pals who have studied the way health varies with city size. These guys have made some surprising discoveries.
It’s easy to imagine that a city is simply the sum of its parts. But economists, sociologists, and city planners have long known that many aspects of city life do not scale linearly with the size of a city.
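The nonlinearity is usually modelled as a power law, Y = a·N^beta, with beta around 1.15 commonly reported for socioeconomic outputs. A quick numerical illustration (generic values, not figures from the Rocha study):

```python
# Superlinear urban scaling: Y = a * N**beta, with beta ~ 1.15 commonly
# reported for socioeconomic outputs (illustrative coefficients).
def scaled_output(population, a=1.0, beta=1.15):
    return a * population ** beta

small = scaled_output(100_000)
big = scaled_output(1_000_000)
print(big / small)  # a 10x larger city yields ~14.1x the output, not 10x
```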
Twitter announced today that it has acquired Cambridge-based machine learning startup Whetlab in an effort to accelerate the company’s own in-house efforts in the area. Deal terms were not immediately available, but Twitter will gain both Whetlab’s technology and its small team following the acquisition, while Whetlab’s current product will be discontinued next month, the company notes.
Whetlab was developing A.I.-like technologies that would make machine learning easier for companies to implement. Its system had been designed to get a company’s internal systems off the ground automatically, and therefore, more quickly than before. This could potentially reduce the time it takes to train a new machine learning system from months to just days.
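Whetlab's system was proprietary, but the problem it attacked, automated hyperparameter tuning, can be illustrated with the simplest baseline: random search over a log-uniform configuration space. The objective function below is a made-up stand-in for a model's validation score, and all names and ranges are mine:

```python
import math
import random

random.seed(0)

def objective(lr, reg):
    """Hypothetical validation-score surface, peaked at lr=0.1, reg=0.01
    (a stand-in for actually training and evaluating a model)."""
    return math.exp(-((math.log10(lr) + 1) ** 2 + (math.log10(reg) + 2) ** 2))

def random_search(n_trials=200):
    """Sample hyperparameters log-uniformly and keep the best scorer."""
    best, best_cfg = -1.0, None
    for _ in range(n_trials):
        cfg = {"lr": 10 ** random.uniform(-4, 0),
               "reg": 10 ** random.uniform(-4, 0)}
        score = objective(**cfg)
        if score > best:
            best, best_cfg = score, cfg
    return best, best_cfg

best, cfg = random_search()
print(best, cfg)
```

Systems like Whetlab's went further by modelling the objective from past trials and choosing the next configuration deliberately, which is what cuts tuning time from months to days.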
Since Newton we have sought laws that “entail” the evolution of the system. These dreams range from reductionism, dreams of a final entailing theory, upward. In this chapter I hope to show that no laws at all entail the becoming of the biosphere. Ever new, typically unprestatable, biological functions arise, often as Darwinian preadaptations, and once they exist, they do not cause, but ENABLE an often unprestatable set of “opportunities” forming a new “adjacent possible” into which evolution flows, creating yet new adaptations that enable new adjacent possibles in an unprestatable becoming. Because we cannot prestate the variables, we can write no differential equation laws of motion for evolution, so cannot integrate those equations. Thus no laws entail evolution. Since the biosphere is part of the universe, if the above is correct, there can be no final theory that entails all that becomes in the universe. The discussion rests on the legitimacy of “functions” in biology, subsets of the causal consequences of parts of organisms. Physics cannot distinguish between causal consequences. I try to justify “functions”, whose unprestatable becoming are parts of the ever changing phase space of evolution, hence no entailing laws. “Functions” are justified in the non-ergodic universe above the level of atoms by Kantian wholes such as collectively autocatalytic sets in protocells that can sense, evaluate, and act in their worlds, yielding teleonomy and biosemiotics. Modernity is based on Newton and Darwin: these ideas may take us beyond Modernity.
It is believed that energy efficiency is an important constraint in brain evolution. As synaptic transmission dominates energy consumption, energy can be saved by ensuring that only a few synapses are active. It is therefore likely that the formation of sparse codes and sparse connectivity are fundamental objectives of synaptic plasticity. In this work we study how sparse connectivity can result from a synaptic learning rule of excitatory synapses. Information is maximised when potentiation and depression are balanced according to the mean presynaptic activity level and the resulting fraction of zero-weight synapses is around 50%. However, an imbalance towards depression increases the fraction of zero-weight synapses without significantly affecting performance. We show that imbalanced plasticity corresponds to imposing a regularising constraint on the L1-norm of the synaptic weight vector, a procedure that is well-known to induce sparseness. Imbalanced plasticity is biophysically plausible and leads to more efficient synaptic configurations than a previously suggested approach that prunes synapses after learning. Our framework gives a novel interpretation to the high fraction of silent synapses found in brain regions like the cerebellum.
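The link between imbalanced plasticity and L1 regularisation can be sketched with a soft-thresholded gradient update, the standard way an L1 penalty drives weights to exactly zero. This illustrates the general technique, not the paper's biophysical learning rule; the toy problem and all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def imbalanced_update(w, grad, lr=0.1, l1=0.05):
    """Gradient step followed by soft-thresholding: the L1 penalty acts
    like an imbalance towards depression, driving many weights to
    exactly zero (sparse connectivity)."""
    w = w - lr * grad
    return np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)

# Toy problem: recover a sparse target weight vector from noisy gradients.
target = np.zeros(100)
target[:10] = 1.0
w = rng.normal(0, 0.1, size=100)
for _ in range(500):
    grad = (w - target) + rng.normal(0, 0.01, size=100)  # noisy pull to target
    w = imbalanced_update(w, grad)

sparsity = np.mean(w == 0.0)
print(sparsity)  # a large fraction of exactly-zero ("silent") weights
```

The thresholding step never fires a weight whose pull is weaker than the penalty, so the irrelevant synapses settle at exactly zero while the informative ones survive slightly shrunken.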