Microsoft has made a major breakthrough in speech recognition, creating a technology that recognizes the words in a conversation as well as a person does.
In a paper published Monday, a team of researchers and engineers in Microsoft Artificial Intelligence and Research reported a speech recognition system that makes the same or fewer errors than professional transcriptionists. The researchers reported a word error rate (WER) of 5.9 percent, down from the 6.3 percent WER the team reported just last month.
The 5.9 percent error rate is about equal to that of people who were asked to transcribe the same conversation, and it’s the lowest ever recorded against the industry standard Switchboard speech recognition task.
Traffic congestion varies over space and time. Observing how network congestion forms, propagates and disperses can yield insights into network performance, bottleneck dynamics, etc. Many researchers reconstruct the congestion profile from traffic flow data, but the missing-data problem is usually bypassed: current methods either omit the missing records or fill them with averages, and either process can introduce large errors. Rather than simply discarding missing data, this research treats a data-missing event either as the result of congestion so severe that it prevents floating vehicles from entering the congested area, or as a feature of the resulting traffic flow time series in its own right. Hence a new similarity measure for traffic flow operational index time series is established as a basis for identifying the dynamic network bottleneck. The method first measures the traffic flow operational similarity between pairs of neighboring links, and then uses the similarity results to cluster the spatial-temporal congestion. To obtain similarity under missing-data conditions, the measurement is implemented in two stages: first, a so-called first-order similarity is calculated, given that the traffic flow variables are bounded both above and below; then the first-order similarities are aggregated to generate the second-order similarity as the output. We implement the method on part of a real-world road network; the results are not only consistent with empirical observation but also provide useful insights.
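The two-stage measurement can be sketched minimally as follows (the function names and formulas are illustrative assumptions, not the paper's exact definitions): a missing observation is treated as the full bounded range of the index, so the first-order similarity at each time step becomes an interval rather than a point, and the second-order similarity aggregates those intervals into a single score.

```python
def first_order_similarity(a, b, lo=0.0, hi=1.0):
    """Per-timestep similarity between two (possibly missing) index values.

    A missing value (None) is treated as the full bounded range [lo, hi],
    so the result is an interval (worst, best) rather than a point.
    """
    def as_interval(x):
        return (lo, hi) if x is None else (x, x)
    (a_lo, a_hi), (b_lo, b_hi) = as_interval(a), as_interval(b)
    span = hi - lo
    # worst case: the values sit at the most distant ends of their ranges
    worst = 1.0 - max(abs(a_hi - b_lo), abs(b_hi - a_lo)) / span
    # best case: distance between the two ranges (zero if they overlap)
    best = 1.0 - max(0.0, max(a_lo, b_lo) - min(a_hi, b_hi)) / span
    return worst, best

def second_order_similarity(series_a, series_b, lo=0.0, hi=1.0):
    """Aggregate the first-order intervals into one scalar (midpoint mean)."""
    pairs = [first_order_similarity(x, y, lo, hi)
             for x, y in zip(series_a, series_b)]
    return sum((w + b) / 2 for w, b in pairs) / len(pairs)
```

With complete data the intervals collapse to points and the measure reduces to an ordinary distance-based similarity; missing values widen the interval instead of being dropped or averaged away.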
Platforms are all the rage these days. Powered by online technologies, they are sweeping across the economic landscape, striking down companies large and small. Uber’s global assault on the taxi industry is well known. Many platforms, some household names and others laboring in obscurity, are doing the same in other sectors.
Surveying these changes, you might conclude that if your business isn’t a platform, you had better worry that one is coming your way. Everyone from automakers to plumbers should count their days as traditional businesses. And maybe you should jump on the platform bandwagon too. If it worked for Airbnb, why not you?
Based on our research into the wave of online platforms that have started in the last two decades, we don’t necessarily disagree. Traditional businesses should worry, and maybe they should think about platform strategies. But we think these conclusions are overwrought — and miss what’s really going on.
The mammalian neocortex has a repetitious, laminar structure and performs functions integral to higher cognitive processes, including sensory perception, memory, and coordinated motor output. What computations does this circuitry subserve that link these unique structural elements to their function? Potjans and Diesmann (2014) parameterized a four-layer, two cell type (i.e. excitatory and inhibitory) model of a cortical column with homogeneous populations and cell type dependent connection probabilities. We implement a version of their model using a displacement integro-partial differential equation (DiPDE) population density model. This approach, exact in the limit of large homogeneous populations, provides a fast numerical method to solve equations describing the full probability density distribution of neuronal membrane potentials. It lends itself to quickly analyzing the mean response properties of population-scale firing rate dynamics. We use this strategy to examine the input-output relationship of the Potjans and Diesmann cortical column model to understand its computational properties. When inputs are constrained to jointly and equally target excitatory and inhibitory neurons, we find a large linear regime where the effect of a multi-layer input signal can be reduced to a linear combination of component signals. One of these, a simple subtractive operation, can act as an error signal passed between hierarchical processing stages.
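The linear regime described above can be illustrated with a toy two-population (excitatory/inhibitory) linear rate model, far simpler than DiPDE and purely a sketch: because the steady state is linear in the input, the response to a multi-component input signal is the sum of the component responses, and reading out the difference between two pathways implements a subtractive operation.

```python
def steady_state(W, u):
    """Steady state of the linear rate model dr/dt = -r + W r + u,
    i.e. r = (I - W)^(-1) u, for a 2-population (E, I) system.
    The connectivity values below are illustrative, not fitted."""
    a, b = 1 - W[0][0], -W[0][1]   # first row of M = I - W
    c, d = -W[1][0], 1 - W[1][1]   # second row of M
    det = a * d - b * c
    # explicit 2x2 inverse applied to the input vector u
    return [(d * u[0] - b * u[1]) / det,
            (-c * u[0] + a * u[1]) / det]
```

Because `steady_state` is linear in `u`, `steady_state(W, u1 + u2)` equals `steady_state(W, u1) + steady_state(W, u2)` elementwise, which is the superposition property the abstract describes for jointly targeted inputs.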
Influencing social change on a broad scale is a chronically difficult problem. But what if you could identify – and then target and train at exactly the right time – those members of a population most likely to have the greatest influence on their peers?
Despite the significant improvement in network performance provided by global routing strategies, their application is still limited to small-scale networks, owing to the need to acquire global information about a network that grows and changes rapidly over time. Local routing strategies, by contrast, need far less information, though their transmission efficiency and network capacity are much lower than those of global routing strategies. In view of this, three algorithms are proposed and thoroughly investigated in this paper: a node duplication avoidance algorithm, a next-nearest-neighbor algorithm and a restrictive queue length algorithm. After applying them to typical local routing strategies, the critical generation rate of information packets R_c increases more than ten-fold and the average transmission time ⟨T⟩ decreases by 70–90 percent, both of which are key physical quantities for assessing the efficiency of routing strategies on complex networks. More importantly, in comparison with global routing strategies, the improved local routing strategies can yield better network performance under certain circumstances. This is a significant step for communication networks, because local routing strategies enjoy great superiority over global ones, not only in reduced computational expense but also in flexibility of implementation, especially for large-scale networks.
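A hedged sketch of how the three ideas might combine in one forwarding rule follows (the adjacency-dict representation, tie-breaking, and function name are illustrative assumptions, not the paper's exact algorithms): avoid revisiting nodes (duplication avoidance), look one extra hop ahead (next-nearest neighbors), and skip overloaded nodes (restrictive queue length).

```python
def next_hop(adj, queues, current, target, visited, max_queue=5):
    """One forwarding step of a local routing rule using only local data:
    the current node's neighbors, their queues, and one hop of lookahead."""
    neighbors = adj[current]
    if target in neighbors:
        return target  # deliver directly
    # next-nearest-neighbor lookahead: prefer a neighbor adjacent to target
    for n in neighbors:
        if target in adj[n] and n not in visited and queues[n] < max_queue:
            return n
    # otherwise: least-loaded neighbor not yet visited (duplication avoidance)
    candidates = [n for n in neighbors
                  if n not in visited and queues[n] < max_queue]
    if not candidates:
        return None  # all neighbors visited or overloaded: packet waits
    return min(candidates, key=lambda n: queues[n])
```

The rule consults only the current node's neighborhood, which is the property that keeps local strategies cheap and flexible on large networks.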
Transforming bricks-and-mortar shopping is a high-stakes endeavor for retailers, given that Americans still do over 90% of their shopping in physical stores. In fact, one of the latest trends in retail is the launch of physical stores by online e-commerce companies, including Amazon, Warby Parker, and Birchbox.
Dozens of startups have taken on the challenge of helping retailers bridge the gap between digital and physical commerce through features ranging from shelf-stocking robots, to augmented reality displays, to Wi-Fi based beacons that collect data on shopper behavior.
Using CB Insights data, we identified startups enhancing the in-store experience with digital tools. The startups in our list have racked up partnerships with many big name brands — including Maybelline, Lancome, Kiehl’s, Cabela’s, Foot Locker, Home Depot, Express — and department stores, from Lord & Taylor to Target.
Much of the world's data is streaming, time-series data, where anomalies give significant information in critical situations. Yet detecting anomalies in streaming data is a difficult task, requiring detectors to process data in real time and to learn while simultaneously making predictions. We present a novel anomaly detection technique based on an online sequence memory algorithm called Hierarchical Temporal Memory (HTM). We show results from a live application that detects anomalies in financial metrics in real time. We also test the algorithm on NAB, a published benchmark for real-time anomaly detection, where our algorithm achieves best-in-class results.
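HTM itself is beyond a short snippet, but the streaming constraint the abstract highlights, that a detector must score each point with the model learned so far and only then learn from it, can be sketched with a much simpler online detector (a running z-score via Welford's algorithm; purely illustrative, not HTM):

```python
class StreamingAnomalyDetector:
    """Minimal online anomaly scorer (not HTM): maintains a running mean
    and variance with Welford's algorithm and flags points far from the
    mean. Illustrates the predict-then-learn constraint of streaming data."""
    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # sum of squared deviations (Welford)
        self.threshold = threshold

    def update(self, x):
        """Score x against the model learned so far, then learn from it."""
        anomalous = False
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                anomalous = True
        # learn incrementally from the new point
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous
```

Unlike this fixed-distribution sketch, HTM models temporal sequences, which is what lets it catch anomalies in the *order* of values rather than only in their magnitude.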
Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually run into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which the inputs and outputs are binary two-dimensional (2D) images transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted into binary images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multi-layer back-propagation, the CNN was adopted to locate the nexus among variations in local binary images. Owing to computing capability originally developed for binary bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated on a power load forecasting dataset from the Global Energy Forecasting Competition 2012.
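The decimal-to-binary-image encoding step might look like the following sketch (the row-per-variable layout, bit width and min-max scaling are assumptions for illustration, not the paper's exact scheme):

```python
def encode_as_binary_image(values, bits=8, v_min=0.0, v_max=1.0):
    """Encode decimal variables as a binary 2D image (one row per variable).

    Each value is min-max scaled to an integer in [0, 2**bits - 1] and
    written out as its binary digits, so the whole input vector becomes a
    bits-wide bitmap that a CNN could consume.
    """
    max_level = 2 ** bits - 1
    image = []
    for v in values:
        level = round((v - v_min) / (v_max - v_min) * max_level)
        # most significant bit first
        row = [(level >> (bits - 1 - i)) & 1 for i in range(bits)]
        image.append(row)
    return image
```

Neighboring bit patterns change locally as the underlying value changes, which is the kind of local structure convolutional filters are built to pick up.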
Author Summary From one moment to the next, in an ever-changing world, and awash in a deluge of sensory data, the brain fluidly guides our actions throughout an astonishing variety of tasks. Processing this ongoing bombardment of information is a fundamental problem faced by its underlying neural circuits. Given that the structure of our actions, along with the organization of the environment in which they are performed, can be intuitively decomposed into sequences of simpler patterns, an encoding strategy reflecting the temporal nature of these patterns should offer an efficient approach for assembling more complex memories and behaviors. We present a model that demonstrates how activity could propagate through recurrent cortical microcircuits as a result of a learning rule based on neurobiologically plausible time courses and dynamics. The model predicts that the interaction between several learning and dynamical processes constitutes a compound mnemonic engram that can flexibly generate sequential step-wise increases of activity within neural populations.
This Spark Streaming use case is a great example of how near-real-time processing can be brought to Hadoop.
Spark Streaming is one of the most interesting components within the Apache Spark stack. With Spark Streaming, you can create data pipelines that process streamed data using the same API that you use for processing batch-loaded data. Furthermore, Spark Streaming’s “micro-batching” approach provides decent resiliency should a job fail for some reason.
In this post, I will demonstrate and walk you through some common and advanced Spark Streaming functionality via the use case of near-real-time sessionization of Website events: we’ll load stats about that activity into Apache HBase, and then populate graphs in your preferred BI tool for analysis. (Sessionization refers to the capture of all clickstream activity within the timeframe of a single visitor’s Website session.) You can find the code for this demo here.
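Before diving into the Spark code, the sessionization logic itself can be sketched in plain Python (a stand-alone illustration of the state the streaming job maintains, not the Spark API; the 30-minute inactivity timeout is an assumption):

```python
def sessionize(events, timeout=30 * 60):
    """Group clickstream events into per-visitor sessions.

    `events` is a list of (visitor_id, timestamp) pairs; a gap longer
    than `timeout` seconds starts a new session for that visitor. This
    mirrors the per-key state a Spark Streaming job would carry across
    micro-batches.
    """
    last_seen = {}   # visitor -> timestamp of their previous event
    sessions = {}    # visitor -> list of sessions (lists of timestamps)
    for visitor, ts in sorted(events, key=lambda e: e[1]):
        if visitor in last_seen and ts - last_seen[visitor] <= timeout:
            sessions[visitor][-1].append(ts)   # continue current session
        else:
            sessions.setdefault(visitor, []).append([ts])  # new session
        last_seen[visitor] = ts
    return sessions
```

In the streaming version, the `last_seen`/`sessions` state lives in Spark's stateful operators and is updated one micro-batch at a time instead of over a sorted list.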
A system like this one can be super-useful for understanding visitor behavior (whether human or machine). With some additional work, it can also be designed to contain windowing patterns for detecting possible fraud in an asynchronous manner.
At Google I/O in 2016, the company presented two browser-focused technologies: the Polymer project and Angular 2. It might be a bit hard to make sense of why the company is investing in these two overlapping, competing projects.
Angular 2 is a complete web framework that allows developers to build client-side applications that run in the browser as well as on the server with Node.js. It's a major revision of the wildly successful Angular 1.x; while it makes major changes internally, it's still recognizably the same product.
Polymer, on the other hand, is a project that aims to let developers use the latest native Web Platform features today. It's essentially an interim layer that enables technologies which currently exist only as specifications. In essence, Polymer does not introduce any new features of its own.
Pattern classification of human brain activity provides unique insight into the neural underpinnings of diverse mental states. These multivariate tools have recently been used within the field of affective neuroscience to classify distributed patterns of brain activation evoked during emotion induction procedures. Here we assess whether neural models developed to discriminate among distinct emotion categories exhibit predictive validity in the absence of exteroceptive emotional stimulation. In two experiments, we show that spontaneous fluctuations in human resting-state brain activity can be decoded into categories of experience delineating unique emotional states that exhibit spatiotemporal coherence, covary with individual differences in mood and personality traits, and predict on-line, self-reported feelings. These findings validate objective, brain-based models of emotion and show how emotional states dynamically emerge from the activity of separable neural systems.
The ability to modulate brain states using targeted stimulation is increasingly being employed to treat neurological disorders and to enhance human performance. Despite the growing interest in brain stimulation as a form of neuromodulation, much remains unknown about the network-level impact of these focal perturbations. To study the system-wide impact of regional stimulation, we employ a data-driven computational model of nonlinear brain dynamics to systematically explore the effects of targeted stimulation. Validating predictions from network control theory, we uncover the relationship between regional controllability and the focal versus global impact of stimulation, and we relate these findings to differences in the underlying network architecture. Finally, by mapping brain regions to cognitive systems, we observe that the default mode system imparts large global change despite being highly constrained by structural connectivity. This work forms an important step towards the development of personalized stimulation protocols for medical treatment or performance enhancement.
The weight with which a specific outcome feature contributes to preference quantifies a person’s ‘taste’ for that feature. However, far from being fixed personality characteristics, tastes are plastic. They tend to align, for example, with those of others even if such conformity is not rewarded. We hypothesised that people can be uncertain about their tastes. Personal tastes are therefore uncertain beliefs. People can thus learn about them by considering evidence, such as the preferences of relevant others, and then performing Bayesian updating. If a person’s choice variability reflects uncertainty, as in random-preference models, then a signature of Bayesian updating is that the degree of taste change should correlate with that person’s choice variability. Temporal discounting coefficients are an important example of a taste: a taste for patience. These coefficients quantify impulsivity, have good psychometric properties and can change upon observing others’ choices. We examined discounting preferences in a novel, large community study of 14–24 year olds. We assessed discounting behaviour, including decision variability, before and after participants observed another person’s choices. We found good evidence for taste uncertainty and for Bayesian taste updating. First, participants displayed decision variability which was better accounted for by a random-taste than by a response-noise model. Second, apparent taste shifts were well described by a Bayesian model taking into account taste uncertainty and the relevance of social information. Our findings have important neuroscientific, clinical and developmental significance.
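The proposed Bayesian taste updating can be sketched with a conjugate normal-normal update (the Gaussian forms and the variable names are illustrative assumptions, not the study's fitted model):

```python
def update_taste(prior_mean, prior_var, observed, obs_var):
    """Bayesian (normal-normal) update of an uncertain taste, e.g. a log
    discounting coefficient, after observing another person's implied taste.

    prior_var encodes the person's taste uncertainty; obs_var encodes how
    informative (relevant) the social observation is.
    """
    precision = 1 / prior_var + 1 / obs_var
    post_var = 1 / precision
    post_mean = post_var * (prior_mean / prior_var + observed / obs_var)
    return post_mean, post_var
```

The size of the shift |posterior mean − prior mean| grows with the prior variance, which is exactly the predicted correlation between a person's choice variability and their degree of taste change.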
The Sapir-Whorf hypothesis holds that our thoughts are shaped by our native language, and that speakers of different languages therefore think differently. This hypothesis is controversial in part because it appears to deny the possibility of a universal groundwork for human cognition, and in part because some findings taken to support it have not reliably replicated. We argue that considering this hypothesis through the lens of probabilistic inference has the potential to resolve both issues, at least with respect to certain prominent findings in the domain of color cognition. We explore a probabilistic model that is grounded in a presumed universal perceptual color space and in language-specific categories over that space. The model predicts that categories will most clearly affect color memory when perceptual information is uncertain. In line with earlier studies, we show that this model accounts for language-consistent biases in color reconstruction from memory in English speakers, modulated by uncertainty. We also show, to our knowledge for the first time, that such a model accounts for influential existing data on cross-language differences in color discrimination from memory, both within and across categories. We suggest that these ideas may help to clarify the debate over the Sapir-Whorf hypothesis.
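The model's central prediction can be sketched with Gaussian stand-ins (an illustration of the general cue-combination logic, not the paper's actual color-space model; all names are assumptions):

```python
def reconstruct_from_memory(trace, trace_var, category_mean, category_var):
    """Posterior mean for a remembered hue: a precision-weighted blend of
    the noisy memory trace and the language-specific category prior (both
    taken as Gaussian, a common simplification)."""
    w = (1 / trace_var) / (1 / trace_var + 1 / category_var)
    return w * trace + (1 - w) * category_mean
```

As the memory trace becomes less reliable (larger `trace_var`), the reconstruction is pulled toward the category prototype, producing the language-consistent bias precisely when perceptual information is uncertain.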
Although simple social structures are more common in animal societies, some taxa (mainly mammals) have complex, multi-level social systems, in which the levels reflect differential association. We develop a simulation model to explore the conditions under which multi-level social systems of this kind evolve. Our model focuses on the evolutionary trade-offs between foraging and social interaction, and explores the impact of alternative strategies for distributing social interaction, with fitness criteria for wellbeing, alliance formation, risk, stress and access to food resources that reward social strategies differentially. The results suggest that multi-level social structures characterised by a few strong relationships, more medium ties and large numbers of weak ties emerge only in a small part of the overall fitness landscape, namely where there are significant fitness benefits from wellbeing and alliance formation and there are high levels of social interaction. In contrast, ‘favour-the-few’ strategies are more competitive under a wide range of fitness conditions, including those producing homogeneous, single-level societies of the kind found in many birds and mammals. The simulations suggest that the development of complex, multi-level social structures of the kind found in many primates (including humans) depends on a capacity for high investment in social time, preferential social interaction strategies, high mortality risk and/or differential reproduction. These conditions are characteristic of only a few mammalian taxa.
Functional MRI (fMRI) is 25 years old, yet surprisingly its most common statistical methods have not been validated using real data. Here, we used resting-state fMRI data from 499 healthy controls to conduct 3 million task group analyses. Using this null data with different experimental designs, we estimate the incidence of significant results. In theory, we should find 5% false positives (for a significance threshold of 5%), but instead we found that the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%. These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results.
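The "in theory, 5%" baseline is easy to verify by simulation: run many group analyses on pure noise and count the significant results. The sketch below uses a simple z-test on Gaussian noise, a stand-in that satisfies its own statistical assumptions, unlike the cluster-wise fMRI inferences the paper examines, which is why it recovers the nominal rate:

```python
import random

def null_false_positive_rate(n_tests=2000, n_subjects=20,
                             alpha_z=1.96, seed=0):
    """Monte Carlo estimate of the false-positive rate on null data:
    repeatedly test whether the group mean of pure-noise 'subjects'
    differs from zero at the two-sided 5% level."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_tests):
        xs = [rng.gauss(0, 1) for _ in range(n_subjects)]
        mean = sum(xs) / n_subjects
        z = mean * n_subjects ** 0.5   # known unit variance
        if abs(z) > alpha_z:
            hits += 1
    return hits / n_tests
```

The paper's inflated rates arise when the spatial-autocorrelation assumptions behind cluster-wise inference are violated by real data; this idealized test has no such mismatch, so it lands near 5%.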
Nielsen Consumer Neuroscience, the leader in measuring non-conscious responses to deliver consumer insights, today announced the launch of an advertising research solution that will set a new standard for marketers looking to elevate their advertising creative and optimize in-market performance.
Video Ad Explorer, which was shown to predict in-market consumer sales behavior in a ground-breaking study with CBS, integrates the most comprehensive suite of neuroscience technologies. It helps brands unlock consumer insights and unravel the complexities of advertising creative development with unprecedented predictive power.
While individual neuroscience measures provide some level of prediction to in-market sales, Video Ad Explorer employs unique analyses using a rich combination of neuroscience tools for the highest level of prediction on a global scale. With analysis and feedback on a second by second basis, the results and insights can help optimize ideas and turn good advertising into great advertising.
By evaluating creative with measures from electroencephalography (EEG), core biometrics (which include skin conductance response and heart rate), facial coding, eye tracking and self-report, brands can gain unique, complementary insights into the complexity of the consumer brain. The integrated use of these tools improves the ability of ad creative to drive in-market success.
"Over the years, brands have had to settle for incomplete tools and processes for understanding creative development, but Video Ad Explorer changes that," said Dr. Carl Marci, Chief Neuroscientist for Nielsen Consumer Neuroscience. "By integrating these tools, we're providing brand teams with a full picture of their consumers' thinking and emotional response that will create greater confidence and understanding about how their creative will perform."
There is a popular belief in neuroscience that we are primarily data limited, that producing large, multimodal, and complex datasets will, enabled by data analysis algorithms, lead to fundamental insights into the way the brain processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. Here we take a simulated classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the processor. This suggests that current approaches in neuroscience may fall short of producing meaningful models of the brain.
The prefrontal cortex is centrally involved in a wide range of cognitive functions and their impairment in psychiatric disorders. Yet, the computational principles that govern the dynamics of prefrontal neural networks, and link their physiological, biochemical and anatomical properties to cognitive functions, are not well understood. Computational models can help to bridge the gap between these different levels of description, provided they are sufficiently constrained by experimental data and capable of predicting key properties of the intact cortex. Here, we present a detailed network model of the prefrontal cortex, based on a simple computationally efficient single neuron model (simpAdEx), with all parameters derived from in vitro electrophysiological and anatomical data. Without additional tuning, this model could be shown to quantitatively reproduce a wide range of measures from in vivo electrophysiological recordings, to a degree where simulated and experimentally observed activities were statistically indistinguishable. These measures include spike train statistics, membrane potential fluctuations, local field potentials, and the transmission of transient stimulus information across layers. We further demonstrate that model predictions are robust against moderate changes in key parameters, and that synaptic heterogeneity is a crucial ingredient to the quantitative reproduction of in vivo-like electrophysiological behavior. Thus, we have produced a physiologically highly valid, in a quantitative sense, yet computationally efficient PFC network model, which helped to identify key properties underlying spike time dynamics as observed in vivo, and can be harvested for in-depth investigation of the links between physiology and cognition.