We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task involving finding rewards in random 3D mazes using a visual input.
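The parallel-update idea can be sketched in a few lines. This is an illustrative Hogwild-style toy, not the paper's actual algorithm: several "actor-learner" threads apply lock-free gradient updates to one shared parameter vector, descending an invented quadratic loss.

```python
import threading
import random

params = [0.0, 0.0]          # shared weights, read and written by all workers
TARGET = [1.0, -2.0]         # toy optimum the workers descend toward
LR = 0.01

def actor_learner(steps):
    for _ in range(steps):
        i = random.randrange(len(params))
        # gradient of 0.5 * (params[i] - TARGET[i])**2 w.r.t. params[i]
        grad = params[i] - TARGET[i]
        params[i] -= LR * grad   # lock-free update to shared state

threads = [threading.Thread(target=actor_learner, args=(2000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(params)  # close to TARGET despite occasional lost updates
```

Some updates are lost to races, but as the abstract notes for the full-scale case, the many slightly stale parallel updates still converge; the asynchrony effectively decorrelates the workers' data.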
Using software to solve complex problems by analyzing data—known as algorithmic decision-making—offers incredible potential for the public and private sectors to operate more effectively, efficiently, and equitably. For example, the technology has helped streamline wait lists for life-saving organ transplants, improve policing by predicting crime hotspots, and better target charitable giving to the poorest households in rural Kenya.
Despite these benefits, skeptics argue algorithmic decision-making will be inherently exploitative, discriminatory, or simply unreliable, and thus in need of greater government oversight. But countless real-world examples of algorithms unlocking tremendous social and economic benefits indicate otherwise: algorithms can be more effective and less biased than humans when it comes to making important decisions.
Join the Center for Data Innovation for a panel discussion about how public and private sector leaders are using algorithms to make better decisions and what an increasingly data-driven world means for the future of algorithmic decision-making.
Apache Spark continues to attract attention in the big data world, where it's expected to help drive the next wave of innovation. A survey on Hadoop from big data company Syncsort showed that 70% of survey participants are most interested in Spark, higher even than MapReduce, the current adoption leader, at 55%.
Syncsort surveyed 250 IT professionals. From that group, 66% were from firms with more than $100 million in annual revenue.
A healthy interest is not a surprise. In Apache Spark's relatively short life, there's been much discussion of its ascendancy. In September, Databricks, the company behind Spark, released results from a survey showing that Spark is the most active open source project in big data, with more than 600 contributors within the past year, up from 315 in 2014. Plus, Spark is in use not just in the IT industry but also in areas such as finance, retail, advertising, education, and health care. That survey also showed that 51% of Spark users are using three or more Spark components.
This included Ole Peters, a Fellow at the London Mathematical Laboratory in the U.K., as well as an external professor at the Santa Fe Institute in New Mexico, and Murray Gell-Mann, a physicist who was awarded the 1969 Nobel Prize in physics for his contributions to the theory of elementary particles by introducing quarks, and is now a Distinguished Fellow at the Santa Fe Institute. They found it particularly curious that a field so central to how we live together as a society seems so unsure about so many of its key questions. So they asked: Might there be a foundational difficulty underlying our current economic theory? Is there some hidden assumption, possibly hundreds of years old, behind not one but many of the current scientific problems in economic theory? Such a foundational problem could have far-reaching practical consequences because economic theory informs economic policy. As they report in the journal Chaos, from AIP Publishing, the story that emerged is a fascinating example of scientific history, of how human understanding evolves, gets stuck, gets unstuck, branches, and so on.
A team of US researchers has built an energy-friendly chip that can perform powerful artificial intelligence (AI) tasks, enabling future mobile devices to implement "neural networks" modelled on the human brain.
The team from Massachusetts Institute of Technology (MIT) developed a new chip designed specifically to implement neural networks.
It is 10 times as efficient as a mobile GPU (Graphics Processing Unit) so it could enable mobile devices to run powerful AI algorithms locally rather than uploading data to the Internet for processing.
The GPU is a specialised circuit designed to accelerate the image output in a frame buffer intended for output to a display.
Modern smartphones are equipped with advanced embedded chipsets that can do many different tasks depending on their programming.
As organizations look to increase their agility, IT and lines of business need to connect faster. Companies need to adopt cloud applications more quickly and they need to be able to access and analyze all their data, whether from a legacy data warehouse, a new SaaS application, or an unstructured data source such as social media. In short, a unified integration platform has become a critical requirement for most enterprises.
According to Gartner, “unnecessarily segregated application and data integration efforts lead to counterproductive practices and escalating deployment costs.”
Don’t let your organization get caught in that trap. Whether you are evaluating what you already have or shopping for something completely new, you should measure any platform by how well it addresses the “three A’s” of integration: Anything, Anytime, Anywhere.
The new digital revolution of big data is deeply changing our capability of understanding society and forecasting the outcome of many social and economic systems. Unfortunately, information can be very heterogeneous in the importance, relevance, and surprise it conveys, severely affecting the predictive power of semantic and statistical methods. Here we show that the aggregated behavior of web users can be elicited to overcome this problem in a hard-to-predict complex system, namely the financial market. Specifically, our in-sample analysis shows that the combined use of sentiment analysis of news and browsing activity of users of Yahoo! Finance greatly helps in forecasting intra-day and daily price changes of a set of 100 highly capitalized US stocks traded in the period 2012–2013. Taken alone, sentiment analysis and browsing activity have little or no predictive power. Conversely, when considering a news signal that, for a given time interval, averages the sentiment of the clicked news weighted by the number of clicks, we show that for nearly 50% of the companies this signal Granger-causes hourly price returns. Our result indicates a “wisdom-of-the-crowd” effect that allows us to exploit users’ activity to identify and properly weight the relevant and surprising news, considerably enhancing the forecasting power of news sentiment.
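The click-weighted signal described above reduces, for each time interval, to a weighted average of item sentiments. A minimal sketch (the data are invented):

```python
def click_weighted_sentiment(items):
    """items: list of (sentiment, clicks) pairs for one time interval."""
    total_clicks = sum(c for _, c in items)
    if total_clicks == 0:
        return 0.0
    return sum(s * c for s, c in items) / total_clicks

# one interval: three news items with sentiment scores and click counts
interval = [(+0.8, 300), (-0.5, 100), (+0.1, 600)]
signal = click_weighted_sentiment(interval)
print(round(signal, 3))  # 0.25
```

Heavily clicked items dominate the signal, which is the "wisdom-of-the-crowd" weighting the abstract credits for the improved forecasting power.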
Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today’s most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing.
The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
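The step that combines a policy prior with running value estimates during tree search can be sketched with a PUCT-style selection score; the constant and the numbers below are illustrative, not the program's actual settings.

```python
import math

def puct_score(q, prior, parent_visits, visits, c=1.0):
    # exploitation term (q) plus an exploration bonus that is large for
    # moves the policy network favors (prior) but that have few visits
    return q + c * prior * math.sqrt(parent_visits) / (1 + visits)

children = {
    "a": dict(q=0.5, prior=0.2, visits=10),
    "b": dict(q=0.3, prior=0.6, visits=2),
}
parent_n = sum(ch["visits"] for ch in children.values())
best = max(children, key=lambda m: puct_score(children[m]["q"],
                                              children[m]["prior"],
                                              parent_n,
                                              children[m]["visits"]))
print(best)  # "b": high prior and few visits outweigh the lower value
```

The value network supplies the `q` estimates and the policy network the `prior` terms; Monte Carlo simulation then refines `q` as visits accumulate.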
Neural networks have seen spectacular progress during the last few years and they are now the state of the art in image recognition and automated translation. TensorFlow is a new framework released by Google for numerical computations and neural networks. In this blog post, we are going to demonstrate how to use TensorFlow and Spark together to train and apply deep learning models.
Cloudera, the global provider of the fastest, easiest, and most secure data management and analytics platform built on Apache Hadoop and the latest open source technologies, announced today that SanDisk, a global leader in flash storage, has deployed Cloudera Enterprise as an enterprise data hub to store, process, analyze, and test all of its product quality data. With Cloudera, SanDisk is for the first time incorporating end-to-end analytics and machine learning into its manufacturing operations, reducing drive errors, predicting failures, and ultimately ensuring superior reliability, quality, and performance of its products.
Discrete choice models estimated using hypothetical choices made in a survey setting (i.e., choice experiments) are widely used to estimate the importance of product attributes in order to make product design and marketing mix decisions. Choice experiments allow the researcher to estimate preferences for product features that do not yet exist in the market. However, parameters estimated from experimental data often show marked inconsistencies with those inferred from the market, reducing their usefulness in forecasting and decision making. We propose an approach for combining choice-based conjoint data with individual-level purchase data to produce estimates that are more consistent with the market. Unlike prior approaches for calibrating conjoint models so that they correctly predict aggregate market shares for a “baseline” market, the proposed approach is designed to produce parameters that are more consistent with those that can be inferred from individual-level market data.
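The workhorse model behind such choice experiments is the multinomial logit, in which the probability of choosing a product is a softmax over linear utilities built from attribute part-worths. A minimal sketch with invented attributes and part-worths:

```python
import math

def choice_probabilities(utilities):
    """Multinomial logit: softmax over product utilities."""
    exps = [math.exp(u) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]

# utility of a profile = sum of the part-worths of its attributes
partworths = {"brand_A": 0.8, "low_price": 1.2, "warranty": 0.4}
profiles = [
    ["brand_A", "low_price"],      # product 1
    ["low_price", "warranty"],     # product 2
    ["brand_A"],                   # product 3
]
utils = [sum(partworths[a] for a in p) for p in profiles]
probs = choice_probabilities(utils)
print([round(p, 3) for p in probs])
```

Calibration to market data, as proposed above, amounts to adjusting the part-worths so that the predicted probabilities line up with observed individual-level purchases rather than only with survey choices.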
The preeminence of data science was inextricably linked to the emergence of big data. Combining business savvy, analytics, and data curation, the discipline was hailed as an enterprise-wide savior from the speed and variety of big data that threatened to overwhelm the enterprise.
Numerous developments within the past several months, however, have created a different reality for big data and its future. Its technologies were refined. The self-service movement within the data sphere thrived. The result? Big data came to occupy the central place in the data landscape as critical elements of data science – preparation, analytics, and integration – became automated.
Thanks to the self-service movement’s proliferation, even the smallest of organizations can now access big data’s advantages. “There’s been a lot of discussion about self-service…and having data analysts get at the data directly,” MapR Chief Marketing Officer Jack Norris said. “But you also have to recognize, what do you mean by ‘the data,’ and what has to happen to ‘the data’ before that self-service takes place?”
In fields ranging from genomics to quantum physics, researchers are increasingly using data-intensive computing to generate new insights and discoveries. Because of the volume of data involved in this research, scientists often store, analyze, and share it in the cloud. By leveraging the nearly infinite scale and tremendous computing power available in the cloud, they also are developing novel analytics tools and conducting more open and collaborative research that is accelerating the growth of scientific knowledge.
IBM® Watson IoT Real-Time Insights enables you to perform analytics on real-time data from your IoT devices to gain insights about their health and the overall state of your operations. IBM® Watson IoT Real-Time Insights connects to the IBM Watson IoT Platform for real-time device data feeds. The incoming data is interpreted through a virtual data model that can be augmented with asset master data from an asset management system.
In addition, user-defined rules are applied to the real-time streaming data to identify conditions that need attention. The action engine lets you define automated responses to the detected conditions, such as sending an email, triggering an IFTTT recipe, executing a Node-RED workflow, or using webhooks to connect to a variety of web services. And finally, real-time data is also displayed in a configurable dashboard for an at-a-glance view of the location, data, metrics, and alerts for your IoT devices.
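A user-defined rule plus action pipeline like the one described can be sketched in a few lines; the rule names, readings, and thresholds below are invented and do not reflect the Real-Time Insights API.

```python
alerts = []  # stands in for emails, webhooks, and other dispatched actions

# each rule pairs a condition on a reading with an automated response
rules = [
    {"name": "overheat", "test": lambda r: r["temp_c"] > 80,
     "action": lambda r: alerts.append(f"email: {r['device']} overheating")},
    {"name": "weak_signal", "test": lambda r: r["rssi"] < -90,
     "action": lambda r: alerts.append(f"webhook: {r['device']} weak signal")},
]

def ingest(reading):
    """Apply every rule to an incoming device reading."""
    for rule in rules:
        if rule["test"](reading):
            rule["action"](reading)

ingest({"device": "pump-1", "temp_c": 85, "rssi": -40})
ingest({"device": "valve-7", "temp_c": 60, "rssi": -95})
print(alerts)
```

In a real deployment the action side would call out to an email gateway, an IFTTT trigger, a Node-RED flow, or a webhook, as the product description lists.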
We know, based on previous Future of News conversations, that the future of journalism is very much digital, that the industry is changing, practically on a daily basis. That it’s a demanding and challenging calling. But we also know that the journalists, editors, and publishers who have chosen this profession wouldn’t have it any other way. And Rich Jaroslovsky, Chief Journalist of SmartNews, is no exception.
With more than three decades of professional journalism experience and a resume he himself calls eccentric, Jaroslovsky epitomizes the curiosity, determination, and idealism of an industry that has taken some serious hits in recent years but refuses to back down.
His Highness Shaikh Mohammed bin Rashid Al Maktoum, Vice President and Prime Minister of the UAE and Ruler of Dubai, today inaugurated the Museum of the Future at the Madinat Jumeirah in Dubai.
Organised by the Dubai Foundation for the Museum of the Future as part of the World Government Summit 2016, the Museum focuses on shaping the future of robotics and artificial intelligence and their impact on human life. It also offers visitors a unique interactive experience that explores the future of the sector.
Shaikh Mohammed will also officiate at the opening of the fourth World Government Summit when it gets underway on February 8, 2016, under the theme 'Shaping Future Governments'.
Microsoft's AI program XiaoIce constantly analyzes the user's emotional state to simulate personal conversations. XiaoIce combines facts and data in Microsoft’s Bing search engine with recent advances in natural language processing. XiaoIce has had more than 10 billion conversations with people, most of them about private matters. XiaoIce is a huge hit in China, and could be a big boost to Microsoft's efforts in the AI space.
Neurons often respond to diverse combinations of task-relevant variables. This form of mixed selectivity plays an important computational role which is related to the dimensionality of the neural representations: high-dimensional representations with mixed selectivity allow a simple linear readout to generate a huge number of different potential responses. In contrast, neural representations based on highly specialized neurons are low dimensional and they preclude a linear readout from generating several responses that depend on multiple task-relevant variables. Here we review the conceptual and theoretical framework that explains the importance of mixed selectivity and the experimental evidence that recorded neural representations are high-dimensional. We end by discussing the implications for the design of future experiments.
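The classic illustration of this point is XOR: with one "pure" neuron per task variable, no linear readout can compute the exclusive-or of the two variables, but adding a single mixed-selective unit (here, the product term) makes it linearly readable. A toy sketch, with illustrative weights:

```python
def linear_readout(features, weights, bias):
    """Thresholded linear readout of a neural representation."""
    return sum(f * w for f, w in zip(features, weights)) + bias > 0

def mixed_code(a, b):
    """Representation with two pure units plus one mixed-selective unit."""
    return [a, b, a * b]

# with the product unit, XOR is a thresholded linear function:
# a + b - 2ab - 0.5 > 0 exactly when one (but not both) of a, b is 1
weights, bias = [1.0, 1.0, -2.0], -0.5
xor_out = {(a, b): linear_readout(mixed_code(a, b), weights, bias)
           for a in (0, 1) for b in (0, 1)}
print(xor_out)
```

Dropping the third unit leaves only the low-dimensional pure code `[a, b]`, for which no choice of `weights` and `bias` reproduces this truth table, which is the sense in which high-dimensional mixed selectivity expands the set of responses a linear readout can generate.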
IBM has opened a brand new global headquarters for Watson Internet of Things (IoT) in Munich, Germany in an effort to drive the innovation and development of connected devices and cognitive computing. Cognitive computing software uses natural language processing, machine learning and artificial intelligence to collect and analyse unstructured data and help users make better informed decisions.
In an effort to build, and control, ever smaller drones, researchers have been looking at how insects navigate. Insects use a technique called optical flow, based on the apparent speed of objects passing by in their field of vision. In fact, humans use optical flow too: it is part of what gives us a sense of how fast we're going when we're driving.
But unlike humans in cars, drones have a third dimension to worry about. They also have to keep track of their height in order to land successfully. Stereo vision would allow them to estimate distances, but if the baseline between sensors is too small, those measurements are imprecise.
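The baseline-precision trade-off follows directly from the stereo depth formula z = f * B / d (focal length f in pixels, baseline B, disparity d in pixels): for a fixed disparity error, a smaller baseline turns into a larger depth error. A back-of-envelope sketch with invented numbers:

```python
def stereo_depth(f_px, baseline_m, disparity_px):
    """Depth from stereo disparity: z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, true_depth_m, disp_err_px=0.5):
    """Depth error caused by a fixed disparity measurement error."""
    d = f_px * baseline_m / true_depth_m          # true disparity
    return abs(stereo_depth(f_px, baseline_m, d - disp_err_px) - true_depth_m)

f_px, depth = 500.0, 2.0
err_wide = depth_error(f_px, baseline_m=0.10, true_depth_m=depth)   # car-sized rig
err_tiny = depth_error(f_px, baseline_m=0.02, true_depth_m=depth)   # tiny drone
print(err_tiny > err_wide)  # True: smaller baseline, less precise depth
```

Shrinking the baseline from 10 cm to 2 cm shrinks the true disparity by the same factor, so the same half-pixel matching error corrupts the depth estimate far more, which is exactly the problem tiny drones face.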
The move to fully open-source the CNTK tools is part of Microsoft's effort to gain mind share and boost Azure pickup.
The kit, built with an emphasis on performance, provides a unified computational network framework describing deep neural networks as a series of computational steps via a directed graph, according to Microsoft Research.
Quantitative analysis is increasingly being used in team sports to better understand performance in these stylized, delineated, complex social systems. Here we provide a first step toward understanding the pattern-forming dynamics that emerge from collective offensive and defensive behavior in team sports. We propose a novel method of analysis that captures how teams occupy sub-areas of the field as the ball changes location. We used the method to analyze a game of association football (soccer) based upon a hypothesis that local player numerical dominance is key to defensive stability and offensive opportunity. We found that the teams consistently allocated more players than their opponents in sub-areas of play closer to their own goal. This is consistent with a predominantly defensive strategy intended to prevent yielding even a single goal. We also found differences between the two teams' strategies: while both adopted the same distribution of defensive, midfield, and attacking players (a 4:3:3 system of play), one team was significantly more effective at maintaining local numerical dominance, both for defensive stability and for offensive opportunity. That team indeed won the match with an advantage of one goal (2 to 1), but the analysis shows the advantage in play was more pervasive than the single-goal victory would indicate. Our focus on the local dynamics of team collective behavior is distinct from the traditional focus on individual player capability. It supports a broader view in which specific player abilities contribute within the context of the dynamics of multiplayer team coordination and coaching strategy. By applying this complex system analysis to association football, we can understand how players' and teams' strategies result in successful and unsuccessful relationships between teammates and opponents in the area of play.
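The sub-area occupancy idea can be sketched by bucketing player positions into a coarse grid and comparing local player counts in the cell containing the ball. This is in the spirit of the method, not the authors' code, and the positions below are invented.

```python
def sub_area(x, y, field=(105.0, 68.0), grid=(3, 3)):
    """Map a pitch coordinate (meters) to a (col, row) cell of a 3x3 grid."""
    col = min(int(x / field[0] * grid[0]), grid[0] - 1)
    row = min(int(y / field[1] * grid[1]), grid[1] - 1)
    return col, row

def local_dominance(team_a, team_b, ball):
    """Player-count advantage of team A in the sub-area holding the ball."""
    cell = sub_area(*ball)
    n_a = sum(1 for p in team_a if sub_area(*p) == cell)
    n_b = sum(1 for p in team_b if sub_area(*p) == cell)
    return n_a - n_b   # > 0: team A outnumbers team B near the ball

team_a = [(10, 30), (15, 35), (20, 40), (60, 30)]
team_b = [(12, 34), (70, 20), (80, 50)]
ball = (14, 33)
print(local_dominance(team_a, team_b, ball))  # 2: a 3-vs-1 local advantage
```

Tracking this quantity as the ball moves through the match gives exactly the kind of local-dominance time series the hypothesis above calls for.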
Popular mobile games can attract millions of players and generate terabytes of game-related data in a short burst of time. This places extraordinary pressure on the infrastructure powering these games and requires scalable data analytics services to provide timely, actionable insights in a cost-effective way.
To address these needs, a growing number of successful gaming companies use Google’s web-scale analytics services to create personalized experiences for their players. They use telemetry and smart instrumentation to gain insight into how players engage with the game and to answer questions like: At what game level are players stuck? What virtual goods did they buy? And what's the best way to tailor the game to appeal to both casual and hardcore players?
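A question like "at what game level are players stuck?" reduces to aggregating per-player telemetry. A toy sketch with an invented event schema:

```python
from collections import Counter

# each event records a player reaching a level (schema is illustrative)
events = [
    {"player": "p1", "level": 1}, {"player": "p1", "level": 2},
    {"player": "p2", "level": 1}, {"player": "p2", "level": 2},
    {"player": "p2", "level": 3},
    {"player": "p3", "level": 1}, {"player": "p3", "level": 2},
]

# highest level each player reached
max_level = {}
for e in events:
    max_level[e["player"]] = max(max_level.get(e["player"], 0), e["level"])

# level at which the most players stopped progressing
stuck_at = Counter(max_level.values())
level, n_players = stuck_at.most_common(1)[0]
print(level, n_players)  # 2 2: two of three players last seen at level 2
```

At real scale this same aggregation would run as a query over the telemetry warehouse rather than an in-memory loop, but the shape of the analysis is the same.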