A few years ago, investors and startups were chasing “big data”. Now we’re seeing a similar explosion of companies calling themselves artificial intelligence, machine learning, or collectively “machine intelligence”. The Bloomberg Beta fund, which is focused on the future of work, has been investing in these approaches.
Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have received less focus).
Machine intelligence technologies cut across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks). All of these technologies are reflected on this landscape.
What this landscape doesn’t include, however important, is “big data” technologies. Some have used this term interchangeably with machine learning and artificial intelligence, but I want to focus on the intelligence methods rather than data, storage, and computation pieces of the puzzle for this landscape (though of course data technologies enable machine intelligence).
We’ve seen a few great articles recently outlining why machine intelligence is experiencing a resurgence and documenting its enabling factors. Kevin Kelly, for example, chalks it up to cheap parallel computing, large datasets, and better algorithms.
Machine intelligence is enabling applications we already expect, like automated assistants (Siri), adorable robots (Jibo), and identifying people in images (like the highly effective but unfortunately named DeepFace). However, it’s also doing the unexpected: protecting children from sex trafficking, reducing the chemical content in the lettuce we eat, helping us buy shoes online that fit our feet precisely, and destroying ’80s classic video games.
Big companies have a disproportionate advantage, especially those that build consumer products. The giants in search (Google, Baidu), social networks (Facebook, LinkedIn, Pinterest), content (Netflix, Yahoo!), mobile (Apple) and e-commerce (Amazon) are in an incredible position. They have massive datasets and constant consumer interactions that enable tight feedback loops for their algorithms (and these factors combine to create powerful network effects), and they have the most to gain from the low-hanging fruit that machine intelligence bears. Best-in-class personalization and recommendation algorithms have enabled these companies’ success (it’s both impressive and disconcerting that Facebook recommends you add the person you had a crush on in college and Netflix tees up that perfect guilty-pleasure sitcom).

Now they are all competing on a new battlefield: the move to mobile. Winning mobile will require lots of machine intelligence: state-of-the-art natural language interfaces (like Apple’s Siri), visual search (like Amazon’s “Firefly”), and dynamic question-answering technology that tells you the answer instead of providing a menu of links (all of the search companies are wrestling with this).

Large enterprise companies (IBM and Microsoft) have also made incredible strides in the field, though they don’t have the same human-facing requirements, so they are focusing their attention more on knowledge representation tasks on large industry datasets, like IBM Watson’s application to assist doctors with diagnoses.
Via Dr. Stefan Gruenwald
Chinese search engine company Baidu says it has built the world’s most-accurate computer vision system, dubbed Deep Image, which runs on a supercomputer optimized for deep learning algorithms. Baidu claims a 5.98 percent error rate on the ImageNet object classification benchmark; a team from Google won the 2014 ImageNet competition with a 6.66 percent error rate. In…
FORBES gets an exclusive look at SourcePin, a search technology powered by artificial intelligence that forms part of Memex, DARPA's project to shine a light on the darker parts of the web. It's already in use by law enforcement agencies tracking sex trafficking, but could be of use to all kinds of organisations. And it's about to go open source.
A new search engine being developed by Darpa aims to shine a light on the dark web and uncover patterns and relationships in online data to help law enforcement and others track illegal activity. The project, dubbed Memex, has been in the works for a year and is being developed by 17 different contractor teams…
These days, many of us in consumer and enterprise tech companies are working on predictive systems that provide modest but valuable augmentation of human intelligence and business processes. I think this scale of ambition is a good fit for the current state of the art in machine learning and probabilistic inference. Think personal assistants like Siri or Google Now, predictive analytics in the enterprise for churn detection and ad campaign targeting, and personalized news apps like Prismatic.
But I think that the long-term story is much more exciting, and much further from our experience with synthetic intelligence to date. I believe that we are on the path to building the equivalent of global-scale nervous systems. I’m thinking Gaia’s brain: distributed but unified intelligences that gather data from sensors all over the world, and that synthesize those data streams to perceive the overall state of the planet as naturally as we perceive with our own sensory systems. This isn’t just big data; this is big inference.
To make this idea of a global intelligence more concrete, consider the startup Premise. As a first step toward the kind of perceptual systems that I am talking about, Premise is using various signals from the public internet as a set of massively distributed sensory organs, and then leveraging this information to develop more informative economic indexes.
Now consider what other problems such systems could solve in the coming decades. We could gain a true understanding of the climate system on a granular but global level. We could track and coordinate every vehicle on the planet, to improve energy efficiency and optimize scheduling to all but eliminate traffic jams. Or moving from vehicles to parts and materials, we could create and manage truly robust supply chains that maintain efficiency and resilience in the face of unexpected events. The possibilities go on, and are truly awesome.
So which explanation captures the dynamics that led to the crash? Could an ordinary order in the futures market or a lone market manipulator really cause the crash? The simple answer is that this is the wrong question to ask. From the perspective of the joint report and the enforcement action, the Flash Crash was a fluke, an idiosyncratic event caused by an unexpected glitch in the markets. But it was far from being a fluke. Instead, the Flash Crash reveals that we need a fundamentally different understanding of how modern financial markets work. We believe that it shows us that markets are governed by the same principle as earthquakes and avalanches: self-organized criticality.
Physicists Alexey Bezryadin, Alfred Hubler, and Andrey Belkin from the University of Illinois at Urbana-Champaign have demonstrated the emergence of self-organized structures that drive the evolution of a non-equilibrium system to a state of maximum entropy production. The authors suggest the maximum entropy production principle (MEPP) underlies the evolution of the artificial system’s self-organization, in the same way that it underlies the evolution of ordered systems (biological life) on Earth. The team’s results are published in Nature Publishing Group’s online journal Scientific Reports.
MEPP may have profound implications for our understanding of the evolution of biological life on Earth and of the underlying rules that govern the behavior and evolution of all nonequilibrium systems. Life emerged on Earth from the strongly nonequilibrium energy distribution created by the Sun’s hot photons striking a cooler planet. Plants evolved to capture high energy photons and produce heat, generating entropy. Then animals evolved to eat plants increasing the dissipation of heat energy and maximizing entropy production.
In their experiment, the researchers suspended a large number of carbon nanotubes in a non-conducting non-polar fluid and drove the system out of equilibrium by applying a strong electric field. Once electrically charged, the system evolved toward maximum entropy through two distinct intermediate states, with the spontaneous emergence of self-assembled conducting nanotube chains.
In the first state, the “avalanche” regime, the conductive chains aligned themselves according to the polarity of the applied voltage, allowing the system to carry current and thus to dissipate heat and produce entropy. The chains appeared to sprout appendages as nanotubes aligned themselves so as to adjoin adjacent parallel chains, effectively increasing entropy production. But frequently, this self-organization was destroyed through avalanches triggered by the heating and charging that emanates from the emerging electric current streams. (Watch the video.)
“The avalanches were apparent in the changes of the electric current over time,” said Bezryadin.
I frequently talk to groups of managers on the nature of systems thinking and its radical implications for management. In doing so I use several case studies involving prominent American corporations. At the end of the presentation I am almost always asked, "If this way of thinking is as good as you say it is, why don't more organizations use it?" It is easy to reply by saying that organizations naturally resist change. This of course is a tautology. I once asked a vice president of marketing why consumers used his product. He answered, "Because they like it." I then asked him how he knew this. He answered, "Because they use it." Our answer to the question about the failure of organizations to adopt systems thinking is seldom any better than this. There may be many reasons why any particular organization fails to adopt systems thinking, but I believe two are the most important: one general and one specific. By a general reason I mean one that is responsible for organizations failing to adopt any transforming idea, let alone systems thinking. By a specific reason I mean one responsible for the failure to adopt systems thinking in particular.
The gamer punches in play after endless play of the Atari classic Space Invaders. Through an interminable chain of failures, the gamer adapts the gameplay strategy to reach for the highest score. But this is no human with a joystick in a 1970s basement. Artificial intelligence is learning to play Atari games. The Atari addict is a deep-learning algorithm called DQN.
This algorithm began with no previous information about Space Invaders—or, for that matter, the other 48 Atari 2600 games it is learning to play and sometimes master after two straight weeks of gameplay. In fact, it wasn't even designed to take on old video games; it is a general-purpose, self-teaching computer program. Yet after watching the Atari screen and fiddling with the controls over two weeks, DQN is playing at a level that would humiliate even a professional flesh-and-blood gamer.
Volodymyr Mnih and his team of computer scientists at Google, who have just unveiled DQN in the journal Nature, say their creation is more than just an impressive gamer. Mnih says the general-purpose DQN learning algorithm could be the first rung on a ladder to artificial intelligence.
"This is the first time that anyone has built a single general learning system that can learn directly from experience to master a wide range of challenging tasks," says Demis Hassabis, a member of Google's team. The algorithm runs on little more than a powerful desktop PC with a souped-up graphics card. At its core, DQN combines two separate advances in machine learning in a fascinating way. The first advance is a type of positive-reinforcement learning method called Q-learning. This is where DQN, or Deep Q-Network, gets its middle initial. Q-learning means that DQN is constantly trying to make joystick and button-pressing decisions that will get it closer to a property that computer scientists call "Q." In simple terms, Q is what the algorithm approximates to be the biggest possible future reward for each decision. For Atari games, that reward is the game score.
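To make the "Q" idea concrete, here is a minimal sketch of the classic tabular Q-learning update. This is an illustrative simplification, not DQN itself: DQN approximates Q with a deep neural network rather than a lookup table, but the update rule is the same in spirit. The function name `q_update` and the dict-backed table are assumptions made for illustration.

```python
# Tabular Q-learning sketch: after taking `action` in `state`, observing
# `reward` and `next_state`, nudge Q(state, action) toward the best future
# reward estimate: reward + gamma * max_a Q(next_state, a).

def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.99):
    """One Q-learning step on a dict-backed table q[(state, action)]."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]

# Example: shooting an alien yields +10 points, so the estimate for
# ("s0", "fire") moves up from 0 toward that reward.
q_table = {}
q_update(q_table, "s0", "fire", 10.0, "s1", ["fire", "left", "right"])
```

The learning rate `alpha` controls how far each estimate moves per step; `gamma` discounts rewards that arrive further in the future.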
Knowing what decisions will lead it to the high scorer's list, though, is no simple task. Keep in mind that DQN starts with zero information about each game it plays. To understand how to maximize your score in a game like Space Invaders, you have to recognize a thousand different facts: how the pixelated aliens move, the fact that shooting them gets you points, when to shoot, what shooting does, the fact that you control the tank, and many more facts, most of which a human player understands intuitively. And then, if the algorithm changes to a racing game, a side-scroller, or Pac-Man, it must learn an entirely new set of facts. That's where the second machine learning advance comes in. DQN is also built upon a vast artificial neural network partially inspired by the human brain. Simply put, the neural network is a complex program built to process and sort information from noise. It tells DQN what is and isn't important on the screen.
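How the two pieces fit together can be sketched in a few lines: the network maps raw screen pixels to one Q-value per joystick action, and the agent picks the highest-valued action. The single random weight matrix below stands in for DQN's real convolutional network, and the 84×84 frame size, action list, and function name are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

ACTIONS = ["noop", "left", "right", "fire"]

# Stand-in for a trained network: one linear layer mapping a flattened
# 84x84 screen to one Q-value per action (weights here are untrained).
W = rng.normal(size=(len(ACTIONS), 84 * 84))

def choose_action(screen):
    """Greedy choice: take the action with the highest approximated Q."""
    q_values = W @ screen.ravel()  # one score per joystick action
    return ACTIONS[int(np.argmax(q_values))]

frame = rng.random((84, 84))       # stand-in for one Atari video frame
action = choose_action(frame)      # always one of ACTIONS
```

In training, DQN mixes this greedy choice with occasional random actions so it keeps exploring moves it hasn't yet learned to value.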