Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video

Google and other tech giants grapple with the ethical concerns raised by the AI boom

As machines take over more decisions from humans, new questions about fairness, ethics, and morality arise.

 

With great power comes great responsibility—and artificial-intelligence technology is getting much more powerful. Companies in the vanguard of developing and deploying machine learning and AI are now starting to talk openly about ethical challenges raised by their increasingly smart creations. “We’re here at an inflection point for AI,” said Eric Horvitz, managing director of Microsoft Research, at MIT Technology Review’s EmTech conference this week. “We have an ethical imperative to harness AI to protect and preserve over time.”

 

Horvitz spoke alongside researchers from IBM and Google pondering similar issues. One shared concern was that recent advances are leading companies to put software in positions with very direct control over humans—for example in health care. Francesca Rossi, a researcher at IBM, gave the example of a machine providing assistance or companionship to elderly people. “This robot will have to follow cultural norms that are culture-specific and task-specific,” she said. “[And] if you were to deploy in the U.S. or Japan, that behavior would have to be very different.” Such robots may still be a ways off, but ethical challenges raised by AI are already here. As businesses and governments rely more on machine-learning systems to make decisions, blind spots or biases in the technology can effectively lead to discrimination against certain types of people.

 

A ProPublica investigation last year, for example, found that a risk-scoring system used in some states to inform criminal sentencing was biased against blacks. Similarly, Horvitz described how an emotion-recognition service developed at Microsoft for use by businesses was initially inaccurate for small children because it was trained using a grab bag of photos that wasn’t properly curated. Maya Gupta, a researcher at Google, called for the industry to work harder on developing processes to ensure data used to train algorithms isn’t skewed. “A lot of times these data sets are being created in a somewhat automated way,” said Gupta. “We have to think more about, are we sampling enough from minority groups to be sure we did a good enough job?”
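One concrete way to act on Gupta's point is to audit the composition of a training set before using it. The short Python sketch below is only an illustration of that kind of check; the attribute name, groups, and threshold are hypothetical and are not part of any process described here.

```python
# Illustrative sketch only: a simple audit of a training set for the kind of
# sampling skew Gupta describes. The attribute name and threshold are
# hypothetical, not taken from any specific company's process.
from collections import Counter

def representation_report(samples, group_key, min_share=0.05):
    """Report each group's share of the data and flag under-represented groups."""
    counts = Counter(s[group_key] for s in samples)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {"count": n, "share": round(share, 3), "flag": share < min_share}
    return report

# Toy example with a hypothetical demographic attribute.
training_samples = [{"age_group": "adult"}] * 940 + [{"age_group": "child"}] * 60
for group, stats in representation_report(training_samples, "age_group").items():
    print(group, stats)
```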

 

In the past year, many efforts to research the ethical challenges of machine learning and AI have sprung up in academia and industry. The University of California, Berkeley; Harvard; and the Universities of Oxford and Cambridge have all started programs or institutes to work on ethics and safety in AI. In 2016, Amazon, Microsoft, Google, IBM, and Facebook jointly founded a nonprofit called Partnership on AI to work on the problem (Apple joined in January).

 

Companies are also taking individual action to build safeguards around their technology. Gupta highlighted research at Google that is testing ways to correct biased machine-learning models, or prevent them from becoming skewed in the first place. Horvitz described Microsoft’s internal ethics board for AI, dubbed AETHER, which considers things like new decision algorithms developed for the company’s cloud services. Although the board is currently populated with Microsoft employees, the company hopes to add outside voices in the future. Google has started its own AI ethics board.

 

Perhaps unsurprisingly, the companies creating such programs generally argue they obviate the need for new forms of government regulation of artificial intelligence. But at EmTech, Horvitz also encouraged discussion of extreme outcomes that might lead some people to conclude the opposite. In February 2017 he convened a workshop where experts laid out in detail how AI might harm society by doing things like messing with the stock market or election results. “If you’re proactive, you can come up with outcomes that are feasible and put mechanisms in place to disrupt them now,” said Horvitz. That kind of talk seemed to unnerve some of those he shared the stage with in San Francisco. Gupta of Google encouraged people to also consider how taking some decisions out of the hands of humans could make the world more moral than it is now.

 

“Machine learning is controllable and precise and measurable with statistics,” she said. “There are so many possibilities for more fairness, and more ethical results.”


Newest Machine Learning Trends


In research, Machine Learning is steadily moving away from abstractions and engaging more in business problem solving, with support from AI and Deep Learning. In What Is the Future of Machine Learning, Forbes predicts that theoretical research in ML will gradually pave the way for business problem solving. With Big Data now part of mainstream business activity, smart ML algorithms can use massive loads of both static and dynamic data to continuously learn and improve their performance.

 

If the threat of intelligent machines displacing Data Scientists is as real as it is made out to be, then 2017 is probably the year when the global Data Science community should take a new look at the capabilities of so-called “smart machines.” The repeated failures of autonomous cars have made one point clear – even learning machines cannot surpass the natural thinking faculties of human beings. If autonomous or self-guided machines are to be useful to society, then current Artificial Intelligence and Machine Learning research should acknowledge the limits of machine power, assign tasks that suit the machines, and include human intervention at necessary checkpoints to avert disasters. Repetitive, routine tasks can be handled well by machines, but out-of-the-ordinary situations will still require human intervention.

 

2017 Machine Learning and Application Development Trends

Gartner’s Top 10 Technology Trends for 2017 predicts that the combination of AI and advanced ML practice that ignited about four years ago, and has continued unabated since, will dominate Artificial Intelligence application development in 2017. This potent combination will deliver more systems that “understand, learn, predict, adapt and potentially operate autonomously.” Cheap hardware, cheap memory, cheap storage, more processing power, superior algorithms, and massive data streams will all contribute to the success of ML-powered AI applications. There will be a steady rise in ML-powered AI applications in industry sectors such as preventive healthcare, banking, finance, and media. For businesses, that means more automated functions and fewer human checkpoints. Forrester’s 2017 predictions suggest that the Artificial Intelligence and Machine Learning Cloud will increasingly feed on IoT data as sensors and smart apps take over every facet of our daily lives.

 

Democratization of Machine Learning in the Cloud          

Democratization of AI and ML through Cloud technologies, open standards, and the algorithm economy will continue. The growing trend of deploying prebuilt ML algorithms to enable Self-Service Business Intelligence and Analytics is a positive step towards the democratization of ML. In Google Says Machine Learning is the Future, the author champions the democratization of ML through idea sharing. A case in point is Google’s TensorFlow, which has championed the need for open standards in Machine Learning. The article claims that almost anyone with a laptop, an Internet connection, and the right mindset can aspire to be a Machine Learning expert today. The provisioning of Cloud-based IT services was already a good step toward making advanced Data Science a mainstream activity, and now, with the Cloud and packaged algorithms, mid-sized and smaller businesses will have access to Self-Service BI and Analytics, which until now was only a dream. Mainstream business users will also gradually take an active role in data-centric business systems. Machine Learning Trends – Future AI claims that more enterprises in 2017 will capitalize on the Machine Learning Cloud and do their part to lobby for democratized data technologies.
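To make the idea of "prebuilt" algorithms concrete, the sketch below trains a canned classifier from scikit-learn on a bundled dataset. It is a generic illustration of the workflow, not a demonstration of any particular cloud vendor's service.

```python
# A minimal sketch of what "prebuilt" ML looks like in practice: a canned
# classifier from scikit-learn trained on a bundled dataset. Cloud ML services
# expose essentially the same workflow behind an API.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)  # prebuilt algorithm
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```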

 

Platform Wars will Peak in 2017

The platform war among IBM, Microsoft, Google, and Facebook to be the leader in ML development will peak in 2017. Where Machine Learning Is Headed predicts that 2017 will see tremendous growth of smart apps, digital assistants, and mainstream use of Artificial Intelligence. Although many ML-enabled AI systems have turned into success stories, self-driving cars may yet die a premature death.

 

Humans will Make Peace with Machines

Since 2012 the global business community has witnessed a meteoric rise and widespread proliferation of data technologies. Finally, humans will realize that it is time to stop fearing the machines and begin working with them. The InfoWorld article titled Application Development, Docker, Machine Learning Are Top Tech Trends for 2017 asserts that humans and machines will work with each other, not against each other. In this context, readers should review the DATAVERSITY® article The Future of Machine Learning: Trends, Observations, and Forecasts, which reminds readers that as businesses come to depend heavily on pre-built ML algorithms for Advanced Analytics, the need for Data Scientists or large IT departments may diminish.

 

Demand-Supply Gaps in Data Science and Machine Learning will Rise

The business world is steadily heading toward the prophetic 2018, when, according to McKinsey, the first void in data technology expertise will be felt in the US and then gradually in the rest of the world. The demand-supply gap in Data Science and Machine Learning skills will continue to grow until academic programs and industry workshops begin to produce a ready workforce. In response to this sharp rise in the demand-supply gap, more enterprises and academic institutions will collaborate to train future Data Scientists and ML experts. This kind of training will compete with the traditional Data Science classroom and will focus more on practical skills than on theoretical knowledge. KDnuggets will continue to challenge the curious mind by publishing articles like 10 Algorithms that Machine Learning Engineers Should Know. 2017 will witness a steady rise in contributions from KDnuggets and Kaggle in providing alternative training to future Data Scientists and Machine Learning experts through practical skill development.

 

Algorithm Economy will take Center Stage

Over the next year or two, businesses will be using canned algorithms for data-centric activities such as BI, Predictive Analytics, and CRM. The algorithm economy, which Forbes mentions, will usher in a marketplace where data companies compete for space. In 2017, global businesses will engage in Self-Service BI and experience the growth of algorithmic business solutions and ML in the Cloud. As far as algorithm-driven business decision making is concerned, 2017 may actually see two distinct algorithm economies. On one hand, average businesses will use canned algorithmic models for their operational and customer-facing functions. On the other hand, proprietary ML algorithms will become a market differentiator among large, competing enterprises.


How AI researchers built a neural network that learns to speak in just a few hours

The Chinese search giant’s Deep Voice system learns to talk in just a few hours with little or no human interference.

 

In the battle to apply deep-learning techniques to the real world, one company stands head and shoulders above the competition. Google’s DeepMind subsidiary has used the technique to create machines that can beat humans at video games and the ancient game of Go. And last year, Google Translate services significantly improved thanks to the behind-the-scenes introduction of deep-learning techniques.

 

So it’s interesting to see how other companies are racing to catch up. Today, it is the turn of Baidu, an Internet search company that is sometimes described as the Chinese equivalent of Google. In 2013, Baidu opened an artificial intelligence research lab in Silicon Valley, raising an interesting question: what has it been up to?

 

Now Baidu’s artificial intelligence lab has revealed its work on speech synthesis. One of the challenges in speech synthesis is to reduce the amount of fine-tuning that goes on behind the scenes. Baidu’s big breakthrough is to create a deep-learning machine that largely does away with this kind of meddling. The result is a text-to-speech system called Deep Voice that can learn to talk in just a few hours with little or no human interference.

 

First some background. Text-to-speech systems are familiar in the modern world in navigation apps, talking clocks, telephone answering systems, and so on. Traditionally these have been created by recording a large database of speech from a single individual and then recombining the utterances to make new phrases.

 

The problem with these systems is that it is difficult to switch to a new speaker or change the emphasis in their words without recording an entirely new database. So computer scientists have been working on another approach. Their goal is to synthesize speech in real time from scratch as it is required.

 

Last year, Google’s DeepMind made a significant breakthrough in this area. It unveiled a neural network that learns how to speak by listening to the sound waves from real speech while comparing this to a transcript of the text. After training, it was able to produce synthetic speech based on text it was given. Google DeepMind called its system WaveNet.

 

Baidu’s work is an improvement on WaveNet, which still requires some fine-tuning during the training process. WaveNet is also computationally demanding, so much so that it is unclear whether it could ever be used to synthesize speech in real time in the real world.
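For readers curious what the underlying building block looks like, the sketch below implements a stack of dilated causal convolutions of the kind WaveNet popularized, in PyTorch. It illustrates the technique only; Deep Voice's actual architecture, layer sizes, and training setup are not reproduced here.

```python
# A rough sketch of the dilated causal convolution that WaveNet-style models
# stack to model raw audio one sample at a time. Illustration of the building
# block only, not Baidu's or DeepMind's actual architecture.
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution that only looks at past samples (left-padded)."""
    def __init__(self, channels, kernel_size=2, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                              # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))        # pad only on the left (the past)
        return self.conv(x)

# Stack layers with exponentially growing dilation to get a large receptive field.
layers = nn.Sequential(*[CausalConv1d(16, dilation=2 ** i) for i in range(6)])
audio_features = torch.randn(1, 16, 1000)              # fake (batch, channels, samples)
out = layers(audio_features)
print(out.shape)                                        # same length as the input
```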



Machine Learning Algorithm Deciphers Bat Talk


A machine learning algorithm helped decode the squeaks Egyptian fruit bats make in their roost, revealing that they “speak” to one another as individuals.

 

Plenty of animals communicate with one another, at least in a general way—wolves howl to each other, birds sing and dance to attract mates and big cats mark their territory with urine. But researchers at Tel Aviv University recently discovered that when at least one species communicates, it gets very specific. Egyptian fruit bats, it turns out, aren’t just making high pitched squeals when they gather together in their roosts. They’re communicating specific problems, reports Bob Yirka at Phys.org.

 

According to Ramin Skibba at Nature, neuroecologist Yossi Yovel and his colleagues recorded a group of 22 Egyptian fruit bats, Rousettus aegyptiacus, for 75 days. Using a modified machine learning algorithm originally designed for recognizing human voices, they fed 15,000 calls into the software. They then analyzed the corresponding video to see if they could match the calls to certain activities.

 

They found that the bat noises are not just random, as previously thought, reports Skibba. They were able to classify 60 percent of the calls into four categories. One of the call types indicates the bats are arguing about food. Another indicates a dispute about their positions within the sleeping cluster. A third call is reserved for males making unwanted mating advances and the fourth happens when a bat argues with another bat sitting too close. In fact, the bats make slightly different versions of the calls when speaking to different individuals within the group, similar to a human using a different tone of voice when talking to different people. Skibba points out that besides humans, only dolphins and a handful of other species are known to address individuals rather than making broad communication sounds. The research appears in the journal Scientific Reports.
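As a rough illustration of how recorded calls can be turned into features and sorted into categories, the sketch below extracts MFCC features with librosa and fits an off-the-shelf classifier. This is not the Tel Aviv group's pipeline; the file names and labels are hypothetical, and librosa and scikit-learn are assumed to be installed.

```python
# Generic audio-call classification sketch. NOT the study's modified
# voice-recognition algorithm; file paths and labels below are hypothetical.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def call_features(path):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # fixed-length vector

# Hypothetical recordings labeled with the four dispute contexts from the study.
recordings = [("call_0001.wav", "food"), ("call_0002.wav", "sleep_position"),
              ("call_0003.wav", "mating"), ("call_0004.wav", "perch_distance")]
X = np.array([call_features(path) for path, _ in recordings])
y = np.array([label for _, label in recordings])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)                                   # with real data: thousands of labeled calls
print(clf.predict([call_features("call_0005.wav")]))
```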

 

“We have shown that a big bulk of bat vocalizations that previously were thought to all mean the same thing, something like ‘get out of here!’ actually contain a lot of information,” Yovel tells Nicola Davis at The Guardian. By looking even more carefully at stresses and patterns, Yovel says, researchers may be able to tease out even more subtleties in the bat calls.

 


From Virtual Nurses To Drug Discovery: 106 Artificial Intelligence Startups In Healthcare


Increasingly crowded imaging & diagnostics: 19 out of the 24 companies under imaging & diagnostics raised their first equity funding round since January 2015 (this includes seed or Series A rounds, as well as a first round raised by stealth startup Imagen Technologies). In 2014, Butterfly Networks raised a $100M Series C, backed by Aeris Capital and Stanford University. This was one of the largest equity rounds to an AI in healthcare company, after China-based iCarbonX’s $154M mega-round and two $100M+ raises by oncology-focused Flatiron Health.

 

Remote patient monitoring: London-based Babylon Health, backed by investors including Kinnevik and Google-owned DeepMind Technologies, raised a $25M Series A round in 2016 to develop an AI-based chat platform. New York-based AiCure raised $12.3M in Series A funding from investors including Biomatics Capital Partners, New Leaf Venture Partners, Pritzker Group Venture Capital, and Tribeca Venture Partners, for the use of artificial intelligence to ensure patients are taking their medications. California-based Sense.ly has developed a virtual nursing assistant, Molly, to follow up with patients post-discharge. The company claims Molly gives clinicians “20% of their day back.” Sentrian, backed by investors including Frost Data Capital, analyzes biosensor data and sends patient-specific alerts to clinicians.

 

Core AI companies bring their algorithms to healthcare: Core AI startup Ayasdi, which has developed a machine intelligence platform based on topological data analysis, is bringing its solutions to healthcare providers for applications including patient risk scoring and readmission reduction. Other core AI startups looking at healthcare include H2O.ai and Digital Reasoning Systems.

 

Top VCs: Khosla Ventures and Data Collective are the top VC investors in healthcare AI startups, and have backed 5 unique companies each. Khosla Ventures backed California-based Ginger.io, which focuses on patients with depression and anxiety; healthcare analytics platform Lumiata; Israel’s Zebra Medical Vision and California-based Bay Labs, which apply AI to medical imaging; and drug discovery startup Atomwise. Data Collective backed imaging & diagnostics startups Enlitic, Bay Labs, and Freenome, analytics platform CloudmedX, and the previously mentioned Atomwise.

 

Drug discovery is also gaining attention: Startups are using machine learning algorithms to reduce drug discovery times, and VCs have backed 6 out of the 9 startups on the map. Andreessen Horowitz recently seed funded twoXAR, developer of the DUMA drug discovery platform; Khosla Ventures and Data Collective backed Atomwise, which published its first findings of Ebola treatment drugs last year, and has also partnered with MERCK; Lightspeed Venture Partners invested in Numedii in 2013; Foundation Capital participated in 3 equity funding rounds to Numerate.

 

AI in oncology: IBM Watson Group-backed Pathway Genomics has recently started a research study for its new blood test kit, CancerIntercept Detect. The company will collect blood samples from high-risk individuals who have never been diagnosed with the disease to determine if early detection is possible. Other oncology-focused startups include Flatiron Health, Cyrcadia (wearable device), CureMetrix, SkinVision, Entopsis, and Smart Healthcare.


Birdsnap: Identifying a bird from a picture with artificial intelligence

Birdsnap is a free electronic field guide covering 500 of the most common North American bird species, available as a website or an iPhone app. Researchers from Columbia University and the University of Maryland developed Birdsnap using computer vision and machine learning to explore new ways of identifying bird species. Birdsnap automatically discovers visually similar species and makes visual suggestions for how they can be distinguished. In addition, Birdsnap uses visual recognition technology to allow users who upload bird images to search for visually similar species. Birdsnap estimates the likelihood of seeing each species at any location and time of year based on sightings records, and uses this likelihood both to produce a custom guide to local birds for each user and to improve the accuracy of visual recognition.

The genesis of Birdsnap (and its predecessors Leafsnap and Dogsnap) was the realization that many techniques for face recognition developed by Peter Belhumeur (Columbia University) and David Jacobs (University of Maryland) could also be applied to automatic species identification. State-of-the-art face recognition algorithms rely on methods that find correspondences between comparable parts of different faces, so that, for example, a nose is compared to a nose, and an eye to an eye. In the same way, Birdsnap detects the parts of a bird, so that it can examine the visual similarity of comparable parts of the bird.

Our first electronic field guide, Leafsnap, produced in collaboration with the Smithsonian Institution, was launched in May 2011. This free iPhone app uses visual recognition software to help identify tree species from photographs of their leaves. Leafsnap currently includes the trees of the northeastern US and will soon grow to include the trees of the United Kingdom. Leafsnap has been downloaded by over a million users and discussed extensively in the press (see Leafsnap.com for more information). In 2012, we launched Dogsnap, an iPhone app that uses visual recognition to help determine dog breeds. Dogsnap contains images and textual descriptions of over 150 breeds of dogs recognized by the American Kennel Club.
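The part-based comparison can be pictured with a toy example: compute a similarity for each corresponding part and average them. The descriptors below are random stand-ins; Birdsnap's real part features come from trained detectors.

```python
# Toy illustration of part-based comparison: compare like parts (beak to beak,
# wing to wing) and aggregate the similarities. Feature vectors are random
# stand-ins, not Birdsnap's actual descriptors.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

parts = ["beak", "eye", "wing", "breast"]
rng = np.random.default_rng(0)
bird_a = {p: rng.normal(size=64) for p in parts}   # per-part descriptors, bird A
bird_b = {p: rng.normal(size=64) for p in parts}   # per-part descriptors, bird B

per_part = {p: cosine(bird_a[p], bird_b[p]) for p in parts}
overall = sum(per_part.values()) / len(per_part)
print(per_part, "overall similarity:", round(overall, 3))
```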

For their inspiration and advice on bird identification, we thank the UCSD Computer Vision group, especially Serge Belongie, Catherine Wah, and Grant Van Horn; the Caltech Computational Vision group, especially Pietro Perona, Peter Welinder, and Steve Branson; the alumni of these groups Ryan Farrell (now at BYU), Florian Schroff (at Google), and Takeshi Mita (at Toshiba); and the Visipedia effort.

A Friendly Robot Next Door May Have Just Written This Story


The Washington Post's Heliograf software can autowrite tons of basic stories in no time, which could free up reporters to do more important work — or allow them to just retire.

 

USA Today has used this AI-driven production software to create short videos. It can condense news articles into a script, string together a selection of images or video footage, and even add narration with a synthesized newscaster voice.

 

Reuters’ algorithmic prediction tool helps journalists gauge the integrity of a tweet. The tech scores emerging stories on the basis of “credibility” and “newsworthiness” by evaluating who’s tweeting about it, how it’s spreading across the network, and if nearby users have taken to Twitter to confirm or deny breaking developments.

 

Originally designed to crowdsource reporting from the Republican and Democratic National Conventions, BuzzFeed’s software collects information from on-the-ground sources at news events. BuzzBot has since been open-sourced, portending a wave of bot-aided reporting tools.


A Non-Invasive Brain-Computer Interface for Completely Locked-In Patients


Researchers have developed a non-invasive brain-computer interface (BCI) for completely locked-in patients. This is the first time that these patients, with complete motor paralysis but an intact cognitive state, have been able to reliably communicate. A completely locked-in state involves the loss of all motor control, including that of the eye muscles, and until now some researchers suspected that such patients were unable to communicate.

 

The study, published in PLoS Biology, detailed the researchers’ efforts in developing a non-invasive method to allow four completely locked-in patients to answer “yes or no” questions. The technique involves patients wearing a cap that uses infrared light to measure blood flow in different areas of the brain when they think about responding “yes” or “no” to a question. The researchers trained the patients by asking them control test questions to make sure the system could accurately record their answers, before asking questions about their current lives.

 

Brain-computer interfaces involving implantable electrodes have previously been used successfully in patients with less severe forms of locked-in paralysis. However, these methods involved direct implantation of electrodes in the brain. The current method is non-invasive, and it is the first approach that has worked reliably for patients who are completely locked in.


Laboratory of Artificial Intelligence for Design: Smart Free CAD For The Web


The design of 2D or 3D geometries is involved in one way or another in most of science, art and engineering activities. Modern design tools are powerful and boost the productivity of designers, but they require a lot of training, effort and time to achieve a good understanding and an efficient exploitation.

 

LAI4D is an R+D project whose aim is to develop an artificial intelligence able to understand the user's ideas regarding spatial imagination. This technology will help improve the communication between designers and design tools by emulating human cognitive abilities for interpreting graphic representations of geometric ideas and other capabilities.

 

The implementation has been conceived as a dual web application that can work as a 3D rendering widget or as a free, lightweight 3D CAD tool, providing an adequate environment for the project. This CAD tool incorporates a special design assistant in charge of extracting conceptual geometries from pictures or sketches provided by the user as input. Additionally, LAI4D tries to reduce the inherent complexity of professional design tools which, despite being suitable for experienced users, are almost unreachable for people not trained in the use of CAD systems and only in need of occasional use. The selected implementation approach not only gives users easy access to the tool, but is also an excellent means to build a community of designers who will provide the feedback needed to make the system bigger and smarter. Follow this link to try the LAI4D designer.

 

Beginner's tutorial: This tutorial is the recommended introduction for all persons new to LAI4D. It is intended to teach the basics of design in 20 minutes. It shows step by step how to create a simple 3D geometry from the idea up to the publishing of the drawing on the Internet using the easiest tools. Thanks to this exercise the user will understand the working philosophy of LAI4D, will be able to generate polyhedral surfaces and polygonal lines with colors, and will learn to share designs through the free online sharing service of LAI4D. The created geometry can be inspected in the link: cubicle_sample.


New machine-learning algorithms may revolutionize drug discovery — and our understanding of life


A new set of machine-learning algorithms developed by researchers at the University of Toronto Scarborough can generate 3D structures of nanoscale protein molecules that could not be achieved in the past. The algorithms may revolutionize the development of new drug therapies for a range of diseases and may even lead to better understand how life works at the atomic level, the researchers say.

 

Drugs work by binding to a specific protein molecule and changing the protein’s 3D shape, which alters the way that protein functions in the body. The ideal drug is designed in a shape that will bind only to a specific protein or group of proteins involved in a disease, eliminating the side effects that occur when drugs bind to other proteins in the body.

 

Since proteins are tiny — about 1 to 100 nanometers — even smaller than the shortest wavelength of visible light, they can’t be seen directly without using sophisticated techniques like electron cryomicroscopy (cryo-EM). Cryo-EM uses high-power microscopes to take tens of thousands of low-resolution images of a frozen protein sample from different positions.

The computational problem is to then piece together the correct high-resolution 3D structure from these 2D images.
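As a drastically simplified analogue of that computational problem, the sketch below reconstructs a 2D object from its 1D projections using filtered back-projection in scikit-image. Real cryo-EM is far harder, because every particle's 3D orientation is unknown and the images are extremely noisy, which is exactly the part the new algorithms address and this toy example does not.

```python
# A much simplified 2-D analogue of the cryo-EM problem: recover an object
# from its projections by filtered back-projection (scikit-image).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)         # a known 2-D "molecule"
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(image, theta=angles)               # simulate the projections
reconstruction = iradon(sinogram, theta=angles)     # piece the object back together
print("mean reconstruction error:", np.abs(reconstruction - image).mean())
```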

 

Existing techniques take several days or even weeks to generate a 3D structure on a cluster of computers, requiring as much as 500,000 CPU hours, according to the researchers. Also, existing techniques often generate incorrect structures unless an expert user provides an accurate guess of the molecule being studied.


Henn-na Hotel: World’s First Fully Robot-Staffed Hotel Opens in Japan

In Japan there’s now a hotel you can stay in without ever having to deal with another human being.

 

Henn-na Hotel is located within the Huis Ten Bosch amusement park, which recreates the Netherlands with full-size copies of old Dutch buildings. It’s not immediately clear what the theme park and its new gimmicky hotel have in common, but the hotel will certainly help bring tourists to the park, which declared bankruptcy in 2003. After much buzz in the media, which wasn’t hard to generate, the hotel finally opened last week, and we got a peek at what this robotically ‘manned’ hotel actually looks like.


Is 2017 the Chinese year of AI?

The country’s Internet giants are focusing on AI research, and domestic venture capital funding is pouring into the field.

 

The nation’s search giant, Baidu, is leading the charge. Already making serious headway in AI research, it has now announced that former Microsoft executive Qi Lu will take over as its chief operating officer. Lu ran the applications and services division at Microsoft, but, according to the Verge, a large part of his remit was developing strategies for artificial intelligence and chatbots. In a statement, Baidu cites hiring Lu as part of its plan to become a “global leader in AI.”

 

Meanwhile, Baidu’s chief scientist, Andrew Ng, has announced that the company is opening a new augmented reality lab in Beijing. Baidu has already made progress in AR, using  computer vision and deep learning to add an extra layer to the real world for millions of people. But the new plans aim to use a 55-strong lab to increase revenue by building AR marketing tools—though it’s thought that the company will also consider health-care and education applications in the future.

 

But Baidu isn’t alone in pushing forward. Late last year, Chinese Internet giant Tencent—the company behind the hugely successful mobile app WeChat, which has 846 million active users—said that it was determined to build a formidable AI lab. It plans to start publishing its work at conferences this year. Smaller players could also get a shot in the arm. According to KPMG, Chinese venture capital investment looks set to pour into AI research in the coming year. Speaking to Fintech Innovation, KPMG partner Egidio Zarrella explained that “the amount being invested in artificial intelligence in Asia is growing by the day.”

 

Similar growth is already underway in China's research community. A study by Japan's National Institute of Science and Technology Policy found China to be a close second to the U.S. in terms of the number of AI studies presented at top academic conferences in 2015. And a U.S. government report says that the number of papers published by Chinese researchers mentioning "deep learning" already exceeds the number published by U.S. researchers.

 

All of which has seen South China Morning Post label AI and AR as “must haves” in any self-respecting Chinese investment portfolio. No kidding. This year, it seems, many U.S. tech companies might find themselves looking East to identify competition.



FDA clearance for AI-assisted cardiac imaging system


A San Francisco startup has landed Food and Drug Administration approval for artificial intelligence-assisted cardiac imaging in the cloud.

 

Arterys Inc.’s Cardio DL program applies deep learning, a form of artificial intelligence, to automate tasks that radiologists have been performing manually. It represents the first FDA-cleared, zero-footprint use of cloud computing and deep learning through AI in a clinical setting, the company said.

 

Arterys developed the technology by mining a data set of more than 3,000 cardiac cases. Cardio DL produces editable, automated contours, according to a company statement. It can provide accurate and consistent cardiac measurements in seconds, as opposed to one hour for manual processing.

 

Obtaining an image of a heart through MRI is a complex, time-consuming process that Arterys is working to improve, according to Arterys CEO Fabien Beckers.

 

Radiologists have traditionally used software to segment and draw contours around the ventricle to determine how the heart is functioning, Beckers said. The new, AI-assisted software can provide deep learning-generated contours of the insides and outsides of the heart’s ventricles to speed up the process and improve accuracy.

 

“It’s the new way of doing medical imaging, a cloud medical imaging application that can have AI embedded in it,” he said. “It has the potential to make sure that physicians benefit from the work of thousands of other physicians and can be transforming healthcare in a positive fashion.”


Google uses neural networks to translate without transcribing


Google’s latest take on machine translation could make it easier for people to communicate with those speaking a different language, by translating speech directly into text in a language they understand. Machine translation of speech normally works by first converting it into text, then translating that into text in another language. But any error in speech recognition will lead to an error in transcription and a mistake in the translation.

 

Researchers at Google Brain, the tech giant’s deep learning research arm, have turned to neural networks to cut out the middle step. By skipping transcription, the approach could potentially allow for more accurate and quicker translations. The team trained its system on hundreds of hours of Spanish audio with corresponding English text. In each case, it used several layers of neural networks – computer systems loosely modeled on the human brain – to match sections of the spoken Spanish with the written translation. To do this, it analyzed the waveform of the Spanish audio to learn which parts seemed to correspond with which chunks of written English. When it was then asked to translate, each neural layer used this knowledge to manipulate the audio waveform until it was turned into the corresponding section of written English. “It learns to find patterns of correspondence between the waveforms in the source language and the written text,” says Dzmitry Bahdanau at the University of Montreal in Canada, who wasn’t involved with the work.

 

After a learning period, Google’s system produced a better-quality English translation of Spanish speech than one that transcribed the speech into written Spanish first. It was evaluated using the BLEU score, which judges machine translations by how close they are to a professional human translation.
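For reference, the BLEU score mentioned above measures n-gram overlap between a system translation and one or more human references. A minimal example with NLTK (the sentences are made up for illustration):

```python
# Minimal BLEU example with NLTK; the reference and hypothesis are invented.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "is", "on", "the", "mat"]]     # human translation(s)
hypothesis = ["the", "cat", "sat", "on", "the", "mat"]     # system output
score = sentence_bleu(reference, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print("BLEU:", round(score, 3))   # 1.0 would be a perfect n-gram match
```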

 

The system could be particularly useful for translating speech in languages that are spoken by very few people, says Sharon Goldwater at the University of Edinburgh in the UK. International disaster relief teams, for instance, could use it to quickly put together a translation system to communicate with people they are trying to assist. When an earthquake hit Haiti in 2010, says Goldwater, there was no translation software available for Haitian Creole. Goldwater’s team is using a similar method to translate speech from Arapaho, a language spoken by only 1000 or so people in the Native American tribe of the same name, and Ainu, a language spoken by a handful of people in Japan.



This AI software dreams up new drug molecules

Ingesting a heap of drug data allows a machine-learning system to suggest alternatives humans hadn’t tried yet.

 

A group of scientists now report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This generative model allows efficient search and optimization through open-ended spaces of chemical compounds. The team can train deep neural networks on hundreds of thousands of existing chemical structures to construct two coupled functions: an encoder and a decoder. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to the discrete representation from this latent space. Continuous representations make it possible to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also enable the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. The researchers demonstrate the success of this method in the design of drug-like molecules as well as organic light-emitting diodes.
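The encoder/decoder idea can be sketched in a few lines: one-hot-encode a SMILES string, compress it to a continuous vector, and decode it back. The block below is only a bare-bones autoencoder on toy data; the published model is a much larger variational autoencoder trained on hundreds of thousands of molecules, and the alphabet and sizes here are arbitrary choices.

```python
# Bare-bones sketch of the encoder/decoder idea on one-hot-encoded SMILES
# strings. Toy alphabet, sizes, and training loop; not the published model.
import torch
import torch.nn as nn

ALPHABET = list("CNOF()=#123456789") + [" "]       # toy SMILES character set
MAXLEN, LATENT = 24, 16

def one_hot(smiles):
    x = torch.zeros(MAXLEN, len(ALPHABET))
    for i, ch in enumerate(smiles.ljust(MAXLEN)[:MAXLEN]):
        x[i, ALPHABET.index(ch)] = 1.0
    return x.flatten()

encoder = nn.Sequential(nn.Linear(MAXLEN * len(ALPHABET), 128), nn.ReLU(),
                        nn.Linear(128, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                        nn.Linear(128, MAXLEN * len(ALPHABET)))

smiles = ["CCO", "CC(=O)O", "C1CCCCC1", "CCN"]      # a handful of toy molecules
data = torch.stack([one_hot(s) for s in smiles])

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(200):                                # tiny training loop
    recon = decoder(encoder(data))
    loss = nn.functional.mse_loss(recon, data)
    opt.zero_grad(); loss.backward(); opt.step()

# Nearby points in the latent space decode to similar strings, which is what
# makes gradient-based search for new compounds possible.
z = encoder(one_hot("CCO").unsqueeze(0))
print("latent vector:", z.detach().numpy().round(2))
```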


The prototype of a chemical computer detects a sphere


Chemical computers are becoming ever more of a reality - this is being proven by scientists from the Institute of Physical Chemistry of the Polish Academy of Sciences in Warsaw. It turns out that after an appropriate teaching procedure even a relatively simple chemical system can perform non-trivial operations. In their most recent computer simulations researchers have shown that correctly programmed chemical matrices of oscillating droplets can recognize the shape of a sphere with great accuracy.

 

Modern computers use electronic signals for their calculations, that is, physical phenomena related to the movement of electric charges. Information can, however, be processed in many ways. For some time now efforts have been underway worldwide to use chemical signals for this purpose. For the time being, however, the resulting chemical systems perform only the simplest logic operations. Meanwhile, researchers from the Institute of Physical Chemistry of the Polish Academy of Sciences (IPC PAS) in Warsaw have demonstrated that even uncomplicated and easy-to-produce collections of droplets, in which oscillating chemical reactions proceed, can process information in a useful way, e.g. recognizing the shape of a specified three-dimensional object with great accuracy or correctly classifying cancer cells into benign or malignant.

 

"A lot of work being currently carried out in laboratories focuses on building chemical equivalents of standard logic gates. We took a different approach to the problem," says Dr. Konrad Gizynski (IPC PAS) and explains: "We investigate systems of a dozen-or-so to a few dozen drops in which chemical signals propagate, and treat each one as a whole, as a kind of neuronal network. It turns out that such networks, even very simple ones, after a short teaching procedure manage well with fairly sophisticated problems. For instance, our newest system has ability to recognize the shape of a sphere in a set of x, y, z spatial coordinates".

 

The systems being studied at the IPC PAS work thanks to the Belousov-Zhabotinsky reaction proceeding in individual drops. This reaction is oscillatory: after the completion of one oscillation cycle the reagents necessary to begin the next cycle are regenerated in the solution. A droplet is a batch reactor. Before reagents are depleted a droplet has usually performed from a few dozen to a few hundred oscillations. The time evolution of a droplet is easy to observe, since its catalyst, ferroin, changes color during the cycle. In a thin layer of solution the effect is spectacular: colorful strips - chemical fronts - traveling in all directions appear in the liquid. Fronts can also be seen in the droplets, but in practice the phase of the cycle is indicated just by the color of the droplet: when the cycle begins, the droplet rapidly turns blue (excites), after which it gradually returns to its initial state, which is red.

 

"Our systems basically work by mutual communication between droplets: when the droplets are in contact, the chemical excitation can be transmitted from droplet to droplet. In other words, one droplet can trigger the reaction in the next! It is also important that an excited droplet cannot be immediately excited once again. Speaking somewhat colloquially, before the next excitation it has to 'have a rest', in order to return to its original state," explains Dr. Gizynski.



Neuromorphic chips: Can a digital 'brain' be in one of your next iPhones?


Neuromorphic chips are being designed to specifically mimic the human brain – and they could soon replace CPUs. Canadian startup Applied Brain Research is one of a wave of companies developing neuromorphic chips – which have several advantages over traditional CPUs.

 

“Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo,” Peter Suma, a trained computer scientist and the other CEO of Applied Brain Research, tells me. Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer.
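Nengo, the compiler mentioned above, is freely available, and its introductory-level usage looks like the sketch below: a population of spiking neurons representing a time-varying signal. This is nothing like the scale of Spaun, just the smallest possible example of the tool.

```python
# Minimal Nengo model: 100 spiking neurons representing (and decoding back)
# a sine wave. Introductory-level usage only, far from the scale of Spaun.
import numpy as np
import nengo

model = nengo.Network(label="minimal example")
with model:
    stimulus = nengo.Node(lambda t: np.sin(2 * np.pi * t))    # input signal
    neurons = nengo.Ensemble(n_neurons=100, dimensions=1)     # spiking population
    nengo.Connection(stimulus, neurons)
    readout = nengo.Probe(neurons, synapse=0.01)               # filtered decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)                                               # simulate one second
print(sim.data[readout][:5])                                   # decoded estimate of sin(2*pi*t)
```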

 

Spaun demonstrated that computers could be made to interact fluidly with the environment, and perform human-like cognitive tasks like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn’t perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9000x faster, using less energy than it would on conventional CPUs – and by the end of 2017, all of Spaun will be running on neuromorphic hardware.

Eliasmith won NSERC’s John C. Polanyi Award for that project – Canada’s highest recognition for a breakthrough scientific achievement – and once Suma came across the research, the pair joined forces to commercialize these tools.

 

“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context aware AIs,” says Suma. Suma points out that while today’s AIs like Siri remain offline until explicitly called into action, we’ll soon have artificial agents that are ‘always on’ and ever-present in our lives.

 

“Imagine a SIRI that listens and sees all of your conversations and interactions. You’ll be able to ask it for things like - "Who did I have that conversation about doing the launch for our new product in Tokyo?" or "What was that idea for my wife's birthday gift that Melissa suggested?,” he says.

 

“The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to, is the fact that the latter always operate in real-time. Bodies and brains are built to work with the physics of the world,” he says.

 

Already, major efforts across the IT industry are heating up to get their AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung, are developing conversational assistants they hope will one day become digital helpers. With the rise of neuromorphics, and tools like Nengo, we could soon have AI’s capable of exhibiting a stunning level of natural intelligence – right on our phones.



Artificial intelligence system beats professional players at poker


Poker isn’t like other games artificial intelligence has mastered, such as chess and Go. In poker, each player has different information than the others, and thus a different perspective on the game. This means poker more closely mirrors the kinds of decisions we make in real life, but it also presents a huge challenge for AI. Now, an AI system called DeepStack has succeeded in untangling this imperfect information, refining its own strategy to win against professional players at a rate nearly 10 times that of a human poker pro.
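DeepStack's strategy refinement builds on counterfactual regret minimization. The sketch below shows the underlying idea in its simplest form, regret matching on rock-paper-scissors: probability shifts toward the actions you regret not having played, and the average strategy converges to the equilibrium. It illustrates the principle only and is not DeepStack's algorithm.

```python
# Regret matching on rock-paper-scissors: a toy illustration of the regret-
# minimization idea behind poker AIs, not DeepStack itself.
import random

ACTIONS = 3                                     # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]   # row player's payoff

def strategy(regrets):
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations=20000):
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = strategy(regrets)
        strategy_sum = [s + p for s, p in zip(strategy_sum, strat)]
        my = random.choices(range(ACTIONS), weights=strat)[0]
        opp = random.choices(range(ACTIONS), weights=strat)[0]   # self-play
        for a in range(ACTIONS):                                  # regret of not playing a
            regrets[a] += PAYOFF[a][opp] - PAYOFF[my][opp]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

print(train())   # the average strategy approaches [1/3, 1/3, 1/3]
```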


Bot wars: Computer bots are more human-like than you might think, having fights lasting years


Researchers say 'benevolent bots', otherwise known as software robots, that are designed to improve articles on Wikipedia sometimes have online 'fights' over content that can continue for years. Editing bots on Wikipedia undo vandalism, enforce bans, check spelling, create links and import content automatically, whereas other bots (which are non-editing) can mine data, identify data or identify copyright infringements.

 

The team analyzed how much they disrupted Wikipedia, observing how they interacted on 13 different language editions over ten years (from 2001 to 2010). They found that bots interacted with one another, whether or not this was by design, and it led to unpredictable consequences. The research paper, published in PLOS ONE, concludes that bots are more like humans than you might expect.

 

Bots appear to behave differently in culturally distinct online environments. The paper says the findings are a warning to those using artificial intelligence for building autonomous vehicles, cyber security systems or for managing social media. It suggests that scientists may have to devote more attention to bots' diverse social life and their different cultures.


Sophisticated AI Predicts Autism From Infant Brain Scans with 81% accuracy


Twenty-two years ago, researchers first reported that adolescents with autism spectrum disorder had increased brain volume. During the intervening years, studies of younger and younger children showed that this brain “overgrowth” occurs in childhood.

 

Now, a team at the University of North Carolina, Chapel Hill, has detected brain growth changes linked to autism in children as young as 6 months old. And it piqued our interest because a deep-learning algorithm was able to use that data to predict whether a child at high-risk of autism would be diagnosed with the disorder at 24 months.

 

The algorithm correctly predicted the eventual diagnosis in high-risk children with 81 percent accuracy and 88 percent sensitivity. That’s pretty damn good compared with behavioral questionnaires, which yield information that leads to early autism diagnoses (at around 12 months old) that are just 50 percent accurate.

 

“This is outperforming those kinds of measures, and doing it at a younger age,” says senior author Heather Hazlett, a psychologist and brain development researcher at UNC.

 

As part of the Infant Brain Imaging Study, a U.S. National Institutes of Health–funded study of early brain development in autism, the research team enrolled 106 infants with an older sibling who had been given an autism diagnosis, and 42 infants with no family history of autism. They scanned each child’s brain—no easy feat with an infant—at 6, 12, and 24 months.

 

The researchers saw no change in any of the babies’ overall brain growth between 6- and 12-month mark. But there was a significant increase in the brain surface area of the high-risk children who were later diagnosed with autism. That increase in surface area was linked to brain volume growth that occurred between ages 12 and 24 months. In other words, in autism, the developing brain first appears to expand in surface area by 12 months, then in overall volume by 24 months.

 

The team also performed behavioral evaluations on the children at 24 months, when they were old enough to begin to exhibit the hallmark behaviors of autism, such as lack of social interest, delayed language, and repetitive body movements. The researchers note that the greater the brain overgrowth, the more severe a child’s autistic symptoms tended to be.

 

Though the new findings confirmed that brain changes associated with autism occur very early in life, the researchers did not stop there. In collaboration with computer scientists at UNC and the College of Charleston, the team built an algorithm, trained it with the brain scans, and tested whether it could use these early brain changes to predict which children would later be diagnosed with autism.

 

It worked well. Using just three variables—brain surface area, brain volume, and gender (boys are more likely to have autism than girls)—the algorithm identified eight out of 10 kids with autism. “That’s pretty good, and a lot better than some behavioral tools,” says Hazlett.

 

To train the algorithm, the team initially used half the data for training and the other half for testing—“the cleanest possible analysis,” according to team member Martin Styner, co-director of the Neuro Image Analysis and Research Lab at UNC. But at the request of reviewers, they subsequently performed a more standard 10-fold analysis, in which data is subdivided into 10 equal parts. Machine learning is then done 10 times, each time with 9 folds used for training and the 10th saved for testing. In the end, the final program gathers together the “testing only” results from all 10 rounds to use in its predictions.
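The 10-fold procedure Styner describes is standard cross-validation. The sketch below runs it with scikit-learn on synthetic stand-in data using the same three inputs (surface area, volume, sex); it is not the UNC team's model or data.

```python
# Standard 10-fold cross-validation on synthetic stand-in data with the same
# three inputs the study used. Illustration only; not the UNC model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 148                                              # roughly the study's cohort size
X = np.column_stack([rng.normal(size=n),             # brain surface area (standardized)
                     rng.normal(size=n),             # brain volume (standardized)
                     rng.integers(0, 2, size=n)])    # sex
y = rng.integers(0, 2, size=n)                       # later diagnosis (fake labels)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
pred = cross_val_predict(LogisticRegression(), X, y, cv=cv)   # each fold tested unseen
print("10-fold accuracy on synthetic data:", accuracy_score(y, pred))
```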

 

Happily, the two types of analyses—the initial 50/50 and the final 10-fold—showed virtually the same results, says Styner. And the team was pleased with the prediction accuracy. “We do expect roughly the same prediction accuracy when more subjects are added,” said co-author Brent Munsell, an assistant professor at College of Charleston, in an email to IEEE. “In general, over the last several years, deep learning approaches that have been applied to image data have proved to be very accurate,” says Munsell.


Brain–Computer Interface Allows Speediest Typing to Date


A new interface system allowed three paralyzed individuals to type words up to four times faster than the speed that had been demonstrated in earlier studies.

 

The promise of brain–computer interfaces (BCIs) for restoring function to people with disabilities has driven researchers for decades, yet few devices are ready for widespread practical use. Several obstacles exist, depending on the application. For typing, however, one important barrier has been reaching speeds sufficient to justify adopting the technology, which usually involves surgery. A study published Tuesday in eLife reports the results of a system that enabled three participants—Degray and two people with amyotrophic lateral sclerosis (ALS, or Lou Gehrig's disease, a neurodegenerative disease that causes progressive paralysis)—to type at the fastest speeds yet achieved using a BCI—speeds that bring the technology within reach of being practically useful. “We're approaching half of what, for example, I could probably type on a cell phone,” says neurosurgeon and co-senior author, Jamie Henderson of Stanford University.

 


The AI Threat Isn’t Skynet — It’s the End of the Middle Class

The world's top AI researchers met to consider the threats posed by their research. The global economy could be the first casualty.

 

In the US, the number of manufacturing jobs peaked in 1979 and has steadily decreased ever since. At the same time, manufacturing output has steadily increased, with the US now producing more goods than any other country but China. Machines aren’t just taking the place of humans on the assembly line. They’re doing a better job. And all this before the coming wave of AI upends so many other sectors of the economy. “I am less concerned with Terminator scenarios,” MIT economist Andrew McAfee said on the first day at Asilomar. “If current trends continue, people are going to rise up well before the machines do.”

 

McAfee pointed to newly collected data that shows a sharp decline in middle class job creation since the 1980s. Now, most new jobs are either at the very low end of the pay scale or the very high end. He also argued that these trends are reversible, that improved education and a greater emphasis on entrepreneurship and research can help feed new engines of growth, that economies have overcome the rise of new technologies before. But after his talk, in the hallways at Asilomar, so many of the researchers warned him that the coming revolution in AI would eliminate far more jobs far more quickly than he expected.

 

Indeed, the rise of driverless cars and trucks is just a start. New AI techniques are poised to reinvent everything from manufacturing to healthcare to Wall Street. In other words, it’s not just blue-collar jobs that AI endangers. “Several of the rock stars in this field came up to me and said: ‘I think you’re low-balling this one. I think you are underestimating the rate of change,'” McAfee says.

 

That threat has many thinkers entertaining the idea of a universal basic income, a guaranteed living wage paid by the government to anyone left out of the workforce. But McAfee believes this would only make the problem worse, because it would eliminate the incentive for entrepreneurship and other activity that could create new jobs as the old ones fade away. Others question the psychological effects of the idea. “A universal basic income doesn’t give people dignity or protect them from boredom and vice,” Etzioni says.

 

At a time when the Trump administration is promising to make America great again by restoring old-school manufacturing jobs, AI researchers aren’t taking those promises too seriously. They know that these jobs are never coming back, thanks in no small part to their own research, which will eliminate so many other kinds of jobs in the years to come as well. At Asilomar, they looked at the real US economy and the real reasons for the “hollowing out” of the middle class. The problem isn’t immigration—far from it. The problem isn’t offshoring or taxes or regulation. It’s technology itself.

Scooped by Dr. Stefan Gruenwald
Scoop.it!

Google tests AI vs AI to see if AI becomes 'aggressive' or cooperates

Google tests AI vs AI to see if AI becomes 'aggressive' or cooperates | Amazing Science | Scoop.it

Google's artificial intelligence subsidiary DeepMind is pitting AI agents against one another to test how they interact with each other and how they would react in various "social dilemmas". In a new study, researchers used two video games – Wolfpack and Gathering – to examine how AI agents change their behavior depending on the environment and situation they are in, drawing on social science and game theory principles.

 

"The question of how and under what circumstances selfish agents cooperate is one of the fundamental questions in the social sciences," DeepMind researchers wrote in a blog post. "One of the simplest and most elegant models to describe this phenomenon is the well-known game of Prisoner's Dilemma from game theory."

Rescooped by Dr. Stefan Gruenwald from Fragments of Science
Scoop.it!

Machine learning kept unstable quantum bits in line – even before they wavered

Machine learning kept unstable quantum bits in line – even before they wavered | Amazing Science | Scoop.it

Imagine predicting your car will break down and being able to replace the faulty part before it becomes a problem. Now Australian physicists have found a way to do this – albeit on a quantum scale.

 

In Nature Communications, they enlisted machine learning to “foresee” the future failure of a quantum bit, or qubit, and make the necessary corrections to stop it from happening.

 

Quantum computing is a potentially world-changing technology, able in principle to complete in minutes tasks that would take current computers thousands of years. But practical, large-scale quantum technology is probably still a long way off.

 

One of the major challenges is maintaining qubits in the delicate, zen-like state of superposition they need to do their business.

Any tiny nudge from the environment – such as the jiggly atom next door – knocks the qubit off balance.

 

So physicists go to great lengths to stabilize qubits, cooling them to more than 200 degrees below zero to reduce atomic jiggling. Still, superposition typically lasts but a tiny fraction of a second, and this cuts quantum number-crunching time short.

 

A team led by Michael Biercuk at the University of Sydney found a new way of stabilizing qubits against noise in the environment. It works by predicting how a qubit will behave and acting preemptively. In a quantum computer, the technique could make qubits twice as stable as before. The team used control theory and machine learning (a kind of artificial intelligence) to estimate how the future of a qubit would play out.

 

Control theory is the branch of engineering that deals with feedback systems, such as the thermostat keeping your room temperature constant. The thermostat reacts to changes in the environment, pumping warm or cool air into the room as needed. Meanwhile, new machine learning algorithms look at how the system behaved in the past and use this information to predict how it will react to future events.

 

First, Biercuk’s team made a qubit by trapping a single ion of ytterbium in a beam of laser light. To train their algorithm, they simulated noise, tweaking the light to disturb the atom in a controlled way. Their algorithm monitored how the qubit responded to these tweaks and made a prediction for how it would behave in future. Next, they let events play out for the qubit to check their algorithm’s accuracy. The longer the algorithm watched the qubit, the more accurate its predictions became. Finally, the team used the predictions to help the system self-correct. The qubit was twice as stable with the algorithm as without it.
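The general idea of "predict the drift, then correct before it bites" can be sketched in a few lines; the linear extrapolation below is a deliberately simplified stand-in for the Sydney team's estimator, with made-up drift and noise values rather than real trapped-ion data.

# Toy illustration of predictive stabilization (not the published algorithm):
# learn how a qubit parameter has been drifting and cancel the *predicted* next value.
import numpy as np

rng = np.random.default_rng(42)
steps, true_drift, noise = 200, 0.01, 0.02   # made-up drift and noise levels

history = []                   # past observations of the drifting parameter
uncorrected, corrected = [], []
value = 0.0

for t in range(steps):
    # Predict the upcoming value from the recent past (simple linear extrapolation).
    if len(history) >= 10:
        recent = np.array(history[-10:])
        slope = np.polyfit(np.arange(10), recent, 1)[0]
        prediction = recent[-1] + slope
    else:
        prediction = value

    # The environment nudges the parameter before any reactive feedback could respond.
    value += true_drift + rng.normal(0.0, noise)
    history.append(value)

    uncorrected.append(abs(value))             # deviation if nothing is done
    corrected.append(abs(value - prediction))  # residual deviation after a pre-emptive correction

print("mean deviation without prediction:", round(float(np.mean(uncorrected)), 4))
print("mean deviation with prediction:   ", round(float(np.mean(corrected)), 4))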


Via Mariaschnee
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Making an AI System that performs in the 75th percentile for American adults on standard IQ tests

Making an AI System that performs in the 75th percentile for American adults on standard IQ tests | Amazing Science | Scoop.it

A Northwestern University team developed a new computational model that performs at human levels on a standard intelligence test. This work is an important step toward making artificial intelligence systems that see and understand the world as humans do.

 

“The model performs in the 75th percentile for American adults, making it better than average,” said Northwestern Engineering’s Ken Forbus. “The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition.”

 

The new computational model is built on CogSketch, an artificial intelligence platform previously developed in Forbus’ laboratory. The platform has the ability to solve visual problems and understand sketches in order to give immediate, interactive feedback. CogSketch also incorporates a computational model of analogy, based on Northwestern psychology professor Dedre Gentner’s structure-mapping theory.
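At its heart, structure-mapping aligns the relations in one domain with those in another rather than comparing surface features. The snippet below is a drastically simplified sketch of that idea using the classic solar-system/atom analogy; it illustrates the principle only and has no connection to CogSketch's actual implementation.

# Drastically simplified structure-mapping sketch (illustration only, not CogSketch).
# Align the entities of a base domain and a target domain so that as many
# relations as possible carry over.
from itertools import permutations

# Each domain is a list of (relation, arg1, arg2) facts.
base = [
    ("attracts", "sun", "planet"),
    ("revolves_around", "planet", "sun"),
    ("more_massive", "sun", "planet"),
]
target = [
    ("attracts", "nucleus", "electron"),
    ("revolves_around", "electron", "nucleus"),
    ("more_massive", "nucleus", "electron"),
]

def score(mapping, base_facts, target_facts):
    """Count base relations that still hold in the target under the entity mapping."""
    mapped = {(rel, mapping[a], mapping[b]) for rel, a, b in base_facts}
    return len(mapped & set(target_facts))

base_entities = ["sun", "planet"]
target_entities = ["nucleus", "electron"]

# Exhaustively try every alignment of base entities onto target entities.
best = max(
    (dict(zip(base_entities, perm)) for perm in permutations(target_entities)),
    key=lambda m: score(m, base, target),
)
print(best)  # {'sun': 'nucleus', 'planet': 'electron'} -- the relational alignment wins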

 

Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science at Northwestern’s McCormick School of Engineering, developed the model with Andrew Lovett, a former Northwestern postdoctoral researcher in psychology. Their research was published online this month in the journal Psychological Review.
