Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald!

Teaching robots right from wrong

Robots of the future will face tricky dilemmas. Researchers are working on tools to help robots make the right choices and keep people safe.


You’re rushing across the school parking lot to get to your first class on time when you notice a friend is in trouble. She’s texting and listening to music on her headphones. Unawares, she’s also heading straight for a gaping hole in the sidewalk. What do you do? The answer seems pretty simple: Run over and try to stop her before she hurts herself. Who cares if you might be a little late for class?


A decision like this means weighing the consequences of each option: a few minutes of tardiness against your friend's safety. It's an easy call. You don't even have to think hard about it. You make such choices all the time. But what about robots? Can they make such choices? Should a robot stop your friend from falling into the hole? Could it?


Not today’s robots. They simply aren’t smart enough to even realize when someone is in danger. Soon, they might be. Yet without some rules to follow, a robot wouldn’t know the best choice to make.


So robot developers are turning to ethics, the branch of philosophy that studies the difference between right and wrong. With it, they are starting to develop robots that can make basic ethical decisions.


One lab’s robot is mastering the hole scenario. Another can decide not to follow a human’s instructions if they seem unsafe. A third robot is learning how to handle tricky situations in a nursing home. Such research should help robots of the future figure out the best action to take when there are competing choices. This ethical behavior may just become part of their programming. That will allow them to interact with people in safe, predictable ways. In time, robots may actually begin to understand the difference between right and wrong.
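None of the labs' actual systems are shown in code above, but the kind of trade-off in the parking-lot scenario, competing duties weighed against each other, can be sketched as a toy rule system. All duty names, weights, and options here are invented for illustration:

```python
# Hypothetical sketch of an ethical trade-off: duties, weights, and options
# are all invented; no lab's real control system is shown.
DUTIES = {
    "prevent_injury": 10,   # e.g., keeping a person out of the hole
    "obey_schedule": 1,     # e.g., getting to class on time
}

def choose_action(options):
    """Pick the option whose satisfied duties carry the most total weight."""
    return max(options, key=lambda o: sum(DUTIES[d] for d in o["satisfies"]))

options = [
    {"name": "keep_walking", "satisfies": ["obey_schedule"]},
    {"name": "stop_friend",  "satisfies": ["prevent_injury"]},
]
print(choose_action(options)["name"])  # stop_friend
```

The interesting research questions start where this sketch ends: where the weights come from, and what happens when no option satisfies every duty.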

Scooped by Dr. Stefan Gruenwald!

Melding mind and machine: How close are we in 2017?

Brain-computer interfacing is a hot topic in the tech world, with Elon Musk's announcement of his new Neuralink startup. Here, researchers separate what's science from what's currently still fiction.


Just as ancient Greeks fantasized about soaring flight, today’s imaginations dream of melding minds and machines as a remedy to the pesky problem of human mortality. Can the mind connect directly with artificial intelligence, robots and other minds through brain-computer interface (BCI) technologies to transcend our human limitations?


Over the last 50 years, researchers at university labs and companies around the world have made impressive progress toward achieving such a vision. Recently, successful entrepreneurs such as Elon Musk (Neuralink) and Bryan Johnson (Kernel) have announced new startups that seek to enhance human capabilities through brain-computer interfacing.


How close are we really to successfully connecting our brains to our technologies? And what might the implications be when our minds are plugged in?


Eb Fetz, a researcher here at the Center for Sensorimotor Neural Engineering (CSNE), is one of the earliest pioneers to connect machines to minds. In 1969, before there were even personal computers, he showed that monkeys could amplify their brain signals to control a needle moving on a dial. Much of the recent work on BCIs aims to improve the quality of life of people who are paralyzed or have severe motor disabilities.


You may have seen some recent accomplishments in the news: University of Pittsburgh researchers use signals recorded inside the brain to control a robotic arm. Stanford researchers can extract the movement intentions of paralyzed patients from their brain signals, allowing them to use a tablet wirelessly.


Similarly, some limited virtual sensations can be sent back to the brain, by delivering electrical current inside the brain or to the brain surface.


What about our main senses of sight and sound? Very early versions of bionic eyes for people with severe vision impairment have been deployed commercially, and improved versions are undergoing human trials right now. Cochlear implants, on the other hand, have become one of the most successful and most prevalent bionic implants – over 300,000 users around the world use the implants to hear.

Scooped by Dr. Stefan Gruenwald!

Self-driving and electric cars will have tons of strange effects on society


Autonomy and electrification will have bigger impacts on the world than you might expect.


First, a bit of managing expectations: without regulatory incentives, America’s electric car adoption looks like it will be slow to grow, and the first wave of autonomous cars might prove to be rather underwhelming. And while automakers and technology firms are indeed racing to reboot our cars—making these technologies seemingly inevitable—they are likely to take a while to get here.


What's less certain is how they'll change the world. Benedict Evans, a partner at the Silicon Valley venture capital firm Andreessen Horowitz and no stranger to tech trends analysis, has published some thoughts on what he calls second- and third-order effects of the disruption that’s going to play out on our highways. And his insights describe a future made fundamentally different by the technologies.


Consider electrification. We know that losing the internal combustion engine will be good for the planet. But, as Evans points out, a lot will change when the supporting infrastructure for gas guzzlers disappears: many repair shops will be out of a job, because most car maintenance centers on the engine. And gas stations will no longer have a purpose, so what happens to the convenience stores they contain, and to the half of America's tobacco sales that gas stations account for?


As for self-driving cars, every company involved in the nascent industry is keen to point out that autonomous vehicles will crash less frequently than those driven by humans. But the benefits of a car that can drive itself aren't limited to moving folks from A to B: it can also go park itself somewhere usually considered too inconvenient for human passengers, ready to be beckoned when needed. That means that huge swaths of land in the hearts of cities, currently used as parking lots, could be repurposed—potentially upending the real estate market.


These are just a couple of the examples Evans provides, and there are far more to consider. He also traces out large-scale ramifications: for the electricity industry, as home solar storage systems for car charging help solve the problem of peak demand; for commutes, which could lengthen as autonomous cars drive faster and fender-to-fender; and for public transit, as on-demand autonomous vehicles break down the boundaries between cars, taxis, and buses.


But it's the combination of these outcomes that's really interesting. In an America without gas stations and inner-city parking lots, where on-demand transport rivals public transit, and car crashes are nonexistent, the urban landscape is redefined. In Europe, most cities predate cars by centuries, and were always built to be walkable. They could easily revert to type. American cities, on the other hand, have been designed around the car. That means that the way they’re used could change altogether.


Rescooped by Dr. Stefan Gruenwald from levin's linkblog: Knowledge Channel!

Google uses neural networks to translate without transcribing


Google’s latest take on machine translation could make it easier for people to communicate with those speaking a different language, by translating speech directly into text in a language they understand. Machine translation of speech normally works by first converting it into text, then translating that into text in another language. But any error in speech recognition will lead to an error in transcription and a mistake in the translation.


Researchers at Google Brain, the tech giant’s deep learning research arm, have turned to neural networks to cut out the middle step. By skipping transcription, the approach could potentially allow for more accurate and quicker translations. The team trained its system on hundreds of hours of Spanish audio with corresponding English text. In each case, it used several layers of neural networks – computer systems loosely modeled on the human brain – to match sections of the spoken Spanish with the written translation. To do this, it analyzed the waveform of the Spanish audio to learn which parts seemed to correspond with which chunks of written English. When it was then asked to translate, each neural layer used this knowledge to manipulate the audio waveform until it was turned into the corresponding section of written English. “It learns to find patterns of correspondence between the waveforms in the source language and the written text,” says Dzmitry Bahdanau at the University of Montreal in Canada, who wasn’t involved with the work.
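The article gives no implementation details, but the general shape of the pipeline, stacked neural layers that map audio features directly to target-language characters with no intermediate transcript, can be sketched with random, untrained weights. Everything below (layer sizes, character inventory) is illustrative, not Google's model:

```python
import numpy as np

# Toy forward pass: dense layers map audio-frame features straight to
# target-language character scores, skipping transcription entirely.
# Weights are random and untrained; this only shows the data flow.
rng = np.random.default_rng(0)

frames = rng.normal(size=(50, 13))           # 50 audio frames, 13 features each
layers = [rng.normal(size=(13, 64)) * 0.1,
          rng.normal(size=(64, 64)) * 0.1,
          rng.normal(size=(64, 28)) * 0.1]   # 26 letters + space + blank

h = frames
for w in layers[:-1]:
    h = np.tanh(h @ w)                       # hidden representations of the audio
logits = h @ layers[-1]                      # one score per character, per frame
chars = logits.argmax(axis=1)                # most likely character at each frame
print(chars.shape)                           # one character index per frame
```

Training then adjusts the weights so these per-frame character scores line up with the written English paired with each Spanish recording.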


After a learning period, Google's system produced a better-quality English translation of Spanish speech than one that transcribed the speech into written Spanish first. It was evaluated using the BLEU score, which judges machine translations by how close they are to a professional human translation.
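BLEU itself is well defined and easy to compute. A minimal sketch (clipped n-gram precision up to bigrams, plus the standard brevity penalty) looks like this:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams occurring in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    """Simplified BLEU: clipped n-gram precisions and a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat sat on the mat".split()
print(bleu("the cat sat on the mat".split(), ref))  # 1.0 for a perfect match
print(bleu("a cat sat down".split(), ref) < 1.0)    # True: less overlap, lower score
```

Real evaluations use up to 4-grams, smoothing, and multiple references, but the idea is the same: reward n-gram overlap with human translations and penalize output that is too short.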


The system could be particularly useful for translating speech in languages that are spoken by very few people, says Sharon Goldwater at the University of Edinburgh in the UK. International disaster relief teams, for instance, could use it to quickly put together a translation system to communicate with people they are trying to assist. When an earthquake hit Haiti in 2010, says Goldwater, there was no translation software available for Haitian Creole. Goldwater's team is using a similar method to translate speech from Arapaho, a language spoken by only 1,000 or so members of the Native American tribe of the same name, and Ainu, a language spoken by a handful of people in Japan.

Via Levin Chin
Scooped by Dr. Stefan Gruenwald!

This AI software dreams up new drug molecules

Ingesting a heap of drug data allows a machine-learning system to suggest alternatives humans hadn’t tried yet.


A group of scientists now report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This generative model allows efficient search and optimization through open-ended spaces of chemical compounds. The team can train deep neural networks on hundreds of thousands of existing chemical structures to construct two coupled functions: an encoder and a decoder. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts continuous vectors back from this latent space into discrete representations. Continuous representations make it possible to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. They also enable the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. The researchers demonstrate the success of this method in the design of drug-like molecules as well as organic light-emitting diodes.
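The paper's trained networks aren't reproduced here, but the three latent-space operations the summary names (decoding random vectors, perturbing a known structure, interpolating between two) can be illustrated with a toy linear encoder and decoder over made-up feature vectors:

```python
import numpy as np

# Illustrative only: a toy linear encoder/decoder with random weights stands
# in for the paper's trained deep networks over real molecular representations.
rng = np.random.default_rng(1)
W_enc = rng.normal(size=(32, 8))    # molecule feature vector -> 8-dim latent
W_dec = rng.normal(size=(8, 32))    # latent vector -> back toward feature space

encode = lambda x: x @ W_enc
decode = lambda z: z @ W_dec

mol_a, mol_b = rng.normal(size=32), rng.normal(size=32)
z_a, z_b = encode(mol_a), encode(mol_b)

# The three operations named in the text:
random_sample = decode(rng.normal(size=8))               # decode a random vector
perturbed     = decode(z_a + 0.05 * rng.normal(size=8))  # perturb a known molecule
halfway       = decode(0.5 * z_a + 0.5 * z_b)            # interpolate between two
print(halfway.shape)                                     # back in feature space
```

Because the latent space is continuous, a property predictor defined on it can also be differentiated, which is what enables the gradient-based search the text describes.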

Rescooped by Dr. Stefan Gruenwald from Fragments of Science!

The prototype of a chemical computer detects a sphere


Chemical computers are becoming ever more of a reality, as scientists from the Institute of Physical Chemistry of the Polish Academy of Sciences in Warsaw are demonstrating. It turns out that, after an appropriate teaching procedure, even a relatively simple chemical system can perform non-trivial operations. In their most recent computer simulations, the researchers have shown that correctly programmed chemical matrices of oscillating droplets can recognize the shape of a sphere with great accuracy.


Modern computers use electronic signals for their calculations, that is, physical phenomena related to the movement of electric charges. Information can, however, be processed in many ways. For some time now, efforts have been underway worldwide to use chemical signals for this purpose. For the time being, however, the resulting chemical systems perform only the simplest logic operations. Meanwhile, researchers from the Institute of Physical Chemistry of the Polish Academy of Sciences (IPC PAS) in Warsaw have demonstrated that even uncomplicated, easy-to-produce collections of droplets, in which oscillating chemical reactions proceed, can process information in a useful way, e.g. recognizing the shape of a specified three-dimensional object with great accuracy or correctly classifying cancer cells as benign or malignant.


"A lot of the work currently being carried out in laboratories focuses on building chemical equivalents of standard logic gates. We took a different approach to the problem," says Dr. Konrad Gizynski (IPC PAS), and explains: "We investigate systems of a dozen-or-so to a few dozen drops in which chemical signals propagate, and treat each one as a whole, as a kind of neuronal network. It turns out that such networks, even very simple ones, manage well with fairly sophisticated problems after a short teaching procedure. For instance, our newest system has the ability to recognize the shape of a sphere in a set of x, y, z spatial coordinates."


The systems being studied at the IPC PAS work thanks to the Belousov-Zhabotinsky reaction proceeding in individual drops. This reaction is oscillatory: after the completion of one oscillation cycle, the reagents necessary to begin the next cycle are regenerated in the solution. Each droplet is a batch reactor; before its reagents are depleted, a droplet has usually performed from a few dozen to a few hundred oscillations. The time evolution of a droplet is easy to observe, since its catalyst, ferroin, changes color during the cycle. In a thin layer of solution the effect is spectacular: colorful stripes (chemical fronts) traveling in all directions appear in the liquid. Fronts can also be seen in the droplets, but in practice the phase of the cycle is indicated simply by the color of the droplet: when the cycle begins, the droplet rapidly turns blue (excites), after which it gradually returns to its initial red state.


"Our systems basically work by mutual communication between droplets: when the droplets are in contact, the chemical excitation can be transmitted from droplet to droplet. In other words, one droplet can trigger the reaction in the next! It is also important that an excited droplet cannot be immediately excited once again. Speaking somewhat colloquially, before the next excitation it has to 'have a rest', in order to return to its original state," explains Dr. Gizynski.
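This excite-then-rest behavior is the classic dynamics of an excitable medium. A minimal sketch, with invented states and timings rather than real Belousov-Zhabotinsky chemistry, shows an excitation traveling down a chain of droplets:

```python
# Minimal excitable-medium sketch: each "droplet" is resting, excited, or
# refractory; excitation spreads to resting neighbors, and a droplet must
# recover before it can fire again. States and timings are invented.
REST, EXCITED, REFRACTORY = 0, 1, 2

def step(chain):
    nxt = []
    for i, s in enumerate(chain):
        if s == EXCITED:
            nxt.append(REFRACTORY)           # must "have a rest" after firing
        elif s == REFRACTORY:
            nxt.append(REST)                 # recovered, can fire again
        else:
            neighbors = chain[max(0, i - 1):i + 2]
            nxt.append(EXCITED if EXCITED in neighbors else REST)
    return nxt

chain = [EXCITED] + [REST] * 5               # excite the leftmost droplet
for _ in range(5):
    chain = step(chain)
print(chain)  # [0, 0, 0, 0, 2, 1]
```

Each step, the excitation moves one droplet to the right with a refractory droplet trailing it, just as a chemical front passes through the droplet network; the refractory period is what keeps the wave from immediately re-igniting behind itself.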

Via Mariaschnee
Rescooped by Dr. Stefan Gruenwald from artificial intelligence for students!

Neuromorphic chips: Can a digital 'brain' be in one of your next iPhones?


Neuromorphic chips are being designed to specifically mimic the human brain – and they could soon replace CPUs. Canadian startup Applied Brain Research is one of a wave of companies developing neuromorphic chips – which have several advantages over traditional CPUs.


“Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo,” Peter Suma, a trained computer scientist and co-CEO of Applied Brain Research, tells me. Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer.


Spaun demonstrated that computers could be made to interact fluidly with the environment and perform human-like cognitive tasks like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn't perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9,000x faster, using less energy than it would on conventional CPUs, and by the end of 2017 all of Spaun will be running on neuromorphic hardware.


Chris Eliasmith won NSERC's John C. Polanyi Award for that project (Canada's highest recognition for a breakthrough scientific achievement), and once Suma came across the research, the pair joined forces to commercialize these tools.


“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context aware AIs,” says Suma. Suma points out that while today’s AIs like Siri remain offline until explicitly called into action, we’ll soon have artificial agents that are ‘always on’ and ever-present in our lives.


“Imagine a Siri that listens to and sees all of your conversations and interactions. You'll be able to ask it things like ‘Who did I have that conversation with about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife's birthday gift that Melissa suggested?’” he says.


“The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to, is the fact that the latter always operate in real-time. Bodies and brains are built to work with the physics of the world,” he says.


Already, companies across the IT industry are racing to get their AI services into the hands of users. Apple, Facebook, Amazon, and even Samsung are developing conversational assistants they hope will one day become digital helpers. With the rise of neuromorphics, and tools like Nengo, we could soon have AIs capable of exhibiting a stunning level of natural intelligence, right on our phones.

Via Scott Turner
Scooped by Dr. Stefan Gruenwald!

Artificial intelligence system beats professional players at poker


Poker isn't like other games artificial intelligence has mastered, such as chess and Go. In poker, each player has a different set of information from the others, and thus a different perspective on the game. This means poker more closely mirrors the kinds of decisions we make in real life, but it also presents a huge challenge for AI. Now, an AI system called DeepStack has succeeded in untangling this imperfect information, refining its own strategy to win against professional players at a rate nearly 10 times that of a human poker pro.

Scooped by Dr. Stefan Gruenwald!

Bot wars: Computer bots are more human-like than you might think, having fights lasting years


Researchers say 'benevolent bots', otherwise known as software robots, that are designed to improve articles on Wikipedia sometimes have online 'fights' over content that can continue for years. Editing bots on Wikipedia undo vandalism, enforce bans, check spelling, create links and import content automatically, whereas other bots (which are non-editing) can mine data, identify data or identify copyright infringements.


The team analyzed how much they disrupted Wikipedia, observing how they interacted on 13 different language editions over ten years (from 2001 to 2010). They found that bots interacted with one another, whether or not this was by design, and it led to unpredictable consequences. The research paper, published in PLOS ONE, concludes that bots are more like humans than you might expect.
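The study's data isn't reproduced here, but the core dynamic, two well-intentioned bots with conflicting rules endlessly reverting each other, is easy to simulate. The bot names and the disputed spelling below are invented:

```python
# Toy illustration (not from the study): two editing bots with conflicting
# "preferred" versions of the same article keep reverting each other.
def run_bots(rounds):
    article, reverts = "colour", 0
    prefs = {"BotA": "color", "BotB": "colour"}   # incompatible style rules
    for r in range(rounds):
        bot = "BotA" if r % 2 == 0 else "BotB"    # bots take turns patrolling
        if article != prefs[bot]:
            article = prefs[bot]                  # revert to preferred version
            reverts += 1
    return article, reverts

final, reverts = run_bots(10)
print(final, reverts)  # every round triggers a revert; the page never settles
```

Neither bot is malicious or even aware of the other; the conflict emerges purely from the interaction of their fixed rules, which is the study's point about unpredictable consequences.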


Bots appear to behave differently in culturally distinct online environments. The paper says the findings are a warning to those using artificial intelligence for building autonomous vehicles, cyber security systems or for managing social media. It suggests that scientists may have to devote more attention to bots' diverse social life and their different cultures.

Scooped by Dr. Stefan Gruenwald!

Sophisticated AI Predicts Autism From Infant Brain Scans with 81% accuracy


Twenty-two years ago, researchers first reported that adolescents with autism spectrum disorder had increased brain volume. During the intervening years, studies of younger and younger children showed that this brain “overgrowth” occurs in childhood.


Now, a team at the University of North Carolina, Chapel Hill, has detected brain growth changes linked to autism in children as young as 6 months old. And it piqued our interest because a deep-learning algorithm was able to use that data to predict whether a child at high risk of autism would be diagnosed with the disorder at 24 months.


The algorithm correctly predicted the eventual diagnosis in high-risk children with 81 percent accuracy and 88 percent sensitivity. That's pretty damn good compared with behavioral questionnaires, which yield early autism diagnoses (at around 12 months old) that are just 50 percent accurate.
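Accuracy and sensitivity are standard confusion-matrix quantities. The counts below are invented to match the reported percentages; the study's raw numbers are not given in the text:

```python
# Worked example with an invented confusion matrix (not the study's data).
tp, fn = 22, 3      # high-risk infants later diagnosed: caught vs. missed
tn, fp = 59, 16     # not diagnosed: correctly cleared vs. false alarms

accuracy    = (tp + tn) / (tp + tn + fp + fn)   # all correct calls / all cases
sensitivity = tp / (tp + fn)                    # share of real cases caught

print(round(accuracy, 2), round(sensitivity, 2))  # 0.81 0.88
```

Reporting both matters: a screen can have high accuracy while still missing many true cases, so sensitivity tells you how rarely an eventual diagnosis slips through.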


“This is outperforming those kinds of measures, and doing it at a younger age,” says senior author Heather Hazlett, a psychologist and brain development researcher at UNC.


As part of the Infant Brain Imaging Study, a U.S. National Institutes of Health–funded study of early brain development in autism, the research team enrolled 106 infants with an older sibling who had been given an autism diagnosis, and 42 infants with no family history of autism. They scanned each child's brain (no easy feat with an infant) at 6, 12, and 24 months.


The researchers saw no change in any of the babies' overall brain growth between the 6- and 12-month marks. But there was a significant increase in the brain surface area of the high-risk children who were later diagnosed with autism. That increase in surface area was linked to brain volume growth that occurred between ages 12 and 24 months. In other words, in autism, the developing brain first appears to expand in surface area by 12 months, then in overall volume by 24 months.


The team also performed behavioral evaluations on the children at 24 months, when they were old enough to begin to exhibit the hallmark behaviors of autism, such as lack of social interest, delayed language, and repetitive body movements. The researchers note that the greater the brain overgrowth, the more severe a child’s autistic symptoms tended to be.


Though the new findings confirmed that brain changes associated with autism occur very early in life, the researchers did not stop there. In collaboration with computer scientists at UNC and the College of Charleston, the team built an algorithm, trained it with the brain scans, and tested whether it could use these early brain changes to predict which children would later be diagnosed with autism.


It worked well. Using just three variables—brain surface area, brain volume, and gender (boys are more likely to have autism than girls)—the algorithm identified eight out of 10 kids with autism. “That's pretty good, and a lot better than some behavioral tools,” says Hazlett.


To train the algorithm, the team initially used half the data for training and the other half for testing—“the cleanest possible analysis,” according to team member Martin Styner, co-director of the Neuro Image Analysis and Research Lab at UNC. But at the request of reviewers, they subsequently performed a more standard 10-fold analysis, in which data is subdivided into 10 equal parts. Machine learning is then done 10 times, each time with 9 folds used for training and the 10th saved for testing. In the end, the final program gathers together the “testing only” results from all 10 rounds to use in its predictions.
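The 10-fold procedure described here is standard and easy to sketch. The model below is a placeholder (majority-label prediction), not the team's deep network; the point is how each sample gets predicted exactly once, by a model that never saw it during training:

```python
# Sketch of 10-fold cross-validation: predictions from each held-out fold
# are pooled to form the final "testing only" results.
def ten_fold_predictions(samples, train, predict, k=10):
    folds = [samples[i::k] for i in range(k)]       # k roughly equal parts
    pooled = []
    for i in range(k):
        held_out = folds[i]
        training = [s for j, f in enumerate(folds) if j != i for s in f]
        model = train(training)                     # fit on the other 9 folds
        pooled += [(s, predict(model, s)) for s in held_out]
    return pooled

# Placeholder model: always predict the majority label seen in training.
train   = lambda data: max(set(lbl for _, lbl in data),
                           key=[lbl for _, lbl in data].count)
predict = lambda model, sample: model

samples = [(i, "autism" if i % 3 == 0 else "typical") for i in range(30)]
preds = ten_fold_predictions(samples, train, predict)
print(len(preds))  # every sample predicted exactly once: 30
```

The appeal over a single 50/50 split is that every subject contributes to testing while still never being scored by a model trained on itself.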


Happily, the two types of analyses—the initial 50/50 and the final 10-fold—showed virtually the same results, says Styner. And the team was pleased with the prediction accuracy. “We do expect roughly the same prediction accuracy when more subjects are added,” said co-author Brent Munsell, an assistant professor at the College of Charleston, in an email to IEEE. “In general, over the last several years, deep learning approaches that have been applied to image data have proved to be very accurate.”

Scooped by Dr. Stefan Gruenwald!

Brain–Computer Interface Allows Speediest Typing to Date


A new interface system allowed three paralyzed individuals to type words up to four times faster than the speed that had been demonstrated in earlier studies.


The promise of brain–computer interfaces (BCIs) for restoring function to people with disabilities has driven researchers for decades, yet few devices are ready for widespread practical use. Several obstacles exist, depending on the application. For typing, one important barrier has been reaching speeds sufficient to justify adopting the technology, which usually involves surgery. A study published Tuesday in eLife reports the results of a system that enabled three participants—Dennis Degray and two people with amyotrophic lateral sclerosis (ALS, or Lou Gehrig's disease, a neurodegenerative disease that causes progressive paralysis)—to type at the fastest speeds yet achieved using a BCI, speeds that bring the technology within reach of being practically useful. “We're approaching half of what, for example, I could probably type on a cell phone,” says neurosurgeon and co-senior author Jamie Henderson of Stanford University.


Scooped by Dr. Stefan Gruenwald!

The AI Threat Isn’t Skynet — It’s the End of the Middle Class

The world's top AI researchers met to consider the threats posed by their research. The global economy could be the first casualty.


In the US, the number of manufacturing jobs peaked in 1979 and has steadily decreased ever since. At the same time, manufacturing output has steadily increased, with the US now producing more goods than any other country but China. Machines aren't just taking the place of humans on the assembly line. They're doing a better job. And all this before the coming wave of AI upends so many other sectors of the economy. “I am less concerned with Terminator scenarios,” MIT economist Andrew McAfee said on the first day at Asilomar. “If current trends continue, people are going to rise up well before the machines do.”


McAfee pointed to newly collected data that shows a sharp decline in middle class job creation since the 1980s. Now, most new jobs are either at the very low end of the pay scale or the very high end. He also argued that these trends are reversible, that improved education and a greater emphasis on entrepreneurship and research can help feed new engines of growth, that economies have overcome the rise of new technologies before. But after his talk, in the hallways at Asilomar, so many of the researchers warned him that the coming revolution in AI would eliminate far more jobs far more quickly than he expected.


Indeed, the rise of driverless cars and trucks is just a start. New AI techniques are poised to reinvent everything from manufacturing to healthcare to Wall Street. In other words, it’s not just blue-collar jobs that AI endangers. “Several of the rock stars in this field came up to me and said: ‘I think you’re low-balling this one. I think you are underestimating the rate of change,'” McAfee says.


That threat has many thinkers entertaining the idea of a universal basic income, a guaranteed living wage paid by the government to anyone left out of the workforce. But McAfee believes this would only make the problem worse, because it would eliminate the incentive for entrepreneurship and other activity that could create new jobs as the old ones fade away. Others question the psychological effects of the idea. “A universal basic income doesn't give people dignity or protect them from boredom and vice,” says AI researcher Oren Etzioni.


At a time when the Trump administration is promising to make America great again by restoring old-school manufacturing jobs, AI researchers aren’t taking him too seriously. They know that these jobs are never coming back, thanks in no small part to their own research, which will eliminate so many other kinds of jobs in the years to come, as well. At Asilomar, they looked at the real US economy, the real reasons for the “hollowing out” of the middle class. The problem isn’t immigration—far from it. The problem isn’t offshoring or taxes or regulation. It’s technology itself.

Scooped by Dr. Stefan Gruenwald!

Google tests AI vs AI to see if AI becomes 'aggressive' or cooperates


Google's artificial intelligence subsidiary DeepMind is pitting AI agents against one another to test how they interact with each other and how they would react in various "social dilemmas". In a new study, researchers said they used two video games – Wolfpack and Gathering – to examine how AI agents change the way they behave based on the environment and situation they are in using social sciences and game theory principles.


"The question of how and under what circumstances selfish agents cooperate is one of the fundamental questions in the social sciences," DeepMind researchers wrote in a blog post. "One of the simplest and most elegant models to describe this phenomenon is the well-known game of Prisoner's Dilemma from game theory."
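The Prisoner's Dilemma payoff structure the researchers mention is standard. With the canonical payoffs, a purely selfish agent defects no matter what its opponent does, which is exactly what makes sustained cooperation the interesting question:

```python
# The standard Prisoner's Dilemma payoff matrix (higher is better).
PAYOFF = {  # (my_move, other_move) -> (my_payoff, other_payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(other_move):
    """A purely selfish agent's best reply, whatever the other agent does."""
    return max(["cooperate", "defect"],
               key=lambda m: PAYOFF[(m, other_move)][0])

print(best_response("cooperate"), best_response("defect"))  # defect defect
# Yet mutual defection pays (1, 1), worse for each agent than mutual
# cooperation's (3, 3): that tension is the dilemma.
```

DeepMind's games embed the same tension in richer environments, where the agents' learned policies, rather than a one-shot matrix, determine whether cooperation emerges.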

Scooped by Dr. Stefan Gruenwald!

Amazon Echo vs. Google Home: Who has the smarter AI?


Amazon's Echo, a Bluetooth speaker powered by voice assistant Alexa, burst on to the scene a couple years ago and instantly captured the hearts and minds of consumers. You could hear the collective cry: voice control is finally here! Of course, there was lots of voice control on the market already (including Apple's O.G. Siri), but Alexa was the first out of the gate that promised—and delivered—on making voice commands useful in the home. With "Skills" being developed for a wealth of tasks (and Alexa being built into a lot of smart products like cars, refrigerators and lamps), it's as easy to ask her to tell you the weather or read you news headlines as it is to have her water your lawn, lock your door or order you a pizza.

But Alexa has a new competitor—Google Home. It's a speaker just like the Echo, but its brains are powered by the all-knowing Google Assistant. It's empowered with all that Google knows about search, plus data from other Google apps (like traffic from Google Maps). It can also get personal, alerting you to meetings on your Google Calendar or changes in your flight information by reading your Gmail Inbox.

The options are seemingly endless for these two equally charming and brainy gadgets, but how would their strengths and weaknesses stack up if they were engaged in a little battle? Which of the two would be the more useful?

Scooped by Dr. Stefan Gruenwald!

Face2Gene: Thanks to AI, Computers Can Now See Your Health Problems


Face2Gene takes advantage of the fact that so many genetic conditions have a tell-tale "face"—a unique constellation of features that can provide clues to a potential diagnosis. It is just one of several new technologies taking advantage of how quickly modern computers can analyze, sort, and find patterns across huge reams of data. They are built on fields of artificial intelligence known as deep learning and neural nets—among the most promising for delivering on AI's 50-year-old promise to revolutionize medicine by recognizing and diagnosing disease.


Genetic syndromes aren't the only diagnoses that could get help from machine learning. The RightEye GeoPref Autism Test can identify the early stages of autism in infants as young as 12 months—the crucial stage where early intervention can make a big difference. Unveiled January 2, 2017 at CES in Las Vegas, the technology uses infrared sensors to track a child's eye movements as they watch a split-screen video: one side fills with people and faces, the other with moving geometric shapes. Children at that age should be much more attracted to faces than to abstract objects, so the amount of time they look at each half can indicate where on the autism spectrum a child might fall.

In validation studies done by the test's inventor, UC San Diego researcher Karen Pierce, the test correctly predicted autism spectrum disorder 86 percent of the time in more than 400 toddlers. That said, it's still pretty new and hasn't yet been approved by the FDA as a diagnostic tool. "In terms of machine learning, it's the simplest test we have," says RightEye's Chief Science Officer Melissa Hunfalvay. "But before this, it was just physician or parent observations that might lead to a diagnosis. And the problem with that is it hasn't been quantifiable."
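The underlying measurement is simple to state: compare how long the infant looks at each half of the screen. A toy sketch of such a preference score (the cutoff below is an arbitrary illustration, not a clinical threshold from Pierce's studies):

```python
def gaze_preference(face_ms, shape_ms):
    """Fraction of total looking time spent on the geometric side.
    Inputs are milliseconds of gaze on each half of the screen."""
    total = face_ms + shape_ms
    if total == 0:
        raise ValueError("no gaze samples recorded")
    return shape_ms / total

# Hypothetical screening cutoff, purely for illustration:
GEO_CUTOFF = 0.69

def flag_for_followup(face_ms, shape_ms, cutoff=GEO_CUTOFF):
    """Flag a trial where geometric preference exceeds the cutoff."""
    return gaze_preference(face_ms, shape_ms) >= cutoff
```

A child who spends 700 ms of a 1000 ms window on the geometric side scores 0.7 and would be flagged under this illustrative rule; one who mostly watches faces would not.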


A similar tool could help with early detection of America's sixth leading cause of death: Alzheimer's disease. Often, doctors don't recognize physical symptoms in time to try any of the disease's few existing interventions. But machine learning hears what doctors can't: signs of cognitive impairment in speech. This is how Toronto-based Winterlight Labs is developing a tool to pick out hints of dementia in its very early stages. Co-founder Frank Rudzicz calls these clues "jitters" and "shimmers": high-frequency wavelets that only computers, not humans, can hear.

Scooped by Dr. Stefan Gruenwald!

Google and other tech giants grapple with the ethical concerns raised by the AI boom

As machines take over more decisions from humans, new questions about fairness, ethics, and morality arise.


With great power comes great responsibility—and artificial-intelligence technology is getting much more powerful. Companies in the vanguard of developing and deploying machine learning and AI are now starting to talk openly about ethical challenges raised by their increasingly smart creations. “We’re here at an inflection point for AI,” said Eric Horvitz, managing director of Microsoft Research, at MIT Technology Review’s EmTech conference this week. “We have an ethical imperative to harness AI to protect and preserve over time.”


Horvitz spoke alongside researchers from IBM and Google pondering similar issues. One shared concern was that recent advances are leading companies to put software in positions with very direct control over humans—for example in health care. Francesca Rossi, a researcher at IBM, gave the example of a machine providing assistance or companionship to elderly people. “This robot will have to follow cultural norms that are culture-specific and task-specific,” she said. “[And] if you were to deploy in the U.S. or Japan, that behavior would have to be very different.” Such robots may still be a ways off, but ethical challenges raised by AI are already here. As businesses and governments rely more on machine-learning systems to make decisions, blind spots or biases in the technology can effectively lead to discrimination against certain types of people.


A ProPublica investigation last year, for example, found that a risk-scoring system used in some states to inform criminal sentencing was biased against blacks. Similarly, Horvitz described how an emotion-recognition service developed at Microsoft for use by businesses was initially inaccurate for small children because it was trained using a grab bag of photos that wasn’t properly curated. Maya Gupta, a researcher at Google, called for the industry to work harder on developing processes to ensure data used to train algorithms isn’t skewed. “A lot of times these data sets are being created in a somewhat automated way,” said Gupta. “We have to think more about, are we sampling enough from minority groups to be sure we did a good enough job?”
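Gupta's point about sampling can be checked mechanically before training. A minimal sketch of such an audit, assuming group labels are available for the training examples (the 10% floor is an arbitrary illustration, not an industry standard):

```python
from collections import Counter

def group_shares(labels):
    """Share of each demographic group in a training set,
    given one group label per training example."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def underrepresented(labels, floor=0.10):
    """Groups whose share falls below a minimum sampling floor.
    The 10% default is illustrative only."""
    return sorted(g for g, s in group_shares(labels).items() if s < floor)
```

Running such a check on the photo set Horvitz describes would have surfaced the missing small-children examples before the emotion-recognition model shipped.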


In the past year, many efforts to research the ethical challenges of machine learning and AI have sprung up in academia and industry. The University of California, Berkeley; Harvard; and the Universities of Oxford and Cambridge have all started programs or institutes to work on ethics and safety in AI. In 2016, Amazon, Microsoft, Google, IBM, and Facebook jointly founded a nonprofit called Partnership on AI to work on the problem (Apple joined in January).


Companies are also taking individual action to build safeguards around their technology. Gupta highlighted research at Google that is testing ways to correct biased machine-learning models, or prevent them from becoming skewed in the first place. Horvitz described Microsoft's internal ethics board for AI, dubbed AETHER, which considers things like new decision algorithms developed for the company's cloud services. Although it is currently populated with Microsoft employees, the company hopes to add outside voices in the future. Google has started its own AI ethics board.


Perhaps unsurprisingly, the companies creating such programs generally argue they obviate the need for new forms of government regulation of artificial intelligence. But at EmTech, Horvitz also encouraged discussion of extreme outcomes that might lead some people to conclude the opposite. In February 2017 he convened a workshop where experts laid out in detail how AI might harm society by doing things like messing with the stock market or election results. “If you’re proactive, you can come up with outcomes that are feasible and put mechanisms in place to disrupt them now,” said Horvitz. That kind of talk seemed to unnerve some of those he shared the stage with in San Francisco. Gupta of Google encouraged people to also consider how taking some decisions out of the hands of humans could make the world more moral than it is now.


“Machine learning is controllable and precise and measurable with statistics,” she said. “There are so many possibilities for more fairness, and more ethical results.”

Scooped by Dr. Stefan Gruenwald!

Newest Machine Learning Trends


In the research arena, Machine Learning is steadily moving away from abstractions and engaging more in business problem solving with support from AI and Deep Learning. In What Is the Future of Machine Learning, Forbes predicts that theoretical research in ML will gradually pave the way for business problem solving. With Big Data making its way back into mainstream business activities, smart (ML) algorithms can now use massive loads of both static and dynamic data to continuously learn and improve their performance.


If the threat of intelligent machines taking over from Data Scientists is really as real as it is made out to be, then 2017 is probably the year when the global Data Science community should take a new look at the capabilities of so-called "smart machines." The repeated failures of autonomous cars have made one point clear: even learning machines cannot surpass the natural thinking faculties of human beings. If autonomous or self-guided machines are to be useful to human society, then current Artificial Intelligence and Machine Learning research should acknowledge the limits of machine power, assign machines the tasks they are suited for, and include human intervention at necessary checkpoints to avert disasters. Repetitive, routine tasks can be handled well by machines, but out-of-the-ordinary situations will still require human intervention.


2017 Machine Learning and Application Development Trends

Gartner’s Top 10 Technology Trends for 2017 predicts that the combined AI and advanced ML practice that ignited about four years ago, and has continued unabated since, will dominate Artificial Intelligence application development in 2017. This potent combination will deliver more systems that "understand, learn, predict, adapt and potentially operate autonomously." Cheap hardware, cheap memory, cheap storage, more processing power, superior algorithms, and massive data streams will all contribute to the success of ML-powered AI applications. There will be a steady rise in ML-powered AI applications in industry sectors like preventive healthcare, banking, finance, and media. For businesses, that means more automated functions and fewer human checkpoints. 2017 Predictions from Forrester suggests that the Artificial Intelligence and Machine Learning Cloud will increasingly feed on IoT data as sensors and smart apps take over every facet of our daily lives.


Democratization of Machine Learning in the Cloud          

Democratization of AI and ML through Cloud technologies, open standards, and the algorithm economy will continue. The growing trend of deploying prebuilt ML algorithms to enable Self-Service Business Intelligence and Analytics is a positive step toward the democratization of ML. In Google Says Machine Learning is the Future, the author champions the democratization of ML through idea sharing. A case in point is Google's TensorFlow, which has championed the need for open standards in Machine Learning. The article claims that almost anyone with a laptop and an Internet connection can dare to be a Machine Learning expert today, provided they have the right mindset. The provisioning of Cloud-based IT services was already a good step toward making advanced Data Science a mainstream activity, and now, with the Cloud and packaged algorithms, mid-sized and smaller businesses will have access to Self-Service BI and Analytics, which until now was only a dream. Mainstream business users will also gradually take an active role in data-centric business systems. Machine Learning Trends – Future AI claims that more enterprises in 2017 will capitalize on the Machine Learning Cloud and do their part to lobby for democratized data technologies.


Platform Wars will Peak in 2017

The platform war between IBM, Microsoft, Google, and Facebook to lead ML developments will peak in 2017. Where Machine Learning Is Headed predicts that 2017 will see tremendous growth of smart apps, digital assistants, and mainstream use of Artificial Intelligence. Although many ML-enabled AI systems have turned into success stories, self-driving cars may die a premature death.


Humans will Make Peace with Machines

Since 2012 the global business community has witnessed a meteoric rise and widespread proliferation of data technologies. Finally, humans will realize that it is time to stop fearing the machines and begin working with them. The InfoWorld article titled Application Development, Docker, Machine Learning Are Top Tech Trends for 2017 asserts that humans and machines will work with each other, not against each other. In this context, readers should review the DATAVERSITY® article The Future of Machine Learning: Trends, Observations, and Forecasts, which reminds readers that as businesses develop a strong dependence on prebuilt ML algorithms for Advanced Analytics, the need for Data Scientists or large IT departments may diminish.


Demand-Supply Gaps in Data Science and Machine Learning will Rise

The business world is steadily heading toward the prophetic 2018 when, according to McKinsey, the first void in data technology expertise will be felt in the US and then gradually in the rest of the world. The demand-supply gap in Data Science and Machine Learning skills will continue to rise until academic programs and industry workshops begin to produce a ready workforce. In response to this sharp rise in the demand-supply gap, more enterprises and academic institutions will collaborate to train future Data Scientists and ML experts. This kind of training will compete with the traditional Data Science classroom and will focus more on practical skills than on theoretical knowledge. KDnuggets will continue to challenge the curious mind by publishing articles like 10 Algorithms that Machine Learning Engineers Should Know. 2017 will witness a steady rise in contributions from KDnuggets and Kaggle in providing alternative training to future Data Scientists and Machine Learning experts through practical skill development.


Algorithm Economy will take Center Stage

Over the next year or two, businesses will be using canned algorithms for all data-centric activities like BI, Predictive Analytics, and CRM. The algorithm economy, which Forbes mentions, will usher in a marketplace where all data companies will compete for space. In 2017, global businesses will engage in Self-Service BI and experience the growth of algorithmic business solutions and ML in the Cloud. So far as algorithm-driven business decision making is concerned, 2017 may actually see two distinct types of algorithm economies. On one hand, average businesses will utilize canned algorithmic models for their operational and customer-facing functions. On the other hand, proprietary ML algorithms will become a market differentiator among large, competing enterprises.

Rescooped by Dr. Stefan Gruenwald from Social Foraging!

How AI researchers built a neural network that learns to speak in just a few hours

The Chinese search giant’s Deep Voice system learns to talk in just a few hours with little or no human interference.


In the battle to apply deep-learning techniques to the real world, one company stands head and shoulders above the competition. Google’s DeepMind subsidiary has used the technique to create machines that can beat humans at video games and the ancient game of Go. And last year, Google Translate services significantly improved thanks to the behind-the-scenes introduction of deep-learning techniques.


So it’s interesting to see how other companies are racing to catch up. Today, it is the turn of Baidu, an Internet search company that is sometimes described as the Chinese equivalent of Google. In 2013, Baidu opened an artificial intelligence research lab in Silicon Valley, raising an interesting question: what has it been up to?


Now Baidu’s artificial intelligence lab has revealed its work on speech synthesis. One of the challenges in speech synthesis is to reduce the amount of fine-tuning that goes on behind the scenes. Baidu’s big breakthrough is to create a deep-learning machine that largely does away with this kind of meddling. The result is a text-to-speech system called Deep Voice that can learn to talk in just a few hours with little or no human interference.


First some background. Text-to-speech systems are familiar in the modern world in navigation apps, talking clocks, telephone answering systems, and so on. Traditionally these have been created by recording a large database of speech from a single individual and then recombining the utterances to make new phrases.


The problem with these systems is that it is difficult to switch to a new speaker or change the emphasis in their words without recording an entirely new database. So computer scientists have been working on another approach. Their goal is to synthesize speech in real time from scratch as it is required.


Last year, Google’s DeepMind made a significant breakthrough in this area. It unveiled a neural network that learns how to speak by listening to the sound waves from real speech while comparing this to a transcript of the text. After training, it was able to produce synthetic speech based on text it was given. Google DeepMind called its system WaveNet.
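The autoregressive idea behind WaveNet (predict each new audio sample from the samples generated so far) can be sketched with a stand-in for the trained network. The `next_sample_dist` function below is hypothetical, standing in for the real model's learned softmax over quantized amplitudes:

```python
import random

def generate(next_sample_dist, length, seed_window, rng=None):
    """Toy autoregressive generation loop in the spirit of WaveNet.
    next_sample_dist(window) returns a dict mapping each candidate
    next sample value to its probability, conditioned on the most
    recent samples; the real network is far larger and operates on
    raw waveforms, but the generation loop has this shape."""
    rng = rng or random.Random(0)
    window = list(seed_window)
    out = []
    for _ in range(length):
        probs = next_sample_dist(window)       # e.g. {sample: prob}
        values, weights = zip(*probs.items())
        sample = rng.choices(values, weights=weights)[0]
        out.append(sample)
        # slide the conditioning window forward
        window = (window + [sample])[-len(seed_window):]
    return out
```

With a deterministic toy distribution such as `lambda w: {w[-1] + 1: 1.0}`, the loop simply counts upward, which makes the feedback of each output into the next prediction easy to see.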


Baidu’s work is an improvement on WaveNet, which still requires some fine-tuning during the training process. WaveNet is also computationally demanding, so much so that it is unclear whether it could ever be used to synthesize speech in real time in the real world.

Via Ashish Umre
Scooped by Dr. Stefan Gruenwald!

Machine Learning Algorithm Deciphers Bat Talk


A machine learning algorithm helped decode the squeaks Egyptian fruit bats make in their roost, revealing that they “speak” to one another as individuals.


Plenty of animals communicate with one another, at least in a general way—wolves howl to each other, birds sing and dance to attract mates and big cats mark their territory with urine. But researchers at Tel Aviv University recently discovered that when at least one species communicates, it gets very specific. Egyptian fruit bats, it turns out, aren't just making high-pitched squeals when they gather together in their roosts. They're communicating specific problems, reports Bob Yirka.


According to Ramin Skibba at Nature, neuroecologist Yossi Yovel and his colleagues recorded a group of 22 Egyptian fruit bats, Rousettus aegyptiacus, for 75 days. Using a modified machine learning algorithm originally designed for recognizing human voices, they fed 15,000 calls into the software. They then analyzed the corresponding video to see if they could match the calls to certain activities.


They found that the bat noises are not just random, as previously thought, reports Skibba. They were able to classify 60 percent of the calls into four categories. One of the call types indicates the bats are arguing about food. Another indicates a dispute about their positions within the sleeping cluster. A third call is reserved for males making unwanted mating advances and the fourth happens when a bat argues with another bat sitting too close. In fact, the bats make slightly different versions of the calls when speaking to different individuals within the group, similar to a human using a different tone of voice when talking to different people. Skibba points out that besides humans, only dolphins and a handful of other species are known to address individuals rather than making broad communication sounds. The research appears in the journal Scientific Reports.
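The classification step described above can be illustrated with a toy nearest-centroid model. The feature names and numbers below are hypothetical, and the study used a modified voice-recognition algorithm rather than this simple scheme:

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(calls_by_context):
    """calls_by_context maps a context label (e.g. 'food', 'sleep spot',
    'mating', 'perch') to a list of feature vectors extracted from
    recorded calls; the features here are invented for illustration,
    e.g. [peak_freq_khz, duration_ms]."""
    return {ctx: centroid(vs) for ctx, vs in calls_by_context.items()}

def classify(model, features):
    """Assign a new call to the nearest context centroid."""
    return min(model, key=lambda ctx: math.dist(model[ctx], features))
```

Training on a few labeled calls per context and assigning new calls to the closest cluster mirrors, in miniature, how 15,000 calls could be sorted into the four argument categories.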


“We have shown that a big bulk of bat vocalizations that previously were thought to all mean the same thing, something like ‘get out of here!’ actually contain a lot of information,” Yovel tells Nicola Davis at The Guardian. By looking even more carefully at stresses and patterns, Yovel says, researchers may be able to tease out even more subtleties in the bat calls.


Scooped by Dr. Stefan Gruenwald!

From Virtual Nurses To Drug Discovery: 106 Artificial Intelligence Startups In Healthcare


Increasingly crowded imaging & diagnostics: 19 out of the 24 companies under imaging & diagnostics raised their first equity funding round since January 2015 (this includes seed or Series A rounds, as well as a first round raised by stealth startup Imagen Technologies). In 2014, Butterfly Networks raised a $100M Series C, backed by Aeris Capital and Stanford University. This was one of the largest equity rounds to an AI in healthcare company, after China-based iCarbonX’s $154M mega-round and two $100M+ raises by oncology-focused Flatiron Health.


Remote patient monitoring: London-based Babylon Health, backed by investors including Kinnevik and Google-owned DeepMind Technologies, raised a $25M Series A round in 2016 to develop an AI-based chat platform. New York-based AiCure raised $12.3M in Series A funding from investors including Biomatics Capital Partners, New Leaf Venture Partners, Pritzker Group Venture Capital, and Tribeca Venture Partners, for the use of artificial intelligence to ensure patients are taking their medications. A California-based startup has developed a virtual nursing assistant, Molly, to follow up with patients post-discharge. The company claims Molly gives clinicians "20% of their day back." Sentrian, backed by investors including Frost Data Capital, analyzes biosensor data and sends patient-specific alerts to clinicians.


Core AI companies bring their algorithms to healthcare: Core AI startup Ayasdi, which has developed a machine intelligence platform based on topological data analysis, is bringing its solutions to healthcare providers for applications including patient risk scoring and readmission reduction. Other core AI startups looking at healthcare include Digital Reasoning Systems.


Top VCs: Khosla Ventures and Data Collective are the top VC investors in healthcare AI startups, having backed 5 unique companies each. Khosla Ventures backed a California-based startup that focuses on patients with depression and anxiety; healthcare analytics platform Lumiata; Israel's Zebra Medical Vision and California-based Bay Labs, which apply AI to medical imaging; as well as drug discovery startup Atomwise. Data Collective backed imaging & diagnostics startups Enlitic, Bay Labs and Freenome, analytics platform CloudmedX, and the previously mentioned Atomwise.


Drug discovery is also gaining attention: Startups are using machine learning algorithms to reduce drug discovery times, and VCs have backed 6 out of the 9 startups on the map. Andreessen Horowitz recently seed-funded twoXAR, developer of the DUMA drug discovery platform; Khosla Ventures and Data Collective backed Atomwise, which published its first findings on Ebola treatment drugs last year and has also partnered with Merck; Lightspeed Venture Partners invested in NuMedii in 2013; Foundation Capital participated in 3 equity funding rounds to Numerate.


AI in oncology: IBM Watson Group-backed Pathway Genomics has recently started a research study for its new blood test kit, CancerIntercept Detect. The company will collect blood samples from high-risk individuals who have never been diagnosed with the disease to determine if early detection is possible. Other oncology-focused startups include Flatiron Health, Cyrcadia (wearable device), CureMetrix, SkinVision, Entopsis, and Smart Healthcare.

Scooped by Dr. Stefan Gruenwald!

Birdsnap: Identifying a bird from a picture with artificial intelligence


Birdsnap is a free electronic field guide covering 500 of the most common North American bird species, available as a web site or an iPhone app. Researchers from Columbia University and the University of Maryland developed Birdsnap using computer vision and machine learning to explore new ways of identifying bird species. Birdsnap automatically discovers visually similar species and makes visual suggestions for how they can be distinguished. In addition, Birdsnap uses visual recognition technology to allow users who upload bird images to search for visually similar species. Birdsnap estimates the likelihood of seeing each species at any location and time of year based on sightings records, and uses this likelihood both to produce a custom guide to local birds for each user and to improve the accuracy of visual recognition.
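Combining a visual classifier's output with a location-and-season prior, as described above, is naturally expressed with Bayes' rule. A minimal sketch with hypothetical species names and scores (Birdsnap's actual model may differ):

```python
def posterior(visual_scores, location_priors):
    """Combine per-species visual recognition scores with a
    location/season prior via Bayes' rule. Both inputs map a
    species name to a probability-like score; the output is a
    normalized posterior over the species the classifier scored."""
    unnorm = {s: visual_scores[s] * location_priors.get(s, 0.0)
              for s in visual_scores}
    z = sum(unnorm.values())
    if z == 0:
        raise ValueError("no scored species is plausible at this location")
    return {s: p / z for s, p in unnorm.items()}
```

With this weighting, a species that the image classifier finds slightly less likely can still win if it is far more common at the user's location and time of year, and an implausible sighting is suppressed even when it looks right.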

The genesis of Birdsnap (and its predecessors Leafsnap and Dogsnap) was the realization that many face-recognition techniques developed by Peter Belhumeur (Columbia University) and David Jacobs (University of Maryland) could also be applied to automatic species identification. State-of-the-art face recognition algorithms rely on methods that find correspondences between comparable parts of different faces, so that, for example, a nose is compared to a nose, and an eye to an eye. In the same way, Birdsnap detects the parts of a bird, so that it can examine the visual similarity of comparable parts. Our first electronic field guide, Leafsnap, produced in collaboration with the Smithsonian Institution, was launched in May 2011.


This free iPhone app uses visual recognition software to help identify tree species from photographs of their leaves. Leafsnap currently covers the trees of the northeastern US and will soon grow to include the trees of the United Kingdom. Leafsnap has been downloaded by over a million users and discussed extensively in the press. In 2012, we launched Dogsnap, an iPhone app that uses visual recognition to help determine dog breeds. Dogsnap contains images and textual descriptions of over 150 breeds of dogs recognized by the American Kennel Club.

For their inspiration and advice on bird identification, we thank the UCSD Computer Vision group, especially Serge Belongie, Catherine Wah, and Grant Van Horn; the Caltech Computational Vision group, especially Pietro Perona, Peter Welinder, and Steve Branson; the alumni of these groups Ryan Farrell (now at BYU), Florian Schroff (at Google), and Takeshi Mita (at Toshiba); and the Visipedia effort.

Scooped by Dr. Stefan Gruenwald!

A Friendly Robot Next Door May Have Just Written This Story


The Washington Post's Heliograf software can autowrite tons of basic stories in no time, which could free up reporters to do more important work — or allow them to just retire.


USA Today has used this AI-driven production software to create short videos. It can condense news articles into a script, string together a selection of images or video footage, and even add narration with a synthesized newscaster voice.


Reuters’ algorithmic prediction tool helps journalists gauge the integrity of a tweet. The tech scores emerging stories on the basis of “credibility” and “newsworthiness” by evaluating who’s tweeting about it, how it’s spreading across the network, and if nearby users have taken to Twitter to confirm or deny breaking developments.
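A scoring system of the kind Reuters describes can be approximated as a weighted combination of signals. A toy sketch with hypothetical feature names and weights (not Reuters' actual model):

```python
def credibility_score(features, weights=None):
    """Toy linear scorer over tweet features normalized to [0, 1].
    The feature names (author_reputation, spread_rate,
    confirmations) and the weights are invented for illustration;
    missing features default to 0."""
    if weights is None:
        weights = {"author_reputation": 0.5,
                   "spread_rate": 0.2,
                   "confirmations": 0.3}
    return sum(w * features.get(name, 0.0) for name, w in weights.items())
```

A tweet from a reputable author that is spreading fast and has nearby confirmations scores near 1.0; an unconfirmed tweet from an unknown account scores near 0, which is the kind of ranking a journalist could use for triage.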


Originally designed to crowdsource reporting from the Republican and Democratic National Conventions, BuzzFeed's software collects information from on-the-ground sources at news events. BuzzBot has since been open-sourced, portending a wave of bot-aided reporting tools.

Scooped by Dr. Stefan Gruenwald!

A Non-Invasive Brain-Computer Interface for Completely Locked-In Patients


Researchers have developed a non-invasive brain-computer interface (BCI) for completely locked-in patients. This is the first time that these patients, with complete motor paralysis but an intact cognitive state, have been able to reliably communicate. A completely locked-in state involves the loss of all motor control, including that of the eye muscles, and until now some researchers suspected that such patients were unable to communicate.


The study, published in PLoS Biology, detailed the researchers’ efforts in developing a non-invasive method to allow four completely locked-in patients to answer “yes or no” questions. The technique involves patients wearing a cap that uses infrared light to measure blood flow in different areas of the brain when they think about responding “yes” or “no” to a question. The researchers trained the patients by asking them control test questions to make sure the system could accurately record their answers, before asking questions about their current lives.
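The train-then-test procedure described above amounts to learning a decision threshold from questions with known answers. A toy sketch (the signal values are hypothetical stand-ins for the study's hemodynamic measurements):

```python
def calibrate(known_trials):
    """known_trials: list of (mean_signal, answer) pairs from the
    control questions whose true answers are known. Returns the
    midpoint between the average 'yes' and average 'no' responses
    as a decision threshold. A toy stand-in for the study's
    classifier, not its actual method."""
    yes = [s for s, a in known_trials if a == "yes"]
    no = [s for s, a in known_trials if a == "no"]
    return (sum(yes) / len(yes) + sum(no) / len(no)) / 2

def decode(mean_signal, threshold):
    """Decode one trial's averaged blood-flow signal as yes/no."""
    return "yes" if mean_signal > threshold else "no"
```

Calibrating on control questions first, exactly as the researchers did, is what lets later open questions be scored without any motor response from the patient.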


Brain-computer interfaces involving implantable electrodes have previously been used successfully in patients with less severe forms of locked-in paralysis. However, those methods required direct implantation of electrodes in the brain. The current method is non-invasive, and it is the first approach that has worked reliably for patients who are completely locked-in.

Scooped by Dr. Stefan Gruenwald!

Laboratory of Artificial Intelligence for Design: Smart Free CAD For The Web


The design of 2D or 3D geometries is involved in one way or another in most of science, art and engineering activities. Modern design tools are powerful and boost the productivity of designers, but they require a lot of training, effort and time to achieve a good understanding and an efficient exploitation.


LAI4D is an R+D project whose aim is to develop an artificial intelligence able to understand the user's ideas regarding spatial imagination. This technology will help improve the communication between designers and design tools by emulating human cognitive abilities for interpreting graphic representations of geometric ideas and other capabilities.


The implementation has been conceived as a dual web application that can work as a 3D rendering widget or as a free, light 3D CAD tool, providing the adequate environment for the project. This CAD tool incorporates a special design assistant in charge of extracting conceptual geometries from pictures or sketches provided by the user as input. Additionally, LAI4D tries to reduce the inherent complexity of professional design tools which, despite being suitable for experienced users, are almost unreachable for people not trained in the use of CAD systems who need only occasional access. The selected implementation approach not only gives users easy access to the tool, but is also an excellent means of building a community of designers who will provide the necessary feedback to make the system bigger and smarter. Follow this link to try the LAI4D designer.


Beginner's tutorial: This tutorial is the recommended introduction for all persons new to LAI4D. It is intended to teach the basics of design in 20 minutes. It shows step by step how to create a simple 3D geometry from the idea up to the publishing of the drawing on the Internet using the easiest tools. Thanks to this exercise the user will understand the working philosophy of LAI4D, will be able to generate polyhedral surfaces and polygonal lines with colors, and will learn to share designs through the free online sharing service of LAI4D. The created geometry can be inspected in the link: cubicle_sample.

Scooped by Dr. Stefan Gruenwald!

New machine-learning algorithms may revolutionize drug discovery — and our understanding of life


A new set of machine-learning algorithms developed by researchers at the University of Toronto Scarborough can generate 3D structures of nanoscale protein molecules that could not be achieved in the past. The algorithms may revolutionize the development of new drug therapies for a range of diseases and may even lead to better understand how life works at the atomic level, the researchers say.


Drugs work by binding to a specific protein molecule and changing the protein’s 3D shape, which alters the way the drug works once inside the body. The ideal drug is designed in a shape that will only bind to a specific protein or group of proteins that are involved in a disease, while eliminating side effects that occur when drugs bind to other proteins in the body.


Since proteins are tiny — about 1 to 100 nanometers — even smaller than the shortest wavelength of visible light, they can’t be seen directly without using sophisticated techniques like electron cryomicroscopy (cryo-EM). Cryo-EM uses high-power microscopes to take tens of thousands of low-resolution images of a frozen protein sample from different positions.

The computational problem is to then piece together the correct high-resolution 3D structure from these 2D images.
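Part of why many low-resolution images can yield one high-resolution structure is simple noise averaging. A toy one-dimensional sketch (real cryo-EM reconstruction must also infer each particle's unknown 3D orientation, which this skips entirely):

```python
import random

def average_noisy_views(truth, n_views, noise, rng=None):
    """Toy version of the averaging idea behind single-particle
    cryo-EM: each 'image' is the true signal plus independent
    Gaussian noise, and averaging many of them recovers the
    signal, since the noise cancels while the signal adds up."""
    rng = rng or random.Random(0)
    acc = [0.0] * len(truth)
    for _ in range(n_views):
        for i, t in enumerate(truth):
            acc[i] += t + rng.gauss(0, noise)
    return [a / n_views for a in acc]
```

Averaging 5,000 views whose noise is as large as the signal itself still recovers the underlying values closely, which is why tens of thousands of low-resolution micrographs contain enough information for a high-resolution structure.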


Existing techniques take several days or even weeks to generate a 3D structure on a cluster of computers, requiring as much as 500,000 CPU hours, according to the researchers. Also, existing techniques often generate incorrect structures unless an expert user provides an accurate guess of the molecule being studied.
