Amazing Science
Scooped by Dr. Stefan Gruenwald onto Amazing Science

Machine-Learning Supercomputer Woven from Idle Computers to Rival Google in Power

Sentient claims to have assembled machine-learning muscle to rival Google by rounding up idle computers.

Recent improvements in speech and image recognition have come as companies such as Google build bigger, more powerful systems of computers to run machine-learning software. Now a relative minnow, a private company called Sentient with only about 70 employees, says it can cheaply assemble even larger computing systems to power artificial-intelligence software. The company’s approach may not be suited to all types of machine learning, a technology that has uses as varied as facial recognition and financial trading. Sentient has not published details, but says it has shown that it can put together enough computing power to produce significant results in some cases.

Sentient’s power comes from linking up hundreds of thousands of computers over the Internet to work together as if they were a single machine. The company won’t say exactly where all the machines it taps into are. But many are idle inside data centers, the warehouse-like facilities that power Internet services such as websites and mobile apps, says Babak Hodjat, cofounder and chief scientist at Sentient. The company pays a data-center operator to make use of its spare machines.

Data centers often have significant numbers of idle machines because they are built to handle surges in demand, such as a rush of sales on Black Friday. Sentient has created software that connects machines in different places over the Internet and puts them to work running machine-learning software as if they were one very powerful computer. That software is designed to keep data encrypted as much as possible so that what Sentient is working on–perhaps for a client–is kept confidential.

Sentient can get up to one million processor cores working together on the same problem for months at a time, says Adam Beberg, principal architect for distributed computing at the company. Google’s biggest machine-learning systems don’t reach that scale, he says. A Google spokesman declined to share details of the company’s infrastructure and noted that results obtained using machine learning are more important than the scale of the computer system behind it. Google uses machine learning widely, in areas such as search, speech recognition and ad targeting.

Beberg helped pioneer the idea of linking up computers in different places to work together on a problem (see “Innovators Under 35: 1999”). He was a founder of one of the first projects to demonstrate that idea at large scale. Its technology led to efforts such as SETI@home and Folding@home, in which millions of people installed software so their PCs could help search for alien life or contribute to molecular biology research.

Sentient was founded in 2007 and has received over $140 million in investment funding, with just over $100 million of that received late last year. The company has so far focused on using its technology to power a machine-learning technique known as evolutionary algorithms. That involves “breeding” a solution to a problem from an initial population of many slightly different algorithms. The best performers of the first generation are used to form the basis of the next, and over successive generations the solutions get better and better.
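The breeding-and-selection loop described above can be sketched in a few lines. This is an illustrative toy, not Sentient's actual system: the objective function, population size, selection fraction, and mutation scheme are all invented for the example.

```python
import random

random.seed(0)

def mutate(genome, rate=0.1, scale=0.05):
    # Each gene has a small chance of a small Gaussian tweak.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(fitness, genome_length=10, pop_size=50, generations=100):
    """Breed a solution from an initial population of slightly different
    candidates: the best performers of each generation seed the next."""
    pop = [[random.random() for _ in range(genome_length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]                  # top 20% survive
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

def fitness(genome):
    # Toy objective: genes should all sit near 0.5 (higher is better).
    return -sum((g - 0.5) ** 2 for g in genome)

best = evolve(fitness)
```

Over successive generations the surviving genomes drift toward the optimum, exactly the "better and better" dynamic the article describes, just at a vastly smaller scale than hundreds of thousands of cores.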

Sentient currently earns some revenue from operating financial-trading algorithms created by running its evolutionary process for months at a time on hundreds of thousands of processors. But the company now plans to use its infrastructure to offer services targeted at industries such as health care or online commerce, says Hodjat.

Kim Flintoff's curator insight, June 13, 2015 3:02 AM

Supercomputing can manifest in many ways - high-speed, high-volume computing will still probably require a dedicated supercomputer with high PFLOPS count as its appeal. 

Many solutions simply require ongoing processing of data and distributed models like this one have been leveraged by systems like SETI for many years.

Variety is still a requirement...


20,000+ FREE Online Science and Technology Lectures from Top Universities

This newsletter is aggregated from over 1,450 news sources.


Arturo Pereira's curator insight, August 12, 2017 2:01 PM
The democratization of knowledge!
Nevermore Sithole's curator insight, September 11, 2017 7:42 AM
FREE Online Science and Technology Lectures from Top Universities

Google’s AI Can Help Predict Where Earthquake Aftershocks Are Most Likely


The destruction that a large earthquake can cause often doesn’t end when the ground stops shaking. Many quakes produce aftershocks: smaller tremors, hours or even days later, caused by the ground’s reaction to the first quake.


These aftershocks can sometimes cause more damage than the primary quake. And though we can usually predict the size of an aftershock, we haven’t been so great at predicting its location.

Now, that could change. Researchers from Harvard University and Google’s AI division have created a neural network that can assess how likely it is that a particular location will experience an aftershock. The best part? It’s more accurate than the best existing models.


They published their study Wednesday in the journal Nature.
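The underlying idea, learning a mapping from per-location features to aftershock likelihood, can be illustrated with a minimal classifier. Everything below is a schematic stand-in: the grid-cell features, labels, and simple logistic model are synthetic inventions for illustration, not the Harvard/Google neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each grid cell around a mainshock gets a feature vector (here, fake
# "stress-change" values) and a label: did an aftershock occur there?
X = rng.normal(size=(500, 3))                  # 500 cells, 3 features each
w_true = np.array([2.0, -1.0, 0.5])            # hidden ground truth
y = (X @ w_true + rng.normal(0, 0.1, 500) > 0).astype(float)

# Fit a logistic model by plain gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))             # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)          # gradient step

accuracy = np.mean(((1 / (1 + np.exp(-(X @ w)))) > 0.5) == y)
```

A real model outputs a likelihood per location, which is exactly what lets forecasters rank sites by aftershock risk rather than just predicting magnitude.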


Scientists Are Warning: Warming Oceans Will Lead to a “Catastrophic” Future


A new study in the journal Science has found that the Earth’s oceans are warming far faster than experts had previously predicted, leading to a bleak outlook among climate scientists, who say the rapid environmental shifts will lead to international disputes, humanitarian crises and deadly freak weather events.


The New York Times, for instance, summarized researchers’ view of the findings as “catastrophic.” “It’s spilling over far beyond just fish, it’s turned into trade wars,” Rutgers professor Malin Pinsky told the newspaper. “It’s turned into diplomatic disputes. It’s led to a breakdown in international relations in some cases.” As the greenhouse effect has intensified, according to the new research, the oceans have borne the brunt of global warming. Readings suggest that 2018 will be the hottest year on record for the planet’s seas, replacing 2017 and 2016 before it.


The effects for weather patterns and marine life are dire, experts warn — and food shortages and displacement will leak into geopolitics long before we scorch life above the waterline as well.

“If the ocean wasn’t absorbing as much heat, the surface of the land would heat up much faster than it is right now,” Pinsky told the Times. “In fact, the ocean is saving us from massive warming right now.”


READ MORE: Ocean Warming Is Accelerating Faster Than Thought, New Research Finds [The New York Times]


AI Can Make Sure Cancer Patients Get Just Enough (but Not Too Much) Treatment


Patients with glioblastoma, a malignant tumor in the brain or spinal cord, typically live no more than five years after receiving their diagnosis. And those five years can be painful — in an effort to minimize the tumor, doctors often prescribe a combination of radiation therapy and drugs that can cause debilitating side effects for patients.


Now, researchers from MIT Media Lab have developed artificial intelligence (AI) that can determine the minimum drug doses needed to effectively shrink glioblastoma patients’ tumors. They plan to present their research at Stanford University’s 2018 Machine Learning for Healthcare conference.


To create an AI that could determine the best dosing regimen for glioblastoma patients, the MIT researchers turned to a training technique known as reinforcement learning (RL). First, they created a testing group of 50 simulated glioblastoma patients based on a large dataset of patients who had previously undergone treatment for the disease. Then they asked their AI to recommend doses of several drugs typically used to treat glioblastoma [temozolomide (TMZ) and a combination of procarbazine, lomustine, and vincristine (PVC)] for each patient at regular intervals (either weeks or months).


After the AI prescribed a dose, it would check a computer model capable of predicting how likely a dose is to shrink a tumor. When the AI prescribed a tumor-shrinking dosage, it received a reward. However, if the AI simply prescribed the maximum dose all the time, it received a penalty. According to the researchers, this need to strike a balance between a goal  and the consequences of an action — in this case, tumor reduction and patient quality of life respectively — is unique in the field of RL. Other RL models simply work toward a goal; for example, DeepMind’s AlphaZero simply has to focus on winning a game.
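The reward-versus-penalty balance described above can be sketched with a toy value-learning loop. This is not the MIT model: the dose levels, the tumor-shrinkage curve, and the toxicity penalty are all invented for illustration of the shaping idea.

```python
import random

random.seed(0)

DOSES = [0.0, 0.25, 0.5, 0.75, 1.0]   # hypothetical dose levels

def reward(dose):
    # Shrinking the tumor earns reward, but a penalty proportional to
    # the dose discourages always prescribing the maximum.
    shrinkage = 1 - (1 - dose) ** 2    # diminishing returns of dose
    toxicity_penalty = 1.2 * dose      # quality-of-life cost
    return shrinkage - toxicity_penalty

# Epsilon-greedy value learning over the dose options.
q = {d: 0.0 for d in DOSES}
for _ in range(5000):
    d = random.choice(DOSES) if random.random() < 0.2 else max(q, key=q.get)
    q[d] += 0.1 * (reward(d) - q[d])   # incremental value update

best_dose = max(q, key=q.get)
```

With this penalty in place the learned policy settles on an intermediate dose rather than the maximum, mirroring the balance between tumor reduction and patient quality of life that the researchers describe.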


“If all we want to do is reduce the mean tumor diameter, and let it take whatever actions it wants, it will administer drugs irresponsibly,” principal investigator Pratik Shah told MIT News. “Instead, we said, ‘We need to reduce the harmful actions it takes to get to that outcome.’”


A new method uses ultrashort deep-ultraviolet pulses to accurately probe real-time chirality changes in (bio)molecular systems


Distinguishing between left-handed and right-handed (“chiral”) molecules is crucial in chemistry and the life sciences, and is commonly done using a method called circular dichroism. However, during biochemical reactions the chiral character of molecules may change. EPFL scientists have for the first time developed a method that uses ultrashort deep-ultraviolet pulses to accurately probe such changes in real-time in (bio)molecular systems.


In nature, certain molecules with the same chemical composition can exist in two different shapes that are mirror images of each other, much like our hands. This property is known as “chirality,” and molecules with different chirality are called enantiomers. Enantiomers can exhibit entirely different chemical or biological properties, and separating them is a major issue in drug development and in medicine.


The method commonly used to detect enantiomers is circular dichroism (CD) spectroscopy. It exploits the fact that light polarized into a circular wave (like a whirlpool) is absorbed differently by left-handed and right-handed enantiomers. Steady-state CD spectroscopy is a major structural tool in (bio)chemical analysis.


During their function, biomolecules undergo structural changes that affect their chiral properties. Probing these in real-time (i.e. between 1 picosecond and 1 nanosecond) provides a view of their biological function, but this has been challenging in the deep-UV spectrum (wavelengths below 300 nm) where most biologically relevant molecules such as amino acids, DNA and peptide helices absorb light.


The limitations are due to the lack of adequate sources of pulsed light and of sensitive detection schemes. But now, the group of Majed Chergui at the Lausanne Centre for Ultrafast Science (EPFL) has developed a setup that allows the visualization of the chiral response of (bio)molecules by CD spectroscopy with a resolution of 0.5 picoseconds.


The setup uses a photoelastic modulator, which is an optical device that can control the polarization of light. In this system, the modulator permits shot-to-shot polarization switching of a 20 kHz femtosecond pulse train in the deep-UV range (250–370 nm). It is then possible to record changes in the chirality of molecules at variable time-delays after they are excited with a short laser pulse.
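The measurement principle, recovering a tiny absorbance difference between alternating left- and right-circularly polarized shots, can be sketched numerically. All numbers below (signal size, noise level, shot count) are invented for illustration; they are not EPFL's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# The modulator alternates left/right circular polarization between
# consecutive shots; the circular dichroism signal is the absorbance
# difference Delta_A = A_L - A_R, averaged over many shot pairs.
true_cd = 1e-4                              # tiny chiral signal (invented)
n_shots = 20_000
A = 0.5 + rng.normal(0.0, 5e-4, n_shots)    # noisy absorbance per shot
A[0::2] += true_cd / 2                      # left-circular shots
A[1::2] -= true_cd / 2                      # right-circular shots

delta_A = A[0::2].mean() - A[1::2].mean()   # recovered CD signal
```

Shot-to-shot switching is what makes this work: because consecutive shots see nearly identical noise, averaging many left/right pairs suppresses drift and recovers a signal far smaller than the per-shot noise.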


First Direct Evidence That Stars Like Our Sun Turn Into Crystals In The Final Stages Of Their Lives


Fifty years after the idea was proposed, astronomers find direct evidence that white dwarfs - the dense, stellar corpses of sun-like stars - can crystallize.


Stars like our sun can turn into crystals in the final stages of their lives, bringing a whole new meaning to those glittering jewels in the sky. Astronomers from the University of Warwick say they’ve found the first direct evidence that white dwarf stars – the dense, stellar corpses of stars like our sun – can crystallize, or turn from a liquid into a solid. The discovery was published Wednesday in the journal Nature.


Astronomers had long suspected such crystallization was possible. But to find direct evidence, the team turned to data gathered by the European Space Agency’s Gaia satellite and analyzed some 15,000 white dwarf candidates. In the process, they uncovered a “pile-up” of stars with colors and luminosities that matched those predicted for crystallized white dwarfs.


The discovery, led by physicist Pier-Emmanuel Tremblay, was announced exactly 50 years after it was first predicted. “All white dwarfs will crystallize at some point in their evolution,” Tremblay said in a media release. “This means that billions of white dwarfs in our galaxy have already completed the process and are essentially crystal spheres in the sky. The sun itself will become a crystal white dwarf in about 10 billion years.”


White dwarfs are extremely dense stars, so the positively charged nuclei in their cores exist as a fluid, the scientists say. But as the star cools, that fluid solidifies and creates a metal core. And because white dwarfs are among our cosmos’ oldest stellar objects, with predictable life stages, astronomers often use them as “clocks” to date surrounding groups of stars. So understanding this crystallization process could bring greater accuracy when scientists assign ages to the stars.


Dark Energy Survey completes six-year mission


After six years of scanning in depth about a quarter of the southern skies, and cataloguing hundreds of millions of distant galaxies, the Dark Energy Survey will finish taking data tomorrow.


DES is an international collaboration that began mapping a 5000-square-degree area of the sky on August 31, 2013, in a quest to understand the nature of dark energy, the mysterious force that is accelerating the expansion of the universe. Using the Dark Energy Camera, a 520-megapixel digital camera mounted on the Blanco 4-meter telescope at the National Science Foundation’s Cerro Tololo Inter-American Observatory in Chile, scientists on DES took data on a total of 758 nights.


Over those nights, they recorded data from a few hundred million distant galaxies. More than 400 scientists from over 25 institutions around the world have been involved in the project, which is hosted by the US Department of Energy’s Fermi National Accelerator Laboratory. The collaboration has already produced about 200 academic papers, with more to come.


According to DES Director Rich Kron, a Fermilab and University of Chicago scientist, those results and the scientists who made them possible are where much of the real accomplishment of DES lies. “First generations of students and post-doctoral researchers on DES are now becoming faculty at research institutions and are involved in upcoming sky surveys,” Kron says. “The number of publications and people involved are a true testament to this experiment. Helping to launch so many careers has always been part of the plan, and it’s been very successful.”


DES remains one of the most sensitive and comprehensive surveys of distant galaxies ever performed. The Dark Energy Camera is capable of seeing light from galaxies billions of light-years away, capturing it in unprecedented quality.  According to Alistair Walker of the National Optical Astronomy Observatory, a DES team member and the Blanco telescope scientist, equipping the telescope with the Dark Energy Camera transformed it into a state-of-the-art survey machine. 


“DECam was needed to carry out DES, but it also created a new tool for discovery, from the Solar System to the distant universe,” Walker says. “For example, 12 new moons of Jupiter were recently discovered with DECam, and the detection of distant star-forming galaxies in the early universe, when the universe was only a few percent of its present age, has yielded new insights into the end of the cosmic dark ages.”


The survey generated 50 terabytes (that’s 50 million megabytes) of data over its six observation seasons. That data is stored and analyzed at the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign. “As observations end, NCSA is proud to continue supporting the science productivity of the collaboration by making refined data releases and serving the data well into the 2020s,” says Don Petravick, principal investigator for the Dark Energy Survey data management team at NCSA.


Now the job of analyzing that data takes center stage. DES has already released a full range of papers based on its first year of data, and scientists are now diving into the rich seam of cataloged images from the first several years of data, looking for clues to the nature of dark energy. The first step in that process, according to Fermilab and University of Chicago scientist Josh Frieman, former director of DES, is to find the signal in all the noise. “We’re trying to tease out the signal of dark energy against a background of all sorts of non-cosmological stuff that gets imprinted on the data,” Frieman says. “It’s a massive ongoing effort from many different people around the world.” 


The DES collaboration continues to release scientific results from their storehouse of data. Highlights from the previous years include:


DES scientists also spotted the first visible counterpart of gravitational waves ever detected, a collision of two neutron stars that occurred 130 million years ago. DES was one of several sky surveys that detected this gravitational-wave source, opening the door to a new kind of astronomy. Recently DES issued its first cosmology results based on supernovae (207 of them taken from the first three years of DES data), using a method which 20 years ago provided the first evidence for cosmic acceleration. More comprehensive results on dark energy are expected within the next few years.


How do Aliens See us? One-Pixel Views of Earth Reveal Seasonal Changes

By averaging satellite images of the Earth down to a single pixel, researchers trace how the planet’s mean color varies over time, results that inform observations of distant exoplanets.


When Al Gore, then U.S. vice president, originally proposed the Deep Space Climate Observatory (DSCOVR) satellite in 1998, he hoped that its detailed images of the Earth’s surface would inspire the public. They have, and now scientists are finding a novel use for these satellite observations that Gore probably never imagined: studying exoplanets. By averaging thousands of high-resolution DSCOVR images down to just one pixel each, a team of scientists was able to determine how the Earth’s average color varies over a year. The team also compared the data with models of the Earth to reveal how environmental conditions like clouds and snow modulate the appearance of distant exoplanets. These results were presented this week at the 233rd Meeting of the American Astronomical Society, held in Seattle, Wash.


You can mimic what Earth might look like from very far away.

Aronne Merrelli, an atmospheric scientist at the Space Science and Engineering Center at the University of Wisconsin–Madison, and his colleagues collected over 5,000 images of the sunlit side of the Earth taken in 2016 by the Earth Polychromatic Imaging Camera (EPIC) on board DSCOVR. The researchers repurposed these EPIC data, which were originally intended to reveal information about the planet’s ozone levels, aerosols suspended in the atmosphere, clouds, and vegetation. “We just smash it down to one pixel,” said Merrelli of the data spanning the ultraviolet, visible, and infrared. “We’re throwing away a lot of information.” This single-pixel view of the Earth is similar to the resolution scientists have of distant planets orbiting other stars, said Merrelli. “You can mimic what Earth might look like from very far away.”
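The "smash it down to one pixel" step is simply a per-band average over the whole image. A minimal sketch, using a random array as a stand-in for an EPIC frame (real frames span ultraviolet to infrared; the shape and values here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in satellite frame: (height, width, spectral bands).
image = rng.uniform(0.0, 1.0, size=(64, 64, 3))

# Average every pixel into a single multi-band value: the one-pixel Earth.
one_pixel = image.mean(axis=(0, 1))

# Seasonal color shifts would show up as, e.g., a changing red/blue ratio
# of this single pixel over a year of frames.
redness = one_pixel[0] / one_pixel[2]
```

Tracking `one_pixel` across thousands of frames is what lets the team trace the planet's mean color through the seasons at exoplanet-like resolution.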


The researchers—a mix of Earth scientists and astronomers—then examined how the planet’s average color varied over seasons. They found that Earth tended to be redder from June through September, probably because of the increase in vegetation in the Northern Hemisphere and a reduction in snow cover.


Merrelli and his team also compared the EPIC observations with a model of the Earth’s surface with its current configuration of landmasses and oceans and varying amounts of clouds, snow, and sea ice. These simulations allowed the scientists to determine the impact of dynamic environmental conditions on the planet’s color. They found that clouds played a large role in dictating the planet’s average color. “This type of investigation definitely lays the groundwork for imaging of Earth-like exoplanets,” said Drake Deming, an astronomer at the University of Maryland not involved in the research.


Engineers can now reverse-engineer 3D models

A system that uses a technique called constructive solid geometry (CSG) is allowing MIT researchers to deconstruct objects and turn them into 3D models, thereby allowing them to reverse-engineer complex things.

The system appeared in a paper entitled “InverseCSG: Automatic Conversion of 3D Models to CSG Trees” by Tao Du, Jeevana Priya Inala, Yewen Pu, Andrew Spielberg, Adriana Schulz, Daniela Rus, Armando Solar-Lezama, and Wojciech Matusik.

“At a high level, the problem is reverse engineering a triangle mesh into a simple tree. Ideally, if you want to customize an object, it would be best to have access to the original shapes — what their dimensions are and how they’re combined. But once you combine everything into a triangle mesh, you have nothing but a list of triangles to work with, and that information is lost,” said Tao Du to 3DPrintingIndustry. “Once we recover the metadata, it’s easier for other people to modify designs.”

The process cuts objects into simple solids that can then be added together to create complex objects. Because 3D scanning is imperfect, the creation of mesh models of various objects rarely leads to a perfect copy of the original. Using this technique, individual parts are cut away, analyzed and reassembled, allowing for a more precise scan.
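Constructive solid geometry itself is easy to illustrate: simple primitives combined by union and difference form a small tree that describes a complex shape, which is exactly the representation InverseCSG tries to recover from a triangle mesh. The sketch below uses signed-distance functions (one common way to realize CSG, chosen here for brevity, not taken from the paper):

```python
# Primitives return a signed distance: negative inside, positive outside.
def sphere(cx, cy, cz, r):
    return lambda x, y, z: ((x - cx)**2 + (y - cy)**2 + (z - cz)**2) ** 0.5 - r

def box(hx, hy, hz):   # axis-aligned box at the origin, half-extents h*
    return lambda x, y, z: max(abs(x) - hx, abs(y) - hy, abs(z) - hz)

# CSG operators combine child shapes into a tree.
def union(a, b):       return lambda x, y, z: min(a(x, y, z), b(x, y, z))
def difference(a, b):  return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# A box with a spherical bite taken out of one corner:
shape = difference(box(1, 1, 1), sphere(1, 1, 1, 0.8))

inside = shape(0, 0, 0) < 0         # the center of the box survives
bitten = shape(0.9, 0.9, 0.9) < 0   # the carved corner does not
```

Editing the tree (changing the sphere's radius, say) regenerates the whole shape, which is precisely the kind of parameterized, modifiable design the researchers want to recover from a flat list of triangles.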

“Further, we demonstrated the robustness of our algorithm by solving examples not describable by our grammar. Finally, since our method returns parameterized CSG programs, it provides a powerful means for end-users to edit and understand the structure of 3D meshes,” said Du.

The system detects primitive shapes and then modifies them. This allows it to recreate almost any object with far better accuracy than in previous versions of the software. It’s a surprisingly cool way to begin hacking hardware in order to understand its shape, volume and stability.




Our body may cure itself of diabetes in the future


Researchers have found that neighboring cells can take over functions of damaged or missing insulin-producing cells. The discovery may lead to new treatments for diabetes.


Diabetes is caused by the inability of damaged or missing insulin-producing cells to make insulin, a hormone that is necessary for regulating blood sugar levels. Many diabetes patients take insulin supplements to regulate these levels.

In collaboration with other international researchers, researchers at the University of Bergen have discovered that glucagon-producing cells in the pancreas can change identity and adapt so that they do the job of their damaged or missing neighboring insulin cells. “We are possibly facing the start of a totally new form of treatment for diabetes, where the body can produce its own insulin, with some start-up help,” says researcher Luiza Ghila at the Raeder Research Lab, Department of Clinical Science, University of Bergen (UiB). The results are published in Nature Cell Biology.

Cells can change identity

For the first time, researchers were able to describe the mechanisms behind this process of cell identity change. It turns out that this is not a passive process but the result of signals from the surrounding cells. In the study, the researchers were able to increase the number of insulin-producing cells to five percent by using a drug that influenced the inter-cell signaling process. So far, the results have only been shown in animal models.

“If we gain more knowledge about the mechanisms behind this cell flexibility, then we could possibly be able to control the process and change more cells’ identities so that more insulin can be produced, ” Ghila explains.

Possible new treatment against cell death

According to the researchers, the new discoveries are not only good news for diabetes treatment.

“The cells´ ability to change identity and function, may be a decisive discovery in treating other diseases caused by cell death, such as Alzheimer´s disease and cellular damage due to heart attacks”, says Luiza Ghila.


Wireless ‘pacemaker for the brain’ could be new standard treatment for neurological disorders


Scientists have developed a new device which can listen to and stimulate electric current in the brain at the same time.


A new neurostimulator developed by engineers at the University of California, Berkeley, can listen to and stimulate electric current in the brain at the same time, potentially delivering fine-tuned treatments to patients with diseases like epilepsy and Parkinson's. The new device, named the WAND, works like a "pacemaker for the brain," monitoring the brain's electrical activity and delivering electrical stimulation if it detects something amiss.


These devices can be extremely effective at preventing debilitating tremors or seizures in patients with a variety of neurological conditions. But the electrical signatures that precede a seizure or tremor can be extremely subtle, and the frequency and strength of electrical stimulation required to prevent them is equally touchy. It can take years of small adjustments by doctors before the devices provide optimal treatment.


WAND, which stands for wireless artifact-free neuromodulation device, is both wireless and autonomous, meaning that once it learns to recognize the signs of tremor or seizure, it can adjust the stimulation parameters on its own to prevent the unwanted movements. And because it is closed-loop -- meaning it can stimulate and record simultaneously -- it can adjust these parameters in real-time.
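A closed-loop device of this kind amounts to a record-detect-stimulate-adjust cycle. The sketch below is purely schematic: the signal model, the threshold rule, and the adaptation constants are invented for illustration and are not WAND's actual algorithm.

```python
import random

random.seed(2)

threshold = 5.0     # stimulation trigger, arbitrary units (invented)
stim_count = 0

def read_channels():
    # Stand-in for a multichannel neural recording: one summary amplitude.
    return random.gauss(3.0, 1.5)

for _ in range(1000):            # the closed loop
    amplitude = read_channels()  # record
    if amplitude > threshold:    # detect a tremor-like signature
        stim_count += 1          # stimulate...
        threshold += 0.01        # ...and adjust parameters autonomously
    else:
        threshold -= 0.001       # relax the trigger when all is quiet
```

The point of the closed loop is that recording and stimulation happen in the same cycle, so the trigger parameters can be tuned in real time instead of over years of clinic visits.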


"The process of finding the right therapy for a patient is extremely costly and can take years. Significant reduction in both cost and duration can potentially lead to greatly improved outcomes and accessibility," said Rikky Muller, assistant professor of electrical engineering and computer sciences at Berkeley.


"We want to enable the device to figure out what is the best way to stimulate for a given patient to give the best outcomes. And you can only do that by listening and recording the neural signatures."


WAND can record electrical activity over 128 channels, or from 128 points in the brain, compared to eight channels in other closed-loop systems. To demonstrate the device, the team used WAND to recognize and delay specific arm movements in rhesus macaques. The device is described in a study that appeared today (Dec. 31, 2018) in Nature Biomedical Engineering.


Bees can count with a very small number of nerve cells in their brains


Bees can solve seemingly clever counting tasks with very small numbers of nerve cells in their brains, according to researchers at Queen Mary University of London. In order to understand how bees count, the researchers simulated a very simple miniature 'brain' on a computer with just four nerve cells -- far fewer than a real bee has.


The 'brain' could easily count small quantities of items when inspecting one item closely and then inspecting the next item closely and so on, which is the same way bees count. This differs from humans who glance at all the items and count them together.
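That sequential, one-item-at-a-time strategy needs little more than an accumulator. The following is my simplification of the idea, not the published four-cell model: a detection unit fires on each close-up inspection, an accumulator unit counts the firings, and a comparison unit answers "more or fewer than N?" without ever viewing the whole scene at once.

```python
def count_by_inspection(items, target):
    """Count by inspecting items one at a time, the way bees do, rather
    than glancing at all items together the way humans do."""
    accumulator = 0
    for item in items:            # sequential close-up inspections
        if item:                  # detection unit fires on an item
            accumulator += 1      # accumulator unit increments
    # Comparison unit: is the running count above the target?
    return "more" if accumulator > target else "fewer-or-equal"

answer = count_by_inspection([1, 1, 1, 1, 0], target=3)   # 4 items vs. 3
```

Because the scene is reduced to a stream of single inspections, the comparison never requires holding all items in view, which is why so little neural hardware suffices.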

In this study, published in the journal iScience, the researchers propose that this clever behavior makes the complex task of counting much easier, allowing bees to display impressive cognitive abilities with minimal brainpower.


Previous studies have shown bees can count up to four or five items, can choose the smaller or the larger number from a group and even choose 'zero' against other numbers when trained to choose 'less'. They might have achieved this not by understanding numerical concepts, but by using specific flight movements to closely inspect items which then shape their visual input and simplifies the task to the point where it requires minimal brainpower.


This finding demonstrates that the intelligence of bees, and potentially other animals, can be mediated by very small numbers of nerve cells, as long as these are wired together in the right way.

The study could also have implications for artificial intelligence because efficient autonomous robots will need to rely on robust, computationally inexpensive algorithms, and could benefit from employing insect-inspired scanning behaviors.


Lead author Dr Vera Vasas, from Queen Mary University of London, said: "Our model shows that even though counting is generally thought to require high intelligence and large brains, it can be easily done with the smallest of nerve cell circuits connected in the right manner. We suggest that using specific flight movements to scan targets, rather than numerical concepts, explains the bees' ability to count. This scanning streamlines the visual input and means a task like counting requires little brainpower."
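The sequential-inspection strategy the researchers describe can be caricatured in a few lines of code (a toy illustration only, not the published four-nerve-cell model): counting becomes a single accumulator that ticks up once per close inspection, with no need to represent all the items at once.

```python
# Toy caricature (not the authors' model) of counting by sequential
# scanning: inspect one item at a time and increment a single accumulator,
# so the whole display never has to be represented simultaneously.

def sequential_count(items, target):
    """Inspect items one by one and report whether the tally matches a
    trained target quantity (e.g. 'choose the display with 3 flowers')."""
    tally = 0
    for _ in items:      # each close inspection ticks the accumulator
        tally += 1
    return tally == target

print(sequential_count(["flower"] * 3, 3))  # True
print(sequential_count(["flower"] * 4, 3))  # False
```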


Researchers Make World's Smallest Tic-Tac-Toe Game Board with DNA


It was just about a year ago that Caltech scientists in the laboratory of Lulu Qian, assistant professor of bioengineering, announced they had used a technique known as DNA origami to create tiles that could be designed to self-assemble into larger nanostructures that carry predesigned patterns. They chose to make the world's smallest version of the iconic Mona Lisa.


The feat was impressive, but the technique had a limitation similar to that of Leonardo da Vinci's oil paints: Once the image was created, it could not easily be changed.


Now, the Caltech team has made another leap forward with the technology. They have created new tiles that are more dynamic, allowing the researchers to reshape already-built DNA structures.


When Caltech's Paul Rothemund (BS '94) pioneered DNA origami more than a decade ago, he used the technique to build a smiley face. Qian's team can now turn that smile into a frown, and then, if they want, turn that frown upside down. And they have gone even further, fashioning a microscopic game of tic-tac-toe in which players place their X's and O's by adding special DNA tiles to the board.


"We developed a mechanism to program the dynamic interactions between complex DNA nanostructures," says Qian. "Using this mechanism, we created the world's smallest game board for playing tic-tac-toe, where every move involves molecular self-reconfiguration for swapping in and out hundreds of DNA strands at once."


Putting all the Pieces Together

That swapping mechanism combines two previously developed DNA nanotechnologies. It uses the building blocks from one and the general concept from the other: self-assembling tiles, which were used to create the tiny Mona Lisa; and strand displacement, which has been used by Qian's team to build DNA robots.


Both technologies make use of DNA's ability to be programmed through the arrangement of its molecules. Each strand of DNA consists of a backbone and four types of molecules known as bases. These bases -- adenine, guanine, cytosine, and thymine, abbreviated as A, T, C, and G -- can be arranged in any order, with the order representing information that can be used by cells, or in this case by engineered nanomachines.
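The base-pairing rule behind all of this programmability is easy to make concrete (a minimal illustration, not the Caltech tile designs themselves): because A pairs with T and C pairs with G, any strand determines a unique complementary strand, which is the property DNA nanotechnology programs against.

```python
# Minimal example of Watson-Crick complementarity: A<->T and C<->G,
# so every strand has exactly one complement, read in reverse.

PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Reverse complement of a DNA strand."""
    return "".join(PAIR[base] for base in reversed(strand))

print(complement("ATCG"))  # CGAT
```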


New Caledonian crows infer the weight of objects from observing their movements in a breeze


Humans use a variety of cues to infer an object's weight, including how easily objects can be moved. For example, if we observe an object being blown down the street by the wind, we can infer that it is light. A team of scientists has now tested whether New Caledonian crows make this type of inference. After being trained that only one type of object (either light or heavy) was rewarded when dropped into a food dispenser, birds observed pairs of novel objects (one light and one heavy) suspended from strings in front of an electric fan. The fan was either on—creating a breeze which buffeted the light, but not the heavy, object—or off, leaving both objects stationary. In subsequent test trials, birds could drop one, or both, of the novel objects into the food dispenser. Despite having no opportunity to handle these objects prior to testing, birds touched the correct object (light or heavy) first in 73% of experimental trials, and were at chance in control trials. These results suggest that birds used pre-existing knowledge about the behavior exhibited by differently weighted objects in the wind to infer their weight, using this information to guide their choices.


Google Brain Is Morphing Into A Translator for Artificial Intelligence

Neural networks are famously incomprehensible, so Been Kim is developing a “translator for humans.”


If a doctor told you that you needed surgery, you would want to know why — and you’d expect the explanation to make sense to you, even if you’d never gone to medical school. Been Kim, a research scientist at Google Brain, believes that we should expect nothing less from artificial intelligence. As a specialist in “interpretable” machine learning, she wants to build AI software that can explain itself to anyone.


Since its ascendance roughly a decade ago, the neural-network technology behind artificial intelligence has transformed everything from email to drug discovery with its increasingly powerful ability to learn from and identify patterns in data. But that power has come with an uncanny caveat: The very complexity that lets modern deep-learning networks successfully teach themselves how to drive cars and spot insurance fraud also makes their inner workings nearly impossible to make sense of, even for AI experts. If a neural network is trained to identify patients at risk for conditions like liver cancer and schizophrenia — as a system called “Deep Patient” was in 2015, at Mount Sinai Hospital in New York — there’s no way to discern exactly which features in the data the network is paying attention to. That “knowledge” is smeared across many layers of artificial neurons, each with hundreds or thousands of connections.


As ever more industries attempt to automate or enhance their decision-making with AI, this so-called black box problem seems less like a technological quirk than a fundamental flaw. DARPA’s “XAI” project (for “explainable AI”) is actively researching the problem, and interpretability has moved from the fringes of machine-learning research to its center. “AI is in this critical moment where humankind is trying to decide whether this technology is good for us or not,” Kim says. “If we don’t solve this problem of interpretability, I don’t think we’re going to move forward with this technology. We might just drop it.”


Kim and her colleagues at Google Brain recently developed a system called “Testing with Concept Activation Vectors” (TCAV), which she describes as a “translator for humans” that allows a user to ask a black box AI how much a specific, high-level concept has played into its reasoning. For example, if a machine-learning system has been trained to identify zebras in images, a person could use TCAV to determine how much weight the system gives to the concept of “stripes” when making a decision.


TCAV was originally tested on machine-learning models trained to recognize images, but it also works with models trained on text and certain kinds of data visualizations, like EEG waveforms. “It’s generic and simple — you can plug it into many different models,” Kim says.
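The mechanics behind a concept score like "stripes" can be sketched roughly as follows (a simplified reconstruction of the published idea, not Google's released code): fit a linear classifier separating network activations of concept examples from random examples, take its normal vector as the concept activation vector (CAV), then measure what fraction of inputs have a class score that increases along that direction.

```python
# Simplified sketch of the TCAV idea (a reconstruction, not Google's code).
# All activations and gradients below are random stand-ins for values
# that would come from a real trained network.
import numpy as np

rng = np.random.default_rng(0)

# Hidden-layer activations for concept examples (e.g. "striped" images)
# versus random counterexamples.
concept_acts = rng.normal(loc=1.0, size=(50, 8))
random_acts = rng.normal(loc=0.0, size=(50, 8))

# Fit a linear separator; its normalized weight vector is the CAV.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 50 + [0] * 50)
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
cav = w[:-1] / np.linalg.norm(w[:-1])

# Gradients of the class score ("zebra") w.r.t. activations, one per input.
grads = rng.normal(size=(100, 8))

# TCAV score: fraction of inputs whose class score increases along the CAV.
tcav_score = float(np.mean(grads @ cav > 0))
print(0.0 <= tcav_score <= 1.0)  # True
```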


Quantum brain computer: Scientists are planning to create a quantum computer that acts like a brain

Combining quantum computing with neural networks could produce AI that can make very complex decisions quickly.


The human brain has amazing capabilities, making it in many ways more powerful than the world’s most advanced computers. So it’s not surprising that engineers have long been trying to copy it. Nowadays, artificial neural networks inspired by the structure of the brain are used to tackle some of the most difficult problems in artificial intelligence (AI). But this approach typically involves building software so information is processed in a similar way to the brain, rather than creating hardware that mimics neurons.


Scientists now hope to build the first dedicated neural network computer, using the latest “quantum” technology rather than AI software. By combining these two branches of computing, they hope to produce a breakthrough which leads to AI that operates at unprecedented speed, automatically making very complex decisions in a very short time. However, they need much more advanced AI if they want to create things like truly autonomous self-driving cars and systems for accurately managing the traffic flow of an entire city in real-time.

Many attempts to build this kind of software involve writing code that mimics the way neurons in the human brain work and combining many of these artificial neurons into a network. Each neuron mimics a decision-making process by taking a number of input signals and processing them to give an output corresponding to either “yes” or “no”. Each input is weighted according to how important it is to the decision. For example, for AI that could tell you which restaurant you would most enjoy going to, the quality of the food may be more important than the location of the table that’s available, so it would be given more weight in the decision-making process. These weights are adjusted in test runs to improve the performance of the network, effectively training the system to work better.
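The weighted-input neuron described above can be sketched in a few lines (the restaurant weights below are illustrative numbers, not from any trained system):

```python
# A minimal artificial "neuron" matching the description in the text:
# weighted inputs, a threshold, and a yes/no output.

def neuron(inputs, weights, threshold=0.5):
    """Return 'yes' if the weighted sum of inputs clears the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return "yes" if total >= threshold else "no"

# Inputs in [0, 1]: (food quality, table location). Food matters more,
# so it gets the larger weight, exactly as the text describes.
weights = (0.8, 0.2)
print(neuron((0.9, 0.1), weights))  # good food, poor table -> "yes"
print(neuron((0.2, 0.9), weights))  # poor food, great table -> "no"
```

Training then amounts to nudging those weights until the network's yes/no answers improve on test runs.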


This was how Google’s AlphaGo software learned the complex strategy game Go, playing against a copy of itself until it was ready to beat the human world champion by four games to one. But the performance of the AI software strongly depends on how much input data it can be trained on (in the case of AlphaGo, it was how often it played against itself).


The new neuromorphic project aims to radically speed up this process and boost the amount of input data that can be processed by building neural networks that work on the principles of quantum mechanics. These networks will not be coded in software, but directly built in hardware made of superconducting electrical circuits. We expect that this will make it easier to scale them up without errors. Traditional computers store data in units known as bits, which can take one of two states, either 0 or 1. Quantum computers store data in “qubits”, which can take on many different states. Every extra qubit added to the system doubles its computing power. This means that quantum computers can process huge amounts of data in parallel (at the same time).
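The doubling mentioned above is easy to make concrete: an n-qubit register spans 2^n basis states, so each added qubit doubles the size of the state space the machine can work over.

```python
# Each added qubit doubles the number of basis states a register spans:
# n qubits -> 2**n amplitudes processed together.

for n in (1, 2, 10, 50):
    print(n, "qubits ->", 2 ** n, "basis states")

# 50 qubits already index about 1.1e15 states, one reason even modest
# quantum machines are hard to simulate on classical hardware.
```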


So far, only small quantum computers that demonstrate parts of the technology have been successfully built. Motivated by the prospect of significantly greater processing power, many universities, tech giants and start-up companies are now working on designs. But none have yet reached a stage where they can outperform existing (non-quantum) computers.


We're one step closer to deciphering rodent languages

UW researchers developed software called DeepSqueak, derived from self-driving car technology, to demystify mouse and rat communication and monitor how our furry analogs fare in the lab.


Rodents like mice and rats have been staples of laboratory research for nearly a century, for good reason: As human proxies for study, they share more than 97 percent of their DNA with our species. They also live shorter lives, have more babies, are cheaper to purchase and maintain and don’t need to sign waivers to give their lives to science. No wonder nearly 85 percent of the 25-million-plus lab animals used today are either rats or mice.


But despite their omnipresence in lab settings, rodent culture itself is still relatively understudied — especially the combination of chirping, bruxing and other behaviors that constitute rodent language. Now, new software called DeepSqueak, outlined in Neuropsychopharmacology and developed by researchers at the University of Washington, could help demystify rodent language to better monitor how our furry analogs fare during experiments.


Nearly 40 years ago, researchers realized that rats and mice use language in the form of ultrasonic vocalizations (USVs) that we would need specialized equipment to hear. “What they do is they whistle, and when you slow it down 10 or 20 times, it sounds just like a bird call,” says Kevin Coffey, a postdoctoral researcher in the Psychiatry and Behavioral Science department at the University of Washington School of Medicine. Coffey has researched rodents (and owned them as pets) for more than a decade, giving him ample time to observe their habits. Researchers like himself have discovered that rodents make these 20 or so types of whistles — which scientists call syllables — depending on the scenario and what they want to accomplish, and structure them together in lots of different combinations, much like language.
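The "slow it down 10 or 20 times" trick in Coffey's description is simple arithmetic: playing a recording back at 1/20 of its original rate divides every frequency by 20, dragging ultrasonic calls down into human hearing range (roughly 20 Hz to 20 kHz). The 75 kHz figure below is illustrative, not from the article:

```python
# Slowing playback by a factor of 20 divides every frequency by 20,
# pulling an ultrasonic call into the audible range.

call_hz = 75_000        # illustrative ultrasonic vocalization frequency
slowdown = 20
audible_hz = call_hz / slowdown
print(audible_hz)       # 3750.0 Hz, comfortably audible and bird-like
```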


Astronomers Clock a Black Hole Spinning at Half the Speed of Light

Researchers have used X-rays to calculate how fast a black hole spins, something that might help them see what happens as black holes age.


Black holes are massive beasts that annihilate anything that dares to cross them. We don’t know a whole lot about these invisible, terrifying bodies, but astronomers have found a new way to study their mysterious behavior.


By observing the X-rays blasting from a star torn apart by a black hole, a team of researchers was able to calculate how fast the black hole spins, clocking it at nearly 50 percent of the speed of light. This marks the first time that astronomers have used X-ray bursts, produced by material orbiting the black hole every 131 seconds, to calculate its spin. The research, which could help correlate a black hole’s age with its spin rate, was published today in the journal Science.


The discovery dates back to November 2014, when astronomers were observing a galaxy 300 million light years from Earth. They saw the galaxy’s central, supermassive black hole lure in and rip apart a passing star. Known as a tidal disruption flare, this event created a blast of X-ray radiation that was strong enough to be seen from Earth. Since black holes don’t emit many X-rays on their own, a group of researchers decided to home in on the event.


And luckily for them, various space telescopes started measuring the black hole’s X-ray emissions after the flare was spotted. After combing through their data, the MIT-led team noticed a peculiar trend. They found that bursts of X-rays were appearing once every 131 seconds near the black hole’s event horizon — the point where it starts to swallow up material. These periodic emissions, which persisted for over 450 days, boosted the black hole’s total X-ray emissions by 40 percent.


Mysterious radio signals from deep space detected


Astronomers have revealed details of mysterious signals emanating from a distant galaxy, picked up by a telescope in Canada. The precise nature and origin of the blasts of radio waves is unknown.


Among the 13 fast radio bursts, known as FRBs, was a very unusual repeating signal, coming from the same source about 1.5 billion light years away. Such an event has only been reported once before, by a different telescope. "Knowing that there is another suggests that there could be more out there," said Ingrid Stairs, an astrophysicist from the University of British Columbia (UBC). "And with more repeaters and more sources available for study, we may be able to understand these cosmic puzzles - where they're from and what causes them."


The CHIME observatory, located in British Columbia's Okanagan Valley, consists of four 100-meter-long, semi-cylindrical antennas, which scan the entire northern sky each day. The telescope only got up and running last year, detecting 13 of the radio bursts almost immediately, including the repeater. The research has now been published in the journal Nature.


"We have discovered a second repeater and its properties are very similar to the first repeater," said Shriharsh Tendulkar of McGill University, Canada. "This tells us more about the properties of repeaters as a population." FRBs are short, bright flashes of radio waves, which appear to be coming from almost halfway across the Universe.

Rescooped by Dr. Stefan Gruenwald from Deep_In_Depth: Deep Learning, ML & DS!

Automatic Speaker Recognition using Transfer Learning AI


Even with today’s frequent technological breakthroughs in speech-interactive devices (e.g., Siri and Alexa), few companies have tried enabling multi-user profiles. Google Home has been the most ambitious in this area, allowing up to six user profiles. The recent boom of this technology is what made the potential for this project very exciting to a team of computer engineers. They also wanted to engage in a project that is a hot topic in deep-learning research, create interesting tools, learn more about neural network architectures, and make original contributions where possible.

The computer scientists sought to create a system able to quickly add user profiles and accurately identify their voices with very little training data, a few sentences at most! This learning from just one or a few samples is known as One-Shot Learning. This article will outline the different phases of the project in detail.
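A common way to build such a system, and a plausible reading of this one (the article gives no implementation details, so everything below is a hedged sketch): a pretrained network maps each utterance to a fixed-length embedding; enrolling a profile from a few sentences means storing an average embedding, and identification is nearest-profile by cosine similarity. The embeddings below are synthetic stand-ins for real encoder outputs.

```python
# Hedged sketch of embedding-based speaker identification; the profile
# vectors are synthetic placeholders for pretrained-encoder outputs.
import numpy as np

rng = np.random.default_rng(1)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend averaged enrollment embeddings, one per user profile.
profiles = {
    "alice": np.ones(16),
    "bob": -np.ones(16),
}

def identify(embedding):
    """Return the enrolled profile most similar to the query embedding."""
    return max(profiles, key=lambda name: cosine(profiles[name], embedding))

# A new clip from "alice" (her profile plus a little noise) matches back.
query = profiles["alice"] + rng.normal(scale=0.1, size=16)
print(identify(query))  # alice
```

Adding a new user then costs only one embedding average, which is what makes the few-sentence enrollment the team describes feasible.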

Via Eric Feuilleaubois

CRISPR might soon create spicy tomatoes by switching on their chili genes


Looking for perfect heat and lots of it? Gene engineers in Brazil think they might be able to create eye-watering tomatoes.


Even though chili peppers and tomato plants diverged from a common ancestor millions of years ago, tomatoes still possess the genetic pathway needed to make capsaicinoids, the molecules that make chilis hot.


Now, Agustin Zsögön from the Federal University of Viçosa in Brazil writes in the journal Trends in Plant Science that gene-editing tools like CRISPR could turn it back on.


Spicy biofactories: Tomatoes are much easier to grow than peppers, so making them hot could turn them into spice factories. “Capsaicinoids are very valuable compounds; they are used in [the] weapons industry for pepper spray, they are also used for anaesthetics [and] there is some research showing that they promote weight loss,” he told the Guardian.


Strange fruit: Tomatoes are not the first food that scientists have suggested could be given an unusual new twist using CRISPR. Sweeter strawberries, non-browning mushrooms, and tastier ground-cherries have all been either attempted or mooted in the past.


Humans and Leaf-Cutter Ants Contribute to Global Warming through Carbon Dioxide Emissions


Humans are not the only animals to build elaborate housing and grow crops—or to add carbon dioxide (CO2) to the atmosphere through their industry. A new study shows that the leaf-cutter ant Atta cephalotes is also a master builder and cultivator and a significant source of greenhouse gas emissions.


Found in ecosystems throughout the New World, Atta species excavate massive, several-meter-deep underground nests that include complex tunnels and chambers, exits, and entrances. The ants drag vast quantities of vegetation into the nests to feed their main food source: a fungus called Leucoagaricus gongylophorus. To maintain the proper concentrations of CO2 and oxygen belowground, the nests also feature air vents and chimney-like turrets that enhance ventilation.


Warming soils are releasing ever-increasing amounts of CO2 into the atmosphere, but most climate models don’t account for contributions from animals such as Atta cephalotes, which stir up the soil and release gases at a faster rate. To determine just how much CO2 leaf-cutter ant nests emit, Fernandez-Bou et al. spent more than 2 years monitoring 15 nests in La Selva Biological Station, a rainforest research station in northeast Costa Rica. Each study site included a leaf-cutter ant nest and a similar plot of nestless soil, where the team inserted stainless-steel tubes to collect gas at different depths and measured CO2 emissions from soils and nest openings.


They found that the amount of CO2 wafting from the ant nests and the surrounding soils was 15%–60% higher than from nearby nestless soils. Nest openings were the major source of this increase, with emissions up to 100,000 times greater than from control soil plots.


Inexpensive, efficient bi-metallic electro-catalysts may open floodgates for hydrogen fuel


Investigations into non-precious metal catalysts for hydrogen evolution are ongoing. Here, the authors report that a hierarchical nanoporous copper-titanium bimetallic electrocatalyst is able to produce hydrogen from water under a mild overpotential at more than twice the rate of a state-of-the-art carbon-supported platinum catalyst. Although both copper and titanium are known to be poor hydrogen evolution catalysts, the combination of these two elements creates unique copper-copper-titanium hollow sites, which have a hydrogen-binding energy very similar to that of platinum, resulting in an exceptional hydrogen evolution activity. In addition, the hierarchical porosity of the nanoporous copper-titanium catalyst also contributes to its high hydrogen evolution activity, because it provides a large surface area for electrocatalytic hydrogen evolution and improves the mass transport properties. Moreover, the catalyst is self-supported, eliminating the overpotential associated with the catalyst/support interface.


Feisty hummingbirds prioritize fighting over feeding: Some male hummingbirds have weaponized their bills

Most hummingbirds have bills and tongues exquisitely designed to slip inside a flower, lap up nectar and squeeze every last drop of precious sugar water from their tongue to fuel their frenetic lifestyle.

But in the tropics of South America, University of California, Berkeley, scientists are finding that some male hummers have traded efficient feeding for bills that are better at stabbing and plucking other hummingbirds as they fend off rivals for food and mates. The males' weaponized bills are good not only for pulling feathers and pinching skin, but also wrestling their rivals away from prime feeding spots.

Using high-speed video cameras, the researchers have for the first time captured hummingbird fencing and feeding strategies in slow motion to document the various ways the birds use their bills to fight and the trade-offs they accept when choosing fighting over feeding prowess.

"We understand hummingbirds' lives as being all about drinking efficiently from flowers, but then suddenly we see these weird morphologies -- stiff bills, hooks and serrations like teeth -- that don't make any sense in terms of nectar collection efficiency," said Alejandro Rico-Guevara, a Miller Postdoctoral Fellow at UC Berkeley and the lead scientist on the project. "Looking at these bizarre bill tips, you would never expect that they're from a hummingbird or that they would be useful to squeeze the tongue."

Straighter bills are better for poking, which may explain why in some species females have curved bills to sip inside the curved bells of flowers but the males' beaks are less curved. This has sometimes forced the males to feed on different flowers than the females, ones more adapted to a straighter beak.

"It is all about feeding efficiency in flowers versus proficiency in fighting," he said.

Rico-Guevara acknowledged that hummingbirds have long been known as fierce fighters -- they even attack hawks, owls and other birds if they perceive a threat -- but the fights happen so fast that scientists haven't been able to see the actual outcome.

"Because it happens so fast and they fly away, you can't track them," he said. "But also, people haven't actually looked at the details of the beaks. We are making connections between how feisty they are, the beak morphology behind that and what that implies for their competitiveness."

Rico-Guevara is the lead author of a paper describing how the shape of the bill affects hummingbird feeding and fighting strategies in the January 2019 issue of the journal Integrative Organismal Biology.

NASA's first mission to the Kuiper Belt: New Horizon Spacecraft flew past Ultima Thule


NASA's New Horizons spacecraft flew past Ultima Thule in the early hours of New Year's Day, ushering in the era of exploration from the enigmatic Kuiper Belt, a region of primordial objects that holds keys to understanding the origins of the solar system.


"Congratulations to NASA's New Horizons team, Johns Hopkins Applied Physics Laboratory and the Southwest Research Institute for making history yet again. In addition to being the first to explore Pluto, today New Horizons flew by the most distant object ever visited by a spacecraft and became the first to directly explore an object that holds remnants from the birth of our solar system," said NASA Administrator Jim Bridenstine. "This is what leadership in space exploration is all about."


Signals confirming that the spacecraft was healthy and had filled its digital recorders with science data on Ultima Thule reached the mission operations center at the Johns Hopkins Applied Physics Laboratory (APL) today at 10:29 a.m. EST, almost exactly 10 hours after New Horizons' closest approach to the object.
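Much of that wait is unavoidable light travel time: at roughly 4 billion miles, even a radio signal moving at light speed needs about six hours to reach Earth one way. A quick back-of-envelope check:

```python
# Light travel time from New Horizons at ~4 billion miles.

distance_miles = 4e9
light_speed_mps = 186_282          # miles per second
delay_hours = distance_miles / light_speed_mps / 3600
print(round(delay_hours, 1))       # 6.0 -- about six hours one way
```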


"New Horizons performed as planned today, conducting the farthest exploration of any world in history -- 4 billion miles from the Sun," said Principal Investigator Alan Stern, of the Southwest Research Institute in Boulder, Colorado. "The data we have look fantastic and we're already learning about Ultima from up close. From here out the data will just get better and better!"


Images taken during the spacecraft's approach -- which brought New Horizons to within just 2,200 miles (3,500 kilometers) of Ultima at 12:33 a.m. EST -- revealed that the Kuiper Belt object may have a shape similar to a bowling pin, spinning end over end, with dimensions of approximately 20 by 10 miles (32 by 16 kilometers).


Another possibility is Ultima could be two objects orbiting each other. Flyby data have already solved one of Ultima's mysteries, showing that the Kuiper Belt object is spinning like a propeller with the axis pointing approximately toward New Horizons. This explains why, in earlier images taken before Ultima was resolved, its brightness didn't appear to vary as it rotated. The team has still not determined the rotation period.


As the science data began its initial return to Earth, mission team members and leadership reveled in the excitement of the first exploration of this distant region of space.


"New Horizons holds a dear place in our hearts as an intrepid and persistent little explorer, as well as a great photographer," said Johns Hopkins Applied Physics Laboratory Director Ralph Semmel. "This flyby marks a first for all of us -- APL, NASA, the nation and the world -- and it is a great credit to the bold team of scientists and engineers who brought us to this point."


"Reaching Ultima Thule from 4 billion miles away is an incredible achievement. This is exploration at its finest," said Adam L. Hamilton, president and CEO of the Southwest Research Institute in San Antonio. "Kudos to the science team and mission partners for starting the textbooks on Pluto and the Kuiper Belt. We're looking forward to seeing the next chapter."


3D-printed robot hand ‘plays’ the piano

Scientists have developed a 3D-printed robotic hand which can play simple musical phrases on the piano by just moving its wrist. And while the robot is no virtuoso, it demonstrates just how challenging it is to replicate all the abilities of a human hand, and how much complex movement can still be achieved through design.


The robotic hand, developed by researchers at the University of Cambridge, was made by 3D-printing soft and rigid materials together to replicate all the bones and ligaments -- but not the muscles or tendons -- in a human hand. Even though this limited the robot hand's range of motion compared to a human hand, the researchers found that a surprisingly wide range of movement was still possible by relying on the hand's mechanical design.


Using this 'passive' movement -- in which the fingers cannot move independently -- the robot was able to mimic different styles of piano playing without changing the material or mechanical properties of the hand. The results, reported in the journal Science Robotics, could help inform the design of robots that are capable of more natural movement with minimal energy use.


Complex movement in animals and machines results from the interplay between the brain (or controller), the environment and the mechanical body. The mechanical properties and design of systems are important for intelligent functioning, and help both animals and machines to move in complex ways without expending unnecessary amounts of energy.


"We can use passivity to achieve a wide range of movement in robots: walking, swimming or flying, for example," said Josie Hughes from Cambridge's Department of Engineering, the paper's first author. "Smart mechanical design enables us to achieve the maximum range of movement with minimal control costs: we wanted to see just how much movement we could get with mechanics alone."


Over the past several years, soft components have begun to be integrated into robotics design thanks to advances in 3D printing techniques, which have allowed researchers to add complexity to these passive systems.
