Amazing Science
Rescooped by Dr. Stefan Gruenwald from Robots and Robotics!

Robotic Micro-Scallops Can Swim Through Your Eyeballs


Designing robots on the micro or nanoscale is all about simplicity. There just isn’t room for complex motors or actuation systems. There’s barely room for any electronics whatsoever, not to mention batteries, which is why robots that can swim inside your bloodstream or zip around your eyeballs are often driven by magnetic fields. However, magnetic fields drag around anything and everything that happens to be magnetic, so in general they’re best for controlling just a single microrobot at a time. Ideally, you’d want robots that can swim all by themselves, and a robotic micro-scallop, announced today in Nature Communications, could be the answer.

When we’re thinking about robotic microswimmer motion, the place to start is with understanding how fluids (specifically, biological fluids) work at very small scales. Blood doesn’t behave like water does: blood is what’s called a non-Newtonian fluid. All this means is that blood behaves differently (it changes viscosity, becoming thicker or thinner) depending on how much force you’re exerting on it. The classic example of a non-Newtonian fluid is oobleck, which you can make yourself by mixing one part water with two parts corn starch. Oobleck acts like a liquid until you exert a lot of force on it (say, by rapidly trying to push your hand into it), at which point its viscosity increases to the point where it’s nearly solid.
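The viscosity-versus-force behavior described above is often modeled as a power-law fluid. Here is a minimal sketch of that model; the constants K and n are illustrative, not measured values for blood or oobleck.

```python
# Power-law (Ostwald-de Waele) model of a non-Newtonian fluid: the apparent
# viscosity depends on the shear rate (how hard/fast you push on the fluid).
# n > 1 -> shear-thickening (oobleck-like), n < 1 -> shear-thinning,
# n = 1 -> Newtonian (water). K and n below are illustrative values only.
def apparent_viscosity(shear_rate, K=1.0, n=1.5):
    return K * shear_rate ** (n - 1)

# With n = 1.5, shearing 100x faster makes the fluid look 10x "thicker":
for rate in (0.1, 1.0, 10.0):
    print(rate, apparent_viscosity(rate))
```

With n below 1 the same function instead thins out as the shear rate grows, which is the blood-like case.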

These non-Newtonian fluids represent most of the liquid stuff you have going on in your body (blood, joint fluid, eyeball goo, etc.), which, while it sounds like it would be more complicated to swim through, is actually an opportunity for robots. Here’s why: At very small scales, robotic actuators tend to be simplistic and reciprocal. That is, they move back and forth, as opposed to around and around like you’d see with a traditional motor. In water (or another Newtonian fluid), it’s hard to make a simple swimming robot out of reciprocal motions, because the back-and-forth motion exerts the same amount of force in both directions, and the robot just moves forward a little and backward a little, over and over. Biological microorganisms generally do not use reciprocal motions to get around in fluids for this exact reason, instead relying on nonreciprocal motions of flagella and cilia.

However, if we’re dealing with a non-Newtonian fluid, this rule (actually a theorem, called the scallop theorem) no longer applies, meaning it should be possible to use reciprocal movements to get around. A team of researchers led by Prof. Peer Fischer at the Max Planck Institute for Intelligent Systems, in Germany, has figured out how, and appropriately enough, it’s a microscopic robot based on the scallop.
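A toy calculation shows why reciprocal strokes cancel in a Newtonian fluid but not in a shear-thickening one. The drag model and numbers here are illustrative assumptions, not the model from the Nature Communications paper.

```python
# One reciprocal stroke = a fast half-stroke one way, a slow half-stroke back,
# covering the same amplitude. Momentum imparted to the fluid ~ viscosity *
# speed * time, with a power-law viscosity eta = speed**(n - 1) (toy model).
def net_impulse(n):
    fast, slow = 4.0, 1.0          # closing speed vs. opening speed
    t_fast, t_slow = 1.0, 4.0      # equal amplitudes: speed * time matches
    def impulse(speed, t):
        eta = speed ** (n - 1)     # apparent viscosity at this stroke speed
        return eta * speed * t
    return impulse(fast, t_fast) - impulse(slow, t_slow)

print(net_impulse(1.0))  # Newtonian: half-strokes cancel exactly -> no net swim
print(net_impulse(1.5))  # shear-thickening: fast half-stroke wins -> net thrust
```

Any n other than 1 breaks the symmetry, which is exactly the loophole the micro-scallop exploits.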

Via Kalani Kirk Hausman
Scooped by Dr. Stefan Gruenwald!

Samsung plans to make robots work cheaper than any human worker


Will no job be safe from the machines?

Robots have already started to replace humans in various jobs, leaving workers rightly worried about where future employment will come from, and with the rapid advancement of robot technology this momentum is not going to slow down soon. South Korea’s Ministry of Trade, Industry and Energy has set aside 6.75 billion won ($14.8 million) to help Samsung develop robots that can work in manufacturing facilities.

Samsung and the Ministry aren’t keeping their aim a secret: they are well aware that wages are rising in China and want to bring some of that lucrative manufacturing business over to South Korea by replacing human workers with robots.

If they’re able to create robots that can replace humans on smartphone production lines, then it’s quite likely that contract manufacturers like Foxconn, which assembles iPhones and other Apple devices, will seek out whichever option saves them the most money.

It’s easier said than done, though, and it might take Samsung and South Korea a considerable amount of time to come up with robots that can fully replace humans on the production line. When they’re finally able to do so, it could shake things up considerably in a global manufacturing industry that China has led for quite some time.

Scooped by Dr. Stefan Gruenwald!

Scientist Designs Bio-Inspired Robotic Finger That Looks, Feels and Works Like the Real Thing


Most robotic parts used today are rigid, have a limited range of motion and don’t really look lifelike. Inspired by both nature and biology, a scientist from Florida Atlantic University has designed a novel robotic finger that looks and feels like the real thing. In an article recently published in the journal Bioinspiration & Biomimetics, Erik Engeberg, Ph.D., assistant professor in the Department of Ocean and Mechanical Engineering within the College of Engineering and Computer Science at FAU, describes how he has developed and tested this robotic finger using shape memory alloy (SMA), a 3D CAD model of a human finger, a 3D printer, and a unique thermal training technique.

“We have been able to thermo-mechanically train our robotic finger to mimic the motions of a human finger like flexion and extension,” said Engeberg. “Because of its light weight, dexterity and strength, our robotic design offers tremendous advantages over traditional mechanisms, and could ultimately be adapted for use as a prosthetic device, such as on a prosthetic hand.”

In the study, Engeberg and his team used a resistive heating process called Joule heating, in which an electric current passing through a conductor releases heat. Using a 3D CAD model of a human finger, which they downloaded from a website, they were able to create a solid model of the finger. With a 3D printer, they created the inner and outer molds that housed a flexor and extensor actuator and a position sensor. The extensor actuator takes a straight shape when it’s heated, whereas the flexor actuator takes a curved shape when heated. They used SMA plates and a multi-stage casting process to assemble the finger. An electrical chassis was designed to allow electric currents to flow through each SMA actuator; its U-shaped design directed the current through the SMAs to an electric power source at the base of the finger.
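As a back-of-the-envelope illustration of the Joule heating step: the dissipated power is P = I²R, and dividing by the actuator's heat capacity gives a rough heating rate. All numbers below are illustrative assumptions, not values from the FAU study.

```python
# Joule heating: power dissipated by current I (amps) in resistance R (ohms).
def joule_power(current, resistance):
    return current ** 2 * resistance      # P = I^2 * R, in watts

# Rough heating rate of an SMA plate, ignoring losses to the surroundings:
# dT/dt = P / (m * c), with mass m in kg and specific heat c in J/(kg*K).
def heating_rate(power, mass, specific_heat):
    return power / (mass * specific_heat)

p = joule_power(2.0, 0.5)                 # 2 A through 0.5 ohm -> 2.0 W
print(p, heating_rate(p, 0.001, 460.0))   # roughly 4.3 K/s for a 1 g plate
```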

This new technology used a heating and then a cooling process to operate the robotic finger. As the actuator cooled, the material relaxed slightly. Results from the study showed more rapid flexing and extending motions of the finger, as well as an ability to recover its trained shape more accurately and completely, confirming the biomechanical basis of its trained shape.

Scooped by Dr. Stefan Gruenwald!

NASA: 'Hedgehog' Robots Hop and Tumble in Microgravity

Hedgehog is a new concept for a robot that is specifically designed to overcome the challenges of traversing small solar system bodies.

Hopping, tumbling and flipping over are not typical maneuvers you would expect from a spacecraft exploring other worlds. Traditional Mars rovers, for example, roll around on wheels, and they can't operate upside-down. But on a small body, such as an asteroid or a comet, the low-gravity conditions and rough surfaces make traditional driving all the more hazardous. Enter Hedgehog: a new concept for a robot that is specifically designed to overcome the challenges of traversing small bodies. The project is being jointly developed by researchers at NASA's Jet Propulsion Laboratory in Pasadena, California; Stanford University in Stanford, California; and the Massachusetts Institute of Technology in Cambridge.

"Hedgehog is a different kind of robot that would hop and tumble on the surface instead of rolling on wheels. It is shaped like a cube and can operate no matter which side it lands on," said Issa Nesnas, leader of the JPL team. The basic concept is a cube with spikes that moves by spinning and braking internal flywheels. The spikes protect the robot's body from the terrain and act as feet while hopping and tumbling.
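The flywheel-braking mechanism lends itself to a one-line torque estimate: stopping a flywheel of inertia I spinning at angular speed ω over time Δt applies an average torque of I·ω/Δt to the cube. The numbers below are invented for illustration, not the prototypes' actual parameters.

```python
# Braking an internal flywheel transfers its angular momentum to the cube body,
# producing a torque about a spike in contact with the ground -- enough to
# pivot the robot into a hop or tumble. Illustrative numbers only.
def braking_torque(flywheel_inertia, omega, brake_time):
    # average torque = change in angular momentum / braking time
    return flywheel_inertia * omega / brake_time

# e.g. a 0.01 kg*m^2 flywheel at 100 rad/s braked in 0.1 s acts on the body
print(braking_torque(0.01, 100.0, 0.1))  # torque in N*m
```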

"The spikes could also house instruments such as thermal probes to take the temperature of the surface as the robot tumbles," Nesnas said.

Two Hedgehog prototypes -- one from Stanford and one from JPL -- were tested aboard NASA's C-9 aircraft for microgravity research in June 2015. During 180 parabolas, over the course of four flights, these robots demonstrated several types of maneuvers that would be useful for getting around on small bodies with reduced gravity. Researchers tested these maneuvers on different materials that mimic a wide range of surfaces: sandy, rough and rocky, slippery and icy, and soft and crumbly.

"We demonstrated for the first time our Hedgehog prototypes performing controlled hopping and tumbling in comet-like environments," said Robert Reid, lead engineer on the project at JPL.

Cameron sherrill's curator insight, September 23, 2015 10:50 PM

This is about how they made a new Mars rover that is more durable, and they call it "the Hedgehog." It can now turn and keep going if flipped over and makes sharp turns, and it is adapted to the area more. The Hedgehog hops and can ride in all types of environments, like ice and dirt. The shape is square so it can operate on all sides if it falls over, and it was tested many times.

Alex russell's comment, September 24, 2015 8:09 PM
This topic was interesting because it involved space and I really enjoy learning about space. The Hedgehog is a smart idea because the rover can't go everywhere, but with the Hedgehog we can go anywhere we want on Mars and gather even more research.
Scooped by Dr. Stefan Gruenwald!

Remote robotic surgery is both practical and safe, study finds


The Nicholson Center at Florida Hospital, which trains doctors to use the latest medical technology, including robots, has been testing communication latency between bandwidth-rich environments, such as hospital campuses.

Early results have led researchers to a conclusion that is either astounding or not at all surprising, depending on the depth of your knowledge about network latency and the recent progress of robotic surgery. It turns out that telesurgery, in which a surgeon in one location performs an operation in another with the aid of a robot, could quite easily be practiced today with existing technology. I, for one, was astounded.

"We didn't know if we could stay below the necessary thresholds in terms of latency," says Dr. Roger Smith, CTO of the Nicholson Center, referring to the delay between the moment information is transmitted and the moment it is received. "But it turns out that today's internet has no trouble beating those thresholds. So the barrier to telesurgery really isn't in the technology, but elsewhere."
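The threshold check Dr. Smith describes can be sketched as a simple latency-budget test. The ~200 ms figure below is a commonly cited telesurgery threshold, not a number reported by the Nicholson Center study, and the sample values are invented.

```python
# Sketch: checking measured round-trip latencies against a telesurgery budget.
# THRESHOLD_MS is a commonly cited figure, used here only as an assumption.
THRESHOLD_MS = 200.0

def within_budget(samples_ms, threshold=THRESHOLD_MS):
    # Require every sample under the threshold; for surgery, worst-case
    # jitter matters as much as the average delay.
    return max(samples_ms) < threshold

print(within_budget([35.2, 48.9, 61.0]))   # all samples well under budget
print(within_budget([35.2, 250.4, 61.0]))  # one spike blows the budget
```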

The burgeoning field of robotic surgery is dominated by Intuitive Surgical, which makes the da Vinci Surgical System. Intuitive received FDA clearance for the da Vinci in 2000, though at that time it wasn't clear how readily surgeons would adopt the new technology or how patients would react to it. But the da Vinci proved its usefulness early on by reducing complications associated with prostate removal. Because of the position of the prostate, surgeons have to enter through the abdomen and then tunnel down to reach it. The invasiveness of the procedure carries high risks, and two common complications are incontinence and impotence. The da Vinci uses long pencil-like rods in place of a surgeon's hands, meaning surgeries performed with it are less invasive, significantly reducing complications and recovery times.

Hundreds of thousands of surgeries are now conducted with da Vinci systems each year--virtually every prostate patient with a choice opts for it--and robotic surgery has quickly passed the crucial adoption threshold. Intuitive Surgical now has an $18.2B market cap. Interestingly, many of the patents that Intuitive acquired when constructing the da Vinci came from tech developed with funding from the Department of Defense. It makes sense. The military has a huge interest in robotic surgery. In combat, evacuating casualties to a state of the art medical facility can be exceedingly difficult. But neither is it practical to staff combat hospitals with the necessary array of surgical specialists. The best answer seems to lie in a robotic surgical device that can be operated remotely by an expert surgeon who is perhaps based hundreds of miles away. The da Vinci was a major step toward that vision, but it's taken longer to clear the network hurdles necessary to make remote surgery viable.

Scooped by Dr. Stefan Gruenwald!

Elon Musk, Stephen Hawking Want to Save the World From Killer Robots on the Battlefield


Elon Musk and Stephen Hawking are among the leaders from the science and technology worlds calling for a ban on autonomous weapons, warning that weapons with a mind of their own "would not be beneficial for humanity."

Along with 1,000 other signatories, Musk and Hawking signed their names to an open letter that will be presented this week at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.

Autonomous weapons are defined by the group as weapons that can "search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions."

"Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is -- practically if not legally -- feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms," says the letter, posted on the Future of Life Institute's website.

If one country pushes ahead with the creation of robotic killers, the group fears, it will spur a global arms race that could spell disaster for humanity.

"Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group," the letter says. "We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people."

While the group warns of the potential carnage killer robots could inflict, they also stress they aren't against certain advances in artificial intelligence.

"We believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so," the letter says. "Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."

Rescooped by Dr. Stefan Gruenwald from Systems Theory!

The man who created the world's first self-aware robot says the next big test will change the human-robot relationship forever


Luciano Floridi issued a challenge to scientists around the world in 2005: prove that robots can display the human trait of self-awareness through a knowledge game called the "wise man" test. It was a venture he never expected to see achieved in the foreseeable future. A decade later, the Oxford professor's seemingly unattainable challenge has been met.

On July 9, 2015, a team of researchers led by Professor Selmer Bringsjord helped a robot solve the riddle, displaying a level of self-awareness and satisfying what had until then been considered “the ultimate sifter” test that could separate human from cyborg. But the professor says there’s a bigger challenge he wants robots to accomplish: self-awareness in real time. If we achieve this milestone, he said, the way we interact with artificial intelligence and robots will drastically change.

"Real time" self-awareness means robots acting upon new situations they are not pre-programmed for, and translating how to act into physical movements. This is a serious challenge that Bringsjord has not yet taken on, because self-awareness algorithms are still separate from a robot's body. If robots could work in real time, mind-to-body, he says, we would break through major barriers, opening up scenarios such as droids that act as our personal chauffeurs.

Via Ben van Lier
TJ Allard's curator insight, July 26, 2015 2:41 PM

Ok, and... when? It's like I've been reading articles like this for a few years now.

Scooped by Dr. Stefan Gruenwald!

External magnetic field controlled, nanoscale bacteria-like robots could replace stents and angioplasty balloons


Swarms of microscopic, magnetic, robotic beads could be used within five years by vascular surgeons to clear blocked arteries. These minimally invasive microrobots, which look and move like corkscrew-shaped bacteria, are being developed by an $18-million, 11-institution research initiative headed by the Korea Evaluation Institute of Industrial Technologies (KEIT).

These "microswimmers" are driven and controlled by external magnetic fields, similar to how nanowires from Purdue University and ETH Zurich/Technion (recently covered on KurzweilAI) work, but based on a different design. Instead of wires, they're made from chains of three or more iron oxide beads, rigidly linked together via chemical bonds and magnetic force. The beads are put in motion by an external magnetic field that causes each of them to rotate. Because they are linked together, their individual rotations cause the chain to twist like a corkscrew, and this movement propels the microswimmer. The chains are small enough — the nanoparticles are 50–100 nanometers in diameter — to navigate the bloodstream like a tiny boat, Fantastic Voyage style (but without the microscopic humans). Delivered via a catheter, they would travel directly to the blocked artery, where, acting as tiny drills, they would clear it completely.
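The corkscrew motion can be sketched with a standard helical-propulsion relation: below the so-called step-out frequency, a magnetically rotated helix advances roughly one pitch per field revolution. The function and numbers below are illustrative assumptions, not parameters from the KEIT project.

```python
# Magnetically rotated corkscrew swimmer: while it can keep up with the
# rotating field (below the "step-out" frequency), forward speed is roughly
# pitch * rotation rate. Above step-out the swimmer desynchronizes and
# propulsion collapses (crudely modeled here as zero).
def swim_speed(field_freq_hz, pitch_um, step_out_hz):
    if field_freq_hz > step_out_hz:
        return 0.0                        # simplification: no net advance
    return field_freq_hz * pitch_um       # micrometers per second

print(swim_speed(10.0, 2.0, step_out_hz=40.0))  # within sync: advances
print(swim_speed(50.0, 2.0, step_out_hz=40.0))  # beyond step-out: stalls
```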

Drilling through plaque:

The inspiration for using the robotic swimmers as tiny drills came from the bacterium Borrelia burgdorferi, which causes Lyme disease and wreaks havoc inside the body by burrowing through healthy tissue. Its spiral shape enables both its movement and the resulting cellular destruction. By controlling the magnetic field, a surgeon could direct the speed and direction of the microswimmers. The magnetism also allows separate strands of microswimmers to be joined into longer strings, which can then be propelled with greater force.

Once flow is restored in the artery, the microswimmer chains could disperse and be used to deliver anti-coagulant medication directly to the affected area to prevent future blockage. This procedure could supplant the two most common methods for treating blocked arteries: stenting and angioplasty. Stenting creates a bypass for blood to flow around the blockage by inserting a series of tubes into the artery, while angioplasty balloons out the blockage by expanding the artery with help from an inflatable probe.

“Current treatments for chronic total occlusion are only about 60 percent successful,” said MinJun Kim, PhD, a professor in the College of Engineering and director of the Biological Actuation, Sensing & Transport Laboratory (BASTLab) at Drexel University. “We believe that the method we are developing could be as high as 80–90 percent successful and possibly shorten recovery time. The microswimmers are composed of inorganic biodegradable beads so they will not trigger an immune response in the body. We can adjust their size and surface properties to accurately deal with any type of arterial occlusion.” Kim’s research was recently reported in the Journal of Nanoparticle Research.

Mechanical engineers at Drexel University are using these microswimmers as a part of a surgical toolkit being assembled by the Daegu Gyeongbuk Institute of Science and Technology (DGIST). Researchers from other institutions on the project include ETH Zurich, Seoul National University, Hanyang University, the Korea Institute of Science and Technology, and Samsung Medical Center.

DGIST anticipates testing the technology in lab and clinical settings within the next four years.

Scooped by Dr. Stefan Gruenwald!

South Korean KAIST Team Wins the DARPA Robotics Challenge


First place in the DARPA Robotics Challenge Finals this past weekend in Pomona, California, went to Team KAIST of South Korea for its DRC-Hubo robot, winning $2 million in prize money. Team IHMC Robotics of Pensacola, Fla., with its Running Man (Atlas) robot, came in second ($1 million prize), followed by Tartan Rescue of Pittsburgh with its CHIMP robot ($500,000 prize).

The DARPA Robotics Challenge, with three increasingly demanding competitions over two years, was launched in response to a humanitarian need that became glaringly clear during the nuclear disaster at Fukushima, Japan, in 2011, DARPA said. The goal was to “accelerate progress in robotics and hasten the day when robots have sufficient dexterity and robustness to enter areas too dangerous for humans and mitigate the impacts of natural or man-made disasters.”

The difficult course of eight tasks simulated Fukushima-like conditions, such as driving alone, walking through rubble, tripping circuit breakers, turning valves, and climbing stairs. Representing some of the most advanced robotics research and development organizations in the world, a dozen teams from the United States and another eleven from Japan, Germany, Italy, the Republic of Korea, and Hong Kong competed.

Scooped by Dr. Stefan Gruenwald!

Engineers hand 'cognitive' control to underwater robots with advanced AI system


For the last decade, scientists have deployed increasingly capable underwater robots to map and monitor pockets of the ocean to track the health of fisheries, and survey marine habitats and species. In general, such robots are effective at carrying out low-level tasks, specifically assigned to them by human engineers -- a tedious and time-consuming process for the engineers.

When deploying autonomous underwater vehicles (AUVs), much of an engineer's time is spent writing scripts, or low-level commands, in order to direct a robot to carry out a mission plan. Now a new programming approach developed by MIT engineers gives robots more "cognitive" capabilities, enabling humans to specify high-level goals, while a robot performs high-level decision-making to figure out how to achieve these goals.

For example, an engineer may give a robot a list of goal locations to explore, along with any time constraints, as well as physical directions, such as staying a certain distance above the seafloor. Using the system devised by the MIT team, the robot can then plan out a mission, choosing which locations to explore, in what order, within a given timeframe. If an unforeseen event prevents the robot from completing a task, it can choose to drop that task, or reconfigure the hardware to recover from a failure, on the fly.
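The goal-selection behavior described above can be caricatured as scheduling under a time budget. The greedy nearest-goal loop below is only a stand-in for MIT's actual planner, and every name and number in it is invented for illustration.

```python
# Sketch: given goal sites and a mission time budget, choose which sites to
# visit and in what order, dropping goals that no longer fit -- a toy analogue
# of the high-level decision-making described in the article.
import math

def plan(start, goals, speed, budget):
    # goals: list of (name, x, y); visit as many as the budget allows
    pos, elapsed, route = start, 0.0, []
    remaining = list(goals)
    while remaining:
        # pick the nearest remaining goal
        name, x, y = min(remaining, key=lambda g: math.dist(pos, (g[1], g[2])))
        leg_time = math.dist(pos, (x, y)) / speed
        if elapsed + leg_time > budget:
            break                      # drop unreachable goals on the fly
        elapsed += leg_time
        pos, route = (x, y), route + [name]
        remaining = [g for g in remaining if g[0] != name]
    return route

# Two nearby sites fit the budget; the distant one is dropped:
print(plan((0, 0), [("A", 3, 4), ("B", 30, 40), ("C", 6, 8)],
           speed=1.0, budget=20.0))
```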

In March, the team tested the autonomous mission-planning system during a research cruise off the western coast of Australia. Over three weeks, the MIT engineers, along with groups from Woods Hole Oceanographic Institution, the Australian Center for Field Robotics, the University of Rhode Island, and elsewhere, tested several classes of AUVs, and their ability to work cooperatively to map the ocean environment.

The MIT researchers tested their system on an autonomous underwater glider, and demonstrated that the robot was able to operate safely among a number of other autonomous vehicles, while receiving higher-level commands. The glider, using the system, was able to adapt its mission plan to avoid getting in the way of other vehicles, while still achieving its most important scientific objectives. If another vehicle was taking longer than expected to explore a particular area, the glider, using the MIT system, would reshuffle its priorities, and choose to stay in its current location longer, in order to avoid potential collisions.

"We wanted to show that these vehicles could plan their own missions, and execute, adapt, and re-plan them alone, without human support," says Brian Williams, a professor of aeronautics and astronautics at MIT, and principal developer of the mission-planning system. "With this system, we were showing we could safely zigzag all the way around the reef, like an obstacle course." The system is similar to one that Williams developed for NASA following the loss of the Mars Observer, a spacecraft that, days before its scheduled insertion into Mars' orbit in 1993, lost contact with NASA.

By giving robots control of higher-level decision-making, Williams says such a system would free engineers to think about overall strategy, while AUVs determine for themselves a specific mission plan. Such a system could also reduce the size of the operational team needed on research cruises. And, most significantly from a scientific standpoint, an autonomous planning system could enable robots to explore places that otherwise would not be traversable. For instance, with an autonomous system, robots may not have to be in continuous contact with engineers, freeing the vehicles to explore more remote recesses of the sea.

"If you look at the ocean right now, we can use Earth-orbiting satellites, but they don't penetrate much below the surface," Williams says. "You could send sea vessels which send one autonomous vehicle, but that doesn't show you a lot. This technology can offer a whole new way to observe the ocean, which is exciting."

Scooped by Dr. Stefan Gruenwald!

Biorobotics-inspired eye stabilizes robot’s flight, replaces inertial navigation system


Biorobotics researchers have developed the first aerial robot able to fly over uneven terrain that is stabilized visually without an accelerometer.

Called BeeRotor, it adjusts its speed and avoids obstacles thanks to optic flow sensors inspired by insect vision. It can fly along a tunnel with uneven, moving walls without measuring either speed or altitude. The study was published on February 26 in the journal Bioinspiration & Biomimetics.

Aircraft, ships, and spacecraft currently use a complex inertial navigation system based on accelerometers and gyroscopes to continuously calculate position, orientation, and velocity without the need for external references (known as dead reckoning).

Researchers Fabien Expert and Franck Ruffier at the Institut des Sciences du Mouvement – Etienne-Jules Marey (CNRS/Aix-Marseille Université) decided to create a simpler system, inspired by winged insects. They created BeeRotor, a tethered flying robot able, for the first time, to adjust its speed and follow terrain with no accelerometer and without measuring speed or altitude, avoiding vertical obstacles in a tunnel with moving walls.

To achieve this, the researchers mimicked the ability of insects to use the passing landscape as they fly. This is known as “optic flow,” the principle you can observe when driving along a road: the view in front is fairly stable, but looking out to either side, the landscape passes by faster and faster, reaching a maximum at an angle of 90 degrees to the path of the vehicle.

To measure optic flow, BeeRotor is equipped with 24 photodiodes (functioning as pixels) distributed at the top and the bottom of its “eye.” This enables it to detect contrasts in the environment as well as their motion. As in insects, the speed at which a feature in the scenery moves from one pixel to another provides the angular velocity of the flow. When the flow increases, this means that either the robot’s speed is increasing or that the distance relative to obstacles is decreasing.
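For pure forward translation over a surface, the textbook optic-flow magnitude at viewing angle φ away from the direction of travel is ω = (V/h)·sin²φ, which is maximal at 90 degrees, matching the driving-along-a-road picture above. The speed and height values in this sketch are illustrative, not BeeRotor's.

```python
# Optic-flow magnitude for forward flight at speed v over a surface at
# distance h, viewed at angle phi from the heading (0 = straight ahead,
# 90 degrees = directly abeam/below): omega = (v / h) * sin(phi)^2, in rad/s.
import math

def optic_flow(v, h, phi_deg):
    phi = math.radians(phi_deg)
    return (v / h) * math.sin(phi) ** 2

for ang in (10, 45, 90):
    print(ang, round(optic_flow(v=2.0, h=1.0, phi_deg=ang), 3))
# flow grows from near zero straight ahead to its maximum at 90 degrees
```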

By way of a brain, BeeRotor has three feedback loops: altitude (following the floor or roof), speed (adapting to the size of the tunnel), and stabilization of the eye relative to the local slope. This enables the robot to always obtain the best possible field of view, independently of its degree of pitch, and allows BeeRotor to avoid very steeply sloping obstacles with no accelerometer and no measurement of speed or altitude.
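One of those loops can be sketched as a simple proportional regulator on ventral optic flow: too much flow means the surface is looming (too close or too fast), so climb. The setpoint and gain below are invented for illustration; BeeRotor's actual controller parameters are not given here.

```python
# Toy altitude loop: hold the downward-looking optic flow at a setpoint by
# commanding climbs or descents. Positive output = climb, negative = descend.
def altitude_command(flow_measured, flow_setpoint=1.5, gain=0.4):
    error = flow_measured - flow_setpoint
    return gain * error

print(altitude_command(2.0))   # flow too strong: floor looming, so climb
print(altitude_command(1.0))   # flow too weak: descend toward the surface
```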

BeeRotor suggests a biologically plausible hypothesis for how insects can fly without an accelerometer: using cues from optic flow to remain stable via feedback loops. Optic flow sensors also have industrial applications, such as replacing heavy accelerometers on small robots, or serving as an ultra-light backup system in the event of a failure on a space mission.

hadrien's curator insight, February 11, 2016 8:22 PM

Optic flow without expensive camera

Alternative to GPS

Scooped by Dr. Stefan Gruenwald!

Supporting the elderly: A caring robot with ‘emotions’ and memory


Researchers at the University of Hertfordshire have developed a prototype of a social robot that supports independent living for the elderly, working in partnership with their relatives or carers.

Farshid Amirabdollahian, a senior lecturer in Adaptive Systems at the university, led a team of nine partner institutions from five European countries as part of the €4,825,492 project called ACCOMPANY (Acceptable Robotics Companions for Ageing Years).

"This project proved the feasibility of having companion technology, while also highlighting different important aspects such as empathy, emotion, and social intelligence, as well as the ethics and norms surrounding technology for independent living," Amirabdollahian said.

Madison & Morgan's curator insight, February 11, 2015 1:31 PM

This article is about a robot that can help the elderly in their daily life. The robot is capable of human emotions and has moral ethics. This shows the technological advances that Europe has and relates to the economy.

olyvia Schaefer and Rachel Shaberman's curator insight, February 11, 2015 5:09 PM

Europe Arts

Europe has many inventions that they have created, but the most interesting to me is the robot that has emotions and memory. This robot is supposed to help the elderly with their carers and daily life. The Europeans were able to create technology that has empathy, emotions, and social intelligence, and it is just a robot. The Europeans were able to accomplish something amazing.

ToKTutor's curator insight, February 21, 2015 12:06 PM

Title 5: If a robot can have emotion and memory, can it also be programmed to have instinctive judgment?

Scooped by Dr. Stefan Gruenwald!

What Happens to a Society when Robots Replace Workers?

What Happens to a Society when Robots Replace Workers? | Amazing Science |

The technologies of the past, by replacing human muscle, increased the value of human effort – and in the process drove rapid economic progress. Those of the future, by substituting for man’s senses and brain, will accelerate that process – but at the risk of creating millions of citizens who are simply unable to contribute economically, and with greater damage to an already declining middle class.

Estimates of general rates of technological progress are always imprecise, but it is fair to say that, in the past, progress came more slowly. Henry Adams, the historian, measured technological progress by the power generated from coal, and estimated that power output doubled every ten years between 1840 and 1900, a compounded rate of progress of about 7% per year. The reality was probably much less. For example, in 1848, the world record for rail speed reached 60 miles per hour. A century later, commercial aircraft could carry passengers at speeds approaching 600 miles per hour, a rate of progress of only about 2% per year.
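The quoted rates follow from compound growth: doubling every ten years compounds at 2^(1/10) − 1 ≈ 7% per year, and a tenfold speed gain over a century at 10^(1/100) − 1 ≈ 2% per year.

```python
# Verifying the article's compounding arithmetic.
print(round((2 ** (1 / 10) - 1) * 100, 1))            # doubling per decade -> 7.2 (% per year)
print(round(((600 / 60) ** (1 / 100) - 1) * 100, 1))  # 60 -> 600 mph per century -> 2.3 (% per year)
```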

By contrast, progress today comes rapidly. Consider the numbers for information storage density in computer memory. Between 1960 and 2003, those densities increased by a factor of five million, at times progressing at a rate of 60% per year. At the same time, true to Moore’s Law, semiconductor technology has been progressing at a 40% rate for more than 50 years. These rates of progress are embedded in the creation of intelligent machines, from robots to automobiles to drones, that will soon dominate the global economy – and in the process drive down the value of human labor with astonishing speed.

This is why we will soon be looking at hordes of citizens of zero economic value. Figuring out how to deal with the impacts of this development will be the greatest challenge facing free market economies in this century. If you doubt the march of worker-replacing technology, look at Foxconn, the world’s largest contract manufacturer. It employs more than one million workers in China. In 2011, the company installed 10,000 robots, called Foxbots. Today, the company is installing them at a rate of 30,000 per year. Each robot costs about $20,000 and is used to perform routine jobs such as spraying, welding, and assembly. On June 26, 2013, Terry Gou, Foxconn’s CEO, told his annual meeting that “We have over one million workers. In the future we will add one million robotic workers.” This means, of course, that the company will avoid hiring those next million human workers.

Just imagine what a Foxbot will soon be able to do if Moore’s Law holds steady and we continue to see performance leaps of 40% per year. Baxter, a $22,000 robot that just got a software upgrade, is being produced in quantities of 500 per year. A few years from now, a much smarter Baxter produced in quantities of 10,000 might cost less than $5,000. At that price, even the lowest-paid workers in the least developed countries might not be able to compete.

Tomasz Bienko's curator insight, January 19, 2015 12:29 PM

Above all, machines can replace people as a workforce, but that is, after all, one of the goals of this research and of introducing new technologies; we can already see it in the mechanization of individual sectors of the economy (e.g., agriculture). Man tries to simplify his own life, but may end up eating his own tail. This is probably the more immediate problem we will have to face as we develop this technology further, rather than, say, the more distant prospect of artificial intelligence rebelling. Given how Moore's Law is revised year after year, changes will be observable soon, and it may be rising unemployment that we notice first as artificial intelligence develops. A machine will not replace a human in everything, in every position, but perhaps even that is only a matter of time?

Scooped by Dr. Stefan Gruenwald!

How scientists are upgrading the brain with access to robotic machines

How scientists are upgrading the brain with access to robotic machines | Amazing Science |

Cathy Hutchinson (featured image) was a 53-year-old mother of two who, in 1996, suffered a brain-stem stroke, leaving her a quadriplegic. Ten years later, she became a research subject of a company called Cyberkinetics. The company implanted a device on her brain called the Utah array, “a pill-sized implant whose 96 microelectrodes bristle from its base like a bed of nails.”

Using this, Hutchinson was connected to a computer with two robotic arms by her side. She was instructed to think about positioning one of the arms by a nearby bottle; then, to think about grasping the bottle. Her doing so “prompted the arm to execute a complex gesture — lowering the hand, grasping the bottle and lifting it off the table.”

She then “brought the arm toward her and positioned it by her mouth. She thought again of squeezing her hand, which this time prompted the arm to tilt at the wrist, tipping the bottle so she could drink from its straw.” It was the first time in almost 15 years Hutchinson was able to lift something on her own. “The smile on her face,” said Leigh Hochberg, one of the scientists on the project, “was something I and our whole research team will never forget.”

Journalist Malcolm Gay’s new book “The Brain Electric” describes recent efforts to create brain/computer interfaces, which would, among other applications, allow people with disabilities to lift and hold objects by simply thinking about doing so. By implanting electrodes onto someone’s brain, scientists can map “the electric current of thought itself — the millions of electrical impulses, known as action potentials, that consciously volley between the brain’s estimated 100 billion neurons.”

This “electric language” — like “an exponentially complicated form of Morse code” — is “what makes consciousness possible.” By translating these neural signals into computer language, scientists can create “a brain-computer interface,” granting subjects “mental control over computers and machines.”

This happens by first mapping a subject’s thoughts. While connected to a computer, a subject may think — in a real-life example Gay shares conducted by scientist Eric Leuthardt — of lifting his left index finger. The computer would then analyze the neural patterns associated with this action and “correlate them with specific commands — anything from recreating the lifted finger in a robot hand to moving a cursor across a monitor or playing a video game.”

Once the scientist had decoded these patterns, he could then “conceivably link them to countless digital environments,” allowing the subject control over “everything from robotic appendages to Internet browsers.” While the endgame is still down the road, applications for this sort of technology could be vast. Quadriplegics could hold and lift things and even potentially walk again using computerized exoskeletons.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Artificial skin system transmits the pressure of touch

Artificial skin system transmits the pressure of touch | Amazing Science |

Might someday be applied to prosthetics to mimic human skin’s ability to feel sensation. Researchers have created a sensory system that mimics the ability of human skin to feel pressure and have transmitted the digital signals from the system’s sensors to the brain cells of mice. These new developments, reported in the October 16 issue of Science, could one day allow people living with prosthetics to feel sensation in their artificial limbs.

The system consists of printed plastic circuits, designed to be placed on robotic fingertips. Digital signals transmitted by the system would increase as the fingertips came closer to an object, with the signal strength growing as the fingertips gripped the object tighter.

To simulate this human sensation of pressure, Zhenan Bao of Stanford University and her colleagues developed a number of key components that collectively allow the system to function.

As our fingers first touch an object, how we physically “feel” it depends partially on the mechanical strain that the object exerts on our skin. So the research team used a sensor with a specialized circuit that translates pressure into digital signals.

To allow the sensory system to feel the same range of pressure that human fingertips can, the team needed a highly sensitive sensor. They used carbon nanotubes in formations that are highly effective at detecting the electrical fields of inanimate objects.
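One way to picture the resulting digital signal (an illustrative sketch with made-up numbers, not Bao's actual circuit): pressure sets an oscillator's pulse rate, so a firmer grip produces more pulses per sampling window, much as mechanoreceptors encode touch as spike rate.

```python
# Illustrative pressure-to-pulse-rate mapping; all constants are invented.

def pulse_frequency(pressure_kpa, f_min=2.0, f_max=200.0, p_max=60.0):
    """Map pressure (kPa) to an output pulse rate (Hz), clipped to range."""
    p = max(0.0, min(pressure_kpa, p_max))
    return f_min + (f_max - f_min) * (p / p_max)

def pulses_in_window(pressure_kpa, window_s=0.1):
    """Number of digital pulses emitted during one sampling window."""
    return int(pulse_frequency(pressure_kpa) * window_s)

for p in (0, 15, 30, 60):   # light touch -> firm grip
    print(p, pulses_in_window(p))  # -> 0 0 / 15 5 / 30 10 / 60 20
```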

Bao noted that the printed circuits of the new sensory system would make it easy to produce in large quantities. “We would like to make the circuits with stretchable materials in the future, to truly mimic skin,” Bao said. “Other sensations, like temperature sensing, would be very interesting to combine with touch sensing.”

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Aerial Construction: Quadrocopters build a bridge out of ropes

Aerial Construction: Quadrocopters build a bridge out of ropes | Amazing Science |

Building a rope bridge with flying machines in the ETH Zurich Flying Machine Arena. The quadrocopters autonomously assembling a rope bridge. This is part of a body of research in aerial construction, a field that addresses the construction of structures with the aid of flying machines.

In this work, a rope bridge that can support the crossing of a person is built by quadrocopters, showing for the first time that small flying machines are capable of autonomously realizing load-bearing structures at full scale and proceeding a step further towards real-world scenarios. Except for the required anchor points at both ends of the structure, the bridge consists exclusively of tensile elements, and its connections and links are entirely realized by flying machines. Spanning 7.4 m between two scaffolding structures, the bridge consists of nine rope segments for a total rope length of about 120 m and is composed of different elements, such as knots, links, and braids. The rope used for these experiments is made of Dyneema, a material with a low weight-to-strength ratio and thus suitable for aerial construction. Weighing only 7 g per meter, a 4 mm diameter rope can sustain 1,300 kg.

The vehicles are equipped with a motorized spool that allows them to control the tension acting on the rope during deployment. A plastic tube guides the rope to the release point located between two propellers. The external forces and torques exerted on the quadrocopter by the rope during deployment are estimated and taken into account to achieve compliant flight behavior. The assembly of the bridge is performed by small custom quadrocopters and builds upon the Flying Machine Arena, a research and demonstration platform for aerial robotics. The arena is equipped with a motion capture system that provides vehicle position and attitude measurements. Algorithms are run on a computer and commands are then sent to the flying machines via a customized wireless infrastructure.
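The force estimation mentioned here can be sketched in one dimension (an assumed form, not ETH's implementation): the rope's pull appears as the residual between commanded thrust and measured acceleration, and an admittance rule then shifts the position setpoint to yield to that force. Mass and compliance values are made up.

```python
# 1-D sketch of rope-force estimation plus compliant (admittance) control.
# All numbers are invented for illustration.

MASS = 0.5      # vehicle mass in kg (made-up value)
G = 9.81        # gravity, m/s^2

def external_force(thrust_cmd, accel_measured):
    """Vertical residual: F_ext = m*a_measured - (thrust - m*g)."""
    return MASS * accel_measured - (thrust_cmd - MASS * G)

def compliant_setpoint(nominal_z, f_ext, compliance=0.02):
    """Admittance rule: shift the height setpoint to yield to the rope."""
    return nominal_z + compliance * f_ext

# Hover thrust, yet the vehicle accelerates downward at 1 m/s^2: the rope
# must be pulling down with 0.5 N, so the setpoint eases down by 1 cm.
f = external_force(thrust_cmd=MASS * G, accel_measured=-1.0)
print(f, round(compliant_setpoint(2.0, f), 3))  # -> -0.5 1.99
```

Yielding slightly to the rope instead of fighting it is what the article calls "compliant flight behavior."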

In order to be able to design tensile structures that are buildable with flying robots, a series of computational tools have been developed, specifically addressing the characteristics of the building method. The design tools allow designers to simulate, sequence, and evaluate the structure before building it.

The location of the scaffolding structure is manually measured before starting the construction. The primary and bracing structure can then be realized without human intervention. Before realizing the stabilizers, the locations of the narrow openings of the bridge are measured and input to the system, which adapts the trajectories accordingly.

More information and related publications can be found on the project website.


No comment yet.
Scooped by Dr. Stefan Gruenwald!

Minefly: Drones for Scanning and Mapping Underground Mines

Minefly: Drones for Scanning and Mapping Underground Mines | Amazing Science |

New drone technologies and innovation are reaching new heights with the ever-increasing need to improve safety and maximize operational efficiency in underground mines. Keeping in view the risks and challenges usually faced during mining operations, new drone technology is on its way to replacing labour-intensive methods of surveying, inspection and mapping. This is emerging as a new trend in the mining industry to capture 3D spatial data in hard-to-access underground areas in mines, with an aim to remove much of the risk and increase safety on site. 3D mapping of large-scale sub-surface environments, such as stopes, drifts and ore passes, will now become easy and cost-effective. This heralds a new era for future underground mining.

In terms of safety and operational efficiency, underground mining has unique challenges, some of which can be addressed with technologies such as 3D laser mapping and unmanned aerial vehicles, commonly known as drones. There is therefore a natural push toward building systems that are robust enough for harsh underground mining environments as well as cost-effective. The future is promising and the baseline technologies are already there to build highly efficient products. For example, we at Clickmox Solutions have developed a 3D mapping system based on a SLAM (Simultaneous Localization and Mapping) algorithm, which can be installed on drones and vehicles. This system is capable of building 3D maps in real time without the need for a GPS signal for positioning.
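As a flavor of what the mapping half of such a system does, here is a toy occupancy-grid update: given an estimated drone pose, each lidar return is projected into world coordinates and tallied in a grid cell. This is only a fragment; the hard part of SLAM, estimating the pose itself without GPS, is omitted, and all values are invented.

```python
# Toy 2-D occupancy-grid update from range readings at a known pose.
import math

def mark_hits(grid, pose, scan, cell=0.5):
    """Tally hit cells. pose = (x, y, heading); scan = [(bearing, range_m)]."""
    x, y, th = pose
    for bearing, dist in scan:
        hx = x + dist * math.cos(th + bearing)   # hit point in world frame
        hy = y + dist * math.sin(th + bearing)
        key = (int(hx // cell), int(hy // cell))
        grid[key] = grid.get(key, 0) + 1         # evidence of an obstacle
    return grid

# A drone at the origin facing +x sees a wall 3 m ahead at three bearings.
grid = mark_hits({}, (0.0, 0.0, 0.0),
                 [(-0.1, 3.0), (0.0, 3.0), (0.1, 3.0)])
print(sorted(grid))  # -> [(5, -1), (5, 0), (6, 0)]
```

A real pipeline would interleave this update with scan matching to correct the pose estimate, which is what lets it work where GPS cannot reach.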

All such technologies are tied to the so-called Internet of Things (IoT), which envisions a highly connected and intelligent system where each individual component talks to the others and makes decisions based on the need at the time. 3D scanning and mapping that facilitate automated collision avoidance and positioning are important parts of such a system. It is envisioned that off-the-shelf technologies based on IoT will soon make it convenient for miners to use autonomous vehicles and drones to increase safety and productivity.

Graham Williamson's curator insight, March 19, 2016 9:25 AM

There is definitely a future for drones and autonomous vehicles in underground mines, particularly when the underground environment becomes more hazardous, after a rock fall for example. Being able to send a drone or autonomous vehicle into the mine that can send images to the surface is preferable to putting people in the line of fire. That way the hazards can be assessed without putting more people at risk. I think we are a long way from eliminating all people working underground, but the fewer people we put at risk the better.

Craig Mallinson's curator insight, March 15, 8:18 PM
This article discusses using drones to map underground mines. The article is close to being an advertisement for Minefly. I scooped it because I think this sort of technology is fantastic. In five years, using drones to survey underground mines might be an industry standard! The benefits of drones for surveying are twofold: drones can prevent people from having to access dangerous areas of the mine, and the surveyed data will be better and available in real time.
Scooped by Dr. Stefan Gruenwald!

'Natural' selection of robots: On the origin of (robot) species

'Natural' selection of robots: On the origin of (robot) species | Amazing Science |

Researchers have observed the process of evolution by natural selection at work in robots, by constructing a ‘mother’ robot that can design, build and test its own ‘children’, and then use the results to improve the performance of the next generation, without relying on computer simulation or human intervention.

Researchers led by the University of Cambridge have built a mother robot that can independently build its own children and test which one does best; and then use the results to inform the design of the next generation, so that preferential traits are passed down from one generation to the next.

Without any human intervention or computer simulation beyond the initial command to build a robot capable of movement, the mother created children constructed of between one and five plastic cubes with a small motor inside.

In each of five separate experiments, the mother designed, built and tested generations of ten children, using the information gathered from one generation to inform the design of the next. The results, reported in the open access journal PLOS One, found that preferential traits were passed down through generations, so that the ‘fittest’ individuals in the last generation performed a set task twice as quickly as the fittest individuals in the first generation.

“Natural selection is basically reproduction, assessment, reproduction, assessment and so on,” said lead researcher Dr Fumiya Iida of Cambridge’s Department of Engineering, who worked in collaboration with researchers at ETH Zurich. “That’s essentially what this robot is doing – we can actually watch the improvement and diversification of the species.”

For each robot child, there is a unique ‘genome’ made up of a combination of between one and five different genes, which contains all of the information about the child’s shape, construction and motor commands. As in nature, evolution in robots takes place through ‘mutation’, where components of one gene are modified or single genes are added or deleted, and ‘crossover’, where a new genome is formed by merging genes from two individuals.

In order for the mother to determine which children were the fittest, each child was tested on how far it travelled from its starting position in a given amount of time. The most successful individuals in each generation remained unchanged in the next generation in order to preserve their abilities, while mutation and crossover were introduced in the less successful children.
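The mutation/crossover/selection loop described above can be sketched as a toy genetic algorithm. The encoding below is an assumption for illustration: the paper's real genomes specify cube geometry and motor commands, and fitness is measured as distance travelled, which a simple sum merely stands in for here.

```python
# Toy evolutionary loop mirroring the described procedure: elites pass
# through unchanged; the rest are built by crossover plus mutation.
import random

random.seed(1)

def fitness(genome):
    return sum(genome)                 # placeholder for "distance travelled"

def mutate(genome):
    g = genome[:]
    g[random.randrange(len(g))] += random.uniform(-1, 1)
    return g

def crossover(a, b):
    cut = random.randrange(1, len(a))  # merge genes from two individuals
    return a[:cut] + b[cut:]

def next_generation(pop, n_elite=2):
    ranked = sorted(pop, key=fitness, reverse=True)
    elites = ranked[:n_elite]          # fittest preserved unchanged
    children = [mutate(crossover(*random.sample(elites, 2)))
                for _ in range(len(pop) - n_elite)]
    return elites + children

pop = [[random.uniform(0, 1) for _ in range(3)] for _ in range(10)]
best0 = max(map(fitness, pop))
for _ in range(20):
    pop = next_generation(pop)
best = max(map(fitness, pop))
print(best >= best0)  # -> True: elites are kept, so fitness never regresses
```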

The researchers found that design variations emerged and performance improved over time: the fastest individuals in the last generation moved at an average speed that was more than twice the average speed of the fastest individuals in the first generation. This increase in performance was not only due to the fine-tuning of design parameters, but also because the mother was able to invent new shapes and gait patterns for the children over time, including some designs that a human designer would not have been able to build.

“One of the big questions in biology is how intelligence came about – we’re using robotics to explore this mystery,” said Iida. “We think of robots as performing repetitive tasks, and they’re typically designed for mass production instead of mass customization, but we want to see robots that are capable of innovation and creativity.”

No comment yet.
Rescooped by Dr. Stefan Gruenwald from Synthetic biology!

Virginia Tech scientist develops model for robots with bacterial brains

Virginia Tech scientist develops model for robots with bacterial brains | Amazing Science |

Forget the Vulcan mind-meld of the Star Trek generation — as far as mind-control techniques go, bacteria are the next frontier.

In a paper published today in Scientific Reports, which is part of the Nature Publishing Group, a Virginia Tech scientist used a mathematical model to demonstrate that bacteria can control the behavior of an inanimate device like a robot. “Basically we were trying to find out from the mathematical model if we could build a living microbiome on a nonliving host and control the host through the microbiome,” said Warren Ruder, an assistant professor of biological systems engineering in both the College of Agriculture and Life Sciences and the College of Engineering.

"We found that robots may indeed be able to function with a bacterial brain,” he said. For future experiments, Ruder is building real-world robots that will have the ability to read bacterial gene expression levels in E. coli using miniature fluorescent microscopes. The robots will respond to bacteria he will engineer in his lab.

On a broad scale, understanding the biochemical sensing between organisms could have far reaching implications in ecology, biology, and robotics. In agriculture, bacteria-robot model systems could enable robust studies that explore the interactions between soil bacteria and livestock. In healthcare, further understanding of bacteria’s role in controlling gut physiology could lead to bacteria-based prescriptions to treat mental and physical illnesses. Ruder also envisions droids that could execute tasks such as deploying bacteria to remediate oil spills.

The findings also add to the ever-growing body of research about bacteria in the human body that are thought to regulate health and mood, and especially the theory that bacteria also affect behavior.

The study was inspired by real-world experiments where the mating behavior of fruit flies was manipulated using bacteria, as well as mice that exhibited signs of lower stress when implanted with probiotics.

Ruder’s approach revealed unique decision-making behavior by a bacteria-robot system by coupling and computationally simulating widely accepted equations that describe three distinct elements: engineered gene circuits in E. coli, microfluid bioreactors, and robot movement.

The bacteria in the mathematical experiment exhibited their genetic circuitry by either turning green or red, according to what they ate. In the mathematical model, the theoretical robot was equipped with sensors and a miniature microscope to measure the color of bacteria telling it where and how fast to go depending upon the pigment and intensity of color.
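The color-to-motion rule described in this paragraph might be schematized as follows. The mapping, numbers, and function name are invented for illustration and are not taken from Ruder's model.

```python
# Schematic of the described control rule: the robot reads the reporter
# color and its intensity, then chooses a direction and speed. All values
# are invented.

def bacterial_drive(color, intensity):
    """color: 'green' or 'red'; intensity: 0..1 -> (turn_direction, speed)."""
    direction = "left" if color == "green" else "right"
    speed = 0.1 + 0.9 * max(0.0, min(intensity, 1.0))   # m/s, made up
    return direction, round(speed, 2)

print(bacterial_drive("green", 0.8))  # -> ('left', 0.82)
print(bacterial_drive("red", 0.2))    # -> ('right', 0.28)
```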

The model also revealed higher order functions in a surprising way. In one instance, as the bacteria were directing the robot toward more food, the robot paused before quickly making its final approach — a classic predatory behavior of higher order animals that stalk prey.

Ruder’s modeling study also demonstrates that these sorts of biosynthetic experiments could be done in the future with a minimal amount of funds, opening up the field to a much larger pool of researchers.

Via Integrated DNA Technologies
No comment yet.
Rescooped by Dr. Stefan Gruenwald from Conformable Contacts!

Drones Are Becoming the Oil Industry's Disruptive Technology

Drones Are Becoming the Oil Industry's Disruptive Technology | Amazing Science |
Self-piloting drones like the Boomerang are leading a small but fundamental change in the industry. In oil and gas, equipment doesn’t move without data—where to drill, how deep to go, and so on. With the traffic bottleneck removed, suddenly equipment can move more nimbly and exploration startups can get in the drilling game for a fraction of the traditional entry cost.

Via YEC Geo
YEC Geo's curator insight, July 20, 2015 8:48 AM

Tech disruption coming to the oil bidness?

Scooped by Dr. Stefan Gruenwald!

Tiny Robotic Tentacles Developed That Can Lasso an Ant

Tiny Robotic Tentacles Developed That Can Lasso an Ant | Amazing Science |

With a diameter just twice that of a human hair, they look more like short snips of fishing line than advanced robotic appendages. But these micro-tentacles can curl and grip. They can lasso an ant or scoop up a tiny fish egg. And they could give a robot of any size an astonishingly gentle but precise grasp.

A team of three materials scientists at Iowa State University has invented this new way for robots to softly handle delicate and diminutive objects. As they describe today in a paper in the journal Scientific Reports, their clever micro-tentacles are hundreds of times smaller than the next smallest self-spiraling, lifelike tentacle, making them a unique tool for everything from microsurgery to microbiology. Better still, they hug with less than 1 micronewton of force. That's thousands of times softer than your blinking eye, and it makes mechanical pinching (the traditional approach to a robot's tiny grip) look absolutely medieval.

"Two of the biggest trends in robotics right now are soft robotics—utilizing soft materials for purposes like gentler human interaction—and micro-robotics, making robots smaller," says Jaeyoun Kim, the materials scientist who led the team. "These micro-tentacles fuse those together."

Kim and his colleagues built their micro-tentacles out of a cheap, naturally soft, and commercially available material called PDMS. They used the PDMS to form hollow tubes which curl up when the air is sucked out of them. One side of the tube is corked, while the other is connected to a pneumatic controller. The micro-tentacles (which are less than 8 millimeters long) curl in a specific direction because one side of the tube is thinner than the other.

The process wasn't easy. PDMS is quite liquid, almost like olive oil, which makes casting it with precision over a hair-thin, rod-like template almost impossible—it will bead up in drops. But the trio of researchers discovered a way to heat-treat the material to slightly gelatinize it, smoothing out the material and the problem. Another issue was finding a way to remove the tubes from their cylindrical template without destroying them. To do this, the scientists used a tool that looks much like a tiny wire-stripper.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Medical ‘millirobots’ could replace invasive surgery

Medical ‘millirobots’ could replace invasive surgery | Amazing Science |

Using a “Gauss gun” principle, an MRI machine drives a “millirobot” through a hypodermic needle into your spinal cord and guides it into your brain to release life-threatening fluid buildup.

University of Houston researchers have developed a concept for MRI-powered millimeter-size “millirobots” that could one day perform unprecedented minimally invasive medical treatments. This technology could be used to treat hydrocephalus, for example. Current treatments require drilling through the skull to implant pressure-relieving shunts, said Aaron T. Becker, assistant professor of electrical and computer engineering at the University of Houston. But MRI scanners alone don’t produce enough force to pierce tissues (or insert needles). So the researchers drew upon the principle of the “Gauss gun.”

Here’s how a Gauss gun works: a single steel ball rolls down a chamber, setting off a chain reaction when it smashes into the next ball, and so on, until the last ball flies forward, moving much more quickly than the initial ball. Based on that concept, the researchers imagine a medical robot with a barrel self-assembled from three small high-impact 3D-printed plastic components, with slender titanium-rod spacers separating two steel balls.
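A rough sketch of the physics that lets a Gauss gun amplify speed (generic textbook energy bookkeeping, not the Houston group's numbers): at each stage, magnetic attraction adds potential energy dU to the incoming ball before the collision, so kinetic energy, and hence exit speed, grows stage by stage.

```python
# Generic Gauss-gun energy bookkeeping; dU and mass are invented values.
import math

def exit_speed(v_in, stages, dU=1e-3, mass=1e-3):
    """Each stage adds magnetic potential energy dU (J) to a ball of `mass` kg."""
    v = v_in
    for _ in range(stages):
        # Energy balance per stage: (1/2) m v'^2 = (1/2) m v^2 + dU
        v = math.sqrt(v * v + 2 * dU / mass)
    return v

print(round(exit_speed(0.1, 1), 3))  # one stage   -> 1.418
print(round(exit_speed(0.1, 3), 3))  # three stages -> 2.452
```

This is why the final ball can leave much faster than the first ball arrived: the stored magnetic energy, not the collision itself, supplies the gain.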


Becker was first author of a paper presented at ICRA, the conference of the IEEE Robotics and Automation Society, nominated for best conference paper and best medical robotics paper. “Hydrocephalus, among other conditions, is a candidate for correction by our millirobots because the ventricles are fluid-filled and connect to the spinal canal,” Becker said. “Our noninvasive approach would eventually require simply a hypodermic needle or lumbar puncture to introduce the components into the spinal canal, and the components could be steered out of the body afterwards.”

Future work will focus on exploring clinical context, miniaturizing the device, and optimizing material selection.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Researchers build a robot that can reproduce

Researchers build a robot that can reproduce | Amazing Science |
One of the dreams of both science fiction writers and practical robot builders has been realized, at least on a simple level: Cornell University researchers have created a machine that can build copies of itself.

Admittedly the machine is just a proof of concept -- it performs no useful function except to self-replicate -- but the basic principle could be extended to create robots that could replicate or at least repair themselves while working in space or in hazardous environments, according to Hod Lipson, Cornell assistant professor of mechanical and aerospace engineering, and computing and information science, in whose lab the robots were built and tested.

Lipson and colleagues report on the work in a brief communication in the May 12 issue of Nature.

Their robots are made up of a series of modular cubes -- called "molecubes" -- each containing identical machinery and the complete computer program for replication. The cubes have electromagnets on their faces that allow them to selectively attach to and detach from one another, and a complete robot consists of several cubes linked together. Each cube is divided in half along a long diagonal, which allows a robot composed of many cubes to bend, reconfigure and manipulate other cubes. For example, a tower of cubes can bend itself over at a right angle to pick up another cube.

Although these experimental robots work only in the limited laboratory environment, Lipson suggests that the idea of making self-replicating robots out of self-contained modules could be used to build working robots that could self-repair by replacing defective modules. For example, robots sent to explore Mars could carry a supply of spare modules to use for repairing or rebuilding as needed, allowing for more flexible, versatile and robust missions. Self-replication and repair also could be crucial for robots working in environments where a human with a screwdriver couldn't survive.

To begin replication, the stack of cubes bends over and sets its top cube on the table. Then it bends to one side or another to pick up a new cube and deposit it on top of the first. By repeating the process, one robot made up of a stack of cubes can create another just like itself. Since one robot cannot reach across another robot of the same height, the robot being built assists in completing its own construction.
No comment yet.
Scooped by Dr. Stefan Gruenwald!

This New Four-Legged Robot Is Basically Invincible

This New Four-Legged Robot Is Basically Invincible | Amazing Science |

Boston Dynamics, the company that builds incredibly agile robots, has added another four-legged sprinter to its pack. In order to introduce the world to “Spot,” the crew at Boston Dynamics kicked the innocent robot as it walked through the halls of their building — and filmed it. However, as you can see in the YouTube video, Spot never falters under the abuse; it dynamically corrects its balance even after a good shove.

When you’re an advanced robotics builder owned by Google, you don’t have to do much to make a splash. Boston Dynamics’ video (clearly filmed before Snowmageddon) is simply called “Introducing Spot,” and it’s two minutes of the quadruped climbing stairs, walking up hills, and, of course, getting kicked. A four-sentence video description is the only additional information the company is providing about Spot:

  • Spot is a four-legged robot designed for indoor and outdoor operation. It is electrically powered and hydraulically actuated.
  • Spot has a sensor head that helps it navigate and negotiate rough terrain. Spot weighs about 160 lbs.
No comment yet.
Scooped by Dr. Stefan Gruenwald!

‘Cobots’ - robots that work side-by-side with humans - enhance robotic manufacturing and throughput

‘Cobots’ - robots that work side-by-side with humans - enhance robotic manufacturing and throughput | Amazing Science |

Manufacturers have begun experimenting with a new generation of “cobots” (collaborative robots) designed to work side-by-side with humans. To determine best practices for effectively integrating human-robot teams within manufacturing environments, a University of Wisconsin-Madison team headed by Bilge Mutlu, an assistant professor of computer sciences, is working with an MIT team headed by Julie A. Shah, an assistant professor of aeronautics and astronautics.

Their research is funded by a three-year grant from the National Science Foundation (NSF) as part of its National Robotics Initiative program.

Cobots are less expensive and intended to be easier to reprogram and integrate into manufacturing. For example, Steelcase owns four next-generation robots based on a platform called Baxter, made by Rethink Robotics.

Each Baxter robot has two arms and a tablet-like panel for “eyes” that provide cues to help human workers anticipate what the robot will do next.

“This new family of robotic technology will change how manufacturing is done,” says Mutlu. “New research can ease the transition of these robots into manufacturing by making human-robot collaboration better and more natural as they work together.”

Mutlu’s team is building on previous work related to topics such as gaze aversion in humanoid robots, robot gestures, and the issue of “speech and repair.” For example, if a human misunderstands a robot’s instructions or carries them out incorrectly, how should the robot correct the human?

On Rethink Robotics’ blog, founder and chairman Rodney Brooks notes “three exciting and significant trends taking place right now” that he thinks will begin to gain some very real traction in 2015:

  • We will begin to see large-scale deployment of collaborative and intelligent robots in manufacturing.
  • This will be a breakout year for robotics research.
  • Emerging technology will be designed to solve some of the world’s biggest problems.

No comment yet.