Amazing Science
Find tag "robotics"
351.5K views | +17 today
Scooped by Dr. Stefan Gruenwald!

Making drones more customizable: First-ever standard “operating system” for drones

A first-ever standard “operating system” for drones, developed by a startup with MIT roots, could soon help manufacturers easily design and customize unmanned aerial vehicles (UAVs) for multiple applications.

Today, hundreds of companies worldwide are making drones for infrastructure inspection, crop- and livestock-monitoring, and search-and-rescue missions, among other things. But these are built for a single mission, so modifying them for other uses means going back to the drawing board, which can be very expensive.

Now Airware, founded by MIT alumnus Jonathan Downey ’06, has developed a platform — hardware, software, and cloud services — that lets manufacturers pick and choose various components and application-specific software to add to commercial drones for multiple purposes.

The key component is the startup’s Linux-based autopilot device, a small red box that is installed into all of a client’s drones. “This is responsible for flying the vehicle in a safe, reliable manner, and acts as a hub for the components, so it can collect all that data and display that info to a user,” says Downey, Airware’s CEO, who researched and built drones throughout his time at MIT.

To customize the drones, customers use software to select third-party drone vehicles and components — such as sensors, cameras, actuators, and communication devices — configure settings, and apply their configuration to a fleet. Other software helps them plan and monitor missions in real time (and make midflight adjustments), and collects and displays data. Airware then pushes all data to the cloud, where it’s aggregated and analyzed, and available to designated users.
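
The pick-and-choose workflow described above can be sketched in a few lines. This is purely illustrative: the function names, component names, and settings fields below are hypothetical stand-ins, not Airware’s actual API.

```python
# Hypothetical sketch of configuring a drone fleet from off-the-shelf parts.
# None of these names come from Airware's real SDK.

def make_config(airframe, components, settings):
    """Bundle an airframe, third-party components, and settings."""
    return {"airframe": airframe,
            "components": list(components),
            "settings": dict(settings)}

def apply_to_fleet(config, fleet_ids):
    """Push one configuration onto every drone in a fleet."""
    return {drone_id: config for drone_id in fleet_ids}

# A crop-monitoring configuration applied to three drones at once.
crop_config = make_config(
    airframe="fixed-wing-survey",
    components=["multispectral-camera", "gps", "radio-downlink"],
    settings={"cruise_alt_m": 120, "telemetry_hz": 10},
)
fleet_state = apply_to_fleet(crop_config, ["uav-01", "uav-02", "uav-03"])
```

The point of the platform is exactly this separation: the customer picks parts and settings, and the autopilot handles actually flying them.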

If a company decides to use a surveillance drone for crop management, for instance, it can easily add software that stitches together different images to determine which areas of a field are overwatered or underwatered. “They don’t have to know the flight algorithms, or underlying hardware, they just need to connect their software or piece of hardware to the platform,” Downey says. “The entire industry can leverage that.”

Clients have trialed Airware’s platform over the past year — including researchers at MIT, who are demonstrating delivery of vaccines in Africa. Delta Drone in France is using the platform for open-air mining operations, search-and-rescue missions, and agricultural applications. Another UAV maker, Cyber Technology in Australia, is using the platform for drones responding to car crashes and other disasters, and inspecting offshore oil rigs.

Now, with its most recent $25 million funding round, Airware plans to launch the platform for general adoption later this year, viewing companies that monitor crops and infrastructure — with drones that require specific cameras and sensors — as potential early customers.

Scooped by Dr. Stefan Gruenwald!

Terminator 2: Phase-changing material could allow even low-cost robots to switch between hard and soft states

In the movie “Terminator 2,” the shape-shifting T-1000 robot morphs into a liquid state to squeeze through tight spaces or to repair itself when harmed.

Now a phase-changing material built from wax and foam, and capable of switching between hard and soft states, could allow even low-cost robots to perform the same feat.

The material — developed by Anette Hosoi, a professor of mechanical engineering and applied mathematics at MIT, and her former graduate student Nadia Cheng, alongside researchers at the Max Planck Institute for Dynamics and Self-Organization and Stony Brook University — could be used to build deformable surgical robots. The robots could move through the body to reach a particular point without damaging any of the organs or vessels along the way.

Robots built from the material, which is described in a new paper in the journal Macromolecular Materials and Engineering, could also be used in search-and-rescue operations to squeeze through rubble looking for survivors, Hosoi says.

Scooped by Dr. Stefan Gruenwald!

A self-organizing thousand-robot swarm can form any shape

Following simple programmed rules, autonomous robots arrange themselves into vast, complex shapes.

“Form a sea star shape,” directs a computer scientist, sending the command to 1,024 little bots simultaneously via an infrared light. The robots begin to blink at one another and then gradually arrange themselves into a five-pointed star. “Now form the letter K.”

The ‘K’ stands for Kilobots, the name given to these extremely simple robots, each just a few centimeters across, standing on three pin-like legs. Instead of one highly complex robot, a “kilo” of robots collaborate, providing a simple platform for enacting complex behaviors.

Just as trillions of individual cells can assemble into an intelligent organism, or a thousand starlings can form a great flowing murmuration across the sky, the Kilobots demonstrate how complexity can arise from very simple behaviors performed en masse (see video). To computer scientists, they also represent a significant milestone in the development of collective artificial intelligence (AI).
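
One ingredient of this kind of collective behavior can be captured in a few lines: distributed gradient formation, where each robot learns its hop distance from a seed purely by listening to nearby robots. The positions, communication radius, and update loop below are toy assumptions for illustration, not the Kilobots’ actual firmware.

```python
# Toy sketch of distributed gradient formation: each robot repeatedly sets
# its value to 1 + the smallest value heard from robots in range, while a
# seed robot stays at 0. After a few rounds, every robot knows its hop
# distance from the seed: a building block for placing itself in a shape.

def form_gradient(positions, seed, comm_radius=1.5, rounds=20):
    INF = float("inf")
    grad = {p: (0 if p == seed else INF) for p in positions}
    for _ in range(rounds):
        for p in positions:
            if p == seed:
                continue
            # Only robots within communication range are "heard".
            neighbors = [q for q in positions if q != p and
                         ((p[0]-q[0])**2 + (p[1]-q[1])**2) ** 0.5 <= comm_radius]
            best = min((grad[q] for q in neighbors), default=INF)
            if best + 1 < grad[p]:
                grad[p] = best + 1
    return grad

# Five robots in a row: values settle to 0, 1, 2, 3, 4 hops from the seed.
line = [(x, 0) for x in range(5)]
g = form_gradient(line, seed=(0, 0))
```

Each robot runs only this local rule, yet the swarm as a whole acquires global position information, which is the essence of how complexity arises from simple behaviors performed en masse.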

Scooped by Dr. Stefan Gruenwald!

Matter of Speed: How This Robot Wins Rock-Paper-Scissors Every Single Time Against Humans – It Cheats

You can't win rock-paper-scissors 100 percent of the time, at least not if you're human. Even those well-versed in rock-paper-scissors strategy lose sometimes; that's how we get enough interest for rock-paper-scissors championships. But there's not an RPS savant in the world who can beat the newly unveiled Japanese "Janken" robot at the game.

Named after the Japanese version of rock-paper-scissors, Janken will win the game against a human every time it plays. How? The robot uses high-speed computer vision to see which symbol its human opponent's hand is forming and then, quicker than the eye can see, forms a winning symbol in response.

Think you're fast enough to beat the robot? Well, Janken can detect your movement in just one millisecond and make its own shape in just 20 milliseconds. For reference, it takes 40 milliseconds for the human eye to process a moving image; so we're pretty sure that no matter how good at rock-paper-scissors you are, you can't win.
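
The article’s own numbers make the trick plain; a quick back-of-envelope check:

```python
# Timing budget from the figures quoted above: the robot detects the human
# hand in ~1 ms and forms its shape in ~20 ms, while the human eye needs
# ~40 ms to register a moving image.
detect_ms = 1
actuate_ms = 20
robot_response_ms = detect_ms + actuate_ms      # 21 ms from your move to its counter
human_eye_ms = 40
margin_ms = human_eye_ms - robot_response_ms    # the robot finishes 19 ms early
```

The robot's entire "cheat" fits comfortably inside the window in which your eye cannot even register that it reacted late.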

Rescooped by Dr. Stefan Gruenwald from #Communication!

Robots + AI + AofA: The astounding athletic power of quadcopters - Raffaello D'Andrea

In a robot lab at TEDGlobal, Raffaello D'Andrea demos his flying quadcopters: robots that think like athletes, solving physical problems with mathematical algorithms and AI.

Via Prof. Hankell
Prof. Hankell's curator insight, July 28, 11:04 AM

We are beginning to see autonomous technology and artificial intelligence that we will interact with as we would with other people...

Scooped by Dr. Stefan Gruenwald!

Robots with muscles: inspired by nature

Myorobotics at the Technical University of Munich takes us on a fascinating journey through how an adorable humanoid robot with muscles, called Roboy, was born in nine months, and sheds light on the future of robotics and what kind of future it might bring us. Fascinated by the complexity and beauty of everything, Rafael Hostettler always had a hard time choosing. That’s why he has an MSc in Computational Science from ETH Zurich, where he learned to simulate just about everything on computers, so he didn’t have to make a decision. Now he builds robots that imitate the building principles of the human musculoskeletal system and travels the world with Roboy, a 3D-printed robot boy that performs in a theater, goes to school, and captivates audiences with his stories.

Scooped by Dr. Stefan Gruenwald!

Developing Robotic Brains Capable of Thoughtful Communication

The Hagiwara Lab in the Department of Information and Computer Science of Keio University's Faculty of Science and Technology is trying to realize a robotic brain that can carry on a conversation, or in other words, a robotic brain that can understand images and words and can carry on thoughtful communication with humans.

"Even now, significant progress is being made with robots, and tremendous advancements are being made with the control parts. However, we feel like R&D with regards to the brain has been significantly delayed. When we think about what types of functions are necessary for the brain, the first thing that we as humans do is visual information processing. In other words, the brain needs to be able to process what is seen. The next thing is the language information processing that we as humans implement. By using language capabilities, humans are able to perform extremely advanced intellectual processing. However, even if a robotic brain can process what it sees and use words, it is still lacking one thing, specifically, feelings and emotions. Therefore, as a third pillar, we're conducting research on what is called Kansei Engineering, or affective information processing."

The Hagiwara Lab has adopted an approach of learning from the information processing of the human brain. The team is trying to construct a robotic brain while focusing on three elements: visual information processing, language information processing, and affective information processing. An even more important point is the integration of these three elements.

"With regards to visual information processing, by using a neural network as well, we're trying to recognize items through mechanisms based on experience and intuition in the same manner that is implemented directly by humans without having to use three-dimensional structures or perform complicated mathematical processing. In the conventional object recognition field, patterns from the recognized results are merely converted to symbols. However, by adding language processing to those recognized results, we can comprehensively utilize knowledge to get a better visual image. For example, even if an object is recognized as being a robot, knowledge such as the robot has a human form, or it has arms and legs can also be used. Next will be language information processing because processing of language functions is becoming extremely important. For example, even as a robot, the next step would be for it to recognize something as being cute, not cute, mechanical, or some other type of characteristic. Humans naturally have this type of emotional capability, but in current robotic research, that type of direction is not being researched much. Therefore, at our lab, we're conducting research in a direction that enables robots to understand what they see, to use language information processing to understand what they saw as knowledge, and to then comprehensively use the perspective of feelings and emotions like those of humans as well."

The robotic brain targeted by the Hagiwara Lab is one that is not merely just smart. Instead, the lab is targeting a robotic brain with emotions, feelings, and spirit that will enable it to interact skillfully with humans and other environments. To achieve this, the lab is conducting a broad range of research from the fundamentals of Kansei Engineering to applications thereof in fields such as entertainment, design, and healing.

Scooped by Dr. Stefan Gruenwald!

SupraPed Robots Will Use Trekking Poles to Hike Across Rough Terrain

Last year at the Stanford-Berkeley Robotics Symposium, we saw some tantalizing slides from Oussama Khatib about a humanoid robot that used trekking poles to balance itself. We were promised more details later, and the Stanford researchers delivered at the IEEE International Conference on Robotics and Automation (ICRA) this year, where they presented the concept of SupraPed robots.

The idea is equipping robots with a pair of special trekking poles packed with sensors that, according to the researchers, "transforms biped humanoids into tripeds or quadrupeds or more generally, SupraPeds." By using these smart poles to steady themselves, the robots would be able to navigate through "cluttered and unstructured environments such as disaster sites."

Humans have had a lot of practice walking around on two legs. Robots have not, which isn't their fault, but at the moment, even the best robots are working up to the level of a toddler. Some of them aren't bad at flat terrain, but as we saw in the DARPA Robotics Challenge Trials, varied terrain is very, very difficult. It doesn't just require the physical ability to move and balance, but also the awareness to know what path to take and where feet should be placed.

As good at this as humans are, even we get into situations where our balance and movements with our legs and feet simply aren't enough. And when this happens, we scramble. If we're fancy, we might use a walking stick or hiking poles for balance assistance, and if we're not fancy, sometimes an outstretched arm is enough.

Similar to the research we looked at yesterday, this is an entirely different philosophy about obstacles: instead of things to be avoided, they're things that can potentially be used to complete tasks that would otherwise be unsafe or impossible.

However, this is all simulation, and the programming behind it is fairly complex. The robot (when they throw a real robot into this mix) will have sophisticated 3D vision, tactile sensing, and a special set of actuated ski poles. The SupraPed platform includes a pair of smart walking staffs, a whole-body multi-contact control and planning software system, and real-time reactive controllers that integrate both tactile and visual information. Moreover, to bypass the difficulty of programming fully autonomous robot controllers, the SupraPed platform includes a remote haptic teleoperation system that allows the operator to remotely give high-level commands.
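
Why extra contact points help can be seen with the classic static-stability test: a stance is stable when the ground projection of the center of mass falls inside the convex hull of the contact points. The toy check below (not the SupraPed controller, and with made-up coordinates) shows that two footprints alone cannot enclose a forward-leaning center of mass, while feet plus two pole tips can.

```python
# Static-stability sketch: the center of mass (CoM) projection must lie
# inside the convex hull of the ground contacts. Two foot contacts form a
# degenerate hull (a line segment), so any CoM off that line is unstable;
# planting two pole tips widens the hull into a polygon that contains it.

def convex_hull(points):
    """Monotone-chain convex hull, returned in counterclockwise order."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    pts = sorted(set(points))
    def half(seq):
        hull = []
        for p in seq:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull
    lower, upper = half(pts), half(list(reversed(pts)))
    return lower[:-1] + upper[:-1]

def inside(point, hull):
    """True if point is strictly inside a counterclockwise hull."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    return all(cross(hull[i], hull[(i+1) % len(hull)], point) > 0
               for i in range(len(hull)))

feet = [(0.0, 0.1), (0.0, -0.1)]    # two footprints (meters, top-down view)
poles = [(0.5, 0.3), (0.5, -0.3)]   # two pole tips planted out in front
com = (0.2, 0.0)                    # center of mass leaning forward
# inside(com, convex_hull(feet)) is False; with feet + poles it becomes True.
```

The real system must of course also decide where to plant the poles and how to shift its weight, which is what the multi-contact planning software is for.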

Scooped by Dr. Stefan Gruenwald!

Silkworms and robot work together to weave Silk Pavilion

Researchers at MIT Media Lab's Mediated Matter group have created a dome from silk fibers woven by a robotic arm, which was then finished by live silkworms. 

The project is intended to explore how digital and biological fabrication techniques can be combined to produce architectural structures. The team programmed the robotic arm to imitate the way a silkworm deposits silk to build its cocoon. The arm then deposited a kilometer-long silk fiber across flat polygonal metal frames to create 26 panels. These panels were arranged to form a dome, which was suspended from the ceiling.

6,500 live silkworms were then placed on the structure. As the caterpillars crawled over the dome, they deposited silk fibers and completed the structure.

The Silk Pavilion was designed and constructed at the MIT Media Lab as part of a research project to explore ways of overcoming the existing limitations of additive manufacturing at architectural scales.

Mediated Matter group director Neri Oxman believes that by studying natural processes such as the way silkworms build their cocoons, scientists can develop ways of "printing" architectural structures more efficiently than can be achieved by current 3D printing technologies.

“In traditional 3D printing the gantry-size poses an obvious limitation; it is defined by three axes and typically requires the use of support material, both of which are limiting for the designer who wishes to print in larger scales and achieve structural and material complexity,” Oxman said earlier this year.

“Once we place a 3D printing head on a robotic arm, we free up these limitations almost instantly." Their research also showed that the worms were attracted to darker areas, so fibers were laid more sparsely on the sunnier south and east elevations of the dome.

Scooped by Dr. Stefan Gruenwald!

Researchers Develop Minibuilders, Tiny Robots Capable of 3D Printing Large Buildings

It is amazing how quickly the technologies around 3D printing have developed over the last couple of years. Not only are we seeing Moore's Law-like increases in print speeds while prices drop substantially, but entirely new and innovative approaches seem to emerge each day.

For instance, we have already seen 3D printing drones, combo 3D printer/CNC machines, a 3D printing assembly line, and all sorts of crazy new ways to print with food. Today a unique and quite innovative approach to 3D printing has been unveiled by a team of researchers at the Institute for Advanced Architecture of Catalonia (IAAC), based in Barcelona, Spain.

One problem with 3D printers today is that their build envelopes are limited by the size of the actual printer. In order to print a house, you need a 3D printer that is larger than that house. This severely limits the utility of any one device and adds substantial costs for any person or company trying to print on a large scale. A team of researchers at IAAC, led by Sasa Jokic and Petr Novikov and including Stuart Maggs, Dori Sadan, Jin Shihui, and Cristina Nan, has invented and worked diligently on a method of printing large-scale objects, such as buildings, with mobile 3D printing robots they call Minibuilders.

The Minibuilder lineup consists of three different robotic devices, each with dimensions no larger than 42 cm. Despite their small size, they are capable of printing buildings of almost any proportion. All three robots, each responsible for a different function, are required during any large 3D printing project. Working together, these Minibuilders are able to produce large-scale 3D prints without the need for a large-scale 3D printer.

Although the technology may not have been perfected, the researchers have put in place a stepping stone for a new method of printing buildings and other large objects, one we are sure will continue to develop.

What do you think about this new 3D printing system? Could you see large buildings and homes eventually using a technology like this? Let us know in the Minibuilder forum thread at

Anne Pascucci, MPA, CRA's curator insight, June 27, 5:15 AM

What can be next???

Scooped by Dr. Stefan Gruenwald!

Sperm-inspired microrobots controlled by magnetic fields

MagnetoSperm performs a flagellated swim using weak oscillating magnetic fields.

A team of researchers at the University of Twente (Netherlands) and German University in Cairo has developed sperm-inspired microrobots that can be controlled by weak oscillating magnetic fields.

Described in a cover article in AIP Publishing’s journal Applied Physics Letters, the 322-micron-long robots consist of a head coated in a thick cobalt-nickel layer and an uncoated tail.

When the microrobot is subjected to an oscillating field of less than five millitesla — about the strength of a decorative refrigerator magnet — it experiences a magnetic torque on its head, which causes its flagellum to oscillate and propel it forward.

The researchers are then able to steer the robot by directing the magnetic field lines towards a reference point. The propulsion mechanism allows for swimming at an average speed of about 158 microns/second with a weak 45 Hz magnetic field.
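
Dividing the reported figures gives a feel for the scale of this swimming:

```python
# Back-of-envelope figures derived from the numbers reported above.
speed_um_per_s = 158     # average swimming speed, microns per second
field_hz = 45            # oscillation frequency of the magnetic field
body_length_um = 322     # length of the microrobot

advance_per_cycle_um = speed_um_per_s / field_hz        # ~3.5 microns per stroke
body_lengths_per_s = speed_um_per_s / body_length_um    # ~0.5 body lengths per second
```

Each flick of the flagellum thus advances the robot only a few microns, about one percent of its body length, yet at 45 strokes per second it covers half a body length every second.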

Islam Khalil, PhD, an Assistant Professor of the German University in Cairo, designed the MagnetoSperm microrobots along with Sarthak Misra, PhD, and colleagues at MIRA-Institute for Biomedical Technology and Technical Medicine at the University of Twente.

“Nature has designed efficient tools for locomotion at micro-scales. Our microrobots are either inspired from nature or directly use living micro-organisms such as magnetotactic bacteria and sperm cells for complex micro-manipulation and targeted therapy tasks,” said Misra, the principal investigator of this study, and an associate professor at the University of Twente.

Scooped by Dr. Stefan Gruenwald!

Raptor robot runs at 28.58 mph, faster than any human

Inspired by dinosaurs, Raptor is a fast-running biped robot developed by the MSC Lab at Korea Advanced Institute of Science and Technology (KAIST). It has two under-actuated legs and a tail inspired by velociraptors, providing stability over high obstacles.

The Raptor robot runs at a speed of 46 km/h (28.58 mph) on a treadmill with off-board power. That’s faster than the fastest human, the Olympic sprinter Usain Bolt, whose top speed has been estimated at 43.92 km/h — but not as fast as Boston Dynamics’ Cheetah, at 47 km/h (29.2 mph).
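
The quoted speeds are straightforward to cross-check with a unit conversion (1 mile = 1.609344 km):

```python
# Cross-checking the speed figures quoted in the article.
KM_PER_MILE = 1.609344

def kmh_to_mph(kmh):
    return kmh / KM_PER_MILE

raptor_mph  = kmh_to_mph(46)      # ~28.58 mph (KAIST's Raptor)
bolt_mph    = kmh_to_mph(43.92)   # ~27.29 mph (Bolt's estimated top speed)
cheetah_mph = kmh_to_mph(47)      # ~29.20 mph (Boston Dynamics' Cheetah)
```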

Scooped by Dr. Stefan Gruenwald!

Automated 'killer robots' to be debated at UN in Geneva

A killer robot is a fully autonomous weapon that can select and engage targets without any human intervention. They do not currently exist but advances in technology are bringing them closer to reality. Those in favor of killer robots believe the current laws of war may be sufficient to address any problems that might emerge if they are ever deployed, arguing that a moratorium, not an outright ban, should be called if this is not the case.

However, those who oppose their use believe they are a threat to humanity and any autonomous "kill functions" should be banned.

"Autonomous weapons systems cannot be guaranteed to predictably comply with international law," Prof Sharkey told the BBC. "Nations aren't talking to each other about this, which poses a big risk to humanity."

Prof Sharkey is a member and co-founder of the Campaign to Stop Killer Robots and chairman of the International Committee for Robot Arms Control. Side events at the CCW will be hosted by the Campaign to Stop Killer Robots.

Prof Arkin from the Georgia Institute of Technology told the BBC he hoped killer robots would be able to significantly reduce non-combatant casualties but feared they would be rushed into battle before this was accomplished.

"I support a moratorium until that end is achieved, but I do not support a ban at this time," said Prof Arkin. He went on to state that killer robots may be better able to determine when not to engage a target than humans, "and could potentially exercise greater care in so doing".

Prof Sharkey is less optimistic. "I'm concerned about the full automation of warfare," he says.

The discussion of drones is not on the agenda as they are yet to operate completely autonomously, although there are signs this may change in the near future.

The UK successfully tested Taranis, an unmanned combat aircraft, in Australia this year, and America's Defense Advanced Research Projects Agency (Darpa) has made advances with the Crusher, an unmanned ground combat vehicle, since 2006.

The MoD has claimed in the past that it currently has no intention of developing systems that operate without human intervention.

On 21 November 2012 the United States Defense Department issued a directive that, "requires a human being to be 'in-the-loop' when decisions are made about using lethal force," according to Human Rights Watch.

Rescooped by Dr. Stefan Gruenwald from Fragments of Science!

Cloud Robotics: The Plan to Build a Massive Online Brain for All the World’s Robots

If you walk into the computer science building at Stanford University, Mobi is standing in the lobby, encased in glass. He looks a bit like a garbage can, with a rod for a neck and a camera for eyes. He was one of several robots developed at Stanford in the 1980s to study how machines might learn to navigate their environment—a stepping stone toward intelligent robots that could live and work alongside humans. He worked, but not especially well. The best he could do was follow a path along a wall. Like so many other robots, his “brain” was on the small side.

Now, just down the hall from Mobi, scientists led by roboticist Ashutosh Saxena are taking this mission several steps further. They’re working to build machines that can see, hear, comprehend natural language (both written and spoken), and develop an understanding of the world around them, in much the same way that people do.

Today, backed by funding from the National Science Foundation, the Office of Naval Research, Google, Microsoft, and Qualcomm, Saxena and his team unveiled what they call RoboBrain, a kind of online service packed with information and artificial intelligence software that any robot could tap into. Working alongside researchers at the University of California at Berkeley, Brown University, and Cornell University, they hope to create a massive online “brain” that can help all robots navigate and even understand the world around them. “The purpose,” says Saxena, who dreamed it all up, “is to build a very good knowledge graph—or a knowledge base—for robots to use.”

Any researcher anywhere will be able to use the service wirelessly, for free, and transplant its knowledge to local robots. These robots, in turn, will feed what they learn back into the service, improving RoboBrain’s know-how. Then the cycle repeats.

These days, if you want a robot to serve coffee or carry packages across a room, you have to hand-code a new software program—or ask a fellow roboticist to share code that’s already been built. If you want to teach a robot a new task, you start all over. These programs, or apps, live on the robot itself, and that, Saxena says, is inefficient. It goes against all the current trends in tech and artificial intelligence, which seek to exploit the power of distributed systems, massive clusters of computers that can power devices over the net. But this is starting to change. RoboBrain is part of an emerging movement known as cloud robotics.
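
The query-learn-contribute loop described above can be sketched as follows. All class and method names here are invented for illustration; they are not RoboBrain’s actual interface.

```python
# Illustrative sketch of the cloud-robotics loop: a robot first asks the
# shared knowledge base for a skill; if it's missing, the robot learns the
# task locally and contributes the result so every other robot can reuse it.
# These names are invented, not RoboBrain's real API.

class SharedKnowledgeBase:
    def __init__(self):
        self.skills = {}                      # task name -> how-to knowledge

    def query(self, task):
        return self.skills.get(task)          # None if nobody has shared it yet

    def contribute(self, task, knowledge):
        self.skills[task] = knowledge

class Robot:
    def __init__(self, cloud):
        self.cloud = cloud

    def perform(self, task):
        skill = self.cloud.query(task)
        if skill is None:
            skill = f"locally learned: {task}"    # stand-in for on-robot learning
            self.cloud.contribute(task, skill)    # share it with the fleet
        return skill

cloud = SharedKnowledgeBase()
first = Robot(cloud).perform("serve coffee")      # learns the task, then uploads it
second = Robot(cloud).perform("serve coffee")     # reuses the shared skill
```

The contrast with today's practice is that the skill lives in the shared store rather than on any single robot, so the second robot never has to be hand-coded.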

Via Mariaschnee
Tekrighter's curator insight, August 28, 7:01 AM

One of the most perplexing problems in science today is efficient integration of disparate data repositories. This is a step in the right direction.

Scooped by Dr. Stefan Gruenwald!

A Better Hand: Multitasking Like Never Before With These Robotic Fingers

Many hands make light work, right? Well, MIT researchers have created a wrist-worn robot with a couple of extra digits.

There are several explanations for why the human hand developed the way it has. Some researchers link our opposable thumbs to our ancestors’ need to club and hurl objects at enemies or throw a punch, while others say that a unique gene enhancer (a stretch of DNA that boosts the activity of certain genes) is what led to our anatomy. But most agree that bipedalism, enlarged brains and the need to use tools are what did the trick.

Yet, for as dexterous as our hands make us, a team of researchers at the Massachusetts Institute of Technology thinks we can do better. Harry Asada, a professor of engineering, has developed a wrist-worn robot that will allow a person to peel a banana or open a bottle one-handed.

Together with graduate student Faye Wu, Asada built a pair of robotic fingers that track, mimic and assist a person’s own five digits. The two extra appendages, which look like elongated plastic pointer fingers, attach to a wrist cuff and extend alongside the thumb and pinkie. The apparatus connects to a sensor-laden glove, which measures how a person’s fingers bend and move. An algorithm crunches that movement data and translates it into actions for each robotic finger.

The robot takes a lesson from the way our own five digits move. One control signal from the brain activates groups of muscles in the hand. This synergy, Wu explains in a video demonstration, is much more efficient than sending signals to individual muscles.

In order to map how the extra fingers would move, Wu attached the device to her wrist and began grabbing objects throughout the lab. With each test, she manually positioned the robot fingers onto an object in a way that would be most helpful—for example, steadying a soda bottle while she used her hand to untwist the top. In each instance, she recorded the angles of both her own fingers and those of her robot counterpart.
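
Those recorded pairs of angles are exactly the data needed to fit a mapping from human pose to helpful robot pose. As a minimal sketch, with made-up angles and a simple straight-line model rather than whatever the MIT team actually fit:

```python
# Fit robot_angle = a * human_angle + b by ordinary least squares from
# recorded (human finger angle, manually posed robot finger angle) pairs.
# The sample angles below are invented for illustration.

def fit_line(pairs):
    n = len(pairs)
    sx = sum(h for h, _ in pairs)
    sy = sum(r for _, r in pairs)
    sxx = sum(h * h for h, _ in pairs)
    sxy = sum(h * r for h, r in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

recorded = [(10, 24), (20, 44), (30, 64), (40, 84)]   # angle pairs, degrees
a, b = fit_line(recorded)

def predict(human_angle):
    """Robot finger angle suggested for a new human hand pose."""
    return a * human_angle + b
```

Once such a mapping is learned, the glove's live measurements can drive the extra fingers directly, which is what lets the device steady a bottle while the hand untwists the cap.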

Scooped by Dr. Stefan Gruenwald!

Origami Inspires the Rise of Self-Folding Robot

A creation made of composite paper can fold and assemble itself and start working without intervention. Such robots could be deployed cheaply and quickly.

An intricately cut sheet lies flat and motionless on a table. Then Samuel Felton, a graduate student at Harvard, connects the batteries, sending electricity coursing through, heating it. The sheet lurches to life, the pieces bending and folding into place. The transformation completes in four minutes, and the sheet, now a four-limbed robot, scurries away at more than two inches a second. The creation, reported Thursday in the journal Science, is the first robot that can fold itself and start working without any intervention from the operator. “We’re trying to make robots as quickly and cheaply as possible,” Mr. Felton said.

Inspired by origami, the Japanese paper-folding art, such robots could be deployed, for example, on future space missions, Mr. Felton said. Or perhaps the technology could one day be applied to Ikea-like furniture, folding from a flat-packed board to, say, a table without anyone fumbling with Allen wrenches or deciphering instructions seemingly rendered in hieroglyphics.

Mr. Felton’s sheet is not simple paper, but a composite made of layers of paper, a flexible circuit board and Shrinky Dinks — plastic sheets, sold as a toy, that shrink when heated above 212 degrees Fahrenheit. The researchers attached to the sheet two motors, two batteries and a microcontroller that served as the brain for the robot. Those components accounted for $80 of the $100 of materials needed for the robot. While the robot could fold itself, the sheet took a couple of hours for Mr. Felton to construct. Still, it was simpler and cheaper than the manufacturing process for most machines today — robots, iPhones, cars — which are made of many separate pieces that are then glued, bolted and snapped together.

Mr. Felton’s adviser, Robert J. Wood, a professor of engineering and applied sciences, was initially interested in building insect-size robots. But for machines that small, “there really are no manufacturing processes that are applicable,” Dr. Wood said.

Scooped by Dr. Stefan Gruenwald!

NASA's next Mars rover will make oxygen, to sustain life

For 17 years, NASA rovers have laid down tire tracks on Mars. But details the space agency divulged this week about its next Martian exploration vehicle underscored NASA's ultimate goal. Footprints are to follow someday.

The last three rovers -- Spirit, Opportunity and Curiosity -- confirmed that the Red Planet was once capable of supporting life and searched for signs of past life. The Mars rover of the next decade will home in on ways to sustain future life there, human life.

"The 2020 rover will help answer questions about the Martian environment that astronauts will face and test technologies they need before landing on, exploring and returning from the Red Planet," said NASA's William Gerstenmaier, who works on human missions. This will include experiments that convert carbon dioxide in the Martian atmosphere into oxygen "for human respiration." Oxygen could also be used on Mars to make rocket fuel, allowing astronauts to refill their tanks.
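The underlying chemistry is well established: MOXIE-style instruments split carbon dioxide by solid oxide electrolysis, 2 CO2 → 2 CO + O2. A back-of-the-envelope sketch of the theoretical yield follows; the function name and the 100%-conversion assumption are illustrative, not NASA figures.

```python
# Back-of-the-envelope stoichiometry for CO2 electrolysis (2 CO2 -> 2 CO + O2),
# the reaction MOXIE-style instruments rely on. Assumes perfect conversion.

M_CO2 = 44.01  # molar mass of CO2, g/mol
M_O2 = 32.00   # molar mass of O2, g/mol

def o2_yield_grams(co2_grams: float) -> float:
    """Maximum O2 mass obtainable from a given mass of CO2 (100% conversion)."""
    mol_co2 = co2_grams / M_CO2
    mol_o2 = mol_co2 / 2          # one O2 per two CO2
    return mol_o2 * M_O2

# Roughly 364 g of O2 is the theoretical ceiling per kilogram of CO2 processed:
print(round(o2_yield_grams(1000), 1))
```

Real conversion efficiency would be far lower, but the ratio shows why an atmosphere that is mostly CO2 is an attractive oxygen feedstock.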

The 2020 rover is a near spitting image of Curiosity, and NASA's Jet Propulsion Laboratory announced plans to launch the new edition not long after Curiosity landed on Mars in 2012. But the 2020 rover has new and improved features. The Mars Oxygen ISRU Experiment, or MOXIE for short, is just one. There are super cameras that will send back 3D panoramic images and spectrometers that will analyze the chemical makeup of minerals with an apparent eye to farming.

"An ability to live off the Martian land would transform future exploration of the planet," NASA said in a statement. The 2020 rover will also create a job for a future mission to complete, once the technology emerges to return to Earth from Mars. It will collect soil samples to be sent back for lab analysis at NASA.

Eric Chan Wei Chiang's curator insight, August 2, 7:57 PM

Oxygen production and minerals for farming would pave the way for manned missions and perhaps even a small colony. The next step would be a Mars sample return mission.




ESA's dropship quadcopter concept may offer precise, safe landings for future Mars rovers


The ESA has tested a novel system that may allow the agency to safely land rovers on Mars using a quadcopter-like dropship. A fully automated, proof of concept Skycrane prototype was created over the course of eight months under the ESA's StarTiger program, with the system's hardware largely derived from commercially available quadcopter components.

The primary challenge for the Dropter project's development team was creating a system that could detect and navigate hazardous terrain without the aid of real-time human input. This is a vital feature for any potential rover delivery system: a directly piloted sky crane is impossible because the distance between the operator and the vehicle creates a time lag between command and execution.

Therefore the new rover delivery method had to be designed around an autonomous navigation system. Initially the dropship navigates to the predetermined deployment zone using GPS and inertial navigation. Once in the vicinity of the target zone, the lander switches to vision-based navigation, using laser ranging and barometers to detect a safe, flat area on which to set down its precious cargo.

Once such a site is identified, the lander drops to a height of 10 m (33 ft) above the surface and lowers the rover with the use of a bridle, gradually descending until the rover gently touches down on the planet's surface.
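The phased descent described above (GPS cruise, vision-based site selection, a hover at 10 m, then lowering the bridle) reads naturally as a small state machine. A minimal sketch, with state names and thresholds invented for illustration rather than taken from ESA's flight software:

```python
# Illustrative state machine for the phased rover-delivery descent described
# above. States and inputs are invented for the sketch, not ESA flight code.

DROP_ALTITUDE_M = 10.0   # hover height before lowering the bridle

def next_state(state: str, near_target: bool, site_found: bool,
               altitude_m: float, rover_on_ground: bool) -> str:
    if state == "GPS_CRUISE":
        return "VISION_SEARCH" if near_target else "GPS_CRUISE"
    if state == "VISION_SEARCH":
        return "DESCEND" if site_found else "VISION_SEARCH"
    if state == "DESCEND":
        return "LOWER_BRIDLE" if altitude_m <= DROP_ALTITUDE_M else "DESCEND"
    if state == "LOWER_BRIDLE":
        return "RELEASE" if rover_on_ground else "LOWER_BRIDLE"
    return state  # RELEASE is terminal

# One nominal pass through the mission phases:
s = "GPS_CRUISE"
for obs in [(True, False, 120.0, False),   # reached target area
            (True, True, 120.0, False),    # flat site identified
            (True, True, 10.0, False),     # descended to hover height
            (True, True, 10.0, True)]:     # rover touched down
    s = next_state(s, *obs)
print(s)  # → RELEASE
```

The appeal of structuring autonomy this way is that each phase only needs the sensors relevant to it: GPS early, vision and laser ranging late.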

The culmination of eight months of development took place at Airbus’s Trauen site, located in northern Germany, where the concept dropship was put through its paces in a 40 m (131 ft) by 40 m (131 ft) recreation of the Martian surface. During the test, the lander managed to successfully use its navigation systems to safely transport a mock rover to the chosen target zone, whereupon the delivery vehicle assessed and selected a flat, safe landing site, and deployed the rover using the 5 m (16 ft) bridle.

Now, with the concept a proven success, the agency and its partners can focus on further developing the dropship for heavier, more realistic payloads.

The video below displays footage of the prototype dropship during the test at Airbus’s Trauen facility.


WIRED: Have a Drone? Check This Map Before You Fly It


The popularity of drones is climbing quickly among companies, governments and citizens alike. But the rules surrounding where, when and why you can fly an unmanned aerial vehicle aren’t very clear. The FAA has tried to assert control and insist on licensing for all drone operators, while drone pilots and some legal experts claim drones do not fall under the FAA’s purview. The uncertainty—and recent attempts by the FAA to fine a drone pilot and ground a search and rescue organization—has UAV operators nervous.

To help with the question of where it is legal to fly a drone, Mapbox has put together an interactive map of all the no-fly zones for UAVs they could find. Most of the red zones on the map are near airports, military sites and national parks. But as WIRED’s former Editor-in-Chief, Chris Anderson, now CEO of 3-D Robotics and founder of DIY Drones, discovered in 2007 when he crashed a drone bearing a camera into a tree on the grounds of Lawrence Berkeley National Laboratory, there is plenty of trouble in all sorts of places for drone operators to get into.

As one of the map’s authors, Bobby Sudekum, writes on the Mapbox blog, it’s a work in progress. They’ve made the data they collected available for anyone to use, and if you know of other no-fly zones that aren’t on the map, you can add that data to a public repository they started on GitHub.
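Checking a coordinate against zone polygons like those in Mapbox's public dataset comes down to a point-in-polygon test. A minimal ray-casting sketch, with a made-up square zone standing in for the real data:

```python
# Minimal point-in-polygon test (ray casting) of the kind one could run
# against published no-fly-zone polygons. The sample zone below is invented.

def in_zone(lon: float, lat: float, polygon: list[tuple[float, float]]) -> bool:
    """True if (lon, lat) lies inside the polygon (list of (lon, lat) vertices)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending right from the point.
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical square no-fly zone near an airport:
zone = [(-122.40, 37.60), (-122.35, 37.60), (-122.35, 37.65), (-122.40, 37.65)]
print(in_zone(-122.37, 37.62, zone))  # → True
print(in_zone(-122.50, 37.62, zone))  # → False
```

Real zone data is typically distributed as GeoJSON, but the containment check at the core is the same.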

For instance, you’ll see on the map below that there isn’t a no-fly area over Berkeley Lab, which sits in the greyed area in the hills above UC Berkeley. Similarly, there is no zone marked around Lawrence Livermore National Laboratory, one of the country’s two nuclear weapons labs. I have a call into the lab to check on the rules, but in the meantime, if you have a drone, just know that in 2006, the lab acquired a Gatling gun that has a range of 1 mile and can fire 4,000 rounds a minute.


Autonomous unmanned aerial octocopter primed to fly the future to your front door

Have you ever seen a horse fly? Maybe you have, but never like this one. This HorseFly has eight rotors, a wirelessly recharging battery and a mission to deliver merchandise right to your doorstep.

The University of Cincinnati and AMP Electric Vehicles, makers of the WorkHorse all-electric delivery truck, collaborated on the HorseFly "octocopter" through an innovative partnership made possible by the University of Cincinnati Research Institute (UCRI).

The newly designed, autonomous unmanned aerial vehicle (UAV) was developed to work in tandem with AMP's delivery trucks, creating a safe, fast and never-before-seen method of delivering goods.

Steve Burns, CEO of AMP, explains the process like this: The HorseFly will be positioned atop a delivery truck, awaiting a package from the driver.

When loaded, the HorseFly will scan the barcode on the package, determine the path to the delivery address via GPS and fly away – completely self-guided – to the appropriate destination. Meanwhile, the delivery truck will continue on its rounds. After successful delivery, the HorseFly will zoom back to the truck for its next delivery run and, if needed, a roughly two-minute wireless recharge.

"Our premise with HorseFly is that the HorseFly sticks close to the horse," Burns says. "If required, the HorseFly will wirelessly recharge from the large battery in the WorkHorse truck. The fact that the delivery trucks are sufficiently scattered within almost any region during the day makes for short flights, as opposed to flying from the warehouse for each delivery."
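The loop Burns describes (load, scan, plan a route, fly, return, top up the battery) can be sketched as a simple sequence; step names and the low-battery threshold are invented for illustration, and only the roughly two-minute recharge comes from the article.

```python
# Toy sketch of the HorseFly delivery cycle described above. The step names
# and battery threshold are invented; the ~2-minute recharge is per the article.

RECHARGE_S = 120  # roughly two-minute wireless recharge from the truck battery

def delivery_cycle(battery_pct: float, low_battery_pct: float = 30.0) -> list[str]:
    """Return the ordered steps for one delivery run."""
    steps = ["scan_barcode", "plan_route_gps", "fly_to_address",
             "drop_package", "return_to_truck"]
    if battery_pct < low_battery_pct:
        steps.append(f"wireless_recharge_{RECHARGE_S}s")
    return steps

print(delivery_cycle(25.0)[-1])  # → wireless_recharge_120s
```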

Paul Orkwis, head of UC's Department of Aerospace Engineering and Engineering Mechanics, looks at the HorseFly and sees its potential to be something more.

"If you want to get really far-fetched and look into the future, at something like a flying car, that's possibly what you could be looking at with this," Orkwis says.


Soft-Robotics: The robots of the future won't look anything like the Terminator


The field of soft robotics has attracted a rush of attention in the last year. Down the road at Harvard, multiple groups are working on soft robotic hands, jumping legs, exosuits, and quadrupeds that can do the limbo. At Worcester Polytechnic Institute's Soft Robotics Lab, researchers are building a snake. In San Francisco, a startup called Otherlab is building inflatable robots that can shake hands, walk, and carry riders. In Italy, a group of researchers built a robotic tentacle modeled after an octopus.

Before the 1970s, car companies made cars safer by making them larger and heavier. Then along came the airbag: a lightweight safety device that folded up invisibly into the vehicle until it sensed a crash. Similar revolutions took place with body armor, bridges, and contact lenses, and these researchers believe something similar is happening with robots.

"It’s not a part of conventional robotics technologies," says Fumiya Iida, a professor of bio-inspired robotics at the Swiss Federal Institute of Technology-Zurich and a member of the IEEE committee on soft robotics. "They have to think completely differently, use different materials, different energy sources. Definitely this is the way we should go in the long run." One of the most impressive rigid robots in the world right now is Boston Dynamics’ 300-pound humanoid Atlas. If Atlas wants to pick up a ball, it needs to sense and compute the precise distance between its digits and the ball and figure out exactly where to place its hand and how much pressure to apply.

Robots like Atlas "are doing a lot of thinking," says Barry Trimmer, PhD, a professor at Tufts and the editor of a new journal, Soft Robotics, which launched last month. "There’s a lot of hesitancy. ‘Where do I put my foot next?’ Animals just don't do that. We need to get away from the idea that you have to control every variable."

By contrast, Harvard’s starfish-shaped soft gripper only needs to be told to inflate. As it’s pumped full of air, it conforms to the shape of an object until its "fingers" have enough pressure to lift it. Another example would be a human picking up a glass of water. We don’t have to compute the exact size and shape of the glass with our brains; our hand adapts to the object. Similarly, Bubbles doesn’t calculate the full length of its movement.
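The gripper behavior described above is essentially open-loop inflation with a pressure cutoff: keep pumping until the fingers press hard enough, with no geometry computed at all. A minimal sketch, with all pressures invented for illustration:

```python
# Minimal control loop for a pneumatic gripper of the kind described above:
# the only command is "inflate"; grip emerges once finger pressure crosses a
# threshold. All constants are invented for the sketch.

GRIP_PRESSURE_KPA = 40.0   # pressure at which the fingers hold the object
STEP_KPA = 5.0             # pressure added per pump actuation

def inflate_until_grip(start_kpa: float = 0.0, max_steps: int = 100) -> int:
    """Return the number of pump steps needed to reach grip pressure."""
    pressure = start_kpa
    steps = 0
    while pressure < GRIP_PRESSURE_KPA and steps < max_steps:
        pressure += STEP_KPA   # one actuation of the air syringe
        steps += 1
    return steps

print(inflate_until_grip())  # → 8
```

Note what is absent: no object size, no finger positions. The material itself does the "computation" of conforming to the object's shape.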

There are technological challenges as well. In addition to air and fluid pressure actuators, soft roboticists are experimenting with dielectric elastomers, elastic materials that expand and contract in response to electric voltage; shape-memory alloys, metal alloys that can be programmed to change shape at certain temperatures; and springs that respond to light. These approaches are still rudimentary, as are the control systems that operate the robots. In the case of many of Harvard’s soft robots, it’s simply a syringe of air attached to a tube.

The field is so new, however, that no possibilities have yet been ruled out. Soft robotics technologies could theoretically be used in a wearable pair of human wings. More practically, soft robots could easily pack eggs or pick fruit — traditional hard robots, equipped with superhuman grips, are more likely to break yolks and inadvertently make applesauce. A mass of wormlike "meshworm" robots could be filled with water and dropped over a disaster area, where they would crawl to survivors. A soft robotic sleeve could be worn to eliminate tremors or supplement strength lost with age. Soft robots could be used in space exploration, where weight is hugely important; in prosthetics, where they would provide comfort and lifelikeness; in the home, where they can help out around the house without trampling the dog; and in surgical robots, where operators have inspired a few lawsuits after puncturing patients’ insides.

Rudolf Kabutz's curator insight, July 3, 3:38 AM

Do robots have to be hard and metallic? Soft spongy robots could have many advantages.

Anne Pascucci, MPA, CRA's curator insight, July 3, 5:44 AM

Very cool!


Collaborative learning — for robots

Algorithm lets independent agents collectively produce a machine-learning model without aggregating data.

Machine learning, in which computers learn new skills by looking for patterns in training data, is the basis of most recent advances in artificial intelligence, from voice-recognition systems to self-parking cars. It’s also the technique that autonomous robots typically use to build models of their environments.

That type of model-building gets complicated, however, in cases in which clusters of robots work as teams. The robots may have gathered information that, collectively, would produce a good model but which, individually, is almost useless. If constraints on power, communication, or computation mean that the robots can’t pool their data at one location, how can they collectively build a model?

At the Uncertainty in Artificial Intelligence conference in July, researchers from MIT’s Laboratory for Information and Decision Systems will answer that question. They will present an algorithm in which distributed agents — such as robots exploring a building — collect data and analyze it independently. Pairs of agents, such as robots passing each other in the hall, then exchange analyses.

In experiments involving several different data sets, the researchers’ distributed algorithm actually outperformed a standard algorithm that works on data aggregated at a single location.

“A single computer has a very difficult optimization problem to solve in order to learn a model from a single giant batch of data, and it can get stuck at bad solutions,” says Trevor Campbell, a graduate student in aeronautics and astronautics at MIT, who wrote the new paper with his advisor, Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics. “If smaller chunks of data are first processed by individual robots and then combined, the final model is less likely to get stuck at a bad solution.”

Campbell says that the work was motivated by questions about robot collaboration. But it could also have implications for big data, since it would allow distributed servers to combine the results of their data analyses without aggregating the data at a central location.

“This procedure is completely robust to pretty much any network you can think of,” Campbell says. “It’s very much a flexible learning algorithm for decentralized networks.”
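The pairwise exchange Campbell describes can be illustrated with the simplest possible "model", a running mean: agents fit local summaries and merge them two at a time without ever pooling raw data. This is only in the spirit of the MIT work; the paper's actual algorithm handles far richer Bayesian nonparametric models.

```python
import random

# Gossip-style sketch of agents fitting local models and merging them
# pairwise. Illustrates the "pairs of robots exchange analyses" idea only;
# the MIT paper's actual algorithm is different and far more general.

def local_mean(data: list[float]) -> tuple[float, int]:
    """Each agent's 'model': the mean of its own data, plus a sample count."""
    return sum(data) / len(data), len(data)

def merge(a: tuple[float, int], b: tuple[float, int]) -> tuple[float, int]:
    """Combine two agents' models without sharing any raw data."""
    mean_a, n_a = a
    mean_b, n_b = b
    n = n_a + n_b
    return (mean_a * n_a + mean_b * n_b) / n, n

random.seed(0)
datasets = [[random.gauss(3.0, 1.0) for _ in range(50)] for _ in range(4)]
models = [local_mean(d) for d in datasets]

# Agents meeting pairwise "in the hall"; all data ends up accounted for.
ab = merge(models[0], models[1])
cd = merge(models[2], models[3])
combined, n = merge(ab, cd)
print(n)  # → 200
```

For this toy statistic the merged result is exactly what centralized computation would give; the research problem is achieving comparable behavior for models where merging is not so trivially exact.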

Matt Mayevsky's curator insight, July 1, 1:54 AM

Amazing. In 200 to 300 years, this could be the end of human civilization and the beginning of the posthuman.


Paraplegic man has made the first kick of the Soccer World Cup using a mind-controlled robotic exoskeleton


Juliano Pinto, a 29-year-old with complete paralysis of the lower trunk, performed the symbolic kick-off at the Corinthians Arena in Sao Paulo.

Using his robotic suit, Mr Pinto kicked the official ball a short distance along a mat laid down by the touchline.

But some observers argued the historic event was not given the attention it deserved during the opening ceremony. The identity of the young volunteer was kept a secret until after the event. His robotic exoskeleton was created by a team of more than 150 researchers led by Brazilian neuroscientist Dr Miguel Nicolelis.

Dr Nicolelis called the event a "great team effort" and afterwards tweeted: "We did it!!!" "It was up to Juliano to wear the exoskeleton, but all of them made that shot. It was a big score by these people and by our science," he commented.

The neuroscientist, who is based at Duke University in the US, is a leading figure in the field of brain-machine interfaces. In breakthrough work published in 2003, he showed that monkeys could control the movement of virtual arms on an avatar using just their brain activity.

The scientists have been working under the banner of a consortium called the Walk Again Project. In a statement, the consortium said the World Cup demonstration would be "just the beginning" of a future "in which people with paralysis may abandon the wheelchair and literally walk again".

But some TV networks didn't capture the event, prompting criticism on Twitter. Some commentators also took aim at ceremony organizers for apparently sidelining the moment in favor of performing acts. "It's the first time an exoskeleton has been controlled by brain activity and offered feedback to the patients," Dr Nicolelis told the AFP news agency.


Programming matter by folding: Shape-shifting robots

Self-folding sheets of a plastic-like material point the way to robots that can assume any conceivable 3-D structure.

Programmable matter is a material whose properties can be programmed to achieve specific shapes or stiffnesses on command. The concept requires constituent elements that interact and rearrange intelligently to meet a goal. This research considers programmable sheets that autonomously fold themselves into different shapes. Past approaches to creating transforming machines have been limited by small feature sizes, large numbers of components, and the associated complexity of communication among the units. We seek to mitigate these difficulties through the unique concept of self-folding origami with universal crease patterns.

This approach exploits a single sheet composed of interconnected triangular sections. The sheet is able to fold into a set of predetermined shapes using embedded actuation. To implement this self-folding origami concept, we have developed a scalable end-to-end planning and fabrication process. Given a set of desired objects, the system computes an optimized design for a single sheet and multiple controllers to achieve each of the desired objects. The material, called programmable matter by folding, is an example of a system capable of achieving multiple shapes for multiple functions.

As director of the Distributed Robotics Laboratory at the Computer Science and Artificial Intelligence Laboratory (CSAIL), Professor Daniela Rus researches systems of robots that can work together to tackle complicated tasks. One of the big research areas in distributed robotics is what’s called “programmable matter,” the idea that small, uniform robots could snap together like intelligent Legos to create larger, more versatile robots.

The U.S. Defense Department’s Defense Advanced Research Projects Agency (DARPA) has a Programmable Matter project that funds a good deal of research in the field and specifies “particles … which can reversibly assemble into complex 3D objects.” But that approach turns out to have drawbacks, Rus says. “Most people are looking at separate modules, and they’re really worried about how these separate modules aggregate themselves and find other modules to connect with to create the shape that they’re supposed to create,” Rus says. But, she adds, “actively gathering modules to build up a shape bottom-up, from scratch, is just really hard given the current state of the art in our hardware.”

So Rus has been investigating alternative approaches, which don’t require separate modules to locate and connect to each other before beginning to assemble more complex shapes. Fortunately, also at CSAIL is Erik Demaine, who joined the MIT faculty at age 20 in 2001, becoming the youngest professor in MIT history. One of Demaine’s research areas is the mathematics of origami, and he and Rus hatched the idea of a flat sheet of material with tiny robotic muscles, or actuators, which could fold itself into useful objects. In principle, flat sheets with flat actuators should be much easier to fabricate than three-dimensional robots with enough intelligence that they can locate and attach to each other.

So they designed a set of algorithms that, given the sequences of folds for several different shapes, would determine the minimum number of actuators necessary to produce all of them. Then they set about building a robot that could actually assume multiple origami shapes. Their prototype, made from glass-fiber and hydrocarbon materials, with an elastic plastic at the creases, is divided into 16 squares about a centimeter across, each of which is further divided into two triangles. The actuators consist of a shape-memory alloy — a metal that changes shape when electricity is applied to it. Each triangle also has a magnet in it, so that it can attach to its neighbors once the right folds have been performed.
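The actuator-counting step has a simple core: every crease used by any target shape needs an actuator, so the required set is the union of creases across shapes. A toy sketch with invented crease IDs; the real planner also optimizes the crease pattern itself and a controller per target shape, which this deliberately omits:

```python
# Toy version of the actuator-selection idea described above: given the fold
# sets for several target shapes (each a set of crease IDs), the creases that
# need actuators are the union across shapes. All crease IDs are invented.

def actuators_needed(shapes: dict[str, set[str]]) -> set[str]:
    """Creases that must be actuated to reach every target shape."""
    needed: set[str] = set()
    for creases in shapes.values():
        needed |= creases
    return needed

shapes = {
    "boat":  {"c1", "c2", "c5"},
    "plane": {"c2", "c3", "c5"},
    "tent":  {"c1", "c3"},
}
print(len(actuators_needed(shapes)))  # → 4
```

Sharing creases between shapes is what keeps the actuator count below one-per-fold-per-shape, which is why a single universal crease pattern can serve many objects.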

Keith Wayne Brown's curator insight, June 2, 6:04 PM

Transformers--more than meets the eye!

Tekrighter's curator insight, June 3, 5:30 AM

Awesome! This is right up there with 3-D printing as the technological advance of the decade...


Robobees: Harvard Project Funds The Engineering Of Robotic Bees Soon To Be In Flight


With the alarming decline in the honey bee population sweeping our globe, fear of the multi-billion dollar crop industry collapsing has been on many people’s minds. To tackle this issue, Harvard’s School of Engineering and Applied Sciences has been working with staff from the Department of Organismic and Evolutionary Biology and Northeastern University’s Department of Biology to develop robot bees. And according to a new video just released, these insectoid automatons have already taken flight.

The collaborators envision that the nature-inspired research could lead to a greater understanding of how to artificially mimic the collective behavior and “intelligence” of a bee colony; foster novel methods for designing and building an electronic surrogate nervous system able to deftly sense and adapt to changing environments; and advance work on the construction of small-scale flying mechanical devices.

More broadly, the scientists anticipate the devices will open up a wide range of discoveries and practical innovations, advancing fields ranging from entomology and developmental biology to amorphous computing and electrical engineering.

Eli Levine's curator insight, May 16, 11:14 AM

If we're able to figure out how to artificially mimic bee colonies, imagine what we could do with our human societies to improve effectiveness, efficiency and to clear away our delusions and non-self-preservationist behavior (in terms of the larger social self that we're all a part of).


Imagine a world where we have coordination and cooperation, rather than competition and violence.  Imagine a world where we work to solve common problems that exist on the various scales of human society, from local to global. 


Imagine if we're able to eliminate the petty, chimpish aspects of our brains and psychology, to live happier, healthier lives as a more survivable and adaptable species.


Just think of the possibilities that we could then do, to advance both the universe and ourselves safely (because, if we're able to perceive dangers accurately, why should we advance in such dangerous fashions?)


I may not be down for the first generation of implants.  But I would be down for the fourth, fifth or sixth generation.


That's just me.


Think about it.