Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Rescooped by Dr. Stefan Gruenwald from Fragments of Science

Bionic Spinal Cord Lets You Move Robotic Limbs With Power of Thought


Australian researchers have created a “bionic spinal cord.” They claim the device could give paralyzed people real hope of walking again, moving robotic limbs by thought alone and without the need for open brain surgery.

 

A research team from the Vascular Bionics Laboratory at the University of Melbourne developed the novel neural-recording device, which avoids invasive open-brain surgery and reduces the risk of breaching the blood-brain barrier because it is implanted inside the brain’s blood vessels.

 

Developed under DARPA’s Reliable Neural-Interface Technology (RE-NET) program, the Stentrode can potentially safely expand the use of brain-machine interfaces (BMIs) in the treatment of physical disabilities and neurological disorders.

 

The researchers describe proof-of-concept results from a study in sheep, demonstrating high-fidelity measurements from the motor cortex, the brain region responsible for controlling voluntary movement, using the novel device, which is about the size of a paperclip.

 

Notably, the device records neural activity that has been shown in pre-clinical trials to move limbs through an exoskeleton.

 

The team, led by neurologist Thomas Oxley, M.D., published their study in an article in the journal Nature Biotechnology.


Via Mariaschnee
Scooped by Dr. Stefan Gruenwald

Robots Get Creative To Cut Through Clutter


Clutter is a special challenge for robots, but new Carnegie Mellon University software is helping robots cope, whether they're beating a path across the Moon or grabbing a milk jug from the back of the refrigerator.

 

The software not only helped a robot deal efficiently with clutter; it also, unexpectedly, revealed the robot's creativity in solving problems.

"It was exploiting sort of superhuman capabilities," Siddhartha Srinivasa, associate professor of robotics, said of his lab's two-armed mobile robot, the Home Exploring Robot Butler, or HERB. "The robot's wrist has a 270-degree range, which led to behaviors we didn't expect. Sometimes, we're blinded by our own anthropomorphism." In one case, the robot used the crook of its arm to cradle an object to be moved. "We never taught it that," Srinivasa added.

 

The rearrangement planner software was developed in Srinivasa's lab by Jennifer King, a Ph.D. student in robotics, and Marco Cognetti, a Ph.D. student at Sapienza University of Rome who spent six months in Srinivasa's lab. They will present their findings May 19 at the IEEE International Conference on Robotics and Automation in Stockholm, Sweden. In addition to HERB, the software was tested on NASA's KRex robot, which is being designed to traverse the lunar surface. While HERB focused on clutter typical of a home, KRex used the software to find traversable paths across an obstacle-filled landscape while pushing an object.

 

Robots are adept at "pick-and-place" (P&P) processes: picking up an object in a specified place and putting it down at another specified place. Srinivasa said this has great applications in settings where clutter isn't a problem, such as factory production lines. But that's not what robots encounter when they land on distant planets, or when "helpmate" robots eventually arrive in people's homes.

 

P&P simply doesn't scale up in a world full of clutter. When a person reaches for a milk carton in a refrigerator, they don't necessarily move every other item out of the way first. Rather, they might move an item or two while shoving others aside as the carton is pulled out.

 

The rearrangement planner automatically finds a balance between the two strategies, Srinivasa said, based on the robot's progress on its task. The robot is programmed to understand the basic physics of its world, so it has some idea of what can be pushed, lifted or stepped on. And it can be taught to pay attention to items that might be valuable or delicate, in case it must extricate a bull from a china shop.
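The balance Srinivasa describes can be made concrete with a toy cost model. The sketch below is our illustration, not the CMU rearrangement planner: it greedily chooses whichever primitive, a careful pick-and-place or a cheap shove, costs less for each object, and treats fragile items as pick-only. All names and cost values are assumptions.

```python
# A minimal sketch (our toy, not the CMU planner): a greedy rearrangement
# step that trades off careful pick-and-place against cheaper, less precise
# pushes, while respecting per-object constraints such as fragility.
from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    fragile: bool       # e.g. china: never push, always pick up carefully
    mass_kg: float

PICK_COST = 5.0         # assumed fixed cost of a grasp-move-release cycle
PUSH_COST_PER_KG = 1.0  # assumed cost of shoving an object aside

def choose_action(obj: Obj) -> str:
    """Return the cheaper manipulation that respects the object's constraints."""
    if obj.fragile:
        return "pick"   # delicate items are never pushed
    return "push" if PUSH_COST_PER_KG * obj.mass_kg < PICK_COST else "pick"

clutter = [Obj("china_cup", True, 0.2), Obj("milk_jug", False, 2.0),
           Obj("cast_iron_pan", False, 6.0)]
for o in clutter:
    print(f"{o.name}: {choose_action(o)}")
```

A real planner searches over sequences of such actions and replans as the scene changes; the point here is only the cost trade-off between the two strategies.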

Scooped by Dr. Stefan Gruenwald

Disney’s VertiGo Combines Car, Helicopter to Seemingly Defy Gravity


From Disney and ETH Zurich, this steam-punkish robot can transition from ground to wall and back again.

 

VertiGo is a wall-climbing robot capable of transitioning from the ground to the wall, created in collaboration between Disney Research Zurich and ETH. The robot has two tiltable propellers that provide thrust onto the wall, and four wheels. One pair of wheels is steerable, and each propeller has two degrees of freedom for adjusting the direction of thrust. By transitioning from the ground to a wall and back again, VertiGo extends the ability of robots to travel through urban and indoor environments.

The robot can move on a wall quickly and with agility. Using the propellers to press the robot onto the wall lets it traverse indentations such as masonry. The choice of two propellers rather than one enables the floor-to-wall transition: thrust is applied both towards the wall by the rear propeller and upward by the front propeller, resulting in a flip onto the wall.
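As a rough plausibility check (our numbers, not Disney's published figures): for the robot to hold position on a vertical wall using wheel friction, the wall-normal propeller thrust must satisfy

```latex
\mu\, F_{\text{thrust}} \ge m g
\quad\Longrightarrow\quad
F_{\text{thrust}} \ge \frac{m g}{\mu}
```

so a hypothetical 0.5 kg robot on a surface with friction coefficient 0.8 would need at least 0.5 x 9.81 / 0.8, about 6.1 N, of thrust pressing it into the wall just to avoid sliding down.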

Scooped by Dr. Stefan Gruenwald

This five-fingered robot hand is close to human in functionality


A University of Washington team of computer scientists and engineers has built what they say is one of the most highly capable five-fingered robot hands in the world. It can perform dexterous manipulation and learn from its own experience without needing humans to direct it. Their work is described in a paper to be presented May 17 at the IEEE International Conference on Robotics and Automation.

 

“Hand manipulation is one of the hardest problems that roboticists have to solve,” said lead author Vikash Kumar, a UW doctoral student in computer science and engineering. “A lot of robots today have pretty capable arms but the hand is as simple as a suction cup or maybe a claw or a gripper.”

 

The UW research team has developed an accurate simulation model that enables a computer to analyze movements in real time. In their latest demonstration, they apply the model to the robot hardware and to real-world tasks like rotating an elongated object.

 

Scooped by Dr. Stefan Gruenwald

Danko Nikolić: How to Make Intelligent Robots That Understand the World

There are some amazing robots roving the surface of Mars, but they are heavily dependent on their human operators. What if we could provide them with human-like intelligence so that they could find their own way without assistance? What if we could teach them to deal autonomously with completely novel situations? Danko Nikolić, a neuroscientist at the Max Planck Institute for Brain Research, has his own vision: a novel approach to artificial intelligence (AI) that could give robots the capability to understand the world through a method called “AI-Kindergarten”. So, can we build an artificial intelligence strong enough for a robot to find its way in an environment as hostile and unpredictable as space?

Scooped by Dr. Stefan Gruenwald

SkinHaptics: Research brings ‘smart hands’ closer to reality


Using your skin as a touchscreen has been brought a step closer after UK scientists successfully created tactile sensations on the palm using ultrasound sent through the hand. The University of Sussex-led study -- funded by the Nokia Research Centre and the European Research Council -- is the first to find a way for users to feel what they are doing when interacting with displays projected on their hand. This solves one of the biggest challenges for technology companies who see the human body, particularly the hand, as the ideal display extension for the next generation of smartwatches and other smart devices.

 

Current ideas rely on vibrations or pins, which both need contact with the palm to work, interrupting the display. However, this new innovation, called SkinHaptics, sends sensations to the palm from the other side of the hand, leaving the palm free to display the screen. The device uses 'time-reversal' processing to send ultrasound waves through the hand. This technique is effectively like ripples in water but in reverse -- the waves become more targeted as they travel through the hand, ending at a precise point on the palm.
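The 'time-reversal' trick is easiest to see as delay compensation: if you know how long a pulse takes to travel from each transducer to the focal point, you can fire the transducers so all wavefronts arrive together. A minimal sketch of that idea (our toy geometry and constants, not the SkinHaptics implementation):

```python
# Minimal sketch of time-reversal-style focusing: fire each emitter with a
# delay that compensates its travel time so all pulses converge at one spot.
import numpy as np

C_TISSUE = 1540.0  # assumed speed of sound in soft tissue, m/s

# Hypothetical emitter positions on the back of the hand and a focal
# point on the palm, all in meters.
emitters = np.array([[0.00, 0.0], [0.01, 0.0], [0.02, 0.0]])
focus = np.array([0.01, 0.03])

travel_t = np.linalg.norm(emitters - focus, axis=1) / C_TISSUE
delays = travel_t.max() - travel_t  # farthest emitter fires first

for i, d in enumerate(delays):
    print(f"emitter {i}: fire after {d * 1e6:.2f} microseconds")
```

Real tissue and bone propagate sound far less uniformly than a single speed constant suggests, which is why the actual technique measures and time-reverses recorded wave fields rather than computing simple geometric delays.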

 

It draws on a rapidly growing field of technology called haptics: the science of applying touch sensation and control to interaction with computers and technology. Prof Sriram Subramanian, who leads the research team at the University of Sussex, says that technologies will inevitably need to engage other senses, such as touch, as we enter what designers are calling an 'eye-free' age of technology.

He says: "Wearables are already big business and will only get bigger. But as we wear technology more, it gets smaller and we look at it less, and therefore multisensory capabilities become much more important.

"If you imagine you are on your bike and want to change the volume control on your smartwatch, the interaction space on the watch is very small. So companies are looking at how to extend this space to the hand of the user. What we offer people is the ability to feel their actions when they are interacting with the hand."

Rescooped by Dr. Stefan Gruenwald from levin's linkblog: Knowledge Channel

Machines are becoming more creative than humans


Can machines be creative? Recent successes in AI have shown that machines can now perform at human levels in many tasks that, just a few years ago, were considered to be decades away, like driving cars, understanding spoken language, and recognizing objects. But these are all tasks where we know what needs to be done, and the machine is just imitating us. What about tasks where the right answers are not known? Can machines be programmed to find solutions on their own, and perhaps even come up with creative solutions that humans would find difficult?

 

The answer is a definite yes! There are branches of AI focused precisely on this challenge, including evolutionary computation and reinforcement learning. Like the popular deep learning methods, which are responsible for many of the recent AI successes, these branches of AI have benefited from the million-fold increase in computing power we’ve seen over the last two decades. There are now spacecraft antennas so complex they could only be designed through computational evolution. There are game-playing agents in Othello, Backgammon, and most recently Go that have learned to play at the level of the best humans, and in the case of AlphaGo, even beyond the ability of the best humans. There are non-player characters in Unreal Tournament that have evolved to be indistinguishable from humans, thereby passing the Turing test, at least for game bots. And in finance, there are computational traders in the stock market evolved to make real money.

 

These AI agents are different from those commonly seen in robotics, vision, and speech processing in that they were not taught to perform specific actions. Instead, they learned the best behaviors on their own by exploring possible behaviors and determining which ones lead to the best outcomes. Many such methods are modeled after similar adaptation in biology. For instance, evolutionary computation takes concepts from biological evolution. The idea is to encode candidate solutions (such as videogame players) in such a way that it is possible to recombine and mutate them to get new solutions. Then, given a large population of candidates with enough variation, a parallel search method is run to find a candidate that actually solves the problem. The most promising candidates are selected for mutation and recombination in order to construct even better candidates as offspring. In this manner, only an extremely tiny fraction of the entire group of possible candidates needs to be searched to find one that actually solves the problem, e.g. plays the game really well.
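The encode-recombine-mutate-select loop described here is simple enough to fit in a few lines. Below is a toy genetic algorithm (a generic textbook example, not any of the systems mentioned above) that evolves a bit-string toward an all-ones target; real applications differ mainly in how candidates are encoded and evaluated:

```python
# Toy genetic algorithm: evolve a bit-string toward all ones.
import random

GENOME_LEN, POP, GENS, MUT = 20, 30, 60, 0.05

def fitness(genome):
    return sum(genome)  # number of correct (one) bits

def mutate(genome):
    return [bit ^ (random.random() < MUT) for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # single-point recombination
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == GENOME_LEN:
        break  # a candidate solves the problem
    parents = pop[: POP // 2]  # select the most promising candidates
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print(f"best genome after {gen} generations:", "".join(map(str, pop[0])))
```

Swap the bit-string for an antenna geometry or a game-playing controller, and the fitness function for a simulator score, and the same loop yields the kinds of designs described above.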

 

We can apply the same approach to many domains where it is possible to evaluate the quality of candidates computationally. It applies to many design domains, including the design of the space antenna mentioned above, the design of a control system for a finless rocket, or the design of a multilegged, walking robot. Often evolution comes up with solutions that are truly unexpected but still effective — in other words, creative. For instance, when working on a controller that would navigate a robotic arm around obstacles, we accidentally disabled its main motor. It could no longer reach targets far away, because it could not turn around its vertical axis. What the controller evolved to do instead was slowly turn the arm away from the target, using its remaining motors, and then swing it back really hard, turning the whole robot towards the target through inertia!


Via Levin Chin
Scooped by Dr. Stefan Gruenwald

How Google Wants to Solve Robotic Grasping by Letting Robots Learn for Themselves

800,000 grasps is just the beginning for Google's large-scale robotic grasping project

 

Teaching robots this skill can be tricky, because there aren’t necessarily obvious connections between sensor data and actions, especially if you have gobs of sensor data coming in all the time (as you do with vision systems). A cleverer approach is to let the robots learn for themselves instead of trying to teach them at all. At Google Research, a team of researchers, with help from colleagues at X, tasked a 7-DoF robot arm with picking up objects in clutter using monocular visual servoing, and used a deep convolutional neural network (CNN) to predict the outcome of each grasp. The CNN continuously retrained itself (starting with many failures but gradually getting better), and to speed the process along, Google threw 14 robots at the problem in parallel. The process is completely autonomous: all the humans had to do was fill the bins with stuff and turn the power on.
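In outline, that training loop looks like the sketch below. Every function here is a stub standing in for hardware or the CNN (our pseudocode-style toy, not Google's code); the point is the self-supervised structure: the robot's own grasp outcomes become the training labels.

```python
# Self-supervised grasp learning, in outline: pick the command the model
# currently rates best, execute it, and record the outcome as a label.
import random

def predict_success(image, motor_cmd, model):
    """Stub for the CNN: score a candidate motor command given a camera image."""
    return model.get((image, motor_cmd), random.random())

def execute_grasp(motor_cmd):
    """Stub for the real arm: True if an object was actually lifted."""
    return random.random() < 0.3  # assumed early success rate

dataset, model = [], {}
for trial in range(1000):
    image = f"frame_{trial}"                      # hypothetical camera frame
    candidates = [f"cmd_{i}" for i in range(16)]  # hypothetical motor commands
    best = max(candidates, key=lambda c: predict_success(image, c, model))
    success = execute_grasp(best)
    dataset.append((image, best, success))        # the label comes for free
    model[(image, best)] = 1.0 if success else 0.0  # toy "retraining" step

print(f"collected {len(dataset)} self-labeled grasps")
```

Running 14 arms in parallel, as Google did, simply means 14 of these loops feeding one shared dataset and model.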

Scooped by Dr. Stefan Gruenwald

Robots Getting a Grip with Electroadhesive Fingers


A new technology allows robots to manipulate varied and delicate objects.

 

One of the more promising areas of robotics research right now, for me, is universal end effectors. One technique that's gotten a lot of attention in the last few years is the balloon-type effector (and similar schemes): a pneumatically controlled rubber bulb that can be deformed and reformed around an object. These are so easy to engineer that even robot hobbyists can create them. We even have such a tutorial, written by Jason Poel Smith, in Make: Projects.

 

While such a scheme shows promise in many applications, and has the added benefit of being relatively low-tech, there are still applications for which this type of gripper is unsuitable. Enter electroadhesion and the work being done at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. As the accompanying video shows, a team there has been experimenting with the use of electroadhesion to create end effectors that can pick up such fussy cargo as an egg or a sheet of paper.

 

So, how does electroadhesion work? When positive and negative charges are applied to adjacent electrodes on the rubber flaps, the resulting electric field induces opposite charges across the flaps, causing electrostatic adhesion between the electrodes and the object's surface. Besides the technique's ability to handle delicate objects, it also delivers impressive clamping force, reportedly holding up to 100 times the gripper's own weight.
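For a sense of the physics (a textbook parallel-plate idealization, not EPFL's published model), the electrostatic clamping pressure between surfaces held at a potential difference V across a thin dielectric gap d scales as

```latex
P \approx \tfrac{1}{2}\,\varepsilon_0 \varepsilon_r \left(\frac{V}{d}\right)^{2}
```

which is why thin dielectric layers and kilovolt-range drive voltages can produce useful gripping forces with essentially no moving parts.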

Scooped by Dr. Stefan Gruenwald

Toshiba's Chihira Kanae robot is human-like and speaks 4 languages


Toshiba has shown off the latest generation of its Chihira robot at a trade fair in Berlin.

 

The machine - which is designed to look as human-like as possible - has had the German language added to its repertoire. The firm also told the BBC that it upgraded the machine's control system to make its movements smoother. However, one expert suggested the realistic appearance might not be best suited to Western audiences.

 

Prof Noel Sharkey - a roboticist at the University of Sheffield - said he thought the machine still fell "clearly on this side of the uncanny valley". The term refers to the fact that many people feel increasingly uncomfortable the closer a robot gets to appearing like a human being, so long as the two remain distinguishable.

 

Toshiba brought the Chihira Kanae droid to the ITB travel expo to highlight what it hopes could become a viable product for the tourism industry. The machine has been installed at an information desk where it responds to attendees' verbal questions about the conference.

 

It marks the first appearance of the robot outside Japan, where it was unveiled last month.

 

The earlier models in the series are:

  • Chihira Aico, which made its debut at Japan's Ceatec tech show in 2014
  • Chihira Junko, which was launched last October and is currently in use at a Tokyo shopping centre's information desk

 

"We have improved the software and the hardware to improve the air pressure system," explained Hitoshi Tokuda, chief specialist at Toshiba's research and development center. "If the air pressure is unstable, her movements become affected by vibrations. So, if the air flow is very precisely controlled, her movements are smoother."

Scooped by Dr. Stefan Gruenwald

Bacteria-powered microrobots navigate with help from new algorithm


The problem with having a microscopic robot propelled by a horde of tail-flailing bacteria is that you never know where it's going to end up. The tiny bio-robots, which amount to a chip coated with a "carpet" of flagellated bacteria, emerged from the primordial ooze of microrobotics research a few years ago as a concept for building microscopic devices and delivering medication at the cellular level. But as with any robot, the challenge in making them useful is bridging the gap from movement to automation.

 

A team of engineers at Drexel University might have done just that, according to research recently published in IEEE Transactions on Robotics about using electric fields to direct the robots in a fluid environment. In a follow-up to a 2014 report that presented a way to use the flagellated bacteria Serratia marcescens and an electric field to make a microrobot mobile, MinJun Kim, PhD, a professor in the College of Engineering and director of Drexel's Biological Actuation, Sensing & Transport (BAST) Lab, is now offering a method for making them agile.

 

"What's a ship without a captain? We know electric fields can be used to push the microrobots in any direction, like a boat carried by the ocean's currents, but in this paper we're exploring how those same fields can be used to help the robot detect obstacles and navigate around them," Kim said.

 

The key to both motion and navigation for the tiny hybrid robots is the S. marcescens bacterium. These rod-shaped swimmers, which are known culprits in hospital-acquired urinary tract and respiratory infections, naturally carry a negative charge, which means they can be manipulated across an electric field as if they were pixels on an Etch A Sketch.

 

When a slimy smear of the bacteria is applied to a substrate, in this case a square chip of photosensitive material called SU-8, you get a negatively charged microrobot that can move around in a fluid by riding the waves of an electric field. The bacteria's whip-like flagella help keep the robot suspended in the fluid environment while also providing a small bit of forward propulsion. The real push comes from two perpendicular electric fields that turn the fluid into an electrified grid. Since the bacteria are negatively charged, the team can manipulate the robots simply by adjusting the strength of the current.

 

"We have shown that we can manually direct the robots or give it a set of coordinates to get it from point A to point B, but our goal in this research is to enable the microrobots to navigate a course with random impediments blocking its way," Kim said. "This requires a level of automation that has not previously been achieved in hybrid microrobotics research."

 

Kim's group met this goal by making a control algorithm that enables the tiny robots to effectively use the shape of the electric field they're riding as a way to detect and avoid obstacles -- like a surfer reading the waves' break to steer clear of submerged hazards.

 

By running a series of tests using charged particles, the team came to understand how the electric field changed when it encountered insulator objects. "The electric field was distorted near the corners of the obstacle," the authors write. "Particles that passed by the first corner of the obstacles also had affected trajectories even though they had a clear space ahead to pass; this is due to the distorted electric field."

 

They used this deformation in the field as input data for their steering algorithm. When the robot senses a change in the pattern of the field, the algorithm automatically adjusts its path to dodge the obstacle. In this way, the robots use electric fields both as a mode of transportation and as a means of navigation. In addition to the electric field information, the algorithm also uses image tracking from a microscope-mounted camera to locate the robot's initial starting point and its ultimate destination.
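The core control idea, move toward the goal but sidestep when the measured field deviates from what an obstacle-free model predicts, can be sketched in a few lines. Everything below is a toy stand-in for the Drexel controller; field values, gains and tolerances are assumptions:

```python
# Toy field-based steering: head for the goal, but sidestep when the
# measured field deviates from the obstacle-free prediction.
import numpy as np

def field_at(pos):
    """Stub for the measured field; near an insulator it would be distorted."""
    return np.array([1.0, 0.0])  # hypothetical uniform field

def expected_field(pos):
    """Model prediction for obstacle-free fluid."""
    return np.array([1.0, 0.0])

def step(pos, goal, gain=0.5, tol=0.2):
    heading = goal - pos
    if np.linalg.norm(field_at(pos) - expected_field(pos)) > tol:
        # Distortion detected: move perpendicular to the desired heading.
        detour = np.array([-heading[1], heading[0]])
        return pos + gain * detour / np.linalg.norm(detour)
    return pos + gain * heading / np.linalg.norm(heading)

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 3.0])
for _ in range(20):
    if np.linalg.norm(goal - pos) < 0.5:
        break
    pos = step(pos, goal)
print("final position:", pos.round(2))
```

The published system also weighs value judgments, such as avoiding all obstacles versus taking the shortest path, which would enter here as different cost terms in the steering rule.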

 

"With this level of control and input from the environment we can program the microrobot to make a series of value judgments during its journey that affect its path," Kim said. "If for instance we want the robot to avoid as many obstacles as possible, regardless of the distance traveled. Or we could set it to take the most direct, shortest route to the destination -- even if it's through the obstacles. This relative autonomy is an important step for microrobots if we're going to one day put them into a complex system and ask them to perform a task like delivering medication or building a microstructure."

Scooped by Dr. Stefan Gruenwald

Google is using machine learning to teach robots intelligent reactive behaviors


Using your hand to grasp a pen that’s lying on your desk doesn’t exactly feel like a chore, but for robots, that’s still a really hard thing to do. So to teach robots how to better grasp random objects, Google’s research team dedicated 14 robots to the task. The standard way to solve this problem would be for the robot to survey the environment, create a plan for how to grasp the object, then execute on it. In the real world, though, lots of things can change between formulating that plan and executing on it.

 

Google is now using these robots to train a deep convolutional neural network (a technique that’s all the rage in machine learning right now) to help its robots predict the outcome of their grasps based on the camera input and motor commands. It’s basically hand-eye coordination for robots.

 

The team says that it took about 3,000 hours of practice (and 800,000 grasp attempts) before it saw “the beginnings of intelligent reactive behaviors.”

 

“The robot observes its own gripper and corrects its motions in real time. It also exhibits interesting pre-grasp behaviors, like isolating a single object from a group,” the team writes. “All of these behaviors emerged naturally from learning, rather than being programmed into the system.”

 

Google’s researchers say the average failure rate without training was 34 percent on the first 30 picking attempts. After training, that number dropped to 18 percent. Still not perfect, but the next time a robot comes running after you and tries to grab you, remember that it now has a better than 80 percent chance of succeeding.

Scooped by Dr. Stefan Gruenwald

This drone can automatically follow forest trails to track down lost hikers

A team of researchers at the University of Zurich has announced drone software capable of identifying and following forest trails.


Leave the breadcrumbs at home, folks: this week, a group of researchers in Switzerland announced the development of a drone capable of recognizing and following man-made forest trails. A collaborative effort between the University of Zurich and the Dalle Molle Institute for Artificial Intelligence, the research reportedly aims to address the increasing number of hikers lost each year.


According to the University of Zurich, an estimated 1,000 emergency calls are made each year regarding injured or lost hikers in Switzerland alone, an issue the group believes “inexpensive” drones could help solve quickly.


Though the drone itself may get the bulk of the spotlight, it’s the artificial intelligence software developed by the partnership that deserves much of the credit. Driven by a combination of AI algorithms, the software continuously scans its surroundings through two smartphone-like cameras built into the drone’s exterior. As the craft autonomously navigates a forested area, it detects trails and then pilots itself down open paths. But “AI algorithms” is a deceptively simple label for something wildly complex: before diving into the research, the team knew it would have to develop a supremely capable computing brain.
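At its core, the approach treats trail following as image classification: a deep network looks at each frame and decides whether the trail runs to the left, straight ahead, or to the right of the camera's view, and the drone yaws accordingly. A minimal control-loop sketch (our stand-in classifier and gains, not the Zurich team's network):

```python
# Trail following as three-way image classification plus a yaw command.
from typing import Literal

Direction = Literal["left", "straight", "right"]

def classify_trail(frame) -> Direction:
    """Stub for the trained deep network's three-way output."""
    return "straight"  # hypothetical prediction for this frame

def yaw_rate(direction: Direction) -> float:
    """Map the predicted class to a yaw rate in rad/s (assumed gains)."""
    return {"left": +0.4, "straight": 0.0, "right": -0.4}[direction]

frame = object()  # placeholder for a camera image
print("commanded yaw rate:", yaw_rate(classify_trail(frame)))
```

The hard part, of course, is the classifier itself, which the team reportedly trained on thousands of trail images captured with head-mounted cameras while hiking.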

Julie Cumming-Debrot's curator insight, February 15, 7:14 AM

What a good idea... never be lost again.

Scooped by Dr. Stefan Gruenwald

Using static electricity, RoboBees can land and stick to surfaces


New system extends the lives of flying microrobots.

 

Call them the RoboBats. In a recent article in Science, Harvard roboticists demonstrate that their flying microrobots, nicknamed the RoboBees, can now perch during flight to save energy -- like bats, birds or butterflies.

 

"Many applications for small drones require them to stay in the air for extended periods," said Moritz Graule, first author of the paper who conducted this research as a student at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Wyss Institute for Biologically Inspired Engineering at Harvard University. "Unfortunately, smaller drones run out of energy quickly. We want to keep them aloft longer without requiring too much additional energy." The team found inspiration in nature and simple science.

 

"A lot of different animals use perching to conserve energy," said Kevin Ma, a post-doc at SEAS and the Wyss Institute and coauthor. "But the methods they use to perch, like sticky adhesives or latching with talons, are inappropriate for a paperclip-size microrobot, as they either require intricate systems with moving parts or high forces for detachment."

 

Instead, the team turned to electrostatic adhesion -- the same basic science that causes a static-charged sock to cling to a pants leg or a balloon to stick to a wall.

 

When you rub a balloon on a wool sweater, the balloon becomes negatively charged. If the charged balloon is brought close to a wall, that negative charge forces some of the wall's electrons away, leaving the surface positively charged. The attraction between opposite charges then causes the balloon to stick to the wall.

 

"In the case of the balloon, however, the charges dissipate over time, and the balloon will eventually fall down," said Graule. "In our system, a small amount of energy is constantly supplied to maintain the attraction."

 

The RoboBee, pioneered at the Harvard Microrobotics Lab, uses an electrode patch and a foam mount that absorbs shock. The entire mechanism weighs 13.4 mg, bringing the total weight of the robot to about 100 mg -- similar to the weight of a real bee. The robot takes off and flies normally. When the electrode patch is supplied with a charge, it can stick to almost any surface, from glass to wood to a leaf. To detach, the power supply is simply switched off.

 

"One of the biggest advantages of this system is that it doesn't cause destabilizing forces during disengagement, which is crucial for a robot as small and delicate as ours," said Graule.

Scooped by Dr. Stefan Gruenwald

Autonomous Mini Rally Car Teaches Itself to Powerslide


Most autonomous vehicle control software is deliberately designed for well-constrained driving that's nice, calm, and under control. Not only is this a little bit boring, it's also potentially less safe: if your autonomous vehicle has no experience driving aggressively, it won't know how to manage itself if something goes wrong.

 

At Georgia Tech, researchers are developing control algorithms that allow small-scale autonomous cars to power around dirt tracks at ludicrous speeds. They presented some of this work this week at the 2016 IEEE International Conference on Robotics and Automation in Stockholm, Sweden. Using real-time onboard sensing and processing, the little cars maximize their speed while keeping themselves stable and under control. Mostly.

 

The electrically powered research platform pictured above, a one-fifth-scale model of a vehicle meant for human occupants, is called AutoRally. It's about a meter long, weighs 21 kg, and has a top speed of nearly 100 kilometers per hour. It's based on an R/C truck chassis, with largely 3D-printed modifications to support a payload that includes GPS, an IMU, wheel encoders, a pair of fast video cameras, and a beefy quad-core i7 computer with an Nvidia GTX 750ti GPU and 32 GB of RAM. All of this is protected inside an aluminum enclosure that makes crashing (even crashing badly) not that big of a deal.

 

The researchers attest that most of the crashes in the video happened due to either software crashes (as opposed to failures of the algorithm itself), or the vehicle having trouble adapting to changes in the track surface. Since that video was made, they've upgraded the software to make it able to handle a more realistically dynamic environment. The result: AutoRally is now able to drive continuously on a track that, because of temperature changes, goes from, say, partially frozen to a huge puddle of mud over the course of a couple of hours.

They’ve placed all of AutoRally’s specs online (and made the software available on Github) in the hopes that other vehicle autonomy researchers will be able to take advantage of the platform’s robust, high-performance capabilities. The code is open source and ROS compatible, with an accompanying Gazebo-based simulation.
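The control approach behind this kind of aggressive driving is sampling-based model-predictive control: sample many candidate control sequences, roll each through a dynamics model, and execute the first action of the cheapest rollout, replanning every step. The sketch below is a generic toy of that family (our point-mass model and costs, not the AutoRally code on Github):

```python
# Generic sampling-based MPC: simulate many random control sequences
# through a model and execute the first action of the best one.
import numpy as np

def dynamics(state, control):
    """Toy point-mass model standing in for learned vehicle dynamics."""
    pos, vel = state
    return np.array([pos + 0.1 * vel, vel + 0.1 * control])

def cost(state):
    pos, vel = state
    return (pos - 10.0) ** 2 + 0.1 * vel ** 2  # reach pos=10, stay controlled

def mpc_step(state, horizon=15, samples=256, rng=np.random.default_rng(0)):
    seqs = rng.uniform(-1.0, 1.0, size=(samples, horizon))
    totals = np.zeros(samples)
    for i, seq in enumerate(seqs):
        s = state.copy()
        for u in seq:
            s = dynamics(s, u)
            totals[i] += cost(s)
    return seqs[np.argmin(totals)][0]  # first action of the best rollout

state = np.array([0.0, 0.0])
for _ in range(50):
    state = dynamics(state, mpc_step(state))
print("final state:", state.round(2))
```

On AutoRally the model is far richer and the optimization runs on the onboard GPU fast enough to replan many times per second, which is what lets the car keep sliding right at the edge of control.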

We're hoping that this algorithm will eventually be mature enough to be tried out on a full-size rally car (maybe in a little friendly competition with a human driver). But if that does ever happen, crashing will be a much bigger deal than it is now.

Scooped by Dr. Stefan Gruenwald

Ingestible origami robot unfolds from capsule, removes button battery stuck to wall of simulated stomach


In experiments involving a simulation of the human esophagus and stomach, researchers at MIT, the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound.

Scooped by Dr. Stefan Gruenwald

Bee model could be a breakthrough for robotic development


Bees control their flight using the speed of motion, or optic flow, of the visual world around them, but it is not known how they do this. The only neural circuits so far found in the insect brain can tell the direction of motion, not its speed. This study suggests how motion-direction-detecting circuits could be wired together to also detect motion speed, which is crucial for controlling bees' flight.
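The classic model of those direction-detecting circuits is the Hassenstein-Reichardt correlator: delay one photoreceptor's signal and correlate it with a neighbor's, and the sign of the result gives the direction of motion. The sketch below is the textbook model, not the circuit proposed in the paper; note that the response peaks when the correlator's internal delay matches the stimulus travel time, which is the hook for wiring direction detectors into speed detectors:

```python
# Hassenstein-Reichardt correlator: a textbook direction-of-motion detector.
import numpy as np

def correlator(left, right, delay):
    """Delay-and-correlate two photoreceptor signals; sign gives direction."""
    d_left = np.roll(left, delay)    # delayed copy of the left input
    d_right = np.roll(right, delay)  # delayed copy of the right input
    return np.mean(d_left * right - left * d_right)

t = np.arange(200)
stimulus = np.sin(0.2 * t)     # a grating drifting across the visual field
left = stimulus
right = np.roll(stimulus, 3)   # the right receptor sees it 3 steps later

for delay in (1, 3, 5):
    print(f"delay {delay}: response {correlator(left, right, delay):+.3f}")
```

A bank of such correlators with different internal delays responds most strongly at the delay matching the image speed, so comparing their outputs yields a speed estimate rather than just a direction.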

 

“Honeybees are excellent navigators and explorers, using vision extensively in these tasks, despite having a brain of only one million neurons,” said Dr Alex Cope, lead researcher on the paper.

“Understanding how bees avoid walls, and what information they can use to navigate, moves us closer to the development of efficient algorithms for navigation and routing - which would greatly enhance the performance of autonomous flying robotics”, he added.

 

Professor James Marshall, lead investigator on the project, added: “This is the reason why bees are confused by windows - since they are transparent they generate hardly any optic flow as bees approach them.”

 

Dr Cope and his fellow researchers on the project, Dr Chelsea Sabo, Dr Eleni Vasilaki, Professor Kevin Gurney and Professor James Marshall, are now using this research to investigate how bees understand which direction they are pointing in, and how they use this knowledge to solve tasks.

Scooped by Dr. Stefan Gruenwald

Machines are becoming more creative than humans


Can machines be creative? Recent successes in AI have shown that machines can now perform at human levels in many tasks that, just a few years ago, were considered to be decades away, like driving cars, understanding spoken language, and recognizing objects. But these are all tasks where we know what needs to be done, and the machine is just imitating us. What about tasks where the right answers are not known? Can machines be programmed to find solutions on their own, and perhaps even come up with creative solutions that humans would find difficult?

 

The answer is a definite yes! There are branches of AI focused precisely on this challenge, including evolutionary computation and reinforcement learning. Like the popular deep learning methods, which are responsible for many of the recent AI successes, these branches of AI have benefited from the million-fold increase in computing power we’ve seen over the last two decades. There are now spacecraft antennas so complex they could only be designed through computational evolution. There are game-playing agents in Othello, Backgammon, and most recently Go that have learned to play at the level of the best humans, and in the case of AlphaGo, even beyond the ability of the best humans. There are non-player characters in Unreal Tournament that have evolved to be indistinguishable from humans, thereby passing the Turing test, at least for game bots. And in finance, there are computational traders in the stock market evolved to make real money.

 

Many new applications have suddenly come within our reach thanks to computational creativity — even though most of us do not realize it yet. If you are facing a design problem where potential solutions can be tested automatically, chances are you could evolve those solutions automatically as well. In areas where computers are already used to draft designs, the natural next step is to harness evolutionary search. This will allow human designers to gain more traction for their ideas, such as machine parts that are easier to manufacture, stock portfolios that minimize risk, or websites that result in more conversions. In other areas, it may take some engineering effort to define the design problem for the computer, but the effort may be rewarded by truly novel designs, such as finless rockets, new video game genres, personalized preventive medicine, and safer and more efficient traffic.

Scooped by Dr. Stefan Gruenwald

Cooperating High-Precision Robots Manipulate Microparticles under Microscope


The robotic manipulation of biological samples that are measured in microns is a challenging task, requiring high precision and dexterity. The end-effectors and the manipulators must be as flexible as possible to manage the variations in the size and shape of the samples, while at the same time protecting them from any form of damage (e.g. perforation).

 

This article discusses work conducted at the Hamlyn Centre for Robotic Surgery at Imperial College London to tackle these challenges. The manipulation tasks were semi-automated by developing a multi-robot cooperation scheme and a compliant end-effector. This solution can be applied to cell measurements, single-cell surgery, tissue engineering and cell enucleation.

Rescooped by Dr. Stefan Gruenwald from Virtual Neurorehabilitation

Robotic exoskeleton maps sense-deficits in young stroke patients


Researchers at the University of Calgary are using robotics technology to try to come up with more effective treatments for children who have had strokes.

 

The robotic device measures a patient's position sense — what doctors call proprioception — the unconscious perception of where the body is while in motion or at rest.

 

"Someone whose position sense has been affected might have difficulty knowing where their hand or arm is in space, adding to their difficulty in using their affected, weaker limb," said one of the study's senior researchers, Dr. Kirton of the Cumming School of Medicine's departments of pediatrics and clinical neurosciences.

 

"We can try to make a hand stronger but, if your brain doesn't know where the hand is, this may not translate into meaningful function in daily life."

 

PhD candidate Andrea Kuczynski is doing ongoing research using the KINARM (Kinesiological Instrument for Normal and Altered Reaching Movements) robotic device.

 

During the test, the children sit in the KINARM machine with their arms supported by its exoskeleton, which measures movement as they play video games and perform other tasks. All the children also had MRIs, which gave researchers a detailed picture of their brain structures.


Via Daniel Perez-Marcos
Scooped by Dr. Stefan Gruenwald

The Internet of Drones Unveiled in Singapore


A multitude of applications for drones has emerged, ranging from naval reconnaissance to offshore platform inspection. Technology company H3 Dynamics unveiled what it calls the Internet of Drones this week at the Singapore Air Show.

 

Dronebox is a system that brings together drone-enabled service activities with the Industrial Internet of Things. It is a self-powered system that can be deployed anywhere, including in remote areas where industrial assets, borders or sensitive installations require constant monitoring. 

 

Dronebox can charge drone batteries automatically within its shelter system. Off-grid power is provided primarily by a solar-battery installation. For more advanced requirements, system capabilities can be extended using Remobox, an accessory that provides more advanced communications and hosts a small back-up fuel cell system for year-long availability in mission-critical locations.

 

Droneboxes can be installed anywhere, so that drones can perform pre-programmed scheduled routines, deploy on demand or be woken up by other drones or sensors as part of a much wider network of “things.” As a network, Droneboxes can increase their effectiveness and mission times using collaborative technologies.

 

Designed as an evolution over today’s many unattended sensors and CCTV cameras installed in cities, borders, or large industrial estates, Dronebox innovates by giving sensors freedom of movement using drones as their vehicles. End-users can deploy flying sensor systems at different locations to offer 24/7 reactivity.

 

The system eases scalability challenges for drone service operators. Such service providers use professional drones to provide their customers with detailed aerial land surveys in mining or agriculture, perform infrastructure inspections or monitor the progress of construction sites. However, some remote locations need regular or prolonged visits which increase travel costs and risks to drone service providers. By pre-deploying Dronebox systems at the right locations, travel to remote areas is no longer required, charging or handling drone batteries is eliminated and sensor data is simply sent through a network for easy access and processing.

 

H3 Dynamics expects the first Dronebox and Remobox to be delivered this year. H3 Dynamics is a group of hardware and software companies with locations in Singapore, Texas, France and South Africa. The Singapore headquartered group includes a high performance fuel cell energy storage systems entity, an integrated robotics systems entity as well as an entity dedicated to advanced field communications, precision tracking and real-time analytics software.

Scooped by Dr. Stefan Gruenwald

Will this new ‘socially assistive robot’ from MIT Media Lab (or its progeny) replace teachers?


Researchers in the Personal Robots Group at the MIT Media Lab, led by Cynthia Breazeal, PhD, have developed a powerful new “socially assistive” robot called Tega that senses the affective (emotional/feeling) state of a learner, and based on those cues, creates a personalized motivational strategy.

 

But what are the implications for the future of education … and society? A furry, brightly colored robot, Tega is the latest in a line of smartphone-based, socially assistive robots developed in the MIT Media Lab. In a nutshell: Tega is fun, effective, and personalized — unlike many human teachers.

 

Breazeal and team say Tega was developed specifically to enable long-term educational interactions with children. It uses an Android device to process movement, perception and thinking and can respond appropriately to individual children’s behaviors — contrasting with (mostly boring) conventional education with its impersonal large class sizes, lack of individual attention, and proclivity to pouring children into a rigid one-size-fits-all mold.

Scooped by Dr. Stefan Gruenwald

Individually controlled microbots using 'mini force fields'


Researchers are using a technology likened to "mini force fields" to independently control individual microrobots operating within groups, an advance aimed at using the tiny machines in areas including manufacturing and medicine. Until now it was only possible to control groups of microbots to move generally in unison, said David Cappelleri, an assistant professor of mechanical engineering at Purdue University.

 

"The reason we want independent movement of each robot is so they can do cooperative manipulation tasks," he said. "Think of ants. They can independently move, yet all work together to perform tasks such as lifting and moving things. We want to be able to control them individually so we can have some robots here doing one thing, and some robots there doing something else at the same time."

 

Findings are detailed in a research paper appearing this month in the journal Micromachines, authored by postdoctoral research associates Sagar Chowdhury and Wuming Jing together with Cappelleri. The team developed a system for controlling the robots with individual magnetic fields from an array of tiny planar coils.

 

"The robots are too small to put batteries on them, so they can't have onboard power," Cappelleri said. "You need to use an external way to power them. We use magnetic fields to generate forces on the robots. It's like using mini force fields."

The research is revealing precisely how to control the robots individually.

 

"We need to know, if a robot is here and it needs to go there, how much force needs to be applied to the robot to get it from point A to point B?" Cappelleri said. "Once you figure out what that force has to be, then we say, what kind of magnetic field strength do we need to generate that force?"

 

These microbots are magnetic disks that slide across a surface (shown in this video: https://youtu.be/n_jGoi0a6Po). While the versions studied are around 2 millimeters in diameter -- about twice the size of a pinhead -- the researchers aim to create microbots around 250 microns in diameter, roughly the size of a dust mite.

 

In previously developed systems the microbots were controlled using fewer coils located around the perimeter of the "workspace" containing the tiny machines. However, this "global" field is not fine enough to control individual microrobots independently.

 

"The approach we came up with works at the microscale, and it will be the first one that can give truly independent motion of multiple microrobots in the same workspace because we are able to produce localized fields as opposed to a global field," Cappelleri said. "What we can do now, instead of having these coils all around on the outside, is to print planar coils directly onto the substrate."

 

The robots are moved using attractive or repulsive forces and by varying the strength of the electrical current in the coils. "You can think about using teams of robots to assemble components on a small scale, which we could use for microscale additive manufacturing," Cappelleri said.

 

Independently controlled microbots working in groups might be useful in building microelectromechanical systems, or MEMS, minuscule machines that could have numerous applications from medicine to homeland security.

Rescooped by Dr. Stefan Gruenwald from drones

DJI launches new era of intelligent flying machines


DJI, the world’s leading maker of unmanned aerial vehicles, on Tuesday launched the Phantom 4, the first consumer quadcopter camera (or “drone”) to use highly advanced computer vision and sensing technology to make professional aerial imaging easier for everyone.

 

The Phantom 4 expands on previous generations of DJI's iconic Phantom line by adding new on-board intelligence that makes piloting and shooting great shots simple, through features like its Obstacle Sensing System, ActiveTrack and TapFly functionality.

“With the Phantom 4, we are entering an era where even beginners can fly with confidence,” said DJI CEO Frank Wang. “People have dreamed about one day having a drone collaborate creatively with them. That day has arrived.”

 

The Phantom 4’s Obstacle Sensing System features two forward-facing optical sensors that scan for obstacles and automatically direct the aircraft around the impediment when possible, reducing risk of collision, while ensuring flight direction remains constant. If the system determines the craft cannot go around the obstacle, it will slow to a stop and hover until the user redirects it. Obstacle avoidance also engages if the user triggers the drone’s “Return to Home” function to reduce the risk of collision when automatically flying back to its take off point.

 

With ActiveTrack, the Phantom 4 breaks new ground, allowing users running the DJI Go app on iOS and Android devices to follow and keep the camera centered on the subject as it moves simply by tapping the subject on their smartphone or tablet. Perfectly-framed shots of moving joggers or cyclists, for example, simply require activating the ActiveTrack mode in the app.

 

The Phantom 4 understands three-dimensional images and uses machine learning to keep the object in the shot, even when the subject changes its shape or turns while moving. Users have full control over camera movement while in ActiveTrack mode – and can even move the camera around the object while it is in motion as the Phantom 4 keeps the subject framed in the center of the shot autonomously. A “pause” button on the Phantom 4’s remote controller allows the user to halt an autonomous flight at any time, leaving the drone to hover.

 

By using the TapFly function in the DJI Go app, users can double-tap a destination for their Phantom 4 on the screen, and the Phantom 4 calculates an optimal flight route to reach the destination, while avoiding any obstructions in its path. Tap another spot and the Phantom 4 will smoothly transition towards that destination making even the beginner pilot look like a seasoned professional.

 

 



Via Andres Flores
Andres Flores's curator insight, March 1, 10:36 PM
Spectacular...
Scooped by Dr. Stefan Gruenwald

World record: First robot to solve a Rubik's Cube in under 1 second (0.887 s)

Prior to the world record attempt, a WCA-conformant modified speed cube was scrambled with a computer-generated random sequence and positioned in the robot. Once the start button was hit, two webcam shutters were moved away. A laptop then took two pictures, each showing three sides of the cube, identified all the colors of the cube, and calculated a solution with Tomas Rokicki's extremely fast implementation of Herbert Kociemba's Two-Phase Algorithm. The solution was handed over to an Arduino-compatible microcontroller board that orchestrated the 20 moves of six high-performance stepper motors. Only 887 milliseconds after the start button had been hit, Sub1 broke a historic barrier and finished the last move in a new world record time.
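Kociemba's two-phase algorithm is available in open-source form; as a minimal illustration (using the Python `kociemba` package and the example cube string from its documentation, not Sub1's actual scan or control code):

```python
# Solve a cube state with Kociemba's two-phase algorithm
# (pip install kociemba). The input is a 54-character facelet string
# in URFDLB face order; this example string comes from the library docs.
import kociemba

scrambled = "DRLUUBFBRBLURRLRUBLRDDFDLFUFUFFDBRDUBRUFLLFDDBFLUBLRBD"
solution = kociemba.solve(scrambled)
print(solution)                        # e.g. "D2 R' D' F2 B D R2 D2 R' ..."
print(len(solution.split()), "moves")  # typically around 20 moves
```

Sub1's pipeline wraps this kind of solver between a vision step (reading the colors from two camera images) and an actuation step (streaming the move sequence to the stepper motors).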

Sub1, which required several hundred working hours to design, build, program and tune, is the first robot that can independently inspect and solve a Rubik's Cube in under one second.

The world record was approved by Guinness World Records on 18 February 2016.