Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald

Japan's latest humanoid robot makes its own moves

Japan's National Science Museum is no stranger to eerily human androids: It employs two in its exhibition hall already. But for a week, they're getting a new colleague. Called "Alter," it has a very human face like Professor Ishiguro's Geminoids, but goes one step further with an embedded neural network that allows it to move itself. The technology powering this involves 42 pneumatic actuators and, most importantly, a "central pattern generator."

 

That CPG has a neural network that replicates neurons, allowing the robot to create movement patterns of its own, influenced by sensors that detect proximity, temperature and, for some reason, humidity. The setup doesn't make for human-like movement, but it gives the viewer the very strange sensation that this particular robot is somehow alive. And that's precisely the point.
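
To make the idea concrete: a central pattern generator can be sketched as a ring of phase-coupled oscillators whose rhythm is perturbed by sensor input. The toy below is a generic illustration of that technique, not Alter's actual network; the joint count is the only detail taken from the article, and every constant and function is invented.

```python
import numpy as np

N_JOINTS = 42       # actuator count from the article; everything else is assumed
DT = 0.01           # integration step, seconds
COUPLING = 1.5      # strength of the pull toward neighboring oscillators

def cpg_step(phases, base_freq, sensor_drive):
    """Advance a ring of phase-coupled oscillators by one time step."""
    left, right = np.roll(phases, 1), np.roll(phases, -1)
    coupling = COUPLING * (np.sin(left - phases) + np.sin(right - phases))
    # Sensor readings perturb each oscillator's frequency, so the pattern
    # drifts with the environment instead of repeating exactly.
    dphase = 2 * np.pi * (base_freq + sensor_drive) + coupling
    return (phases + DT * dphase) % (2 * np.pi)

phases = np.random.uniform(0, 2 * np.pi, N_JOINTS)
for _ in range(1000):
    sensor_drive = 0.1 * np.random.randn(N_JOINTS)   # stand-in for real sensors
    phases = cpg_step(phases, base_freq=0.5, sensor_drive=sensor_drive)
    actuator_targets = np.sin(phases)                # map each phase to a command
```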

Scooped by Dr. Stefan Gruenwald

Scientists Build Crawling Biohybrid Robot That is Part Sea Slug

Scientists have built a crawling robot that is part sea slug and part 3-D printed polymer body parts.

 

Sea slugs typically slither—a perfectly respectable way to get around—but recently a team of scientists saw additional locomotive potential in the odd-looking invertebrate. Specifically, they took a tiny muscle from the sea slug's mouth and used it to make a robot crawl.

 

"We're building a living machine—a biohybrid robot that's not completely organic—yet," Victoria Webster, the PhD student who is leading the research, said in a statement.

 

A sea slug might seem an unlikely source for robot parts. But according to the researchers, sea slugs are exceptionally tough creatures, and that toughness extends down to the cellular level. In the chilly Pacific Ocean, sea slugs endure large swings in temperature, salinity, and habitat as tides move them between deep water and shallow pools. This makes the slug's muscles more adaptable than those of many other species in terms of the conditions in which they can operate.

Scooped by Dr. Stefan Gruenwald

MACHINE OR LIFE-FORM? Swimming Stingray Robot Is Powered by Real, Living Rat Cells

This soft robotic stingray is made of rat heart muscle. Yeah, it's just as crazy as it sounds.

"I THINK WE'VE GOT A BIOLOGICAL LIFE-FORM HERE."


"Roughly speaking, we made this thing with a pinch of rat cardiac cells, a pinch of breast implant, and a pinch of gold. That pretty much sums it up, except for the genetic engineering," says Kit Parker, the bio-engineer at Harvard who led the team that developed the strange robot.

 

Parker's robotic stingray is tiny—a bit more than half an inch long—and weighs only 10 grams. But it glides through liquid with the very same undulating motion used by real stingrays and skates. The robot is powered by the contraction of 200,000 genetically engineered rat heart-muscle cells grown on the underside of the bot. Even stranger, Parker's team developed the robot to follow bright pulses of light, allowing it to smoothly twist and turn through obstacle courses. The fascinating robot was unveiled today in the journal Science.
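
Light-following steering of this kind can be illustrated with a generic differential-phototaxis controller: drive the two sides asymmetrically toward the brighter reading. Note that the published ray is steered externally via patterned light and optogenetics rather than by onboard sensors, so everything below is a hypothetical analogy, not the paper's method.

```python
def steer(light_left: float, light_right: float, gain: float = 0.5):
    """Return (left_drive, right_drive) so the body turns toward the light."""
    clamp = lambda x: max(0.0, min(1.0, x))
    delta = gain * (light_right - light_left)      # brighter right -> turn right
    return clamp(0.5 + delta), clamp(0.5 - delta)  # drive the left side harder

print(steer(0.2, 0.9))   # right side brighter: left drive ~0.85, right ~0.15
```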

 

"By using living cells they were able to build this robot in a way that you just couldn't replicate with any other material," says Adam Feinberg, a roboticist at Carnegie Mellon University who has worked with Parker's team before, but was not involved in developing this new robot. "You shine a light, and it triggers the muscles to swim. You couldn't replicate this movement with on-board electronics and actuators while keeping it lightweight and maneuverable. And it really is remote controlled, like a TV set."

Scooped by Dr. Stefan Gruenwald

Custom Processor Speeds Up Robot Motion Planning by Factor of 1,000

A preprogrammed FPGA can take motion planning from frustrating to instantaneous.

 

If you’ve ever seen a live robot manipulation demo, you’ve almost certainly noticed that the robot probably spends a lot of time looking like it’s not doing anything. It’s tempting to say that the robot is “thinking” when this happens, and that might even be mostly correct: odds are that you’re watching some poor motion-planning algorithm try to figure out how to get the robot’s arm and gripper to do what it’s supposed to do without running into anything. This motion planning process is one of the most important skills a robot can have (since it’s necessary for robots to “do stuff”), and also one of the most time- and processor-intensive.

 

At the RSS 2016 conference this week, researchers from the Duke Robotics group at Duke University in Durham, N.C., are presenting a paper about “Robot Motion Planning on a Chip,” in which they describe how they can speed up motion planning by three orders of magnitude while using 20 times less power. How? Rather than using general-purpose CPUs and GPUs, they instead developed a custom processor that can run collision checking across an entire 3D grid all at once.
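
A rough software sketch of that all-at-once check, with numpy's vectorization standing in for the chip's per-motion circuits; the grid size, roadmap, and occupancy densities are invented for illustration:

```python
import numpy as np

GRID = 16 ** 3     # flattened 16x16x16 workspace voxel grid (invented size)
N_EDGES = 1000     # candidate motions in a fixed roadmap (invented)

# Offline: mark which voxels each motion sweeps through. On the chip, this
# table is baked into dedicated logic, one circuit per motion.
rng = np.random.default_rng(0)
swept = rng.random((N_EDGES, GRID)) < 0.01

def collision_free(obstacle_voxels):
    """One AND-and-reduce per motion: the whole roadmap checked at once."""
    return ~(swept & obstacle_voxels).any(axis=1)

# Online: perception fills the obstacle grid; checking every motion becomes
# a single vectorized pass instead of thousands of geometric tests.
obstacles = rng.random(GRID) < 0.02
usable = collision_free(obstacles)
print(f"{usable.sum()} of {N_EDGES} motions remain collision-free")
```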

Rescooped by Dr. Stefan Gruenwald from Systems Theory

Do We Want Robot Warriors to Decide Who Lives or Dies?

Czech writer Karel Čapek’s 1920 play R.U.R. (Rossum’s Universal Robots), which famously introduced the word robot to the world, begins with synthetic humans—the robots from the title—toiling in factories to produce low-cost goods. It ends with those same robots killing off the human race. Thus was born an enduring plot line in science fiction: robots spiraling out of control and turning into unstoppable killing machines. Twentieth-century literature and film would go on to bring us many more examples of robots wreaking havoc on the world, with Hollywood notably turning the theme into blockbuster franchises like The Matrix, Transformers, and The Terminator.

 

Lately, fears of fiction turning to fact have been stoked by a confluence of developments, including important advances in artificial intelligence and robotics, along with the widespread use of combat drones and ground robots in Iraq and Afghanistan. The world’s most powerful militaries are now developing ever more intelligent weapons, with varying degrees of autonomy and lethality. The vast majority will, in the near term, be remotely controlled by human operators, who will be “in the loop” to pull the trigger. But it’s likely, and some say inevitable, that future AI-powered weapons will eventually be able to operate with complete autonomy, leading to a watershed moment in the history of warfare: For the first time, a collection of microchips and software will decide whether a human being lives or dies.

 

Not surprisingly, the threat of “killer robots,” as they’ve been dubbed, has triggered an impassioned debate. The poles of the debate are represented by those who fear that robotic weapons could start a world war and destroy civilization and others who argue that these weapons are essentially a new class of precision-guided munitions that will reduce, not increase, casualties. In December, more than a hundred countries are expected to discuss the issue as part of a United Nations disarmament meeting in Geneva.


Via Jean-Philippe BOCQUENET, Ben van Lier
Scooped by Dr. Stefan Gruenwald

Teaching robots to feel pain to protect themselves

A pair of researchers at Leibniz University of Hannover has demonstrated the means by which robots might be programmed to experience something akin to the pain animals feel. As part of their demonstration at last week's IEEE International Conference on Robotics and Automation held in Stockholm, Johannes Kuehn and Sami Haddadin showed how pain might be used in robots, by interacting with a BioTac fingertip sensor on the end of a Kuka robotic arm that had been programmed to react differently to differing amounts of "pain."

 

The researchers explained that the reason for giving robots pain sensors is the same as for existing biological adaptations—to ensure a reaction that will lessen the damage incurred by our bodies and, perhaps even more importantly, to help us remember to avoid similar situations in the future. In the case of the robots, the researchers have built an electric network behind the fingertip sensor meant to mimic nerve pathways below the skin in animals, allowing the robot to "feel" what its programming describes as various types, or degrees, of pain.

 

In the demonstration, the researchers inflicted varying degrees of pain on the robot, explaining the reasoning behind each programmed reaction: When experiencing light pain or discomfort, for example, the robot recoiled slowly, removing itself from the problem. Moderate pain called for a rapid response, moving quickly away from the source, though the robot had the option to move back, albeit tentatively, if need be. Severe pain, on the other hand, is often indicative of damage, so the robot had been programmed to become passive to prevent further harm.
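
That three-level policy is easy to state in code. The sketch below invents thresholds and a command format purely for illustration; the article does not publish the actual Kuka/BioTac controller:

```python
from enum import Enum

class Pain(Enum):
    LIGHT = 1
    MODERATE = 2
    SEVERE = 3

def classify(pain_signal: float) -> Pain:
    """Map a normalized fingertip-sensor reading to a pain class."""
    if pain_signal < 0.3:
        return Pain.LIGHT
    if pain_signal < 0.7:
        return Pain.MODERATE
    return Pain.SEVERE

def reflex(pain: Pain) -> dict:
    """Return a retraction command for the arm controller."""
    if pain is Pain.LIGHT:
        return {"retract_speed": 0.1, "may_return": True, "passive": False}
    if pain is Pain.MODERATE:
        return {"retract_speed": 1.0, "may_return": True, "passive": False}
    # Severe pain suggests damage: go passive rather than risk making it worse.
    return {"retract_speed": 0.0, "may_return": False, "passive": True}

print(reflex(classify(0.85)))   # severe -> passive, no return
```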

 

Such robots are likely to incite a host of questions, of course, if they become more common—if a robot acts the same way a human does when touching a hot plate, are we to believe it is truly experiencing pain? And if so, will lawmakers find the need to enact laws to prevent cruelty to robots, as is the case with animals? Only time will tell, of course, but one thing is evident in such demonstrations: as robotics technology advances, researchers are more often forced to make hard decisions, some of which may fall entirely outside the domain of engineers.

Scooped by Dr. Stefan Gruenwald

Using static electricity, RoboBees can land and stick to surfaces

New system extends the lives of flying microrobots.

 

Call them the RoboBats. In a recent article in Science, Harvard roboticists demonstrate that their flying microrobots, nicknamed the RoboBees, can now perch during flight to save energy -- like bats, birds or butterflies.

 

"Many applications for small drones require them to stay in the air for extended periods," said Moritz Graule, first author of the paper who conducted this research as a student at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Wyss Institute for Biologically Inspired Engineering at Harvard University. "Unfortunately, smaller drones run out of energy quickly. We want to keep them aloft longer without requiring too much additional energy." The team found inspiration in nature and simple science.

 

"A lot of different animals use perching to conserve energy," said Kevin Ma, a post-doc at SEAS and the Wyss Institute and coauthor. "But the methods they use to perch, like sticky adhesives or latching with talons, are inappropriate for a paperclip-size microrobot, as they either require intricate systems with moving parts or high forces for detachment."

 

Instead, the team turned to electrostatic adhesion -- the same basic science that causes a static-charged sock to cling to a pants leg or a balloon to stick to a wall.

 

When you rub a balloon on a wool sweater, the balloon becomes negatively charged. If the charged balloon is brought close to a wall, that negative charge forces some of the wall's electrons away, leaving the surface positively charged. The attraction between opposite charges then causes the balloon to stick to the wall.

 

"In the case of the balloon, however, the charges dissipate over time, and the balloon will eventually fall down," said Graule. "In our system, a small amount of energy is constantly supplied to maintain the attraction."

 

The RoboBee, pioneered at the Harvard Microrobotics Lab, uses an electrode patch and a foam mount that absorbs shock. The entire mechanism weighs 13.4 mg, bringing the total weight of the robot to about 100 mg -- similar to the weight of a real bee. The robot takes off and flies normally. When the electrode patch is supplied with a charge, it can stick to almost any surface, from glass to wood to a leaf. To detach, the power supply is simply switched off.
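
A back-of-the-envelope check with the parallel-plate electrostatic pressure formula, P = ε0(V/d)²/2, shows why a milligram-scale patch can plausibly hold a 100 mg robot. All numbers below are assumptions, not the published patch parameters, and real (rough) surfaces yield far smaller forces than this idealized bound:

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m
V = 300.0          # applied voltage, volts (assumed)
D = 5e-6           # effective gap to the surface, meters (assumed)
AREA = 1e-5        # electrode patch area, m^2, roughly 10 mm^2 (assumed)

pressure = 0.5 * EPS0 * (V / D) ** 2   # electrostatic pressure, N/m^2
force = pressure * AREA                # idealized adhesion force, newtons
weight = 100e-6 * 9.81                 # the ~100 mg robot's weight, newtons

print(f"adhesion ~ {force * 1e3:.1f} mN vs weight ~ {weight * 1e3:.2f} mN")
```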

 

"One of the biggest advantages of this system is that it doesn't cause destabilizing forces during disengagement, which is crucial for a robot as small and delicate as ours," said Graule.

Scooped by Dr. Stefan Gruenwald

Autonomous Mini Rally Car Teaches Itself to Powerslide

Most autonomous vehicle control software is deliberately designed for well-constrained driving that's nice, calm, and under control. Not only is this a little bit boring, it's also potentially less safe: If your autonomous vehicle has no experience driving aggressively, it won't know how to manage itself if something goes wrong.

 

At Georgia Tech, researchers are developing control algorithms that allow small-scale autonomous cars to power around dirt tracks at ludicrous speeds. They presented some of this work this week at the 2016 IEEE International Conference on Robotics and Automation in Stockholm, Sweden. Using real-time onboard sensing and processing, the little cars maximize their speed while keeping themselves stable and under control. Mostly.
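
Controllers in this line of work are sampling-based model-predictive control: simulate many noisy control sequences forward, score them, execute the best first action, repeat. The sketch below shows that loop with a toy unicycle model and a track cost invented for illustration; it is not the team's vehicle model.

```python
import numpy as np

rng = np.random.default_rng(1)
HORIZON, N_SAMPLES, DT = 30, 128, 0.05

def step(state, control):
    """Toy unicycle dynamics: state = [x, y, heading, speed]."""
    x, y, th, v = state
    steer, throttle = control
    return np.array([x + DT * v * np.cos(th),
                     y + DT * v * np.sin(th),
                     th + DT * v * steer,
                     v + DT * throttle])

def cost(state):
    # Reward speed; punish leaving a 3 m-wide track centered on y = 0.
    return 100.0 * max(0.0, abs(state[1]) - 1.5) - state[3]

def mpc_action(state, nominal):
    """Sample noisy control sequences, roll them out, keep the cheapest."""
    seqs = nominal + 0.3 * rng.standard_normal((N_SAMPLES, HORIZON, 2))
    totals = np.zeros(N_SAMPLES)
    for i in range(N_SAMPLES):
        s = state
        for t in range(HORIZON):
            s = step(s, seqs[i, t])
            totals[i] += cost(s)
    best = seqs[totals.argmin()]
    # Execute the first action; recenter the next search on the remainder.
    return best[0], np.vstack([best[1:], best[-1:]])

state, nominal = np.array([0.0, 0.2, 0.0, 5.0]), np.zeros((HORIZON, 2))
for _ in range(30):
    action, nominal = mpc_action(state, nominal)
    state = step(state, action)
print("final state [x, y, heading, speed]:", np.round(state, 2))
```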

 

The electrically powered research platform pictured above, a one-fifth-scale model of a vehicle meant for human occupants, is called AutoRally. It's about a meter long, weighs 21 kg, and has a top speed of nearly 100 kilometers per hour. It's based on an R/C truck chassis, with some largely 3D-printed modifications to support a payload that includes a GPS, an IMU, wheel encoders, a pair of fast video cameras, and a beefy quad-core i7 computer with an Nvidia GTX 750 Ti GPU and 32 gigs of RAM. All of this stuff is protected inside an aluminum enclosure that makes crashing (even crashing badly) not that big of a deal.

 

The researchers attest that most of the crashes in the video happened due to either software crashes (as opposed to failures of the algorithm itself), or the vehicle having trouble adapting to changes in the track surface. Since that video was made, they've upgraded the software to make it able to handle a more realistically dynamic environment. The result: AutoRally is now able to drive continuously on a track that, because of temperature changes, goes from, say, partially frozen to a huge puddle of mud over the course of a couple of hours.

They’ve placed all of AutoRally’s specs online (and made the software available on GitHub) in the hopes that other vehicle-autonomy researchers will be able to take advantage of the platform’s robust, high-performance capabilities. The code is open source and ROS compatible, with an accompanying Gazebo-based simulation.

We're hoping that this algorithm will eventually be mature enough to be tried out on a full-size rally car (maybe in a little friendly competition with a human driver). But if that does ever happen, crashing will be a much bigger deal than it is now.

Scooped by Dr. Stefan Gruenwald

Ingestible origami robot unfolds from capsule, removes button battery stuck to wall of simulated stomach

In experiments involving a simulation of the human esophagus and stomach, researchers at MIT, the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound.

Scooped by Dr. Stefan Gruenwald

Bee model could be a breakthrough for robotic development

Bees control their flight using the speed of motion - or optic flow - of the visual world around them, but it is not known how they do this. The only neural circuits so far found in the insect brain can tell the direction of motion, not the speed. This study suggests how motion-direction detecting circuits could be wired together to also detect motion-speed, which is crucial for controlling bees’ flight. 
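
One concrete way such wiring can work: the classic Hassenstein-Reichardt correlator is direction-selective, and pooling correlators with different internal delays yields a crude speed code, since the delay with the strongest response shifts inversely with stimulus speed. The sketch below is a textbook construction, not the authors' model:

```python
import numpy as np

t = np.linspace(0, 1, 1000)

def photoreceptor(phase, speed):
    """Brightness seen by one photoreceptor as a grating drifts past."""
    return np.sin(2 * np.pi * (5 * speed * t + phase))

def reichardt(sig_a, sig_b, delay):
    """Direction-selective correlation of two neighboring inputs."""
    d_a, d_b = np.roll(sig_a, delay), np.roll(sig_b, delay)
    return np.mean(d_a * sig_b - d_b * sig_a)

# Pool detectors with different internal delays: the best-matching delay
# shrinks as the stimulus speeds up, which is the speed information.
for speed in (0.5, 1.0, 2.0):
    a = photoreceptor(0.0, speed)
    b = photoreceptor(-0.1, speed)    # the neighbor sees the grating later
    delays = (10, 20, 40, 80)
    responses = [reichardt(a, b, d) for d in delays]
    print(f"speed {speed}: strongest at delay {delays[int(np.argmax(responses))]}")
```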

 

“Honeybees are excellent navigators and explorers, using vision extensively in these tasks, despite having a brain of only one million neurons,” said Dr Alex Cope, lead researcher on the paper.

“Understanding how bees avoid walls, and what information they can use to navigate, moves us closer to the development of efficient algorithms for navigation and routing - which would greatly enhance the performance of autonomous flying robotics”, he added.

 

Professor James Marshall, lead investigator on the project, added: “This is the reason why bees are confused by windows - since they are transparent they generate hardly any optic flow as bees approach them.”

 

Dr Cope and his fellow researchers on the project (Dr Chelsea Sabo, Dr Eleni Vasilaki, Professor Kevin Gurney, and Professor James Marshall) are now using this research to investigate how bees understand which direction they are pointing in and use this knowledge to solve tasks.

Scooped by Dr. Stefan Gruenwald

SkinHaptics: Research brings ‘smart hands’ closer to reality

Using your skin as a touchscreen has been brought a step closer after UK scientists successfully created tactile sensations on the palm using ultrasound sent through the hand. The University of Sussex-led study -- funded by the Nokia Research Centre and the European Research Council -- is the first to find a way for users to feel what they are doing when interacting with displays projected on their hand. This solves one of the biggest challenges for technology companies who see the human body, particularly the hand, as the ideal display extension for the next generation of smartwatches and other smart devices.

 

Current ideas rely on vibrations or pins, which both need contact with the palm to work, interrupting the display. However, this new innovation, called SkinHaptics, sends sensations to the palm from the other side of the hand, leaving the palm free to display the screen. The device uses 'time-reversal' processing to send ultrasound waves through the hand. This technique is effectively like ripples in water but in reverse -- the waves become more targeted as they travel through the hand, ending at a precise point on the palm.
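
A toy 1D version of time-reversal focusing makes the principle visible: fire the farther transducers earlier so every burst arrives at the chosen point in phase, and nowhere else. Real time reversal uses measured impulse responses through messy tissue; the pure-delay model, geometry, and waveforms below are invented:

```python
import numpy as np

FS = 1_000_000                     # sample rate, Hz
C = 1500.0                         # assumed wave speed through the hand, m/s
t = np.arange(0, 200e-6, 1 / FS)   # 200-microsecond window

emitters = np.array([0.00, 0.01, 0.02, 0.03])   # transducer positions, m
target = 0.015                                  # chosen focal point, m

def pulse(start):
    """A short 40 kHz tone burst beginning `start` seconds into the window."""
    env = np.exp(-((t - start - 25e-6) ** 2) / (2 * (8e-6) ** 2))
    return env * np.sin(2 * np.pi * 40e3 * (t - start))

travel = np.abs(emitters - target) / C
emit_times = travel.max() - travel     # fire the farthest transducers first

def field_at(point):
    """Superpose every emitter's burst as seen at `point`."""
    arrivals = emit_times + np.abs(emitters - point) / C
    return sum(pulse(a) for a in arrivals)

print("peak at focus  :", round(np.abs(field_at(target)).max(), 2))
print("peak off focus :", round(np.abs(field_at(0.028)).max(), 2))
```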

 

It draws on a rapidly growing field of technology called haptics, which is the science of applying touch sensation and control to interaction with computers and technology. Prof Sriram Subramanian, who leads the research team at the University of Sussex, says that technologies will inevitably need to engage other senses, such as touch, as we enter what designers are calling an 'eye-free' age of technology.

He says: "Wearables are already big business and will only get bigger. But as we wear technology more, it gets smaller and we look at it less, and therefore multisensory capabilities become much more important. If you imagine you are on your bike and want to change the volume control on your smartwatch, the interaction space on the watch is very small. So companies are looking at how to extend this space to the hand of the user. What we offer people is the ability to feel their actions when they are interacting with the hand."

Rescooped by Dr. Stefan Gruenwald from levin's linkblog: Knowledge Channel

Machines are becoming more creative than humans

Can machines be creative? Recent successes in AI have shown that machines can now perform at human levels in many tasks that, just a few years ago, were considered to be decades away, like driving cars, understanding spoken language, and recognizing objects. But these are all tasks where we know what needs to be done, and the machine is just imitating us. What about tasks where the right answers are not known? Can machines be programmed to find solutions on their own, and perhaps even come up with creative solutions that humans would find difficult?

 

The answer is a definite yes! There are branches of AI focused precisely on this challenge, including evolutionary computation and reinforcement learning. Like the popular deep learning methods, which are responsible for many of the recent AI successes, these branches of AI have benefitted from the million-fold increase in computing power we’ve seen over the last two decades. There are now antennas in spacecraft so complex they could only be designed through computational evolution. There are game-playing agents in Othello, Backgammon, and most recently in Go that have learned to play at the level of the best humans, and in the case of AlphaGo, even beyond the ability of the best humans. There are non-player characters in Unreal Tournament that have evolved to be indistinguishable from humans, thereby passing the Turing test—at least for game bots. And in finance, there are computational traders in the stock market evolved to make real money.

 

These AI agents are different from those commonly seen in robotics, vision, and speech processing in that they were not taught to perform specific actions. Instead, they learned the best behaviors on their own by exploring possible behaviors and determining which ones lead to the best outcomes. Many such methods are modeled after similar adaptation in biology. For instance, evolutionary computation takes concepts from biological evolution. The idea is to encode candidate solutions (such as videogame players) in such a way that it is possible to recombine and mutate them to get new solutions. Then, given a large population of candidates with enough variation, a parallel search method is run to find a candidate that actually solves the problem. The most promising candidates are selected for mutation and recombination in order to construct even better candidates as offspring. In this manner, only an extremely tiny fraction of the entire group of possible candidates needs to be searched to find one that actually solves the problem, e.g. plays the game really well.
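
The recipe in that paragraph maps almost line for line onto a minimal genetic algorithm. In the sketch below, the deliberately trivial task (maximize the number of ones in a bitstring) stands in for "plays the game well":

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 64, 100, 200

def fitness(genome):                  # "evaluate the quality of candidates"
    return sum(genome)

def mutate(genome, rate=0.01):
    return [1 - g if random.random() < rate else g for g in genome]

def recombine(a, b):                  # single-point crossover
    cut = random.randrange(GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 5]      # keep the most promising 20%
    population = parents + [
        mutate(recombine(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]

print("best fitness:", max(map(fitness, population)), "of", GENOME_LEN)
```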

 

We can apply the same approach to many domains where it is possible to evaluate the quality of candidates computationally. It applies to many design domains, including the design of the space antenna mentioned above, the design of a control system for a finless rocket, or the design of a multilegged, walking robot. Often evolution comes up with solutions that are truly unexpected but still effective — in other words, creative. For instance, when working on a controller that would navigate a robotic arm around obstacles, we accidentally disabled its main motor. It could no longer reach targets far away, because it could not turn around its vertical axis. What the controller evolved to do instead was slowly turn the arm away from the target, using its remaining motors, and then swing it back really hard, turning the whole robot towards the target through inertia!


Via Levin Chin
Scooped by Dr. Stefan Gruenwald

How Google Wants to Solve Robotic Grasping by Letting Robots Learn for Themselves

800,000 grasps is just the beginning for Google's large-scale robotic grasping project

 

Teaching robots this skill can be tricky, because there aren’t necessarily obvious connections between sensor data and actions, especially if you have gobs of sensor data coming in all the time (like you do with vision systems). A cleverer way to do it is to just let the robots learn for themselves, instead of trying to teach them at all. At Google Research, a team of researchers, with help from colleagues at X, tasked a 7-DoF robot arm with picking up objects in clutter using monocular visual servoing, and used a deep convolutional neural network (CNN) to predict the outcome of the grasp. The CNN was continuously retraining itself (starting with a lot of fail but gradually getting better), and to speed the process along, Google threw 14 robots at the problem in parallel. This is completely autonomous: all the humans had to do was fill the bins with stuff and then turn the power on.
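
Structurally, the loop is: propose candidate grasps, score them with the learned model, execute the best, let the world label the attempt, retrain. The sketch below keeps that structure with a 1-D "grasp", a hidden success rule, and a nearest-mean "model", all invented stand-ins; none of this is Google's actual API or CNN.

```python
import random

def world_outcome(grasp):
    """Hidden ground truth: grasps near 0.7 tend to succeed."""
    return random.random() < max(0.0, 1.0 - 4.0 * abs(grasp - 0.7))

class GraspModel:
    """Nearest-mean stand-in for the CNN that scores candidate grasps."""
    def __init__(self):
        self.successful = []
    def score(self, grasp):
        if not self.successful:
            return random.random()            # untrained: explore randomly
        mean = sum(self.successful) / len(self.successful)
        return -abs(grasp - mean)             # prefer grasps near past successes
    def train(self, grasp, success):
        if success:
            self.successful.append(grasp)

model, wins, attempts = GraspModel(), 0, 2000
for _ in range(attempts):                     # 14 arms ran loops like this one
    candidates = [random.random() for _ in range(16)]
    grasp = max(candidates, key=model.score)  # pick the best-scoring candidate
    ok = world_outcome(grasp)
    model.train(grasp, ok)                    # continuous retraining
    wins += ok
print(f"success rate over {attempts} attempts: {wins / attempts:.0%}")
```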

Scooped by Dr. Stefan Gruenwald

Teal drone is up for pre-orders, can do 70 mph, stay stable in winds up to 40 mph

Teal is self-promoted as the world's fastest production drone. It is fast and can withstand 40 mph winds. It is built to run as many apps as you can think of, and it has a supercomputer on board.

 

Among the key features: modes for beginners to hardcore racers; control it from a smartphone, tablet or hobby controller; has something called Teal OS as a software platform, opening the way for people to build apps around it; fast processors on board the drone.

 

The drone is powered by the NVIDIA TX1, which handles machine learning, autonomous flight, image recognition and more onboard. "This makes Teal a flying supercomputer. You can even plug Teal into a monitor, use it like a normal computer, play games on it...", Teal's inventor explains.

 

Teal has a 13MP wide-field-of-view camera that supports 4K video recording and 3-axis electronic stabilization. Videos and photos can be stored directly on built-in 16GB storage or on a microSD card.

 

Speed? How fast is fast? The max horizontal speed is listed as 70 mph. The site FAQ page stated that "Teal can fly over 70 MPH! In test runs, teal has even reached speeds over 85 MPH under certain conditions."

 

Teal is small enough to fit in a standard backpack without disassembly. The diagonal motor to motor measurement is listed as 261mm.

Scooped by Dr. Stefan Gruenwald

Molecular flip in crystals driven by light creates microrobotic propulsion

Hokkaido University researchers have designed a crystal material that continually flips between two positions like a paddle, propelling an attached structure, when stimulated by blue light. It could lead to bio-inspired microrobots that deliver drugs to target tissues, for example.

 

The team made azobenzene-oleic acid crystals, composed of an organic compound called azobenzene, commonly used in dye manufacturing, and oleic acid, commonly found in cooking oil. Azobenzene molecules take two structurally different forms, cis and trans, and they were found to switch back and forth when stimulated by light.

 

The frequency of the motion also increased with light intensity. Some crystal complexes they created even exhibited swimming-like motions in water, the researchers report. Previously reported light-responsive materials have been limited in their ability to deform, the researchers noted.

 

“The importance of this study lies in the realization of macroscopic self-oscillation by the repeated reversible reaction of a molecular machine with the cooperative transformation of a molecular assembly,” the researchers note in a paper published in the journal Angewandte Chemie.

 

“These results provide a fundamental strategy for constructing dynamic self-organizations in supramolecular systems to achieve bioinspired molecular systems.”

Scooped by Dr. Stefan Gruenwald

DOT and FAA Finalize Rules for Drones

The Department of Transportation’s Federal Aviation Administration has finalized the first operational rules (PDF) for routine commercial use of small unmanned aircraft systems (UAS or “drones”), opening pathways towards fully integrating UAS into the nation’s airspace. These new regulations work to harness new innovations safely, to spur job growth, advance critical scientific research and save lives.

 

“We are part of a new era in aviation, and the potential for unmanned aircraft will make it safer and easier to do certain jobs, gather information, and deploy disaster relief,” said U.S. Transportation Secretary Anthony Foxx. “We look forward to working with the aviation community to support innovation, while maintaining our standards as the safest and most complex airspace in the world.”

 

According to industry estimates, the rule could generate more than $82 billion for the U.S. economy and create more than 100,000 new jobs over the next 10 years.

 

The new rule, which takes effect in late August, offers safety regulations for unmanned aircraft (drones) weighing less than 55 pounds that are conducting non-hobbyist operations.

 

The rule’s provisions are designed to minimize risks to other aircraft and people and property on the ground. The regulations require pilots to keep an unmanned aircraft within visual line of sight. Operations are allowed during daylight and during twilight if the drone has anti-collision lights. The new regulations also address height and speed restrictions and other operational limits, such as prohibiting flights over unprotected people on the ground who aren’t directly participating in the UAS operation.

 

The FAA is offering a process to waive some restrictions if an operator proves the proposed flight will be conducted safely under a waiver. The FAA will make an online portal available to apply for these waivers in the months ahead.

 

“With this new rule, we are taking a careful and deliberate approach that balances the need to deploy this new technology with the FAA’s mission to protect public safety,” said FAA Administrator Michael Huerta. “But this is just our first step. We’re already working on additional rules that will expand the range of operations.”

 

Under the final rule, the person actually flying a drone must be at least 16 years old and have a remote pilot certificate with a small UAS rating, or be directly supervised by someone with such a certificate. To qualify for a remote pilot certificate, an individual must either pass an initial aeronautical knowledge test at an FAA-approved knowledge testing center or have an existing non-student Part 61 pilot certificate. If qualifying under the latter provision, a pilot must have completed a flight review in the previous 24 months and must take a UAS online training course provided by the FAA. The TSA will conduct a security background check of all remote pilot applications prior to issuance of a certificate.

 

 

Scooped by Dr. Stefan Gruenwald

First teleoperated endolumenal robot from secretive startup Auris cleared for use by FDA

Teleoperated endolumenal bot can navigate inside the body, image and treat conditions without making incisions.

 

The U.S. Food and Drug Administration (FDA) has just approved the first medical robot from Auris Surgical, a stealthy startup led by the co-founder of industry leader Intuitive Surgical, makers of the widely used da Vinci robot.

 

The teleoperated ARES robot (the acronym stands for Auris Robotic Endoscopy System) was cleared by the FDA at the end of May, and could now be used for diagnosing and treating patients.

 

Auris, which describes itself only as a “technology company based in Silicon Valley,” was previously thought to be working on a robotic microsurgical system designed to remove cataracts, and the company has in fact filed several patent applications along those lines.

 

However, an investigation by IEEE Spectrum suggests that the company has greater ambitions, including, according to current and former employees, “building the next generation of surgical robots… capable of expanding the applicability of robotics to a broad spectrum of medical procedures.”

 

A close reading of recent patent applications filed by Auris scientists shows that the company is focusing on so-called endolumenal (or endoluminal) surgery. This involves surgeons introducing flexible robots via the body’s natural openings (the mouth in particular), to address conditions of the throat, lungs and gastrointestinal system. IEEE Spectrum can reveal that Auris has already carried out at least one successful human trial of such a robot, outside the United States.

 

Because endolumenal surgery does not involve large incisions or (usually) general anesthesia, it benefits fragile patients who cannot withstand the trauma of normal surgery. The Society for American Gastrointestinal and Endoscopic Surgeons estimates that effective endolumenal therapies for obesity and reflux diseases alone could help more than 1 million patients a year in the United States.

Scooped by Dr. Stefan Gruenwald

Researchers Teaching Robots to Feel and React to Pain

An artificial nervous system could help robots avoid damaging interactions.

 

One of the most useful things about robots is that they don’t feel pain. Because of this, we have no problem putting them to work in dangerous environments or having them perform tasks that range from the slightly unpleasant to the definitely fatal for a human. And yet, a pair of German researchers believes that, in some cases, feeling and reacting to pain might be a good capability for robots to have.

 

The researchers, from Leibniz University of Hannover, are developing an “artificial robot nervous system to teach robots how to feel pain” and quickly respond in order to avoid potential damage to their motors, gears, and electronics. They described the project last week at the IEEE International Conference on Robotics and Automation (ICRA) in Stockholm, Sweden, and we were there to ask them what in the name of Asimov they were thinking when they came up with this concept.

 

Why is it a good idea for robots to feel pain? The same reason why it’s a good idea for humans to feel pain, said Johannes Kuehn, one of the researchers. “Pain is a system that protects us,” he told us. “When we evade the source of pain, it helps us not get hurt.” Humans who don’t have the ability to feel pain get injured far more often, because their bodies don’t instinctively react to things that hurt them.

 

Kuehn, who worked on the project with Professor Sami Haddadin, one of the world’s foremost experts in physical human-robot interaction and safety, argues that by protecting robots from damage, their system will be protecting humans as well. That’s because a growing number of robots will be operating in close proximity to human workers, and undetected damage in robotic equipment can lead to accidents. Kuehn and Haddadin reasoned that, if our biological mechanisms to sense and respond to pain are so effective, why not devise a bio-inspired robot controller that mimics those mechanisms? Such a controller would reflexively react to protect the robot from potentially damaging interactions.

Rescooped by Dr. Stefan Gruenwald from Fragments of Science

Bionic Spinal Cord Lets You Move Robotic Limbs With Power of Thought

Australian researchers have created a “bionic spinal cord.” They claim that this device could give paralyzed people significant hope of walking again. And if that’s not enough, it could do so using the power of thought, without the need for open-brain surgery.

 

A research team from the Vascular Bionics Laboratory at the University of Melbourne developed the novel neural-recording device, which both eschews invasive surgery and decreases the risks of a blood-brain barrier breach by being implanted into the brain’s blood vessels.

 

Developed under DARPA’s Reliable Neural-Interface Technology (RE-NET) program, the Stentrode can potentially safely expand the use of brain-machine interfaces (BMIs) in the treatment of physical disabilities and neurological disorders.

 

The researchers describe their “proof-of-concept results” from a study conducted on sheep, demonstrating high-fidelity measurements taken from the region of the brain responsible for controlling voluntary movement (the motor cortex). The novel device, as it happens, is just about the size of a paperclip.

 

Notably, the device records neural activity that, in pre-clinical trials, has been shown to drive limbs through an exoskeleton.

 

The team, led by neurologist Thomas Oxley, M.D., published their study in the journal Nature Biotechnology.


Via Mariaschnee
Scooped by Dr. Stefan Gruenwald

Robots Get Creative To Cut Through Clutter

Clutter is a special challenge for robots, but new Carnegie Mellon University software is helping robots cope, whether they're beating a path across the Moon or grabbing a milk jug from the back of the refrigerator.

 

The software not only helped a robot deal efficiently with clutter, it surprisingly revealed the robot's creativity in solving problems.

"It was exploiting sort of superhuman capabilities," Siddhartha Srinivasa, associate professor of robotics, said of his lab's two-armed mobile robot, the Home Exploring Robot Butler, or HERB. "The robot's wrist has a 270-degree range, which led to behaviors we didn't expect. Sometimes, we're blinded by our own anthropomorphism." In one case, the robot used the crook of its arm to cradle an object to be moved. "We never taught it that," Srinivasa added.

 

The rearrangement planner software was developed in Srinivasa's lab by Jennifer King, a Ph.D. student in robotics, and Marco Cognetti, a Ph.D. student at Sapienza University of Rome who spent six months in Srinivasa's lab. They will present their findings May 19 at the IEEE International Conference on Robotics and Automation in Stockholm, Sweden. In addition to HERB, the software was tested on NASA's KRex robot, which is being designed to traverse the lunar surface. While HERB focused on clutter typical of a home, KRex used the software to find traversable paths across an obstacle-filled landscape while pushing an object.

 

Robots are adept at "pick-and-place" (P&P) processes, picking up an object in a specified place and putting it down at another specified place. Srinivasa said this has great applications in places where clutter isn't a problem, such as factory production lines. But that's not what robots encounter when they land on distant planets or when "helpmate" robots eventually land in people's homes.

 

P&P simply doesn't scale up in a world full of clutter. When a person reaches for a milk carton in a refrigerator, he doesn't necessarily move every other item out of the way. Rather, a person might move an item or two, while shoving others out of the way as the carton is pulled out.

 

The rearrangement planner automatically finds a balance between the two strategies, Srinivasa said, based on the robot's progress on its task. The robot is programmed to understand the basic physics of its world, so it has some idea of what can be pushed, lifted or stepped on. And it can be taught to pay attention to items that might be valuable or delicate, in case it must extricate a bull from a china shop.

Scooped by Dr. Stefan Gruenwald

Disney’s VertiGo Combines Car, Helicopter to Seemingly Defy Gravity

From Disney and ETH Zurich, this steam-punkish robot can transition from ground to wall and back again.

 

VertiGo is a wall-climbing robot that is capable of transitioning from the ground to the wall, created in collaboration between Disney Research Zurich and ETH. The robot has two tiltable propellers that provide thrust onto the wall, and four wheels. One pair of wheels is steerable, and each propeller has two degrees of freedom for adjusting the direction of thrust. By transitioning from the ground to a wall and back again, VertiGo extends the ability of robots to travel through urban and indoor environments.

The robot is able to move on a wall quickly and with agility. The use of propellers to provide thrust onto the wall ensures that the robot is able to traverse over indentations such as masonry. The choice of two propellers rather than one enables a floor-to-wall transition – thrust is applied both towards the wall using the rear propeller, and in an upward direction using the front propeller, resulting in a flip onto the wall.
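
The wall-driving requirement reduces to simple statics: thrust pressing the robot onto the wall must generate enough wheel friction to cancel gravity, and the floor-to-wall flip is a matter of resolving a tilted thrust into normal and upward components. Mass, friction, and thrust numbers below are invented; the article gives no figures:

```python
import numpy as np

M = 2.0      # robot mass, kg (assumed)
G = 9.81
MU = 0.7     # wheel-on-wall friction coefficient (assumed)

# On a vertical wall gravity acts along the wall plane, and the wheels resist
# it only through friction, which is proportional to the thrust into the wall.
required_normal = M * G / MU
print(f"thrust needed into wall: {required_normal:.1f} N "
      f"({required_normal / (M * G):.2f} x robot weight)")

# During the floor-to-wall flip, the rear propeller pushes toward the wall and
# the front one pushes upward; a tilted thrust T at angle a contributes both.
T, a = 15.0, np.deg2rad(40)   # invented thrust and tilt
print(f"tilted thrust: {T * np.cos(a):.1f} N into wall, "
      f"{T * np.sin(a):.1f} N upward")
```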

Scooped by Dr. Stefan Gruenwald

This five-fingered robot hand is close to human in functionality

A University of Washington team of computer scientists and engineers has built what they say is one of the most highly capable five-fingered robot hands in the world. It can perform dexterous manipulation and learn from its own experience without needing humans to direct it. Their work is described in a paper to be presented May 17 at the IEEE International Conference on Robotics and Automation.

 

“Hand manipulation is one of the hardest problems that roboticists have to solve,” said lead author Vikash Kumar, a UW doctoral student in computer science and engineering. “A lot of robots today have pretty capable arms but the hand is as simple as a suction cup or maybe a claw or a gripper.”

 

The UW research team has developed an accurate simulation model that enables a computer to analyze movements in real time. In their latest demonstration, they apply the model to the robot hardware and to real-world tasks like rotating an elongated object.

 

Scooped by Dr. Stefan Gruenwald

Danko Nikolić: How to Make Intelligent Robots That Understand the World

There are some amazing robots roving the surface of Mars. However, they are heavily dependent on their human operators. But what if we could provide them with human-like intelligence so that they could find their own way without assistance? What if we could teach them to autonomously deal with completely novel situations? Danko Nikolić, a neuroscientist at the Max-Planck Institute for Brain Research, has his own vision: a novel approach to Artificial Intelligence (AI) that could give robots the capability to understand the world through a method called “AI-Kindergarten”. So, can we provide for a sufficiently strong artificial intelligence to enable a robot to find its way in an environment as hostile and as unpredictable as space?

Scooped by Dr. Stefan Gruenwald

Cooperating High-Precision Robots Manipulate Microparticles under Microscope

The robotic manipulation of biological samples that are measured in microns is a challenging task, requiring high precision and dexterity. The end-effectors and the manipulators must be as flexible as possible to manage the variations in the size and shape of the samples, while at the same time protecting them from any form of damage (e.g. perforation).

 

This article discusses the work conducted at the Hamlyn Centre for Robotic Surgery of Imperial College London to tackle these challenges. The manipulation tasks were semi-automated by developing multi-robot cooperation and a compliant end-effector. This solution can be applied to cell measurements, single-cell surgery, tissue engineering and cell enucleation.

Rescooped by Dr. Stefan Gruenwald from Virtual Neurorehabilitation

Robotic exoskeleton maps sense-deficits in young stroke patients

Researchers at the University of Calgary are using robotics technology to try to come up with more effective treatments for children who have had strokes.

 

The robotic device measures a patient's position sense — what doctors call proprioception — the unconscious perception of where the body is while in motion or at rest.

 

"Someone whose position sense has been affected might have difficulty knowing where their hand or arm is in space, adding to their difficulty in using their affected, weaker limb," said one of the study's senior researchers, Dr. Kirton of the Cumming School of Medicine's departments of pediatrics and clinical neurosciences.

 

"We can try to make a hand stronger but, if your brain doesn't know where the hand is, this may not translate into meaningful function in daily life."

 

PhD candidate Andrea Kuczynski is doing ongoing research using the KINARM (Kinesiological Instrument for Normal and Altered Reaching Movements) robotic device.

 

During the test, the children sit in the KINARM machine with their arms supported by its exoskeleton, which measures movement as they play video games and do other tasks. All the children also had MRIs, which gave researchers a detailed picture of their brain structures.


Via Daniel Perez-Marcos