Amazing Science
Find tag "robotics"
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald

'Natural' selection of robots: On the origin of (robot) species


Researchers have observed the process of evolution by natural selection at work in robots, by constructing a ‘mother’ robot that can design, build and test its own ‘children’, and then use the results to improve the performance of the next generation, without relying on computer simulation or human intervention.


Researchers led by the University of Cambridge have built a mother robot that can independently build its own children and test which one does best; and then use the results to inform the design of the next generation, so that preferential traits are passed down from one generation to the next.


Without any human intervention or computer simulation beyond the initial command to build a robot capable of movement, the mother created children constructed of between one and five plastic cubes with a small motor inside.


In each of five separate experiments, the mother designed, built and tested generations of ten children, using the information gathered from one generation to inform the design of the next. The results, reported in the open access journal PLOS One, found that preferential traits were passed down through generations, so that the ‘fittest’ individuals in the last generation performed a set task twice as quickly as the fittest individuals in the first generation.


“Natural selection is basically reproduction, assessment, reproduction, assessment and so on,” said lead researcher Dr Fumiya Iida of Cambridge’s Department of Engineering, who worked in collaboration with researchers at ETH Zurich. “That’s essentially what this robot is doing – we can actually watch the improvement and diversification of the species.”


For each robot child, there is a unique ‘genome’ made up of a combination of between one and five different genes, which contains all of the information about the child’s shape, construction and motor commands. As in nature, evolution in robots takes place through ‘mutation’, where components of one gene are modified or single genes are added or deleted, and ‘crossover’, where a new genome is formed by merging genes from two individuals.


In order for the mother to determine which children were the fittest, each child was tested on how far it travelled from its starting position in a given amount of time. The most successful individuals in each generation remained unchanged in the next generation in order to preserve their abilities, while mutation and crossover were introduced in the less successful children.
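
The mother's procedure is, in essence, a textbook evolutionary algorithm: build a population of genomes, score them on a physical task, keep the best unchanged, and fill the rest of the next generation by crossover and mutation. The sketch below is a minimal Python illustration of that loop under the constraints described above (one to five 'genes' per genome, elitism, and a distance-travelled fitness score); the gene encoding and every function name are hypothetical, not the authors' implementation.

```python
import random

GENE_KEYS = ("shape", "construction", "motor_command")   # per-cube parameters (illustrative)

def random_gene():
    """One 'gene' describes a single cube: its shape, how it attaches, and its motor command."""
    return {key: random.random() for key in GENE_KEYS}

def random_genome():
    """A genome is one to five genes, i.e. a robot built from one to five motorised cubes."""
    return [random_gene() for _ in range(random.randint(1, 5))]

def mutate(genome, rate=0.2):
    """Modify components of a gene, or add/delete whole genes, as described above."""
    genome = [dict(gene) for gene in genome]
    for gene in genome:
        for key in GENE_KEYS:
            if random.random() < rate:
                gene[key] = random.random()
    if random.random() < rate and len(genome) < 5:
        genome.append(random_gene())
    if random.random() < rate and len(genome) > 1:
        genome.pop(random.randrange(len(genome)))
    return genome

def crossover(a, b):
    """Form a new genome by merging genes from two parent genomes."""
    child = a[: random.randint(1, len(a))] + b[random.randint(0, len(b)):]
    return child[:5]

def evolve(fitness, generations=5, pop_size=10, elite=2):
    """fitness(genome) -> distance travelled; in the real experiments the 'call' is
    the mother physically building and timing a child."""
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        survivors = ranked[:elite]                         # fittest children kept unchanged
        children = []
        while len(children) < pop_size - elite:
            a, b = random.sample(ranked[: pop_size // 2], 2)
            children.append(mutate(crossover(a, b)))       # crossover plus mutation for the rest
        pop = survivors + children
    return max(pop, key=fitness)

# toy stand-in for the physical test: reward designs with strong motor commands
best = evolve(lambda g: sum(gene["motor_command"] for gene in g))
print(len(best), "cubes in the best design found")
```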


The researchers found that design variations emerged and performance improved over time: the fastest individuals in the last generation moved at an average speed that was more than twice the average speed of the fastest individuals in the first generation. This increase in performance was not only due to the fine-tuning of design parameters, but also because the mother was able to invent new shapes and gait patterns for the children over time, including some designs that a human designer would not have been able to build.


“One of the big questions in biology is how intelligence came about – we’re using robotics to explore this mystery,” said Iida. “We think of robots as performing repetitive tasks, and they’re typically designed for mass production instead of mass customization, but we want to see robots that are capable of innovation and creativity.”

Rescooped by Dr. Stefan Gruenwald from Synthetic Biology

Virginia Tech scientist develops model for robots with bacterial brains


Forget the Vulcan mind-meld of the Star Trek generation — as far as mind control techniques go, bacteria are the next frontier.


In a paper published today in Scientific Reports, which is part of the Nature Publishing Group, a Virginia Tech scientist used a mathematical model to demonstrate that bacteria can control the behavior of an inanimate device like a robot. “Basically we were trying to find out from the mathematical model if we could build a living microbiome on a nonliving host and control the host through the microbiome,” said Warren Ruder, an assistant professor of biological systems engineering in both the College of Agriculture and Life Sciences and the College of Engineering.


"We found that robots may indeed be able to function with a bacterial brain,” he said. For future experiments, Ruder is building real-world robots that will have the ability to read bacterial gene expression levels in E. coli using miniature fluorescent microscopes. The robots will respond to bacteria he will engineer in his lab.


On a broad scale, understanding the biochemical sensing between organisms could have far reaching implications in ecology, biology, and robotics. In agriculture, bacteria-robot model systems could enable robust studies that explore the interactions between soil bacteria and livestock. In healthcare, further understanding of bacteria’s role in controlling gut physiology could lead to bacteria-based prescriptions to treat mental and physical illnesses. Ruder also envisions droids that could execute tasks such as deploying bacteria to remediate oil spills.


The findings also add to the ever-growing body of research about bacteria in the human body that are thought to regulate health and mood, and especially the theory that bacteria also affect behavior.


The study was inspired by real-world experiments where the mating behavior of fruit flies was manipulated using bacteria, as well as mice that exhibited signs of lower stress when implanted with probiotics.


Ruder’s approach revealed unique decision-making behavior by a bacteria-robot system by coupling and computationally simulating widely accepted equations that describe three distinct elements: engineered gene circuits in E. coli, microfluidic bioreactors, and robot movement.


The bacteria in the mathematical experiment exhibited their genetic circuitry by either turning green or red, according to what they ate. In the mathematical model, the theoretical robot was equipped with sensors and a miniature microscope to measure the color of the bacteria, which told it where and how fast to go depending upon the pigment and intensity of the color.
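
As a rough illustration of how such a coupled model can steer a robot, the toy simulation below pairs a two-state 'gene expression' update with a controller that sets speed from total fluorescence and turn direction from the dominant pigment. All equations, rates, and the nutrient field are invented for the example; they stand in for, and greatly simplify, the published model of gene circuits, bioreactors, and robot motion.

```python
import math
import random

def gene_expression(nutrient_a, nutrient_b, state, dt=0.1, k=1.0, d=0.2):
    """Toy stand-in for the engineered E. coli circuit: a 'green' protein is induced
    by nutrient A, a 'red' one by nutrient B, and both dilute/degrade at rate d."""
    g, r = state
    g += dt * (k * nutrient_a - d * g)
    r += dt * (k * nutrient_b - d * r)
    return g, r

def robot_step(pos, heading, color, dt=0.1):
    """Toy controller: the dominant pigment picks the turn direction and the total
    intensity sets the speed, mimicking 'where and how fast to go'. Gains are made up."""
    g, r = color
    speed = 0.5 * (g + r)                      # brighter culture -> faster robot
    heading += dt * (0.8 if g > r else -0.8)   # green -> turn one way, red -> the other
    x, y = pos
    return (x + dt * speed * math.cos(heading),
            y + dt * speed * math.sin(heading)), heading

def nutrient_field(x, y):
    """Crude environment: nutrient A is rich on the right half-plane, B on the left."""
    return max(0.0, x), max(0.0, -x)

pos, heading, state = (random.uniform(-1, 1), 0.0), 0.0, (0.0, 0.0)
for _ in range(1000):
    a, b = nutrient_field(*pos)
    state = gene_expression(a, b, state)
    pos, heading = robot_step(pos, heading, state)
print("final position:", pos)
```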


The model also revealed higher order functions in a surprising way. In one instance, as the bacteria were directing the robot toward more food, the robot paused before quickly making its final approach — a classic predatory behavior of higher order animals that stalk prey.

Ruder’s modeling study also demonstrates that these sorts of biosynthetic experiments could be done in the future with a minimal amount of funds, opening up the field to a much larger pool of researchers.


Via Integrated DNA Technologies
Rescooped by Dr. Stefan Gruenwald from Conformable Contacts

Drones Are Becoming the Oil Industry's Disruptive Technology

Self-piloting drones like the Boomerang are leading a small but fundamental change in the industry. In oil and gas, equipment doesn’t move without data—where to drill, how deep to go, and so on. With the traffic bottleneck removed, suddenly equipment can move more nimbly and exploration startups can get in the drilling game for a fraction of the traditional entry cost.

Via YEC Geo
YEC Geo's curator insight, July 20, 8:48 AM

Tech disruption coming to the oil bidness?

Scooped by Dr. Stefan Gruenwald

Tiny Robotic Tentacles Developed That Can Lasso an Ant


With a diameter just twice that of a human hair, they look more like short snips of fishing line than advanced robotic appendages. But these micro-tentacles can curl and grip. They can lasso an ant or scoop up a tiny fish egg. And they could give a robot of any size an astonishingly gentle but precise grasp.


A team of three material scientists at Iowa State University has just invented this new way for robots to softly handle delicate and diminutive objects. As they describe today in a paper in the journal Scientific Reports, their clever micro-tentacles are hundreds of times smaller than the next smallest self-spiraling, lifelike tentacle, making them a unique tool for everything from microsurgery to microbiology. Better still, they hug with less than 1 micro-newton of force. That's thousands of times softer than your blinking eye, and it makes mechanical pinching (the traditional approach to a robot's tiny grip) look absolutely medieval.


"Two of the biggest trends in robotics right now are soft-robotics—utilizing soft materials for purposes like gentler human interaction—and micro-robotics, making robots smaller," says Jaeyoun Kim, the material scientist who led the team. "These micro-tentacles fuse those together."


Kim and his colleagues built their micro-tentacles out of a cheap, naturally soft, and commercially available material called PDMS. They used the PDMS to form hollow tubes which curl up when the air is sucked out of them. One side of the tube is corked, while the other is connected to a pneumatic controller. The micro-tentacles (which are less than 8 millimeters long) curl in a specific direction because one side of the tube is thinner than the other.


The process wasn't easy. PDMS is quite liquid, almost like olive oil, which makes casting it with precision over a hair-thin, rod-like template almost impossible—it will bead up in drops. But the trio of researchers discovered a way to heat-treat the material to slightly gelatinize it, smoothing out the material and the problem. Another issue was finding a way to remove the tubes from their cylindrical template without destroying them. To do this, the scientists used a tool that looks much like a tiny wire-stripper.

Scooped by Dr. Stefan Gruenwald

Medical ‘millirobots’ could replace invasive surgery


Using a “Gauss gun” principle, an MRI machine drives a “millirobot” through a hypodermic needle into your spinal cord and guides it into your brain to release life-threatening fluid buildup.


University of Houston researchers have developed a concept for MRI-powered millimeter-size “millirobots” that could one day perform unprecedented minimally invasive medical treatments. This technology could be used to treat hydrocephalus, for example. Current treatments require drilling through the skull to implant pressure-relieving shunts, said Aaron T. Becker, assistant professor of electrical and computer engineering at the University of Houston. But MRI scanners alone don’t produce enough force to pierce tissues (or insert needles). So the researchers drew upon the principle of the “Gauss gun.”


Here’s how a Gauss gun works: a single steel ball rolls down a chamber, setting off a chain reaction when it smashes into the next ball, and so on, until the last ball flies forward, moving much more quickly than the initial ball. Based on that concept, the researchers imagine a medical robot with a barrel self-assembled from three small high-impact 3D-printed plastic components, with slender titanium rod spacers separating two steel balls.
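
The reason the final ball leaves faster than the first is that energy is added at every stage of the chain, for instance by magnetic attraction (here supplied by the MRI field) pulling each incoming ball toward the next collision. The snippet below is deliberately crude energy bookkeeping, with made-up numbers and lossless collisions assumed; it is only meant to show why the exit speed grows with the number of stages.

```python
import math

def gauss_gun_exit_speed(v_in, stages, energy_gain_per_stage, ball_mass):
    """Crude energy bookkeeping for a Gauss-gun chain (illustrative numbers only):
    at each stage the incoming ball gains some kinetic energy before impact, and
    with lossless collisions assumed, the far ball leaves with the accumulated energy."""
    energy = 0.5 * ball_mass * v_in ** 2
    energy += stages * energy_gain_per_stage       # work done by the magnetic attraction
    return math.sqrt(2 * energy / ball_mass)       # speed of the final, ejected ball

# e.g. a 1 g steel ball entering at 0.1 m/s, three stages, ~1 mJ gained per stage
print(gauss_gun_exit_speed(v_in=0.1, stages=3, energy_gain_per_stage=1e-3, ball_mass=1e-3))
# -> about 2.5 m/s, far faster than the 0.1 m/s entry speed
```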


Becker was first author of a paper presented at ICRA, the conference of the IEEE Robotics and Automation Society, nominated for best conference paper and best medical robotics paper. “Hydrocephalus, among other conditions, is a candidate for correction by our millirobots because the ventricles are fluid-filled and connect to the spinal canal,” Becker said. “Our noninvasive approach would eventually require simply a hypodermic needle or lumbar puncture to introduce the components into the spinal canal, and the components could be steered out of the body afterwards.”


Future work will focus on exploring clinical context, miniaturizing the device, and optimizing material selection.

Scooped by Dr. Stefan Gruenwald

Researchers build a robot that can reproduce

One of the dreams of both science fiction writers and practical robot builders has been realized, at least on a simple level: Cornell University researchers have created a machine that can build copies of itself.

Admittedly the machine is just a proof of concept -- it performs no useful function except to self-replicate -- but the basic principle could be extended to create robots that could replicate or at least repair themselves while working in space or in hazardous environments, according to Hod Lipson, Cornell assistant professor of mechanical and aerospace engineering, and computing and information science, in whose lab the robots were built and tested.

Lipson and colleagues report on the work in a brief communication in the May 12 issue of Nature.

Their robots are made up of a series of modular cubes -- called "molecubes" -- each containing identical machinery and the complete computer program for replication. The cubes have electromagnets on their faces that allow them to selectively attach to and detach from one another, and a complete robot consists of several cubes linked together. Each cube is divided in half along a long diagonal, which allows a robot composed of many cubes to bend, reconfigure and manipulate other cubes. For example, a tower of cubes can bend itself over at a right angle to pick up another cube.

Although these experimental robots work only in the limited laboratory environment, Lipson suggests that the idea of making self-replicating robots out of self-contained modules could be used to build working robots that could self-repair by replacing defective modules. For example, robots sent to explore Mars could carry a supply of spare modules to use for repairing or rebuilding as needed, allowing for more flexible, versatile and robust missions. Self-replication and repair also could be crucial for robots working in environments where a human with a screwdriver couldn't survive.

To begin replication, the stack of cubes bends over and sets its top cube on the table. Then it bends to one side or another to pick up a new cube and deposit it on top of the first. By repeating the process, one robot made up of a stack of cubes can create another just like itself. Since one robot cannot reach across another robot of the same height, the robot being built assists in completing its own construction.
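
That replication sequence reads almost like pseudocode, and the toy model below captures just that logic: the parent seeds the child with its own top cube, ferries fresh cubes over while it can still reach, and the half-built child places the final cubes itself. The CubeRobot class and its 'reach' parameter are illustrative inventions, not the Cornell control software.

```python
class CubeRobot:
    """Toy model of a molecube stack; the class and its 'reach' parameter are
    illustrative inventions, not the Cornell control software."""

    def __init__(self, cubes, reach=3):
        self.cubes = list(cubes)   # bottom-to-top stack of cube IDs
        self.reach = reach         # highest level the bent-over parent can place onto

    def replicate(self, cube_supply):
        # 1. the parent bends over and sets its own top cube down to seed the child
        child, placed_by = [self.cubes[-1]], ["parent"]
        # 2. fresh cubes are added one at a time until the child matches the parent's height
        while len(child) < len(self.cubes):
            cube = cube_supply.pop()
            # once the stack is taller than the parent can reach, the child finishes itself
            placer = "parent" if len(child) < self.reach else "child"
            child.append(cube)
            placed_by.append(placer)
        return child, placed_by

parent = CubeRobot(cubes=["c1", "c2", "c3", "c4"])
stack, placed_by = parent.replicate(cube_supply=["c5", "c6", "c7"])
print(list(zip(stack, placed_by)))   # the last cube is placed by the half-built child itself
```
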
Scooped by Dr. Stefan Gruenwald

This New Four-Legged Robot Is Basically Invincible


Boston Dynamics, the company that builds incredibly agile robots, has added another four-legged sprinter to its pack. In order to introduce the world to “Spot,” the crew at Boston Dynamics kicked the innocent robot as it walked through the halls of their building — and filmed it. However, as you can see in the YouTube video, Spot never falters under the abuse; it dynamically corrects its balance even after a good shove.


When you’re an advanced robotics builder owned by Google, you don’t have to do much to make a splash. Boston Dynamics’ video (clearly filmed before Snowmageddon) is simply called “Introducing Spot,” and it’s two minutes of the quadruped climbing stairs, walking up hills, and, of course, getting kicked. A four-sentence video description is the only additional information the company is providing about Spot:


  • Spot is a four-legged robot designed for indoor and outdoor operation. It is electrically powered and hydraulically actuated.
  • Spot has a sensor head that helps it navigate and negotiate rough terrain. Spot weighs about 160 lbs.
Scooped by Dr. Stefan Gruenwald

‘Cobots’ - robots that work side-by-side with humans - enhance robotic manufacturing and throughput


Manufacturers have begun experimenting with a new generation of “cobots” (collaborative robots) designed to work side-by-side with humans. To determine best practices for effectively integrating human-robot teams within manufacturing environments, a University of Wisconsin-Madison team headed by Bilge Mutlu, an assistant professor of computer sciences, is working with an MIT team headed by Julie A. Shah, an assistant professor of aeronautics and astronautics.


Their research is funded by a three-year grant from the National Science Foundation (NSF) as part of its National Robotics Initiative program.

Cobots are less expensive and intended to be easier to reprogram and integrate into manufacturing. For example, Steelcase owns four next-generation robots based on a platform called Baxter, made by Rethink Robotics.


Each Baxter robot has two arms and a tablet-like panel for “eyes” that provide cues to help human workers anticipate what the robot will do next.


“This new family of robotic technology will change how manufacturing is done,” says Mutlu. “New research can ease the transition of these robots into manufacturing by making human-robot collaboration better and more natural as they work together.”


Mutlu’s team is building on previous work related to topics such as gaze aversion in humanoid robots, robot gestures, and the issue of “speech and repair.” For example, if a human misunderstands a robot’s instructions or carries them out incorrectly, how should the robot correct the human?


On Rethink Robotics’ blog, founder and chairman Rodney Brooks notes “three exciting and significant trends taking place right now” that he thinks will begin to gain some very real traction in 2015:


  • We will begin to see large-scale deployment of collaborative and intelligent robots in manufacturing.
  • This will be a breakout year for robotics research.
  • Emerging technology will be designed to solve some of the world’s biggest problems.


Scooped by Dr. Stefan Gruenwald

The Chinese hotel that's staffed ENTIRELY by robots - all your needs are met by cyborgs at $11 per night


A brand new hotel has skipped the recruitment process - because all its staff are robots. From reception desk staff to security doormen and waiters, the Pengheng Space Capsules Hotel in Shenzhen, China, has built, rather than hired, its new employees. Start-up costs and robot maintenance aside, staff bills must be minimal. And it seems the hotel is keen to pass those savings right back to the customer: a night's stay costs just £6.80.


You can even order food and drinks from the lounge area using supplied tablet computers, with your choices arriving via robot waiter just a few minutes later. Best of all, however, is the price. A stay in the hotel costs a mere 70 yuan (£6.81) per person. For that price you might not expect much, but the hotel's facilities are impressive. This capsule hotel - a format that originated in Japan, featuring many extremely small rooms dubbed capsules - combines affordable and minimalist overnight accommodation with a futuristic vibe.


An eerie blue-lit corridor leads to the space station-style bunks, which resemble something lifted out of a sci-fi film. The hotel also features banks of computers, lockers, washrooms and a DIY laundry room.

With robots, neon and shiny surfaces as far as the eye can see, a stay in the hotel has become a must for both local and visiting tech fans.

india cox's curator insight, May 5, 11:08 PM

This is an incredible way to revolutionize the hotel industry. To completely remove part of the HR process changes the way that hotels operate. I don't think this is a long-term solution to streamlining the hotel industry. Rather than going through the long employment process, this hotel builds employees - a cost-effective way to operate a hotel. However, operating this way doesn't come without its negative aspects. As a guest you generally stay in a hotel to go on a holiday. This hotel is good for a short stay but isn't very appropriate for a long stay; it's more about the capsule hotel experience. The only thing that worries me is the fact that the hotel relies heavily on technology as part of its service. Who do you call if you have a problem or the tablet breaks? You can't really get the robot to fix it for you. The same goes for the amenities: their DIY laundry service isn't really relevant. When you go to a hotel you want to be on a holiday; you don't want to go and clean your own laundry. If they rely so heavily on technology, why can't you get the robots to do your laundry?

I don't think this is a long-term solution, and the negatives outweigh the benefits. The lack of service doesn't really reflect positively on the hospitality industry. It seems cold rather than hospitable. I really don't think this is a good HR solution - rather, an unsustainable shortcut.

In theory, it's great. In practice, not so great.

Scooped by Dr. Stefan Gruenwald

New frontiers: Drones deliver a raft of surprises in 2014

2014 wasn’t the year that drones first entered the consumer lexicon, but it did see the notion of using these unmanned vehicles to our advantage become much more palatable. Package deliveries and carrying out conventional robotic tasks are some concepts that have defined the progress of drones in the past 12 months, but, as is typical of emerging technologies, the more their potential is realized the more they find uses in unexpected new applications. Let’s have a look over some of the year’s more surprising, yet significant, drone projects that promise to shake things up in exciting new ways.
Scooped by Dr. Stefan Gruenwald

Researchers print out self-learning robots


When the robots of the future are set to extract minerals from other planets, they need to be both self-learning and self-repairing. Researchers at Oslo University have already succeeded in producing self-instructing robots on 3D printers.


“In the future, robots must be able to solve tasks in deep mines on distant planets, in radioactive disaster areas, in hazardous landslip areas and on the sea bed beneath the Antarctic. These environments are so extreme that no human being can cope. Everything needs to be automatically controlled. Imagine that the robot is entering the wreckage of a nuclear power plant. It finds a staircase that no-one has thought of. The robot takes a picture. The picture is analyzed. The arm of one of the robots is fitted with a printer. This produces a new robot, or a new part for the existing robot, which enables it to negotiate the stairs,” hopes Associate Professor Kyrre Glette, who is part of the Robotics and intelligent systems research team at Oslo University’s Department of Informatics.


Even if Glette’s ideas remain visions of the future, the robotics team in the Informatics Building have already developed three generations of self-learning robots.


Professor Mats Høvin was the man behind the first model, the chicken-robot named “Henriette”, which received much media coverage when it was launched ten years ago. Henriette had to teach itself how to walk, and to jump over obstacles. And if it lost a leg, it had to learn, unaided, how to hop on the other leg.


A few years later, Masters student Tønnes Nygaard launched the second generation robot. At the same time, the Informatics team developed a simulation program that was able to calculate what the body should look like. Just as for Henriette, its number of legs was pre-determined, but the computer program was at liberty to design the length of the legs and the distance between them.


The third generation of robots brings even greater flexibility. The simulation programme takes care of the complete design and suggests the optimal number of legs and joints.


Simulation is not enough. In order to test the functionality of the robots, they need to undergo trials in the real world. The robots are produced as printouts from a 3D printer. “Once the robots have been printed, their real-world functionalities quite often prove to be different from those of the simulated versions. We are talking of a reality gap. There will always be differences. Perhaps the floor is more slippery in reality, meaning that the friction coefficient will have to be changed. We are therefore studying how the robots deteriorate from simulation to laboratory stage,” says Mats Høvin.

Scooped by Dr. Stefan Gruenwald

Wearable Competition Finalist: Wear a Spy Drone on Your Wrist


A drone that can be dispatched with the flick of a wrist feels like an invention likely to fly out from the Batcave, but a Stanford Ph.D. and a Google program manager are close to finalizing a quadcopter that can be worn like a slap bracelet.


Called Nixie, this diminutive drone weighs less than a tenth of a pound, but can capture HD images and sync with a smartphone while its owner is busy scaling an Alp or biking through the Teutoburg forest. “Quadcopters give you a new perspective you can’t get anywhere else,” says Jelena Jovanovic, Nixie’s project manager. “But it’s not really feasible to pilot a drone and keep doing what you’re doing.”


Being able to wear the drone is a cute gimmick, but its powerful software, packed into a tiny shell, could set Nixie apart from bargain Brookstone quadcopters. Expertise in motion-prediction algorithms and sensor fusion will give the wrist-worn whirlybirds an impressive range of functionality. A “Boomerang mode” allows Nixie to travel a fixed distance from its owner, take a photo, then return. “Panorama mode” takes aerial photos in a 360° arc. “Follow me” mode makes Nixie trail its owner and would capture amateur athletes in a perspective typically reserved for Madden all-stars. “Hover mode” gives any filmmaker easy access to impromptu jib shots. Other drones promise similar functionality, but none promise the same level of portability or user friendliness. “We’re not trying to build a quadcopter, we’re trying to build a personal photographer,” says Jovanovic.


Jovanovic and her partner Christoph Kohstall, a Stanford postdoc who holds a Ph.D. in quantum physics and a first-author credit in the journal Nature, believe photography is at a tipping point. Early cameras were bulky, expensive, and difficult to operate. The last hundred years have produced consistently smaller, cheaper, and easier-to-use cameras, but future developments are forking. Google Glass provides the ultimate in portability, but leaves wearers with a fixed perspective. Surveillance drones offer unique vantage points, but are difficult to operate. Nixie attempts to offer the best of both worlds. 


Nixie is an undeniably impressive concept, and while rough prototypes prove the principle, the question remains if its myriad design challenges can be solved without sacrificing the sleek look.


The team’s strong background suggests they can. As a teenager, Kohstall built a telescope out of bike frame parts and Lego motors that could follow a point in the sky to take long-exposure star photographs, before graduating to writing a treatise on Metastability and Coherence of Repulsive Polarons in a Strongly Interacting Fermi Mixture.

Scooped by Dr. Stefan Gruenwald

Microbot muscles: Chains of particles assemble and flex | (e) Science News


In a step toward robots smaller than a grain of sand, University of Michigan researchers have shown how chains of self-assembling particles could serve as electrically activated muscles in the tiny machines. So-called microbots would be handy in many areas, particularly medicine and manufacturing. But several challenges lie between current technologies and science fiction possibilities. Two of the big ones are building the 'bots and making them mobile.


"We are inspired by ideas of microscopic robots," said Michael Solomon, a professor of chemical engineering. "They could work together and go places that have never been possible before." Solomon and his group demonstrated that some gold plating and an alternating electric field can help oblong particles form chains that extend by roughly 36 percent when the electric field is on.


"What's really important in the field of nanotechnology right now is not just assembling into structures, but assembling into structures that can change or shape-shift," said Sharon Glotzer, the Stuart W. Churchill Professor of Chemical Engineering, whose team developed computer simulations that helped explain how the chains grew and operated.


The innovation that led to the shape-shifting, said Aayush Shah, a doctoral student in Solomon's group, is the addition of the electric field to control the behavior of the particles. "The particles are like children in a playground," Shah said. "They do interesting things on their own, but it takes a headmaster to make them do interesting things together."


The team started with particles similar to those found in paint, with diameters of about a hundredth the width of a strand of hair. They stretched these particles into football shapes and coated one side of each football with gold. The gilded halves attracted one another in slightly salty water--ideally about half the salt concentration in the sports drink Powerade. The more salt in the water, the stronger the attraction.


Left to their own devices, the particles formed short chains of overlapping pairs, averaging around 50 or 60 particles to a chain. When exposed to an alternating electric field, the chains seemed to add new particles indefinitely. But the real excitement was in the way that the chains stretched.


"We want them to work like little muscles," Glotzer said. "You could imagine many of these fibers lining up with the field and producing locomotion by expanding and contracting." While the force generated by the fibers is about 1,000 times weaker than human muscle tissue per unit area, it may be enough for microbots.

Scooped by Dr. Stefan Gruenwald

Elon Musk, Stephen Hawking Want to Save the World From Killer Robots on the Battlefield


Elon Musk and Stephen Hawking are among the leaders from the science and technology worlds calling for a ban on autonomous weapons, warning that weapons with a mind of their own "would not be beneficial for humanity."


Along with 1,000 other signatories, Musk and Hawking signed their names to an open letter that will be presented this week at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.


Autonomous weapons are defined by the group as artillery that can "search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions."


"Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is -- practically if not legally -- feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms," the letter, posted on the Future of Life Institute's website says.


If one country pushes ahead with the creation of robotic killers, the group wrote it fears it will spur a global arms race that could spell disaster for humanity.


"Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group," the letter says. "We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people."


While the group warns of the potential carnage killer robots could inflict, they also stress they aren't against certain advances in artificial intelligence.


"We believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so," the letter says. "Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."

Rescooped by Dr. Stefan Gruenwald from Systems Theory

The man who created the world's first self aware robot says the next big test will change the human-robot relationship forever


Luciano Floridi issued a challenge to scientists around the world in 2005: prove that robots can display the human trait of self-awareness through a knowledge game called the “wise man” test. It was a venture he never expected to see achieved in the foreseeable future. A decade later, the Oxford professor’s seemingly unattainable challenge has been met.


On July 9, 2015, a team of researchers led by Professor Selmer Bringsjord helped a robot solve the riddle, displaying a level of self-awareness and satisfying what had until then been considered “the ultimate sifter” test that could separate human from cyborg. But Bringsjord says there’s a bigger challenge he wants robots to accomplish: self-awareness in real time. If we achieve this milestone, he said, the way we interact with artificial intelligence and robots will drastically change.


“Real time” self-awareness means robots acting upon new situations that they are not pre-programmed for, and translating how to act into physical movements. This is a serious challenge that Bringsjord has not tapped into because self-awareness algorithms are still separate from a robot’s body. If robots could work in real time, mind-to-body, he says, we would break through major barriers that could result in scenarios such as droids that act as our personal chauffeurs.


Via Ben van Lier
TJ Allard's curator insight, July 26, 2:41 PM

OK, and... when? It's like I've been reading articles like this for a few years now.

Scooped by Dr. Stefan Gruenwald

External magnetic field controlled, nanoscale bacteria-like robots could replace stents and angioplasty balloons


Swarms of microscopic, magnetic, robotic beads could be used within five years by vascular surgeons to clear blocked arteries. These minimally invasive microrobots, which look and move like corkscrew-shaped bacteria, are being developed by an $18-million, 11-institution research initiative headed by the Korea Evaluation Institute of Industrial Technologies (KEIT).


These “microswimmers” are driven and controlled by external magnetic fields, similar to how nanowires from Purdue University and ETH Zurich/Technion (recently covered on KurzweilAI) work, but based on a different design. Instead of wires, they’re made from chains of three or more iron oxide beads, rigidly linked together via chemical bonds and magnetic force. The beads are put in motion by an external magnetic field that causes each of them to rotate. Because they are linked together, their individual rotations cause the chain to twist like a corkscrew, and this movement propels the microswimmer. The chains are small enough — the nanoparticles are 50–100 nanometers in diameter — that they can be introduced via a catheter and navigate the bloodstream like a tiny boat, Fantastic Voyage style (but without the microscopic humans), directly to the blocked artery, where a drill would clear it completely.


Drilling through plaque:

The inspiration for using the robotic swimmers as tiny drills came from the bacterium Borrelia burgdorferi, which causes Lyme disease and wreaks havoc inside the body by burrowing through healthy tissue. Its spiral shape enables both its movement and the resultant cellular destruction. By controlling the magnetic field, a surgeon could direct the speed and direction of the microswimmers. The magnetism also allows for joining separate strands of microswimmers together to make longer strings, which can then be propelled with greater force.


Once flow is restored in the artery, the microswimmer chains could disperse and be used to deliver anti-coagulant medication directly to the affected area to prevent future blockage. This procedure could supplant the two most common methods for treating blocked arteries: stenting and angioplasty. Stenting is a way of creating a bypass for blood to flow around the block by inserting a series of tubes into the artery, while angioplasty balloons out the blockage by expanding the artery with help from an inflatable probe.


“Current treatments for chronic total occlusion are only about 60 percent successful,” said MinJun Kim, PhD, a professor in the College of Engineering and director of the Biological Actuation, Sensing & Transport Laboratory (BASTLab) at Drexel University. “We believe that the method we are developing could be as high as 80–90 percent successful and possibly shorten recovery time. The microswimmers are composed of inorganic biodegradable beads so they will not trigger an immune response in the body. We can adjust their size and surface properties to accurately deal with any type of arterial occlusion.” Kim’s research was recently reported in the Journal of Nanoparticle Research.


Mechanical engineers at Drexel University are using these microswimmers as a part of a surgical toolkit being assembled by the Daegu Gyeongbuk Institute of Science and Technology (DGIST). Researchers from other institutions on the project include ETH Zurich, Seoul National University, Hanyang University, the Korea Institute of Science and Technology, and Samsung Medical Center.


DGIST anticipates testing the technology in lab and clinical settings within the next four years.

Scooped by Dr. Stefan Gruenwald

South Korean Kaist Team wins the DARPA Robotics Challenge


First place in the DARPA Robotics Challenge Finals this past weekend in Pomona, California, went to Team Kaist of South Korea for its DRC-Hubo robot, winning $2 million in prize money. Team IHMC Robotics of Pensacola, Fla., with its Running Man (Atlas) robot came in second place ($1 million prize), followed by Tartan Rescue of Pittsburgh with its CHIMP robot ($500,000 prize).


The DARPA Robotics Challenge, with three increasingly demanding competitions over two years, was launched in response to a humanitarian need that became glaringly clear during the nuclear disaster at Fukushima, Japan, in 2011, DARPA said. The goal was to “accelerate progress in robotics and hasten the day when robots have sufficient dexterity and robustness to enter areas too dangerous for humans and mitigate the impacts of natural or man-made disasters.”


The difficult course of eight tasks simulated Fukushima-like conditions, such as driving alone, walking through rubble, tripping circuit breakers, turning valves, and climbing stairs. Representing some of the most advanced robotics research and development organizations in the world, a dozen teams from the United States and another eleven from Japan, Germany, Italy, Republic of Korea and Hong Kong competed.


More DARPA Robotics Challenge videos

Scooped by Dr. Stefan Gruenwald

Engineers hand 'cognitive' control to underwater robots with advanced AI system


For the last decade, scientists have deployed increasingly capable underwater robots to map and monitor pockets of the ocean to track the health of fisheries, and survey marine habitats and species. In general, such robots are effective at carrying out low-level tasks, specifically assigned to them by human engineers -- a tedious and time-consuming process for the engineers.


When deploying autonomous underwater vehicles (AUVs), much of an engineer's time is spent writing scripts, or low-level commands, in order to direct a robot to carry out a mission plan. Now a new programming approach developed by MIT engineers gives robots more "cognitive" capabilities, enabling humans to specify high-level goals, while a robot performs high-level decision-making to figure out how to achieve these goals.


For example, an engineer may give a robot a list of goal locations to explore, along with any time constraints, as well as physical directions, such as staying a certain distance above the seafloor. Using the system devised by the MIT team, the robot can then plan out a mission, choosing which locations to explore, in what order, within a given timeframe. If an unforeseen event prevents the robot from completing a task, it can choose to drop that task, or reconfigure the hardware to recover from a failure, on the fly.
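
A very reduced version of that kind of high-level planning is a goal list, a time budget, and a rule for choosing what to visit and what to drop. The greedy planner below is purely illustrative (goal names, values, and the value-per-travel-second heuristic are all made up) and is far simpler than the MIT system, but it shows the shape of the decision: order the goals, and shed the ones that no longer fit the budget.

```python
import math

def plan_mission(start, goals, time_budget, speed=1.0):
    """Greedy illustrative planner (not MIT's system): repeatedly pick the goal with
    the best science-value per travel-second that still fits the remaining time
    budget, and drop whatever can no longer be reached in time."""
    pos, t_left, route = start, time_budget, []
    remaining = list(goals)                       # each goal: (name, (x, y), value)
    while remaining:
        travel = {name: math.dist(pos, xy) / speed for name, xy, _ in remaining}
        feasible = [g for g in remaining if travel[g[0]] <= t_left]
        if not feasible:
            break                                 # drop the rest rather than overrun
        name, xy, value = max(feasible, key=lambda g: g[2] / max(travel[g[0]], 1e-9))
        t_left -= travel[name]
        pos = xy
        route.append(name)
        remaining = [g for g in remaining if g[0] != name]
    return route, [g[0] for g in remaining]       # visit order, dropped goals

goals = [("vent_A", (2, 1), 5.0), ("ridge_B", (8, 3), 9.0), ("slope_C", (1, 7), 4.0)]
print(plan_mission(start=(0, 0), goals=goals, time_budget=12.0))
# -> (['vent_A', 'ridge_B'], ['slope_C']): slope_C is dropped to respect the budget
```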


In March, the team tested the autonomous mission-planning system during a research cruise off the western coast of Australia. Over three weeks, the MIT engineers, along with groups from Woods Hole Oceanographic Institution, the Australian Center for Field Robotics, the University of Rhode Island, and elsewhere, tested several classes of AUVs, and their ability to work cooperatively to map the ocean environment.


The MIT researchers tested their system on an autonomous underwater glider, and demonstrated that the robot was able to operate safely among a number of other autonomous vehicles, while receiving higher-level commands. The glider, using the system, was able to adapt its mission plan to avoid getting in the way of other vehicles, while still achieving its most important scientific objectives. If another vehicle was taking longer than expected to explore a particular area, the glider, using the MIT system, would reshuffle its priorities, and choose to stay in its current location longer, in order to avoid potential collisions.

"We wanted to show that these vehicles could plan their own missions, and execute, adapt, and re-plan them alone, without human support," says Brian Williams, a professor of aeronautics and astronautics at MIT, and principal developer of the mission-planning system. "With this system, we were showing we could safely zigzag all the way around the reef, like an obstacle course." The system is similar to one that Williams developed for NASA following the loss of the Mars Observer, a spacecraft that, days before its scheduled insertion into Mars' orbit in 1993, lost contact with NASA.


By giving robots control of higher-level decision-making, Williams says such a system would free engineers to think about overall strategy, while AUVs determine for themselves a specific mission plan. Such a system could also reduce the size of the operational team needed on research cruises. And, most significantly from a scientific standpoint, an autonomous planning system could enable robots to explore places that otherwise would not be traversable. For instance, with an autonomous system, robots may not have to be in continuous contact with engineers, freeing the vehicles to explore more remote recesses of the sea.


"If you look at the ocean right now, we can use Earth-orbiting satellites, but they don't penetrate much below the surface," Williams says. "You could send sea vessels which send one autonomous vehicle, but that doesn't show you a lot. This technology can offer a whole new way to observe the ocean, which is exciting."

Scooped by Dr. Stefan Gruenwald

Biorobotics-inspired eye stabilizes robot’s flight, replaces inertial navigation system


Biorobotics researchers have developed the first aerial robot able to fly over uneven terrain that is stabilized visually without an accelerometer.

Called BeeRotor, it adjusts its speed and avoids obstacles thanks to optic flow sensors inspired by insect vision. It can fly along a tunnel with uneven, moving walls without measuring either speed or altitude. The study was published on February 26 in the journal Bioinspiration & Biomimetics.


Aircraft, ships, and spacecraft currently use a complex inertial navigation system based on accelerometers and gyroscopes to continuously calculate position, orientation, and velocity without the need for external references (known as dead reckoning).


Researchers Fabien Expert and Franck Ruffier at the Institut des Sciences du Mouvement – Etienne-Jules Marey (CNRS/Aix-Marseille Université) decided to create a simpler system, inspired by winged insects. They created BeeRotor, a tethered flying robot able for the first time to adjust its speed and follow terrain with no accelerometer and without measuring speed or altitude, avoiding vertical obstacles in a tunnel with moving walls.


To achieve this, the researchers mimicked the ability of insects to use the passing landscape as they fly. This is known as “optic flow,” the principle you can observe when driving along a road: the view in front is fairly stable, but looking out to either side, the landscape passes by faster and faster, reaching a maximum at an angle of 90 degrees to the path of the vehicle.


To measure optic flow, BeeRotor is equipped with 24 photodiodes (functioning as pixels) distributed at the top and the bottom of its “eye.” This enables it to detect contrasts in the environment as well as their motion. As in insects, the speed at which a feature in the scenery moves from one pixel to another provides the angular velocity of the flow. When the flow increases, this means that either the robot’s speed is increasing or that the distance relative to obstacles is decreasing.


By way of a brain, BeeRotor has three feedback loops: altitude (following the floor or roof), speed (adapting to the size of the tunnel) and stabilization of the eye in relation to the local slope. This enables the robot to always obtain the best possible field of view, independently of its degree of pitch. That allows BeeRotor to avoid very steeply sloping obstacles with no accelerometer and no measurements of speed or altitude.
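
The two ingredients, an optic-flow estimate from neighbouring photodiodes and a feedback loop that holds that flow at a set-point, can be sketched in a few lines. The numbers, gains, and the speed/height model below are assumptions for illustration, not BeeRotor's actual control laws.

```python
import math

def optic_flow(delta_angle_rad, transit_time_s):
    """Angular velocity of a contrast crossing two neighbouring photodiodes:
    the time a feature takes to move from one pixel to the next gives the flow."""
    return delta_angle_rad / transit_time_s             # rad/s

def altitude_step(measured_flow, flow_setpoint, altitude, gain=0.5, dt=0.05):
    """One illustrative feedback step (gain and set-point are invented): ventral flow
    is roughly forward_speed / height, so flow above the set-point means 'too low or
    too fast', and the loop responds by climbing."""
    return altitude + gain * (measured_flow - flow_setpoint) * dt

# a contrast takes 40 ms to cross two pixels 15 degrees apart
print(round(optic_flow(math.radians(15), 0.040), 1))    # ~6.5 rad/s

alt, speed = 1.0, 2.0                                   # start 1 m up, flying at 2 m/s
for _ in range(400):
    alt = altitude_step(measured_flow=speed / alt, flow_setpoint=1.5, altitude=alt)
print(round(alt, 2))                                    # settles near 2 / 1.5 ≈ 1.33 m
```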


BeeRotor suggests a biologically plausible hypothesis to explain how insects can fly without an accelerometer: using cues from optic flow to remain stable via feedback loops. Optic flow sensors also have industrial applications, such as replacing heavy accelerometers in small robots or serving as an ultra-light backup system in the event of a failure on a space mission.

Scooped by Dr. Stefan Gruenwald

Supporting the elderly: A caring robot with ‘emotions’ and memory


Researchers at the University of Hertfordshire have developed a prototype of a social robot that supports independent living for the elderly, working in partnership with their relatives or carers.


Farshid Amirabdollahian, a senior lecturer in Adaptive Systems at the university, led a team of nine partner institutions from five European countries as part of the €4,825,492 project called ACCOMPANY (Acceptable Robotics Companions for Ageing Years).


“This project proved the feasibility of having companion technology, while also highlighting different important aspects such as empathy, emotion, social intelligence as well as ethics and its norm surrounding technology for independent living,” Amirabdollahian said.



Madison & Morgan's curator insight, February 11, 1:31 PM

This article is about a robot that can help the elderly in their daily life. The robot is capable of human emotions and has moral ethics. This shows the technological advances that Europe has made and relates to its economy.

olyvia Schaefer and Rachel Shaberman's curator insight, February 11, 5:09 PM

Europe Arts

Europe has created many inventions, but the most interesting to me is the robot that has emotions and memory. This robot is supposed to help the elderly with their carers and daily life. The Europeans were able to create technology that has empathy, emotions, and social intelligence, even though it is just a robot. The Europeans were able to accomplish something amazing.

ToKTutor's curator insight, February 21, 12:06 PM

Title 5: If a robot can have emotion and memory, can it also be programmed to have instinctive judgment?

Scooped by Dr. Stefan Gruenwald

What Happens to a Society when Robots Replace Workers?


The technologies of the past, by replacing human muscle, increased the value of human effort – and in the process drove rapid economic progress. Those of the future, by substituting for man’s senses and brain, will accelerate that process – but at the risk of creating millions of citizens who are simply unable to contribute economically, and with greater damage to an already declining middle class.


Estimates of general rates of technological progress are always imprecise, but it is fair to say that, in the past, progress came more slowly. Henry Adams, the historian, measured technological progress by the power generated from coal, and estimated that power output doubled every ten years between 1840 and 1900, a compounded rate of progress of about 7% per year. The reality was probably much less. For example, in 1848, the world record for rail speed reached 60 miles per hour. A century later, commercial aircraft could carry passengers at speeds approaching 600 miles per hour, a rate of progress of only about 2% per year.


By contrast, progress today comes rapidly. Consider the numbers for information storage density in computer memory. Between 1960 and 2003, those densities increased by a factor of five million, at times progressing at a rate of 60% per year. At the same time, true to Moore’s Law, semiconductor technology has been progressing at a 40% rate for more than 50 years. These rates of progress are embedded in the creation of intelligent machines, from robots to automobiles to drones, that will soon dominate the global economy – and in the process drive down the value of human labor with astonishing speed.
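
All of these comparisons come from the same compound-growth formula, r = (end/start)^(1/years) − 1. The quick check below reproduces the article's figures approximately (the inputs are round numbers).

```python
def annual_rate(start, end, years):
    """Compound annual growth rate implied by going from start to end over the given years."""
    return (end / start) ** (1 / years) - 1

print(f"{annual_rate(1, 2, 10):.1%}")          # power doubling every decade -> ~7.2%/yr
print(f"{annual_rate(60, 600, 100):.1%}")      # 60 mph rail to ~600 mph aircraft -> ~2.3%/yr
print(f"{annual_rate(1, 5_000_000, 43):.1%}")  # storage density, 1960-2003 -> ~43%/yr average
```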


This is why we will soon be looking at hordes of citizens of zero economic value. Figuring out how to deal with the impacts of this development will be the greatest challenge facing free market economies in this century. If you doubt the march of worker-replacing technology, look at Foxconn, the world’s largest contract manufacturer. It employs more than one million workers in China. In 2011, the company installed 10,000 robots, called Foxbots. Today, the company is installing them at a rate of 30,000 per year. Each robot costs about $20,000 and is used to perform routine jobs such as spraying, welding, and assembly. On June 26, 2013, Terry Gou, Foxconn’s CEO, told his annual meeting that “We have over one million workers. In the future we will add one million robotic workers.” This means, of course, that the company will avoid hiring those next million human workers.


Just imagine what a Foxbot will soon be able to do if Moore’s Law holds steady and we continue to see performance leaps of 40% per year. Baxter, a $22,000 robot that just got a software upgrade, is being produced in quantities of 500 per year. A few years from now, a much smarter Baxter produced in quantities of 10,000 might cost less than $5,000. At that price, even the lowest-paid workers in the least developed countries might not be able to compete.

Tomasz Bienko's curator insight, January 19, 12:29 PM

Above all, machines may replace people as a labor force - but that is partly what the research and the new technologies are for, and we can already see it in the mechanization of individual sectors of the economy (e.g. agriculture). Humans try to simplify their own lives, but may end up eating their own tail. That is probably a nearer problem we will have to face as this technology develops than, say, the more distant prospect of an artificial intelligence rebelling. Given how Moore's Law is revised from year to year, the changes will be observable soon, and rising unemployment may be the first problem we notice in the development of artificial intelligence. A machine will not replace humans in everything, in every job - but perhaps that, too, is only a question of time?

Scooped by Dr. Stefan Gruenwald

Delivery drones test successful in France


If pilot projects from companies like Bizzby and DHL Parcel are any indication, the skies of Europe could soon be buzzing with parcel delivery drones. GeoPost, the express delivery arm of French mail service La Poste, has now revealed that it undertook drone delivery testing at the Centre d'Etudes et d'Essais pour Modèles Autonomes (CEEMA) in September.


As part of its ongoing GeoDrone project, GeoPost partnered with Atechsys to develop an electric delivery drone capable of autonomously transporting a parcel up to dimensions of 40 x 30 x 20 cm (16 x 12 x 8 in) and 4 kg (9 lb) in weight within a 20 km (12 mile) radius. The project is looking at the use of drones to access isolated areas such as mountains, islands and rural areas, as well as providing a means of responding to emergency situations.


Demonstrating the possible use of drones in real world conditions, the test involved automated take-off, flight phase, landing and return to base. Unfortunately, GeoPost hasn't released any specs on the prototype itself but we can tell you that the 3.7 kg (8.2 lb) six-rotor prototype is reported to have successfully transported a 2 kg (4.4 lb) package over a distance of 1,200 m (about 4,000 ft) at the CEEMA site in the south of France.

Be-Bound®'s curator insight, January 5, 3:38 AM

Amazon tried it last year, now the French Mail service and soon many more will follow. The technology and the logistics are mastered, without the shadow of a doubt, however, now the big challenge will be traffic regulation and authorization.

Scooped by Dr. Stefan Gruenwald

A Worm's Mind In A Lego Body: Scientists Map Brain Connectome of C. elegans and Upload it to a Lego Robot


Take the connectome of a worm and transplant it as software in a Lego Mindstorms EV3 robot - what happens next? It is a deep and long-standing philosophical question: are we just the sum of our neural networks? Of course, if you work in AI you take the answer mostly for granted, but until someone builds a human brain and switches it on we really don't have a concrete example of the principle in action.


The nematode worm Caenorhabditis elegans (C. elegans) is tiny and only has 302 neurons. These have been completely mapped and the OpenWorm project is working to build a complete simulation of the worm in software. One of the founders of the OpenWorm project, Timothy Busbice, has taken the connectome and implemented an object oriented neuron program.


The model is accurate in its connections and makes use of UDP packets to fire neurons. If two neurons have three synaptic connections then when the first neuron fires a UDP packet is sent to the second neuron with the payload "3". The neurons are addressed by IP and port number. The system uses an integrate and fire algorithm. Each neuron sums the weights and fires if it exceeds a threshold. The accumulator is zeroed if no message arrives in a 200ms window or if the neuron fires. This is similar to what happens in the real neural network, but not exact.
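
A minimal sketch of that scheme is below: a neuron that sums the integer weights arriving as UDP payloads, fires when the sum crosses a threshold, and resets after firing or after a silent 200 ms window. The addresses, threshold, and downstream weight are placeholders; this illustrates the integrate-and-fire-over-UDP idea, not the OpenWorm code itself.

```python
import socket

THRESHOLD = 5      # arbitrary firing threshold for the sketch
WINDOW_S = 0.2     # accumulator is zeroed after 200 ms without input

def neuron(port, downstream=None, weight=b"2"):
    """One integrate-and-fire neuron: sums the integer weights arriving as UDP
    payloads (e.g. b"3" for three synaptic connections), fires and forwards its
    own (made-up) weight when the sum exceeds THRESHOLD, and resets after firing
    or after a silent 200 ms window."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    sock.settimeout(WINDOW_S)
    acc = 0
    while True:
        try:
            payload, _ = sock.recvfrom(16)
            acc += int(payload.decode())
        except socket.timeout:
            acc = 0                      # no message within the window: reset to zero
            continue
        if acc > THRESHOLD:
            print(f"neuron on port {port} fired")
            if downstream:
                sock.sendto(weight, downstream)
            acc = 0

# e.g. run neuron(9001, downstream=("127.0.0.1", 9002)) in one process, then spike it:
#   s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   for _ in range(2): s.sendto(b"3", ("127.0.0.1", 9001))
```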

The software works with sensors and effectors provided by a simple LEGO robot. The sensors are sampled every 100ms. For example, the sonar sensor on the robot is wired as the worm's nose. If anything comes within 20cm of the "nose" then UDP packets are sent to the sensory neurons in the network.


The same idea is applied to the 95 motor neurons but these are mapped from the two rows of muscles on the left and right to the left and right motors on the robot. The motor signals are accumulated and applied to control the speed of each motor.  The motor neurons can be excitatory or inhibitory and positive and negative weights are used. 


And the result? It is claimed that the robot behaved in ways that are similar to observed C. elegans. Stimulation of the nose stopped forward motion. Touching the anterior and posterior touch sensors made the robot move forward and back accordingly. Stimulating the food sensor made the robot move forward.


More Information: The Robotic Worm (Biocoder pdf - free on registration)
Scooped by Dr. Stefan Gruenwald

Autonomous, human-sized security robots are almost here


As the sun set on a warm November afternoon, a quartet of five-foot-tall, 300-pound shiny white robots patrolled in front of Building 1 on Microsoft’s Silicon Valley campus. Looking like a crew of slick Daleks imbued with the grace of Fred Astaire, they whirred quietly across the concrete in different directions, stopping and turning in place so as to avoid running into trash cans, walls, and other obstacles.

The robots managed to appear both cute and intimidating. This friendly-but-not-too-friendly presence is meant to serve them well in jobs like monitoring corporate and college campuses, shopping malls, and schools.


Knightscope, a startup based in Mountain View, California, has been busy designing, building, and testing the robot, known as the K5, since 2013. Seven have been built so far, and the company plans to deploy four before the end of the year at an as-yet-unnamed technology company in the area. The robots are designed to detect anomalous behavior, such as someone walking through a building at night, and report back to a remote security center.


“This takes away the monotonous and sometimes dangerous work, and leaves the strategic work to law enforcement or private security, depending on the application,” Knightscope cofounder and vice president of sales and marketing Stacy Stephens said as a K5 glided nearby.


In order to do the kind of work a human security guard would normally do, the K5 uses cameras, sensors, navigation equipment, and electric motors—all packed into its dome-shaped body with a big rechargeable battery and a computer. There are four high-definition cameras (one on each side of the robot), a license-plate recognition camera, four microphones, and a weather sensor (which looks like a DVD-player slot) for measuring barometric pressure, carbon dioxide levels, and temperature. The robots use Wi-Fi or a wireless data network to communicate with each other and with people who can remotely monitor its cameras, microphones, and other sources of data.

Scooped by Dr. Stefan Gruenwald

Prototyping a biological drone, made from bacteria and fungi that melts away after use


What's stealthier than an ordinary drone? One that can disintegrate when it needs to, in order to destroy evidence of its spying activities. A team of researchers from various educational institutions and NASA Ames Research Center has developed a biodegradable drone made of mycelium (or the vegetative part of fungi), which recently completed its first flight. According to Lynn Rothschild of NASA Ames, once the drone, say, self-destroys by diving into a puddle, "No one would know if you'd spilled some sugar water or if there'd been an airplane there."


A New York company called Ecovative Design grew mycelia into a custom drone-shaped chassis. Unfortunately, some parts of the drone just can't be replaced with biodegradable materials for now, though the team tried to stay true to the idea and used silver nanoparticle ink (which can disintegrate along with the chassis) to print the device's circuits. For the test flight earlier this month, the team had to use propellers, controls and batteries taken from an ordinary quadcopter, but that might change in the future. You can read all about the development process on the scientists' website, where you can also download some 3D printable files of a few drone chassis concepts.
