The future of drone surveillance is coming in a swarm of bug-sized flying spies.
Forget the roachbots, the swarm of MIT humanoid robots dancing in sync, and the "disposable" quarter-sized Kilobots that are "cheap enough to swarm in the thousands"; think instead of DARPA-style tiny insect cyborg drones "designed to go places that soldiers cannot," working as spies or as swarm weapons. Is this a mosquito micro air vehicle (MAV)?
Alan Lovejoy wrote, "Such a device could be controlled from a great distance and is equipped with a camera, microphone. It could land on you and then use its needle to take a DNA sample with the pain of a mosquito bite. Or it could inject a micro RFID tracking device under your skin." While DNA-sucking, RFID-chip-injecting mosquito drones are currently a bunch of bunk, a Bing image search shows a multitude of MAVs that aren't simply CGI mockups.
This scientific field is called universal artificial intelligence, with AIXI being the resulting super-intelligent agent.
The goal of AIXI is to maximise its reward over its lifetime – that's the planning part.
In summary, every interaction cycle consists of observation, learning, prediction, planning, decision, action and reward, followed by the next cycle.
If you're interested in exploring further, AIXI integrates numerous philosophical, computational and statistical principles:
Ockham's razor (simplicity) principle for model selection
Epicurus' principle of multiple explanations as a justification of model averaging
Bayes' rule for updating beliefs
Turing machines as a universal description language
Kolmogorov complexity to quantify simplicity
Solomonoff's universal prior
Bellman equations for sequential decision making
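To make the interplay of these principles concrete, here is a toy Python sketch. It is a loose illustration only, not AIXI itself (which is incomputable): the two candidate environment models, their complexities, and their rewards are all invented. The sketch weights models by a simplicity prior (Ockham, Kolmogorov, Solomonoff), averages their predictions (Epicurus), discards models refuted by observation (Bayes), and picks the action with the highest expected reward over a short lookahead (Bellman).

```python
from itertools import product

# Two invented toy "environments": each maps an action to a reward.
# The complexity number stands in for program length (Kolmogorov complexity).
MODELS = {
    "always_a": {"complexity": 2, "reward": lambda a: 1.0 if a == "a" else 0.0},
    "always_b": {"complexity": 5, "reward": lambda a: 1.0 if a == "b" else 0.0},
}

def prior(model):
    # Solomonoff-style simplicity prior: weight 2^-complexity (Ockham's razor).
    return 2.0 ** -MODELS[model]["complexity"]

def expected_reward(action, weights):
    # Epicurus: keep every surviving model; average their predictions.
    total = sum(weights.values())
    return sum(w * MODELS[m]["reward"](action) for m, w in weights.items()) / total

def bayes_update(weights, action, observed_reward):
    # Bayes' rule (degenerate form): keep only models consistent with the data.
    consistent = {m: w for m, w in weights.items()
                  if MODELS[m]["reward"](action) == observed_reward}
    return consistent or weights

def plan(weights, horizon, actions=("a", "b")):
    # Brute-force lookahead over action sequences (Bellman-style planning);
    # return the first action of the best sequence.
    best = max(product(actions, repeat=horizon),
               key=lambda seq: sum(expected_reward(a, weights) for a in seq))
    return best[0]

weights = {m: prior(m) for m in MODELS}
print(plan(weights, horizon=2))  # -> "a": the simpler model dominates the mixture
```

The simpler model gets weight 2^-2 versus 2^-5, so its prediction dominates until observations refute it, at which point `bayes_update` shifts all belief to the alternative.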
Biohacker and transhumanist Tim Cannon may be the first human to implant a computer chip capable of transmitting biometric data to an Android device.
During the Global Futures 2045 International Congress in New York last June, Kurzweil laid out his predictions of the march towards singularity in detail.
“We’re going to become increasingly non-biological to the point where the non-biological part dominates and the biological part is not important any more. In fact the non-biological part – the machine part – will be so powerful it can completely model and understand the biological part,” Kurzweil said. “So even if that biological part went away it wouldn’t make any difference.”
While the push towards immortality is praised by some, many see a much darker outcome, with such future technologies controlled by a wealthy few as average humans become increasingly irrelevant to the “transhumanist utopia.”
“While everyone would welcome some of the technological advancements predicted by Kurzweil, most notably the virtual elimination of all diseases, his fixation with cheating death by achieving technological singularity has several dark spiritual and practical overtones that have not been properly debated,” Infowars writer Paul Joseph Watson notes. “Moral considerations are once again being cast aside in the feverish pursuit of technological progress at all costs.”
The U.S. government now plans to spend $70 million over the next five years on the Systems-Based Neurotechnology for Emerging Therapies project, which aims to develop a surgically implanted brain pacemaker that will monitor the mental health and brainwaves of soldiers and veterans in real time.
The following is the opinion of the author only and does not represent an official position of H+...
Transhumanism rejects the idea of a fixed and unchanging human nature. We observe that what we call our “self” is in part a social construct that exists outside of the body, even though the brain is the seat of intelligence and of everything that makes us “us.” Since we can now alter the architecture of our bodies and brains, we can become more. Already, millions of people are electronic cyborgs, using technology to keep themselves alive or to see and hear. We also consider that the elements that make humans special (intelligence, consciousness, and the qualia of experience) arise in other living beings and might also arise in man-made artifacts: artificial intelligences and robots.
A new breed of hobbyists, scientists, and entrepreneurs are working on echolocation implants, brain-controlled software programs, and even cybernetic rats. Their experiments will change the future of tech.
Would you give your brain a jolt if a Harvard scientist said it could make you smarter, more creative and less depressed?
This couldn’t possibly be a good idea. On Friday the 13th of September, in an old brick building on 13th Street in Boston’s Charlestown neighborhood, a pair of electrodes was attached to my forehead, one over my brain’s left prefrontal cortex, the other just above my right eye socket. I was about to undergo transcranial direct-current stimulation, or tDCS, an experimental technique for delivering extremely low dose electrical stimulation to the brain. Using less than 1 percent of the electrical energy necessary for electroconvulsive therapy, powered by an ordinary nine-volt battery, tDCS has been shown in hundreds of studies to enhance an astonishing, seemingly implausible variety of intellectual, emotional and movement-related brain functions. And its side effects appear limited to a mild tingling at the site of the electrode, sometimes a slight reddening of the skin, very rarely a headache and certainly no seizures or memory loss. Still, I felt more than a bit apprehensive as I prepared to find out if a little bit of juice could amp up my cognitive reserves and make me, in a word, smarter.
The first modern experiments with tDCS came in fits and starts. In 1981, Niels Birbaumer, a neuroscientist at the University of Tübingen, Germany, reported that by applying extremely low doses of direct-current electricity — one-third of a milliamp, not enough to power a hearing aid — to the heads of healthy volunteers, he could speed their response on a simple test of reaction time. The Italian neurophysiologist Alberto Priori began his own experiments in 1992, applying just a tiny bit more electricity, about half a milliamp. He found that enough of the electricity crossed through volunteers’ skulls — electrons flowing from the cathodal electrode to the anodal electrode — to cause brain cells near the anodal electrode to become excited. Despite repeating the experiment multiple times to be sure of the results, it took Priori six years to get his findings published in a scientific journal, in 1998. As he told me, “People kept telling me it can’t be true, it’s too easy and simple.”
An interview with Eric Sadin, a philosopher who specializes in the evolution of our relationship with the digital world.
A writer and philosopher, Eric Sadin is one of the few French intellectuals grappling with the civilizational change brought about by the digitization of our world. His latest book, l’Humanité augmentée (1), won the prize for the “most influential essay of the year on the digital” at the 2013 Hub Forum. In it, he describes the birth of a human-interface connected to an ambient artificial intelligence…
But when the CEO of Google says, “We want to become the third hemisphere of your brain,” that’s worrying, isn’t it?
Yes and no. Google belongs to the transhumanist current that seeks to augment humanity, to repair our original deficiencies, to improve our physical and cognitive capacities. For Google, Apple, IBM and the others, the goal is not to dominate us Big Brother-style, but to monetize technological mastery by favoring the design of intelligent agents. We must once and for all abandon the binary opposition between technophiles and technophobes. These are complex times. The task today is to grasp, in each new situation, the opportunities that open up as well as the risks that emerge. That is the work of “multilayered cartography” I try to develop in my books.
Ever wished you could unlock doors, turn on your lights, or log into your computer with a simple swipe of your hand? Amal Graafstra does just that as one of the first and most well-known "do-it-yourself" RFID (radio-frequency identification) implantees in the world. In this talk, Amal talks about his journey as a pioneer in RFID implementation and what you should know about biohacking.
Using simple radio signals, MIT researchers can pinpoint someone's location -- through a wall -- with near exact precision.
Massachusetts Institute of Technology researchers have developed a device that can see through walls and pinpoint a person with incredible accuracy. They call it the "Kinect of the future," after Microsoft's Xbox 360 motion-sensing camera.
Shown publicly this week for the first time, the project from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) used three radio antennas spaced about a meter apart and pointed at a wall. A desk cluttered with wires and circuits generated and interpreted the radio waves. On the other side of the wall, a single person walked around the room, and the system represented that person as a red dot on a computer screen. The system tracked the movements with an accuracy of plus or minus 10 centimeters, about the width of an adult hand.
Fadel Adib, a Ph.D. student on the project, said that gaming could be one use for the technology, but that localization is also very important. He noted that conventional Wi-Fi localization, which determines someone's position from Wi-Fi signals, requires the user to carry a transmitter such as a smartphone.
"What we're doing here is localization through a wall without requiring you to hold any transmitter or receiver [and] simply by using reflections off a human body," he said. "What is impressive is that our accuracy is higher than even state of the art Wi-Fi localization."
Although some people might find the idea of love with a machine repulsive, experts predict that as the technology advances and robots become more human-like, we will view our silicon cousins in a friendlier light.
Jason Nemeth, in his essay "Should Robots Feel," argues that love-companion robots will be practical in the future and may one day satisfy all our intimate desires. Nemeth is not sure whether human/robot love will enjoy higher success rates than love between two humans, but he says tomorrow's robots will unlock the possibilities, and humans eager to experiment will take it from there.
Carnegie Mellon's Hans Moravec believes that by the late 2020s we will create robots in humanoid form. These 'bots would 'drink wine' for fuel, breathe air like us, and appear amazingly human-like.
Design tricks like these, along with soft 'nanoskin' will make tomorrow's 'bots seem uncannily human, encouraging us to perceive them as friends. Author Ray Kurzweil says tomorrow's 'droids could quickly learn to flesh out our positive feelings, providing an addictive allure almost impossible for us to resist.
David Levy, author of Love and Sex with Robots, predicts that as robots become more sophisticated, growing numbers of adventurous humans will enter into intimate relationships with these intelligent 'bots.
Consequently, computer scientists are taking an ecological perspective by looking at the new environment in terms of a competitive population of adaptive trading agents.
“Even though each trading algorithm/robot is out to gain a profit at the expense of any other, and hence act as a predator, any algorithm which is trading has a market impact and hence can become noticeable to other algorithms,” said Neil Johnson, a professor of physics at the College of Arts and Sciences at the University of Miami (UM) and lead author of the new study. “So although they are all predators, some can then become the prey of other algorithms depending on the conditions. Just like animal predators can also fall prey to each other.”
When there’s a normal combination of prey and predators, he says, everything is in balance. But once predators are introduced that are too fast, they create extreme events.
"What we see with the new ultrafast computer algorithms is predatory trading,” he says. “In this case, the predator acts before the prey even knows it's there."
Johnson describes this new ecology as one consisting of mobs of ultrafast bots that frequently overwhelm the system. When events last less than a second, the financial world transitions to a new one inhabited by packs of aggressively trading algorithms.
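The predator-prey picture can be illustrated with a toy Lotka-Volterra simulation; the parameters below are invented and bear no relation to the study's actual market data. With a moderate "attack rate," the two populations oscillate in balance; a predator that is too fast drives the prey population into a crash, the analogue of Johnson's sub-second extreme events.

```python
# Toy Lotka-Volterra dynamics, integrated with a simple Euler step.
# "attack_rate" plays the role of predator speed/aggressiveness.
def simulate(attack_rate, steps=2000, dt=0.01):
    prey, predators = 10.0, 5.0
    min_prey = prey
    for _ in range(steps):
        d_prey = prey * (1.0 - attack_rate * predators)        # growth minus predation
        d_pred = predators * (0.5 * attack_rate * prey - 1.0)  # predation minus death
        prey = max(prey + dt * d_prey, 0.0)
        predators = max(predators + dt * d_pred, 0.0)
        min_prey = min(min_prey, prey)
    return min_prey  # the deepest trough the prey population hits

print(simulate(attack_rate=0.1))  # balanced: prey never collapse
print(simulate(attack_rate=2.0))  # "too fast": prey crash toward zero
```

The interesting quantity is the minimum of the prey population: in the balanced regime it stays well above zero across the oscillation, while in the fast-predator regime it collapses almost immediately.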
‘The movement of living creatures triggers sensations, emotions and communication,’ says Bob de Graaf. Fascinated with autonomous movement, he has created ‘Species of Illumination’; two lights that act and react like autonomous creatures. Wallace responds to changes in light intensity in its environment and brings light to the darkest corners. Darwin searches for sunlight to charge its battery during the daytime, and in the evening wanders around the house, ‘accompanying’ us with its light. The interaction and emotional relationship they bring contribute to our well being. They behave like pets. They are lively lights you can play with.
Epson provides some more details in a press release:
"Epson's autonomous dual-arm robot is able to accurately recognize the position and orientation of objects in three-dimensional space. The two robot arms are equipped with newly developed force sensors that give the robot human-like control over the force exerted by the arms, enabling the robots to transport and assemble objects without damaging them. A multipurpose end effector can grasp, clamp, and insert objects of various shapes and sizes. The robot can be made to perform a wide range of tasks simply by teaching it objects and task scenarios."
If all that sounds like what we've heard about other robots like Rethink Robotics' Baxter, ABB's Frida, or Kawada Industries' Nextage, it's no coincidence. That's definitely one of the hottest trends in industrial robotics. However, there remains the rather important issue of price, which Epson is not yet ready to disclose. If the company's dual-arm robot has a competitive price, Epson—already a big player in the SCARA market—is in a strong position to enter this new era of manufacturing automation.
The melding of man and machine is not only an imaginable future; it may be the key to sustainable and healthy living on an overcrowded planet.
Inventor and futurist Ray Kurzweil predicts that before mid-century the exponential acceleration of information technologies, robotics, medical science, and artificial intelligence will result in a “singularity," a point at which humans will essentially merge with their technology. Such an event may seem implausible, but discoveries of how technology and humans really interact are being made every day, leading one to the conclusion that it’s not an unimaginable future--and, in fact, it may be the key to sustainable living on an increasingly overcrowded planet.
Heart pacemakers and artificial hips already demonstrate the seeds of Kurzweil’s vision. Innovations like Google Glass could get more real-time information a lot closer to us very soon. But could we use such technology to measure what our bodies actually need and then design foods and homes that maximize our limited resources to sustain a growing population?
The device in question could also be configured to transmit commands to your phone, also useful in a noisy environment and when one's hands are full. Power for the device could be supplied by a variety of methods, including "solar panel technology, capacitive technology, nanotechnology, or electro-mechanical technology."
The tattoo could also be programmed to respond to a variety of audio sources, including "a user's vocal intonation ... a specific word or words ... a melody ... or a harmonic tone/vibration," and in response to those inputs send a variety of notifications or commands to the user's mobile device.
Also, one envisioned embodiment wouldn't be an electronic throat tattoo at all, but instead "a collar or band that would be worn around the throat [of] a user." Even better.
Those ideas may be all well and good, but others go downhill from there. The filing also describes the electronic skin tattoo as having a display and a user interface. Needless to say, such a display and UI wouldn't be of much use unless one had a mirror handy to view it. Perhaps the UI text could be displayed in reverse.
For graduate student Deven Vignali of Libby, the three-dimensional data visualization center at Montana Tech has made his life easier.
He’s using the $60,000 state-of-the-art software and tracking system to conduct research for his master’s thesis, demonstrating that passive seismic acquisition techniques can be used to monitor geothermal resources such as hot springs.
It’s the fastest high-performance computing system within Montana academia, said Jeff Braun, head of the Tech computer engineering and software engineering departments.
Consider Vignali’s perspective:
“It reduced my simulation model run time from about 18 hours to about three hours, so it positively affected my project,” said Vignali.
The system has 10 teraflops of theoretical speed and 1.5 terabytes of memory. Translation: 200 to 400 times the memory of a typical home laptop computer.
Chris Zhang is building emotionally savvy “virtual partners” to help rehabilitate patients by teaching robots how to mimic and respond to human emotion.
Although the project started in 2005, Zhang, a professor of mechanical engineering at the University of Saskatchewan, has been in charge since 2007. The work is funded by the Natural Sciences and Engineering Research Council and there are currently four other members working on the project.
The goal of the project is to design machines that can analyze human emotion. Cameras and sensors track emotions in conjunction with other hardware such as a joystick and an ocular movement tracker. This hardware records information such as blood pressure, heartbeat, skin conductivity and eye movement.
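As a hypothetical illustration of how such sensor streams might be fused (the project's actual models are not public; the baseline values, weights, and example readings below are all invented), one could map physiological deviations from a resting baseline to a crude arousal score:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    heart_rate_bpm: float
    skin_conductance_us: float  # skin conductivity, microsiemens
    systolic_bp_mmhg: float     # blood pressure (systolic)

# Invented resting baseline for a hypothetical patient.
BASELINE = Reading(70.0, 2.0, 120.0)

def arousal(r: Reading, base: Reading = BASELINE) -> float:
    """Weighted, normalized deviation from baseline, clipped to [0, 1]."""
    score = (0.4 * (r.heart_rate_bpm - base.heart_rate_bpm) / 50.0
             + 0.4 * (r.skin_conductance_us - base.skin_conductance_us) / 10.0
             + 0.2 * (r.systolic_bp_mmhg - base.systolic_bp_mmhg) / 40.0)
    return min(max(score, 0.0), 1.0)

print(arousal(Reading(72.0, 2.1, 118.0)))   # calm reading: near 0
print(arousal(Reading(110.0, 9.0, 150.0)))  # stressed reading: much higher
```

A real system would learn such weightings per patient and combine them with the camera and eye-movement channels, but the shape of the problem, many noisy physiological signals condensed into an emotional estimate, is the same.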
Microsoft co-founder Paul Allen has been pondering artificial intelligence since he was a kid. In the late '60s, eerily intelligent computers were everywhere, whether it was 2001's HAL or Star Trek's omnipresent Enterprise computer. As Allen recalls in his memoir, "machines that behaved like people, even people gone mad, were all the rage back then." He would tag along to his father's job at the library, overwhelmed by the information, and daydream about "the sci-fi theme of a dying or threatened civilization that saves itself by finding a trove of knowledge." What if you could collect all the world's information in a single computer mind, one capable of intelligent thought, and be able to communicate in simple human language?
Forty years later, with nearly 9 billion dollars to Allen's name, that idea is beginning to seem like more than just fantasy. Much of the technology is already here. We talk to our phones and aren't surprised when they talk back. A web search can answer nearly any question, undergirded by a semantic understanding of the structure of online information. But while the tools are powerful, the processes behind them are still fairly basic. Siri only understands a small subset of questions, and she can't reason, or do anything you might call thinking. Even Watson, IBM's Jeopardy champ, can only handle simple questions with unambiguous phrasing. Already, Google is looking to the Star Trek computer as a guiding light for its voice search — but it's still a long way off. If technology is going to get there, we'll need computers that are better at talking and, more crucially, better at reasoning.
Sociable robots come equipped with the very abilities that humans have evolved to ease our interactions with one another: eye contact, gaze direction, turn-taking, shared attention. They are programmed to learn the way humans learn, by starting with a core of basic drives and abilities and adding to them as their physical and social experiences accrue. People respond to the robots’ social cues almost without thinking, and as a result the robots give the impression of being somehow, improbably, alive.