A superlattice is essentially a crystal in which ordinary atoms have been swapped out for artificial ones. Like normal crystals, superlattices can be grown over a period of time, as nanocrystals in a solution lose energy and fall into the ordered pattern of the crystal structure. It had been assumed that this process must take fairly long, but the researchers behind the current paper discovered that this is not necessarily the case.
Recently, a team from Stanford University and Sandia National Laboratories took a different approach to brain-like computing systems. Rather than simulating a neural network with software, they made a device that behaves like the brain’s synapses—the connection between neurons that processes and stores information—and completely overhauled our traditional idea of computing hardware. The artificial synapse, dubbed the “electrochemical neuromorphic organic device (ENODe),” may one day be used to create chips that perform brain-like computations with minimal energy requirements. Made of flexible, organic material compatible with the brain, it may even lead to better brain-computer interfaces, paving the way for a cyborg future.
Scientists say it's possible to build a new type of self-replicating computer that replaces silicon chips with processors made from DNA molecules, and it would be faster than any other form of computer ever proposed - even quantum computers. Called a nondeterministic universal Turing machine (NUTM), it's predicted that the technology could execute all possible algorithms at once by taking advantage of DNA's ability to replicate almost perfect copies of itself over billions of years. The basic idea is that our current electronic computers are based on a finite number of silicon chips, and we're fast approaching the limit for how many we can actually fit in our machines. To address this limitation, researchers are currently working on making quantum computers a reality - super-powerful devices that replace the bits of electronic computers with quantum-entangled particles called qubits. Unlike regular bits that can only take on the form of 1 or 0 in the binary code, qubits can take the form of 0, 1, or a superposition of the two simultaneously, which allows them to perform many different calculations at once.
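To make the bit-versus-qubit point above concrete, here is a minimal numeric sketch (my own illustration, not from the article): a qubit's state is a pair of amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1, and a register of n qubits carries 2^n amplitudes at once.

```python
import numpy as np

# Toy illustration (not from the article): a classical bit holds exactly one
# value, while a qubit's state is a pair of complex amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1, giving the probabilities of measuring 0 or 1.

classical_bit = 1  # always exactly 0 or 1

# Equal superposition of |0> and |1>
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
qubit = np.array([alpha, beta], dtype=complex)

probabilities = np.abs(qubit) ** 2   # ~[0.5, 0.5]
print("P(measure 0) =", probabilities[0])
print("P(measure 1) =", probabilities[1])

# n qubits span 2**n amplitudes at once, which is the sense in which such a
# machine can explore many computational branches in parallel.
n = 3
register = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # uniform superposition
print("Amplitudes tracked for", n, "qubits:", register.size)
```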
Here’s how the system works: The tiny implant, about the size of a baby aspirin, is inserted into the motor cortex, the part of the brain responsible for voluntary movement. The implant’s array of electrodes record electrical signals from neurons that “fire” as the person thinks of making a motion like moving their right hand—even if they’re paralyzed and can’t actually move it. The BrainGate decoding software interprets the signal and converts it into a command for the computer cursor.
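BrainGate's actual decoding algorithms are not described here, so the following is only a hypothetical sketch of the general idea: a linear decoder that maps per-electrode firing rates to a two-dimensional cursor velocity. The electrode count and weight values are placeholder assumptions, not details from the article.

```python
import numpy as np

# Hypothetical sketch of the decoding step described above: a linear decoder
# that maps recorded firing rates to a 2D cursor velocity command.

rng = np.random.default_rng(0)

N_ELECTRODES = 96            # typical microelectrode-array size (assumption)
W = rng.normal(size=(2, N_ELECTRODES)) * 0.01  # decoder weights, normally fit from calibration data

def decode_cursor_velocity(firing_rates: np.ndarray) -> np.ndarray:
    """Map a vector of per-electrode firing rates (Hz) to (vx, vy)."""
    return W @ firing_rates

# Simulated firing rates while the user imagines moving their right hand
firing_rates = rng.poisson(lam=20, size=N_ELECTRODES).astype(float)
vx, vy = decode_cursor_velocity(firing_rates)
print(f"cursor velocity command: ({vx:.2f}, {vy:.2f})")
```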
Azuma Hikari is a 58-cm hologram and virtual assistant created by Japanese tech company Gatebox that comes with a $2,600 price tag. Azuma is built with a machine learning algorithm that helps her recognize her "master's" voice, learn his sleeping habits, and send him messages through Gatebox's native chat app.
Ten to 15 years from now, hardware/software systems using those sorts of neuroheadsets could assist me by recognizing the nouns I've thought about in the past few minutes. If it replayed the topics of my recent thoughts, I could retrace my steps and remember what thought triggered my most recent thought. With more sophistication, perhaps a writer could wear an inexpensive neuroheadset, imagine characters, an environment and their interactions. The computer could deliver the first draft of a short story, either as a text file or even as a video file showing the scenes and dialogue generated in the writer's mind.
"Evidence suggesting that our universe is tailor-made for habitable planets — ones that could reasonably support life — continues to pile up. And as humanity flings cosmos-observing technologies further into the sky, we’re developing a deeper appreciation of just how likely it is that life could have emerged anywhere.
So, why is our universe so void of activity when it seems suitable for beings who might create technologies like ours to emerge?
Many have speculated on the Fermi paradox, the name given to this head-scratching question, but last year I profiled one promising theory called the transcension hypothesis.
Proposed by futurist John Smart, the theory suggests that technological civilizations — and the Fermi paradox assumes there would be many by now — don’t colonize outer space (and thus flood the cosmos with signatures we could easily find). Instead, they move toward inner space by building vast digital realities on computers much smaller than we can detect."
For years now, scientists have been working to make cells into computers. It’s a logical goal; cells store information in something roughly approximating memory, they behave according to the strict, rules-based expression of their programming in response to stimuli, and they can carry out operations with astonishing speed. Each cell contains enough physical complexity to theoretically be quite a powerful computing unit all on its own, but each is also small enough to pack by the millions into tiny physical spaces. With a fully realized ability to program cell behavior as reliably as we do computer behavior, there’s no telling what biological computing could accomplish. Now, researchers from MIT have taken a step toward this possible future, with cellular machines that can perform simple computational operations and store, then recall, memory. In principle, they provide the sort of control we’d need to design and build real cellular computers, but they might well revolutionize cell biology long before that future comes about.
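As a purely conceptual illustration (not the MIT group's actual genetic circuit), the toy model below treats a cell as a tiny state machine: it computes a simple logic operation on two chemical inputs and latches the result so it can be recalled later, which is roughly what "store, then recall, memory" means here.

```python
# Purely conceptual sketch, not the MIT group's actual circuit: a "cell"
# modeled as a tiny state machine that computes AND over two chemical inputs
# and latches (stores) the result so it can be read back later.

class ToyCell:
    def __init__(self):
        self.memory = False  # e.g. a flipped DNA segment: off by default

    def sense(self, signal_a: bool, signal_b: bool) -> None:
        """Compute AND of two inducers; once set, the memory stays set (latch)."""
        if signal_a and signal_b:
            self.memory = True

    def recall(self) -> bool:
        """Read the stored state back out, e.g. as a fluorescent reporter."""
        return self.memory

cell = ToyCell()
cell.sense(signal_a=True, signal_b=False)
print(cell.recall())   # False: only one input seen so far
cell.sense(signal_a=True, signal_b=True)
print(cell.recall())   # True: both inputs occurred, and the cell remembers it
```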
Musk said he thinks there’s a “one in billions” chance that we’re not living in a computer simulation right now, meaning Musk is a firm believer in the hypothesis that a superintelligent artificial intelligence created the universe as we know it. “The strongest argument for us being in a simulation, probably being in a simulation, is the following: 40 years ago we had Pong, two rectangles and a dot,” Musk said. “That is what games were. Now, 40 years later, we have photorealistic 3D simulations with millions of people playing simultaneously, and it’s getting better every year. And soon we’ll have virtual reality and augmented reality. If you assume any rate of improvement at all, the games will become indistinguishable from reality.”
"Initial tests have shown that they can successfully encode and recover 100 percent of binary data from synthetic DNA.
Researchers have previously succeeded in storing binary data in DNA base pairs when they wrote mp3 files onto DNA with a storage density of 2.2 petabytes in a single gram."
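For a sense of how binary data maps onto DNA at all, here is a deliberately simplified sketch. It is not the encoding scheme used in the studies above, which add error correction and avoid long runs of the same base; it simply assigns two bits per base and shows a round trip with full recovery of the original bytes.

```python
# Toy sketch of the core idea: binary data mapped onto DNA bases and back.
# Real schemes (such as the mp3 work mentioned above) add error correction and
# avoid long runs of the same base; this uses a plain 2-bits-per-base code
# purely for clarity.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"hello"
strand = encode(message)
assert decode(strand) == message   # 100 percent recovery, as in the tests above
print(strand)                      # CGGACGCCCGTACGTACGTT
```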
Google Image search, instead of returning photos it indexed from millions of sites, could take a query and, based on what it knows about good photography and about you, build an entirely new photo from scratch that perfectly matches your request. Instead of searching for a photo, it would create it.
The thought experiment driving the evolution and development of this idea for over 8 years is: “if I were a sentient entity at the scale of our planet, what would I need in order to become functionally self-aware?” The subtext to that inquiry is: “what evidence, if any, is there that points to the possibility that a higher-order entity (existing governments, NGOs and corporations notwithstanding) may be emerging from the sum of human interactions?”
In my last post, I added more future scenarios to this visual describing the complexity and impact of our emerging future. The one piece left unfinished was the expansion of innovation accelerators to include emerging and future accelerators.
Transistors based on graphene ribbons could result in much faster, more efficient computers and other devices. Researchers use a magnetic field to control current flow.
Here’s what happens. You are lying on an operating table, fully conscious, but rendered otherwise insensible, otherwise incapable of movement. A humanoid machine appears at your side, bowing to its task with ceremonial formality. With a brisk sequence of motions, the machine removes a large panel of bone from the rear of your cranium, before carefully laying its fingers, fine and delicate as a spider’s legs, on the viscid surface of your brain. You may be experiencing some misgivings about the procedure at this point. Put them aside, if you can. You’re in pretty deep with this thing; there’s no backing out now. With their high-resolution microscopic receptors, the machine fingers scan the chemical structure of your brain, transferring the data to a powerful computer on the other side of the operating table. They are sinking further into your cerebral matter now, these fingers, scanning deeper and deeper layers of neurons, building a three-dimensional map of their endlessly complex interrelations, all the while creating code to model this activity in the computer’s hardware. As the work proceeds, another mechanical appendage – less delicate, less careful – removes the scanned material to a biological waste container for later disposal. This is material you will no longer be needing. At some point, you become aware that you are no longer present in your body. You observe – with sadness, or horror, or detached curiosity – the diminishing spasms of that body on the operating table, the last useless convulsions of a discontinued meat.
Today’s extraordinary rate of exponential growth may do much more than just disrupt industries. It may actually give birth to a new species, reinventing humanity over the next 30 years. I believe we’re rapidly heading towards a human-scale transformation, the next evolutionary step into what I call a “Meta-Intelligence,” a future in which we are all highly connected—brain to brain via the cloud—sharing thoughts, knowledge and actions. In this post, I’m investigating the driving forces behind such an evolutionary step, the historical pattern we are about to repeat, and the implications thereof.
Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks.
Researchers have developed an AI system that, when tested on a small subset of pictures, was able to predict criminality with 89.5 percent accuracy. Such technology raises ethical and legal questions about its application, joining a long list of concerns about AI in general.
Humanity took a step forward into that future recently, when Israeli scientists revealed that they’ve developed a new type of brain-machine interface, which for the first time has allowed a human operator to control a nanobot implanted inside the body of a living creature (in this case a cockroach), simply by using his thoughts.
We are talking about the implications of intelligence that can make refinements to itself over a time-course that bears no relationship to what we experience as apes. We are talking about a system that can make changes to its own source code to become better and better at learning and more and more knowledgeable, and, if we give it access to the Internet, it has instantaneous access to all human and machine knowledge. It does thousands of years of work every day of our lives--thousands of years of equivalent human intellectual work. Our intuitions completely fail to capture how immensely powerful such a system would be, and there is no reason to think this isn't possible.
Soon, you may be able to control a computer screen with just your eyes. Eyefluence, a Silicon Valley-based startup, is working on technology for hands-free navigation using augmented reality glasses. The glasses are equipped with cameras that can assess where you’re looking at any given time, and users can click on icons with a glance.
Researchers at Leibniz University of Hannover in Germany are developing a way for robots to feel pain, in the hope that doing so will enable them to better protect humans.
Google's patent, published Thursday, describes an implantable contact lens that can correct vision for good.
"The nanostructures exhibit properties similar to biological synapses. Using newly developed technology, the memristors were integrated in matrices: in the future this technology may be used to design computers that function similar to biological neural networks."
A team of psychology and computer engineering professors at Binghamton University was able to successfully match test subjects to their uniquely identifying brain waves—or “brainprints”—with 100 percent accuracy.
What the study’s authors discovered was that each person’s brain wave responses to the visual stimuli were unique enough to identify them by—very much like fingerprints or DNA.
It’s not illogical to wonder, then, how brainprints can and will be used to surveil citizens outside of the lab.
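For intuition about how such matching could work in principle (a hypothetical sketch, not the Binghamton team's actual protocol), one can enroll each person with an averaged brain-wave template and identify a new recording by finding the best-correlated template.

```python
import numpy as np

# Hypothetical sketch of the matching idea (not the Binghamton team's actual
# protocol): each person is enrolled with an averaged brain-wave response
# ("brainprint"), and a new recording is identified by finding the closest
# enrolled template via correlation.

rng = np.random.default_rng(2)

n_people, n_samples = 5, 256
templates = rng.normal(size=(n_people, n_samples))   # enrolled brainprints

def identify(recording: np.ndarray) -> int:
    """Return the index of the enrolled person whose template correlates best."""
    scores = [np.corrcoef(recording, t)[0, 1] for t in templates]
    return int(np.argmax(scores))

# A noisy new recording from person 3 should still match template 3.
new_recording = templates[3] + 0.3 * rng.normal(size=n_samples)
print("identified as person", identify(new_recording))
```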