The realisation of quantum networks is one of the major challenges of modern physics. Now, new research shows how high-quality photons can be generated from ‘solid-state’ chips, bringing us closer to the quantum ‘Internet’. We are at the dawn of quantum-enabled technologies, and quantum computing is one of many thrilling possibilities.
The number of transistors on a microprocessor continues to double every two years, amazingly holding firm to a prediction by Intel co-founder Gordon Moore almost 50 years ago.
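The arithmetic behind that prediction is simple compounding. A minimal sketch (the 1971 Intel 4004 figure of roughly 2,300 transistors is used here only as an illustrative starting point):

```python
def transistors(start_count, start_year, year, doubling_period=2):
    """Project a transistor count forward assuming a fixed doubling period (Moore's law)."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# 21 doublings between 1971 and 2013: roughly a two-million-fold increase,
# landing in the billions of transistors, consistent with modern processors.
print(transistors(2300, 1971, 2013))
```

Run forward another decade or two, the same compounding is what forces the conceptual advances the article describes.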
If this is to continue, conceptual and technical advances harnessing the power of quantum mechanics in microchips will need to be investigated within the next decade. Developing a distributed quantum network is one promising direction pursued by many researchers today.
A variety of solid-state systems are currently being investigated as candidates for quantum bits of information, or qubits, as well as a number of approaches to quantum computing protocols, and the race is on for identifying the best combination. One such qubit, a quantum dot, is made of semiconductor nanocrystals embedded in a chip and can be controlled electro-optically.
Single photons will form an integral part of distributed quantum networks as flying qubits. First, they are the natural choice for quantum communication, as they carry information quickly and reliably across long distances. Second, they can take part in quantum logic operations, provided all the photons taking part are identical.
Unfortunately, the quality of photons generated from solid-state qubits, including quantum dots, can be low due to decoherence mechanisms within the materials. With each emitted photon being distinct from the others, developing a quantum photonic network faces a major roadblock.
Now, researchers from the Cavendish Laboratory at Cambridge University have implemented a novel technique to generate single photons with tailored properties from solid-state devices that are identical in quality to lasers. Their research is published today in the journal Nature Communications.
As their photon source, the researchers built a semiconductor Schottky diode device containing individually addressable quantum dots. The transitions of quantum dots were used to generate single photons via resonance fluorescence – a technique demonstrated previously by the same team.
Under weak excitation, also known as the Heitler regime, the main contribution to photon generation is through elastic scattering. By operating in this way, photon decoherence can be avoided altogether. The researchers were able to quantify how similar these photons are to lasers in terms of coherence and waveform – it turned out they were identical.
“Our research has added the concepts of coherent photon shaping and generation to the toolbox of solid-state quantum photonics,” said Dr Mete Atature from the Department of Physics, who led the research.
“We are now achieving a high-rate of single photons which are identical in quality to lasers with the further advantage of coherently programmable waveform - a significant paradigm shift to the conventional single photon generation via spontaneous decay.”
There are already protocols proposed for quantum computing and communication which rely on this photon generation scheme, and this work can be extended to other single photon sources as well, such as single molecules, colour centres in diamond and nanowires.
“We are at the dawn of quantum-enabled technologies, and quantum computing is one of many thrilling possibilities,” added Atature.
As computing moves from our desktops to our phones, we look into the future to see how technology will become increasingly ingrained in our movements and our active lives. From the Nike Fuelband to Google Glass, consumers are already seeing hints of the future of wearable devices. They have the possibility to make us more knowledgeable about ourselves and our surroundings, and connect us with each other in an uninterrupted, more intimate way. From DIY wearables to high-tech sensors and smart fabrics, the years ahead will show how integrated technology can impact our lives for the better.
Featuring: Sandy Pentland (MIT); Sabine Seymour (Parsons); Steven Dean (G51Studio); Becky Stern (Adafruit).
The problem with creating Stuxnet, the world's most sophisticated malware worm, is that it could eventually go rogue. Which is precisely what has happened. The US- and Israeli-built virus has spread to a Russian nuclear plant — and even the International Space Station.
Stuxnet is an incredibly powerful computer worm that was created by the United States and Israel to attack Iran's nuclear facilities. It initially spreads through Microsoft Windows and targets Siemens industrial control systems. It's considered the first malware that both spies and subverts industrial systems. It's even got a programmable logic controller rootkit for the automation of electromechanical processes.
Let that last point sink in for just a second. This thing, with a little bit of coaxing, can actually control the operation of machines and computers it infects.
For more on Stuxnet, I highly encourage you to watch this sobering TED talk by Ralph Langner, where he describes it as "a 21st century cyber weapon."
According to network-equipment maker Sandvine, YouTube now generates more IP traffic in Europe than the HTTP protocol. Google's video service reportedly claims 28 percent of the bandwidth used, also leaving BitTorrent traffic behind.
According to Sandvine's measurements of European data traffic, YouTube traffic comes in at 24.2 percent in the second half of this year, against 13.6 percent for HTTP traffic. With that share, Google's video service surpasses HTTP traffic for the first time. YouTube also now moves more data than the BitTorrent protocol: p2p traffic via BitTorrent accounts for an 18 percent share. Facebook takes fourth place with a 4.6 percent share. The researchers attribute YouTube's growing share mainly to smartphones and tablets.
Sandvine's figures further show that the proportions in the US are different. With a 28.2 percent share, video service Netflix is the biggest consumer there. The streaming-video service has only recently become available in a handful of European countries, including the Netherlands, and therefore still accounts for only a small share, just over 3 percent, of the data traffic generated in Europe. In the United States, YouTube and Netflix together can account for more than half of data traffic at peak times.
Mobile users still mostly generate HTTP traffic, with a 22.6 percent share; YouTube follows with 19 percent. Facebook and SSL traffic take third and fourth place. BitTorrent holds fifth place with a 5.7 percent share.
Jaron Lanier is a technology inventor and philosopher who has been dubbed the prophet of the digital age. He coined the phrases 'Virtual Reality' and 'digital Maoism'. His last book, You Are Not A Gadget, was a hugely influential and hotly debated critique of the 'hive mind'. In this rare appearance in London he talks about his new book, Who Owns the Future?, with James Bridle.
Digital technologies dawned with the promise that they would bring us all greater economic stability and power. That utopian image has stuck. But, Lanier argues, the efficiencies brought by digi-techs are having the effect of concentrating wealth while reducing overall growth. He predicts that, as more industries are transformed by digital technologies, huge waves of permanent unemployment are likely to follow those already sweeping through many creative industries.
But digital hubs are designed on the principle that people don't get paid for sharing. Every time we apply for a loan, update Facebook, use our credit cards, post pictures on Instagram or search on Google, we work for free, says Lanier. He argues that artificial intelligence over a network can be understood as a massive accounting fraud that ruins markets. Past technological revolutions rewarded people with new wealth and capabilities. He will explain why, without that reward, the middle classes, who form the basis of democracy as he sees it, are threatened, placing the future of human dignity itself at risk.
Lanier discusses his analysis of the deep links between democracy and capitalism, and shares his thoughts for how humanity can find a new vision for the future.
This event was part of The School of Life's 'In Conversation' series and took place at Conway Hall on 6th March 2013.
Between smart phones, smart pads, apps, cloud computing, and the myriad of other technological advances and transformations occurring today, many company leaders are wondering how to navigate it all. Historically, CEOs and other C-suite executives are used to having control over everything within the company’s walls. As such, they are not happy with the increased focus on such things as cloud computing, yet that’s precisely what their company’s staff is using when they use their personal computers to search Google or access other cloud-based applications.
As a strategic consultant to large organizations, I’m amazed at how many executives are not embracing this paradigm shift. As a leader, you have to ask yourself, “Will there be more heavy hardware and software going to the cloud next year than there is this year?” The answer is a resounding “Yes!” That means you can’t ignore it.
When computers figure out what you’re saying, will they care? Will you?
If the content of your e-mail or a transcript of your phone conversations were being read by someone working for the government, you might consider that a violation of privacy. But most people understand that computer programs and algorithms, such as the Google Adsense program, constantly analyze not only our communications’ metadata (whom we spoke to, from where, for how long, etc.), but also, often, the literal content of those exchanges.
What happens to privacy when an algorithm can understand us as well as that government employee might?
At present, there is no computer program that can perfectly interpret human speech, although attempts at solving this problem have spanned five decades. Humans can intuitively comprehend the difference in meaning between words; computers cannot. Several elements factor into determining a sentence’s meaning—context, syntax, and logic, for instance, all help us to understand what is being conveyed.
Past attempts to help computers better understand language involved manually coding definitions of words onto disc or memory. This method has yielded little by way of concrete results for the simple reason that language is a lot more complex than mere dictionary definitions.
“I think it’s fair to say that this hasn’t been successful. There are just too many little things that humans know,” says Katrin Erk, a University of Texas at Austin linguistics professor and co-author of the recent study “Montague Meets Markov: Deep Semantics with Probabilistic Logical Form,” presented at the Second Joint Conference on Lexical and Computational Semantics in Atlanta in June.
Programming dictionary definitions is a challenge. Definitions are not always clear-cut, and variation in definitions from one dictionary to another adds to this problem. The solution, the researchers hypothesized, was found not in dictionary definitions but in paraphrasing and the use of more flexible definitions.
Erk and her colleagues combined two distinct approaches to attack the problem. The first piece was Montague grammar (named after philosopher and mathematician Richard Montague), which uses a formal system of first-order logic: a rule-based approach in which rigid meanings are assigned from syntax and each word’s definition. Erk combined Montague’s method with a probabilistic model of distributional, or vector-space, semantics (the Markov half of the study’s title). This approach maps words based on closeness of meaning: semantically similar words are placed closer together, while words with more distant meanings are farther apart.
“We use a gigantic 10,000-dimensional space with all these different points for each word to predict paraphrases,” explains Erk. “If I give you a sentence such as, ‘This is a bright child,’ the model can tell you automatically what are good paraphrases (‘an intelligent child’) and what are bad paraphrases (‘a glaring child’). This is quite useful in language technology.”
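The paraphrase ranking Erk describes can be sketched with a toy version of vector-space semantics. The three-dimensional vectors below are invented for illustration (the study's model uses 10,000 dimensions learned from corpus data); the ranking mechanism, cosine similarity between word vectors, is the same idea:

```python
import math

# Hypothetical word vectors: words used in similar contexts get nearby vectors.
VECTORS = {
    "bright":      [0.90, 0.80, 0.10],
    "intelligent": [0.85, 0.75, 0.15],  # near "bright" in its "clever child" sense
    "glaring":     [0.10, 0.20, 0.95],  # shares only the "harsh light" sense
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Rank paraphrase candidates for "bright" as in "a bright child".
candidates = ["glaring", "intelligent"]
ranked = sorted(candidates, key=lambda w: cosine(VECTORS["bright"], VECTORS[w]),
                reverse=True)
print(ranked)  # "intelligent" ranks above "glaring"
```

In the real system, the logic side (Montague-style first-order representations) then filters these distributional guesses for consistency with the sentence's structure.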
The Texas Advanced Computing Center’s Longhorn supercomputer used grammar and syntax analysis, word-meaning models, and first-order logic to predict sentences’ meanings. The supercomputer correctly predicted meanings with up to 85% accuracy.
The research was funded by the Defense Advanced Research Projects Agency (DARPA). Originally created in 1958 as a reaction to the Soviet Union’s Sputnik launch, DARPA is the Defense Department agency responsible for developing new technology for military use.
According to Raymond Mooney of the University of Texas at Austin, “DARPA’s been investing in natural language processing for a very long time. The obvious application that there’s always pushing on is intelligence gathering. They want to be able to have computers that can read all the newspapers all over the world and then compile all of the events that are happening so that intelligence officers can query that effectively to answer questions.”
As is the case with any technology, potential exists for abuse. “I’m optimistic that this will be put to good use. But are there ways that the technology could be abused in various ways? Sure,” says Mooney.
DARPA did not respond to requests for comment on this story.
Peter Eckersley, director of technology projects at the Electronic Frontier Foundation (EFF), details the implications that a semantic search program could have for personal privacy. In particular, he notes the possibility of misuse, especially in the wake of the recent scandal involving the National Security Agency’s (NSA) monitoring of users’ metadata.
If prior circumstances are any indication, the potential for information collection—and subsequent use of that information—is great. Eckersley points out that the Foreign Intelligence Surveillance Court (FISC) issued a single warrant that allowed the NSA to collect vast amounts of metadata about Verizon cell-phone users.
“Once that’s been disclosed, the NSA can do whatever it wants with that data,” Eckersley says. Presumably, the government could issue a warrant of similar scope to analyze private information using semantic searches.
“It’s intrusive to have algorithms scanning your email. And it’s more intrusive to have smarter algorithms scanning your email. We should be concerned about the privacy implications of both of these cases, but the danger clearly gets worse as the software gets smarter,” he says.
Even people who consider themselves boring and of no interest to the government should be concerned, according to Eckersley.
“They’re not going to spend the time to catch that one little thing, that one little secret that they have, that they don’t want to share,” he says. “If there’s an algorithm that can effectively read everyone’s communications in incredible depth, then any reassurance of that is gone.”
On the other hand, Mooney believes that such technology likely will not be at the government’s disposal anytime soon.
“This is an incremental step, but it certainly isn’t the end-all be-all of understanding language,” he says. Mooney estimates that the field has another half century of work ahead of itself before computers are fully capable of comprehending what we say. —Keturah Hetrick
Sources: Interviews with Raymond Mooney, University of Texas; Peter Eckersley, Electronic Frontier Foundation.
Google Inc. is now aligned with the notorious ALEC. Quietly, Google has joined ALEC -- the American Legislative Exchange Council -- the shadowy corporate alliance that pushes odious laws through state legislatures.
"Google has widely mythologized itself as some kind of humanistic techno-pioneer. Obscured in a fog of digital legend is the agenda that more than ever is transfixed with maximizing profits while capitalizing on anti-democratic leverage of corporate power. Google’s involvement in ALEC is consistent with the company’s mega-business model that relentlessly exploits rigorous data-mining of emails, online searches and so much more."
Do humans need a new way of thinking to contend with rapid technological change?
Q. Gray, we once discussed how a new psychology could be beneficial for society in regards to its understanding of the future and advancing technology. Can you elaborate your thoughts on this and why this might be desired?
A. We are experiencing unbelievable technological and psychological exponential change on this planet. We need a new set of psychological skills to cope with this change. Our world looks very different culturally, psychologically, and environmentally than it did 15 years ago. The mobile revolution has dramatically changed our world view, empowered women, and increased our empathy. Corrupt governments have been toppled and wars avoided because our species has become so digitally connected.
Negative and pessimistic views of technology have always existed. I can just imagine some pessimistic Sumerian in 3500 B.C. screaming about the evils of the wheel. We still deal with this fear-based psychology in modern culture. Technology will mirror the culture and the psychology creating it. We need new psychological scaffolding to work with. Less fear and more optimism. A more holistic overview of how we look at ourselves and these new technological advancements could transform our planet.
I would love to see a future filled with abundance for everyone, however, we will not achieve this utopia until our cultural shadow-self is integrated. Taking psychological ownership of our fear is the first step. Technology is just a reflection of the deep unconscious cultural mind.
Q. What are some of the hang-ups you see with the current psychology that society seems to be stuck in when it comes to topics like transhumanism and radical life extension ideas?
A. Some people are hung up by fear. They are frightened of the changes they see around them. They are operating on old religious beliefs, old mythologies, and old information. The new techno-generation or transhuman generation are not buying it. They want to thrive. They want to live hundreds of years. They are not waiting on a rapture, doomsday, or cancer. They want utopia and they want it now. How can any culture be psychologically sound if we are told that we are "only human?" We have been conditioned to be frightened of our power, our love, our worth. Watch a child play and you will see real power. They are fearless. We are taught to desire the death wish, and it is not natural. Immortality may be impossible, but imagine what humanity could learn if we all lived 700 years. We could travel deep into utopia and beyond. We are experiencing a shedding of old psychological, political, and religious references. We could call it a psycho-digital-moulting.
Q. What are some good methods for getting society to embrace a new psychology for dealing better with the future?
A. One method is to seek out social support systems that are open and techno-positive. We need psychologists, elected representatives, and teachers to fully embrace technology. Transhumanism gives us a chance to unleash our biological and digital imagination into the material world. We can use bio-hacking and brain computer interface devices to allow people to walk again, see again, or communicate again. Showing the positive impacts of technology will help society embrace a new set of skills. The future is inclusive, not exclusive.
Q. I feel like technology is expanding very quickly, but human culture and psychology are not keeping up with it. How long can this go on before this discrepancy seriously hurts our species?
A. The technological singularity has been predicted to happen around 2045, so I would say that is our make-or-break time. As a futurist-philosopher, I am fascinated by the future of psychology. We have already crossed the threshold into the unknown. If we do not educate ourselves psychologically, we will face huge catastrophes in the near future. Digital outliers, like the elderly, Luddites, and the poverty-stricken, will have a harder time understanding the new techno-psychology. Imagine a world filled with clones or artificially intelligent robots that can pass the "Turing Test" of believability. Are psychologists talking about the near-future issues of robotic romance, digital-death remorse, or clone envy? Sounds like science fiction, but it is closer than most people would believe.
Q. How can the media play a better role in bringing forth a new psychology of the future and advancing technology?
A. Media consumption is no longer a passive experience. We touch, pinch, and swipe our way through the media world today. This interactive experience of seeking out knowledge and getting media from our social networks allows an entirely new psychology to emerge.
The way we consume media has radically shifted in the last 15 years. We can "binge-watch" entire seasons of shows in one day now that we have YouTube and Netflix. We can create individual "programming" based on our own psychological needs and desires.
The major media networks need to focus more on the positive implications of technology and less on dystopian fears. The people in the media need to be information leaders. They need to educate and enlighten. Our cultural amygdala is worn out. No more terror alerts. No more fear. We are ready for a higher road.
The multi-billion Euro Human Brain Project, co-funded by the European Union, plans to use supercomputers to model the human brain and then use the research to simulate drugs and treatments for diseases, create learning artificial intelligence and...
The power of Wolfram Alpha — the intelligent search engine that can answer natural language questions and solve complex math problems — is being built into an upcoming programming language that its founder, Stephen Wolfram, says will be incredibly easy to use. The language, Wolfram writes, is "a way to go from an idea to a fully deployed realization in an absurdly short time." It's called Wolfram Language, and it's an evolution of the language used inside his company's popular Mathematica software for over 25 years now.
COMPLEX FUNCTIONS ARE BUILT RIGHT INTO WOLFRAM LANGUAGE
Wolfram's intention is to build a language that includes simple ways to do normally complex tasks, from image processing, to creating graphs, to understanding natural language. "It becomes trivial to write a program that makes use of the latest stock price, computes the next high tide, generates a street map, shows an image of a type of airplane, or a zillion other things," writes Wolfram.
On Thursday IBM will announce that Watson, the computing system that beat all the humans on “Jeopardy!” two years ago, will be available in a form more than twice as powerful via the Internet.
Companies, academics and individual software developers will be able to use it at a small fraction of the previous cost, drawing on IBM’s specialists in fields like computational linguistics to build machines that can interpret complex data and better interact with humans.
IBM’s move to make its marquee technology more widely available is the latest effort among big technology companies to make the world’s most powerful computers as accessible as the Angry Birds video game.
It is also an indication of how quickly the technology industry is changing, from complex systems that cost millions to install to pay-as-you-go deals that provide small companies and even individuals access to technology that just a few years ago only the largest companies could afford.
“The next generation will look back and see 2013 as a year of monumental change,” said Stephen Gold, vice president of the Watson project at IBM.
“This is the start of a shift in the way people interact with computers.”
The former director of Apple's Siri is taking Samsung's version of the artificial intelligence system to the next level. Luc Julia, vice president and innovation fellow at Samsung's Open Innovation Center in Menlo Park, Calif., demonstrated SAMI (Samsung Architecture for Multimodal Interactions), the Siri-like system central to Samsung's Internet of Things (IoT) strategy, at the MEMS Executive Congress 2013 in Napa, Calif., Nov. 7-8.
SAMI is an interactive artificial intelligence (AI) similar to Apple's Siri that Julia helped develop when at Apple. Samsung's SAMI, however, goes far beyond Apple's Siri by aggregating sensor data from all types and brands of IoT devices in the cloud. The open system will then allow Samsung ecosystem partners -- some financed by a $100 million accelerator fund -- to perform deep analytics on that data before sending smart advice back to users.
Samsung's Menlo Park OIC (Open Innovation Center) is exploring possibilities with about 48 companies regarding providing the innovative sensors and actuators for wellness, smart-homes, and smart-cars, as well as the cloud-based "brains" for projects like SAMI. Samsung is engaging startups that it will fund with early-stage investments in the $100,000 to $2 million range from Samsung's $100 million accelerator fund.
Going from 20 billion Internet of Things devices today to 1.5 trillion by 2020
According to Julia, who left Apple last year, there are about 20 billion IoT devices out there today, but he predicts that number will grow to 1.5 trillion by 2020. Driving that growth will be what he calls the "explosion" of the smartphone, whereby the sensors inside it, and dozens more in dedicated IoT devices, explode like shrapnel that becomes embedded into wearable devices around the body.
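Julia's numbers imply an extraordinarily steep growth curve. A quick back-of-the-envelope check, assuming smooth compound growth over the seven years between the two figures quoted above:

```python
# Implied compound annual growth rate if 20 billion IoT devices (today, 2013)
# become 1.5 trillion by 2020 — an assumption of smooth exponential growth.
start, end, years = 20e9, 1.5e12, 7
cagr = (end / start) ** (1 / years) - 1
print(f"implied growth: {cagr:.0%} per year")  # roughly 85% per year
```

That is, the installed base would need to nearly double every year for seven years straight, which puts the scale of his "explosion" metaphor in perspective.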
What digital technology has done to journalism and recorded music, Lanier says, could easily happen to other areas of work—such as transportation, medical services, and education—as our lives become increasingly software mediated. It’s not that humans will become unnecessary, but that the uniquely human contributions to information networks are not, under current design, sufficiently valued. He draws a direct line between the increasing migration of economic activity online and the decrease in economic opportunity generally.
Lanier is not anti-technology (and is actually a Silicon Valley insider with an impressive pedigree) and he stresses that current problems are not an inevitable facet of networked economics. The power of all digital networks depends on human beings uploading sufficient raw data, but average people aren’t compensated under our current system. Lanier’s central idea for rethinking network design is for people to be continually compensated for their information contributions, in nano-payments, according to the value they create.
NASA has set a new record for communication in space, beaming information to and from a probe named LADEE that is currently flying around the moon 380,000 kilometers away.
Aboard LADEE is the Lunar Laser Communication Demonstration (LLCD), which achieved super-fast download speeds of 622 megabits per second (Mbps) and an upload rate of 20 Mbps. In comparison, the internet at WIRED’s office in San Francisco gets download rates of 75 Mbps and uploads at 50 Mbps. NASA’s typical communications with the moon are about five times slower than what LLCD provided.
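A quick sketch puts those rates in perspective (assumptions: a 1 GB file, ideal sustained throughput, no protocol overhead; the rates and the 380,000 km distance are the figures quoted above):

```python
# How long to move 1 gigabyte over each link, at the quoted peak rates.
FILE_BITS = 8e9  # 1 GB = 8 billion bits

rates_mbps = {
    "LLCD downlink (moon to Earth)": 622,
    "WIRED office downlink": 75,
    "LLCD uplink (Earth to moon)": 20,
}

for link, mbps in rates_mbps.items():
    seconds = FILE_BITS / (mbps * 1e6)
    print(f"{link}: {seconds:.0f} s per GB")

# One-way light delay over 380,000 km — a floor no link technology can beat.
print(f"one-way light delay: {380_000 / 299_792.458:.2f} s")  # about 1.27 s
```

So the LLCD downlink moves a gigabyte in about 13 seconds, versus roughly 400 seconds on its 20 Mbps uplink, while every round trip still carries about 2.5 seconds of unavoidable light-speed latency.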
Until now, NASA has used radio waves to communicate with its spacecraft out in the solar system. As a probe gets farther away, you need more power to transmit a signal. Earth-based receiving dishes have to be bigger, too, so that NASA’s most-distant probe, Voyager 1, relies on a 70-meter antenna to be heard. LLCD relies on three ground-based terminals at telescopes in New Mexico, California, and Spain to communicate.
The agency is currently interested in creating better laser-based communication relays. With a concentrated beam of light, a spacecraft could send data at much faster rates that could carry higher resolution images and transmit 3-D videos from deep space. Of course, the method is challenging because it requires very high precision. If the skinny laser beam doesn’t exactly hit its target over a ridiculously far distance, it will lead to dropped calls and no communication. LLCD also has a slower transmission rate when the moon is on the horizon — and the signal has to travel through a greater amount of interfering atmosphere — than when it is directly overhead.
LLCD is actually a precursor to a larger and even more capable project, the Laser Communications Relay Demonstration (LCRD), which will further test the technology and is expected to launch in 2017. One day, such communication systems could be part of a fast interplanetary internet that will beam data around the solar system.
This is not a joke, even though it sounds like a bad sci-fi movie or the title of a smudgy xeroxed screed being circulated amongst conspiracy theorists. I just read an article in Fast Company citing a recent Oxford University study that shows how almost half (47%) of current jobs could be done by machines in the fairly near future.
Reading this article sent me on an internet search for “jobs in danger of being automated” (btw, a search I would have done with the aid of a librarian, 20 years ago). I found a lot of references to the above study, so I finally went to the study itself, by researchers Carl Frey and Michael Osborne. It’s fascinating reading – and my key take-away was that the jobs least at risk are those that require the highest degree of “perception and manipulation, creative intelligence, and social intelligence.”
In other words, you should start looking over your shoulder for your Rosie the Robot replacement if your job:
Belinda Suvaal's insight:
If your job can be taken over by algorithms, it's time to start thinking about a career change. I think that within a few years this will apply to 'the middle of the pack'.