BBC News: Do quantum computers threaten global encryption systems? With that secure channel created, different encryption systems that are much less susceptible to attack by quantum computers are used to protect data shuttling back and forth.
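The article names no particular cipher, but the idea can be sketched: once a shared secret exists (for example, one agreed over a quantum key distribution channel), a symmetric cipher such as AES-256-GCM protects the traffic, and symmetric ciphers are thought to resist quantum attack far better than RSA or elliptic-curve schemes, since Grover's algorithm only halves the effective key length. Here is a minimal Python sketch, assuming the third-party `cryptography` package; the QKD step is simulated by simply generating a key:

```python
# A minimal sketch, assuming the 'cryptography' package is installed.
# The QKD step is simulated by generating a random 256-bit key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

shared_key = AESGCM.generate_key(bit_length=256)  # stand-in for a QKD-agreed key

def send(plaintext):
    nonce = os.urandom(12)  # must be unique per message under the same key
    return nonce, AESGCM(shared_key).encrypt(nonce, plaintext, None)

def receive(nonce, ciphertext):
    return AESGCM(shared_key).decrypt(nonce, ciphertext, None)

nonce, ct = send(b"data shuttling back and forth")
assert receive(nonce, ct) == b"data shuttling back and forth"
```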
Search YouTube for “baby” and “iPad” and you’ll find clips featuring one-year-olds attempting to manipulate magazine pages and television screens as though they were touch-sensitive displays. These children are one step away from assuming that such technology is a natural, spontaneous part of the material world.
Nature needed about one billion years to create the simplest single-cell organisms that swam around in the primordial soup. Now, scientists are eager to create synthetic life – but better and faster.
Hamilton Smith (Nobel Prize in Chemistry 1978, shared with Werner Arber and Daniel Nathans) opened his lecture at the 64th Nobel Laureate Meeting in Lindau with a quote from Richard Feynman (Nobel Prize in Physics 1965): "What I cannot create, I do not understand." Feynman probably meant physical models, whereas Smith referred to living organisms. In his laboratory at the J. Craig Venter Institute, he tries to create synthetic cells: "I hope that if we create that, we will understand."
Nowadays, the entire human genome has been decoded. But how a living human being develops from DNA molecules, a human being that can breathe, eat, walk, study, love, receive Nobel Prizes and award them – nobody really understands this yet. Even for single-cell organisms, this isn't crystal clear: even the simplest bacteria carry genes without apparent function, genes that are not essential for life. During evolution, a lot of 'genetic waste' has accumulated that might have been useful at some point but was rendered useless by mutations. Some genetic fragments were in fact smuggled into the genome by viruses; others were created by accidental duplications of genetic segments. Numerous molecular mechanisms lead to many genetic variations, rendering evolution possible in the first place. But over time, many of these genes and segments have become useless.
Smith is currently trying to tidy up the genome of Mycoplasma mycoides, a microbe that normally lives in the digestive tract of ruminants. Originally, Smith and his team wanted to use the genome of Mycoplasma genitalium, the bacterium with the smallest known genome: it needs only 475 genes to live, and Smith estimates that about 100 of these are non-essential. But because M. mycoides divides much faster, experiments with it proved more practical, even though its genome is twice as large. During this 'minimal cell project', the researchers switch off one gene after another and study the effects on the microbes. (And the slower the microbes grow, the longer the researchers have to wait for their results.) Smith's final goal is "a genome that is very understandable – we are searching for the genetic kernels of life".
These knockout experiments sort the genes into three groups: essential genes the cell cannot survive without, quasi-essential genes whose loss slows growth considerably, and non-essential genes. Smith assumes that all genes from the last group can be switched off without negative impact on the microbes. Concerning the middle category, the researchers have to weigh all options carefully. When all is done, the result should be a bacterium that can still multiply rapidly, at least under laboratory conditions that offer plenty of nourishment, constant temperatures and no competitors. The researchers' goal is a fifty percent genome reduction in a happily thriving microbe that divides at least once every 100 minutes.
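The screening loop behind this three-way sorting can be pictured in a few lines of code. This is a toy sketch, not the lab's actual pipeline: the gene names, threshold and assay numbers below are hypothetical, and only the classification logic follows the description above.

```python
# Toy sketch of the minimal-cell screening loop described above.
# The gene names and assay numbers are hypothetical stand-ins for
# real wet-lab knockouts and growth measurements.
WILD_TYPE_DOUBLING_MIN = 100  # target: at least one division every 100 minutes

def classify(doubling_time_min):
    """Sort a single-gene knockout strain into one of three categories."""
    if doubling_time_min is None:
        return "essential"        # the knockout never grew: the gene must stay
    if doubling_time_min > 3 * WILD_TYPE_DOUBLING_MIN:
        return "quasi-essential"  # grows, but too slowly: weigh all options
    return "non-essential"        # candidate for removal from the minimal genome

# Hypothetical assay results: gene -> doubling time (minutes) after knockout.
assay = {"dnaA": None, "rpoB": None, "ftsZ2": 450, "mobA": 110}
for gene, t in assay.items():
    print(gene, "->", classify(t))
```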
Smith likes using computer terms to describe his work. He compares the genome of any organism to its software; the rest (the cytoplasm, proteins and enzymes) is the hardware, controlled by that software. As soon as a cell receives a new genetic program, it starts to put this program to use. To test their own synthetic programs, Smith and his team replaced the bacterium's DNA with synthetic DNA containing their basic program. To date, the old 'hardware' has not accepted the new program 'update'. In computer speak, troubleshooting and maintenance are called "debugging": Smith and his team will be busy with debugging for some time.
Drawing on the work of a clever cadre of academic researchers, the biggest names in tech—including Google, Facebook, Microsoft, and Apple—are embracing a more powerful form of AI known as “deep learning,” using it to improve everything from speech recognition and language translation to computer vision, the ability to identify images without human help.
It was one of the most tedious jobs on the internet. A team of Googlers would spend day after day staring at computer screens, scrutinizing tiny snippets of street photographs, asking themselves the same question over and over again: "Am I looking at an address or not?" Click. Yes. Click. Yes. Click. No. This was…
Researchers from MIT’s Laboratory for Information and Decision Systems have developed an algorithm in which distributed agents — such as robots exploring a building — collect data and analyze it independently. Pairs of agents, such as robots passing each other in the hall, then exchange analyses.
In experiments involving several different data sets, the researchers’ distributed algorithm actually outperformed a standard algorithm that works on data aggregated at a single location, as described in an arXiv paper.
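The paper's actual estimators are more sophisticated, but the flavor of the approach (local analysis plus pairwise exchange, with no central aggregation) can be sketched with a toy example in which each agent's "analysis" is just the mean of its own data; the `Agent` class and meeting schedule here are illustrative assumptions, not MIT's algorithm.

```python
# A hedged sketch of the idea: each agent analyzes only its own data, and
# agents that meet merge summaries, never raw data. With equal-sized local
# datasets, repeated pairwise averaging drives every agent toward the
# global mean without any central aggregation.
import random

class Agent:
    def __init__(self, data):
        self.estimate = sum(data) / len(data)  # local analysis of local data

    def exchange(self, other):
        # Two agents passing in the hall average their current estimates.
        merged = (self.estimate + other.estimate) / 2
        self.estimate = other.estimate = merged

random.seed(0)
agents = [Agent([random.gauss(5.0, 1.0) for _ in range(50)]) for _ in range(8)]
for _ in range(30):  # thirty random pairwise meetings
    a, b = random.sample(agents, 2)
    a.exchange(b)
print([round(a.estimate, 3) for a in agents])  # estimates converge toward 5.0
```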
Wired: AI Systems Will Prove Useful Long Before They Become Self-Aware. …the Jeopardy!-winning Watson supercomputer. This could be built today in theory, but it will probably be a few years before anything like it is built in practice.
The human brain is the world's most sophisticated computer, capable of learning new things on the fly using very little data. It can recognize objects, understand speech and respond to change. Since the early days of digital technology, scientists have worked to build computers that are more like the three-pound organ inside your head. Most efforts…
This is the second post in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. In the previous post, I looked at Bostrom’s defence of the orthogonality thesis. This thesis claimed that pretty much any level of intelligence — when “intelligence” is understood as skill at means-end reasoning — is compatible with pretty much any (final) goal. Thus, an artificial agent could have a very high level of intelligence, and nevertheless use that intelligence to pursue very odd final goals, including goals that are inimical to the survival of human beings. In other words, there is no guarantee that high levels of intelligence among AIs will lead to a better world for us.
In recent years a robust science of networks has been established, and we've gained important insights into how networks function. It's time to start putting that science to work in how we manage enterprises.
While at conferences and doing research and writing over the past couple of years, I’ve noticed a lot of confusion about the terms “posthuman,” “transhuman,” and “posthumanism.” A lot of people—including scholars who should know better—use these terms pretty much interchangeably and indiscriminately. Part of the problem is that these terms are all fairly new. So for clarity’s sake, I offer these simple thumbnail definitions of all three terms…