Marcus du Sautoy: A physicist has formulated a mathematical theory that purports to explain why the universe works the way it does – and it feels like 'the answer'
Two years ago, a mathematician and physicist whom I've known for more than 20 years arranged to meet me in a bar in New York. What he was about to show me, he explained, were ideas that he'd been working on for the past two decades. As he took me through the equations he had been formulating I began to see emerging before my eyes potential answers for many of the major problems in physics. It was an extremely exciting, daring proposal, but also mathematically so natural that one could not but feel that it smelled right.
He has spent the past two years taking me through the ins and outs of his theory and that initial feeling that I was looking at "the answer" has not waned. On Thursday in Oxford he will begin to outline his ideas to the rest of the mathematics and physics community. If he is right, his name will be an easy one to remember: Eric Weinstein.
As far as I can tell, the practice of photographing one’s food — whether in restaurants or at family gatherings — is generally deplored. The New York Times Style section, in its dual role as avatar and caricature of urban mores, reported in January that restaurants in Manhattan were banning it. (‘It’s a disaster in terms of momentum, settling into the meal,’ said one chef. ‘It’s hard to build a memorable evening when flashes are flying every six minutes.’) Your friends tend to be annoyed if you cram your Facebook and Twitter feeds with snapshots of your latest delicacy, too. ‘You posted an Instagram-ed picture of a handful of blueberries the other day,’ wrote Katherine Markovich in McSweeney’s, sneering at the iPhone-toting hordes of amateur photographers. ‘What would your day have been without those blueberries? Would you have felt a little less connected to the earth and, ultimately, yourself?’
We laugh at the thought of a beautiful moment ruined by Instagram, but meals continue to fill our online lives. The internet is brimming with steak and fried eggs, kale and rice, ice cream and coffee. Food, of course, can be a sign of status, and documenting our every dinner might be a vehicle for self-expression: ‘Tell me what you eat,’ said Jean Anthelme Brillat-Savarin, the 19th-century French lawyer, politician and gastronome, ‘and I will tell you what you are’. But the exotic cuisines, fine wines and clever plating that we recognise today are all built on the simple act of dining together. Food is inherently social, best consumed with friends or family; even eating with strangers is better than eating alone. It is essential to our social life that we invite people to eat with us, even when we’re separated by space and time.
We believe quantum computing may help solve some of the most challenging computer science problems, particularly in machine learning. Machine learning is all about building better models of the world to make more accurate predictions. If we want to cure diseases, we need better models of how they develop. If we want to create effective environmental policies, we need better models of what’s happening to our climate. And if we want to build a more useful search engine, we need to better understand spoken questions and what’s on the web so you get the best answer.
So today we’re launching the Quantum Artificial Intelligence Lab. NASA’s Ames Research Center will host the lab, which will house a quantum computer from D-Wave Systems, and the USRA (Universities Space Research Association) will invite researchers from around the world to share time on it. Our goal: to study how quantum computing might advance machine learning.
Machine learning is highly difficult. It’s what mathematicians call an “NP-hard” problem. That’s because building a good model is really a creative act. As an analogy, consider what it takes to architect a house. You’re balancing lots of constraints -- budget, usage requirements, space limitations, etc. -- but still trying to create the most beautiful house you can. A creative architect will find a great solution. Mathematically speaking, the architect is solving an optimization problem, and creativity can be thought of as the ability to come up with a good solution given an objective and constraints.
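The architect analogy can be made concrete. A toy sketch (our own, with made-up numbers -- not anything from Google): treat a house plan as a handful of design choices, score each plan with an objective, and search for the highest-scoring plan that stays on budget.

```python
# Toy "design as optimization" sketch. All numbers are hypothetical.
# Objective: maximize a "beauty" score; constraint: total cost <= budget.
from itertools import product

BUDGET = 300_000

def cost(floors, windows, garden):
    # Hypothetical pricing for each design choice
    return 80_000 * floors + 2_000 * windows + 30_000 * garden

def beauty(floors, windows, garden):
    # Hypothetical objective to maximize
    return 3 * floors + windows + 5 * garden

# Exhaustive search over a small discrete design space,
# keeping only plans that satisfy the budget constraint.
best = max(
    (plan for plan in product(range(1, 4), range(4, 21), range(0, 3))
     if cost(*plan) <= BUDGET),
    key=lambda plan: beauty(*plan),
)
```

Real model-building searches spaces far too large for exhaustive enumeration, which is exactly why the hardness matters.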
(Phys.org) —SheerWind Inc. of Chaska, Minnesota is claiming in a press release that its newly developed funnel-based wind turbine system is capable of producing 600 percent more power than conventional wind turbines.
Arrays of tree-like nanowires consisting of Si trunks and TiO2 branches facilitate solar water-splitting in a fully integrated artificial photosynthesis system
Lawrence Berkeley National Laboratory (Berkeley Lab) scientists have developed the first fully integrated nanosystem for artificial photosynthesis, in which solar energy is directly converted into chemical fuels.
“Similar to the chloroplasts in green plants that carry out photosynthesis, our artificial photosynthetic system is composed of two semiconductor light absorbers, an interfacial layer for charge transport, and spatially separated co-catalysts,” says Peidong Yang, a chemist with Berkeley Lab’s Materials Sciences Division, who led this research.
“To facilitate solar water-splitting in our system, we synthesized tree-like nanowire heterostructures, consisting of silicon trunks and titanium oxide branches. Visually, arrays of these nanostructures very much resemble an artificial forest.
“In natural photosynthesis, the energy of absorbed sunlight produces energized charge-carriers that execute chemical reactions in separate regions of the chloroplast,” Yang says. “We’ve integrated our nanowire nanoscale heterostructure into a functional system that mimics the integration in chloroplasts and provides a conceptual blueprint for better solar-to-fuel conversion efficiencies in the future.”
A far-flung team is trying to build the first digital lifeform to work out the basic principles of the brain.
For all the talk of artificial intelligence and all the games of SimCity that have been played, no one in the world can actually simulate living things. Biology is so complex that nowhere on Earth is there a comprehensive model of even a single simple bacterial cell.
And yet, these are exciting times for "executable biology," an emerging field dedicated to creating models of organisms that run on a computer. Last year, Markus Covert's Stanford lab created the best ever molecular model of a very simple cell. To do so, they had to compile information from 900 scientific publications. An editorial that accompanied the study in the journal Cell was titled, "The Dawn of Virtual Cell Biology."
In January of this year, the one-billion-euro Human Brain Project received a decade's worth of backing from the European Union to simulate a human brain in a supercomputer. It joins Blue Brain, an eight-year-old collaboration between IBM and the Swiss Federal Institute of Technology in Lausanne, in this quest. In an optimistic moment in 2009, Blue Brain's director claimed such a model was possible by 2019. And last month, President Obama unveiled a $100 million BRAIN Initiative to give "scientists the tools they need to get a dynamic picture of the brain in action." An entire field, connectomics, has emerged to create wiring diagrams of the connections between neurons ("connectomes"), which is a necessary first step in building a realistic simulation of a nervous system. In short, brains are hot, especially efforts to model them in silico.
Google, in partnership with NASA and the Universities Space Research Association (USRA), has launched an initiative to investigate how quantum computing might lead to breakthroughs in machine learning, a branch of AI that focuses on the construction and study of systems that learn from data.
The new lab will use the D-Wave Two quantum computer. A recent study (see “Which is faster: conventional or quantum computer?”) confirmed the D-Wave One quantum computer was much faster than conventional machines at specific problems.
The machine will be installed at the NASA Advanced Supercomputing Facility at the NASA Ames Research Center in Mountain View, California.
“We hope it helps researchers construct more efficient and more accurate models for everything from speech recognition, to web search, to protein folding,” said Hartmut Neven, Google director of engineering.
“Machine learning is highly difficult. It’s what mathematicians call an ‘NP-hard’ problem,” he said. “Classical computers aren’t well suited to these types of creative problems. Solving such problems can be imagined as trying to find the lowest point on a surface covered in hills and valleys.
“Classical computing might use what’s called a ‘gradient descent’: start at a random spot on the surface, look around for a lower spot to walk down to, and repeat until you can’t walk downhill anymore. But all too often that gets you stuck in a ‘local minimum’ — a valley that isn’t the very lowest point on the surface.
“That’s where quantum computing comes in. It lets you cheat a little, giving you some chance to ‘tunnel’ through a ridge to see if there’s a lower valley hidden beyond it. This gives you a much better shot at finding the true lowest point — the optimal solution.”
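Neven’s hill-and-valley picture is easy to demonstrate. A minimal sketch (ours, not Google’s code): plain gradient descent on a one-dimensional surface with two valleys settles into whichever valley it starts near, so a bad starting point leaves it trapped in the shallower, local minimum.

```python
# Gradient descent on a 1-D "hilly" surface with two valleys.
# Local minimum near x = 1.13; deeper global minimum near x = -1.30.

def f(x):
    return x**4 - 3 * x**2 + x

def grad(x):
    # Analytic derivative of f
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x, lr=0.01, steps=2000):
    # Repeatedly step downhill until (effectively) converged
    for _ in range(steps):
        x -= lr * grad(x)
    return x

stuck = gradient_descent(2.0)    # starts near the shallow valley
best  = gradient_descent(-2.0)   # starts near the deep valley
# f(stuck) > f(best): descent from the wrong side never finds the global minimum
```

Practical workarounds (random restarts, simulated annealing) amount to ways of escaping such valleys, which is the role quantum tunneling is hoped to play.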
Google has already developed some quantum machine-learning algorithms, Neven said. “One produces very compact, efficient recognizers — very useful when you’re short on power, as on a mobile device. Another can handle highly polluted training data, where a high percentage of the examples are mislabeled, as they often are in the real world. And we’ve learned some useful principles: e.g., you get the best results not with pure quantum computing, but by mixing quantum and classical computing.”
Using analog computation circuits, MIT engineers design cells that can compute logarithms, divide and take square roots.
MIT engineers have transformed bacterial cells into living calculators that can compute logarithms, divide, and take square roots, using three or fewer genetic parts.
Inspired by how analog electronic circuits function, the researchers created synthetic computation circuits by combining existing genetic “parts,” or engineered genes, in novel ways.
The circuits perform those calculations in an analog fashion by exploiting natural biochemical functions that are already present in the cell, rather than by reinventing them with digital logic. This makes them more efficient than the digital circuits pursued by most synthetic biologists, according to Rahul Sarpeshkar and Timothy Lu, the two senior authors of the paper describing the circuits in the May 15 online edition of Nature.
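One way to see how a cell can “compute” without digital logic is to let a reaction network settle into equilibrium. The toy model below is our own illustration, not the published MIT circuits: a protein produced at a rate proportional to the input divided by its own level, and degraded linearly, equilibrates at the square root of the input (setting k·x/y = d·y gives y = √(k·x/d)).

```python
# Toy analog "square-root circuit" (our own sketch, not the MIT design).
# Production rate k*x/y (the protein dampens its own synthesis),
# linear degradation d*y; the steady state satisfies y = sqrt(k*x/d).

def steady_state(x, k=1.0, d=1.0, dt=0.01, steps=20_000, y0=1.0):
    y = y0
    for _ in range(steps):
        y += dt * (k * x / y - d * y)   # production minus degradation
    return y

# With k = d = 1, the equilibrium level is simply sqrt(x):
for x in (4.0, 9.0, 25.0):
    assert abs(steady_state(x) - x**0.5) < 1e-3
```

The computation costs nothing beyond the chemistry itself: the answer is just where the dynamics come to rest, which is the efficiency argument for analog over digital in cells.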
(Phys.org) —A computer science professor at Amherst College who recently devised and conducted experiments to test the speed of a quantum computing system against conventional computing methods will soon be presenting a paper with her verdict: quantum...
Neurogaming is riding on the heels of some exponential technologies that are converging on each other.
Gaming as a hobby evokes images of lethargic teenagers huddled over their controllers, submerged in the couch and surrounded by candy bar wrappers. This image should soon hit the reset button, since a more exciting version of gaming is coming. It’s called neurogaming, and it’s riding on the heels of some exponential technologies that are converging on each other. Many of these were on display recently in San Francisco at the NeuroGaming Conference and Expo, a first-of-its-kind conference whose existence alone signals an inflection point in the industry.
Conference founder Zack Lynch summarized neurogaming to those of us in attendance as the interface “where the mind and body meet to play games.”
The evolutionary causes of the Internet's inescapable charisma
So here you are, once again, on the Internet. (Hello, there. Welcome back, friend.) Here you are, another Norm within the Cheers that is the World Wide Web, hanging out in the place where everybody (or, more likely, nobody) knows your name.
But why are you really here? I mean, why are you really here? Why, ultimately, do you -- and, because I'm right here with you, we -- keep coming back to this crazy place, day after day?
It's easy to attribute the web's ongoing magnetism to the powerful combination that is "human connection" and "cat videos"; that isn't the full story, though. The Internet is beguiling not just because of its content, but because of its structure.
That's according to Tom Stafford, a cognitive scientist at the University of Sheffield in the U.K. The Internet, Stafford told LiveScience, offers the same kind of incentives and rewards that, say, slot machines do: You could pull several -- even several hundred -- rounds of duds (cherry-bar-7! bell-bell-lemon! unfunny "humor" piece! terrible listicle! bell-bell-lemon!). But when you get that one payoff -- when you hit even the smallest of jackpots -- your patience is rewarded. The monotony of the arm-pulls or the button-presses seems to be justified by the win. You get a rush of dopamine. You are happy. (For more on how this works, check out the excellent "No Armed Bandit" episode of Roman Mars's 99 Percent Invisible podcast.)
Something really interesting happens in the curation process, because stories don’t have intrinsic value. An unshared story is basically like rubbish, lying around without any value. Stories gain their meaning and value by sharing, but it’s not as simple as that. The curator imparts her own value, status and trust upon the story.
Curators represent a new type of tribal leadership that operates bottom-up and peer to peer. As members of a tribe, curators will always be more native and relevant than any outsider will ever be. Within a tribe they are appreciated not only for leveraging their insider skills, but also for sustaining and developing its culture.
In 1962, former CIA Director John McCone used his own insight to take a second look at American U-2 spy plane images indicating the Soviet Union was installing surface-to-air anti-aircraft missiles (SAMs) in Cuba.
“Aha!” moments hit us all. Although they can range from new ways to tie a shoelace to ideas for the latest smart phone, acting upon such “insights” remains key to our collective future.
Or so says Gary Klein, a longtime applied cognitive psychologist and author of “Seeing What Others Don’t: The Remarkable Ways We Gain Insights” (Public Affairs), due out next month. In deft, readable and well-researched chapters, Klein argues that by simply being more attuned to life’s connections, contradictions and moments of creative desperation, we have a better chance of recognizing insights as they happen.
And therein lies the rub. As Klein notes, acting on insights can be both risky and complicated, particularly inside bureaucratic hierarchies which aren’t naturally disposed to disruption and change. History is rife with accounts of messengers bearing insights simply being ignored or worse. Thus, Forbes.com called Klein at his Ohio home for ideas on what to do with insights once we have them.
How do you define insight?
An insight is an unexpected shift in the way we understand events and the story we tell about what’s happening.
Not sure about his definition of insight, but it's still an important aspect.
(Phys.org) —In the wake of the sobering news that atmospheric carbon dioxide is now at its highest level in at least three million years, an important advance in the race to develop carbon-neutral renewable energy sources has been achieved.
Eric Schmidt, Jared Cohen, and Steve Clemons discuss the political limitations of the Internet.
It's easy to assume that a global Internet, with all its promise of scaled communication and education and democratization, will eventually help to foster democracy. But it's also not entirely accurate to assume that. In a conversation with The Atlantic's Steve Clemons yesterday evening, Eric Schmidt and Jared Cohen -- co-Googlers and co-authors of The New Digital Age: Reshaping the Future of People, Nations, and Business -- made a point of emphasizing the limitations of technological innovation. Particularly when it comes to geopolitical change.
How will a mass influx of robots affect human employment?
In the book Race Against the Machine, Erik Brynjolfsson and Andrew McAfee of MIT’s Sloan School of Management present a chart showing U.S. productivity, GDP, employment, and income from 1953 to 2011. The chart looks as you would expect from 1953 until the mid-1980s, with every one of the measures rising together: employees work more productively, companies make more money, and more hires occur as the middle class swells.
Then, during Reagan’s tenure, the bad news begins to show its face. First, even though productivity and GDP continue their upward arc, median household income starts to level off. That is unsettling, since it suggests that companies can get richer and yet employees can stop benefiting from increasing GDP: what happened to trickle-down? A decade later, in the mid-1990s, more trouble crops up: employment flattens as GDP and productivity continue even faster growth.
Brynjolfsson and McAfee argue that these are signs of a true sea change in the dynamics of productivity and employment. Contrary to popular conceptions that all we need is more technological innovation to increase employment, they argue, technological innovation is itself among the forces behind the change.
The elephant in the room is how robotics will play out for human employment in the long term. New robots will take on advanced manufacturing, tutoring, scheduling, and customer relations. They will operate equipment, manage construction, run backhoes, and, yes, even drive tomorrow’s cars.
Take it easy when using social media. The signs of lurking on someone's account are easy to spot.
Just met a really interesting guy or gal? These days, the first thing most people do is turn to social media to find out everything they can about the person. But this can lead to disaster — and quickly get you labeled as a creeper. First date? It's not going to happen if you're sloppy while sleuthing.
When it comes to using social media, consult an expert — also known as a teenager. Our resident teen expert (my daughter Elizabeth) tells TechNewsDaily how to gather information about someone without creeping your crush out.
Instagram has replaced Facebook as the top social network to post photos of your activities, friends and interests. And it's easy to find people on Instagram because searching by real name brings up a person's account even if they use a catchy screen name. But it's easy to slip up.
"Be careful when scrolling through an Instagram profile," 15-year-old Elizabeth said. "It's easy to accidentally double tap [a shortcut to liking a photo], then everyone knows you scrolled all the way down to pictures posted 42 weeks ago."
Liking a photo on Instagram posts your name in a list and sends a notification to the photo's owner.
To see what people are thinking about, head to Twitter. While favoriting a tweet can be a subtle way to show interest, don't go overboard.
Enhancing the flow of information through the brain could be crucial to making neuroprosthetics practical.
The abilities to learn, remember, evaluate, and decide are central to who we are and how we live. Damage to or dysfunction of the brain circuitry that supports these functions can be devastating, leading to Alzheimer’s, schizophrenia, PTSD, or many other disorders. Current treatments, which are drug-based or behavioral, have limited efficacy in treating these problems. There is a pressing need for something more effective.
One promising approach is to build an interactive device to help the brain learn, remember, evaluate, and decide. One might, for example, construct a system that would identify patterns of brain activity tied to particular experiences and then, when called upon, impose those patterns on the brain. Ted Berger, Sam Deadwyler, Robert Hampson, and colleagues have used this approach (see “Memory Implants”). They are able to identify and then impose, via electrical stimulation, specific patterns of brain activity that improve a rat’s performance in a memory task. They have also shown that in monkeys, stimulation can help the animal perform a task in which it must remember a particular item.
Their ability to improve performance is impressive. However, there are fundamental limitations to an approach where the desired neural pattern must be known and then imposed. The animals used in their studies were trained to do a single task for weeks or months and the stimulation was customized to produce the right outcome for that task. This is only feasible for a few well-learned experiences in a predictable and constrained environment.