A radical theory predicting the existence of “time crystals” — perpetual motion objects that break the symmetry of time — is being put to the test.
In February 2012, the Nobel Prize-winning physicist Frank Wilczek decided to go public with a strange and, he worried, somewhat embarrassing idea. Impossible as it seemed, Wilczek had developed an apparent proof of “time crystals” — physical structures that move in a repeating pattern, like minute hands rounding clocks, without expending energy or ever winding down. Unlike clocks or any other known objects, time crystals derive their movement not from stored energy but from a break in the symmetry of time, enabling a special form of perpetual motion.
“Most research in physics is continuations of things that have gone before,” said Wilczek, a professor at the Massachusetts Institute of Technology. This, he said, was “kind of outside the box.”
The idea came to Wilczek while he was preparing a class lecture in 2010. “I was thinking about the classification of crystals, and then it just occurred to me that it’s natural to think about space and time together,” he said. “So if you think about crystals in space, it’s very natural also to think about the classification of crystalline behavior in time.”
When matter crystallizes, its atoms spontaneously organize themselves into the rows, columns and stacks of a three-dimensional lattice. An atom occupies each “lattice point,” but the balance of forces between the atoms prevents them from inhabiting the space between. Because the atoms suddenly have a discrete, rather than continuous, set of choices for where to exist, crystals are said to break the spatial symmetry of nature — the usual rule that all places in space are equivalent. But what about the temporal symmetry of nature — the rule that stable objects stay the same throughout time?
Via Dr. Stefan Gruenwald
A team of entomologists from the University of Illinois has found a possible link between the practice of feeding commercial honeybees high-fructose corn syrup and the collapse of honeybee colonies around the world.
Since approximately 2006, groups that manage commercial honeybee colonies have been reporting what has become known as colony collapse disorder—whole colonies of bees simply died, with no apparent cause. As time has passed, the disorder has been reported at sites all across the world, even as scientists have raced to find the cause, and a possible cure. To date, most evidence has implicated pesticides used to kill other pests, such as mites. In this new effort, the researchers have found evidence suggesting the real culprit might be high-fructose corn syrup, which beekeepers have been feeding bees after their natural staple, honey, has been taken away from them.
Commercial honeybee enterprises began feeding bees high-fructose corn syrup in the 1970s, after research indicated that doing so was safe. Since that time, new pesticides have been developed and put into use, and over time it appears the bees' immune response to such compounds may have become compromised.
The researchers aren't suggesting that high-fructose corn syrup is itself toxic to bees. Instead, they say their findings indicate that by eating the replacement food instead of honey, the bees are no longer exposed to other chemicals that help them fight off toxins, such as those found in pesticides.
Specifically, they found that when bees are exposed to the compound p-coumaric acid, their immune system appears stronger—it turns on detoxification genes. P-coumaric acid is found in pollen walls, not nectar, and makes its way into honey inadvertently by sticking to the legs of bees as they visit flowers. Similarly, the team discovered other compounds in poplar sap that appear to do much the same thing. Altogether, it adds up to a diet that helps bees fight off toxins, the researchers report. Taking away the honey to sell it and feeding the bees high-fructose corn syrup instead, they claim, compromises their immune systems, making them more vulnerable to the toxins that are meant to kill other pests.
Biologists at UC San Diego have identified eight genes never before suspected to play a role in wound healing that are called into action near the areas where wounds occur.
After injury to the animal epidermis, a variety of genes are transcriptionally activated in nearby cells to regenerate the missing cells and facilitate barrier repair. The range and types of diffusible wound signals that are produced by damaged epidermis and function to activate repair genes during epidermal regeneration remain a subject of very active study in many animals. In Drosophila embryos, serine proteases are locally activated around wound sites, and are also required for localized activation of epidermal repair genes. The serine protease trypsin is sufficient to induce a striking global epidermal wound response without inflicting cell death or compromising the integrity of the epithelial barrier. The fly researchers developed a trypsin wounding treatment as an amplification tool to more fully understand the changes in the Drosophila transcriptome that occur after epidermal injury.
By comparing these array results with similar results on mammalian skin wounding, they were able to see which evolutionarily conserved pathways are activated after epidermal wounding in very diverse animals. This innovative serine protease-mediated wounding protocol allowed the researchers to identify eight additional genes that are activated in epidermal cells in the immediate vicinity of puncture wounds, and the functions of many of these genes suggest novel genetic pathways that may control epidermal wound repair. Additionally, these data augment the evidence that clean puncture wounding can mount a powerful innate immune transcriptional response, with different innate immune genes being activated in an interesting variety of ways. These include puncture-induced activation only in epidermal cells in the immediate vicinity of wounds, or in all epidermal cells, or specifically in the fat body, or in multiple tissues.
By mimicking a technique used by an intestinal parasite of fish, researchers have developed a flexible patch studded with microneedles that holds skin grafts in place more strongly than surgical staples do. After burrowing into the walls of a fish's intestines, the spiny-headed worm Pomphorhynchus laevis inflates its proboscis to better embed itself in the soft tissue. In the new patch (sample shown in main image), the stiff polystyrene core of the 700-micrometer-tall needles (inset) penetrates the tissue; then a thin hydrogel coating on the tip of each needle—a coating based on the material in disposable diapers that expands when it gets wet—swells to help anchor the patch in place. In tests using skin grafts, adhesion strength of the patch was more than three times higher than surgical staples, the researchers report online today in Nature Communications. Because the patch doesn't depend on chemical adhesives for its gripping power, there's less chance for patients to have an allergic reaction. And because the microneedles are about one-quarter the length of typical surgical staples, the patches cause less tissue damage when they're removed, the researchers contend. Besides holding grafts in place, the patch could be used to hold the sides of a wound or an incision together—even, in theory, ones inside the body if a slowly dissolving version of the patch can be developed. Moreover, the researchers say, the hydrogel coating holds promise as a way to deliver proteins, drugs, or other therapeutic substances to patients.
Scientists investigate previously unknown sprays of X-rays and bursts of gamma rays.
A lightning bolt is one of nature’s most over-the-top phenomena, rarely failing to elicit at least a ping of awe no matter how many times a person has witnessed one. With his iconic kite-and-key experiments in the mid-18th century, Benjamin Franklin showed that lightning is an electrical phenomenon, and since then the general view has been that lightning bolts are big honking sparks no different in kind from the little ones generated by walking in socks across a carpeted room.
But scientists recently discovered something mind-bending about lightning: Sometimes its flashes are invisible, just sudden pulses of unexpectedly powerful radiation. It’s what Joseph Dwyer, a lightning researcher at the Florida Institute of Technology, has termed dark lightning.
Unknown to Franklin but now clear to a growing roster of lightning researchers and astronomers is that along with bright thunderbolts, thunderstorms unleash sprays of X-rays and even intense bursts of gamma rays, a form of radiation normally associated with such cosmic spectacles as collapsing stars. The radiation in these invisible blasts can carry a million times as much energy as the radiation in visible lightning, but that energy dissipates quickly in all directions rather than remaining in a stiletto-like lightning bolt.
Dark lightning appears sometimes to compete with normal lightning as a way for thunderstorms to vent the electrical energy that gets pent up inside their roiling interiors, Dwyer says. Unlike with regular lightning, though, people struck by dark lightning, most likely while flying in an airplane, would not get hurt. But according to Dwyer’s calculations, they might receive in an instant the maximum safe lifetime dose of ionizing radiation — the kind that wreaks the most havoc on the human body.
A 15-unit apartment building constructed in the German city of Hamburg has 129 algae-filled louvered tanks hanging over the exterior of its south-east and south-west sides—making it the first building in the world to be powered exclusively by algae. Designed by Arup, SSC Strategic Science Consultants and Splitterwerk Architects, and named the Bio Intelligent Quotient (BIQ) House, the building demonstrates the ability to use algae as a way to heat and cool large buildings.
The Chinese government says its so-called "one-child policy" has succeeded in reining in its population. But more than three decades after the policy's imple...
Via Natalie K Jensen
Sallyann Griffin's insight:
The road to hell is paved with good intentions.
In the past 100 years, average temperatures on Earth have changed by 1.3 degrees Fahrenheit. Previously, that large of a swing took 5,000 years. That's the word from researchers who pored over temperature data going back to the end of the last ice age.
There's plenty of evidence that the climate has warmed up over the past century, and climate scientists know this has happened throughout the history of the planet. But they want to know more about how this warming is different.
Now a research team says it has some new answers. It has put together a record of global temperatures going back to the end of the last ice age — about 11,000 years ago — when mammoths and saber-tooth cats roamed the planet. The study confirms that what we're seeing now is unprecedented.
What the researchers did is peer into the past. They read ice cores from polar regions that show what temperatures were like over hundreds of thousands of years. But those only reveal changes in those specific regions; cores aren't so good at depicting what happened to the whole planet. Tree rings give a more global record of temperatures, but only back about 2,000 years.
Shaun Marcott, a geologist at Oregon State University, says "global temperatures are warmer than about 75 percent of anything we've seen over the last 11,000 years or so." The other way to look at that is, 25 percent of the time since the last ice age, it's been warmer than now.
You might think, so what's to worry about? But Marcott says the record shows just how unusual our current warming is. "It's really the rates of change here that's amazing and atypical," he says. Essentially, it's warming up superfast.
Here's what happened. After the end of the ice age, the planet got warmer. Then, 5,000 years ago, it started to get cooler — but really slowly. In all, it cooled 1.3 degrees Fahrenheit, up until the last century or so. Then it flipped again — global average temperature shot up.
"Temperatures now have gone from that cold period to the warm period in just 100 years," Marcott says.
So it's taken just 100 years for the average temperature to change by 1.3 degrees, when it took 5,000 years to do that before.
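The arithmetic behind that comparison is easy to check. A minimal sketch, using only the figures quoted in the piece (1.3 degrees Fahrenheit over 100 years now, versus the same swing over roughly 5,000 years before):

```python
# Compare the two warming/cooling rates quoted in the article.
recent_rate = 1.3 / 100    # degrees F per year, over the last century
past_rate = 1.3 / 5000     # degrees F per year, over the post-ice-age cooling
speedup = recent_rate / past_rate

print(f"Recent change is roughly {speedup:.0f}x faster")  # → 50x
```

In other words, the modern swing is happening about fifty times faster than the slow cooling that preceded it, which is the "superfast" rate Marcott is pointing to.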
The research team tracked temperature by studying chemicals in the shells of tiny, fossilized sea creatures called foraminifera. Their temperature record matches other techniques that look back 2,000 years, which supports the validity of their much longer record.
When a school teacher writes her name on a blackboard on the first day of class, what she's really doing is crushing the skeletons of terribly ancient earthlings into a form that spells out the name "Mrs. ...".
A piece of chalk, when you think about it too much, is a miracle. What is it, exactly? Well, if you look under a microscope, as British naturalist Thomas Huxley did in the 1860s, what you see is this (see figure). Chalk is composed of extremely small white globules. They look, up close, like snowballs made from brittle paper plates. Those plates, it turns out, are part of ancient skeletons that once belonged to roundish little critters that lived and floated in the sea, captured a little sunshine and carbon, then died and sank to the bottom. There still are trillions of them floating about in the oceans today, sucking up carbon dioxide, pocketing the carbon. Over the millennia, so many have died and plopped on top of each other that the weight of them and the water above has pressed them into a white blanket of rock, entirely composed of teeny skeletons. Scientists call these ancient plates "coccoliths." Technically, they come from single-celled phytoplankton algae.
Chalk doesn't proclaim itself. It is usually out of view, buried in the ground below. Every so often, when a highway is being carved through a mountain, or when the sea and wind erode the side of a hill, the green cover comes off and you can see it. The White Cliffs of Dover are all chalk, piled hundreds of feet high.
In 1853, when the transatlantic cable was being laid, engineers would occasionally yank thick loops of wire up 10,000 feet from the ocean bottom, and every time, they found the same coating of white muck: chalk again. It turns out, writes biologist Bernd Heinrich, "the Atlantic mud, which stretches over a huge plain thousands of square miles, is raw chalk."
Since then geologists have found a chalk layer stretching 3,000 miles across Europe into Asia. It's under France, Germany, Russia, Egypt, Syria. How did it get there?
That, said Thomas Huxley (who first saw those teeny skeletons under his microscope) is one of the "most startling conclusions of physical science." In 1868, he gave a lecture to the "working men of Norwich" where he declared that "a great chapter of the history of the world is written in chalk."
Viruses form an invisible, parallel world on Earth: they kill half the bacteria in the ocean every day, and invade a microbe host 10 trillion times a second around the world. There are 10 billion trillion trillion viruses inhabiting Planet Earth, more than there are stars in the universe; stacked end to end, they would reach out 100 million light years.
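The stacking claim can be sanity-checked with rough numbers. A sketch under two assumptions of mine, not the article's: a global count of about 1e31 viruses (consistent with commonly cited estimates) and a typical virion size of about 100 nanometers:

```python
# Rough sanity check of the "100 million light years" stacking claim.
N_VIRUSES = 1e31          # assumed global virus count
VIRION_M = 100e-9         # assumed virion diameter, in meters (~100 nm)
LIGHT_YEAR_M = 9.4607e15  # meters in one light year

total_length_m = N_VIRUSES * VIRION_M
light_years = total_length_m / LIGHT_YEAR_M

print(f"Stacked end to end: roughly {light_years:.1e} light years")
```

The result comes out on the order of 1e8 light years, so the figure quoted above is at least internally consistent with those estimates.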
Over tens, hundreds and millions of years, our ancestors have been picking up retroviruses (HIV is a retrovirus), which reproduce by taking their genetic material and inserting it into our own chromosomes. There are probably about 100,000 elements in the human genome that you can trace to a virus ancestor. They make up about 8 percent of our genome, while genes that encode proteins make up only 1.2 percent of our genome—by that measure making us more virus than human.
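The two genome fractions quoted above can be compared directly; a minimal sketch using only the article's own figures (8 percent viral-derived sequence versus 1.2 percent protein-coding genes):

```python
# Compare the genome fractions quoted in the article.
viral_fraction = 0.08     # viral-derived DNA, ~8% of the human genome
coding_fraction = 0.012   # protein-coding genes, ~1.2% of the genome
ratio = viral_fraction / coding_fraction

print(f"Viral-derived DNA outweighs coding genes ~{ratio:.1f}-fold")  # ~6.7-fold
```

That roughly seven-fold gap is the whole basis of the "more virus than human" quip: the comparison is by sequence length, not by function.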
Occasionally, a retrovirus will end up in a sperm cell or an egg and insert its genes there, which then may give rise to a new organism, a new animal, a new person where every cell in that body has got that virus.
Hoping to expand our understanding of auroras and other fleeting atmospheric events, a team of space-weather researchers designed and built NORUSCA II, a new camera with unprecedented capabilities that can simultaneously image multiple spectral bands, in essence different wavelengths or colors, of light. The camera was tested at the Kjell Henriksen Observatory (KHO) in Svalbard, Norway, where it produced the first-ever hyperspectral images of auroras—commonly referred to as "the Northern (or Southern) Lights"—and may already have revealed a previously unknown atmospheric phenomenon.
Via Sakis Koukouvis, Dr. Stefan Gruenwald
Scientists at Princeton University used off-the-shelf printing tools to create a functional ear that can 'hear' radio frequencies far beyond the range of normal human capability.
Creating organs using 3D printers is a recent advance; several groups have reported using the technology for this purpose in the past few months. But this is the first time that researchers have demonstrated that 3D printing is a convenient strategy to interweave tissue with electronics.
The technique allowed the researchers to combine the antenna electronics with tissue within the highly complex topology of a human ear. The researchers used an ordinary 3D printer to combine a matrix of hydrogel and calf cells with silver nanoparticles that form an antenna. The calf cells later develop into cartilage.
Manu Mannoor, a graduate student in McAlpine's lab and the paper's lead author, said that additive manufacturing opens new ways to think about the integration of electronics with biological tissue and makes possible the creation of true bionic organs in form and function. He said that it may be possible to integrate sensors into a variety of biological tissues, for example, to monitor stress on a patient's knee meniscus.
David Gracias, an associate professor at Johns Hopkins and co-author on the publication, said that bridging the divide between biology and electronics represents a formidable challenge that needs to be overcome to enable the creation of smart prostheses and implants.
"Biological structures are soft and squishy, composed mostly of water and organic molecules, while conventional electronic devices are hard and dry, composed mainly of metals, semiconductors and inorganic dielectrics," he said. "The differences in physical and chemical properties between these two material classes could not be any more pronounced."
The finished ear consists of a coiled antenna inside a cartilage structure. Two wires lead from the base of the ear and wind around a helical "cochlea" – the part of the ear that senses sound – which can connect to electrodes. Although McAlpine cautions that further work and extensive testing would need to be done before the technology could be used on a patient, he said the ear in principle could be used to restore or enhance human hearing. He said electrical signals produced by the ear could be connected to a patient's nerve endings, similar to a hearing aid. The current system receives radio waves, but he said the research team plans to incorporate other materials, such as pressure-sensitive electronic sensors, to enable the ear to register acoustic sounds.
In addition to McAlpine, Verma, Mannoor and Gracias the research team includes: Winston Soboyejo, a professor of mechanical and aerospace engineering at Princeton; Karen Malatesta, a faculty fellow in molecular biology at Princeton; Yong Lin Kong, a graduate student in mechanical and aerospace engineering at Princeton; and Teena James, a graduate student in chemical and biomolecular engineering at Johns Hopkins.
In 2012, more than 3 million people had stents inserted in their coronary arteries. These tiny mesh tubes prop open blood vessels healing from procedures like a balloon angioplasty, which widens arteries blocked by clots or plaque deposits.
After about six months, most damaged arteries are healed and stay open on their own. The stent, however, is there for a lifetime. Most of the time, that's not a problem, says Patrick Bowen, a doctoral student studying materials science and engineering at Michigan Technological University.
The arterial wall heals in around the old stent with no ill effect. But the longer a stent is in the body, the greater the risk of late-stage side effects. For example, a permanent stent can cause intermittent inflammation and clotting at the implant site. In a small percentage of cases, the tiny metal segments that make up the stent can break and end up poking the arterial wall.
"When the stent stays in place 15, 20 or 25 years, you can see these side effects," says Bowen. "It's not uncommon to have a stent put in at age 60, and if you live to be 80, that's a long time for something to remain inert in your body."
That's why researchers are trying to develop a bioabsorbable stent, one that would gradually -- and harmlessly -- dissolve after the blood vessel is healed.
Many studies have investigated iron- and magnesium-based stents. However, iron is not promising: it rusts in the artery. Magnesium, on the other hand, dissolves too quickly. "We wondered, 'Isn't there something else?'" Bowen said. "And we thought, 'Why not zinc?'"
So they placed tiny zinc wires in the arteries of rats. The results were amazing. "The corrosion rate was exactly where it needed to be," Bowen said. The wires degraded at a rate just below 0.2 millimeters per year -- the "magic" value for bioabsorbable stents -- for the first three months.
After that, the corrosion accelerated, so the implant would not remain in the artery for too long. On top of that, the rats' arteries appeared healthy when the wires were removed, with tissue firmly grasping the implant.
"Plus, zinc reduces atherosclerosis," he added, referring to zinc's well-known ability to fight the development of plaque in the arteries. "How cool is that? A zinc stent might actually have health benefits."
There is one drawback. "A stent made of conventional zinc would not be strong enough to hold open a human artery," he said. "We need to beef it up, double the strength."
"The good news is that there are commercial zinc alloys that are up to three times stronger," Bowen said. "We know we can get there. We just don't want to ruin our corrosion behavior."
Two children with an aggressive form of childhood leukemia had a complete remission of their disease—showing no evidence of cancer cells in their bodies—after treatment with a novel cell therapy that reprogrammed their immune cells to rapidly multiply and destroy leukemia cells. A research team from The Children’s Hospital of Philadelphia and the University of Pennsylvania published the case report of two pediatric patients Online First today in The New England Journal of Medicine. It will appear in the April 18 print issue.
The current study builds on Grupp’s ongoing collaboration with Penn Medicine scientists who originally developed the modified T cells as a treatment for B-cell leukemias. The Penn team reported on early successful results of a trial using this cell therapy in three adult chronic lymphocytic leukemia (CLL) patients in August of 2011. Two of those patients remain in remission more than 2½ years following their treatment, and as the Penn researchers reported in December 2012 at the annual meeting of the American Society of Hematology, seven out of ten adult patients treated at that point responded to the therapy. The team is led by the current study’s senior author, Carl H. June, M.D., the Richard W. Vague Professor in Immunotherapy in the department of Pathology and Laboratory Medicine and the Perelman School of Medicine at the University of Pennsylvania and director of Translational Research in Penn’s Abramson Cancer Center.
“We’re hopeful that our efforts to treat patients with these personalized cellular therapies will reduce or even replace the need for bone marrow transplants, which carry a high mortality risk and require long hospitalizations,” June said. “In the long run, if the treatment is effective in these late-stage patients, we would like to explore using it up front, and perhaps arrive at a point where leukemia can be treated without chemotherapy.”
The research team adapted the original CLL treatment to combat another B-cell leukemia: ALL, which is the most common childhood cancer. After decades of research, oncologists can currently cure 85 percent of children with ALL. Both children in the current study had a high-risk type of ALL that stubbornly resists conventional treatments.
The new study used a relatively new approach in cancer treatment: immunotherapy, which manipulates the immune system to increase its cancer-fighting capabilities. Here the researchers engineered T cells to selectively kill another type of immune cell called B cells, which had become cancerous.
The researchers removed some of each patient’s own T cells and modified them in the laboratory to create a type of CAR (chimeric antigen receptor) cell called a CTL019 cell. These cells are designed to attack a protein called CD19 that occurs only on the surface of certain B cells.
By creating an antibody that recognizes CD19 and then connecting that antibody to T cells, the researchers created in CTL019 cells a sort of guided missile that locks in on and kills B cells, thereby attacking B-cell leukemia. After being returned to the patient’s body, the CTL019 cells multiply a thousand times over and circulate throughout the body. Importantly, they persist for months afterward, guarding against a recurrence of this specific type of leukemia.
While the CTL019 cells eliminate leukemia, they also can generate an overactive immune response, called a cytokine release syndrome, involving dangerously high fever, low blood pressure, and other side effects. This complication was especially severe in one of the patients, Emily, and her hospital team needed to provide her with treatments that rapidly relieved the treatment-related symptoms by blunting the immune overresponse, while still preserving the modified T cells' anti-leukemia activity.
“The comprehensive testing plan that we have put in place to study patients’ blood and bone marrow while they’re undergoing this therapy is allowing us to be able to follow how the T cells are behaving in patients in real time, and guides us to be able to design more detailed and specific experiments to answer critical questions that come up from our studies,” Kalos said.
The CTL019 therapy eliminates all B cells that carry the CD19 cell receptor: healthy cells as well as those with leukemia. Patients can live without B cells, although they require regular replacement infusions of immunoglobulin, which can be given at home, to perform the immune function normally provided by B cells.
The research team continues to refine their approach using this new technology and explore reasons why some patients may not respond to the therapy or may experience a recurrence of their disease. Grupp said the appearance of the CD19-negative leukemia cells in the second child may have resulted from her prior treatments. Unlike Emily, the second patient had received an umbilical cord cell transplant from a matched donor, so her engineered T cells were derived from her donor (transplanted) cells, with no additional side effects. Oncologists had previously treated her with blinatumomab, a monoclonal antibody, in hopes of fighting the cancer. The prior treatments may have selectively favored a population of CD19-negative T cells.
“The emergence of tumor cells that no longer contain the target protein suggests that in particular patients with high-risk ALL, we may need to broaden the treatment to include additional T cells that may go after additional targets,” added Grupp. “However, the initial results with this immune-based approach are encouraging, and may later even be developed into treatments for other types of cancer.”
“It could eventually mean that instead of taking insulin injections three times a day, you might take an injection of this hormone once a week or once a month, or in the best case maybe even once a year,” said Doug Melton. Melton and postdoctoral fellow Peng Yi discovered the hormone betatrophin, which has the potential to improve diabetes treatment.
Researchers at the Harvard Stem Cell Institute (HSCI) have discovered a hormone that holds promise for a dramatically more effective treatment of type 2 diabetes, a metabolic illness afflicting an estimated 26 million Americans. The researchers believe that the hormone might also have a role in treating type 1, or juvenile, diabetes.
The hormone, called betatrophin, causes mice to produce insulin-secreting pancreatic beta cells at up to 30 times the normal rate. The new beta cells only produce insulin when called for by the body, offering the potential for the natural regulation of insulin and a great reduction in the complications associated with diabetes, the leading medical cause of amputations and non-genetic loss of vision.
The researchers who discovered betatrophin, HSCI co-director Doug Melton and postdoctoral fellow Peng Yi, caution that much work remains to be done before it could be used as a treatment in humans. But the results of their work, which was supported in large part by a federal research grant, already have attracted the attention of drug manufacturers.
Will an astronaut who falls into a black hole be crushed or burned to a crisp?
In March 2012, Joseph Polchinski began to contemplate suicide — at least in mathematical form. A string theorist at the Kavli Institute for Theoretical Physics in Santa Barbara, California, Polchinski was pondering what would happen to an astronaut who dived into a black hole. Obviously, he would die. But how?
According to the then-accepted account, he wouldn’t feel anything special at first, even when his fall took him through the black hole’s event horizon: the invisible boundary beyond which nothing can escape. But eventually — after hours, days or even weeks if the black hole was big enough — he would begin to notice that gravity was tugging at his feet more strongly than at his head. As his plunge carried him inexorably downwards, the difference in forces would quickly increase and rip him apart, before finally crushing his remnants into the black hole’s infinitely dense core.
But Polchinski’s calculations, carried out with two of his students — Ahmed Almheiri and James Sully — and fellow string theorist Donald Marolf at the University of California, Santa Barbara (UCSB), were telling a different story. In their account, quantum effects would turn the event horizon into a seething maelstrom of particles. Anyone who fell into it would hit a wall of fire and be burned to a crisp in an instant.
The team’s verdict, published in July 2012, shocked the physics community. Such firewalls would violate a foundational tenet of physics that was first articulated almost a century ago by Albert Einstein, who used it as the basis of general relativity, his theory of gravity. Known as the equivalence principle, it states in part that an observer falling in a gravitational field — even the powerful one inside a black hole — will see exactly the same phenomena as an observer floating in empty space. Without this principle, Einstein’s framework crumbles.
Well aware of the implications of their claim, Polchinski and his co-authors offered an alternative plot ending in which a firewall does not form. But this solution came with a huge price. Physicists would have to sacrifice the other great pillar of their science: quantum mechanics, the theory governing the interactions between subatomic particles.
The result has been a flurry of research papers about firewalls, all struggling to resolve the impasse, none succeeding to everyone’s satisfaction. Steve Giddings, a quantum physicist at UCSB, describes the situation as “a crisis in the foundations of physics that may need a revolution to resolve”.
With that thought in mind, black-hole experts came together last month at CERN, Europe’s particle-physics laboratory near Geneva, Switzerland, to grapple with the issue face to face. They hoped to reveal the path towards a unified theory of ‘quantum gravity’ that brings all the fundamental forces of nature under one umbrella — a prize that has eluded physicists for decades.
The firewall idea “shakes the foundations of what most of us believed about black holes”, said Raphael Bousso, a string theorist at the University of California, Berkeley, as he opened his talk at the meeting. “It essentially pits quantum mechanics against general relativity, without giving us any clues as to which direction to go next.”
The roots of the firewall crisis go back to 1974, when physicist Stephen Hawking at the University of Cambridge, UK, showed that quantum effects cause black holes to run a temperature. Left in isolation, the holes will slowly spew out thermal radiation — photons and other particles — and gradually lose mass until they evaporate away entirely (see Figure).
These particles aren’t the firewall, however; the subtleties of relativity guarantee that an astronaut falling through the event horizon will not notice this radiation. But Hawking’s result was still startling — not least because the equations of general relativity say that black holes can only swallow mass and grow, not evaporate.
Hawking’s argument basically comes down to the observation that in the quantum realm, ‘empty’ space isn’t empty. Down at this sub-sub-microscopic level, it is in constant turmoil, with pairs of particles and their corresponding antiparticles continually popping into existence before rapidly recombining and vanishing. Only in very delicate laboratory experiments does this submicroscopic frenzy have any observable consequences. But when a particle–antiparticle pair appears just outside a black hole’s event horizon, Hawking realized, one member could fall in before the two recombined, leaving the surviving partner to fly outwards as radiation. The doomed particle would balance the positive energy of the outgoing particle by carrying negative energy inwards — something allowed by quantum rules. That negative energy would then get subtracted from the black hole’s mass, causing the hole to shrink.
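The scale of the effect can be made concrete with two standard textbook formulas (not derived in this article): the Hawking temperature of a black hole of mass $M$, and the approximate time it takes to evaporate completely,

```latex
T_H = \frac{\hbar c^3}{8 \pi G M k_B},
\qquad
t_{\mathrm{evap}} \approx \frac{5120\,\pi\, G^2 M^3}{\hbar c^4}.
```

For a stellar-mass black hole the temperature is of order $10^{-8}$ K and the evaporation time vastly exceeds the age of the Universe, which is why the radiation has never been observed directly.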
Hawking’s original analysis has since been refined and extended by many researchers, and his conclusion is now accepted almost universally. But with it came the disturbing realization that black-hole radiation leads to a paradox that challenges quantum theory.
Quantum mechanics says that information cannot be destroyed. In principle, it should be possible to recover everything there is to know about the objects that fell into a black hole by measuring the quantum state of the radiation coming out. But Hawking showed that it was not that simple: the radiation coming out is random. Toss in a kilogram of rock or a kilogram of computer chips and the result will be the same. Watch the black hole until it dies, and there would still be no way to tell how it was formed or what fell into it.
But in 1997, the deadlock was broken by a discovery made by Juan Maldacena, a physicist then at Harvard University in Cambridge, Massachusetts. Maldacena’s insight built on an earlier proposal that any three-dimensional (3D) region of our Universe can be described by information encoded on its two-dimensional (2D) boundary, in much the same way that laser light can encode a 3D scene on a 2D hologram. “We used the word ‘hologram’ as a metaphor,” says Leonard Susskind, a string theorist at Stanford University in California, and one of those who came up with the proposal. “But after doing more mathematics, it seemed to make literal sense that the Universe is a projection of information on the boundary.”
What Maldacena came up with was a concrete mathematical formulation of the hologram idea that made use of ideas from superstring theory, which posits that elementary particles are composed of tiny vibrating loops of energy. His model envisages a 3D universe containing strings and black holes that are governed only by gravity, bounded by a 2D surface on which elementary particles and fields obey ordinary quantum laws without gravity. Hypothetical residents of the 3D space would never see this boundary because it is infinitely far away. But that wouldn’t matter: anything happening in the 3D universe could be described equally well by equations in the 2D universe, and vice versa. “I found that there’s a mathematical dictionary that allows you to go back and forth between the languages of these two worlds,” Maldacena explains.
One of the most promising resolutions, according to Susskind, has come from Daniel Harlow, a quantum physicist at Princeton University in New Jersey, and Patrick Hayden, a computer scientist at McGill University in Montreal, Canada. They considered whether an astronaut could ever detect the paradox with a real-world measurement. To do so, he or she would first have to decode a significant portion of the outgoing Hawking radiation, then dive into the black hole to examine the infalling particles. The pair’s calculations show that the radiation is so tough to decode that the black hole would evaporate before the astronaut was ready to jump in. “There’s no fundamental law preventing someone from measuring the paradox,” says Harlow. “But in practice, it’s impossible.”
Giddings, however, argues that the firewall paradox requires a radical solution. He has calculated that if the entanglement between the outgoing Hawking radiation and its infalling twin is not broken until the escaping particle has travelled a short distance away from the event horizon, then the energy released would be much less ferocious, and no firewall would be generated. This protects the equivalence principle, but requires some quantum laws to be modified. At the CERN meeting, participants were tantalized by the possibility that Giddings’ model could be tested: it predicts that when two black holes merge, they may produce distinctive ripples in space-time that can be detected by gravitational-wave observatories on Earth.
There is another option that would save the equivalence principle, but it is so controversial that few dare to champion it: maybe Hawking was right all those years ago and information is lost in black holes. Ironically, it is John Preskill, the California Institute of Technology physicist who famously bet against Hawking’s claim that information is destroyed, who raised this alternative, at a workshop on firewalls at Stanford at the end of last year. “It’s surprising that people are not seriously thinking about this possibility because it doesn’t seem any crazier than firewalls,” he says — although he adds that his instinct is still that information survives.
The reluctance to revisit Hawking’s old argument is a sign of the immense respect that physicists have for Maldacena’s dictionary relating gravity to quantum theory, which seemingly proved that information cannot be lost. “This is the deepest ever insight into gravity because it links it to quantum fields,” says Polchinski, who compares Maldacena’s result — which has now accumulated close to 9,000 citations — to the nineteenth-century discovery that a single theory connects light, electricity and magnetism. “If the firewall argument had been made in the early 1990s, I think it would have been a powerful argument for information loss,” says Bousso. “But now nobody wants to entertain the possibility that Maldacena is wrong.”
Maldacena is flattered that most physicists would back him in a straight-out fight against Einstein, although he believes it won’t come to that. “To completely understand the firewall paradox, we may need to flesh out that dictionary,” he says, “but we won’t need to throw it out.”
The only consensus so far is that this problem will not go away any time soon. During his talk, Polchinski fielded all proposed strategies for mitigating the firewall, carefully highlighting what he sees as their weaknesses. “I’m sorry that no one has gotten rid of the firewall,” he concludes. “But please keep trying.”
Via Dr. Stefan Gruenwald
This is a special bat, and not just because of its strikingly beautiful spots and stripes. This is a rare specimen, whose discovery in South Sudan led researchers to identify a new genus of bat. The bat is just the fifth specimen of its kind ever collected.
The distinctly patterned bat was discovered by researchers from Bucknell University and Fauna & Flora International during a field research expedition with wildlife authorities in South Sudan.
DeeAnn Reeder, an Associate Professor of Biology at Bucknell and first author of the paper announcing the new bat genus, recognized the bat as the same species as a specimen captured in the Democratic Republic of the Congo in 1939. That specimen was classified as Glauconycteris superba, but after detailed analyses she and her colleagues determined it did not belong in the genus Glauconycteris. It was so unique that they needed to create a new genus for it.
Reeder and her colleagues named the new genus Niumbaha, which means “rare” or “unusual” in Zande, the language spoken in Western Equatoria State, where the bat was captured. The bat’s full scientific name is Niumbaha superba, reflecting both the rarity and the magnificence of this creature.
“Our discovery of this new genus of bat is an indicator of how diverse the area is and how much work remains,” Reeder said in a press release.
Via Dr. Stefan Gruenwald
New research suggests that more than 1,000 bird species became extinct on Pacific islands following human colonization.
Scientists had long known extinction rates in the region were high but estimates varied from 800 to 2,000 bird species. The researchers led by Prof Tim Blackburn of the University of Tennessee studied the extinction rates of nonperching land birds on Pacific islands from 700 to 3,500 years ago. They used fossil records from 41 Pacific islands such as Hawaii and Fiji to run an analytical technique called the Bayesian mark-recapture method. This allowed them to model gaps in the fossil record for more than 300 Pacific islands and estimate the number of unknown extinct species.
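The logic of a mark-recapture estimate can be illustrated with a deliberately simplified sketch; the study’s actual Bayesian model is far richer, and every number below is invented for illustration. The idea: the fossil detection probability, estimated from species still alive today (whose true count is known), is used to scale up the tally of extinct species actually found as fossils.

```python
# Simplified Lincoln-Petersen-style estimate of total extinctions.
# Illustrative only: the study used a full Bayesian mark-recapture
# model, and all numbers below are invented.

def estimate_extinctions(extant_total, extant_in_fossils, extinct_in_fossils):
    """Estimate total extinct species from fossil detectability.

    The detection probability p is estimated from living species
    (whose true count is known), then applied to the extinct species
    actually found as fossils.
    """
    p_detect = extant_in_fossils / extant_total
    return extinct_in_fossils / p_detect

# Hypothetical island: 30 living species, 12 of them found as fossils,
# and 8 extinct species known only from the fossil record.
total_extinct = estimate_extinctions(30, 12, 8)
print(round(total_extinct))  # 8 / 0.4 = 20
```

Applied island by island, this kind of correction is what lets the shortfall between known fossils and true extinctions be quantified rather than guessed.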
“We used information on what species are currently on the islands and what species are in the fossil record to estimate the probability of finding a species in the fossil record,” said co-author Prof Alison Boyer, also from the University of Tennessee.
The team found that 983 land bird populations (roughly two-thirds) disappeared between the first human arrival and European colonization. The disappearances are linked to overhunting, forest clearance and introduced species.
“We calculate that human colonization of remote Pacific islands caused the global extinction of close to a thousand species of nonperching land birds alone,” Prof Boyer said. “However, it is likely there are more species that were affected by human presence. Sea bird and perching bird extinctions will add to this total.”
Species lost include several species of moa-nalos, large flightless waterfowl from Hawai’i, and the New Caledonian Sylviornis, a relative of the game birds but which weighed in at around 30 kg, three times as heavy as a swan.
The researchers found the extinction rates differed depending on island and species characteristics. For example, larger islands had lower rates of extinction because they had larger populations of each bird species. Islands with more rainfall also had lower extinction rates because they experienced less deforestation by settlers. Bird species that were flightless and large-bodied had a higher rate of extinction because they were easier and more profitable to hunt and their lower rates of population growth inhibited recovery from overhunting or habitat loss.
“Flightless species were 33 times more likely to go extinct than those that could fly,” Prof Boyer explained. “Also, species that only populated a single island were 24 times more likely to go extinct than widespread species.”
Via Dr. Stefan Gruenwald
In the early 1950s, a 66-year-old woman with colon cancer received a blood transfusion, but she suffered a severe reaction to the transfused blood. In the published case study, the medical journal Revue D'Hématologie identified her only as "Patient Vel."
It was determined that Mrs. Vel had developed a potent antibody against some unknown molecule found on the red blood cells of most people in the world, but not on her own. Nobody could identify the molecule. A blood mystery began, and from her case a new blood type, "Vel-negative," was described in 1952.
Soon it was discovered that Mrs. Vel was not alone. An estimated 200,000 people in Europe and a similar number in North America are Vel-negative, about 1 in 2,500. For these people, successive blood transfusions can easily lead to kidney failure and death. So, for sixty years, doctors and researchers hunted for the underlying cause of this blood type.
Now a team of scientists from the University of Vermont and France has found the missing molecule, a tiny protein called SMIM1, and the mystery is solved. "Our findings promise to provide immediate assistance to health-care professionals should they encounter this rare but vexing blood type," says the University of Vermont's Bryan Ballif. Last year, Ballif and Arnaud identified the proteins responsible for two other rare blood types, Junior and Langereis, moving the global count of understood blood types or systems from 30 to 32. Now, with Vel, the number rises to 33.

The little protein didn't reveal its identity easily. "I had to fish through thousands of proteins," Ballif says. Several experiments failed to find the culprit because of its unusual biochemistry and pipsqueak size. But he eventually nabbed it using a high-resolution mass spectrometer funded by the Vermont Genetics Network. And what he found was new to science. "It was only a predicted protein based on the human genome," says Ballif; it hadn't yet been observed. It has since been named: Small Integral Membrane Protein 1, or SMIM1.
Next, Lionel Arnaud of the French National Institute of Blood Transfusion and his colleagues traced the blood type to its genetic root, showing that Vel-negative individuals carry a mutation that inactivates the gene encoding SMIM1 — a finding that makes rapid DNA-based typing possible.
Today, personalized medicine— where doctors treat us based on our unique biological makeup—is a hot trend. "The science of blood transfusion has been attempting personalized medicine since its inception," Ballif notes, "given that its goal is to personalize a transfusion by making the best match possible between donor and recipient.
"Identifying and making available rare blood types such as Vel-negative blood brings us closer to a goal of personalized medicine. Even if you are that rare one person out of 2,500 that is Vel-negative, we now know how to rapidly type your blood and find blood for you—should you need a transfusion."
Via Dr. Stefan Gruenwald
At some point in the next decade, if advances in biotechnology continue on their current path, clones of extinct species such as the passenger pigeon, Tasmanian tiger and woolly mammoth could once again live among us. But cloning lost species—or “de-extinction” as some scientists call it—presents us with myriad ethical, legal and regulatory questions that must be answered, such as which (if any) species should be brought back and whether or not such creatures could be allowed to return to the wild. Such questions are set to be addressed at the TEDx DeExtinction conference, a day-long event in Washington, D.C., organized by Stewart Brand’s Revive & Restore project. Brand previewed the topics for discussion last week at the TED2013 conference in Long Beach, Calif.
Scientists are actively working on methods and procedures for bringing extinct species back to life, says Ryan Phelan, executive director of Revive & Restore and co-organizer of the TEDx event. “The technology is moving fast. What Stewart and I are trying to do with this meeting is for the first time to allow the public to start thinking about this. We’re going to hear from people who take it quite seriously. De-extinction is going to happen, and the questions are how does it get applied, when does it get used, what are the criteria which are going to be set?”
Cloning extinct species has been tried before—with moderate success. A clone of the extinct Pyrenean ibex, or bucardo (Capra pyrenaica pyrenaica), was born to a surrogate mother goat in 2009, nine years after the last member of its subspecies was killed by a falling tree. The cloned animal lived for just seven minutes. Revive & Restore itself has launched a project to try to resurrect the passenger pigeon, which went extinct in 1914.
Via Dr. Stefan Gruenwald
Scientists scanning the human brain can now tell whom a person is thinking of, marking the first time researchers have been able to identify whom people are imagining from imaging data alone.
Work to visualize thought is starting to pile up successes. Recently, scientists have used brain scans to decode imagery directly from the brain, such as what number people have just seen and what memory a person is recalling. They can now even reconstruct videos of what a person has watched based on their brain activity alone. Cornell University cognitive neuroscientist Nathan Spreng and his colleagues wanted to carry this research one step further by seeing if they could deduce the mental pictures of people that subjects conjure up in their heads.
“We are trying to understand the physical mechanisms that allow us to have an inner world, and a part of that is how we represent other people in our mind,” Spreng says. His team first gave 19 volunteers descriptions of four imaginary people they were told were real. Each of these characters had different personalities. Half the personalities were agreeable, described as liking to cooperate with others; the other half were less agreeable, depicted as cold and aloof or having similar traits. In addition, half these characters were described as outgoing and sociable extroverts, while the others were less so, depicted as sometimes shy and inhibited. The scientists matched the genders of these characters to each volunteer and gave them popular names like Mike, Chris, Dave or Nick, or Ashley, Sarah, Nicole or Jenny.
The researchers then scanned volunteers’ brains using functional magnetic resonance imaging (fMRI), which measures brain activity by detecting changes in blood flow. During the scans, the investigators asked participants to predict how each of the four fictitious people might behave in a variety of scenarios — for instance, if they were at a bar and someone else spilled a drink, or if they saw a homeless veteran asking for change.
“Humans are social creatures, and the social world is a complex place,” Spreng says. “A key aspect to navigating the social world is how we represent others.” The scientists discovered that each of the four personalities was linked to a unique pattern of brain activity in a part of the organ known as the medial prefrontal cortex. In other words, researchers could tell whom their volunteers were thinking about.
“This is the first study to show that we can decode what people are imagining,” Spreng says. The medial prefrontal cortex helps people deduce traits about others. These findings suggest this region is also where personality models are encoded, assembled and updated, helping people understand and predict the likely behavior of others and prepare for the future.
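A toy example gives the flavor of this kind of pattern decoding (an assumed setup for illustration, not the study’s actual pipeline): simulate a characteristic activity pattern for each of four imagined identities, then classify held-out noisy trials by correlating them against the mean training pattern of each identity.

```python
import numpy as np

# Toy pattern-decoding sketch. Illustrative only: real fMRI decoding
# works on measured voxel responses, not simulated vectors.
rng = np.random.default_rng(0)
n_ids, n_voxels, n_trials = 4, 50, 20

# Each imagined person gets a fixed "true" activity pattern;
# individual trials are noisy copies of it.
prototypes = rng.standard_normal((n_ids, n_voxels))
trials = prototypes[:, None, :] + 0.5 * rng.standard_normal(
    (n_ids, n_trials, n_voxels))

# Train on the first half of trials, test on the second half.
half = n_trials // 2
train, test = trials[:, :half], trials[:, half:]
templates = train.mean(axis=1)  # one mean pattern per identity

def decode(pattern):
    # Pick the identity whose template correlates best with the pattern.
    corrs = [np.corrcoef(pattern, t)[0, 1] for t in templates]
    return int(np.argmax(corrs))

correct = sum(decode(p) == i for i in range(n_ids) for p in test[i])
accuracy = correct / (n_ids * half)
print(accuracy)  # well above the 0.25 chance level
```

The real study’s classifier faced a far noisier problem, but the principle is the same: if decoding accuracy reliably beats chance, the region’s activity patterns carry information about which person is being imagined.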
Via Dr. Stefan Gruenwald
Data released early this year from the European Southern Observatory's (ESO) HARPS planet finder shows that rocky planets not much bigger than Earth are very common in the habitable zones around faint red stars. The international team estimates that there are tens of billions of such planets in the Milky Way galaxy alone, and probably about one hundred in the Sun’s immediate neighbourhood. This was the first direct measurement of the frequency of super-Earths around red dwarfs, which account for 80% of the stars in the Milky Way.
This first direct estimate of the number of light planets around red dwarf stars was announced early this year by an international team using observations with the HARPS spectrograph on the 3.6-metre telescope at ESO's La Silla Observatory in Chile. A prior announcement, showing that planets are ubiquitous in our galaxy, used a different method that was not sensitive to this important class of exoplanets.
The HARPS team has been searching for exoplanets orbiting the most common kind of star in the Milky Way — red dwarf stars (also known as M dwarfs). These stars are faint and cool compared to the Sun, but so numerous and long-lived that they account for 80% of all the stars in the Milky Way.
"Our new observations with HARPS mean that about 40% of all red dwarf stars have a super-Earth orbiting in the habitable zone where liquid water can exist on the surface of the planet," says Xavier Bonfils (IPAG, Observatoire des Sciences de l'Univers de Grenoble, France), the leader of the team."Because red dwarfs are so common — there are about 160 billion of them in the Milky Way — this leads us to the astonishing result that there are tens of billions of these planets in our galaxy alone."
The HARPS team surveyed a carefully chosen sample of 102 red dwarf stars in the southern skies over a six-year period. A total of nine super-Earths (planets with masses between one and ten times that of Earth) were found, including two orbiting inside the habitable zones of Gliese 581 and Gliese 667 C. The astronomers could estimate how heavy the planets were and how far from their stars they orbited.
By combining all the data, including observations of stars that did not have planets, and looking at the fraction of existing planets that could be discovered, the team has been able to work out how common different sorts of planets are around red dwarfs. They find that the frequency of occurrence of super-Earths in the habitable zone is 41% with a range from 28% to 95%.
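The completeness correction described here can be sketched in a few lines (an illustrative toy, not the HARPS team’s actual statistical machinery, and all numbers are invented): each star contributes the probability that a habitable-zone super-Earth would have been detected around it had one existed, and the occurrence rate is the number of detections divided by that summed sensitivity.

```python
# Toy completeness-corrected occurrence rate. Illustrative only:
# the survey's real analysis is more involved, and these numbers
# are invented, not the actual HARPS sample.

# Per-star probability that the survey would have detected a
# habitable-zone super-Earth around that star, had one existed.
completeness = [0.3, 0.5, 0.2, 0.8, 0.4, 0.6, 0.1, 0.5, 0.3, 0.5]
n_detected = 2  # planets actually found in this toy sample

# Summing the sensitivities gives the effective number of stars
# fully searched; dividing detections by it corrects for the
# planets the survey would have missed.
effective_stars = sum(completeness)           # 4.2
occurrence_rate = n_detected / effective_stars
print(round(occurrence_rate, 2))  # 0.48
```

Note that the corrected rate (here 2/4.2 ≈ 48%) is well above the raw fraction of stars with detections (2/10), which is exactly why surveys must account for stars whose planets they could not have seen.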
On the other hand, more massive planets, similar to Jupiter and Saturn in our Solar System, are found to be rare around red dwarfs. Less than 12% of red dwarfs are expected to have giant planets (with masses between 100 and 1000 times that of the Earth).
As there are many red dwarf stars close to the Sun, the new estimate means that there are probably about one hundred super-Earth planets in the habitable zones around stars in the neighbourhood of the Sun, at distances less than about 30 light-years.
Via Dr. Stefan Gruenwald