NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1450 news sources:
The Advanced Camera for Surveys, one of the Hubble Space Telescope's instruments, has taken a spectacularly detailed image of a galaxy called SBS 1415+437.
Discovered in 1995 by a team of astronomers from the United States and Ukraine, SBS 1415+437 lies in the constellation Boötes at a distance of about 45.3 million light-years. It is a galaxy type known as a cometary blue compact dwarf galaxy. Astronomers initially thought that SBS 1415+437 was a truly young galaxy that did not start to form stars until 100 million years ago, but a recent study has suggested that the galaxy is in fact older, containing stars 1.3 billion years old.
SBS 1415+437, otherwise known as PGC 51017, SBSG 1415+437 or SDSS CGB 12067.1, also belongs to a rare group of starburst galaxies called Wolf–Rayet galaxies. The galaxy has an unusually high number of extremely hot and massive Wolf–Rayet stars. These stars are among the largest and shortest lived stars known, typically over 20 solar masses with surface temperatures well over 25,000 K. Many of the brightest and most massive stars in the Milky Way are Wolf–Rayet stars.
These massive stars are in the stage of their stellar evolution where they undergo heavy mass loss. A typical Wolf–Rayet star can lose a mass equal to that of our Sun in just 100,000 years. Because of this it is unusual to find more than a few of these stars per galaxy – except in Wolf–Rayet galaxies, like the one in this image.
From fun-house mirrors to holograms, we have all experienced incredible optical illusions. Right now, scientists are fascinated by the prospect of finding a way to perform an even more challenging trick: hiding things in plain sight. We've made some metamaterials that have refractive indices that can redirect particular wavelengths of light. But one issue scientists have found particularly difficult to address is how to mask corners. Sharp corners are pretty common, and it's difficult to figure out ways to guide the surface waves of light around corners, as the light experiences scattering loss when encountering sharp corners.
That's because there is a large mismatch in momentum of the light waves at the surface of an object before and after passing around the corner of an extremely compact shape. Though scientists have successfully developed a few materials that can perform scattering-free guidance of surface waves around corners, these methods are limited. They rely on photonic crystals with a large magnetic response, which limits the types of waves they can influence.
When waves encounter a sharp corner, they pass through compact space, which causes the change in momentum (yes, photons have momentum). More advanced cloaking methods have focused on compensating for this change in momentum by curving the electromagnetic space in a way that tricks light waves into behaving as if they're moving in a straight line. Through this method, transformative optics has made strides towards developing a real invisibility cloak.
In the new work, scientists have demonstrated a way of bending surface light waves around sharp corners, one that works across a broad range of wavelengths, exhibiting almost ideal transmission. This method is able to bend the waves in a way that does not disturb other wave properties, such as the amplitude and phase. This could actually allow for the development of an invisibility cloak.
The scientists created bending adaptors that act as "corner cloaks," hiding corners as the waves travel around them. Physically, the corner cloaks are triangular pieces that can be placed over a sharp corner. They are made of layered structures of subwavelength foam and ceramic materials with a refractive index able to redirect light. Experimental results show that the cloaks almost completely conceal their presence from anyone looking at the light that passes through them. It appears that the ultimate Harry Potter fantasy might be right around the corner for some of us.
Researchers are exploring the idea of treating disease by replacing the defective gene causing the trouble
A new technology for “editing” defective genes has raised hopes for a future generation of medicines treating intractable diseases like cancer, cystic fibrosis and sickle-cell anemia. Such drugs could home in on a specific gene causing a disease, then snip it out and, if necessary, replace it with a healthy segment of DNA. Drugs of this type wouldn’t hit the mass market for years, if ever; pharmaceutical firms are only now exploring how to make drugs using the gene-editing technology, called Crispr-Cas9. But the approach offers tremendous potential for developing new treatments for diseases caused by a mutated gene.
“What if you could go right to the root cause of that disease and repair the broken gene? That’s what people are excited about,” says Katrine Bosley, chief executive of privately held Editas Medicine. Its projects include developing a gene-editing drug treating one type of Leber congenital amaurosis, a rare disease that causes blindness in children.
Crispr-Cas9 isn’t the only technology capable of editing genes, but researchers consider it easier to use than other methods, says Dana Carroll, a professor of biochemistry at the University of Utah School of Medicine, who helped pioneer another gene-editing approach called zinc finger nucleases.
Among other efforts under way using Crispr-Cas9 technology, privately held Intellia Therapeutics Inc., in partnership with Novartis AG, is probing how to create a gene-editing drug that could harness the immune system to fight certain blood cancers. The two companies are also exploring the treatment of hereditary blood disorders like sickle-cell anemia and beta thalassemia.
Intellia CEO Nessan Bermingham says drugs based on Crispr-Cas9 promise to complement the pills and biotech drugs currently available, targeting diseases that aren’t well treated by existing therapies. “This is a new tool to target and treat disease,” he says. Industry and academic laboratories are also using the technology for more immediate effect: to genetically engineer mice and other animals so that they have humanlike diseases that researchers can then readily study.
Using Crispr-Cas9 to make the animal models is “much quicker, easier than the other methods that have been available,” says Tim Harris, senior vice president of precision medicine at Biogen Inc. The company is using the technology to study amyotrophic lateral sclerosis, or Lou Gehrig’s disease, which has lacked good animal models.
Crispr-Cas9 gained notoriety in April, when Chinese scientists reported trying to repair the genes that cause beta thalassemia in 86 human embryos obtained from a fertilization clinic. The work raised fears that gene editing could be used to tweak babies in many ways before they are born.
Tofacitinib is a JAK 1/3 inhibitor that was approved by the US Food and Drug Administration in 2012 for the treatment of moderate to severe rheumatoid arthritis. Within dermatology, oral and topical formulations of tofacitinib have been demonstrated to be safe and effective for the treatment of plaque psoriasis, and researchers have recently described the success of oral tofacitinib in treating alopecia universalis. Clinical trials evaluating tofacitinib treatment are presently under way.
Alopecia areata and vitiligo share genetic risk factors and can co-occur within families and individual patients, suggesting a common pathogenesis. As such, it is not surprising that a medication that has been shown to be effective in treating alopecia areata may also be effective in treating vitiligo. Moreover, recent advances in the scientific understanding of vitiligo support the use of JAK inhibitors for this condition. Interferon-gamma-induced expression of C-X-C motif chemokine 10 (CXCL10) in keratinocytes is an important mediator of depigmentation in vitiligo. Antibody neutralization of interferon gamma or CXCL10 reverses depigmentation.
The researchers now propose that because interferon gamma signal transduction occurs through JAK 1/2, the use of the JAK 1/3 inhibitor tofacitinib effectively leads to blockade of interferon gamma signaling and downstream CXCL10 expression, thus giving rise to repigmentation in vitiligo.
This is the first scientific investigation to demonstrate effective pathogenesis-based therapy for a patient with vitiligo. The fairly rapid response and the repigmentation of the hands, which are often resistant to therapy, are noteworthy. Further investigation of the efficacy and safety of tofacitinib in the treatment of patients with vitiligo, including those for whom the condition has been more long-standing, will be important. Although uncommon, serious adverse effects, including malignant disease, have been reported in patients taking tofacitinib; therefore, investigation of the efficacy of a topical formulation for the treatment of localized vitiligo would be useful.
The presented case exemplifies the ways by which advances in basic science can guide treatment decisions and ultimately benefit patients. As scientists better understand the pathomechanisms of different diseases, targeted therapy becomes possible, and existing medications can be repurposed and/or new medications created for diseases with limited, if any, treatment options.
Study suggests that all craters 6 kilometers across or larger have been found. No more impact craters as big as Canada's Clearwater West and Clearwater East, which measure more than 10 kilometers across, remain to be discovered.
Mars is pocked with more than 300,000 craters, created by asteroid impacts. The moon is blanketed with millions more, too many to count. But the surface of Earth, constantly eroded by wind and rain, hides its history. Just 128 confirmed impact craters have been spotted on Earth’s surface. However, a new study suggests that this low number is not the result of lazy searching; all of the big impact craters on the planet's surface have been found, leaving none to be discovered.
“I'm definitely surprised,” says Brandon Johnson, a planetary scientist at the Massachusetts Institute of Technology in Cambridge, who was not involved in the study. “It’s the first time anyone has done this kind of thing—taking into account the effects of erosion.”
In 2014, Johnson led a similar study, which found that for craters 85 kilometers in diameter and larger, the geologic record ought to be complete. Based on the rate of impacts and the age of the crust, his team predicted eight craters this size, and there are six or seven that have been confirmed. These giant craters are deep enough to survive erosion, but they can be destroyed by plate tectonics, which splits apart, subducts, or otherwise jumbles up the crust the craters sit on—a process Johnson’s study examined.
Now, Stefan Hergarten and Thomas Kenkmann, geophysicists at the University of Freiburg in Germany, have taken the analysis further and found that the documented record is complete down to much smaller impact craters. They combined estimates of asteroid impact rates with rates of erosion, and compared the resulting theoretical crater distribution with what geologists actually see. For the 70 craters larger than 6 kilometers across, the record is complete, they say: There are no more to be found, as the researchers report in Earth and Planetary Science Letters.
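The logic of such a completeness argument can be sketched with a toy model (an illustration only, not the paper's actual method, and the rate constants below are made-up assumptions): if craters are produced following a power-law size distribution and each survives erosion for a time proportional to its depth, the expected number of still-visible craters above a size cutoff follows directly.

```python
# Toy completeness estimate for impact craters (illustrative only;
# the production-rate and erosion numbers below are assumptions,
# not values from Hergarten and Kenkmann's study).

def surviving_craters(d_min_km, d_max_km=100.0,
                      rate_per_myr=1.0e-3,        # assumed production rate constant
                      erosion_km_per_myr=2.0e-5,  # assumed average erosion rate
                      depth_ratio=0.1):           # crater depth ~ 10% of diameter
    """Expected number of craters still visible, integrating a D^-2
    production law times an erosion-limited lifetime."""
    total = 0.0
    d = d_min_km
    step = 0.5
    while d < d_max_km:
        production = rate_per_myr * d ** -2               # bigger craters are rarer
        lifetime = depth_ratio * d / erosion_km_per_myr   # Myr until eroded away
        total += production * lifetime * step
        d += step
    return total

# Larger size cutoffs always yield fewer expected survivors:
print(surviving_craters(6.0) > surviving_craters(85.0))  # True
```

The real study replaces these assumed constants with measured impact and erosion rates, then compares the predicted distribution against the 70 known craters larger than 6 kilometers.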
2D materials were considered impossible until the discovery of graphene around ten years ago. However, they have been observed only in the solid phase, because the thermal atomic motion required for molten materials easily breaks the thin and fragile membrane.
The existence of an atomically thin flat liquid was therefore considered impossible.
Now, physicists at the University of Jyväskylä have conducted quantum molecular dynamics simulations that predict a liquid phase in atomically thin gold islands that patch small pores in graphene. According to the simulations, gold atoms flow and change places in the plane, while the surrounding graphene template retains the planarity of the liquid membrane. “Here the role of graphene is similar to circular rings through which children blow soap bubbles,” said Dr Pekka Koskinen, lead author on the paper published in the journal Nanoscale.
“In general, the existence of a 2D liquid phase requires three conditions. First, the pore template itself has to remain stable at high temperatures, a condition easily met by graphene,” the scientists wrote in the paper.
“Second, edge interactions need to favor planar bonding and be robust enough to endure high temperatures. Our supplementary calculations showed that the gold-carbon interface has bending rigidity comparable to that of the 2D gold membrane, which is sufficient to retain the patch steady under gold diffusion.”
“Third, the membrane itself has to display 2D diffusion before out-of-plane fluctuations grow too large and initiate rupturing.”
Currently the flat liquid exists only in computers and is still waiting for experimental confirmation. “Unfortunately, simulations suggest that the flat liquid is volatile,” Dr Koskinen said. “In experiments the liquid membrane might burst too early, like a soap bubble that bursts before one gets a proper look at it.” “But again, even graphene was previously considered too unstable to exist.”
The list of paranoia-inducing threats to your computer’s security grows daily: Keyloggers, trojans, infected USB sticks, ransomware…and now the rogue falafel sandwich.
Researchers at Tel Aviv University and Israel’s Technion research institute have developed a new palm-sized device that can wirelessly steal data from a nearby laptop based on the radio waves leaked by its processor’s power use. Their spy bug, built for less than $300, is designed to allow anyone to “listen” to the accidental radio emanations of a computer’s electronics from 19 inches away and derive the user’s secret decryption keys, enabling the attacker to read their encrypted communications. And that device, described in a paper they’re presenting at the Workshop on Cryptographic Hardware and Embedded Systems in September, is both cheaper and more compact than similar attacks from the past—so small, in fact, that the Israeli researchers demonstrated it can fit inside a piece of pita bread.
“The result is that a computer that holds secrets can be readily tapped with such cheap and compact items without the user even knowing he or she is being monitored,” says Eran Tromer, a senior lecturer in computer science at Tel Aviv University. “We showed it’s not just possible, it’s easy to do with components you can find on eBay or even in your kitchen.”
Their key-stealing device, which they call the Portable Instrument for Trace Acquisition (yes, that spells PITA), consists of a loop of wire to act as an antenna, a Rikomagic controller chip, a Funcube software-defined radio, and batteries. It can be configured to either collect its cache of stolen data on an SD storage card or to transmit it via Wi-Fi to a remote eavesdropper. The idea to actually cloak the device in a pita—and name it as such—was a last-minute addition, Tromer says. The researchers found a piece of the bread in their lab on the night before their deadline and discovered that all their electronics could fit inside it.
A person’s sense of smell may reveal a lot about his or her identity.
A new test can distinguish individuals based upon their perception of odors, possibly reflecting a person’s genetic makeup, scientists report online June 22 in Proceedings of the National Academy of Sciences.
Most humans perceive a given odor similarly. But the genes for the molecular machinery that humans use to detect scents are about 30 percent different in any two people, says neuroscientist Noam Sobel of the Weizmann Institute of Science in Rehovot, Israel. This variation means that nearly every person’s sense of smell is subtly different. Nobody had ever developed a way to test this sensory uniqueness, Sobel says.
Sobel and his colleagues designed a sensitive scent test they call the “olfactory fingerprint.” In an experiment, test subjects rated how strongly 28 odors such as clove or compost matched 54 adjectives such as “nutty” or “pleasant.” An olfactory fingerprint describes individuals’ perceptions of odors’ similarities, not potentially subjective scent descriptions.
All 89 subjects in the study had distinct olfactory fingerprints. The researchers calculated that just seven odors and 11 descriptors could have identified each individual in the group. With 34 odors, 35 descriptors, and around five hours of testing per person, the scientists estimate they could individually identify about 7 billion different people, roughly the entire human population.
People with similar olfactory fingerprints also showed similarity in their genes for immune system proteins linked to body odor and mate choice. This finding means that people with similar olfactory fingerprints probably smell alike to others, says study author Lavi Secundo, also a neuroscientist at the Weizmann Institute.
It has been shown that people can use smell to detect their genetic similarity to others and avoid inbreeding, says neuroscientist Joel Mainland of Monell Chemical Senses Center in Philadelphia.
Sobel says that the olfactory fingerprint could someday be used to construct smell-based social networks. The test could also become a diagnostic tool for diseases that affect the sense of smell, including Parkinson’s disease, he says.
Administering scent tests can be cumbersome, so it will be hard to use such tests in the clinic without scent-generating electronic devices, Mainland says. But using scent perception to identify genetic markers is interesting, and from a security standpoint, he adds, an olfactory fingerprint would be very hard to copy or steal. “There might be applications of this that we haven’t thought of.”
It's a new weapon against malaria: a laser scan that can give an accurate diagnosis in seconds, without breaking the skin – just like the fictional tricorder in Star Trek.
It works by pulsing energy into a vein in a person's wrist or earlobe. The laser's wavelength doesn't harm human tissue, but is absorbed by hemozoin – waste crystals that are produced by the malaria parasite Plasmodium falciparum when it feeds on blood.
When the crystals absorb this energy, they warm the surrounding blood plasma, making it bubble. An oscilloscope placed on the skin alongside the laser senses these nanoscale bubbles when they start popping, detecting malaria infections in only 20 seconds.
"It's the first true non-invasive diagnostic," says Dmitri Lapotko of Rice University in Houston, Texas, whose team used the probe to correctly identify which person had malaria in a test of six individuals. They even managed to use the device to show whether dead mosquitoes were carrying the parasite.
Malaria threatens half the world's population, killing 584,000 people in 2013. Existing tests for malaria are already quick, taking only 15 to 20 minutes to give a diagnosis, but they could be simpler. Blood has to be taken, the test has to be conducted by trained personnel to get reliable results, and extra chemical reagents must be used.
Lapotko says that a single, battery-powered device the size of a shoebox would house everything associated with the small probe, with no other reagents, facilities or specialist personnel required. The team estimates that a single unit would cost around $15,000, but that this could test 200,000 people – potentially bringing the per-person cost of testing down from as much as 50 cents to under 8 cents.
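The per-person economics quoted above reduce to simple arithmetic: amortizing one unit's price over the number of tests it can run.

```python
# Amortized cost per test, using the figures quoted in the article.
unit_cost_usd = 15_000
tests_per_unit = 200_000

cost_per_test_cents = unit_cost_usd / tests_per_unit * 100
print(f"{cost_per_test_cents:.1f} cents per person")  # 7.5 cents per person
```

That 7.5-cent figure is consistent with the article's "under 8 cents," versus up to 50 cents for existing blood-based tests.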
The team is now preparing for trials in Africa. "The possibility of diagnosing a malaria infection with the device, without any blood-taking and with results available in seconds will provide a fantastic new tool for the control and eventual elimination of malaria," says Umberto D'Alessandro of the UK Medical Research Council Unit in Gambia.
However, Perkins says further tweaks are needed before the probe can become a mainstream diagnostic. For example, it gives a more ambiguous result if a patient has dark skin – a potentially huge pitfall given that children living in Africa account for the majority of malaria deaths. But Lapotko's team is confident it can overcome this effect by switching to a different wavelength of laser.
The leap from single-celled life to multicellular creatures is easier than we ever thought. And it seems there's more than one way it can happen. The mutation of a single gene is enough to transform single-celled brewer's yeast into a "snowflake" that evolves as a multicellular organism. Similarly, single-celled algae quickly evolve into spherical multicellular organisms when faced with predators that eat single cells. These findings back the emerging idea that this leap in complexity isn't the giant evolutionary hurdle it was thought to be.
At some point after life first emerged, some cells came together to form the first multicellular organism. This happened perhaps as early as 2.1 billion years ago. Others followed – multicellularity is thought to have evolved independently at least 20 times – eventually giving rise to complex life, such as humans. But no organism is known to have made that transition in the past 200 million years, so how and why it happened is hard to study.
Back in 2011, evolutionary biologists William Ratcliff and Michael Travisano at the University of Minnesota in St Paul coaxed unicellular yeast to take on a multicellular "snowflake" form by taking the fastest-settling yeast out of a culture and using it to found new cultures. And then repeating the process. Because clumps of yeast settle faster than individual cells, this effectively selected yeast that stuck together instead of separating after cell division.
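The selection protocol — settle, keep the fastest settlers, regrow, repeat — can be caricatured in a few lines. This is a toy model under loud assumptions (settling speed simply proportional to clump size, fixed mutation probability); the real growth and settling dynamics are far richer.

```python
import random

# Toy simulation of settling selection (assumption: settling speed grows
# with clump size, so selecting fast settlers indirectly selects clumping).
random.seed(42)
population = [1] * 100          # everyone starts unicellular (clump size 1)

for generation in range(50):
    # occasionally a mutation makes a lineage clump more
    population = [size + (1 if random.random() < 0.05 else 0)
                  for size in population]
    # keep the 50% of lineages that settle fastest (largest clumps)
    population.sort(reverse=True)
    survivors = population[:50]
    # regrow the culture from the survivors
    population = survivors * 2

print(sum(population) / len(population) > 1.5)  # mean clump size has grown
```

Even this crude sketch shows why the protocol works: settling speed is the selected trait, but clumping is what delivers it.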
The team's latest work shows that this transformation from a single to multicellular existence can be driven by a single gene called ACE2 that controls separation of daughter cells after division, Ratcliff told the 15-19 June Astrobiology Science Conference in Chicago. And because the snowflake grows in a branching, tree-like pattern, any later mutations are confined to single branches. When the original snowflake gets too large and breaks up, these mutant branches fend for themselves, allowing the value of their new mutation to be tested in the evolutionary arena.
"A single mutation creates groups that as a side effect are capable of Darwinian evolution at the multicellular level," says Ratcliff, who is now at the Georgia Institute of Technology in Atlanta. Ratcliff's team has previously also evolved multicellularity in single-celled algae called Chlamydomonas
, through similar selection for rapid settling. The algal cells clumped together in amorphous blobs.
Now the feat has been repeated, but with predators thrown into the mix. A team led by Matt Herron of the University of Montana in Missoula exposed Chlamydomonas to a paramecium, a single-celled protozoan that can devour single-celled algae but not multicellular ones. Sure enough, two of Herron's five experimental lines became multicellular within six months, or about 600 generations, he told the conference. This time, instead of daughter cells sticking together in an amorphous blob as they did under selection for settling, the algae formed predation-resistant, spherical units of four, eight or 16 cells that look almost identical to related species of algae that are naturally multicellular.
This month Wisconsin-based company Wicab announced that the US Food and Drug Administration cleared a nonsurgical vision aid for the profoundly blind. The safety and effectiveness of their product, BrainPort V100, were supported by clinical data.
An FDA press announcement on June 18 said the FDA "today allowed marketing of a new device that when used along with other assistive devices, like a cane or guide dog, can help orient people who are blind by helping them process visual images with their tongues." What exactly is the BrainPort V100? It is an oral electronic vision aid, the company said, that uses electro-tactile stimulation to aid in orientation, mobility, and object recognition.
The FDA described the components in the BrainPort V100 as "a battery-powered device that includes a video camera mounted on a pair of glasses and a small, flat intra-oral device containing a series of electrodes that the user holds against their tongue. Software converts the image captured by the video camera into electrical signals that are then sent to the intra-oral device and perceived as vibrations or tingling on the user's tongue." This product does not replace a guide dog and cane; it is an "adjunctive" device to assistive methods such as dog and cane.
How does it work? The BrainPort V100's video camera, mounted on sunglasses, has an adjustable field of view (zoom). It translates digital information from the video camera into electrical stimulation patterns perceived as vibrations or tingling on the surface of the user's tongue. The intra-oral device is connected to the glasses by a flexible cable. A small hand-held unit provides user controls and houses a rechargeable battery. The system runs for approximately three hours on a single charge.
"Users describe the experience as streaming images drawn on their tongue with small bubbles. With training, users are able to interpret the shape, size, location and position of objects in their environment, and to determine if objects are moving or stationary."
New Milestone Will Enable System to Address Larger and More Complex Problems
D-Wave Systems Inc., the world's first quantum computing company, today announced that it has broken the 1000 qubit barrier, developing a processor about double the size of D-Wave’s previous generation and far exceeding the number of qubits ever developed by D-Wave or any other quantum effort.
This is a major technological and scientific achievement that will allow significantly more complex computational problems to be solved than was possible on any previous quantum computer.
D-Wave’s quantum computer runs a quantum annealing algorithm to find the lowest points, corresponding to optimal or near optimal solutions, in a virtual “energy landscape.” Every additional qubit doubles the search space of the processor. At 1000 qubits, the new processor considers 2^1000 possibilities simultaneously, a search space which dwarfs the 2^512 possibilities available to the 512-qubit D-Wave Two. In fact, the new search space contains far more possibilities than there are particles in the observable universe.
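The observable-universe comparison is easy to check with integer arithmetic: 2^1000 vastly exceeds even a generous particle-count estimate of about 10^80.

```python
# Compare annealer search-space sizes with the particle count of the
# observable universe (~10^80, a standard rough estimate).
space_1000q = 2 ** 1000
space_512q = 2 ** 512
particles = 10 ** 80

print(space_1000q // space_512q == 2 ** 488)  # True: each qubit doubles the space
print(space_1000q > particles)                # True
print(len(str(space_1000q)))                  # 302 (a 302-digit number)
```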
“For the high-performance computing industry, the promise of quantum computing is very exciting. It offers the potential to solve important problems that either can’t be solved today or would take an unreasonable amount of time to solve,” said Earl Joseph, IDC program vice president for HPC. “D-Wave is at the forefront of this space today with customers like NASA and Google, and this latest advancement will contribute significantly to the evolution of the Quantum Computing industry.”
As the only manufacturer of scalable quantum processors, D-Wave breaks new ground with every succeeding generation it develops. The new processors, comprising over 128,000 Josephson tunnel junctions, are believed to be the most complex superconductor integrated circuits ever successfully yielded. They are fabricated in part at D-Wave’s facilities in Palo Alto, CA and at Cypress Semiconductor’s wafer foundry located in Bloomington, Minnesota.
“Temperature, noise, and precision all play a profound role in how well quantum processors solve problems. Beyond scaling up the technology by doubling the number of qubits, we also achieved key technology advances prioritized around their impact on performance,” said Jeremy Hilton, D-Wave vice president, processor development. “We expect to release benchmarking data that demonstrate new levels of performance later this year.”
The 1000-qubit milestone is the result of intensive research and development by D-Wave and reflects a triumph over a variety of design challenges aimed at enhancing performance and boosting solution quality. Beyond the much larger number of qubits, other significant innovations include:
Die-offs in such creatures could have ramifications up the food chain in some of the most productive fisheries in the world and provide a preview of what is in store for the rest of the world’s oceans down the road.
“The Arctic can be a great indicator” of future issues, oceanographer Jeremy Mathis, of the Pacific Marine Environmental Laboratory, said.
Ocean acidification is a process happening in tandem with the warming of the planet and is driven by the same human-caused increase of carbon dioxide in the atmosphere that is trapping excess heat. The oceans absorb much of that excess CO2, where it dissolves and reacts with water to form carbonic acid.
As CO2 emissions have continued to grow, so has the amount of carbonic acid in the oceans, decreasing their pH. The ocean generally has a pH of 8.2, making it slightly basic (a neutral pH is 7, while anything above is basic and anything below is acidic). An ocean that is becoming less basic is a problem for the creatures like shellfish and coral that depend on specific ocean chemistry to have enough of the mineral calcium carbonate to make their hard shells and skeletons.
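Because pH is logarithmic, even a small drop means a large relative increase in hydrogen-ion concentration. A drop of 0.1 pH units — roughly the change surface oceans have undergone since preindustrial times — works out to about a 26% increase:

```python
# pH is -log10 of hydrogen-ion concentration, so concentration = 10**(-pH).
ph_before = 8.2
ph_after = 8.1   # a 0.1-unit drop, roughly the change since preindustrial times

h_before = 10 ** -ph_before
h_after = 10 ** -ph_after

increase = (h_after - h_before) / h_before
print(f"{increase:.0%} more H+ ions")  # 26% more H+ ions
```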
Small snails the size of a human fingernail in polar coastal waters can react very quickly to increased acidity, with their shells dissolving. Such tiny creatures are often the linchpins of marine ecosystems, causing a domino effect up the food chain when they collapse. That’s a major concern in an area that has some of the globe’s most productive fisheries, especially the Bering Sea.
The polar oceans are particularly threatened by ocean acidification, as cold water is better at absorbing CO2 than warm water is. And in regions near the coast, this process is helped along by glacier melt and river runoff that also shift the water’s chemistry toward increased CO2 absorption.
In the movie Interstellar, the main character Cooper escapes from a black hole in time to see his daughter Murph in her final days. Some have argued that the movie is so scientific that it should be taught in schools. In reality, many scientists believe that anything sent into a black hole would probably be destroyed. But a new study suggests that this might not be the case after all. The research says that, rather than being devoured, a person falling into a black hole would actually be absorbed into a hologram — without even noticing. The paper challenges a rival theory stating that anybody falling into a black hole hits a “firewall” and is immediately destroyed.
Forty years ago, Stephen Hawking shocked the scientific establishment with his discovery that black holes aren’t really black. Classical physics implies that anything falling through the horizon of a black hole can never escape. But Hawking showed that black holes continually emit radiation once quantum effects are taken into account. Unfortunately, for typical astrophysical black holes, the temperature of this radiation is far lower than that of the cosmic microwave background, meaning detecting them is beyond current technology.
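The claim that Hawking radiation is colder than the cosmic microwave background can be checked with the standard Hawking temperature formula, T = ħc³ / (8πGMk_B):

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23       # Boltzmann constant, J/K
M_sun = 1.989e30         # solar mass, kg

def hawking_temperature(mass_kg):
    """Hawking temperature of a black hole: T = hbar c^3 / (8 pi G M k_B)."""
    return hbar * c ** 3 / (8 * math.pi * G * mass_kg * k_B)

T = hawking_temperature(M_sun)
print(f"{T:.2e} K")   # ~6e-8 K
print(T < 2.725)      # True: far colder than the 2.725 K CMB
```

A solar-mass black hole glows at tens of nanokelvin, which is why its radiation is swamped by the microwave background — and note that temperature falls as mass rises, so larger astrophysical black holes are colder still.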
Hawking’s calculations are perplexing. If a black hole continually emits radiation, it will continually lose mass, eventually evaporating. Hawking realised that this implied a paradox: if a black hole can evaporate, the information about it will be lost forever. This means that even if we could measure the radiation from a black hole, we could never figure out how it was originally formed. This violates an important rule of quantum mechanics, which states that information cannot be lost or created.
Another way to look at this is that Hawking radiation poses a problem with determinism for black holes. Determinism implies that the state of the universe at any given time is uniquely determined from its state at any other time. This is how we can trace its evolution both astronomically and mathematically through quantum mechanics.
This means that the loss of determinism would have to arise from reconciling quantum mechanics with Einstein’s theory of gravity – a notoriously hard problem and ultimate goal for many physicists. Black hole physics provides a test for any potential quantum gravity theory. Whatever your theory is, it must explain what happens to the information recording a black hole’s history.
It took two decades for scientists to come up with a solution. They suggested that the information stored in a black hole is proportional to its surface area (in two dimensions) rather than its volume (in three dimensions). This could be explained by quantum gravity, where the three dimensions of space could be reconstructed from a two-dimensional world without gravity – much like a hologram. Shortly afterwards, string theory, the most studied theory of quantum gravity, was shown to be holographic in this way.
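The area scaling described above is captured by the Bekenstein–Hawking entropy formula, which measures a black hole's information content in terms of its horizon area A rather than its volume:

```latex
S_{BH} = \frac{k_B c^3 A}{4 G \hbar} = k_B \, \frac{A}{4 \ell_P^2},
\qquad \ell_P = \sqrt{\frac{G\hbar}{c^3}}
```

Loosely speaking, each Planck-area patch of the horizon carries about one unit of entropy, which is what makes a two-dimensional, holographic accounting of the information plausible.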
Using holography we can describe the evaporation of the black hole in the two-dimensional world without gravity, for which the usual rules of quantum mechanics apply. This process is deterministic, with small imperfections in the radiation encoding the history of the black hole. So holography tells us that information is not lost in black holes, but tracking down the flaw in Hawking’s original arguments has been surprisingly hard.
Evolutionary biologists have long wondered why the eardrum—the membrane that relays sound waves to the inner ear—looks in humans and other mammals remarkably like the one in reptiles and birds. Did the membrane and therefore the ability to hear in these groups evolve from a common ancestor? Or did the auditory systems evolve independently to perform the same function, a phenomenon called convergent evolution? A recent set of experiments performed at the University of Tokyo and the RIKEN Evolutionary Morphology Laboratory in Japan resolves the issue.
When the scientists genetically inhibited lower jaw development in both fetal mice and chickens, the mice formed neither eardrums nor ear canals. In contrast, the birds grew two upper jaws, from which two sets of eardrums and ear canals sprouted. The results, published in Nature Communications, confirm that the middle ear grows out of the lower jaw in mammals but emerges from the upper jaw in birds—all supporting the hypothesis that the similar anatomy evolved independently in mammals and in reptiles and birds. (Scientific American is part of Springer Nature.) Fossils of auditory bones had supported this conclusion as well, but eardrums do not fossilize and so could not be examined directly.
Scientists at Karolinska Institutet have managed to build a fully functional neuron by using organic bioelectronics. This artificial neuron contains no ‘living’ parts, but it is capable of mimicking the function of a human nerve cell and communicating in the same way as our own neurons do.
Neurons are isolated from each other and communicate with the help of chemical signals, commonly called neurotransmitters or signal substances. Inside a neuron, these chemical signals are converted to an electrical action potential, which travels along the axon of the neuron until it reaches the end. Here at the synapse, the electrical signal is converted to the release of chemical signals, which via diffusion can relay the signal to the next nerve cell.
To date, the primary technique for neuronal stimulation in human cells has been electrical stimulation. However, scientists at the Swedish Medical Nanoscience Centre (SMNC) at Karolinska Institutet's Department of Neuroscience, in collaboration with colleagues at Linköping University, have now created an organic bioelectronic device that is capable of receiving chemical signals, which it can then relay to human cells.
“Our artificial neuron is made of conductive polymers and it functions like a human neuron”, says lead investigator Agneta Richter-Dahlfors, professor of cellular microbiology. “The sensing component of the artificial neuron senses a change in chemical signals in one dish, and translates this into an electrical signal. This electrical signal is next translated into the release of the neurotransmitter acetylcholine in a second dish, whose effect on living human cells can be monitored.”
For the past twenty years, physicists have studied ultracold atomic gases of the two classes of particles: fermions (electrons, protons, neutrons, quarks, atoms) and bosons.
In 2009, physicists at Harvard University devised a microscope that successfully imaged individual bosons in a tightly spaced optical lattice.
The second boson microscope was created by scientists at the Max Planck Institute of Quantum Optics in Germany in 2010. These microscopes revealed, in unprecedented detail, the behavior of bosons under strong interactions. However, no one had yet developed a comparable microscope for fermions.
The new technique developed by Prof Martin Zwierlein and his colleagues at MIT uses two laser beams trained on a cloud of fermionic atoms in an optical lattice. The two beams, each of a different wavelength, cool the cloud, causing individual fermions to drop down an energy level, eventually bringing them to their lowest energy states – cool and stable enough to stay in place. At the same time, each fermion releases light, which is captured by the microscope and used to image the fermion’s exact position in the lattice – to an accuracy better than the wavelength of light.
With the new technique, Prof Zwierlein’s team was able to cool and image over 95% of the fermionic atoms making up a cloud of potassium gas. “An intriguing result from the technique appears to be that it can keep fermions cold even after imaging. That means I know where they are, and I can maybe move them around with a little tweezer to any location, and arrange them in any pattern I’d like,” said Prof Zwierlein, who is the senior author on the study published in the journal Physical Review Letters.
In an engineering first, Stanford University scientists have invented a low-cost water splitter that uses a single catalyst to produce both hydrogen and oxygen gas 24 hours a day, seven days a week. The researchers believe that the device, described in an open-access study published today (June 23) in Nature Communications, could provide a renewable source of clean-burning hydrogen fuel for transportation and industry.
“We have developed a low-voltage, single-catalyst water splitter that continuously generates hydrogen and oxygen for more than 200 hours, an exciting world-record performance,” said study co-author Yi Cui, an associate professor of materials science and engineering at Stanford and of photon science at the SLAC National Accelerator Laboratory.
Hydrogen has long been promoted as an emissions-free alternative to gasoline. But most commercial-grade hydrogen is made from natural gas — a fossil fuel that contributes to global warming. So scientists have been trying to develop a cheap and efficient way to extract pure hydrogen from water.
A conventional water-splitting device consists of two electrodes submerged in a water-based electrolyte. A low-voltage current applied to the electrodes drives a catalytic reaction that separates molecules of H2O, releasing bubbles of hydrogen on one electrode and oxygen on the other.
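The chemistry at the two electrodes can be written as the standard half-reactions for alkaline water electrolysis (the alkaline form is our assumption, chosen because nickel–iron catalysts typically operate in alkaline electrolytes):

```latex
\text{Cathode (hydrogen):}\quad 4\,\mathrm{H_2O} + 4\,e^- \;\rightarrow\; 2\,\mathrm{H_2} + 4\,\mathrm{OH^-} \\
\text{Anode (oxygen):}\quad 4\,\mathrm{OH^-} \;\rightarrow\; \mathrm{O_2} + 2\,\mathrm{H_2O} + 4\,e^- \\
\text{Net:}\quad 2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2}
```

The net reaction shows why hydrogen bubbles appear at one electrode and oxygen at the other, in a 2:1 ratio.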
In these devices, each electrode is embedded with a different catalyst, typically platinum and iridium, two rare and costly metals. But in 2014, Stanford chemist Hongjie Dai developed a water splitter made of inexpensive nickel and iron that runs on an ordinary 1.5-volt battery.
In conventional water splitters, the hydrogen and oxygen catalysts often require different electrolytes with different pH — one acidic, one alkaline — to remain stable and active. “For practical water splitting, an expensive barrier is needed to separate the two electrolytes, adding to the cost of the device,” Wang explained.
“Our water splitter is unique because we only use one catalyst, nickel-iron oxide, for both electrodes,” said graduate student Haotian Wang, lead author of the study. “This bi-functional catalyst can split water continuously for more than a week with a steady input of just 1.5 volts of electricity. That’s an unprecedented water-splitting efficiency of 82 percent at room temperature.”
Wang and his colleagues discovered that nickel-iron oxide, which is cheap and easy to produce, is actually more stable than some commercial catalysts made of expensive precious metals. The key to making a single catalyst possible was to use lithium ions to chemically break the metal oxide catalyst into smaller and smaller pieces. That “increases its surface area and exposes lots of ultra-small, interconnected grain boundaries that become active sites for the water-splitting catalytic reaction,” Cui said. “This process creates tiny particles that are strongly connected, so the catalyst has very good electrical conductivity and stability.”
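The quoted 82 percent is consistent with one common definition of voltage efficiency: the thermodynamic minimum voltage for splitting water (1.23 V at room temperature) divided by the applied voltage. A sketch of that reading (our interpretation; the paper may define efficiency differently):

```python
# Voltage efficiency of a water splitter: the thermodynamic minimum
# potential for water electrolysis divided by the applied voltage.
E_THERMODYNAMIC = 1.23  # V, standard potential for 2 H2O -> 2 H2 + O2 at 25 C
applied_voltage = 1.5   # V, as reported for the nickel-iron oxide device

efficiency = E_THERMODYNAMIC / applied_voltage
print(f"{efficiency:.0%}")  # 82%
```

Any voltage above 1.23 V is dissipated as heat, so running closer to the thermodynamic limit is what makes the device unusually efficient.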
A rare DNA base, previously thought to be a temporary modification, has been shown to be stable in mammalian DNA, suggesting that it plays a key role in cellular function.
Researchers from the University of Cambridge and the Babraham Institute have found that a naturally occurring modified DNA base appears to be stably incorporated in the DNA of many mammalian tissues, possibly representing an expansion of the functional DNA alphabet.
The new study, published in the journal Nature Chemical Biology, has found that this rare ‘extra’ base, known as 5-formylcytosine (5fC) is stable in living mouse tissues. While its exact function is yet to be determined, 5fC’s physical position in the genome makes it likely that it plays a key role in gene activity.
“This modification to DNA is found in very specific positions in the genome – the places which regulate genes,” said the paper’s lead author Dr Martin Bachman, who conducted the research while at Cambridge’s Department of Chemistry. “In addition, it’s been found in every tissue in the body – albeit in very low levels.”
“If 5fC is present in the DNA of all tissues, it is probably there for a reason,” said Professor Shankar Balasubramanian of the Department of Chemistry and the Cancer Research UK Cambridge Institute, who led the research. “It had been thought this modification was solely a short-lived intermediate, but the fact that we’ve demonstrated it can be stable in living tissue shows that it could regulate gene expression and potentially signal other events in cells.”
Since the structure of DNA was discovered more than 60 years ago, it’s been known that there are four DNA bases: G, C, A and T (Guanine, Cytosine, Adenine and Thymine). The way these bases are ordered determines the makeup of the genome. In addition to G, C, A and T, there are also small chemical modifications, or epigenetic marks, which affect how the DNA sequence is interpreted and control how certain genes are switched on or off. The study of these marks and how they affect gene activity is known as epigenetics.
5fC is one of these marks, and is formed when TET enzymes add oxygen to methylated DNA – a DNA molecule with methyl groups attached to the cytosine base. First discovered in 2011, 5fC had been thought to be a ‘transitional’ state of the cytosine base that was then removed from DNA by dedicated repair enzymes. However, this new research has found that 5fC can actually be stable in living tissue, making it likely that it plays a key role in the genome.
Using high-resolution mass spectrometry, the researchers examined levels of 5fC in living adult and embryonic mouse tissues, as well as in mouse embryonic stem cells – the body’s master cells which can become almost any cell type in the body.
They found that 5fC is present in all tissues, but is very rare, making it difficult to detect. Even in the brain, where it is most common, 5fC is only present at around 10 parts per million or less. In other tissues throughout the body, it is present at between one and five parts per million.
The researchers fed cells and living mice an amino acid called L-methionine, enriched for naturally occurring stable isotopes of carbon and hydrogen, and measured the uptake of these isotopes into 5fC in DNA. The lack of uptake in non-dividing adult brain tissue indicated that 5fC can be a stable modification: if it were a transient molecule, the uptake of isotopes would be high.
The researchers believe that 5fC might alter the way DNA is recognised by proteins. “Unmodified DNA interacts with a specific set of proteins, and the presence of 5fC could change these interactions either directly or indirectly by changing the shape of the DNA duplex,” said Bachman. “A different shape means that a DNA molecule could then attract different proteins and transcription factors, which could in turn change the way that genes are expressed.”
Pairing a donated organ with a potential recipient is a critical task requiring a near-perfect match. The necessity for such perfection is highly limiting, and for the thousands of Canadians waiting on transplant lists it can understandably diminish any hope of getting better or even surviving. However, a study co-authored by Vancouver Coastal Health Research Institute scientist Dr. Caigan Du holds promise that in the future, the use of stem cells during the transplant process may eliminate the need for a perfect match.
Published in the October 2014 issue of Lung, Dr. Du and his colleagues’ study details the use of a novel murine model and the intravenous delivery of mesenchymal stem cells (MSCs) to lung transplant recipients before transplantation, to determine whether MSCs could protect lungs from cold ischemia reperfusion injury (IRI). Cold IRI is unavoidable damage that happens to transplanted organs during two processes: first, when the donor organ is cooled and, second, when blood flow is returned once the organ is transplanted into the host’s body (i.e. reperfusion). IRI is a major cause of failure for newly transplanted lungs, and immune cells are thought to mediate the injury that happens during reperfusion.
Understanding that stem cells can run down, or suppress, the immune response, the researchers sought to determine whether MSCs could stop the host’s immune cells from damaging the new lung when blood flow returned. “When a host receives a new lung that has been kept cold, reperfusion brings with it oxygen but also immune cells that can activate the host’s immune system,” he explains. “This can cascade into a process that damages the organ because the immune response starts fighting it as though it’s a foreign entity.”
The researchers found that by administering MSCs to the lung recipient prior to transplantation, reperfusion injury to the transplanted lung was reduced. Furthermore, survival rates and lung blood oxygenation levels among MSC-recipients fared better than for those that did not receive stem cells. Intravenously delivered MSCs effectively improved function of the transplanted lungs. “We essentially prepped the body so it was better able to accept the lung,” Dr. Du says.
Microneedle-array patches loaded with hypoxia-sensitive vesicles provide fast glucose-responsive insulin delivery
Painful insulin injections could become a thing of the past for the millions of Americans who suffer from diabetes, thanks to a new invention from researchers at the University of North Carolina and NC State, who have created the first "smart insulin patch" that can detect increases in blood sugar levels and secrete doses of insulin into the bloodstream whenever needed.
The patch - a thin square no bigger than a penny - is covered with more than one hundred tiny needles, each about the size of an eyelash. These "microneedles" are packed with microscopic storage units for insulin and glucose-sensing enzymes that rapidly release their cargo when blood sugar levels get too high.
The study, which is published in the Proceedings of the National Academy of Sciences, found that the new, painless patch could lower blood glucose in a mouse model of type 1 diabetes for up to nine hours. More pre-clinical tests and subsequent clinical trials in humans will be required before the patch can be administered to patients, but the approach shows great promise.
"We have designed a patch for diabetes that works fast, is easy to use, and is made from nontoxic, biocompatible materials," said co-senior author Zhen Gu, PhD, a professor in the Joint UNC/NC State Department of Biomedical Engineering. Gu also holds appointments in the UNC School of Medicine, the UNC Eshelman School of Pharmacy, and the UNC Diabetes Care Center. "The whole system can be personalized to account for a diabetic's weight and sensitivity to insulin," he added, "so we could make the smart patch even smarter."
Diabetes affects more than 387 million people worldwide, and that number is expected to grow to 592 million by the year 2035. Patients with type 1 and advanced type 2 diabetes try to keep their blood sugar levels under control with regular finger pricks and repeated insulin shots, a process that is painful and imprecise. John Buse, MD, PhD, co-senior author of the PNAS paper and the director of the UNC Diabetes Care Center, said, "Injecting the wrong amount of medication can lead to significant complications like blindness and limb amputations, or even more disastrous consequences such as diabetic comas and death."
Researchers have tried to remove the potential for human error by creating "closed-loop systems" that directly connect the devices that track blood sugar and administer insulin. However, these approaches involve mechanical sensors and pumps, with needle-tipped catheters that have to be stuck under the skin and replaced every few days.
Thanks to the latest advances in computer vision, we now have machines that can pick you out of a line-up. But what if your face is hidden from view? An experimental algorithm out of Facebook's artificial intelligence lab can recognise people in photographs even when it can't see their faces. Instead it looks for other unique characteristics like your hairdo, clothing, body shape and pose.
Modern face-recognition algorithms are so good they've already found their way into social networks, shops and even churches. Yann LeCun, head of artificial intelligence at Facebook, wanted to see whether they could be adapted to recognise people in situations where someone's face isn't clear, something humans can already do quite well.
"There are a lot of cues we use. People have characteristic aspects, even if you look at them from the back," LeCun says. "For example, you can recognize Mark Zuckerberg very easily, because he always wears a gray T-shirt."
The research team pulled almost 40,000 public photos from Flickr - some of people with their full face clearly visible, and others where they were turned away - and ran them through a sophisticated neural network.
The final algorithm was able to recognise individual people's identities with 83 per cent accuracy. It was presented earlier this month at the Computer Vision and Pattern Recognition conference in Boston, Massachusetts. An algorithm like this could one day help power photo apps like Facebook's Moments, released last week.
Moments scours through a phone's photos, sorting them into separate events like a friend's wedding or a trip to the beach and tagging whoever it recognises as a Facebook friend. LeCun also imagines such a tool would be useful for the privacy-conscious - alerting someone whenever a photo of themselves, however obscured, pops up on the internet.
The flipside is also true: the ability to identify someone even when they are not looking at the camera raises some serious privacy implications. Last week, talks over rules governing facial recognition collapsed after privacy advocates and industry groups could not agree.
"If, even when you hide your face, you can be successfully linked to your identity, that will certainly concern people," says Ralph Gross at Carnegie Mellon University in Pittsburgh, Pennsylvania, who says the algorithm is impressive. "Now is a time when it's important to discuss these questions."
Four leading botanical gardens from around the world want to make it easier for researchers to identify plants in the field.
When plant biologists and field researchers come across a species they’ve never seen before, they turn to thick encyclopedia-like volumes called monographs, with titles such as Flora Brasiliensis, that characterize each species in a region in great detail. But not every species has been well described in this literature. Thomas estimates that only 10 percent of species in the American tropics have been properly characterized. And the reference materials that do exist sometimes don’t match, or are accessible only through a university.
Four of the world’s leading botanical gardens would like to change that. Since 2012, they have been working toward building a free online database called World Flora Online of the world’s plant species – all 350,000 of them – so that scientists can more easily identify plants and share information about them. Thomas calls it “the WebMD” for plant biology. With a fresh round of funding this spring, including a $1.2 million grant from the Alfred P. Sloan Foundation and a $600,000 commitment from Google accompanied by a pledge to provide cloud storage for the project, the consortium has expanded to include 35 affiliates from around the world.
“Plants are hugely, hugely important for us,” says Doron Weber, vice president at the Sloan Foundation. “Plant research is very promising: it's necessary for food, for medicines, for various materials. It's also the basis of healthy ecosystems and habitats. You can be completely bottom line about this.”
Despite more than 30 years of intense research, a cure or vaccine for HIV continues to elude us. But scientists are not quitting, and slowly but surely they seem to be making promising progress in this field. For example, two new mouse studies have just come out demonstrating that a novel vaccine candidate is able to prompt the beginnings of the immune reaction needed to prevent infection. While the results are not the “breakthrough” everyone is looking for, they are certainly a stride in the right direction.
Vaccines can be made in a variety of different ways, for example by inactivating whole pathogens or isolating particular components of them, both with the ultimate goal of stimulating a defense response from the immune system, readying it for any future assaults. But the problem with pesky HIV is that it mutates remarkably rapidly, changing its components so that they become unrecognizable by the immune system. This means that should a vaccine be successful in inducing the production of protective antibodies, they usually have such a narrow window of activity that they are effectively useless.
But there are some antibodies that are different, called broadly neutralizing antibodies (bNAbs), and scientists have high hopes that these may hold the key to producing a successful HIV vaccine. As the name suggests, rather than being specific to just one target, these antibodies are able to recognize and inhibit a range of HIV variants, or strains, and thus are much more therapeutically useful. Although a subset of HIV-positive individuals produce these antibodies, scientists have so far failed to induce their production via vaccination.
Many researchers believe the key to achieving this is to present the body with multiple targets, or antigens, that differ slightly, training the immune system to recognize and home in on the more conserved elements of HIV that are found in different strains. One particular molecule that scientists are interested in is an antigen called eOD-GT8, which was engineered by researchers headed by William Schief at The Scripps Research Institute.
Rather than attempting to directly elicit bNAbs, this antigen is designed to stimulate the production of precursor antibodies that will eventually mature into bNAbs following prolonged exposure to the virus, The Scientist explains. So by starting off with these immature antibodies, scientists hypothesize it may be possible to encourage them to develop into bNAbs over time by gradually exposing the immune system to slightly different HIV antigens, forcing the antibodies to mutate in order to recognize more conserved regions of the virus.
When testing this molecule out in mice genetically engineered to produce antibodies similar to those found in humans, the researchers found that it was indeed able to elicit these first-line antibodies. Additionally, they found it also created a pool of antibody-producing “memory” B cells that the researchers believe could be boosted through exposure to different antigens, sort of like receiving booster shots. These findings have been reported in Science.
Via Steven Krohn