Andy Rubin, the Google executive who developed Google’s free Android software, has revealed to The New York Times he is working on a secret Google project to create a new generation of robots.
The goal: to improve the efficiency of manufacturing small electronics (now largely manual) and of packing goods in warehouses, and ultimately to make home deliveries — perhaps via Google-designed autonomous vehicles.
“Google has recently started experimenting with package delivery in urban areas with its Google Shopping service, and it could try to automate portions of that system,” the Times suggests.
Google executives describe this robotic vision as a “moonshot.” But it appears to be “more realistic than Amazon’s proposed drone delivery service,” opines the Times.
Stephen Colbert begs to disagree: “These Amazon drones are a great idea, and guaranteed to be safe, thanks to all the drone testing we’ve done overseas. I mean, worst-case scenario, a few homes get carpet-gifted with some collateral generosity.”
Until now, 3D printing has been a polymer affair, with most people in the maker community using the machines to make all manner of plastic consumer goods, from tent stakes to chess sets. A new low-cost 3D printer developed by Michigan Technological University’s Joshua Pearce and his team could add hammers to that list. The detailed plans, software and firmware are all freely available and open-source, meaning anyone can use them to make their own metal 3D printer.
Pearce is the first to admit that his new printer is a work in progress. So far, the products he and his team have produced are no more intricate than a sprocket. But that’s because the technology is so raw. “Similar to the incredible churn in innovation witnessed with open-sourcing of the first RepRap plastic 3D printers, I anticipate rapid progress when the maker community gets their hands on it,” says Pearce, an associate professor of materials science and engineering/electrical and computer engineering. “Within a month, somebody will make one that’s better than ours, I guarantee it.”
Using under $1,500 worth of materials, including a small commercial MIG welder and an open-source microcontroller, Pearce’s team built a 3D metal printer that can lay down thin layers of steel to form complex geometric objects. Commercial metal printers are available, but they cost over half a million dollars.
His make-it-yourself metal printer is less expensive than off-the-shelf commercial plastic 3D printers and is affordable enough for home use, he said. However, because of safety concerns, Pearce suggests that for now it would be better off in the hands of a shop, garage or skilled DIYer, since it requires more safety gear and fire protection equipment than the typical plastic 3D printer.
While metal 3D printing opens new vistas, it also raises anew the specter of homemade firearms. Some people have already made guns with both commercial metal and plastic 3D printers, with mixed results. While Pearce admits to some sleepless nights as they developed the metal printer, he also believes that the good to come from all types of distributed manufacturing with 3D printing will far outweigh the dangers.
Neurons that encode spatial information form “geotags” for specific memories and these geotags are activated immediately before those memories are recalled, a team of neuroscientists from the University of Pennsylvania and Freiburg University has discovered. They used a video game in which people navigate through a virtual town delivering objects to specific locations.
“These findings provide the first direct neural evidence for the idea that the human memory system tags memories with information about where and when they were formed and that the act of recall involves the reinstatement of these tags,” said Michael Kahana, professor of psychology in Penn’s School of Arts and Sciences.
Kahana and his colleagues have long conducted research with epilepsy patients who have electrodes implanted in their brains as part of their treatment. The electrodes directly capture electrical activity from throughout the brain while the patients participate in experiments from their hospital beds.
As with earlier spatial memory experiments conducted by Kahana’s group, this study involved playing a simple video game on a bedside computer. The game in this experiment involved making deliveries to stores in a virtual city. The participants were first given a period where they were allowed to freely explore the city and learn the stores’ locations. When the game began, participants were only instructed where their next stop was, without being told what they were delivering.
After they reached their destination, the game would reveal the item that had been delivered, and then give the participant their next stop.
After 13 deliveries, the screen went blank and participants were asked to recall and name as many of the delivered items as they could, in the order they came to mind.
This allowed the researchers to correlate the neural activation associated with the formation of spatial memories (the locations of the stores) and the recall of episodic memories (the list of items that had been delivered).
“During navigation, neurons in the hippocampus and neighboring regions can often represent the patient’s virtual location within the town, kind of like a brain GPS device,” Kahana said. “These ‘place cells’ are perhaps the most striking example of a neuron that encodes an abstract cognitive representation.”
Proteins accomplish something rather amazing: a single protein can have many functions, with a given function determined by the way the protein folds into a specific three-dimensional geometry, or conformation. Moreover, the structural transitions from one conformation to another are reversible. While these dynamics affect protein conformation and therefore function, and so are critical to a wide range of areas, methods for understanding how proteins behave near surfaces (behavior complicated by protein and surface heterogeneities) have remained elusive. Recently, however, scientists at the University of Colorado used single-molecule Förster resonance energy transfer (SM-FRET) tracking to monitor dynamic changes in protein structure and interfacial behavior on surfaces, allowing them to explicate changes in protein structure at the single-molecule level. (FRET describes energy transfer between two chromophores, the molecular components that determine a molecule's color.) In addition, the researchers state that their approach is suitable for studying virtually any protein, thereby providing a framework for developing surfaces and surface modifications with improved biocompatibility.
Prof. Joel L. Kaar discussed the paper he and his co-authors, Dr. Sean Yu McLoughlin, Prof. Mark Kastantin and Prof. Daniel K. Schwartz, recently published in Proceedings of the National Academy of Sciences. "The primary challenges in devising our approach to characterizing changes in protein structure were implementing a site-specific labeling method, which enabled single-molecule resolution, as well as a method to only image molecules at the solution-surface interface," Kaar tells Phys.org. The scientists overcame the former challenge by incorporating unnatural amino acids – that is, those not among the 20 so-called standard amino acids – with unique functional groups for labeling with fluorophores (chemical compounds that can re-emit light upon light excitation); the latter, by using total internal reflection fluorescence microscopy, which only excites molecules in the near-surface environment, thereby minimizing the background fluorescence of molecules free in solution. "Although site-specific labeling methods have been used to monitor changes in protein conformation mainly in bulk solution, such techniques have not previously been exploited to study freely diffusible protein molecules at interfaces," Kaar adds. As such, the researchers are the first to apply site-specific labeling methods to study protein-surface interactions.
"The major challenge associated with incorporating unnatural amino acids for labeling was related to the optimization of protein expression," Kaar continues. Specifically, he explains, the expression of the enzyme organophosphorus hydrolase (OPH) – which is notoriously difficult to make in large quantities due to inclusion body formation – with the unnatural amino acid p-azido-L-Phe (AzF) had to be optimized to incorporate AzF efficiently. (Inclusion body formation refers to the intracellular aggregation of partially folded expressed proteins.) "This process required modification of expression conditions," he adds, "in which bacteria with modified genetic machinery were grown to enable production of soluble enzyme for single-molecule experiments."
For decades, drug development was mostly a game of trial and error, with brute-force candidate screens throwing up millions more duds than winners. Researchers are now using computers to get a head start. By analysing the chemical structure of a drug, they can see if it is likely to bind to, or ‘dock’ with, a biological target such as a protein. Such algorithms are particularly useful for finding potentially toxic side effects that may come from unintended dockings to structurally similar, but untargeted, proteins.
Last week, researchers presented a computational effort that assesses billions of potential dockings on the basis of drug and protein information held in public databases. “It’s the largest computational docking ever done by mankind,” says Timothy Cardozo, a pharmacologist at New York University’s Langone Medical Center, who presented the project on 19 November at the US National Institutes of Health’s High Risk–High Reward Symposium in Bethesda, Maryland. The result, a website called Drugable (drugable.com) that is backed by the US National Library of Medicine (NLM), is still in testing, but it will eventually be available for free, allowing researchers to predict how and where a compound might work in the body, purely on the basis of chemical structure.
Predicting how untested compounds will interact with proteins in the body, as Drugable attempts to do, is more challenging. In setting up the website, Cardozo’s group selected about 600,000 molecules from PubChem and the European Bioinformatics Institute’s ChEMBL, which together catalogue millions of publicly available compounds. The group evaluated how strongly these molecules would bind to 7,000 structural ‘pockets’ on human proteins also described in the databases. Computing giant Google awarded the researchers the equivalent of more than 100 million hours of processor time on its supercomputers for the mammoth effort.
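Drugable's actual pipeline is not public, but the brute-force pairing described above (every candidate molecule scored against every protein pocket) can be sketched in a few lines. The scorer here is a toy stand-in, not a real docking function, and all names and data are hypothetical:

```python
from itertools import product

def screen(compounds, pockets, score, threshold):
    """Brute-force virtual screen: score every compound-pocket pair and
    keep the pairs predicted to bind at or above a threshold."""
    hits = [(c, p, score(c, p)) for c, p in product(compounds, pockets)]
    hits = [h for h in hits if h[2] >= threshold]
    return sorted(hits, key=lambda h: -h[2])  # strongest predicted binders first

# Toy data: pretend binding affinity is the overlap of abstract "features".
compounds = {"cpdA": {1, 2, 3}, "cpdB": {4}}
pockets = {"pktX": {2, 3}, "pktY": {5}}
hits = screen(compounds, pockets,
              score=lambda c, p: len(compounds[c] & pockets[p]),
              threshold=1)
# hits -> [("cpdA", "pktX", 2)]
```

At Drugable's scale (roughly 600,000 compounds times 7,000 pockets, over four billion pairs), the same loop is simply distributed across many machines, which is where the donated processor time comes in.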
Cardozo acknowledges that the computations are just an initial step in drug discovery. After predicting whether a protein can bind to a compound, drug developers must test the drug’s action on the same protein in a cell to see what actually happens to the protein’s function, as well as how much of the drug is needed and under what conditions. Then come animal trials and, if researchers are lucky, human trials. But these extra data are often proprietary and held by pharmaceutical companies, says Brian Shoichet, a computational biologist at the University of California, San Francisco. Some public databases such as PubChem, maintained by the NLM, hold the results of automated tests of drugs on proteins in yeast cells, but they contain inaccuracies and false positives, he says.
Pharmaceutical companies have been doing similar computational predictions for years, says Jeremy Jenkins, a researcher at the Novartis Institutes. But he says that Novartis, which has a library of 1.5 million public and proprietary compounds, has never attempted to analyse as many proteins and drugs at once as Drugable has done.
Cardozo hopes that Drugable will be particularly helpful in evaluating psychiatric drugs, which often act in ways that are difficult to measure. As a demonstration, Cardozo’s group applied Drugable’s algorithm to clozapine and chlorpromazine, two drugs often prescribed to treat schizophrenia.
As expected, Drugable showed that the two drugs bind most strongly to receptors for the neurotransmitters serotonin and dopamine, which are expressed in the parts of the brain involved in higher information processing. But it found that clozapine, which also stabilizes mood disorders such as depression, binds strongly to a particular dopamine receptor called DRD4, which is expressed in the brain’s pineal gland — a known mood regulator.
The group also found that clozapine binds to a receptor in the part of the brain that regulates saliva production; excessive salivation is a known side effect of clozapine. Although the biochemical explanations for mood regulation and salivation have been proposed before, Cardozo says that Drugable can be used to reveal the most plausible mechanisms.
Articles in the Temperature Rising series from The New York Times.
Continued global warming poses a risk of rapid, drastic changes in some human and natural systems, a scientific panel warned Tuesday, citing the possible collapse of polar sea ice, the potential for a mass extinction of plant and animal life and the threat of immense dead zones in the ocean. Articles in this series focus on the central arguments in the climate debate and examine the evidence for global warming and its consequences.
At the same time, some worst-case fears about climate change that have entered the popular imagination can be ruled out as unlikely, at least over the next century, the panel found. These include a sudden belch of methane from the ocean or the Arctic that would fry the planet, as well as a shutdown of the heat circulation in the Atlantic Ocean that would chill nearby land areas — the fear on which the 2004 movie “The Day After Tomorrow” was loosely based.
In a recent report, the panel appointed by the National Research Council called for the creation of an early warning system to alert society well in advance to changes capable of producing chaos. Nasty climate surprises have occurred already, and more seem inevitable, perhaps within decades, panel members warned. But, they said, little has been done to prepare.
“The reality is that the climate is changing,” said James W. C. White, a paleoclimatologist at the University of Colorado Boulder who headed the committee on abrupt impacts of climate change. “It’s going to continue to happen, and it’s going to be part of everyday life for centuries to come — perhaps longer than that.”
Scientists have identified a way to block sperm transport during ejaculation, which could lead to a male contraceptive pill.
In a study published in the journal Proceedings of the National Academy of Sciences, scientists found that complete male infertility could be achieved by blocking two proteins found on the smooth muscle cells that trigger the transport of sperm.
The researchers demonstrated that the absence of two proteins in mouse models, α1A-adrenoceptor and P2X1-purinoceptor, which mediate sperm transport, caused infertility, without effects on long-term sexual behavior or function.
Lead researchers Dr. Sab Ventura and Dr. Carl White of the Monash Institute of Pharmaceutical Sciences believe the knowledge could be applied to the potential development of a contraceptive pill for men.
“Previous strategies have focused on hormonal targets or mechanisms that produce dysfunctional sperm incapable of fertilization, but they often interfere with male sexual activity and cause long-term irreversible effects on fertility,” Dr. Ventura said.
“We’ve shown that simultaneously disrupting the two proteins that control the transport of sperm during ejaculation causes complete male infertility, but without affecting the long-term viability of sperm or the sexual or general health of males. The sperm is effectively there but the muscle is just not receiving the chemical message to move it.”
Dr. Ventura said there was already a drug that targets one of the two proteins, but they would have to find a chemical and develop a drug to block the second one.
“This suggests a therapeutic target for male contraception. The next step is to look at developing an oral male contraceptive drug, which is effective, safe, and readily reversible.” If successful, it is hoped a male contraceptive pill could be available within ten years.
The ability to shrink laboratory-scale processes to automated chip-sized systems would revolutionize biotechnology and medicine. For example, inexpensive and highly portable devices that process blood samples to detect biological agents such as anthrax are needed by the U.S. military and for homeland security efforts. One of the challenges of "lab-on-a-chip" technology is the need for miniaturized pumps to move solutions through micro-channels. Electroosmotic pumps (EOPs), devices in which fluids appear to magically move through porous media in the presence of an electric field, are ideal because they can be readily miniaturized. EOPs, however, require bulky, external power sources, which defeats the concept of portability. But a super-thin silicon membrane developed at the University of Rochester could now make it possible to drastically shrink the power source, paving the way for diagnostic devices the size of a credit card.
"Up until now, electroosmotic pumps have had to operate at a very high voltage—about 10 kilovolts," said James McGrath, associate professor of biomedical engineering. "Our device works in the range of one-quarter of a volt, which means it can be integrated into devices and powered with small batteries."
McGrath and his team use porous nanocrystalline silicon (pnc-Si) membranes that are microscopically thin—it takes more than one thousand stacked on top of each other to equal the width of a human hair. And that's what allows for a low-voltage system.
A porous membrane needs to be placed between two electrodes in order to create what's known as electroosmotic flow, which occurs when an electric field interacts with ions on a charged surface, causing fluids to move through channels. The membranes previously used in EOPs have resulted in a significant voltage drop between the electrodes, forcing engineers to begin with bulky, high-voltage power sources. The thin pnc-Si membranes allow the electrodes to be placed much closer to each other, creating a much stronger electric field with a much smaller drop in voltage. As a result, a smaller power source is needed.
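The voltage argument follows from the uniform-field approximation E = V/d: bringing the electrodes closer together yields the same field, or a stronger one, at a far lower voltage. A toy calculation with illustrative, hypothetical gap distances (the article does not state the actual electrode spacings):

```python
def field_strength(voltage_v, gap_m):
    """Uniform-field approximation between parallel electrodes: E = V / d."""
    return voltage_v / gap_m

# Hypothetical gap distances, for illustration only:
E_old = field_strength(10_000, 1e-2)   # 10 kV across a 1 cm gap
E_new = field_strength(0.25, 25e-9)    # 0.25 V across a 25 nm membrane

# Despite a 40,000-fold smaller voltage, the thin-membrane geometry
# sustains a field ten times stronger (1e7 vs 1e6 V/m).
```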
A microfluidic bioreactor consists of two chambers separated by a nanoporous silicon membrane. It allows for flow-based assays using minimal amounts of reagent. The ultra-thin silicon membrane provides an excellent mimic of biological barrier properties.
Along with medical applications, it's been suggested that EOPs could be used to cool electronic devices. As electronic devices get smaller, components are packed more tightly, making it easier for the devices to overheat. With miniature power supplies, it may be possible to use EOPs to help cool laptops and other portable electronic devices.
McGrath said there's one other benefit to the silicon membranes. "Due to scalable fabrication methods, the nanocrystalline silicon membranes are inexpensive to make and can be easily integrated on silicon or silica-based microfluid chips."
In a recent study, published in the journal Nature Neuroscience, the researchers trained mice to fear the smell of cherry blossom using electric shocks before allowing them to breed. The offspring showed fearful responses to the odor of cherry blossom, compared with a neutral odor, despite never having encountered it before. The following generation also showed the same behavior. The effect persisted even when the mice had been fathered through artificial insemination.
The researchers found the brains of the trained mice and their offspring showed structural changes in areas used to detect the odor. The DNA of the animals also carried chemical changes, known as epigenetic methylation, on the gene responsible for detecting the odor. This suggests that experiences are somehow transferred from the brain into the genome, allowing them to be passed on to later generations.
The researchers now hope to carry out further work to understand how the information comes to be stored on the DNA in the first place.
They also want to explore whether similar effects can be seen in the genes of humans.
Prof. Marcus Pembrey, a paediatric geneticist at University College London, said the work provided "compelling evidence" for the biological transmission of memory. He added: "It addresses constitutional fearfulness that is highly relevant to phobias, anxiety and post-traumatic stress disorders, plus the controversial subject of transmission of the ‘memory’ of ancestral experience down the generations.
“It is high time public health researchers took human transgenerational responses seriously. I suspect we will not understand the rise in neuropsychiatric disorders or obesity, diabetes and metabolic disruptions generally without taking a multigenerational approach.”
Prof. Wolf Reik, head of epigenetics at the Babraham Institute in Cambridge, said, however, further work was needed before such results could be applied to humans. He said: "These types of results are encouraging as they suggest that transgenerational inheritance exists and is mediated by epigenetics, but more careful mechanistic study of animal models is needed before extrapolating such findings to humans.”
Another study in mice has shown that their ability to remember can be affected by the presence of immune system factors in their mother's milk. Dr. Miklos Toth, from Cornell University in New York, found that chemokines carried in a mother's milk caused changes in the brains of their offspring, affecting their memory in later life.
Dark matter makes up about a quarter of the cosmos, but we still don't know what it is. As part of a two-part series called Light & Dark on BBC Four, physicist Jim Al-Khalili pondered how close we are to understanding the mysterious "dark stuff".
Given all the progress we've made in modern physics over the past century, you may be forgiven for thinking that physicists are approaching a complete understanding of what makes up everything in our Universe.
For example, all the publicity surrounding the discovery of the Higgs boson last year seemed to be suggesting that this was one of the final pieces of the jigsaw - that all the fundamental building blocks of reality were now known.
So it might come as something of a shock to many people to hear that we still don't know what 95% of the Universe is made of. The stars in galaxies revolve around the galactic centre like undissolved coffee granules on the surface of your mug of coffee just after you've stopped stirring it.
It's all rather embarrassing. Everything we see: our planet and everything on it, the moon, the other planets and their moons, the Sun, all the stars in the sky that make up our Milky Way galaxy, all the other billions of galaxies beyond with their stars and clouds of interstellar gas, as well as all the dead stars and black holes that we can no longer see; it all amounts to less than 5% of the Universe.
And we don't even know if space goes on for ever, what shape the Universe is, what caused the Big Bang that created it, even whether it is just one of many embedded multiverses.
There’s no shortage of smartphone apps to help people track their health. And in recent months, medical apps have started growing up, moving beyond the novelty of attaching probes to a smartphone to offer what their makers hope are serious clinical tools.
Last month in a TED Talk, Shiv Gaglani showed that a standard physical exam can now be done using only smartphone apps and attachments. From blood pressure cuff to stethoscope and otoscope — the thing the doctor uses to look in your ears — all of the doctor’s basic instruments are now available in “smart” format.
At a CERN seminar November 26th, Aliaksandr (Sasha) Pranko of the Physics Division at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) presented key direct evidence that the “Higgs-like” particle discovered at CERN last year does what a Higgs is supposed to do: it couples not only to other bosons but to fermions as well.
Pranko is a member of Berkeley Lab’s contingent of the ATLAS Collaboration at the Large Hadron Collider. Pranko reported the results of the ATLAS search for pairs of fermions – including quarks, constituents of hadrons such as protons, and leptons, particles in their own right such as electrons and neutrinos. The ATLAS search concentrated on finding pairs of bottom (b) quarks; pairs of muons, which are heavier “cousins” of the electron; and pairs of tau leptons, cousins of the electron that are heavier still.
The b-quark and muon searches yielded no events in excess of the cluttered experimental background, but the search for pairs of tau particles yielded striking results, showing marked evidence at a high level of confidence that a Higgs boson can indeed decay to a pair of taus. This was the first evidence that the Higgs couples with leptons.
“Since Higgs coupling should be dependent on particle mass, tau coupling should be much bigger than muon coupling,” says Ian Hinchliffe, who leads Berkeley Lab’s ATLAS contingent. “The ATLAS experiment has very high resolution in muons, but the expected signal is very small.” And detecting decay to a pair of taus is very complicated, due to the large backgrounds and the missing energy carried off by neutrinos from the tau decays.
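The mass dependence Hinchliffe describes can be made concrete with a back-of-the-envelope sketch: in the Standard Model the Higgs-fermion (Yukawa) coupling is proportional to the fermion's mass, so the decay rate to a fermion pair scales roughly with the mass squared (ignoring small phase-space corrections):

```python
# Approximate charged-lepton masses in MeV (Particle Data Group values).
m_tau = 1776.86
m_mu = 105.66

# Standard Model: Yukawa coupling ~ mass, so the H -> fermion-pair
# decay rate scales roughly as mass squared.
rate_ratio = (m_tau / m_mu) ** 2   # ~283: tau pairs are vastly favored over muon pairs
```

This factor of roughly 283 is why the tau channel, despite its messy neutrino-laden decays, was the place to look first for Higgs-lepton coupling.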
Hinchliffe credits Pranko with co-inventing the “Missing Mass Calculator” method of reconstructing particle masses, in particular those of tau pairs, and serving as co-leader of the group responsible for the ATLAS Collaboration’s analysis of the data that revealed the Higgs’s coupling to the tau lepton.
The ATLAS results were based on the full data set of proton collisions at 8 TeV (eight trillion electron volts) center-of-mass energy, collected during the last year of the LHC's run before it shut down for maintenance. The LHC is now preparing for even higher-energy runs beginning in 2015.
Using DNA self-assembly, the Aarhus University researchers Magnus Stougaard, Oskar Franch and Brian Christensen designed eight unique DNA molecules from the body’s own natural molecules. When these molecules are mixed together, they spontaneously aggregate in a usable form – the nanocage (see figure).
The nanocage has four functional elements that transform themselves in response to changes in the surrounding temperature. These transformations either close (figure 1A) or open (figure 1B) the nanocage. By exploiting the temperature changes in the surroundings, the researchers trapped an active enzyme called horseradish peroxidase (HRP) in the nanocage (figure 1C). They used HRP as a model because its activity is easy to trace.
This is possible because the nanocage’s outer lattice has apertures with a smaller diameter than the central spherical cavity. This structure makes it possible to encapsulate enzymes or other molecules that are larger than the apertures in the lattice, but smaller than the central cavity.
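The size-selection rule above amounts to a simple predicate: a guest is retained only if it is larger than the lattice apertures but smaller than the central cavity. A minimal sketch with made-up dimensions (the paper's actual measurements are not given here):

```python
def stays_encapsulated(guest_nm, aperture_nm, cavity_nm):
    """A guest molecule is trapped only if it cannot slip through the
    lattice apertures yet still fits inside the central cavity."""
    return aperture_nm < guest_nm < cavity_nm

# Purely illustrative, hypothetical dimensions in nanometers:
APERTURE, CAVITY = 3.0, 8.0
assert stays_encapsulated(5.0, APERTURE, CAVITY)        # trapped inside
assert not stays_encapsulated(2.0, APERTURE, CAVITY)    # small enough to leak out
assert not stays_encapsulated(9.0, APERTURE, CAVITY)    # too big to fit at all
```

The open/close transformations driven by temperature effectively widen or narrow the apertures, toggling which guests satisfy this condition.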
The researchers have just published these results in the renowned journal ACS Nano. Here the researchers show how they can utilise temperature changes to open the nanocage and allow HRP to be encapsulated before it closes again.
They also show that HRP retains its enzyme activity inside the nanocage and converts substrate molecules that are small enough to penetrate the nanocage to products inside.
The encapsulation of HRP in the nanocage is reversible, in such a way that the nanocage is capable of releasing the HRP once more in reaction to temperature changes. The researchers also show that the DNA nanocage – with its enzyme load – can be taken up by cells in culture.
Looking towards the future, the concept behind this nanocage is expected to be used for drug delivery, i.e. as a means of transport for medicine that can target diseased cells in the body in order to achieve a more rapid and more beneficial effect.
NASA's Cassini spacecraft has obtained the highest-resolution movie yet of a unique six-sided jet stream, known as the hexagon, around Saturn's north pole.
This is the first hexagon movie of its kind, using color filters, and the first to show a complete view of the top of Saturn down to about 70 degrees latitude. About 20,000 miles (30,000 kilometers) across, the hexagon is a wavy jet stream of 200-mile-per-hour (about 322 kilometers per hour) winds with a massive, rotating storm at the center. No other weather feature quite like it, or as persistent, is known anywhere else in the solar system.
"The hexagon is just a current of air, and weather features out there that share similarities to this are notoriously turbulent and unstable," said Andrew Ingersoll, a Cassini imaging team member at the California Institute of Technology in Pasadena. "A hurricane on Earth typically lasts a week, but this has been here for decades -- and who knows -- maybe centuries."
Weather patterns on Earth are interrupted when they encounter friction from landforms or ice caps. Scientists suspect the stability of the hexagon has something to do with the lack of solid landforms on Saturn, which is essentially a giant ball of gas.
Better views of the hexagon are available now because the sun began to illuminate its interior in late 2012. Cassini captured images of the hexagon over a 10-hour time span with high-resolution cameras, giving scientists a good look at the motion of cloud structures within.
They saw the storm around the pole, as well as small vortices rotating in the opposite direction of the hexagon. Some of the vortices are swept along with the jet stream as if on a racetrack. The largest of these vortices spans about 2,200 miles (3,500 kilometers), or about twice the size of the largest hurricane recorded on Earth.
Scientists analyzed these images in false color, a rendering method that makes it easier to distinguish differences among the types of particles suspended in the atmosphere -- relatively small particles that make up haze -- inside and outside the hexagon.
Quantum entanglement is weird enough, but it might get weirder still through a possible association with hypothetical wormholes. Over the past year, theorists have been hard at work exploring the entanglement of two black holes. A pair of papers in Physical Review Letters advances the story by showing that a string-based representation of two entangled quarks is equivalent to the spacetime contortions of a wormhole.
A common feature of entanglement and wormholes is that they both seemingly imply faster-than-light travel. If one imagines two entangled particles separated by a large distance—a so-called Einstein-Podolsky-Rosen (EPR) pair—then a measurement of one has an immediate effect on the measurement probabilities of the other, as if information travels instantaneously between them. Similarly, a wormhole—or Einstein-Rosen (ER) bridge—is a “shortcut” connecting separate points in space, but no information can actually pass through. Recent work has shown that the spacetime geometry of a wormhole is equivalent to what you’d get if you entangled two black holes and pulled them apart—an equivalence that can be summarized by “ER = EPR.”
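The "as if information travels instantaneously" point is easy to demonstrate numerically: measuring both halves of a maximally entangled pair in the same basis gives outcomes that always agree, yet each party individually sees pure coin flips, so no controllable signal is transmitted. A small simulation sketch:

```python
import random

def measure_bell_pair(rng):
    """Measure both halves of a (|00> + |11>)/sqrt(2) Bell pair in the
    same basis: each outcome is individually a fair coin flip, but the
    two parties always see the same bit."""
    bit = rng.choice([0, 1])
    return bit, bit

rng = random.Random(0)   # fixed seed for reproducibility
pairs = [measure_bell_pair(rng) for _ in range(1000)]
agreements = sum(a == b for a, b in pairs)   # always 1000: perfect correlation
ones = sum(a for a, _ in pairs)              # ~500: neither party controls the bias
```

Because neither party can choose its own outcome, the perfect correlation cannot carry a message, mirroring the wormhole through which "no information can actually pass".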
The latest papers in this development extend the equivalence beyond black holes to quarks. As previous studies have shown, two entangled quarks can be represented as the endpoints of a string in a higher dimensional space, where certain calculations end up being easier.
Kristan Jensen of the University of Victoria, Canada, and Andreas Karch of the University of Washington, Seattle, imagine the entangled quarks are accelerating away from each other, so that they are no longer in causal contact. In this case, the connecting string becomes mathematically equivalent to a wormhole. Using a different approach, Julian Sonner from the Massachusetts Institute of Technology, Cambridge, has derived the same result starting from quark/antiquark creation in a strong electric field (the Schwinger effect).
The wormhole connection may provide new insights into entanglement, as suggested by calculations that equate the entropy of the wormhole to that of the quarks.
Clinical trials — which usually compare the effectiveness of medical treatments to placebos — often get published in peer-reviewed journals only if they gave favourable results. A new study has found that the results of clinical trials go unpublished as much as half the time, and that those that are published omit some key details.
US law requires the results of medical research for drugs approved by the US Food and Drug Administration to be submitted to a database called ClinicalTrials.gov. Results, including adverse effects, have been made public there since 2008. Researchers who do not post results within a year of trial completion risk losing grants and can be fined as much as US$10,000 per day. But the database was never meant to replace journal publications, which often contain longer descriptions of methods and results and are the basis for big reviews of research on a given drug.
In an analysis of 600 trials picked at random from the database, Agnes Dechartres, an epidemiologist at Paris Descartes University, and her colleagues have now found that only 50% had made their way into print. “Non-publication is a crucial problem for all stakeholders, from patients to health policy-makers,” says Dechartres. For one thing, she says, failure to publish results in journals breaches the implied contract with patients who participated in the trials. “If results are not [fully] available, we can consider that research wasted,” she says.
NASA's Hubble Space Telescope has detected water in the atmospheres of five planets beyond our solar system, two recent studies reveal.
The five exoplanets with hints of water are all scorching-hot, Jupiter-size worlds that are unlikely to host life as we know it. But finding water in their atmospheres still marks a step forward in the search for distant planets that may be capable of supporting alien life, researchers said.
"We're very confident that we see a water signature for multiple planets," Avi Mandell, of NASA's Goddard Space Flight Center in Greenbelt, Md., lead author of one of the studies, said in a statement. "This work really opens the door for comparing how much water is present in atmospheres on different kinds of exoplanets — for example, hotter versus cooler ones."
The two research teams used Hubble's Wide Field Camera 3 to analyze starlight passing through the atmospheres of the five "hot Jupiter" planets, which are known as WASP-17b, HD209458b, WASP-12b, WASP-19b and XO-1b.
The atmospheres of all five planets showed signs of water, with the strongest signatures found in the air of WASP-17b and HD209458b. "To actually detect the atmosphere of an exoplanet is extraordinarily difficult. But we were able to pull out a very clear signal, and it is water," Drake Deming of the University of Maryland, lead author of the other recent study, said in a statement.
Water is thought to be a common constituent of exoplanet atmospheres and has been found in the air of several other distant worlds to date. But the new work marks the first time scientists have measured and compared profiles of the substance in detail across multiple alien worlds, researchers said.
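To give a sense of the measurement, both teams relied on transit spectroscopy: the fraction of starlight a transiting planet blocks is (Rp/Rs)², and a water absorption band makes the planet's silhouette look slightly larger at those wavelengths, deepening the transit. A minimal sketch with made-up radii (these are not the measured values for any of the five planets):

```python
def transit_depth(r_planet, r_star):
    """Fraction of starlight blocked during transit: (R_p / R_s)**2."""
    return (r_planet / r_star) ** 2

R_SUN = 6.957e8   # meters
R_JUP = 7.149e7   # meters

# Illustrative "hot Jupiter": continuum radius 1.3 Jupiter radii
continuum = transit_depth(1.30 * R_JUP, 1.0 * R_SUN)

# In the 1.4-micron water band, the absorbing atmosphere makes the planet
# appear marginally larger, so the transit is marginally deeper.
in_band = transit_depth(1.302 * R_JUP, 1.0 * R_SUN)

print(f"water signal ~ {(in_band - continuum) * 1e6:.0f} parts per million")
```

Signals of this size (tens to hundreds of parts per million) are why pulling out the water signature is, as Deming says, "extraordinarily difficult."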
"These studies, combined with other Hubble observations, are showing us that there are a surprisingly large number of systems for which the signal of water is either attenuated or completely absent," Heather Knutson of the California Institute of Technology in Pasadena, a co-author on Deming's paper, said in a statement. "This suggests that cloudy or hazy atmospheres may in fact be rather common for hot Jupiters."
Dr. Theodore Berger's research is currently focused primarily on the hippocampus, a neural system essential for learning and memory functions.
Theodore Berger leads a multi-disciplinary collaboration with Drs. Marmarelis, Song, Granacki, Heck, and Liu at the University of Southern California, Dr. Cheung at City University of Hong Kong, Drs. Hampson and Deadwyler at Wake Forest University, and Dr. Gerhardt at the University of Kentucky, that is developing a microchip-based neural prosthesis for the hippocampus, a region of the brain responsible for long-term memory. Damage to the hippocampus is frequently associated with epilepsy, stroke, and dementia (Alzheimer's disease), and is considered to underlie the memory deficits characteristic of these neurological conditions.
The essential goals of Dr. Berger's multi-laboratory effort include:
1. Experimental study of neuron and neural network function during memory formation -- how does the hippocampus encode information?
2. Formulation of biologically realistic models of neural system dynamics -- can that encoding process be described mathematically to realize a predictive model of how the hippocampus responds to any event?
3. Microchip implementation of neural system models -- can the mathematical model be realized as a set of electronic circuits to achieve parallel processing, rapid computational speed, and miniaturization?
4. Creation of conformal neuron-electrode interfaces -- can cytoarchitectonic-appropriate multi-electrode arrays be created to optimize bi-directional communication with the brain?
By integrating solutions to these component problems, the team is realizing a biomimetic model of hippocampal nonlinear dynamics that can perform the same function as part of the hippocampus.
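The second goal, a predictive input-output model of neural dynamics, can be illustrated with a toy discrete Volterra-series predictor, a classic way to capture nonlinear input-output relationships. The kernel values and sizes below are invented for illustration and are not Berger's actual model:

```python
import numpy as np

def volterra_predict(x, k1, k2):
    """Predict output activity from input train x with a second-order
    Volterra series:
      y[t] = sum_i k1[i]*x[t-i] + sum_{i,j} k2[i,j]*x[t-i]*x[t-j]
    k1 is the linear (first-order) kernel, k2 the nonlinear kernel."""
    M = len(k1)
    y = np.zeros(len(x))
    for t in range(len(x)):
        # window of the M most recent inputs, zero-padded at the start
        w = np.zeros(M)
        n = min(M, t + 1)
        w[:n] = x[t - n + 1:t + 1][::-1]
        y[t] = k1 @ w + w @ k2 @ w
    return y

# Toy demo: sparse input spike train, arbitrary illustrative kernels
rng = np.random.default_rng(0)
x = (rng.random(50) < 0.2).astype(float)
k1 = np.array([0.5, 0.3, 0.1])   # memory of the last 3 time steps
k2 = 0.05 * np.eye(3)            # weak self-interaction term
y = volterra_predict(x, k1, k2)
print(y.shape)  # (50,)
```

In the real prosthesis effort, kernels of this general kind would be estimated from recorded hippocampal activity and then burned into hardware (goal 3); here they are simply made up.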
Scientists have long suspected that corvids – the family of birds including ravens, crows and magpies – are highly intelligent.
The Tübingen researchers are the first to investigate the brain physiology of crows' intelligent behavior. They trained crows to carry out memory tests on a computer. The crows were shown an image and had to remember it. Shortly afterwards, they had to select one of two test images on a touchscreen with their beaks, based on one of two alternating behavioral rules. One of the test images was identical to the first image, the other different. Sometimes the rule of the game was to select the same image, and sometimes it was to select the different one. The crows were able to carry out both tasks and to switch between them as appropriate. That demonstrates a high level of concentration and mental flexibility which few animal species can manage – and which is an effort even for humans.
The crows were quickly able to carry out these tasks even when given new sets of images. The researchers observed neuronal activity in the nidopallium caudolaterale, a brain region associated with the highest levels of cognition in birds. One group of nerve cells responded exclusively when the crows had to choose the same image – while another group of cells always responded when they were operating on the "different image" rule. By observing this cell activity, the researchers were often able to predict which rule the crow was following even before it made its choice.
The study published in Nature Communications provides valuable insights into the parallel evolution of intelligent behavior. "Many functions are realized differently in birds because a long evolutionary history separates us from these direct descendants of the dinosaurs," says Lena Veit. "This means that bird brains can show us an alternative solution for how intelligent behavior is produced with a different anatomy." Crows and primates have different brains, but the cells regulating decision-making are very similar. They represent a general principle which has re-emerged throughout the history of evolution. "Just as we can draw valid conclusions on aerodynamics from a comparison of the very differently constructed wings of birds and bats, here we are able to draw conclusions about how the brain works by investigating the functional similarities and differences of the relevant brain areas in avian and mammalian brains," says Professor Andreas Nieder.
Recently, researchers have been developing a new type of laser that combines photons and plasmons (electron density oscillations) into a single radiation-emitting device with unique properties.
The hybrid photon-plasmon nanowire laser is composed of a Ag nanowire and a CdSe nanowire coupled into an X-shape. This type of coupling enables the photonic and plasmonic modes to be separated, which gives the hybrid laser advantageous features.
"Compared to conventional photon lasers, the hybrid photon-plasmon nanowire laser offers two outstanding possibilities: the extremely thin laser beam (e.g., down to the size of a single molecule) and the ultrafast modulation (e.g., >THz repetition rate), both stemming from the longitudinally separable pure plasmon nanowire mode," Limin Tong, Professor at Zhejiang University in Hangzhou, China, told Phys.org. "Owing to the above-mentioned merits, photon-plasmon lasers are potentially better for certain applications such as strong coupling of quantum nanoemitters, ultra-sensitivity optical sensing, and ultrafast-modulated coherent sources."
In a new study, the researchers have demonstrated that the photon and plasmon nanowire waveguides can be coupled in the longitudinal direction; that is, along the direction of the beams. This type of coupling makes it possible to spatially separate the plasmonic mode from the photonic mode, and to simultaneously use both modes. Under excitation, strong luminous spots are observed at both ends of the hybrid cavity, with interference rings indicating strong spatial coherence of the light emitted. The output spot of the Ag nanowire is much smaller than that of the CdSe nanowire, indicating much tighter confinement of the plasmon radiation.
The advantages of ultratight confinement and ultrafast modulation offered by side-coupling a plasmonic nanowire waveguide to a photonic one enable the hybrid laser to provide very precise lasing, which could be delivered to very small areas such as quantum dots. Photon-plasmon lasers can also have applications for nanophotonic circuits, biosensing, and quantum information processing. The researchers plan to make further improvements to the laser in the future.
"One of our future plans is to introduce the ultrafast nonlinear effects of the plasmonic nanowire into the hybrid laser, and explore the possibility of ultrafast-modulation of the nanolaser, while offering a far-field-accessible pure plasmon cavity mode with sub-diffraction-limited beam size," Tong said.
Intelligence is a very difficult concept and, until recently, no one has succeeded in giving it a satisfactory formal definition.
Most researchers have given up grappling with the notion of intelligence in full generality, and instead focus on related but more limited concepts – but Marcus Hutter argues that mathematically defining intelligence is not only possible, but crucial to understanding and developing super-intelligent machines. From this, his research group has even successfully developed software that can learn to play computer games from scratch.
But first, how do we define "intelligence"? Hutter's group has sifted through the psychology, philosophy and artificial intelligence literature and searched for definitions individual researchers and groups came up with. The characterizations are very diverse, but there seems to be a recurrent theme which we have aggregated and distilled into the following definition: Intelligence is an agent's ability to achieve goals or succeed in a wide range of environments.
The emerging scientific field is called universal artificial intelligence, with AIXI being the resulting super-intelligent agent. AIXI has a planning component and a learning component. The goal of AIXI is to maximise its reward over its lifetime – that's the planning part; the learning component builds increasingly accurate models of the unknown environment from the observations the agent receives.
In summary, every interaction cycle consists of observation, learning, prediction, planning, decision, action and reward, followed by the next cycle. If you're interested in exploring further, AIXI integrates numerous philosophical, computational and statistical principles:
• Ockham's razor (simplicity) principle for model selection
• Epicurus' principle of multiple explanations as a justification of model averaging
• Bayes rule for updating beliefs
• Turing machines as a universal description language
• Kolmogorov complexity to quantify simplicity
• Solomonoff's universal prior, and
• Bellman equations for sequential decision making.
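Schematically, these ingredients combine into the AIXI action-selection rule from Hutter's papers, where $U$ is a universal Turing machine, $q$ ranges over programs consistent with the interaction history, and $\ell(q)$ is the length of program $q$:

```latex
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
\bigl[\, r_t + \cdots + r_m \,\bigr]
\sum_{q \,:\, U(q,\,a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum over programs is Solomonoff's universal prior (shorter programs weigh more, per Ockham's razor), and the alternating max/sum structure is Bellman-style expectimax planning over future actions and percepts.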
AIXI's algorithm rigorously and uniquely defines a super-intelligent agent that learns to act optimally in arbitrary unknown environments. One can prove amazing properties of this agent – in fact, one can prove that in a certain sense AIXI is the most intelligent system possible. Note that this is a rather coarse translation and aggregation of the mathematical theorems into words, but that is the essence.
Since AIXI is incomputable, it has to be approximated in practice. In recent years, we have developed various approximations, ranging from provably optimal to practically feasible algorithms.
The point is not that AIXI is able to play these games (they are not hard) – the remarkable fact is that a single agent can autonomously learn to master this wide variety of environments. AIXI is given no prior knowledge about these games; it is not even told the rules of the games! It starts as a blank canvas, and just by interacting with these environments, it figures out what is going on and learns how to behave well. This is the really impressive feature of AIXI and its main difference from most other projects.
Even though IBM's Deep Blue plays better chess than human grandmasters, it was specifically designed to do so and cannot play Jeopardy. Conversely, IBM's Watson beats humans at Jeopardy but cannot play chess – not even TicTacToe or Pac-Man. AIXI is not tailored to any particular application. If you interface it with any problem, it will learn to act well and indeed optimally.
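The interaction cycle described earlier (observation, learning, prediction, planning, decision, action, reward) can be sketched as a generic agent-environment loop. This is an illustrative skeleton only: the class and method names are invented, and the random agent merely stands in for the (incomputable) AIXI policy:

```python
import random

class Agent:
    """Minimal agent interface; names here are hypothetical."""
    def observe(self, observation, reward):
        pass  # a learning agent would update its environment model here
    def act(self):
        return random.choice([0, 1])  # placeholder for planning/decision

class GuessEnv:
    """Toy environment: reward 1.0 when the action matches a hidden bit."""
    def __init__(self):
        self.hidden = 1
    def reset(self):
        return 0  # initial observation
    def step(self, action):
        reward = 1.0 if action == self.hidden else 0.0
        return self.hidden, reward  # the observation reveals the bit

def run(agent, env, cycles):
    """One cycle = observe/learn -> plan/decide -> act -> receive reward."""
    total = 0.0
    obs, reward = env.reset(), 0.0
    for _ in range(cycles):
        agent.observe(obs, reward)
        action = agent.act()
        obs, reward = env.step(action)
        total += reward
    return total

random.seed(0)
print(run(Agent(), GuessEnv(), 100))
```

A learning agent would quickly notice that the observation reveals the hidden bit and collect near-maximal reward; the random placeholder averages only about half of it.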
Scientists in Japan use ancient trees to look back on the history of our local cosmos, and discover a mystery.
Since the invention of the telescope in the year 1608, mankind has collected information about our local cosmos. As it turns out, we’re not the only ones. Trees have been doing the same for millennia.
A group of physicists led by Nagoya University graduate student Fusa Miyake has begun using information stored in ancient Japanese cedars to gain the oldest firsthand accounts of the local universe. They have discovered, hidden within tree rings, clear evidence of some surprisingly high-energy events—possibly supernovae or solar flares—that occurred more than 1200 years ago.
On Japan’s Yakushima island, trees regularly live at least a thousand years, thriving under the tree equivalent of a low-carb diet in the form of a low-nutrition granite bedrock that encourages a slower pace of growth. Miyake and her team examined core samples from two trees on this small island. Back at Nagoya University, they studied the number and thickness of the trees’ rings, not just to determine the age of the trees but also to gather information about the atmosphere they breathed.
When high-energy radiation from space enters Earth’s upper atmosphere, it interacts with naturally occurring atmospheric molecules to produce the isotope carbon-14. Because trees are firmly plugged into the Earth’s carbon cycle by photosynthesis, the carbon-14 ends up in each tree ring, etching into the flesh of the tree an annual record of the average carbon-14 level in Earth’s atmosphere.
Miyake and her colleagues had good reason to focus on the rings corresponding to 775 AD. A previous project called IntCal, which uses tree records of carbon-14 levels to calibrate carbon-14 dating, had seen a noticeable rise in carbon-14 levels toward the end of the 8th century.
The signal Miyake’s team found was far above anything seen in recent times, indicating that Earth had been bombarded by an extremely intense burst of radiation. The rings revealed that, over the course of one year, the atmospheric level of carbon-14 rose 1.2 percent: nearly 20 times the normal variation.
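The two figures in that paragraph pin down the baseline: a 1.2 percent jump that is nearly 20 times the normal variation implies a typical year-to-year change of only about 0.06 percent:

```python
rise_percent = 1.2   # one-year rise in atmospheric carbon-14 around 775 AD
times_normal = 20    # "nearly 20 times the normal variation"

normal_variation = rise_percent / times_normal
print(f"implied normal annual variation: {normal_variation:.2f}%")  # 0.06%
```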
This massive flash of radiation could have been caused by a supernova; a gamma ray burst from a supremely rare galactic event such as a collision of two neutron stars; or a super solar flare at least 10 times the size of the largest observed flare.
Using their knowledge of earth sciences, biology and astronomy, Miyake’s team uncovered a smoking gun in a cosmological whodunit. Now all that remains is to identify who fired that gun.
There could be hope for diabetics who are tired of giving themselves insulin injections on a daily basis. Researchers at North Carolina State University and the University of North Carolina at Chapel Hill are developing a system in which a single injection of nanoparticles could deliver insulin internally for days at a time – with a little help from pulses of ultrasound.
The biocompatible and biodegradable nanoparticles are made of poly(lactic-co-glycolic acid), and contain a payload of insulin. Each particle has either a positively-charged chitosan coating, or a negatively-charged alginate coating. When the two types of particles are mixed together, these oppositely-charged coatings cause them to be drawn to each other by electrostatic force.