NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1450 news sources.
The patron animal of quantum theory poses for a unique portrait in which the camera and the sitter don't share a single photon – except by entanglement.
Information is central to quantum mechanics. In particular, quantum interference occurs only if there exists no information to distinguish between the superposed states; the mere possibility of obtaining information that could distinguish between overlapping states inhibits quantum interference [1, 2]. Gabriela Barreto Lemos at the Austrian Academy of Sciences and her colleagues have introduced and experimentally demonstrated a quantum imaging concept based on induced coherence without induced emission [3, 4]. The experiment uses two separate down-conversion nonlinear crystals (numbered NL1 and NL2), both illuminated by the same pump laser, each capable of creating a photon pair (denoted signal and idler). If the pair is created in NL1, one photon (the idler) passes through the object to be imaged and is then overlapped with the idler amplitude created in NL2, so that its source becomes undefined.
Interference of the signal amplitudes coming from the two crystals then reveals the image of the object. The photons that pass through the imaged object (idler photons from NL1) are never detected; the images are obtained exclusively with the signal photons (from NL1 and NL2), which never interact with the object.
The experiment is fundamentally different from previous quantum imaging techniques, such as interaction-free imaging [5] or ghost imaging [6-9], because here the photons used to illuminate the object need not be detected at all and no coincidence detection is necessary. This enables the probe wavelength to be chosen in a range for which suitable detectors are not available. To illustrate this, the researchers show images of objects that are either opaque or invisible to the detected photons. The experiment is a prototype in quantum information: knowledge can be extracted by, and about, a photon that is never detected.
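The imaging logic lends itself to a quick numerical sketch. The model below is illustrative only, not the authors' code: the detected signal intensity follows I ∝ 1 + |T|·cos(φ + arg T), where T is the complex transmission of the object that only the undetected idler ever encounters.

```python
import numpy as np

def signal_intensity(T, phase):
    """Interference of the two signal amplitudes for a complex object
    transmission map T and interferometer phase `phase`."""
    return 1.0 + np.abs(T) * np.cos(phase + np.angle(T))

# A purely absorbing object: an opaque disc (T = 0) on a clear field (T = 1).
n = 64
y, x = np.mgrid[:n, :n]
T = np.where((x - n / 2) ** 2 + (y - n / 2) ** 2 < (n / 4) ** 2, 0.0, 1.0)

# Taking two frames at phases 0 and pi and halving their difference
# recovers |T| from the signal photons alone.
image = 0.5 * (signal_intensity(T, 0.0) - signal_intensity(T, np.pi))
# image is ~0 inside the disc (opaque) and ~1 outside (transparent).
```

Sweeping `phase` shifts the fringes, which is how an image of an object the detected photons never touched can be built up from signal counts alone.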
Professor Anthony Amend from the University of Hawaii at Manoa showed very recently that the fungal genus Malassezia is not only found on human skin, where it is associated with conditions such as dandruff and eczema, but has also been identified in marine environments such as deep-sea sediment, hydrothermal vents, corals, the guts of lobster larvae, eel tissue, and Antarctic soils.
More remarkably, phylogenetic trees built from the sequence data show that the marine and the terrestrial (non-marine) species do not group together by habitat but “interdigitate”, appearing interspersed throughout the tree. The evidence suggests that the marine and terrestrial forms have jumped repeatedly between habitats.
The data were obtained from a number of sources, most of them “environmental sequencing” projects around the world, which aim to sequence simultaneously all the DNA found in a sample. Done correctly, the analysis yields in one pass the identities of all organisms captured in a sample.
Prior to this analysis, it was thought that these fungi had evolved to become optimally suited to mammalian skin, but careful analysis of environmental sequencing efforts overturned that belief. A single species can be spread all over the globe, on land as well as in the ocean. One example is Malassezia restricta, found on human skin but also in extreme habitats such as arctic soil and hydrothermal vents. Marine animals also carry this fungus, from mammals such as seals down to fish, lobsters, and corals.
One criticism is that sequencing samples are prone to contamination, especially by a fungus endemic to human skin. However, the detection of completely novel species cannot be explained by contamination. Moreover, RNA is a fairly unstable molecule, so its detection in samples old enough for degradation to have occurred suggests that the microbes were actively producing RNA.
While the fungus is associated with many skin conditions, it is not yet clear whether it is a causal factor, simply because disease etiology is a complex interplay between an individual's immune system and the disease agent.
Racetrack Playa is home to an enduring Death Valley mystery. Littered across the surface of this dry lake, also called a "playa," are hundreds of rocks – some weighing as much as 320 kilograms (700 pounds) – that seem to have been dragged across the ground, leaving synchronized trails that can stretch for hundreds of meters.
What powerful force could be moving them? Researchers have investigated this question since the 1940s, but no one has seen the process in action – until now. In a paper published in the journal PLOS ONE on Aug. 27, a team led by Scripps Institution of Oceanography, UC San Diego, paleobiologist Richard Norris reports on first-hand observations of the phenomenon.
Because the stones can sit for a decade or more without moving, the researchers did not originally expect to see motion in person. Instead, they decided to monitor the rocks remotely by installing a high-resolution weather station capable of measuring gusts at one-second intervals and by fitting 15 rocks with custom-built, motion-activated GPS units. The National Park Service would not let them use native rocks, so they brought in similar rocks from an outside source.
The experiment was set up in winter 2011 with permission of the Park Service. Then – in what Ralph Lorenz of the Applied Physics Laboratory at the Johns Hopkins University, one of the paper's authors, suspected would be "the most boring experiment ever" – they waited for something to happen.
But in December 2013, Norris and co-author and cousin Jim Norris arrived in Death Valley to discover that the playa was covered with a pond of water seven centimeters (three inches) deep. Shortly after, the rocks began moving.
"Science sometimes has an element of luck," Richard Norris said. "We expected to wait five or ten years without anything moving, but only two years into the project, we just happened to be there at the right time to see it happen in person."
Their observations show that moving the rocks requires a rare combination of events. First, the playa fills with water, which must be deep enough to form floating ice during cold winter nights but shallow enough to expose the rocks. As nighttime temperatures plummet, the pond freezes to form thin sheets of "windowpane" ice, which must be thin enough to move freely but thick enough to maintain strength. On sunny days, the ice begins to melt and break up into large floating panels, which light winds drive across the playa, pushing rocks in front of them and leaving trails in the soft mud below the surface.
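A back-of-the-envelope force balance shows why even light winds suffice once large ice panels form. Every number below is an illustrative assumption, not a measurement from the study:

```python
# Toy force balance: wind drag accumulated over a large ice panel versus
# sliding friction of a heavy rock in water-slicked mud.
rho_air = 1.2            # kg/m^3, density of air
skin_drag = 0.002        # assumed wind-over-ice drag coefficient
panel_area = 100 * 100   # m^2: a large "windowpane" ice panel
wind = 5.0               # m/s: a light breeze

push = rho_air * skin_drag * panel_area * wind ** 2   # ~600 N on the panel

mu_mud = 0.1             # assumed friction coefficient in soft wet mud
mass = 300.0             # kg: a large rock
resist = mu_mud * mass * 9.8                          # ~294 N to start sliding
```

Because the drag scales with panel area while the resistance depends only on the rock, a big enough floe in a light breeze can shove even a 300 kg rock along the slick mud.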
"On Dec. 21, 2013, ice breakup happened just around noon, with popping and cracking sounds coming from all over the frozen pond surface," said Richard Norris. "I said to Jim, 'This is it!'"
The video in this article nicely explains how the non-aerodynamic rocks of Death Valley’s Racetrack Playa move, leaving their trails behind in the hot desert. Numerous attempts to catch them in the act had been made using GPS receivers and old-fashioned observation, but ice is so rare in Death Valley that no one had ever seen the process until now. On very rare occasions, when it rains in the region, water accumulates in the playa. If the wind is powerful and consistent enough, it pushes panels of ice against the rocks, and over time the ice floes shove the rocks along, leaving behind distinctive trails. This perfect combination of water, wind, ice and heat creates a remarkable signature on the landscape.
Seagate Technology (NASDAQ:STX), a world leader in storage solutions, today announced it is shipping the world’s first 8TB hard disk drive. An important step forward in storage, the 8TB hard disk drive provides scale-out data infrastructures with supersized capacity, energy efficiency and the lowest total cost of ownership (TCO) in the industry for cloud content, object storage and backup disaster recovery storage.
“As our world becomes more mobile, the number of devices we use to create and consume data is driving an explosive growth in unstructured data. This places increased pressure on cloud builders to look for innovative ways to build cost-effective, high capacity storage for both private and cloud-based data centers,” said Scott Horn, Seagate vice president of marketing. “Seagate is poised to address this challenge by offering the world’s first 8TB HDD, a ground-breaking new solution for meeting the increased capacities needed to support the demand for high capacity storage in a world bursting with digital creation, consumption and long-term storage.”
A cornerstone for growing capacities in multiple applications, the 8TB hard drive delivers bulk data storage solutions for online content storage providing customers with the highest capacity density needed to address an ever increasing amount of unstructured data in an industry-standard 3.5-inch HDD. Providing up to 8TB in a single drive slot, the drive delivers maximum rack density, within an existing footprint, for the most efficient data center floor space usage possible.
“Public and private data centers are grappling with efficiently storing massive amounts of unstructured digital content,” said John Rydning, IDC’s research vice president for hard disk drives. “Seagate’s new 8TB HDD provides IT managers with a new option for improving storage density in the data center, thus helping them to tackle one of the largest and fastest growing data categories within enterprise storage economically.”
The 8TB hard disk drive increases system capacity using fewer components for increased system and staffing efficiencies while lowering power costs. With its low operating power consumption, the drive reliably conserves energy thereby reducing overall operating costs. Helping customers economically store data, it boasts the best Watts/GB for enterprise bulk data storage in the industry.
“Cleversafe is excited to once again partner with Seagate to deliver to our customers what is truly an innovative storage solution. Delivering absolute lowest cost/TB along with the performance and reliability required for massive scale applications, the new 8TB hard disk drive is ideal for meeting the needs of our enterprise and service provider customers who demand optimized hardware and the cost structure needed for massive scale out,” said Tom Shirley, senior vice president of research and development, Cleversafe.
Outfitted with enterprise-class reliability and support for archive workloads, it features multi-drive RV tolerance for consistent enterprise-class performance in high density environments. The drive also incorporates a proven SATA 6Gb/s interface for cost-effective, easy system integration in both private and public data centers.
Scientists using mission data from NASA’s Cassini spacecraft have identified 101 distinct geysers erupting on Saturn’s icy moon Enceladus. Their analysis suggests it is possible for liquid water to reach from the moon’s underground sea all the way to its surface.
Over a period of almost seven years, Cassini’s cameras surveyed the south polar terrain of the small moon, a unique geological basin renowned for its four prominent “tiger stripe” fractures and the geysers of tiny icy particles and water vapor first sighted there nearly 10 years ago. The result of the survey is a map of 101 geysers, each erupting from one of the tiger stripe fractures, and the discovery that individual geysers are coincident with small hot spots. These relationships pointed the way to the geysers’ origin.
After the first sighting of the geysers in 2005, scientists suspected repeated flexing of Enceladus by Saturn’s tides as the moon orbits the planet had something to do with their behavior. One suggestion included the back-and-forth rubbing of opposing walls of the fractures generating frictional heat that turned ice into geyser-forming vapor and liquid.
Alternate views held that the opening and closing of the fractures allowed water vapor from below to reach the surface. Before this new study, it was not clear which process was the dominating influence. Nor was it certain whether excess heat emitted by Enceladus was everywhere correlated with geyser activity.
To determine the surface locations of the geysers, researchers employed the same process of triangulation used historically to survey geological features on Earth, such as mountains. When the researchers compared the geysers’ locations with low-resolution maps of thermal emission, it became apparent the greatest geyser activity coincided with the greatest thermal radiation. Comparisons between the geysers and tidal stresses revealed similar connections. However, these correlations alone were insufficient to answer the question, “What produces what?”
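The triangulation idea can be illustrated with a toy two-dimensional version (hypothetical coordinates, not the Cassini geometry): a feature's position is the intersection of bearing rays from two known observation points.

```python
import numpy as np

def triangulate(p1, bearing1, p2, bearing2):
    """Locate a feature at the intersection of two bearing rays from known
    stations p1 and p2 (bearings in radians from the x-axis)."""
    d1 = np.array([np.cos(bearing1), np.sin(bearing1)])
    d2 = np.array([np.cos(bearing2), np.sin(bearing2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
    t = np.linalg.solve(np.column_stack([d1, -d2]), np.subtract(p2, p1))
    return np.array(p1) + t[0] * d1

# A feature at (3, 4) sighted from stations at (0, 0) and (6, 0):
pt = triangulate((0, 0), np.arctan2(4, 3), (6, 0), np.arctan2(4, -3))
```

In the Cassini survey, the "stations" are spacecraft camera positions on different orbits, and the bearings come from where each geyser appears in the images.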
The answer to this mystery came from comparison of the survey results with high-resolution data collected in 2010 by Cassini’s heat-sensing instruments. Individual geysers were found to coincide with small-scale hot spots, only a few dozen feet (about ten meters) across, which were too small to be produced by frictional heating, but the right size to be the result of condensation of vapor on the near-surface walls of the fractures. This immediately implicated the hot spots as the signature of the geysering process.
“Once we had these results in hand we knew right away heat was not causing the geysers, but vice versa,” said Carolyn Porco, leader of the Cassini imaging team from the Space Science Institute in Boulder, Colorado, and lead author of the first paper. “It also told us the geysers are not a near-surface phenomenon, but have much deeper roots.”
WorldView-3, the world’s first multi-payload, super-spectral, high-resolution commercial satellite for Earth observation and advanced geospatial solutions, launched into orbit on Aug. 13 aboard an Atlas rocket. Operating at an expected altitude of 617 km, WorldView-3 will have an average revisit time of less than one day and will be capable of collecting up to 680,000 square kilometers of imagery per day. Its data-rich imagery will help discover new sources of minerals and fuels, manage forests and farms, and accelerate DigitalGlobe’s exploitation of Geospatial Big Data™ – a living digital inventory of the surface of the Earth.
The data should lead to much nicer imagery in online mapping services from companies like Google and Microsoft (both of which are DigitalGlobe customers), although it's not just cosmetic. Higher-res photos will help track large farms, spot mineral deposits and otherwise deliver a clearer view of our planet that has previously been limited to the government -- don't be surprised if it's easier to spot landmarks on a map without using markers.
The search giant is automatically building Knowledge Vault, a massive database that could give us unprecedented access to the world's facts
GOOGLE is building the largest store of knowledge in human history – and it's doing so without any human help.
Instead, Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it.
The breadth and accuracy of this gathered knowledge is already becoming the foundation of systems that allow robots and smartphones to understand what people ask them. It promises to let Google answer questions like an oracle rather than a search engine, and even to turn a new lens on human history.
Knowledge Vault is a type of "knowledge base" – a system that stores information so that machines as well as people can read it. Where a database deals with numbers, a knowledge base deals with facts. When you type "Where was Madonna born" into Google, for example, the place given is pulled from Google's existing knowledge base.
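The database-versus-knowledge-base distinction can be made concrete with a minimal triple store; the schema below is purely illustrative, not Google's:

```python
# A knowledge base stores human- and machine-readable facts as
# (subject, predicate, object) triples rather than rows of numbers.
facts = {
    ("Madonna", "born_in"): "Bay City, Michigan",
    ("Madonna", "occupation"): "singer",
}

def answer(subject, predicate):
    """Look up a fact, as a question-answering system might."""
    return facts.get((subject, predicate), "unknown")

answer("Madonna", "born_in")  # "Bay City, Michigan"
```

A query like "Where was Madonna born" reduces to extracting the subject and predicate from the question and looking up the matching triple.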
This existing base, called Knowledge Graph, relies on crowdsourcing to expand its information. But the firm noticed that growth was stalling; humans could only take it so far.
So Google decided it needed to automate the process. It started building the Vault by using an algorithm to automatically pull in information from all over the web, using machine learning to turn the raw data into usable pieces of knowledge.
Knowledge Vault has pulled in 1.6 billion facts to date. Of these, 271 million are rated as "confident facts", to which Google's model ascribes a more than 90 per cent chance of being true. It does this by cross-referencing new facts with what it already knows.
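The cross-referencing idea can be sketched as naive evidence accumulation. This is a hypothetical model, not Google's published method; the 0.7 extractor accuracy is an assumption:

```python
# Naive noisy-OR: the probability a candidate fact is true given `votes`
# independent extractions that all asserted it, each with the same
# (assumed) accuracy.
def fact_confidence(votes, extractor_accuracy=0.7):
    p_all_wrong = (1 - extractor_accuracy) ** votes
    return 1 - p_all_wrong

fact_confidence(1)  # 0.7   - one extraction is weak evidence
fact_confidence(3)  # 0.973 - agreement pushes past the 90% "confident" bar
```

However it is scored in practice, the principle is the same: a fact extracted once is a guess, while a fact that independently recurs and agrees with what is already known earns "confident" status.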
"It's a hugely impressive thing that they are pulling off," says Fabian Suchanek, a data scientist at Télécom ParisTech in France. Google's Knowledge Graph is currently bigger than the Knowledge Vault, but it only includes manually integrated sources such as the CIA Factbook.
Knowledge Vault offers Google fast, automatic expansion of its knowledge – and it's only going to get bigger. As well as the ability to analyse text on a webpage for facts to feed its knowledge base, Google can also peer under the surface of the web, hunting for hidden sources of data such as the figures that feed Amazon product pages, for example.
Tom Austin, a technology analyst at Gartner in Boston, says that the world's biggest technology companies are racing to build similar vaults. "Google, Microsoft, Facebook, Amazon and IBM are all building them, and they're tackling these enormous problems that we would never even have thought of trying 10 years ago," he says.
The very first stars in the Universe might have been hundreds of times more massive than the Sun.
Astronomers have found evidence for the existence of the monster stars long thought to have populated the early Universe. Weighing in at hundreds of times the mass of the Sun, such stars would have been the first to fuse primordial hydrogen and helium into heavier elements, leaving behind a chemical signature that the researchers have now found in an ancient, second-generation star.
Little is known about the Universe’s first stars, which would have formed out of clouds of hydrogen, helium and a tiny amount of lithium in the first few hundred million years after the Big Bang.
Simulations have long predicted that some of this first batch of stars were enormous. With masses of more than 100 times that of the Sun, they would have lived and died in the cosmic blink of an eye, a few million years. As they exploded in supernovae, they created the first heavy elements from which later galaxies and stars evolved. But no traces of their existence have previously been found.
Now, using a technique called stellar archaeology, Wako Aoki at the National Astronomical Observatory of Japan in Tokyo and his colleagues have found the first hint of such a star, preserved in the chemical make-up of its ancient daughter. The chemistry of this relic — a star called SDSS J0018-0939 — suggests that it may have formed from a cloud of gas seeded with material created in the explosion of a single, very massive star. The results were published in Science on 21 August.
“This is a much awaited discovery,” says Naoki Yoshida, an astrophysicist at the University of Tokyo who was not involved in the study. That such chemical signatures have never been found in the Universe, despite many theoretical studies predicting their existence, is a long-standing puzzle, he says. “It seems Aoki et al. have finally found an old relic that shows intriguing evidence that there really was such a monstrous star in the distant past.”
An artificial protein that self-assembles around and protects DNA could be ideal for gene therapy, nanomachines and synthetic biology.
Dutch scientists have built a simple model of viruses’ protective coats in an attempt to create viral mimics that could fight diseases, as opposed to causing them. Rather than copying natural proteins, Renko de Vries from Wageningen University and his team designed and built a three-part protein from scratch that self-assembles around DNA.
‘The protein is exceedingly simple in its primary and secondary structure, yet captures the essence of self-assembly for the tobacco mosaic virus,’ de Vries tells Chemistry World. This knowledge could enable superior vehicles for getting DNA and RNA into cells, for example for gene therapy, and templates for improved DNA machines. ‘You could probably do the same with supramolecular chemistry,’ de Vries adds, ‘but the protein approach has the beauty that you can expand in the direction of synthetic biology.’
The ‘no-frills’ coat sprang from de Vries’ discussions with Paul van der Schoot’s team at the Eindhoven University of Technology, who had developed a theoretical model of tobacco mosaic virus self-assembly. ‘We established the crucial mechanisms and then started designing these molecules,’ de Vries explains.
The protein’s first segment, which bound to the DNA to be encapsulated, simply comprised 12 lysine amino acid building blocks. The second was a ‘silk-like’ protein sequence, containing repeat units of mostly alanine and glycine amino acids, that can form stiff filaments. Varying the number of repeat silk-like units allowed the chemists to dictate cooperation between segments during coat assembly. The third segment was a random 400 residue sequence with many prolines and other hydrophilic, uncharged amino acids that stopped the rod-shaped ‘virus-like particles’ (VLPs) clumping together.
‘We found that the self-assembly was really quite spectacular,’ de Vries recalls. ‘If you have one protein sticking to the nucleic acid template, that accelerates binding of further proteins. That ensures that you always have at least a couple of templates perfectly coated, even if you do not have enough protein. For the 2500 base pair linear DNA we used, about 400 copies of the artificial virus protein are needed to make the complete coat.’
Out of five different silk-like segment lengths the team tried, only the two longest ones led to fully cooperative coat self-assembly. These VLPs compacted their central DNA most and protected it from enzyme attack for longer. However, all of the different silk-like segment lengths produced VLPs that could transfect DNA into cells with similar efficiency.
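The cooperativity de Vries describes ('one protein sticking to the nucleic acid template accelerates binding of further proteins') can be caricatured with a toy simulation; the `boost` factor and counts are assumptions, not measured kinetics:

```python
import random

random.seed(1)

def coat(templates, proteins, boost=50):
    """Sequentially bind `proteins` to `templates`; a template with any
    protein already bound attracts new protein `boost`-fold more strongly."""
    counts = [0] * templates
    for _ in range(proteins):
        weights = [1 + boost * (c > 0) for c in counts]
        i = random.choices(range(templates), weights=weights)[0]
        counts[i] += 1
    return counts

# With too little protein for all templates, cooperative binding
# concentrates the supply on a few templates instead of half-coating all.
counts = coat(templates=10, proteins=100)
```

This is the qualitative behavior the team reports: a couple of templates end up perfectly coated even when protein is scarce.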
A unique experiment at the U.S. Department of Energy's Fermi National Accelerator Laboratory called the Holometer has started collecting data that will answer some mind-bending questions about our universe – including whether we live in a hologram.
Much like characters on a television show would not know that their seemingly 3D world exists only on a 2D screen, we could be clueless that our 3D space is just an illusion. The information about everything in our universe could actually be encoded in tiny packets in two dimensions. Get close enough to your TV screen and you'll see pixels, small points of data that make a seamless image if you stand back. Scientists think that the universe's information may be contained in the same way, and that the natural "pixel size" of space is roughly 10 trillion trillion times smaller than an atom, a distance that physicists refer to as the Planck scale.
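A quick arithmetic check of the "pixel size" comparison (the atomic size used here is a rough assumed value):

```python
# Compare a ballpark atomic diameter with the Planck length.
planck_length_m = 1.6e-35   # Planck scale
atom_size_m = 1.0e-10       # rough atomic diameter (assumption)

ratio = atom_size_m / planck_length_m
# ratio is of order 10**25, i.e. "10 trillion trillion".
```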
"We want to find out whether space-time is a quantum system just like matter is," said Craig Hogan, director of Fermilab's Center for Particle Astrophysics and the developer of the holographic noise theory. "If we see something, it will completely change ideas about space we've used for thousands of years."
Quantum theory suggests that it is impossible to know both the exact location and the exact speed of subatomic particles. If space comes in 2D bits with limited information about the precise location of objects, then space itself would fall under the same theory of uncertainty. In the same way that matter continues to jiggle (as quantum waves) even when cooled to absolute zero, this digitized space should have built-in vibrations even in its lowest energy state.
Essentially, the experiment probes the limits of the universe's ability to store information. If there are a set number of bits that tell you where something is, it eventually becomes impossible to find more specific information about the location – even in principle. The instrument testing these limits is Fermilab's Holometer, or holographic interferometer, the most sensitive device ever created to measure the quantum jitter of space itself.
Now operating at full power, the Holometer uses a pair of interferometers placed close to one another. Each one sends a one-kilowatt laser beam (the equivalent of 200,000 laser pointers) at a beam splitter and down two perpendicular 40-meter arms. The light is then reflected back to the beam splitter where the two beams recombine, creating fluctuations in brightness if there is motion. Researchers analyze these fluctuations in the returning light to see if the beam splitter is moving in a certain way – being carried along on a jitter of space itself.
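The two-interferometer layout has a statistical rationale that a toy model makes clear (all amplitudes below are hypothetical): readout noise independent in each instrument averages away in the cross-correlation, while a jitter common to both survives.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

common = 0.2 * rng.standard_normal(n)   # jitter shared by both instruments
out1 = common + rng.standard_normal(n)  # interferometer 1: shared + own noise
out2 = common + rng.standard_normal(n)  # interferometer 2: shared + own noise

# The cross-correlation estimates the variance of the shared part (0.04)
# even though it is buried under much larger independent noise.
cross = float(np.mean(out1 * out2))
```

A genuine holographic jitter of space would be common to both co-located interferometers, which is exactly the kind of signal this averaging pulls out.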
"Holographic noise" is expected to be present at all frequencies, but the scientists' challenge is not to be fooled by other sources of vibrations. The Holometer is testing a frequency so high – millions of cycles per second – that motions of normal matter are not likely to cause problems. Rather, the dominant background noise is more often due to radio waves emitted by nearby electronics. The Holometer experiment is designed to identify and eliminate noise from such conventional sources.
"If we find a noise we can't get rid of, we might be detecting something fundamental about nature–a noise that is intrinsic to spacetime," said Fermilab physicist Aaron Chou, lead scientist and project manager for the Holometer. "It's an exciting moment for physics. A positive result will open a whole new avenue of questioning about how space works."
A group of cells developed into a thymus - a critical part of the immune system - when transplanted into mice. The findings, published in Nature Cell Biology, could pave the way to alternatives to organ transplantation.
Experts said the research was promising, but still years away from human therapies.
The thymus is found near the heart and produces a component of the immune system, called T-cells, which fight infection. Scientists at the Medical Research Council centre for regenerative medicine at the University of Edinburgh started with cells from a mouse embryo.
These cells were genetically "reprogrammed" and started to transform into a type of cell found in the thymus. These were mixed with other support-role cells and placed inside mice.
Once inside, the bunch of cells developed into a functional thymus.
It is similar to a feat last year, when lab-grown human brains reached the same level of development as a nine-week-old fetus.
The thymus is a much simpler organ and in these experiments became fully functional. Structurally it contained the two main regions - the cortex and medulla - and it also produced T-cells.
Prof. Clare Blackburn, part of the research team, said it was "tremendously exciting" when the team realized what they had achieved. "This was a complete surprise to us, that we were really being able to generate a fully functional and fully organized organ starting with reprogrammed cells in really a very straightforward way. This is a very exciting advance and it's also very tantalising in terms of the wider field of regenerative medicine."
Patients who need a bone marrow transplant and children who are born without a functioning thymus could all benefit. Ways of boosting the thymus could also help elderly people. The organ shrinks with age and leads to a weaker immune system. However, there are a number of obstacles to overcome before this research moves from animal studies to hospital therapies. The current technique uses embryos. This means the developing thymus would not be a tissue match for the patient.
In a study published in Nature Genetics, researchers from Uppsala University present the first global analysis of genome variation in honeybees. The findings show a surprisingly high level of genetic diversity in honeybees, and indicate that the species most probably originates from Asia, and not from Africa as previously thought.
The honeybee (Apis mellifera) is of crucial importance for humanity. One third of our food is dependent on the pollination of fruits, nuts and vegetables by bees and other insects. Extensive losses of honeybee colonies in recent years are a major cause for concern. Honeybees face threats from disease, climate change, and management practices. To combat these threats it is important to understand the evolutionary history of honeybees and how they are adapted to different environments across the world.
"We have used state-of-the-art high-throughput genomics to address these questions, and have identified high levels of genetic diversity in honeybees. In contrast to other domestic species, management of honeybees seems to have increased levels of genetic variation by mixing bees from different parts of the world. The findings may also indicate that high levels of inbreeding are not a major cause of global colony losses", says Matthew Webster, researcher at the department of Medical Biochemistry and Microbiology, Uppsala University.
Another unexpected result was that honeybees seem to be derived from an ancient lineage of cavity-nesting bees that arrived from Asia around 300,000 years ago and rapidly spread across Europe and Africa. This stands in contrast to previous research that suggests that honeybees originate from Africa.
Reference: A worldwide survey of genome sequence variation provides insight into the evolutionary history of the honeybee Apis mellifera, Nature Genetics, 2014. dx.doi.org/10.1038/ng.3077
Virginia Tech professor and Fralin Life Institute affiliate Jim Westwood has made a discovery about plant-to-plant communication: enormous amounts of genetic messages in the form of mRNA transcripts are transmitted from the parasitic plant Cuscuta (known more commonly as dodder and strangleweed) to its hosts.
Using Illumina next-generation sequencing technologies to sequence the tissues of the host and an attached parasite, the team found that the number of transcripts passed into the host depends on the identity of the host. The tomato plant received 347 of the strangleweed’s mRNAs, whereas Arabidopsis received an astonishing 9,514. When an Arabidopsis plant receives this many mRNAs, about 45% of the mRNA in its tissues in contact with the strangleweed comes from the parasite.
The new quantitative result builds on Professor Westwood’s prior discovery of RNA transfer between the parasitic plant and its host plants. In the prior study, Westwood found that when the strangleweed uses its haustorium (piercing appendage) to penetrate the stems of its host plants, it passes on its own RNA to the host, though only tens of mRNAs were identified. The discovery challenged our understanding that mRNAs are mainly kept within cells.
But now the research team has quantified the extent to which the messages are passed. mRNA stands for “messenger RNA”: snippets of genetic information transcribed from DNA. Typically an mRNA molecule is “read” by a molecular machine known as a ribosome and translated into a protein, which carries out particular functions in the cell. Usually, more mRNA means more protein, so the conversion of DNA to mRNA is one way a cell amplifies or controls the activation of a gene.
It is not yet clear what the functions of the transmitted genes are, but bioinformatic analysis shows that genes involved in hydrolase activity, metabolism and response to stimuli were among the best represented in those that crossed the species bridge.
Westwood suspects that the host plant may be receiving orders of a kind from the parasitic plant, such as instructions to lower its natural defense system so that the strangleweed can attack it more easily.
The findings by Westwood, professor of weed science, plant pathology and physiology at the College of Agriculture and Life Sciences, are even more surprising when set against the prior view that mRNA is unstable, short-lived and fragile.
The discoveries also open new avenues in research on the eradication of parasitic plants such as broomrape and witchweed, two plants that pose serious threats to legumes and other crops, and have intriguing implications for improving crop yields.
Future plans include extending the research to other domains of life, to learn whether organisms such as fungi and bacteria also exchange mRNA. The meaning and outcome of the transmitted messages remain unclear, and work must be done to find out what the plants are saying to each other.
A new invisible ink that reveals secret messages when squeezed could be useful in preventing fraud.
It could be the ultimate stress ball for spies. An invisible ink creates secret messages on bendy plastic that are only revealed when you give it a squeeze.
Previously, Jianping Ge of the East China Normal University in Shanghai, China, and his colleagues created invisible inks that appear when submerged underwater or exposed to a magnetic field. Now they've made an ink you can reveal just by squeezing with your hand. The team first embedded an array of silica crystals in a plastic gel. The crystals reflect light at a certain wavelength depending on their spacing and the angle of viewing, so the relaxed gel appears green, but squeezing or stretching it turns it red or blue.
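The colour change follows textbook photonic-crystal physics: the peak reflected wavelength scales with the lattice spacing via the Bragg-Snell relation. A minimal sketch, with spacing and refractive-index values chosen for illustration (the actual design parameters are not given in the article):

```python
import math

def bragg_wavelength(spacing_nm, n_eff=1.40, theta_deg=0.0):
    """Peak reflected wavelength (nm) of a photonic crystal, from the
    Bragg-Snell relation: lambda = 2 * d * sqrt(n_eff^2 - sin^2(theta))."""
    theta = math.radians(theta_deg)
    return 2.0 * spacing_nm * math.sqrt(n_eff**2 - math.sin(theta)**2)

# Relaxed gel: an assumed lattice spacing that reflects green (~550 nm)
relaxed = bragg_wavelength(spacing_nm=196)
# Squeezing compresses the lattice, so the reflection shifts toward blue...
squeezed = bragg_wavelength(spacing_nm=170)
# ...while stretching widens the spacing and shifts it toward red.
stretched = bragg_wavelength(spacing_nm=230)
```

Compressing the spacing shortens the reflected wavelength and stretching lengthens it, which is why one gel can display green, blue or red depending on how it is deformed.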
Next, the team coated the surface with another clear plastic gel, and put a cut-out template of a secret image on top. They shone ultraviolet light on the set-up, which linked the two gels around the cut-out, but left them separate in the parts covered by the template. The linked gels are firmer, so they don't change colours when squeezed. After the cut-out was removed, its silhouette only appeared when the gels were squeezed.
Ge says he is talking to companies about using the technique to protect against counterfeit goods. "These invisible photonic patterns can be potential anti-fake labels," he says. Jon Kellar, a materials engineer at the South Dakota School of Mines and Technology in Rapid City, South Dakota, agrees that the hidden images could help combat fraud, but he thinks that the fabrication process will need to be simplified for commercial use.
Nutritional starvation therapy is under intensive investigation because it provides a potentially lower toxicity with higher specificity than conventional cancer therapy. Autophagy, often triggered by starvation, represents an energy-saving, pro-survival cellular function; however, dysregulated autophagy could also lead to cell death, a process distinct from the classic caspase-dependent apoptosis.
A recent study shows how arginine starvation specifically kills tumor cells by a novel mechanism involving mitochondrial dysfunction, reactive oxygen species generation, DNA leakage, and chromatin autophagy, in which leaked DNA is captured by giant autophagosomes.
Cells, whether cancerous or not, can respond to stress by undergoing a process of cellular suicide that involves the controlled dismantling of their interior components, such as proteins, DNA and various compartments. By far the most famous such process is apoptosis. The authors of this study have found another, distinct process involving mitochondrial dysfunction, reactive oxygen species (ROS) generation, DNA leakage, and chromatin autophagy.
The senior author, Professor Hsing-Jien Kung, a cancer biologist at UC Davis and director of the National Health Research Institutes in Taipei, Taiwan, first discovered in 2009 the basic mechanism by which arginine shortage kills cancer cells.
“Traditional cancer therapies involve ‘poisoning‘ cancer cells with toxic chemicals or ‘burning‘ them to death with radiation, which often has side effects,” according to Professor Kung. “An emerging strategy is to ‘starve’ cancer cells to death, taking advantage of the different metabolic requirements of normal and cancer cells. This approach is generally milder, but as this study illustrates, it also utilizes a different death mechanism, which may complement the killing effects of the conventional therapy.”
Results of a recent animal study offer new optimism for microbicides, biomedical products being developed to protect people against sexually transmitted infections (STIs), including HIV. Population Council scientists and their partners have found that a proprietary microbicide gel developed by the Council is safe, stable, and can prevent the transmission of HIV, herpes simplex virus 2 (HSV-2), and human papillomavirus (HPV), in both the vagina and rectum in animals. It has a window of efficacy in the vagina against all three viruses of at least eight hours prior to exposure. An in vitro study also provides the first data that the gel is effective against multiple strains of HIV.
The gel, known as MZC, contains MIV-150, zinc acetate, and carrageenan. MIV-150 and zinc acetate are potent antiviral agents that inhibit HIV via different mechanisms of action. MIV-150 is an enzyme inhibitor that blocks an early step of HIV replication in target cells, and zinc acetate is an antiviral agent with demonstrated activity against HIV and HSV-2. These compounds are mixed in a water-based solution of carrageenan, a compound derived from seaweed that has been shown to have potent activity against HPV. Infection with HSV-2 or HPV is associated with increased risk of HIV infection. Researchers believe that microbicides that target HIV, HSV-2, and HPV may more effectively limit HIV transmission than those that target HIV alone.
In this study, Council scientists and their partners used macaque and mouse models to examine whether MZC gel could prevent vaginal and rectal transmission of SHIV-RT (a virus combining genes from HIV and SIV, the monkey version of HIV), HSV-2, and HPV.
They found that MZC protected against both vaginal and rectal transmission of all three viruses in these animal models.
The study was designed to establish proof of concept in monkeys and mice before taking steps to test in humans. Preclinical testing in animals is required by the FDA and is important to ensure the highest level of safety and to build the evidence base for potential efficacy in humans. Phase 1 safety trials of the gel in humans are now underway.
“In addition to the gel,” said Fernández-Romero, “we are exploring sustained-release intravaginal rings and on-demand nanofiber-based delivery systems for MZC.” He stressed that developing different delivery systems for effective medications is an important step in ensuring the ultimate success of any microbicide, adding, “There is a growing demand for microbicides that prevent multiple STIs, and we are committed to ensuring that women and men have options when choosing what works most effectively for their own protection.”
Kit Lam and colleagues from UC Davis and other institutions have created dynamic nanoparticles (NPs) that could provide an arsenal of applications to diagnose and treat cancer. Built on an easy-to-make polymer, these particles can be used as contrast agents to light up tumors for MRI and PET scans or deliver chemo and other therapies to destroy tumors. In addition, the particles are biocompatible and have shown no toxicity. The study was published online today in Nature Communications.
“These are amazingly useful particles,” noted co-first author Yuanpei Li, a research faculty member in the Lam laboratory. “As a contrast agent, they make tumors easier to see on MRI and other scans. We can also use them as vehicles to deliver chemotherapy directly to tumors; apply light to make the nanoparticles release singlet oxygen (photodynamic therapy) or use a laser to heat them (photothermal therapy) – all proven ways to destroy tumors.”
Jessica Tucker, program director of Drug and Gene Delivery and Devices at the National Institute of Biomedical Imaging and Bioengineering, which is part of the National Institutes of Health, said the approach outlined in the study has the ability to combine both imaging and therapeutic applications in a single platform, which has been difficult to achieve, especially in an organic, and therefore biocompatible, vehicle.
"This is especially valuable in cancer treatment, where targeted treatment to tumor cells, and the reduction of lethal effects in normal cells, is so critical,” she added.
Though not the first nanoparticles, these may be the most versatile. Other particles are good at some tasks but not others. Non-organic particles, such as quantum dots or gold-based materials, work well as diagnostic tools but have safety issues. Organic probes are biocompatible and can deliver drugs but lack imaging or phototherapy applications.
Built on a porphyrin/cholic acid polymer, the nanoparticles are simple to make and perform well in the body. Porphyrins are common organic compounds. Cholic acid is produced by the liver.
To further stabilize the particles, the researchers added the amino acid cysteine (creating CNPs), which prevents them from prematurely releasing their therapeutic payload when exposed to blood proteins and other barriers. At 32 nanometers, CNPs are ideally sized to penetrate tumors, accumulating among cancer cells while sparing healthy tissue.
In the study, the team tested the nanoparticles, both in vitro and in vivo, for a wide range of tasks. On the therapeutic side, CNPs effectively transported anti-cancer drugs, such as doxorubicin. Even when kept in blood for many hours, CNPs only released small amounts of the drug; however, when exposed to light or agents such as glutathione, they readily released their payloads. The ability to precisely control chemotherapy release inside tumors could greatly reduce toxicity. CNPs carrying doxorubicin provided excellent cancer control in animals, with minimal side effects.
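The release behaviour described above can be pictured with a simple first-order kinetics model: slow passive leakage in blood versus fast release once a trigger is applied. The rate constants below are illustrative assumptions, not values measured in the study:

```python
import math

def fraction_released(k_per_hour, hours):
    """First-order release kinetics: the fraction of the payload
    released after a given number of hours at rate constant k."""
    return 1.0 - math.exp(-k_per_hour * hours)

# Illustrative rate constants (assumed, not from the paper):
K_BLOOD = 0.01      # slow leakage while circulating in blood
K_TRIGGERED = 1.5   # fast release after light or glutathione exposure

in_blood_8h = fraction_released(K_BLOOD, 8)       # small leakage (<10%)
triggered_2h = fraction_released(K_TRIGGERED, 2)  # most of the payload delivered
```

The two-orders-of-magnitude gap between the rate constants is what makes the particle useful: almost nothing escapes in transit, then nearly everything is dumped inside the tumor.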
There are already smartwatches and other wearables with cellular data built-in, but the bulky hardware they need for that wireless access makes them less than elegant. Intel clearly isn't happy with this state of affairs, as it just unveiled an extra-tiny modem that should put truly sleek, always-connected devices on your body -- and seemingly everywhere else. The new XMM 6255 isn't much larger than a penny (0.47 square inches), but delivers a full-fledged 3G data link. It's built to take abuses like power spikes, and it doesn't need a big antenna to get a good connection; it can even get solid performance in a low-signal area like your basement.
XMM™ 6255 features the SMARTI™ UE2p transceiver, which is based on our unique new Intel® Power Transceiver technology, the industry’s first design to combine transmit & receive functionality with a fully integrated power amplifier and power management, all on a single chip. This design approach reduces XMM™ 6255’s component requirements, resulting in a smaller modem that helps manufacturers minimize their bill of materials costs. It also protects the radio from overheating, voltage peaks and damage under tough usage conditions, which is important for safety monitors and other critical IoT devices.
Additionally, the XMM™ 6255 modem features a unique radio architecture that enables it to perform exceptionally well in challenging real-world situations, including:
The company isn't ready to say just who's using the minuscule modem in finished products, but the technology could become relatively ubiquitous. Besides more wearables that don't have to rely on your phone to get online, you could see a larger internet of things where even relatively small devices have their own internet service; it's reasonable to expect a lot of smart sensors and security systems that can always talk to the outside world.
If you walk into the computer science building at Stanford University, Mobi is standing in the lobby, encased in glass. He looks a bit like a garbage can, with a rod for a neck and a camera for eyes. He was one of several robots developed at Stanford in the 1980s to study how machines might learn to navigate their environment—a stepping stone toward intelligent robots that could live and work alongside humans. He worked, but not especially well. The best he could do was follow a path along a wall. Like so many other robots, his “brain” was on the small side.
Now, just down the hall from Mobi, scientists led by roboticist Ashutosh Saxena are taking this mission several steps further. They’re working to build machines that can see, hear, comprehend natural language (both written and spoken), and develop an understanding of the world around them, in much the same way that people do.
Today, backed by funding from the National Science Foundation, the Office of Naval Research, Google, Microsoft, and Qualcomm, Saxena and his team unveiled what they call RoboBrain, a kind of online service packed with information and artificial intelligence software that any robot could tap into. Working alongside researchers at the University of California at Berkeley, Brown University, and Cornell University, they hope to create a massive online “brain” that can help all robots navigate and even understand the world around them. “The purpose,” says Saxena, who dreamed it all up, “is to build a very good knowledge graph—or a knowledge base—for robots to use.”
Any researcher anywhere will be able to use the service wirelessly, for free, and transplant its knowledge to local robots. These robots, in turn, will feed what they learn back into the service, improving RoboBrain’s know-how. Then the cycle repeats.
These days, if you want a robot to serve coffee or carry packages across a room, you have to hand-code a new software program—or ask a fellow roboticist to share code that’s already been built. If you want to teach a robot a new task, you start all over. These programs, or apps, live on the robot itself, and that, Saxena says, is inefficient. It goes against all the current trends in tech and artificial intelligence, which seek to exploit the power of distributed systems, massive clusters of computers that can power devices over the net. But this is starting to change. RoboBrain is part of an emerging movement known as cloud robotics.
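The cloud-robotics pattern described here (query a shared knowledge base, act, then feed results back so every robot benefits) can be sketched in a few lines. Everything below, from the class name to the record fields, is hypothetical and is not the RoboBrain API:

```python
class SharedKnowledgeBase:
    """Toy shared knowledge graph for cloud robotics: robots query
    it for a skill and contribute refinements back (hypothetical API)."""

    def __init__(self):
        self.graph = {}  # concept -> list of learned facts

    def query(self, concept):
        """Return everything the collective knows about a concept."""
        return self.graph.get(concept, [])

    def contribute(self, concept, fact):
        """Upload something one robot learned, for all robots to reuse."""
        self.graph.setdefault(concept, []).append(fact)

brain = SharedKnowledgeBase()
# Robot A learns how to grasp a mug and uploads the result...
brain.contribute("grasp:mug", {"approach": "handle", "grip_force_n": 4.0})
# ...and Robot B, anywhere in the world, can reuse it immediately.
plan = brain.query("grasp:mug")[0]
```

The contrast with today's practice is the point: instead of the skill living as hand-coded software on one robot, it lives in the shared store and every query benefits from every contribution.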
Yeast that can make opiates from other molecules raise the prospect of tanks of drug-producing microorganisms replacing open fields of opium poppies.
Severe pain? Reach for the yeast. Genetically engineered yeasts can now efficiently produce a range of opiates, including morphine and oxycodone. With growing anxieties about supplies of opium poppies, it could be just what the doctor ordered.
Opiates are primarily used as painkillers and cough suppressants, and many of the most widely used opiates can be produced only from opium poppies (Papaver somniferum). Demand for these drugs is booming. But of the poppies farmed to supply these drugs, some 50 per cent are grown on the Australian island of Tasmania, so poor growing seasons can affect availability.
As drug companies search for new places to grow poppies, Christina Smolke from Stanford University, California, and her colleagues have been looking at getting yeast to make these complex drugs from simple sugars.
Some opiates, like morphine, are made naturally by poppies. Others, like oxycodone, are produced by chemically altering one of the plant's natural alkaloid chemicals – in this case thebaine. Back in 2008, Smolke inserted a number of genes – including some from the opium poppy – into yeasts, and got them to turn simple sugar molecules into a complex precursor of opiates: salutaridine. Now, in her latest work, she has solved the other end of the pathway, engineering yeasts to take complex precursors like thebaine and synthesise the finished products, including oxycodone.
"This work gets us very close," says Smolke. All that's left is to combine the two stages in one strain of yeast, and solve the last few steps: getting the yeast to turn salutaridine into thebaine, completing the pathway from sugar to opiate product.
The benefits of yeast over poppies are manifold, Smolke says. She thinks that when the system is finished, a 1000-litre tank could produce as much morphine as a hectare of poppies. She believes the method, when completed, will also increase security. "It is difficult or impossible to secure many thousands of acres of poppy fields which are grown out in the open," she says. "Yeast will be grown in closed fermenters and can be kept in secure facilities."
A team of theoretical physicists at the University of Hamburg, Germany have just published the schematics for a method that tackles the biggest hurdle in quantum computing: keeping everything cool.
One of the biggest issues facing the development of quantum computers—tomorrow's supercomputers based on the strange principles of quantum physics—is keeping everything cool. Electronics make heat, and while your laptop and smartphone can use fans or heat-absorbing water tanks, those just won't cut it for quantum computing, which will take advantage of the quirks of quantum mechanics to create computers that calculate at insane speeds.
The cycle of cell division—one cell splitting itself into two—is a crucial and complex process managed by finely tuned molecular machines. When working properly, cell division assures healthy growth. When running out of control, it can usher in cancer.
Blocking cell division in disease has been the target of researchers hoping to induce the death of abnormal cells before they become cancerous tumors. Finding the right chemical compound to inhibit cell division gone awry has proved difficult: Target the cell cycle too broadly and healthy cells will also suffer, as when chemotherapy hits all cells that divide rapidly, not just cancerous ones. Narrow the sights too tightly and the misbehaving machine churns on.
Now a team led by Randall King of Harvard Medical School has shown how two chemical inhibitors working together act better than either one alone, shutting down the dividing cell by stalling mitosis, one step in the cycle during which the cell copies and then lines up chromosomes properly so each daughter cell has a complete set.
"Simultaneous disruption of multiple interactions in a protein machine may be an interesting way to go in terms of trying to design future therapeutic strategies," said King, HMS professor of cell biology. "You're basically targeting one step in the pathway, but there's a lot of complexity in that one step. The idea is to disable the biochemical or enzymatic function by simultaneously targeting multiple sites."
King discovered the two inhibitors 10 years ago, in the very first screen conducted at the Institute of Chemistry and Cell Biology-Longwood Screening Facility at HMS. It was an unbiased chemical screen, set up with no assumptions about what they might find. Especially in the era before the discovery of RNA interference and its usefulness in silencing genes, scientists needed chemical tools that would perturb biological processes in other ways, so they could understand in detail how the mechanisms they were examining worked.
King's goal in 2004 was to fish through all the candidates identified in these early screens for chemical compounds that would somehow illuminate the cell cycle pathway and perhaps stymie one of its protein machines: the anaphase-promoting complex/cyclosome (APC/C). This protein complex marks certain proteins for degradation by the proteasome, the cell's waste-disposal site; without that tagging, the cell cannot progress through mitosis.
If the APC/C doesn't tag these proteins with a protein called ubiquitin, the proteasome doesn't recognize them, they don't get discarded and mitosis cannot proceed, stalling the cell cycle before it can properly segregate its chromosomes for faithful division.
In 2010 King and his colleagues published a paper in Cancer Cell that described in detail how one of the inhibitors, called tosyl-L-arginine methyl ester (TAME), weakens the interaction between the APC/C and its critical activating protein, Cdc20. Degradation is blocked, but only partially. That means the cell cycle is delayed briefly, but still continues toward mitotic exit.
Now the scientists have shown how another compound, also discovered in the original 2004 chemical screen, binds in a pocket on Cdc20 that normally recruits the targets of APC/C. Called apcin (for APC inhibitor), it also delays mitosis, but only by a little bit.
Together, TAME and apcin slow mitosis to a crawl. The cell dies before it can leave mitosis.
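One way to see why the pair is so much more potent than either drug alone: if each inhibitor independently leaves some fraction of APC/C activity intact, the residual activities multiply. A toy model, with illustrative numbers rather than measurements from the paper:

```python
def residual_activity(*fractions):
    """Residual APC/C activity when inhibitors act on independent sites:
    under a simple independence assumption, the fractions multiply."""
    out = 1.0
    for f in fractions:
        out *= f
    return out

# Illustrative numbers (assumed): each drug alone leaves 30% activity,
# enough for the cell to limp through mitosis after only a brief delay...
tame_alone = residual_activity(0.30)
apcin_alone = residual_activity(0.30)
# ...but together only ~9% remains, stalling mitosis almost completely.
combined = residual_activity(0.30, 0.30)
```

Because TAME weakens the APC/C–Cdc20 interaction while apcin blocks the substrate-binding pocket on Cdc20 itself, the two hits are plausibly independent in this sense, which is the intuition behind targeting multiple sites of one machine.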
Mass and length may not be fundamental properties of nature, according to new ideas bubbling out of the multiverse.
Though galaxies look larger than atoms and elephants appear to outweigh ants, some physicists have begun to suspect that size differences are illusory. Perhaps the fundamental description of the universe does not include the concepts of “mass” and “length,” implying that at its core, nature lacks a sense of scale.
This little-explored idea, known as scale symmetry, constitutes a radical departure from long-standing assumptions about how elementary particles acquire their properties. But it has recently emerged as a common theme of numerous talks and papers by respected particle physicists. With their field stuck at a nasty impasse, the researchers have returned to the master equations that describe the known particles and their interactions, and are asking: What happens when you erase the terms in the equations having to do with mass and length?
Nature, at the deepest level, may not differentiate between scales. With scale symmetry, physicists start with a basic equation that sets forth a massless collection of particles, each a unique confluence of characteristics such as whether it is matter or antimatter and has positive or negative electric charge. As these particles attract and repel one another and the effects of their interactions cascade like dominoes through the calculations, scale symmetry “breaks,” and masses and lengths spontaneously arise.
Similar dynamical effects generate 99 percent of the mass in the visible universe. Protons and neutrons are amalgams — each one a trio of lightweight elementary particles called quarks. The energy used to hold these quarks together gives them a combined mass that is around 100 times more than the sum of the parts. “Most of the mass that we see is generated in this way, so we are interested in seeing if it’s possible to generate all mass in this way,” said Alberto Salvio, a particle physicist at the Autonomous University of Madrid and the co-author of a recent paper on a scale-symmetric theory of nature.
In the equations of the “Standard Model” of particle physics, only a particle discovered in 2012, called the Higgs boson, comes equipped with mass from the get-go. According to a theory developed 50 years ago by the British physicist Peter Higgs and associates, it doles out mass to other elementary particles through its interactions with them. Electrons, W and Z bosons, individual quarks and so on: All their masses are believed to derive from the Higgs boson — and, in a feedback effect, they simultaneously dial the Higgs mass up or down, too.
The new scale symmetry approach rewrites the beginning of that story.
The concept seems far-fetched, but it is garnering interest at a time of widespread soul-searching in the field. When the Large Hadron Collider at CERN Laboratory in Geneva closed down for upgrades in early 2013, its collisions had failed to yield any of dozens of particles that many theorists had included in their equations for more than 30 years. The grand flop suggests that researchers may have taken a wrong turn decades ago in their understanding of how to calculate the masses of particles.
“We’re not in a position where we can afford to be particularly arrogant about our understanding of what the laws of nature must look like,” said Michael Dine, a professor of physics at the University of California, Santa Cruz, who has been following the new work on scale symmetry. “Things that I might have been skeptical about before, I’m willing to entertain.”
During plant growth, dividing cells in meristems must coordinate transitions from division to expansion and differentiation. Three distinct developmental zones are generated: the meristem, where cell division takes place, and the elongation and differentiation zones. At the same time, plants can rapidly adjust their direction of growth to adapt to environmental conditions.
In Arabidopsis thaliana roots, many aspects of zonation are controlled by the plant hormone auxin and auxin-induced PLETHORA transcription factors. Both show a graded distribution with a maximum near the root tip. In addition, auxin is also pivotal for tropic responses of the roots.
Ari Pekka Mähönen of the University of Helsinki, Finland, together with his group and Dutch colleagues, has used experiments and mathematical modelling to work out how the two factors jointly regulate root growth.
"Cell division in the meristem is maintained by PLETHORA transcription factors. These proteins are transcribed solely in the stem cells, a narrow region among the meristematic cells at the tip of the root. So PLETHORA proteins are most abundant in the stem cells," says Ari Pekka Mähönen, a Research Fellow funded by the Academy of Finland.
Outside the stem cells the amount of PLETHORA protein in the cells halves each time the cells divide. In the end there is so little PLETHORA left in the cells that they cannot stay in the dividing mode. This is when the cells start to elongate and differentiate.
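This halving rule is simple exponential dilution, so the number of divisions a cell undergoes before dropping out of the dividing mode is fixed by its starting PLETHORA level and an exit threshold. A sketch in arbitrary, assumed units (the real thresholds are not stated in the article):

```python
def divisions_until_exit(initial_level, exit_threshold):
    """Count divisions before PLETHORA, halved at each division,
    dilutes below the level needed to keep a cell dividing."""
    level, n = initial_level, 0
    while level >= exit_threshold:
        level /= 2.0
        n += 1
    return n

# Illustrative units (assumed): stem cells start at 256 and cells
# stop dividing once the level falls below 4.
print(divisions_until_exit(256, 4))  # after 7 divisions the cell elongates
```

Doubling the starting level buys only one extra division, which is why a graded PLETHORA maximum at the tip translates into a sharply bounded meristem.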
Auxin is the factor taking care of many aspects of root growth. If there is enough PLETHORA in the root cells, auxin affects the rate of root cell division. If there is little or no PLETHORA in the cells, auxin regulates cell differentiation and elongation. In addition to this direct, rapid regulation, auxin also regulates cell division, expansion and differentiation indirectly and slowly by promoting PLETHORA transcription. This dual action of auxin keeps the structure and growth of the root very stable.
When PLETHORA levels gradually diminish starting from the root tip upwards, the cell division, elongation and differentiation zones are created. And this inner organisation stays even if the growth direction of the root changes.
"Gravity and other environmental factors can change the auxin content of the cells quite rapidly, and this affects the growth direction of the root. It is of course important for the plant to maintain this organisation while directing its roots to where water and nutrients are most likely to be found."
Cables designed by graduate student Saman Jahani and electrical engineering professor Zubin Jacob are 10 times smaller than existing fiber optic cables—small enough to replace the copper wiring still used on computer chips. “We’re already transmitting data from continent to continent using fiber optics, but the killer application is using this inside chips for interconnects—that is the Holy Grail,” says Jacob, who is leading the research. “What we’ve done is come up with a fundamentally new way of confining light to the nanoscale.”
Jahani and Jacob have used metamaterials to redefine the textbook phenomenon of total internal reflection, discovered 400 years ago by German scientist Johannes Kepler while working on telescopes.
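For reference, the textbook phenomenon itself is easy to compute: light stays trapped in a fiber core whenever it strikes the core/cladding boundary beyond the critical angle given by Snell's law. The refractive indices below are typical silica-fiber values, not the metamaterial design in this work:

```python
import math

def critical_angle_deg(n_core, n_cladding):
    """Critical angle (degrees, measured from the surface normal) for
    total internal reflection: theta_c = arcsin(n_cladding / n_core).
    Valid only when n_core > n_cladding."""
    return math.degrees(math.asin(n_cladding / n_core))

# Typical silica-fiber indices (assumptions, not the paper's values):
theta_c = critical_angle_deg(n_core=1.48, n_cladding=1.44)
# Rays hitting the wall at angles greater than theta_c stay in the core.
```

The small index contrast of conventional fibers forces a grazing critical angle and hence a relatively fat core; engineering the cladding's response, as the metamaterial approach does, is what lets confinement survive at much smaller sizes.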
Researchers around the world have been stymied in their efforts to develop effective fibre optics at smaller sizes. One popular solution has been reflective metallic claddings that keep light waves inside the cables. But the biggest hurdle is heat: beyond a certain point, metal claddings absorb the light and convert it to heat.
“If you use metal, a lot of light gets converted to heat. That has been the major stumbling block. Light gets converted to heat and the information literally burns up—it’s lost.”
Jacob and Jahani have designed a new, non-metallic metamaterial that enables them to “compress” and contain light waves in the smaller cables without creating heat, slowing the signal or losing data. Their findings will be published Aug. 20 in Optica, The Optical Society’s new high-impact photonics journal. The article is available online.