Devising a novel method to identify potential genetic regulators in planarian stem cells, scientists have determined which of those genes affect the two main functions of stem cells.
NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1450 news sources.
NOTE: All articles in the Amazing Science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the funnel at the top right of the screen) to display all relevant postings sorted by topic.
MOST_READ • 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Norwegian researchers have recently found evidence of a generalized active network for cognitive functions of the brain.
“I experienced a kind of moment that may be more common for theoretical physicists: the idea that something just has to be there, even though you cannot see it,” says neuroscientist Kenneth Hugdahl from the Bergen fMRI Group in an interview with the University of Bergen’s newspaper På Høyden.
Initially, Hugdahl thought he had simply misunderstood. But while preparing a lecture he sat with nine fMRI images in front of him and suddenly noticed that the active red and yellow regions in the brain maps appeared in almost the same places in all of them. The neuroscientist had to ask himself: could there be an existing network in the brain that overlapped across all cognitive functions?
The article On the existence of a generalized non-specific task-dependent network was published in the online journal Frontiers in Human Neuroscience.
Although the idea has been mentioned before, no brain researchers had previously been able to show empirically that there is a cognitive network “for everything”. The idea of something that works as a kind of wiring diagram for the brain is therefore quite revolutionary.
Traditionally, this kind of brain research has focused on looking at individual functions of problem solving in specific areas of the brain. Hugdahl and his colleagues' article could be the first step in a new direction, toward something that can become the neuroscientific version of the “theory of everything” – one single explanation for all active, cognitive functions.
Researchers led by a team at UNSW Australia have used the Australian Synchrotron to turn the discovery of an ultra-low density and corrosion-resistant magnesium alloy into the first step toward mass-producing ‘stainless magnesium’, a new high-strength, lightweight metal, paving the way for cars, trucks and airplanes that can travel further distances on less gasoline.
The magnesium-lithium alloy weighs half as much as aluminum and is 30 percent lighter than magnesium, making it an attractive candidate to replace these commonly used metals to improve fuel efficiency and greatly reduce greenhouse gas emissions from transport vehicles.
The findings, published in the current edition of Nature Materials with researchers from Monash University in Melbourne, describe how the alloy forms a protective layer of carbonate-rich film upon atmospheric exposure, making it immune to corrosion when tested in laboratory settings.
Professor Michael Ferry, from UNSW’s School of Materials Science and Engineering, says this formation of a protective surface layer can be considered similar to the way a layer of chromium oxide enables the protection of stainless steel.
‘Many similar alloys have been created as researchers seek to combine the incredible lightness of lithium with the strength and durability of magnesium to develop a new metal that will boost the fuel efficiency and distance capacity of airplanes, cars and spacecraft.

‘This is the first magnesium-lithium alloy to stop corrosion from irreversibly eating into the alloy, as the balance of elements interacts with ambient air to form a surface layer which, even if scraped off repeatedly, rapidly reforms to create reliable and durable protection.’
Professor Ferry, senior author of the paper led by Dr Wanqiang Xu also from UNSW, says this excellent corrosion resistance was observed by chance, when his team noticed a heat-treated sample from Chinese aluminum-production giant, CHALCO, sitting, inert, in a beaker of water.
‘To see no corroded surfaces was perplexing and, by partnering with scientists on the Powder Diffraction (PD) beamline at the Australian Synchrotron, we found the alloy contains a unique nanostructure that enables the formation of a protective surface film.

‘Now we’ve turned our attention to investigating the molecular composition of the underlying alloy and the carbonate-rich surface film, to understand how the corrosion process is impeded in this “stainless magnesium”.’
Imagine a world where every one of the billions of lightbulbs in use today is a wireless hotspot delivering connectivity at speeds that can only be dreamed of with Wi-Fi. That's the goal of the man who invented such a technology, and this week Li-Fi took a step out of the domain of science fiction and into the realm of the real when it was shown to deliver speeds 100 times faster than current Wi-Fi technology in actual tests.
An Estonian startup called Velmenni used a Li-Fi-enabled lightbulb to transmit data at speeds as fast as 1 gigabit per second (Gbps), which is about 100 times faster than current Wi-Fi technology, meaning a high-definition film could be downloaded within seconds. The real-world test is the first to be carried out, but laboratory tests have shown theoretical speeds of 224 Gbps.
So, just what is Li-Fi, how does it work, and will it really revolutionize the way we connect to the Internet? Li-Fi refers to visible light communications (VLC) technology, which delivers high-speed, bidirectional, networked mobile communications in a manner similar to Wi-Fi. It promises huge speed advantages, as well as more-secure communications and reduced device interference.
The term was coined by German physicist Harald Haas during a TED Talk when he outlined the idea of using lightbulbs as wireless routers. That address was delivered four years ago, and many people speculated that, like a lot of apparent revolutionary breakthroughs, Li-Fi would go the way of other "next big things" and not come to fruition. A year after his TED Talk, though, Haas, a professor of mobile communications at the University of Edinburgh, created pureLiFi with a group of people who had been researching the technology since 2008. The company has claimed to be the "recognized leaders in Li-Fi technology" and has already produced two products. On Wednesday, pureLiFi announced a partnership in which a French industrial-lighting company will roll out the firm's VLC technology in its products by the third quarter of 2016.
Haas said during his TED Talk in 2011 that the current infrastructure would allow every single LED lightbulb to be transformed into an ultrafast wireless router. "All we need to do is fit a small microchip to every potential illumination device and this would then combine two basic functionalities: illumination and wireless data transmission," Haas said. "In the future, we will not only have 14 billion lightbulbs, we may have 14 billion Li-Fis deployed worldwide for a cleaner, greener and even brighter future."
Because Li-Fi technology uses visible light as its means of communication, it won't work through walls. This means that to have a Li-Fi network throughout your house, you will need these lightbulbs in every room (and maybe even the fridge) to have seamless connectivity.
Another major issue is that Li-Fi does not work outdoors, meaning that public Li-Fi will not be able to replace public Wi-Fi networks any time soon. While Li-Fi's employment in direct sunlight won't be possible, pureLiFi said that through the use of filters the technology can be used indoors even when sunlight is present.
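The core idea behind VLC is encoding bits as rapid changes in light intensity, far too fast for the eye to notice. The sketch below illustrates the simplest possible scheme, on-off keying (LED on = 1, off = 0); real Li-Fi systems use far more sophisticated modulation at much higher rates, so this is a conceptual toy, not pureLiFi's actual protocol.

```python
# Toy on-off keying (OOK): the simplest way an LED could encode bits
# as light levels. Real Li-Fi uses advanced modulation (e.g. OFDM).

def ook_encode(data: bytes) -> list[int]:
    """Turn bytes into a list of light levels (1 = LED on, 0 = off)."""
    bits = []
    for byte in data:
        for i in range(7, -1, -1):          # most-significant bit first
            bits.append((byte >> i) & 1)
    return bits

def ook_decode(levels: list[int]) -> bytes:
    """Recover bytes from sampled light levels."""
    out = bytearray()
    for i in range(0, len(levels), 8):
        byte = 0
        for bit in levels[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

signal = ook_encode(b"Li-Fi")
assert ook_decode(signal) == b"Li-Fi"
```

At 1 Gbps, a billion such on-off transitions happen every second, which is why the flicker is invisible and why an ordinary bulb can double as a data link.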
The uncertainty principle is based on how disruptive any act of measurement is. If, for instance, a photon, or particle of light, from a microscope is used to view an electron, the photon will bounce off that electron and disrupt its momentum, said study co-author Tom Purdy, a physicist at JILA, a joint institute of the University of Colorado, Boulder and the National Institute of Standards and Technology.
But the bigger the object, the less of an effect a bouncing photon will have on its momentum, making the uncertainty principle less and less relevant at larger scales.
In recent years, however, physicists have been testing the principle at ever-larger scales. To that end, Purdy and his colleagues created a 0.02-inch-wide (0.5-millimeter) drum made of silicon nitride, a ceramic used in spacecraft, drawn tight across a silicon frame.
They then set the drum between two mirrors and shined laser light on it. Essentially, the drum's position is measured when photons bounce off it, and increasing the number of photons boosts the measurement precision. But more photons also exert greater and greater fluctuating radiation pressure, shaking the drum violently and limiting that precision. The extra shaking is the uncertainty principle in action.
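The trade-off just described can be captured in a few lines: photon shot noise (imprecision) falls as one over the square root of the photon number, while radiation-pressure backaction grows as the square root. The constants below are arbitrary illustration values, not the experiment's real parameters.

```python
import math

# Sketch of the measurement trade-off: imprecision improves with more
# photons while backaction worsens, so the combined noise is minimized
# at an intermediate photon number (the "standard quantum limit").
# a and b are arbitrary scale factors for illustration only.

def total_noise(n_photons: float, a: float = 1.0, b: float = 1.0) -> float:
    imprecision = a / math.sqrt(n_photons)   # shot noise: better with more photons
    backaction = b * math.sqrt(n_photons)    # radiation pressure: worse with more
    return math.sqrt(imprecision**2 + backaction**2)

# Sweep photon numbers over several orders of magnitude.
ns = [10.0**k for k in range(-3, 4)]
noises = [total_noise(n) for n in ns]
best = min(range(len(ns)), key=lambda i: noises[i])
assert ns[best] == 1.0   # with a = b, the optimum sits at N = a/b = 1
```

Cranking up laser power past that optimum makes the measurement worse, not better, which is exactly the behavior the JILA experiment demonstrated.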
The setup was kept ultra-cold to prevent thermal fluctuations from drowning out this quantum effect. The findings could have implications for the hunt for gravitational waves predicted by Einstein's theory of general relativity. In the next few years, the Laser Interferometer Gravitational Wave Observatory (LIGO), a pair of observatories in Louisiana and Washington, is set to use tiny sensors to measure gravitational waves in space-time, and the uncertainty principle could set limits on LIGO's measurement abilities.
Via Jocelyn Stoller
Mathematical modeling enables $100 depth sensor to approximate the measurements of a $100,000 piece of lab equipment.
The system uses a technique called fluorescence lifetime imaging, which has applications in DNA sequencing and cancer diagnosis, among other things. So the new work could have implications for both biological research and clinical practice.
“The theme of our work is to take the electronic and optical precision of this big expensive microscope and replace it with sophistication in mathematical modeling,” says Ayush Bhandari, a graduate student at the MIT Media Lab and one of the system’s developers. “We show that you can use something in consumer imaging, like the Microsoft Kinect, to do bioimaging in much the same way that the microscope is doing.”
The MIT researchers reported the new work in the Nov. 20 issue of the journal Optica. Bhandari is the first author on the paper, and he’s joined by associate professor of media arts and sciences Ramesh Raskar and Christopher Barsi, a former research scientist in Raskar’s group who now teaches physics at the Commonwealth School in Boston.
Fluorescence lifetime imaging, as its name implies, depends on fluorescence, or the tendency of materials known as fluorophores to absorb light and then re-emit it a short time later. For a given fluorophore, interactions with other chemicals will shorten the interval between the absorption and emission of light in a predictable way. Measuring that interval — the “lifetime” of the fluorescence — in a biological sample treated with a fluorescent dye can reveal information about the sample’s chemical composition.
In traditional fluorescence lifetime imaging, the imaging system emits a burst of light, much of which is absorbed by the sample, and then measures how long it takes for returning light particles, or photons, to strike an array of detectors. To make the measurement as precise as possible, the light bursts are extremely short.
The fluorescence lifetimes pertinent to biomedical imaging are in the nanosecond range. So traditional fluorescence lifetime imaging uses light bursts that last just picoseconds, or thousandths of nanoseconds.
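The estimation at the heart of lifetime imaging can be sketched simply: re-emission delays are exponentially distributed, and the mean of that distribution is the fluorescence lifetime, so the maximum-likelihood estimate is just the average measured delay. The lifetime value and photon count below are illustrative assumptions, not numbers from the MIT paper.

```python
import random

# Minimal sketch of fluorescence lifetime estimation: photons are
# re-emitted after an exponentially distributed delay whose mean is
# the lifetime. Averaging the measured delays recovers it.

random.seed(0)
TRUE_LIFETIME_NS = 4.0   # hypothetical fluorophore lifetime, in nanoseconds

# Simulate arrival delays of 100,000 detected photons after the pulse.
delays = [random.expovariate(1.0 / TRUE_LIFETIME_NS) for _ in range(100_000)]

# Maximum-likelihood estimate of an exponential's mean: the sample average.
estimated_lifetime = sum(delays) / len(delays)
assert abs(estimated_lifetime - TRUE_LIFETIME_NS) < 0.1
```

The hard part, and the contribution of the MIT work, is doing this with the coarse, slow light pulses of a consumer depth sensor rather than picosecond bursts, which is where the sophisticated mathematical modeling comes in.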
A new study finds that while the human brain can distinguish between millions of colors, it has difficulty remembering specific shades. For example, most people can easily tell the difference between azure, navy and ultramarine, but when it comes to remembering these shades, people tend to label them all as blue, the study found. This tendency to lump colors together could explain why it is so hard to match the color of house paint from memory alone, the researchers said.
Many cultures have the same color words or categories, said Jonathan Flombaum, a cognitive psychologist at Johns Hopkins University in Baltimore. "But at the same time, there's a lot of debate around the role those categories play in the perception of color," he said.
In the study, Flombaum and his colleagues conducted four experiments on four different groups of people. In the first experiment, they asked people to look at a color wheel with 180 different hues, and asked them to find the best name for each color. The exercise was designed to find the perceived boundaries between colors, the researchers said. In a second experiment, the scientists showed different people the same colors, but this time they asked them to find the "best example" of a particular color.
For a third experiment, the researchers showed participants colored squares, and asked them to select the best match on the color wheel. In a fourth experiment, another group of participants completed the same task, but there was a delay of 90 milliseconds between when each color square was displayed and when they were asked to select the best match on the color wheel.
The results revealed that categories are indeed important in how people identify and remember colors. The participants who were asked to name the colors reliably saw five hues: blue, yellow, pink, purple and green. Most of the colors were given one name, but ambiguous colors got two labels, such as blue and green. "Where that fuzzy naming happened, those are the boundaries" between colors, Flombaum told Live Science.
But what was really striking was how the people in the memory experiment remembered the colors they saw, the scientists said. The researchers expected that the participants' responses for what colors they had seen would reflect a bell curve centered on the correct color. But instead, they found that the distribution of responses was skewed toward the "best example" of the color they had seen, not the true color.
The findings suggest that the brain remembers colors as discrete categories as well as a continuum of shades, and combines these representations to produce a memory. There could be many reasons for this, but it likely boils down to efficiency, Flombaum said. "Most of the time, what we care about is the category," he said.
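A toy version of the model the study suggests: a remembered hue is a blend of the actual perceived hue and the category's "best example" (prototype), which skews recall toward the category center. The hue values and the 0.3 category weight below are illustrative assumptions, not the study's fitted parameters.

```python
# Toy category-adjustment model of color memory: recall blends the
# continuous percept with the discrete category prototype.

def remembered_hue(perceived: float, prototype: float, w: float = 0.3) -> float:
    """Weighted blend of perceived hue and category prototype (degrees)."""
    return (1 - w) * perceived + w * prototype

BLUE_PROTOTYPE = 220.0   # hypothetical "best example" blue, in hue degrees

# A bluish hue at 190 degrees is remembered shifted toward prototype blue.
recall = remembered_hue(190.0, BLUE_PROTOTYPE)
assert recall == 199.0
assert 190.0 < recall < BLUE_PROTOTYPE   # skewed toward the category center
```

Averaged over many trials, this blending produces exactly the asymmetry the researchers observed: a distribution of responses skewed toward the best example rather than centered on the true color.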
Via Levin Chin
The genome of the tardigrade has been published, and sequencing reveals approximately 6,000 of its genes are of foreign origin.
The tardigrade, also known as the water bear, is renowned for many reasons. The nearly indestructible microscopic animal can survive extreme temperatures (-272C to 151C), and is the only animal able to survive in the vacuum of space.
Today, with the publication of its genome in PNAS, the humble water bear can add another item to its exhaustive list of superlatives. Sequencing of the genome, performed by a team of researchers at the University of North Carolina at Chapel Hill, has revealed that a massive portion of the tiny organism’s genome is of foreign origin. Indeed, nearly 17.5% of the water bear’s genome is comprised of foreign DNA, translating to a genetic complement of approximately 6,000 genes. These genes are primarily of bacterial origin, though genes from fungi and plants have also been identified.
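Foreign-gene calls of this kind are typically made by comparing each gene's best alignment score against non-animal sequence databases with its best score against animal ones; a gene that matches bacteria far better than any animal gene is flagged as a horizontal-transfer candidate. The sketch below uses made-up bitscores and a made-up threshold to illustrate the idea; it is not the UNC team's actual pipeline.

```python
# Sketch of an "HGT index" style classifier: a gene is a candidate
# foreign gene when its best non-animal alignment score beats its best
# animal score by some margin. Scores and threshold are hypothetical.

def hgt_index(best_nonmetazoan_score: float, best_metazoan_score: float) -> float:
    """Positive values favor a non-animal (possibly bacterial) origin."""
    return best_nonmetazoan_score - best_metazoan_score

def is_candidate_foreign_gene(non_meta: float, meta: float,
                              threshold: float = 30.0) -> bool:
    return hgt_index(non_meta, meta) >= threshold

# A gene matching bacteria far better than any animal gene is flagged.
assert is_candidate_foreign_gene(non_meta=250.0, meta=80.0)
# A gene with a strong animal match is not.
assert not is_candidate_foreign_gene(non_meta=90.0, meta=210.0)
```

In practice such calls also require checks against contamination, since bacterial DNA sitting alongside the animal in the sample can masquerade as transferred genes.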
Horizontal gene transfer, the movement of genetic material sideways between organisms (hence "horizontally") rather than vertically from parent to offspring, is widespread in the microscopic world. In humans the process does occur, but only in a limited fashion, via transposons and viruses. Other microscopic animals are also known to have large complements of foreign genes.
The authors of the newly published work have proposed a mechanism by which this extremely extensive gene transfer may have occurred. Tardigrades have long been known to undergo, and survive, the process of desiccation (extreme drying out). The authors therefore postulated that during this drying out and the subsequent rehydration, the tardigrade's genome may have undergone significant shearing and breakage, resulting in a general loss of integrity and leakiness in the water bear's nucleus. In turn, this compromised nuclear integrity may have enabled foreign genetic material to readily integrate into the genome, in much the same way as scientists perform gene transfer through the process of electroporation.
For now, the tardigrade has a dual claim to fame: it is the only known animal to survive the vacuum of space, and the animal with the largest known complement of foreign genes. Only with the study of other microscopic animals will we be able to confirm whether the humble tardigrade keeps its two current claims to fame.
"Animals that can survive extreme stresses may be particularly prone to acquiring foreign genes—and bacterial genes might be better able to withstand stresses than animal ones," said Boothby, a postdoctoral fellow in Goldstein's lab. After all, bacteria have survived the Earth's most extreme environments for billions of years.
The team speculates that the DNA enters the genome randomly, but what is kept is what allows tardigrades to survive the harshest of environments: put a tardigrade in a -80C freezer for a year, or ten, and it is running around within 20 minutes of thawing.
This is what the team thinks happens: when tardigrades are under conditions of extreme stress such as desiccation (a state of extreme dryness), Boothby and Goldstein believe the tardigrade's DNA breaks into tiny pieces. When the cell rehydrates, the cell's membrane and the nucleus, where the DNA resides, become temporarily leaky, and DNA and other large molecules can pass through easily. Tardigrades not only repair their own damaged DNA as the cell rehydrates but also stitch in the foreign DNA in the process, creating a mosaic of genes that come from different species.
Army ants build living bridges by linking their bodies to span gaps and create shortcuts across rainforests in Central and South America. An international team of researchers has now discovered these bridges can move from their original building point to span large gaps and change position as required.
The bridges stop moving when they become so long that the increasing costs incurred by locking workers into the structure outweigh the benefit that the colony gains from further shortening their trail. Bridges dismantle when the ants in the structure sense the traffic walking over them slows down below a critical threshold.
Co-lead author Dr Christopher Reid, a postdoctoral researcher at the University of Sydney's Insect Behaviour and Ecology Lab and formerly with the New Jersey Institute of Technology, said the findings could be applied to develop swarm robotics for exploration and rescue operations. By analysing how ants optimise utility, researchers may be able to create simple control algorithms to allow swarms of robots to behave in similar ways to an ant colony.
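The control rule the study describes can be sketched as two simple local decisions: keep recruiting workers into the bridge while the marginal benefit of shortening the trail exceeds the marginal cost of locking another worker in place, and dismantle when traffic drops below a threshold. All the numbers below are illustrative, not parameters fitted to the ants.

```python
# Toy version of the ants' cost-benefit bridge rule, of the kind that
# could seed a swarm-robotics control algorithm. Values are illustrative.

def bridge_length(benefit_per_worker: float, cost_per_worker: float,
                  max_workers: int = 100) -> int:
    """Grow the bridge while each extra worker still pays for itself.

    Marginal benefit shrinks as the trail is already mostly shortened;
    marginal cost rises as more of the workforce is immobilized.
    """
    workers = 0
    while workers < max_workers:
        marginal_benefit = benefit_per_worker / (workers + 1)
        marginal_cost = cost_per_worker * (workers + 1)
        if marginal_benefit <= marginal_cost:
            break   # next worker costs more than it saves: stop growing
        workers += 1
    return workers

def should_dismantle(ants_per_minute: float, threshold: float = 5.0) -> bool:
    """Ants in the bridge let go when traffic falls below a threshold."""
    return ants_per_minute < threshold

assert bridge_length(benefit_per_worker=100.0, cost_per_worker=1.0) == 9
assert should_dismantle(2.0) and not should_dismantle(12.0)
```

The appeal for robotics is that both decisions are purely local: no individual needs a global view of the structure for the colony-level trade-off to emerge.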
The team of researchers - from the Max Planck Institute for Ornithology (Konstanz, Germany), University of Konstanz, and the United States's New Jersey Institute of Technology, Princeton University and George Washington University - found the bridges can assemble and disassemble in seconds. They can also change their position in response to the immediate environment.
The dynamic nature of the bridges has been found to facilitate travel by the colony at maximum speed, across unknown and potentially dangerous terrains. Prior to the study it was assumed that, once they had been built, the bridges were relatively static structures.
The paper, 'Army ants dynamically adjust living bridges in response to a cost-benefit trade-off', is being published in the journal Proceedings of the National Academy of Sciences (PNAS).
A Northeastern University research team has found “extensive” leakage of users’ information — device and user identifiers, locations, and passwords — into network traffic from apps on mobile devices, including iOS, Android, and Windows phones. The researchers have also devised a way to stop the flow.
David Choffnes, an assistant professor in the College of Computer and Information Science, and his colleagues developed a simple, efficient cloud-based system called ReCon. It detects leaks of “personally identifiable information,” alerts users to those breaches, and enables users to control the leaks by specifying what information they want blocked and from whom.
The team’s study followed 31 mobile device users with iOS devices and Android devices who used ReCon for a period of one week to 101 days and then monitored their personal leakages through a ReCon secure webpage. The results were alarming. “Depressingly, even in our small user study we found 165 cases of credentials being leaked in plaintext,” the researchers wrote.
Of the top 100 apps in each operating system’s app store that participants were using, more than 50 percent leaked device identifiers, more than 14 percent leaked actual names or other user identifiers, 14–26 percent leaked locations, and three leaked passwords in plaintext. In addition to those top apps, the study found similar password leaks from 10 additional apps that participants had installed and used.
The password-leaking apps included MapMyRun, the language app Duolingo, and the Indian digital music app Gaana. All three developers have since fixed the leaks. Several other apps continue to send plaintext passwords into traffic, including a popular dating app.
“What’s really troubling is that we even see significant numbers of apps sending your password, in plaintext readable form, when you log in,” says Choffnes. In a public-WiFi setting, that means anyone running “some pretty simple software” could nab it.
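The kind of check a system like ReCon can run on intercepted traffic is, at its simplest, a pattern scan of unencrypted request bodies for credential fields sent in the clear. The field names, regex, and sample request below are hypothetical illustrations, not ReCon's actual detection logic, which uses learned traffic models.

```python
import re

# Drastically simplified illustration of plaintext-credential detection
# in captured network traffic. ReCon itself uses machine-learned models;
# this regex and the sample request are hypothetical.

CREDENTIAL_PATTERN = re.compile(r"(password|passwd|pwd)=([^&\s]+)", re.IGNORECASE)

def find_plaintext_credentials(request_body: str) -> list[str]:
    """Return the values of any password-like fields found in cleartext."""
    return [value for _, value in CREDENTIAL_PATTERN.findall(request_body)]

sample = "username=alice&password=hunter2&device_id=abc123"
leaks = find_plaintext_credentials(sample)
assert leaks == ["hunter2"]
```

Anything such a scan can find, an eavesdropper on the same open Wi-Fi network can find too, which is the study's central warning.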
[Figure: a, Computed-tomography reconstruction obtained from the transmitted intensity using standard filtered backprojection. b, Orientation of bone ultrastructure as determined using small-angle scattering (SAS) tensor tomography.]
The mechanical properties of many materials are based on the macroscopic arrangement and orientation of their nanostructure. This nanostructure can be ordered over a range of length scales. In biology, the principle of hierarchical ordering is often used to maximize functionality, such as strength and robustness of the material, while minimizing weight and energy cost.
Methods for nanoscale imaging provide direct visual access to the ultrastructure (nanoscale structure that is too small to be imaged using light microscopy), but the field of view is limited and does not easily allow a full correlative study of changes in the ultrastructure over a macroscopic sample. Other methods of probing ultrastructure ordering, such as small-angle scattering of X-rays or neutrons, can be applied to macroscopic samples; however, these scattering methods remain constrained to two-dimensional specimens [1-4] or to isotropically oriented ultrastructures [5-7]. These constraints limit the use of these methods for studying nanostructures with more complex orientation patterns, which are abundant in nature and materials science.
Now, a team of scientists introduces an imaging method that combines small-angle scattering with tensor tomography to probe nanoscale structures in three-dimensional macroscopic samples in a non-destructive way. They demonstrate the method by measuring the main orientation and the degree of orientation of nanoscale mineralized collagen fibrils in a human trabecular bone sample with a spatial resolution of 25 micrometres. Symmetries within the sample, such as the cylindrical symmetry commonly observed for mineralized collagen fibrils in bone [8-10], allow for tractable sampling requirements and numerical efficiency.
Small-angle scattering tensor tomography is applicable to both biological and materials science specimens, and may be useful for understanding and characterizing smart or bio-inspired materials. Moreover, because the method is non-destructive, it is appropriate for in situ measurements and allows, for example, the role of ultrastructure in the mechanical response of a biological tissue or manufactured material to be studied.
Via Ath Godelitsas
Technology is allowing researchers to generate vast amounts of information about tumors. The next step is to use this genomic data to transform patient care.
Adrian Lee has dedicated his career to studying breast cancer, which is to say he is actually tackling many different diseases at once. “No two breast cancers are the same,” says Lee, a pharmacologist and chemical biologist at the University of Pittsburgh in Pennsylvania. “Cancer is way more complex than we know.”
Lee is using genomic technology to fully describe cancers of the breast and apply that knowledge to guide treatment decisions for individual patients. “We can now analyse multiple variables from a single specimen, such as changes in DNA, changes in RNA and changes in methylation,” he says. “Genome-wide scans allow for better systems biology and allow us to learn what's gone wrong in a particular tumor.”
Sequencing tumors is faster, cheaper and easier than ever. With many researchers collecting sequence data and uploading these to public databases such as The Cancer Genome Atlas (TCGA), opportunities to describe the many different cancers that arise in breast tissue are upon us. “The challenge used to be generating the data,” says Nicholas Navin, a geneticist at The University of Texas MD Anderson Cancer Center in Houston. “Those issues have been resolved. Now the challenge is data processing and data analysing — interpreting the mutations and communicating those to oncologists.”
At the University of Pittsburgh, researchers are working to link the molecular signatures of people with breast cancer to a host of clinical data, including demographic information associated with risk such as age, ethnicity and body weight. They are mining electronic health records for clinical correlates, treatment interactions and outcomes. “We've got a big haystack and we're trying to find the needle,” says Lee. “But we're also trying to incriminate the needle, by linking it to lots of things.” Collecting all that data from patients' electronic records adds up, Lee says. It takes infrastructure — Pittsburgh has already accumulated 5 petabytes, or 5 million gigabytes, which is enough data to overload around 40,000 new iPhone 6 devices.
Making the connection between the reams of data coming out of sequencing laboratories and the individual women fighting breast cancer takes big-time computing power. Big data needs researchers who are comfortable with statistical noise and those who are old hands at the iterative process required to create flexible computer programs.
Big-data researchers take a large data set and look for patterns. The idea is to identify mutations that can be targeted with drug treatment. It is the essence of personalized medicine: screen a patient's tumour for a set of biomarkers to choose the best treatment to fight the cancer. Big-data researchers believe that analysing the data of the thousands of tumours that have come before will reveal patterns that can improve screening and diagnosis, and inform treatment.
Lee and his colleagues have illustrated how big-data science led to a rethink of breast cancer [1]. They used two public databases — TCGA and METABRIC (Molecular Taxonomy of Breast Cancer International Consortium), which contain data on the entire set of genes, RNA transcripts and proteins of thousands of breast-cancer tumours — to parse out potential differences in the molecular signatures of breast tumours in younger compared with older women. Women who are diagnosed before the age of 40 tend to have worse disease: they are more likely to have later-stage cancers, poorer prognoses and worse survival outcomes than older women.
Via Integrated DNA Technologies
Cardiologists from the Institute of Cardiology, Warsaw, Poland have used Google Glass in a challenging surgical procedure, successfully clearing a blockage in the right coronary artery of a 49-year-old male patient and restoring blood flow, reports the Canadian Journal of Cardiology.
Chronic total occlusion, a complete blockage of the coronary artery, sometimes referred to as the “final frontier in interventional cardiology,” represents a major challenge for catheter-based percutaneous coronary intervention (PCI), according to the cardiologists.
That’s because of the difficulty of recanalizing (reopening a channel through the obstruction) combined with poor visualization of the occluded coronary arteries.
Coronary computed tomography angiography (CTA) is increasingly used to provide physicians with guidance when performing PCI for this procedure. The 3-D CTA data can be projected on monitors, but this technique is expensive and technically difficult, the cardiologists say.
So a team of physicists from the Interdisciplinary Centre for Mathematical and Computational Modelling of the University of Warsaw developed a way to use Google Glass to clearly visualize the distal coronary vessel and verify the direction of the guide-wire advancement relative to the course of the blocked vessel segment.
The procedure was completed successfully, including implantation of two drug-eluting stents. “This case demonstrates the novel application of wearable devices for display of CTA data sets in the catheterization laboratory that can be used for better planning and guidance of interventional procedures, and provides proof of concept that wearable devices can improve operator comfort and procedure efficiency in interventional cardiology,” said lead investigator Maksymilian P. Opolski, MD, PhD, of the Department of Interventional Cardiology and Angiology at the Institute of Cardiology, Warsaw, Poland.
“We believe wearable computers have a great potential to optimize percutaneous revascularization, and thus favorably affect interventional cardiologists in their daily clinical activities,” he said. He also advised that “wearable devices might be potentially equipped with filter lenses that provide protection against X-radiation.”
Groundwater: it's one of the planet's most exploited, most precious natural resources. It ranges in age from months to millions of years old. Around the world, there's increasing demand to know how much we have and how long before it's tapped out.
For the first time since a back-of-the-envelope calculation of the global volume of groundwater was attempted in the 1970s, an international group of hydrologists has produced the first data-driven estimate of the Earth's total supply of groundwater. The study, led by Dr. Tom Gleeson of the University of Victoria with co-authors at the University of Texas at Austin, the University of Calgary and the University of Göttingen, was published today in Nature Geoscience.
The bigger part of the study is the "modern" groundwater story. The report shows that less than six per cent of groundwater in the upper two kilometres of the Earth's landmass is renewable within a human lifetime. "This has never been known before," says Gleeson. "We already know that water levels in lots of aquifers are dropping. We're using our groundwater resources too fast—faster than they're being renewed."
With the growing global demand for water—especially in light of climate change—this study provides important information to water managers and policy developers as well as scientists from fields such as hydrology, atmospheric science, geochemistry and oceanography to better manage groundwater resources in a sustainable way, he says.
Tarantulas evolved almost exactly the same shade of vibrant blue at least eight separate times. That is the conclusion of a study by US biologists exploring how the colour is created in different tarantula species. The hue is caused by tiny structures inside the animals' hairs, but those shapes vary across the family tree.
This suggests, the researchers say, that the striking blue is not driven by sexual selection - unlike many other bright colors in the animal kingdom. This argument is also supported by the fact that tarantulas have poor color vision, and do not appear to show off their hairy blue body parts during courtship.
One day, Mars may have rings like Saturn does. The martian moon Phobos, which is spiralling inexorably closer towards the red planet, will disintegrate to form a ring system some 20 million to 40 million years from now, according to calculations published on 23 November. Other research suggests that long grooves on Phobos's surface may represent the first stages of that inevitable crack-up.
Phobos may not be alone in its doom. Researchers have speculated that Neptune’s moon Triton might also be falling apart. And other, now-vanished moons elsewhere in the Solar System may have suffered a similar fate in the distant past, migrating towards their planet and shredding into a ring system before vanishing. Saturn's iconic rings may have formed in this way too.
Watching Phobos in the first stages of its death throes is a rare chance for scientists to witness a process that could have been widespread in the early Solar System, says Benjamin Black, a planetary scientist at the University of California, Berkeley. He and his Berkeley colleague Tushar Mittal published the ring-system paper on 23 November in Nature Geoscience1.
By nearly any measure, Phobos is a bizarre place. It is tiny, measuring 22 kilometers across, and close to its planet — just 6,000 kilometers above the surface. Each year, Mars’s gravity pulls Phobos several centimeters closer, and scientists have long known that the moon would either plummet to its death intact or shred into a ring system before its doom.
To predict how Phobos’s death might unfold, Black and Mittal took information such as the density and strength of Phobos and compared it to a model used to estimate rock strength. They calculated that the weakest parts of Phobos would begin to spread out and form a ring about 20 million years from now.
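Black and Mittal's strength-based model is more involved than anything that fits here, but the classical fluid Roche limit gives a rough feel for where tidal forces start to win. A minimal sketch in Python, using approximate textbook values for Mars and Phobos (these figures are illustrative assumptions, not numbers from the paper):

```python
# Rough fluid Roche limit for Phobos around Mars:
#   d = R_mars * (2 * rho_mars / rho_phobos) ** (1/3)
# All values below are approximate textbook figures.

R_MARS_KM = 3390          # mean radius of Mars, km
RHO_MARS = 3.93           # bulk density of Mars, g/cm^3
RHO_PHOBOS = 1.876        # bulk density of Phobos, g/cm^3
PHOBOS_ORBIT_KM = 9376    # Phobos's orbital radius, from Mars's center

roche_km = R_MARS_KM * (2 * RHO_MARS / RHO_PHOBOS) ** (1 / 3)
print(f"Fluid Roche limit: ~{roche_km:.0f} km from Mars's center")
print(f"Phobos today: {PHOBOS_ORBIT_KM} km, still outside the fluid limit")
```

Because Phobos has some cohesive strength rather than behaving as a fluid, it can survive somewhat inside this classical limit; that is why the paper's strength-based model, not this simple formula, sets the 20-to-40-million-year timescale.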
The world’s most powerful accelerator, the 27 km long Large Hadron Collider (LHC) operating at CERN in Geneva, established collisions between lead nuclei this morning at the highest energies ever. The LHC has been colliding protons at record high energy since the summer, but the time has now come to collide large nuclei (nuclei of lead, Pb, consist of 208 neutrons and protons). The experiments aim at understanding and studying the properties of strongly interacting systems at high densities, and thus the state of matter of the Universe shortly after the Big Bang.
In the very beginning, just a few billionths of a second after the Big Bang, the Universe was made up of an extremely hot and dense ‘primordial soup’ consisting of the fundamental particles, especially quarks and gluons. This state is called the quark-gluon-plasma (QGP). Approximately one millionth of a second after the Big Bang, quarks and gluons became confined inside the protons and the neutrons, which are the present day constituents of the atomic nuclei.
The so-called strong force, mediated by the gluons, binds the quarks to each other and, under normal circumstances, traps them inside the nuclear particles. It is, however, possible to recreate a state of matter consisting of quarks and gluons which behaves as a liquid, in close imitation of the state of matter prevailing in the very early Universe. It is this state that has now been realised at the highest temperatures ever attained, in collisions using lead ions in the LHC accelerator at CERN.
“The collision energy between two nuclei reaches 1000 TeV. This energy is that of a bumblebee hitting us on the cheek on a summer day. But the energy is concentrated in a volume that is approximately 10²⁷ (a billion billion billion) times smaller. The energy concentration (density) is therefore tremendous and has never been realised before under terrestrial conditions,” explains Jens Jørgen Gaardhøje, professor at the Niels Bohr Institute at the University of Copenhagen and head of the Danish research group within the ALICE experiment at CERN.
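Gaardhøje's bumblebee comparison can be checked with back-of-the-envelope arithmetic; the bee's mass and speed below are illustrative assumptions, not figures from the article:

```python
# 1000 TeV expressed in joules, compared with a bumblebee's kinetic energy.
EV_TO_J = 1.602e-19                # joules per electronvolt

collision_J = 1000e12 * EV_TO_J    # 1000 TeV -> joules
# Illustrative bumblebee: ~0.2 g cruising at ~1.3 m/s (assumed values).
bee_J = 0.5 * 0.2e-3 * 1.3 ** 2

print(f"Pb-Pb collision energy:   {collision_J:.2e} J")
print(f"Bumblebee kinetic energy: {bee_J:.2e} J")
```

Both come out around 10⁻⁴ joules, i.e. the same order of magnitude; the collision is extreme because that energy is deposited in a nucleus-sized volume.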
The first collisions were recorded by the LHC detectors, including the dedicated heavy-ion detector ALICE, which has significant Danish participation, immediately after the LHC’s two counter-circulating beams were aimed at each other this morning at 11:15 AM.
Tufts University biologists have electrically modified flatworms to grow heads and brains characteristic of another species of flatworm — without altering their genomic sequence. This suggests bioelectrical networks as a new kind of epigenetics (information existing outside of a genomic sequence) to determine large-scale anatomy. Besides the overall shape of the head, the changes included the shape of the brain and the distribution of the worm’s adult stem cells.
The discovery could help improve understanding of birth defects and regeneration by revealing a new pathway for controlling complex pattern formation similar to how neural networks exploit bioelectric synapses to store and re-write information in the brain.
The findings are detailed in the open-access cover story of the November 2015 edition of the International Journal of Molecular Sciences, appearing online Nov. 24.
“These findings raise significant questions about how genes and bioelectric networks interact to build complex body structures,” said the paper’s senior author Michael Levin, Ph.D., who holds the Vannevar Bush Chair in biology and directs the Center for Regenerative and Developmental Biology in the School of Arts and Sciences at Tufts. Knowing how shape is determined and how to influence it is important because biologists could use that knowledge, for example, to fix birth defects or cause new biological structures to grow after an injury, he explained.
The researchers worked with Girardia dorotocephala — free-living planarian flatworms, which have remarkable regenerative capacity. They induced the development of different species-specific head shapes by interrupting gap junctions, which are protein channels that enable cells to communicate with each other by passing electrical signals back and forth.
The ease with which a particular shape could be coaxed from a Girardia dorotocephala worm was proportional to the proximity of the target species on the evolutionary timeline: the closer the two species were related, the easier it was to effect the change. This observation strengthens the connection to evolutionary history, suggesting that modulation of physiological circuits may be one more tool exploited by evolution to alter animal body plans.
However, this shape change was only temporary. Weeks after the planaria completed regeneration to the other species’ head shapes, the worms once again began remodeling and re-acquired their original head morphology. Additional research is needed to determine how this occurs. The authors also presented a computational model that explains how changes in cell-to-cell communication can give rise to the diverse shape types.
Genetic residue from ancient viral infections has been repurposed to play a vital role in acquiring pluripotency, the developmental state that allows a fertilized human egg to become all the cells in the body.
Genetic material from ancient viral infections is critical to human development, according to researchers at the Stanford University School of Medicine. They’ve identified several noncoding RNA molecules of viral origin that are necessary for a fertilized human egg to acquire the ability in early development to become all the cells and tissues of the body. Blocking the production of these RNA molecules stops development in its tracks, they found.
The discovery comes on the heels of a Stanford study earlier this year showing that early human embryos are packed full of what appear to be viral particles arising from similar left-behind genetic material. “We’re starting to accumulate evidence that these viral sequences, which originally may have threatened the survival of our species, were co-opted by our genomes for their own benefit,” said Vittorio Sebastiano, PhD, an assistant professor of obstetrics and gynecology. “In this manner, they may even have contributed species-specific characteristics and fundamental cell processes, even in humans.”
Sebastiano is a co-lead and co-senior author of the study, published online Nov. 23 in Nature Genetics. Postdoctoral scholar Jens Durruthy-Durruthy, PhD, is the other lead author. The other senior author of the paper is Renee Reijo Pera, PhD, a former professor of obstetrics and gynecology at Stanford who is now on the faculty of Montana State University.
Sebastiano and his colleagues were interested in learning how cells become pluripotent, or able to become any tissue in the body. A human egg becomes pluripotent after fertilization, for example. And scientists have learned how to induce other, fully developed human cells to become pluripotent by exposing them to proteins known to be present in the very early human embryo. But the nitty-gritty molecular details of this transformative process are not well understood in either case.
The researchers knew that a class of RNA molecules called long intergenic noncoding RNAs, or lincRNAs, has been implicated in many important biological processes, including the acquisition of pluripotency. These molecules are made from DNA in the genome, but they don’t go on to make proteins. Instead they function as RNA molecules to affect the expression of other genes.
Sebastiano and Durruthy-Durruthy used recently developed RNA sequencing techniques to examine which lincRNAs are highly expressed in human embryonic stem cells. Previously, this type of analysis was stymied by the fact that many of the molecules contain highly similar, very repetitive regions that are difficult to sequence accurately.
They identified more than 2,000 previously unknown RNA sequences, and found that 146 are specifically expressed in embryonic stem cells. They homed in on the 23 most highly expressed sequences, which they termed HPAT1-23, for further study. Thirteen of these, they found, were made up almost entirely of genetic material left behind after an eons-ago infection by a virus called HERV-H.
HERV-H is what’s known as a retrovirus. These viruses spread by inserting their genetic material into the genome of an infected cell. In this way, the virus can use the cell’s protein-making machinery to generate viral proteins for assembly into a new viral particle. That particle then goes on to infect other cells. If the infected cell is a sperm or an egg, the retroviral sequence can also be passed to future generations.
HIV is one common retrovirus that currently causes disease in humans. But our genomes are also littered with sequences left behind from long-ago retroviral infections. Unlike HIV, which can go on to infect new cells, these retroviral sequences are thought to be relatively inert; millions of years of evolution and accumulated mutations mean that few maintain the capacity to give instructions for functional proteins.
After identifying HPAT1-23 in embryonic stem cells, Sebastiano and his colleagues studied their expression in human blastocysts — the hollow clump of cells that arises from the egg in the first days after fertilization. They found that HPAT2, HPAT3 and HPAT5 were expressed only in the inner cell mass of the blastocyst, which becomes the developing fetus. Blocking their expression in one cell of a two-celled embryo stopped the affected cell from contributing to the embryo’s inner cell mass. Further studies showed that the expression of the three genes is also required for efficient reprogramming of adult cells into induced pluripotent stem cells.
The wired rose leaf can be darkened or lightened with a zap of electricity.
Scientists have just taken a surprising leap toward actually integrating living plants into human electronics and power systems: A team of Swedish botanists and electrical engineers unveiled a fascinating method of growing and powering conductive wires inside living plants. Led by Eleni Stavrinidou, a bioelectronic engineer at Linköping University in Linköping, Sweden, the scientists employed a transparent, conductive gel that cut roses could naturally soak up into their stems and leaves.
After a few hours, the gel material would harden and form flexible wires inside the plants' stems. Thanks to the fantastic properties of the plant-embedded wires, electric current could even be run through the wired stems, without (as far as the scientists could tell) damaging the plants.
"Although many attempts have been made to augment plant function with electroactive materials, [until now] plants' 'circuitry' has never been directly merged with electronics," writes the research team. The scientists describe their curious, bionic vegetation today in a remarkably titled paper, "Electronic plants," in the journal Science Advances.
Stavrinidou's research team tested countless conductive materials before they came across a winner. Their aim was to get plants to soak up materials that could later harden into wires inside the plant's xylem, the vein-like system a plant uses to transport water and nutrients. However, most materials (for example, two molecules called pyrrole and aniline) either simply wouldn't be taken up, proved toxic during the hardening phase, or would clog the xylem. In the end, the winning material was a transparent, organic polymer that basically acts like conductive plastic. It's a flavor of a material called PEDOT, short for poly(3,4-ethylenedioxythiophene).
Scientists from the University of Leicester have for the first time created a detailed image of a toxin - called pneumolysin - associated with deadly infections such as bacterial pneumonia, meningitis and septicaemia.
The three-year study involving four research groups from across the University has been described as an exciting advance because it points to the possibility of creating therapeutics that block assembly of pneumolysin pores to treat people with pneumococcal disease. The University has recently set up a company, Axendos Therapeutics, to pursue this aim.
Using a technique called X-ray crystallography at Diamond Light Source, the UK's national synchrotron science facility, the Leicester team was able to see the individual atoms of the toxin. The structure not only reveals what the toxin looks like, but also shows how it assembles on the surface of cells to form lethal pores.
Professor Wallis said: "Our research is about a toxin called pneumolysin produced by a bacterium called pneumococcus (aka Streptococcus pneumoniae). Pneumococcal infections are the leading cause of bacterial pneumonia as well as the cause of a range of other life-threatening diseases such as meningitis and septicaemia. Pneumolysin is instrumental in the ability of pneumococcus to cause disease. The World Health Organization (WHO) estimated that more than 1.6 million people die every year from pneumococcal infections, including more than 800,000 children under 5 years old.
"The aim of the research was to find out how pneumolysin kills our cells, thereby causing tissue damage and contributing to disease. In particular we wanted to find out how multiple copies of the toxin assemble on the surface of cells. "We managed to determine the structure of pneumolysin using a technique called X-ray crystallography, which enables us to see the individual atoms of the toxin. The structure not only reveals what the toxin looks like, but also shows how it assembles to form lethal pores.
"Ours is the first detailed structure of pneumolysin. This level of detail is important and useful because it enables us to begin to understand how the toxin works. For example, we can see which parts of the toxin come together during pore assembly. When we disrupt these contacts, the toxin becomes inactivated so can no longer kill cells. "The mode of action of pneumolysin revealed by our work appears to be conserved in related toxins from other disease-causing bacteria, e.g. toxins produced by pathogenic species of Listeria."
The Top 500 supercomputer rankings are a fun way to gauge which countries boast the most powerful rigs in the world. And, perhaps unsurprisingly, China has won the top spot for the sixth time in a row.
Not only that, but the nation has nearly tripled its supercomputer count from 37 to 109 in only six months. Although the US still maintains a healthy 201 supercomputers, first place in terms of quantity, that’s actually a record low for the nation in the Top 500, which was conceived back in 1993.
Produced by its National University of Defense Technology, China’s Tianhe-2 boasts a whopping 3,120,000 cores with the ability to achieve 33.86 quadrillion floating point operations (flops) per second. As if that information alone wasn’t intimidating enough, those numbers are almost double those of the US energy department’s still powerful, but not quite as monumental, Titan Cray XK7, apparently capable of 17.59 petaflops, according to the Linpack benchmark.
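The "almost double" claim follows directly from the two Linpack figures quoted above:

```python
# Ratio of the two quoted Linpack results, in petaflops.
tianhe2_pflops = 33.86   # China's Tianhe-2
titan_pflops = 17.59     # US DOE's Titan Cray XK7

ratio = tianhe2_pflops / titan_pflops
print(f"Tianhe-2 is {ratio:.2f}x faster than Titan on Linpack")
```

The ratio comes out just under 2, consistent with "almost double."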
A United States-owned rig also occupies the third place position, that is, IBM’s Sequoia, custom-built for the National Nuclear Security Administration and housed in the Lawrence Livermore National Lab. The Sequoia, which claimed the top spot in 2012, has since been surpassed by both the Tianhe-2 and the Titan Cray XK7.
Among the top 10 of the Top 500, only the Trinity and Hazel Hen are fresh faces to the list, positioned at numbers 6 and 8, respectively. While the Trinity was conceived for the US Department of Energy, the Hazel Hen rests in Stuttgart, Germany.
In an interview with the BBC, Rajnish Arora, vice president of enterprise computing at IDC Asia Pacific, explained to the network that China’s domination in the supercomputer space is less reflective of the United States’ inability to compete and more representative of China’s economic growth.
“When China started off appearing on the center stage of the global economy in the 80s and 90s, it was predominately a manufacturing hub,” Arora said. “All the IP or design work would happen in Europe or the US and the companies would just send manufacturing or production jobs to China. Now as these companies become bigger, they want to invest in technical research capabilities, so that they can create a lot more innovation and do basic design and engineering work.”
For decades, bacteria have served society by producing antibiotics – the chemical compounds that can cure infectious diseases. However, it is possible that many natural microorganisms carry the recipes for the medicines of the future hidden in their genetic material, without this part of their genetic code being activated or “switched on”.
But now, biotechnologists from SINTEF and NTNU are developing technology that will make it easier to find – and exploit – these hidden and unutilized medicine factories in bacteria that exist in the natural environment. The hunt will concentrate on marine bacteria, and is one of the projects run by the new Norwegian Centre for Digital Life.
“Our aim is to identify novel compounds that are capable, for example, of killing cancer cells or antibiotic-resistant bacteria. The technology that we are developing will reduce the time taken to search for these and to make the production process more efficient,” says Alexander Wentzel, a senior scientist at SINTEF.
As a strategy, scientists will clip out genetic material from a large number of microorganisms before they transfer this DNA to cultivable bacteria: organisms whose characteristics have already been studied, and which will be optimized by the researchers in the INBioPharm project. The alterations will enable these organisms to switch on production of new substances that cannot be produced in the microorganism from which the DNA was extracted.
With the aid of systems biology and synthetic biology (see fact-box), the project will develop the microorganisms in a way which, when they are cultivated, will produce small test quantities of all the possible products, and later, enable mass-production of the most promising substances.
Twist Bioscience dramatically scaled down the equipment for synthesizing DNA in a lab, making the process cheaper and faster. The stamp-sized wafers contain 100 microwells. Each of these contains 100 nanowells in which DNA can be synthesized.
At Twist Bioscience’s office in San Francisco, CEO Emily Leproust pulled out of her tote bag two things she carries around everywhere: a standard 96-well plastic plate ubiquitous in biology labs and her company’s invention, a silicon wafer studded with a similar number of nanowells.
Twist’s pitch is that it has dramatically scaled down the equipment for synthesizing DNA in a lab, making the process cheaper and faster. As Leproust gave her spiel, I looked from the jankety plastic plate, the size of two decks of cards side by side, to the sleek stamp-sized silicon wafer and politely nodded along. Then she handed me a magnifying lens to look down the wafer’s nanowells. Inside each nanowell were another 100 microscopic holes.
That’s when I actually got it. The 96-well plate was not equivalent to the wafer; the entire plate was equivalent to one nanowell on the wafer. To put a number on it, traditional DNA synthesis machines can make one gene per 96-well plate; Twist’s machine can make 10,000 genes on a silicon wafer the same size as the plate.
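The throughput arithmetic is just the nesting Leproust demonstrates: roughly 100 wells on the wafer, each subdivided into 100 smaller synthesis sites, versus one gene per traditional plate (the variable names below are mine, for illustration):

```python
# Gene-synthesis throughput: nested wells on Twist's wafer vs. a 96-well plate.
WELLS_PER_WAFER = 100    # top-level wells on the stamp-sized wafer
SITES_PER_WELL = 100     # smaller synthesis sites nested inside each well
GENES_PER_PLATE = 1      # one gene per traditional 96-well plate run

genes_per_wafer = WELLS_PER_WAFER * SITES_PER_WELL
print(f"{genes_per_wafer} genes per wafer vs {GENES_PER_PLATE} per plate run")
```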
But who wants to order 10,000 genes? Until recently, that question might have been met with silence. “It was a lonely time,” says Leproust of her early fundraising efforts for Twist. Fast forward a couple years, though, and Twist has just signed a deal to sell at least 100 million letters of DNA—equivalent to tens of thousands of genes—to Ginkgo Bioworks, a synthetic biology outfit that inserts genes into yeast to make scents like rose oil or flavors like vanillin. Ginkgo is at the forefront of a wave of synthetic biology companies, bolstered by new gene-editing technologies like Crispr and investor interest.
“We’re Intel and Ginkgo is Microsoft,” says Leproust, which sounds exactly like the kind of rhetoric you hear all the time in startupland. But her words reveal Twist’s specific ambition to be the driver behind synthetic biology innovations. Synthesizing genes in a lab allows biologists to design—down to the letter—the ones they want to test. Companies out there are already tinkering with DNA in various cells to create spider silk, cancer treatments, biodegradable plastic, diesel fuel—and Twist’s founders think the company can become the driving technology behind that new world.
For nearly 400 years, Thanksgiving has been a time in North America when families come together to celebrate food and agriculture. As we reflect on yet another year, agricultural scientists at USDA continue to keep a wary eye on the future. At the end of what may be the hottest year on record, a period of drought has threatened the heart of one of the most important agricultural production zones in the United States. Water demands are increasing, and disease and pest pressures are continually evolving. This challenges our farmers’ ability to raise livestock and crops. How are science and technology going to address the problems facing our food supply?
To find answers, agricultural scientists turn to data—big data. Genomics, the field of science responsible for cataloging billions of DNA base pairs that encode thousands of genes in an organism, is fundamentally changing our understanding of plants and animals. USDA has already helped to fund and collect genomes for 25 crop plant species, important livestock and fish species, and numerous bacteria, fungi, and insect species related to agricultural production. Other USDA-supported research projects expanding these efforts are currently underway, including genome sequencing of 1,000 breeds of bulls and 5,000 insect species in the i5K initiative. But classifying and understanding DNA is only part of the story.
Even if neighboring farmers were to raise identical varieties of tomato, small variations in the environment can reduce crop performance and/or increase pests and disease. So, scientists and farmers are increasingly using technology like satellites, drones, sensors, and laser-guided tractors to collect thousands of data points about the environmental conditions in a field, such as temperature, humidity, soil composition, or slope of the land. Using these “precision agriculture” techniques, farmers could reduce their environmental footprint by matching land management practices to the unique environments on their farm.
In the long term, USDA researchers are hoping to combine precision agriculture and genomics in a remarkable way—to develop crops with combinations of genes that lead to the best performance in specific environments. To support this goal, USDA continues to lead the way in collecting and maintaining open access to these types of agriculture data. As a result, your local farmer’s market or grocery store may one day have even more varieties of produce to choose from on Thanksgiving, with each optimized for the farm or field on which it was grown.
Australian and Italian researchers have developed a smart sensor that can detect single molecules in chemical and biological compounds – a highly valued function in medicine, security and defence.
The researchers from the University of New South Wales, Swinburne University of Technology, Monash University and the University of Parma in Italy used a chemical and biochemical sensing technique called surface-enhanced Raman spectroscopy (SERS), which is used to understand more about the make-up of materials.
They were able to greatly amplify the technique's performance by taking advantage of metal nanostructures, which help generate 'hotspots' in close proximity to the metal surfaces.
The sensor was created using gold nanoparticles which self-assemble onto a gold- and silica-coated silicon base. This approach means the nanoparticles find the perfect spacing to achieve lots of uniformly distributed hotspots on the surface.
The hotspots also used a heat-responsive polymer which acted as a gate to trap molecules but, importantly, also allowed them to be released down the track. "The sensor shows not only a good SERS reproducibility but also the ability to repetitively catch and release molecules for single-molecular sensing," postdoctoral fellow at Swinburne's Centre for Micro-Photonics, Dr Lorenzo Rosa, said.
"This reversible trapping process makes it possible to detect an abundance of analytes in one measurement, but also to reuse the SERS substrate multiple times." The technique used in this work has various applications for other measurement and detection systems sensitive to humidity, pH and light.