NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1,450 news sources.
NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL at the top right of the screen) to display all the relevant postings SORTED by TOPICS.
You can also type your own query, e.g., "dna" if you are looking for articles involving DNA as a keyword.
MOST_READ • 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Nanophotonics, which takes advantage of the much faster speed of light compared with electrons, could potentially lead to future optical computers that transmit large amounts of data at very high speeds. Working toward this goal, researchers in a new study have developed a tiny laser 100 micrometers long and 5 micrometers in diameter—right at the limit of what the unaided human eye can see. As the first rolled-up semiconductor tube laser that is electrically powered, it can fit on an optical chip and serve as the light source for future optical communications technology.
A team of engineers, M. H. T. Dastjerdi, et al., at McGill University in Montreal have reported their development of the tiny laser in a recent issue of Applied Physics Letters. Future optical chips will require many vital components, such as modulators (which convert electrical signals into optical ones), photodetectors (which do the reverse), and waveguides (which control the path of light). Another essential requirement is, of course, the light itself, which may come from a micro- or nano-scale laser that can be integrated with the other components onto a silicon (Si) platform.
Although many different types of micro-sized lasers have been studied over the past several years, one promising candidate is a laser made from rolled-up semiconductor tubes. These lasers are fabricated by straining 2D nanomembranes on a substrate, and then selectively releasing parts of the nanomembranes so that they roll up into tiny tubes that act as optical cavities. The rolled-up tube lasers have an advantage over most other types of small lasers in that their optical emission characteristics can be precisely tailored using standard photolithography processes. They can also be easily transferred onto a Si platform, allowing for seamless integration with other chip components.
"In contrast to electrically injected devices, optically pumped devices require additional light sources (lasers, LEDs) to operate that take additional space on the chip and add a significant level of complexity," Zetian Mi, Associate Professor at McGill University, told Phys.org. "Therefore, optically pumped light sources are not practical for integrated chip-level optical communication systems."
As the researchers explain, fabricating electrically powered rolled-up tube lasers is difficult because the very thin nanomembranes make the process of injecting charge carriers into the laser very inefficient. To overcome this problem, the researchers designed the laser to lie horizontally on top of two supporting pieces connected to the electrodes in a U-shaped mesa design. In this formation, charge carriers are injected into the laser cavity from the sides. Because this lateral carrier injection scheme circumvents the thin membrane walls, it significantly increases injection efficiency, with light emitted from the center of the tube laser.
UC Irvine and Australian chemists have figured out how to unboil egg whites – an innovation that could dramatically reduce costs for cancer treatments, food production and other segments of the $160 billion global biotechnology industry, according to findings published today in the journal ChemBioChem.
“Yes, we have invented a way to unboil a hen egg,” said Gregory Weiss, UCI professor of chemistry and molecular biology & biochemistry. “In our paper, we describe a device for pulling apart tangled proteins and allowing them to refold. We start with egg whites boiled for 20 minutes at 90 degrees Celsius and return a key protein in the egg to working order.”
Like many researchers, he has struggled to efficiently produce or recycle valuable molecular proteins that have a wide range of applications but which frequently “misfold” into structurally incorrect shapes when they are formed, rendering them useless.
“It’s not so much that we’re interested in processing the eggs; that’s just demonstrating how powerful this process is,” Weiss said. “The real problem is there are lots of cases of gummy proteins that you spend way too much time scraping off your test tubes, and you want some means of recovering that material.”
But older methods are expensive and time-consuming: The equivalent of dialysis at the molecular level must be done for about four days. “The new process takes minutes,” Weiss noted. “It speeds things up by a factor of thousands.”
To re-create a clear protein known as lysozyme once an egg has been boiled, he and his colleagues add a urea substance that chews away at the whites, liquefying the solid material. That’s half the process; at the molecular level, protein bits are still balled up into unusable masses. The scientists then employ a vortex fluid device, a high-powered machine designed by Professor Colin Raston’s laboratory at South Australia’s Flinders University. Shear stress within thin, microfluidic films is applied to those tiny pieces, forcing them back into untangled, proper form.
“This method … could transform industrial and research production of proteins,” the researchers write in ChemBioChem. For example, pharmaceutical companies currently create cancer antibodies in expensive hamster ovary cells that do not often misfold proteins. The ability to quickly and cheaply re-form common proteins from yeast or E. coli bacteria could potentially streamline protein manufacturing and make cancer treatments more affordable. Industrial cheese makers, farmers and others who use recombinant proteins could also achieve more bang for their buck.
The Guinea worm is inching ever closer to extinction, but unlike just about every other endangered species, no one is going to try to save it, least of all scientists. On the contrary, the worm’s disappearance would mark the scouring of a disease from the face of the earth—a feat humanity’s only been able to celebrate twice before, with the end of smallpox in 1980 and of the cattle disease rinderpest in 2011. Polio, despite the fact that a vaccine’s been around for more than half a century, has managed to hang on by its microscopic threads.
The Guinea worm is a parasite that enters the human body when the unwitting host-to-be drinks water contaminated with tiny water fleas in which Guinea worm larvae lurk. Once ingested, the fleas die and the Guinea worm larvae enter the host’s abdominal cavity and, unbeknownst to the host, begin maturing into a worm or worms that grow up to three feet in length. After about a year a painful blister forms on the host’s skin accompanied by itching and a burning sensation. Within about 10 to 15 days, one or more worms erupt from the person’s skin in a painful and drawn-out process. The emergence can occur from different parts of the body, including the roof of the mouth, the genitals, or the eye sockets, but around 90 percent of the worms emerge from the lower legs, according to the World Health Organization (WHO).
While the disease rarely kills, it can leave the host debilitated and weakened for a short or long period of time. Thanks in large part to the work of the Carter Center, the incidence of Guinea worm disease (also known as dracunculiasis, which is Latin for “affliction with little dragons”) has plummeted in recent years, falling from an estimated 3.5 million cases worldwide in the mid-1980s to just 148 in 2013 and 126 in 2014, according to the WHO.
How has such success been achieved? It’s taken the concerted effort of all involved—the scientists who have figured out how to contain it, community organizers who have helped spread the word on preventative solutions, and the people in areas where Guinea worm disease has been a big problem who are implementing the necessary changes to keep the parasite at bay.
Extreme weather phenomena called atmospheric rivers were behind intense snowstorms recorded in 2009 and 2011 in East Antarctica. The resulting snow accumulation partly offset recent ice loss from the Antarctic ice sheet, report researchers from KU Leuven.
Atmospheric rivers are long, narrow water vapour plumes stretching thousands of kilometres across the sky over vast ocean areas. They are capable of rapidly transporting large amounts of moisture around the globe and can cause devastating precipitation when they hit coastal areas.
Although atmospheric rivers are notorious for their flood-inducing impact in Europe and the Americas, their importance for Earth’s polar climate – and for global sea levels – is only now coming to light.
In a recent study, an international team of researchers led by Irina Gorodetskaya of KU Leuven’s Regional Climate Studies research group used a combination of advanced modelling techniques and data collected at Belgium’s Princess Elisabeth polar research station in East Antarctica’s Dronning Maud Land to produce the first ever in-depth look at how atmospheric rivers affect precipitation in Antarctica.
The researchers studied two particular instances of heavy snowfall in the East Antarctic region in detail, one in May 2009 and another in February 2011, and found that both were caused by atmospheric rivers slamming into the East Antarctic coast.
The Princess Elisabeth polar research station recorded snow accumulation equivalent to up to 5 centimetres of water for each of these weather events, accounting for 22 per cent of the total annual snow accumulation in those years.
The findings point to atmospheric rivers’ impressive snow-producing power. “When we looked at all the extreme weather events that took place during 2009 and 2011, we found that the nine atmospheric rivers that hit East Antarctica in those years accounted for 80 per cent of the exceptional snow accumulation at Princess Elisabeth station,” says Irina Gorodetskaya.
Our planet's trusty magnetic field—an invisible barrier created by the churning of molten-hot matter in Earth's core—protects us from the lethal space radiation that engulfs most of the known universe. Without this field, Earth today would look as barren as Mars. Scientists have a hard time imagining how life anywhere could exist without one.
This new understanding of asteroids has tantalizing implications for the panspermia hypothesis—the idea that life, or its precursor chemicals, could have found its way to Earth from elsewhere in the universe by hitchhiking on asteroids, meteors, or comets. Bryson is careful to point out that this is still a very open question, and his new finding is not concrete evidence that meteors and asteroids did or could support life. However, he says, it certainly strengthens the case.
Hyperion, new malware detection software that can quickly recognize malicious software even if the specific program has not been previously identified as a threat, has been licensed by Oak Ridge National Laboratory (ORNL) to R&K Cyber Solutions LLC (R&K).
Hyperion, which has been under development for a decade, offers more comprehensive scanning capabilities than existing cyber security methods, said one of its inventors, Stacy Prowell of the ORNL Cyber Warfare Research team. By computing and analyzing program behaviors associated with harmful intent, Hyperion can determine the software’s behavior without using its source code or even running the program.
“These behaviors can be automatically checked for known malicious operations as well as domain-specific problems,” Prowell said. “This technology helps detect vulnerabilities and can uncover malicious content before it has a chance to execute.”
“This approach is better than signature detection, which only searches for patterns of bytes,” Prowell said. “It’s easy for somebody to hide that — they can break it up and scatter it about the program so it won’t match any signature.”
“Software behavior computation is an emerging science and technology that will have a profound effect on malware analysis and software assurance,” said R&K Cyber Solutions CEO Joseph Carter. “Computed behavior based on deep functional semantics is a much-needed cyber security approach that has not been previously available. Unlike current methods, behavior computation does not look at surface structure. Rather, it looks at deeper behavioral patterns.”
Carter adds that the technology’s malware analysis capabilities can be applied to multiple related cyber security problems, including software assurance in the absence of source code, hardware and software data exploitation and forensics, supply chain security analysis, anti-tamper analysis, and potential first intrusion detection systems based on behavior semantics.
Earlier this month, Medscape published the results of their recent survey, which asked 18,575 physicians across 25 specialties to rate their Electronic Health Record (EHR) system. For overall satisfaction, the #1 ranked EHR solution was the VA’s Computerized Patient Record System, also known as VistA. It was built using open-source software and is therefore license-free.
There’s also a publicly available version of VistA called OpenVista, and several companies leverage a services-only business model for larger OpenVista installations. For smaller installations, a YouTube video suggests the free OpenVista software can be installed in about 10 minutes, provided you bring your own hardware.
Of course, free software licensing doesn’t make the hardware, installation or maintenance free, but the lack of any software licensing fees does reduce the overall cost, especially for large installations, where it can typically save millions of dollars.
Open-source software also charts a much different course for design changes, one not dependent on the resources, budgets or revenue requirements of independent software vendors (ISVs).
In many ways, VistA’s top rating is no surprise because it’s the only EHR installation in the U.S. with a truly national footprint. As a single software solution, VistA is designed to support almost 9 million vets through about 1,700 different care sites around the country.
Scientists at the University of Rochester have used lasers to transform metals into extremely water repellent, or super-hydrophobic, materials without the need for temporary coatings.
Super-hydrophobic materials are desirable for a number of applications such as rust prevention, anti-icing, or even in sanitation uses. However, as Rochester's Chunlei Guo explains, most current hydrophobic materials rely on chemical coatings. In a paper published today in the Journal of Applied Physics, Guo and his colleague at the University's Institute of Optics, Anatoliy Vorobyev, describe a powerful and precise laser-patterning technique that creates an intricate pattern of micro- and nanoscale structures to give the metals their new properties. This work builds on earlier research by the team in which they used a similar laser-patterning technique that turned metals black. Guo states that using this technique they can create multifunctional surfaces that are not only super-hydrophobic but also highly-absorbent optically.
Guo adds that one of the big advantages of his team's process is that "the structures created by our laser on the metals are intrinsically part of the material surface." That means they won't rub off. And it is these patterns that make the metals repel water.
"The material is so strongly water-repellent, the water actually gets bounced off. Then it lands on the surface again, gets bounced off again, and then it will just roll off from the surface," said Guo, professor of optics at the University of Rochester. That whole process takes less than a second. As the water bounces off the super-hydrophobic surfaces, it also collects dust particles and takes them along for the ride. To test this self-cleaning property, Guo and his team took ordinary dust from a vacuum cleaner and dumped it onto the treated surface. Roughly half of the dust particles were removed with just three drops of water. It took only a dozen drops to leave the surface spotless. Better yet, it remains completely dry.
For years, the lab of Leonard Zon, MD, director of the Stem Cell Research Program at Boston Children’s Hospital, has sought ways to enhance bone marrow transplants for patients with cancer, serious immune deficiencies and blood disorders. Using zebrafish as a drug-screening platform, the lab has found a number of promising compounds, including one called ProHema that is now in clinical trials. But truthfully, until now, Zon and his colleagues have largely been flying blind.
“Stem cell and bone marrow transplants are still very much a black box: cells are introduced into a patient and later on we can measure recovery of their blood system, but what happens in between can’t be seen,” says Owen Tamplin, PhD, in the Zon Lab. “Now we have a system where we can actually watch that middle step.” The animation, based on live imaging of naturally transparent zebrafish, reveals a surprisingly dynamic system in which newborn blood stem cells travel through the blood, exit into a “niche” where they get “cuddled” and nurtured, and then proceed to their final blood-making home. Their journey, also described in the January 15 issue of Cell, offers several clues for helping bone marrow transplants “take.”
“The same process occurs during a bone marrow transplant as occurs in the body naturally,” says Zon. “Our direct visualization gives us a series of steps to target, and in theory we can look for drugs that affect every step of that process.”
Satiety and other core physiological functions are modulated by sensory signals arising from the surface of the gut. Luminal nutrients and bacteria stimulate epithelial biosensors called enteroendocrine cells. Despite being electrically excitable, enteroendocrine cells are generally thought to communicate indirectly with nerves through hormone secretion and not through direct cell-nerve contact.
However, a group of scientists recently discovered in intestinal enteroendocrine cells a cytoplasmic process, a neuropod. They determined that neuropods provide a direct connection between enteroendocrine cells and neurons innervating the small intestine and colon. Using cell-specific transgenic mice to study neural circuits, the researchers found that enteroendocrine cells have all the necessary elements for neurotransmission, including expression of genes that encode pre-, post-, and transsynaptic proteins. This neuroepithelial circuit was reconstituted in vitro by co-culturing single enteroendocrine cells with sensory neurons. They used a monosynaptic rabies virus to define the circuit’s functional connectivity in vivo and determined that delivery of this neurotropic virus into the colon lumen resulted in the infection of mucosal nerves through enteroendocrine cells. This neuroepithelial circuit can serve as both a sensory conduit for food and gut microbes to interact with the nervous system and a portal for viruses to enter the enteric and central nervous systems.
Urban sociologists have long known that a set of remarkable laws governs the large-scale interaction between individuals, such as the probability that one person will befriend another and the size of the cities they live in. The latter is an example of Zipf’s law: if cities are listed according to size, then the rank of a city is inversely proportional to the number of people who live in it. For example, if the biggest city in the US has a population of 8 million people, the second-biggest city will have a population of 8 million divided by 2, the third biggest will have a population of 8 million divided by 3, and so on.
This simple relationship is known as a scaling law and turns out to fit the observed distribution of city sizes extremely well. Another interesting example is the probability that one person will be friends with another. This turns out to be inversely proportional to the number of people who live closer to the first person than the second.
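The rank-size rule described above fits in a few lines of code. As a sketch (the 8-million figure and the number of cities are illustrative, not census data):

```python
# Toy illustration of Zipf's law for city sizes: the city of rank k
# is predicted to have roughly 1/k the population of the largest city.
def zipf_city_sizes(largest: int, n: int) -> list[int]:
    """Predicted populations of the n largest cities under Zipf's law."""
    return [round(largest / rank) for rank in range(1, n + 1)]

print(zipf_city_sizes(8_000_000, 4))
# [8000000, 4000000, 2666667, 2000000]
```

Real city-size data only approximately follow this curve; the law is an empirical observation about measured distributions, not an exact rule.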
What’s curious about these laws is that although they are widely accepted, nobody knows why they are true. There is no deeper theoretical model from which these laws emerge. Instead, they come simply from the measured properties of cities and friendships.
Today, all that changes thanks to the work of Henry Lin and Abraham Loeb at the Harvard-Smithsonian Centre for Astrophysics in Cambridge. These guys have discovered a single unifying principle that explains the origin of these laws.
And here’s the thing: their approach is mathematically equivalent to the way that cosmologists describe the growth of galaxies in space. In other words, cities form out of variations in population density in exactly the same way that galaxies formed from variations in matter density in the early universe.
These guys begin by creating a mathematical model of the way human population density varies across a flat Euclidean plane. They say they can ignore the effects of the Earth’s curvature in their model because any variations in population density will be small compared to the radius of the Earth. That is exactly how cosmologists think about the way galaxies evolved. They first consider the matter density of the early universe. Next, they look at the mathematical structure of any variations in this density. And finally they use this mathematics to examine how this density can change over time as more matter is added or taken away from specific regions. Because of the many decades of work on cosmology, these mathematical tools are already well understood and easily applied to the similar problem of the population density on Earth. All that is needed is some data to calibrate the mathematical model.
A mammoth US effort to genetically profile 10,000 tumors has officially come to an end. Started in 2006 as a US$100-million pilot, The Cancer Genome Atlas (TCGA) is now the biggest component of the International Cancer Genome Consortium, a collaboration of scientists from 16 nations that has discovered nearly 10 million cancer-related mutations.
The question is what to do next. Some researchers want to continue the focus on sequencing; others would rather expand their work to explore how the mutations that have been identified influence the development and progression of cancer.
“TCGA should be completed and declared a victory,” says Bruce Stillman, president of Cold Spring Harbor Laboratory in New York. “There will always be new mutations found that are associated with a particular cancer. The question is: what is the cost–benefit ratio?”
Stillman was an early advocate for the project, even as some researchers feared that it would drain funds away from individual grants. Initially a three-year project, it was extended for five more years. In 2009, it received an additional $100 million from the US National Institutes of Health plus $175 million from stimulus funding that was intended to spur the US economy during the global economic recession.
On 2 December, Staudt announced that once TCGA is completed, the NCI will continue to intensively sequence tumours in three cancers: ovarian, colorectal and lung adenocarcinoma. It then plans to evaluate the fruits of this extra effort before deciding whether to add back more cancers. But this time around, the studies will be able to incorporate detailed clinical information about the patient’s health, treatment history and response to therapies. Because researchers can now use paraffin-embedded samples, they can tap into data from past clinical trials, and study how mutations affect a patient’s prognosis and response to treatment. Staudt says that the NCI will be announcing a call for proposals to sequence samples taken during clinical trials using the methods and analysis pipelines established by the TCGA.
The rest of the International Cancer Genome Consortium, slated to release early plans for a second wave of projects in February, will probably take a similar tack, says co-founder Tom Hudson, president of the Ontario Institute for Cancer Research in Toronto, Canada. A focus on finding sequences that make a tumour responsive to therapy has already been embraced by government funders in several countries eager to rein in health-care costs, he says. “Cancer therapies are very expensive. It’s a priority for us to address which patients would respond to an expensive drug.”
The NCI is also backing the creation of a repository for data not only from its own projects, but also from international efforts. This is intended to bring data access and analysis tools to a wider swathe of researchers, says Staudt. At present, the cancer genomics data constitute about 20 petabytes (1 petabyte = 10^15 bytes), and are so large and unwieldy that only institutions with significant computing power can access them. Even then, it can take four months just to download them.
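The quoted size and download time give a rough sense of the bandwidth involved. As back-of-the-envelope arithmetic (assuming 30-day months and decimal units; these figures are illustrative, not from the article):

```python
# Implied sustained transfer rate for "20 petabytes in about four months".
DATA_BYTES = 20 * 10**15           # 20 PB, with 1 PB = 10**15 bytes
SECONDS = 4 * 30 * 24 * 3600       # four 30-day months
rate_gb_per_s = DATA_BYTES / SECONDS / 10**9
print(f"{rate_gb_per_s:.1f} GB/s")  # prints "1.9 GB/s"
```

Sustaining nearly 2 GB/s for months is far beyond a typical institutional connection, which is why co-locating analysis with the data matters.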
The Human Protein Atlas, a major multinational research project supported by the Knut and Alice Wallenberg Foundation, recently launched (November 6, 2014) an open-access, tissue-based interactive map of the human proteome. Based on 13 million annotated images, the database maps the distribution of proteins in all major tissues and organs in the human body, showing both proteins restricted to certain tissues, such as the brain, heart, or liver, and those present in all. As an open access resource, it is expected to help drive the development of new diagnostics and drugs, but also to provide basic insights in normal human biology.
In the Science article, "Tissue-based Atlas of the Human Proteome", the approximately 20,000 protein coding genes in humans have been analysed and classified using a combination of genomics, transcriptomics, proteomics, and antibody-based profiling, says the article's lead author, Mathias Uhlén, Professor of Microbiology at Stockholm's KTH Royal Institute of Technology and the director of the Human Protein Atlas program. The analysis shows that almost half of the protein-coding genes are expressed in a ubiquitous manner and thus found in all analysed tissues.
Approximately 15% of the genes show an enriched expression in one or several tissues or organs, including well-known tissue-specific proteins, such as insulin and troponin. The testes, or testicles, have the most tissue-enriched proteins followed by the brain and the liver. The analysis suggests that approximately 3,000 proteins are secreted from the cells and an additional 5,500 proteins are localized to the membrane systems of the cells.
"This is important information for the pharmaceutical industry. We show that 70% of the current targets for approved pharmaceutical drugs are either secreted or membrane-bound proteins," Uhlén says. "Interestingly, 30% of these protein targets are found in all analysed tissues and organs. This could help explain some side effects of drugs and thus might have consequences for future drug development." The analysis also contains a study of the metabolic reactions occurring in different parts of the human body. The most specialised organ is the liver with a large number of chemical reactions not found in other parts of the human body.
The coverage of living corals on Australia's Great Barrier Reef could decline to less than 10 percent if ocean warming continues, according to a new study that explores the short- and long-term consequences of environmental changes to the reef.
Environmental change has caused the loss of more than half the world's reef-building corals. Coral cover, a measure of the percentage of the seafloor covered by living coral, is now just 10-20 percent worldwide. The Great Barrier Reef, once thought to be one of the more pristine global reef systems, has lost half of its coral cover in only the last 27 years. Overfishing, coastal pollution and increased greenhouse gas emissions leading to increased temperatures and ocean acidification, as well as other human impacts, are all affecting the delicate balance maintained in coral reef ecosystems.
Now, in a new study that aims to project the composition of the future Great Barrier Reef under current and future environmental scenarios, researchers found that in the long term, moderate warming of 1-2 degrees Celsius would result in a high probability of coral cover declining to less than 10 percent, a number thought to be important for maintaining reef growth.
In the short term, with increasing temperatures as well as local man-made threats like coastal development, pollution, and over-fishing, the study found that corals—tiny animals related to jellyfish—would be over-run by seaweed which would, in effect, suffocate them. In the longer term, interactions among reef organisms would lead to dominance by other groups, including sponges and soft corals known as gorgonians.
The study, now in pre-print online in the journal Ecology, uses a multivariate statistical model and includes quantitative surveys of 46 reef habitats over 10 years of data from 1996-2006.
Researchers delivered a modified RNA that encodes a telomere-extending protein to cultured human cells. Cell proliferation capacity was dramatically increased, yielding large numbers of cells for study.
A new procedure can quickly and efficiently increase the length of human telomeres, the protective caps on the ends of chromosomes that are linked to aging and disease, according to scientists at the Stanford University School of Medicine. Treated cells behave as if they are much younger than untreated cells, multiplying with abandon in the laboratory dish rather than stagnating or dying.
The procedure, which involves the use of a modified type of RNA, will improve the ability of researchers to generate large numbers of cells for study or drug development, the scientists say. Skin cells with telomeres lengthened by the procedure were able to divide up to 40 more times than untreated cells. The research may point to new ways to treat diseases caused by shortened telomeres.
Telomeres are the protective caps on the ends of the strands of DNA called chromosomes, which house our genomes. In young humans, telomeres are about 8,000-10,000 nucleotides long. They shorten with each cell division, however, and when they reach a critical length the cell stops dividing or dies. This internal “clock” makes it difficult to keep most cells growing in a laboratory for more than a few cell doublings.
“Now we have found a way to lengthen human telomeres by as much as 1,000 nucleotides, turning back the internal clock in these cells by the equivalent of many years of human life,” said Helen Blau, PhD, professor of microbiology and immunology at Stanford and director of the university’s Baxter Laboratory for Stem Cell Biology. “This greatly increases the number of cells available for studies such as drug testing or disease modeling.”
“This new approach paves the way toward preventing or treating diseases of aging,” said Blau. “There are also highly debilitating genetic diseases associated with telomere shortening that could benefit from such a potential treatment.” Blau and her colleagues became interested in telomeres when previous work in her lab showed that the muscle stem cells of boys with Duchenne muscular dystrophy had telomeres that were much shorter than those of boys without the disease. This finding not only has implications for understanding how the cells function — or don’t function — in making new muscle, but it also helps explain the limited ability to grow affected cells in the laboratory for study.
Scientists in Italy have engineered a cheap and simple electrochemical sensor that cleans itself when exposed to ultraviolet light. Their system offers a route towards self-cleaning electrodes with myriad environmental and biomedical sensing applications – from detecting pollutants in water to monitoring medications in blood.
Open any book on chemical or biological sensors and you’ll find a lot of content on electrochemical devices. This prevalence is testament to the importance and advantages of electrode-based sensing; and electrodes containing nanomaterials are becoming increasingly popular, owing to their high surface-to-volume ratio, which can improve their sensitivity and lower costs.
However, nanomaterial-based electrodes are very difficult to keep clean, hindering their application in environmental and biomedical sensing. River water, for example, contains species that can foul electrochemical sensors and prevent their reuse. In another example, dopamine – an important neurotransmitter, particularly in Parkinson’s disease – fouls sensors during its electroanalytical detection, preventing reusability.
To solve the fouling problem, Luigi Falciola and his team at the University of Milan have engineered an electrochemical sensor with a photoactive top layer of titania that can be directly cleaned with ultraviolet light and repeatedly reused to detect dopamine. The titania covers a highly ordered distribution of silver nanoparticles (the actual sensing tool), arrayed on a bottom layer of silica.
Self-cleaning surfaces based on titania are an increasingly common part of our everyday lives, from self-cleaning windows, cars and cements to self-sterilising medical devices. These applications all clean using the same basic chemistry: ultraviolet light – from sunlight or an artificial source – induces photocatalysis at a titania coating, which breaks down organic foulants. Falciola’s team have incorporated the same principle in their sensor.
‘There are a few previous examples of self-cleaning electrodes, but our device is simpler and also probably cheaper to make,’ explains Falciola.
By rewriting the DNA of Escherichia coli so that the bacterium requires a synthetic amino acid to produce its essential proteins, two research teams may have paved the way to ensure that genetically modified organisms don’t escape into the environment. The life-or-death dependence of the newly engineered E. coli on synthetic amino acids makes it astronomically difficult for the genetically modified organism to survive outside the laboratory, explains Harvard Medical School’s George M. Church, who led one of the teams reporting the discovery in Nature (2015, DOI: 10.1038/nature14121).
That’s because no pool of synthetic amino acids exists in nature, he explains. A similar strategy was simultaneously published by Farren J. Isaacs and his colleagues at Yale University, also in Nature (2015, DOI: 10.1038/nature14095). The discoveries help construct improved containment barriers for genetically modified bacteria currently used in the biotech-based production of products as diverse as yogurt, propanediol, or insulin, Isaacs says. They also set the stage for expanding the use of genetically modified organisms in applications outside the lab, Isaacs adds. For example, he says, the bacteria could be used as the “basis for designer probiotics for diseases that originate in the gut of our bodies, or for specialized microorganisms that clean up landfills or oil spills.”
“There are all these ideas for using engineered cells [outside the confines of a lab], but the problem is that they’re not contained,” comments Christopher A. Voigt, a synthetic biologist at Massachusetts Institute of Technology. “This is the proof-of-principle work for addressing that problem.” To make the genetic firewall, both teams made changes to E. coli’s genome so that the bacteria’s protein production machinery inserts a nonnatural amino acid when it reads a specific three-base codon. “They’ve extended the genetic code so that it can take a 21st amino acid,” explains Tom Ellis, a synthetic biologist at Imperial College London, who was not involved in the work. The two teams used different synthetic amino acids, but both groups selected mimics of phenylalanine, a bulky, hydrophobic amino acid.
Next, both teams scoured E. coli’s genome for essential proteins that the organism needed to survive. They looked for areas in those proteins where the synthetic amino acids might replace natural amino acids. Although both teams combined computational design and evolutionary biology to select which amino acid to replace in three essential proteins, Church’s team relied more on the former approach and Isaacs’ team on the latter.
Finally, they showed that when the engineered bacteria have access to a pool of the synthetic amino acids, they can build their essential proteins. With no access to the synthetic amino acids, protein production stalls and the bacteria die.
The teams performed extensive tests to see whether the newly engineered bacteria could evolve ways to sidestep the need for synthetic amino acids. Whenever the microbes managed the feat, the researchers tweaked the DNA until the bacteria depended solely on the synthetic amino acids.
Previous strategies for containing genetically modified bacteria seem “naive” in hindsight, Ellis says. These earlier strategies employed kill switches, which are “systems where the organism dies if some compound or environmental cue wasn’t given,” he adds. “Here the kill system is fully embedded in the heart of the bacteria.”
In theory, the strategy could be extended to other genetically modified organisms, such as plants, Voigt says. “It will probably be really hard, but not impossible,” he adds. According to Ellis, the next step is to get the platform working in yeast, which will be “an order of magnitude harder than bacteria.”
Another important step is to improve containment by ensuring that all DNA engineered into the organism relies on the synthetic amino acid, Ellis says. “If you accidentally spill the bacteria into the environment, it’s going to die,” he says. “But that DNA is left behind. The genetically modified genes could be incorporated into other bacteria through horizontal transfer,” he warns. “To alleviate all fears, we need to ensure that all genes you add to an organism—say for making insulin or biofuels—are also behind the genetic firewall and somehow encode the 21st amino acid.”
Almost 100 years ago, physicists Werner Heisenberg, Max Born and Erwin Schrödinger created a new field of physics: quantum mechanics. Objects of the quantum world – according to quantum theory – no longer move along a single well-defined path. Rather, they can simultaneously take different paths and end up at different places at once. Physicists speak of quantum superposition of different paths. At the level of atoms, it looks as if objects indeed obey quantum mechanical laws. Over the years, many experiments have confirmed quantum mechanical predictions. In our macroscopic daily experience, however, we witness a football flying along exactly one path; it never strikes the goal and misses at the same time. Why is that so?
But it could also be that footballs obey completely different rules from those that apply to single atoms. "Let us talk about the macro-realistic view of the world," Alberti explains. "According to this interpretation, the ball always moves on a specific trajectory, independent of our observation, and in contrast to the atom." But which of the two interpretations is correct? Do "large" objects move differently from small ones?
The physicists describe their research in the journal Physical Review X: With two optical tweezers they grabbed a single Caesium atom and pulled it in two opposing directions. In the macro-realist's world the atom would then be at only one of the two final locations. Quantum-mechanically, the atom would instead occupy a superposition of the two positions.
"We have now used indirect measurements to determine the final position of the atom in the most gentle way possible," says PhD student Carsten Robens. Even such an indirect measurement (see figure) significantly modified the result of the experiments. This observation excludes – or, more precisely in Karl Popper's terms, falsifies – the possibility that Caesium atoms follow a macro-realistic theory. Instead, the experimental findings of the Bonn team fit well with an interpretation based on superposition states that get destroyed when the indirect measurement occurs. All that we can do is to accept that the atom has indeed taken different paths at the same time.
"This is not yet a proof that quantum mechanics holds for large objects," cautions Alberti. "The next step is to separate the Caesium atom's two positions by several millimetres. Should we still find the superposition in our experiment, the macro-realistic theory would suffer another setback."
2014 was Earth’s hottest year on record, providing new evidence that people are disrupting the climate by burning fossil fuels that release greenhouse gases into the air, two U.S. government agencies said on Friday. The White House said the studies, by the U.S. space agency NASA and the National Oceanic and Atmospheric Administration (NOAA), showed climate change was happening now and that action was needed to cut rising world greenhouse gas emissions.
The 10 warmest years since records began in the 19th century have all been since 1997, the data showed. Last year was the warmest, ahead of 2010, undermining claims by some skeptics that global warming has stopped in recent years.
Record temperatures in 2014 were spread around the globe, including most of Europe stretching into northern Africa, the western United States, far eastern Russia into western Alaska, parts of interior South America, parts of eastern and western coastal Australia and elsewhere, NASA and NOAA said.
“While the ranking of individual years can be affected by chaotic weather patterns, the long-term trends are attributable to drivers of climate change that right now are dominated by human emissions of greenhouse gases,” said Gavin Schmidt, director of NASA’s Goddard Institute for Space Studies in New York.
“The data shows quite clearly that it’s the greenhouse gas trends that are responsible for the majority of the trends,” he told reporters. Emissions were still rising “so we may anticipate further record highs in the years to come.”
U.N. studies show there already are more extremes of heat and rainfall and project ever more disruptions to food and water supplies. Sea levels are rising, threatening millions of people living near coasts, as ice melts from Greenland to Antarctica.
Next December, about 200 governments will meet in Paris to try to reach a deal to limit global warming, shifting to renewable energies. China and the United States, the top emitters of greenhouse gases, say they are cooperating more to achieve a U.N. accord.
Genomics England begins its 100,000 Genome Project to speed time to diagnosis and inform personalized treatment regimens
The project, titled 100,000 Genomes, is a transformative research effort aimed at finding new ways to identify and care for patients, and could ultimately change how patients are treated in the National Health Service, according to Professor Mark Caulfield, lead scientist of Genomics England. “The overall impact of the work of Genomics England could be to transform the NHS provision of diagnostic tests and then care to a whole range of patients,” Caulfield said. “This could itself have an impact on how services are commissioned, with perhaps greater emphasis on testing of the broader population in order to achieve earlier diagnosis and more effective intervention for patients most at risk from developing very serious illnesses.”
Genomics is the study of genes and how they work. A genome is a complete map of a person’s DNA. It “contains all the information needed to build and maintain the organism,” according to Genetics Home Reference, a service of the U.S. National Library of Medicine. Genomics lends insight into the cause of diseases and how diseases develop in each individual. Currently, medical researchers use genomics in an effort to develop personalized treatments for diseases. Also useful in public health, genomics helps track infectious diseases. It can help in understanding how infections spread and in many cases allow the pinpointing of the source and nature of an outbreak.
Genomics England has contracted Illumina, a global leader in gene sequencing, to sequence and analyze the genomes. After analysis, results will be sent to NHS for review and possible clinical application. Some 75,000 participants are expected to take part in the 100,000 Genome Project, and recruitment will begin in early February 2015. Clinicians will refer eligible patients who wish to be involved in this project to one of 11 designated genomic medical centers. A genome project of this magnitude is not without challenges.
Referring to these inherent challenges, Jim Davies, Chief Technology Officer of Genomics England, said: “The data we need for Genomics England is large and complex: to get to 100,000 genomes we’ll be collecting 10 petabytes of sequence data, and detailed, relevant health data on up to 100,000 people.” Additional challenges include the need for informed patient consent, which means educating people about what a genome is and how learning about it might impact their lives, and ensuring data privacy.
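The scale Davies quotes implies a simple per-participant figure. The division below is a back-of-the-envelope estimate from the article's numbers, not an official Genomics England figure.

```python
# Rough per-participant storage estimate from the figures quoted above:
# ~10 petabytes of sequence data across up to 100,000 people.
PETABYTE = 10**15  # decimal (SI) petabyte

total_bytes = 10 * PETABYTE
participants = 100_000

bytes_per_participant = total_bytes // participants
print(bytes_per_participant / 10**9)  # 100.0 (GB per participant)
```

On the order of 100 GB per participant is roughly consistent with the raw read data produced by deeply sequencing one human genome.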
Pompeii wasn’t the only Roman town destroyed when Mount Vesuvius erupted in 79 C.E. The blast of hot air and rain of volcanic ash also reached nearby Herculaneum (pictured above), where it entombed a library of papyrus scrolls. Unfortunately, it also transformed them from pliable papyrus into little more than blackened, carbonized lumps. Archaeologists have tried several techniques to unroll the scrolls since the library was discovered in the 1750s, but they always ran the risk of destroying them in the process. Now, a new technique using high-energy x-rays offers a nondestructive way of reading these ancient texts. By placing a rolled up scroll in the path of a beam of powerful x-rays produced by a particle accelerator, researchers can measure a key difference between the burned papyrus and the ink on its surface: how fast the x-rays move through each substance. That allows them to differentiate between the scroll and the writing on it and, slowly but surely, reconstruct the text. Although they’ve managed to read only a few complete words so far, the researchers have reconstructed a nearly complete Greek alphabet from the letters inscribed on a still-rolled-up scroll, they report today in Nature Communications. The handwriting style is characteristic of texts written in the middle of the first century B.C.E.; in fact, it looks a lot like the handwriting on a previously unrolled scroll attributed to the philosopher Philodemus, the team says. More studies with even higher energy x-rays are needed to reconstruct the whole text on this and other scrolls, but the technique offers the possibility of reading works that haven’t been seen for nearly 2000 years.
At the SpaceX event held in Seattle, Elon Musk revealed his grand (and expensive) $10 billion plan to build internet connectivity in space. Musk envisions radically changing the way we access the internet. His plan involves putting satellites in space, between which data packets would bounce before being passed down to Earth. Right now, data packets bounce between the various networks via routers.
Some say that Elon Musk’s ambitious project would enable a smartphone to access the internet just as it communicates with GPS satellites. SpaceX will launch its satellites into low orbit so as to reduce communication lag. While geosynchronous communication satellites orbit the Earth at an altitude of 22,000 miles, SpaceX’s satellites would orbit at an altitude of 750 miles.
Once Musk’s system is in place, data packets would simply be sent to space, from where they would bounce about the satellites, and ultimately be sent back to Earth. “The speed of light is 40 percent faster in the vacuum of space than it is for fiber,” says Musk, which is why he believes that his unnamed SpaceX venture is the future of internet connectivity, replacing traditional routers and networks.
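The latency argument can be checked with a back-of-the-envelope calculation using the two altitudes mentioned above. The distances are treated as straight-line paths to a satellite directly overhead, so real delays (ground-station geometry, inter-satellite hops) would be longer; the 40 percent fiber figure is Musk's own.

```python
# One-way signal propagation delay for the orbits mentioned in the article.
C_VACUUM_KM_S = 299_792.458  # speed of light in vacuum
MILE_KM = 1.609344           # international mile in kilometres

def one_way_delay_ms(distance_miles, speed_km_s=C_VACUUM_KM_S):
    """Straight-line propagation delay in milliseconds."""
    return distance_miles * MILE_KM / speed_km_s * 1000.0

print(round(one_way_delay_ms(22_000), 1))  # geosynchronous orbit: ~118.1 ms
print(round(one_way_delay_ms(750), 1))     # SpaceX's low orbit:   ~4.0 ms
```

The roughly 30-fold difference in propagation delay between the two altitudes is the core of the low-orbit argument, even before any fiber-versus-vacuum comparison.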
The project is based out of SpaceX’s new Seattle office. It will initially start out with 60 workers, but Musk predicts that the workforce may grow to over 1,000 in three to four years. Musk wants “the best engineers that either live in Seattle or that want to move to the Seattle area and work on electronics, software, structures, and power systems,” to work with SpaceX.
Political unrest doesn’t just destabilize human governments — it can also hurt wildlife. A new study finds that large mammal populations declined rapidly following the collapse of the Soviet Union. Using annual population estimates from the Russian Federal Agency of Game Mammal Monitoring database, researchers analyzed trends of eight large mammals—roe deer, red deer, reindeer, moose, wild boar, brown bears, lynx, and gray wolves—in Russia from 1981 to 2010, a time period that includes the 1991 collapse of the Soviet Union. The analysis uncovered major changes in population growth: with the exception of wolves, all species experienced a drop in population growth rates immediately following the collapse, and three species—wild boar, moose, and brown bears—exhibited significant reductions in population growth throughout the decade following the collapse, with declines evident in 85% or more of the study regions.
In stark contrast, wolf populations increased by more than 150% between 1992 and 2000. Mammals in politically stable regions of North America and Europe did not experience similar fluctuations over the 30-year period. Although not definitively linked, the changes in Russia’s mammal populations were likely a consequence of lapses in wildlife management that occurred following the political upheaval. As the economy folded and farms were abandoned, people likely resorted to hunting and poaching for food and income. Additionally, a lack of government control measures could have led to the increase in wolf populations, which would have further exacerbated wildlife declines. Although the falling populations can’t be definitively attributed to Russia’s political instability without stronger evidence, the findings do draw attention to the important connection between social welfare and wildlife health, the authors report online this month in Conservation Biology.
Robots are coming. Google has assembled a team of experts in London who are working to "solve intelligence." They make up Google DeepMind, the US tech giant's artificial intelligence (AI) company, which it acquired in 2014. In an interview with MIT Technology Review, published recently, Demis Hassabis, the man in charge of DeepMind, spoke out about some of the company's biggest fears about the future of AI.
Hassabis and his team are creating opportunities to apply AI to Google services. The AI firm’s mission is to teach computers to think like humans, and improved AI could help forge breakthroughs in many of Google’s services. It could enhance YouTube recommendations for users, for example, or make the company’s mobile voice search better.
But it's not just Google product updates that DeepMind's cofounders are thinking about. Worryingly, cofounder Shane Legg thinks the team's advances could be what finishes off the human race. He told the LessWrong blog in an interview: "Eventually, I think human extinction will probably occur, and technology will likely play a part in this." He adds that he thinks AI is the "no. 1 risk for this century". It's ominous stuff.
People like Stephen Hawking and Elon Musk are worried about what might happen as a result of advancements in AI. They're concerned that robots could grow so intelligent that they could independently decide to exterminate humans. And if Hawking and Musk are fearful, you probably should be too.
Hassabis showcased some DeepMind software in a video back in April. In it, a computer learns how to beat Atari video games — it wasn't programmed with any information about how to play, just given the controls and an instinct to win. AI specialist Stuart Russell of the University of California, Berkeley, says people were "shocked".
Google is also concerned about the "other side" of developing computers in this way. That's why it set up an "ethics board". It's tasked with making sure AI technology isn't abused. As Hassabis explains: "It's (AI) something that we or other people at Google need to be cognizant of." Hassabis does concede that "we're still playing Atari games currently" — but as AI moves forward, the fear sets in.
You're probably aware that heart disease and cancer are far and away the leading causes of death in America. But globally the picture is more complicated: The above map shows the leading cause of lost years of life by country. The data comes from the Global Burden of Disease study, whose 2013 installment was released just a few weeks ago. It's worth stressing that "cause of lost years of life" and "cause of death" aren't identical. For example, deaths from preterm births may cause more lost years of life in a country than deaths from heart disease even if heart disease is the leading cause of death. Deaths from preterm births amount to many decades of lost life, whereas heart disease tends to develop much later on.
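The distinction can be made concrete with a toy calculation. The Global Burden of Disease study's actual method uses a standard reference life table and age-specific death counts; the reference life expectancy and death figures below are invented purely for illustration.

```python
# Toy "years of life lost" (YLL) calculation showing why YLL and raw
# death counts can rank causes differently. All numbers are invented.
REFERENCE_LIFE_EXPECTANCY = 86  # stand-in for a standard life-table value

def years_of_life_lost(deaths, mean_age_at_death):
    """Total years lost: each death loses the years short of the reference."""
    return deaths * max(0, REFERENCE_LIFE_EXPECTANCY - mean_age_at_death)

# Heart disease kills more people here, but late in life...
heart_disease = years_of_life_lost(deaths=1_000, mean_age_at_death=75)  # 11,000
# ...while far fewer preterm-birth deaths each lose a whole lifetime.
preterm_birth = years_of_life_lost(deaths=200, mean_age_at_death=0)     # 17,200
print(preterm_birth > heart_disease)  # True
```

With one-fifth the deaths, preterm birth still tops heart disease on the years-lost metric, which is exactly the reversal the article describes.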
But that makes the fact that heart disease is the leading cause of lost life in so many countries all the more striking, and indicative of those countries' successes in reducing childhood mortality. By contrast, in many lower-income countries, the leading cause is something like malaria, diarrhea, preterm birth, HIV/AIDS, or violence, which all typically afflict people earlier in life than heart disease or stroke. We've made considerable progress in fighting childhood mortality across the globe in recent decades, but there's still much work left to be done.