NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1450 news sources:
All my Tweets and Scoop.It! posts sorted and searchable:
You can search through all the articles semantically on my
NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (the funnel symbol at the top right of the screen) to display all the relevant postings sorted by topic.
You can also type your own query:
e.g., you are looking for articles involving "dna" as a keyword
MOST_READ • 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Generating and storing renewable energy, such as solar or wind power, remains a key challenge for a clean-energy economy. When the Joint Center for Artificial Photosynthesis (JCAP) was established at Caltech and its partnering institutions in 2010, the U.S. Department of Energy (DOE) Energy Innovation Hub had one main goal: a cost-effective method of producing fuels using only sunlight, water, and carbon dioxide, mimicking the natural process of photosynthesis in plants and storing energy in the form of chemical fuels for use on demand. Over the past five years, researchers at JCAP have made major advances toward this goal, and they now report the development of the first complete, efficient, safe, integrated solar-driven system for splitting water to create hydrogen fuels.
"This result was a stretch project milestone for the entire five years of JCAP as a whole, and not only have we achieved this goal, we also achieved it on time and on budget," says Caltech's Nate Lewis, George L. Argyros Professor and professor of chemistry, and the JCAP scientific director.
The new solar fuel generation system, or artificial leaf, is described in the August 27 online issue of the journal Energy and Environmental Science. The work was done by researchers in the laboratories of Lewis and Harry Atwater, director of JCAP and Howard Hughes Professor of Applied Physics and Materials Science.
"This accomplishment drew on the knowledge, insights and capabilities of JCAP, which illustrates what can be achieved in a Hub-scale effort by an integrated team," Atwater says. "The device reported here grew out of a multi-year, large-scale effort to define the design and materials components needed for an integrated solar fuels generator."
In May, the Ocean Cleanup project announced that its first deployment would be delivered in the Korea Strait next year. That will pave the way for its ultimate goal of cleaning up the Great Pacific Garbage Patch. With that in mind, a research expedition at the Garbage Patch has just been completed. The concept for the Ocean Cleanup project was conceived by Dutch entrepreneur and inventor Boyan Slat and announced in 2013. Slat realized that the movement of the oceans could be harnessed in order to direct floating plastic waste into the arms of a static collection system.
After a positive feasibility study, a successful crowdfunding campaign and being named a category winner in the 2015 Designs of the Year awards, the Ocean Cleanup project recently set out to gather research in the Pacific. A fleet of 30 vessels, including a 171 ft (52 m) mothership, took part in the month-long voyage, or Mega Expedition, the primary goal of which was to determine just how much plastic is actually floating in the Great Pacific Garbage Patch.
According to the Ocean Cleanup project, this was the largest ocean research expedition in history. A series of measurement techniques were employed to sample the concentration of plastic in the area, including trawls and aerial surveys. It is also said to have been the first time that large pieces of plastic, such as ghost nets and Japanese tsunami debris, have been quantified.
Slat explains that it is not just floating bits of plastic that are a problem, but what happens to those pieces over the long term. "The vast majority of the plastic in the garbage patch is currently locked up in large pieces of debris, but UV light is breaking it down into much more dangerous microplastics, vastly increasing the amount of microplastics over the next few decades if we don’t clean it up," he says. "It really is a ticking time bomb."
The research samples collected during the expedition still have to be analyzed, but preliminary findings indicate a "higher-than-expected volume" of plastic objects found at the Pacific site.
The cleanup proper of the Great Pacific Garbage Patch is expected to begin in 2020.
In the future, incense might need to carry a health warning, just like tobacco. That’s the conclusion of researchers who for the first time have compared the effects of burning incense indoors to inhaling tobacco smoke. Previous research has already shown how incense smoke can be harmful to a person’s health, but these new findings suggest that it’s worse than cigarettes by several measurements – a result that may alarm some in Asian countries, where incense burning is a common practice in the home and a traditional ritual in many temples.
“Clearly, there needs to be greater awareness and management of the health risks associated with burning incense in indoor environments,” said Rong Zhou of the South China University of Technology, in a statement to the press.
The researchers tested two types of incense against cigarette smoke to see their effects on bacteria and the ovary cells of Chinese hamsters. Both the incense products contained the common ingredients agarwood and sandalwood, which are used in incense for their fragrances.
The findings, published in Environmental Chemistry Letters, showed that incense smoke is mutagenic, which means it can cause mutations to genetic material, primarily DNA. Compared to the cigarette smoke, the incense products were found to be more cytotoxic (toxic to cells) and genotoxic (toxic to DNA). Of the 64 compounds identified in the incense smoke, two were singled out as highly toxic.
Obviously none of this sounds very good, and for people frequently exposed to incense smoke in indoor environments, hopefully it serves as a wake-up call: mutagens, genotoxins, and cytotoxins are all linked to the development of cancers.
Seas around the world have risen an average of nearly 3 inches since 1992, with some locations rising more than 9 inches due to natural variation, according to the latest satellite measurements from NASA and its partners. An intensive research effort now underway, aided by NASA observations and analysis, points to an unavoidable rise of several feet in the future.
Members of NASA’s new interdisciplinary Sea Level Change Team will discuss recent findings and new agency research efforts during a media teleconference today at 12:30 p.m. EDT. NASA will stream the teleconference live online.
The question scientists are grappling with is how quickly seas will rise.
“Given what we know now about how the ocean expands as it warms and how ice sheets and glaciers are adding water to the seas, it’s pretty certain we are locked into at least 3 feet of sea level rise, and probably more,” said Steve Nerem of the University of Colorado, Boulder, and lead of the Sea Level Change Team. “But we don't know whether it will happen within a century or somewhat longer.”
Team scientists will discuss a new visualization based on 23 years of sea level data – the entire record of available satellite data – which reveals changes are anything but uniform around the globe. The record is based on data from three consecutive satellite missions, the first a collaboration between NASA and the French space agency, Centre National d'Études Spatiales (CNES), launched in 1992. The next in the series is Jason-3, led by the National Oceanic and Atmospheric Administration (NOAA) with participation by NASA, CNES and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT).
Spookiness, it seems, is here to stay. Quantum theory has been put to its most stringent “loophole-free” test yet, and it has come out victorious, ruling out more common-sense views of reality (well, mostly). Many thanks to Matt Leifer for bringing this experiment – by a collaboration of researchers in the Netherlands, Spain, and the UK – to my attention (arXiv:1508.05949).
All prior tests have had loopholes, and to get a truly definitive result, these need to be closed. One such loophole is the “detection loophole”. In many Bell tests, experimenters entangle photons and then measure their properties. The trouble is that photons zip about quickly and often simply escape from the experiment before being detected and measured. Physicists can lose as many as 80 per cent of the photons in their test. That means that experimenters have to make a ‘fair sampling’ assumption that the ones they *do* detect are representative of the ones that have gone missing. For the conclusions to be watertight, however, you really want to keep track of all the subjects in your test.
The authors of the recent test state: “Our experiment realizes the first Bell test that simultaneously addresses both the detection loophole and the locality loophole. Being free of the experimental loopholes, the setup can test local realist theories of nature without introducing extra assumptions such as fair-sampling, a limit on (sub-)luminal communication or the absence of memory in the setup. Our observation of a loophole-free Bell inequality violation thus rules out all local realist theories that accept that the number generators timely produce a free random bit and that the outputs are final once recorded in the electronics. This result places the strongest restrictions on local realistic theories of nature to date.”
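As a quick illustration of what such a test probes (a textbook sketch, not the authors' own analysis), the CHSH form of Bell's inequality bounds every local realist theory at |S| ≤ 2, while quantum mechanics predicts values up to 2√2 for entangled pairs. Using the standard singlet-state correlation E(x, y) = −cos(x − y) and the conventionally optimal measurement angles:

```python
import math

def chsh(a, a2, b, b2):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b'),
    using the singlet-state quantum correlation E(x, y) = -cos(x - y)."""
    E = lambda x, y: -math.cos(x - y)
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Local realist (hidden-variable) theories obey |S| <= 2; quantum mechanics
# reaches 2*sqrt(2) ~ 2.828 at these conventionally optimal angles.
S = chsh(0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(abs(S))  # 2.828..., exceeding the classical bound of 2
```

A measured |S| above 2, with the loopholes closed, is exactly the kind of result that rules out local realism.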
It’s a new day for RNA. In a study published in Cell Reports on Aug 18, Michael Werner, sixth-year graduate student in Cell and Molecular Biology, and Alex Ruthenburg, PhD, Neubauer Family Foundation Assistant Professor of Molecular Genetics and Cell Biology, detail their discovery of a new class of RNA molecule that could perhaps be considered the “dark matter” of the genome. They identified thousands of long noncoding RNAs that are physically attached to DNA (quite literally coating the genome), which may play important but yet unidentified roles in gene regulation.
At some point in high school and college introductory biology classes you probably learned the “Central Dogma.” It posits that in all organisms, genetic information is coded within DNA, which is converted to a ‘messenger’ molecule called RNA, which is then converted into proteins – and it is proteins that perform the various functions of the cell as molecular machines. Advances in next-generation sequencing technologies during the last decade have revealed that this is only part of the story, however.
It turns out that only ~1.5 percent of our genome contains the information to make proteins. Most of the DNA in our genome is processed into RNA ‘transcripts’ that don’t code for proteins – referred to as noncoding RNA. Some have even been shown to perform functions in the cell as RNA molecules, without the need to be turned into a protein.
Now, together with Alex Ruthenburg, Werner discovered a class of noncoding RNA that establishes a new paradigm for how RNA acts inside cells. In a recent Cell Reports paper, the two scientists show that the majority of long noncoding RNA molecules are actually associated with DNA, as opposed to messenger RNAs that are loosely dispersed throughout the nucleus.
Remarkably, they identified several thousand RNAs that are actually physically tethered to DNA and coat the human genome, which they called chromatin-enriched RNAs (cheRNAs). The discovery of these RNAs was possible through biochemical enrichment of the genome, to the exclusion of other parts of the cell that predominately contained messenger RNA. Although they didn't intend to find these cheRNA molecules, they decided to see if there was anything else they could learn about them. To their excitement and considerable surprise, they found tantalizing hints that cheRNAs are involved in regulating the expression of nearby genes. The sheer number of these RNAs suggests that they could be a relatively common way to control genes throughout the human genome, possibly contributing to the complexity of tissues seen across our bodies.
Physicists have created a so-called magnetic wormhole that transports a magnetic field from one point to the other without being detected.
Ripped from the pages of a sci-fi novel, physicists have crafted a wormhole that tunnels a magnetic field through space. "This device can transmit the magnetic field from one point in space to another point, through a path that is magnetically invisible," said study co-author Jordi Prat-Camps, a doctoral candidate in physics at the Autonomous University of Barcelona in Spain. "From a magnetic point of view, this device acts like a wormhole, as if the magnetic field was transferred through an extra special dimension."
The idea of a wormhole comes from Albert Einstein's theories. In 1935, Einstein and colleague Nathan Rosen realized that the general theory of relativity allowed for the existence of bridges that could link two different points in space-time. Theoretically these Einstein-Rosen bridges, or wormholes, could allow something to tunnel instantly between great distances (though the tunnels in this theory are extremely tiny, so ordinarily wouldn't fit a space traveler). So far, no one has found evidence that space-time wormholes actually exist.
The new wormhole isn't a space-time wormhole per se, but is instead a realization of a futuristic "invisibility cloak" first proposed in 2007 in the journal Physical Review Letters. This type of wormhole would hide electromagnetic waves from view from the outside. The trouble was, making the method work for light required materials that are extremely impractical and difficult to work with, Prat-Camps said.
But it turned out the materials needed to make a magnetic wormhole already exist and are much simpler to come by. In particular, superconductors, which can carry high currents of charged particles, expel magnetic field lines from their interiors, essentially bending or distorting those lines. This allows the magnetic field inside a superconductor to behave differently from the field in the surrounding 3D environment, which is the first step in concealing the disturbance in a magnetic field.
So the team designed a three-layer object, consisting of two concentric spheres with an interior spiral-cylinder. The interior layer essentially transmitted a magnetic field from one end to the other, while the other two layers acted to conceal the field's existence.
The inner cylinder was made of a ferromagnetic mu-metal. Ferromagnetic materials exhibit the strongest form of magnetism, while mu-metals are highly permeable and are often used for shielding electronic devices.
A thin shell made up of a high-temperature superconducting material called yttrium barium copper oxide lined the inner cylinder, bending the magnetic field that traveled through the interior. The final shell was made of another mu-metal, but composed of 150 pieces cut and placed to perfectly cancel out the bending of the magnetic field by the superconducting shell. The whole device was placed in a liquid nitrogen bath in order to work. Normally, magnetic field lines radiate out from a certain location and decay with distance from their source, but the presence of the magnetic field should be detectable from points all around it. However, the new magnetic wormhole funnels the magnetic field from one side of the cylinder to another so that it is "invisible" while in transit, seeming to pop out of nowhere on the exit side of the tube, the researchers report today (Aug. 20, 2015) in the journal Scientific Reports.
No one really knows how many "things" there are deployed today that have IoT characteristics. IDC's 2013 estimate was about 9.1 billion, growing to about 28 billion by 2020 and over 50 billion by 2025. You can get pretty much any other number you want, but all the estimates are very large. So what are all these IoT things doing and why are they there? Here's our attempt to map out the IoT landscape.
There are a whole lot of possible organizational approaches to the constituent parts of IoT. One can use a "halo" approach, looking at how IoT principles will be applied in widening rings: individual people; their surroundings (vehicles and homes); the organization of those surroundings (towns, cities, and the highways and other transit systems that connect them); the range of social activities that go on in those surroundings (essentially commerce, but also travel, hospitality, entertainment and leisure); and finally the underpinnings of those activities ("industrial", including agriculture, energy, and transport and logistics).
This is not an exhaustive taxonomy (excluded are all military and some law enforcement specific uses) or even the best way to organize things, but it’s a useful start and has been helpful in explaining the opportunity to the businesses we advise.
Since the completion of the Human Genome Project in 2001, technological advances have made sequencing genomes much easier, quicker and cheaper, fueling an explosion in sequencing projects. Today, genomics is well into the era of ‘big data’, with genomics datasets often containing hundreds of terabytes (10^14 bytes) of information.
The rise of big genomic data offers many scientific opportunities, but also creates new problems, as Jan Korbel, Group Leader in the Genome Biology Unit EMBL Heidelberg, describes in a new commentary paper authored with an international team of scientists and published today in Nature.
Korbel’s research focuses on genetic variation, especially genetic changes leading to cancer, and relies on computational and experimental techniques. While the majority of current cancer genetic studies assess the 1% of the genome comprising genes, a main research interest of the Korbel group is in studying genetic alterations within ‘intergenic’ regions that drive cancer. As this approach looks at much more of the genome than gene-focused studies, it requires analysis of larger amounts of data. This challenge is exemplified via the Pan-Cancer Analysis of Whole Genomes (PCAWG) project, co-led by Korbel, which brings together nearly 1 petabyte (10^15 bytes) of genome sequencing data from more than 2000 cancer patients.
The problem is not a shortage of data but accessing and analysing it. Genome datasets from cancer patients are typically stored in so-called ‘controlled access’ data archives, such as the European Genome-phenome Archive (EGA). These repositories, however, are ‘static’, says Korbel, meaning that the datasets need to be downloaded to a researcher’s institution before they can be further analysed or integrated with other types of data to address biomedically relevant research questions. “With massive datasets, this can take many months and may be unfeasible altogether depending on the institution’s network bandwidth and computational processing capacities,” says Korbel. “It’s a severe limitation for cancer research, blocking scientists from replicating and building on prior work.”
With data stored in one of the various commercial cloud services on offer from companies such as Amazon Web Services, or on academic community clouds, researchers can analyse vast datasets without first downloading them to their institutions, saving time and money that would otherwise need to be spent on maintaining them locally. Cloud computing also allows researchers to draw on the processing power of distributed computers to significantly speed up analysis without purchasing new equipment for computationally laborious tasks. A large portion of the data from PCAWG, for example, will be analysed through cloud computing using both academic community and commercial cloud providers, thanks to new computational frameworks currently being built.
One concern about using cloud computing revolves around the privacy of people who have supplied genetic samples for studies. However, cloud services are now typically as secure as regular institutional data centres, which has diminished this worry: earlier this year, the US National Institutes of Health lifted a 2007 ban on uploading their genomic data into cloud storage. Korbel predicts that the coming months and years will see a big upswing in the use of cloud computing for genomics research, with academic cloud services, such as the EMBL-EBI Embassy Cloud, and commercial cloud providers including Amazon becoming a crucial component of the infrastructure for pursuing research in human genetics.
Yet there remain issues to resolve. One is who should pay for cloud services. Korbel and colleagues urge funding agencies to take on this responsibility given the central role cloud services are predicted to play in future research. Another issue relates to the differing privacy, ethical and normative policies and regulations in Europe, the US, and elsewhere. Some European countries may prefer that patient data remain within their jurisdiction so that they fall under European privacy laws, and not US laws, which apply once a US-based cloud provider is used. Normative and bioethical aspects of patient genome analysis, including in the context of cloud computing, are another specific focus of Korbel’s research, which is being pursued via an inter-disciplinary collaboration with Fruzsina Molnár-Gábor from Heidelberg University faculty of law in a project funded by the Heidelberg Academy of Sciences and Humanities.
Plant species identification based on the morphological features of plant parts is a well-established science in botany. However, species identification from seeds has largely been unexplored, despite the fact that the seeds contain all of the genetic information that distinguishes one plant from another. Using seeds of genus Datura plants, a group of scientists now shows that the mass spectrum-derived chemical fingerprints for seeds of the same species are similar. On the other hand, seeds from different species within the same genus display distinct chemical signatures, even though they may contain similar characteristic biomarkers.
The intraspecies chemical signature similarities on the one hand, and interspecies fingerprint differences on the other, can be processed by multivariate statistical analysis methods to enable rapid species-level identification and differentiation. The chemical fingerprints can be acquired rapidly and in a high-throughput manner by direct analysis in real time mass spectrometry (DART-MS) analysis of the seeds in their native form, without use of a solvent extract. Importantly, knowledge of the identity of the detected molecules is not required for species level identification.
However, confirmation of the presence within the seeds of various characteristic tropane and other alkaloids, including atropine, scopolamine, scopoline, tropine, tropinone, and tyramine, was accomplished by comparing the in-source collision-induced dissociation (CID) fragmentation patterns of authentic standards to the fragmentation patterns observed in the seeds when analyzed under similar in-source CID conditions. The advantages, applications, and implications of the chemometric processing of DART-MS derived seed chemical signatures for species-level identification and differentiation are discussed in the paper.
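The matching idea can be sketched in miniature. The toy example below is not the authors' pipeline: the species names are real Datura species, but the intensity vectors are invented stand-ins for binned m/z fingerprints (real spectra have thousands of bins). It compares a query spectrum against reference fingerprints by cosine similarity, a common first step in chemometric matching:

```python
import math

def cosine(u, v):
    """Cosine similarity between two intensity vectors (binned m/z spectra)."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Hypothetical reference fingerprints: relative intensity per m/z bin.
library = {
    "D. stramonium": [0.9, 0.1, 0.4, 0.0, 0.2],
    "D. inoxia":     [0.2, 0.8, 0.1, 0.5, 0.0],
}

def classify(spectrum):
    """Nearest-library-match: assign the species whose reference
    fingerprint is most similar to the query spectrum."""
    return max(library, key=lambda sp: cosine(spectrum, library[sp]))

query = [0.85, 0.15, 0.35, 0.05, 0.25]  # resembles the D. stramonium profile
print(classify(query))  # D. stramonium
```

Intraspecies spectra score high against their own reference and low against other species, which is what lets multivariate methods separate them without identifying any individual molecule.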
We all have been wondering for a long time if there is life beyond Earth. Mars was one of the first places that we hoped to find it.
Once we had great expectations that intelligent Martians might live in great cities. But after various missions to the Red Planet, these ideas have given way to the thought that microbial organisms may still lie deep beneath Mars' surface. Let's look at humanity's attempts to look for life on Mars through the years and see how likely life on the Red Planet actually is.
Quantum key distribution is regularly touted as the encryption of the future. While the keys are exchanged on an insecure channel, the laws of physics provide a guarantee that two parties can exchange a secret key and know if they're being overheard. This unencrypted-but-secure form of key exchange circumvents one of the potential shortcomings of some forms of public key systems.
However, quantum key distribution (QKD) has one big downside: the two parties need to have a direct link to each other. So, for instance, banks in and around Geneva use dedicated fiber links to perform QKD, but they can only do this because the link distance is less than 100km. These fixed and short links are an expensive solution. A more flexible solution is required if QKD is going to be used for more general encryption purposes.
A group of Italian researchers have demonstrated the possibility of QKD via a satellite, which in principle (but not in practice) means that any two parties with a view of a satellite can exchange keys.
QKD is based on, essentially, the fact that once you measure the state of a photon, the photon is gone—you need to absorb the photon with a detector to measure its state. To take a particular example, we have Alice and Bob who want to communicate without letting the nefarious Eve into the picture. They begin by generating a secret key, through the laws of quantum physics, with which to encode their future communications.
Alice generates two lists of random ones and zeros. The first list contains the bit values, and the second list is used to set the basis (think of this as the orientation of the measurement system) of a string of single photons. An important point is that these two basis sets are not orthogonal to each other. A common example is to choose vertical and horizontal polarization for one basis and the two diagonal polarizations for the second. Between the two lists, the polarization of each photon is set to one of four possible states.
These single photons are sent to Bob, who will measure them. But quantum measurement doesn't allow you to ask a photon "What polarization are you?" Instead you end up asking questions like "Are you vertically or horizontally polarized?" So Bob randomly chooses between the two basis sets: sometimes he asks the photons which diagonal polarization they have, and other times he asks whether they are vertically or horizontally polarized.
Now, if Alice sends a vertically polarized photon to Bob who asks which diagonal polarization it has, the photon will end up randomly choosing 45 degrees or 135 degrees. However, if Alice chooses to send a horizontally polarized photon and Bob asks the photon if it is horizontally or vertically polarized, he will always get horizontally polarized. The key point is that the measurement basis choice determines how the photon must be described. If Bob and Alice make the same choice, the photon is in one state or the other. If their choices are different, the photon, according to Bob, is in a superposition of two states. The upshot is that, in the first case, the measurement process is deterministic: Alice and Bob can know from their instrument settings exactly which of Bob's detectors must click. In the second case, however, the measurement process forces the photon to randomly choose from two states: neither Bob nor Alice can predict the outcome of the measurement. It is this uncertainty, and how intervening measurements by Eve modify that uncertainty, that gives QKD its security.
After all the photons are sent, Bob has a string of random numbers, but he has no way of knowing which ones to choose to make up a key. To create a common secret key, Bob and Alice publicly announce their choice of basis set for each bit, but the bit values themselves are kept secret. Alice and Bob then look for the positions in the string where they made the same basis choice and use those bits to generate the common key.
The next step is to reveal Eve. To do this, Alice announces a section of the secret key. How does this reveal Eve? Let's suppose that Eve is intercepting the photons. She randomly chooses a basis set and measures the photons, but Eve doesn't know which basis set Alice chose. When Eve tries to recreate the photon state that Alice sent, she gets it wrong half the time. So, instead of Alice and Bob finding that they get the same result all the time, the number drops to one half. Eve can, of course, be subtler and only intercept every second photon, bringing the statistic closer to full agreement. But, the fewer photons she intercepts, the less information she has.
When Alice and Bob compare statistics for the partial key, they not only know that Eve is there, but how much information Eve is getting. If Eve was not present, they can throw away the revealed section of key and continue to generate more key digits. However, even if Eve is listening in, they can determine if they wish to go on, based on knowing how much of the key Eve is intercepting.
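The prepare-and-measure protocol described above (the BB84 scheme) can be simulated classically to see where these numbers come from. This is a toy sketch, not the satellite experiment's actual protocol stack; the roughly 25% error rate under a full intercept-resend attack follows from Eve guessing the wrong basis half the time and thereby randomizing half of those outcomes:

```python
import random

def bb84(n, eavesdrop=False, seed=1):
    """Toy BB84: Alice encodes random bits in random bases; Bob measures
    in random bases; positions where bases match form the sifted key."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]

    bob_bits = []
    for i in range(n):
        bit, basis = alice_bits[i], alice_bases[i]
        if eavesdrop:                       # intercept-resend attack
            e_basis = rng.randint(0, 1)
            if e_basis != basis:            # wrong basis: Eve's outcome is random
                bit = rng.randint(0, 1)
            basis = e_basis                 # the re-sent photon carries Eve's basis
        if bob_bases[i] == basis:
            bob_bits.append(bit)            # matching basis: deterministic outcome
        else:
            bob_bits.append(rng.randint(0, 1))  # mismatched basis: random outcome

    # Sifting: Alice and Bob publicly compare bases and keep matching positions
    kept = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in kept)
    return len(kept), errors

sifted, errors = bb84(10000)
print(errors)                        # 0: with no eavesdropper, the sifted keys agree exactly
sifted, errors = bb84(10000, eavesdrop=True)
print(round(errors / sifted, 2))     # ~0.25: Eve corrupts about a quarter of the sifted bits
```

Comparing a revealed section of the sifted key and seeing an error rate near 25% (rather than 0%) is exactly how Alice and Bob detect Eve in the description above.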
Humans abound with remarkable skills: we write novels, build bridges, compose symphonies, and even navigate Boston traffic. But despite our mental prowess, we share a surprising deficit: our working memory can track only four items at one time.
“Would you buy a computer with a RAM capacity of 4?” asks David Somers, professor and chair of the Department of Psychological & Brain Sciences. “Not 4 MB or GB or 4K—just 4. So how the heck do humans do all this stuff?”
“There’s so much information out there, and our brains are very limited in what we’re able to process,” adds Samantha Michalka, a postdoctoral fellow at the Center for Computational Neuroscience & Neural Technology. “We desperately need attention to function in the world.”
Michalka is lead author and Somers is senior author of a new study that sheds light on this enduring mystery of neuroscience: how humans achieve so much with such limited attention. Funded by the National Science Foundation (NSF) and the National Institutes of Health (NIH), the work identifies a previously unknown attention network in the brain. It also reveals that our working memory for space and time can recruit our extraordinary visual and auditory processing networks when needed. The research appeared on August 19, 2015, in the journal Neuron.
Prior to this work, scientists believed that visual information from the eyes and auditory information from the ears merged before reaching the frontal lobes, where abstract thought occurs. The team of BU scientists, which also included Auditory Neuroscience Laboratory Director Barbara Shinn-Cunningham, performed functional MRI experiments to test the conventional wisdom. The experiments revealed that what was thought to be one large attention network in the frontal lobe is actually two interleaved attention networks, one supporting vision and one supporting hearing. “So instead of talking about a single attention network,” says Somers, “we now need to talk about a visual attention network and an auditory attention network that work together.”
The use of sunlight as an energy source is achieved in a number of ways, from conversion to electricity via photovoltaic (PV) panels, to concentrated heat driving steam turbines, to hydrogen generation via artificial photosynthesis. Unfortunately, much of the light energy in PV and photosynthesis systems is lost as heat due to the thermodynamic inefficiencies inherent in converting the incoming energy from one form to another. Now scientists working at the University of Bayreuth claim to have created a super-efficient light-energy transport conduit that exhibits almost zero loss, and shows promise as the missing link in the sunlight-to-energy conversion process.
With specifically generated nanofibers at its core, the system is reported to be the very first directed energy transport system shown to move intact light energy over a distance of several micrometers, and at room temperature. And, according to the researchers, the transfer of energy from block to block along the nanofibers is only adequately explained at the quantum level, with coherence effects driving the energy along the individual fibers.
Quantum coherence is the phenomenon where subatomic waves are closely interlinked via shared electromagnetic fields. As they travel in phase together, these quantum coherent waves start to act as one very large synchronous wave propagating across a medium. In the case of the University of Bayreuth device, these coherent waves of energy travel across the molecular building blocks from which the nanofibers are made, passing from block to block and moving as one continuous energy wave would in unbound free space.
It is this effect that the scientists say drives the super-low energy loss of their device, and they have confirmed the observation using a variety of microscopy techniques to visualize the conveyance of excitation energy along the nanofibers. The nanofibers themselves are specifically prepared supramolecular strands, manufactured from a chemically bespoke combination of carbonyl-bridged (molecularly connected) triarylamine (an organic compound) and three naphthalimide bithiophene chromophores (copolymer molecules that absorb and reflect specific wavelengths of light). When brought together under particular conditions, these components spontaneously self-assemble into nanofibers 4 micrometers long and 0.005 micrometers in diameter, made up of more than 10,000 identical chemical building blocks.
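As a back-of-the-envelope check on the reported geometry (our own illustrative arithmetic, not part of the study), the fiber length divided by the number of building blocks gives the rough spacing per block, and length over diameter gives the aspect ratio:

```python
# Rough geometry check for the reported nanofibers (illustrative only).
fiber_length_nm = 4_000    # 4 micrometers
fiber_diameter_nm = 5      # 0.005 micrometers
num_blocks = 10_000        # "more than 10,000 identical chemical building blocks"

spacing_per_block_nm = fiber_length_nm / num_blocks
aspect_ratio = fiber_length_nm / fiber_diameter_nm

print(f"~{spacing_per_block_nm:.1f} nm per building block")  # ~0.4 nm
print(f"aspect ratio ~{aspect_ratio:.0f}:1")                 # ~800:1
```

The sub-nanometer block spacing is consistent with the blocks being individual molecules stacked along the strand.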
"These highly promising nanostructures demonstrate that carefully tailoring materials for the efficient transport of light energy is an emerging research area," said Dr. Richard Hildner, an experimental physicist at the University of Bayreuth. The results of this research were recently published in the journal Nature.
Glaciers once covered most of Earth's surface and reflected the sun's heat back into space.
New details of a nightmare period on Earth with surface conditions as frigid as present-day central Antarctica at the equator have been revealed thanks to the publication of a study of ancient glacier water. The research, by an international team led by Daniel Herwartz, is published in the journal Proceedings of the National Academy of Sciences and shows that even tropical regions were once covered in snow and ice.
The idea of a deep-frozen world, “snowball Earth”, has captured the imagination since first proposed in the 1990s. On several occasions in history, long before animals evolved, apparently synchronous ice sheets existed on all the continents. However, much like falling into a crevasse on a glacier, it’s easy enough to enter such an ice age, but very difficult to escape.
The snowball Earth theory came from climate modelers who found that low carbon dioxide levels could trigger the growth of ice sheets. The whole planet would become glaciated, and its mean temperature would drop to as low as -45°C. As ice is much more reflective than the sea or bare land, the Earth at that point would have been bouncing nearly all of the sun’s radiation back into space. So how could the planet ever emerge from such an ice age?
Volcanoes had to be the answer. Only they could emit enough carbon dioxide into the atmosphere to overcome the effects of Earth’s cool reflective surface. But climate models still found it difficult to plausibly describe how the Earth could have shed its glaciers.
We now have the first full explanation for how the best-known snowball event, the Marinoan, finished 635 million years ago with a several hundred meter rise in sea level. The study is the result of work by an international team of scientists. The results are published in the journal Nature Geoscience.
The team of researchers found slight wobbles of the Earth’s spin axis caused differences in the heat received at different places on the planet’s surface. These changes were small, but enough over thousands of years to cause a change in the places where snow accumulated or melted, leading the glaciers to advance and retreat.
The Earth was left looking just like the McMurdo Dry Valleys in Antarctica – arid, with lots of bare ground, but also containing glaciers up to 3 km thick. Such an Earth would have been darker than previously envisaged, absorbing more of the sun’s radiation, which makes it easier to see how the escape from the snowball happened.
Today, to find exposed rocks that can tell us about the carbon dioxide content of the atmosphere in the Marinoan, you have to go to the Norwegian Arctic island of Svalbard. In 2009 snowball theory was vindicated after we found the telltale signal of high carbon dioxide levels in Svalbard limestone that formed during the ice age.
Immediately underneath the Marinoan deposits are some beds of rocks deposited at very regular intervals – so regular that they must have formed over thousands of years, influenced by wobbles in the Earth’s orbit. Since Svalbard was near the Equator at the time, the most likely type of wobble is caused by the Earth slowly shifting (“precessing”) its axis on cycles of approximately 20,000 years.
Researchers also found evidence of the same process in the Snowball deposits themselves. Fluctuations in ice in relation to the Earth’s orbit are a feature of our modern ice ages over the past million years, but had not been found in such an old glaciation.
For a long time the Earth was too cold for glaciers to erode and deposit sediment – the main snowball period. The sediments then show several advances and retreats of the ice. When the glaciers retreated, they left behind a patchwork of environments: shallow and deep lakes, river channels, and floodplains that appeared as arid as anything known in Earth’s history.
Carbon dioxide appears to have remained at the same high level throughout the deposition of these sediments. Since it takes millions of years for CO2 to build up in the atmosphere, this implies the sediment layers must have formed quickly – on the order of 100,000 years. All this fits with the idea of 20,000 year precession cycles.
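As a quick illustration of the timing argument above (our own arithmetic, not from the study), a roughly 100,000-year depositional window divided into roughly 20,000-year precession cycles leaves room for about five glacial advance-and-retreat cycles:

```python
# How many ~20,000-year precession cycles fit in the ~100,000-year
# deposition window described above (illustrative arithmetic only).
window_years = 100_000
precession_cycle_years = 20_000

cycles = window_years // precession_cycle_years
print(cycles)  # 5
```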
So after several million years of being frozen, this icy Earth with a hot atmosphere rich in carbon dioxide had reached a Goldilocks zone – too warm to stay completely frozen, too cold to lose its ice. This transitional period lasted around 100,000 years before the glaciers fully melted and present-day Svalbard was flooded by the sea.
Knut the polar bear may have met an early end, but he wasn’t forgotten. The cute polar bear cub, born at Berlin Zoo in 2006 and controversially reared by zookeepers, drowned as an adult after experiencing epileptic seizures. Now, the condition responsible for his death has been identified.
The cause of the seizures was unknown since no bacteria, virus or parasite could be found to explain the underlying brain inflammation. The mystery was finally solved by Harald Pruess from the German Centre for Neurodegenerative Diseases in Berlin and his team, who normally study dementia in people.
They analysed samples of Knut’s cerebrospinal fluid, which bathes the brain and spinal cord, and found high levels of an antibody known to attack a glutamate receptor in the brain. In humans, this is a sign of a disease called autoimmune encephalitis. Knut’s case is the first ever reported in a non-human.
Those who reject the 97% expert consensus on human-caused global warming often invoke Galileo as an example of a scientific minority overturning the majority view. In reality, climate contrarians have almost nothing in common with Galileo, whose conclusions were based on empirical scientific evidence and supported by many scientific contemporaries, and who was persecuted by the religious-political establishment. Nevertheless, there’s a slim chance that the 2–3% minority is correct and the 97% climate consensus is wrong.
To evaluate that possibility, a new paper published in the journal Theoretical and Applied Climatology examines a selection of contrarian climate science research and attempts to replicate their results. The idea is that accurate scientific research should be replicable, and through replication we can also identify any methodological flaws in that research. The study also seeks to answer the question: why do these contrarian papers come to a different conclusion than 97% of the climate science literature?
This new study was authored by Rasmus Benestad, myself (Dana Nuccitelli), Stephan Lewandowsky, Katharine Hayhoe, Hans Olav Hygen, Rob van Dorland, and John Cook. Benestad (who did the lion’s share of the work for this paper) created a tool using the R programming language to replicate the results and methods used in a number of frequently-referenced research papers that reject the expert consensus on human-caused global warming. In using this tool, we discovered some common themes among the contrarian research papers.
Cherry picking was the most common characteristic they shared. We found that many contrarian research papers omitted important contextual information or ignored key data that did not fit the research conclusions.
Vital sign monitors in hospitals are bulky, restrictive and capture limited information. An engineering professor at Johns Hopkins has designed a battery-powered, hand-held, 3-D printed device that acts as a “check-engine light” for people. The device uses mouthpiece and thumb pad sensors to quickly test a patient’s blood pressure, breathing, blood oxygen, heart rate and heartbeat pattern.
In a study published in the September issue of the Annals of Biomedical Engineering, the MouthLab prototype’s measurements of heart rate, blood pressure, temperature, breathing rate and blood oxygen from 52 volunteers compared well with vital signs measured by standard hospital monitors. The device also takes a basic electrocardiogram.
“We see it as a ‘check-engine’ light for humans,” says the device’s lead engineer, Gene Fridman, Ph.D., an assistant professor of biomedical engineering and of otolaryngology–head and neck surgery at Johns Hopkins. “It can be used by people without special training at home or in the field.” He expects the device may be able to detect early signs of medical emergencies, such as heart attacks, or avoid unnecessary ambulance trips and emergency room visits when a patient’s vital signs are good.
Because it monitors vital signs by mouth, future versions of the device will be able to detect chemical cues in blood, saliva and breath that act as markers for serious health conditions. “We envision the detection of a wide range of disorders,” Fridman says, “from blood glucose levels for diabetics, to kidney failure, to oral, lung and breast cancers.”
The MouthLab prototype consists of a small, flexible mouthpiece like those that scuba divers use, connected to a hand-held unit about the size of a telephone receiver. The mouthpiece holds a temperature sensor and a blood volume sensor. The thumb pad on the hand-held unit has a miniaturized pulse oximeter — a smaller version of the finger-gripping device used in hospitals, which uses beams of light to measure blood oxygen levels. Other sensors measure breathing from the nose and mouth.
MouthLab also has three electrodes for ECGs — one on the thumb pad, one on the upper lip of the mouthpiece and one on the lower lip — that work about as well as the chest and ankle electrodes used on basic ECG equipment in many ambulances or clinics. That ECG signal is the basis for MouthLab’s novel way of recording blood pressure. When the signal shows the heart is contracting, the device optically measures changes in the volume of blood reaching the thumb and upper lip. Unique software converts the blood flow data into systolic and diastolic pressure readings. The study found that MouthLab blood pressure readings effectively match those taken with standard, arm-squeezing cuffs.
The hand unit relays data by Wi-Fi to a nearby laptop or smart device, where graphs display real-time results. The next generation of the device will display its own data readouts with no need for a laptop, says Fridman. Ultimately, he explains, patients will be able to send results to their doctors via cellphone, and an app will let physicians add them to patients’ electronic medical records.
A 3-D printer made the parts for the prototype, “which looks a lot like a hand-held taser,” Fridman says. “Our final version will be smaller, more ergonomic, more user-friendly and faster. Our goal is to obtain all vital signs in under 10 seconds.”
Many components of milk have an ancient origin. We know this because many milk-related genes are older than the mammals. Take the caseins, which are usually the most abundant protein in mammalian milk. They help in transporting nutrients like calcium and phosphorus to babies, which helps the babies grow their skeleton and tissues.
Researchers have found that all mammals have highly organized clusters of genes, which code for three main types of caseins.
From this we can deduce that milk caseins are ancient. Researchers believe caseins diverged into the three main types that we see today long before the early mammals separated into monotremes, marsupials and placental mammals. Slowly, milk caseins went from being a nutrient supplement to egg yolk, to a major source of nutrients for babies.
Researchers have also traced how mammals became less dependent on the nutrients in egg yolk. About 170 million years ago, important egg yolk proteins called vitellogenins began disappearing one by one, according to a 2008 study. Again, this was before true mammals walked on earth.
All modern birds and reptiles have three genes associated with the production of vitellogenins. Egg-laying mammalian ancestors also had three genes. But among living mammals, only the egg-laying monotremes have one functional vitellogenin gene, alongside two inactive ones. In marsupials and placental mammals, all three vitellogenin genes are turned off.
The mammals would only have turned off these genes if they had substitutes to hand. So there must have been an alternative source of nutrients available, such as casein, before the vitellogenins were deactivated.
If egg yolk proteins began disappearing long before true mammals appeared, it suggests that milk was already the chief source of nutrients for mammals' egg-laying ancestors, such as the cynodonts.
Chemists truly went back to the drawing board to develop new X-shaped organic building blocks that can be linked together by metal ions to form an Archimedean cuboctahedron. In the journal Angewandte Chemie, the scientists report that by changing the concentration or using different counterions, the cuboctahedron can be reversibly split into two octahedra—an interesting new type of fusion–fission switching process.
Archimedean polyhedra are a group of symmetrical solids with regular polygons for faces and equal angles at the vertices, like a classic soccer ball with its 12 pentagons and 20 hexagons. These forms are also found in nature: the rigid shells (capsids) of many viruses, as well as certain cellular transport vesicles, are Archimedean polyhedra. These biological forms are made by the self-assembly of individual protein building blocks. Chemists have frequently turned to this concept for inspiration to synthesize large molecular cages held together by coordination bonds.
A team headed by Chrys Wesdemiotis and George R. Newkome has now successfully produced an approximately 6 nm cuboctahedron out of organic molecules and metal ions. A cuboctahedron has a surface made of 8 triangles and 6 squares. The conceptual starting point was an X-shaped, organic building block, which, laid over the surface of a cuboctahedron, would give the correct angles between the edges, 60° and 90°. It should also be able to bind metal ions to hold everything together.
Using 12 of these tailored X-shaped terpyridine ligands and 24 metal ions (zinc or cadmium), the researchers were able to make cuboctahedra that self-assembled from the individual building blocks. The team from the University of Akron, the University of Chicago (Argonne), the University of South Florida (Tampa), Florida Atlantic University (Boca Raton), the University of Tokyo (Japan), and the Tianjin University of Technology (China) used a variety of spectroscopic techniques, model calculations, and single-crystal analyses with synchrotron X-ray diffraction to verify the structure. They were even able to see the shapes of the individual molecules with an electron microscope.
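The component counts match cuboctahedral geometry: a cuboctahedron has 12 vertices of degree four (one per X-shaped ligand) and 24 edges (one per metal ion). A quick check against Euler's polyhedron formula, V − E + F = 2 (our own illustration, not from the paper):

```python
# Euler's polyhedron formula check for the cuboctahedron (V - E + F = 2).
vertices = 12          # one four-armed terpyridine ligand per vertex
faces = 8 + 6          # 8 triangles + 6 squares

# Every vertex has degree 4, and each edge joins two vertices:
edges = vertices * 4 // 2   # one metal ion per edge
print(edges)  # 24

assert vertices - edges + faces == 2
```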
One new feature they observed was that the cuboctahedra split apart into two octahedra when the concentration is reduced. If the solution concentration is then increased, the octahedra fuse back together into cuboctahedra. This process could also be initiated by switching between different counterions. This new process could allow for the production of a new series of nanoscale building blocks for the materials sciences. In addition, the zinc cuboctahedra may be suitable for use as transport systems for drugs.
There is a link between the perception of time and memory function in those with dementia. Family members often report their loved ones with dementia sometimes live in the past, even reverting back to first languages. This is because memory is not just one process in the brain, but a collection of different systems. Those with Alzheimer’s disease may have impairments in short-term memory, yet remote memory can be left relatively intact. So they’re able to remember public and personal events from many decades ago, but are unable to recall what happened earlier that day.
A fascinating case study illustrates this dissociation in remote and short-term memory in Alzheimer’s disease. A retired taxi driver diagnosed with Alzheimer’s disease showed remarkable spatial memory of downtown Toronto, Canada, where he had driven taxis and worked as a courier for 45 years. This was despite showing impairments in short-term memory and general cognitive functioning.
But while those with Alzheimer’s disease can typically remember events in the distant past better than those in the immediate past, they still perform worse than older adults without Alzheimer’s disease in memory retrieval. Interestingly, it appears that events and facts most frequently retrieved and used over a lifetime are those better recalled by those with Alzheimer’s disease in late life, rather than those encountered at any particular age.
This frequency of use memory pattern is mirrored in bilingual people with dementia. A friend commented that her Yia-Yia (Grandmother), who immigrated to Australia from Greece over 50 years ago, is increasingly conversing in Greek despite predominantly speaking English for decades (causing problems for my monolingual English-speaking friend).
Those with dementia often revert to their first language. This commonly begins with utterances from the first language appearing in conversation from the second language. This occurs more often in those less proficient in their second language, rather than being related to the age of acquisition of their second language.
So, how does this happen? Probably because familiar memories rely more on the brain’s cortex, its outer layer, while short-term memories rely more on a structure called the hippocampus. The hippocampus is typically affected at the start of late-life dementias such as Alzheimer’s disease, with regions of the cortex affected subsequently.
Researchers at the Aortic Institute at Yale have tested the genomes of more than 100 patients with thoracic aortic aneurysms, a potentially lethal condition, and provided genetically personalized care. Their work will also lead to the development of a “dictionary” of genes specific to the disease, according to researchers.
The study was published early online in The Annals of Thoracic Surgery. Experts have known for more than a decade that thoracic aortic aneurysms — abnormal enlargements of the aorta in the chest area — run in families and are caused by specific genetic mutations. Until recently, comprehensive testing for these mutations has been both expensive and impractical. To streamline testing, the Aortic Institute collaborated with Dr. Allen Bale of Yale’s Department of Genetics to launch a program to test whole genomes of patients with the condition.
Over a period of three years, the researchers applied a technology known as Whole Exome Sequencing (WES) to more than 100 individuals with these aneurysms. “To our knowledge, it’s the first widespread application of this technology to this disease,” said lead author and cardiac surgeon Dr. John A. Elefteriades, director of the institute.
The researchers detected four mutations known to cause thoracic aortic aneurysms. “The key findings are that this technology can be applied to this disease and it identifies a lot of patients with genetic mutations,” said Elefteriades. Additionally, the testing program uncovered 22 previously unknown gene variants that likely also contribute to the condition.
Using the test results, the clinicians were able to provide treatment tailored to each patient’s genetic profile. “Personalized aortic aneurysm care is now a reality,” Elefteriades noted. The personalized care ranged from more frequent imaging tests to preventive surgery for those most at risk. “Patients who have very dangerous mutations are getting immediate surgery,” he said.
Given that aneurysm disease is a highly inherited condition, affecting each generation, the researchers offered testing to family members of patients, and found mutations in relatives with no clinical signs of disease.
The researchers anticipate identifying more gene variants over time, accumulating a whole dictionary of mutations. “In a few years, we’re going to have discovered many new genes and be able to offer personalized care to an even greater percentage of aneurysm patients,” Elefteriades said.
Via Integrated DNA Technologies
Cancer researchers dream of the day they can force tumor cells to morph back to the normal cells they once were. Now, researchers on Mayo Clinic’s Florida campus have discovered a way to potentially reprogram cancer cells back to normalcy.
The finding, published in Nature Cell Biology, represents “an unexpected new biology that provides the code, the software for turning off cancer,” says the study’s senior investigator, Panos Anastasiadis, Ph.D., chair of the Department of Cancer Biology on Mayo Clinic’s Florida campus.
That code was unraveled by the discovery that adhesion proteins — the glue that keeps cells together — interact with the microprocessor, a key player in the production of molecules called microRNAs (miRNAs). The miRNAs orchestrate whole cellular programs by simultaneously regulating expression of a group of genes. The investigators found that when normal cells come in contact with each other, a specific subset of miRNAs suppresses genes that promote cell growth. However, when adhesion is disrupted in cancer cells, these miRNAs are misregulated and cells grow out of control. The investigators showed, in laboratory experiments, that restoring the normal miRNA levels in cancer cells can reverse that aberrant cell growth.
“The study brings together two so-far unrelated research fields — cell-to-cell adhesion and miRNA biology — to resolve a long-standing problem about the role of adhesion proteins in cell behavior that was baffling scientists,” says the study’s lead author Antonis Kourtidis, Ph.D., a research associate in Dr. Anastasiadis’ lab. “Most significantly, it uncovers a new strategy for cancer therapy,” he adds.
That problem arose from conflicting reports about E-cadherin and p120 catenin — adhesion proteins that are essential for normal epithelial tissues to form, and which have long been considered to be tumor suppressors. “However, we and other researchers had found that this hypothesis didn’t seem to be true, since both E-cadherin and p120 are still present in tumor cells and required for their progression,” Dr. Anastasiadis says. “That led us to believe that these molecules have two faces — a good one, maintaining the normal behavior of the cells, and a bad one that drives tumorigenesis.”
Their theory turned out to be true, but what was regulating this behavior was still unknown. To answer this, the researchers studied a new protein called PLEKHA7, which associates with E-cadherin and p120 only at the top, or the “apical” part of normal polarized epithelial cells. The investigators discovered that PLEKHA7 maintains the normal state of the cells, via a set of miRNAs, by tethering the microprocessor to E-cadherin and p120. In this state, E-cadherin and p120 exert their good tumor suppressor sides.
However, “when this apical adhesion complex was disrupted after loss of PLEKHA7, this set of miRNAs was misregulated, and the E-cadherin and p120 switched sides to become oncogenic,” Dr. Anastasiadis says. “We believe that loss of the apical PLEKHA7-microprocessor complex is an early and somewhat universal event in cancer,” he adds. “In the vast majority of human tumor samples we examined, this apical structure is absent, although E-cadherin and p120 are still present. This produces the equivalent of a speeding car that has a lot of gas (the bad p120) and no brakes (the PLEKHA7-microprocessor complex).”
The world's first functioning organism with an expanded DNA alphabet has now met another milestone in artificial life: making proteins that don't exist in nature. The organism, a bacterium created by scientists at The Scripps Research Institute, incorporates two synthetic DNA letters, called X and Y, along with the four natural ones, A, T, C and G. A team led by Floyd Romesberg published a study last year demonstrating that the organism, an engineered strain of E. coli, can function and replicate with the synthetic DNA.
Synthorx, a biotech startup that licensed the technology from Scripps, has now used the bacterium to produce proteins incorporating artificial amino acids, the building blocks of proteins. These are placed at precisely specified intervals along the protein sequence, obeying the code of the expanded DNA alphabet.
The La Jolla startup plans to make drugs out of these artificial proteins with properties that can be adjusted, such as the length of action inside the body, and how tightly they bind to their target. By using the bacterium as living factories, Synthorx plans to make these drugs far more efficiently and cheaply than by traditional chemistry.
Via Integrated DNA Technologies
If you could unravel all the DNA in a single human cell and stretch it out, you’d have a molecular ribbon about 2 meters long and 2 nanometers across. Now imagine packing it all back into the cell’s nucleus, a container only 5 to 10 micrometers wide. That would be like taking a telephone cord that runs from Manhattan to San Francisco and cramming it into a two-story suburban house.
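The telephone-cord analogy holds up to a rough check (the road distance and house size below are our own approximations, not figures from the article): both packings compress a long, thin ribbon by a factor of roughly 200,000 in length relative to the container width.

```python
# Sanity-check the packing analogy (distances are rough assumptions of ours).
dna_length_m = 2.0
nucleus_width_m = 10e-6        # 10 micrometers, the upper end of the stated range
dna_ratio = dna_length_m / nucleus_width_m

cord_length_m = 4_700_000.0    # Manhattan to San Francisco, ~4,700 km by road
house_width_m = 20.0           # a two-story suburban house, roughly
cord_ratio = cord_length_m / house_width_m

print(f"DNA:  length/container ~ {dna_ratio:,.0f}")   # ~200,000
print(f"Cord: length/container ~ {cord_ratio:,.0f}")  # ~235,000
```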
Fitting all that genetic material into a cramped space is step one. Just as important is how the material is organized. The cell’s complete catalog of DNA — its genome — must be configured in a specific three-dimensional shape to work properly. That 3-D organization of nuclear material — a configuration called the nucleome — helps control how and when genes are activated, defining the cell’s identity and its job in the body.
Researchers have long realized the importance of DNA’s precisely arranged structure. But only recently have new technologies made it possible to explore this architecture deeply. With simulations, indirect measurements and better imaging, scientists hope to reveal more about how the nucleome’s intricate folds regulate healthy cells. Better views will also help scientists understand the role that disrupted nucleomes play in aging and diseases, such as progeria and cancer.
Via Integrated DNA Technologies