They could almost be mistaken for images taken in the far reaches of outer space, or for ultra-magnified snapshots of microscopic organisms.
NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1,450 news sources.
NOTE: All articles in the Amazing Science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL at the top right of the screen) to display all the relevant postings SORTED by TOPICS.
You can also type your own query, e.g. "dna" if you are looking for articles involving DNA as a keyword.
• 3D-printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green-energy • history • language • map • material-science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Use of copper as a fluorescent material allows for the manufacture of inexpensive and environmentally compatible organic light-emitting diodes (OLEDs). Thermally activated delayed fluorescence (TADF) ensures a high light yield. Scientists at Karlsruhe Institute of Technology (KIT), CYNORA, and the University of St Andrews have now measured the underlying quantum mechanical phenomenon of intersystem crossing in a copper complex. The results of this fundamental work are reported in the journal Science Advances and contribute to enhancing the energy efficiency of OLEDs.
Organic light-emitting diodes are deemed tomorrow's source of light. They emit light homogeneously in all observation directions and produce brilliant colors and high contrasts. As it is also possible to manufacture transparent and flexible OLEDs, new applications and design options arise, such as flat light sources on window panes or displays that can be rolled up. OLEDs consist of ultra-thin layers of organic materials, which serve as the emitter and are located between two electrodes. When voltage is applied, electrons from the cathode and holes (positive charges) from the anode are injected into the emitter, where they form electron-hole pairs. These so-called excitons are quasiparticles in an excited state. When they decay back to the ground state, they release energy.
Excitons may assume two different states: singlet excitons decay immediately and emit light, whereas triplet excitons release their energy in the form of heat. Usually, 25 percent singlets and 75 percent triplets are encountered in OLEDs. To enhance the energy efficiency of an OLED, the triplet excitons must also be used to generate light. In conventional OLEDs, heavy metals such as iridium and platinum are added for this purpose. But these materials are expensive, have limited availability, and require complex OLED production methods.
It is cheaper and environmentally more compatible to use copper complexes as emitter materials. Thermally activated delayed fluorescence (TADF) ensures high light yields and, hence, high efficiency: Triplet excitons are transformed into singlet excitons which then emit photons. TADF is based on the quantum mechanics phenomenon of intersystem crossing (ISC), a transition from one electronic excitation state to another one of changed multiplicity, i.e. from singlet to triplet or vice versa. In organic molecules, this process is determined by spin-orbit coupling. This is the interaction of the orbital angular momentum of an electron in an atom with the spin of the electron. In this way, all excitons, triplets and singlets, can be used for the generation of light. With TADF, copper luminescent material reaches an efficiency of 100 percent.
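The 25/75 spin statistics above, and what TADF buys in internal quantum efficiency, can be sketched with a few lines of arithmetic (a minimal illustration, not the paper's analysis):

```python
# Spin statistics of electron-hole recombination: one singlet state (S=0)
# and three triplet states (S=1; m = -1, 0, +1) form with equal probability.
singlet_states, triplet_states = 1, 3
total = singlet_states + triplet_states

singlet_fraction = singlet_states / total   # 0.25
triplet_fraction = triplet_states / total   # 0.75

# Fluorescence-only emitter: only singlets produce photons.
iqe_fluorescent = singlet_fraction

# TADF emitter: triplets are thermally up-converted to singlets via
# reverse intersystem crossing, so all excitons can emit.
iqe_tadf = singlet_fraction + triplet_fraction

print(f"fluorescence-only IQE: {iqe_fluorescent:.0%}")  # 25%
print(f"TADF IQE:              {iqe_tadf:.0%}")         # 100%
```

This is why harvesting the triplets, whether with heavy metals or with TADF, quadruples the theoretical internal quantum efficiency.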
Stefan Bräse and Larissa Bergmann of KIT's Institute of Organic Chemistry (IOC), in cooperation with researchers of the OLED technology company CYNORA and the University of St Andrews, United Kingdom, for the first time measured the speed of intersystem crossing in a highly luminescent, thermally activated delayed fluorescence copper(I) complex in the solid state. The results are reported in the Science Advances journal. The scientists determined a time constant of intersystem crossing from singlet to triplet of 27 picoseconds (27 trillionths of a second). The reverse process -- reverse intersystem crossing -- from triplet to singlet is slower and leads to a TADF lasting for an average of 11.5 microseconds. These measurements improve the understanding of mechanisms leading to TADF and facilitate the specific development of TADF materials for energy-efficient OLEDs.
Because Volkswagen has committed large-scale fraud with the software in diesel engines, nine million fraudulent cars, sold in Europe and the US from 2009 to 2015, have emitted a cumulative 526 kilotons of nitrogen oxides more than was legally allowed. The Volkswagen fraud has had an even larger environmental impact in Europe than in the US: more Volkswagens were sold there and the population density is higher. Environmental scientists from Radboud University in the Netherlands have provided an estimate of the public health consequences caused by this fraud in the scientific journal Environmental Pollution.
According to the calculations of Rik Oldenkamp, Rosalie van Zelm and Mark Huijbregts, 45,000 life years were lost due to ill-health or early death; over 44,000 of these in Europe and almost 700 in the United States are associated with inhalation of fine particulate matter formed from the extra nitrogen oxides emitted by the diesel cars that were tampered with. If Volkswagen does not recall the affected cars, another 72,000 healthy life years will be lost in Europe due to these emissions.
Aside from 'healthy life years', there is another commonly used way to calculate damage, based on the value of statistical life (VSL). This measure of costs reflects the rate at which an individual would trade consumption of other goods and services for small changes in her own mortality risk, given her preferences and budget constraints. Researchers from Nijmegen have calculated these specific health costs due to mortality in Europe and the United States to be at least 39 billion US dollars, which is significantly more than the 7.3 billion US dollars that Volkswagen Group has set aside to cover worldwide costs related to the diesel emissions scandal. Using this method of damage calculation, one should take into account another 62 billion US dollars in damage if Volkswagen does not recall their cars.
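As a rough consistency check on the figures quoted above (the paper's VSL methodology is considerably more involved than a simple division), the headline numbers combine as follows:

```python
# Back-of-envelope check of the quoted figures; illustrative only.
life_years_lost = 45_000          # Europe + US, due to excess NOx emissions
damage_usd = 39e9                 # lower-bound mortality cost estimate
provision_usd = 7.3e9             # amount Volkswagen Group set aside

implied_cost_per_life_year = damage_usd / life_years_lost
shortfall = damage_usd - provision_usd

print(f"implied cost per life year: ${implied_cost_per_life_year:,.0f}")
print(f"damage vs. provision shortfall: ${shortfall / 1e9:.1f} billion")
```

The implied figure of roughly $870,000 per life year shows why the $7.3 billion provision falls so far short of the estimated mortality costs.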
USF scientists have isolated an extract from a sponge found in Antarctica, known as Dendrilla membranosa, and tested it on MRSA biofilm. The extract contains a new natural product chemical, which the research team has named "darwinolide" and which was found in laboratory tests to eliminate more than 98 percent of MRSA cells.
The study describing their methods and results was published this week in the American Chemical Society's journal Organic Letters. While years ago the highly resistant MRSA infection was particularly problematic in places such as hospitals and nursing homes, it has developed into an infection that can be found in commonly used places such as gyms, locker rooms and schools. "In recent years, MRSA has become resistant to vancomycin and threatens to take away our most valuable treatment option against staph infections," said study co-author and USF microbiologist Dr. Lindsey N. Shaw.
MRSA is unique in that it can cause infections in almost every niche of the human host, from skin infections, to pneumonia, to endocarditis, a serious infection of tissues lining the heart. Unfortunately, the pace of the pharmaceutical industry's efforts to find new antibiotics to replace those no longer effective has slowed in recent years, said Shaw.
Like many other bacteria, MRSA forms a biofilm. "Biofilms, formed by many pathogenic bacteria during infection, are a collection of cells coated in a variety of carbohydrates, proteins and DNA," said Shaw. "Up to 80 percent of all infections are caused by biofilms and are resistant to therapy. We desperately need new anti-biofilm agents to treat drug-resistant bacterial infections like MRSA."
A new study finds significant impact, and a possible silver lining, for the iconic birds over the next century.
Adélie penguins (Pygoscelis adeliae) have survived in Antarctica for nearly 45,000 years, adapting to glacial expansions and sea ice fluctuations driven by millennia of climatic changes. The penguins remained resilient through these changes, but new research from the University of Delaware suggests that unique 21st-century climates may pose an existential threat to many of the colonies on the Antarctic continent.
Published Wednesday in Scientific Reports, the study, led by oceanographer Megan Cimino, found that up to 60 percent of the current Adélie penguin habitat in Antarctica could be unfit to host colonies by the end of the century.
The Adélie penguin is one of two true Antarctic penguins—the other being the emperor penguin (Aptenodytes forsteri)—and it inhabits the full extent of the continent. The penguins nest on land during the austral (southern) summer, and migrate during the winter to the edge of the sea ice, where they are able to feed at sea.
Using a combination of field survey data and high-resolution satellite imagery, the researchers were able to stitch together 30 years of colony data, from 1981 to 2010, at sites ringing Antarctica. Looking at the year-to-year data, the researchers identified population trends at each colony site for the full 30-year period, and found diverging trends at different sites. Some colonies, like the closely monitored population near Palmer Station, a United States research hub on the northern Antarctic Peninsula, saw declines of over 80 percent.
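The per-site trend estimation described above amounts to fitting a trend line through three decades of yearly counts. A sketch of that step, using synthetic colony counts rather than the study's data:

```python
import numpy as np

# Illustrative only: fit a linear trend to yearly colony counts, the kind
# of per-site trend estimate described above. The counts are synthetic
# (a ~2.8%/yr decline plus survey noise), not the study's measurements.
years = np.arange(1981, 2011)                         # 30-year record
rng = np.random.default_rng(0)
counts = 10_000 * (1 - 0.028) ** (years - 1981)       # declining colony
counts = counts * rng.normal(1.0, 0.05, counts.size)  # survey noise

slope, intercept = np.polyfit(years, counts, 1)
pct_change = (counts[-1] - counts[0]) / counts[0] * 100

print(f"fitted trend: {slope:.0f} breeding pairs per year")
print(f"net change 1981-2010: {pct_change:.0f}%")
```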
The great frigate bird can fly for 56 days without a break.
Plenty of birds fly vast distances on their migratory trips around planet Earth. But the most amazing of all might be the frigate bird, which can stay aloft for two months straight without landing or resting. How the heck do they do that?
A team of biologists led by Henri Weimerskirch at the French National Center for Scientific Research just announced the results of a major new study on great frigates (Fregata minor), these fascinating seabirds native to the central Indian and Pacific Oceans. Using super-lightweight GPS trackers, the biologists followed four dozen birds from 2011 to 2015, some for up to two years continuously. What they found was astonishing: the birds could stay aloft for up to 56 days without landing, gliding for hundreds of miles per day with wing-flaps only every six minutes, and reaching altitudes of more than 2.5 miles.
Before this new research, "it was known that frigate birds could stay several days aloft," says Weimerskirch, "but that they can stay two months [continuously in the air] is completely unexpected." The biologists' description of the birds' fantastical flights is published today in the journal Science.
One of the most basic components of any communications network is a power splitter that allows a signal to be sent to multiple users and devices. Researchers from Brown University have now developed just such a device for terahertz radiation -- a range of frequencies that may one day enable data transfer up to 100 times faster than current cellular and Wi-Fi networks.
IF YOU WANTED to write a history of the Internet, one of the first things you would do is dig into the email archives of Vint Cerf. In 1973, he co-created the protocols that Internet servers use to communicate with each other without the need for any kind of centralized authority or control. He has spent the decades since shaping the Internet’s development, most recently as Google’s “chief Internet evangelist.”
Thankfully, Cerf says he has archived about 40 years of old email—a first-hand history of the Internet stretching back almost as far as the Internet itself. But you’d also have a pretty big problem: a whole lot of that email you just wouldn’t be able to open. The programs Cerf used to write those emails, and the formats in which they’re stored, just don’t work on any current computer you’d likely be using to try to read them.
Today, much of the responsibility for preserving the web’s history rests on The Internet Archive. The non-profit’s Wayback Machine crawls the web perpetually, taking snapshots that let you, say, go back and see how WIRED looked in 1997. But the Wayback Machine has to know about a site before it can index it, and it only grabs sites periodically. Based on the Internet Archive’s own findings, the average webpage only lasts about 100 days. In order to preserve a site, the Wayback Machine has to spot it in that brief window before it disappears.
New study shows that the same cellular machinery exists in humans.
The ability to grow a new limb may seem like something straight out of science fiction, but new research shows exactly how animals like salamanders and zebrafish perform this stunning feat—and how humans may share the biological machinery that lets them do it. Scientists have long known of the regenerative powers of some species of fish and amphibians: To recreate a limb or fin lost to a hungry predator, they can regrow everything from bone to muscle to blood vessels with stem cells that form at the site of the injury. But just how they do it at the genetic level has been a mystery.
To figure out what might be happening, scientists amputated the appendages of two ray-finned fish—zebrafish and bichir—and a salamander known as the axolotl, all of which can regrow their legs and fins. They then compared RNA from the site of the amputation. They found 10 microRNAs—small pieces of RNA that regulate gene expression—that were the same in all three species. What’s more, they seemed to function in the same way, despite the structural difference between the axolotl (pictured above) and the fishes.
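The cross-species comparison described above boils down to a set intersection: which regulatory microRNAs turn up at the amputation site in all three species. A sketch, with placeholder microRNA identifiers rather than the ten actually reported:

```python
# Sketch of the cross-species comparison: find the microRNAs common to
# all three regenerators. The identifiers below are placeholders for
# illustration, not the ten reported in the paper.
zebrafish = {"miR-21", "miR-31", "miR-101", "miR-133", "miR-203"}
bichir    = {"miR-21", "miR-31", "miR-101", "miR-146", "miR-203"}
axolotl   = {"miR-21", "miR-31", "miR-101", "miR-133", "miR-204"}

# Set intersection keeps only candidates present in every species.
shared = zebrafish & bichir & axolotl
print(sorted(shared))
```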
The finding supports an existing idea that the three master limb-replacers last shared a common ancestor about 420 million years ago, and it suggests that the evolutionary machinery for regrowing limbs has been conserved over time, not developed independently in separate species, the researchers report today in PLOS ONE. What does this mean for humans? If these microRNAs can be programmed to work like they do in salamanders and fish, humans could enhance their ability to heal from serious injuries. But don't expect to get Wolverine-like powers just yet—scientists say such modifications are still a long way off.
Despite the fact that Einstein's unifying theory has never been supported by observations, let alone definitive mathematical proof, Einstein's work did ultimately lead many scientists to re-examine the universe in relation to a holistic theory of everything, including an amalgam of his gravitational theories and quantum gravity hypotheses. Much work leading on from his theories provided tantalizing glimpses at possible gravitational interactions, including the behavior of the smallest of all fermions yet discovered – leptons and quarks.
This research led directly to the discovery of a gauge-invariant quantum field theory of the weak force that included an electromagnetic interaction, producing the "electroweak" concept that now links the electromagnetic and weak nuclear fields. This was, in itself, a great breakthrough in particle physics research. Unfortunately, however, it did not progress to include an observable gravitational component.
Nevertheless, buoyed by such revelations, theoretical physicists sought out a similar quantum field theory for the strong nuclear force, and eventually found one, dubbing it quantum chromodynamics. In this case, quarks are shown to interact through the exchange of gluons. This research has led to further postulations that the electroweak and strong nuclear forces could be united in a grand unified theory, which would then incorporate three of the four known forces in the universe. Again, however, an inclusion of the influence of gravity failed to be reconciled.
So despite the successful conflation of the fields discussed above, physicists have been unable to formulate a complete particle-driven unified field theory for gravity since it seems to lack a force-carrier particle of its own.
There is, however, one contender: a contentious theoretical particle known as the "graviton". The graviton moniker was apparently coined by the Russian physicists Dmitrii Blokhintsev and F. M. Gal'perin sometime in the mid 1930s (interestingly, around the time of the Einstein-Bohr stoush), in relation to the notion that if Einstein's predicted gravitational waves existed, then they must also possess quanta of energy, just as electromagnetic energy does. That is, the electromagnetic and the strong and weak nuclear forces all act through a "force carrier", which is exchanged between the interacting particles. These exchange carriers are also known as field particles, or gauge bosons.
The graviton, if it exists, doesn't seem to act like any of the other particles in the Standard Model, as it does not exhibit these force carrier behaviors. Put simply, unlike the other forces, gravity cannot be absorbed, transformed, or shielded against, and it only attracts and never repels. In effect, this theoretical particle appears to possess no discernible way to interact with any other particle. This fact by itself would prohibit its inclusion in the Standard Model, partly because no instrument of sufficient size or efficiency could possibly be built to detect the supposedly tiny energies associated with it, but mostly because the entire concept runs into enormous theoretical difficulties at energies close to the Planck scale, far beyond anything that can be probed with particle accelerators.
Despite this, gravitons feature in quantum gravity and in other yet-to-be-proven quantum mechanical models such as string theory, both of which rely on their existence. And though much hope is pinned on one of these theories eventually providing a unified description of gravity and particle physics, quantum gravity may prove the better contender. This is because string theory is not a physical descriptor of reality, but a self-contained mathematical model that describes all of the fundamental forces and the various forms of matter as models, not observed phenomena.
Using novel software that incorporates all of the field theory equations developed by Einstein as part of his general theory of relativity, research teams from Europe and the United States have started developing a model of the universe that they claim will eventually provide the most precise and detailed representation of the cosmos ever created.
Incorporating two new independently-developed computer codes from a team comprising members from Case Western Reserve University and Kenyon College, Ohio, and a team formed by a collaboration between the Institute of Cosmology and Gravitation, Portsmouth, and the University of Catania, Italy, the new research aims to amalgamate a range of physical theoretical information to provide new insights into the nature of gravity and its effects on all of the objects in the universe.
The pair of new codes is also claimed to be the first to use the complete general theory of relativity to help explain why matter clumps together in some areas of space, while other areas show a distinct dearth of it.
Einstein's theory, despite being over 100 years old, is still the foremost and best theory of gravity that we have. However, although it reliably predicts a range of cosmological phenomena, including the groundbreaking detection of gravitational waves, its equations are so complex that, until now, physicists have had to use simplified versions of the theory when looking at the mechanisms at play in the universe as a whole.
The new codes, embedded in a new mathematical tool developed by the researchers and dubbed "Cosmograph", are said to be able to work with the complexities inherent in Einstein's equations to provide much more nuanced and detailed modeling than has ever been achieved before.
The HBP is a €1.2 billion, 10-year global project that will give us a deeper and more meaningful understanding of how the human brain operates. It comprises 130 research institutions throughout Europe, coordinated by the École polytechnique fédérale de Lausanne (EPFL) in Switzerland (1).
Exhaustive experimental mapping of the brain turned out to be a dead end, given that it takes around 20,000 experiments to map just one neural circuit and that our brain consists of some 100 billion neurons and 100 trillion synapses. The HBP came up with a better solution: building the first unified human brain model, together with neuromorphic computing systems that use the same basic principles of computation and cognitive architectures as the brain (1, 2, 3, 4).
The plan is to determine fundamental principles of how neurons are connected and use those principles to construct statistical simulations. A simulation model will then predict how certain parts of the brain, for which we have little or no experimental information, are wired, and the results will be compared with real biological data. In other words, the idea is to find underlying principles that govern the brain's morphology and reverse-engineer the human brain with the help of supercomputers (1, 2, 3).
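A toy version of this predict-then-compare approach can be sketched by assuming a simple wiring rule and generating the connectivity it implies. The rule and all numbers below are illustrative assumptions, not the HBP's actual model:

```python
import numpy as np

# Toy sketch: assume a simple wiring principle (connection probability
# falls off exponentially with distance), then sample a predicted
# connectivity matrix that could be compared against measured data.
rng = np.random.default_rng(42)
n = 200
positions = rng.uniform(0.0, 1.0, size=(n, 3))   # neurons in a 1 mm cube

# Pairwise distances and a distance-dependent connection rule.
d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
p_connect = 0.5 * np.exp(-d / 0.2)               # decay length 0.2 mm
np.fill_diagonal(p_connect, 0.0)                 # no self-connections

predicted = rng.random((n, n)) < p_connect       # sampled wiring diagram
print(f"predicted connections: {predicted.sum()} of {n * (n - 1)} possible")
```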
Astronomers have identified a family of incredible galaxies that could shed further light on the transformation of the early Universe known as the 'epoch of reionisation'. Dr David Sobral of Lancaster University will present their results on Monday 27 June at the National Astronomy Meeting in Nottingham.
About 150 million years after the Big Bang, some 13 billion years ago, the Universe was completely opaque to high energy ultraviolet light, with neutral hydrogen gas blocking its passage. Astronomers have long realised that this situation ended in the so-called 'epoch of reionisation', when ultraviolet light from the earliest stars ionised the neutral hydrogen atoms and could start to travel freely through the cosmos. This reionisation period marks a key transition between the relatively simple early cosmos, with normal matter made up of hydrogen and helium, and the universe as we see it today: transparent on large scales and filled with heavier elements.
In 2015 Sobral led a team that found the first example of a spectacularly bright galaxy within the epoch of reionisation, named Cosmos Redshift 7 or CR7, which may harbour first generation stars. The team also discovered a similar galaxy, MASOSA, which, together with Himiko, discovered by a Japanese team, hinted at a larger population of similar objects, perhaps made up of the earliest stars and/or black holes.
Using the Subaru and Keck telescopes on Hawaii, and the Very Large Telescope in Chile, Sobral and his team, along with a group in the US, have now found more examples of this population. All of the newly found galaxies seem to have a large bubble of ionised gas around them. Sobral comments: "Stars and black holes in the earliest, brightest galaxies must have pumped out so much ultraviolet light that they quickly broke up hydrogen atoms in the surrounding universe. The fainter galaxies seem to have stayed shrouded from view for a lot longer. Even when they eventually become visible, they show evidence of plenty of opaque material still in place around them."
Robots so small they can enter the bloodstream and perform surgeries are one step closer, thanks to a discovery by a research team from Monash University.
Led by Dr Zhe Liu, the Monash engineering team has focused on graphene oxide – a material a single atom thick – as an effective shape memory material. Graphene has captured worldwide scientific and industrial interest for its remarkable properties, with potential applications across energy, medicine, and even biomedical nano-robots. Until now, shape memory effects have only been observed in materials thicker than approximately 10 nm; graphene oxide is approximately 1 nm thick.
In contrast to other shape memory materials, the Monash team discovered that subjecting certain forms of graphene oxide to an electric field causes them to change shape almost instantaneously and to retain the new form until stretched back to the original shape.
PhD candidate and first author Zhenyue Chang said the shape memory effect arises from an "atomic switch" enabling a super-fast response. "Aside from being able to transform at high speeds, graphene oxide has many other advantages over existing shape memory materials. It is incredibly light, has a high density to strain ratio, is very stable, and is able to perform a relative size change of 15 per cent, compared to shape memory alloys, which only change by four per cent."
"Like the science fiction movie Fantastic Voyage, our research brings us one step closer to intelligent biomedical nano-robots that can be delivered into a living cell for future cellular surgery," Dr Liu said.
Dr Liu's team, from the Monash Centre for Atomically Thin Materials (MCATM) and Mechanical and Aerospace Engineering department at Monash University, made their discovery through computer simulations. "We are excited now to use our research to create the compound we have predicted. That would be amazing," Dr Liu said.
Scientists from Moscow Institute of Physics and Technology (MIPT), the Institute for Theoretical and Experimental Physics, and the National Research University Higher School of Economics have devised a method of distinguishing black holes from compact massive objects that are externally indistinguishable from one another. The method involves studying the energy spectrum of particles moving in the vicinity -- in one case it will be continuous and in the other it will be discrete. The findings have been published in Physical Review D.
Black holes, which were predicted by Einstein's theory of general relativity, have an event horizon -- a boundary beyond which nothing, not even light, can return to the outside world. The radius of this boundary is called the Schwarzschild radius; in physical terms it is the radius of an object for which the escape velocity exceeds the speed of light, which means that nothing is able to overcome its gravity.
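Setting the escape velocity equal to the speed of light gives the Schwarzschild radius directly as r_s = 2GM/c². A quick worked example:

```python
# Schwarzschild radius r_s = 2GM/c^2: the size an object must shrink
# below before its escape velocity exceeds the speed of light.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    """Event-horizon radius in meters for a given mass in kilograms."""
    return 2 * G * mass_kg / c**2

M_sun = 1.989e30     # solar mass, kg
M_earth = 5.972e24   # Earth mass, kg
print(f"Sun:   {schwarzschild_radius(M_sun) / 1e3:.2f} km")    # ~2.95 km
print(f"Earth: {schwarzschild_radius(M_earth) * 1e3:.1f} mm")  # ~8.9 mm
```

The Sun would have to be squeezed to a radius of about 3 km, and the Earth to under a centimeter, before becoming a black hole.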
Black holes of stellar mass are the result of gravitational collapse, which occurs when a star has "burned" all its thermonuclear fuel and the gas pressure can no longer resist gravity. If the star is massive enough, it collapses to a size smaller than the Schwarzschild radius and turns into a black hole. However, time on the event horizon slows down so much that for an outside observer the collapse almost stops (if a ship falls into a black hole, for example, to an outside observer it will appear to be continually falling toward the horizon); therefore all the black holes we see are objects that are eternally collapsing.
Astrophysicists have not yet been able to "see" a black hole directly, but there are many objects "suspected" of being black holes. Most scientists are sure that in the center of our galaxy there is a supermassive black hole, and there are binary systems where one of the components is most likely a black hole. However, some astrophysicists believe that there may be compact massive objects that fall very slightly short of black hole status: their radius is only a little larger than the Schwarzschild radius. It may be the case that some of the "suspects" are in fact objects such as these. From the outside, however, they are not distinguishable from black holes.
Emil Akhmedov, Fedor Popov, and Daniil Kalinov devised a method to tell the difference between them, or more precisely the difference between compact massive objects and collapsing objects.
"We examined the scalar quantum field around a black hole and a compact object and found that around the collapsing object -- the black hole -- there are no bound states, but around the compact object there are," explains Fedor Popov, a member of staff at MIPT's Laboratory of High Energy Physics. He and his colleagues examined the behavior of scalar particles (particles with zero spin -- the Higgs boson is one example) in the vicinity of black holes and massive compact objects. The scientists derived analytical expressions for the energy spectrum of the particles. It was found that near the surface of an ultra-compact star with a radius slightly larger than the Schwarzschild radius there is a "potential well" -- a region of space where particles fall into a gravitational "trap." The problem in this case is then similar to a simple task in quantum mechanics where the spectrum of particles in a potential well needs to be found. This spectrum is discrete, i.e. only certain energy values are allowed, with gaps in between. In simpler terms, particles of certain energies are absent, leaving "empty spaces" in the spectrum.
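The quantum-mechanics analogy can be made concrete: a particle in a one-dimensional finite potential well has a discrete set of bound-state energies, which can be found numerically by diagonalizing a finite-difference Hamiltonian. The parameters below (hbar = m = 1) are illustrative, not taken from the paper:

```python
import numpy as np

# Bound states of a 1D finite potential well, as a toy analogue of the
# "gravitational trap" described above (units: hbar = m = 1; parameters
# are illustrative, not the paper's).
L, n = 10.0, 1000                      # half-width of the grid, grid points
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

V0, a = 10.0, 1.0                      # well depth and half-width
V = np.where(np.abs(x) < a, -V0, 0.0)

# H = -(1/2) d^2/dx^2 + V(x), with the second derivative approximated
# by central differences, giving a tridiagonal matrix.
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)
bound = energies[energies < 0]         # discrete levels below the continuum
print(f"{bound.size} bound states:", np.round(bound, 3))
```

Only a handful of negative energies appear; everything above zero belongs to the continuum of free particles, which is exactly the discrete-versus-continuous distinction the researchers propose to exploit.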
The artificial pancreas -- a device which monitors blood glucose in patients with type 1 diabetes and then automatically adjusts levels of insulin entering the body -- is likely to be available by 2018, conclude authors of a paper in Diabetologia (the journal of the European Association for the Study of Diabetes). Issues such as speed of action of the forms of insulin used, reliability, convenience and accuracy of glucose monitors plus cybersecurity to protect devices from hacking, are among the issues that are being addressed.
Currently available technology allows insulin pumps to deliver insulin to people with diabetes after taking a reading or readings from glucose meters, but these two components are separate. It is the joining together of both parts into a 'closed loop' that makes an artificial pancreas, explain authors Dr Roman Hovorka and Dr Hood Thabit of the University of Cambridge, UK. "In trials to date, users have been positive about how use of an artificial pancreas gives them 'time off' or a 'holiday' from their diabetes management, since the system is managing their blood sugar effectively without the need for constant monitoring by the user," they say.
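The closed loop itself can be sketched in a few lines: a sensor reading feeds a controller, which sets the pump's insulin rate, which in turn lowers glucose. The sketch below uses a deliberately naive proportional controller and toy physiology; real systems use far more sophisticated (e.g. model-predictive) control algorithms:

```python
# Minimal closed-loop sketch: sensor -> controller -> pump -> body.
# All dynamics and gains are toy values for illustration only.
TARGET = 6.0          # target blood glucose, mmol/L

def controller(glucose: float) -> float:
    """Proportional controller: insulin rate (U/h) from a sensor reading."""
    error = glucose - TARGET
    return max(0.0, 0.5 * error)      # never deliver negative insulin

glucose = 12.0        # start hyperglycaemic
for step in range(24):                # simulate 24 five-minute steps
    insulin = controller(glucose)     # controller reacts to the sensor
    glucose += 0.1 - 0.3 * insulin    # toy physiology: intake vs insulin
    glucose = max(glucose, 3.0)       # crude floor on the toy model

print(f"glucose after 2 hours: {glucose:.1f} mmol/L")
```

Even this crude loop settles near the target without user intervention, which is the "time off" effect trial participants describe.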
One part of the clinical need for the artificial pancreas is the variability of insulin requirements between and within individuals -- on one day a person could use one third of their normal requirements, and on another three times what they normally would. This depends on the individual, their diet, their physical activity and other factors. The combination of all these factors places a burden on people with type 1 diabetes to constantly monitor their glucose levels, to ensure they don't end up with too much blood sugar (hyperglycaemia) or, more commonly, too little (hypoglycaemia). Both of these complications can cause significant damage to blood vessels and nerve endings, making complications such as cardiovascular problems more likely.
There are alternatives to the artificial pancreas, with improvements in technology in both whole pancreas transplantation and also transplants of just the beta cells from the pancreas which produce insulin. However, recipients of these transplants require drugs to suppress their immune systems just as in other organ transplants. In the case of whole pancreas transplantation, major surgery is required; and in beta cell islet transplantation, the body's immune system can still attack the transplanted cells and kill off a large proportion of them (80% in some cases). The artificial pancreas of course avoids the need for major surgery and immunosuppressant drugs.
Researchers globally continue to work on a number of challenges faced by artificial pancreas technology. One such challenge is that even fast-acting insulin analogues do not reach their peak levels in the bloodstream until 0.5 to 2 hours after injection, with their effects lasting 3 to 5 hours. This may not be fast enough for effective control under, for example, conditions of vigorous exercise. Use of the even faster acting 'insulin aspart' analogue may remove part of this problem, as could use of other forms of insulin such as inhaled insulin. Work also continues to improve the software in closed loop systems to make it as accurate as possible in blood sugar management.
A number of clinical studies have been completed using the artificial pancreas in its various forms, in various settings such as diabetes camps for children, and real life home testing. Many of these trials have shown as good or better glucose control than existing technologies (with success defined by time spent in a target range of ideal blood glucose concentrations and reduced risk of hypoglycaemia). A number of other studies are ongoing.
The authors say: "Prolonged 6- to 24-month multinational closed-loop clinical trials and pivotal studies are underway or in preparation including adults and children. As closed loop devices may be vulnerable to cybersecurity threats such as interference with wireless protocols and unauthorised data retrieval, implementation of secure communications protocols is a must."
The timeline to availability of the artificial pancreas, as with other medical devices, depends on regulatory approval, and regulatory agencies have so far taken a reassuring attitude: the US Food and Drug Administration (FDA) is currently reviewing one proposed artificial pancreas, with approval possible as soon as 2017. A recent review by the UK National Institute of Health Research (NIHR) reported that automated closed-loop systems may be expected to appear in the (European) market by the end of 2018. The authors say: "This timeline will largely be dependent upon regulatory approvals and ensuring that infrastructures and support are in place for healthcare professionals providing clinical care. Structured education will need to continue to augment efficacy and safety."
Kataegis is a recently discovered phenomenon in which multiple mutations cluster in a few hotspots in a genome. The anomaly was previously found in some cancers, but it has been unclear what role kataegis plays in tumor development and patient outcomes. Using a database of human tumor genomic data, researchers at the University of California San Diego School of Medicine and Moores Cancer Center have discovered that kataegis is actually a positive marker in breast cancer — patients with these mutation hotspots have less invasive tumors and better prognoses.
The study, published June 30 in Cell Reports, also suggests kataegis status could help doctors determine the treatment options that might work best for patients with the mutation pattern. “We don’t know what causes kataegis, and before this study not much was known about its functional importance at the molecular or clinical level,” said senior author Kelly Frazer, PhD, professor of pediatrics and director of the Institute for Genomic Medicine at UC San Diego School of Medicine and Moores Cancer Center. “We’ve now found that kataegis is associated with a good prognosis for patients with breast cancer.”
Kataegis occurs in approximately 55 percent of breast cancers. To determine the role of this phenomenon in patient outcomes, Frazer and her team studied human breast cancer data available from The Cancer Genome Atlas (TCGA), the National Institutes of Health’s database of genomic information from more than 15,000 human tumors representing many cancer types. The Frazer team established the kataegis status of 97 breast tumors and then paired this information with patient data, such as age at diagnosis, treatment and outcome. They also looked at an additional 412 human breast cancers for which they predicted kataegis status.
The researchers found several clinical factors associated with kataegis. These mutation hotspots were more common in breast cancer patients diagnosed at a later age and in patients with HER2-positive and high-grade tumors.
What’s more, the presence of kataegis was a marker for good prognosis. Kataegis on chromosomes 17 and 22 in particular was associated with low tumor invasiveness. And finally, although causes of death for patients in the TCGA database are not known, patients without kataegis tended to die younger (median age 47 years old) than patients with kataegis (median age 78 years old).
In a finding that helps explain kataegis’ beneficial effect, the researchers noted that genes located near kataegis hotspots were less likely to behave abnormally than genes located further away in the genome.
Scientists find evidence that the hole is finally shrinking, thanks to the phase-out of harmful chemicals that began 30 years ago.
After three decades of observation, scientists have finally found the first fingerprints of healing in the notorious Southern Hemisphere ozone hole.
In 1974, Mario Molina and Sherwood Rowland, two chemists at the University of California, Irvine, published an article in Nature detailing the threats to the ozone layer from chlorofluorocarbon (CFC) gases. At the time, CFCs were commonly used in spray bottles and as coolants in many refrigerators, and they were rapidly accumulating in the atmosphere.
The groundbreaking research—for which they were awarded the 1995 Nobel Prize in chemistry—concluded that the atmosphere only had a “finite capacity for absorbing chlorine” atoms in the stratosphere.
Molina and Rowland's work was widely attacked by the chemical industry, but it was vindicated 11 years later, in 1985, when a team of English scientists realized the dire implications of their findings: the CFCs in the atmosphere had created a hole in the ozone layer. The loss of the protective ozone can lead to increased rates of skin cancer in humans and animals.
The research team, led by Susan Solomon, a professor of atmospheric chemistry and climate science at MIT, found multiple lines of evidence for the healing. The findings were published Thursday in Science.
The ozone hole forms every year over Antarctica, beginning in August and generally peaking in October. Solomon's team compared September ozone measurements, collected from balloon data and satellites, with statistical simulations that predict ozone.
Solomon’s team found that, in recent years, the hole has not exceeded the 12-million-square-kilometer threshold until later in the southern spring, which indicates that the September hole is shrinking. In fact, the researchers believe the ozone hole has shrunk by more than 4 million square kilometers. Furthermore, the hole is not as deep as it used to be.
For the first time, researchers have demonstrated a DNA nanomotor that can "walk" along a track with sustainable motion. The nanomotor also has the highest fuel efficiency for any type of walking nanomotor so far built.
Researchers Meihan Liu et al. at the National University of Singapore have published a paper on the DNA nanowalker in a recent issue of ACS Nano.
The tiny motor illustrates how purely physical effects can enable the efficient harvest of chemical energy at the single-molecule level. By operating on chemical energy, the new motor functions completely differently than any macroscopic motor, and brings researchers a step closer to replicating the highly efficient biomotors that transport cargo in living cells.
An important characteristic of the new nanowalker is that, like biomotors in living cells, it is an enzyme. This means that it catalyzes the chemical reaction of its fuel that generates its motion, without permanently changing itself or its track. This trait enables repeated, sustainable motion, which has not been achieved by any chemically powered synthetic nanowalker before now. Most other nanowalkers have been "burn-bridge motors," meaning they are not enzymes but instead consume their tracks as their fuel.
Creating enzymatic nanowalkers is very challenging, and so progress in this area has been relatively slow over the past few years. The only other demonstration of an enzymatic walker was in 2009, when researchers designed a nanowalker that, despite being enzymatic, cannot achieve sustainable motion because its track coils over time and eventually halts the motor. This nanowalker uses more than two fuel molecules per step, and studies since then have suggested that two fuel molecules per step is a general threshold for enzymatic nanomotors.
With its capability of sustainable motion and a fuel efficiency of approximately one molecule per step, the new nanowalker represents a significant advance in this area.
The key to this achievement was finding a physical mechanism for efficiently harvesting chemical energy at the single-molecule level. This mechanism consists of three "chemomechanical gates" that basically ensure that the nanowalker walks by always picking up its back leg and not its front leg.
To do this, these gates physically control the order in which the products are released in the chemical reaction that propels the nanowalker forward. As a result, the DNA nanowalker's back leg dissociates from the track first and takes a step forward before the front leg dissociates. Then when the front leg becomes the back leg, that leg takes a step forward, and the walking cycle repeats. The dissociation of each leg occurs when an enzyme "cuts" one fuel molecule that is bound to the leg, so that one molecule is all that is needed to take one step. Using a fluorescence microscope, the researchers observed that the 20-nm-long nanowalker could move at speeds of up to 3 nm per minute.
Immunotherapy has revolutionized cancer treatment by demonstrating that blocking immunosuppressive pathways elicits remarkable and often durable responses in patients with metastatic disease. In most patients, blocking immunosuppression is ineffective without a treatment that induces de novo anti-tumor immune responses. Evidence that T cells recognize unique mutation-generated neo-antigens in patients responding to immunotherapy implies that a tumor vaccine needs to be highly personalized. Emerging data that radiotherapy can convert the patient's own tumor into an in situ vaccine have resulted in significant interest for testing radiation in combination with immunotherapy. Successful personalized immunization of patients with cancer with local tumor irradiation could provide a simple, widely available, and cost-effective means to enhance responses to immunotherapy.
The mission will peek through the gas giant’s swirling clouds in search of a planetary core.
On 4 July, NASA intends to finish a job that started with the agency’s Galileo mission 21 years ago. At 8:18 p.m. Pacific time, the Juno spacecraft will ignite its main engine for 35 minutes and nudge itself into orbit around Jupiter. If all goes well, it will eventually slip into an even tighter path that whizzes as close as 4,200 kilometres above the planet’s roiling cloud-tops — while dodging as much of the lethal radiation in the planet’s belts as possible.
The US$1.1-billion mission, which launched in 2011, will be the first to visit the Solar System’s biggest planet since NASA’s Galileo spacecraft in 1995. Picking up where Galileo left off, Juno is designed to answer basic questions about Jupiter, including what its water content is, whether it has a core and what is happening at its rarely seen poles (see ‘Mission to Jupiter’).
Scientists think that Jupiter was the first planet to condense out of the gases that swirled around the newborn Sun 4.6 billion years ago. As such, it is made up of some of the most primordial material in the Solar System. Scientists know that it consists mostly of hydrogen and helium, but they are eager to pin down the exact amounts of other elements found on the planet.
“What we really want is the recipe,” says Scott Bolton, the mission’s principal investigator and a planetary scientist at the Southwest Research Institute in San Antonio, Texas.
Jupiter’s familiar visage, with its broad brown belts and striking Great Red Spot, represents only the tops of its churning clouds of ammonia and hydrogen sulfide. Juno — named after the Roman goddess who could see through clouds — will peer hundreds of kilometers into the planet’s atmosphere using microwave wavelengths.
Exploration of Jupiter’s interior should reveal more about the formidable atmospheric convection that powers the planet, says Paul Steffes, an electrical engineer at the Georgia Institute of Technology in Atlanta. In anticipation of Juno’s arrival, professional and amateur astronomers have been observing Jupiter with ground-based and space-based telescopes. For now, the planet is not experiencing any unusual atmospheric changes. “It’s kind of in its normal state, which is good,” says Amy Simon, a planetary scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. This ‘normal’ behaviour gives researchers confidence that they will be able to understand Juno’s findings.
The Great Red Spot continues to shrink, as it has done in recent years, and to interact less and less with the jet streams on either of its edges. The broad belt just north of the planet’s equator has been expanding since late 2015 — a change that might be connected to processes deep in the atmosphere. “Trying to connect events that are happening at one level to events happening in another tells you how well coupled the whole atmosphere is,” says Leigh Fletcher, a planetary astronomer at the University of Leicester, UK.
As Juno probes deeper and deeper into the planet’s atmosphere, researchers hope to get information on a layer of hydrogen compressed into a liquid by increasing pressures. That liquid conducts electricity, which powers Jupiter’s enormous magnetic field. Deeper still, the spacecraft will look for evidence of a core — a dense nugget of heavier elements that most scientists think exists, but has never been observed. Juno will make precise measurements of how Jupiter’s gravity tugs on the spacecraft, which should reveal whether a core is present.
Juno will provide scientists with the last chance to look at Jupiter for a long time. It is scheduled to make 37 total orbits before performing a kamikaze run in early 2018, burning up inside the planet’s clouds to keep it from contaminating the moon Europa. The only other mission planned to the gas giant is the European Space Agency’s Jupiter Icy Moons Explorer (JUICE) spacecraft, which could launch as early as 2022 and will focus mainly on the moon Ganymede.
Getting inside the human body to have a look around is always going to be invasive, but that doesn't mean more can't be done to make things a little more comfortable. With this goal in mind, German researchers have developed a complex lens system no bigger than a grain of salt that fits inside a syringe. The imaging tool could make for not just more productive medical imaging, but tiny cameras for everything from drones to slimmer smartphones.
Scientists from the University of Stuttgart built their three-lens camera using a new 3D printing technique. They say their new approach offers sub-micrometer accuracy that makes it possible to 3D print optical lens systems with two or more lenses for the first time. Their resulting multi-lens system opens up the possibility of correcting for aberration (where a lens cannot bring all wavelengths of color to the same focal plane), which could enable higher image quality from smaller devices.
Here's how they did it. Using a femtosecond laser, where the pulse durations were shorter than 100 femtoseconds (a femtosecond is one quadrillionth of a second), they blasted a light-sensitive material resting on a glass substrate. Two photons are absorbed by the material, which exposes it and crosslinks polymers within. Unexposed material is then washed away with a solvent, leaving behind the hardened, crosslinked polymer used to form the optical element.
The team used this approach to print imaging components for optical microscopes with a diameter and height of 125 micrometers, and then attached them to the end of a 5.6-ft (1.7-m) optical fiber the width of two human hairs. The camera on the end of this small endoscope is capable of focusing on images from a distance of 3 mm (0.12 in). The team says the entire imaging system fits comfortably inside a standard syringe needle, which raises the possibility of delivering it directly to organs, and even the brain.
Learning from your mistakes is a key life lesson, and it's one that researchers at Pacific Northwest National Laboratory (PNNL) can attest to. After unintentionally creating carbon-rich nanorods, the team realized its accidental invention behaves weirdly with water, demonstrating a 20-year-old theory and potentially paving the way to low-energy water harvesting systems and sweat-removing fabrics.
The researchers note that ordinarily materials will absorb more water as the humidity in the air around them increases. But between 50 and 80 percent relative humidity, these nanorods will actually do the opposite and expel water, a behavior they say is not shared by any other material. Below that range, they behave as normal, so the process is reversible by lowering the humidity again.
"Our unusual material behaves a bit like a sponge; it wrings itself out halfway before it's fully saturated with water," says David Lao, PNNL research associate and creator of the material. These nanorods were created by mistake while trying to fabricate magnetic nanowires, and the researchers decided to give the accidents a closer look. On examining them with a vapor analysis instrument, Satish Nune, one of the authors of the research paper, noticed that the structures were actually losing weight as the humidity increased.
Assuming the equipment was malfunctioning, the scientists switched to a microscope and were able to observe water appearing from between the branches of the nanorods, then evaporating at higher humidity. To find out why, the team looked to previous work and found papers from 2012 and 2013 explaining how water can spontaneously vaporize when confined in an area just 1.5 nm wide, or when tightly surrounded by hydrophobic materials. Observations even go as far back as the 1990s, when scientists experimenting with crystallized proteins noticed similar happenings and theorized that some unknown process was allowing the water to rapidly evaporate.
The recent research at PNNL appears to be the first time this phenomenon has been directly seen in action. The team's hypothesis was that the water is condensing and drawing the branches of the nanorods together, and when they reach the 1.5 nm threshold, as specified in the previous work, the water quickly evaporates.
"Now that we've gotten over the initial shock of this unforeseen behavior, we're imagining the many ways it could be harnessed to improve the quality of our lives," says David Heldebrant, the second author of the paper.
Scientists discovered an ancient fungus garden grown by termites millions of years ago.
The fossil structures bore every hallmark of a prehistoric farm: Crops were arranged according to an intricate, complex plan. Material for harvesting littered the ground. Analysis revealed that the crop was a species that only grows when cultivated.
This was agriculture — but underground and on a micro scale. At 25 million years old, it was also far more ancient than anything constructed by humans; Homo sapiens didn't even exist yet.
Instead, the farmers who tilled these ancient plots were termites. And their harvest was fungus. The fossilized termite gardens, uncovered from exposed cliff sides in the Rukwa Rift Basin of southwestern Tanzania, are the oldest physical evidence of farming on Earth, scientists report in the journal PLOS One this week.
"It captures a record of the evolutionary coupling of termites and fungus ... and allows us to trace back the antiquity of this symbiotic relationship," lead author Eric Roberts, a geologist at James Cook University in Australia, wrote in an email. "The new fossils help us to calibrate our evolutionary clocks and use them to better understand when this symbiosis first developed, which we now think was probably around 31 million years ago."
Across Africa, the massive, edible "termite mushrooms" grown by these tiny insects are famous. But European scientists didn't realize the significance of what was happening inside termite colonies until the mid-20th century. When researchers dissected towering termite mounds, which contain dozens of interlocking chambers and can grow taller than a person, they saw that the insects weren't just eating the fungus that grew alongside them — they were cultivating it.
A new approach to gas exploration has discovered a huge helium gas field, which could address the increasingly critical shortage of this vital yet rare element.
Helium doesn't just make your voice squeaky - it is critical to many things we take for granted, including MRI scanners in medicine, welding, industrial leak detection and nuclear energy. However, known reserves are quickly running out. Until now, helium had never been found intentionally; it had only been discovered accidentally, in small quantities, during oil and gas drilling.
Now, a research group from Oxford and Durham universities, working with Helium One, a helium exploration company headquartered in Norway, has developed a brand new exploration approach. The first use of this method has resulted in the discovery of a world-class helium gas field in Tanzania.
Their research shows that volcanic activity provides the intense heat necessary to release the gas from ancient, helium-bearing rocks. Within the Tanzanian East African Rift Valley, volcanoes have released helium from ancient deep rocks and have trapped this helium in shallower gas fields. The research is being presented by Durham University PhD student Diveena Danabalan at the Goldschmidt geochemistry conference in Yokohama, Japan.
Diveena Danabalan, of Durham University's Department of Earth Sciences, said: "We show that volcanoes in the Rift play an important role in the formation of viable helium reserves. Volcanic activity likely provides the heat necessary to release the helium accumulated in ancient crustal rocks. However, if gas traps are located too close to a given volcano, they run the risk of helium being heavily diluted by volcanic gases such as carbon dioxide, just as we see in thermal springs from the region. We are now working to identify the 'goldilocks-zone' between the ancient crust and the modern volcanoes where the balance between helium release and volcanic dilution is 'just right'."
Prof. Chris Ballentine, Department of Earth Sciences, University of Oxford, said: "We sampled helium gas (and nitrogen) just bubbling out of the ground in the Tanzanian East African Rift valley. By combining our understanding of helium geochemistry with seismic images of gas trapping structures, independent experts have calculated a probable resource of 54 Billion Cubic Feet (BCf) in just one part of the rift valley. This is enough to fill over 1.2 million medical MRI scanners. To put this discovery into perspective, global consumption of helium is about 8 BCf per year and the United States Federal Helium Reserve, which is the world's largest supplier, has a current reserve of just 24.2 BCf. Total known reserves in the USA are around 153 BCf. This is a game changer for the future security of society's helium needs and similar finds in the future may not be far away."
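The figures Ballentine quotes can be sanity-checked with simple arithmetic. The sketch below uses only the numbers stated in the article (54 BCf resource, 8 BCf annual global consumption, 1.2 million MRI scanner fills); the derived values are back-of-envelope estimates, not figures from the study.

```python
# Back-of-envelope check of the helium figures quoted above.
tanzania_resource_bcf = 54.0          # probable resource, one part of the rift valley
global_consumption_bcf_per_year = 8.0 # approximate annual global consumption
scanner_fills = 1.2e6                 # "enough to fill over 1.2 million MRI scanners"

# Years of global demand the Tanzanian find alone could cover
years_of_supply = tanzania_resource_bcf / global_consumption_bcf_per_year
print(f"Covers ~{years_of_supply:.1f} years of global consumption")

# Helium per scanner implied by the article's own comparison
cubic_feet_per_scanner = tanzania_resource_bcf * 1e9 / scanner_fills
print(f"Implied fill per scanner: ~{cubic_feet_per_scanner:,.0f} cubic feet")
```

Dividing 54 BCf by the 8 BCf consumed per year gives roughly seven years of global supply from this one find, which is what makes it more than double the current US Federal Helium Reserve.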
It was just over 40 years ago that the concept of a solar power satellite (SPS) first emerged. American scientist and aerospace engineer Dr. Peter Glaser was granted a patent for a broadcast system using a one-square-kilometer antenna to channel power via microwaves to a receiver on the ground. The advantage of such a system, and of space-based solar power in general, is that it harnesses the unobstructed output of the sun, unlike land-based solar systems, which are affected by the weather and Earth's day/night cycle.
While Glaser's proposal never got off the ground, it did inspire further investigation of the potential of space-based solar power by various government departments and institutions. In 2008, a company called Space Energy conducted a long-range wireless power transmission test using a microwave beam between two Hawaiian islands, a distance of 148 km (91.96 mi). The result was a power yield of 1/1000th of one percent on the receiving end, raising questions over whether the technique could be employed over the much larger distance between a satellite in geosynchronous Earth orbit (GEO) and a ground station.
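To see how small that yield really is, the sketch below converts "1/1000th of one percent" into received power. The transmitter power is a made-up illustrative figure (the article does not state one); only the yield fraction comes from the text.

```python
# "1/1000th of one percent" end-to-end yield, from the Hawaii test described above.
yield_fraction = 0.001 / 100      # = 1e-5

# Hypothetical transmitter power chosen purely for illustration.
transmit_power_w = 20_000.0       # 20 kW sent

received_power_w = transmit_power_w * yield_fraction
print(f"Received: {received_power_w:.2f} W of {transmit_power_w:,.0f} W sent")
```

At that efficiency, a 20 kW beam would deliver only about 0.2 W, which is why the result raised doubts about scaling the technique to the GEO-to-ground distance.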
Writing in IEEE Spectrum, Susumu Sasaki, Professor Emeritus at JAXA, argues that this experiment failed largely because the dense atmosphere disturbed the microwaves' phases during horizontal transmission. In detailing the agency's proposal, he emphasized that in a space-based system the microwaves only need to pass through this dense atmosphere for the last few kilometers of their journey. This, along with new designs for the solar power satellites and anticipated advances in technology over the coming decades, gives JAXA confidence that it can eventually achieve effective wireless transmission of solar energy over the necessary 36,000 km (22,500 miles) from GEO.
JAXA is working on two concepts. The simpler one involves a huge square panel that measures 2 km (1.24 mi) per side. The top surface would be covered with photovoltaic elements, with transmission antennas on the bottom side. A small bus housing controls and communication systems would be tethered to the panel via 10 km (6.2 mi) long wires. A limitation with this design is that the orientation of the panel is fixed, meaning that as the Earth and the satellite spin, the amount of sunlight the panel receives will vary, impacting its ability to generate power.
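A rough sense of scale for the 2 km panel: the sketch below multiplies its area by the solar constant (about 1361 W/m² above the atmosphere, a standard figure not stated in the article). The end-to-end efficiency is an assumed placeholder, not a JAXA number.

```python
SOLAR_CONSTANT_W_M2 = 1361.0   # mean solar irradiance above Earth's atmosphere
side_m = 2_000.0               # 2 km per side, from the JAXA concept above

area_m2 = side_m ** 2          # 4,000,000 m^2 of collecting surface
incident_w = SOLAR_CONSTANT_W_M2 * area_m2

efficiency = 0.10              # assumed end-to-end efficiency (hypothetical)
delivered_w = incident_w * efficiency

print(f"Incident sunlight: {incident_w / 1e9:.2f} GW")
print(f"Delivered at 10% end-to-end efficiency: {delivered_w / 1e9:.2f} GW")
```

Even with a modest assumed efficiency, a panel this size intercepts gigawatts of sunlight, which is what makes the fixed-orientation limitation (and the resulting variation in illumination) worth engineering around.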