The number of climate refugees could increase dramatically in the future. Researchers at the Max Planck Institute for Chemistry and the Cyprus Institute in Nicosia have calculated that the Middle East and North Africa could become so hot that human habitability is compromised. The goal of limiting global warming to less than two degrees Celsius, agreed at the recent UN climate summit in Paris, will not be sufficient to prevent this scenario. Summer temperatures in the already very hot Middle East and North Africa will rise more than twice as fast as the global average. This means that on hot days, temperatures south of the Mediterranean will reach around 46 degrees Celsius (approximately 114 degrees Fahrenheit) by mid-century. Such extremely hot days will occur five times more often than at the turn of the millennium. Combined with increasing air pollution from windblown desert dust, the environmental conditions could become intolerable and may force people to migrate.
Something massive, with roughly 1,000 times the area of Earth, is blocking the light coming from a distant star known as KIC 8462852, and nobody is quite sure what it is. As astronomer Tabetha Boyajian investigated this perplexing celestial object, a colleague suggested something unusual: Could it be an alien-built megastructure? Such an extraordinary idea would require extraordinary evidence. In this talk, Boyajian gives us a look at how scientists search for and test hypotheses when faced with the unknown.
"Since lasers were invented more than 50 years ago, they have transformed a diverse swath of technology—from CD players to surgical instruments."
"Now researchers from France and Hungary have invented a way to print lasers that's so cheap, easy and efficient they believe the core of the laser could be disposed of after each use. The team reports its findings in the Journal of Applied Physics.
"The low-cost and easiness of laser chip fabrication are the most significant aspects of our results," said Sébastien Sanaur, an associate professor in the Center of Microelectronics in Provence at the Ecole Nationale Supérieure des Mines de Saint-Étienne in France.
Sanaur and his colleagues made organic lasers, which amplify light with carbon-containing materials. Organic lasers are not as common as inorganic lasers, like those found in laser pointers, DVD players, and optical mice, but they offer benefits such as high-yield photonic conversion, easy fabrication, low cost, and a wide range of wavelengths.
"Researchers have developed the first imaging technique that can clearly see inside molecular structures, and have used it to create 3D holograms of the atomic arrangements inside these structures.
Before now, reliable imaging techniques (for example, scanning tunneling microscopy) could only scan the surfaces of molecules. The ability to peer deep inside a molecular structure and see all of the individual atoms will be essential for developing new materials and understanding their unique physical and chemical properties.
The researchers, Tobias Lühr et al., have published a paper on the new imaging technique in a recent issue of Nano Letters.
The new holographic imaging method significantly improves upon the previous methods: It almost completely eliminates image artifacts, has the ability to image thousands of atoms, and can also distinguish between different types of atoms.
The researchers demonstrated the technique by creating 3D holograms of pyrite (FeS2).
The holography method works by scattering electron waves off a molecule's atoms. Interference between the emitted and scattered electron waves creates diffraction patterns. This information is then used to reconstruct 3D holographic images showing the atoms' true locations.
MIT scientists have developed a 5-atom quantum computer, one that could eventually help render traditional encryption obsolete. The creation of this five-atom quantum computer comes in response to a challenge posed in 1994 by Professor Peter Shor of MIT. Professor Shor developed a quantum algorithm that’s able to calculate a large number’s prime factors more efficiently than traditional computers, with 15 being the smallest figure to meaningfully demonstrate the algorithm.
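To make the factoring challenge concrete, here is a rough classical sketch of Shor's approach for N = 15; only the period-finding loop is what the quantum hardware actually performs, and the function name and choice of base are illustrative rather than taken from the MIT experiment:

```python
from math import gcd

# Classical skeleton of Shor's algorithm for N = 15 (illustrative sketch;
# the base a = 7 is an assumed choice). On quantum hardware, only the
# period-finding step below is done quantum mechanically.
def shor_classical(N=15, a=7):
    assert gcd(a, N) == 1, "the base must be coprime to N"
    # "Quantum" step, brute-forced here: find the period r of a^x mod N.
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    # Classical post-processing: an even period usually yields the factors.
    if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
        return gcd(a ** (r // 2) - 1, N), gcd(a ** (r // 2) + 1, N)
    return None  # unlucky base; retry with a different a

print(shor_classical())  # -> (3, 5)
```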
The new system was able to return the correct factors with a confidence upwards of 99 percent. Professor Isaac Chuang of MIT said: “We show that Shor’s algorithm, the most complex quantum algorithm known to date, is realizable in a way where, yes, all you have to do is go in the lab, apply more technology, and you should be able to make a bigger quantum computer.”
Of course, this may be a little easier said than done. “It might still cost an enormous amount of money to build—you won’t be building a quantum computer and putting it on your desktop anytime soon—but now it’s much more an engineering effort, and not a basic physics question,” Chuang added.
Yet Chuang and his team are hopeful for the future of quantum computing, saying that they “foresee it being straightforwardly scalable, once the apparatus can trap more atoms and more laser beams can control the pulses…We see no physical reason why that is not going to be in the cards.”
Is there life beyond our solar system? If there is, our best bet for finding it may lie in three nearby, Earth-like exoplanets.
For the first time, an international team of astronomers from MIT, the University of Liège in Belgium, and elsewhere has detected three planets orbiting an ultracool dwarf star, just 40 light years from Earth. The sizes and temperatures of these worlds are comparable to those of Earth and Venus, and they are the best targets found so far for the search for life outside the solar system. The results are published today in the journal Nature.
The scientists discovered the planets using TRAPPIST (TRAnsiting Planets and PlanetesImals Small Telescope), a 60-centimeter telescope operated by the University of Liège, based in Chile. TRAPPIST is designed to focus on 60 nearby dwarf stars—very small, cool stars that are so faint they are invisible to optical telescopes. Belgian scientists designed TRAPPIST to monitor dwarf stars at infrared wavelengths and search for planets around them.
The team focused the telescope on the ultracool dwarf star 2MASS J23062928-0502285, now known as TRAPPIST-1, a Jupiter-sized star that is one-eighth the size of our sun and significantly cooler. Over several months starting in September 2015, the scientists observed the star's infrared signal fade slightly at regular intervals, suggesting that several objects were passing in front of the star.
With further observations, the team confirmed the objects were indeed planets, with sizes similar to Earth and Venus. The two innermost planets orbit the star in 1.5 and 2.4 days, yet they receive only four and two times, respectively, the amount of radiation that Earth receives from the sun. The third planet may orbit the star in anywhere from four to 73 days, and may receive even less radiation than Earth. Given their size and proximity to their ultracool star, all three planets may have regions with temperatures well below 400 kelvins, within a range that is suitable for sustaining liquid water and life.
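As a back-of-envelope check on that 400-kelvin figure, one can scale Earth's no-greenhouse equilibrium temperature (about 255 K) by the fourth root of the relative stellar flux; the sketch below uses only the 4x and 2x flux values quoted above and textbook simplifications throughout:

```python
# Rough equilibrium-temperature estimate: T_eq scales as (relative flux)^(1/4).
# Earth's ~255 K no-greenhouse value and the flux ratios above are the only
# inputs; albedo differences and greenhouse effects are ignored.
T_EQ_EARTH = 255.0  # kelvins

for name, flux in [("innermost planet", 4.0), ("second planet", 2.0)]:
    print(f"{name}: ~{T_EQ_EARTH * flux ** 0.25:.0f} K")
# -> ~361 K and ~303 K, both comfortably below 400 K
```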
Because the system is just 40 light years from Earth, co-author Julien de Wit, a postdoc in the Department of Earth, Atmospheric, and Planetary Sciences, says scientists will soon be able to study the planets' atmospheric compositions, as well as assess their habitability and whether life actually exists within this planetary system.
A unique observatory buried deep in the clear ice of the South Pole region, an orbiting observatory that monitors gamma rays, a powerful outburst from a black hole 10 billion light years away, and a super-energetic neutrino named Big Bird: this is the cast of characters that populates a paper published in Nature Physics on Monday, April 18.
The observatory that resides deep in the cold dark of the Antarctic ice has one job: to detect neutrinos. Neutrinos are strange, standoffish particles, sometimes called ‘ghost particles’ because they’re so difficult to detect. They’re like the noble gases of the particle world. Though neutrinos vastly outnumber all the atoms in our Universe, they rarely interact with other particles, and they have no electric charge. This allows them to pass through normal matter almost unimpeded. To even detect them, you need a dark, undisturbed place, isolated from cosmic rays and background radiation.
This explains why they built an observatory in solid ice. This observatory, called the IceCube Neutrino Observatory, is the ideal place to detect neutrinos. On the rare occasion when a neutrino does interact with the ice surrounding the observatory, a charged particle is created. This particle can be an electron, muon, or tau. If these charged particles are of sufficiently high energy, the strings of detectors that make up IceCube can detect them. Once this data is analyzed, the source of the neutrinos can be determined.
The next actor in this scenario is NASA’s Fermi Gamma-Ray Space Telescope. Fermi was launched in 2008, with a specific job in mind. Its job is to look at some of the exceptional phenomena in our Universe that generate extraordinarily large amounts of energy, like super-massive black holes, exploding stars, jets of hot gas moving at relativistic speeds, and merging neutron stars. These things generate enormous amounts of gamma-ray energy, the part of the electromagnetic spectrum that Fermi looks at exclusively.
Next comes PKS B1424-418, a distant galaxy with a black hole at its center. Because the jet from this black hole points at Earth, the galaxy is classified as a blazar. About 10 billion years ago, the black hole produced a powerful outburst of energy, and the light from this outburst started arriving at Earth in 2012. For a year, the blazar in PKS B1424-418 shone 15 to 30 times brighter in the gamma spectrum than it did before the burst.
Detecting neutrinos is a rare occurrence. So far, IceCube has detected about a hundred of them. For some reason, the most energetic of these neutrinos are named after characters on the popular children's show Sesame Street. In December 2012, IceCube detected an exceptionally energetic neutrino, and named it Big Bird. Big Bird had an energy level greater than 2 quadrillion electron volts. That's an enormous amount of energy shoved into a particle thought to have less than one millionth the mass of an electron.
The latest study focused on the brain's system for decoding language, called the semantic system.
Scientists at the University of California, Berkeley played seven volunteers two hours of narrative stories and recorded their brain activity using functional MRI scanners.
These scans recorded changes in the flow of oxygenated blood to different regions of the brain - measured as tiny cubes called voxels - which coincided with words being spoken.
In the first round of experiments, volunteers were played the audio while being scanned, and the audio was then transcribed to synchronise the words and fMRI responses. But as a second test, to see if they could predict the fMRI activity, the team made the volunteers listen to a new story.
Analysis from the first test showed increased activity in voxels that correlated with the meanings and context of the words - so the fMRI observations served as an indirect measurement of brain activity for more than 10,470 words in total.
The team used this information to build computer models to predict brain activity for the new story, and found that their models could predict the brain responses relatively well, building up detailed views of the semantic system.
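In spirit, such encoding models are regularized regressions from word features to each voxel's response, validated by correlating predictions with measurements on a held-out story. The sketch below uses random stand-in data throughout; the Berkeley team's actual features and model details differ:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy voxelwise encoding model with random stand-in data (not the study's).
rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 300, 60, 50, 500

W = rng.normal(size=(n_feat, n_vox))          # "true" semantic tuning
X_train = rng.normal(size=(n_train, n_feat))  # word features, training story
X_test = rng.normal(size=(n_test, n_feat))    # word features, held-out story
Y_train = X_train @ W + rng.normal(scale=2.0, size=(n_train, n_vox))
Y_test = X_test @ W + rng.normal(scale=2.0, size=(n_test, n_vox))

model = Ridge(alpha=10.0).fit(X_train, Y_train)  # one regression per voxel
pred = model.predict(X_test)

# Score each voxel by correlating predicted and observed held-out responses.
corrs = [np.corrcoef(pred[:, v], Y_test[:, v])[0, 1] for v in range(n_vox)]
print(f"median held-out voxel correlation: {np.median(corrs):.2f}")
```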
The UCB team explained: 'These semantic maps give us, for the first time, a detailed map of how meaning is represented across the human cortex. Rather than being limited to a few brain areas, we find that language engages very broad regions of the brain.'
For nearly 80 years, nuclear fission has awaited a description within a microscopic framework. In the first study of its kind, scientists from the University of Washington, Warsaw University of Technology (Poland), Pacific Northwest National Laboratory, and Los Alamos National Laboratory collaborated to develop a novel model that takes a more intricate look at what happens during the last stages of the fission process. Using the model, they determined that fission fragments remain connected far longer than expected before the daughter nuclei split apart. Moreover, they noted the predicted kinetic energy agreed with results from experimental observations. This discovery indicates that complex calculations of real-time fission dynamics without physical restrictions are feasible and opens a pathway to a theoretical microscopic framework with abundant predictive power.
It might be said that the most difficult part of building a quantum computer is not figuring out how to make it compute, but rather finding a way to deal with all of the errors that it inevitably makes. In order to flip the qubits back to their correct states, physicists have been developing an assortment of quantum error correction techniques. Most of them work by repeatedly making measurements on the system to detect errors and then correct the errors before they can proliferate. These approaches typically have a very large overhead, where a large portion of the computing power goes to correcting errors.
In a new paper published in Physical Review Letters, Eliot Kapit, an assistant professor of physics at Tulane University in New Orleans, has proposed a different approach to quantum error correction. His method takes advantage of a recently discovered unexpected benefit of quantum noise: when carefully tuned, quantum noise can actually protect qubits against unwanted noise. Rather than actively measuring the system, the new method passively and autonomously suppresses and corrects errors, using relatively simple devices and relatively little computing power.
"The most interesting thing about my work is that it shows just how simple and small a fully error corrected quantum circuit can be, which is why I call the device the 'Very Small Logical Qubit,'" Kapit told Phys.org. "Also, the error correction is fully passive—unwanted error states are quickly repaired by engineered dissipation, without the need for an external computer to watch the circuit and make decisions. While this paper is a theoretical blueprint, it can be built with current technology and doesn't require any new insights to make it a reality."
The new passive error correction circuit consists of just two primary qubits, in contrast to the 10 or more qubits required in most active approaches. The two qubits are coupled to each other, and each one is also coupled to a "lossy" object, such as a resonator, that experiences photon loss.
"In the absence of any errors, there are a pair of oscillating photon configurations that are the 'good' logical states of the device, and they oscillate at a fixed frequency based on the circuit parameters," Kapit explained. "However, like all qubits, the qubits in the circuit are not perfect and will slowly leak photons into the environment. When a photon randomly escapes from the circuit, the oscillation is broken, at which point a second, passive error correction circuit kicks in and quickly inserts two photons, one which restores the lost photon and reconstructs the oscillating logical state, and the other is dumped to a lossy circuit element and quickly leaks back out of the system. The combination of careful tuning of the resonant frequencies of the circuit and adding photons two at a time to correct losses ensures that the passive error correction circuit can operate continuously but won't do anything to the two good qubits unless their oscillation has been broken by a photon loss."
MIT researchers have devised a new set of proteins that can be customized to bind arbitrary RNA sequences, making it possible to image RNA inside living cells, monitor what a particular RNA strand is doing, and even control RNA activity. The new strategy is based on human RNA-binding proteins that normally help guide embryonic development. The research team adapted the proteins so that they can be easily targeted to desired RNA sequences. “You could use these proteins to do measurements of RNA generation, for example, or of the translation of RNA to proteins,” says Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at the MIT Media Lab. “This could have broad utility throughout biology and bioengineering.”
Unlike previous efforts to control RNA with proteins, the new MIT system consists of modular components, which the researchers believe will make it easier to perform a wide variety of RNA manipulations. “Modularity is one of the core design principles of engineering. If you can make things out of repeatable parts, you don’t have to agonize over the design. You simply build things out of predictable, linkable units,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research.
Boyden is the senior author of a paper describing the new system in the Proceedings of the National Academy of Sciences. The paper’s lead authors are postdoc Katarzyna Adamala and grad student Daniel Martin-Alarcon.
Living cells contain many types of RNA that perform different roles. One of the best known varieties is messenger RNA (mRNA), which is copied from DNA and carries protein-coding information to cell structures called ribosomes, where mRNA directs protein assembly in a process called translation. Monitoring mRNA could tell scientists a great deal about which genes are being expressed in a cell, and tweaking the translation of mRNA would allow them to alter gene expression without having to modify the cell’s DNA.
To achieve this, the MIT team set out to adapt naturally occurring proteins called Pumilio homology domains. These RNA-binding proteins include sequences of amino acids that bind to one of the ribonucleotide bases that make up RNA. In recent years, scientists have been working on developing these proteins for experimental use, but until now it was more of a trial-and-error process to create proteins that would bind to a particular RNA sequence.
“It was not a truly modular code,” Boyden says, referring to the protein’s amino acid sequences. “You still had to tweak it on a case-by-case basis. Whereas now, given an RNA sequence, you can specify on paper a protein to target it.”
To create their code, the researchers tested out many amino acid combinations and found a particular set of amino acids that will bind each of the four bases at any position in the target sequence. Using this system, which they call Pumby (for Pumilio-based assembly), the researchers effectively targeted RNA sequences varying in length from six to 18 bases.
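On paper, such a modular code reduces protein design to a lookup: one recognition module per target base, chained in sequence order. The sketch below is illustrative only; its two-letter residue codes are the commonly cited PUF recognition code from the wider literature, and the exact Pumby modules and their orientation along the RNA are specified in the PNAS paper:

```python
# Hypothetical base-to-module lookup in the spirit of Pumby; residue pairs
# are placeholders taken from the commonly cited PUF recognition code.
PUF_CODE = {
    "A": "CQ",  # cysteine + glutamine
    "U": "NQ",  # asparagine + glutamine
    "G": "SE",  # serine + glutamate
    "C": "SR",  # serine + arginine
}

def design_pumby(target_rna):
    """Return one recognition module per base of the target sequence."""
    target_rna = target_rna.upper()
    if not 6 <= len(target_rna) <= 18:
        raise ValueError("Pumby chains were demonstrated for 6 to 18 bases")
    return [PUF_CODE[base] for base in target_rna]

print(design_pumby("UGUAUAUA"))  # eight modules, one per base
```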
“I think it’s a breakthrough technology that they’ve developed here,” says Robert Singer, a professor of anatomy and structural biology, cell biology, and neuroscience at Albert Einstein College of Medicine, who was not involved in the research. “Everything that’s been done to target RNA so far requires modifying the RNA you want to target by attaching a sequence that binds to a specific protein. With this technique you just design the protein alone, so there’s no need to modify the RNA, which means you could target any RNA in any cell.”
Scientists say they now have a near-perfect picture of the genetic events that cause breast cancer.
The study, published in Nature, has been described as a "milestone" moment that could help unlock new ways of treating and preventing the disease. The largest study of its kind unpicked practically all the errors that cause healthy breast tissue to go rogue.
Cancer Research UK said the findings were an important stepping-stone to new drugs for treating cancer. To understand the causes of the disease, scientists have to understand what goes wrong in our DNA that makes healthy tissue turn cancerous. The international team looked at all 3 billion letters of people's genetic code - their entire blueprint of life - in 560 breast cancers. They uncovered 93 sets of instructions, or genes, that, if mutated, can cause tumours. Some have been discovered before, but scientists expect this to be the definitive list, barring a few rare mutations.
Prof Sir Mike Stratton, the director of the Sanger Institute in Cambridge which led the study, said it was a "milestone" in cancer research. "There are about 20,000 genes in the human genome. It turns out, now we have this complete view of breast cancer - there are 93 of those genes that if mutated will convert a normal breast cell into a breast cancer cell. That is an important piece of information," he stated. "We hand that list over to the universities, the pharmaceuticals, the biotech companies to start developing new drugs because those mutated genes and their proteins are targets for new therapeutics. There are now many drugs that have been developed over the last 15 years against such targets which we know work."
Deakin University scientists may have found a way to stop the cancer that has been killing Tasmanian devils for the past 20 years.
Dr Beata Ujvari, from Deakin's Centre for Integrative Ecology within the School of Life and Environmental Sciences, investigated differences in molecules found in the devils' immune systems, comparing those that had the cancer, known as the Tasmanian Devil Facial Tumour Disease, and those that didn't.
"We know from human and animal studies that certain natural antibodies are able to recognise and kill cancerous cells, so we wanted to see whether the presence of these molecules would also determine tumor development in Tasmanian devils," Dr Ujvari said.
"We found that devils that have a higher ratio of these natural antibodies were less likely to have cancer. "We can deduce then that devils with higher natural antibody ratio are therefore less susceptible to the contagious cancer."
Dr Ujvari said the results could potentially halt the spread of disease that has devastated the Tasmanian devil population since its first sighting in 1996, hopefully enabling new vaccine and treatment options.
The research, "Immunoglubolin dynamics and cancer prevalence in Tasmanian devils (Sarcophilus harrisii)" is published in the latest edition of Nature Scientific Reports. "Anti-tumor vaccines that enhance the production of these natural antibodies, or direct treatment of the cancer with natural antibodies, could become a solution to help halt this disease," Dr Ujvari said.
"This process known as 'active immunotherapy', is becoming more and more accepted in treating human cancers, and we think it could be the magic bullet in saving the Tasmanian devils from extinction."
The facial cancer is spread from devil to devil via biting during social interactions, and has caused massive population declines of Tasmanian devils since it was first sighted in Tasmania in 1996. Dr Ujvari said that because the cancer was transmitted from devil to devil, their immune systems should recognise the cells as foreign objects, like a pathogen, and work to eliminate them from the victim's system.
"However, this disease's cells are able to avoid recognition by the devils' immune systems and develop into large ulcerating tumors that ultimately kills the animals," she said.
Two black holes, with masses 29 and 35 times the mass of the Sun, merged to form an even bigger black hole. The merger converted three entire suns' worth of matter into pure energy in the form of gravitational waves. The waves travelled a billion light years before a tiny meat-filled species on a pale blue dot in space figured out how to see them. Thanks to the smartest one that species had seen in a century, they knew that black holes might merge, and that they would produce these waves if they ever collided. They put so much trust in his proven theory that they searched for many years to find the waves he predicted. Exactly 100 years after his famous theory was released, their hard work paid off, and they celebrated one of the most significant discoveries in meat-filled history.
"As we age, tiny blood vessels in the brain stiffen and sometimes rupture, causing "microbleeds." This damage has been associated with neurodegenerative diseases and cognitive decline, but whether the brain can naturally repair itself beyond growing new blood-vessel tissue has been unknown.
A zebrafish study published on May 3 in Immunity describes for the first time how white blood cells called macrophages can grab the broken ends of a blood vessel and stick them back together. "
A study from MIT and the National University of Singapore links allergies caused by house dust mites to DNA damage. The findings could help predict asthma patients’ risk for lung tissue damage.
House dust mites, which are a major source of allergens in house dust, can cause asthma in adults and children. Researchers from MIT and the National University of Singapore have now found that these mites have a greater impact than previously known — they induce DNA damage that can be fatal to lung cells if the damaged DNA is not adequately repaired.
The findings suggest that DNA repair capacity, which varies widely among healthy individuals, could be a susceptibility factor that places an asthmatic patient at increased risk of developing asthma-associated pathologies, the researchers say.
“DNA damage is a component in asthma development, potentially contributing to the worsening of asthma. In addition to activation of immune responses, patients’ DNA repair capacity may affect disease progression,” says Bevin Engelward, a professor of biological engineering at MIT and a senior author of the study. “Ultimately, screening for DNA repair capacity might be used to predict the development of severe asthma.”
Fred Wong Wai-Shiu, head of the Department of Pharmacology at the National University of Singapore, is also a senior author of the study, which appears in the May 1 issue of the Journal of Allergy and Clinical Immunology. The paper's lead author is Tze Khee Chan, a graduate student in the Singapore-MIT Alliance for Research and Technology (SMART).
Largest-ever study of breast cancer genomes, led by Wellcome Genome Campus researchers, reveals new genes and mutations involved in the disease.
The study has uncovered five new genes associated with the disease and 13 new mutational signatures that influence tumor development. Published in Nature and Nature Communications, two studies from the Wellcome Genome Campus pinpoint where genetic variations in breast cancers occur. The findings provide insights into the causes of breast tumours and demonstrate that breast-cancer genomes are highly individual.
Each patient's cancer genome provides a complete historical account of the genetic changes that person has acquired throughout life. As a person develops from a fertilised egg into full adulthood, their DNA gathers genetic changes along the way. Human DNA is constantly being damaged, either by things in the environment or simply from regular wear and tear in the cell. These mutations form patterns, called mutational signatures, that can be detected and that give us clues about the causes of cancer.
Ultrasound, also called sonography, is essentially a type of 'medical sonar'. It has revolutionized medicine since the 1940s, giving us the ability to look into the body in a completely safe way, without leaving icky radiation behind like X-rays do.
Beyond predicting whether your baby shower will be blue or pink, lesser known applications of ultrasound include the ability to essentially burn and destroy cells inside your body. As such, it has been successfully used to do surgery without making any cuts into the human body. This is a technique that has been used to remove cancerous cells while not affecting any of the surrounding tissue, and without any of the side-effects associated with other kinds of cancer treatment. This is referred to by scientist Yoav Medan as focused ultrasound. If you are unfamiliar with this, you need to watch this TED talk. Non-invasive procedures like this are the future of surgery.
Non-invasive procedures are also the future of neuroscience. It is at this point that we find ourselves at the application of this astonishing science to memory research. As of very recently, scientists have been able to use ultrasound to selectively and non-invasively control brain cells. In other words, we can remote-control individual cells in the brain. We can send a pulse of sound into a brain and change what that creature thinks and does. Crazy, right?
The underlying technology is called sonogenetics, and it was first demonstrated as a technique for controlling brain cells in a paper published by scientist Stuart Ibsen and colleagues in 2015.
Scanning the mitochondrial genomes of thousands of species is beginning to shed light on why some genes were lost while others were retained.
Billions of years ago, one cell—the ancestral cell of modern eukaryotes—engulfed another, a microbe that gave rise to today’s mitochondria. Over evolutionary history, the relationship between our cells and these squatters has become a close one; mitochondria provide us with energy and enjoy protection from the outside environment in return. As a result of this interdependence, our mitochondria, which once possessed their own complete genome, have lost most of their genes: while the microbe that was engulfed so many years ago is estimated to have contained thousands of genes, humans have just 13 protein-coding genes remaining in their mitochondrial DNA (mtDNA).
Some mitochondrial genes have disappeared completely; others have been transferred to our cells’ nuclei for safekeeping, away from the chemically harsh environment of the mitochondrion. This is akin to storing books in a nice, dry, central library, instead of a leaky shed where they could get damaged. In humans, damage to mitochondrial genes can result in devastating genetic diseases, so why keep any books at all in the leaky shed?
Researchers have proposed diverse hypotheses to explain mitochondrial gene retention. Perhaps the products of some genes are hard to introduce into the mitochondrion once they’ve been made elsewhere. (Mitochondria have their own ribosomes and are capable of translating their retained genes in-house.) Or perhaps keeping some mitochondrial genes allows the cell to control each organelle individually. Historically, it has been hard to gather quantitative support for any of these ideas, but in the world of big (and growing) biological data we now have the power to shed light on this question. The mtDNA of thousands of organisms as diverse as plants, worms, yeasts, protists, and humans has now been sequenced, yielding information on the patterns of gene loss and on the gene properties that may have governed this loss.
Can NASA’s next big space telescope take a picture of an alien Earth-like planet orbiting another star? Astronomers have long dreamed of such pictures, which would allow them to study worlds beyond our solar system for signs of habitability and life.
But for as long as astronomers have dreamed, the technology to make those dreams a reality has seemed decades away. Now, however, a growing number of experts believe NASA’s Wide-Field Infrared Survey Telescope (WFIRST) could take snapshots of “other Earths”—and soon. The agency formally started work on WFIRST in February of this year and plans to launch the observatory in 2025.
WFIRST was conceived in 2010 as the top-ranked priority of the National Academy of Sciences’ Decadal Survey, a report from U.S. astronomers that proposes a wish list of future missions for NASA and other federal science agencies. The telescope’s heart is a 2.4-meter mirror that, although the same size and quality as the Hubble Space Telescope’s, promises panoramic views of the heavens a hundred times larger than anything Hubble could manage. Using a camera called the Wide Field Instrument, WFIRST’s primary objective will be to study dark energy, the mysterious force driving the universe’s accelerating expansion. But another hot topic—the existential quest to know whether we are alone in the universe—is already influencing the mission.
Researchers have discovered more than a thousand exoplanets—planets around other stars—since the Decadal Survey’s crucial recommendation of WFIRST as NASA’s top-priority next-generation astrophysics mission. They expect to find tens of thousands more within the next 10 years. Many will be discovered by WFIRST itself when it surveys the Milky Way’s galactic bulge for stars that briefly brighten as planets cross in front of them, acting as gravitational lenses to magnify their light.
That survey could yield at least as many worlds as NASA’s wildly successful planet-hunting Kepler space telescope, which used different techniques to net about 5,000 probable planets before hardware failures ended its primary mission in 2013.
Already, rough statistics from the entirety of known planets suggest that every star in the sky is accompanied by at least one, and that perhaps one in five sunlike stars bears a rocky orb in a not-too-hot, not-too-cold “habitable zone” where liquid water can exist. The best way to learn whether any of these worlds are Earth-like is to see them—but taking a planet’s picture from light-years away is far from easy. A habitable world would be a faint dot lost in the overpowering glare of its larger, 10-billion-times-brighter star. Glimpsing it would be like seeing a firefly fluttering next to a searchlight or a wisp of bioluminescent algae on a wave crashing against a lighthouse.
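That 10-billion-to-one figure follows from a standard back-of-envelope estimate of reflected-light contrast, roughly (albedo / 4) times (planet radius / orbital distance) squared; the sketch below plugs in textbook Earth and Sun values and ignores phase and wavelength effects:

```python
# Back-of-envelope reflected-light contrast for an Earth twin at 1 AU.
R_EARTH_KM = 6371.0   # Earth's radius
AU_KM = 1.496e8       # Earth-Sun distance
ALBEDO = 0.3          # roughly Earth's Bond albedo

contrast = (ALBEDO / 4) * (R_EARTH_KM / AU_KM) ** 2
print(f"planet/star flux ratio ~ {contrast:.1e}")  # ~1.4e-10
```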
The specificity and flexibility of antibodies have made them essential and ubiquitous research tools. For those same reasons, many antibody-based therapies are currently in development. This week in the journal Nature, researchers describe the development and application of a potential antibody-based HIV vaccine alternative. The researchers report that a single dose of monoclonal antibodies can protect research animals for nearly six months from repeated exposure to an HIV-like virus.
For reasons both ethical and practical, the scientists conducted their research using a cohort of rhesus macaques and an HIV-like virus known as a simian/human (SIV/HIV) chimaeric virus, or SHIV. Because HIV is only capable of infecting human beings, the study of HIV in animals requires the use of other viruses, including SHIVs, which are genetically modified HIV-like viruses containing DNA from both HIV and the related simian immunodeficiency virus.
In the newly reported work, animals within experimental groups were prophylactically treated intravenously with one of four different monoclonal antibodies. Both the antibody-protected experimental animals and the unprotected control animals were then exposed weekly to low doses of an SHIV. While the experimental group of animals received the prophylactic antibodies intravenously, viral challenge occurred intrarectally: both experimental and control animals were exposed to the SHIV by infusing the rectal cavity with a 1 mL suspension of the virus. Research animals were exposed to the virus one week after treatment with the prophylactic antibodies and every week thereafter. This aspect of the experimental procedure differed from other previously reported work.
While previous experiments have examined the utility of using antibodies to prevent infection following single high doses of virus, the newly reported work sought to better mimic real world exposure by exposing animals repeatedly to smaller viral titres.
Animals given the prophylactic antibodies were protected from the virus for up to 23 weeks, with the average duration of protection being 12 to 14 weeks. In contrast the animals within the control group, lacking the protective benefit of the antibodies, were infected within an average of 3 weeks.
This new work is promising. In an interview with The Verge, Dr. David Montefiori, a specialist who took no part in the study, suggested that with improving technologies, antibody-based therapeutics may be capable of offering protection for periods even longer than six months.
What are Google's visionaries up to these days? You may be sorry you asked. Discovery reported that an electronic device injected into an eyeball is the focus of a patent filed by Google.
The Google patent application was reported in Forbes. Aaron Tilley said "the device is injected in fluid that then solidifies to couple the device with the eye's lens capsule, the transparent membrane surrounding the lens." The device is injected into the eye and has tiny components, said Tilley: storage, sensors, radio, battery, and an electronic lens. The device gets power wirelessly from an "energy harvesting antenna."
"The whole endeavor appears to be a way of correcting poor vision," saidDiscovery. Tilley at Forbes said, "According to the patent, the electronic lens would assist in the process of focusing light onto the eye's retina." The inventor in the application is listed as Andrew Jason Conrad.
The patent application said, "Elements of the human eye (e.g., the cornea, lens, aqueous and vitreous humor) operate to image the environment of the eye by focusing light from the environment onto the retina of the eye, such that images of elements of the environment are presented in-focus on the retina. The optical power of the natural lens of the eye can be controlled (e.g., by ciliary muscles of the eye) to allow objects at different distances to be in focus at different points in time (a process known as accommodation)."
A variety of reasons, however, are behind decreased focus and degradation of images presented to the retina. "Issues with poor focus can be rectified by the use of eyeglasses and/or contact lenses or by the remodeling of the cornea. Further, artificial lenses can be implanted into the eye (e.g., into the space in front of the iris, into the lens capsule following partial or full removal of the natural lens, e.g., due to the development of cataracts) to improve vision."
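The accommodation described in the patent can be put into rough numbers with the textbook thin-lens relation 1/f = 1/d_object + 1/d_image. The sketch below fixes the eye's image distance at a simplified 17 mm, a common classroom approximation rather than anything taken from the patent:

```python
# Thin-lens picture of accommodation: power P = 1/d_object + 1/d_image.
D_IMAGE_M = 0.017  # simplified lens-to-retina distance (~17 mm)

def required_power(d_object_m):
    """Optical power in diopters needed to focus an object at this distance."""
    return 1.0 / d_object_m + 1.0 / D_IMAGE_M

print(f"distant object:  {required_power(1e9):.1f} D")   # ~58.8 D
print(f"object at 25 cm: {required_power(0.25):.1f} D")  # ~62.8 D, ~4 D more
```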
From visible light to radio waves, most people are familiar with the different sections of the electromagnetic spectrum. But one wavelength is often forgotten, little understood, and, until recently, rarely studied.
"Terahertz is somewhat of a gap between microwaves and infrared," said Northwestern University's Cheng Sun. "People are trying to fill in this gap because this spectrum carries a lot of information."
Sun and his team have used metamaterials and 3-D printing to develop a novel lens that works with terahertz frequencies. Not only does it have better imaging capabilities than common lenses, but it opens the door for more advances in the mysterious realm of the terahertz. Supported by the National Science Foundation, the work was published online on April 22 in the journal Advanced Optical Materials.
"Typical lenses—even fancy ones—have many, many components to counter their intrinsic imperfections," said Sun, associate professor of mechanical engineering at Northwestern's McCormick School of Engineering. "Sometimes modern imaging systems stack several lenses to deliver optimal imaging performance, but this is very expensive and complex."
The focal length of a lens is determined by its curvature and refractive index, which together shape the light as it enters. Without components to counter imperfections, the resulting images can be fuzzy or blurred. Sun's lens, on the other hand, employs a gradient index: a refractive index that changes over space to create flawless images without requiring additional corrective components.
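For a conventional uniform-index lens, that curvature-and-index relationship is captured by the lensmaker's equation, 1/f = (n - 1)(1/R1 - 1/R2); a gradient-index lens escapes this constraint by varying n through the material. The values below are arbitrary illustration:

```python
# Lensmaker's equation for a thin lens of uniform refractive index n.
def focal_length(n, r1_m, r2_m):
    return 1.0 / ((n - 1.0) * (1.0 / r1_m - 1.0 / r2_m))

# Symmetric biconvex lens: R1 = +10 cm, R2 = -10 cm, n = 1.5 -> f = 10 cm.
print(f"f = {focal_length(1.5, 0.10, -0.10):.2f} m")
```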
A team of researchers from the University of California, Davis and the University of Washington has demonstrated that the conductance of DNA can be modulated by controlling its structure, thus opening up the possibility of DNA’s future use as an electromechanical switch for nanoscale computing. Although DNA is commonly known for its biological role as the molecule of life, it has recently garnered significant interest for use as a nanoscale material for a wide variety of applications.
In their paper published in Nature Communications, the team demonstrated that changing the structure of the DNA double helix by modifying its environment allows the conductance (the ease with which an electric current passes) to be reversibly controlled. This ability to structurally modulate the charge transport properties may enable the design of unique nanodevices based on DNA. These devices would operate using a completely different paradigm than today’s conventional electronics.
“As electronics get smaller they are becoming more difficult and expensive to manufacture, but DNA-based devices could be designed from the bottom-up using directed self-assembly techniques such as ‘DNA origami’,” said Josh Hihath, assistant professor of electrical and computer engineering at UC Davis and senior author on the paper. DNA origami is the folding of DNA to create two- and three-dimensional shapes at the nanoscale level.
“Considerable progress has been made in understanding DNA’s mechanical, structural, and self-assembly properties and the use of these properties to design structures at the nanoscale. The electrical properties, however, have generally been difficult to control,” said Hihath.
In addition to potential advantages in fabrication at the nanoscale level, such DNA-based devices may also improve the energy efficiency of electronic circuits. The size of devices has been significantly reduced over the last 40 years, but as the size has decreased, the power density on-chip has increased. Scientists and engineers have been exploring novel solutions to improve the efficiency.
“There’s no reason that computation must be done with traditional transistors. Early computers were fully mechanical and later worked on relays and vacuum tubes,” said Hihath. “Moving to an electromechanical platform may eventually allow us to improve the energy efficiency of electronic devices at the nanoscale.”
This work demonstrates that DNA is capable of operating as an electromechanical switch and could lead to new paradigms for computing. To develop DNA into a reversible switch, the scientists focused on switching between two stable conformations of DNA, known as the A-form and the B-form. In DNA, the B-form is the conventional DNA duplex that is commonly associated with these molecules. The A-form is a more compact version with different spacing and tilting between the base pairs. Exposure to ethanol forces the DNA into the A-form conformation resulting in an increased conductance.
Similarly, by removing the ethanol, the DNA can switch back to the B-form and return to its original reduced conductance value.
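Functionally, that behavior is a two-state, environment-controlled switch. The toy sketch below uses hypothetical conductance values as placeholders (the measured A-form and B-form values are reported in the Nature Communications paper); only the reversible logic is the point:

```python
# Toy model of the reported electromechanical switch; numbers are assumed.
CONDUCTANCE = {"B-form": 1.0, "A-form": 3.0}  # hypothetical, arbitrary units

def duplex_conductance(ethanol_present):
    """Ethanol drives the duplex into the higher-conductance A-form."""
    return CONDUCTANCE["A-form" if ethanol_present else "B-form"]

print(duplex_conductance(True))   # switched to the A-form state
print(duplex_conductance(False))  # relaxed back to the B-form baseline
```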