NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1450 news sources:
All my Tweets and Scoop.It! posts sorted and searchable:
You can search through all the articles semantically on my
NOTE: All articles in the Amazing Science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the funnel at the top right of the screen) to display all relevant postings sorted by topic.
You can also type your own query, e.g. if you are looking for articles involving "dna" as a keyword.
• aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • history • language • map • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Is there life beyond our solar system? If there is, our best bet for finding it may lie in three nearby, Earth-like exoplanets.
For the first time, an international team of astronomers from MIT, the University of Liège in Belgium, and elsewhere has detected three planets orbiting an ultracool dwarf star just 40 light years from Earth. The sizes and temperatures of these worlds are comparable to those of Earth and Venus, making them the best targets found so far in the search for life outside the solar system. The results are published today in the journal Nature.
The scientists discovered the planets using TRAPPIST (TRAnsiting Planets and PlanetesImals Small Telescope), a 60-centimeter telescope operated by the University of Liège, based in Chile. TRAPPIST is designed to focus on 60 nearby dwarf stars—very small, cool stars that are so faint they are invisible to optical telescopes. Belgian scientists designed TRAPPIST to monitor dwarf stars at infrared wavelengths and search for planets around them.
The team focused the telescope on the ultracool dwarf star, 2MASS J23062928-0502285, now known as TRAPPIST-1, a Jupiter-sized star that is one-eighth the size of our sun and significantly cooler. Over several months starting in September 2015, the scientists observed the star's infrared signal fade slightly at regular intervals, suggesting that several objects were passing in front of the star.
With further observations, the team confirmed the objects were indeed planets, similar in size to Earth and Venus. The two innermost planets orbit the star in 1.5 and 2.4 days, receiving four and two times, respectively, the amount of radiation that Earth receives from the sun. The third planet may orbit the star in anywhere from four to 73 days, and may receive even less radiation than Earth. Given their size and proximity to their ultracool star, all three planets may have regions with temperatures well below 400 kelvins, within a range suitable for sustaining liquid water and life.
Because the system is just 40 light years from Earth, co-author Julien de Wit, a postdoc in the Department of Earth, Atmospheric, and Planetary Sciences, says scientists will soon be able to study the planets' atmospheric compositions, as well as assess their habitability and whether life actually exists within this planetary system.
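The transit method behind this discovery lends itself to a quick back-of-the-envelope calculation: the fractional dimming of the star scales as the square of the planet-to-star radius ratio. A minimal sketch in Python, using approximate radii for illustration (these numbers are not taken from the Nature paper):

```python
# Transit photometry: when a planet crosses its star, the star's light
# dims by roughly (R_planet / R_star)^2.
R_SUN_KM = 695_700
R_EARTH_KM = 6_371

r_star = 0.117 * R_SUN_KM   # TRAPPIST-1 is roughly Jupiter-sized, ~12% of the Sun's radius
r_planet = R_EARTH_KM       # an Earth-sized transiting planet

depth = (r_planet / r_star) ** 2
print(f"transit depth ~ {depth:.2%}")
```

The small size of the star is what makes these detections possible: an Earth-sized planet dims TRAPPIST-1 by roughly half a percent, versus under 0.01% for a Sun-like star.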
A unique observatory buried deep in the clear ice of the South Pole region, an orbiting observatory that monitors gamma rays, a powerful outburst from a black hole 10 billion light years away, and a super-energetic neutrino named Big Bird. These are the cast of characters that populate a paper published in Nature Physics, on Monday April 18th.
The observatory that resides deep in the cold dark of the Antarctic ice has one job: to detect neutrinos. Neutrinos are strange, standoffish particles, sometimes called ‘ghost particles’ because they’re so difficult to detect. They’re like the noble gases of the particle world. Though neutrinos vastly outnumber the atoms in our Universe, they rarely interact with other particles, and they have no electrical charge. This allows them to pass through normal matter almost unimpeded. To detect them at all, you need a dark, undisturbed place, isolated from cosmic rays and background radiation.
This explains why the observatory was built in solid ice. This observatory, called the IceCube Neutrino Observatory, is the ideal place to detect neutrinos. On the rare occasion when a neutrino does interact with the ice surrounding the observatory, a charged particle is created: an electron, a muon, or a tau. If these charged particles are of sufficiently high energy, the strings of detectors that make up IceCube can pick them up. Once the data are analyzed, the source of the neutrinos can be determined.
The next actor in this scenario is NASA’s Fermi Gamma-Ray Space Telescope. Fermi was launched in 2008, with a specific job in mind. Its job is to look at some of the exceptional phenomena in our Universe that generate extraordinarily large amounts of energy, like super-massive black holes, exploding stars, jets of hot gas moving at relativistic speeds, and merging neutron stars. These things generate enormous amounts of gamma-ray energy, the part of the electromagnetic spectrum that Fermi looks at exclusively.
Next comes PKS B1424-418, a distant galaxy with a black hole at its center. Because the jet from this black hole is pointed at Earth, the galaxy is classified as a blazar. About 10 billion years ago, the black hole produced a powerful outburst of energy, and the light from that outburst started arriving at Earth in 2012. For a year, the blazar in PKS B1424-418 shone 15-30 times brighter in the gamma spectrum than it did before the burst.
Detecting neutrinos is a rare occurrence. So far, IceCube has detected about a hundred of them. For some reason, the most energetic of these neutrinos are named after characters on the popular children’s show called Sesame Street. In December 2012, IceCube detected an exceptionally energetic neutrino, and named it Big Bird. Big Bird had an energy level greater than 2 quadrillion electron volts. That’s an enormous amount of energy shoved into a particle that is thought to have less than one millionth the mass of an electron.
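To put the quoted energy in everyday units, the conversion from electron volts to joules is a one-liner (the 2 PeV figure is as quoted above):

```python
EV_TO_JOULE = 1.602_176_634e-19   # exact since the 2019 SI redefinition
big_bird_ev = 2e15                # "2 quadrillion electron volts" = 2 PeV

joules = big_bird_ev * EV_TO_JOULE
print(f"Big Bird carried about {joules:.1e} J in a single particle")
```

Roughly a third of a millijoule, a macroscopic amount of energy, concentrated in a single nearly massless particle.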
For nearly 80 years, nuclear fission has awaited a description within a microscopic framework. In the first study of its kind, scientists collaborating from the University of Washington, Warsaw University of Technology (Poland), Pacific Northwest National Laboratory, and Los Alamos National Laboratory, developed a novel model to take a more intricate look at what happens during the last stages of the fission process. Using the model, they determined that fission fragments remain connected far longer than expected before the daughter nuclei split apart. Moreover, they noted the predicted kinetic energy agreed with results from experimental observations. This discovery indicates that complex calculations of real-time fission dynamics without physical restrictions are feasible and opens a pathway to a theoretical microscopic framework with abundant predictive power.
It might be said that the most difficult part of building a quantum computer is not figuring out how to make it compute, but rather finding a way to deal with all of the errors that it inevitably makes. In order to flip the qubits back to their correct states, physicists have been developing an assortment of quantum error correction techniques. Most of them work by repeatedly making measurements on the system to detect errors and then correct the errors before they can proliferate. These approaches typically have a very large overhead, where a large portion of the computing power goes to correcting errors.
In a new paper published in Physical Review Letters, Eliot Kapit, an assistant professor of physics at Tulane University in New Orleans, has proposed a different approach to quantum error correction. His method takes advantage of a recently discovered unexpected benefit of quantum noise: when carefully tuned, quantum noise can actually protect qubits against unwanted noise. Rather than actively measuring the system, the new method passively and autonomously suppresses and corrects errors, using relatively simple devices and relatively little computing power.
"The most interesting thing about my work is that it shows just how simple and small a fully error corrected quantum circuit can be, which is why I call the device the 'Very Small Logical Qubit,'" Kapit told Phys.org. "Also, the error correction is fully passive—unwanted error states are quickly repaired by engineered dissipation, without the need for an external computer to watch the circuit and make decisions. While this paper is a theoretical blueprint, it can be built with current technology and doesn't require any new insights to make it a reality."
The new passive error correction circuit consists of just two primary qubits, in contrast to the 10 or more qubits required in most active approaches. The two qubits are coupled to each other, and each one is also coupled to a "lossy" object, such as a resonator, that experiences photon loss.
"In the absence of any errors, there are a pair of oscillating photon configurations that are the 'good' logical states of the device, and they oscillate at a fixed frequency based on the circuit parameters," Kapit explained. "However, like all qubits, the qubits in the circuit are not perfect and will slowly leak photons into the environment. When a photon randomly escapes from the circuit, the oscillation is broken, at which point a second, passive error correction circuit kicks in and quickly inserts two photons, one which restores the lost photon and reconstructs the oscillating logical state, and the other is dumped to a lossy circuit element and quickly leaks back out of the system. The combination of careful tuning of the resonant frequencies of the circuit and adding photons two at a time to correct losses ensures that the passive error correction circuit can operate continuously but won't do anything to the two good qubits unless their oscillation has been broken by a photon loss."
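The repair mechanism described above lends itself to a toy estimate: a logical error requires a second photon loss to arrive before the passive circuit has repaired the first. A hedged Monte Carlo sketch, with illustrative rates rather than Kapit's actual circuit parameters:

```python
import random

# Toy model (not Kapit's circuit): after a photon loss, the passive
# circuit repairs the state at rate repair_rate; a logical error occurs
# only if a second loss (rate loss_rate) arrives before repair completes.
def logical_failure_prob(loss_rate, repair_rate, n_trials=100_000, seed=0):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        repair_time = rng.expovariate(repair_rate)
        second_loss = rng.expovariate(loss_rate)
        if second_loss < repair_time:
            failures += 1
    return failures / n_trials

# Illustrative rates: repair 100x faster than loss.
p = logical_failure_prob(loss_rate=1.0, repair_rate=100.0)
print(f"logical failure probability ~ {p:.4f}")  # analytic: 1/101 ~ 0.0099
```

The point of the sketch is the scaling: making the engineered dissipation much faster than the loss rate suppresses logical errors roughly in proportion, without any measurement or classical feedback.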
MIT researchers have devised a new set of proteins that can be customized to bind arbitrary RNA sequences, making it possible to image RNA inside living cells, monitor what a particular RNA strand is doing, and even control RNA activity. The new strategy is based on human RNA-binding proteins that normally help guide embryonic development. The research team adapted the proteins so that they can be easily targeted to desired RNA sequences. “You could use these proteins to do measurements of RNA generation, for example, or of the translation of RNA to proteins,” says Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at the MIT Media Lab. “This could have broad utility throughout biology and bioengineering.”
Unlike previous efforts to control RNA with proteins, the new MIT system consists of modular components, which the researchers believe will make it easier to perform a wide variety of RNA manipulations. “Modularity is one of the core design principles of engineering. If you can make things out of repeatable parts, you don’t have to agonize over the design. You simply build things out of predictable, linkable units,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research.
Boyden is the senior author of a paper describing the new system in the Proceedings of the National Academy of Sciences. The paper’s lead authors are postdoc Katarzyna Adamala and grad student Daniel Martin-Alarcon.
Living cells contain many types of RNA that perform different roles. One of the best known varieties is messenger RNA (mRNA), which is copied from DNA and carries protein-coding information to cell structures called ribosomes, where mRNA directs protein assembly in a process called translation. Monitoring mRNA could tell scientists a great deal about which genes are being expressed in a cell, and tweaking the translation of mRNA would allow them to alter gene expression without having to modify the cell’s DNA.
To achieve this, the MIT team set out to adapt naturally occurring proteins called Pumilio homology domains. These RNA-binding proteins include sequences of amino acids that bind to one of the ribonucleotide bases that make up RNA. In recent years, scientists have been working on developing these proteins for experimental use, but until now it was more of a trial-and-error process to create proteins that would bind to a particular RNA sequence.
“It was not a truly modular code,” Boyden says, referring to the protein’s amino acid sequences. “You still had to tweak it on a case-by-case basis. Whereas now, given an RNA sequence, you can specify on paper a protein to target it.”
To create their code, the researchers tested out many amino acid combinations and found a particular set of amino acids that will bind each of the four bases at any position in the target sequence. Using this system, which they call Pumby (for Pumilio-based assembly), the researchers effectively targeted RNA sequences varying in length from six to 18 bases.
“I think it’s a breakthrough technology that they’ve developed here,” says Robert Singer, a professor of anatomy and structural biology, cell biology, and neuroscience at Albert Einstein College of Medicine, who was not involved in the research. “Everything that’s been done to target RNA so far requires modifying the RNA you want to target by attaching a sequence that binds to a specific protein. With this technique you just design the protein alone, so there’s no need to modify the RNA, which means you could target any RNA in any cell.”
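The one-module-per-base idea behind Pumby can be sketched in code. The module names below are placeholders invented for illustration; the actual amino-acid codes are specified in the PNAS paper:

```python
# Hypothetical sketch of a modular base-recognition code in the spirit
# of Pumby: each RNA base is read by one repeat module.
MODULE_FOR_BASE = {"A": "modA", "U": "modU", "G": "modG", "C": "modC"}

def design_pumby_chain(rna):
    """Return one recognition module per base of the target RNA (6-18 nt)."""
    if not 6 <= len(rna) <= 18:   # length range the authors report targeting
        raise ValueError("target should be 6-18 bases")
    return [MODULE_FOR_BASE[base] for base in rna.upper()]

print(design_pumby_chain("AUGGCC"))
```

This is what "truly modular" means in Boyden's quote above: given any target sequence, the design is a straight per-base lookup rather than case-by-case tweaking.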
Conventional silicon-based computing, which has advanced by leaps and bounds in recent decades, is pushing against its practical limits. DNA computing could help take the digital era to the next level. Scientists are now reporting progress toward that goal with the development of a novel DNA-based GPS. They describe their advance in ACS' The Journal of Physical Chemistry B.
Jian-Jun Shu and colleagues note that Moore's law, which marked its 50th anniversary in April, posited that the number of transistors on a computer chip would double every year. This doubling has enabled smartphone and tablet technology that has revolutionized computing, but continuing the pattern will come with high costs. In search of a more affordable way forward, scientists are exploring the use of DNA for its programmability, fast processing speeds and tiny size. So far, they have been able to store and process information with the genetic material and perform basic computing tasks. Shu's team set out to take the next step.
The researchers built a programmable DNA-based processor that performs two computing tasks at the same time. On a map of six locations and multiple possible paths, it calculated the shortest routes between two different starting points and two destinations. The researchers say that in addition to cost- and time-savings over other DNA-based computers, their system could help scientists understand how the brain's "internal GPS" works.
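In conventional silicon terms, the task the DNA processor performs is a shortest-path computation. A sketch using Dijkstra's algorithm on a hypothetical six-location map (the map used in the paper is not reproduced here):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm on a dict-of-dicts weighted digraph."""
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(pq, (nd, nbr))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# Hypothetical six-location map; edge weights are travel costs.
graph = {
    "A": {"B": 2, "C": 4},
    "B": {"C": 1, "D": 7},
    "C": {"E": 3},
    "D": {"F": 1},
    "E": {"D": 2, "F": 5},
    "F": {},
}

# Two queries, answered sequentially here; the novelty of the DNA
# processor is computing both at the same time.
path_af, cost_af = shortest_path(graph, "A", "F")
path_bf, cost_bf = shortest_path(graph, "B", "F")
print(path_af, cost_af)
print(path_bf, cost_bf)
```

A silicon CPU answers these queries one after the other; the appeal of DNA computing is that the many strands in a test tube can, in effect, explore candidate paths in parallel.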
In a proof-of-principle experiment, researchers at UNSW Australia have demonstrated that a small group of individual atoms placed very precisely in silicon can act as a quantum simulator, mimicking nature -- in this case, the weird quantum interactions of electrons in materials.
"Previously this kind of exact quantum simulation could not be performed without interference from the environment, which typically destroys the quantum state," says senior author Professor Sven Rogge, Head of the UNSW School of Physics and program manager with the ARC Centre of Excellence for Quantum Computation and Communication Technology (CQC2T).
"Our success provides a route to developing new ways to test fundamental aspects of quantum physics and to design new, exotic materials -- problems that would be impossible to solve even using today's fastest supercomputers."
The study is published in the journal Nature Communications. The lead author was UNSW's Dr Joe Salfi and the team included CQC2T director Professor Michelle Simmons, other CQC2T researchers from UNSW and the University of Melbourne, as well as researchers from Purdue University in the US.
The team studied two boron dopant atoms only a few nanometres from each other in a silicon crystal. They behaved like a valence bond, the "glue" that holds matter together when atoms with unpaired electrons in their outer orbitals overlap and bond.
The team's major advance was in being able to directly measure the electron "clouds" around the atoms and the energy of the interactions of the spin, or tiny magnetic orientation, of these electrons.
They were also able to correlate the interference patterns from the electrons, due to their wave-like nature, with their entanglement, or mutual dependence on each other for their properties. "The behavior of the electrons in the silicon chip matched the behaviour of electrons described in one of the most important theoretical models of materials that scientists rely on, called the Hubbard model," says Dr Salfi.
"This model describes the unusual interactions of electrons due to their wave-like properties and spins. And one of its main applications is to understand how electrons in a grid flow without resistance, even though they repel each other," he says. The team also made a counterintuitive find -- that the entanglement of the electrons in the silicon chip increased the further they were apart. "This demonstrates a weird behaviour that is typical of quantum systems," says Professor Rogge.
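The Hubbard model Dr Salfi mentions has a closed-form solution for its smallest case, two electrons on two sites, which already captures the competition between hopping (t) and on-site repulsion (U). A minimal sketch with illustrative parameters:

```python
import math

# Two-site, two-electron Hubbard model. In the spin-singlet sector the
# Hamiltonian is a 3x3 matrix over {|ud,0>, |0,ud>, covalent singlet};
# diagonalizing it gives the closed-form ground-state energy below.
def hubbard_ground_energy(t, U):
    return (U - math.sqrt(U * U + 16 * t * t)) / 2

# U = 0: two free electrons fill the bonding orbital, E0 = -2t.
# Large U: double occupancy is suppressed and the electrons form a
# valence-bond-like singlet, the behaviour seen for the boron pair.
for U in (0.0, 4.0, 16.0):
    print(f"U/t = {U:>4}: E0 = {hubbard_ground_energy(1.0, U):+.3f}")
```

This two-site case is exactly solvable; the point of a hardware quantum simulator is that larger grids of interacting sites quickly become intractable for classical computation.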
EVE Online isn't just a game about internet spaceships and sci-fi politics. Since March, developer CCP Games has been running Project Discovery – an initiative to help improve scientific understanding of the human body at the tiniest levels. Run in conjunction with the Human Protein Atlas and Massively Multiplayer Online Science, the project taps into EVE Online's greatest resource – its player base – to help categorise millions of proteins.
"We show them an image, and they can change the colour of it, putting green or red dyes on it to help them analyse it a little bit better," Linzi Campbell, game designer on Project Discovery, tells WIRED. "Then we also show them examples – cytoplasm is their favourite one! We show them what each of the different images should look like, and just get them to pick a few that they identify within the image. The identifications are scrambled each time, so it's not as simple as going 'ok, every time I just pick the one on the right' – they have to really think about it."
The analysis project is worked into EVE Online as a minigame, and works within the context of the game's lore. "We have this NPC organisation called the Drifters – they're like a mysterious entity in New Eden [EVE's interplanetary setting]," Campbell explains. "The players don't know an awful lot about the Drifters at the minute, so we disguised it within the universe as Drifter DNA that they were analysing. I think it just fit perfectly. We branded this as [research being done by] the Sisters of Eve, and they're analysing this Drifter DNA."
The response has been tremendous. "We've had an amazing number of classifications, way over our greatest expectations," says Emma Lundberg, associate professor at the Human Protein Atlas. "Right now, after six weeks, we've had almost eight million classifications, and the players spent 16.2 million minutes playing the minigame. When we did the math, that translated – in Swedish measures – to 163 working years. It's crazy."
"We had a little guess, internally. We said if we get 40,000+ classifications a day, we're happy. If we get 100,000 per day, then we're amazed," Lundberg adds. "But when it peaked in the beginning, we had 900,000 classifications in one day. Now it's stabilised, but we're still getting around 200,000 a day, so everyone is mind-blown. We never expected it."
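Lundberg's arithmetic is easy to check, assuming roughly 1,660 working hours per year (a common Swedish figure; the exact convention the team used is an assumption here):

```python
minutes_played = 16_200_000          # figure quoted by Lundberg
hours_played = minutes_played / 60   # total hours of minigame play
HOURS_PER_WORK_YEAR = 1_660          # assumed Swedish working-year convention

working_years = hours_played / HOURS_PER_WORK_YEAR
print(f"{hours_played:,.0f} hours ~ {working_years:.0f} working years")
```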
Stem cells work throughout our lives as a sort of handyman, repairing damaged tissues and renewing some normal ones, like the skin we shed. Scientists have come to understand much about how stem cells function when we are adults, but less is known about where these stem cells come from to begin with, as an embryo is developing.
Now, researchers at The Rockefeller University have identified a new mechanism by which cells are instructed during development to become stem cells. The results, published in Cell on January 14, help explain how communication between cells mediates this process, and may have implications for skin cancer treatments. The researchers traced the cell divisions that occur as hair follicles form in mice to determine where stem cells first emerge. Above, developing hair follicles are shown at various stages.
“While adult stem cells are increasingly well-characterized, we know little about their origins. Here, we show that in the skin, stem cell progenitors of the hair follicle are specified as soon as the cells within the single-layered embryonic epidermis begin to divide downward to form an embryonic hair bud,” explains Elaine Fuchs, Rebecca C. Lancefield Professor and head of the Robin Chemers Neustein Laboratory of Mammalian Cell Biology and Development. “This timing was much earlier than previously thought, and gives us new insights into the establishment of these very special cells.”
Clusters of stem cells receive signals from other nearby cells that instruct them to either stay a stem cell or differentiate into a specific cell type. These instructive groups of cells, called the “niche,” are known to maintain adult stem cell populations. Less well understood is how the niche forms, or when and where stem cells first appear during embryonic development.
“Adult stem cells are dependent on the niche for instructions on both how to become a stem cell, and how to control stem cell population size,” says first author Tamara Ouspenskaia. “The question was, does the niche appear first and call other cells over to become stem cells? Or is it the other way around? Stem cells could be appearing elsewhere first and then recruiting the niche.”
Working in the mouse hair follicle, a region that contains active stem cells, Fuchs and colleagues investigated the cell divisions that occur as a hair follicle is first forming. The hair follicle begins as a small bud called a placode, and develops into a tissue of multiple layers, comprised of different cell types. By labeling cells within the placode and tracing their progeny, the researchers determined that from each division, one daughter cell stayed put, while the other escaped to a different layer.
Researchers from the University of Illinois at Urbana-Champaign have developed a one-step, facile method to pattern graphene by using stencil mask and oxygen plasma reactive-ion etching, and subsequent polymer-free direct transfer to flexible substrates.
Graphene, a two-dimensional carbon allotrope, has received immense scientific and technological interest. Combining exceptional mechanical properties, superior carrier mobility, high thermal conductivity, hydrophobicity, and potentially low manufacturing cost, graphene provides a superior base material for next generation bioelectrical, electromechanical, optoelectronic, and thermal management applications.
"Significant progress has been made in the direct synthesis of large-area, uniform, high quality graphene films using chemical vapor deposition (CVD) with various precursors and catalyst substrates," explained SungWoo Nam, an assistant professor of mechanical science and engineering at Illinois. "However, to date, the infrastructure requirements on post-synthesis processing--patterning and transfer--for creating interconnects, transistor channels, or device terminals have slowed the implementation of graphene in a wider range of applications."
"In conjunction with the recent evolution of additive and subtractive manufacturing techniques such as 3D printing and computer numerical control milling, we developed a simple and scalable graphene patterning technique using a stencil mask fabricated via a laser cutter," stated Keong Yong, a graduate student and first author of the paper, "Rapid Stencil Mask Fabrication Enabled One-Step Polymer-Free Graphene Patterning and Direct Transfer for Flexible Graphene Devices," appearing in Scientific Reports.
"Our approach to patterning graphene is based on a shadow mask technique that has been employed for contact metal deposition," Yong added. "Not only are these stencil masks easily and rapidly manufactured for iterative rapid prototyping, they are also reusable, enabling cost-effective pattern replication. And since our approach involves neither a polymeric transfer layer nor organic solvents, we are able to obtain contamination-free graphene patterns directly on various flexible substrates."
Nam stated that this approach demonstrates a new possibility for overcoming the limitations imposed by existing post-synthesis processes to achieve graphene micro-patterning. Yong envisions that this facile approach to graphene patterning will set forth transformative changes in "do-it-yourself" (DIY) graphene-based device development for broad applications, including flexible circuits/devices and wearable electronics.
"This method allows rapid design iterations and pattern replications, and the polymer-free patterning technique promotes graphene of cleaner quality than other fabrication techniques," Nam said. "We have shown that graphene can be patterned into varying geometrical shapes and sizes, and we have explored various substrates for the direct transfer of the patterned graphene."
Cancer researchers at the University of Cincinnati have found a particular signaling route in microRNA (miR-22) that could lead to targets for acute myeloid leukemia, the most common type of fast-growing cancer of the blood and bone marrow. These findings are being published in the April 26 issue of the online journal Nature Communications.
Acute myeloid leukemia (AML) is the most common type of acute leukemia and occurs when the bone marrow begins to make blasts, cells that have not yet completely matured. These blasts normally develop into white blood cells. However, in AML, these cells do not develop and are unable to ward off infections.
Jianjun Chen, PhD, associate professor in the Department of Cancer Biology at the UC College of Medicine, member of the UC Cancer Institute and lead author on the study, says that microRNAs are tightly controlled and play key roles in the development of cancer.
"MicroRNAs make up a class of small, non-coding internal RNAs that control a gene’s job, or expression, by directing their target messenger RNAs, or mRNAs, to be inhibited or degraded. Cellular organisms use mRNA to convey genetic information,” he says. "Previous research has shown that microRNA miR-22 is linked to breast cancer and other blood disorders that sometimes turn into AML, but we found in this study that it could be an essential anti-tumor gatekeeper in AML when it is down-regulated, meaning its function is minimized.
"When we forced miR-22 expression, we saw difficulty in leukemia cells developing, growing and thriving. miR-22 targets multiple cancer-causing genes (CRTC1, FLT3 and MYCBP) and blocks certain pathways (CREB and MYC). The down-regulation, or decreased output, of miR-22 in AML is caused by DNA copy number loss and/or the repression of its expression through a pathway called TET1/GFI1/EZH2/SIN3A. Also, nanoparticles carrying miR-22 DNA oligonucleotides (short nucleic acid molecules) prevented leukemia advancement.”
Chen, who conducted the study using bone marrow transplant samples and animal models, says that the ten-eleven translocation proteins (TET1/2/3) in mammals help control gene expression in normal developmental processes; in contrast, loss-of-function mutations of TET2, which impair its tumor-suppressing role, are observed in blood and stem cell cancers.
"We recently reported that TET1 plays an essential cancer generating role in certain AML where it activates expression of homeobox genes, which are a large family of similar genes that direct the formation of many body structures during early embryonic development,” he says. "However, it is unknown whether TET1 can also function as a repressor for cellular function in cancer, and its role in microRNA expression has rarely been studied.”
Chen says these findings are important in targeting a cancer that is both common and fatal. "The majority of patients with AML usually don’t survive longer than five years, even with chemotherapy, which is why the development of new effective therapies based on the underlying mechanisms of the disease is so important,” he says, adding that the pathogenesis of AML, as well as its drug response, remains unclear. "Our study uncovers a previously unappreciated signaling pathway (TET1/GFI1/EZH2/SIN3A⊣miR-22⊣CREB-MYC), provides new insights into the genetic mechanisms driving the development and progression of AML, and highlights the clinical potential of miR-22-based AML therapy. More research on this pathway and ways to target it is necessary.”
Ultrasound, also called sonography, is essentially a type of ‘medical sonar’. It has revolutionized medicine since the 1940s, giving us the ability to look into the body in a completely safe way, without the ionizing radiation left behind by X-rays.
Beyond predicting whether your baby shower will be blue or pink, lesser-known applications of ultrasound include the ability to essentially burn and destroy cells inside your body. As such, it has been successfully used to perform surgery without making any cuts in the human body. This technique has been used to remove cancerous cells without affecting any of the surrounding tissue, and without any of the side effects associated with other kinds of cancer treatment. Scientist Yoav Medan refers to this as focused ultrasound. If you are unfamiliar with it, you need to watch his TED talk. Non-invasive procedures like this are the future of surgery.
Non-invasive procedures are also the future of neuroscience. It is at this point that we find ourselves at the application of this astonishing science to memory research. Very recently, scientists have been able to use ultrasound to selectively and non-invasively control brain cells. In other words, we can remote-control individual cells in the brain. We can send a pulse of sound into a brain and change what that creature thinks and does. Crazy, right?
The underlying technology is called sonogenetics, and it was first announced as a technique that can modify brain cells in a paper published by scientist Stuart Ibsen and colleagues in 2015.
Scanning the mitochondrial genomes of thousands of species is beginning to shed light on why some genes were lost while others were retained.
Billions of years ago, one cell—the ancestral cell of modern eukaryotes—engulfed another, a microbe that gave rise to today’s mitochondria. Over evolutionary history, the relationship between our cells and these squatters has become a close one; mitochondria provide us with energy and enjoy protection from the outside environment in return. As a result of this interdependence, our mitochondria, which once possessed their own complete genome, have lost most of their genes: while the microbe that was engulfed so many years ago is estimated to have contained thousands of genes, humans have just 13 remaining genes in their mitochondrial DNA (mtDNA).
Some mitochondrial genes have disappeared completely; others have been transferred to our cells’ nuclei for safekeeping, away from the chemically harsh environment of the mitochondrion. This is akin to storing books in a nice, dry, central library, instead of a leaky shed where they could get damaged. In humans, damage to mitochondrial genes can result in devastating genetic diseases, so why keep any books at all in the leaky shed?
Researchers have proposed diverse hypotheses to explain mitochondrial gene retention. Perhaps the products of some genes are hard to introduce into the mitochondrion once they’ve been made elsewhere. (Mitochondria have their own ribosomes and are capable of translating their retained genes in-house.) Or perhaps keeping some mitochondrial genes allows the cell to control each organelle individually. Historically, it has been hard to gather quantitative support for any of these ideas, but in the world of big (and growing) biological data we now have the power to shed light on this question. The mtDNA sequences of thousands of organisms as diverse as plants, worms, yeasts, protists, and humans have now been sequenced, yielding information on the patterns of gene loss and on the gene properties that may have governed this loss.
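The kind of cross-species comparison described above can be sketched in a few lines: tally, for each mitochondrial gene, how many sampled species still retain it in their mtDNA. The gene sets below are invented for illustration, not real annotations.

```python
from collections import Counter

# Toy data: species -> genes retained in mtDNA (illustrative only).
mtdna_genes = {
    "human":   {"ND1", "ND2", "COX1", "CYTB", "ATP6"},
    "yeast":   {"COX1", "CYTB", "ATP6", "ATP9"},
    "plant":   {"ND1", "COX1", "CYTB", "ATP6", "RPS3"},
    "protist": {"ND1", "ND2", "COX1", "CYTB"},
}

# Count how many species retain each gene.
retention = Counter(g for genes in mtdna_genes.values() for g in genes)

# Genes kept by every species in the sample are candidates for being
# "hard to transfer" to the nucleus.
universal = {g for g, n in retention.items() if n == len(mtdna_genes)}
print(universal)  # {'COX1', 'CYTB'}
```

Scaled up from four toy species to the thousands of sequenced mitochondrial genomes, this is the sort of retention pattern the hypotheses above must explain.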
Can NASA’s next big space telescope take a picture of an alien Earth-like planet orbiting another star? Astronomers have long dreamed of such pictures, which would allow them to study worlds beyond our solar system for signs of habitability and life.
But for as long as astronomers have dreamed, the technology to make those dreams a reality has seemed decades away. Now, however, a growing number of experts believe NASA’s Wide-Field Infrared Survey Telescope (WFIRST) could take snapshots of “other Earths”—and soon. The agency formally started work on WFIRST in February of this year and plans to launch the observatory in 2025.
WFIRST was conceived in 2010 as the top-ranked priority of the National Academy of Sciences’ Decadal Survey, a report from U.S. astronomers that proposes a wish list of future missions for NASA and other federal science agencies. The telescope’s heart is a 2.4-meter mirror that, although the same size and quality as the Hubble Space Telescope’s, promises panoramic views of the heavens a hundred times larger than anything Hubble could manage. Using a camera called the Wide Field Instrument, WFIRST’s primary objective will be to study dark energy, the mysterious force driving the universe’s accelerating expansion. But another hot topic—the existential quest to know whether we are alone in the universe—is already influencing the mission.
Researchers have discovered more than a thousand exoplanets—planets around other stars—since the Decadal Survey’s crucial recommendation of WFIRST as NASA’s top-priority next-generation astrophysics mission. They expect to find tens of thousands more within the next 10 years. Many will be discovered by WFIRST itself when it surveys the Milky Way’s galactic bulge for stars that briefly brighten as planets cross in front of them, acting as gravitational lenses to magnify their light.
That survey could yield at least as many worlds as NASA’s wildly successful planet-hunting Kepler space telescope, which used different techniques to net about 5,000 probable planets before hardware failures ended its primary mission in 2013.
Already, rough statistics from the entirety of known planets suggest that every star in the sky is accompanied by at least one, and that perhaps one in five sunlike stars bears a rocky orb in a not-too-hot, not-too-cold “habitable zone” where liquid water can exist. The best way to learn whether any of these worlds are Earth-like is to see them—but taking a planet’s picture from light-years away is far from easy. A habitable world would be a faint dot lost in the overpowering glare of its larger, 10-billion-times-brighter star. Glimpsing it would be like seeing a firefly fluttering next to a searchlight or a wisp of bioluminescent algae on a wave crashing against a lighthouse.
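Astronomers usually express that 10-billion-to-one flux ratio in magnitudes, via the standard relation delta_m = 2.5 * log10(flux ratio):

```python
import math

# The article's star-to-planet flux ratio, in astronomical magnitudes.
flux_ratio = 1e10
delta_m = 2.5 * math.log10(flux_ratio)
print(delta_m)  # 25.0
```

A 25-magnitude contrast is the gap a coronagraph on a telescope like WFIRST must suppress to make the planet's faint dot visible.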
The specificity and flexibility of antibodies have made them essential and ubiquitous research tools. For those same reasons, many antibody-based therapies are currently in development. This week in the journal Nature, researchers described the development and application of a potential antibody-based alternative to an HIV vaccine. The researchers report that a single dose of monoclonal antibodies can protect research animals for nearly six months from repeated exposure to an HIV-like virus.
For reasons both ethical and practical, the scientists conducted their research using a cohort of rhesus macaques and an HIV-like virus known as a simian/human immunodeficiency chimaeric virus (SHIV). HIV is only capable of infecting humans, so studying it in animals requires the use of other viruses, including SHIVs: genetically modified HIV-like viruses containing DNA from both HIV and the related simian immunodeficiency virus (SIV).
In the newly reported work, animals within experimental groups were prophylactically treated intravenously with one of four different monoclonal antibodies. Both the antibody-protected experimental animals and the unprotected control animals were then exposed weekly to low doses of an SHIV. While the experimental animals received the prophylactic antibodies intravenously, the viral challenge occurred intrarectally: both experimental and control animals were exposed to the SHIV by infusing the rectal cavity with a 1 mL suspension of the virus. Research animals were exposed to the virus one week after treatment with the prophylactic antibodies and every week thereafter. This aspect of the experimental procedure differed from previously reported work.
While previous experiments have examined the utility of using antibodies to prevent infection following single high doses of virus, the newly reported work sought to better mimic real world exposure by exposing animals repeatedly to smaller viral titres.
Animals given the prophylactic antibodies were protected from the virus for up to 23 weeks, with the average duration of protection being 12 to 14 weeks. In contrast, the animals in the control group, lacking the protective benefit of the antibodies, were infected within an average of 3 weeks.
This new work is promising. In an interview with The Verge, Dr. David Montefiori, a specialist who took no part in the study, suggested that with improving technologies, antibody-based therapeutics may be capable of offering protection for periods even longer than six months.
What are Google's visionaries up to these days? You may be sorry you asked. Discovery reported that an electronic device injected into an eyeball is the focus of a patent filed by Google.
The Google patent application was reported in Forbes. Aaron Tilley said "the device is injected in fluid that then solidifies to couple the device with the eye's lens capsule, the transparent membrane surrounding the lens." The device is injected into the eye and contains tiny components, said Tilley: storage, sensors, a radio, a battery and an electronic lens. The device gets its power wirelessly from an "energy harvesting antenna."
"The whole endeavor appears to be a way of correcting poor vision," said Discovery. Tilley at Forbes said, "According to the patent, the electronic lens would assist in the process of focusing light onto the eye's retina." The inventor in the application is listed as Andrew Jason Conrad.
The patent application said, "Elements of the human eye (e.g., the cornea, lens, aqueous and vitreous humor) operate to image the environment of the eye by focusing light from the environment onto the retina of the eye, such that images of elements of the environment are presented in-focus on the retina. The optical power of the natural lens of the eye can be controlled (e.g., by ciliary muscles of the eye) to allow objects at different distances to be in focus at different points in time (a process known as accommodation)."
A variety of reasons, however, are behind decreased focus and degradation of images presented to the retina. "Issues with poor focus can be rectified by the use of eyeglasses and/or contact lenses or by the remodeling of the cornea. Further, artificial lenses can be implanted into the eye (e.g., into the space in front of the iris, into the lens capsule following partial or full removal of the natural lens, e.g., due to the development of cataracts) to improve vision."
From visible light to radio waves, most people are familiar with the different sections of the electromagnetic spectrum. But one wavelength is often forgotten, little understood, and, until recently, rarely studied.
"Terahertz is somewhat of a gap between microwaves and infrared," said Northwestern University's Cheng Sun. "People are trying to fill in this gap because this spectrum carries a lot of information."
Sun and his team have used metamaterials and 3-D printing to develop a novel lens that works with terahertz frequencies. Not only does it have better imaging capabilities than common lenses, but it opens the door for more advances in the mysterious realm of the terahertz. Supported by the National Science Foundation, the work was published online on April 22 in the journal Advanced Optical Materials.
"Typical lenses—even fancy ones—have many, many components to counter their intrinsic imperfections," said Sun, associate professor of mechanical engineering at Northwestern's McCormick School of Engineering. "Sometimes modern imaging systems stack several lenses to deliver optimal imaging performance, but this is very expensive and complex."
The focal length of a lens is determined by its curvature and refractive index, which shapes the light as it enters. Without components to counter imperfections, resulting images can be fuzzy or blurred. Sun's lens, on the other hand, employs a gradient index, which is a refractive index that changes over space to create flawless images without requiring additional corrective components.
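For a conventional uniform-index lens, the dependence of focal length on curvature and refractive index that this paragraph alludes to is captured by the thin-lens lensmaker's relation, 1/f = (n - 1)(1/R1 - 1/R2). The sketch below illustrates that relation only; it is not a model of Sun's gradient-index design, which instead varies n across the lens.

```python
def thin_lens_focal_length(n, r1, r2):
    """Focal length (same units as r1, r2) of a thin lens with
    refractive index n and signed surface radii r1, r2."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

# A symmetric biconvex lens: n = 1.5, |R| = 0.1 m.
f = thin_lens_focal_length(1.5, 0.1, -0.1)
print(f)  # 0.1 (metres)
```

A gradient-index lens replaces the fixed n in this formula with a spatially varying profile n(r), which is what lets it focus without the stacked corrective elements described above.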
A team of researchers from the University of California, Davis and the University of Washington have demonstrated that the conductance of DNA can be modulated by controlling its structure, thus opening up the possibility of DNA’s future use as an electromechanical switch for nanoscale computing. Although DNA is commonly known for its biological role as the molecule of life, it has recently garnered significant interest for use as a nanoscale material for a wide-variety of applications.
In their paper published in Nature Communications, the team demonstrated that changing the structure of the DNA double helix by modifying its environment allows the conductance (the ease with which an electric current passes) to be reversibly controlled. This ability to structurally modulate the charge transport properties may enable the design of unique nanodevices based on DNA. These devices would operate using a completely different paradigm than today’s conventional electronics.
“As electronics get smaller they are becoming more difficult and expensive to manufacture, but DNA-based devices could be designed from the bottom-up using directed self-assembly techniques such as ‘DNA origami’,” said Josh Hihath, assistant professor of electrical and computer engineering at UC Davis and senior author on the paper. DNA origami is the folding of DNA to create two- and three-dimensional shapes at the nanoscale level.
“Considerable progress has been made in understanding DNA’s mechanical, structural, and self-assembly properties and the use of these properties to design structures at the nanoscale. The electrical properties, however, have generally been difficult to control,” said Hihath.
In addition to potential advantages in fabrication at the nanoscale level, such DNA-based devices may also improve the energy efficiency of electronic circuits. The size of devices has been significantly reduced over the last 40 years, but as the size has decreased, the power density on-chip has increased. Scientists and engineers have been exploring novel solutions to improve the efficiency.
“There’s no reason that computation must be done with traditional transistors. Early computers were fully mechanical and later worked on relays and vacuum tubes,” said Hihath. “Moving to an electromechanical platform may eventually allow us to improve the energy efficiency of electronic devices at the nanoscale.”
This work demonstrates that DNA is capable of operating as an electromechanical switch and could lead to new paradigms for computing. To develop DNA into a reversible switch, the scientists focused on switching between two stable conformations of DNA, known as the A-form and the B-form. In DNA, the B-form is the conventional DNA duplex that is commonly associated with these molecules. The A-form is a more compact version with different spacing and tilting between the base pairs. Exposure to ethanol forces the DNA into the A-form conformation resulting in an increased conductance.
Similarly, by removing the ethanol, the DNA can switch back to the B-form and return to its original reduced conductance value.
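The switching behaviour described above amounts to a reversible two-state system, which can be sketched as a toy model. The conductance values here are arbitrary placeholders, not measurements from the paper; only the qualitative ordering (A-form higher than B-form) follows the text.

```python
class DNASwitch:
    """Toy model of the ethanol-driven A/B conformational switch."""
    CONDUCTANCE = {"B": 1.0, "A": 2.0}  # arbitrary units; A-form higher

    def __init__(self):
        self.form = "B"  # conventional duplex

    def add_ethanol(self):
        self.form = "A"  # compact conformation, increased conductance

    def remove_ethanol(self):
        self.form = "B"  # back to the original, lower conductance

    @property
    def conductance(self):
        return self.CONDUCTANCE[self.form]

dna = DNASwitch()
dna.add_ethanol()        # switch "on"
high = dna.conductance
dna.remove_ethanol()     # switch back "off"
low = dna.conductance
print(high > low)  # True
```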
Viruses are surprisingly symmetrical, and each trading card shows the structure of the viral capsid - the protein shell protecting the genetic material inside a virus. To make the 3D animations I used UCSF Chimera, a free molecular modeling program. When scientists discover a new protein structure they upload it to the worldwide Protein Data Bank. Each entry is assigned a unique ID number, which you can use to call up the structure in programs like Chimera or PyMol.
Molecular Modeling: Molecular graphics and analyses were performed with the UCSF Chimera package. Chimera is developed by the Resource for Biocomputing, Visualization, and Informatics at the University of California, San Francisco (supported by NIGMS P41-GM103311).
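Calling up a structure by its ID, as described above, works because RCSB serves every Protein Data Bank entry at a predictable address. A minimal sketch, assuming RCSB's current download URL scheme (files.rcsb.org):

```python
def pdb_download_url(pdb_id):
    """Build the RCSB download URL for a Protein Data Bank entry,
    the kind of address tools like Chimera or PyMOL fetch from."""
    return "https://files.rcsb.org/download/%s.pdb" % pdb_id.upper()

# E.g. the capsid of bacteriophage MS2 (a hypothetical example ID):
print(pdb_download_url("2ms2"))
# https://files.rcsb.org/download/2MS2.pdb
```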
Sequence analysis of 41 viral strains reveals more than a half-century of change.
Comparing the sequences of 30 strains of Zika virus isolated from humans, 10 from mosquitoes, and one from monkeys has revealed significant evolutionary change over the past 70 years, according to a study published today (April 15) in Cell Host & Microbe. Specifically, the sequences of the viral strains showed notable divergence between the Asian and African lineages and suggest that modern Zika virus strains derived from the Asian lineage, as they are more similar to the Malaysian/1966 strain than the Nigerian/1968 strain. Additionally, the gene for the pre-membrane precursor protein has very high variability among the Zika strains examined, which modeling work suggests may affect the protein’s structure.
“We believe these changes may, at least partially, explain why the virus has demonstrated the capacity to spread exponentially in the human population in the Americas,” study coauthor Genhong Cheng of the University of California, Los Angeles, said in a press release. “These changes could enable the virus to replicate more efficiently, invade new tissues that provide protective niches for viral propagation, or evade the immune system, leading to viral persistence.”
But it is not possible to directly test whether these mutations affect the virus’s spread in humans, virologist Vincent Racaniello noted on his blog. “It’s easy to blame mutations in the viral genome for novel patterns of transmission or pathogenesis,” he wrote. “There is no reason to assume that such changes influence virulence, disease patterns, or transmission in humans.”
Two black holes in nearby galaxies have been observed devouring their companion stars at a rate exceeding classically understood limits, and in the process, kicking out matter into surrounding space at astonishing speeds of around a quarter the speed of light.
The researchers, from the University of Cambridge, used data from the European Space Agency's (ESA) XMM-Newton space observatory to reveal for the first time strong winds gusting at very high speeds from two mysterious sources of x-ray radiation. The discovery, published in the journal Nature, confirms that these sources conceal a compact object pulling in matter at extraordinarily high rates.
When observing the Universe at x-ray wavelengths, the celestial sky is dominated by two types of astronomical objects: supermassive black holes, sitting at the centres of large galaxies and ferociously devouring the material around them, and binary systems, consisting of a stellar remnant - a white dwarf, neutron star or black hole - feeding on gas from a companion star.
In both cases, the gas forms a swirling disc around the compact and very dense central object. Friction in the disc causes the gas to heat up and emit light at different wavelengths, with a peak in x-rays. But an intermediate class of objects was discovered in the 1980s and is still not well understood. Ten to a hundred times brighter than ordinary x-ray binaries, these sources are nevertheless too faint to be linked to supermassive black holes, and in any case, are usually found far from the centre of their host galaxy.
"We think these so-called 'ultra-luminous x-ray sources' are special binary systems, sucking up gas at a much higher rate than an ordinary x-ray binary," said Dr Ciro Pinto from Cambridge's Institute of Astronomy, the paper's lead author. "Some of these sources host highly magnetised neutron stars, while others might conceal the long-sought-after intermediate-mass black holes, which have masses around one thousand times the mass of the Sun. But in the majority of cases, the reason for their extreme behaviour is still unclear."
Pinto and his colleagues collected several days' worth of observations of three ultra-luminous x-ray sources, all located in nearby galaxies less than 22 million light-years from the Milky Way. The data was obtained over several years with the Reflection Grating Spectrometer on XMM-Newton, which allowed the researchers to identify subtle features in the spectrum of the x-rays from the sources. In all three sources, the scientists were able to identify x-ray emission from gas in the outer portions of the disc surrounding the central compact object, slowly flowing towards it.
But two of the three sources - known as NGC 1313 X-1 and NGC 5408 X-1 - also show clear signs of x-rays being absorbed by gas that is streaming away from the central source at 70,000 kilometres per second - almost a quarter of the speed of light. "This is the first time we've seen winds streaming away from ultra-luminous x-ray sources," said Pinto. "And the very high speed of these outflows is telling us something about the nature of the compact objects in these sources, which are frantically devouring matter."
While the hot gas is pulled inwards by the central object's gravity, it also shines brightly, and the pressure exerted by the radiation pushes it outwards. This is a balancing act: the greater the mass, the faster it draws the surrounding gas; but this also causes the gas to heat up faster, emitting more light and increasing the pressure that blows the gas away. There is a theoretical limit to how much matter can be pulled in by an object of a given mass, known as the Eddington limit. The limit was first calculated for stars by astronomer Arthur Eddington, but it can also be applied to compact objects like black holes and neutron stars.
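The Eddington limit mentioned above has a simple closed form for hydrogen accretion: L_Edd = 4πGMm_p·c/σ_T, which works out to about 1.26 × 10^38 erg/s per solar mass. A quick check in CGS units:

```python
import math

# Physical constants (CGS).
G = 6.674e-8         # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33     # solar mass, g
M_P = 1.673e-24      # proton mass, g
C = 2.998e10         # speed of light, cm/s
SIGMA_T = 6.652e-25  # Thomson cross-section, cm^2

def eddington_luminosity(mass_grams):
    """Eddington limit for hydrogen accretion: radiation pressure on
    electrons balances gravity on protons."""
    return 4 * math.pi * G * mass_grams * M_P * C / SIGMA_T

print(eddington_luminosity(M_SUN))  # ~1.26e38 erg/s
```

Ultra-luminous x-ray sources appear to exceed this limit for a stellar-mass accretor, which is why winds like those seen here, or intermediate-mass black holes, are invoked to explain them.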
Peering to the outskirts of our solar system, NASA's Hubble Space Telescope has spotted a small, dark moon orbiting Makemake, the second brightest icy dwarf planet—after Pluto—in the Kuiper Belt.
The moon—provisionally designated S/2015 (136472) 1 and nicknamed MK 2—is more than 1,300 times fainter than Makemake. MK 2 was seen approximately 13,000 miles from the dwarf planet, and its diameter is estimated at 100 miles. Makemake is 870 miles wide. The dwarf planet, discovered in 2005, is named for a creation deity of the Rapa Nui people of Easter Island.
The Kuiper Belt is a vast reservoir of leftover frozen material from the construction of our solar system 4.5 billion years ago and home to several dwarf planets. Some of these worlds have known satellites, but this is the first discovery of a companion object to Makemake. Makemake is one of five dwarf planets recognized by the International Astronomical Union.
The observations were made in April 2015 with Hubble's Wide Field Camera 3. Hubble's unique ability to see faint objects near bright ones, together with its sharp resolution, allowed astronomers to pluck out the moon from Makemake's glare. The discovery was announced today in a Minor Planet Electronic Circular.
It was discovered some time ago that eukaryotic cells regularly secrete such structures as microvesicles, macromolecular complexes, and small molecules into their ambient environment. Exosomes are one of the types of natural nanoparticles (or nanovesicles) that have shown promise in many areas of research, diagnostics and therapy. They are small lipid membrane vesicles (30-120 nm) generated by fusion of cytoplasmic endosomal multivesicular bodies with the cell surface. Exosomes are found throughout the body in such fluids as blood, saliva, urine, and breast milk. Furthermore, all types of cells secrete them in in vitro culture. It is believed that they have many natural functions, including acting as transporters of nucleic acids (mostly RNA), cytosolic proteins and metabolites to many cells, tissues or organs throughout the body. Much remains to be understood regarding how they are formed, as well as their targeting and ultimate physiological activity. But many don't realize that some activities have been rather thoroughly demonstrated, such as their function in some form of local or more systemic intercellular communication.

Exosomes as Tools

General interest in exosomes is now growing for many reasons. One is the observation of their natural activity with antigen-presenting cells and in immune responses in the body. Their potential as very powerful biomedical tools of both diagnostic and therapeutic value is now being more widely reported. Applications described include using them as immunotherapeutic reagents, vectors of engineered genetic constructs, and vaccine particles. They've also been described as tools in the diagnosis or prognosis of a wide variety of disorders, such as cancer and neurodegenerative diseases. Also, their potential in tissue-level microcommunication is driving interest in such therapeutic activities as cardiac repair following heart attacks.
Their potential as biomarkers is being explored because their content has been described as a "fingerprint" of the differentiation, signaling, or regulation status of the cell generating them. For example, by monitoring the exosomes secreted by transplanted cells, one may be able to predict the status or potentially even the outcome of cell therapy procedures. Clinical trials are in progress for exosomes in many therapeutic functions, for many indications. One example is using dendritic cell-derived exosomes to initiate an immune response to cancers.

Exosome Manufacturing
Exosome product manufacturing involves many distinct areas of study. First of all, we are interested in their efficient and robust generation at sufficient scale. Also, because they are found in such raw materials as animal serum, avoiding process-related contaminants is a concern. Finally, a variety of means of separating them from other types of extracellular vesicles and cell debris is under study.

As exosomes are being examined in so many applications, their production involves many distinct platforms and concerns. First, an appropriate and effective culture mode is required for whatever cell line the application demands. Second, one must consider the quality systems and regulatory status of the materials and manufacturing environment for the particular product addressed. Finally, a robust process must be described for the scale and duration of production required.

As things exist now, their production can be described as 1) the at-scale expansion and culture of the parent cell line, 2) the collection or harvest of the culture media containing the secreted exosomes, and 3) the isolation or purification of the desired exosomes not only from other microvesicles, macromolecular complexes, and small molecules, but also from such other process contaminants as cellular debris and culture media components.
Population modelling: Understanding lake microbes

A lake microbe population model developed by US researchers reveals how environmental factors affect community dynamics.
Like many other environments, Lake Mendota, WI, USA, is populated by many thousands of microbial species. Only about 1,000 of these constitute between 80 and 99% of the total microbial community, depending on the season, whereas the remaining species are rare. The functioning and resilience of the lake ecosystem depend on these microorganisms, and it is therefore important to understand their dynamics throughout the year.
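The dominance pattern described above, where a minority of core taxa account for most of the total abundance, can be illustrated with a toy rank-abundance distribution. The power-law abundances below are invented for demonstration and are not Lake Mendota data.

```python
# Toy community: 10,000 taxa with abundance proportional to 1/rank.
abundances = sorted((1.0 / rank for rank in range(1, 10001)), reverse=True)
total = sum(abundances)

# Fraction of total abundance contributed by the 1,000 most common taxa.
core_fraction = sum(abundances[:1000]) / total
print(round(core_fraction, 2))  # about 0.76
```

Even this simple skew puts roughly three quarters of the community's abundance in the top tenth of its taxa; real lake communities, with steeper abundance distributions, can reach the 80-99% figure cited above.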