NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the funnel at the top right of the screen) to display all relevant postings sorted by topic.
You can also type your own query, for example if you are looking for articles involving "dna" as a keyword.
Imagine predicting your car will break down and being able to replace the faulty part before it becomes a problem. Now Australian physicists have found a way to do this – albeit on a quantum scale.
In Nature Communications, they enlisted machine learning to “foresee” the future failure of a quantum bit, or qubit, and to make the necessary corrections to stop it from happening.
Quantum computing is a potentially world-changing technology, able in principle to complete in minutes tasks that would take current computers thousands of years. But practical, large-scale quantum technology is probably still a long way off.
One of the major challenges is maintaining qubits in the delicate, zen-like state of superposition they need to do their business.
Any tiny nudge from the environment – such as the jiggly atom next door – knocks the qubit off balance.
So physicists go to great lengths to stabilize qubits, cooling them to more than 200 degrees below zero to reduce atomic jiggling. Still, superposition typically lasts only a tiny fraction of a second, which cuts quantum number-crunching time short.
A team led by Michael Biercuk at the University of Sydney found a new way of stabilizing qubits against noise in the environment. It works by predicting how a qubit will behave and acting preemptively. In a quantum computer, the technique could make qubits twice as stable as before. The team used control theory and machine learning (a kind of artificial intelligence) to estimate how the future of a qubit would play out.
Control theory is the branch of engineering that deals with feedback systems, such as the thermostat keeping your room temperature constant. The thermostat reacts to changes in the environment, triggering warm or cool air to be pumped into the room. Meanwhile, new machine learning algorithms look at how a system behaved in the past and use this information to predict how it will react to future events.
First, Biercuk’s team made a qubit by trapping a single ion of ytterbium in a beam of laser light. To train their algorithm, they simulated noise, tweaking the light to disturb the atom in a controlled way. Their algorithm monitored how the qubit responded to these tweaks and made a prediction for how it would behave in future. Next, they let events play out for the qubit to check their algorithm’s accuracy. The longer the algorithm watched the qubit, the more accurate its predictions became. Finally, the team used the predictions to help the system self-correct. The qubit was twice as stable with the algorithm as without it.
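The forecast-and-correct idea can be caricatured in a few lines of Python. This is only a hedged sketch, not the team's actual algorithm: it models the qubit's drifting noise as a random walk and uses the simplest possible predictor (the last observed value) to pre-cancel the next disturbance, which is enough to show why predicting beats doing nothing.

```python
import random

def mean_squared_residual(n_steps=2000, drift=0.02, predict=False, seed=42):
    """Simulate slowly drifting qubit noise and optionally pre-cancel it.

    The 'predictor' here is a persistence forecast (next value = last
    observed value), a toy stand-in for the machine-learning estimator
    described in the article.
    """
    rng = random.Random(seed)
    noise, last_seen, total = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        noise += rng.gauss(0.0, drift)       # random-walk environmental drift
        correction = last_seen if predict else 0.0
        residual = noise - correction        # error left after correction
        total += residual * residual
        last_seen = noise                    # "measurement" used next step
    return total / n_steps

uncorrected = mean_squared_residual(predict=False)
corrected = mean_squared_residual(predict=True)
```

With prediction, the residual error is set by the per-step drift alone; without it, the random walk wanders arbitrarily far, so `corrected` comes out far smaller than `uncorrected`.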
New evidence from the ancient poop of some of the huge and astonishing creatures that once roamed Australia indicates that humans, not climate change, were the likely primary cause of their extinction around 45,000 years ago.
The twilight zone lies deep below the ocean surface, and the fish that live there could feed the world's population many times over – if we can work out how to catch them.
Until relatively recently, some believed that domestic chickens were the most abundant vertebrates on the planet with numbers estimated at around 24 billion. In fact, this figure is dwarfed by some fish in the twilight zone. The global bristlemouth population is so vast, for instance, that numbers may lie in the quadrillions while various estimates of lanternfish suggest that their biomass alone is several times greater than the entire world fisheries catch.
Such is the abundance of these fish that they greatly perplexed oceanographers trying to measure the depth of the world's oceans using sonar during World War Two. The sonar signals reflected back off the fish, giving the impression of a "false ocean bottom", only a few hundred meters down. Once they realized what was going on, the US military considered trying to use these layers of fish as a camouflage to hide their submarines.
In total, based on new acoustic surveys, scientists now believe that the biomass of fish in the twilight zone is at least 10 billion metric tons. "It's an immense number," St John says. "There is enough fish there to make up 1.3 tons per human on the planet. If you take and harvest 50% of that, turn it into fishmeal and then feed it to chickens through agriculture or pigs, you're creating 4.3 kilos of animal protein per human per day. So you've got people starving on the planet right now through food shortages, and here's a huge larder which we haven't even touched."
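The per-person figure in the quote is easy to check. A quick sketch, assuming a world population of roughly 7.5 billion (a figure not stated in the article); the 4.3-kilogram claim additionally rests on feed-conversion assumptions that aren't given here:

```python
biomass_tonnes = 10e9        # estimated twilight-zone fish biomass
population = 7.5e9           # assumed world population, ca. 2017
per_person = biomass_tonnes / population
print(round(per_person, 1))  # ~1.3 tonnes of fish per person
```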
Targeted genome editing has become a powerful genetic tool for studying gene function or for modifying genomes by correcting defective genes or introducing new ones. A variety of reagents have been developed in recent years that can generate targeted double-stranded DNA cuts, which can be repaired either by the error-prone non-homologous end joining repair system or, provided a suitable template is available, via the homologous recombination-based double-strand break repair pathway. These genome editing reagents require components for recognizing a specific DNA target site and for the DNA cleavage that generates the double-stranded break. To reduce the potential toxic effects of genome editing reagents, it may be desirable to control their in vitro or in vivo activity by incorporating regulatory switches that reduce off-target activities and/or allow the reagents to be turned on or off.
This review outlines the genome editing tools currently available and describes the strategies that have so far been employed to regulate these editing reagents. It also examines potential regulatory switches and strategies that could be employed in the future to provide temporal control over these reagents.
A Northwestern University team developed a new computational model that performs at human levels on a standard intelligence test. This work is an important step toward making artificial intelligence systems that see and understand the world as humans do.
“The model performs in the 75th percentile for American adults, making it better than average,” said Northwestern Engineering’s Ken Forbus. “The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition.”
The new computational model is built on CogSketch, an artificial intelligence platform previously developed in Forbus’ laboratory. The platform has the ability to solve visual problems and understand sketches in order to give immediate, interactive feedback. CogSketch also incorporates a computational model of analogy, based on Northwestern psychology professor Dedre Gentner’s structure-mapping theory.
Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science at Northwestern’s McCormick School of Engineering, developed the model with Andrew Lovett, a former Northwestern postdoctoral researcher in psychology. Their research was published online this month in the journal Psychological Review.
When matter changes from solids to liquids to vapors, the changes are called phase transitions. Among the most interesting types are more exotic changes—quantum phase transitions—where the strange properties of quantum mechanics can bring about extraordinary changes in curious ways.
In a paper published in Physical Review Letters, a team of researchers led by the Department of Energy's Oak Ridge National Laboratory reports the discovery of a new type of quantum phase transition. This unique transition happens at an elastic quantum critical point, or QCP, where the phase transition isn't driven by thermal energy but instead by the quantum fluctuations of the atoms themselves.
The researchers used a combination of neutron and X-ray diffraction techniques, along with heat capacity measurements, to reveal how an elastic QCP can be found in a lanthanum-copper material by simply adding a little bit of gold.
Phase transitions associated with QCPs happen at near absolute zero temperature (about minus 460 degrees Fahrenheit), and are typically driven at that temperature via factors such as pressure, magnetic fields, or by substituting additional chemicals or elements in the material.
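The "about minus 460 degrees Fahrenheit" figure is just absolute zero expressed on the Fahrenheit scale, as a quick conversion shows:

```python
def kelvin_to_fahrenheit(k):
    """Standard conversion: deg F = K * 9/5 - 459.67."""
    return k * 9 / 5 - 459.67

print(round(kelvin_to_fahrenheit(0)))  # absolute zero is about -460 deg F
```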
"We study QCPs because materials exhibit many strange and exciting behaviors near the zero temperature phase transition that can't be explained by classical physics," said lead author Lekh Poudel, a University of Tennessee graduate student working in ORNL's Quantum Condensed Matter Division. "Our goal was to explore the possibility of a new type of QCP where the quantum motion alters the arrangement of atoms.
"Its existence had been theoretically predicted, but there hadn't been any experimental proof until now," he said. "We're the first to establish that the elastic QCP does exist."
Meet Malaysia's new pink ladies: Two species of katydid whose females sport distinctly rosy hues. While the males of the new species are a uniform green color, the females are standouts in red and pink. Not only that, both sexes look just like leaves, with distinctive veins and leaf-like lobes on their legs.
The insects, which live in northern Borneo, are especially unusual because one of them was identified based on photographs alone.
In 2013, a friend showed George Beccaloni pictures of a spectacularly colored katydid—a type of grasshopper-like insect—that Beccaloni couldn’t identify. Beccaloni sent them to Sigfrid Ingrisch, an expert on Asian katydids.
“He was reluctant to name and describe it because it’s not good practice to describe new species based only on photos,” says Beccaloni, a zoologist at London's Natural History Museum. “Often you need to look at microscopic characteristics, things that don’t show up in photos, to differentiate species.”
But in this case, the scientists felt confident naming the insect as a new species, Eulophophyllum kirki, since the veins of its wings were clearly visible and unlike any other known species. Wing veins are often used to tell katydid species apart.
Fifty-five million years ago the Arctic was a lot like Miami, scientists say, with an average temperature of 74 degrees Fahrenheit, alligator ancestors, and palm trees. That conclusion, based on first-of-their-kind core samples extracted from more than 1,000 feet below the Arctic Ocean floor, is contained in three studies published in Thursday's issue of the journal Nature.
Scientists say the findings are both a glimpse backward at a region heated by naturally produced greenhouse gases run amok and a sneak peek at what manmade global warming could do someday.
Scientists believe a simple fern may have been responsible for cooling things back down by sucking up massive amounts of the carbon dioxide responsible for the warming. But this natural solution to global warming wasn't exactly quick: It took about a million years.
The Earth went through an extended period of natural global warming, capped off by a supercharged spike of carbon dioxide that accelerated the greenhouse effect even more about 55 million years ago. Scientists already knew this "thermal event" happened, but figured that while the rest of the world got really hot, the polar regions were still comfortably cooler, maybe about 52 degrees on average.
But the new research from the multinational Arctic Coring Expedition found the polar average was closer to 74. "It's the first time we've looked at the Arctic, and man, it was a big surprise to us," said study co-author Kathryn Moran, an oceanographer at the University of Rhode Island. "It's a new look to how the Earth can respond to these peaks in carbon dioxide."
The 74-degree temperature — based on core samples, which act as a climatic time capsule — was probably the year-round average. But because the data is so limited, it could also be simply the summertime average, researchers said.
"Imagine a world where there are dense sequoia trees and cypress trees like in Florida that ring the Arctic Ocean," said Yale geology professor Mark Pagani, a study co-author. He said it was probably a tropical paradise, "but the mosquitoes were probably the size of your head."
Researchers are not sure what caused the sudden boost of carbon dioxide that set the greenhouse effect on broil. Possible culprits could be huge releases of methane from the ocean, gigantic continent-sized burning of trees, or lots of volcanic eruptions.
Researchers have developed a new technique that allows them to examine huge amounts of information from a single cell or zoom out and see data patterns among thousands upon thousands of cells — all in a single experiment.
In work published in the journal Nature Communications, scientists from the biotech company 10x Genomics and Fred Hutchinson Cancer Research Center describe their method, which could help researchers dive deep into the ecosystem of cancer or other diseases.
The platform allows researchers to analyze which genes are turned on (and to what level) in tens of thousands of cells at once. “What the technology allows you to do is to be able to identify different types of cells and how many there are, and also infer what they’re doing based on gene expression,” which could give researchers a better understanding of diseases such as leukemia, noted co-author Dr. Jerald Radich, a Fred Hutch physician-scientist who specializes in leukemia research.
Radich is working with the new technology to gain a better understanding of which cell types contribute to leukemia relapse. Once that’s understood, “you can imagine using it in the clinic as an adjunct to the ways that we look at residual disease [low levels of remaining leukemia cells that can contribute to relapse],” he said.
Dr. Jason Bielas, the paper’s lead author, developed methods and designed the experiments needed to validate the platform, known as the Chromium Single Cell 3’ Solution. He and his team were able to analyze nearly 70,000 cells in a single experiment and use gene expression patterns to group individual cells by type.
Bielas also developed additional methods to detect subtle DNA variations and further expand the technology’s applications. Current methods to detect leukemic cells in patients often rely on surface markers. Using only gene expression information and slight differences in gene sequences, the team was able to distinguish between donor and recipient blood cells in patients who had received bone marrow transplants to treat their leukemia — an important component of patient care after transplant.
Gaping cosmic voids might hold the answers to dark matter, dark energy and the very foundations of the universe.
Scientists think this “Cosmic Web,” to use the preferred nomenclature, emerged from fluctuations in the primordial cosmos that arose 13.8 billion years ago in the Big Bang. Dark matter — the mysterious, invisible substance reckoned to comprise 80 percent of the universe’s matter — clumped here and there, gravitationally drawing regular matter toward it. As the universe expanded and matured, these overdense regions of matter gelled into galaxy clusters, leaving underdense voids to grow emptier.
Discovered in 1981, this colossal void spans 250 million light-years, yet contains only a few dozen galaxies. Denser areas might pack 10,000 galaxies into the same space. For years, it was the light-emitting parts of the Cosmic Web that held cosmologists’ attention as they tried to explain dark matter, gravity and the universe’s unfurling. No one cared much about the voids.
“I remember very prominent cosmologists for a long time said, ‘Oh, voids, they’re not important,’ ” says University of Groningen astrophysicist Rien van de Weygaert, a pioneer in the field of void research. “I got a lot of flak in the beginning.”
In the past 20 years, van de Weygaert and his colleagues have demonstrated how voids are not just null, passive places. Voids change over time, actually spurring the universe’s hordes of galaxies into their filamentous structures. To know how the universe got from there to here, van de Weygaert reasoned, we have to grasp it holistically. “You need an understanding of the evolution of the voids to understand the whole development of this weblike network we call the Cosmic Web,” he says.
Real insights into the characteristics of voids - and how they shape the universe - truly only came around with the Sloan Digital Sky Survey, the biggest redshift survey to date, begun in 2000. "People had identified individual voids," says Jain, "but to have a whole population to work with was only possible after Sloan."
Sloan and other new surveys have now bagged thousands upon thousands of voids. Looking at them as a whole, we're gleaning that they're typically oval-shaped and span 50 million to 150 million light-years in the modern, nearby universe. A few billion years ago, though, voids tended to be smaller. That suggests they're growing, joining together in places, squeezing and concentrating dark and luminous matter between them. "Voids evolve in a hierarchical way," says van de Weygaert. "They build up into bigger soap suds, like in your kitchen sink, where you see the suds merging into larger bubbles."
A new low-cost spectroscopy technique with ten times higher resolution could allow detection of microscopic amounts of chemicals for applications in security, law enforcement, and research.
MIT researchers have developed a radical design for a low-cost, miniaturized microscope that can chemically identify individual micrometer-sized particles. It could one day be used in airports or other high-security venues as a highly sensitive and low-cost way to rapidly screen people for microscopic amounts of potentially dangerous materials. It could also be used for scientific analysis of very small samples or for measuring the optical properties of materials.
In an open-access paper in the journal Optics Letters, from The Optical Society (OSA), the researchers demonstrated their new “photothermal modulation of Mie scattering” (PMMS) microscope by measuring infrared spectra of individual 3-micrometer spheres made of silica or acrylic. The new technique uses a simple optical setup consisting of compact components that will allow the instrument to be miniaturized into a portable device about the size of a shoebox.
The new microscope’s use of visible wavelengths for imaging gives it a spatial resolution of around 1 micrometer, compared to the roughly 10-micrometer resolution of traditional infrared spectroscopy methods. This increased resolution allows the new technique to distinguish and identify individual particles that are extremely small and close together.
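The roughly tenfold resolution gain follows directly from the probe wavelength. A sketch using the Abbe diffraction limit, with an assumed numerical aperture (the article does not quote one):

```python
def diffraction_limit_um(wavelength_um, numerical_aperture=0.3):
    """Abbe limit d = wavelength / (2 * NA); NA = 0.3 is an assumed value."""
    return wavelength_um / (2 * numerical_aperture)

visible = diffraction_limit_um(0.5)   # green probe light, ~0.5 micrometers
mid_ir = diffraction_limit_um(6.0)    # mid-infrared, ~6 micrometers
print(visible, mid_ir)                # sub-micrometer vs ~10 micrometers
```

Whatever the exact aperture, the ratio of the two limits is set by the wavelengths alone, which is where the order-of-magnitude improvement comes from.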
“If there are two very different particles in the field of view, we’re able to identify each of them,” said Stolyarov. “This would never be possible with a conventional infrared technique because the image would be indistinguishable.”
“The most important advantage of our new technique is its highly sensitive, yet remarkably simple design,” said Ryan Sullenberger, associate staff at MIT Lincoln Labs and first author of the paper. “It provides new opportunities for nondestructive chemical analysis while paving the way towards ultra-sensitive and more compact instrumentation.”
Mobile phones and computers use electromagnetic waves to send and receive information — they’re what enable our devices to upload photos and download apps. But there is only a limited amount of bandwidth available on the electromagnetic spectrum. Engineers have envisioned that enabling wireless devices to send and receive information on the same frequency would be one way to overcome that limitation. But that approach posed its own challenge, because incoming and outgoing waves on the same frequency typically interfere with each other. That’s why, for example, radio stations that use the same frequency disrupt each other’s signals when a radio is close enough to both of them.
A new design developed by UCLA electrical engineers could solve that problem. The researchers proved that a circulator — a tiny device that sends and receives electromagnetic waves from different ports — that shared the same antenna could enable signals to be sent and received simultaneously. Sending signals on the same frequencies that they are received could essentially double the space on the spectrum available for chips to transfer data.
A paper about the work was published in Scientific Reports, an open-access journal published by Nature. Previous generations of circulators used magnetic material, which cannot be incorporated into current microchips and doesn’t offer enough bandwidth for today’s smartphones and other devices. The UCLA prototype uses coaxial cables to route the electromagnetic waves through non-magnetic material, but the device would ultimately likely be built with silicon-based or other semiconductor materials.
The key to the design is an approach called “sequentially switched delay lines,” which is similar to the way transportation engineers route passenger trains from one track to another, to allow multiple trains to enter and exit train stations at the same time and avoid collisions, even if there are only a few available tracks.
“In a busy train station, trains are actively switched onto and off of tracks to minimize the time they might be stopped to get into and out of the station,” said Yuanxun “Ethan” Wang, an associate professor of electrical engineering at the UCLA Henry Samueli School of Engineering and Applied Science who led the research. “This is the same idea, only with electromagnetic waves of the same frequency carrying information inside a chip.”
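The train-station analogy can be sketched as a toy scheduler: assign the transmit and receive streams to two "delay lines" in alternating time slots, so both streams keep flowing but never share a line at the same instant. This is only an illustration of the time-interleaving idea, not a model of the actual microwave circuit:

```python
def schedule(n_slots):
    """Alternate TX/RX between two delay lines, one assignment per slot."""
    plan = []
    for t in range(n_slots):
        if t % 2 == 0:
            plan.append({"line_a": "TX", "line_b": "RX"})
        else:
            plan.append({"line_a": "RX", "line_b": "TX"})
    return plan

plan = schedule(6)
collisions = sum(1 for slot in plan if slot["line_a"] == slot["line_b"])
print(collisions)  # 0: the two streams never occupy the same line at once
```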
Lead author Mathew Biedka and co-author Rui Zhu are UCLA doctoral students advised by Wang, and co-author Qiang “Mark” Xu is a postdoctoral scholar in Wang’s laboratory.
In research that could one day lead to advances against neurodegenerative diseases like Alzheimer's and Parkinson's, University of Michigan engineering researchers have demonstrated a technique for precisely measuring the properties of individual protein molecules floating in a liquid.
Proteins are essential to the function of every cell, and measuring their properties in blood and other body fluids could unlock valuable information. The body manufactures these vital building blocks in a variety of complex shapes that can transmit messages between cells, carry oxygen, and perform other important functions.
Sometimes, however, proteins don't form properly. Scientists believe that some types of these misshapen proteins, called amyloids, can clump together into masses in the brain. The sticky tangles block normal cell function, leading to brain cell degeneration and disease.
But the processes of how amyloids form and clump together are not well understood. This is due in part to the fact that there's currently not a good way to study them. Researchers say current methods are expensive, time-consuming and difficult to interpret, and can only provide a broad picture of the overall level of amyloids in a patient's system.
The University of Michigan and University of Fribourg researchers who developed the new technique believe that it could help solve the problem by measuring an individual molecule's shape, volume, electrical charge, rotation speed and propensity for binding to other molecules.
They call this information a "5-D fingerprint" and believe that it could uncover new information that may one day help doctors track the status of patients with neurodegenerative diseases and possibly even develop new treatments. Their work is detailed in a paper published in Nature Nanotechnology.
"Imagine the challenge of identifying a specific person based only on their height and weight," said David Sept, a U-M biomedical engineering professor who worked on the project. "That's essentially the challenge we face with current techniques. Imagine how much easier it would be with additional descriptors like gender, hair color and clothing. That's the kind of new information 5-D fingerprinting provides, making it much easier to identify specific proteins."
Michael Mayer, the lead author on the study and a former U-M researcher who's now a biophysics professor at Switzerland's Adolphe Merkle Institute, says identifying individual proteins could help doctors keep better tabs on the status of a patient's disease, and it could also help researchers gain a better understanding of exactly how amyloid proteins are involved with neurodegenerative disease.
To take the detailed measurements, the research team uses a nanopore 10-30 nanometers wide—so small that only one protein molecule can fit through at a time. The researchers filled the nanopore with a salt solution and passed an electric current through the solution.
As a protein molecule tumbles through the nanopore, its movement causes tiny, measurable fluctuations in the electric current. By carefully measuring this current, the researchers can determine the protein's unique five-dimensional signature and identify it nearly instantaneously.
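The volume part of the fingerprint follows the Coulter (resistive-pulse) principle: a particle inside the pore displaces conducting solution, so the fractional current drop scales roughly with the particle's volume relative to the pore's sensing volume. A heavily simplified sketch; the paper's actual analysis is far more sophisticated, the pore width is taken from the text, and the pore length here is an assumption:

```python
import math

def particle_volume_nm3(delta_i, baseline_i, pore_diam_nm, pore_len_nm):
    """Coulter-style estimate: dI/I ~ particle volume / pore sensing volume.

    A crude first-order relation; real nanopore analysis also accounts
    for particle shape and access-resistance effects.
    """
    sensing_volume = math.pi * (pore_diam_nm / 2) ** 2 * pore_len_nm
    return (delta_i / baseline_i) * sensing_volume

# a 2% current blockade in a 20-nm-wide pore, assuming a 30-nm length
vol = particle_volume_nm3(delta_i=0.02, baseline_i=1.0,
                          pore_diam_nm=20, pore_len_nm=30)
print(round(vol))  # roughly 188 cubic nanometers
```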
"Amyloid molecules not only vary widely in size, but they tend to clump together into masses that are even more difficult to study," Mayer said. "Because it can analyze each particle one by one, this new method gives us a much better window to how amyloids behave inside the body."
Ultimately, the team aims to develop a device that doctors and researchers could use to quickly measure proteins in a sample of blood or other body fluid. This goal is likely several years off; in the meantime, they are working to improve the technique's accuracy, honing it in order to get a better approximation of each protein's shape. They believe that in the future, the technology could also be useful for measuring proteins associated with heart disease and in a variety of other applications as well.
"I think the possibilities are pretty vast," Sept said. "Antibodies, larger hormones, perhaps pathogens could all be detected. Synthetic nanoparticles could also be easily characterized to see how uniform they are."
The study is titled "Real-time shape approximation and fingerprinting of single proteins using a nanopore."
Messenger RNAs carry the information for the assembly of proteins from the DNA in the cell nucleus to the sites of protein synthesis in the cytoplasm, and are crucial for cell function. In nerve cells, which form cytoplasmic processes that can be very long, many neuronal mRNAs must be conveyed to the sites of action of their protein products to ensure that the correct intercellular connections can be established. This requires a dedicated transport system that links remote regions of the cytoplasm with the cell nucleus. Dierk Niessing, a professor at LMU’s Biomedical Center and leader of a research group in the Institute of Structural Biology at the Helmholtz Zentrum München, has now characterized the structure of a macromolecular complex involved in the transport of mRNAs in yeast cells. The new findings appear in the journal Nature Structural & Molecular Biology.
As a member of the DFG Research Unit “Macromolecular Complexes in mRNA Localization” Niessing explores the workings of the cell’s molecular transport systems in several model organisms. In the new study, carried out in collaboration with first author Franziska Edelmann at the Helmholtz Zentrum München, the authors used baker’s yeast (Saccharomyces cerevisiae) to investigate at high resolution the succession of structural interactions required for the specific recognition of mRNA in the nucleus and its subsequent transport in the cytoplasm.
The research team systematically isolated and crystallized sub-complexes of the molecular machine responsible for the process and subjected them to X-ray crystallographic analysis. The resulting models clearly show, for the first time, how the hairpin-like conformation of the RNA is altered when it is recognized by the requisite binding proteins in the nucleus. “We were surprised to see that the RNA is not only recognized by these proteins, they also force it to adopt a new form. They staple it together, so to speak,” Niessing says. Carriage of the RNAs is the responsibility of so-called motor proteins.
With the help of unfolded adaptor proteins, they attach to the RNA-protein complex as it emerges from the nucleus. In doing so, they stabilize the whole assembly, as the structural models demonstrate, thus allowing the RNA to be transported to its destination along the fibers that make up the cytoskeleton, which serve as the system’s ‘railway lines’.
The country’s Internet giants are focusing on AI research, and domestic venture capital funding is pouring into the field.
The nation’s search giant, Baidu, is leading the charge. Already making serious headway in AI research, it has now announced that former Microsoft executive Qi Lu will take over as its chief operating officer. Lu ran the applications and services division at Microsoft, but, according to the Verge, a large part of his remit was developing strategies for artificial intelligence and chatbots. In a statement, Baidu cites hiring Lu as part of its plan to become a “global leader in AI.”
Meanwhile, Baidu’s chief scientist, Andrew Ng, has announced that the company is opening a new augmented reality lab in Beijing. Baidu has already made progress in AR, using computer vision and deep learning to add an extra layer to the real world for millions of people. But the new plans aim to use a 55-strong lab to increase revenue by building AR marketing tools—though it’s thought that the company will also consider health-care and education applications in the future.
But Baidu isn’t alone in pushing forward. Late last year, Chinese Internet giant Tencent—the company behind the hugely successful mobile app WeChat, which has 846 million active users—said that it was determined to build a formidable AI lab. It plans to start publishing its work at conferences this year. Smaller players could also get a shot in the arm. According to KPMG, Chinese venture capital investment looks set to pour into AI research in the coming year. Speaking to Fintech Innovation, KPMG partner Egidio Zarrella explained that “the amount being invested in artificial intelligence in Asia is growing by the day.”
Similar growth is already underway in China's research community. A study by Japan's National Institute of Science and Technology Policy found China to be a close second to the U.S. in terms of the number of AI studies presented at top academic conferences in 2015. And a U.S. government report says that the number of papers published by Chinese researchers mentioning "deep learning" already exceeds the number published by U.S. researchers.
All of which has seen South China Morning Post label AI and AR as “must haves” in any self-respecting Chinese investment portfolio. No kidding. This year, it seems, many U.S. tech companies might find themselves looking East to identify competition.
Global temperatures have continued to rise, making 2016 the hottest year on the historical record and the third consecutive record-breaking year, scientists say. Of the 17 hottest years ever recorded, 16 have now occurred since 2000.
Metagenomics database helps fill in 10 percent of previously unknown protein structures.
For proteins, appearance matters. These important molecules largely form a cell’s structures and carry out its functions: proteins control growth and influence mobility, serve as catalysts, and transport or store other molecules. Proteins are composed of long chains of amino acids, and the one-dimensional amino acid sequence may seem meaningless on paper. Yet when viewed in three dimensions, researchers can see how a protein’s structure, and particularly the way it folds, determines its functions.
There are close to 15,000 protein families – groups of proteins that share an evolutionary origin – in the database Pfam. For nearly a third (4,752) of these protein families, there is at least one protein in each family that already has an experimentally determined structure. For another third (4,886) of the protein families, comparative models could be built with some degree of confidence. For the final third (5,211) of the protein families in the database, however, no structural information exists.
In the January 20, 2017 issue of Science, a team led by the University of Washington’s David Baker, in collaboration with researchers at the U.S. Department of Energy Joint Genome Institute (DOE JGI), a DOE Office of Science User Facility, reports that structural models have been generated for 614, or 12 percent, of the protein families that previously had no structural information available. “That this could be accomplished using computational modeling methods was not at all apparent 5 years ago,” the team noted in their paper. This accomplishment was made possible through a collaboration in which the Baker lab’s protein structure prediction server Rosetta analyzed the metagenomic sequences publicly available on the Integrated Microbial Genomes (IMG) system run by the DOE JGI.
“A large number of protein families (in Pfam) have low number of sequences,” said study first author Sergey Ovchinnikov, a graduate student in the Baker lab. “This resulted in two consequences: 1) nobody cared about these families (since they were small); and, 2) co-evolution methods could not be applied to study them. With metagenomics, we found that some of these neglected families with only a handful of sequences so far, can now become as large as some of the most studied ones, when metagenomics data are taken into account! Moreover, we can offer a 3D model of a representative sequence from the family. We hope this will spark interest in some of these families.”
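The co-evolution methods Ovchinnikov mentions infer residue contacts from pairs of alignment columns that mutate in tandem across a protein family. As a toy illustration (not the Baker lab's actual pipeline, which uses far more sophisticated statistical models), mutual information between two columns of a multiple sequence alignment gives a first-pass co-variation signal, and makes clear why a family with only a handful of sequences yields too little signal to work with:

```python
from collections import Counter
from math import log2

def column_mi(msa, i, j):
    """Mutual information between columns i and j of a multiple
    sequence alignment (a list of equal-length strings).  High MI
    suggests the two positions may co-evolve (e.g., a 3D contact)."""
    n = len(msa)
    pi = Counter(s[i] for s in msa)          # residue counts, column i
    pj = Counter(s[j] for s in msa)          # residue counts, column j
    pij = Counter((s[i], s[j]) for s in msa) # joint residue-pair counts
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab * n * n / (pi[a] * pj[b]))
    return mi

# Toy alignment: columns 0 and 2 vary together; column 1 is invariant.
msa = ["ACA", "ACA", "GCG", "GCG"]
print(column_mi(msa, 0, 2))  # 1.0 bit: the columns covary perfectly
print(column_mi(msa, 0, 1))  # 0.0: an invariant column carries no signal
```

With only four sequences even this toy statistic is noisy; real contact prediction needs the thousands of homologs that metagenomic data supplied for the previously neglected families.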
Researchers have found a way to trigger the innate, but previously hidden, ability of graphene to act as a superconductor - meaning that it can be made to carry an electrical current with zero resistance.
The finding, reported in Nature Communications, further enhances the potential of graphene, which is already widely seen as a material that could revolutionise industries such as healthcare and electronics. Graphene is a two-dimensional sheet of carbon atoms and combines several remarkable properties; for example, it is very strong, but also light and flexible, and highly conductive.
Since its discovery in 2004, scientists have speculated that graphene may also have the capacity to be a superconductor. Until now, superconductivity in graphene has only been achieved by doping it with, or by placing it on, a superconducting material - a process which can compromise some of its other properties. But in the new study, researchers at the University of Cambridge managed to activate the dormant potential for graphene to superconduct in its own right. This was achieved by coupling it with a material called praseodymium cerium copper oxide (PCCO).
Superconductors are already used in numerous applications. Because they generate large magnetic fields they are an essential component in MRI scanners and levitating trains. They could also be used to make energy-efficient power lines and devices capable of storing energy for millions of years.
Superconducting graphene opens up yet more possibilities. The researchers suggest, for example, that graphene could now be used to create new types of superconducting quantum devices for high-speed computing. Intriguingly, it might also be used to prove the existence of a mysterious form of superconductivity known as "p-wave" superconductivity, which academics have been struggling to verify for more than 20 years.
A San Francisco startup has landed Food and Drug Administration approval for artificial intelligence-assisted cardiac imaging in the cloud.
Arterys Inc.’s Cardio DL program applies deep learning, a form of artificial intelligence, to automate tasks that radiologists have been performing manually. It represents the first FDA-cleared, zero-footprint use of cloud computing and deep learning through AI in a clinical setting, the company said.
Arterys developed the technology by mining a data set of more than 3,000 cardiac cases. Cardio DL produces editable, automated contours, according to a company statement. It can provide accurate and consistent cardiac measurements in seconds, as opposed to one hour for manual processing.
Obtaining an image of a heart through MRI is a complex, time-consuming process that Arterys is working to improve, according to Arterys CEO Fabien Beckers.
Radiologists have traditionally used software to segment and draw contours around the ventricle to determine how the heart is functioning, Beckers said. The new, AI-assisted software can provide deep learning-generated contours of the insides and outsides of the heart’s ventricles to speed up the process and improve accuracy.
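The cardiac measurements follow directly from those contours: summing the contour area in each short-axis slice gives a ventricular volume, and comparing the volumes at end-diastole and end-systole gives the ejection fraction. A minimal sketch of that downstream arithmetic (the function names and volumes below are illustrative, not Arterys' implementation):

```python
def volume_from_contours(areas_mm2, slice_thickness_mm):
    """Simpson's method: ventricular volume as the sum of per-slice
    contour areas (mm^2) times slice thickness (mm), converted to ml."""
    return sum(areas_mm2) * slice_thickness_mm / 1000.0

def ejection_fraction(edv_ml, esv_ml):
    """Percentage of the end-diastolic volume ejected on each beat."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Hypothetical per-slice contour areas at end-diastole and end-systole:
edv = volume_from_contours([1500.0] * 8, 10.0)   # 120 ml
esv = volume_from_contours([625.0] * 8, 10.0)    # 50 ml
print(f"EF = {ejection_fraction(edv, esv):.1f}%")  # EF = 58.3%
```

The deep-learning step replaces the slow manual part, drawing the contours; once the contours exist, the measurements themselves are simple sums and ratios like these.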
“It’s the new way of doing medical imaging, a cloud medical imaging application that can have AI embedded in it,” he said. “It has the potential to make sure that physicians benefit from the work of thousands of other physicians and can be transforming healthcare in a positive fashion.”
The discovery that extinct marine organisms called trilobites laid eggs provides the first direct evidence for how they reproduced.
Trilobites lived between 520 million and 250 million years ago, and are one of the earliest known groups of arthropods (invertebrates, including modern insects, with exoskeletons and segmented bodies).
Thomas Hegna of Western Illinois University in Macomb and his colleagues report the discovery of ancient trilobite eggs in New York State, in rocks about 450 million years old. The eggs are spherical, almost 200 micrometers in diameter, and lie near several well-preserved trilobite fossils.
Trilobites may have released eggs and sperm through genital pores at or near the backs of their heads, the authors say.
Harvard University researchers have developed a multiregional brain-on-a-chip that models the connectivity between three distinct regions of the brain. The in vitro model was used to extensively characterize the differences between neurons from different regions of the brain and to mimic the system's connectivity.
The research was published in the Journal of Neurophysiology.
"The brain is so much more than individual neurons," said Ben Maoz, co-first author of the paper and postdoctoral fellow in the Disease Biophysics Group in the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). "It's about the different types of cells and the connectivity between different regions of the brain. When modeling the brain, you need to be able to recapitulate that connectivity because there are many different diseases that attack those connections."
"Roughly twenty-six percent of the US healthcare budget is spent on neurological and psychiatric disorders," said Kit Parker, the Tarr Family Professor of Bioengineering and Applied Physics at SEAS and Core Faculty Member of the Wyss Institute for Biologically Inspired Engineering at Harvard University. "Tools to support the development of therapeutics to alleviate the suffering of these patients is not only the human thing to do, it is the best means of reducing this cost."
The origin and cellular complexity of eukaryotes represent a major enigma in biology. Current data support scenarios in which an archaeal host cell and an alpha-proteobacterial (mitochondrial) endosymbiont merged together, resulting in the first eukaryotic cell. The host cell is related to Lokiarchaeota, an archaeal phylum with many eukaryotic features. The emergence of the structural complexity that characterizes eukaryotic cells remains unclear.
Scientists now describe the ‘Asgard’ superphylum, a group of uncultivated archaea that, as well as Lokiarchaeota, includes Thor-, Odin- and Heimdallarchaeota. Asgard archaea affiliate with eukaryotes in phylogenomic analyses, and their genomes are enriched for proteins formerly considered specific to eukaryotes.
Notably, thorarchaeal genomes encode several homologues of eukaryotic membrane-trafficking machinery components, including Sec23/24 and TRAPP domains. Furthermore, the researchers identify thorarchaeal proteins with similar features to eukaryotic coat proteins involved in vesicle biogenesis. These results expand the known repertoire of ‘eukaryote-specific’ proteins in Archaea, indicating that the archaeal host cell already contained many key components that govern eukaryotic cellular complexity.
Columbia Engineering researchers have invented a technique for manufacturing complex microdevices with three-dimensional, freely moving parts made from biomaterials that can safely be implanted in the body. Potential applications include a drug-delivery system to provide tailored drug doses for precision medicine, catheters, stents, cardiac pacemakers, and soft microbotics.
Most current implantable microdevices have static components rather than moving parts and, because they require batteries or other toxic electronics, they have limited biocompatibility.
The new technique stacks a soft biocompatible hydrogel material in layers, using a fast manufacturing method the researchers call “implantable microelectromechanical systems” (iMEMS).
On Oct. 26, 2016, a pair of Hornets flying above an empty part of California opened their bellies and released a robotic swarm. With machine precision, the fast-moving unmanned flying machines took flight, then moved to a series of waypoints, meeting objectives set for the swarm by a human controller. The brief flight of 103 tiny drones heralds a new age in how, exactly, America uses robots at war.
The Pentagon’s worked with Perdix drones since 2013, with the October flight using the military’s 6th generation of the devices. F/A-18 Hornets, long-serving Navy fighters, carried the drones and released them from flare dispensers. The small drones were the subject of an episode of CBS’s 60 Minutes, and they move so fast they’re hard to film. Below, in a clip from the Department of Defense, the drones are barely visible as dark blurs beneath the fighters.
Captured by telemetry video on the ground, the swarm is clearly visible. First it appears as if from nowhere, then moves as one toward a new set of objectives. This drone swarm was a product of the Strategic Capabilities Office, and outgoing Secretary of Defense Ash Carter praised the work, saying “This is the kind of cutting-edge innovation that will keep us a step ahead of our adversaries. This demonstration will advance our development of autonomous systems.”
Autonomy and swarming are centerpieces in many predictions about the next century of war. The Predator, Reaper, and Global Hawk drones that have so far most embodied how the United States fights wars are big, expensive, and vulnerable machines, with human pilots and sensor operators controlling them remotely. These drones also operate in skies relatively free of threats, without fear that a hostile jet will shoot them down. That’s an approach that’s fine for counterinsurgency battles, an admittedly large part of the wars the Pentagon actually fights, but against a near-peer nation or any foe with sophisticated anti-air or electronic jamming equipment, Reapers are extremely vulnerable targets.
Swarms, where several small flying robots work together to do the same job previously done by a larger craft, are one way around that. A few $45,000 anti-air missiles are a cost-effective way to shoot down an $18 million Reaper, but firing that same anti-air missile at a smaller, commercial drone isn’t as effective, especially when there are still 102 other drones flying the same mission at the same time.
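The cost asymmetry is easy to make concrete. Only the Reaper and missile prices come from the text; the per-drone cost below is an assumed, purely illustrative figure:

```python
reaper_cost = 18_000_000   # one MQ-9 Reaper (figure from the text)
missile_cost = 45_000      # one anti-air missile (figure from the text)
swarm_size = 103           # drones in the October 2016 demonstration
perdix_unit_cost = 20_000  # ASSUMPTION: illustrative per-drone cost only

# Against a Reaper, one missile trades $45k for $18M of aircraft:
print(f"value destroyed per missile vs. Reaper: {reaper_cost // missile_cost}x")

# Against the swarm, each missile kills at most one cheap drone, and
# clearing all 103 costs the defender more than the swarm is worth:
missiles_total = swarm_size * missile_cost
swarm_total = swarm_size * perdix_unit_cost
print(f"cost to clear swarm: ${missiles_total:,}  swarm value: ${swarm_total:,}")
```

Under these assumptions the economics invert: the missile that was a 400-to-1 bargain against a Reaper becomes a losing trade against each individual drone.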
Controlling that swarm is where autonomy comes in. With every Predator drone, there’s an actual joystick and flight controls for a human pilot, whose job it is to direct the uncrewed plane and maneuver it. That one-to-one ratio would be impossible to maintain with a small drone swarm, and given that the Perdix drone has a listed flight time of “over 20 minutes,” it would be a lot of effort for a very short excursion.
Before fulfilling its audacious dream of interstellar flight, Breakthrough Starshot—the private effort funded by billionaire Yuri Milner to conduct high-speed robotic voyages to the stars within a generation—must first find a destination.
The project’s primary target is the triple star system Alpha Centauri, our nearest interstellar neighbor at just over four light-years away. Of its three stars, only the red dwarf Proxima Centauri is known to have a planet, an Earth-mass world in a star-hugging orbit where liquid water – and therefore life as we know it – could exist. Astronomers already have plans to closely study this planet, but may find it unwelcoming: it is bombarded by intense flares from its nearby host star.
Many believe the system’s larger, brighter and more sunlike stars, the binary pair Alpha Centauri A and B, offer better prospects for life-friendly worlds, even though all previous planet hunts there have come up empty-handed. Thoroughly examining these two stars requires expensive new instruments and many nights on the world’s best, most in-demand telescopes—boons just as elusive as Alpha Centauri’s planets. For years, this relative lack of resources has rendered any worlds around Alpha Centauri A or B effectively invisible to us, lost in the overpowering glare of those stars.
Before the end of the decade, however, they may appear in plain view. This week, Milner’s Breakthrough Initiatives organization announced a partnership with the European Southern Observatory (ESO) to search for and image the planets of Alpha Centauri A and B as early as 2019. The partnership, in which Breakthrough purchases instrument upgrades and observing time on ESO’s Very Large Telescope (VLT) in Chile for an undisclosed sum, is only the first phase of the organization’s more ambitious plans to scour nearby stars for promising worlds that its Starshot probes might someday visit.
“It’s high time that humanity gets to know its neighboring star system better and finds out if it contains more planets,” Milner says. “This collaboration will develop state-of-the-art instruments to enhance the already impressive VLT in pursuit of that common goal.” Breakthrough representatives say the organization is already in discussions to augment its search with additional Southern Hemisphere observatories, and is also investigating possibilities for launching small, planet-finding space telescopes.