Today’s digital photos are far more vivid than those of just a few years ago, thanks to a steady stream of advances in optics, detectors, and software. Similar advances have also improved the ability of machines called cryo-electron microscopes (cryo-EMs) to see the Lilliputian world of atoms and molecules. Now, researchers report that they’ve created the highest-resolution cryo-EM image yet, revealing a druglike molecule bound to its protein target at near-atomic resolution. The resolution is so sharp that it rivals images produced by x-ray crystallography, long the gold standard for mapping the atomic contours of proteins. The success could dramatically aid drugmakers in designing novel medicines for a wide variety of conditions.
“This represents a new era in imaging of proteins in humans with immense implications for drug design,” says Francis Collins, who heads the U.S. National Institutes of Health in Bethesda, Maryland. Collins may be partial. He’s the boss of the team of researchers from the National Cancer Institute (NCI) and the National Heart, Lung, and Blood Institute that carried out the work. Still, others agree that the new work represents an important milestone. “It’s a major advance in the technology,” says Wah Chiu, a cryo-EM structural biologist at Baylor College of Medicine in Houston, Texas. “It shows [cryo-EM] technology is here.”
Cryo-EM has long seemed behind the times—an old hand tool compared with the modern power tools of structural biology. The two main power tools, x-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy, enable researchers to pin down the position of protein features to less than 0.2 nanometers, good enough to see individual atoms. By contrast, cryo-EM has long been limited to a resolution of 0.5 nm or more.
Cryo-EM has been around for decades. But until recently its resolution hasn’t even been close to that of crystallography and NMR. “We used to be called the field of blob-ology,” says Sriram Subramaniam, a cryo-EM structural biologist at NCI, who led the current project. But steady improvements to the electron beam generators, detectors, and image analysis software have slowly helped cryo-EM inch closer to the powerhouse techniques. Earlier this year, for example, two groups of researchers broke the 0.3-nm-resolution benchmark, enough to get a decent view of the side chains of two proteins’ individual amino acids. Still, plenty of detail in the images remained fuzzy.
For their current study, Subramaniam and his colleagues sought to refine their images of β-galactosidase, a protein they imaged last year at a resolution of 0.33 nm. The protein serves as a good test case, Subramaniam says, because researchers can compare their images to existing x-ray structures to check their accuracy. Subramaniam adds that the current advance was less a single breakthrough than the product of painstaking refinements to a variety of techniques—including protein purification procedures that ensure each protein copy is identical and software improvements that allow researchers to better align their images. Subramaniam and his colleagues used some 40,000 separate images to piece together the final shape of their molecule. They report online today in Science that these refinements allowed them to produce a cryo-EM image of β-galactosidase at a resolution of 0.22 nm, not quite sharp enough to see individual atoms, but clear enough to see water molecules that bind to the protein in spots critical to the function of the molecule.
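The statistical idea behind combining tens of thousands of particle images is that averaging aligned copies suppresses random noise while the common signal survives. The toy NumPy sketch below is not the team's actual reconstruction pipeline (real single-particle analysis also involves orientation estimation and 3D reconstruction); it simply illustrates, with a synthetic "particle" and assumed noise levels, how the signal-to-noise ratio improves roughly with the square root of the number of averaged images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "ground truth" projection of a particle: a bright disk on a 64x64 grid.
size = 64
y, x = np.mgrid[:size, :size]
truth = (((x - size / 2) ** 2 + (y - size / 2) ** 2) < 100).astype(float)

def noisy_copy(sigma=3.0):
    """One simulated particle image: the truth buried in heavy Gaussian noise."""
    return truth + rng.normal(0.0, sigma, truth.shape)

def snr(image):
    """Crude signal-to-noise estimate, measured against the known ground truth."""
    return truth.std() / (image - truth).std()

# Averaging N aligned copies shrinks the noise by roughly sqrt(N).
results = {}
for n in (1, 100, 10_000):
    avg = np.zeros_like(truth)
    for _ in range(n):
        avg += noisy_copy()
    avg /= n
    results[n] = snr(avg)
    print(f"N={n:>6}: SNR ~ {results[n]:.2f}")
```

With these assumed noise levels, a single image is essentially unreadable, while the 10,000-image average recovers the disk cleanly — the same principle, at larger scale and in three dimensions, that lets 40,000 micrograph particles yield a 0.22-nm map.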
The above composite image of the protein β-galactosidase shows the progression of cryo-EM’s ability to resolve a protein’s features from mere blobs (left) a few years ago to the ultrafine 0.22-nanometer resolution today (right).
In January 1865, August Kekulé (see picture) published his theory of the structure of benzene, which he later reported had come to him in a daydream about a snake biting its tail. Although other theories had been postulated before 1865, Kekulé was the first to identify the correct structure. Kekulé’s theory resulted in a clear understanding of aromatic compounds and thus had a major impact on the development of chemical science and industry.
If you walk into the computer science building at Stanford University, Mobi is standing in the lobby, encased in glass. He looks a bit like a garbage can, with a rod for a neck and a camera for eyes. He was one of several robots developed at Stanford in the 1980s to study how machines might learn to navigate their environment—a stepping stone toward intelligent robots that could live and work alongside humans. He worked, but not especially well. The best he could do was follow a path along a wall. Like so many other robots, his “brain” was on the small side.
Now, just down the hall from Mobi, scientists led by roboticist Ashutosh Saxena are taking this mission several steps further. They’re working to build machines that can see, hear, comprehend natural language (both written and spoken), and develop an understanding of the world around them, in much the same way that people do.
Today, backed by funding from the National Science Foundation, the Office of Naval Research, Google, Microsoft, and Qualcomm, Saxena and his team unveiled what they call RoboBrain, a kind of online service packed with information and artificial intelligence software that any robot could tap into. Working alongside researchers at the University of California at Berkeley, Brown University, and Cornell University, they hope to create a massive online “brain” that can help all robots navigate and even understand the world around them. “The purpose,” says Saxena, who dreamed it all up, “is to build a very good knowledge graph—or a knowledge base—for robots to use.”
The 2014 outbreak of Ebola virus in West Africa has the world on high alert. Currently deemed an international public health emergency by the World Health Organization (WHO), the virus has killed more than 1,060 people and sickened 1,975, making it the deadliest Ebola outbreak ever.
Here's an update to my post Ebola - The New Black Death? on Tekrighter's Science Blog.
Self-folding sheets of a plastic-like material point the way to robots that can assume any conceivable 3-D structure.
Programmable matter is a material whose properties can be programmed to achieve specific shapes or stiffnesses upon command. This concept requires constituent elements to interact and rearrange intelligently in order to meet the goal. This research considers achieving programmable sheets that can form themselves in different shapes autonomously by folding. Past approaches to creating transforming machines have been limited by the small feature sizes, the large number of components, and the associated complexity of communication among the units. We seek to mitigate these difficulties through the unique concept of self-folding origami with universal crease patterns.
This approach exploits a single sheet composed of interconnected triangular sections. The sheet is able to fold into a set of predetermined shapes using embedded actuation. To implement this self-folding origami concept, we have developed a scalable end-to-end planning and fabrication process. Given a set of desired objects, the system computes an optimized design for a single sheet and multiple controllers to achieve each of the desired objects. The material, called programmable matter by folding, is an example of a system capable of achieving multiple shapes for multiple functions.
As director of the Distributed Robotics Laboratory at the Computer Science and Artificial Intelligence Laboratory (CSAIL), Professor Daniela Rus researches systems of robots that can work together to tackle complicated tasks. One of the big research areas in distributed robotics is what’s called “programmable matter,” the idea that small, uniform robots could snap together like intelligent Legos to create larger, more versatile robots.
The U.S. Defense Department’s Defense Advanced Research Projects Agency (DARPA) has a Programmable Matter project that funds a good deal of research in the field and specifies “particles … which can reversibly assemble into complex 3D objects.” But that approach turns out to have drawbacks, Rus says. “Most people are looking at separate modules, and they’re really worried about how these separate modules aggregate themselves and find other modules to connect with to create the shape that they’re supposed to create,” Rus says. But, she adds, “actively gathering modules to build up a shape bottom-up, from scratch, is just really hard given the current state of the art in our hardware.”
So Rus has been investigating alternative approaches, which don’t require separate modules to locate and connect to each other before beginning to assemble more complex shapes. Fortunately, also at CSAIL is Erik Demaine, who joined the MIT faculty at age 20 in 2001, becoming the youngest professor in MIT history. One of Demaine’s research areas is the mathematics of origami, and he and Rus hatched the idea of a flat sheet of material with tiny robotic muscles, or actuators, which could fold itself into useful objects. In principle, flat sheets with flat actuators should be much easier to fabricate than three-dimensional robots with enough intelligence that they can locate and attach to each other.
So they designed yet another set of algorithms that, given sequences of folds for several different shapes, would determine the minimum number of actuators necessary to produce all of them. Then they set about building a robot that could actually assume multiple origami shapes. Their prototype, made from glass-fiber and hydrocarbon materials, with an elastic plastic at the creases, is divided into 16 squares about a centimeter across, each of which is further divided into two triangles. The actuators consist of a shape-memory alloy — a metal that changes shape when electricity is applied to it. Each triangle also has a magnet in it, so that it can attach to its neighbors once the right folds have been performed.
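The actuator-minimization step lends itself to a small combinatorial sketch. The example below is a hypothetical illustration, not the researchers' published algorithm: it assumes each target shape can be reached by any one of several alternative fold sequences (each a set of crease IDs), and brute-forces the choice of one sequence per shape that minimizes the total set of creases needing an embedded actuator.

```python
from itertools import product

# Hypothetical data: each target shape lists alternative fold sequences,
# each given as the set of crease IDs it actuates. (Illustrative only.)
shapes = {
    "boat":  [{1, 2, 5}, {1, 3, 5}],
    "plane": [{2, 4, 5}, {3, 4, 6}],
    "box":   [{1, 2, 4}, {2, 4, 5}],
}

def min_actuators(shapes):
    """Brute-force one fold sequence per shape so that the union of
    actuated creases -- i.e., the actuator count -- is as small as possible."""
    best = None
    for choice in product(*shapes.values()):
        used = set().union(*choice)
        if best is None or len(used) < len(best):
            best = used
    return best

creases = min_actuators(shapes)
print(f"{len(creases)} actuators suffice, at creases {sorted(creases)}")
```

Brute force is exponential in the number of shapes, so a real planner would need heuristics or a set-cover-style formulation, but the objective — share actuators across the fold sequences of all desired shapes — is the same.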
Microalgae-based biofuel doesn't just have the potential to quench a sizable chunk of the world's energy demands, say Utah State University researchers. It's a potential game-changer.
"That's because microalgae produces much higher yields of fuel-producing biomass than other traditional fuel feedstocks and it doesn't compete with food crops," says USU mechanical engineering graduate student Jeff Moody.
With USU colleagues Chris McGinty and Jason Quinn, Moody published findings from an unprecedented worldwide microalgae productivity assessment in the May 26, 2014, issue of the Proceedings of the National Academy of Sciences. The team's research was supported by the U.S. Department of Energy.
Despite the promise of "pond scum" as a biofuel source, the USU investigators questioned whether it could be a silver-bullet solution to the challenges posed by fossil fuel dependence.
"Our aim wasn't to debunk existing literature, but to produce a more exhaustive, accurate and realistic assessment of the current global yield of microalgae biomass and lipids," Moody says.
With Quinn, assistant professor in USU's Department of Mechanical and Aerospace Engineering, and McGinty, associate director of USU's Remote Sensing/Geographic Information Systems Laboratory in the Department of Wildland Resources, Moody leveraged a large-scale, outdoor microalgae growth model. Using meteorological data from 4,388 global locations, the team determined the current global productivity potential of microalgae.
Algae, he says, yields about 2,500 gallons of biofuel per acre per year. In contrast, soybeans yield approximately 48 gallons; corn about 18 gallons.
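Working through the arithmetic on the yields quoted above makes the gap concrete — on these figures, microalgae out-produces soybeans by a factor of about 52 and corn by about 139:

```python
# Biofuel yield figures quoted in the article (gallons per acre per year).
yields = {"microalgae": 2500, "soybeans": 48, "corn": 18}

ratios = {}
for crop in ("soybeans", "corn"):
    ratios[crop] = yields["microalgae"] / yields[crop]
    print(f"Microalgae out-yields {crop} by a factor of ~{ratios[crop]:.0f}")
```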
"In addition, soybeans and corn require arable land that detracts from food production," Quinn says. "Microalgae can be produced in non-arable areas unsuitable for agriculture."
The researchers estimate untillable land in Brazil, Canada, China and the U.S. could be used to produce enough algal biofuel to supplement more than 30 percent of those countries' fuel consumption.
Digital medicine is poised to transform biomedical research, clinical practice and the commercial sector. Here we introduce a monthly column from R&D/venture creation firm PureTech tracking digital medicine's emergence.
Technology has already transformed the social fabric of life in the twenty-first century. It is now poised to profoundly influence disease management and healthcare. Beyond the hype of the 'mobile health' and 'wearable technology' movement, the ability to monitor our bodies and continuously gather data about human biology suggests new possibilities for both biomedical research and clinical practice. Just as the Human Genome Project ushered in the age of high-throughput genotyping, the ability to automate, continuously record, analyze and share standardized physiological and biological data augurs the beginning of a new era—that of high-throughput human phenotyping.
These advances are prompting new approaches to research and medicine, but they are also raising questions and posing challenges for existing healthcare delivery systems. How will these technologies alter biomedical research approaches, what types of experimental questions will researchers now be able to ask and what types of training will be needed? Will the ability to digitize individual characteristics and communicate by mobile technology empower patients and enable the modification of disease-promoting behaviors; at the same time, will it threaten patient privacy? Will doctors be prescribing US Food and Drug Administration (FDA)-cleared apps on a regular basis, not just to monitor and manage chronic disease but also to preempt acute disease episodes? Will the shift in the balance between disease treatment and early intervention have a broad economic impact on the healthcare system? How will the emergence of these new technologies reshape the healthcare industry and its underlying business models? What will be the defining characteristics of 'winning' products and companies?
These are just some of the questions we plan to ask over the coming months. In the meantime, we introduce here some of the key themes shaping R&D in the digital medicine field and focus on what they might mean for the biopharmaceutical and diagnostic/device industries.
With an estimated 1.6 billion tonnes of water ice at its poles and an abundance of rare-earth elements hidden below its surface, the moon is rich ground for mining.
In this month's issue of Physics World, science writer Richard Corfield explains how private firms and space agencies are dreaming of tapping into these lucrative resources and turning the moon's grey, barren landscape into a money-making conveyor belt.
When Apple announced the iPhone 4S on October 4, 2011, the headlines were not about its speedy A5 chip or improved camera. Instead they focused on an unusual new feature: an intelligent assistant, dubbed Siri. At first Siri, endowed with a female voice, seemed almost human in the way she understood what you said to her and responded, an advance in artificial intelligence that seemed to place us on a fast track to the Singularity. She was brilliant at fulfilling certain requests, like “Can you set the alarm for 6:30?” or “Call Diane’s mobile phone.” And she had a personality: If you asked her if there was a God, she would demur with deft wisdom. “My policy is the separation of spirit and silicon,” she’d say.
Over the next few months, however, Siri’s limitations became apparent. Ask her to book a plane trip and she would point to travel websites—but she wouldn’t give flight options, let alone secure you a seat. Ask her to buy a copy of Lee Child’s new book and she would draw a blank, despite the fact that Apple sells it. Though Apple has since extended Siri’s powers—to make an OpenTable restaurant reservation, for example—she still can’t do something as simple as booking a table on the next available night in your schedule. She knows how to check your calendar and she knows how to use OpenTable. But putting those things together is, at the moment, beyond her.
Now a small team of engineers at a stealth startup called Viv Labs claims to be on the verge of realizing an advanced form of AI that removes those limitations. Whereas Siri can only perform tasks that Apple engineers explicitly implement, this new program, they say, will be able to teach itself, giving it almost limitless capabilities. In time, they assert, their creation will be able to use your personal preferences and a near-infinite web of connections to answer almost any query and perform almost any function.
“Siri is chapter one of a much longer, bigger story,” says Dag Kittlaus, one of Viv’s cofounders. He should know. Before working on Viv, he helped create Siri. So did his fellow cofounders, Adam Cheyer and Chris Brigham.
For the past two years, the team has been working on Viv Labs’ product—also named Viv, after the Latin root meaning live. Their project has been draped in secrecy, but the few outsiders who have gotten a look speak about it in rapturous terms. “The vision is very significant,” says Oren Etzioni, a renowned AI expert who heads the Allen Institute for Artificial Intelligence. “If this team is successful, we are looking at the future of intelligent agents and a multibillion-dollar industry.”
Back in 2012, the Sun erupted with a powerful solar storm that just missed the Earth but was big enough to "knock modern civilization back to the 18th century," NASA said. The extreme space weather that tore through Earth's orbit on July 23, 2012, was the most powerful in 150 years, according to a statement posted on the US space agency website Wednesday.
However, few Earthlings had any idea what was going on. "If the eruption had occurred only one week earlier, Earth would have been in the line of fire," said Daniel Baker, professor of atmospheric and space physics at the University of Colorado. Instead the storm cloud hit the STEREO-A spacecraft, a solar observatory that is "almost ideally equipped to measure the parameters of such an event," NASA said. Scientists have analyzed the treasure trove of data it collected and concluded that it would have been comparable to the largest known space storm in 1859, known as the Carrington event. It also would have been twice as bad as the 1989 solar storm that knocked out power across Quebec, scientists said.
"I have come away from our recent studies more convinced than ever that Earth and its inhabitants were incredibly fortunate that the 2012 eruption happened when it did," said Baker. The National Academy of Sciences has said the economic impact of a storm like the one in 1859 could cost the modern economy more than two trillion dollars and cause damage that might take years to repair. Experts say solar storms can cause widespread power blackouts, disabling everything from radio to GPS communications to water supplies -- most of which rely on electric pumps.
They begin with an explosion on the Sun's surface, known as a solar flare, sending X-rays and extreme UV radiation toward Earth at light speed. Hours later, energetic particles follow and these electrons and protons can electrify satellites and damage their electronics.
Next are the coronal mass ejections, billion-ton clouds of magnetized plasma that take a day or more to cross the Sun-Earth divide. These are often deflected by Earth's magnetic shield, but a direct hit could be devastating.
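The staggered arrival times described above follow from simple distance-over-speed arithmetic. The back-of-envelope sketch below uses the mean Sun-Earth distance and an assumed 2,000 km/s speed for a fast coronal mass ejection (a representative figure, not one from the article) to show why the flare's radiation arrives in minutes while the plasma cloud takes the better part of a day:

```python
# Back-of-envelope arrival times for the stages of a solar storm.
AU_KM = 1.496e8          # mean Sun-Earth distance, km
C_KM_S = 299_792.458     # speed of light, km/s
CME_SPEED_KM_S = 2000    # assumed speed of a fast coronal mass ejection

flare_minutes = AU_KM / C_KM_S / 60      # X-rays and UV travel at light speed
cme_hours = AU_KM / CME_SPEED_KM_S / 3600

print(f"Flare X-rays/UV reach Earth in ~{flare_minutes:.1f} minutes")
print(f"A {CME_SPEED_KM_S} km/s CME arrives in ~{cme_hours:.1f} hours")
```

Slower, more typical CMEs (several hundred km/s) stretch the transit to two or three days, which is what gives forecasters their warning window.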
When innovation stalls, sometimes it just needs a little push. A bit of force applied in the right direction and then, momentum imparted, the rest takes care of itself. That push can come from many sources, but one tends to be the most effective: money.
It was a monetary prize that spurred Charles Lindbergh to strap into the Spirit of St. Louis and become the first to cross the Atlantic in one shot. It was a monetary prize that encouraged Scaled Composites to build SpaceShipOne, ultimately spawning Virgin Galactic. And, next year, it will be a monetary prize that puts the first non-government-funded rover on the moon. Or, possibly, multiple rovers.
Many researchers believe that physics will not be complete until it can explain not just the behavior of space and time, but where these entities come from.
“Imagine waking up one day and realizing that you actually live inside a computer game,” says Mark Van Raamsdonk, describing what sounds like a pitch for a science-fiction film. But for Van Raamsdonk, a physicist at the University of British Columbia in Vancouver, Canada, this scenario is a way to think about reality. If it is true, he says, “everything around us — the whole three-dimensional physical world — is an illusion born from information encoded elsewhere, on a two-dimensional chip”. That would make our Universe, with its three spatial dimensions, a kind of hologram, projected from a substrate that exists only in lower dimensions.
This 'holographic principle' is strange even by the usual standards of theoretical physics. But Van Raamsdonk is one of a small band of researchers who think that the usual ideas are not yet strange enough. If nothing else, they say, neither of the two great pillars of modern physics — general relativity, which describes gravity as a curvature of space and time, and quantum mechanics, which governs the atomic realm — accounts for the existence of space and time. Neither does string theory, which describes elementary threads of energy.
Van Raamsdonk and his colleagues are convinced that physics will not be complete until it can explain how space and time emerge from something more fundamental — a project that will require concepts at least as audacious as holography. They argue that such a radical reconceptualization of reality is the only way to explain what happens when the infinitely dense 'singularity' at the core of a black hole distorts the fabric of space-time beyond all recognition, or how researchers can unify atomic-level quantum theory and planet-level general relativity — a project that has resisted theorists' efforts for generations.
“All our experiences tell us we shouldn't have two dramatically different conceptions of reality — there must be one huge overarching theory,” says Abhay Ashtekar, a physicist at Pennsylvania State University in University Park.
Finding that one huge theory is a daunting challenge. Here, Nature explores some promising lines of attack — as well as some of the emerging ideas about how to test these concepts (see 'The fabric of reality').
Perhaps the most distinguishing feature of Spider-Man is his ability to shoot webs. But what forces, tensile strengths, and other physical properties are at play in those webs? Here, we break down the physics behind Spidey's iconic webbing.