Learning to read Chinese might seem daunting to Westerners used to an alphabetic script, but brain scans of French and Chinese native speakers show that people harness the same brain centers for reading across cultures.
Vision researchers at Columbia University Medical Center have discovered a gene that causes myopia, but only in people who spend a lot of time in childhood reading or doing other “nearwork.”
Using a database of approximately 14,000 people, the researchers found that those with a certain variant of the gene — called APLP2 — were five times more likely to develop myopia in their teens if they had read an hour or more each day in their childhood. Those who carried the APLP2 risk variant but spent less time reading had no additional risk of developing myopia.
“We have known for decades that myopia is caused by genes and their interactions with environmental factors like reading and nearwork, but we have not had hard proof. This is the first known evidence of gene-environment interaction in myopia,” says the study’s lead investigator, Andrei Tkatchenko, MD, PhD, of CUMC. The research was published August 27, 2015 in PLOS Genetics.
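The pattern described here, extra risk from the variant only in combination with heavy reading, is the hallmark of a gene-environment interaction. A minimal Python sketch, using invented incidence numbers rather than the study's data, shows the shape of such an interaction:

```python
# Toy illustration (hypothetical numbers, not the study's data) of a
# gene-environment interaction like the APLP2 finding: the genetic
# variant raises myopia risk only in the heavy-reading group.
baseline = 0.10  # assumed myopia incidence for non-carriers

# (carries risk variant, reads an hour or more daily) -> myopia risk
incidence = {
    (False, False): baseline,
    (False, True):  baseline,      # heavy reading alone: no extra risk here
    (True,  False): baseline,      # variant alone: no extra risk
    (True,  True):  5 * baseline,  # variant plus heavy reading: 5x risk
}

def relative_risk(carrier, heavy_reader):
    """Risk relative to non-carriers with the same reading habits."""
    return incidence[(carrier, heavy_reader)] / incidence[(False, heavy_reader)]

print(relative_risk(True, False))  # 1.0 -> no added risk without the exposure
print(relative_risk(True, True))   # 5.0 -> risk emerges only with the exposure
```

The point of the sketch is that neither factor alone shifts the risk; the relative risk depends on the combination, which is what distinguishes an interaction from two independent risk factors.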
Although it’s not yet known how genetic variation at the APLP2 gene causes myopia, Dr. Tkatchenko and his colleagues think the risk variant may increase the amount of APLP2 protein produced in the eye, which in turn may cause the eye to undergo excessive elongation. They found that mice exposed to a visual environment that mimics reading were less likely to develop myopia when little APLP2 protein was present in the eye.
“By reducing the level of APLP2 in the eye, you can reduce susceptibility to environmentally induced myopia. This gives us an opportunity to develop a therapy to prevent myopia in everyone, regardless of the APLP2 variant they carry,” Dr. Tkatchenko says.
Developing such a therapy, however, could take years, as researchers don’t yet know how APLP2 levels could be reduced in people. And the therapy would be most effective in young children, before the eye has started to elongate and become myopic.
Dark matter—the unseen 80 percent of the universe’s mass—doesn’t emit, absorb or reflect light. Astronomers know it exists only because it interacts with our slice of the ordinary universe through gravity. Hence the hunt for this missing mass has focused on so-called WIMPs—Weakly Interacting Massive Particles—which interact with each other as infrequently as they interact with normal matter.
Physicists have reasons to look for alternatives to WIMPs. For two decades, astronomers have found less dark matter at the centers of galaxies than WIMP models suggest there should be. The discrepancy is even worse at the cores of the universe’s tiny dwarf galaxies, which have few ordinary stars but lots of dark matter.
About four years ago, James Bullock, a professor of physics and astronomy at the University of California, Irvine, began to wonder whether the standard view of dark matter was failing important empirical tests. “This was the point where I really started thinking hard about alternatives,” he said.
Bullock thinks that dark matter might instead be complex, something that interacts with itself strongly in the way that ordinary matter interacts with itself to form intricate structures like atoms and atomic elements. Such a self-interacting dark matter, Bullock suspects, could exist in a “dark sector,” somewhat parallel to our own light sector, but detectable only through the way it affects gravity.
He and his colleagues have created numerical simulations that predict what the universe would look like if dark matter feels strong interactions. They expected to see the model fail. Instead, they found that it was consistent with what astronomers observe.
Bullock explains: "We’ve come to understand that we can describe the world that we experience by the Standard Model of particle physics. We think of the particles that make up you and me as being broken down into constituent things, like quarks, and those quarks combine into neutrons and protons. There is a complicated dance that allows these particles to interact in certain ways. It gives rise to the periodic table of elements and all of the vast complexity we see around us. Just 20 percent of the mass of the universe is all of this complexity."
On the other hand, dark matter makes up something like 80 percent of the mass. First-guess models for what it is suggest that it is one particle that doesn’t really interact with much of anything—WIMPs. These are collisionless, meaning when two dark matter particles come at each other they basically go through each other.
Another possibility is this 80 percent of the universe is also complex. Maybe there’s something interesting going on in what’s called the dark sector. We know that whatever ties us to the dark matter is pretty weak or else we would have already seen it. This observation has led to the belief that all the interactions that could be going on with dark matter are weak. But there’s another possibility: When dark matter particles see themselves, there are complex and potentially very strong interactions. There even could be dark atoms and dark photons.
The best way to study the subatomic particles that make up the most fundamental building blocks of our universe is, of course, to smash them into each other with as much energy as possible. And now physicists at SLAC National Accelerator Laboratory say they’ve found a better way to do that.
Researchers at SLAC’s Facility for Advanced Accelerator Experimental Tests (FACET) are especially interested in what happens when they crash high-energy beams of electrons into beams of positrons, their antimatter opposites. To answer the next generation of questions about these particles, however, physicists would need particle accelerators six miles long or more, with current accelerator technology.
That’s why FACET researchers developed a way to increase the energy of a particle beam in a shorter distance, so physicists could study electrons and positrons with smaller accelerators. It works like this: when physicists fire a concentrated group of electrons into an ionized gas, or plasma, the electrons create a wake. That wake can help accelerate a second group of electrons, travelling behind the first group, because they get to basically surf on a wave of plasma.
The technique, called plasma wakefield acceleration, works well for electrons, but it’s harder to accelerate positrons this way. Usually, the second group of positrons loses its shape or slows down when it hits the wake, rather than surfing the plasma wave and going faster. Researchers at FACET found a way to fire a single, carefully shaped group of positrons so that the front of the group creates a wake that helps accelerate the tail of the group and focus its shape.
It works well, according to the researchers, who published the results of their experiments in the journal Nature. “In this stable state, about 1 billion positrons gained 5 billion electronvolts of energy over a short distance of only 1.3 meters,” said lead author Sébastien Corde, of France’s Ecole Polytechnique, in a statement. That means the particle colliders of the future could be much smaller, with higher energy collisions, than today’s colliders.
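The quoted figures imply an enormous accelerating gradient. A quick back-of-envelope check in Python (the 20 MV/m figure for a conventional linac is an assumed, typical value, not from the paper):

```python
# Gradient implied by the FACET result: 5 GeV gained over 1.3 m.
energy_gain_gev = 5.0
plasma_length_m = 1.3

plasma_gradient = energy_gain_gev / plasma_length_m  # GeV per meter
print(f"Plasma gradient: {plasma_gradient:.2f} GeV/m")  # ~3.85 GeV/m

# Length a conventional linac (assumed ~20 MV/m) would need for the same gain
conventional_gradient_gev_per_m = 0.020
conventional_length_m = energy_gain_gev / conventional_gradient_gev_per_m
print(f"Equivalent conventional length: {conventional_length_m:.0f} m")
```

Roughly two orders of magnitude in gradient is what makes the "much smaller colliders" claim plausible: the same energy gain that takes hundreds of meters of conventional structure fits in about a meter of plasma.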
At the moment, FACET is the only facility that can accelerate positrons this way. Particle colliders are expensive, so it’s not likely that research facilities will be building new colliders to take advantage of the wakefield acceleration method anytime soon, but some may upgrade their existing accelerators. “It’s conceivable to boost the performance of linear accelerators by adding a very short plasma accelerator at the end,” said Corde, “That would multiply the accelerator’s energy without making the entire structure significantly longer.”
A UCSF-led team has developed a technique to build tiny models of human tissues, called organoids, more precisely than ever before using a process that turns human cells into a biological equivalent of LEGO bricks. These mini-tissues in a dish can be used to study how particular structural features of tissue affect normal growth or go awry in cancer. They could be used for therapeutic drug screening and to help teach researchers how to grow whole human organs.
The new technique — called DNA Programmed Assembly of Cells (DPAC) and reported in the journal Nature Methods on Aug. 31 — allows researchers to create arrays of thousands of custom-designed organoids, such as models of human mammary glands containing several hundred cells each, which can be built in a matter of hours.
There are few limits to the tissues this technology can mimic, said Zev Gartner, PhD, the paper’s senior author and an associate professor of pharmaceutical chemistry at UCSF. “We can take any cell type we want and program just where it goes. We can precisely control who’s talking to whom and who’s touching whom at the earliest stages. The cells then follow these initially programmed spatial cues to interact, move around, and develop into tissues over time.”
“One potential application,” Gartner said, “would be that within the next couple of years, we could be taking samples of different components of a cancer patient’s mammary gland and building a model of their tissue to use as a personalized drug screening platform. Another is to use the rules of tissue growth we learn with these models to one day grow complete organs.”
Our bodies are made of more than 10 trillion cells of hundreds of different kinds, each of which plays its unique role in keeping us alive and healthy. The way these cells organize themselves structurally in different organ systems helps them coordinate their amazingly diverse behaviors and functions, keeping the whole biological machine running smoothly. But in diseases such as breast cancer, the breakdown of this order has been associated with the rapid growth and spread of tumors.
“Cells aren’t lonely little automatons,” Gartner said. “They communicate through networks to make group decisions. As in any complex organization, you really need to get the group’s structure right to be successful, as many failed corporations have discovered. In the context of human tissues, when organization fails, it sets the stage for cancer.”
But studying how the cells of complex tissues like the mammary gland self-organize, make decisions as groups, and break down in disease has been a challenge to researchers. The living organism is often too complex to identify the specific causes of a particular cellular behavior. On the other hand, cells in a dish lack the critical element of realistic 3-D structure.
“This technique lets us produce simple components of tissue in a dish that we can easily study and manipulate,” said Michael Todhunter, PhD, who led the new study with Noel Jee, PhD, when both were graduate students in the Gartner research group. “It lets us ask questions about complex human tissues without needing to do experiments on humans.”
Transcriptome sequencing reveals Euglena’s unexpected metabolic capabilities.
The pond algae Euglena gracilis has a surprising wealth of metabolic pathways for unexpected natural products, new research shows. Genes from this common single-celled organism could therefore be manipulated to synthesise a host of unusual, and potentially useful, compounds.
Euglenoids are a group of algae that grow abundantly in nutrient-rich freshwater environments, such as garden ponds. Euglena gracilis is known to produce many nutritional compounds including vitamins A, C and E, essential amino acids and polyunsaturated fatty acids. However, sequencing its genome in a bid to unlock these valuable natural products has proved very challenging due to its large size, complexity and incorporation of the unusual nucleotide base J.
Researchers, led by Rob Field at the John Innes Centre in the UK, have tackled this problem by instead looking at Euglena’s transcriptome – the mRNA transcribed from the genome that shows what genes an organism is using at a given time.
The results are intriguing: this single-celled organism possesses over 30,000 protein-encoding genes – significantly more than the 21,000 found in humans. Only a third of these genes were constantly active, while the rest seemed to be light-responsive. ‘Around 10,000 genes are switched on when the lights are on and 10,000 switched off, so it’s almost as if Euglena is two different organisms living in the same chassis,’ explains Field. A vast number of genes – nearly 60% – had no known match in other studied organisms, meaning we simply don’t know what they do.
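The split between constitutive and light-responsive genes comes from comparing expression with the lights on versus off. A toy Python sketch, with invented expression values and an arbitrary two-fold threshold rather than the paper's actual analysis, illustrates the classification:

```python
# Toy transcriptome comparison (invented data, not the paper's):
# sort genes into light-responsive classes by comparing expression
# levels measured with lights on vs lights off.
light_on  = {"geneA": 90, "geneB": 5,  "geneC": 50}
light_off = {"geneA": 4,  "geneB": 88, "geneC": 48}

def classify(on, off, fold=2):
    """Label a gene by its light response, using a simple fold-change rule."""
    if on >= fold * max(off, 1):
        return "light-induced"
    if off >= fold * max(on, 1):
        return "light-repressed"
    return "constitutive"

for gene in light_on:
    print(gene, classify(light_on[gene], light_off[gene]))
# geneA -> light-induced, geneB -> light-repressed, geneC -> constitutive
```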
There were some revelations among the genes that could be identified too, including unexpected genes for the production of a variety of potentially useful classes of natural products that have not been associated with Euglena before, such as polyketides and non-ribosomal peptides. This is a very interesting finding according to Wilfred van der Donk, a natural products biochemist at the University of Illinois, US, because ‘products of these types of genes have seldom or never been isolated from these organisms. Thus, these findings open the door to isolation and structural elucidation of these compounds, and investigation of their function.’
This transcriptome approach to studying Euglena could be applied to other related organisms too, such as algal blooms suffocating parts of the UK’s Norfolk Broads. Taking what they’ve learned from Euglena, Field’s group have now begun to study how algae produce toxic natural products and what environmental factors might trigger this production.
Some natural types of fungus appear to inhibit the build-up of tau—a protein linked to Alzheimer’s disease and other neurodegenerative diseases.
“Tau is a protein that is produced by the body,” says T. Chris Gamblin, associate professor of molecular biosciences at the University of Kansas. “It’s found primarily in neurons in the normal brain where it helps them maintain their shape and function.
“In Alzheimer’s disease, through a mechanism we don’t quite understand, tau is changed in a way that causes it to start clumping together with other tau molecules, forming string-like fibrils that accumulate into the pathological structure in Alzheimer’s disease called ‘neurofibrillary tangles.’”
For a new study published in Planta Medica, researchers tested 17 natural fungal products, most of which had similar structures to compounds seen by previous researchers to hinder tau formation. “Because fungi have historically been a rich source of biologically useful compounds, we thought it would be worth screening them to determine their activity,” Gamblin says. “We used advanced genetic techniques to get the fungus to overproduce many, many different types of natural products so we could purify and identify them.”
Of the 17 natural products, the researchers discovered three that were effective in inhibiting tau accumulation: 2,ω-dihydroxyemodin, asperthecin, and asperbenzaldehyde. “All three of them did block the aggregation of tau 100 percent as far as we can tell,” says Gamblin. “Some of them took very high concentrations to do so, though.”
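Screens like this are typically summarized as percent inhibition relative to an untreated control. A hypothetical Python sketch of that bookkeeping (the readout values below are invented for illustration, not the study's measurements):

```python
# Percent inhibition of tau aggregation relative to an untreated control.
# All readout values here are invented for illustration.
control_signal = 100.0  # aggregation readout with no compound added

def percent_inhibition(signal):
    """How much a compound reduced the aggregation readout, as a percentage."""
    return 100.0 * (1.0 - signal / control_signal)

# Hypothetical readouts: a strong hit vs an essentially inactive compound
print(percent_inhibition(0.5))   # 99.5 -> near-complete block
print(percent_inhibition(98.0))  # ~2 -> essentially inactive
```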
What makes someone better at switching between different tasks? Looking for the mechanisms behind cognitive flexibility, researchers at the University of Pennsylvania and Germany’s Central Institute of Mental Health in Mannheim and Charité University Medicine Berlin have used brain scans to shed new light on this question.
By studying networks of activity in the brain’s frontal cortex, a region associated with control over thoughts and actions, the researchers have shown that the degree to which these networks reconfigure themselves while switching from task to task predicts people’s cognitive flexibility.
Experiment participants who performed best while alternating between a memory test and a control test showed the most rearrangement of connections within their frontal cortices as well as the most new connections with other areas of their brains.
A more fundamental understanding of how the brain manages multitasking could lead to better interventions for medical conditions associated with reduced executive function, such as autism, schizophrenia or dementia.
Danielle Bassett, the Skirkanich Assistant Professor of Innovation in Penn’s School of Engineering and Applied Science, is senior author on the study. Mannheim’s Urs Braun and Axel Schäfer were the lead authors. The research also featured work from Andreas Meyer-Lindenberg and Heike Tost of Mannheim, Henrik Walter of Charité, and others. It was published in the Proceedings of the National Academy of Sciences.
Rather than looking at the role a single region in the brain plays, Bassett and colleagues study the interconnections between the regions as indicated by synchronized activity. Using fMRI, they can measure which parts of the brain are “talking” to one another as study participants perform various tasks. Mapping the way this activity network reconfigures itself provides a more holistic view of how the brain operates.
“We try to understand how dynamic flexibility of brain networks can predict cognitive flexibility, or the ability to switch from task to task,” Bassett said. “Rather than being driven by the activity of single brain areas, we believe executive function is a network-level process.”
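One simple way to picture "network reconfiguration" is to compare the connection patterns measured during two tasks and count how many connections changed. The sketch below is a hypothetical illustration of that idea, not the study's actual method (which works with weighted fMRI networks and more sophisticated dynamic measures):

```python
import numpy as np

# Hypothetical sketch: quantify how much a brain network reconfigures
# between two tasks as the fraction of possible connections that differ
# between two binary adjacency matrices (invented 3-node example).
def reconfiguration(adj_a, adj_b):
    """Fraction of possible edges that differ between two binary networks."""
    a, b = np.triu(adj_a, 1), np.triu(adj_b, 1)  # upper triangle: each edge once
    n = adj_a.shape[0]
    possible = n * (n - 1) // 2
    return np.sum(a != b) / possible

memory  = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])  # network during memory task
control = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # network during control task

print(reconfiguration(memory, control))  # 2 of 3 possible edges differ
```

In this toy picture, a participant whose task networks overlap heavily would score near 0, while one whose frontal connections rearrange substantially, as the best performers' did, would score closer to 1.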
A group of scientists from Russia, the USA and China, led by Artyom Oganov from the Moscow Institute of Physics and Technology (MIPT), using computer generated simulation have predicted the existence of a new two-dimensional carbon material, a “patchwork” analogue of graphene called phagraphene. The results of their investigation were recently published in the journal Nano Letters.
“Unlike graphene, a hexagonal honeycomb structure with atoms of carbon at its junctions, phagraphene consists of penta-, hexa- and heptagonal carbon rings. Its name comes from a contraction of Penta-Hexa-heptA-graphene,” says Oganov, head of the MIPT Laboratory of Computer Design.
Two-dimensional materials, composed of a one-atom-thick layer, have attracted great attention from scientists in the last few decades. The first of these materials, graphene, was discovered in 2004 by two MIPT graduates, Andre Geim and Konstantin Novoselov. In 2010 Geim and Novoselov were awarded the Nobel Prize in physics for that achievement.
Due to its two-dimensional structure, graphene has absolutely unique properties. Most materials can transmit electric current when unbound electrons have an energy that corresponds to the conduction band of the material. When there is a gap, the so-called forbidden zone, between the range of possible electron energies (the valence band) and the conduction band, the material acts as an insulator. When the valence band and conduction band overlap, it acts as a conductor, and electrons can move under the influence of an electric field.
In graphene each carbon atom has three electrons that are bound to electrons in neighboring atoms, forming chemical bonds. The fourth electron of each atom is “delocalized” throughout the whole graphene sheet, which allows it to conduct electrical current. At the same time, the forbidden zone in graphene has zero width. If you plot the electron energies against their momenta, you get a figure resembling an hourglass, i.e. two cones connected at their vertices. These are known as Dirac cones.
Due to this unique condition, electrons in graphene behave very strangely: all of them have one and the same velocity (which is comparable to the velocity of light), and they possess no inertia. They appear to have no mass. And, according to the theory of relativity, particles traveling at the velocity of light must behave in this manner. The velocity of electrons in graphene is about 10 thousand kilometers a second. Electron velocities in a typical conductor vary from centimeters up to hundreds of meters per second.
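The single shared speed falls out of the band shape: for the linear Dirac dispersion E = ħ·v_F·|k|, the group velocity dE/dk equals the same constant v_F for every electron, whereas in an ordinary parabolic band the velocity grows with momentum. A small numerical sketch (v_F is set to the commonly quoted ~10^6 m/s Fermi velocity; the parabolic case uses the free-electron mass for comparison):

```python
import numpy as np

# Group velocities from two band shapes:
#   Dirac (graphene):    E = hbar * v_F * k   -> v = dE/dk / hbar = v_F (constant)
#   Parabolic (ordinary): E = (hbar*k)^2 / 2m -> v = hbar * k / m (grows with k)
hbar = 1.0545718e-34  # J*s
v_f = 1.0e6           # m/s, typical graphene Fermi velocity
m_e = 9.109e-31       # kg, free-electron mass

k = np.linspace(1e8, 1e9, 5)  # sample wavevectors, 1/m

v_dirac = np.gradient(hbar * v_f * k, k) / hbar  # same value for every k
v_parabolic = hbar * k / m_e                     # different value for every k

print(v_dirac)      # all ~1e6 m/s, independent of momentum
print(v_parabolic)  # spans an order of magnitude across the sampled k
```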
Phagraphene, which Oganov and his colleagues discovered through the use of the USPEX algorithm, is, like graphene, a material in which Dirac cones appear and electrons behave like massless particles.
Animal experiments show how a just-discovered prion triggers a rare Parkinson’s-like disease.
Scientists claim to have discovered the first new human prion in almost 50 years. Prions are misfolded proteins that make copies of themselves by inducing others to misfold. By so doing, they multiply and cause disease. The resulting illness in this case is multiple system atrophy (MSA), a neurodegenerative disease similar to Parkinson's. The study, published August 31 in Proceedings of the National Academy of Sciences, adds weight to the idea that many neurodegenerative diseases are caused by prions.
In the 1960s researchers led by Carleton Gajdusek at the National Institutes of Health transmitted kuru, a rare neurodegenerative disease found in Papua New Guinea, and Creutzfeldt–Jakob disease (CJD), a rare human dementia, to chimpanzees by injecting samples from victims' brains directly into those of chimps. It wasn't until 1982, however, that Stanley Prusiner coined the term prion (for “proteinaceous infectious particle”) to describe the self-propagating protein responsible.
Prusiner and colleagues at the University of California, San Francisco, showed this process caused a whole class of diseases, called spongiform encephalopathies (for the spongelike appearance of affected brains), including the bovine form known as “mad cow” disease. The same protein, PrP, is also responsible for kuru, which was spread by cannibalism; variant-CJD, which over 200 people developed after eating beef infected with the bovine variety; and others. The idea that a protein could transmit disease was radical at the time but the work eventually earned Prusiner the 1997 Nobel Prize in Physiology or Medicine. He has long argued prions may underlie other neurodegenerative diseases but the idea has been slow to gain acceptance.
In 2013 a team in Prusiner's lab, including neuroscientist Kurt Giles, were trying to transmit Parkinson's disease to mice genetically engineered to produce a human protein involved in Parkinson’s, alpha-synuclein, by injecting them with brain samples from deceased patients. They failed, but for comparison they also used two MSA samples—those mice got sick. “The controls were the ones that worked,” Giles says. “So we got lots more samples.” For the new study, the team obtained 12 more MSA samples from three brain banks in London, Boston and Sydney.
The result was the same: the mice injected with these samples all developed disease within 3.5 to 5 months. The gene inserted in the mice has a mutation associated with a hereditary form of Parkinson's, which researchers think makes the alpha-synuclein more likely to misfold. Mice with two copies develop disease spontaneously, after about 10 months, but mice with one copy remain healthy. Injecting either type with MSA samples resulted in neurodegeneration and death for both in the same short time span.
Presumably what happens is that alpha-synuclein prions in the MSA brain samples propagate by inducing the human alpha-synuclein proteins in the mice, which are prone to misfold, to take their particular aberrant shape. Afterward, these mice's brains also showed buildups of alpha-synuclein in cells, and samples from these brains also caused disease in other mice. Neither a sample from a disease-free brain nor samples from Parkinson's patients had these effects.
The unique properties found in the stunning iridescent wings of a tropical blue butterfly could hold the key to developing new highly selective gas detection sensors. Pioneering new research by a team of international scientists, including researchers from the University of Exeter, has replicated the surface chemistry found in the iridescent scales of the Morpho butterfly to create an innovative gas sensor.
The ground-breaking findings could help inspire new designs for sensors that could be used in a range of sectors, including medical diagnostics, industry, and the military. The research, published in the journal Nature Communications on September 1st ("Towards outperforming conventional sensor arrays with fabricated individual photonic vapour sensors inspired by Morpho butterflies"), describes how the composition of gases in different environments can be detected by measuring small colour changes of the innovative bio-inspired sensor.
Professor Pete Vukusic, one of the authors of the research and part of the Physics department at the University of Exeter said: "Bio-inspired approaches to the realisation of new technologies are tremendously valuable. In this work, by developing a detailed understanding of the subtle way in which the appearance and colour of the Morpho butterfly arises, and the way this colour depends on its local environment, our team has discovered a remarkable way in which we can advance sensor and detector technology rapidly."
Tiny tree-like nanostructures in the scales of Morpho wings are known to be responsible for the butterfly's brilliant iridescence. Previous studies have shown that vapour molecules adhere differently to the top of these structures than to the bottom due to local chemistry within the scales. This selective response to vapour molecules is the key to this bio-inspired gas sensor.
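As a rough illustration of the sensing principle, imagine each vapour producing a characteristic shift in the sensor's reflectance peak, so that an unknown reading is identified by the closest known shift. Everything below (the vapours, the shift values, the matching rule) is invented for illustration and is not the paper's method:

```python
# Hypothetical library of colour-shift "fingerprints": each vapour is
# assumed to shift the sensor's reflectance peak by a characteristic
# number of nanometres (all values invented for illustration).
known_shifts_nm = {"water": 2.0, "ethanol": 5.5, "toluene": 9.0}

def identify(shift_nm):
    """Match a measured peak shift to the closest known vapour fingerprint."""
    return min(known_shifts_nm, key=lambda v: abs(known_shifts_nm[v] - shift_nm))

print(identify(5.2))  # closest fingerprint is ethanol's 5.5 nm
print(identify(1.5))  # closest fingerprint is water's 2.0 nm
```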
A team of researchers with Kogakuin University has demonstrated a lithium-ion battery that is not only nearly transparent, but can also be recharged with direct sunlight alone. The battery was demonstrated at Innovation Japan 2015, where the leader of the team, who is also president of the university, explained the goals of their battery research and the benefits consumers might eventually see from it.
It was just four years ago that a team of researchers at Stanford unveiled a nearly transparent lithium-ion battery that was both see-through and bendable. The team in Japan has been working with the new technology since then, two years ago unveiling a nearly transparent battery of their own which was charged with a separate solar panel. Now, the team has upgraded that battery by allowing it to recharge itself when exposed to sunlight.
To make the new battery, the team tweaked materials that are already generally used in lithium-ion batteries—lithium iron phosphate for the positive electrode, and lithium titanate and lithium hexafluorophosphate for the negative electrode. The trick in getting the battery to be nearly transparent is making the electrodes really thin—just 80 nm and 90 nm. When the battery is exposed to sunlight, it becomes slightly tinted (down to approximately 30 percent transmittance), lowering the amount of light that can pass through; after discharge, the team reports, light transmittance rises to approximately 60 percent. They also report output from the battery of 3.6 V.
The team believes their transparent solar-charged batteries could one day be used as "smart" windows for homes or offices, providing not only automatic tinting but also energy capture and storage for use in a variety of ways. Taking the concept further, the idea could at some point be extended to consumer electronics, with displays or even entire casings made of the material to help keep phones, tablets and other gear operating when used outdoors or under other types of lighting. But first the new technology will have to be vetted to make sure it works as promised (it has so far been tested through 20 charge/discharge cycles) and then to see if it can stand up to the rigors of daily use.
A team of scientists has successfully measured particles of light being “squeezed”, in an experiment that had been written off in physics textbooks as impossible to observe.
Squeezing is a strange phenomenon of quantum physics. It creates a very specific form of light which is “low-noise” and is potentially useful in technology designed to pick up faint signals, such as the detection of gravitational waves.
The standard approach to squeezing light involves firing an intense laser beam at a material, usually a non-linear crystal, which produces the desired effect.
For more than 30 years, however, a theory has existed about another possible technique. This involves exciting a single atom with just a tiny amount of light. The theory states that the light scattered by this atom should, similarly, be squeezed.
Unfortunately, although the mathematical basis for this method – known as squeezing of resonance fluorescence – was drawn up in 1981, the experiment to observe it was so difficult that one established quantum physics textbook despairingly concludes: “It seems hopeless to measure it”.
So it has proven – until now. In the journal Nature, a team of physicists report that they have successfully demonstrated the squeezing of individual light particles, or photons, using an artificially constructed atom, known as a semiconductor quantum dot. Thanks to the enhanced optical properties of this system and the technique used to make the measurements, they were able to observe the light as it was scattered, and proved that it had indeed been squeezed.
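In quadrature terms, squeezing means the noise (variance) of one field quadrature drops below the vacuum level while the conjugate quadrature becomes noisier, so that the uncertainty product is preserved. A minimal numerical sketch for an idealized squeezed state, using the common convention that the vacuum variance is 1/4 in each quadrature:

```python
import math

# Quadrature variances of an ideal squeezed state with squeezing
# parameter r, in the convention where the vacuum variance is 0.25.
VACUUM_VAR = 0.25

def squeezed_variances(r):
    """Return (squeezed, anti-squeezed) quadrature variances."""
    return VACUUM_VAR * math.exp(-2 * r), VACUUM_VAR * math.exp(2 * r)

v1, v2 = squeezed_variances(0.5)
print(v1 < VACUUM_VAR)  # True: one quadrature sits below the vacuum noise floor
print(abs(v1 * v2 - VACUUM_VAR**2) < 1e-12)  # True: uncertainty product preserved
```

Detecting that sub-vacuum dip in v1 at the level of single scattered photons is what the quoted textbook deemed hopeless, and what the quantum-dot experiment finally achieved.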
Professor Mete Atature, from the Cavendish Laboratory, Department of Physics, and a Fellow of St John’s College at the University of Cambridge, led the research. He said: “It’s one of those cases of a fundamental question that theorists came up with, but which, after years of trying, people basically concluded it is impossible to see for real – if it’s there at all.”
“We managed to do it because we now have artificial atoms with optical properties that are superior to natural atoms. That meant we were able to reach the necessary conditions to observe this fundamental property of photons and prove that this odd phenomenon of squeezing really exists at the level of a single photon. It’s a very bizarre effect that goes completely against our senses and expectations about what photons should do.”
The robot moves slowly along its track, pausing regularly to reach out an arm that carefully scoops up a component. The arm connects the component to an elaborate construction on the robot's back. Then the robot moves forward and repeats the process — systematically stringing the parts together according to a precise design.
It might be a scene from a high-tech factory — except that this assembly line is just a few nanometres long. The components are amino acids, the product is a small peptide and the robot, created by chemist David Leigh at the University of Manchester, UK, is one of the most complex molecular-scale machines ever devised.
It is not alone. Leigh is part of a growing band of molecular architects who have been inspired to emulate the machine-like biological molecules found in living cells — kinesin proteins that stride along the cell's microscopic scaffolding, or the ribosome that constructs proteins by reading genetic code. Over the past 25 years, these researchers have devised an impressive array of switches, ratchets, motors, rods, rings, propellers and more — molecular mechanisms that can be plugged together as if they were nanoscale Lego pieces. And progress is accelerating, thanks to improved analytical-chemistry tools and reactions that make it easier to build big organic molecules.
Now the field has reached a turning point. “We've made 50 or 60 different motors,” says Ben Feringa, a chemist at the University of Groningen in the Netherlands. “I'm less interested in making another motor than actually using it.”
That message was heard clearly in June, when one of the influential US Gordon conferences focused for the first time on molecular machines and their potential applications, a clear sign that the field has come of age, says the meeting's organizer, chemist Rafal Klajn of the Weizmann Institute of Science in Rehovot, Israel. “In 15 years' time,” says Leigh, “I think they will be seen as a core part of chemistry and materials design.”
Getting there will not be easy. Researchers must learn how to make billions of molecular machines work in concert to produce measurable macroscopic effects such as changing the shape of a material so that it acts as an artificial muscle. They must also make the machines easier to control, and ensure that they can carry out countless operations without breaking.
That is why many in the field do not expect the first applications to involve elaborate constructs. Instead, they predict that the basic components of molecular machines will be used in diverse areas of science: as light-activated switches that can release targeted drugs, for example, or as smart materials that can store energy or expand and contract in response to light. That means that molecular architects need to reach out to researchers who work in fields that might benefit from their machine parts, says Klajn. “We need to convince them that these molecules are really exciting.”
While the U.S. Navy is busy with the development of a new bulletproof material called Spinel, Surmet Corporation is already commercially producing its own version called ALON®. Technically known as aluminum oxynitride, the material may be more familiar to Star Trek fans as “transparent aluminum,” first proposed by Scotty in the 1986 movie Star Trek IV: The Voyage Home. While ALON isn’t quite what Scotty had in mind (it’s not truly a transparent metallic aluminum, but rather a transparent aluminum-based ceramic), it’s pretty darn close.
Developed by Raytheon, ALON begins as a powder, which is then molded and baked at very high temperatures. The heating process causes the powder to liquefy and cool quickly, leaving the molecules loosely arranged, as if still in liquid form. It is this crystalline structure that gives ALON strength and scratch resistance comparable to rugged sapphire. Polishing the aluminum oxynitride strengthens the material and also makes it extremely clear.
Traditional bulletproof glass is composed of multiple layers: polycarbonate sandwiched between two layers of glass. Similarly, transparent aluminum armor consists of three layers: an outer layer of aluminum oxynitride, a middle layer of glass and a rear layer of polymer backing. However, the similarities stop there. Aluminum armor can deflect the same small-caliber rounds as traditional bulletproof glass, but it remains clearer even after being shot. Also, a .50-caliber armor-piercing bullet can sink nearly three inches into bulletproof glass before stopping. Aluminum armor can stop it in half that distance, at half the weight and thickness of traditional transparent armor.
In addition, transparent aluminum armor can be produced in virtually any shape and can also hold up to the elements much better than traditional bulletproof glass, which can be worn away by blowing desert sand or shrapnel.
Despite aluminum oxynitride’s ability to produce a superior transparent aluminum armor, the material has not been put into widespread use. The largest factor is cost: transparent aluminum armor can cost anywhere from three to five times as much to produce as traditional bulletproof glass. In theory, however, it would not need to be replaced as often, saving money in the long run. Further, there is no existing infrastructure to produce the material in large panes, such as the size of a vehicle’s front windshield. ALON is currently used for smaller applications, such as the lenses in battlefield cameras or the windows over the sensors in missiles.
Algae, which damage marine ecosystems by creating water blooms and red tides, are now turning into the next-generation raw material for eco-friendly biofuels, including biodiesel and bioethanol.
Until now, biofuels have been produced from first-generation grass feedstocks, such as corn and sugar cane, or second-generation plant feedstocks, including corn stalks and rice husks. However, using grass feedstocks aggravates food shortages among low-income groups by raising the price of grain, while plant feedstocks have limitations like low yields. As a third-generation raw material that could overcome such weak points, marine algae and microalgae are in the spotlight of the global biofuels industry.
In particular, they absorb carbon dioxide as they grow. So, when marine algae and microalgae are supplied with carbon dioxide emitted from thermal power plants and breweries, they can reduce carbon dioxide emissions and produce biofuels at the same time. According to one survey, producing 100 tons of microalgae consumes about 180 tons of carbon dioxide.
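A quick carbon mass balance shows that the "180 tons of CO2 per 100 tons of microalgae" figure quoted above is chemically plausible. The ~50% carbon content of dry algal biomass used as a cross-check is an assumed textbook value, not a number from the article.

```python
# Rough carbon mass balance for the quoted CO2-to-biomass ratio.
M_C, M_CO2 = 12.011, 44.009   # molar masses of carbon and CO2, g/mol

co2_absorbed_t = 180.0        # tons of CO2 fixed (from the article)
biomass_t = 100.0             # tons of dry microalgae produced (from the article)

# Only the carbon atoms in the absorbed CO2 end up in the biomass.
carbon_in_co2_t = co2_absorbed_t * M_C / M_CO2
carbon_fraction = carbon_in_co2_t / biomass_t

print(f"carbon captured: {carbon_in_co2_t:.1f} t")                  # 49.1 t
print(f"implied carbon content of biomass: {carbon_fraction:.0%}")  # 49%
```

The implied ~49% carbon content is close to the roughly 50% typically measured for dry algal biomass, so the survey's ratio hangs together.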
Sohn Jong-koo, senior researcher at the Industry Information Analysis Center at KISTI, said, “Currently, the U.S. accounts for 50 percent of the algae biofuel market, while Europe accounts for 30 percent. Korea, Japan, China, Australia and Israel are now going after them.” Sohn expects the related market to take shape in earnest beginning this year, as commercial plants are constructed. In fact, market research firm Pike Research has forecast that the algae biofuel market will be worth an estimated US$1.6 billion (1.88 trillion won) this year and will grow rapidly by 812 percent over the next five years, reaching US$13 billion (15.3 trillion won) in 2020. That would mean 61 million gallons, or 230 million liters, of algae biofuels sold around the world by then.
In a bid to tap into this huge market, South Korean government-funded research institutes and private firms are advancing the technology with government support. The country is aiming to construct 500,000 hectares of marine algae farms by 2020 and produce 227 million liters of bioethanol annually, covering over 20 percent of domestic gasoline consumption.
Via Marko Dolinar
Scientists have fabricated a flexible electrical circuit that, when cut into two pieces, can repair itself and fully restore its original conductivity. The circuit is made of a new gel that possesses a combination of properties that are not typically seen together: high conductivity, flexibility, and room-temperature self-healing. The gel could potentially offer self-healing for a variety of applications, including flexible electronics, soft robotics, artificial skins, biomimetic prostheses, and energy storage devices.
The researchers, led by Guihua Yu, an assistant professor at the University of Texas at Austin, have published a paper on the new self-healing gel in a recent issue of Nano Letters.
The new gel's properties arise from its hybrid composition of two gels: a supramolecular gel, or 'supergel', is injected into a conductive polymer hydrogel matrix. As the researchers explain, this "guest-to-host" strategy allows the chemical and physical features of each component to be combined.
The supergel, or the "guest," provides the self-healing ability due to its supramolecular chemistry. As a supramolecular assembly, it consists of large molecular subunits rather than individual molecules. Due to its large size and structure, the assembly is held together by much weaker interactions than normal molecules, and these interactions can also be reversible. This reversibility is what gives the supergel its ability to act like a "dynamic glue" and reassemble itself.
Meanwhile, the conductive polymer hydrogel, or the "host," contributes to the conductivity due to its nanostructured 3D network that promotes electron transport. As the backbone of the hybrid gel, the hydrogel component also reinforces its strength and elasticity. When the supergel is injected into the hydrogel matrix, it wraps around the hydrogel in such a way as to form a second network, further strengthening the hybrid gel as a whole.
In their experiments, the researchers fabricated thin films of the hybrid gel on flexible plastic substrates to test their electrical properties. The tests showed that the conductivity is among the highest reported for conductive hybrid gels, and is maintained, thanks to the self-healing property, even after repeated bending and stretching. The researchers also demonstrated that, when an electrical circuit made of the hybrid gel is cut, it takes about one minute for the circuit to self-heal and recover its original conductivity. The gel self-heals even after being cut multiple times in the same location.
A new ingredient developed by scientists in Scotland could mean that ice cream fans can enjoy their treats before they melt.
A naturally occurring protein can be used to create ice cream which stays frozen for longer in hot weather. The scientists estimate that the slow-melting product could become available in three to five years. The development could also allow products to be made with lower levels of saturated fat and fewer calories.
Teams at the Universities of Edinburgh and Dundee have discovered that the protein, known as BslA, works by binding together the air, fat and water in ice cream. It is also said to prevent gritty ice crystals from forming - ensuring a fine, smooth texture.
Prof Cait MacPhee, of the University of Edinburgh's school of physics and astronomy, who led the project, said: "It's not completely non-melting because you do want your ice cream to be cold. It will melt eventually but hopefully by keeping it stable for longer it will stop the drips."
China is set to complete the installation of the world's longest quantum communication network, stretching 2,000 km (1,240 miles) from Beijing to Shanghai, by 2016, say scientists leading the project. Quantum communications technology is considered to be "unhackable" and allows data to be transferred at the speed of light. By 2030, the Chinese network would be extended worldwide, the South China Morning Post reported. That would make the country the first major power to publish a detailed schedule for putting the technology into extensive, large-scale use.
The development of quantum communications technology has accelerated in the last five years. The technology works by two people sharing a message which is encrypted by a secret key made up of quantum particles, such as polarized photons. If a third person tries to intercept the photons by copying the secret key as it travels through the network, then the eavesdropper will be revealed by virtue of the laws of quantum mechanics – which dictate that the act of interfering with the network affects the behaviour of the key in an unpredictable manner.
If all goes to schedule, China would be the first country to put a quantum communications satellite in orbit, said Wang Jianyu, deputy director of the China Academy of Science's (CAS) Shanghai branch. At a recent conference on quantum science in Shanghai, Wang said scientists from CAS and other institutions have completed major research and development tasks for launching the satellite equipped with quantum communications gear, South China Morning Post said.
The potential success of the satellite was confirmed by China's leading quantum communications scientist, Pan Jianwei, a CAS academic who is also a professor of quantum physics at the University of Science and Technology of China (USTC) in Hefei, in the eastern province of Anhui. Pan said researchers reported significant progress on systems development after conducting experiments at a test center in the northwest of China.
The satellite would be used to transmit encoded data through a method called quantum key distribution (QKD), which relies on cryptographic keys transmitted via light pulse signals. QKD is said to be nearly impossible to hack, since any attempted eavesdropping would change the quantum states and thus could be quickly detected by dataflow monitors.
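The eavesdropper-detection principle described above can be made concrete with a toy BB84-style simulation. This is an illustrative sketch only, not the actual protocol or parameters of the Chinese network; the function name, basis labels and photon counts are all assumptions for the example.

```python
import random

def bb84_error_rate(n_photons, eavesdrop, seed=0):
    """Simulate QKD sifting and return the error rate Alice and Bob observe."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [rng.choice("+x") for _ in range(n_photons)]
    photons = list(zip(alice_bits, alice_bases))

    if eavesdrop:
        # Eve must pick a measurement basis; a wrong guess randomizes the
        # bit she re-sends -- the disturbance quantum mechanics guarantees.
        resent = []
        for bit, basis in photons:
            eve_basis = rng.choice("+x")
            if eve_basis != basis:
                bit = rng.randint(0, 1)
            resent.append((bit, eve_basis))
        photons = resent

    # Bob measures each arriving photon in his own randomly chosen basis;
    # a basis mismatch yields a random bit.
    bob_bases = [rng.choice("+x") for _ in range(n_photons)]
    bob_bits = [bit if basis == bb else rng.randint(0, 1)
                for (bit, basis), bb in zip(photons, bob_bases)]

    # Sifting: keep only positions where Alice and Bob chose the same
    # basis, then compare those bits to estimate the error rate.
    matched = [(a, b) for a, ab, b, bb
               in zip(alice_bits, alice_bases, bob_bits, bob_bases) if ab == bb]
    return sum(a != b for a, b in matched) / len(matched)

print(bb84_error_rate(2000, eavesdrop=False))  # 0.0 -- clean channel
print(bb84_error_rate(2000, eavesdrop=True))   # typically near 0.25
```

Without Eve, every sifted bit agrees, so the error rate is zero; with Eve intercepting and re-sending, roughly a quarter of the sifted bits disagree, which is exactly the tamper signature the dataflow monitors watch for.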
Via LeapMind, Jocelyn Stoller
The smooth, diamond-patterned texture of sharkskin appears to be a very hostile environment for microorganisms.
Even though hospitals are routinely cleaned with strong antiseptics, they can still be filled with all sorts of microorganisms that threaten our health. About two million people acquire an infection in US hospitals every year, and around 100,000 die from one, while microorganisms grow ever more resistant to antibiotics. Scientists are trying to mimic nature to find a long-term solution to this problem, and they may have found it in sharkskin, WIRED Science reported.
While helping the Navy figure out how to keep its ships' hulls smooth, scientist Anthony Brennan studied sharkskin. Its smooth surface allows these great sea predators to swim faster than any other sea creature, and he noticed that sharkskin stays free of barnacles and algae. It is also well known that microorganisms are more likely to hold onto roughened surfaces than to stick to smooth ones. That is how Sharklet Technologies was created.
According to Sharklet Technologies CEO Mark Spiecker, “By staying clean while moving slow, sharks defy a basic principle of the ocean.” Sharkskin consists of millions of nano-ridges, arranged in a diamond pattern. This texture enables a process called mechanotransduction, which places mechanical stress on microorganisms. In such an environment, bacteria live no longer than 18 minutes, which is not long enough for them to reproduce, according to Spiecker.
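The 18-minute survival window matters because it is shorter than one bacterial division cycle. A minimal sketch, assuming an E. coli-like doubling time of about 20 minutes (a textbook figure; the article gives only the 18-minute number):

```python
# Toy model: cells only contribute to growth for divisions they complete
# within their survival window.
def population(n0, doubling_time_min, window_min):
    """Population after a time window, counting only completed doublings."""
    completed_doublings = window_min // doubling_time_min
    return n0 * 2 ** completed_doublings

# On the Sharklet texture, cells die at ~18 min, before a single division:
print(population(1000, 20, 18))   # 1000 -- no net growth before death
# On a smooth surface over three hours, nine doublings can complete:
print(population(1000, 20, 180))  # 512000
```

The point of the sketch: a surface does not need to kill bacteria instantly; killing them faster than they can divide is enough to prevent colonization.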
The goal is to create a thin film with the same texture as sharkskin that can be applied to a hospital's most exposed surfaces, such as door handles and stairway banisters. This should make it more difficult for bacteria, including antibiotic-resistant strains like MRSA, to build up on these areas and infect hospital patients.
WIRED reported Spiecker's claim that the Sharklet film can reduce bacteria transfer by up to 97 percent.
Two new microscopy techniques are helping scientists see smaller structures in living cells than ever glimpsed before.
Scientists can now view structures just 45 to 84 nanometers wide, Nobel prize-winning physicist Eric Betzig of the Howard Hughes Medical Institute's Janelia Research Campus in Ashburn, Va., and colleagues report in the Aug. 28 Science. The techniques beat the previous resolution of 100 nanometers and shatter the 250-nanometer “diffraction barrier” imposed by the bending of light.
Using other tricks to improve the super-resolution methods also allowed the researchers to take ultraquick pictures with less cell-damaging light than before. As a result, scientists can watch sub-second interactions within cells, revealing new insights into how cells work.
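The ~250 nm "diffraction barrier" mentioned above follows from the Abbe resolution limit, d = wavelength / (2 x NA). The wavelength and numerical aperture below are typical assumed values for a light microscope, not figures from the article.

```python
# Abbe diffraction limit for a conventional light microscope.
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Smallest feature size resolvable by diffraction-limited optics."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light (~550 nm) with an NA-1.1 objective lands right at the barrier:
print(f"{abbe_limit_nm(550, 1.1):.0f} nm")  # 250 nm
```

Super-resolution methods like Betzig's do not change this optics; they sidestep it by localizing individual fluorescent emitters or structuring the illumination, which is how 45 to 84 nm features become visible.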
More than 10^25 antineutrinos radiate from Earth into space every second, shining like a faint antineutrino star. Underground antineutrino detectors have revealed the rapidly decaying fission products inside nuclear reactors, verified the long-lived radioactivity inside our planet, and informed sensitive experiments for probing fundamental physics. Mapping the anisotropic antineutrino flux and energy spectrum advances geoscience by defining the amount and distribution of radioactive power within Earth while critically evaluating competing compositional models of the planet. A group of scientists now presents the Antineutrino Global Map 2015 (AGM2015), an experimentally informed model of Earth's surface antineutrino flux over the 0 to 11 MeV energy spectrum, along with an assessment of systematic errors. The open-source AGM2015 provides fundamental predictions for experiments, assists in strategic detector placement to determine neutrino mass hierarchy, and aids in identifying undeclared nuclear reactors. The researchers use cosmochemically and seismologically informed models of the radiogenic lithosphere and mantle, combined with the antineutrino flux measured by KamLAND and Borexino, to determine Earth's total antineutrino luminosity. They find a dominant flux of geo-neutrinos, predict sub-equal crust and mantle contributions, and estimate that ~1% of the total flux comes from man-made nuclear reactors.
By combining designer quantum dot light-emitters with spectrally matched photonic mirrors, a team of scientists with Berkeley Lab and the University of Illinois created solar cells that collect blue photons at 30 times the concentration of conventional solar cells, the highest luminescent concentration factor ever recorded. This breakthrough paves the way for the future development of low-cost solar cells that efficiently utilize the high-energy part of the solar spectrum.
"We've achieved a luminescent concentration ratio greater than 30 with an optical efficiency of 82 percent for blue photons," says Paul Alivisatos, Berkeley Lab director, Samsung Distinguished Professor of Nanoscience and Nanotechnology at the University of California Berkeley, director of the Kavli Energy Nanoscience Institute (ENSI), and co-leader of this research. "To the best of our knowledge, this is the highest luminescent concentration factor in the literature to date."
Alivisatos and Ralph Nuzzo of the University of Illinois are the corresponding authors of a paper in ACS Photonics describing this research entitled "Quantum Dot Luminescent Concentrator Cavity Exhibiting 30-fold Concentration." Noah Bronstein, a member of Alivisatos's research group, is one of three lead authors along with Yuan Yao and Lu Xu. Other co-authors are Erin O'Brien, Alexander Powers and Vivian Ferry.
The solar energy industry in the United States is soaring, with the number of photovoltaic installations having grown from 1.2 gigawatts of generating capacity in 2008 to more than 20 gigawatts today, according to the U.S. Department of Energy (DOE). Still, nearly 70 percent of the electricity generated in this country continues to come from fossil fuels. Low-cost alternatives to today's photovoltaic solar panels are needed for the immense advantages of solar power to be fully realized. One promising alternative is the luminescent solar concentrator (LSC).
Unlike conventional solar cells that directly absorb sunlight and convert it into electricity, an LSC absorbs the light on a plate embedded with highly efficient light-emitters called "lumophores" that then re-emit the absorbed light at longer wavelengths, a process known as the Stokes shift. This re-emitted light is directed to a micro-solar cell for conversion to electricity. Because the plate is much larger than the micro-solar cell, the solar energy hitting the cell is highly concentrated.
With a sufficient concentration factor, only small amounts of expensive III−V photovoltaic materials are needed to collect light from an inexpensive luminescent waveguide. However, the concentration factor and collection efficiency of the molecular dyes that up until now have been used as lumophores are limited by parasitic losses, including non-unity quantum yields of the lumophores, imperfect light trapping within the waveguide, and reabsorption and scattering of propagating photons.
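A back-of-the-envelope view of the concentration factor described above: it is the optical efficiency times the ratio of plate area to micro-cell area. The specific geometry below is an assumed illustration; the article reports only the 82% efficiency and the >30x concentration.

```python
# Luminescent concentration delivered to the micro-solar cell:
# efficiency x geometric gain (plate area over cell area).
def concentration_factor(optical_efficiency, plate_area, cell_area):
    """Photon concentration at the micro-cell relative to incident sunlight."""
    return optical_efficiency * plate_area / cell_area

# Plate/cell area ratio needed to reach 30x at 82% optical efficiency:
required_ratio = 30 / 0.82
print(f"required plate/cell area ratio: {required_ratio:.1f}")  # 36.6

# Check: a plate ~36.6x the cell's area at 82% efficiency gives 30x.
print(f"concentration: {concentration_factor(0.82, required_ratio, 1.0):.1f}")  # 30.0
```

This is why parasitic losses matter so much: every reabsorbed or scattered photon lowers the optical efficiency, forcing a larger (and costlier) plate to reach the same concentration.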
"We replaced the molecular dyes in previous LSC systems with core/shell nanoparticles composed of cadmium selenide (CdSe) cores and cadmium sulfide (CdS) shells that increase the Stokes shift while reducing photon re-absorption," says Bronstein.
An experimental gene therapy reduces the rate at which nerve cells in the brains of Alzheimer’s patients degenerate and die, according to new results from a small clinical trial, published in the current issue of the journal JAMA Neurology.
Targeted injection of the Nerve Growth Factor gene into the patients’ brains rescued dying cells around the injection site, enhancing their growth and inducing them to sprout new fibres. In some cases, these beneficial effects persisted for 10 years after the therapy was first delivered.
Alzheimer’s is the world’s leading form of dementia, affecting an estimated 47 million people worldwide. This figure is predicted to almost double every 20 years, with much of the increase likely to occur in the developing world. And despite the huge amounts of time, effort, and money devoted to developing an effective cure, the vast majority of new drugs have failed in clinical trials.
The new results are preliminary findings from the very first human trials designed to test the potential benefits of nerve growth factor (NGF) gene therapy for Alzheimer’s patients.
NGF was discovered in the 1940s by Rita Levi-Montalcini, who convincingly demonstrated that the small protein promotes the survival of certain sub-types of sensory neurons during development of the nervous system. Since then, others have shown that it also promotes the survival of acetylcholine-producing cells in the basal forebrain, which die off in Alzheimer’s.
Caltech engineers have created flat devices capable of manipulating light in ways that are very difficult or impossible to achieve with conventional optical components.
Ancient rocks harbored microbial life deep below the seafloor, reports a team of scientists from the Woods Hole Oceanographic Institution (WHOI), Virginia Tech, and the University of Bremen. This new evidence was contained in drilled rock samples of Earth's mantle - thrust by tectonic forces to the seafloor during the Early Cretaceous period. The new study was published today in the Proceedings of the National Academy of Sciences.
The discovery confirms a long-standing hypothesis that interactions between mantle rocks and seawater can create potential for life even in hard rocks deep below the ocean floor. The fossilized microbes are likely the same as those found at the active Lost City hydrothermal field, providing potentially important clues about the conditions that support 'intraterrestrial' life in rocks below the seafloor.
"We were initially looking at how seawater interacts with mantle rocks, and how that process generates hydrogen," said Frieder Klein, an associate scientist at WHOI and lead author of the study. "But during our analysis of the rock samples, we discovered organic-rich inclusions that contained lipids, proteins and amino acids - the building blocks of life - mummified in the surrounding minerals."
This study, which was a collaborative effort between Klein, WHOI scientists Susan Humphris, Weifu Guo and William Orsi, Esther Schwarzenbach from Virginia Tech and Florence Schubotz from the University of Bremen, focused on mantle rocks that were originally exposed to seawater approximately 125 million years ago when a large rift split the massive supercontinent known as Pangaea. The rift, which eventually evolved into the Atlantic Ocean, pulled mantle rocks from Earth's interior to the seafloor, where they underwent chemical reactions with seawater, transforming the seawater into a hydrothermal fluid.
"The hydrothermal fluid likely had a high pH and was depleted in carbon and electron acceptors," Klein said. "These extreme chemical conditions can be challenging for microbes. However, the hydrothermal fluid contained hydrogen and methane and seawater contains dissolved carbon and electron acceptors. So when you mix the two in just the right proportions, you can have the ingredients to support life."