NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL at the top right of the screen) to display all relevant postings SORTED by TOPICS.
You can also type your own query — for example, if you are looking for articles involving "dna" as a keyword.
ESO's Very Large Telescope has revealed the largest yellow star—and one of the 10 largest stars found so far. This hypergiant has been found to measure more than 1,300 times the diameter of the Sun, and to be part of a double star system, with the second component so close that it is in contact with the main star. Observations spanning over 60 years also indicate that this remarkable object is changing very rapidly.
Using ESO's Very Large Telescope Interferometer (VLTI), Olivier Chesneau (Observatoire de la Côte d'Azur, Nice, France) and an international team of collaborators have found that the yellow hypergiant star HR 5171 A is absolutely huge—1,300 times the diameter of the Sun and much bigger than was expected. This makes it the largest yellow star known. It is also in the top ten of the largest stars known—50% larger than the famous red supergiant Betelgeuse—and about one million times brighter than the Sun.
"The new observations also showed that this star has a very close binary partner, which was a real surprise," says Chesneau. "The two stars are so close that they touch and the whole system resembles a gigantic peanut."
The astronomers made use of a technique called interferometry to combine the light collected from multiple individual telescopes, effectively creating a giant telescope up to 140 metres in size. The new results prompted the team to thoroughly investigate older observations of the star spanning more than sixty years, to see how it had behaved in the past.
Yellow hypergiants are very rare, with only a dozen or so known in our galaxy—the best-known example being Rho Cassiopeiae. They are among the biggest and brightest stars known and are at a stage of their lives when they are unstable and changing rapidly. Due to this instability, yellow hypergiants also expel material outwards, forming a large, extended atmosphere around the star.
Despite its great distance of nearly 12,000 light-years from Earth, the object can just about be seen with the naked eye by the keen-sighted. HR 5171 A has been found to be getting bigger over the last 40 years, cooling as it grows, and its evolution has now been caught in action. Only a few stars are caught in this very brief phase, where they undergo a dramatic change in temperature as they rapidly evolve.
Have you ever Googled for an online diagnosis before visiting a doctor? If so, you may have helped provide early warning of an infectious disease epidemic.
In a new study published in Lancet Infectious Diseases, Internet-based surveillance has been found to detect infectious diseases such as dengue fever and influenza up to two weeks earlier than traditional surveillance methods, according to Queensland University of Technology (QUT) research fellow and senior author of the paper Wenbiao Hu.
“This is because traditional surveillance relies on the patient recognizing the symptoms and seeking treatment before diagnosis, along with the time taken for health professionals to alert authorities through their health networks. In contrast, digital surveillance can provide real-time detection of epidemics.”
Hu said the study used search engine algorithms such as Google Trends and Google Insights. It found that detecting the 2005–06 avian influenza outbreak “Bird Flu” would have been possible between one and two weeks earlier than official surveillance reports.
“In another example, a digital data collection network was found to be able to detect the SARS outbreak more than two months before the first publications by the World Health Organization (WHO),” Hu said.
Los Amigos Biological Station sits within the Peruvian Amazon—one of the planet’s richest hotspots for life. Countless species fly, scurry, climb and burrow through the surrounding rainforest. To be at the station is to be surrounded by life at its most diverse and wondrous.
But you don’t have to go into the forest to find diversity. The research station has a kilometre-long airstrip, and its borders are thick with climbing squash vines descending from the trees. A team of scientists led by Marty Condon from Cornell College collected some 3,600 flowers from these vines, all belonging to just two species. They found entire worlds.
The flowers were home to 14 species of fly, which lived nowhere else. “When we go out in the field, we collect every flower, fruit and stem of this group. These particular flies have only come out of these two plants,” says Condon. Most were even restricted to either the male or female flowers of their chosen plant.
There was more. Condon also found 18 species of parasitic wasps, which attack the fly larvae and lay eggs inside their bodies. Two of the wasps were generalists that attacked a wide variety of hosts. But the vast majority were specialists that targeted just one of the 14 available fly species, even though there were several possible targets around.
The human brain is built in such a way that only necessary information is stored permanently -- the rest is forgotten over time. However, so far it was not clear whether this process was active or passive. Scientists from the transfaculty research platform Molecular and Cognitive Neurosciences (MCN) at the University of Basel have now found a molecule that actively regulates memory loss. The so-called musashi protein is responsible for the structure and function of the synaptic connections of the brain, the place where information is communicated from one neuron to the next.
Using olfactory conditioning, the researchers Attila Stetak and Nils Hadziselimovic first studied the learning abilities of genetically modified roundworms (C. elegans) that were lacking the musashi protein. The experiments showed that the worms exhibited the same learning skills as unmodified animals. However, with extended duration of the experiment, the scientists discovered that the mutants were able to remember the new information much better. In other words: The genetically modified worms lacking the musashi protein were less forgetful.
Further experiments showed that the protein inhibits the synthesis of molecules responsible for the stabilization of synaptic connections. This stabilization seems to play an important role in the process of learning and forgetting. The researchers identified two parallel mechanisms: On the one hand, the protein adducin stimulates the growth of synapses and therefore also helps to retain memory; on the other hand, the musashi protein actively inhibits the stabilization of these synapses and thus facilitates memory loss. Therefore, it is the balance between these two proteins that is crucial for the retention of memories.
Forgetting is thus not a passive but rather an active process and a disruption of this process may result in serious mental disorders. The musashi protein also has interesting implications for the development of drugs trying to prevent abnormal memory loss that occurs in diseases such as Alzheimer's. Further studies on the therapeutic possibilities of this discovery will be done.
Researchers at UC Berkeley found that an electrical current can be used to orchestrate the flow of a group of cells, an achievement that could establish the basis for more controlled forms of tissue engineering and for potential applications such as “smart bandages” that use electrical stimulation to help heal wounds.
In the experiments, described in a study published this week in the journal Nature Materials, the researchers used single layers of epithelial cells, the type of cells that bind together to form robust sheaths in skin, kidneys, cornea and other organs. They found that by applying an electric field of about five volts per centimeter, they could encourage cells to migrate along the direct current electric field.
They were able to make the cells swarm left or right, to diverge or converge and to make collective U-turns. They also created elaborate shapes, such as a triceratops and the UC Berkeley Cal bear mascot, to explore how the population and configuration of cell sheets affect migration.
“This is the first data showing that direct current fields can be used to deliberately guide migration of a sheet of epithelial cells,” said study lead author Daniel Cohen, who did this work as a student in the UC Berkeley-UC San Francisco Joint Graduate Program in Bioengineering. “There are many natural systems whose properties and behaviors arise from interactions across large numbers of individual parts – sand dunes, flocks of birds, schools of fish, and even the cells in our tissues. Just as a few sheepdogs exert enormous control over the herding behavior of sheep, we might be able to similarly herd biological cells for tissue engineering.”
Galvanotaxis – the use of electricity to direct cell movement – had been previously demonstrated for individual cells, but how it influences the collective motion of cells was still unclear.
“The ability to govern the movement of a mass of cells has great utility as a scientific tool in tissue engineering,” said study senior author Michel Maharbiz, UC Berkeley associate professor of electrical engineering and computer sciences. “Instead of manipulating one cell at a time, we could develop a few simple design rules that would provide a global cue to control a collection of cells.”
The work grew out of a project, led by Maharbiz, to develop electronic nanomaterials for medical use that was funded by the National Science Foundation’s Emerging Frontiers in Research and Innovation program. The researchers collaborated with W. James Nelson, professor of molecular and cellular physiology at Stanford University and one of the world’s top experts in cell-to-cell adhesion. Cohen is now a postdoctoral research fellow in Nelson’s lab.
Researchers have shown that the commonly found bacterium Rhodopseudomonas palustris can use natural conductivity to pull electrons from minerals located remotely in soil and sediment while remaining at the surface, where they absorb the sunlight needed to produce energy.
Led by Peter Girguis, the John L. Loeb Associate Professor of the Natural Sciences, and Arpita Bose, a post-doctoral fellow in Organismic and Evolutionary Biology, a team of researchers showed that the commonly found bacterium Rhodopseudomonas palustris can use natural conductivity to pull electrons from minerals located deep in soil and sediment while remaining at the surface, where they absorb the sunlight needed to produce energy. The study is described in a February 26 paper in Nature Communications.
"When you think about electricity and living organisms, most people default to Mary Shelley's Frankenstein, but we've long understood that all organisms actually use electrons -- what constitutes electricity -- to do work," Girguis said. "At the heart of this paper is a process called extracellular electron transfer (EET), which involves moving electrons in and out of cells. What we were able to show is that these microbes take up electricity, which goes into their central metabolism, and we were able to describe some of the systems that are involved in that process."
In the wild, the microbes rely on iron to provide the electrons they need to fuel energy generation, but tests in the lab suggest the iron itself isn't critical for this process. By attaching an electrode to colonies of the microbes in the lab, researchers observed that they could take up electrons from a non-ferrous source, suggesting they might also use other electron-rich minerals -- such as other metals and sulfur compounds -- in the wild.
"That's a game-changer," Girguis said. "We have understood for a long time that the aerobic and anaerobic worlds interact mainly through the diffusion of chemicals into and out of those domains. Accordingly, we also believe this process of diffusion governs the rates of many biogeochemical cycles. But this research indicates…that this ability to do EET is, in a sense, an end-run around diffusion. That could change the way we think about the interactions between the aerobic and anaerobic worlds, and might change the way we calculate the rates of biogeochemical cycling."
Using genetic tools, researchers were also able to identify a gene that is critical to the ability to take up electrons. When the gene was turned off, Girguis said, the microbes' ability to take up electrons dropped by about a third.
"We are very interested in understanding exactly what that role that gene plays in electron uptake," Girguis said. "Related genes are found throughout other microbes in nature, and we aren't exactly sure what they're doing in those microbes. This offers some tantalizing evidence that other microbes carry out this process as well."
A new global monitoring system has been launched that promises "near real time" information on deforestation around the world.
Global Forest Watch (GFW) is backed by Google and over 40 business and campaigning groups.
It uses information from hundreds of millions of satellite images as well as data from people on the ground. Businesses have welcomed the new database as it could help them prove that their products are sustainable.
Despite greater awareness around the world of the impacts of deforestation, the scale of forest loss since 2000 has been significant - data from Google and the University of Maryland says the world lost 230 million hectares of trees between 2000 and 2012.
Forest campaigners say this is the equivalent of 50 football fields of trees being cut down every minute of every day over the past 12 years.
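That per-minute equivalence can be sanity-checked with a few lines of arithmetic. Here is a minimal sketch in Python, assuming a FIFA-standard pitch of 105 m × 68 m and a constant rate of loss over the 12 years (both assumptions are mine, not the article's):

```python
# Sanity check: does 230 million hectares lost over 12 years
# work out to roughly 50 football fields per minute?
HECTARE_M2 = 10_000          # square metres in a hectare
PITCH_M2 = 105 * 68          # assumed FIFA-standard pitch, ~7,140 m^2

lost_m2 = 230_000_000 * HECTARE_M2      # total area lost, 2000-2012
minutes = 12 * 365.25 * 24 * 60         # minutes in twelve years

pitches_per_minute = lost_m2 / PITCH_M2 / minutes
print(round(pitches_per_minute))        # prints 51 -- on the order of 50
```

The campaigners' round figure of 50 holds up under these assumptions.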
One of the big problems in dealing with tree loss has been a lack of accurate information. Over the same time period as all these trees were lost, around 800,000 sq km of new forest was planted.
To tackle the dearth of reliable and up-to-date information, the US-based World Resources Institute (WRI) has led the development of GFW, using half a billion high resolution images from Nasa's Landsat program.
The tool will be aimed at politicians and decision makers but also at indigenous groups.
Most people take it for granted that we have yet to make contact with an extraterrestrial civilization. Trouble is, the numbers don’t add up. Our Galaxy is so old that every corner of it should have been visited many, many times over by now. No theory to date has satisfactorily explained away this Great Silence, so it’s time to think outside the box. Here are eleven of the weirdest solutions to the Fermi Paradox.
There's no shortage of solutions to the Fermi Paradox. The standard ones are fairly well known, and we’re not going to examine them here, but they include the Rare Earth Hypothesis (the suggestion that life is exceptionally rare), the notion that space travel is too difficult or the distances too vast, the Great Filter Hypothesis (the idea that all sufficiently advanced civilizations destroy themselves before going intergalactic), and the possibility that we’re simply not interesting enough.
But for the purposes of this discussion, we’re going to look at some of the more bizarre and arcane solutions to the Fermi Paradox. Because sometimes it takes a weird explanation to answer a weird question. So, as Enrico Fermi famously asked, “Where is everybody?”
Astronauts, get your welding goggles on – the space station is going into the foundry business. The International Space Station (ISS) is set to do a spot of industrial research this June, when ESA’s Materials Science Laboratory-Electromagnetic Levitator (MSL-EML) heads for the station aboard Europe's Automated Transfer Vehicle 5 (ATV-5) Georges Lemaître unmanned space freighter as part of a program to study the casting of alloys in a weightless environment.
Most metals are crystalline and their properties depend on this microstructure, which develops as they cool. An everyday version of this is quench hardening, where a steel knife blade is heated to red hot and then plunged into cold water. The sudden cooling alters the crystalline microstructure of the steel, making it hard and able to hold a sharp edge.
The example is a simple one, but the process is actually extremely complex. It’s even more so when molten metal is cooled inside a casting. The temperature and density differences, convection forces as the cooling molten metal rises and falls in the mold, and any number of other factors are among the many reasons why casting metals, especially exotic alloys, is often as much art as science.
Microgravity is one way of reducing this complexity, so scientists are better able to understand it. In the absence of gravity, there aren't any convection forces, so metal castings have an even temperature. Furthermore, in a gravity-free environment metal samples can be suspended in a magnetic field and heated using induction coils. This means there are no complicating factors, such as the molten sample sticking to a crucible wall or being contaminated by it.
By means of microgravity, scientists hope to gain a better understanding of an alloy’s surface tension, viscosity, melting range, fraction solid, specific heat, heat of fusion, mass density, and thermal expansion among other things. This would be of tremendous importance for everything from casting turbine blades to developing lighter weight alloys.
Psychologists have long been investigating the connection between facial expressions and emotions. A theory first offered by Paul Ekman says that there are six primary emotions that are globally recognized and easily conveyed through specific facial expressions: happiness, sadness, fear, anger, surprise and disgust.
According to new research published in the journal Current Biology, scientists at the University of Glasgow have discovered that there are only four basic emotions: happiness, sadness, fear/surprise and anger/disgust.
In a novel approach, the study team looked at the ‘temporal dynamics’ of facial expressions, thanks to a unique system developed at the University of Glasgow. They studied the array of different muscles in the face involved in conveying different emotions, called ‘Action Units,’ in addition to the time-frame over which each muscle was triggered.
The scientists determined that while the facial expression signals of happiness and sadness are clearly unique across time, fear and surprise share a typical signal — the wide open eyes — at the start of the signaling mechanics. Likewise, anger and disgust share the wrinkled nose.
It is these first signals that could possibly represent simpler danger signals. Later in the signaling mechanics, facial expressions transfer signals that differentiate all six ‘classic’ facial expressions of emotion, the researchers said.
“Our results are consistent with evolutionary predictions, where signals are designed by both biological and social evolutionary pressures to optimize their function,” said study author Rachael Jack, a psychologist at the Scottish university.
“First, early danger signals confer the best advantages to others by enabling the fastest escape,” Jack explained. “Secondly, physiological advantages for the expresser – the wrinkled nose prevents inspiration of potentially harmful particles, whereas widened eyes increases intake of visual information useful for escape – are enhanced when the face movements are made early.”
“What our research shows is that not all facial muscles appear simultaneously during facial expressions, but rather develop over time supporting a hierarchical biologically-basic to socially-specific information over time,” she added.
The unique system developed by the study team uses cameras to record a three-dimensional image of participants’ faces. These participants were expressly trained to be able to activate all 42 individual facial muscles separately.
From the image, the system’s computer could generate a specific or random facial expression on a 3D model based on the triggering of different Action Units or clusters of units, to mimic all facial expressions.
Participants were then asked to observe the computer model as it generated various expressions and determine which emotion was being articulated. The researchers could then tell which specific Action Units observers correlate with specific emotions.
The study team discovered that the signals for fear/surprise and anger/disgust were confused at the beginning stage of transmission and only became more obvious later when other Action Units were incorporated.
“Our research questions the notion that human emotion communication comprises six basic, psychologically irreducible categories. Instead we suggest there are four basic expressions of emotion,” Jack said.
Imagine a ribbon roughly one hundred million times as long as it is wide. If it were a meter long, it would be 10 nanometers wide, or just a few times thicker than a DNA double helix. Scaled up to the length of a football field, it would still be less than a micrometer across — smaller than a red blood cell. Would you trust your life to that thread? What about a tether 100,000 kilometers long, one stretching from the surface of the Earth to well past geostationary orbit (GEO, 22,236 miles up), but which was still somehow narrower than your own wingspan?
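The scales in that thought experiment follow directly from the stated ratio. A quick check in Python (only the 1:100,000,000 aspect ratio and the lengths above are taken from the text):

```python
# Width of a ribbon whose length is ~1e8 times its width,
# at the three scales mentioned in the text.
ASPECT = 1e8                             # length-to-width ratio

metre_ribbon_width = 1.0 / ASPECT        # 1 m long      -> 1e-8 m = 10 nm
field_ribbon_width = 100.0 / ASPECT      # 100 m long    -> 1e-6 m, ~1 micrometre
tether_width = 100_000 * 1000 / ASPECT   # 100,000 km    -> 1.0 m, narrower than a wingspan

print(metre_ribbon_width, field_ribbon_width, tether_width)
```

Each width matches the figures in the paragraph: 10 nanometres, about a micrometre, and roughly a metre for the full-length tether.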
The idea of climbing such a ribbon with just your body weight sounds precarious enough, but the ribbon predicted by a new report from the International Academy of Astronautics (IAA) will be able to carry up to seven 20-ton payloads at once. It will serve as a tether stretching far beyond geostationary (aka geosynchronous) orbit and held taut by an anchor of roughly two million kilograms. Sending payloads up this backbone could fundamentally change the human relationship with space — every climber sent up the tether could match the space shuttle in capacity, allowing up to a “launch” every couple of days.
The report spends 350 pages laying out a detailed case for this device, called a space elevator. The central argument — that we should build a space elevator as soon as possible — is supported by a detailed accounting of the challenges associated with doing so. The possible pay-off is as simple as could be — a space elevator could bring the cost-per-kilogram of launch to geostationary orbit from $20,000 to as little as $500.
Not only is a geostationary orbit intrinsically useful for satellites, but it’s far enough up the planet’s gravity well to be able to use it in cheap, Earth-assisted launches. A mission to Mars might begin by pushing off near the top of the tether and using small rockets to move into a predictably unstable fall — one, two, three loops around the Earth and off we go with enough pep to cut huge fractions off the fuel budget. Setting up a base on the Moon or Mars would be relatively trivial, with a space elevator in place.
Those are not small advantages, and are worth significant investment from the private sector. Governments and corporations spend billions installing infrastructure in space — an elevator could easily pay for itself, and demand investment from anyone with an interest in ensuring cheap access to it down the line. A space elevator is relevant to scientists, telecoms, and militaries alike — and with Moon- and asteroid-based mining becoming less hare-brained by the minute, Earth’s notorious resource sector could get on board as well. It will certainly be expensive, probably the biggest mega-project of all time, but since a space elevator can offer a solid value proposition to everyone from Google to DARPA to Exxon, funding might end up being the least of its problems.
During harvest last year, banana farmers in Jordan and Mozambique made a chilling discovery. Their plants were no longer bearing the soft, creamy fruits they'd been growing for decades. When they cut open the roots of their banana plants, they saw something that was turning banana plants into a rotting mass.
Scientists first discovered the fungus that is turning banana plants into this rotting mass in Southeast Asia in the 1990s. Since then the pathogen, known as the Tropical Race 4 strain of Panama disease, has slowly but steadily ravaged export crops throughout Asia. The fact that this vicious soil-borne fungus has now made the leap to Mozambique and Jordan is frightening. One reason is that it’s getting closer to Latin America, where at least 70% of the world’s $8.9-billion-a-year worth of exported bananas is grown.
Chiquita, the $548-million fruit giant with the world’s largest banana market share, is downplaying the risk. “It’s certainly not an immediate threat to banana production in Latin America [where Chiquita's crops are],” Ed Lloyd, spokesman for Chiquita, told the Charlotte Business Journal in late December, explaining that the company is using a “risk-mitigation program” to approach the potential spread.
Even if it takes longer to arrive, the broader ravaging of the commercial banana appears inevitable. And we don’t need to imagine what that would mean for banana exports—the exact scenario has already happened. Starting in 1903, Race 1, an earlier variant of today’s pathogen, ravaged the export plantations of Latin America and the Caribbean. Within 50 years, Race 1 drove the world’s only export banana species, the Gros Michel, to virtual extinction. That’s why 99% of the bananas eaten in the developed world today are a cultivar called the Cavendish, the only export-suitable banana that could take on Race 1 and live to tell.
Over the half-century it took to wipe out the Gros Michel, Race 1 caused at least $2.3 billion in damage (around $18.2 billion in today’s terms). And that was in the commercial heart of global banana production. Tropical Race 4, by comparison, has damaged $400 million in banana crops in the Philippines alone.
But the bigger difference now is that, compared with its 20th-century cousin, Tropical Race 4 is a pure killing machine—and not just for Cavendishes. Scores of other species that are immune to Race 1 have no defenses against the new pathogen. In fact, Tropical Race 4 is capable of killing at least 80%—though possibly as much as 85%—of the 145 million tonnes (160 million tons) of bananas and plantains produced each year, says Ploetz.
Alan Turing's accomplishments in computer science are well known, but lesser known is his impact on biology and chemistry. In his only paper on biology, Turing proposed a theory of morphogenesis, or how identical copies of a single cell differentiate, for example, into an organism with arms and legs, a head and tail.
Now, 60 years after Turing's death, researchers from Brandeis University and the University of Pittsburgh have provided the first experimental evidence that validates Turing's theory in cell-like structures.
The team published their findings in the Proceedings of the National Academy of Sciences on Monday, March 10.
Turing was the first to offer an explanation of morphogenesis through chemistry. He theorized that identical biological cells differentiate, change shape and create patterns through a process called intercellular reaction-diffusion. In this model, a system of chemicals react with each other and diffuse across a space—say between cells in an embryo. These chemical reactions need an inhibitory agent, to suppress the reaction, and an excitatory agent, to activate the reaction. This chemical reaction, diffused across an embryo, will create patterns of chemically different cells.
Turing predicted six different patterns could arise from this model. At Brandeis, Seth Fraden, professor of physics, and Irv Epstein, the Henry F. Fischbach Professor of Chemistry, created rings of synthetic, cell-like structures with activating and inhibiting chemical reactions to test Turing's model. They observed all six patterns plus a seventh unpredicted by Turing.
New 2D material could have very large magnetic moment
Atom-thick layers of iron have been made in the tiny holes of a perforated piece of free-standing graphene. The work was done by an international team, which has also done calculations that suggest the new material has some potentially useful exotic properties, such as a large magnetic moment. However, the team believes the 2D structure becomes thermodynamically unstable when it is more than 12 atoms wide: a problem that would have to be overcome before the material could be put to work in practical applications such as magnetic data storage.
At first glance, a free-standing 2D metal seems impossible. This is because the bonding between atoms in a metal is mediated by conduction electrons, which are free to move in any direction. As a result, metals tend to have 3D crystal structures and no tendency to form planar sheets. This is unlike crystalline carbon, which is held together by highly directional covalent bonds that allow free-standing atom-thick sheets of graphene to exist. While single epitaxial layers of metal atoms can be created on a substrate, these are not true 2D materials because the atoms are bonded to the underlying structure.
In the new research, Mark Rümmeli and colleagues at the Leibniz Institute for Solid State and Materials Research in Dresden, Germany, and at institutes in Poland and Korea studied the behaviour of metal atoms at the edges of holes in graphene. They grew a sheet of graphene by chemical vapour deposition on a surface and detached it by etching the substrate with an iron-chloride solution. This left trace amounts of iron on the surface of the graphene. Irradiating the graphene with an electron beam created small holes and also encouraged the iron atoms to move around. The edge atoms of graphene are the most reactive because they contain dangling bonds; so when the mobile iron atoms encounter the edge of a hole, they bond to it. This continues with iron atoms bonding to the other iron atoms around the edge, until the hole is completely sealed with a 2D square lattice of iron.
The group's theoretical calculations show that the largest thermodynamically stable sheet would be about 12 atoms – or just 3 nm – across. The largest sheets observed in the experiment were only 10 atoms wide. Beyond this, the tendency of iron to form a 3D structure wins out over the bonding between the iron and carbon atoms at the edges. "The atoms usually form a tiny crystal that sticks to one of the edges," explains Rümmeli, who is now at the Institute for Basic Science in Korea.
Converting natural gas into liquids could lead to greener, and cheaper, fuel and chemicals
Natural gas is great at heating our houses, but it’s not so good at fueling our cars—at least not yet. Researchers in the United States have discovered a new and more efficient method for converting the main components in natural gas into liquids that can be further refined into either common commodity chemicals or fuels. The work opens the door to displacing oil with abundant natural gas—and reducing both carbon emissions and society’s dependence on petroleum in the process.
Over the past several years, the United States and other countries have undergone an energy revolution as new drilling techniques and a process called hydraulic fracturing have made it possible to recover vast amounts of natural gas. Today, most of that gas is burned, either for heating homes or to drive electricity-generating turbines. But chemical companies have also long had the technology to convert the primary hydrocarbons in natural gas—methane, ethane, and propane—into alcohols, the liquid starting materials for plastics, fuels, and other commodities made by the train load. However, this technology has never been adopted on a wide scale, because it requires complex and expensive chemical plants that must run at temperatures greater than 800°C in order to carry out the transformation. Converting petroleum into those commodities has always been cheaper, which is why we’ve grown so dependent on oil.
Two decades ago, Roy Periana, a chemist at the Scripps Research Institute in Jupiter, Florida, started looking for metal catalysts that could transform natural gas into alcohols at lower temperatures. He knew he needed to find metals that were deft at breaking the carbon-hydrogen bonds that are at the heart of methane, ethane, and propane, short hydrocarbons known as alkanes, and then add in oxygen atoms that would transform the alkanes into alcohols. But all the catalysts he discovered—including platinum, rhodium, and iridium—are rare and expensive, and the technique was never commercialized.
Periana says that what he didn’t appreciate at the time was that to be a good catalyst, the metals need to do another job in addition to transforming C-H bonds into C-O bonds. That’s because in a reactor, these catalysts are surrounded by solvent molecules. So before a metal can break an alkane’s bond, the alkane must first nudge a solvent molecule aside. It turns out that the expensive metals Periana was using aren’t so good at that part of the process: They require extra energy to push the solvent molecules out of their midst. Periana’s team realized that the different electronic structure of more abundant “main group” metals means that they wouldn’t have to pay this energetic price, and, therefore, might be able to carry out the C-H to C-O transformation more efficiently.
Discovery of blood in creature frozen for 43,000 years is seen as major breakthrough by international team.
Viktoria Egorova, chief of the Research and Clinical Diagnostic Laboratory of the Medical Clinic of North-Eastern Federal University, said: 'We have dissected the soft tissues of the mammoth - and I must say that we didn't expect such results. The carcass, which is more than 43,000 years old, has been preserved better than a body of a human buried for six months.
'The tissue cut clearly shows blood vessels with strong walls. Inside the vessels there is haemolysed blood, where for the first time we have found erythrocytes. Muscle and adipose tissues are well preserved.
'We have also obtained very well visualised migrating cells of the lymphoid tissue, which is another great discovery.
'The upper part of the carcass has been eaten by animals, yet the lower part with the legs and, astonishingly, the trunk are very well preserved.
'We also have the mammoth's liver - very well preserved, too - and it looks as if there are some solid fragments inside it. We haven't managed to study them yet, but the first suggestion is that these are possibly kidney stones.
'Another discovery was the intestines, with remains of the vegetation the mammoth ate before its death, and a multi-chambered stomach, which we've been working with today, collecting tissue samples. There is a lot more material that will have to go through laboratory research.'
Rice University bioengineers have created a toolkit that uses colored lights and engineered bacteria to bring both mathematical predictability and cut-and-paste simplicity to the world of genetic circuit design.
“Life is controlled by DNA-based circuits, and these are similar to the circuits found in electronic devices like smartphones and computers,” said Rice bioengineer Jeffrey Tabor, the lead researcher on the project.
“A major difference is that electrical engineers measure the signals flowing into and out of electronic circuits as voltage, whereas bioengineers measure genetic circuit signals as genes turning on and off.”
In a paper appearing online in the journal Nature Methods, Tabor and colleagues, including graduate student and lead author Evan Olson, describe a new, ultra high-precision method for creating and measuring gene expression signals in bacteria by combining light-sensing proteins from photosynthetic algae with a simple array of red and green LED lights and standard fluorescent reporter genes.
By varying the timing and intensity of the lights, the researchers were able to control exactly when and how much different genes were expressed. “Light provides us a powerful new method for reliably measuring genetic circuit activity,” said Tabor, an assistant professor of bioengineering who also teaches in Rice’s Ph.D. program in systems, synthetic and physical biology.
“Our work was inspired by the methods that are used to study electronic circuits. Electrical engineers have tools like oscilloscopes and function generators that allow them to measure how voltage signals flow through electrical circuits. Those measurements are essential for making multiple circuits work together properly, so that more complex devices can be built. We have used our light-based tools as a biological function generator and oscilloscope in order to similarly analyze genetic circuits.”
If a gene is not “expressed,” it is turned off, and its product is not produced. The bacteria used in Tabor’s study have about 4,000 genes, while humans have about 20,000. The processes of life are coordinated by different combinations and timings of genes turning on and off.
Each component of a genetic circuit acts on the input it receives — which may be one or more gene-expression products from other components — and produces its own gene-expression product as an output.
By linking the right genetic components together, synthetic biologists like Tabor and his students construct genetic circuits that program cells to carry out complex functions, such as counting, having memory, growing into tissues, or diagnosing the signatures of disease in the body.
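The function-generator analogy can be made concrete with a toy model. The sketch below is purely illustrative (it is not the Rice team's model, and the Hill-function parameters are assumed values): a square-wave light input drives a simple repressing component, and sampling the output over time plays the role of the oscilloscope trace.

```python
# Toy model of a light-driven genetic component (illustrative only).
# A repressor "gate" converts a high input signal into a low output,
# here modeled with a standard Hill function (k and n are assumed values).

def hill_repressor(x, k=0.5, n=2.0):
    """Steady-state output of a repressing component: high input -> low output."""
    return 1.0 / (1.0 + (x / k) ** n)

def light_input(t, period=10.0):
    """Square-wave 'function generator': light on for half of each period."""
    return 1.0 if (t % period) < (period / 2) else 0.0

# Sample the circuit's response over time, as an oscilloscope would.
trace = [(t, hill_repressor(light_input(t))) for t in range(20)]

for t, y in trace[:6]:
    print(t, round(y, 3))
```

Chaining several such functions, each consuming the previous one's output, mirrors how gene-expression products from one component serve as inputs to the next.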
Researchers from the University of Surrey and Philips have identified a new transistor design that could make flexible electronics such as roll-up tablet computers available in the near future.
As reported in a study published in Nature's Scientific Reports (open access), they further developed their “Source Gated Transistor” (SGT) to work with next-generation digital circuits.
Previously, the researchers found that SGTs could be applied to many analog electronic designs, such as display screens. In the current study, researchers have now shown that SGTs can also be applied to next-generation digital circuits.
SGTs control the electric current just as it enters a semiconductor, which decreases the odds of circuit malfunction, improves energy efficiency, and keeps fabrication costs to a minimum. These properties make SGTs ideal for next-generation electronic devices, and could enable digital technologies to be incorporated into those built using flexible plastics or clothing textiles.
These technologies may include ultra-lightweight and flexible gadgets that can be rolled up to save space when not in use, smart plasters that can wirelessly monitor the health of the wearer, low-cost electronic shopping tags for instant checkout without the need for queuing, and disaster prediction sensors for use on buildings in regions that are at high risk of natural disasters.
“These technologies involve thin plastic sheets of electronic circuits, similar to sheets of paper, but embedded with smart technologies. Until now, such technologies could only be produced reliably in small quantities, and that confined them to the research lab.
“With SGTs we have shown we can achieve some characteristics needed to make these technologies viable, without increasing the complexity or cost of the design.” said lead researcher Radu Sporea, Royal Academy of Engineering Research Fellow at the Advanced Technology Institute, University of Surrey.
Humans are among the very few animals that constitute a threat to elephants. Yet not all people are a danger — and elephants seem to know it. The giants have shown a remarkable ability to use sight and scent to distinguish between African ethnic groups that have a history of attacking them and groups that do not. Now a study reveals that they can even discern these differences from words spoken in the local tongues.
Biologists Karen McComb and Graeme Shannon at the University of Sussex in Brighton, UK, guessed that African elephants (Loxodonta africana) might be able to listen to human speech and make use of what they heard. To tease out whether this was true, they recorded the voices of men from two Kenyan ethnic groups calmly saying, “Look, look over there, a group of elephants is coming,” in their native languages. One of these groups was the semi-nomadic Maasai, some of whom periodically kill elephants during fierce competition for water or cattle-grazing space. The other was the Kamba, a crop-farming group that rarely has violent encounters with elephants.
The researchers played the recordings to 47 elephant family groups at Amboseli National Park in Kenya and monitored the animals' behaviour. The differences were remarkable. When the elephants heard the Maasai, they were much more likely to cautiously smell the air or huddle together than when they heard the Kamba. Indeed, the animals bunched together nearly twice as tightly when they heard the Maasai.
“We knew elephants could distinguish the Maasai and Kamba by their clothes and smells, but that they can also do so by their voices alone is really interesting,” says Fritz Vollrath, a zoologist at the University of Oxford, UK.
In what could prove to be a major breakthrough in quantum memory storage and information processing, German researchers have frozen the fastest thing in the universe: light. And they did so for a record-breaking one minute. It sounds weird and it is. The reason for wanting to hold light in its place (aside from the sheer awesomeness of it) is to ensure that it retains its quantum coherence properties (i.e. its information state), thus making it possible to build light-based quantum memory.
And the longer that light can be held, the better as far as computation is concerned. Accordingly, it could allow for more secure quantum communications over longer distances.
Needless to say, halting light is not easy — you can't just put it in the freezer. Light is electromagnetic radiation that moves at about 300 million meters per second in a vacuum.
Over the course of a single minute, it can travel about 11 million miles (18 million km), or more than 20 round trips to the moon. So it's a rather wily and slippery medium, to say the least. But light can be slowed down and even halted altogether. In fact, researchers once kept it still for 16 seconds using cold atoms.
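The figures above are easy to check with back-of-envelope arithmetic. The sketch below rounds the speed of light to 3.0 × 10⁸ m/s and assumes a mean Earth–Moon distance of 384,400 km (an assumed reference value, not from the article):

```python
# Back-of-envelope check: how far does light travel in one minute?
c_m_per_s = 3.0e8                  # speed of light, rounded
seconds = 60

distance_km = c_m_per_s * seconds / 1000    # 18,000,000 km
distance_miles = distance_km / 1.609        # ~11 million miles

# Lunar round trips, assuming a mean Earth-Moon distance of 384,400 km
round_trips = distance_km / (2 * 384_400)

print(round(distance_km / 1e6), "million km")
print(round(distance_miles / 1e6), "million miles")
print(round(round_trips), "round trips to the moon")
```

With these inputs the round-trip count comes out a little above 20, consistent with the article's rounded figure.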
For this particular experiment, researcher Georg Heinze and his team converted light coherence into atomic coherences. They did so by using a quantum interference effect that makes an opaque medium — in this case a crystal — transparent over a narrow range of light spectra (a process called electromagnetically induced transparency (EIT)).
The researchers shot a laser through this crystal (a source of light), which sent its atoms into a quantum superposition of two states. A second beam then switched off the first laser, and as a consequence, the transparency. Thus, the researchers collapsed the superposition — and trapped the second laser beam inside.
There is a scene in the film “Spider-Man 2” where Spider-Man prevents a train full of people from crashing by holding it back with about 10 sets of spider silk ropes each less than half an inch thick. It turns out the scene isn’t just fantasy.
“We calculated roughly how thick the fibers were, how many of them he had attached to the walls, how much the locomotive and people weighed, and how fast it appeared to be going,” says Randy Lewis, a professor of biology and biological engineering at Utah State University. “Spider-Man would have been able to stop that train,” says Lewis, a molecular biologist, materials scientist, and chemist who for 25 years has been striving to synthesize spider silk.
Despite being a protein, spider silk is by weight five times stronger than steel and three times tougher than Kevlar, a p-aramid fiber from DuPont. Strength is defined as the weight a material can bear, and toughness is the amount of kinetic energy it can absorb without breaking. The silk’s primary structure is its amino acid sequence, mainly consisting of repeated glycine and alanine blocks.
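The kind of estimate Lewis describes can be sketched with the work-energy theorem. Every number below is an assumption chosen for illustration (train mass, speed, stopping distance, rope diameter, and a dragline-silk tensile strength of roughly 1 GPa), not the team's actual figures:

```python
# Rough sketch of a Spider-Man-style estimate. All inputs are assumptions.
import math

train_mass_kg = 2.0e5        # assumed: loaded commuter train
speed_m_s = 25.0             # assumed: ~90 km/h
stop_distance_m = 500.0      # assumed: the webbing stretches while braking

# Average force needed, from the work-energy theorem: F = m v^2 / (2 d)
force_needed_N = train_mass_kg * speed_m_s**2 / (2 * stop_distance_m)

# Breaking load of the webbing: 10 ropes, ~1.27 cm (half-inch) diameter,
# dragline-silk tensile strength taken as ~1 GPa (assumed round figure).
n_ropes = 10
radius_m = 0.0127 / 2
silk_strength_Pa = 1.0e9
breaking_load_N = n_ropes * math.pi * radius_m**2 * silk_strength_Pa

print(f"force needed:  {force_needed_N:.2e} N")
print(f"silk can take: {breaking_load_N:.2e} N")
print("silk wins" if breaking_load_N > force_needed_N else "silk snaps")
```

Under these assumptions the silk's breaking load exceeds the required braking force by roughly an order of magnitude, which is the gist of the team's conclusion.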
Potential applications include cables and bulletproof vests. Spider silk’s antimicrobial properties make it suitable for wound patches. Because the silk is not rejected by the human body, it can be used to manufacture artificial tendons or to coat implants. And its thermal conductivity is similar to that of copper but its mass density is one-seventh of copper’s, making it a potential heat management material.
Deprived of sight, blind people manage to squeeze an amazing amount of information out of their other senses. Doing this requires their brains to do some reorganising. To learn about some of these changes, scientists studied the brains of blind people who've learned to use an augmented reality system that converts images into soundscapes.
The system was invented in the early '90s, but it's not widely used. The way it works is a person puts on a pair of goggles with a built-in camera and software that converts images captured by the camera into sounds. For example, the pitch of the sound (high or low) indicates the vertical position of an object; the timing and duration of the sound indicate the object's horizontal position and width (you can see and hear a demo of a similar technology here). For real world scenes, the sounds are complex -- in fact, they sound a bit like a garbled transmission from an alien spacecraft.
But with enough practice people can learn to interpret the sounds and form a mental image of objects -- including people -- that appear in front of them.
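The encoding rules described above can be sketched in a few lines. This is a toy version with assumed details (the frequency range, scan time, and the `image_to_soundscape` function are illustrative; the real system is far more sophisticated): the image is scanned left to right, each lit pixel becomes a tone, its row sets the pitch, and its column sets the onset time.

```python
# Toy image-to-soundscape encoder (illustrative; details are assumptions).
# Column index -> onset time; row index -> pitch (top rows sound higher).

def image_to_soundscape(image, scan_seconds=1.0, f_low=500.0, f_high=5000.0):
    rows, cols = len(image), len(image[0])
    events = []  # list of (onset_time_s, frequency_hz) tone events
    for col in range(cols):
        t = col * scan_seconds / cols        # horizontal position -> timing
        for row in range(rows):
            if image[row][col]:              # lit pixel -> one tone
                frac = 1.0 - row / (rows - 1)
                events.append((t, f_low + frac * (f_high - f_low)))
    return events

# A tiny 3x4 "image" containing a diagonal stroke:
img = [[1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 1]]

for t, f in image_to_soundscape(img):
    print(f"t={t:.2f}s  f={f:.0f}Hz")
```

A falling diagonal thus becomes a descending sweep of tones spread across the scan, which hints at why real scenes sound so complex and take practice to interpret.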
When sighted people see an outline or silhouette of a human body, areas of the cerebral cortex that specialise in making sense of visual stimuli become active. One of these, the extrastriate body area, seems particularly interested in bodies: it responds more strongly to images of the human body than to other types of objects.
But blindness cuts off the usual flow of information from the eyes to this part of the brain, and people who've been blind since birth have never actually seen a human form. Something must change in their brains when they learn to perceive body shapes using sound. Do visual parts of the brain start responding to sounds? Or do auditory parts of the brain start responding to body shapes? It's a neat trick either way.
To find out what really happens, Ella Striem-Amit and Amir Amedi of the Hebrew University of Jerusalem scanned the brains of seven congenitally blind people who'd trained for an average of 73 hours on the augmented reality system. After training, they achieved 78 percent accuracy at classifying three different types of objects: people, everyday objects (like a cellphone), or textured patterns.
In some cases, they could do even more. "During training, the participants were asked to report the body posture of the people in the images they 'saw,' and could verbally describe it quite well, and also mimic it themselves," Striem-Amit said.
Striem-Amit and Amedi also found that in blind people as well as sighted people, body shapes also activated an area called the temporal-parietal junction, which some researchers think is involved in figuring out the intentions of other people.
The study illustrates that the brain can be remarkably malleable, says Kalanit Grill-Spector, a neuroscientist at Stanford University. When blind people learn to read Braille, their visual cortex becomes sensitive to touch, she notes. "However, there has been little evidence for auditory stimuli driving responses in visual cortex in the blind," Grill-Spector said. "For example making human sounds such as clapping or laughing does not seem to activate visual cortex in the blind."
For most people, music is one of life's great pleasures. But the inability to enjoy it is a real condition that has just been recognized and described by science. The new condition, known as specific musical anhedonia, is described in a new paper published this week in the journal Current Biology.
People with the condition have no trouble perceiving or identifying music, or even describing the mood the music is supposed to convey, said Robert Zatorre, a McGill University neuroscientist who co-authored the paper. The condition affects about two per cent of the population. Many of those who have it said they have tried to mask their dislike of music from others.
Zatorre had previously done studies showing that music activates the pleasure and reward centres of the brain, just as food and sex do. Scientists are interested in studying the brain's reward system because problems with it are implicated in conditions such as eating disorders and drug and gambling addictions.
Zatorre and colleagues in Spain, including Josep Marco-Pallares of the University of Barcelona, began to wonder if music activated the pleasure centre of the brain in everyone, or if there were some people who didn't respond the same way.
To figure that out, they surveyed around 500 students at the University of Barcelona about their music habits and response to music — for example, did they often have music playing and did they like to share music with their friends?
Groups of students who scored high, average, and low on the questionnaire were tested in the lab for their body's response to music — changes in heart rate and skin conductance, which indicate emotional or nervous system arousal.
While those who scored average or high on the questionnaire had a strong physiological response to the music, those who scored low "more or less flatlined," Zatorre recalled, confirming that they did not derive pleasure from music.
The students were given additional questionnaires to make sure they weren't depressed and were able to experience pleasure from other things.
Then they were tested in another experiment – a slot-machine-like gambling video game in which they would sometimes receive a big payout.
"People who didn't respond to music nonetheless showed a perfectly normal response to the monetary reward," Zatorre said.
That's interesting because previously, researchers had thought the brain's reward centre was an "all or none" system that was functioning normally, hyperactive, or underactive as a whole.
Have you ever wondered why your laptop or smartphone feels warm when you're using it? That heat is a byproduct of the microprocessors in your device using electric current to power computer processing functions — and it is actually wasted energy.
Now, a team led by researchers from the UCLA Henry Samueli School of Engineering and Applied Science has made major improvements in computer processing using an emerging class of magnetic materials called "multiferroics," and these advances could make future devices far more energy-efficient than current technologies.
With today's device microprocessors, electric current passes through transistors, which are essentially very small electronic switches. Because current involves the movement of electrons, this process produces heat — which makes devices warm to the touch. These switches can also "leak" electrons, making it difficult to completely turn them off. And as chips continue to get smaller, with more circuits packed into smaller spaces, the amount of wasted heat grows.
The UCLA Engineering team used multiferroic magnetic materials to reduce the amount of power consumed by "logic devices," a type of circuit on a computer chip dedicated to performing functions such as calculations. A multiferroic can be switched on or off by applying alternating voltage — the difference in electrical potential. It then carries power through the material in a cascading wave through the spins of electrons, a process referred to as a spin wave bus.
A spin wave can be thought of as similar to an ocean wave, which keeps water molecules in essentially the same place while the energy is carried through the water, as opposed to an electric current, which can be envisioned as water flowing through a pipe, said principal investigator Kang L. Wang, UCLA's Raytheon Professor of Electrical Engineering and director of the Western Institute of Nanoelectronics (WIN).
"Spin waves open an opportunity to realize fundamentally new ways of computing while solving some of the key challenges faced by scaling of conventional semiconductor technology, potentially creating a new paradigm of spin-based electronics," Wang said.
The UCLA researchers were able to demonstrate that using this multiferroic material to generate spin waves could reduce wasted heat and therefore increase power efficiency for processing by up to 1,000 times. Their research is published in the journal Applied Physics Letters.
A team of researchers working at the University of Notre Dame has discovered a whole new group of quasicrystals. In their paper published in the journal Nature, the team describes how they accidentally created a new kind of quasicrystal as part of a series of experiments designed to learn more about electron distribution in ferrocenecarboxylic acids.
Quasicrystals are groups of molecules bonded together in structures that resemble crystals in that they are organized, but the structures are far less uniform: though they are locally symmetric, they lack any long-range periodicity. Because of their chaotic nature, quasicrystals tend to feel slippery to the touch, which is why they have been used to coat the surface of non-stick frying pans. The first quasicrystal was made, also by accident, in 1982 by Daniel Shechtman (who later won a Nobel Prize for his work). Since then, many more have been made in various labs (one was even found in a meteorite), though most have had one thing in common: they were all formed from alloys of two or three metals.
In this latest discovery, the quasicrystals self-assembled after the researchers placed a layer of iron-containing ferrocenecarboxylic acid molecules on top of a gold surface. The team expected to see stable molecules pairing up linearly as dimers, but was surprised to find that they had instead formed five-sided rosettes. It was the rosettes that pushed other molecules into bonding, resulting in 2D quasicrystals that took the form of several different shapes (stars, boats, pentagons, rhombi and so on), all repeated in haphazard fashion.
In studying the quasicrystals using scanning tunnelling microscopy, the researchers found that they were held together by weak hydrogen bonds rather than the strong ionic bonds found in other such materials. Weak hydrogen bonds are generally more common in organic molecules that exhibit complex structures.
In their paper, the researchers suggest their discovery might lead to the creation or discovery of many other similar types of quasicrystals, though it's still not clear to what use they might be put.