Amazing Science
Scooped by Dr. Stefan Gruenwald!

Simulate The Human Brain In A Supercomputer: The Human Brain Project Has Officially Begun

The brain, with its billions of interconnected neurons, is without any doubt the most complex organ in the body and it will be a long time before we understand all its mysteries. The Human Brain Project proposes a completely new approach. The project is integrating everything we know about the brain into computer models and using these models to simulate the actual working of the brain. Ultimately, it will attempt to simulate the complete human brain. The models built by the project will cover all the different levels of brain organisation -- from individual neurons through to the complete cortex. The goal is to bring about a revolution in neuroscience and medicine and to derive new information technologies directly from the architecture of the brain.

The challenges facing the project are huge. Neuroscience alone produces more than 60,000 scientific papers every year. From this enormous mass of information, the project will have to select and harmonise the data it is going to use -- ensuring that data produced with different methods is fully comparable.

The data feeding the project's simulation effort will come from the clinic and from neuroscience experiments. As we try to fit all the information together, we will discover many of the brain's fundamental design secrets: the geometry and electrical behaviour of different classes of neurons, the way they connect to form circuits, and the way new functions emerge as more and more neurons connect. It is these principles, translated into mathematics that will drive the project's models and simulations.

Today, simulating a single neuron requires the full power of a laptop computer. But the brain has billions of neurons, and simulating all of them simultaneously is a huge challenge. To get round this problem, the project will develop novel techniques of multi-level simulation in which only groups of neurons that are highly active are simulated in detail. But even so, simulating the complete human brain will require a computer a thousand times more powerful than the most powerful machine available today. This means that some of the key players in the Human Brain Project will be specialists in supercomputing. Their task: to work with industry to provide the project with the computing power it will need at each stage of its work.
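The multi-level idea can be sketched in a few lines of Python. This is a toy illustration only -- the threshold, update rules, and function names are invented for the example, not taken from the project:

```python
DETAIL_THRESHOLD = 0.5  # activity level above which a group is simulated in detail (arbitrary)

def detailed_update(activity):
    # Stand-in for an expensive per-neuron integration step.
    return min(1.0, activity * 1.05)

def coarse_update(activity):
    # Stand-in for a cheap population-level approximation.
    return activity * 0.95

def step(groups):
    """Advance every neuron group one tick, choosing the model by activity level."""
    return [
        detailed_update(a) if a >= DETAIL_THRESHOLD else coarse_update(a)
        for a in groups
    ]

groups = [0.1, 0.6, 0.9, 0.3]
print(step(groups))  # quiet groups get the cheap update, active groups the detailed one
```

In a real simulator the detailed path would integrate individual neuron equations while the coarse path used a population-level approximation; the level-of-detail switch is the point being illustrated.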

The Human Brain Project will impact many different areas of society. Brain simulation will provide new insights into the basic causes of neurological diseases such as autism, depression, Parkinson's, and Alzheimer's. It will give us new ways of testing drugs and understanding the way they work. It will provide a test platform for new drugs that directly target the causes of disease and that have fewer side effects than current treatments. It will allow us to design prosthetic devices to help people with disabilities. The benefits are potentially huge. As world populations grow older, more than a third will be affected by some kind of brain disease. Brain simulation provides us with a powerful new strategy to tackle the problem.

The project also promises to become a source of new Information Technologies. Unlike the computers of today, the brain has the ability to repair itself, to take decisions, to learn, and to think creatively - all while consuming no more energy than an electric light bulb. The Human Brain Project will bring these capabilities to a new generation of neuromorphic computing devices, with circuitry directly derived from the circuitry of the brain. The new devices will help us to build a new generation of genuinely intelligent robots to help us at work and in our daily lives.

The Human Brain Project builds on the work of the Blue Brain Project. Led by Henry Markram of the Ecole Polytechnique Fédérale de Lausanne (EPFL), the Blue Brain Project has already taken an essential first step towards simulation of the complete brain. Over the last six years, the project has developed a prototype facility with the tools, know-how and supercomputing technology necessary to build brain models, potentially of any species at any stage in its development. As a proof of concept, the project has successfully built the first-ever detailed model of the neocortical column, one of the brain's basic building blocks.

Karlos Svoboda's curator insight, October 11, 2013 9:30 PM

And here we already have the continuation of the project, under the name The Human Brain Project -- a simulation of the brain on the Blue Beam supercomputer. It is not yet emulation, but who knows whether the scientists are telling us everything...

Scooped by Dr. Stefan Gruenwald!

A pinch of platinum results in white organic LEDs with tunable spin-orbit coupling

The development of efficient organic light-emitting diodes (OLED) and organic photovoltaic cells requires control over the dynamics of spin sensitive excitations.


A team of scientists have developed a plastic-like polymer that emits white light more efficiently than current organic LEDs. In recent years household lighting has moved from the incandescent light bulb to the compact fluorescent, and more recently to LEDs.


But to create white light, manufacturers cluster red, green and blue LEDs, or use blue LED light, some of which is converted to yellow, and then mix the two colours to create white light.


Organic light emitting diodes (OLEDs) use polymer chains that glow when they are stimulated with an electrical current or light. However, current OLED displays also combine red, blue and green to create white light.


Professor Valy Vardeny of the University of Utah and colleagues have combined polymer chains with heavy atoms to create a new type of OLED that they believe could create white light much more simply. "This new polymer has all those colours simultaneously, so there is no need for small pixels and complicated engineering to create them," says Vardeny.


Polymers have two kinds of electronic states - singlet and triplet. When the polymer is stimulated, the singlet state emits blue light. The triplet state, which can emit lower-energy red light, is much harder to stimulate.


If a heavy atom such as iridium, platinum or palladium is incorporated into the polymer chain, this triplet state is easier to stimulate. "The idea is to incorporate the heavy atom into the chain so that you don't have to rely on mixtures or energy transfers. The compound itself contains all the ingredients," says Vardeny.


Varying how often platinum appears in the chain allows the researchers to 'tune' the light it emits. For example, a platinum atom after each 'unit' in the chain emits violet and yellow light, while a platinum atom after every third unit results in blue and orange light.


Scooped by Dr. Stefan Gruenwald!

Amazing: Scientists generate first map of clouds on an extremely hot exoplanet (Kepler 7b)


Astronomers using data from NASA’s Kepler and Spitzer space telescopes have created the first cloud map of a planet beyond our solar system: a sizzling, Jupiter-like world known as Kepler-7b.


The planet is marked by high clouds in the west and clear skies in the east. Previous studies from Spitzer have resulted in temperature maps of planets orbiting other stars, but this is the first look at cloud structures on a distant world.


“By observing this planet with Spitzer and Kepler for more than three years, we were able to produce a very low-resolution ‘map’ of this giant, gaseous planet,” said Brice-Olivier Demory of MIT. Demory is lead author of a paper accepted for publication in the Astrophysical Journal Letters. “We wouldn’t expect to see oceans or continents on this type of world, but we detected a clear, reflective signature that we interpreted as clouds.”


Kepler has discovered more than 150 exoplanets, which are planets outside our solar system, and Kepler-7b was one of the first. The telescope’s problematic reaction wheels prevent it from hunting planets any more, but astronomers continue to pore over almost four years’ worth of collected data.


Kepler’s visible-light observations of Kepler-7b’s moon-like phases led to a rough map of the planet that showed a bright spot on its western hemisphere. But these data were not enough on their own to decipher whether the bright spot was coming from clouds or heat. The Spitzer Space Telescope played a crucial role in answering this question.


Like Kepler, Spitzer can fix its gaze at a star system as a planet orbits around the star, gathering clues about the planet’s atmosphere. Spitzer’s ability to detect infrared light means it was able to measure Kepler-7b’s temperature, estimating it to be between 1,500 and 1,800 degrees Fahrenheit (1,100 and 1,300 Kelvin).
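As a sanity check on the quoted figures, the Fahrenheit range converts to Kelvin as follows (the article's Kelvin values are round numbers; the exact conversion comes out a little lower):

```python
def fahrenheit_to_kelvin(f):
    """Standard conversion: Fahrenheit -> Celsius -> Kelvin."""
    return (f - 32) * 5 / 9 + 273.15

for f in (1500, 1800):
    print(f, "F ->", round(fahrenheit_to_kelvin(f)), "K")
# 1,500 F is about 1,089 K and 1,800 F about 1,255 K, which the
# article rounds to 1,100 and 1,300 Kelvin.
```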


This is relatively cool for a planet that orbits so close to its star — within 0.06 astronomical units (one astronomical unit is the distance from Earth to the sun) — and, according to astronomers, too cool to be the source of light Kepler observed. Instead, they determined, light from the planet’s star is bouncing off cloud tops located on the west side of the planet.


“Kepler-7b reflects much more light than most giant planets we’ve found, which we attribute to clouds in the upper atmosphere,” said Thomas Barclay, Kepler scientist at NASA’s Ames Research Center in Moffett Field, Calif. “Unlike those on Earth, the cloud patterns on this planet do not seem to change much over time — it has a remarkably stable climate.”


The findings are an early step toward using similar techniques to study the atmospheres of planets more like Earth in composition and size.

“With Spitzer and Kepler together, we have a multi-wavelength tool for getting a good look at planets that are trillions of miles away,” said Paul Hertz, director of NASA’s Astrophysics Division in Washington. “We’re at a point now in exoplanet science where we are moving beyond just detecting exoplanets, and into the exciting science of understanding them.”


Kepler identified planets by watching for dips in starlight that occur as the planets transit, or pass in front of their stars, blocking the light. This technique and other observations of Kepler-7b previously revealed that it is one of the puffiest planets known: if it could somehow be placed in a tub of water, it would float. The planet was also found to whip around its star in just less than five days.
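The transit technique amounts to looking for small dips in an otherwise flat light curve. A minimal sketch (the flux values and dip depth here are made up for illustration, not Kepler data):

```python
def find_transits(flux, baseline=1.0, depth=0.01):
    """Return indices where brightness drops below the baseline by at least
    `depth` -- the signature of a planet crossing the face of its star."""
    return [i for i, f in enumerate(flux) if baseline - f >= depth]

# Toy light curve: constant starlight with a small dip during transit.
flux = [1.0, 1.0, 0.985, 0.984, 0.986, 1.0, 1.0]
print(find_transits(flux))  # the three in-transit samples
```

Real pipelines must also fold the light curve on the orbital period and reject instrumental noise, but the dip-hunting idea is the same.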


Explore all 900-plus exoplanet discoveries with NASA’s “Eyes on Exoplanets,” a fully rendered 3D visualization tool, available for download.


The program is updated daily with the latest findings from NASA’s Kepler mission and ground-based observatories around the world as they search for planets like our own.

Scooped by Dr. Stefan Gruenwald!

Accelerating Diabetic Wound Healing By Inhibiting Two Metalloproteases (MMP8 and MMP9)


A complication of diabetes is the inability of wounds to heal in diabetic patients. Diabetic wounds are refractory to healing due to the involvement of activated matrix metalloproteinases (MMPs), which remodel the tissue resulting in apoptosis. There are no readily available methods that identify active unregulated MMPs. With the use of a novel inhibitor-tethered resin that binds exclusively to the active forms of MMPs, coupled with proteomics, we quantified MMP-8 and MMP-9 in a mouse model of diabetic wounds. Topical treatment with a selective MMP-9 inhibitor led to acceleration of wound healing, re-epithelialization, and significantly attenuated apoptosis. In contrast, selective pharmacological inhibition of MMP-8 delayed wound healing, decreased re-epithelialization, and exhibited high apoptosis. The MMP-9 activity makes the wounds refractory to healing, whereas that of MMP-8 is beneficial. The treatment of diabetic wounds with a selective MMP-9 inhibitor holds great promise in providing heretofore-unavailable opportunities for intervention of this disease.

Scooped by Dr. Stefan Gruenwald!

A pair of breakthroughs in photonics could allow for faster and faster electronics


A pair of breakthroughs in the field of silicon photonics by researchers at the University of Colorado Boulder, the Massachusetts Institute of Technology and Micron Technology Inc. could allow for the trajectory of exponential improvement in microprocessors that began nearly half a century ago—known as Moore's Law—to continue well into the future, allowing for increasingly faster electronics, from supercomputers to laptops to smartphones.

The research team, led by CU-Boulder researcher Milos Popovic, an assistant professor of electrical, computer and energy engineering, developed a new technique that allows microprocessors to use light, instead of electrical wires, to communicate with transistors on a single chip, a system that could lead to extremely energy-efficient computing and a continued skyrocketing of computing speed into the future.


Popovic and his colleagues created two different optical modulators—structures that detect electrical signals and translate them into optical waves—that can be fabricated within the same processes already used in industry to create today's state-of-the-art electronic microprocessors. The modulators are described in a recent issue of the journal Optics Letters.


First laid out in 1965, Moore's Law predicted that the size of the transistors used in microprocessors could be shrunk by half about every two years for the same production cost, allowing twice as many transistors to be placed on the same-sized silicon chip. The net effect would be a doubling of computing speed every couple of years.
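Moore's prediction is simple compounding, which is easy to make concrete (a back-of-the-envelope sketch, not a claim about any particular chip):

```python
def transistor_count(years, start=1):
    """Relative transistor count after `years`, doubling every two years."""
    return start * 2 ** (years / 2)

print(transistor_count(10))  # a 32-fold increase in a decade
print(transistor_count(48))  # roughly a 16.8-million-fold increase over 48 years
```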


The projection has held true until relatively recently. While transistors continue to get smaller, halving their size today no longer leads to a doubling of computing speed. That's because the limiting factor in microelectronics is now the power that's needed to keep the microprocessors running. The vast amount of electricity required to flip on and off tiny, densely packed transistors causes excessive heat buildup.


"The transistors will keep shrinking and they'll be able to continue giving you more and more computing performance," Popovic said. "But in order to be able to actually take advantage of that you need to enable energy-efficient communication links."

Russ Roberts's curator insight, October 3, 2013 9:18 AM

This breakthrough technology could affect the design of communications equipment, including that used for Amateur Radio.  Informative article.  Aloha de Russ (KH6JRM).

Rob Hatfield, M.Ed.'s curator insight, October 3, 2013 6:40 PM

This is a STEM trend in the making.

Scooped by Dr. Stefan Gruenwald!

Molecular Therapy: Vaccinia Virus Induces Programmed Necrosis in Ovarian Cancer Cells


Vaccinia virus is an ideal oncolytic candidate due to its ability to infect a broad range of cells, rapid replication cycle, and production of extracellular enveloped virions that evade the immune response and that may allow spread to distant metastases following local delivery. Systemic delivery of the oncolytic vaccinia strain JX-594 demonstrated safe and effective infection of tumor tissue, while randomized data indicate a survival advantage for patients with advanced hepatocellular carcinoma treated with high dose (10^9 plaque-forming units (pfu)) intratumoral JX-594 compared with low dose (10^8 pfu).


The mechanism by which tumor cell death is induced by vaccinia virus remains poorly understood. Classical apoptosis, autophagy, and necrosis have all been implicated in vaccinia infection to varying degrees; cell lysis is a common endpoint of infection, apoptosis has been observed in some cancer cell lines and immune cells, and autophagy is disrupted in fibroblasts following infection. Programmed necrosis is also reported to have a role in the fate of vaccinia-infected T cells, while two previous studies indicated that tumor necrosis factor (TNF)-α treatment of vaccinia-infected mouse fibroblasts and Jurkat cells induced necrosis, which was dependent upon the viral caspase inhibitor B13R and receptor interacting protein (RIP)1, respectively.


Evasion of cell death in general is a hallmark of cancer, and little of the previous work attempting to characterize vaccinia-induced cell death has been performed in malignant cells. A European research team has now investigated cell death pathways in models of ovarian cancer following infection with Lister-dTK, an oncolytic Lister strain vaccinia virus bearing a deletion of the thymidine kinase gene. Their data show that classical apoptosis is not the primary mode of cell death execution. Vaccinia interferes with the autophagic process but does not increase autophagic flux and does not rely upon autophagy to induce death. Lister-dTK infection leads to both morphological and metabolic features of necrosis. They also show that RIP1 and caspase-8 associate during vaccinia infection of ovarian cancer cells, while pharmacological inhibition of key necrosis proteins, including RIP1 and mixed lineage kinase domain-like protein (MLKL), significantly attenuates vaccinia-induced cell death. Inhibition of TNF-α signaling, by contrast, has no effect on viral efficacy. Along with visible necrosis in infected tumors observed in vivo, these data strongly suggest that vaccinia induces necrotic death in ovarian cancer.

Scooped by Dr. Stefan Gruenwald!

First computer made of tiny carbon nanotubes is unveiled


The miniaturization of electronic devices has been the principal driving force behind the semiconductor industry, and has brought about major improvements in computational power and energy efficiency. Although advances with silicon-based electronics continue to be made, alternative technologies are being explored. Digital circuits based on transistors fabricated from carbon nanotubes (CNTs) have the potential to outperform silicon by improving the energy–delay product, a metric of energy efficiency, by more than an order of magnitude. Hence, CNTs are an exciting complement to existing semiconductor technologies.

Owing to substantial fundamental imperfections inherent in CNTs, however, only very basic circuit blocks have been demonstrated. Scientists from Stanford recently showed how these imperfections can be overcome, and demonstrated the first computer built entirely using CNT-based transistors. The CNT computer runs an operating system that is capable of multitasking: as a demonstration, it performed counting and integer sorting simultaneously. In addition, the team implemented 20 different instructions from the commercial MIPS instruction set to demonstrate the generality of the CNT computer. This experimental demonstration is the most complex carbon-based electronic system yet realized. It is a considerable advance because CNTs are prominent among a variety of emerging technologies being considered for the next generation of highly energy-efficient electronic systems.
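The multitasking demonstration -- counting and integer sorting interleaved -- can be mimicked in miniature with cooperative scheduling. A toy sketch, with no relation to the actual CNT computer's operating system:

```python
def counter(n):
    # Task 1: count upward, one step per scheduler turn.
    for i in range(n):
        yield ("count", i)

def sorter(items):
    # Task 2: insertion sort, inserting one element per scheduler turn.
    done = []
    for x in items:
        j = len(done)
        while j > 0 and done[j - 1] > x:
            j -= 1
        done.insert(j, x)
        yield ("sorted-so-far", list(done))

def round_robin(*tasks):
    """Interleave tasks one step at a time, like a simple multitasking OS."""
    tasks, log = list(tasks), []
    while tasks:
        for t in tasks[:]:
            try:
                log.append(next(t))
            except StopIteration:
                tasks.remove(t)
    return log

log = round_robin(counter(3), sorter([3, 1, 2]))
print(log)  # counting steps and sorting steps alternate
```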

Russ Roberts's curator insight, October 1, 2013 8:35 AM

Another computer revolution may be upon us. Aloha de Russ (KH6JRM).

Scooped by Dr. Stefan Gruenwald!

By triggering or silencing certain brain cells, mice eat or stop eating regardless of hunger


By hijacking connections between neurons deep within the brain, scientists forced full mice to keep eating and hungry mice to shun food. By identifying precise groups of cells that cause eating and others that curb it, the results begin to clarify the intricate web of checks and balances in the brain that control feeding.


“This is a really important missing piece of the puzzle,” says neuroscientist Seth Blackshaw of Johns Hopkins University in Baltimore. “These are cell types that weren’t even predicted to exist.” A deeper understanding of how the brain orchestrates eating behavior could lead to better treatments for disorders such as anorexia and obesity, he says.


Scientists led by Joshua Jennings and Garret Stuber of the University of North Carolina at Chapel Hill genetically tweaked mice so that a small group of neurons would respond to light. When a laser shone into the brain, these cells would either fire or, in a different experiment, stay quiet. These neurons reside in a brain locale called the bed nucleus of the stria terminalis, or BNST. Some of the message-sending arms of these neurons reach into the lateral hypothalamus, a brain region known to play a big role in feeding.


When a laser activated these BNST neurons, the mice became ravenous, voraciously eating their food, the researchers report in the Sept. 27, 2013 Science. “As soon as you turn it on, they start eating and they don’t stop until you turn it off,” Stuber says. The opposite behavior happened when a laser silenced BNST neurons’ messages to the lateral hypothalamus: The mice would not eat, even when hungry.


The results illuminate a complex network of neuron connections, in which some cells boost other neurons’ activity, while other cells apply brakes. In the experiment, stimulating BNST neurons with light — which consequently shut down the activity of neurons in the lateral hypothalamus — led to the overeating behavior, the team found. That result suggests that these lateral hypothalamus neurons normally restrict feeding.


That finding is surprising, says Blackshaw. Earlier experiments hinted that these hypothalamic cells would encourage eating behavior, but the new study suggests the exact opposite.


The researchers don’t know whether, if they controlled the neurons for long periods, the mice would ultimately starve or overeat to the point of illness. Stuber and colleagues used the laser technique, called optogenetics, in roughly 20-minute bursts. Longer-term manipulations of these neural connections — perhaps using a drug — might cause lasting changes in appetite and, as a result, body mass, Stuber says.


This precise control of feeding behavior underscores the fact that eating disorders occur when brain systems go awry, Stuber says. “We think of feeding in terms of metabolism and body stuff,” he says. “But at the end of the day, it’s controlled by the brain.”

Scooped by Dr. Stefan Gruenwald!

Undoing Down syndrome? Sonic Hedgehog reverses learning deficits in mice with trisomy 21 traits


For people with trisomy 21 – more commonly known as Down syndrome – learning and remembering important concepts can be a struggle, since some of their brain’s structures do not develop as fully as they should.

But now, researchers may have found a way to reverse the learning deficits associated with Down syndrome, after having discovered a compound that can significantly bolster cognition in mice with a condition very similar to trisomy 21.


In a new study published in the Sept. 4 issue of Science Translational Medicine, scientists injected a small protein known as a sonic hedgehog pathway agonist into the brains of genetically engineered mice on the day of their birth.  The treatment enabled the rodents’ cerebellums to grow to a normal size, allowing them to perform just as well as unmodified mice in behavioral tests.


“We’ve been working for some time to characterize the basis for how people with trisomy 21 diverge in development from people without trisomy 21,” said Roger Reeves, a professor in the McKusick-Nathans Institute of Genetic Medicine at the Johns Hopkins University School of Medicine. “One of the early things we see is that people with Down syndrome have very small cerebellums, which do a lot more things than we used to think they did.”


Down syndrome is a condition that occurs when people receive three – rather than the typical two – copies of chromosome 21. Because of this “trisomy,” Down syndrome patients have extra copies of the more than 300 genes contained in that chromosome.  This leads to a range of symptoms, including mild to moderate intellectual disability, distinct facial features, heart defects and other health problems.


Through previous research, Reeves found that another distinct trait of people with Down syndrome is a cerebellum that’s approximately 60 percent of the normal size.  In order for this important brain region to grow and form, a small population of cells in the brain must quickly divide and multiply shortly after birth. This cell population requires a specific growth factor known as the sonic hedgehog pathway to stimulate the cells, triggering them to divide.


However, the trisomic cells in people with Down syndrome do not respond as well to this growth factor, stunting the development of the cerebellum – a region of the brain found to be important in cognitive processing and emotional control.


“We thought if we could stimulate these cells a bit at birth, we could make up the deficit,” Reeves said.  To test this theory, Reeves and his research team created a series of genetically engineered mice, all of which had extra copies of about half of the genes found in chromosome 21.  According to Reeves, this caused the mice to have many of the same characteristics seen in patients with Down syndrome, such as a smaller cerebellum and learning difficulties.


The researchers then injected the mice with a sonic hedgehog pathway agonist, which stimulates the growth factor pathway needed to trigger cerebellum development.   The compound was given to the mice just once on the day of birth. “From that one injection, we were able to normalize the growth of the cerebellum, and they continued to have a structurally normal cerebellum when they grew up,” Reeves said.


Going one step further, the researchers conducted a series of behavioral tests on the mice to better understand how normalizing this brain structure would affect their overall performance.  One of these tests was the Morris water maze test, an experiment that involves placing the mice in a pool of water and seeing how long it takes them to escape using a platform hidden below the water’s surface.  The test measures the rodents’ spatial learning and memory capabilities, which are primarily controlled by the hippocampus.


The sonic hedgehog agonist has yet to be proven effective in humans with Down syndrome, and future research is needed to determine exactly how the injection improved the mice’s cognitive abilities and whether or not the agonist has any side effects.  But Reeves remains hopeful that these findings could have translational potential.

M. Philip Oliver's curator insight, September 27, 2013 5:57 PM

Will "It" extrapolate?

Elizabeth W.'s curator insight, March 25, 7:40 AM

This article proposes the possibility of treating people born with Down syndrome. It has been found that people with Down syndrome have cerebellums that are 60% of the size of a normal cerebellum, which plays a part in our cognitive and emotional functioning. The researchers are now studying whether they can help the cerebellum grow at birth with an injected agonist. The study has been done with mice and showed some promising results, but whether or not the mice are healthy overall still hasn't been determined. It will also take a while before this is used on anybody with Down syndrome.

If this works and shows promising results in helping people with Down syndrome, it would be an amazing and radical change. However, from my perspective there would naturally be costs or risks associated with injecting the agonist, and there could also be some ethical issues. "Do I inject my child with this agonist in hopes they will not experience the obstacles of Down syndrome, or do I not, knowing there could be a risk to my child?" I'm not sure what procedures would take place for this, but I could see this issue coming up.

Scooped by Dr. Stefan Gruenwald!

How to make ceramics that bend without breaking: Self-deploying medical devices?

New materials could lead to actuators on a chip and self-deploying medical devices. Ceramics are not known for their flexibility: they tend to crack under stress.


The team has developed a way of making minuscule ceramic objects that are not only flexible, but also have a "memory" for shape: When bent and then heated, they return to their original shapes. The surprising discovery is reported this week in the journal Science, in a paper by MIT graduate student Alan Lai, professor Christopher Schuh, and two collaborators in Singapore.

Shape-memory materials, which can bend and then snap back to their original configurations in response to a temperature change, have been known since the 1950s, explains Schuh, the Danae and Vasilis Salapatas Professor of Metallurgy and head of MIT's Department of Materials Science and Engineering. "It's been known in metals, and some polymers," he says, "but not in ceramics."


In principle, the molecular structure of ceramics should make shape memory possible, he says -- but the materials' brittleness and propensity for cracking has been a hurdle. "The concept has been there, but it's never been realized," Schuh says. "That's why we were so excited."


The key to shape-memory ceramics, it turns out, was thinking small.


The team accomplished this in two key ways. First, they created tiny ceramic objects, invisible to the naked eye: "When you make things small, they are more resistant to cracking," Schuh says. Then, the researchers concentrated on making the individual crystal grains span the entire small-scale structure, removing the crystal-grain boundaries where cracks are most likely to occur.

Those tactics resulted in tiny samples of ceramic material -- samples with deformability equivalent to about 7 percent of their size. "Most things can only deform about 1 percent," Lai says, adding that normal ceramics can't even bend that much without cracking.


"Usually if you bend a ceramic by 1 percent, it will shatter," Schuh says. But these tiny filaments, with a diameter of just 1 micrometer -- one millionth of a meter -- can be bent by 7 to 8 percent repeatedly without any cracking, he says.


While a micrometer is pretty tiny by most standards, it's actually not so small in the world of nanotechnology. "It's large compared to a lot of what nanotech people work on," Lai says. As such, these materials could be important tools for those developing micro- and nanodevices, such as for biomedical applications. For example, shape-memory ceramics could be used as microactuators to trigger actions within such devices -- such as the release of drugs from tiny implants.

Scooped by Dr. Stefan Gruenwald!

Breakthrough: A man controls robotic leg using thoughts alone

A man missing his lower leg has gained precise control over a prosthetic limb, just by thinking about moving it – all because his unused nerves were preserved during the amputation and rerouted to his thigh where they can be used to communicate with a robotic leg.


The man can now seamlessly switch from walking on level ground to climbing stairs and can even kick a football around.


During a traditional limb amputation, the main sensory nerves are severed and lose their function. In 2006, Todd Kuiken and his colleagues at the Rehabilitation Institute of Chicago in Illinois realised they could preserve some of that functionality by carefully rerouting sensory nerves during an amputation and attaching them to another part of the body.

They could then use the rerouted nerve signals to control a robotic limb, allowing a person to control their prosthesis with the same nerves they originally used to control their real limb.


Kuiken's team first attempted the procedure – which is called targeted muscle reinnervation (TMR) – on people who were having their arm amputated. Now, Kuiken's team has performed TMR for the first time on a man with a leg amputation.


First, the team rerouted the two main branches of the man's sciatic nerve to muscles in the thigh above the amputation. One branch controls the calf and some foot muscles; the other controls the muscle running down the outside of the leg and other foot muscles.


After a few months, the man could control his thigh muscles by thinking about using his missing leg. The next step was to link up a prosthesis.


The robot leg in question is a sophisticated prosthesis: it carries a number of mechanical sensors including gyroscopes and accelerometers, and can be trained to use the information from these sensors to perform certain walking styles. Kuiken's team reckoned that the leg would perform even better if it could infer the user's intended walking style with information from the sciatic nerve.


To do so, the researchers asked their volunteer to attempt to perform certain movements with his missing leg – for instance, flexing the foot – while they monitored the pattern of electric signals from the rerouted nerves in the thigh muscles. The researchers then programmed the robot leg to flex its foot whenever it detected that particular pattern of electrical activity.


Using just the mechanical sensor data, the robotic leg made the correct movement about 87 per cent of the time. With additional data from the nerves, the success rate rose to 98 per cent, with no so-called critical errors – errors that increase the risk of the user losing balance and falling. Such errors are most common when the user suddenly shifts walking style – when they begin to climb stairs, for instance. With the additional information from the nerves, the robotic leg can make a seamless, natural transition between walking styles.
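The study's actual decoding algorithm is not described here, but the general idea of matching a measured muscle-activity pattern against stored movement templates can be sketched as follows (the electrode count, amplitudes, and movement names are all invented for illustration):

```python
import math

# Illustrative sketch, not the study's published method: classify the
# intended movement by finding the stored EMG template closest to the
# pattern measured from the reinnervated thigh muscles.

TEMPLATES = {
    "level_walking": [0.8, 0.1, 0.2],   # hypothetical mean EMG amplitudes
    "stair_ascent":  [0.3, 0.9, 0.4],   # at three thigh electrode sites
    "foot_flexion":  [0.1, 0.2, 0.9],
}

def classify(emg_sample):
    """Return the movement whose template is nearest (Euclidean) to the sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda k: dist(TEMPLATES[k], emg_sample))

print(classify([0.75, 0.15, 0.25]))  # -> level_walking
```

A real controller would combine such a classifier with the leg's mechanical sensor data, which is what reportedly pushed the success rate from 87 to 98 per cent.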

Carlos Garcia Pando's comment, September 27, 2013 12:11 AM
Great idea. Thanks for posting
Madison Punch's comment, April 13, 11:51 AM
Aha, where psychology meets physiology. I think this is amazing and definitely the best way for prosthetic limb users to activate their faux leg/arm/etc. Very cool!
Scooped by Dr. Stefan Gruenwald!

DNA damage may cause ALS (Lou Gehrig’s disease), involving SIRT1, HDAC1 and sarcoma breakpoint protein FUS


MIT neuroscientists have found new evidence that suggests that a failure to repair damaged DNA could underlie not only ALS, but also other neurodegenerative disorders such as Alzheimer’s disease. These findings imply that drugs that bolster neurons’ DNA-repair capacity could help ALS patients, says Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory and senior author of a paper describing the ALS findings in the Sept. 15, 2013 issue of Nature Neuroscience.

Neurons are some of the longest-living cells in the human body. While other cells are frequently replaced, our neurons are generally retained throughout our lifetimes. Consequently, neurons can accrue a lot of DNA damage and are especially vulnerable to its effects. 

“Our genome is constantly under attack and DNA strand breaks are produced all the time. Fortunately, they are not a worry because we have the machinery to repair it right away. But if this repair machinery were to somehow become compromised, then it could be very devastating for neurons,” Tsai says.

Tsai’s group has been interested in understanding the importance of DNA repair in neurodegenerative processes for several years. In a study published in 2008, they reported that DNA double-strand breaks precede neuronal loss in a mouse model that undergoes Alzheimer’s disease-like neurodegeneration and identified a protein, HDAC1, which prevents neuronal loss under these conditions.  

HDAC1 is a histone deacetylase, an enzyme that regulates genes by modifying chromatin, which consists of DNA wrapped around a core of proteins called histones. HDAC1 activity normally causes DNA to wrap more tightly around histones, preventing gene expression. However, it turns out that cells, including neurons, also exploit HDAC1’s ability to tighten up chromatin to stabilize broken DNA ends and promote their repair. 

In a paper published earlier this year in Nature Neuroscience, Tsai’s team reported that HDAC1 works cooperatively with another deacetylase called SIRT1 to repair DNA and prevent the accumulation of damage that could promote neurodegeneration. 

When a neuron suffers double-strand breaks, SIRT1 migrates within seconds to the damaged sites, where it soon recruits HDAC1 and other repair factors. SIRT1 also stimulates the enzymatic activity of HDAC1, which allows the broken DNA ends to be resealed. 

SIRT1 itself has recently gained prominence as a protein that promotes longevity and protects against diseases including diabetes and Alzheimer’s disease, and Tsai’s group believes that its role in DNA repair contributes significantly to the protective effects of SIRT1. 

In an attempt to further unveil other partners that work with HDAC1 to repair DNA, Tsai and colleagues stumbled upon a protein called Fused In Sarcoma (FUS). This finding was intriguing, Tsai says, because the FUS gene is one of the most common sites of mutations that cause inherited forms of ALS. 

The MIT team found that FUS appears at the scene of DNA damage very rapidly, suggesting that FUS is orchestrating the repair response. One of its roles is to recruit HDAC1 to the DNA damage site. Without it, HDAC1 does not appear and the necessary repair does not occur. Tsai believes that FUS may also be involved in sensing when DNA damage has occurred.

At least 50 mutations in the FUS gene have been found to cause ALS. The majority of these mutations occur in two sections of the FUS protein. The MIT team mapped the interactions between FUS and HDAC1 and found that these same two sections of the FUS protein bind to HDAC1. 

They also generated four FUS mutants that are most commonly seen in ALS patients. When they replaced the normal FUS with these mutants, they found that the interaction with HDAC1 was impaired and DNA damage was significantly increased. This suggests that those mutations prevent FUS from recruiting HDAC1 when DNA damage occurs, allowing damage to accumulate and eventually leading to ALS.

Scooped by Dr. Stefan Gruenwald!

Amazing claim: UK scientists believe they have found life forms arriving to Earth's stratosphere from space


The team, led by Professor Milton Wainwright (honorary professor at Cardiff and Buckingham Universities) of the University of Sheffield’s Department of Molecular Biology and Biotechnology, found small organisms that could have come from space after sending a specially designed balloon to an altitude of 27 km in the stratosphere during the recent Perseid meteor shower.


Professor Wainwright said: “Most people will assume that these biological particles must have just drifted up to the stratosphere from Earth, but it is generally accepted that a particle of the size found cannot be lifted from Earth to heights of, for example, 27km. The only known exception is by a violent volcanic eruption, none of which occurred within three years of the sampling trip.


“In the absence of a mechanism by which large particles like these can be transported to the stratosphere we can only conclude that the biological entities originated from space. Our conclusion then is that life is continually arriving to Earth from space, life is not restricted to this planet and it almost certainly did not originate here.”


Professor Wainwright said the results could be revolutionary: “If life does continue to arrive from space then we have to completely change our view of biology and evolution,” he added. “New textbooks will have to be written!”

The balloon, designed by Chris Rose and Alex Baker from the University of Sheffield’s Leonardo Centre for Tribology, was launched near Chester and carried microscope studs which were only exposed to the atmosphere when the balloon reached heights of between 22 and 27km. The balloon landed safely and intact near Wakefield. The scientists then discovered that they had captured a diatom fragment and some unusual biological entities from the stratosphere, all of which are too large to have come from Earth.


Professor Wainwright said stringent precautions had been taken against the possibility of contamination during sampling and processing, and said the group was confident that the biological organisms could only have come from the stratosphere.


The group’s findings have been published in the Journal of Cosmology, and updated versions will appear in a forthcoming issue of the same journal. Professor Chandra Wickramasinghe of the University of Buckingham Centre for Astrobiology (of which Professor Wainwright is an Honorary Fellow) also presented the group’s findings at a meeting of astronomers and astrobiologists in San Diego last month.


Professor Wainwright’s team is hoping to extend and confirm their results by carrying out the test again in October to coincide with the upcoming meteor shower associated with Halley’s Comet, when there will be large amounts of cosmic dust. It is hoped that more new, or unusual, organisms will be found.


Professor Wainwright added: “Of course it will be argued that there must be an, as yet, unknown mechanism for transferring large particles from Earth to the high stratosphere, but we stand by our conclusions. The absolutely crucial experiment will come when we do what is called ‘isotope fractionation’. We will take some of the samples which we have isolated from the stratosphere and introduce them into a complex machine – a button will be pressed. If the ratio of certain isotopes gives one number then our organisms are from Earth, if it gives another, then they are from space. The tension will obviously be almost impossible to live with!”

There have been a number of investigations showing that viable bacteria and fungi exist in both the lower and the upper stratosphere over the altitude range 20 km - 60 km. Since a number of different methodological approaches have been used in these studies, and a range of different microbes have been isolated from the stratosphere using a variety of approaches, there is little doubt that microbes do exist in the stratosphere. Such organisms are unlikely to grow in this “high cold biosphere” but survive instead in a dormant state as “extremodures”; the fact that bacteria and fungi can be grown on isolation media when returned to Earth shows, however, that these stratosphere-derived microbes remain viable despite exposure to the extreme rigors of the stratosphere.

Scooped by Dr. Stefan Gruenwald!

Gene activity and transcript patterns visualized for the first time in thousands of single cells

Biologists of the University of Zurich have developed a method to visualize the activity of genes in single cells. The method is so efficient that, for the first time, a thousand genes can be studied in parallel in ten thousand single human cells.


Applications lie in fields of basic research and medical diagnostics. The new method shows that the activity of genes, and the spatial organization of the resulting transcript molecules, vary strongly between single cells. Whenever cells activate a gene, they produce gene-specific transcript molecules, which make the function of the gene available to the cell.

The measurement of gene activity is routine in medical diagnostics, especially in cancer medicine. Today's technologies determine the activity of genes by measuring the amount of transcript molecules. However, these technologies can measure neither the amount of transcript molecules of one thousand genes in ten thousand single cells, nor the spatial organization of transcript molecules within a single cell.

The fully automated procedure, developed by biologists of the University of Zurich under the supervision of Prof. Lucas Pelkmans, allows, for the first time, a parallel measurement of the amount and spatial organization of single transcript molecules in ten thousand single cells. The results, recently published in the journal Nature Methods, provide completely novel insights into the variability of gene activity in single cells.

The method developed by Pelkmans' PhD students Nico Battich and Thomas Stoeger is based upon the combination of robots, an automated fluorescence microscope and a supercomputer. "When genes become active, specific transcript molecules are produced. We can stain them with the help of a robot", explains Stoeger. Subsequently, fluorescence microscope images of brightly glowing transcript molecules are generated. These images were analyzed with the supercomputer Brutus at ETH Zurich. With this method, one thousand human genes can be studied in ten thousand single cells. According to Pelkmans, the advantages of this method are the high number of single cells and the possibility to study, for the first time, the spatial organization of the transcript molecules of many genes.
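The published pipeline is far more sophisticated, but the core image-analysis step, counting fluorescent transcript spots in a microscope image, can be sketched in a few lines (the image and threshold below are toy values, not the study's data):

```python
import numpy as np
from scipy import ndimage

# Minimal sketch (my simplification, not the published pipeline): count
# fluorescent transcript spots by thresholding the image and labelling
# the connected bright regions that remain.

def count_spots(image: np.ndarray, threshold: float) -> int:
    """Count connected regions brighter than the intensity threshold."""
    mask = image > threshold
    _, n_spots = ndimage.label(mask)
    return n_spots

# Toy 2D "image": three isolated bright pixels on a dark background.
img = np.zeros((20, 20))
img[2, 2] = img[10, 15] = img[17, 5] = 1.0
print(count_spots(img, 0.5))  # -> 3
```

Running this over thousands of segmented cells, then recording where each spot falls within its cell, yields both the per-cell transcript counts and the spatial patterns described above.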

The analysis of the new data shows that individual cells differ in the activity of their genes. While the scientists had suspected high variability in the amount of transcript molecules, they were surprised to discover a strong variability in the spatial organization of transcript molecules within single cells and between cells. The transcript molecules adopted distinctive patterns.


The importance of these new insights was summarized by Pelkmans: "Our method will be of importance to basic research and the understanding of cancer tumors because it allows us to map the activity of genes within single tumor cells."

Dmitry Alexeev's curator insight, October 7, 2013 11:45 PM

I would expect to find more intricate distributions of gene expression patterns among cells. However, it looks as though the high throughput is being reached through automation and robotics rather than through novel principles.

Rescooped by Dr. Stefan Gruenwald from Complex Insight - Understanding our world!

Synthetic biologists probe biochar's impact on microbial signaling


Charcoal has a long soil residence time, which has resulted in its production and use as a carbon sequestration technique (biochar). A range of biological effects can be triggered by soil biochar that can positively and negatively influence carbon storage, such as changing the decomposition rate of organic matter and altering plant biomass production. Sorption of cellular signals has been hypothesized to underlie some of these effects, but it remains unknown whether the binding of biochemical signals occurs, and if so, on time scales relevant to microbial growth and communication.


A team of researchers now examined the biochar sorption of N-3-oxo-dodecanoyl-L-homoserine lactone, an acyl-homoserine lactone (AHL) intercellular signaling molecule used by many gram-negative soil microbes to regulate gene expression. They were able to show that wood biochars disrupt communication within a growing multicellular system that is made up of sender cells that synthesize AHL and receiver cells that express green fluorescent protein in response to an AHL signal. However, biochar inhibition of AHL-mediated cell–cell communication varied, with the biochar prepared at 700 °C (surface area of 301 m2/g) inhibiting cellular communication 10-fold more than an equivalent mass of biochar prepared at 300 °C (surface area of 3 m2/g).


These findings provide the first direct evidence that biochars elicit a range of effects on gene expression dependent on intercellular signaling, implicating the method of biochar preparation as a parameter that could be tuned to regulate microbial-dependent soil processes, like nitrogen fixation and pest attack of root crops.

Via Socrates Logos, ComplexInsight
Socrates Logos's curator insight, September 30, 2013 12:31 PM

Source: Rice University

"In the first study of its kind, Rice University scientists have used synthetic biology to study how a popular soil amendment called “biochar” can interfere with the chemical signals that some microbes use to communicate. The class of compounds studied includes those used by some plant pathogens to coordinate their attacks.

Biochar is charcoal that is produced — typically from waste wood, manure or leaves — for use as a soil additive. Studies have found biochar can improve both the nutrient- and water-holding properties of soil, but its popularity in recent years also owes to its ability to reduce greenhouse gases by storing carbon in soil, in some cases for many centuries.……"

ComplexInsight's curator insight, October 1, 2013 12:44 PM

Bacterial quorum communication is essential for regulating and governing behaviour in bacterial populations - and we often do not understand the inter-relationships well enough. This is good research into the soil impact of adding charcoal as a soil additive - hopefully it will be a precursor to similar studies of other soil additives we routinely add in intensive agriculture.

Scooped by Dr. Stefan Gruenwald!

3D Printing: The Greener Choice

It takes less energy—and thus releases less carbon dioxide—to make stuff at home with a 3D printer than to manufacture it overseas and ship it to the US.


3D printing isn’t just cheaper, it’s also greener, says Michigan Technological University’s Joshua Pearce. Even Pearce, an aficionado of the make-it-yourself-and-save technology, was surprised at his study’s results. It showed that making stuff on a 3D printer uses less energy—and therefore releases less carbon dioxide—than producing it en masse in a factory and shipping it to a warehouse.

Most 3D printers for home use, like the RepRap used in this study, are about the size of microwave ovens. They work by melting filament, usually plastic, and depositing it layer by layer in a specific pattern. Free designs for thousands of products are available from online repositories.


Common sense would suggest that mass-producing plastic widgets would take less energy per unit than making them one at a time on a 3D printer. Or, as Pearce says, “It’s more efficient to melt things in a cauldron than in a test tube.” However, his group found it’s actually greener to make stuff at home.


They conducted life-cycle impact analyses on three products: an orange juicer, a children’s building block and a waterspout. The cradle-to-gate analysis of energy use ran from raw-material extraction to one of two endpoints: entry into the US for an item manufactured overseas, or printing it at home on a 3D printer.

Pearce’s group found that making the items on a basic 3D printer took from 41 percent to 64 percent less energy than making them in a factory and shipping them to the US.
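The percentage savings follow from simple arithmetic; the per-item energy values below are invented placeholders chosen only to reproduce the reported 41 and 64 per cent endpoints:

```python
# Illustrative arithmetic (the energy figures are hypothetical; only the
# 41-64 percent savings range comes from the article): percentage energy
# saved by printing at home versus factory production plus shipping.

def percent_saved(factory_mj: float, printed_mj: float) -> float:
    """Energy saved by home printing, as a percentage of the factory total."""
    return 100 * (factory_mj - printed_mj) / factory_mj

# Hypothetical per-item figures consistent with the reported range:
print(round(percent_saved(10.0, 5.9)))  # -> 41
print(round(percent_saved(10.0, 3.6)))  # -> 64
```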


Some of the savings come from using less raw material. “Children’s blocks are normally made of solid wood or plastic,” said Pearce, an associate professor of materials science and engineering/electrical and computer engineering. 3D printed blocks can be made partially or even completely hollow, requiring much less plastic.


Pearce’s team ran their analysis with two common types of plastic filament used in 3D printing, including polylactic acid (PLA). PLA is made from renewable resources, such as cornstarch, making it a greener alternative to petroleum-based plastics. The team also did a separate analysis on products made using solar-powered 3D printers, which drove down the environmental impact even further.


“The bottom line is, we can get substantial reductions in energy and CO2 emissions from making things at home,” Pearce said. “And the home manufacturer would be motivated to do the right thing and use less energy, because it costs so much less to make things on a 3D printer than to buy them off the shelf or on the Internet.”

Scooped by Dr. Stefan Gruenwald!

The thinnest membrane ever constructed: 1.8nm thick graphene readily sorts hydrogen and carbon dioxide


One of the thinnest membranes ever made is also highly discriminating when it comes to the molecules going through it. Engineers at the University of South Carolina have constructed a graphene oxide membrane less than 2 nanometers thick with high permeation selectivity between hydrogen and carbon dioxide gas molecules.


The selectivity is based on molecular size, the team reported in the journal Science. Hydrogen and helium pass relatively easily through the membrane, but carbon dioxide, oxygen, nitrogen, carbon monoxide and methane move much more slowly.


“The hydrogen kinetic diameter is 0.289 nm, and carbon dioxide is 0.33 nm. The difference in size is very small, only 0.04 nm, but the difference in permeation is quite large,” said Miao Yu, a chemical engineer in USC’s College of Engineering and Computing who led the research team. “The membrane behaves like a sieve. Bigger molecules cannot go through, but smaller molecules can.”
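The sieving logic can be illustrated with standard literature values for kinetic diameters; the pore cutoff below is a hypothetical number chosen to fall between hydrogen and carbon dioxide, not a measured property of this membrane:

```python
# Size-sieving sketch. Kinetic diameters are standard literature values;
# the cutoff is an assumed effective pore size for illustration only.

KINETIC_DIAMETER_NM = {
    "He": 0.26, "H2": 0.289, "CO2": 0.33,
    "O2": 0.346, "N2": 0.364, "CH4": 0.38,
}

def passes(gas: str, pore_cutoff_nm: float) -> bool:
    """A molecule permeates readily only if smaller than the pore cutoff."""
    return KINETIC_DIAMETER_NM[gas] < pore_cutoff_nm

cutoff = 0.30  # hypothetical cutoff between H2 (0.289 nm) and CO2 (0.33 nm)
fast = [g for g in KINETIC_DIAMETER_NM if passes(g, cutoff)]
print(fast)  # -> ['He', 'H2']
```

With any cutoff in that narrow 0.04 nm window, hydrogen and helium pass while carbon dioxide and the larger gases are held back, which matches the selectivity pattern reported above.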


In addition to selectivity, what’s remarkable about the USC team’s result is the quality of the membrane they were able to craft on such a small scale. The membrane is constructed on the surface of a porous aluminum oxide support. Flakes of graphene oxide, with widths on the order of 500 nm but just one carbon atom thick, were deposited on the support to create a circular membrane about 2 square centimeters in area.


The membrane is something of an overlapping mosaic of graphene oxide flakes. It’s like covering the surface of a table with playing cards. And doing that on a molecular scale is very hard if you want uniform coverage and no places where you might get “leaks.” Gas molecules are looking for holes anywhere they can be found, and in a membrane made up of graphene oxide flakes, there would be two likely places: holes within the flakes, or holes between the flakes.


It’s the spaces between flakes that have been a real obstacle to progress in light gas separations. That’s why microporous membranes designed to distinguish in this molecular range have typically been very thick. “At least 20 nm, and usually thicker,” said Yu. Anything thinner and the gas molecules could readily find their way through non-uniform gaps between flakes.


Yu’s team devised a method of preparing a membrane without those “inter-flake” leaks. They dispersed graphene oxide flakes, which are highly heterogeneous mixtures when prepared with current methods, in water and used sonication and centrifugation techniques to prepare a dilute, homogeneous slurry. These flakes were then laid down on the support by simple filtration.


Their thinnest result was a 1.8-nm-thick membrane that only allowed gas molecules to pass through holes in the graphene oxide flakes themselves, the team reported. They found by atomic force microscopy that a single graphene oxide flake had a thickness of approximately 0.7 nm. Thus, the 1.8-nm-thick membrane on aluminum oxide is only a few molecular layers thick, with molecular defects within the graphene oxide that are essentially uniform and just a little too small to let carbon dioxide through easily.

Rescooped by Dr. Stefan Gruenwald from Cool Future Technologies!

Redox Power Plans To Roll Out Dishwasher-Sized Fuel Cells That Cost 90% Less Than Currently Available Fuel Cells


Redox Power Systems, a Fulton, MD-based start-up company founded last year, sealed the deal on a partnership with researchers at the University of Maryland to commercialize a potentially game-changing distributed generation technology.


Redox says that it plans to bring to market a fuel cell that is about one-tenth the size and one-tenth the cost of currently commercial fuel cells by 2014.


The breakthrough solid oxide fuel cell technology is the brainchild of Eric Wachsman, the director of the University of Maryland’s Energy Research Center.


Redox says that it will provide safe, efficient, reliable, uninterrupted power, on-site and optionally off the grid, at a price competitive with current energy sources.


The promise is this: generate your own electricity with a system nearly impervious to hurricanes, thunderstorms, cyber attacks, derechos, and similar dangers, while simultaneously helping the environment.


“Every business or home should be able to safely generate its own energy,” said Warren Citrin, CEO and director of Redox. “We currently rely upon a vulnerable electrical grid. The best way to decrease that vulnerability is through distributed energy, that is, by making your own energy on-site. We are building systems to do that, with an emphasis on efficiency and affordability. These should be common appliances.”


Redox’s PowerSERG 2-80, also called “The Cube,” connects to your natural gas line and electrochemically converts methane to electricity.

The first generation has a nameplate capacity of 25 kilowatts, which can power a gas station or small grocery store, and is roughly the size of a dishwasher.


The system can run at 80% efficiency when used to provide both heat and power.
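Combined heat-and-power efficiency is simply useful electricity plus useful heat over fuel input; the split below is my assumption for illustration, since the article gives only the 25 kW rating and the 80% combined figure:

```python
# Illustrative CHP arithmetic (the electric/thermal split and fuel input
# are assumptions; only 25 kW and 80 percent come from the article).

def chp_efficiency(electric_kw: float, heat_kw: float, fuel_kw: float) -> float:
    """Combined efficiency: useful electricity plus useful heat over fuel input."""
    return (electric_kw + heat_kw) / fuel_kw

# Suppose 25 kW electric plus 15 kW of recovered heat from 50 kW of fuel:
print(chp_efficiency(25, 15, 50))  # -> 0.8
```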

Via Sepp Hasslberger
Sepp Hasslberger's curator insight, August 20, 2013 9:10 AM

Projected 25 kW power plant that is to run on methane.

Scooped by Dr. Stefan Gruenwald!

RNAi Therapeutics: Tekmira Expands Biodefense Efforts to Cover Both Ebola and Marburg Virus


Following the successes with its Ebola biodefense program, Tekmira has started to present increasingly promising data for treating a related filovirus, the Marburg virus.  With 100% survival rates in non-human primate models and new RNAi triggers that should cover a broad spectrum of strains and possibly related viruses, the company is well positioned to also take the Marburg indication away from competitor Sarepta, which in turn has become preoccupied with its exon-skipping drug candidate for DMD.  

Tekmira and their collaborators from the UTMB in Galveston, Texas, reported the successful treatment of guinea pigs infected with a number of different strains of the Marburg virus.  Because the development of treatments for rapidly mutating viruses, and for viruses with a multitude of divergent strains, is hampered by sequence diversity, broader strain coverage was achieved through the concurrent use of two siRNAs in a single formulation, as in Tekmira’s Ebola approach and Arrowhead’s chronic HepB strategy (ARC520).

Chief Scientific Officer of the company, Ian MacLachlan, presented gold standard non-human primate data of SNALP RNAi Therapeutics for Marburg virus at the ongoing OligoDIA regulator-industry conference.  Accordingly, Tekmira’s newer LNP formulations (‘SNALP-G’) were shown to fully protect monkeys from death due to Marburg infection when given at 0.5mg/kg (=the magic safety threshold for SNALP). As an important comparison, Sarepta last year reported ‘83% to 100%’ protection rates in comparable models when treatment was initiated up to 96 hours after infections with its newer PMOplus morpholino chemistry.   It is therefore of interest to test the impact of further delaying treatment with SNALPs.

RNAi therapeutics have the potential to treat a broad number of human diseases by "silencing" disease causing genes. The discoverers of RNAi, a gene silencing mechanism used by all cells, were awarded the 2006 Nobel Prize for Physiology or Medicine. RNAi therapeutics, such as "siRNAs," require delivery technology to be effective systemically. Tekmira believes its LNP technology represents the most widely adopted delivery technology for the systemic delivery of RNAi therapeutics. Tekmira's LNP platform is being utilized in multiple clinical trials by both Tekmira and its partners. Tekmira's LNP technology (formerly referred to as stable nucleic acid-lipid particles or SNALP) encapsulates siRNAs with high efficiency in uniform lipid nanoparticles that are effective in delivering RNAi therapeutics to disease sites in numerous preclinical models. Tekmira's LNP formulations are manufactured by a proprietary method which is robust, scalable and highly reproducible, and LNP-based products have been reviewed by multiple FDA divisions for use in clinical trials. LNP formulations comprise several lipid components that can be adjusted to suit the specific application.

Scooped by Dr. Stefan Gruenwald!

New adaptive and multifunctional material is inspired by tears


Imagine a tent that blocks light on a dry and sunny day, and becomes transparent and water-repellent on a dim, rainy day. Or highly precise, self-adjusting contact lenses that also clean themselves. Or pipelines that can optimize the rate of flow depending on the volume of fluid coming through them and the environmental conditions outside.


A team of researchers at the Wyss Institute at Harvard University and Harvard's School of Engineering and Applied Sciences (SEAS) just moved these enticing notions much closer to reality by designing a new kind of adaptive material with tunable transparency and wettability features, as reported yesterday in the online version of Nature Materials.


"The beauty of this system is that it's adaptive and multifunctional," said senior author Joanna Aizenberg, Ph.D., a Core Faculty member at the Wyss Institute and the Amy Smith Berylson Professor of Materials Science at SEAS.


The new material was inspired by dynamic, self-restoring systems in Nature, such as the liquid film that coats your eyes. Individual tears join up to form a dynamic liquid film that maintains optical clarity while keeping the eye moist, protecting it against dust and bacteria, and helping to transport away wastes -- doing all of this and more in literally the blink of an eye.

Scooped by Dr. Stefan Gruenwald!

How Many Earths? Interactive Kepler Data


This interactive graphic is based on the data for candidate planets identified by NASA's Kepler Space Telescope. Kepler found these planets by recording the slight dimming of the light from a star caused by a planet passing in front of it.


About 10 per cent of the candidate planets will probably turn out to be no such thing – it's possible to mistake the second star in a binary star system for a giant planet, for example. On the other hand, Kepler probably missed around 10 per cent of the planets that passed in front of target stars because the dimming of the star's light was too slight to detect against the natural variability in the stars' light output. These two numbers roughly cancel each other out, so they are not included in our calculations.


The first step in answering "How many Earths?" was to ignore planets twice the Earth's diameter or larger: these are likely to be gas giants like Jupiter, not rocky worlds like ours. However, such planets may possess rocky moons, which could well host life.


Not all of the remaining planets will be hospitable to life. For example, carbon-rich planets could have a graphite crust with layers of diamond below and rivers of oil and tar.


Kepler could not determine a planet's composition, but to calculate how many planets might be friendly to life, we estimated the number in stars' habitable zones – orbits where a planet will be neither too hot nor too cold for water to exist in liquid form.


Defining a star's habitable zone is a complex process, but as a reasonable proxy we used Kepler's estimates of planets' equilibrium temperature. This is the temperature that would be measured at a planet's surface if it were a black body heated by its parent star without any atmospheric greenhouse effect.
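The black-body equilibrium temperature the text uses as a proxy has a standard closed form, sketched below with approximate solar constants. For an airless black body, T_eq = T_star * sqrt(R_star / 2a) * (1 - A)^(1/4), where a is the orbital distance and A the albedo.

```python
import math

# Sketch of the standard equilibrium-temperature formula: a black body at
# distance a from a star of temperature T_star and radius R_star, with Bond
# albedo A and no atmospheric greenhouse effect. Constants are approximate.

def equilibrium_temperature(t_star_k, r_star_m, a_m, albedo=0.0):
    return t_star_k * math.sqrt(r_star_m / (2.0 * a_m)) * (1.0 - albedo) ** 0.25

T_SUN = 5778.0   # K
R_SUN = 6.957e8  # m
AU = 1.496e11    # m

# A zero-albedo Earth comes out near 279 K; liquid water at 1 atm spans roughly
# 273-373 K, so such a planet would sit inside the habitable zone.
print(round(equilibrium_temperature(T_SUN, R_SUN, AU), 1))
```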


The next step – the most uncertain part of our quest – was extrapolating to the total number of roughly Earth-sized planets likely to be orbiting Kepler's 150,000 target stars. Simple geometry tells us that Kepler will have missed most of these planets: the tilts of their orbits mean they never passed between their parent stars and the telescope. And the farther out a planet orbits, the harder it was for Kepler to detect.
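The "simple geometry" here is that a randomly tilted orbit crosses our line of sight with probability roughly R_star / a. The sketch below uses approximate solar values to show how severe the undercount is; it is an illustration of the geometric argument, not the study's actual correction.

```python
# A randomly oriented orbit shows transits only if its tilt is small enough,
# which happens with probability ~ R_star / a (star radius over orbit radius).

R_SUN = 6.957e8  # m
AU = 1.496e11    # m

def transit_probability(r_star_m, a_m):
    return r_star_m / a_m

p = transit_probability(R_SUN, AU)
# For an Earth-like orbit around a Sun-like star, p is about 0.5%, so each
# detected planet stands in for roughly 200 unseen systems. Wider orbits
# (larger a) are geometrically even harder to catch.
print(f"P(transit) ~ {p:.4f}, correction factor ~ {1 / p:.0f}")
```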


Taking everything into account, the best estimate for the average number of roughly Earth-sized planets in each star's habitable zone is 0.15, according to simulations based on Kepler data performed by Courtney Dressing and David Charbonneau of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts. Applying this average to Kepler's 150,000 target stars gave our estimate of 22,500 potentially habitable, roughly Earth-sized planets.


There is an important caveat, though. Dressing and Charbonneau's calculations are for class M stars, which have a reddish hue and account for about three-quarters of the stars in our galaxy. But about 80 per cent of Kepler's target stars are class G stars, like our sun, which are yellowish. Nobody knows for sure whether these different classes of stars have similar populations of planets.


The final step in our quest was to extrapolate to the entire galaxy. Estimates of the number of stars in the Milky Way vary from 100 billion to 200 billion. Applying the same estimate of 0.15 potentially Earth-like planets per star gave our figure of between 15 and 30 billion.
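Both extrapolations in the text are a single multiplication by the same per-star average, written out below for clarity.

```python
# The two extrapolations from the text. 0.15 is Dressing & Charbonneau's
# estimated average of potentially habitable, roughly Earth-sized planets
# per star, applied first to Kepler's targets, then to the whole galaxy.

PLANETS_PER_STAR = 0.15
KEPLER_TARGETS = 150_000

print(int(PLANETS_PER_STAR * KEPLER_TARGETS))  # Kepler target-star estimate

# Milky Way star-count estimates range from 100 to 200 billion.
for n_stars in (100e9, 200e9):
    print(f"{PLANETS_PER_STAR * n_stars / 1e9:.0f} billion")
```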


If we had displayed all these potential planets in the final view, the sky would have become a mass of green. To give a meaningful view for someone here on Earth, we selected stars from the European Space Agency's Tycho-2 catalogue with an apparent magnitude of 10.5 or brighter – these stars would be visible on a dark night with a good pair of binoculars. We have displayed a random sample of 15 per cent of these stars, corresponding to Dressing and Charbonneau's estimate of stars with potentially habitable, roughly Earth-sized planets.
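The display selection described above amounts to a magnitude cut followed by random subsampling. This is a hypothetical sketch of that logic; the field names are assumptions for illustration, not the Tycho-2 catalogue's actual schema.

```python
import random

# Hypothetical sketch of the selection described in the text: keep stars
# bright enough for binoculars (apparent magnitude <= 10.5; smaller magnitude
# means brighter), then display a random 15% of them as potential planet hosts.

def select_display_stars(catalogue, mag_limit=10.5, fraction=0.15, seed=42):
    rng = random.Random(seed)  # fixed seed so the displayed sample is stable
    bright = [star for star in catalogue if star["mag"] <= mag_limit]
    return [star for star in bright if rng.random() < fraction]

# Toy catalogue with made-up magnitudes.
catalogue = [{"id": i, "mag": m}
             for i, m in enumerate([3.2, 9.8, 11.4, 10.5, 12.0])]
chosen = select_display_stars(catalogue)
assert all(star["mag"] <= 10.5 for star in chosen)
```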

BlackHorseMedia's curator insight, September 30, 2013 12:56 PM

2500 years ago, the Buddha is said to have remarked that there are "many, many" planets with beings just like us..... 

Scooped by Dr. Stefan Gruenwald!

Biologists Confirm Role of Sperm Competition in Formation of New Species


Biologists in Syracuse University’s College of Arts and Sciences have confirmed that reproductive isolation, a critical step in the formation of new species, can arise from diversifying sperm competition. Their findings, which have major implications for the study of biodiversity, are the subject of a groundbreaking article in the Oct. 7th issue of Current Biology (Elsevier, 2013).


Female promiscuity—something that occurs in a majority of species, including humans—results in the ejaculates from two or more males overlapping within the female's reproductive tract. When this happens, sperm compete for fertilization of the female's eggs. In addition, the female has the opportunity to bias fertilization of her eggs in favor of one male's sperm over others.


These processes, collectively known as postcopulatory sexual selection, drive a myriad of rapid, coordinated evolutionary changes in ejaculate and female reproductive tract traits. These changes have been predicted to be an important part of speciation, the process by which new biological species arise.


Until now, traits and processes that influence fertilization success have been poorly understood, due to the challenges of observing what sperm do within the female’s body and of discriminating sperm among different males. Almost nothing is known about what determines the sperm’s fate in hybrid matings where there may be an evolutionary mismatch between ejaculate and female reproductive tract traits.


Professor John Belote has overcome these challenges by genetically engineering closely related species of fruit flies with different colors of glow-in-the-dark sperm. Working closely with Scott Pitnick, Mollie Manier, and other colleagues in SU's Pitnick Lab, he is able to observe ejaculate-female interactions and sperm competition in hybrid matings.


“How new species arise is one of the most important questions facing biologists, and we still have a lot to learn,” says Pitnick, a professor in SU's Department of Biology, adding that the mechanisms maintaining the genetic boundary between species are difficult to pin down. “This paper [in Current Biology] is perhaps the most important one of my career. It has been six years in the making.”

Scooped by Dr. Stefan Gruenwald!

Alien frontier: See the haunting, beautiful weirdness of Mars in Hi-Res Pictures


Mounted on the Mars Reconnaissance Orbiter as it floats high above the red planet is the HiRISE telescope, an imaging device capable of taking incredibly high-resolution photos of the Martian landscape. It has sent back nearly 30,000 photos during its time above the planet, which have been used by NASA to find clear landing spots for rovers, and by researchers to learn more about the features of Mars' surface.


The stunning views captured by HiRISE have inspired a book from the publisher Aperture, called This is Mars, which includes 150 of its finest looks at the planet. The entire collection is in black and white, however, as that's how HiRISE's images naturally turn out.


But by combining different color filters on the telescope, NASA is able to produce colored versions of most images too. They're known as "false color" images, since they won't perfectly match up with what the human eye would see. False color images are still useful, however, in helping researchers distinguish between different elements of Mars' landscape. They're also downright gorgeous to look through. Below, we've collected our own series of some of the most incredible sights taken by HiRISE throughout 2013.
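The idea behind a false-color composite is simply to map filter bands that don't correspond to human vision onto the red, green, and blue display channels, then stretch the result so features in every band stay visible. The sketch below is an illustration of that principle, not NASA's actual HiRISE pipeline, and the band names are assumptions.

```python
# Illustrative false-color compositing: three single-band images (lists of
# rows of brightness values) are mapped onto the R, G, and B display channels
# and linearly stretched to the full 0-1 range.

def false_color(band_a, band_b, band_c):
    """Combine three single-band images into (R, G, B) pixel triples."""
    flat = [v for band in (band_a, band_b, band_c) for row in band for v in row]
    lo, hi = min(flat), max(flat)
    scale = (hi - lo) or 1.0

    def stretch(v):
        return (v - lo) / scale

    return [
        [(stretch(a), stretch(b), stretch(c))
         for a, b, c in zip(row_a, row_b, row_c)]
        for row_a, row_b, row_c in zip(band_a, band_b, band_c)
    ]

# Tiny 2x2 synthetic scene: each output pixel is an (R, G, B) triple in [0, 1].
img = false_color([[0.2, 0.9], [0.4, 0.6]],
                  [[0.1, 0.5], [0.3, 0.7]],
                  [[0.0, 0.2], [0.8, 0.4]])
```

Because the channels are remapped and stretched, colors in the output deliberately differ from what the eye would see, which is exactly what makes distinct surface materials easier to tell apart.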

Adrian Rojas's comment, October 7, 2013 8:36 PM
Well, I thought aliens didn't exist? And if they didn't, why are they saying they have found alien-made things on Mars? I believe in aliens because there is proof, but then you hear someone say no and they try to find some scientific way of proving that aliens don't exist. And some of the pictures here don't really show or prove that aliens made them, because they could have just been eroded, or gravity could have formed them like that. There are many explanations for why these photographs are not alien-made; they could simply have been formed naturally like that.

How do we have photos of Mars if we have never been there? And there was an article that said they have found water on Mars, so it's not impossible that there was life on the planet. But you can't just jump to the conclusion that there are aliens on Mars.
Dr. Stefan Gruenwald's comment, October 7, 2013 11:32 PM
Alien in the title is used as an adjective and means "strange, foreign". It has NOTHING to do with actual aliens. Where do you get this idea from?
alenav09's curator insight, October 11, 2013 4:42 PM

Wow, aliens. That's crazy, and to think some people actually believe there are no aliens. Just wow. If you really think about it, you can see that we can't be the only possible life forms out there!

Scooped by Dr. Stefan Gruenwald!

Paleoclimatology: Three Major Rivers Existed in the Sahara 100,000 Years Ago


Simulating paleoclimates in the Sahara region, a team of researchers from Germany and the United Kingdom has found evidence of three major river systems that likely existed in North Africa about 130,000 – 100,000 years ago but are now largely buried by dune systems in the desert. The image shows the Irharhar, Sahabi and Kufrah river systems in the Sahara region. The green points show the location of archaeological sites in the region.


When flowing, these rivers – Irharhar, Sahabi and Kufrah – likely provided fertile habitats for animals and vegetation, creating ‘green corridors’ across the region. At least one river system is estimated to have been 100 km wide and largely perennial.


The Irharhar river, westernmost of the three identified, may represent a likely route of human migration across the region. In addition to rivers, the new simulations predict massive lagoons and wetlands in northeast Libya, some of which span over 70,000 square kilometers.


“It’s exciting to think that 100,000 years ago there were three huge rivers forcing their way across 1,000 km of the Sahara desert to the Mediterranean, and that our ancestors could have walked alongside them,” said Dr Tom Coulthard of the University of Hull, UK, a lead author of the study published in the journal PLoS ONE.


Previous studies have shown that people traveled across the Saharan mountains toward more fertile Mediterranean regions, but when, where and how they did so is a subject of debate. Existing evidence supports the possibilities of a single trans-Saharan migration, many migrations along one route, or multiple migrations along several different routes.


The existence of ‘green corridors’ that provided water and food resources was likely critical to these events, but their locations and the amount of water they carried are not known. The simulations in this study aim to quantify the probability that these routes were viable for human migration across the region.

Scooped by Dr. Stefan Gruenwald!

Parasites make mice lose fear of cats permanently, even after Toxoplasma infection is cleared


Behavioral changes persist after Toxoplasma infection is cleared. A parasite that infects up to one-third of people around the world may have the ability to permanently alter a specific brain function in mice.

Toxoplasma gondii is known to remove rodents’ innate fear of cats. The new research shows that even months after infection, when parasites are no longer detectable, the effect remains. This raises the possibility that the microbe causes a permanent structural change in the brain. The microbe is a single-celled pathogen that infects most types of mammal and bird, causing a disease called toxoplasmosis. But its effects on rodents are unique; most flee cat odor, but infected ones are mildly attracted to it.

This is thought to be an evolutionary adaptation to help the parasite complete its life cycle: Toxoplasma can sexually reproduce only in the cat gut, and for it to get there, the pathogen's rodent host must be eaten.


In humans, studies have linked Toxoplasma infection with behavioral changes and schizophrenia. One work found an increased risk of traffic accidents in people infected with the parasite; another found changes in responses to cat odor. People with schizophrenia are more likely than the general population to have been infected with Toxoplasma, and medications used to treat schizophrenia may work in part by inhibiting the pathogen's replication.


Schizophrenia is thought to involve excess activity of the neurotransmitter dopamine in the brain. This has bolstered one possible explanation for Toxoplasma’s behavioral effect: the parasite establishes persistent infections by means of microscopic cysts that grow slowly in brain cells. It can increase those cells’ production of dopamine, which could significantly alter their function. Most other suggested mechanisms also rely on the presence of cysts.

Research on Toxoplasma has mainly used the North American Type II strain. Wendy Ingram, a molecular cell biologist at the University of California, Berkeley, and her colleagues investigated the effects of two other major strains, Type I and Type III, on mouse behavior. They found that within three weeks of infection with either strain, mice lost all fear of cat odor — showing that the behavioral shift is a general trait of Toxoplasma.


More surprising was the situation four months after infection. The Type I pathogen that the researchers used had been genetically modified to provoke an effective immune response, allowing the mice to overcome the infection. After four months it was undetectable in the mouse brain, indicating that no more than 200 parasite cells, the limit of detection, remained. “We actually expected that Type I wouldn’t be able to form cysts, and therefore wouldn’t be able to cause the behavioral change,” explains Ingram.

But that was not the case: the mice remained as unperturbed by cat odor as they had been at three weeks. “Long after we lose the ability to see it in the brain, we still see its behavioral effect,” says geneticist Michael Eisen, also at Berkeley.


This suggests that the behavioral change could be due to a specific, hard-wired alteration in brain structure, which is generated before cysts form and cannot be reversed. The finding casts doubt on theories that cysts or dopamine cause the behavioral changes of Toxoplasma infections.

Becky Raines's comment, October 1, 2013 12:47 PM
That is so adorable. I was hoping that one day mice would stop being afraid and be sort of like friends with cats, but that does mean the cat has to agree as well. Cats like to chase mice, so unless the cat agrees, it won't work out for either animal.