Amazing Science
Curated by Dr. Stefan Gruenwald

New type of solar structure cools buildings in full sunlight

Homes and buildings chilled without air conditioners. Car interiors that don't heat up in the summer sun. Tapping the frigid expanses of outer space to cool the planet. Science fiction, you say? Well, maybe not any more.

A team of researchers at Stanford has designed an entirely new form of cooling structure that cools even when the sun is shining. Such a structure could vastly improve the daylight cooling of buildings, cars and other structures by reflecting sunlight and radiating heat into the chilly vacuum of space.


“People usually see space as a source of heat from the sun, but away from the sun outer space is really a cold, cold place,” explained Shanhui Fan, professor of electrical engineering and the paper’s senior author. “We’ve developed a new type of structure that reflects the vast majority of sunlight, while at the same time it sends heat into that coldness, which cools manmade structures even in the day time.” 

The trick, from an engineering standpoint, is two-fold. First, the reflector has to reflect as much of the sunlight as possible. Poor reflectors absorb too much sunlight, heating up in the process and defeating the purpose of cooling.


The second challenge is that the structure must efficiently radiate heat back into space. Thus, the structure must emit thermal radiation very efficiently within a specific wavelength range in which the atmosphere is nearly transparent. Outside this range, Earth's atmosphere absorbs the radiation and re-emits much of it back down. Most people are familiar with this phenomenon as the greenhouse effect, the cause of global climate change.

The new structure accomplishes both goals. It is an effective broadband mirror for solar light, reflecting most of the sunlight, and it also emits thermal radiation very efficiently within the crucial wavelength range needed to escape Earth's atmosphere.
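
To see concretely why both goals matter, the panel's performance can be framed as a steady-state energy balance. This is a standard way of writing down radiative cooling, not a formula quoted from the Stanford paper:

    P_{\rm net}(T) = P_{\rm rad}(T) - P_{\rm atm}(T_{\rm amb}) - P_{\rm sun} - P_{\rm nonrad}

Here P_{\rm rad} is the thermal power the panel emits at temperature T, P_{\rm atm} is the downwelling atmospheric radiation it absorbs, P_{\rm sun} is the absorbed sunlight, and P_{\rm nonrad} is parasitic heating by conduction and convection from the surrounding air. Daytime cooling requires P_{\rm net} > 0 while the sun delivers roughly 1,000 watts per square meter, which is why the structure must reflect nearly all sunlight (keeping P_{\rm sun} small) and emit strongly in the 8-13 micrometer window where the atmosphere is nearly transparent (keeping P_{\rm rad} large and P_{\rm atm} small).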


Radiative cooling at nighttime has been studied extensively as a mitigation strategy for climate change, yet peak demand for cooling occurs in the daytime. 


“No one had yet been able to surmount the challenges of daytime radiative cooling—of cooling when the sun is shining,” said Eden Rephaeli, a doctoral candidate in Fan’s lab and a co-first-author of the paper. “It’s a big hurdle.”

The Stanford team has succeeded where others have come up short by turning to nanostructured photonic materials. These materials can be engineered to enhance or suppress light reflection in certain wavelengths.


"We've taken a very different approach compared to previous efforts in this field," said Aaswath Raman, a doctoral candidate in Fan’s lab and a co-first-author of the paper. "We combine the thermal emitter and solar reflector into one device, making it both higher performance and much more robust and practically relevant. In particular, we're very excited because this design makes viable both industrial-scale and off-grid applications."


Using engineered nanophotonic materials, the team was able to strongly suppress how much heat-inducing sunlight the panel absorbs, while radiating heat very efficiently in the key frequency range necessary to escape Earth's atmosphere. The material is made of quartz and silicon carbide, both very weak absorbers of sunlight.


Quality control opens path to synthetic biology's Ikea - Next industrial revolution could be biological


Think living machines that produce energy from landfill waste, biological sensors that detect dirty water or bacterial production lines that churn out drugs.


These are just some of the applications that synthetic biology – applying engineering principles to biological parts – could make possible. That goal is looking more likely now that, for the first time, researchers have established a set of rules that could allow parts to be assembled with industrial rigour. Libraries of these standardised high-quality parts will let engineers pick components knowing how they will behave.


The behaviour of all living matter is governed by gene expression, the process by which biological materials such as proteins are made. So synthetic biology's "parts" are the DNA sequences that contain certain manufacturing instructions. When these parts are stuck together, the genes are expressed and the required protein is made.


Researchers have been building one-off biological machines by combining several of these parts for years. But, because there is little quality control, producing them on an industrial scale has so far been impossible. To change this, Drew Endy, co-director of the BIOFAB facility in California, and his colleagues have developed a mathematical framework that shows how each part interacts with others and whether this results in the right amount of the right product being made. The work involved physically testing hundreds of combinations of common biological components and using the results to create a scoring system, effectively establishing a standard of excellence that should let engineers build their most reliable devices yet (Nucleic Acids Research).


The team found that bundling parts together according to their specific function gave more reliable results than considering them separately. This is how nature does it, says Endy, but the dogma had been that all parts should be clearly separated and assembled in a more modular way, which was the principle used to set up the BioBricks registry, an existing library of parts, in 2003. It was a case of "let's change our religion on how you assemble things", says Endy.


This realisation enabled the team to design hundreds of combinations of DNA segments from the Escherichia coli genome, one of the most commonly used sources of parts, to build up a library of parts with a reliability of around 93 per cent.
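
The article does not spell out the team's actual statistical framework, but the idea of scoring a part by how predictably it behaves can be sketched in a few lines of Python. Everything below, from the two-fold tolerance to the example numbers, is a hypothetical illustration, not BIOFAB's real metric:

    # Hypothetical part-quality score: the fraction of tested combinations
    # whose measured output falls within two-fold of the model's prediction.
    # (Illustration only; not the metric from the Nucleic Acids Research paper.)
    def reliability_score(predicted, measured):
        within = sum(1 for p, m in zip(predicted, measured) if 0.5 <= m / p <= 2.0)
        return within / len(predicted)

    predicted = [1.0, 2.0, 4.0, 8.0]   # expression levels the framework predicts
    measured  = [1.1, 1.7, 9.5, 7.2]   # expression levels actually measured
    print(reliability_score(predicted, measured))  # -> 0.75

A part that scores near 1.0 across many contexts behaves like a catalogue component; one that scores poorly is flagged as unreliable.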


"It's really great to have honest metrics for the performance of parts," saysChristopher Voigt at the Massachusetts Institute of Technology, who four years ago developed components that worked about 50 per cent of the time. It should help remove the element of trial and error that synthetic biologists have so far had to live with. "It's normally done in an ad-hoc way," says Voigt. "You just drop in the part and hope you get what you want."


For the BIOFAB group, whose aim is to mass-produce their standardised parts and ship them to researchers around the world, this is just the start. The scoring system should apply to a wide range of organisms, and there are many more families of parts that need to be catalogued, such as those from other commonly used bacteria like Salmonella and Rhodobacter. "It's a slow, hard slog but it's essential," says Voigt.


One thing is for sure: biology is undergoing developments that parallel the industrial revolution, says Richard Kitney at Imperial College London. Only recently, for example, it might have taken 10 bioengineers more than 10 years to build something that produced a single drug. This is akin to the cottage industries of the 18th century in the UK, Kitney says, where master craftsmen like George Hepplewhite would labour to create one-off pieces of furniture.

"But we went from Hepplewhite to Ikea," he says. "That's what we're trying to achieve in synthetic biology."


New Process to Make Near Perfect Nanospheres Developed


Researchers led by Dr. Victoria Gelling at North Dakota State University, Fargo, developed a patent-pending technology to produce nanospheres that could enable advances across multiple industries. The environmentally friendly process uses ozone in water to oxidize small molecules into polymer-based nanospheres, ranging from 70 to 400 nanometers in diameter, that are uniform in size and shape, stay suspended in solution, and are easily removed using a centrifuge. A scanning electron microscopy image depicts the uniform spherical morphology of these nanospheres.

“The synthesis of the nanospheres is rather simple, with no other chemicals required other than water, ozone, and the small molecules which will become the polymers,” said Dr. Gelling. “We also have tight control of the size, as they are beautiful, perfect marbles.”

Given their uniform size and shape, the nanospheres could have uses across multiple industries. According to Dr. Gelling, such nanospheres could be used to:

• Produce high-performance electronic devices and energy-efficient digital displays
• Create materials with high conductivity and smaller parts for consumer electronics
• Deliver medicine directly to diseased cells in the body
• Provide antibacterial coating on dressing for wounds
• Develop nanosensors to aid in early disease detection
• Create coatings that provide increased protection against corrosion and abrasion


Voxeljet: The First Continuous 3D Printer


If there ever was a major leap in the evolution of the 3D printer, the Voxeljet concept is the benchmark machine to follow. In the explosive arena of start-ups that produce innovative 3D-printers, voxeljet has decided to challenge and change the direction of how 3D printers work. Taking a look at three specific factors that set this process apart from others on the market, it becomes quite clear just how revolutionary this concept is.


• The ability to have a continuous supply of consumables delivered to the machine as it is making a model. This is made possible because the bed of consumables sits above where the models are actually made.


• The printhead sits in an area that is tilted at about a 35-degree angle, with a printhead resolution of 600 DPI.


• The build size is 800 mm x 500 mm x infinity. As the model is being printed, it sits on a conveyor belt that delivers the model out at the other end.


At a layer thickness of 150 to 400 microns, the resolution is decent when compared to other 3D printers, but it is worth noting that this is still in the concept phase, so there is room to improve the layer thickness.
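
As a quick back-of-envelope check on what those layer thicknesses mean in practice, here is a small Python calculation. The 150-400 micron figures come from the article; the 500 mm part height is simply an assumed example, not a voxeljet specification:

    # Layers needed to build a part of a given height at the quoted
    # layer thicknesses (150 and 400 microns, from the article).
    part_height_mm = 500  # assumed example height
    for layer_um in (150, 400):
        layers = part_height_mm * 1000 // layer_um
        print(f"{layer_um} um layers: {layers} layers for a {part_height_mm} mm tall part")

At the finest setting the machine would lay down 3,333 layers for such a part; at the coarsest, 1,250.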

Continuous 3D-printing technology represents a new dimension in the tool-less manufacturing of moulds and models. With its big advantages over conventional standard 3D printers, the VX concept is a pioneer for a whole new generation of machines. The length of the moulds is virtually unlimited with this type of system, as there is no restriction on the length of the belt conveyor; the usable build length is limited only by the manageability of the moulds. Furthermore, the tilt of the print level means the print head needs far less time for positioning movements, which improves the print speed.

Apart from the technological highlights, users will be pleased with the investment and operating costs, which are lower than those of conventional systems. With the continuous printing system there is no need for a build container or a separate unpacking station, which has a positive effect on the purchase costs. The printer also scores points with its high re-use rate for unprinted particle material, which is returned straight to the build zone from the unpacking area. Consequently, the machine requires smaller filling quantities and incurs lower set-up costs.


The voxeljet adds a whole new possibility: producing larger numbers of mass-customized products. Using 3D-printed models as the sole means of fabricating manufactured parts is still too costly, but the voxeljet brings us closer to the tipping point by allowing the best of both worlds. Additive manufacturing produces less waste than its subtractive counterpart, and there is no added cost for geometric complexity. Coupled with an extra-long support bed out the other side, it becomes quite clear that a customized, on-demand future is that much closer.


Biological transistor (transcriptor) enables computing within living cells


ENIAC, the first modern computer developed in the 1940s, used vacuum tubes and electricity. Today, computers use transistors made from highly engineered semiconducting materials to carry out their logical operations. And now a team of Stanford University bioengineers has taken computing beyond mechanics and electronics into the living realm of biology. In a paper to be published March 28 in Science, the team details a biological transistor made from genetic material — DNA and RNA — in place of gears or electrons. The team calls its biological transistor the “transcriptor.”


“Transcriptors are the key component behind amplifying genetic logic — akin to the transistor and electronics,” said Jerome Bonnet, PhD, a postdoctoral scholar in bioengineering and the paper’s lead author. The creation of the transcriptor allows engineers to compute inside living cells to record, for instance, when cells have been exposed to certain external stimuli or environmental factors, or even to turn on and off cell reproduction as needed. “Biological computers can be used to study and reprogram living systems, monitor environments and improve cellular therapeutics,” said Drew Endy, PhD, assistant professor of bioengineering and the paper’s senior author.

In electronics, a transistor controls the flow of electrons along a circuit. Similarly, in biologics, a transcriptor controls the flow of a specific protein, RNA polymerase, as it travels along a strand of DNA.


“We have repurposed a group of natural proteins, called integrases, to realize digital control over the flow of RNA polymerase along DNA, which in turn allowed us to engineer amplifying genetic logic,” said Endy. Using transcriptors, the team has created what are known in electrical engineering as logic gates that can derive true-false answers to virtually any biochemical question that might be posed within a cell. They refer to their transcriptor-based logic gates as “Boolean Integrase Logic,” or “BIL gates” for short. Transcriptor-based gates alone do not constitute a computer, but they are the third and final component of a biological computer that could operate within individual living cells.
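
The article does not detail how a BIL gate is wired genetically, but its Boolean behaviour is easy to sketch. The toy Python model below treats each integrase input as flipping a transcription terminator out of RNA polymerase's path, so the downstream gene is expressed only if polymerase can traverse every element, which yields AND logic. This is an illustration of the logic only, not the paper's actual genetic design:

    # Toy model of a transcriptor-based AND gate (illustrative only).
    # Each input integrase flips one terminator out of the polymerase's path;
    # the output gene is expressed only when no terminator still blocks it.
    def bil_and(integrase_a, integrase_b):
        blocking_terminators = [not integrase_a, not integrase_b]
        return not any(blocking_terminators)

    for a in (False, True):
        for b in (False, True):
            state = "expressed" if bil_and(a, b) else "silent"
            print(f"integrase A={a}, integrase B={b} -> gene {state}")

Other gates (OR, NAND, XOR) follow by changing how the terminators respond to the inputs.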


Despite their outward differences, all modern computers, from ENIAC to Apple, share three basic functions: storing, transmitting and performing logical operations on information.


Last year, Endy and his team made news by delivering the other two core components of a fully functional genetic computer. The first was a type of rewritable digital data storage within DNA. They also developed a mechanism for transmitting genetic information from cell to cell, a sort of biological Internet.


“The potential applications are limited only by the imagination of the researcher,” said co-author Monica Ortiz, a PhD candidate in bioengineering who demonstrated autonomous cell-to-cell communication of DNA encoding various BIL gates.


To create transcriptors and logic gates, the team used carefully calibrated combinations of enzymes, the integrases mentioned earlier, that control the flow of RNA polymerase along strands of DNA. If this were electronics, the DNA would be the wire and the RNA polymerase the electrons.


“The choice of enzymes is important,” Bonnet said. “We have been careful to select enzymes that function in bacteria, fungi, plants and animals, so that bio-computers can be engineered within a variety of organisms.”


On the technical side, the transcriptor achieves a key similarity between the biological transistor and its semiconducting cousin: signal amplification. 

With transcriptors, a very small change in the expression of an integrase can create a very large change in the expression of any two other genes.


To understand the importance of amplification, consider that the transistor was first conceived as a way to replace expensive, inefficient and unreliable vacuum tubes in the amplification of telephone signals for transcontinental phone calls. Electrical signals traveling along wires get weaker the farther they travel, but if you put an amplifier every so often along the way, you can relay the signal across a great distance. The same would hold in biological systems as signals get transmitted among a group of cells.


“It is a concept similar to transistor radios,” said Pakpoom Subsoontorn, a PhD candidate in bioengineering and co-author of the study who developed theoretical models to predict the behavior of BIL gates. “Relatively weak radio waves traveling through the air can get amplified into sound.”


SMIM1: Vel-Negative Blood Problem Explained


In the early 1950s, a 66-year-old woman with colon cancer received a blood transfusion, but she suffered a severe rejection of the transfused blood. When writing the case study, the medical journal Revue D'Hématologie identified her only as "Patient Vel."


It was determined that Mrs. Vel had developed a potent antibody against some unknown molecule found on the red blood cells of most people in the world, but not on her own red blood cells. Nobody could identify the molecule. A blood mystery began, and from her case a new blood type, "Vel-negative," was described in 1952.


Soon it was discovered that Mrs. Vel was not alone. It is estimated that over 200,000 people in Europe and a similar number in North America are Vel-negative, about 1 in 2,500. For these people, successive blood transfusions could easily lead to kidney failure and death. So, for sixty years, doctors and researchers have hunted for the underlying cause of this blood type.


Now a team of scientists from the University of Vermont and France has found the missing molecule, a tiny protein called SMIM1, and the mystery is solved. "Our findings promise to provide immediate assistance to health-care professionals should they encounter this rare but vexing blood type," says the University of Vermont's Bryan Ballif. Last year, Ballif and Arnaud identified the proteins responsible for two other rare blood types, Junior and Langereis, moving the global count of understood blood types or systems from 30 to 32. Now, with Vel, the number rises to 33.

The little protein didn't reveal its identity easily. "I had to fish through thousands of proteins," Ballif says. Several experiments failed to find the culprit because of its unusual biochemistry and pipsqueak size. But he eventually nabbed it using a high-resolution mass spectrometer funded by the Vermont Genetics Network. And what he found was new to science. "It was only a predicted protein based on the human genome," says Ballif; until now it had never actually been observed. It has since been named Small Integral Membrane Protein 1, or SMIM1.


Next, Lionel Arnaud of the French National Institute of Blood Transfusion and the team in France tested seventy people known to be Vel-negative. In every case, they found a deletion—a tiny missing chunk of DNA—in the gene that instructs cells on how to manufacture SMIM1. This was the final proof the scientists needed to show that the Vel-negative blood type is caused by a lack of the SMIM1 protein on a patient's red blood cells.


Today, personalized medicine— where doctors treat us based on our unique biological makeup—is a hot trend. "The science of blood transfusion has been attempting personalized medicine since its inception," Ballif notes, "given that its goal is to personalize a transfusion by making the best match possible between donor and recipient.


"Identifying and making available rare blood types such as Vel-negative blood brings us closer to a goal of personalized medicine. Even if you are that rare one person out of 2,500 that is Vel-negative, we now know how to rapidly type your blood and find blood for you—should you need a transfusion."


New record holder for the lightest substance on Earth, less dense than even helium


Scientists have just unveiled the lightest human-made substance on Earth. How light are we talking? Let's put it this way: it's less dense than helium.


 The battle for rights to the title of world's lightest material (technically world's lowest density material) has played out like a brutally rapid series of monarchal overthrows. For years, NASA's aerogel (density 1 milligram per cubic centimeter) held the title of lightest material on Earth. In November 2011, it was dethroned by a gorgeous, ultralight metallic microlattice (density 0.9 mg/cm3). Months later, a substance called "Aerographite" with a density of just 0.2 mg/cm3 blew both of these ultralight materials out of the water. Now, a new material has assumed the throne. 


In a Nature paper titled "Solid carbon, springy and light," scientists from Zhejiang University in Hangzhou, China have introduced a graphene aerogel that comes in at just 0.16 milligrams per cubic centimeter. As a point of reference, that's less than one-seventh the density of air. And while it's still twice as dense as hydrogen, it's the very first ultralight substance to achieve a mass-to-volume ratio less than helium's, 0.1786 mg/cm3. What's more, it's got some killer real-world applications.
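
Those comparisons are easy to verify with a few lines of Python. The aerogel and helium densities are quoted above; the air and hydrogen values are standard room-condition figures I am assuming, not numbers from the article:

    # Density comparisons, all in mg/cm^3.
    aerogel  = 0.16    # graphene aerogel, from the paper
    helium   = 0.1786  # quoted in the article
    air      = 1.2     # standard value near 20 C (assumed)
    hydrogen = 0.0899  # standard value (assumed)

    print(air / aerogel)       # ~7.5: less than one-seventh the density of air
    print(aerogel / hydrogen)  # ~1.8: roughly twice as dense as hydrogen
    print(aerogel < helium)    # True: the first ultralight solid beating helium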


 The new material is amazingly absorptive, able to suck in up to 900 times its own weight in oil at a rate of 68.8 grams per second — only oil, not water, which means it has massive potential as a cleaning material when it comes to events such as oil spills.


CRISPR - The phage-bacteria arms-race: Shaping the evolution of microbes


Bacteria, the most abundant organisms on the planet, are outnumbered by a factor of 10 to 1 by phages that infect them. Faced with the rapid evolution and turnover of phage particles, bacteria have evolved various mechanisms to evade phage infection and killing, leading to an evolutionary arms-race. The extensive co-evolution of both phage and host has resulted in considerable diversity on the part of both bacterial and phage defensive and offensive strategies.


Phage-host relationships have been studied intensively since the early days of molecular biology. In the late 1970s, while viruses were found to be ubiquitous, it was assumed that they were present in relatively low numbers and that their effect on microbial communities was low. With the increasing availability of new molecular techniques that allow studies of microbial communities without the need to culture them, it is now realized that viruses greatly outnumber bacteria in the ocean and other environments, with viral numbers (~10^7–10^8 per ml) often tenfold larger than bacterial cell counts (~10^6 per ml). Thus, bacteria are confronted with a constant threat of phage predation.


The Red Queen hypothesis posits that competitive environmental interactions, such as those displayed by hosts and parasites, will lead to continuous variation and selection towards adaptation of the host, and counter-adaptations on the side of the parasite. Arguably, nowhere is this evolutionary trend so pronounced as in phage-microbe interactions. This is due to the extremely rapid evolution and turnover of phage particles, causing acute pressure on microbial communities to evade infection and killing by phages. In fact, the arms-race between phage and bacteria is predicted to have had an impact on global nutrient cycling, on global climate, on the evolution of the biosphere, and also on the evolution of virulence in human pathogens.


A recent study focuses on the evolution of three of the most well-studied microbial defense mechanisms against phage: the restriction-modification system, the recently discovered CRISPR (clustered regularly interspaced short palindromic repeats) loci together with their associated cas genes, and the abortive infection system. The study first describes these defense systems, as well as the counter-adaptations that evolved in the phage to allow escape from bacterial defense. It also discusses features that are common to many microbial defense systems, such as rapid evolution, a tendency for lateral gene transfer (LGT), and the selfish nature of these systems.



Which Came First, the Head or the Brain?


The sea anemone, a cnidarian, has no brain. It does have a nervous system, and its body has a clear axis, with a mouth on one side and a basal disk on the other. However, there is no organized collection of neurons comparable to the kind of brain found in bilaterians, animals that have both a bilateral symmetry and a top and bottom. Most animals except sponges, cnidarians, and a few other phyla are bilaterians. So an interesting evolutionary question is, which came first, the head or the brain? Do animals such as sea anemones, which lack a brain, have something akin to a head?


Chiara Sinigaglia and colleagues reported recently that at least some developmental pathways seen in cnidarians share a common lineage with head and brain development in bilaterians. It might seem intuitive to expect to find genes involved in brain development around the mouth of the anemone, and previous work has suggested that the oral region in cnidarians corresponds to the head region of bilaterians. However, there has been debate over whether the oral or aboral pole of cnidarians is analogous to the anterior pole of bilaterians. At the start of its life cycle a sea anemone exists as a free-swimming planula, which then attaches to a surface and becomes a sea anemone. The free-swimming phase has an apical tuft, a sensory structure at the front of the swimming animal's body. The apical tuft is the part that attaches and becomes the aboral pole, the part distal from the mouth, of the adult anemone.


To test whether genetic expression in the aboral pole of cnidarians does in fact resemble the head patterning seen in bilaterians, researchers analyzed gene expression in Nematostella vectensis, a sea anemone found in estuaries and bays. They focused on the six3 and FoxQ2 transcription factors, as these genes are known to regulate development of the anterior-posterior axis in bilaterian species. six3 knockout mice, for example, fail to develop a forebrain, and in humans, six3 is known to regulate the development of the forebrain and eyes.

The N. vectensis genome contains one gene from the six3/6 group and four foxQ2 genes. Sinigaglia and colleagues found that NvSix3/6 and one of the foxQ2 genes, NvFoxQ2a, were expressed predominantly on the aboral pole of the developing cnidarian but, after gastrulation, were excluded from a small spot in that region (NvSix3/6 was also expressed in a small number of other cells of the planula that resembled neurons). Because of this, the authors call NvSix3/6 and NvFoxQ2a "ring genes", and genes that are then expressed in that spot "spot genes". The spot then develops into the apical tuft.

Through knockdown and rescue experiments, the researchers demonstrate that NvSix3/6 is required for the development of the aboral region; without it, the expression of spot genes is reduced or eliminated and the apical tuft of the planula doesn't form. This suggests that development of the region distal from the cnidarian mouth appears to parallel the development of the bilaterian head.

This research demonstrates that at least a subset of the genes that drive head and brain formation in bilaterians are also differentially expressed in the aboral region of the sea anemone. The expression patterns are not identical to those in all bilaterians; however, the similarities suggest that the patterns of gene expression arose in an ancestor common to bilaterians and cnidarians, and that the process was then modified in bilaterians to produce a brain. So, to answer the evolutionary question posed above, it seems that the developmental module that produces a head came first.


Switching night vision on or off


Neurobiologists at the Friedrich Miescher Institute have been able to dissect a mechanism in the retina that facilitates our ability to see both in the dark and in the light. They identified a cellular switch that activates distinct neuronal circuits at a defined light level. The switch cells of the retina act quickly and reliably to turn on and off computations suited specifically for vision at low and high light levels, thus facilitating the transition from night to day vision. The scientists have published their results online in Neuron.


"It was fascinating to see how modern neurobiological methods allowed us to answer a question about vision that has been controversially discussed for the last 50 years", said Karl Farrow, postdoctoral fellow in Botond Roska's group at the Friedrich Miescher Institute for Biomedical Research. Since the late 1950 scientists debated how the retina handles the different visual processes at low and high light intensities, at starlight and at daylight. Farrow and his colleagues have now identified a cellular switch in the retina that controls perception during these two settings.


At first glance, everything seems clear. The interplay of two photoreceptor types in the retina, the rods and the cones, allows us to see across a wide range of light intensities. The rods are highly sensitive and spring into action in the dark; the cones are activated during the day and in humans come in three varieties, allowing us to see color. The rods help us detect objects during the night, while the cones allow us to discriminate the fine details of those objects during the day. The plethora of initial signals originating from the photoreceptors is computed in a system of only approximately 20 neuronal channels that transport information to the brain. The relay stations are the roughly 20 types of ganglion cells in the retina. How they manage the transition from light to dark and enable vision at the different light regimes has remained unclear.


In the retina, several cell layers are stacked on top of each other. The photoreceptors are the first to be activated by light; they relay the information to bipolar cells, which in turn activate ganglion cells. The different types of ganglion cells take on distinct tasks during vision. These ganglion cells are embedded in a mesh of amacrine cells that modulate their activity. "Here is where our new genetic tools proved very helpful," said Farrow, "because they allowed us to look at individual ganglion cell types and to specifically measure their activities at different light intensities." Farrow and colleagues could thus show that the activity of one particular type of ganglion cell, called PV1, is modulated like a switch by amacrine cells. The amacrine cells inhibit the ganglion cell strongly at high light intensities and weakly at low ambient light levels. This switch is abrupt and reversible, and it occurs at the light intensities where cones start to be activated. "We were surprised to see how fast this switch occurs and how reliably we were able to switch between the two states at defined light intensities", comments Farrow.


Wastewater Injection Spurred Biggest Earthquake Yet, Says Study


A new study in the journal Geology is the latest to tie a string of unusual earthquakes, in this case in central Oklahoma, to the injection of wastewater deep underground. Researchers now say that the magnitude 5.7 earthquake near Prague, Okla., on Nov. 6, 2011, may also be the largest ever linked to wastewater injection. Felt as far away as Milwaukee, more than 800 miles away, the quake, the biggest ever recorded in Oklahoma, destroyed 14 homes, buckled a federal highway and left two people injured. Small earthquakes continue to be recorded in the area.


The recent boom in U.S. energy production has produced massive amounts of wastewater. The water is used both in hydrofracking, which cracks open rocks to release natural gas, and in coaxing petroleum out of conventional oil wells.  In both cases, the brine and chemical-laced water has to be disposed of, often by injecting it back underground elsewhere, where it has the potential to trigger earthquakes. The water linked to the Prague quakes was a byproduct of oil extraction at one set of oil wells, and was pumped into another set of depleted oil wells targeted for waste storage.


Scientists have linked a rising number of quakes in normally calm parts of Arkansas, Texas, Ohio and Colorado to below-ground injection. In the last four years, the number of quakes in the middle of the United States jumped 11-fold from the three decades prior, the authors of the Geology study estimate. Last year, a group at the U.S. Geological Survey also attributed a remarkable rise in small- to mid-size quakes in the region to humans. The risk is serious enough that the National Academy of Sciences, in a report last year called for further research to “understand, limit and respond” to induced seismic events. Despite these studies, wastewater injection continues near the Oklahoma earthquakes.


The magnitude 5.7 quake near Prague was preceded by a 5.0 shock and followed by thousands of aftershocks. What made the swarm unusual is that wastewater had been pumped into abandoned oil wells nearby for 17 years without incident. In the study, researchers hypothesize that as wastewater replenished compartments once filled with oil, the pressure needed to keep the fluid going down had to be ratcheted up. As pressure built up, a long-known fault, called the Wilzetta fault by geologists, jumped. "When you overpressure the fault, you reduce the stress that's pinning the fault into place, and that's when earthquakes happen," said study coauthor Heather Savage, a geophysicist at Columbia University's Lamont-Doherty Earth Observatory.
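
Savage's description maps onto the standard Coulomb failure criterion of rock mechanics (a textbook relation, not taken from the Geology paper): a fault slips when the shear stress \tau acting along it overcomes frictional resistance,

    \tau \geq C + \mu (\sigma_n - P)

where C is the cohesion, \mu the friction coefficient, \sigma_n the normal stress clamping the fault shut and P the pore-fluid pressure. Injected wastewater raises P, shrinking the right-hand side until the tectonic shear stress already present is enough to unpin the fault.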


University of Illinois supercomputer debuts at 11.6 quadrillion calculations per second


After years of work, one of the most powerful computers in the world has launched at the University of Illinois at Urbana-Champaign.

Researchers and scientists throughout the country can now use the $350 million Blue Waters supercomputer. Some businesses also can use the 5,500-square-foot machine run by the university's National Center for Supercomputing Applications.


The supercomputer is especially adept at crunching numbers and analyzing data. At its peak, the supercomputer operates at 11.6 petaflops, or 11.6 quadrillion calculations per second, NCSA spokeswoman Trish Barker said. "For the supercomputer, that's a piece of cake," Barker said. "A person with a calculator, you'd need about 31 million years."

Funded by the National Science Foundation through a grant in 2007, the supercomputer project suffered a setback about five years later when IBM Corp. pulled out, citing financial and technical reasons. Eventually, Cray Inc. took over construction of the hardware.


Last year, engineers started installing the equipment in a campus building and then ran tests on the machine, which includes thousands of processors.

Researchers won't have to travel to Urbana to use the supercomputer. With a log-in code, they'll be able to study a variety of science and engineering topics — from earthquakes to astronomy and the molecular mechanisms of disease — 365 days a year from their workspace.


The foundation will cover the supercomputer's operational costs through a separate grant that lasts five years. After the grant expires, the foundation could extend funding, opt to break the machine into smaller pieces or take it apart, Barker said.



Harvard Physicists Have Measured The Magnetic Moment of a Single Antimatter Particle


A research team led by Harvard University scientists has measured the magnetic moment of the antiproton more accurately than ever before. "That is a spectacular jump in precision for any fundamental quantity in antiproton measurements. That's a leap that we don't often see in physics, at least not in a single step," said Prof Gerald Gabrielse of Harvard University's Department of Physics.

The physicists were able to capture individual protons and antiprotons in a 'trap' created by electric and magnetic fields. By precisely measuring the oscillations of each particle, they were able to measure the magnetic moment of the proton more than 1,000 times more accurately than ever before. Similar tests with antiprotons yielded a 680-fold improvement in the accuracy of the antiproton's measured magnetic moment.


“Such measurements,” Prof Gabrielse said, “could one day help scientists answer a question that seems more suited for the philosophy classroom than the physics lab – why are we here?”


“One of the great mysteries in physics is why our Universe is made of matter. According to our theories, the same amount of matter and antimatter was produced during the Big Bang. When matter and antimatter meet, they are annihilated. As the Universe cools down, the big mystery is: Why didn’t all the matter find the antimatter and annihilate all of both? There’s a lot of matter and no antimatter left, and we don’t know why.”


Making precise measurements of protons and antiprotons could begin to answer those questions by potentially shedding new light on whether the CPT (charge conjugation, parity transformation, time reversal) theorem is correct. An outgrowth of the standard model of particle physics, CPT states that protons and antiprotons should be virtually identical, with the same magnitude of charge and mass but opposite signs of charge.

While researchers were able to capture and measure protons with relative ease, antiprotons are produced only by high-energy collisions, which take place in the extensive tunnels of the CERN laboratory in Geneva, leaving the researchers facing a difficult choice.


“Last year, we published a report showing that we could measure a proton much more accurately than ever before,” Prof Gabrielse said. “Once we had done that, however, we had to make a decision – did we want to take the risk of moving our people and our entire apparatus – crates and crates of electronics and a very delicate trap apparatus – to CERN and try to do the same thing with antiprotons? Antiprotons would only be available till mid-December and then not again for a year and a half.”


Though their results still fit within the predictions made by the standard model, Prof Gabrielse said being able to more accurately measure the characteristics of both matter and antimatter may yet help shed new light on how the Universe works.


“What’s also very exciting about this breakthrough is that it now prepares us to continue down this road. I’m confident that, given this start, we’re going to be able to increase the accuracy of these measurements by another factor of 1,000, or even 10,000.”


A new patent filed by Apple shows what future smartphones may look like


According to a new patent filed by Apple, patent application 20130076612, a potential smartphone design could include a full wraparound display and have no buttons. Enclosed within "transparent housing," a flexible display panel would be configured to display content at any portion of the gadget's frame.


The patent application explains: "The majority of portable electronic device manufacturers utilize a common form factor consisting generally of a flat planar form factor with a single surface dedicated mainly for use as a display surface, while the other surfaces remain largely unused, save for the occasional button or switch."


Potentially, the smartphone could contain up to two AMOLED screens. AMOLED stands for active-matrix OLED: these screens pair a traditional TFT backplane with an OLED display. Thanks to the active matrix, AMOLED screens have faster pixel switching, which means their response times are shorter than those of passive-matrix OLED displays. The use of AMOLED and a conical shape for the flexible panel could offer users "an illusion of depth perception [...] mimicking a 3D experience."


In addition, the patent explains that facial recognition technology could be used to detect a registered end-user of the smartphone.


The aim of the new design is to make use of the "unused space" typical of any current mobile device, extending functionality to both the back and the sides of a gadget. Therefore, Apple may be considering the use of curved screens and a flexible display that is not only touch-responsive but also makes use of the surface on both sides of a future iPhone.


The patent proposes that touch replace buttons entirely -- and a coating could be added to the display to reduce smudge marks and keep the gadget looking pretty. As an example, the application says that "Instead of the hold button a multi touch gesture along one of the sides could instead act as a method of locking and unlocking the hold function."


The new design relies heavily on flexible display technology, but in order to limit the possibility of damage, the patent describes the phone as being encased in glass, bent into a conical shape but serving as protection in case you drop your gadget. With such advances being made in the smartphone industry, and the market already crowded with operating systems, similar phone designs and application ecosystems, perhaps Apple's move into flexible displays could help the firm keep its dominant position.


Nanoscribe: 3D Scaffolds for Biomimetics for Cell Biology


3D polymer scaffolds for cells: Biocompatible 3D microstructures act as artificial extracellular matrices for cells to mimic a natural but reproducible environment. Other applications are the fabrication of micro-needles, stents and so on for medical purposes.


Shown is a series of structures fabricated by means of the direct laser writing technique with Photonic Professional systems. Typical topics of interest under investigation are the study of cell migration and stem cell differentiation. The 3D tailored environment acts as an artificial extracellular matrix, i.e., a scaffold for the cells. Pictures: F. Klein, B. Richter, J. Fischer, T. Striebel and M. Bastmeyer; Karlsruher Institut für Technologie (KIT).


Crystal-free crystallography: tiny molecular sponges hold molecules in place for imaging


Crystallography, the technique that revealed DNA's double helix and the shapes of thousands of other molecules, is getting an upgrade.

A method described in Nature makes X-ray crystallography of small molecules simpler, faster and more sensitive, largely doing away with the laborious task of coaxing molecules to form crystals. Instead, porous scaffolding holds the molecules in the orderly arrangement needed to discern their structure with X-rays.


"You could call it crystal-free crystallography," says Jon Clardy, a biological chemist at Harvard Medical School.


X-ray crystallography is one of the most important techniques in science, because it is one of only a few ways to directly determine the shape of large molecules. It does this by blasting molecules with X-rays and measuring how the rays are diffracted. Transforming these reflections into molecular models isn't simple, but cajoling molecules to crystallize is the harder job: tedious and time-consuming (like getting a puppy to sit still for a photograph) and, Clardy says, the biggest bottleneck in X-ray crystallography.
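
The geometry behind those diffraction measurements is Bragg's law (standard crystallography, not something specific to this paper): X-rays scattered by regularly spaced planes of atoms interfere constructively when

    n\lambda = 2d\sin\theta

where \lambda is the X-ray wavelength, d is the spacing between atomic planes, \theta is the angle of incidence and n is an integer. It is precisely this dependence on a regular spacing d that makes an ordered arrangement, whether a grown crystal or a molecule-loaded scaffold, a prerequisite for the technique.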

“Some crystallize easily, some crystallize hardly and some are impossible to crystallize, if they are liquid compounds,” says Makoto Fujita, a chemist at the University of Tokyo who led the work along with colleague Yasuhide Inokuma.

The team grew materials called metal-organic frameworks, which had large, regularly spaced cavities. These materials acted as 'crystalline sponges', mopping up tiny quantities of small molecules after a short incubation period and holding them in an ordered arrangement within a cage-like scaffold. The sponges were then subjected to X-ray diffraction.


In a blind test, the researchers used their technique to correctly determine the shape of several small molecules, the structures of which were already known. More impressively, the method allowed the authors to determine the structure of miyakosyne A, a chemical made in very small quantities by a species of sea sponge. The molecule has evaded crystallization because its sinewy shape causes it to flop around.


“It’s a remarkable achievement,” says Clardy. He thinks the technique will help researchers to mine marine life, soil bacteria and other organisms for compounds that might have uses in, for example, cancer drugs, because it is often difficult to determine the shape of these molecules from the small quantities found in nature. “I think this could be — to use an overused word — transformational,” Clardy says.


In its current form, the new technique isn’t applicable to proteins, because the pockets in the crystalline sponge are not big enough. But Fujita says his team is trying to make sponges with larger pockets. “Our next grand challenge is to apply this method to protein crystallography,” he says.


UCLA physicists' technique for cooling molecules may be a stepping stone to quantum computing

The next generation of computers promises far greater power and faster processing speeds than today's silicon-based machines. These "quantum computers" — so called because they would harness the unique quantum mechanical properties of atomic particles — could draw their computing power from a collection of super-cooled molecules. But chilling molecules to a fraction of a degree above absolute zero, the temperature at which they can be manipulated to store and transmit data, has proven to be a difficult challenge for scientists.

Now, UCLA physicists have pioneered a new technique that combines two traditional atomic cooling technologies and brings normally springy molecules to a frozen standstill. Their research is published March 28 in the journal Nature.

"Scientists have been trying to cool molecules for a decade and have succeeded with only a few special molecules," said Eric Hudson, a UCLA assistant professor of physics and the paper's senior author. "Our technique is a completely different approach to the problem — it is a lot easier to implement than the other techniques and should work with hundreds of different molecules."

Previous attempts to create ultracold molecules were only effective with one or two specific kinds. Creating a method that can be used with many different molecules would be a major step forward because it is difficult to say which materials might be used in quantum computers or other future applications, Hudson said. By immersing charged barium chloride molecules in an ultracold cloud of calcium atoms, Hudson and his colleagues are able to prevent most of the molecules from vibrating and rotating. Halting the molecules is a necessary hurdle to overcome before they can be used to store information like a traditional computer does.

"The goal is to build a computer that doesn't work with zeros and ones, but with quantum mechanical objects," Hudson said. "A quantum computer could crack any code created by a classical computer and transmit information perfectly securely."

Parasites Use Sophisticated Biochemistry to Take Over Their Hosts

Parasites that take over hosts, effectively turning them into zombies, are far from rare. But only recently have scientists started to work out the sophisticated biochemistry that the parasites use.


In the rain forests of Costa Rica lives Anelosimus octavius, a species of spider that sometimes displays a strange and ghoulish habit. From time to time these spiders abandon their own webs and build radically different ones, a home not for the spider but for a parasitic wasp that has been living inside it. Then the spider dies — a zombie architect, its brain hijacked by its parasitic invader — and out of its body crawls the wasp's larva, which has been growing inside it all this time.

There are many such examples of zombies in nature; they are far from rare. Viruses, fungi, protozoans, wasps, tapeworms and a vast number of other parasites can control the brains of their hosts and get them to do their bidding. But only recently have scientists started to work out the sophisticated biochemistry that the parasites use.

“The knowledge that parasites can manipulate their hosts is old. The new part is how they do it,” said Shelley Adamo of Dalhousie University in Nova Scotia, a co-editor of the new issue. “The last 5 to 10 years have really been exciting.”


In the case of the Costa Rican spider, the new web is splendidly suited to its wasp invader. Unlike the spider’s normal web, mostly a tangle of threads, this one has a platform topped by a thick sheet that protects it from the rain. The wasp larva crawls to the edge of the platform and spins a cocoon that hangs down through an opening that the spider has kindly provided for the parasite.

To manipulate the spiders, the wasp must have genes that produce proteins that alter spider behavior, and in some species, scientists are now pinpointing this type of gene. Such is the case with the baculovirus, a virus sprinkled liberally on leaves in forests and gardens. (The cabbage in a serving of coleslaw carries 100 million baculoviruses.) Baculoviruses infect caterpillars such as those of the gypsy moth and drive the dying insects to the tops of trees, where their liquefying bodies rain virus particles onto the foliage below.


David P. Hughes of Penn State University and his colleagues have found that a single gene, known as egt, is responsible for driving the caterpillars up trees. The gene encodes an enzyme. When the enzyme is released inside the caterpillar, it destroys a hormone that signals a caterpillar to stop feeding and molt.


Dr. Hughes suspects that the virus goads the caterpillar into a feeding frenzy. Normally, gypsy moth caterpillars come out at night to feed and then return to crevices near the bottom of trees to hide from predators. The zombie caterpillars, on the other hand, cannot stop searching for food.

“The infected individuals are out there, just eating and eating,” Dr. Hughes said. “They’re stuck in a loop.”


Whether humans are susceptible to this sort of zombie invasion is less clear. It is challenging enough to figure out how parasites manipulate invertebrates, which have a few hundred thousand neurons in their nervous systems. Vertebrates, including humans, have millions or billions of neurons, and so scientists have made fewer advances in studying their zombification.


Bees Buzz Each Other through Changes in the Electric Field Surrounding Them


The electric fields that build up on honey bees as they fly, flutter their wings, or rub body parts together may allow the insects to talk to each other, a new study suggests. Tests show that the electric fields, which can be quite strong, deflect the bees' antennae, which, in turn, provide signals to the brain through specialized organs at their bases.


Scientists have long known that flying insects gain an electrical charge when they buzz around. That charge, typically positive, accumulates as the wings zip through the air—much as electrical charge accumulates on a person shuffling across a carpet. And because an insect's exoskeleton has a waxy surface that acts as an electrical insulator, that charge isn't easily dissipated, even when the insect lands on objects, says Randolf Menzel, a neurobiologist at the Free University of Berlin in Germany.


Although researchers have suspected for decades that such electrical fields aid pollination by helping the tiny grains stick to insects visiting a flower, only more recently have they investigated how insects sense and respond to such fields. Just last month, for example, a team reported that bumblebees may use electrical fields to distinguish flowers recently visited by other insects from those that may still hold lucrative stores of nectar and pollen. A flower that a bee had recently landed on might have an altered electrical field, the researchers speculated.


Now, in a series of lab tests, Menzel and colleagues have studied how honey bees respond to electrical fields. In experiments conducted in small chambers with conductive walls that isolated the bees from external electrical fields, the researchers showed that a small, electrically charged wand brought close to a honey bee can cause its antennae to bend. Other tests, using antennae removed from honey bees, indicated that electrically induced deflections triggered reactions in a group of sensory cells, called Johnston's organ, located near the base of the antennae. In yet other experiments, honey bees learned that a sugary reward was available when they detected a particular pattern of electrical field.


The Earth is closer to the inner edge of Sun's habitable zone than previously thought


The Earth could be closer than previously thought to the inner edge of the Sun's habitable zone, according to a new study by planetary scientists in the US and France. The research also suggests that if our planet moved out of the habitable zone, it could lead to a "moist greenhouse" climate that could kick-start further drastic changes to the atmosphere.


A star's habitable zone is the set of orbits within which a planet could have liquid water on its surface – and being within this zone is considered to be an important prerequisite for the development of life.


The current consensus is that the Sun's habitable zone begins at about 0.95 astronomical units (AU), a comfortable distance from the Earth's orbit at 1 AU. However, this latest work by James Kasting and colleagues at Penn State University, NASA and the University of Bordeaux suggests that the inner edge of the zone is much further out, at 0.99 AU.
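
To put the shift from 0.95 AU to 0.99 AU in perspective, the sunlight flux S that a planet receives follows the inverse-square law (textbook physics, not a result of the new study):

    \frac{S(d)}{S(1\,{\rm AU})} = \left(\frac{1\,{\rm AU}}{d}\right)^{2}, \quad \left(\frac{1}{0.99}\right)^{2} \approx 1.02, \quad \left(\frac{1}{0.95}\right)^{2} \approx 1.11

In other words, Earth receives only about 2 per cent less sunlight than a planet orbiting right at the newly proposed inner edge, whereas the old 0.95 AU boundary implied a roughly 11 per cent margin.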


"Our new climate model predicts that we are closer to the moist-greenhouse scenario than we had thought," says Kasting. In this scenario, the stratosphere becomes wet and fully saturated as the Earth's surface warms. This results in the dissociation of water molecules and the release of hydrogen into space. Depending on the levels of atmospheric saturation, the oceans would be completely lost over timescales as long as several billion years. This, say the scientists, would result in our climate changing to resemble a Venus-styled runaway greenhouse.


Penn State's Ramses Ramirez points out that the atmosphere currently has an average surface relative humidity of 77%, which gradually decreases to 10% or less above an altitude of 10 km – so the atmosphere is far from fully saturated. However, there are two ways that the Earth's atmosphere could move in that direction.


One is that the Earth's orbit changes and it slips across the 0.99 AU inner edge. The second is that the Earth remains at 1 AU but rising temperatures caused by greenhouse gases such as water vapour and carbon dioxide lead to a moist greenhouse. Indeed, the researchers are now calculating how much carbon dioxide would be needed for the second scenario to occur.


Scientists believe that a moist greenhouse would begin when the global average temperature reaches 340 K – whereas the current average is 288 K. Kasting says that under really pessimistic assumptions – a 10-fold to 20-fold increase in atmospheric carbon dioxide – it could be possible for the average temperature to reach 340 K. However, he points out that even if humans continue to burn fossil fuels at a very high rate, a catastrophic moist greenhouse would not kick in until at least 2300.
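
For a sense of scale, a rough constant-sensitivity estimate (illustrative arithmetic only, not the researchers' calculation) falls far short of the 52 K gap between today's average and the 340 K threshold, which underlines why the amplifying water-vapour feedback in the new model matters so much:

    # Rough arithmetic: with a constant, present-day-like climate
    # sensitivity, how much warming would a 10- to 20-fold CO2 increase
    # give, versus the gap between today's 288 K and the 340 K threshold?
    import math

    SENSITIVITY = 3.0  # assumed K of warming per CO2 doubling (a rough figure)

    def co2_warming(ratio):
        """Equilibrium warming (K) for a CO2 increase by `ratio`, assuming the
        sensitivity per doubling stays constant (it would not in a hot climate)."""
        return math.log2(ratio) * SENSITIVITY

    gap = 340.0 - 288.0
    for ratio in (10, 20):
        print(f"{ratio:2d}x CO2: ~{co2_warming(ratio):4.1f} K "
              f"(gap to the 340 K threshold: {gap:.0f} K)")

With a fixed present-day sensitivity, even 20-fold CO2 yields only about 13 K of warming; reaching 340 K depends on the water-vapour feedback strengthening sharply as the planet heats, exactly the regime the new model explores.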


Other researchers, however, point out that the Earth has been much hotter in the past and such a transition did not occur. Dorian Abbot, a climate scientist at the University of Chicago, points out that average temperatures were about 10–15 K warmer during the Cretaceous period. "As far as we know, Earth has never been in a moist-greenhouse state," says Abbot. "We certainly did not lose our entire oceans."

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Babies' brains may be tuned to language even before birth

Babies' brains may be tuned to language even before birth | Amazing Science |

Despite having brains that are still largely under construction, babies born up to three months before full term can already distinguish between spoken syllables in much the same way that adults do, an imaging study has shown.


Full-term babies — those born after 37 weeks' gestation — display remarkable linguistic sophistication soon after they are born: they recognize their mother’s voice, can tell apart two languages they’d heard before birth and remember short stories read to them while in the womb. 


But exactly how these speech-processing abilities develop has been a point of contention. “The question is: what is innate, and what is due to learning immediately after birth?” asks neuroscientist Fabrice Wallois of the University of Picardy Jules Verne in Amiens, France. 


To answer that, Wallois and his team needed to peek at neural processes already taking place before birth. It is tough to study fetuses, however, so they turned to their same-age peers: babies born 2–3 months premature. At that point, neurons are still migrating to their final destinations; the first connections between upper brain areas are snapping into place; and links have just been forged between the inner ear and cortex.


To test these neural pathways, the researchers played soft voices to premature babies while they were asleep in their incubators a few days after birth, then monitored their brain activity using a non-invasive optical imaging technique called functional near-infrared spectroscopy. They were looking for the tell-tale signals of surprise that brains display — for example, when they suddenly hear male and female voices intermingled after hearing a long run of only female voices.
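
The logic of such an "oddball" protocol is easy to sketch. The syllables, probabilities and sequence length below are illustrative assumptions, not the study's actual parameters:

    # Illustrative sketch of an auditory "oddball" sequence: long runs of a
    # standard stimulus with rare deviants, so that a surprise (mismatch)
    # response can be looked for at the deviant positions.
    import random

    def oddball_sequence(standard="ba", deviant="ga", n=100, p_deviant=0.1, seed=0):
        rng = random.Random(seed)
        return [deviant if rng.random() < p_deviant else standard for _ in range(n)]

    seq = oddball_sequence()
    print(" ".join(seq[:20]))
    print("deviants at positions:", [i for i, s in enumerate(seq) if s == "ga"])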


The young brains were able to distinguish between male and female voices, as well as between the trickier sounds ‘ga’ and ‘ba’, which demands even faster processing. What is more, the parts of the cortex used were the same as those used by adults for sophisticated understanding of speech and language. 


The results show that linguistic connections inside the cortex are already "present and functional" and did not need to be gradually acquired through repeated exposure to sound, Wallois says. This suggests that at least part of these speech-processing abilities is innate. The work could also lead to better techniques for caring for the most vulnerable brains, Wallois adds, including those of premature babies.

Miro Svetlik's curator insight, March 28, 2013 6:16 AM

This may prove really interesting. Babies can surely learn a lot of new languages quickly in their early life, but I think they will retain a preference (liking) for a language of some type; that might answer this (just a wild guess :)

Scooped by Dr. Stefan Gruenwald!

Brain-Computer Interface Goes Wireless

Brain-Computer Interface Goes Wireless | Amazing Science |

Engineers at Brown University have improved on their original and groundbreaking brain-computer interface by creating a wireless device that has successfully been implanted into the brains of monkeys and pigs. The device houses its own internal operating system, complete with a lithium-ion battery, ultralow-power circuits for processing and conversion, a wireless radio, infrared transmitters, and a copper coil for recharging. "A pill-sized chip of electrodes implanted on the cortex sends signals through uniquely designed electrical connections into the device's laser-welded, hermetically sealed 2.2-inch-long, 9-mm-thick titanium 'can.'"

The device, recently written about in the Journal of Neural Engineering, has been functioning well in animals for over a year. Now scientists expect to move closer to testing the device on humans, for which the device was originally intended. "Brain-computer interfaces could help people with severe paralysis control devices with their thoughts. ... Brain-computer interfaces (BCIs) are used to assess the feasibility of people with severe paralysis being able to move assistive devices like robotic arms or computer cursors by thinking about moving their arms and hands."
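
As a simplified illustration of that idea (a generic linear decoder with made-up weights, not Brown's actual algorithm), the core computation maps a vector of per-channel neural firing rates to a two-dimensional cursor velocity:

    # Minimal sketch of the core BCI decoding step: a linear map from one
    # time-bin of firing rates to a 2-D cursor velocity. The channel count
    # and weights here are hypothetical placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n_channels = 96  # e.g., a typical cortical electrode array
    W = rng.normal(scale=0.05, size=(2, n_channels))  # stand-in for weights fit offline

    def decode_velocity(firing_rates):
        """Return (vx, vy) cursor velocity from one time-bin of firing rates."""
        return W @ firing_rates

    rates = rng.poisson(lam=10, size=n_channels)  # simulated spike counts in a bin
    print("decoded velocity:", decode_velocity(rates))

In practice such weights are trained while the user imagines arm movements, so that "thinking about moving" produces firing-rate patterns the decoder can translate into device commands.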

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Mini-Supernovas Discovered

Mini-Supernovas Discovered | Amazing Science |
A new class of supernova represents a miniature star explosion.


In 2002, researchers began noticing that many supernovas appeared to be similar to regular Type Ia supernovas, but were distinctly fainter. Some shone with only 1 percent of the peak luminosity of Type Ia supernovas. Now, based on past and new observations, astronomer Ryan Foley and his colleagues have identified 25 examples of what they call Type Iax supernovas.


The data the scientists gathered suggest that, like a Type Ia supernova, a Type Iax supernova comes from a binary star system containing a white dwarf and a companion star. In Type Iax supernovas, the companion star has apparently already lost its outer hydrogen, leaving it dominated by helium. The white dwarfs then go on to accumulate helium from their companion stars.


It remains unclear what precisely happens during a Type Iax supernova. The helium in the companion star's outer shell might undergo nuclear fusion, blasting a shock wave at the white dwarf that makes it detonate. On the other hand, all the helium the white dwarf accumulated from its companion star could alter the density and temperature of the white dwarf's interior, forcing carbon, oxygen and maybe helium within the star to fuse, triggering an explosion.


In any case, it appears that in many Type Iax supernovas, the white dwarf actually survives the explosion, unlike in Type Ia supernovas, in which the white dwarfs are completely destroyed.


Type Iax supernovas are about a third as common as Type Ia supernovas. The reason so few Type Iax supernovas have been detected so far is that the faintest are only one-hundredth as bright as a Type Ia.
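
A quick check of those brightness figures, using standard astronomical magnitude arithmetic rather than anything specific to the paper: an object with 1 percent of a reference luminosity is 5 magnitudes fainter.

    # delta_m = -2.5 * log10(L / L_ref): 1% of the reference luminosity
    # works out to exactly 5 magnitudes fainter.
    import math

    def magnitude_difference(luminosity_ratio):
        return -2.5 * math.log10(luminosity_ratio)

    print(f"1% luminosity  -> {magnitude_difference(0.01):+.1f} mag")
    print(f"50% luminosity -> {magnitude_difference(0.50):+.2f} mag")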

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Crocodile extinction led to dinosaur domination 200 million years ago

Crocodile extinction led to dinosaur domination 200 million years ago | Amazing Science |

A mass extinction that occurred over 200 million years ago killed off a slew of huge predators, including hefty beasts that looked like crocodiles and enormous armadillos, according to new research.


Some of the prehistoric predators - animals known collectively as the early pseudosuchians - likely preyed on certain dinosaurs, which later evolved some of the impressive characteristics of the ancient pseudosuchians. Those included features like sturdy body armour and strong tails for whacking enemies.


"It is likely, therefore, that dinosaurs prospered to some extent as a result of the extinction of most pseudosuchians and many other groups at the end of the Triassic," says co-author Richard Butler, a palaeontologist at Ludwig-Maximilians-Universität.


He adds that some evidence suggests dinosaurs "had better locomotor and breathing systems than pseudosuchians," so they thrived in the Jurassic after the mass extinction.


As for what caused that die-off, researchers suspect an enormous burst of volcanic activity, as part of the Atlantic Ocean's formation, led to dramatic increases in atmospheric carbon dioxide and rapid global warming.

For the latest study, published in Biology Letters, Butler and colleague Olja Toljagić assessed changes in pseudosuchians that occurred during the critical Late Triassic and Early Jurassic periods.


The study shows that during the extinction event 201 million years ago, these animals declined rapidly, with only one lineage surviving into the Jurassic; that surviving lineage evolved into the ancestors of today's alligators and crocodiles. The other great archosaur lineage, referred to as the "bird-line archosaurs", comprised the dinosaurs, including the species that later evolved into modern birds.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Researchers generate nerve cells directly in the brain from transplanted fibroblasts and astrocytes

Researchers generate nerve cells directly in the brain from transplanted fibroblasts and astrocytes | Amazing Science |

Cellular reprogramming is a new and rapidly emerging field in which somatic cells can be turned into pluripotent stem cells or other somatic cell types simply by the expression of specific combinations of genes. By viral expression of neural fate determinants, it is possible to directly reprogram mouse and human fibroblasts into functional neurons, also known as induced neurons. The resulting cells are nonproliferating and present an alternative to induced pluripotent stem cells for obtaining patient- and disease-specific neurons to be used for disease modeling and for development of cell therapy.

In addition, because the cells do not pass through a stem cell intermediate, direct neural conversion has the potential to be performed in vivo. In a new study, a team of researchers shows that transplanted human fibroblasts and human astrocytes, engineered to express inducible forms of neural reprogramming genes, convert into neurons when the reprogramming genes are activated after transplantation. Using a transgenic mouse model to direct expression of the reprogramming genes specifically to parenchymal astrocytes residing in the striatum, the team also showed that endogenous mouse astrocytes can be directly converted into neuronal nuclei (NeuN)-expressing neurons in situ. These experiments provide proof of principle that direct neural conversion can take place in the adult rodent brain, using either transplanted human cells or endogenous mouse cells as the starting material.

No comment yet.