NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1,450 news sources.
In the future, some diseases might be diagnosed earlier and treated more effectively. Researchers at the Max Planck Institute for the Science of Light in Erlangen have developed an optical method that makes individual proteins visible, such as the proteins characteristic of some cancers. Other methods that achieve this only work if the target biomolecules have first been labelled with fluorescent tags; in general, however, such labelling is difficult or even impossible. By contrast, with their method, dubbed iSCAT, the researchers in Erlangen are able to directly detect the scattered light of individual proteins via their shadows. The method could not only make biomedical diagnoses more sensitive, but also provide new insights into fundamental biological processes.
A biosensor for the scattered light of individual unlabelled biomolecules such as proteins and tumour markers may facilitate medical diagnosis. The biodetector, which a team led by Vahid Sandoghdar has developed at the Max Planck Institute for the Science of Light, uses the interferometric method iSCAT.
Vahid Sandoghdar, Director at the Max Planck Institute for the Science of Light, and Marek Piliarik, a postdoc in Sandoghdar’s division, are now able to produce a much clearer image without the need for elaborate attachment of luminous markers to the target proteins. This is possible thanks to iSCAT, short for interferometric detection of scattering. The researchers shine laser light onto a microscope slide on which the relevant proteins have been captured with appropriate biochemical lures. The proteins scatter the laser light, thus casting a shadow, albeit a very weak one. “iSCAT not only promises more sensitive diagnosis of diseases such as cancers, but will also shed light on many fundamental biochemical processes in nature,” says Vahid Sandoghdar.
The Erlangen-based researchers have succeeded in achieving this high level of sensitivity for individual proteins by applying some tricks, and because they were not hampered by a misconception held by many other scientists: “Until now it was thought that if you want to detect scattered light from nanoparticles, you have to eliminate all background light,” explains Vahid Sandoghdar. “However, in recent years we’ve realized that it is more advantageous to illuminate the sample strongly and visualize the feeble signal of a tiny nanoparticle as a shadow against the intense background light.” The researchers therefore allow the background light to interfere with the weak scattered light so that the desired signal is amplified.
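The advantage of interfering the weak scattered field with the strong background can be put in numbers. The sketch below uses purely illustrative intensities (not values from the experiment) to show why the interferometric cross term boosts the detectable contrast by orders of magnitude:

```python
import math

# Purely illustrative intensities in arbitrary units
# (not values from the experiment).
i_bg = 1e6    # strong illumination / background light
i_s = 1e-2    # feeble light scattered by a single protein

# Direct detection: the protein's own signal relative to the background.
direct_contrast = i_s / i_bg

# Interferometric detection: background and scattered fields interfere,
# so the detector sees a cross term 2*sqrt(i_bg*i_s)*cos(phi). At
# destructive interference the particle appears as a faint "shadow".
iscat_contrast = 2 * math.sqrt(i_bg * i_s) / i_bg

print(direct_contrast)   # 1e-08
print(iscat_contrast)    # 0.0002, four orders of magnitude larger
```

The key point is that the cross term scales with the square root of the weak signal rather than with the signal itself, so a brighter background actually helps rather than hurts.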
At this stage, however, they are still unable to detect the shadow of a single protein in the interference image, because the pattern is akin to a black-and-white television broadcast distorted by heavy noise. The interferometric detection method is so sensitive that any small roughness or contamination of the sample carrier also casts a shadow, which can swamp the protein signal.
Nevertheless, this difficulty did not put off the two researchers, who have learned to eliminate the noise by applying a second trick. They take a snapshot with the iSCAT microscope not only after they have dripped a solution containing the desired protein onto the sample holder, but also before. “Since most of the optical noise generated by nanoscopic irregularities of the sample does not change, we can subtract one image from the other and thus eliminate the noise,” says Piliarik. The target proteins then stand out clearly from the background as dark spots, even though the shadow of a protein is only one ten-thousandth or even one hundred-thousandth as dark as the background.
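The subtraction trick can be mimicked with synthetic data. In this sketch (mock images, not the published data), a static rough background dominates both frames, yet a shadow only a few parts in ten thousand as dark as the background stands out cleanly in the difference image:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock iSCAT frames in arbitrary units (synthetic data, not the
# published images). A static rough background dominates each frame.
roughness = 0.05 * rng.standard_normal((64, 64))

def frame(shadow=0.0, at=(32, 32)):
    img = 1.0 + roughness + 1e-5 * rng.standard_normal((64, 64))
    img[at] -= shadow              # protein shadow darkens one pixel
    return img

before = frame()                   # snapshot before adding the protein
after = frame(shadow=2e-4)         # shadow is ~2e-4 of the background

diff = after - before              # the static roughness cancels exactly
y, x = np.unravel_index(np.argmin(diff), diff.shape)
print(int(y), int(x))              # darkest difference pixel: 32 32
```

Because the roughness term is identical in both frames, it cancels exactly on subtraction; only the frame-to-frame noise and the protein shadow survive, and the shadow dominates.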
Marek Piliarik and Vahid Sandoghdar can detect various proteins as shadows under the microscope not only in pure solutions; they can also home in on individual proteins in mixtures containing other proteins at concentrations up to 2,000 times greater.
Researchers at the UNC School of Medicine have discovered how two genes – Period and Cryptochrome – keep the circadian clocks in all human cells in time and in proper rhythm with the 24-hour day, as well as the seasons. The finding, published today in the journal Genes and Development, has implications for the development of drugs for various diseases such as cancers and diabetes, as well as conditions such as metabolic syndrome, insomnia, seasonal affective disorder, obesity, and even jetlag.
"Discovering how these circadian clock genes interact has been a long time coming," said Aziz Sancar, MD, PhD, Sarah Graham Kenan Professor of Biochemistry and Biophysics and senior author of the Genes and Development paper. "We've known for a while that four proteins were involved in generating daily rhythmicity but not exactly what they did. Now we know how the clock is reset in all cells. So we have a better idea of what to expect if we target these proteins with therapeutics."
In all human cells, there are four genes – Cryptochrome, Period, CLOCK, and BMAL1 – that work in unison to control the cyclical changes in human physiology, such as blood pressure, body temperature, and rest-sleep cycles. Previously, scientists found that CLOCK and BMAL1 work in tandem to kick-start the circadian clock. These genes bind to many other genes and turn them on to express proteins. This allows cells, such as brain cells, to behave the way we need them to at the start of a day.
Specifically, CLOCK and BMAL1 bind to a pair of genes called Period and Cryptochrome and turn them on to express proteins, which – after several modifications – wind up suppressing CLOCK and BMAL1 activity. Then, the Period and Cryptochrome proteins are degraded, allowing for the circadian clock to begin again.
"It's a feedback loop," said Sancar, who discovered Cryptochrome in 1998. "The inhibition takes 24 hours. This is why we can see gene activity go up and then down throughout the day."
But scientists didn't know exactly how that gene suppression and protein degradation happened at the back end. In fact, during experiments using one compound to stifle Cryptochrome and another drug to hinder Period, other researchers found inconsistent effects on the circadian clock, suggesting that Cryptochrome and Period did not have the same role. Sancar, a member of the UNC Lineberger Comprehensive Cancer Center who studies DNA repair in addition to the circadian clock, thought the two genes might have complementary roles. His team conducted experiments to find out.
Chris Selby, PhD, a research instructor in Sancar's lab, used two different genetic techniques to create the first-ever cell line lacking both Cryptochrome and Period. Each cell has two copies of each gene; Selby knocked out all four.
Then Rui Ye, PhD, a postdoctoral fellow in Sancar's lab and first author of the Genes and Development paper, put Period back into the new mutant cells. But Period by itself did not inhibit CLOCK-BMAL1; it actually had no active function inside the cells.
Next, Ye put Cryptochrome alone back into the cell line. He found that Cryptochrome not only suppressed CLOCK and BMAL1, but it squashed them indefinitely. "The Cryptochrome just sat there," Sancar said. "It wasn't degraded. The circadian clock couldn't restart."
For the final experiment, Sancar's team added Period to the cells with Cryptochrome. As Period's protein accumulated inside cells, the scientists could see that it began to remove the Cryptochrome, as well as CLOCK and BMAL1. This led to the eventual degradation of Cryptochrome, and then the CLOCK-BMAL1 genes were free to restart the circadian clock anew to complete the 24-hour cycle. "What we've done is show how the entire clock really works," Sancar said.
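The clock's logic – an activator turning on a repressor that shuts the activator off and is then degraded – can be illustrated with a textbook delayed-negative-feedback (Goodwin-type) oscillator. This is a generic sketch of the feedback principle only, not the authors' biochemical model, and it is not calibrated to a 24-hour period:

```python
import numpy as np

# Generic Goodwin-type oscillator: an activator drives production of a
# repressor, which shuts the activator off and is then degraded,
# restarting the cycle. A textbook sketch of the feedback logic only;
# not the CLOCK-BMAL1 / Period / Cryptochrome biochemistry itself.
def simulate(t_end=240.0, dt=0.01):
    m, p, r = 0.1, 0.1, 0.1     # "mRNA", "protein", "nuclear repressor"
    n_hill = 12                 # sharp repression sustains oscillation
    trace = []
    for _ in range(int(t_end / dt)):
        dm = 1.0 / (1.0 + r**n_hill) - 0.2 * m  # repressible transcription
        dp = 0.5 * m - 0.2 * p                  # translation and decay
        dr = 0.5 * p - 0.2 * r                  # maturation and decay
        m, p, r = m + dm * dt, p + dp * dt, r + dr * dt
        trace.append(m)
    return np.array(trace)

trace = simulate()
tail = trace[len(trace) // 2:]      # discard the initial transient
print(tail.max() - tail.min())      # well above zero: sustained rhythm
```

The essential ingredients mirror the article: repression must be delayed (here by passing through intermediate stages) and the repressor must be degraded, or – as with Cryptochrome alone – the system locks up and the cycle never restarts.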
As all who study astronomy know, one of the most incredible things about the universe is the never-ending potential for wonderful discoveries that sound more like fiction than fact. With this paper, the authors are pushing the boundaries of fiction into fact with the potential discovery of a new exotic object, known as a Thorne–Żytkow object (TZO). First predicted in the 1970s by Kip Thorne and Anna Żytkow, these bodies occur when a neutron star in a binary system with a red supergiant (RSG) merges into its companion. This merger creates an unusual system in which a neutron star is surrounded by a large, diffuse envelope of material. The system still produces most of its energy through thermonuclear burning at the core of the material envelope, and a smaller amount (about 5% of the total) from the gravitational accretion of material onto the neutron star. Eventually, after several hundred years, the core of the envelope and the neutron star would merge, resulting in either a larger neutron star or a black hole.
TZOs are fascinating objects in a special state of a binary system’s evolution, and there is a lot of new physics that can be learned from such a system, but there has been one problem with them until now: they are identical in appearance to typical red supergiants. There are a lot of normal red supergiants no matter where you look, and knowing if a RSG is a TZO is only possible when you look in detail at the stellar spectra for the over-abundance of lithium and other specific heavy metals. Finding a TZO is definitely a “find the needle in a haystack” kind of observing problem!
Luckily for science, the authors successfully found the needle. They did this by conducting a survey of stars in the Milky Way and Magellanic Clouds, drawing on previous stellar surveys in which effective temperature and photometry data indicated a RSG. The authors took stellar spectra of the 62 stars in their sample at Apache Point Observatory in New Mexico and the Magellan telescopes in Chile, then analyzed the spectra for the ratios between elements to see whether there were any anomalies. One star, known as HV 2112, in the Small Magellanic Cloud, was found to have unusually high concentrations of lithium, molybdenum, and rubidium. These elements, especially in the amounts found in HV 2112, are indications that the star is not a RSG at all, but rather a TZO. Some spectral features were also observed that are not predicted in TZO models, but the authors acknowledge that available TZO models are older and do not take into account some recent advances in stellar convection modeling.
This TZO discovery, if confirmed by follow-up theoretical models, is exciting because HV 2112 would be the prototype of a whole new kind of system. But beyond being a scientific curiosity, a TZO can provide a new environment for answering several questions, such as the ultimate fate of massive binary systems. Further, because this is a completely new kind of stellar interior, we are also looking at a different kind of stellar nucleosynthesis process for heavy metals than anything previously observed. It is like being handed a new laboratory in which to test astrophysical ideas, and to distinguish fact from fiction.
In the early days of quantum physics, in an attempt to explain the wavelike behavior of quantum particles, the French physicist Louis de Broglie proposed what he called a “pilot wave” theory. According to de Broglie, moving particles — such as electrons, or the photons in a beam of light — are borne along on waves of some type, like driftwood on a tide.
Physicists’ inability to detect de Broglie’s posited waves led them, for the most part, to abandon pilot-wave theory. Recently, however, a real pilot-wave system has been discovered, in which a drop of fluid bounces across a vibrating fluid bath, propelled by waves produced by its own collisions.
In 2006, Yves Couder and Emmanuel Fort, physicists at Université Paris Diderot, used this system to reproduce one of the most famous experiments in quantum physics: the so-called “double-slit” experiment, in which particles are fired at a screen through a barrier with two holes in it.
In the latest issue of the journal Physical Review E (PRE), a team of MIT researchers, in collaboration with Couder and his colleagues, report that they have produced the fluidic analogue of another classic quantum experiment, in which electrons are confined to a circular “corral” by a ring of ions. In the new experiments, bouncing drops of fluid mimicked the electrons’ statistical behavior with remarkable accuracy.
“This hydrodynamic system is subtle, and extraordinarily rich in terms of mathematical modeling,” says John Bush, a professor of applied mathematics at MIT and corresponding author on the new paper. “It’s the first pilot-wave system discovered and gives insight into how rational quantum dynamics might work, were such a thing to exist.”
John Bush, a professor of applied mathematics at MIT, believes that pilot-wave theory deserves a second look. That’s because Yves Couder, Emmanuel Fort, and colleagues at the University of Paris Diderot have recently discovered a macroscopic pilot-wave system whose statistical behavior, in certain circumstances, recalls that of quantum systems.
Couder and Fort’s system consists of a bath of fluid vibrating at a rate just below the threshold at which waves would start to form on its surface. A droplet of the same fluid is released above the bath; where it strikes the surface, it causes waves to radiate outward. The droplet then begins moving across the bath, propelled by the very waves it creates.
“This system is undoubtedly quantitatively different from quantum mechanics,” Bush says. “It’s also qualitatively different: There are some features of quantum mechanics that we can’t capture, some features of this system that we know aren’t present in quantum mechanics. But are they philosophically distinct?”
Stephen Quake, Stanford professor of bioengineering and applied physics, and Dr. Yossi Mandell, head of the Ophthalmic Science and Engineering Lab at Bar-Ilan University, teamed up to create a state-of-the-art intraocular implant that could change glaucoma treatment by making intraocular pressure readings frequent, easy, and convenient.
Made to fit inside a commonly used intraocular lens prosthetic, and implanted through simple surgery such as the cataract surgery many glaucoma patients already receive, the device measures the pressure of the fluid within the eye. A smartphone app or a wearable device such as Google Glass allows the wearer to take "snapshots" of the device that report back the pressure.
The lens device holds a tiny gas-filled tube, capped at one end and open at the other. As the fluid pressure pushes against the gas, a marked scale permits reading of the intraocular pressure. The implant does not interfere with vision, as shown in an Air Force-approved vision test, and in one reported study the implant prompted changes to glaucoma treatment in nearly 80 percent of the wearers.
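The readout principle is essentially Boyle's law: intraocular fluid entering the open end compresses the trapped gas, so the gas-column length shrinks in proportion to the total pressure squeezing it. A sketch with illustrative numbers (not the device's actual geometry or calibration):

```python
# Boyle's-law sketch of the gas-column readout. Illustrative numbers
# only; not the device's actual geometry or calibration.
P_ATM = 760.0   # mmHg, pressure sealed into the tube at fabrication
L0 = 1.0        # mm, gas-column length at ambient pressure

def column_length(iop_mmhg):
    """Gas-column length once intraocular fluid (at ambient pressure
    plus the IOP) pushes into the open end: P1*L1 = P2*L2."""
    return L0 * P_ATM / (P_ATM + iop_mmhg)

def iop_from_length(length_mm):
    """Invert the relation: read the pressure off the marked scale."""
    return P_ATM * (L0 / length_mm - 1.0)

l = column_length(20.0)                 # a typical elevated IOP, mmHg
print(round(l, 4))                      # 0.9744 mm: the column shortens
print(round(iop_from_length(l), 6))     # 20.0: the scale recovers the IOP
```

Reading the column length against the engraved scale is therefore a direct, battery-free pressure measurement, which is what makes a simple camera "snapshot" sufficient.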
Nearly 2.2 million Americans battle the eye disease glaucoma. Patients endure weekly visits to the ophthalmologist to have the disease monitored and treated. The disease is characterized by increasing pressure inside the eye, which results in a continuous loss of a specific type of retinal cell accompanied by degradation of the optic nerve fiber. The mechanism that links pressure to damage is not clear, but there is a correlation between the intensity of pressure readings and the level of damage.
We live on a vast, underexplored planet that is largely ocean. Despite modern technology, Global Positioning System (GPS) navigation, and advanced engineering of ocean vessels, the ocean is unforgiving, especially in rough weather. Coastal ocean navigation, with risks of running aground and inconsistent weather and sea patterns, can also be challenging and hazardous. In 2012, more than 100 international incidents of ships sinking, foundering, grounding, or being lost at sea were reported (http://en.wikipedia.org/wiki/List_of_shipwrecks_in_2012). Even a modern jetliner can disappear in the ocean with little or no trace, and the current costs and uncertainty associated with search and rescue make the prospects of finding an object in the middle of the ocean daunting.
Notwithstanding satellite constellations, autonomous vehicles, and more than 300 research vessels worldwide (www.wikipedia.org/wiki/List_of_research_vessels_by_country), we lack fundamental data relating to our oceans. These missing data hamper our ability to make basic predictions about ocean weather, narrow the trajectories of floating objects, or estimate the impact of ocean acidification and other physical, biological, and chemical characteristics of the world's oceans. To cope with this problem, scientists make probabilistic inferences by synthesizing models with incomplete data. Probabilistic modeling works well for certain questions of interest to the scientific community, but it is difficult to extract unambiguous policy recommendations from this approach. The models can answer important questions about trends and tendencies among large numbers of events but often cannot offer much insight into specific events. For example, probabilistic models can tell us with some precision the extent to which storm activity will be intensified by global climate change but cannot yet attribute the severity of a particular storm to climate change. Probabilistic modeling can provide important insights into the global traffic patterns of floating debris but is not of much help to search-and-rescue personnel struggling to learn the likely trajectory of a particular piece of debris left by a wreck.
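The gap between ensemble statistics and single events can be illustrated with a toy Monte Carlo drift model (hypothetical numbers, not a real ocean model): the mean displacement of many simulated drifters is pinned down tightly, while any individual trajectory remains uncertain by tens of kilometres:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 1-D surface-drift model with hypothetical numbers (an
# illustration only, not a real ocean model): a steady mean current
# plus random daily eddy "kicks".
mean_km_day, eddy_km_day = 10.0, 15.0
days, n_particles = 30, 10_000

kicks = rng.normal(mean_km_day, eddy_km_day, size=(n_particles, days))
final = kicks.sum(axis=1)     # each particle's 30-day displacement, km

print(final.mean())   # tightly clustered near 300 km (10 km/day x 30 d)
print(final.std())    # but any single drifter is uncertain by ~82 km
```

This is exactly the asymmetry described above: the ensemble average (a statement about traffic patterns) is precise, but the location of one particular piece of debris is not.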
Oceanographic data are incomplete because it is financially and logistically impractical to sample everywhere. Scientists typically sample over time, floating with the currents and observing their temporal evolution (the Lagrangian approach), or they sample across space to cover a gradient of conditions—such as temperature or nutrients (the Eulerian approach). These observational paradigms have various strengths and weaknesses, but their fundamental weakness is cost. A modern ocean research vessel typically costs more than US$30,000 per day to operate—excluding the full cost of scientists, engineers, and the cost of the research itself. Even an aggressive expansion of oceanographic research budgets would not do much to improve the precision of our probabilistic models, let alone to quickly and more accurately locate missing objects in the huge, moving, three-dimensional seascape. Emerging autonomous technologies such as underwater gliders and in situ biological samplers (e.g., environmental sample processors) help fill gaps but are cost prohibitive to scale up. Similarly, drifters (e.g., the highly successful Argo floats program) have proven very useful for better defining currents, but unless retrieved after their operational lifetime, they become floating trash, adding to a growing problem.
Long-term sampling efforts such as the continuous plankton recorder in the North Sea and North Atlantic provide valuable data on decadal trends and leveraged English Channel ferries to accomplish much of the sampling. Modernizing and expanding this approach is a goal of citizen science initiatives.
Scientists have successfully ‘reset’ human pluripotent stem cells to the earliest developmental state – equivalent to cells found in an embryo before it implants in the womb (7-9 days old). These ‘pristine’ stem cells may mark the true starting point for human development, but have until now been impossible to replicate in the lab. The discovery, published in Cell, will lead to a better understanding of human development and could in future allow the production of safe and more reproducible starting materials for a wide range of applications including cell therapies.
At first glance, water seems to be a simple molecule in which a single oxygen atom is bound to two hydrogen atoms. However, it is more complex when taking into account hydrogen’s nuclear spin – a property reminiscent of a rotation of its nucleus about its own axis. The spin of a single hydrogen can assume two different orientations, symbolized as up and down. Thus, the spins of water’s two hydrogen atoms can either add up, called ortho water, or cancel out, called para water. Ortho and para states are also said to be symmetric and antisymmetric, respectively.
Fundamental symmetry rules prohibit para water from turning into ortho water and vice versa – at least theoretically. “If you had a magic bottle with isolated para and ortho molecules, they would remain in their spin states at all times,” says DESY scientist Jochen Küpper who led the recent study. “In principle, they are different molecular species, different types of water.” However, in the real world, water molecules are not isolated and frequently collide with other molecules or surfaces in their vicinity, causing nuclear spin orientations to change. “Through these interactions, para and ortho water can actually transform easily into one another,” explains Küpper who is also a professor at the University of Hamburg and a member of the Hamburg Centre for Ultrafast Imaging (CUI). “Therefore, it is very challenging to separate them and produce water that is not a mixture of both.”
Yet, the CFEL researchers have now demonstrated a way of isolating para and ortho water in the lab. To start, the scientists placed a drop of water in a compartment, which they pressurized with neon or argon gas. This mixture was released into vacuum through a pulsed valve. “Due to the large pressure difference, the gas expands quickly into the vacuum when the valve is opened, dragging along water molecules and, at the same time, cooling them down,” says Daniel Horke, the first author of the study.
This expansion produces a narrow beam of ultracold water molecules, which propagate at supersonic speed and are so dilute that individual molecules no longer collide with each other, thereby suppressing the conversion between para and ortho spin states.
The molecular beam then travels through a strong electric field, which deflects the water molecules from their original flight path and acts like a prism for nuclear spin states. “Para and ortho water interact with the electric field differently,” Horke explains. “Thus, they also get deflected differently, allowing us to separate them in space and obtain pure para and ortho samples.” Spectroscopy showed that the purity of the para and ortho water was 74 per cent and over 97 per cent, respectively. According to Horke, the purity can be greatly enhanced in the future, especially for para water. Storing the separated water species was not an aim of the study.
The new method could benefit studies of a wide range of phenomena. In astrophysics, for example, it is commonly assumed that the relative amounts of para and ortho species can be linked to the temperature of interstellar ice. This theory is based on the temperature dependence of hydrogen’s ortho-to-para ratio, which is three to one at room temperature and drops with decreasing temperatures. “In fact, certain regions of the universe exhibit ratios that are quite different from what you would expect,” Horke says. “Yet, the specific reasons are unknown and lab-based experiments could provide new insights.”
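The quoted room-temperature ratio of three to one for hydrogen, and its drop at low temperature, follows from nuclear-spin degeneracy (three ortho spin states to one para) combined with Boltzmann-weighted rotational levels. A sketch for molecular hydrogen, using the standard approximate rotational constant of about 87.6 K in units of the Boltzmann constant:

```python
import math

# Ortho/para ratio of molecular hydrogen vs. temperature, from
# nuclear-spin degeneracy (3 ortho : 1 para) and Boltzmann-weighted
# rotational levels E_J = B*J*(J+1). Odd J is ortho, even J is para.
B = 87.6  # rotational constant of H2 in kelvin (approximate value)

def ortho_para_ratio(T, jmax=20):
    def z(parity):  # partial rotational partition sum over one parity
        return sum((2 * J + 1) * math.exp(-B * J * (J + 1) / T)
                   for J in range(parity, jmax, 2))
    return 3 * z(1) / z(0)   # factor 3 = ortho spin degeneracy

print(round(ortho_para_ratio(300.0), 2))   # 2.99, near the high-T limit of 3
print(round(ortho_para_ratio(30.0), 3))    # 0.026, far below 3 when cold
```

At high temperature many rotational levels are populated and the spin degeneracy dominates, giving the 3:1 limit; when cold, nearly everything falls into the J = 0 para ground state, so the ratio collapses – the temperature sensitivity that makes the ratio a potential thermometer for interstellar ice.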
Back on Earth, the study may also help determine the structures of proteins – biomolecules that are essential to all life. A method known as nuclear magnetic resonance (NMR) spectroscopy reconstructs protein structures from the relative orientation of the nuclear spins of hydrogen and other atoms. “Para hydrogen has successfully been used to enhance the sensitivity of the NMR method,” says Horke. “Thus, enriching para water in a protein’s water shell could become an interesting approach to improve NMR spectroscopy of these biological systems due to an almost natural environment.”
Astronomers using NASA's Hubble Space Telescope have discovered a companion star to a rare type of supernova. The observation confirms the theory that the explosion originated in a double-star system in which the companion fueled the mass loss from the aging primary star.
This detection is the first time astronomers have been able to put constraints on the properties of the companion star in an unusual class of supernova called Type IIb. They were able to estimate the surviving star's luminosity and mass, which provide insight into the conditions that preceded the explosion.
"A binary system is likely required to lose the majority of the primary star's hydrogen envelope prior to the explosion. The problem is that, to date, direct observations of the predicted binary companion star have been difficult to obtain since it is so faint relative to the supernova itself," said lead researcher Ori Fox of the University of California (UC) at Berkeley.
Astronomers estimate that a supernova goes off once every second somewhere in the universe. Yet they don't fully understand how stars explode. Finding a "smoking gun" companion star provides important new clues to the variety of supernovae in the universe. "This is like a crime scene, and we finally identified the robber," quipped team member Alex Filippenko, professor of astronomy at UC Berkeley. "The companion star stole a bunch of hydrogen before the primary star exploded."
The explosion happened in the galaxy M81, which is about 11 million light-years away from Earth in the direction of the constellation Ursa Major (the Great Bear). Light from the supernova was first detected in 1993, and the object was designated SN 1993J. It was the nearest known example of this type of supernova, called a Type IIb, due to the specific characteristics of the explosion. For the past two decades astronomers have been searching for the suspected companion, thought to be lost in the glare of the residual glow from the explosion.
Observations made in 2004 at the W.M. Keck Observatory on Mauna Kea, Hawaii, showed circumstantial evidence for spectral absorption features that would come from a suspected companion. But the field of view is so crowded that astronomers could not be certain if the spectral absorption lines were from a companion object or from other stars along the line of sight to SN 1993J. "Until now, nobody was ever able to directly detect the glow of the star, called continuum emission," Fox said.
The companion star is so hot that the so-called continuum glow is largely in ultraviolet (UV) light, which can only be detected above Earth's absorbing atmosphere. "We were able to get that UV spectrum with Hubble. This conclusively shows that we have an excess of continuum emission in the UV, even after the light from other stars has been subtracted," said team member Azalee Bostroem of the Space Telescope Science Institute (STScI), in Baltimore, Maryland.
New prospects for secure data traffic: Flashes of light in particularly sensitive quantum states can be transmitted through the atmosphere. Erlangen-based physicists have sent bright pulses in sensitive quantum states through the window of a technical services room on the roof of the Max Planck Institute for the Science of Light to a building of the University Erlangen-Nürnberg.
It could be difficult for the NSA to hack encrypted messages in the future – at least if a technology being investigated by scientists at the Max Planck Institute for the Science of Light in Erlangen and the University Erlangen-Nürnberg proves successful: quantum cryptography. The physicists are now laying the foundation to make this technique, which can already be used to generate secret keys, available for a wider range of applications. They are the first scientists to send a pulse of bright light in a particularly sensitive quantum state through 1.6 kilometers of air from the Max Planck Institute to a University building. This quantum state, which they call squeezed, was maintained – something many physicists thought to be impossible. Using flashes of bright light for quantum communication through the atmosphere would have several advantages over the technique usually used today: it allows the photon packets to be transmitted in sunlight, something that is challenging with individual photons. Moreover, the required receivers are already in use for optical telecommunication via fibre optics and via satellite.
Eavesdropping on a message protected by quantum cryptography cannot be done without being noticed. This is because quantum physics prevents a spy from reading a key which is encoded by specific quantum states without influencing these states. This can be exploited in a clever procedure for exchanging the key with which the data is encrypted, so that an unwelcome listener is not only detected, but is also prevented from accessing the information.
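Why eavesdropping is detectable can be illustrated with the textbook discrete-variable BB84 protocol – a generic sketch of the principle, not the continuous-variable squeezed-light scheme studied in Erlangen. An intercept-resend attacker unavoidably corrupts about a quarter of the sifted key bits:

```python
import random

random.seed(1)

# Toy BB84 intercept-resend sketch. A generic, idealized illustration
# of why measuring quantum states disturbs them; the Erlangen work
# uses continuous-variable squeezed light, not BB84.
N = 100_000

def measure(bit, prep_basis, meas_basis):
    # Matching bases reproduce the prepared bit; mismatched bases
    # give a 50/50 random outcome (the disturbance, idealized).
    return bit if prep_basis == meas_basis else random.randint(0, 1)

sifted = errors = 0
for _ in range(N):
    bit, a_basis = random.randint(0, 1), random.randint(0, 1)
    e_basis = random.randint(0, 1)           # Eve guesses a basis,
    e_bit = measure(bit, a_basis, e_basis)   # measures, and resends
    b_basis = random.randint(0, 1)
    b_bit = measure(e_bit, e_basis, b_basis)
    if a_basis == b_basis:                   # key sifting
        sifted += 1
        errors += (b_bit != bit)

print(round(errors / sifted, 2))  # close to 0.25: Eve reveals herself
```

Without an eavesdropper the sifted error rate would be essentially zero, so a sender and receiver comparing a random sample of their key can detect the intrusion before any data is encrypted.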
The quantum-protected communication is a fragile thing, however, and easily disturbed. All the more remarkable is the work of the Erlangen-based scientists working with Gerd Leuchs, Director at the Max Planck Institute for the Science of Light and professor at the University Erlangen-Nürnberg: "We have now succeeded in transmitting a flash of light, namely a pulse which contains many photons, through the atmosphere in a particularly sensitive quantum state," says Christian Peuntinger, who played an important role in the project. He and his colleagues sent a photon packet in a straight line from the roof of the Max Planck Institute in Erlangen to a building of the University Erlangen-Nürnberg some 1.6 kilometers away. "This even works in broad daylight," says Christian Peuntinger.
According to the traditional theory of nerves, two nerve impulses sent from opposite ends of a nerve annihilate when they collide. New research from the Niels Bohr Institute now shows that two colliding nerve impulses simply pass through each other and continue unaffected. This supports the theory that nerves function as sound pulses. The results are published in the scientific journal Physical Review X.
In 1952, Hodgkin and Huxley introduced a model in which nerve signals were described as an electric current along the nerve produced by the flow of ions. The mechanism is produced by layers of electrically charged particles (ions of sodium and potassium) on either side of the nerve membrane that change places when stimulated. This change in charge creates an electric current.
This model has enjoyed general acceptance. For more than 60 years, all medical and biology textbooks have said that nerve function is due to an electric current along the nerve pathway. However, this model cannot explain a number of known phenomena of nerve function. Researchers at the Niels Bohr Institute at the University of Copenhagen have now conducted experiments that raise doubts about this well-established model of electrical impulses along the nerve pathway.
"According to the theory of this ion mechanism, the electrical signal leaves an inactive region in its wake, and the nerve can only support new signals after a short recovery period of inactivity. Therefore, two electrical impulses sent from opposite ends of the nerve should be stopped after colliding and running into these inactive regions," explains Thomas Heimburg, Professor and head of the Membrane Biophysics Group at the Niels Bohr Institute at the University of Copenhagen.
Thomas Heimburg and his research group conducted experiments in the laboratory using nerves from earthworms and lobsters. The nerves were removed and used in a setup that allowed the researchers to stimulate the nerve fibres with electrodes at both ends. They then measured the signals en route.
"Our study showed that the signals passed through each other completely unhindered and unaltered. That's how sound waves work. A sound wave doesn't stop when it meets another sound wave. Both waves continue on unimpeded. The nerve impulse can therefore be explained by the fact that the pulse is a mechanical wave in the form of a sound pulse, a soliton, that moves along the nerve membrane," explains Thomas Heimburg. When the sound pulse moves through the nerve pathway, the membrane changes locally from a liquid to a more solid form. The membrane is compressed slightly, and this change leads to an electrical pulse as a consequence of the piezoelectric effect. "The electrical signal is thus not based on an electric current but is caused by a mechanical force," points out Thomas Heimburg.
The largest spacecraft welding tool in the world, the Vertical Assembly Center, is officially open for business at NASA's Michoud Assembly Facility in New Orleans. The 170-foot-tall, 78-foot-wide giant completes a world-class welding toolkit that will be used to build the core stage of America's next great rocket, the Space Launch System (SLS).
SLS will be the most powerful rocket ever built for deep space missions, including to an asteroid and eventually Mars. The core stage, towering more than 200 feet tall (61 meters) with a diameter of 27.6 feet (8.4 meters), will store cryogenic liquid hydrogen and liquid oxygen that will feed the rocket's four RS-25 engines.
"This rocket is a game changer in terms of deep space exploration and will launch NASA astronauts to investigate asteroids and explore the surface of Mars while opening new possibilities for science missions, as well," said NASA Administrator Charles Bolden during a ribbon-cutting ceremony at Michoud Friday.
The Vertical Assembly Center is part of a family of state-of-the-art tools designed to weld the core stage of SLS. It will join domes, rings and barrels to complete the tanks or dry structure assemblies. It also will be used to perform evaluations on the completed welds. Boeing is the prime contractor for the SLS core stage, including avionics.
"The SLS Program continues to make significant progress," said Todd May, SLS program manager. "The core stage and boosters have both completed critical design review, and NASA recently approved the SLS Program's progression from formulation to development. This is a major milestone for the program and proof the first new design for SLS is mature enough for production."
A new study published in The Journal of Geology provides support for the theory that a cosmic impact event over North America some 13,000 years ago caused a major period of climate change known as the Younger Dryas stadial, or “Big Freeze.”
Around 12,800 years ago, a sudden, catastrophic event plunged much of the Earth into a period of cold climatic conditions and drought. This drastic climate change—the Younger Dryas—coincided with the extinction of Pleistocene megafauna, such as the saber-tooth cats and the mastodon, and resulted in major declines in prehistoric human populations, including the termination of the Clovis culture.
With limited evidence, several rival theories have been proposed about the event that sparked this period, such as a collapse of the North American ice sheets, a major volcanic eruption, or a solar flare.
However, in a study published in The Journal of Geology, an international group of scientists analyzing existing and new evidence have determined a cosmic impact event, such as a comet or meteorite, to be the only plausible hypothesis to explain all the unusual occurrences at the onset of the Younger Dryas period.
Researchers from 21 universities in 6 countries believe the key to the mystery of the Big Freeze lies in nanodiamonds scattered across Europe, North America, and portions of South America, in a 50-million-square-kilometer area known as the Younger Dryas Boundary (YDB) field.
Microscopic nanodiamonds, melt-glass, carbon spherules, and other high-temperature materials are found in abundance throughout the YDB field, in a thin layer located only meters from the Earth’s surface. Because these materials formed at temperatures in excess of 2200 degrees Celsius, the fact they are present together so near to the surface suggests they were likely created by a major extraterrestrial impact event.
In addition to providing support for the cosmic impact event hypothesis, the study also offers evidence to reject alternate hypotheses for the formation of the YDB nanodiamonds, such as by wildfires, volcanism, or meteoric flux.
The team’s findings serve to settle the debate about the presence of nanodiamonds in the YDB field and challenge existing paradigms across multiple disciplines, including impact dynamics, archaeology, paleontology, limnology, and palynology.
New research shows that schizophrenia isn’t a single disease but a group of eight genetically distinct disorders, each with its own set of symptoms. The finding could be a first step toward improved diagnosis and treatment for the debilitating psychiatric illness.
The research at Washington University School of Medicine in St. Louis is reported online Sept. 15 in The American Journal of Psychiatry. About 80 percent of the risk for schizophrenia is known to be inherited, but scientists have struggled to identify specific genes for the condition.
Now, in a novel approach analyzing genetic influences on more than 4,000 people with schizophrenia, the research team has identified distinct gene clusters that contribute to eight different classes of schizophrenia.
“Genes don’t operate by themselves,” said C. Robert Cloninger, MD, PhD, one of the study’s senior investigators. “They function in concert much like an orchestra, and to understand how they’re working, you have to know not just who the members of the orchestra are but how they interact.”
Researchers have developed a high-tech method to rid the body of infections — even those caused by unknown pathogens. A device inspired by the spleen can quickly clean blood of everything from Escherichia coli to Ebola, researchers report on 14 September in Nature Medicine.
The device uses a modified version of mannose-binding lectin (MBL), a protein found in humans that binds to sugar molecules on the surfaces of more than 90 different bacteria, viruses and fungi, as well as to the toxins released by dead bacteria that trigger the immune overreaction in sepsis.
The researchers coated magnetic nanobeads with MBL. As blood enters the biospleen device, it passes the MBL-equipped nanobeads, which bind to most pathogens. A magnet on the biospleen device then pulls the beads and their quarry out of the blood, which can then be routed back into the patient.
To test the device, Donald Ingber and his team infected rats with either E. coli or Staphylococcus aureus and filtered blood from some of the animals through the biospleen. Five hours after infection, 89% of the rats whose blood had been filtered were still alive, compared with only 14% of those that were infected but not treated. The researchers found that the device had removed more than 90% of the bacteria from the rats' blood. The rats whose blood had been filtered also had less inflammation in their lungs and other organs, suggesting they would be less prone to sepsis.
The researchers then tested whether the biospleen could handle the volume of blood in an average adult human — about 5 liters. They ran human blood containing a mixture of bacteria and fungi through the biospleen at a rate of 1 liter per hour, and found that the device removed most of the pathogens within five hours.
Discovery might ultimately lead to new, more energy-efficient transistors and microchips.
When moving through a conductive material in an electric field, electrons tend to follow the path of least resistance — which runs in the direction of that field.
But now physicists at MIT and the University of Manchester have found an unexpectedly different behavior under very specialized conditions — one that might lead to new types of transistors and electronic circuits that could prove highly energy-efficient.
They’ve found that when a sheet of graphene — a two-dimensional array of pure carbon — is placed atop another two-dimensional material, electrons instead move sideways, perpendicular to the electric field. This happens even without the influence of a magnetic field — the only other known way of inducing such a sideways flow.
What’s more, two separate streams of electrons would flow in opposite directions, both crosswise to the field, canceling out each other’s electrical charge to produce a “neutral, chargeless current,” explains Leonid Levitov, an MIT professor of physics and a senior author of a paper describing these findings this week in the journal Science.
The exact angle of this current relative to the electric field can be precisely controlled, Levitov says. He compares it to a sailboat sailing perpendicular to the wind, its angle of motion controlled by adjusting the position of the sail.
Levitov and co-author Andre Geim at Manchester say this flow could be altered by applying a minute voltage on the gate, allowing the material to function as a transistor. Currents in these materials, being neutral, might not waste much of their energy as heat, as occurs in conventional semiconductors — potentially making the new materials a more efficient basis for computer chips.
“It is widely believed that new, unconventional approaches to information processing are key for the future of hardware,” Levitov says. “This belief has been the driving force behind a number of important recent developments, in particular spintronics” — in which the spin of electrons, not their electric charge, carries information.
A Japanese woman in her 70s is the world's first recipient of cells derived from induced pluripotent stem cells, a technology that has created great expectations since it could offer the same advantages as embryo-derived cells but without some of the controversial aspects and safety concerns.
In a two-hour procedure, a team of three eye specialists led by Yasuo Kurimoto of the Kobe City Medical Center General Hospital transplanted a 1.3 by 3.0 millimeter sheet of retinal pigment epithelium cells into an eye of the Hyogo prefecture resident, who suffers from age-related macular degeneration.
The procedure took place at the Institute of Biomedical Research and Innovation Hospital, next to the RIKEN Center for Developmental Biology (CDB) where ophthalmologist Masayo Takahashi had developed and tested the epithelium sheets. She derived them from the patient's skin cells, after producing induced pluripotent stem (iPS) cells and then getting them to differentiate into retinal cells. Afterwards, the patient experienced no effusive bleeding or other serious problems, RIKEN has reported.
The patient “took on all the risks that go with the treatment as well as the surgery”, Kurimoto said in a statement released by RIKEN. “I have deep respect for the bravery she showed in resolving to go through with it.”
He struck a somber note in thanking Yoshiki Sasai, a CDB researcher who recently committed suicide. “This project could not have existed without the late Yoshiki Sasai’s research, which led the way to differentiating retinal tissue from stem cells.”
Kurimoto also thanked Shinya Yamanaka, a stem-cell scientist at Kyoto University “without whose discovery of iPS cells, this clinical research would not be possible.” Yamanaka shared the 2012 Nobel Prize in Physiology or Medicine for that work.
Kurimoto performed the procedure a mere four days after a health-ministry committee gave Takahashi clearance for the human trials (see 'Next-generation stem cells cleared for human trial').
Gibbons have such strange, scrambled DNA, it looks like someone has taken a hammer to it. Their genome has been massively reshuffled, and some biologists say that could be how new gibbon species evolved.
Gibbons are apes, and were the first to break away from the line that led to humans. There are around 16 living gibbon species, in four genera. They all have small bodies, long arms and no tails. But it's what gibbons don't share that is most unusual. Each species carries a distinct number of chromosomes in its genome: some species have just 38 pairs, some as many as 52 pairs.
"This 'genome plasticity' has always been a mystery," says Wesley Warren of Washington University in St Louis, Missouri. It is almost as if the genome exploded and was then pieced back together in the wrong order. To understand why, Warren and his colleagues have now produced the first draft of a gibbon genome. It comes from a female northern white-cheeked gibbon (Nomascus leucogenys) called Asia. Inside the genome, Warren and his colleagues may have identified one of the players responsible for the reshuffling. It is called LAVA, and it is a piece of DNA called a retrotransposon that inserts itself into the genetic code. Seemingly unique to gibbons, LAVA tends to slip into genes that help control the way chromosomes pair up during cell division. By altering how those genes work, LAVA has made the gibbon genome unstable.
"We believe this is the driving force that causes, for want of a better word, the 'scrambling' of the genome," says Warren. However, solving this mystery has created another. Such dramatic genome changes are normally associated with diseases such as cancer, and should be harmful. "It's a complete mystery still how these genomes are able to pass from one generation to the next and not cause any major issues in terms of survival of the species," says Warren. It may be that genomes are much more resilient than anyone expected, says James Shapiro at the University of Chicago. "The genome can endure lots of changes and still function."
Researchers at the University of California, San Diego have built the first 500 Gigahertz (GHz) photon switch. “Our switch is more than an order of magnitude faster than any previously published result to date,” said UC San Diego electrical and computer engineering professor Stojan Radic. “That exceeds the speed of the fastest lightwave information channels in use today.” The work took nearly four years to complete and it opens a fundamentally new direction in photonics – with far-reaching potential consequences for the control of photons in optical fiber channels.
According to an article in the journal Science, switching photons at such high speeds was made possible by advances in the control of a strong optical beam using only a few photons, and by the scientists’ ability to engineer the optical fiber itself with accuracy down to the molecular level.
In the research paper, Radic and his colleagues in the UC San Diego Jacobs School of Engineering argue that ultrafast optical control is critical to applications that must manipulate light beyond the conventional electronic limits. In addition to very fast beam control and fast switching, the latest work opens the way to a new class of sensitive receivers (also capable of operating at very high rates), faster photon sensors, and optical processing devices.
To build the new switch, the UC San Diego team developed a new measurement technique capable of resolving sub-nanometer fluctuations in the fiber core. This was critical because local fiber dispersion varies substantially, even with small core fluctuations, and until recently, control of such small variations was not considered feasible, particularly over long device lengths.
In the experiment, a three-photon input was used to manipulate a Watt-scale beam at a speed exceeding 500 Gigahertz.
In their research, the engineers in the Photonic Systems Laboratory of UC San Diego’s Qualcomm Institute demonstrated that fast control becomes possible in fiber made of silica glass. “Silica fiber represents a nearly ideal physical platform because of very low optical loss, exceptional transparency and kilometer-scale interaction lengths,” noted Radic. “We showed that a silica fiber core can be controlled with sub-nanometer precision and be used for fast, few-photon control.”
A 24-year-old woman has discovered that her cerebellum is completely missing, explaining some of the unusual problems she has had with movement and speech. The case highlights just how adaptable the organ is.
The discovery was made when the woman was admitted to the Chinese PLA General Hospital of Jinan Military Area Command in Shandong Province complaining of dizziness and nausea. She told doctors she'd had problems walking steadily for most of her life, and her mother reported that she hadn't walked until she was 7 and that her speech only became intelligible at the age of 6.
Doctors did a CAT scan and immediately identified the source of the problem – her entire cerebellum was missing (see scan). The space where it should be was empty of tissue. Instead it was filled with cerebrospinal fluid, which cushions the brain and provides defence against disease.
The cerebellum – sometimes known as the "little brain" – is located underneath the two hemispheres. It looks different from the rest of the brain because it consists of much smaller and more compact folds of tissue. It represents about 10 per cent of the brain's total volume but contains 50 per cent of its neurons.
Although it is not unheard of to have part of your brain missing, either congenitally or from surgery, the woman joins an elite club of just nine people who are known to have lived without their entire cerebellum. A detailed description of how the disorder affects a living adult is almost non-existent, say doctors from the Chinese hospital, because most people with the condition die at a young age and the problem is only discovered on autopsy (Brain, doi.org/vh7).
The cerebellum's main job is to control voluntary movements and balance, and it is also thought to be involved in our ability to learn specific motor actions and speak. Problems in the cerebellum can lead to severe mental impairment, movement disorders, epilepsy or a potentially fatal build-up of fluid in the brain. However, in this woman, the missing cerebellum resulted in only mild to moderate motor deficiency, and mild speech problems such as slightly slurred pronunciation. Her doctors describe these effects as "less than would be expected", and say her case highlights the remarkable plasticity of the brain.
Induced pluripotent stem cells (iPSCs) are commonly generated by transduction of Oct4, Sox2, Klf4, and Myc (OSKM) into cells. Although iPSCs are pluripotent, they frequently exhibit high variation in terms of quality, as measured in mice by chimera contribution and tetraploid complementation. Reliably high-quality iPSCs will be needed for future therapeutic applications. Here, we show that one major determinant of iPSC quality is the combination of reprogramming factors used.
Based on tetraploid complementation, we found that ectopic expression of Sall4, Nanog, Esrrb, and Lin28 (SNEL) in mouse embryonic fibroblasts (MEFs) generated high-quality iPSCs more efficiently than other combinations of factors including OSKM. Although differentially methylated regions, transcript number of master regulators, establishment of specific superenhancers, and global aneuploidy were comparable between high- and low-quality lines, aberrant gene expression, trisomy of chromosome 8, and abnormal H2A.X deposition were distinguishing features that could potentially also be applicable to humans.
Malaria is an infectious disease that claims the lives of more children worldwide than any other; it is caused by parasitic micro-organisms of the genus Plasmodium. These parasites are transmitted to their hosts via mosquito bites, and induce a range of symptoms including vomiting, fever, headaches and, in severe cases, death. Malaria is prevalent in poverty-stricken areas around the equator, with an estimated 207 million cases in 2012. The most life-threatening form of the disease is caused by Plasmodium falciparum.
Within humans, P. falciparum undergoes two distinct stages: asexual replication, and differentiation into what are called gametocytes. Asexual replication occurs within red blood cells, with pathological symptoms arising from the inevitable red blood cell rupturing. Released parasites generally invade additional red blood cells for subsequent rounds of asexual replication; however, a small subset of parasites instead differentiate into the male and female forms that are referred to as gametocytes. Once taken up by a female Anopheles mosquito, these gametocytes are able to undergo sexual replication. Thus, the differentiation of P. falciparum into gametocytes represents an attractive target for intervention strategies. However, only mature gametocytes are found in the blood, and until recently, relatively little was known about immature gametocyte sequestration within tissues.
A recent study carried out a systematic organ survey of children who died from malaria, successfully identifying sites of immature gametocyte accumulation using a combination of immunohistochemical labelling and quantitative reverse transcription polymerase chain reaction (qRT-PCR). The study provides strong evidence that gametocyte development occurs within the haematopoietic system of the bone marrow, where gametocytes may form and develop within red blood cell precursors. Furthermore, binding interactions with red blood cell precursors seem to support the retention of developing gametocytes within the bone marrow’s extravascular space, where they are able to avoid immune detection until they are mature enough to be released back into the blood. The observation of gametocytes adopting a specialised niche within the haematopoietic system of the bone marrow is supported by an independent study, which earlier this year used qRT-PCR to demonstrate that the bone marrow of infected children is enriched for immature gametocytes.
The recently characterised locations and mechanisms of gametocyte sequestration within the bone marrow provide novel targets through which malaria transmission could be blocked, advancing both prevention and treatment efforts.
"Case fatality rate" - or CFR - is a term that's been tossed around a lot lately in the context of the 2014 West African Ebola outbreak… But what does it really mean?
The CFR – which is calculated by dividing the number of deaths that have occurred due to a certain condition by the total number of cases – is actually a measure of risk. For infectious disease, CFR is a very important epidemiological measure to estimate because it tells us the probability of dying after infection. If estimated properly in the middle of an outbreak, it can even help us examine the efficacy of interventions as they take place.
Because different outbreaks of the same disease can demonstrate different CFRs, there’s usually a range of possible CFRs for a given disease. In the past, outbreaks caused by Zaire ebolavirus have demonstrated a mean end-of-outbreak CFR of 80%. But based on the WHO's most recent report, it seems that only about 53% of reported Ebola cases thus far have ended in death since the 2014 outbreak began.
However, if we want to be particular, that 53% isn't really a CFR; it's actually the proportion of fatal cases - or PFC. This is a critical distinction. Because the outbreak in West Africa is still ongoing, we can't calculate end-of-outbreak CFR yet. We don’t know how many people will die from Ebola in the weeks ahead or how many total cases will ultimately accumulate by the end of the outbreak. So, for the time being, we have to make do with the PFC, which is essentially the number of deaths thus far divided by the number of cases to date.
When the WHO releases a report on the current situation in West Africa, it tells us two things: the number of people who've died and the number of reported cases at some specified point in time. For instance, in the most recent report, the WHO cited 4293 total cases and 2296 deaths as of September 8th. Dividing 2296 by 4293 gives us our previously stated PFC of 53%.
At first glance, it might seem then that only 53% of Ebola cases have been dying during this outbreak - a good deal less than the 80% we've seen prior... But what it really means is that only 53% of Ebola cases have died as of September 8th. We have no way of knowing whether all the people who were still hospitalized as of September 8th will survive the disease. Because of this, mid-outbreak PFC - as we've defined it thus far - doesn't tell us much about the likelihood of dying.
Despite Ebola’s frightening reputation, not all Ebola fatalities happen quickly. Without a little fine-tuning, PFC doesn't account for the lag between when a case is reported and when a case dies - approximately 16 days for this outbreak. What this means is that the 2296 deaths reported as of September 8th were all likely reported as cases by August 23rd. Adjusting PFC for this lag-time gives us a much better approximation of CFR well before the outbreak ends.
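The arithmetic above is easy to sketch in code. The helper below computes both the naive PFC and a lag-adjusted PFC; the September 8th figures are the WHO numbers quoted above, while the August 23rd cumulative case count is a hypothetical placeholder for illustration, not a WHO figure.

```python
def pfc(deaths, cases):
    """Proportion of fatal cases: deaths to date / cases to date."""
    return deaths / cases

def lag_adjusted_pfc(deaths_now, cases_at_lag):
    """Divide today's deaths by the case count one reporting-to-death
    lag earlier (~16 days for this outbreak), so deaths are compared
    with the cohort of cases they could plausibly have come from."""
    return deaths_now / cases_at_lag

# WHO figures as of September 8th (quoted in the article)
deaths_sep8, cases_sep8 = 2296, 4293

# Hypothetical cumulative case count as of August 23rd (illustrative only)
cases_aug23 = 2800

print(f"unadjusted PFC:   {pfc(deaths_sep8, cases_sep8):.0%}")        # ~53%
print(f"lag-adjusted PFC: {lag_adjusted_pfc(deaths_sep8, cases_aug23):.0%}")
```

With the illustrative August 23rd count, the lag-adjusted figure lands in the 80-85% range - which is exactly why the naive 53% understates the risk of dying mid-outbreak.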
Below is a chart that shows both unadjusted and lag-adjusted PFC over time for Ebola in West Africa. The lag-adjusted PFC - about 80-85% - is significantly higher than the unadjusted PFC but is consistent with recent fatality estimates by Médecins Sans Frontières.
Every summer solstice, tens of thousands of people throng to Stonehenge, creating a festival-like atmosphere at the 4,400-year-old stone monument. For the 2015 solstice, they will have a bit more room to spread out. A just-completed four-year project to map the vicinity of Stonehenge reveals a sprawling complex that includes 17 newly discovered monuments and signs of a “super henge” some 1.5 kilometres around.
The digital map — made from high-resolution radar and magnetic and laser scans that accumulated several terabytes of data — shatters the picture of Stonehenge as a desolate and exclusive site that was visited by few, says Vincent Gaffney, an archaeologist at the University of Birmingham, UK, who co-led the effort.
Take the cursus, a 3-kilometer-long, 100-meter-wide ditch north of Stonehenge that was thought to act as a barrier. The team’s mapping uncovered gaps in the cursus leading to Stonehenge, as well as several large pits, one of which would have been perfectly aligned with the setting solstice Sun. New magnetic and radar surveys of the Durrington Walls (which had been excavated before) uncovered more than 60 now-buried holes in which stones would have sat, and a few stones still buried.
“They look as if they may have been pushed over. That’s a big prehistoric monument which we never knew anything about,” says Gaffney, who calls the structure a ‘super henge.’ His team will discuss the work at the British Science Festival this week, and they plan to present it to the institutions that manage the site. “I’m sure it will guide future excavations,” Gaffney says.