NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1,450 news sources.
NOTE: All articles in the Amazing Science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL at the top right of the screen) to display all the relevant postings SORTED by TOPICS.
You can also type your own query, e.g., "dna" if you are looking for articles involving DNA as a keyword.
MOST_READ • 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
An atomically thin, two-dimensional, ultrasensitive semiconductor material for biosensing developed by University of California Santa Barbara (UCSB) researchers promises to push the boundaries of biosensing technology in many fields, from health care to environmental protection to forensic industries.
It’s based on molybdenum disulfide, or molybdenite (MoS2), as an alternative to graphene. Molybdenum disulfide — commonly used as a dry lubricant — surpasses graphene’s already high sensitivity, offers better scalability, and lends itself to high-volume manufacturing, the researchers say. Results of their study have been published in ACS Nano.
“This invention has established the foundation for a new generation of ultrasensitive and low-cost biosensors that can eventually allow single-molecule detection — the holy grail of diagnostics and bioengineering research,” said Samir Mitragotri, co-author and professor of chemical engineering and director of the Center for Bioengineering at UCSB.
The key, according to UCSB professor of electrical and computer engineering Kaustav Banerjee, who led this research, is MoS2's band gap: the characteristic of a material that determines its electrical conductivity, defined as the minimum amount of energy required for conduction, i.e., for an electron to break free of its bound state in the material.
Semiconductor materials have a small but nonzero band gap and can be switched controllably between conductive and insulating states. The larger the band gap, the better the material's ability to switch states and to suppress leakage current in the insulating state. MoS2's sizable band gap allows current to travel but also prevents leakage, resulting in more sensitive and accurate readings.
Graphene has attracted wide interest as a biosensor due to its two-dimensional structure (which allows for excellent electrostatic control of the transistor channel by the gate) and its high surface-to-volume ratio. However, the sensitivity of a field-effect transistor (FET) biosensor based on graphene is fundamentally limited by graphene’s zero (fully conductive) band gap, which results in increased leakage current, leading to reduced sensitivity, explained Banerjee, who is also the director of the Nanoelectronics Research Lab at UCSB.
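The leakage argument can be illustrated with a back-of-the-envelope sketch (not the UCSB team's model): in a simple thermally activated picture, a channel's minimum off-state leakage scales roughly as exp(-Eg / 2kT), so a zero-gap material like graphene offers no exponential suppression, while monolayer MoS2's gap of roughly 1.8 eV suppresses it by many orders of magnitude.

```python
import math

def thermal_leakage_factor(band_gap_ev, temp_k=300.0):
    """Rough thermally activated leakage factor ~ exp(-Eg / 2kT).

    Illustrative estimate only; real FET off-currents also depend on
    geometry, doping, contacts, and gating.
    """
    k_b_ev = 8.617e-5  # Boltzmann constant in eV/K
    return math.exp(-band_gap_ev / (2.0 * k_b_ev * temp_k))

# Graphene has a zero band gap, so there is no exponential suppression;
# monolayer MoS2 has a direct gap of roughly 1.8 eV.
graphene = thermal_leakage_factor(0.0)
mos2 = thermal_leakage_factor(1.8)
print(f"graphene leakage factor: {graphene:.2e}")
print(f"MoS2 leakage factor:     {mos2:.2e}")
```

The enormous ratio between the two factors is the intuition behind the claim that a finite band gap yields a higher on/off ratio and hence better sensing.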
New research has determined that a single group of micro-organisms may be responsible for much of the world’s vitamin B12 production in the oceans, with implications for the global carbon cycle and climate change.
Although vitamin B12 is an essential molecule required by most life on this planet, it is only produced by a relatively small group of micro-organisms because it is so large and complex. For humans, vitamin B12 plays a key role in maintaining the brain and nervous systems, as well as DNA synthesis in cells throughout the body.
Professors Andrew Doxey and Josh Neufeld, from the Faculty of Science at the University of Waterloo, led a study that discovered that Thaumarchaeota are likely dominant vitamin B12 producers. This group from the Archaea domain has never before been associated with vitamin B12 synthesis.
"We assumed that most major global sources of something as fundamental as vitamin B12 would have already been characterized, and so this finding changes how we think about global production of this important vitamin," said Professor Doxey.
The researchers, both of whom teach in the Department of Biology at Waterloo, used computational methods to search through vast amounts of sequenced environmental DNA for the genes that make vitamin B12, identifying the likely producers in marine and freshwater environments.
"Because Thaumarchaeota are among the most abundant organisms on the planet, especially in marine environments, their contribution to vitamin B12 production has enormous implications for ecology and metabolism in the oceans," said Professor Neufeld.
The availability of vitamin B12 may control how much or how little biological productivity by phytoplankton takes place in the oceans. Phytoplankton remove carbon dioxide from the atmosphere through photosynthesis, much like plants and trees, thus reducing the atmospheric concentration of this greenhouse gas, the largest contributor to global warming.
The research also found that proportions of archaeal B12 synthesis genes increased with ocean depth and were more prevalent in winter and polar waters, suggesting that archaeal vitamin B12 may be critical for the survival of other species in both deep and cold marine environments.
An interdisciplinary team of scientists and engineers has developed a thin, flexible four-layer material that autonomously camouflages itself against its surroundings by optically evaluating the background and changing its pattern to match, much like the skin of an octopus or chameleon does in the wild. The system mimics different background patterns quickly, within 1 to 2 seconds. To date, no other system has combined the crucial capabilities of sensing and actuation in a distributed manner.
The inspiration for this creation came from understanding the skin of cephalopods (such as octopus, squid, and cuttlefish), sea creatures that mimic the appearance of their environment in full color and at greater resolution. Cephalopod skin has faster response times, from 250 to 750 milliseconds. The prototype material is much simpler, arranged as an array of 16 x 16 relatively large, 1-mm-square "pixels" that change from black to white and back again, and its response times are slower, in the 1-to-2-second range.
There is no overall camera system to detect the background and no central processing that controls the patterning of the material. In real octopuses, the eyes are involved, but the skin has its own photoreceptors similar to those found in the retina. The designed layered material works in the latter, distributed way, by integrating distributed optical sensors that monitor its surroundings and then commanding independent optical “actuators” to adapt dynamically.
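As a thought experiment, the distributed sense-and-respond logic, in which each pixel consults only its own local sensor with no central camera or processor, might be sketched like this (the sensor readings and threshold rule are invented for illustration, not the device's actual control scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical background: each cell senses its local light intensity in [0, 1].
background = rng.random((16, 16))

# Each pixel decides locally, with no central controller:
# switch to white if the local reading is bright, black otherwise.
THRESHOLD = 0.5
pixels = np.where(background > THRESHOLD, "white", "black")

print(pixels[0, :4])  # decisions of the first four pixels in row 0
```

The point of the sketch is that the decision rule is evaluated independently per cell, mirroring the distributed photoreceptor-plus-actuator architecture described above.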
In the future, some diseases might be diagnosed earlier and treated more effectively. Researchers at the Max Planck Institute for the Science of Light in Erlangen have developed an optical method that makes individual proteins, such as the proteins characteristic of some cancers, visible. Other methods that achieve this only work if the target biomolecules have first been labelled with fluorescent tags; in general, however, that labelling is difficult or even impossible. By contrast, with their method, dubbed iSCAT, the researchers in Erlangen are able to directly detect the scattered light of individual proteins via their shadows. The method could not only make biomedical diagnoses more sensitive, but also provide new insights into fundamental biological processes.
A biosensor for the scattered light of individual unmarked biomolecules, such as proteins and tumour markers, may facilitate medical diagnosis. The biodetector, which a team led by V. Sandoghdar developed at the Max Planck Institute for the Science of Light, uses the interferometric method iSCAT.
Vahid Sandoghdar, Director at the Max Planck Institute for the Science of Light, and Marek Piliarik, a post doc in Sandoghdar’s division, are now able to produce a much clearer image without the need for elaborate attachment of luminous markers to the target proteins. This is possible thanks to iSCAT, short for interferometric detection of scattering. The researchers shine laser light onto a microscope slide on which the relevant proteins have been captured with appropriate biochemical lures. The proteins scatter the laser light, thus casting a shadow, albeit a very weak one. “iSCAT not only promises more sensitive diagnosis of diseases such as cancers, but will also shed light on many fundamental biochemical processes in nature,” says Vahid Sandoghdar.
The Erlangen-based researchers succeeded in achieving a high level of sensitivity for individual proteins by applying a few tricks, and because they were not hampered by a misconception held by many other scientists: "Until now it was thought that if you want to detect scattered light from nanoparticles, you have to eliminate all background light," explains Vahid Sandoghdar. "However, in recent years we've realized that it is more advantageous to illuminate the sample strongly and visualize the feeble signal of a tiny nanoparticle as a shadow against the intense background light." The researchers therefore allow the background light to interfere with the weak scattered light so that the desired signal is amplified.
However, at this stage they are still unable to detect the shadows of a single protein in the interference image, because the pattern is akin to that of a television broadcast in black and white that is distorted by a lot of noise. The interferometric detection method is so sensitive that any small roughness or contamination of the sample carrier will also cast a shadow that could in fact swamp the protein signal.
Nevertheless, this difficulty did not put off the two researchers. They have learned to eliminate the noise by applying a second trick. They take a snapshot with the iSCAT microscope not only after they have dripped a solution containing the desired protein onto the sample holder but also before. "Since most of the optical noise generated by nanoscopic irregularities of the sample does not change, we can subtract one image from the other and thus eliminate the noise," says Piliarik. The target proteins then stand out clearly from the background as dark spots, even though the shadow of the protein is only one ten-thousandth or even one hundred-thousandth as dark as the background.
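The before-and-after trick is, in essence, differential imaging. A schematic numpy sketch with synthetic data (the roughness level and shadow depth are invented for illustration, not measured values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Static background speckle from surface roughness (identical in both frames).
roughness = 1.0 + 0.05 * rng.standard_normal((64, 64))

before = roughness.copy()              # reference frame, no protein yet
after = roughness.copy()
after[32, 32] *= 1.0 - 1e-4            # protein shadow: ~1/10,000 dimmer

# Subtracting the reference frame removes the static speckle,
# leaving only the protein's faint shadow as a dark spot.
diff = after - before
spot = np.unravel_index(np.argmin(diff), diff.shape)
print(spot)  # location of the darkest pixel in the difference image
```

In the raw `after` frame the shadow is buried under speckle hundreds of times stronger; in the difference image it is the only nonzero feature, which is the essence of the noise-subtraction step Piliarik describes.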
Marek Piliarik and Vahid Sandoghdar are able to detect various proteins as shadows under the microscope not only in pure solutions: they can also home in on individual proteins in mixtures containing other proteins at concentrations up to 2,000 times greater.
Researchers at the UNC School of Medicine have discovered how two genes – Period and Cryptochrome – keep the circadian clocks in all human cells in time and in proper rhythm with the 24-hour day, as well as the seasons. The finding, published today in the journal Genes and Development, has implications for the development of drugs for various diseases such as cancers and diabetes, as well as conditions such as metabolic syndrome, insomnia, seasonal affective disorder, obesity, and even jetlag.
"Discovering how these circadian clock genes interact has been a long time coming," said Aziz Sancar, MD, PhD, Sarah Graham Kenan Professor of Biochemistry and Biophysics and senior author of the Genes and Development paper. "We've known for a while that four proteins were involved in generating daily rhythmicity but not exactly what they did. Now we know how the clock is reset in all cells. So we have a better idea of what to expect if we target these proteins with therapeutics."
In all human cells, there are four genes – Cryptochrome, Period, CLOCK, and BMAL1 – that work in unison to control the cyclical changes in human physiology, such as blood pressure, body temperature, and rest-sleep cycles. Previously, scientists found that CLOCK and BMAL1 work in tandem to kick start the circadian clock. These genes bind to many other genes and turn them on to express proteins. This allows cells, such as brain cells, to behave the way we need them to at the start of a day.
Specifically, CLOCK and BMAL1 bind to a pair of genes called Period and Cryptochrome and turn them on to express proteins, which – after several modifications – wind up suppressing CLOCK and BMAL1 activity. Then, the Period and Cryptochrome proteins are degraded, allowing for the circadian clock to begin again.
"It's a feedback loop," said Sancar, who discovered Cryptochrome in 1998. "The inhibition takes 24 hours. This is why we can see gene activity go up and then down throughout the day."
But scientists didn't know exactly how that gene suppression and protein degradation happened at the back end. In fact, during experiments using one compound to stifle Cryptochrome and another drug to hinder Period, other researchers found inconsistent effects on the circadian clock, suggesting that Cryptochrome and Period did not have the same role. Sancar, a member of the UNC Lineberger Comprehensive Cancer Center who studies DNA repair in addition to the circadian clock, thought the two genes might have complementary roles. His team conducted experiments to find out.
Chris Selby, PhD, a research instructor in Sancar's lab, used two different kinds of genetics techniques to create the first-ever cell line that lacked both Cryptochrome and Period. Each cell has two copies of each gene; Selby knocked out all four copies.
Then Rui Ye, PhD, a postdoctoral fellow in Sancar's lab and first author of the Genes and Development paper, put Period back into the new mutant cells. But Period by itself did not inhibit CLOCK-BMAL1; it actually had no active function inside the cells.
Next, Ye put Cryptochrome alone back into the cell line. He found that Cryptochrome not only suppressed CLOCK and BMAL1, but it squashed them indefinitely. "The Cryptochrome just sat there," Sancar said. "It wasn't degraded. The circadian clock couldn't restart."
For the final experiment, Sancar's team added Period to the cells with Cryptochrome. As Period's protein accumulated inside cells, the scientists could see that it began to remove the Cryptochrome, as well as CLOCK and BMAL1. This led to the eventual degradation of Cryptochrome, and then the CLOCK-BMAL1 genes were free to restart the circadian clock anew to complete the 24-hour cycle. "What we've done is show how the entire clock really works," Sancar said.
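The transcription-translation negative feedback loop described above is often caricatured with a Goodwin-type oscillator, a generic textbook model rather than the UNC team's own. An activator drives production of a repressor whose mature form, after a delay, shuts the activator down and is then degraded; with a steep enough repression term, the loop can produce sustained rhythms:

```python
# Goodwin-style negative feedback loop (generic illustration, Euler steps):
# x ~ clock-gene mRNA, y ~ its protein, z ~ the mature repressor that
# feeds back to shut off transcription of x.
def simulate(steps=20000, dt=0.01, n=10):
    x, y, z = 0.1, 0.1, 0.1
    levels = []
    for _ in range(steps):
        dx = 1.0 / (1.0 + z ** n) - 0.1 * x   # repressible transcription
        dy = x - 0.1 * y                      # translation and turnover
        dz = y - 0.1 * z                      # repressor maturation and decay
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        levels.append(x)
    return levels

levels = simulate()
print(f"mRNA level ranges from {min(levels):.3f} to {max(levels):.3f}")
```

With a shallow repression term (small `n`), the same loop settles to a steady state instead of cycling, which is why the steepness of the feedback, here standing in for the multi-step modification and degradation of Period and Cryptochrome, matters in such models.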
As all who study astronomy know, one of the most incredible things about the universe is the never-ending potential for wonderful discoveries that sound more like fiction than fact. With this paper, the authors are pushing the boundaries of fiction into fact with the potential discovery of a new exotic object, known as a Thorne–Żytkow object (TZO). First predicted in the 1970s by Kip Thorne and Anna Żytkow, these bodies occur when a neutron star in a binary system with a red supergiant (RSG) merges into the second star. This merger creates an unusual system in which a neutron star is surrounded by a large, diffuse envelope of material. The system still produces most of its energy through thermonuclear burning at the core of the material envelope, and a smaller amount (about 5% of the total) from the gravitational accretion of material onto the neutron star. Eventually, after several hundred years, the core of the envelope and the neutron star would merge, resulting in either a larger neutron star or a black hole.
TZOs are fascinating objects in a special state of a binary system’s evolution, and there is a lot of new physics that can be learned from such a system, but there has been one problem with them until now: they are identical in appearance to typical red supergiants. There are a lot of normal red supergiants no matter where you look, and knowing if a RSG is a TZO is only possible when you look in detail at the stellar spectra for the over-abundance of lithium and other specific heavy metals. Finding a TZO is definitely a “find the needle in a haystack” kind of observing problem!
Luckily for science, the authors successfully found the needle. They did this by surveying stars in the Milky Way and Magellanic Clouds, drawn from previous stellar surveys in which effective temperature and photometry data indicated a RSG. The authors then took stellar spectra of the 62 stars in their sample at Apache Point Observatory in New Mexico and the Magellan telescopes in Chile, and analyzed the spectra for the ratios between elements to see whether there were any anomalies. In one case, a star known as HV 2112 in the Small Magellanic Cloud was found to have unusually high concentrations of lithium, molybdenum, and rubidium. These elements, especially in the amounts found in HV 2112, indicate the star is not a RSG at all, but rather a TZO. Some spectral features were also observed that are not predicted in TZO models, but the authors acknowledge that the available TZO models are older and do not take into account some recent advances in stellar convection modeling.
This TZO discovery, if confirmed by follow-up theoretical models, is exciting because HV 2112 would be the prototype of a whole new kind of system. But beyond being a scientific curiosity, a TZO can provide a new environment for answering several questions, such as a new fate of massive binary systems. Further, because this is a completely new kind of stellar interior, we are also looking at a different kind of stellar nucleosynthesis process for heavy elements than anything previously observed. It is like being handed a new laboratory in which to test astrophysical ideas, and to distinguish fact from fiction.
In the early days of quantum physics, in an attempt to explain the wavelike behavior of quantum particles, the French physicist Louis de Broglie proposed what he called a “pilot wave” theory. According to de Broglie, moving particles — such as electrons, or the photons in a beam of light — are borne along on waves of some type, like driftwood on a tide.
Physicists’ inability to detect de Broglie’s posited waves led them, for the most part, to abandon pilot-wave theory. Recently, however, a real pilot-wave system has been discovered, in which a drop of fluid bounces across a vibrating fluid bath, propelled by waves produced by its own collisions.
In 2006, Yves Couder and Emmanuel Fort, physicists at Université Paris Diderot, used this system to reproduce one of the most famous experiments in quantum physics: the so-called “double-slit” experiment, in which particles are fired at a screen through a barrier with two holes in it.
In the latest issue of the journal Physical Review E (PRE), a team of MIT researchers, in collaboration with Couder and his colleagues, report that they have produced the fluidic analogue of another classic quantum experiment, in which electrons are confined to a circular “corral” by a ring of ions. In the new experiments, bouncing drops of fluid mimicked the electrons’ statistical behavior with remarkable accuracy.
“This hydrodynamic system is subtle, and extraordinarily rich in terms of mathematical modeling,” says John Bush, a professor of applied mathematics at MIT and corresponding author on the new paper. “It’s the first pilot-wave system discovered and gives insight into how rational quantum dynamics might work, were such a thing to exist.”
Bush believes that pilot-wave theory deserves a second look. That's because Couder, Fort, and their colleagues have recently discovered a macroscopic pilot-wave system whose statistical behavior, in certain circumstances, recalls that of quantum systems.
Couder and Fort’s system consists of a bath of fluid vibrating at a rate just below the threshold at which waves would start to form on its surface. A droplet of the same fluid is released above the bath; where it strikes the surface, it causes waves to radiate outward. The droplet then begins moving across the bath, propelled by the very waves it creates.
“This system is undoubtedly quantitatively different from quantum mechanics,” Bush says. “It’s also qualitatively different: There are some features of quantum mechanics that we can’t capture, some features of this system that we know aren’t present in quantum mechanics. But are they philosophically distinct?”
Stephen Quake, Stanford professor of bioengineering and applied physics, and Dr. Yossi Mandell, head of the Ophthalmic Science and Engineering Lab at Bar-Ilan University, teamed up to create a state-of-the-art intraocular implant that could change glaucoma treatment by making intraocular pressure readings frequent, easy, and convenient.
Made to fit inside a commonly used intraocular lens prosthetic and implanted through simple surgery, such as the cataract surgery many glaucoma patients already receive, the device measures the pressure of the fluid within the eye. A smartphone app or a wearable device such as Google Glass allows the wearer to take "snapshots" of the device, which reports back the pressure.
The lens device holds a tiny tube, capped at one end and opened on the other, filled with gas. As the fluid pressure pushes against the gas, a marked scale permits reading of the intraocular pressure. The implant does not interfere with vision, as proven in an Air Force-approved vision test, and in one reported study the implant was responsible for changes to treatment for glaucoma in nearly 80 percent of the wearers.
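The gas-column readout follows from Boyle's law: at constant temperature, pressure times volume is conserved, so the trapped gas column shortens as eye pressure rises. A hypothetical calculation with invented device dimensions (the actual implant's geometry and calibration are not given in the article):

```python
# Boyle's law for the trapped gas column: P1 * L1 = P2 * L2
# (the tube's cross-section is constant, so volume is proportional to length).
def gas_column_length(p_ref_mmhg, l_ref_mm, p_now_mmhg):
    return p_ref_mmhg * l_ref_mm / p_now_mmhg

# Hypothetical device: a 1.0 mm gas column at a reference pressure of
# 760 mmHg. Intraocular pressure adds to atmospheric pressure in the eye.
l_normal = gas_column_length(760.0, 1.0, 760.0 + 15.0)    # healthy ~15 mmHg
l_elevated = gas_column_length(760.0, 1.0, 760.0 + 30.0)  # glaucoma ~30 mmHg
print(f"{l_normal:.4f} mm vs {l_elevated:.4f} mm")
```

Reading the column length against the marked scale therefore translates directly into an intraocular pressure value, which is what the snapshot captures.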
Nearly 2.2 million Americans battle the eye disease glaucoma. Patients endure weekly visits to the ophthalmologist to have the disease monitored and treated. The disease is characterized by increasing pressure inside the eye, which results in a continuous loss of a specific type of retinal cell accompanied by degradation of the optic nerve fiber. The mechanism that links pressure to damage is not clear but there is correlation between the intensity of pressure readings and level of damage.
We live on a vast, underexplored planet that is largely ocean. Despite modern technology, Global Positioning System (GPS) navigation, and advanced engineering of ocean vessels, the ocean is unforgiving, especially in rough weather. Coastal ocean navigation, with risks of running aground and inconsistent weather and sea patterns, can also be challenging and hazardous. In 2012, more than 100 international incidents of ships sinking, foundering, grounding, or being lost at sea were reported (http://en.wikipedia.org/wiki/List_of_shipwrecks_in_2012). Even a modern jetliner can disappear in the ocean with little or no trace, and the current costs and uncertainty associated with search and rescue make the prospects of finding an object in the middle of the ocean daunting.
Notwithstanding satellite constellations, autonomous vehicles, and more than 300 research vessels worldwide (www.wikipedia.org/wiki/List_of_research_vessels_by_country), we lack fundamental data relating to our oceans. These missing data hamper our ability to make basic predictions about ocean weather, narrow the trajectories of floating objects, or estimate the impact of ocean acidification and other physical, biological, and chemical characteristics of the world's oceans. To cope with this problem, scientists make probabilistic inferences by synthesizing models with incomplete data. Probabilistic modeling works well for certain questions of interest to the scientific community, but it is difficult to extract unambiguous policy recommendations from this approach. The models can answer important questions about trends and tendencies among large numbers of events but often cannot offer much insight into specific events. For example, probabilistic models can tell us with some precision the extent to which storm activity will be intensified by global climate change but cannot yet attribute the severity of a particular storm to climate change. Probabilistic modeling can provide important insights into the global traffic patterns of floating debris but is not of much help to search-and-rescue personnel struggling to learn the likely trajectory of a particular piece of debris left by a wreck.
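The gap between ensemble statistics and a single trajectory can be seen in a toy Monte Carlo drift model (the current and noise values are invented for illustration): the mean endpoint of many simulated drifters is tightly constrained, while any individual drifter's endpoint is not.

```python
import random

random.seed(42)

def drift_ensemble(n_particles=1000, n_days=30):
    """Toy random-walk drift: a steady mean current plus daily noise."""
    finals = []
    for _ in range(n_particles):
        x, y = 0.0, 0.0
        for _ in range(n_days):
            x += 5.0 + random.gauss(0.0, 10.0)  # km/day: mean eastward drift
            y += random.gauss(0.0, 10.0)        # km/day: noise only
        finals.append((x, y))
    return finals

finals = drift_ensemble()
mean_x = sum(x for x, _ in finals) / len(finals)
spread = max(x for x, _ in finals) - min(x for x, _ in finals)
# The ensemble mean eastward displacement is well constrained (~150 km),
# but individual endpoints scatter over hundreds of kilometers.
print(f"mean east: {mean_x:.0f} km, east-west spread: {spread:.0f} km")
```

This is exactly the asymmetry described above: the model speaks confidently about the population of drifters, yet tells search-and-rescue personnel little about where one particular piece of debris went.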
Oceanographic data are incomplete because it is financially and logistically impractical to sample everywhere. Scientists typically sample over time, floating with the currents and observing their temporal evolution (the Lagrangian approach), or they sample across space to cover a gradient of conditions—such as temperature or nutrients (the Eulerian approach). These observational paradigms have various strengths and weaknesses, but their fundamental weakness is cost. A modern ocean research vessel typically costs more than US$30,000 per day to operate—excluding the cost of the scientists, engineers, and the research itself. Even an aggressive expansion of oceanographic research budgets would not do much to improve the precision of our probabilistic models, let alone to quickly and more accurately locate missing objects in the huge, moving, three-dimensional seascape. Emerging autonomous technologies such as underwater gliders and in situ biological samplers (e.g., environmental sample processors) help fill gaps but are cost prohibitive to scale up. Similarly, drifters (e.g., the highly successful Argo floats program) have proven very useful for better defining currents, but unless retrieved after their operational lifetime, they become floating trash, adding to a growing problem.
Long-term sampling efforts such as the continuous plankton recorder in the North Sea and North Atlantic provide valuable data on decadal trends and leveraged English Channel ferries to accomplish much of the sampling. Modernizing and expanding this approach is a goal of citizen science initiatives.
Scientists have successfully 'reset' human pluripotent stem cells to the earliest developmental state – equivalent to cells found in an embryo before it implants in the womb (7-9 days old). These 'pristine' stem cells may mark the true starting point for human development, but have until now been impossible to replicate in the lab. The discovery, published in Cell, will lead to a better understanding of human development and could in future allow the production of safe and more reproducible starting materials for a wide range of applications including cell therapies.
At first glance, water seems to be a simple molecule in which a single oxygen atom is bound to two hydrogen atoms. However, it is more complex when taking into account hydrogen’s nuclear spin – a property reminiscent of a rotation of its nucleus about its own axis. The spin of a single hydrogen can assume two different orientations, symbolized as up and down. Thus, the spins of water’s two hydrogen atoms can either add up, called ortho water, or cancel out, called para water. Ortho and para states are also said to be symmetric and antisymmetric, respectively.
Fundamental symmetry rules prohibit para water from turning into ortho water and vice versa – at least theoretically. "If you had a magic bottle with isolated para and ortho molecules, they would remain in their spin states at all times," says DESY scientist Jochen Küpper who led the recent study. "In principle, they are different molecular species, different types of water." However, in the real world, water molecules are not isolated and frequently collide with other molecules or surfaces in their vicinity, causing nuclear spin orientations to change. "Through these interactions, para and ortho water can actually transform easily into one another," explains Küpper who is also a professor at the University of Hamburg and a member of the Hamburg Centre for Ultrafast Imaging (CUI). "Therefore, it is very challenging to separate them and produce water that is not a mixture of both."
Yet, the CFEL researchers have now demonstrated a way of isolating para and ortho water in the lab. To start, the scientists placed a drop of water in a compartment, which they pressurized with neon or argon gas. This mixture was released into vacuum through a pulsed valve. “Due to the large pressure difference, the gas expands quickly into the vacuum when the valve is opened, dragging along water molecules and, at the same time, cooling them down,” says Daniel Horke, the first author of the study.
This expansion produces a narrow beam of ultracold water molecules, which propagate at supersonic speed and are so dilute that individual molecules no longer collide with each other, thereby suppressing the conversion between para and ortho spin states.
The molecular beam then travels through a strong electric field, which deflects the water molecules from their original flight path and acts like a prism for nuclear spin states. "Para and ortho water interact with the electric field differently," Horke explains. "Thus, they also get deflected differently, allowing us to separate them in space and obtain pure para and ortho samples." Spectroscopy showed that the purity of the para and ortho water was 74 per cent and over 97 per cent, respectively. Especially for para water the purity can be greatly enhanced in the future, as Horke says. Storing the separated water species was not an aim of the study.
The new method could benefit studies of a wide range of phenomena. In astrophysics, for example, it is commonly assumed that the relative amounts of para and ortho species can be linked to the temperature of interstellar ice. This theory is based on the temperature dependence of hydrogen’s ortho-to-para ratio, which is three to one at room temperature and drops with decreasing temperatures. “In fact, certain regions of the universe exhibit ratios that are quite different from what you would expect,” Horke says. “Yet, the specific reasons are unknown and lab-based experiments could provide new insights.”
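The three-to-one figure for hydrogen follows from nuclear-spin statistics: ortho H2 (total nuclear spin 1, statistical weight 3) occupies odd rotational levels, para H2 (spin 0, weight 1) occupies even ones, and the equilibrium ratio comes from a Boltzmann sum over rotational states. A sketch using H2's rotational temperature of about 85.3 K:

```python
import math

THETA_ROT = 85.3  # rotational temperature of H2 in kelvin

def ortho_para_ratio(temp_k, j_max=30):
    """Equilibrium ortho:para ratio of H2 from spin and rotational statistics."""
    def boltz(j):
        # Degeneracy (2J+1) times the rotational Boltzmann factor.
        return (2 * j + 1) * math.exp(-THETA_ROT * j * (j + 1) / temp_k)
    ortho = 3.0 * sum(boltz(j) for j in range(1, j_max, 2))  # odd J, weight 3
    para = 1.0 * sum(boltz(j) for j in range(0, j_max, 2))   # even J, weight 1
    return ortho / para

print(round(ortho_para_ratio(300.0), 2))  # approaches 3 at room temperature
print(round(ortho_para_ratio(20.0), 3))   # nearly pure para when very cold
```

The same spin-statistical reasoning, applied to a species' observed ortho-to-para ratio, is what lets astronomers treat the ratio as a thermometer for interstellar ice, and why anomalous ratios are interesting.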
Back on Earth, the study may also help determine the structures of proteins – biomolecules that are essential to all life. A method known as nuclear magnetic resonance (NMR) spectroscopy reconstructs protein structures from the relative orientation of the nuclear spins of hydrogen and other atoms. "Para hydrogen has successfully been used to enhance the sensitivity of the NMR method," says Horke. "Thus, enriching para water in a protein's water shell could become an interesting approach to improving NMR spectroscopy of these biological systems while keeping the protein in a nearly natural environment."
The quest to create camouflaging metamaterials that can “see” colors and automatically blend into the background is one step closer to reality, thanks to a breakthrough color-display technology unveiled this week by Rice University‘s Laboratory for Nanophotonics (LANP).
The new full-color display technology uses aluminum nanorods to create the vivid red, blue and green hues found in today’s top-of-the-line LCD televisions and monitors.
The technology is described in a new study in the Early Edition of the Proceedings of the National Academy of Sciences (PNAS) (open access).
The breakthrough is the latest in a string of recent discoveries by a Rice-led team that set out in 2010 to create metamaterials capable of mimicking the camouflage abilities of cephalopods — the family of marine creatures that includes squid, octopus and cuttlefish.
“Our goal is to learn from these amazing animals so that we could create new materials with the same kind of distributed light-sensing and processing abilities that they appear to have in their skins,” said LANP Director Naomi Halas, a co-author of the PNAS study.
She is the principal investigator on a $6 million Office of Naval Research grant for a multi-institutional team that includes marine biologists Roger Hanlon of the Marine Biological Laboratory in Woods Hole, Mass., and Thomas Cronin of the University of Maryland, Baltimore County.
“We know cephalopods have some of the same proteins in their skin that we have in our retinas, so part of our challenge, as engineers, is to build a material that can ‘see’ light the way their skin sees it, and another challenge is designing systems that can react and display vivid camouflage patterns,” Halas said.
LANP’s new color display technology delivers bright red, blue and green hues from five-micron-square pixels, each containing several hundred aluminum nanorods. By varying the length of the nanorods and the spacing between them, LANP researchers Stephan Link and Jana Olson showed they could create pixels that produced dozens of colors, including rich tones of red, green and blue that are comparable to those found in high-definition LCD displays.
“Aluminum is useful because it’s compatible with microelectronic production methods, but until now the tones produced by plasmonic aluminum nanorods have been muted and washed out,” said Link, associate professor of chemistry at Rice and the lead researcher on the PNAS study. “The key advancement here was to place the nanorods in an ordered array.”
Maze tests reveal subtle advantage bestowed by human FOXP2 gene
As a uniquely human trait, language has long baffled evolutionary biologists. Not until FOXP2 was linked to a genetic disorder that caused problems in forming words could they even begin to study language’s roots in our genes. Soon after that discovery, a team at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, discovered that just two bases, the letters that make up DNA, distinguished the human and chimp versions of FOXP2.
To try to determine how those changes influenced the gene's function, that group put the human version of the gene in mice. In 2009, they observed that these "humanized" mice produced more frequent and complex alarm calls, suggesting the human mutations may have been involved in the evolution of more complex speech.
Another study showed that humanized mice have different activity in the part of the brain called the striatum, which is involved in learning, among other tasks. But the details of how the human FOXP2 mutations might affect real-world learning remained murky. To solve the mystery, the Max Planck researchers sent graduate student Christiane Schreiweis to work with Ann Graybiel, a neuroscientist at the Massachusetts Institute of Technology in Cambridge. She's an expert in testing mouse smarts by seeing how quickly they can learn to find rewards in mazes.
In humans and other animals, learning occurs in two ways, Graybiel explains. The first requires breaking the task at hand into distinct steps and performing them one at a time. For example, to learn to ride a bike, you first need to remember to hold the handlebars straight, then to put your feet on the pedals, and finally push with your legs to make the pedals go around. At some point, though, these step-by-step movements become habit and you switch to the second type of learning, which is based on unconscious repetition. Now, your bike riding improves simply by repeating the task, rather than thinking through each step.
To figure out which type of learning may have been aided by the changes in the human version of FOXP2, Schreiweis tested humanized mice in mazes. In some cases, the mice were required to remember that turning right always led to a reward, indicating that they had acquired the repetitive habit of turning right and their skill had become “unconscious.” In other cases, they had to look around and figure out that the reward was always on the east arm of the maze, a task that required the behavioral flexibility of step-by-step learning. That’s because, depending on where in the maze the mouse started, it had to look around to figure out where to go.
When humanized mice and wild mice were put in mazes that engaged both types of learning, the humanized mice mastered the route to the reward faster than their wild counterparts, report Schreiweis, Graybiel, and their colleagues online today in the Proceedings of the National Academy of Sciences. But when the mice were engaged in just one type of learning, humanized and wild mice did equally well on all the tests. That was unexpected; the researchers had predicted that the humanized mice would show an advantage in at least one of the learning types.
Conditions on Earth for the first 500 million years after it formed may have been surprisingly similar to the present day, complete with oceans, continents and active crustal plates.
This alternate view of Earth’s first geologic eon, called the Hadean, has gained substantial new support from the first detailed comparison of zircon crystals that formed more than 4 billion years ago with those formed contemporaneously in Iceland, which has been proposed as a possible geological analog for early Earth.
The study was conducted by a team of geologists directed by Calvin Miller, the William R. Kenan Jr. Professor of Earth and Environmental Sciences at Vanderbilt University, and published online this weekend by the journal Earth and Planetary Science Letters in a paper titled, “Iceland is not a magmatic analog for the Hadean: Evidence from the zircon record.”
From the early 20th century up through the 1980s, geologists generally agreed that conditions during the Hadean period were utterly hostile to life. The inability to find rock formations from the period led them to conclude that early Earth was hellishly hot, either entirely molten or subject to such intense asteroid bombardment that any rocks that formed were rapidly remelted. As a result, they pictured the surface of the Earth as covered by a giant “magma ocean.”
Two schools of thought have emerged: One argues that Hadean Earth was surprisingly similar to the present day. The other maintains that, although it was less hostile than formerly believed, early Earth was nonetheless a foreign-seeming and formidable place, similar to the hottest, most extreme, geologic environments of today. A popular analog is Iceland, where substantial amounts of crust are forming from basaltic magma that is much hotter than the magmas that built most of Earth’s current continental crust.
“We reasoned that the only concrete evidence for what the Hadean was like came from the only known survivors: zircon crystals – and yet no one had investigated Icelandic zircon to compare their telltale compositions to those that are more than 4 billion years old, or with zircon from other modern environments,” said Miller.
The largest spacecraft welding tool in the world, the Vertical Assembly Center, is officially open for business at NASA's Michoud Assembly Facility in New Orleans. The 170-foot-tall, 78-foot-wide giant completes a world-class welding toolkit that will be used to build the core stage of America's next great rocket, the Space Launch System (SLS).
SLS will be the most powerful rocket ever built for deep space missions, including to an asteroid and eventually Mars. The core stage, towering more than 200 feet tall (61 meters) with a diameter of 27.6 feet (8.4 meters), will store cryogenic liquid hydrogen and liquid oxygen that will feed the rocket's four RS-25 engines.
"This rocket is a game changer in terms of deep space exploration and will launch NASA astronauts to investigate asteroids and explore the surface of Mars while opening new possibilities for science missions, as well," said NASA Administrator Charles Bolden during a ribbon-cutting ceremony at Michoud Friday.
The Vertical Assembly Center is part of a family of state-of-the-art tools designed to weld the core stage of SLS. It will join domes, rings and barrels to complete the tanks or dry structure assemblies. It also will be used to perform evaluations on the completed welds. Boeing is the prime contractor for the SLS core stage, including avionics.
"The SLS Program continues to make significant progress," said Todd May, SLS program manager. "The core stage and boosters have both completed critical design review, and NASA recently approved the SLS Program's progression from formulation to development. This is a major milestone for the program and proof the first new design for SLS is mature enough for production."
A new study published in The Journal of Geology provides support for the theory that a cosmic impact event over North America some 13,000 years ago caused a major period of climate change known as the Younger Dryas stadial, or “Big Freeze.”
Around 12,800 years ago, a sudden, catastrophic event plunged much of the Earth into a period of cold climatic conditions and drought. This drastic climate change—the Younger Dryas—coincided with the extinction of Pleistocene megafauna, such as the saber-tooth cats and the mastodon, and resulted in major declines in prehistoric human populations, including the termination of the Clovis culture.
With limited evidence, several rival theories have been proposed about the event that sparked this period, such as a collapse of the North American ice sheets, a major volcanic eruption, or a solar flare.
However, in a study published in The Journal of Geology, an international group of scientists analyzing existing and new evidence have determined a cosmic impact event, such as a comet or meteorite, to be the only plausible hypothesis to explain all the unusual occurrences at the onset of the Younger Dryas period.
Researchers from 21 universities in 6 countries believe the key to the mystery of the Big Freeze lies in nanodiamonds scattered across Europe, North America, and portions of South America, in a 50-million-square-kilometer area known as the Younger Dryas Boundary (YDB) field.
Microscopic nanodiamonds, melt-glass, carbon spherules, and other high-temperature materials are found in abundance throughout the YDB field, in a thin layer located only meters from the Earth’s surface. Because these materials formed at temperatures in excess of 2200 degrees Celsius, the fact they are present together so near to the surface suggests they were likely created by a major extraterrestrial impact event.
In addition to providing support for the cosmic impact event hypothesis, the study also offers evidence to reject alternate hypotheses for the formation of the YDB nanodiamonds, such as by wildfires, volcanism, or meteoric flux.
The team’s findings serve to settle the debate about the presence of nanodiamonds in the YDB field and challenge existing paradigms across multiple disciplines, including impact dynamics, archaeology, paleontology, limnology, and palynology.
New research shows that schizophrenia isn’t a single disease but a group of eight genetically distinct disorders, each with its own set of symptoms. The finding could be a first step toward improved diagnosis and treatment for the debilitating psychiatric illness.
The research at Washington University School of Medicine in St. Louis is reported online Sept. 15 in The American Journal of Psychiatry. About 80 percent of the risk for schizophrenia is known to be inherited, but scientists have struggled to identify specific genes for the condition.
Now, in a novel approach analyzing genetic influences on more than 4,000 people with schizophrenia, the research team has identified distinct gene clusters that contribute to eight different classes of schizophrenia.
“Genes don’t operate by themselves,” said C. Robert Cloninger, MD, PhD, one of the study’s senior investigators. “They function in concert much like an orchestra, and to understand how they’re working, you have to know not just who the members of the orchestra are but how they interact.”
Researchers have developed a high-tech method to rid the body of infections — even those caused by unknown pathogens. A device inspired by the spleen can quickly clean blood of everything from Escherichia coli to Ebola, researchers report on 14 September in Nature Medicine.
The device uses a modified version of mannose-binding lectin (MBL), a protein found in humans that binds to sugar molecules on the surfaces of more than 90 different bacteria, viruses and fungi, as well as to the toxins released by dead bacteria that trigger the immune overreaction in sepsis.
The researchers coated magnetic nanobeads with MBL. As blood enters the biospleen device, it passes the MBL-equipped nanobeads, which bind to most pathogens. A magnet on the biospleen device then pulls the beads and their quarry out of the blood, which can then be routed back into the patient.
To test the device, Donald Ingber and his team infected rats with either E. coli or Staphylococcus aureus and filtered blood from some of the animals through the biospleen. Five hours after infection, 89% of the rats whose blood had been filtered were still alive, compared with only 14% of those that were infected but not treated. The researchers found that the device had removed more than 90% of the bacteria from the rats' blood. The rats whose blood had been filtered also had less inflammation in their lungs and other organs, suggesting they would be less prone to sepsis.
The researchers then tested whether the biospleen could handle the volume of blood in an average adult human — about 5 liters. They ran human blood containing a mixture of bacteria and fungi through the biospleen at a rate of 1 liter per hour, and found that the device removed most of the pathogens within five hours.
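Those throughput figures lend themselves to a simple well-mixed recirculation model — an illustration only, not the authors' analysis. If a blood reservoir of volume V is pumped through the device at rate Q, and a fraction f of pathogens is captured on each pass (f here is an assumed per-pass efficiency, not a number from the study), the pathogen concentration decays exponentially:

```python
import math

def remaining_fraction(t_hours, V=5.0, Q=1.0, f=0.9):
    """Fraction of pathogens remaining after t hours of recirculation.

    Models a well-mixed reservoir of volume V (liters) filtered at
    rate Q (liters/hour) with per-pass capture efficiency f, so that
    dC/dt = -(f * Q / V) * C, giving C(t)/C(0) = exp(-f * Q * t / V).
    V and Q match the human-scale test described above; f is assumed.
    """
    return math.exp(-f * Q * t_hours / V)

# Remaining pathogen fraction over a five-hour run
for t in (1, 3, 5):
    print(t, remaining_fraction(t))
```

Because only one blood volume passes through the device in five hours at this flow rate, clearance in this toy model is limited by the recirculation rate as much as by the beads' capture efficiency — a design trade-off the flow rate Q controls directly.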
Discovery might ultimately lead to new, more energy-efficient transistors and microchips.
When moving through a conductive material in an electric field, electrons tend to follow the path of least resistance — which runs in the direction of that field.
But now physicists at MIT and the University of Manchester have found an unexpectedly different behavior under very specialized conditions — one that might lead to new types of transistors and electronic circuits that could prove highly energy-efficient.
They’ve found that when a sheet of graphene — a two-dimensional array of pure carbon — is placed atop another two-dimensional material, electrons instead move sideways, perpendicular to the electric field. This happens even without the influence of a magnetic field — the only other known way of inducing such a sideways flow.
What’s more, two separate streams of electrons would flow in opposite directions, both crosswise to the field, canceling out each other’s electrical charge to produce a “neutral, chargeless current,” explains Leonid Levitov, an MIT professor of physics and a senior author of a paper describing these findings this week in the journal Science.
The exact angle of this current relative to the electric field can be precisely controlled, Levitov says. He compares it to a sailboat sailing perpendicular to the wind, its angle of motion controlled by adjusting the position of the sail.
Levitov and co-author Andre Geim at Manchester say this flow could be altered by applying a minute voltage on the gate, allowing the material to function as a transistor. Currents in these materials, being neutral, might not waste much of their energy as heat, as occurs in conventional semiconductors — potentially making the new materials a more efficient basis for computer chips.
“It is widely believed that new, unconventional approaches to information processing are key for the future of hardware,” Levitov says. “This belief has been the driving force behind a number of important recent developments, in particular spintronics” — in which the spin of electrons, not their electric charge, carries information.
A Japanese woman in her 70s is the world's first recipient of cells derived from induced pluripotent stem cells, a technology that has created great expectations since it could offer the same advantages as embryo-derived cells but without some of the controversial aspects and safety concerns.
In a two-hour procedure, a team of three eye specialists led by Yasuo Kurimoto of the Kobe City Medical Center General Hospital transplanted a 1.3 by 3.0 millimeter sheet of retinal pigment epithelium cells into an eye of the Hyogo prefecture resident, who suffers from age-related macular degeneration.
The procedure took place at the Institute of Biomedical Research and Innovation Hospital, next to the RIKEN Center for Developmental Biology (CDB) where ophthalmologist Masayo Takahashi had developed and tested the epithelium sheets. She derived them from the patient's skin cells, after producing induced pluripotent stem (iPS) cells and then getting them to differentiate into retinal cells. Afterwards, the patient experienced no effusive bleeding or other serious problems, RIKEN has reported.
The patient “took on all the risks that go with the treatment as well as the surgery”, Kurimoto said in a statement released by RIKEN. “I have deep respect for the bravery she showed in resolving to go through with it.”
He hit a somber note in thanking Yoshiki Sasai, a CDB researcher who recently committed suicide. “This project could not have existed without the late Yoshiki Sasai’s research, which led the way to differentiating retinal tissue from stem cells.”
Kurimoto also thanked Shinya Yamanaka, a stem-cell scientist at Kyoto University “without whose discovery of iPS cells, this clinical research would not be possible.” Yamanaka shared the 2012 Nobel Prize in Physiology or Medicine for that work.
Kurimoto performed the procedure a mere four days after a health-ministry committee gave Takahashi clearance for the human trials (see 'Next-generation stem cells cleared for human trial').
Gibbons have such strange, scrambled DNA, it looks like someone has taken a hammer to it. Their genome has been massively reshuffled, and some biologists say that could be how new gibbon species evolved.
Gibbons are apes, and were the first to break away from the line that led to humans. There are around 16 living gibbon species, in four genera. They all have small bodies, long arms and no tails. But it's what gibbons don't share that is most unusual. Each species carries a distinct number of chromosomes in its genome: some species have just 38, some as many as 52.
"This 'genome plasticity' has always been a mystery," says Wesley Warrenof Washington University in St Louis, Missouri. It is almost as if the genome exploded and was then pieced back together in the wrong order. To understand why, Warren and his colleagues have now produced the first draft of a gibbon genome. It comes from a female northern white-cheeked gibbon (Nomascus leucogenys) called Asia.Inside the genome, Warren and his colleagues may have identified one of the players responsible for the reshuffling. It is called LAVA, and it is a piece of DNA called a retrotransposon that inserts itself into the genetic code. Seemingly unique to gibbons, LAVA tends to slip into genes that help control the way chromosomes pair up during cell division. By altering how those genes work, LAVA has made the gibbon genome unstable.
"We believe this is the driving force that causes, for want of a better word, the 'scrambling' of the genome," says Warren. However, solving this mystery has created another. Such dramatic genome changes are normally associated with diseases such as cancer, and should be harmful. "It's a complete mystery still how these genomes are able to pass from one generation to the next and not cause any major issues in terms of survival of the species," says Warren. It may be that genomes are much more resilient than anyone expected, says James Shapiro at the University of Chicago. "The genome can endure lots of changes and still function."
Researchers at the University of California, San Diego have built the first 500 Gigahertz (GHz) photon switch. “Our switch is more than an order of magnitude faster than any previously published result to date,” said UC San Diego electrical and computer engineering professor Stojan Radic. “That exceeds the speed of the fastest lightwave information channels in use today.” The work took nearly four years to complete and it opens a fundamentally new direction in photonics – with far-reaching potential consequences for the control of photons in optical fiber channels.
According to an article in the journal Science, switching photons at such high speeds was made possible by advances in the control of a strong optical beam using only a few photons, and by the scientists’ ability to engineer the optical fiber itself with accuracy down to the molecular level.
In the research paper, Radic and his colleagues in the UC San Diego Jacobs School of Engineering argue that ultrafast optical control is critical to applications that must manipulate light beyond the conventional electronic limits. In addition to very fast beam control and fast switching, the latest work opens the way to a new class of sensitive receivers (also capable of operating at very high rates), faster photon sensors, and optical processing devices.
To build the new switch, the UC San Diego team developed a new measurement technique capable of resolving sub-nanometer fluctuations in the fiber core. This was critical because local fiber dispersion varies substantially, even with small core fluctuations, and until recently, control of such small variations was not considered feasible, particularly over long device lengths.
In the experiment, a three-photon input was used to manipulate a Watt-scale beam at a speed exceeding 500 Gigahertz.
In their research, the engineers in the Photonic Systems Laboratory of UC San Diego’s Qualcomm Institute demonstrated that fast control becomes possible in fiber made of silica glass. “Silica fiber represents a nearly ideal physical platform because of very low optical loss, exceptional transparency and kilometer-scale interaction lengths,” noted Radic. “We showed that a silica fiber core can be controlled with sub-nanometer precision and be used for fast, few-photon control.”
Once the team was able to profile the fluctuations of the actual fiber, it became clear that this level of control — long considered infeasible over long device lengths — was achievable in practice.
A 24-year-old woman has discovered that her cerebellum is completely missing, explaining some of the unusual problems she has had with movement and speech. The case highlights just how adaptable the organ is.
The discovery was made when the woman was admitted to the Chinese PLA General Hospital of Jinan Military Area Command in Shandong Province complaining of dizziness and nausea. She told doctors she'd had problems walking steadily for most of her life, and her mother reported that she hadn't walked until she was 7 and that her speech only became intelligible at the age of 6.
Doctors did a CAT scan and immediately identified the source of the problem – her entire cerebellum was missing (see scan, below left). The space where it should be was empty of tissue. Instead it was filled with cerebrospinal fluid, which cushions the brain and provides defence against disease.
The cerebellum – sometimes known as the "little brain" – is located underneath the two hemispheres. It looks different from the rest of the brain because it consists of much smaller and more compact folds of tissue. It represents about 10 per cent of the brain's total volume but contains 50 per cent of its neurons.
Although it is not unheard of to have part of your brain missing, either congenitally or from surgery, the woman joins an elite club of just nine people who are known to have lived without their entire cerebellum. A detailed description of how the disorder affects a living adult is almost non-existent, say doctors from the Chinese hospital, because most people with the condition die at a young age and the problem is only discovered on autopsy (Brain, doi.org/vh7).
The cerebellum's main job is to control voluntary movements and balance, and it is also thought to be involved in our ability to learn specific motor actions and speak. Problems in the cerebellum can lead to severe mental impairment, movement disorders, epilepsy or a potentially fatal build-up of fluid in the brain. However, in this woman, the missing cerebellum resulted in only mild to moderate motor deficiency, and mild speech problems such as slightly slurred pronunciation. Her doctors describe these effects as "less than would be expected", and say her case highlights the remarkable plasticity of the brain.
Induced pluripotent stem cells (iPSCs) are commonly generated by transduction of Oct4, Sox2, Klf4, and Myc (OSKM) into cells. Although iPSCs are pluripotent, they frequently exhibit high variation in terms of quality, as measured in mice by chimera contribution and tetraploid complementation. Reliably high-quality iPSCs will be needed for future therapeutic applications. Here, we show that one major determinant of iPSC quality is the combination of reprogramming factors used.
Based on tetraploid complementation, we found that ectopic expression of Sall4, Nanog, Esrrb, and Lin28 (SNEL) in mouse embryonic fibroblasts (MEFs) generated high-quality iPSCs more efficiently than other combinations of factors including OSKM. Although differentially methylated regions, transcript number of master regulators, establishment of specific superenhancers, and global aneuploidy were comparable between high- and low-quality lines, aberrant gene expression, trisomy of chromosome 8, and abnormal H2A.X deposition were distinguishing features that could potentially also be applicable to humans.