NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL on the top right of the screen) and display all the relevant postings SORTED by TOPICS.
You can also type your own query:
e.g., if you are looking for articles involving "dna" as a keyword
Hens do not have teeth, and humans do not have tails. Research suggests we all have "what it takes" for a tail, and hens, indeed, have the genes that encode for teeth; however, only in very rare situations do these traits manifest themselves as a phenotype. This phenomenon is called atavism—the reappearance of a trait that had been lost during evolution. Our genes do not determine who we are, but with atavism, they can sometimes serve as reminders of our evolutionary past.
Traits that appear or disappear over time are not the result of newly mutated genes encoding defective versions of the proteins associated with teeth or tails, nor are they caused by a loss of existing genes. Instead, a growing body of experimental evidence has shown such traits reflect changes in how, where, and when these genes are expressed.
Even though birds lost teeth as physical structures between 60 and 80 million years ago, several studies have shown that the tissues within birds that would normally produce teeth still retain the potential to do so. For example, in 1821, Geoffroy Saint-Hilaire was the first scientist to publish the observation that some bird embryos exhibited evidence of tooth formation, but his contemporaries considered his work flawed. Since then, however, many investigators have unearthed molecular evidence that the genes involved in odontogenesis (tooth development) are indeed retained in chickens.
Despite this discovery, no one had yet demonstrated that chickens could develop teeth without external cues. This situation soon changed, however, when researchers Matthew Harris (a graduate student at the time) and John Fallon launched a study involving chickens with a particular kind of autosomal recessive mutation (Harris et al., 2006). These chickens, designated by the abbreviation ta2 for talpid-2, displayed signs reminiscent of early tooth development.
The researchers needed a positive control with which to compare their hens' teeth, that is, a closely related animal in which teeth occur. Typically, the nonmutant or "wild-type" phenotype serves as a control in gene mutation experiments, but this was an exceptional case in that the wild-type chicken doesn't have teeth. Harris and Fallon therefore needed to compare the structures they believed to be teeth in their ta2 mutant chickens with the next best thing: the chicken's closest living relative that still has teeth, a fellow archosaur, the crocodile. Accordingly, the researchers examined the expression of several biomarkers in wild-type chicken embryos, ta2 mutant embryos, and crocodile embryos. They found that the ta2 mutant oral cavities appeared developmentally closer to those of the crocodiles than to those of their wild-type siblings. These results thus demonstrated that all the genetic pieces of the tooth-building puzzle still exist in chickens, but that over the last 80 million years the instructions have evolved to tell those pieces to do something different.
The peripheral olfactory system is unparalleled in its ability to detect and discriminate among an extremely large number of volatile compounds in the environment. To detect this wide variety of volatiles, most organisms have evolved large families of receptor genes that typically encode 7-transmembrane proteins expressed in olfactory neurons. Coding of information in the peripheral olfactory system depends on two fundamental factors: which subset of the odorant receptor repertoire an individual odor interacts with, and the mode of signaling that each receptor–odor interaction elicits, activation or inhibition. Each volatile chemical in the environment is thought to interact with a specific subset of odorant receptors depending upon the odor's structure and the binding sites on the receptor. The odor information encoded by the peripheral olfactory neurons is subsequently processed, transformed and integrated in the central nervous system to generate specific behavioral responses that are critical for survival, such as finding food, finding mates and avoiding predators.
A group of researchers has now developed a cheminformatics pipeline that predicts receptor–odorant interactions from a large collection of chemical structures (>240,000) for receptors that have been tested against a smaller panel of odorants (∼100). Using a computational approach, they first identified shared structural features from known ligands of individual receptors. They then used these features to screen in silico new candidate ligands from >240,000 potential volatiles for several odorant receptors (Ors) in the Drosophila (fruit fly) antenna. Functional experiments on 9 Ors support a high success rate (∼71%) for the screen, resulting in the identification of numerous new activators and inhibitors. Such computational prediction of receptor–odor interactions has the potential to enable systems-level analysis of olfactory receptor repertoires across organisms.
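A minimal sketch of this kind of ligand-based virtual screen (the feature sets and names below are invented stand-ins for real structural descriptors, and this is not the authors' actual pipeline): learn the structural features shared by a receptor's known ligands, then rank candidate volatiles by their Tanimoto similarity to that shared profile.

```python
def shared_features(known_ligands):
    """Features present in every known ligand (each ligand is a feature set)."""
    profile = set(known_ligands[0])
    for lig in known_ligands[1:]:
        profile &= set(lig)
    return profile

def tanimoto(a, b):
    """Tanimoto similarity between two feature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def screen(known_ligands, candidates, threshold=0.5):
    """Rank candidate odorants against the receptor's shared-feature profile."""
    profile = shared_features(known_ligands)
    scored = [(name, tanimoto(profile, feats))
              for name, feats in candidates.items()]
    return sorted((s for s in scored if s[1] >= threshold),
                  key=lambda x: -x[1])

# Toy usage: two known ligands of a hypothetical receptor, screened against
# two candidate volatiles (feature sets are invented stand-ins).
known = [{"ester", "C6_chain", "carbonyl"}, {"ester", "C8_chain", "carbonyl"}]
candidates = {
    "hexyl_acetate": {"ester", "carbonyl", "C6_chain"},
    "benzene": {"aromatic_ring"},
}
hits = screen(known, candidates)
```

In a real pipeline the feature sets would be chemical fingerprints computed from molecular structures, but the screen-by-shared-features logic is the same.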
Researchers in Germany have invented micromotors that can propel themselves through water while degrading organic pollutants. The micromotors, which run on dilute hydrogen peroxide, could be used to clean up small reservoirs, pipes and other hard to reach places.
Organic pollutants are found in many industrial wastewaters, including those from textile and pharmaceutical companies and from agriculture. They are an increasing problem because they are often resistant to environmental degradation and cannot be broken down with conventional biological or chemical water treatments.
Micromotors could help. Last year, building on previous uses of micromotors as on-chip biosensors and cell transporters, Joseph Wang, at the University of California, San Diego in the US, and colleagues developed self-propelled micromotors that could capture oil droplets – thereby offering a means to clean up small oil spills. Only now, however, have micromotors been used to actually degrade pollutants. ‘This study indicates the great potential of micromotors for environmental monitoring and remediation,’ says Wang.
Developed by Samuel Sanchez and colleagues at the Leibniz Institute for Solid State and Materials Research (IFW), the latest micromotors consist of a tubular core of platinum surrounded by iron. When the motors are released into polluted water containing dilute hydrogen peroxide, the platinum cores convert the peroxide into oxygen bubbles while the surrounding iron produces hydroxyl radicals. The bubbles propel the micromotors along, while the hydroxyl radicals oxidise organic pollutants.
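Schematically, the two reactions at work (standard catalytic chemistry for platinum and iron, sketched here rather than taken from the paper) are:

```latex
% Propulsion: platinum catalyses peroxide disproportionation into oxygen bubbles
2\,\mathrm{H_2O_2} \;\overset{\mathrm{Pt}}{\longrightarrow}\; 2\,\mathrm{H_2O} + \mathrm{O_2}\!\uparrow
% Degradation: iron drives Fenton-type production of hydroxyl radicals
\mathrm{Fe^{2+}} + \mathrm{H_2O_2} \;\longrightarrow\; \mathrm{Fe^{3+}} + \mathrm{OH^{-}} + {}^{\bullet}\mathrm{OH}
```

The same peroxide fuel thus drives both the motion (oxygen bubbles) and the cleanup (hydroxyl radicals).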
Two experts in micromotors, Ayusman Sen at Penn State University in the US and Martin Pumera at Nanyang Technological University in Singapore, both say that the big advantage of the micromotors is their self-propulsion, which speeds up reaction rates and, therefore, quickly degrades pollutants. ‘The hydroxyl radicals can reach the target pollutant molecules much faster than would be possible by simple diffusion,’ says Pumera.
The micromotors would probably not be able to remediate ‘huge amounts’ of waste water, says Sanchez. ‘We aim to clean contaminated capillaries, small pipes and places difficult to reach,’ he adds. ‘We are dealing with applications especially for the microscale and environments hard to get to.’
Li Zhang at the Chinese University of Hong Kong says the results are ‘striking’, and hold promise for environmental applications. ‘To date, though several research groups have been working on micromotors, they have mainly put great efforts on biological and biomedical applications,’ he says. ‘It is apparent that for industrial application, such as wastewater treatment, this process needs to be further scaled-up and the micromotors require multi-functionality. I think it is worth doing those trials and continuing this research topic.’
It’s a question that’s perplexed philosophers for centuries and scientists for decades: where does consciousness come from?
Neuroscientist Christof Koch, chief scientific officer at the Allen Institute for Brain Science, thinks he might know the answer. According to Koch, consciousness arises within any sufficiently complex, information-processing system. All animals, from humans on down to earthworms, are conscious; even the internet could be. That's just the way the universe works.
What Koch proposes is a scientifically refined version of an ancient philosophical doctrine called panpsychism, and, coming from someone else, it might sound more like spirituality than science. But Koch has devoted the last three decades to studying the neurological basis of consciousness. His work at the Allen Institute now puts him at the forefront of the BRAIN Initiative, the massive new effort to understand how brains work, which will begin next year.
Koch's insights have been detailed in dozens of scientific articles and a series of books, including last year's Consciousness: Confessions of a Romantic Reductionist. Wired talked to Koch about his understanding of this age-old question.
Water fleas live only days, or, under optimal conditions, weeks, but their mortality increases sharply with age, as is the case in longer-lived animals such as humans. Other animals — such as the hermit crab, the red abalone and the hydra, a tiny freshwater animal that can live for centuries — buck that trend, enjoying near-constant levels of fertility and mortality.
A comparison of standardized demographic patterns across 46 species, published in Nature, suggests that the vast diversity of 'ageing strategies' among them challenges the notion that evolution inevitably leads to senescence (rising mortality and declining fertility with age), says Owen Jones, a biologist at the University of Southern Denmark in Odense, who led the study.
“By taking a grand view and doing a survey across species, we found plenty of violations of this underpinning theory,” says Jones.
To compare fertility and mortality patterns, the authors assembled published life-history data sets for 11 mammals, 12 other vertebrates, 10 invertebrates, 12 vascular plants and a green alga, and standardized the trajectories — dividing mortality rates at each point in the lifespan by the average mortality rate.
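The standardization step can be sketched in a few lines (an illustration of the described procedure, not the authors' code): divide the mortality rate at each point in the lifespan by the species' average mortality rate, so that long- and short-lived species can be compared on the same relative scale.

```python
def standardize_mortality(rates):
    """Relative mortality trajectory: rate at each age / mean rate over life."""
    mean = sum(rates) / len(rates)
    return [r / mean for r in rates]

# Example: a 'human-like' trajectory with sharply rising mortality versus a
# 'hydra-like' flat one (numbers are illustrative, not from the study).
rising = standardize_mortality([0.01, 0.02, 0.04, 0.08, 0.16])
flat = standardize_mortality([0.05, 0.05, 0.05, 0.05])
```

On this scale a strongly senescent species ends its life well above 1, while a non-senescent one hovers near 1 throughout.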
The researchers found no association between the length of life and the degree of senescence. Of the 24 species showing the most abrupt increase in mortality with age, 11 had relatively long lifespans and 13 had relatively short lifespans. A similar split in lifespan occurred in the species that had a less abrupt increase in mortality.
Laurence Mueller, an evolutionary biologist at the University of California, Irvine, is more cautious. “Organisms in the field die from a lot of causes — for example, predation or disease — other than ageing,” he says. “Unfortunately, the unknown source of mortality in field-data sets confounds the age-related patterns of senescence, which is what we’re all interested in,” he adds.
With viruses serving as construction crews and DNA as the blueprint, biotechnology may hold the key to postlithography ICs
Biological self-assembly, as this field of research is called, has a compelling appeal. Living creatures produce the most complex molecular structures known to science. Crafted over eons by natural selection, these three-dimensional arrangements of atoms manifest a precision and fidelity, not to mention a minuteness, far beyond the capabilities of current technology. Under the direction of genes encoded in DNA, cells construct proteins that put together the fine structures necessary for life. And now that scientists can alter the genetic codes of microbes with increasing ease and accuracy, more and more research is showing that this same mechanism can be forced to construct and assemble materials critical not to nature necessarily, but to future generations of electronics.
Most scientists say the technology will first be used to construct sensors consisting of one or a few nanodevices connected to ordinary silicon circuitry. But that's not what drives the research. Their ultimate ambition is to upend current fabrication methods by genetically engineering microbes to build nanoscale circuits based on codes implanted in their DNA. No more cutting patterns into semiconductor wafers, an increasingly arduous process involving lasers, plasma, exotic gases, and high temperatures in expensive industrial environments. Instead, a room-temperature potion of biomolecules will execute, on cue, a genetically programmed chemical dance that ends in a functioning circuit with nanometer-scale dimensions.
In 2001, Belcher and UCSB's Evelyn Hu founded Semzyme (Cambridge, MA), a company that will exploit biological self-assembly to make electronic materials as well as more biotechnology-specific applications, such as long-term storage of DNA. The company is set to begin operations this year and is choosing a first product to bring to market.
Big, established companies are taking this research seriously, too. The Army's Institute for Collaborative Biotechnologies has attracted sponsorship from Aerospace Corp., Applied Biosystems, Genencor, IBM, SAIC, and Becton Dickinson.
Genencor, in particular, took an early interest in bioengineering viruses, forming a $35 million partnership with silicon materials giant Dow Corning in 2001. In the short term, the two firms are merging peptides with silicon-based chemicals to make fabric treatment and cosmetic products. Sensors and other electronics elements are future targets.
DuPont, too, is tinkering with bioevolved peptides. According to Tim Gierke, the company has identified one short-term application: purifying carbon nanotubes. Recently, these hollow pipes just a few nanometers wide have been turned into experimental logic circuits and other devices. Depending on the nanotube's structure, it acts as either a semiconductor or a metal. Unfortunately, current methods generate tubes of both types along with a messy soup of soot, and there's no good way of sorting anything out.
So DuPont evolved peptides that selectively grab the nanotubes and ignore other forms of carbon. To separate the semiconductors from the metallics, the company turned to another important biomolecule: DNA. DuPont scientists discovered that when a particular form of DNA binds to carbon nanotubes, metallic and semiconducting tubes can, to a degree, be separated using a common laboratory trick.
Glass media that store data in three spatial and two optical dimensions could outlast us all
An experimental computer memory format uses five dimensions to store data with a density that would allow more than 300 terabytes to be crammed onto a standard optical disc. But unlike an optical disc, which is made of plastic, the experimental medium is quartz glass. Researchers have long been trying to use glass as a storage material because it is far more durable than existing plastics.
A team led by optoelectronics researcher Jingyu Zhang at the University of Southampton, in the U.K., has demonstrated that information can be stored in glass by changing its birefringence, a property related to how polarized light moves through the glass.
In conventional optical media, such as DVDs, you store data by burning tiny pits into one or more layers of the plastic disc, which means you're using three spatial dimensions to store information. But in Zhang's experiment, he and colleagues exploit two additional, optical dimensions.
When their data-recording laser marks the glass, it doesn’t just make a pit: it changes two parameters of the birefringence of the glass. The researchers set these parameters, called slow axis orientation and strength of retardance, by controlling the polarization and intensity of their laser beam. Add the two optical dimensions to three spatial coordinates and the result is "5D data storage," as Zhang calls it.
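A back-of-envelope sketch of what the two optical dimensions buy (the level counts below are assumptions for illustration; the article gives the ~300 TB figure but not the per-voxel encoding): each voxel stores extra bits equal to the base-2 logarithm of the number of distinguishable orientation and retardance levels.

```python
import math

def bits_per_voxel(n_orientations, n_retardance_levels):
    """Bits encoded per voxel by the two optical parameters."""
    return math.log2(n_orientations) + math.log2(n_retardance_levels)

def capacity_terabytes(n_voxels, n_orientations, n_retardance_levels):
    """Total capacity in terabytes for a given number of voxels."""
    bits = n_voxels * bits_per_voxel(n_orientations, n_retardance_levels)
    return bits / 8 / 1e12

# With (assumed) 8 distinguishable slow-axis orientations and 4 retardance
# levels, 4.8e14 voxels would hold roughly 300 TB.
tb = capacity_terabytes(4.8e14, 8, 4)
```

The multiplicative effect is the point: every extra distinguishable optical level raises capacity without adding a single voxel.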
Previous attempts at storing data in glass consisted of burning tiny holes into the material, but that approach means that an optical microscope is required to read out the data. Zhang's goal is to write data into glass in a format readable with lasers, like existing optical discs, to keep data-reading costs down.
The writing costs will be higher, though, since changing birefringence in glass requires fine control of a laser's polarization and intensity. Earlier attempts involved rotating the laser and using an attenuator, Zhang says, but that could take several seconds between writing operations, making it far too slow for practical applications.
Instead Zhang and colleagues bounced the beam of their ultrafast writing laser off a tiny, commercially available LCD-like screen called a spatial light modulator, or SLM (see illustration below). It changes its reflectivity quickly in response to electrical charges, giving the team fine control over the intensity of the reflected beam.
At its core, RoboEarth is a World Wide Web for robots: a giant network and database repository where robots can share information and learn from each other about their behavior and their environment.
Bringing a new meaning to the phrase “experience is the best teacher”, the goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots, paving the way for rapid advances in machine cognition and behaviour, and ultimately, for more subtle and sophisticated human-machine interaction.
RoboEarth offers a complete Cloud Robotics infrastructure, which includes everything needed to close the loop from robot to RoboEarth to robot. The RoboEarth World-Wide-Web style database is implemented on a server with Internet and Intranet functionality, making it attractive for both research and business applications. It stores information required for object recognition (e.g., images, object models), navigation (e.g., maps, world models), tasks (e.g., action recipes, manipulation strategies) and hosts intelligent services (e.g., image annotation, offline learning).
To close the loop, the RoboEarth Collaborators have implemented components for a ROS compatible, robot-unspecific, high-level operating system as well as components for robot-specific, low level controllers accessible via a Hardware Abstraction Layer.
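The share-and-reuse loop described above can be illustrated with a purely hypothetical sketch (the class and method names are invented, not RoboEarth's actual API): one robot uploads an "action recipe" to the shared database, and another robot later retrieves and reuses it.

```python
class KnowledgeBase:
    """Stand-in for a RoboEarth-style shared database of action recipes."""
    def __init__(self):
        self._recipes = {}

    def upload(self, task, recipe):
        """A robot shares a recipe (an ordered list of steps) for a task."""
        self._recipes[task] = recipe

    def download(self, task):
        """Another robot retrieves the recipe, or None if no one has shared it."""
        return self._recipes.get(task)

kb = KnowledgeBase()
kb.upload("open_door", ["locate_handle", "grasp", "turn", "pull"])
steps = kb.download("open_door")  # a second robot reuses the first's experience
```

The real system adds robot-unspecific semantic descriptions and a hardware abstraction layer so that recipes written for one robot can be executed by another.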
A new X-ray movie technique using extreme ultraviolet (XUV) pulses from Artemis could help unravel the mysteries of phenomena such as magnetism and high-temperature superconductivity.
The new materials science beamline at Artemis has succeeded in making movies of electronic and structural changes in a complex material, using XUV pulses produced through high harmonic generation, a technique where a laser is fired into a gas jet and just one part in a million is converted into XUV pulses.
Members of the international collaboration from the STFC Central Laser Facility, Diamond Light Source and the universities of Hamburg, Lausanne, Oxford and Padua used these XUV pulses to study a layered crystal of tantalum disulphide. The resulting movies – whose frames captured slices of time lasting less than a millionth of a millionth of a second – revealed that electrical conductivity in this material is governed by strong interactions between the electrons themselves.
Understanding this type of ‘correlated-electron’ behaviour is of crucial importance, since it underlies effects such as high-temperature superconductivity. Superconductivity is the phenomenon where electric current can travel through a material with no loss, because the material is a perfect conductor with zero resistance. Although superconductors are widely used, how they work is less well understood. Even high temperature superconductors need to be kept at -170°C, requiring sophisticated cryogenics systems.
The world has ever-increasing energy requirements, and the search is on to find superconductors that work at room temperature. Understanding the complex physics that underlies this phenomenon is the key.
They call it the "white plague," and like its black counterpart from the Middle Ages, it conjures up visions of catastrophic death, with a cause that was at first uncertain even as it led to widespread destruction, in this case among marine corals in the Caribbean Sea.
Now one of the possible causes of this growing disease epidemic has been identified -- a group of viruses that are known as small, circular, single-strand DNA (or SCSD) viruses. Researchers in the College of Science at Oregon State University say these SCSD viruses are associated with a dramatic increase in the white plague that has erupted in recent decades.
Prior to this, it had been believed that the white plague was caused primarily by bacterial pathogens. Researchers are anxious to learn more about this disease and possible ways to prevent it, because its impact on coral reef health has exploded.
"Twenty years ago you had to look pretty hard to find any occurrences of this disease, and now it's everywhere," said Nitzan Soffer, a doctoral student in the Department of Microbiology at OSU and lead author on a new study just published in the journal of the International Society for Microbial Ecology. "It moves fast and can wipe out a small coral colony in a few days.
"In recent years the white plague has killed 70-80 percent of some coral reefs," Soffer said. "There are 20 or more unknown pathogens that affect corals and in the past we've too-often overlooked the role of viruses, which sometimes can spread very fast."
This is one of the first studies to show viral association with a severe disease epidemic, scientists said. It was supported by the National Science Foundation.
Marine wildlife diseases are increasing in prevalence, the researchers pointed out. Reports of non-bleaching coral disease have increased more than 50 times since 1965, and are contributing to declines in coral abundance and cover.
White plague is one of the worst. It causes rapid tissue loss, affects many species of coral, and can cause partial or total colony mortality. Some, but not all types are associated with bacteria. Now it appears that viruses also play a role. Corals with white plague disease have higher viral diversity than their healthy counterparts, the study concluded.
Deep beneath the Pacific’s surface, the world’s tallest waves have been discovered. Reaching up to 800 feet, they are known to researchers as internal waves.
Almost three miles beneath the ocean’s surface, internal waves form at the boundary between layers of water with different densities in a deep South Pacific trench known as the Samoan Passage. These giant waves arise over ridges on the ocean floor in a narrow channel to the northwest of Samoa, where cold, saltier water rises up into the warmer water above and then plunges back down into the denser water on the other side of the ridge.
The findings are published in Geophysical Research Letters, where Professor Matthew Alford writes, “the flow accelerates substantially at the primary sill within the passage, reaching speeds as great as 0.55 m s⁻¹. A strong hydraulic response is seen, with layers first rising to clear the sill and then plunging hundreds of meters downward.”
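The "hydraulic response" Alford describes can be characterised by an internal Froude number, Fr = U / sqrt(g'h), where g' is the reduced gravity across the density interface and h the thickness of the dense bottom layer; Fr > 1 marks supercritical flow that must plunge back down past the sill. A hypothetical illustration (only the 0.55 m/s speed comes from the quote; the density contrast and layer depth are assumed):

```python
import math

def reduced_gravity(delta_rho, rho=1025.0, g=9.81):
    """Reduced gravity g' = g * (density difference / reference density)."""
    return g * delta_rho / rho

def froude_number(u, delta_rho, h):
    """Internal Froude number; Fr > 1 marks supercritical, hydraulic flow."""
    return u / math.sqrt(reduced_gravity(delta_rho) * h)

# With an assumed 0.01 kg/m^3 density contrast and a 100 m thick bottom
# layer, the observed 0.55 m/s flow is strongly supercritical.
fr = froude_number(0.55, 0.01, 100.0)
```

Because abyssal density contrasts are tiny, even modest speeds can be supercritical, which is why such dramatic internal hydraulics occur in the deep ocean.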
Although it will never (we can safely assume) be possible for surfers to ride these waves, they do play a much more important role. Scientists say that the waves are essential for mixing nutrients in the ocean.
“Oceanographers used to talk about the so-called ‘dark mixing’ problem, where they knew that there should be a certain amount of turbulence in the deep ocean, and yet every time they made a measurement they observed a tenth of that,” Alford said.
As the dense bottom layer of water flows over two consecutive ridges in the Samoan Passage, it forms waves, much as air does when rising over a mountain. On reaching the lighter, warmer water above, the waves become unstable and break, mixing the two layers of water. The waves may also play a role in driving global currents, a reminder that the seasonal swells we enjoy at our local breaks are no coincidence.
Scientists at the Stanford University School of Medicine have determined the precise anatomical coordinates of a brain “hot spot,” measuring only about one-fifth of an inch across, that is preferentially activated when people view the ordinary numerals we learn early on in elementary school, like “6” or “38.”
Activity in this spot relative to neighboring sites drops off substantially when people are presented with numbers that are spelled out (“one” instead of “1”), homophones (“won” instead of “1”) or “false fonts,” in which a numeral or letter has been altered.
“This is the first-ever study to show the existence of a cluster of nerve cells in the human brain that specializes in processing numerals,” said Josef Parvizi, MD, PhD, associate professor of neurology and neurological sciences and director of Stanford’s Human Intracranial Cognitive Electrophysiology Program. “In this small nerve-cell population, we saw a much bigger response to numerals than to very similar-looking, similar-sounding and similar-meaning symbols.
“It’s a dramatic demonstration of our brain circuitry’s capacity to change in response to education,” he added. “No one is born with the innate ability to recognize numerals.” The finding pries open the door to further discoveries delineating the flow of math-focused information processing in the brain. It also could have direct clinical ramifications for patients with dyslexia for numbers and with dyscalculia, the inability to process numerical information.
Interestingly, said Parvizi, that numeral-processing nerve-cell cluster is parked within a larger group of neurons that is activated by visual symbols that have lines with angles and curves. “These neuronal populations showed a preference for numerals compared with words that denote or sound like those numerals,” he said. “But in many cases, these sites actually responded strongly to scrambled letters or scrambled numerals. Still, within this larger pool of generic neurons, the ‘visual numeral area’ preferred real numerals to the false fonts and to same-meaning or similar-sounding words.”
It seems, Parvizi said, that “evolution has designed this brain region to detect visual stimuli such as lines intersecting at various angles — the kind of intersections a monkey has to make sense of quickly when swinging from branch to branch in a dense jungle.” The adaptation of one part of this region in service of numeracy is a beautiful intersection of culture and neurobiology, he said.
Having nailed down a specifically numeral-oriented spot in the brain, Parvizi’s lab is looking to use it in tracing the pathways described by the brain’s number-processing circuitry. “Neurons that fire together wire together,” said Shum. “We want to see how this particular area connects with and communicates with other parts of the brain.”
Scientists have found evidence of an ancient freshwater lake on Mars that was well suited to support microbial life, the researchers said. The lake, located inside Gale Crater, where NASA's Mars rover Curiosity landed in August 2012, likely covered an area 31 miles long and 3 miles wide, though its size varied over time. Analysis of sedimentary deposits gathered by the rover shows the lake existed for at least tens of thousands of years, and possibly longer, geologist John Grotzinger, with the California Institute of Technology in Pasadena, told reporters at the American Geophysical Union conference in San Francisco.
"We've come to appreciate that this is a habitable system of environments that includes the lake, the associated streams and, at times when the lake was dry, the groundwater," he said. Analysis of clays drilled out from two rock samples in the area known as Yellowknife Bay shows the freshwater lake existed at a time when other parts of Mars were dried up or dotted with shallow, acidic, salty pools ill-suited for life.
In contrast, the lake in Gale Crater could have supported a simple class of rock-eating microbes, known as chemolithoautotrophs, which on Earth are commonly found in caves and hydrothermal vents on the ocean floor, Grotzinger said. Scientists also reported that the clays, which form in the presence of water, were younger than expected, a finding that expands the window of time for when Mars may have been suited for life. The planet's surface is riddled with geologic features carved by water, such as channels, dried up riverbeds, lake deltas and other sedimentary deposits.
Scientists will continue to look for rocks that may have higher concentrations of organics or better chemical conditions for their preservation, Grotzinger said. "A key hurdle that we need to overcome is understanding how those organics may have been preserved over time, from the time they entered the rock to the time that we actually detect them," said Curiosity scientist Jennifer Eigenbrode with NASA's Goddard Space Flight Center in Greenbelt, Maryland.
A super-luminous supernova (SLSN) is a class of supernova whose peak luminosity is several times larger than that of a typical supernova. Mechanisms that can give rise to an SLSN include the explosion of a very massive star in a pair-instability supernova, the interaction of supernova ejecta with circumstellar matter, or a dual-shock quark nova (dsQN) event. SN 2006oz is currently the only SLSN known to exhibit a double-humped lightcurve that is consistent with a dsQN model. The lightcurve of SN 2006oz can be explained by a quark nova occurring 6.5 days after a core-collapse supernova explosion. A dsQN event like SN 2006oz is very rare, since it is estimated to occur at a rate of 1 in every 10,000 core-collapse supernovae.
In the dsQN model, a massive star explodes in a normal core-collapse supernova and leaves behind a rapidly spinning, high-mass neutron star. As the neutron star spins down, its central density gradually increases. This eventually leads to a detonative phase transition known as a quark nova, in which the neutron star violently converts into a quark star. During this process, the neutron star’s outer layer is ejected at ultra-relativistic velocities. An enormous amount of kinetic energy is carried away, since the quark nova ejecta consist of ~100 Earth masses of material travelling close to the speed of light. Although the quark nova occurs several days after the core-collapse supernova, ejecta from the quark nova travel many times faster than the supernova ejecta.
The quark nova ejecta rapidly catch up and collide with ejecta from the preceding supernova. This re-shocks the supernova ejecta and leads to a rise in luminosity over an extended period of time. As a result, a SLSN consisting of a normal core-collapse supernova followed by a quark nova is characterised by a double-humped lightcurve. The fainter first hump corresponds to the core-collapse supernova while the brighter second hump corresponds to the re-shocked supernova ejecta. A double-humped lightcurve indicative of a dsQN is only produced when the quark nova happens ~10 days after the core-collapse supernova. If the time interval between the supernova and quark nova is too long, the supernova ejecta would have dissipated so much that the quark nova basically occurs in isolation. In contrast, if the time interval is too short, the two lightcurves would overlap and prevent a distinct double-hump.
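The timing argument above can be made concrete with a toy sketch (the Gaussian pulse shapes and all parameter values are illustrative, not the authors' model): treat the dsQN lightcurve as the sum of a fainter supernova hump and a brighter re-shock hump delayed by dt days.

```python
import math

def pulse(t, t_peak, peak, width):
    """Gaussian-shaped luminosity pulse (arbitrary units)."""
    return peak * math.exp(-((t - t_peak) / width) ** 2)

def dsqn_lightcurve(t, dt=10.0, sn_peak=1.0, qn_peak=3.0, width=4.0):
    """Total luminosity at time t (days after the supernova peak):
    the core-collapse supernova hump plus the brighter re-shock hump."""
    return pulse(t, 0.0, sn_peak, width) + pulse(t, dt, qn_peak, width)
```

With dt ≈ 10 days the summed curve dips between two distinct humps; shrink dt to a couple of days and the humps merge into a single peak, just as the text argues.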
A new sensor technology developed by researchers at the University of Illinois at Urbana-Champaign and collaborators at Daktari Diagnostics can diagnose HIV/AIDS using just a drop of blood. The device could provide less costly, easy-to-use, immediate disease diagnostics, especially useful in remote areas of the world and locations with limited resources.
This small, disposable biochip can count CD4+/CD8+ T cells quickly and accurately for HIV diagnosis. Developed by the research group of Rashid Bashir, professor and head of the Department of Bioengineering at Illinois, the device uses a microfluidic biochip, a miniaturized chip designed to process fluids and sense cells electronically. It works much like a common blood sugar test, in which a patient puts a drop of blood on a strip and inserts the strip into a handheld reader to get a blood glucose result. In this case, the strip is a biochip inside a cartridge, where white blood cells are captured in a microfluidic chamber coated with proteins.
The portable device reports how many white blood cells and CD4+ T cells (immune cells that are destroyed when a patient is infected with HIV) are present in a drop of blood. A clinical diagnosis of AIDS is made when the CD4 count falls below roughly 200-350 cells per microliter of whole blood.
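The diagnostic rule above amounts to a simple threshold on the CD4 count. A minimal sketch, using the 200 cells/µL end of the range cited in the article as a default cutoff (the function name and exact threshold are purely illustrative):

```python
def cd4_flags_aids(cd4_per_microliter, threshold=200):
    """Return True if the CD4 count falls below a clinical AIDS threshold.

    The article cites a threshold in the 200-350 cells/uL range; 200 is
    used here as the default, purely for illustration.
    """
    return cd4_per_microliter < threshold

print(cd4_flags_aids(150))   # True: below the 200 cells/uL cutoff
print(cd4_flags_aids(500))   # False: within the healthy range
```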
Results of the research have been published in the latest issue of the journal Science Translational Medicine. According to the paper’s co-first authors, Nicholas Watkins and Umer Hassan, the approach can detect sub-populations of white blood cells, such as CD4+ and CD8+ T cells, and it can count white blood cells just as accurately as more complex, time-consuming cell-counting technologies that require larger volumes of blood. And by using the CD4/CD8 ratio, doctors may obtain a more complete “picture” of HIV infection.
The group is working on miniaturizing the setup to make the technology handheld, as well as designing a cartridge that can be mass-produced. The biochip also could be used in many other situations where white blood cell counts are needed.
What if the universe had no beginning, and time stretched back infinitely without a big bang to start things off? That's one possible consequence of an idea called "rainbow gravity," so-named because it posits that gravity's effects on spacetime are felt differently by different wavelengths of light, aka different colors in the rainbow.
Rainbow gravity was first proposed 10 years ago as a possible step toward repairing the rifts between the theories of general relativity (covering the very big) and quantum mechanics (concerning the realm of the very small). The idea is not a complete theory for describing quantum effects on gravity, and is not widely accepted. Nevertheless, physicists have now applied the concept to the question of how the universe began, and found that if rainbow gravity is correct, spacetime may have a drastically different origin story than the widely accepted picture of the big bang.
According to Einstein's general relativity, massive objects warp spacetime so that anything traveling through it, including light, takes a curving path. Standard physics says this path shouldn't depend on the energy of the particles moving through spacetime, but in rainbow gravity, it does. "Particles with different energies will actually see different spacetimes, different gravitational fields," says Adel Awad of the Center for Theoretical Physics at Zewail City of Science and Technology in Egypt, who led the new research, published in October in the Journal of Cosmology and Astroparticle Physics. The color of light is determined by its frequency, and because different frequencies correspond to different energies, light particles (photons) of different colors would travel on slightly different paths through spacetime, according to their energy.
The effects would usually be tiny, so that we wouldn't notice the difference in most observations of stars, galaxies and other cosmic phenomena. But with extreme energies, in the case of particles emitted by stellar explosions called gamma-ray bursts, for instance, the change might be detectable. In such situations photons of different wavelengths released by the same gamma-ray burst would reach Earth at slightly different times, after traveling somewhat altered courses through billions of light-years of time and space. "So far we have no conclusive evidence that this is going on," says Giovanni Amelino-Camelia, a physicist at the Sapienza University of Rome who has researched the possibility of such signals. Modern observatories, however, are just now gaining the sensitivity needed to measure these effects, and should improve in coming years.
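A common way to quantify this kind of effect in quantum-gravity phenomenology (a generic first-order, Planck-suppressed delay, not necessarily the specific dispersion relation of the rainbow-gravity paper above) is Δt ≈ (E/E_Planck) × D/c. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope estimate of an energy-dependent photon arrival delay,
# using the generic first-order Planck-suppressed form dt ~ (E/E_Planck) * D/c.
# This is a common phenomenological ansatz, assumed here for illustration only.

E_PLANCK_GEV = 1.22e19           # Planck energy in GeV
SECONDS_PER_YEAR = 3.156e7

def arrival_delay_s(photon_energy_gev, distance_light_years):
    """Extra travel time (seconds) of a high-energy photon vs. a low-energy one."""
    travel_time_s = distance_light_years * SECONDS_PER_YEAR
    return (photon_energy_gev / E_PLANCK_GEV) * travel_time_s

# A 10 GeV gamma-ray-burst photon travelling 1 billion light-years:
print(arrival_delay_s(10, 1e9))  # ~0.026 s
```

A delay of a few tens of milliseconds accumulated over a billion light-years illustrates why the effect is invisible in everyday observations yet potentially detectable in gamma-ray-burst timing.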
Scientists say they have for the first time successfully grown human hairs using dermal papilla cells taken from the inside of hair follicles.
The method could significantly expand the use of hair transplantation to women with hair loss as well as to men in the early stages of baldness. Dermal papilla cells give rise to hair follicles, and the notion of cloning hair follicles using inductive dermal papilla cells has been around for 40 years or so. However, once the dermal papilla cells are put into conventional, two-dimensional tissue culture, they revert to basic skin cells and lose their ability to produce hair follicles. “So we were faced with a Catch-22: how to expand a sufficiently large number of cells for hair regeneration while retaining their inductive properties,” said co-author Prof Colin Jahoda from Durham University, UK.
The team found a clue to overcoming this barrier in their observations of rodent hair. Rodent papillae can be easily harvested, expanded, and successfully transplanted back into rodent skin, a method pioneered by Dr Jahoda several years ago. The main reason that rodent hair is readily transplantable, the researchers suspected, is that their dermal papillae tend to spontaneously aggregate, or form clumps, in tissue culture. The team reasoned that these aggregations must create their own extracellular environment, which allows the papillae to interact and release signals that ultimately reprogram the recipient skin to grow new follicles.
To test their hypothesis, the team harvested dermal papillae from 7 human donors and cloned the cells in tissue culture. No additional growth factors were added to the cultures. After a few days, the cultured papillae were transplanted between the dermis and epidermis of human skin that had been grafted onto the backs of mice. In five of the seven tests, the transplants resulted in new hair growth that lasted at least six weeks.
DNA analysis confirmed that the new hair follicles were human and genetically matched the donors.
A translucent underwater cave dweller that looks like a skeleton and travels like an inchworm is the newest member of California's array of marine life.
Scientists found a new species of skeleton shrimp — a group of tiny crustaceans that are actually caprellid amphipods, not shrimp — in vials collected from a small cave offshore of Southern California's Catalina Island. The two vials, one containing a male and one containing a female, were housed in the Canadian Museum of Nature in Ottawa.
Lead study author José Manuel Guerra-García, a caprellid expert at the University of Seville in Spain, realized the "shrimp" were a never-before-recognized species during a 2010 visit to the museum. Guerra-García compared the ghostlike creatures with other species of the genus Liropus and confirmed that the tiny crustaceans had never been described.
X-rays transformed medicine a century ago by providing a noninvasive way to detect internal structures in the body. Still, they have limitations: X-rays cannot image the body’s soft tissues, except with the use of contrast-enhancing agents that must be swallowed or injected, and their resolution is limited.
But a new approach developed by researchers at MIT and Massachusetts General Hospital (MGH) could dramatically change that, enabling the most detailed images ever — including clear views of soft tissue without any need for contrast agents.
The new technology “could make X-rays ubiquitous, because of its higher resolution, the fact that the dose would be smaller and the hardware smaller, cheaper, and more capable than current X-rays,” says Luis Velásquez-García, a principal research scientist at MIT’s Microsystems Technology Laboratories and senior author of the PowerMEMS paper.
Velásquez-García says that while conventional X-ray systems show little or no structure in most soft tissues — including all of the body’s major organ systems — the new system would show these in great detail. A test the team performed with an eye from a cadaver using X-rays from a particle accelerator clearly shows “all the structures, the lens and the cornea,” he says. “In time we are confident our system will be able to achieve such resolution with a far simpler and cheaper device.”
The key is to produce coherent beams of X-rays from an array of micron-sized point sources, instead of a spread from a single, large point as in conventional systems, Velásquez-García explains. The team’s approach includes developing hardware that is an innovative application of batch microfabrication processes used to make microchips for computers and electronic devices.
Scientists from the National Institute of Standards and Technology (NIST) and Sandia National Laboratories have added something new to a family of engineered, high-tech materials called metal-organic frameworks (MOFs): the ability to conduct electricity. This breakthrough—conductive MOFs—has the potential to make these already remarkable materials even more useful, particularly for detecting gases and toxic substances.
MOFs are three-dimensional crystalline materials with nanoscale pores made up of metal ions linked by various organic molecules. MOFs have huge surface areas, and scientists can easily control the size of their pores and how the pores interact with molecules by tinkering with their chemistries. These characteristics make them ideal for use as catalysts, membranes or sponges for gas storage or for drug delivery, among other applications.
Thousands of new MOF structures are discovered and characterized each year. While they come in a dizzying array of chemistries and structures, none of them conducts electricity well. The NIST/Sandia team developed a method to modify the electrical conductivity of MOF thin films and to control it over six orders of magnitude.
"MOFs are typically extremely poor electrical conductors because their constituent building blocks, the organic linkers and the metal ions, don't really talk to each other in terms of electrical conduction," says NIST materials engineer Andrea Centrone. "Our work points to a way of controlling and increasing their conductivity."
The group accomplished this by "infiltrating an insulating MOF with redox-active, conjugated guest molecules." In other words, they infused and bound electron-sharing molecules into MOF thin films to create a material that is stable in air and approximately a million times more conductive than the unaltered MOF.
"Based on several spectroscopic experiments, we believe that the guest molecules serve two important purposes: they create additional bridges between the metal ions—copper, in this case—and they accept electrical charge," says NIST chemist Veronika Szalai.
According to NIST physicist Paul Haney, who provided some modeling for the experimental data, the arrangement of the guest molecules in the MOF creates a unique conductivity mechanism while preserving the benefits of the porous MOF crystalline structure.
Hawking has given many lectures to the general public. Below are some of the more recent public lectures. Included with these lectures is a Glossary of some of the terms used.
Into a Black Hole (2008): Is it possible to fall in a black hole, and come out in another universe? Can you escape from a black hole once you fall inside? What have we discovered about black holes?
The Origin of the Universe (2005): Why are we here? Where did we come from? The answer generally given was that humans were of comparatively recent origin, because it must have been obvious, even at early times, that the human race was improving in knowledge and technology. So it can't have been around that long, or it would have progressed even more.
Gödel and the End of Physics (2002): How far can we go in our search for understanding and knowledge? Will we ever find a complete form of the laws of nature - a set of rules that in principle at least enable us to predict the future to an arbitrary accuracy, knowing the state of the universe at one time? A qualitative understanding of the laws has been the aim of philosophers and scientists, from Aristotle onwards.
Space and Time Warps (1999): In science fiction, space and time warps are a commonplace. They are used for rapid journeys around the galaxy, or for travel through time. But today's science fiction is often tomorrow's science fact. So what are the chances for space and time warps?
Does God Play Dice (1999): Can we predict the future, or is it arbitrary and random? In ancient times, the world must have seemed pretty arbitrary. Disasters such as floods or diseases must have seemed to happen without warning or apparent reason. Primitive people attributed such natural phenomena to a pantheon of gods and goddesses, who behaved in a capricious and whimsical way. There was no way to predict what they would do, and the only hope was to win favour by gifts or actions.
The Beginning of Time (1996): Has time itself a beginning, and will it have an end? All the evidence seems to indicate, that the universe has not existed forever, but that it had a beginning, about 15 billion years ago. This is probably the most remarkable discovery of modern cosmology. Yet it is now taken for granted. We are not yet certain whether the universe will have an end.
Life in the Universe (1996): Speculations about how life has developed in the universe, and in particular, the development of intelligent life.
Mapping media coverage of climate change and the vital data needed to understand its causes and impacts.
Climate Commons is an interactive map developed by the Earth Journalism Network. It combines climate-related weather and emissions data, allowing you to compare baseline weather data with anomalies and extreme weather events. The map also plots articles about climate change at the locations they cover.
As early as 2015, your Amazon purchases could be dropped at your door within 30 minutes courtesy of unmanned aerial drones. Amazon CEO Jeff Bezos revealed plans for the delivery service Prime Air (an extension of Amazon Prime which guarantees two-day shipping) in a 60 Minutes prime time interview.
The service would ship orders under five pounds (2.3 kg) after they are packed into small plastic containers and then scooped up by Amazon's custom-built "octocopter." The drone then delivers the package to customers within a 10 mile (16 km) radius of Amazon's fulfillment centers.
Clearly the company will need to jump through various hoops to get the service off the ground, with public safety being a primary concern. "Safety will be our top priority, and our vehicles will be built with multiple redundancies designed to commercial aviation standards," the company says.
The Federal Aviation Administration (FAA) is currently working on rules and regulations for unmanned aerial vehicles, a process which Amazon hopes will be completed sooner rather than later. "We hope the FAA's rules will be in place as early as sometime in 2015. We will be ready at that time."
We have seen a rise in proposals for the use of drones to deliver commercial products. One Australian startup plans to use drones to deliver school textbooks to customers in March 2014, while The Burrito Bomber hopes to be dropping Mexican cuisine on people as soon as 2015. With Amazon's product range, however, Prime Air would be the first to do so on such a large and diverse scale.
It may sound like science fiction, but given that Bezos claims that 300 items per second will be ordered from Amazon on Cyber Monday, it is possible that flocks of Prime Air drones will be zipping around above us in the very near future.