Learning to read Chinese might seem daunting to Westerners used to an alphabetic script, but brain scans of French and Chinese native speakers show that people harness the same brain centers for reading across cultures.
NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1,450 news sources.
NOTE: All articles in the Amazing Science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL at the top right of the screen) to display all relevant postings SORTED by TOPICS. You can also type your own query, e.g., if you are looking for articles involving "dna" as a keyword:
• 3D-printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green-energy • history • language • map • material-science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
A decades-old method called the “bootstrap” is enabling new discoveries about the geometry underlying all quantum theories.
In the 1960s, the charismatic physicist Geoffrey Chew, Member (1956) in the School of Mathematics/Natural Sciences, espoused a radical vision of the universe, and with it, a new way of doing physics, arguing that “Nature is as it is because this is the only possible nature consistent with itself.” He believed he could deduce nature’s laws solely from the demand that they be self-consistent. Particles, Chew said, “pull themselves up by their own bootstraps.”
Recently, the bootstrap method has been re-energized. As the new generation of bootstrappers, including Professor Nima Arkani-Hamed and Carl P. Feinberg Professor Juan Maldacena, current Member David Simmons-Duffin, Member (2010–13) Thomas Hartman, and Junior Visiting Professor (2015–16) and Member (2011–12) David Poland in the School of Natural Sciences, explore this abstract theory space, they seem to be verifying the vision that Chew, now 92 and long retired, laid out half a century ago—but they’re doing it in an unexpected way.
As physicists use the bootstrap to explore the geometry of this theory space, they are pinpointing the roots of “universality,” a remarkable phenomenon in which identical behaviors emerge in materials as different as magnets and water. They are also discovering general features of quantum gravity theories, with apparent implications for the quantum origin of gravity in our own universe and the origin of space-time itself.

The bootstrap is technically a method for computing “correlation functions” — formulas that encode the relationships between the particles described by a quantum field theory. Consider a chunk of iron. The correlation functions of this system express the likelihood that iron atoms will be magnetically oriented in the same direction, as a function of the distances between them. The two-point correlation function gives you the likelihood that any two atoms will be aligned, the three-point correlation function encodes correlations between any three atoms, and so on. These functions tell you essentially everything about the iron chunk. But they involve infinitely many terms riddled with unknown exponents and coefficients. They are, in general, onerous to compute.

The bootstrap approach is to try to constrain what the terms of the functions can possibly be in hopes of solving for the unknown variables. Most of the time, this doesn’t get you far. But in special cases, as the theoretical physicist Alexander Polyakov began to figure out in 1970, the bootstrap takes you all the way.
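To make the two-point correlation function concrete, here is a toy numerical sketch (not from the article): a simulated 1-D chain of up/down magnetic moments whose alignment decays with distance. The chain, the smoothing kernel, and the correlation length are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "iron chain": spins +1/-1 whose alignment decays with distance.
# (Hypothetical statistical model, not the actual QFT computation.)
n = 10_000
corr_length = 5.0

# Build correlated spins by smoothing white noise over ~corr_length sites.
noise = rng.normal(size=n)
kernel = np.exp(-np.arange(-25, 26) ** 2 / (2 * corr_length**2))
spins = np.sign(np.convolve(noise, kernel, mode="same"))

def two_point(spins, r):
    """Average alignment <s(x) s(x+r)>: +1 = always aligned, 0 = uncorrelated."""
    return float(np.mean(spins[:-r] * spins[r:]))

for r in (1, 5, 20):
    print(f"G({r}) = {two_point(spins, r):+.3f}")
```

Nearby spins come out strongly correlated while distant ones do not, which is the qualitative shape a two-point function encodes.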
Researchers say 'benevolent bots', otherwise known as software robots, that are designed to improve articles on Wikipedia sometimes have online 'fights' over content that can continue for years. Editing bots on Wikipedia undo vandalism, enforce bans, check spelling, create links and import content automatically, whereas other bots (which are non-editing) can mine data, identify data or identify copyright infringements.
The team analyzed how much the bots disrupted Wikipedia, observing how they interacted across 13 different language editions over ten years (from 2001 to 2010). They found that bots interacted with one another, whether or not this was by design, and that this led to unpredictable consequences. The research paper, published in PLOS ONE, concludes that bots are more like humans than you might expect.
Bots appear to behave differently in culturally distinct online environments. The paper says the findings are a warning to those using artificial intelligence for building autonomous vehicles, cyber security systems or for managing social media. It suggests that scientists may have to devote more attention to bots' diverse social life and their different cultures.
The Washington Post's Heliograf software can autowrite tons of basic stories in no time, which could free up reporters to do more important work — or allow them to just retire.
USA Today has used this AI-driven production software to create short videos. It can condense news articles into a script, string together a selection of images or video footage, and even add narration with a synthesized newscaster voice.
Reuters’ algorithmic prediction tool helps journalists gauge the integrity of a tweet. The tech scores emerging stories on the basis of “credibility” and “newsworthiness” by evaluating who’s tweeting about it, how it’s spreading across the network, and if nearby users have taken to Twitter to confirm or deny breaking developments.
Originally designed to crowdsource reporting from the Republican and Democratic National Conventions, BuzzFeed’s software collects information from on-the-ground sources at news events. BuzzBot has since been open-sourced, portending a wave of bot-aided reporting tools.
Twenty-two years ago, researchers first reported that adolescents with autism spectrum disorder had increased brain volume. During the intervening years, studies of younger and younger children showed that this brain “overgrowth” occurs in childhood.
Now, a team at the University of North Carolina, Chapel Hill, has detected brain growth changes linked to autism in children as young as 6 months old. And it piqued our interest because a deep-learning algorithm was able to use that data to predict whether a child at high-risk of autism would be diagnosed with the disorder at 24 months.
The algorithm correctly predicted the eventual diagnosis in high-risk children with 81 percent accuracy and 88 percent sensitivity. That’s pretty damn good compared with behavioral questionnaires, which yield early autism diagnoses (at around 12 months old) that are just 50 percent accurate.
“This is outperforming those kinds of measures, and doing it at a younger age,” says senior author Heather Hazlett, a psychologist and brain development researcher at UNC.
As part of the Infant Brain Imaging Study, a U.S. National Institutes of Health–funded study of early brain development in autism, the research team enrolled 106 infants with an older sibling who had been given an autism diagnosis, and 42 infants with no family history of autism. They scanned each child’s brain—no easy feat with an infant—at 6, 12, and 24 months.
The researchers saw no change in any of the babies’ overall brain growth between the 6- and 12-month marks. But there was a significant increase in the brain surface area of the high-risk children who were later diagnosed with autism. That increase in surface area was linked to brain volume growth that occurred between ages 12 and 24 months. In other words, in autism, the developing brain first appears to expand in surface area by 12 months, then in overall volume by 24 months.
The team also performed behavioral evaluations on the children at 24 months, when they were old enough to begin to exhibit the hallmark behaviors of autism, such as lack of social interest, delayed language, and repetitive body movements. The researchers note that the greater the brain overgrowth, the more severe a child’s autistic symptoms tended to be.
Though the new findings confirmed that brain changes associated with autism occur very early in life, the researchers did not stop there. In collaboration with computer scientists at UNC and the College of Charleston, the team built an algorithm, trained it with the brain scans, and tested whether it could use these early brain changes to predict which children would later be diagnosed with autism.
It worked well. Using just three variables—brain surface area, brain volume, and gender (boys are more likely to have autism than girls)—the algorithm identified eight out of 10 kids with autism. “That’s pretty good, and a lot better than some behavioral tools,” says Hazlett.
To train the algorithm, the team initially used half the data for training and the other half for testing—“the cleanest possible analysis,” according to team member Martin Styner, co-director of the Neuro Image Analysis and Research Lab at UNC. But at the request of reviewers, they subsequently performed a more standard 10-fold analysis, in which data is subdivided into 10 equal parts. Machine learning is then done 10 times, each time with 9 folds used for training and the 10th saved for testing. In the end, the final program gathers together the “testing only” results from all 10 rounds to use in its predictions.
Happily, the two types of analyses—the initial 50/50 and the final 10-fold—showed virtually the same results, says Styner. And the team was pleased with the prediction accuracy. “We do expect roughly the same prediction accuracy when more subjects are added,” said co-author Brent Munsell, an assistant professor at the College of Charleston, in an email to IEEE. “In general, over the last several years, deep learning approaches that have been applied to image data have proved to be very accurate.”
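The 10-fold procedure Styner describes can be sketched generically. Everything below is a stand-in: synthetic data replaces the brain measurements, and a simple least-squares linear classifier replaces the team's deep-learning model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for the three predictors named in the article
# (surface-area growth, volume growth, sex); all values are invented.
n = 148  # 106 high-risk + 42 low-risk infants, as in the study
X = np.column_stack([
    rng.normal(size=n),                         # surface-area growth
    rng.normal(size=n),                         # volume growth
    rng.integers(0, 2, size=n).astype(float),   # sex
])
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1).astype(float)

# The 10-fold scheme: split the data into 10 parts, train on 9, predict the
# held-out part, then pool the held-out predictions from all 10 rounds.
folds = np.array_split(rng.permutation(n), 10)
pooled_pred = np.empty(n)
for i, test_idx in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    Xtr = np.column_stack([np.ones(len(train_idx)), X[train_idx]])
    w, *_ = np.linalg.lstsq(Xtr, y[train_idx], rcond=None)
    Xte = np.column_stack([np.ones(len(test_idx)), X[test_idx]])
    pooled_pred[test_idx] = (Xte @ w > 0.5).astype(float)

accuracy = (pooled_pred == y).mean()
print(f"pooled 10-fold accuracy: {accuracy:.2f}")
```

The key property of the scheme is that every subject's prediction comes from a model that never saw that subject during training.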
SpaceX chief Elon Musk announced the surprising news: Two people who know one another approached the company about sending them on a weeklong flight just beyond the moon. Musk won't identify the pair or the price tag. He says they've already paid a "significant" deposit.
Musk says SpaceX is on track to launch astronauts to the International Space Station for NASA in mid-2018. This moon mission would follow about six months later, using a Dragon crew capsule and a Falcon Heavy rocket.
Musk says the moon mission is designed to be autonomous—unless something goes wrong. SpaceX says the passengers would fly to the moon, but won't land on it (yet).
When three physicists first discovered through their calculations that a decaying atom moving through the vacuum experiences a friction-like force, they were highly suspicious. The results seemed to go against the laws of physics: The vacuum, by definition, is completely empty space and does not exert friction on objects within it. Further, if true, the results would contradict the principle of relativity, since they would imply that observers in two different reference frames would see the atom moving at different speeds (most observers would see the atom slow down due to friction, but an observer moving with the atom would not).
Writing in Physical Review Letters, physicists Matthias Sonnleitner, Nils Trautmann, and Stephen M. Barnett at the University of Glasgow knew something must be wrong, but at first they weren't sure what. "We spent ages searching for the mistake in the calculation and spent even more time exploring other strange effects until we found this (rather simple) solution," Sonnleitner explained.
The physicists eventually realized that the missing puzzle piece was a tiny bit of extra mass called the "mass defect"—an amount so tiny that it has never been measured in this context. This is the mass in Einstein's famous equation E = mc2, which describes the amount of energy required to break up the nucleus of an atom into its protons and neutrons. This energy, called the "internal binding energy," is regularly accounted for in nuclear physics, which deals with larger binding energies, but is typically considered negligible in the context of atom optics (the field here) because of the much lower energies.
This subtle but important detail allowed the researchers to paint a very different picture of what was going on. As a decaying atom moves through the vacuum, it really does experience some kind of force resembling friction. But a true friction force would cause the atom to slow down, and this is not what's happening.
What's really happening is that, since the moving atom loses a tiny bit of mass as it decays, it loses momentum, not velocity. To explain in more detail: Although the vacuum is empty and does not exert any forces on the atom, it still interacts with the atom, and this interaction causes the excited atom to decay. As the moving atom decays to a lower energy state, it emits photons, causing it to lose a little bit of energy corresponding to a certain amount of mass. Since momentum is the product of mass and velocity, the decrease in mass causes the atom to lose a little bit of momentum, just as expected according to the conservation of energy and momentum in special relativity. So while the atom's mass (energy) and momentum decrease, its velocity remains constant.
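To see just how small the effect is, one can put rough numbers on the mass defect and the resulting momentum change. The transition energy, atomic mass, and velocity below are illustrative choices, not values from the paper.

```python
# Back-of-the-envelope check that the effect is tiny.
c = 2.998e8            # speed of light, m/s
eV = 1.602e-19         # joules per electronvolt

# Illustrative atom: mass roughly that of hydrogen, moving at 100 m/s,
# decaying from a state 10 eV above the ground state.
m = 1.67e-27           # kg
v = 100.0              # m/s
delta_E = 10 * eV      # energy carried away by the emitted photon, J

delta_m = delta_E / c**2        # mass defect, from E = mc^2
delta_p = v * delta_m           # momentum lost at constant velocity, p = mv

print(f"mass defect:     {delta_m:.2e} kg ({delta_m / m:.1e} of the atom's mass)")
print(f"momentum change: {delta_p:.2e} kg*m/s")
```

For these numbers the atom sheds only about one part in a hundred million of its mass, which is why the "friction-like" momentum loss had never been measured in this context.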
Stanford University and Sandia National Laboratories researchers have developed an organic artificial synapse based on a new memristor (resistive memory device) design that mimics the way synapses in the brain learn. The new artificial synapse could lead to computers that better recreate the way the human brain processes information. It could also one day directly interface with the human brain.
The new artificial synapse is an electrochemical neuromorphic organic device (dubbed “ENODe”) — a mixed ionic/electronic design that is fundamentally different from existing and other proposed resistive memory devices, which are limited by noise, high required write voltages, and other factors, the researchers note in a paper published online Feb. 20 in Nature Materials.
Like a neural path in a brain being reinforced through learning, the artificial synapse is programmed by discharging and recharging it repeatedly. Through this training, the researchers have been able to predict, to within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, remain at that state.
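The train-to-a-target-state idea can be sketched as a simple feedback loop. The first-order device response below is an invented model for illustration, not the ENODe's actual electrochemistry.

```python
# Hypothetical pulse-programming loop for a memristive synapse: each voltage
# pulse nudges the conductance toward a target state, stopping once the
# device is within a given tolerance of the target.
def program_synapse(g0, g_target, pulse_gain=0.3, tolerance=0.01, max_pulses=100):
    g = g0
    for pulse in range(1, max_pulses + 1):
        g += pulse_gain * (g_target - g)   # one charge/discharge pulse
        if abs(g - g_target) <= tolerance * abs(g_target):
            return g, pulse
    return g, max_pulses

g, n_pulses = program_synapse(g0=0.2, g_target=1.0)
print(f"reached conductance {g:.3f} after {n_pulses} pulses")
```

The loop converges geometrically: each pulse closes a fixed fraction of the remaining gap, so hitting a 1 percent tolerance takes only a handful of pulses.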
“The working mechanism of ENODEs is reminiscent of that of natural synapses, where neurotransmitters diffuse through the cleft, inducing depolarization due to ion penetration in the postsynaptic neuron,” the researchers explain in the paper. “In contrast, other memristive devices switch by melting materials at relatively high temperatures (PCMs) or by voltage-induced breakdown/filament formation and ion diffusion in dense oxide layers (FFMOs).”
The ENODe achieves significant energy savings in two ways.
“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and co-senior author of the paper. “We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”
NASA's Spitzer Space Telescope has revealed the first known system of seven Earth-size planets around a single star. Three of these planets are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water.
The discovery sets a new record for greatest number of habitable-zone planets found around a single star outside our solar system. All of these seven planets could have liquid water – key to life as we know it – under the right atmospheric conditions, but the chances are highest with the three in the habitable zone.
“This discovery could be a significant piece in the puzzle of finding habitable environments, places that are conducive to life,” said Thomas Zurbuchen, associate administrator of the agency’s Science Mission Directorate in Washington. “Answering the question ‘are we alone’ is a top science priority and finding so many planets like these for the first time in the habitable zone is a remarkable step forward toward that goal.”
At about 40 light-years (235 trillion miles) from Earth, the system of planets is relatively close to us, in the constellation Aquarius. Because they are located outside of our solar system, these planets are scientifically known as exoplanets.
This exoplanet system is called TRAPPIST-1, named for The Transiting Planets and Planetesimals Small Telescope (TRAPPIST) in Chile. In May 2016, researchers using TRAPPIST announced they had discovered three planets in the system. Assisted by several ground-based telescopes, including the European Southern Observatory's Very Large Telescope, Spitzer confirmed the existence of two of these planets and discovered five additional ones, increasing the number of known planets in the system to seven.
The new results were published Wednesday in the journal Nature, and announced at a news briefing at NASA Headquarters in Washington.
Using Spitzer data, the team precisely measured the sizes of the seven planets and developed first estimates of the masses of six of them, allowing their density to be estimated.
Based on their densities, all of the TRAPPIST-1 planets are likely to be rocky. Further observations will not only help determine whether they are rich in water, but also possibly reveal whether any could have liquid water on their surfaces. The mass of the seventh and farthest exoplanet has not yet been estimated – scientists believe it could be an icy, "snowball-like" world, but further observations are needed.
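Density follows directly from a planet's measured mass and radius. A minimal sketch in Earth units (the sample values are illustrative, not measured TRAPPIST-1 parameters):

```python
# Bulk density relative to Earth from a planet's mass and radius:
#   rho / rho_earth = (M / M_earth) / (R / R_earth)^3
RHO_EARTH = 5.51  # g/cm^3, Earth's mean density

def density_rel_earth(mass_earths, radius_earths):
    """Density in Earth units; multiply by RHO_EARTH for g/cm^3."""
    return mass_earths / radius_earths**3

# Illustrative planet: Earth-sized but slightly less massive.
rel = density_rel_earth(0.85, 1.0)
print(f"{rel:.2f} Earth densities = {rel * RHO_EARTH:.2f} g/cm^3")
```

A density near or below Earth's for a rocky-sized world is what motivates the "rich in water" question: ice and water lower the bulk density relative to bare rock and iron.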
"The seven wonders of TRAPPIST-1 are the first Earth-size planets that have been found orbiting this kind of star," said Michael Gillon, lead author of the paper and the principal investigator of the TRAPPIST exoplanet survey at the University of Liege, Belgium. "It is also the best target yet for studying the atmospheres of potentially habitable, Earth-size worlds."
A new interface system allowed three paralyzed individuals to type words up to four times faster than the speed that had been demonstrated in earlier studies.
The promise of brain–computer interfaces (BCIs) for restoring function to people with disabilities has driven researchers for decades, yet few devices are ready for widespread practical use. Several obstacles exist, depending on the application. For typing, however, one important barrier has been reaching speeds sufficient to justify adopting the technology, which usually involves surgery. A study published Tuesday in eLife reports the results of a system that enabled three participants—Degray and two people with amyotrophic lateral sclerosis (ALS, or Lou Gehrig's disease, a neurodegenerative disease that causes progressive paralysis)—to type at the fastest speeds yet achieved using a BCI—speeds that bring the technology within reach of being practically useful. “We're approaching half of what, for example, I could probably type on a cell phone,” says neurosurgeon and co-senior author, Jamie Henderson of Stanford University.
Lonesome George, the last of a now extinct type of giant tortoise from the Galapagos Islands, could be cloned after scientists have preserved some of his cells by cryogenically freezing them.
As the last of his kind, his life was a lonely one and his death brought the extinction of a lineage of animals that stretched back hundreds of thousands of years. Now Lonesome George, the last giant tortoise from Pinta Island in the Galapagos Islands who died just over a month ago, may be able to achieve in death what he could not in life – and produce an heir.
Scientists have cryogenically frozen tissue taken from the five foot long reptile just after his death in the hope that they may be able to resurrect the subspecies of Galapagos giant tortoise.
By using the same cloning techniques that created Dolly the Sheep, they believe it may be possible to one day bring the now extinct Pinta Island tortoises, or Chelonoidis nigra abingdonii as they are known scientifically, back to life.
It is a fitting legacy for an animal that had become a national icon in Ecuador, featuring on the country’s bank notes. His plight became a symbol for the efforts to conserve threatened species around the world, and attempts to find him a mate were followed by a global audience.
But with no female Pinta Island tortoises left alive, it was apparent he was doomed to spend the rest of his long life plodding slowly around his home alone until the subspecies finally went extinct. His discovery had sparked a huge effort to find him a mate so the animals might continue to survive. Because George was descended from giant tortoises that arrived on Pinta Island around 300,000 years ago, it was unclear whether evolution during that time had left him unable to mate with related tortoises on nearby islands. Attempts to get George to mate with tortoises from neighbouring islands failed, but last year hopes were raised when genetic testing revealed that his closest relatives were the giant tortoises of Espanola Island.
Research into the cloning of endangered species is still at an early stage, as most cloning to date has been achieved with animals whose biology is well understood, such as cats, dogs and domestic farm animals like sheep and cows.
Recently, endangered black-footed cats were successfully cloned, while a wild ox called a gaur, wild cattle called banteng and a type of mountain goat called the Pyrenean ibex have also been cloned, raising hopes for other endangered animals.
Back in October of 2015, astronomers shook the world when they reported how the Kepler mission had noticed a strange and sudden drop in brightness coming from KIC 8462852 (aka. Tabby's Star). This was followed by additional studies that showed how the star appeared to be consistently dimming over time. All of this led to a flurry of speculation, with possibilities ranging from large asteroids and a debris disc to an alien megastructure.
But in what may be the most plausible explanation yet, a team of researchers from Columbia University and the University of California, Berkeley, have suggested that the star's strange flickering could be the result of a planet it consumed at some point in the past. This would have resulted in a big outburst of brightness from which the star is now recovering, and the remains of this planet could be transiting in front of the star, thus causing periodic drops.
For the sake of their study – titled "Secular dimming of KIC 8462852 following its consumption of a planet", which is scheduled to appear in the Monthly Notices of the Royal Astronomical Society – the team took the initial Kepler findings, which showed sudden drops of 15% and 22% in brightness. They then considered subsequent studies that took a look at the long-term behavior of Tabby's Star (both of which were published in 2016).
The first study, conducted by Bradley Schaefer of Louisiana State University, showed a decrease of 14% between the years of 1890 and 1989. The second study, conducted by Ben Montet and Joshua Simon (of Caltech and the Carnegie Institution of Washington, respectively), showed how the star faded by 3% over the course of the four years that Kepler continuously viewed it.
They then attempted to explain this behavior using the Kozai mechanism (aka the Kozai effect or Lidov-Kozai mechanism), a long-studied dynamical effect in which a body's orbital eccentricity and inclination oscillate under the gravitational influence of a companion. Applied to KIC 8462852, they determined that the star probably consumed a planet (or planets) in the past, likely around 10,000 years ago. This process would have caused a temporary brightening from which the star is now returning to normal (thus explaining the long-term trend). They further determined that the periodic drops in brightness could be caused by the remnants of this planet passing in front of the star on high-eccentricity orbits, thus accounting for the sudden changes.
Their calculations also put mass constraints on the planet (or planets) consumed. By their estimates, it was either a single Jupiter-sized planet, or a large number of smaller objects – such as moon-mass bodies that were about 1 km in diameter. This latter possibility seems more inviting, since a large number of objects would have produced a field of debris that would be more consistent with the dimming rate observed by previous studies. These results are not only the best explanation of this star's strange behavior, they could have serious implications for the study of stellar evolution – in which stars gobble up some of their planets over time.
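A rough check on why the dips are so hard to explain with an ordinary planet: a transit blocks a fraction of starlight of roughly (R_obj / R_star) squared, so the 15% and 22% dips quoted above imply occulters nearly half the star's radius, far larger than any planet (a transiting Jupiter blocks only about 1% of a Sun-like star's light). A sketch:

```python
import math

def occulter_radius_ratio(depth):
    """Radius of the transiting object in units of the stellar radius,
    from transit depth ~ (R_obj / R_star)^2."""
    return math.sqrt(depth)

# The two deep Kepler dips quoted above:
for depth in (0.15, 0.22):
    print(f"{depth:.0%} dip -> R_obj ~ {occulter_radius_ratio(depth):.2f} R_star")
```

This is the geometric argument behind invoking debris fields or other extended material rather than a single compact body.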
Researchers at EPFL and UNIL have discovered a faster and more efficient gait, never observed in nature, for six-legged robots walking on flat ground. Bio-inspired gaits, though less efficient for robots, are used by real insects because their adhesive pads let them walk in three dimensions. The results provide novel approaches for roboticists and new information to biologists.
When vertebrates run, their legs exhibit minimal contact with the ground. But insects are different. These six-legged creatures run fastest using a three-legged, or “tripod” gait where they have three legs on the ground at all times – two on one side of their body and one on the other. The tripod gait has long inspired engineers who design six-legged robots, but is it necessarily the fastest and most efficient way for bio-inspired robots to move on the ground?
Researchers at EPFL and UNIL revealed that there is in fact a faster way for robots to locomote on flat ground, provided they don’t have the adhesive pads used by insects to climb walls and ceilings. This suggests designers of insect-inspired robots should make a break with the tripod-gait paradigm and instead consider other possibilities including a new locomotor strategy denoted as the “bipod” gait. The researchers’ findings are published in Nature Communications.
The scientists carried out a host of computer simulations, tests on robots and experiments on Drosophila melanogaster – the most commonly studied insect in biology. “We wanted to determine why insects use a tripod gait and identify whether it is, indeed, the fastest way for six-legged animals and robots to walk,” said Pavan Ramdya, co-lead and corresponding author of the study. To test the various combinations, the researchers used an evolutionary-like algorithm to optimize the walking speed of a simulated insect model based on Drosophila. Step-by-step, this algorithm sifted through many different possible gaits, eliminating the slowest and shortlisting the fastest.
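The evolutionary-like optimization can be sketched in miniature: candidate gaits are parameter vectors, the slowest are eliminated each generation, and the fastest are mutated. The fitness function below is an invented stand-in for the paper's physics-based Drosophila simulation.

```python
import random

random.seed(1)

def speed(gait):
    # Hypothetical fitness: fastest near an (arbitrary) optimal setting of 0.6
    # for every gait parameter. Stands in for the walking simulation.
    return -sum((g - 0.6) ** 2 for g in gait)

def evolve(pop_size=40, n_params=6, generations=60, keep=10, sigma=0.05):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=speed, reverse=True)       # shortlist the fastest
        elite = pop[:keep]                       # eliminate the slowest
        # Refill the population with mutated copies of the elite.
        pop = elite + [[g + random.gauss(0, sigma) for g in parent]
                       for parent in elite
                       for _ in range(pop_size // keep - 1)]
    return max(pop, key=speed)

best = evolve()
print("best gait parameters:", [round(g, 2) for g in best])
```

Because the elite survive unchanged each generation, the best fitness never regresses, and the search converges on the optimum without ever enumerating all gaits.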
Pluto could be set to regain its planetary status after 11 years in exile, if NASA scientists have their way. A new definition of planets would add over 100 to our solar system, with even Earth’s moon due a promotion.
The International Astronomical Union (IAU) currently requires an object to be orbiting the Sun to be classified as a planet. But the NASA team wants the IAU to drop that requirement, insisting that a world’s physical properties are more important than its interactions with stars.
“In keeping with both sound scientific classification and peoples’ intuition, we propose a geophysically-based definition of ‘planet’ that importantly emphasizes a body’s intrinsic physical properties over its extrinsic orbital properties,” the researchers explain. The proposal was made by a team of NASA scientists led by Alan Stern, principal investigator of the space agency’s New Horizons mission to Pluto.
An exotic binary star system 380 light-years away has been identified as an elusive white dwarf pulsar -- the first of its kind ever to be discovered in the universe -- thanks to research by the University of Warwick.
Professors Tom Marsh and Boris Gänsicke of the University of Warwick's Astrophysics Group, with Dr David Buckley from the South African Astronomical Observatory, have identified the star AR Scorpii (AR Sco) as the first white dwarf version of a pulsar -- a class of objects first found in the 1960s and until now associated with very different objects called neutron stars.
AR Sco contains a rapidly spinning, burnt-out stellar remnant called a white dwarf, which lashes its neighbor -- a red dwarf -- with powerful beams of electrical particles and radiation, causing the entire system to brighten and fade dramatically twice every two minutes.
The latest research establishes that the lash of energy from AR Sco is a focused 'beam', emitting concentrated radiation in a single direction -- much like a particle accelerator -- something unique in the known universe.
AR Sco lies in the constellation Scorpius, 380 light-years from Earth, a close neighbor in astronomical terms. The white dwarf in AR Sco is the size of Earth but 200,000 times more massive, and is in a 3.6-hour orbit with a cool star one third the mass of the Sun.
With an electromagnetic field 100 million times more powerful than Earth's, and spinning on a period just shy of two minutes, AR Sco produces lighthouse-like beams of radiation and particles, which lash across the face of the cool star, a red dwarf.
As the researchers previously discovered, this powerful lighthouse effect accelerates electrons in the atmosphere of the red dwarf to close to the speed of light, an effect never observed before in similar types of binary stars. The red dwarf is thus powered by the kinetic energy of its spinning neighbor. The distance between the two stars is around 1.4 million kilometers -- roughly three and a half times the distance between the Moon and the Earth.
A Caltech-led study has shown that the electrical wire-like behavior of DNA is involved in the molecule's replication.
In the early 1990s, Jacqueline Barton, the John G. Kirkwood and Arthur A. Noyes Professor of Chemistry at Caltech, discovered an unexpected property of DNA—that it can act like an electrical wire to transfer electrons quickly across long distances. Later, she and her colleagues showed that cells take advantage of this trait to help locate and repair potentially harmful mutations to DNA.
Now, Barton's lab has shown that this wire-like property of DNA is also involved in a different critical cellular function: replicating DNA. When cells divide and replicate themselves in our bodies—for example in the brain, heart, bone marrow, and fingernails—the double-stranded helix of DNA is copied. DNA also copies itself in reproductive cells that are passed on to progeny.
The new Caltech-led study, based on work by graduate student Elizabeth O'Brien in collaboration with Walter Chazin's group at Vanderbilt University, shows that a key protein required for replicating DNA depends on electrons traveling through DNA.
"Nature is the best chemist and knows exactly how to take advantage of DNA electron-transport chemistry," says Barton, who is also the Norman Davidson Leadership Chair of Caltech's Division of Chemistry and Chemical Engineering. "The electron transfer process in DNA occurs very quickly," says O'Brien, lead author of the study, appearing in the February 24 issue of Science. "It makes sense that the cell would utilize this quick-acting pathway to regulate DNA replication, which necessarily is a very rapid process."
The researchers found their first clue that DNA replication might involve the transport of electrons through the double helix by taking a closer look at the proteins involved. Two of the main players in DNA replication, critical at the start of the process, are the proteins DNA primase and DNA polymerase alpha. DNA primase typically binds to single-stranded, uncoiled DNA to begin the replication process. It creates a "primer" made of RNA to help DNA polymerase alpha start its job of copying the single strand of DNA to create a new segment of double-helical DNA.
DNA primase and DNA polymerase alpha molecules both contain iron-sulfur clusters. Barton and her colleagues previously discovered that these metal clusters are crucial for DNA electron transport in DNA repair. In DNA repair, specific proteins send electrons down the double helix to other DNA-bound repair proteins as a way to "test the line," so to speak, and make sure there are no mutations in the DNA. If there are mutations, the line is essentially broken, alerting the cell that mutations are in need of repair. The iron-sulfur clusters in the DNA repair proteins are responsible for donating and accepting traveling electrons.
Via Integrated DNA Technologies
Researchers at MIT and Brigham and Women's Hospital have discovered a combination of drugs that induces supporting cells in the ear to differentiate into hair cells, offering a potential new way to treat hearing loss.
Within the inner ear, thousands of hair cells detect sound waves and translate them into nerve signals that allow us to hear speech, music, and other everyday sounds. Damage to these cells is one of the leading causes of hearing loss, which affects 48 million Americans. Each of us is born with about 15,000 hair cells per ear, and once damaged, these cells cannot regrow. However, researchers at MIT, Brigham and Women's Hospital, and Massachusetts Eye and Ear have now discovered a combination of drugs that expands the population of progenitor cells (also called supporting cells) in the ear and induces them to become hair cells, offering a potential new way to treat hearing loss.
"Hearing loss is a real problem as people get older. It's very much of an unmet need, and this is an entirely new approach," says Robert Langer, the David H. Koch Institute Professor at MIT, a member of the Koch Institute for Integrative Cancer Research, and one of the senior authors of the study.
Jeffrey Karp, an associate professor of medicine at Brigham and Women's Hospital (BWH) and Harvard Medical School in Boston; and Albert Edge, a professor of otolaryngology at Harvard Medical School based at Massachusetts Eye and Ear, are also senior authors of the paper, which appears in the Feb. 21 issue of Cell Reports.
Scientists have found that a superconducting current flows in only one direction through a chiral nanotube, marking the first observation of the effects of chirality on superconductivity. Until now, superconductivity has only been demonstrated in achiral materials, in which the current flows in both directions equally.
The team of researchers, F. Qin et al., from Japan, the US, and Israel, has published a paper on the first observation of chiral superconductivity in a recent issue of Nature Communications.
Chiral superconductivity combines two typically unrelated concepts in a single material: Chiral materials have mirror images that are not identical, similar to how left and right hands are not identical because they cannot be superimposed one on top of the other. And superconducting materials can conduct an electric current with zero resistance at very low temperatures.
Observing chiral superconductivity has been experimentally challenging due to the material requirements. Although carbon nanotubes are superconducting, chiral, and commonly available, so far researchers have only successfully demonstrated superconducting electron transport in nanotube assemblies and not in individual nanotubes, which are required for this purpose.
"The most important significance of our work is that superconductivity is realized in an individual nanotube for the first time," coauthor Toshiya Ideue at The University of Tokyo told Phys.org. "It enables us to search for exotic superconducting properties originating from the characteristic (tubular or chiral) structure."
The achievement was made possible by a new two-dimensional material called tungsten disulfide, a type of transition metal dichalcogenide—a new class of materials with potential applications in electronics, photonics, and other areas. Tungsten disulfide nanotubes have a chiral structure and become superconducting at low temperatures when charge carriers are added by a technique called ionic liquid gating. It is also possible to run a superconducting current through an individual tungsten disulfide nanotube.
When the researchers ran a current through one of these nanotubes and cooled the device down to 5.8 K, the current became superconducting—in this case, meaning its normal resistance dropped by half. When the researchers applied a magnetic field parallel to the nanotube, they observed small antisymmetric signals that travel in one direction only. These signals are negligibly small in nonchiral superconducting materials, and the researchers explain that the chiral structure is responsible for strongly enhancing these signals.
"The asymmetric electric transport is realized only when a magnetic field is applied parallel to the tube axis," Ideue said. "If there is no magnetic field, current should flow symmetrically. We note that electric current should be asymmetric (if the magnetic field is applied parallel to the tube axis) even in the normal state (non-superconducting region), but we could not see any discernible signals in the normal state yet, interestingly, it shows a large enhancement in the superconducting region."
Nobody yet understands how a collection of mushy cells in the brain gives rise to the brilliance of consciousness seen in higher-order animals, including humans. But two discoveries give scientists vital clues to how human consciousness works.
In 2014, a 54-year-old woman went to George Washington University Medical Faculty Associates in Washington, DC, for epilepsy treatment. In extreme cases like hers, one option is to introduce electrodes into the brain regions that may be causing epileptic seizures. During the treatment, however, doctor Mohamad Koubeissi and his team accidentally found what seemed to be a consciousness on-off switch in the brain.
When electrodes near a region called the claustrum were stimulated in the woman’s brain, she stopped reading and blankly stared into space. She didn’t respond to calls or gestures, and her breathing slowed. When the stimulation stopped, she regained consciousness and had no memory of the lost period. This happened only when the claustrum was stimulated; stimulation of other regions had no such effect.
The woman’s case proved to be just the evidence that Christof Koch of the Allen Institute for Brain Science was looking for to advance his understanding of consciousness. Koch believes that the densely connected claustrum is the “seat” of consciousness in the brains of at least humans and mice, which he has studied extensively.
Now in an announcement first reported in Nature, Koch has found additional evidence that supports his hypothesis. While studying imaging techniques on mouse brains, Koch uncovered three giant neurons—brain cells that transmit signals—emanating from the claustrum and connecting to many regions in both hemispheres of the brain. One of those neurons wraps around the entire brain like a “crown of thorns,” Koch told Nature. He believes that the giant neuron may be coordinating signals from different brain regions to create consciousness.
Researchers have developed a non-invasive brain-computer interface (BCI) for completely locked-in patients. This is the first time that these patients, with complete motor paralysis but an intact cognitive state, have been able to reliably communicate. A completely locked-in state involves the loss of all motor control, including that of the eye muscles, and until now some researchers suspected that such patients were unable to communicate.
The study, published in PLoS Biology, detailed the researchers’ efforts in developing a non-invasive method to allow four completely locked-in patients to answer “yes or no” questions. The technique involves patients wearing a cap that uses infrared light to measure blood flow in different areas of the brain when they think about responding “yes” or “no” to a question. The researchers trained the patients by asking them control test questions to make sure the system could accurately record their answers, before asking questions about their current lives.
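The calibration procedure described above—training on control questions with known answers before asking real ones—can be sketched as a toy nearest-mean classifier. This is an illustration only, not the study's actual analysis; the feature values, function names, and decision rule here are all assumptions for the sake of the example.

```python
# Toy sketch of the calibration idea (NOT the study's method): treat each
# trial as a single blood-oxygenation feature value and assign the label
# ("yes"/"no") whose mean from the training phase is closer.
from statistics import mean

def train(known_trials):
    """known_trials: (feature_value, answer) pairs from control
    questions whose correct answer is known in advance."""
    yes_mean = mean(v for v, a in known_trials if a == "yes")
    no_mean = mean(v for v, a in known_trials if a == "no")
    return yes_mean, no_mean

def classify(feature_value, yes_mean, no_mean):
    """Nearest-mean decision rule for a new trial."""
    if abs(feature_value - yes_mean) <= abs(feature_value - no_mean):
        return "yes"
    return "no"

# Calibration on control questions (e.g., "Is Paris the capital of France?")
trials = [(0.82, "yes"), (0.78, "yes"), (0.31, "no"), (0.25, "no")]
ym, nm = train(trials)
print(classify(0.75, ym, nm))  # a reading near the "yes" cluster
```

The point of the control questions is exactly this: without known-answer trials to anchor the two class means, a reading like 0.75 could not be interpreted at all.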
Brain-computer interfaces involving implantable electrodes have previously been used successfully in patients with less severe forms of locked-in paralysis. However, those methods required direct implantation of electrodes in the brain. The current method is non-invasive, and it is the first approach that has worked reliably for patients who are completely locked in.
Nearly one year ago today, the LIGO Collaboration announced the detection of gravitational waves, once again confirming Einstein's theory of General Relativity. This important discovery by the Advanced Laser Interferometer Gravitational-Wave Observatory (aLIGO) has spurred great interest in improving these advanced optical detectors. The mission of gravitational wave scientists worldwide is to make gravitational wave detection a routine occurrence. Scientists from the institute that developed the lasers used in Advanced LIGO have made significant progress to support that goal.
Advanced LIGO is a 2.5-mile-long optical device known as an interferometer that uses laser light to detect gravitational waves coming from distant cosmic events such as colliding black holes or collapsing stars. Improving the stability of the laser source and decreasing the noise that can hide weak signals coming from gravitational waves could improve the sensitivity of gravitational-wave detectors.
"We have made significant progress towards stable laser sources for third-generation gravitational wave detectors and prototypes of those," said Benno Willke of the Max Planck Institute for Gravitational Physics (Albert Einstein Institute) and Leibniz Universität Hannover, leader of the research team. "More stable lasers enable interferometers to sense gravitational waves that are weaker and from sources further away and thus reveal important insights into astrophysical events involving black holes and neutron stars."
Cooperation between individuals is one of the defining features of our species. While other animals, such as chimpanzees, elephants, coral trout and rooks, also exhibit cooperative behaviors, it is not clear whether they think about cooperation in the same way humans do. In this study, scientists presented the kea, a parrot endemic to New Zealand, with a series of tasks designed to assess cooperative cognition. They found that keas were capable of working together, even when one bird had to wait up to 65 seconds for its partner. The keas also waited for a partner only when a partner was actually needed to gain food.
This is the first demonstration that any non-human animal can wait for over a minute for a cooperative partner, and the first conclusive evidence that any bird species can successfully track when a cooperative partner is required and when one is not. The keas did not attend to whether their partner could actually access the apparatus, which may have been due to issues with task demands, but one kea did show a clear preference for working together with other individuals rather than alone. This preference has been shown to be present in humans but absent in chimpanzees.
Taken together, these results provide the first evidence that a bird species can perform at a similar level to chimpanzees and elephants across a range of collaborative tasks. This raises the possibility that aspects of the cooperative cognition seen in the primate lineage have evolved convergently in birds.
Engineers at the University of California San Diego have developed a material that could reduce signal losses in photonic devices. The advance has the potential to boost the efficiency of various light-based technologies including fiber optic communication systems, lasers and photovoltaics.
The discovery addresses one of the biggest challenges in the field of photonics: minimizing loss of optical (light-based) signals in devices known as plasmonic metamaterials. Plasmonic metamaterials are materials engineered at the nanoscale to control light in unusual ways. They can be used to develop exotic devices ranging from invisibility cloaks to quantum computers. But a problem with metamaterials is that they typically contain metals that absorb energy from light and convert it into heat. As a result, part of the optical signal gets wasted, lowering the efficiency.
In a recent study published in Nature Communications, a team of photonics researchers led by electrical engineering professor Shaya Fainman at the UC San Diego Jacobs School of Engineering demonstrated a way to make up for these losses by incorporating into the metamaterial something that emits light—a semiconductor. "We're offsetting the loss introduced by the metal with gain from the semiconductor. This combination theoretically could result in zero net absorption of the signal—a 'lossless' metamaterial," said Joseph Smalley, an electrical engineering postdoctoral scholar in Fainman's group and the first author of the study.
In their experiments, the researchers shined light from an infrared laser onto the metamaterial. They found that depending on how the light is polarized—that is, in which plane (up and down, or side to side) the light waves vibrate—the metamaterial either reflects or emits light.
"This is the first material that behaves simultaneously as a metal and a semiconductor. If light is polarized one way, the metamaterial reflects light like a metal, and when light is polarized the other way, the metamaterial absorbs and emits light of a different 'color' like a semiconductor," Smalley said.metamaterial-boost-efficiency-lasers.html#jCp
The Earth's geomagnetic field increased in intensity around the Levant during the late eighth century B.C. before rapidly weakening.
The Earth is surrounded by a magnetic field that arises from the motion of liquid iron in the outer core. Direct observation of the field has been possible for only about 180 years, Ben-Yosef told Live Science. In that time, the field has weakened by about 10 percent, he said. Some researchers think the field might be in the process of flipping, so that magnetic north becomes magnetic south and vice versa.
The new study reveals much faster changes in intensity. There was a spike in intensity during the late eighth century B.C., followed by a rapid decline after about 732 B.C., Ben-Yosef and his colleagues reported today (Feb. 13) in the journal Proceedings of the National Academy of Sciences. In a mere 31 years beginning in 732 B.C., the strength of the magnetic field decreased by 27 percent, the researchers found. From the sixth century B.C. to the second century B.C., the field was generally stable, with a slight gradual decline.
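A quick back-of-the-envelope calculation, using only the figures quoted in this article, shows how much faster the ancient decline was than the modern one (the rates and the linear-vs-compound framing are illustrative, not taken from the paper):

```python
# Illustrative arithmetic: a 27% drop in field intensity over the
# 31 years after 732 B.C., versus the ~10% drop observed over the
# ~180 years of direct measurement.
total_decline = 0.27
years = 31

# Average decline as a simple linear rate
linear_rate = total_decline / years                      # ~0.87% per year

# Average decline as a compound (geometric) rate
compound_rate = 1 - (1 - total_decline) ** (1 / years)   # ~1.0% per year

# Modern decline for comparison: ~10% over ~180 years
modern_rate = 0.10 / 180                                 # ~0.06% per year

print(f"ancient linear:   {linear_rate:.4%} per year")
print(f"ancient compound: {compound_rate:.4%} per year")
print(f"modern linear:    {modern_rate:.4%} per year")
```

Either way it is counted, the ancient decline ran more than ten times faster per year than the roughly 10-percent-per-180-years decline seen since direct measurements began.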
"Our research shows that the field is very fluctuating," Ben-Yosef said. "It fluctuates quite rapidly, so there is nothing to worry about," as far as the current decline, he said. This doesn't mean that the magnetic field isn't going to flip in the near future; the new study looked at only strength of the field, not directionality. The findings do suggest that there's no reason to worry that a 10 percent decline in the field strength over more than a century is abnormal, Ben-Yosef said.
At least in the Levant, that is. All of the pottery in the study came from this region, which encompasses what is now Syria, Jordan, Israel, Palestine, Lebanon and nearby areas. That means researchers can't be sure whether the same fluctuations were happening elsewhere. Because the scientists also don't know for sure the precise locations within the Levant where the pottery was fired, they can't say anything about the direction of the geomagnetic field at the time, only its strength.
Our close cousins definitely ate each other, but no one knows why.
Neandertals ate each other—at least once in a while—according to a new analysis of bones unearthed in a Belgian cave. The remains were excavated near Goyet beginning in the 19th century and now sit in museums in Brussels. The outdated excavation techniques make it impossible to reconstruct how these Neandertals lived, but when researchers examined the bones, it was unmistakably clear what happened to them after they died. Many of the bones were covered in cut marks and dents caused by pounding, indicating that the meat and marrow had been removed. The researchers also spotted what appear to be bite marks running up and down finger bones. The marks were identical to those found on reindeer and horse bones also uncovered at the site, suggesting all three species were prepared and eaten, the researchers report this week in Scientific Reports.
A few of the Neandertal bones also showed additional wear and tear, suggesting they were later used to shape stone tools. The bones are between 40,500 and 45,500 years old, which is before Homo sapiens arrived in the region, so the only possible culprits are the Neandertals themselves. Although scientists knew that Neandertals had practiced cannibalism in Croatia, this is the first evidence of it in northern Europe. No one yet knows if Neandertal cannibalism was a ritual practice, reserved for special occasions and imbued with special meaning, or if they were just really, really hungry.