The asteroid that killed the dinosaurs must have ejected billions of tons of life-bearing rock into space. Now physicists have calculated what happened to it.
NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1450 news sources.
NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL on the top right of the screen) to display all the relevant postings SORTED by TOPICS.
You can also type your own query, e.g. if you are looking for articles involving "dna" as a keyword.
MOST_READ • 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
It’s the most basic of ways to find out what something does, whether it’s an unmarked circuit breaker or an unidentified gene — flip its switch and see what happens. New remote-control technology may offer biologists a powerful way to do this with cells and genes. A team at Rensselaer Polytechnic Institute and Rockefeller University is developing a system that would make it possible to remotely control biological targets in living animals — rapidly, without wires, implants, or drugs.
In a technical report published today in the journal Nature Medicine, the team describes successfully using electromagnetic waves to turn on insulin production to lower blood sugar in diabetic mice. Their system couples a natural iron-storage particle, ferritin, to an ion channel called TRPV1, such that when the metal particle is exposed to a radio wave or magnetic field it opens the channel, leading to the activation of an insulin-producing gene. Together, the two proteins act as a nano-machine that can be used to trigger gene expression in cells.
“The use of a radiofrequency-driven magnetic field is a big advance in remote gene expression because it is non-invasive and easily adaptable,” said Jonathan S. Dordick, the Howard P. Isermann Professor of Chemical and Biological Engineering and vice president for research at Rensselaer Polytechnic Institute. “You don’t have to insert anything — no wires, no light systems — the genes are introduced through gene therapy. You could have a wearable device that provides a magnetic field to certain parts of the body and it might be used therapeutically for many diseases, including neurodegenerative diseases. It's limitless at this point.”
Other techniques exist for remotely controlling the activity of cells or the expression of genes in living animals. But these have limitations. Systems that use light as an on/off signal require permanent implants or are only effective close to the skin, and those that rely on drugs can be slow to switch on and off.
The new system, dubbed radiogenetics, uses a signal, in this case low-frequency radio waves or a magnetic field, to activate ferritin particles. They, in turn, prompt the opening of TRPV1, which is situated in the membrane surrounding the cell. Calcium ions then travel through the channel, switching on a synthetic piece of DNA the scientists developed to turn on the production of a downstream gene, which in this study was the insulin gene. In an earlier study, the researchers used only radio waves as the “on” signal, but in the current study, they also tested out a related signal – a magnetic field – that could also activate insulin production. They found it had a similar effect as the radio waves.
A Glasgow-based startup is reducing the cost of access to space by offering "satellite kits" that make it easier for space enthusiasts, high schools and universities alike to build a small but functional satellite for as little as US$6,000 and then, thanks to its very small size, to launch for significantly less than the popular CubeSats.
Building a cheap, working satellite is far from easy. The tiny Kickstarter-funded KickSats, released as a secondary payload during SpaceX’s third ISS resupply mission, ran into a technical problem and failed to deploy in time, while the cheap TubeSats, though an interesting concept, have not seen a single launch to date. And although the more proven CubeSats have had more success, they still aren’t exactly affordable (launching a small 3U CubeSat into low Earth orbit will set you back almost $300,000).
As the name suggests, PocketQubes are "pocket-sized" cube-shaped satellites that measure just 5 cm (1.97 in) per side versus CubeSat's 10 cm (3.94 in). At one eighth the volume and weight of the typical CubeSat, they are much cheaper to send into orbit (launch is approximately $20,000) but still capable of doing interesting things while in low Earth orbit. PocketQubes were first proposed by CubeSat co-inventor Prof. Bob Twiggs of Morehead State University as a way to further reduce launch costs for universities, and, like CubeSats, they are modular and can be stacked together to create larger craft.
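The one-eighth figure follows directly from the edge lengths quoted above; a quick back-of-the-envelope check:

```python
# Volume comparison between a PocketQube and a standard 1U CubeSat,
# using the edge lengths quoted above (5 cm vs. 10 cm per side).
pocketqube_edge_cm = 5.0
cubesat_edge_cm = 10.0

pocketqube_vol = pocketqube_edge_cm ** 3  # 125 cm^3
cubesat_vol = cubesat_edge_cm ** 3        # 1000 cm^3

ratio = pocketqube_vol / cubesat_vol
print(f"A PocketQube has {ratio:.3f}x a CubeSat's volume")  # 0.125, i.e. one eighth
```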
Thanks to a successful launch on a Russian Dnepr-1 rocket in November last year, there are already four PocketQubes currently in orbit, including the still-operational $50SAT, which cost less than $250 in parts and was developed with the help of Prof. Twiggs himself.
But for those who lack the know-how, building your first satellite – even a tiny one – is bound to be an exceedingly complicated and expensive affair. Glasgow-based startup PocketQube Shop, which sells components for the picosatellites, is trying to fill the gap by announcing the introduction of a "PocketQube Kit" that contains the main building blocks for any small budget satellite project.
The kit includes a spacecraft frame, a radio board, an on-board computer and a programmable Labsat development board that can be used to test different electronic boards. Moreover, the kit can interface with third party payloads. With a low (in spacecraft terms) starting price of $5,999 (around £4,000), the kit is targeted at high schools, university students and hobbyists alike.
It will take about 11 trillion gallons of water (42 cubic kilometers) -- around 1.5 times the maximum volume of the largest U.S. reservoir -- to recover from California's continuing drought, according to a new analysis of NASA satellite data.
The finding was part of a sobering update on the state's drought made possible by space and airborne measurements and presented by NASA scientists Dec. 16 at the American Geophysical Union meeting in San Francisco. Such data are giving scientists an unprecedented ability to identify key features of droughts, and can be used to inform water management decisions.
A team of scientists led by Jay Famiglietti of NASA's Jet Propulsion Laboratory in Pasadena, California, used data from NASA's Gravity Recovery and Climate Experiment (GRACE) satellites to develop the first-ever calculation of this kind -- the volume of water required to end an episode of drought.
Earlier this year, at the peak of California's current three-year drought, the team found that water storage in the state's Sacramento and San Joaquin river basins was 11 trillion gallons below normal seasonal levels. Data collected since the launch of GRACE in 2002 show this deficit has increased steadily.
"Spaceborne and airborne measurements of Earth's changing shape, surface height and gravity field now allow us to measure and analyze key features of droughts better than ever before, including determining precisely when they begin and end and what their magnitude is at any moment in time," Famiglietti said. "That's an incredible advance and something that would be impossible using only ground-based observations."
GRACE data reveal that, since 2011, the Sacramento and San Joaquin river basins decreased in volume by four trillion gallons of water each year (15 cubic kilometers). That's more water than California's 38 million residents use each year for domestic and municipal purposes. About two-thirds of the loss is due to depletion of groundwater beneath California's Central Valley.
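The gallon-to-cubic-kilometer conversions quoted in this story are easy to sanity-check; a minimal sketch, using the standard factor of roughly 3.785 liters per US gallon:

```python
US_GALLON_M3 = 3.78541e-3  # one US gallon in cubic meters
M3_PER_KM3 = 1e9           # cubic meters per cubic kilometer

def gallons_to_km3(gallons):
    """Convert a volume in US gallons to cubic kilometers."""
    return gallons * US_GALLON_M3 / M3_PER_KM3

print(gallons_to_km3(11e12))  # ~41.6 km^3, the "42 cubic kilometers" deficit
print(gallons_to_km3(4e12))   # ~15.1 km^3, the annual loss since 2011
```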
In related results, early 2014 data from NASA's Airborne Snow Observatory indicate that snowpack in California's Sierra Nevada range was only half of previous estimates. The observatory is providing the first-ever high-resolution observations of the water volume of snow in the Tuolumne, Merced, Kings and Lakes basins of the Sierra Nevada and the Uncompahgre watershed in the Upper Colorado River Basin.
Researchers from North Carolina State University have developed a new lithography technique that uses nanoscale spheres to create three-dimensional (3-D) structures with biomedical, electronic and photonic applications. The new technique is significantly less expensive than conventional methods and does not rely on stacking two-dimensional (2-D) patterns to create 3-D structures.
“Our approach reduces the cost of nanolithography to the point where it could be done in your garage,” says Dr. Chih-Hao Chang, an assistant professor of mechanical and aerospace engineering at NC State and senior author of a paper on the work.
Most conventional lithography uses a variety of techniques to focus light on a photosensitive film to create 2-D patterns. These techniques rely on specialized lenses, electron beams or lasers – all of which are extremely expensive. Other conventional techniques use mechanical probes, which are also costly. To create 3-D structures, the 2-D patterns are essentially printed on top of each other. The NC State researchers took a different approach, placing nanoscale polystyrene spheres on the surface of the photosensitive film.
The nanospheres are transparent, but bend and scatter the light that passes through them in predictable ways according to the angle that the light takes when it hits the nanosphere. The researchers control the nanolithography by altering the size of the nanosphere, the duration of light exposures, and the angle, wavelength and polarization of light. The researchers can also use one beam of light, or multiple beams of light, allowing them to create a wide variety of nanostructure designs.
“We are using the nanosphere to shape the pattern of light, which gives us the ability to shape the resulting nanostructure in three dimensions without using the expensive equipment required by conventional techniques,” Chang says. “And it allows us to create 3-D structures all at once, without having to make layer after layer of 2-D patterns.”
The researchers have also shown that they can get the nanospheres to self-assemble in a regularly-spaced array, which in turn can be used to create a uniform pattern of 3-D nanostructures.
“This could be used to create an array of nanoneedles for use in drug delivery or other applications,” says Xu Zhang, a Ph.D. student in Chang’s lab and lead author of the paper.
For decades, the mantra of electronics has been smaller, faster, cheaper. Today, Stanford engineers add a fourth word - taller. At a conference in San Francisco, a Stanford team will reveal how to build high-rise chips that could leapfrog the performance of the single-story logic and memory chips on today's circuit cards.
Those circuit cards are like busy cities in which logic chips compute and memory chips store data. But when the computer gets busy, the wires connecting logic and memory can get jammed. The Stanford approach would end these jams by building layers of logic atop layers of memory to create a tightly interconnected high-rise chip. Many thousands of nanoscale electronic "elevators" would move data between the layers much faster, using less electricity, than the bottleneck-prone wires connecting single-story logic and memory chips today.
The work is led by Subhasish Mitra, a Stanford professor of electrical engineering and computer science, and H.-S. Philip Wong, the Williard R. and Inez Kerr Bell Professor in Stanford's School of Engineering. They describe their new high-rise chip architecture in a paper being presented at the IEEE International Electron Devices Meeting on Dec. 15-17. The researchers' innovation leverages three breakthroughs.
The first is a new technology for creating transistors, those tiny gates that switch electricity on and off to create digital zeroes and ones. The second is a new type of computer memory that lends itself to multi-story fabrication. The third is a technique to build these new logic and memory technologies into high-rise structures in a radically different way than previous efforts to stack chips.
"This research is at an early stage, but our design and fabrication techniques are scalable," Mitra said. "With further development this architecture could lead to computing performance that is much, much greater than anything available today."
Unlike in mathematics, it is rare to have exact solutions to physics problems.
"When they do present themselves, they are an opportunity to test the approximation schemes (algorithms) that are used to make progress in modern physics," said Michael Strickland, Ph.D., associate professor of physics at Kent State University. Strickland and four of his collaborators recently published an exact solution in the journal Physical Review Letters that applies to a wide array of physics contexts and will help researchers to better model galactic structure, supernova explosions and high-energy particle collisions, such as those studied at the Large Hadron Collider at CERN in Switzerland. In these collisions, experimentalists create a short-lived high-temperature plasma of quarks and gluons called quark gluon plasma (QGP), much like what is believed to be the state of the universe milliseconds after the Big Bang 13.8 billion years ago.
In their article, Strickland and co-authors Gabriel S. Denicol of McGill University, Ulrich Heinz and Mauricio Martinez of the Ohio State University, and Jorge Noronha of the University of São Paulo presented the first exact solution that describes a system that is expanding at relativistic velocities radially and longitudinally.
The equation that was solved was invented by Austrian physicist Ludwig Boltzmann in 1872 to model the dynamics of fluids and gases. This equation was ahead of its time since Boltzmann imagined that matter was atomic in nature and that the dynamics of the system could be understood solely by analyzing collisional processes between sets of particles.
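For reference, the equation Boltzmann wrote down balances the free streaming of a particle distribution function f(x, v, t) against a collision term; in standard modern (non-relativistic) notation:

```latex
\frac{\partial f}{\partial t}
  + \vec{v}\cdot\nabla_{\vec{x}} f
  + \frac{\vec{F}}{m}\cdot\nabla_{\vec{v}} f
  = \left(\frac{\partial f}{\partial t}\right)_{\mathrm{coll}}
```

The quark gluon plasma studies described here use its relativistic generalization, in which the left-hand side becomes a derivative along particle worldlines, schematically p^mu d_mu f = C[f].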
"In the last decade, there has been a lot of work modeling the evolution of the quark gluon plasma using hydrodynamics in which the QGP is imagined to be fluidlike," Strickland said. "As it turns out, the equations of hydrodynamics can be obtained from the Boltzmann equation and, unlike the hydrodynamical equations, the Boltzmann equation is not limited to the case of a system that is in (or close to) thermal equilibrium.
"Both types of expansion occur in relativistic heavy ion collisions, and one must include both if one hopes to make a realistic description of the dynamics," Strickland continued. "The new exact solution has both types of expansion and can be used to tell us which hydrodynamical framework is the best."
The abstract for this article can be found at journals.aps.org/prl/abstract/… ysRevLett.113.202301.
The International Ocean Discovery Program (IODP) found microbes living 2,400m beneath the seabed off Japan. The tiny, single-celled organisms survive in this harsh environment on a low-calorie diet of hydrocarbon compounds and have a very slow metabolism. The findings are being presented at the American Geophysical Union Fall Meeting.
Elizabeth Trembath-Reichert, from the California Institute of Technology, who is part of the team that carried out the research, said: "We keep looking for life, and we keep finding it, and it keeps surprising us as to what it appears to be capable of." The IODP Expedition 337 took place in 2012 off the coast of Japan’s Shimokita Peninsula in the northwestern Pacific. From the Chikyu ship, a monster drill was set down more than 1,000m (3,000ft) beneath the waves, where it penetrated a record-breaking 2,446m (8,024ft) of rock under the seafloor. Samples were taken from the ancient coal bed system that lies at this depth, and were returned to the ship for analysis.
The team found that microbes, despite having no light, no oxygen, barely any water and very limited nutrients, thrived in the cores. To find out more about how this life from the "deep biosphere" survives, the researchers set up a series of experiments in which they fed the little, spherical organisms different compounds. Dr Trembath-Reichert said: "We chose these coal beds because we knew there was carbon, and we knew that this carbon was about as tasty to eat, when it comes to coal, as you could get for microbes. "The thought was that while there are some microbes that can eat compounds in coal directly, there may be smaller organic compounds – methane and other types of hydrocarbons - sourced from the coal that the microbes could eat as well."
The experiments revealed that the microbes were indeed dining on these methyl compounds. The tests also showed that the organisms lived life in the slow lane, with an extremely sluggish metabolism.
MIT researchers have discovered a new mathematical relationship — between material thickness, temperature, and electrical resistance — that appears to hold in all superconductors. They describe their findings in the latest issue of Physical Review B.
“We were able to use this knowledge to make larger-area devices, which were not really possible to do previously, and the yield of the devices increased significantly,” says Yachin Ivry, a postdoc in MIT’s Research Laboratory of Electronics, and the first author on the paper. Ivry works in the Quantum Nanostructures and Nanofabrication Group, which is led by Karl Berggren, a professor of electrical engineering and one of Ivry’s co-authors on the paper. Among other things, the group studies thin films of superconductors.
Superconductors are materials that, at temperatures near absolute zero, exhibit no electrical resistance; this means that it takes very little energy to induce an electrical current in them. A single photon will do the trick, which is why they’re useful as quantum photodetectors. And a computer chip built from superconducting circuits would, in principle, consume about one-hundredth as much energy as a conventional chip.
“Thin films are interesting scientifically because they allow you to get closer to what we call the superconducting-to-insulating transition,” Ivry says. “Superconductivity is a phenomenon that relies on the collective behavior of the electrons. So if you go to smaller and smaller dimensions, you get to the onset of the collective behavior.”
By 2040, the world's energy supply mix will be divided into nearly four equal parts: oil, gas, coal and low-carbon sources (nuclear and renewables), according to the International Energy Agency's (IEA) 2014 World Energy Outlook. The assessment by the IEA finds that under current and planned policies, the global average temperature will increase by 3.6 degrees Celsius by 2100. Renewable energy takes a far greater role in new electricity supply in the near future, expanding from about 1,700 gigawatts today to 4,550 gigawatts in 2040, but that is not enough to offset the global dominance of fossil fuels.
“As our global energy system grows and transforms, signs of stress continue to emerge,” IEA Executive Director Maria van der Hoeven, said in a statement. “But renewables are expected to go from strength to strength, and it is incredible that we can now see a point where they become the world’s number one source of electricity generation.”
Renewable energy production will double as a share of world electricity demand by 2040, according to the report. But that still does not dethrone coal in electricity generation. Coal use will simply shift regionally from the United States and China to Southeast Asia and India, according to the IEA.
Energy efficiency, the least glamorous piece of the puzzle, is poised to be a winner in coming decades and could have an even greater impact if some of the world's largest energy users carry through with proposed efficiency plans. Efficiency measures are set to halve the global growth in energy demand from 2 percent annually to about 1 percent beginning in 2025, according to the IEA.
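Halving the growth rate matters because it compounds over the outlook's horizon. A toy illustration (the baseline index of 100 and the 15-year window are hypothetical; only the 2 percent and 1 percent rates come from the report):

```python
def demand_index(start, annual_rate, years):
    """Compound annual growth from a common baseline."""
    return start * (1 + annual_rate) ** years

baseline = 100.0  # hypothetical demand index at a 2025 starting point
years = 15        # 2025 through 2040

print(demand_index(baseline, 0.02, years))  # ~134.6 at 2% annual growth
print(demand_index(baseline, 0.01, years))  # ~116.1 at 1% annual growth
```

Over just 15 years, the slower path leaves demand roughly 14 percent below where unchecked 2 percent growth would put it.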
Efficiency standards for cars and more stringent energy efficiency targets for industry and everyday devices are key to slowing the demand for energy, but they do not necessarily help diminish the world's reliance on fossil fuels, because the true price of fossil fuels is not accurately reflected in the price people pay in some regions.
Fossil fuels received about $550 billion in subsidies in 2013, compared to $120 billion for all renewable energies. Although the fossil fuel subsidies were $25 billion lower than in 2012, there is still vast room for improvement in ending price breaks for these mature industries, especially in oil- and gas-rich nations, which offer the bulk of the subsidies.
When early humans discovered how to purposefully create fire and make the most of it for their survival, it was a feat comparable to such modern-day milestones as sending men to the moon. But while the mastery of fire is hailed as among the most crucial developments in human history and evolution, researchers are not certain when it happened.
Some anthropologists believe that early humans started to exploit fire as early as 1.5 million years ago, but much of the evidence supporting this claim, such as heated clays and charcoal fragments, is disputed because it can be attributed to natural bush fires. Many experts also think that the early uses of fire were opportunistic, meaning early humans used natural bush fires instead of starting fires themselves.
A group of archeologists studying artifacts from an ancient cave, however, claims to have figured out when humans learned to master fire. For their study published in the journal Science on Oct. 19, Ron Shimelmitz, from the Zinman Institute of Archaeology of the University of Haifa in Israel, and colleagues examined artifacts, most of which were flint tools and debris excavated from Israel's Tabun Cave. The archeological site, which was declared as having universal value by UNESCO two years ago, documents half a million years of human history and provided the researchers with the opportunity to study how the use of fire evolved in the cave.
By examining the cave's sediment layers, the researchers found that most of the flints were not burned in layers that were older than 350,000 years old. Burned-up flints, however, started to show up more regularly after this with most of the flints characterized by cracking, red or black coloration, and small round depressions where fragments called pot lids flaked off the stone, indicating exposure to fire.
Shimelmitz and colleagues said that while fire had been in use for a long time, it took a while before humans learned how to control and start it, with the study indicating that habitual use of fire in Israel's Tabun Cave began between 350,000 and 320,000 years ago. "While hominins may have used fire occasionally, perhaps opportunistically, for some million years, we argue here that it only became a consistent element in behavioral adaptations during the second part of the Middle Pleistocene," the researchers wrote.
A new technique to magnetically deliver drug-carrying nanorods to deep targets in the body using fast-pulsed magnetic fields could transform the way deep-tissue tumors and other diseases are treated, say researchers at the University of Maryland (UMD) and Bethesda-based Weinberg Medical Physics LLC (WMP).
Instead of surgery or systemically administered treatments (such as chemotherapy), the use of magnetic nanoparticles as drug carriers could potentially allow clinicians to use external magnets to focus therapy to the precise locations of a disease within a patient, such as inoperable deep tumors or sections of the brain that have been damaged by trauma, vascular, or degenerative diseases.
For years, researchers have worked with magnetic nanoparticles loaded with drugs or genes to develop noninvasive techniques to direct therapies and diagnostics to targets in the body. However, due to the physics of magnetic forces, unaided particles could only be attracted toward a magnet, not concentrated at points distant from the magnet face. So in clinical trials, magnets held outside the body have only been able to concentrate treatment on targets at or just below the skin surface, the researchers say.
“What we have shown experimentally is that by exploiting the physics of nanorods we can use fast-pulsed magnetic fields to focus the particles to a deep target between the magnets,” said UMD Institute for Systems Research Professor Benjamin Shapiro.
These pulsed magnetic fields allowed the team to reverse the usual behavior of magnetic nanoparticles. Instead of a magnet attracting the particles, they showed that an initial magnetic pulse can orient the rod-shaped particles without pulling them, and then a subsequent pulse can push the particles before the particles can reorient. By repeating the pulses in sequence, the particles were focused to locations between the electromagnets. The study, published last week in Nano Letters, shows that using this method, ferromagnetic nanorods carrying drugs or molecules could be concentrated to arbitrary deep locations between magnets.
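The orient-then-push cycle can be caricatured in a one-dimensional toy model. Everything here, including the geometry, the per-cycle step, and the number of cycles, is invented for illustration and is not the authors' model; the only point it makes is that a pulse sequence whose net per-cycle displacement is directed toward the midpoint will focus an initially scattered population:

```python
import random

def focus(positions, cycles=200, step=0.05):
    """Toy 1-D picture of pulsed magnetic focusing.

    Magnets sit at x = -1 and x = +1; the target is the midpoint x = 0.
    Each cycle stands in for one orient-then-push pulse pair, whose net
    effect (in this caricature) is a small displacement toward the focus.
    """
    for _ in range(cycles):
        positions = [x - step * x for x in positions]  # net drift toward x = 0
    return positions

random.seed(0)
rods = [random.uniform(-1, 1) for _ in range(10)]  # initially scattered nanorods
focused = focus(rods)
print(max(abs(x) for x in focused))  # the spread collapses toward the midpoint
```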
The researchers are now working to demonstrate this method in vivo to prove its therapeutic potential and have launched IronFocus Medical, Inc., a startup company established to commercialize their invention.
“There are various pulse oximeters already on the market that measure pulse rate and blood-oxygen saturation levels, but those devices use rigid conventional electronics, and they are usually fixed to the fingers or earlobe,” said Ana Arias, an associate professor of electrical engineering and computer sciences and head of the UC Berkeley team that is developing a new organic optoelectronic sensor.
By switching from silicon to an organic, or carbon-based, design, the researchers were able to create a device that could ultimately be thin, cheap and flexible enough to be slapped on like a Band-Aid during that jog around the track or hike up the hill. The engineers put the new prototype up against a conventional pulse oximeter and found that the pulse and oxygen readings were just as accurate.
A conventional pulse oximeter typically uses light-emitting diodes (LEDs) to send red and infrared light through a fingertip or earlobe. Sensors detect how much light makes it through to the other side. Bright, oxygen-rich blood absorbs more infrared light, while the darker hues of oxygen-poor blood absorb more red light. The ratio of the two wavelengths reveals how much oxygen is in the blood. For the organic sensors, Arias and her team of graduate students – Claire Lochner, Yasser Khan and Adrien Pierre – used red and green light, which yield comparable differences to red and infrared when it comes to distinguishing high and low levels of oxygen in the blood.
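The "ratio of the two wavelengths" step is, in conventional oximeters, the classic ratio-of-ratios calculation. A minimal sketch, assuming the widely quoted linear approximation SpO2 = 110 - 25*R (real devices use empirically fitted, device-specific calibration curves, the signal amplitudes below are hypothetical, and the organic sensor described here substitutes green light for infrared):

```python
def ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """Normalize the pulsatile (AC) signal by the steady (DC) baseline at
    each wavelength, then take the ratio between the two wavelengths."""
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def spo2_estimate(r):
    """Widely quoted linear approximation; illustrative only."""
    return 110.0 - 25.0 * r

# Hypothetical signal amplitudes for a well-oxygenated subject:
r = ratio_of_ratios(ac_red=0.02, dc_red=1.0, ac_ir=0.04, dc_ir=1.0)
print(spo2_estimate(r))  # R = 0.5 -> about 97.5% saturation
```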
Using a solution-based processing system, the researchers deposited the green and red organic LEDs and the translucent light detectors onto a flexible piece of plastic. By detecting the pattern of fresh arterial blood flow, the device can calculate a pulse.
“We showed that if you take measurements with different wavelengths, it works, and if you use unconventional semiconductors, it works,” said Arias. “Because organic electronics are flexible, they can easily conform to the body.” Arias added that because the components of conventional oximeters are relatively expensive, healthcare providers will choose to disinfect them if they become contaminated. In contrast, “organic electronics are cheap enough that they are disposable like a Band-Aid after use,” she said.
Phen-Gen is the first computer analysis software that cross-references a patient's symptoms with his or her genome sequence to better aid doctors in diagnosing diseases. The software was created by a team of scientists at A*STAR's Genome Institute of Singapore (GIS), led by Dr. Pauline Ng. Results from the research were published in the prestigious journal Nature Methods on 4th August 2014.
Though printing items like chocolate and pizza might be satisfying enough for some, 3D printing still holds a lot of unfulfilled potential. Talk abounds of disrupting manufacturing, changing the face of construction and even building metal components in space. While it is hard not to get a little bit excited by these potentially world-changing advances, there is one domain where 3D printing is already having a real-life impact. Its capacity to produce customized implants and medical devices tailored specifically to a patient's anatomy has seen it open up all kinds of possibilities in the field of medicine, with the year 2014 having turned up one world-first surgery after another. Let's cast our eye over some of the significant, life-changing procedures to emerge in the past year made possible by 3D printing technology.
NASA's Mars Curiosity rover has measured a tenfold spike in methane, an organic chemical, in the atmosphere around it and detected other organic molecules in a rock-powder sample collected by the robotic laboratory's drill.
"This temporary increase in methane—sharply up and then back down—tells us there must be some relatively localized source," said Sushil Atreya of the University of Michigan, Ann Arbor, and Curiosity rover science team. "There are many possible sources, biological or non-biological, such as interaction of water and rock."
Researchers used Curiosity's onboard Sample Analysis at Mars (SAM) laboratory a dozen times in a 20-month period to sniff methane in the atmosphere. During two of those months, in late 2013 and early 2014, four measurements averaged seven parts per billion. Before and after that, readings averaged only one-tenth that level.
Curiosity also detected different Martian organic chemicals in powder drilled from a rock dubbed Cumberland, the first definitive detection of organics in surface materials of Mars. These Martian organics could either have formed on Mars or been delivered to Mars by meteorites.
Organic molecules, which contain carbon and usually hydrogen, are chemical building blocks of life, although they can exist without the presence of life. Curiosity's findings from analyzing samples of atmosphere and rock powder do not reveal whether Mars has ever harbored living microbes, but the findings do shed light on a chemically active modern Mars and on favorable conditions for life on ancient Mars.
"We will keep working on the puzzles these findings present," said John Grotzinger, Curiosity project scientist of the California Institute of Technology in Pasadena (Caltech). "Can we learn more about the active chemistry causing such fluctuations in the amount of methane in the atmosphere? Can we choose rock targets where identifiable organics have been preserved?"
In a study in the journal Neuron, scientists describe a new high data-rate, low-power wireless brain sensor. The technology is designed to enable neuroscience research that cannot be accomplished with current sensors that tether subjects with cabled connections. Experiments in the paper confirm that new capability. The results show that the technology transmitted rich, neuroscientifically meaningful signals from animal models as they slept and woke or exercised.
“We view this as a platform device for tapping into the richness of electrical signals from the brain among animal models where their neural circuit activity reflects entirely volitional and naturalistic behavior, not constrained to particular space,” said Arto Nurmikko, professor of engineering and physics affiliated with the Brown Institute for Brain Science and the paper’s senior and corresponding author. “This enables new types of neuroscience experiments with vast amounts of brain data wirelessly and continuously streamed from brain microcircuits.”
“The brain sensor is opening unprecedented opportunities for the development of neuroprosthetic treatments in natural and unconstrained environments,” said study co-author Grégoire Courtine, a professor at EPFL (École polytechnique fédérale de Lausanne), who collaborated with Nurmikko’s group on the research. To confirm the system performance, the researchers did a series of experiments with rhesus macaques, which walked on a treadmill while the researchers used the wireless system to measure neural signals associated with the brain’s motion commands. They also did another experiment in which animal subjects went through sleep/wake cycles, unencumbered by cables or wires; the data showed distinct patterns related to the different stages of consciousness and the transitions between them.
“We hope that the wireless neurosensor will change the canonical paradigm of neuroscience research, enabling scientists to explore the nervous system within its natural context and without the use of tethering cables,” said co-lead author David Borton. “Subjects are free to roam, forage, sleep, etc., all while the researchers are observing the brain activity. We are very excited to see how the neuroscience community leverages this platform.”
Every so often our Earth encounters a large chunk of space rock, a reminder that our solar system still contains plenty of debris that could potentially have an impact on life on Earth.
While the great bulk of planetary accretion occurs in the first few hundred million years after the birth of a given system, the process never really comes to an end. Most of the objects that make up the tail of this accretion – grains of dust, lumps of ice, and pieces of rock – smash into our atmosphere and ablate harmlessly many kilometres above the ground, visible only as shooting stars. Larger impacts do, however, continue to occur – as illustrated on February 15, 2013, in the Russian city of Chelyabinsk. On that day, with no warning, a small near-Earth asteroid detonated in the atmosphere, and outshone the noon-day sun.
Though the object itself was relatively small, around 20m in diameter, it exploded with sufficient force to shatter windows many kilometres away, damaging more than 7,000 buildings. Amazingly, nobody was killed – but the impact served as a stark reminder of the dangers posed by rocks from space. The longer the timescale we consider, the larger the biggest collision the Earth might experience. A stand-out example is the impact, around 65 million years ago, thought to have contributed to the extinction of the dinosaurs.
Fortunately for us here on Earth, the rate at which such catastrophic impacts occur is relatively low, but this might not be the case in other planetary systems. Thanks to observations carried out at infrared wavelengths, we are now in a position to start categorising the small-object populations of other planetary systems. As we do, we are finding that many systems contain far more debris, left over from their formation, than does our own. This gives us an additional tool by which we can assess potentially habitable planets: based on these kinds of observations, it should be possible to estimate the impact regimes they might experience.
Topological quantum computing (TQC) is a newer type of quantum computing that uses "braids" of particle tracks, rather than actual particles such as ions and electrons, as the qubits to implement computations. Using braids has one important advantage: it makes TQCs practically immune to the small perturbations in the environment that cause decoherence in particle-based qubits and often lead to high error rates.
Ever since TQC was first proposed in 1997, experimentally realizing the appropriate braids has been extremely difficult. For one thing, the braids are formed not by the trajectories of ordinary particles, but by the trajectories of exotic quasiparticles (particle-like excitations) called anyons. Also, movements of the anyons must be non-Abelian, a property similar to the non-commutative property in which changing the order of the anyons' movements changes their final tracks. In most proposals of TQC so far, the non-Abelian statistics of the anyons has not been powerful enough, even in theory, for universal TQC.
Now in a new study published in Physical Review Letters, physicists Abolhassan Vaezi at Cornell University and Maissam Barkeshli at Microsoft's research lab Station Q have theoretically shown that anyons tunneling in a double-layer system can transition to an exotic non-Abelian state that contains "Fibonacci" anyons that are powerful enough for universal TQC.
"Our work suggests that some existing experimental setups are rich enough to yield a phase capable of performing 'universal' TQC, i.e., all of the required logical gates for the performance of a quantum computer can be made through the braiding of anyons only," Vaezi told Phys.org. "Since braiding is a topological operation and does not perturb the low-energy physics, the resulting quantum computer is fault-tolerant."
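The key property the article invokes — that braiding operations do not commute, so the order of exchanges matters — can be illustrated numerically. The sketch below uses the standard two-dimensional representation of Fibonacci anyon braiding from the physics literature (the F- and R-matrices built from the golden ratio); the code itself is an illustration, not an implementation from the study.

```python
import cmath
import math

# The golden ratio enters the Fibonacci anyon F-matrix.
phi = (1 + math.sqrt(5)) / 2

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# F-matrix: change of basis between fusion channels (it is its own inverse).
F = [[1 / phi,            1 / math.sqrt(phi)],
     [1 / math.sqrt(phi), -1 / phi]]

# R-matrix: phases picked up when two anyons are exchanged, depending on
# whether they fuse to the vacuum or to another Fibonacci anyon.
R = [[cmath.exp(-4j * cmath.pi / 5), 0],
     [0, cmath.exp(3j * cmath.pi / 5)]]

B1 = R                       # braid the first pair of anyons
B2 = matmul(F, matmul(R, F)) # braid the second pair (conjugated by F)

# Non-Abelian statistics: applying the two braids in different orders
# produces different operations on the quantum state.
lhs = matmul(B1, B2)
rhs = matmul(B2, B1)
differs = any(abs(lhs[i][j] - rhs[i][j]) > 1e-9
              for i in range(2) for j in range(2))
print(differs)  # True: braid order matters
```

Because every logic gate is encoded in the topology of such braids rather than in fragile local states, small environmental perturbations cannot change the computation — which is the fault tolerance Vaezi describes.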
Damage to neural tissue is typically permanent and causes lasting disability in patients, but a newly discovered approach holds incredible potential for reconstructing neural tissue at high resolution in three dimensions. Research recently published in the Journal of Neural Engineering demonstrated a method for embedding scaffolds of patterned nanofibers within three-dimensional (3D) hydrogel structures. Neurite outgrowth from neurons in the hydrogel followed the nanofiber scaffolding, tracking directly along the nanofibers, particularly when the nanofibers were coated with laminin, a cell adhesion molecule. The coated nanofibers also significantly enhanced the length of growing neurites, and the type of hydrogel significantly affected the extent to which the neurites tracked the nanofibers.
“Neural stem cells hold incredible potential for restoring damaged cells in the nervous system, and 3D reconstruction of neural tissue is essential for replicating the complex anatomical structure and function of the brain and spinal cord,” said Dr. McMurtrey, author of the study and director of the research institute that led this work. “So it was thought that the combination of induced neuronal cells with micropatterned biomaterials might enable unique advantages in 3D cultures, and this research showed that not only can neuronal cells be cultured in 3D conformations, but the direction and pattern of neurite outgrowth can be guided and controlled using relatively simple combinations of structural cues and biochemical signaling factors.”
Since the late 1970s, NASA has been monitoring changes in the Greenland Ice Sheet. Recent analysis of seven years of surface elevation readings from NASA's ICESat satellite and four years of laser and ice-penetrating radar data from NASA's airborne mission Operation IceBridge shows how the surface elevation of the ice sheet has changed.
Some 130 million light years away, within the constellation Canis Major, two twinkling spiral galaxies are in the process of colliding, treating our eyes to a dazzling show. The pair—NGC 2207 and IC 2163—has hosted the explosive deaths of three stars as supernovas in the last 15 years, and is also spawning stars at an intense rate. But scientists have become particularly interested in these merging galaxies for another reason: they are home to one of the largest known collections of super-bright X-ray objects. These so-called "ultraluminous X-ray sources" have been spotted using NASA's Chandra X-ray Observatory.
Just like the Milky Way, these galaxies are teeming with bright sources of X-rays called X-ray binaries. These are systems in which a normal star closely orbits a collapsed star, such as a neutron star or black hole. Strong gravitational forces from the compact star draw material off the normal star, a process known as accretion. As this material spirals toward the compact star, it is heated to millions of degrees, generating a huge amount of X-rays. While intense, the emission from these systems pales in comparison to that from ultraluminous X-ray sources (ULXs).
As the name suggests, ULXs are exceedingly bright; they emit more radiation in X-rays alone than a million suns do at all wavelengths. Although they are not well understood, many believe they could be black holes of approximately 10 solar masses that are collecting, or accreting, material onto a disk and emitting X-rays in an intense beam. ULXs are also extremely rare: our galaxy doesn't have any, most other galaxies don't either, and those that do usually host only one. Between NGC 2207 and IC 2163, however, there are 28.
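A back-of-envelope check shows why beamed emission is invoked for 10-solar-mass black holes. Using the standard values for the Sun's luminosity and the Eddington limit (both are textbook constants, not figures from the article), a source outshining "a million suns" exceeds the maximum steady luminosity a 10-solar-mass accretor can sustain:

```python
# Standard astrophysical constants (illustrative check, not from the study).
L_SUN = 3.828e33            # solar luminosity, erg/s
L_EDD_PER_MSUN = 1.26e38    # Eddington luminosity per solar mass, erg/s

# "More X-rays than a million suns emit at all wavelengths":
ulx_min = 1e6 * L_SUN                # ~3.8e39 erg/s

# Eddington limit for a 10-solar-mass black hole:
l_edd_10 = 10 * L_EDD_PER_MSUN       # ~1.3e39 erg/s

# A ULX exceeds the isotropic limit of a 10 M_sun accretor by a factor
# of about three, which is why a narrow X-ray beam is one explanation.
print(ulx_min / l_edd_10)
```

If the emission is funneled into a beam pointed our way, the true luminosity can be lower than the apparent one, reconciling the numbers.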
Nearly 269,000 tons of plastic pollution may be floating in the world's oceans, according to a new study. Microplastic pollution is found in varying concentrations throughout the oceans, but estimates of the global abundance and weight of floating plastics, both micro and macroplastic, lack sufficient data to support them. To better estimate the total number of plastic particles and their weight floating in the world's oceans, scientists from six countries contributed data from 24 expeditions collected over a six-year period from 2007-2013 across all five sub-tropical gyres, coastal Australia, the Bay of Bengal, and the Mediterranean Sea. The data included information about microplastics collected using nets and large plastic debris from visual surveys, which were then used to calibrate an ocean model of plastic distribution.
Based on the data and model, the authors of the study estimate a minimum of 5.25 trillion plastic particles weighing nearly 269,000 tons in the world's oceans. Large plastics appear to be abundant near coastlines and degrade into microplastics in the five subtropical gyres, while the smallest microplastics were present in more remote regions, such as the subpolar gyres, which the authors did not expect. The distribution of the smallest microplastics in remote regions of the ocean may suggest that gyres act as 'shredders' of large plastic items into microplastics, which they then eject across the ocean.
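A quick sanity check on the study's two headline figures (assuming "tons" means metric tonnes) gives the average mass per floating piece — a reminder that the count is dominated by tiny microplastics while the mass is dominated by a few large items:

```python
# Sanity check on the quoted figures (metric tonnes assumed).
total_mass_kg = 269_000 * 1000   # 269,000 tonnes expressed in kg
n_particles = 5.25e12            # 5.25 trillion floating pieces

avg_mass_g = total_mass_kg * 1000 / n_particles
print(round(avg_mass_g, 3))      # ~0.051 g per piece on average
```

An average of roughly 51 milligrams per piece is consistent with a population that is overwhelmingly millimetre-scale microplastic by number.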
"Our findings show that the garbage patches in the middle of the five subtropical gyres are not the final resting places for the world's floating plastic trash. The endgame for micro-plastic is interactions with entire ocean ecosystems," says Marcus Eriksen, PhD, Director of Research for the 5 Gyres Institute.
Take the connectome of a worm and transplant it as software into a Lego Mindstorms EV3 robot - what happens next? It is a deep and long-standing philosophical question: are we just the sum of our neural networks? Of course, if you work in AI you take the answer mostly for granted, but until someone builds a human brain and switches it on we really don't have a concrete example of the principle in action.
The nematode worm Caenorhabditis elegans (C. elegans) is tiny and has only 302 neurons. These have been completely mapped, and the OpenWorm project is working to build a complete simulation of the worm in software. One of the founders of the OpenWorm project, Timothy Busbice, has taken the connectome and implemented it as an object-oriented neuron program.
The model is accurate in its connections and uses UDP packets to fire neurons. If two neurons have three synaptic connections, then when the first neuron fires a UDP packet is sent to the second neuron with the payload "3". The neurons are addressed by IP and port number. The system uses an integrate-and-fire algorithm: each neuron sums the incoming weights and fires if the total exceeds a threshold. The accumulator is zeroed if no message arrives within a 200ms window or when the neuron fires. This is similar to what happens in the real neural network, but not exact.
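The integrate-and-fire scheme just described can be sketched in a few lines. This is a minimal in-process illustration, not the project's code: the class names and the threshold value are invented, and the UDP transport is simulated by direct method calls (the real system would use `socket.sendto` with the weight as the payload).

```python
import time

# Minimal sketch of the integrate-and-fire scheme described above.
# Names and the threshold are illustrative; UDP delivery is simulated
# in-process rather than over the network.

class Neuron:
    THRESHOLD = 5     # illustrative firing threshold
    WINDOW = 0.2      # 200 ms integration window, in seconds

    def __init__(self, name):
        self.name = name
        self.accumulator = 0
        self.last_msg = time.monotonic()
        self.targets = []   # (neuron, weight) pairs, one per synapse group

    def connect(self, other, weight):
        self.targets.append((other, weight))

    def receive(self, weight):
        now = time.monotonic()
        # Zero the accumulator if no message arrived within the window.
        if now - self.last_msg > self.WINDOW:
            self.accumulator = 0
        self.last_msg = now
        self.accumulator += weight
        if self.accumulator >= self.THRESHOLD:
            self.accumulator = 0   # reset on firing
            self.fire()

    def fire(self):
        # In the robot this would be a UDP send to each target's IP:port.
        for target, weight in self.targets:
            target.receive(weight)

fired = []
a, b = Neuron("sensory"), Neuron("inter")
a.connect(b, 3)                          # three synapses -> payload "3"
b.fire = lambda: fired.append(b.name)    # record downstream firings

for _ in range(4):
    a.receive(3)   # rapid inputs push A over threshold twice
print(fired)       # B accumulated two payloads of 3 and fired once
```

Each time A crosses its threshold it forwards its synaptic weight to B; after two such firings B's own accumulator crosses threshold and B fires, mirroring how activity propagates through the simulated connectome.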
The software works with sensors and effectors provided by a simple LEGO robot. The sensors are sampled every 100ms. For example, the sonar sensor on the robot is wired as the worm's nose. If anything comes within 20cm of the "nose" then UDP packets are sent to the sensory neurons in the network.
The same idea is applied to the 95 motor neurons but these are mapped from the two rows of muscles on the left and right to the left and right motors on the robot. The motor signals are accumulated and applied to control the speed of each motor. The motor neurons can be excitatory or inhibitory and positive and negative weights are used.
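The left/right motor mapping amounts to summing signed neuron outputs into a wheel speed. A toy sketch, with invented weights and scaling (only the signed-sum idea comes from the description above):

```python
# Illustrative sketch of the motor mapping described above: signed
# motor-neuron outputs (positive = excitatory, negative = inhibitory)
# are accumulated into a speed for each wheel motor.
# The activity values and scale factor are invented for illustration.

def motor_speeds(left_activity, right_activity, scale=10):
    """Sum signed motor-neuron outputs and scale to a motor speed."""
    return scale * sum(left_activity), scale * sum(right_activity)

# Strong excitation on the left, mixed signals on the right:
left, right = motor_speeds([1, 1, 1, -0.5], [1, -1, 0.5, -0.5])
print(left, right)  # left wheel driven harder, so the robot turns
```

Because inhibitory neurons contribute negative terms, a burst of inhibition on one side slows that wheel and steers the robot, just as it would bend the worm's body.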
And the result? It is claimed that the robot behaved in ways similar to those observed in C. elegans. Stimulation of the nose stopped forward motion. Touching the anterior and posterior touch sensors made the robot move forward and back accordingly. Stimulating the food sensor made the robot move forward.
More Information: The Robotic Worm (Biocoder pdf - free on registration)
All big galaxies in the Universe host a supermassive black hole at their center, and in about a tenth of all galaxies these supermassive black holes are growing by swallowing huge amounts of gas and dust from their surrounding environments. In this process the material heats up and shines brightly, forming the most energetic sources of emission known in the Universe: active galactic nuclei.
The hot dust forms a ring around the supermassive black hole and emits infrared radiation, which the scientists used as a "standard ruler" for gauging distance.
By combining the light from the two 10-m Keck telescopes on Mauna Kea in Hawaii using a method called interferometry, the scientists achieved an effective resolution equivalent to a telescope with a perfect 85-meter-diameter mirror. This provided resolution about a hundred times better than that of the Hubble Space Telescope and allowed them to measure the angular size of the dust ring on the sky.
By combining the physical size of 30 light-days with the apparent size measured with the data from the Keck interferometer, the astronomers were able to determine the distance to NGC 4151. “We calculated the distance to be 62 million light-years,” said Dr Darach Watson of the University of Copenhagen’s Niels Bohr Institute, who is a co-author of the paper published in the journal Nature.
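The distance determination rests on the small-angle relation: angular size equals physical size divided by distance. Plugging the article's own figures back in shows how tiny the measured angle must be (whether the 30 light-days is a radius or a diameter is an assumption here, but the order of magnitude is the point):

```python
import math

# Small-angle check on the quoted numbers: theta = size / distance.
size_light_days = 30.0     # physical size of the dust ring, from the article
distance_mly = 62.0        # derived distance to NGC 4151, from the article

distance_light_days = distance_mly * 1e6 * 365.25
theta_rad = size_light_days / distance_light_days

# Convert radians to milliarcseconds.
theta_mas = theta_rad * (180 / math.pi) * 3600 * 1e3
print(round(theta_mas, 2))  # a fraction of a milliarcsecond
```

An angle of a few tenths of a milliarcsecond is far beyond a single telescope's reach, which is why the 85-meter effective baseline of the Keck interferometer was essential.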
“The previous calculations based on redshift were between 13 million and 95 million light-years, so we have gone from a great deal of uncertainty to now being able to determine the precise distance. This is very significant for astronomical calculations of cosmic scale distances.” “Such distances are key in pinning down the cosmological parameters that characterize our Universe or for accurately measuring black hole masses,” Dr Hoenig added.
“Indeed, NGC 4151 is a crucial anchor to calibrate various techniques to estimate black hole masses. Our new distance implies that these masses may have been systematically underestimated by 40 per cent.”
The Krubera cave is located in the Arabika Massif mountain range on the edge of the Black Sea in Abkhazia, which some argue is part of Georgia. It was once said to be bottomless, but explorers have managed to map Earth's deepest known cave, charting every known twist and turn of the terrifying Krubera cave, which measures 7,208 ft (2,197 m) deep. And with every expedition the chasm seems to become deeper, as cavers plunge to depths never before visited by humans and extend the cave's reach into the Earth.
The cave is also known by its Russian name, Voronya, meaning "crows' cave" - slang used by Kiev cavers during the 1980s because of the number of crows nesting in the entrance pit. The Arabika Massif is one of the largest high-mountain limestone karst massifs (the main mass of an exposed structure) in the Western Caucasus. It is composed of Lower Cretaceous and Upper Jurassic limestones that dip continuously southwest toward the Black Sea and plunge below the modern sea level. The cave, which is named after Russian geologist Alexander Kruber, is the only chasm on Earth known to be deeper than 6,561 ft (2,000 m).
In 2005 a new series of expeditions was organized, with a team of 56 carrying some five tons of equipment into the chasm. Much like scaling a mountain, the team had to cover set distances so they could establish camps at depths of 2,300, 3,986, 4,630, and 5,380 ft (700, 1,215, 1,410, and 1,640 m). The explorers were able to cook meals, sleep in tents, and huddle together for warmth before venturing down the limestone rock faces for up to 20 hours at a time, sometimes through extremely cold water. It takes about a month to climb down to the bottom.