Why Is Machine Learning (CS 229) The Most Popular Course At Stanford? It turns out that artificial intelligence (AI), and the robotics tied to it, consists of two primary systems: control and perception.
NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1,450 news sources.
NOTE: All articles in the Amazing Science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the funnel at the top right of the screen) to display all relevant postings sorted by topic. You can also type your own query; for example, search for "dna" to find all articles involving DNA as a keyword.
MOST_READ • 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
The part of the brain that tells us the direction to travel when we navigate has been identified by UCL scientists, and the strength of its signal predicts how well people can navigate.
It has long been known that some people are better at navigating than others, but until now it has been unclear why. The latest study, funded by the Wellcome Trust and published in Current Biology, shows that the strength and reliability of ‘homing signals’ in the human brain vary among people and can predict navigational ability.
In order to successfully navigate to a destination, you need to know which direction you are currently facing and which direction to travel in. For example, ‘I am facing north and want to head east’. It is already known that mammals have brain cells that signal the direction that they are currently facing, a discovery that formed part of the 2014 Nobel Prize in Physiology or Medicine to UCL Professor John O’Keefe.
The latest research reveals that the part of the brain that signals which direction you are facing, called the entorhinal region, is also used to signal the direction in which you need to travel to reach your destination. This part of the brain tells you not only which direction you are currently facing, but also which direction you should be facing in the future. In other words, the researchers have found where our ‘sense of direction’ comes from in the brain and worked out a way to measure it using functional magnetic resonance imaging (fMRI).
In the study, 16 healthy volunteers were asked to navigate a simple square environment simulated on a computer. Each wall had a picture of a different landscape, and each corner contained a different object. Participants were placed in a corner of the environment, facing a certain direction, and were asked how to navigate to an object in another corner.
Dr Martin Chadwick (UCL Experimental Psychology), lead author of the study, said: “Our results provide evidence to support the idea that your internal ‘compass’ readjusts as you move through the environment. For example, if you turn left then your entorhinal region should process this to shift your facing direction and goal direction accordingly. If you get lost after taking too many turns, this may be because your brain could not keep up and failed to adjust your facing and goal directions.”
Interferometers capture a basic mystery of quantum mechanics: a single particle can exhibit wave behaviour, yet that wave behaviour disappears when one tries to determine the particle’s path inside the interferometer. This idea has been formulated quantitatively as an inequality, for example, by Englert and Jaeger, Shimony and Vaidman, which upper bounds the sum of the interference visibility and the path distinguishability. Such wave–particle duality relations (WPDRs) are often thought to be conceptually inequivalent to Heisenberg’s uncertainty principle, although this has been debated in the past. A group of physicists now shows that WPDRs correspond precisely to a modern formulation of the uncertainty principle in terms of entropies, namely, the min- and max-entropies. This observation unifies two fundamental concepts in quantum mechanics. Furthermore, it leads to a robust framework for deriving novel WPDRs by applying entropic uncertainty relations to interferometric models.
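The duality relation mentioned above is usually written in terms of the fringe visibility V and the path distinguishability D; the entropic reformulation replaces these with min- and max-entropies. The forms below are standard textbook sketches, not transcriptions from the paper itself:

```latex
% Wave–particle duality relation (Englert–Jaeger–Shimony–Vaidman form):
%   V = interference fringe visibility, D = path distinguishability
V^{2} + D^{2} \le 1
% The unification result says such relations are equivalent to entropic
% uncertainty relations for complementary observables, schematically:
H_{\min}(Z \mid E) + H_{\max}(W) \ge \log_{2}\frac{1}{c}
% where Z and W are the which-path and interference observables and c
% measures their overlap (c = 1/2 for a symmetric two-path interferometer,
% giving a right-hand side of 1).
```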
Contrary to intuition, adding pockets of water to solids can actually make them stronger. This finding, the result of research by Yale scientists, offers “a new knob to turn” for engineers, the researchers say. Engineers will be able to add exciting new properties to composite materials–such as electromagnetism–by embedding droplets of liquid, and, on a purely scientific level, the research provides valuable insight into the nature of the material properties at small and large scales–how the relative strengths of a material at one size can be opposite to that at another size.
“This is a great example of how different types of physics emerge at different scales,” Dr. Eric Dufresne, associate professor of mechanical engineering and materials science at Yale and principal investigator of the study, told The Speaker. “Shrinking the scale of an object can really change how it behaves.”
“Surface tension is a force that tries to reduce the surface area of a material,” Dufresne told us. “It is familiar in fluids: it’s the force that pulls water into a sponge, makes wet hair clump together and lets insects walk on water. Solids have surface tension too, but usually the ‘elastic force’ of the solid is so strong that surface tension doesn’t have much of an effect. The ‘elastic force’ of a solid is what makes a solid spring back to its original shape after you stop pushing on it.

“As the solid gets stiffer, the liquid droplets need to be smaller in order to have this stiffening or cloaking effect. By embedding the solid with droplets of different materials, one can give it new electrical, optical or mechanical properties.”
“It turns out that the importance of surface tension is inversely proportional to the size,” Dufresne said of the study. “So what’s just a negligible force for big things becomes a strong force for very small things–which in turn can strongly affect the material as a whole.”
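Dufresne's scaling argument can be sketched numerically. The crossover scale below which surface tension competes with elasticity is the elastocapillary length L = γ/E, a standard estimate; the material values below are illustrative assumptions, not numbers from the study:

```python
# Elastocapillary length: surface tension matters for features smaller
# than L = gamma / E (a standard scaling estimate, not the paper's model).

def elastocapillary_length(surface_tension, youngs_modulus):
    """Return the length scale gamma/E in meters."""
    return surface_tension / youngs_modulus

# Illustrative values (assumptions, not from the Nature Physics paper):
gamma = 0.04   # N/m, a typical liquid-solid surface tension
E_soft = 3e3   # Pa, a very soft gel
E_stiff = 3e9  # Pa, a glassy polymer

L_soft = elastocapillary_length(gamma, E_soft)    # ~13 micrometers
L_stiff = elastocapillary_length(gamma, E_stiff)  # ~13 picometers

# In a soft solid the crossover is micrometers, so embedded droplets can
# stiffen the composite; in a stiff solid it is sub-atomic, so they cannot.
print(f"soft gel: {L_soft:.2e} m, glassy polymer: {L_stiff:.2e} m")
```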
The report, “Stiffening solids with liquid inclusions,” was completed by Drs. Robert W. Style, Rostislav Boltyanskiy, Benjamin Allen, Katharine E. Jensen, Henry P. Foote, John S. Wettlaufer, and Eric R. Dufresne, and was published in December’s Nature Physics.
NASA's Systems Analysis and Concepts Directorate has issued a report outlining a possible way for humans to visit Venus, rather than Mars—by hovering in the atmosphere instead of landing on the surface. The hovering vehicle, which they call a High Altitude Venus Operational Concept (HAVOC), would resemble a blimp with solar panels on top, and would allow people to do research just 50 kilometers above the surface of the planet.
Most everyone knows that NASA wants to send people to Mars—that planet also gets most of the press. Mars is attractive because it looks more like Earth and is relatively close to us. The surface of Venus on the other hand, though slightly closer, is not so attractive, with temperatures that can melt lead and atmospheric pressure 92 times that of Earth. There's also that thick carbon dioxide atmosphere with sulfuric acid clouds, lots of earthquakes, volcanoes going off and terrifying lightning bolts. So, why would anyone rather go to Venus than Mars? Because of far lower radiation and much better solar energy.
No one wants to go to the surface of Venus, at least not anytime soon. Instead, researchers at NASA are looking into the possibility of sending people to hover in the sky above the planet, conducting research in a far less dangerous place than even the surface of Mars. At 50 kilometers up, a HAVOC would experience just one atmosphere of pressure and temperatures averaging just 75 degrees Celsius, with radiation levels equivalent to those in Canada. Astronauts on Mars, on the other hand, would experience 40 times the amount of radiation typically faced back here on Earth, which suggests they'd have to live deep underground to survive—a problem that scientists have not yet solved. Some are even beginning to wonder about the feasibility of sending humans to the surface of Mars.
The mass extinction event was thought to have paved the way for mammals to dominate, but researchers say many of them died out alongside the dinosaurs. During the Cretaceous period, extinct relatives of living marsupials – such as possums and kangaroos – thrived.
An international team of experts on mammal evolution and mass extinctions has shown that the once-abundant animals – known as metatherian mammals – came close to extinction. A 10-km-wide asteroid struck what is now Mexico at the end of the Cretaceous period, unleashing a global cataclysm of environmental destruction which led to the demise of the dinosaurs.
The study, including the University of Edinburgh scientists, shows that two-thirds of all metatherians living in North America also perished. This included more than 90 per cent of species living in the northern Great Plains of the US, which is the best area in the world for finding latest Cretaceous mammal fossils, researchers said.
Metatherians never recovered their previous diversity, which explains why marsupials are rare today and largely restricted to unusual environments in Australia and South America.
Species that give birth to well-developed live young – known as placental mammals – took full advantage of the metatherians’ demise. Placental mammals – which include many species from mice to men – are ubiquitous across the globe today, researchers said.
“This is a new twist on a classic story. It wasn’t only that dinosaurs died out, providing an opportunity for mammals to reign, but that many types of mammals, such as most metatherians, died out too – this allowed advanced placental mammals to rise to dominance,” said Dr Thomas Williamson from the New Mexico Museum of Natural History and Science.
Researchers reviewed the evolutionary history of metatherians and constructed the most up-to-date family tree for the mammals based on the latest fossil records, allowing them to study extinction patterns in unprecedented detail.
Electrons may be seen as small magnets that also carry a negative electrical charge. On a fundamental level, these two properties are indivisible. However, in certain materials where the electrons are constrained in a quasi one-dimensional world, they appear to split into a magnet and an electrical charge, which can move freely and independently of each other. A longstanding question has been whether a similar phenomenon can happen in more than one dimension. A team led by EPFL scientists has now uncovered new evidence showing that this can happen in quasi two-dimensional magnetic materials. Their work is published in Nature Physics.
A strange phenomenon occurs with electrons in materials that are so thin that they can be thought of as being one-dimensional, e.g. nanowires. Under certain conditions, the electrons in these materials can actually split into an electrical charge and a magnet, which are referred to as "fractional particles". An important but still unresolved question in fundamental physics is whether this phenomenon could arise and be observed in higher dimensions, like two- or three-dimensional systems.
At temperatures close to absolute zero, electrons in many materials bind together to form an exotic liquid that can flow with exactly no friction. In cuprates, however, this electron liquid forms at much higher temperatures, which can be reached using liquid nitrogen alone. Consequently, there is currently an effort to find new materials displaying superconductivity at still higher temperatures, ideally room temperature. But understanding how it arises on a fundamental level has proven challenging, which limits the development of materials that can be used in applications. The advances brought by the EPFL scientists now lend support to the theory of superconductivity postulated by Anderson.
"This work marks a new level of understanding in one of the most fundamental models in physics," says Henrik M. Rønnow. "It also lends new support for Anderson's theory of high-temperature superconductivity, which, despite twenty-five years of intense research, remains one of the greatest mysteries in the discovery of modern materials."
It’s the most basic of ways to find out what something does, whether it’s an unmarked circuit breaker or an unidentified gene — flip its switch and see what happens. New remote-control technology may offer biologists a powerful way to do this with cells and genes. A team at Rensselaer Polytechnic Institute and Rockefeller University is developing a system that would make it possible to remotely control biological targets in living animals — rapidly, without wires, implants, or drugs.
In a technical report published today in the journal Nature Medicine, the team describes successfully using electromagnetic waves to turn on insulin production to lower blood sugar in diabetic mice. Their system couples a natural iron-storage particle, ferritin, to an ion channel called TRPV1, such that when the metal particle is exposed to a radio wave or magnetic field it opens the channel, leading to the activation of an insulin-producing gene. Together, the two proteins act as a nano-machine that can be used to trigger gene expression in cells.
“The use of a radiofrequency-driven magnetic field is a big advance in remote gene expression because it is non-invasive and easily adaptable,” said Jonathan S. Dordick, the Howard P. Isermann Professor of Chemical and Biological Engineering and vice president for research at Rensselaer Polytechnic Institute. “You don’t have to insert anything — no wires, no light systems — the genes are introduced through gene therapy. You could have a wearable device that provides a magnetic field to certain parts of the body and it might be used therapeutically for many diseases, including neurodegenerative diseases. It's limitless at this point.”
Other techniques exist for remotely controlling the activity of cells or the expression of genes in living animals. But these have limitations. Systems that use light as an on/off signal require permanent implants or are only effective close to the skin, and those that rely on drugs can be slow to switch on and off.
The new system, dubbed radiogenetics, uses a signal, in this case low-frequency radio waves or a magnetic field, to activate ferritin particles. They, in turn, prompt the opening of TRPV1, which is situated in the membrane surrounding the cell. Calcium ions then travel through the channel, switching on a synthetic piece of DNA the scientists developed to turn on the production of a downstream gene, which in this study was the insulin gene. In an earlier study, the researchers used only radio waves as the “on” signal, but in the current study, they also tested a related signal – a magnetic field – that could also activate insulin production. They found it had a similar effect to the radio waves.
A Glasgow-based startup is reducing the cost of access to space by offering "satellite kits" that make it easier for space enthusiasts, high schools and universities alike to build a small but functional satellite for as little as US$6,000 and then, thanks to its very small size, launch it for significantly less than the popular CubeSats.
Building a cheap, working satellite is far from easy. The tiny Kickstarter-funded KickSats, released as a secondary payload during SpaceX’s third ISS resupply mission, ran into a technical problem and failed to deploy in time, while the cheap TubeSats, though an interesting concept, have not seen a single launch to date. And although the more proven CubeSats have had more success, they still aren’t exactly affordable (launching a small 3U CubeSat into low Earth orbit will set you back almost $300,000).
As the name suggests, PocketQubes are "pocket-sized" cube-shaped satellites that measure just 5 cm (1.97 in) per side versus CubeSat’s 10 cm (3.94 in). At one eighth the volume and weight of the typical CubeSat, they are much cheaper to send into orbit (launch is approximately $20,000) but still capable of doing interesting things while in low Earth orbit. PocketQubes were first proposed by CubeSat co-inventor Prof. Bob Twiggs of Morehead State University as a way to further cheapen launch costs for universities, and, like CubeSats, they are modular and can be stacked together to create larger craft.
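The size and cost comparisons above work out as follows, using only the figures quoted in this article:

```python
# Volume ratio of a 5 cm PocketQube to a 10 cm CubeSat unit (1U).
pocketqube_side_cm = 5.0
cubesat_side_cm = 10.0

volume_ratio = (pocketqube_side_cm / cubesat_side_cm) ** 3
print(volume_ratio)  # 0.125 -> one eighth, as quoted

# Rough launch-cost comparison using the figures in this article
# (~$20,000 for a PocketQube vs ~$300,000 for a 3U CubeSat):
pocketqube_launch_usd = 20_000
cubesat_3u_launch_usd = 300_000
print(cubesat_3u_launch_usd / pocketqube_launch_usd)  # 15.0
```

Note the costs aren't directly per-kilogram comparable (a 3U CubeSat carries far more payload), but the headline gap is what makes PocketQubes attractive to schools.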
Thanks to a successful launch on a Russian Dnepr-1 rocket in November last year, there are already four PocketQubes currently in orbit, including the still-operational $50SAT, which cost less than $250 in parts and was developed with the help of Prof. Twiggs himself.
But for those who lack the know-how, building your first satellite – even a tiny one – is bound to be an exceedingly complicated and expensive affair. Glasgow-based startup PocketQube Shop, which sells components for the picosatellites, is trying to fill the gap with a "PocketQube Kit" that contains the main building blocks for any small-budget satellite project.
The kit includes a spacecraft frame, a radio board, an on-board computer and a programmable Labsat development board that can be used to test different electronic boards. Moreover, the kit can interface with third party payloads. With a low (in spacecraft terms) starting price of $5,999 (around £4,000), the kit is targeted at high schools, university students and hobbyists alike.
It will take about 11 trillion gallons of water (42 cubic kilometers) -- around 1.5 times the maximum volume of the largest U.S. reservoir -- to recover from California's continuing drought, according to a new analysis of NASA satellite data.
The finding was part of a sobering update on the state's drought made possible by space and airborne measurements and presented by NASA scientists Dec. 16 at the American Geophysical Union meeting in San Francisco. Such data are giving scientists an unprecedented ability to identify key features of droughts, and can be used to inform water management decisions.
A team of scientists led by Jay Famiglietti of NASA's Jet Propulsion Laboratory in Pasadena, California, used data from NASA's Gravity Recovery and Climate Experiment (GRACE) satellites to develop the first-ever calculation of this kind -- the volume of water required to end an episode of drought.
Earlier this year, at the peak of California's current three-year drought, the team found that water storage in the state's Sacramento and San Joaquin river basins was 11 trillion gallons below normal seasonal levels. Data collected since the launch of GRACE in 2002 show this deficit has increased steadily.
"Spaceborne and airborne measurements of Earth's changing shape, surface height and gravity field now allow us to measure and analyze key features of droughts better than ever before, including determining precisely when they begin and end and what their magnitude is at any moment in time," Famiglietti said. "That's an incredible advance and something that would be impossible using only ground-based observations."
GRACE data reveal that, since 2011, the Sacramento and San Joaquin river basins decreased in volume by four trillion gallons of water each year (15 cubic kilometers). That's more water than California's 38 million residents use each year for domestic and municipal purposes. About two-thirds of the loss is due to depletion of groundwater beneath California's Central Valley.
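The unit conversions quoted in this article check out; a quick sketch (using the standard definition of 1 US gallon ≈ 3.785 litres):

```python
# Convert the article's gallon figures to cubic kilometres.
LITRES_PER_US_GALLON = 3.785411784
LITRES_PER_KM3 = 1e12  # 1 km^3 = 1e9 m^3 = 1e12 L

def gallons_to_km3(gallons):
    """Convert US gallons to cubic kilometres."""
    return gallons * LITRES_PER_US_GALLON / LITRES_PER_KM3

drought_deficit = gallons_to_km3(11e12)  # ~41.6 km^3, quoted as ~42
annual_loss = gallons_to_km3(4e12)       # ~15.1 km^3, quoted as 15
print(f"deficit: {drought_deficit:.1f} km^3, loss: {annual_loss:.1f} km^3/yr")
```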
In related results, early 2014 data from NASA's Airborne Snow Observatory indicate that snowpack in California's Sierra Nevada range was only half of previous estimates. The observatory is providing the first-ever high-resolution observations of the water volume of snow in the Tuolumne River, Merced, Kings and Lakes basins of the Sierra Nevada and the Uncompahgre watershed in the Upper Colorado River Basin.
Researchers from North Carolina State University have developed a new lithography technique that uses nanoscale spheres to create three-dimensional (3-D) structures with biomedical, electronic and photonic applications. The new technique is significantly less expensive than conventional methods and does not rely on stacking two-dimensional (2-D) patterns to create 3-D structures.
“Our approach reduces the cost of nanolithography to the point where it could be done in your garage,” says Dr. Chih-Hao Chang, an assistant professor of mechanical and aerospace engineering at NC State and senior author of a paper on the work.
Most conventional lithography uses a variety of techniques to focus light on a photosensitive film to create 2-D patterns. These techniques rely on specialized lenses, electron beams or lasers – all of which are extremely expensive. Other conventional techniques use mechanical probes, which are also costly. To create 3-D structures, the 2-D patterns are essentially printed on top of each other. The NC State researchers took a different approach, placing nanoscale polystyrene spheres on the surface of the photosensitive film.
The nanospheres are transparent, but bend and scatter the light that passes through them in predictable ways according to the angle that the light takes when it hits the nanosphere. The researchers control the nanolithography by altering the size of the nanosphere, the duration of light exposures, and the angle, wavelength and polarization of light. The researchers can also use one beam of light, or multiple beams of light, allowing them to create a wide variety of nanostructure designs.
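The "predictable bending" at the sphere surface is, to a first approximation, ordinary refraction governed by Snell's law. A minimal sketch follows; polystyrene's refractive index of roughly 1.59 is a textbook value, and the setup is illustrative rather than the authors' actual optical model (the true near-field behaviour of nanospheres involves Mie scattering):

```python
import math

def refraction_angle(theta_incident_deg, n1=1.0, n2=1.59):
    """Angle inside the sphere from Snell's law: n1*sin(t1) = n2*sin(t2)."""
    t1 = math.radians(theta_incident_deg)
    return math.degrees(math.asin(n1 * math.sin(t1) / n2))

# Light hitting a polystyrene sphere (n ~ 1.59) from air (n = 1.0):
# steeper incidence bends more, which is how the incidence angle
# controls the exposed pattern beneath the sphere.
for angle in (0, 30, 60):
    print(angle, round(refraction_angle(angle), 1))
```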
“We are using the nanosphere to shape the pattern of light, which gives us the ability to shape the resulting nanostructure in three dimensions without using the expensive equipment required by conventional techniques,” Chang says. “And it allows us to create 3-D structures all at once, without having to make layer after layer of 2-D patterns.”
The researchers have also shown that they can get the nanospheres to self-assemble in a regularly-spaced array, which in turn can be used to create a uniform pattern of 3-D nanostructures.
“This could be used to create an array of nanoneedles for use in drug delivery or other applications,” says Xu Zhang, a Ph.D. student in Chang’s lab and lead author of the paper.
For decades, the mantra of electronics has been smaller, faster, cheaper. Today, Stanford engineers add a fourth word - taller. At a conference in San Francisco, a Stanford team will reveal how to build high-rise chips that could leapfrog the performance of the single-story logic and memory chips on today's circuit cards.
Those circuit cards are like busy cities in which logic chips compute and memory chips store data. But when the computer gets busy, the wires connecting logic and memory can get jammed. The Stanford approach would end these jams by building layers of logic atop layers of memory to create a tightly interconnected high-rise chip. Many thousands of nanoscale electronic "elevators" would move data between the layers much faster, using less electricity, than the bottleneck-prone wires connecting single-story logic and memory chips today.
The work is led by Subhasish Mitra, a Stanford professor of electrical engineering and computer science, and H.-S. Philip Wong, the Williard R. and Inez Kerr Bell Professor in Stanford's School of Engineering. They describe their new high-rise chip architecture in a paper being presented at the IEEE International Electron Devices Meeting on Dec. 15-17. The researchers' innovation leverages three breakthroughs.
The first is a new technology for creating transistors, those tiny gates that switch electricity on and off to create digital zeroes and ones. The second is a new type of computer memory that lends itself to multi-story fabrication. The third is a technique to build these new logic and memory technologies into high-rise structures in a radically different way than previous efforts to stack chips.
"This research is at an early stage, but our design and fabrication techniques are scalable," Mitra said. "With further development this architecture could lead to computing performance that is much, much greater than anything available today."
Unlike in mathematics, it is rare to have exact solutions to physics problems.
"When they do present themselves, they are an opportunity to test the approximation schemes (algorithms) that are used to make progress in modern physics," said Michael Strickland, Ph.D., associate professor of physics at Kent State University. Strickland and four of his collaborators recently published an exact solution in the journal Physical Review Letters that applies to a wide array of physics contexts and will help researchers to better model galactic structure, supernova explosions and high-energy particle collisions, such as those studied at the Large Hadron Collider at CERN in Switzerland. In these collisions, experimentalists create a short-lived high-temperature plasma of quarks and gluons called quark gluon plasma (QGP), much like what is believed to be the state of the universe milliseconds after the Big Bang 13.8 billion years ago.
In their article, Strickland and co-authors Gabriel S. Denicol of McGill University, Ulrich Heinz and Mauricio Martinez of the Ohio State University, and Jorge Noronha of the University of São Paulo presented the first exact solution that describes a system that is expanding at relativistic velocities radially and longitudinally.
The equation that was solved was invented by Austrian physicist Ludwig Boltzmann in 1872 to model the dynamics of fluids and gases. This equation was ahead of its time since Boltzmann imagined that matter was atomic in nature and that the dynamics of the system could be understood solely by analyzing collisional processes between sets of particles.
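The equation referred to above, in the relativistic form used for quark gluon plasma studies, can be written in standard notation (a schematic sketch, not transcribed from the paper):

```latex
% Relativistic Boltzmann equation for the one-particle distribution f(x,p):
p^{\mu}\,\partial_{\mu} f(x,p) = \mathcal{C}[f]
% In the widely used relaxation-time approximation, the collision term
% drives f toward local equilibrium f_eq over a timescale tau_eq:
\mathcal{C}[f] = -\,\frac{p^{\mu}u_{\mu}}{\tau_{\mathrm{eq}}}\,
\bigl(f - f_{\mathrm{eq}}\bigr)
% where u is the fluid four-velocity; the hydrodynamic equations follow
% from momentum moments of this equation.
```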
"In the last decade, there has been a lot of work modeling the evolution of the quark gluon plasma using hydrodynamics in which the QGP is imagined to be fluidlike," Strickland said. "As it turns out, the equations of hydrodynamics can be obtained from the Boltzmann equation and, unlike the hydrodynamical equations, the Boltzmann equation is not limited to the case of a system that is in (or close to) thermal equilibrium.
"Both types of expansion occur in relativistic heavy ion collisions, and one must include both if one hopes to make a realistic description of the dynamics," Strickland continued. "The new exact solution has both types of expansion and can be used to tell us which hydrodynamical framework is the best."
The abstract for this article can be found at journals.aps.org/prl/abstract/10.1103/PhysRevLett.113.202301.
As the West African Ebola epidemic enters its second year, small batches of experimental vaccines are on the cusp of reaching people in the affected countries. Nature magazine tackles the questions that will determine whether vaccines play a role in ending the current epidemic — and can prevent future flare-ups. Two vaccines are leading contenders to be deployed in West Africa early next year. The furthest along is one co-developed by London-based drug firm GlaxoSmithKline (GSK) and the US National Institute of Allergy and Infectious Diseases (NIAID) in Bethesda, Maryland. Their vaccine is made of a disabled chimpanzee cold-causing adenovirus (called ChAd3) that has been engineered to produce an Ebola protein. The other, originally developed by the Public Health Agency of Canada and since licensed to NewLink Genetics and Merck, is based on a livestock virus (VSV) engineered to carry an Ebola protein.
Only efficacy trials — slated for next year in Liberia and Sierra Leone — can determine whether a vaccine can prevent Ebola infection. But researchers are scouring data from safety trials to identify the doses and regimens that offer the best chances of working. Those decisions will be made by mid-January, according to GSK vaccine researcher Ripley Ballou.
The Liberia trial will probably enrol around 30,000 people living in Monrovia. The plan is for volunteers to be randomly assigned to receive either the GSK–NIAID vaccine, the NewLink–Merck vaccine or a saline-solution injection that will serve as a placebo control. But it is unclear how the suspension of the NewLink–Merck safety trial in Geneva will affect those plans. “If that means it is not going forward rapidly for safety reasons, it would impact on plans for the efficacy trial in Liberia,” says Adrian Hill of the University of Oxford, who led an early safety trial of the GSK–NIAID vaccine. One possibility is to delay this arm of the trial until additional safety tests are completed.
A phase 3 trial of around 6,000 health-care workers in Sierra Leone is also in the planning stages. The current strategy is for all the volunteers to receive a vaccine (yet to be selected) in a phased roll-out. Researchers will determine whether the vaccine works by comparing infection rates among vaccinated and unvaccinated people.
Officials are also discussing the possibility of a third efficacy trial to test whether a strategy known as ring vaccination can quell the epidemic. In this approach, patients living around a newly diagnosed case are vaccinated, in an attempt to prevent transmission.
Infrasound may have alerted warblers to the approaching storm, prompting them to fly more than a thousand kilometers to avoid it.
A group of songbirds may have avoided a devastating storm by fleeing their US breeding grounds after detecting telltale infrasound waves.
Researchers noticed the behaviour after analysing trackers attached to the birds to study their migration patterns. They believe it is the first documented case of birds making detours to avoid destructive weather systems on the basis of infrasound.
The golden-winged warblers had just returned from South America to their breeding grounds in the mountains of Tennessee in 2013 when a massive storm was edging closer. Although the birds had just completed a migration of more than 2,500km, they still had the energy to evade the danger.
The storm, which spawned more than 80 tornadoes across the US and killed 35 people, was 900km away when the birds, apparently acting independently of one another, fled south, with one bird embarking on a 1,500km flight to Cuba before making the return trip once the storm had passed.
“We looked at barometric pressure, wind speeds on the ground and at low elevations, and the precipitation, but none of these things that typically trigger birds to move had changed,” said David Andersen at the University of Minnesota. “What we’re left with is something that allows them to detect a storm from a long distance, and the one thing that seems to be the most obvious is infrasound from tornadoes, which travels through the ground.”
The scientists cannot be sure that the birds picked up infrasound waves from the storm, but previous work in pigeons has suggested that birds might use infrasound to help them navigate. Infrasound waves range from about 0.5Hz to 18Hz, below the audible range of humans.
The discovery of the evasive action could be good news, said Andersen. “With climate change increasing the frequency and severity of storms, this suggests that birds may have some ability to cope that we hadn’t previously realised. These birds seemed to be capable of making really dramatic movements at short notice, even just after returning on their northwards migration,” he said.
Had the storm arrived a couple of weeks later, the birds may not have taken flight. By that time, they would have been nesting, and females especially may have been less likely to flee. “It’s hard to say what would happen. It may be more advantageous to survive than stay with a nest that is going to be destroyed anyway,” Andersen said.
Crows have long been heralded for their high intelligence -- they can remember faces, use tools and communicate in sophisticated ways.
But a newly published study finds crows also have the brain power to solve higher-order, relational-matching tasks, and they can do so spontaneously. That means crows join humans, apes and monkeys in exhibiting advanced relational thinking, according to the research.
Photo: Russian researcher Anna Smirnova observes a crow making the correct selection during a relational-matching trial.
"What the crows have done is a phenomenal feat," says Ed Wasserman, a psychology professor at the University of Iowa and corresponding author of the study. "That's the marvel of the results. It's been done before with apes and monkeys, but now we're dealing with a bird; but not just any bird, a bird with a brain as special to birds as the brain of an ape is special to mammals."
Here is how it worked: the birds were placed into a wire mesh cage into which a plastic tray containing three small cups was occasionally inserted. The sample cup in the middle was covered with a small card on which was pictured a color, shape or number of items. The other two cups were also covered with cards -- one that matched the sample and one that did not. During this initial training period, the cup with the matching card contained two mealworms; the crows were rewarded with these food items when they chose the matching card, but they received no food when they chose the other card.
Once the crows had been trained on identity matching-to-sample, the researchers moved to the second phase of the experiment. This time, the birds were assessed on relational matching of pairs of items.
These relational matching trials were arranged in such a way that neither test pair precisely matched the sample pair, thereby eliminating control by physical identity. For example, the crows might have to choose two same-sized circles rather than two different-sized circles when the sample card displayed two same-sized squares.
What surprised the researchers was not only that the crows could correctly perform the relational matches, but that they did so spontaneously--without explicit training. "That is the crux of the discovery," Wasserman says. "Honestly, if it was only by brute force that the crows showed this learning, then it would have been an impressive result. But this feat was spontaneous."
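The logic of such a relational-matching trial is simple to state even though it is cognitively demanding: the correct choice is the test pair whose internal relation (same vs. different) mirrors the sample's, even when no individual item matches the sample. A minimal sketch of that trial logic (the item labels here are illustrative, not from the study):

```python
# Minimal sketch of the relational matching-to-sample task described above.
# A "pair" is two items; what matters is not the items themselves but
# whether they are the same or different -- the correct test pair is the
# one that shares the sample pair's relation.

def relation(pair):
    """Return the relation within a pair: 'same' or 'different'."""
    a, b = pair
    return "same" if a == b else "different"

def correct_choice(sample, test_pairs):
    """Pick the test pair whose internal relation matches the sample's."""
    target = relation(sample)
    matches = [p for p in test_pairs if relation(p) == target]
    assert len(matches) == 1, "a well-formed trial has exactly one match"
    return matches[0]

# Example trial from the article: the sample shows two same-sized squares,
# so the correct choice is two same-sized circles, not two different-sized ones.
sample = ("square-large", "square-large")
options = [("circle-large", "circle-large"), ("circle-large", "circle-small")]
print(correct_choice(sample, options))  # -> ('circle-large', 'circle-large')
```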
Anna Smirnova, Zoya Zorina, Tanya Obozova, Edward Wasserman. Crows Spontaneously Exhibit Analogical Reasoning. Current Biology, 2014 DOI: 10.1016/j.cub.2014.11.063
The prolific Kepler space telescope has spotted its first new alien planet since being hobbled by a malfunction in May 2013, researchers announced today (Dec. 18). The newly discovered world, called HIP 116454b, is a "super Earth" about 2.5 times larger than our home planet. It lies 180 light-years from Earth, in the constellation Pisces — close enough to be studied by other instruments, scientists said.
"Like a phoenix rising from the ashes, Kepler has been reborn and is continuing to make discoveries," study lead author Andrew Vanderburg, of the Harvard-Smithsonian Center for Astrophysics (CfA), said in a statement. "Even better, the planet it found is ripe for follow-up studies."
Kepler launched in March 2009, on a 3.5-year mission to determine how frequently Earth-like planets occur throughout the Milky Way galaxy. The spacecraft has been incredibly successful to date, finding nearly 1,000 confirmed planets — more than half of all known alien worlds — along with about 3,200 other "candidates," the vast majority of which should turn out to be the real deal.
HIP 116454b is about 20,000 miles (32,000 kilometers) wide and is 12 times more massive than Earth, scientists said. The planet's density suggests that it is either primarily covered by water or is a "mini Neptune" with a large, thick atmosphere. HIP 116454b lies just 8.4 million miles (13.5 million km) from its host star, an "orange dwarf" slightly smaller and cooler than the sun, and completes one orbit every 9.1 days. The close-orbiting planet is too hot to host life as we know it, researchers said. The planet's relative proximity to Earth means it will likely attract further attention in the future.
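The quoted figures are self-consistent, and they make the density inference easy to check: 2.5 Earth radii and 12 Earth masses give a bulk density of roughly 4 g/cm³, well below Earth's ~5.5 g/cm³, which is why a water-rich world or a thick-atmosphere "mini-Neptune" fits. A back-of-the-envelope check (the Earth radius and mass used are standard reference constants, not from the article):

```python
import math

# Standard Earth reference values (assumed, not from the article)
R_EARTH_KM = 6371.0      # mean radius
M_EARTH_KG = 5.972e24    # mass

# HIP 116454b as reported: ~2.5 Earth radii, ~12 Earth masses
radius_km = 2.5 * R_EARTH_KM
mass_kg = 12 * M_EARTH_KG

diameter_miles = 2 * radius_km / 1.609
volume_m3 = (4 / 3) * math.pi * (radius_km * 1e3) ** 3
density = mass_kg / volume_m3 / 1000  # convert kg/m^3 -> g/cm^3

print(f"diameter ~{2 * radius_km:,.0f} km (~{diameter_miles:,.0f} miles)")
print(f"bulk density ~{density:.1f} g/cm^3, vs Earth's ~5.5 g/cm^3")
```

The diameter reproduces the article's "about 20,000 miles (32,000 kilometers)," and the density comes out near 4.2 g/cm³ — intermediate between rock and ice, as the water-world/mini-Neptune interpretation requires.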
Members of Genlisea, a genus of carnivorous plants, possess the smallest genomes known in plants. To elucidate genomic evolution in the group as a whole, researchers have now surveyed a wider range of species, and found a new record-holder.
The genus Genlisea (corkscrew plants) belongs to the bladderwort family (Lentibulariaceae), a family of carnivorous plants. Some of the 29 species of Genlisea that have been described possess tiny genome sizes. Indeed, the smallest genome yet discovered among flowering plants belongs to a member of the group. The term 'genome' here refers to all genetic material arranged in a set of individual chromosomes present in each cell of a given species. An international team of researchers, led by Professor Günther Heubl of LMU's Department of Biology, has now explored, for the first time, the evolution of genome size and chromosome number in the genus. Heubl and his collaborators studied just over half the known species of Genlisea, and their findings are reported in the latest issue of the journal Annals of Botany.
"During the evolution of the genus, the genomes of some Genlisea species must have undergone a drastic reduction in size, which was accompanied by miniaturization of chromosomes, but an increase in chromosome number," says Dr. Andreas Fleischmann, a member of Heubl's research group. Indeed, the chromosomes of the corkscrew plants are so minute that they can only just be resolved by conventional light microscopy. With the aid of an ingenious preparation technique, Dr. Aretuza Sousa, a specialist in cytogenetics and cytology at the Institute of Systematic Botany at LMU, was able to visualize the ultrasmall chromosomes of Genlisea species by fluorescence microscopy. Thanks to this methodology, the researchers were able to identify individual chromosomes and determine their number, as well as measuring the total DNA content of the nuclear genomes of selected representatives of the genus.
The LMU researchers also discovered a new record-holder. Genlisea tuberosa, a recently discovered Brazilian species first described by Andreas Fleischmann in collaboration with Brazilian botanists, turns out to have a genome that encompasses only 61 million base pairs (Mbp; genome size is expressed as the total number of nucleotide base pairs in the DNA double helix). G. tuberosa thus now possesses the smallest plant genome known, beating the previous record by 3 Mbp. Moreover, genome sizes vary widely between different Genlisea species, spanning the range from ~60 to 1700 Mbp.
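To put those numbers in perspective, a quick comparison of the reported sizes (the ~3,100 Mbp figure for the human genome is a standard reference value, not from the article):

```python
# Genome sizes in megabase pairs (Mbp), as reported for the genus
genlisea_smallest = 61    # G. tuberosa, the new record-holder
genlisea_largest = 1700   # upper end of the range within the genus
previous_record = genlisea_smallest + 3   # beaten by 3 Mbp
human = 3100              # standard reference value, not from the article

print(f"previous record: {previous_record} Mbp")
print(f"within-genus variation: ~{genlisea_largest / genlisea_smallest:.0f}-fold")
print(f"human genome: ~{human / genlisea_smallest:.0f}x larger than G. tuberosa's")
```

Even within this one genus, genome size varies by nearly 30-fold — which is what makes the group such a useful natural experiment in genome-size evolution.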
In a development that holds promise for future magnetic memory and logic devices, researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) and Cornell University successfully used an electric field to reverse the magnetization direction in a multiferroic spintronic device at room temperature. This demonstration, which runs counter to conventional scientific wisdom, points a new way towards spintronics and smaller, faster and cheaper ways of storing and processing data.
“Our work shows that 180-degree magnetization switching in the multiferroic bismuth ferrite can be achieved at room temperature with an external electric field when the kinetics of the switching involves a two-step process,” says Ramamoorthy Ramesh, Berkeley Lab’s Associate Laboratory Director for Energy Technologies, who led this research. “We exploited this multi-step switching process to demonstrate energy-efficient control of a spintronic device.”
Ramesh, who also holds the Purnendu Chatterjee Endowed Chair in Energy Technologies at the University of California (UC) Berkeley, is the senior author of a paper describing this research in Nature. The paper is titled “Deterministic switching of ferromagnetism at room temperature using an electric field.” John Heron, now with Cornell University, is the lead and corresponding author.
“The electrical currents that today’s memory and logic devices rely on to generate a magnetic field are the primary source of power consumption and heating in these devices,” says Heron. “This has triggered significant interest in multiferroics for their potential to reduce energy consumption while also adding functionality to devices.”

To demonstrate the potential technological applicability of their technique, Ramesh, Heron and their co-authors used heterostructures of bismuth ferrite and cobalt iron to fabricate a spin-valve, a spintronic device consisting of a non-magnetic material sandwiched between two ferromagnets whose electrical resistance can be readily changed. X-ray magnetic circular dichroism photoemission electron microscopy (XMCD-PEEM) images showed a clear correlation between magnetization switching and the switching from high-to-low electrical resistance in the spin-valve. The XMCD-PEEM measurements were completed at PEEM-3, an aberration-corrected photoemission electron microscope at beamline 11.0.1 of Berkeley Lab’s Advanced Light Source.
Phen-Gen is the first computer analysis software that cross-references a patient's symptoms with their genome sequence, to better aid doctors in diagnosing diseases. The software was created by a team of scientists at A*STAR's Genome Institute of Singapore (GIS), led by Dr. Pauline Ng. Results from the research were published in the prestigious journal Nature Methods on 4th August 2014.
Though printing items like chocolate and pizza might be satisfying enough for some, 3D printing still holds a lot of unfulfilled potential. Talk abounds of disrupting manufacturing, changing the face of construction and even building metal components in space. While it is hard not to get a little bit excited by these potentially world-changing advances, there is one domain where 3D printing is already having a real-life impact. Its capacity to produce customized implants and medical devices tailored specifically to a patient's anatomy has seen it open up all kinds of possibilities in the field of medicine, with the year 2014 having turned up one world-first surgery after another. Let's cast our eye over some of the significant, life-changing procedures to emerge in the past year made possible by 3D printing technology.
NASA's Mars Curiosity rover has measured a tenfold spike in methane, an organic chemical, in the atmosphere around it and detected other organic molecules in a rock-powder sample collected by the robotic laboratory's drill.
"This temporary increase in methane—sharply up and then back down—tells us there must be some relatively localized source," said Sushil Atreya of the University of Michigan, Ann Arbor, and Curiosity rover science team. "There are many possible sources, biological or non-biological, such as interaction of water and rock."
Researchers used Curiosity's onboard Sample Analysis at Mars (SAM) laboratory a dozen times in a 20-month period to sniff methane in the atmosphere. During two of those months, in late 2013 and early 2014, four measurements averaged seven parts per billion. Before and after that, readings averaged only one-tenth that level.
Curiosity also detected different Martian organic chemicals in powder drilled from a rock dubbed Cumberland, the first definitive detection of organics in surface materials of Mars. These Martian organics could either have formed on Mars or been delivered to Mars by meteorites.
Organic molecules, which contain carbon and usually hydrogen, are chemical building blocks of life, although they can exist without the presence of life. Curiosity's findings from analyzing samples of atmosphere and rock powder do not reveal whether Mars has ever harbored living microbes, but the findings do shed light on a chemically active modern Mars and on favorable conditions for life on ancient Mars.
"We will keep working on the puzzles these findings present," said John Grotzinger, Curiosity project scientist of the California Institute of Technology in Pasadena (Caltech). "Can we learn more about the active chemistry causing such fluctuations in the amount of methane in the atmosphere? Can we choose rock targets where identifiable organics have been preserved?"
In a study in the journal Neuron, scientists describe a new high data-rate, low-power wireless brain sensor. The technology is designed to enable neuroscience research that cannot be accomplished with current sensors that tether subjects with cabled connections. Experiments in the paper confirm that new capability. The results show that the technology transmitted rich, neuroscientifically meaningful signals from animal models as they slept and woke or exercised.
“We view this as a platform device for tapping into the richness of electrical signals from the brain among animal models where their neural circuit activity reflects entirely volitional and naturalistic behavior, not constrained to particular space,” said Arto Nurmikko, professor of engineering and physics affiliated with the Brown Institute for Brain Science and the paper’s senior and corresponding author. “This enables new types of neuroscience experiments with vast amounts of brain data wirelessly and continuously streamed from brain microcircuits.”
“The brain sensor is opening unprecedented opportunities for the development of neuroprosthetic treatments in natural and unconstrained environments,” said study co-author Grégoire Courtine, a professor at EPFL (École polytechnique fédérale de Lausanne), who collaborated with Nurmikko’s group on the research. To confirm the system performance, the researchers did a series of experiments with rhesus macaques, which walked on a treadmill while the researchers used the wireless system to measure neural signals associated with the brain’s motion commands. They also did another experiment in which animal subjects went through sleep/wake cycles, unencumbered by cables or wires; the data showed distinct patterns related to the different stages of consciousness and the transitions between them.
“We hope that the wireless neurosensor will change the canonical paradigm of neuroscience research, enabling scientists to explore the nervous system within its natural context and without the use of tethering cables,” said co-lead author David Borton. “Subjects are free to roam, forage, sleep, etc., all while the researchers are observing the brain activity. We are very excited to see how the neuroscience community leverages this platform.”
Every so often our Earth encounters a large chunk of space rock, a reminder that our solar system still contains plenty of debris that could potentially have an impact on life on Earth.
While the great bulk of planetary accretion occurs in the first few hundred million years after the birth of a given system, the process never really comes to an end. Most of the objects that make up the tail of this accretion – grains of dust, lumps of ice, and pieces of rock – smash into our atmosphere and ablate harmlessly many kilometres above the ground, visible only as shooting stars. Larger impacts do, however, continue to occur – as illustrated on February 15, 2013, in the Russian city of Chelyabinsk. On that day, with no warning, a small near-Earth asteroid detonated in the atmosphere, and outshone the noon-day sun.
Though the object itself was relatively small, around 20m in diameter, it exploded with sufficient force to shatter windows many kilometres away, damaging more than 7,000 buildings. Amazingly, nobody was killed – but the impact served as a stark reminder of the dangers posed by rocks from space. The longer the timescale we consider, the larger the biggest collision the Earth might experience. A stand-out example is the impact, around 65 million years ago, thought to have contributed to the extinction of the dinosaurs.
Fortunately for us here on Earth, the rate at which such catastrophic impacts occur is relatively low, but this might not be the case in other planetary systems. Thanks to observations carried out at infrared wavelengths, we are now in a position to start categorising the small-object populations of other planetary systems, and we are finding that many systems contain far more debris, left over from their formation, than does our own. This gives us an additional tool for assessing potentially habitable planets: based on these kinds of observations, it should be possible to estimate the impact regimes they might experience.
Topological quantum computing (TQC) is a newer type of quantum computing that uses "braids" of particle tracks, rather than actual particles such as ions and electrons, as the qubits to implement computations. Using braids has one important advantage: it makes TQCs practically immune to the small perturbations in the environment that cause decoherence in particle-based qubits and often lead to high error rates.
Ever since TQC was first proposed in 1997, experimentally realizing the appropriate braids has been extremely difficult. For one thing, the braids are formed not by the trajectories of ordinary particles, but by the trajectories of exotic quasiparticles (particle-like excitations) called anyons. Also, movements of the anyons must be non-Abelian, a property similar to the non-commutative property in which changing the order of the anyons' movements changes their final tracks. In most proposals of TQC so far, the non-Abelian statistics of the anyons has not been powerful enough, even in theory, for universal TQC.
Now in a new study published in Physical Review Letters, physicists Abolhassan Vaezi at Cornell University and Maissam Barkeshli at Microsoft's research lab Station Q have theoretically shown that anyons tunneling in a double-layer system can transition to an exotic non-Abelian state that contains "Fibonacci" anyons that are powerful enough for universal TQC.
"Our work suggests that some existing experimental setups are rich enough to yield a phase capable of performing 'universal' TQC, i.e., all of the required logical gates for the performance of a quantum computer can be made through the braiding of anyons only," Vaezi told Phys.org. "Since braiding is a topological operation and does not perturb the low-energy physics, the resulting quantum computer is fault-tolerant."
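The "Fibonacci" label refers to how the computational space of these anyons grows: under the fusion rules 1 × τ = τ and τ × τ = 1 + τ, the number of distinct fusion states of n anyons follows the Fibonacci sequence, which is what makes braiding them rich enough to build arbitrary logic gates. A short sketch of this counting (standard Fibonacci-anyon theory, not specific to the paper's double-layer setup):

```python
def fusion_dims(n):
    """Count fusion-tree states of n Fibonacci anyons (tau), by total charge.

    Fusion rules: 1 x tau = tau, and tau x tau = 1 + tau.
    Returns (states with trivial total charge, states with total charge tau).
    """
    vac, tau = 1, 0  # start from the vacuum, then fuse in one tau at a time
    for _ in range(n):
        # Fusing in a tau: a charge-1 tree becomes a charge-tau tree;
        # a charge-tau tree can fuse to either 1 or tau.
        vac, tau = tau, vac + tau
    return vac, tau

for n in (2, 3, 4, 5, 8):
    vac, tau = fusion_dims(n)
    print(f"{n} anyons: {vac} state(s) of trivial charge, {tau} of charge tau")
```

The counts (1, 1, 2, 3, ... trivial-charge states for 2, 3, 4, 5, ... anyons) are exactly the Fibonacci numbers, so the state space grows by a factor of the golden ratio per added anyon — an exponentially large space in which qubits can be encoded and manipulated purely by braiding.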
Damage to neural tissue is typically permanent and causes lasting disability in patients, but a newly discovered approach holds incredible potential to reconstruct neural tissue at high resolution in three dimensions. Research recently published in the Journal of Neural Engineering demonstrated a method for embedding scaffolds of patterned nanofibers within three-dimensional (3D) hydrogel structures. Neurite outgrowth from neurons in the hydrogel followed the scaffolding by tracking directly along the nanofibers, particularly when the nanofibers were coated with laminin, a type of cell adhesion molecule. The coated nanofibers also significantly enhanced the length of growing neurites, and the type of hydrogel significantly affected the extent to which the neurites tracked the nanofibers.
“Neural stem cells hold incredible potential for restoring damaged cells in the nervous system, and 3D reconstruction of neural tissue is essential for replicating the complex anatomical structure and function of the brain and spinal cord,” said Dr. McMurtrey, author of the study and director of the research institute that led this work. “So it was thought that the combination of induced neuronal cells with micropatterned biomaterials might enable unique advantages in 3D cultures, and this research showed that not only can neuronal cells be cultured in 3D conformations, but the direction and pattern of neurite outgrowth can be guided and controlled using relatively simple combinations of structural cues and biochemical signaling factors.”