Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video

A stream of superfluid light produced: Room-temperature superfluidity in a polariton condensate

Scientists have known for centuries that light is composed of waves. The fact that light can also behave as a liquid, rippling and spiraling around obstacles like the current of a river, is a much more recent finding that is still a subject of active research. The "liquid" properties of light emerge under special circumstances, when the photons that form the light wave are able to interact with each other.

 

Researchers from CNR NANOTEC of Lecce in Italy, in collaboration with Polytechnique Montreal in Canada, have shown that for light "dressed" with electrons, an even more dramatic effect occurs: light becomes superfluid, flowing frictionlessly around an obstacle and reconnecting behind it without any ripples.

Daniele Sanvitto, leading the experimental research group that observed this phenomenon, states that "Superfluidity is an impressive effect, normally observed only at temperatures close to absolute zero (-273 degrees Celsius), such as in liquid helium and ultracold atomic gases. The extraordinary observation in our work is that we have demonstrated that superfluidity can also occur at room temperature, under ambient conditions, using light-matter particles called polaritons."

 

"Superfluidity, which allows a fluid in the absence of viscosity to literally leak out of its container," adds Sanvitto, "is linked to the ability of all the particles to condense in a state called a Bose-Einstein condensate, also known as the fifth state of matter, in which particles behave like a single macroscopic wave, oscillating all at the same frequency.

 

Something similar happens, for example, in superconductors: electrons, in pairs, condense, giving rise to superfluids or super-currents able to conduct electricity without losses."

 

These experiments have shown that it is possible to obtain superfluidity at room temperature, whereas until now this property was achievable only at temperatures close to absolute zero. This could allow for its use in future photonic devices.


The largest virtual universe ever simulated

Beyond trillion-particle cosmological simulations for the next era of galaxy surveys: researchers from the University of Zurich have simulated the formation of our entire universe with a large supercomputer. A gigantic catalogue of about 25 billion virtual galaxies has been generated from 2 trillion digital particles. This catalogue is being used to calibrate the experiments on board the Euclid satellite, which will be launched in 2020 with the objective of investigating the nature of dark matter and dark energy.

 

Over a period of three years, a group of astrophysicists from the University of Zurich has developed and optimised a revolutionary code to describe with unprecedented accuracy the dynamics of dark matter and the formation of large-scale structures in the universe. As Joachim Stadel, Douglas Potter and Romain Teyssier report in their recently published paper, the code (called PKDGRAV3) has been designed to make optimal use of the available memory and processing power of modern supercomputing architectures, such as the "Piz Daint" supercomputer of the Swiss National Supercomputing Centre (CSCS). The code was executed on this world-leading machine for only 80 hours, and generated a virtual universe of two trillion (i.e., two thousand billion, or 2 × 10^12) macro-particles representing the dark matter fluid, from which a catalogue of 25 billion virtual galaxies was extracted.
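
For a sense of scale, here is a rough back-of-envelope sketch in Python. The bytes-per-particle figure is an assumption made purely for illustration; the article does not describe PKDGRAV3's actual memory layout.

```python
# Back-of-envelope figures for the simulation described above.
# BYTES_PER_PARTICLE is an assumed value (positions, velocities, IDs,
# tree overhead), not a number taken from the PKDGRAV3 paper.

N_PARTICLES = 2e12          # two trillion dark matter macro-particles
N_GALAXIES = 25e9           # 25 billion virtual galaxies in the catalogue
BYTES_PER_PARTICLE = 100    # assumption for illustration only

memory_pb = N_PARTICLES * BYTES_PER_PARTICLE / 1e15
print(f"Particles per catalogued galaxy: {N_PARTICLES / N_GALAXIES:.0f}")
print(f"Rough memory footprint: {memory_pb:.2f} PB")
```

Even under this modest assumption the particle data alone runs to a few tenths of a petabyte, which is why memory-optimal code was a design requirement.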

 

Thanks to the high precision of their calculation, featuring a dark matter fluid evolving under its own gravity, the researchers have simulated the formation of small concentrations of matter, called dark matter halos, in which we believe galaxies like the Milky Way form. The challenge of this simulation was to model galaxies as small as one tenth of the Milky Way, in a volume as large as our entire observable universe. This was the requirement set by the European Euclid mission, whose main objective is to explore the dark side of the universe.


Cognition for Autonomous Cars Using 6D Localization

To drive autonomously, vehicles need software which emulates the routines of natural human cognition (processes used to judge, plan, acquire knowledge, or otherwise — “think”). Autonomous vehicles must be able to understand the world that surrounds them, and this environmental context can be provided in the form of a machine-readable, high-definition “semantic map.” Detailed 3D semantic maps, also commonly known as “HD maps,” have become the industry-wide standard to enable higher cognition in self-driving cars and Civil Maps is trailblazing at the frontier of this emergent market.

 

Even so, HD semantic maps are of little use to a vehicle without precise localization — the ability of an autonomous vehicle to accurately position itself within the reference map. Much like a person heading somewhere, an intelligent vehicle needs to know where it currently is before it can plan a route and then follow that path. Moreover, while the new generation of highly detailed 3D maps is far more comprehensive than traditional 2D mapping projections, it is not by itself sufficient for achieving Level 4 (SAE) autonomous driving, in which the human driver has no responsibility for vehicle control or route planning. Truly "self-driving" cars need much more, in the form of "cognitive tools" that aid environmental awareness and decision-making.

 

Civil Maps has addressed this knowledge gap by developing techniques for localizing a vehicle in six degrees of freedom: the translational axes (x, y, z) and the rotational axes (roll, pitch, yaw) that are more familiar to pilots than to automotive enthusiasts. A concept video shows the result of combining the company's highly detailed 3D semantic map with localization in six degrees of freedom ("6DoF," also referred to as "6D"). Localization in 6D allows the 3D semantic map to be projected into the field of view of vision sensors such as LiDARs, cameras, and radars. By utilizing Civil Maps' localization routines in this manner, the car is given an additional layer of assistive map information, enabling smarter decisions and safer driving. With both location and orientation in 6DoF, the vehicle can focus (foveate) its sensors on a particular region in space where a need-to-know action is occurring in the car's local frame of reference and environment.
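
As an illustration of the underlying geometry only (a minimal sketch, not Civil Maps' actual software), a 6DoF pose can be turned into a rigid transform that maps a landmark from the semantic map into the vehicle's local frame. All coordinates below are invented for the example.

```python
import numpy as np

def pose_to_matrix(x, y, z, roll, pitch, yaw):
    """Build a 4x4 rigid transform (vehicle -> world) from a 6DoF pose.
    Angles are in radians; the rotation is composed as Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

# Map a world-frame landmark (say, a traffic light from the semantic map)
# into the vehicle frame by inverting the vehicle's pose.
vehicle_pose = pose_to_matrix(10.0, 5.0, 0.0, 0.0, 0.0, np.pi / 2)
landmark_world = np.array([12.0, 8.0, 4.5, 1.0])     # homogeneous coordinates
landmark_vehicle = np.linalg.inv(vehicle_pose) @ landmark_world
print(landmark_vehicle[:3])
```

Once map features are expressed in the vehicle frame this way, they can be projected into each sensor's field of view, which is the step the passage above describes.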


Build-A-Face: Brains encode faces piece by piece

A monkey’s brain builds a picture of a human face somewhat like a Mr. Potato Head — piecing it together bit by bit. The code that a monkey’s brain uses to represent faces relies not on groups of nerve cells tuned to specific faces — as has been previously proposed — but on a population of about 200 cells that code for different sets of facial characteristics. Added together, the information contributed by each nerve cell lets the brain efficiently capture any face, researchers report June 1 in Cell.

 

“It’s a turning point in neuroscience — a major breakthrough,” says Rodrigo Quian Quiroga, a neuroscientist at the University of Leicester in England who wasn’t part of the work. “It’s a very simple mechanism to explain something as complex as recognizing faces.”

 

Until now, Quiroga says, the leading explanation for the way the primate brain recognizes faces proposed that individual nerve cells, or neurons, respond to certain types of faces (SN: 6/25/05, p. 406). A system like that might work for the few dozen people with whom you regularly interact. But accounting for all of the peripheral people encountered in a lifetime would require a lot of neurons.

 

It now seems that the brain might have a more efficient strategy, says Doris Tsao, a neuroscientist at Caltech. Tsao and coauthor Le Chang used statistical analyses to identify 50 variables that accounted for the greatest differences between 200 face photos. Those variables represented somewhat complex changes in the face — for instance, the hairline rising while the face becomes wider and the eyes become set further apart.

 

The researchers turned those variables into a 50-dimensional “face space,” with each face being a point and each dimension being an axis along which a set of features varied. Then, Tsao and Chang extracted 2,000 faces from that map, each linked to specific coordinates. While projecting the faces one at a time onto a screen in front of two macaque monkeys, the team recorded the activity of single neurons in parts of the monkeys' temporal lobes known to respond specifically to faces. Altogether, the recordings captured activity from 205 neurons.
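
Conceptually, building a low-dimensional "face space" is similar to a principal component analysis of face descriptors. The sketch below is a hedged illustration of that idea using random stand-in data; it is not the authors' actual feature set or analysis pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in data: 200 "face photos" flattened to pixel vectors. In the real
# study the variables were landmark and appearance descriptors, not raw
# pixels; these random values exist only to show the mechanics.
rng = np.random.default_rng(0)
faces = rng.normal(size=(200, 64 * 64))

# Keep the 50 axes that explain the most variance: a toy "face space".
face_space = PCA(n_components=50).fit(faces)
coords = face_space.transform(faces)        # each face becomes a 50-D point
print(coords.shape)                         # (200, 50)

# Any point in the space can be decoded back into an approximate face,
# which is how arbitrary faces can be synthesized from coordinates.
reconstructed = face_space.inverse_transform(coords[:1])
print(reconstructed.shape)                  # (1, 4096)
```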

LEONARDO WILD's curator insight, June 9, 10:04 AM
When writing facial descriptions, one of the hard things is to differentiate characters, to give them their unique visual appearance. Perhaps this can help find some key elements that could make those facial descriptions make more sense, both to the writer and the reader.

Oldest Homo sapiens fossil claim rewrites our species' history

Remains from Morocco dated to 315,000 years ago push back our species' origins by 100,000 years — and suggest we didn't evolve only in East Africa.

 

At an archaeological site near the Atlantic coast, finds of skull, face and jaw bones identified as being from early members of our species have been dated to about 315,000 years ago. That indicates H. sapiens appeared more than 100,000 years earlier than thought: most researchers have placed the origins of our species in East Africa about 200,000 years ago. The finds, which are published on 7 June in Nature [1, 2], do not mean that H. sapiens originated in North Africa. Instead, they suggest that the species' earliest members evolved all across the continent, scientists say.

 

“Until now, the common wisdom was that our species emerged probably rather quickly somewhere in a ‘Garden of Eden’ that was located most likely in sub-Saharan Africa,” says Jean-Jacques Hublin, an author of the study and a director at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. Now, “I would say the Garden of Eden in Africa is probably Africa — and it’s a big, big garden.” Hublin was one of the leaders of the decade-long excavation at the Moroccan site, called Jebel Irhoud.

 

Hublin first became familiar with Jebel Irhoud in the early 1980s, when he was shown a puzzling specimen of a lower jawbone of a child from the site. Miners had discovered a nearly complete human skull there in 1961; later excavations had also found a braincase, as well as sophisticated stone tools and other signs of human presence.

 

The bones “looked far too primitive to be anything understandable, so people came up with some weird ideas”, Hublin says. Researchers guessed they were 40,000 years old and proposed that Neanderthals had lived in North Africa.

 

More recently, researchers have suggested that the Jebel Irhoud humans were an ‘archaic’ species that survived in North Africa until H. sapiens from south of the Sahara replaced them. East Africa is where most scientists place our species’ origins: two of the oldest known H. sapiens fossils — 196,000- and 160,000-year-old skulls [3, 4] — come from Ethiopia, and DNA studies of present-day populations around the globe point to an African origin some 200,000 years ago [5].

 

Near-infrared light proves an effective and precise tool for engineering tissues from stem cells

Researchers in UC Santa Barbara’s departments of Chemistry and Biochemistry, and of Molecular, Cellular and Developmental Biology have gotten a step closer to unlocking the secrets of tissue morphology with a method of three-dimensional culturing of embryonic stem cells using light. “The important development with our method is that we have good spatiotemporal control over which cell — or even part of a cell — is being excited to differentiate along a particular gene pathway,” said lead author Xiao Huang, who conducted this study as a doctoral student at UCSB and is now a postdoctoral scholar in the Desai Lab at UC San Francisco. The research, titled “Light-Patterned RNA Interference of 3D-Cultured Human Embryonic Stem Cells,” appears in volume 28, issue 48 of the journal Advanced Materials.

 

Similar to other work in the field of optogenetics — which largely focuses on neurological disorders and activity in living organisms, leading to insights into diseases and conditions such as Parkinson’s and drug addiction — this new method relies on light to control gene expression. The researchers used a combination of hollow gold nanoshells attached to small interfering RNA (siRNA) — short synthetic RNA molecules that play a large role in gene regulation — and thermoreversible hydrogel as 3D scaffolding for the stem cell culture, as well as invisible, near-infrared (NIR) light. NIR light, Huang explained, is ideal when creating a three-dimensional culture in the lab.

 

“Near-infrared light has better tissue penetration, which is useful when the sample becomes thick,” he explained. In addition to enhanced penetration — up to 10 cm deep — the light can be focused tightly on specific areas. Irradiation with the light released the RNA molecules from the nanoshells in the sample and initiated gene-silencing activity, which knocked down green fluorescent protein genes in the cell cluster. The experiment also showed that the irradiated cells grew at the same rate as the untreated control sample; the treated cells showed unchanged viability after irradiation.

 

Of course, culturing tissues consisting of related but varying cell types is a far more complex process than knocking down a single gene. “It’s a concert of orchestrated processes,” said co-author and graduate student researcher Demosthenes Morales, describing the process by which human embryonic stem cells become specific tissues and organs. “Things are being turned on and turned off.” Perturbing one aspect of the system, he explained, sets off a series of actions along the cells’ developmental pathways, much of which is still unknown.

 

“One reason we’re very interested in spatiotemporal control is because these cells, when they’re growing and developing, don’t always communicate the same way,” Morales said, explaining that the resulting processes occur at different speeds, and occasionally overlap. “So being able to control that communication on which cell differentiates into which cell type will help us to be able to control tissue formation,” he added.


“Smoking gun” on ice ages revisited

Paleoclimatologists rock: two million years of radical climate change is significant. “The smoking gun of the ice ages” is the title of an article in the Dec. 9, 2016 issue of Science, the journal of the American Association for the Advancement of Science. The author, David A. Hodell, is listed with the Laboratory for Paleoclimate Research, Department of Earth Sciences, at Cambridge University in the UK.

 

Hodell cites a 40-year-old paper in Science 194, 1121 (1976). In that paper, Hays, Imbrie and Shackleton reported that their proxies for paleo sea-surface temperatures and changing continental ice volumes exhibited periodicities of 42,000, 23,500 and 19,000 years, matching almost exactly the predicted periods of Earth's orbital obliquity and precession. They also found that the dominant rhythm in the paleoclimate variations was 100,000 (±20,000) years, close to the period of the orbital eccentricity cycle.
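
The kind of spectral analysis involved can be illustrated with a toy example: synthesize a proxy-like record containing roughly orbital-period cycles and then recover those periods from its Fourier spectrum. The periods, amplitudes and noise level below are illustrative, not the values from the 1976 paper.

```python
import numpy as np

# Toy spectral analysis of a synthetic climate-proxy record.
dt = 1_000                                   # one sample per 1,000 years
t = np.arange(0, 800_000, dt)                # an 800,000-year record
rng = np.random.default_rng(0)
signal = (1.0 * np.sin(2 * np.pi * t / 100_000)    # eccentricity-paced cycle
          + 0.6 * np.sin(2 * np.pi * t / 41_000)   # obliquity
          + 0.4 * np.sin(2 * np.pi * t / 23_000)   # precession
          + 0.2 * rng.normal(size=t.size))         # measurement noise

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=dt)

# Report the three strongest distinct periods, merging neighbouring bins
# that belong to the same spectral peak.
top_periods = []
for idx in np.argsort(spectrum)[::-1]:
    if freqs[idx] == 0:
        continue
    period = 1 / freqs[idx]
    if all(abs(period - p) > 5_000 for p in top_periods):
        top_periods.append(int(round(period)))
    if len(top_periods) == 3:
        break
print(top_periods)   # the orbital-scale periods re-emerge, to within the frequency resolution
```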

 

Other climatologists have identified 20 glacial/interglacial oscillations over the past two million years with glacial parts of the cycles lasting about four times as long as the warm, interglacial parts. The last glacial maximum was about 18,000 years ago. We have been enjoying the present warm interglacial for about 12,000 years.


Scientists solve mystery of how most antimatter in the Milky Way forms

A team of international astrophysicists led by ANU has shown how most of the antimatter in the Milky Way forms.

Antimatter is material composed of the antiparticle partners of ordinary matter: when antimatter meets matter, the two quickly annihilate, releasing a burst of energy in the form of gamma-rays.

 

Scientists have known since the early 1970s that the inner parts of the Milky Way galaxy are a strong source of gamma-rays, indicating the existence of antimatter, but there had been no settled view on where the antimatter came from. ANU researcher Dr Roland Crocker said the team had shown that the cause was a series of weak supernova explosions over millions of years, each created by the convergence of two white dwarfs, which are ultra-compact remnants of stars no larger than two suns.

 

"Our research provides new insight into a part of the Milky Way where we find some of the oldest stars in our galaxy," said Dr Crocker from the ANU Research School of Astronomy and Astrophysics.

 

Dr Crocker said the team had ruled out the supermassive black hole at the centre of the Milky Way and the still-mysterious dark matter as being the sources of the antimatter. He said the antimatter came from a system where two white dwarfs form a binary system and collide with each other. The smaller of the binary stars loses mass to the larger star and ends its life as a helium white dwarf, while the larger star ends as a carbon-oxygen white dwarf.

 

"The binary system is granted one final moment of extreme drama: as the white dwarfs orbit each other, the system loses energy to gravitational waves causing them to spiral closer and closer to each other," Dr Crocker said.


Scientists Improve Evolutionary Tree of Life for Archaea

An international group of researchers from UK, France, Hungary and Sweden has provided new insights into the origins of the Archaea, the group of simple cellular organisms that are the ancestors of all complex life.

 

The Archaea are one of the primary domains of cellular life, and are possibly the most ancient form of life: putative fossils of archaean cells in stromatolites have been dated to almost 3.5 billion years ago. Like bacteria, these microorganisms are prokaryotes, meaning that they have no cell nucleus or any other organelles in their cells. They thrive in a bewildering variety of habitats, from the familiar – soils and oceans – to the inhospitable and bizarre. They play major roles in modern-day biogeochemical cycles, and are central to debates about the origin of eukaryotic cells. However, understanding their origins and evolutionary history is challenging because of the huge time spans involved.

 

To find the root of the archaeal tree and to resolve the metabolism of the earliest archaeal cells, University of Bristol researcher Dr. Tom Williams and co-authors applied a new statistical approach that harnesses the information in patterns of gene family evolution. “With the development of new technologies for sequencing genomes directly from the environment, many new groups of the Archaea have been discovered,” Dr. Williams said. “But while these genomes have greatly improved our understanding of the diversity of the Archaea, they have so far failed to bring clarity to the evolutionary history of the group. This is because, like other microorganisms, the Archaea frequently obtain DNA from distantly related organisms by lateral gene transfer, which can greatly complicate the reconstruction of evolutionary history.”


How Far Away is Fusion? Unlocking the Power of the Sun

The Sun uses its enormous mass to crush hydrogen into fusion, releasing enormous energy. How long will it be until we’ve got this energy source for Earth?

 

The trick to the Sun’s ability to generate power through nuclear fusion, of course, comes from its enormous mass. The Sun contains 1.989 x 10^30 kilograms of mostly hydrogen and helium, and this mass pushes inward, creating a core heated to 15 million degrees C, with 150 times the density of water.

It’s at this core that the Sun does its work, mashing atoms of hydrogen into helium. This process of fusion is an exothermic reaction, which means that every time a new atom of helium is created, photons in the form of gamma radiation are also released.
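
The energy bookkeeping is easy to check: roughly 0.7 percent of the mass of four protons disappears when they end up as one helium-4 nucleus, and that mass defect is released as energy. A small worked example using standard constants (the full proton-proton chain, with its positrons and neutrinos, is ignored here):

```python
# Energy released when four protons fuse into one helium-4 nucleus.
# Masses are standard physical constants, in kilograms.
m_proton = 1.6726e-27
m_helium4 = 6.6447e-27
c = 2.9979e8                                 # speed of light, m/s

delta_m = 4 * m_proton - m_helium4           # mass defect per fusion event
energy_joules = delta_m * c**2
energy_mev = energy_joules / 1.602e-13

print(f"Mass converted per event: {delta_m:.3e} kg "
      f"({delta_m / (4 * m_proton):.1%} of the input mass)")
print(f"Energy released: {energy_mev:.1f} MeV")   # roughly 26 MeV per helium nucleus
```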

 

The only thing the Sun uses this energy for is light pressure, to counteract the gravity pulling everything inward. Its photons slowly make their way up through the Sun and then they’re released into space. So wasteful. How can we replicate this on Earth?

 

Now, gathering together a Sun's mass of hydrogen here on Earth is one option, but it's really impractical. Where would we put all that hydrogen? The better solution will be to use our technology to simulate the conditions at the core of the Sun. If we can make a fusion reactor where the temperatures and pressures are high enough for atoms of hydrogen to merge into helium, we can harness those sweet, sweet photons of gamma radiation.

 

The main technology developed to do this is called a tokamak reactor; the name is based on a Russian acronym for “toroidal chamber with magnetic coils,” and the first prototypes were created in the 1960s. There are many different reactors in development, but the method is essentially the same.

A vacuum chamber is filled with hydrogen fuel. Then an enormous amount of electricity is run through the chamber, heating up the hydrogen into a plasma state. They might also use lasers and other methods to get the plasma up to 150 to 300 million degrees Celsius (10 to 20 times hotter than the Sun’s core).

 

Superconducting magnets surround the fusion chamber, containing the plasma and keeping it away from the chamber walls, which would otherwise melt. Once the temperatures and pressures are high enough, atoms of hydrogen are crushed together into helium, just like in the Sun. This releases photons which heat up the plasma, keeping the reaction going without any additional energy input. Excess heat reaches the chamber walls and can be extracted to do work.


Assay of nearly 5,000 mutations reveals roots of genetic splicing errors

It’s not so hard anymore to find genetic variations in patients, said Brown University genomics expert William Fairbrother, but it remains difficult to understand whether and how those mutations undermine health. In a new study in Nature Genetics, his research team used a new assay technology called “MaPSy” to sort through nearly 5,000 mutations and identify about 500 that led to errors in how cells processed genes. The system also showed precisely how and why the processing failed.

 

“Today, because we can, we’re getting tens of thousands of variants from each individual that could be relevant,” said Fairbrother, an associate professor of biology. “We can sequence everything. But we want to know which variants are causing diseases — that’s the beginning of precision medicine. How you respond to a therapy is going to be determined by which variant is causing your disease and how.”

 

To accelerate that knowledge, Fairbrother has dedicated his lab to developing a variety of tools and techniques, including software and biophysical systems such as MaPSy, to study gene splicing. Genes are sections of DNA sequence that provide cells with the instructions, or code, for making proteins the body needs for its functions. During this manufacturing process, useful protein coding sequences need to be cut out and reconnected — spliced — from the longer sequences, much as usable movie scenes are cut from longer reels of raw footage when making a film.

 

Genes are often viewed as the blueprint of proteins. Sometimes mutations in genes affect not the code of the proteins themselves, but instead the splicing sites and instructions that govern how the gene sequence should be read. That can be a big problem — while the former kind of problem might affect a component of a protein, the latter kind of error can affect whether the protein is made at all. It’s therefore important to understand how an individual’s genetic variation could alter gene splicing, Fairbrother said. “Splicing errors can be very deleterious because instead of just changing one amino acid [the building block of a protein], it can take out a stretch of 40 or 50 amino acids,” he said.

 

In 2012, Fairbrother’s lab unveiled free web-based software, Spliceman, which analyzes DNA sequences to determine if mutations are likely to cause errors in splicing. Later that year, the lab was part of a team that won the CLARITY contest in which scientists analyzed the whole genomes of three families to find the mutations causing a disease in children from each family.

 

In the new project, Fairbrother and co-lead authors Rachel Soemedi, a postdoctoral researcher at Brown, and Kamil Cygan, a graduate student, developed a “Massively Parallel Splicing Assay,” or “MaPSy,” for rapidly screening the splicing implications of 4,964 variations in the Human Gene Mutation Database (HGMD) of disease-causing genetic problems. MaPSy works by making thousands of artificial genes that can model the effects of disease-causing mutations. The researchers synthesized artificial genes that correspond to “normal” and disease-carrying versions of thousands of genes. These “pooled” artificial genes are processed in large batches in two modes. In the “in vivo” mode, the scientists introduced both healthy and mutant versions of the synthesized genes into living cells to see how often the normal or mutant genes would be successfully processed.
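
A much-simplified sketch of the kind of comparison a pooled splicing assay enables: measure how efficiently the wild-type and mutant versions of a construct are spliced, and flag variants whose mutant allele does markedly worse. The read counts and threshold below are invented for illustration; this is not the authors' actual MaPSy analysis pipeline.

```python
def splicing_ratio(input_counts, spliced_counts):
    """Fraction of a construct's input reads that show correct splicing."""
    return spliced_counts / input_counts if input_counts else 0.0

def flag_splicing_variant(wt, mut, fold_cutoff=1.5):
    """Flag a variant if the mutant allele is spliced markedly worse than
    its wild-type counterpart (illustrative threshold only)."""
    wt_eff = splicing_ratio(*wt)
    mut_eff = splicing_ratio(*mut)
    if mut_eff == 0:
        return True
    return wt_eff / mut_eff >= fold_cutoff

# Toy counts: (input reads, correctly spliced reads) for one variant.
wild_type = (10_000, 7_200)
mutant = (10_000, 2_100)
print(flag_splicing_variant(wild_type, mutant))   # True: likely splicing defect
```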


3D-printed ovaries restore fertility in mice


Fans of 3D printing say it has the potential to revolutionize medicine—think 3D-printed skin, ears, bone scaffolds, and heart valves. Now, prosthetic ovaries made of gelatin have allowed mice to conceive and give birth to healthy offspring. Such engineered ovaries could one day be used to help restore fertility in cancer survivors rendered sterile by radiation or chemotherapy.

 

This “landmark study” is a “significant advance in the application of bioengineering to reproductive tissues,” says Mary Zelinski, a reproductive scientist at the Oregon National Primate Research Center in Beaverton who was not involved with the work.

 

The researchers used a 3D printer with a nozzle that fired gelatin, derived from the collagen that’s naturally found in animal ovaries. The scientists built the ovaries by printing various patterns of overlapping gelatin filaments on glass slides—like building with Lincoln Logs, but on a miniature scale: Each scaffold measured just 15 by 15 millimeters. The team then carefully inserted mouse follicles—spherical structures containing a growing egg surrounded by hormone-producing cells—into these “scaffolds.” The scaffolds that were more tightly woven hosted a higher fraction of surviving follicles after 8 days, an effect the team attributes to the follicles having better physical support.

 

The researchers then tested the more tightly woven scaffolds in live mice. The researchers punched out 2-millimeter circles through the scaffolds and implanted 40–50 follicles into each one, creating a “bioprosthetic” ovary. They then surgically removed the ovaries from seven mice and sutured the prosthetic ovaries in their place. The team showed that blood vessels from each mouse infiltrated the scaffolds. This vascularization is critical because it provides oxygen and nutrients to the follicles and allows hormones produced by the follicles to circulate in the blood stream.  

 

The researchers allowed the mice to mate, and three of the females gave birth to healthy litters, the team reports today in Nature Communications. The mice that gave birth also lactated naturally, which demonstrated that the follicles embedded in the scaffolds produced normal levels of hormones.



Radar warns motorcycle pilots of nearby traffic before they even see the oncoming cars

Radar warns motorcyclists of nearby traffic before they see oncoming cars. Motorcyclists are 18 times more likely to be killed in a collision. This new technology is about to change that. The claim is that this new radar could prevent nearly one-third of all motorcycle accidents.

 

RADAR technology initially developed for use in driverless cars has been adapted for motorcycles. Vehicle-to-vehicle communications developer Cohda Wireless from South Australia has partnered with Bosch, Ducati and Autotalks on a “digital protective shield” that warns riders of nearby traffic before they see oncoming cars. Bosch is commercializing the technology in Ducati production bikes but the radar could also be retrofitted to any car or motorcycle.

 

Production of the technology is being driven by a proposed mandate from the United States Department of Transportation that would require all new vehicles to have vehicle-to-vehicle radars installed. Cohda Wireless Managing Director Paul Gray said the radar was the next step in safety from seatbelts and airbags. “Technologists have gone as far as they can in terms of minimizing harm during an accident and now it is about avoiding the accidents before they even happen,” he said.

 

“If a motorcyclist is riding down the street, the rider will be alerted when a car turning onto the same road creates an opportunity for an accident. This can also happen when the car moving onto the road is not visible to the rider. The radar will also alert drivers who are changing lanes if someone is in their blind spot, which is quite an issue for motorcyclists.”
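
A hedged sketch of the sort of calculation a vehicle-to-vehicle warning system performs (not Cohda's actual implementation): each road user broadcasts its position and velocity, and the receiver estimates the time until two trajectories come dangerously close. All positions, speeds and the conflict radius below are assumed example values.

```python
import math

def time_to_conflict(p1, v1, p2, v2, radius=3.0):
    """Seconds until two road users come within `radius` metres, assuming
    constant velocity. Returns None if they never get that close."""
    dp = (p2[0] - p1[0], p2[1] - p1[1])          # relative position
    dv = (v2[0] - v1[0], v2[1] - v1[1])          # relative velocity
    a = dv[0] ** 2 + dv[1] ** 2
    b = 2 * (dp[0] * dv[0] + dp[1] * dv[1])
    c = dp[0] ** 2 + dp[1] ** 2 - radius ** 2
    if a == 0:
        return 0.0 if c <= 0 else None
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else (0.0 if c <= 0 else None)

# Motorcycle heading east at 15 m/s; car entering from a side road at 8 m/s.
eta = time_to_conflict((0, 0), (15, 0), (45, -24), (0, 8))
print(f"Warn rider: conflict in ~{eta:.1f} s" if eta else "No conflict predicted")
```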

 

Gray said the technology would eventually be in every autonomous car as well. Cohda commands about 60% of the vehicle-to-vehicle communication market. The system uses the public WLAN standard (ITS G5) as the basis for the exchange of data between motorcycles and cars.


New Use for a Century-Old Relativity Experiment to Measure a White Dwarf's Mass

Astronomers have used the sharp vision of NASA’s Hubble Space Telescope to repeat a century-old test of Einstein’s general theory of relativity. The team measured the mass of white dwarf Stein 2051 B, the burned-out remnant of a normal star, by seeing how much it deflects the light from a background star. The gravitational microlensing method data provide a solid estimate of the white dwarf’s mass and yield insights into theories of the structure and composition of the burned-out star.

 

Albert Einstein reshaped our understanding of the fabric of space. In his general theory of relativity in 1915, he proposed the revolutionary idea that massive objects warp space, due to the effects of gravity. Until that time, Isaac Newton's theory of gravity from two centuries earlier held sway: that space was unchanging. Einstein's theory was experimentally verified four years later when a team led by British astronomer Sir Arthur Eddington measured how much the sun's gravity deflected the image of a background star as its light grazed the sun during a solar eclipse.

 

Astronomers had to wait a century, however, to build telescopes powerful enough to detect this gravitational warping phenomenon caused by a star outside our solar system. The amount of deflection is so small only the sharpness of the Hubble Space Telescope could measure it. Hubble observed the nearby white dwarf star Stein 2051 B as it passed in front of a background star. During the close alignment, the white dwarf's gravity bent the light from the distant star, making it appear offset by about 2 milliarcseconds from its actual position. This deviation is so small that it is equivalent to observing an ant crawl across the surface of a quarter from 1,500 miles away.
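
The deflection follows the standard light-bending formula θ = 4GM/(c²b), with b the impact parameter. A quick check reproduces Eddington's solar-limb value of about 1.75 arcseconds, which puts the roughly 2-milliarcsecond shift measured for Stein 2051 B, about a thousand times smaller, in perspective.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

def deflection_arcsec(mass_kg, impact_parameter_m):
    """General-relativistic deflection of light passing a mass: 4GM / (c^2 b)."""
    theta_rad = 4 * G * mass_kg / (c ** 2 * impact_parameter_m)
    return math.degrees(theta_rad) * 3600

# Eddington's 1919 test: light grazing the Sun's limb, about 1.75 arcseconds.
print(f"Solar-limb deflection: {deflection_arcsec(M_SUN, R_SUN):.2f} arcsec")
```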


Solving linear equations with quantum mechanics

Physicists have experimentally demonstrated a purely quantum method for solving systems of linear equations that has the potential to work exponentially faster than the best classical methods. The results show that quantum computing may eventually have far-reaching practical applications, since solving linear systems is commonly done throughout science and engineering.

 

The physicists, led by Haohua Wang at Zhejiang University and Chao-Yang Lu and Xiaobo Zhu at the University of Science and Technology of China, along with their coauthors from various institutions in China, have published their paper on what they refer to as a "quantum linear solver" in a recent issue of Physical Review Letters.

 

"For the first time, we have demonstrated a quantum algorithm for solving systems of linear equations on a superconducting quantum circuit," Lu told Phys.org. "[This is] one of the best solid-state platforms with excellent scalability and remarkable high fidelity."

 

The quantum algorithm they implemented is called the Harrow, Hassidim, and Lloyd (HHL) algorithm, which was previously shown to have the ability, in principle, to lead to an exponential quantum speedup over classical algorithms. So far, however, this speedup has not been demonstrated experimentally.

 

In the new study, the scientists showed that a superconducting quantum circuit running the HHL algorithm can solve the simplest type of linear system, which has two equations with two variables. The method uses just four qubits: one ancilla qubit (a universal component of most quantum computing systems), and three qubits that correspond to the input vector b and the two solutions represented by the solution vector x in the standard linear system Ax = b, where A is a 2 x 2 matrix.

 

By performing a series of rotations, swaps of states, and binary conversions, the HHL algorithm determines the solutions to this system, which can then be read out by a quantum non-demolition measurement. The researchers demonstrated the method using 18 different input vectors and the same matrix, generating different solutions for different inputs. As the researchers explain, it is too soon to tell how much faster this quantum method might work, since these problems are easily solved by classical methods.
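
The article does not give the experiment's actual matrix or input vectors, so the following is a purely classical sketch of a 2 × 2 system of the form Ax = b, with the solution normalized the way HHL would encode it in a quantum state. The matrix and vector are placeholders chosen only to be Hermitian and well conditioned.

```python
import numpy as np

# Illustrative 2x2 Hermitian system of the kind the HHL demonstration targets.
A = np.array([[1.5, 0.5],
              [0.5, 1.5]])
b = np.array([1.0, 0.0])

x = np.linalg.solve(A, b)            # classical reference solution
x_state = x / np.linalg.norm(x)      # HHL encodes only this normalized vector

print("x                =", x)
print("|x> (normalized) =", x_state)
```

The quantum circuit never outputs x itself, only the normalized state, which is why readout and verification require measurements such as the quantum non-demolition measurement mentioned above.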

 

"The whole calculation process takes about one second," Zhu said. "It is hard to directly compare the current version to the classical methods now. In this work, we showed how to solve the simplest 2 x 2 linear system, which can be solved by classical methods in a very short time. The key power of the HHL quantum algorithm is that, when solving an 's-sparse' system matrix of a very large size, it can gain an exponential speed-up compared to the best classical method. Therefore, it would be much more interesting to show such a comparison when the size of the linear equation is scaled to a very large system."

 

The researchers expect that, in the future, this quantum circuit could be scaled up to solve larger linear systems. They also plan to further improve the system's performance by making some straightforward adjustments to the device fabrication to reduce some of the error in its implementation. In addition, the researchers want to investigate how the circuit could be used to implement other quantum algorithms for a variety of large-scale applications.


Exoplanet KELT 9b breaks all heat records and is hotter than many stars

The planet KELT 9b is so hot — hotter than many stars — that it shatters gas giant temperature records, researchers report online June 5 in Nature. This Jupiter-like exoplanet revolves around a star just 650 light-years away, locked in an orbit that keeps one side always facing its star. With blistering temps hovering at about 4,300° Celsius, the atmosphere on KELT 9b’s dayside is over 700 degrees hotter than the previous record-holder — and hot enough that atoms cannot bind together to form molecules.

 

“It’s like a star-planet hybrid,” says Drake Deming, a planetary scientist at the University of Maryland in College Park who was not involved in the research. “A kind of object we’ve never seen before.”

 

KELT 9b also boasts an unusual orbit, traveling around the poles of its star, rather than the equator, once every 36 hours. And radiation from KELT 9b’s host star is so intense that it blows the planet’s atmosphere out like a comet tail — and may eventually strip it away completely.

 

The planet is so bizarre that it took scientists nearly three years to convince themselves it was real, says Scott Gaudi of Ohio State University. Deming suspects KELT 9b is “the tip of the iceberg” for an undiscovered population of scalding-hot gas giants.


Huntington's disease trial test is 'major advance' due to neurofilament light chain

Experts describe the early research as a "major advance" in this field. The study, in the Lancet Neurology, suggests the prototype test could help in the hunt for new treatments. Huntington's disease is an inherited and incurable brain disorder that is currently fatal. Around 10,000 people in the UK have the condition and around 25,000 are at risk. It is passed on through genes, and children who inherit a faulty gene from a parent have a 50% chance of getting the disease in later life. People can develop a range of problems including involuntary movements, personality changes and altered behavior, and may be fully dependent on carers towards the end of their lives.

 

In this study, an international team - including researchers from University College London - looked at 200 people with genes for Huntington's disease - some of whom already had signs of the disease, and others at earlier stages. They compared them to some 100 people who were not at risk of getting the condition. Volunteers had several tests over three years, including brain scans and clinical check-ups to see how Huntington's disease affected people's thinking skills and movement as the condition became more severe. At the same time scientists looked for clues in blood samples - measuring a substance called neurofilament light chain (NFL) - released from damaged brain cells. They found levels of the brain protein were high in people with Huntington's disease and were even elevated in people who carried the gene for Huntington's disease but were many years away from showing any symptoms. And researchers found NFL levels rose as the condition worsened and as people's brains shrank over time.

 

Dr Edward Wild, at UCL, said: "Neurofilament light chain has the potential to serve as a speedometer in Huntington's disease, since a single blood test reflects how quickly the brain is changing. We have been trying to identify blood biomarkers to help track the progression of Huntington's disease for well over a decade, and this is the best candidate we have seen so far."


3-D printing offers new approach to make buildings in 14 hours

MIT researchers have developed a system that can 3-D print the basic structure of an entire building.

 

MIT's system  is a massive robotic arm attached to a track vehicle. The arm is fitted with nozzles that can spray foam insulation on the ground and fill the area in with concrete.

It is operated electrically and can harvest its power from the sun using solar panels. A scoop attached to the robot lets it prepare the building surface and acquire local materials, such as dirt for a rammed-earth building, for the construction itself.

 

To demonstrate the technology, the team constructed the walls of a 50-foot-diameter, 12-foot-high dome in 14 hours of 'printing' time.  Although this technology could change how humans build homes on Earth, the team foresees this system being useful when we go to Mars, as it can create small dwellings in less than 24 hours.
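
Rough arithmetic on the quoted figures, treating the dome as a simple cylinder and assuming a wall thickness (the article does not state one), gives a feel for the average deposition rate:

```python
import math

# Figures quoted in the article, plus one assumption (wall thickness).
diameter_ft = 50
height_ft = 12
print_hours = 14
wall_thickness_ft = 1.0          # assumption; not stated in the article

# Approximate the dome's side wall as a cylinder of the same diameter/height.
wall_area_ft2 = math.pi * diameter_ft * height_ft
wall_volume_ft3 = wall_area_ft2 * wall_thickness_ft

print(f"Wall surface: {wall_area_ft2:.0f} sq ft")
print(f"Average rate: {wall_volume_ft3 / print_hours:.0f} cu ft of wall per hour")
```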


Have Gravitational Waves Left Scars in the Fabric of Spacetime?

Gravitational waves are ripples in spacetime caused by the universe’s most violent collisions, and we detect them with experiments like the Laser Interferometer Gravitational Wave Observatory (LIGO) and its European counterpart, Virgo. These detectors are a series of several-kilometer-long L-shaped buildings that measure gravitational waves passing through Earth as tiny differences in the distance traveled by two laser beams’ light waves. Scientists have spotted gravitational waves twice, maybe three times.

 

If these waves permanently altered spacetime, our detectors might be able to measure the slight change. These changes to spacetime wouldn’t affect your life at all, since they’d be tinier than the individual protons and neutrons in atoms. But the idea is that, given enough passing gravitational waves from incredible black hole collisions, we’d eventually be able to pick up the sum of all these spacetime ripples as a tiny shift in the detector. This could happen after as few as 20 gravitational wave events similar to the first one ever discovered, according to a paper published last year.

 

It’s possible that scientists might be able to spot the scars caused by gravitational waves without observing the waves themselves, which would be useful seeing as our gravitational wave detectors are only sensitive to waves with certain frequencies. Scientists named this idea “orphan memory” in a paper published this month in the journal Physical Review Letters. It’s a bit like The Flash’s footprints—something moving beyond the comprehension of our detectors but leaving behind a tiny hint of a passing force.

 

Other researchers are excited about the prospect of detecting hints of higher-frequency gravitational waves — these could signal exotic physics and extra dimensions, Sanjeev Seahra, associate professor in mathematics at the University of New Brunswick, told Gizmodo. “But detectors such as LIGO are not optimised to see such signals, so the possibility that the gravitational wave memory effect could act as an observable low-frequency component to intrinsically high-frequency waveforms is very encouraging.” The detectors are optimized to see signals between 10 and 2,000 Hz.

 

At least one scientist wasn’t so encouraged. I asked Lionel London, a research associate in gravitational waves at Cardiff University what he thought about using experiments like LIGO to detect the ghostly traces of past spacetime ripples, and he was skeptical. He pointed out that a few of the paper’s statements go against what many astrophysicists know about black hole mergers, like the amount of a black hole’s energy that gets converted into gravitational waves. The paper assumes the entire remnant black hole mass turns into gravitational waves after a collision, but London said only about 10 percent of the system’s initial mass can turn to gravitational waves.


VLA Reveals Secondary Black Hole Near Supermassive Black Hole in Cygnus A Galaxy

Astronomers were surprised when the VLA revealed that a bright new object has appeared near the core of a famous galaxy. They think it's a second supermassive black hole, indicating that the galaxy has merged with another in the past.

 

Pointing the National Science Foundation’s Very Large Array (VLA) at a famous galaxy for the first time in two decades, a team of astronomers got a big surprise, finding that a bright new object had appeared near the galaxy’s core. The object, the scientists concluded, is either a very rare type of supernova explosion or, more likely, an outburst from a second supermassive black hole closely orbiting the galaxy’s primary, central supermassive black hole.

 

The astronomers observed Cygnus A, a well-known and often-studied galaxy discovered by radio-astronomy pioneer Grote Reber in 1939. The radio discovery was matched to a visible-light image in 1951, and the galaxy, some 800 million light-years from Earth, was an early target of the VLA after its completion in the early 1980s. Detailed images from the VLA published in 1984 produced major advances in scientists’ understanding of the superfast “jets” of subatomic particles propelled into intergalactic space by the gravitational energy of supermassive black holes at the cores of galaxies. “This new object may have much to tell us about the history of this galaxy,” said Daniel Perley, of the Astrophysics Research Institute of Liverpool John Moores University in the U.K., lead author of a paper in the Astrophysical Journal announcing the discovery.

 

“The VLA images of Cygnus A from the 1980s marked the state of the observational capability at that time,” said Rick Perley, of the National Radio Astronomy Observatory (NRAO). “Because of that, we didn’t look at Cygnus A again until 1996, when new VLA electronics had provided a new range of radio frequencies for our observations.” The new object does not appear in the images made then. “However, the VLA’s upgrade that was completed in 2012 made it a much more powerful telescope, so we wanted to have a look at Cygnus A using the VLA’s new capabilities,” Perley said.

 

Daniel and Rick Perley, along with Vivek Dhawan, and Chris Carilli, both of NRAO, began the new observations in 2015, and continued them in 2016. “To our surprise, we found a prominent new feature near the galaxy’s nucleus that did not appear in any previous published images. This new feature is bright enough that we definitely would have seen it in the earlier images if nothing had changed,” said Rick Perley. “That means it must have turned on sometime between 1996 and now,” he added.

 

The scientists then observed Cygnus A with the Very Long Baseline Array (VLBA) in November of 2016, clearly detecting the new object. A faint infrared object also is seen at the same location in Hubble Space Telescope and Keck observations, originally made between 1994 and 2002. The infrared astronomers, from Lawrence Livermore National Laboratory, had attributed the object to a dense group of stars, but the dramatic radio brightening is forcing a new analysis.

 

What is the new object? Based on its characteristics, the astronomers concluded it must be either a supernova explosion or an outburst from a second supermassive black hole near the galaxy’s center. While they want to watch the object’s future behavior to make sure, they pointed out that the object has remained too bright for too long to be consistent with any known type of supernova. “Because of this extraordinary brightness, we consider the supernova explanation unlikely,” Dhawan said.

While the new object definitely is separate from Cygnus A’s central supermassive black hole, by about 1500 light-years, it has many of the characteristics of a supermassive black hole that is rapidly feeding on surrounding material.

 

“We think we’ve found a second supermassive black hole in this galaxy, indicating that it has merged with another galaxy in the astronomically-recent past,” Carilli said. “These two would be one of the closest pairs of supermassive black holes ever discovered, likely themselves to merge in the future.”


Destruction of a quantum monopole finally observed

Scientists at Amherst College and Aalto University have made the first experimental observations of the dynamics of isolated monopoles in quantum matter.

 

The new study provided a surprise: the quantum monopole decays into another analogue of the magnetic monopole. The obtained fundamental understanding of monopole dynamics may help in the future to build even closer analogues of the magnetic monopoles.

 

Unlike usual magnets, magnetic monopoles are elementary particles that have only a south or a north magnetic pole, but not both. They have been theoretically predicted to exist, but no convincing experimental observations have been reported. Thus physicists are busy looking for analogue objects.

 

"In 2014, we experimentally realized a Dirac monopole, that is, Paul Dirac's 80-year-old theory where he originally considered charged quantum particles interacting with a magnetic monopole," says Professor David Hall from Amherst College. And in 2015, we created real quantum monopoles," adds Dr. Mikko Möttönen from Aalto University.

 

Whereas the Dirac monopole experiment simulates the motion of a charged particle in the vicinity of a monopolar magnetic field, the quantum monopole has a point-like structure in its own field resembling that of the magnetic monopole particle itself.


Astronomers Watch as Collapsing Star Turns Into a Black Hole

Using data from several telescopes, a team of astronomers watched as a massive, dying star was likely reborn as a black hole.

 

The doomed star, named N6946-BH1, was 25 times as massive as our sun. It began to brighten weakly in 2009. But, by 2015, it appeared to have winked out of existence. By a careful process of elimination based on observations, researchers eventually concluded that the star must have become a black hole. This may be the fate for extremely massive stars in the universe.


Pharmacological characterisation of the highly NaV1.7 selective spider venom peptide Pn3a

Human genetic studies have implicated the voltage-gated sodium channel NaV1.7 as a therapeutic target for the treatment of pain.

 

As the Nav1.7 channel appears to be a highly important component in nociception, with null activity conferring total analgesia,[14] there has been immense interest in developing selective Nav1.7 channel blockers as potential novel analgesics.[27] Nav1.7 is a sodium ion channel that in humans is encoded by the SCN9A gene.[3][4][5] Since Nav1.7 is not present in heart tissue or the central nervous system, selective blockers of Nav1.7, unlike non-selective blockers such as local anesthetics, could be safely used systemically for pain relief. Moreover, selective Nav1.7 blockers may prove to be far more effective analgesics, and with fewer undesirable effects, relative to current pharmacotherapies.[27][28][29]

 

A number of selective Nav1.7 (and/or Nav1.8) blockers are in clinical development, including funapide (formerly TV-45070, XEN402), raxatrigine (formerly CNV1014802, GSK-1014802), PF-05089771, PF-04531083, DSP-2230, AZD-3161, NKTR-171, GDC-0276, and RG7893 (formerly GDC-0287).[30][31][32] Ralfinamide (formerly NW-1029, FCE-26742A, PNU-0154339E) is a multimodal, non-selective Nav channel blocker which is under development for the treatment of pain.[33]

 

Spiders are the most successful venomous animals with an estimated 100,000 extant species [1]. The vast majority of spiders employ a lethal cocktail to rapidly subdue their prey, which are often many times their own size. However, despite their fearsome reputation, less than a handful of these insect assassins are harmful to humans [2,3]. Nevertheless, it is this small group of medically important species that first prompted scientists more than half a century ago to begin exploring the remarkable pharmacological diversity of spider venoms.

 

Amongst the ranks of animals that employ venom for their survival, spiders are the most successful, the most geographically widespread, and arguably consume the most diverse range of prey. Although the predominant items on a spider’s dinner menu are other arthropods, larger species will readily kill and feed on small fish, reptiles, amphibians, birds, and mammals. Thus, spider venoms contain a wealth of toxins that target a diverse range of receptors, channels, and enzymes in a wide range of vertebrate and invertebrate species.

 

Spider venoms are complex cocktails composed of a variety of compounds, including salts, small organic molecules, peptides, and proteins [4,5,6,7,8,9]. However, peptides are the primary components of spider venoms, and some species produce venom containing >1000 unique peptides of mass 2–8 kDa [10]. Based on the number of described spider species and a relatively conservative estimate of the complexity of their venoms, it has been estimated that the potential number of unique spider venom peptides could be upwards of 12 million [11]. In recent years there has been an exponential increase in the number of spider-toxin sequences being reported [12] due to the application of high-throughput proteomic [13,14] and transcriptomic [15,16,17] approaches, or a combination of these methods [10,18,19]. In the last 18 months alone the number of toxins in the ArachnoServer spider-toxin database [20,21] has more than doubled, and is now in excess of 900 (see http://www.arachnoserver.org/). Nevertheless, our knowledge of the diversity of spider-venom peptides is still rudimentary, with less than 0.01% of potential peptides having been isolated and studied.
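
A quick sanity check of that "less than 0.01%" figure, using only the numbers quoted in the paragraph above:

```python
# Numbers quoted above: ~900 characterized toxins versus an estimated
# 12 million unique spider-venom peptides.
estimated_peptides = 12_000_000
characterized = 900

fraction = characterized / estimated_peptides
print(f"Characterized fraction: {fraction:.5%}")   # about 0.0075%, i.e. under 0.01%
```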

 

Although only a small number of spider venom peptides have been pharmacologically characterized, the array of known biological activities is impressive [9]. In addition to the well known neurotoxic effects of spider venoms, they contain peptides with antiarrhythmic, antimicrobial, analgesic, antiparasitic, cytolytic, haemolytic, and enzyme inhibitory activity. Furthermore, the crude venom of Macrothele raveni has antitumor activity, for which the responsible component has not yet been identified [22,23]. Finally, larger toxins such as the latrotoxins from the infamous black widow spider (Latrodectus mactans) and related species induce neurotransmitter release and they have played an important role in dissecting the process of synaptic vesicle exocytosis [24].

 

Since spiders employ their venom primarily to paralyse prey, it is no surprise that these venoms contain an abundance of peptides that modulate the activity of neuronal ion channels and receptors. Indeed, the majority of characterized spider-venom peptides target voltage-gated potassium (KV) [25], calcium (CaV) [26,27], or sodium (NaV) [26,28] channels. More recently, novel spider-venom peptides have been found that interact with ligand-gated channels (e.g., purinergic receptors [29]) and with recently discovered channel families such as acid-sensing ion channels [30], mechanosensitive channels [31], and transient receptor potential channels [32]. Not only are most of these peptides selective for a given class of ion channel; they can also display anything from a mild preference to exquisite selectivity for a particular channel subtype. This potential for high target affinity and selectivity makes spider-venom peptides an ideal natural source for the discovery of novel therapeutic leads [33].
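As a concrete illustration of this target diversity, the small lookup table below groups a few well-studied spider toxins by the channel family they are reported to modulate. These examples are chosen for illustration only and are not drawn from the numbered references in the text above.

```python
# Illustrative map of example spider toxins to the channel family they modulate.
# Chosen as well-known literature examples, not tied to the citations above.
SPIDER_TOXIN_TARGETS = {
    "NaV":              ["ProTx-II", "huwentoxin-IV"],
    "KV":               ["hanatoxin"],
    "CaV":              ["omega-agatoxin IVA"],
    "ASIC":             ["PcTx1 (psalmotoxin-1)"],
    "Mechanosensitive": ["GsMTx4"],
    "TRP":              ["vanillotoxins"],
}

def toxins_for_target(channel_family: str) -> list[str]:
    """Return example toxins reported to act on a given channel family."""
    return SPIDER_TOXIN_TARGETS.get(channel_family, [])

print(toxins_for_target("NaV"))  # ['ProTx-II', 'huwentoxin-IV']
```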

 

Despite the advent of automation and the rise of high-throughput and high-content screening in the pharmaceutical industry, there has been a sharp decline in the rate of discovery and development of novel chemical entities [34,35]. A group of scientists reviewed the emerging role that venom-derived components can play in addressing this decline, with an emphasis on technical advances that can aid the discovery process [36]. It is worth noting that two of the 20 FDA-approved peptide pharmaceuticals were derived from animal venoms (i.e., ziconotide and exendin-4) [37].

more...
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Poison for cancer cells: New method identifies active agents in mixtures of hundreds of substances

Poison for cancer cells: New method identifies active agents in mixtures of hundreds of substances | Amazing Science | Scoop.it
The pharmaceutical industry is always on the lookout for precisely such substances to deploy against threats like cancer. In the case of cancer, for example, blocking the proteasome causes rapidly growing cancer cells to choke on their own waste. The first medication of this kind is already generating annual revenues of over one billion US dollars. The scientists are now looking for further substances with fewer side effects.

Following preliminary studies, one such candidate was a toxic substance produced by the bacterium Photorhabdus luminescens; this is the poison that kills the larvae of the garden chafer. Using the new methodology, the scientists discovered that the bacterium lives inactively in the intestines of the threadworm. When the worm lays its eggs, it infects the larvae. The sudden change in environment causes the bacterium to emit its toxins. After the larva dies, the bacterium stops producing toxins. Once the threadworms hatch from the protective egg membrane, they take the inactive bacterium back into their intestines, and the cycle can start again.

Since the newly developed method also works in intensely colored solutions and in the presence of hundreds of other substances, the workgroup at the Chair of Biochemistry succeeded in isolating the unknown poison directly from the bacterial brew. It turned out to be two structurally very similar compounds, cepafungin I and glidobactin A. The latter was previously considered the strongest proteasome blocker. In spite of the resemblance, cepafungin I had never been tested as a proteasome-blocking agent. The group's tests showed that cepafungin I is indeed a strong proteasome inhibitor; in fact, its potency even surpasses that of the previous record holder.

more...
No comment yet.
Rescooped by Dr. Stefan Gruenwald from Daily Magazine
Scoop.it!

New insights on the spin dynamics of a material candidate for low-power devices

New insights on the spin dynamics of a material candidate for low-power devices | Amazing Science | Scoop.it
Computers process and transfer data through electrical currents passing through tiny circuits and wires. As these currents meet with resistance, they create heat that can undermine the efficiency and even the safety of these devices.

 

To minimize heat loss and optimize performance for low-power technology, researchers are exploring other ways to process information that could be more energy-efficient. One approach that researchers at the U.S. Department of Energy's (DOE) Argonne National Laboratory are exploring involves manipulating the magnetic spin of electrons, a scientific field known as spintronics.

 

"In spintronics, you can think of information as a magnet pointing one way and another magnet pointing in the opposite direction," said Argonne materials scientist Axel Hoffman. "We're interested in how we can use magnetic excitation in applications because processing information this way expends less energy than carrying information through an electrical charge."

 

In a recent report published in Nano Letters, Hoffman and fellow researchers reveal new insights into the properties of a magnetic insulator that is a candidate for low-power device applications; their insights form early stepping-stones towards developing high-speed, low-power electronics that use electron spin rather than charge to carry information. The material they studied, yttrium iron garnet (YIG), is a magnetic insulator that generates and transmits spin current efficiently and dissipates little energy. Because of its low dissipation, YIG has been used in microwave and radar technologies, but recent discoveries of spintronic effects associated with YIG have prompted researchers to explore potential spintronic applications.

 

In their report, Argonne researchers characterize the spin dynamics of a small-scale sample of YIG exposed to an electrical current. "This is the first time anyone has measured spin dynamics on a sample this small," said Benjamin Jungfleisch, an Argonne postdoctoral appointee and lead author of the report. "Understanding the behavior at a small size is crucial because these materials need to be small to ever be successfully integrated into low-power devices."

 

Researchers attached the YIG sample to platinum nanowires using electron-beam lithography, creating a micrometer-size YIG/platinum structure. They then sent an electrical current through the platinum to excite the YIG and drive its spin dynamics, and took electrical measurements to characterize the magnetization dynamics and how they changed as the YIG was shrunk.
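Electrically driven and detected spin dynamics of this kind are commonly analyzed by fitting the resonance linewidth as a function of drive frequency to extract the Gilbert damping constant. The sketch below shows that generic analysis on synthetic data; the material parameters, and the assumption that this is how the Argonne team processed their measurements, are illustrative rather than taken from the Nano Letters paper.

```python
import numpy as np

# Generic Gilbert-damping extraction from ferromagnetic-resonance-style data:
# the linewidth grows linearly with frequency, mu0*dH = mu0*dH0 + 4*pi*alpha*f/gamma.
# All numbers below are illustrative placeholders, not values from the Argonne study.
gamma = 2 * np.pi * 28e9          # gyromagnetic ratio, rad s^-1 T^-1 (28 GHz/T)
alpha_true = 5e-4                 # assumed Gilbert damping used to build synthetic data
dH0 = 0.2e-3                      # assumed inhomogeneous broadening, tesla

freqs = np.linspace(2e9, 12e9, 11)                                  # drive frequencies, Hz
linewidths = dH0 + 4 * np.pi * alpha_true * freqs / gamma           # synthetic mu0*dH, tesla
linewidths += np.random.default_rng(0).normal(0, 2e-6, freqs.size)  # measurement noise

# Linear fit: the slope gives the damping, the intercept the inhomogeneous broadening.
slope, intercept = np.polyfit(freqs, linewidths, 1)
alpha_fit = slope * gamma / (4 * np.pi)

print(f"fitted alpha ~ {alpha_fit:.1e}, fitted dH0 ~ {intercept * 1e3:.2f} mT")
```

A low fitted damping constant is what makes YIG attractive for spintronics: it means spin excitations persist and propagate with very little energy loss.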

 

"When shrinking materials, they can behave in different ways, ways that could present a roadblock to identifying and actualizing potential new applications," Hoffman said. "What we've observed is that, although there are small details that change when YIG is made smaller, there doesn't appear to be a fundamental roadblock that prevents us from using the physical approaches we use for small electrical devices."


Via THE *OFFICIAL ANDREASCY*
more...
No comment yet.