Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video

Making CRISPR More Accessible


Researchers have developed a novel computational tool to help design single guide RNAs (sgRNAs) for DNA deletion using the CRISPR-Cas system. The new tool, CRISPETa, was reported in PLOS Computational Biology recently.

 

Since its initial discovery and subsequent development throughout the last decade, CRISPR has become known as a powerful tool in genomic experiments, both for trying to understand the genome and attempting to treat genetic disorders. Several variants of the system have been developed, such as CRISPRi to influence gene expression at a transcription level and dCas9 to bind to the DNA without cleaving the strand. In 2015, Professor Rory Johnson and his team developed another CRISPR variant, known as DECKO, which was designed specifically to facilitate the removal of selected DNA sequences from the genome.

 

DECKO uses two distinct sgRNAs to guide the cleavage protein Cas9 to the correct sites in the genome on either side of the material being deleted. When the nuclease cleaves the DNA at both sites, the sequence between the two loci is removed completely from the genome with high accuracy. The nature of CRISPR means that DECKO can be used to remove both coding and non-coding material and as a result has become a popular tool among researchers.

 

During the initial development of DECKO, the team noticed that one of the most time-consuming parts of their experiments was the sgRNA design process because there was no pre-made design software available. Now, Master’s student Carlos Pulido may have created the solution to this problem with a novel software pipeline called CRISPETa, which can suggest sgRNAs based on the intended target region.
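To make the design task concrete, here is a minimal, hypothetical sketch of paired-guide selection for a deletion: scan the sequence flanking the target region for 20-nucleotide protospacers followed by an NGG PAM, then pair upstream and downstream hits. This is only a toy illustration of the general idea; it is not CRISPETa's actual pipeline, which also scores guides for predicted efficiency and off-target risk and handles both strands.

```python
import re

def find_sgrnas(seq):
    """Return (position, protospacer) pairs: 20-nt protospacers followed by an NGG PAM.

    Toy version: forward strand only; a real design tool scans both strands and scores hits.
    """
    hits = []
    for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", seq.upper()):
        hits.append((m.start(), m.group(1)))
    return hits

def pair_deletion_guides(upstream_flank, downstream_flank, max_pairs=5):
    """Pair one upstream and one downstream guide so the two cuts bracket the deletion."""
    up = find_sgrnas(upstream_flank)
    down = find_sgrnas(downstream_flank)
    return [(u, d) for u in up for d in down][:max_pairs]

# Hypothetical flanking sequences around a region targeted for deletion
upstream = "ATGCTGACCTGAAGCTTGGCAGGATCCGTACCTGAACGGTTACGGATCCA"
downstream = "TTGACCGGTACGTTCAGGTACGGATCCTGCCAAGCTTCAGGTCAGCATGG"

for (u_pos, u_guide), (d_pos, d_guide) in pair_deletion_guides(upstream, downstream):
    print(f"upstream guide @{u_pos}: {u_guide} | downstream guide @{d_pos}: {d_guide}")
```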


Star discovered whipping around a black hole twice an hour


Astronomers have found evidence for a star that whips around a black hole about twice an hour. This may be the tightest orbital dance ever witnessed for a black hole and a companion star.

Michigan State University scientists were part of the team that made this discovery, which used NASA's Chandra X-ray Observatory as well as NASA's NuSTAR and the Australia Telescope Compact Array.

 

The close-in stellar couple -- known as a binary -- is located in the globular cluster 47 Tucanae, a dense cluster of stars in our galaxy about 14,800 light years away from Earth. While astronomers have observed this binary for many years, it wasn't until 2015 that radio observations revealed the pair likely contains a black hole pulling material from a companion star called a white dwarf, a low-mass star that has exhausted most or all of its nuclear fuel.

 

New Chandra data of this system, known as X9, show that it changes in X-ray brightness in the same manner every 28 minutes, which is likely the length of time it takes the companion star to make one complete orbit around the black hole. Chandra data also shows evidence for large amounts of oxygen in the system, a characteristic of white dwarfs. A strong case can therefore be made that the companion star is a white dwarf, which would then be orbiting the black hole at only about 2.5 times the separation between Earth and the Moon.
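As a rough cross-check on those numbers, Kepler's third law turns the 28-minute period into an orbital separation once the masses are assumed. The sketch below adopts an illustrative black-hole mass of 10 solar masses and a 0.1-solar-mass white dwarf (figures not given in this article); the separation comes out at a few hundred thousand kilometres, the same order as the Earth-Moon distance, with the exact multiple depending on the masses used.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
EARTH_MOON = 3.84e8    # mean Earth-Moon distance, m

P = 28 * 60            # orbital period inferred from the X-ray variability, s

# Assumed masses, for illustration only (not values quoted in the article)
m_bh = 10 * M_SUN
m_wd = 0.1 * M_SUN

# Kepler's third law: a^3 = G (m1 + m2) P^2 / (4 pi^2)
a = (G * (m_bh + m_wd) * P**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"separation ~ {a/1e3:,.0f} km ~ {a/EARTH_MOON:.1f} Earth-Moon distances")
```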

 

"This white dwarf is so close to the black hole that material is being pulled away from the star and dumped onto a disk of matter around the black hole before falling in," said Arash Bahramian, lead author with the University of Alberta (Canada) and MSU. "Luckily for this star, we don't think it will follow this path into oblivion, but instead will stay in orbit." Although the white dwarf does not appear to be in danger of falling in or being torn apart by the black hole, its fate is uncertain.

 

"For a long time astronomers thought that black holes were rare or totally absent in globular star clusters," said Jay Strader, MSU astronomer and co-author of the paper. "This discovery is additional evidence that, rather than being one of the worst places to look for black holes, globular clusters might be one of the best."

 

How did the black hole get such a close companion? One possibility is that the black hole smashed into a red giant star, and then gas from the outer regions of the star was ejected from the binary. The remaining core of the red giant would form into a white dwarf, which becomes a binary companion to the black hole. The orbit of the binary would then have shrunk as gravitational waves were emitted, until the black hole started pulling material from the white dwarf.

 

The gravitational waves currently being produced by the binary have a frequency that is too low to be detected with the Laser Interferometer Gravitational-Wave Observatory (LIGO), which recently detected gravitational waves from merging black holes. Sources like X9 could potentially be detected with future gravitational wave observatories in space.
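The mismatch with LIGO is easy to quantify: for a circular binary the dominant gravitational-wave frequency is twice the orbital frequency, which for a 28-minute orbit falls in the millihertz band targeted by proposed space-based detectors, far below the roughly 10 Hz lower edge of LIGO's sensitivity.

```python
P = 28 * 60           # orbital period, s
f_gw = 2 / P          # dominant gravitational-wave frequency for a circular orbit, Hz
print(f"f_gw ~ {f_gw*1e3:.1f} mHz")   # ~1.2 mHz, versus LIGO's band above ~10 Hz
```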


Detect. Lock on. Intercept. The remarkable hunting ability of the robber fly


Drones could use it — the remarkable hunting ability of the robber fly. 

 

A small fly the size of a grain of rice could be the Top Gun of the fly world, with a remarkable ability to detect and intercept its prey mid-air, changing direction mid-flight if necessary before sweeping round for the kill.

 

The robber fly Holcocephala is a relatively small fly -- at 6 mm in length, it is similar in size to the average mosquito. Yet it has the ability to spot and catch prey more than half a meter away in less than half a second -- relative to its size, this would be the equivalent of a human spotting prey at the other end of a football pitch. Even if the prey changes direction, the predator is able to adapt mid-air and still catch its prey.

 

An international team led by researchers from the University of Cambridge was able to capture this activity by tricking the fly into launching itself at a fake prey -- in fact, just a small bead on a fishing line. This enabled the team to witness the fly's remarkable aerial attack strategy. Their findings are published today in the journal Current Biology.

 

The robber fly has incredibly sophisticated eyes: like all flies, it has compound eyes made up of many lenses -- in the case of the robber fly, it is thought to have several thousand lenses per eye. However, unlike many species of fly, it has a range of lens sizes, from just over 20 microns to around 78 microns -- the width of a human hair. The larger lenses are the same size as those of a dragonfly, which is believed to have the best vision of all insects but is 10 times larger, and help reduce diffraction, which would otherwise distort the image.
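The payoff from bigger facets follows from the diffraction limit: the smallest resolvable angle scales as wavelength divided by lens diameter. A quick calculation, assuming green light at 500 nm (a wavelength not specified in the article), shows how much sharper the 78-micron lenses are than the 20-micron ones.

```python
import math

wavelength = 500e-9                    # assumed green light, m
for d_um in (20, 78):                  # small vs. large facet diameters from the article
    d = d_um * 1e-6
    theta = 1.22 * wavelength / d      # Rayleigh criterion, radians
    print(f"{d_um} micron lens: ~{math.degrees(theta):.2f} degrees smallest resolvable angle")
```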

 

"There's a trade-off going on between having excellent vision -- which requires bigger lenses -- and the size of the insect," explains Dr Paloma Gonzalez-Bellido from Cambridge's Department of Physiology, Development and Neuroscience. "The only way a robber fly could have vision as excellent as the 'poster child' of predatory insects, the dragonfly, across its entire visual field would be to have an eye with many more and larger lenses -- but then the fly itself would need to be much larger to be able to carry it."

 

To get around this problem, the robber fly has a concentration of larger lenses in the centre of its vision, accounting for only around one thousandth of its visual space. The lenses get smaller in size around the outside of the eye. Importantly, the team of researchers also showed that below the very large central lenses, this robber fly has evolved extremely small light detectors, which are placed almost parallel to each other and much further away from the lens than normal. This arrangement preserves the high local image resolution, which is very close to that of much larger dragonflies.

 

When it sees a potential prey, the fly launches itself upwards while maintaining a 'constant bearing angle' -- in other words, it moves in a direction such that while moving closer and closer to its prey, it still maintains the same relative bearing. This ensures that it will intercept its prey.

 

"If you think of this as though you're driving along the motorway and a car is coming down the slip road, then if the relative angle between you and this car remains constant, you will collide," explains PhD student Sam Fabian. "Of course, you'd take evasive action, but in the case of the robber fly, this is what it wants."

 

This strategy of maintaining the constant relative bearing also allows the robber fly to maneuver itself mid-air in the event that its prey changes direction. The researchers demonstrated this by switching the direction of their fake prey while the robber fly was mid-flight and observing how the fly responded. Once the fly is around 29 cm away from its prey -- though exactly how it judges this distance is still unclear -- the fly displays a remarkable strategy never before observed in a flying animal. It 'locks-on' to its prey while changing its own trajectory, enabling it to sweep round, slow down and come alongside the prey to make its final attack.

 

"What you see is similar to a baton pass in a relay race: when the two runners are heading in a similar direction and speed, they are more likely to be successful than if they are passing each other at ninety degrees," says Dr Trevor Wardill.


Visualising the genome: Researchers create first 3D structures of active DNA


Scientists have determined the first 3D structures of intact mammalian genomes from individual cells, showing how the DNA from all the chromosomes intricately folds to fit together inside the cell nuclei.

 

Researchers from the University of Cambridge and the MRC Laboratory of Molecular Biology used a combination of imaging and up to 100,000 measurements of where different parts of the DNA are close to each other to examine the genome in a mouse embryonic stem cell. Stem cells are 'master cells', which can develop -- or 'differentiate' -- into almost any type of cell within the body.
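The computational core of such work, turning pairwise "these two loci were close" measurements into a 3D conformation, can be illustrated with a toy restraint-based optimisation. The sketch below is a drastically simplified stand-in for the actual single-cell structure calculation: a bead-on-a-string chromatin model whose beads are nudged toward the distances implied by chain connectivity and by a handful of hypothetical contacts.

```python
import numpy as np

rng = np.random.default_rng(0)

n_beads = 60                                 # toy chromatin fibre
contacts = [(5, 40), (10, 55), (20, 50)]     # hypothetical single-cell contact pairs

pos = rng.normal(size=(n_beads, 3))          # random starting conformation

def step(pos, lr=0.05, chain_d=1.0, contact_d=1.0):
    """One gradient-descent step on the sum of squared distance-restraint violations."""
    grad = np.zeros_like(pos)
    restraints = [(i, i + 1, chain_d) for i in range(n_beads - 1)]   # chain connectivity
    restraints += [(i, j, contact_d) for i, j in contacts]           # measured contacts
    for i, j, d0 in restraints:
        v = pos[i] - pos[j]
        d = np.linalg.norm(v) + 1e-9
        g = 2 * (d - d0) * v / d
        grad[i] += g
        grad[j] -= g
    return pos - lr * grad

for _ in range(2000):
    pos = step(pos)

for i, j in contacts:
    print(f"beads {i}-{j}: distance {np.linalg.norm(pos[i] - pos[j]):.2f} (restraint 1.0)")
```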

 

Most people are familiar with the well-known 'X' shape of chromosomes, but in fact chromosomes only take on this shape when the cell divides. Using their new approach, the researchers have now been able to determine the structures of active chromosomes inside the cell, and how they interact with each other to form an intact genome. This is important because knowledge of the way DNA folds inside the cell allows scientists to study how specific genes, and the DNA regions that control them, interact with each other. The genome's structure controls when and how strongly genes -- particular regions of the DNA -- are switched 'on' or 'off'. This plays a critical role in the development of organisms and also, when it goes awry, in disease.

 

The researchers have illustrated the structure in accompanying videos, which show the intact genome from one particular mouse embryonic stem cell. In the first video, each of the cell's 20 chromosomes is colored differently.

 

In a second video regions of the chromosomes where genes are active are colored blue, and the regions that interact with the nuclear lamina (a dense fibrillar network inside the nucleus) are colored yellow. The structure shows that the genome is arranged such that the most active genetic regions are on the interior and separated in space from the less active regions that associate with the nuclear lamina. The consistent segregation of these regions, in the same way in every cell, suggests that these processes could drive chromosome and genome folding and thus regulate important cellular events such as DNA replication and cell division.

 

Professor Ernest Laue, whose group at Cambridge's Department of Biochemistry developed the approach, commented: "Knowing where all the genes and control elements are at a given moment will help us understand the molecular mechanisms that control and maintain their expression.

 

"In the future, we'll be able to study how this changes as stem cells differentiate and how decisions are made in individual developing stem cells. Until now, we've only been able to look at groups, or 'populations', of these cells and so have been unable to see individual differences, at least from the outside. Currently, these mechanisms are poorly understood and understanding them may be key to realizing the potential of stem cells in medicine."


The strangeness of the quantum realm opens up exciting new technological possibilities


A bathing cap that can watch individual neurons, allowing others to monitor the wearer’s mind. A sensor that can spot hidden nuclear submarines. A computer that can discover new drugs, revolutionize securities trading and design new materials.

 

Quantum mechanics—a theory of the behaviour at the atomic level put together in the early 20th century—has a well-earned reputation for weirdness. That is because the world as humanity sees it is not, in fact, how the world works. Quantum mechanics replaced wholesale the centuries-old notion of a clockwork, deterministic universe with a reality that deals in probabilities rather than certainties—one where the very act of measurement affects what is measured.

 

Along with that upheaval came a few truly mind-bending implications, such as the fact that particles are fundamentally neither here nor there but, until pinned down, both here and there at the same time: they are in a “superposition” of here-there-ness. The theory also suggested that particles can be spookily linked: do something to one and the change is felt instantaneously by the other, even across vast reaches of space.

 

This “entanglement” confounded even the theory’s originators.

It is exactly these effects that show such promise now: the techniques that were refined in a bid to learn more about the quantum world are now being harnessed to put it to good use.

 

Gizmos that exploit superposition and entanglement can vastly outperform existing ones—and accomplish things once thought to be impossible. Improving atomic clocks by incorporating entanglement, for example, makes them more accurate than those used today in satellite positioning. That could improve navigational precision by orders of magnitude, which would make self-driving cars safer and more reliable. And because the strength of the local gravitational field affects the flow of time (according to general relativity, another immensely successful but counter-intuitive theory), such clocks would also be able to measure tiny variations in gravity. That could be used to spot underground pipes without having to dig up the road, or track submarines far below the waves.
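The gravity-sensing claim is easy to put numbers on: near Earth's surface a clock raised by a height h runs fast by a fraction of roughly gh/c^2, about one part in 10^16 per metre. The quick calculation below assumes a clock stable to one part in 10^18, around the level the best laboratory optical clocks are approaching, and shows that such a clock resolves centimetre-scale height (and hence gravity) differences.

```python
g = 9.81                       # m/s^2
c = 2.998e8                    # m/s

shift_per_metre = g / c**2     # fractional frequency shift per metre of height
print(f"fractional rate change per metre: {shift_per_metre:.1e}")      # ~1.1e-16

clock_precision = 1e-18        # assumed fractional precision of a state-of-the-art clock
print(f"resolvable height change: {clock_precision / shift_per_metre * 100:.1f} cm")
```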

 

Other aspects of quantum theory permit messaging without worries about eavesdroppers. Signals encoded using either superposed or entangled particles cannot be intercepted, duplicated and passed on. That has obvious appeal to companies and governments the world over. China has already launched a satellite that can receive and reroute such signals; a global, unhackable network could eventually follow.

 

The advantageous interplay between odd quantum effects reaches its zenith in quantum computers. Rather than the 0s and 1s of standard computing, a quantum computer’s bits are in superpositions of both, and each “qubit” is entangled with every other. Using algorithms that recast problems in quantum-amenable forms, such computers will be able to chomp their way through calculations that would take today’s best supercomputers millennia. Even as high-security quantum networks are being developed, a countervailing worry is that quantum computers will eventually render obsolete today’s cryptographic techniques, which are based on hard mathematical problems.
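Superposition and entanglement can be made concrete with a tiny state-vector calculation. The snippet below, plain NumPy rather than any particular quantum-computing framework, builds the two-qubit Bell state: each qubit on its own is a 50/50 superposition, yet the two measurement outcomes are perfectly correlated.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                        # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard: creates superposition
I = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)                # entangles the two qubits

state = np.kron(ket0, ket0)        # start in |00>
state = np.kron(H, I) @ state      # put the first qubit into superposition
state = CNOT @ state               # entangle it with the second qubit

probs = np.abs(state) ** 2
for basis, p in zip(["00", "01", "10", "11"], probs):
    print(f"P(|{basis}>) = {p:.2f}")
# Only |00> and |11> appear, each with probability 0.50: measuring one qubit fixes the other.
```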

 

Long before that happens, however, smaller quantum computers will make other contributions in industries from energy and logistics to drug design and finance. Even simple quantum computers should be able to tackle classes of problems that choke conventional machines, such as optimizing trading strategies or plucking promising drug candidates from scientific literature. Google said last week that such machines are only five years from commercial exploitability. Recently, IBM, which already runs a publicly accessible, rudimentary quantum computer, announced expansion plans. As our Technology Quarterly in this issue explains, big tech firms and startups alike are developing software to exploit these devices’ curious abilities. A new ecosystem of middlemen is emerging to match new hardware to industries that might benefit.


Synthetic Spider Silk for Sale in Expensive Necktie


We were promised carbon-nanotube space elevators, nanobots that would mend us from the inside out, bulletproof vests made from spider silk, and so much more. What we will get, at least when it comes to arachnid materials, is a $314.15, limited edition spider-silk necktie.

 

Bolt Threads will unveil the tie, which the company calls the first commercially available spider-silk product, Friday at the South by Southwest conference in Austin, Texas. David Breslauer, chief scientific officer at Bolt, says the production of the ties shows that spider silk fibers and textiles can be produced at large scale.

 

But don’t expect to see spider-silk ties in Bloomingdale’s anytime soon. Starting on Saturday, just 50 ties will be available on the company’s website. Breslauer says the tie is a showpiece and that Bolt will release a more widely available product soon, though he declined to provide any details.

 

Spider silk’s properties have excited biomaterials researchers and the public for some time. The structure of spider-silk proteins, which mixes hard, crystalline regions with more elastic ones, gives the material some superlative properties. When single strands of spider silk are tested in the lab, the best can hold their own against steel and Kevlar, the material used to make bulletproof vests. Silks are versatile, and different spider species have their own distinct variations on the silk formula.


How AI researchers built a neural network that learns to speak in just a few hours

The Chinese search giant’s Deep Voice system learns to talk in just a few hours with little or no human interference.

 

In the battle to apply deep-learning techniques to the real world, one company stands head and shoulders above the competition. Google’s DeepMind subsidiary has used the technique to create machines that can beat humans at video games and the ancient game of Go. And last year, Google Translate services significantly improved thanks to the behind-the-scenes introduction of deep-learning techniques.

 

So it’s interesting to see how other companies are racing to catch up. Today, it is the turn of Baidu, an Internet search company that is sometimes described as the Chinese equivalent of Google. In 2013, Baidu opened an artificial intelligence research lab in Silicon Valley, raising an interesting question: what has it been up to?

 

Now Baidu’s artificial intelligence lab has revealed its work on speech synthesis. One of the challenges in speech synthesis is to reduce the amount of fine-tuning that goes on behind the scenes. Baidu’s big breakthrough is to create a deep-learning machine that largely does away with this kind of meddling. The result is a text-to-speech system called Deep Voice that can learn to talk in just a few hours with little or no human interference.

 

First some background. Text-to-speech systems are familiar in the modern world in navigation apps, talking clocks, telephone answering systems, and so on. Traditionally these have been created by recording a large database of speech from a single individual and then recombining the utterances to make new phrases.

 

The problem with these systems is that it is difficult to switch to a new speaker or change the emphasis in their words without recording an entirely new database. So computer scientists have been working on another approach. Their goal is to synthesize speech in real time from scratch as it is required.

 

Last year, Google’s DeepMind made a significant breakthrough in this area. It unveiled a neural network that learns how to speak by listening to the sound waves from real speech while comparing this to a transcript of the text. After training, it was able to produce synthetic speech based on text it was given. Google DeepMind called its system WaveNet.
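The architectural trick behind WaveNet-style synthesis is a stack of dilated causal convolutions that predicts each audio sample from the samples before it. The PyTorch sketch below is a deliberately miniature illustration of that idea only; it leaves out the gated activations, text conditioning and mu-law output handling used in the real WaveNet and Deep Voice systems.

```python
import torch
import torch.nn as nn

class TinyWaveNet(nn.Module):
    """Minimal stack of dilated causal 1-D convolutions predicting the next sample's class."""

    def __init__(self, n_classes=256, channels=32, kernel_size=2, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.embed = nn.Conv1d(1, channels, kernel_size=1)
        self.layers = nn.ModuleList()
        self.pads = []
        for d in dilations:
            self.pads.append((kernel_size - 1) * d)     # left-pad so no layer sees the future
            self.layers.append(nn.Conv1d(channels, channels, kernel_size, dilation=d))
        self.out = nn.Conv1d(channels, n_classes, kernel_size=1)

    def forward(self, x):                               # x: (batch, 1, time), values in [-1, 1]
        h = self.embed(x)
        for pad, conv in zip(self.pads, self.layers):
            h = torch.relu(conv(nn.functional.pad(h, (pad, 0))))
        return self.out(h)                              # (batch, n_classes, time) logits

# One training step on random "audio", just to show the causal next-sample objective
model = TinyWaveNet()
audio = torch.rand(1, 1, 1000) * 2 - 1
targets = ((audio[:, 0, 1:] + 1) / 2 * 255).long()      # crude quantisation of the next sample
logits = model(audio)[:, :, :-1]                        # prediction aligned with each next sample
loss = nn.functional.cross_entropy(logits, targets)
loss.backward()
print(f"training loss: {loss.item():.3f}")
```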

 

Baidu’s work is an improvement on WaveNet, which still requires some fine-tuning during the training process. WaveNet is also computationally demanding, so much so that it is unclear whether it could ever be used to synthesize speech in real time in the real world.



One of greatest mass extinctions was due to an ice age and not to Earth's warming


Earth has known several mass extinctions over the course of its history. One of the most important happened at the Permian-Triassic boundary 250 million years ago. Over 95% of marine species disappeared and, up until now, scientists have linked this extinction to a significant rise in Earth temperatures.

 

But researchers from the University of Geneva (UNIGE), Switzerland, working alongside the University of Zurich, discovered that this extinction took place during a short ice age which preceded the global climate warming. It's the first time that the various stages of a mass extinction have been accurately understood and that scientists have been able to assess the major role played by volcanic explosions in these climate processes.

 

This research, which can be read in Scientific Reports, completely calls into question the scientific theories regarding these phenomena, founded on the increase of CO2 in the atmosphere, and paves the way for a new vision of Earth's climate history.

 

Teams of researchers led by Professor Urs Schaltegger from the Department of Earth and Environmental Sciences at the Faculty of Science of the UNIGE and by Hugo Bucher, from the University of Zürich, have been working on absolute dating for many years.

 

They work on determining the age of minerals in volcanic ash, which establishes a precise and detailed chronology of Earth's climate evolution. They became interested in the Permian-Triassic boundary, 250 million years ago, during which one of the greatest mass extinctions ever took place, responsible for the loss of 95% of marine species. How did this happen? And for how long did marine biodiversity stay at very low levels?

 

The technique is founded on the radioactive decay of uranium. The researchers worked on sediment layers in the Nanpanjiang basin in southern China, which have the particularity of being extremely well preserved; this allowed for an accurate study of the biodiversity and the climate history of the Permian and the Triassic.

 

"We made several cross-sections of hundreds of metres of basin sediments and we determined the exact positions of ash beds contained in these marine sediments," explained Björn Baresel, first author of the study. They then applied a precise dating technique based on natural radioactive decay of uranium, as Urs Schaltegger added: "In the sedimentary cross-sections, we found layers of volcanic ash containing the mineral zircon which incorporates uranium. It has the specificity of decaying into lead over time at a well-known speed. This is why, by measuring the concentrations of uranium and lead, it was possible for us to date a sediment layer to an accuracy of 35,000 years, which is already fairly precise for periods over 250 million years." Ice is responsible for mass extinction

By dating the various sediment layers, researchers realised that the mass extinction of the Permian-Triassic boundary is represented by a gap in sedimentation, which corresponds to a period when the sea-water level decreased. The only explanation to this phenomenon is that there was ice, which stored water, and that this ice age which lasted 80,000 years was sufficient to eliminate much of marine life. Scientists from the UNIGE explain the global temperature drop by a stratospheric injection of large amounts of sulphur dioxide reducing the intensity of solar radiation reaching the surface of Earth. "We therefore have proof that the species disappeared during an ice age caused by the activity of the first volcanism in the Siberian Traps," added Urs Schaltegger. This ice age was followed by the formation of limestone deposits through bacteria, marking the return of life on Earth at more moderate temperatures. The period of intense climate warming, related to the emplacement of large amounts of basalt of the Siberian Traps and which we previously thought was responsible for the extinction of marine species, in fact happened 500,000 years after the Permian-Triassic boundary.


Amazon Echo and the Internet of Things (IoT) that spy on you


Amazon’s Echo is a robot that sits in your house and listens. The virtual personal assistant can be summoned into action by saying its name, Alexa, and will then act on commands, like ordering a dollhouse and cookies when asked to do so by a too-clever kindergartener. And because it works by listening, Alexa is an always-on surveillance device, quietly storing snippets of information. Which has placed a particular Echo unit in an uncomfortable role: possible witness to a murder.

 

On Nov. 22, 2015, Victor Collins was found dead at the home of James Bates in Bentonville, Arkansas. The night before, Bates had invited friends, including Collins, over to watch the football game, and after Bates reported Collins’ death, police collected some evidence of a struggle from the scene. Still, there is more potential evidence police would like to use in the case: audio recorded by the Echo, which could illuminate more about what transpired that night.

 

That evidence is held by Amazon, as data on Amazon servers, and to gain access to it, police filed a search warrant in December 2015. For over a year, Amazon responded only in part to the requests, providing police with the subscriber information for the account. Police also tried to access the suspect’s cellphone as a way into his Echo account, but were unable to do so.

 

On Feb. 17, 2017, Amazon filed a motion to quash the warrant for the recordings from the Echo, arguing that such a search violates First Amendment and privacy rights. So does Alexa, the program that speaks through Echo on behalf of Amazon, actually have privacy rights?

 

“What Amazon’s doing is drawing on a line of cases that say there is a connection between freedom of expression, which is protected by the First Amendment, and privacy. That connection is that when you have government surveillance—especially of intellectual activity, let’s say listening to music or reading books or buying books or even using the search engine,” says Margot Kaminski, a professor at Ohio State University’s Moritz College of law, who specializes in law and technology, “that surveillance implicates intellectual freedom in a way that’s important for free expression.”


Synthetic yeast chromosomes help probe mysteries of evolution


Evolutionary biologist Stephen Jay Gould once pondered what would happen if the cassette “tape of life” were rewound and played again. Synthetic biologists have tested one aspect of this notion by engineering chromosomes from scratch, sticking them into yeast and seeing whether the modified organisms can still function normally.

 

They do, according to seven papers published today in Science that describe the creation, testing and refining of five redesigned yeast chromosomes. Together with a sixth previously synthesized chromosome, they represent more than one-third of the genome of the baker’s yeast Saccharomyces cerevisiae. An international consortium of more than 200 researchers that created the chromosomes expects to complete a fully synthetic yeast genome by the end of the year.

 

The work the team has already done could help to optimize the creation of microbes to pump out alcohol, drugs, fragrances and fuel. And it serves as a guide for future research on how genomes evolve and function.


Earth's oceans are warming 13% faster than thought, and accelerating


New research has convincingly quantified how much the Earth has warmed over the past 56 years. Human activities utilize fossil fuels for many beneficial purposes but have an undesirable side effect of adding carbon dioxide to the atmosphere at ever-increasing rates. That increase - of over 40%, with most since 1980 - traps heat in the Earth’s system, warming the entire planet.

 

But how fast is the Earth warming and how much will it warm in the future? Those are the critical questions we need to answer if we are going to make smart decisions on how to handle this issue.

 

At any time the direct effect of this blanket is small, but the accumulated effects are huge and have consequences for our weather and climate. Over 90% of the extra heat ends up in the ocean and hence perhaps the most important measurements of global warming are made in the oceans.

 

But measuring the ocean temperature is not straightforward. Since about 2005 a new type of sensing device has been deployed (the Argo float system). These floats (approximately 3500 in total at any time) are spread out across oceans where they autonomously rise and fall in the ocean waters, collecting temperature data to depths of 2000 meters.

 

When they rise to the ocean surface, they send their data wirelessly to satellites for later analysis. Hence we can now map the ocean heat content quite well. But what about the past, when measurements came mainly from expendable bathythermographs deployed along major shipping routes and largely confined to the northern hemisphere? Putting data from these various sensors together has been a struggle and has been a major impediment to an accurate quantification of the ocean’s temperature history.
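Turning temperature profiles into heat content is conceptually simple: integrate density times specific heat times the temperature anomaly over depth. The sketch below does this for one made-up Argo-style profile; the published analyses do the same for thousands of profiles and then map and infill the results globally.

```python
import numpy as np

RHO = 1025.0      # seawater density, kg/m^3
CP = 3990.0       # specific heat of seawater, J/(kg K)

# Hypothetical temperature anomaly profile (K) on Argo-like depth levels (m)
depth = np.array([0.0, 100.0, 300.0, 700.0, 1200.0, 2000.0])
anomaly = np.array([0.30, 0.22, 0.12, 0.06, 0.03, 0.01])

# Ocean heat content anomaly per unit area, 0-2000 m (trapezoidal integration over depth)
vals = RHO * CP * anomaly
ohc = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(depth))
print(f"heat content anomaly ~ {ohc / 1e9:.2f} GJ per square metre of ocean")
```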

 

Fortunately, a paper just published in Science Advances uses a new strategy to improve upon our understanding of ocean heating to estimate the total global warming from 1960 to 2015. I was fortunate to co-author the study, which uses several innovative steps to make improvements.


Potential life could have spread with relative ease amongst newly-discovered group of seven exoplanets


The odds of life spreading between the worlds of the newly-discovered seven-planet TRAPPIST-1 system are up to 1,000 times greater than in our own solar system. That’s the conclusion of a new analysis posted March 2 to the arXiv, an online repository of scientific papers.

 

The idea that simple organisms could accidentally travel between planets is known as panspermia. “Imagine one planet has life, and then you have a meteorite or asteroid impact that ejects some rock into space,” says mathematical physicist and lead author Manasvi Lingam of Harvard University in Cambridge, Massachusetts. “If these rocks are captured by a different planet, they could spawn life there.”

 

Panspermia has ancient origins, dating back to the Greek philosopher Anaxagoras in 500 B.C., who believed that cosmic “life seeds” brought organisms to Earth. The first detailed scientific treatment came in 1908, when Swedish chemist Svante Arrhenius wrote a book arguing that bacterial spores could have bounded from one planet to another. A modern-day version of the hypothesis briefly gained popularity in 1996 when a Martian meteorite found in Antarctica seemed to bear signs of microbial fossils—a claim ultimately rejected by most researchers.

 

Announced last month, TRAPPIST-1 is the newest exoplanetary system to spur excitement among astronomers. Orbiting the cool red dwarf star are seven rocky worlds ranging in size from slightly larger than Mars to a bit bigger than Earth. Three of them are in the star’s habitable zone, meaning liquid water could theoretically exist on their surface—though because red dwarf stars produce more flares and harsh radiation than sun-like stars, some scientists think life is unlikely on the surrounding planets. Others disagree.

 

Because the star TRAPPIST-1 is smaller and dimmer than our sun, its habitable zone encompasses a closer-in region—the three potentially habitable planets take only six, nine, and 12 days, respectively, to go around their parent star. The planets are much closer to each other as well. The two inner potentially habitable planets, for instance, are 30 times closer than Venus is to Earth, while the two farther ones are 65 times closer than Earth and Mars.
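Those tight spacings follow directly from Kepler's third law once the star's small mass is included. The quick calculation below assumes a stellar mass of about 0.08 solar masses for TRAPPIST-1 (a commonly quoted figure, not stated in this article) and recovers orbital radii of only a few hundredths of an astronomical unit for the 6-, 9- and 12-day planets.

```python
import math

G = 6.674e-11
M_SUN = 1.989e30
AU = 1.496e11
M_STAR = 0.08 * M_SUN            # assumed mass of TRAPPIST-1

for days in (6, 9, 12):          # periods of the three potentially habitable planets
    P = days * 86400
    a = (G * M_STAR * P**2 / (4 * math.pi**2)) ** (1 / 3)
    print(f"P = {days:2d} d  ->  a ~ {a/AU:.3f} AU ({a/1e9:.1f} million km)")
```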

 

Applying models from the field of ecology, the researchers presuppose that the microbial transfer is similar to species jumping between islands. If the islands are closer together, a larger number of species could potentially travel between them. Hence, by analogy, it’s possible that many species might have made it from one planet in the TRAPPIST-1 system to another.

 

“Of course, this is only an analogy and not an exact match,” says Lingam. “But we know that greater biodiversity in any ecosystem also implies greater stability. If you have more species being transferred to the new planet, there’s less of a chance that they’ll die.”

 

Many unknowns remain, including just how long microbes can survive in space, whether or not they would be viable after traversing an atmosphere and crash-landing on a surface, and if the habitat of one world is similar enough to its sibling planets to allow such organisms to flourish.

 

The TRAPPIST-1 system is well-positioned to answer at least some of these; its exoplanets all eclipse in front of their parent star, meaning that telescopes here on Earth can capture starlight filtering through any potential planetary atmospheres. Such observations could reveal signs of vegetation—plants on Earth preferentially absorb red light, for example, giving our planet a “red edge” as seen from far away—or other biosignature molecules. If the TRAPPIST-1 worlds display similar signals, it could imply that panspermia occurred.

 

“What’s exciting to me more than anything is that this hypothesis is testable. This is no longer in the realm of theory,” says astronomer Rory Barnes of the University of Washington in Seattle, who was not involved in the work. Current and upcoming telescopes might be up to the challenge, though it will likely take a large new space-based observatory such as the proposed Large UV/Optical/Infrared Surveyor (LUVOIR), which wouldn’t be launched until the 2030s, to answer such questions in detail.


Physicists Find That as Clocks Get More Precise, Time Gets More Fuzzy

Time is weird – in spite of what we think, the Universe doesn't have a master clock to run by, making it possible for us to experience time differently depending on how we're moving or how much gravity is pulling on us. Now physicists have combined two grand theories of physics to conclude not only is time not universally consistent, any clock we use to measure it will blur the flow of time in its surrounding space.
 
Don't worry, that doesn't mean your wall clock is going to make you age quicker. We're talking about time keepers in highly precise experiments here, such as atomic clocks. A team of physicists from the University of Vienna and the Austrian Academy of Sciences have applied quantum mechanics and general relativity to argue that increasing the precision of measurements on clocks in the same space also increases their warping of time.
 
Let's take a step back for a moment and consider in simple terms what physicists already know. Quantum mechanics is incredibly useful in describing the Universe on a very tiny scale, such as sub-atomic particles and forces over short distances. As accurate and incredibly useful as the mathematics supporting quantum mechanics might be, it makes predictions which seem counter-intuitive to our everyday experiences.
 
One such prediction is called Heisenberg's uncertainty principle, which says as you know one thing with increasing precision, measurement of a complementary variable becomes less precise. For example, the more you pinpoint the position of an object in time and space, the less certain you can be about its momentum. This isn't a question of being clever enough or having better equipment – the Universe fundamentally works this way; electrons keep from crashing into protons thanks to a balance of 'uncertainty' of position and momentum.
 
Another way to think of it is this: objects with ultra-precise positions require us to consider increasingly ridiculous amounts of energy. Applied to a hypothetical timepiece, splitting fractions of a second on our clock makes us less certain about the clock's energy. This is where general relativity comes in – another highly trusted theory in physics, only this time it is most useful in explaining how massive objects affect one another at a distance.
 
Thanks to Einstein's work, we understand there is an equivalence between mass and energy, made famous in the equation (for objects at rest) as Energy = mass x speed of light squared (or E=mc^2). We also know time and space are connected, and that this space-time can be affected as if it was more than just an empty box; mass – and therefore energy – can 'bend' it. This is why we see cool things like gravitational lensing, where massive objects like stars and black holes dimple space so much, light can both travel straight and yet bend around them.
 
It also means mass can affect time through a phenomenon called gravitational time dilation, where time looks like it is running slower the closer it gets to a gravitational source. Unfortunately, while the theories are both supported by experiments, they usually don't play well together, forcing physicists to consider a new theory that will allow them both to be correct at the same time.
 
Meanwhile, it's important that we continue to understand how both theories describe the same phenomena, such as time. Which is what this new paper does. In this case, the physicists hypothesized the act of measuring time in greater detail requires the possibility of increasing amounts of energy, in turn making measurements in the immediate neighborhood of any time-keeping devices less precise.
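A back-of-the-envelope version of that chain of reasoning, offered here as a heuristic sketch rather than the paper's actual derivation, strings together the three ingredients described above: the energy-time uncertainty relation, mass-energy equivalence, and gravitational time dilation.

```latex
\Delta E \gtrsim \frac{\hbar}{2\,\Delta t}
\quad\Longrightarrow\quad
\frac{\delta \tau}{\tau} \sim \frac{G\,\Delta E}{r\,c^{4}}
\gtrsim \frac{G\,\hbar}{2\,r\,c^{4}\,\Delta t}
```

Read left to right: the finer the tick (the smaller Delta t), the larger the unavoidable energy spread of the clock, and the larger the resulting uncertainty in the rate of other clocks a distance r away.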
 
"Our findings suggest that we need to re-examine our ideas about the nature of time when both quantum mechanics and general relativity are taken into account", says researcher Esteban Castro.

World's spiders eat 400 to 800 million tons of insects, study finds


A new study reveals some stunning estimates about just how much the world's spiders eat annually: between 400 and 800 million tons of insects, springtails, and other invertebrates. For a sense of just how much this is, take the following into account: all humans together consume an estimated 400 million tons of meat and fish annually. Whales feed on 280 to 500 million tons of seafood, while the world's total seabird population eats an estimated 70 million tons of fish and other seafood.

 

In the process of eating all these insects, these eight-legged carnivores play an important role to keep countless insect pests, especially in forests and grassland areas, in check. This is according to the findings of Martin Nyffeler of the University of Basel in Switzerland and Klaus Birkhofer of Lund University in Sweden and the Brandenburg University of Technology Cottbus-Senftenberg in Germany, published in Springer's journal The Science of Nature.

 

Using data from 65 previous studies, Nyffeler and Birkhofer first estimated how many spiders are currently to be found in seven biomes on the planet. Their conclusion: altogether there are about 25 million metric tons' worth of them around. Most spiders are found in forests, grasslands and shrublands, followed by croplands, deserts, urban areas and tundra areas.

 

The researchers then used two simple models to calculate how much prey all the world's spiders as a whole kill per year. In their first approach, they took into account how much most spiders generally need to eat to survive, as well as census data on the average spider biomass per square meter in the various biomes. The second approach was based on prey capture observations in the field, combined with estimates of spider numbers per square meter. According to their extrapolations, 400 to 800 million tons of prey are being killed by spiders each year.
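The first of those two approaches is essentially a one-line multiplication. The version below applies an assumed average ration of about 0.1 mg of prey per mg of spider per day, and an assumed fraction of days on which spiders actually feed at that rate (both illustrative figures; the study uses biome-specific values), to the roughly 25 million tonnes of global spider biomass estimated above.

```python
spider_biomass_tonnes = 25e6     # global spider biomass, from the study's first estimate
daily_ration = 0.1               # assumed mg of prey per mg of spider body mass per day
feeding_fraction = 0.5           # assumed fraction of days spiders feed at that rate

annual_prey_tonnes = spider_biomass_tonnes * daily_ration * 365 * feeding_fraction
print(f"rough annual prey kill ~ {annual_prey_tonnes / 1e6:.0f} million tonnes")
```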

 

According to further calculations, spiders in forests and grasslands account for more than 95 percent of the annual prey kill of the global spider community. The figure reflects the fact that forests, grasslands and savannas are less frequently disturbed than for instance agricultural or urban areas, and therefore allow for greater spider biomass.

 

"These estimates emphasize the important role that spider predation plays in semi-natural and natural habitats, as many economically important pests and disease vectors breed in those forest and grassland biomes," says lead author Martin Nyffeler.

 

According to the researchers, spiders are not only important predators, but are also valuable sources of prey. Between 8,000 and 10,000 other predators, parasitoids and parasites feed exclusively on spiders, while spiders at the same time form an important part of the diet of an estimated 3,000 to 5,000 bird species.

 

"We hope that these estimates and their significant magnitude raise public awareness and increase the level of appreciation for the important global role of spiders in terrestrial food webs," adds Nyffeler.


Ultrashort configurable light pulses for fast 'lightwave' computers


Extremely short, configurable "femtosecond" pulses of light demonstrated by an international team could lead to future computers that run up to 100,000 times faster than today's electronics.

 

The researchers, including engineers at the University of Michigan, showed that they could control the peaks within the laser pulses and also twist the light. The method moves electrons faster and more efficiently than electrical currents -- and with reliable effects on their quantum states. It is a step toward so-called "lightwave electronics" and, in the more distant future, quantum computing, said Mackillo Kira, U-M professor of electrical engineering and computer science who was involved in the research.

 

Electrons moving through a semiconductor in a computer, for instance, occasionally run into other electrons, releasing energy in the form of heat. But a concept called lightwave electronics proposes that electrons could be guided by ultrafast laser pulses. While high speed in a car makes it more likely that a driver will crash into something, high speed for an electron can make the travel time so short that it is statistically unlikely to hit anything.

 

"In the past few years, we and other groups have found that the oscillating electric field of ultrashort laser pulses can actually move electrons back and forth in solids," said Rupert Huber, professor of physics at the University of Regensburg who led the experiment. "Everybody was immediately excited because one may be able to exploit this principle to build future computers that work at unprecedented clock rates -- 10 to a hundred thousand times faster than state-of-the-art electronics."

 

But first, researchers need to be able to control electrons in a semiconductor. This work takes a step toward this capability by mobilizing groups of electrons inside a semiconductor crystal using terahertz radiation -- the part of the electromagnetic spectrum between microwaves and infrared light. The researchers shone laser pulses into a crystal of the semiconductor gallium selenide. These pulses were very short at less than 100 femtoseconds, or 100 quadrillionths of a second. Each pulse popped electrons in the semiconductor into a higher energy level -- which meant that they were free to move around -- and carried them onward. The different orientations of the semiconductor crystal with respect to the pulses meant that electrons moved in different directions through the crystal -- for instance, they could run along atomic bonds or in between them.

 

"The different energy landscapes can be viewed as a flat and straight street for electrons in one crystal direction, but for others, it may look more like an inclined plane to the side," said Fabian Langer, a doctoral student in physics at Regensburg. "This means that the electrons may no longer move in the direction of the laser field but perform their own motion dictated by the microscopic environment."

 

When the electrons emitted light as they came down from the higher energy level, their different journeys were reflected in the pulses. They emitted much shorter pulses than the electromagnetic radiation going in. These bursts of light were just a few femtoseconds long. Inside a crystal, they are quick enough to take snapshots of other electrons as they move among the atoms, and they could also be used to read and write information to electrons. For that, researchers would need to be able to control these pulses -- and the crystal provides a range of tools.

 

"There are fast oscillations like fingers within a pulse. We can move the position of the fingers really easily by turning the crystal," said Kira, whose group worked with researchers at the University of Marburg, Germany, to interpret Huber's experiment.

The crystal could also twist the outgoing light waves or not, depending on its orientation to the incoming laser pulses.

Because femtosecond pulses are fast enough to intercept an electron between being put into an excited state and coming down from that state, they can potentially be used for quantum computations using electrons in excited states as qubits.


A perfect storm of fire and ice may have led to snowball Earth


What caused the largest glaciation event in Earth's history, known as 'snowball Earth'? Geologists and climate scientists have been searching for the answer for years but the root cause of the phenomenon remains elusive.

 

Now, Harvard University researchers have a new hypothesis about what caused the runaway glaciation that covered Earth pole-to-pole in ice. The research is published in Geophysical Research Letters.

 

Researchers have pinpointed the start of what's known as the Sturtian snowball Earth event to about 717 million years ago -- give or take a few hundred thousand years. At around that time, a huge volcanic event devastated an area from present-day Alaska to Greenland. Coincidence?

 

Harvard professors Francis Macdonald and Robin Wordsworth thought not. "We know that volcanic activity can have a major effect on the environment, so the big question was, how are these two events related," said Macdonald, the John L. Loeb Associate Professor of the Natural Sciences.

 

At first, Macdonald's team thought basaltic rock -- which weathers to release magnesium and calcium -- interacted with CO2 in the atmosphere and caused cooling. However, if that were the case, cooling would have happened over millions of years, and radio-isotopic dating of volcanic rocks in Arctic Canada suggests a far more precise coincidence with cooling.

 

Macdonald turned to Wordsworth, who models climates of non-Earth planets, and asked: could aerosols emitted from these volcanos have rapidly cooled Earth? The answer: yes, under the right conditions.

 

"It is not unique to have large volcanic provinces erupting," said Wordsworth, assistant professor of Environmental Science and Engineering at the Harvard John A. Paulson School of Engineering and Applied Science. "These types of eruptions have happened over and over again throughout geological time but they're not always associated with cooling events. So, the question is, what made this event different?"

 

Geological and chemical studies of this region, known as the Franklin large igneous province, showed that volcanic rocks erupted through sulfur-rich sediments, which would have been pushed into the atmosphere during eruption as sulfur dioxide. When sulfur dioxide gets into the upper layers of the atmosphere, it's very good at blocking solar radiation. The 1991 eruption of Mount Pinatubo in the Philippines, which shot about 10 million metric tons of sulfur into the air, reduced global temperatures about 1 degree Fahrenheit for a year.

 

Sulfur dioxide is most effective at blocking solar radiation if it gets past the tropopause, the boundary separating the troposphere and stratosphere. If it reaches this height, it's less likely to be brought back down to earth in precipitation or mixed with other particles, extending its presence in the atmosphere from about a week to about a year. The height of the tropopause barrier all depends on the background climate of the planet -- the cooler the planet, the lower the tropopause.

 

"In periods of Earth's history when it was very warm, volcanic cooling would not have been very important because Earth would have been shielded by this warm, high tropopause," said Wordsworth. "In cooler conditions, Earth becomes uniquely vulnerable to having these kinds of volcanic perturbations to climate."


Two radio signals, one chip, open a new world for wireless communication


Cornell engineers have devised a method for transmitting and receiving radio signals on a single chip, which could ultimately help change the way wireless communication is done.

 

Separating the send and receive bands is difficult enough, but the problem is compounded by the ever-increasing number of bands in the latest devices, which handle everything wireless technology has to offer. From GPS to Bluetooth to Wi-Fi, each band requires a filter to stop the strong transmit signals from drowning out reception.

 

Alyosha Molnar, associate professor of electrical and computer engineering (ECE), and Alyssa Apsel, professor of ECE, have come up with an ingenious way to separate the signals. Their work is described in "A wideband fully integrated software-defined transceiver for FDD and TDD operation," published online in the Institute of Electrical and Electronics Engineers' Journal of Solid-State Circuits.

 

Their idea lies in the transmitter -- actually a series of six subtransmitters -- all hooked into an artificial transmission line. Each of the subtransmitters sends signals at regular intervals, and their individually weighted outputs are programmed so that they combine to produce a radio frequency signal in the forward direction, at the antenna port, while canceling out at the receive port.

 

The programmability of the individual outputs allows this simultaneous summation and cancellation to be tuned across a wide range of frequencies, and to adjust to signal strength at the antenna. "In one direction, it's a filter and you basically get this cancellation," Apsel said. "And in the other direction, it's an amplifier."

 

"You put the antenna at one end and the amplified signal goes out the antenna, and you put the receiver at the other end and that's where the nulling happens," Molnar said. "Your receiver sees the antenna through this wire, the transmission line, but it doesn't see the transmit signal because it's canceling itself out at that end."

 

This work builds on research reported six years ago by a group from Stanford University, which devised a way for the transmitter to filter its own transmission, allowing the weaker incoming signal to be heard. It's the theory behind noise-canceling headphones. Unlike the Stanford work, the Cornell group's subtransmitter concept will work over a range of frequencies - a positive in this age of scrambling for available frequencies that used to be the realm of over-the-air television.


Newest Machine Learning Trends


In the research areas, Machine Learning is steadily moving away from abstractions and engaging more in business problem solving with support from AI and Deep Learning. In What Is the Future of Machine Learning, Forbes predicts the theoretical research in ML will gradually pave the way for business problem solving. With Big Data making its way back to mainstream business activities, now smart (ML) algorithms can simply use massive loads of both static and dynamic data to continuously learn and improve for enhanced performance.

 

If the threat of intelligent machines taking over the work of Data Scientists is really as real as it is made out to be, then 2017 is probably the year when the global Data Science community should take a new look at the capabilities of so-called “smart machines.” The repeated failure of autonomous cars has made one point clear: even learning machines cannot surpass the natural thinking faculties bestowed by nature on human beings. If autonomous or self-guided machines are to be useful to human society, then current Artificial Intelligence and Machine Learning research should focus on acknowledging the limits of machine power, assigning tasks that are suitable for machines, and including more human interventions at necessary checkpoints to avert disasters. Repetitive, routine tasks can be well handled by machines, but any out-of-the-ordinary situations will still require human intervention.

 

2017 Machine Learning and Application Development Trends

Gartner’s Top 10 Technology Trends for 2017 predicts that the combined AI and advanced ML practice that ignited about four years ago, and has since continued unscathed, will dominate Artificial Intelligence application development in 2017. This lethal combination will deliver more systems that “understand, learn, predict, adapt and potentially operate autonomously.” Cheap hardware, cheap memory, cheap storage technologies, more processing power, superior algorithms, and massive data streams will all contribute to the success of ML-powered AI applications. There will be a steady rise in ML-powered AI applications in industry sectors like preventive healthcare, banking, finance, and media. For businesses that means more automated functions and fewer human checkpoints. 2017 Predictions from Forrester suggests that the Artificial Intelligence and Machine Learning Cloud will increasingly feed on IoT data as sensors and smart apps take over every facet of our daily lives.

 

Democratization of Machine Learning in the Cloud          

Democratization of AI and ML through Cloud technologies, open standards, and the algorithm economy will continue. The growing trend of deploying prebuilt ML algorithms to enable Self-Service Business Intelligence and Analytics is a positive step toward the democratization of ML. In Google Says Machine Learning is the Future, the author champions the democratization of ML through idea sharing. A case in point is Google’s TensorFlow, which has championed the need for open standards in Machine Learning. The article claims that almost anyone with a laptop and an Internet connection can aspire to be a Machine Learning expert today, provided they have the right mindset. The provisioning of Cloud-based IT services was already a good step toward making advanced Data Science a mainstream activity, and now, with the Cloud and packaged algorithms, mid-sized and smaller businesses will have access to Self-Service BI and Analytics, which until now was only a dream. Mainstream business users will also gradually take a more active role in data-centric business systems. Machine Learning Trends – Future AI claims that more enterprises in 2017 will capitalize on the Machine Learning Cloud and do their part to lobby for democratized data technologies.

 

Platform Wars will Peak in 2017

The platform war between IBM, Microsoft, Google, and Facebook to lead ML development will peak in 2017. Where Machine Learning Is Headed predicts that 2017 will see tremendous growth in smart apps, digital assistants, and mainstream use of Artificial Intelligence. Although many ML-enabled AI systems have turned into success stories, self-driving cars may yet die a premature death.

 

Humans will Make Peace with Machines

Since 2012 the global business community has witnessed a meteoric rise and widespread proliferation of data technologies. Finally, humans will realize that it is time to stop fearing the machines and begin working with them. The InfoWorld article titled Application Development, Docker, Machine Learning Are Top Tech Trends for 2017 asserts that humans and machines will work with each other, not against each other. In this context, readers should review the DATAVERSITY® article The Future of Machine Learning: Trends, Observations, and Forecasts, which reminds readers that as businesses develop a strong dependence on prebuilt ML algorithms for Advanced Analytics, the need for Data Scientists or large IT departments may diminish.

 

Demand-Supply Gaps in Data Science and Machine Learning will Rise

The business world is steadily heading toward the prophetic 2018, when, according to McKinsey, the first void in data technology expertise will be felt in the US and then gradually in the rest of the world. The demand-supply gap in Data Science and Machine Learning skills will continue to widen until academic programs and industry workshops begin to produce a ready workforce. In response, more enterprises and academic institutions will collaborate to train future Data Scientists and ML experts. This kind of training will compete with the traditional Data Science classroom and will focus more on practical skills than on theoretical knowledge. KDnuggets will continue to challenge the curious mind by publishing articles like 10 Algorithms that Machine Learning Engineers Should Know. 2017 will witness a steady rise in contributions from KDnuggets and Kaggle in providing alternative training to future Data Scientists and Machine Learning experts through practical skill development.

 

Algorithm Economy will take Center Stage

Over the next year or two, businesses will be using canned algorithms for all data-centric activities like BI, Predictive Analytics, and CRM. The algorithm economy, which Forbes mentions, will usher in a marketplace where all data companies compete for space. In 2017, global businesses will engage in Self-Service BI and experience the growth of algorithmic business solutions and ML in the Cloud. So far as algorithm-driven business decision making is concerned, 2017 may actually see two distinct types of algorithm economies. On one hand, average businesses will use canned algorithmic models for their operational and customer-facing functions. On the other hand, proprietary ML algorithms will become a market differentiator among large, competing enterprises.

Scooped by Dr. Stefan Gruenwald

This AI software dreams up new drug molecules

Ingesting a heap of drug data allows a machine-learning system to suggest alternatives humans hadn’t tried yet.

 

A group of scientists now report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This generative model allows efficient search and optimization through open-ended spaces of chemical compounds. The team trains deep neural networks on hundreds of thousands of existing chemical structures to construct two coupled functions: an encoder and a decoder. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors from the latent space back into the discrete representation. Continuous representations make it possible to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. They also enable powerful gradient-based optimization to efficiently guide the search for functional compounds. The researchers demonstrate the success of this method in the design of drug-like molecules as well as organic light-emitting diodes.
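
To make the encode/perturb/interpolate/decode workflow concrete, here is a heavily simplified sketch. It is not the paper's deep variational model: a PCA on one-hot-encoded strings stands in for the learned encoder/decoder, and the ALPHABET, MAXLEN, and example SMILES strings are illustrative assumptions. A properly trained deep network is what would make the decoded outputs chemically meaningful.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy stand-in for a learned encoder/decoder (a linear "autoencoder" via PCA),
# just to illustrate the latent-space operations described above.
ALPHABET = list("CNOc1=()#[]@Hl+-2345 ")
MAXLEN = 24

def one_hot(smiles):
    x = np.zeros((MAXLEN, len(ALPHABET)))
    for i, ch in enumerate(smiles.ljust(MAXLEN)[:MAXLEN]):
        x[i, ALPHABET.index(ch)] = 1.0
    return x.ravel()

def decode(vec):
    x = vec.reshape(MAXLEN, len(ALPHABET))
    return "".join(ALPHABET[i] for i in x.argmax(axis=1)).rstrip()

smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CN1C=NC2=C1", "C#N", "CCN(CC)CC"]
X = np.array([one_hot(s) for s in smiles])

pca = PCA(n_components=4).fit(X)        # "encoder": discrete -> continuous
z = pca.transform(X)                    # latent vectors, one per molecule

# Interpolate between two molecules in latent space, then "decode" back.
z_mid = 0.5 * (z[0] + z[1])
print(decode(pca.inverse_transform(z_mid[None, :])[0]))

# Perturb a known molecule's latent vector to propose a nearby structure.
z_new = z[2] + 0.1 * np.random.default_rng(1).standard_normal(4)
print(decode(pca.inverse_transform(z_new[None, :])[0]))
```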

Scooped by Dr. Stefan Gruenwald

Scientists reveal how thieves can steal your PIN code within seconds

Researchers from the University of Stuttgart in Germany say the technique works up to 30 seconds after you've stopped tapping on the screen.

 

When you tap in your PIN code, your fingers leave traces of heat on the screen. Scammers can use a thermal camera to take a snapshot of those traces within seconds of you entering the PIN. The thermal images are then put through a six-stage process in which the color image is converted to grayscale and stripped down to leave only the heat spots. The final step is to work out how much each heat spot has faded over time, revealing the likely order in which the digits were typed. The same process can also be used to work out the hand-drawn pattern Android users trace to unlock their phones.
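
As a rough sketch of this kind of pipeline (our own simplification, not the Stuttgart authors' exact six stages), warm spots can be isolated in a thermal frame and ranked by how much they have cooled, which hints at the tap order. The function name rank_taps, the ambient_offset parameter, and the synthetic frame are all assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

# Isolate heat spots left by fingertips and rank them from coolest (earliest
# tap) to warmest (most recent tap). Purely illustrative.
def rank_taps(thermal, ambient_offset=2.0):
    """thermal: 2-D array of temperatures, e.g. from a thermal camera frame."""
    background = np.median(thermal)
    hot = thermal > background + ambient_offset        # keep only heat spots
    labels, n = ndimage.label(hot)                     # group pixels into spots
    peaks = ndimage.maximum(thermal, labels, index=range(1, n + 1))
    centers = ndimage.center_of_mass(thermal, labels, index=range(1, n + 1))
    order = np.argsort(peaks)                          # cooler = touched earlier
    return [centers[i] for i in order]

# Tiny synthetic example: three taps that have cooled by different amounts.
frame = np.full((60, 40), 24.0)
for (y, x), temp in [((10, 10), 27.0), ((30, 20), 29.0), ((50, 30), 31.0)]:
    frame[y - 2:y + 3, x - 2:x + 3] = temp
print(rank_taps(frame))   # earliest (coolest) tap location comes first
```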

Rescooped by Dr. Stefan Gruenwald from Fragments of Science

The prototype of a chemical computer detects a sphere


Chemical computers are becoming ever more of a reality, as scientists from the Institute of Physical Chemistry of the Polish Academy of Sciences in Warsaw are demonstrating. It turns out that, after an appropriate teaching procedure, even a relatively simple chemical system can perform non-trivial operations. In their most recent computer simulations, the researchers have shown that correctly programmed chemical matrices of oscillating droplets can recognize the shape of a sphere with great accuracy.

 

Modern computers use electronic signals for their calculations, that is, physical phenomena related to the movement of electric charges. Information can, however, be processed in many ways. For some time now, efforts have been underway worldwide to use chemical signals for this purpose. For the time being, however, the resulting chemical systems perform only the simplest logic operations. Meanwhile, researchers from the Institute of Physical Chemistry of the Polish Academy of Sciences (IPC PAS) in Warsaw have demonstrated that even uncomplicated and easy-to-produce collections of droplets, in which oscillating chemical reactions proceed, can process information in a useful way, e.g. recognizing the shape of a specified three-dimensional object with great accuracy or correctly classifying cancer cells as benign or malignant.

 

"A lot of work being currently carried out in laboratories focuses on building chemical equivalents of standard logic gates. We took a different approach to the problem," says Dr. Konrad Gizynski (IPC PAS) and explains: "We investigate systems of a dozen-or-so to a few dozen drops in which chemical signals propagate, and treat each one as a whole, as a kind of neuronal network. It turns out that such networks, even very simple ones, after a short teaching procedure manage well with fairly sophisticated problems. For instance, our newest system has ability to recognize the shape of a sphere in a set of x, y, z spatial coordinates".

 

The systems being studied at the IPC PAS work thanks to the Belousov-Zhabotinsky reaction proceeding in individual drops. This reaction is oscillatory: after the completion of one oscillation cycle the reagents necessary to begin the next cycle are regenerated in the solution. A droplet is a batch reactor. Before reagents are depleted a droplet has usually performed from a few dozen to a few hundred oscillations. The time evolution of a droplet is easy to observe, since its catalyst, ferroin, changes color during the cycle. In a thin layer of solution the effect is spectacular: colorful strips - chemical fronts - traveling in all directions appear in the liquid. Fronts can also be seen in the droplets, but in practice the phase of the cycle is indicated just by the color of the droplet: when the cycle begins, the droplet rapidly turns blue (excites), after which it gradually returns to its initial state, which is red.

 

"Our systems basically work by mutual communication between droplets: when the droplets are in contact, the chemical excitation can be transmitted from droplet to droplet. In other words, one droplet can trigger the reaction in the next! It is also important that an excited droplet cannot be immediately excited once again. Speaking somewhat colloquially, before the next excitation it has to 'have a rest', in order to return to its original state," explains Dr. Gizynski.


Via Mariaschnee
Scooped by Dr. Stefan Gruenwald

Google unveils the most detailed view of Earth's changing oceans, seas, rivers and lakes


Google has partnered with the European Commission’s Joint Research Centre to produce the most detailed view ever of water on the surface of the Earth. The images show changing water levels over the past three decades and reveal some of the stories behind those changes, showing how they "have shaped the world over time, in unprecedented detail." The project took more than three years and involved thousands of computers downloading 1.8 petabytes of data from the USGS/NASA Landsat satellite program. Each pixel in 3 million satellite images, going back to 1984, was analyzed by an algorithm developed by the Joint Research Centre running on the Google Earth Engine platform.

 

More than 10 million hours of computing time was needed for this, roughly equivalent to a modern 2-core computer running day and night for 600 years. From this, the researchers were able to establish that, over the past 32 years, 90,000 square kilometers of water - the equivalent of half of the lakes in Europe - have vanished, while 200,000 square kilometers of new, mostly man-made water bodies appeared.
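
As a rough sketch of the kind of per-pixel bookkeeping involved (our own simplified illustration, not the JRC/Earth Engine algorithm): given boolean water masks for an early and a late epoch, losses and new water surfaces can be totted up pixel by pixel. The random masks and the 30 m pixel size below are stand-in assumptions.

```python
import numpy as np

# Simplified illustration of per-pixel water-change accounting: compare an
# early and a late water mask and convert changed pixels into areas.
PIXEL_AREA_KM2 = (30 * 30) / 1e6      # one 30 m Landsat-style pixel in km^2

rng = np.random.default_rng(42)
water_1984 = rng.random((1000, 1000)) < 0.12     # stand-in masks; in reality
water_2015 = rng.random((1000, 1000)) < 0.13     # they come from classified
                                                 # Landsat time series
lost   = water_1984 & ~water_2015                # water that vanished
gained = ~water_1984 & water_2015                # new (often man-made) water

print(f"water lost:   {lost.sum()   * PIXEL_AREA_KM2:8.1f} km^2")
print(f"water gained: {gained.sum() * PIXEL_AREA_KM2:8.1f} km^2")
```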

 

The continuing drying up of the Aral Sea in Uzbekistan and Kazakhstan accounts for the biggest loss in the world. Iran and Afghanistan have each lost over half of their water area, and Iraq over a third.

Scooped by Dr. Stefan Gruenwald

Speeding Stars Are Evidence Our Galaxy’s Spiral Arms Will Disappear


A team of astronomers has identified a group of stars overtaking the Sun in its orbit around the Milky Way Galaxy. The stars are further from the Galaxy’s centre than the Sun and are passing us like runners lapping a slower runner in an inside lane.

 

The discovery provides evidence for a theory describing the nature of galactic spiral arms—a theory that predicts our Galaxy’s spiral arms will eventually disappear and be replaced by others.

 

The astronomers determined that a group of about a hundred stars are orbiting the centre of our Galaxy twenty kilometers per second faster than the Sun and other neighboring stars. The stars trail behind the Galaxy’s Perseus Arm and the added velocity is likely caused by the gravitational pull of the stars and gas concentrated in the arm.

 

“This group’s higher velocity is consistent with two models of spiral arms,” says Jason Hunt, a Dunlap Fellow at the Dunlap Institute for Astronomy & Astrophysics, University of Toronto, and lead author of the paper describing the discovery.

 

“But it does favor one model over the other.” One model says that spiral arms are density waves that travel through the disk of a galaxy like waves on the surface of a pond; i.e. the stars do not move with the wave. As well, the waves retain their shape as they travel around the galaxy.

 

The second, referred to as the co-rotational model, says that spiral arms are made of stars and that the stars and the wave move around the galaxy together. They are analogous to the arms of a pirouetting figure skater.

 

Probability argues that Hunt’s result is evidence for the co-rotational model. In the co-rotational model, there would be stars trailing the arm along its entire length, traveling faster than their neighbors along the entire length of the arm.

 

On the other hand, in the density-wave model, stars in the wake of a spiral arm would be traveling faster than their neighbors at only one location along the arm; elsewhere along the arm, trailing stars would be moving either faster or slower than the arm itself. The odds are therefore small that Hunt and his colleagues just happened to catch that single, isolated population of stars.

 

In the co-rotational model, spiral arm stars move more slowly, the further they are from a galaxy’s centre—just as planets move more slowly, the further they are from their parent star. With each rotation of the galaxy, the slower moving stars at the ends of the arms—nearer the edge of the galaxy—fall further and further behind, until the arm dissipates. This is commonly referred to as the “winding problem.”
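
A back-of-the-envelope illustration of the winding problem (our own numbers, assuming a roughly flat rotation curve of about 220 km/s; real disk kinematics are more complicated): because angular speed falls with radius, material at different radii drifts apart in azimuth, and an initially straight arm wraps up within a few galactic rotations.

```python
import numpy as np

# Back-of-the-envelope winding-problem estimate (illustrative assumptions:
# flat rotation curve at 220 km/s, radii in kiloparsecs).
KM_PER_KPC = 3.086e16
SEC_PER_GYR = 3.156e16

v = 220.0                                  # km/s, roughly constant with radius
radii = np.array([4.0, 8.0, 12.0])         # kpc
omega = v / (radii * KM_PER_KPC)           # rad/s; angular speed falls with R

t = 2.0 * SEC_PER_GYR                      # two billion years
turns = omega * t / (2 * np.pi)            # revolutions completed at each radius
print(dict(zip(radii, turns.round(2))))
# The inner radius laps the outer one several times, so a material arm winds
# up tightly unless something (e.g. a density wave) maintains its shape.
```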

 

Hunt and his collaborators made their discovery using data from the European Space Agency’s Gaia spacecraft. Gaia’s mission is to create a 3-dimensional map of our Galaxy and the data released so far includes distances, positions in the sky, and motion across the sky for around two million stars.

Scooped by Dr. Stefan Gruenwald

Machine Learning Algorithm Deciphers Bat Talk


A machine learning algorithm helped decode the squeaks Egyptian fruit bats make in their roost, revealing that they “speak” to one another as individuals.

 

Plenty of animals communicate with one another, at least in a general way—wolves howl to each other, birds sing and dance to attract mates and big cats mark their territory with urine. But researchers at Tel Aviv University recently discovered that when at least one species communicates, it gets very specific. Egyptian fruit bats, it turns out, aren’t just making high pitched squeals when they gather together in their roosts. They’re communicating specific problems, reports Bob Yirka at Phys.org.

 

According to Ramin Skibba at Nature, neuroecologist Yossi Yovel and his colleagues recorded a group of 22 Egyptian fruit bats, Rousettus aegyptiacus, for 75 days. Using a modified machine learning algorithm originally designed for recognizing human voices, they fed 15,000 calls into the software. They then analyzed the corresponding video to see if they could match the calls to certain activities.

 

They found that the bat noises are not just random, as previously thought, reports Skibba. They were able to classify 60 percent of the calls into four categories. One of the call types indicates the bats are arguing about food. Another indicates a dispute about their positions within the sleeping cluster. A third call is reserved for males making unwanted mating advances and the fourth happens when a bat argues with another bat sitting too close. In fact, the bats make slightly different versions of the calls when speaking to different individuals within the group, similar to a human using a different tone of voice when talking to different people. Skibba points out that besides humans, only dolphins and a handful of other species are known to address individuals rather than making broad communication sounds. The research appears in the journal Scientific Reports.
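
A rough sketch of this kind of supervised call classification (a generic pipeline of our own, not the Tel Aviv group's modified voice-recognition model): summarize each recorded call as a feature vector and train a classifier on the annotated categories. The feature matrix X and labels y below are random stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Generic supervised call-classification sketch: each call is reduced to a
# feature vector; labels are the four argument types reported in the study.
rng = np.random.default_rng(0)
n_calls = 400
X = rng.normal(size=(n_calls, 12))          # stand-in acoustic features
                                            # (e.g. pitch, duration, spectral shape)
y = rng.integers(0, 4, size=n_calls)        # 0=food, 1=sleeping spot,
                                            # 2=mating advance, 3=personal space

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # with random features this hovers
print(scores.mean())                        # near chance (~0.25); real spectral
                                            # features are what lift accuracy
```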

 

“We have shown that a big bulk of bat vocalizations that previously were thought to all mean the same thing, something like ‘get out of here!’ actually contain a lot of information,” Yovel tells Nicola Davis at The Guardian. By looking even more carefully at stresses and patterns, Yovel says, researchers may be able to tease out even more subtleties in the bat calls.

 

Scooped by Dr. Stefan Gruenwald

Apple's ResearchKit generates reliable health data — at least for asthma patients


Health data collected entirely from smartphones can be reliable, research from Mount Sinai Hospital claims. The researchers involved found that Apple’s ResearchKit platform and an app for asthma were fairly accurate when compared to existing patient studies.

 

Finding and recruiting participants is a big hurdle for medical studies. In recent years, people have started collecting health data from smartphones, which seems sensible given how common smartphones are. But this raises questions about whether data gathered this way can be trusted.

 

The study, published today in the journal Nature Biotechnology, suggests that health care apps may be reliable, at least with regard to asthma. This is good news, since smartphone use is only increasing — an estimated 6 billion smartphones are expected to be in use worldwide by 2020 — and collecting reliable health data from them could be very valuable for research.

 

Apple launched ResearchKit, a software medical platform, in 2015. It helps researchers recruit participants for studies; participants can enroll in trials and take surveys or provide other data. Early research partners included big names like the University of Oxford, Stanford Medicine, and the Dana-Farber Cancer Institute. The asthma mobile app from today’s study was one of the five disease-specific apps that Apple launched with the initial release of ResearchKit.

 

In today’s study, nearly 50,000 iPhone users downloaded the asthma app. Of these, about 7,600 people enrolled in the six-month study after completing the consent form. People in the study took surveys on how they treated their asthma; the app also provided information about location and air quality.

 

The scientists then looked at how this patient-reported data measured up when compared to external factors. For example, around the time there were fires in Washington state, patients in the area reported worse asthma symptoms, as might be expected, suggesting the data was fairly reliable. They were similarly able to correlate data related to heat and pollen.
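
A hedged sketch of this kind of sanity check (our own simplified version with made-up series, not Mount Sinai's analysis): correlate daily patient-reported symptom scores against an external air-quality series and look for the expected relationship.

```python
import numpy as np
from scipy.stats import pearsonr

# Simplified version of the external-validation idea (simulated data):
# do self-reported symptoms track an independent air-quality measure?
rng = np.random.default_rng(7)
days = 180
air_quality_index = 50 + 30 * rng.random(days)                  # external AQI
symptom_score = 0.04 * air_quality_index + rng.normal(0, 0.8, days)

r, p = pearsonr(air_quality_index, symptom_score)
print(f"correlation r = {r:.2f}, p = {p:.3g}")
# A clear positive correlation with an independent data source (here simulated)
# is what lends credibility to self-reported smartphone data.
```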
