Amazing Science
Scooped by Dr. Stefan Gruenwald
onto Amazing Science!

Are material constants alterable after all? Heat conduction in graphene varies with size

Based on recent experiments and computer simulations, scientists at the Max Planck Institute for Polymer Research and the National University of Singapore have demonstrated that the thermal conductivity of graphene diverges with the size of the samples. This discovery challenges the fundamental laws of heat conduction for extended materials.


Davide Donadio, head of a Max Planck Research Group at the MPI-P, and his collaborator from Singapore were able to predict this phenomenon with computer simulations and to verify it in experiments. Their research and results have now been published in the scientific journal Nature Communications.

"We recognized mechanisms of heat transfer that actually contradict Fourier's law at the micrometer scale. Now all previous experimental measurements of the thermal conductivity of graphene need to be reinterpreted. The very concept of thermal conductivity as an intrinsic property does not hold for graphene, at least for patches as large as several micrometers," says Davide Donadio.

The French physicist Joseph Fourier postulated the laws of heat propagation in solids, according to which thermal conductivity is an intrinsic material property, normally independent of size or shape. As the scientists have now found, this is not the case in graphene, a two-dimensional layer of carbon atoms. With experiments and computer simulations, they showed that the thermal conductivity increases logarithmically with the size of the graphene samples: the longer the graphene patches, the more heat can be transferred per unit length.

This is another unique property of this highly praised wonder material: graphene is chemically very stable, flexible, a hundred times more tear-resistant than steel and at the same time very light. Graphene was already known to be an excellent heat conductor; the novelty here is that its thermal conductivity, so far regarded as a material constant, varies as the length of the sample increases. After analyzing the simulations, Davide Donadio found that this feature stems from the combination of reduced dimensionality and stiff chemical bonding, which let thermal vibrations propagate with minimal dissipation under non-equilibrium conditions.
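As a rough illustration of the scaling described above, the contrast between Fourier's size-independent conductivity and a logarithmically growing one can be sketched in a few lines. The constants kappa0 and L0 below are arbitrary normalizations for the sketch, not values fitted from the study:

```python
import math

def kappa_fourier(L, kappa0=1.0):
    """Fourier's law: conductivity is an intrinsic constant, independent of L."""
    return kappa0

def kappa_graphene(L, kappa0=1.0, L0=1e-8):
    """Reported graphene behavior: conductivity grows with the log of sample length."""
    return kappa0 * math.log(L / L0)

# Doubling the patch length adds a fixed increment kappa0 * ln(2),
# instead of leaving the conductivity unchanged:
increment = kappa_graphene(2e-6) - kappa_graphene(1e-6)
```

Under Fourier's law the increment would be zero for any pair of lengths; here every doubling of the patch adds the same ln(2) step.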


20,000+ FREE Online Science and Technology Lectures from Top Universities


NOTE: To subscribe to the RSS feed of Amazing Science, copy the feed address into the URL field of your browser and click "subscribe".

This newsletter is aggregated from over 1,450 news sources.

All my Tweets and Scoop.It! posts are sorted and searchable, and you can search through all the articles semantically on my archived twitter feed.

NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the funnel at the top right of the screen) to display all the relevant postings sorted by topic. You can also type your own query, e.g. if you are looking for articles involving "dna" as a keyword, or click on the little funnel symbol at the top right of the screen.





Rice University's new electron microscope will capture images at subnanometer resolution


Rice University, renowned for nanoscale science, has installed microscopes that will allow researchers to peer deeper than ever into the fabric of the universe. The Titan Themis scanning/transmission electron microscope, one of the most powerful in the United States, will enable scientists from Rice as well as academic and industrial partners to view and analyze materials smaller than a nanometer — a billionth of a meter — with startling clarity.

The new microscope has the ability to take images of materials at angstrom-scale (one-tenth of a nanometer) resolution, about the size of a single hydrogen atom. Images will be captured with a variety of detectors, including X-ray, optical and multiple electron detectors and a 4K-resolution camera, equivalent to the number of pixels in the most modern high-resolution televisions. The microscope gives researchers the ability to create three-dimensional structural reconstructions and carry out electric field mapping of subnanoscale materials.

“Seeing single atoms is exciting, of course, and it’s beautiful,” said Emilie Ringe, a Rice assistant professor of materials science and nanoengineering and of chemistry. “But scientists saw single atoms in the ’90s, and even before. Now, the real breakthrough is that we can identify the composition of those atoms, and do it easily and reliably.” Ringe’s research group will operate the Titan Themis and a companion microscope that will image larger samples.

Electron microscopes use beams of electrons rather than rays of light to illuminate objects of interest. Because the wavelength of electrons is so much smaller than that of photons, the microscopes are able to capture images of much smaller things with greater detail than even the highest-resolution optical microscope.
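The wavelength argument can be made concrete with a back-of-the-envelope calculation. The 300 kV accelerating voltage below is a typical value for instruments of this class, assumed here for illustration rather than taken from Rice's specification:

```python
import math

# Physical constants (SI units)
h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron rest mass, kg
e = 1.602176634e-19     # elementary charge, C
c = 2.99792458e8        # speed of light, m/s

def electron_wavelength(volts):
    """Relativistically corrected de Broglie wavelength (in metres) of an
    electron accelerated through the given potential difference."""
    return h / math.sqrt(2 * m_e * e * volts * (1 + e * volts / (2 * m_e * c**2)))

lam = electron_wavelength(300e3)   # roughly 2 picometres at 300 kV
green_light = 550e-9               # a visible-light wavelength for comparison
ratio = green_light / lam          # electrons win by a factor of ~10^5
```

Even though lens aberrations keep real instruments far from this wavelength limit, the five-orders-of-magnitude head start is what makes sub-angstrom imaging possible at all.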

“The beauty of these newer instruments is their analytical capabilities,” Ringe said. “Before, in order to see single atoms, we had to work a machine for an entire day and get it just right and then take a picture and hold our breath. These days, seeing atoms is routine.

“And now we can probe a particular atom’s chemical composition. Through various techniques, either via scattering intensity, X-rays emission or electron-beam absorption, we can figure out, say, that we’re looking at a palladium atom or a carbon atom. We couldn’t do that before.”

Ringe said when an electron beam ejects a bound electron from a target atom, it creates an empty site. “That can be filled by another electron within the atom, and the energy difference between this electron and the missing electron is emitted as an X-ray,” she said. “That X-ray is like a fingerprint, which we can read. Different types of atoms have different energies.” 


Chromatin Remodeling Enzymes: The Human Protein Methyltransferases


Methyltransferases are enzymes that facilitate the transfer of a methyl (-CH3) group to specific nucleophilic sites on proteins, nucleic acids or other biomolecules. They share a reaction mechanism in which the nucleophilic acceptor site attacks the electrophilic carbon of S-adenosyl-L-methionine (SAM) in an SN2 displacement reaction that produces a methylated biomolecule and S-adenosyl-L-homocysteine (SAH) as a byproduct. Methylation reactions are essential transformations in small-molecule metabolism, and methylation is a common modification of DNA and RNA. The recent discovery of dynamic and reversible methylation of amino acid side chains of chromatin proteins, particularly within the N-terminal tail of histone proteins, has revealed the importance of methyl 'marks' as regulators of gene expression. Human protein methyltransferases (PMTs) fall into two major families - protein lysine methyltransferases (PKMTs) and protein arginine methyltransferases (PRMTs) - that are distinguishable by the amino acid that accepts the methyl group and by the conserved sequences of their respective catalytic domains. Given their involvement in many cellular processes, PMTs have attracted attention as potential drug targets, spurring the search for small-molecule PMT inhibitors. Several classes of inhibitors have been identified, but new specific chemical probes that are active in cells will be required to elucidate the biological roles of PMTs and serve as potent leads for PMT-focused drug development.

Protein lysine methyltransferases (PKMTs)

The phylogenetic tree shows 51 genes predicted to encode PKMTs, which are positioned in the tree on the basis of the similarities of their amino acid sequences. This tree excludes one validated PKMT, DOT1L, which lacks a SET domain - the catalytic domain conserved in this family - and clusters more closely with the PRMTs. The tree has four major branches, and each branch contains enzymes with validated methyltransferase activity (highlighted in red). Some PKMTs add a single methyl group, resulting in a mono-methylated product (Kme), whereas others produce di-(Kme2) or tri-methylated (Kme3) lysine modifications. Many of the validated PKMTs methylate lysines on histones, though nonhistone substrates have also been identified.

Protein arginine methyltransferases (PRMTs)

The human PRMT phylogenetic tree comprises 45 predicted enzymes including the PKMT DOT1L. There are two major types of PRMTs; both catalyze the formation of mono-methylarginine (Rme1) but distinct reaction mechanisms yield symmetric (Rme2s) or asymmetric (Rme2a) dimethylarginine. A small number of predicted PRMTs have validated activity (highlighted in blue). In addition to PRMTs, this tree includes validated RNA methyltransferases (highlighted in green) and biosynthetic enzymes (highlighted in violet). It remains uncertain whether these latter enzymes have PRMT activity, despite their shared structural features. Substrates for the enzymes shown include RNA, metabolites, histones and RNA-binding and spliceosomal proteins.
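The two-family classification above can be captured in a small lookup structure. This is only an organizational sketch of the marks named in the text, not a complete enzyme database:

```python
# Methyl marks produced by the two human protein methyltransferase families,
# as described above: PKMTs methylate lysine (K), PRMTs methylate arginine (R).
PMT_FAMILIES = {
    "PKMT": {"acceptor": "lysine",   "marks": ["Kme", "Kme2", "Kme3"]},
    "PRMT": {"acceptor": "arginine", "marks": ["Rme1", "Rme2s", "Rme2a"]},
}

def marks_for(family):
    """Return the methylation products a family can generate."""
    return PMT_FAMILIES[family]["marks"]
```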


NASA explains why 30 June 2015 will get an extra ‘leap second’

The day will officially be a bit longer than usual on Tuesday, 30 June 2015, because an extra second, or “leap” second, will be added.
“Earth’s rotation is gradually slowing down a bit, so leap seconds are a way to account for that,” said Daniel MacMillan of NASA’s Goddard Space Flight Center in Greenbelt, Maryland.

Strictly speaking, a day lasts 86,400 seconds. That is the case according to the time standard that people use in their daily lives – Coordinated Universal Time, or UTC. UTC is "atomic time": the duration of one second is based on extremely predictable electromagnetic transitions in atoms of caesium. These transitions are so reliable that the caesium clock is accurate to one second in 1,400,000 years.

However, the mean solar day – the average length of a day, based on how long it takes Earth to rotate – is about 86,400.002 seconds long. That’s because Earth’s rotation is gradually slowing down a bit, due to a kind of braking force caused by the gravitational tug of war between Earth, the Moon and the Sun. Scientists estimate that the mean solar day hasn’t been 86,400 seconds long since the year 1820 or so.

This difference of 2 milliseconds, or two thousandths of a second – far less than the blink of an eye – hardly seems noticeable at first. But if this small discrepancy were repeated every day for an entire year, it would add up to almost a second. In reality, that’s not quite what happens. Although Earth’s rotation is slowing down on average, the length of each individual day varies in an unpredictable way.
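The arithmetic in the paragraph above is easy to check. The 2 ms figure is the article's quoted mean excess of the solar day, and the 0.9 s threshold is the tolerance mentioned later for keeping UT1 and UTC aligned:

```python
EXCESS_PER_DAY = 0.002      # mean solar day minus 86,400 SI seconds, in seconds
DAYS_PER_YEAR = 365

# "almost a second" per year if the excess repeated every day:
drift_per_year = EXCESS_PER_DAY * DAYS_PER_YEAR   # ~0.73 s

def leap_second_due(accumulated_drift, threshold=0.9):
    """Leap seconds are inserted to keep |UT1 - UTC| below 0.9 s."""
    return abs(accumulated_drift) >= threshold
```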

The length of day is influenced by many factors, mainly the atmosphere over periods less than a year. Our seasonal and daily weather variations can affect the length of day by a few milliseconds over a year. Other contributors to this variation include dynamics of the Earth’s inner core (over long time periods), variations in the atmosphere and oceans, groundwater, and ice storage (over time periods of months to decades), and oceanic and atmospheric tides. Atmospheric variations due to El Niño can cause Earth’s rotation to slow down, increasing the length of day by as much as 1 millisecond, or a thousandth of a second.

Scientists monitor how long it takes Earth to complete a full rotation using an extremely precise technique called Very Long Baseline Interferometry (VLBI). These measurements are conducted by a worldwide network of stations, with Goddard providing essential coordination of VLBI, as well as analysing and archiving the data collected.

The time standard called Universal Time 1, or UT1, is based on VLBI measurements of Earth’s rotation. UT1 isn’t as uniform as the caesium clock, so UT1 and UTC tend to drift apart. Leap seconds are added, when needed, to keep the two time standards within 0.9 seconds of each other. The decision to add leap seconds is made by a unit within the International Earth Rotation and Reference Systems Service.

Typically, a leap second is inserted either on 30 June or 31 December. Normally, the clock would move from 23:59:59 to 00:00:00 the next day. But with the leap second on 30 June, UTC will move from 23:59:59 to 23:59:60, and then to 00:00:00 on 1 July. In practice, many systems are instead turned off for one second. Previous leap seconds have created challenges for some computer systems and generated some calls to abandon them altogether. One reason is that the need to add a leap second cannot be anticipated far in advance.

“In the short term, leap seconds are not as predictable as everyone would like,” said Chopo Ma, a geophysicist at Goddard and a member of the directing board of the International Earth Rotation and Reference Systems Service. “The modelling of the Earth predicts that more and more leap seconds will be called for in the long-term, but we can’t say that one will be needed every year.”

From 1972, when leap seconds were first implemented, through 1999, leap seconds were added at a rate averaging close to one per year. Since then, leap seconds have become less frequent. This June’s leap second will be only the fourth to be added since 2000. Before 1972, adjustments were made in a different way.

Scientists don’t know exactly why fewer leap seconds have been needed lately. Sometimes, sudden geological events, such as earthquakes and volcanic eruptions, can affect Earth’s rotation in the short-term, but the big picture is more complex.


Sequencing Uncovers New Monogenic Form of Obesity


A team from the UK, the Netherlands, and Ireland has identified a form of inherited obesity and type 2 diabetes that appears to stem from a mutation in a single enzyme-coding gene. As they reported online in PLOS One, the researchers did exome sequencing on members of a consanguineous family affected by a condition characterized by extreme obesity, type 2 diabetes, intellectual disability, and other features. Their search led to truncating mutations affecting both copies of a gene that codes for a peptide-processing enzyme called carboxypeptidase E.

That enzyme normally plays a role in regulating hormone and neuropeptide peptides, the team explained. And past mouse studies suggest that mutations that alter the enzyme's ability to regulate such peptides can throw off appetite control, normal glucose metabolism, and other physiological processes.

"There are now an increasing number of single-gene causes of obesity and diabetes known," corresponding author Alexandra Blakemore, a diabetes, endocrinology, and metabolism researcher at the Imperial College of Medicine, said in a statement.

"We don't know how many more have yet to be discovered, or what proportion of the severely obese people in our population have these diseases — it is not possible to tell just by looking," Blakemore added, explaining that such inherited conditions can affect individuals' bodies and their ability to appropriately respond to hunger and fullness signals.

In an effort to track down new genes that contribute to inherited, single-gene forms of obesity, the researchers performed exome sequencing on members of a Sudanese family found through a genetic obesity clinic at a UK hospital.

Using the Agilent SureSelectXT Human All Exon V4+UTR kit, the team isolated protein-coding DNA from an affected family member — a morbidly obese 21-year-old woman with childhood-onset obesity, type 2 diabetes, intellectual disability, and reproductive problems — along with her mother and sister. After sequencing these exomes with the Illumina HiSeq 2500, the researchers scrutinized the sequences for single nucleotide changes, small insertions and deletions, and copy number variants.

The search ultimately led to a truncating frameshift mutation in the first exon of the CPE gene. With the help of Sanger sequencing, the team determined that the affected woman carried two copies of this mutation, while her mother, sister, and two brothers had one copy of the altered CPE gene. Similarly, when researchers used real-time PCR to track expression of the gene in blood samples from family members and female controls, they did not detect CPE transcripts in blood samples from the affected women. A sister with one copy of the mutation had lower-than-usual CPE expression compared to six control individuals.
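How a truncating frameshift of the kind found in CPE destroys a protein can be sketched in a few lines. The sequence and the mini codon table here are invented for illustration and are not the actual CPE gene:

```python
# Minimal codon table covering just the codons used in this example.
CODON = {
    "ATG": "M", "AAA": "K", "AAG": "K", "GTG": "V",
    "AGG": "R", "TTT": "F", "TGA": "*", "TAA": "*", "TAG": "*",
}

def translate(dna):
    """Translate an open reading frame, stopping at the first stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON.get(dna[i:i + 3], "?")
        if aa == "*":          # premature stop -> truncated protein
            break
        protein.append(aa)
    return "".join(protein)

normal = "ATGAAAGTGAGGTTT"        # codons ATG AAA GTG AGG TTT
mutant = normal[:4] + normal[5:]  # one base deleted: every later codon shifts
```

Deleting a single base drags an out-of-frame TGA stop codon into the reading frame, so the mutant protein ends after only two residues; this is the "truncating frameshift" logic, not the gene's real sequence.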

The study's authors argued that the newly detected mutation, together with those in other genes involved in monogenic forms of obesity, should provide opportunities to find the basis of disease in ever more individuals with inherited obesity.

"Diagnosis is very valuable to the patient. It helps to set realistic expectations, and can help them get the best possible treatment," Blakemore noted, explaining that such diagnoses also make it possible to provide genetic counseling and advice to other members of affected families. 


It Feels Instantaneous, but How Long Does it Really Take to Think a Thought?


As inquisitive beings, we are constantly questioning and quantifying the speed of various things. With a fair degree of accuracy, scientists have quantified the speed of light, the speed of sound, the speed at which the earth revolves around the sun, the speed at which hummingbirds beat their wings, the average speed of continental drift….

These values are all well-characterized. But what about the speed of thought? It’s a challenging question that’s not easily answerable – but we can give it a shot. To quantify the speed of anything, one needs to identify its beginning and end. For our purposes, a “thought” will be defined as the mental activities engaged from the moment sensory information is received to the moment an action is initiated. This definition necessarily excludes many experiences and processes one might consider to be “thoughts.”

Here, a “thought” includes processes related to perception (determining what is in the environment and where), decision-making (determining what to do) and action-planning (determining how to do it). The distinction between, and independence of, each of these processes is blurry. Further, each of these processes, and perhaps even their sub-components, could be considered “thoughts” on their own. But we have to set our start- and endpoints somewhere to have any hope of tackling the question.

Finally, trying to identify one value for the “speed of thought” is a little like trying to identify one maximum speed for all forms of transportation, from bicycles to rockets. There are many different kinds of thoughts that can vary greatly in timescale. Consider the differences between simple, speedy reactions like the sprinter deciding to run after the crack of the starting pistol (on the order of 150 milliseconds [ms]), and more complex decisions like deciding when to change lanes while driving on a highway or figuring out the appropriate strategy to solve a math problem (on the order of seconds to minutes).


SpaceX’s Rocket Explodes on the Way to the ISS

Less than three minutes into its flight, SpaceX's Falcon 9 rocket disintegrated along with the cargo it was carrying to the ISS.

In the eternal war between SpaceX’s reusable rockets and SpaceX’s robot boat, the rockets lost again. Elon Musk’s company loaded up a Dragon capsule full of supplies this morning in what would have been its seventh mission to the International Space Station—and its third attempt to salvage the capsule’s rocket, Falcon 9, by landing on an autonomous barge. But the poor thing didn’t even get the chance to try. Less than three minutes into flight, the rocket and its cargo exploded, their disintegrating parts cloaked by a huge cloud of smoke. Astronaut Scott Kelly, watching the catastrophic failure from his perch in the ISS above, said it right: “Space is hard.”

It’s not clear yet what caused the rocket to break up. At the time of “launch vehicle failure,” in NASA-speak, Falcon was still firing all of its nine first-stage engines, with the Dragon capsule and second stage Merlin vacuum engine attached. Right now, the NASA mishap and anomaly teams are trying to piece together video analysis of the flight path with the two minutes or so of data sent from the craft before it exploded. Canadian astronaut Chris Hadfield speculated that the failure might have started at the front of the craft—near the second stage engine and the Dragon capsule.

In a NASA press conference today, SpaceX president and COO Gwynne Shotwell confirmed that a problem occurred in that general location, noting an overpressurization event in the liquid oxygen tank in the second stage of the rocket. But SpaceX doesn’t know yet what caused it. Even the typically speculation-happy Musk can’t say more yet, tweeting only that their “data suggests [a] counterintuitive cause.”

The Dragon capsule was carrying more than 4,000 pounds of supplies for the ISS. This is the third resupply mission to fail in the last eight months; at the end of April, a Russian Progress spacecraft and its Soyuz rocket similarly failed early in their launch, and last October, an Antares rocket from Orbital Sciences blew up right on the launch pad.

While that might seem to indicate a troubling trend, “there’s no commonality across these three events other than that it’s space and it’s difficult to fly,” says NASA’s associate administrator for Human Exploration and Operations William Gerstenmaier.


Scientists name the deepest cave-dwelling centipede after Hades - the Greek God of the underworld


An international team of scientists has discovered the deepest underground-dwelling centipede known. The animal was found by members of the Croatian Biospeleological Society in three caves in the Velebit Mountains, Croatia. Recorded at depths down to 1,100 m, the new species was named Geophilus hadesi, after Hades, the God of the Underworld in Greek mythology. The research was published in the open-access journal ZooKeys.

Lurking in the dark vaults of some of the world's deepest caves, the Hades centipede also had its name picked to pair with another underground-dwelling relative, named after Persephone, the queen of the underworld. Centipedes are carnivores that feed on other invertebrate animals. They are common cave inhabitants, but members of this particular order, called geophilomorphs, usually find shelter there only occasionally. Species with an entire life cycle confined to cave environments are exceptionally rare in the group.

In fact, so far the Hades and Persephone centipedes are the only two geophilomorphs that have adapted to live exclusively in caves, thus rightfully bearing the titles of a queen and king of the underworld.

Like most cave-dwellers, the newly discovered centipede shows unusual traits, some of which are commonly found in cave-dwelling arthropods, including much-elongated antennae, trunk segments and leg claws. Equipped with powerful jaws bearing poison glands and long curved claws that allow it to grasp and tightly hold its prey, the Hades centipede is among the top predators crawling in the darkness of the cave.

The new species is yet another addition to the astonishing cave critters that live in the Velebit, a mountain range that stretches over 145 km in the Croatian Dinaric Karst and is considered, as a whole, a hot spot of subterranean diversity. The deepest record comes from the Lukina jama - Trojama cave system, which is 1,431 meters deep and currently ranked the 15th-deepest cave in the world.


Most internet anonymity software leaks users’ details


Virtual Private Networks (VPNs) are legal and increasingly popular with individuals wanting to circumvent censorship, avoid mass surveillance or access geographically limited services like Netflix and BBC iPlayer. Used by around 20 per cent of European internet users, they encrypt users' internet communications, making it more difficult for people to monitor their activities.

The study of fourteen popular VPN providers found that eleven of them leaked information about the user because of a vulnerability known as ‘IPv6 leakage’. The leaked information ranged from the websites a user is accessing to the actual content of user communications, for example comments being posted on forums. Interactions with websites running HTTPS encryption, which includes financial transactions, were not leaked.

The leakage occurs because network operators are increasingly deploying a new version of the protocol used to run the Internet, called IPv6. IPv6 replaces the previous IPv4, but many VPNs only protect users' IPv4 traffic. The researchers tested their ideas by choosing fourteen of the most popular VPN providers and connecting various devices to a WiFi access point designed to mimic the attacks hackers might use.
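The mechanism is simple to illustrate: if the tunnel only captures IPv4 routes, any IPv6 destination is contacted outside the VPN. The helper below is a hypothetical sketch of that classification, not code from the study:

```python
import ipaddress

def leaked_destinations(destinations, tunnel_families=("ipv4",)):
    """Return the destinations whose traffic would bypass a VPN tunnel that
    only captures the listed address families (the 'IPv6 leakage' scenario)."""
    leaked = []
    for dest in destinations:
        family = "ipv6" if ipaddress.ip_address(dest).version == 6 else "ipv4"
        if family not in tunnel_families:
            leaked.append(dest)
    return leaked

# An IPv4-only tunnel leaks the IPv6 connection (example addresses are
# documentation ranges, not hosts from the study):
flows = ["203.0.113.7", "2001:db8::1"]
```

A tunnel declared to capture both families (`tunnel_families=("ipv4", "ipv6")`) would report no leaks for the same flows.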

Researchers attempted two of the kinds of attacks that might be used to gather user data – ‘passive monitoring’, simply collecting the unencrypted information that passed through the access point; and DNS hijacking, redirecting browsers to a controlled web server by pretending to be commonly visited websites like Google and Facebook.

The study also examined the security of various mobile platforms when using VPNs and found that they were much more secure when using Apple’s iOS, but were still vulnerable to leakage when using Google’s Android.

Dr Gareth Tyson, a lecturer from QMUL and co-author of the study, said: “There are a variety of reasons why someone might want to hide their identity online and it’s worrying that they might be vulnerable despite using a service that is specifically designed to protect them.

“We’re most concerned for those people trying to protect their browsing from oppressive regimes. They could be emboldened by their supposed anonymity while actually revealing all their data and online activity and exposing themselves to possible repercussions.”


Interfering light waves produce unexpected forces


Few physical systems are better understood than the interference of two planar waves—like ripples on a pond. Proving that there are still secrets to be discovered even in such fundamentally well-known systems, RIKEN researchers Konstantin Bliokh, Aleksandr Bekshaev and Franco Nori have used theory to reveal a new, hidden force in this system that acts on particles in an unexpected way ("Transverse Spin and Momentum in Two-Wave Interference").

Two-dimensional waves have been studied for centuries: initially to understand the intrinsic behavior of waves and more recently to understand the fundamental mechanics of quantum physics. “The interference between two plane waves has always provided an important model for understanding the basic features of waves,” notes Bliokh. “It is difficult to find a simpler and more thoroughly studied system in physics. We show that such a basic system still exhibits unexpected and unusual features.”
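The textbook system the authors start from can be reproduced numerically in a few lines. This sketch uses scalar waves only, so it shows the familiar fringe pattern but none of the polarization-dependent transverse effects that are the paper's actual subject; the wavelength and crossing angle are arbitrary choices:

```python
import numpy as np

wavelength = 633e-9                 # arbitrary optical wavelength, m
k = 2 * np.pi / wavelength
theta = np.deg2rad(5)               # half-angle between the two wave vectors
x = np.linspace(0, 20e-6, 2001)     # transverse coordinate, m

# Superpose two unit-amplitude plane waves crossing at +/- theta:
field = np.exp(1j * k * np.sin(theta) * x) + np.exp(-1j * k * np.sin(theta) * x)
intensity = np.abs(field) ** 2      # fringes between 0 and 4x single-wave intensity

fringe_period = wavelength / (2 * np.sin(theta))   # ~3.6 micrometres here
```

Everything the paper adds (transverse spin, polarization-dependent forces) lives in the vector structure of the fields that this scalar picture deliberately leaves out.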

Recent research has shown that interfering planar waves can have unusual properties on a small scale. For over a century, waves such as light beams have been known to carry both momentum and angular momentum in the direction of the propagating wave, and this momentum can be used to move and rotate small particles. This is consistent with the common understanding of photons as particles carrying momentum and spin. On the local scale in non-plane-wave optical fields, however, light can also impart forces and torques perpendicular to the light beam, counterintuitive to our everyday experience. These unusual effects have been noticed in highly confined near-field radiation known as evanescent waves, but so far they had not turned up in freely propagating light waves.

In a comprehensive theoretical study, the scientists, from the RIKEN Center for Emergent Matter Science and the Interdisciplinary Theoretical Science Research Group (iTHES), revisited the concept of two propagating waves interfering in the same plane. Their mathematical analysis of this system revealed that even this well-studied example of interfering waves can exert a force and torque on a small particle perpendicular to both waves (see figure). Both the force and the torque depend strongly on the polarization of the two interfering waves, which differs from the conventional picture of waves carrying the same momentum irrespective of their polarization.

The possibility of realizing such an effect in an actual experimental system and to potentially control it through parameters such as polarization is attractive and, Nori predicts, practically feasible. “Our findings offer a new vision for the fundamental properties of propagating optical fields and pave the way for novel optical manipulations of small particles.”


New nanogenerator harvests power from rolling tires

A group of University of Wisconsin-Madison engineers and a collaborator from China have developed a nanogenerator that harvests energy from a car's rolling tire friction.

An innovative method of reusing energy, the nanogenerator ultimately could provide automobile manufacturers a new way to squeeze greater efficiency out of their vehicles.

The researchers reported their development, which is the first of its kind, in a paper published May 6, 2015, in the journal Nano Energy.

Xudong Wang, the Harvey D. Spangler fellow and an associate professor of materials science and engineering at UW-Madison, and his PhD student Yanchao Mao have been working on this device for about a year.

The nanogenerator relies on the triboelectric effect to harness energy from the changing electric potential between the pavement and a vehicle's wheels. The triboelectric effect is the electric charge that results from the contact or rubbing together of two dissimilar objects. Wang says the nanogenerator provides an excellent way to take advantage of energy that is usually lost due to friction.

"The friction between the tire and the ground consumes about 10 percent of a vehicle's fuel," he says. "That energy is wasted. So if we can convert that energy, it could give us very good improvement in fuel efficiency." The nanogenerator relies on an electrode integrated into a segment of the tire. When this part of the tire surface comes into contact with the ground, the friction between those two surfaces ultimately produces an electrical charge, a type of contact electrification known as the triboelectric effect.
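As a rough sense of scale, the 10 percent figure can be turned into a back-of-envelope estimate. Only the friction share comes from the article; every other number below is an assumption for illustration:

```python
# Back-of-envelope sketch of the energy budget described above. The 10
# percent friction share comes from the article; every other number is
# an assumption for illustration, not a measurement from the paper.
fuel_power_kw = 20.0        # assumed average engine power draw while driving
friction_share = 0.10       # "about 10 percent of a vehicle's fuel"
harvest_efficiency = 0.05   # assumed fraction a triboelectric device recovers

friction_loss_kw = fuel_power_kw * friction_share
recovered_w = friction_loss_kw * harvest_efficiency * 1000.0
print(f"tire-friction loss: {friction_loss_kw:.1f} kW")
print(f"recovered at assumed 5% efficiency: {recovered_w:.0f} W")
```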


Fish diversity exploded when dinosaurs went extinct


The ray-finned fishes, so called because their fins are supported by bony spines or rays, make up more than 95% of all fish species. They come in all shapes and sizes, from the showy lionfish (pictured above) to the delicious Atlantic salmon. Yet paleontologists have been unsure when and why ray-finned fishes exploded into such prominence, in large part because the preservation of fish fossils is a very hit-or-miss affair. Now, researchers have taken a new approach to the problem: They looked at marine sediments taken from deep-sea cores at six sites around the world, including the Atlantic and Pacific oceans. To figure out when ray-finned fish numbers took off, they calculated the ratio of fossilized teeth from ray-finned fishes to the fossilized scales from another major group of fish: sharks.

As they report online this week in the Proceedings of the National Academy of Sciences (PNAS), this ratio shows that sharks well outnumbered the ray-finned fish at the end of the Cretaceous, about 66 million years ago. That was when dinosaurs, ammonites, and most marine reptiles went extinct, probably because of a large asteroid hitting Earth. After the extinction event, the ratio of these ray-finned fish remains shot up dramatically, quickly outnumbering those of sharks. Although the sharks also survived the end of the Cretaceous, their numbers appear to have remained flat, whereas the size and diversity of ray-finned fish populations took off. The researchers suggest that the mass extinction, especially of ammonites (which probably competed with fish for food), allowed the ray-fins to exploit new ecological niches and launched what the authors call a “new age of fish.”
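The metric at the heart of the method is simple to state. Here is a minimal sketch with invented counts (the real study counted microscopic fossils in deep-sea cores spanning the end-Cretaceous boundary):

```python
# The metric described above, in miniature: the ratio of ray-finned fish
# teeth to shark scale fossils in a sediment sample. The counts are
# invented for illustration, not taken from the PNAS paper.
def tooth_to_scale_ratio(fish_teeth: int, shark_scales: int) -> float:
    return fish_teeth / shark_scales

before = tooth_to_scale_ratio(fish_teeth=20, shark_scales=80)    # pre-extinction
after = tooth_to_scale_ratio(fish_teeth=160, shark_scales=80)    # post-extinction
print(f"ratio before: {before:.2f}, after: {after:.2f}")  # sharks dominant -> ray-fins dominant
```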


With deep learning and dimensionality reduction, we can visualize the entirety of Wikipedia?


Deep neural networks are an approach to machine learning that has revolutionized computer vision and speech recognition in the last few years, blowing the previous state of the art results out of the water. They’ve also brought promising results to many other areas, including language understanding and machine translation. Despite this, it remains challenging to understand what, exactly, these networks are doing.

Understanding neural networks is just scratching the surface, however, because understanding the network is fundamentally tied to understanding the data it operates on. The combination of neural networks and dimensionality reduction turns out to be a very interesting tool for visualizing high-dimensional data – a much more powerful tool than dimensionality reduction on its own.

Paragraph vectors, introduced by Le & Mikolov (2014), are vectors that represent chunks of text. Paragraph vectors come in a few variations but the simplest one, which we are using here, is basically some really nice features on top of a bag of words representation.

With word embeddings, we learn vectors in order to solve a language task involving the word. With paragraph vectors, we learn vectors in order to predict which words are in a paragraph.

Concretely, the neural network learns a low-dimensional approximation of word statistics for different paragraphs. In the hidden representation of this neural network, we get vectors representing each paragraph. These vectors have nice properties, in particular that similar paragraphs are close together.

Now, Google has some pretty awesome people. Andrew Dai, Quoc Le, and Greg Corrado decided to create paragraph vectors for some very interesting data sets. One of those was Wikipedia, creating a vector for every English Wikipedia article. The result is that we get a visualization of the entirety of Wikipedia. A map of Wikipedia. A large fraction of Wikipedia’s articles fall into a few broad topics: sports, music (songs and albums), films, species, and science.
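A minimal stand-in for this pipeline, using bag-of-words counts and PCA in place of learned paragraph vectors and t-SNE (the actual Wikipedia map used both), shows the core idea that similar documents land near each other after dimensionality reduction:

```python
import numpy as np

# Minimal stand-in for the pipeline described above: each "paragraph" is
# a bag-of-words count vector, reduced to 2-D with PCA. Real paragraph
# vectors (Le & Mikolov 2014) are learned by a neural network, and the
# Wikipedia map used t-SNE; this only illustrates the "embed, then
# reduce dimensionality" idea. The documents are invented examples.
docs = [
    "the cat sat on the mat",
    "the cat sat on the rug",
    "stocks fell as markets closed",
    "markets rallied and stocks rose",
]
vocab = sorted({w for d in docs for w in d.split()})
counts = np.array([[d.split().count(w) for w in vocab] for d in docs], float)

# PCA via SVD of the mean-centered count matrix
centered = counts - counts.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T   # one 2-D point per document

# Similar documents land closer together than dissimilar ones
d_same = np.linalg.norm(coords[0] - coords[1])    # two cat sentences
d_cross = np.linalg.norm(coords[0] - coords[2])   # cat vs. finance
print("cat-cat distance:", round(float(d_same), 3))
print("cat-finance distance:", round(float(d_cross), 3))
```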

Tom Vandermolen's curator insight, July 1, 1:12 AM

Another great machine learning/semantics tool.  We're getting closer, and it feels like all of these different techniques are homing in on *something* from different directions.


External magnetic field controlled, nanoscale bacteria-like robots could replace stents and angioplasty balloons


Swarms of microscopic, magnetic, robotic beads could be used within five years by vascular surgeons to clear blocked arteries. These minimally invasive microrobots, which look and move like corkscrew-shaped bacteria, are being developed by an $18-million, 11-institution research initiative headed by the Korea Evaluation Institute of Industrial Technologies (KEIT).

These “microswimmers” are driven and controlled by external magnetic fields, similar to how nanowires from Purdue University and ETH Zurich/Technion (recently covered on KurzweilAI) work, but based on a different design. Instead of wires, they’re made from chains of three or more iron oxide beads, rigidly linked together via chemical bonds and magnetic force. The beads are put in motion by an external magnetic field that causes each of them to rotate. Because they are linked together, their individual rotations cause the chain to twist like a corkscrew, and this movement propels the microswimmer. The chains are small enough — the nanoparticles are 50–100 nanometers in diameter — to navigate the bloodstream like a tiny boat, Fantastic Voyage movie style (but without the microscopic humans). Delivered via a catheter, they would travel directly to the blocked artery, where they would act as a drill to clear it completely.

Drilling through plaque:

The inspiration for using the robotic swimmers as tiny drills came from the Borrelia burgdorferi bacterium (shown above), which causes Lyme disease and wreaks havoc inside the body by burrowing through healthy tissue. Its spiral shape enables both its movement and the resultant cellular destruction. By controlling the magnetic field, a surgeon could direct the speed and direction of the microswimmers. The magnetism also allows separate strands of microswimmers to be joined into longer strings, which can then be propelled with greater force.

Once flow is restored in the artery, the microswimmer chains could disperse and be used to deliver anti-coagulant medication directly to the affected area to prevent future blockage. This procedure could supplant the two most common methods for treating blocked arteries: stenting and angioplasty. Stenting creates a bypass for blood to flow around the blockage by inserting a series of tubes into the artery, while angioplasty balloons out the blockage by expanding the artery with help from an inflatable probe.

“Current treatments for chronic total occlusion are only about 60 percent successful,” said MinJun Kim, PhD, a professor in the College of Engineering and director of the Biological Actuation, Sensing & Transport Laboratory (BASTLab) at Drexel University. “We believe that the method we are developing could be as high as 80–90 percent successful and possibly shorten recovery time. The microswimmers are composed of inorganic biodegradable beads so they will not trigger an immune response in the body. We can adjust their size and surface properties to accurately deal with any type of arterial occlusion.” Kim’s research was recently reported in the Journal of Nanoparticle Research.

Mechanical engineers at Drexel University are using these microswimmers as part of a surgical toolkit being assembled by the Daegu Gyeongbuk Institute of Science and Technology (DGIST). Researchers from other institutions on the project include ETH Zurich, Seoul National University, Hanyang University, the Korea Institute of Science and Technology, and Samsung Medical Center.

DGIST anticipates testing the technology in lab and clinical settings within the next four years.


Broad Institute, Google Genomics combine bioinformatics and computing expertise


Broad Institute of MIT and Harvard is teaming up with Google Genomics to explore how to break down major technical barriers that increasingly hinder biomedical research by addressing the need for computing infrastructure to store and process enormous datasets, and by creating tools to analyze such data and unravel long-standing mysteries about human health.

As a first step, Broad Institute’s Genome Analysis Toolkit, or GATK, will be offered as a service on the Google Cloud Platform, as part of Google Genomics. The goal is to enable any genomic researcher to upload, store, and analyze data in a cloud-based environment that combines the Broad Institute’s best-in-class genomic analysis tools with the scale and computing power of Google.

GATK is a software package developed at the Broad Institute to analyze high-throughput genomic sequencing data. GATK offers a wide variety of analysis tools, with a primary focus on genetic variant discovery and genotyping as well as a strong emphasis on data quality assurance. Its robust architecture, powerful processing engine, and high-performance computing features make it capable of taking on projects of any size.

GATK is already available for download at no cost to academic and non-profit users. In addition, business users can license GATK from the Broad. To date, more than 20,000 users have processed genomic data using GATK.

The Google Genomics service will provide researchers with a powerful, additional way to use GATK. Researchers will be able to upload genetic data and run GATK-powered analyses on Google Cloud Platform, and may use GATK to analyze genetic data already available for research via Google Genomics. GATK as a service will make best-practice genomic analysis readily available to researchers who don’t have access to the dedicated compute infrastructure and engineering teams required for analyzing genomic data at scale. An initial alpha release of the GATK service will be made available to a limited set of users.

“Large-scale genomic information is accelerating scientific progress in cancer, diabetes, psychiatric disorders, and many other diseases,” said Eric Lander, President and Director of Broad Institute. “Storing, analyzing, and managing these data is becoming a critical challenge for biomedical researchers. We are excited to work with Google’s talented and experienced engineers to develop ways to empower researchers around the world by making it easier to access and use genomic information.”


Scientists believe they are close to a blood test for pancreatic cancer (100% accurate in early tests)


Scientists believe they are close to a blood test for pancreatic cancer - one of the hardest tumours to detect and treat. The test, which they describe as "a major advance", hunts for tiny spheres of fat that are shed by the cancers. Early results published in the journal Nature showed the test was 100% accurate.

Experts said the findings were striking and ingenious, but required refinement before they could become a cancer test. The number of people who survive 10 years after being diagnosed with pancreatic cancer is less than 1% in England and Wales compared with 78% for breast cancer. The tumor results in very few symptoms in its early stages and by the time people become unwell, the cancer has often spread around the body and become virtually untreatable.

A cell surface proteoglycan, glypican-1 (GPC1), on circulating exosomes may serve as a potential noninvasive diagnostic and screening tool to detect early stages of pancreatic cancer, according to research published online June 24 in Nature.

Raghu Kalluri, M.D., Ph.D., chair of cancer biology at the MD Anderson Cancer Center in Houston, and colleagues analyzed blood samples from about 250 pancreatic cancer patients and 32 breast cancer patients. For comparison, they used blood samples from healthy donors and small groups of people with other conditions, such as pancreatitis.

The researchers found that exosomes from cancer cells, but not other cell types, harbored high levels of the GPC1 protein. "Any time we identified GPC1-enriched exosomes, we could tell it was a cancer cell," Kalluri told HealthDay. And while many breast tumors released high amounts of GPC1, all pancreatic tumors did -- including early-stage cancers.

"GPC1+ circulating exosomes may serve as a potential noninvasive diagnostic and screening tool to detect early stages of pancreatic cancer to facilitate possible curative surgical therapy," the authors write. "These results encouraged us to perform further analyses to potentially inform on the utility of GPC1+ circulating exosomes as a detection and monitoring tool for pancreatic ductal adenocarcinoma."
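In screening terms, "100% accurate" means perfect sensitivity and specificity. The counts below are illustrative placeholders, not the paper's exact tallies:

```python
# What "100% accurate" means in screening terms: perfect sensitivity
# (no cancers missed) and perfect specificity (no healthy donors
# wrongly flagged). The counts are illustrative placeholders, not the
# Nature paper's exact tallies.
tp, fn = 250, 0   # pancreatic cancer samples flagged GPC1+ / missed (assumed)
tn, fp = 120, 0   # healthy-donor samples cleared / wrongly flagged (assumed)

sensitivity = tp / (tp + fn)   # fraction of cancers detected
specificity = tn / (tn + fp)   # fraction of healthy correctly cleared
print(f"sensitivity: {sensitivity:.0%}, specificity: {specificity:.0%}")
```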

Rescooped by Dr. Stefan Gruenwald from Breast Cancer News!

Paths To Longevity: The New Cancer Survivors

Extraordinary advances have turned cancer from an apparent death sentence into a manageable chronic illness for many. But what does it mean to live with a terminal disease...interminably?

Several broad forces have contributed to the transformation of cancer over the past two decades. The first is early detection. The preponderance of screening tests along with new, more refined imaging technologies have led to the discovery of tumors earlier than ever, often before they’ve spread beyond the original site. And even in the case of metastasized tumors, catching them early can improve a person’s ability to weather treatment and fight the disease.

There have also been remarkable medical advances, including targeted therapies, which are drugs designed to act against particular molecules involved in cancer-cell growth in specific types of cancer; personalized medicine, which allows doctors to identify and respond to genetic and biological abnormalities in an individual patient’s cancer; and targeted immunotherapy, a new type of treatment that harnesses the body’s own immune system to destroy cancer cells.

Last is the growing field of psycho-oncology, which has led to an expanded understanding of cancer patients’ emotional and social needs and has been shown to add not just to the quality of their years but to the quantity as well. Being better informed and supported can motivate people to work on their overall physical wellness and to participate in experimental treatments and clinical trials, which can be life-extending.

All these developments are factors in the increasing number of people whose cancer can be considered cured, a nebulous term that generally describes those who are cancer-free five years after their diagnosis. But at the same time, they’re enabling more and more people like Brad Slocum to live longer with active or persistent cancer, including tumors that are controlled without being eliminated or tumors that go through continuous cycles of remission and recurrence.

“It’s very different from being cured,” says Michael Fisch, chair of general oncology at the MD Anderson Cancer Center in Houston. “Being cured becomes a story like, ‘Back in 2002, I had a small breast tumor, and they took care of it,’ or ‘I had a small melanoma removed five years ago, and I live a normal life now.’ It’s a line item on a medical history that maybe isn’t too important. But taking Sutent, or periodically having surgeries, or having a lot of CT scans, or having a fear of recurrence or progression, or being on maintenance chemotherapy—that’s a different experience.”

Via Susan Zager
Susan Zager's curator insight, June 30, 11:04 AM

Great Article- thought provoking. 

Rescooped by Dr. Stefan Gruenwald from Eldritch Weird!

Quanta: A New Physics Theory of Life

An MIT physicist has proposed the provocative idea that life exists because the law of increasing entropy drives matter to acquire lifelike physical properties.

Why does life exist? Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.”

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said. England’s theory is meant to underlie, rather than replace, Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”

His idea, detailed in a recent paper and further elaborated in a talk he is delivering at universities around the world, has sparked controversy among his colleagues, who see it as either tenuous or a potential breakthrough, or both.

England has taken “a very brave and very important step,” said Alexander Grosberg, a professor of physics at New York University who has followed England’s work since its early stages. The “big hope” is that he has identified the underlying physical principle driving the origin and evolution of life, Grosberg said.

England’s theoretical results are generally considered valid. It is his interpretation — that his formula represents the driving force behind a class of phenomena in nature that includes life — that remains unproven. But already, there are ideas about how to test that interpretation in the lab. “He’s trying something radically different,” said Mara Prentiss, a professor of physics at Harvard who is contemplating such an experiment after learning about England’s work. “As an organizing lens, I think he has a fabulous idea. Right or wrong, it’s going to be very much worth the investigation.”

At the heart of England’s idea is the second law of thermodynamics, also known as the law of increasing entropy or the “arrow of time.” Hot things cool down, gas diffuses through air, eggs scramble but never spontaneously unscramble; in short, energy tends to disperse or spread out as time progresses. Entropy is a measure of this tendency, quantifying how dispersed the energy is among the particles in a system, and how diffuse those particles are throughout space. It increases as a simple matter of probability: There are more ways for energy to be spread out than for it to be concentrated. Thus, as particles in a system move around and interact, they will, through sheer chance, tend to adopt configurations in which the energy is spread out. Eventually, the system arrives at a state of maximum entropy called “thermodynamic equilibrium,” in which energy is uniformly distributed. A cup of coffee and the room it sits in become the same temperature, for example. As long as the cup and the room are left alone, this process is irreversible. The coffee never spontaneously heats up again because the odds are overwhelmingly stacked against so much of the room’s energy randomly concentrating in its atoms.

Although entropy must increase over time in an isolated or “closed” system, an “open” system can keep its entropy low — that is, divide energy unevenly among its atoms — by greatly increasing the entropy of its surroundings. In his influential 1944 monograph “What Is Life?” the eminent quantum physicist Erwin Schrödinger argued that this is what living things must do. A plant, for example, absorbs extremely energetic sunlight, uses it to build sugars, and ejects infrared light, a much less concentrated form of energy. The overall entropy of the universe increases during photosynthesis as the sunlight dissipates, even as the plant prevents itself from decaying by maintaining an orderly internal structure.


New method of quantum entanglement vastly increases how much information can be carried in a photon

A team of researchers led by UCLA electrical engineers has demonstrated a new way to harness light particles, or photons, that are connected to each other and act in unison no matter how far apart they are — a phenomenon known as quantum entanglement.

In previous studies, photons have typically been entangled by one dimension of their quantum properties—usually the direction of their polarization.

In the new study, researchers demonstrated that they could slice up and entangle each photon pair into multiple dimensions using quantum properties such as the photons' energy and spin. This method, called hyperentanglement, allows each photon pair to carry much more data than was possible with previous methods.

Quantum entanglement could allow users to send data through a network and know immediately whether that data had made it to its destination without being intercepted or altered. With hyperentanglement, users could send much denser packets of information using the same networks.

The research, published today in Nature Photonics, was led by Zhenda Xie, a research scientist in the lab of Chee Wei Wong, a UCLA associate professor of electrical engineering who was the research project's principal investigator. Researchers from MIT, Columbia University, the University of Maryland and the National Institute of Standards and Technology were also part of the team.

Albert Einstein famously described quantum entanglement as "spooky action at a distance" because it seems so improbable that what happens to one particle in an entangled pair also happens instantly to the other particle, even over great distances. The correlations appear to act faster than light, although entanglement cannot be used to send information faster than light.

In the new study, researchers sent hyperentangled photons in a shape known as a biphoton frequency comb, essentially breaking up entangled photons into smaller parts. In secure data transfer, photons sent over fiber optic networks can be encrypted through entanglement. With each dimension of entanglement, the amount of information carried on a photon pair is doubled, so a photon pair entangled by five dimensions can carry 32 times as much data as a pair entangled by only one. The approach greatly extends wavelength-division multiplexing, the method for carrying many videos over a single optical fiber.
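The capacity scaling stated above reduces to simple arithmetic:

```python
# The capacity scaling as stated in the article: each dimension of
# entanglement doubles the information a photon pair carries, so n
# dimensions give a factor of 2**n (the article's count: 32x for five
# dimensions versus one).
def capacity_factor(n_dimensions: int) -> int:
    return 2 ** n_dimensions

for n in (1, 2, 5):
    print(f"{n} dimension(s): {capacity_factor(n)}x")
```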

"We show that an optical frequency comb can be generated at single photon level," Xie said. "Essentially, we're leveraging wavelength division multiplexing concepts at the quantum level."


Researchers develop 100-fold cheaper and faster way to make graphene


Scientists at the University of Exeter say they've developed a way to make graphene better, cheaper, faster -- and at mass scale. Lead researcher Monica Craciun says the technology, known as the nanoCVD system, promises to usher in "a graphene-driven industrial revolution."

Graphene is a single layer of carbon atoms organized in a honeycomb-like structure. The material is super strong, flexible and conductive.

"The vision for a 'graphene-driven industrial revolution' is motivating intensive research on the synthesis of high quality and low cost graphene," Craciun said in a press release. "Currently, industrial graphene is produced using a technique called chemical vapor deposition (CVD). Although there have been significant advances in recent years in this technique, it is still an expensive and time consuming process."

Craciun and her colleagues, in cooperation with U.K.-based graphene company Moorfield, have tweaked CVD technology to develop a "cold wall" device. CVD technology mixes volatile vapors to create a desired deposited material (like a film of graphene) on a substrate.

The research team's new nanoCVD system reportedly grows graphene at a rate 100 times faster than traditional methods, and at one percent of the cost.

"We are very excited about the potential of this breakthrough using Moorfield's technology and look forward to seeing where it can take the graphene industry in the future," said Jon Edgeworth, the company's technical director.

Anti-aging: FDA-approved cancer drug trametinib extends life span of fruit flies


Results shore up the importance of cancer-associated Ras proteins in aging.

A cancer drug that boosts the lifespan of fruit flies is the latest addition to a small roster of compounds shown to lengthen life — although none has yet been proven in humans. Trametinib (Mekinist), which was developed by the London-based pharmaceutical firm GlaxoSmithKline, is already used to treat advanced melanoma. It extends the lifespan of adult fruit flies by about 12%, although the later in life the drug is started, the less effect it has, says Linda Partridge, a geneticist at University College London and the Max Planck Institute for Biology of Ageing in Cologne, Germany, who led the work. Her team’s research was reported on 25 June in Cell. But Partridge cautions against rushing to take trametinib in search of a longer life. “That would be mad,” she says. “We just don’t know enough about the long-term consequences.”

Trametinib’s effects are connected to a biochemical pathway controlled by a family of proteins collectively called Ras which seem to be important to both cancer and aging. They are activated when cells need to grow and proliferate, for example to replace damaged tissue. Mutations in the proteins are associated with cancer — which has led to a decades-long pursuit of drugs that target Ras.

At the same time, Ras proteins are involved in other pathways that have been firmly linked to ageing. In yeast, deleting a gene for Ras extends lifespan, notes Valter Longo, director of the University of Southern California’s Longevity Institute in Los Angeles.

And Partridge’s team showed that trametinib’s benefits in fruit flies depended on suppressing a pathway regulated by Ras. Flies genetically modified to have this pathway permanently switched on did not live longer on trametinib.

Partridge hopes to extend her Ras studies to mammalian cells grown in culture and to mice. “We don’t know in mammals at the moment what the situation is,” she says. Although many of Ras’s functions are similar in flies and mammals, Partridge notes that cellular pathways in mammals are often more complex than the analogous pathways in flies, with multiple alternative routes available to compensate if one branch of the pathway is shut down.


Key element of human language discovered in bird babble for the first time

Stringing together meaningless sounds to create meaningful signals was previously thought to be the preserve of humans alone, but a new study has revealed that babbler birds are also able to communicate in this way.

Researchers at the Universities of Exeter and Zurich discovered that the chestnut-crowned babbler -- a highly social bird found in the Australian Outback -- has the ability to convey new meaning by rearranging the meaningless sounds in its calls. This babbler bird communication is reminiscent of the way humans form meaningful words. The research findings, which are published in the journal PLOS Biology, reveal a potential early step in the emergence of the elaborate language systems we use today.

"In contrast to most songbirds, chestnut-crowned babblers do not sing. Instead, their extensive vocal repertoire is characterised by discrete calls made up of smaller, acoustically distinct individual sounds," she added.

"We think that babbler birds may choose to rearrange sounds to code new meaning because doing so through combining two existing sounds is quicker than evolving a new sound altogether," said co-author Professor Andy Russell from the University of Exeter, who has been studying the babblers since 2004.

The researchers noticed that chestnut-crowned babblers reused two sounds "A" and "B" in different arrangements when performing specific behaviors. When flying, the birds produced a flight call "AB," but when feeding chicks in the nest they emitted "BAB" prompt calls.

When the researchers played the sounds back, the listening birds showed they were capable of discriminating between the different call types by looking at the nests when they heard a feeding prompt call and by looking out for incoming birds when they heard a flight call. This was also the case when the researchers switched elements between the two calls: making flight calls from prompt elements and prompt calls from flight elements, indicating that the two calls were indeed generated from rearrangements of the same sounds.
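The combinatorial point — the same elements in a different arrangement carry a different meaning — can be sketched as a lookup. The mapping and helper below are hypothetical, for illustration only:

```python
# Sketch of the rearrangement finding: the same two meaningless elements
# yield different meanings depending on order, like the babblers'
# flight ("AB") and feeding-prompt ("BAB") calls. The mapping and the
# interpret() helper are hypothetical, for illustration only.
ELEMENTS = {"A", "B"}
CALL_MEANINGS = {
    ("A", "B"): "flight",
    ("B", "A", "B"): "feeding prompt",
}

def interpret(call):
    assert set(call) <= ELEMENTS   # calls are built only from the shared sounds
    return CALL_MEANINGS.get(tuple(call), "unknown")

print(interpret(["A", "B"]))        # flight
print(interpret(["B", "A", "B"]))   # feeding prompt
```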


Scientists have made exotic new materials by creating laser-induced micro-explosions in silicon

Making new materials with micro-explosions

Scientists have made exotic new materials by creating laser-induced micro-explosions in silicon, the common computer chip material (Nature Communications, "Experimental evidence of new tetragonal polymorphs of silicon formed through ultrafast laser-induced confined microexplosion").

The new technique could lead to the simple creation and manufacture of superconductors or high-efficiency solar cells and light sensors, said leader of the research, Professor Andrei Rode, from The Australian National University (ANU).

"We've created two entirely new crystal arrangements, or phases, in silicon and seen indications of potentially four more," said Professor Rode, a laser physicist at the ANU Research School of Physics and Engineering (RSPE). "Theory predicts these materials could have very interesting electronic properties, such as an altered band gap, and possibly superconductivity if properly doped."

By focusing lasers onto silicon buried under a clear layer of silicon dioxide, the group have perfected a way to reliably blast tiny cavities in the solid silicon. This creates extremely high pressure around the explosion site and forms the new phases. The phases have complex structures, which took the team of physicists from ANU and University College London a year to understand.

Using a combination of electron diffraction patterns and structure predictions, the team discovered the new materials have crystal structures that repeat every 12, 16 or 32 atoms respectively, said Professor Jim Williams, from the Electronic Material Engineering group at RSPE. "The micro-explosions change silicon's simplicity to much more complex structures, which opens up the possibility for unusual and unexpected properties," he said.

These complex phases are often unstable, but the small size of the structures means the materials cool very quickly and solidify before they can decay, said Professor Eugene Gamaly, also from the ANU Research School of Physics and Engineering. The new crystal structures have survived for more than a year now. "These new discoveries are not an accident; they are guided by a deep understanding of how lasers interact with matter," he said.


Measles vaccine protects against other deadly diseases


Measles kills about 140,000 people worldwide every year, but the millions of kids who have survived the disease aren’t in the clear. A new epidemiological study suggests that they remain susceptible to other infections for more than 2 years, much longer than researchers anticipated. The results bolster a hypothesis that the measles virus undermines the immune system’s memory—and indicate that the measles vaccine protects against other deadly diseases as well.

Researchers have long known that measles inhibits the immune system, but they generally thought this effect wore off after a few months at the most. However, studies of children in developing countries, where most cases occur, found that measles vaccination reduces the overall death rate from infections for up to 5 years, suggesting that preventing the disease somehow provides protection against other illnesses.

One possible explanation for this benefit is that the measles vaccine somehow spurs the immune system to produce defenses against these other diseases. But work on monkeys recovering from measles spawned an alternative hypothesis. In 2012, Rik de Swart of Erasmus MC in Rotterdam, Netherlands, and colleagues revealed that the measles virus kills large numbers of memory cells, white blood cells that prevent subsequent infections by the same pathogen. Thus, the measles virus might cause what the scientists termed immunological amnesia, impairing the immune system’s ability to remember and quickly eliminate other microbes it has already beaten. As a result, “you are vulnerable to diseases you shouldn’t be vulnerable to,” says Michael Mina, lead author of the new paper and a medical student at Emory University School of Medicine in Atlanta.

To test this explanation, a team that included De Swart and Mina, then a postdoc at Princeton University, obtained data on the numbers of measles cases and deaths from other infectious diseases in the United States, Denmark, and part of the United Kingdom. Measles vaccination started in the 1960s in the United Kingdom and United States and in the 1980s in Denmark, and the researchers had statistics from before and after its introduction.

The team’s mathematical analysis tried to determine whether there was a relationship between the number of measles cases and the number of kids who died from other diseases. If the virus inhibits immunity for only a short time, for example, the number of deaths from other infections in a specific year might correlate to the number of measles cases in that year. But if the virus triggers a prolonged immune amnesia, the number of deaths in a particular year might correlate to the total number of cases in that year and the previous year or two.
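The lagged-correlation logic can be sketched on synthetic data. Everything below is illustrative: the case counts, the noise model, and the built-in three-year window are assumptions made for this demo, not the study's actual data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 200  # synthetic record; the study used decades of real registry data

# Illustrative yearly measles case counts.
cases = rng.poisson(1000, size=n_years).astype(float)

def rolling_sum(x, w):
    """Total cases over the w-year window ending at each year (NaN-padded)."""
    out = np.full(x.size, np.nan)
    out[w - 1:] = np.convolve(x, np.ones(w), mode="valid")
    return out

# Simulate deaths from *other* infections that track the measles burden of
# the current year plus the previous two, i.e. a three-year "amnesia" window.
deaths = 50 + 0.02 * rolling_sum(cases, 3) + rng.normal(0, 1, n_years)

def lagged_correlation(cases, deaths, w):
    """Correlate yearly deaths with the w-year rolling measles burden."""
    burden = rolling_sum(cases, w)
    mask = ~np.isnan(burden) & ~np.isnan(deaths)
    return np.corrcoef(burden[mask], deaths[mask])[0, 1]

# The window that best explains the deaths recovers the immune-amnesia span.
correlations = {w: lagged_correlation(cases, deaths, w) for w in (1, 2, 3)}
print(correlations)
```

On these synthetic data the three-year window correlates best, which is the kind of signature the authors looked for in the real mortality records.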

Using this approach, the researchers calculated that children who survive measles remain vulnerable to other diseases for an average of 2.5 years. The value was almost the same for all three countries, the team reports online today in Science. “Our results suggest that the adverse effects of measles are much more lasting,” Mina says.


A New Facebook Lab Plans to Deliver Internet Access by Drone


Watch out, Google. Facebook is gunning for the title of World’s Coolest Place to Work. And its arsenal includes unmanned drones, lasers, satellites and virtual reality headsets. Mark Zuckerberg, co-founder and chief executive of Facebook, announced on Thursday that the company was creating a new lab of up to 50 aeronautics experts and space scientists to figure out how to beam Internet access down from solar-powered drones and other “connectivity aircraft.”

To start the effort, Facebook is buying Ascenta, a small British company whose founders helped to create early versions of an unmanned solar-powered drone, the Zephyr, which flew for two weeks in July 2010 and broke a world record for time aloft.

“We want to think about new ways of connectivity that dramatically reduce the cost,” said Yael Maguire, engineering director for the new Facebook Connectivity Lab. “We want to explore whether there are ways from the sky to deliver the Internet access.”

It’s the second head-spinning announcement from Facebook this week and the third this year. On Tuesday, the company said it would spend at least $2 billion to buy Oculus VR, a Southern California start-up that is developing virtual reality headsets for playing games and other uses. Last month, it said it would buy WhatsApp, a messaging app that offers free texting around the world, for as much as $19 billion.

The lab is part of Mr. Zuckerberg’s ambitious project to bring the Internet to the two-thirds of the world’s population without Internet access. With partners like Qualcomm and Nokia, Facebook is working on technology to compress Internet data, cut the cost of mobile phones and extend connections to people who can’t afford them or live in places that are too difficult to reach.

That last part of the problem — wait, no: That last part of the problem, connecting the roughly 10 percent of the world's population who live in areas too remote for traditional Internet infrastructure, is the initial focus of the Connectivity Lab, said Mr. Maguire.

Currently, satellites can deliver Internet to sparsely populated areas with spotty Internet connections, but the cost is very high, said Mr. Maguire.

Facebook wants to explore whether access could be delivered more cheaply through both new types of satellites and unmanned aircraft.

The company envisions drones that could stay aloft for months, even years, at a time at an altitude of more than 12 miles from the surface of the earth — far above other planes and the ever-changing weather.


D-Wave Systems Breaks the 1000 Qubit Quantum Computing Barrier


New Milestone Will Enable System to Address Larger and More Complex Problems

D-Wave Systems Inc., the world's first quantum computing company, today announced that it has broken the 1000 qubit barrier, developing a processor about double the size of D-Wave’s previous generation and far exceeding the number of qubits ever developed by D-Wave or any other quantum effort.

This is a major technological and scientific achievement that will allow significantly more complex computational problems to be solved than was possible on any previous quantum computer.

D-Wave’s quantum computer runs a quantum annealing algorithm to find the lowest points, corresponding to optimal or near-optimal solutions, in a virtual “energy landscape.” Every additional qubit doubles the search space of the processor. At 1000 qubits, the new processor considers 2^1000 possibilities simultaneously, a search space that dwarfs the 2^512 possibilities available to the 512-qubit D-Wave Two. In fact, the new search space contains far more possibilities than there are particles in the observable universe.
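The “energy landscape” picture can be illustrated with simulated annealing, the classical cousin of the quantum annealing D-Wave’s hardware performs. This toy Python sketch is not D-Wave’s algorithm or API; the Ising couplings, fields, and cooling schedule are invented for the demo.

```python
import itertools
import math
import random

random.seed(1)
n = 8  # toy size so the answer can be brute-force checked; real annealers use ~1000 variables

# A random Ising "energy landscape": pairwise couplings J and local fields h.
J = {(i, j): random.uniform(-1, 1) for i in range(n) for j in range(i + 1, n)}
h = [random.uniform(-1, 1) for _ in range(n)]

def energy(s):
    """Ising energy of a spin configuration s, each spin being +1 or -1."""
    return (sum(h[i] * s[i] for i in range(n))
            + sum(J[i, j] * s[i] * s[j] for (i, j) in J))

def anneal(steps=20000, t_start=2.0, t_end=0.01):
    """Classical simulated annealing: random spin flips under a cooling temperature."""
    s = [random.choice((-1, 1)) for _ in range(n)]
    e = energy(s)
    best, best_e = s[:], e
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / steps)  # geometric cooling
        i = random.randrange(n)
        s[i] = -s[i]                                    # propose one spin flip
        e_new = energy(s)
        # Always accept downhill moves; accept uphill with Boltzmann probability.
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new
            if e < best_e:
                best, best_e = s[:], e
        else:
            s[i] = -s[i]                                # reject: undo the flip
    return best, best_e

best, best_e = anneal()
print("lowest energy found:", round(best_e, 3))
```

At this toy size the result can be verified by enumerating all 2^8 configurations; the appeal of a 1000-qubit annealer is precisely that 2^1000 configurations can never be enumerated that way.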

“For the high-performance computing industry, the promise of quantum computing is very exciting. It offers the potential to solve important problems that either can’t be solved today or would take an unreasonable amount of time to solve,” said Earl Joseph, IDC program vice president for HPC. “D-Wave is at the forefront of this space today with customers like NASA and Google, and this latest advancement will contribute significantly to the evolution of the Quantum Computing industry.”

As the only manufacturer of scalable quantum processors, D-Wave breaks new ground with every succeeding generation it develops. The new processors, comprising over 128,000 Josephson tunnel junctions, are believed to be the most complex superconductor integrated circuits ever successfully yielded. They are fabricated in part at D-Wave’s facilities in Palo Alto, CA and at Cypress Semiconductor’s wafer foundry located in Bloomington, Minnesota.

“Temperature, noise, and precision all play a profound role in how well quantum processors solve problems.  Beyond scaling up the technology by doubling the number of qubits, we also achieved key technology advances prioritized around their impact on performance,” said Jeremy Hilton, D-Wave vice president, processor development. “We expect to release benchmarking data that demonstrate new levels of performance later this year.”

The 1000-qubit milestone is the result of intensive research and development by D-Wave and reflects a triumph over a variety of design challenges aimed at enhancing performance and boosting solution quality. Beyond the much larger number of qubits, other significant innovations include:

  • Lower Operating Temperature: While the previous generation processor ran at a temperature close to absolute zero, the new processor runs 40% colder. The lower operating temperature enhances the importance of quantum effects, which increases the ability to discriminate the best result from a collection of good candidates.
  • Reduced Noise: Through a combination of improved design, architectural enhancements and materials changes, noise levels have been reduced by 50% in comparison to the previous generation. The lower noise environment enhances problem-solving performance while boosting reliability and stability.
  • Increased Control Circuitry Precision: In the testing to date, the increased precision coupled with the noise reduction has demonstrated improved precision by up to 40%. To accomplish both while also improving manufacturing yield is a significant achievement.
  • Advanced Fabrication: The new processors comprise over 128,000 Josephson junctions (tunnel junctions with superconducting electrodes) in a 6-metal-layer planar process with 0.25 μm features, believed to be the most complex superconductor integrated circuits ever built.
  • New Modes of Use: The new technology expands the boundaries of ways to exploit quantum resources. In addition to performing discrete optimization like its predecessor, firmware and software upgrades will make it easier to use the system for sampling applications.