Amazing Science
Scooped by Dr. Stefan Gruenwald onto Amazing Science

Neutrino shape-shift points to new physics

A detector in Japan has for the first time seen neutrinos morph between two of their three flavours, offering new ways to probe the imbalance between matter and antimatter.


The T2K experiment in Japan generates a beam of muon neutrinos at the J-PARC accelerator in Tokai, near Japan's east coast. It sends them 295 kilometres to the Super-Kamiokande neutrino detector in Kamioka, on the west coast. In 2011 the team saw the first hint of the transformation, but the 2011 megaquake temporarily shut down the experiment before it could confirm the sighting.
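As a rough illustration of the physics being tested, the leading-order two-flavour appearance probability for this baseline can be evaluated directly. This is a sketch only: the mixing parameters below are typical textbook values, not numbers from the article.

```python
import math

def appearance_probability(L_km, E_GeV, sin2_2theta13=0.10,
                           sin2_theta23=0.5, dm2_eV2=2.4e-3):
    """Leading-order probability that a muon neutrino arrives as an
    electron neutrino: P ~ sin^2(theta23) * sin^2(2*theta13)
    * sin^2(1.27 * dm2 * L / E), with L in km, E in GeV, dm2 in eV^2."""
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return sin2_theta23 * sin2_2theta13 * math.sin(phase) ** 2

# T2K's 295 km baseline, with the beam energy peaked near 0.6 GeV
p = appearance_probability(295, 0.6)   # comes out at a few per cent
```

The 295 km / 0.6 GeV combination was chosen by the experimenters precisely because it puts the oscillatory term near its maximum.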


Now, with about four times as much data, T2K is finally able to claim certainty. They have detected a total of 28 electron neutrinos, when fewer than 5 would be expected if the neutrinos were not oscillating. Odds that the result is a fluke are less than one in a trillion. The team announced the results today at the European Physical Society meeting in Stockholm, Sweden.
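The quoted fluke odds can be sanity-checked with a bare-bones Poisson calculation: the chance of counting 28 or more events when about 5 are expected. This ignores background and systematic uncertainties, so only the order of magnitude is meaningful.

```python
import math

def poisson_tail(k_obs, mu, n_terms=100):
    """P(X >= k_obs) for X ~ Poisson(mu), summed term by term.
    The first term is computed in log space to avoid huge factorials."""
    log_first = -mu + k_obs * math.log(mu) - math.lgamma(k_obs + 1)
    term, total, k = math.exp(log_first), 0.0, k_obs
    for _ in range(n_terms):
        total += term
        k += 1
        term *= mu / k   # ratio of successive Poisson terms
    return total

# 28 electron neutrinos seen, roughly 5 expected without oscillation
p_value = poisson_tail(28, 5.0)
```

Even this crude estimate lands at the one-in-a-trillion scale the team reports.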


Previously, this type of neutrino oscillation had only been hinted at, says David Wark of the Science and Technology Facilities Council in the UK, a member of the T2K collaboration. "Now it counts as a discovery."


The result offers a path towards solving one of the biggest mysteries in physics: why there is more matter than antimatter in the universe. Standard theories say that matter and antimatter should have been created in equal amounts by the big bang. But for some reason, matter won out.


Now that we have seen the muon neutrino morph into the electron neutrino in normal matter, physicists can run the T2K experiment with a beam of anti-muon neutrinos. Subtle differences in the way neutrinos and antineutrinos oscillate could have skewed the ratios of matter and antimatter production in the early universe. "This is the first step along the way, but it proves that we're going to get there," says Wark.


The measurement is also interesting in its own right, says Janet Conrad of the Massachusetts Institute of Technology, a member of the Double Chooz neutrino experiment in Chooz, France. Since 2011 that detector and other "disappearance" experiments have seen indirect signs of the muon-electron shift.


Comparing T2K's results to the disappearance data can directly point the way to new laws of physics beyond our current understanding. "If we see inconsistencies, it means there's new physics going on," says Conrad. "Neutrinos always surprise us, so this is an exciting opportunity to see what more these particles may have to say."


20,000+ FREE Online Science and Technology Lectures from Top Universities


This newsletter is aggregated from over 1450 news sources.

Casper Pieters's curator insight, March 9, 4:21 PM

Great resources for online learning just about everything. All you need is willpower and self-discipline.

Russ Roberts's curator insight, April 23, 8:37 PM

A very interesting site. Amazing Science covers many disciplines. Subscribe to the newsletter and be "amazed." Aloha, Russ, KH6JRM.

Siegfried Holle's curator insight, July 4, 5:45 AM

Your knowledge is your strength and power 


How Will We Know When Computers Can Think for Themselves?


Headlines recently exploded with news that a computer program called Eugene Goostman had become the first to pass the Turing test, a method devised by computing pioneer Alan Turing to objectively prove a computer can think.

The program fooled 33% of 30 judges into thinking it was a 13-year-old Ukrainian boy in a five-minute conversation. How impressive is the result? In a very brief encounter, judges interacted with a program that could be forgiven for not knowing much or speaking very eloquently—in the grand scheme, it’s a fairly low bar.

Chat programs like Eugene Goostman have existed since the 1970s. Though they have advanced over the years, none yet represents the revolutionary step in AI implied by the Turing test. So, if the Eugene Goostman program isn’t exemplary of a radical leap forward, what would constitute such a leap, and how will we know when it happens?

To explore that question, it’s worth looking at what the Turing test actually is and what it’s meant to measure.

In a 1950 paper, “Computing Machinery and Intelligence,” Alan Turing set out to discover how we might answer the question, “Can machines think?” Turing believed the answer would devolve into a semantic debate over the definitions of the words “machine” and “think.” He suggested what he hoped was a more objective test to replace the question.

Turing called it the imitation game. The test involved three participants: an interrogator (of either sex), a male subject, and a female subject. The interrogator would try to discover which was male and which female by asking questions. The man would try to fool the interrogator, and the woman would try to help the interrogator. To avoid revealing themselves by physical traits, the subjects and interrogator would ideally communicate by teletype from separate rooms.

Now, Turing said, substitute the participant trying to fool the interrogator with a computer. And instead of trying to discover which is a man and which a woman—have the interrogator decide which is human and which a computer.

Turing suggested this test would replace the subjective question, “Can a machine think?” and, later in the paper, suggested how well a computer might play the imitation game at the turn of the 21st century.

“I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion.”


Buckyball boron: First experimental evidence of an all boron fullerene


A distorted 40-atom boron fullerene has been detected, mixed with a quasiplanar isomer.

The first experimental evidence for a boron fullerene has been produced by researchers in the US and China.[1] Unlike the football shaped C60 structure of buckminsterfullerene, the boron structure has very different symmetry, with a box-like shape containing both hexagonal and heptagonal holes. The researchers now hope that their findings will enable other boron fullerenes to be produced.

Boron cannot form a direct B60 analogue of buckminsterfullerene because it has only three electrons in its outer shell, so after forming three bonds it has no free electrons remaining to form the delocalised π-network essential to the stability of C60. In 2007, however, Boris Yakobson and colleagues at Rice University, US, proposed that this electron deficiency could be overcome by inserting an extra boron atom into the centre of each hexagon, forming B80.[2] And in June, Chinese scientists calculated that the boron fullerene B38 would be stable.[3]

The stability of these hollow-cage structures has subsequently been challenged, but now Lai-Sheng Wang at Brown University, US, and colleagues have used two different electronic structure algorithms to calculate the most stable possible structure of B40. Both programs indicate that, by a considerable margin, the most stable isomer is a distorted fullerene with a hexagonal hole on the top and bottom and four heptagonal holes around the waist. They have christened this structure borospherene.

  1. H.-J. Zhai et al., Nat. Chem., 2014, DOI: 10.1038/nchem.1999
  2. N. G. Szwacki, A. Sadrzadeh and B. I. Yakobson, Phys. Rev. Lett., 2007, 98, 166804, DOI: 10.1103/physrevlett.98.166804
  3. J. Lv et al., Nanoscale, 2014, DOI: 10.1039/c4nr01846j

The World’s First Photonic Router

Weizmann Institute scientists have demonstrated for the first time a photonic router – a quantum device based on a single atom that enables routing of single photons by single photons. This achievement, as reported in Science magazine, is another step toward overcoming the difficulties in building quantum computers.

At the core of the device is an atom that can switch between two states. The state is set just by sending a single particle of light – or photon – from the right or the left via an optical fiber. The atom, in response, then reflects or transmits the next incoming photon, accordingly. For example, in one state, a photon coming from the right continues on its path to the left, whereas a photon coming from the left is reflected backwards, causing the atomic state to flip. In this reversed state, the atom lets photons coming from the left continue in the same direction, while any photon coming from the right is reflected backwards, flipping the atomic state back again. This atom-based switch is solely operated by single photons – no additional external fields are required.
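The switching rules described above can be mimicked with a toy state machine. This is a purely classical caricature for illustration; the real device is a single quantum system, and the side labels here are our own.

```python
class SinglePhotonRouter:
    """Toy model of the single-atom photonic switch: the atom transmits
    photons arriving from one side, reflects photons arriving from the
    other, and each reflection toggles which side is which."""

    def __init__(self):
        self.reflects = "left"   # photons arriving from this side are reflected

    def send_photon(self, side):
        """Send one photon from 'left' or 'right'; returns its fate."""
        if side == self.reflects:
            # a reflected photon flips the atomic state
            self.reflects = "right" if self.reflects == "left" else "left"
            return "reflected"
        return "transmitted"

router = SinglePhotonRouter()
fates = [router.send_photon(s) for s in ["left", "right", "right", "left"]]
```

Note how a reflected photon always toggles the routing state while a transmitted one leaves it unchanged, which is exactly the behaviour the paragraph describes.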

“In a sense, the device acts as the photonic equivalent to electronic transistors, which switch electric currents in response to other electric currents,” says Dr. Barak Dayan, head of the Weizmann Institute’s Quantum Optics group, whose team includes Itay Shomroni, Serge Rosenblum, Yulia Lovsky, Orel Bechler and Gabriel Guendleman of the Chemical Physics Department in the Faculty of Chemistry. The photons are not only the units comprising the flow of information, but also the ones that control the device.

This achievement was made possible by the combination of two state-of-the-art technologies. One is the laser cooling and trapping of atoms. The other is the fabrication of chip-based, ultra-high quality miniature optical resonators that couple directly to the optical fibers. Dayan’s lab at the Weizmann Institute is one of a handful worldwide that has mastered both these technologies.

Dayan: “The road to building quantum computers is still very long, but the device we constructed demonstrates a simple and robust system, which should be applicable to any future architecture of such computers. In the current demonstration a single atom functions as a transistor – or a two-way switch – for photons, but in our future experiments, we hope to expand the kinds of devices that work solely on photons, for example new kinds of quantum memory or logic gates.”

Astronomers discover 7 new galaxies using a new type of telescope made by stitching together telephoto lenses


Meet the seven new dwarf galaxies. Yale University astronomers, using a new type of telescope made by stitching together telephoto lenses, recently discovered seven celestial surprises while probing a nearby spiral galaxy. The previously unseen galaxies may yield important insights into dark matter and galaxy evolution, while possibly signaling the discovery of a new class of objects in space.

For now, scientists know they have found a septuplet of new galaxies that were previously overlooked because of their diffuse nature: The ghostly galaxies emerged from the night sky as the team obtained the first observations from the “homemade” telescope.

The discovery came quickly, in a relatively small section of sky. “We got an exciting result in our first images,” said Allison Merritt, a Yale graduate student and lead author of a paper about the discovery in The Astrophysical Journal Letters. “It was very exciting. It speaks to the quality of the telescope.”

Pieter van Dokkum, chair of Yale’s astronomy department, designed the robotic telescope with University of Toronto astronomer Roberto Abraham. Their Dragonfly Telephoto Array uses eight telephoto lenses with special coatings that suppress internally scattered light. This makes the telescope uniquely adept at detecting the very diffuse, low surface brightness of the newly discovered galaxies.

“These are the same kind of lenses that are used in sporting events like the World Cup. We decided to point them upward instead,” van Dokkum said. He and Abraham built the compact, oven-sized telescope in 2012 at New Mexico Skies, an observatory in Mayhill, N.M. The telescope was named Dragonfly because the lenses resemble the compound eye of an insect.

“We knew there was a whole set of science questions that could be answered if we could see diffuse objects in the sky,” van Dokkum said. In addition to discovering new galaxies, the team is looking for debris from long-ago galaxy collisions.

“It’s a new domain. We’re exploring a region of parameter space that had not been explored before,” van Dokkum said.


Chimpanzee brain power is strongly heritable


Variations in chimpanzee intelligence have been shown for the first time to be strongly dictated by genetic inheritance, echoing findings in humans.

If a chimpanzee appears unusually intelligent, it probably had bright parents. That's the message from the first study to check if chimp brain power is heritable. The discovery could help to tease apart the genes that affect chimp intelligence and to see whether those genes in humans also influence intelligence. It might also help to identify additional genetic factors that give humans the intellectual edge over their non-human-primate cousins.

The researchers estimate that, similar to humans, genetic differences account for about 54 per cent of the range seen in "general intelligence" – dubbed "g" – which is measured via a series of cognitive tests. "Our results in chimps are quite consistent with data from humans, and the human heritability in g," says William Hopkins of the Yerkes National Primate Research Center in Atlanta, Georgia, who heads the team reporting its findings in Current Biology.
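For readers unfamiliar with the term, heritability is simply the share of total trait variance attributable to genetic differences. A minimal sketch, with variance components invented purely to reproduce the study's 54 per cent figure:

```python
def heritability(genetic_variance, environmental_variance):
    """Heritability h^2: the fraction of total trait variance
    attributable to genetic differences between individuals."""
    total = genetic_variance + environmental_variance
    return genetic_variance / total

# Illustrative variance components chosen to match the reported ~54%
h2 = heritability(0.54, 0.46)
```

A heritability of 0.54 does not mean 54 per cent of any individual's intelligence is genetic; it describes how variation across the population splits between genes and environment.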


Million-fold smaller "nano-pixels" hold huge potential for flexible, low-power, high-res screens


The Retina displays featured on Apple's iPhone 4 and 5 models pack a pixel density of 326 ppi, with individual pixels measuring 78 micrometers. That might seem plenty good enough given the average human eye is unable to differentiate between the individual pixels, but scientists in the UK have now developed technology that could lead to extremely high-resolution displays that put such pixel densities to shame.
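Those numbers are easy to verify: the pixel pitch follows directly from the pixel density, and comparing it with the prototype's 30-nanometer pixels shows the scale of the jump (the "million-fold" of the headline refers roughly to pixel area, since the linear shrink is about 2,600-fold):

```python
MICROMETERS_PER_INCH = 25_400

def pixel_pitch_um(ppi):
    """Pixel pitch in micrometers for a given pixel density in ppi."""
    return MICROMETERS_PER_INCH / ppi

retina_pitch = pixel_pitch_um(326)   # ~78 um, matching the figure quoted above
nano_pixel_um = 0.03                 # the prototype's 30 nm pixels, in um
linear_shrink = retina_pitch / nano_pixel_um
area_shrink = linear_shrink ** 2     # millions of times smaller by area
```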

Led by Oxford University scientists, a research team has created a prototype device that features pixels just 30 x 30 nanometers in size. The high resolution potential of the technology was discovered while the team was exploring the link between the electrical and optical properties of phase change materials (PCMs) that can switch from an amorphous to a crystalline state.

By sandwiching a seven-nanometer thick layer of the PCM Germanium-Antimony-Tellurium (Ge2Sb2Te5 or GST) between two layers of transparent electrodes made of indium tin oxide (ITO), the scientists discovered they could "draw" still images within the sandwich "stack" using an atomic force microscope. They then found that the "nano-pixels" could be switched on and off electronically, creating colored dots that could be used as the basis for an extremely high-resolution display.

"We didn't set out to invent a new kind of display," said Professor Harish Bhaskaran of Oxford University's Department of Materials, who led the research. "We were exploring the relationship between the electrical and optical properties of phase change materials and then had the idea of creating this GST 'sandwich' made up of layers just a few nanometers thick. We found that not only were we able to create images in the stack but, to our surprise, thinner layers of GST actually gave us better contrast. We also discovered that altering the size of the bottom electrode layer enabled us to change the color of the image."

But extremely high-resolution isn't the only impressive quality of the technology. The layers that make up the GST sandwich are created using a sputtering technique, which would allow them to be deposited as thin films on extremely thin and flexible substrates.

"We have already demonstrated that the technique works on flexible Mylar sheets around 200 nanometres thick," said Professor Bhaskaran. "This makes them potentially useful for 'smart' glasses, foldable screens, windshield displays, and even synthetic retinas that mimic the abilities of photoreceptor cells in the human eye."


Single-Cell Genomics Reveals Hundreds of Coexisting Subpopulations in Ocean Microbes


The smallest, most abundant marine microbe, Prochlorococcus, is a photosynthetic bacterial species essential to the marine ecosystem. An estimated billion billion billion of the single-cell creatures live in the oceans, forming the base of the marine food chain and occupying a range of ecological niches based on temperature, light and chemical preferences, and interactions with other species. But the full extent and characteristics of diversity within this single species remains a puzzle.

To probe this question, scientists in MIT’s Department of Civil and Environmental Engineering (CEE) recently performed a cell-by-cell genomic analysis on a wild population of Prochlorococcus living in a milliliter — less than a quarter teaspoon — of ocean water, and found hundreds of distinct genetic subpopulations.

Each subpopulation in those few drops of water is characterized by a set of core gene alleles linked to a few flexible genes — a combination the MIT scientists call the “genomic backbone” — that endows the subpopulation with a finely tuned suitability for a particular ecological niche. Diversity also exists within the backbone subpopulations; most individual cells in the samples they studied carried at least one set of flexible genes not found in any other cell in its subpopulation.


SupraPed Robots Will Use Trekking Poles to Hike Across Rough Terrain


Last year at the Stanford-Berkeley Robotics Symposium, we saw some tantalizing slides from Oussama Khatib about a humanoid robot that used trekking poles to balance itself. We were promised more details later, and the Stanford researchers delivered at the IEEE International Conference on Robotics and Automation (ICRA) this year, where they presented the concept of SupraPed robots.

The idea is equipping robots with a pair of special trekking poles packed with sensors that, according to the researchers, "transforms biped humanoids into tripeds or quadrupeds or more generally, SupraPeds." By using these smart poles to steady themselves, the robots would be able to navigate through "cluttered and unstructured environments such as disaster sites."

Humans have had a lot of practice walking around on two legs. Robots have not, which isn't their fault, but at the moment, even the best robots are working up to the level of a toddler. Some of them aren't bad at flat terrain, but as we saw in the DARPA Robotics Challenge Trials, varied terrain is very, very difficult. It doesn't just require the physical ability to move and balance, but also the awareness to know what path to take and where feet should be placed.

As good at this as humans are, even we get into situations where our balance and movements with our legs and feet simply aren't enough. And when this happens, we scramble. If we're fancy, we might use a walking stick or hiking poles for balance assistance, and if we're not fancy, sometimes an outstretched arm is enough.

Similar to the research we looked at yesterday, this is an entirely different philosophy about obstacles: instead of things to be avoided, they're things that can potentially be used to complete tasks that would otherwise be unsafe or impossible.

However, this is all simulation, and the programming behind it is fairly complex. The robot (when they throw a real robot into this mix) will have sophisticated 3D vision, tactile sensing, and a special set of actuated ski poles. The SupraPed platform includes a pair of smart walking staffs, a whole-body multi-contact control and planning software system, and real-time reactive controllers that integrate both tactile and visual information. Moreover, to bypass the difficulty of programming fully autonomous robot controllers, the SupraPed platform contains a remote haptic teleoperation system which allows the operator to remotely give high-level commands.
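The basic reason extra pole contacts help can be sketched with the standard static-stability test: a robot is statically stable when its center of mass projects inside the support polygon of its ground contacts, and more contacts make that polygon bigger. A simplified 2D check (coordinates invented; the researchers' whole-body controller does far more than this):

```python
def inside_support_polygon(com_xy, contacts):
    """Check whether the center-of-mass projection lies inside the convex
    hull of the ground contacts. `contacts` must list the hull vertices
    in counter-clockwise order."""
    x, y = com_xy
    n = len(contacts)
    for i in range(n):
        x1, y1 = contacts[i]
        x2, y2 = contacts[(i + 1) % n]
        # cross product < 0 means the point is outside this edge
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
            return False
    return True

feet = [(0.0, 0.0), (0.3, 0.0)]                # two feet alone: just a line
with_poles = feet + [(0.4, 0.8), (-0.1, 0.8)]  # poles add two more contacts
stable = inside_support_polygon((0.15, 0.4), with_poles)
```

With only two feet the support region degenerates to a line segment; planting two poles turns it into a quadrilateral that can contain the center of mass.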


Ultimate solar system could contain 60 Earths


The laws of physics potentially allow one binary star system to contain a surprisingly large number of Earth-like planets, assuming there is enough matter.

Why settle for one habitable planet, when you can have 60? An astrophysicist has designed the ultimate star system by cramming in as many Earth-like worlds as possible without breaking the laws of physics. Such a monster cosmic neighbourhood is unlikely to exist in reality, but it could inspire future exoplanet studies. Sean Raymond of Bordeaux Observatory in France started his game of fantasy star system with a couple of ground rules. First, the arrangement of planets must be scientifically plausible. Second, they must be gravitationally stable over billions of years: there is no point in putting planets into orbit only to watch them spiral into the sun.

"The arguments were based on the recent scientific literature as well as some simple calculations I did," says Raymond. In some cases it was impossible to choose between two scenarios because of a lack of data, so he just picked the one he liked best.

Gas giants such as Jupiter are not habitable to life as we know it, but they can be orbited by Earth-like moons. In our solar system, Europa and Enceladus, which orbit Jupiter and Saturn, respectively, are prime candidates for extraterrestrial life. Raymond calculates that a red dwarf could hold four Jupiter-like planets, each with five Earth-like moons. What's more, the Trojan trick (parking extra planets at the stable Lagrange points that lead and trail a gas giant in its orbit) can allow another two Earth-like planets on either side of the orbiting Jupiters, upping the total number of habitable worlds around the red dwarf to 36.

Finally, Raymond turned his star system into a binary one, with two red dwarfs separated by roughly the distance from our sun to the edge of the solar system. Theory allows one star to carry the Earth-only configuration, and the other to carry the Earth-plus-Jupiters configuration. This creates the ultimate star system, with 60 habitable planets to choose from.
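The arithmetic behind the totals can be tallied explicitly. We read "two Earth-like planets on either side" as two planets at each of a gas giant's leading and trailing Trojan points; that reading is our assumption, but it is the one that reproduces the stated count of 36.

```python
# Tallying the "ultimate solar system" described above
jupiters = 4
moons_per_jupiter = 5
trojans_per_jupiter = 4   # assumption: two at each of two Trojan points

# Star B: the Earth-plus-Jupiters configuration
earths_around_star_b = jupiters * (moons_per_jupiter + trojans_per_jupiter)

# Star A: the Earth-only configuration makes up the remainder of the 60
earths_around_star_a = 60 - earths_around_star_b
total = earths_around_star_a + earths_around_star_b
```

That leaves 24 Earths on direct orbits around the other red dwarf, consistent with the 60-world total.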


IBM invests $3 billion to extend Moore’s law with post-silicon-era chips and new architectures


IBM announced today it is investing $3 billion for R&D in two research programs to push the limits of chip technology and extend Moore’s law.

The research programs are aimed at “7 nanometer and beyond” silicon technology and developing alternative technologies for post-silicon-era chips using entirely different approaches, IBM says.

IBM will be investing especially in carbon nanoelectronics, silicon photonics, new memory technologies, and architectures that support quantum and cognitive computing.

7 nanometer technology and beyond

IBM researchers and other semiconductor experts predict that chips will scale from today's 22 nanometers down to 14 and then 10 nanometers in the next several years.

However, scaling down to 7 nanometers by the end of the decade will require significant investment and innovation in semiconductor architectures as well as invention of new tools and techniques for manufacturing, IBM says.

Below 7 nanometers, the challenges dramatically increase, requiring new kinds of materials to power systems of the future, such as carbon nanotubes and graphene, and new computational approaches, such as quantum computing and neurosynaptic computing.
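For context on why each node matters, transistor density ideally scales with the inverse square of the feature size, so the planned march from 22 nm down to 7 nm buys roughly an order of magnitude in density (ideal geometric scaling only; real process nodes fall short of this):

```python
def density_gain(old_nm, new_nm):
    """Ideal transistor-density improvement when feature size shrinks:
    density scales with the inverse square of the node size."""
    return (old_nm / new_nm) ** 2

gain_22_to_7 = density_gain(22, 7)   # roughly a tenfold density improvement
```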

Carbon Nanotubes. IBM Researchers are exploring whether carbon nanotube (CNT) electronics can replace silicon beyond the 7 nm node. IBM recently demonstrated two-way CMOS NAND gates using 50 nm gate-length carbon nanotube transistors, a first.

IBM also has demonstrated the capability for purifying carbon nanotubes to 99.99%, the highest (verified) purities demonstrated to date, and transistors at 10 nm channel length that show no degradation due to scaling. Modeling of the electronic circuits suggests that about a five to ten times improvement in performance compared to silicon circuits is possible with CNTs.

Graphene. Graphene — pure carbon in the form of a one-atomic-layer-thick sheet — is an excellent conductor of heat and electricity, and it is also remarkably strong and flexible. Electrons can move in graphene about ten times faster than in commonly used semiconductor materials such as silicon and silicon germanium. That means faster switching transistors. In 2013, IBM demonstrated the world’s first graphene-based integrated-circuit receiver front-end for wireless communications.


Exoplanet Discoveries to Date Are Just a Drop in the Bucket - Systematic Searches Reveal Plenty Of Alien Worlds


Astronomers have in the past 20 years located several hundred planets orbiting distant stars, and they have only scratched the surface. In a small patch of stars—less than 1 percent of the sky—in the Northern Hemisphere, NASA's Kepler mission has already found more than 100 planets, along with strong hints of thousands more. Stars across the sky ought to be similarly laden with planets. A recent study indicated that each star hosts, on average, 1.6 planets. Exoplanets, as these strange worlds are called, are as plentiful as weeds—they crop up wherever they can. Whether any of them harbors life remains to be seen, but the odds of finding such a world are getting better.


The dead walk again: An arachnid that lived 410 million years ago has crawled back into the virtual world


A spider-like creature's remains were so well preserved in fossil form that scientists could see all its leg joints, allowing them to recreate its likely gait using computer graphics.

Known as a trigonotarbid, the animal was one of the first predators on land. Its prey were probably early flightless insects and other invertebrates, which it would run down and jump on.

"We know quite a bit about how it lived," said Russell Garwood, a palaeontologist with the University of Manchester, UK. "We can see from its mouth parts that it pre-orally digested its prey - something that most arachnids do - because it has a special filtering plate in its mouth. So, that makes us fairly sure it vomited digestive enzymes on to its prey and then sucked up liquid food," he explained.

The trigonotarbid specimens studied by Dr Garwood and colleagues are just a few millimetres in length. They were unearthed in Scotland, near the Aberdeenshire town of Rhynie. Its translucent Early Devonian chert sediments are renowned for their exquisite fossils.

The team used a collection held at the Natural History Museum in London that had been prepared back in the 1920s. The rock had been cut into extremely fine slices just a few tens of microns thick, making it possible to construct 3D models of the arachnids, much like a doctor might do with the X-ray slices obtained in a CAT scan.

"We could see the articulation points in the legs," explained Dr Garwood. "Between each part of the leg, there are darker pieces where they join, and that allowed us to work out the range of movement.

"We then compared that with the gaits of modern spiders, which are probably a good analogy because they have similar leg proportions. The software enabled us to see the centre of mass and find a gait that worked. If it's too far back compared to the legs, the posterior drags on the ground. The trigonotarbid is an alternating tetrapod, meaning there are four feet on the ground at any one time."
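The gait named in the quote can be written down explicitly: with eight walking legs, an alternating tetrapod swaps two diagonal sets of four legs between stance and swing, so four feet are always on the ground. A small sketch (the leg numbering is ours):

```python
# Eight walking legs: four on the left (L) and four on the right (R)
LEGS = [("L", i) for i in range(1, 5)] + [("R", i) for i in range(1, 5)]

def stance_set(phase):
    """Legs on the ground in phase 0 or 1 of an alternating tetrapod
    gait: even legs on one side move with odd legs on the other, then
    the two diagonal sets swap roles."""
    return [(side, i) for side, i in LEGS
            if (i % 2 == 0) == ((side == "L") == (phase == 0))]

phase_a, phase_b = stance_set(0), stance_set(1)
```

In each phase exactly four legs bear weight, and together the two phases cover all eight legs, which is what keeps the animal statically stable while it walks.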

"This new study has gone further and shows us how they probably walked. For me, what's really exciting here is that scientists themselves can make these animations now, without needing the technical wizardry (and immense costs) of a Jurassic-Park style film. When I started working on fossil arachnids, we were happy if we could manage a sketch of what they used to look like. Now, they run across our computer screens."

The work is part of a special collection of papers on 3D visualisations of fossils published in the Journal of Paleontology.


Britain plans to build the first commercial spaceport by 2018


Britain is to build a commercial spaceport that will be used to launch manned missions and commercial satellites. A list of eight locations for the spaceport – which could be used by Virgin Galactic and the US company XCOR to launch space tourism flights – has been drawn up by the government and will be announced on Tuesday at the Farnborough air show.

It is planned to have Britain's spaceport in operation by 2018 even though a decision has yet to be made on its location. Several sites around the country have been linked to spaceport plans and are now being studied by officials.

Government officials are now studying the eight candidate sites ahead of a final decision, though it isn't clear when that decision is expected.

Still, the spaceport plan calls for a quick turnaround, which shouldn't be a surprise considering how the UK's space sector has been growing. According to the Guardian, the sector is now worth over £11 billion ($18.8 billion), and the government wants to help raise that to around £40 billion by 2030.

NASA's Orbital Debris Quarterly Archives Reveal a History of Garbage in Space

Space isn't empty, and near-Earth orbit is downright crowded. Every month, some junk burns up during re-entry as ever more is introduced into orbit. Poking around NASA's Orbital Debris Quarterly archives reveals the story of space junk from deliberate to accidental, and all of it hazardous.

Prior to June 1961, the entire population of artificial objects in near-Earth orbit was just over 50 objects, all spacecraft and rocket bodies. Then the Ablestar launch vehicle deployed its payload, the Transit 4A satellite, and exploded just over an hour later. The explosion created nearly 300 debris fragments, over two-thirds of which were still in orbit in 2002. After that, space just got messier.

Anti-satellite testing caused a whole lot of mess as the Soviet Union and the United States took turns proving they could blow up their own satellites. Between 1968 and 1982, the former Soviet Union conducted 20 tests, creating more than 700 catalogued debris fragments, 301 of which are still in orbit. In 1985, the United States tested its own system, producing a whiff of debris, none of which remains in orbit. Realizing that all these explosions were producing a terrific mess, nations collectively agreed to stop testing anti-satellite systems that produced debris. The agreement held for 20 years.

In the late '80s and early '90s, a whole lot of people did a whole lot of talking, eventually agreeing to voluntarily reduce the amount of junk they were producing. It worked for a while, reducing the growth rate of new debris cluttering up near-Earth space from a fairly steep climb in 1968 through 1988 to a far flatter climb from 1992 to 2006.

A few years later came a far more spectacular explosion. In June 1996, an abandoned upper stage rocket "broke up," the orbital debris euphemism for "suddenly exploded." The abrupt fragmentation of the rocket stage produced 700 pieces of distinct debris. The stage was the Pegasus Hydrazine Auxiliary Propulsion System (HAPS) from the STEP II mission, which had launched two years earlier. The event produced an order of magnitude more debris than models suggested it should have, forcing NASA to rethink what it did with abandoned craft. Eventually they figured out that the explosion was enhanced by excess fuel, leading to a procedure change: spent stages now perform a propellant depletion maneuver, both to reduce leftover fuel and to place them in a decaying orbit where they will (hopefully) burn up on re-entering the Earth's atmosphere within five years.

Most tracked events are nowhere near that exuberant. The next month, the French CERISE spacecraft was pinged by a fragment of an Ariane 1 launch vehicle that had exploded a decade earlier. No new debris was created, the spacecraft recovered, and everything kept on whizzing about the planet.

A study says heat-related deaths in Europe could reach 200,000 a year with a 3.5°C temperature rise

The costliest impact of climate change in Europe this century is likely to be on human health – and in particular heat-related deaths – according to a new economic assessment by the EU Joint Research Centre, the European Commission’s in-house science service.

The study looks at the impact of a 3.5°C rise in global average temperature from pre-industrial levels – an increase expected if no concerted international action is taken. The official target is to limit the rise to 2°C by cutting greenhouse gases.

Heat-related deaths in Europe could reach 200,000 a year with a 3.5°C temperature rise, according to the study. The economic cost of premature mortality caused by global warming is estimated at €120bn a year. This exceeds the impact on coastal infrastructure (€42bn) and agriculture (€18bn).

The total cost to Europe of unrestrained global warming is put at €200bn a year, though the JRC researchers warn that this figure considerably underestimates the dangers.

Learning abilities in math and reading are tightly linked and highly genetic, scientists say

Around half of the genes that influence how well a child can read also play a role in their mathematics ability, say scientists from UCL, the University of Oxford and King’s College London who led a study into the genetic basis of cognitive traits.

While mathematics and reading ability are known to run in families, the complex system of genes affecting these traits is largely unknown. The finding deepens scientists’ understanding of how nature and nurture interact, highlighting the important role that a child’s learning environment may have on the development of reading and mathematics skills, and the complex, shared genetic basis of these cognitive traits.

The collaborative study, published today in Nature Communications as part of the Wellcome Trust Case-Control Consortium, used data from the Twins Early Development Study (TEDS) to analyse the influence of genetics on the reading and mathematics performance of 12-year-old children from nearly 2,800 British families.

Twins and unrelated children were tested for reading comprehension and fluency, and answered mathematics questions based on the UK national curriculum. The information collected from these tests was combined with DNA data, showing a substantial overlap in the genetic variants that influence mathematics and reading. 
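
A rough sense of how twin data yield heritability estimates can be sketched with the classic Falconer comparison of identical (MZ) and fraternal (DZ) twin correlations. This is a textbook heuristic, not the genome-wide method the study itself used, and the correlations below are made up for illustration.

```python
# Falconer's formula: identical twins share ~100% of segregating genes,
# fraternal twins ~50%, so doubling the correlation gap estimates the
# genetic contribution to a trait.
def falconer(r_mz, r_dz):
    """Return (h2, c2): heritability and shared-environment estimates."""
    h2 = 2 * (r_mz - r_dz)   # additive genetic component
    c2 = r_mz - h2           # shared (family) environment
    return h2, c2

# Hypothetical twin correlations for a reading score:
h2, c2 = falconer(r_mz=0.75, r_dz=0.45)
print(round(h2, 2), round(c2, 2))  # roughly 0.6 genetic, 0.15 shared environment
```

The DNA-based approach in the paper estimates such overlaps directly from measured genetic variants in unrelated children, rather than inferring them from twin resemblance.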

Dr Chris Spencer (Oxford University), lead author said: “We’re moving into a world where analysing millions of DNA changes, in thousands of individuals, is a routine tool in helping scientists to understand aspects of human biology. This study used the technique to help investigate the overlap in the genetic component of reading and maths ability in children. Interestingly, the same method can be applied to pretty much any human trait, for example to identify new links between diseases and disorders, or the way in which people respond to treatments.”

Rick Frank's curator insight, Today, 2:27 AM

I can hear the protesters screaming already :)

Diane Johnson's curator insight, Today, 9:24 AM

Really interesting - the more we know about our genetic underpinnings, the more we know there is to learn.

Blackest is the new black: Scientists have developed a material so dark that you can't see it

A British company has produced a "strange, alien" material so black that it absorbs all but 0.035 per cent of visual light, setting a new world record. To stare at the "super black" coating made of carbon nanotubes – each 10,000 times thinner than a human hair – is an odd experience. It is so dark that the human eye cannot understand what it is seeing. Shapes and contours are lost, leaving nothing but an apparent abyss.

If it was used to make one of Chanel's little black dresses, the wearer's head and limbs might appear to float incorporeally around a dress-shaped hole.

Actual applications are more serious, enabling astronomical cameras, telescopes and infrared scanning systems to function more effectively. Then there are the military uses that the material's maker, Surrey NanoSystems, is not allowed to discuss.

The nanotube material, named Vantablack, has been grown on sheets of aluminium foil by the Newhaven-based company. While the sheets may be crumpled into miniature hills and valleys, this landscape disappears on areas covered by it.

"You expect to see the hills and all you can see … it's like black, like a hole, like there's nothing there. It just looks so strange," said Ben Jensen, the firm's chief technical officer. Asked about the prospect of a little black dress, he said it would be "very expensive" – the cost of the material is one of the things he was unable to reveal. "You would lose all features of the dress. It would just be something black passing through," he said.

Vantablack, which was described in the journal Optics Express and will be launched at the Farnborough International Airshow this week, works by packing together a field of nanotubes, like incredibly thin drinking straws. These are so tiny that light particles cannot get into them, although they can pass into the gaps between. Once there, however, all but a tiny remnant of the light bounces around until it is absorbed.
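
The bounce-and-absorb mechanism can be put in rough numbers. In this sketch the per-bounce absorption figure is assumed, not quoted by Surrey NanoSystems; the point is that the surviving light falls off geometrically with the number of bounces.

```python
# If each bounce between nanotubes absorbs a fixed fraction of the remaining
# light, the reflected remnant shrinks geometrically with the bounce count.
def remnant_after_bounces(absorption_per_bounce, bounces):
    """Fraction of incident light still unabsorbed after n bounces."""
    return (1 - absorption_per_bounce) ** bounces

# With, say, 15% absorbed per bounce, ~50 bounces already leaves less than
# the 0.035 percent reflectance reported for Vantablack:
print(remnant_after_bounces(0.15, 50))
```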

Vantablack's practical uses include calibrating cameras used to take photographs of the oldest objects in the universe. This has to be done by pointing the camera at something as black as possible.

Mathematica 10 launched with 700+ New Functions and an Amazing Amount of R&D

Mathematica 10 has more new features than any previous version. It is satisfying to see such a long curve of accelerating development—and to realize that there are more new functions being added with Mathematica 10 than there were functions altogether in Mathematica 1. So what is the new functionality in Mathematica 10? It’s a mixture of completely new areas and directions (like geometric computation, machine learning and geographic computation)—together with extensive strengthening, polishing and expanding of existing areas. It’s also a mixture of things I’ve long planned for us to do—but which had to wait for us to develop the necessary technology—together with things I’ve only fairly recently realized we’re in a position to tackle.

When you first launch Mathematica 10 there are some things you’ll notice right away. One is that Mathematica 10 is set up to connect immediately to the Wolfram Cloud. Unlike Wolfram Programming Cloud, or the upcoming Mathematica Online, Mathematica 10 doesn’t run its interface or computations in the cloud. Instead, it maintains all the advantages of running these natively on your local computer—but connects to the Wolfram Cloud so it can have cloud-based files and other forms of cloud-mediated sharing, as well as the ability to access cloud-based parts of the Wolfram Knowledgebase.

If you’re an existing Mathematica user, you’ll notice some changes when you start using notebooks in Mathematica 10. Like there’s now autocompletion everywhere—for option values, strings, wherever. And there’s also a hovering help box that lets you immediately get function templates or documentation. And there’s also—as much requested by the user community—computation-aware multiple undo. It’s horribly difficult to know how and when you can validly undo Mathematica computations—but in Mathematica 10 we’ve finally managed to solve this to the point of having a practical multiple undo.

One important area where Mathematica 10 pushes this kind of automation is machine learning. Inside the system there are all kinds of core algorithms familiar to experts—logistic regression, random forests, SVMs, etc. And all kinds of preprocessing and scoring schemes. But to the user there are just two highly automated functions: Classify and Predict. And with these functions, it’s now easy to call on machine learning whenever one wants.
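
The train-then-query shape of Classify can be mimicked in a few lines of Python. This is an illustrative analogue, not Wolfram Language; the 1-nearest-neighbour rule here simply stands in for whichever algorithm a highly automated function would select behind the scenes.

```python
# A toy analogue of Classify: train on labeled examples, get back something
# you can call like a function on new inputs.
def classify(examples):
    """examples: list of (feature_vector, label); returns a classifier function."""
    def classifier(x):
        def sq_dist(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        # 1-nearest-neighbour rule: copy the label of the closest example
        _, label = min(examples, key=lambda ex: sq_dist(ex[0], x))
        return label
    return classifier

c = classify([((1, 1), "A"), ((1, 2), "A"), ((8, 9), "B"), ((9, 8), "B")])
print(c((2, 1)))  # "A" -- nearest labeled point lies in the A cluster
print(c((9, 9)))  # "B"
```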

There are huge new algorithmic capabilities in Mathematica 10 in graph theory, image processing, control theory and lots of other areas. Sometimes one’s not surprised that it’s at least possible to have such-and-such a function—even though it’s really nice to have it be as clean as it is in Mathematica 10. But in other cases it at first seems somehow impossible that the function could work.

There are all kinds of issues. Maybe the general problem is undecidable, or theoretically intractable. Or it’s ill conditioned. Or it involves too many cases. Or it needs too much data. What’s remarkable is how often—by being algorithmically sophisticated, and by leveraging what we’ve built in Mathematica and the Wolfram Language—it’s possible to work around these issues, and to build a function that covers the vast majority of important practical cases.

Another important issue is just how much we can represent and do computation on. Expanding this is a big emphasis in the Wolfram Language—and Mathematica 10 has access to everything that’s been developed there. And so, for example, in Mathematica 10 there’s an immediate symbolic representation for dates, times and time series—as well as for geolocations and geographic data.

Layered paper 3D printers: Full color, life-like, durable objects at a fraction of the cost

Irish company Mcor's unique paper-based 3D printers make some very compelling arguments. For starters, instead of expensive plastics, they build objects out of cut-and-glued sheets of standard 80 GSM office paper. That means printed objects come out at between 10 and 20 percent of the price of other 3D prints, and with none of the toxic fumes or solvent dips that some other processes require.

Secondly, because it's standard paper, you can print onto it in full color before it's cut and assembled, giving you a high quality, high resolution color "skin" all over your final object. Additionally, if the standard hard-glued object texture isn't good enough, you can dip the final print in solid glue, to make it extra durable and strong enough to be drilled and tapped, or in a flexible outer coating that enables moving parts - if you don't mind losing a little of your object's precision shape.

The process is fairly simple. Using a piece of software called SliceIt, a 3D model is cut into paper-thin layers exactly the thickness of an 80 GSM sheet. If your 3D model doesn't include color information, you can add color and detail to the model through a second piece of software called ColorIt.

Next, a regular CMYK inkjet printer prints each slice of the model onto a separate sheet of paper, with a ~5 mm-wide border in the required color around the part of each slice that will end up showing once it's assembled. The stack of printed slices is then loaded into the Mcor IRIS machine, which uses a process called selective deposition lamination.

Each sheet is laid down, and its slice shape is cut into it. Then a print nozzle lays soft glue all over the non-essential parts of that sheet that will be broken away after manufacture. A second, high density glue is applied to the sections of the paper that will be used to form the final model. Then, the next sheet is drawn over the top of it, and the stack is pressed up against a heat plate that seals the two layers together.

Once all layers have been cut, glued and pressed together, the object comes out of the printer as a chunky sheaf of paper. But the waste material, with its softer glue, is slightly flexible and pre-cut into little cubes, so it pulls away quickly and easily from the much tougher, denser material of the object itself.
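
The layer arithmetic is easy to sketch. Assuming a caliper of roughly 0.1 mm for an 80 GSM sheet (a typical figure for office paper, not one quoted by Mcor), the number of slices SliceIt must produce is just the object's height divided by the sheet thickness:

```python
import math

def sheets_needed(object_height_mm, sheet_thickness_mm=0.1):
    """Paper layers required for a model of the given height."""
    # work in microns so floating-point rounding can't drop a layer
    return math.ceil(round(object_height_mm * 1000) / round(sheet_thickness_mm * 1000))

print(sheets_needed(50))     # a 5 cm tall model needs 500 sheets
print(sheets_needed(50.05))  # any partial layer still costs a whole sheet: 501
```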

Even without an outer coating, the final objects feel very solid – something like a medium density wood feel – and the print detail can be truly fantastic, miles ahead of what some other 3D printers are able to achieve. Some of the samples we looked at had started to peel apart a little bit – but then, these were road-weary trade samples that had been handled by hundreds of people. In general they felt very solid.

Geoff Hancock, CEO of DGS 3D, the Australian supplier of Mcor machinery, told us that while the paper-based print process was broadly useful in parts prototyping, presentation modelling, architectural modelling, sand casting and a range of other business use cases, one of the most successful areas of the business is in printing out miniaturized cityscapes, complete with topographical data.

Rescooped by Dr. Stefan Gruenwald from Talks!

Planetary Nervous System, Global Participatory Platform, Social Information Technologies: How to Create a Better World?

It probably started with Linux, then came Wikipedia and Open Street Map. Crowd-sourced information systems are central for the Digital Society to thrive. So, what's next? In this video, Dirk Helbing introduces a number of concepts such as the Planetary Nervous System, Global Participatory Platform, Interactive Virtual Worlds, User-Controlled Information Filters and Reputation Systems, and the Digital Data Purse. He also discusses ideas such as the Social Mirror, Intercultural Adapter, the Social Protector and Social Money as tools to create a better world. These can help us to avoid systemic instabilities, market failures, tragedies of the commons, and exploitation, and to create the framework for a Participatory Market Society, where everyone can be better off.

Via Complexity Digest

The airborne panopticon: How plane-mounted cameras watch entire cities

Persistent Surveillance Systems can watch 25 sq. miles—for hours.

On June 28, 2012, in Dayton, Ohio, police received reports of an attempted robbery. A man armed with a box cutter had just tried to rob the Annex Naughty N’ Nice adult bookstore. Next, a similar report came from a Subway sandwich shop just a few miles northeast of the bookstore.

Coincidentally, a local company named Persistent Surveillance Systems (PSS) was flying a small Cessna aircraft 10,000 feet overhead at the time. The surveillance flight was loaded up with specialized cameras that could watch 25 square miles of territory, and it provided something no ordinary helicopter or police plane could: a TiVo-style time machine that could watch and record the movements of every person and vehicle below.

After learning about the attempted robberies, PSS conducted frame-by-frame video analysis of the bookstore and sandwich shop and was able to show that exactly one car traveled between them. Further analysis showed that the suspect then moved on to a Family Dollar store in the northern part of the city, robbed it, stopped for gas—where his face was captured on video—and eventually returned home.

A man named Joseph Bucholtz was arrested the following month and pled guilty to three counts of aggravated robbery with a deadly weapon and one count of robbery. In November 2012, he was sentenced to five years in prison and ordered to pay $665 to the bookstore.

Though an all-seeing, always recording eye in the sky might sound dystopian, current PSS surveillance tech has real limitations. For now, the cameras can only shoot for a few hours at a time, only during the day, and sometimes just in black-and-white. When watching from 10,000 feet, PSS says that individuals are reduced to a single pixel—useful for tracking movements but not for identifying someone.

“You can’t tell if they’re red, white, green, or purple,” Ross McNutt, the company’s CEO, told Ars. And even if the half-meter resolution on his cameras got significantly better, McNutt said that he would prefer to fly higher and capture a larger area.

McNutt wants to be sensitive to people’s concerns, and to that end PSS meets with the ACLU and other privacy advocates. But he also wants to catch criminals. McNutt, who helped develop the technology when it was a military research project at the nearby Air Force Institute of Technology (AFIT) back in 2004, claims that his system has already proved its value.

Noninvasive brain control: New light-sensitive protein enabling neurons to be silenced noninvasively

New light-sensitive protein enables simpler, more powerful optogenetics.

MIT engineers have developed the first light-sensitive protein molecule that enables neurons to be silenced noninvasively. Using a light source outside the skull makes it possible to do long-term studies without an implanted light source.

The protein, known as Jaws, also allows a larger volume of tissue to be influenced at once. The researchers described the protein in Nature Neuroscience.

Optogenetics, a technology that allows scientists to control brain activity by shining light on neurons, relies on opsins, light-sensitive proteins that act as channels or pumps that influence electrical activity by controlling the flow of ions in or out of cells.

Researchers insert a light source, such as an optical fiber, into the brain to suppress or stimulate electrical signals within cells. This technique requires a light source to be implanted in the brain, where it can reach the cells to be controlled. The neurons to be studied must be genetically engineered to produce the opsins.

Also, inserting optical fibers into the brain “displaces brain tissue and can lead to side effects such as brain lesion, neural morphology changes, glial inflammation and motility, or aseptic compromise,” the researchers say in the paper.

In addition, such implants can be difficult to insert and can be incompatible with many kinds of experiments, such as studies of development, during which the brain changes size, or of neurodegenerative disorders, during which the implant can interact with brain physiology. And it is difficult to perform long-term studies of chronic diseases with these implants.

Hebb's Rule Shown: Researchers have for the first time directly created and destroyed neural connections

Researchers from UCSD have for the first time directly created and destroyed neural connections that connect high level sensory input and high level behavioral responses. 

Donald Hebb in 1949 was one of the first to seize upon the observation that learning works by association. He proposed that on the biological level, neurons are rewired so that coordinated inputs and outputs get wired together. As such, were there a nausea neuron and a boat neuron, the effects of association would wire the two together, so that the “boat” itself fires up pathways in the “nausea” part of the brain.

In the field of neural networks, this has a name: Hebbian learning. Pavlov of course also described this phenomenon, and tested it in animals, bequeathing it the name “conditioned response”.
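
The rule itself, strengthening a connection in proportion to coincident activity at its two ends, fits in a few lines of code. This is a minimal sketch: the learning rate and activity values are arbitrary, and the boat/nausea naming just echoes the example above.

```python
def hebbian_update(w, pre, post, rate=0.1):
    """Hebb's rule: weight change proportional to coincident pre/post activity."""
    return w + rate * pre * post

# Repeated pairing: the "boat" neuron fires whenever "nausea" fires...
w_boat_nausea = 0.0
for _ in range(10):
    w_boat_nausea = hebbian_update(w_boat_nausea, pre=1.0, post=1.0)
print(round(w_boat_nausea, 6))  # 1.0 -- the association has strengthened

# ...but without coincident activity, the connection never forms:
print(hebbian_update(0.0, pre=1.0, post=0.0))  # 0.0
```

The UCSD experiment effectively did this, and its reverse, in a living brain: the laser-driven switch played the role of `pre`, and potentiating or depressing the synapse corresponded to raising or lowering `w`.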

Until now the wiring of neural inputs and outputs was a theory with good but indirect evidence.  At UCSD, neuroscientists teamed up with molecular biologists to engineer a mouse whose neurons can be directly controlled for forming and losing connections.

They did this by injecting an engineered virus into the auditory nerve cells. The viruses, largely harmless, carry a light-responsive molecular switch (a membrane protein “channel,” actually) which gets inserted into cells of the auditory region. Using laser light of certain frequencies it is possible to either “potentiate” or “depress” the auditory nerve cells.

The upshot is that the researchers could directly make the auditory nerve cells increase or decrease their signal strength to other nerve cells, without needing a real, external noise. In effect, they’ve short-circuited the noise input. In experiments, they used a mild electrical pulse to shock mice while simultaneously stimulating the auditory input with the laser-activated switch.

Basically they flashed the laser light at the ear of the mouse.  Over time, the mouse began to associate the laser pulse induced nerve signal with the electrical shock.  The mice were conditioned to exhibit fear even when there was no shock.

The crux of the experiment is what happened when the scientists flashed the laser in a way to weaken the auditory nerve.  Now the mouse stopped responding in fear to the laser auditory stimulus.

The experiments showed for the first time that associative learning was indeed the wiring together of sensory and response neurons.  The study was published in Nature.

Nature (2014) doi:10.1038/nature13294

Earth's magnetic field is weakening 10 times faster than originally predicted, Swarm satellites show

Earth's magnetic field, which protects the planet from huge blasts of deadly solar radiation, has been weakening over the past six months, according to data collected by a European Space Agency (ESA) satellite array called Swarm.

The biggest weak spots in the magnetic field — which extends 370,000 miles (600,000 kilometers) above the planet's surface — have sprung up over the Western Hemisphere, while the field has strengthened over areas like the southern Indian Ocean, according to the magnetometers onboard the Swarm satellites — three separate satellites floating in tandem.

The scientists who conducted the study are still unsure why the magnetic field is weakening, but one likely reason is that Earth's magnetic poles are getting ready to flip, said Rune Floberghagen, the ESA's Swarm mission manager. In fact, the data suggest magnetic north is moving toward Siberia.

Over the past 20 million years, our planet has settled into a pattern of a pole reversal about every 200,000 to 300,000 years; as of 2012, however, it had been more than twice that long since the last reversal. These reversals aren't split-second flips; instead they occur over hundreds or thousands of years. During this lengthy stint, the magnetic poles start to wander away from the region around the spin poles (the axis around which our planet spins), and eventually end up switched around, according to Cornell University astronomers.

Researchers develop holography technology that could change the way we view the world

Since the 1960s, theatergoers have shelled out for crude 3-D glasses, polarized glasses, and shutter glasses to enhance their viewing experience. These basic devices, used to trick the brain into perceiving an artificial three-dimensional reality, may soon be rendered obsolete with the introduction of new holography technology developed by Tel Aviv University researchers.

Tel Aviv University doctoral students Yuval Yifat, Michal Eitan, and Zeev Iluz have developed highly efficient holography based on nanoantennas that could be used for security as well as medical and recreational purposes. Prof. Yael Hanein, of TAU's School of Electrical Engineering and head of TAU's Center for Nanoscience and Nanotechnology, and Prof. Jacob Scheuer and Prof. Amir Boag of the School of Electrical Engineering, led the development team. Their research, published in the American Chemical Society's publication Nano Letters, uses the parameters of light itself to create dynamic and complex holographic images.

In order to effect a three-dimensional projection using existing technology, two-dimensional images must be "replotted"—rotated and expanded to achieve three-dimension-like vision. But the team's nanoantenna technology permits newly designed holograms to replicate the appearance of depth without being replotted. The applications for the technology are vast and diverse, according to the researchers, who have already been approached by commercial entities interested in the technology.

"We had this interesting idea—to play with the parameters of light, the phase of light," said Yifat. "If we could dynamically change the relation between light waves, we could create something that projected dynamically—like holographic television, for example. The applications for this are endless. If you take light and shine it on a specially engineered nanostructure, you can project it in any direction you want and in any form that you want. This leads to interesting results."

The researchers worked in the lab for over a year to develop and patent a small metallic nanoantenna chip that, together with an adapted holography algorithm, could determine the "phase map" of a light beam. "Phase corresponds with the distance light waves have to travel from the object you are looking at to your eye," said Prof. Hanein. "In real objects, our brains know how to interpret phase information so you get a feeling of depth, but when you look at a photograph, you often lose this information so the photographs look flat. Holograms save the phase information, which is the basis of 3-D imagery. This is truly one of the holy grails of visual technology."
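
Prof. Hanein's point that "phase corresponds with the distance light waves have to travel" can be made concrete with a toy calculation; the wavelength and path lengths below are chosen purely for illustration.

```python
import math

def phase(path_nm, wavelength_nm=633.0):   # 633 nm: a common HeNe laser line
    """Optical phase accumulated over a path, wrapped to [0, 2*pi) radians."""
    return (2 * math.pi * path_nm / wavelength_nm) % (2 * math.pi)

# Two points at slightly different depths reach the eye over different paths;
# the resulting phase difference is the depth cue a hologram records and an
# ordinary photograph discards.
near = phase(1000.0)
far = phase(1000.0 + 633.0 / 2)   # half a wavelength deeper
print((far - near) % (2 * math.pi))  # shifted by pi radians
```

A nanoantenna hologram of the kind described here works by imposing such a phase map across the chip, so the reflected wavefront reconstructs the depth cues of the target object.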

According to the researchers, their methodology is the first of its kind to successfully produce high-resolution holographic imagery that can be projected efficiently in any direction.

"We can use this technology to reflect any desired object," said Prof. Scheuer. "Before, scientists were able to produce only basic shapes—circles and stripes, for example. We used, as our model, the logo of Tel Aviv University, which has a very specific design, and were able to achieve the best results seen yet."
