Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald

When Science, Math and Art Meet: ImageQuilts


Thomas Baruchel’s website shows images derived from complex analysis. John D. Cook used the ImageQuilts software by Edward Tufte and Adam Schwartz to create a large variety of scientific and artistic images.
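
For readers who want to experiment with this kind of imagery, here is a minimal, hypothetical sketch (TypeScript for Node; not the ImageQuilts software or Baruchel's code) of domain colouring, the standard trick of colouring each pixel by the argument of a complex function, here f(z) = z², and writing the result as a PPM image:

```ts
import { writeFileSync } from "node:fs";

// Toy domain colouring: colour each pixel by the argument of
// f(z) = z^2 over the square -2..2 in the complex plane.
const size = 400;
const rows: string[] = [];

for (let j = 0; j < size; j++) {
  const row: number[] = [];
  for (let i = 0; i < size; i++) {
    const re = (i / size) * 4 - 2; // real part, -2..2
    const im = (j / size) * 4 - 2; // imaginary part, -2..2
    // f(z) = z^2 = (re^2 - im^2) + (2*re*im)i
    const hue = (Math.atan2(2 * re * im, re * re - im * im) + Math.PI) / (2 * Math.PI);
    const v = Math.round(hue * 255);
    row.push(v, 255 - v, 128); // map the angle to a simple RGB ramp
  }
  rows.push(row.join(" "));
}

// Plain-text PPM (P3): header, then one RGB triplet per pixel.
writeFileSync("quilt.ppm", `P3\n${size} ${size}\n255\n${rows.join("\n")}\n`);
```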

Rescooped by Dr. Stefan Gruenwald from Fragments of Science

International science collaboration growing at astonishing rate

Even those who follow science may be surprised by how quickly international collaboration in scientific studies is growing, according to new research.

 

The number of multiple-author scientific papers with collaborators from more than one country more than doubled from 1990 to 2015, from 10 to 25 percent, one study found. And 58 more countries participated in international research in 2015 than did so in 1990.

 

"Those are astonishing numbers," said Caroline Wagner, associate professor in the John Glenn College of Public Affairs at The Ohio State University, who helped conduct these studies.

"In the 20th century, we had national systems for conducting research. In this century, we increasingly have a global system."

Wagner presented her research Feb. 17 in Boston at the annual meeting of the American Association for the Advancement of Science. Even though Wagner has studied international collaboration in science for years, the way it has grown so quickly and widely has surprised even her.

 

One unexpected finding was that international collaboration has grown in all fields she has studied. One would expect more cooperation in fields like physics, where expensive equipment (think supercolliders) encourages support from many countries. But in mathematics?

 

"You would think that researchers in math wouldn't have a need to collaborate internationally - but I found they do work together, and at an increasing rate," Wagner said. "The methods of doing research don't determine patterns of collaboration. No matter how scientists do their work, they are collaborating more across borders."

 

In a study published online last month in the journal Scientometrics, Wagner and two co-authors (both from the Netherlands) examined the growth in international collaboration in six fields: astrophysics, mathematical logic, polymer science, seismology, soil science and virology.


Via Mariaschnee
Scooped by Dr. Stefan Gruenwald

Ten trillionths of your suntan comes from beyond our galaxy


Lie on the beach this summer and your body will be bombarded by about a sextillion photons of light per second. Most of these photons, or small packets of energy, originate from the Sun, but a very small fraction have travelled across the Universe for billions of years before ending their existence when they collide with your skin.

 

In a new study to be published in the Astrophysical Journal on August 12th, astronomers have accurately measured the light hitting Earth from outside our galaxy over a very broad wavelength range. The research looked at photons whose wavelengths vary from a fraction of a micron (damaging) to millimeters (harmless). But radiation from outside the galaxy constitutes only ten trillionths of your suntan, so there is no immediate need for alarm.

 

International Centre for Radio Astronomy Research (ICRAR) astrophysicist Professor Simon Driver, who led the study, said we are constantly bombarded by about 10 billion photons per second from intergalactic space when we're outside, day and night.
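
Those two figures are enough to check the headline claim. A quick back-of-envelope sketch, using the article's own rough order-of-magnitude numbers:

```ts
// Sanity check: what fraction of the photons hitting you is extragalactic?
const solarPhotonsPerSecond = 1e21;  // "about a sextillion" from the Sun
const extragalacticPerSecond = 1e10; // "about 10 billion" from beyond the galaxy

const fraction = extragalacticPerSecond / solarPhotonsPerSecond;
console.log(fraction); // 1e-11, i.e. ten trillionths of your suntan
```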

 

"Most of the photons of light hitting us originate from the Sun, whether directly, scattered by the sky, or reflected off dust in the Solar System," he said. "However, we're also bathed in radiation from beyond our galaxy, called the extra-galactic background light. These photons are minted in the cores of stars in distant galaxies, and from matter as it spirals into supermassive black holes."

 

Professor Driver, who is based at the University of Western Australia, measured this ambient radiation from the Universe across a wide range of wavelengths by combining deep images from a flotilla of space telescopes.

 

He and collaborators from Arizona State University and Cardiff University collated observations from NASA's Galaxy Evolution Explorer and Wide-field Infrared Survey Explorer telescopes, the Spitzer and Hubble space telescopes, the European Space Agency's Herschel space observatory and Australia's Galaxy And Mass Assembly survey to make the most accurate measurements ever of the extra-galactic background light.

 

While 10 billion photons a second might sound like a lot, Professor Driver said we would have to bask in it for trillions of years before it caused any long-lasting damage.

 

Professor Rogier Windhorst, from Arizona State University, said the Universe also comes with its own inbuilt protection as about half the energy coming from the ultraviolet light of galaxies is converted into a less damaging wavelength by dust grains. "The galaxies themselves provide us with a natural suntan lotion with an SPF of about two," he said.

Scooped by Dr. Stefan Gruenwald

Gaian bottleneck: Alien life on most exoplanets likely dies young


The Earth is not the only planet in our galaxy with liquid water on its surface and energy sources and nutrients to enable life to form. Although the universe is filled with stars and planets conducive to life, the absence of any evidence for alien life suggests that even if the emergence of life is easy, its persistence may be difficult.

 

Recent work challenges conventional views that physics-based habitable zones provide stable conditions for life for many billions of years. Although the cottage industry of habitable-zone modellers can turn various knobs that control atmospheric and geophysical properties to stabilise planets over short timescales, they have mostly ignored the role of biology in keeping planets habitable over billions of years. This is in part because the complexities of interactions between microbial communities that keep ecosystems stable are not sufficiently understood.

 

Scientists now hypothesize that even if life does emerge on a planet, it rarely evolves quickly enough to regulate greenhouse gases, and thereby keep surface temperatures compatible with liquid water and habitability.

Rescooped by Dr. Stefan Gruenwald from Popular Science

Online collaboration: Scientists and the social network

Giant academic social networks have taken off to a degree that no one expected even a few years ago. A Nature survey explores why.

Via Neelima Sinha
Rescooped by Dr. Stefan Gruenwald from Papers

If the World Began Again, Would Life as We Know It Exist?


Experiments in evolution are exploring what would happen if we rewound the tape of life.


Via Complexity Digest
Scooped by Dr. Stefan Gruenwald

Wellcome Image Awards 2015 showcase breathtaking shots of life, death, and science up close

Most wouldn't go looking for magnified cat tongues, sheep stomachs, or parasitoid wasps in a search for gorgeous imagery. But as the finalists for the 2015 Wellcome Image Awards show, these things can be breathtakingly beautiful.


The award showcases the best in science images for the year. "The breath-taking riches of the imagery that science generates are so important in telling stories about research and helping us to understand often abstract concepts," British geneticist, author, and broadcaster Adam Rutherford, one of this year's judges, said in a statement.


"It's not just about imaging the very small either, it's about understanding life, death, sex and disease: the cornerstones of drama and art. Once again, the Wellcome Image Awards celebrate all of this and more with this year’s incredible range of winning images," Rutherford said.


The images are part of the Wellcome Images collections, which are free for non-commercial use and intended to help illustrate scientific concepts and findings.


The winner will be announced at an awards ceremony on March 18. To see previous years' winners, check out Wellcome's website. The 20 finalists for 2015 will be showcased at 11 science centers around Britain. Fans of beauty and science in the United States are in luck, too: MIT's Koch Institute and The University of Texas Medical Branch at Galveston will also show off the winners sometime in March.


Scooped by Dr. Stefan Gruenwald

Charismatic Minifauna of 33 Million Things: The Secret World of Museums


The American Museum of Natural History has a great new video series: Shelf Life. It features the 33,430,000 artifacts and specimens estimated to be in the museum. From their description:


“Shelf Life is a collection for curious minds—opening doors, pulling out drawers, and taking the lids off some of the incredible, rarely-seen items in the American Museum of Natural History. Over the next year, Shelf Life will explore topics like specimen preparation, learn why variety is vital, and meet some of the people who work in the Museum collections.”


A lot of natural history museums are trying to make the invisible visible by turning to video and social media.  The vast majority of a museum’s collection is never seen by anyone besides a tiny group of experts. How do you convince the public that they should care about a bunch of dead stuff? The perception of a lot of people is that museums are about naming and pickling things. Travel to exotic places, find unusual species, and kill them.


This assumes that things are just warehoused in a museum, which is certainly true in one sense. Museums are a long-term, stable library of our past and our present. But a library that stops acquiring and indexing books isn't going to remain relevant.


What’s actually stored in a museum is change that you can touch and measure. The CDC is using museum specimens to track human pathogens and diseases over space and time. Ecologists are looking at Hawaiian birds collected and preserved 100 years ago (now extinct) to see if they can find a way to protect today’s Galapagos species from canarypox. Preserved insects helped us figure out, via advanced molecular techniques, that dinosaurs didn’t have lice.


The video series also makes some of the work that goes into maintaining a collection visible. You can’t just put something in a jar and walk away; constant maintenance and care helps to make sure that we can still see insects collected by Darwin, or plants from Linnaeus’ cabinet.

Rescooped by Dr. Stefan Gruenwald from Tracking the Future

How do you build a large-scale quantum computer?


How do you build a universal quantum computer? Turns out, this question was addressed by theoretical physicists about 15 years ago. The answer was laid out in a research paper and has become known as the DiVincenzo criteria. The prescription is pretty clear at a glance; yet in practice the physical implementation of a full-scale universal quantum computer remains an extraordinary challenge.


To glimpse the difficulty of this task, consider the guts of a would-be quantum computer. The computational heart is composed of multiple quantum bits, or qubits, that can each store 0 and 1 at the same time. The qubits can become “entangled,” or correlated in ways that are impossible in conventional devices. A quantum computing device must create and maintain these quantum connections in order to have a speed and storage advantage over any conventional computer. That’s the upside. The difficulty arises because harnessing entanglement for computation only works when the qubits are almost completely isolated from the outside world. Isolation and control become much more difficult as more and more qubits are added into the computer. Basically, as quantum systems are made bigger, they generally lose their quantum-ness.


In pursuit of a quantum computer, scientists have gained amazing control over various quantum systems. One leading platform in this broad field of research is trapped atomic ions, where nearly 20 qubits have been juxtaposed in a single quantum register. However, scaling this or any other type of qubit to much larger numbers while still contained in a single register will become increasingly difficult, as the connections will become too numerous to be reliable.
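
A rough way to see why a single register runs out of steam: the number of possible pairwise connections grows quadratically with the number of qubits. A minimal illustration (my arithmetic, not a figure from the research):

```ts
// Fully connecting N qubits in one register means N*(N-1)/2 pairwise links.
const pairwiseLinks = (n: number): number => (n * (n - 1)) / 2;

for (const n of [20, 100, 1_000]) {
  console.log(`${n} qubits -> ${pairwiseLinks(n)} possible pairwise links`);
}
// 20 qubits -> 190; 100 qubits -> 4950; 1000 qubits -> 499500
```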


Physicists led by ion-trapper Christopher Monroe at the JQI have now proposed a modular quantum computer architecture that promises scalability to much larger numbers of qubits. This research is described in the journal Physical Review A (reference below), a topical journal of the American Physical Society. The components of this architecture have individually been tested and are available, making it a promising approach. In the paper, the authors present expected performance and scaling calculations, demonstrating that their architecture is not only viable, but in some ways, preferable when compared to related schemes.

Individual qubit modules are at the computational center of this design, each one consisting of a small crystal of perhaps 10-100 trapped ions confined with electromagnetic fields. Qubits are stored in each atomic ion’s internal energy levels. Logical gates can be performed locally within a single module, and two or more ions can be entangled using the collective properties of the ions in a module.


One or more qubits from the ion trap modules are then networked through a second layer of optical fiber photonic interconnects. This higher-level layer hybridizes photonic and ion-trap technology, where the quantum state of the ion qubits is linked to that of the photons that the ions themselves emit. Photonics is a natural choice as an information bus as it is proven technology and already used for conventional information flow. In this design, the fibers are directed to a reconfigurable switch, so that any set of modules could be connected.


The switch system, which incorporates special micro-electromechanical (MEMS) mirrors to direct light into different fiber ports, would allow for entanglement between arbitrary modules and on-demand distribution of quantum information.


Via Szabolcs Kósa
Andreas Pappas's curator insight, March 28, 2014 4:40 AM

This article shows how scientists can increase the scale of quantum machines while still making them behave quantum mechanically, by reading the qubits with lasers instead of conventional wiring.

Scooped by Dr. Stefan Gruenwald

Interactive Scientific Visualizations On The Web — D3


When D3 came out in 2011, it became clear pretty quickly that it was going to be a powerful tool for creating data visualizations. But it’s certainly not the first — or only — tool. Why did it succeed when so many other libraries have failed?


First of all, it works on the web. Data visualizations are only good if people see them, and there’s no better place to see them than on the internet, in your browser. Protovis was the first library to make any real headway in this direction, despite other libraries and services that tried. Manyeyes is cool, but it lacks graphic flexibility and the resulting visualizations can’t just live anywhere seamlessly.

Prefuse and Flare (both predecessors to D3) are nice, but neither one runs in a browser without a plugin. Quadrigram (previously Impure) has the same plugin problem.

 
Another reason it has worked so well is its flexibility. Since it works seamlessly with existing web technologies and can manipulate any part of the document object model, it is as flexible as the client-side web technology stack (HTML, CSS, SVG).


This gives it huge advantages over other tools because it can look like anything you want, and it isn’t limited to small regions of a webpage like Processing.js, Paper.js, Raphael.js, or other canvas- or SVG-only libraries. It also takes advantage of built-in functionality that the browser has, simplifying the developer’s job, especially for mouse interaction.
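
As a taste of that flexibility, here is a minimal data join, the idiom at the heart of D3 (a sketch assuming a browser page with the d3 package available; the positions and sizes are arbitrary choices):

```ts
import * as d3 from "d3";

// Bind an array to SVG circles, one element per datum.
const data = [4, 8, 15, 16, 23, 42];

d3.select("body")
  .append("svg")
  .attr("width", 320)
  .attr("height", 100)
  .selectAll("circle")
  .data(data)
  .enter()                             // one placeholder per unmatched datum
  .append("circle")                    // materialise it as an SVG circle
  .attr("cx", (_d, i) => 25 + i * 50)  // position by index
  .attr("cy", 50)
  .attr("r", (d) => Math.sqrt(d) * 3); // radius encodes the value
```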

 
All of these features have been timed perfectly to coincide with the rise of new browsers and a push towards documents created using open standards rather than relatively walled-in plugins. The death of Internet Explorer as the top browser plays no small role in this, and the rendering and JavaScript engines in other browsers have made huge strides with their newfound attention. Some of this momentum has carried over to D3 as a way to take advantage of the new features and technology buzz.

 
But snazzy new technologies that work seamlessly aren’t the only reason that D3 has become successful.


Great documentation, examples, community, and the accessibility of Mike Bostock have all played major roles in its rise to prominence. Without these components, D3 would likely have taken much longer to catch on.

Rescooped by Dr. Stefan Gruenwald from Tracking the Future

Brainlike Computers Are Learning From Experience


Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.


The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.


The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.


In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.


Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.


“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.


Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of 1s and 0s. They generally store that information separately in what is known, colloquially, as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.


The data — for instance, temperatures for a climate model or letters for word processing — are shuttled in and out of the processor’s short-term memory while the computer carries out the programmed action. The result is then moved to its main memory.


The new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.


They are not “programmed.” Rather the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows in to the chip, causing them to change their values and to “spike.” That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions.
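
The weight-and-spike loop described above can be caricatured in a few lines. This is an illustrative toy, not any vendor's actual chip logic:

```ts
// A leaky integrate-and-fire neuron whose input weights strengthen
// whenever an input spike helps trigger an output spike (a crude
// Hebbian rule: "learning" without explicit programming).
class SpikingNeuron {
  weights: number[];
  potential = 0;

  constructor(nInputs: number, readonly threshold = 1.0, readonly leak = 0.9) {
    this.weights = new Array(nInputs).fill(0.2);
  }

  step(inputSpikes: number[]): boolean {
    // Leak a little charge, then integrate the weighted input spikes.
    this.potential = this.potential * this.leak +
      inputSpikes.reduce((sum, s, i) => sum + s * this.weights[i], 0);

    const fired = this.potential >= this.threshold;
    if (fired) {
      this.potential = 0; // reset after the spike
      // Strengthen the connections that contributed to this spike.
      inputSpikes.forEach((s, i) => { if (s) this.weights[i] += 0.05; });
    }
    return fired;
  }
}

// Feed the same input pattern repeatedly; the neuron fires more and
// more readily as its weights adapt to the data.
const neuron = new SpikingNeuron(3);
for (let t = 0; t < 10; t++) {
  console.log(t, neuron.step([1, 0, 1]), neuron.weights);
}
```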


“Instead of bringing data to computation as we do today, we can now bring computation to data,” said Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort. “Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.”


Via Szabolcs Kósa
VendorFit's curator insight, December 31, 2013 3:27 PM

Artificial intelligence is the holy grail of technological achievement: creating an entity that can learn from its own mistakes and can (independently of programmer intervention) develop new routines and programs. The New York Times claims that the first-ever "learning" computer chip is to be released in 2014, an innovation that has profound consequences for the tech market. When these devices become cheaper, they should allow for robotics and devices that incorporate more detailed sensory input and can parse real objects, like faces, from background noise.

Laura E. Mirian, PhD's curator insight, January 10, 2014 1:16 PM

The Singularity is not far away

Rescooped by Dr. Stefan Gruenwald from Tracking the Future

Aliens, computers and synthetic biology

Our capacity to partner with biology to make useful things is limited by the tools that we can use to specify, design, prototype, test, and analyze natural or engineered biological systems. However, biology has typically been engaged as a "technology of last resort" in attempts to solve problems that other more mature technologies cannot. This lecture will examine some recent progress on virus genome redesign and hidden DNA messages from outer space, building living data storage, logic, and communication systems, and how simple but old and nearly forgotten engineering ideas are helping make biology easier to engineer.


Via Szabolcs Kósa
Rescooped by Dr. Stefan Gruenwald from green infographics

An interactive map of more than 800,000 Scientific Papers that have influenced math and physics most


ArXiv is an online archive that stores hundreds of thousands of scientific papers in physics, mathematics, and other fields. The citations in those papers link to one another, forming a web, but you're not going to see those connections just by sifting through the archive.

 

So physicist Damien George and PhD student Rob Knegjens took it upon themselves to create Paperscape, an interactive infographic that beautifully and intuitively charts the papers.


The infographic is a mass of circles. Each circle represents a paper, and the bigger a circle is, the more highly cited it is. The papers are color-coded by discipline (pink for astrophysics, yellow for math, etc.), and papers that share many of the same citations are placed closer together.
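
The "shared citations, placed closer" rule can be stated in miniature. Paperscape's actual layout is computed by a force-based algorithm, but the similarity intuition behind it looks like this (the reference IDs below are arbitrary examples):

```ts
// Score two papers by the Jaccard overlap of their reference lists.
function jaccard(a: Set<string>, b: Set<string>): number {
  const shared = [...a].filter((ref) => b.has(ref)).length;
  return shared / (a.size + b.size - shared); // shared / union
}

const paperA = new Set(["ref-001", "ref-002", "ref-003"]);
const paperB = new Set(["ref-002", "ref-003", "ref-004"]);

console.log(jaccard(paperA, paperB)); // 0.5 -> drawn close on the map
```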


Via Lauren Moss
Jay Ratcliff's curator insight, September 6, 2013 1:35 PM

This is cool!  It is like the map of the Internet done last year sometime.

I lucked out and found the section about SNA in the lower left hand side of the map.  Look for Network under the Quantitative Finance section, go figure.

Scooped by Dr. Stefan Gruenwald

When Art, Science and Strange Attractors Meet

In the mathematical field of dynamical systems, an attractor is a set of numerical values toward which a system tends to evolve, for a wide variety of starting conditions of the system.[1] System values that get close enough to the attractor values remain close even if slightly disturbed.

 

An attractor is called strange if it has a fractal structure.[1] This is often the case when the dynamics on it are chaotic, but strange nonchaotic attractors also exist. If a strange attractor is chaotic, exhibiting sensitive dependence on initial conditions, then any two arbitrarily close alternative initial points on the attractor, after any of various numbers of iterations, will lead to points that are arbitrarily far apart (subject to the confines of the attractor), and after any of various other numbers of iterations will lead to points that are arbitrarily close together. Thus a dynamic system with a chaotic attractor is locally unstable yet globally stable: once some sequences have entered the attractor, nearby points diverge from one another but never depart from the attractor.[5]

 

The term strange attractor was coined by David Ruelle and Floris Takens to describe the attractor resulting from a series of bifurcations of a system describing fluid flow.[6] Strange attractors are often differentiable in a few directions, but some are like a Cantor dust, and therefore not differentiable. Strange attractors may also be found in presence of noise, where they may be shown to support invariant random probability measures of Sinai–Ruelle–Bowen type.[7]

 

Examples of strange attractors include the double-scroll attractor, Hénon attractor, Rössler attractor, Tamari attractor, and the Lorenz attractor.
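
The Lorenz attractor, the last example above, is easy to generate numerically. A minimal sketch using a simple Euler integration of the three coupled equations dx/dt = σ(y − x), dy/dt = x(ρ − z) − y, dz/dt = xy − βz:

```ts
// Classic parameters sigma=10, rho=28, beta=8/3 give the famous butterfly.
const sigma = 10, rho = 28, beta = 8 / 3, dt = 0.01;

let x = 1, y = 1, z = 1;
const trajectory: [number, number, number][] = [];

for (let i = 0; i < 10_000; i++) {
  const dx = sigma * (y - x);
  const dy = x * (rho - z) - y;
  const dz = x * y - beta * z;
  x += dx * dt;
  y += dy * dt;
  z += dz * dt;
  trajectory.push([x, y, z]);
}

// The orbit stays bounded on the attractor but never settles or repeats.
console.log(trajectory[trajectory.length - 1]);
```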

Rescooped by Dr. Stefan Gruenwald from SciFrye

Units of measure are getting a fundamental upgrade


If scientists had sacred objects, this would be one of them: a single, closely guarded 137-year-old cylinder of metal, housed in a vault outside of Paris. It is a prototype that precisely defines a kilogram of mass everywhere in the universe. A kilogram of ground beef at the grocery store has the same mass as this one special hunk of metal, an alloy of platinum and iridium. A 60-kilogram woman has a mass 60 times as much. Even far-flung astronomical objects such as comets are measured relative to this all-important cylinder: Comet 67P/Churyumov–Gerasimenko, which was recently visited by the European Space Agency’s Rosetta spacecraft (SN: 2/21/15, p. 6), has a mass of about 10 trillion such cylinders.

 

But there’s nothing special about that piece of metal, and its mass isn’t even perfectly constant — scratches or gunk collecting on its surface could change its size subtly (SN: 11/20/10, p. 12). And then a kilogram of beef would be slightly more or less meat than it was before. That difference would be too small to matter when flipping burgers, but for precise scientific measurements, a tiny shift in the definition of the kilogram could cause big problems.

 

That issue nags at some researchers. They would prefer to define important units — including kilograms, meters and seconds — using immutable properties of nature, rather than arbitrary lengths, masses and other quantities dreamed up by scientists. If humans were to make contact with aliens and compare systems of units, says physicist Stephan Schlamminger, “we’d be the laughingstock of the galaxy.”

 

To set things right, metrologists — a rare breed of scientist obsessed with precise measurements — are revamping the system. Soon, they will use fundamental constants of nature — unchanging numbers such as the speed of light, the charge of an electron and the quantum mechanical Planck constant — to calibrate their rulers, scales and thermometers. They’ve already gotten rid of an artificial standard that used to define the meter — an engraved platinum-iridium bar. In 2018, they plan to jettison the Parisian kilogram cylinder, too.
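
To get a feel for how a fixed constant can pin down a unit, here is a back-of-envelope illustration (my arithmetic, not the metrologists' actual procedure): with the Planck constant h and the speed of light c fixed exactly, one kilogram corresponds to a definite photon frequency through E = mc² = hν, which is the kind of relation that lets experiments realize mass from constants.

```ts
const h = 6.62607015e-34; // Planck constant, J*s (the value later fixed exactly)
const c = 299792458;      // speed of light, m/s (exact)

const nu = (1 * c ** 2) / h; // frequency whose photon energy equals 1 kg of mass
console.log(nu.toExponential(3), "Hz"); // ~1.356e+50 Hz
```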


Via Kim Frye
Scooped by Dr. Stefan Gruenwald

Australia's gun laws stopped mass shootings and reduced homicides, study finds


Since major gun law reforms were introduced in Australia, mass shootings have not only stopped, but there has also been an accelerating reduction in rates of firearm-related homicide and suicide, a landmark study has found. It has been two decades since rapid-fire long guns were banned in Australia, including those already in private ownership, and 19 years since the mandatory government buyback of prohibited firearms at market price was introduced. A handgun buyback program followed in 2003.

 

Researchers from the University of Sydney and Macquarie University analysed data on intentional suicide and homicide deaths caused by firearms from the National Injury Surveillance Unit, and intentional firearm death rates from the Australian Bureau of Statistics. For the period after the 1996 reforms, rates of total homicides and suicides from all causes were also examined to consider whether people may have substituted guns for alternative means.

 

From 1979 to 1996, the average annual rate of total non-firearm suicide and homicide deaths was rising at 2.1% per year. Since then, the average annual rate of total non-firearm suicide and homicide deaths has been declining by 1.4%, with the researchers concluding there was no evidence of murderers moving to other methods, and that the same was true for suicide.

 

The average decline in total firearm deaths accelerated significantly, from a 3% decline annually before the reforms to a 5% decline afterwards, the study found.

 

In the 18 years to 1996, Australia experienced 13 fatal mass shootings in which 104 victims were killed and at least another 52 were wounded. There have been no fatal mass shootings since that time, with the study defining a mass shooting as having at least five victims.

 

The findings were published in the Journal of the American Medical Association on Thursday, days after the US Senate rejected a string of Republican and Democrat measures to restrict guns. The reforms were proposed in response to the deadliest mass shooting in US history, at an LGBTI nightclub in Orlando.

 

The 1996 reforms introduced in Australia came just months after a mass shooting known as the Port Arthur massacre, when Martin Bryant used two semi-automatic rifles to kill 35 people and wound 23 others in Port Arthur, Tasmania. The reforms had the support of all major political parties.

Scooped by Dr. Stefan Gruenwald

All scientific papers to be free by 2020 under EU proposals


All publicly funded scientific papers published in Europe could be made free to access by 2020, under a “life-changing” reform ordered by the European Union’s science chief, Carlos Moedas. The Competitiveness Council, a gathering of ministers of science, innovation, trade and industry, agreed on the target following a two-day meeting in Brussels last week.

 

The move means publications of the results of research supported by public and public-private funds would be freely available to and reusable by anyone. It could affect the paid-for subscription model used by many scientific journals, and undermine the common practice of releasing reports under embargo.

 

At present the results of some publicly funded research are not accessible to people outside universities and similar institutions without one-off payments, which means that many teachers, doctors, entrepreneurs and others do not have access to the latest scientific insights. In the UK, funding bodies generally require that researchers publish under open access terms, with open access publishing fees paid from the researcher’s grant.

 

The council said this data must be made accessible unless there were well-founded reasons for not doing so, such as intellectual property rights or security or privacy issues.

The changes are part of a broader set of recommendations in support of Open Science, a concept that also includes improved storage of and access to research data, Science magazine reports.

 

Open Science has been heavily lobbied for by the Dutch government, which currently holds the presidency of the Council of the EU, as well as by Moedas, the European commissioner for research and innovation. Moedas told a press conference: “We probably don’t realize it yet, but what the Dutch presidency has achieved is unique and huge. The commission is totally committed to help move this forward.”

AcademicLabs's curator insight, June 1, 2016 3:00 AM

Finally, all research publications free of charge for the general public. Huge breakthrough! How will this impact:
- the publishing industry? Will the combined revenue sources of the article processing fees paid by researchers and the extra, advanced digital services make up for the losses?
- industry, which now has a huge reservoir of knowledge freely available?
- research communication? With a potentially larger audience and thus impact, will publications include extra sections to explain the findings and background in 'human' language? Should each of them include an infographic to facilitate dissemination and uptake of the findings?
- academics transitioning to industry? Will they be knowledge consultants, analyzing relevant literature and proposing an action plan as a service for industry?
- ...
Exciting times! Looking forward to participating in this knowledge ecosystem with AcademicLabs!

Scooped by Dr. Stefan Gruenwald

Gold chip ion-trap captures Science Photography Competition's top prize - EPSRC website

An image of a gold chip that traps ions for use in quantum computing has come first in EPSRC's third science photography competition.

 

 

‘Microwave ion-trap chip for quantum computation’, by Diana Prado Lopes Aude Craik and Norbert Linke, from the University of Oxford, shows the chip’s gold wire-bonds connected to electrodes which transmit electric fields to trap single atomic ions a mere 100 microns above the device’s surface. The image, taken through a microscope in one of the university's cleanrooms, came first in the Eureka category as well as winning overall against many other stunning pictures, featuring research in action, in the EPSRC competition – now in its third year.

 

Doctoral student Diana Prado Lopes Aude Craik, explained how the chip works: “When electric potentials are applied to the chip’s gold electrodes, single atomic ions can be trapped. These ions are used as quantum bits (‘qubits’), units which store and process information in a quantum computer. Two energy states of the ions act as the ‘0’ and ‘1’ states of these qubits.

 

“Slotted electrodes on the chip deliver microwave radiation to the ions, allowing us to manipulate the stored quantum information by exciting transitions between the ‘0’ and ‘1’ energy states. This device was micro-fabricated using photolithography, a technique similar to photographic film development. Gold wire-bonds connect the electrodes to pads around the device through which signals can be applied. You can see the wire-bonding needle in the top-left corner of the image. The Oxford team recently achieved the world’s highest-performing qubits and quantum logic operations.”

 

The development of the ion-trap chip was funded jointly by the EPSRC and the US Army Research Office.

The competition’s five categories were: Eureka, Equipment, People, Innovation, and Weird and Wonderful. Winning images feature:

  • A spectacular 9.5 meter wave created to wow crowds at the FloWave Ocean Energy Research Facility at the University of Edinburgh
  • An iCub humanoid robot learning about how to play from a baby as part of robotics research taking place at Aberystwyth University
  • The intense, blinding light of plasma formed by an ultrafast laser being used to process glass at the EPSRC Centre for Innovative Manufacturing in Ultra Precision at the University of Cambridge
  • A beautiful rotating jet of viscoelastic liquid resembling a spinning dancer, which demonstrates the effect of adding a tiny amount of polymer to water; an example of fluid dynamics research at Imperial College London

 

One of the judges, Professor Robert Winston, said: “This competition helps us engage with academics and these stunning images are a great way to connect the general public with research they fund, and inspire everyone to take an interest in science and engineering.”

Scooped by Dr. Stefan Gruenwald

Americans are ten times more likely to die from firearms than citizens of other nations


Gun deaths are a serious public health issue in the United States and the scope of the problem is often difficult to illustrate. A new study published in The American Journal of Medicine lays out the risk in concrete terms. When compared to 22 other high-income nations, Americans are ten times more likely to be killed by a gun than their counterparts in the developed world. Specifically, gun homicide rates are 25 times higher in the U.S. and, while the overall suicide rate is on par with other high-income nations, the U.S. gun suicide rate is eight times higher.


In order to help put America's relationship with guns into perspective, researchers from the University of Nevada-Reno and the Harvard T.H. Chan School of Public Health analyzed mortality data gathered by the World Health Organization in 2010. Investigators found that despite having rates of nonlethal crime similar to those of other high-income countries, the U.S. has much higher rates of lethal violence, mostly driven by far higher rates of gun-related homicide.


The study reveals some stark truths about living and dying in the United States. When compared to other high-income nations, as an American you are:

• Seven times more likely to be violently killed

• Twenty-five times more likely to be violently killed with a gun

• Six times more likely to be accidentally killed with a gun

• Eight times more likely to commit suicide using a gun

• Ten times more likely to die from a firearm death overall


Homicide is the second leading cause of death for Americans 15 to 24 years of age, and the third leading cause of death among those 25 to 34 years of age. Investigators found that for these two groups, the risk relative to their counterparts in other developed nations is alarmingly elevated. Americans 15 to 24 years of age are 49 times more likely to die from firearm homicide compared to similarly aged young people in other high-income nations. For those aged 25 to 34, the risk is 32 times higher.

Scooped by Dr. Stefan Gruenwald

The #1 reason people die early, in each country


You're probably aware that heart disease and cancer are far and away the leading causes of death in America. But globally the picture is more complicated: the map above shows the leading cause of lost years of life by country. The data comes from the Global Burden of Disease study, whose 2013 installment was released just a few weeks ago. It's worth stressing that "cause of lost years of life" and "cause of death" aren't identical. For example, deaths from preterm births may cause more lost years of life in a country than deaths from heart disease even if heart disease is the leading cause of death. Deaths from preterm births amount to many decades of lost life, whereas heart disease tends to develop much later on.


But that makes the fact that heart disease is the leading cause of lost life in so many countries all the more striking, and indicative of those countries' successes in reducing childhood mortality. By contrast, in many lower-income countries, the leading cause is something like malaria, diarrhea, preterm birth, HIV/AIDS, or violence, which all typically afflict people earlier in life than heart disease or stroke. We've made considerable progress in fighting childhood mortality across the globe in recent decades, but there's still much work left to be done.
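
The distinction between cause of death and cause of lost years is easy to see with toy numbers (hypothetical figures, not values from the Global Burden of Disease data):

```ts
// A rarer cause of death can still top the lost-years ranking
// if it strikes much earlier in life.
const lifeExpectancy = 80;

const causes = [
  { cause: "heart disease", deaths: 1000, typicalAgeAtDeath: 75 },
  { cause: "preterm birth", deaths: 100,  typicalAgeAtDeath: 0  },
];

for (const c of causes) {
  const yearsLost = c.deaths * (lifeExpectancy - c.typicalAgeAtDeath);
  console.log(`${c.cause}: ${c.deaths} deaths, ${yearsLost} years of life lost`);
}
// heart disease: 1000 deaths, 5000 years of life lost
// preterm birth: 100 deaths, 8000 years of life lost
```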

Scooped by Dr. Stefan Gruenwald

The top 100 papers: NATURE magazine explores the most-cited research papers of all time


The discovery of high-temperature superconductors, the determination of DNA’s double-helix structure, the first observations that the expansion of the Universe is accelerating — all of these breakthroughs won Nobel prizes and international acclaim. Yet none of the papers that announced them comes anywhere close to ranking among the 100 most highly cited papers of all time.


Citations, in which one paper refers to earlier works, are the standard means by which authors acknowledge the source of their methods, ideas and findings, and are often used as a rough measure of a paper’s importance. Fifty years ago, Eugene Garfield published the Science Citation Index (SCI), the first systematic effort to track citations in the scientific literature. To mark the anniversary, Nature asked Thomson Reuters, which now owns the SCI, to list the 100 most highly cited papers of all time (see the full list at Web of Science Top 100.xls). The search covered all of Thomson Reuters’ Web of Science, an online version of the SCI that also includes databases covering the social sciences, arts and humanities, conference proceedings and some books. It lists papers published from 1900 to the present day.


The exercise revealed some surprises, not least that it takes a staggering 12,119 citations to rank in the top 100 — and that many of the world’s most famous papers do not make the cut. A few that do, such as the first observation [1] of carbon nanotubes (number 36), are indeed classic discoveries. But the vast majority describe experimental methods or software that have become essential in their fields.


The most cited work in history, for example, is a 1951 paper [2] describing an assay to determine the amount of protein in a solution. It has now gathered more than 305,000 citations — a recognition that always puzzled its lead author, the late US biochemist Oliver Lowry.

Rescooped by Dr. Stefan Gruenwald from Tracking the Future

Computer science: The learning machines


Using massive amounts of data to recognize photos and speech, deep-learning computers are taking a big step towards true artificial intelligence. Three years ago, researchers at the secretive Google X lab in Mountain View, California, extracted some 10 million still images from YouTube videos and fed them into Google Brain — a network of 1,000 computers programmed to soak up the world much as a human toddler does. After three days looking for recurring patterns, Google Brain decided, all on its own, that there were certain repeating categories it could identify: human faces, human bodies and … cats [1].


Google Brain's discovery that the Internet is full of cat videos provoked a flurry of jokes from journalists. But it was also a landmark in the resurgence of deep learning: a three-decade-old technique in which massive amounts of data and processing power help computers to crack messy problems that humans solve almost intuitively, from recognizing faces to understanding language.


Deep learning itself is a revival of an even older idea for computing: neural networks. These systems, loosely inspired by the densely interconnected neurons of the brain, mimic human learning by changing the strength of simulated neural connections on the basis of experience. Google Brain, with about 1 million simulated neurons and 1 billion simulated connections, was ten times larger than any deep neural network before it. Project founder Andrew Ng, now director of the Artificial Intelligence Laboratory at Stanford University in California, has gone on to make deep-learning systems ten times larger again.
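
"Changing the strength of simulated neural connections on the basis of experience" can be shown in its tiniest form: one simulated neuron learning logical OR with the classic perceptron update rule (a toy, nowhere near the scale of Google Brain's billion connections):

```ts
let w = [0, 0];
let b = 0;
const lr = 0.5; // learning rate

const examples: [number[], number][] = [
  [[0, 0], 0],
  [[0, 1], 1],
  [[1, 0], 1],
  [[1, 1], 1],
];

for (let epoch = 0; epoch < 20; epoch++) {
  for (const [x, target] of examples) {
    const out = w[0] * x[0] + w[1] * x[1] + b > 0 ? 1 : 0;
    const err = target - out; // experience: how wrong were we?
    w[0] += lr * err * x[0];  // strengthen or weaken connections
    w[1] += lr * err * x[1];
    b += lr * err;
  }
}

console.log(w, b); // the learned rule now lives in the weights
```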


Such advances make for exciting times in artificial intelligence (AI) — the often-frustrating attempt to get computers to think like humans. In the past few years, companies such as Google, Apple and IBM have been aggressively snapping up start-up companies and researchers with deep-learning expertise. For everyday consumers, the results include software better able to sort through photos, understand spoken commands and translate text from foreign languages. For scientists and industry, deep-learning computers can search for potential drug candidates, map real neural networks in the brain or predict the functions of proteins.



Via Szabolcs Kósa
R Schumacher & Associates LLC's curator insight, January 15, 2014 1:43 PM

The monikers such as "deep learning" may be new, but Artificial Intelligence has always been the Holy Grail of computer science.  The applications are many, and the path is becoming less of an uphill climb.  


Scooped by Dr. Stefan Gruenwald

The amazing history of the Nobel Prize, told in maps and charts


The U.S. has 4 percent of the world's population and 34 percent of its Nobel laureates. That's the most of any country in the world, by far: next-highest ranked is Britain with 120 laureates.


Up top is a heat map showing which countries have had the most Nobel laureates in the prize's history. Most countries have zero Nobel laureates. The faint yellow countries have received exactly one Nobel in the 113 years since the first prize was given. There's a small cluster of orange countries with maybe 10 to 15 Nobel laureates. A very tiny group of dark red countries have taken most of the Nobel prizes.


Just over 1,000 Nobels have been awarded since the prize was first established in 1901. Most of those have been in sciences but there's also the literature prize and, most famously, the peace prize. We've added up every Nobel awarded since 1901 and separated them out by country. The results are fascinating – and revealing.


A stunning 83 percent of all Nobel laureates have come from Western countries (that means Western Europe, the United States, Canada, Australia or New Zealand). We'll dive into some of the statistics of the Nobel below. But first here's a map of the prizes broken down by region.

Scooped by Dr. Stefan Gruenwald

The Most Amazing Science Images of 2013


From slow-motion footage on YouTube to deep-space satellite imagery to weird washcloths on the International Space Station, this was a big year for science.

Rescooped by Dr. Stefan Gruenwald from Papers

Scientific Data Has Become So Complex, We Have to Invent New Math to Deal With It


Simon DeDeo, a research fellow in applied mathematics and complex systems at the Santa Fe Institute, had a problem. He was collaborating on a new project analyzing 300 years’ worth of data from the archives of London’s Old Bailey, the central criminal court of England and Wales. Granted, there was clean data in the usual straightforward Excel spreadsheet format, including such variables as indictment, verdict, and sentence for each case. But there were also full court transcripts, containing some 10 million words recorded during just under 200,000 trials.

 

“How the hell do you analyze that data?” DeDeo wondered. It wasn’t the size of the data set that was daunting; by big data standards, the size was quite manageable. It was the sheer complexity and lack of formal structure that posed a problem. This “big data” looked nothing like the kinds of traditional data sets the former physicist would have encountered earlier in his career, when the research paradigm involved forming a hypothesis, deciding precisely what one wished to measure, then building an apparatus to make that measurement as accurately as possible.

 

“In physics, you typically have one kind of data and you know the system really well,” said DeDeo. “Now we have this new multimodal data [gleaned] from biological systems and human social systems, and the data is gathered before we even have a hypothesis.” The data is there in all its messy, multi-dimensional glory, waiting to be queried, but how does one know which questions to ask when the scientific method has been turned on its head?
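
One generic first step for such messy text (an illustration of the preprocessing problem, not DeDeo's actual method) is to reduce transcripts to queryable word counts:

```ts
// Turn free text into a term-frequency table that downstream
// analysis can query. Transcripts here are invented examples.
const transcripts = [
  "the prisoner was indicted for stealing a silver watch",
  "the jury found the prisoner guilty and he was sentenced",
];

const counts = new Map<string, number>();
for (const t of transcripts) {
  for (const word of t.toLowerCase().split(/\s+/)) {
    counts.set(word, (counts.get(word) ?? 0) + 1);
  }
}

const top = [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, 3);
console.log(top); // e.g. [["the", 3], ["prisoner", 2], ["was", 2]]
```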


Via Ashish Umre, Complexity Digest
Arjen ten Have's curator insight, October 9, 2013 2:48 PM

This is not so much new work for math as it is where things get interesting: where research becomes truly INTERdisciplinary rather than MULTIdisciplinary. The same holds for bioinformatics. We are developing tools to correct, for instance, MSAs: very simple tricks that deal with the complexity. The biologist has to explain to the mathematician what he wants. It is not about new math, it is about flexibility!

Mark Waser's curator insight, October 10, 2013 4:53 PM

I dislike the title and the initial thrust but the article is well worth reading by the end.