Amazing Science
Scooped by Dr. Stefan Gruenwald!

Researchers Develop Minibuilders, Tiny Robots Capable of 3D Printing Large Buildings


It is amazing how quickly 3D printing technologies have been developing over the last couple of years. Not only are we seeing Moore's-Law-like increases in print speeds while prices drop substantially, but entirely new and innovative approaches seem to emerge each day.

For instance, we have already seen 3D printing drones, combo 3D printer/CNC machines, a 3D printing assembly line, and all sorts of crazy new ways to print with food. Today a unique and quite innovative approach to 3D printing has been unveiled by a team of researchers at the Institute for Advanced Architecture of Catalonia (IAAC), based in Barcelona, Spain.

One problem with today's 3D printers is that their build envelopes are limited by the size of the printer itself: to print a house, you need a 3D printer larger than that house. This severely limits the utility of any one device and means substantial costs for any person or company trying to print at large scale. A team of researchers at IAAC, led by Sasa Jokic and Petr Novikov and including Stuart Maggs, Dori Sadan, Jin Shihui and Cristina Nan, has invented and worked diligently on a method of printing large-scale objects, such as buildings, with mobile 3D printing robots they call Minibuilders.

The Minibuilder lineup consists of three different robotic devices, each with dimensions no larger than 42 cm. Despite their small size, they are capable of printing buildings of almost any proportion. Each robot is responsible for a different function, and all three are required for any large 3D printing project. Working together, these Minibuilders are able to produce large-scale 3D prints without the need for a large-scale 3D printer.

Although the technology may not yet be perfected, the researchers have put in place a stepping stone for a new method of printing buildings and other large objects, one that we are sure will continue to develop.

What do you think about this new 3D printing system? Could you see large buildings and homes eventually being built with a technology like this? Let us know in the Minibuilder forum thread.

Scooped by Dr. Stefan Gruenwald!

A new solution for storing hydrogen fuel for alternative energy


Turning the "hydrogen economy" concept into a reality, even on a small scale, has been a bumpy road, but scientists are developing a novel way to store hydrogen to smooth out the long-awaited transition away from fossil fuels. Their report on a new solid, stable material that can pack in a large amount of hydrogen for use as a fuel appears in the ACS journal Chemistry of Materials.

Umit B. Demirci and colleagues explain that storing hydrogen in solids is a recent development and a promising step toward building a hydrogen economy. That's the idea, originated in the 1970s and promoted by former President George W. Bush, that we replace fossil fuels with hydrogen, which can serve as a clean fuel. Although a promising alternative to conventional energy sources, hydrogen has posed a number of technological challenges that scientists are still overcoming. One of those issues has to do with storage.

Previously, researchers were focused on developing hydrogen-containing liquids or compressing it in gas form. Now, solid storage is showing potential for holding hydrogen in a safe, stable and efficient way. In the latest development on this front, Demirci's team looked to a new kind of material.

They figured out a way to make a novel crystal phase of a material containing lithium, boron and the key ingredient, hydrogen. To check how they could get the hydrogen back out, the scientists heated the material and found that it released hydrogen easily and quickly, with only traces of unwanted by-products.
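
A back-of-envelope calculation gives a sense of why such boranes are attractive storage media. The sketch below assumes the material's formula is LiN2H3BH3 (inferred from the paper's title, "Lithium Hydrazinidoborane"); the exact composition and the usable fraction of hydrogen are of course for the paper to specify:

```python
# Illustrative gravimetric hydrogen capacity of LiN2H3BH3 (formula assumed).
masses = {"Li": 6.941, "N": 14.007, "B": 10.811, "H": 1.008}  # g/mol
formula = {"Li": 1, "N": 2, "B": 1, "H": 6}  # six H atoms in total

molar_mass = sum(masses[el] * n for el, n in formula.items())
h_wt_pct = 100 * masses["H"] * formula["H"] / molar_mass
print(f"molar mass {molar_mass:.1f} g/mol, H capacity {h_wt_pct:.1f} wt%")
```

At roughly 11-12 wt% hydrogen, a material like this would comfortably exceed common targets for onboard hydrogen storage, which is why solid-state boranes draw so much interest.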

More information: "Lithium Hydrazinidoborane: A Polymorphic Material with Potential for Chemical Hydrogen Storage" Chem. Mater., Article ASAP. DOI: 10.1021/cm500980b

Scooped by Dr. Stefan Gruenwald!

Astrobiology: Herschel Detects New Molecules Formed Around Old Stars


Using ESA's Herschel space observatory, astronomers have discovered that a molecule vital for creating water exists in the burning embers of dying Sun-like stars.

When low- to middleweight stars like our Sun approach the end of their lives, they eventually become dense, white dwarf stars. In doing so, they cast off their outer layers of dust and gas into space, creating a kaleidoscope of intricate patterns known as planetary nebulas.

These actually have nothing to do with planets, but were named in the late 18th century by astronomer William Herschel, because they appeared as fuzzy circular objects through his telescope, somewhat like the planets in our Solar System.

Over two centuries later, planetary nebulas studied with William Herschel's namesake, the Herschel space observatory, have yielded a surprising discovery.

Like the dramatic supernova explosions of weightier stars, the death cries of the stars responsible for planetary nebulas also enrich the local interstellar environment with elements from which the next generations of stars are born.

While supernovas are capable of forging the heaviest elements, planetary nebulas contain a large proportion of the lighter 'elements of life' such as carbon, nitrogen, and oxygen, made by nuclear fusion in the parent star.

A star like the Sun steadily burns hydrogen in its core for billions of years. But once the fuel begins to run out, the central star swells into a red giant, becoming unstable and shedding its outer layers to form a planetary nebula.

The remaining core of the star eventually becomes a hot white dwarf pouring out ultraviolet radiation into its surroundings. This intense radiation may destroy molecules that had previously been ejected by the star and that are bound up in the clumps or rings of material seen in the periphery of planetary nebulas.

The harsh radiation was also assumed to restrict the formation of new molecules in those regions. But in two separate studies using Herschel astronomers have discovered that a molecule vital to the formation of water seems to rather like this harsh environment, and perhaps even depends upon it to form. The molecule, known as OH+, is a positively charged combination of single oxygen and hydrogen atoms.

In one study, led by Dr Isabel Aleman of the University of Leiden, the Netherlands, 11 planetary nebulas were analysed and the molecule was found in just three.

Scooped by Dr. Stefan Gruenwald!

Observing kinase activity in live single cells


Ongoing efforts have shown that multicellular systems are best understood as a combination of heterogeneous single cell behaviors. Intrinsic noise generates cell-to-cell variation that can be critical for cellular survival, development and differentiation. In response to changing environments, cells also generate complex signaling dynamics that encode relevant information for gene expression, proliferation or stress responses. Indeed, bulk population dynamics are often qualitatively different from single cell behaviors. As a result, live-cell microscopy has acquired a central role in the study of single-cell biology. Dynamic single cell reporters are essential for live-cell microscopy.

However, the number and type of molecular events that can be dynamically monitored in an individual cell is small. Such reporters have led to the successful measurement of metabolic state, transcription factor localization and even protein activities in live single cells. In the latter category, kinase activities are of particular interest. Kinases are known to regulate multiple and diverse biological functions, including the cell cycle, the innate immune response, development and cell differentiation.

Recently, researchers have developed a novel technology to generate single cell reporters for kinase activity. Their approach is based on the concept of converting phosphorylation into a nucleocytoplasmic shuttling event. In fact, there are numerous examples of phosphorylation-regulated nucleocytoplasmic translocation in naturally occurring proteins. The scientists hypothesized that by understanding this phenomenon they could synthetically engineer single-color kinase reporters for single cells. After exploring the sequence space using the JNK (c-Jun N-terminal kinase) substrate c-Jun, they defined a set of rules for engineering single cell kinase activity reporters, which they named Kinase Translocation Reporters (KTRs). They also showed that the KTR technology is generalizable by implementing KTR sensors for JNK, p38, ERK and PKA, covering different types of kinases, and demonstrated multiplexing capabilities beyond any current method by measuring JNK, p38 and ERK activities simultaneously in live single cells.
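
Because a KTR converts phosphorylation into nuclear export, the standard readout is the ratio of cytoplasmic to nuclear reporter intensity in each segmented cell. A minimal sketch of that quantification, assuming nuclear and whole-cell segmentation masks are already available (the toy one-cell image and intensity values below are illustrative, not from the study):

```python
import numpy as np

def ktr_activity(img, nuc_mask, cell_mask):
    """Cytoplasmic/nuclear ratio of reporter intensity: the usual KTR readout.
    Higher ratio -> more reporter exported from the nucleus -> higher kinase
    activity (for a reporter exported when phosphorylated)."""
    cyto_mask = cell_mask & ~nuc_mask
    return img[cyto_mask].mean() / img[nuc_mask].mean()

# Toy single-cell image: bright nucleus, dimmer cytoplasm (low apparent activity).
img = np.full((10, 10), 50.0)                      # background
cell = np.zeros((10, 10), bool); cell[2:8, 2:8] = True
nuc = np.zeros((10, 10), bool); nuc[4:6, 4:6] = True
img[cell] = 100.0                                  # cytoplasmic reporter signal
img[nuc] = 400.0                                   # nuclear reporter signal
print(round(ktr_activity(img, nuc, cell), 2))      # 0.25
```

Tracking this ratio frame by frame for each cell yields the single-cell kinase activity time series described above.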

This technology opens the possibility of analyzing multiple signaling networks, cell cycle and a broad range of kinase-mediated processes simultaneously in live single cells.

Scooped by Dr. Stefan Gruenwald!

NASA's Earth Day Global Selfie 2014


The year 2014 is a big one for NASA Earth science. Five NASA missions designed to gather critical data about our home planet are launching to space this year. NASA astronauts brought home the first ever images of the whole planet from space. Now NASA satellites capture new images of Earth every second. For Earth Day we created an image of Earth from the ground up while also fostering a collection of portraits of the people of Earth. As those pictures streamed around the world on Earth Day, the individual pictures tagged #GlobalSelfie were collected and used to create a mosaic image of Earth -- a new "Blue Marble" built bit by bit with your photos. 
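
The mosaic-building step can be sketched with the classic photomosaic technique: divide the target image into cells and, for each cell, pick the submitted photo whose average color is closest. This is only an illustrative reconstruction of the idea, not NASA's actual pipeline:

```python
import numpy as np

def build_mosaic(target, tiles):
    """Replace each pixel of a small target image with the tile whose mean
    RGB color is closest -- the basic photomosaic technique."""
    tile_means = np.array([t.reshape(-1, 3).mean(axis=0) for t in tiles])
    rows = []
    for y in range(target.shape[0]):
        row = []
        for x in range(target.shape[1]):
            # Nearest tile by squared distance in RGB space.
            idx = np.argmin(((tile_means - target[y, x]) ** 2).sum(axis=1))
            row.append(tiles[idx])
        rows.append(np.concatenate(row, axis=1))
    return np.concatenate(rows, axis=0)

# Toy example: 2x2 target, solid-color 4x4 "photos" as tiles.
tiles = [np.full((4, 4, 3), c, float)
         for c in ([255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 255])]
target = np.array([[[250, 10, 10], [10, 250, 10]],
                   [[10, 10, 250], [240, 240, 240]]], float)
mosaic = build_mosaic(target, tiles)
print(mosaic.shape)  # (8, 8, 3)
```

With tens of thousands of #GlobalSelfie photos as tiles, the same matching step at a much larger scale produces a "Blue Marble" built from individual portraits.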

Rescooped by Dr. Stefan Gruenwald from Jeff Morris!

Is supersonic passenger travel set to make a comeback?


On October 24, 2003, the last Concorde jet went out of service. What began as a promise of supersonic travel for all ended as a museum exhibit of a false dawn. However, that may be changing, with companies such as Aerion and Spike Aerospace looking to take business jets supersonic.

At Aviation 2014, an annual event of the American Institute of Aeronautics and Astronautics, NASA presented examples of the space agency’s work on new technologies that could lead to a revival of civilian supersonic travel within the next 15 years.

Via Jeff Morris
Eric Chan Wei Chiang's curator insight, June 25, 2014 9:13 PM

Farewell Concorde but welcome Aerion, Spike and the next generation of supersonic commercial aircraft.



Tekrighter's curator insight, June 26, 2014 9:44 AM

I was sad to see the Concorde go out of business. I hope the new version will be successful, and look forward to riding it!

Scooped by Dr. Stefan Gruenwald!

Eric Ladizinsky: Evolving Scalable Quantum Computers

Eric Ladizinsky visited the Quantum AI Lab at Google LA to give a talk "Evolving Scalable Quantum Computers." This talk took place on March 5, 2014.

"The nineteenth century was known as the machine age, the twentieth century will go down in history as the information age. I believe the twenty-first century will be the quantum age." (Paul Davies)

Quantum computation represents a fundamental paradigm shift in information processing. By harnessing strange, counterintuitive quantum phenomena, quantum computers promise computational capabilities far exceeding those of any conceivable classical computing system for certain applications. These applications may include the core hard problems in machine learning and artificial intelligence, complex optimization, and simulation of molecular dynamics, the solutions of which could provide huge benefits to humanity.

Realizing this potential requires a concerted scientific and technological effort combining multiple disciplines and institutions, with quantum processor designs and algorithms evolving rapidly as learning accumulates. D-Wave Systems has built such a mini-Manhattan-project-like effort and, in just under a decade, created the first special-purpose quantum computers in a scalable architecture that can begin to address real-world problems. D-Wave's first-generation quantum processors (now being explored in conjunction with Google/NASA as well as Lockheed and USC) are showing encouraging signs of being at a tipping point, matching state-of-the-art solvers for some benchmark problems (and sometimes exceeding them), portending the exciting possibility that in a few years D-Wave processors could exceed the capabilities of any existing classical computing systems for certain classes of important problems in machine learning and optimization.
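
The optimization problems these processors target are typically cast as QUBO (quadratic unconstrained binary optimization) instances, which the hardware minimizes by quantum annealing. As a purely classical illustration of the problem class (not of D-Wave's quantum hardware), here is a simulated-annealing sketch on a toy three-variable QUBO:

```python
import math
import random

# Toy QUBO: minimize E(x) = sum over (i, j) of Q[i, j] * x_i * x_j, x in {0,1}^3.
# For this Q the unique optimum is x = (1, 0, 1) with energy -2.
Q = {(0, 0): -1, (1, 1): -1, (2, 2): -1,
     (0, 1): 2, (1, 2): 2}

def energy(x):
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

def anneal(n_vars=3, steps=2000, t0=2.0, seed=1):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_vars)]
    e = energy(x)
    best, best_e = list(x), e
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3   # linear cooling schedule
        i = rng.randrange(n_vars)
        x[i] ^= 1                            # propose a single bit flip
        e_new = energy(x)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new                        # accept the move
            if e < best_e:
                best, best_e = list(x), e
        else:
            x[i] ^= 1                        # reject: undo the flip
    return best, best_e

best, best_e = anneal()
print(best, best_e)
```

A quantum annealer explores the same energy landscape, but uses quantum fluctuations (tunneling) rather than thermal ones to escape local minima.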

In this lecture, Eric Ladizinsky, Co-Founder and Chief Scientist at D-Wave, describes the basic ideas behind quantum computation, D-Wave's unique approach, and the current status and future development of D-Wave's processors. Included are answers to some frequently asked questions about the D-Wave processors, clarifying some common misconceptions about quantum mechanics, quantum computing, and D-Wave quantum computers.

Speaker Info: Eric Ladizinsky is a physicist, Co-Founder, and Chief Scientist of D-Wave Systems. Prior to his involvement with D-Wave, Mr. Ladizinsky was a senior member of the technical staff at TRW's Superconducting Electronics Organization (SCEO), where he contributed to building the world's most advanced superconducting integrated circuit capability, intended to enable superconducting supercomputers to extend Moore's Law beyond CMOS. In 2000, with the idea of creating a quantum computing mini-Manhattan-project-like effort, he conceived, proposed, won and ran a multi-million-dollar, multi-institutional DARPA program to develop a prototype quantum computer using (macroscopic quantum) superconducting circuits. Frustrated with the pace of that effort, Mr. Ladizinsky teamed up in 2004 with D-Wave's original founder, Geordie Rose, to transform the then primarily IP-based company into a technology development company modeled on his mini-Manhattan-project vision. He is also responsible for designing the superconducting (SC) IC process that underlies the D-Wave quantum processors, and for transferring that process to state-of-the-art semiconductor production facilities to create the most advanced SC IC process in the world.

Rescooped by Dr. Stefan Gruenwald from DNA and RNA Research!

Therapeutic siRNA Interventions: What we have learned


Treatments based on RNA interference are improving now that technologies are delivering longer-lasting gene silencing.

The 2006 Nobel Prize in Physiology or Medicine was awarded jointly to Andrew Z. Fire and Craig C. Mello for their 1998 discovery of RNA interference (RNAi), gene silencing by double-stranded RNA.

Today, RNAi-based therapeutics are in Phase II and Phase III clinical trials. The rapid development of this technology demonstrates its enormous potential for treatment of a range of diseases.

A major hurdle for clinical applications is the safe and effective delivery of small interfering RNA (siRNA). Unlike biologics that target membrane proteins, siRNA molecules need to enter the cytosol of diseased cells to work. In addition, unlike small molecules that diffuse freely across the cell membrane, siRNA molecules are large and negatively charged. They cannot easily and independently cross the cell membrane.

Current siRNA nanoparticle delivery platforms in clinical trials, such as cationic lipoplexes and polyplexes, induce transient gene silencing; they lack a sustained siRNA release property. In vitro studies have indicated that efficacy, in general, lasts less than two weeks at the cellular level.

A new lipid-polymer hybrid nanoparticle combines a cationic liposome system with a controlled-release polymer technology, allowing siRNA encapsulation along with sustained release. Encapsulation of the siRNA would be very low if it depended solely on the noncharged, controlled-release polymer technology. Sustained delivery allows for longer activity, and, potentially, subsequent lower dosage and injection frequencies.

An in vitro proof-of-concept study showed that the lipid-polymer hybrid nanoparticle slowly releases the siRNA over the course of a month, allowing sustained knockdown of PHB1, a protein involved in cell proliferation, apoptosis, chemoresistance, and other biological processes in lung carcinoma cells.
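
The practical difference between burst and sustained release can be sketched with a simple first-order release model. The half-lives below are illustrative placeholders chosen to echo the article's timescales (days for conventional carriers, roughly a month for the hybrid nanoparticle), not measured values from the study:

```python
import math

def cumulative_release(t_days, half_life_days):
    """Fraction of siRNA payload released by day t under first-order kinetics."""
    k = math.log(2) / half_life_days
    return 1 - math.exp(-k * t_days)

# Illustrative half-lives: a fast "burst" carrier vs. a sustained-release one.
for t in (1, 7, 14, 30):
    burst = cumulative_release(t, half_life_days=1.5)
    sustained = cumulative_release(t, half_life_days=10)
    print(f"day {t:2d}: burst {burst:5.0%}   sustained {sustained:5.0%}")
```

Under this model the burst carrier has exhausted its payload within two weeks, while the sustained carrier is still releasing siRNA at day 30, which is why sustained delivery can translate into lower dosage and injection frequency.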

“It takes a long time to discover a drug or small molecule to target a protein of interest, plus there are many undruggable proteins. The beautiful thing about RNAi technology is you can target any protein you want by silencing the gene,” explains Jinjun Shi, Ph.D., assistant professor, Laboratory for Nanoengineering and Drug Delivery, Department of Anesthesiology, Brigham and Women’s Hospital, Harvard Medical School.

The new lipid-polymer hybrid nanoparticle technology is initially intended for use in fundamental research and target validation. The goal is to eventually extend its application to the clinic as a vehicle for delivering therapeutic siRNAs and, perhaps, for co-delivering chemotherapeutics and siRNAs for synergistic cancer treatment.

Via Integrated DNA Technologies
Scooped by Dr. Stefan Gruenwald!

Seeing around corners: When light echoes, the invisible becomes visible


Scientists at the University of Bonn and the University of British Columbia (Vancouver, Canada) have developed a novel camera system that can see around a corner without using a mirror. Using diffusely reflected light, it reconstructs the shape of objects outside the field of view. The researchers will be reporting their results at the international Conference on Computer Vision and Pattern Recognition (CVPR), June 24-27, in Columbus, Ohio (USA).

A laser shines on the wall; a camera watches the scene. Nothing more than white ingrain wallpaper with a bright spot of light can be seen through the lens. A computer records these initially unremarkable images and as the data is processed further, little by little, the outlines of an object appear on a screen. Yet, this object is behind a partition and the camera cannot possibly have seen it – we have apparently looked around the corner. A magic trick? "No," says Prof. Dr.-Ing. Matthias B. Hullin from the Institute of Computer Science II at the University of Bonn. "This is an actual reconstruction from diffusely scattered light. Our camera, combined with a mathematical procedure, enables us to virtually transform this wall into a mirror." Scattered light is used as a source of information.

The laser dot on the wall is by itself a source of scattered light, which serves as the crucial source of information. Some of this light, in a roundabout way, falls back onto the wall and finally into the camera. "We are recording a kind of light echo, that is, time-resolved data, from which we can reconstruct the object," explains the Bonn computer scientist. "Part of the light has also come into contact with the unknown object and it thus brings valuable information with it about its shape and appearance." To be able to measure such echoes, a special camera system is required which Prof. Hullin has developed together with his colleagues at the University of British Columbia (Vancouver, Canada) and further refined after his return to Bonn. In contrast to conventional cameras, it records not just the direction from which the light is coming but also how long it took the light to get from the source to the camera.

The technical complexity for this is comparatively low – suitable image sensors came onto the mass market long ago. They are mainly found in depth image cameras as they are used, for instance, as video game controllers or for range measurements in the automotive field. The actual challenge is to elicit the desired information from such time-of-flight measurements. Hullin compares the situation to a room which reverberates so greatly that one can no longer have a conversation with one's partner. "In principle, we are measuring nothing other than the sum of numerous light reflections which reached the camera through many different paths and which are superimposed on each other on the image sensor."

This problem, known as multipath interference, has been giving engineers headaches for a long time. Traditionally, one would attempt to remove the undesired multipath scatter and only use the direct portion of the signal. Based on an advanced mathematical model, Hullin and his colleagues, however, developed a method which can obtain the desired information exclusively from what would usually be considered noise rather than signal. Since multipath light also originates from objects which are not at all in the field of view, the researchers can thus make visible what is virtually invisible.
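
The geometric core of the reconstruction can be sketched in 2D: each time-of-flight measurement constrains the hidden point to an ellipse whose foci are the laser spot and the observed wall point, and backprojecting many such ellipses onto a grid makes the hidden object emerge where they intersect. The toy simulation below illustrates only this principle; the actual method must also disentangle the superimposed multipath returns described above:

```python
import numpy as np

C = 3e8  # speed of light, m/s

hidden = np.array([0.4, 0.6])  # hidden point, unknown to the reconstruction

# Laser spots and camera observation spots on the visible wall (the x-axis).
laser_pts = [np.array([x, 0.0]) for x in np.linspace(-0.5, 0.5, 6)]
obs_pts = [np.array([x, 0.0]) for x in np.linspace(-0.4, 0.4, 6)]

def bounce_time(l, p, h):
    """Travel time for the indirect path: wall spot -> hidden point -> wall spot."""
    return (np.linalg.norm(h - l) + np.linalg.norm(h - p)) / C

# Synthesize a time-of-flight measurement for every (laser, observation) pair.
meas = [(l, p, bounce_time(l, p, hidden)) for l in laser_pts for p in obs_pts]

# Backprojection: each measurement votes for grid cells lying on the ellipse
# whose foci are the two wall spots and whose total path length is c * t.
xs = np.linspace(-1.0, 1.0, 101)
ys = np.linspace(0.05, 1.0, 96)
gx, gy = np.meshgrid(xs, ys)
votes = np.zeros_like(gx)
for l, p, t in meas:
    d = np.hypot(gx - l[0], gy - l[1]) + np.hypot(gx - p[0], gy - p[1])
    votes += np.abs(d - C * t) < 0.01  # 1 cm tolerance band

iy, ix = np.unravel_index(np.argmax(votes), votes.shape)
print(f"recovered hidden point: ({xs[ix]:.2f}, {ys[iy]:.2f})")
```

With noisy, superimposed returns instead of clean synthetic times, the voting blurs out, which is why the real system needs the advanced mathematical model mentioned above rather than plain backprojection.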

Minimal technical complexity and intelligent programming

“The accuracy of our method has its limits, of course,” says Prof. Hullin; the results are still limited to rough outlines. However, the researchers assume that, given the rapid development of technical components and mathematical models, an even higher resolution can be achieved soon. Together with his colleagues, he will present the method at the international Conference on Computer Vision and Pattern Recognition (CVPR), June 24-27, in Columbus, Ohio (USA). The new technology has been received with great interest; Hullin hopes that similar approaches can be used, for example, in telecommunications, remote sensing and medical imaging.

Scooped by Dr. Stefan Gruenwald!

The Shadow Internet That’s 100 Times Faster Than Even Google Fiber


When Google chief financial officer Patrick Pichette said the tech giant might bring 10 gigabits per second internet connections to American homes, it seemed like science fiction. That’s about 1,000 times faster than today’s home connections. But for NASA, it’s downright slow.

While the rest of us send data across the public internet, the space agency uses a shadow network called ESnet, short for Energy Sciences Network, a set of private pipes that has demonstrated cross-country data transfers of 91 gigabits per second, the fastest of its type ever reported.

NASA isn’t going to bring these speeds to homes, but it is using this super-fast networking technology to explore the next wave of computing applications. ESnet, which is run by the U.S. Department of Energy, is an important tool for researchers who deal in massive amounts of data generated by projects such as the Large Hadron Collider and the Human Genome Project. Rather than sending hard disks back and forth through the mail, they can trade data via the ultra-fast network. “Our vision for the world is that scientific discovery shouldn’t be constrained by geography,” says ESnet director Gregory Bell.
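
A quick transfer-time comparison gives a feel for the scale. The one-petabyte dataset size and the 10 Mb/s home-link rate below are illustrative assumptions; the 91 Gb/s figure is the demonstrated ESnet rate from the article:

```python
def transfer_time_seconds(size_bytes, rate_bits_per_s):
    """Idealized transfer time, ignoring protocol overhead and congestion."""
    return size_bytes * 8 / rate_bits_per_s

PB = 1e15    # one petabyte, in bytes
DAY = 86400  # seconds per day

esnet = transfer_time_seconds(PB, 91e9)  # demonstrated ESnet rate
home = transfer_time_seconds(PB, 10e6)   # assumed ~10 Mb/s home link
print(f"1 PB over ESnet (91 Gb/s): {esnet / DAY:.1f} days")
print(f"1 PB over a 10 Mb/s home link: {home / DAY / 365:.0f} years")
```

Roughly a day versus a quarter century: the difference between a practical research workflow and mailing hard disks.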

In making its network as fast as it can possibly be, ESnet and researchers at organizations like NASA are field-testing networking technologies that may eventually find their way into the commercial internet. In short, ESnet is a window into what our computing world will eventually look like.

ESnet has long been capable of 100 gigabit transfers, at least in theory. Network equipment companies have been offering 100 gigabit switches since 2010. But in practice, long-distance transfers were much slower. That’s because data doesn’t travel through the internet in a straight line. It’s less like a super highway and more like an interstate highway system. If you wanted to drive from San Francisco to New York, you’d pass through multiple cities along the way as you transferred between different stretches of highway. Likewise, to send a file from San Francisco to New York on the internet—or over ESnet—the data will flow through hardware housed in cities across the country.

NASA did a 98 gigabit transfer between Goddard and the University of Utah over ESnet in 2012. And Alcatel-Lucent and BT obliterated that record earlier this year with a 1.4 terabit connection between London and Ipswich. But in both cases, the two locations had a direct connection, something you rarely see in real world connections.

On the internet and ESnet, every stop along the way creates the potential for a bottleneck, and every piece of gear must be ready to handle full 100 gigabit speeds. In November, the team finally made it work. “This demonstration was about using commercial, off-the-shelf technology and being able to sustain the transfer of a large data network,” says Tony Celeste, a sales director at Brocade, the company that manufactured the equipment used in the record-breaking test.

Ellie Kesselman Wells's comment, July 22, 2014 9:20 AM
I don't suppose this is the follow-up to Internet 2?
Scooped by Dr. Stefan Gruenwald!

A super-stretchable yarn made of graphene


A simple, scalable method of making strong, stretchable graphene oxide fibers that are easily scrolled into yarns and have strengths approaching that of Kevlar is possible, according to Penn State and Shinshu University, Japan, researchers.

“We found this graphene oxide fiber was very strong, much better than other carbon fibers,” said Mauricio Terrones, professor of physics, chemistry and materials science and engineering, Penn State. “We believe that pockets of air inside the fiber keep it from being brittle.”

This method opens up multiple possibilities for useful products, according to Terrones and colleagues. For instance, removing oxygen from the graphene oxide fiber results in a fiber with high electrical conductivity.

Adding silver nanorods to the graphene film would increase the conductivity to the same as copper, which could make it a much lighter weight replacement for copper transmission lines. The researchers believe that the material lends itself to many kinds of highly sensitive sensors.

The researchers made a thin film of graphene oxide by chemically exfoliating graphite into graphene flakes, which were then mixed with water and concentrated by centrifugation into a thick slurry. The slurry was then spread by bar coating — something like a squeegee — across a large plate. When the slurry dries, it becomes a large-area transparent film that can be carefully lifted off without tearing. The film is then cut into narrow strips and wound on itself with an automatic fiber scroller, resulting in a fiber that can be knotted and stretched without fracturing. The researchers reported their results in a recent issue of ACSNano.

“The importance is that we can do almost any material, and that could open up many avenues; it’s a lightweight material with multifunctional properties,” said Terrones. “And the main ingredient, graphite, is mined and sold by the ton.”

Scooped by Dr. Stefan Gruenwald!

Reconstruction of 1918-like avian influenza virus stirs concern over gain of function experiments


The gain-of-function experiments in which avian influenza H5N1 virus was given the ability to transmit by aerosol among ferrets were met with substantial outrage from the press and even some scientists; scenarios of lethal viruses escaping from the laboratory and killing millions proliferated. The recent publication of new influenza virus gain-of-function studies from the laboratories of Kawaoka and Perez has unleashed another barrage of criticism.

The work by Kawaoka and colleagues attempts to answer the question of whether an influenza virus similar to that which killed 50 million people in 1918 could emerge today. First they identified in the avian influenza virus sequence database individual RNA segments that encode proteins that are very similar to the 1918 viral proteins.

Next, an infectious influenza virus was produced with 8 RNA segments that encode proteins highly related to those of the 1918 virus. Each RNA segment originates from a different avian influenza virus, and differs by 8 (PB2), 6 (PB1), 20 (PB1-F2), 9 (PA), 7 (NP), 33 (HA), 31 (NA), 1 (M1), 5 (M2), 4 (NS1), and 0 (NS2) amino acids from the 1918 virus.

The 1918-like avian influenza virus was less pathogenic in mice and ferrets compared with the 1918 virus, and more pathogenic than a duck influenza virus isolated in 1976. Virulence in ferrets increased when the HA or PB2 genes of the 1918-like avian influenza virus were substituted with those from the 1918 virus.

Aerosol transmission among ferrets was determined for the 1918-like avian influenza virus and for reassortants containing 1918 viral genes (these experiments are done by housing infected and uninfected ferrets in neighboring cages). The 1918 influenza virus was transmitted to 2 of 3 ferrets. Neither the 1918-like avian influenza virus nor the 1976 duck influenza virus transmitted among ferrets. Aerosol transmission among ferrets was observed after infection with two different reassortants of the 1918-like avian influenza virus: one possessing the 1918 virus PB2, HA, and NA RNAs (1918 PB2:HA:NA/Avian), and one possessing the 1918 virus PA, PB1, PB2, NP, and HA genes (1918(3P+NP):HA/Avian).

It is known from previous work that amino acid changes in the viral HA and PB2 proteins are important in allowing avian influenza viruses to infect humans. Changes in the viral HA glycoprotein (HA190D/225D) shift receptor specificity from avian to human sialic acids, while a change at amino acid 627 of the PB2 protein to a lysine (627K) allows avian influenza viruses to efficiently replicate in mammalian cells, and at the lower temperatures of the human upper respiratory tract.

These changes were introduced into the genome of the 1918-like avian influenza virus. One of three contact ferrets was infected with 1918-like avian PB2-627K:HA-89ED/190D/225D virus (a mixture of glutamic acid and aspartic acid at amino acid 89 was introduced during propagation of the virus in cell culture). Virus recovered from this animal had three additional mutations: its genotype is 1918-like avian PB2-627K/684D : HA-89ED/113SN/ 190D/225D/265DV : PA-253M (there are mixtures of amino acids at HA89, 113, and 265). This virus was more virulent in ferrets and transmitted by aerosol more efficiently than the 1918-like avian influenza virus. The virus recovered from contact ferrets contained yet another amino acid change, a T-to-I mutation at position 232 of NP. Therefore ten amino acid changes are associated with allowing the 1918-like avian influenza virus to transmit by aerosol among ferrets. Aerosol transmission of these viruses is not associated with lethal disease in ferrets.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Amazing Science Infographics on Pinterest


Over 675 infographics about science and related topics.


No comment yet.
Scooped by Dr. Stefan Gruenwald!

First Evidence Of A Correction To The Speed of Light


When astronomers first observed light from a supernova arriving 7.7 hours after the neutrinos from the same event, they ignored the evidence. Now one physicist says the speed of light must be slower than Einstein predicted and has developed a theory that explains why.

In the early hours of the morning on 24 February 1987, a neutrino detector deep beneath Mont Blanc in northern Italy picked up a sudden burst of neutrinos. Three hours later, neutrino detectors at two other locations picked up a similar burst. The event consisted of two bursts of neutrinos separated by three hours followed by the first optical signals 4.7 hours later.

Some 4.7 hours after this, astronomers studying the Large Magellanic cloud that orbits our galaxy, noticed the tell-tale brightening of a blue supergiant star called Sanduleak -69 202, as it became a supernova. Since then, SN 1987a, as it was designated, has become one of the most widely studied supernovas in history.

Neutrinos and photons both travel at the speed of light and should therefore arrive simultaneously, all else being equal. The mystery is what caused this huge delay of 7.7 hours between the first burst of neutrinos and the arrival of the optical photons.

Today, we get an answer thanks to the work of James Franson at the University of Maryland in Baltimore. Franson has used the laws of quantum mechanics to calculate the speed of light travelling through a gravitational potential related to the mass of the Milky Way.

Because all previous speed-of-light calculations have relied only on general relativity, they do not take into account the tiny effects of quantum mechanics. But these effects are significant over such long distances and through such a large mass as the Milky Way, says Franson.

He says that quantum mechanical effects should slow down light in these kinds of circumstances and calculates that this more or less exactly accounts for the observed delay.

First, some background about the mechanism behind the supernova. A supernova begins with the collapse of a star’s core, generating both neutrinos and optical photons. However, the density of the core delays the emergence of the photons by about 3 hours. By contrast, the neutrinos interact less strongly with matter and so emerge unscathed more or less immediately.
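The timing in the article can be checked with a quick back-of-the-envelope calculation. The distance to SN 1987A in the Large Magellanic Cloud (roughly 168,000 light years) is not stated above and is assumed here; the sketch computes the fractional slowdown of light that the 7.7-hour delay would imply over that journey.

```python
# Back-of-the-envelope sketch of the SN 1987A delay, assuming a distance
# to the Large Magellanic Cloud of ~168,000 light years (a standard
# figure, not given in the article).

HOURS_PER_YEAR = 365.25 * 24
DISTANCE_LY = 168_000                      # assumed star-to-Earth distance

# Two neutrino bursts three hours apart, optical signals 4.7 hours later:
delay_hours = 3.0 + 4.7                    # total neutrino-to-photon gap, 7.7 h

travel_time_hours = DISTANCE_LY * HOURS_PER_YEAR
fractional_slowdown = delay_hours / travel_time_hours

print(f"total delay: {delay_hours} h")
print(f"implied fractional slowdown of light: {fractional_slowdown:.2e}")
```

Spread over a journey of some 168,000 years, a 7.7-hour lag corresponds to a speed deficit of only a few parts per billion, which is why such an effect would be invisible in laboratory measurements of the speed of light.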

Carlos Garcia Pando's comment, June 27, 2014 4:51 AM
"They ignored the evidence" Interesting. Why? Because the observation did not fit into the theory. Theory was their religion and couldn't be denied by facts.
Scooped by Dr. Stefan Gruenwald!

Fossilized nuclei and chromosomes of 180 million-year-old fern nearly identical to modern relative


A trio of researchers in Sweden has unearthed a fossilized fern, dated to 180 million years ago, that is remarkably in near-pristine condition. Benjamin Bomfleur and Stephen McLoughlin of the Swedish Museum of Natural History and Vivi Vajda of Lund University report in their paper published in the journal Science that they discovered the fossil in a bed of volcanic rock near Korsaröd in Sweden, and found it so well preserved that microscopic analysis could make out its DNA structure.

The calcified stem of a royal fern dating back to the early Jurassic period was apparently preserved by mineral precipitation from hydrothermal brines as they rapidly crystallized, trapping the fern, which was clearly alive at the time, and encasing it in an airtight environment. Although very small (just 5.8 x 4.1 cm), the fossil was so well preserved that the researchers were still able to make out cell cytoplasm, nuclei and even chromosomes.

Curious, the team measured the sub-cellular parts of the fossilized plant and compared them to those of a modern relative, the cinnamon fern (Osmundastrum cinnamomeum), which has already earned the title of a "living fossil" thanks to prior research showing that its origins date back at least 75 million years. In so doing they discovered that the number of chromosomes, and indeed the DNA content itself, was a very close match, so close that the team dubbed the fern a "paramount example of evolutionary stasis." Remarkably, the plant hasn't changed much at all over a period of 180 million years. When it lived, it likely looked much like the bright green cinnamon fern of today (though its fronds turn a cinnamon color later in life), growing to a height of one to five feet with spreading fronds reaching six to eight inches. The team suggests the specimen provides exceptional insight into how life can evolve over geologic time.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

ESA sets its sights on harpooning space debris


In 2021, as part of its Clean Space Initiative, ESA plans to launch the e.DeOrbit mission. The aim of this mission is to clean up the important polar orbits between altitudes of 800 and 1,000 km (500 and 620 miles) that face the prospect of becoming unusable due to the increasing buildup of space debris. The ESA has now announced plans to examine the potential for the mission to use space harpoons to capture large items, such as derelict satellites and the upper stages of rockets.

The ESA has previously revealed it is considering a number of approaches to meet the challenge of capturing and securing space debris. These include snaring the debris in a net, securing it with clamping mechanisms, or grabbing hold of it using robotic arms. Another option is a tethered harpoon, which would pierce the debris with a high-energy impact before reeling it in.

Such an approach wouldn't be applicable for smaller debris, but is aimed at reeling in uncontrolled multitonne objects that threaten to fragment when colliding with other objects, resulting in debris clouds that would steadily increase in density due to the Kessler syndrome.

The ESA says the space harpoon concept has already undergone initial investigations by Airbus Defense and Space in Stevenage, UK, whose preliminary design incorporates a penetrating tip, a crushable cartridge to help embed it in the target satellite's structure, and barbs to keep it lodged so the satellite can then be reeled in.

The initial tests involved shooting a prototype harpoon into a satellite-like material to assess its penetration, the strength of the harpoon and tether as the target is reeled in, and the potential for the target to fragment, which would result in more debris that could threaten the e.DeOrbit satellite.

The ESA now plans to follow up these initial tests by building and testing a prototype "breadboard" version of the harpoon and its ejection system in the hope of adopting it for the e.DeOrbit mission. The project will examine the harpoon impact, target piercing and the reeling in of objects using computer models and experiments, ultimately leading up to a full hardware demonstration.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Synthetic biology: How best to build a living cell

Experts weigh in on the biggest obstacles in synthetic biology — from names to knowledge gaps — and what it will take to overcome them.

The engineering slant of synthetic biology has brought impressive accomplishments. These include whole-cell biosensors; cells that synthesize antimalaria drugs; and bacterial viruses designed to disperse dangerous, tenacious biofilms.

To design these, engineers are trained to model systems as black boxes, abstractly linking inputs and outputs. They can often control a system with only a limited understanding of it. But synthetic-biology projects are frequently thwarted when engineering runs up against the complexity of biology.

Synthetic biology would benefit greatly from deeper insights into the mechanisms of biological systems. Such approaches have already yielded insights into how organized processes in cells work because of, rather than in spite of, noisy gene expression. Synthetic biology is also informing biology, helping to reveal how a gene product can amplify or inhibit its own expression and so allow cells to flip between stable states. Much more remains to be explored and discovered.

The biggest challenge for synthetic biology is how to extend beyond projects that focus on single products, organisms and processes. Right now, most applications engineer bacteria that start a synthesis with glucose and turn out biofuels or fine chemicals, such as vanillin or artemisinin. A broader scope could help to build a 'greener' economy, in which more organisms make a greater range of chemicals.

The chemical industry is a marvel of efficiency, taking raw materials such as oil and converting them into a wide range of products, including plastics and pharmaceuticals. This is possible in part because feedstocks can be interconverted through various large-scale reactions for which catalysts and processes have been optimized over several decades.

Synthetic biology could unlock the large-scale use of carbon sources from lignocellulose to coal. Synthetic 'bioalchemy' would reformat the basic elements of life to take advantage of abundant supplies of formerly rare intermediates such as the nylon precursor adipate, which is used to synthesize antibiotics. Metabolic engineering is already capable of syntheses that use glucose or other standard carbon sources as precursors, but the co-culture of synthetically modified organisms would make these processes more efficient. The ability to engineer photosynthetic organisms might even allow light to be used as the ultimate energy source and carbon dioxide as the ultimate carbon source.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Three-dimensional light-sensitive mini retina grown from human iPS cells in the lab

Researchers at Johns Hopkins have constructed a functioning segment of a human retina out of stem cells that is able to respond to light.

The eye is often compared to a camera, but although its basic design is as simple as an old-fashioned box Brownie, its detailed structure is more complex than the most advanced electronics. This means that, unlike simpler organs, studies of retinal disease rely heavily on animal studies, and treating such illnesses is extremely difficult. One ray of hope in the field comes from researchers at Johns Hopkins, who have constructed a functioning segment of a human retina out of stem cells that is able to respond to light.

The retina is the complex lining of the human eye that acts like the film (or the imaging sensor, for the younger crowd) in a camera. It’s made of some 10 layers of tissue, including structural membranes, nerve ganglia, and photoreceptor cells: the rods that detect black-and-white images and work best in low light, and the cones, which detect color. If scientists could recreate this structure in the laboratory, it would be a major breakthrough in treating eye diseases.

The Johns Hopkins researchers’ approach was to use human induced pluripotent stem (iPS) cells. In other words, adult cells were induced to revert to stem cells, from which any of the roughly 200 specialized cell types in the human body can be derived. The Johns Hopkins team programmed the stem cells to grow into retinal progenitor cells in a culture dish.

These cells developed into retina cells, much in the same way and at the same rate as in a human embryo. As they did so, the cells differentiated into some of the seven different kinds of cells that make up the retina and organized themselves into the three-dimensional outer segment structures necessary for the photoreceptors to work.

"We knew that a 3D cellular structure was necessary if we wanted to reproduce functional characteristics of the retina," says M Valeria Canto-Soler, an assistant professor of ophthalmology at the Johns Hopkins University School of Medicine, "but when we began this work, we didn't think stem cells would be able to build up a retina almost on their own. In our system, somehow the cells knew what to do."

Growing retina segments has been achieved before, but where Johns Hopkins’ work stands out is that these mini-retinas actually function. When the mini-retinas reached the equivalent of 28 weeks of development, the researchers hooked the photoreceptor cells up to electrodes and flashed pulses of light at them. According to the scientists, the cells displayed the same photochemical reactions as in a normal retina, especially in regard to the rods that make up the majority of the photoreceptors.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Scientists seek to develop new imaging capabilities to real-time view a human virus in action


Virginia Tech Carilion Research Institute scientists Deborah Kelly and Sarah McDonald are ambitious and determined, and it’s paying off. The two assistant professors recently received a National Institutes of Health grant for their collaborative work developing new imaging technologies that will allow them to see live rotavirus activity.

McDonald and Kelly, whose offices share a wall and whose laboratories are adjacent, began collaborating more than two years ago.

“I was interested in examining RNA-related processes using high-resolution imaging technology,” Kelly said. “Dr. McDonald’s work with rotavirus provided the right opportunity to collaboratively develop innovative tools together, using the virus as a model system.”

Kelly focuses on developing new imaging platforms – specifically those that help capture the dynamic structure of functionally important proteins in human cells. Traditionally, scientists use cryo-electron microscopy to image protein complexes. With the capability to capture more details than ordinary light microscopy, electron microscopes can be used to peer at the invisible world around us, revealing unique information about macromolecules that can affect human health. The samples are frozen in place, keeping the structures close to how they would appear naturally. Researchers take snapshots of the structures and use computational algorithms to construct 3-D models of the imaged structures.

Rotaviruses have three layers, like a foil-wrapped chocolate egg. The innermost layer, the sweet cream, contains double-stranded RNA, the genetic material needed to create more viruses, and polymerases. Polymerases are responsible for manufacturing the messenger RNA molecules that infect a host. When the virus is active, it sheds the outer layer – the foil. Scientists believe that shedding activates the polymerases. The chocolate layer is already permeated with holes, letting the RNA snake out. The virus in this form is called an active double-layered particle.

Inactive double-layered rotavirus particles were previously imaged at a moderate resolution in the 1990s, producing 3-D models that give information about how the active form of rotavirus operates on a molecular level. With Kelly’s improved technology, McDonald and Kelly think they can determine even more details about how the virus functions and infects host cells.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

NASA receives mysterious X-ray signal from Perseus cluster 240 million light years away

A mysterious X-ray signal has been found in a detailed study of galaxy clusters using NASA’s Chandra X-ray Observatory and ESA’s XMM-Newton.

One intriguing possibility is that the X-rays are produced by the decay of sterile neutrinos, a type of particle that has been proposed as a candidate for dark matter.

While holding exciting potential, these results must be confirmed with additional data to rule out other explanations and determine whether it is plausible that dark matter has been observed.

Astronomers think dark matter constitutes 85% of the matter in the Universe, but does not emit or absorb light like “normal” matter such as protons, neutrons and electrons that make up the familiar elements observed in planets, stars, and galaxies. Because of this, scientists must use indirect methods to search for clues about dark matter.

The latest results from Chandra and XMM-Newton consist of an unidentified X-ray emission line, that is, a spike of intensity at a very specific wavelength of X-ray light. Astronomers detected this emission line in the Perseus galaxy cluster using both Chandra and XMM-Newton. They also found the line in a combined study of 73 other galaxy clusters with XMM-Newton.

“We know that the dark matter explanation is a long shot, but the pay-off would be huge if we're right,” said Esra Bulbul of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Mass., who led the study. “So we're going to keep testing this interpretation and see where it takes us.”

The authors suggest this emission line could be a signature from the decay of a “sterile neutrino.” Sterile neutrinos are a hypothetical type of neutrino that is predicted to interact with normal matter only via gravity. Some scientists have proposed that sterile neutrinos may at least partially explain dark matter.

“We have a lot of work to do before we can claim, with any confidence, that we’ve found sterile neutrinos,” said Maxim Markevitch, a co-author from NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “But just the possibility of finding them has us very excited.”

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Bionic Human: The latest of realistic, artificial body parts


From lab-grown lungs to mechanical eyes, the latest, and most realistic, artificial body parts.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Hubble captures incredible star explosion in four-year time-lapse video


The unusual variable star V838 Monocerotis (V838 Mon) continues to puzzle astronomers. This previously inconspicuous star underwent an outburst early in 2002, during which it temporarily increased in brightness to become 600,000 times more luminous than our Sun. Light from this sudden eruption is illuminating the interstellar dust surrounding the star, producing the most spectacular "light echo" in the history of astronomy.

As light from the eruption propagates outward into the dust, it is scattered by the dust and travels to the Earth. The scattered light has travelled an extra distance in comparison to light that reaches Earth directly from the stellar outburst. Such a light echo is the optical analogue of the sound echo produced when an Alpine yodel is reflected from the surrounding mountainsides.

The NASA/ESA Hubble Space Telescope has been observing the V838 Mon light echo since 2002. Each new observation of the light echo reveals a new and unique "thin-section" through the interstellar dust around the star. This video morphs images of the light echo from the Hubble taken at multiple times between 2002 and 2006. The numerous whorls and eddies in the interstellar dust are particularly noticeable. Possibly they have been produced by the effects of magnetic fields in the space between the stars.
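The "extra distance" travelled by the scattered light can be made concrete with a little geometry. In this illustrative sketch (the distances are hypothetical, chosen only for the example), light that bounces off a dust patch reaches Earth later than the direct flash by the path difference divided by the speed of light:

```python
import math

def echo_delay_years(offset_ly, depth_ly, distance_ly):
    """Extra travel time, in years, for light scattered toward Earth by a
    dust patch sitting depth_ly in front of the star (toward us) and
    offset_ly off the direct line of sight; distance_ly is the direct
    star-to-Earth distance. With distances in light years, c = 1 ly/yr."""
    star_to_dust = math.hypot(offset_ly, depth_ly)
    dust_to_earth = math.hypot(offset_ly, distance_ly - depth_ly)
    return star_to_dust + dust_to_earth - distance_ly

# Dust 2 light years off-axis, level with the star, star 20,000 ly away:
print(echo_delay_years(2.0, 0.0, 20_000))   # ~2 years after the flash
```

Each year of delay selects a different paraboloidal shell of dust around the star, which is why each new Hubble observation slices through a different "thin-section" of the interstellar cloud.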


No comment yet.
Scooped by Dr. Stefan Gruenwald!

Monarch butterflies employ a light-dependent magnetic compass during migration

Scientists have identified a new component of the complex navigational system that allows monarch butterflies to traverse the 2,000 miles to their overwintering habitat each year. Monarchs use a light-dependent, inclination magnetic compass to help them orient southward during migration.

Previous attempts by scientists to isolate use of an internal inclination compass in monarchs have yielded conflicting or unconvincing results. These studies, however, may not have accounted for the possibility that the magnetic compass was influenced by ultraviolet light that can penetrate cloud cover.

Given the ability of monarch cryptochromes (CRY), a class of proteins that are sensitive to ultraviolet A/blue light, to restore a light-dependent magnetic response in CRY-deficient Drosophila, Reppert and colleagues suspected that monarchs also possessed a light-dependent magnetic compass.

Using flight simulators equipped with artificial magnetic fields, Patrick Guerra, PhD, a postdoctoral fellow in the Reppert lab, examined monarch flight behavior under diffuse white light conditions. He found that tethered monarchs in the simulators oriented themselves in a southerly direction. Further tests in the simulator revealed that the butterflies used the inclination angle of Earth's magnetic field to guide their movement. Reversing the direction of the inclination caused the monarchs to orient in the opposite direction, to the north instead of the south.

To test the light-dependence of the monarch's magnetic compass, Dr. Guerra applied a series of wavelength blocking filters to the lights in the simulator. Monarchs exposed to light only in the wavelength range above 420nm exhibited a lack of direction by flying in circles. Monarchs exposed to light in the wavelength range above 380nm showed clear signs of directional flight. These tests showed that the monarch's magnetic compass, and thus directional flight, was dependent on exposure to light wavelengths (380nm to 420nm) found in the ultraviolet A/blue light spectral range.
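The logic of those filter experiments can be summarized in a few lines of code. This is a hypothetical helper, not the authors' analysis; it simply encodes the observation that directional flight required light in the 380-420 nm band:

```python
# The long-pass filters transmit wavelengths above a cutoff; the monarch
# compass worked only when part of the 380-420 nm UV-A/blue band got through.

COMPASS_BAND_NM = (380, 420)  # band required for the magnetic compass

def flight_is_directional(filter_cutoff_nm: float) -> bool:
    """Predict directional flight when the long-pass filter still admits
    some light within the 380-420 nm band."""
    return filter_cutoff_nm < COMPASS_BAND_NM[1]

print(flight_is_directional(380))  # True: 380-420 nm band transmitted
print(flight_is_directional(420))  # False: compass band blocked, circling
```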

Together, these results provide the first demonstration that the monarch butterfly uses a light-dependent, inclination compass during its long journey. It is also the first evidence of such a navigational tool in a long-distance migratory insect.

  1. Patrick A. Guerra, Robert J. Gegear, Steven M. Reppert. A magnetic compass aids monarch butterfly migration. Nature Communications, 2014; 5. DOI: 10.1038/ncomms5164
No comment yet.
Scooped by Dr. Stefan Gruenwald!

World's First Magnetic Hose Created


An international team of scientists led by researchers from the Department of Physics of the Universitat Autònoma de Barcelona has developed a material which guides and transports a magnetic field from one location to the other, similar to how an optical fibre transports light or a hose transports water.

The magnetic hose designed by the researchers consists of a ferromagnetic cylinder covered by a superconductor material, a surprisingly simple design given the complicated theoretical calculations and numerous lab tests it had to undergo. A 14-centimeter prototype was built, which transports the magnetic field from one extreme to the other with an efficiency of 400% in comparison to current methods used to transport these fields.
High as the prototype's efficiency is, the researchers theoretically demonstrated that the magnetic hose can be made even more efficient if the ferromagnetic tube is covered with thin layers of alternating superconductor and ferromagnetic material.
The device designed by the researchers can be implemented at any scale, even at nanometre scale. Thus, a magnetic nanohose capable of individually controlling quantum systems could help to solve some of the current technological problems existing in quantum computing.

Reference paper:
C. Navau, J. Prat-Camps, O. Romero-Isart, J. I. Cirac, and A. Sanchez. “Long-distance transfer and routing of static magnetic fields”. Phys. Rev. Lett. 112, 253901 (2014).

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Acid oceans threaten creatures that supply half the world's oxygen

Ocean acidification is turning phytoplankton toxic. Bad news for the many species, us included, that rely on them as a principal source of food and oxygen.

What happens when phytoplankton, the (mostly) single-celled organisms that constitute the very foundation of the marine food web, turn toxic? Their toxins often concentrate in the shellfish and many other marine species (from zooplankton to baleen whales) that feed on phytoplankton. Recent trailblazing research by a team of scientists aboard the RV Melville shows that ocean acidification will dangerously alter these microscopic plants, which nourish a menagerie of sea creatures and produce up to 60 percent of the earth's oxygen.

The researchers worked in carbon-saturated waters off the West Coast, a living laboratory for studying the effects of chemical changes in the ocean brought on by increased atmospheric carbon dioxide. A team of scientists from NOAA's Fisheries Science Center and Pacific Marine Environmental Lab, along with teams from universities in Maine, Hawaii and Canada, focused on the unique "upwelled" zones of California, Oregon and Washington. In these zones, strong winds encourage mixing, which pushes deep, centuries-old CO2 to the ocean surface. Their findings could reveal what oceans of the future will look like. The picture is not rosy.

Scientists already know that ocean acidification, the term used to describe seas soured by high concentrations of carbon, causes problems for organisms that make shells. “What we don't know is the exact effects ocean acidification will have on marine phytoplankton communities,” says Dr. Bill Cochlan, the San Francisco State University biological oceanographer who was the project’s lead investigator. “Our hypothesis is that ocean acidification will affect the quantity and quality of certain metabolites within the phytoplankton, specifically lipids and essential fatty acids.”

Scott Baker's curator insight, June 25, 2014 10:00 AM

will fertilization help?

Diane Johnson's curator insight, June 25, 2014 12:12 PM

Understanding systems and interdependence is just so critical!