Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Farthest confirmed galaxy is a prolific star creator


Astronomers have measured the distance of the farthest known galaxy, finding that its light took 13.1 billion years to reach Earth – which means the light was emitted just 700 million years after the Big Bang. Although the galaxy is much smaller than the Milky Way, it is forming stars at a much faster rate. The discovery provides important new information about the epoch of reionization, the ancient era when the neutral gas between galaxies became ionized.

 

To observe the farthest galaxies, astronomers exploit the universe's expansion, which stretches – or redshifts – the light waves of distant objects to longer, or redder, wavelengths. But dust can also redden light, so a red colour alone does not guarantee that a galaxy lies at the edge of the observable universe.

 

"The problem had been, over the previous few years, [that] people have been trying to confirm these really distant galaxies – and for the most part coming up empty," says Steven Finkelstein, an astronomer at the University of Texas at Austin.

 

Confirmation of a far-off galaxy's distance requires measuring the redshift of lines in the spectrum of light that it emits. This means that astronomers face the challenge of obtaining the spectrum of a faint object. So for two nights in April, Finkelstein took aim at 43 red objects in the constellation Ursa Major with one of the largest telescopes in the world, the 10-metre Keck I telescope atop Mauna Kea in Hawaii. A year earlier, this telescope had received a more sensitive spectrograph, which made Finkelstein's observations possible.

 

Finkelstein searched the spectra for a line from Lyman-alpha emission. This radiation arises when an electron falls from the n = 2 to the n = 1 state of hydrogen, the most abundant element in the cosmos. The line normally lies in the far ultraviolet, at a wavelength of 1216 Å (121.6 nm), but because of the hoped-for redshifts, Finkelstein obtained his spectra at near-infrared wavelengths instead.
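
As a quick check, the 1216 Å figure follows from the Rydberg formula for hydrogen; a minimal sketch using textbook physics (not a calculation from the article):

```python
# Rydberg formula: 1/lambda = R_H * (1/n1^2 - 1/n2^2); Lyman-alpha is n = 2 -> n = 1.
R_H = 1.0968e7  # Rydberg constant for hydrogen, in 1/m

inv_lam = R_H * (1/1**2 - 1/2**2)  # = 0.75 * R_H
lam_nm = 1e9 / inv_lam             # wavelength in nanometres

print(f"Lyman-alpha rest wavelength: {lam_nm:.1f} nm")  # ~121.6 nm, i.e. 1216 angstroms
```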

 

In 42 of the 43 spectra, Finkelstein saw no lines. "I was disappointed, I think – until I figured out the redshift of the one we did see and realized it was the most distant one." That galaxy, bearing the unwieldy name z8_GND_5296, has a Lyman-alpha line at a wavelength of 10,343 Å (1.0343 μm), a 751% increase over the rest wavelength, which means that the galaxy's redshift is 7.51. It is 40 million light-years more remote than the previous record holder, at redshift 7.215.
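
The quoted redshift is just the fractional wavelength shift; a minimal sketch of the arithmetic, using the two wavelengths given above:

```python
# Redshift: z = (lambda_observed - lambda_rest) / lambda_rest
lam_rest = 1216.0   # Lyman-alpha rest wavelength, in angstroms (from the article)
lam_obs = 10343.0   # observed wavelength for z8_GND_5296, in angstroms

z = (lam_obs - lam_rest) / lam_rest
print(f"z = {z:.2f}")             # ~7.51
print(f"shift = {100 * z:.0f}%")  # ~751% increase over the rest wavelength
```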


NASA shoots lasers at the moon and sets new data transmission record


NASA has used lasers to transmit data at a record-breaking 622 megabits per second (Mbps) between the Moon and Earth as part of its Lunar Laser Communications Demonstration (LLCD). Pulsed laser beams were shot from ground control at the White Sands Test Facility in New Mexico to the Lunar Atmosphere and Dust Environment Explorer (LADEE) spacecraft orbiting the Moon, and the results could herald promising new advances in deep-space communication. Radio waves have long been the go-to option for sending information between spacecraft and our planet, but the greater data capacity lasers can accommodate may make it possible for future missions to send higher-resolution images and 3D video transmissions across two-way channels.
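
To put 622 Mbps in perspective, here is a quick back-of-the-envelope sketch of what that rate means for a large downlink; the 1 GB payload size is an assumed example, not a figure from the article:

```python
# Time to downlink a hypothetical 1 GB image archive at LLCD's demonstrated rate.
rate_bps = 622e6     # 622 megabits per second (from the article)
payload_bits = 8e9   # 1 gigabyte = 8 gigabits (assumed example payload)

print(f"{payload_bits / rate_bps:.0f} s")  # ~13 seconds from lunar orbit to Earth
```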


Compared to the days of dial-up, today's websites load at lightning speed. Just as you need your web pages to load quickly and securely, NASA scientists and engineers want the same quick connectivity with their data-gathering spacecraft. To meet these demands, NASA is moving away from its own form of dial-up (radio-frequency-based communication) to its own version of high-speed Internet: laser communications.


NASA is venturing into a new era of space communications using laser communications technology, and it's starting with the LLCD mission. For decades NASA has launched and operated satellites in order to expand our understanding of Earth and space science. In order to sustain this vision, satellites have increased their data-capturing capabilities and have had to send data over greater distances. Each of these advancements has required increases in data downlink rates and higher data volumes. Just as your home computer once struggled to download large multimedia files, NASA's communication networks may soon run into the same problems as data volumes continue to grow. In an effort to address these challenges and enhance the Agency's communications capabilities, NASA has directed the Goddard Space Flight Center (GSFC) to lead the Lunar Laser Communication Demonstration (LLCD).


The LLCD mission consists of space-based and ground-based components. The Lunar Laser Space Terminal (LLST) is an optical communications test payload flying aboard the LADEE spacecraft. The LLST is demonstrating laser communications with a data-downlink rate five times the current communication capability from lunar orbit. The ground segment consists of three ground terminals that perform high-rate communication with the LLST aboard LADEE. The primary ground terminal, the Lunar Laser Ground Terminal (LLGT), is located in White Sands, NM, and was developed by MIT/Lincoln Laboratory and NASA. The ground segment also includes two secondary terminals, located at NASA/JPL's Table Mountain Facility in California and the European Space Agency's Teide Observatory in Tenerife, Spain.




David M. Eagleman at #beinghuman2013: The Future of Being Human

David Eagleman examines how the contemporary journey into massive scales of space, time, and big data irreversibly expands our perspective on ourselves. At Being Human 2013, a recent forward-thinking lecture series, Jer Thorp and David Eagleman delivered keynotes speculating on the future of being human. The conference, which took place in San Francisco last month, focused on how our perception of the world will change in the future, and how big data and other technological and medical innovations will affect the way we interact with our surroundings.


Eagleman kicked off his speech by explaining that every animal in the world (humans included) has "their own window on reality." Our perception of our environment, known as our "umwelt," is typically determined by the biological tools we're born with. Humans, for example, are not equipped to see x-rays or gamma rays or feel the shape of the magnetic field. Eagleman asks: "How are our technologies going to expand our umwelt, and therefore, the experience of being human?" 

"Our peripheral sensory organs are what we've come to the table with—but not necessarily what we have to stick with," he explains. He describes how we're moving into the MPH (Mr. Potato Head) model of evolution: Our eyes, ears, fingers, etc., essentially act like plug-and-play external devices that can be substituted to improve or enhance our view of the world. "The bottom line is that the human umwelt is on the move," he concludes. "We are now in a position as we move into the future of getting to chose our own plug-and-play devices." Imagine being able to see by transmitting electronic impulses through your tongue, or, embedding magnets into your fingertips that allow you to feel the pull of the magnetic field. There's so much happening in the world that we can't see, and Eagleman envisions a future where we can plug into new experiences and broaden our view of the our environment.


Gathering Gondwana: New Look at an Ancient Puzzle

New study hopes to settle a debate over Gondwana's breakup.

 

Dinosaurs roamed, mammals started to flourish, the first birds and lizards evolved, and a massive supercontinent began to split apart on Earth about 180 million years ago. Yet, the details of the breakup of one of the largest landmasses in history have stumped scientists until now.

 

The breakup of the supercontinent Gondwana eventually formed the continents in the Southern Hemisphere. Exactly how this happened has been debated by geologists for years. Most theories say Gondwana broke into many different pieces, but new research suggests the large land mass simply split in two.

 

 Researcher Graeme Eagles of the University of London said he was suspicious of the theory that Gondwana had divided into many smaller continents because it was inconsistent with what is known about all other supercontinent breakups, including the breakup of Pangea into Gondwana and Laurasia.

 

Other supercontinents in the geologic past, such as Rodinia, the oldest known supercontinent, and Pangea, followed a pattern of splitting along tectonic lines into fewer, larger pieces, geologists think. Eagles wondered if a similar process could explain the breakup of Gondwana.

 

By studying data from where the continent first began to fracture, he determined that Gondwana split into eastern and western plates. Then, about 30 million years later, as crocodiles and sharks were evolving, the two plates split apart, and one continent became two.

 

Before it cracked into several landmasses, Gondwana included what are today Africa, South America, Australia, India and Antarctica. The big continents — Africa and South America — split off about 180 million to 170 million years ago. In recent years, researchers have debated what happened next, as the remaining continents rocketed apart. For example, different Gondwana reconstruction models had a 250-mile (400 kilometers) disagreement in the fit between Australia and Antarctica, an error that has a cascading effect in plate reconstructions, said Lloyd White, a geologist at Royal Holloway, University of London, in Surrey, England.

 

With a series of computer models, the scientists tested various best fits for Australia, Antarctica and India against the compiled research data. The winner was an old-school approach, first published in the 1980s, White said. The big picture shows India, Australia and Antarctica were all joined about 165 million years ago. India was the first to pull away, breaking from both continents by about 100 million years ago. It zoomed north, eventually smashing into Asia. Australia and Antarctica opened up like a zipper from west to east between 85 million and 45 million years ago, White said. When the last "tooth" broke, south of Tasmania, Australia rocketed northward.


Bacteriograph: Photographs Printed with Bacterial Growth


Microbiologist-turned-photographer Zachary Copfer has developed an amazing photo-printing technique that’s very different from any we’ve seen before. Rather than use photo-sensitive papers, chemicals, or ink, Copfer uses bacteria. The University of Cincinnati MFA photography student calls the technique “bacteriography”, which involves controlling bacteria growth to form desired images.

Here’s how Copfer’s method works: he first takes a supply of bacteria, such as E. coli, that have been engineered to express a fluorescent protein, and covers a plate with them. Next, he creates a “negative” of the photo he wants to print by covering the prepared plate with the image and exposing it to radiation. He then “develops” the image by letting the bacteria grow, and finally “fixes” it by coating the plate with a layer of acrylic and resin.

 

Using this process, he creates images of things ranging from famous individuals to Hubble telescope photos of galaxies. Copfer writes that his project is intended to be a counterexample to the false dichotomy of art and science.


Researcher develops new medicine that attacks HIV before it integrates into human DNA


Thirty-four million people are living with human immunodeficiency virus (HIV) worldwide and each year some 2.5 million more are infected, according to the World Health Organization (WHO). A new medicine developed at the University of Georgia attacks the virus before it integrates with human DNA, understood by researchers as the point of no return.

 

"In our laboratories, we have discovered a highly potent HIV integrase inhibitor, aimed at the ‘point of no return' in HIV infectivity," said Nair, who is the Georgia Research Alliance Eminent Scholar in Drug Discovery in the UGA College of Pharmacy. "This inhibitor is highly effective against many variations of HIV."

 

According to Nair, HIV integrase is an ideal target for drug therapy because it is essential for viral replication, and there is no human counterpart, which means there is a low risk of side effects. Cell signaling, the transmission of information by molecules that coordinate and direct cellular actions, plays a key role in HIV cell invasion and the hijacking of cellular biochemistry, allowing the virus to replicate itself.

 

In the initial stage of HIV infection, the body's immune system releases antibodies to attack the virus. Helper T-cells, called CD4+ cells, play a central role in the body's immune response, orchestrating other cells in the immune system to carry out their specific protective functions. In its invasion of CD4+ cells, HIV recognizes and attaches itself to the outer surface of the cell, penetrates it, sheds its outer coat, releases its 15 viral proteins and a ribonucleic acid and proceeds with exploiting the human cellular biochemical machinery to reproduce itself in massive numbers.

 

"Of all of the steps involved in the replication or reproduction of HIV in its infectivity of the human system, the single most devastating point is the incorporation or integration of viral DNA into human chromosomal DNA," said Nair.

 

This insertion of viral DNA into human DNA occurs through a complex biochemical process that is facilitated by HIV integrase, a viral enzyme. Only after this crucial step is the viral invader in a position to exploit human cellular biochemistry to reproduce itself in astonishing numbers to ultimately bring about the destruction of CD4 lymphocytes, the coordinators of the immune response system of the human body.

 

As the infected T-cells die, the immune system of the infected body is unable to defend itself; opportunistic infections such as pneumonia, meningitis, antibiotic-resistant TB and other bacterial and viral infections become deadly. HIV and, eventually, AIDS form a particularly deadly liaison with drug-resistant tuberculosis, killing a quarter of a million people a year, according to the WHO.

"A devastating consequence of the integration step is that once viral integration has occurred, it cannot be reversed," Nair said. "That's why integration is viewed as the ‘point of no return' in HIV infection."
The drug developed in Nair's lab blocks the viral enzyme from inserting its genome into the DNA of the host cell. While Nair acknowledges an HIV vaccine that eliminates the virus altogether may not be doable, therapies that allow people to live longer lives while infected are attainable.

Adrian Vallejo Blanco's curator insight, October 31, 2013 7:32 AM

A new medicine developed at the University of Georgia attacks HIV before it integrates with human DNA. It is a highly potent inhibitor, or blocker, of HIV integrase, aimed at the point of no return in HIV infectivity.

This inhibitor is extremely effective against many variants of HIV. HIV integrase is an ideal target for drug therapy because it is essential for viral replication and has no human counterpart; since the integrase is exclusively viral and does not exist in humans, it can be blocked without affecting the patient, so the risk of side effects is minimal.

Of the steps involved in HIV replication and its infection of the human system, the most devastating is the incorporation, or integration, of viral DNA into human chromosomal DNA, which is mediated by HIV integrase and known as the point of no return.

The drug blocks the viral enzyme from inserting its genome into the DNA of the host cell. An HIV vaccine that eliminates the virus completely is not currently feasible, since the virus has multiple forms or subtypes; a drug, however, can render HIV almost totally impotent.


An Ocean full of Viruses -- There are a hundred million times more viruses on Earth than stars in the universe

Viruses abound in the world’s oceans, yet researchers are only beginning to understand how they affect life and chemistry from the water’s surface to the sea floor.

 

There are an estimated 10³¹ viruses on Earth. That is to say: there may be a hundred million times more viruses on Earth than there are stars in the universe. The majority of these viruses infect microbes, including bacteria, archaea, and microeukaryotes, all of which are vital players in the global fixation and cycling of key elements such as carbon, nitrogen, and phosphorus. These two facts combined—the sheer number of viruses and their intimate relationship with microbial life—suggest that viruses, too, play a critical role in the planet’s biosphere.
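
The "hundred million times" comparison is easy to sanity-check; the star count below is a commonly cited order-of-magnitude estimate, assumed here rather than taken from the article:

```python
viruses = 1e31  # estimated viruses on Earth (from the article)
stars = 1e23    # rough estimate of stars in the observable universe (assumption)

print(f"ratio: {viruses / stars:.0e}")  # 1e+08, i.e. about a hundred million
```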

 

Of all the Earth’s biomes, the ocean has emerged as the source for major discoveries on the interaction of viruses with their microbial hosts. Ocean viruses were the inspiration for early hypotheses of the so-called “viral shunt,” by which viral killing of microbial hosts redirects carbon and nutrients away from larger organisms and back toward other microorganisms. Furthermore, researchers analyzing oceanic life have discovered many novel viruses that defy much of the conventional wisdom about what a virus is and what a virus does.

 

Among these discoveries are “giant” marine viruses, with capsid cross-sections that can exceed 500 nm, an order of magnitude larger than prototypical viruses. Giant viruses infect eukaryotic hosts, including the protist Cafeteria and unicellular green algae. These viruses also carry genomes larger than nearly all previously identified viral types, in some cases upwards of 1 million base pairs. In both marine and nonmarine contexts, researchers have even identified viruses that can infect giant viruses, the so-called virophages, a modern biological example of Jonathan Swift’s 18th-century aphorism: “a flea/ Hath smaller fleas that on him prey;/ And these have smaller fleas to bite ’em;/ And so proceed ad infinitum.”

 

It is apparent that we still have much to learn about the rich and dynamic world of ocean microbes and viruses. For example, a liter of seawater collected in marine surface waters typically contains at least 10 billion microbes and 100 billion viruses—the vast majority of which remain unidentified and uncharacterized. Thankfully, there are an increasing number of high-throughput tools that facilitate the study of bacteriophages and other microbe-infecting viruses that cannot yet be cultured in the laboratory. Indeed, studying viruses in natural environments has recently gone mainstream with the advent of viral metagenomics, pioneered by Forest Rohwer and colleagues at San Diego State University in California.

 

More recently, culture-free methods have enabled insights into questions beyond that of characterizing viral diversity. For example, Matthew Sullivan’s group at the University of Arizona and colleagues recently developed an adapted “viral tagging” method, by which researchers can now characterize the genotypes of environmental viruses that infect a host of interest, even if those viruses cannot be isolated in culture. These and other techniques—and the increasingly interdisciplinary study of environmental viruses—bring the scientific community ever closer to a clearer understanding of how viruses shape ocean ecology.


Everything from ions to living cells can be directed to self-assemble using magnetic fields


Scientists in the US have devised a stunningly simple way to direct colloids to self-assemble in an almost infinite variety of configurations, in both two and three dimensions. The technique, which relies on the creation of a pre-determined pattern of magnetic fields to generate a ‘virtual mould’ to dictate the final position of the particles, can be used to separate and distribute, in a controlled way, anything from living cells to ions.

 

‘The concept is trivial,’ Bartosz Grzybowski, who led the research team at Northwestern University, cheerfully concedes. ‘Why no-one thought of it before now is a good question.’

 

The system consists of a patterned grid of nickel, generated by photolithography, embedded in a layer of poly(dimethylsiloxane) (PDMS) and placed on a permanent magnet. This produces a patterned magnetic field across the grid: over the nickel the field is strong; over the adjacent ‘islands’ where there is no nickel, it is weak.

 

When a colloidal mixture containing magnetic (paramagnetic) and non-magnetic (diamagnetic) particles is placed on the nickel grid and a magnetic field applied, the paramagnetic particles are drawn to the nickel regions, pushing aside any diamagnetic particles and directing them to the adjacent non-magnetic islands or voids.
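
The sorting rule at work is standard magnetophoresis: a particle's response depends on how its magnetic susceptibility compares with that of the surrounding fluid. A minimal statement of the usual first-order force law, included here as textbook physics rather than a formula from the paper:

```latex
% Magnetophoretic force on a small particle of volume V in field B,
% where Delta-chi is the susceptibility contrast between particle and fluid:
\mathbf{F} \approx \frac{V\,\Delta\chi}{2\mu_0}\,\nabla\bigl(|\mathbf{B}|^{2}\bigr),
\qquad \Delta\chi = \chi_{\mathrm{particle}} - \chi_{\mathrm{fluid}}
```

With Δχ > 0 the particle is drawn toward the strong field over the nickel; with Δχ < 0 it is pushed toward the weak-field islands, which is exactly the paramagnetic/diamagnetic sorting described above.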

 

The ability to construct three-dimensional architectures from the colloids also arises, given that the magnetic field penetrates the space above the nickel regions. An excess of diamagnetic colloid, for example, will coalesce on a low-field island to build a pillar. A further excess of particles can build bridges between pillars to produce arches. Such complex three-dimensional structures could be useful for electronic circuitry. To illustrate the versatility of the approach, the research team patterned a grid in such a way as to fashion a microscopic facsimile of the Blue Mosque in Istanbul, featuring large ‘domes’ connected by arches and surrounded by four unconnected satellite domes.

 

‘For me, one of the main aspects of this work is in being able to position particles, and in particular living cells,’ says Grzybowski. ‘We should be able to address things that cannot be addressed by other means.’

Stefano Sacanna, who researches colloid self-assembly at New York University, says: ‘This is the kind of work that makes you think how come nobody has ever thought of this before?’ Sacanna says that while template-assisted self-assembly is a well-known technique, the new work has ‘completely redefined this concept, introducing virtual magnetic moulds that can manipulate either paramagnetic or diamagnetic colloids simultaneously’.

 

‘Their idea of modulating magnetic fields at the micron-scale using a combination of paramagnetic fluids and magnetisable composite films is, in its simplicity, extremely powerful,’ he adds. ‘Not only can these virtual moulds extend in the third dimension, but they can also be switched on and off on demand, allowing for the creation of dynamic and reconfigurable three-dimensional colloidal architectures. As if this was not impressive enough already, they showed how magnetic moulds can manipulate objects other than colloids, including ions and colonies of – live! – bacteria. This work greatly extends our ability to manipulate colloidal matter and holds the promise for new exciting opportunities in nano-fabrication.’

Chris Upton + helpers's curator insight, October 22, 2013 12:33 PM

LOL  It often is...    ‘The concept is trivial,’ Bartosz Grzybowski, who led the research team at Northwestern University, cheerfully concedes. ‘Why no-one thought of it before now is a good question.’


Hitchhiking herpes simplex virus confirms saga of ancient human migration out of Africa


A study of the full genetic code of a common human virus offers a dramatic confirmation of the "out-of-Africa" pattern of human migration, which had previously been documented by anthropologists and studies of the human genome.

 

The virus under study, herpes simplex virus type 1 (HSV-1), usually causes nothing more severe than cold sores around the mouth, says Curtis Brandt, a professor of medical microbiology and ophthalmology at UW-Madison.

 

When Brandt and co-authors Aaron Kolb and Cécile Ané compared 31 strains of HSV-1 collected in North America, Europe, Africa and Asia, "the result was fairly stunning," says Brandt.

 

"The viral strains sort exactly as you would predict based on sequencing of human genomes. We found that all of the African isolates cluster together, all the virus from the Far East, Korea, Japan, China clustered together, all the viruses in Europe and America, with one exception, clustered together," he says.


"What we found follows exactly what the anthropologists have told us, and the molecular geneticists who have analyzed the human genome have told us, about where humans originated and how they spread across the planet."


Gravitational waves help us understand how black holes gain weight

Supermassive black holes: every large galaxy's got one. But here's a real conundrum: how did they grow so big?

 

A paper in today's issue of Science pits the front-running ideas about the growth of supermassive black holes against observational data -- a limit on the strength of gravitational waves, obtained with CSIRO's Parkes radio telescope in eastern Australia.

 

"This is the first time we've been able to use information about gravitational waves to study another aspect of the Universe -- the growth of massive black holes," co-author Dr Ramesh Bhat from the Curtin University node of the International Centre for Radio Astronomy Research (ICRAR) said.

 

"Black holes are almost impossible to observe directly, but armed with this powerful new tool we're in for some exciting times in astronomy. One model for how black holes grow has already been discounted, and now we're going to start looking at the others."

 

The study was jointly led by Dr Ryan Shannon, a Postdoctoral Fellow with CSIRO, and Mr Vikram Ravi, a PhD student co-supervised by the University of Melbourne and CSIRO.

 

Einstein predicted gravitational waves -- ripples in space-time, generated by massive bodies changing speed or direction, bodies like pairs of black holes orbiting each other.

 

When galaxies merge, their central black holes are doomed to meet. They first waltz together then enter a desperate embrace and merge. "When the black holes get close to meeting they emit gravitational waves at just the frequency that we should be able to detect," Dr Bhat said.

 

Played out again and again across the Universe, such encounters create a background of gravitational waves, like the noise from a restless crowd.

 

Astronomers have been searching for gravitational waves with the Parkes radio telescope and a set of 20 small, spinning stars called pulsars. Pulsars act as extremely precise clocks in space. The arrival times of their pulses on Earth are measured with exquisite precision, to within a tenth of a microsecond. When the waves roll through an area of space-time, they temporarily swell or shrink the distances between objects in that region, altering the arrival times of the pulses on Earth.

 

The Parkes Pulsar Timing Array (PPTA), and an earlier collaboration between CSIRO and Swinburne University, together provide nearly 20 years’ worth of timing data. This isn’t long enough to detect gravitational waves outright, but the team say they’re now in the right ballpark.

 

"The PPTA results are showing us how low the background rate of gravitational waves is," said Dr Bhat.

 

 


Mapping the impossible: Matterhorn mapped by a fleet of drones in just under 6 hours

The Matterhorn, which juts out a full kilometre above the surrounding Swiss Alps, dominates the local skyline and has challenged countless mountaineers since it was first scaled in 1865.

 

Now this iconic peak has been mapped in unprecedented detail by a fleet of autonomous, fixed-wing drones, flung into the sky from the summit by their makers. What's more, the entire process took just 6 hours.

 

The mapping, which was unveiled at the Drones and Aerial Robotics Conference in New York City last weekend, was carried out by unmanned aerial vehicle (UAV) company senseFly and aerial photography company Pix4D.

 

Three eBee drones were launched from the top of the mountain, skimming their way down 100 metres from the face, capturing points just 20 centimetres apart. When they reached the bottom, a second team intercepted the drones and relaunched them for further mapping.

 

Speaking to Mapbox, the mapping company that built the 3D point cloud of the mountain once the drones had landed, senseFly's Adam Klaptocz said: "Such a combination of high altitudes, steep rocky terrain and sheer size of dataset has simply not been done before with drones; we wanted to show that it was possible."

 

A video crew followed senseFly's (http://www.sensefly.com/) team of engineers as they marked a historic milestone in surveying, using eBee mini-drones to map the epic Matterhorn and construct a 3D model of "the most beautiful mountain".

The mission involved the coordination of several teams with multiple eBee drones taking over 2200 images in 11 flights, all within a few hours of a sunny alpine morning. The results are stunning: a high-definition 3D point-cloud made of 300 million points covering an area of over 2800 hectares with an average resolution of 20 cm. A special thanks to our partners Pix4D (http://www.pix4d.com) for the creation of the 3D model, Drone Adventures (http://www.droneadventures.org) for mission coordination and MapBox (http://www.mapbox.com) for online visualisation.
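
Those figures are roughly self-consistent, as a quick order-of-magnitude sketch shows (point spacing here is computed over the projected map area, a simplifying assumption for steep terrain):

```python
import math

points = 300e6      # points in the cloud (from the article)
area_m2 = 2800e4    # 2800 hectares, in square metres (from the article)

density = points / area_m2
print(f"density: {density:.0f} points per m^2")                # ~11 points/m^2
print(f"mean spacing: {100 * math.sqrt(1 / density):.0f} cm")  # ~31 cm on the map
```

That works out to roughly one point every 30 cm of projected map area, the same order as the quoted 20 cm average resolution.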

senseFly is a Parrot company (http://parrot.com/)


New cognitive computing capabilities allow for a more natural interaction between physicians, data and EMRs


IBM Research and Cleveland Clinic announced WatsonPaths and Watson EMR (electronic medical records) Assistant – new tools to help physicians make more informed and accurate decisions faster and to cull new insights from electronic medical records.

 

WatsonPaths explores a complex scenario and draws conclusions much like people do in real life. When presented with a medical case, WatsonPaths extracts statements based on the knowledge it has learned as a result of being trained by medical doctors and from medical literature.   

 

WatsonPaths can use Watson’s question-answering abilities to examine the scenario from many angles. The system works its way through chains of evidence – pulling from reference materials, clinical guidelines and medical journals in real-time – and draws inferences to support or refute a set of hypotheses. This ability to map medical evidence allows medical professionals to consider new factors that may help them to create additional differential diagnosis and treatment options.

 

As medical experts interact with WatsonPaths, the system will use machine learning to improve and scale the ingestion of medical information. WatsonPaths incorporates feedback from the physician, who can drill down into the medical text to decide if certain chains of evidence are more important, provide additional insights and information, and weigh which paths of inference lead to the strongest conclusions. Through this collaboration loop, WatsonPaths compares its actions with those of the medical expert so the system can get “smarter”.

 

WatsonPaths, when ready, will be available to Cleveland Clinic faculty and students as part of their problem-based learning curriculum and in clinical lab simulations. With an emphasis on critical thinking and problem solving, WatsonPaths will be able to help medical students learn how to quickly navigate the latest medical information and will display critical reasoning pathways from initial clinical observations all the way to possible diagnoses and treatment options. 

 

IBM and Cleveland Clinic are using Watson EMR Assistant to explore how to navigate and process electronic medical records to unlock hidden insights within the data, with the goal of helping physicians make more informed and accurate decisions about patient care. 

 

Historically, the potential of EMRs has not been realized due to discrepancies in how the data are recorded, collected and organized across healthcare systems and organizations. The massive amount of health data within EMRs presents tremendous value for transforming clinical decision making, but it can also be difficult to absorb. For example, analyzing a single patient’s EMR can be the equivalent of going through up to 100 MB of structured and unstructured data, in the form of plain text that can span a lifetime of clinical notes, lab results and medication history.

 

Watson’s natural language expertise allows it to process an EMR with a deep semantic understanding of the content, helping medical practitioners quickly and efficiently sift through massive amounts of complex and disparate data and make better sense of it all. With this research project, Watson’s robust pipeline of natural language processing and machine learning technologies is being applied to analyzing whole EMRs, with the goal of surfacing information and relationships within the data in a visualization tool that may be useful to a medical practitioner.

 

Working with de-identified EMR data provided by Cleveland Clinic and with direction from Cleveland Clinic physicians, the goal of the Watson EMR Assistant research project is to develop technologies that will be able to collate key details in the past medical history and present to the physician a problem list of clinical concerns that may require care and treatment, highlight key lab results and medications that correlate with the problem list, and classify important events throughout the patient’s care presented within a chronological timeline. 


World’s First Vertical Forest Gets Introduced in Italy


The Bosco Verticale is a system that optimizes, recuperates, and produces energy. Covered in plant life, the building aids in balancing the microclimate and in filtering the dust particles contained in the urban environment. Milan is one of the most polluted cities in Europe. The diversity of the plants and their characteristics produces humidity, absorbs CO2 and dust particles, produces oxygen, and protects the building from radiation and acoustic pollution. This not only improves the quality of living spaces, but gives way to dramatic energy savings year round.

Each apartment in the building will have a balcony planted with trees that are able to respond to the city’s weather: in the summer they will provide shade while filtering city pollution, and in the winter the bare trees will allow sunlight to permeate the spaces. Plant irrigation will be supported through the filtering and reuse of the greywater produced by the building. Additionally, wind and photovoltaic energy systems will further promote the tower’s self-sufficiency.

The design of the Bosco Verticale is a response both to urban sprawl and to the disappearance of nature from our lives and from the landscape. The architect notes that if the units were constructed unstacked, as stand-alone buildings across a single surface, the project would require 50,000 square meters of land and 10,000 square meters of woodland. Bosco Verticale is the first offering in his proposed BioMilano, which envisions a green belt around the city in which 60 abandoned farms on its outskirts would be revitalized for community use.

Sieg Holle's curator insight, October 25, 2013 10:43 AM

excellent use of space   for  new vitality -renewal

Eco Installer's curator insight, November 7, 2013 3:42 AM

A perfect way to live in a forest! 

Chris Vilcsak's curator insight, March 29, 2014 9:49 PM

Now THAT's a green building...


Surprising: Quantum Experiment Shows How Time ‘Emerges’ from Entanglement


Back in 1983, the theorists Don Page and William Wootters came up with a novel solution based on the quantum phenomenon of entanglement. This is the exotic property in which two quantum particles share the same existence, even though they are physically separated.

 

Entanglement is a deep and powerful link, and Page and Wootters showed how it can be used to measure time. Their idea was that the way a pair of entangled particles evolves is a kind of clock that can be used to measure change.

 

But the results depend on how the observation is made. One way to do this is to compare the change in the entangled particles with an external clock that is entirely independent of the universe. This is equivalent to a god-like observer outside the universe measuring the evolution of the particles using an external clock.

 

In this case, Page and Wootters showed that the particles would appear entirely unchanging—that time would not exist in this scenario.

 

But there is another way to do it that gives a different result. This is for an observer inside the universe to compare the evolution of the particles with the rest of the universe. In this case, the internal observer would see a change, and this difference in the evolution of entangled particles compared with everything else is an important measure of time.

 

This is an elegant and powerful idea. It suggests that time is an emergent phenomenon that comes about because of the nature of entanglement. And it exists only for observers inside the universe. Any god-like observer outside sees a static, unchanging universe, just as the Wheeler-DeWitt equations predict.

 

Of course, without experimental verification, Page and Wootters’ ideas are little more than a philosophical curiosity. And since it is never possible to have an observer outside the universe, there seemed little chance of ever testing the idea.

 

Until now. Today, Ekaterina Moreva at the Istituto Nazionale di Ricerca Metrologica (INRIM) in Turin, Italy, and a few pals have performed the first experimental test of the Page and Wootters ideas. And they confirm that time is indeed an emergent phenomenon for ‘internal’ observers but absent for external ones.

 

In recent years, several theoretical papers have discussed whether time can be an emergent property deriving from quantum correlations. Here, to provide insight into how this phenomenon can occur, physicists present an experiment that illustrates Page and Wootters’ mechanism of “static” time, along with the subsequent refinements by Gambini and colleagues. A static, entangled state between a clock system and the rest of the universe is perceived as evolving by internal observers who test the correlations between the two subsystems. The physicists implement this mechanism using an entangled state of the polarization of two photons, one of which is used as a clock to gauge the evolution of the second: an “internal” observer that becomes correlated with the clock photon sees the other system evolve, while an “external” observer that only observes global properties of the two photons can prove it is static.

Sharrock's curator insight, October 28, 2013 10:07 AM

from the article: "This is an elegant and powerful idea. It suggests that time is an emergent phenomenon that comes about because of the nature of entanglement. And it exists only for observers inside the universe. Any god-like observer outside sees a static, unchanging universe, just as the Wheeler-DeWitt equations predict."


Internet power: Google gives a rare glimpse into the vast data centers around the globe that power its services

With hundreds of thousands of servers at each location, Google's vast server farms are housed in buildings ranging from a converted paper mill in Finland to cavernous warehouses in Iowa.

 

'Very few people have stepped inside Google’s data centers, and for good reason: our first priority is the privacy and security of your data, and we go to great lengths to protect it, keeping our sites under close guard,' the firm said. 'While we’ve shared many of our designs and best practices, and we’ve been publishing our efficiency data since 2008, only a small set of employees have access to the server floor itself. 


So, come and see inside our data centers and pay them a virtual visit. Google is also building data centers in Hong Kong, Taiwan, Singapore and Chile. Virtual tours of a North Carolina data center also will be available through Google's 'Street View' service, which is usually used to view photos of neighborhoods around the world. 


The photographic access to Google's data centers coincides with the publication of a Wired magazine article about how the company builds and operates them. The data centers represent Google's nerve center, although none are located near the company's headquarters in Mountain View, California. 


As Google blossomed from its roots in a Silicon Valley garage, company co-founders Larry Page and Sergey Brin worked with other engineers to develop a system to connect low-cost computer servers in a way that would help them realize their ambition to provide a digital roadmap to all of the world's information. 


Initially, Google just wanted enough computing power to index all the websites on the Internet and deliver quick responses to search requests. As Google's tentacles extended into other markets, the company had to keep adding more computers to store videos, photos, email and information about their users' preferences. 


The insights that Google gathers about the more than 1 billion people who use its services have made the company a frequent target of privacy complaints around the world.


The latest missive came Tuesday in Europe, where regulators told Google to revise a 7-month-old change to its privacy policy that enables the company to combine user data collected from its different services. 


Google studies Internet search requests and Web surfing habits in an effort to gain a better understanding of what people like. The company does this in an effort to show ads of products and services to the people most likely to be interested in buying them. 


Even as it allows anyone with a Web browser to peer into its data centers, Google intends to closely guard physical access to its buildings. The company also remains cagey about how many computers are in its data centers, saying only that they house hundreds of thousands of machines to run Google's services. 


Google's need for so many computers has turned the company into a major electricity user, although management says it's constantly looking for ways to reduce power consumption to protect the environment and lower its expenses.


Is ‘massive open online research’ (MOOR) the next frontier for education?


UC San Diego is launching the first major online course that prominently features “massive open online research” (MOOR).

 

For Bioinformatics Algorithms (Part I), UC San Diego computer science and engineering professor Pavel Pevzner and his graduate students are offering a course on Coursera that combines research with a MOOC (massive open online course) for the first time.

 

“All students who sign up for the course will be given an opportunity to work on specific research projects under the leadership of prominent bioinformatics scientists from different countries, who have agreed to interact and mentor their respective teams.”

 

“The natural progression of education is for people to make a transition from learning to research, which is a huge jump for many students, and essentially impossible for students in isolated areas,” said Ph.D. student Phillip Compeau, who helped develop the online course. “By integrating the research with an interactive text and a MOOC, it creates a pipeline to streamline this transition.”

 

Bioinformatics Algorithms (Part I) will run for eight weeks starting October 21, 2013, and students are now able to sign up and download some of the course materials. It is offered free of charge to everyone.

 

Another unique feature of the online course: Pevzner and Compeau have developed Bioinformatics Algorithms: An Active-Learning Approach, an e-book supporting the course, while Pevzner’s colleagues in Russia developed a content delivery system that integrates the e-book with hundreds of quizzes and dozens of homework problems.

 

The U.S.-Russian team, led by Pevzner’s foreign student Nikolay Vyahhi, also implemented the online course using the beta version of Stepic, a new, fully integrated educational platform and startup developed by Vyahhi. Stepic derives its name from the “step-by-step, epic” solution its developers delivered for electronic publishing.

 

The course also provides access to Rosalind, a free online resource for learning bioinformatics through problem solving. Rosalind was developed by Pevzner’s students and colleagues in San Diego and St. Petersburg with funding from the Howard Hughes Medical Institute, the Russian Ministry of Education, and Russian Internet billionaires Yuri Milner and Pavel Durov through their “Start Fellows” award. Rosalind already has over 10,000 active users worldwide.

 

Rosalind — named in honor of British scientist Rosalind Franklin, whose X-ray crystallography with Raymond Gosling facilitated the discovery of the DNA double helix by Watson and Crick — will grade the programming assignments. They come in the form of bioinformatics problems of growing complexity as the course progresses.
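
To give a flavor of the kind of exercise Rosalind auto-grades, here is a sketch of a solution to one of its classic warm-up problems, counting nucleotides in a DNA string; the problem choice is illustrative, not a specific assignment from this course:

```python
# Rosalind-style warm-up: count occurrences of each nucleotide in a DNA string.
from collections import Counter

def count_nucleotides(dna: str) -> dict:
    """Return the number of A, C, G and T characters in the given DNA string."""
    counts = Counter(dna.upper())
    return {base: counts.get(base, 0) for base in "ACGT"}

print(count_nucleotides("AGCTTTTCATTCTGACTGCA"))
# {'A': 4, 'C': 5, 'G': 3, 'T': 8}
```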

 

“We developed Rosalind to inspire both biologists and computer science students,” said Rosalind principal developer Vyahhi, who worked with Pevzner during the latter’s sabbatical in Russia. “The platform allows biologists to develop vital programming skills for bioinformatics at their own pace, and Rosalind can also appeal to programmers who have never been exposed to some of the exciting computational problems generated by molecular biology.”


Via LilyGiraud

Money does grow on trees: Gold found in tree leaves, leads to vast hidden underground deposits

Forget everything your parents ever taught you about managing your finances: Down under in Australia, scientists have found money growing on trees. Not paper money, of course, but eucalyptus leaves that are imbued with small amounts of pure gold.

 

These gilded leaves can help gold exploration companies discover new, underground gold deposits in difficult-to-reach locations.

 

The research, carried out by Melvyn Lintern at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and colleagues, found that trees in Australia and elsewhere in the world can be used to locate gold deposits lying more than 30 meters (100 feet) below the surface. Small amounts of gold are dissolved in water, which is then sucked up by tree roots. These particles eventually end up deposited in the leaves of the trees.

 

Sadly, we’re talking about very small quantities of gold. Lintern says you would need to harvest 500 trees growing over a gold deposit to be able to make a gold ring. In scientific terms, we’re talking about a gold concentration of just 100 parts-per-billion in leaves, and about 50 ppb in twigs. To discover such tiny concentrations you can’t just use a conventional microscope: The Australian scientists had to use a synchrotron, a vast room-sized machine that uses X-rays to map the various elements present in a sample.
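
Those numbers line up with the "500 trees for a ring" remark, as a rough sketch shows; the per-tree leaf mass and the ring mass below are assumptions for illustration, not figures from the article:

```python
conc = 100e-9       # 100 parts per billion of gold in leaf matter (from the article)
leaf_mass_kg = 50   # assumed leaf matter per mature eucalyptus, in kg
ring_mass_g = 2.5   # assumed mass of a small gold ring, in grams

gold_per_tree_g = conc * leaf_mass_kg * 1000  # grams of gold per tree
print(f"{1000 * gold_per_tree_g:.1f} mg of gold per tree")         # ~5.0 mg
print(f"~{ring_mass_g / gold_per_tree_g:.0f} trees for one ring")  # ~500 trees
```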

 

Still, despite the fact that money only grows on trees in minuscule amounts, the main takeaway here is that trees can be used to pinpoint underground gold deposits. As it stands, most gold is currently mined from outcrops, where gold-rich veins have been brought to the surface. As you can probably imagine, underground exploration — as with many new crude oil discoveries — is a lot more expensive and risky. If large gold deposits can be discovered just by analyzing a few leaves — and around 30% of the world’s gold reserves are believed to be under the Goldfields-Esperance region in Australia — then the price of gold could soon drop very quickly indeed. There’s no reason this technique can’t be applied to other important and valuable metals, such as copper and platinum, too.


Oxford Nanopore to Let Early Access Customers Test Handheld Sequencer

Oxford Nanopore Technologies Ltd., the U.K. company developing portable gene sequencers, will begin providing its MinION handheld device to some customers to test, a sign it’s taking steps toward selling the instrument.

 

Oxford Nanopore's disposable DNA sequencer is about the size of a USB memory stick and can be plugged directly into a laptop or desktop computer to perform a single-molecule sensing experiment. The device is expected to sell for around $1,000, according to the company.

 

Oxford Nanopore Technologies also unveiled a larger benchtop version of the technology. It says a configuration of 20 of the benchtop instruments could completely sequence a human genome in 15 minutes.

 

The technology is based on a radically different sequencing method that has been in the works for more than a decade at Oxford University, Harvard and the University of California, Santa Cruz. DNA strands are pulled through nanopores embedded in a polymer. As the DNA passes through the nanopore, specific sequences are identified based on varying electronic signals from the different bases. As a result, the technology can read DNA sequences directly and continuously. The company says double-stranded DNA can be sensed directly from blood.

 

This type of nanopore machine should be suitable for the bench of a small lab, running small projects on small budgets and floor space. However, this isn't the full story. Each individual machine is rocking the VCR-machine-circa-1992 look, and the reason for this becomes clear when you see many of them together: the boxes are designed to fit together in standard computing cluster racks, and Oxford Nanopore refers to each of the individual machines as "nodes". The nodes connect together via a standard network and can talk to each other, as well as reporting data in real time through the network to other computers. When joined together like this, one machine can be designated as the control node, and during sequencing many nodes can be assigned to sequence the same sample.

 

Another aspect is the ability of the machines to react in real time. The sequencer can change aspects of its behavior in response to orders given during sequencing. Some of these will be automatic quality-control changes; the salt concentration and the temperature can be adjusted to optimize sequencing speed or quality. The machines can also be given basic preset targets: sequence until we have enough reads, or enough coverage, or a good enough idea of the concentration of a particular protein. This means that instead of running the machine for a set period of time, you can run it until you have what you want.

 

Also, the machines can be loaded with up to 96 different samples, so you can decide to sequence one sample until you have enough DNA from it, then move on to another one, and so on. The machines can also talk to each other; for instance, four machines could sequence the same sample and stop once they had produced enough sequence between them. Finally, the machines have built-in APIs that allow them to respond to external programs of arbitrary complexity; for instance, you could connect your machines to a computing cluster that is aligning reads and making variant calls as the sequencing runs, and you could decide which sample to sequence next based on the SNP calls from the first.
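
The article does not document the API itself, but the "run until you have what you want" idea maps naturally onto a polling loop. The sketch below is purely hypothetical; every name in it is invented for illustration and is not Oxford Nanopore's actual interface:

```python
import time

def run_until_coverage(node, sample_id: str, target_coverage: float) -> None:
    """Hypothetical control loop: sequence one sample until the coverage target
    is met, then stop. `node` stands in for an imagined sequencer-node client."""
    node.start_sequencing(sample_id)                             # invented call
    while node.estimated_coverage(sample_id) < target_coverage:  # invented call
        time.sleep(60)                                           # poll once a minute
    node.stop_sequencing(sample_id)                              # invented call

# Work through all loaded samples at 30x coverage each (names hypothetical):
# for sample in node.loaded_samples():
#     run_until_coverage(node, sample, target_coverage=30.0)
```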

 

This new generation of sequencing machines is going to raise a whole new set of bioinformatics challenges, as well as requiring scientists to think more carefully about experimental design to make the most of this technology.


Beddit, The Sleep Sensor You Tape To Your Bed


Smart pedometers are just the beginning. Sensors of all kinds are emerging to track the way we move, what we do at home and the way we sleep. Beddit automatically tracks your sleeping patterns, heart rate, breathing, snoring, movements and environment. In the morning, Beddit tells you how you slept and how to do it better. Place the Beddit ultra-thin film sensor in your bed, under the sheet; no wearable sensors are needed. The product is expected to be released by the second quarter of next year.


Exoplanet tally soars above 1,000


The number of observed exoplanets - worlds circling distant stars - has passed 1,000. Of these, 12 could be habitable - orbiting at a distance where it is neither "too hot" nor "too cold" for water to be liquid on the surface. The planets are given away by tiny dips in light as they pass in front of their stars, or by gravitational "tugs" on the star from an orbiting world.
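
The "tiny dips" can be quantified: a transiting planet blocks a fraction of the star's disc equal to the squared ratio of their radii. A quick sketch for an Earth-sized planet crossing a Sun-like star, using the standard transit-depth formula (the choice of bodies is an illustrative assumption):

```python
# Transit depth = (R_planet / R_star)^2
r_earth_km = 6371.0   # Earth's mean radius
r_sun_km = 696000.0   # Sun's radius

depth = (r_earth_km / r_sun_km) ** 2
print(f"depth: {depth:.1e}")  # ~8.4e-05, a dip of less than 0.01% in brightness
```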

 

The Kepler space telescope, which spotted many of these worlds in recent years, broke down earlier this year. Scientists still have to trawl through more than 3,500 other candidates from this mission so the number could rapidly increase.

 

In January 2013, astronomers used Kepler's data to estimate that there could be at least 17 billion Earth-sized exoplanets in the Milky Way galaxy. They said that one in six stars could host an Earth-sized planet in close orbit.


The number of confirmed planets frequently increases because scientists are able to publish their results online immediately as they analyse the data. But as the finds are not yet peer reviewed, the total figure remains subject to change.

"Each night we get a list of astronomy papers where there might be an exoplanet announcement. When we get that we have to review it," explained Prof Mendez.

 

This exoplanet catalogue is organised by Jean Schneider, an astronomer at the Paris Observatory. For the past 18 years he has catalogued new exoplanets on the Extrasolar Planets Encyclopaedia.

 

http://exoplanet.eu/catalog/

more...
Vloasis's curator insight, October 22, 2013 2:38 PM

Better numbers are a good thing indeed.

Scooped by Dr. Stefan Gruenwald
Scoop.it!

First venomous crustacean discovered

First venomous crustacean discovered | Amazing Science | Scoop.it
Cave-dwelling animal uses neurotoxin to kill prey and digest it before eating it.

 

Scattered throughout Mexico and Central America are pools where water surfaces from underground networks of caves, which the ancient Maya said were gateways to the underworld. Biologists have now found that these bodies of water are home to a mysterious real-world creature as well: the first venomous crustaceans known to science.

 

The crustacean in question, Speleonectes tulumensis, belongs to the remipedes, a group first described in 1981. Observing these pale, blind and tiny animals in their natural habitat has been hard because they live in labyrinthine cave networks that are as difficult for divers to navigate as they are dangerous. Nonetheless, biologists including Björn von Reumont and Ronald Jenner, both of the Natural History Museum in London, found remipedes tossing away empty exoskeletons of shrimp, presumably having fed on them.

 

In 2007, researchers discovered structures on the animals' front claws that resemble hypodermic needles, fuelling speculation that they might be injecting something into their prey. That idea has now been confirmed, as von Reumont and Jenner report in Molecular Biology and Evolution. The researchers found that reservoirs attached to the needle structures are surrounded by muscles that can pump fluid through the needles. Moreover, they found glands in the centre of the remipede body that manufacture venom and are connected to the reservoirs.

 

Von Reumont and Jenner also found that the crustaceans’ venom is made predominantly of peptidases, enzymes that have roles in digestion and are also found in rattlesnake venom, where they help to digest prey. The crustacean venom also contains a toxin that is nearly identical to a paralysis-inducing neurotoxin first described in spiders in 2010. “We think the neurotoxin is stopping their prey getting away and that the peptidases are allowing the remipedes to drink their prey like milkshakes,” says Jenner.

more...
Vloasis's curator insight, October 22, 2013 2:37 PM

I knew a venomous crustacean way before they discovered one of these things.

Scooped by Dr. Stefan Gruenwald
Scoop.it!

MIT: The Million-Year Data Storage Disk Unveiled

MIT: The Million-Year Data Storage Disk Unveiled | Amazing Science | Scoop.it

Magnetic hard discs can store data for little more than a decade. But nanotechnologists have now designed and built a disk that can store data for a million years or more.

 

Back in 1956, IBM introduced the world’s first commercial computer capable of storing data on a magnetic disk drive. The IBM 305 RAMAC used fifty 24-inch discs to store up to 5 MB, an impressive feat in those days. Today, however, it’s not difficult to find hard drives that can store 1 TB of data on a single 3.5-inch disk. But despite this huge increase in storage density and a similarly impressive improvement in power efficiency, one thing hasn’t changed. The lifetime over which data can be stored on magnetic discs is still about a decade.

 

That raises an interesting problem. How are we to preserve information about our civilisation on a timescale that outlasts it? In other words, what technology can reliably store information for 1 million years or more?

 

Today, we get an answer thanks to the work of Jeroen de Vries at the University of Twente in the Netherlands and a few pals. These guys have designed and built a disk capable of storing data over this timescale. And they’ve performed accelerated ageing tests which show it should be able to store data for 1 million years and possibly longer.

 

These guys start with some theory about ageing. Clearly, it's impractical to conduct an ageing experiment in real time, particularly when the periods involved are measured in millions of years. But there is a way to accelerate the process.

 

This is based on the idea that data must be stored in an energy minimum that is separated from other minima by an energy barrier. To corrupt the data by flipping a 0 to a 1, for example, the system needs enough energy to overcome this barrier.

 

The probability that the system will jump in this way is governed by the Arrhenius law. This relates the probability of jumping the barrier to the system's temperature, the Boltzmann constant and how often a jump can be attempted, which is set by the level of atomic vibrations.

 

Some straightforward calculations reveal that to last a million years, the required energy barrier is 63 kBT, rising to 70 kBT for a billion years (where kB is the Boltzmann constant and T the temperature). "These values are well within the range of today's technology," say de Vries and co. A back-of-envelope check appears below.
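Here is that back-of-envelope check in Python. The attempt frequency of 1e14 Hz is an assumption on our part - an atomic-vibration timescale chosen because it reproduces the quoted figures; the paper's exact value may differ.

import math

YEAR_S = 3.156e7  # seconds in a year

def barrier_in_kbt(lifetime_s, attempt_freq_hz=1e14):
    # Arrhenius law: lifetime t = (1/f0) * exp(E / kB*T), so the
    # barrier needed to survive for a time t is E = kB*T * ln(t * f0).
    # The attempt frequency f0 is an assumption (see above).
    return math.log(lifetime_s * attempt_freq_hz)

print(barrier_in_kbt(1e6 * YEAR_S))  # ~63, i.e. 63 kBT for a million years
print(barrier_in_kbt(1e9 * YEAR_S))  # ~70, i.e. 70 kBT for a billion years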

 

The disk is simple in conception. The data is stored in the pattern of lines etched into a thin metal disc and then covered with a protective layer.

The metal in question is tungsten, which they chose for its high melting temperature (3,422 degrees C) and low thermal expansion coefficient. The protective layer is silicon nitride (Si3N4), chosen for its high resistance to fracture and its low thermal expansion coefficient.

 

The results are impressive. According to the Arrhenius law, a disk capable of surviving a million years would have to withstand 1 hour at 445 kelvin, a test that the new disks passed with ease. Indeed, they survived temperatures of up to 848 kelvin, albeit with significant information loss.

more...
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Mixing and Matching DNA-based Nanoparticles to Make Multifunctional Materials

Mixing and Matching DNA-based Nanoparticles to Make Multifunctional Materials | Amazing Science | Scoop.it
Standardized technique for combining different types of nanoparticles to produce large-scale composite materials opens remarkable opportunities for 'mix and match' materials fabrication.

 

Scientists at the U.S. Department of Energy's Brookhaven National Laboratory have developed a general approach for combining different types of nanoparticles to produce large-scale composite materials. The technique, described in a paper published online by Nature Nanotechnology on October 20, 2013, opens many opportunities for mixing and matching particles with different magnetic, optical, or chemical properties to form new, multifunctional materials or materials with enhanced performance for a wide range of potential applications. 


The approach takes advantage of the attractive pairing of complementary strands of synthetic DNA—based on the molecule that carries the genetic code in its sequence of matched bases known by the letters A, T, G, and C. After coating the nanoparticles with a chemically standardized "construction platform" and adding extender molecules to which DNA can easily bind, the scientists attach complementary lab-designed DNA strands to the two different kinds of nanoparticles they want to link up. The natural pairing of the matching strands then "self-assembles" the particles into a three-dimensional array consisting of billions of particles. Varying the length of the DNA linkers, their surface density on particles, and other factors gives scientists the ability to control and optimize different types of newly formed materials and their properties.
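The pairing rule itself is easy to state in code. A purely illustrative Python sketch of how one designed linker sequence determines its partner (real linker design also has to weigh strand length and surface density, as noted above); the example sequence is hypothetical:

# Watson-Crick pairing: A binds T, G binds C. Give particle type A a
# designed linker strand and particle type B its reverse complement,
# and the two strands zip together, linking the particles.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    return "".join(PAIRS[base] for base in reversed(strand))

linker_a = "ATGGCACGT"                   # hypothetical strand on particle A
linker_b = reverse_complement(linker_a)  # matching strand for particle B
print(linker_b)  # ACGTGCCAT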


"Our study demonstrates that DNA-driven assembly methods enable the by-design creation of large-scale 'superlattice' nanocomposites from a broad range of nanocomponents now available—including magnetic, catalytic, and fluorescent nanoparticles," said Brookhaven physicist Oleg Gang, who led the research at the Lab's Center for Functional Nanomaterials (CFN). "This advance builds on our previous work with simpler systems, where we demonstrated that pairing nanoparticles with different functions can affect the individual particles' performance, and it offers routes for the fabrication of new materials with combined, enhanced, or even brand new functions." 


Future applications could include quantum dots whose glowing fluorescence can be controlled by an external magnetic field for new kinds of switches or sensors; gold nanoparticles that synergistically enhance the brightness of quantum dots' fluorescent glow; or catalytic nanomaterials that absorb the "poisons" that normally degrade their performance, Gang said.

 

"Modern nano-synthesis methods provide scientists with diverse types of nanoparticles from a wide range of atomic elements," said Yugang Zhang, first author of the paper. "With our approach, scientists can explore pairings of these particles in a rational way." 

 

Pairing up dissimilar particles presents many challenges, which the scientists investigated in the work leading to this paper. To understand the fundamental aspects of the various newly formed materials, they used a wide range of techniques, including x-ray scattering studies at Brookhaven's National Synchrotron Light Source (NSLS) and spectroscopy and electron microscopy at the CFN.

 

For example, the scientists explored the effect of particle shape. "In principle, differently shaped particles don't want to coexist in one lattice," said Gang. "They either tend to separate into different phases, like oil and water refusing to mix, or form disordered structures." The scientists discovered that DNA not only helps the particles mix, but can also improve order in such systems when a thicker DNA shell around the particles is used.


more...
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

NASA May Go Mars Geyser Hopping

NASA May Go Mars Geyser Hopping | Amazing Science | Scoop.it

In showbiz, the adage has always been “leave ‘em wanting more.” So, following the brilliant opening salvos of its $2.5 billion Mars rover mission, what does NASA do for an encore? Current budget paradigms dictate that the space agency think economically in its approach to future Mars exploration. For surface-based exploration, that means a realistic return to NASA’s Discovery-class solar system exploration missions on budgets of $450 million or less.


Thus, NASA has just announced that it has selected InSight, a new $425 million Mars surface mission for launch in 2016. Building on the space agency's Mars Phoenix Lander spacecraft technology, InSight will study the Red Planet's deep interior for clues to how its planetary structure evolved. It should also determine whether Mars has a liquid or solid core and why, unlike Earth, its crust lacks drifting tectonic plates.


But then how about some good old-fashioned geyser hopping? Mars Geyser Hopper, a Discovery-class mission concept study that has largely gone unnoticed, is a potential follow-on to the Phoenix Lander mission and would launch in 2018 at the earliest.


The spacecraft would represent the first attempt to land at Mars' geographic South Pole and would offer the promise of spectacular, high-quality live-action video of carbon dioxide geysers spewing forth in early spring, when the sun is still only a few degrees above the horizon and temperatures are typically 150 degrees below zero Celsius.

 

Using automated detection equipment, the hopper would pick up the first signs of an erupting geyser which, in turn, would trigger high-speed particle motion detectors and high-resolution imagery. There would also be detailed chemical analysis of geyser fallout once it hit the Martian surface.

 

It wouldn’t be the first time NASA has played the hopping game; in 1967, the space agency’s Surveyor VI spacecraft made an eight-ft. repositioning hop after landing on the lunar surface.  But the Geyser Hopper mission would make at least two subsequent hops after landing. The first would enable the spacecraft to better study the geyser fields during southern polar summer. And the second would be to position itself to best wait out the harsh dark polar winter.

 

Hundreds of geysers have already been seen from Mars polar orbit. But thousands of springtime geysers are thought to stretch over an area of several hundred kilometers, crowding the polar landscape at a density of roughly one geyser every 2 kilometers.


more...
SSMS Science's curator insight, October 29, 2013 1:18 PM

I think it would be really cool to see geysers on Mars. However, this article says that InSight will study Mars's deep interior for clues to how the planet evolved. If it does "find" how it evolved, the conclusion won't be true because God created everything including Mars during Creation. Nothing ever evolved. CB

Scooped by Dr. Stefan Gruenwald
Scoop.it!

New Graphene Oxide Based Microfluidic Chip Captures Circulating Tumor Cells with High Sensitivity

New Graphene Oxide Based Microfluidic Chip Captures Circulating Tumor Cells with High Sensitivity | Amazing Science | Scoop.it

Capturing the elusive circulating tumor cells (CTCs) in whole blood is a major goal in medical science, potentially allowing for early detection and monitoring of a variety of cancers. For CTC capture to become a cost-effective option for early detection, techniques must be both efficient and effective while minimizing damage to the captured cells. Recently, microfluidic devices have been the focus of much research and development and have had considerable success in the field of CTC capture. In the past we've covered CTC-capture microfluidic chips with >90% efficiency, as well as chips designed purely for separating white blood cells from whole blood. However, capturing CTCs efficiently at the very low concentrations found in patient blood has remained a challenge. Now researchers at the University of Michigan have created a microfluidic approach that may overcome some of the downsides of existing technology.


The new technology relies on flower-shaped gold particles to which graphene oxide nanosheets stick. In turn, the graphene oxide promotes the growth of molecular chains that grab onto CTCs. The team tested the technology on samples taken from pancreatic, breast and lung cancer patients, and showed that it delivers high sensitivity of detection even at low CTC concentrations in a given sample.


To test the device, the team ran one-milliliter samples of blood through the chip's thin chamber. Even when they had added just three to five cancer cells to the 5-10 billion blood cells in a sample, the chip captured every one of the added cells half the time, with an average capture rate of 73 percent over 10 trials.

The team counted the captured cancer cells by tagging them with fluorescent molecules and viewing them through a microscope. These tags made the cancer cells easy to distinguish from accidentally caught blood cells. They also grew captured breast cancer cells over six days, using an electron microscope to watch them spread across the gold flowers.


more...
Organic Social Media's curator insight, October 31, 2013 11:24 AM

New #Graphene Oxide Based Microfluidic Chip Captures Circulating Tumor Cells with High Sensitivity