Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video

Intestinal bacterial ecosystem: Identification of 741 bacteria, 181 new species, and 848 bacterial viruses


Researchers at DTU (Technical University of Denmark), in collaboration with an international team from countries including France and China, devised a method based on the co-abundance principle to easily identify the genomes (or genetic material) of unknown intestinal microorganisms. The scientists demonstrated this method on 396 human stool samples and uncovered 741 microbial species, of which 181 are proposed to be completely novel. Unlike prior methods to identify bacterial species, the use of co-abundance gene groups (CAGs) obviates the need for assembly as well as the need for a database of reference genomes.
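
The co-abundance idea can be illustrated with a small, self-contained sketch (this is not the DTU team's actual pipeline): genes that belong to the same genome should rise and fall together across samples, so clustering genes by the correlation of their abundance profiles groups them into candidate genomes. The gene abundances below are simulated purely for illustration.

```python
# Toy illustration of co-abundance binning: cluster genes whose abundance
# profiles across stool samples are highly correlated. Simulated data only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)

n_samples = 396                                   # number of stool samples, as in the study
genome_a = rng.lognormal(0, 1, n_samples)         # hidden abundance of genome A in each sample
genome_b = rng.lognormal(0, 1, n_samples)         # hidden abundance of genome B in each sample

# Six anonymous genes: three ride on genome A, three on genome B (plus noise).
genes = np.array([genome_a * f + rng.normal(0, 0.05, n_samples) for f in (1.0, 0.5, 2.0)] +
                 [genome_b * f + rng.normal(0, 0.05, n_samples) for f in (1.0, 0.8, 1.5)])

# Distance = 1 - Pearson correlation between gene abundance profiles.
corr = np.corrcoef(genes)
dist = squareform(1 - corr, checks=False)

# Average-linkage clustering; cut the tree where genes are more than 0.1 apart.
labels = fcluster(linkage(dist, method="average"), t=0.1, criterion="distance")
print(labels)   # e.g. [1 1 1 2 2 2]: two co-abundance gene groups recovered
```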


The new approach also identified 848 bacterial viruses (bacteriophages) that infect these bacteria. The balance of the intestinal microbiota affects human health, and it is increasingly recognized that disrupting this balance, for example through the use of antibiotics, leads to disease states. Therefore, modulation of the bacterial composition by viral agents is an attractive means to restore the balance. Moreover, the new insight makes it possible to exploit viruses to attack specific bacteria, thereby adding another tool to our pharmacological arsenal, which is under increasing pressure from antibiotic resistance.


The human intestine is home to many microorganisms, whose cell population is estimated to be 10 times greater than the number of human cells in an individual. Only a few of these species can be cultivated in the laboratory to be sequenced by traditional methods. Identification of the different microbial species in the intestinal ecosystem and of their interactions will lead to a better understanding of relevant disease conditions such as type 2 diabetes, asthma and obesity.


DIXDC1 gene discovered that inhibits spread of deadly lung cancer

A gene responsible for stopping the movement of cancer from the lungs to other parts of the body has been discovered by researchers, indicating a new way to fight one of the world’s deadliest cancers. By identifying the cause of this metastasis, which often happens quickly in lung cancer and results in a bleak survival rate, scientists are able to explain why some tumors are more prone to spreading than others. The newly discovered pathway may also help researchers understand and treat the spread of melanoma and cervical cancers.


Lung cancer, which also affects nonsmokers, is the leading cause of cancer-related deaths in the United States (estimated at nearly 160,000 deaths this year). The United States spends more than $12 billion on lung cancer treatments, according to the National Cancer Institute. Nevertheless, the survival rate for lung cancer is dismal: 80 percent of patients die within five years of diagnosis, largely due to the disease's aggressive tendency to spread throughout the body.


To become mobile, cancer cells override cellular machinery that typically keeps cells rooted within their respective locations. Deviously, cancer can switch on and off molecular anchors protruding from the cell membrane (called focal adhesion complexes), preparing the cell for migration. This allows cancer cells to begin the processes to traverse the body through the bloodstream and take up residence in new organs.


In addition to different cancers being able to manipulate these anchors, it was also known that about a fifth of lung cancer cases are missing an anti-cancer gene called LKB1 (also known as STK11). Cancers missing LKB1 are often aggressive, rapidly spreading through the body. However, no one knew how LKB1 and focal adhesions were connected.


Now, the Salk team has found the connection and a new target for therapy: a little-known gene called DIXDC1. The researchers discovered that DIXDC1 receives instructions from LKB1 to go to focal adhesions and change their size and number.


When DIXDC1 is "turned on," half a dozen or so focal adhesions grow large and sticky, anchoring cells to their spot. When DIXDC1 is blocked or inactivated, focal adhesions become small and numerous, resulting in hundreds of small "hands" that pull the cell forward in response to extracellular cues. That increased tendency to be mobile aids in the escape from, for example, the lungs and allows tumor cells to survive travel through the bloodstream and dock at organs throughout the body.


"The communication between LKB1 and DIXDC1 is responsible for a 'stay-put' signal in cells," says first author and Ph.D. graduate student Jonathan Goodwin. "DIXDC1, which no one knew much about, turns out to be inhibited in cancer and metastasis."


Tumors, senior author Reuben Shaw and collaborators found in the new research, have two ways to turn off this "stay-put" signal. One is by inhibiting DIXDC1 directly. The other is by deleting LKB1, which then never sends the signal to DIXDC1 to move to the focal adhesions and anchor the cell. Given this, the scientists wondered if reactivating DIXDC1 could halt a cancer's metastasis. The team took metastatic cells, which had low levels of DIXDC1, and overexpressed the gene. The addition of DIXDC1 did indeed blunt the ability of these cells to metastasize in vitro and in vivo.


"It was very, very surprising that this gene would be so powerful," says Goodwin. "At the start of this study, we had no idea DIXDC1 would be involved in metastasis. There are dozens of proteins that LKB1 affects; for a single one to control so much of this phenotype was not expected."


Nocturnal Moth Eyes Inspire More Efficient Photoelectrochemical Cells


Collecting light with artificial moth eyes


All over the world researchers are investigating solar cells which imitate plant photosynthesis, using sunlight and water to create synthetic fuels such as hydrogen. Empa researchers have developed such a photoelectrochemical cell, recreating a moth’s eye to drastically increase its light-collecting efficiency. The cell is made of cheap raw materials – iron and tungsten oxide. Rust – iron oxide – could revolutionise solar cell technology. This usually unwanted substance can be used to make photoelectrodes which split water and generate hydrogen. Sunlight is thereby directly converted into valuable fuel rather than first being used to generate electricity. Unfortunately, as a raw material iron oxide has its limitations. Although it is unbelievably cheap and absorbs light in exactly the wavelength region where the sun emits the most energy, it conducts electricity very poorly and must therefore be used in the form of an extremely thin film in order for the water-splitting technique to work. The disadvantage of this is that such thin films absorb too little of the sunlight shining on the cell.


Empa researchers Florent Boudoire and Artur Braun have now succeeded in solving this problem. A special microstructure on the photoelectrode surface literally gathers in sunlight and does not let it out again. The basis for this innovative structure is tiny particles of tungsten oxide which, because of their saturated yellow colour, can also be used for photoelectrodes. The yellow microspheres are applied to an electrode and then covered with an extremely thin nanoscale layer of iron oxide. When external light falls on a particle it is internally reflected back and forth until finally all the light is absorbed. The entire energy of the beam is then available for splitting water molecules.
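
The benefit of trapping light inside the microspheres can be estimated with a simple geometric argument (a simplified sketch, not Empa's optical model): if the thin iron-oxide film absorbs a fraction of the light on each pass, then after N internal reflections the absorbed fraction is 1 - (1 - a)^N. The 5% per-pass absorption used below is an assumed, illustrative number.

```python
# Rough estimate of light trapping: absorption after N internal passes.
# The 5% single-pass absorption is an assumed value for illustration only.
single_pass = 0.05                     # fraction absorbed by the thin film per pass

for n_passes in (1, 5, 20, 60):
    absorbed = 1 - (1 - single_pass) ** n_passes
    print(f"{n_passes:3d} passes -> {absorbed:.0%} of the light absorbed")
# A single pass absorbs 5%; about 60 internal reflections absorb roughly 95%.
```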


In principle the newly conceived microstructure functions like the eye of a moth, explains Florent Boudoire. The eyes of these night-active creatures need to collect as much light as possible to see in the dark, and must also reflect as little as possible to avoid detection and being eaten by their enemies. The microstructure of their eyes is especially adapted to the relevant wavelengths of light. Empa's photocells take advantage of the same effect.


In order to recreate artificial moth eyes from metal oxide microspheres, Florent Boudoire sprays a sheet of glass with a suspension of plastic particles, each of which contains at its centre a drop of tungsten salt solution. The particles lie on the glass like a layer of marbles packed close to each other. The sheet is placed in an oven and heated, the plastic material burns away and each drop of salt solution is transformed into the required tungsten oxide microsphere. The next step is to spray the new structure with an iron salt solution and once again heat it in an oven.


Reference:
Florent Boudoire, Rita Toth, Jakob Heier, Artur Braun, Edwin C. Constable, Photonic light trapping in self-organized all-oxide microspheroids impacts photoelectrochemical water splitting, Energy & Environmental Science, in press.


Extending Moore's Law: Shrinking transistor size for smaller, more efficient computers

Over the years, computer chips have gotten smaller thanks to advances in materials science and manufacturing technologies. This march of progress, the doubling of transistors on a microprocessor roughly every two years, is called Moore's Law. But there's one component of the chip-making process in need of an overhaul if Moore's Law is to continue: the chemical mixture called photoresist.


Now, in a bid to continue decreasing transistor size while increasing computation and energy efficiency, chip-maker Intel has partnered with researchers from the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) to design an entirely new kind of resist. Importantly, they have done so by characterizing the chemistry of the photoresist, which is crucial for improving its performance in a systematic way. The researchers believe their results could easily be incorporated by companies that make resist, and find their way into manufacturing lines as early as 2017.


The new resist effectively combines the material properties of two pre-existing kinds of resist, achieving the characteristics needed to make smaller features for microprocessors, which include better light sensitivity and mechanical stability, says Paul Ashby, staff scientist at Berkeley Lab's Molecular Foundry, a DOE Office of Science user facility. "We discovered that mixing chemical groups, including cross-linkers and a particular type of ester, could improve the resist's performance." The work is published this week in the journal Nanotechnology.


To understand why resist is so important, consider a simplified explanation of how microprocessors are made. A silicon wafer, about a foot in diameter, is cleaned and coated with a layer of photoresist. Next, ultraviolet light is used to project an image of the desired circuit pattern, including components such as wires and transistors, onto the wafer, chemically altering the resist.


Depending on the type of resist, light either makes it more or less soluble, so when the wafer is immersed in a solvent, the exposed or unexposed areas wash away. The resist protects the material that makes up transistors and wires from being etched away and can allow the material to be selectively deposited. This process of exposure, rinse and etch or deposition is repeated many times until all the components of a chip have been created.
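
As a toy illustration of the expose-and-develop step described above (not an actual process model), the sketch below treats the wafer as a boolean grid: a positive resist dissolves where it was exposed, while a negative (crosslinking) resist dissolves where it was not.

```python
# Toy model of the expose-and-develop step with positive vs. negative resist.
import numpy as np

mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 1:7] = True                 # the circuit feature projected by UV light

exposed = mask                        # resist regions hit by UV through the mask

# Positive resist: exposed areas become soluble and wash away in the developer.
positive_remaining = ~exposed
# Negative (crosslinking) resist: exposed areas harden; unexposed areas wash away.
negative_remaining = exposed

# The remaining resist then shields the underlying material during the etch step.
print(positive_remaining.sum(), negative_remaining.sum())   # 52 12
```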


The problem with today's resist, however, is that it was originally developed for light sources that emit so-called deep ultraviolet light with wavelengths of 248 and 193 nanometers. But to gain finer features on chips, the industry intends to switch to a new light source with a shorter wavelength of just 13.5 nanometers. Called extreme ultraviolet (EUV), this light source has already found its way into manufacturing pilot lines. Unfortunately, today's photoresist isn't yet ready for high volume manufacturing.


"The semiconductor industry wants to go to smaller and smaller features," explains Ashby. While extreme ultraviolet light is a promising technology, he adds, "you also need the resist materials that can pattern to the resolution that extreme ultraviolet can promise." So teams led by Ashby and Olynick, which include Berkeley Lab postdoctoral researcher Prashant Kulshreshtha, investigated two types of resist. One is called crosslinking, composed of molecules that form bonds when exposed to ultraviolet light. This kind of resist has good mechanical stability and doesn't distort during development -- that is, tall, thin lines made with it don't collapse. But if this is achieved with excessive crosslinking, it requires long, expensive exposures. The second kind of resist is highly sensitive, yet doesn't have the mechanical stability.


When the researchers combined these two types of resist in various concentrations, they found they were able to retain the best properties of both. The materials were tested using the unique EUV patterning capabilities at Berkeley Lab's Center for X-Ray Optics (CXRO). Using the Nanofabrication and the Imaging and Manipulation facilities at the Molecular Foundry to analyze the patterns, the researchers saw improvements in the smoothness of lines created by the photoresist, even as they shrank the width. Through chemical analysis, they were also able to see how various concentrations of additives affected the cross-linking mechanism and the resulting stability and sensitivity.


Untangling spider's webs: Largest-ever genomic study shows orb weaver spiders do not share common origins



The largest-ever phylogenetic study of spiders, conducted by postdoctoral researcher Rosa Fernández, Gonzalo Giribet, Alexander Agassiz Professor of Zoology, and Gustavo Hormiga, a professor at George Washington University, shows that, contrary to long-held popular opinion, the two groups of spiders that weave orb-shaped webs do not share a single origin. The study is described in a July 17 paper published in Current Biology.


"This study examines two different groups of orb-weaver spiders, as well as several other species," Giribet said. "Using thousands of genes, we did a comparative phylogenetic analysis, and what we now know is there is not a single origin for the orb-weaver spiders.


"There are two possible explanations for this," he continued. "One is that the orb web evolved far back in the lineage of the two groups, but has been lost in some groups. The other option is that the orb web evolved independently in these two groups. We still haven't resolved that question yet -- we need to sample many more of these intermediate groups before we can say which option is correct."


The belief that orb-weaver spiders shared a common origin, Giribet said, came largely from earlier morphological studies. Even as new genetic tools became more commonplace in the last two decades, the single origin theory held sway, in part, because early phylogenetic studies relied on just a handful of genes to draw a picture of the spider evolutionary tree.


"Some early analyses pointed out that spiders with orb webs didn't form a group -- they appeared in different places along the tree," Giribet said. "But the genes that were being used weren't enough to elucidate the evolution of a very diverse group like spiders, so most people dismissed many of those results."


In recent years, however, sequencing technology has dropped dramatically in cost, meaning researchers who once were able to study only a handful of genes can now examine the entire genome of a particular organism.


"The technology has changed what we are able to do in terms of the questions we can ask and the questions we can answer," Giribet continued. "Even just five years ago, we were spending thousands of dollars to sequence 3,000 genes. Today, we're spending just a few hundreds of dollars to sequence millions, which is almost an entire genome. In the case of Giribet and Fernández, the technology allowed them to sequence genes from 14 different spiders, creating the largest genomic data set for the study of spiders.


"This paper is at the forefront of how these large data sets are being analyzed, and how we are now constructing phylogenies using molecular data," Giribet said. "We can now test all possible pitfalls of phylogenetic interference to make sure our results are as accurate as possible."


Quantum bounce could make black holes explode

If space-time is granular, it could reverse gravitational collapse and turn it into expansion.


Black holes might end their lives by transforming into their exact opposite — 'white holes' that explosively pour all the material they ever swallowed into space, say two physicists. The suggestion, based on a speculative quantum theory of gravity, could solve a long-standing conundrum about whether black holes destroy information.


The theory suggests that the transition from black hole to white hole would take place right after the initial formation of the black hole, but because gravity dilates time, outside observers would see the black hole lasting billions or trillions of years or more, depending on its size. If the authors are correct, tiny black holes that formed during the very early history of the Universe would now be ready to pop off like firecrackers and might be detected as high-energy cosmic rays or other radiation. In fact, they say, their work could imply that some of the dramatic flares commonly considered to be supernova explosions could in fact be the dying throes of tiny black holes that formed shortly after the Big Bang.


Albert Einstein’s general theory of relativity predicts that when a dying star collapses under its own weight, it can reach a stage at which the collapse is irreversible and no known force of nature can stop it. This is the formation of a black hole: a spherical surface, known as the event horizon, appears, shrouding the star inside from outside observers while it continues to collapse, because nothing — not even light or any other sort of information — can escape the event horizon.
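
The enormous gap between the proper time of the collapsing star and the time measured by distant observers comes from gravitational time dilation near the horizon. A back-of-the-envelope sketch using standard textbook formulas (not the authors' quantum-gravity calculation) computes the Schwarzschild radius r_s = 2GM/c^2 and the clock-rate factor sqrt(1 - r_s/r) just outside it; the 10-solar-mass black hole is an illustrative choice.

```python
# Schwarzschild radius and gravitational time dilation just outside a black hole.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

M = 10 * M_sun                      # a 10-solar-mass black hole (illustrative)
r_s = 2 * G * M / c**2              # Schwarzschild radius
print(f"r_s = {r_s / 1000:.1f} km")   # roughly 30 km

# Clock rate at radius r relative to a distant observer: dtau/dt = sqrt(1 - r_s/r).
for r in (1.001 * r_s, 1.0001 * r_s):
    rate = math.sqrt(1 - r_s / r)
    print(f"r = {r / r_s:.4f} r_s -> local clock runs at {rate:.3f} of the far-away rate")
```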


Because dense matter curves space, ‘classical’ general relativity predicts that the star inside will continue to shrink into what is known as a singularity, a region where matter is infinitely dense and space is infinitely curved. In such situations, the known laws of physics cease to be useful.


Many physicists, however, believe that at some stage in this process, quantum-gravity effects should take over, arresting the collapse and avoiding the infinities.




Watching the Milky Way grow up: How galaxies change over eons of time


Humans take the starry night for granted. But Earth’s night sky hasn’t always sparkled: In the distant past, during the infancy of the Milky Way, it was a much darker place. Now scientists have pictures of what it looked like and how it’s changed.


“For the first time we have images of what the Milky Way looked like in the past,” said Pieter G. van Dokkum, chair of Yale University’s Astronomy department and leader of a project to reconstruct the galaxy’s history with data from NASA’s Hubble Space Telescope. “By looking at hundreds of other distant but similar galaxies, we’ve traced dramatic changes in our own. We’ve captured most of the Milky Way’s evolution.”


Milky Way star formation was explosive in the period between 11 billion and 7 billion years ago, the astronomers found: Nearly 90% of its stars formed then. At peak formation, about 9 billion years ago, it was generating about 15 stars a year, they said.


Van Dokkum’s team traced the history of Milky Way star formation by studying about 400 galaxies, selected from a sample of more than 100,000 galaxies observed by Hubble covering about 11 billion years of cosmic history. These galaxies are expected to evolve as the Milky Way did.


At first the Milky Way, like other galaxies, had lots of gas but no stars. Gravitational forces compressed the gases to extreme densities, leading to the birth of stars.


By the time Earth formed, 4.5 billion years ago, the Milky Way’s rate of star birth had slowed dramatically, to the rate of about three per year.

“The show was mostly over,” van Dokkum said.


The team relied on three large Hubble galaxy programs: the Great Observatories Origins Deep Survey (GOODS), the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS), and the 3D-HST survey.


“A lot of what we accomplished required the excellent vision of the Hubble Space Telescope,” said Joel Leja, a graduate student in astronomy at Yale and member of the research team. “The Hubble images help us see the Milky Way ancestors all the way to their early youth. To get the first baby pictures, we need the infrared eyes of NASA’s James Webb Space Telescope, scheduled to launch in 2018.”


The authors reported their research in a paper in The Astrophysical Journal Letters titled "The Assembly of Milky Way-like Galaxies Since z ~ 2.5." A related paper is in press.


Science fiction-esque rapid firing X-ray lasers illuminate the mystery of proteins

A new scientific technique uses the Linac Coherent Light Source to rapidly pulse X-ray free-electron lasers at protein crystals to monitor dynamic protein changes.


While microscopes have been used to view countless wonders of the biological world, any object that is shorter than a wave of light, such as an atom, is physically impossible to see with light. For decades, those interested in protein structure have solved this issue by growing protein crystals and shooting x-rays at them. This technique, known as x-ray crystallography, requires multiple synthetic manipulations – these steps make the technique less than ideal for discovering how things work in the natural world. A new science fiction-esque technique instead pulses high-frequency x-ray lasers at a tiny jet of suspended proteins to observe their structure. This new technique allows for a more authentic interpretation of protein dynamics while overcoming many disadvantages of traditional crystallography.


One of the more anachronistic techniques used in current molecular biology and medicine is x-ray crystallography. Invented by physicists in the early 1900s to solve the structure of minerals, it was applied to structural biology in the 1950s to analyze common proteins such as hemoglobin (the protein that makes blood red) and insulin. Since then, thousands of protein structures have been discovered, leading to invaluable advances in designing drugs and research.


While the last six decades have witnessed many improvements in the equipment used for x-ray crystallography, no fundamental changes in the underlying method have been made since it won the Nobel prize for physics in 1914.


What is x-ray crystallography and why hasn’t it evolved since it was discovered? X-ray crystallography, in short, solves an optical problem. The first lesson of elementary optics is that objects can only be detected using electromagnetic waves that are shorter than the object. It works a bit like a painting – you can’t use a paintbrush larger than the canvas unless your name is Ad Reinhardt. For a quick introduction to why, check out our primer on optics and waves.


Essentially, x-rays cannot be focused like normal light, so scientists have to use a proxy to determine what a protein looks like. To do this, large crystals of a protein are grown and a single crystal is selected for quality. Next, the crystal is saturated with x-rays and the diffracted x-ray pattern is collected behind the protein crystal. This is back-calculated into a 3-dimensional structure through a complex computational process.
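
The relationship between the crystal's electron density and the measured diffraction pattern is essentially a Fourier transform, and the "complex computational process" exists because the detector records only intensities, not phases. A minimal one-dimensional numpy sketch of this phase problem (a conceptual illustration, not a crystallography package):

```python
# The crystallographic "phase problem" in one toy dimension: the detector
# records only |F|^2, so the density cannot be recovered by a naive inverse FFT.
import numpy as np

density = np.zeros(64)
density[[10, 22, 40]] = (1.0, 2.0, 1.5)     # toy 1-D "electron density" (three atoms)

F = np.fft.fft(density)                     # structure factors (complex amplitudes)
intensities = np.abs(F) ** 2                # what the detector actually measures

# With the true phases, the density comes straight back:
recovered = np.fft.ifft(F).real
assert np.allclose(recovered, density)

# With only the measured amplitudes (phases discarded), it does not:
amplitude_only = np.fft.ifft(np.abs(F)).real
print(np.allclose(amplitude_only, density))   # False -> the phase problem
```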


Anyone familiar with x-rays or proteins will realize two issues with this process. The first is that x-rays tend to harm living tissue – saturating protein crystals with an x-ray beam destroys the sample. While performing the experiment at cryo-temperatures mitigates x-ray damage, it is still an inherent limitation of the technique. The second issue is that the proteins are crystallized – it’s not often that your meat comes crystallized when you order it at a restaurant. In fact, proteins are never found crystallized in nature. This makes the method both unnatural and difficult from the start.


Enter the Femto-laser.  By using Stanford’s Linac Coherent Light Source, super-short bursts of radiation from a particle accelerator capture diffraction patterns from millions of tiny protein crystals. Each burst destroys individual proteins, but not before data is collected. By integrating data from millions of bursts, the protein’s shape is reconstructed. This yields two advantages over traditional crystallography: first, the proteins can be captured in smaller, more “natural” states. Second, the data collected is not limited by the decay of one individual crystal.


In an exciting step forward, scientists have captured a glimpse of how proteins work using the Femto-laser. Kupitz and colleagues have determined the biophysics of a plant photosystem protein complex. The photosystem is used in the first step of photosynthesis by plants, and is a gigantic complex of proteins responsible for converting sunlight to energy. By exciting the photosystem during bursts of the x-ray laser, they were able to capture two different structures of the protein, showing exactly how it works to convert light to energy. This may take engineers one step closer to generating organic solar panels with much greater efficiency than current photovoltaic or thin-film technology.


Entanglement between particle and wave-like states of light resembles Schrödinger's cat experiment


In a new paper published as the July 2014 cover article in Nature Photonics, physicists Hyunseok Jeong, et al., at institutions in South Korea, Italy, and Australia, have devised and experimentally demonstrated a novel scheme to generate entanglement between quantum and classical (or "particle-like" and "wave-like") states of light. This study marks the first time that physicists have generated entanglement between a single photon and a coherent wave-like state of light.


According to the scientists, this hybrid entanglement can be considered as the closest analogy of Schrödinger's Gedankenexperiment realized so far. It has practical applications, too, as it provides a new type of qubit (a hybrid qubit) that can be used for efficient quantum computation.


In previous attempts to generate entanglement between a single photon and coherent wave-like state of light, the major obstacle was the requirement of a strong and noiseless nonlinear interaction between photons; this was extremely difficult because photons seldom interact with each other.


In the new study, the researchers introduced a new scheme to generate hybrid entanglement that is experimentally accessible. The new scheme is based on superposition. Similar to how Schrödinger's cat is both alive and dead at the same time before anyone looks, the photons also occupy two states at the same time.


Rather than being dead and alive, however, the states correspond to the long and short arms of the interferometer through which each photon travels. It's as if a photon traveled through both arms at the same time. By performing a procedure that erases the information regarding which arm the photons travelled through, the physicists could entangle a photon with a wave-like state that they call a photon-added coherent state.


Because the new state is essentially a superposition including both particle and wave components, it can be viewed as a new type of qubit. Usually, a qubit consists of a particle that is in a superposition of two quantum states. The new idea here is that a qubit can consist of a quantum (or particle-like) state and a classical (or wave-like) one, which could effectively combine the intrinsic advantages of both. In this way, a hybrid qubit could offer unique advantages for quantum computing and quantum teleportation.
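
The kind of state being described can be written down concretely. The QuTiP sketch below builds a superposition of a single-photon qubit entangled with two distinguishable coherent (wave-like) states and checks that the reduced qubit is maximally mixed; it reproduces the idea of hybrid entanglement, not the paper's optical implementation, and the amplitude and Fock-space cutoff are illustrative choices.

```python
# A minimal QuTiP sketch of a hybrid entangled state between a single-photon
# qubit (|0>, |1>) and wave-like coherent states (|alpha>, |-alpha>).
from qutip import basis, coherent, tensor, entropy_vn

N = 20                      # Fock-space cutoff for the coherent-state mode
alpha = 1.5                 # coherent-state amplitude (illustrative value)

# (|0>|alpha> + |1>|-alpha>) / norm : a particle-like qubit entangled with a
# wave-like state of light.
psi = (tensor(basis(2, 0), coherent(N, alpha)) +
       tensor(basis(2, 1), coherent(N, -alpha))).unit()

# Entanglement entropy of the reduced single-photon qubit; close to ln(2) ~ 0.69
# because the two coherent states are nearly orthogonal for this alpha.
print(entropy_vn(psi.ptrace(0)))
```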


BYU's smart object recognition algorithm doesn't need human calibration



If we’ve learned anything from post-apocalyptic movies it’s that computers eventually become self-aware and try to eliminate humans.

BYU engineer Dah-Jye Lee isn’t interested in that development, but he has managed to eliminate the need for humans in the field of object recognition. Lee has created an algorithm that can accurately identify objects in images or video sequences without human calibration.

“In most cases, people are in charge of deciding what features to focus on and they then write the algorithm based off that,” said Lee, a professor of electrical and computer engineering. “With our algorithm, we give it a set of images and let the computer decide which features are important.”


Not only is Lee’s genetic algorithm able to set its own parameters, but it also doesn’t need to be reset each time a new object is to be recognized—it learns them on its own.


Lee likens the idea to teaching a child the difference between dogs and cats. Instead of trying to explain the difference, we show children images of the animals and they learn on their own to distinguish the two. Lee’s object recognition does the same thing: Instead of telling the computer what to look at to distinguish between two objects, they simply feed it a set of images and it learns on its own.
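
To make the idea of "letting the computer decide which features are important" concrete, here is a generic genetic algorithm for selecting a feature subset. This is not BYU's "ECO features" implementation; the fitness function and the set of "useful" features are stand-ins for a real classifier score on real image features.

```python
# A generic genetic algorithm for choosing features, sketched to show how a
# computer can pick features on its own. fitness() is a hypothetical stand-in
# for the accuracy of a classifier trained on the selected features.
import random

N_FEATURES = 16          # candidate features the algorithm can switch on or off
USEFUL = {1, 4, 7, 11}   # pretend these are the features that actually help

def fitness(mask):
    """Higher is better: reward useful features, lightly penalize clutter."""
    return sum(1 for i in USEFUL if mask[i]) - 0.1 * sum(mask)

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in mask]

population = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(30)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # selection: keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children                # next generation

best = max(population, key=fitness)
print([i for i, bit in enumerate(best) if bit])    # typically recovers {1, 4, 7, 11}
```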


In a study published in the December issue of the academic journal Pattern Recognition, Lee and his students demonstrate both the independent ability and accuracy of their "ECO features" genetic algorithm.


The BYU algorithm tested as well as or better than other top object recognition algorithms that have been published, including those developed by NYU's Rob Fergus and Thomas Serre of Brown University.


Lee and his students fed their object recognition program four image datasets from CalTech (motorbikes, faces, airplanes and cars) and found 100 percent accurate recognition on every dataset. The other published well-performing object recognition systems scored in the 95-98% range.


Ultrafast X-ray laser sheds new light on fundamental ultrafast dynamics

Ultrafast X-ray laser research led by Kansas State University has provided scientists with a snapshot of a fundamental molecular phenomenon. The finding sheds new light on microscopic electron motion in molecules.



Artem Rudenko, assistant professor of physics and a member of the university's James R. Macdonald Laboratory; Daniel Rolles, currently a junior research group leader at Deutsches Elektronen-Synchrotron in Hamburg, Germany, who will be joining the university's physics department in January 2015; and an international group of collaborators studied how an electron moves between different atoms in an exploding molecule.


Researchers measured at which distances between the two atoms the electron transfer can occur. Charge transfer processes—particularly electron transfer—are important for photosynthesis and solar cells, and drive many other important reactions in physics, chemistry and biology.


Their observation, "Imaging charge transfer in iodomethane upon x-ray photoabsorption," appears in the journal Science.


"There is a very fundamental question about how far an electron can go to reach the nearby atom in a molecule, and how probable that transition is," Rudenko said. "It has been difficult to capture images of this motion because of the very short times and very small distances that need to be measured."


To find the answer, scientists shot an ultrafast optical laser at iodomethane molecules—molecules made of an iodine atom and a methyl group—to break the bond of these two partners.


The molecules were hit with an intense, ultrashort X-ray pulse to strip the electrons from the inner shells of the iodine atom as well as to study the charge transfer between the fragments. The experiment was performed using the Linac Coherent Light Source, the world's most powerful X-ray laser. The laser is at the SLAC National Accelerator Laboratory in California and delivers femtosecond X-ray pulses. One femtosecond is one-millionth of a billionth of a second.


Researchers were able to see electrons jumping over surprisingly long distances—up to 10 times the length of the original, intact molecule.

"Conceptually the study was pretty simple," Rudenko said. "We break up the molecule with the optical laser, use the X-rays to knock a few electrons from the iodine atom, and control the distance to the neighboring methyl group by tuning the timing between the laser and the X-rays. Then we watch how many electrons move from the methyl side to the iodine side to fill the created holes."


Salk: One single injection of FGF1 stops type-2 diabetes in its tracks for 2 days


In mice with diet-induced diabetes—the equivalent of type 2 diabetes in humans—a single injection of the protein FGF1 is enough to restore blood sugar levels to a healthy range for more than two days. The discovery by Salk scientists, published today in the journal Nature, could lead to a new generation of safer, more effective diabetes drugs.


The team found that sustained treatment with the protein doesn't merely keep blood sugar under control, but also reverses insulin insensitivity, the underlying physiological cause of diabetes. Equally exciting, the newly developed treatment doesn't result in side effects common to most current diabetes treatments.


"Controlling glucose is a dominant problem in our society," says Ronald M. Evans, director of Salk's Gene Expression Laboratory and corresponding author of the paper. "And FGF1 offers a new method to control glucose in a powerful and unexpected way."


Type 2 diabetes, which can be brought on by excess weight and inactivity, has skyrocketed over the past few decades in the United States and around the world. Almost 30 million Americans are estimated to have the disease, where glucose builds up in the bloodstream because not enough sugar-carting insulin is produced or because cells have become insulin-resistant, ignoring signals to absorb sugar. As a chronic disease, diabetes can cause serious health problems and has no specific cure. Rather it is managed—with varying levels of success—through a combination of diet, exercise and pharmaceuticals.


In 2012, Evans and his colleagues discovered that a long-ignored growth factor had a hidden function: it helps the body respond to insulin. Unexpectedly, mice lacking the growth factor, called FGF1, quickly develop diabetes when placed on a high-fat diet, a finding suggesting that FGF1 played a key role in managing blood glucose levels. This led the researchers to wonder whether providing extra FGF1 to diabetic mice could affect symptoms of the disease.


Evans' team injected doses of FGF1 into obese mice with diabetes to assess the protein's potential impact on metabolism. Researchers were stunned by what happened: they found that with a single dose, blood sugar levels quickly dropped to normal levels in all the diabetic mice.


"Many previous studies that injected FGF1 showed no effect on healthy mice," says Michael Downes, a senior staff scientist and co-corresponding author of the new work. "However, when we injected it into a diabetic mouse, we saw a dramatic improvement in glucose."



Glass Brain: Virtual Reality Meets Neuroscience

Bridging the worlds of neuroscience and high-tech virtual reality, the Glass Brain, a project of the new Neuroscape Lab at the University of California San Francisco, may open up new insights into the complicated mechanisms of the brain.


Researchers have developed a new way to explore the human brain through virtual reality. The system, called Glass Brain, was initiated by Philip Rosedale, creator of the famous game Second Life, and Adam Gazzaley, a neuroscientist at the University of California San Francisco. It combines brain scanning, brain recording and virtual reality to allow a user to journey through a person’s brain in real-time.

 For a recent demonstration at the South by Southwest (SXSW) Interactive festival in Austin, Texas, Rosedale made his wife a cap studded with electroencephalogram (EEG) electrodes that measure differences in electric potential in order to record brain activity, while he wore a virtual reality headset to explore her brain in 3D, as flashes of light displayed her brain activity from the EEG.

The Glass Brain didn’t actually show what Rosedale’s wife was thinking, but Gazzaley’s team ultimately hopes to get closer to decoding brain signals and displaying them using the virtual reality system.


This is an anatomically-realistic 3D brain visualization depicting real-time source-localized activity (power and "effective" connectivity) from EEG (electroencephalographic) signals. Each color represents source power and connectivity in a different frequency band (theta, alpha, beta, gamma) and the golden lines are white matter anatomical fiber tracts. Estimated information transfer between brain regions is visualized as pulses of light flowing along the fiber tracts connecting the regions.
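
The frequency bands mentioned above (theta, alpha, beta, gamma) are simply slices of the EEG power spectrum. The sketch below computes band power for a single synthetic channel with scipy; it is a minimal illustration of that decomposition, not the Glass Brain source-localization pipeline (BCILAB/SIFT).

```python
# Minimal band-power computation for one EEG channel, to illustrate the
# theta/alpha/beta/gamma decomposition. Synthetic signal for demonstration.
import numpy as np
from scipy.signal import welch

fs = 256                                   # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
# Fake EEG: a strong 10 Hz alpha rhythm plus broadband noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 80)}
for name, (lo, hi) in bands.items():
    in_band = (freqs >= lo) & (freqs < hi)
    power = np.trapz(psd[in_band], freqs[in_band])   # integrate PSD over the band
    print(f"{name:>5}: {power:.3f}")                  # the alpha band should dominate
```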


The modeling pipeline includes MRI (Magnetic Resonance Imaging) brain scanning to generate a high-resolution 3D model of an individual's brain, skull, and scalp tissue, DTI (Diffusion Tensor Imaging) for reconstructing white matter tracts, and BCILAB (http://sccn.ucsd.edu/wiki/BCILAB) / SIFT (http://sccn.ucsd.edu/wiki/SIFT) to remove artifacts and statistically reconstruct the locations and dynamics (amplitude and multivariate Granger-causal (http://www.scholarpedia.org/article/G...) interactions) of multiple sources of activity inside the brain from signals measured at electrodes on the scalp (in this demo, a 64-channel "wet" mobile system by Cognionics/BrainVision (http://www.cognionics.com)).

The final visualization is done in Unity and allows the user to fly around and through the brain with a gamepad while seeing real-time live brain activity from someone wearing an EEG cap.


Speedy computation enables scientists to reconstruct an animal's development cell by cell



Recent advances in imaging technology are transforming how scientists see the cellular universe, showing the form and movement of once grainy and blurred structures in stunning detail. But extracting the torrent of information contained in those images often surpasses the limits of existing computational and data analysis techniques, leaving scientists less than satisfied.


Now, researchers at the Howard Hughes Medical Institute's Janelia Research Campus have developed a way around that problem. They have developed a new computational method that can rapidly track the three-dimensional movements of cells in such data-rich images. Using the method, the Janelia scientists can essentially automate much of the time-consuming process of reconstructing an animal's developmental building plan cell by cell.


Philipp Keller, a group leader at Janelia, led the team that developed the computational framework. He and his colleagues, including Janelia postdoc Fernando Amat, Janelia group leader Kristin Branson and former Janelia lab head Eugene Myers, who is now at the Max Planck Institute of Molecular Cell Biology and Genetics, have used the method to reconstruct cell lineage during development of the early nervous system in a fruit fly. Their method can be used to trace cell lineages in multiple organisms and efficiently processes data from multiple kinds of fluorescent microscopes.


The scientists describe their approach in a paper published online on July 20, 2014, in Nature Methods.


"With this fairly fast, simple approach, we can solve easy cases fairly efficiently," Keller says. Those cases make up about 95 percent of the data. "In harder cases, where we might have mistakes, we use heavier machinery."


He explains that in instances where cells are harder to track -- because image quality is poor or cells are crowded, for example -- the computer draws on additional information. "We look at what all the cells in that neighborhood do a little bit into the future and a little bit into the past," Keller explains. Informative patterns usually emerge from that contextual information. The strategy takes more computing power than the initial tactics. "We don't want to do it for all the cells," Keller says. "But we try to crack these hard cases by gathering more information and making better informed decisions."
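
The "easy cases" of cell tracking amount to linking each detected cell to the right detection in the next frame. A minimal sketch of such frame-to-frame linking using a globally optimal assignment is shown below; the centroid coordinates are invented, and this is not the Janelia lineage-tracing framework, which also handles divisions and ambiguous cases.

```python
# Minimal frame-to-frame cell linking: match detected cell centroids in frame t
# to frame t+1 by minimizing the total displacement (Hungarian assignment).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

frame_t  = np.array([[10.0, 12.0, 5.0],    # x, y, z centroids of three cells
                     [40.0, 41.0, 7.0],
                     [80.0, 15.0, 9.0]])
frame_t1 = np.array([[11.0, 13.0, 5.2],    # the same cells, slightly moved
                     [79.0, 16.0, 9.1],
                     [41.5, 40.0, 7.3]])

cost = cdist(frame_t, frame_t1)            # pairwise distances between frames
rows, cols = linear_sum_assignment(cost)   # globally optimal one-to-one matching

for i, j in zip(rows, cols):
    print(f"cell {i} in frame t -> detection {j} in frame t+1 "
          f"(moved {cost[i, j]:.1f} units)")
```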


All of these steps can be carried out as quickly as images are acquired by the microscope, and the result is lineage information for every cell. "You know the path, you know where it is at a certain time point. You know it divided at a certain point, you know the daughter cells, you know what mother cell it came from," Keller says.


New York Times article



Steam from the sun: New sponge-like material converts 85% of solar energy into steam


A new material structure developed at MIT generates steam by soaking up the sun. The structure — a layer of graphite flakes and an underlying carbon foam — is a porous, insulating material structure that floats on water. When sunlight hits the structure’s surface, it creates a hotspot in the graphite, drawing water up through the material’s pores, where it evaporates as steam. The brighter the light, the more steam is generated.


The new material is able to convert 85 percent of incoming solar energy into steam — a significant improvement over recent approaches to solar-powered steam generation. What’s more, the setup loses very little heat in the process, and can produce steam at relatively low solar intensity. This would mean that, if scaled up, the setup would likely not require complex, costly systems to highly concentrate sunlight.
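
The quoted 85 percent conversion efficiency translates into a tangible steam yield with back-of-the-envelope arithmetic. The sketch below uses standard textbook values for solar flux and the latent heat of vaporization, plus the roughly tenfold optical concentration mentioned later in the article, and ignores the energy needed to first heat the water to boiling.

```python
# Back-of-the-envelope steam yield from the quoted 85% conversion efficiency.
# Solar flux and latent heat are standard textbook values; the 10x optical
# concentration is the figure reported for this setup.
efficiency = 0.85                 # fraction of incident solar energy turned into steam
solar_flux = 1000.0               # W per m^2, typical bright-sun irradiance
concentration = 10                # the low optical concentration reported
latent_heat = 2.26e6              # J per kg to vaporize water at ~100 degC

power_into_steam = efficiency * solar_flux * concentration      # W per m^2
steam_rate_kg_per_hour = power_into_steam * 3600 / latent_heat  # kg per m^2 per hour

print(f"{power_into_steam:.0f} W/m^2 -> about {steam_rate_kg_per_hour:.1f} kg "
      f"of steam per square metre per hour")
```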


Hadi Ghasemi, a postdoc in MIT’s Department of Mechanical Engineering, says the spongelike structure can be made from relatively inexpensive materials — a particular advantage for a variety of compact, steam-powered applications.


“Steam is important for desalination, hygiene systems, and sterilization,” says Ghasemi, who led the development of the structure. “Especially in remote areas where the sun is the only source of energy, if you can generate steam with solar energy, it would be very useful.”


Ghasemi and mechanical engineering department head Gang Chen, along with five others at MIT, report on the details of the new steam-generating structure in the journal Nature Communications.


Today, solar-powered steam generation involves vast fields of mirrors or lenses that concentrate incoming sunlight, heating large volumes of liquid to high enough temperatures to produce steam. However, these complex systems can experience significant heat loss, leading to inefficient steam generation.


Recently, scientists have explored ways to improve the efficiency of solar-thermal harvesting by developing new solar receivers and by working with nanofluids. The latter approach involves mixing water with nanoparticles that heat up quickly when exposed to sunlight, vaporizing the surrounding water molecules as steam. But initiating this reaction requires very intense solar energy — about 1,000 times that of an average sunny day.


By contrast, the MIT approach generates steam at a solar intensity about 10 times that of a sunny day — the lowest optical concentration reported thus far. The implication, the researchers say, is that steam-generating applications can function with lower sunlight concentration and less-expensive tracking systems.  


Google Strikes Smart Contact Lens deal to track Diabetes and Cure Farsightedness


With Glass and Android Wear, Google has already invested a lot of time and resources into developing the next-generation of wearables, but it's another of its eye-focused projects that has today received its first major boost. The search giant's secret Google[x] team has confirmed that it's licensed its smart eyewear to healthcare specialist Novartis, which will develop the technology into a product that can improve eye care and help manage diseases and conditions.


As part of the agreement, Google[x] and Novartis' eye care division Alcon will create smart lenses that feature "non-invasive sensors, microchips and other miniaturized electronics" and focus on two main areas. The first will provide a way for diabetic patients to keep on top of their glucose levels by measuring the sugar levels in their tear fluid, feeding the data back to a smartphone or tablet. The second solution aims to help restore the eye's natural focus on near objects, restoring clear near vision to people with age-related farsightedness (presbyopia).


Google's role will be to develop the tiny electronics needed to collect data and will also take care of the low-power chip designs and fabrication. Alcon, on the other hand, will apply its medical knowledge to develop commercial versions of the smart contact lens. "Our dream is to use the latest technology in the miniaturization of electronics to help improve the quality of life for millions of people," says Google co-founder Sergey Brin. "We are very excited to work with Novartis to make this dream come true."


Pier Bécotte's curator insight, July 16, 2014 9:31 AM

Google strikes a smart contact lens deal to track diabetes and fix farsightedness.

Farid Mheir's curator insight, July 17, 2014 1:44 PM

Strangely, I never wrote about this, but it certainly is worth a mention because Google has now made very strong moves towards "atoms and not bits," as Sergey Brin put it a few days ago, stating that Google has invested in search (bits) for a long time and is now complementing its focus with physical devices (atoms) such as the self-driving car or, here, the contact lens.


This story is of particular importance as it shows that Google is not in the business of making contact lenses (or cars) but providing the R&D to disrupt industries that are not making the radical shifts they can by using digital technology.


Also consider:

[INFOGRAPHIC] The Existing Wearable Technology Landscape via @WearableWorld http://sco.lt/7pTHPN

IV Technology's curator insight, July 18, 2014 11:18 AM

The next step is to do diagnoses with a scanner and have a red light come on so that we go to the... hospital.


It's here: First case of Chikungunya Virus contracted in the U.S.


The CDC reports a man in Florida caught the mosquito-borne virus that's been taking the Caribbean by storm. U.S. health officials today announced the first locally acquired case of chikungunya, a mosquito-borne virus that's become prevalent in the Caribbean in recent months.


The CDC reports a male patient in Florida was diagnosed with the virus, and had not recently traveled outside the country. Federal and Florida state health officials are investigating how the man could have contracted the virus domestically. They're also working to monitor the region in an effort to prevent additional infections and educate residents on ways to prevent mosquito bites. Local transmission occurs when the insect bites a person with the infection and then transmits the virus by biting others.


Chikungunya -- an African word that loosely translates as "contorted with pain" -- is most commonly found in Asia and Africa, and began appearing in the Caribbean last winter. Between 2006 and 2013, there were approximately 28 reported cases of the virus each year in travelers returning to the U.S. This year, travel-related chikungunya has been diagnosed in patients who have recently visited the Caribbean.


Chikungunya virus is transmitted to people by mosquitoes. The most common symptoms of chikungunya virus infection are fever and joint pain. Other symptoms may include headache, muscle pain, joint swelling, or rash. Outbreaks have occurred in countries in Africa, Asia, Europe, and the Indian and Pacific Oceans. In late 2013, chikungunya virus was found for the first time in the Americas on islands in the Caribbean. Chikungunya virus is not currently found in the continental United States. There is a risk that the virus will be imported to new areas by infected travelers. There is no vaccine to prevent or medicine to treat chikungunya virus infection. Travelers can protect themselves by preventing mosquito bites. When traveling to countries with chikungunya virus, use insect repellent, wear long sleeves and pants, and stay in places with air conditioning or that use window and door screens.

MPCIRHC.ORG's curator insight, July 22, 2014 11:18 PM

There were over 255,000 cases of this deadly virus on the French island of Réunion in 2005, and over one million cases in India and as many in Italy. The USA's CDC is of the opinion that they do not wish to 'alarm' the special citizens of the USA about one more plague at their front door. NOW, they only report "one" case in the USA. If anyone believes that only one person was bitten by one lone mosquito - well, one's head needs to be examined for a brain. It is time for the people of the USA to stop looking at reality shows and demand the truth from their governmental health officials. They need to save their families and friends - even if their government only wants to save the rich.


Is The Universe A Multiverse Bubble? Physicists Are Trying To Bring It Into The Realm Of Testable Science


Never mind the big bang; in the beginning was the vacuum. The vacuum simmered with energy (variously called dark energy, vacuum energy, the inflation field, or the Higgs field). Like water in a pot, this high energy began to evaporate – bubbles formed.


Each bubble contained another vacuum, whose energy was lower, but still not nothing. This energy drove the bubbles to expand. Inevitably, some bubbles bumped into each other. It’s possible some produced secondary bubbles. Maybe the bubbles were rare and far apart; maybe they were packed close as foam.


Proponents of the multiverse theory argue that it’s the next logical step in the inflation story. Detractors argue that it is not physics, but metaphysics – that it is not science because it cannot be tested. After all, physics lives or dies by data that can be gathered and predictions that can be checked.


That’s where Perimeter Associate Faculty member Matthew Johnson comes in. Working with a small team that also includes Perimeter Faculty member Luis Lehner, Johnson is working to bring the multiverse hypothesis firmly into the realm of testable science.

“That’s what this research program is all about,” he says. “We’re trying to find out what the testable predictions of this picture would be, and then going out and looking for them.”


Specifically, Johnson has been considering the rare cases in which our bubble universe might collide with another bubble universe. He lays out the steps: “We simulate the whole universe. We start with a multiverse that has two bubbles in it, we collide the bubbles on a computer to figure out what happens, and then we stick a virtual observer in various places and ask what that observer would see from there.”


“Simulating the universe is easy,” says Johnson. Simulations, he explains, are not accounting for every atom, every star, or every galaxy – in fact, they account for none of them. “We’re simulating things only on the largest scales,” he says. “All I need is gravity and the stuff that makes these bubbles up. We’re now at the point where if you have a favourite model of the multiverse, I can stick it on a computer and tell you what you should see.”


That’s a small step for a computer simulation program, but a giant leap for the field of multiverse cosmology. By producing testable predictions, the multiverse model has crossed the line between appealing story and real science.


In fact, Johnson says, the program has reached the point where it can rule out certain models of the multiverse: “We’re now able to say that some models predict something that we should be able to see, and since we don’t in fact see it, we can rule those models out.”


For instance, collisions of one bubble universe with another would leave what Johnson calls “a disk on the sky” – a circular bruise in the cosmic microwave background. That the search for such a disk has so far come up empty makes certain collision-filled models less likely.


Meanwhile, the team is at work figuring out what other kinds of evidence a bubble collision might leave behind. It’s the first time, the team writes in their paper, that anyone has produced a direct quantitative set of predictions for the observable signatures of bubble collisions. And though none of those signatures has so far been found, some of them are possible to look for.


The real significance of this work is as a proof of principle: it shows that the multiverse can be testable. In other words, if we are living in a bubble universe, we might actually be able to tell.

more...
No comment yet.
Rescooped by Dr. Stefan Gruenwald from Physical Science - SHS
Scoop.it!

First pictures from inside the hole: 80-meter wide gaping hole appears in Siberia, no one knows how deep it goes and what caused it

First pictures from inside the hole: 80-meter wide gaping hole appears in Siberia, no one knows how deep it goes and what caused it | Amazing Science | Scoop.it



Scientists have a good handle on the inner workings of the planet these days, but sometimes weird stuff happens that requires a little more investigation. Case in point, a giant hole 80 meters (262 feet) in diameter has appeared on the Siberian Yamal Peninsula. It certainly looks unusual, and scientists are still working to figure out what could cause this bizarre geological formation to appear.


A team of Russian scientists has been dispatched to analyze the crater, the depth of which is unknown at this time. Based on the images and video of the hole, geologists are positing a few hypotheses. Anna Kurchatova from Sub-Arctic Scientific Research Centre believes the crater was formed by water, salt, and natural gas mixing and causing an explosion. That’s a pretty satisfying explanation seeing as the hole totally looks like something exploded. The area was also a major gas field in the 1970s.


Dr. Chris Fogwill from the University of New South Wales in Australia has a different explanation. He posits that the mystery hole in Siberia is an especially striking example of a geological phenomenon called a pingo. A pingo is a large block of subterranean ice found in tundra environments that has grown into a small hill on the surface. If the ice hill thaws, the pingo can collapse, leaving only the crater where it used to be. Pingos are usually much smaller, but gas exploration and the geological changes that come with it could have contributed to this spectacular example.


Both potential explanations also factor in the accelerating pace of global warming: climate change has caused shifts in permafrost that had been frozen solid for millennia. Of course, there are still some who believe the Siberian hole to be a hoax. Teams are expected to arrive in the area within a day to report back, at which point we should know the truth — or at least have better pictures of a giant hole in the ground.


Via Joy Kinley
more...
Joy Kinley's curator insight, July 17, 2014 5:03 PM

The weird theories are out in force over this one.  What do you think caused this hole?

Scooped by Dr. Stefan Gruenwald
Scoop.it!

Scientists Believe Finding Life Beyond Earth is Within Reach

Scientists Believe Finding Life Beyond Earth is Within Reach | Amazing Science | Scoop.it

NASA’s quest to study planetary systems around other stars started with ground-based observatories, then moved to space-based assets like the Hubble Space Telescope, the Spitzer Space Telescope, and the Kepler Space Telescope. Today’s telescopes can look at many stars and tell if they have one or more orbiting planets. What’s more, they can determine whether those planets lie at the right distance from their star to have liquid water, the key ingredient for life as we know it.
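
That “right distance” criterion can be illustrated with a standard back-of-envelope rule: scale the Sun’s habitable-zone boundaries by the square root of the star’s luminosity. The boundary values below (roughly 0.95 to 1.37 AU for a Sun-like star) are one common textbook choice, not the figures used by any particular mission pipeline.

# Rough habitable-zone estimate: scale the solar boundaries by sqrt(L/L_sun).
# The 0.95-1.37 AU boundaries are a common textbook approximation.
import math

def habitable_zone_au(luminosity_solar, inner=0.95, outer=1.37):
    """Approximate inner and outer habitable-zone radii in AU."""
    s = math.sqrt(luminosity_solar)
    return inner * s, outer * s

for name, lum in [("Sun-like star", 1.0), ("red dwarf", 0.04), ("F-type star", 2.5)]:
    lo, hi = habitable_zone_au(lum)
    print(f"{name:14s}: roughly {lo:.2f}-{hi:.2f} AU")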


The NASA roadmap will continue with the launch of the Transiting Exoplanet Survey Satellite (TESS) in 2017, the James Webb Space Telescope (Webb Telescope) in 2018, and perhaps the proposed Wide Field Infrared Survey Telescope – Astrophysics Focused Telescope Assets (WFIRST-AFTA) early in the next decade. These upcoming telescopes will find and characterize a host of new exoplanets — those planets that orbit other stars — expanding our knowledge of their atmospheres and diversity. The Webb telescope and WFIRST-AFTA will lay the groundwork, and future missions will extend the search to nearby planets similar to Earth in size and mass, looking for oceans in the form of atmospheric water vapor and for signs of life in the form of carbon dioxide and other atmospheric chemicals, a key step in the search for life.


“This technology we are using to explore exoplanets is real,” said John Grunsfeld, astronaut and associate administrator for NASA’s Science Mission Directorate in Washington. “The James Webb Space Telescope and the next advances are happening now. These are not dreams — this is what we do at NASA.”


Since its launch in 2009, Kepler has dramatically changed what we know about exoplanets, finding most of the more than 5,000 potential exoplanets, of which more than 1700 have been confirmed. The Kepler observations have led to estimates of billions of planets in our galaxy, and shown that most planets within one astronomical unit are less than three times the diameter of Earth. Kepler also found the first Earth-size planet to orbit in the “habitable zone” of a star, the region where liquid water can pool on the surface.


“What we didn’t know five years ago is that perhaps 10 to 20 percent of stars around us have Earth-size planets in the habitable zone,” says Matt Mountain, director and Webb telescope scientist at the Space Telescope Science Institute in Baltimore. “It’s within our grasp to pull off a discovery that will change the world forever. It is going to take a continuing partnership between NASA, science, technology, the U.S. and international space endeavors, as exemplified by the James Webb Space Telescope, to build the next bridge to humanity’s future.”


This decade has seen the discovery of more and more super Earths, which are rocky planets that are larger and heftier than Earth. Finding smaller planets, the Earth twins, is a tougher challenge because they produce fainter signals. Technology to detect and image these Earth-like planets is being developed now for use with the future space telescopes. The ability to detect alien life may still be years or more away, but the quest is underway.


Said Mountain, “Just imagine the moment when we find potential signatures of life. Imagine the moment when the world wakes up and the human race realizes that its long loneliness in time and space may be over — the possibility we’re no longer alone in the universe.”

more...
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

More than 99 percent of microbes and viruses populating the oceans have not yet been cultivated in the lab

More than 99 percent of microbes and viruses populating the oceans have not yet been cultivated in the lab | Amazing Science | Scoop.it
A fishing expedition of microscopic proportions led by University of Arizona ecologists revealed that the lines between virus types in nature are less blurred than previously thought.


Using lab-cultured bacteria as "bait," a team of scientists led by Matthew Sullivan has sequenced complete and partial genomes of about 10 million viruses from an ocean water sample in a single experiment.


The study, published online on July 14 by the journal Nature, revealed that the genomes of viruses in natural ecosystems fall into more distinct categories than previously thought. This enables scientists to recognize actual populations of viruses in nature for the first time.


"You could count the number of viruses from a soil or water sample in a microscope, but you would have no idea what hosts they infect or what their genomes were like," said Sullivan, an associate professor in the UA's Department of Ecology and Evolutionary Biology and member of the UA's BIO5 Institute. "Our new approach for the first time links those same viruses to their host cells. In doing so, we gain access to viral genomes in a way that opens up a window into the roles these viruses play in nature."


Sullivan's team developed a new approach called viral tagging, which uses cultivated bacterial hosts as "bait" to fish for viruses that infect that host. The scientists then isolate the DNA of those viruses and decipher their sequence.


"Instead of a continuum, we found at least 17 distinct types of viruses in a single sample of Pacific Ocean seawater, including several that are new to science – all associated with the single 'bait' host used in the experiment," Sullivan said.


"Microbes are now recognized as drivers of the biogeochemical engines that fuel Earth, and the viruses that infect them control these processes by transferring genes between microbes, killing them in great numbers and reprogramming their metabolisms," explained the first author of the study, Li Deng, a former postdoctoral researcher in Sullivan's lab who now is a research scientist at the Helmholtz Research Center for Environmental Health in Neuherberg, Germany. "Our study for the first time provides the methodology needed to match viruses to their host microbes at scales relevant to nature."


Getting a grip on the diversity of viruses infecting a particular host is critical beyond environmental sciences, Deng said, and has implications for understanding how viruses affect pathogens that cause human disease, which in turn is relevant for vaccine design and antiviral drug therapy.


Sullivan estimates that up to 99 percent of microbes that populate the oceans and drive global processes such as nutrient cycles and climate have not yet been cultivated in the lab, which makes their viruses similarly inaccessible.

more...
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Study shows how effects of starvation can be passed to future generations

Study shows how effects of starvation can be passed to future generations | Amazing Science | Scoop.it

Evidence from human famines and animal studies suggests that starvation can affect the health of descendants of famished individuals. But how such an acquired trait might be transmitted from one generation to the next has not been clear. A new study, involving roundworms, shows that starvation induces specific changes in so-called small RNAs and that these changes are inherited through at least three consecutive generations, apparently without any DNA involvement. The study, conducted by Columbia University Medical Center (CUMC) researchers, offers intriguing new evidence that the biology of inheritance is more complicated than previously thought. The study was published in the July 10, 2014 edition of the journal Cell.

The idea that acquired traits can be inherited dates back to Jean-Baptiste Lamarck (1744–1829), who proposed that species evolve when individuals adapt to their environment and transmit the acquired traits to their offspring. According to Lamarckian inheritance, for example, giraffes developed elongated necks as they stretched to feed on the leaves of high trees, an acquired advantage that was inherited by subsequent generations. In contrast, Charles Darwin (1809–82) later theorized that random mutations that offer an organism a competitive advantage drive a species' evolution. In the case of the giraffe, individuals that happened to have slightly longer necks had a better chance of securing food and thus were able to have more offspring. The subsequent discovery of hereditary genetics supported Darwin's theory, and Lamarck's ideas faded into obscurity.


"However, events like the Dutch famine of World War II have compelled scientists to take a fresh look at acquired inheritance," said study leader Oliver Hobert, PhD, professor of biochemistry and molecular biophysics and a Howard Hughes Medical Institute Investigator at CUMC. Starving women who gave birth during the famine had children who were unusually susceptible to obesity and other metabolic disorders, as were their grandchildren. Controlled animal experiments have found similar results, including a study in rats demonstrating that chronic high-fat diets in fathers result in obesity in their female offspring.


In a 2011 study, Oded Rechavi, a postdoctoral fellow in Dr. Hobert's laboratory, found that roundworms (C. elegans) that developed resistance to a virus were able to pass along that immunity to their progeny for many consecutive generations. The immunity was transferred in the form of small viral-silencing RNAs working independently of the organism's genome. Other studies have reported similar findings, but none of these addressed whether a biological response induced by natural circumstances, such as famine, could be passed on to subsequent generations.

more...
Diane Johnson's curator insight, July 18, 2014 8:56 AM

Fascinating developments in understanding epigenetics

Scooped by Dr. Stefan Gruenwald
Scoop.it!

Li-Fi record data transmission of 10Gbps achieved

Li-Fi record data transmission of 10Gbps achieved | Amazing Science | Scoop.it

A Mexican software company has managed to transmit audio, video and Internet across the spectrum of light emitted by LED lamps – at a data transfer rate of 10 gigabits per second.

 

The technology can illuminate a large work space, such as an office, while providing full mobile Internet to every device that comes within range of the light. The technology, called Li-Fi or light fidelity, is presented as an alternative to Wi-Fi because it maximises the originally provided internet speed, offering safer data transfer and rates of up to 10 gigabits per second. The Li-Fi device distributes data via LEDs that flicker on and off at a speed imperceptible to the human eye.
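
The underlying idea can be sketched in a few lines of code: a message becomes a stream of on/off light pulses, and a receiver that samples the light level recovers the original bytes. Real Li-Fi hardware switches the LED millions of times per second and uses far more sophisticated modulation; this simple on-off-keying example is only illustrative.

# Toy illustration of the Li-Fi idea: data encoded as rapid on/off light
# pulses (simple on-off keying). Only a sketch, not a real VLC modem.

def encode(message: str) -> list:
    """Turn a text message into a stream of light states (1 = LED on)."""
    bits = []
    for byte in message.encode("utf-8"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return bits

def decode(light_states: list) -> str:
    """Reconstruct the message from sampled light states."""
    data = bytearray()
    for i in range(0, len(light_states), 8):
        byte = 0
        for bit in light_states[i:i + 8]:
            byte = (byte << 1) | bit
        data.append(byte)
    return data.decode("utf-8")

if __name__ == "__main__":
    pulses = encode("Li-Fi")
    print(f"{len(pulses)} light pulses ->", decode(pulses))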

 

“Whereas Wi-Fi relies on cabling to spread our connections, Li-Fi transmits wirelessly using LED lamps that emit high-brightness light,” said Arturo Campos Fentanes, CEO of Sisoft in Mexico.

 

Another advantage over Wi-Fi is security: because the data travel by light, which does not pass through walls, there is no way to hack or “steal” the signal. Furthermore, Li-Fi can be installed in hospital areas where radiation equipment would normally block or distort an internet signal, Fentanes said.

 

With this new technology, the company seeks to expand through the market, offering lower costs and internet speeds increased by five thousand per cent.

 

Currently, the highest transfer rate in Mexico is 200 megabits per second. To put that in perspective, with Li-Fi you could download an entire HD movie in just 45 seconds.

 

Also known as visible light communication (VLC), this technology began with an internet speed of two gigabits per second, but Sisoft, along with researchers from the Autonomous Technological Institute of Mexico (ITAM), adapted the system to multiply that speed fivefold.

 

Fentanes explained that the first experiments were conducted with audio: a cable was run from a smartphone’s 3.5 mm audio jack to a breadboard that transformed the audio signal into optical waves.

 

In this way a special emitter transmits data across the spectrum of light generated by an LED lamp, and the signal is captured by a receiver built into a speaker that reproduces the sound.

 

For wireless internet transmission, the mechanics are similar. The station developed by Sisoft sits above the router that distributes the internet signal, and an LED lamp is incorporated to maximise the speed of data transfer. The light acts as an antenna, but only electronic devices that carry the receiver for the optical signal and sit within the halo of light get a connection.

more...
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Genome Sequencing Moving to the Clinic: Massive British Sequencing Project Will Run on Illumina Machines

Genome Sequencing Moving to the Clinic: Massive British Sequencing Project Will Run on Illumina Machines | Amazing Science | Scoop.it
The world’s largest genome project will be carried out on instruments from the California sequencing company.


The British government says that it plans to hire the U.S. gene-sequencing company Illumina to sequence 100,000 human genomes in what is the largest national project to decode the DNA of a populace.


In a regulatory filing with the U.S. Securities and Exchange Commission, Illumina said it had been picked as the “preferred partner” for the £100 million project.


Genomics England confirmed that it had chosen the California company to carry out the sequencing project. “We’ve been through the ‘bake-off’ process to find the right company to do the sequencing, and will now be entering detailed negotiations,” says Vivienne Parry, a spokesperson for Genomics England. One in 17 people is born with or will develop a rare disease in their lifetime, and 80 percent of rare diseases have an identified genetic component. Beyond sequencing the 100,000 genomes, the project aims to kick-start the development of a U.K. genomics industry and to introduce the technology into the mainstream health system, according to the Genomics England website.


Illumina’s sequencing instruments dominate the market for unraveling DNA (see “50 Smartest Companies”). Parry says fewer than five other companies bid for the job, one of the largest sequencing projects ever undertaken.


Some other countries are also considering large national sequencing projects. The U.K. project will focus on people with cancer, as well as adults and children with rare diseases. Because all Britons are members of the National Health Service, the project expects to be able to compare DNA data with detailed centralized health records (see “Why the U.K. Wants a Genomic National Health Service”).


While the number of genomes to be sequenced is 100,000, the total number of Britons participating in the study is smaller, about 70,000. That is because, for cancer patients, Genomics England intends to obtain the sequence of both their inherited DNA and that of their tumors.


Genomics England began talking early this year to potential bidders, including Chinese company and Illumina rival BGI (see “Inside China’s Genome Factory”). At the time, the average cost of completing a genome was about $3,000 to $4,000.


Completing all 100,000 genomes would have cost more than twice Genomics England’s budget. The agency said in December it intended to use its negotiating power to drive prices down.
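
A quick back-of-envelope check of that scale is shown below; the per-genome prices come from the article, while the exchange rate of roughly $1.65 per pound (typical for early 2014) is an assumption added here for illustration.

# Back-of-envelope check of the cost claim. Per-genome prices are from the
# article; the exchange rate is an assumed value for illustration only.
genomes = 100_000
cost_per_genome_usd = (3_000, 4_000)
usd_per_gbp = 1.65          # assumed early-2014 exchange rate
budget_gbp = 100e6          # the project's GBP 100 million budget

for c in cost_per_genome_usd:
    total_gbp = genomes * c / usd_per_gbp
    print(f"at ${c:,}/genome: ~GBP {total_gbp/1e6:.0f}M "
          f"({total_gbp/budget_gbp:.1f}x the GBP 100M budget)")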


Why Illumina is #1 (Technology Review, MIT)

more...
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Developing Robotic Brains Capable of Thoughtful Communication

Developing Robotic Brains Capable of Thoughtful Communication | Amazing Science | Scoop.it

The Hagiwara Lab in the Department of Information and Computer Science of Keio University's Faculty of Science and Technology is trying to realize a robotic brain that can carry on a conversation: in other words, one that can understand images and words and communicate thoughtfully with humans.


"Even now, significant progress is being made with robots, and tremendous advancements are being made with the control parts. However, we feel like R&D with regards to the brain has been significantly delayed. When we think about what types of functions are necessary for the brain, the first thing that we as humans do is visual information processing. In other words, the brain needs to be able to process what is seen. The next thing is the language information processing that we as humans implement. By using language capabilities, humans are able to perform extremely advanced intellectual processing. However, even if a robotic brain can process what it sees and use words, it is still lacking one thing, specifically, feelings and emotions. Therefore, as a third pillar, we're conducting research on what is called Kansei Engineering, or affective information processing."

The Hagiwara Lab has adopted an approach of learning from the information processing of the human brain. The team is trying to construct a robotic brain focused on three elements: visual information processing, language information processing, and affective information processing, and, more importantly still, on integrating the three.


"With regards to visual information processing, by using a neural network as well, we're trying to recognize items through mechanisms based on experience and intuition in the same manner that is implemented directly by humans without having to use three-dimensional structures or perform complicated mathematical processing. In the conventional object recognition field, patterns from the recognized results are merely converted to symbols. However, by adding language processing to those recognized results, we can comprehensively utilize knowledge to get a better visual image. For example, even if an object is recognized as being a robot, knowledge such as the robot has a human form, or it has arms and legs can also be used. Next will be language information processing because processing of language functions is becoming extremely important. For example, even as a robot, the next step would be for it to recognize something as being cute, not cute, mechanical, or some other type of characteristic. Humans naturally have this type of emotional capability, but in current robotic research, that type of direction is not being researched much. Therefore, at our lab, we're conducting research in a direction that enables robots to understand what they see, to use language information processing to understand what they saw as knowledge, and to then comprehensively use the perspective of feelings and emotions like those of humans as well."


The robotic brain the Hagiwara Lab is aiming for is not merely smart: it is one with emotions, feelings, and spirit that will enable it to interact skillfully with humans and its environment. To achieve this, the lab is conducting a broad range of research, from the fundamentals of Kansei Engineering to applications in fields such as entertainment, design, and healing.
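
Schematically, the integration the lab describes could be pictured as three modules feeding a single response, as in the sketch below. Every class, method, and data value here is a hypothetical illustration, not the Hagiwara Lab's actual software.

# Hypothetical sketch of the three-pillar integration: vision, language and
# affective ("Kansei") modules contributing to one response.
from dataclasses import dataclass, field

@dataclass
class Percept:
    label: str                                     # what the vision module reports seeing
    knowledge: list = field(default_factory=list)  # facts supplied by the language module
    affect: str = "neutral"                        # impression from the Kansei module

class RobotBrain:
    def __init__(self, facts, affect_lexicon):
        self.facts = facts                    # label -> list of known facts
        self.affect_lexicon = affect_lexicon  # label -> affective impression

    def see(self, image_label):
        # A real system would run a neural network on pixels; here the
        # recognition result is passed in directly for illustration.
        return Percept(
            label=image_label,
            knowledge=self.facts.get(image_label, []),
            affect=self.affect_lexicon.get(image_label, "neutral"),
        )

    def respond(self, percept):
        facts = "; ".join(percept.knowledge) or "something I know little about"
        return f"I see a {percept.label} ({facts}). It feels {percept.affect} to me."

brain = RobotBrain(
    facts={"robot": ["has a human-like form", "has arms and legs"]},
    affect_lexicon={"robot": "mechanical but friendly"},
)
print(brain.respond(brain.see("robot")))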

more...
No comment yet.