Amazing views from 1947 to 2012, including several firsts.
NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1450 news sources:
All my Tweets and Scoop.It! posts sorted and searchable:
You can search through all the articles semantically on my
NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL at the top right of the screen) to display all the relevant postings sorted by topic.
You can also type your own query:
e.g., you are looking for articles involving "dna" as a keyword
MOST_READ • 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
The faces of all adults are home to microscopic eight-legged creatures, a new study suggests. Face mites are just half a millimetre long and not visible to the human eye, CBC science columnist Torah Kachur told Gill Deacon, host of CBC's Here and Now. "They're semi-transparent, they have eight legs, they kind of look like a tiny, in a way, see-through caterpillar."
Until recently, scientists thought that only a small proportion of the population had face mites. However, a new study led by Megan Thoemmes, a graduate student in biology at North Carolina State University, found that 100 per cent of 253 people over age 18 sampled by her team had mite DNA on their faces, suggesting that the mites could be universal inhabitants of adult humans.
The study also found that human faces are home to two different species of the mites. The first is Demodex brevis, which burrows into our sweat glands. "It's actually evolved the perfect shape to wiggle in the pores," Kachur said.
The other species, Demodex folliculorum, lives in hair follicles of our eyelashes, eyebrows and facial skin.
Thoemmes told Kachur that the mites collected from faces in different places, such as China and the Americas, can be genetically distinguished from one another, which makes them useful for tracing human populations and their migrations.
Researchers today announced the creation of an imaging technology more powerful than anything that has existed before, one fast enough to observe life processes as they actually happen at the molecular level. Fluorescent protein biosensors provide a technology to capture the biochemical processes of life almost like a motion picture that can be viewed a frame at a time. This may allow the targeted design of next-generation biosensors to track life processes and battle diseases.
Chemical and biological actions can now be measured as they are occurring or, in old-fashioned movie parlance, one frame at a time. This will allow creation of improved biosensors to study everything from nerve impulses to cancer metastasis as it occurs.
The measurements, created by the use of short pulse lasers and bioluminescent proteins, are made in femtoseconds; a femtosecond, one millionth of one billionth of a second (10⁻¹⁵ s), is to one second roughly what one second is to 32 million years. That's a pretty fast shutter speed, and it should change the way biological research and physical chemistry are done, scientists say.
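The 32-million-year comparison checks out with simple arithmetic; a minimal sketch in Python (only unit conversions, no data from the study):

```python
# Back-of-envelope check of the femtosecond analogy in the text:
# a femtosecond is 1e-15 s, so one second contains 1e15 femtoseconds.
femtoseconds_per_second = 1 / 1e-15          # = 1e15

# Seconds in 32 million years (365.25-day years).
seconds_per_year = 365.25 * 24 * 3600        # ~3.156e7 s
seconds_in_32_myr = 32e6 * seconds_per_year  # ~1.01e15 s

# The two ratios agree to within about one percent, as the analogy claims.
ratio = seconds_in_32_myr / femtoseconds_per_second
print(round(ratio, 2))  # ~1.01
```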
Findings on the new technology were published today in Proceedings of the National Academy of Sciences, by researchers from Oregon State University and the University of Alberta.
"With this technology we're going to be able to slow down the observation of living processes and understand the exact sequences of biochemical reactions," said Chong Fang, an assistant professor of chemistry in the OSU College of Science, and lead author on the research.
"We believe this is the first time ever that you can really see chemistry in action inside a biosensor," he said. "This is a much more powerful tool to study, understand and tune biological processes."
The system uses advanced pulse laser technology that is, in itself, fairly new, and builds upon the use of "green fluorescent proteins" that are extremely popular in bioimaging and biomedicine. These remarkable proteins glow when light is shined upon them. Their discovery in 1962, and the applications that followed, were the basis for a Nobel Prize in 2008.
Existing biosensor systems, however, are created largely by random chance or trial and error. By comparison, the speed of the new approach will allow scientists to "see" what is happening at the molecular level and create whatever kind of sensor they want by rational design. This will improve the study of everything from cell metabolism to nerve impulses, how a flu virus infects a person, or how a malignant tumor spreads.
"For decades, to create the sensors we have now, people have been largely shooting in the dark," Fang said. "This is a fundamental breakthrough in how to create biosensors for medical research from the bottom up. It's like daylight has finally come."
A team of scientists from Yale University and the University of Connecticut has developed a novel membrane with highly aligned nanoscale pores that open and close in response to temperature; this highly porous, valve-like material has many potential filtration applications, including water purification and molecular separation.
The membrane was created from a block copolymer that self-assembles into alternating groups of two types of molecular segments with different properties, each capable of packing into domains of different shapes, depending on the overall size and composition of the polymer. In this case, they embedded cylindrical columns of one segment type in a matrix composed of the other segment type.
Notably, the Yale researchers used magnetic fields to ensure the aligned orientation of the cylindrical domains, then chemically removed the column-shaped segments, leaving behind an empty pore with nanoscale dimensions. The resulting material is a thin film membrane with parallel aligned pores that stretch from one side of the film to the other.
“In most conventional membranes, the pores are not aligned — they go in every which way, creating a highly tortuous path,” says Chinedum Osuji, associate professor of chemical & environmental engineering and principal investigator of the research, published in Advanced Materials. “We developed a scalable method for aligning the pores in thin film, which results in highly attractive transport properties.”
The aligned pores of the Yale team’s material are also significant for being responsive to temperature. For example, at 25 degrees Celsius, the pores are open. However, when the temperature is raised to 75 degrees Celsius, the pores collapse — a process that is reversible if the temperature is again lowered to 25 degrees Celsius. “The pore collapse is rapid and reversible, which very conveniently provides control over the permeability of the membrane and opens the door for unique industrial applications,” says Menachem Elimelech, Roberto C. Goizueta Professor of Chemical & Environmental Engineering and co-author of the research.
The membrane can in this way act like a valve, becoming alternately permeable and impermeable; additionally, the same behavior may enable the pore size to be modified gradually as a function of temperature, providing the material a temperature tunable porosity. This behavior could be particularly useful in industrial applications where high value enzymes must be separated from byproducts, as the variable pore size could enable this single type of thin film to isolate chemical products with various molecular sizes.
3D printing has been providing various forms of prosthetic devices such as fingers, hands, arms and legs for some time now, largely because it is affordable, easy to use, faster than traditional manufacturing, and allows total customization. Companies are also really beginning to see the potential of 3D printing in the rapid prototyping of medical products.
One company, Potomac Laser, has been in the business of specializing in and creating medical devices, as well as other unique electronic devices for over 32 years now. Located in Baltimore, Maryland, they use 3D printing, laser micromachining, micro CNC and micro drilling in their many unique projects.
Just recently, a woman by the name of Monika Kwacz, a researcher at the Institute of Micromechanics and Photonics at Warsaw Technical University in Poland, contacted Potomac Laser to see if they could help her 3D print something almost unheard of. She had been studying stapedotomy, a surgical procedure that aims to improve hearing in those who suffer from fixation of the stapes. The stapes, one of the three tiny bones within the middle ear involved in conducting sound vibrations to the inner ear, is the smallest and lightest bone in the human body.
Millions of people in the US alone suffer from a condition called otosclerosis, in which the stapes becomes stuck in a fixed position and can no longer efficiently receive and transmit the vibrations needed for a subject to hear properly. This is mostly due to a mineralization process of the bone and surrounding tissue. It is estimated that 10% of the world’s adult Caucasian population suffers from this condition in one form or another.
Via Gust MEES
Researchers at the University of Basel in Switzerland have succeeded in observing the "forbidden" infrared spectrum of a charged molecule for the first time. These extremely weak spectra offer perspectives for extremely precise measurements of molecular properties and may also contribute to the development of molecular clocks and quantum technology. The results were published in the scientific journal Nature Physics.
Spectroscopy, the study of the interaction between matter and light, is probably the most important method for investigating the properties of molecules. Molecules can only absorb light at well-defined wavelengths which correspond to the difference between two quantum-mechanical energy states. This is referred to as a spectroscopic transition. An analysis of the wavelengths and the intensity of the transitions provides information about the chemical structure and molecular motions, such as vibration or rotation.
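The wavelength-energy relation described above is λ = hc/ΔE; a small illustrative calculation (the 0.2 eV gap is an assumed, typical vibrational energy, not a value from the Basel study):

```python
# Sketch of the relation the text describes: a molecule absorbs light only
# at wavelengths matching the gap between two quantum energy levels,
# lambda = h*c / delta_E. The example gap is illustrative, not from the article.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

def transition_wavelength_nm(delta_e_ev):
    """Wavelength (nm) of a transition with energy gap delta_e_ev (eV)."""
    return h * c / (delta_e_ev * eV) * 1e9

# A ~0.2 eV gap, typical of molecular vibrations, falls in the infrared.
print(round(transition_wavelength_nm(0.2)))  # ~6199 nm
```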
In certain cases, however, the transition between two energy levels is not permitted; the transition is then called "forbidden". Nevertheless, this restriction is not categorical, meaning that forbidden transitions can still be observed with an extremely sensitive method of measurement. Although the corresponding spectra are extremely weak, they can be measured to an exceptionally accurate degree, providing information on molecular properties with a level of precision not possible with allowed spectra.
In the 1960s a series of satellites were built as part of Project Vela. Project Vela was intended to detect violations of the 1963 ban on above ground testing of nuclear weapons. The Vela satellites were designed to detect bursts of gamma rays, which are high energy electromagnetic waves produced by radioactive decay. If any nuclear weapon was detonated in space, the resulting radioactive decay would release a large amount of gamma rays which would be detected by the Vela satellites.
In 1967, two of the Vela probes detected a large spike of gamma rays. But the signature of this spike was very different from those of a nuclear explosion. Soon more gamma ray spikes were detected, and these likewise differed from the expected signature of a nuclear test. Since the bursts were observed by multiple satellites, the Vela team was able to compare the arrival of the bursts between different satellites, and it soon became clear that the bursts had an extraterrestrial source. Of course the Vela project was classified, so it wasn't until 1973 that the results were declassified and published in the Astrophysical Journal. It was only then that astronomers were made aware of these gamma ray bursts (GRBs).
We now know that GRBs are very common. On average, about one gamma ray burst occurs every day. They appear randomly in all directions of the sky, and this means they aren't produced in our galaxy. If they were, then GRBs would mostly be found along the plane of the Milky Way.
Some gamma ray bursts (known as long bursts) can last more than two seconds. These bursts have afterglow caused by gamma rays colliding with interstellar material near the event, causing the emission of light at other wavelengths. This afterglow allows us to measure the redshift of these events, and what we find is that they are quite distant. The closest observed gamma ray burst occurred at a distance of 100 million light years, and many occurred billions of light years away.
We aren't entirely sure what causes a gamma ray burst. Because of their distance and apparent brightness, they must be extraordinarily energetic, with about 100 times more energy than a supernova. They may be caused by huge supernova explosions known as hypernovae, or they may be caused by supernova explosions that occur with a rotational axis pointing in our direction, causing a jet-like burst of energy. Short burst GRBs, lasting less than 2 seconds, may be due to collisions between neutron stars.
Given the huge energy of GRBs, one might wonder if one could occur in our galaxy. Given the average rate of GRBs and the huge distances at which they typically occur, the rate at which one happens in our galaxy is probably about once every 5 million years.
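That once-every-5-million-years figure can be reproduced with a rough estimate; the galaxy count below is an assumed order-of-magnitude number, not a value from the article:

```python
# Rough version of the estimate in the text: if ~one burst per day is seen
# from across the observable universe, the per-galaxy rate is tiny.
# The number of galaxies in the detection volume is an assumed round figure.
bursts_per_year = 365
galaxies_in_view = 2e9  # assumed order-of-magnitude count

# If bursts are spread evenly, each galaxy hosts one every N years.
years_between_bursts_per_galaxy = galaxies_in_view / bursts_per_year
print(f"{years_between_bursts_per_galaxy:.1e}")  # roughly 5e6 years
```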
Scientists have been trying for over a quarter of a century to create a so-called 'Star of David' molecule, and the team's findings are published in the 21 September 2014 issue of Nature Chemistry.
Consisting of two molecular triangles, entwined about each other three times into a hexagram, the structure's interlocked molecules are tiny – each triangle is 114 atoms in length around the perimeter. The molecular triangles are threaded around each other at the same time that the triangles are formed, by a process called 'self-assembly', similar to how the DNA double helix is formed in biology.
The molecule was created at The University of Manchester by PhD student Alex Stephens. Professor David Leigh, in Manchester's School of Chemistry, said: "It was a great day when Alex finally got it in the lab. In nature, biology already uses molecular chainmail to make the tough, light shells of certain viruses and now we are on the path towards being able to reproduce its remarkable properties.
"It's the next step on the road to man-made molecular chainmail, which could lead to the development of new materials which are light, flexible and very strong. Just as chainmail was a breakthrough over heavy suits of armour in medieval times, this could be a big step towards materials created using nanotechnology. I hope this will lead to many exciting developments in the future."
In recent years, new strains of bacteria have emerged that resist even the most powerful antibiotics. Each year, these superbugs, including drug-resistant forms of tuberculosis and staphylococcus, infect more than 2 million people nationwide, and kill at least 23,000. Despite the urgent need for new treatments, scientists have discovered very few new classes of antibiotics in the past decade.
MIT engineers have now turned a powerful new weapon on these superbugs. Using a gene-editing system that can disable any target gene, they have shown that they can selectively kill bacteria carrying harmful genes that confer antibiotic resistance or cause disease.
Led by Timothy Lu, an associate professor of biological engineering and electrical engineering and computer science, the researchers described their findings in the Sept. 21 issue of Nature Biotechnology.
Last month, Lu’s lab reported a different approach to combating resistant bacteria by identifying combinations of genes that work together to make bacteria more susceptible to antibiotics.
Lu hopes that both technologies will lead to new drugs to help fight the growing crisis posed by drug-resistant bacteria.
“This is a pretty crucial moment when there are fewer and fewer new antibiotics available, but more and more antibiotic resistance evolving,” he says. “We’ve been interested in finding new ways to combat antibiotic resistance, and these papers offer two different strategies for doing that.”
Most antibiotics work by interfering with crucial functions such as cell division or protein synthesis. However, some bacteria, including the formidable MRSA (methicillin-resistant Staphylococcus aureus) and CRE (carbapenem-resistant Enterobacteriaceae) organisms, have evolved to become virtually untreatable with existing drugs.
In the new Nature Biotechnology study, graduate students Robert Citorik and Mark Mimee worked with Lu to target specific genes that allow bacteria to survive antibiotic treatment. The CRISPR genome-editing system presented the perfect strategy to go after those genes.
CRISPR, originally discovered by biologists studying the bacterial immune system, involves a set of proteins that bacteria use to defend themselves against bacteriophages (viruses that infect bacteria). One of these proteins, a DNA-cutting enzyme called Cas9, binds to short RNA guide strands that target specific sequences, telling Cas9 where to make its cuts.
Lu and colleagues decided to turn bacteria’s own weapons against them. They designed their RNA guide strands to target genes for antibiotic resistance, including the enzyme NDM-1, which allows bacteria to resist a broad range of beta-lactam antibiotics, including carbapenems. The genes encoding NDM-1 and other antibiotic resistance factors are usually carried on plasmids — circular strands of DNA separate from the bacterial genome — making it easier for them to spread through populations.
When the researchers turned the CRISPR system against NDM-1, they were able to specifically kill more than 99 percent of NDM-1-carrying bacteria, while antibiotics to which the bacteria were resistant did not induce any significant killing. They also successfully targeted another antibiotic resistance gene encoding SHV-18, a mutation in the bacterial chromosome providing resistance to quinolone antibiotics, and a virulence factor in enterohemorrhagic E. coli.
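The targeting rule the guide strands exploit can be sketched in a few lines. This is a toy illustration of SpCas9's 20-nucleotide-protospacer-plus-NGG-PAM rule on an invented sequence, not the actual NDM-1 plasmid or the authors' design pipeline:

```python
# Toy sketch of how a CRISPR-Cas9 guide is chosen: SpCas9 cuts where a
# 20-nt "protospacer" is immediately followed by an NGG PAM motif.
# The sequence below is invented for illustration, not the real NDM-1 gene.
def find_guide_sites(dna, guide_len=20):
    """Return (guide, position) pairs where guide_len bases precede an NGG PAM."""
    sites = []
    for i in range(guide_len, len(dna) - 2):
        pam = dna[i:i + 3]
        if pam[1:] == "GG":  # N can be any base
            sites.append((dna[i - guide_len:i], i - guide_len))
    return sites

plasmid = "ATGCGATTACCGGTTAGCATTGACCATGGAGCTTACGGATCGATTACGG"
for guide, pos in find_guide_sites(plasmid):
    print(pos, guide)
```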
In addition, the researchers showed that the CRISPR system could be used to selectively remove specific bacteria from diverse bacterial communities based on their genetic signatures, thus opening up the potential for “microbiome editing” beyond antimicrobial applications.
Ecole polytechnique fédérale de Lausanne (EPFL; Lausanne, Switzerland) researchers have developed a new light-emitting diode (LED)-based handheld device that is able to test a large number of proteins in our body all at once. Professor Hatice Altug and postdoctoral fellow Arif Cetin from EPFL, in collaboration with professor Aydogan Ozcan from UCLA (Los Angeles, CA), developed the compact and inexpensive "optical lab on a chip" to quickly analyze up to 170,000 different molecules in a blood sample--simultaneously identifying insulin levels, cancer and Alzheimer markers, or even certain viruses.
Instead of analyzing the biosample by looking at the spectral properties of the sensing platforms as has traditionally been the case, this new technique uses changes in the intensity of the light to do on-chip imaging, eliminating sometimes clunky spectrometers in the process.
Only 7.5 cm high and weighing 60 g, the device is able to detect viruses and single-layer proteins down to 3 nanometers thick. Detailed in a publication in the Nature journal Light: Science & Applications, the recipe is simple and contains few ingredients: an off-the-shelf CMOS chip, an LED, and a 10 square millimeter gold plate pierced with arrays of extremely small holes less than 200 nm wide.
Nanoholes on the gold substrates are compartmented into arrays of different sections, where each section functions as an independent sensor. The sensors are coated with special biofilms that specifically attract the targeted proteins. Consequently, multiple different proteins in the biosample can be captured at different places on the platform and monitored simultaneously. The LED light shines on the platform, passes through the nanoscale openings, and its properties are recorded onto the CMOS chip. Since light going through the nanoscale holes changes its properties depending on the presence of biomolecules, it is possible to easily deduce the number of particles trapped on the sensors.
The great desert was born some 7 million years ago, as remnants of a vast sea called Tethys closed up. According to the latest computer simulations of Earth’s ancient climate, the same movement of tectonic plates that created the Mediterranean Sea and the Alps also sparked the drying of the Sahara.
Though North Africa is currently covered by the world’s largest non-polar desert, climate conditions in the region have not been constant there for the last several million years. Subtle changes in Earth’s tilt toward the sun periodically increase the amount of solar energy received by the Northern Hemisphere in summer, altering atmospheric currents and driving monsoon rains. North Africa also sees more precipitation when less of the planet’s water is locked up in ice. Such increases in moisture limit how far the Sahara can spread and can even spark times of a “green Sahara”, when the sparse desert is replaced by abundant lakes, plants and animals.
Before the great desert was born, North Africa had a moister, semiarid climate. A few lines of evidence, including ancient dune deposits found in Chad, had hinted that the arid Sahara may have existed at least 7 million years ago. But without a mechanism to explain how it emerged, few scientists thought that the desert we see today could really be that old. Instead, most scientists argue that the Sahara took shape just 2 to 3 million years ago. Terrestrial and marine evidence suggest that North Africa underwent a period of drying at that time, when the Northern Hemisphere started its most recent cycle of glaciation.
Now Zhongshi Zhang of the Bjerknes Centre for Climate Research in Bergen, Norway, and colleagues have run simulations of climate change in North Africa over the last 30 million years. Their simulations take into account changes in Earth’s orbital position, atmospheric chemistry and the ratio of land to ocean as driven by tectonic forces. The models show that precipitation in North Africa declined by more than half about 7 million years ago, causing the region to dry out. But this effect could not be explained by changes in vegetation, Earth’s tilt or greenhouse gas concentrations—leaving tectonic action.
About 250 million years ago, a huge body of water called the Tethys Sea separated the supercontinents of Laurasia to the north and Gondwana to the south. As those supercontinents broke apart and shuffled around, the African plate collided with the Eurasian plate, birthing the Alps and the Himalayas but closing off the bulk of the Tethys Sea. As the plates kept moving, the sea continued to shrink, eventually diminishing into the Mediterranean.
What set off the aridification in Africa was the replacement of the western arm of the Tethys Sea with the Arabian Peninsula around 7 to 11 million years ago. Replacing water with land, which reflects less sunlight, altered the region’s precipitation patterns. This created the desert and heightened its sensitivity to changes in Earth’s tilt, the researchers conclude in a study published today in Nature.
An ultrasensitive biosensor made from the wonder material graphene has been used to detect molecules that indicate an increased risk of developing cancer. The biosensor has been shown to be more than five times more sensitive than bioassay tests currently in use, and was able to provide results in a matter of minutes, opening up the possibility of a rapid, point-of-care diagnostic tool for patients.
The biosensor was presented today, 19 September, in IOP Publishing's journal 2D Materials.
To develop a viable biosensor, the researchers, from Swansea University, had to create patterned graphene devices using a large substrate area, which was not possible using the traditional exfoliation technique where layers of graphene are stripped from graphite.
Instead, they grew graphene onto a silicon carbide substrate under extremely high temperatures and low pressure to form the basis of the biosensor. The researchers then patterned graphene devices, using semiconductor processing techniques, before attaching a number of bioreceptor molecules to the graphene devices. These receptors were able to bind to, or target, a specific molecule present in blood, saliva or urine.
The molecule, 8-hydroxydeoxyguanosine (8-OHdG), is produced when DNA is damaged and, in elevated levels, has been linked to an increased risk of developing several cancers. However, 8-OHdG is typically present at very low concentrations in urine, so is very difficult to detect using conventional detection assays, known as enzyme-linked immunosorbent assays (ELISAs).
In their study, the researchers used x-ray photoelectron spectroscopy and Raman spectroscopy to confirm that the bioreceptor molecules had attached to the graphene biosensor once fabricated, and then exposed the biosensor to a range of concentrations of 8-OHdG.
Wouldn't it be great if you could just call up a supercomputer and ask it to do your data-wrangling for you? Actually, scratch that, no-one uses the phone anymore. What'd be really cool is if machines could respond to your queries straight from Twitter. It's a belief that's shared by Wolfram Research, which has just launched the Tweet a Program system to its computational knowledge engine, Wolfram Alpha. In a blog post, founder Stephen Wolfram explains that even complex queries can be executed within the space of 140 characters, including data visualizations.
In the Wolfram Language a little code can go a long way, and Wolfram is using that fact to let everyone have some fun with the introduction of Tweet-a-Program. Compose a tweet-length Wolfram Language program and tweet it to @WolframTaP. The Twitter bot will run your program in the Wolfram Cloud and tweet the result back to you. One can do a lot with Wolfram Language programs that fit in a tweet. It’s easy to make interesting patterns or even complicated fractals, and putting in some math makes it easy to get all sorts of elaborate structures and patterns.
The Wolfram Language not only knows how to compute π, as well as a zillion other algorithms; it also has a huge amount of built-in knowledge about the real world. So right in the language, you can talk about movies or countries or chemicals or whatever. And here’s a 78-character program that makes a collage of the flags of Europe, sized according to country population. There are many, many kinds of real-world knowledge built into the Wolfram Language, including some pretty obscure ones. The Wolfram Language does really well with words and text and deals with images too.
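The article's examples are in the Wolfram Language; for readers without access to it, the same tweet-length spirit can be shown in Python, drawing a Sierpinski-style fractal from Pascal's triangle modulo 2:

```python
# A tweet-sized fractal in the spirit of Tweet-a-Program, written in Python
# rather than the Wolfram Language: cells where the binomial coefficient
# C(r, k) is odd form the Sierpinski triangle.
from math import comb

n = 16
rows = []
for r in range(n):
    row = " " * (n - r - 1) + " ".join(
        "#" if comb(r, k) % 2 else " " for k in range(r + 1)
    )
    rows.append(row)
print("\n".join(rows))
```

Stripped of whitespace, the core loop fits comfortably in a single tweet, which is the Wolfram post's point: a terse program can still draw rich structure.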
There was little need, before, to know exactly how much dust peppers outer space, far from the plane of the Milky Way. Scientists understood that the dimly radiating grains aligned with our galaxy’s magnetic field and that the field’s twists and turns gave a subtle swirl to the dust glow. But those swirls were too faint to see. Only since March, when researchers claimed to have glimpsed the edge of space and time with a fantastically sensitive telescope, has the dust demanded a reckoning. For, like a cuckoo egg masquerading in a warbler’s nest, its pattern mimics a predicted signal from the Big Bang.
Now, scientists have shown that the swirl pattern touted as evidence of primordial gravitational waves — ripples in space and time dating to the universe’s explosive birth — could instead all come from magnetically aligned dust. A new analysis of data from the Planck space telescope has concluded that the tiny silicate and carbonate particles spewed into interstellar space by dying stars could account for as much as 100 percent of the signal detected by the BICEP2 telescope and announced to great fanfare this spring.
The Planck analysis is “relatively definitive in that we can’t exclude that the entirety of our signal is from dust,” said Brian Keating, an astrophysicist at the University of California, San Diego, and a member of the BICEP2 collaboration.
“We were, of course, disappointed,” said Planck team member Jonathan Aumont of the Université Paris-Sud.
The new dust analysis leaves open the possibility that part of the BICEP2 signal comes from primordial gravitational waves, which are the long-sought fingerprints of a leading Big Bang theory called “inflation.” If the universe began with this brief period of exponential expansion, as the cosmologist Alan Guth proposed in 1980, then quantum-size ripples would have stretched into huge, permanent undulations in the fabric of the universe. These gravitational waves would have stamped a swirl pattern, called “B-mode” polarization, in the cosmic microwave background, the oldest light now detectable in the sky.
After the successful synthesis of silicene in 2012, which was followed by a surge of studies on elemental, novel two-dimensional (2D) materials beyond graphene, a daunting quest was to obtain germanene, the germanium-based analogue of graphene, whose possible existence had already been predicted in 2009. Although its fully hydrogenated form, germanane, was fabricated using a wet chemistry method in 2013, germanene has remained elusive. Scientists now show compelling experimental and theoretical evidence of its synthesis by dry epitaxial growth on a gold (111) surface.
The discovery of graphene boosted research in nanoscience on 2D materials, especially on elemental ones. In 2012, silicene, graphene's silicon cousin , was successfully synthesized on two metallic templates, namely a silver (111) surface [2, 3] and the zirconium diboride (0001) surface of a thin film grown on a silicon (111) substrate . One year later, silicene was also grown on an iridium (111) surface . Germanene, another germanium-based cousin of graphene, along with silicene, had already been predicted to be stable as freestanding novel germanium and silicon 2D allotropes in a low buckled honeycomb geometry by Cahangirov et al in 2009 . In the quest for germanene, its fully hydrogen-terminated partner, germanane (GeH), was first fabricated from the topochemical deintercalation of the layered van der Waals solid calcium digermanide (CaGe2) ; next, the stability of germanane was improved by replacing the H atom termination with a methyl group one .
Since silicene has, up to now, only been synthesized in dry conditions under ultrahigh vacuum (UHV)—with silver (111) as the favored substrate—trying to synthesize germanene by also growing it on Ag(111) single crystals using germanium molecular beam epitaxy seems tempting. However, this has failed up to now, because (1) the 'magic mismatch' between three lattice constants of silicene and four of the Ag(111) plane is not fulfilled for germanene, and (2) germanium most probably prefers to form an ordered Ag2Ge surface alloy, where Ge atoms, up to a coverage of one-third of a monolayer (1/3 ML), substitute Ag ones at the silver surface. This surface alloy presents a complex '√3 × √3' structure , which not only deviates in its geometry but also in its electronic properties [9, 10] from the simple √3 × √3 reconstruction envisaged earlier .
Scientists have thus used a gold (111) substrate instead to avoid such surface alloy formation. Indeed, for silicene synthesis silicon was deposited on silver (111) surfaces because the inverse system, silver grown on Si(111) surfaces, is well known to form atomically abrupt interfaces without intermixing. The choice of an Au(111) substrate is based on the same strategy. It turns out that among the four noble-metal-on-elemental-semiconductor systems studied, namely Au and Ag on Ge(111) and Si(111), the pairs most similar in several respects appeared to be Si/Ag(111) and Ge/Au(111), a trend confirmed in a recent study of Au/Ge(111). In particular they share the same growth mode, the Stranski–Krastanov (or layer-plus-islands) mode, characterized by the formation of a √3 × √3 R30° superstructure (the wetting layer) associated with the formation of Au trimers on Ge(111) or Ag trimers on Si(111).
Many scientists believe we are not alone in the universe. It's probable, they say, that life could have arisen on at least some of the billions of planets thought to exist in our galaxy alone -- just as it did here on planet Earth. This basic question about our place in the Universe is one that may be answered by scientific investigations. What are the next steps to finding life elsewhere?
Experts from NASA and its partner institutions addressed this question on July 14, 2014 at a public talk held at NASA Headquarters in Washington. They outlined NASA's roadmap to the search for life in the universe, an ongoing journey that involves a number of current and future telescopes.
"Sometime in the near future, people will be able to point to a star and say, 'that star has a planet like Earth'," says Sara Seager, professor of planetary science and physics at the Massachusetts Institute of Technology in Cambridge, Massachusetts. "Astronomers think it is very likely that every single star in our Milky Way galaxy has at least one planet."
NASA's quest to study planetary systems around other stars started with ground-based observatories, then moved to space-based assets like the Hubble Space Telescope, the Spitzer Space Telescope, and the Kepler Space Telescope. Today's telescopes can look at many stars and tell if they have one or more orbiting planets. Moreover, they can determine whether those planets lie at the right distance from their star to have liquid water, the key ingredient for life as we know it.
The NASA roadmap will continue with the launch of the Transiting Exoplanet Surveying Satellite (TESS) in 2017, the James Webb Space Telescope (Webb Telescope) in 2018, and perhaps the proposed Wide Field Infrared Survey Telescope - Astrophysics Focused Telescope Assets (WFIRST-AFTA) early in the next decade. These upcoming telescopes will find and characterize a host of new exoplanets -- those planets that orbit other stars -- expanding our knowledge of their atmospheres and diversity. The Webb Telescope and WFIRST-AFTA will lay the groundwork; future missions will extend the search to nearby planets similar to Earth in size and mass, looking for oceans in the form of atmospheric water vapor and for signs of life in the form of carbon dioxide and other atmospheric chemicals -- a key step in the search for life.
"This technology we are using to explore exoplanets is real," said John Grunsfeld, astronaut and associate administrator for NASA's Science Mission Directorate in Washington. "The James Webb Space Telescope and the next advances are happening now. These are not dreams -- this is what we do at NASA."
Since its launch in 2009, Kepler has dramatically changed what we know about exoplanets, finding most of the more than 5,000 potential exoplanets, of which more than 1700 have been confirmed. The Kepler observations have led to estimates of billions of planets in our galaxy, and shown that most planets within one astronomical unit are less than three times the diameter of Earth. Kepler also found the first Earth-size planet to orbit in the "habitable zone" of a star, the region where liquid water can pool on the surface.
"What we didn't know five years ago is that perhaps 10 to 20 percent of stars around us have Earth-size planets in the habitable zone," says Matt Mountain, director and Webb telescope scientist at the Space Telescope Science Institute in Baltimore. "It's within our grasp to pull off a discovery that will change the world forever. It is going to take a continuing partnership between NASA, science, technology, the U.S. and international space endeavors, as exemplified by the James Webb Space Telescope, to build the next bridge to humanity's future."
This decade has seen the discovery of more and more super Earths, which are rocky planets that are larger and heftier than Earth. Finding smaller planets, the Earth twins, is a tougher challenge because they produce fainter signals. Technology to detect and image these Earth-like planets is being developed now for use with the future space telescopes. The ability to detect alien life may still be years or more away, but the quest is underway.
Scientists from Jülich and Xi’an have developed a new method with which crystal structures can be reconstructed with atomic precision in all three dimensions.
An important characteristic of nanoparticles is that they differ from other kinds of materials in that their surface determines their physical and technical properties to a much larger extent. The efficiency of catalysts, for instance, depends predominantly on the shape of the materials used and their surface texture. For this reason, physicists and material scientists are interested in being able to determine the structure of nanomaterials from all angles and through several layers, right down to the last atom. Until now, it was necessary to perform a whole series of tests from different angles to do so. However, scientists at Forschungszentrum Jülich, the Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons (ER-C) and Xi’an Jiaotong University in China have now succeeded for the first time in calculating the spatial arrangement of the atoms from just a single image from an electron microscope.
Their approach offers many advantages; radiation-sensitive samples can also be studied, which would otherwise be quickly damaged by the microscope’s high-energy electron beam. The comparatively short data acquisition time involved could even make it possible in the future to observe the transient intermediate steps of chemical reactions. Moreover, it enables a "gentle" measurement procedure to take place, to detect not only heavy but also light chemical elements, such as oxygen, which has an important function in many technologically significant materials.
"Acquiring three-dimensional information from a single two-dimensional image seems impossible at first glance. Nevertheless, it is in fact possible; we don’t obtain a simple two-dimensional projection of the three-dimensional sample as the experiment follows quantum mechanical principles instead", explains Prof. Chunlin Jia, researcher at the Jülich Peter Grünberg Institute, in Microstructure Research (PGI-5), the ER-C and at Jiaotong University. "On its way through the crystal lattice, the electron wave of the microscope acts as a highly sensitive atom detector and is influenced by each individual atom. The key point is that it does actually make a difference whether the wave front encounters an atom at the beginning or at the end of its pathway through the crystal."
Emissions of greenhouse gases are rising so fast that within one generation the world will have used up its margin of safety for limiting global warming to 2°C (3.6°F), an international team of scientists warned.
A report by the Global Carbon Project (GCP), published two days ahead of the UN climate summit on Tuesday, found that carbon dioxide (CO2) emissions from fossil-fuel combustion and cement production grew by 2.3 percent in 2013, reaching a record 36 billion tonnes of CO2. It predicted a further 2.5-percent increase in 2014.
It means that the world's "carbon quota" is fast being used up, according to the GCP research. Like an allowance, the quota is the maximum amount of heat-trapping gas that can be emitted before warming breaches 2°C compared to the start of the Industrial Revolution in 1750.
"With current emission rates, the remaining 'quota' to surpass 2°C of global warming will be used up in around 30 years—or one generation," its authors said. "Total future CO2 emissions cannot exceed 1,200 billion tonnes for a likely—66 percent—chance of keeping average global warming under 2°C since pre-industrial times."
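The "one generation" figure can be checked with back-of-the-envelope arithmetic using the numbers quoted above (36 billion tonnes per year, a 1,200-billion-tonne remaining budget). The function and the growth assumption are illustrative, not the GCP's actual model:

```python
def years_until_budget_spent(budget_gt, annual_gt, growth=0.0):
    """Count full years of emissions until the cumulative total
    reaches the remaining carbon budget (all figures in Gt CO2)."""
    years, emitted = 0, 0.0
    while emitted < budget_gt:
        emitted += annual_gt
        annual_gt *= 1.0 + growth   # optional year-on-year emissions growth
        years += 1
    return years

# Flat emissions at today's 36 Gt/yr:
print(years_until_budget_spent(1200, 36))          # 34 years
# If the recent ~2.5% annual growth continued instead:
print(years_until_budget_spent(1200, 36, 0.025))   # 25 years
```

Either way the budget runs out within roughly one human generation, consistent with the report's 30-year estimate.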
Member states have agreed to limit global warming to 2°C above pre-industrial levels, although they have not set a date by which this should be achieved.
The negotiations are supposed to climax in Paris at the end of 2015, providing a global pact that should come into force in 2020. But the talks are complex and bitterly-fought, with divisions over who should shoulder the burden of curbing the emissions.
Shellfish such as mussels and barnacles secrete very sticky proteins that help them cling to rocks or ship hulls, even underwater. Inspired by these natural adhesives, a team of MIT engineers has designed new materials that could be used to repair ships or help heal wounds and surgical incisions.
To create their new waterproof adhesives, the MIT researchers engineered bacteria to produce a hybrid material that incorporates naturally sticky mussel proteins as well as a bacterial protein found in biofilms—slimy layers formed by bacteria growing on a surface. When combined, these proteins form even stronger underwater adhesives than those secreted by mussels.
This project, described in the Sept. 21, 2014 issue of the journal Nature Nanotechnology, represents a new type of approach that can be exploited to synthesize biological materials with multiple components, using bacteria as tiny factories.
"The ultimate goal for us is to set up a platform where we can start building materials that combine multiple different functional domains together and to see if that gives us better materials performance," says Timothy Lu, an associate professor of biological engineering and electrical engineering and computer science (EECS) and the senior author of the paper.
The researchers tested the adhesives using atomic force microscopy, a technique that probes the surface of a sample with a tiny tip. They found that the adhesives bound strongly to tips made of three different materials—silica, gold, and polystyrene. Adhesives assembled from equal amounts of mussel foot protein 3 and mussel foot protein 5 formed stronger adhesives than those with a different ratio, or only one of the two proteins on their own.
These adhesives were also stronger than naturally occurring mussel adhesives, and they are the strongest biologically inspired, protein-based underwater adhesives reported to date, the researchers say. The team also plans to try to create "living glues" consisting of films of bacteria that could sense damage to a surface and then repair it by secreting an adhesive.
Scientists have devised a way to run a quantum cycle based on the use of quantum shortcuts to adiabaticity, where friction-like effects are quenched. In real physical processes, some energy is always lost any time work is produced. The lost energy almost always occurs due to friction, especially in processes that involve mechanical motion. But in a new study, physicists have designed an engine that operates with zero friction while still generating power by taking advantage of some quantum shortcuts.
The laws of thermodynamics successfully describe the concepts of work and heat in a wide variety of systems, ranging from refrigerators to black holes, as long as the systems are macroscopic. But for quantum technologies on the micro- and nano-scale, quantum fluctuations that are insignificant on large scales start to become prominent. As previous research has shown, these large quantum effects call for a complete reformulation of the laws of thermodynamics.
What a quantum version of thermodynamics might look like is not yet known, and neither are the limitations or possible advantages of the quantum devices that would be described by such laws. However, one intriguing question is whether it may be possible to build a reversible quantum engine—one in which the engine's operation can be reversed without energy dissipation (an "adiabatic" process).
In the new paper, the physicists have shown one example of a quantum engine that is "super-adiabatic." That is, the engine uses quantum shortcuts to achieve a state that is usually achieved only by slow adiabatic processes. This engine can achieve a state that is fully frictionless; in other words, the engine reaches its maximum efficiency, while still generating some power.
"Shortcuts allow us to 'mimic' what would be achieved by running a cycle quasi-statically, i.e., very slowly, while performing transformations at finite time," coauthor Mauro Paternostro at Queen's University in Belfast, UK, told Phys.org. "Now, consider for instance a compression or expansion stage of a cycle run using a piston. When doing it at finite time, i.e., non-zero velocity, friction might affect the performance of the transformation. Yet, by using a shortcut to adiabaticity, friction-like effects would get quenched, the cycle performance being the same as that of a quasistatic motor."
Children born today will see the world committed to dangerous and irreversible levels of climate change by their young adulthood at current rates, as the world poured a record amount of greenhouse gases into the atmosphere this year.
Annual carbon dioxide emissions showed a strong rise of 2.5% on 2013 levels, putting the total emitted this year on track for 40bn tonnes. That means the global ‘carbon budget’, calculated as the total governments can afford to emit without pushing temperatures more than 2C above pre-industrial levels, is likely to be used up within just one generation, around thirty years from now.
Scientists think climate change is likely to have catastrophic and irreversible effects, including rising sea levels, polar melting, droughts, floods and increasingly extreme weather, if temperatures rise more than 2C. They have calculated that this threshold is likely to be breached if global emissions top 1,200 billion tonnes, giving a “carbon budget” to stick to in order to avoid dangerous warming.
Dave Reay, professor of carbon management at the University of Edinburgh, said: “If this were a bank statement it would say our credit is running out. We’ve already burned through two-thirds of our global carbon allowance and avoiding dangerous climate change now requires some very difficult choices. Not least of these is how a shrinking global carbon allowance can be shared equitably between more than 7bn people and where the differences between rich and poor are so immense.”
The study, by the Global Carbon Project, also found that China’s per capita emissions had surpassed those of Europe for the first time between 2013 and 2014.
It comes ahead of a climate summit on Tuesday in New York, at which the UN secretary-general Ban Ki-moon will bring together heads of state and government from more than 120 countries to discuss climate change, and encourage them to make commitments on emissions reductions in the run-up to a crunch meeting in Paris late next year, at which a new global agreement on emissions is expected to be signed.
Emissions for 2014, according to the research, are set to rise to 40bn tonnes. That compares with emissions of 32bn tonnes in 2010, showing how fast the output is rising.
The rising trend has continued despite increasingly alarming warnings from scientists over the future of the climate, and commitments by developed countries to cut their carbon and from major developing economies to curb their emissions growth. There was a brief blip in global emissions growth at the time of the banking crisis, but this “breathing space” was quickly overtaken by an expansion in fossil fuel demand.
A team of Stanford researchers has developed a protein therapy that disrupts the process that causes cancer cells to break away from original tumor sites, travel through the blood stream and start aggressive new growths elsewhere in the body. This process, known as metastasis, can cause cancer to spread with deadly effect.
The Stanford team seeks to stop metastasis, without side effects, by preventing two proteins – Axl and Gas6 – from interacting to initiate the spread of cancer. Axl proteins are expressed on the surface of cancer cells, poised to receive biochemical signals from Gas6 proteins. When two Gas6 proteins link with two Axls, the signals that are generated enable cancer cells to leave the original tumor site, migrate to other parts of the body and form new cancer nodules.
Physicists at the University of Geneva have succeeded in teleporting the quantum state of a photon to a crystal over 25 kilometers of optical fiber. The experiment, carried out in the laboratory of Professor Nicolas Gisin, constitutes a first, and simply pulverises the previous record of 6 kilometres achieved ten years ago by the same UNIGE team. Passing from light into matter, using teleportation of a photon to a crystal, shows that, in quantum physics, it is not the composition of a particle which is important, but rather its state, since this can exist and persist outside such extreme differences as those which distinguish light from matter. The results obtained by Félix Bussières and his colleagues are reported in the latest edition of Nature Photonics.
Quantum physics, and with it UNIGE, is again in the international spotlight, with the Marcel Benoist Prize for 2014 being awarded to Professor Nicolas Gisin and the experiments published in Nature Photonics. These latest experiments made it possible to verify that the quantum state of a photon can be maintained while transporting it into a crystal, without the two coming directly into contact. One can think of the crystal as a memory bank storing the photon's information; that information is transferred over these distances via the teleportation effect.
The experiment not only represents a significant technological achievement but also a spectacular advance in the continually surprising possibilities afforded by the quantum dimension. By taking the distance to 25 kilometres of optical fibre, the UNIGE physicists have significantly surpassed their own record of 6 kilometres, the distance achieved during the first long-distance teleportation achieved by Professor Gisin and his team in 2003.
While bubonic plague would seem a blight of the past, there have been recent outbreaks in India, Madagascar and the Congo. And its mode of infection now appears similar to that used by other well-adapted human pathogens, such as HIV.
In self-defense the hagfish produces from its glands a slime composed of threads of nanometer width together with what are likely sugars or glyco-modifications. The slime is thought to impede capture by making the hagfish slippery, and possibly by clogging the gills of a predator. The nanothreads are remarkable: comparable to spider silk in tensile strength (800 megapascals, or near 1 gigapascal) and lightness, and 5 times stronger than steel on a weight basis. Moreover, each thread is only 12 nanometers wide but 15 centimeters long. Amazingly, a full thread is wrapped up so that it fits within a single, highly specialized cell called a gland thread cell (GTC).
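The packing problem becomes vivid once the aspect ratio implied by those dimensions is worked out. This is plain unit-conversion arithmetic on the figures quoted above, not a calculation from the paper:

```python
# Dimensions quoted for a single hagfish slime thread
width_m = 12e-9     # 12 nanometres
length_m = 0.15     # 15 centimetres

aspect_ratio = length_m / width_m
print(f"{aspect_ratio:.2e}")        # 1.25e+07 (length-to-width ratio)

# For scale: a rope 1 mm thick with the same aspect ratio
# would be 12.5 km long.
rope_length_km = (1e-3 * aspect_ratio) / 1e3
print(round(rope_length_km, 1))     # 12.5
```

An object twelve and a half million times longer than it is wide must be coiled without a single tangle inside one cell, which is the mystery the microscopy work below addresses.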
Scientists have uncovered, using electron microscopy, the organization of a single hagfish nano-sized thread, helping resolve the mystery of why extrusion of such a long (compared to its width) thread from the cell does not cause tangling. The thread is coiled up in a conical “skein” in 15-20 layers. As a GTC matures, its nucleus migrates to an extreme pole, leaving most of the cell volume packed with a single coil of thread.
The conical shape of the coiling seems to be controlled by the shape of the nucleus of the GTC which deforms over time from being round to being elongated. The first layer of coils observed by the researchers is round with subsequent layers becoming more elongated. Therefore the nucleus provides an evolving “obstruction” which restricts the freedom of the thread and organizes it over the maturation time.
The authors used a method known as Focused Ion Beam Scanning Electron Microscopy or (FIB-SEM) to scan a matured GTC. The great advantage of FIB-SEM is the ability to acquire image slices through a succession of scanned planes. Software then takes image slices to reconstruct a 3D representation.
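The reconstruction principle is easy to sketch: the ordered 2D image slices are stacked along the milling direction into a 3D volume that software can then render or section arbitrarily. A minimal illustration in Python, where the array sizes and random data are placeholders rather than the study's actual data:

```python
import numpy as np

# Simulate a FIB-SEM acquisition: each milling step exposes a new
# plane, which is imaged as a 2D grayscale array.
rng = np.random.default_rng(0)
n_slices, height, width = 20, 64, 64
slices = [rng.random((height, width)) for _ in range(n_slices)]

# Reconstruction by stacking: the slices become a 3D volume whose
# first axis is the milling (depth) direction.
volume = np.stack(slices, axis=0)
print(volume.shape)   # (20, 64, 64)

# The volume can now be resliced along any plane, e.g. a virtual
# cross-section orthogonal to the original imaging planes:
cross_section = volume[:, height // 2, :]
print(cross_section.shape)   # (20, 64)
```

Real FIB-SEM pipelines additionally align the slices and correct for uneven milling depth, but the core data structure is exactly this stacked volume.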
Attempts to industrialize strong, natural nanofibers with extraordinary properties have met with limited success. Harvesting silk from the silk worm in bulk is routine and has a history of thousands of years. But silk does not have the tensile properties of spider or hagfish thread. Harvesting spider silk is not doable on an industrial scale partly due to our inability to amass sufficient production volume, although there are some ongoing efforts to engineer the proteins into another organism such as a silkworm or bacteria that affords better control over production capacity.
Intel has been working on a 3D scanner small enough to fit in the bezel of even the thinnest tablets. The company aims to have the technology in tablets from 2015, with CEO Brian Krzanich telling the crowd at MakerCon in New York on Thursday that he hopes to put the technology in phones as well.
"Our goal is to just have a tablet that you can go out and buy that has this capability," Krzanich said. "Eventually within two or three years I want to be able to put it on a phone."
Krzanich and a few of his colleagues demonstrated the technology, which goes by the name "RealSense," on stage using a human model and an assistant who simply circled the model a few times while pointing a tablet at the subject. A full 3D rendering of the model slowly appeared on the screen behind the stage in just a few minutes. The resulting 3D models can be manipulated with software or sent to a 3D printer.
"The idea is you go out, you see something you like and you just capture it," Krzanich explained. He said consumer tablets with built in 3D scanners will hit the market in the third or fourth quarter of 2015, with Intel also working on putting the 3D scanning cameras on drones.
The predecessor to the 3D scanning tablets demonstrated on stage was announced earlier this month in the form of the Dell Venue 8 7000 series Android tablet, which sports Intel's RealSense snapshot depth camera, bringing light-field camera-like capabilities to a tablet. It will be available later this year.