A huge, copper-toned formation in West Africa dominates a mesmerizing photo taken by an astronaut aboard the International Space Station.
NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1450 news sources:
NOTE: All articles in the Amazing Science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the funnel at the top right of the screen) to display all the relevant postings sorted by topic.
You can also type your own query: for example, search for "dna" if you are looking for articles involving DNA as a keyword.
MOST_READ • 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
At this year’s Consumer Electronics Show in Las Vegas, the big theme was the “Internet of things” — the idea that everything in the human environment, from kitchen appliances to industrial equipment, could be equipped with sensors and processors that can exchange data, helping with maintenance and the coordination of tasks.
Realizing that vision, however, requires transmitters that are powerful enough to broadcast to devices dozens of yards away but energy-efficient enough to last for months — or even to harvest energy from heat or mechanical vibrations.
“A key challenge is designing these circuits with extremely low standby power, because most of these devices are just sitting idling, waiting for some event to trigger a communication,” explains Anantha Chandrakasan, the Joseph F. and Nancy P. Keithley Professor in Electrical Engineering at MIT. “When it’s on, you want to be as efficient as possible, and when it’s off, you want to really cut off the off-state power, the leakage power.”
This week, at the Institute of Electrical and Electronics Engineers’ International Solid-State Circuits Conference, Chandrakasan’s group will present a new transmitter design that reduces off-state leakage 100-fold. At the same time, it provides adequate power for Bluetooth transmission, or for the even longer-range 802.15.4 wireless-communication protocol.
“The trick is that we borrow techniques that we use to reduce the leakage power in digital circuits,” Chandrakasan explains. The basic element of a digital circuit is a transistor, in which two electrical leads are connected by a semiconducting material, such as silicon. In their native states, semiconductors are not particularly good conductors. But in a transistor, the semiconductor has a second wire sitting on top of it, which runs perpendicular to the electrical leads. Applying a positive charge to this wire — known as the gate — draws electrons toward it. The concentration of electrons creates a bridge that current can cross between the leads. Applying a negative charge to the gate has the opposite effect, driving electrons away from the leads and choking off the small leakage current that otherwise flows when the transistor is nominally off.
To generate the negative charge efficiently, the MIT researchers use a circuit known as a charge pump, which is a small network of capacitors — electronic components that can store charge — and switches. When the charge pump is exposed to the voltage that drives the chip, charge builds up in one of the capacitors. Throwing one of the switches connects the positive end of the capacitor to the ground, causing a current to flow out the other end. This process is repeated over and over. The only real power drain comes from throwing the switch, which happens about 15 times a second.
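The switched-capacitor mechanism described above can be sketched as a toy two-phase model. This is an illustration of the general charge-pump principle with assumed component values, not the MIT circuit itself:

```python
# Toy model of an inverting charge pump. Phase 1: a "flying"
# capacitor charges to the supply voltage Vdd. Phase 2: its positive
# plate is switched to ground, so its other plate sits near -Vdd and
# transfers charge to an output reservoir capacitor.

VDD = 1.0          # supply voltage in volts (assumed value)
C_FLY = 10e-12     # flying capacitor in farads (assumed value)
C_OUT = 100e-12    # output reservoir capacitor (assumed value)

def pump_cycle(v_out):
    """One charge-transfer cycle: charge sharing between the flying
    capacitor (holding -VDD) and the output reservoir."""
    return (C_OUT * v_out + C_FLY * (-VDD)) / (C_OUT + C_FLY)

v = 0.0
for cycle in range(100):
    v = pump_cycle(v)
print(f"output after 100 cycles: {v:.3f} V")  # converges toward -VDD
```

The only energy spent in this model is in toggling the switch each cycle, which is why a pump clocked at roughly 15 Hz, as in the article, draws so little standby power.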
To make the transmitter more efficient when it’s active, the researchers adopted techniques that have long been a feature of work in Chandrakasan’s group. Ordinarily, the frequency at which a transmitter can broadcast is a function of its voltage. But the MIT researchers decomposed the problem of generating an electromagnetic signal into discrete steps, only some of which require higher voltages. For those steps, the circuit uses capacitors and inductors to increase voltage locally. That keeps the overall voltage of the circuit down, while still enabling high-frequency transmissions.
What those efficiencies mean for battery life depends on how frequently the transmitter is operational. But if it can get away with broadcasting only every hour or so, the researchers’ circuit can reduce power consumption 100-fold.
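The battery-life claim is simple duty-cycle accounting: average power is the duty-cycled blend of active and idle power, and at very low duty cycles the idle (leakage) term dominates. The figures below are illustrative assumptions, not numbers from the paper:

```python
# Average power for a radio that is mostly idle:
# P_avg = duty_cycle * P_active + (1 - duty_cycle) * P_idle.

def average_power(p_active, p_idle, duty_cycle):
    return duty_cycle * p_active + (1 - duty_cycle) * p_idle

P_ACTIVE = 10e-3   # 10 mW while transmitting (assumed)
DUTY = 1e-6        # a few microseconds of traffic per second (assumed)

baseline = average_power(P_ACTIVE, 5e-6, DUTY)   # assumed 5 uW leakage
improved = average_power(P_ACTIVE, 5e-8, DUTY)   # 100x lower leakage
print(f"{baseline / improved:.0f}x lower average power")
```

With leakage dominating, cutting off-state power 100-fold cuts total consumption by nearly the same factor, which is the scenario the researchers describe for a transmitter that broadcasts only occasionally.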
Working with researchers at Zhejiang University in China, Changxi Zheng, assistant professor of computer science at Columbia Engineering, has developed a technique that enables hydrographic printing, a widely used industrial method for transferring color inks on a thin film to the surface of manufactured 3D objects, to color these surfaces with the most precise alignment ever attained. Using a new computational method they developed to simulate the printing process, Zheng and his team have designed a model that predicts color film distortion during hydrographic immersion, and uses it to generate a colored film that guarantees exact alignment of the surface textures to the object. The research will be presented at SIGGRAPH 2015, August 9 to 13, in Los Angeles.
"Attaining precise alignment of the color texture onto the surface of an object with a complex surface, whether it's a motorcycle helmet or a 3D-printed gadget, has been almost impossible in hydrographic printing until now," says Zheng. "By incorporating -- for the first time -- a computational model into the traditional hydrographic printing process, we've made it easy for anyone to physically decorate 3D surfaces with their own customized color textures."
Used in mass production for transferring repeated color patterns to a 3D surface, hydrographic printing can be applied to various materials including metal, plastic, wood, and porcelain. The process uses a PVA film with printed color patterns placed on top of water. An activator chemical is then sprayed on the film, softening the color film to make it easily stretchable. Next, a physical object is slowly dipped into the water through the floating film. Once the film touches the object, it gets stretched, wrapping the object's surface, and adhering to it. Throughout the process, the color ink printed on the PVA film is transferred to the surface. But the process has a fundamental limitation in that it is almost impossible to precisely align a color pattern to the object surface, because the object stretches the color film. With complex surfaces, the stretch can be severe and even tear the film apart.
"So current hydrographic printing has been limited to transferring repetitive color patterns," Zheng explains. "But there are many times when a user would like to color the surface of an object with particular color patterns, to decorate a 3D-printed mug with specific, personalized images or just to color a toy."
Building upon previous work on fluid and viscous sheet simulation also done at Columbia Computer Graphics Group, Zheng has developed a new viscous sheet simulation method to model the color film stretch during the hydrographic printing process. This model predicts the stretch and distortion of color films and creates a map between the locations on the film and the surface locations to which they are transferred. With the map, he can compute a color image for printing on the PVA film and then, after the hydrographic immersion, it forms the desired color pattern on the object's surface.
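The core idea (simulate the stretch, build a film-to-surface map, then print at each film point the color destined for the surface point it lands on) can be sketched as follows. The distortion function here is a made-up stand-in for the authors' viscous sheet simulation:

```python
# Pre-distortion sketch: given a map from film coordinates (u, v) to
# surface coordinates (s, t), print at (u, v) the color the design
# wants at (s, t), so the stretch "undoes" itself during dipping.

def film_to_surface(u, v):
    """Hypothetical distortion map standing in for the simulation:
    points farther from the dip axis get stretched outward."""
    stretch = 1.0 + 0.5 * (u * u + v * v)   # toy radial stretch
    return u * stretch, v * stretch

def desired_color(s, t):
    """Target texture on the object's surface (a toy checkerboard)."""
    return (int(s * 4) + int(t * 4)) % 2

def predistorted_film(resolution=8):
    """Color to print at each film location so that, after the
    stretch, the surface shows desired_color."""
    img = []
    for i in range(resolution):
        row = []
        for j in range(resolution):
            u, v = i / resolution, j / resolution
            s, t = film_to_surface(u, v)   # where this ink ends up
            row.append(desired_color(s, t))
        img.append(row)
    return img

film = predistorted_film()
```

The hard part in practice, and the paper's contribution, is computing an accurate `film_to_surface` map by physically simulating the floating sheet; the lookup step above is then straightforward.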
Fruit flies have a neural compass that tracks orientation by combining visual and self-motion cues, according to a study published today in the journal Nature. The new research shows that the compass in the fruit fly brain works in a similar way to that of mammals, suggesting that this tiny creature could teach us a few things about how our own compass works.
Most animals use landmarks to find their way around, but when navigating bare or unfamiliar terrain, they can estimate their position by tracking the direction and speed of their movements relative to a starting point, a process called path integration. The brains of rodents and other mammals contain at least four different types of nerve cells that are involved in this process, which co-operate to form a cognitive map of the surroundings.
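Path integration itself is easy to state computationally: accumulate heading changes and step lengths into a running position estimate, from which a "home vector" back to the start can be read off at any time. A minimal sketch:

```python
# Dead reckoning: estimate displacement from a start point by
# integrating self-motion cues (turns and distances), with no landmarks.
import math

def integrate_path(steps):
    """steps: iterable of (turn_radians, distance) self-motion cues.
    Returns the estimated (x, y) displacement from the start."""
    heading, x, y = 0.0, 0.0, 0.0
    for turn, dist in steps:
        heading += turn
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return x, y

# Walk out 3 units, turn 180 degrees, walk 3 units back:
x, y = integrate_path([(0.0, 3.0), (math.pi, 3.0)])
# The animal's home vector (-x, -y) is ~(0, 0): it is back at the nest.
```

Because every step's error accumulates, a pure integrator drifts over time, which is consistent with the flies' compass becoming increasingly inaccurate in the dark, as described below.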
Insects also use path integration. Honey bees perform a ‘waggle dance’ near the entrance to their nest to signal the direction, distance and abundance of a food source to their fellow workers, and foraging desert ants retrace their steps back to where they think their nest is, even after being picked up and moved partway through their outward journey so that their trajectory is disrupted. It’s widely believed that insects use simpler neural computations to navigate, and there’s very little evidence that they form cognitive maps.
In fruit flies, a ring-shaped brain structure called the ellipsoid body is needed for navigation. Johannes Seelig and Vivek Jayaraman of the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia wanted to see how cells in this structure respond to visual stimuli, so they designed an ingenious and tricky experiment to monitor the cells as the flies moved through a virtual reality environment.
First, they created genetically engineered fruit flies expressing a protein that fluoresces when nerve cells become active and the calcium level inside them rises. Then they attached individual flies to the end of a metal rod, placing them inside a circular screen displaying various lined patterns, with the laser beam of a powerful high-speed two-photon microscope focused into the ellipsoid body.
The flies were held in place over an air-suspended ball, and by running over this, they controlled the rotation of the screen, giving them the illusion of movement, with the horizontal and vertical stripes acting as landmarks along their virtual journey.
Seelig and Jarayaman noticed that the cells in the ellipsoid body itself tracked the fly’s orientation, producing ‘bumps’ of activity whose position around the ring-shaped structure corresponded to the direction of the stripes, and which rotated with the stripes as the flies turned the ball.
This compass-like neural activity continued when the flies were in the dark, using self-motion instead of visual cues, but became increasingly inaccurate with time. It even persisted for more than 30 seconds when the flies were removed from the ball and left standing in darkness, perhaps reflecting a short-term memory of their orientation.
Seelig, J. D. & Jayaraman, V. (2015). Neural dynamics for landmark orientation and angular path integration. Nature, 521,186–191. DOI: 10.1038/nature14446
A dramatic video has captured the behavior of cytotoxic T cells – the body’s ‘serial killers’ – as they hunt down and eliminate cancer cells before moving on to their next target.
In a study published today in the journal Immunity, a collaboration of researchers from the UK and the USA, led by Professor Gillian Griffiths at the University of Cambridge, describe how specialised members of our white blood cells known as cytotoxic T cells destroy tumour cells and virally-infected cells. Using state-of-the-art imaging techniques, the research team, with funding from the Wellcome Trust, has captured the process on film.
A recent finding by scientists from the Hospital for Sick Children, Toronto, and Duke University challenges long-held ideas about why our bones have a harder time healing as we age. Their research discovered that old mouse bones mend like youthful bones do when they're exposed to young blood after a fracture.
“The traditional concept is that as you get older, your bone cells kind of wear out so they can't heal as well, and we thought we'd find that during this study as well,” explains study co-author Benjamin Alman, of the Hospital for Sick Children. “But it turns out that it's not the bone cells, it's the blood cells. As you get older, the blood cells change the way they behave when you have an injury, and as a result the cells that heal bone aren't able to work as efficiently.”
The researchers paired lab mice, one old and one young, and subjected them to bone fractures, but that wasn't all they had in common. The living animals' circulatory systems were also joined together by a 150-year-old surgical technique known as parabiosis. Scientists removed a layer of skin from each mouse and stitched the exposed surfaces together. As the animals healed their capillaries joined, enabling their two hearts to pump the same blood throughout the two bodies as a single system. Parabiosis, which has been gaining new popularity in aging research, allowed Alman and colleagues to see what impacts the circulating factors of the younger mouse's blood had when introduced into the body of an older mouse.
The experiment, published this week in Nature Communications, suggests that young blood cells secrete some as-yet-unknown molecule, likely a protein or possibly some other chemical, that speeds up the healing of fractured bone. The molecule apparently does so by regulating levels of beta-catenin in bone cells known as osteoblasts. Keeping beta-catenin at the proper levels appears crucial for the formation of new high-density bone.
This ability is greatly diminished in older animals because their blood cells no longer secrete the molecule, whose exact chemical nature remains a mystery at this point. “My guess is that there are a number of proteins involved that are made differently as we get older, and that they are responsible for the difficulty in healing bone,” Alman says.
The findings could prove good news for aging humans, but healing our bones won’t require the type of transfusions used in the experiment—nor will it borrow the synthesized “True Blood” variety that may soon enter clinical trials. Sharing human blood in this manner raises a number of red flags ranging from practicality to possible medical complications.
Scientists have found a fossil dating back at least 16 million years of a female shrimplike creature with enormous fossilized sperm in her reproductive tract. It's a unique example of a female that copulated just before she died and started to turn to stone.
The fossil is a display of "ancient sex with gargantuan sperm," says the lead scientist, Renate Matzke-Karasz of Ludwig Maximilian University of Munich in Germany, via e-mail. "We have here direct evidence of a recent mating. All the co-authors are still amazed by the findings."
The post-coital specimen is an ancient example of a mussel shrimp, technically known as an ostracod. These tiny animals have hinged shells like a mussel's and live today in watery places from flower pots to the ocean, where they subsist on detritus in the water. The fossil specimens were discovered in an Australian cave where large numbers of bats roosted millions of years ago. The bats unwittingly made a major contribution to science: Their guano, Matzke-Karasz says, supplied chemicals that helped preserve the finest details of the mussel shrimps' anatomy.
The scientists found four fossilized female mussel shrimp and one male mussel shrimp with sperm in their bodies, some of the oldest fossilized sperm found to date. When they examined the fossilized male, "we almost couldn't believe our eyes," Matzke-Karasz says. The animal was replete with "sperm (that) looked like little ropes, exactly how modern ostracod giant sperm look!"
The mussel shrimp may be small, but the modern male is mighty, producing so-called "giant sperm" that can be four times longer than the animal itself. Only a handful of other animals, including some flies and moths, make giant sperm, whose purpose is still unclear.
The new study, appearing in this week's Proceedings of the Royal Society B: Biological Sciences, shows that male mussel shrimp may have been deploying giant sperm for more than 140 million years, says micropaleontologist David Horne of Britain's Queen Mary University of London.
Chinese search giant Baidu says it has invented a powerful supercomputer that brings new muscle to an artificial-intelligence technique giving software more power to understand speech, images, and written language.
The new computer, called Minwa and located in Beijing, has 72 powerful processors and 144 graphics processors, known as GPUs. Late Monday, Baidu released a paper claiming that the computer had been used to train machine-learning software that set a new record for recognizing images, beating a previous mark set by Google.
“Our company is now leading the race in computer intelligence,” said Ren Wu, a Baidu scientist working on the project, speaking at the Embedded Vision Summit on Tuesday. Minwa’s computational power would probably put it among the 300 most powerful computers in the world if it weren’t specialized for deep learning, said Wu. “I think this is the fastest supercomputer dedicated to deep learning,” he said. “We have great power in our hands—much greater than our competitors.”
Computing power matters in the world of deep learning, which has produced breakthroughs in speech, image, and face recognition and improved the image-search and speech-recognition services offered by Google and Baidu.
The technique is a souped-up version of an approach first established decades ago, in which data is processed by a network of artificial neurons that manage information in ways loosely inspired by biological brains. Deep learning involves using larger neural networks than before, arranged in hierarchical layers, and training them with significantly larger collections of data, such as photos, text documents, or recorded speech.
So far, bigger data sets and networks appear to always be better for this technology, said Wu. That’s one way it differs from previous machine-learning techniques, which had begun to produce diminishing returns with larger data sets. “Once you scaled your data beyond a certain point, you couldn’t see any improvement,” said Wu. “With deep learning, it just keeps going up.” Baidu says that Minwa makes it practical to create an artificial neural network with hundreds of billions of connections—hundreds of times more than any network built before.
A paper released Monday is intended to provide a taste of what Minwa’s extra oomph can do. It describes how the supercomputer was used to train a neural network that set a new record on a standard benchmark for image-recognition software. The ImageNet Classification Challenge, as it is called, involves training software on a collection of 1.5 million labeled images in 1,000 different categories, and then asking that software to use what it learned to label 100,000 images it has not seen before.
Software is compared on the basis of how often its top five guesses for a given image miss the correct answer. The system trained on Baidu’s new computer was wrong only 4.58 percent of the time. The previous best was 4.82 percent, reported by Google in March. One month before that, Microsoft had reported achieving 4.94 percent, becoming the first to better the average human performance of 5.1 percent.
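The top-five scoring rule described above can be stated in a few lines. A sketch with toy labels in place of the 1,000 ImageNet categories:

```python
# Top-5 error: a prediction counts as correct if the true label
# appears anywhere among the model's five best guesses.

def top5_error(predictions, labels):
    """predictions: list of per-image guess lists, best guess first.
    labels: list of true classes. Returns the top-5 error rate."""
    misses = sum(1 for guesses, truth in zip(predictions, labels)
                 if truth not in guesses[:5])
    return misses / len(labels)

preds = [["cat", "dog", "fox", "car", "cup"],
         ["car", "bus", "van", "truck", "bike"],
         ["dog", "wolf", "fox", "cat", "bear"]]
truth = ["cat", "train", "bear"]
print(top5_error(preds, truth))  # 1 miss out of 3 -> 0.333...
```

On the real benchmark this rate is computed over 100,000 held-out test images, so the gap between 4.58 and 4.82 percent corresponds to a few hundred additional images classified correctly.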
Researchers have demonstrated a new metal matrix composite that is so light that it can float on water. A boat made of such lightweight composites will not sink despite damage to its structure. The new material also promises to improve automotive fuel economy because it combines light weight with heat resistance.
Although syntactic foams have been around for many years, this is the first development of a lightweight metal matrix syntactic foam. It is the work of a team of researchers from Deep Springs Technology (DST) and the New York University Polytechnic School of Engineering.
Their magnesium alloy matrix composite is reinforced with silicon carbide hollow particles and has a density of only 0.92 grams per cubic centimeter compared to 1.0 g/cc of water. Not only does it have a density lower than that of water, it is strong enough to withstand the rigorous conditions faced in the marine environment.
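The floating claim follows directly from Archimedes' principle, using the densities quoted above: a solid floats if it is less dense than water, and the density ratio gives the equilibrium submerged fraction.

```python
# Buoyancy check for the syntactic foam, using figures from the text.

DENSITY_COMPOSITE = 0.92   # g/cc, magnesium alloy syntactic foam
DENSITY_WATER = 1.0        # g/cc

floats = DENSITY_COMPOSITE < DENSITY_WATER
# Fraction of a solid hull submerged at equilibrium:
submerged_fraction = DENSITY_COMPOSITE / DENSITY_WATER
print(floats, submerged_fraction)  # True 0.92
```

A hull made entirely of the composite would ride with about 8 percent of its volume above the waterline even when flooded, which is why damage alone cannot sink it.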
Significant efforts in recent years have focused on developing lightweight polymer matrix composites to replace heavier metal-based components in automobiles and marine vessels. The technology for the new composite is very close to maturation and could be put into prototypes for testing within three years. Amphibious vehicles such as the Ultra Heavy-lift Amphibious Connector (UHAC) being developed by the U.S. Marine Corps can especially benefit from the light weight and high buoyancy offered by the new syntactic foams, the researchers explained.
"This new development of very light metal matrix composites can swing the pendulum back in favor of metallic materials," forecasted Nikhil Gupta, an NYU School of Engineering professor in the Department of Mechanical and Aerospace Engineering and the study's co-author. "The ability of metals to withstand higher temperatures can be a huge advantage for these composites in engine and exhaust components, quite apart from structural parts."
The syntactic foam made by DST and NYU captures the lightness of foams, but adds substantial strength. The secret of this syntactic foam starts with a matrix made of a magnesium alloy, which is then turned into foam by adding strong, lightweight silicon carbide hollow spheres developed and manufactured by DST. A single sphere's shell can withstand pressure of over 25,000 pounds per square inch (PSI) before it ruptures—one hundred times the maximum pressure in a fire hose.
Scientists have discovered a way to regrow bone tissue using the protein signals produced by stem cells. This technology could help treat victims who have experienced major trauma to a limb, like soldiers wounded in combat or casualties of a natural disaster. The new method improves on older therapies by providing a sustainable source for fresh tissue and reducing the risk of tumor formation that can arise with stem cell transplants.
The new study, published in Scientific Reports, is the first to extract the necessary bone-producing growth factors from stem cells and to show that these proteins are sufficient to create new bone. The stem cell-based approach was as effective as the current standard treatment in terms of the amount of bone created.
“This proof-of-principle work establishes a novel bone formation therapy that exploits the regenerative potential of stem cells,” says senior author Todd McDevitt, PhD, a senior investigator at the Gladstone Institutes. “With this technique, we can produce new tissue that is completely stem cell-derived and that performs similarly to the gold standard in the field.”
Digital medicine is poised to transform biomedical research, clinical practice and the commercial sector. Here we introduce a monthly column from R&D/venture creation firm PureTech tracking digital medicine's emergence.
Technology has already transformed the social fabric of life in the twenty-first century. It is now poised to profoundly influence disease management and healthcare. Beyond the hype of the 'mobile health' and 'wearable technology' movement, the ability to monitor our bodies and continuously gather data about human biology suggests new possibilities for both biomedical research and clinical practice. Just as the Human Genome Project ushered in the age of high-throughput genotyping, the ability to automate, continuously record, analyze and share standardized physiological and biological data augurs the beginning of a new era—that of high-throughput human phenotyping.
These advances are prompting new approaches to research and medicine, but they are also raising questions and posing challenges for existing healthcare delivery systems. How will these technologies alter biomedical research approaches, what types of experimental questions will researchers now be able to ask and what types of training will be needed? Will the ability to digitize individual characteristics and communicate by mobile technology empower patients and enable the modification of disease-promoting behaviors; at the same time, will it threaten patient privacy? Will doctors be prescribing US Food and Drug Administration (FDA)-cleared apps on a regular basis, not just to monitor and manage chronic disease but also to preempt acute disease episodes? Will the shift in the balance between disease treatment and early intervention have a broad economic impact on the healthcare system? How will the emergence of these new technologies reshape the healthcare industry and its underlying business models? What will be the defining characteristics of 'winning' products and companies?
These are just some of the questions we plan to ask over the coming months. In the meantime, we introduce here some of the key themes shaping R&D in the digital medicine field and focus on what they might mean for the biopharmaceutical and diagnostic/device industries.
Using sensitive observations from the Kepler space telescope, the researchers have uncovered evidence of daily weather cycles on six extra-solar planets seen to exhibit different phases. Such phase variations occur as different portions of these planets reflect light from their stars, similar to the way our own moon cycles through different phases.
Among the findings are indications of cloudy mornings on four of them and hot, clear afternoons on two others. "We determined the weather on these alien worlds by measuring changes as the planets circle their host stars, and identifying the day-night cycle," said Lisa Esteves, a PhD candidate in the Department of Astronomy & Astrophysics at the University of Toronto, and lead author of the study published today in The Astrophysical Journal.
"We traced each of them going through a cycle of phases in which different portions of the planet are illuminated by its star, from fully lit to completely dark," said Esteves.
Because the planets are very near to their stars, they are expected to rotate counter-clockwise - just as the majority of objects in our solar system do - with the right side moving in the direction of each planet's orbit. This causes an eastward movement of the planet's surface and therefore an eastward circulation of atmospheric winds. As a result, clouds that form on the planet's night side, where temperatures are cooler while it faces away from its host star, would be blown to the planet's morning side.
"As the winds continue to transport the clouds to the day side, they heat up and dissipate, leaving the afternoon sky cloud-free," said Esteves. "These winds also push the hot air eastward from the meridian, where it is the middle of the day, resulting in higher temperatures in the afternoon."
For four of the planets, the researchers saw excess brightness in the Kepler data that corresponds to when the morning side is visible. For the other two, they saw an excess when the evening side is visible. "By comparing the planets' previously determined temperatures to the phase cycle measurements provided by Kepler, we found that the excess brightness on the morning side is most likely generated by reflected starlight," said Esteves. "These four planets are not hot enough to generate this excess light through thermal emission.
"The excess light seen on the two very hot planets can be explained by thermal emission," said Esteves. "A likely explanation is that on these two planets, the winds are moving heat towards the evening side, resulting in the excess brightness."
The Kepler telescope was the ideal instrument for the study of exoplanet phase variations. The very precise measurements it provided and the vast amount of data it collected allowed astronomers to measure the tiny signals from these distant worlds. Most of the planets examined in this study are very hot and large, with temperatures greater than 1600 degrees Celsius and sizes comparable to Jupiter - conditions far from hospitable to life but excellent for phase measurements.
Coulomb interaction has a striking effect on electronic propagation in one-dimensional conductors. The interaction of an elementary excitation with neighboring conductors favors the emergence of collective modes, which eventually leads to the destruction of the Landau quasiparticle. In this process, an injected electron tends to fractionalize into separated pulses carrying a fraction of the electron charge. Here, a team of physicists now use two-particle interferences in the electronic analog of the Hong-Ou-Mandel experiment in a quantum Hall conductor at filling factor 2 to probe the fate of a single electron emitted in the outer edge channel and interacting with the inner one. By studying both channels, they analyze the propagation of the single electron and the generation of interaction-induced collective excitations in the inner channel. These complementary pieces of information reveal the fractionalization process in the time domain and establish its relevance for the destruction of the quasiparticle, which degrades into the collective modes.
There is a popular misconception about Moore’s law (that the number of transistors on a chip doubles every two years) which has led many to conclude that the 50-year-old prognostication is due to end shortly. This doubling of processing power, for the same cost, has continued apace since Gordon Moore, one of Intel's founders, observed the phenomenon in 1965. At the time, a few hundred transistors could be crammed on a sliver of silicon. Today’s chips can carry billions.
Whether Moore’s law is coming to an end is moot. As far as physical barriers to further shrinkage are concerned, there is no question that, having been made smaller and smaller over the decades, crucial features within transistors are approaching the size of atoms. Indeed, quantum and thermodynamic effects that occur at such microscopic dimensions have loomed large for several years.
Until now, integrated circuits have used a two-dimensional (planar) structure, with a metal gate mounted across a flat, conductive channel of silicon. The gate controls the current flowing from a source electrode at one end of the channel to a drain electrode at the other end. A small voltage applied to the gate lets current flow through the transistor. When there is no voltage on the gate, the transistor is switched off. These two binary states (on and off) are the ones and zeros that define the language of digital devices.
However, when transistors are shrunk beyond a certain point, electrons flowing from the source can tunnel their way through the insulator protecting the gate, instead of flowing direct to the drain. This leakage current wastes power, raises the temperature and, if excessive, can cause the device to fail. Leakage becomes a serious problem when insulating barriers within transistors approach thicknesses of 3 nanometres (nm) or so. Below that, leakage increases exponentially, rendering the device pretty near useless.
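Quantum tunnelling through a barrier falls off exponentially with barrier thickness, which is why thinning an insulator by even a nanometre matters so much. The model below is purely illustrative of that exponential sensitivity; the decay constant is an assumption, not a measured device parameter:

```python
# Toy exponential model of tunnelling leakage vs. barrier thickness.
import math

def leakage(thickness_nm, i0=1.0, decay_per_nm=3.0):
    """Relative leakage current through a barrier of given thickness.
    i0 and decay_per_nm are illustrative constants, not device data."""
    return i0 * math.exp(-decay_per_nm * thickness_nm)

ratio = leakage(2.0) / leakage(3.0)
print(f"thinning 3 nm -> 2 nm raises leakage ~{ratio:.0f}x")
```

Under this model each nanometre of thinning multiplies leakage by a fixed factor, capturing qualitatively why devices become "pretty near useless" not far below the 3 nm threshold the text describes.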
Intel, which sets the pace for the semiconductor industry, started preparing for the leakage problem several “nodes” (changes in feature size) ago. At the time, it was still making 32nm chips. The solution adopted was to turn a transistor’s flat conducting channel into a vertical fence (or fin) that stood proud of the substrate. Instead of just one small contact patch, this gave the gate straddling the fence three contact areas (a large one on either side of the fence and a smaller one across the top). With more control over the current flowing through the channel, leakage is reduced substantially. Intel reckons “Tri-Gate” processors switch 37% faster and use 50% less juice than conventional ones.
Having introduced the Tri-Gate transistor design (now known generically as FinFET) with its 22nm node, Intel is using the same three-dimensional architecture in its current 14nm chips, and expects to do likewise with its 10nm ones, due out later this year and in mainstream production by the middle of 2016. Beyond that, Intel says it has some ideas about how to make 7nm devices, but has yet to reveal details. The company’s road map shows question marks next to future 7nm and 5nm nodes, and peters out shortly thereafter.
At a recent event celebrating the 50th anniversary of Moore’s law, Intel’s 86-year-old chairman emeritus said his law would eventually collapse, but that “good engineering” might keep it afloat for another five to ten years. Mr Moore was presumably referring to further refinements in Tri-Gate architecture. No doubt he was also alluding to advanced fabrication processes, such as “extreme ultra-violet lithography” and “multiple patterning”, which seemingly achieve the impossible by being able to print transistor features smaller than the optical resolution of the printing system itself.
Quantum computers won’t ever outperform today’s classical computers unless they can correct for errors that disrupt the fragile quantum states of their qubits. A team at Google has taken the next huge step toward making quantum computing practical by demonstrating the first system capable of correcting such errors.
Google’s breakthrough originated with the hiring of a quantum computing research group from the University of California, Santa Barbara in the autumn of 2014. The UCSB researchers had previously built a system of superconducting quantum circuits that performed with enough accuracy to make error correction a possibility. That earlier achievement paved the way for the researchers—many now employed at Google—to build a system that can correct the errors that naturally arise during quantum computing operations. Their work is detailed in the 4 March 2015 issue of the journal Nature.
“This is the first time natural errors arising from the qubit environment were corrected,” said Rami Barends, a quantum electronics engineer at Google. “It’s the first device that can correct its own errors.”
Quantum computers have the potential to perform many simultaneous calculations by relying upon quantum bits, or qubits, that can represent information as both 1 and 0 at the same time. That gives quantum computing a big edge over today’s classical computers, whose bits can represent only 1 or 0.
But a huge challenge in building practical quantum computers involves preserving the fragile quantum states of qubits long enough to run calculations. The solution that Google and UCSB have demonstrated is a quantum error-correction code that uses simple classical processing to correct the errors that arise during quantum computing operations.
Such codes can’t directly detect errors in qubits without disrupting the fragile quantum states. But they get around that problem by relying on entanglement, a physics phenomenon that enables a single qubit to share its information with many other qubits through a quantum connection. The codes exploit entanglement with an architecture that includes “measurement” qubits entangled with neighboring “data” qubits.
The Google and UCSB team has been developing a specific quantum error-correction code called “surface code.” They eventually hope to build a 2-D surface code architecture based on a checkerboard arrangement of qubits, so that “white squares” would represent the data qubits that perform operations and “black squares” would represent measurement qubits that can detect errors in neighboring qubits.
For now, the researchers have been testing the surface code in a simplified “repetition code” architecture that involves a linear, 1-D array of qubits. Their unprecedented demonstration of error correction used a repetition code architecture that included nine qubits. They tested the repetition code through the equivalent of 90,000 test runs to gather the necessary statistics about its performance.
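The core idea of a repetition code — store one logical bit redundantly and decode by majority vote — can be sketched classically. This toy model ignores all quantum mechanics; the qubit count matches the article's nine, but the error probability and trial count are illustrative assumptions:

```python
import random

def run_repetition_code(n_bits=9, p_flip=0.05, trials=90_000, seed=1):
    """Toy classical repetition code: encode 0 in n copies, flip each copy
    independently with probability p_flip, then decode by majority vote.
    Returns the fraction of trials where decoding fails (a logical error)."""
    rng = random.Random(seed)
    logical_errors = 0
    for _ in range(trials):
        flips = sum(rng.random() < p_flip for _ in range(n_bits))
        if flips > n_bits // 2:  # a majority of copies corrupted -> vote fails
            logical_errors += 1
    return logical_errors / trials

raw_error = 0.05                       # error rate of one unprotected bit
encoded_error = run_repetition_code()  # logical error rate after majority vote
```

With nine copies, a logical error needs five simultaneous flips, so the encoded error rate is orders of magnitude below the raw rate — the same redundancy-plus-measurement principle the quantum version exploits via entangled measurement qubits.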
NASA researchers have identified the brightest galaxy yet found, which shines in the infrared with the equivalent light of 300 trillion suns. This "extremely luminous infrared galaxy" turned up in data gathered in 2010 by the Wide-field Infrared Survey Explorer. The WISE space telescope has revealed a number of strange and unique galaxies. This one, the astronomers theorize, may have a supermassive black hole at its center, which draws immense amounts of gas and matter into itself and releases a veritable rainbow of electromagnetic energy. That energy, however, is blocked by a thick halo of dust, which absorbs it, heats up, and re-emits the energy as infrared light — in unprecedented amounts.
What's more, this particular galaxy is so far away that the light now reaching Earth was given off some 12.5 billion years ago. That means it grew that large and that bright during the infancy of the universe itself. To the researchers, that suggests that the black hole forming the center of the galaxy is breaking the rules somehow: It may have simply started out bigger than any we've encountered, or it might have grown faster than we believed possible.
"Another way for a black hole to grow this big is for it to have gone on a sustained binge, consuming food faster than typically thought possible," said the University of Leicester's Andrew Blain, co-author of the report describing the galaxy, in a NASA news release.
Understanding the galaxy's formation will help shed light on the early history of the universe, and set a precedent for studying similar objects. The report appears in the May 22 issue of The Astrophysical Journal, and can be read on arXiv.
A group of scientists, led by a team from the University of Bristol, UK has observed a sudden increase of ice loss in a previously stable region of Antarctica. The research is published today in Science. Using measurements of the elevation of the Antarctic ice sheet made by a suite of satellites, the researchers found that the Southern Antarctic Peninsula showed no signs of change up to 2009. Around 2009, multiple glaciers along a vast coastal expanse, measuring some 750km in length, suddenly started to shed ice into the ocean at a nearly constant rate of 60 cubic km, or about 55 trillion litres of water, each year.
This makes the region the second largest contributor to sea level rise in Antarctica and the ice loss shows no sign of waning. Dr Bert Wouters, a Marie Curie Fellow at the University of Bristol, who led the study, said: "To date, the glaciers added roughly 300 cubic km of water to the ocean. That's the equivalent of the volume of nearly 350,000 Empire State Buildings combined."
The changes were observed using the CryoSat-2 satellite, a mission of the European Space Agency dedicated to remote-sensing of ice. From an altitude of about 700km, the satellite sends a radar pulse to Earth, which is reflected by the ice and subsequently received back at the satellite. From the time the pulse takes to travel, the elevation of the ice surface can be retrieved with great accuracy. By analysing roughly 5 years of the data, the researchers found that the ice surface of some of the glaciers is currently going down by as much as 4m each year.
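The underlying range measurement is simple geometry: the one-way range is half the round-trip time multiplied by the speed of light, and subtracting that range from the satellite's altitude gives the surface elevation. A minimal sketch (the 700 km altitude is taken from the article; real processing applies atmospheric, orbital and retracking corrections that are ignored here):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def surface_elevation(round_trip_s, satellite_altitude_m=700_000.0):
    """Elevation of the reflecting surface from a radar pulse's round-trip time.
    One-way range = c * t / 2; elevation = satellite altitude minus that range."""
    one_way_range_m = C * round_trip_s / 2.0
    return satellite_altitude_m - one_way_range_m

# A pulse whose round trip corresponds to the full 700 km altitude
# implies a surface at sea level (elevation ~ 0 m):
elev = surface_elevation(2 * 700_000.0 / C)
```

A surface lowering of 4 m per year changes the round trip by only tens of nanoseconds, which is why repeated, averaged measurements over years are needed.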
Scientists working in the desert badlands of northwestern Kenya have found stone tools dating back 3.3 million years, long before the advent of modern humans, and by far the oldest such artifacts yet discovered. The tools, whose makers may or may not have been some sort of human ancestor, push the known date of such tools back by 700,000 years; they also may challenge the notion that our own most direct ancestors were the first to bang two rocks together to create a new technology.
Hominins are a group of species that includes modern humans, Homo sapiens, and our closest evolutionary ancestors. Anthropologists long thought that our relatives in the genus Homo - the line leading directly to Homo sapiens - were the first to craft such stone tools. But researchers have been uncovering tantalizing clues that some other, earlier species of hominin, distant cousins, if you will, might have figured it out.
The researchers do not know who made these oldest of tools. But earlier finds suggest a possible answer: The skull of a 3.3-million-year-old hominin, Kenyanthropus platyops, was found in 1999 about a kilometer from the tool site. A K. platyops tooth and a bone from a skull were discovered a few hundred meters away, and an as-yet unidentified tooth has been found about 100 meters away.
The precise family tree of modern humans is contentious, and so far, no one knows exactly how K. platyops relates to other hominin species. Kenyanthropus predates the earliest known Homo species by half a million years. This species could have made the tools; or, the toolmaker could have been some other species from the same era, such as Australopithecus afarensis, or an as-yet undiscovered early type of Homo.
GE engineers have made a simple proof-of-concept 3D-printed mini jet engine that runs at 33,000 revolutions per minute. The backpack-sized engine was built over the course of several years as a side project to test the capabilities of additive manufacturing.
The team also designed and developed a fuel nozzle that will be additively manufactured for inclusion in the CFM LEAP jet engine for commercial single-aisle aircraft. The FAA recently approved the first 3D printed component for a version of the GE90 jet engine.
New research has revealed the opah, or moonfish, as the first fully warm-blooded fish that circulates heated blood throughout its body much like mammals and birds, giving it a competitive advantage in the cold ocean depths.
The silvery fish, roughly the size of a large automobile tire, is known from oceans around the world and dwells hundreds of feet beneath the surface in chilly, dimly lit waters. It swims by rapidly flapping its large, red pectoral fins like wings through the water.
Fish that typically inhabit such cold depths tend to be slow and sluggish, conserving energy by ambushing prey instead of chasing it. But the opah's constant flapping of its fins heats its body, speeding its metabolism, movement and reaction times, scientists report in the journal Science.
That warm-blooded advantage turns the opah into a high-performance predator that swims faster, reacts more quickly and sees more sharply, said fisheries biologist Nicholas Wegner of NOAA Fisheries' Southwest Fisheries Science Center in La Jolla, Calif., lead author of the new paper.
"Before this discovery I was under the impression this was a slow-moving fish, like most other fish in cold environments," Wegner said. "But because it can warm its body, it turns out to be a very active predator that chases down agile prey like squid and can migrate long distances."
Wegner realized the opah was unusual when a coauthor of the study, biologist Owyn Snodgrass, collected a sample of its gill tissue. Wegner recognized an unusual design: Blood vessels that carry warm blood into the fish's gills wind around those carrying cold blood back to the body core after absorbing oxygen from water.
The design is known in engineering as "counter-current heat exchange." In opah it means that warm blood leaving the body core helps heat up cold blood returning from the respiratory surface of the gills where it absorbs oxygen. Resembling a car radiator, it's a natural adaptation that conserves heat. The unique location of the heat exchange within the gills allows nearly the fish's entire body to maintain an elevated temperature, known as endothermy, even in the chilly depths.
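For a balanced counter-current exchanger, a standard effectiveness formula, eps = NTU/(1+NTU), captures how much of the body's heat the returning blood recovers. A minimal sketch of the principle (the temperatures and the NTU value are illustrative assumptions, not measurements from the opah):

```python
def returned_blood_temp(t_body=25.0, t_gill=10.0, ntu=4.0):
    """Balanced counter-current heat exchanger: effectiveness eps = NTU/(1+NTU).
    Cold blood leaving the gills at t_gill is rewarmed toward body temperature,
    arriving at t_gill + eps * (t_body - t_gill)."""
    eps = ntu / (1.0 + ntu)
    return t_gill + eps * (t_body - t_gill)
```

With these assumed numbers, blood chilled to 10 °C at the gills re-enters the core at 22 °C: four-fifths of the heat that would otherwise be lost to the water is recaptured, which is why the exchanger's placement in the gills lets nearly the whole body stay warm.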
A rival hacker group to the infamous Lizard Squad has been discovered quietly using a previously unknown global botnet of compromised broadband routers to carry out DDoS and Man-in-the-Middle (MitM) attacks.
The discovery was made by security firm Incapsula (recently acquired by Imperva), which first noticed attacks against a few dozen of its customers in December 2014. Since then, the firm estimates the botnet has grown to exceed 40,000 IPs across 1,600 ISPs, with at least 60 command and control (C2) nodes.
Almost all of the compromised routers appear to be unidentified ARM-based models from a single US vendor, Ubiquiti, which is sold across the world, including in the UK. Incapsula detected traffic from compromised devices in 109 countries, overwhelmingly in Thailand and router compromise hotspot, Brazil.
The compromise that allowed the Ubiquiti routers to be botted in the first place appears to be connected to one of two vulnerabilities. The first is simply that the devices have been left with their vendor username and password in the default state – perhaps a sign that some of these devices are older – allowing the attackers easy access.
The second and more unexpected flaw is that the routers also allow remote access over HTTP and SSH via default ports, a configuration that leaves the door wide open to attackers. Once in, the attackers appear to have injected a number of pieces of malware, chiefly the Linux Spike Trojan, aka ‘MrBlack’, used to mount DDoS attacks. The firm inspected 13,000 malware samples and found evidence of other DDoS tools, including Dorfloo and Mayday.
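A defensive self-audit along these lines — checking whether a router you own answers on the default remote-access ports named in the report — can be sketched as follows (the gateway address in the usage note is a placeholder; only probe devices that belong to you):

```python
import socket

# SSH and HTTP, the remote-access services the report found exposed by default
DEFAULT_PORTS = {22: "SSH", 80: "HTTP"}

def check_open_ports(host, ports=DEFAULT_PORTS, timeout=2.0):
    """Return the (port, service) pairs on `host` that accept TCP connections
    from the machine running this check."""
    open_ports = []
    for port, name in ports.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append((port, name))
        except OSError:
            pass  # closed, filtered, or timed out
    return open_ports
```

Usage: `check_open_ports("192.168.1.1")` against your own gateway; any hit on these ports from an untrusted network segment, combined with unchanged vendor credentials, is exactly the exposure the attackers relied on.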
The C2s for these tools were found to be in several countries, with 73 percent in China and 21 percent in the US. This doesn’t mean the attackers were based there; they were simply using infrastructure hosted in those locations.
“Given how easy it is to hijack these devices, we expect to see them being exploited by additional perpetrators. Even as we conducted our research, the Incapsula security team documented numerous new malware types being added—each compounding the threat posed by the existence of these botnet devices,” said the firm’s researchers.
DNA, the genetic material of all living things, is what makes us who we are. Written in this molecular code are the instructions for making proteins, which are the building blocks of cells and thus living organisms. In natural circumstances, this code is formed of four basic units, or “letters”: A, C, G and T (adenine, cytosine, guanine and thymine). These so-called DNA bases pair up, A with T and C with G, forming a long, readable sequence that varies from gene to gene. Textbooks will tell you that it is those four letters that are the recipe for life, or so we long believed.
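The base-pairing rule itself is simple enough to state as code — a minimal sketch of the Watson–Crick complement of one strand:

```python
# A pairs with T, C pairs with G (and vice versa)
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(seq):
    """Return the pairing partner strand of a DNA sequence, base by base."""
    return "".join(PAIRS[base] for base in seq)
```

Applying `complement` twice returns the original sequence, which is the symmetry that lets either strand serve as the template for copying the other.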
Back in the ‘80s, scientists threw another base into the mix: 5-methylcytosine (mC). As the name suggests, mC is cytosine with a functional unit called methyl attached. But it was not for another decade that scientists realized the importance of mC: The addition of methyl groups to DNA can switch genes on or off in order to meet the needs of each tissue, given that every cell contains the same DNA sequences. Such modifications are known as epigenetic changes; these allow the environment to affect gene expression, but they also play a role in various diseases. For example, alterations in mC have been shown to contribute to the development of cancer, amongst many other conditions.
But it turns out that the DNA alphabet does not even end here, as in recent years scientists have gradually expanded this list to eight. Now, scientists have published detailed descriptions of one more candidate, N6-methyladenine (6mA), in the journal Cell. As with mC, it consists of the base adenine with a methyl group tacked onto it. While scientists have known about this modified base for some time, it was thought to exist exclusively in bacteria, where it serves to protect against the unwanted addition of foreign DNA from other organisms.
Now, a group of researchers from IDIBELL-Bellvitge Biomedical Research Institute have found that this is not the case, providing evidence that 6mA is not simply a phenomenon of primitive cells. As described in the journal Cell, scientists found that some more complex cells, called eukaryotic cells, also possess the base. Eukaryotes are organisms within one of the three domains of life, the others being archaea and bacteria.
More specifically, researchers discovered 6mA in three different groups of eukaryotes: green algae, flies and worms. This was made possible through the development of highly sensitive analytical techniques, which picked up the exceedingly low levels of this base that previously eluded detection. Interestingly, newly gathered data indicates that, like mC, 6mA may also have a gene regulatory function in these animals, which could suggest that it also behaves as an epigenetic mark.
Now that scientists have found this base in various organisms, the researchers want to scrutinize our own genomes to see if it also exists in humans. This would be interesting given the fact that evidence seems to suggest that 6mA may play a role in stem cells. If it does indeed exist in our own species, then researchers may have a job on their hands trying to figure out what precise role it plays in the cell.
After wandering around an unfamiliar part of town, can you sense which direction to travel to get back to the subway or your car? If so, you can thank your entorhinal cortex, a brain area recently identified as being responsible for our sense of direction. Variation in the signals in this area might even explain why some people are better navigators than others.
Cloudy days can be a bit of a downer. But when you add them all up across nearly 13 years of measurements, the bright side becomes more apparent.
NASA Earth Observatory just published a map that uses data collected between July 2002 and April 2015 to give an unparalleled view of the world’s cloudy (and sunny) spots.
One thing that’s immediately apparent is that the world is a pretty cloudy place. It’s no surprise the U.K.—renowned for its dreary weather—appears in white, indicating frequent clouds. Ditto for the Amazon rainforest, which requires copious clouds for its prodigious rain.
On the flip side, the Sahara, Atacama, Arabian and their fellow deserts (including Antarctica) are basically cloud free. Australia and the western U.S. are also light on cloud cover.
Aside from giving a sense of the globe’s overall cloudiness, the map also reveals key features of the climate system. The band of cloudiness just around the equator generally represents the Intertropical Convergence Zone, a girdle of thunderstorms around the earth that form there thanks to warm, moist air lifting off the ocean. The ITCZ, as it’s known in climatespeak, generally drifts back and forth across the equator with the seasons.
In comparison, dry air generally subsides from 15-30 degrees north and south of the equator. Not surprisingly, that’s where most of the world’s deserts are located.
When it comes to stars, three may not be a crowd. That’s according to a new paper published online before print in Astronomy & Astrophysics, which suggests that almost a quarter of twin star systems might have another sibling nearby.
Astronomers examined the light from nearly 14,000 eclipsing binary stars, fortuitous arrangements in which a stellar pair’s orbit is edge-on from our point of view. Telescopes on Earth can’t see the individual stars, but they do observe dips in light intensity as one goes behind the other. For lone binary stars, those dips occur at regular intervals. But in the presence of a third star, this timing speeds up and slows down as the pair orbits its mate and gets nearer and farther from Earth. Researchers found that 2% of the eclipsing binaries had this signature swing back and forth. However, in another 22%, the team found partial shifts potentially caused by a slower orbit around a companion.
These unfinished oscillations were just as likely to swing faster as slower, ruling out the possibility that the change was due to the interaction of the twin stars. If correct, this finding could modify our understanding of stellar formation, especially because scientists now believe binary stars are even more common than single-star systems. It also makes the skies of some exoplanets more exotic, as in the artist’s illustration of a planet in the three-star system Gliese 667 shown above.
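The timing signature described above comes from the light-travel-time effect: as the eclipsing pair's distance to Earth varies along its outer orbit around the third star, eclipses arrive early or late by the extra distance the light must cross. A toy model for a circular, edge-on outer orbit (all parameter values are illustrative assumptions):

```python
import math

AU_LIGHT_SECONDS = 499.0  # light takes ~499 s to cross one astronomical unit

def timing_shift(t_years, outer_period_years, orbit_radius_au):
    """Seconds by which an eclipse arrives early (<0) or late (>0) as the
    eclipsing pair moves around its outer orbit, peaking at +/- radius * 499 s."""
    phase = 2.0 * math.pi * t_years / outer_period_years
    return AU_LIGHT_SECONDS * orbit_radius_au * math.sin(phase)
```

For example, a companion holding the pair on a 5 AU outer orbit would swing the observed eclipse times by up to about 2,500 seconds over one outer period — and an outer period longer than the survey's baseline produces exactly the partial, unfinished oscillations the team reported.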
Researchers have identified new molecules that kill cancer cells while protecting healthy cells and that could be used to treat a variety of different cancers. The research shines a light on what happens to cells at the moment they become cancerous. Professor Qing-Bin Lu, from the University of Waterloo's Faculty of Science, initiated a novel molecular-mechanism-based program to discover a new class of non-platinum-based-halogenated molecules that kill cancer cells, yet prevent healthy cells from being damaged.
Femtosecond time-resolved laser spectroscopy is a technique traditionally applied to study chemical reactions as they occur on a molecular level. The laser takes a series of rapid “snapshots” of molecules as they interact and change structure over time. The technique is part of a potential new field of science developed by Professor Lu called femtomedicine (FMD), which integrates the ultrafast laser with molecular biology and cell biology.
Professor Lu has applied the tool to understand the molecular mechanisms that cause cancer at the very moment when the DNA becomes damaged. He has also used it to investigate how radiation therapy and chemotherapy using chemical agents, in particular the widely used platinum chemotherapeutic Cisplatin, work in treating a variety of cancers.
“We know DNA damage is the initial step,” said Professor Lu. “With the novel femtomedicine approach we can go back to the very beginning to find out what causes DNA damage in the first place, then mutation, and then cancer.”
By understanding more about the fundamental mechanisms of the diseases, Professor Lu pre-selected molecules most likely to be effective as anti-cancer agents. In this case, he discovered a new family of non-platinum-based molecules similar in structure to Cisplatin but containing no toxic platinum.
Pre-clinical studies with various cultured human cells as well as on rodents show that these new molecules are effective against cervical, breast, ovarian, and lung cancers. Cisplatin, discovered more than 40 years ago, is an important, widely used platinum-based anti-cancer agent. Unfortunately, the inclusion of platinum in the molecule causes serious side effects like neurotoxicity, kidney damage, hearing loss, nausea and vomiting.
“It is extremely rare to discover anti-cancer agents that can selectively kill cancer cells and protect healthy cells, as well as being effective in treating many different types of cancer and having a novel molecular mechanism of action. These candidate drugs should have a high potential to pass through clinical trials and could ultimately save lives”, said Professor Lu.
Professor Lu has already applied for patents on the new family of non-platinum-based-halogenated molecules that he has discovered and hopes to start clinical trials soon.