It probably started with Linux; then came Wikipedia and OpenStreetMap. Crowd-sourced information systems are central to a thriving Digital Society. So, what's next? In this video, Dirk Helbing introduces a number of concepts, such as the Planetary Nervous System, the Global Participatory Platform, Interactive Virtual Worlds, User-Controlled Information Filters and Reputation Systems, and the Digital Data Purse. He also discusses ideas such as the Social Mirror, the Intercultural Adapter, the Social Protector, and Social Money as tools for creating a better world. These can help us avoid systemic instabilities, market failures, tragedies of the commons, and exploitation, and create the framework for a Participatory Market Society in which everyone can be better off.
Persistent Surveillance Systems can watch 25 sq. miles—for hours.
On June 28, 2012, in Dayton, Ohio, police received reports of an attempted robbery. A man armed with a box cutter had just tried to rob the Annex Naughty N’ Nice adult bookstore. Next, a similar report came from a Subway sandwich shop just a few miles northeast of the bookstore.
Coincidentally, a local company named Persistent Surveillance Systems (PSS) was flying a small Cessna aircraft 10,000 feet overhead at the time. The surveillance flight was loaded up with specialized cameras that could watch 25 square miles of territory, and it provided something no ordinary helicopter or police plane could: a TiVo-style time machine that could watch and record movements of every person and vehicle below.
After learning about the attempted robberies, PSS conducted frame-by-frame video analysis of the bookstore and sandwich shop and was able to show that exactly one car traveled between them. Further analysis showed that the suspect then moved on to a Family Dollar store in the northern part of the city, robbed it, stopped for gas—where his face was captured on video—and eventually returned home.
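The core of that frame-by-frame analysis can be thought of as a set-intersection problem: which vehicle tracks appear near every crime scene? A minimal sketch of the idea, with invented track IDs (this is not PSS's actual software):

```python
# Toy illustration: each track ID is a vehicle followed across frames;
# a suspect vehicle is one whose track appears near every crime scene.

def common_tracks(sightings_by_scene):
    """Intersect the sets of track IDs seen near each scene."""
    sets = [set(ids) for ids in sightings_by_scene.values()]
    return set.intersection(*sets)

# Hypothetical sightings: track IDs observed near each location
sightings = {
    "bookstore":     [101, 205, 333],
    "subway_shop":   [205, 418],
    "family_dollar": [205, 512, 333],
}

suspects = common_tracks(sightings)
print(suspects)  # {205} -- exactly one vehicle visited all three scenes
```

In the Dayton case, this kind of intersection reduced every vehicle in a 25-square-mile frame to a single candidate worth following forward and backward in time.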
A man named Joseph Bucholtz was arrested the following month and pled guilty to three counts of aggravated robbery with a deadly weapon and one count of robbery. In November 2012, he was sentenced to five years in prison and ordered to pay $665 to the bookstore.
Though an all-seeing, always-recording eye in the sky might sound dystopian, current PSS surveillance tech has real limitations. For now, the cameras can shoot for only a few hours at a time, only during the day, and sometimes only in black-and-white. When watching from 10,000 feet, PSS says, individuals are reduced to a single pixel—useful for tracking movements but not for identifying someone.
“You can’t tell if they’re red, white, green, or purple,” Ross McNutt, the company’s CEO, told Ars. And even if the half-meter resolution on his cameras got significantly better, McNutt said that he would prefer to fly higher and capture a larger area.
McNutt wants to be sensitive to people’s concerns, and to that end PSS meets with the ACLU and other privacy activists. But he also wants to catch criminals. McNutt, who helped develop the technology back in 2004, when it was a military research project at the nearby Air Force Institute of Technology (AFIT), claims that his system has already proved its value.
New light-sensitive protein enables simpler, more powerful optogenetics.
MIT engineers have developed the first light-sensitive protein molecule that enables neurons to be silenced noninvasively. Using a light source outside the skull makes it possible to do long-term studies without an implanted light source.
The protein, known as Jaws, also allows a larger volume of tissue to be influenced at once. The researchers described the protein in Nature Neuroscience.
Optogenetics, a technology that allows scientists to control brain activity by shining light on neurons, relies on opsins: light-sensitive proteins that act as channels or pumps, influencing electrical activity by controlling the flow of ions into or out of cells.
To suppress or stimulate electrical signals within cells, researchers implant a light source, such as an optical fiber, in the brain, where it can reach the cells to be controlled. The neurons to be studied must also be genetically engineered to produce the opsins.
Also, inserting optical fibers into the brain “displaces brain tissue and can lead to side effects such as brain lesion, neural morphology changes, glial inflammation and motility, or aseptic compromise,” the researchers say in the paper.
In addition, such implants can be difficult to insert and can be incompatible with many kinds of experiments, such as studies of development, during which the brain changes size, or of neurodegenerative disorders, during which the implant can interact with brain physiology. And it is difficult to perform long-term studies of chronic diseases with these implants.
Researchers from UCSD have, for the first time, directly created and destroyed neural connections linking high-level sensory input to high-level behavioral responses.
In 1949, Donald Hebb was one of the first to seize upon this observation. He proposed that, at the biological level, neurons are rewired so that coordinated inputs and outputs become wired together. Were there a nausea neuron and a boat neuron, then, through the effects of association, the two would become wired together, so that the “boat” neuron itself fires up pathways in the “nausea” part of the brain.
In the field of neural networks, this has a name: Hebbian learning. Pavlov, of course, also described this phenomenon and tested it in animals, giving it its familiar name, the “conditioned response”.
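Hebb's rule has a famously simple mathematical form: the strength of a connection grows in proportion to the product of the two neurons' activities (delta_w = eta * x * y). A toy sketch of the boat/nausea example, with invented activity values:

```python
# Minimal Hebbian-learning sketch (illustrative numbers): the weight
# between a "boat" input neuron and a "nausea" output neuron grows
# whenever the two are active together.
learning_rate = 0.1
w = 0.0  # initial connection strength

# (boat_active, nausea_active) pairs over repeated experiences
episodes = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]

for x, y in episodes:
    w += learning_rate * x * y  # Hebb's rule: delta_w = eta * x * y

print(w)  # ~0.3: strengthened only by the three coincident activations
```

Episodes in which only one of the two neurons fires contribute nothing, which is exactly the "fire together, wire together" intuition.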
Until now the wiring of neural inputs and outputs was a theory with good but indirect evidence. At UCSD, neuroscientists teamed up with molecular biologists to engineer a mouse whose neurons can be directly controlled for forming and losing connections.
They did this by injecting an engineered virus into the auditory nerve cells. The viruses, largely harmless, carry a light-responsive molecular switch (actually a membrane-protein “channel”) that gets inserted into cells of the auditory region. Using laser light of certain frequencies, it is possible to either “potentiate” or “depress” the auditory nerve cells.
The upshot is that the researchers could directly make the auditory nerve cells increase or decrease their signal strength to other nerve cells, without needing a real, external noise. In effect, they short-circuited the noise input. In experiments, they used a mild electrical pulse to shock mice while simultaneously stimulating the auditory input with the laser-activated switch.
Basically they flashed the laser light at the ear of the mouse. Over time, the mouse began to associate the laser pulse induced nerve signal with the electrical shock. The mice were conditioned to exhibit fear even when there was no shock.
The crux of the experiment is what happened when the scientists flashed the laser in a way to weaken the auditory nerve. Now the mouse stopped responding in fear to the laser auditory stimulus.
The experiments showed for the first time that associative learning was indeed the wiring together of sensory and response neurons. The study was published in Nature.
A spider-like creature's remains were so well preserved in fossil form that scientists could see all its leg joints, allowing them to recreate its likely gait using computer graphics.
Known as a trigonotarbid, the animal was one of the first predators on land. Its prey were probably early flightless insects and other invertebrates, which it would run down and jump on.
"We know quite a bit about how it lived," said Russell Garwood, a palaeontologist with the University of Manchester, UK. "We can see from its mouth parts that it pre-orally digested its prey - something that most arachnids do - because it has a special filtering plate in its mouth. So, that makes us fairly sure it vomited digestive enzymes on to its prey and then sucked up liquid food," he explained.
The trigonotarbid specimens studied by Dr Garwood and colleagues are just a few millimetres in length. They were unearthed in Scotland, near the Aberdeenshire town of Rhynie. Its translucent Early Devonian chert sediments are renowned for their exquisite fossils.
The team used a collection held at the Natural History Museum in London that had been prepared in the 1920s. The rock had been cut into extremely fine slices, just a few tens of microns thick, making it possible to construct 3D models of the arachnids, much as a doctor might do with the X-ray slices obtained in a CAT scan.
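The slice-stacking approach works much like assembling CT data: each thin slice yields a 2D image of where fossil material appears, and registering and stacking those images gives a 3D voxel volume. A minimal illustration with invented masks (not the actual Rhynie chert scans):

```python
import numpy as np

# Each 2D array marks fossil material (1) vs. rock matrix (0) in one
# thin slice; stacking the slices rebuilds a 3D volume of the animal.
slice_a = np.array([[0, 1, 0],
                    [1, 1, 1]])
slice_b = np.array([[0, 1, 1],
                    [0, 1, 1]])

volume = np.stack([slice_a, slice_b], axis=0)  # shape: (slices, rows, cols)
print(volume.shape)        # (2, 2, 3)
print(int(volume.sum()))   # 8 voxels of fossil material
```

With real data, the slices are photographed under a microscope and segmented before stacking, but the geometric principle is the same.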
"We could see the articulation points in the legs," explained Dr Garwood. "Between each part of the leg, there are darker pieces where they join, and that allowed us to work out the range of movement.
"We then compared that with the gaits of modern spiders, which are probably a good analogy because they have similar leg proportions. The software enabled us to see the centre of mass and find a gait that worked. If it's too far back compared to the legs, the posterior drags on the ground. The trigonotarbid is an alternating tetrapod, meaning there are four feet on the ground at any one time."
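The alternating-tetrapod gait described above can be sketched as two support sets of four legs that swap on every step. The particular leg groupings below are an illustrative assumption, not the gait model published by the team:

```python
# Alternating tetrapod gait sketch for an 8-legged arachnid:
# legs L1-L4 on the left, R1-R4 on the right. In each phase, four legs
# support the body while the other four swing forward.
phase_a = {"L1", "L3", "R2", "R4"}
phase_b = {"R1", "R3", "L2", "L4"}

def stance_legs(step):
    """Legs on the ground during a given step (phases alternate)."""
    return phase_a if step % 2 == 0 else phase_b

for step in range(4):
    assert len(stance_legs(step)) == 4  # four feet down at any one time
print(sorted(stance_legs(0)))
```

Keeping four feet down at all times keeps the centre of mass inside the support polygon, which is why the software could reject gaits that left the posterior dragging.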
"This new study has gone further and shows us how they probably walked. For me, what's really exciting here is that scientists themselves can make these animations now, without needing the technical wizardry (and immense costs) of a Jurassic-Park style film. When I started working on fossil arachnids, we were happy if we could manage a sketch of what they used to look like. Now, they run across our computer screens."
The work is part of a special collection of papers on 3D visualisations of fossils published in the Journal of Paleontology.
A look at three leading approaches using inlays to expand presbyopic patients’ range of vision.
Periodically, the search for a “cure” for presbyopia produces a new set of treatment options. The latest approach is the corneal inlay, intended to improve near vision without compromising distance vision in emmetropic presbyopes—and possibly non-emmetropes as well.
Three variations on the concept of placing an implant inside the cornea are in different stages of the approval process. The Kamra inlay (from AcuFocus in Irvine, Calif.) uses the pinhole principle to increase depth of field; the Raindrop (from ReVision Optics in Laguna Hills, Calif.) makes the cornea multifocal by reshaping it; and the Flexivue Microlens (from Presbia in Amsterdam) creates multifocal vision using an in-cornea lens.
Closer to a Presbyopia Cure? “All of these inlays seem to work,” notes Dr. Hovanesian. “You can make theoretical arguments as to why one might be better than the others, but they all seem to achieve a high level of near vision in the range of J1, while only minimally compromising distance vision to 20/20 or 20/25.”
“Overall, the data from the FDA trial of the Kamra, like the data from outside the United States regarding the Flexivue, indicates that these inlays are very safe,” adds Dr. Maloney.
Of course, they have a few disadvantages. Dr. Maloney notes that all of them reduce distance vision to some degree. “That’s the trade-off for improved reading vision,” he says. “And all of them cause night glare to some degree; that’s the trade-off for changing the way the eye focuses light. So if patients aren’t happy, it’s because their night vision isn’t good enough, their distance vision isn’t good enough, or their reading vision isn’t good enough—the inlay isn’t strong enough to give them the reading vision they need. Those limitations are probably common to all inlays. But the inlays can be explanted, and vision returns to being very close to what it was before surgery. In addition, we haven’t seen significant adverse effects with the current generation of these inlays.”
“Using an inlay requires a compromise in distance vision,” agrees Dr. Hovanesian. “That’s the nature of adding something to an emmetropic visual system. However, you’re usually doing it in the nondominant eye in a patient who is a good adapter. For most of these patients, what they sacrifice is well worth it for what they gain.
“The Raindrop inlay, and inlays in general, are going to serve a very important purpose,” he concludes. “As they become approved, we’re going to find that patients really want this kind of technology. It’s appealing because it serves emmetropic presbyopes—patients who are not well served by any other modality we have. Many of these patients are not willing to try monovision, and they’re generally too young for lens implant surgery. They want a quick and easy solution, and they like the idea of something that’s reversible if it doesn’t work out.”
“I think there will definitely be a place for these inlays in our clinical practices,” agrees Dr. Maloney. “It looks like the Kamra inlay is the one closest to FDA approval, but as a surgeon I’d be very happy to add any one of them to my practice.”
Researchers at University College London (UCL) used a supercomputer to compute 10 billion “transition lines” of the spectral signature of methane, 200 times more comprehensive than the previous best efforts. Because methane is a biosignature, the development is a step toward the detection of life on planets outside our solar system.
Every molecule absorbs and emits light in a characteristic pattern called the absorption and emission spectrum. In order to determine the atmospheric composition of the exoplanets, astronomers break down the full atmospheric spectrum into known patterns to identify the component molecules.
Detecting methane is important in astrobiology because it is an unstable molecule that lasts only 300-600 years in an atmosphere before being broken down by solar ultraviolet radiation. Because it is unstable, one explanation for its presence in an exoplanet’s atmosphere is continual production by carbon-based life. The caveat is that geological processes also replenish methane, so detection is suggestive of, but does not guarantee, life.
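The quoted 300-600-year lifetime implies simple exponential decay: with atmospheric lifetime tau, the fraction of methane remaining after t years is exp(-t / tau). A quick illustration of why detected methane must be recent:

```python
import math

# Exponential decay of atmospheric methane with lifetime tau (years).
def fraction_remaining(t_years, tau_years):
    return math.exp(-t_years / tau_years)

# Even with the longer 600-year lifetime, almost nothing survives
# a few millennia without a replenishing source.
print(round(fraction_remaining(3000, 600), 4))  # ~0.0067 after 3,000 years
```

On geological timescales, then, any detectable methane points to an active source, biological or geological.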
The previous methane spectra are incomplete: they contain far fewer transition lines than the new effort, so they do not properly reflect methane in high-temperature atmospheres (i.e., hotter than Earth’s). At high temperatures there are more transitions because the methane molecule is excited to higher energy states. As a result, the methane levels of hot exoplanets and cool stars have been detected only partially or incorrectly.
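The fingerprint-matching idea (breaking an observed spectrum into known molecular patterns) can be sketched as scoring the overlap between observed absorption lines and each candidate line list. The wavelengths below are invented for illustration; real line lists, like the new methane one, contain billions of transitions:

```python
# Toy spectral fingerprinting: match observed absorption lines against
# known line lists. Wavelengths (microns) are illustrative, not real data.
line_lists = {
    "methane": [1.65, 2.20, 3.30, 7.70],
    "water":   [1.40, 1.90, 2.70, 6.30],
}

def best_match(observed, line_lists, tol=0.05):
    """Return the molecule whose line list overlaps the observed lines most."""
    def score(lines):
        return sum(any(abs(o - l) < tol for l in lines) for o in observed)
    return max(line_lists, key=lambda m: score(line_lists[m]))

observed = [1.66, 2.21, 3.29]
print(best_match(observed, line_lists))  # methane
```

An incomplete line list fails exactly as the article describes: hot-atmosphere lines missing from the template simply never score, so the molecule is under-detected.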
An international team of researchers, including University of Hawaii at Manoa astronomer Brent Tully, has mapped the motions of structures of the nearby universe in greater detail than ever before. The maps are presented as a video, which provides a dynamic three-dimensional representation of the universe through the use of rotation, panning, and zooming. The video was announced last week at the conference "Cosmic Flows: Observations and Simulations" in Marseille, France, that honored the career and 70th birthday of Tully.
The Cosmic Flows project has mapped visible and dark matter densities around our Milky Way galaxy up to a distance of 300 million light-years.
The team includes Helene Courtois, associate professor at the University of Lyon, France, and associate researcher at the Institute for Astronomy (IfA), University of Hawaii (UH) at Manoa, USA; Daniel Pomarede, Institute of Research on Fundamental Laws of the Universe, CEA/Saclay, France; Brent Tully, IfA, UH Manoa; and Yehuda Hoffman, Racah Institute of Physics, University of Jerusalem, Israel.
The large-scale structure of the universe is a complex web of clusters, filaments, and voids. Large voids—relatively empty spaces—are bounded by filaments that form superclusters of galaxies, the largest structures in the universe. Our Milky Way galaxy lies in a supercluster of 100,000 galaxies.
Just as the movement of tectonic plates reveals the properties of Earth's interior, the movements of the galaxies reveal information about the main constituents of the Universe: dark energy and dark matter. Dark matter is unseen matter whose presence can be deduced only by its effect on the motions of galaxies and stars because it does not give off or reflect light. Dark energy is the mysterious force that is causing the expansion of the universe to accelerate.
Whitehead Institute scientists have genetically and enzymatically modified red blood cells to carry a range of valuable payloads—from drugs, to vaccines, to imaging agents—for delivery to specific sites throughout the body.
“We wanted to create high-value red cells that do more than simply carry oxygen,” says Whitehead Founding Member Harvey Lodish, who collaborated with Whitehead Member Hidde Ploegh in this pursuit. “Here we’ve laid out the technology to make mouse and human red blood cells in culture that can express what we want and potentially be used for therapeutic or diagnostic purposes.”
The work, published this week in the Proceedings of the National Academy of Sciences (PNAS), combines Lodish’s expertise in the biology of red blood cells (RBCs) with biochemical methods developed in Ploegh’s lab.
RBCs are an attractive vehicle for potential therapeutic applications for a variety of reasons, including their abundance—they are more numerous than any other cell type in the body—and their long lifespan (up to 120 days in circulation). Perhaps most importantly, during RBC production, the progenitor cells that eventually mature to become RBCs jettison their nuclei and all DNA therein. Without a nucleus, a mature RBC lacks any genetic material or any signs of earlier genetic manipulation that could result in tumor formation or other adverse effects.
Exploiting this characteristic, Lodish and his lab introduced genes coding for slightly modified versions of normal red-cell surface proteins into early-stage RBC progenitors. As the RBCs approach maturity and enucleate, the proteins remain on the cell surface, where they are modified by Ploegh’s protein-labeling technique. Referred to as “sortagging,” the approach relies on the bacterial enzyme sortase A to establish a strong chemical bond between the surface protein and a substance of choice, be it a small-molecule therapeutic or an antibody capable of binding a toxin. The modifications leave the cells and their surfaces unharmed.
“Because the modified human red blood cells can circulate in the body for up to four months, one could envision a scenario in which the cells are used to introduce antibodies that neutralize a toxin,” says Ploegh. “The result would be long-lasting reserves of antitoxin antibodies.”
"Ten years from today the probability that you are depending on wires hanging on tree branches is as likely as that you'll still be installing land lines for telephones. Close to zero."
Inventor Dean Kamen is planning a 2.5 kW home version of his Deka Research Beacon 10 Stirling engine that could provide efficient around-the-clock power or hot water to a home or business, reports Forbes. Kamen says the current Beacon is intended for businesses like laundries or restaurants that use a lot of hot water. “With commercialization partner NRG Energy, he’s deployed roughly 20 of the machines and expects to put them into production within 18 months,” says Forbes.
But Kamen has bigger plans: feeding excess power to the grid by networking devices across a region together. Depending on the price of natural gas, “ten years from today the probability that you are depending on wires hanging on tree branches is as likely as that you’ll still be installing land lines for telephones,” he says. “Close to zero.”
IBM has announced that it expects to have commercialised its carbon nanotube transistor technology in the early 2020s, thanks to a new design that would allow the transistors to be built on silicon wafers using similar techniques to existing chip manufacturing plants.
The semiconductor industry has been working hard for the last few decades on following Moore's Law, the observation by Intel co-founder Gordon Moore that the number of transistors on a chip tends to double roughly every eighteen months. In recent years, following that trend has become increasingly complex: the ever-shrinking size of the components and the distance between them makes manufacturing difficult, while interference between components must be corrected and designed out.
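The doubling the article describes is easy to state as a formula: starting from N0 transistors and taking the 18-month doubling time quoted above, the count after t years is N0 * 2**(t / 1.5). A quick sanity check:

```python
# Moore's-law arithmetic with the 18-month (1.5-year) doubling time
# cited in the article.
def transistors(n0, years, doubling_years=1.5):
    return n0 * 2 ** (years / doubling_years)

# Starting from 1 billion transistors, nine years means six doublings:
print(round(transistors(1e9, 9) / 1e9))  # 64
```

The exponential is what makes the manufacturing challenge so relentless: each node must roughly double density on the same schedule, regardless of how hard the previous shrink was.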
One possible solution is a move away from traditional semiconductor designs, and numerous companies are working on exactly that. Back in 2012, IBM announced the creation of a 9nm carbon nanotube transistor, dropping below the 10nm barrier for the first time. In September last year, the company further announced that it had used the transistors to build a fully working computer for the first time, but it remained silent as to when the technology would be likely to leave the lab and reach shop shelves.
Speaking to MIT's Technology Review, IBM researchers have finally given themselves a deadline: to have commercialised carbon nanotube transistor semiconductors by the early 2020s. The secret is a shift in design, featuring six nanotubes measuring 1.4nm in width lined up in parallel, to build the transistors. This design, the company has claimed, could potentially be manufactured using current semiconductor fabrication plants with little modification - the route-to-market the technology desperately needed.
Soil moisture, the water contained within soil particles, is an important player in Earth's water cycle. It is essential for plant life and influences weather and climate. Satellite readings of soil moisture will help scientists better understand the climate system and have potential for a wide range of applications, from advancing climate models, weather forecasts, drought monitoring and flood prediction to informing water management decisions and aiding in predictions of agricultural productivity.
Launched June 10, 2011, aboard the Argentinian spacecraft Aquarius/Satélite de Aplicaciones Científicas (SAC)-D, Aquarius was built to study the salt content of ocean surface waters. The new soil wetness measurements were not in the mission's primary science objectives, but a NASA-funded team led by U.S. Department of Agriculture (USDA) researchers has developed a method to retrieve soil moisture data from the instrument's microwave radiometer.
The Aquarius measurements are considerably coarser in spatial resolution than the measurements from the upcoming NASA Soil Moisture Active Passive (SMAP) mission, which was specifically designed to provide the highest-quality soil moisture measurements available, including a spatial resolution 10 times that offered by Aquarius.
Soils naturally radiate microwaves, and the Aquarius sensor can detect the microwave signal from the top 2 inches (5 centimeters) of the land, a signal that varies subtly with changes in the wetness of the soil. Aquarius takes eight days to complete each worldwide survey of soil moisture, albeit with gaps in mountainous or vegetated terrain, where the microwave signal becomes difficult to interpret.
The smallest, most abundant marine microbe, Prochlorococcus, is a species of photosynthetic bacteria essential to the marine ecosystem. An estimated billion billion billion of the single-cell creatures live in the oceans, forming the base of the marine food chain and occupying a range of ecological niches based on temperature, light and chemical preferences, and interactions with other species. But the full extent and characteristics of diversity within this single species remain a puzzle.
To probe this question, scientists in MIT’s Department of Civil and Environmental Engineering (CEE) recently performed a cell-by-cell genomic analysis on a wild population of Prochlorococcus living in a milliliter — less than a quarter teaspoon — of ocean water, and found hundreds of distinct genetic subpopulations.
Each subpopulation in those few drops of water is characterized by a set of core gene alleles linked to a few flexible genes — a combination the MIT scientists call the “genomic backbone” — that endows the subpopulation with a finely tuned suitability for a particular ecological niche. Diversity also exists within the backbone subpopulations; most individual cells in the samples they studied carried at least one set of flexible genes not found in any other cell in its subpopulation.
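The "genomic backbone" idea can be sketched as grouping single cells by their shared core-gene alleles, with flexible genes varying from cell to cell within each group. All gene and allele names below are invented for illustration, not taken from the study:

```python
from collections import defaultdict

# Group single cells into subpopulations by their core-allele tuple
# (the "backbone"); flexible genes differ even within a subpopulation.
cells = [
    {"core": ("psbA-1", "rbcL-2"), "flexible": {"nirA"}},
    {"core": ("psbA-1", "rbcL-2"), "flexible": {"pstS"}},
    {"core": ("psbA-3", "rbcL-1"), "flexible": {"phoB"}},
]

subpopulations = defaultdict(list)
for cell in cells:
    subpopulations[cell["core"]].append(cell)

print(len(subpopulations))  # 2 backbone subpopulations
```

Scaled to a real sample, the same grouping applied to hundreds of sequenced cells is what revealed the hundreds of distinct subpopulations in a single milliliter of seawater.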
Last year at the Stanford-Berkeley Robotics Symposium, we saw some tantalizing slides from Oussama Khatib about a humanoid robot that used trekking poles to balance itself. We were promised more details later, and the Stanford researchers delivered at the IEEE International Conference on Robotics and Automation (ICRA) this year, where they presented the concept of SupraPed robots.
The idea is equipping robots with a pair of special trekking poles packed with sensors that, according to the researchers, "transforms biped humanoids into tripeds or quadrupeds or more generally, SupraPeds." By using these smart poles to steady themselves, the robots would be able to navigate through "cluttered and unstructured environments such as disaster sites."
Humans have had a lot of practice walking around on two legs. Robots have not, which isn't their fault, but at the moment, even the best robots are working up to the level of a toddler. Some of them aren't bad at flat terrain, but as we saw in the DARPA Robotics Challenge Trials, varied terrain is very, very difficult. It doesn't just require the physical ability to move and balance, but also the awareness to know what path to take and where feet should be placed.
As good at this as humans are, even we get into situations where our balance and movements with our legs and feet simply aren't enough. And when this happens, we scramble. If we're fancy, we might use a walking stick or hiking poles for balance assistance, and if we're not fancy, sometimes an outstretched arm is enough.
Similar to the research we looked at yesterday, this is an entirely different philosophy about obstacles: instead of things to be avoided, they're things that can potentially be used to complete tasks that would otherwise be unsafe or impossible.
However, this is all simulation so far, and the programming behind it is fairly complex. The robot (when a real robot is thrown into the mix) will have sophisticated 3D vision, tactile sensing, and a special set of actuated ski poles. The SupraPed platform includes a pair of smart walking staffs, a whole-body multi-contact control and planning software system, and real-time reactive controllers that integrate both tactile and visual information. Moreover, to bypass the difficulty of programming fully autonomous robot controllers, the SupraPed platform includes a haptic teleoperation system that allows the operator to remotely issue high-level commands.
The laws of physics potentially allow one binary star system to contain a surprisingly large number of Earth-like planets, assuming there is enough matter.
Why settle for one habitable planet when you can have 60? An astrophysicist has designed the ultimate star system by cramming in as many Earth-like worlds as possible without breaking the laws of physics. Such a monster cosmic neighbourhood is unlikely to exist in reality, but it could inspire future exoplanet studies. Sean Raymond of Bordeaux Observatory in France started his game of fantasy star-system design with a couple of ground rules. First, the arrangement of planets must be scientifically plausible. Second, they must be gravitationally stable over billions of years: there is no point in putting planets into orbit only to watch them spiral into the sun.
"The arguments were based on the recent scientific literature as well as some simple calculations I did," says Raymond. In some cases it was impossible to choose between two scenarios because of a lack of data, so he just picked the one he liked best.
Gas giants such as Jupiter are not habitable to life as we know it, but they can be orbited by Earth-like moons. In our solar system, Europa and Enceladus, which orbit Jupiter and Saturn, respectively, are prime candidates for extraterrestrial life. Raymond calculates that a red dwarf could hold four Jupiter-like planets, each with five Earth-like moons. What's more, the Trojan trick (parking co-orbiting planets at the gravitationally stable Lagrange points 60 degrees ahead of and behind a giant planet in its orbit) can allow another two Earth-like planets on either side of the orbiting Jupiters, upping the total number of habitable worlds around the red dwarf to 36.
Finally, Raymond turned his star system into a binary one, with two red dwarfs separated by roughly the distance from our sun to the edge of the solar system. Theory allows one star to carry the Earth-only configuration, and the other to carry the Earth-plus-Jupiters configuration. This creates the ultimate star system, with 60 habitable planets to choose from.
IBM announced today it is investing $3 billion for R&D in two research programs to push the limits of chip technology and extend Moore’s law.
The research programs are aimed at “7 nanometer and beyond” silicon technology and developing alternative technologies for post-silicon-era chips using entirely different approaches, IBM says.
IBM will be investing especially in carbon nanoelectronics, silicon photonics, new memory technologies, and architectures that support quantum and cognitive computing.
7 nanometer technology and beyond
IBM researchers and other semiconductor experts predict that semiconductors can scale from today’s 22 nanometers down to 14 and then 10 nanometers over the next several years.
However, scaling down to 7 nanometers by the end of the decade will require significant investment and innovation in semiconductor architectures as well as invention of new tools and techniques for manufacturing, IBM says.
Below 7 nanometers, the challenges increase dramatically, requiring new kinds of materials to power the systems of the future, such as carbon nanotubes and graphene, and new computational approaches, such as quantum computing and neurosynaptic computing.
Carbon Nanotubes. IBM Researchers are exploring whether carbon nanotube (CNT) electronics can replace silicon beyond the 7 nm node. IBM recently demonstrated two-way CMOS NAND gates using 50 nm gate-length carbon nanotube transistors, a first.
IBM also has demonstrated the capability for purifying carbon nanotubes to 99.99%, the highest (verified) purities demonstrated to date, and transistors at 10 nm channel length that show no degradation due to scaling. Modeling of the electronic circuits suggests that about a five to ten times improvement in performance compared to silicon circuits is possible with CNTs.
Graphene. Graphene — pure carbon in the form of a one-atomic-layer-thick sheet — is an excellent conductor of heat and electricity, and it is also remarkably strong and flexible. Electrons can move in graphene about ten times faster than in commonly used semiconductor materials such as silicon and silicon germanium. That means faster switching transistors. In 2013, IBM demonstrated the world’s first graphene-based integrated-circuit receiver front-end for wireless communications.
Earth's magnetic field, which protects the planet from huge blasts of deadly solar radiation, has been weakening over the past six months, according to data collected by a European Space Agency (ESA) satellite array called Swarm.
The biggest weak spots in the magnetic field — which extends 370,000 miles (600,000 kilometers) above the planet's surface — have sprung up over the Western Hemisphere, while the field has strengthened over areas such as the southern Indian Ocean, according to the magnetometers onboard the Swarm satellites — three separate satellites flying in formation.
The scientists who conducted the study are still unsure why the magnetic field is weakening, but one likely reason is that Earth's magnetic poles are getting ready to flip, said Rune Floberghagen, the ESA's Swarm mission manager. In fact, the data suggest magnetic north is moving toward Siberia.
Over the past 20 million years, our planet has settled into a pattern of pole reversals roughly every 200,000 to 300,000 years; as of 2012, however, it had been more than twice that long since the last reversal. These reversals aren't split-second flips; instead, they occur over hundreds or thousands of years. During this lengthy transition, the magnetic poles start to wander away from the region around the spin poles (the axis around which our planet spins), and eventually end up switched around, according to Cornell University astronomers.
Since the 1960s, theatergoers have shelled out for crude 3-D glasses, polarized glasses, and shutter glasses to enhance their viewing experience. These basic devices, used to trick the brain into perceiving an artificial three-dimensional reality, may soon be rendered obsolete with the introduction of new holography technology developed by Tel Aviv University researchers.
Tel Aviv University doctoral students Yuval Yifat, Michal Eitan, and Zeev Iluz have developed highly efficient holography based on nanoantennas that could be used for security as well as medical and recreational purposes. Prof. Yael Hanein, of TAU's School of Electrical Engineering and head of TAU's Center for Nanoscience and Nanotechnology, and Prof. Jacob Scheuer and Prof. Amir Boag of the School of Electrical Engineering, led the development team. Their research, published in the American Chemical Society's publication Nano Letters, uses the parameters of light itself to create dynamic and complex holographic images.
In order to effect a three-dimensional projection using existing technology, two-dimensional images must be "replotted"—rotated and expanded to achieve three-dimension-like vision. But the team's nanoantenna technology permits newly designed holograms to replicate the appearance of depth without being replotted. The applications for the technology are vast and diverse, according to the researchers, who have already been approached by commercial entities interested in the technology.
"We had this interesting idea—to play with the parameters of light, the phase of light," said Yifat. "If we could dynamically change the relation between light waves, we could create something that projected dynamically—like holographic television, for example. The applications for this are endless. If you take light and shine it on a specially engineered nanostructure, you can project it in any direction you want and in any form that you want. This leads to interesting results."
The researchers worked in the lab for over a year to develop and patent a small metallic nanoantenna chip that, together with an adapted holography algorithm, could determine the "phase map" of a light beam. "Phase corresponds with the distance light waves have to travel from the object you are looking at to your eye," said Prof. Hanein. "In real objects, our brains know how to interpret phase information so you get a feeling of depth, but when you look at a photograph, you often lose this information so the photographs look flat. Holograms save the phase information, which is the basis of 3-D imagery. This is truly one of the holy grails of visual technology."
According to the researchers, their methodology is the first of its kind to successfully produce high-resolution holographic imagery that can be projected efficiently in any direction.
"We can use this technology to reflect any desired object," said Prof. Scheuer. "Before, scientists were able to produce only basic shapes—circles and stripes, for example. We used, as our model, the logo of Tel Aviv University, which has a very specific design, and were able to achieve the best results seen yet."
NASA's Cassini spacecraft has obtained the highest-resolution movie yet of a unique six-sided jet stream, known as the hexagon, around Saturn's north pole. The hexagon, which is wider than two Earths, owes its appearance to the jet stream that forms its perimeter. The jet stream forms a six-lobed, stationary wave which wraps around the north polar regions at a latitude of roughly 77 degrees North.
This is the first hexagon movie of its kind, using color filters, and the first to show a complete view of the top of Saturn down to about 70 degrees latitude. About 20,000 miles (30,000 kilometers) across, the hexagon is a wavy jet stream of 200-mile-per-hour winds (about 322 kilometers per hour) with a massive, rotating storm at its center. No other weather feature in the solar system is exactly, and so persistently, like it.
"The hexagon is just a current of air, and weather features out there that share similarities to this are notoriously turbulent and unstable," said Andrew Ingersoll, a Cassini imaging team member at the California Institute of Technology in Pasadena. "A hurricane on Earth typically lasts a week, but this has been here for decades -- and who knows -- maybe centuries."
Weather patterns on Earth are interrupted when they encounter friction from landforms or ice caps. Scientists suspect the stability of the hexagon has something to do with the lack of solid landforms on Saturn, which is essentially a giant ball of gas.
A team of physicists from the Paul-Drude-Institut für Festkörperelektronik (PDI) in Berlin, Germany, NTT Basic Research Laboratories in Atsugi, Japan, and the U.S. Naval Research Laboratory (NRL) has used a scanning tunneling microscope to create quantum dots with identical, deterministic sizes. The perfect reproducibility of these dots opens the door to quantum dot architectures completely free of uncontrolled variations, an important goal for technologies from nanophotonics to quantum information processing as well as for fundamental studies. The complete findings are published in the July 2014 issue of the journal Nature Nanotechnology.
Quantum dots are often regarded as artificial atoms because, like real atoms, they confine their electrons to quantized states with discrete energies. But the analogy breaks down quickly, because while real atoms are identical, quantum dots usually comprise hundreds or thousands of atoms, with unavoidable variations in their size and shape and, consequently, in their properties and behavior. External electrostatic gates can be used to reduce these variations. But the more ambitious goal of creating quantum dots with intrinsically perfect fidelity by completely eliminating statistical variations in their size, shape, and arrangement has long remained elusive.
Creating atomically precise quantum dots requires every atom to be placed in a precisely specified location without error. The team assembled the dots atom-by-atom, using a scanning tunneling microscope (STM), and relied on an atomically precise surface template to define a lattice of allowed atom positions. The template was the surface of an InAs crystal, which has a regular pattern of indium vacancies and a low concentration of native indium adatoms adsorbed above the vacancy sites. The adatoms are ionized +1 donors and can be moved with the STM tip by vertical atom manipulation. The team assembled quantum dots consisting of linear chains of N = 6 to 25 indium atoms; the example shown here is a chain of 22 atoms.
Stefan Fölsch, a physicist at the PDI who led the team, explained that "the ionized indium adatoms form a quantum dot by creating an electrostatic well that confines electrons normally associated with a surface state of the InAs crystal. The quantized states can then be probed and mapped by scanning tunneling spectroscopy measurements of the differential conductance." These spectra show a series of resonances labeled by the principal quantum number n. Spatial maps reveal the wave functions of these quantized states, which have n lobes and n - 1 nodes along the chain, exactly as expected for a quantum-mechanical electron in a box. For the 22-atom chain example, the states up to n = 6 are shown.
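The "n lobes and n − 1 nodes" pattern is exactly what the textbook particle-in-a-box model predicts. A short sketch (using the standard infinite-square-well wave functions, not the team's actual analysis) counts the interior nodes numerically:

```python
import math

def psi(n: int, x: float, L: float = 1.0) -> float:
    """Textbook infinite-square-well (particle-in-a-box) wave function."""
    return math.sqrt(2 / L) * math.sin(n * math.pi * x / L)

def count_nodes(n: int, samples: int = 1000, L: float = 1.0) -> int:
    """Count interior sign changes of psi_n, i.e. its nodes."""
    xs = [L * (i + 1) / (samples + 1) for i in range(samples)]  # endpoints excluded
    vals = [psi(n, x, L) for x in xs]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

for n in (1, 2, 6):
    print(f"n = {n}: {count_nodes(n)} interior nodes")  # n - 1 nodes per state
```

For the n = 6 state mapped in the experiment, this predicts five nodes along the chain, matching the spatial maps described above.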
Sexual reproduction is an ancient feature of life on earth, and the familiar X and Y chromosomes in humans and other model species have led to the impression that sex determination mechanisms are old and conserved. In fact, males and females are determined by diverse mechanisms that evolve rapidly in many taxa. Yet this diversity in primary sex-determining signals is coupled with conserved molecular pathways that trigger male or female development. Conflicting selection on different parts of the genome and on the two sexes may drive many of these transitions, but few systems with rapid turnover of sex determination mechanisms have been rigorously studied. Here we survey our current understanding of how and why sex determination evolves in animals and plants and identify important gaps in our knowledge that present exciting research opportunities to characterize the evolutionary forces and molecular pathways underlying the evolution of sex determination.
A restored functional cornea following transplantation of human ABCB5-positive limbal stem cells to limbal stem cell-deficient mice.
Limbal stem cells, which reside in the eye’s limbus, help maintain and regenerate corneal tissue. Their loss due to injury or disease is one of the leading causes of blindness.
In the past, tissue or cell transplants have been used to help the cornea regenerate, but it was unknown whether there were actual limbal stem cells in the grafts, or how many, and the outcomes were not consistent.
ABCB5 allowed the researchers to locate hard-to-find limbal stem cells in tissue from deceased human donors and use these stem cells to regrow anatomically correct, fully functional human corneas in mice.
“Limbal stem cells are very rare, and successful transplants are dependent on these rare cells,” says Bruce Ksander, Ph.D., of Mass. Eye and Ear, co-lead author on the study with post-doctoral fellow Paraskevi Kolovou, M.D. “This finding will now make it much easier to restore the corneal surface. It’s a very good example of basic research moving quickly to a translational application.”
ABCB5 was originally discovered in the lab of Markus Frank, M.D., of Boston Children’s Hospital, and Natasha Frank, M.D., of the VA Boston Healthcare System and Brigham and Women’s Hospital (co-senior investigators on the study) as being produced in tissue precursor cells in human skin and intestine.
In the new work, using a mouse model developed by the Frank lab, they found that ABCB5 also occurs in limbal stem cells and is required for their maintenance and survival, and for corneal development and repair. Mice lacking a functional ABCB5 gene lost their populations of limbal stem cells, and their corneas healed poorly after injury.
“ABCB5 allows limbal stem cells to survive, protecting them from apoptosis [programmed cell death],” says Markus Frank. “The mouse model allowed us for the first time to understand the role of ABCB5 in normal development, and should be very important to the stem cell field in general,” adds Natasha Frank.
Markus Frank is working with the biopharmaceutical industry to develop a clinical-grade ABCB5 antibody that would meet U.S. regulatory approvals.
Spin-coating a polymer solution (green) to create a carbon nanosheet with characteristics similar to graphene, without the defects (black).
A team of Korean researchers has synthesized hexagonal carbon nanosheets similar to graphene, using a polymer. The new material is free of the defects and complexity involved in producing graphene, and can substitute for graphene as transparent electrodes for organic solar cells and in semiconductor chips, the researchers say.
The research team is led by Han-Ik Joh at Korea Institute of Science and Technology (KIST), Seok-In Na at Chonbuk National University, and Byoung Gak Kim at Korea Research Institute of Chemical Technology. The research was funded by the KIST Proprietary Research Project and National Research Foundation of Korea.
Na explains: "Through a catalyst- and transfer-free process, we fabricated indium tin oxide (ITO)-free organic solar cells (OSCs) using a carbon nanosheet (CNS) with properties similar to graphene. The morphological and electrical properties of the CNS, which is derived from a polymer of intrinsic microporosity-1 (PIM-1) mainly composed of several aromatic hydrocarbons and cycloalkanes, can be easily controlled by adjusting the polymer concentration. The CNSs, which are prepared by simple spin-coating and heat treatment on a quartz substrate, are directly used as the electrodes of ITO-free OSCs, showing a high efficiency of approximately 1.922% under 100 mW cm−2 illumination and air mass 1.5 G conditions. This catalyst- and transfer-free approach is highly desirable for electrodes in organic electronics."
An international team of astronomers has developed a 3D model of a giant cloud ejected by the massive binary system Eta Carinae during its 19th century outburst. Eta Carinae lies about 7,500 light-years away in the southern constellation of Carina and is one of the most massive binary systems astronomers can study in detail. The smaller star is about 30 times the mass of the sun and may be as much as a million times more luminous. The primary star contains about 90 solar masses and emits 5 million times the sun's energy output. Both stars are fated to end their lives in spectacular supernova explosions.
Between 1838 and 1845, Eta Carinae underwent a period of unusual variability during which it briefly outshone Canopus, normally the second-brightest star. As a part of this event, which astronomers call the Great Eruption, a gaseous shell containing at least 10 and perhaps as much as 40 times the sun's mass was shot into space. This material forms a twin-lobed dust-filled cloud known as the Homunculus Nebula, which is now about a light-year long and continues to expand at more than 1.3 million mph (2.1 million km/h).
Using the European Southern Observatory's Very Large Telescope and its X-Shooter spectrograph, the team imaged near-infrared, visible and ultraviolet wavelengths along 92 separate swaths across the nebula, making the most complete spectral map to date. The researchers have used the spatial and velocity information provided by this data to create the first high-resolution 3D model of the Homunculus Nebula.
The shape model was developed using only a single emission line of near-infrared light emitted by molecular hydrogen gas. The characteristic 2.12-micron light shifts in wavelength slightly depending on the speed and direction of the expanding gas, allowing the team to probe even dust-obscured portions of the Homunculus that face away from Earth.
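The measurement principle can be sketched with the simple non-relativistic Doppler formula, Δλ = λ·v/c, using the expansion speed quoted above. This is an illustration only, not the team's actual pipeline:

```python
# Doppler shift of the 2.12-micron H2 emission line for gas moving
# along the line of sight at the Homunculus expansion speed.
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(rest_wavelength_m: float, velocity_m_s: float) -> float:
    """Wavelength shift for line-of-sight velocity v << c: delta = lambda * v / c."""
    return rest_wavelength_m * velocity_m_s / C

rest = 2.12e-6        # 2.12 microns, in meters
v = 2.1e9 / 3600      # 2.1 million km/h -> roughly 583 km/s
shift = doppler_shift(rest, v)
print(f"shift ~ {shift * 1e9:.1f} nm")  # a few-nanometer shift encodes the gas velocity
```

A shift of only a few nanometers on a 2,120-nanometer line is small, which is why a high-resolution spectrograph such as X-Shooter is needed to turn these shifts into a velocity (and hence depth) map.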
Scientists at the Massachusetts Institute of Technology are developing an audio reading device to be worn on the index finger of people whose vision is impaired, giving them affordable and immediate access to printed words.
The so-called FingerReader, a prototype produced on a 3-D printer, fits like a ring on the user’s finger and is equipped with a small camera that scans text. A synthesized voice reads words aloud, quickly translating books, restaurant menus, and other materials needed for daily living, especially away from home or the office.
Reading is as easy as pointing the finger at text. Special software tracks the finger movement, identifies words and processes the information. The device has vibration motors that alert readers when they stray from the script, said Roy Shilkrot, who is developing the device at the MIT Media Lab.
For Jerry Berrier, 62, who was born blind, the promise of the FingerReader is its portability and offer of real-time functionality at school, a doctor’s office and restaurants.
“When I go to the doctor’s office, there may be forms that I wanna read before I sign them,” Berrier said.
He said there are other optical character recognition devices on the market for those with vision impairments, but none that he knows of that will read in real time.