NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL at the top right of the screen) to display all relevant postings SORTED by TOPICS.
You can also type your own query: for example, enter "dna" to find all articles involving that keyword.
Meet the seven new dwarf galaxies. Yale University astronomers, using a new type of telescope made by stitching together telephoto lenses, recently discovered seven celestial surprises while probing a nearby spiral galaxy. The previously unseen galaxies may yield important insights into dark matter and galaxy evolution, while possibly signaling the discovery of a new class of objects in space.
For now, scientists know they have found a septuplet of new galaxies that were previously overlooked because of their diffuse nature: The ghostly galaxies emerged from the night sky as the team obtained the first observations from the “homemade” telescope.
The discovery came quickly, in a relatively small section of sky. “We got an exciting result in our first images,” said Allison Merritt, a Yale graduate student and lead author of a paper about the discovery in The Astrophysical Journal Letters. “It was very exciting. It speaks to the quality of the telescope.”
Pieter van Dokkum, chair of Yale’s astronomy department, designed the robotic telescope with University of Toronto astronomer Roberto Abraham. Their Dragonfly Telephoto Array uses eight telephoto lenses with special coatings that suppress internally scattered light. This makes the telescope uniquely adept at detecting the very diffuse, low surface brightness of the newly discovered galaxies.
“These are the same kind of lenses that are used in sporting events like the World Cup. We decided to point them upward instead,” van Dokkum said. He and Abraham built the compact, oven-sized telescope in 2012 at New Mexico Skies, an observatory in Mayhill, N.M. The telescope was named Dragonfly because the lenses resemble the compound eye of an insect.
“We knew there was a whole set of science questions that could be answered if we could see diffuse objects in the sky,” van Dokkum said. In addition to discovering new galaxies, the team is looking for debris from long-ago galaxy collisions.
“It’s a new domain. We’re exploring a region of parameter space that had not been explored before,” van Dokkum said.
In a study of chimpanzee intelligence, researchers estimate that, as in humans, genetic differences account for about 54 per cent of the variation seen in "general intelligence" – dubbed "g" – which is measured via a series of cognitive tests. "Our results in chimps are quite consistent with data from humans, and the human heritability in g," says William Hopkins of the Yerkes National Primate Research Center in Atlanta, Georgia, who heads the team reporting its findings in Current Biology.
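The 54 per cent figure is a heritability estimate. As a generic illustration only, the classic Falconer formula derives heritability from how much more strongly a trait correlates in identical (MZ) than fraternal (DZ) twin pairs; the correlations below are invented, and the chimp study used its own quantitative-genetic analysis rather than this shortcut:

```python
# Falconer's formula: h^2 = 2 * (r_MZ - r_DZ).
# Illustrative values only -- not the study's actual data or method.
def heritability(r_mz, r_dz):
    return 2 * (r_mz - r_dz)

print(heritability(0.62, 0.35))  # ~0.54: about 54% of variance attributed to genes
```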
The Retina displays featured on Apple's iPhone 4 and 5 models pack a pixel density of 326 ppi, with individual pixels measuring 78 micrometers across. That might seem plenty good enough, given that the average human eye is unable to differentiate between the individual pixels, but scientists in the UK have now developed technology that could lead to extremely high-resolution displays that put such pixel densities to shame.
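The 78-micrometer figure follows directly from the pixel density, since pixel pitch is just the physical width of an inch divided by the pixels per inch:

```python
def pixel_pitch_um(ppi):
    """Center-to-center pixel spacing, in micrometers, for a given density."""
    return 25400.0 / ppi  # 1 inch = 25.4 mm = 25,400 micrometers

print(round(pixel_pitch_um(326)))  # 78, matching the Retina figure quoted above
```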
Led by Oxford University scientists, a research team has created a prototype device that features pixels just 30 x 30 nanometers in size. The high resolution potential of the technology was discovered while the team was exploring the link between the electrical and optical properties of phase change materials (PCMs) that can switch from an amorphous to a crystalline state.
By sandwiching a seven-nanometer thick layer of the PCM Germanium-Antimony-Tellurium (Ge2Sb2Te5 or GST) between two layers of transparent electrodes made of indium tin oxide (ITO), the scientists discovered they could "draw" still images within the sandwich "stack" using an atomic force microscope. They then found that the "nano-pixels" could be switched on and off electronically, creating colored dots that could be used as the basis for an extremely high-resolution display.
"We didn't set out to invent a new kind of display," said Professor Harish Bhaskaran of Oxford University's Department of Materials, who led the research. "We were exploring the relationship between the electrical and optical properties of phase change materials and then had the idea of creating this GST 'sandwich' made up of layers just a few nanometers thick. We found that not only were we able to create images in the stack but, to our surprise, thinner layers of GST actually gave us better contrast. We also discovered that altering the size of the bottom electrode layer enabled us to change the color of the image."
But extremely high-resolution isn't the only impressive quality of the technology. The layers that make up the GST sandwich are created using a sputtering technique, which would allow them to be deposited as thin films on extremely thin and flexible substrates.
"We have already demonstrated that the technique works on flexible Mylar sheets around 200 nanometres thick," said Professor Bhaskaran. "This makes them potentially useful for 'smart' glasses, foldable screens, windshield displays, and even synthetic retinas that mimic the abilities of photoreceptor cells in the human eye."
The smallest, most abundant marine microbe, Prochlorococcus, is a photosynthetic bacterial species essential to the marine ecosystem. An estimated billion billion billion of the single-cell creatures live in the oceans, forming the base of the marine food chain and occupying a range of ecological niches based on temperature, light and chemical preferences, and interactions with other species. But the full extent and characteristics of diversity within this single species remain a puzzle.
To probe this question, scientists in MIT’s Department of Civil and Environmental Engineering (CEE) recently performed a cell-by-cell genomic analysis on a wild population of Prochlorococcus living in a milliliter — less than a quarter teaspoon — of ocean water, and found hundreds of distinct genetic subpopulations.
Each subpopulation in those few drops of water is characterized by a set of core gene alleles linked to a few flexible genes — a combination the MIT scientists call the “genomic backbone” — that endows the subpopulation with a finely tuned suitability for a particular ecological niche. Diversity also exists within the backbone subpopulations; most individual cells in the samples they studied carried at least one set of flexible genes not found in any other cell in its subpopulation.
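The subpopulation structure described above can be mimicked with a toy grouping: bucket cells by their core alleles (the "backbone"), then check which flexible genes are unique to each cell within its bucket. All gene and allele names below are invented for illustration:

```python
from collections import defaultdict

# Toy sketch of the "genomic backbone" idea -- names are made up, not real data.
cells = [
    {"core": ("a1", "b2"), "flexible": {"nifH", "pstS"}},
    {"core": ("a1", "b2"), "flexible": {"pstS", "phoX"}},
    {"core": ("a3", "b1"), "flexible": {"isiA"}},
]

# group cells into subpopulations by their shared core alleles
subpops = defaultdict(list)
for cell in cells:
    subpops[cell["core"]].append(cell)

# flexible genes carried by no other cell in the same subpopulation
unique = {}
for backbone, members in subpops.items():
    for i, cell in enumerate(members):
        others = set().union(*[m["flexible"] for j, m in enumerate(members) if j != i])
        unique[(backbone, i)] = cell["flexible"] - others

print(unique)  # every cell keeps at least one flexible gene all its own
```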
Last year at the Stanford-Berkeley Robotics Symposium, we saw some tantalizing slides from Oussama Khatib about a humanoid robot that used trekking poles to balance itself. We were promised more details later, and the Stanford researchers delivered at the IEEE International Conference on Robotics and Automation (ICRA) this year, where they presented the concept of SupraPed robots.
The idea is to equip robots with a pair of special trekking poles packed with sensors that, according to the researchers, "transforms biped humanoids into tripeds or quadrupeds or more generally, SupraPeds." By using these smart poles to steady themselves, the robots would be able to navigate through "cluttered and unstructured environments such as disaster sites."
Humans have had a lot of practice walking around on two legs. Robots have not, which isn't their fault, but at the moment, even the best robots are working up to the level of a toddler. Some of them aren't bad at flat terrain, but as we saw in the DARPA Robotics Challenge Trials, varied terrain is very, very difficult. It doesn't just require the physical ability to move and balance, but also the awareness to know what path to take and where feet should be placed.
As good at this as humans are, even we get into situations where our balance and movements with our legs and feet simply aren't enough. And when this happens, we scramble. If we're fancy, we might use a walking stick or hiking poles for balance assistance, and if we're not fancy, sometimes an outstretched arm is enough.
Similar to the research we looked at yesterday, this is an entirely different philosophy about obstacles: instead of things to be avoided, they're things that can potentially be used to complete tasks that would otherwise be unsafe or impossible.
However, this is all simulation, and the programming behind it is fairly complex. The robot (when they throw a real robot into this mix) will have sophisticated 3D vision, tactile sensing, and a special set of actuated ski poles. The SupraPed platform includes a pair of smart walking staffs, a whole-body multi-contact control and planning software system, and real-time reactive controllers that integrate both tactile and visual information. Moreover, to bypass the difficulty of programming fully autonomous robot controllers, the platform includes a remote haptic teleoperation system that allows an operator to remotely issue high-level commands.
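The payoff of the extra pole contacts can be seen with a little geometry: two feet alone give a degenerate, line-like support region, while adding two pole tips turns it into a polygon within which the center of mass can safely move. A minimal sketch with invented contact coordinates (not the researchers' actual controller):

```python
def polygon_area(points):
    """Shoelace formula; points listed in order around the boundary."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

feet = [(-0.1, 0.0), (0.1, 0.0)]    # two footprints on the ground plane (m)
poles = [(0.0, 0.6), (0.0, -0.6)]   # pole tips planted ahead of and behind

print(polygon_area(feet))  # 0.0 -> a biped's support region is just a line
print(polygon_area([feet[0], poles[0], feet[1], poles[1]]))  # 0.12 m^2 quadrilateral
```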
The laws of physics potentially allow one binary star system to contain a surprisingly large number of Earth-like planets, assuming there is enough matter.
Why settle for one habitable planet, when you can have 60? An astrophysicist has designed the ultimate star system by cramming in as many Earth-like worlds as possible without breaking the laws of physics. Such a monster cosmic neighbourhood is unlikely to exist in reality, but it could inspire future exoplanet studies. Sean Raymond of Bordeaux Observatory in France started his game of fantasy star system with a couple of ground rules. First, the arrangement of planets must be scientifically plausible. Second, they must be gravitationally stable over billions of years: there is no point in putting planets into orbit only to watch them spiral into the sun.
"The arguments were based on the recent scientific literature as well as some simple calculations I did," says Raymond. In some cases it was impossible to choose between two scenarios because of a lack of data, so he just picked the one he liked best.
Gas giants such as Jupiter are not habitable to life as we know it, but they can be orbited by Earth-like moons. In our solar system, Europa and Enceladus, which orbit Jupiter and Saturn, respectively, are prime candidates for extraterrestrial life. Raymond calculates that a red dwarf could hold four Jupiter-like planets, each with five Earth-like moons. What's more, the Trojan trick – parking extra planets at the gravitationally stable points 60 degrees ahead of and behind a planet in its orbit – can allow another two Earth-like planets on either side of each orbiting Jupiter, upping the total number of habitable worlds around the red dwarf to 36.
Finally, Raymond turned his star system into a binary one, with two red dwarfs separated by roughly the distance from our sun to the edge of the solar system. Theory allows one star to carry the Earth-only configuration, and the other to carry the Earth-plus-Jupiters configuration. This creates the ultimate star system, with 60 habitable planets to choose from.
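The tallies above can be checked with a little arithmetic, reading "two Earth-like planets on either side" as two planets at each of the leading and trailing Trojan points (an interpretation, not stated explicitly in the article):

```python
jupiters = 4
moons_per_jupiter = 5        # Earth-like moons around each gas giant
trojans_per_jupiter = 2 * 2  # two leading plus two trailing co-orbital Earths

red_dwarf_worlds = jupiters * (moons_per_jupiter + trojans_per_jupiter)
print(red_dwarf_worlds)      # 36, as stated for the Earth-plus-Jupiters star

companion_worlds = 60 - red_dwarf_worlds
print(companion_worlds)      # 24 worlds left for the Earth-only companion star
```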
IBM announced today it is investing $3 billion for R&D in two research programs to push the limits of chip technology and extend Moore’s law.
The research programs are aimed at “7 nanometer and beyond” silicon technology and developing alternative technologies for post-silicon-era chips using entirely different approaches, IBM says.
IBM will be investing especially in carbon nanoelectronics, silicon photonics, new memory technologies, and architectures that support quantum and cognitive computing.
7 nanometer technology and beyond
IBM researchers and other semiconductor experts predict that silicon semiconductors can scale from today's 22 nanometers down to 14 and then 10 nanometers in the next several years.
However, scaling down to 7 nanometers by the end of the decade will require significant investment and innovation in semiconductor architectures as well as invention of new tools and techniques for manufacturing, IBM says.
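A common rule of thumb is that transistor density scales roughly with the inverse square of the node's linear feature size, which gives a feel for what the jump from 22 nm to 7 nm buys. This is an idealization for illustration, not IBM's own projection:

```python
def density_gain(old_nm, new_nm):
    # Rule-of-thumb scaling: density ~ 1 / (feature size)^2.
    # An idealization; real nodes deviate from pure geometric scaling.
    return (old_nm / new_nm) ** 2

print(round(density_gain(22, 7), 1))  # ~9.9x more transistors per unit area
```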
Below 7 nanometers the challenges increase dramatically, requiring new kinds of materials to power the systems of the future, such as carbon nanotubes and graphene, and new computational approaches, such as quantum computing and neurosynaptic computing.
Carbon Nanotubes. IBM Researchers are exploring whether carbon nanotube (CNT) electronics can replace silicon beyond the 7 nm node. IBM recently demonstrated two-way CMOS NAND gates using 50 nm gate-length carbon nanotube transistors, a first.
IBM has also demonstrated the capability to purify carbon nanotubes to 99.99 percent, the highest verified purity demonstrated to date, and transistors at 10 nm channel length that show no degradation due to scaling. Modeling of the electronic circuits suggests that CNTs could deliver roughly a five- to ten-fold performance improvement over silicon circuits.
Graphene. Graphene — pure carbon in the form of a one-atomic-layer-thick sheet — is an excellent conductor of heat and electricity, and it is also remarkably strong and flexible. Electrons can move in graphene about ten times faster than in commonly used semiconductor materials such as silicon and silicon germanium. That means faster switching transistors. In 2013, IBM demonstrated the world’s first graphene-based integrated-circuit receiver front-end for wireless communications.
Astronomers have in the past 20 years located several hundred planets orbiting distant stars, and they have only scratched the surface. In a small patch of stars—less than 1 percent of the sky—in the Northern Hemisphere, NASA's Kepler mission has already found more than 100 planets, along with strong hints of thousands more. Stars across the sky ought to be similarly laden with planets. A recent study indicated that each star hosts, on average, 1.6 planets. Exoplanets, as these strange worlds are called, are as plentiful as weeds—they crop up wherever they can. Whether any of them harbors life remains to be seen, but the odds of finding such a world are getting better.
A spider-like creature's remains were so well preserved in fossil form that scientists could see all its leg joints, allowing them to recreate its likely gait using computer graphics.
Known as a trigonotarbid, the animal was one of the first predators on land. Its prey were probably early flightless insects and other invertebrates, which it would run down and jump on.
"We know quite a bit about how it lived," said Russell Garwood, a palaeontologist with the University of Manchester, UK. "We can see from its mouth parts that it pre-orally digested its prey - something that most arachnids do - because it has a special filtering plate in its mouth. So, that makes us fairly sure it vomited digestive enzymes on to its prey and then sucked up liquid food," he explained.
The trigonotarbid specimens studied by Dr Garwood and colleagues are just a few millimetres in length. They were unearthed in Scotland, near the Aberdeenshire town of Rhynie. Its translucent Early Devonian chert sediments are renowned for their exquisite fossils.
The team used a collection held at the Natural History Museum in London that had been prepared back in the 1920s. The rock had been cut into extremely fine slices, just a few tens of microns thick, making it possible to construct 3D models of the arachnids, much as a doctor might do with the X-ray slices obtained in a CT scan.
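The reconstruction idea, stacking many thin 2D cross-sections into a 3D volume, can be sketched in miniature. Everything below is a toy with invented shapes; the real work segments photographs of each rock slice:

```python
# Toy reconstruction: each thin slice yields a 2D mask marking the fossil's
# cross-section; stacking the masks gives a simple 3D voxel model.
def disc_mask(size, radius, cx, cy):
    """Boolean grid with True inside a disc (a stand-in for one cross-section)."""
    return [[(x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2 for x in range(size)]
            for y in range(size)]

# 20 "slices" whose cross-section radius varies along the specimen
volume = [disc_mask(16, 3 + (k % 3), 8, 8) for k in range(20)]
voxels = sum(cell for sheet in volume for row in sheet for cell in row)
print(len(volume), voxels)
```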
"We could see the articulation points in the legs," explained Dr Garwood. "Between each part of the leg, there are darker pieces where they join, and that allowed us to work out the range of movement.
"We then compared that with the gaits of modern spiders, which are probably a good analogy because they have similar leg proportions. The software enabled us to see the centre of mass and find a gait that worked. If it's too far back compared to the legs, the posterior drags on the ground. The trigonotarbid is an alternating tetrapod, meaning there are four feet on the ground at any one time."
"This new study has gone further and shows us how they probably walked. For me, what's really exciting here is that scientists themselves can make these animations now, without needing the technical wizardry (and immense costs) of a Jurassic-Park style film. When I started working on fossil arachnids, we were happy if we could manage a sketch of what they used to look like. Now, they run across our computer screens."
The work is part of a special collection of papers on 3D visualisations of fossils published in the Journal of Paleontology.
A look at three leading approaches using inlays to expand presbyopic patients’ range of vision.
Periodically, the search for a “cure” for presbyopia produces a new set of treatment options. The latest approach is the corneal inlay, intended to improve near vision without compromising distance vision in emmetropic presbyopes—and possibly non-emmetropes as well.
Three variations on the concept of placing an implant inside the cornea are in different stages of the approval process. The Kamra inlay (from AcuFocus in Irvine, Calif.) uses the pinhole principle to increase depth of field; the Raindrop (from ReVision Optics in Laguna Hills, Calif.) makes the cornea multifocal by reshaping it; and the Flexivue Microlens (from Presbia in Amsterdam) creates multifocal vision using an in-cornea lens.
Closer to a Presbyopia Cure? “All of these inlays seem to work,” notes Dr. Hovanesian. “You can make theoretical arguments as to why one might be better than the others, but they all seem to achieve a high level of near vision in the range of J1, while only minimally compromising distance vision to 20/20 or 20/25.”
“Overall, the data from the FDA trial of the Kamra, like the data from outside the United States regarding the Flexivue, indicates that these inlays are very safe,” adds Dr. Maloney.
Of course, they have a few disadvantages. Dr. Maloney notes that all of them reduce distance vision to some degree. “That’s the trade-off for improved reading vision,” he says. “And all of them cause night glare to some degree; that’s the trade-off for changing the way the eye focuses light. So if patients aren’t happy, it’s because their night vision isn’t good enough, their distance vision isn’t good enough, or their reading vision isn’t good enough—the inlay isn’t strong enough to give them the reading vision they need. Those limitations are probably common to all inlays. But the inlays can be explanted, and vision returns to being very close to what it was before surgery. In addition, we haven’t seen significant adverse effects with the current generation of these inlays.”
“Using an inlay requires a compromise in distance vision,” agrees Dr. Hovanesian. “That’s the nature of adding something to an emmetropic visual system. However, you’re usually doing it in the nondominant eye in a patient who is a good adapter. For most of these patients, what they sacrifice is well worth it for what they gain.
“The Raindrop inlay, and inlays in general, are going to serve a very important purpose,” he concludes. “As they become approved, we’re going to find that patients really want this kind of technology. It’s appealing because it serves emmetropic presbyopes—patients who are not well served by any other modality we have. Many of these patients are not willing to try monovision, and they’re generally too young for lens implant surgery. They want a quick and easy solution, and they like the idea of something that’s reversible if it doesn’t work out.”
“I think there will definitely be a place for these inlays in our clinical practices,” agrees Dr. Maloney. “It looks like the Kamra inlay is the one closest to FDA approval, but as a surgeon I’d be very happy to add any one of them to my practice.”
Researchers at University College London (UCL) used a supercomputer to compute 10 billion "transition lines" in the spectral signature of methane, a list 200 times more comprehensive than previous best efforts. Because methane is a biosignature, the work is a step toward detecting life on planets outside our solar system.
Every molecule absorbs and emits light in a characteristic pattern called the absorption and emission spectrum. In order to determine the atmospheric composition of the exoplanets, astronomers break down the full atmospheric spectrum into known patterns to identify the component molecules.
Detecting methane is important in astrobiology because it is an unstable molecule: broken down by solar ultraviolet radiation, it lasts only 300-600 years in an atmosphere. Since it is unstable, one explanation for its continued presence on an exoplanet is ongoing production by carbon-based life. The caveat is that geological processes also replenish methane, so a detection is suggestive of, but does not guarantee, life.
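With no replenishment, a simple exponential-destruction model shows how quickly methane vanishes. The 450-year lifetime below is just the midpoint of the quoted 300-600 year range, chosen for illustration:

```python
import math

def fraction_remaining(years, lifetime_years):
    # simple exponential photochemical destruction, with no replenishment
    return math.exp(-years / lifetime_years)

lifetime = 450  # midpoint of the quoted 300-600 year range (an assumption)
print(fraction_remaining(1000, lifetime))  # ~0.11: mostly gone within a millennium
```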
Previous methane spectra are incomplete: they contain far fewer transition lines than the new effort, and so do not properly describe methane in high-temperature atmospheres (i.e., hotter than Earth's). At high temperatures there are more transitions because the methane molecule is excited to higher energy states. As a result, the methane levels of hot exoplanets and cool stars have been measured only partially or incorrectly.
An international team of researchers, including University of Hawaii at Manoa astronomer Brent Tully, has mapped the motions of structures of the nearby universe in greater detail than ever before. The maps are presented as a video, which provides a dynamic three-dimensional representation of the universe through the use of rotation, panning, and zooming. The video was announced last week at the conference "Cosmic Flows: Observations and Simulations" in Marseille, France, that honored the career and 70th birthday of Tully.
The Cosmic Flows project has mapped visible and dark matter densities around our Milky Way galaxy up to a distance of 300 million light-years.
The team includes Helene Courtois, associate professor at the University of Lyon, France, and associate researcher at the Institute for Astronomy (IfA), University of Hawaii (UH) at Manoa, USA; Daniel Pomarede, Institute of Research on Fundamental Laws of the Universe, CEA/Saclay, France; Brent Tully, IfA, UH Manoa; and Yehuda Hoffman, Racah Institute of Physics, University of Jerusalem, Israel.
The large-scale structure of the universe is a complex web of clusters, filaments, and voids. Large voids—relatively empty spaces—are bounded by filaments that form superclusters of galaxies, the largest structures in the universe. Our Milky Way galaxy lies in a supercluster of 100,000 galaxies.
Just as the movement of tectonic plates reveals the properties of Earth's interior, the movements of the galaxies reveal information about the main constituents of the Universe: dark energy and dark matter. Dark matter is unseen matter whose presence can be deduced only by its effect on the motions of galaxies and stars because it does not give off or reflect light. Dark energy is the mysterious force that is causing the expansion of the universe to accelerate.
Around half of the genes that influence how well a child can read also play a role in their mathematics ability, say scientists from UCL, the University of Oxford and King’s College London who led a study into the genetic basis of cognitive traits.
While mathematics and reading ability are known to run in families, the complex system of genes affecting these traits is largely unknown. The finding deepens scientists’ understanding of how nature and nurture interact, highlighting the important role that a child’s learning environment may have on the development of reading and mathematics skills, and the complex, shared genetic basis of these cognitive traits.
The collaborative study, published today in Nature Communications as part of the Wellcome Trust Case Control Consortium, used data from the Twins Early Development Study (TEDS) to analyse the influence of genetics on the reading and mathematics performance of 12-year-old children from nearly 2,800 British families.
Twins and unrelated children were tested for reading comprehension and fluency, and answered mathematics questions based on the UK national curriculum. The information collected from these tests was combined with DNA data, showing a substantial overlap in the genetic variants that influence mathematics and reading.
Dr Chris Spencer (Oxford University), lead author said: “We’re moving into a world where analysing millions of DNA changes, in thousands of individuals, is a routine tool in helping scientists to understand aspects of human biology. This study used the technique to help investigate the overlap in the genetic component of reading and maths ability in children. Interestingly, the same method can be applied to pretty much any human trait, for example to identify new links between diseases and disorders, or the way in which people respond to treatments.”
A British company has produced a "strange, alien" material so black that it absorbs all but 0.035 per cent of visual light, setting a new world record. To stare at the "super black" coating made of carbon nanotubes – each 10,000 times thinner than a human hair – is an odd experience. It is so dark that the human eye cannot understand what it is seeing. Shapes and contours are lost, leaving nothing but an apparent abyss.
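Absorbing all but 0.035 per cent of visible light corresponds to an optical density of about 3.5; equivalently, the incoming light outshines the reflected light by a factor of nearly three thousand. A quick check of the arithmetic:

```python
import math

reflected = 0.00035  # the 0.035% of visible light Vantablack fails to absorb
optical_density = -math.log10(reflected)
attenuation = 1 / reflected

print(round(optical_density, 2))  # ~3.46: an extremely "deep" black
print(round(attenuation))         # reflected light is weaker by roughly 2,900:1
```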
If it was used to make one of Chanel's little black dresses, the wearer's head and limbs might appear to float incorporeally around a dress-shaped hole.
Actual applications are more serious, enabling astronomical cameras, telescopes and infrared scanning systems to function more effectively. Then there are the military uses that the material's maker, Surrey NanoSystems, is not allowed to discuss.
The nanotube material, named Vantablack, has been grown on sheets of aluminium foil by the Newhaven-based company. While the sheets may be crumpled into miniature hills and valleys, this landscape disappears wherever the coating is applied.
"You expect to see the hills and all you can see … it's like black, like a hole, like there's nothing there. It just looks so strange," said Ben Jensen, the firm's chief technical officer. Asked about the prospect of a little black dress, he said it would be "very expensive" – the cost of the material is one of the things he was unable to reveal. "You would lose all features of the dress. It would just be something black passing through," he said.
Vantablack, which was described in the journal Optics Express and will be launched at the Farnborough International Airshow this week, works by packing together a field of nanotubes, like incredibly thin drinking straws. These are so tiny that light particles cannot get into them, although they can pass into the gaps between. Once there, however, all but a tiny remnant of the light bounces around until it is absorbed.
Vantablack's practical uses include calibrating cameras used to take photographs of the oldest objects in the universe. This has to be done by pointing the camera at something as black as possible.
Mathematica 10 has more new features than any previous version. It is satisfying to see such a long curve of accelerating development—and to realize that there are more new functions being added with Mathematica 10 than there were functions altogether in Mathematica 1. So what is the new functionality in Mathematica 10? It’s a mixture of completely new areas and directions (like geometric computation, machine learning and geographic computation)—together with extensive strengthening, polishing and expanding of existing areas. It’s also a mixture of things I’ve long planned for us to do—but which had to wait for us to develop the necessary technology—together with things I’ve only fairly recently realized we’re in a position to tackle.
When you first launch Mathematica 10 there are some things you’ll notice right away. One is that Mathematica 10 is set up to connect immediately to the Wolfram Cloud. Unlike Wolfram Programming Cloud—or the upcoming Mathematica Online—Mathematica 10 doesn’t run its interface or computations in the cloud. Instead, it maintains all the advantages of running these natively on your local computer—but connects to the Wolfram Cloud so it can have cloud-based files and other forms of cloud-mediated sharing, as well as the ability to access cloud-based parts of the Wolfram Knowledgebase.
If you’re an existing Mathematica user, you’ll notice some changes when you start using notebooks in Mathematica 10. Like there’s now autocompletion everywhere—for option values, strings, wherever. And there’s also a hovering help box that lets you immediately get function templates or documentation. And there’s also—as much requested by the user community—computation-aware multiple undo. It’s horribly difficult to know how and when you can validly undo Mathematica computations—but in Mathematica 10 we’ve finally managed to solve this to the point of having a practical multiple undo.
In Mathematica 10, one important area of deep automation is machine learning. Inside the system there are all kinds of core algorithms familiar to experts—logistic regression, random forests, SVMs, etc. And all kinds of preprocessing and scoring schemes. But to the user there are just two highly automated functions: Classify and Predict. And with these functions, it’s now easy to call on machine learning whenever one wants.
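To give a flavor of what a single automated Classify-style call does, here is a toy nearest-neighbour classifier. This is illustrative Python, not Wolfram Language, and nothing like the automatic algorithm selection Mathematica actually performs:

```python
# Toy 1-nearest-neighbour classifier: train on labeled points, predict a label.
def classify(training, point):
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # flatten {label: [examples]} into (example, label) pairs
    examples = [(x, label) for label, xs in training.items() for x in xs]
    return min(examples, key=lambda e: sqdist(e[0], point))[1]

training = {"cat": [(1.0, 1.0), (2.0, 1.5)], "dog": [(8.0, 9.0), (9.0, 8.0)]}
print(classify(training, (1.5, 1.2)))  # "cat"
```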
There are huge new algorithmic capabilities in Mathematica 10 in graph theory, image processing, control theory and lots of other areas. Sometimes one’s not surprised that it’s at least possible to have such-and-such a function—even though it’s really nice to have it be as clean as it is in Mathematica 10. But in other cases it at first seems somehow impossible that the function could work.
There are all kinds of issues. Maybe the general problem is undecidable, or theoretically intractable. Or it’s ill conditioned. Or it involves too many cases. Or it needs too much data. What’s remarkable is how often—by being algorithmically sophisticated, and by leveraging what we’ve built in Mathematica and the Wolfram Language—it’s possible to work around these issues, and to build a function that covers the vast majority of important practical cases.
Another important issue is just how much we can represent and do computation on. Expanding this is a big emphasis in the Wolfram Language—and Mathematica 10 has access to everything that’s been developed there. And so, for example, in Mathematica 10 there’s an immediate symbolic representation for dates, times and time series—as well as for geolocations and geographic data.
Irish company Mcor's unique paper-based 3D printers make some very compelling arguments. For starters, instead of expensive plastics, they build objects out of cut-and-glued sheets of standard 80 GSM office paper. That means printed objects come out at 10 to 20 percent of the price of other 3D prints, and with none of the toxic fumes or solvent dips that some other processes require.
Secondly, because it's standard paper, you can print onto it in full color before it's cut and assembled, giving you a high quality, high resolution color "skin" all over your final object. Additionally, if the standard hard-glued object texture isn't good enough, you can dip the final print in solid glue, to make it extra durable and strong enough to be drilled and tapped, or in a flexible outer coating that enables moving parts, if you don't mind losing a little of your object's precision shape.
The process is fairly simple. Using a piece of software called SliceIt, a 3D model is cut into paper-thin layers exactly the thickness of an 80 GSM sheet. If your 3D model doesn't include color information, you can add color and detail to the model through a second piece of software called ColorIt.
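The slicing arithmetic is easy to estimate. The sketch below assumes a sheet of 80 GSM paper is about 0.1 mm thick, a typical figure; the real SliceIt software would use the exact calibrated sheet thickness.

```python
import math

# Back-of-the-envelope version of the SliceIt step: how many sheets
# of paper does a model need? ASSUMES ~0.1 mm per sheet of 80 GSM
# paper (a typical value, not Mcor's calibrated number).

SHEET_THICKNESS_MM = 0.1  # assumed thickness of one 80 GSM sheet

def sheets_needed(model_height_mm):
    """Number of paper slices required for a model of the given height."""
    return math.ceil(model_height_mm / SHEET_THICKNESS_MM)

print(sheets_needed(50))  # a 5 cm tall model -> 500 sheets
```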
Next, a regular CMYK inkjet printer prints each slice of the model onto a separate sheet of paper, adding a ~5 mm-wide band of the required color around the edges of each slice that will end up showing once the model is assembled. The stack of printed slices is then loaded into the Mcor IRIS machine, which uses a process called selective deposition lamination.
Each sheet is laid down, and its slice shape is cut into it. Then a print nozzle lays soft glue all over the non-essential parts of that sheet that will be broken away after manufacture. A second, high density glue is applied to the sections of the paper that will be used to form the final model. Then, the next sheet is drawn over the top of it, and the stack is pressed up against a heat plate that seals the two layers together.
Once all layers have been cut, glued and pressed together, the object comes out of the printer as a chunky sheaf of paper. But the waste material, with its softer glue, is slightly flexible and pre-cut into little cubes, so it pulls away quickly and easily from the much tougher, denser material of the object itself.
Even without an outer coating, the final objects feel very solid – something like a medium density wood feel – and the print detail can be truly fantastic, miles ahead of what some other 3D printers are able to achieve. Some of the samples we looked at had started to peel apart a little bit – but then, these were road-weary trade samples that had been handled by hundreds of people. In general they felt very solid.
Geoff Hancock, CEO of DGS 3D, the Australian supplier of Mcor machinery, told us that while the paper-based print process was broadly useful in parts prototyping, presentation modelling, architectural modelling, sand casting and a range of other business use cases, one of the most successful areas of the business is in printing out miniaturized cityscapes, complete with topographical data.
It probably started with Linux, then came Wikipedia and Open Street Map. Crowd-sourced information systems are central to a thriving Digital Society. So, what's next? In this video, Dirk Helbing introduces a number of concepts such as the Planetary Nervous System, Global Participatory Platform, Interactive Virtual Worlds, User-Controlled Information Filters and Reputation Systems, and the Digital Data Purse. He also discusses ideas such as the Social Mirror, Intercultural Adapter, the Social Protector and Social Money as tools to create a better world. These can help us to avoid systemic instabilities, market failures, tragedies of the commons, and exploitation, and to create the framework for a Participatory Market Society, where everyone can be better off.
Persistent Surveillance Systems can watch 25 sq. miles—for hours.
On June 28, 2012, in Dayton, Ohio, police received reports of an attempted robbery. A man armed with a box cutter had just tried to rob the Annex Naughty N’ Nice adult bookstore. Next, a similar report came from a Subway sandwich shop just a few miles northeast of the bookstore.
Coincidentally, a local company named Persistent Surveillance Systems (PSS) was flying a small Cessna aircraft 10,000 feet overhead at the time. The surveillance flight was loaded up with specialized cameras that could watch 25 square miles of territory, and it provided something no ordinary helicopter or police plane could: a TiVo-style time machine that could watch and record the movements of every person and vehicle below.
After learning about the attempted robberies, PSS conducted frame-by-frame video analysis of the bookstore and sandwich shop and was able to show that exactly one car traveled between them. Further analysis showed that the suspect then moved on to a Family Dollar store in the northern part of the city, robbed it, stopped for gas—where his face was captured on video—and eventually returned home.
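The logic of that cross-scene comparison can be sketched as a simple set intersection. The track IDs below are invented for illustration; PSS's actual analysis was manual, frame-by-frame video review.

```python
# Hypothetical sketch of the cross-scene comparison (track IDs are
# invented; the real analysis was manual, frame by frame): intersect
# the set of vehicles seen at scene A with those seen at scene B.

tracks_bookstore = {"car_17", "car_22", "car_31"}
tracks_subway = {"car_22", "car_40"}

suspects = tracks_bookstore & tracks_subway
print(suspects)  # -> {'car_22'}
```

With aerial footage covering both scenes in the same time window, a single vehicle in the intersection is exactly the "one car traveled between them" result described above.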
A man named Joseph Bucholtz was arrested the following month and pled guilty to three counts of aggravated robbery with a deadly weapon and one count of robbery. In November 2012, he was sentenced to five years in prison and ordered to pay $665 to the bookstore.
Though an all-seeing, always-recording eye in the sky might sound dystopian, current PSS surveillance tech has real limitations. For now, the cameras can only shoot for a few hours at a time, only during the day, and sometimes just in black-and-white. When watching from 10,000 feet, PSS says that individuals are reduced to a single pixel—useful for tracking movements but not for identifying someone.
“You can’t tell if they’re red, white, green, or purple,” Ross McNutt, the company’s CEO, told Ars. And even if the half-meter resolution on his cameras got significantly better, McNutt said that he would prefer to fly higher and capture a larger area.
McNutt wants to be sensitive to people’s concerns, so PSS meets with the ACLU and other privacy activists. But he also wants to catch criminals. McNutt, who helped develop the technology when it was a military research project at the nearby Air Force Institute of Technology (AFIT) back in 2004, claims that his system has already proved its value.
New light-sensitive protein enables simpler, more powerful optogenetics.
MIT engineers have developed the first light-sensitive protein molecule that enables neurons to be silenced noninvasively. Using a light source outside the skull makes it possible to do long-term studies without an implanted light source.
The protein, known as Jaws, also allows a larger volume of tissue to be influenced at once. The researchers described the protein in Nature Neuroscience.
Optogenetics, a technology that allows scientists to control brain activity by shining light on neurons, relies on opsins, light-sensitive proteins that act as channels or pumps that influence electrical activity by controlling the flow of ions in or out of cells.
Researchers insert a light source, such as an optical fiber, into the brain to suppress or stimulate electrical signals within cells. This technique requires a light source to be implanted in the brain, where it can reach the cells to be controlled. The neurons to be studied must be genetically engineered to produce the opsins.
Also, inserting optical fibers into the brain “displaces brain tissue and can lead to side effects such as brain lesion, neural morphology changes, glial inflammation and motility, or aseptic compromise,” the researchers say in the paper.
In addition, such implants can be difficult to insert and can be incompatible with many kinds of experiments, such as studies of development, during which the brain changes size, or of neurodegenerative disorders, during which the implant can interact with brain physiology. And it is difficult to perform long-term studies of chronic diseases with these implants.
Researchers from UCSD have, for the first time, directly created and destroyed neural connections that link high-level sensory input to high-level behavioral responses.
Donald Hebb in 1949 was one of the first to seize upon the observation that experiences occurring together become associated in the mind. He proposed that on the biological level, neurons are rewired so that coordinated inputs and outputs get wired together. As such, were there a nausea neuron and a boat neuron, through the effects of association the two would get wired together, so that the “boat” itself fires up pathways in the “nausea” part of the brain.
In the field of neural networks, this has a name: Hebbian learning. Pavlov, of course, also described this phenomenon and tested it in animals, giving it the name “conditioned response”.
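Hebb's rule is simple enough to state in a couple of lines of code. This is a minimal, illustrative Python sketch (the numbers are arbitrary, not from any study): the weight between two units grows in proportion to the product of their activities.

```python
# Minimal illustrative sketch of Hebb's rule ("cells that fire
# together wire together"): the connection weight grows with the
# product of pre- and postsynaptic activity. Numbers are arbitrary.

def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen weight w when the two units are co-active."""
    return w + lr * pre * post

w = 0.0
for _ in range(10):          # "boat" and "nausea" repeatedly co-active
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))           # -> 1.0

# with no postsynaptic activity, the weight is left unchanged:
print(round(hebbian_update(w, pre=1.0, post=0.0), 2))  # -> 1.0
```

Repeated co-activation strengthens the "boat"-to-"nausea" link; activity on only one side leaves it alone, which is the asymmetry the conditioning experiments below exploit.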
Until now, the wiring together of neural inputs and outputs was a theory with good but indirect evidence. At UCSD, neuroscientists teamed up with molecular biologists to engineer a mouse whose neurons can be directly controlled to form and lose connections.
They did this by injecting an engineered virus into the auditory nerve cells. The viruses, largely harmless, carry a light-responsive molecular switch (actually a membrane protein “channel”) which gets inserted into cells of the auditory region. Using laser light of certain frequencies, it is possible either to “potentiate” or to “depress” the auditory nerve cells.
The upshot is that the researchers could directly make the auditory nerve cells increase or decrease their signal strength to other nerve cells, without needing a real, external noise. In effect, they’ve short-circuited the noise input. In experiments, they applied a mild electrical shock to mice while simultaneously stimulating the auditory input with the laser-activated switch.
Basically they flashed the laser light at the ear of the mouse. Over time, the mouse began to associate the laser pulse induced nerve signal with the electrical shock. The mice were conditioned to exhibit fear even when there was no shock.
The crux of the experiment is what happened when the scientists flashed the laser in a way to weaken the auditory nerve. Now the mouse stopped responding in fear to the laser auditory stimulus.
The experiments showed for the first time that associative learning was indeed the wiring together of sensory and response neurons. The study was published in Nature.
Earth's magnetic field, which protects the planet from huge blasts of deadly solar radiation, has been weakening over the past six months, according to data collected by a European Space Agency (ESA) satellite array called Swarm.
The biggest weak spots in the magnetic field — which extends 370,000 miles (600,000 kilometers) above the planet's surface — have sprung up over the Western Hemisphere, while the field has strengthened over areas like the southern Indian Ocean, according to the magnetometers onboard the Swarm satellites — three separate satellites flying in tandem.
The scientists who conducted the study are still unsure why the magnetic field is weakening, but one likely reason is that Earth's magnetic poles are getting ready to flip, said Rune Floberghagen, the ESA's Swarm mission manager. In fact, the data suggest magnetic north is moving toward Siberia.
Over the past 20 million years, our planet has settled into a pattern of a pole reversal about every 200,000 to 300,000 years; as of 2012, however, it has been more than twice that long since the last reversal. These reversals aren't split-second flips; instead, they occur over hundreds or thousands of years. During this lengthy stint, the magnetic poles start to wander away from the region around the spin poles (the axis around which our planet spins), and eventually end up switched around, according to Cornell University astronomers.
Since the 1960s, theatergoers have shelled out for crude 3-D glasses, polarized glasses, and shutter glasses to enhance their viewing experience. These basic devices, used to trick the brain into perceiving an artificial three-dimensional reality, may soon be rendered obsolete with the introduction of new holography technology developed by Tel Aviv University researchers.
Tel Aviv University doctoral students Yuval Yifat, Michal Eitan, and Zeev Iluz have developed highly efficient holography based on nanoantennas that could be used for security as well as medical and recreational purposes. Prof. Yael Hanein, of TAU's School of Electrical Engineering and head of TAU's Center for Nanoscience and Nanotechnology, and Prof. Jacob Scheuer and Prof. Amir Boag of the School of Electrical Engineering, led the development team. Their research, published in the American Chemical Society's publication Nano Letters, uses the parameters of light itself to create dynamic and complex holographic images.
In order to effect a three-dimensional projection using existing technology, two-dimensional images must be "replotted"—rotated and expanded to create the appearance of three dimensions. But the team's nanoantenna technology permits newly designed holograms to replicate the appearance of depth without being replotted. The applications for the technology are vast and diverse, according to the researchers, who have already been approached by commercial entities interested in the technology.
"We had this interesting idea—to play with the parameters of light, the phase of light," said Yifat. "If we could dynamically change the relation between light waves, we could create something that projected dynamically—like holographic television, for example. The applications for this are endless. If you take light and shine it on a specially engineered nanostructure, you can project it in any direction you want and in any form that you want. This leads to interesting results."
The researchers worked in the lab for over a year to develop and patent a small metallic nanoantenna chip that, together with an adapted holography algorithm, could determine the "phase map" of a light beam. "Phase corresponds with the distance light waves have to travel from the object you are looking at to your eye," said Prof. Hanein. "In real objects, our brains know how to interpret phase information so you get a feeling of depth, but when you look at a photograph, you often lose this information so the photographs look flat. Holograms save the phase information, which is the basis of 3-D imagery. This is truly one of the holy grails of visual technology."
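The phase idea itself is textbook optics: for light of wavelength λ, the phase accumulated over a distance d is 2πd/λ. Below is a minimal Python sketch of a "phase map" for a single point source (illustrative geometry and numbers, not the team's patented algorithm).

```python
import math

# Illustrative sketch (not the team's algorithm) of a "phase map":
# the phase at each point of a recording plane is 2*pi*d/wavelength
# (mod 2*pi), where d is the distance from the object point.

def phase_map(source, grid_points, wavelength):
    """Phase at each (x, y) grid point for a single point source."""
    sx, sy, sz = source
    phases = []
    for x, y in grid_points:
        d = math.sqrt((x - sx) ** 2 + (y - sy) ** 2 + sz ** 2)
        phases.append((2 * math.pi * d / wavelength) % (2 * math.pi))
    return phases

# point source 1 mm above a tiny 3x3 patch of the plane, green light
grid = [(x * 1e-5, y * 1e-5) for y in range(3) for x in range(3)]
print(phase_map((0.0, 0.0, 1e-3), grid, 532e-9))
```

A photograph throws this map away and keeps only intensity; a hologram records it, which is what lets the brain reconstruct depth.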
According to the researchers, their methodology is the first of its kind to successfully produce high-resolution holographic imagery that can be projected efficiently in any direction.
"We can use this technology to reflect any desired object," said Prof. Scheuer. "Before, scientists were able to produce only basic shapes—circles and stripes, for example. We used, as our model, the logo of Tel Aviv University, which has a very specific design, and were able to achieve the best results seen yet."
NASA's Cassini spacecraft has obtained the highest-resolution movie yet of a unique six-sided jet stream, known as the hexagon, around Saturn's north pole. The hexagon, which is wider than two Earths, owes its appearance to the jet stream that forms its perimeter. The jet stream forms a six-lobed, stationary wave which wraps around the north polar regions at a latitude of roughly 77 degrees North.
This is the first hexagon movie of its kind, using color filters, and the first to show a complete view of the top of Saturn down to about 70 degrees latitude. About 20,000 miles (30,000 kilometers) across, the hexagon is a wavy jet stream of 200-mile-per-hour winds (about 322 kilometers per hour) with a massive, rotating storm at the center. No weather feature this consistent exists anywhere else in the solar system.
"The hexagon is just a current of air, and weather features out there that share similarities to this are notoriously turbulent and unstable," said Andrew Ingersoll, a Cassini imaging team member at the California Institute of Technology in Pasadena. "A hurricane on Earth typically lasts a week, but this has been here for decades -- and who knows -- maybe centuries."
Weather patterns on Earth are interrupted when they encounter friction from landforms or ice caps. Scientists suspect the stability of the hexagon has something to do with the lack of solid landforms on Saturn, which is essentially a giant ball of gas.
A team of physicists from the Paul-Drude-Institut für Festkörperelektronik (PDI) in Berlin, Germany, NTT Basic Research Laboratories in Atsugi, Japan, and the U.S. Naval Research Laboratory (NRL) has used a scanning tunneling microscope to create quantum dots with identical, deterministic sizes. The perfect reproducibility of these dots opens the door to quantum dot architectures completely free of uncontrolled variations, an important goal for technologies from nanophotonics to quantum information processing as well as for fundamental studies. The complete findings are published in the July 2014 issue of the journal Nature Nanotechnology.
Quantum dots are often regarded as artificial atoms because, like real atoms, they confine their electrons to quantized states with discrete energies. But the analogy breaks down quickly, because while real atoms are identical, quantum dots usually comprise hundreds or thousands of atoms, with unavoidable variations in their size and shape and, consequently, in their properties and behavior. External electrostatic gates can be used to reduce these variations. But the more ambitious goal of creating quantum dots with intrinsically perfect fidelity by completely eliminating statistical variations in their size, shape, and arrangement has long remained elusive.
Creating atomically precise quantum dots requires every atom to be placed in a precisely specified location without error. The team assembled the dots atom-by-atom, using a scanning tunneling microscope (STM), and relied on an atomically precise surface template to define a lattice of allowed atom positions. The template was the surface of an InAs crystal, which has a regular pattern of indium vacancies and a low concentration of native indium adatoms adsorbed above the vacancy sites. The adatoms are ionized +1 donors and can be moved with the STM tip by vertical atom manipulation. The team assembled quantum dots consisting of linear chains of N = 6 to 25 indium atoms; the example shown here is a chain of 22 atoms.
Stefan Fölsch, a physicist at the PDI who led the team, explained that "the ionized indium adatoms form a quantum dot by creating an electrostatic well that confines electrons normally associated with a surface state of the InAs crystal. The quantized states can then be probed and mapped by scanning tunneling spectroscopy measurements of the differential conductance." These spectra show a series of resonances labeled by the principal quantum number n. Spatial maps reveal the wave functions of these quantized states, which have n lobes and n - 1 nodes along the chain, exactly as expected for a quantum-mechanical electron in a box. For the 22-atom chain example, the states up to n = 6 are shown.
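The n-lobes, n−1-nodes pattern is exactly what the textbook particle-in-a-box wave functions ψ_n(x) = √(2/L)·sin(nπx/L) predict. A quick Python check counts the interior sign changes of ψ_n (illustrative math only, not the InAs chain data).

```python
import math

# Textbook particle-in-a-box check of the pattern seen in the
# spectra: psi_n(x) = sqrt(2/L) * sin(n*pi*x/L) has n lobes and
# n - 1 interior nodes. (Illustrative; not the InAs chain data.)

def nodes(n, L=1.0, samples=1000):
    """Count interior sign changes of psi_n on (0, L)."""
    xs = [L * (i + 0.5) / samples for i in range(samples)]
    psi = [math.sqrt(2 / L) * math.sin(n * math.pi * x / L) for x in xs]
    return sum(1 for a, b in zip(psi, psi[1:]) if a * b < 0)

for n in (1, 2, 6):
    print(n, nodes(n))  # each psi_n has exactly n - 1 interior nodes
```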
Sexual reproduction is an ancient feature of life on earth, and the familiar X and Y chromosomes in humans and other model species have led to the impression that sex determination mechanisms are old and conserved. In fact, males and females are determined by diverse mechanisms that evolve rapidly in many taxa. Yet this diversity in primary sex-determining signals is coupled with conserved molecular pathways that trigger male or female development. Conflicting selection on different parts of the genome and on the two sexes may drive many of these transitions, but few systems with rapid turnover of sex determination mechanisms have been rigorously studied. Here we survey our current understanding of how and why sex determination evolves in animals and plants and identify important gaps in our knowledge that present exciting research opportunities to characterize the evolutionary forces and molecular pathways underlying the evolution of sex determination.