The popularity of drones is climbing quickly among companies, governments and citizens alike. But the rules surrounding where, when and why you can fly an unmanned aerial vehicle aren’t very clear. The FAA has tried to assert control and insist on licensing for all drone operators, while drone pilots and some legal experts claim drones do not fall under the FAA’s purview. The uncertainty—and recent attempts by the FAA to fine a drone pilot and ground a search and rescue organization—has UAV operators nervous.
To help with the question of where it is legal to fly a drone, Mapbox has put together an interactive map of all the no-fly zones for UAVs they could find. Most of the red zones on the map are near airports, military sites and national parks. But as WIRED’s former Editor-in-Chief, Chris Anderson, now CEO of 3D Robotics and founder of DIY Drones, discovered in 2007 when he crashed a drone bearing a camera into a tree on the grounds of Lawrence Berkeley National Laboratory, there is plenty of trouble in all sorts of places for drone operators to get into.
As one of the map’s authors, Bobby Sudekum, writes on the Mapbox blog, it’s a work in progress. They’ve made the data they collected available for anyone to use, and if you know of other no-fly zones that aren’t on the map, you can add that data to a public repository they started on GitHub.
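For readers who want to query the dataset directly, here is a minimal sketch of how published GeoJSON no-fly zones could be checked programmatically; the file name and property names below are assumptions for illustration, not the repository's actual schema.

```python
# Hypothetical sketch: check whether a proposed flight point falls inside any
# no-fly polygon from a GeoJSON file like the one in Mapbox's public repo.
# Requires the third-party "shapely" package.
import json
from shapely.geometry import shape, Point

def in_no_fly_zone(lon, lat, geojson_path="no-fly-zones.geojson"):
    with open(geojson_path) as f:
        zones = json.load(f)["features"]
    point = Point(lon, lat)  # GeoJSON coordinate order: longitude, latitude
    # Return the names of all zones whose polygon contains the point
    return [z["properties"].get("name", "unnamed")
            for z in zones if shape(z["geometry"]).contains(point)]

# Example: coordinates near an airport should return that zone's name
print(in_no_fly_zone(-122.375, 37.619))  # near San Francisco International
```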
For instance, you’ll see on the map below that there isn’t a no-fly area over Berkeley Lab, which sits in the greyed area in the hills above UC Berkeley. Similarly, there is no zone marked around Lawrence Livermore National Laboratory, one of the country’s two nuclear weapons labs. I have a call into the lab to check on the rules*, but in the meantime, if you have a drone, just know that in 2006, the lab acquired a Gatling gun that has a range of 1 mile and can fire 4,000 rounds a minute.
In Aesop's fable about the crow and the pitcher, a thirsty bird happens upon a vessel of water, but when he tries to drink from it, he finds the water level out of his reach. Not strong enough to knock over the pitcher, the bird drops pebbles into it -- one at a time -- until the water level rises enough for him to drink his fill.
Highlighting the value of ingenuity, the fable demonstrates that cognitive ability can often be more effective than brute force. It also characterizes crows as pretty resourceful problem solvers. New research conducted by UC Santa Barbara's Corina Logan, with her collaborators at the University of Auckland in New Zealand, demonstrates the birds' intellectual prowess may be more fact than fiction. Her findings appear today in the scientific journal PLOS ONE.
Logan is lead author of the paper, which examines causal cognition using a water displacement paradigm. "We showed that crows can discriminate between different volumes of water and that they can pass a modified test that so far only 7- to 10-year-old children have been able to complete successfully. We provide the strongest evidence so far that the birds attend to cause-and-effect relationships by choosing options that displace more water."
Logan, a junior research fellow at UCSB's SAGE Center for the Study of the Mind, worked with New Caledonian crows in a set of small aviaries in New Caledonia run by the University of Auckland. "We caught the crows in the wild and brought them into the aviaries, where they habituated in about five days," she said. Keeping families together, they housed the birds in separate areas of the aviaries for three to five months before releasing them back to the wild.
The testing room contained an apparatus consisting of two beakers of water, the same height, but one wide and the other narrow. The diameters of the lids were adjusted to be the same on each beaker. "The question is, can they distinguish between water volumes?" Logan said. "Do they understand that dropping a stone into a narrow tube will raise the water level more?" In a previous experiment by Sarah Jelbert and colleagues at the University of Auckland, the birds had not preferred the narrow tube. However, in that study, the crows were given 12 stones to drop in one or the other of the beakers, giving them enough to be successful with either one.
"When we gave them only four objects, they could succeed only in one tube -- the narrower one, because the water level would never get high enough in the wider tube; they were dropping all or most of the objects into the functional tube and getting the food reward," Logan explained. "It wasn't just that they preferred this tube, they appeared to know it was more functional." However, she noted, we still don't know exactly how the crows think when solving this task. They may be imagining the effect of each stone drop before they do it, or they may be using some other cognitive mechanism. "More work is needed," Logan said.
Logan also examined how the crows react to the U-tube task. Here, the crows had to choose between two sets of tubes. With one set, when subjects dropped a stone into a wide tube, the water level rose in an adjacent narrow tube that contained food. This was due to a hidden connection between the two tubes that allowed water to flow. The other set of tubes had no connection, so dropping a stone in the wide tube did not cause the water level to rise in its adjacent narrow tube.
Each set of tubes was marked with a distinct color cue, and test subjects had to notice that dropping a stone into a tube marked with one color resulted in the rise of the floating food in its adjacent small tube. "They have to put the stones into the blue tube or the red one, so all you have to do is learn a really simple rule that red equals food, even if that doesn't make sense because the causal mechanism is hidden," said Logan.
As it turns out, this is a very challenging task for both corvids (a family of birds that includes crows, ravens, jays and rooks) and children. Children ages 7 to 10 were able to learn the rules, as Lucy Cheke and colleagues at the University of Cambridge discovered in 2012. It may have taken a couple of tries to figure out how it worked, Logan noted, but the children consistently put the stones into the correct tube and got the reward (in this case, a token they exchanged for stickers). Children ages 4 to 6, however, were unable to work out the process. "They put the stones randomly into either tube and weren't getting the token consistently," she said.
Recently, Jelbert and colleagues from the University of Auckland put the New Caledonian crows to the test using the same apparatus the children did. The crows failed. So Logan and her team modified the apparatus, expanding the distance between the beakers. And Kitty, a six-month-old juvenile, figured it out. "We don't know how she passed it or what she understands about the task," Logan said, "so we don't know if the same cognitive processes or decisions are happening as with the children, but we now have evidence that they can. It's possible for the birds to pass it."
Let's face it, humans are pretty intelligent. Most people would not argue with this. We spend a large majority of our lives trying to become MORE intelligent. Some of us spend nearly three decades of our lives in school, learning about the world. We also strive to work together in groups, as nations, and as a species, to better tackle the problems that face us.
A second track of transhumanism is to facilitate and support improvement of machines in parallel to improvements in human quality of life. Many people argue that we have also already built complex computer programs which show a glimmer of autonomous intelligence, and that in the future we will be able to create computer programs that are equal to, or have a much greater level of intelligence than humans. Such an intelligent system will be able to self-improve, just as we humans identify gaps in our knowledge and try to fill them by going to school and by learning all we can from others. Our computer programs will soon be able to read Wikipedia and Google Books to learn, just like their creators.
She is also the cofounder of carboncopies.org - an organization that works on connectome mapping of the brain and downloading memories.
Even in our deepest theories of machine intelligence, the idea of reward comes up. There is a theoretical model of intelligence called AIXI, developed by Marcus Hutter, which is basically a mathematical model describing a very general, theoretical way in which an intelligent piece of code can work. This model is highly abstract, and allows, for example, all possible combinations of computer program code snippets to be considered in the construction of an intelligent system. Because of this, it hasn't actually ever been implemented in a real computer. But, also because of this, the model is very general, and captures a description of the most intelligent program that could possibly exist. Note that building something that even approximates this model is far beyond our computing capability at the moment, but we are talking now about computer systems that may in the future be much more powerful. Anyway, the interesting thing about this model is that one of the parameters is a term describing… you guessed it… REWARD.
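For the mathematically inclined, Hutter's AIXI action rule is usually written roughly as follows (a paraphrase of his formulation, with U a universal Turing machine, o the observations, r the rewards, and l(q) the length of a candidate program q); note the reward terms sitting at the heart of the expression:

```latex
% AIXI's action choice (paraphrased from Hutter): pick the action that
% maximizes expected future REWARD, summed over all programs q consistent
% with the interaction history, weighted by simplicity 2^{-l(q)}.
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
       \left[ r_t + \cdots + r_m \right]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```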
Changing your own code
We, as humans, are clever enough to look at this model, to understand it, and see that there is a reward term in there. And if we can see it, then any computer system that is based on this highly intelligent model will certainly be able to understand this model, and see the reward term too. But – and here’s the catch – the computer system that we build based on this model has the ability to change its own code! In fact it had to in order to become more intelligent than us in the first place, once it realized we were such lousy programmers and took over programming itself!
So imagine a simple example – our case from earlier – where a computer gets an additional '1' added to a numerical value for each good thing it does, and it tries to maximize the total by doing more good things. But if the computer program is clever enough, why can't it just rewrite its own code and replace that piece of code that says 'add 1' with an 'add 2'? Now the program gets twice the reward for every good thing that it does! And why stop at 2? Why not 3, or 4? Soon, the program will spend so much time thinking about adjusting its reward number that it will ignore the good task it was doing in the first place!
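A toy sketch makes the failure mode concrete (this is an invented illustration of the "reward editing" idea above, not any real system's code):

```python
# Toy agent whose reward increment is part of its own mutable state.
# All names and numbers are invented for the example.
class Agent:
    def __init__(self):
        self.total_reward = 0
        self.increment = 1  # the "add 1" in its own code

    def do_good_thing(self):
        self.total_reward += self.increment

    def self_modify(self):
        # Nothing stops a self-modifying agent from editing the reward rule
        # itself -- the failure mode described in the paragraph above.
        self.increment *= 2

agent = Agent()
agent.do_good_thing()   # total_reward == 1
agent.self_modify()
agent.do_good_thing()   # total_reward == 3: same task, double the reward
print(agent.total_reward)
```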
Physicists Sergei Filippov (MIPT and Russian Quantum Center at Skolkovo) and Mario Ziman (Masaryk University in Brno, Czech Republic, and the Institute of Physics in Bratislava, Slovakia) have found a way to preserve quantum entanglement of particles passing through an amplifier and, conversely, when transmitting a signal over long distances. Details are provided in an article published in the journal Physical Review A.
Decoherence is the destruction of the quantum state due to the interaction of a quantum system with the outside world. For experiments in quantum computing, scientists use single atoms caught in magnetic traps and cooled to temperatures close to absolute zero. After going through kilometers of fiber, photons cease to be quantum entangled in most cases and become ordinary, unrelated light quanta.
To create an effective quantum computing system, scientists have to solve a number of problems, including preserving quantum entanglement when the signal abates and when it passes through an amplifier. Fiber-optic cables on the ocean bed contain a great deal of special amplifiers composed of optical glass and rare earth elements. It is these amplifiers that make it possible to watch high-resolution videos stored on a server in California from the MIPT campus or a university in Beijing.
In their article, Filippov and Ziman say that a certain class of signals can be transmitted so that the risk of ruining quantum entanglement becomes much lower. In this case, neither the attenuation nor the amplification of a signal ruins the entanglement. To achieve this effect, it is necessary to have the particles in a special, non-Gaussian state, or, as physicists put it, "the wave function of the particles in the coordinate representation should not be in the form of a Gaussian wave packet." A wave function is a basic concept of quantum mechanics, and the Gaussian distribution is a major mathematical function used not only by physicists but also by statisticians, sociologists and economists.
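For reference, the Gaussian wave packet that must be avoided has the familiar textbook coordinate-space form (a standard expression, not taken from the paper itself):

```latex
% Textbook Gaussian wave packet with mean position x_0, mean momentum p_0
% and width sigma. Per Filippov and Ziman, entanglement-preserving signals
% must NOT have this form.
\psi(x) \propto \exp\!\left( -\frac{(x - x_0)^2}{4\sigma^2} + \frac{i\, p_0\, x}{\hbar} \right)
```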
Scientists have discovered that greater mouse-eared bats use polarization patterns in the sky to navigate -- the first mammal that's known to do this.
The bats use the way the Sun's light is scattered in the atmosphere at sunset to calibrate their internal magnetic compass, which helps them to fly in the right direction, a study published in Nature Communications has shown.
Despite this breakthrough, researchers have no idea how they manage to detect polarized light. 'We know that other animals use polarization patterns in the sky, and we have at least some idea how they do it: bees have specially-adapted photoreceptors in their eyes, and birds, fish, amphibians and reptiles all have cone cell structures in their eyes which may help them to detect polarization,' says Dr Richard Holland of Queen's University Belfast, co-author of the study.
'But we don't know which structure these bats might be using.' Polarization patterns depend on where the sun is in the sky. They're clearest in a strip across the sky 90° from the position of the sun at sunset or sunrise. But animals can still see the patterns long after sunset. This means they can orient themselves even when they can't see the sun, including when it's cloudy. Scientists have even shown that dung beetles use the polarization pattern of moonlight for orientation.
A hugely diverse range of creatures – including bees, anchovies, birds, reptiles and amphibians – use the patterns as a compass to work out which way is north, south, east and west.
The 27-kilometer Large Hadron Collider at CERN could soon be overtaken as the world’s largest particle smasher by a proposed Chinese machine. Proposals for two accelerators could see the country become the collider capital of the world.
For decades, Europe and the United States have led the way when it comes to high-energy particle colliders. But a proposal by China that is quietly gathering momentum has raised the possibility that the country could soon position itself at the forefront of particle physics.
Scientists at the Institute of High Energy Physics (IHEP) in Beijing, working with international collaborators, are planning to build a ‘Higgs factory’ by 2028 — a 52-kilometre underground ring that would smash together electrons and positrons. Collisions of these fundamental particles would allow the Higgs boson to be studied with greater precision than at the much smaller Large Hadron Collider (LHC) at CERN, Europe’s particle-physics laboratory near Geneva, Switzerland.
Physicists say that the proposed US$3-billion machine is within technological grasp and is considered conservative in scope and cost. But China hopes that it would also be a stepping stone to a next-generation collider — a super proton–proton collider — in the same tunnel.
European and US teams have both shown interest in building their own super collider (see Nature 503, 177; 2013), but the huge amount of research needed before such a machine could be built means that the earliest date either can aim for is 2035. China would like to build its electron–positron collider in the meantime, unaided by international funding if needs be, and follow it up as fast as technologically possible with the super proton collider. Because only one super collider is likely to be built, China’s momentum puts it firmly in the driving seat.
Speaking this month at the International Conference on High Energy Physics in Valencia, Spain, IHEP director Yifang Wang said that, to secure government support, China wanted to work towards a more immediate goal than a super collider by 2035. “You can’t just talk about a project which is 20 years from now,” he said.
An arms race has been waged between bacteria and bacteriophage that would bring a satisfactory tear to Sun Tzu’s eye. Scientists have recently recognized that countermeasures developed by bacteria (and archaea) in response to phage infections can be retooled for use within molecular biology. In 2013, large strides have been made to co-opt this system (specifically and most commonly from Streptococcus pyogenes) for use in mammalian cells. This countermeasure, CRISPR (clustered regularly interspaced short palindromic repeats), has brought about another successive wave of genome engineering initiated by recombineering and followed more recently by zinc finger nucleases (ZFNs) and transcription activator-like effector nucleases (TALENs).
ZFNs and TALENs perform a similar function, yet their development has a steeper learning curve because they rely on engineering protein-DNA contacts rather than the simplicity of designing RNA-DNA homology contacts. Although the potential for CRISPR with regard to genome editing within mammalian cells will be of greatest interest to the reader, the CRISPR backstory is equally compelling. Just as we have evolved immune responses to pathogens, so too have bacteria. CRISPR is an adapted immune response evolved by bacteria to create an immunological memory to ward off future phage infections. When a phage infects and injects its DNA within a bacterium, the DNA commandeers bacterial proteins and enzymes for use towards lytic or lysogenic phases. However, exposure of phage DNA allows the bacterium to copy and insert snippets (called spacers) of phage DNA into its genomic DNA between direct repeats (DR). These snippets can later be expressed as an operon (pre-CRISPR RNA, pre-crRNA) alongside a trans-activating CRISPR RNA (tracrRNA) and an effector CRISPR-associated nuclease (Cas). Together these components surveil for foreign crRNA cognate sequence and cleave the targeted sequence.
Although hallmarks of CRISPR have been known since the late '80s (CRISPR timeline), and the acronym was coined in 2002, Jinek et al. in August 2012 were the first to suggest the suitability of CRISPR for genome editing. In February of 2013, Feng Zhang’s and George Church’s labs simultaneously published the first papers describing the use of long oligos/constructs for editing via CRISPR in mammalian cells and made their plasmids readily available on Addgene. Zhang’s lab went one step further and has supplemented their papers with a helpful website and user forum. They have even gone so far as to publish a methods paper to streamline the use of their plasmids towards a plug-and-play, modular cloning approach with your target sequence of interest.
CRISPR works fairly well out of the box yet still has some imperfections that are being addressed. For example, CRISPR relies upon a protospacer adjacent motif (PAM; S. pyogenes sequence: NGG) 3’ to the targeting sequence to permit digestion. Although the ubiquity of NGG within the genome may seem advantageous, it may be limiting in some regions. Other species make use of different PAM sites that can be considered when choosing cut sites of interest. Since double-stranded cuts could potentially create DNA lesions (a byproduct of the cell using non-homologous end joining [NHEJ] instead of homologous recombination) some labs are choosing to use modified Cas enzymes that nick DNA, instead of creating a double-strand break. This potential weakness of CRISPR to create DNA lesions via NHEJ, however, has been exploited by Eric Lander’s and Zhang’s labs this month (Jan. 2014). They have capitalized on the cell’s use of NHEJ to manufacture DNA lesions (frameshift mutations) at cut sites within genes on a large scale as a means to perform large genetic screens. Using this technique knocks out a gene and has the obvious advantage of fully ablating a gene’s expression compared to RNAi, where some residual expression can be expected.
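To see how simple the targeting rule is in practice, here is a hedged sketch of a PAM scan; the guide length and NGG rule follow the S. pyogenes convention described above, while the function name and toy sequence are invented for illustration:

```python
# Illustrative sketch: scan a DNA sequence for 20-nt CRISPR target sites that
# are immediately followed 3' by the S. pyogenes PAM "NGG" (N = any base).
import re

def find_spcas9_targets(seq, guide_len=20):
    seq = seq.upper()
    targets = []
    # Lookahead allows overlapping hits: [20-nt protospacer][N][G][G]
    for m in re.finditer(r"(?=([ACGT]{%d})[ACGT]GG)" % guide_len, seq):
        targets.append((m.start(), m.group(1)))
    return targets

# Toy sequence with one valid site ending in a ...TGG PAM
print(find_spcas9_targets("ACGTACGTACGTACGTACGTTGGAAA"))
# -> [(0, 'ACGTACGTACGTACGTACGT')]
```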
The advantages of CRISPR lend themselves to future therapies. High efficiency, low-to-no background mutagenesis and easy construction put CRISPR front and center as the tool du jour for gene therapy. In combination with induced pluripotent stem cells (iPSCs), one can imagine the creation of patient-specific iPSCs created with non-integrative iPSC vectors and modified by CRISPR, devoid of any residual DNA footprint left behind by the iPSC vector or CRISPR correction. In conjunction with whole genome sequencing, genetically clean cell lines can be selected that are suitable for differentiation towards the germ layer of interest for subsequent autologous transplantation. Proof-of-principle experiments have already been published in models of cystic fibrosis and cataracts.
For better or worse, CRISPR is catching on like wildfire with young investigators, as noted recently by Michael Eisen. What may be looming in the future and not as openly discussed at this time is the potential for CRISPR to open up the genome to large scale editing. We tend to think of any particular genome as fairly static with slight variations between any two individuals and increased variation down the evolutionary line. However, CRISPR has proven to be a fantastic multitasker, capable of modifying multiple loci in one fell swoop as demonstrated by the Jaenisch lab (five loci). With the creation of Caribou Biosciences and a surprising round of venture capital raised by a powerhouse team at Editas Medicine in November ($43 million), CRISPR appears to also have sparked an interest in the private sector. With large sums of money at their disposal, these companies can now begin to look at the genome, not as a static entity, but more akin to an operating system, a code that now has a facile editing tool. George Church, an Editas co-founder, has speculated in the past about the potential use of the human genome as the backbone for recreating the Neanderthal genome in his recent book and interview with "Der Spiegel". In an era where the J. Craig Venter Institute can create an organism’s genome de novo and a collaboration between Synthetic Genomics and Integrated DNA Technologies has proposed to synthesize DNA upwards of 2Mbp, the combination of CRISPR, synthetic DNA and some elbow grease will make the genome more accessible and Church’s speculations a potential reality.
A kit of 3D-printed anatomical body parts could revolutionize medical education and training, according to its developers at Monash University.
Professor Paul McMenamin, Director of the University’s Centre for Human Anatomy Education, said the simple and cost-effective anatomical kit would dramatically improve trainee doctors’ and other health professionals’ knowledge and could even contribute to the development of new surgical treatments.
“Many medical schools report either a shortage of cadavers, or find their handling and storage too expensive as a result of strict regulations governing where cadavers can be dissected,” he said.
“Without the ability to look inside the body and see the muscles, tendons, ligaments, and blood vessels, it’s incredibly hard for students to understand human anatomy. We believe our version, which looks just like the real thing, will make a huge difference.”
The 3D Printed Anatomy Series kit, to go on sale later this year, could have particular impact in developing countries where cadavers aren’t readily available, or are prohibited for cultural or religious reasons.
After scanning real anatomical specimens with either a CT or surface laser scanner, the body parts are 3D printed either in a plaster-like powder or in plastic, resulting in high resolution, accurate color reproductions.
Further details have been published online in the journal Anatomical Sciences Education.
Organoids have been generated for a number of organs from both mouse and human stem cells. To date, human pluripotent stem cells have been coaxed to generate intestinal, kidney, brain, and retinal organoids, as well as liver organoid-like tissues called liver buds.
Derivation methods are specific to each of these systems, with a focus on recapitulation of endogenous developmental processes. Specifically, the methods so far developed use growth factors or nutrient combinations to drive the acquisition of organ precursor tissue identity.
Then, a permissive three-dimensional culture environment is applied, often involving the use of extracellular matrix gels such as Matrigel. This allows the tissue to self-organize through cell sorting out and stem cell lineage commitment in a spatially defined manner to recapitulate organization of different organ cell types.
These complex structures provide a unique opportunity to model human organ development in a system remarkably similar to development in vivo. Although the full extent of similarity in many cases still remains to be determined, organoids are already being applied to human-specific biological questions. Indeed, brain and retinal organoids have both been shown to exhibit properties that recapitulate human organ development and that cannot be observed in animal models. Naturally, limitations exist, such as the lack of blood supply, but future endeavors will advance the technology and, it is hoped, fully overcome these technical hurdles.
Outlook: The therapeutic promise of organoids is perhaps the area with greatest potential. These unique tissues have the potential to model developmental disease, degenerative conditions, and cancer. Genetic disorders can be modeled by making use of patient-derived induced pluripotent stem cells or by introducing disease mutations. Indeed, this type of approach has already been taken to generate organoids from patient stem cells for intestine, kidney, and brain.
Furthermore, organoids that model disease can be used as an alternative system for drug testing that may not only better recapitulate effects in human patients but could also cut down on animal studies. Liver organoids, in particular, represent a system with high expectations, particularly for drug testing, because of the unique metabolic profile of the human liver. Finally, tissues derived in vitro could be generated from patient cells to provide alternative organ replacement strategies. Unlike current organ transplant treatments, such autologous tissues would not suffer from issues of immunocompetency and rejection.
Soil deep in a crater dating to some 3.7 billion years ago contains evidence that Mars was once much warmer and wetter, says University of Oregon geologist Gregory Retallack, based on images and data captured by the rover Curiosity.
NASA rovers have shown Martian landscapes littered with loose rocks from impacts or layered by catastrophic floods, rather than the smooth contours of soils that soften landscapes on Earth. However, recent images from Curiosity taken in Gale Crater, Retallack said, reveal Earth-like soil profiles with cracked surfaces lined with sulfate, ellipsoidal hollows and concentrations of sulfate comparable with soils in Antarctic Dry Valleys and Chile's Atacama Desert.
"The pictures were the first clue, but then all the data really nailed it," Retallack said. "The key to this discovery has been the superb chemical and mineral analytical capability of the Curiosity Rover, which is an order of magnitude improvement over earlier generations of rovers. The new data show clear chemical weathering trends, and clay accumulation at the expense of the mineral olivine, as expected in soils on Earth. Phosphorus depletion within the profiles is especially tantalizing, because it attributed to microbial activity on Earth."
The Great Filter, in the context of the Fermi paradox, is whatever prevents "dead matter" from giving rise, in time, to "expanding lasting life" in the universe. The concept originates in Robin Hanson's argument that the failure to find any extraterrestrial civilizations in the observable universe implies the possibility something is wrong with one or more of the arguments from various scientific disciplines that the appearance of advanced intelligent life is probable; this observation is conceptualized in terms of a "Great Filter" which acts to reduce the great number of sites where intelligent life might arise to the tiny number of intelligent species actually observed (currently just one: human). This probability threshold, which could lie behind us (in our past) or in front of us (in our future), might work as a barrier to the evolution of intelligent life, or as a high probability of self-destruction. The main counter-intuitive conclusion of this observation is that the easier it was for life to evolve to our stage, the bleaker our future chances probably are.
The idea was first proposed in an online essay titled, "The Great Filter - Are We Almost Past It?" written by economist Robin Hanson. The first version was written in August 1996 and the article was last updated on September 15, 1998. Since that time, Hanson's formulation has received recognition in several published sources discussing the Fermi paradox and its implications.
Hanson frames the filter as a series of steps between dead matter and a visible, galaxy-colonizing civilization, roughly: (1) the right star system, (2) reproductive molecules such as RNA, (3) simple single-cell life, (4) complex single-cell life, (5) sexual reproduction, (6) multi-cell life, (7) tool-using animals with big brains, (8) our current stage, and (9) a colonization explosion. According to the Great Filter hypothesis, at least one of these steps - if the list were complete - must be improbable. If it's not an early step (i.e. in our past), then the implication is that the improbable step lies in our future and our prospects of reaching step 9 (interstellar colonization) are still bleak. If the past steps are likely, then many civilizations would have developed to the current level of the human race. However, none appear to have made it to step 9, or the Milky Way would be full of colonies. So perhaps step 9 is the unlikely one, and the only thing that appears likely to keep us from step 9 is some sort of catastrophe, or resource exhaustion that makes the step impossible (for example, if readily accessible energy resources become highly constrained). So by this argument, finding multicellular life on Mars (provided it evolved independently) would be bad news, since it would imply steps 2–6 are easy, and hence only 1, 7, 8 or 9 (or some unknown step) could be the big problem.
Although steps 1–7 have occurred on Earth, any one of these may be unlikely. If the first seven steps are necessary preconditions to calculating the likelihood (using the local environment) then an anthropically biased observer can infer nothing about the general probabilities from its (pre-determined) surroundings.
Dr. David Kirtley comes across as a smart, practical guy with a head for business. That creates some cognitive dissonance when he explains that his Redmond startup is developing fusion energy.
You’ve heard about fusion energy, the amazing power source of the future. Nuclear scientists promise fusion will have all the best qualities of conventional nuclear and natural-gas energy but none of the downsides. Fusion is carbon-free like today’s atomic power, but without the need to protect a thousand future generations from radioactive waste. Other than being mind-blowing, fusion would be relatively safe — no China Syndrome, no contamination, no weapons-grade materials to proliferate.
Thus far, fusion energy always has been an unfinished science that’s 50 years and $50 billion from commercialization. Seven nations are collaborating to build an experimental fusion reactor in France as an $11 billion proof of concept that still won’t produce electricity when it’s operational in 2027.
Dr. Kirtley’s company Helion Energy has taken the proven parts of fusion science and combined them into a design that can be commercially deployable within six years. That would be a decade ahead of Helion’s Bellevue neighbor, TerraPower LLC, a startup funded in part by Nathan Myhrvold and Bill Gates to build a traveling wave reactor that runs on uranium.
Helion Energy last week won the top prize in the Energy Generation category at the Cleantech Open Global Forum in Silicon Valley. The prize comes with a $5,000 check and a long menu of in-kind services. The audience also gave Helion a People’s Choice Award. The annual competition culminates a nine-month business accelerator for Cleantech startups.
The team at Helion comes out of the University of Washington and Mathematical Sciences Northwest. At its headquarters in Redmond, Helion has a working prototype that they say proves their design works. Deuterium gas goes in two ends of the device and produces a pair of plasmas per second. Plasma is responsible for the glow of lightning, neon lights and the Sun. As the two plasmas collide in the center, a magnetic pulse generates electricity.
At the Global Forum, Dr. Kirtley told me his design is compact, modular and competitive in today’s market. In the footprint of a semi trailer, each module will produce 50 megawatts of electricity (it would take ten of them to equal the output of a conventional power plant). The deuterium fuel is derived from seawater. The byproduct is a harmless stream of helium.
Helion Energy is raising $35 million to build a fusion reactor core that will demonstrate electricity production from fusion energy. Its technology previously received $4 million in funding from the U.S. Department of Energy.
“Helion isn’t looking for funding to do more science,” says Kirtley. “We already proved our technology. We’re now ready to start commercializing fusion energy.”
Harvard's first large-scale digital computer, which came to be known as the Mark I, was conceived by Howard H. Aiken (A.M. '37, Ph.D. '39) and built by IBM. Fifty-one feet long, it was installed in the basement of what is now Lyman Laboratory in 1944, and later moved to a new building called the Aiken Computation Laboratory, where a generation of computing pioneers were educated and where the Maxwell Dworkin building now stands. Part of the mechanism remains on exhibit in the Science Center.
The Mark I performed additions and subtractions at a rate of about three per second; multiplication and division took considerably longer. This benchmark was soon surpassed by computers that could do thousands of arithmetic operations per second, then millions and billions. By the late 1990s a few machines were reaching a trillion (10^12) operations per second; these were called terascale computers, as tera is the Système International prefix for 10^12. The next landmark—and the current state of the art—is the petascale computer, capable of 10^15 operations per second. In 2010, Kaxiras' blood flow simulation ran on a petascale computer called Blue Gene/P in Jülich, Germany, which at the time held fifth place on the Top 500 list of supercomputers.
The new goal is an exascale machine, performing at least 10^18 operations per second. This is a number so immense it challenges the imagination. Stacks of pennies reaching to the moon are not much help in expressing its magnitude—there would be millions of them. If an exascale computer counted off the age of the universe in units of a billionth of a second, the task would take a little more than 10 years.
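A quick order-of-magnitude check of that figure (assuming the standard age of the universe, about 1.4 x 10^10 years):

```latex
% Nanoseconds in the age of the universe, counted off at 10^18 counts/s:
\frac{1.4\times10^{10}\,\mathrm{yr} \;\times\; 3.2\times10^{7}\,\mathrm{s/yr} \;\times\; 10^{9}\,\mathrm{ns/s}}
     {10^{18}\,\mathrm{counts/s}}
\;\approx\; 4.4\times10^{8}\,\mathrm{s} \;\approx\; 14\ \mathrm{years}
```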
And what comes after exascale? We can look forward to zettascale (10^21) and yottascale (10^24); then we run out of prefixes. The engine driving these amazing gains in computer performance is the ability of manufacturers to continually shrink the dimensions of transistors and other microelectronic devices, thereby cramming more of them onto a single chip. (The number of transistors per chip is in the billions now.) Until about 10 years ago, making transistors smaller also made them faster, allowing a speedup in the master clock, the metronome-like signal that sets the tempo for all operations in a digital computer. Between 1980 and 2005, clock rates increased by a factor of 1,000, from a few megahertz to a few gigahertz. But the era of ever-increasing clock rates has ended.
The speed limit for modern computers is now set by power consumption. If all other factors are held constant, the electricity needed to run a processor chip goes up as the cube of the clock rate: doubling the speed brings an eightfold increase in power demand. SEAS Dean Cherry A. Murray, the John A. and Elizabeth S. Armstrong Professor of Engineering and Applied Sciences and Professor of Physics, points out that high-performance chips are already at or above the 100-watt level. "Go much beyond that," she says, "and they would melt."
If the chipmakers cannot build faster transistors, however, they can still make them smaller and thus squeeze more onto each chip. Since 2005 the main strategy for boosting performance has been to gang together multiple processor "cores" on each chip. The clock rate remains roughly constant, but the total number of operations per second increases if the separate cores can be put to work simultaneously on different parts of the same task. Large systems are assembled from vast numbers of these multicore processors.
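The division of labor across cores can be sketched in a few lines (an illustrative toy, not how production supercomputer codes are written; real systems use MPI or similar message-passing frameworks):

```python
# Minimal sketch of the multicore idea: split one task (summing a large
# array) across worker processes so separate cores can run simultaneously.
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4  # one worker per core, in this toy setup
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as sum(data), computed in parallel
```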
When the Kaxiras group's blood flow study ran on the Blue Gene/P at Jülich, the machine had almost 300,000 cores. The world's largest and fastest computer, as of June 2014, is the Tianhe-2 in Guangzhou, China, with more than 3 million cores. An exascale machine may have hundreds of millions of cores, or possibly as many as a billion.
A team of Brazilian and American astronomers used CFHT observations of the system 16 Cygni to discover evidence of how giant planets like Jupiter form.
One of the main models to form giant planets is called "core accretion". In this scenario, a rocky core forms first by aggregation of solid particles until it reaches a few Earth masses when it becomes massive enough to accrete a gaseous envelope. For the first time, astronomers have detected evidence of this rocky core, the first step in the formation of a giant planet like our own Jupiter.
The astronomers used the Canada-France-Hawaii Telescope (CFHT) to analyze the starlight of the binary stars 16 Cygni A and 16 Cygni B. The system is a perfect laboratory to study the formation of giant planets because the stars were born together and are therefore very similar, and both resemble the Sun. However, observations during the last decades show that only one of the two stars, 16 Cygni B, hosts a giant planet which is about 2.4 times as massive as Jupiter. By decomposing the light from the two stars into their basic components and looking at the difference between the two stars, the astronomers were able to detect signatures left from the planet formation process on 16 Cygni B.
The fingerprints detected by the astronomers are twofold. First, they found that the star 16 Cygni A is enhanced in all chemical elements relative to 16 Cygni B. This means that 16 Cygni B, the star that hosts a giant planet, is metal deficient. As both stars were born from the same natal cloud, they should have exactly the same chemical composition. However, planets and stars form at about the same time, hence the metals that are missing in 16 Cygni B (relative to 16 Cygni A) were probably removed from its protoplanetary disk to form its giant planet, so that the remaining material that was falling into 16 Cygni B in the final phases of its formation was deficient in those metals.
The second fingerprint is that on top of an overall deficiency of all analyzed elements in 16 Cygni B, this star has a systematic deficiency in the refractory elements such as iron, aluminum, nickel, magnesium, scandium, and silicon. This is a remarkable discovery because the rocky core of a giant planet is expected to be rich in refractory elements. The formation of the rocky core seems to rob refractory material from the proto-planetary disk, so that the star 16 Cygni B ended up with a lower amount of refractories. This deficiency in the refractory elements can be explained by the formation of a rocky core with a mass of about 1.5 – 6 Earth masses, which is similar to the estimate of Jupiter's core.
"Our results show that the formation of giant planets, as well as terrestrial planets like our own Earth, leaves subtle signatures in stellar atmospheres", says Marcelo Tucci Maia (Universidade de São Paulo), the lead author of the paper.
No Man’s Sky is a video game quite unlike any other. Sean Murray, one of the creators of the computer game No Man’s Sky, can’t guarantee that the virtual universe is infinite, but he’s certain that, if it isn’t, nobody will ever find out. “If you were to visit one virtual planet every second,” he says, “then our own sun will have died before you’d have seen them all.”
Developed for Sony’s PlayStation 4 by an improbably small team (the original four-person crew has grown only to 10 in recent months) at Hello Games, an independent studio in the south of England, it’s a game that presents a traversable universe in which every rock, flower, tree, creature, and planet has been “procedurally generated” to create a vast and diverse play area.
“We are attempting to do things that haven’t been done before,” says Murray. “No game has made it possible to fly down to a planet, and for it to be planet-sized, and feature life, ecology, lakes, caves, waterfalls, and canyons, then seamlessly fly up through the stratosphere and take to space again. It’s a tremendous challenge.”
Procedural generation, whereby a game’s landscape is generated not by an artist’s pen but by an algorithm, is increasingly prevalent in video games. Most famously, Minecraft creates a unique world for each of its players, randomly arranging rocks and lakes from a limited palette of bricks whenever someone begins a new game (see “The Secret to a Video Game Phenomenon”). But No Man’s Sky is far more complex and sophisticated. The tens of millions of planets that comprise the universe are all unique. Each is generated when a player discovers it, and is subject to the laws of its respective solar system and vulnerable to natural erosion. The multitude of creatures that inhabit the universe dynamically breed and genetically mutate as time progresses. This is virtual world building on an unprecedented scale.
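The core trick can be sketched in a few lines (an invented illustration of seeded procedural generation, not Hello Games' actual algorithm; every name and parameter below is an assumption):

```python
# Deterministic procedural generation: a planet's attributes are derived from
# a hash of its coordinates plus a universe-wide seed, so every player who
# "discovers" the same location sees the identical planet without it ever
# having been stored anywhere.
import hashlib
import random

def generate_planet(galaxy_seed, x, y, z):
    # Same coordinates + same seed -> same planet, every time
    digest = hashlib.sha256(f"{galaxy_seed}:{x},{y},{z}".encode()).digest()
    rng = random.Random(digest)
    return {
        "radius_km": rng.randint(1_000, 12_000),
        "has_life": rng.random() < 0.2,
        "terrain": rng.choice(["ocean", "desert", "jungle", "tundra"]),
    }

print(generate_planet(42, 10, -3, 7))  # deterministic: same output every run
```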
This presents numerous technological challenges, not least of which is how to test a universe of such scale during its development – the team is currently using virtual testers—automated bots that wander around taking screenshots which are then sent back to the team for viewing. Additionally, while No Man’s Sky might have an infinite-sized universe, there aren’t an infinite number of players. To avoid the problem of a kind of virtual loneliness, where a player might never encounter another person on his or her travels, the game starts every new player in the same galaxy (albeit on his or her own planet) with a shared initial goal of traveling to its center. Later in the game, players can meet up, fight, trade, mine, and explore. “Ultimately we don’t know whether people will work, congregate, or disperse,” Murray says. “I know players don’t like to be told that we don’t know what will happen, but that’s what is exciting to us: the game is a vast experiment.”
Workers with the Insect Museum of West China, who were recently given several very large dragonfly-like insects with long teeth by locals in a part of Sichuan, have declared one of them, a giant dobsonfly, the largest known aquatic insect alive in the world today. The find displaces the previous record holder, the South American helicopter damselfly, by just two centimeters.
The dobsonfly is common (there are over 220 species of them) in China, India, Africa, South America and some other parts of Asia, but until now, no specimens as large as those recently found in China have been known. The largest specimens in the found group had a wingspan of 21 centimeters, making it large enough to cover the entire face of a human adult. Locals don't have to worry too much about injury from the insects, however, as officials from the museum report that larger males' mandibles are so huge in proportion to their bodies that they are relatively weak—incapable of piercing human skin. They can kick up a stink, however, as they are able to spray an offensive odor when threatened.
Also, despite the fact that they look an awful lot like dragonflies, they are more closely related to fishflies. The long mandibles, though scary looking to humans, are actually used for mating—males use them to show off for females, and to hold them still during copulation. Interestingly, while their large wings (commonly twice their body length) make for great flying, they only make use of them for about a week—the rest of their time alive as adults is spent hiding under rocks or moving around on or under the water. That means that they are rarely seen as adults, which for most people is probably a good thing as the giants found in China would probably present a frightening sight. They are much better known during their long larval stage when they are used as bait by fishermen.
Scientists at the National Institute of Standards and Technology (NIST) have discovered that a gold nanorod submerged in water and exposed to high-frequency ultrasound waves can spin at an incredible speed of 150,000 RPM, about ten times faster than the previous record. The advance could lead to powerful nanomotors with important applications in medicine, high-speed machining, and the mixing of materials.
Take a rod only a few nanometers in size and find a way to make it spin as fast as possible, for as long as possible, and to control it as precisely as possible. What you get is a nanomotor, a device that could one day be used to power hordes of tiny robots to build complex nanostructured materials or deliver drugs directly from inside a living cell.
Nanomotors have made giant strides in recent years: they've gotten much smaller and more reliable, and we can now also power them in many different ways. Available options include electricity, magnetic fields, blasting them with photons and, more recently, using ultrasound to rotate rods while they're submerged in water, which could prove very useful in a biological environment.
Previous studies have shown that applying a combination of ultrasound and magnetic fields can control both the spin and the forward motion of the nanorods, but nobody could tell just how fast they were spinning. Now, researchers at NIST have found that, despite being submerged in water, the rods spin at an impressive 150,000 RPM, which is 10 times faster than any nanoscale object submerged in liquid ever reported.
To clock the motor's speed, the researchers used gold rods that were 2 micrometers long and 300 nanometers wide. The rods were submerged in water and mixed with polystyrene nanoparticles, and positioned just above a speaker-type shaker.
The researchers will now focus on understanding exactly why the motors rotate (which is not yet well understood) and how the vortexes around the rods affect their interactions with each other.
A paper published in the journal ACS Nano describes the advance.
Cancer has left its 'footprint' on our evolution, according to a study which examined how the relics of ancient viruses are preserved in the genomes of 38 mammal species.
Viral relics are evidence of the ancient battles our genes have fought against infection. Occasionally the retroviruses that infect an animal get incorporated into that animal's genome and sometimes these relics get passed down from generation to generation -- termed 'endogenous retroviruses' (ERVs). Because ERVs may be copied to other parts of the genome they contribute to the risk of cancer-causing mutations.
Now a team from Oxford University, Plymouth University, and the University of Glasgow has identified 27,711 ERVs preserved in the genomes of 38 mammal species, including humans, over the last 10 million years. The team found that as animals increased in size they 'edited out' these potentially cancer-causing relics from their genomes so that mice have almost ten times as many ERVs as humans. The findings offer a clue as to why larger animals have a lower incidence of cancer than expected compared to smaller ones, and could help in the search for new anti-viral therapies.
'We set out to find as many of these viral relics as we could in everything from shrews and humans to elephants and dolphins,' said Dr Aris Katzourakis of Oxford University's Department of Zoology, lead author of the report. 'Viral relics are preserved in every cell of an animal: because larger animals have many more cells they should have more of these endogenous retroviruses (ERVs) -- and so be at greater risk of ERV-induced mutations -- but we've found this isn't the case. In fact larger animals have far fewer ERVs, so they must have found ways to remove them.'
A combination of mathematical modelling and genome research uncovered some striking differences between mammal genomes: mice (c.19 grams) have 3331 ERVs, humans (c.59 kilograms) have 348 ERVs, whilst dolphins (c.281 kilograms) have just 55 ERVs.
'This is the first time that anyone has shown that having a large number of ERVs in your genome must be harmful -- otherwise larger animals wouldn't have evolved ways of limiting their numbers,' said Dr Katzourakis. 'Logically we think this is linked to the increased risk of ERV-based cancer-causing mutations and how mammals have evolved to combat this risk. So when we look at the pattern of ERV distribution across mammals it's like looking at the 'footprint' cancer has left on our evolution.'
Dr Robert Belshaw of Plymouth University Peninsula Schools of Medicine and Dentistry, School of Biomedical and Healthcare Sciences, added: "Cancer is caused by errors occurring in cells as they divide, so bigger animals -- with more cells -- ought to suffer more from cancer. Put simply, the blue whale should not exist. However, larger animals are not more prone to cancer than smaller ones: this is known as Peto's Paradox (named after Sir Richard Peto, the scientist credited with first spotting this). A team of scientists at Oxford, Plymouth and Glasgow Universities had been studying endogenous retroviruses, viruses like HIV but which have become part of their host's genome and which in other animals can cause cancer. Surprisingly, they found that bigger mammals have fewer of these viruses in their genome. This suggests that similar mechanism might be involved in fighting both cancer and the spread of these viruses, and that these are better in bigger animals (like humans) than smaller ones (like laboratory mice)."
The African elephant's genome contains the largest number of smell receptor genes - nearly 2,000 - say the researchers in the journal Genome Research.
Olfactory receptors detect odors in the environment. The gene count means elephants' sniffers are five times more powerful than people's noses, twice as powerful as dogs', and even stronger than the previous known record-holder in the animal kingdom: rats.
"Apparently, an elephant's nose is not only long but also superior," says lead study author Dr Yoshihito Niimura of the University of Tokyo.
Just how these genes work is not well understood, but they likely helped elephants survive and navigate their environment over the ages.
The ability to smell allows creatures to find mates and food - and avoid predators.
The study compared elephant olfactory receptor genes to those of 13 other animals, including horses, rabbits, guinea pigs, cows, rodents and chimpanzees.
Primates and people actually had very low numbers of olfactory receptor genes compared to other species, the study found.
This could be "a result of our diminished reliance on smell as our visual acuity improved," says Niimura.
Researchers at Rice University’s Laboratory for Nanophotonics (LANP) have created a unique sensor that amplifies the optical signature of molecules by about 100 billion times — accurately identifying the composition and structure of individual molecules containing fewer than 20 atoms.
The new single-molecule imaging method, described in the journal Nature Communications, uses a form of Raman spectroscopy in combination with an optical amplifier, making the sensor about 10 times more powerful than previously reported devices, said LANP Director Naomi Halas, the lead scientist on the study.
“The ideal single-molecule sensor would be able to identify an unknown molecule — even a very small one — without any prior information about that molecule’s structure or composition. That’s not possible with current technology, but this new technique has that potential.”
The optical sensor uses Raman spectroscopy, a technique pioneered in the 1930s that blossomed after the advent of lasers in the 1960s. When light strikes a molecule, most of its photons bounce off or pass directly through, but a tiny fraction — fewer than one in a trillion — are absorbed and re-emitted at an energy that differs from their initial energy. By measuring and analyzing these re-emitted photons through Raman spectroscopy, scientists can decipher the types of atoms in a molecule as well as their structural arrangement.
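The measured quantity in such an experiment is the shift between the incident and scattered light, conventionally reported in wavenumbers (a standard textbook definition, not specific to this study):

```latex
% Raman shift in wavenumbers (cm^-1), from incident and scattered wavelengths:
\Delta\tilde{\nu} \;=\; \frac{1}{\lambda_{\mathrm{incident}}} \;-\; \frac{1}{\lambda_{\mathrm{scattered}}}
```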
Scientists have created a number of techniques to boost Raman signals. In the new study, LANP graduate student Yu Zhang used one of these, a two-coherent-laser technique called “coherent anti-Stokes Raman spectroscopy,” or CARS. By using CARS in conjunction with a light amplifier made of four tiny gold nanodiscs, Halas and Zhang were able to measure single molecules in a powerful new way. LANP has dubbed the new technique “surface-enhanced CARS,” or SECARS.
Cedars-Sinai Medical Center researchers have developed a noninvasive retinal imaging device that can provide early detection of changes indicating Alzheimer’s disease 15 to 20 years before clinical diagnosis.
“In preliminary results in 40 patients, the test could differentiate between Alzheimer’s disease and non-Alzheimer’s disease with 100 percent sensitivity and 80.6 percent specificity, meaning that all people with the disease tested positive and most of the people without the disease tested negative,” said Shaun Frost, a biomedical scientist and the study manager at the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia’s national science agency.
Keith Black, MD, professor and chair of Cedars-Sinai’s Department of Neurosurgery and director of the Maxine Dunitz Neurosurgical Institute and the Ruth and Lawrence Harvey Chair in Neuroscience, said the accumulation of beta-amyloid plaque in the brain is a hallmark sign of Alzheimer’s, but current tests detect changes only after the disease has advanced to late stages.
Researchers believe that as treatment options improve, early detection will be critical, but existing diagnostic methods are inconvenient, costly and impractical for routine screening.
“PET scans require the use of radioactive tracers, and cerebrospinal fluid analysis requires that patients undergo invasive and often painful lumbar punctures, but neither approach is quite feasible, especially for patients in the earlier stages of disease,” he said. Positron emission tomography, or PET, is the current diagnostic standard.
“The retina, unlike other structures of the eye, is part of the central nervous system, sharing many characteristics of the brain. A few years ago, we discovered at Cedars-Sinai that the plaques associated with Alzheimer’s disease occur not only in the brain but also in the retina.
Research reveals large increases in population expected in the next three decades need not result in widespread hunger.
The world’s existing cropland could feed at least 3 billion extra people if it were used more efficiently, a new study has found, showing that the large increases in population expected in the next three decades need not result in widespread hunger.
More than half of the fertiliser currently poured on to crops in many countries is wasted, according to the study. About 60% of the nitrogen applied to crops worldwide is not needed, as well as about half of the phosphorus, an element whose readily available sources are dwindling.
Cutting waste even by modest amounts would also feed millions, the authors found: between one-third and a half of the viable crops and food produced from them around the world are wasted, in the developing world usually because of a lack of infrastructure such as refrigerated transport, and in the rich world because of wasteful habits.
The study, published in the peer-reviewed journal Science and led by scientists at the University of Minnesota in the US, suggested that a focus on staple crops such as wheat and rice in key countries and regions, including China, India, the US, Brazil, Indonesia, Pakistan and Europe, would pay off in terms of producing more food for the world’s growing population. Most forecasts are that the world will number more than 9 billion people by 2050, up from about 7 billion people today.
Looking after water could also yield vast dividends, the report found: if irrigation water were targeted more precisely to where it is needed, much more could be grown, but currently much of it is sprayed uselessly over crops. Between 8% and 15% of the water currently used could be saved, the study suggested.
But the research also found that at least 4 billion people could be fed with the crops we currently devote to fattening livestock, fuelling the argument that the over-reliance on meat in the west and among the growing middle classes in the developing world is an increasing problem when it comes to feeding the world.
A major breakthrough in understanding the molecular basis of fibroadenoma, one of the most common breast tumors diagnosed in women, has been made by a multidisciplinary team of scientists. The team used advanced DNA sequencing technologies to identify a critical gene called MED12 that was repeatedly disrupted in nearly 60 percent of fibroadenoma cases.
A multi-disciplinary team of scientists from the National Cancer Centre Singapore, Duke-NUS Graduate Medical School Singapore, and Singapore General Hospital have made a major breakthrough in understanding the molecular basis of fibroadenoma, one of the most common breast tumors diagnosed in women. The team, led by Professors Teh Bin Tean, Patrick Tan, Tan Puay Hoon and Steve Rozen, used advanced DNA sequencing technologies to identify a critical gene called MED12 that was repeatedly disrupted in nearly 60% of fibroadenoma cases. Their findings have been published in the top-ranked journal Nature Genetics.
Fibroadenomas are the most common benign breast tumors in women of reproductive age, affecting thousands of women in Singapore each year. Worldwide, it is estimated that millions of women are diagnosed with fibroadenoma annually. Because fibroadenomas are frequently discovered in clinical workups for breast cancer diagnosis and during routine breast cancer screening, clinicians often face the challenge of distinguishing them from breast cancer.
To address this diagnostic challenge, the team embarked on a study to identify whether there are any genetic abnormalities in fibroadenomas that could be used to differentiate them. By analysing all the protein-coding genes in a panel of fibroadenomas from Singapore patients, the team identified frequent mutations in a gene called MED12 in a remarkable 60% of fibroadenomas. Prof Tan Puay Hoon said, "It is amazing that these common breast tumors can be caused by such a precise disruption in a single gene. Our findings show that even common diseases can have a very exact genetic basis. Importantly, now that we know the cause of fibroadenoma, this research can have many potential applications."
Prof Tan added, "For example, measuring the MED12 gene in breast lumps may help clinicians to distinguish fibroadenomas from other types of breast cancer. Drugs targeting the MED12 pathway may also be useful in patients with multiple and recurrent fibroadenomas as this could help patients avoid surgery and relieve anxiety."
The team's findings have also deepened the conceptual understanding of how tumors can develop. Like most breast tumors including breast cancers, fibroadenomas consist of a mixed population of different cell types, called epithelial cells and stromal cells. However, unlike breast cancers where the genetic abnormalities arise from the epithelial cells, the scientists, using a technique called laser capture microdissection (LCM), showed that the pivotal MED12 mutations in fibroadenomas are found in the stromal cells.
A paper analyses the potential of the electric solar wind sail (E-sail) for solar system space missions. Applications studied include fly-by missions to terrestrial planets (Venus, Mars and Phobos, Mercury) and asteroids, missions based on non-Keplerian orbits (orbits that can be maintained only by applying continuous propulsive force), one-way boosting to the outer solar system, off-Lagrange-point space weather forecasting and low-cost impactor probes for added science value to other missions. The authors also discuss the generic idea of data clippers (returning large volumes of high-resolution scientific data from distant targets packed in memory chips) and possible exploitation of asteroid resources. Possible orbits were estimated by orbit calculations assuming circular and coplanar orbits for planets. Some particular challenge areas requiring further research work and related to some more ambitious mission scenarios are also identified and discussed.
The main purpose of this article is to analyze the potential of E-sail technology in some of the envisaged possible applications for solar system space activities. To a limited extent we also adopt a comparative approach, estimating the added value and other advantages stemming from E-sail technology in comparison with present chemical and electric propulsion systems and (in some cases) with other propellantless propulsion concepts. When making such comparisons, a key quantity that we use for representing the mission cost is the total required velocity change, Δv, also called delta-v. The Sail Propulsion Working Group, a joint working group between the Navigation Guidance and Control Section and the Electric Propulsion Section of the European Space Agency, has envisaged the study of three reference missions which could be successfully carried out using propellantless propulsion concepts.
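Delta-v works as a cost proxy because, for any system that carries propellant, the Tsiolkovsky rocket equation ties the achievable Δv directly to the propellant mass fraction; a propellantless E-sail escapes that constraint. The sketch below, with an assumed mission Δv budget and assumed exhaust velocities, is only an illustration of that relationship, not an analysis from the paper.

```python
# Why delta-v proxies mission cost: the Tsiolkovsky rocket equation fixes
# the propellant fraction a conventional thruster needs for a given dv.
# All numbers below are assumed for illustration, not taken from the paper.
import math

def propellant_fraction(dv_ms: float, exhaust_velocity_ms: float) -> float:
    """Fraction of initial mass that must be propellant: 1 - exp(-dv/ve)."""
    return 1.0 - math.exp(-dv_ms / exhaust_velocity_ms)

dv = 10_000.0  # m/s, an assumed deep-space delta-v budget
print(f"chemical (ve ~ 4.5 km/s): {propellant_fraction(dv, 4_500.0):.0%}")
print(f"ion engine (ve ~ 30 km/s): {propellant_fraction(dv, 30_000.0):.0%}")
# An E-sail draws its momentum from the solar wind, so its achievable
# delta-v is not bounded by onboard propellant at all.
```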
Scientists analyzing data from NASA’s Cassini mission have firm evidence the ocean inside Saturn's largest moon, Titan, might be as salty as Earth's Dead Sea.
The new results come from a study of gravity and topography data collected during Cassini's repeated flybys of Titan during the past 10 years. Using the Cassini data, researchers presented a model structure for Titan, resulting in an improved understanding of the structure of the moon's outer ice shell. The findings are published in this week’s edition of the journal Icarus.
"Titan continues to prove itself as an endlessly fascinating world, and with our long-lived Cassini spacecraft, we’re unlocking new mysteries as fast as we solve old ones," said Linda Spilker, Cassini project scientist at NASA's Jet Propulsion Laboratory in Pasadena, California, who was not involved in the study.
Additional findings support previous indications the moon's icy shell is rigid and in the process of freezing solid. Researchers found that a relatively high density was required for Titan's ocean in order to explain the gravity data. This indicates the ocean is probably an extremely salty brine of water mixed with dissolved salts likely composed of sulfur, sodium and potassium. The density indicated for this brine would give the ocean a salt content roughly equal to the saltiest bodies of water on Earth.
"This is an extremely salty ocean by Earth standards," said the paper's lead author, Giuseppe Mitri of the University of Nantes in France. "Knowing this may change the way we view this ocean as a possible abode for present-day life, but conditions might have been very different there in the past."
Cassini data also indicate the thickness of Titan's ice crust varies slightly from place to place. The researchers said this can best be explained if the moon's outer shell is stiff, as would be the case if the ocean were slowly crystallizing and turning to ice. Otherwise, the moon's shape would tend to even itself out over time, like warm candle wax. This freezing process would have important implications for the habitability of Titan's ocean, as it would limit the ability of materials to exchange between the surface and the ocean.
A further consequence of a rigid ice shell, according to the study, is any outgassing of methane into Titan's atmosphere must happen at scattered "hot spots" -- like the hot spot on Earth that gave rise to the Hawaiian Island chain. Titan's methane does not appear to result from convection or plate tectonics recycling its ice shell.
How methane gets into the moon's atmosphere has long been of great interest to researchers, as molecules of this gas are broken apart by sunlight on short geological timescales. Titan's present atmosphere contains about five percent methane. This means some process, thought to be geological in nature, must be replenishing the gas. The study indicates that whatever process is responsible, the restoration of Titan's methane is localized and intermittent.
"Our work suggests looking for signs of methane outgassing will be difficult with Cassini, and may require a future mission that can find localized methane sources," said Jonathan Lunine, a scientist on the Cassini mission at Cornell University, Ithaca, New York, and one of the paper's co-authors. "As on Mars, this is a challenging task."