A technology used in projectors can create a high-resolution image 100 times faster than conventional microscopy equipment, which can be too slow to clearly document speedy biological processes.
The mission will peek through the gas giant’s swirling clouds in search of a planetary core.
On 4 July, NASA intends to finish a job that started with the agency’s Galileo mission 21 years ago. At 8:18 p.m. Pacific time, the Juno spacecraft will ignite its main engine for 35 minutes and nudge itself into orbit around Jupiter. If all goes well, it will eventually slip into an even tighter path that whizzes as close as 4,200 kilometres above the planet’s roiling cloud-tops — while dodging as much of the lethal radiation in the planet’s belts as possible.
The US$1.1-billion mission, which launched in 2011, will be the first to visit the Solar System’s biggest planet since NASA’s Galileo spacecraft in 1995. Picking up where Galileo left off, Juno is designed to answer basic questions about Jupiter, including what its water content is, whether it has a core and what is happening at its rarely seen poles (see ‘Mission to Jupiter’).
Scientists think that Jupiter was the first planet to condense out of the gases that swirled around the newborn Sun 4.6 billion years ago. As such, it is made up of some of the most primordial material in the Solar System. Scientists know that it consists mostly of hydrogen and helium, but they are eager to pin down the exact amounts of other elements found on the planet.
“What we really want is the recipe,” says Scott Bolton, the mission’s principal investigator and a planetary scientist at the Southwest Research Institute in San Antonio, Texas.
Jupiter’s familiar visage, with its broad brown belts and striking Great Red Spot, represents only the tops of its churning clouds of ammonia and hydrogen sulfide. Juno — named after the Roman goddess who could see through clouds — will peer hundreds of kilometers into the planet’s atmosphere using microwave wavelengths.
Exploration of Jupiter’s interior should reveal more about the formidable atmospheric convection that powers the planet, says Paul Steffes, an electrical engineer at the Georgia Institute of Technology in Atlanta. In anticipation of Juno’s arrival, professional and amateur astronomers have been observing Jupiter with ground-based and space-based telescopes. For now, the planet is not experiencing any unusual atmospheric changes. “It’s kind of in its normal state, which is good,” says Amy Simon, a planetary scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. This ‘normal’ behaviour gives researchers confidence that they will be able to understand Juno’s findings.
The Great Red Spot continues to shrink, as it has done in recent years, and to interact less and less with the jet streams on either of its edges. The broad belt just north of the planet’s equator has been expanding since late 2015 — a change that might be connected to processes deep in the atmosphere. “Trying to connect events that are happening at one level to events happening in another tells you how well coupled the whole atmosphere is,” says Leigh Fletcher, a planetary astronomer at the University of Leicester, UK.
As Juno probes deeper and deeper into the planet’s atmosphere, researchers hope to get information on a layer of hydrogen compressed into a liquid by increasing pressures. That liquid conducts electricity, which powers Jupiter’s enormous magnetic field. Deeper still, the spacecraft will look for evidence of a core — a dense nugget of heavier elements that most scientists think exists, but has never been observed. Juno will make precise measurements of how Jupiter’s gravity tugs on the spacecraft, which should reveal whether a core is present.
Juno will provide scientists with the last chance to look at Jupiter for a long time. It is scheduled to make 37 total orbits before performing a kamikaze run in early 2018, burning up inside the planet’s clouds to keep it from contaminating the moon Europa. The only other mission planned to the gas giant is the European Space Agency’s Jupiter Icy Moons Explorer (JUICE) spacecraft, which could launch as early as 2022 and will focus mainly on the moon Ganymede.
Getting inside the human body to have a look around is always going to be invasive, but that doesn't mean more can't be done to make things a little more comfortable. With this goal in mind, German researchers have developed a complex lens system no bigger than a grain of salt that fits inside a syringe. The imaging tool could make for not just more productive medical imaging, but tiny cameras for everything from drones to slimmer smartphones.
Scientists from the University of Stuttgart built their three-lens camera using a new 3D printing technique. They say their new approach offers sub-micrometer accuracy that makes it possible to 3D print optical lens systems with two or more lenses for the first time. Their resulting multi-lens system opens up the possibility of correcting for aberration (where a lens cannot bring all wavelengths of color to the same focal plane), which could enable higher image quality from smaller devices.
Here's how they did it. Using a femtosecond laser, where the pulse durations were shorter than 100 femtoseconds (a femtosecond is one quadrillionth of a second), they blasted a light-sensitive material resting on a glass substrate. Two photons are absorbed by the material, which exposes it and crosslinks polymers within. Unexposed material is then washed away with a solvent, leaving behind the hardened, crosslinked polymer used to form the optical element.
The team used this approach to print imaging components for optical microscopes with a diameter and height of 125 micrometers, and then attached them to the end of a 5.6-ft (1.7-m) optical fiber the width of two human hairs. The camera on the end of this small endoscope is capable of focusing on images from a distance of 3 mm (0.12 in). The team says the entire imaging system fits comfortably inside a standard syringe needle, which raises the possibility of delivering it directly to organs, and even the brain.
Learning from your mistakes is a key life lesson, and it's one that researchers at Pacific Northwest National Laboratory (PNNL) can attest to. After unintentionally creating carbon-rich nanorods, the team realized its accidental invention behaves weirdly with water, demonstrating a 20-year old theory and potentially paving the way to low-energy water harvesting systems and sweat-removing fabrics.
The researchers note that ordinarily materials will absorb more water as the humidity in the air around them increases. But between 50 and 80 percent relative humidity, these nanorods will actually do the opposite and expel water, a behavior they say is not shared by any other material. Below that range, they behave as normal, so the process is reversible by lowering the humidity again.
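The reversible, non-monotonic uptake described above can be pictured with a toy model. The shape of the curve follows the article's description, but every number here is invented for illustration; the actual uptake curve would come from the PNNL vapor-analysis measurements.

```python
# Toy, illustrative model of the nanorods' reported behavior: water
# uptake rises with relative humidity below ~50%, the material expels
# water between ~50% and ~80%, and lowering the humidity reverses the
# process. The specific values are invented for illustration only.

def relative_water_content(humidity_pct):
    """Return a unitless water-content score for a given relative humidity (%)."""
    if humidity_pct < 50:
        # ordinary sponge-like regime: more humidity, more absorbed water
        return humidity_pct / 50.0
    elif humidity_pct <= 80:
        # anomalous regime: the nanorods expel water as humidity rises
        return 1.0 - 0.8 * (humidity_pct - 50) / 30.0
    else:
        # behavior above ~80% is not specified in the article; hold flat
        return 0.2
```

Because the function is deterministic, the same humidity sweep run downward traces the uptake back out, matching the reversibility the researchers describe.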
"Our unusual material behaves a bit like a sponge; it wrings itself out halfway before it's fully saturated with water," says David Lao, PNNL research associate and creator of the material. These nanorods were created by mistake while trying to fabricate magnetic nanowires, and the researchers decided to give the accidents a closer look. On examining them with a vapor analysis instrument, Satish Nune, one of the authors of the research paper, noticed that the structures were actually losing weight as the humidity increased.
Assuming the equipment was malfunctioning, the scientists switched to a microscope, and were able to observe water appearing from between the branches of the nanorods, and then evaporating at a higher humidity. On researching why this was the case, the team looked to previous works and found papers from 2012 and 2013 explaining how water can spontaneously vaporize when confined in an area just 1.5 nm wide, or when tightly surrounded by hydrophobic materials. Observations even go as far back as the 1990s, when scientists experimenting with crystallized proteins noticed similar happenings and theorized that some unknown process was allowing the water to rapidly evaporate.
The recent research at PNNL appears to be the first time this phenomenon has been directly seen in action. The team's hypothesis was that the water is condensing and drawing the branches of the nanorods together, and when they reach the 1.5 nm threshold, as specified in the previous work, the water quickly evaporates.
"Now that we've gotten over the initial shock of this unforeseen behavior, we're imagining the many ways it could be harnessed to improve the quality of our lives," says David Heldebrant, the second author of the paper.
Scientists discovered an ancient fungus garden grown by termites millions of years ago.
The fossil structures bore every hallmark of a prehistoric farm: Crops were arranged according to an intricate, complex plan. Material for harvesting littered the ground. Analysis revealed that the crop was a species that only grows when cultivated.
This was agriculture — but underground and on a micro scale. At 25 million years old, it was also far more ancient than anything constructed by humans; Homo sapiens didn't even exist yet.
Instead, the farmers who tilled these ancient plots were termites. And their harvest was fungus. The fossilized termite gardens, uncovered from exposed cliff sides in the Rukwa Rift Basin of southwestern Tanzania, are the oldest physical evidence of farming on Earth, scientists report in the journal PLOS One this week.
"It captures a record of the evolutionary coupling of termites and fungus ... and allows us to trace back the antiquity of this symbiotic relationship," lead author Eric Roberts, a geologist at James Cook University in Australia, wrote in an email. "The new fossils help us to calibrate our evolutionary clocks and use them to better understand when this symbiosis first developed, which we now think was probably around 31 million years ago."
Across Africa, the massive, edible "termite mushrooms" grown by these tiny insects are famous. But European scientists didn't realize the significance of what was happening inside termite colonies until the mid-20th century. When researchers dissected towering termite mounds, which contain dozens of interlocking chambers and can grow taller than a person, they saw that the insects weren't just eating the fungus that grew alongside them — they were cultivating it.
A new approach to gas exploration has discovered a huge helium gas field, which could address the increasingly critical shortage of this vital yet rare element.
Helium doesn't just make your voice squeaky; it is critical to many things we take for granted, including MRI scanners in medicine, welding, industrial leak detection and nuclear energy. However, known reserves are quickly running out. Until now, helium has never been found intentionally; it has only been discovered accidentally, in small quantities, during oil and gas drilling.
Now, a research group from Oxford and Durham universities, working with Helium One, a helium exploration company headquartered in Norway, has developed a brand new exploration approach. The first use of this method has resulted in the discovery of a world-class helium gas field in Tanzania.
Their research shows that volcanic activity provides the intense heat necessary to release the gas from ancient, helium-bearing rocks. Within the Tanzanian East African Rift Valley, volcanoes have released helium from ancient deep rocks and have trapped this helium in shallower gas fields. The research is being presented by Durham University PhD student Diveena Danabalan at the Goldschmidt geochemistry conference in Yokohama, Japan.
Diveena Danabalan, of Durham University's Department of Earth Sciences, said: "We show that volcanoes in the Rift play an important role in the formation of viable helium reserves. Volcanic activity likely provides the heat necessary to release the helium accumulated in ancient crustal rocks. However, if gas traps are located too close to a given volcano, they run the risk of helium being heavily diluted by volcanic gases such as carbon dioxide, just as we see in thermal springs from the region. We are now working to identify the 'goldilocks-zone' between the ancient crust and the modern volcanoes where the balance between helium release and volcanic dilution is 'just right'."
Prof. Chris Ballentine, Department of Earth Sciences, University of Oxford, said: "We sampled helium gas (and nitrogen) just bubbling out of the ground in the Tanzanian East African Rift valley. By combining our understanding of helium geochemistry with seismic images of gas trapping structures, independent experts have calculated a probable resource of 54 Billion Cubic Feet (BCf) in just one part of the rift valley. This is enough to fill over 1.2 million medical MRI scanners. To put this discovery into perspective, global consumption of helium is about 8 BCf per year and the United States Federal Helium Reserve, which is the world's largest supplier, has a current reserve of just 24.2 BCf. Total known reserves in the USA are around 153 BCf. This is a game changer for the future security of society's helium needs and similar finds in the future may not be far away."
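The figures in Prof. Ballentine's quote can be sanity-checked with simple arithmetic. All inputs below come directly from the quote; the derived ratios are back-of-the-envelope checks, not new data.

```python
# Figures quoted above: a probable resource of 54 BCf in one part of the
# rift, global consumption of ~8 BCf/year, and the US Federal Helium
# Reserve's current 24.2 BCf.

tanzania_resource_bcf = 54.0    # probable resource in one part of the rift valley
global_use_bcf_per_year = 8.0   # approximate annual global helium consumption
us_federal_reserve_bcf = 24.2   # current US Federal Helium Reserve

# How many years of world demand the single find could cover
years_of_global_supply = tanzania_resource_bcf / global_use_bcf_per_year

# How the find compares to the world's largest current supplier
resource_vs_us_reserve = tanzania_resource_bcf / us_federal_reserve_bcf
```

The find alone covers roughly six to seven years of global demand and is more than twice the size of the US Federal Helium Reserve, which is why the researchers call it a game changer.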
It was just over 40 years ago that the concept of a solar power satellite (SPS) first emerged. American scientist and aerospace engineer Dr. Peter Glaser won a patent for a broadcast system using a one-square kilometer antenna to channel power via microwaves to a receiver on the ground. The advantage of such a system, and space-based solar power in general, is that it harnesses the unobstructed output of the sun, unlike land-based solar systems which are affected by the weather and Earth's day/night cycle.
While Glaser's proposal never got off the ground, it did inspire further investigation of the potential of space-based solar power by various government departments and institutions. In 2008, a company called Space Energy conducted a long-range wireless power transmission test using a microwave beam between two Hawaiian islands, a distance of 148 km (91.96 mi). The result was a power yield of 1/1000th of one percent on the receiving end, raising questions over whether the technique could be employed over the much larger distance between a satellite in geosynchronous Earth orbit (GEO) and a ground station.
Writing in IEEE Spectrum, Professor Emeritus at JAXA, Susumu Sasaki, argues that this experiment failed largely due to the dense atmosphere disturbing the microwaves' phases as a result of the horizontal transmission. In detailing the agency's proposal he emphasized that in a space-based system the microwaves only need to pass through this dense atmosphere for the last few kilometers of their journey. This, along with new designs for the solar power satellites and anticipated advances in technology over the coming decades, gives JAXA confidence that it can eventually achieve an effective wireless transmission of solar energy over the necessary 36,000 km (22,500 miles) from GEO.
JAXA is working on two concepts. The simpler one involves a huge square panel that measures 2 km (1.24 mi) per side. The top surface would be covered with photovoltaic elements, with transmission antennas on the bottom side. A small bus housing controls and communication systems would be tethered to the panel via 10 km (6.2 mi) long wires. A limitation with this design is that the orientation of the panel is fixed, meaning that as the Earth and the satellite spin, the amount of sunlight the panel receives will vary, impacting its ability to generate power.
Brown dwarfs are too small to sustain the hydrogen fusion process that powers stars. Their temperatures can range from nearly as hot as a star to as cool as a planet, and their masses also range between star-like and giant planet-like. They are of particular interest to scientists because they can offer clues to star-formation processes.
The intrinsic brightness of brown dwarfs, particularly cool brown dwarfs, is poorly known, but this key parameter can only be determined once an object’s distance has been measured.
Intrinsic brightness is a determination of how bright an object would be if observed at a common distance, eliminating the fact that a bright star can seem dimmer if it is far away and a dim star can seem brighter if it is close.
An ongoing planet-hunting survey run by Carnegie co-authors Alycia Weinberger, Alan Boss, Ian Thompson, Sandy Keiser, and others has yielded the distances to 134 low mass stars and brown dwarfs, of which 38 had not been previously measured. “Accurate distances are the foundation upon which we can determine the physical properties and luminosities of brown dwarfs and low mass stars,” Weinberger said.
The team built a special instrument for precisely measuring the locations of stars over time, the CAPSCam—the Carnegie Astrometric Planet Search Camera—and they use it at the DuPont Telescope at Carnegie's Las Campanas Observatory in Chile.
The primary goal of the CAPS project is to search for extrasolar planets by the astrometric method, where the planet's presence can be detected indirectly through the wobble of the host star around the center of mass of the system. But CAPSCam also measures parallaxes to stars and brown dwarfs, including the 134 objects published in this study.
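The link between the parallaxes CAPSCam measures and the intrinsic brightnesses the survey pins down is given by two textbook relations (these are standard astrometric formulas, not code from the Carnegie team): a parallax angle p in arcseconds corresponds to a distance d = 1/p in parsecs, and the distance modulus converts an apparent magnitude m into an absolute (intrinsic) magnitude M.

```python
import math

def parallax_to_parsecs(parallax_arcsec):
    """Distance in parsecs from a parallax angle in arcseconds: d = 1/p."""
    return 1.0 / parallax_arcsec

def absolute_magnitude(apparent_mag, distance_pc):
    """Absolute magnitude via the distance modulus: m - M = 5 * log10(d / 10 pc)."""
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)
```

For example, a brown dwarf with a parallax of 0.1 arcseconds sits 10 parsecs away, the reference distance at which apparent and absolute magnitudes coincide; this is the sense in which a distance measurement unlocks an object's intrinsic brightness.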
“There is still so much about brown dwarfs that remains unknown,” explained Weinberger. “As we learn more about them, it could improve our knowledge about the star formation process and possibly also refine our understanding of the distribution of matter in the universe, since it seems that there are far more brown dwarfs than initially thought.” The study revealed some other useful distance measurements in addition to the brown dwarf discoveries.
The team used the motion of two stars and compared them to others in two different stellar groups to confirm the age of the two stars age, between 30 and 50 million years old for one and 100 million years old for the other. This is because distance measurements can tell researchers about the location of a star in 3-D, not just the star’s position in 2-D on the sky, and let them measure the star’s velocity. Finding groups of young stars moving together lets astronomers study them in aggregate and better estimate how old they are and learn about their evolution.
Pharmaceutical companies typically develop new drugs with thousands of staff and budgets that run into the billions of dollars. One estimate puts the cost of bringing a new drug to market at $2.6 billion with others suggesting that it could be double that cost at $5 billion.
One man, Professor Atul Butte, director of the University of California Institute of Computational Health Sciences, believes that, like other Silicon Valley startups, almost anyone can bring a drug to market from their garage with just a computer, the internet, and freely available data. In a talk given at the Science on the Swan conference held in Perth this week, Professor Butte outlined the process for an audience of local and international scientists and medics.
The starting point is the genetic data from thousands of studies on humans, mice and other animals, that is now freely available on sites from the National Institute of Health and the European Molecular Biology Laboratory. The proliferation of genetic data from experiments has been driven by the ever decreasing cost of sequencing genetic information using gene chip technologies.
Professor Butte, his students, and research staff have found a range of different ways of using this data to look for new drugs. In one approach, they constructed a map of how the genetic profiles of people with particular diseases are related to each other, looking in particular for diseases with very similar genetic profiles. Having done that, they noticed that the genetic profile of people with heart conditions was very closely related to that of the much rarer condition of muscular dystrophy. This potentially suggested that drugs that work for one condition could work in the other. This process of discovering other uses for drugs, called "drug repositioning", is not new.
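One common way to score how related two disease signatures are is cosine similarity over gene-expression values. This is a minimal sketch of that kind of comparison, not the Butte lab's actual pipeline; the gene names and fold-change values below are invented for illustration, whereas real signatures would come from the public NIH and EMBL repositories mentioned above.

```python
import math

def cosine_similarity(profile_a, profile_b):
    """Cosine similarity over the genes present in both expression profiles."""
    genes = sorted(set(profile_a) & set(profile_b))
    dot = sum(profile_a[g] * profile_b[g] for g in genes)
    norm_a = math.sqrt(sum(profile_a[g] ** 2 for g in genes))
    norm_b = math.sqrt(sum(profile_b[g] ** 2 for g in genes))
    return dot / (norm_a * norm_b)

# Hypothetical expression signatures (gene -> fold change); values invented.
heart_disease = {"GENE1": 2.1, "GENE2": -1.3, "GENE3": 0.9}
muscular_dystrophy = {"GENE1": 1.9, "GENE2": -1.1, "GENE3": 1.0}
unrelated_disease = {"GENE1": -2.0, "GENE2": 1.2, "GENE3": -0.8}
```

Two diseases whose genes move in the same directions score near 1, flagging them as candidates for sharing drugs; signatures that move in opposite directions score negative and would be set aside.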
Drugs like Viagra were originally used for treatment of cardiovascular conditions. The difference is that Viagra’s repositioned use resulted from the observation of side-effects in patients taking the drug for its original intended purpose.
Professor Butte, on the other hand, is using "Big Data" and computers to show that, given the close relationship between the genetic profiles of two diseases, a drug that works for one condition could potentially work for the other.
Still in the garage, the next step from discovering a potential drug is to test if it actually works in an experimental setting on animals. Here again, Professor Butte has turned to the internet and sites like Assay Depot. This is a site, structured like Amazon, from which a researcher can order an experiment to be carried out to test a drug on a range of animal models. It is literally a case of choosing the experiment type you want, adding it to a shopping cart, paying by credit card and getting the experimental results mailed back in a few weeks' time. "Shoppers" are given the choice of laboratory they want to use, including a choice of which country the lab is based in.
Once a new use for a drug has been shown to work in an animal model, the next step would be to test the drug in humans, get approval for the use of the drug for that condition and then finally take the drug to market.
The elephantnose fish was in an aquarium connected to two different chambers; the animal could choose between them. Behind openings to the chambers there were differently shaped objects: a sphere or a cuboid. The fish learned to steer toward one of these objects by being rewarded with insect larvae. Subsequently, it searched for this object again, to obtain the reward again. When does the fish use a particular sense? To answer this question, the researchers repeated the experiments in absolute darkness. Now the fish could rely only on its electrical sense. As shown by images taken with an infrared camera, it was able to recognize the object only at short distances. With the light on, the fish was most successful, because it was able to use its eyes and the electrical sense for the different distances. To find out when the fish used its eyes alone, the researchers made the objects invisible to the electrical sense. Now, the sphere and cuboid to be discriminated had the same electrical characteristics as the water.
Hydrogen is the most abundant element in the universe. It's also the simplest, sporting only a single electron in each atom. But that simplicity is deceptive, because there is still so much we have to learn about hydrogen.
One of the biggest unknowns is its transformation under the extreme pressures and temperatures found in the interiors of giant planets, where it is squeezed until it becomes liquid metal, capable of conducting electricity. New work published in Physical Review Letters by Carnegie's Alexander Goncharov and University of Edinburgh's Stewart McWilliams measures the conditions under which hydrogen undergoes this transition in the lab and finds an intermediate state between gas and metal, which they're calling "dark hydrogen."
On the surface of giant planets like Jupiter, hydrogen is a gas. But between this gaseous surface and the liquid metal hydrogen in the planet's core lies a layer of dark hydrogen, according to findings gleaned from the team's lab mimicry. Using a laser-heated diamond anvil cell to create the conditions likely to be found in gas giant planetary interiors, the team probed the physics of hydrogen under a range of pressures from 10,000 to 1.5 million times normal atmospheric pressure and up to 10,000 degrees Fahrenheit. They discovered this unexpected intermediate phase, which does not reflect or transmit visible light, but does transmit infrared radiation, or heat.
"This observation would explain how heat can easily escape from gas giant planets like Saturn," explained Goncharov. They also found that this intermediate dark hydrogen is somewhat metallic, meaning it can conduct an electric current, albeit poorly. This means that it could play a role in the process by which churning metallic hydrogen in gas giant planetary cores produces a magnetic field around these bodies, in the same way that the motion of liquid iron in Earth's core created and sustains our own magnetic field.
"This dark hydrogen layer was unexpected and inconsistent with what modeling research had led us to believe about the change from hydrogen gas to metallic hydrogen inside of celestial objects," Goncharov added.
The Department of Transportation's Federal Aviation Administration has finalized the first operational rules (PDF) for routine commercial use of small unmanned aircraft systems (UAS, or "drones"), opening pathways towards fully integrating UAS into the nation's airspace. The new regulations are designed to harness new innovations safely, spur job growth, advance critical scientific research and save lives.
“We are part of a new era in aviation, and the potential for unmanned aircraft will make it safer and easier to do certain jobs, gather information, and deploy disaster relief,” said U.S. Transportation Secretary Anthony Foxx. “We look forward to working with the aviation community to support innovation, while maintaining our standards as the safest and most complex airspace in the world.”
According to industry estimates, the rule could generate more than $82 billion for the U.S. economy and create more than 100,000 new jobs over the next 10 years.
The new rule, which takes effect in late August, offers safety regulations for unmanned aircraft (drones) weighing less than 55 pounds that are conducting non-hobbyist operations.
The rule’s provisions are designed to minimize risks to other aircraft and people and property on the ground. The regulations require pilots to keep an unmanned aircraft within visual line of sight. Operations are allowed during daylight and during twilight if the drone has anti-collision lights. The new regulations also address height and speed restrictions and other operational limits, such as prohibiting flights over unprotected people on the ground who aren’t directly participating in the UAS operation.
The FAA is offering a process to waive some restrictions if an operator proves the proposed flight will be conducted safely under a waiver. The FAA will make an online portal available to apply for these waivers in the months ahead.
“With this new rule, we are taking a careful and deliberate approach that balances the need to deploy this new technology with the FAA’s mission to protect public safety,” said FAA Administrator Michael Huerta. “But this is just our first step. We’re already working on additional rules that will expand the range of operations.”
Under the final rule, the person actually flying a drone must be at least 16 years old and have a remote pilot certificate with a small UAS rating, or be directly supervised by someone with such a certificate. To qualify for a remote pilot certificate, an individual must either pass an initial aeronautical knowledge test at an FAA-approved knowledge testing center or have an existing non-student Part 61 pilot certificate. If qualifying under the latter provision, a pilot must have completed a flight review in the previous 24 months and must take a UAS online training course provided by the FAA. The TSA will conduct a security background check of all remote pilot applications prior to issuance of a certificate.
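The eligibility conditions summarized above can be encoded as a simple check. This is only a sketch of the article's summary, not legal or regulatory advice: it collapses the certificate-or-supervision alternative into one test and omits the knowledge-test, flight-review and TSA-vetting details.

```python
# Illustrative check of the final rule's headline conditions: the aircraft
# must weigh under 55 pounds, the operator must be at least 16, and the
# operator must hold a remote pilot certificate or be directly supervised
# by someone who does. Many Part 107 details are deliberately omitted.

def may_fly_small_uas(age, has_remote_pilot_certificate,
                      supervised_by_certificate_holder, drone_weight_lb):
    if drone_weight_lb >= 55:
        return False  # the rule only covers aircraft under 55 pounds
    if age < 16:
        return False  # the person flying must be at least 16 years old
    return has_remote_pilot_certificate or supervised_by_certificate_holder
```

Encoding the rule this way makes the alternatives explicit: a 20-year-old without a certificate still qualifies if directly supervised, while a 60-pound aircraft falls outside the rule entirely.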
For 3 billion years, one of the major carriers of information needed for life, RNA, has had a glitch that creates errors when making copies of genetic information. Researchers at The University of Texas at Austin have developed a fix that allows RNA to accurately proofread for the first time. The new discovery, published June 23 in the journal Science, will increase precision in genetic research and could dramatically improve medicine based on a person's genetic makeup.
Certain viruses called retroviruses can cause RNA to make copies of DNA, a process called reverse transcription. This process is notoriously prone to errors because an evolutionary ancestor of all viruses never had the ability to accurately copy genetic material.
The new innovation engineered at UT Austin is an enzyme that performs reverse transcription but can also "proofread," or check its work while copying genetic code. The enzyme allows, for the first time, for large amounts of RNA information to be copied with near perfect accuracy.
"We created a new group of enzymes that can read the genetic information inside living cells with unprecedented accuracy," says Jared Ellefson, a postdoctoral fellow in UT Austin's Center for Systems and Synthetic Biology. "Overlooked by evolution, our enzyme can correct errors while copying RNA."
Reverse transcription is mainly associated with retroviruses such as HIV. In nature, these viruses' inability to copy DNA accurately may have helped create variety in species over time, contributing to the complexity of life as we know it.
Since discovering reverse transcription, scientists have used it to better understand genetic information related to inheritable diseases and other aspects of human health. Still, the error-prone nature of existing RNA sequencing is a problem for scientists.
"With proofreading, our new enzyme increases precision and fidelity of RNA sequencing," says Ellefson. "Without the ability to faithfully read RNA, we cannot accurately determine the inner workings of cells. These errors can lead to misleading data in the research lab and potential misdiagnosis in the clinical lab."
Ellefson and the team of researchers engineered the new enzyme using directed evolution to train a high-fidelity (proofreading) DNA polymerase to use RNA templates. The new enzyme, called RTX, retains the highly accurate and efficient proofreading function, while copying RNA. Accuracy is improved at least threefold, and it may be up to 10 times as accurate. This new enzyme could enhance the methods used to read RNA from cells.
IF YOU WANTED to write a history of the Internet, one of the first things you would do is dig into the email archives of Vint Cerf. In 1973, he co-created the protocols that Internet servers use to communicate with each other without the need for any kind of centralized authority or control. He has spent the decades since shaping the Internet’s development, most recently as Google’s “chief Internet evangelist.”
Thankfully, Cerf says he has archived about 40 years of old email—a first-hand history of the Internet stretching back almost as far as the Internet itself. But you’d also have a pretty big problem: a whole lot of that email you just wouldn’t be able to open. The programs Cerf used to write those emails, and the formats in which they’re stored, just don’t work on any current computer you’d likely be using to try to read them.
Today, much of the responsibility for preserving the web’s history rests on The Internet Archive. The non-profit’s Wayback Machine crawls the web perpetually, taking snapshots that let you, say, go back and see how WIRED looked in 1997. But the Wayback Machine has to know about a site before it can index it, and it only grabs sites periodically. Based on the Internet Archive’s own findings, the average webpage only lasts about 100 days. In order to preserve a site, the Wayback Machine has to spot it in that brief window before it disappears.
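The Wayback Machine's snapshots can be looked up programmatically through the Internet Archive's public "availability" API, which returns the archived capture closest to a requested date. A minimal sketch of building such a query (the endpoint is the documented one; the actual network fetch is left out so the example stays self-contained):

```python
# Build a query against the Internet Archive's availability API, which
# returns JSON describing the snapshot closest to the given timestamp.
from urllib.parse import urlencode

def wayback_query(url, timestamp):
    """Return the availability-API URL for the snapshot nearest `timestamp`
    (YYYYMMDD). Fetching it yields JSON with an `archived_snapshots` field."""
    return "https://archive.org/wayback/available?" + urlencode(
        {"url": url, "timestamp": timestamp}
    )

print(wayback_query("wired.com", "19970101"))
```

Fetching the printed URL with any HTTP client returns the closest capture's own URL and timestamp, which is how tools recover pages like the 1997 WIRED homepage mentioned above.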
New study shows that the same cellular machinery exists in humans.
The ability to grow a new limb may seem like something straight out of science fiction, but new research shows exactly how animals like salamanders and zebrafish perform this stunning feat—and how humans may share the biological machinery that lets them do it. Scientists have long known of the regenerative powers of some species of fish and amphibians: To recreate a limb or fin lost to a hungry predator, they can regrow everything from bone to muscle to blood vessels with stem cells that form at the site of the injury. But just how they do it at the genetic level has been a mystery.
To figure out what might be happening, scientists amputated the appendages of two ray-finned fish—zebrafish and bichir—and a salamander known as the axolotl, all of which can regrow their legs and fins. They then compared RNA from the site of the amputation. They found 10 microRNAs—small pieces of RNA that regulate gene expression—that were the same in all three species. What’s more, they seemed to function in the same way, despite the structural difference between the axolotl (pictured above) and the fishes.
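The comparison the researchers performed is, at its core, an intersection: which regulatory microRNAs appear at the injury site in all three species. A toy sketch of that idea (the identifiers below are placeholders, not the ten actually reported in the study):

```python
# Toy version of the cross-species comparison: intersect the microRNAs
# detected at the amputation site in each species. IDs are hypothetical.
zebrafish = {"miR-A", "miR-B", "miR-C", "miR-D"}
bichir    = {"miR-A", "miR-B", "miR-E"}
axolotl   = {"miR-A", "miR-B", "miR-F"}

shared = zebrafish & bichir & axolotl
print(sorted(shared))
```

In the actual study this filtering step left ten microRNAs common to all three regenerators, which is what points to a shared, inherited mechanism rather than three independent inventions.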
The finding supports an existing idea that the three master limb-replacers last shared a common ancestor about 420 million years ago, and it suggests that the evolutionary process of growing limbs is conserved over time, not developed independently in separate species, the researchers report today in PLOS ONE. What does this mean for humans? If these microRNAs can be programmed to work like they do in salamanders and fish, humans could enhance their ability to heal from serious injuries. But don’t expect to get Wolverine-like powers just yet—scientists say such modifications are still a long way off.
Despite the fact that Einstein's unifying theory has never been supported by observations, let alone definitive mathematical proof, Einstein's work did ultimately lead many scientists to re-examine the universe in relation to a holistic theory of everything, including an amalgam of his gravitational theories and quantum gravity hypotheses. Much work leading on from his theories provided tantalizing glimpses at possible gravitational interactions, including the behavior of the smallest of all fermions yet discovered – leptons and quarks.
This research led directly to the discovery of a gauge-invariant quantum field theory of the weak force, which included an electromagnetic interaction (and produced the "electroweak" concept that now shows correlation between electromagnetic and weak nuclear fields), which was, in itself, a great breakthrough in particle physics research. Unfortunately, however, it did not progress to include an observable gravitational component.
Nevertheless, buoyed by such revelations, theoretical physicists sought out a similar quantum field theory for the strong nuclear force, and eventually found one, dubbing it quantum chromodynamics. In this case, quarks are shown to interact through the exchange of gluons. This research has led to further postulations that the electroweak and strong nuclear forces could be united in a grand unified theory, which would then incorporate three of the four known forces in the universe. Again, however, an inclusion of the influence of gravity failed to be reconciled.
So despite the successful conflation of the fields discussed above, physicists have been unable to formulate a complete particle-driven unified field theory for gravity, since gravity seems to lack a force-carrier particle of its own.
There is, however, one contender: a contentious theoretical particle known as the "graviton". The moniker was apparently coined by the Russian physicists Dmitrii Blokhintsev and F. M. Gal'perin in the mid-1930s (interestingly, around the time of the Einstein–Bohr debates), from the notion that if Einstein's predicted gravitational waves existed, they must also come in quanta of energy, just as electromagnetic energy does. The electromagnetic force and the strong and weak nuclear forces all act through a "force carrier" that is exchanged between the interacting particles. These exchange carriers are also known as field particles, or gauge bosons.
The graviton, if it exists, doesn't seem to act like any of the other particles in the Standard Model, as it does not exhibit these force-carrier behaviors. Put simply, unlike the other forces, gravity cannot be absorbed, transformed, or shielded against, and it only attracts and never repels. In effect, this theoretical particle appears to possess no discernible way to interact with any other particle. That alone would prohibit its inclusion in the Standard Model, partly because no instrument of sufficient size or sensitivity could plausibly be built to detect the supposedly tiny energies associated with it, but mostly because the entire concept runs into enormous theoretical difficulties at energies close to the Planck scale, far beyond the sizes and energies that particle accelerators can probe.
Despite this, the graviton is central to yet-to-be-proven frameworks such as quantum gravity and string theory, both of which rely on its existence. And though much hope is pinned on one of these theories eventually providing a unified description of gravity and particle physics, quantum gravity may prove the better contender. String theory on its own is not a physical description of reality but a self-contained mathematical model, one that describes the fundamental forces and the various forms of matter as models rather than observed phenomena.
Using novel software that incorporates all of the field theory equations developed by Einstein as part of his general theory of relativity, research teams from Europe and the United States have started developing a model of the universe that they claim will eventually provide the most precise and detailed representation of the cosmos ever created.
Incorporating two new independently-developed computer codes from a team comprising members from Case Western Reserve University and Kenyon College, Ohio, and a team formed by a collaboration between the Institute of Cosmology and Gravitation, Portsmouth, and the University of Catania, Italy, the new research aims to amalgamate a range of physical theoretical information to provide new insights into the nature of gravity and its effects on all of the objects in the universe.
The pair of new codes are also claimed to be the first to use the complete general theory of relativity to help explain why there is a clumping of matter in some areas of space, while there is a distinct dearth of matter in others.
Einstein's theory, despite being over 100 years old, is still the best theory of gravity we have. However, although it has reliably predicted a range of cosmological phenomena, including the groundbreaking detection of gravitational waves, the equations of general relativity are so complex that, until now, physicists have had to use simplified versions of the theory when modeling the mechanisms at play in the entire universe.
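For reference, the "field theory equations" being solved in full are Einstein's field equations in their standard form, which relate the curvature of spacetime (left-hand side) to its matter and energy content (right-hand side):

```latex
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

These are ten coupled, non-linear partial differential equations. Conventional cosmological codes tame them by assuming matter is spread smoothly through space; the new codes evolve them without that shortcut, which is what lets the model capture the uneven clumping of matter described above.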
The new code, embedded in a mathematical tool developed by the researchers and dubbed "CosmoGRaPH", is said to be able to handle the complexities inherent in Einstein's equations, providing much more nuanced and detailed modeling than has ever been achieved before.
The HBP is a €1.2-billion, 10-year global project that aims to give us a deeper and more meaningful understanding of how the human brain operates. It comprises 130 research institutions throughout Europe and is coordinated by the École polytechnique fédérale de Lausanne (EPFL) in Switzerland (1).
Experimental mapping of the brain turned out to be a dead end, given that it takes 20,000 experiments to map just one neural circuit and that our brain consists of 100 billion neurons and 100 trillion synapses. The HBP came up with a better solution: building the first human brain model on neuromorphic computing systems, which use the same basic principles of computation and cognitive architectures as the brain (1, 2, 3, 4).
The plan is to determine the fundamental principles of how neurons are connected and use those principles to construct statistical simulations. A simulation model will then predict how certain parts of the brain, for which we have little or no experimental information, are wired, and the results will be compared with real biological data. In other words, the idea is to find some underlying principle that governs the brain’s morphology and reverse-engineer the human brain with the help of supercomputers (1, 2, 3).
Astronomers have identified a family of incredible galaxies that could shed further light on the transformation of the early Universe known as the 'epoch of reionisation'. Dr David Sobral of Lancaster University will present their results on Monday 27 June at the National Astronomy Meeting in Nottingham.
About 150 million years after the Big Bang, some 13 billion years ago, the Universe was completely opaque to high energy ultraviolet light, with neutral hydrogen gas blocking its passage. Astronomers have long realised that this situation ended in the so-called 'epoch of reionisation', where ultraviolet light from the earliest stars broke open neutral hydrogen atoms, and could start to travel freely through the cosmos. This reionisation period marks a key transition between the relatively simple early cosmos, with normal matter made up of hydrogen and helium, and the universe as we see it today: transparent on large scales and filled with heavier elements.
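The "high energy ultraviolet light" in question has a precise threshold: ionising neutral hydrogen takes 13.6 eV per photon, which corresponds to the so-called Lyman limit in wavelength. A back-of-the-envelope check using standard physical constants (this calculation is ours, not the article's):

```python
# Convert hydrogen's 13.6 eV ionisation energy to the equivalent photon
# wavelength (the "Lyman limit"); light shortward of this was absorbed by
# neutral hydrogen before reionisation.
PLANCK_H = 6.626e-34   # Planck constant, J*s
LIGHT_C = 2.998e8      # speed of light, m/s
EV_TO_J = 1.602e-19    # joules per electronvolt

ionisation_energy = 13.6 * EV_TO_J                       # joules
wavelength_nm = PLANCK_H * LIGHT_C / ionisation_energy * 1e9
print(f"Lyman limit: {wavelength_nm:.1f} nm")
```

Any photon with a wavelength below roughly 91 nm can split a hydrogen atom into a proton and an electron, which is exactly what the earliest bright galaxies did to the gas around them.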
In 2015 Sobral led a team that found the first example of a spectacularly bright galaxy within the epoch of reionisation, named Cosmos Redshift 7 or CR7, which may harbour first generation stars. The team also discovered a similar galaxy, MASOSA, which, together with Himiko, discovered by a Japanese team, hinted at a larger population of similar objects, perhaps made up of the earliest stars and/or black holes.
Using the Subaru and Keck telescopes on Hawaii, and the Very Large Telescope in Chile, Sobral and his team, along with a group in the US, have now found more examples of this population. All of the newly found galaxies seem to have a large bubble of ionised gas around them. Sobral comments: "Stars and black holes in the earliest, brightest galaxies must have pumped out so much ultraviolet light that they quickly broke up hydrogen atoms in the surrounding universe. The fainter galaxies seem to have stayed shrouded from view for a lot longer. Even when they eventually become visible, they show evidence of plenty of opaque material still in place around them."
A virtual reality experience transforms the user into a 74-year-old named Alfred in order to see his perspective as a medical patient.
He's speaking to you, but you can't hear him clearly. There's a large black smudge where his face should be, so you're unable to really read his lips. What he's saying is important, so you lean in. But you're frustrated as you struggle to understand what's going on.
You're experiencing life through the eyes of a 74-year-old patient named Alfred -- seven minutes in the shoes of an elderly man whose audiovisual impairments are misdiagnosed as cognitive ones -- and a story that students across many disciplines have worked together to create.
"We're trying to portray different kinds of medical conditions and sensory changes from the first-person perspective of a patient," said Carrie Shaw, a master's student in biomedical visualization and content creator of the case study titled "We Are Alfred." The project was the focus of Shaw's research this year. It won first place in the Art/Design/Humanities & Social Sciences Category among graduate student projects at the UIC Research Forum, as well as the Vesalius Trust Scholarship Award. The team's goal was to craft an interactive, experiential product that could be used in geriatrics curricula -- geriatrics being the health and care of elderly people -- given the predicted growth of the U.S. aging population and a disconnect between patients and the students or doctors who treat them.
"Medical students are usually in their early 20s and not experiencing those kinds of challenges yet, so we decided to create something that would give them the experience of what it might be like to go through the aging process," Shaw said.
Users experience the story through headphones and the Oculus Rift Development Kit 2, a headset that immerses them in a 360-degree virtual reality. The headset also includes a Leap Motion device that tracks and projects the user's hands into the story to make them feel like they're Alfred. Becoming Alfred helps users empathize with and better understand elderly patients.
"The project is focusing on comfort," said Eric Swirsky, clinical assistant professor of biomedical and health information sciences and a faculty adviser for the project. "It's not curing, it's not curative, it's not even treatment-oriented. It's about comforting and understanding where the patient is so that you can be with him."
The group started with Alfred .5, the first iteration of the project. The prototype had a completely virtual environment. But after tests, trials, discussion and input from expert geriatricians, the second iteration was refocused to include graphic elements, real people and live scenes -- a redesigned interactive cinema on an almost zero-dollar budget.
"Interactive cinema is a kind of marriage of directing for film and directing for theater because you're directing a 360-degree space where the user has the freedom to look wherever they want," said the film's director, Ryan Lebar, a student collaborator from the University of North Carolina School of the Arts.
The World Economic Forum’s annual list of this year’s breakthrough technologies, published today, includes “socially aware” open AI, grid-scale energy storage, perovskite solar cells, and other technologies with the potential to “transform industries, improve lives, and safeguard the planet.” The WEF’s specific interest is to “close gaps in investment and regulation.”
The top 10 technologies that made 2016's list are:
The discovery power of the gene chip is coming to nanotechnology, as a Northwestern University research team develops a tool to rapidly test millions — and perhaps even billions — of different nanoparticles at one time to zero in on the best nanoparticle for a specific use.
When materials are miniaturized, their properties — optical, structural, electrical, mechanical and chemical — change, offering new possibilities. But determining what nanoparticle size and composition are best for a given application, such as catalysts, biodiagnostic labels, pharmaceuticals and electronic devices, is a daunting task.
“As scientists, we’ve only just begun to investigate what materials can be made on the nanoscale,” said Northwestern’s Chad A. Mirkin, a world leader in nanotechnology research and its application, who led the study. “Screening a million potentially useful nanoparticles, for example, could take several lifetimes. Once optimized, our tool will enable researchers to pick the winner much faster than conventional methods. We have the ultimate discovery tool.”
Combinatorial libraries of nanoparticles: more than half had never existed on Earth before.
Using a Northwestern technique that deposits materials on a surface, Mirkin and his team figured out how to make combinatorial libraries of nanoparticles in a controlled way. (A combinatorial library is a collection of systematically varied structures encoded at specific sites on a surface.) Their study was published today (June 24) by the journal Science.
The nanoparticle libraries are much like a gene chip, Mirkin says, where thousands of different spots of DNA are used to identify the presence of a disease or toxin. Thousands of reactions can be done simultaneously, providing results in just a few hours. Similarly, Mirkin and his team’s libraries will enable scientists to rapidly make and screen millions to billions of nanoparticles of different compositions and sizes for desirable physical and chemical properties.
“The ability to make libraries of nanoparticles will open a new field of nanocombinatorics, where size — on a scale that matters — and composition become tunable parameters,” Mirkin said. “This is a powerful approach to discovery science.”
Using just five metallic elements — gold, silver, cobalt, copper and nickel — Mirkin and his team developed an array of unique structures by varying every elemental combination. In previous work, the researchers had shown that particle diameter also can be varied deliberately on the 1- to 100-nanometer length scale.
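The size of that combinatorial space is easy to check: with five metals, every non-empty elemental combination gives 2^5 − 1 = 31 distinct compositions, before particle diameter is even varied. A quick sketch:

```python
# Enumerate every non-empty combination of the five metals used in the study.
from itertools import combinations

metals = ["Au", "Ag", "Co", "Cu", "Ni"]
compositions = [
    combo for r in range(1, len(metals) + 1)
    for combo in combinations(metals, r)
]
print(len(compositions))  # 31: 5 unary + 10 binary + 10 ternary + 5 quaternary + 1 quinary
```

Crossing those 31 compositions with a range of diameters on the 1- to 100-nanometer scale is what makes the library approach so much faster than synthesizing and testing candidates one at a time.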
The popping and crackling sounds associated with the aurora borealis (or northern lights) are born when a geomagnetic storm activates charges that have accumulated in the atmosphere’s inversion layer, causing them to discharge, according to researchers at Aalto University, Finland.
In 2012, Prof. Unto K. Laine of Aalto University and co-authors proved that the source of auroral sounds is located close to the ground at an altitude of approximately 230 feet (70 m).
Now, by combining their measurements with the temperature profiles measured by the Finnish Meteorological Institute, the researchers have found an explanation for the mechanism that creates the sounds.
“Temperature generally drops with altitude. However, when temperatures are well below 32 degrees Fahrenheit (0 degrees Celsius), generally in clear and calm weather during the evening and night, the cold air lies near the surface while the air higher up is warmer,” Prof. Laine said. “This warm air does not mix; instead it rises towards a colder layer, carrying negative charges from the ground.”
“The inversion layer forms a kind of lid hindering the vertical movements of the charges. The colder air above it is charged positively.”
“Finally, a geomagnetic storm causes the accumulated charges to discharge with sparks that create measurable magnetic pulses and sounds.”
In the military world, fighter pilots have long been described as the best of the best. As Tom Wolfe famously wrote, only those with the "right stuff" can handle the job. Now, it seems, the right stuff may no longer be the sole purview of human pilots.
A pilot A.I. developed by a doctoral graduate from the University of Cincinnati has shown that it can not only beat other A.I.s, but also a professional fighter pilot with decades of experience. In a series of flight combat simulations, the A.I. successfully evaded retired U.S. Air Force Colonel Gene "Geno" Lee, and shot him down every time. Lee called it "the most aggressive, responsive, dynamic and credible A.I. I've seen to date."
And "Geno" is no slouch. He's a former Air Force Battle Manager and adversary tactics instructor. He's controlled or flown in thousands of air-to-air intercepts as mission commander or pilot. In short, the guy knows what he's doing. Plus he's been fighting A.I. opponents in flight simulators for decades. But he says this one is different. "I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed."
The A.I., dubbed ALPHA, was developed by Psibernetix, a company founded by University of Cincinnati doctoral graduate Nick Ernest, in collaboration with the Air Force Research Laboratory. According to the developers, ALPHA was specifically designed for research purposes in simulated air-combat missions.
The secret to ALPHA's superhuman flying skills is a decision-making system called a genetic fuzzy tree, a subtype of fuzzy logic algorithms. The system approaches complex problems much like a human would, says Ernest, breaking the larger task into smaller subtasks, which include high-level tactics, firing, evasion, and defensiveness. By considering only the most relevant variables, it can make complex decisions with extreme speed. As a result, the A.I. can calculate the best maneuvers in a complex, dynamic environment, over 250 times faster than its human opponent can blink.
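The "fuzzy" part of that system can be illustrated with a tiny fragment. This is not ALPHA's actual code, and the membership function, variable, and thresholds below are invented for illustration; it only shows the core idea that inputs get graded memberships rather than hard yes/no thresholds.

```python
# Illustrative fuzzy-logic fragment (hypothetical values, not ALPHA's code):
# inputs are mapped to graded degrees of membership, which rules then combine.
def triangular(x, lo, peak, hi):
    """Triangular membership function: 0 outside [lo, hi], rising to 1 at `peak`."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def evade_urgency(enemy_range_km):
    """Toy rule: the closer the threat, the more strongly 'evade' fires."""
    close = triangular(enemy_range_km, 0, 0, 20)  # fully 'close' at 0 km
    return close

print(evade_urgency(5), evade_urgency(15))
```

In a genetic fuzzy tree, many such small rule blocks are arranged hierarchically by subtask, and a genetic algorithm tunes the membership shapes and rules; evaluating the resulting tree is cheap, which is where the speed the article describes comes from.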
After hour-long combat missions against ALPHA, Lee says, "I go home feeling washed out. I'm tired, drained and mentally exhausted. AI has superhuman reflexes and there is no way to win. This may be artificial intelligence, but it represents a real challenge."
The results of the dogfight simulations are published in the Journal of Defense Management.
Google is one of the companies at the forefront of robotics and artificial intelligence research, and being in that position means it has the most to worry about. The idea of a robot takeover may still be an abstract, science-fiction concept to us, but Google has actually compiled a list of behaviors that would cause it great concern, both for efficiency and safety in the future.
A vast ocean of water beneath the icy crust of Saturn’s moon Enceladus may be more accessible than previously thought, according to new research. A new study has revealed that near the moon’s poles, the ice covering Enceladus could be just two kilometers (one mile) thick—the thinnest known ice shell of any ocean-covered moon. The discovery not only changes scientists’ understanding of Enceladus’ structure, but also makes the moon a more appealing target for future exploration, according to the study’s authors.
Until recently, scientists saw Jupiter’s moon Europa as the moon most likely to yield new understanding into worlds with ice-covered oceans, according to Gabriel Tobie, a planetary scientist at the Laboratory of Planetology and Geodynamics of CNRS, the University of Nantes, and the University of Angers in Nantes, France and co-author of the new study.
Estimates of Europa’s ice shell thickness range from just a few kilometers to more than 20 kilometers (12 miles). By comparison, Enceladus’ ice was previously thought to be 20 to 60 kilometers (12 to 37 miles) thick. But the new study suggests that at its south pole, Enceladus’ ice is less than five kilometers (three miles) thick, and possibly as little as two.
The thinness of the ice opens up future mission possibilities, according to authors of the new study published in Geophysical Research Letters, a journal of the American Geophysical Union. With ice this thin, an orbiting probe could use radar to see what lies beneath the moon’s shell. Though substantial engineering challenges would have to be solved first, scientists could even land a probe on the moon itself to drill down through the ice and sample the water below, Tobie said. Other scientists have proposed that ocean-covered moons like Europa could harbor life, and getting a look at Enceladus’ oceans could help us understand whether life could exist there, according to the authors.
The study yielded a second unexpected result: Enceladus’ core is likely much hotter than previously thought. Ice acts as an insulator, keeping the moon’s global ocean warm, but a thinner ice shell retains less heat. To maintain the same amount of liquid water beneath a thinner shell, Enceladus’ rocky core would have to generate much more heat than previously thought, according to the authors.
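The hotter-core conclusion follows from a simple conduction argument, sketched below with standard physics rather than the paper's full model. The conductivity and temperature-drop values are assumed typical figures for ice shells, not numbers from the study.

```python
# Rough conduction argument for why a thinner shell implies a hotter core:
# heat flux through a slab scales as q = k * dT / d, so thinning the shell
# from 20 km to 2 km raises the flux it carries tenfold for the same
# temperature drop. k and dT below are assumed ballpark values.
def conductive_flux(k_ice, delta_t, thickness_m):
    """Heat flux (W/m^2) through a slab of thermal conductivity k (W/m/K)."""
    return k_ice * delta_t / thickness_m

thick = conductive_flux(k_ice=2.3, delta_t=180.0, thickness_m=20_000)
thin  = conductive_flux(k_ice=2.3, delta_t=180.0, thickness_m=2_000)
print(f"flux ratio (thin/thick): {thin / thick:.0f}")
```

To keep the ocean liquid under a shell that leaks heat ten times faster, the interior must supply correspondingly more heat, which is the study's second result in a nutshell.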