NOTE: All articles in the Amazing Science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL at the top right of the screen) to display all the relevant postings SORTED by TOPIC.
You can also type your own query:
e.g., if you are looking for articles involving "dna" as a keyword
Have you ever dreamt of living in outer space? Since the development of spaceflight, the off-Earth lifestyle has been limited to the select few aboard the International Space Station. For nearly 19 years, the station has played host to rotating teams of highly-trained international astronauts.
The ISS, however, may be nearing the end of its tenure in low-Earth orbit. While the U.S., Russia, Europe, Canada and Japan have all extended their involvement in ISS operations until 2024, the venerable space station may be facing decommission shortly thereafter. This doesn’t mean that we’re abandoning the idea of living in space—in fact, the way the agencies are exiting signals precisely the opposite.
Soon, orbiting living spaces will no longer be the exclusive domain of agency-affiliated astronauts. Like so much in space, habitats are going commercial. By the 2020s, NASA intends to transition low-Earth orbit to the private sector—in terms of both supply and demand.
ICFO researchers report the discovery of a new technique that could drastically improve the sensitivity of instruments such as magnetic resonance imagers (MRIs) and atomic clocks. The study, published in Nature, describes a way to bypass the Heisenberg uncertainty principle: by hiding quantum uncertainty in atomic features not seen by the instrument, the scientists were able to make very high precision measurements.
State-of-the-art sensors, such as MRIs and atomic clocks, are capable of making measurements with exquisite precision. MRI is used to image tissues deep within the human body and tells us whether we might suffer from an illness, while atomic clocks are extremely precise timekeepers used for GPS, internet synchronization, and long baseline interferometry in radio-astronomy. One might think these two instruments have nothing in common, but they do: both technologies are based on precise measurement of the spin of the atom, the gyroscope-like motion of the electrons and the nucleus. In MRI, for example, the pointing angle of the spin gives information about where in the body the atom is located, while the amount of spin (the amplitude) is used to distinguish different kinds of tissue. Combining these two pieces of information, the MRI can make a 3D map of the tissues in the body.
The sensitivity of this kind of measurement was long thought to be limited by Heisenberg's uncertainty principle, which states that accurately measuring one property of an atom puts a limit to the precision of measurement you can obtain on another property. For example, if we measure an electron's position with high precision, Heisenberg's principle limits the accuracy in the measurement of its momentum. Since most atomic instruments measure two properties (spin amplitude and angle), the principle seems to say that the readings will always contain some quantum uncertainty.
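For reference, the textbook forms of these relations can be written down explicitly. The first is the familiar position-momentum bound; the second is the spin-component analogue relevant to the measurements discussed here, which follows from the commutator of the spin operators:

```latex
% Position-momentum uncertainty relation
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}

% For spin components, [J_y, J_z] = i\hbar J_x implies
\Delta J_y \,\Delta J_z \;\ge\; \frac{\hbar}{2}\,\bigl|\langle J_x \rangle\bigr|
```

Note that the spin relation bounds only the product of the two uncertainties: one component can in principle be measured sharply, provided the uncertainty is pushed into the other.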
This long-standing expectation has now been disproven, however, by ICFO researchers Dr. Giorgio Colangelo, Ferran Martin Ciurana, Lorena C. Bianchet and Dr. Robert J. Sewell, led by ICREA Prof. at ICFO Morgan W. Mitchell. In their article "Simultaneous tracking of spin angle and amplitude beyond classical limits", published this week in Nature, they describe how a properly designed instrument can almost completely avoid quantum uncertainty.
The trick is to realize that the spin has not one but two pointing angles, one for the north-east-south-west direction, and the other for the elevation above the horizon. The ICFO team showed how to put nearly all of the uncertainty into the angle that is not measured by the instrument. In this way they still obeyed Heisenberg's requirement for uncertainty, but hid the uncertainty where it can do no harm. As a result, they were able to obtain an angle-amplitude measurement of unprecedented precision, unbothered by quantum uncertainty.
The plot above shows the first 511 terms of the Fibonacci sequence represented in binary, revealing an interesting pattern of hollow and filled triangles (Pegg 2003). A fractal-like series of white triangles appears on the bottom edge, due in part to the fact that the binary representations of certain Fibonacci numbers end in strings of zeros. Many other similar properties exist.
The Fibonacci numbers give the number of pairs of rabbits n months after a single pair begins breeding (and newly born bunnies are assumed to begin breeding when they are two months old), as first described by Leonardo of Pisa (also known as Fibonacci) in his book Liber Abaci. Kepler also described the Fibonacci numbers (Kepler 1966; Wells 1986, pp. 61-62 and 65). Before Fibonacci wrote his work, the Fibonacci numbers had already been discussed by Indian scholars such as Gopāla (before 1135) and Hemachandra (c. 1150), who had long been interested in rhythmic patterns that are formed from one-beat and two-beat notes or syllables. The number of such rhythms having n beats altogether is F(n+1), and hence these scholars both mentioned the numbers 1, 2, 3, 5, 8, 13, 21, ... explicitly (Knuth 1997, p. 80).
The numbers of Fibonacci numbers less than 10, 10^2, 10^3, ... are 6, 11, 16, 20, 25, 30, 35, 39, 44, ... (OEIS A072353). For n = 1, 2, ..., the numbers of decimal digits in F(10^n) are 2, 21, 209, 2090, 20899, 208988, 2089877, 20898764, ... (OEIS A068070). As can be seen, the initial strings of digits settle down to produce the number 208987640249978733769..., which corresponds to the decimal digits of log_10(phi) = 0.2089876... (OEIS A097348), where phi is the golden ratio. This follows from the fact that F(n) grows like a power of phi, and the number of decimal digits of phi^n is given by floor(n log_10(phi)) + 1.
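The digit counts above are easy to verify directly. A minimal sketch in Python, using the standard fast-doubling recurrence (the function names are mine):

```python
def fib(n):
    """Return the n-th Fibonacci number via fast doubling: O(log n) steps."""
    def fd(k):
        if k == 0:
            return (0, 1)              # (F(0), F(1))
        a, b = fd(k >> 1)              # (F(m), F(m+1)) for m = k // 2
        c = a * (2 * b - a)            # F(2m)
        d = a * a + b * b              # F(2m+1)
        return (d, c + d) if k & 1 else (c, d)
    return fd(n)[0]

# Number of decimal digits of F(10^n) for n = 1..4
digit_counts = [len(str(fib(10 ** n))) for n in range(1, 5)]
print(digit_counts)  # [2, 21, 209, 2090], matching the sequence above
```

Dividing each count by 10^n gives 0.2, 0.21, 0.209, 0.2090, ..., converging on log_10(phi) ≈ 0.2089876 as described.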
Some 290 million years ago, a star much like the sun wandered too close to the central black hole of its galaxy. Intense tides tore the star apart, which produced an eruption of optical, ultraviolet and X-ray light that first reached Earth in 2014.
Now, a team of scientists using observations from NASA's Swift satellite has mapped out how and where these different wavelengths were produced in the event, named ASASSN-14li, as the shattered star's debris circled the black hole.
"We discovered brightness changes in X-rays that occurred about a month after similar changes were observed in visible and UV light," said Dheeraj Pasham, an astrophysicist at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, and the lead researcher of the study. "We think this means the optical and UV emission arose far from the black hole, where elliptical streams of orbiting matter crashed into each other."
Astronomers think ASASSN-14li was produced when a sun-like star wandered too close to a 3-million-solar-mass black hole similar to the one at the center of our own galaxy. For comparison, the event horizon of a black hole like this is about 13 times bigger than the sun, and the accretion disk formed by the disrupted star could extend to more than twice Earth's distance from the sun.
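The "13 times bigger than the sun" figure can be sanity-checked with the standard Schwarzschild radius formula, r_s = 2GM/c², using commonly tabulated constants:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

M = 3.0e6 * M_SUN                 # a 3-million-solar-mass black hole
r_s = 2 * G * M / c ** 2          # Schwarzschild radius (event horizon), m
ratio = r_s / R_SUN
print(round(ratio, 1))            # ~12.7, i.e. "about 13 times" the sun's size
```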
When a star passes too close to a black hole with 10,000 or more times the sun's mass, tidal forces outstrip the star's own gravity, converting the star into a stream of debris. Astronomers call this a tidal disruption event. Matter falling toward a black hole collects into a spinning accretion disk, where it becomes compressed and heated before eventually spilling over the black hole's event horizon, the point beyond which nothing can escape and astronomers cannot observe. Tidal disruption flares carry important information about how this debris initially settles into an accretion disk.
Doctors have stumbled on an unlikely source for a drug to ward off brain damage caused by strokes: the venom of one of the deadliest spiders in the world.
A bite from an Australian funnel web spider can kill a human in 15 minutes, but a harmless ingredient found in the venom can protect brain cells from being destroyed by a stroke, even when given hours after the event, scientists say. If the compound fares well in human trials, it could become the first drug that doctors have to protect against the devastating loss of neurons that strokes can cause.
Researchers discovered the protective molecule by chance as they sequenced the DNA of toxins in the venom of the Darling Downs funnel web spider (Hadronyche infensa), which lives in Queensland and New South Wales. Venom for the study was gathered after scientists trapped and “milked exhaustively” three spiders on Orchid Beach, about 400 km north of Brisbane.
The molecule, called Hi1a, stood out because it looked like two copies of another brain cell-protecting chemical stitched together. It was so intriguing that scientists decided to synthesize the compound and test its powers. “It proved to be even more potent,” said Glenn King at the University of Queensland’s centre for pain research.
Strokes occur when blood flow to the brain is interrupted and the brain is starved of oxygen. About 85% of strokes are caused by blockages in blood vessels in the brain, with the rest due to bleeds when vessels rupture. Approximately six million people a year die from stroke, making it the second largest cause of death worldwide after heart attacks.
If you’re overweight and find it challenging to exercise regularly, now there’s good news: A less strenuous form of exercise known as whole-body vibration (WBV) can mimic the muscle and bone health benefits of regular exercise — at least in mice — according to a new study published in the Endocrine Society’s journal Endocrinology.
Lack of exercise is contributing to the obesity and diabetes epidemics, according to the researchers. These disorders can also increase the risk of bone fractures. Physical activity can help to decrease this risk and reduce the negative metabolic effects of these conditions.
But the alternative, WBV, can be experienced while sitting, standing, or even lying down on a machine with a vibrating platform. When the machine vibrates, it transmits energy to your body, and your muscles contract and relax multiple times during each second.
“Our study is the first to show that whole-body vibration may be just as effective as exercise at combating some of the negative consequences of obesity and diabetes,” said the study’s first author, Meghan E. McGee-Lawrence, Ph.D., of Augusta University in Georgia. “While WBV did not fully address the defects in bone mass of the obese mice in our study, it did increase global bone formation, suggesting longer-term treatments could hold promise for preventing bone loss as well.”
Just as effective as a treadmill
Glucose and insulin tolerance testing revealed that the genetically obese and diabetic mice showed similar metabolic benefits from both WBV and exercising on a treadmill. Obese mice gained less weight after exercise or WBV than obese mice in the sedentary group, although they remained heavier than normal mice. Exercise and WBV also enhanced muscle mass and insulin sensitivity in the genetically obese mice.
The findings suggest that WBV may be a useful supplemental therapy to combat metabolic dysfunction in individuals with morbid obesity. “These results are encouraging,” McGee-Lawrence said. “However, because our study was conducted in mice, this idea needs to be rigorously tested in humans to see if the results would be applicable to people.”
The authors included researchers at the National Institutes of Health’s National Institute on Aging (NIA). Funding was provided by the American Diabetes Association, the National Institutes of Health’s National Institute of Diabetes and Digestive and Kidney Diseases, and the National Institute on Aging.
If you should one day find yourself in a spacecraft circling Mars, don’t count on a good view. The Red Planet’s dusty atmosphere will probably obscure any window-seat vistas of its deep valleys and soaring mesas. “The best way to see the planet’s surface would be to take a digital image and enhance it on your computer,” says planetary geologist Alfred McEwen, principal investigator on NASA’s High Resolution Imaging Science Experiment.
He would know: In the past 12 years, the powerful HiRISE camera has snapped 50,000 spectacular, high-resolution stereo images of the Martian terrain from the planet’s orbit, creating anaglyphs that anyone can view in 3D using special glasses. The highly detailed stereograms depict the planet’s surface in remarkable detail—but 3D glasses aren’t always handy, and still images can only convey so much about Mars’ varied topography.
To fully appreciate the Martian landscape, one needs dimension and movement. In the video you see here, Finnish filmmaker Jan Fröjdman transformed HiRISE imagery into a dynamic, three-dimensional, overhead view of the Red Planet—no glasses required. For Fröjdman, creating the flyover effect was like assembling a puzzle. He began by colorizing the photographs (HiRISE captures images in grayscale). He then identified distinctive features in each of the anaglyphs—craters, canyons, mountains—and matched them between image pairs. To create the panning 3D effect, he stitched the images together along his reference points and rendered them as frames in a video. “It was a very slow process,” he says.
Researchers at the Tokyo Institute of Technology and Nippon Telegraph and Telephone Corporation have developed a "spin-resolved oscilloscope." This device is a basic measuring instrument for plasmonics and spintronics, which are key technologies for future electronics applications. The coupling of light and electronic charges in plasmonics will pave the way for ultra-high-speed information processing, whereas spintronics will provide low-energy-consumption technology in a highly information-oriented society. The spin-resolved oscilloscope pioneers future "spin-plasmonics," where ultra-high-speed low-energy-consumption devices will be achieved.
An electron has charge and spin, and both the charge- and spin-density excitations in an electronic system can be utilized in information processing. The dynamics of charge-density waves has been investigated in plasmonics, and that of spin-density waves has been studied in the field of spintronics. However, less effort has been devoted to combining these two technologies and to developing the expected ultra-high-speed and low-energy-consumption devices. To date, a major obstacle preventing the promotion of this research field has been the lack of a measuring instrument that is sensitive to both charge and spin.
In their recent paper, published in Nature Physics, Dr. Masayuki Hashisaka at Tokyo Tech and colleagues reported a "spin-resolved oscilloscope" that enables measurement of the waveforms of both charge and spin signals in electronic devices. An oscilloscope is a basic measuring instrument used in electronics; however, conventional oscilloscopes do not facilitate both charge and spin measurement.
The "charge signal" is the total charge of the spin-up and -down electron densities. Further, the "spin signal" is the difference between the spin-up and -down electron densities. Both these signals traveling in a semiconductor device can be detected by the spin-resolved oscilloscope, which is composed of a spin filter and nanometer-scale time-resolved charge detectors. The spin filter separates the spin-up and -down electrons, while the time-resolved charge detector measures the waveforms of the charge-density waves. By combining these spintronic and plasmonic devices, the spin-resolved oscilloscope is established.
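Since the two signals are defined as the sum and difference of the spin-up and spin-down densities, the decomposition itself is simple arithmetic. A toy sketch (the sample waveform values below are invented for illustration):

```python
# Hypothetical sampled waveforms of spin-up and spin-down electron density
n_up = [1.0, 0.75, 0.5, 0.5]
n_dn = [0.5, 0.5, 0.5, 0.75]

charge_signal = [u + d for u, d in zip(n_up, n_dn)]  # total charge density
spin_signal   = [u - d for u, d in zip(n_up, n_dn)]  # spin polarization

print(charge_signal)  # [1.5, 1.25, 1.0, 1.25]
print(spin_signal)    # [0.5, 0.25, 0.0, -0.25]
```

The experimental difficulty, of course, lies not in this arithmetic but in resolving the two densities separately, which is what the spin filter plus time-resolved charge detectors accomplish.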
Scientists from IBM and ETH Zurich university have built a tiny “flow” battery that has the dual benefit of supplying power to chips and cooling them at the same time. Even taking pumping into account, it produces enough energy to power a chip while dissipating much more heat than it generates. The result could be smaller, more efficient chips, solar cells that store their own energy or devices used for remote monitoring that don’t require external power sources.
“Redox flow” batteries that use liquid electrolytes are normally used on a large scale to store energy. For instance, Harvard researchers recently created one that can last over ten years with very little degradation, making it ideal to store solar or wind energy.
Building them on a scale tiny enough for chips is another matter, however. The team from ETH Zurich and IBM managed to find two liquids that are suitable both as flow-battery electrolytes and cooling agents that can dissipate heat from chips in the same circuit. “We are the first scientists to build such a small flow battery so as to combine energy supply and cooling,” says doctoral student Julian Marschewski.
Why is the sky blue? It’s a common question asked by children, and the simple answer is that blue light is scattered by our atmosphere more than red light, hence the blue sky. That’s basically true, but then why don’t we see a violet sky?
The blue sky we observe depends upon two factors: how sunlight interacts with Earth’s atmosphere, and how our eyes perceive that light.
When light interacts with our atmosphere it can scatter, similar to the way one billiard ball can collide with another, making them go off in different directions. The main form of atmospheric scattering is known as Rayleigh scattering. If you imagine photons bouncing off molecules of air, that’s a rough approximation.
But photons and air molecules aren’t billiard balls, so there are differences. One of these is that the amount of scattering depends upon the wavelength (or color) of the light. The shorter the wavelength, the more the light scatters. Since the rainbow of colors going from red to violet corresponds with wavelengths of light going from long to short, the shorter blue wavelengths are scattered more. So our sky appears blue because of all the scattered blue light. This is also the reason why sunsets can appear red. Blue light is scattered away, leaving a reddish looking sunset.
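The wavelength dependence is strong: the Rayleigh scattering cross-section scales as 1/λ⁴ (a standard result, not stated explicitly above). A quick numerical comparison against red light:

```python
def rayleigh_relative(wavelength_nm, reference_nm=650.0):
    """Scattering strength relative to a reference wavelength, per the 1/lambda^4 law."""
    return (reference_nm / wavelength_nm) ** 4

print(round(rayleigh_relative(450), 1))  # blue vs. red: ~4.4x more scattering
print(round(rayleigh_relative(400), 1))  # violet vs. red: ~7.0x more scattering
```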
But if that’s the case, why isn’t the sky violet? Sure, blue light is scattered more than red or green, but violet light has an even shorter wavelength, so violet should be scattered more than blue. Shouldn’t the sky appear violet, or at least a violet-blue? It turns out our sky is indeed violet, but it appears blue because of the way our eyes work.
Microscopic marine plankton are not helplessly adrift in the ocean. They can perceive cues that indicate turbulence, rapidly respond to regulate their behaviour and actively adapt. ETH researchers have demonstrated for the first time how they do this.
Plankton in the ocean are constantly on the move. By day, these tiny organisms, one-tenth the diameter of a human hair, actively migrate towards the sunlit ocean surface to carry out photosynthesis. At night, they make their way to depths of tens of meters, where the supply of nutrients is greater. During their regular trips between well-lit and nutrient-rich zones, plankton cells frequently encounter turbulent layers, which disrupt this essential migratory pattern.
It is still a mystery how these minute organisms can navigate through the dangers of turbulent waters. Plankton cells are whirled around by turbulence -- particularly by the smallest, millimeter-sized flow vortices -- as if they were in a miniature washing machine, which can induce permanent damage to their propulsion appendages and cell envelope. In the worst case, they can perish in turbulence.
Aquila is a conceptual 50-meter sailing yacht that features solar sails enabled by CIGS solar-cell technology. The project was born out of the idea of creating a new generation of sailing yacht, following the recent trend of implementing futuristic technology in existing forms of transportation. The yacht is 50 meters long with an 11.2-meter beam and can accommodate up to 10 people at a time.
This yacht design aims to redefine sustainable sailing by operating entirely on solar power: green technology such as the solar sails powers the electronic systems, and the yacht can also generate energy from the wind.
A team of more than 80 mathematicians from 12 countries has begun charting the terrain of rich, new mathematical worlds, and sharing their discoveries on the Web. The mathematical universe is filled with both familiar and exotic items, many of which are being made available for the first time.
The "L-functions and Modular Forms Database," abbreviated LMFDB, is an intricate catalog of mathematical objects and the connections between them. Making those relationships visible has been made possible largely by the coordinated efforts of a group of researchers developing new algorithms and performing calculations on an extensive network of computers. The project provides a new tool for several branches of mathematics, physics, and computer science.
A "periodic table" of mathematical objects
Project member John Voight, from Dartmouth College, observed that "our project is akin to the first periodic table of the elements. We have found enough of the building blocks that we can see the overall structure and begin to glimpse the underlying relationships." Similar to the elements in the periodic table, the fundamental objects in mathematics fall into categories. Those categories have names like L-function, elliptic curve, and modular form. The L-functions play a special role, acting like 'DNA' that characterizes the other objects. More than 20 million objects have been catalogued, each with its L-function that serves as a link between related items. Just as the value of genome sequencing is greatly increased when many members of a population have been sequenced, the comprehensive material in the LMFDB will be an indispensable tool for new discoveries.
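To make the 'DNA' metaphor a little more concrete: for an elliptic curve, the L-function is built from coefficients a_p = p + 1 − N_p, where N_p counts the curve's points over the field with p elements. A minimal sketch (the particular curve below is an arbitrary example of mine, not one singled out by the LMFDB):

```python
def a_p(a, b, p):
    """L-function coefficient a_p = p + 1 - N_p for y^2 = x^3 + a*x + b over F_p."""
    n_points = 1  # start with the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        # count the y-values whose square equals the right-hand side
        n_points += sum(1 for y in range(p) if (y * y) % p == rhs)
    return p + 1 - n_points

# Example: y^2 = x^3 + x + 1 over F_5
coeff = a_p(1, 1, 5)
print(coeff)  # -3; Hasse's bound guarantees |a_p| <= 2*sqrt(p) ~ 4.47
```

The LMFDB stores millions of such coefficient sequences, which is what lets it expose links between curves, modular forms, and other objects sharing the same L-function.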
The LMFDB provides a sophisticated web interface that allows both experts and amateurs to easily navigate its contents. Each object has a "home page" and links to related objects, or "friends." Holly Swisher, a project member from Oregon State University, commented that the friends links are one of the most valuable aspects of the project: "The LMFDB is really the only place where these interconnections are given in such clear, explicit, and navigable terms. Before our project it was difficult to find more than a handful of examples, and now we have millions."
A team of researchers at the University of Wisconsin has developed a pair of glasses that allows the wearer to have tetrachromatic vision. In their paper uploaded to the arXiv preprint server, the group describes the inspiration for their glasses and explains how they work.
Humans have three types of cone cells in the back of the eye to differentiate color. Some react to blue, some to green and some to red. The cones do their work by responding to the difference in wavelength of the incoming light. Such vision is known as trichromatic. In this new effort, the researchers have found a way of fooling the brain into seeing as if there were a fourth type of cone, by wearing glasses with two types of filters. The result is tetrachromatic vision.
To create the glasses, the researchers fashioned two types of filters, one for each eye. Each filter removes a different part of the blue light spectrum. When the filters are fitted into a frame and worn like regular glasses, the wearer is able to see colors that are normally hidden—metamers. In a sense, it is the opposite of what occurs in people who are color blind, who might see blue and red as the same even though more light information is present; adding spectral discrimination to color-blind eyes would likewise reveal more of what is already there. With the new combined filter system, a person can look at an object that appears to be all one color, such as purple, and see more colors in it—those normally hidden metamers.
The team notes that it should be possible to extend the idea used to create their glasses to the other two colors that cone cells process, red and green, to create glasses that offer the ability to see six basic types of colors instead of the normal three. They plan to start with green. Such glasses, the team notes, might be used to spot counterfeit money, or to see a person in the jungle wearing camouflage.
In the mathematical field of dynamical systems, an attractor is a set of numerical values toward which a system tends to evolve, for a wide variety of starting conditions of the system. System values that get close enough to the attractor values remain close even if slightly disturbed.
An attractor is called strange if it has a fractal structure. This is often the case when the dynamics on it are chaotic, but strange nonchaotic attractors also exist. If a strange attractor is chaotic, exhibiting sensitive dependence on initial conditions, then any two arbitrarily close alternative initial points on the attractor, after any of various numbers of iterations, will lead to points that are arbitrarily far apart (subject to the confines of the attractor), and after any of various other numbers of iterations will lead to points that are arbitrarily close together. Thus a dynamic system with a chaotic attractor is locally unstable yet globally stable: once some sequences have entered the attractor, nearby points diverge from one another but never depart from the attractor.
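Sensitive dependence is easy to demonstrate numerically. A minimal sketch using the Hénon map, a standard example of a chaotic strange attractor (a = 1.4, b = 0.3 are the classic parameter values):

```python
def henon(x, y, a=1.4, b=0.3):
    """One iteration of the Henon map."""
    return 1.0 - a * x * x + y, b * x

# Two trajectories starting a billionth apart
p = (0.1, 0.1)
q = (0.1 + 1e-9, 0.1)
max_sep = 0.0
for _ in range(50):
    p = henon(*p)
    q = henon(*q)
    max_sep = max(max_sep, abs(p[0] - q[0]))

# The orbits separate by many orders of magnitude, yet both remain on the
# bounded attractor: locally unstable, globally stable.
print(max_sep, p, q)
```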
The term strange attractor was coined by David Ruelle and Floris Takens to describe the attractor resulting from a series of bifurcations of a system describing fluid flow. Strange attractors are often differentiable in a few directions, but some are like a Cantor dust, and therefore not differentiable. Strange attractors may also be found in presence of noise, where they may be shown to support invariant random probability measures of Sinai–Ruelle–Bowen type.
New NASA research reveals that the giant Martian shield volcano Arsia Mons produced one new lava flow at its summit every 1 to 3 million years during the final peak of activity. The last volcanic activity there ceased about 50 million years ago—around the time of Earth's Cretaceous-Paleogene extinction, when large numbers of our planet's plant and animal species (including dinosaurs) went extinct.
A landslide on comet 67P/Churyumov–Gerasimenko triggered a plume of dust to be ejected, revealing pristine ice hidden beneath the surface.
In July 2015, the Rosetta spacecraft observed an outburst from the comet. Images from onboard cameras had shown numerous surface changes taking place over the two years of observation, but one in particular was of interest to researchers.
Outbursts are often seen on comets, but what causes them is not known. To understand what happens on the surface at the point of these outbursts, an international team of researchers studied an event on Comet 67P.
In two studies – one published in Nature Astronomy, the other in Science – researchers showed that landslides had taken place on the comet, with whole cliffs collapsing, drastically altering the surface landscape.
Cerealia Facula, a dome-like feature located in the center of Ceres’ Occator crater, is only 4 million years old -- approximately 30 million years younger than the crater itself, according to research led by Dr. Andreas Nathues of the Max Planck Institute for Solar System Research.
Occator crater is one of the largest craters on the dwarf planet Ceres. With a diameter of 57 miles (92 km), it is larger than Tycho crater on the Moon. Its steep walls stand tall at over 1.4 miles (2 km), higher than the North face of the Eiger in the Bernese Alps.
“Occator crater is located in the northern hemisphere of Ceres. In its center a pit with a diameter of about 6.8 miles (11 km) can be found. On some parts of its edges, jagged mountains and steep slopes rise up to 2,460 feet (750 m) high,” Dr. Nathues and co-authors said. “Within the pit a bright dome formed. It is 1,312 feet (400 m) high, has a diameter of 1.9 miles (3 km), and displays prominent fractures.”
“This dome, called Cerealia Facula, contains the brightest material on Ceres.”
Since later impacts in this area did not expose any other material from depth, this feature possibly consists entirely of bright material. The secondary, smaller bright areas of Occator, called Vinalia Faculae, are paler, form a thinner layer and — as VIR and camera data show — turn out to be a mixture of carbonates and dark surrounding material.
New evidence also suggests that Cerealia Facula likely rose in a process that took place over a long period of time, rather than forming in a single event. Dr. Nathues and his colleagues believe the initial trigger was the impact that dug out Occator crater. This impact happened some 34 million years ago and caused briny liquid to rise closer to the surface.
SpaceX has applied to the FCC to launch 11,943 satellites into low-Earth orbit, providing “ubiquitous high-bandwidth (up to 1Gbps per user, once fully deployed) broadband services for consumers and businesses in the U.S. and globally,” according to FCC applications. Recent meetings with the FCC suggest that the plan now looks like “an increasingly feasible reality — particularly with 5G technologies just a few years away, promising new devices and new demand for data,” Verge reports.
Such a service will be particularly useful to rural areas, which have limited access to internet bandwidth. Low-Earth orbit (at up to 2,000 kilometers, or 1,200 mi) ensures lower latency (communication delay between Earth and satellite) — making the service usable for voice communications via Skype, for example — compared to geosynchronous orbit (at 35,786 kilometers, or 22,000 miles), offered by Dish Network and other satellite ISP services.* The downside: it takes a lot more satellites to provide the coverage.
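The latency difference is simple geometry: the minimum one-way propagation delay is just distance divided by the speed of light. A back-of-envelope sketch using the altitudes quoted above:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def one_way_delay_ms(altitude_km):
    """Minimum one-way ground-to-satellite propagation delay, in milliseconds."""
    return altitude_km * 1000.0 / C * 1000.0

leo_ms = one_way_delay_ms(2000)    # upper edge of low-Earth orbit: ~6.7 ms
geo_ms = one_way_delay_ms(35786)   # geosynchronous orbit: ~119 ms
print(round(leo_ms, 1), round(geo_ms, 1))
```

A full round trip (up, down, and back again) involves several such hops, which is why geosynchronous links feel sluggish for voice calls while low-Earth-orbit links do not.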
Boeing, Softbank-backed OneWeb (which hopes to “connect every school to the Internet by 2022″), Telesat, and others** have proposed similar services, possibly bringing the total number of satellites to about 20,000 in low and mid earth orbits in the 2020s, estimates Next Big Future.
Scientists are planning to capture the first-ever photo of a black hole’s event horizon (the boundary of no return that not even light can escape). The project is called the Event Horizon Telescope, and it uses a network of 9 radio telescopes around the world that will be pointed at Sagittarius A*, the black hole 25,000 light years away at the center of our Milky Way galaxy.
Scientists say that calculations and preparations are done, and that they’re aiming to shoot the groundbreaking photo sometime early this year. “There are quite a few challenges that need to be overcome to take a picture of a black hole – it’s something that’s extremely small in the sky,” EHT scientist Feryal Ozel explains. “But what we’re hoping for is a full array observation in early 2017.”
Although our entire galaxy revolves around it, Sagittarius A* has an event horizon with a diameter of 24 million km (~14.9 million miles), or about 17 times that of our Sun. And since it’s so far away, to us it’s about the relative size of a CD on the surface of the moon, scientists say.
So what will the resulting photo look like? Scientists are predicting that it will look like a crescent of light around a black hole due to the Doppler effect making part of the ring brighter than the other. Here’s the “close up” view of a black hole that was computer generated for the movie Interstellar (created under the guidance of renowned astrophysicist Kip Thorne).
Although the invisible substance known as dark matter dominates galaxies nowadays, it was apparently only a minor ingredient of galaxies in the early universe, a new study finds.
This new finding sheds light on how galaxies and their mysterious "haloes" of dark matter have changed over time, researchers said.
Dark matter is thought to make up about 84 percent of the matter in the universe. Although dark matter is invisible, its presence can be inferred by its gravitational effects on visible matter. For instance, previous work discovered that the outer parts of galactic disks whirl faster than expected around the cores of those galaxies. These findings make sense if one assumes that "haloes" of dark matter envelop those galaxies and gravitationally pull at their outer regions. [The Search for Dark Matter in Pictures]
Now, the researchers have unexpectedly found that in the early universe, dark matter played a much smaller role in galaxies than previously thought. The scientists detailed their findings in the March 16 issue of the journal Nature.
Using the European Southern Observatory's Very Large Telescope in Chile, the researchers examined six massive, star-forming galaxies from the early universe during the peak of galaxy formation 10 billion years ago. They analyzed the rotation of these galaxies to calculate how much dark matter they possessed.
When it comes to the Milky Way and other typical galaxies born in the current era of the universe, 50 to 80 percent of the matter within their "effective radius" (the bright central region from which half their light comes) is dark matter, said study lead author Reinhard Genzel, an astrophysicist and director of the Max Planck Institute for Extraterrestrial Physics in Garching, Germany. In comparison, in half of the early galaxies the researchers studied, dark matter made up 10 percent or less of the matter within that radius, Genzel said.
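The inference from rotation works roughly like this: a galaxy's circular speed at radius r fixes the total enclosed mass via Newtonian dynamics, and subtracting the visible (baryonic) mass leaves the dark matter. The sketch below uses illustrative Milky-Way-like numbers, not the paper's data:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass in kg
KPC_IN_M = 3.086e19  # metres in one kiloparsec

def dark_matter_fraction(v_circ_m_s, radius_m, baryonic_mass_kg):
    """Fraction of the enclosed mass that is dark, from the circular speed.

    Newtonian estimate: M_total(<r) = v^2 * r / G.
    """
    m_total = v_circ_m_s**2 * radius_m / G
    return 1 - baryonic_mass_kg / m_total

# Illustrative numbers: 220 km/s at 15 kpc, with ~6e10 solar masses
# of stars and gas inside that radius
f = dark_matter_fraction(220e3, 15 * KPC_IN_M, 6e10 * M_SUN)
print(f"dark matter fraction: {f:.2f}")
```

With these inputs the dark fraction lands in the 50-to-80-percent range quoted for present-day galaxies; the surprise of the Nature study is that the same arithmetic applied to the early galaxies leaves almost nothing dark.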
Slow wi-fi is a source of irritation that nearly everyone experiences. Wireless devices in the home consume ever more data, and the demand keeps growing, congesting the wi-fi network. Researchers at Eindhoven University of Technology have come up with a surprising solution: a wireless network based on harmless infrared rays. The capacity is not only huge (more than 40 Gbit/s per ray) but there is also no need to share, since every device gets its own ray of light. This was the subject for which TU/e researcher Joanne Oh received her PhD degree with the ‘cum laude’ distinction last week.
The system conceived in Eindhoven is simple and, in principle, cheap to set up. The wireless data comes from a few central ‘light antennas’, for instance mounted on the ceiling, which are able to very precisely direct the rays of light supplied by an optical fiber. Since there are no moving parts, it is maintenance-free and needs no power: the antennas contain a pair of gratings that radiate light rays of different wavelengths at different angles (‘passive diffraction gratings’). Changing the light wavelengths also changes the direction of the ray of light. Since a safe infrared wavelength is used that does not reach the vulnerable retina in your eye, this technique is harmless.
If you walk around as a user and your smartphone or tablet moves out of the light antenna’s line of sight, then another light antenna takes over. The network tracks the precise location of every wireless device using its radio signal transmitted in the return direction. It is a simple matter to add devices: they are assigned different wavelengths by the same light antenna and so do not have to share capacity. Moreover, there is no longer any interference from a neighboring wi-fi network.
Data capacity of light rays
Current wi-fi uses radio signals with a frequency of 2.4 or 5 gigahertz. The system conceived at TU Eindhoven uses infrared light with wavelengths of 1500 nanometers and higher; this light has frequencies that are thousands of times higher, some 200 terahertz, which makes the data capacity of the light rays much larger. Joanne Oh even managed a speed of 42.8 Gbit/s over a distance of 2.5 meters. For comparison, the average connection speed in the Netherlands is two thousand times less (17.6 Mbit/s). Even if you have the very best wi-fi system available, you won’t get more than 300 Mbit/s in total, which is some hundred times less than the speed per ray of light achieved by the Eindhoven study.
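The frequency figure quoted above follows directly from the wavelength via frequency = c / wavelength. A quick check of the article's numbers:

```python
C = 2.998e8  # speed of light, m/s

def frequency_thz(wavelength_nm):
    """Convert a wavelength in nanometres to a frequency in terahertz."""
    return C / (wavelength_nm * 1e-9) / 1e12

print(f"{frequency_thz(1500):.0f} THz")  # ~200 THz for 1500 nm infrared

# Per-ray speed vs. the average Dutch connection quoted in the article
print(f"{42.8e9 / 17.6e6:,.0f}x faster")  # roughly 2,400x
```

So 1500 nm infrared indeed sits near 200 THz, tens of thousands of times above the gigahertz radio bands that wi-fi uses today.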
Researchers from The University of Manchester have shown that it is possible to build a new super-fast form of computer that “grows as it computes”.
Professor Ross D King and his team have demonstrated for the first time the feasibility of engineering a nondeterministic universal Turing machine (NUTM), and their research is to be published in the prestigious Journal of the Royal Society Interface.
The theoretical properties of such a computing machine, including its exponential boost in speed over electronic and quantum computers, have been well understood for many years – but the Manchester breakthrough demonstrates that it is actually possible to physically create a NUTM using DNA molecules.
“Imagine a computer is searching a maze and comes to a choice point, one path leading left, the other right,” explained Professor King, from Manchester’s School of Computer Science. “Electronic computers need to choose which path to follow first.
“But our new computer doesn’t need to choose, for it can replicate itself and follow both paths at the same time, thus finding the answer faster.
“This ‘magical’ property is possible because the computer’s processors are made of DNA rather than silicon chips. All electronic computers have a fixed number of chips.
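Professor King's maze analogy can be mimicked in ordinary software: a nondeterministic machine that replicates at each choice point behaves like a search that keeps every branch alive at once. The breadth-first sketch below is my illustration of that idea, not the team's DNA implementation (which gains a physical parallelism, growing processors as it computes, that software cannot match):

```python
from collections import deque

def solve_maze(maze, start, goal):
    """Explore every branch 'in parallel' by keeping all live states in a
    queue, the software analogue of a machine replicating at choice points."""
    rows, cols = len(maze), len(maze[0])
    frontier = deque([start])
    steps = {start: 0}
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:
            return steps[(r, c)]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and maze[nr][nc] == 0 and (nr, nc) not in steps:
                steps[(nr, nc)] = steps[(r, c)] + 1
                frontier.append((nr, nc))
    return None  # goal unreachable

# 0 = open, 1 = wall; at (0, 0) the search "goes both ways at once"
maze = [
    [0, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(solve_maze(maze, (0, 0), (2, 2)))  # shortest path length: 4
```

In software each extra choice point doubles the work for a fixed pool of processors; the promise of the NUTM is that the DNA itself replicates, so the processor count grows with the search.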
Are animals on Earth about to get a lot smaller? If the past is any guide, perhaps the answer is yes.
Carbon signatures in the geological record show that, about 56 million years ago during the Paleocene-Eocene Thermal Maximum, global temperature surged 5 to 8 degrees Celsius within 10,000 years.
They also indicate that the planet’s temperature remained elevated for an additional 170,000 years before returning to normal.
Scientists describe this (relatively) rapid rise in temperature as a “hyperthermal event,” and it is not the only one that has ever occurred. About 2 million years later, the Earth experienced another surge in temperature that was about half the magnitude of its predecessor.
Over the course of Earth’s history there have been other, smaller hyperthermal events as well. Most scientists would agree that we are in the midst of one right now.
Abigail D’Ambrosia, a graduate student at the University of New Hampshire, is interested in what happens to living things when the global temperature jumps. Do they go extinct? Do they adapt? Do they change?
Her research, published in Science Advances, shows that at least in the case of some mammals, they shrink.
A treasure trove of footage from early U.S. nuclear weapons tests has just been declassified and uploaded to YouTube.
The film release was part of a project headed by Lawrence Livermore National Laboratory (LLNL) weapons physicist Greg Spriggs which aimed to digitize and preserve thousands of films documenting the nation’s nuclear history. The endeavor required an all-hands-on deck approach from archivists, film experts and software engineers, but the team says that this digitized database is already yielding new insights from the decades-old tests.
The films all stem from the 210 atmospheric nuclear tests undertaken by the U.S. between 1945 and 1962. There are an estimated 10,000 films from these tests, capturing multiple angles and data points. The project has so far tracked down 6,500 of them, and converted 4,200 to a digital format—750 have so far been declassified, and this week’s batch is the first to be released.
Preserving the films wasn’t easy. It required modifying equipment to match the specifications of the old film, and locating data logs that provide critical information about camera placement, speed and focal length. The team then watched each film to determine its exact frame rate, which was known to vary from camera to camera at the time. Several programmers assisted Spriggs’ team and provided computational tools to analyze films frame by frame, a task that was once done by hand. Once a film is digitized and the relevant information matched to it, it can be used to study the behavior of nuclear weapons.
The videos include several of the major nuclear weapons testing runs from the era, including Operations Plumbbob and Dominic. The tests were mostly conducted at sites in Nevada or on atolls in the middle of the Pacific Ocean. Several of the early tests would raise concerns over the fallout from nuclear device testing, both on soldiers involved in exercises nearby and on civilians in the surrounding areas.
The films were originally meant for researchers, to be used as study guides for the next round of development and testing. In the years following the first nuclear explosion, the Trinity test in New Mexico on July 16, 1945, researchers raced to comprehend the magnitude of their creation.
The hundreds of tests that followed comprised an array of bomb designs and testing environments, including underground, underwater and high-altitude tests. The videos of these events were obsessively studied frame by frame to gauge the magnitude of the explosion by looking at its brightness and shockwave, as well as the effects on nearby military equipment, towns and livestock.
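Frame-by-frame fireball measurements really can reveal a bomb's energy: the classic Taylor-Sedov blast-wave scaling, famously applied by G. I. Taylor to declassified Trinity photographs, relates the fireball radius R at time t to the yield E via E ≈ ρR⁵/t². The sketch below uses Taylor's published Trinity numbers as an order-of-magnitude illustration, not the LLNL team's analysis code:

```python
RHO_AIR = 1.2        # sea-level air density, kg/m^3
KT_IN_J = 4.184e12   # joules per kiloton of TNT

def yield_kt(radius_m, time_s, rho=RHO_AIR):
    """Taylor-Sedov point-blast estimate: E ~ rho * R^5 / t^2."""
    return rho * radius_m**5 / time_s**2 / KT_IN_J

# Trinity fireball: roughly 140 m radius at 25 ms after detonation
print(f"{yield_kt(140, 0.025):.0f} kt")  # ~25 kt; the accepted value is ~21 kt
```

A two-frame measurement of radius and time thus pins the yield to within tens of percent, which is exactly why small errors in the original hand-measured frame rates propagated into the 20-30 percent discrepancies Spriggs describes.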
Looking back through the footage today, Spriggs says it’s apparent some of the data gathered 60 years ago is incorrect. With the benefit of modern-day technology, he is hoping to rectify those mistakes and provide accurate information after all this time. “When you go to validate your computer codes, you want to use the best data possible,” he says. “We were finding that some of these answers were off by 20, maybe 30, percent. That’s a big number for doing code validation. One of the payoffs of this project is that we’re now getting very consistent answers. We’ve also discovered new things about these detonations that have never been seen before. New correlations are now being used by the nuclear forensics community, for example.”