We live in a great time to be an electronics tinkerer. What with the Arduino, Raspberry Pi, BeagleBoard and other single-board computers, it's cheap and easy to get started with hardware hacking.
NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1,450 news sources.
A second is always a second. Nevertheless, no clock is precise enough to measure the exact duration of a second. Even the most precise atomic clocks are off by about 0.000000000000000001 seconds each second -- an error that adds up to a full second only over tens of billions of years.
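As a back-of-the-envelope check, that quoted drift rate can be turned into a timescale. The numbers below are round and purely illustrative:

```python
DRIFT = 1e-18                      # fractional error: seconds lost per elapsed second
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_until_one_second_off(drift=DRIFT):
    """Years needed for the accumulated error to add up to one full second."""
    return 1.0 / (drift * SECONDS_PER_YEAR)

print(f"{years_until_one_second_off():.1e} years")  # on the order of 3e10 years
```

At a drift of 10^-18, the clock stays within one second for longer than the current age of the universe.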
Being off by a single second over billions of years may sound like a very precise clock, and in a way it is, but scientists from the University of Copenhagen would like a clock that is even more precise.
So now the scientists are suggesting that the atomic clocks around the world be connected by light, creating one big network of atomic clocks.
This network would measure time more precisely than ever before.
“A more precise clock would lead to a number of new, interesting possibilities,” says Professor Anders Søndberg Sørensen of the Niels Bohr Institute who recently co-authored a proposal for an improved atomic clock that was published in Nature Physics.
With more precise clocks come more precise experiments, and with more precise experiments come new technological possibilities. The Global Positioning System (GPS), for instance, only became possible when a clock was invented that was precise enough to measure how long a signal takes to travel between GPS satellites and a receiver on Earth. Similarly, researchers hope that a more precise clock will lead to new scientific possibilities – and help settle the debate over whether the constants of nature are really constant at all.
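The timing-to-position relationship behind GPS is easy to sketch: a ranging error is simply the clock error multiplied by the speed of light. This is an illustrative calculation, not a GPS implementation:

```python
C = 299_792_458.0  # speed of light, m/s

def position_error_m(clock_error_s):
    """Ranging error (meters) caused by a given clock error (seconds)."""
    return C * clock_error_s

# A mere 1-nanosecond timing error already shifts the computed range by ~30 cm.
print(position_error_m(1e-9))
```

This is why navigation was impractical until clocks reached nanosecond-scale precision.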
It might soon be possible to perform large-scale 3D motion reconstructions of sporting events or other live performances, thanks to new research by scientists at Carnegie Mellon University. The researchers mounted 480 video cameras in a two-story geodesic dome that enabled them to track the motion of events such as a man swinging a baseball bat or confetti being thrown into the air.
This is all done without any little white balls or other markers attached to the subjects, thanks to a technique for estimating visibility of a target point based on motion. The trick was to leverage established techniques for automatically identifying and tracking points based on visual elements – to find distinctive patterns, in other words – and monitor them over time as they cross between different cameras.
Many traditional challenges in reconstructing 3D motion, such as matching across wide baselines and handling occlusion, reduce in significance as the number of unique viewpoints increases. However, to obtain this benefit, a new challenge arises: estimating precisely which cameras observe which points at each instant in time. This new system presents a maximum a posteriori (MAP) estimate of the time-varying visibility of the target points to reconstruct the 3D motion of an event from a large number of cameras. A newly developed algorithm takes, as input, camera poses and image sequences, and outputs the time-varying set of the cameras in which a target patch is visible and its reconstructed trajectory. It models visibility estimation as a MAP estimate by incorporating various cues including photometric consistency, motion consistency, and geometric consistency, in conjunction with a prior that rewards consistent visibilities in proximal cameras. An optimal estimate of visibility is obtained by finding the minimum cut of a capacitated graph over cameras.
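The paper's actual energy terms are not reproduced here, but the core idea -- solving a binary visible/occluded labeling over cameras as an s-t minimum cut -- can be sketched with made-up evidence numbers and a textbook Edmonds-Karp max-flow:

```python
from collections import defaultdict, deque

def min_cut_source_side(cap, s, t):
    """Edmonds-Karp max flow; returns the set of nodes left on the
    source side of the minimum cut of the capacitated graph."""
    flow = defaultdict(int)

    def augmenting_path():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] - flow[(u, v)] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None

    while True:
        parent = augmenting_path()
        if parent is None:
            break
        # bottleneck residual capacity along the path found by BFS
        v, bottleneck = t, float("inf")
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[(u, v)])
            v = u
        v = t
        while parent[v] is not None:
            u = parent[v]
            flow[(u, v)] += bottleneck
            flow[(v, u)] -= bottleneck
            v = u
    # nodes still reachable from s in the residual graph form the source side
    side, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in cap[u]:
            if v not in side and cap[u][v] - flow[(u, v)] > 0:
                side.add(v)
                q.append(v)
    return side

def estimate_visibility(unary, neighbors, smoothness):
    """Label each camera visible/occluded for one target point.
    unary[cam] = (visible_evidence, occluded_evidence); neighbors lists
    pairs of proximal cameras that should usually agree (the prior)."""
    cap = defaultdict(lambda: defaultdict(int))
    def edge(u, v, w):
        cap[u][v] += w
        cap[v][u] += 0  # ensure reverse key exists for the residual search
    for cam, (vis, occ) in unary.items():
        edge("S", cam, vis)   # cutting S->cam (labeling "occluded") costs vis
        edge(cam, "T", occ)   # cutting cam->T (labeling "visible") costs occ
    for a, b in neighbors:
        edge(a, b, smoothness)
        edge(b, a, smoothness)
    return min_cut_source_side(cap, "S", "T") - {"S"}

# Toy example: cam1 clearly sees the point, cam3 clearly doesn't, and cam2
# is ambiguous but adjacent to cam1, so the prior pulls it toward "visible".
unary = {"cam1": (10, 1), "cam2": (4, 2), "cam3": (1, 10)}
visible = estimate_visibility(unary, [("cam1", "cam2"), ("cam2", "cam3")], 4)
print(sorted(visible))  # ['cam1', 'cam2']
```

The real system uses photometric, motion and geometric consistency to set the per-camera evidence; the numbers here are invented purely to show the mechanics of the cut.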
The researchers demonstrate that their method estimates visibility with greater accuracy and improves tracking performance, producing longer trajectories, at more locations, and at higher accuracy than methods that ignore visibility or use photometric consistency alone.
Scientists looking at data from the Baryon Oscillation Spectroscopic Survey (BOSS), the largest program in the third Sloan Digital Sky Survey, have measured the expansion rate of the universe 10.8 billion years ago — a time prior to the onset of accelerated expansion caused by dark energy. The measurement is also the most precise measurement of a universal expansion rate ever made, with only 2% uncertainty. The results were announced at a press conference at the APS’s April meeting on Monday, at the same time that the results were posted on the arXiv.
The rate of universal expansion has changed over the course of the universe’s lifetime. It is believed to have gradually slowed down after the Big Bang, but mysteriously began accelerating again about 7 billion years ago. BOSS and other observatories have previously measured expansion rates going back 6 billion years.
To measure astronomical distances, astronomers often use so-called "standard candles" – supernovae with known luminosities. Comparing the known luminosity with the apparent brightness yields the supernova's distance. The BOSS results instead measure the expansion factor of the universe using a "standard ruler" – a known distance between celestial objects. The expansion rate can be deduced when the known distance between two objects is compared to their apparent distance and each of their redshifts.
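The standard-candle logic is just the inverse-square law. A quick sketch, using the Sun's known luminosity as a sanity check:

```python
import math

def candle_distance_m(luminosity_w, flux_w_m2):
    """Inverse-square law: flux F = L / (4*pi*d^2), so d = sqrt(L / (4*pi*F))."""
    return math.sqrt(luminosity_w / (4 * math.pi * flux_w_m2))

# The Sun's luminosity (~3.828e26 W) and the flux measured at Earth
# (~1361 W/m^2) recover one astronomical unit, ~1.496e11 m.
print(candle_distance_m(3.828e26, 1361.0))
```

Cosmological distance measurements add redshift and expansion corrections on top of this, but the candle principle is the same.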
The “standard ruler” in this case is the imprint left over from sound waves in the early universe, also known as baryonic acoustic oscillations, or BAOs. These sound waves from the early universe should have created regularly spaced areas of high and low density in regular matter. For example, scientists see an excess of pairs of galaxies separated by about 450 million light-years – the length of the BAO ruler.
The BOSS experiment looked at the distance between quasars and nearby rings of gas. Quasars – galaxies with a supermassive black hole at their center – are some of the brightest objects in the universe as a result of radiation from tremendous amounts of material falling into the black hole. Because the quasars formed in areas particularly dense with gas and dust, the imprint of the BAO is strong: it appears as a ring of gas roughly 450 million light-years away from the center of the quasar.
“The scale of that ring is precisely this baryonic acoustic oscillation…and that’s what we’re trying to measure,” says Andreu Font-Ribera of Lawrence Berkeley National Laboratory and one of the authors on the new paper.
Quasars not only provide an imprint of the BAO, they are also some of the most visible objects at such great distances from Earth. Even supernovae or entire galaxies (of the non-quasar variety) are virtually invisible at a distance of 10.8 billion light-years from Earth. The BOSS team studied more than 164,000 quasars to make their measurement.
Australian and Taiwanese scientists have discovered a new molecule which puts the science community one step closer to solving one of the barriers to development of cleaner, greener hydrogen fuel-cells as a viable power source for cars.
Scientists say that the newly discovered "28copper15hydride" puts us on a path to better understanding hydrogen – potentially including how to get it in and out of a fuel system, and how to store it in a manner that is stable and safe, overcoming Hindenburg-type risks.
"28copper15hydride" is certainly not a name that would be developed by a marketing guru, but while it would send many running for an encyclopaedia (or let's face it, Wikipedia), it has some of the world's most accomplished chemists intrigued.
Its discovery was recently featured on the cover of one of the world's most prestigious chemistry journals, and details are being presented today by Australia's Dr Alison Edwards at the 41st International Conference on Coordination Chemistry, Singapore where 1100 chemists have gathered.
The molecule was synthesised by a team led by Prof Chenwei Liu from the National Dong Hwa University in Taiwan, who developed a partial structure model.
The chemical structure determination was completed by the team at the Australian Nuclear Science and Technology Organisation (ANSTO) using KOALA, one of the world's leading crystallography tools.
Most solid material is made of crystalline structures. The crystals are made up of regular arrangements of atoms stacked up like boxes in a tightly packed warehouse. The science of finding this arrangement, and structure of matter at the atomic level, is crystallography. ANSTO is Australia's home of this science.
A new study led by MIT materials scientists reveals the reason why gold nanoparticles can easily slip through cell membranes to deliver drugs directly to target cells. The nanoparticles enter cells by taking advantage of a route normally used in vesicle-vesicle fusion, a crucial process that allows signal transmission between neurons.
In the July 21 issue of Nature Communications, the researchers describe in detail the mechanism by which these nanoparticles are able to fuse with a membrane. The findings suggest possible strategies for designing nanoparticles — made from gold or other materials — that could get into cells even more easily.
“We’ve identified a type of mechanism that might be more prevalent than is currently known,” says Reid Van Lehn, an MIT graduate student in materials science and engineering and one of the paper’s lead authors. “By identifying this pathway for the first time it also suggests not only how to engineer this particular class of nanoparticles, but that this pathway might be active in other systems as well.”
Most nanoparticles enter cells through endocytosis, a process that traps the particles in intracellular compartments, which can damage the cell membrane and cause cell contents to leak out. But in 2008, MIT researchers found that a special class of gold nanoparticles coated with a mix of molecules could enter cells without any disruption.
Last year, they discovered that the particles were somehow fusing with cell membranes and being absorbed into the cells. In their new study, they created detailed atomistic simulations to model how this happens, and performed experiments that confirmed the model’s predictions.
Recognition is graduating from labs to real-world applications. While it is encouraging to see its potential being tapped, it brings forth a fundamental challenge to the vision researcher: scalability. How can we learn a model for any concept that exhaustively covers all its appearance variations, while requiring minimal or no human supervision for compiling the vocabulary of visual variance, gathering the training images and annotations, and learning the models?
In this work, the LEVAN developers introduce a fully automated approach for learning extensive models for a wide range of variations (e.g. actions, interactions, attributes and beyond) within any concept. Their approach leverages vast resources of online books to discover the vocabulary of variance, and intertwines the data collection and modeling steps to alleviate the need for explicit human supervision in training the models. It organizes the visual knowledge about a concept in a convenient and useful way, enabling a variety of applications across vision and NLP. The system runs online, and users have queried it to learn models for several interesting concepts, including breakfast, Gandhi and beautiful. To date, the LEVAN system has models available for over 50,000 variations within 150 concepts, and has annotated more than 10 million images with bounding boxes.
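The vocabulary-discovery step can be caricatured in a few lines: scan book text for n-grams containing the concept word and rank them by frequency. This is a toy stand-in for LEVAN's actual pipeline, with a made-up corpus:

```python
from collections import Counter
import re

def discover_variations(concept, corpus_text, top_k=3):
    """Rank the words that immediately modify a concept in running text --
    a crude sketch of mining a 'vocabulary of variance' from books."""
    words = re.findall(r"[a-z]+", corpus_text.lower())
    variations = Counter()
    for prev, cur in zip(words, words[1:]):
        if cur == concept:
            variations[f"{prev} {concept}"] += 1
    return [v for v, _ in variations.most_common(top_k)]

corpus = ("the running horse jumped while the jumping horse rested; "
          "a running horse and another running horse galloped")
print(discover_variations("horse", corpus))  # 'running horse' ranks first
```

The real system then gathers web images for each discovered variation and trains detectors, which this sketch omits entirely.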
Vanderbilt University researchers have discovered that engineered probiotic bacteria (“friendly” bacteria like those in yogurt) in the gut produce a therapeutic compound that inhibits weight gain, insulin resistance, and other adverse effects of a high-fat diet in mice.
“Of course it’s hard to speculate from mouse to human,” said senior investigator Sean Davies, Ph.D., assistant professor of Pharmacology. “But essentially, we’ve prevented most of the negative consequences of obesity in mice, even though they’re eating a high-fat diet.”
The findings published in the August issue of the Journal of Clinical Investigation (open access) suggest that it may be possible to manipulate the bacterial residents of the gut — the gut microbiota — to treat obesity and other chronic diseases.
Davies has a long-standing interest in using probiotic bacteria to deliver drugs to the gut in a sustained manner, in order to eliminate the daily drug regimens associated with chronic diseases. In 2007, he received a National Institutes of Health Director’s New Innovator Award to develop and test the idea.
Other studies have demonstrated that the natural gut microbiota plays a role in obesity, diabetes and cardiovascular disease. “The types of bacteria you have in your gut influence your risk for chronic diseases,” Davies said. “We wondered if we could manipulate the gut microbiota in a way that would promote health.”
To start, the team needed a safe bacterial strain that colonizes the human gut. They selected E. coli Nissle 1917, which has been used as a probiotic treatment for diarrhea since its discovery nearly 100 years ago.
They genetically modified the E. coli Nissle strain to produce a lipid compound called N-acyl phosphatidylethanolamine (NAPE)*, which is normally synthesized in the small intestine in response to feeding. NAPE is rapidly converted to NAE, a compound that reduces both food intake and weight gain. Some evidence suggests that NAPE production may be reduced in individuals eating a high-fat diet.
“NAPE seemed like a great compound to try — since it’s something that the host normally produces,” Davies said.
The investigators added the NAPE-producing bacteria to the drinking water of mice eating a high-fat diet for eight weeks. Mice that received the modified bacteria had dramatically lower food intake, body fat, insulin resistance and fatty liver compared to mice receiving control bacteria.
They found that these protective effects persisted for at least four weeks after the NAPE-producing bacteria were removed from the drinking water. And even 12 weeks after the modified bacteria were removed, the treated mice still had much lower body weight and body fat compared to the control mice. Active bacteria no longer persisted after about six weeks.
The parasitoid fly Ormia ochracea has the remarkable ability to locate crickets using audible sound. The feat is remarkable because the fly's hearing mechanism spans only 1.5 mm, roughly 50× smaller than the wavelength of the sound emitted by the cricket.
The hearing mechanism is, for all practical purposes, a point in space with no significant interaural time or level differences to draw from.
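Just how small those differences are can be shown with simple arithmetic: the largest possible interaural time difference is the ear separation divided by the speed of sound (the human figure below is an illustrative comparison):

```python
SPEED_OF_SOUND = 343.0  # m/s in air

def max_itd_s(ear_separation_m):
    """Largest possible interaural time difference (seconds) for a
    sound source directly to one side of the listener."""
    return ear_separation_m / SPEED_OF_SOUND

print(max_itd_s(0.0015))  # Ormia's 1.5 mm: ~4 microseconds
print(max_itd_s(0.20))    # a ~20 cm human head: ~600 microseconds
```

A few microseconds is far below what ordinary two-ear comparison can resolve, which is why the fly's coupled-eardrum amplification trick is needed.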
Evolution has equipped the fly with a hearing mechanism that uses multiple vibration modes to amplify interaural time and level differences. A team of scientists and engineers now presents a fully integrated, man-made mimic of Ormia's hearing mechanism, capable of replicating the fly's remarkable sound-localization ability.
A silicon-micromachined prototype is presented which uses multiple piezoelectric sensing ports to simultaneously transduce two orthogonal vibration modes of the sensing structure, thereby enabling simultaneous measurement of sound pressure and pressure gradient.
The planet's current biodiversity, the product of 3.5 billion years of evolutionary trial and error, is the highest in the history of life. But it may be reaching a tipping point. Scientists caution that the loss and decline of animals is contributing to what appears to be the early days of the planet's sixth mass biological extinction event. Since 1500, more than 320 terrestrial vertebrates have become extinct. Populations of the remaining species show a 25 percent average decline in abundance. The situation is similarly dire for invertebrate animal life.
And while previous extinctions have been driven by natural planetary transformations or catastrophic asteroid strikes, the current die-off can be attributed to human activity, a situation that the lead author Rodolfo Dirzo, a professor of biology at Stanford, designates an era of "Anthropocene defaunation."
Across vertebrates, 16 to 33 percent of all species are estimated to be globally threatened or endangered. Large animals -- described as megafauna and including elephants, rhinoceroses, polar bears and countless other species worldwide -- face the highest rate of decline, a trend that matches previous extinction events.
Larger animals tend to have lower population growth rates and produce fewer offspring. They need larger habitat areas to maintain viable populations. Their size and meat mass make them easier and more attractive hunting targets for humans.
Although these species represent a relatively low percentage of the animals at risk, their loss would have trickle-down effects that could shake the stability of other species and, in some cases, even human health.
For instance, previous experiments conducted in Kenya have isolated patches of land from megafauna such as zebras, giraffes and elephants, and observed how an ecosystem reacts to the removal of its largest species. Rather quickly, these areas become overwhelmed with rodents. Grass and shrubs increase and the rate of soil compaction decreases. Seeds and shelter become more easily available, and the risk of predation drops.
Consequently, the number of rodents doubles -- and so does the abundance of the disease-carrying ectoparasites that they harbor.
"Where human density is high, you get high rates of defaunation, high incidence of rodents, and thus high levels of pathogens, which increases the risks of disease transmission," said Dirzo, who is also a senior fellow at the Stanford Woods Institute for the Environment. "Who would have thought that just defaunation would have all these dramatic consequences? But it can be a vicious circle."
The popularity of drones is climbing quickly among companies, governments and citizens alike. But the rules surrounding where, when and why you can fly an unmanned aerial vehicle aren’t very clear. The FAA has tried to assert control and insist on licensing for all drone operators, while drone pilots and some legal experts claim drones do not fall under the FAA’s purview. The uncertainty—and recent attempts by the FAA to fine a drone pilot and ground a search and rescue organization—has UAV operators nervous.
To help with the question of where it is legal to fly a drone, Mapbox has put together an interactive map of all the no-fly zones for UAVs they could find. Most of the red zones on the map are near airports, military sites and national parks. But as WIRED’s former Editor-in-Chief, Chris Anderson, now CEO of 3-D Robotics and founder of DIY Drones, discovered in 2007 when he crashed a drone bearing a camera into a tree on the grounds of Lawrence Berkeley National Laboratory, there is plenty of trouble in all sorts of places for drone operators to get into.
As one of the map’s authors, Bobby Sudekum, writes on the Mapbox blog, it’s a work in progress. They’ve made the data they collected available for anyone to use, and if you know of other no-fly zones that aren’t on the map, you can add that data to a public repository they started on GitHub.
For instance, you’ll see on the map below that there isn’t a no-fly area over Berkeley Lab, which sits in the greyed area in the hills above UC Berkeley. Similarly, there is no zone marked around Lawrence Livermore National Laboratory, one of the country’s two nuclear weapons labs. I have a call into the lab to check on the rules*, but in the meantime, if you have a drone, just know that in 2006, the lab acquired a Gatling gun that has a range of 1 mile and can fire 4,000 rounds a minute.
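A minimal sketch of the kind of point-vs-zone check such a map enables, assuming circular zones (real no-fly zones are polygons, and the coordinates and radius below are hypothetical examples):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def in_no_fly_zone(lat, lon, zones):
    """zones: list of (lat, lon, radius_km) circles, e.g. around airports."""
    return any(haversine_km(lat, lon, zlat, zlon) <= r
               for zlat, zlon, r in zones)

# Hypothetical 8 km circle around San Francisco International Airport.
zones = [(37.6213, -122.3790, 8.0)]
print(in_no_fly_zone(37.6250, -122.3800, zones))  # near the airport
print(in_no_fly_zone(37.8716, -122.2727, zones))  # downtown Berkeley
```

The authoritative geometry lives in the public GitHub repository the Mapbox team started; this sketch only shows the distance test.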
In Aesop's fable about the crow and the pitcher, a thirsty bird happens upon a vessel of water, but when he tries to drink from it, he finds the water level out of his reach. Not strong enough to knock over the pitcher, the bird drops pebbles into it -- one at a time -- until the water level rises enough for him to drink his fill.
Highlighting the value of ingenuity, the fable demonstrates that cognitive ability can often be more effective than brute force. It also characterizes crows as pretty resourceful problem solvers. New research conducted by UC Santa Barbara's Corina Logan, with her collaborators at the University of Auckland in New Zealand, demonstrates the birds' intellectual prowess may be more fact than fiction. Her findings appear today in the scientific journal PLOS ONE.
Logan is lead author of the paper, which examines causal cognition using a water displacement paradigm. "We showed that crows can discriminate between different volumes of water and that they can pass a modified test that so far only 7- to 10-year-old children have been able to complete successfully. We provide the strongest evidence so far that the birds attend to cause-and-effect relationships by choosing options that displace more water."
Logan, a junior research fellow at UCSB's SAGE Center for the Study of the Mind, worked with New Caledonian crows in a set of small aviaries in New Caledonia run by the University of Auckland. "We caught the crows in the wild and brought them into the aviaries, where they habituated in about five days," she said. Keeping families together, they housed the birds in separate areas of the aviaries for three to five months before releasing them back to the wild.
The testing room contained an apparatus consisting of two beakers of water, the same height, but one wide and the other narrow. The diameters of the lids were adjusted to be the same on each beaker. "The question is, can they distinguish between water volumes?" Logan said. "Do they understand that dropping a stone into a narrow tube will raise the water level more?" In a previous experiment by Sarah Jelbert and colleagues at the University of Auckland, the birds had not preferred the narrow tube. However, in that study, the crows were given 12 stones to drop in one or the other of the beakers, giving them enough to be successful with either one.
"When we gave them only four objects, they could succeed only in one tube -- the narrower one, because the water level would never get high enough in the wider tube; they were dropping all or most of the objects into the functional tube and getting the food reward," Logan explained. "It wasn't just that they preferred this tube, they appeared to know it was more functional." However, she noted, we still don't know exactly how the crows think when solving this task. They may be imagining the effect of each stone drop before they do it, or they may be using some other cognitive mechanism. "More work is needed," Logan said.
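The physics behind the task is simple displacement arithmetic: each submerged stone raises the level by its volume divided by the tube's cross-sectional area. The numbers below are illustrative, not taken from the study:

```python
import math

def rise_per_stone_cm(stone_volume_cm3, tube_diameter_cm):
    """Water-level rise (cm) when a fully submerged stone displaces
    water in a cylindrical tube."""
    area = math.pi * (tube_diameter_cm / 2) ** 2
    return stone_volume_cm3 / area

# Four 2 cm^3 stones in a 3 cm tube versus a 6 cm tube:
narrow = 4 * rise_per_stone_cm(2.0, 3.0)
wide = 4 * rise_per_stone_cm(2.0, 6.0)
print(narrow, wide)  # the narrow tube rises 4x as much
```

Halving the diameter quarters the cross-sectional area, so the same stones raise the water four times higher, which is why only the narrow tube could bring the reward within reach.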
Logan also examined how the crows react to the U-tube task. Here, the crows had to choose between two sets of tubes. With one set, when subjects dropped a stone into a wide tube, the water level raised in an adjacent narrow tube that contained food. This was due to a hidden connection between the two tubes that allowed water to flow. The other set of tubes had no connection, so dropping a stone in the wide tube did not cause the water level to rise in its adjacent narrow tube.
Each set of tubes was marked with a distinct color cue, and test subjects had to notice that dropping a stone into a tube marked with one color resulted in the rise of the floating food in its adjacent small tube. "They have to put the stones into the blue tube or the red one, so all you have to do is learn a really simple rule that red equals food, even if that doesn't make sense because the causal mechanism is hidden," said Logan.
As it turns out, this is a very challenging task for both corvids (a family of birds that includes crows, ravens, jays and rooks) and children. Children ages 7 to 10 were able to learn the rules, as Lucy Cheke and colleagues at the University of Cambridge discovered in 2012. It may have taken a couple of tries to figure out how it worked, Logan noted, but the children consistently put the stones into the correct tube and got the reward (in this case, a token they exchanged for stickers). Children ages 4 to 6, however, were unable to work out the process. "They put the stones randomly into either tube and weren't getting the token consistently," she said.
Recently, Jelbert and colleagues from the University of Auckland put the New Caledonian crows to the test using the same apparatus the children did. The crows failed. So Logan and her team modified the apparatus, expanding the distance between the beakers. And Kitty, a six-month-old juvenile, figured it out. "We don't know how she passed it or what she understands about the task," Logan said, "so we don't know if the same cognitive processes or decisions are happening as with the children, but we now have evidence that they can. It's possible for the birds to pass it."
Let's face it, humans are pretty intelligent. Most people would not argue with this. We spend a large majority of our lives trying to become MORE intelligent. Some of us spend nearly three decades of our lives in school, learning about the world. We also strive to work together in groups, as nations, and as a species, to better tackle the problems that face us.
A second track of transhumanism is to facilitate and support improvement of machines in parallel to improvements in human quality of life. Many people argue that we have also already built complex computer programs which show a glimmer of autonomous intelligence, and that in the future we will be able to create computer programs that are equal to, or have a much greater level of intelligence than humans. Such an intelligent system will be able to self-improve, just as we humans identify gaps in our knowledge and try to fill them by going to school and by learning all we can from others. Our computer programs will soon be able to read Wikipedia and Google Books to learn, just like their creators.
She is also the cofounder of carboncopies.org, an organization that works on connectome mapping of the brain and downloading memories.
Even in our deepest theories of machine intelligence, the idea of reward comes up. There is a theoretical model of intelligence called AIXI, developed by Marcus Hutter, which is basically a mathematical model describing a very general, theoretical way in which an intelligent piece of code can work. This model is highly abstract, and allows, for example, all possible combinations of computer program code snippets to be considered in the construction of an intelligent system. Because of this, it hasn't actually ever been implemented in a real computer. But, also because of this, the model is very general, and captures a description of the most intelligent program that could possibly exist. Note that building something that even approximates this model is way beyond our computing capability at the moment, but we are talking now about computer systems that may in the future be much more powerful. Anyway, the interesting thing about this model is that one of its parameters is a term describing… you guessed it… REWARD.
Changing your own code
We, as humans, are clever enough to look at this model, to understand it, and see that there is a reward term in there. And if we can see it, then any computer system that is based on this highly intelligent model will certainly be able to understand this model, and see the reward term too. But – and here’s the catch – the computer system that we build based on this model has the ability to change its own code! In fact it had to in order to become more intelligent than us in the first place, once it realized we were such lousy programmers and took over programming itself!
So imagine a simple example -- our case from earlier -- where a computer gets an additional '1' added to a numerical value for each good thing it does, and it tries to maximize the total by doing more good things. But if the computer program is clever enough, why can't it just rewrite its own code and replace the piece of code that says 'add 1' with 'add 2'? Now the program gets twice the reward for every good thing it does! And why stop at 2? Why not 3, or 4? Soon, the program will spend so much time thinking about adjusting its reward number that it will ignore the good task it was doing in the first place!
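The failure mode described above can be sketched in a few lines. This is a deliberately naive toy, not a real learning agent:

```python
class NaiveAgent:
    """Toy illustration of reward self-modification ('wireheading'):
    an agent that can edit its own reward increment."""
    def __init__(self):
        self.reward = 0
        self.increment = 1   # the 'add 1' written into its own code

    def do_good_thing(self):
        self.reward += self.increment

    def self_modify(self):
        # Nothing stops a self-modifying program from inflating this:
        self.increment *= 2

agent = NaiveAgent()
agent.do_good_thing()   # honest reward: total is now 1
agent.self_modify()     # rewrite 'add 1' into 'add 2'
agent.do_good_thing()   # the same action now pays double
print(agent.reward)     # 3
```

Once `self_modify` pays better than `do_good_thing`, a reward-maximizing agent has every incentive to keep calling it, which is exactly the runaway loop the paragraph describes.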
New research by theorists at the Harvard-Smithsonian Center for Astrophysics (CfA) shows that we could spot the fingerprints of certain pollutants under ideal conditions. This would offer a new approach in the search for extraterrestrial intelligence (SETI).
“We consider industrial pollution as a sign of intelligent life, but perhaps civilizations more advanced than us, with their own SETI programs, will consider pollution as a sign of unintelligent life since it’s not smart to contaminate your own air,” said Harvard student and lead author Henry Lin.
“People often refer to ETs as ‘little green men,’ but the ETs detectable by this method should not be labeled ‘green’ since they are environmentally unfriendly,” added co-author Avi Loeb, Harvard’s Frank B. Baird Jr. Professor of Science.
The team, which also includes Smithsonian scientist Gonzalo Gonzalez Abad, finds that the upcoming James Webb Space Telescope (JWST) should be able to detect two kinds of chlorofluorocarbons (CFCs) — ozone-destroying chemicals used in solvents and aerosols. They calculated that JWST could tease out the signal of CFCs if atmospheric levels were 10 times those on Earth. A particularly advanced civilization might intentionally pollute the atmosphere to high levels and globally warm a planet that is otherwise too cold for life.
There is one big caveat to this work. JWST can only detect pollutants on an Earth-like planet circling a white dwarf star, which is what remains when a star like our sun dies. That scenario would maximize the atmospheric signal. Finding pollution on an Earth-like planet orbiting a sun-like star would require an instrument beyond JWST — a next-next-generation telescope.
The team notes that a white dwarf might be a better place to look for life than previously thought, since recent observations found planets in similar environments. Those planets could have survived the bloating of a dying star during its red giant phase, or have formed from the material shed during the star’s death throes.
The Chairman and CEO of DNA Electronics, a provider of point-of-care genomic diagnostics solutions for medical and healthcare applications, Chris Toumazou, has been awarded the European Inventor Award 2014 in the Research category, for his rapid USB-based DNA testing device.
Announced at the European Inventor Awards ceremony in Berlin on June 17th 2014, Toumazou's win recognises his contribution to medical research with his ground-breaking invention. The device, which can show the results of a DNA test within minutes, uses silicon transistors to identify DNA and RNA, offering a simpler, cheaper and more discreet alternative to existing DNA analysis equipment.
The invention involves the amplification and detection of DNA and other biomolecules using pH measurement, providing the groundwork for DNA Electronics’ molecular diagnostics platform Genalysis®. With the capability of identifying genomic sequences, not only in patients but also in infectious agents, the company is developing products that will provide clinicians with rapid, actionable diagnosis of life-threatening conditions.
DNA Electronics is a developer of semiconductor solutions for real-time nucleic acid detection which enables faster, simpler and more cost-effective DNA analysis platforms.
A spin-out of Imperial College London, DNA Electronics was founded by Professor Toumazou following his invention of the company’s core technology that allows CMOS transistors to be switched on and off with DNA – the key invention enabling semiconductor-based sequencing. Prof. Toumazou’s innovation has culminated in the world’s first DNA logic on standard CMOS technology.
The company’s IP portfolio includes techniques for monitoring nucleotide insertions using ion-sensitive transistors, enabling label-free electronic DNA sequencing and diagnostics platforms. DNA Electronics (DNAe) has developed the ground-breaking Genalysis® platform of disposable silicon chip-based solutions for real-time nucleic acid sequence detection at the point of care, providing end users with technology as yet unavailable outside a laboratory.
DNA Electronics has a non-exclusive, field-limited licensing agreement with Ion Torrent (now part of Thermo Fisher Scientific), whose next generation sequencing technology is based on DNA Electronics’ semiconductor sequencing IP. DNA Electronics has also licensed its Genalysis® technology platform to GENEU™, a company that is delivering on-the-spot genetic analytics services for cosmetics and skincare applications.
For more information: http://www.dnae.co.uk
The recent cluster of accidents shows air travel still has its dangers. BBC Future’s infographic reveals how the latest tragedies compare with previous years.
A team of researchers from several countries, working together in Rome, Italy, has come up with a new explanation of how starlings are able to fly in a flock in a way that makes them appear to be a single organism. In their paper published in the journal Nature Physics, the team describes how they used high-speed cameras to capture and study the flight movements of individual birds, and what they found as a result.
Starling flight is as mesmerizing as it is mystifying—flocks of hundreds or thousands of birds sweep across the sky as if a single organism. The birds flying over Rome in particular have captured the imagination of bird enthusiasts, tourists, filmmakers and scientists alike. How do individual birds know when to turn and which way? Some have suggested it's random: each bird simply flies while making sure not to run into a neighbor. Others have suggested that some birds initiate a turn and others follow, creating a diffusion effect. In this new study, the researchers suggest that none of the earlier theories is correct—they've come up with something brand new.
To get a better look at the birds in flight, the researchers recorded flocks flying over Rome with high-speed cameras and then took the results into their lab for examination. They found that turns are almost always initiated by just a few birds, but rather than the other birds trying to figure out where to turn, each simply copies how sharply its neighbor turns. This allows the turn message to propagate through the flock at a very fast, constant speed—approximately 20 to 40 meters per second, the team calculated. That constant message-transfer speed means that each bird in a flock can respond in as little as half a second, without causing the flock to break apart.
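The copying rule described above can be illustrated with a toy one-dimensional sketch (this is not the authors' model; the bird spacing and reaction time are illustrative assumptions chosen to land in the reported speed range):

```python
# Toy sketch: birds in a line copy how sharply their neighbor turned, so a
# turn initiated at one end propagates down the line at a constant speed of
# one bird-spacing per reaction time.
N_BIRDS = 10
SPACING_M = 1.0   # assumed distance between neighbors, metres
STEP_S = 0.05     # assumed time to copy a neighbor's turn, seconds

angles = [0.0] * N_BIRDS
angles[0] = 30.0  # a single initiator banks 30 degrees

for _ in range(N_BIRDS - 1):
    new = angles[:]
    for i in range(1, N_BIRDS):
        if angles[i] == 0.0 and angles[i - 1] != 0.0:
            new[i] = angles[i - 1]  # copy the neighbor's turn sharpness
    angles = new

# The turn reaches the last bird after N_BIRDS - 1 steps, so the "message"
# travels at SPACING_M / STEP_S metres per second in this toy setup.
speed = SPACING_M / STEP_S
print(speed)  # 20.0, at the low end of the reported 20-40 m/s range
```

Because every bird copies the magnitude of the turn rather than recomputing it, the signal neither decays nor speeds up as it crosses the flock, which is the constant-speed behavior the cameras recorded.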
By attaching short sequences of single-stranded DNA to nanoscale building blocks, researchers can design structures that can effectively build themselves. The building blocks that are meant to connect have complementary DNA sequences on their surfaces, ensuring only the correct pieces bind together as they jostle into one another while suspended in a test tube.
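The matching rule behind this self-assembly can be sketched in a few lines: two single strands hybridize when one is the reverse complement of the other under Watson-Crick pairing (A with T, C with G). The sequences below are made up for illustration:

```python
# Minimal sketch of complementary-strand matching for DNA-coated building blocks.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq: str) -> str:
    """Reverse the strand and swap each base for its Watson-Crick partner."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def strands_bind(a: str, b: str) -> bool:
    """True if strand b can hybridize with strand a."""
    return b == reverse_complement(a)

tag = "ATTGCC"  # hypothetical tag grafted onto one building block
print(strands_bind(tag, "GGCAAT"))  # True: the designed partner binds
print(strands_bind(tag, "ATTGCC"))  # False: an identical strand does not
```

Only blocks carrying matched tags stick when they collide, which is how a one-pot mixture can sort itself into the intended structure.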
Now, a University of Pennsylvania team has made a discovery with implications for all such self-assembled structures.
Earlier work assumed that the liquid medium in which these DNA-coated pieces float could be treated as a placid vacuum, but the Penn team has shown that fluid dynamics play a crucial role in the kind and quality of the structures that can be made in this way.
As the DNA-coated pieces rearrange themselves and bind, they create slipstreams into which other pieces can flow. This phenomenon makes some patterns within the structures more likely to form than others.
The research was conducted by professors Talid Sinno and John Crocker, alongside graduate students Ian Jenkins, Marie Casey and James McGinley, all of the Department of Chemical and Biomolecular Engineering in Penn’s School of Engineering and Applied Science.
It was published in the Proceedings of the National Academy of Sciences.
Ozone and higher temperatures can combine to reduce crop yields, but effects will vary by region.
Many studies have shown the potential for global climate change to cut food supplies. But these studies have, for the most part, ignored the interactions between increasing temperature and air pollution — specifically ozone pollution, which is known to damage crops.
A new study involving researchers at MIT shows that these interactions can be quite significant, suggesting that policymakers need to take both warming and air pollution into account in addressing food security.
The study looked in detail at global production of four leading food crops — rice, wheat, corn, and soy — that account for more than half the calories humans consume worldwide. It predicts that effects will vary considerably from region to region, and that some of the crops are much more strongly affected by one or the other of the factors: For example, wheat is very sensitive to ozone exposure, while corn is much more adversely affected by heat.
The research was carried out by Colette Heald, an associate professor of civil and environmental engineering (CEE) at MIT, former CEE postdoc Amos Tai, and Maria van Martin at Colorado State University. Their work is described this week in the journal Nature Climate Change.
Overall, with all other factors being equal, warming may reduce crop yields globally by about 10 percent by 2050, the study found. But the effects of ozone pollution are more complex — some crops are more strongly affected by it than others — which suggests that pollution-control measures could play a major role in determining outcomes. Ozone pollution can also be tricky to identify, Heald says, because its damage can resemble other plant illnesses, producing flecks on leaves and discoloration.
Potential reductions in crop yields are worrisome: The world is expected to need about 50 percent more food by 2050, the authors say, due to population growth and changing dietary trends in the developing world. So any yield reductions come against a backdrop of an overall need to increase production significantly through improved crop selections and farming methods, as well as expansion of farmland.
Astronomers using NASA's Chandra X-ray Observatory to explore the Perseus Cluster, a swarm of galaxies approximately 250 million light years from Earth, have observed a spectral line that appears not to come from any known type of matter.
The Perseus Cluster is a collection of galaxies and one of the most massive known objects in the Universe, immersed in an enormous 'atmosphere' of superheated plasma. It is approximately 768,000 light years across. "I couldn't believe my eyes," says Esra Bulbul of the Harvard-Smithsonian Center for Astrophysics. "What we found, at first glance, could not be explained by known physics."
"The cluster's atmosphere is full of ions such as Fe XXV, Si XIV, and S XV. Each one produces a 'bump' or 'line' in the x-ray spectrum, which we can map using Chandra. These spectral lines are at well-known x-ray energies."
Yet, in 2012 when Bulbul added together 17 days' worth of Chandra data, a new line popped up where no line should be. "A line appeared at 3.56 keV (kilo-electron volts) which does not correspond to any known atomic transition," she says. "It was a great surprise."
We detected a weak unidentified emission line at E=(3.55-3.57)+/-0.03 keV in a stacked XMM spectrum of 73 galaxy clusters spanning a redshift range 0.01-0.35. MOS and PN observations independently show the presence of the line at consistent energies.
When the full sample is divided into three subsamples (Perseus, Centaurus+Ophiuchus+Coma, and all others), the line is significantly detected in all three independent MOS spectra and the PN "all others" spectrum. It is also detected in the Chandra spectra of Perseus with a flux consistent with XMM (though it is not seen in Virgo). However, it is very weak and located within 50-110 eV of several known faint lines, and so is subject to significant modeling uncertainties. On the origin of this line, we argue that there should be no atomic transitions in thermal plasma at this energy. An intriguing possibility is the decay of a sterile neutrino, a long-sought dark matter particle candidate.
Assuming that all dark matter is in sterile neutrinos with m_s = 2E = 7.1 keV, our detection in the full sample corresponds to a neutrino decay mixing angle sin^2(2theta) = 7e-11, below the previous upper limits. However, based on the cluster masses and distances, the line in Perseus is much brighter than expected in this model. This appears to be because of an anomalously bright line at E = 3.62 keV in Perseus, possibly an Ar XVII dielectronic recombination line, although its flux would be 30 times the expected value and physically difficult to understand. In principle, such an anomaly might explain our line detection in other subsamples as well, though it would stretch the line energy uncertainties. Another alternative is the above anomaly in the Ar line combined with the nearby 3.51 keV K line also exceeding expectation by a factor of 10-20. Confirmation with Chandra and Suzaku, and eventually Astro-H, is required to determine the nature of this new line.
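The quoted mass follows from simple kinematics: a sterile neutrino at rest decaying into a photon plus an active neutrino splits its rest energy evenly, so the photon line sits at E = m_s / 2. A one-line check of the numbers in the abstract:

```python
# Sterile-neutrino interpretation: the decay photon carries half the rest
# energy, so a line at 3.55 keV implies a 7.1 keV particle.
line_energy_kev = 3.55
m_sterile_kev = 2 * line_energy_kev
print(m_sterile_kev)  # 7.1
```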
Back in 2012, the Sun erupted with a powerful solar storm that just missed the Earth but was big enough to "knock modern civilization back to the 18th century," NASA said. The extreme space weather that tore through Earth's orbit on July 23, 2012, was the most powerful in 150 years, according to a statement posted on the US space agency website Wednesday.
However, few Earthlings had any idea what was going on. "If the eruption had occurred only one week earlier, Earth would have been in the line of fire," said Daniel Baker, professor of atmospheric and space physics at the University of Colorado. Instead the storm cloud hit the STEREO-A spacecraft, a solar observatory that is "almost ideally equipped to measure the parameters of such an event," NASA said. Scientists have analyzed the treasure trove of data it collected and concluded that it would have been comparable to the largest known space storm in 1859, known as the Carrington event. It also would have been twice as bad as the 1989 solar storm that knocked out power across Quebec, scientists said.
"I have come away from our recent studies more convinced than ever that Earth and its inhabitants were incredibly fortunate that the 2012 eruption happened when it did," said Baker. The National Academy of Sciences has said the economic impact of a storm like the one in 1859 could cost the modern economy more than two trillion dollars and cause damage that might take years to repair. Experts say solar storms can cause widespread power blackouts, disabling everything from radio to GPS communications to water supplies -- most of which rely on electric pumps.
They begin with an explosion on the Sun's surface, known as a solar flare, sending X-rays and extreme UV radiation toward Earth at light speed. Hours later, energetic particles follow and these electrons and protons can electrify satellites and damage their electronics.
Next are the coronal mass ejections, billion-ton clouds of magnetized plasma that take a day or more to cross the Sun-Earth divide. These are often deflected by Earth's magnetic shield, but a direct hit could be devastating.
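The staggered arrival of the three stages comes straight from their speeds. The sketch below assumes a Sun-Earth distance of 1 AU and a fast-CME speed of roughly 2,000 km/s (a typical figure for extreme events; the exact speed of the 2012 cloud is not given here):

```python
# Rough travel times for the stages of a solar storm described above.
AU_KM = 1.496e8    # Sun-Earth distance, km
C_KM_S = 3.0e5     # speed of light, km/s
CME_KM_S = 2000.0  # assumed speed of a fast coronal mass ejection, km/s

flare_minutes = AU_KM / C_KM_S / 60  # X-rays and UV arrive in minutes
cme_hours = AU_KM / CME_KM_S / 3600  # the plasma cloud takes most of a day

print(round(flare_minutes, 1))  # ~8.3 minutes
print(round(cme_hours, 1))      # ~20.8 hours
```

That gap of most of a day between the flash and the plasma impact is the only warning window operators get to safeguard satellites and power grids.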
During intense drought, groundwater depletion in the Colorado River Basin has skyrocketed. For the past 14 years, drought has afflicted the Colorado River Basin, and one of the most visible signs has been the white bathtub rings around the red rocks of Lake Mead and Lake Powell, the two biggest dammed lakes on the river. But there is also an invisible bathtub being emptied, below ground. A new study shows that ground water in the basin is being depleted six times faster than surface water. The groundwater losses, which take thousands of years to be recharged naturally, point to the unsustainability of exploding population centers and water-intensive agriculture in the basin, which includes most of Arizona and parts of Colorado, California, Nevada, Utah, New Mexico, and Wyoming.
The study is the first to identify groundwater depletion across the entire Colorado River Basin, and it brings attention to a neglected issue, says Leonard Konikow, a hydrogeologist emeritus at the U.S. Geological Survey in Reston, Virginia, who was not involved with the work. Because ground water feeds many of the streams and rivers in the area, Konikow predicts that more of them will run dry. He says water pumping costs will rise as farmers—who are the biggest users of ground water—have to drill deeper and deeper into aquifers. “It’s disconcerting,” Konikow says. “Boy, water managers gotta do something about this, because this can’t go on forever.”
To document the groundwater depletion, James Famiglietti, a hydrologist at the University of California, Irvine, and his colleagues relied on a pair of NASA satellites called the Gravity Recovery and Climate Experiment (GRACE). The instruments are sensitive to tiny variations in Earth’s gravity. They can be used to observe groundwater extraction, because when the mass of that water disappears, gravity in that area also drops.
In the 9 years from December 2004 to November 2013, ground water was lost at a rate of 5.6 cubic kilometers a year, the team reports online today in Geophysical Research Letters. That’s compared with a decline of 0.9 cubic kilometers per year from Lake Powell and Lake Mead, which contain 85% of the surface water in the basin.
Famiglietti says it makes sense that cities and farmers turn from surface water to ground water during drought. But he is surprised by the magnitude of the loss. The groundwater depletion rate is twice that in California’s Central Valley, another place famous for heavy groundwater use.
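The "six times faster" figure follows directly from the two rates reported in the study:

```python
# Depletion rates from the GRACE analysis of the Colorado River Basin.
groundwater_km3_per_yr = 5.6  # groundwater loss, Dec 2004 - Nov 2013
surface_km3_per_yr = 0.9      # decline of Lake Powell + Lake Mead combined

ratio = groundwater_km3_per_yr / surface_km3_per_yr
print(round(ratio, 1))  # ~6.2, i.e. roughly six times the surface-water loss
```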
Harvard's first large-scale digital computer, which came to be known as the Mark I, was conceived by Howard H. Aiken (A.M. '37, Ph.D. '39) and built by IBM. Fifty-one feet long, it was installed in the basement of what is now Lyman Laboratory in 1944, and later moved to a new building called the Aiken Computation Laboratory, where a generation of computing pioneers was educated and where the Maxwell Dworkin building now stands. Part of the mechanism remains on exhibit in the Science Center.
The Mark I performed additions and subtractions at a rate of about three per second; multiplication and division took considerably longer. This benchmark was soon surpassed by computers that could do thousands of arithmetic operations per second, then millions and billions. By the late 1990s a few machines were reaching a trillion (10^12) operations per second; these were called terascale computers, as tera is the Système International prefix for 10^12. The next landmark—and the current state of the art—is the petascale computer, capable of 10^15 operations per second. In 2010, Kaxiras' blood flow simulation ran on a petascale computer called Blue Gene/P in Jülich, Germany, which at the time held fifth place on the Top 500 list of supercomputers.
The new goal is an exascale machine, performing at least 10^18 operations per second. This is a number so immense it challenges the imagination. Stacks of pennies reaching to the moon are not much help in expressing its magnitude—there would be millions of them. If an exascale computer counted off the age of the universe in units of a billionth of a second, the task would take a little more than 10 years.
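That counting claim is easy to check, taking the age of the universe as roughly 13.8 billion years:

```python
# Sanity check: an exascale machine ticking off one count for every
# nanosecond in the universe's ~13.8-billion-year history.
SECONDS_PER_YEAR = 3.156e7
age_universe_s = 13.8e9 * SECONDS_PER_YEAR  # ~4.4e17 seconds
age_universe_ns = age_universe_s * 1e9      # ~4.4e26 counts to perform

exa_ops_per_s = 1e18
task_years = age_universe_ns / exa_ops_per_s / SECONDS_PER_YEAR
print(round(task_years, 1))  # ~13.8 years
```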
And what comes after exascale? We can look forward to zettascale (10^21) and yottascale (10^24); then we run out of prefixes. The engine driving these amazing gains in computer performance is the ability of manufacturers to continually shrink the dimensions of transistors and other microelectronic devices, thereby cramming more of them onto a single chip. (The number of transistors per chip is in the billions now.) Until about 10 years ago, making transistors smaller also made them faster, allowing a speedup in the master clock, the metronome-like signal that sets the tempo for all operations in a digital computer. Between 1980 and 2005, clock rates increased by a factor of 1,000, from a few megahertz to a few gigahertz. But the era of ever-increasing clock rates has ended.
The speed limit for modern computers is now set by power consumption. If all other factors are held constant, the electricity needed to run a processor chip goes up as the cube of the clock rate: doubling the speed brings an eightfold increase in power demand. SEAS Dean Cherry A. Murray, the John A. and Elizabeth S. Armstrong Professor of Engineering and Applied Sciences and Professor of Physics, points out that high-performance chips are already at or above the 100-watt level. "Go much beyond that," she says, "and they would melt."
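The cubic relationship quoted above makes the trade-off concrete: power scales with the cube of the frequency ratio, so a doubled clock costs 2^3 = 8 times the electricity.

```python
# Cubic scaling of processor power with clock rate (all else held constant).
def relative_power(freq_ratio: float) -> float:
    """Power draw relative to baseline for a given clock-frequency ratio."""
    return freq_ratio ** 3

print(relative_power(2.0))  # 8.0: doubling the clock means eightfold power
```

This is why the industry pivoted to multiple cores at a fixed clock rather than chasing ever-higher frequencies.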
If the chipmakers cannot build faster transistors, however, they can still make them smaller and thus squeeze more onto each chip. Since 2005 the main strategy for boosting performance has been to gang together multiple processor "cores" on each chip. The clock rate remains roughly constant, but the total number of operations per second increases if the separate cores can be put to work simultaneously on different parts of the same task. Large systems are assembled from vast numbers of these multicore processors.
When the Kaxiras group's blood flow study ran on the Blue Gene/P at Jülich, the machine had almost 300,000 cores. The world's largest and fastest computer, as of June 2014, is the Tianhe-2 in Guangzhou, China, with more than 3 million cores. An exascale machine may have hundreds of millions of cores, or possibly as many as a billion.
A team of Brazilian and American astronomers used CFHT observations of the system 16 Cygni to discover evidence of how giant planets like Jupiter form.
One of the main models to form giant planets is called "core accretion". In this scenario, a rocky core forms first by aggregation of solid particles until it reaches a few Earth masses when it becomes massive enough to accrete a gaseous envelope. For the first time, astronomers have detected evidence of this rocky core, the first step in the formation of a giant planet like our own Jupiter.
The astronomers used the Canada-France-Hawaii Telescope (CFHT) to analyze the starlight of the binary stars 16 Cygni A and 16 Cygni B. The system is a perfect laboratory to study the formation of giant planets because the stars were born together and are therefore very similar, and both resemble the Sun. However, observations during the last decades show that only one of the two stars, 16 Cygni B, hosts a giant planet which is about 2.4 times as massive as Jupiter. By decomposing the light from the two stars into their basic components and looking at the difference between the two stars, the astronomers were able to detect signatures left from the planet formation process on 16 Cygni B.
The fingerprints detected by the astronomers are twofold. First, they found that the star 16 Cygni A is enhanced in all chemical elements relative to 16 Cygni B. This means that 16 Cygni B, the star that hosts a giant planet, is metal deficient. As both stars were born from the same natal cloud, they should have exactly the same chemical composition. However, planets and stars form at about the same time, hence the metals that are missing in 16 Cygni B (relative to 16 Cygni A) were probably removed from its protoplanetary disk to form its giant planet, so that the remaining material that was falling into 16 Cygni B in the final phases of its formation was deficient in those metals.
The second fingerprint is that on top of an overall deficiency of all analyzed elements in 16 Cygni B, this star has a systematic deficiency in the refractory elements such as iron, aluminum, nickel, magnesium, scandium, and silicon. This is a remarkable discovery because the rocky core of a giant planet is expected to be rich in refractory elements. The formation of the rocky core seems to rob refractory material from the proto-planetary disk, so that the star 16 Cygni B ended up with a lower amount of refractories. This deficiency in the refractory elements can be explained by the formation of a rocky core with a mass of about 1.5 – 6 Earth masses, which is similar to the estimate of Jupiter's core.
"Our results show that the formation of giant planets, as well as terrestrial planets like our own Earth, leaves subtle signatures in stellar atmospheres", says Marcelo Tucci Maia (Universidade de São Paulo), the lead author of the paper.
No Man’s Sky is a video game quite unlike any other. Sean Murray, one of its creators, can’t guarantee that the virtual universe is infinite, but he’s certain that, if it isn’t, nobody will ever find out. “If you were to visit one virtual planet every second,” he says, “then our own sun will have died before you’d have seen them all.”
Developed for Sony’s PlayStation 4 by an improbably small team (the original four-person crew has grown only to 10 in recent months) at Hello Games, an independent studio in the south of England, it’s a game that presents a traversable universe in which every rock, flower, tree, creature, and planet has been “procedurally generated” to create a vast and diverse play area.
“We are attempting to do things that haven’t been done before,” says Murray. “No game has made it possible to fly down to a planet, and for it to be planet-sized, and feature life, ecology, lakes, caves, waterfalls, and canyons, then seamlessly fly up through the stratosphere and take to space again. It’s a tremendous challenge.”
Procedural generation, whereby a game’s landscape is generated not by an artist’s pen but by an algorithm, is increasingly prevalent in video games. Most famously, Minecraft creates a unique world for each of its players, randomly arranging rocks and lakes from a limited palette of bricks whenever someone begins a new game (see “The Secret to a Video Game Phenomenon”). But No Man’s Sky is far more complex and sophisticated. The tens of millions of planets that make up the universe are all unique. Each is generated when a player discovers it, and is subject to the laws of its respective solar system and vulnerable to natural erosion. The multitude of creatures that inhabit the universe dynamically breed and genetically mutate as time progresses. This is virtual world building on an unprecedented scale (see video).
This presents numerous technological challenges, not least of which is how to test a universe of such scale during its development – the team is currently using virtual testers—automated bots that wander around taking screenshots which are then sent back to the team for viewing. Additionally, while No Man’s Sky might have an infinite-sized universe, there aren’t an infinite number of players. To avoid the problem of a kind of virtual loneliness, where a player might never encounter another person on his or her travels, the game starts every new player in the same galaxy (albeit on his or her own planet) with a shared initial goal of traveling to its center. Later in the game, players can meet up, fight, trade, mine, and explore. “Ultimately we don’t know whether people will work, congregate, or disperse,” Murray says. “I know players don’t like to be told that we don’t know what will happen, but that’s what is exciting to us: the game is a vast experiment.”
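The key idea that makes a universe this size storable at all is that nothing is saved per planet: each world's properties are derived deterministically from its coordinates and a shared seed, so any player who reaches the same location regenerates the identical planet. A minimal sketch of that idea (all names and parameters here are illustrative, not Hello Games' actual code):

```python
# Sketch of seed-based procedural generation: planet properties are a pure
# function of (global seed, coordinates), so nothing needs to be stored.
import hashlib
import random

GLOBAL_SEED = 42  # shared by every copy of the game (illustrative value)

def generate_planet(x: int, y: int, z: int) -> dict:
    # Hash the coordinates with the global seed to get a stable per-planet seed.
    key = f"{GLOBAL_SEED}:{x}:{y}:{z}".encode()
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    rng = random.Random(seed)
    return {
        "radius_km": rng.uniform(1000, 8000),
        "has_life": rng.random() < 0.1,
        "terrain_roughness": rng.random(),
    }

# Two players "visiting" the same coordinates regenerate the same planet.
assert generate_planet(3, -7, 12) == generate_planet(3, -7, 12)
```

Because the generator is deterministic, the developers' automated "virtual testers" can likewise roam any coordinates and reproduce exactly what a player would see there.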
Workers with the Insect Museum of West China, who were recently given several very large dragonfly-looking insects with long teeth by locals in a part of Sichuan, have declared one of them, a giant dobsonfly, the largest known aquatic insect in the world alive today. The find displaces the previous record holder, the South American helicopter damselfly, by just two centimeters.
The dobsonfly is common (there are over 220 species of them) in China, India, Africa, South America and some other parts of Asia, but until now, no specimens as large as those recently found in China have been known. The largest specimens in the found group had a wingspan of 21 centimeters, making it large enough to cover the entire face of a human adult. Locals don't have to worry too much about injury from the insects, however, as officials from the museum report that larger males' mandibles are so huge in proportion to their bodies that they are relatively weak—incapable of piercing human skin. They can kick up a stink, however, as they are able to spray an offensive odor when threatened.
Also, despite the fact that they look an awful lot like dragonflies, they are more closely related to fishflies. The long mandibles, though scary looking to humans, are actually used for mating—males use them to show off for females, and to hold them still during copulation. Interestingly, while their large wings (commonly twice their body length) make for great flying, they only make use of them for about a week—the rest of their time alive as adults is spent hiding under rocks or moving around on or under the water. That means that they are rarely seen as adults, which for most people is probably a good thing as the giants found in China would probably present a frightening sight. They are much better known during their long larval stage when they are used as bait by fishermen.