Amazing Science
Rescooped by Dr. Stefan Gruenwald from Longevity science!

ProjectDR allows doctors to "see into" patients' bodies

Imagine if doctors could see through a patient's skin, and their perspective of the underlying bones and organs changed accordingly as the person moved around. Well, that's what scientists at the University of Alberta have developed – kind of. It's still in the experimental phase, and is known as ProjectDR.


The new technology is bringing the power of augmented reality into clinical practice. The system, called ProjectDR, allows medical images such as CT scans and MRI data to be displayed directly on a patient’s body in a way that moves as the patient does. “We wanted to create a system that would show clinicians a patient’s internal anatomy within the context of the body,” explained Ian Watts, a computing science graduate student and the developer of ProjectDR. The technology includes a motion-tracking system using infrared cameras and markers on the patient’s body, as well as a projector to display the images. But the really difficult part, Watts explained, is having the image track properly on the patient’s body even as they shift and move. The solution: custom software written by Watts that gets all of the components working together.
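To make the registration step concrete, here is a minimal sketch of the idea Watts describes: estimate a mapping from tracked infrared-marker positions to projector pixels, then re-project the overlay every frame as the markers move. This is a hypothetical Python/NumPy illustration, not the ProjectDR code; a real system would solve a full projector calibration including depth.

```python
import numpy as np

def fit_affine(markers_cam, markers_proj):
    """Least-squares 2D affine map from tracked marker positions
    (camera/world coordinates) to projector pixel coordinates."""
    n = len(markers_cam)
    A = np.hstack([markers_cam, np.ones((n, 1))])      # n x 3
    # Solve A @ M = markers_proj for the 3x2 affine matrix M
    M, *_ = np.linalg.lstsq(A, markers_proj, rcond=None)
    return M

def project_points(points, M):
    """Map anatomy/image points into projector space with the fitted affine."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    return pts @ M

# Each frame: refit the transform from the latest marker positions, then
# warp the CT/MRI overlay before sending it to the projector (values invented).
markers_cam = np.array([[0.0, 0.0], [0.4, 0.0], [0.4, 0.6], [0.0, 0.6]])
markers_proj = np.array([[120, 80], [880, 95], [870, 960], [110, 940]])
M = fit_affine(markers_cam, markers_proj)
print(project_points(np.array([[0.2, 0.3]]), M))  # anatomy point -> pixel
```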


“There are lots of applications for this technology, including in teaching, physiotherapy, laparoscopic surgery and even surgical planning,” said Watts, who developed the technology with fellow graduate student Michael Fiest. ProjectDR also has the capacity to present segmented images—for example, only the lungs or only the blood vessels—depending on what a clinician is interested in seeing. For now, Watts is working on refining ProjectDR to improve the system’s automatic calibration and to add components such as depth sensors. The next steps are testing the program’s viability in a clinical setting, explained Pierre Boulanger, professor in the Department of Computing Science.


“Soon, we’ll deploy ProjectDR in an operating room in a surgical simulation laboratory to test the pros and cons in real-life surgical applications,” said Boulanger. “We are also doing pilot studies to test the usability of the system for teaching chiropractic and physical therapy procedures,” added Greg Kawchuk, a co-supervisor on the project from the Faculty of Rehabilitation Medicine. Once these pilot studies are complete, the research team expects deployment in real surgical pilot studies to follow quickly.

Via Ray and Terry's
Rescooped by Dr. Stefan Gruenwald from Exciting developments in zinc biology!

A new nanochip that will detect bacterial infections in 15 minutes

A novel approach to detect bacterial infections in 10-15 minutes is expected to become commercially available next year.


A new device – a biological sensor inside a nanochip – that can detect bacterial infections in 10 to 15 minutes will become available in 2016. Devised by a team of scientists from South Africa’s Stellenbosch University, the device is currently being patented. The Technology Innovation Agency has funded a prototype in preparation for commercialization by April 2016.


Pathogenic organisms infect about 250 million people a year. At least 8%, around 20 million people, die. Early detection of infections can prevent many deaths. Since the nanochip was announced as a project of the university in September 2014, progress has been made in developing additional sensing mechanisms, enhancing its capabilities.


The nanochip for early detection of infection came after a chance meeting between the author, an expert in the field of superconductors and nanoelectrical devices, and microbiologist Leon Dicks.

While discussing our current research, we agreed to work together to find a way of detecting infections early and accurately. The basis for our research was piezoelectricity: the ability of certain crystals to convert mechanical energy into electricity, and vice versa.
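The article does not spell out the chip's sensing mechanism beyond piezoelectricity, but a standard way piezoelectric crystals detect pathogens is the quartz crystal microbalance: captured bacteria add mass to the crystal surface and lower its resonance frequency. A rough sketch using the textbook Sauerbrey equation (the specific crystal parameters below are illustrative):

```python
import math

# Quartz material constants (SI units)
RHO_Q = 2648.0     # density of quartz, kg/m^3
MU_Q = 2.947e10    # shear modulus of AT-cut quartz, kg/(m*s^2)

def sauerbrey_shift(f0_hz, delta_mass_kg, area_m2):
    """Resonance frequency shift of a quartz crystal microbalance
    for a small, rigidly bound added mass (Sauerbrey equation)."""
    return -2.0 * f0_hz**2 * delta_mass_kg / (area_m2 * math.sqrt(RHO_Q * MU_Q))

# Example: 1 ng of captured bacteria on a 1 cm^2, 5 MHz crystal
print(sauerbrey_shift(5e6, 1e-12, 1e-4))  # ~ -0.057 Hz
```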

Via Imre Lengyel
Rescooped by Dr. Stefan Gruenwald from Fragments of Science!

'Chemical MP3 Player' Can 3D Print Pharmaceuticals On-Demand from Digital Code


Have you ever taken your old compact discs and converted them to MP3 files so you could listen to your favorite music on your laptop, or through a portable MP3 device that’s much smaller than an unwieldy portable CD player? Now, researchers from the University of Glasgow are working on a very similar process, but instead of music files, they are using a chemical-to-digital converter to digitize the process of drug manufacturing; a chemical MP3 player, if you will, that can 3D print pharmaceuticals on demand.


3D printing in the pharmaceutical field is a fascinating concept, though not a new one. But this ‘Spotify for chemistry’ concept is new: it’s the first time we’ve seen an approach to manufacturing pharmaceuticals using digital code. According to Science, the University of Glasgow team “tailored a 3D printer to synthesize pharmaceuticals and other chemicals from simple, widely available starting compounds fed into a series of water bottle–size reactors.”
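To make the "digital code for chemistry" idea concrete, a synthesis can be expressed as data that any compatible reactor replays, just as any MP3 player replays the same file. The sketch below is purely illustrative; the step names, parameters, and reactor interface are invented, not the Glasgow team's actual format.

```python
# A toy "chemical MP3": the synthesis is data, the reactor is the player.
# All step names and parameters below are illustrative, not a real recipe.
recipe = [
    {"op": "add",  "reagent": "compound_A", "volume_ml": 25},
    {"op": "add",  "reagent": "compound_B", "volume_ml": 10},
    {"op": "heat", "temp_c": 80, "minutes": 120},
    {"op": "filter"},
    {"op": "dry",  "minutes": 60},
]

def run(recipe, reactor):
    """Replay a digitized synthesis on any reactor exposing the step ops."""
    for step in recipe:
        step = dict(step)                     # don't mutate the recipe
        getattr(reactor, step.pop("op"))(**step)
```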

Via Mariaschnee
Scooped by Dr. Stefan Gruenwald!

Reading Through a Closed Book


Spatial resolution, spectral contrast, and occlusion are three major bottlenecks in current imaging technologies for non-invasive inspection of complex samples such as closed books. A team of scientists exploits the time-of-flight capabilities of conventional THz time-domain spectroscopy and combines them with its spectral capabilities to computationally overcome these bottlenecks. Their study reports successful unsupervised content extraction through a densely layered structure similar to that of a closed book.
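The time-of-flight half of the method is easy to sketch: each page/air interface reflects part of the THz pulse, and the delay between echoes encodes each layer's depth via d = c·Δt/(2n). A minimal illustration follows; the effective refractive index and peak-finding settings are assumptions, and the real work adds heavy deconvolution plus the spectral step for reading characters.

```python
import numpy as np
from scipy.signal import find_peaks

C = 2.998e8  # speed of light, m/s

def layer_depths(signal, t, n_eff=1.5, prominence=0.1):
    """Estimate reflector depths from a THz time-domain trace.

    Each echo at delay dt after the first corresponds to a depth
    d = c * dt / (2 * n_eff); n_eff is the effective refractive index
    of the paper stack (the value here is an assumption).
    """
    peaks, _ = find_peaks(np.abs(signal), prominence=prominence)
    delays = t[peaks] - t[peaks[0]]
    return C * delays / (2.0 * n_eff)
```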



Scooped by Dr. Stefan Gruenwald!

German Volocopter’s fully-electric autonomous manned multicopter is performing its first passenger flights


Volocopter is the first company approved to put people in the skies with what’s essentially the equivalent of a driverless car in the air, ‘pilotless aircraft’ if you will, at the consumer level.


It’s a people's drone, and it’s a fantastic idea. In places where traffic is insane, like Los Angeles, and smog is bad, also like Los Angeles, a system of efficient travel that works like an Uber in the sky sounds terrifying, but also awesome and maybe even necessary. This is the future we’ve been asking for; finally a product worth drooling over to start the year! And Volocopter just reminded everyone that CES is the biggest show in tech.

When is it coming?

According to the company’s website: The Volocopter is the world’s first multicopter to be granted a certification for manned flights – as early as 2016. It fulfils stringent German and international safety standards. From the end of 2017 the Volocopter will get to prove this in Dubai, at the first-ever autonomous air taxi test run in the history of aviation.


The Volocopter 2X turns the vision of “flight for all” into reality. Just step on board the first manned, fully electric and safe VTOLs in the world.

Scooped by Dr. Stefan Gruenwald!

At least three billion computer chips are vulnerable to a security flaw


Companies are rushing out software fixes for Chipmageddon 1.0. Tech companies are still working overtime on patching two critical vulnerabilities in computer chips that were revealed this week. The flaws, dubbed “Meltdown” and “Spectre,” could let hackers get hold of passwords, encryption keys, and other sensitive information from a computer’s core memory via malicious apps running on devices.


How many chips are affected? The number is something of a moving target. But from the information released so far by tech companies and estimates from chip industry analysts, it looks as if at least three billion chips in computers, tablets, and phones now in use are vulnerable to attack by Spectre, which is the more widespread of the two flaws.


Apple says all its Mac and iOS products are affected, with the exception of the Apple Watch. That’s a billion or so devices. Gadgets powered by Google’s Android operating system number more than two billion, the company said last year. Linley Gwennap of the Linley Group, which tracks the chip industry, thinks the security flaws could affect about 500 million of them. As practically all smartphones run on iOS and Android, this pretty much covers the mobile-device landscape.


Then there are PCs and servers. These are largely powered by chips from Intel, whose share price has been battered since news of the flaws emerged. Its chief U.S. competitor, AMD, which has been gaining ground on Intel, said in a blog post  that its chips are not vulnerable to Meltdown and there is a “near zero risk” from one variant of Spectre and zero risk from another.


Still, if some level of threat from Spectre exists, AMD chips merit inclusion. Between them Intel and AMD account for over a billion PC and server chips. In addition, there are a host of smaller chipmakers such as IBM, which has said at least some of its chips are affected. This brings the total to around three billion processors, though this could change as more information emerges. 

Scooped by Dr. Stefan Gruenwald!

Project MORPHEUS: DARPA-funded ‘unhackable’ computer could avoid future flaws like Spectre and Meltdown


A University of Michigan (U-M) team has announced plans to develop an “unhackable” computer, funded by a new $3.6 million grant from the Defense Advanced Research Projects Agency (DARPA). The goal of the project, called MORPHEUS, is to design computers that avoid the vulnerabilities of most current microprocessors, such as the Spectre and Meltdown flaws announced  recently. The $50 million DARPA System Security Integrated Through Hardware and Firmware (SSITH) program aims to build security right into chips’ microarchitecture, instead of relying on software patches. The U-M grant is one of nine that DARPA has recently funded through SSITH.



The idea is to protect against future threats that have yet to be identified. “Instead of relying on software Band-Aids to hardware-based security issues, we are aiming to remove those hardware vulnerabilities in ways that will disarm a large proportion of today’s software attacks,” said Linton Salmon, manager of DARPA’s System Security Integrated Through Hardware and Firmware program.


Under MORPHEUS, the location of passwords would constantly change, for example. And even if an attacker were quick enough to locate the data, secondary defenses in the form of encryption and domain enforcement would throw up additional roadblocks.


More than 40 percent of the “software doors” that hackers have available to them today would be closed if researchers could eliminate seven classes of hardware weaknesses, according to DARPA.


DARPA is aiming to render these attacks impossible within five years. “If developed, MORPHEUS could do it now,” said Todd Austin, U-M professor of computer science and engineering, who leads the project. Researchers at The University of Texas and Princeton University are also working with U-M.

Scooped by Dr. Stefan Gruenwald!

This edible sensor could reveal our gut microbes


Wouldn’t it be nice if our microbiomes could serve up diet advice—some science-based assurance that our food and medicines act in harmony with our resident microbes to keep us healthy? For that to happen, scientists will need to better understand how the interaction between food and microbes affects the chemical composition of our guts. Now, a team of researchers has developed an edible device that passes through the digestive tract, measuring concentrations of intestinal gases along the way. The 2.6-centimeter-long capsule contains sensors for hydrogen, oxygen, and carbon dioxide, which can spike and dip as microbes break down components of our meals and release various byproducts.


In a pilot study published today in Nature Electronics, six volunteers experimented with high- and low-fiber diets while the pill transmitted signals to a pocket-size receiver every 5 minutes. Their preliminary tests showed that the pill's readouts can reflect changing levels of fermentation in the gut and the speed of food’s transit through the body. Turning those data into specific recommendations will be much more complicated. But the researchers suggest these gas readouts could someday help design more healthful foods and possibly diagnose digestive problems.

Scooped by Dr. Stefan Gruenwald!

China set for groundbreaking 2018 mission to the far side of the moon

The two-part Chang'e 4 mission will begin in June with the launch of a relay satellite to a point some 60,000 km beyond the Moon, followed later in the year by a lander and rover.


According to the Chongqing Morning Post, a container filled with seeds and insect eggs will be attached to Chang'e 4, China's second lunar lander, and will be sent to the Moon in 2018. The container, made from a special aluminum alloy, will be used to demonstrate how plants and animals grow on the Moon.


It will also provide valuable data and experience for the future establishment of eco-bases on other planets. 'The container will send potatoes, Arabidopsis seeds and silkworm eggs to the surface of the Moon. The eggs will hatch into silkworms, which can produce carbon dioxide, while the potatoes and seeds emit oxygen through photosynthesis. Together, they can establish a simple ecosystem on the Moon,' Zhang Yuanxun, chief designer of the container, explains.

Rescooped by Dr. Stefan Gruenwald from Fragments of Science!

Team maps magnetic fields of bacterial cells and nano-objects for the first time


A research team led by a scientist from the U.S. Department of Energy's Ames Laboratory has demonstrated for the first time that the magnetic fields of bacterial cells and magnetic nano-objects in liquid can be studied at high resolution using electron microscopy. This proof-of-principle capability allows first-hand observation of liquid environment phenomena, and has the potential to vastly increase knowledge in a number of scientific fields, including many areas of physics, nanotechnology, biofuels conversion, biomedical engineering, catalysis, batteries and pharmacology.


"It is much like being able to travel to a Jurassic Park and witness dinosaurs walking around, instead of trying to guess how they walked by examining a fossilized skeleton," said Tanya Prozorov, an associate scientist in Ames Laboratory's Division of Materials Sciences and Engineering. Prozorov works with biological and bioinspired magnetic nanomaterials, and faced what initially seemed to be an insurmountable challenge of observing them in their native liquid environment. She studies a model system, magnetotactic bacteria, which form perfect nanocrystals of magnetite. In order to best learn how bacteria do this, she needed an alternative to the typical electron microscopy process of handling solid samples in vacuum, where soft matter is studied in prepared, dried, or vitrified form.


For this work, Prozorov received DOE recognition through an Office of Science Early Career Research Program grant to use cutting-edge electron microscopy techniques with a liquid cell insert to learn how the individual magnetic nanocrystals form and grow with the help of biological molecules, which is critical for making artificial magnetic nanomaterials with useful properties.


To study magnetism in bacteria, she applied off-axis electron holography, a specialized technique that is used for the characterization of magnetic nanostructures in the transmission electron microscope, in combination with the liquid cell. "When we look at samples prepared in the conventional way, we have to make many assumptions about their properties based on their final state, but with the new technique, we can now observe these processes first-hand," said Prozorov. "It can help us understand the dynamics of macromolecule aggregation, nanoparticle self-assembly, and the effects of electric and magnetic fields on that process."


"This method allows us to obtain large amounts of new information," said Prozorov. "It is a first step, proving that the mapping of magnetic fields in liquid at the nanometer scale with electron microscopy could be done; I am eager to see the discoveries it could foster in other areas of science."

Via Mariaschnee
Scooped by Dr. Stefan Gruenwald!

NASA's Dragonfly Mission Could Revolutionize How Exoworlds are Explored


Dragonfly's destination is Saturn's moon Titan. And unlike the Mars rovers, Dragonfly won't be stuck in one general area. It will gather science from different sites hundreds of kilometers apart.

The dual-quadcopter won't fly like a typical drone, though. The communications delay between Earth and Titan means regular flying is a no-go. Instead, Dragonfly will perform a series of 'hops.' Once science gathering is done at one location, it will take off and head for the next.

During flight, instruments onboard will gather info on the makeup of Titan's atmosphere and capture aerial imaging of Titan's surface. These images will be huge for studying Titan's geology. These same images will be used to scout for future 'hop' landings too.

Scientists won't have to look longingly at interesting sites and wish they could explore them more closely. With Dragonfly, they can queue up another 'hop' and check them out.

According to the Dragonfly team, the mission would last about two years. And 'hops' would happen each Titan day (that's 16 Earth days). Flying, science gathering, and data transmission will happen as the Sun shines on Titan (8 Earth days). At night, Dragonfly will use its Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) to recharge.


Here are the goals for the Dragonfly as the folks behind the mission tell it:

  • Understand the organic and methanogenic cycle on Titan, especially as it relates to prebiotic chemistry.
  • Investigate the subsurface ocean and/or liquid reservoirs, particularly their evolution and possible interaction with the surface.

Will Dragonfly find life? I would pump the brakes on that one. It could, but this mission is about increasing our knowledge of Titan. We still don't have a good idea of the moon's surface composition. What complex organic compounds are formed from the interaction between sunlight and methane and nitrogen molecules? Are there chemical signatures that could point to signs of water or hydrocarbon-based life? Those are just a couple of the questions Dragonfly will attempt to answer if it's selected by NASA for the next New Frontiers mission.

Titan is also perfect for this kind of mission. Peter Bedini of the Johns Hopkins Applied Physics Laboratory called Titan the easiest place to fly a quadcopter in the solar system in a recent presentation. That's because Titan has an atmosphere more than four times as dense as Earth's, and its gravity is only 1/7 of Earth's. If you were standing on Titan, you could strap on a pair of wings, flap your arms, and fly.
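The arithmetic behind that claim is simple rotor physics. In momentum theory, ideal hover power scales as P = W^1.5 / sqrt(2·ρ·A): Titan's ~1/7 gravity shrinks the weight W, and its roughly 4.4-times-denser atmosphere boosts ρ, so the same airframe needs only a few percent of the hover power it would on Earth. A rough check; the lander mass and rotor area below are arbitrary placeholders.

```python
import math

def ideal_hover_power(mass_kg, g, rho, rotor_area_m2):
    """Momentum-theory induced power for hover: P = W^1.5 / sqrt(2*rho*A)."""
    weight = mass_kg * g
    return weight**1.5 / math.sqrt(2.0 * rho * rotor_area_m2)

mass, area = 450.0, 4.0    # placeholder vehicle mass and total rotor area
earth = ideal_hover_power(mass, 9.81, 1.225, area)
titan = ideal_hover_power(mass, 1.352, 5.43, area)  # Titan: g~1.35, rho~5.4
print(titan / earth)  # ~0.024 -> roughly 40x less power to hover on Titan
```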

Rescooped by Dr. Stefan Gruenwald from Alan Charky - VAC AERO Vacuum Heat Treating Furnaces!

Study Shows How 3D Printed Metals can be Ductile as well as Strong

A new method of 3D printing metals, applied to a widely used stainless steel, has been shown to achieve exceptional levels of both ductility and strength compared with counterparts produced by more conventional processes.


The research counters the skepticism surrounding the ability to make strong, ductile metals using 3D printing, and as such the discovery is vital to moving the technology forward for the manufacturing of heavy-duty components.


3D printing has long been recognized as a technology that could transform manufacturing, allowing objects with intricate and tailored geometries to be constructed quickly.

With the technology developing rapidly in recent years, 3D printing, particularly metal 3D printing, is swiftly progressing toward widespread industrial application.


Indeed, the manufacturing leader General Electric (GE) has already been using metal 3D printing to create certain key parts, such as the fuel nozzles in its newest LEAP aircraft engine. The technology has helped GE consolidate 900 separate parts into just 16, and make the fuel nozzles 60% cheaper and 40% lighter.


The worldwide revenue from the industry is predicted to exceed 20 billion USD per year by 2025. Despite this bright outlook, the quality of metal 3D-printed products has been met with skepticism. In the majority of metal 3D printing processes, products are built directly from metal powders, which makes them prone to defects that weaken their mechanical properties.


Dr. Leifeng Liu, a key participant in the project, recently moved to the University of Birmingham from Stockholm University as an AMCASH research fellow. He said: “Strength and ductility are natural enemies of one another; most methods developed to strengthen metals consequently reduce ductility.”

Via Alan Charky
Scooped by Dr. Stefan Gruenwald!

Google has released a machine learning AI tool that makes sense of your genome


AI tools could help us turn information gleaned from genetic sequencing into life-saving therapies. Almost 15 years after scientists first sequenced the human genome, making sense of the enormous amount of data that encodes human life remains a formidable challenge. But it is also precisely the sort of problem that machine learning excels at.


Google has now released a tool called DeepVariant that uses the latest AI techniques to build a more accurate picture of a person’s genome from sequencing data. DeepVariant helps turn high-throughput sequencing readouts into a picture of a full genome. It automatically identifies small insertion and deletion mutations and single-base-pair mutations in sequencing data.


High-throughput sequencing became widely available in the 2000s and has made genome sequencing more accessible. But the data produced using such systems has offered only a limited, error-prone snapshot of a full genome. It is typically challenging for scientists to distinguish small mutations from random errors generated during the sequencing process, especially in repetitive portions of a genome. These mutations may be directly relevant to diseases such as cancer.


A number of tools exist for interpreting these readouts, including GATK, VarDict, and FreeBayes. However, these software programs typically use simpler statistical and machine-learning approaches to identifying mutations by attempting to rule out read errors. “One of the challenges is in difficult parts of the genome, where each of the tools has strengths and weaknesses,” says Brad Chapman, a research scientist at Harvard’s School of Public Health who tested an early version of DeepVariant. “These difficult regions are increasingly important for clinical sequencing, and it’s important to have multiple methods.”


DeepVariant was developed by researchers from the Google Brain team, a group that focuses on developing and applying AI techniques, and Verily, another Alphabet subsidiary that is focused on the life sciences. The team collected millions of high-throughput reads and fully sequenced genomes from the Genome in a Bottle (GIAB)  project, a public-private effort to promote genomic sequencing tools and techniques. They fed the data to a deep-learning system and painstakingly tweaked the parameters of the model until it learned to interpret sequenced data with a high level of accuracy.
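Google has described DeepVariant as recasting variant calling as an image-classification problem: aligned reads around a candidate site are encoded as a multi-channel, image-like tensor, and a convolutional network predicts the genotype. The sketch below shows only a drastically simplified two-channel encoding of a toy pileup, not the production format.

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def encode_pileup(reads, quals, width):
    """Toy pileup encoding: reads stacked as rows, two channels
    (base identity, base quality) -- the kind of image-like tensor a
    CNN genotype classifier can consume. Greatly simplified from
    DeepVariant's real multi-channel encoding."""
    tensor = np.zeros((len(reads), width, 2), dtype=np.float32)
    for i, (read, qual) in enumerate(zip(reads, quals)):
        for j, base in enumerate(read[:width]):
            tensor[i, j, 0] = (BASES[base] + 1) / 4.0  # normalized base code
            tensor[i, j, 1] = qual[j] / 60.0           # normalized Phred quality
    return tensor

reads = ["ACGTAC", "ACGAAC", "ACGTAC"]   # candidate variant at column 3
quals = [[30] * 6, [22] * 6, [41] * 6]
print(encode_pileup(reads, quals, width=6).shape)  # (3, 6, 2)
```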


Last year, DeepVariant won first place in the PrecisionFDA Truth Challenge, a contest run by the FDA to promote more accurate genetic sequencing. “The success of DeepVariant is important because it demonstrates that in genomics, deep learning can be used to automatically train systems that perform better than complicated hand-engineered systems,” says Brendan Frey, CEO of Deep Genomics.


The release of DeepVariant is the latest sign that machine learning may be poised to boost progress in genomics. Deep Genomics is one of several companies trying to use AI approaches such as deep learning to tease out genetic causes of diseases and to identify potential drug therapies (see “An AI-Driven Genomics Company Is Turning to Drugs”).


Deep Genomics aims to develop drugs by using deep learning to find patterns in genomic and medical data. Frey says AI will eventually go well beyond helping to sequence genomic data. “The gap that is currently blocking medicine right now is in our inability to accurately map genetic variants to disease mechanisms and to use that knowledge to rapidly identify life-saving therapies,” he says.


Another prominent company in this area is Wuxi Nextcode, which has offices in Shanghai, Reykjavik, and Cambridge, Massachusetts. Wuxi Nextcode has amassed the world’s largest collection of fully sequenced human genomes, and the company is investing heavily in machine-learning methods.


DeepVariant will also be available on the Google Cloud Platform. Google and its competitors are furiously adding machine-learning features to their cloud platforms in an effort to lure anyone who might want to tap into the latest AI techniques (see “Ambient AI Is About to Devour the Software Industry”).


In general, AI figures to help many aspects of medicine take big leaps forward in the coming years. There are opportunities to mine many different kinds of medical data—from images or medical records, for example—to predict ailments that a human doctor might miss (see “The Machines Are Getting Ready to Play Doctor” and “A New Algorithm for Palliative Care”).

Art Jones's curator insight, December 7, 2017 9:24 PM

Google turns the sci-fi which amazed us on the big screen yesterday into today's reality. 

Scooped by Dr. Stefan Gruenwald!

Scientists Create a 'Princess Leia-Style Display' With Moving Light

Free-space volumetric displays, or displays that create luminous image points in space, are the technology that most closely resembles the three-dimensional displays of popular fiction. Such displays are capable of producing images in ‘thin air’ that are visible from almost any direction and are not subject to clipping. Clipping restricts the utility of all three-dimensional displays that modulate light at a two-dimensional surface with an edge boundary; these include holographic displays, nanophotonic arrays, plasmonic displays, lenticular or lenslet displays, and all technologies in which the light-scattering surface and the image point are physically separate.

Engineers now present a free-space volumetric display based on photophoretic optical trapping that produces full-colour graphics in free space with ten-micrometre image points using persistence of vision. This display works by first isolating a cellulose particle in a photophoretic trap created by spherical and astigmatic aberrations. The trap and particle are then scanned through a display volume while being illuminated with red, green and blue light. The result is a three-dimensional image in free space with a large colour gamut, fine detail and low apparent speckle. This platform, named the Optical Trap Display, is capable of producing image geometries that are currently unobtainable with holographic and light-field technologies, such as long-throw projections, tall sandtables and ‘wrap-around’ displays. Don't call it a hologram, though.
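A quick feasibility check on the persistence-of-vision requirement (numbers illustrative, not from the paper): every image point must be revisited roughly ten times per second, which sets the time budget the scanned particle has per point.

```python
def time_per_point(n_points, refresh_hz=10.0):
    """Time the trapped particle can spend at each image point if the
    whole image must be redrawn refresh_hz times per second for
    persistence of vision."""
    return 1.0 / (refresh_hz * n_points)

# A 1,000-point glyph redrawn 10 times per second leaves ~100 us per point.
print(time_per_point(1000))  # 1e-04 s
```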
Scooped by Dr. Stefan Gruenwald!

Storing data in DNA is a lot easier than getting it back out

But a method bacteria use to swap genetic information could offer a way.


Humanity is creating information at an unprecedented rate—some 16 zettabytes every year (a zettabyte is one billion terabytes). And this rate is increasing. Last year, the research group IDC calculated that we’ll be producing over 160 zettabytes every year by 2025. All this data has to be stored, and as a result we need much denser memory than we have today. One intriguing solution is to exploit the molecular structure of DNA. Researchers have long known that DNA can be used for data storage—after all, it stores the blueprint for making individual humans and transmits it from one generation to the next. What’s impressive for computer scientists is the density of the data that DNA stores: a single gram can hold roughly a zettabyte.


But nobody has come up with a realistic system for storing data in a DNA library and then retrieving it again when it is needed.

Today that changes thanks to the work of Federico Tavella at the University of Padua in Italy and colleagues, who have designed and tested just such a technique based on bacterial nanonetworks.


The principle is simple. Bacteria often carry genetic information in the form of tiny circular rings of double-stranded DNA called plasmids. These molecules are important because they often confer some advantage to the host cell, such as antibiotic resistance.


Crucially, bacteria can transfer plasmids from one cell to another in a process known as conjugation. This is one way that bacteria swap genetic information, and the process forms a fantastically complex nanonetwork in nature.


That’s the basis of the new technique. Tavella and co want to exploit this nanonetwork to transfer information that they have genetically engineered into the plasmids.


The idea is to store data in plasmids inside bacterial cells that are trapped in a specific location. To retrieve this information, the researchers send motile bacteria to this site, where they conjugate with the trapped bacteria and capture the data-carrying plasmids. Finally, the motile bacteria carry this information to a device that extracts the plasmids and reads the data they carry.
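The storage side is conceptually straightforward: DNA has a four-letter alphabet, so each base can carry two bits. The direct mapping below is an illustration only; practical schemes add sequence constraints and error correction, and the plasmid work layers conjugation-based retrieval on top.

```python
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: s for s, b in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Pack bytes into a DNA sequence at 2 bits per base (4 bases/byte)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(seq: str) -> bytes:
    """Recover the original bytes from the DNA sequence."""
    bits = "".join(BASE_TO_BITS[b] for b in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

dna = encode(b"hi")
print(dna, decode(dna))  # CGGACGGC b'hi'
```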

Scooped by Dr. Stefan Gruenwald!

Computational method improves the resolution of time-of-flight depth sensors 1,000-fold

A computational technique developed at MIT improves the resolution of time-of-flight depth sensors 1,000-fold while combating the type of light scattering caused by fog and rain. The work points toward practical sensor systems for self-driving cars.


For the past 10 years, the Camera Culture group at MIT’s Media Lab has been developing innovative imaging systems — from a camera that can see around corners to one that can read text in closed books — by using “time of flight,” an approach that gauges distance by measuring the time it takes light projected into a scene to bounce back to a sensor.


In a new paper appearing in IEEE Access, members of the Camera Culture group present a new approach to time-of-flight imaging that increases its depth resolution 1,000-fold. That’s the type of resolution that could make self-driving cars practical. The new approach could also enable accurate distance measurements through fog, which has proven to be a major obstacle to the development of self-driving cars.


At a range of 2 meters, existing time-of-flight systems have a depth resolution of about a centimeter. That’s good enough for the assisted-parking and collision-detection systems on today’s cars. But as Achuta Kadambi, a  joint PhD student in electrical engineering and computer science and media arts and sciences and first author on the paper, explains, “As you increase the range, your resolution goes down exponentially. Let’s say you have a long-range scenario, and you want your car to detect an object further away so it can make a fast update decision. You may have started at 1 centimeter, but now you’re back down to [a resolution of] a foot or even 5 feet. And if you make a mistake, it could lead to loss of life.”


At distances of 2 meters, the MIT researchers’ system, by contrast, has a depth resolution of 3 micrometers. Kadambi also conducted tests in which he sent a light signal through 500 meters of optical fiber with regularly spaced filters along its length, to simulate the power falloff incurred over longer distances, before feeding it to his system. Those tests suggest that at a range of 500 meters, the MIT system should still achieve a depth resolution of only a centimeter.
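For context, the resolution numbers follow from the basic time-of-flight relation: depth is d = c·t/2 for the round trip, so depth resolution is set by timing resolution, Δd = c·Δt/2. A 1 cm resolution needs roughly 67 ps of timing precision; 3 µm needs about 20 fs, which is why the MIT work leans on signal processing rather than brute-force fast clocks. The calculation below is just this textbook relation:

```python
C = 2.998e8  # speed of light, m/s

def timing_needed(depth_resolution_m):
    """Round-trip timing resolution required for a given depth resolution."""
    return 2.0 * depth_resolution_m / C

print(timing_needed(0.01))  # 1 cm -> ~6.7e-11 s (67 ps)
print(timing_needed(3e-6))  # 3 um -> ~2.0e-14 s (20 fs)
```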

Scooped by Dr. Stefan Gruenwald!

Observation of Accelerating Wave Packets in Curved Space


By shining a laser along the inside shell of an incandescent light bulb, physicists have performed the first experimental demonstration of an accelerating light beam in curved space.


Light can be described as rays that travel in straight lines, which is convenient for explaining a large number of phenomena such as reflection or propagation through free space. Consequently, natural intuition about light relies on rays, and one expects light to go from one point to another in a straight line. However, light is actually a wave and therefore exhibits numerous features that are unique to waves.


In recent years, researchers have shown that optical wave packets (beams) can propagate in a self-accelerating manner, where the structure of a beam is engineered to move along a curved trajectory. This field has attracted major interest, with many potential applications. Here, scientists now take these accelerating beams one step further, demonstrating them in a medium that has a curved space geometry, where the trajectory of the accelerating beam is determined by the interplay between the curvature of space and interference effects arising from the beam’s structure.


The simplest example of a curved object is a sphere, because it has the same constant curvature everywhere. Normally, optical beams confined to propagate on the surface of a sphere would move along geodesic paths: great circles, the largest circles on the sphere's surface. But, as this experiment shows both theoretically and experimentally, one can shape the structure of a beam such that it accelerates and evolves in a shape-preserving manner along a non-geodesic line, such as a circle close to the North Pole. The experimenters use a thin hemispheric glass shell as the curved-space landscape for the light, and they couple a specifically shaped beam into this glass waveguide. The brightest lobe of this beam bends away from the shortest (geodesic) path, which is the trajectory light would normally take on the sphere.

Scooped by Dr. Stefan Gruenwald!

Continental’s 3D Touch Surface Display Receives Highest Honor at CES 2018 Innovation Awards


The world’s first touchscreen, featuring a 3D surface, combines a unique visual appearance with a brand-new operating concept by Continental. The innovative 3D touch surface display can be operated instinctively, enhancing the user experience and increasing safety. The technology company was awarded the CES 2018 Best of Innovation Award in the “In-Vehicle Audio/Video” category, the highest awarded honor in its category, for its state of the art design and breakthrough technology.


“Our latest display solution combines three elements: design, safety and user experience. The 3D surface not only allows for exciting design, but it also ensures that drivers can operate the various functions without having to take their eyes off the road,” said Dr. Frank Rabe, head of the Instrumentation & Driver HMI business unit at Continental. “The CES Innovation Awards honor technologies for the very highest standards of design and engineering prowess, so we are absolutely delighted to have received this award.”


The growing demand among users for new features and digital content means that in-vehicle touch screens are getting bigger and bigger. While conventional screens are ideal for the flexible display of digital information, their shortcomings quickly become apparent when it comes to user-friendliness and design possibilities for vehicle manufacturers. To address this, Continental developed a 3D surface for its new touchscreen. The 3D elements allow brand-specific individualization of the high-quality plastic surface and, at the same time, finger guidance that users can actually feel.

Scooped by Dr. Stefan Gruenwald!

Turbulent to laminar flow conversion can save up to 95 percent of energy used for pipelines


Scientists have assumed that once a flow of a fluid has become turbulent, turbulence would persist. Researchers at the Institute of Science and Technology Austria (IST Austria), including Professor Björn Hof and co-first authors Jakob Kühnen and Baofang Song, have now shown that this is not the case. In their experiments, which they published in Nature Physics, they destabilized turbulence in a pipe so that the flow turned to a laminar (non-turbulent) state, and they observed that the flow remained laminar thereafter. Eliminating turbulence can save as much as 95 percent of the energy required to pump a fluid through a pipe.


The amount of energy used by industry to pump fluids through pipes is considerable and corresponds to approximately 10 percent of the global electricity consumption. It therefore does not come as a surprise that researchers worldwide are seeking ways to reduce these costs. The major part of these energy losses is caused by turbulence, a phenomenon that leads to a drastic increase of frictional drag, requiring much more energy to pump the fluid. Previous approaches have aimed to locally reduce turbulence levels. Now, the research group of Björn Hof at IST Austria has taken an entirely new approach, tackling the problem from a different side. Rather than temporarily weakening turbulence, they destabilized existing turbulence so that the flow automatically became laminar.


In a so-called laminar flow, a fluid flows in parallel layers which do not mix. The opposite of this is a turbulent flow, which is characterized by vortices and chaotic changes in pressure and velocity within the fluid. Most flows that we can observe in nature and in engineering are turbulent, from the smoke of an extinguished candle to the flow of blood in the left ventricle of our heart. In pipes, both laminar and turbulent flows can, in principle, exist and be stable, but a small disturbance can make a laminar flow turbulent. Turbulence in pipes was until now assumed to be stable, and efforts to save energy costs therefore only focused on reducing its magnitude but not to extinguish it completely. In their proof of principle, Björn Hof and his group have now shown that this assumption was wrong, and that a turbulent flow can, indeed, be transformed to a laminar one. The flow thereafter remains laminar unless it is disturbed again.
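The size of the potential savings follows from standard pipe-friction correlations. In the Darcy–Weisbach framework, the laminar friction factor is f = 64/Re, while smooth-pipe turbulent flow follows, for example, the Blasius correlation f ≈ 0.316·Re^(-1/4); at pipeline-scale Reynolds numbers the laminar value is a small fraction of the turbulent one. A quick comparison (Blasius is only valid up to roughly Re = 10^5 and is used here purely to illustrate the gap):

```python
def f_laminar(re):
    return 64.0 / re           # exact for laminar pipe flow

def f_blasius(re):
    return 0.316 * re**-0.25   # smooth-pipe turbulent correlation

re = 1e5
# Pressure drop (and pumping power at fixed flow rate) scales with f:
print(f_laminar(re) / f_blasius(re))  # ~0.036 -> ~96% lower drag if laminar
```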

Scooped by Dr. Stefan Gruenwald!

Researchers Develop World's Smallest Wearable Device Measuring UV Light Exposure


A Northwestern University professor, working in conjunction with the global beauty company L'Oréal, has developed the smallest wearable device in the world. The wafer-thin, feather-light sensor can fit on a fingernail and precisely measures a person's exposure to UV light from the sun.


The device, as light as a raindrop and smaller in circumference than an M&M, is powered by the sun and contains the world's most sophisticated and accurate UV dosimeter. It was unveiled Sunday, Jan. 7, at the 2018 Consumer Electronics Show in Las Vegas and will be called UV Sense.


"We think it provides the most convenient, most accurate way for people to measure sun exposure in a quantitative manner," said Northwestern engineer John A. Rogers. "The broader goal is to provide a technology platform that can save lives and reduce skin cancers by allowing individuals, on a personalized level, to modulate their exposure to the sun."


UV Sense has no moving parts, no battery, is waterproof and can be attached to almost any part of the body or clothing, where it continuously measures UV exposure in a unique accumulation mode.


Rogers said the device, created in a partnership with L'Oréal, is meant to stick on a thumbnail -- a stable, rigid surface that ensures robust device adherence. It's also an optimal location to measure exposure to the sun. "It is orders of magnitude smaller than anything else out there," Rogers said. "It also is one of the few sensors that directly measures the most harmful UV rays. Further, it simultaneously records body temperature, which is also very important in the context of sun exposure."


Users need only to download an app on their smartphone, then swipe the phone over the device to see their exposure to the sun, either for that day or over time. The app can suggest other, less UV-intense times for outdoor activities or give peace of mind to individuals who are concerned about overexposure.


"UV Sense is transformative technology that permits people to receive real-time advice via mobile phone messages when they exceed their daily safe sun limit," said June K. Robinson, M.D., research professor of dermatology at Northwestern University Feinberg School of Medicine.


Rogers' research group at Northwestern, in collaboration with Robinson and researchers at Feinberg, has received a roughly $2 million grant from the National Institutes of Health to deploy the fingernail UV sensors in human clinical studies of sun exposure in cohorts of subjects who are at risk for melanoma. The first pre-pilot field trials launched in December.


"Sunlight is the most potent known carcinogen," Rogers said. "It's responsible for more cancers than any other carcinogen known to man, and it's everywhere -- even in Chicago."

Scooped by Dr. Stefan Gruenwald!

Two Experiments Show Fourth Spatial Dimension Effect


To the best of our knowledge, we humans can only experience this world in three spatial dimensions (plus one time dimension): up and down, left and right, and forward and backward. But in two physics labs, scientists have found a way to represent a fourth spatial dimension.


This isn’t a fourth dimension that you can disappear into or anything like that. Instead, two teams of physicists engineered special two-dimensional setups, one with ultra-cold atoms and another with light particles. Both cases demonstrated different but complementary outcomes that looked the same as something called the “quantum Hall effect” occurring in four dimensions. These experiments could have important implications to fundamental science, or even allow engineers to access higher-dimension physics in our lower-dimension world.


“Physically, we don’t have a 4D spatial system, but we can access 4D quantum Hall physics using this lower-dimensional system because the higher-dimensional system is coded in the complexity of the structure,” Mikael Rechtsman, a professor at Penn State University behind one of the papers, told Gizmodo. “Maybe we can come up with new physics in the higher dimension and then design devices that take advantage of the higher-dimensional physics in lower dimensions.”


There are three spatial dimensions, or directions you can move by holding everything else constant. Moving back and forth along a line is moving in one dimension. Make a right angle to the line and you add a second dimension to move forward and backward in—imagine a square. Make another right angle and you enter the third spatial dimension. You can move up and down—that’s a cube. If a fourth dimension existed, you could make another right angle into it, and create some sort of hypercube. A fourth spatial dimension can be described by mathematics but not physically realized.


However, think about this: a three-dimensional figure leaves a two-dimensional shadow. By observing this shadow, we can glean some information about the three-dimensional thing. Perhaps by observing some real-world physical system, we can learn about a fourth-dimensional nature by a shadow left in lower dimensions.

At each of the new experiments’ core is the quantum Hall effect: When electrons are confined to two dimensions, as if they are stuck on the surface of a sheet of paper (like in graphene or in certain layers of semiconductors), and a magnetic field is passed through that sheet perpendicularly, some of the system’s electrical properties become restricted to multiples of exact number values. Math says that other consequences of this quantum Hall effect should be measurable in a four-spatial-dimension system—but again, we don’t have four spatial dimensions to test this physics in.
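For reference, the "multiples of exact number values" is the quantized Hall conductance; in the 2D effect it reads (a standard textbook result, not from the article):

$$\sigma_{xy} = \nu \, \frac{e^2}{h}, \qquad \nu \in \mathbb{Z},$$

where e is the electron charge and h is Planck's constant. The 4D generalization predicts an analogous quantized nonlinear response governed by a second topological invariant (the second Chern number), which is what the two experiments probed indirectly.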
Rescooped by Dr. Stefan Gruenwald from Limitless learning Universe!

Strange Nuclear Sphere Could Revolutionize Fusion Energy


It's well established that nuclear fusion - the reaction that powers our Sun - could be the key to unlocking clean, limitless energy here on Earth. But one of the biggest challenges of modern science is how to harness the fusion reaction so that it produces more energy than it consumes. And a new paper claims to have found a way to do just that.


Instead of looking at how to optimize common fusion reactor designs, such as tokamaks or stellarators, a group of physicists experimentally tested some novel reactor types. They found that a strange-looking sphere design could be the key to achieving net-positive nuclear fusion because, surprisingly, it has the potential to generate more energy than it uses.


The key difference, aside from its shape, is that this nuclear sphere would fuse hydrogen and boron, rather than hydrogen isotopes such as deuterium and tritium. And it uses lasers to heat the core up to 200 times hotter than the centre of the Sun.


If the team's calculations are correct, the hydrogen-boron reactor device could be built and producing net-positive energy way before any of the reactors currently being tested reach completion. Even better, the hydrogen-boron reaction produces no neutrons, and therefore doesn't create any radioactive waste as a byproduct. "It is a most exciting thing to see these reactions confirmed in recent experiments and simulations," says lead researcher Heinrich Hora, from the University of New South Wales in Australia.

Scooped by Dr. Stefan Gruenwald!

Single metalens focuses all colors of the rainbow in one point


Ground-breaking lens opens new possibilities in virtual and augmented reality. Metalenses — flat surfaces that use nanostructures to focus light — promise to revolutionize optics by replacing the bulky, curved lenses currently used in optical devices with a simple, flat surface.  But, these metalenses have remained limited in the spectrum of light they can focus well. Now a team of researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) has developed the first single lens that can focus the entire visible spectrum of light — including white light — in the same spot and in high resolution. This has only ever been achieved in conventional lenses by stacking multiple lenses. The research is published in Nature Nanotechnology.


Focusing the entire visible spectrum and white light – a combination of all the colors of the spectrum – is so challenging because each wavelength moves through a material at a different speed. Red wavelengths, for example, move through glass faster than blue ones, so the two colors reach the same location at different times, resulting in different foci. This creates image distortions known as chromatic aberrations.


Cameras and optical instruments use multiple curved lenses of different thicknesses and materials to correct these aberrations, which, of course, adds to the bulk of the device. “Metalenses have advantages over traditional lenses,” says Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS and senior author of the research. “Metalenses are thin, easy to fabricate and cost effective. This breakthrough extends those advantages across the whole visible range of light. This is the next big step.”


The Harvard Office of Technology Development has protected the intellectual property relating to this project and has licensed it to a startup for commercial development. The metalenses developed by Capasso and his team use arrays of titanium dioxide nanofins to equally focus wavelengths of light and eliminate chromatic aberration. Previous research demonstrated that different wavelengths of light could be focused, but at different distances, by optimizing the shape, width, distance, and height of the nanofins.


In this latest design, the researchers created units of paired nanofins that control the speed of different wavelengths of light simultaneously. The paired nanofins control the refractive index on the metasurface and are tuned to result in different time delays for the light passing through different fins, ensuring that all wavelengths reach the focal spot at the same time.
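The design target the nanofin pairs must approximate can be stated in one formula: at every radius r on the flat lens, and for every wavelength λ in the visible band, the surface must impart the hyperbolic phase profile that focuses a plane wave to the same focal length f. The snippet below just evaluates this standard metalens phase condition; the nanofin geometry that physically realizes it is the hard part of the paper.

```python
import numpy as np

def target_phase(r, wavelength, focal_length):
    """Phase (radians) a flat lens must add at radius r so light of a
    given wavelength converges at focal_length; demanding the same f
    for all wavelengths is the achromatic condition."""
    return -2.0 * np.pi / wavelength * (
        np.sqrt(r**2 + focal_length**2) - focal_length)

r = np.linspace(0, 1e-4, 5)           # radial positions across the lens
for lam in (470e-9, 530e-9, 650e-9):  # blue, green, red
    print(lam, target_phase(r, lam, focal_length=1e-3)[-1])
```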

Rescooped by Dr. Stefan Gruenwald from Limitless learning Universe!

Hyperlens crystal capable of viewing living cells in unprecedented detail


Just imagine: An optical lens so powerful that it lets you view features the size of a small virus on the surface of a living cell in its natural environment.


Construction of instruments with this capability is now possible because of a fundamental advance in the quality of an optical material used in hyperlensing, a method of creating lenses that can resolve objects much smaller than the wavelength of light. The achievement was reported by a team of researchers led by Joshua Caldwell, associate professor of mechanical engineering at Vanderbilt University, in a paper published Dec. 11 in the journal Nature Materials.


The optical material involved is hexagonal boron nitride (hBN), a natural crystal with hyperlensing properties. The best previously reported resolution using hBN was an object about 36 times smaller than the infrared wavelength used: about the size of the smallest bacteria. The new paper describes improvements in the quality of the crystal that enhance its potential imaging capability by about a factor of ten.


The researchers achieved this enhancement by making hBN crystals using isotopically purified boron. Natural boron contains two isotopes that differ in weight by about 10 percent, a combination that significantly degrades the crystal's optical properties in the infrared.


"We have demonstrated that the inherent efficiency limitations of hyperlenses can be overcome through isotopic engineering," said team member Alexander Giles, research physicist at the the U.S. Naval Research Laboratory. "Controlling and manipulating light at nanoscale dimensions is notoriously difficult and inefficient. Our work provides a new path forward for the next generation of materials and devices."


 The researchers calculate that a lens made from their purified crystal can in principle capture images of objects as small as 30 nanometers in size. To put this in perspective, there are 25 million nanometers in an inch and human hair ranges from 80,000 to 100,000 nanometers in diameter. A human red blood cell is about 9,000 nanometers and viruses range from 20 to 400 nanometers.

Via CineversityTV
Scooped by Dr. Stefan Gruenwald!

Two Holograms in One Surface


A team of scientists at Caltech has figured out a way to encode more than one holographic image in a single surface without any loss of resolution. The engineering feat overturns a long-held assumption that a single surface could only project a single image regardless of the angle of illumination.


The technology hinges on the ability of a carefully engineered surface to reflect light differently depending on the angle at which incoming light strikes that surface.


Holograms are three-dimensional images encoded in two-dimensional surfaces. When the surface is illuminated with a laser, the image seems to pop off the surface and becomes visible. Traditionally, the angle at which laser light strikes the surface has been irrelevant—the same image will be visible regardless. That means that no matter how you illuminate the surface, you will only create one hologram.


Led by Andrei Faraon, assistant professor of applied physics and materials science in the Division of Engineering and Applied Science, the team developed silicon oxide and aluminum surfaces studded with tens of millions of tiny silicon posts, each just hundreds of nanometers tall. (For scale, a strand of human hair is 100,000 nanometers wide.) Each nanopost reflects light differently due to variations in its shape and size, and based on the angle of incoming light.


That last property allows each post to act as a pixel in more than one image: for example, acting as a black pixel if incoming light strikes the surface at 0 degrees and a white pixel if incoming light strikes the surface at 30 degrees.
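As a toy model of that encoding: because each post responds independently at each illumination angle, one can choose, pixel by pixel, a post from a small library whose reflectances at 0 and 30 degrees match image A and image B respectively, so both images coexist at full resolution. The four-post library below is hypothetical, purely to illustrate the counting argument:

```python
import numpy as np

# Hypothetical 4-post library: keys are (reflectance at 0 deg, at 30 deg).
LIBRARY = {(0, 0): "post_dd", (0, 1): "post_db",
           (1, 0): "post_bd", (1, 1): "post_bb"}

def choose_posts(image_a, image_b):
    """Per-pixel post selection so the surface shows image_a at 0 degrees
    and image_b at 30 degrees, with no shared-pixel resolution loss."""
    return [[LIBRARY[(int(a), int(b))] for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(image_a, image_b)]

a = np.array([[0, 1], [1, 0]])  # target image at 0 degrees
b = np.array([[1, 1], [0, 0]])  # target image at 30 degrees
print(choose_posts(a, b))
```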


"Each post can do double duty. This is how we're able to have more than one image encoded in the same surface with no loss of resolution," says Faraon (BS '04), senior author of a paper on the new material published by Physical Review X on December 7, 2017.
