Amazing Science
Scooped by Dr. Stefan Gruenwald!

DNA sequencing with nanopores reaches new lengths

Researchers from the University of Washington’s Departments of Physics and Genome Sciences have developed a nanopore sequencing technique reaching read lengths of several thousand bases. The result is the latest in a series of advances in nanopore technology developed at the university.

The team, led by Jens Gundlach, published their findings in Nature Biotechnology as an advance online publication on June 25, 2014 ("Decoding long nanopore sequencing reads of natural DNA").

“This is the first time anyone has shown that nanopores can be used to generate interpretable signatures corresponding to very long DNA sequences from real-world genomes,” said co-author Jay Shendure, an associate professor in Genome Sciences. “It’s a major step forward.”

The idea for nanopore sequencing originated in the 1990s: a lipid membrane, similar to the material that makes up the cell membrane, acts as a barrier separating two liquids. Inserted into the membrane is a tiny gap, just nanometers across, called a nanopore. When a voltage difference is applied across the barrier, ions in the liquid are driven from one side to the other, and the only path available is through the nanopore. The movement of the charged molecules between the two liquids is a current, just like electrons moving along a wire in an electrical circuit, and can be recorded.

Any DNA in the system is also pulled towards the other side of the barrier by the voltage difference, since DNA is negatively charged, and just like the ions it has to pass through the nanopore. The difference is that the DNA is much bigger than the ions and partially blocks the nanopore, making it harder for the smaller molecules to pass through. As the ions are blocked by the DNA, there is a measurable difference in the current flowing across the membrane which is dependent on the DNA base passing through the nanopore. By measuring the changing current, information can be gained on the bases passing through.

The researchers created the nanopore by inserting a single protein called Mycobacterium smegmatis porin A, or MspA, in the membrane. MspA is normally found lining the membrane of a species of bacteria, controlling the intake of nutrients.

One challenge the researchers faced was the control of the DNA passing through the nanopore. Normally, the DNA would zip through the MspA nanopore too fast to detect the changes in the current. The researchers slowed the DNA movement through the pore using a second protein called phi29 DNA polymerase (DNAP), which captures DNA and slows its movement through the pore.

The shape of the protein MspA meant that several bases passed through the nanopore at one time and the current changes were the result of a combination of those bases. This presented another challenge. Since several bases passed through the nanopore at one time, the researchers needed a way to decipher what the current changes meant. To do this, they first made a library of DNA sequences containing all possible combinations of 4 nucleotides (for the mathematically inclined, that is 4⁴ = 256 combinations – a string of 4 bases with 4 possible choices for each position). The library, whose sequence was already known, was run through the nanopore first to find the current associated with each combination of DNA bases. They combined the library measurements with known genome sequences to generate a set of expected current changes that could be compared to experimental measurements.
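The decoding idea can be sketched in a few lines of code. Note that the current values below are invented placeholders, not the published measurements; in the real experiment each 4-mer's current level comes from running the known-sequence library through the pore:

```python
from itertools import product

BASES = "ACGT"

# All 4^4 = 256 possible 4-base windows that can sit in the pore at once.
LIBRARY = ["".join(kmer) for kmer in product(BASES, repeat=4)]

# Hypothetical calibration table: map each 4-mer to a current level (pA).
# Placeholder values for illustration only.
calibration = {kmer: 20.0 + 0.1 * i for i, kmer in enumerate(LIBRARY)}

def expected_currents(sequence):
    """Expected current level for each 4-base window as DNA transits the pore."""
    return [calibration[sequence[i:i + 4]] for i in range(len(sequence) - 3)]
```

Matching an observed current trace against such a table of expected levels is, in spirit, how the known-genome comparison described above works.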

The researchers tested their approach by sequencing the entire genome of bacteriophage Phi X 174, a virus that infects bacteria and is used as a benchmark for evaluating new sequencing technologies. The impressive feat here is the length of the genome they sequenced – the Phi X 174 genome is 4,500 bases long. Other nanopore technologies have been limited to sequencing DNA fragments that were much shorter.

“Despite the remaining hurdles, our demonstration that a low-cost device can reliably read the sequences of naturally occurring DNA and can interpret DNA segments as long as 4,500 nucleotides in length represents a major advance in nanopore DNA sequencing,” explained Gundlach.

Flexible electronics: Fully printed organic thin film transistors (OTFTs) on a paper substrate

A nanoparticle ink that can be used for printing electronics without high-temperature annealing offers a promising approach for manufacturing flexible electronics.

Printing semiconductor devices is expected to provide low-cost, high-performance flexible electronics that outperform the amorphous silicon thin film transistors currently limiting developments in display technology. However, the nanoparticle inks developed so far have required annealing, which limits them to substrates that can withstand high temperatures, ruling out many of the flexible plastics that could otherwise be used. Researchers at the National Institute for Materials Science and Okayama University in Japan have now developed a nanoparticle ink that can be used with room-temperature printing procedures.

Developments in thin film transistors made from amorphous silicon have provided wider, thinner displays with higher resolution and lower energy consumption. However, further progress in this field is now limited by the low response to applied electric fields, that is, the low field-effect mobility. Oxide semiconductors such as InGaZnO (IGZO) offer better performance characteristics but require complicated fabrication procedures.

Nanoparticle inks should allow simple low-cost manufacture but the nanoparticles usually used are surrounded in non-conductive ligands – molecules that are introduced during synthesis for stabilizing the particles. These ligands must be removed by annealing to make the ink conducting. Takeo Minari, Masayuki Kanehara and colleagues found a way around this difficulty by developing nanoparticles surrounded by planar aromatic molecules that allow charge transfer.

The gold nanoparticles had a resistivity of around 9 × 10⁻⁶ Ω cm – similar to pure gold. The researchers used the nanoparticle ink to print organic thin film transistors on a flexible polymer and a paper substrate at room temperature, producing devices with mobilities of 7.9 and 2.5 cm² V⁻¹ s⁻¹ for polymer and paper respectively – figures comparable to IGZO devices.

As the researchers conclude in their report of the work, "This room temperature printing process is a promising method as a core technology for future semiconductor devices."

Reference: Minari, T., Kanehara, Y., Liu, C., Sakamoto, K., Yasuda, T., Yaguchi, A., Tsukada, S., Kashizaki, K. and Kanehara, M. (2014), "Room-Temperature Printing of Organic Thin-Film Transistors with π-Junction Gold Nanoparticles." Adv. Funct. Mater. doi: 10.1002/adfm.201400169

STAMP: Japanese universities develop new world's fastest camera

Researchers working at two universities in Japan have jointly developed what is being described as the world's fastest camera: a photo-device that captures images at a rate of 4.4 trillion frames per second. In their paper published in the journal Nature Photonics, the team describes how their camera works, its capabilities and the extensive work that went into its creation.

High speed cameras allow researchers and everyday people alike to see things that they wouldn't otherwise be able to, from slowed-down sports replays to mechanical processes. Prior to the announcement in Japan, the fastest cameras relied on what's known as a pump-probe process, in which light is "pumped" at an object to be photographed and then "probed" for absorption. The main drawback to such an approach is that it requires repetitive measurements to construct an image. The new camera performs motion-based femtophotography, using single-shot bursts for image acquisition, which means it has no need for repetitive measurements. It works via optical mapping of an object's time-varying spatial profile. These abilities make it 1,000 times as fast as the cameras it supersedes. In addition to the extremely high frame rate, the camera also has a high pixel resolution (450 × 450).

Developed by a joint team of researchers from Keio University and the University of Tokyo, the camera is set to capture images of things and events that until now have not been possible to record. With technology the team has named Sequentially Timed All-optical Mapping Photography, or STAMP for short, the camera is poised to be used to capture chemical reactions, lattice vibrational waves, plasma dynamics, even heat conduction, which the researchers note occurs at approximately one-sixth the speed of light.

The joint team has been working on development of the camera over the course of three years, and plans call for continued development: the team would like to make the camera smaller (currently it's about a square meter) to allow for use in more applications. They also believe the camera could be used in a wide variety of fields, in both the public and private sectors. Some examples would be laser processes used for making big items like car parts, or tiny applications such as the creation of semiconductors.

Such a high-speed camera would allow researchers to actually see what is going on as the laser does its work. They also expect the camera to be useful in the medical field.

CurvACE gives robots a bug’s eye view

The vertebrate eye has provided inspiration for the design of conventional cameras with single-aperture optics to provide a faithful rendering of the visual world. The insect compound eye, despite its comparatively lower resolution than the vertebrate eye, is very efficient for local and global motion analysis over a large field of view (FOV), making it an excellent sensor for accurate and fast navigation in 3D dynamic environments. Furthermore, compound eyes take several shapes and curvatures to fit the head and viewing directions of very different types of insects while offering the same functionality.

The goal of this project is to design, develop, and assess a novel curved and flexible vision sensor for fast extraction of motion-related information. These integrated systems are called CURVed Artificial Compound Eyes (CURVACE). Compared to conventional cameras, artificial compound eyes will offer more efficient visual abilities for motion analysis and a much larger field of view in a smaller size and weight, with thin packaging, while being self-contained and programmable.

Additionally, the CURVACE developers will use neuromorphic imagers with adaptive sensitivity, and the rendered images will yield less distortion and less aberration. The fabricated CURVACE will bear mechanical adaptability to a range of shapes and curvatures, and some versions will offer space within the convexity for embedding processing units, battery, or additional sensors that are useful for motion-related computation.

The findings of the CurvACE project were published in PNAS.

Optical projection tomography microscopy (OPTM) could detect early signs of cancer inside cells

An optical technique able to spot the tell-tale changes in DNA content of cell nuclei during the earliest stages of cancer could offer valuable screening and surveillance in the fight against some forms of the disease.

Optical projection tomographic microscopy (OPTM), developed by a team at the University of Washington and commercialized as Cell-CT by VisionGate, employs computerized tomographic reconstruction methods analogous to those used in an X-ray CT scan. But its data comes from an optical examination, rather than an X-ray.

A paper published in SPIE's Journal of Medical Imaging assessed how well the technique could detect aneuploidy, the presence of an incorrect amount of DNA material in a cell nucleus that can be an early indication of cancer. The results show that OPTM was fully capable of providing results on a par with the gold-standard flow cytometry and image cytometry methods used by pathologists.

"OPTM stemmed originally from our wish to take microscopic images by simulating an X-ray," said Eric Seibel of the university's Human Photonics Laboratory (HPL). "One way to do that is to use an optical system with a high numerical aperture and high magnification lens, and scan that very thin focal plane through a very small object, such as the nucleus of a single cell."

In the Cell-CT platform, a widefield optical microscope is adapted with a customized rotation stage. Cells, their nuclear material already stained by standard techniques, are carried by an optical gel along a 50 micron channel in a microcapillary tube, which rotates around its long axis. A fast mirror scans the objective focal plane axially through the sample. The result is a large data set of "pseudo-projections," generated as the sample tube rotates. Algorithms adapted from those used in an analogous fashion for conventional CT images then turn the slices of visual data into a 3D image of the stained nuclear material of the specimen, with a resolution of 0.35 microns.
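The reconstruction step can be illustrated in spirit with a minimal unfiltered backprojection sketch. This is a textbook toy, not VisionGate's algorithm: each 1-D projection is smeared back across the image plane at its acquisition angle and the results are summed.

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles):
    """Unfiltered backprojection: smear each 1-D projection across the
    image plane at its acquisition angle (degrees) and accumulate."""
    n = sinogram.shape[1]
    recon = np.zeros((n, n))
    for proj, theta in zip(sinogram, angles):
        smear = np.tile(proj, (n, 1))  # constant along the projection rays
        recon += rotate(smear, theta, reshape=False, order=1)
    return recon / len(angles)
```

A phantom reconstructed this way comes out blurred; practical CT pipelines add a ramp filter before backprojection (filtered backprojection) or use iterative methods, and the Cell-CT algorithms are adapted versions of such conventional CT reconstruction.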

"This was essentially a calibration exercise," commented Seibel. "We wanted to make quantitative measurements on different standard cultured cancer cell lines, and see if OPTM could match results from flow cytometry; and we found that it could. We also wanted to see how to automate the process as much as possible, since 3D analysis is more difficult for humans to perform than 2D analysis, and computational power is needed."

Earliest signs of cancer: The implications for both cancer detection in certain specific scenarios and the wider development of new pathology techniques could be significant. Crucially, the technique could help to spot aneuploidy in small numbers of sample cells, potentially revealing the presence of cancer before other clinical signs materialize. Flow cytometry works on large numbers of cells and assesses whole populations, making it less able to spot the rare abnormality in a limited group of cells; OPTM, by contrast, could be ideal for such cases.

In particular, VisionGate has positioned Cell-CT as a means to spot the earliest signs of lung cancer, by using it to assess sputum samples from high-risk individuals, such as ex-smokers. Further developments in the platform are under way to automate the image analysis operation further, and also to boost the speed of the overall process, with the goal of imaging one cell in full 3D every second.

"There is a very good chance that 3D image analysis is the only way we can go for really early detection of cancer, early enough to allow implementation of therapeutic drugs to tamp down its progression," commented Seibel. "These types of techniques could become prominent for testing of sputum, urine, or in other instances where a sample can be taken with a needle and analyzed. Potentially this could all be done without a pathologist present."

IBM Develops a New Chip That Functions Like a Brain

The processor, named TrueNorth, may eventually excel at calculations that stump today’s supercomputers. Developed by researchers at IBM and detailed in an article published on Thursday in the journal Science, the chip tries to mimic the way brains recognize patterns, relying on densely interconnected webs of transistors similar to the brain’s neural networks.

The chip’s electronic “neurons” are able to signal others when a type of data — light, for example — passes a certain threshold. Working in parallel, the neurons begin to organize the data into patterns suggesting the light is growing brighter, or changing color or shape. The processor may thus be able to recognize that a woman in a video is picking up a purse, or control a robot that is reaching into a pocket and pulling out a quarter. Humans are able to recognize these acts without conscious thought, yet today’s computers and robots struggle to interpret them.

From the Science paper’s abstract: Inspired by the brain’s structure, we have developed an efficient, scalable, and flexible non–von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts.
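A quick back-of-the-envelope on those quoted figures puts the power budget in perspective:

```python
POWER_W = 63e-3     # 63 milliwatts while processing 30 fps video
FPS = 30
SYNAPSES = 256e6    # 256 million configurable synapses

energy_per_frame_mj = POWER_W / FPS * 1e3        # millijoules per video frame
power_per_synapse_nw = POWER_W / SYNAPSES * 1e9  # nanowatts per synapse

print(energy_per_frame_mj)   # ~2.1 mJ per frame
print(power_per_synapse_nw)  # ~0.25 nW per synapse
```

Roughly 2 millijoules per processed video frame and a fraction of a nanowatt per synapse, which is the sense in which the chip is described as brain-like in efficiency.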

In-car heads-up display lets you respond to texts with hand motions and voice

We've seen companies take a few stabs at smartphone-savvy heads-up displays for cars, but they tend to be one-way devices -- while they'll feed you info, you still have to reach for your phone to answer a message or get directions. Navdy may just have a smarter solution in store. Its namesake HUD not only projects car stats, navigation and notifications, but lets you interact with them through a blend of gestures and speech. You swipe with your fingers to either respond to or dismiss any alert that comes in; the system leans on the built-in voice commands from Android and iOS, so you can tell Navdy to get directions in Google Maps or play iTunes music as if you were speaking to the phone itself.

Navdy announced on 8/5/2014 a Head-Up Display (HUD) aftermarket car console that allows drivers to access their smartphone’s apps while keeping their eyes on the road. Navdy combines a high quality projection display with voice and gesture controls to create a safer, highly intuitive driving experience. Combining advanced display technology with touchless controls means drivers no longer need to fumble around with their phone to navigate, communicate or control their music. Navdy's display technology projects a bright transparent image directly within your field of vision that appears to float six feet in front of your windshield, allowing you to keep your eyes on the road while simultaneously seeing navigation instructions or incoming phone calls. The device comes with advanced dimming and stabilization controls to optimize usability in any driving conditions.

Due to voice and gesture controls, you’ll never need to look away from the road to use Navdy. Your apps’ simplified Navdy menus can be navigated with intuitive hand gestures. Voice recognition captures more complex commands and text message responses. Navdy’s noise cancellation and wide angle gesture sensors are specifically designed to create an optimal driving experience.

The device mounts on a flexible footer that fits on practically any car dashboard, and is powered by plugging in to the onboard computer (OBD II port), available in all cars produced since 1996. This makes the only required cord less intrusive, while providing car status information to the Navdy processor.

Navdy works with popular navigation apps like Google Maps to display turn-by-turn directions; it controls your music apps like Spotify, Pandora, or Google Music; it reads or displays notifications from text messages or social media apps, fully controlled by its Parental Control settings; and it displays car alerts such as true speed, miles to empty, or battery voltage from its access to the car’s computer.

Navdy works with iPhone (iOS 7+) and Android (4.3+) smartphones, and can move easily to another car or another smartphone. Once a Navdy has been paired over Bluetooth for the first time, it can share data with your phone over Wi-Fi. Navdy does not require its own data subscription service. Initial dashboard placement takes 60-90 seconds (slightly longer if you read the instructions).

The company is getting its display off the ground through crowdfunding. If you're willing to commit within the first 30 days, you can pay $299 for a Navdy unit instead of the $499 it will cost when it ships in early 2015.

Acoustic bottle beams hold promise for acoustic imaging, cloaking and levitation

Using a technique that has possible applications in acoustic cloaking, sonic levitation, ultrasonic imaging, and particle manipulation, scientists at the University of California Berkeley claim to have produced a "bottle" beam of acoustic energy in open air that can precisely redirect sound waves. Able to bend these waves along set trajectories without the need for waveguides or other mechanical assistance, the bottle beam is also able to flow around objects in its path while maintaining its shape.

Sound waves – like light waves – travel in straight lines, but can be bent through reflection, diffraction, or refraction. In the case of the Berkeley Lab experiments, the researchers bent sound waves using an array of acoustic transducers – effectively high-frequency loudspeakers – some 1.5 cm (0.6 in) in diameter, spaced 2.5 cm (0.98 in) apart and operating at a frequency of 10 kHz. This array was able to directly alter the phase and direction at which each sound wave was generated so that a defined set of pressure fields with distinct trajectories were created in the air.
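The per-element phase control such an array performs can be sketched with the classic delay-and-steer calculation, using the article's operating numbers (10 kHz tone, 2.5 cm element spacing). The element count and steering angle below are illustrative, and the actual bottle-beam field uses a more elaborate phase profile than simple steering:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air
FREQ = 10e3             # Hz, as in the Berkeley experiment
PITCH = 0.025           # m between adjacent transducers

def steering_phases(n_elements, steer_deg):
    """Phase offset (radians) for each transducer in a linear array so
    that the emitted wavefronts add coherently in the steer_deg direction."""
    k = 2 * np.pi * FREQ / SPEED_OF_SOUND  # wavenumber
    n = np.arange(n_elements)
    return -k * n * PITCH * np.sin(np.radians(steer_deg))
```

Driving each element with its computed phase tilts the summed wavefront; superposing many such phase profiles is what lets the array sculpt curved, self-bending pressure fields like the bottle beam.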

As a result, the team claimed to have produced an acoustic bottle beam whose sound waves travel through the high pressure wall of its curved shell to flow around a zero pressure center. In this way, the sound waves are held together and are able to travel on in this fashion over some distance. The acoustic bottle beam is also uninfluenced by solid objects that are placed in its path, with the shape and characteristics of the sound waves reforming after flowing around the object.

"Our acoustic bottle beams open new avenues to applications in which there is a need to access hard-to-reach objects hidden behind obstacles, such as acoustic imaging and therapeutic ultrasound through inhomogeneous media," said Berkeley Lab researcher Tongcang Li. "We can also use an acoustic bottle as a cloaking device, re-routing sound waves around an object and then recovering them in their original form, making the object invisible to sonar detection."

As the high pressure exterior of the acoustic bottle also applies a dragging force to the surrounding air, no sound waves can pass through the zero pressure interior of the bottle, thereby making it suitable for acoustic trapping. In this way, nanoparticles and similarly microscopic objects may be held in place with nothing more than sound wave pressure surrounding them.

Similarly, acoustic levitation, where sound waves are used to move and handle microscopically small objects, such as nanoparticles, microbes, or water droplets, may also be rendered possible by continued development of this research. The team also believes that their work may find use in the likes of 3D graphic printing and object manipulation by emulating recent standing wave experiments in that area.

"Our acoustic bottle beams can do the same thing but offer better stability, true 3D graphics, and more freedom of motion as our beam can propagate along a curved path," said co-researcher Xuefeng Zhu. "We can also levitate much larger 3D objects than can be lifted."

Elon Musk Says Artificial Intelligence Could Be More Dangerous Than Nuclear Weapons

When Elon Musk, the founder of Tesla and SpaceX, comments on the future, ears in the tech space perk up. But a weekend mini-rant from the futurist drew the attention of even some non-techies and revealed that he's more worried about an artificial intelligence (A.I.) apocalypse than he's let on in recent months.

Posting his thoughts to Twitter on Saturday, after recommending a book about A.I., Musk made what might be the most controversial technology statement of his career: "We need to be super careful with A.I. Potentially more dangerous than nukes."

Others, like Google's Ray Kurzweil, have discussed a technological "singularity," in which A.I.s take over from humans, but rarely has such a high profile voice with real ties to the technology business put the prospect in such stark terms.

To be fair, Musk's thoughts should be considered within the context he made them, that is, suggesting the book Superintelligence: Paths, Dangers, Strategies, a work by Nick Bostrom that asks major questions about how humanity will cope with super-intelligent computers in the future.

Nevertheless, the comparison of A.I. to nuclear weapons, a threat that has cast a worrying shadow over much of the last 30 years in terms of humanity's longevity possibly being cut short by a nuclear war, immediately raises a couple of questions.

The first, and most likely from many quarters, will be to question Musk's future-casting. Some may use Musk's A.I. concerns — which remain fantastical to many — as proof that his predictions regarding electric cars and commercial space travel are the visions of someone who has seen too many science fiction films. "If Musk really thinks robots might destroy humanity, maybe we need to dismiss his long view thoughts on other technologies." Those essays are likely already being written.

In recent years, Musk's most science fiction-inspired comments have revolved around colonizing Mars, but this latest comment, and the one he made back in June about fearing a "Terminator" future, indicate that this is a serious issue for the tech mogul. As for whether his concerns hold any weight, we can't be sure, just yet, but Musk is hedging his bets by investing in an artificial intelligence research company called Vicarious.

Apparently, although not as vocal about it, others in the tech space agree with Musk's investment approach toward super-intelligent machines. Investors in Vicarious include the likes of Facebook's Mark Zuckerberg and Amazon's Jeff Bezos.

Karlos Svoboda's curator insight, August 5, 4:34 PM

Better to warn sooner than to regret later.

New material combines two semiconductor sheets, each three atomic layers thick, to create ultra-thin solar cells

From the paper’s abstract: Semiconductor heterostructures form the cornerstone of many electronic and optoelectronic devices and are traditionally fabricated using epitaxial growth techniques. More recently, heterostructures have also been obtained by vertical stacking of two-dimensional crystals, such as graphene and related two-dimensional materials. These layered designer materials are held together by van der Waals forces and contain atomically sharp interfaces. Here, we report on a type-II van der Waals heterojunction made of molybdenum disulfide and tungsten diselenide monolayers. The junction is electrically tunable, and under appropriate gate bias an atomically thin diode is realized. Upon optical illumination, charge transfer occurs across the planar interface and the device exhibits a photovoltaic effect. Advances in large-scale production of two-dimensional crystals could thus lead to a new photovoltaic solar technology.

Tungsten diselenide is a semiconductor which consists of three atomic layers. One layer of tungsten is sandwiched between two layers of selenium atoms. “We had already been able to show that tungsten diselenide can be used to turn light into electric energy and vice versa”, says Thomas Mueller. But a solar cell made only of tungsten diselenide would require countless tiny metal electrodes tightly spaced only a few micrometers apart. If the material is combined with molybdenum disulphide, which also consists of three atomic layers, this problem is elegantly circumvented. The heterostructure can now be used to build large-area solar cells.

When light shines on a photoactive material single electrons are removed from their original position. A positively charged hole remains, where the electron used to be. Both the electron and the hole can move freely in the material, but they only contribute to the electrical current when they are kept apart so that they cannot recombine. 

To prevent recombination of electrons and holes, metallic electrodes can be used, through which the charge is sucked away – or a second material is added. “The holes move inside the tungsten diselenide layer, the electrons, on the other hand, migrate into the molybdenum disulphide”, says Thomas Mueller. Thus, recombination is suppressed.

This is only possible if the energies of the electrons in both layers are tuned exactly the right way. In the experiment, this can be done using electrostatic fields. Florian Libisch and Professor Joachim Burgdörfer (TU Vienna) provided computer simulations to calculate how the energy of the electrons changes in both materials and which voltage leads to an optimum yield of electrical power.

Beyond GPS: Five next-generation technologies

Several DARPA programs are exploring innovative technologies and approaches that could supplement GPS to provide reliable, highly accurate real-time positioning, navigation and timing (PNT) data for military and civilian uses and deal with possible loss of GPS accuracy from solar storms or jamming, for example.

DARPA Director Arati Prabhakar said DARPA currently has five programs that focus on PNT-related technology.

Adaptable Navigation Systems (ANS) is developing new algorithms and architectures that can create better inertial measurement devices. By using cold-atom interferometry, which measures the relative acceleration and rotation of a cloud of atoms stored within a sensor, extremely accurate inertial measurement devices could operate for long periods without needing external data to determine time and position. ANS also seeks to exploit non-navigational electromagnetic signals — including commercial satellite, radio and television signals and even lightning strikes — to provide additional points of reference for PNT.

Microtechnology for Positioning, Navigation, and Timing (Micro-PNT) leverages extreme miniaturization made possible by DARPA-developed micro-electromechanical systems (MEMS) technology. These include precise chip-scale gyroscopes, clocks, and complete integrated timing and inertial measurement devices. DARPA researchers have fabricated a prototype with three gyroscopes, three accelerometers and a highly accurate master clock on a chip that fits easily on the face of a penny.

Quantum-Assisted Sensing and Readout (QuASAR) intends to make the world’s most accurate atomic clocks — which currently reside in laboratories — both robust and portable. QuASAR researchers have developed optical atomic clocks in laboratories with a timing error of less than 1 second in 5 billion years. Making clocks this accurate and portable could improve upon existing military systems such as GPS, and potentially enable entirely new radar, LIDAR, and metrology applications.
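That accuracy figure corresponds to a fractional frequency uncertainty in the low 10⁻¹⁸ range, as simple arithmetic shows:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year

# 1 second of accumulated error over 5 billion years:
fractional_error = 1.0 / (5e9 * SECONDS_PER_YEAR)

print(f"{fractional_error:.1e}")  # ~6.3e-18
```

A clock stable at the parts-in-10¹⁸ level is far beyond the cesium standards that currently underpin GPS timing.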

The Program in Ultrafast Laser Science and Engineering (PULSE) applies the latest in pulsed laser technology to significantly improve the precision and size of atomic clocks and microwave sources, enabling more accurate time and frequency synchronization over large distances. It could enable global distribution of time precise enough to take advantage of the world’s most accurate optical atomic clocks.

The Spatial, Temporal and Orientation Information in Contested Environments (STOIC) program seeks to develop PNT systems that are independent of GPS: long-range robust reference signals, ultra-stable tactical clocks, and multifunctional systems that provide PNT information among multiple users.


World's first artificial leaf uses chloroplasts from real plants for photosynthesis


The Silk Leaf project uses chloroplasts from real plants suspended in silk proteins to create a hardy vehicle for photosynthesis.

The leaves, created by Royal College of Art student Julian Melchiorri, absorb water and carbon dioxide just like real plants but are made from tough silk proteins that could let them survive space voyages.

Melchiorri explains: "NASA is researching different ways to produce oxygen for long-distance space journeys to let us live in space. This material could allow us to explore space much further than we can now."

The Silk Leaf project was engineered in collaboration with Tufts University silk lab, which helped Melchiorri extract chloroplasts from real leaves and suspend them in a silk matrix.

"The material is extracted directly from the fibres of silk," explains Melchiorri. "This material has an amazing property of stabilising molecules. I extracted chloroplasts from plant cells and placed them inside this silk protein. As an outcome I have the first photosynthetic material that is living and breathing as a leaf does."

Chloroplasts are the parts of plant cells that conduct photosynthesis, using the energy of the sun to turn carbon dioxide and water into glucose and oxygen.
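In stoichiometric form, the overall photosynthesis reaction is:

```latex
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```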

Melchiorri’s creations are currently more conceptual than practical (the efficiency of the photosynthesis process hasn’t been tested, for one), but he hopes they could be used in all manner of futuristic architectural projects, perhaps even deploying giant leaves as air filters, hanging them on the exterior of buildings to absorb CO2 and channel fresh air inside.


Nonablative Laser Light Increases Influenza Vaccine Response 4 to 7-fold - Neomatica


Influenza imposes a heavy annual public health burden, and lies historically at the heart of a number of global pandemics that killed tens of millions.  To overcome the challenges of manufacturing enough vaccines such that we may stave off the next epidemic, medical researchers are searching for ways to strengthen or extend the power of existing and stockpiled vaccines.  Now a team of scientists in Boston has just developed a new method of using laser light to stimulate and enhance the immune response to a vaccine by a remarkable 4 to 7-fold against disease agents. Such treatments that assist vaccines but are not vaccines themselves are known as adjuvants.

Interestingly, the 4- to 7-fold improvement from the laser adjuvant could not be matched even by increasing the vaccine dosage 10-fold. Efficacy of the vaccine was measured by the level of influenza-specific antibodies generated in an inoculated subject. The new method improves on an existing adjuvant whose harmful side effects have so far prevented its broad use. Although the results were obtained in two animal models (adult and aged mice, as well as pigs), the fairly general immunological basis of the effect is expected to translate to humans.

Before inoculation, the injection site is exposed to laser light for a short time. The light does not perforate the outer layers of the skin, but rather injures the dermis.  Because of the way the laser light is arranged, this creates a number of “microthermal zones.”  In each zone, dermal cells that are damaged stimulate inflammation, signaling danger to the immune system, which in turn attracts antigen-presenting cells (APCs) to the damaged area. APCs are cells that occur naturally in the body that bind antigens of harmful disease agents so as to prepare the rest of the immune system to recognize and neutralize the threat.

The damaged area is so small that self-healing occurs within 72 hours. The inspiration for the adjuvant comes from a type of skin treatment used in cosmetic dermatology, in which laser light is used to lightly stimulate aged-looking skin. Post-damage, epithelial cells quickly grow to surround the microthermal zone, giving rise to more youthful-looking skin. The same class of non-ablative lasers was used in this study.


Researchers Develop Algorithm to Turn Erratic First-Person Footage Into Smooth Hyperlapse Videos


Three researchers at Microsoft Research (Johannes Kopf, Michael Cohen, and Richard Szeliski) have developed an algorithm that turns erratic first-person footage into smooth hyperlapse videos. The problem, as they put it, is that first-person footage is generally so long that the best way to actually view it is through a time-lapse video, but a time-lapse video further exacerbates the general shakiness and erratic nature of first-person footage. Kopf, Cohen, and Szeliski solve this problem through a complex algorithm that ultimately stitches and blends certain frames into a cohesive whole.

Our algorithm first reconstructs the 3D input camera path as well as dense, per-frame proxy geometries. We then optimize a novel camera path for the output video (shown in red) that is smooth and passes near the input cameras while ensuring that the virtual camera looks in directions that can be rendered well from the input.
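The full pipeline (3D reconstruction, proxy geometry, view synthesis) goes far beyond a snippet, but the core trade-off of the path optimization, staying near the input cameras while keeping the path smooth, can be sketched in one dimension. This is a hypothetical illustration, not the authors' actual algorithm:

```python
import numpy as np

def smooth_path(q, lam=50.0):
    """Minimize ||p - q||^2 + lam * ||D2 @ p||^2 in closed form, where D2 is
    the second-difference operator (penalizing acceleration keeps p smooth)."""
    n = len(q)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i], D2[i, i + 1], D2[i, i + 2] = 1.0, -2.0, 1.0
    A = np.eye(n) + lam * D2.T @ D2   # normal equations of the quadratic objective
    return np.linalg.solve(A, q)

# A shaky 1-D "camera path": a straight dolly move plus jitter.
rng = np.random.default_rng(0)
shaky = np.linspace(0.0, 10.0, 80) + rng.normal(0.0, 0.3, 80)
smoothed = smooth_path(shaky)
```

Increasing `lam` favors smoothness over fidelity to the shaky input; the real method optimizes a full 6-degree-of-freedom camera pose path with additional rendering-quality terms.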

More on the algorithm, including a more technical breakdown, is available at Microsoft Research.


Blood-brain-barrier disruption with high-frequency pulsed electric fields


A team of researchers from Virginia Tech and Wake Forest University School of Biomedical Engineering and Sciences have developed a new technique for using pulsed electric energy to open the blood-brain-barrier (BBB) for treating brain cancer and neurological disorders.

Their Vascular Enabled Integrated Nanosecond pulse (VEIN pulse) procedure consists of inserting minimally invasive needle electrodes into the diseased tissue and applying multiple bursts of 850-nanosecond pulsed electric energy with alternating polarity.

The researchers think the bursts disrupt tight junction proteins responsible for maintaining the integrity of the BBB, but without causing damage to the surrounding tissue. This technique will be described in the upcoming issue of the journal TECHNOLOGY.

For the treatment of brain cancer, “VEIN pulses could be applied at the same time as biopsy or through the same track as the biopsy probe in order to mitigate damage to the healthy tissue by limiting the number of needle insertions,” says Rafael V. Davalos, Ph.D., director of the Bioelectromechanical Systems Laboratory at Virginia Tech.

The BBB is a network of tight junctions that normally acts to protect the brain from foreign substances by preventing them from leaking from blood vessels into neural structures. But that also limits the effectiveness of drugs to treat brain disease. Temporarily opening the BBB is a way to ensure that drugs can still be effective.


New technique creates highly accurate detailed 3D maps in real time


Computer scientists at MIT and the National University of Ireland (NUI) at Maynooth have developed a mapping algorithm that creates dense, highly detailed 3-D maps of indoor and outdoor environments in real time.

The researchers tested their algorithm on videos taken with a low-cost Kinect camera, including one that explores the serpentine halls and stairways of MIT’s Stata Center. Applying their mapping technique to these videos, the researchers created rich, three-dimensional maps as the camera explored its surroundings.

As the camera circled back to its starting point, the researchers found that after returning to a location recognized as familiar, the algorithm was able to quickly stitch images together to effectively “close the loop,” creating a continuous, realistic 3-D map in real time.

The technique solves a major problem in the robotic mapping community that’s known as either “loop closure” or “drift”: As a camera pans across a room or travels down a corridor, it invariably introduces slight errors in the estimated path taken. A doorway may shift a bit to the right, or a wall may appear slightly taller than it is. Over relatively long distances, these errors can compound, resulting in a disjointed map, with walls and stairways that don’t exactly line up.

In contrast, the new mapping technique determines how to connect a map by tracking a camera’s pose, or position in space, throughout its route. When a camera returns to a place where it’s already been, the algorithm determines which points within the 3-D map to adjust, based on the camera’s previous poses.
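As a minimal caricature of loop closure (my own sketch; the researchers use a far more sophisticated pose-based map deformation), the accumulated drift can be spread linearly back along the trajectory:

```python
import numpy as np

def close_loop(path):
    """Correct drift in an estimated trajectory that should end at its start.

    path: (n, 2) array of estimated x, y positions. The residual between the
    last and first position is the accumulated drift; each pose is shifted by
    a share of it proportional to how far along the route it lies."""
    drift = path[-1] - path[0]
    weights = np.linspace(0.0, 1.0, len(path))[:, None]  # 0 at start, 1 at end
    return path - weights * drift

# A square walk whose estimate drifts: it should return to (0, 0) but misses.
est = np.array([[0.0, 0.0], [1.0, 0.1], [1.1, 1.1], [0.1, 1.2], [0.2, 0.3]])
corrected = close_loop(est)   # corrected[-1] now coincides with corrected[0]
```

Real systems instead optimize over the whole pose graph, which is what lets the map "warp and bend into place" rather than merely translate.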

“Before the map has been corrected, it’s sort of all tangled up in itself,” says Thomas Whelan, a PhD student at NUI. “We use knowledge of where the camera’s been to untangle it. The technique we developed allows you to shift the map, so it warps and bends into place.”

The technique, he says, may be used to guide robots through potentially hazardous or unknown environments. Whelan’s colleague John Leonard, a professor of mechanical engineering at MIT, also envisions a more benign application.

Nikauly Vargas Arias's curator insight, August 14, 4:44 PM

An interesting tool for the sustainable management of built heritage.


3D sketching system ‘revolutionizes’ design interaction and collaboration


University of Montreal researchers have developed a collaborative 3D sketching system called Hyve-3D (Hybrid Virtual Environment 3D), which they presented at the SIGGRAPH 2014 conference in Vancouver this week.

“Hyve-3D is a new interface for 3D content creation via embodied and collaborative 3D sketching,” said lead researcher Professor Tomás Dorta of the university’s School of Design.

“The system is a full-scale immersive 3D environment. Users create drawings on hand-held tablets. They can then use the tablets to manipulate the sketches to create a 3D design within [that] space.” For example, they could be designing the outside of a car, and then actually stepping into the car to work on the interior detailing.

The 3D images are the result of an optical illusion created by a wide-screen high-resolution projector. A 16-inch dome mirror projects the image onto a specially designed 5-meter-diameter spherically concave fabric screen. The system is driven by a MacBook Pro laptop, a tracking system with two 3D sensors, and two iPad mini tablets. Each iPad is attached to a tracker.

“The software takes care of all the networking, scene management, 3D graphics and projection, and also couples the sensors’ input and iPad devices,” Dorta explains. “The iPads run [an app] that serves as the user-interaction front-end of the system. Specialized techniques render the 3D scene onto a spherical projection in real-time.” The Hyve-3D software also works on conventional 2D displays.

The iPads are tracked in six degrees of freedom, meaning the iPad itself can be moved in three axes or rotated in three axes. Currently, 3D design requires complicated or expensive equipment. “Our system is innovative, non-intrusive and simple,” Dorta said. “Beyond its obvious artistic applications, Hyve-3D clearly has industrial applications in a wide range of fields, from industrial or architectural design to engineering, medical 3D applications, game design animation and movie making. My team is looking forward to taking the product to market and discovering what people do with it.”

Univalor, the university’s technology commercialization unit, is supporting the market launch of the system.


Attosecond Timing Tools: Catching Chemistry in Motion

SLAC researchers have developed a laser-timing system that could lead to X-ray snapshots fast enough to reveal the triggers of chemical and material reactions.

"Previously, we could see a chemical bond before it's broken and after it's broken," said Ryan Coffee, an LCLS scientist whose team developed this system. "With this tool, we can watch the bond while it is breaking and 'freeze-frame' it."

The success of most LCLS experiments relies on precise timing of the X-ray laser with another laser, a technique known as "pump-probe." Typically, light from an optical laser "pumps" or triggers a specific effect in a sample, and researchers vary the arrival of the X-ray laser pulses, which serve as the "probe" to capture images and other data that allow them to study the effects at different points in time.

Timing tools now in place at most LCLS experimental stations can measure the arrival time of the optical and X-ray laser pulses to an accuracy within 10 femtoseconds (a femtosecond is a quadrillionth of a second). The new pulse-measuring system, which is highlighted in the July 27 edition of Nature Photonics, builds upon the existing tools and pushes timing into attoseconds, or quintillionths (billion-billionths) of a second.

Nick Hartmann, an LCLS research associate and doctoral student at the University of Bern in Switzerland who is the lead author of the study detailing the system, said, "An X-ray laser with attosecond timing resolution would open up a new class of experiments on the natural time scale of electron motion."

The new system uses a high-resolution spectrograph, a type of camera that records the timing and wavelength of the probe laser pulses. The colorful patterns it displays represent the different wavelengths of light that passed, at slightly different times, through a thin sample of silicon nitride.

This material experiences a cascading reaction in its electrons when it is struck by an X-ray pulse. This effect leaves a brief imprint in the way light passes through the sample, sort of like a temporary interruption of vision following a camera's flash.

This X-ray-caused effect shows up in the way the light from the other laser pulse passes through the silicon nitride – it is seen as a brief dip in the amount of light recorded by the spectrograph, like the after-image of a camera flash. An image-analysis algorithm then precisely calculates, based on the recorded patterns, the relative arrival time of the X-ray pulses.
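The essence of that last step, locating a dip in a 1-D trace to sub-bin precision, can be sketched as follows (a toy version; the actual LCLS image analysis is more elaborate):

```python
import numpy as np

def dip_center(trace):
    """Sub-bin estimate of a transmission dip's position: the center of mass
    of how far each bin sits below the baseline (the trace maximum)."""
    depth = trace.max() - trace
    bins = np.arange(len(trace))
    return (bins * depth).sum() / depth.sum()

# Synthetic spectrograph trace: flat transmission with a Gaussian dip at 412.7.
bins = np.arange(1024)
trace = 1.0 - 0.3 * np.exp(-0.5 * ((bins - 412.7) / 8.0) ** 2)
estimate = dip_center(trace)   # recovers ~412.7, well below one-bin precision
```

Because wavelength maps onto arrival time in this spectral-encoding scheme, locating the dip to a small fraction of a bin is what pushes the timing resolution below the detector's pixel spacing.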


NASA's 'Flying Saucer' Air Brakes Ace Flight Test


A prototype inflatable braking system to land heavy payloads on Mars aced a debut test flight in June, but its supersonic parachute will need to be reshaped to better accommodate the turbulent airflow of rapid descent, NASA engineers said Friday. NASA’s Low-Density Supersonic Decelerator (LDSD) rocketed to an altitude of 190,000 feet after being carried into the stratosphere by a massive helium balloon.

The thin air and low pressure at that altitude are as close as engineers can come to simulating flight in Mars’ atmosphere. The idea of the experiment was to accelerate the braking system to four times the speed of sound, roughly the speed at which a spacecraft from Earth would hit the Martian atmosphere.

“Our main objective was to show that we can get this vehicle to altitude, that we can get it to conditions that the technologies will see when they actually fly at Mars,” LDSD project manager Mark Adler, with NASA’s Jet Propulsion Laboratory in Pasadena, Calif., told reporters at a press conference Friday.

Once in position, the vehicle inflated a doughnut-shaped air brake to increase its surface area and thus the amount of energy that could be dissipated by frictional heating during the fall through the atmosphere.
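The reasoning can be made concrete with the standard drag relation F = ½ ρ v² C_d A: because the area A of a circular cross-section grows with the square of the diameter, inflating the ring raises the braking force substantially. The numbers below are illustrative assumptions, not mission data:

```python
import math

def drag_force(rho, v, cd, diameter):
    """Aerodynamic drag F = 0.5 * rho * v^2 * Cd * A for a circular cross-section."""
    area = math.pi * (diameter / 2.0) ** 2
    return 0.5 * rho * v ** 2 * cd * area

# Illustrative comparison only: same speed and (assumed) drag coefficient,
# before and after inflating from roughly 15 ft (~4.6 m) to 20 ft (~6.1 m).
rho, v, cd = 0.01, 1360.0, 1.4   # assumed thin-air density, ~Mach 4 speed, blunt-body Cd
before = drag_force(rho, v, cd, 4.6)
after = drag_force(rho, v, cd, 6.1)
ratio = after / before           # = (6.1 / 4.6)^2, about 1.76x more drag
```

Whatever the exact coefficients, the ratio depends only on the diameters, which is why widening the vehicle is such an effective brake.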

That part of the test went better than expected, lead researcher Ian Clark, also with JPL, told reporters. The structure inflated quickly and uniformly and managed to maintain its 20-foot diameter shape with only about 0.8 inches of deflection, which for an inflated structure is “pretty remarkable,” Clark said. Problems began with the deployment of a supersonic parachute, which was quickly torn apart as it attempted to inflate while moving at 2,500 mph.


WiFi Backscatter: How to enable the Internet of Things without batteries


University of Washington engineers have designed a clever new communication system called Wi-Fi backscatter that uses ambient radio frequency signals as a power source for battery-free devices (such as temperature sensors or wearable technology) and also reuses the existing Wi-Fi infrastructure to provide Internet connectivity for these devices.

“If Internet of Things devices are going to take off, we must provide connectivity to the potentially billions of battery-free devices that will be embedded in everyday objects,” said Shyam Gollakota, a UW assistant professor of computer science and engineering.

“We now have the ability to enable Wi-Fi connectivity for devices while consuming orders of magnitude less power than what Wi-Fi typically requires.”

To supply power to the devices, the system uses an “ambient backscatter” scheme previously developed by the UW group, which allows two devices to communicate with each other by harvesting ambient radio, TV, and cellular transmissions, as KurzweilAI described last year.

The new research takes that a step further by also connecting each individual device to the Internet. But even low-power Wi-Fi consumes 1,000 to 10,000 times more power than can be harvested from these wireless signals. Instead, the new system uses an ultra-low-power “tag” with an antenna and circuitry that can talk to Wi-Fi-enabled laptops or smartphones while consuming negligible power (less than 10 microwatts).

These tags work by essentially “looking” for Wi-Fi signals moving between the router and a laptop or smartphone. The tags encode data in real time by either reflecting or not reflecting the Wi-Fi router’s signals, thus slightly changing the wireless signal. Wi-Fi-enabled devices like laptops and smartphones would detect these minute changes (by analyzing changes in reflected signals) and receive data from the tag.
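In effect, the tag performs on-off keying on top of the router's own transmissions. A toy decoder (a sketch under simplified assumptions, not the UW prototype) averages the received signal strength over each bit period and thresholds it:

```python
import numpy as np

def decode_backscatter(rssi, samples_per_bit):
    """Recover tag bits from a received signal-strength trace: average the
    samples within each bit period, then threshold at the midpoint between
    the 'reflecting' and 'not reflecting' levels."""
    n_bits = len(rssi) // samples_per_bit
    levels = rssi[: n_bits * samples_per_bit].reshape(n_bits, samples_per_bit).mean(axis=1)
    threshold = (levels.max() + levels.min()) / 2.0
    return (levels > threshold).astype(int)

# Simulate: baseline RSSI plus a small bump whenever the tag reflects.
rng = np.random.default_rng(1)
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
rssi = -40.0 + 0.5 * np.repeat(bits, 200) + rng.normal(0.0, 0.05, 1600)
decoded = decode_backscatter(rssi, 200)   # recovers the original bits
```

Averaging many samples per bit is what lets such a faint modulation survive noise, at the cost of the low data rates (about 1 kilobit per second) reported here.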

So far, the UW’s Wi-Fi backscatter tag has communicated with a Wi-Fi device at rates of 1 kilobit per second with about 2 meters between the devices. The researchers plan to extend the range to about 20 meters and have filed patents on the technology.

The “Internet of Things” would extend connectivity to perhaps billions of devices. Battery-free sensors could be embedded in everyday objects to help monitor and track everything from the structural safety of bridges to the health of your heart. For example, your smart watch could upload your workout data onto a Google spreadsheet.

Or sensors embedded around your home could track minute-by-minute temperature changes and send that information to your thermostat to help conserve energy.

The researchers will publish their results at the Association for Computing Machinery’s Special Interest Group on Data Communication‘s annual conference this month in Chicago. The team also plans to start a company based on the technology.

Philippe Andre's curator insight, August 7, 2:57 AM
A very clever use of Wi-Fi waves to let devices communicate with almost no energy of their own. If confirmed, this approach opens up possibilities for M2M connection under economical conditions. If the technology could ever do the same using the waves of mobile networks, it would open still more opportunities for rapid expansion of M2M in particular environments (without Wi-Fi but with very good mobile networks, as in some African countries, for example).

Scientists develop pioneering new spray-on solar cells


A team of scientists at the University of Sheffield is the first to fabricate perovskite solar cells using a spray-painting process – a discovery that could help cut the cost of solar electricity.

Experts from the University’s Department of Physics and Astronomy and Department of Chemical and Biological Engineering have previously used the spray-painting method to produce solar cells using organic semiconductors - but using perovskite is a major step forward.

Efficient organometal halide perovskite based photovoltaics were first demonstrated in 2012. They are now a very promising new material for solar cells as they combine high efficiency with low material costs. The spray-painting process wastes very little of the perovskite material and can be scaled to high volume manufacturing – similar to applying paint to cars and graphic printing.

Lead researcher Professor David Lidzey said: “There is a lot of excitement around perovskite-based photovoltaics. Remarkably, this class of material offers the potential to combine the high performance of mature solar cell technologies with the low embedded energy costs of production of organic photovoltaics.”

While most solar cells are manufactured using energy-intensive materials like silicon, perovskites by comparison require much less energy to make. By spray-painting the perovskite layer in air, the team hopes the overall energy used to make a solar cell can be reduced further. Prof. Lidzey said: “The best certified efficiencies from organic solar cells are around 10 per cent.

“Perovskite cells now have efficiencies of up to 19 per cent. This is not so far behind that of silicon at 25 per cent - the material that dominates the world-wide solar market.” He added: “The perovskite devices we have created still use similar structures to organic cells. What we have done is replace the key light absorbing layer - the organic layer - with a spray-painted perovskite.

“Using a perovskite absorber instead of an organic absorber gives a significant boost in terms of efficiency.” The Sheffield team found that by spray-painting the perovskite they could make prototype solar cells with efficiency of up to 11 per cent.

Professor Lidzey said: “This study advances existing work where the perovskite layer has been deposited from solution using laboratory scale techniques. It’s a significant step towards efficient, low-cost solar cell devices made using high volume roll-to-roll processing methods.”

Solar power is becoming an increasingly important component of the world-wide renewables energy market and continues to grow at a remarkable rate despite the difficult economic environment.

Professor Lidzey said: “I believe that new thin-film photovoltaic technologies are going to have an important role to play in driving the uptake of solar energy, and that perovskite-based cells are emerging as likely thin-film candidates.”


The Visual Microphone - Passive Recovery of Sound from Just Visual Information


An unlikely method of surveillance has emerged in the form of crisp packets, after researchers reconstructed speech by observing tiny vibrations using a "visual microphone".

Researchers at the Massachusetts Institute of Technology (MIT), Microsoft and Adobe achieved this by developing an algorithm that can recreate sound based on silent video footage of nearby objects.

As well as packets of crisps, the algorithm can record sound through the vibrations of other items, like aluminum foil, glasses of water and house plants.

"When sound hits an object, it causes the object to vibrate," said Abe Davis, an electrical engineering graduate student at MIT and first author of a paper that details the findings. "The motion of this vibration creates a very subtle visual signal that's usually invisible to the naked eye.

"In our work, we show how using only a video of the object and a suitable processing algorithm, we can extract these minute vibrations and partially recover the sounds that produce them - letting us turn everyday visible objects into visual microphones."
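As a crude caricature of that idea (my own sketch; the actual work measures sub-pixel motions with a complex steerable pyramid and careful denoising), a stack of video frames can be collapsed into a 1-D vibration signal:

```python
import numpy as np

def motion_signal(frames):
    """Collapse a video (n_frames, h, w) into a 1-D vibration proxy: the mean
    frame-to-frame intensity difference, with the DC offset removed."""
    frames = np.asarray(frames, dtype=float)
    sig = (frames[1:] - frames[:-1]).mean(axis=(1, 2))
    return sig - sig.mean()

# Fake footage of an object vibrating at a fixed rate: brightness oscillates
# at 0.05 cycles per frame across a tiny 8x8 patch.
t = np.arange(300)
frames = 100.0 + 2.0 * np.sin(2 * np.pi * 0.05 * t)[:, None, None] * np.ones((300, 8, 8))
signal = motion_signal(frames)
```

The recovered signal oscillates at the same rate as the simulated vibration; in the real system, a high-speed camera and far subtler motion analysis turn such traces back into intelligible audio.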

Through this process, Davis and his team were first able to decipher the tune of "Mary Had a Little Lamb" through vibrations in the leaves of a plant.

The sound-processing algorithm was then tested on a bag of crisps from behind soundproof glass. Remarkably, the visual microphone was able to pick up the words of the same nursery rhyme. The results of the experiment have already led privacy advocates to warn about the implications of how this technology may be used.


NASA tests ‘impossible’ no-fuel quantum space engine – and it actually seems to work

NASA didn't set out to confirm the feasibility of a seemingly impossible fuel-less thruster design, but it seems they did exactly that.

NASA has been testing new space travel technologies throughout its entire history, but the results of its latest experiment may be the most surprising yet, if they can be confirmed and hold up. At a conference in Cleveland, Ohio, scientists with NASA's Eagleworks Laboratories in Houston, Texas, presented a paper indicating they had achieved a small amount of thrust from a container that had no traditional fuels, only microwaves bouncing around inside of it. If the results can be replicated reliably and scaled up (a big "if," since NASA only produced them on a very small scale over a two-day period), they could ultimately result in ultra-lightweight, ultra-fast spacecraft that could carry humans to Mars in weeks instead of months, and to the nearest star system outside our own (Proxima Centauri) in about 30 years.

The type of container NASA tested was based on a model for a new space engine that doesn't use weighty liquid propellant or nuclear reactors, called a Cannae Drive. The idea is that microwaves bouncing from end-to-end of a specially designed, unevenly-shaped container can create a difference in radiation pressure, causing thrust to be exerted toward the larger end of the container. A similar type of technology called an EmDrive has been demonstrated to work in small scale trials by Chinese and Argentine scientists.

While the amount of thrust generated in these NASA tests was lower than in previous trials (between 30 and 50 micronewtons, far less than even the weight of an iPhone, as Nova points out), the fact that any thrust whatsoever is generated without an onboard source of fuel seems to violate the conservation of momentum, a bedrock of the laws of physics.

Most impressively, the NASA team specifically built two Cannae Drives, including one that was designed to fail, and instead it worked. As the scientists write in their paper abstract: "thrust was observed on both test articles, even though one of the test articles was designed with the expectation that it would not produce thrust." That suggests the drive is "producing a force that is not attributable to any classical electromagnetic phenomenon," the scientists write. It may instead be interacting with the quantum vacuum — the lowest energetic state possible — but the scientists don't have much evidence to support this idea yet.

There are many reasons to be skeptical: the inventor of the Cannae Drive, Guido Fetta, has only a Bachelor’s Degree in Chemical Engineering and is operating his company Cannae as a for-profit venture. Still, the fact that such results were produced by NASA scientists is promising and should warrant further investigation.


Vision-correcting display makes reading glasses so yesterday


What if computer screens had glasses instead of the people staring at the monitors? That concept is not too far afield from technology being developed by UC Berkeley computer and vision scientists.

The researchers are developing computer algorithms to compensate for an individual’s visual impairment, and creating vision-correcting displays that enable users to see text and images clearly without wearing eyeglasses or contact lenses. The technology could potentially help hundreds of millions of people who currently need corrective lenses to use their smartphones, tablets and computers. One common problem, for example, is presbyopia, a type of farsightedness in which the ability to focus on nearby objects is gradually diminished as the aging eyes’ lenses lose elasticity.

More importantly, the displays could one day aid people with more complex visual problems, known as high order aberrations, which cannot be corrected by eyeglasses, said Brian Barsky, UC Berkeley professor of computer science and vision science, and affiliate professor of optometry.

“We now live in a world where displays are ubiquitous, and being able to interact with displays is taken for granted,” said Barsky, who is leading this project. “People with higher order aberrations often have irregularities in the shape of the cornea, and this irregular shape makes it very difficult to have a contact lens that will fit. In some cases, this can be a barrier to holding certain jobs because many workers need to look at a screen as part of their work. This research could transform their lives, and I am passionate about that potential.”

“The significance of this project is that, instead of relying on optics to correct your vision, we use computation,” said lead author Fu-Chung Huang, who worked on this project as part of his computer science Ph.D. dissertation at UC Berkeley under the supervision of Barsky and Austin Roorda, professor of vision science and optometry. “This is a very different class of correction, and it is non-intrusive.”

The algorithm, which was developed at UC Berkeley, works by adjusting the intensity of each direction of light that emanates from a single pixel in an image based upon a user’s specific visual impairment. In a process called deconvolution, the light passes through the pinhole array in such a way that the user will perceive a sharp image.
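The published system pairs deconvolution with a pinhole-array light-field display, but the deconvolution ingredient on its own can be sketched with a standard Wiener filter (an illustrative stand-in, not the paper's algorithm): pre-sharpen the image so that a known blur, standing in here for the eye's aberration, largely cancels out.

```python
import numpy as np

def wiener_precorrect(image, psf, noise=1e-2):
    """Precorrect an image against a known blur kernel (PSF): after the blur
    is applied (e.g., by a defocused eye), the content is largely restored."""
    H = np.fft.fft2(psf, s=image.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + noise)   # regularized inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

def blur(image, psf):
    """Apply the same blur model (circular convolution via the FFT)."""
    H = np.fft.fft2(psf, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

# A smooth test image and a small binomial blur kernel.
x = np.linspace(0, 4 * np.pi, 64)
img = np.sin(x)[None, :] + np.cos(x)[:, None]
psf = np.outer([1, 2, 1], [1, 2, 1]) / 16.0

seen_plain = blur(img, psf)                              # what an aberrated eye sees
seen_corrected = blur(wiener_precorrect(img, psf), psf)  # closer to the original
```

The `noise` term prevents division by near-zero frequency responses; choosing it is the same fidelity-versus-artifact trade-off the Berkeley display must manage, which is why they add the light-field hardware rather than rely on deconvolution alone.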

Rescooped by Dr. Stefan Gruenwald from networks and network weaving!

Detecting Communities Based on Network Topology


Communities are groups that are densely connected among their members, and sparsely connected with the rest of the network. Community structure can reveal abundant hidden information about complex networks that is not easy to detect by simple observation. There are many large-scale complex networks (systems) in the real world whose structure is not fully understood. A great deal of research has been carried out to uncover the structures of these real world networks, to improve the ability to manage, maintain, renovate and control them. With the help of varied approaches, it is possible to shed light on the general structure of these networks, and further understand their function.

Network science methods have been used in various settings, [1, 2] including social, [3, 4] information, [5] transportation, [6] energy, [7] ecological, [8] disease, [9] and biological networks. [10, 11, 12, 13] In most of these cases we can find clear community structures, which are usually associated with specific functions. However, to date, most detection methods have limitations, and there is still a lot of room to develop more general approaches.

At present, most methods focus on detecting node communities. One popular approach is based on optimizing the modularity Q [14, 15, 52, 56] of a sub-network.
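The modularity Q being optimized here can be computed directly from Newman’s formula: for each community, the fraction of edges falling inside it, minus the fraction expected if edges were placed at random given the node degrees. A minimal pure-Python sketch, with a hypothetical toy graph of two triangles joined by one bridge:

```python
def modularity(edges, communities):
    """Newman's modularity Q = sum_c (e_c/m - (d_c/(2m))**2), where e_c
    counts edges internal to community c, d_c is the total degree of its
    nodes, and m is the number of edges in the whole graph."""
    m = len(edges)
    node2comm = {n: i for i, c in enumerate(communities) for n in c}
    e = [0] * len(communities)  # internal edge count per community
    d = [0] * len(communities)  # total degree per community
    for u, v in edges:
        cu, cv = node2comm[u], node2comm[v]
        d[cu] += 1
        d[cv] += 1
        if cu == cv:
            e[cu] += 1
    return sum(ec / m - (dc / (2.0 * m)) ** 2 for ec, dc in zip(e, d))

# Two triangles joined by a single bridging edge: an obvious
# two-community graph, so the partition below scores well.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
Q = modularity(edges, [{0, 1, 2}, {3, 4, 5}])  # Q = 5/14, about 0.357
```

Modularity-optimizing detectors search over partitions for the one that maximizes this score; Q near 0 means the partition is no better than random, while values around 0.3 and above typically indicate genuine community structure.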

Some methods [13, 14, 29, 34, 38, 39, 40] force every node to be assigned to a single community. This assumption doesn't always reflect real-world networks, where several overlapping communities can co-exist. For example, in social networks, a person may have family relationship circles, job circles, friend circles, social circles, hobby circles and so on. Algorithms that can discover overlapping communities [16, 17, 18, 19, 20, 21, 22, 23] have been developed, and recently, methods to detect link communities [20, 24, 25] have been presented.

The concept of a link community is useful for discovering overlapping communities, as edges are more likely to have unique identities than nodes, which instead tend to have multiple identities. In addition, statistical [54], information-theoretic [35, 48, 53], and synchronization and dynamical clustering approaches [49, 50, 58, 59, 60] have also been developed to detect communities.
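The first step of link-community methods (in the spirit of Ahn, Bagrow and Lehmann's line-graph clustering) is to score how similar two edges are when they share an endpoint; edges are then clustered on that similarity, and a node inherits every community its edges belong to. A minimal sketch of that similarity step, using the Jaccard index of the inclusive neighbourhoods of the two non-shared endpoints — the function name and the toy graph are illustrative assumptions:

```python
from itertools import combinations

def edge_similarity(edges):
    """Jaccard similarity between pairs of edges sharing exactly one
    endpoint: S(e_ik, e_jk) = |N(i) & N(j)| / |N(i) | N(j)|, where
    N(x) is the inclusive neighbourhood {x} plus x's neighbours."""
    nbr = {}
    for u, v in edges:
        nbr.setdefault(u, {u}).add(v)
        nbr.setdefault(v, {v}).add(u)
    sims = {}
    for e1, e2 in combinations(edges, 2):
        shared = set(e1) & set(e2)
        if len(shared) == 1:  # edges meet at exactly one node
            i = (set(e1) - shared).pop()
            j = (set(e2) - shared).pop()
            sims[(e1, e2)] = len(nbr[i] & nbr[j]) / len(nbr[i] | nbr[j])
    return sims

# Toy graph: a triangle 0-1-2 with a pendant node 3 attached to 0.
sims = edge_similarity([(0, 1), (0, 2), (1, 2), (0, 3)])
```

Clustering edges by this score groups the triangle's three edges together while the pendant edge (0, 3) stays loosely attached, so node 0 can end up in more than one link community — exactly the overlap that node-partition methods cannot express.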

Via Ashish Umre, June Holley
Ivon Prefontaine's curator insight, July 26, 6:54 PM

Community is a more complex and organic form of organizing than a team. Teams are inherently hierarchical, with predetermined goals; communities are fluid, and their goals are continuously negotiated. Schools and classrooms are better thought of as communities with overlapping qualities and permeable boundaries with other communities.

Eli Levine's curator insight, July 29, 6:42 PM

A useful tool for policy making, because it helps identify communities and how they interact to form super-communities.


The essence of mapping the polity and the public, socially, economically, technologically, and infrastructurally.


Think about it.