Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Rescooped by Dr. Stefan Gruenwald from Fragments of Science

Building living, breathing supercomputers


The substance that provides energy to all the cells in our bodies, adenosine triphosphate (ATP), may also be able to power the next generation of supercomputers. That is what an international team of researchers led by Prof. Nicolau, Chair of the Department of Bioengineering at McGill, believes. They published an article on the subject earlier this week in PNAS, in which they describe a model biological computer they have created that can process information quickly and accurately using parallel networks, much as massive electronic supercomputers do.


Except that the model bio supercomputer they have created is a whole lot smaller than current supercomputers, uses much less energy, and uses proteins present in all living cells to function. "We've managed to create a very complex network in a very small area," says Dan Nicolau, Sr. with a laugh. He began working on the idea with his son, Dan Jr., more than a decade ago and was then joined by colleagues from Germany, Sweden and The Netherlands, some 7 years ago. "This started as a back of an envelope idea, after too much rum I think, with drawings of what looked like small worms exploring mazes."


The model bio-supercomputer that the Nicolaus (father and son) and their colleagues have created came about through a combination of geometrical modelling and engineering know-how at the nanoscale. It is a first step in showing that this kind of biological supercomputer can actually work.


The circuit the researchers have created looks a bit like a road map of a busy and very organized city as seen from a plane. Just as in a city, cars and trucks of different sizes, powered by motors of different kinds, navigate through channels that have been created for them, consuming the fuel they need to keep moving.


Via Mariaschnee
Scooped by Dr. Stefan Gruenwald

Human vs. Machine: RNA paper based on a computer game, authorship creates identity crisis


A journal published a paper today that reveals a set of folding constraints in the design of RNA molecules. So far, so normal.

Most of the data for the study come from an online game that crowdsources solutions from thousands of nonexpert players—unusual but not unique. But the lead authors of the paper are the players themselves. Now that is a first. And there's a twist: The journal nearly delayed publication because of "ethical" concerns about authors using only their game names.


The game is called Eterna, and it made a big splash in 2014 with a paper in the Proceedings of the National Academy of Sciences that had 37,000 players as co-authors. The goal was to see whether nonexpert humans can do better than computer algorithms at designing RNA sequences that fold into particular shapes. And indeed, the humans won, even after the computer algorithms were endowed with insights from the human folders. When it comes to the biophysics of RNA folding, John Henry still beats the machine.


That 2014 study was led by card-carrying scientists Adrien Treuille and Rhiju Das, biophysicists at Carnegie Mellon University in Pittsburgh, Pennsylvania, and Stanford University in Palo Alto, California, respectively. The two researchers created the game in 2009. (They both cut their teeth on scientific game design as postdocs in the lab of David Baker at the University of Washington, Seattle, where the blockbuster game FoldIt was conceived.) Since then they have massively scaled up the process and hooked the game to a real-world automated lab that actually tests the folding predictions made by players against the 3D structure of the RNA molecules. They call it the Eterna Massive Open Laboratory.


The newest paper shows how far the effort has come. Among the game's thousands of RNA design "puzzles," there seems to be a small set that is particularly difficult. Among the most challenging structural features to figure out is symmetry, where an RNA strand folds into two or more identically shaped loops. The Eterna game includes an interface for players to propose hypotheses about how particular RNA structures will or will not fold into particular shapes. Those hypotheses were distilled into a set of "designability" rules. The question was: Do only human designers struggle with thorny design problems, or do computer simulations struggle too?


The answer is that the computers struggled just as much as the people. Researchers report that three of the best existing computer algorithms, running on a supercomputer at Stanford, struggled to solve the very same RNA design problems as the humans. The result shows that the human "designability" rules do indeed correspond to problems that are hard not just for human brains but also for computers, the team reports today in the Journal of Molecular Biology. In fact, the hardest puzzles that could be solved by experienced Eterna players were unsolvable by the computer even after days of crunching. And to help improve the algorithms, computer scientists now have a set of benchmarks—the Eterna100—to gauge the design difficulty of RNA structures.

Scooped by Dr. Stefan Gruenwald

A Toolkit for Silicon-based Quantum Computing

Before quantum computing becomes practical, researchers will need to find a reliable way to store information as quantum bits, or qubits. Researchers are making significant progress toward the creation of electronic devices based on qubits made of single ions implanted in silicon, one of the most practical of all materials.


“Bit” is a contraction of “binary digit,” but unlike a classical bit, which is plain-vanilla binary with a value of either 0 or 1, a quantum bit, or qubit — the theoretical basis of quantum computing — holds both 0 and 1 in a superposed state until it is measured.


A vast computational space can be created with relatively few quantum-mechanically entangled qubits, and the measurement of one qubit can instantly resolve an intricate calculation when all the entangled qubits are “collapsed” to a specific value by the measurement.
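To make that counting concrete, here is a toy numerical illustration (plain Python/NumPy, nothing specific to the silicon devices described here): n qubits are represented by 2^n complex amplitudes, and measurement collapses an entangled register to one of its perfectly correlated outcomes.

```python
import numpy as np

rng = np.random.default_rng()

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = state[-1] = 1 / np.sqrt(2)   # entangled state (|000> + |111>) / sqrt(2)

# The register needs 2**n amplitudes -- the "vast computational space"
print("amplitudes stored for", n, "qubits:", state.size)

# Measurement picks one basis state with probability |amplitude|^2 and collapses
# the register to it, so the three qubits always come out correlated: 000 or 111.
probs = np.abs(state) ** 2
outcome = rng.choice(2 ** n, p=probs)
print("measured basis state:", format(outcome, f"0{n}b"))
```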


So how does one make and measure a qubit? The problem has engaged scientists for years. Many arrangements have been proposed and some demonstrated, each with its advantages and disadvantages, including tricky schemes involving superconducting tunnel junctions, quantum dots, neutral atoms in optical lattices, trapped ions probed by lasers, and so on.


In the long run, however, qubits based on individual dopant atoms implanted in silicon may have the edge. The materials and methods of silicon-chip manufacturing are familiar and, when applied to quantum-computer devices, have the potential for easy scale-up.


“There are three pillars to the development program my colleagues and I have been following,” says Thomas Schenkel of Berkeley Lab’s Accelerator and Fusion Research Division. “One is the theory of quantum measurement in the devices we build, led by Professor Birgitta Whaley from the Department of Chemistry at UC Berkeley; another is the fabrication of these devices, headed by myself and Professor Jeff Bokor from UC Berkeley’s Department of Electrical Engineering and Computer Science; and the third is to actually measure quantum states in these devices, an effort led by Professor Steve Lyon from Princeton’s Department of Electrical Engineering. Of course, things don’t necessarily happen in that order.”

Scooped by Dr. Stefan Gruenwald

Switchable material could enable new memory chips


Two MIT researchers have developed a thin-film material whose phase and electrical properties can be switched between metallic and semiconducting simply by applying a small voltage. The material then stays in its new configuration until switched back by another voltage.


The discovery could pave the way for a new kind of “nonvolatile” computer memory chip that retains information when the power is switched off, and for energy conversion and catalytic applications.

The findings, reported in the journal Nano Letters in a paper by MIT materials science graduate student Qiyang Lu and associate professor Bilge Yildiz, involve a thin-film material called a strontium cobaltite, or SrCoOx.


Usually, Yildiz says, the structural phase of a material is controlled by its composition, temperature, and pressure. “Here for the first time,” she says, “we demonstrate that electrical bias can induce a phase transition in the material. And in fact we achieved this by changing the oxygen content in SrCoOx.”


“It has two different structures that depend on how many oxygen atoms per unit cell it contains, and these two structures have quite different properties,” Lu explains. One of these configurations of the molecular structure is called perovskite, and the other is called brownmillerite. When more oxygen is present, it forms the tightly-enclosed, cage-like crystal structure of perovskite, whereas a lower concentration of oxygen produces the more open structure of brownmillerite.


The two forms have very different chemical, electrical, magnetic, and physical properties, and Lu and Yildiz found that the material can be flipped between the two forms with the application of a very tiny amount of voltage — just 30 millivolts (0.03 volts). And, once changed, the new configuration remains stable until it is flipped back by a second application of voltage.


Strontium cobaltites are just one example of a class of materials known as transition metal oxides, which is considered promising for a variety of applications including electrodes in fuel cells, membranes that allow oxygen to pass through for gas separation, and electronic devices such as memristors — a form of nonvolatile, ultrafast, and energy-efficient memory device. The ability to trigger such a phase change through the use of just a tiny voltage could open up many uses for these materials, the researchers say.


Previous work with strontium cobaltites relied on changes in the oxygen concentration in the surrounding gas atmosphere to control which of the two forms the material would take, but that is inherently a much slower and more difficult process to control, Lu says. “So our idea was, don’t change the atmosphere, just apply a voltage.”


“Voltage modifies the effective oxygen pressure that the material faces,” Yildiz adds. To make that possible, the researchers deposited a very thin film of the material (the brownmillerite phase) onto a substrate, for which they used yttrium-stabilized zirconia.


In that setup, applying a voltage drives oxygen atoms into the material. Applying the opposite voltage has the reverse effect. To observe and demonstrate that the material did indeed go through this phase transition when the voltage was applied, the team used a technique called in-situ X-ray diffraction at MIT’s Center for Materials Science and Engineering.

Scooped by Dr. Stefan Gruenwald

AI: Deep-learning algorithm predicts photos’ memorability at ‘near-human’ levels


Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a deep-learning algorithm that can predict how memorable or forgettable an image is almost as accurately as humans, and they plan to turn it into an app that tweaks photos to make them more memorable. For each photo, the “MemNet” algorithm also creates a “heat map” (a color-coded overlay) that identifies exactly which parts of the image are most memorable. You can try it out online by uploading your own photos to the project’s “LaMem” dataset.


The research is an extension of a similar algorithm the team developed for facial memorability. The team fed its algorithm tens of thousands of images from several different datasets developed at CSAIL, including LaMem and the scene-oriented SUN and Places. The images had each received a “memorability score” based on the ability of human subjects to remember them in online experiments.
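The paper's MemNet architecture and weights are not reproduced here; the sketch below (PyTorch, with a generic ResNet-18 standing in for the real backbone, untrained weights, and dataset loading omitted) only illustrates the training recipe the paragraph describes: regress each image's human-assigned memorability score.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in backbone: a CNN whose classifier head is replaced by a single sigmoid
# output, interpreted as a memorability score in [0, 1]. Swap in ImageNet weights
# (and the LaMem images/scores) for real use.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Sequential(nn.Linear(backbone.fc.in_features, 1), nn.Sigmoid())

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def train_step(images, scores):
    """One gradient step on a batch of images (N, 3, 224, 224) and human scores (N,)."""
    optimizer.zero_grad()
    preds = backbone(images).squeeze(1)
    loss = criterion(preds, scores)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with random tensors standing in for a LaMem batch
loss = train_step(torch.randn(8, 3, 224, 224), torch.rand(8))
print(f"batch loss: {loss:.4f}")
```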


The team then pitted its algorithm against human subjects by having the model predict how memorable a group of people would find a new, never-before-seen image. It performed 30 percent better than existing algorithms and was within a few percentage points of the average human performance. By emphasizing different regions, the algorithm can also potentially increase the image's memorability.


“CSAIL researchers have done such manipulations with faces, but I’m impressed that they have been able to extend it to generic images,” says Alexei Efros, an associate professor of computer science at the University of California at Berkeley. “While you can somewhat easily change the appearance of a face by, say, making it more ‘smiley,’ it is significantly harder to generalize about all image types.”


LaMem is the world’s largest image-memorability dataset. With 60,000 images, each annotated with detailed metadata about qualities such as popularity and emotional impact, LaMem is the team’s effort to spur further research on what they say has often been an under-studied topic in computer vision.


Team members picture a variety of potential applications, from improving the content of ads and social media posts, to developing more effective teaching resources, to creating your own personal “health-assistant” device to help you remember things. The team next plans to try to update the system to be able to predict the memory of a specific person, as well as to better tailor it for individual “expert industries” such as retail clothing and logo design.


The work is supported by grants from the National Science Foundation, as well as the McGovern Institute Neurotechnology Program, the MIT Big Data Initiative at CSAIL, research awards from Google and Xerox, and a hardware donation from Nvidia.

Scooped by Dr. Stefan Gruenwald

Complexity Theory: Major Advance Reveals the Limits of Computation

A major advance reveals deep connections between the classes of problems that computers can—and can’t—possibly do.


For more than 40 years, researchers had been trying to find a better way to compare two arbitrary strings of characters, such as the long strings of chemical letters within DNA molecules. The most widely used algorithm is slow and not all that clever: It proceeds step-by-step down the two lists, comparing values at each step. If a better method to calculate this “edit distance” could be found, researchers would be able to quickly compare full genomes or large data sets, and computer scientists would have a powerful new tool with which they could attempt to solve additional problems in the field.
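For reference, the "slow and not all that clever" method being described is the standard dynamic-programming algorithm, which fills an (m+1) by (n+1) table and therefore takes time proportional to the product of the two string lengths — the quadratic cost the hardness result says cannot be substantially beaten. A minimal version:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance: O(len(a) * len(b)) time."""
    m, n = len(a), len(b)
    # dp[i][j] = minimum edits to turn a[:i] into b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[m][n]

print(edit_distance("GATTACA", "GCATGCU"))  # 4
```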


In a paper presented at the ACM Symposium on Theory of Computing, two researchers from the Massachusetts Institute of Technology put forth a mathematical proof that the current best algorithm was “optimal”—in other words, that finding a more efficient way to compute edit distance was mathematically impossible. The Boston Globe celebrated the hometown researchers’ achievement with a headline that read “For 40 Years, Computer Scientists Looked for a Solution That Doesn’t Exist.”


But researchers aren’t quite ready to record the time of death. One significant loophole remains. The impossibility result is only true if another, famously unproven statement called the strong exponential time hypothesis (SETH) is also true. Most computational complexity researchers assume that this is the case—including Piotr Indyk and Artūrs Bačkurs of MIT, who published the edit-distance finding—but SETH’s validity is still an open question. This makes the article about the edit-distance problem seem like a mathematical version of the legendary report of Mark Twain’s death: greatly exaggerated.


The media’s confusion about edit distance reflects a murkiness in the depths of complexity theory itself, where mathematicians and computer scientists attempt to map out what is and is not feasible to compute as though they were deep-sea explorers charting the bottom of an ocean trench. This algorithmic terrain is just as vast—and poorly understood—as the real seafloor, said Russell Impagliazzo, a complexity theorist who first formulated the exponential-time hypothesis with Ramamohan Paturi in 1999. “The analogy is a good one,” he said. “The oceans are where computational hardness is. What we’re attempting to do is use finer tools to measure the depth of the ocean in different places.”

Scooped by Dr. Stefan Gruenwald

Skyscraper-style carbon-nanotube chip design to ‘boost electronic performance by factor of a thousand’


Researchers at Stanford and three other universities are creating a revolutionary new skyscraper-like high-rise architecture for computing based on carbon nanotube materials instead of silicon. In Rebooting Computing, a special issue (in press) of the IEEE Computer journal, the team describes its new approach as “Nano-Engineered Computing Systems Technology,” or N3XT.


Suburban-style chip layouts create long commutes and regular traffic jams in electronic circuits, wasting time and energy, they note. N3XT will break data bottlenecks by integrating processors and memory like floors in a skyscraper and by connecting these components with millions of “vias,” which play the role of tiny electronic elevators.

The N3XT high-rise approach will move more data, much faster, using far less energy, than would be possible using low-rise circuits, according to the researchers.


Stanford researchers including Associate Professor Subhasish Mitra and Professor H.-S. Philip Wong have “assembled a group of top thinkers and advanced technologies to create a platform that can meet the computing demands of the future,” Mitra says.

“When you combine higher speed with lower energy use, N3XT systems outperform conventional approaches by a factor of a thousand,” Wong claims.


Engineers have previously tried to stack silicon chips but with limited success, the researchers suggest. Fabricating a silicon chip requires temperatures close to 1,800 degrees Fahrenheit, making it extremely challenging to build a silicon chip atop another without damaging the first layer. The current approach to what are called 3-D, or stacked, chips is to construct two silicon chips separately, then stack them and connect them with a few thousand wires. But conventional 3-D silicon chips are still prone to traffic jams and it takes a lot of energy to push data through what are a relatively few connecting wires.


The N3XT team is taking a radically different approach: building layers of processors and memory directly atop one another, connected by millions of vias that can move more data over shorter distances than traditional wires, using less energy, and immersing computation and memory storage into an electronic super-device.


The key is the use of non-silicon materials that can be fabricated at much lower temperatures than silicon, so that processors can be built on top of memory without the new layer damaging the layer below. As in IBM’s recent chip breakthrough (see “Method to replace silicon with carbon nanotubes developed by IBM Research“), N3XT chips are based on carbon nanotube transistors.


Transistors are fundamental units of a computer processor, the tiny on-off switches that create digital zeroes and ones. CNTs are faster and more energy-efficient than silicon processors, and much thinner. Moreover, in the N3XT architecture, they can be fabricated and placed over and below other layers of memory.

Rescooped by Dr. Stefan Gruenwald from Science And Wonder

How to control information leaks from smartphone apps


A Northeastern University research team has found “extensive” leakage of users’ information — device and user identifiers, locations, and passwords — into network traffic from apps on mobile devices, including iOS, Android, and Windows phones. The researchers have also devised a way to stop the flow.


David Choffnes, an assistant professor in the College of Computer and Information Science, and his colleagues developed a simple, efficient cloud-based system called ReCon. It detects leaks of “personally identifiable information,” alerts users to those breaches, and enables users to control the leaks by specifying what information they want blocked and from whom.


The team’s study followed 31 mobile device users with iOS devices and Android devices who used ReCon for a period of one week to 101 days and then monitored their personal leakages through a ReCon secure webpage. The results were alarming. “Depressingly, even in our small user study we found 165 cases of credentials being leaked in plaintext,” the researchers wrote.


Of the top 100 apps in each operating system’s app store that participants were using, more than 50 percent leaked device identifiers, more than 14 percent leaked actual names or other user identifiers, 14–26 percent leaked locations, and three leaked passwords in plaintext. In addition to those top apps, the study found similar password leaks from 10 additional apps that participants had installed and used.


The password-leaking apps included MapMyRun, the language app Duolingo, and the Indian digital music app Gaana. All three developers have since fixed the leaks. Several other apps continue to send plaintext passwords into traffic, including a popular dating app.


“What’s really troubling is that we even see significant numbers of apps sending your password, in plaintext readable form, when you log in,” says Choffnes. In a public-WiFi setting, that means anyone running “some pretty simple software” could nab it.


Via LilyGiraud
Scooped by Dr. Stefan Gruenwald
Scoop.it!

This app lets you target autonomous video drones with facial recognition

This app lets you target autonomous video drones with facial recognition | Amazing Science | Scoop.it

One small step for selfies, one giant leap for cheap deep-learning autonomous video-surveillance drones.


Robotics company Neurala has combined facial-recognition and drone-control mobile software in an iOS/Android app called “Selfie Dronie” that enables low-cost Parrot Bebop and Bebop 2 drones to take hands-free videos and follow a subject autonomously. To create a video, you simply select the person or object and you’re done. The drone then flies an arc around the subject to take a video selfie (it moves with the person). Or it zooms upward for a dramatic aerial shot in “dronie” mode.


Basically, the app replaces remote-control gadgets and GPS-based control from cell phones. Instead, once the target person is designated, the drone operates autonomously. Neurala explains that its Neurala Intelligence Engine (NIE) can immediately learn to recognize an object using an ordinary camera. Then, as the object moves, Neurala’s deep learning algorithms learn more about the object in real time and in different environments, and by comparing these observations to other things it has learned in the past — going beyond current deep-learning visual processing, which requires training first.


Neurala says NASA funded Neurala in October to commercialize its autonomous navigation, object recognition, and obstacle avoidance software developed for planetary exploration robots such as the Curiosity rover, and to apply it in real-world situations on Earth for self-driving cars, home robots, and autonomous drones.


Neurala says what makes its software unique is its use of deep learning and passive sensors, instead of “expensive and power-hungry active systems,” such as radar and LIDAR, used in most prototype self-driving vehicles.


Of course, it’s a small step from this technology to surveillance drones with facial recognition and autonomous weaponized unmanned aerial vehicles (see “The proposed ban on offensive autonomous weapons is unrealistic and dangerous” and “Why we really should ban autonomous weapons: a response“), especially given the recent news in Paris and Brussels and current terrorist threats directed to the U.S. and other countries.


Scooped by Dr. Stefan Gruenwald

Google open-sources its TensorFlow machine learning system


Google announced today that it will make its new second-generation “TensorFlow” machine-learning system open source. That means programmers can now achieve some of what Google engineers have done, using TensorFlow — from speech recognition in the Google app, to Smart Reply in Inbox, to search in Google Photos, to reading a sign in a foreign language using Google Translate.


Google says TensorFlow is a highly scalable machine learning system — it can run on a single smartphone or across thousands of computers in datacenters. The idea is to accelerate research on machine learning, “or wherever researchers are trying to make sense of very complex data — everything from protein folding to crunching astronomy data.”
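As a flavour of what using the library looks like, here is a minimal sketch with the current open-source TensorFlow Python API (which has evolved considerably since this 2015 announcement): define a small neural network, fit it to toy data, and run it — the same code can scale from a laptop to a distributed cluster.

```python
import numpy as np
import tensorflow as tf

# Toy task: classify whether four random inputs sum to more than 2
x = np.random.rand(256, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)

print(model.predict(x[:3], verbose=0))   # predicted probabilities for three examples
```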


This blog post by Jeff Dean, Senior Google Fellow, and Rajat Monga, Technical Lead, provides a technical overview. “Our deep learning researchers all use TensorFlow in their experiments. Our engineers use it to infuse Google Search with signals derived from deep neural networks, and to power the magic features of tomorrow,” they note.


Machine learning is still in its infancy — computers today still can’t do what a 4-year-old can do effortlessly, like knowing the name of a dinosaur after seeing only a couple of examples, or understanding that “I saw the Grand Canyon flying to Chicago” doesn’t mean the canyon is hurtling over the city. There is a lot of work ahead of us. But TensorFlow is a good start, and everyone can be in it together.

Rescooped by Dr. Stefan Gruenwald from Research Workshop

World's Largest Natural Sound Library Is Now Online


The Macaulay Library, with over 150,000 recordings of 9,000 species, has now been digitised and uploaded to an online searchable database.

The library, housed at Cornell University’s Lab of Ornithology, has made available over ten terabytes of recordings, with a runtime of 7,513 hours of natural sounds. The collection, which began in 1929, has taken researchers decades to accumulate. It currently holds recordings of a massive three-quarters of the world’s bird species, but it’s not all chirps and squawks: there’s also a decent helping of whale songs, insects, bears, elephants, primates and just about every other critter or creepy-crawly that roams the earth.


The collection’s curator Greg Budney describes the archives as revolutionary, in terms of the speed and the breadth of material that is now accessible online. “This is one of the greatest research and conservation resources at the Cornell Lab,” said Budney. “And through its digitization we’ve swung the doors open on it in a way that wasn’t possible 10 or 20 years ago.”


“Our audio collection is the largest and the oldest in the world,” explained Macaulay Library director Mike Webster. “Now, it’s also the most accessible. We’re working to improve search functions and create tools people can use to collect recordings and upload them directly to the archive. Our goal is to make the Macaulay Library as useful as possible for the broadest audience possible.”


Now that the team has digitised its massive archive, it’s focusing on collecting new material from amateur and professional recordists around the world. The sounds are used not only by sound designers and filmmakers, but also by researchers, museums, and anyone interested in the sounds of nature. “Plus, it’s just plain fun to listen to these sounds,” explained Budney. “Have you heard the sound of a walrus underwater? It’s an amazing sound.”


Via Marianne PokeBunny Lenaerts, Jocelyn Stoller
Scooped by Dr. Stefan Gruenwald
Scoop.it!

"Tardis" Memory Could Enable Huge Multi-Core Computer Chips

"Tardis" Memory Could Enable Huge Multi-Core Computer Chips | Amazing Science | Scoop.it
More efficient memory-management scheme could help enable chips with thousands of cores.


In a modern, multicore chip, every core — or processor — has its own small memory cache, where it stores frequently used data. But the chip also has a larger, shared cache, which all the cores can access.


If one core tries to update data in the shared cache, other cores working on the same data need to know. So the shared cache keeps a directory of which cores have copies of which data. That directory takes up a significant chunk of memory: In a 64-core chip, it might be 12 percent of the shared cache. And that percentage will only increase with the core count. Envisioned chips with 128, 256, or even 1,000 cores will need a more efficient way of maintaining cache coherence.


At the International Conference on Parallel Architectures and Compilation Techniques in October, MIT researchers unveil the first fundamentally new approach to cache coherence in more than three decades. Whereas with existing techniques, the directory’s memory allotment increases in direct proportion to the number of cores, with the new approach, it increases according to the logarithm of the number of cores.
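To see why that scaling matters, here is some back-of-the-envelope arithmetic (the bit widths below are assumptions for illustration, not figures from the MIT paper): a conventional full-map directory keeps one presence bit per core for every cache line, while a scheme whose per-line bookkeeping grows with log2 of the core count plus a fixed timestamp stays nearly flat. With these assumptions, a 64-core full map costs about 12 percent of the cache line it annotates, in the same ballpark as the figure quoted above.

```python
import math

LINE_BITS = 64 * 8          # a 64-byte cache line
TS_BITS = 32                # assumed fixed-width timestamp field

def full_map_overhead(cores):
    """Directory bits per cache line: one presence bit per core."""
    return cores / LINE_BITS

def log_overhead(cores):
    """Bits per line if bookkeeping grows with log2(cores) plus a fixed timestamp."""
    return (math.ceil(math.log2(cores)) + TS_BITS) / LINE_BITS

for cores in (64, 128, 256, 1024):
    print(f"{cores:5d} cores: full map {full_map_overhead(cores):6.1%} of line size, "
          f"log-scaling scheme {log_overhead(cores):6.1%}")
```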

Scooped by Dr. Stefan Gruenwald

New flat transistor defies theoretical limit

A team of researchers with members from the University of California and Rice University has found a way to get a flat transistor to defy theoretical limitations on Field Effect Transistors (FETs). In their paper published in the journal Nature, the team describes their work and why they believe it could lead to consumer devices that have both smaller electronics and longer battery life. Katsuhiro Tomioka with Erasmus MC University Medical Center in the Netherlands offers a News & Views article discussing the work done by the team in the same journal edition.


As Tomioka notes, the materials and type of architecture currently used in creating small consumer electronic devices are rapidly reaching a threshold at which a tradeoff will have to be made—smaller transistors or greater power requirements. This is because of the unique nature of FETs: shortening the channel they use requires more power, on a logarithmic scale. Thus, continuing to make FETs ever smaller while getting them to use less power means two things. The first is that a different channel material must be found, one that allows high switch-on currents at low voltages. The second is that a way must be found to lower the voltage required by the FETs.


Researchers have made inroads on the first requirement, building FETs with metal-oxide-semiconductor materials, for example. The second has proved to be more challenging. In this latest effort, the researchers looked to tunneling to reduce voltage demands, the results of which are called, quite naturally, tunneling FETs or TFETs—they require less voltage because they are covered (by a gate stack) and work by transporting a charge via quantum-tunneling. The device the team built is based on a 2D bilayer of molybdenum disulfide and bulk germanium—it demonstrated a negative differential resistance, a marker of tunneling, and a very steep subthreshold slope (the switching property associated with rapid turn-on) which fell below the classical theoretical limit.
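The "classical theoretical limit" on a conventional FET's subthreshold slope is the thermionic (Boltzmann) limit of ln(10)·kT/q, roughly 60 mV per decade of current at room temperature; tunneling devices are attractive precisely because they are not bound by it. A quick check of that number:

```python
import math

# Thermionic subthreshold-swing limit for a conventional FET: ln(10) * kT / q
k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C
T = 300.0             # room temperature, K

ss_limit = math.log(10) * k * T / q   # volts per decade of drain current
print(f"Subthreshold swing limit at {T:.0f} K: {ss_limit * 1e3:.1f} mV/decade")
```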


The work by the team represents substantial progress in solving the miniaturization problem for future electronic devices, but as the team notes, there is still much to do. They express optimism that further improvements will lead not just to better consumer devices, but to tiny sensors that could be introduced into the body to help monitor health.

Scooped by Dr. Stefan Gruenwald

Real or computer-generated: Can you tell the difference?


Which of these are photos vs. computer-generated images?


As computer-generated characters become increasingly photorealistic, people are finding it harder to distinguish between real and computer-generated images, a Dartmouth College-led study has found. This has introduced complex forensic and legal issues, such as how to distinguish between computer-generated and photographic images of child pornography, says Hany Farid, a professor of computer science and pioneering researcher in digital forensics at Dartmouth, and senior author of a paper in the journal ACM Transactions on Applied Perception. “This can be problematic when a photograph is introduced into a court of law and the jury has to assess its authenticity,” Farid says.


In their study, Farid’s team conducted perceptual experiments in which 60 high-quality computer-generated and photographic images of men’s and women’s faces were shown to 250 observers. Each observer was asked to classify each image as either computer generated or photographic. Observers correctly classified photographic images 92 percent of the time, but correctly classified computer-generated images only 60 percent of the time.


But in a follow-up experiment, when the researchers provided a second set of observers with some training before the experiment, their accuracy on classifying photographic images fell slightly to 85 percent but their accuracy on computer-generated images jumped to 76 percent. With or without training, observers performed much worse than Farid’s team observed five years ago in a study when computer-generated imagery was not as photorealistic.


“We expect that human observers will be able to continue to perform this task for a few years to come, but eventually we will have to refine existing techniques and develop new computational methods that can detect fine-grained image details that may not be identifiable by the human visual system,” says Farid.

Shafique Miraj Aman's curator insight, February 29, 11:40 PM

This is a good article to show how computer graphics technology will slowly overcome the uncanny valley.

Babara Lopez's curator insight, March 4, 8:46 PM
It's hard...
Taylah Mancey's curator insight, March 24, 2:28 AM

With technology always changing and getting better, the results are not always an advantage. While forensic science has improved due to technological advancements, this story shows that there can also be a negative side. Technology in this instance has made it more difficult for justice to prevail.

Scooped by Dr. Stefan Gruenwald

‘Eternal’ 5D data storage could reliably record the history of humankind for billions of years


Digital documents stored in nano-structured dots in glass for billions of years could survive the end of the human race.


Scientists at the University of Southampton Optoelectronics Research Centre (ORC) have developed the first digital data storage system capable of creating archives that can survive for billions of years. Using nanostructured glass, the system has 360 TB per disc capacity, thermal stability up to 1,000°C, and virtually unlimited lifetime at room temperature (or 13.8 billion years at 190°C).


As a “highly stable and safe form of portable memory,” the technology opens up a new era of “eternal” data archiving that could be essential to cope with the accelerating amount of information currently being created and stored, the scientists say.* The system could be especially useful for organizations with big archives, such as national archives, museums, and libraries, according to the scientists.


The recording system uses an ultrafast laser to produce extremely short (femtosecond) and intense pulses of light. The file is written in three layers of nanostructured dots separated by five micrometers (millionths of a meter) in fused quartz, coined a “Superman memory crystal” (as in the “memory crystals” used in the Superman films).


The self-assembled nanostructures change the way light travels through glass, modifying the polarization of light, which can then be read by a combination optical microscope and polarizer, similar to that found in Polaroid sunglasses. The recording method is described as “5D” because the information encoding is in five dimensions — three-dimensional position plus size and orientation.


So far, the researchers have saved major documents from human history as digital copies, such as the Universal Declaration of Human Rights (UDHR), Newton’s Opticks, the Magna Carta, and the King James Bible. A copy of the UDHR encoded to 5D data storage was recently presented to UNESCO by the ORC at the International Year of Light (IYL) closing ceremony in Mexico.


The team is now looking for industry partners to further develop and commercialize this technology. The researchers will present their research at the photonics industry’s SPIE (the International Society for Optical Engineering Conference) in San Francisco on Wednesday Feb. 17.


* In 2008, the International Data Corporation found that the total capacity of data stored is increasing by around 60% each year. As a result, more than 39,000 exabytes of data will be generated by 2020. This amount of data will cause a series of problems, and one of the main ones will be power consumption. 1.5% of total U.S. electricity consumption in 2010 went to data centers in the U.S. According to a report by the Natural Resources Defense Council, the power consumption of all data centers in the U.S. will reach roughly 140 billion kilowatt-hours per year by 2020. This amount of electricity is equivalent to that generated by roughly thirteen Heysham 2 nuclear power stations (one of the biggest stations in the UK, net 1,240 MWe).


Most of these data centers are built around hard-disk drives (HDDs), with only a few designed around optical discs. HDD is the most popular solution for digital data storage, according to the International Data Corporation. However, HDD is not an energy-efficient option for data archiving; the loading energy consumption is around 0.04 W/GB. In addition, HDD is an unsatisfactory candidate for long-term storage due to the short lifetime of the hardware and the need to transfer data every two years to avoid any loss.

Scooped by Dr. Stefan Gruenwald

WIRED: Machine Learning Works Great — Mathematicians Just Don’t Know Why

In mathematical terms, these supervised-learning systems are given a large set of inputs and the corresponding outputs; the goal is for a computer to learn the function that will reliably transform a new input into the correct output. To do this, the computer breaks down the mystery function into a number of layers of unknown functions called sigmoid functions. These S-shaped functions look like a street-to-curb transition: a smoothed step from one level to another, where the starting level, the height of the step and the width of the transition region are not determined ahead of time.

Inputs enter the first layer of sigmoid functions, which spits out results that can be combined before being fed into a second layer of sigmoid functions, and so on. This web of resulting functions constitutes the “network” in a neural network. A “deep” one has many layers.
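A minimal sketch of that "web of sigmoid functions" in NumPy — a few stacked layers, each an affine map followed by the S-shaped sigmoid (the layer sizes and random weights here are arbitrary illustrative choices, not a trained network):

```python
import numpy as np

def sigmoid(x):
    """The S-shaped squashing function applied at every layer."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # Random weights and zero biases; training would adjust these
    return rng.normal(scale=0.5, size=(n_out, n_in)), np.zeros(n_out)

layers = [init_layer(4, 8), init_layer(8, 8), init_layer(8, 1)]   # a small "deep" net

def forward(x):
    """Feed an input vector through each sigmoid layer in turn."""
    for W, b in layers:
        x = sigmoid(W @ x + b)
    return x

print(forward(np.array([0.2, -1.0, 0.5, 3.0])))
```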


Decades ago, researchers proved that these networks are universal, meaning that they can generate all possible functions. Other researchers later proved a number of theoretical results about the unique correspondence between a network and the function it generates. But these results assume networks that can have extremely large numbers of layers and of function nodes within each layer. In practice, neural networks use anywhere between two and two dozen layers. Because of this limitation, none of the classical results come close to explaining why neural networks and deep learning work as spectacularly well as they do.


It is the guiding principle of many applied mathematicians that if something mathematical works really well, there must be a good underlying mathematical reason for it, and we ought to be able to understand it. In this particular case, it may be that we don’t even have the appropriate mathematical framework to figure it out yet. Or, if we do, it may have been developed within an area of “pure” mathematics from which it hasn’t yet spread to other mathematical disciplines.


Another technique used in machine learning is unsupervised learning, which is used to discover hidden connections in large data sets. Let’s say, for example, that you’re a researcher who wants to learn more about human personality types. You’re awarded an extremely generous grant that allows you to give 200,000 people a 500-question personality test, with answers that vary on a scale from one to 10. Eventually you find yourself with 200,000 data points in 500 virtual “dimensions”—one dimension for each of the original questions on the personality quiz. These points, taken together, form a lower-dimensional “surface” in the 500-dimensional space in the same way that a simple plot of elevation across a mountain range creates a two-dimensional surface in three-dimensional space.


What you would like to do, as a researcher, is identify this lower-dimensional surface, thereby reducing the personality portraits of the 200,000 subjects to their essential properties—a task that is similar to finding that two variables suffice to identify any point in the mountain-range surface. Perhaps the personality-test surface can also be described with a simple function, a connection between a number of variables that is significantly smaller than 500. This function is likely to reflect a hidden structure in the data.


In the last 15 years or so, researchers have created a number of tools to probe the geometry of these hidden structures. For example, you might build a model of the surface by first zooming in at many different points. At each point, you would place a drop of virtual ink on the surface and watch how it spread out. Depending on how the surface is curved at each point, the ink would diffuse in some directions but not in others. If you were to connect all the drops of ink, you would get a pretty good picture of what the surface looks like as a whole. And with this information in hand, you would no longer have just a collection of data points. Now you would start to see the connections on the surface, the interesting loops, folds and kinks. This would give you a map for how to explore it.
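One concrete reading of the "virtual ink" picture is diffusion-map-style manifold learning. A rough sketch under that assumption (toy 3-D data standing in for the 500-dimensional survey, and an arbitrary kernel width):

```python
import numpy as np

def diffusion_map(points, n_components=2, epsilon=1.0):
    """Embed points into a few coordinates via a diffusion (random-walk) kernel."""
    # Pairwise squared distances between all points
    sq = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    kernel = np.exp(-sq / epsilon)                  # local "ink spreading" affinity
    P = kernel / kernel.sum(axis=1, keepdims=True)  # row-normalise: random-walk matrix
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)
    # Skip the trivial constant eigenvector; keep the next n_components as coordinates
    return eigvecs[:, order[1:n_components + 1]].real

# Toy stand-in for the personality survey: noisy points along a curved 1-D strip in 3-D
t = np.linspace(0, 3 * np.pi, 200)
data = np.column_stack([np.cos(t), np.sin(t), 0.1 * t]) + 0.01 * np.random.randn(200, 3)

embedding = diffusion_map(data)
print(embedding.shape)   # (200, 2): each subject reduced to two "essential" coordinates
```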


These methods are already leading to interesting and useful results, but many more techniques will be needed. Applied mathematicians have plenty of work to do. And in the face of such challenges, they trust that many of their “purer” colleagues will keep an open mind, follow what is going on, and help discover connections with other existing mathematical frameworks. Or perhaps even build new ones.

Scooped by Dr. Stefan Gruenwald

Will computers ever truly understand what humans are saying?


If you think computers are quickly approaching true human communication, think again. Computers like Siri often get confused because they judge meaning by looking at a word's statistical regularity. This is unlike humans, for whom context is more important than the word or signal, according to a researcher who invented a communication game allowing only nonverbal cues, and used it to pinpoint regions of the brain where mutual understanding takes place.


From Apple's Siri to Honda's robot Asimo, machines seem to be getting better and better at communicating with humans.

But some neuroscientists caution that today's computers will never truly understand what we're saying because they do not take into account the context of a conversation the way people do.

Specifically, say University of California, Berkeley, postdoctoral fellow Arjen Stolk and his Dutch colleagues, machines don't develop a shared understanding of the people, place and situation -- often including a long social history -- that is key to human communication. Without such common ground, a computer cannot help but be confused.

"People tend to think of communication as an exchange of linguistic signs or gestures, forgetting that much of communication is about the social context, about who you are communicating with," Stolk said.

The word "bank," for example, would be interpreted one way if you're holding a credit card but a different way if you're holding a fishing pole. Without context, making a "V" with two fingers could mean victory, the number two, or "these are the two fingers I broke."

"All these subtleties are quite crucial to understanding one another," Stolk said, perhaps more so than the words and signals that computers and many neuroscientists focus on as the key to communication. "In fact, we can understand one another without language, without words and signs that already have a shared meaning."

Babies and parents, not to mention strangers lacking a common language, communicate effectively all the time, based solely on gestures and a shared context they build up over even a short time.

Stolk argues that scientists and engineers should focus more on the contextual aspects of mutual understanding, basing his argument on experimental evidence from brain scans that humans achieve nonverbal mutual understanding using unique computational and neural mechanisms. Some of the studies Stolk has conducted suggest that a breakdown in mutual understanding is behind social disorders such as autism.

"This shift in understanding how people communicate without any need for language provides a new theoretical and empirical foundation for understanding normal social communication, and provides a new window into understanding and treating disorders of social communication in neurological and neurodevelopmental disorders," said Dr. Robert Knight, a UC Berkeley professor of psychology in the campus's Helen Wills Neuroscience Institute and a professor of neurology and neurosurgery at UCSF.

Stolk and his colleagues discuss the importance of conceptual alignment for mutual understanding in an opinion piece appearing Jan. 11 in the journal Trends in Cognitive Sciences.

Rescooped by Dr. Stefan Gruenwald from Science And Wonder

"Writable" Circuits Could Let Scientists Draw Electronics into Existence

"Writable" Circuits Could Let Scientists Draw Electronics into Existence | Amazing Science | Scoop.it

New method uses soft sheets made of silicone rubber that have many tiny droplets of liquid metal embedded inside them.


Scientists have developed a way to produce soft, flexible and stretchy electronic circuits and radio antennas by hand, simply by writing on specially designed sheets of material. This technique could help people draw electronic devices into existence on demand for customized devices, researchers said in a new study describing the method.


Whereas conventional electronics are stiff, new soft electronics are flexible and potentially stretchable and foldable. Researchers around the world are investigating soft electronics for applications such as wearable and implantable devices.


The new technique researchers developed creates circuits by fusing, or sintering, together bits of metal to form electrically conductive wires. But the newly developed process does not use heat, as is often the case with sintering. Instead, this method involves soft sheets made of silicone rubber that have many tiny droplets of liquid metal embedded inside them. Pressing down on these sheets using, for instance, the tip of a pen, ruptures the capsules, much like popping miniature water balloons, and the liquid metal inside can pool to form circuit elements.


"We can make conductive lines by hand simply by writing," said study co-senior author Michael Dickey, a chemical engineer at North Carolina State University in Raleigh. The researchers used a metal known as eutectic gallium indium (EGaIn), a highly electrically conductive alloy that is liquid at about 60 degrees Fahrenheit (15.5 degrees Celsius). They embedded droplets of EGaIn that were only about 100 nanometers, or billionths of a meter, wide into sheets of the a kind of silicone rubber known as PDMS. When these droplets pool together, their electrical conductivity increases about tenfold compared to when they are separate, the researchers said. To understand why, imagine a hallway covered with water balloons.


Via LilyGiraud
Scooped by Dr. Stefan Gruenwald

Scientists have figured out what we need to achieve secure quantum teleportation


"We've got this."


For the first time, researchers have demonstrated the precise requirements for secure quantum teleportation – and it involves a phenomenon known as 'quantum steering', first proposed by Albert Einstein and Erwin Schrödinger.


Before you get too excited, no, this doesn't mean we can now teleport humans like they do on Star Trek. Instead, this research will allow people to use quantum entanglement to send information across large distances without anyone else being able to eavesdrop. Which is almost as cool, because this is how we'll form the un-hackable communication networks of the future.


Quantum teleportation isn't new in itself. Researchers have already had a lot of success quantum teleporting information over 100 km of fiber. But there's a slight issue – the quantum message was getting to the other end kinda incoherent, and scientists haven't exactly known what to do to prevent that from happening, until now. 


"Teleportation works like a sophisticated fax machine, where a quantum state is transported from one location to another," said one of the researchers, Margaret Reid, from Swinburne University of Technology in Australia. "Let’s say 'Alice' begins the process by performing operations on the quantum state – something that encodes the state of a system – at her station. Based on the outcomes of her operations, she communicates (by telephone or public Internet) to 'Bob' at a distant location, who is then able to create a replica of the quantum state," she explains.


"The problem is that unless special requirements are satisfied, quantum mechanics demands that the state at Bob’s end will be 'fuzzed up'." The researchers have now shown that to avoid this, Alice and Bob (or anyone else who wants to send an entangled message) need to use a special form of quantum entanglement known as 'Einstein-Podolsky-Rosen steering'.


"Only then can the quality of the transported state be perfect," said Reid. "The beauty is that quantum mechanics guarantees that a perfect state can only be transported to one receiver. Any second 'eavesdropper' will get a fuzzy version." Basically, in this quantum steering state, the measurement of one entangled particle can have an immediate 'steering' effect on the state of another distant particle.

The researchers will continue to investigate this phenomenon to figure out how it can be used to more reliably communicate using quantum entanglement.

Rescooped by Dr. Stefan Gruenwald from Systems Theory

China still has the fastest supercomputer, and now has more than 100 in service


The Top 500 supercomputer rankings are a fun way to gauge which countries boast the most powerful rigs in the world. And, perhaps unsurprisingly, China has won the top spot for the sixth time in a row.

Not only that, but the nation has nearly tripled its supercomputer count from 37 to 109 in only six months. Although the US still maintains a healthy 201 supercomputers, first place in terms of quantity, that’s actually a record low for the nation in the Top 500, which was conceived back in 1993.


Produced by its National University of Defense Technology, China’s Tianhe-2 boasts a whopping 3,120,000 cores with the ability to achieve 33.86 quadrillion floating point operations (flops) per second. As if that information alone wasn’t intimidating enough, those numbers are almost double those of the US energy department’s still powerful, but not quite as monumental, Titan Cray XK7, apparently capable of 17.59 petaflops, according to the Linpack benchmark.


A United States-owned rig also occupies the third place position, that is, IBM’s Sequoia, custom-built for the National Nuclear Security Administration and housed in the Lawrence Livermore National Lab. The Sequoia, which claimed the top spot in 2012, has since been surpassed by both the Tianhe-2 and the Titan Cray XK7.


Among the top 10 of the Top 500, only the Trinity and Hazel Hen are fresh faces to the list, positioned at numbers 6 and 8, respectively. While the Trinity was conceived for the US Department of Energy, the Hazel Hen rests in Stuttgart, Germany.


In an interview with the BBC, Rajnish Arora, vice president of enterprise computing at IDC Asia Pacific, explained to the network that China’s domination in the supercomputer space is less reflective of the United States’ inability to compete and more representative of China’s economic growth.


“When China started off appearing on the center stage of the global economy in the 80s and 90s, it was predominately a manufacturing hub,” Arora said. “All the IP or design work would happen in Europe or the US and the companies would just send manufacturing or production jobs to China. Now as these companies become bigger, they want to invest in technical research capabilities, so that they can create a lot more innovation and do basic design and engineering work.”


Via Ben van Lier
Rescooped by Dr. Stefan Gruenwald from Research Workshop

Unsupervised, Mobile and Wireless Brain–Computer Interfaces on the Horizon

Juliano Pinto, a 29-year-old paraplegic, kicked off the 2014 World Cup in São Paulo with a robotic exoskeleton suit that he wore and controlled with his mind. The event was broadcast internationally and served as a symbol of the exciting possibilities of brain-controlled machines. Over the last few decades, research into brain–computer interfaces (BCIs), which allow direct communication between the brain and an external device such as a computer or prosthetic, has skyrocketed. Although these new developments are exciting, there are still major hurdles to overcome before people can easily use these devices as a part of daily life.

Until now such devices have largely been proof-of-concept demonstrations of what BCIs are capable of. Currently, almost all of them require technicians to manage and include external wires that tether individuals to large computers. New research, conducted by members of the BrainGate group, a consortium that includes neuroscientists, engineers and clinicians, has made strides toward overcoming some of these obstacles. “Our team is focused on developing what we hope will be an intuitive, always-available brain–computer interface that can be used 24 hours a day, seven days a week, that works with the same amount of subconscious thought that somebody who is able-bodied might use to pick up a coffee cup or move a mouse,” says Leigh Hochberg, a neuroengineer at Brown University who was involved in the research. Researchers are opting for these devices to also be small, wireless and usable without the help of a caregiver.

Via Wildcat2030, Jocelyn Stoller
Lucile Debethune's curator insight, November 22, 2015 12:48 PM

An interesting approach to the human–machine interface, and the BrainGate group brings very good ideas to this subject. One to watch.

Scooped by Dr. Stefan Gruenwald

Entanglement in Silico: Quantum computer coding in silicon now possible

A team of Australian engineers has proven – with the highest score ever obtained – that a quantum version of computer code can be written and manipulated using two quantum bits in a silicon microchip. The advance removes lingering doubts that such operations can be made reliably enough to allow powerful quantum computers to become a reality. 
 
The result, obtained by a team at UNSW, appears today in the international journal Nature Nanotechnology.
 
The quantum code written at UNSW is built upon a class of phenomena called quantum entanglement, which allows for seemingly counterintuitive phenomena such as the measurement of one particle instantly affecting another – even if they are at opposite ends of the universe.
 
"We have succeeded in passing the test, and we have done so with the highest ‘score’ ever recorded in an experiment.”
 
“This effect is famous for puzzling some of the deepest thinkers in the field, including Albert Einstein, who called it ‘spooky action at a distance’,” said Professor Andrea Morello, of the School of Electrical Engineering & Telecommunications at UNSW and Program Manager in the Centre for Quantum Computation & Communication Technology, who led the research. “Einstein was sceptical about entanglement, because it appears to contradict the principles of ‘locality’, which means that objects cannot be instantly influenced from a distance.”
 
Physicists have since struggled to establish a clear boundary between our everyday world – which is governed by classical physics – and this strangeness of the quantum world. For the past 50 years, the best guide to that boundary has been a theorem called Bell’s Inequality, which states that no local description of the world can reproduce all of the predictions of quantum mechanics.
 
Bell’s Inequality demands a very stringent test to verify if two particles are actually entangled, known as the ‘Bell test’, named for the British physicist who devised the theorem in 1964.
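
To make the test concrete, here is a minimal sketch (illustrative only, not the UNSW team's analysis code) of the CHSH form of Bell's Inequality: any local theory must satisfy |S| ≤ 2, while quantum mechanics predicts values up to 2√2 for a maximally entangled spin pair.

    import numpy as np

    def correlation(angle_a, angle_b):
        # Quantum prediction for the spin correlation of a singlet (entangled) pair
        return -np.cos(angle_a - angle_b)

    # Measurement angles that maximize the quantum violation
    a, a_prime = 0.0, np.pi / 2
    b, b_prime = np.pi / 4, 3 * np.pi / 4

    S = (correlation(a, b) - correlation(a, b_prime)
         + correlation(a_prime, b) + correlation(a_prime, b_prime))

    print(abs(S))  # ~2.828 = 2*sqrt(2), above the classical bound of 2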
 
“The key aspect of the Bell test is that it is extremely unforgiving: any imperfection in the preparation, manipulation and read-out protocol will cause the particles to fail the test,” said Dr Juan Pablo Dehollain, a UNSW Research Associate who with Dr Stephanie Simmons was a lead author of the Nature Nanotechnology paper.
Rescooped by Dr. Stefan Gruenwald from Research Workshop

Basic quantum computation achieved with silicon for first time


So far we have had to use supercooled materials to try out ultra-fast quantum logic. Now silicon, already standard in electronics, has gained that talent.


The ingredients for superfast computers could be nearly in place. For the first time, researchers have demonstrated that two silicon transistors acting as quantum bits can perform a tiny calculation. Now it’s just a question of using these as the building blocks of a larger quantum computer – taking advantage of the very material that is ubiquitous in conventional electronics.


Where conventional computing uses bits, quantum computing uses qubits, which can be 0, 1 or a superposition of the two, instead of being stuck at either 0 or 1. This means they could exponentially shrink the time it takes to solve certain problems, transforming fields like encryption and the search for new pharmaceuticals.


Previously qubit calculations had been made using ultra-cold superconductors, which are easier to couple together into a basic calculator – but never with user-friendly silicon. In silicon, the qubits are isolated to keep them stable, which is a barrier to making two qubits interact with each other.


Now, a team led by Andrew Dzurak of the University of New South Wales in Sydney, Australia, has achieved that feat. Their device looks at the spin of two electrons and follows instructions: if the first one is spinning in a particular direction, flip the spin of the second electron. If not, do nothing.

This is an example of a logic gate, a fundamental unit in a computer.
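
In matrix form this behaves like the textbook controlled-NOT (CNOT) gate. A minimal Python sketch (purely illustrative; this is not the UNSW device's control software) shows its action on the two-qubit basis states:

    import numpy as np

    # Basis ordering: |00>, |01>, |10>, |11>, with the first qubit as the control spin
    CNOT = np.array([
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],  # |10> -> |11>: the control is 1, so the target flips
        [0, 0, 1, 0],  # |11> -> |10>
    ])

    state_10 = np.array([0, 0, 1, 0])  # control "up" (1), target "down" (0)
    print(CNOT @ state_10)             # [0 0 0 1], i.e. |11>: the target has flipped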


Repeating that same humble logic across sequences of gates enables more and more complex calculations. Dzurak’s team says it has patented a design for a chip containing millions of such qubits.


“This is a seminal breakthrough in the world of quantum computer development – with some caveats,” says Thomas Schenkel of the Lawrence Berkeley National Laboratory in California. Although easier to scale up, “silicon-based qubits are still way behind superconducting qubits”, he says.


But that doesn’t diminish the potential of the work. “Nothing beats what we can do in silicon in terms of economical scaling and large-scale integration,” Schenkel says.


Journal reference: Nature, DOI: 10.1038/nature15263


Via Jocelyn Stoller
Rescooped by Dr. Stefan Gruenwald from Nostri Orbis

What is the Internet of Everything (IoE)?


Imagine a world in which everything is connected and packed with sensors. 50+ billion connected devices, loaded with a dozen or more sensors, will create a trillion-sensor ecosystem.


These devices will create what I call a state of perfect knowledge, where we'll be able to know what we want, where we want, when we want. Combined with the power of data mining and machine learning, the value that you can create and the capabilities you will have as an individual and as a business will be extraordinary.


Here are a few basic examples to get you thinking:

  • Retail: Beyond knowing what you purchased, stores will monitor your eye gaze, knowing what you glanced at… what you picked up and considered, and put back on the shelf. Dynamic pricing will entice you to pick it up again.
  • City Traffic: Cars looking for parking cause 40% of traffic in city centers. Parking sensors will tell your car where to find an open spot.
  • Lighting: Streetlights and house lights will only turn on when you're nearby.
  • Vineyards/Farming: Today IoE enables winemakers to monitor the exact condition (temperature, humidity, sun) of every vine and recommend optimal harvest times. IoE can follow details of fermentation and even assure perfect handling through distribution and sale to the consumer at the wine store.
  • Dynamic pricing: In the future, everything has dynamic pricing where supply and demand drives pricing. Uber already knows when demand is high, or when I'm stuck miles from my house, and can charge more as a result (a toy version of this idea is sketched after this list).
  • Transportation: Self-driving cars and IoE will make ALL traffic a thing of the past.
  • Healthcare: You will be the CEO of your own health. Wearables will be tracking your vitals constantly, allowing you and others to make better health decisions.
  • Banking/Insurance: Research shows that if you exercise and eat healthy, you're more likely to repay your loan. Imagine a variable interest rate (or lower insurance rate) depending on exercise patterns and eating habits?
  • Forests: With connected sensors placed on trees, you can make urban forests healthier and better able to withstand -- and even take advantage of -- the effects of climate change.
  • Office Furniture: Software and sensors embedded in office furniture are being used to improve office productivity, ergonomics and employee health.
  • Invisibles: Forget wearables, the next big thing is sensor-based technology that you can't see, whether they are in jewelry, attached to the skin like a bandage, or perhaps even embedded under the skin or inside the body. By 2017, 30% of wearables will be "unobtrusive to the naked eye," according to market researcher Gartner.
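
As a toy illustration of the dynamic-pricing idea flagged above, the sketch below scales a base price with the demand/supply ratio (hypothetical function and numbers; this is not Uber's actual surge algorithm):

    def dynamic_price(base_price, demand, supply, sensitivity=0.5, cap=3.0):
        # Scale the base price by the demand/supply ratio,
        # never below 1x and never above the cap
        if supply <= 0:
            multiplier = cap
        else:
            multiplier = min(cap, max(1.0, 1.0 + sensitivity * (demand / supply - 1.0)))
        return round(base_price * multiplier, 2)

    print(dynamic_price(10.0, demand=80, supply=100))   # quiet period -> 10.0
    print(dynamic_price(10.0, demand=300, supply=100))  # high demand  -> 20.0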

Via Fernando Gil
Scooped by Dr. Stefan Gruenwald

Fast and accurate: Groundbreaking computer program diagnoses cancer in two days


In the great majority of cancer cases, the doctor can quickly identify the source of the disease, for example cancer of the liver, lungs, etc. However, in about one in 20 cases, the doctor can confirm that the patient has cancer -- but cannot find the source. These patients then face the prospect of a long wait with numerous diagnostic tests and attempts to locate the origin of the cancer before starting any treatment.


Now, researchers at DTU Systems Biology have combined genetics with computer science and created a new diagnostic technology based on advanced self-learning computer algorithms which -- on the basis of a biopsy from a metastasis -- can with 85 per cent certainty identify the source of the disease and thus target treatment and, ultimately, improve the prognosis for the patient.


Each year, about 35,000 people are diagnosed with cancer in Denmark, and many of them face the prospect of a long wait until the cancer has been diagnosed and its source located. However, even after very extensive tests, there will still be 2-3 per cent of patients where it has not been possible to find the origin of the cancer. In such cases, the patient will be treated with a cocktail of chemotherapy instead of a more appropriately targeted treatment, which could be more effective and gentler on the patient.


The newly developed method, which researchers are calling TumorTracer, is based on analyses of DNA mutations in cancer tissue samples from patients with metastasized cancer, i.e. cancer which has spread. The pattern of mutations is analyzed in a computer program which has been trained to find possible primary tumor localizations. The method has been tested on many thousands of samples where the primary tumor was already identified, and it has proven extremely precise. The next step will be to test the method on patients with unknown primary tumors. In recent years, researchers have discovered several ways of using genome sequencing of tumors to predict whether an individual cancer patient will benefit from a specific type of medicine.
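
The general shape of such a mutation-pattern classifier can be sketched in a few lines (a hypothetical illustration with random stand-in data, not the actual TumorTracer pipeline or its real feature set):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Hypothetical training data: rows = tumor samples, columns = genes
    # (1 = gene mutated in this sample, 0 = not mutated)
    X = rng.integers(0, 2, size=(500, 50))
    sites = np.array(["lung", "liver", "breast", "colon", "skin"])
    y = rng.choice(sites, size=500)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())  # held-out accuracy estimate

    clf.fit(X, y)
    new_biopsy = rng.integers(0, 2, size=(1, 50))   # mutation profile of a metastasis
    print(clf.predict(new_biopsy))                  # predicted primary site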


This is a very effective method, and it is becoming increasingly common to conduct such sequencing for cancer patients. Associate Professor Aron Eklund from DTU Systems Biology explains: "We are very pleased that we can now use the same sequencing data together with our new algorithms to provide a much faster diagnosis for cancer cases that are difficult to diagnose, and to provide a useful diagnosis in cases which are currently impossible to diagnose. At the moment, it takes researchers two days to obtain a biopsy result, but we expect this time to be reduced as it becomes possible to do the sequencing increasingly faster. And it will be straightforward to integrate the method with the methods already being used by doctors."
