Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald

These new quantum dot crystals could replace silicon in super-fast, next-gen computers


Solid, crystalline structures of incredibly tiny particles known as quantum dots have been developed by engineers in the US, and they're so close to perfect, they could be a serious contender for a silicon alternative in the super-fast computers of the future.

 

Just as single-crystal silicon wafers revolutionised computing technology more than 60 years ago (your phone, laptop, PC, and iPad wouldn’t exist without one), quantum dot solids could change everything about how we transmit and process information in the decades to come.

 

But despite the incredible potential of quantum dot crystals in computing technology, researchers have been struggling for years to organise each individual dot into a perfectly structured solid - something that’s crucial if you want to install it in a processor and run an electric charge through it.

 

The problem? Past efforts to build something out of quantum dots - which are made up of a mere 5,000 atoms each - have failed, because researchers couldn’t figure out how to 'glue' them together without using another type of material that messes with their performance.

 

"Previously, they were just thrown together, and you hoped for the best," lead researcher Tobias Hanrath from Cornell University told The Christian Science Monitor. "It was like throwing a couple thousand batteries into a bathtub and hoping you get charge flowing from one end to the other."

 

Instead of pursuing different chemicals and materials that could work as the 'glue' but hinder the quantum dot’s electrical properties, Hanrath and his team have figured out how to ditch the glue and stick the quantum dots to each other, Lego-style.

"If you take several quantum dots, all perfectly the same size, and you throw them together, they’ll automatically align into a bigger crystal," Hanrath says.

 

To achieve this, the researchers first made nanocrystals from lead and selenium, and built these into crystalline fragments. These fragments were then used to form two-dimensional, square-shaped 'superstructures' - tiny building blocks that attach to each other without the help of other atoms. 

 

Publishing the results in Nature Materials, the team claims that the electrical properties of these superstructures are potentially superior to all other existing semiconductor nanocrystals, and they could be used in new types of devices for super-efficient energy absorption and light emission. The structures aren’t entirely perfect though, which is a key limitation of using quantum dots as your building blocks. While every silicon atom is exactly the same size, each quantum dot can vary by about 5 percent, and even when we’re talking about something that’s a few thousand atoms small, that 5 percent size variability is all it takes to prevent perfection.

 

Hanrath says that’s a good and a bad thing - good because they managed to hit the limits of what can be done with quantum dot solids, but bad, because they’ve hit the limits of what can be done with quantum dot solids.

 

"It's the equivalent of saying, 'Now we've made a really large single-crystal wafer of silicon, and you can do good things with it,'" he says in a press release. "That's the good part, but the potentially bad part of it is, we now have a better understanding that if you wanted to improve on our results, those challenges are going to be really, really difficult.”

Scooped by Dr. Stefan Gruenwald

Scientists suggest a 100 times faster type of memory cell based on Josephson junctions


A group of scientists from the Moscow Institute of Physics and Technology (MIPT) and Moscow State University has developed a fundamentally new type of memory cell based on superconductors -- this type of memory will be able to work hundreds of times faster than the memory devices commonly used today, according to an article published in the journal Applied Physics Letters.

 

"With the operational function that we have proposed in these memory cells, there will be no need for time-consuming magnetization and demagnetization processes. This means that read and write operations will take only a few hundred picoseconds, depending on the materials and the geometry of the particular system, while conventional methods take hundreds or thousands of times longer than this," said the corresponding author of the study, Alexander Golubov, the Head of MIPT's Laboratory of Quantum Topological Phenomena in Superconducting Systems.

 

Golubov and his colleagues have proposed creating basic memory cells based on quantum effects in "sandwiches" of a superconductor -- dielectric (or other insulating material) -- superconductor, which were predicted in the 1960s by the British physicist Brian Josephson. The electrons in these "sandwiches" (they are called "Josephson junctions") are able to tunnel from one layer of a superconductor to another, passing through the dielectric like balls passing through a perforated wall.

 

Today, Josephson junctions are used both in quantum devices and conventional devices. For example, superconducting qubits are used to build the D-Wave quantum computer, which is capable of finding the minima of complex functions using the quantum annealing algorithm. There are also ultra-fast analogue-to-digital converters, devices to detect consecutive events, and other systems that do not require fast access to large amounts of memory. There have also been attempts to use the Josephson effect to create ordinary processors. An experimental processor of this type was created in Japan in the late 1980s. In 2014, the research agency IARPA resumed its attempts to create a prototype of a superconducting computer.

 

Josephson junctions with ferromagnets used as the middle of the "sandwich" are currently of greatest practical interest. In memory elements based on ferromagnets, the information is encoded in the direction of the magnetization vector in the ferromagnet. However, there are two fundamental flaws with this approach: first, the low packing density of the memory elements -- additional circuits need to be added to provide extra charge to the cells when reading or writing data; and second, the magnetization vector cannot be changed quickly, which limits the writing speed.

 

The group of physicists from MIPT and MSU proposed encoding the data in Josephson cells in the value of the superconducting current. By studying superconductor-normal metal/ferromagnet-superconductor-insulator-superconductor junctions, the scientists discovered that, for certain longitudinal and transverse dimensions of the layers, the system can have two energy minima, meaning it sits in one of two different states. These two minima can be used to record data -- zeros and ones.

 

In order to switch the system from "zero" to "one" and back again, the scientists have suggested using injection currents flowing through one of the layers of the superconductor. They propose to read the status using the current that flows through the whole structure. These operations can be performed hundreds of times faster than measuring the magnetization or magnetization reversal of a ferromagnet.

 

"In addition, our method requires only one ferromagnetic layer, which means that it can be adapted to so-called single flux quantum logic circuits, and this means that there will be no need to create an entirely new architecture for a processor. A computer based on single flux quantum logic can have a clock speed of hundreds of gigahertz, and its power consumption will be dozens of times lower," said Golubov.

Scooped by Dr. Stefan Gruenwald

Experiment shows magnetic chips could dramatically increase computing's energy efficiency


In a breakthrough for energy-efficient computing, engineers at the University of California, Berkeley, have shown for the first time that magnetic chips can operate with the lowest fundamental level of energy dissipation possible under the laws of thermodynamics.

 

The findings, to be published Friday, March 11, 2016 in the peer-reviewed journal Science Advances, mean that dramatic reductions in power consumption are possible—as much as one-millionth the amount of energy per operation used by transistors in modern computers.

 

This is critical for mobile devices, which demand powerful processors that can run for a day or more on small, lightweight batteries. On a larger, industrial scale, as computing increasingly moves into 'the cloud,' the electricity demands of the giant cloud data centers are multiplying, collectively taking an increasing share of the country's—and world's—electrical grid.

 

"We wanted to know how small we could shrink the amount of energy needed for computing," said senior author Jeffrey Bokor, a UC Berkeley professor of electrical engineering and computer sciences and a faculty scientist at the Lawrence Berkeley National Laboratory. "The biggest challenge in designing computers and, in fact, all our electronics today is reducing their energy consumption."

 

Lowering energy use is a relatively recent shift in focus in chip manufacturing after decades of emphasis on packing greater numbers of increasingly tiny and faster transistors onto chips. "Making transistors go faster was requiring too much energy," said Bokor, who is also the deputy director of the Center for Energy Efficient Electronics Science, a Science and Technology Center at UC Berkeley funded by the National Science Foundation. "The chips were getting so hot they'd just melt."

 

Researchers have been turning to alternatives to conventional transistors, which currently rely upon the movement of electrons to switch between 0s and 1s. Partly because of electrical resistance, it takes a fair amount of energy to ensure that the signal between the two states is clear and reliably distinguishable, and this results in excess heat.
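For a sense of scale on the claims above: the "lowest fundamental level of energy dissipation possible under the laws of thermodynamics" is the Landauer limit of kT·ln 2 per bit erased. The back-of-the-envelope sketch below is our own illustration, not from the paper, and the CMOS switching energy is an assumed order of magnitude rather than a measured value.

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T is k*T*ln(2).
k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # roughly room temperature, K

landauer_J = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {landauer_J:.2e} J per bit")   # ~2.9e-21 J

# Assumed order-of-magnitude energy for one switching operation in a modern
# CMOS transistor (a few femtojoules); an illustrative guess, not a figure
# taken from the Science Advances paper.
cmos_J = 3e-15

print(f"Assumed CMOS switching energy: {cmos_J:.0e} J")
print(f"Ratio above the thermodynamic floor: ~{cmos_J / landauer_J:.0e}")
```

Under these assumptions the ratio comes out near a million, which is consistent with the article's "one-millionth the amount of energy per operation" figure.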

Scooped by Dr. Stefan Gruenwald

Project to reverse-engineer the brain to make computers think like humans


More than two decades ago, the U.S. government launched the Human Genome Project, a 13-year endeavor to sequence and map all the genes of the human species. Although initially met with skepticism and even opposition, the project has since transformed the field of genetics and is today considered one of the most successful scientific enterprises in history.

 

Now the Intelligence Advanced Research Projects Activity (IARPA), a research organization for the intelligence community modeled after the defense department’s famed DARPA, has dedicated $100 million to a similarly ambitious project. The Machine Intelligence from Cortical Networks program, or MICrONS, aims to reverse-engineer one cubic millimeter of the brain, study the way it makes computations, and use those findings to better inform algorithms in machine learning and artificial intelligence. IARPA has recruited three teams, led by David Cox, a biologist and computer scientist at Harvard University, Tai Sing Lee, a computer scientist at Carnegie Mellon University, and Andreas Tolias, a neuroscientist at the Baylor College of Medicine. Each team has proposed its own five-year approach to the problem.

 

“It’s a substantial investment because we think it’s a critical challenge, and [it’ll have a] transformative impact for the intelligence community as well as the world more broadly,” says Jacob Vogelstein at IARPA, who manages the MICrONS program.

 

MICrONS, as a part of President Obama’s BRAIN Initiative, is an attempt to push forward the status quo in brain-inspired computing. A great deal of technology today already relies on a class of algorithms called artificial neural networks, which, as their name would suggest, are inspired by the architecture (or at least what we know about the architecture) of the brain. Thanks to significant increases in computing power and the availability of vast amounts of data on the Internet, Facebook can identify faces, Siri can recognize voices, cars can self-navigate, and computers can beat humans at games like chess. These algorithms, however, are still primitive, relying on a highly simplified process of analyzing information for patterns.

Based on models dating back to the 1980s, neural networks tend to perform poorly in cluttered environments, where the object the computer is trying to identify is hidden among a large number of objects, many of which are overlapping or ambiguous. These algorithms do not generalize well, either. Seeing one or two examples of a dog, for instance, does not teach the computer how to identify all dogs.

 

Humans, on the other hand, seem to overcome these challenges effortlessly. We can make out a friend in a crowd, focus on a familiar voice in a noisy setting, and deduce patterns in sounds or an image based on just one or a handful of examples. We are constantly learning to generalize without the need for any instructions. And so the MICrONS researchers have turned to the brain to find what these models are missing. “That’s the smoking gun,” Cox says.

 

While neural networks retain elements of the architecture found in the brain, the computations they use are not copied directly from any algorithms that neurons use to process information. In other words, the ways in which current algorithms represent, transform, and learn from data are engineering solutions, determined largely by trial and error. They work, but scientists do not really know why—certainly not well enough to define a way to design a neural network. Whether this neural processing is similar to or different from corresponding operations in the brain remains unknown. “So if we go one level deeper and take information from the brain at the computational level and not just the architectural level, we can enhance those algorithms and get them closer to brain-like performance,” Vogelstein says.

Scooped by Dr. Stefan Gruenwald

Competition for quantum computers: NP-complete problem solved with biological motors


Quantum computers get a lot of people excited because they solve problems in a manner that's fundamentally different from existing hardware. A certain class of mathematical problems, called NP-complete, can seemingly only be solved by exploring every possible solution, which conventional computers have to do one at a time.


Quantum computers, by contrast, can explore all possible solutions simultaneously, and so they can provide answers relatively rapidly.

This isn't just an intellectual curiosity; encryption schemes rely on such problems being too computationally challenging to solve, which is what keeps encrypted messages secure.

But as you may have noticed, we don't yet have quantum computers, and the technical hurdles between us and them remain substantial. An international team recently decided to try a different approach, using biology to explore a large solution space in parallel. While their computational machine is limited, the basic approach works, and it's 10,000 times more energy-efficient than traditional computers.


The basic setup of their system is remarkably simple. They create a grid of channels with two types of intersections. One allows everything passing down a grid line to continue traveling in a straight line; the second divides things evenly between the two output lines. A mathematical problem can then be encoded in the pattern of these two types of switches on the grid.


One way to envision this is as a series of channels that balls can roll down. To solve the problem, you simply start a large population of balls rolling from the top. They can explore all the possible solution space on their way traveling toward the bottom and should end up collecting at the exits that correspond to any possible solutions.


Of course, doing this with something macroscopic would consume a great deal of space, time, and labor. You want something small so that the switches can be placed close together, and you want something that traverses the channels relatively quickly. For this, the authors turned to biology.


Within a cell, there are networks of protein fibers that motor proteins can crawl across using ATP as an energy source. These move things around within the cell, allow the cell itself to move, and drive larger processes like the contraction of muscles. For their computations, the authors flipped this around: the channels of the grid were coated in the motor proteins, and the authors loaded fragments of the fibers at the top of the grid. As long as ATP was provided, the fibers were quickly shuffled down to the bottom of the grid, traveling at speeds of up to 10µm a second. They were tagged with fluorescent molecules, which made it easy to track their progress.


The process wasn't 100 percent precise; filaments failed to travel straight through the pass-through junctions about 0.2 percent of the time. Still, when the authors set up the grid to solve an NP-complete problem (the subset sum problem), there was a statistically significant difference between the number of fiber fragments that exited at gates corresponding to solutions and those that exited at incorrect gates.
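A toy way to see how such a grid encodes the subset sum problem is sketched below. This is a Monte Carlo abstraction of the idea only; the junction behavior, error model and set of numbers are invented for illustration and are not the geometry used in the actual device.

```python
import random
from collections import Counter

def run_network(numbers, n_agents=100_000, error_rate=0.0):
    """Toy abstraction of the molecular-motor grid for the subset-sum problem.

    Each agent rolls 'down' the grid; at the split junction for each number it
    randomly either keeps its column (exclude the number) or shifts over by
    that number (include it). Exits therefore correspond to subset sums.
    error_rate mimics the ~0.2% of filaments that take a wrong turn.
    """
    exits = Counter()
    for _ in range(n_agents):
        position = 0
        for n in numbers:
            if random.random() < 0.5:          # split junction: include the number
                position += n
            if random.random() < error_rate:   # occasional wrong turn
                position += random.choice([-1, 1])
        exits[position] += 1
    return exits

if __name__ == "__main__":
    numbers = [2, 5, 9]                        # the set to draw subset sums from
    exits = run_network(numbers)
    print("Exits with traffic (candidate subset sums):", sorted(exits))
    # Expected: 0, 2, 5, 7, 9, 11, 14, 16 -- every sum of a subset of {2, 5, 9}
```

Every exit that collects traffic corresponds to a sum reachable by some subset of the input numbers, which is essentially the information the fluorescence counts at the real device's exits provide.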

Scooped by Dr. Stefan Gruenwald

Real or computer-generated: Can you tell the difference?


Which of these are photos vs. computer-generated images?


As computer-generated characters become increasingly photorealistic, people are finding it harder to distinguish between real and computer-generated images, a Dartmouth College-led study has found. This has introduced complex forensic and legal issues, such as how to distinguish between computer-generated and photographic images of child pornography, says Hany Farid, a professor of computer science and pioneering researcher in digital forensics at Dartmouth, and senior author of a paper in the journal ACM Transactions on Applied Perception. “This can be problematic when a photograph is introduced into a court of law and the jury has to assess its authenticity,” Farid says.


In their study, Farid’s team conducted perceptual experiments in which 60 high-quality computer-generated and photographic images of men’s and women’s faces were shown to 250 observers. Each observer was asked to classify each image as either computer generated or photographic. Observers correctly classified photographic images 92 percent of the time, but correctly classified computer-generated images only 60 percent of the time.


But in a follow-up experiment, when the researchers provided a second set of observers with some training before the experiment, their accuracy on classifying photographic images fell slightly to 85 percent but their accuracy on computer-generated images jumped to 76 percent. With or without training, observers performed much worse than Farid’s team observed five years ago in a study when computer-generated imagery was not as photorealistic.


“We expect that human observers will be able to continue to perform this task for a few years to come, but eventually we will have to refine existing techniques and develop new computational methods that can detect fine-grained image details that may not be identifiable by the human visual system,” says Farid.

Shafique Miraj Aman's curator insight, February 29, 2016 11:40 PM

This is a good article to show how computer graphics technology will slowly overcome the uncanny valley.

Babara Lopez's curator insight, March 4, 2016 8:46 PM
It's hard...
Taylah Mancey's curator insight, March 24, 2016 2:28 AM

With technology always changing and getting better, the results are not always an advantage. While forensic science has improved due to technological advancements, this story shows that it can also be affected in a negative way. Technology in this instance has made it more difficult for justice to prevail.

Scooped by Dr. Stefan Gruenwald

'Eternal' 5D data storage could reliably record the history of humankind for billions of years


Digital documents stored in nano-structured dots in glass for billions of years could survive the end of the human race.


Scientists at the University of Southampton Optoelectronics Research Centre (ORC) have developed the first digital data storage system capable of creating archives that can survive for billions of years. Using nanostructured glass, the system offers a capacity of 360 TB per disc, thermal stability up to 1,000°C, and a virtually unlimited lifetime at room temperature (or 13.8 billion years at 190°C).


As a “highly stable and safe form of portable memory,” the technology opens up a new era of “eternal” data archiving that could be essential to cope with the accelerating amount of information currently being created and stored, the scientists say.* The system could be especially useful for organizations with big archives, such as national archives, museums, and libraries, according to the scientists.


The recording system uses an ultrafast laser to produce extremely short (femtosecond) and intense pulses of light. The file is written in three layers of nanostructured dots, separated by five micrometers (millionths of a meter), in fused quartz, which the team has dubbed a “Superman memory crystal” after the memory crystals used in the Superman films.


The self-assembled nanostructures change the way light travels through glass, modifying the polarization of light, which can then be read by an optical microscope combined with a polarizer, similar to the one found in Polaroid sunglasses. The recording method is described as “5D” because the information encoding is in five dimensions — three-dimensional position plus size and orientation.


So far, the researchers have saved major documents from human history, such as the Universal Declaration of Human Rights (UDHR), Newton’s Opticks, the Magna Carta, and the King James Bible as digital copies. A copy of the UDHR encoded to 5D data storage was recently presented to UNESCO by the ORC at the International Year of Light (IYL) closing ceremony in Mexico.


The team is now looking for industry partners to further develop and commercialize this technology. The researchers will present their research at the photonics industry’s SPIE (the International Society for Optical Engineering) conference in San Francisco on Wednesday, Feb. 17.


* In 2008, the International Data Corporation [found] that the total capacity of data stored is increasing by around 60% each year. As a result, more than 39,000 exabytes of data will be generated by 2020. This amount of data will cause a series of problems, and one of the main ones will be power consumption. In 2010, data centers accounted for 1.5% of total U.S. electricity consumption. According to a report by the Natural Resources Defense Council, the power consumption of all data centers in the U.S. will reach roughly 140 billion kilowatt-hours per year by 2020. This amount of electricity is equivalent to that generated by roughly thirteen Heysham 2 nuclear power stations (one of the biggest stations in the UK, net 1,240 MWe).
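As a quick sanity check of the footnote's comparison, the arithmetic below assumes each station runs continuously at its quoted net capacity (an idealization; real capacity factors are lower):

```python
# Back-of-the-envelope check of the footnote's comparison.
net_capacity_MW = 1240            # Heysham 2 net capacity quoted above
hours_per_year = 24 * 365

# Annual output of one station, assuming continuous full-power operation.
kwh_per_station = net_capacity_MW * 1000 * hours_per_year   # kWh/year
print(f"One station: ~{kwh_per_station / 1e9:.1f} billion kWh/year")   # ~10.9

projected_datacenter_kwh = 140e9  # 140 billion kWh/year by 2020 (NRDC figure above)
print(f"Stations needed: ~{projected_datacenter_kwh / kwh_per_station:.1f}")  # ~12.9
```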


Most of these data centers are built on hard-disk drives (HDDs), with only a few based on optical discs. HDD is the most popular solution for digital data storage, according to the International Data Corporation. However, HDD is not an energy-efficient option for data archiving; its loading energy consumption is around 0.04 W/GB. In addition, HDD is an unsatisfactory candidate for long-term storage because of the short lifetime of the hardware, and data must be transferred every two years or so to avoid loss.

Scooped by Dr. Stefan Gruenwald

WIRED: Machine Learning Works Great — Mathematicians Just Don’t Know Why

In mathematical terms, these supervised-learning systems are given a large set of inputs and the corresponding outputs; the goal is for a computer to learn the function that will reliably transform a new input into the correct output. To do this, the computer breaks down the mystery function into a number of layers of unknown functions called sigmoid functions. These S-shaped functions look like a street-to-curb transition: a smoothed step from one level to another, where the starting level, the height of the step and the width of the transition region are not determined ahead of time.

Inputs enter the first layer of sigmoid functions, which spits out results that can be combined before being fed into a second layer of sigmoid functions, and so on. This web of resulting functions constitutes the “network” in a neural network. A “deep” one has many layers.
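A minimal NumPy sketch of the construction just described: inputs pass through one layer of sigmoid units, the results are combined and fed to a second layer. The layer sizes and random weights are placeholders for illustration, not a trained network.

```python
import numpy as np

def sigmoid(x):
    """S-shaped 'street-to-curb' step; weights shift and scale it."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# A tiny 2-layer network: 3 inputs -> 4 sigmoid units -> 2 sigmoid outputs.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def network(x):
    h = sigmoid(W1 @ x + b1)      # first layer of sigmoid functions
    return sigmoid(W2 @ h + b2)   # second layer combines the first layer's outputs

x = np.array([0.2, -1.0, 0.7])    # an arbitrary input
print(network(x))                 # the function value the network assigns to x
```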


Decades ago, researchers proved that these networks are universal, meaning that they can generate all possible functions. Other researchers later proved a number of theoretical results about the unique correspondence between a network and the function it generates. But these results assume networks that can have extremely large numbers of layers and of function nodes within each layer. In practice, neural networks use anywhere between two and two dozen layers. Because of this limitation, none of the classical results come close to explaining why neural networks and deep learning work as spectacularly well as they do.


It is the guiding principle of many applied mathematicians that if something mathematical works really well, there must be a good underlying mathematical reason for it, and we ought to be able to understand it. In this particular case, it may be that we don’t even have the appropriate mathematical framework to figure it out yet. Or, if we do, it may have been developed within an area of “pure” mathematics from which it hasn’t yet spread to other mathematical disciplines.


Another technique used in machine learning is unsupervised learning, which is used to discover hidden connections in large data sets. Let’s say, for example, that you’re a researcher who wants to learn more about human personality types. You’re awarded an extremely generous grant that allows you to give 200,000 people a 500-question personality test, with answers that vary on a scale from one to 10. Eventually you find yourself with 200,000 data points in 500 virtual “dimensions”—one dimension for each of the original questions on the personality quiz. These points, taken together, form a lower-dimensional “surface” in the 500-dimensional space in the same way that a simple plot of elevation across a mountain range creates a two-dimensional surface in three-dimensional space.


What you would like to do, as a researcher, is identify this lower-dimensional surface, thereby reducing the personality portraits of the 200,000 subjects to their essential properties—a task that is similar to finding that two variables suffice to identify any point in the mountain-range surface. Perhaps the personality-test surface can also be described with a simple function, a connection between a number of variables that is significantly smaller than 500. This function is likely to reflect a hidden structure in the data.


In the last 15 years or so, researchers have created a number of tools to probe the geometry of these hidden structures. For example, you might build a model of the surface by first zooming in at many different points. At each point, you would place a drop of virtual ink on the surface and watch how it spread out. Depending on how the surface is curved at each point, the ink would diffuse in some directions but not in others. If you were to connect all the drops of ink, you would get a pretty good picture of what the surface looks like as a whole. And with this information in hand, you would no longer have just a collection of data points. Now you would start to see the connections on the surface, the interesting loops, folds and kinks. This would give you a map for how to explore it.
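The following small sketch uses ordinary PCA as a stand-in for the more sophisticated geometric tools described above: 500-dimensional "questionnaire" answers that secretly depend on only two latent traits are almost entirely captured by two components. The data, sizes and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend 200 respondents answer 500 questions, but their answers are really
# driven by just two hidden personality traits plus a little noise.
n_people, n_questions, n_traits = 200, 500, 2
traits = rng.normal(size=(n_people, n_traits))            # hidden coordinates
loadings = rng.normal(size=(n_traits, n_questions))       # how traits shape answers
answers = traits @ loadings + 0.05 * rng.normal(size=(n_people, n_questions))

# PCA via SVD of the centered data: singular values show how many dimensions matter.
centered = answers - answers.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
explained = singular_values**2 / np.sum(singular_values**2)
print("Variance explained by first 4 components:", np.round(explained[:4], 3))
# The first two components carry nearly all the variance: the 500-D cloud
# is (approximately) a 2-D surface, as in the mountain-range analogy.
```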


These methods are already leading to interesting and useful results, but many more techniques will be needed. Applied mathematicians have plenty of work to do. And in the face of such challenges, they trust that many of their “purer” colleagues will keep an open mind, follow what is going on, and help discover connections with other existing mathematical frameworks. Or perhaps even build new ones.

Scooped by Dr. Stefan Gruenwald

Will computers ever truly understand what humans are saying?


If you think computers are quickly approaching true human communication, think again. Computers like Siri often get confused because they judge meaning by looking at a word's statistical regularity. This is unlike humans, for whom context is more important than the word or signal, according to a researcher who invented a communication game allowing only nonverbal cues, and used it to pinpoint regions of the brain where mutual understanding takes place.


From Apple's Siri to Honda's robot Asimo, machines seem to be getting better and better at communicating with humans.

But some neuroscientists caution that today's computers will never truly understand what we're saying because they do not take into account the context of a conversation the way people do.

Specifically, say University of California, Berkeley, postdoctoral fellow Arjen Stolk and his Dutch colleagues, machines don't develop a shared understanding of the people, place and situation -- often including a long social history -- that is key to human communication. Without such common ground, a computer cannot help but be confused.

"People tend to think of communication as an exchange of linguistic signs or gestures, forgetting that much of communication is about the social context, about who you are communicating with," Stolk said.

The word "bank," for example, would be interpreted one way if you're holding a credit card but a different way if you're holding a fishing pole. Without context, making a "V" with two fingers could mean victory, the number two, or "these are the two fingers I broke."

"All these subtleties are quite crucial to understanding one another," Stolk said, perhaps more so than the words and signals that computers and many neuroscientists focus on as the key to communication. "In fact, we can understand one another without language, without words and signs that already have a shared meaning."

Babies and parents, not to mention strangers lacking a common language, communicate effectively all the time, based solely on gestures and a shared context they build up over even a short time.

Stolk argues that scientists and engineers should focus more on the contextual aspects of mutual understanding, basing his argument on experimental evidence from brain scans that humans achieve nonverbal mutual understanding using unique computational and neural mechanisms. Some of the studies Stolk has conducted suggest that a breakdown in mutual understanding is behind social disorders such as autism.

"This shift in understanding how people communicate without any need for language provides a new theoretical and empirical foundation for understanding normal social communication, and provides a new window into understanding and treating disorders of social communication in neurological and neurodevelopmental disorders," said Dr. Robert Knight, a UC Berkeley professor of psychology in the campus's Helen Wills Neuroscience Institute and a professor of neurology and neurosurgery at UCSF.

Stolk and his colleagues discuss the importance of conceptual alignment for mutual understanding in an opinion piece appearing Jan. 11 in the journal Trends in Cognitive Sciences.

Rescooped by Dr. Stefan Gruenwald from Science And Wonder

"Writable" Circuits Could Let Scientists Draw Electronics into Existence

"Writable" Circuits Could Let Scientists Draw Electronics into Existence | Amazing Science | Scoop.it

New method uses soft sheets made of silicone rubber that have many tiny droplets of liquid metal embedded inside them.


Scientists have developed a way to produce soft, flexible and stretchy electronic circuits and radio antennas by hand, simply by writing on specially designed sheets of material. This technique could help people draw electronic devices into existence on demand for customized devices, researchers said in a new study describing the method.


Whereas conventional electronics are stiff, new soft electronics are flexible and potentially stretchable and foldable. Researchers around the world are investigating soft electronics for applications such as wearable and implantable devices.


The new technique researchers developed creates circuits by fusing, or sintering, together bits of metal to form electrically conductive wires. But the newly developed process does not use heat, as is often the case with sintering. Instead, this method involves soft sheets made of silicone rubber that have many tiny droplets of liquid metal embedded inside them. Pressing down on these sheets using, for instance, the tip of a pen, ruptures the capsules, much like popping miniature water balloons, and the liquid metal inside can pool to form circuit elements.


"We can make conductive lines by hand simply by writing," said study co-senior author Michael Dickey, a chemical engineer at North Carolina State University in Raleigh. The researchers used a metal known as eutectic gallium indium (EGaIn), a highly electrically conductive alloy that is liquid at about 60 degrees Fahrenheit (15.5 degrees Celsius). They embedded droplets of EGaIn that were only about 100 nanometers, or billionths of a meter, wide into sheets of the a kind of silicone rubber known as PDMS. When these droplets pool together, their electrical conductivity increases about tenfold compared to when they are separate, the researchers said. To understand why, imagine a hallway covered with water balloons.


Via LilyGiraud
Scooped by Dr. Stefan Gruenwald

Scientists have figured out what we need to achieve secure quantum teleportation


"We've got this."


For the first time, researchers have demonstrated the precise requirements for secure quantum teleportation – and it involves a phenomenon known as 'quantum steering', first proposed by Albert Einstein and Erwin Schrödinger.


Before you get too excited, no, this doesn't mean we can now teleport humans like they do on Star Trek. Instead, this research will allow people to use quantum entanglement to send information across large distances without anyone else being able to eavesdrop. Which is almost as cool, because this is how we'll form the un-hackable communication networks of the future.


Quantum teleportation isn't new in itself. Researchers have already had a lot of success quantum teleporting information over 100 km of fiber. But there's a slight issue – the quantum message was getting to the other end kinda incoherent, and scientists haven't exactly known what to do to prevent that from happening, until now. 


"Teleportation works like a sophisticated fax machine, where a quantum state is transported from one location to another," said one of the researchers, Margaret Reid, from Swinburne University of Technology in Australia. "Let’s say 'Alice' begins the process by performing operations on the quantum state – something that encodes the state of a system – at her station. Based on the outcomes of her operations, she communicates (by telephone or public Internet) to 'Bob' at a distant location, who is then able to create a replica of the quantum state," she explains.


"The problem is that unless special requirements are satisfied, quantum mechanics demands that the state at Bob’s end will be 'fuzzed up'." The researchers have now shown that to avoid this, Alice and Bob (or anyone else who wants to send an entangled message) need to use a special form of quantum entanglement known as 'Einstein-Podolsky-Rosen steering'.


"Only then can the quality of the transported state be perfect," said Reid. "The beauty is that quantum mechanics guarantees that a perfect state can only be transported to one receiver. Any second 'eavesdropper' will get a fuzzy version." Basically, in this quantum steering state, the measurement of one entangled particle can have an immediate 'steering' effect on the state of another distant particle.

The researchers will continue to investigate this phenomenon to figure out how it can be used to more reliably communicate using quantum entanglement.

Rescooped by Dr. Stefan Gruenwald from Systems Theory

China still has the fastest supercomputer, and now has more than 100 in service


The Top 500 supercomputer rankings are a fun way to gauge which countries boast the most powerful rigs in the world. And, perhaps unsurprisingly, China has won the top spot for the sixth time in a row.

Not only that, but the nation has nearly tripled its supercomputer count from 37 to 109 in only six months. Although the US still maintains a healthy 201 supercomputers, keeping first place in terms of quantity, that’s actually a record low for the nation in the Top 500, which was conceived back in 1993.


Produced by its National University of Defense Technology, China’s Tianhe-2 boasts a whopping 3,120,000 cores with the ability to achieve 33.86 quadrillion floating point operations (flops) per second. As if that information alone wasn’t intimidating enough, those numbers are almost double that of the US energy department’s still powerful, but not quite as monumental, Titan Cray XK7, apparently capable of 17.59 petaflops, according to the Linpack benchmark.


A United States-owned rig, IBM’s Sequoia, also occupies the third-place position; it was custom-built for the National Nuclear Security Administration and is housed in the Lawrence Livermore National Lab. The Sequoia, which claimed the top spot in 2012, has since been surpassed by both the Tianhe-2 and the Titan Cray XK7.


Among the top 10 of the Top 500, only the Trinity and Hazel Hen are fresh faces to the list, positioned at numbers 6 and 8, respectively. While the Trinity was conceived for the US Department of Energy, the Hazel Hen rests in Stuttgart, Germany.


In an interview with the BBC, Rajnish Arora, vice president of enterprise computing at IDC Asia Pacific, explained to the network that China’s domination in the supercomputer space is less reflective of the United States’ inability to compete and more representative of China’s economic growth.


“When China started off appearing on the center stage of the global economy in the 80s and 90s, it was predominately a manufacturing hub,” Arora said. “All the IP or design work would happen in Europe or the US and the companies would just send manufacturing or production jobs to China. Now as these companies become bigger, they want to invest in technical research capabilities, so that they can create a lot more innovation and do basic design and engineering work.”


Via Ben van Lier
Rescooped by Dr. Stefan Gruenwald from Research Workshop

Unsupervised, Mobile and Wireless Brain–Computer Interfaces on the Horizon

Juliano Pinto, a 29-year-old paraplegic, kicked off the 2014 World Cup in São Paulo with a robotic exoskeleton suit that he wore and controlled with his mind. The event was broadcast internationally and served as a symbol of the exciting possibilities of brain-controlled machines. Over the last few decades, research into brain–computer interfaces (BCIs), which allow direct communication between the brain and an external device such as a computer or prosthetic, has skyrocketed. Although these new developments are exciting, there are still major hurdles to overcome before people can easily use these devices as a part of daily life.

Until now such devices have largely been proof-of-concept demonstrations of what BCIs are capable of. Currently, almost all of them require technicians to manage and include external wires that tether individuals to large computers. New research, conducted by members of the BrainGate group, a consortium that includes neuroscientists, engineers and clinicians, has made strides toward overcoming some of these obstacles. “Our team is focused on developing what we hope will be an intuitive, always-available brain–computer interface that can be used 24 hours a day, seven days a week, that works with the same amount of subconscious thought that somebody who is able-bodied might use to pick up a coffee cup or move a mouse,” says Leigh Hochberg, a neuroengineer at Brown University who was involved in the research. Researchers are opting for these devices to also be small, wireless and usable without the help of a caregiver.

Via Wildcat2030, Jocelyn Stoller
Lucile Debethune's curator insight, November 22, 2015 12:48 PM

An interesting approach to the human-machine interface, and the BrainGate group brings very good ideas to this subject. One to watch.

 

Scooped by Dr. Stefan Gruenwald

Face-tracking software lets you make anyone say anything in real time

You know how they say, "Show me pictures or video, or it didn't happen"? Well, the days when you could trust what you see on video in real time are officially coming to an end thanks to a new kind of face tracking.

 

A team from Stanford, the Max Planck Institute for Informatics and the University of Erlangen-Nuremberg has produced a video demonstrating how its software, called Face2Face, in combination with a common webcam, can make any person on video appear to say anything a source actor wants them to say.

 

In addition to perfectly capturing the real-time talking motions of the actor and placing them seamlessly on the video subject, the software also accounts for real-time facial expressions, including distinct movements such as eyebrow raises.

 

To show off the system, the team used YouTube videos of former U.S. President George W. Bush, Russian President Vladimir Putin and Republican presidential candidate Donald Trump. In each case, the facial masking is flawless, effectively turning the video subject into the actor's puppet.

 

It might be fun to mix this up with something like "Say it with Trump," but for now the software is still in the research phase. "Unfortunately, the software is currently not publicly available — it's just a research project," team member Matthias Niessner told Mashable. "However, we are thinking about commercializing it given that we are getting so many requests." We knew this kind of stuff was possible in the special effects editing room, but the ability to do it in real time — without those nagging "uncanny valley" artifacts — could change how we interpret video documentation forever.

Scooped by Dr. Stefan Gruenwald

Korean Go champ scores surprise victory over supercomputer

A South Korean Go grandmaster on Sunday scored his first win over a Google-developed supercomputer, in a surprise victory after three humiliating defeats in a high-profile showdown between man and machine.

 

Lee Se-Dol defeated AlphaGo after a nail-biting match that lasted for nearly five hours—the fourth of the best-of-five series in which the computer clinched a 3-0 victory on Saturday.

 

Lee struggled in the early phase of the fourth match but gained a lead towards the end, eventually prompting AlphaGo to resign.

The 33-year-old is one of the greatest players in the modern history of the ancient board game, with 18 international titles to his name—the second most in the world.

 

"I couldn't be happier today...this victory is priceless. I wouldn't trade it for the world," a smiling Lee said after the match to cheers and applause from the audience. "I can't say I wasn't hurt by the past three defeats...but I still enjoyed every moment of playing so it really didn't damage me greatly," he said.

 

Lee earlier predicted a landslide victory over Artificial Intelligence (AI) but was later forced to concede that AlphaGo was "too strong". Lee had vowed to try his best to win at least one game after his second defeat.

 

Described as the "match of the century" by local media, the game was closely watched by tens of millions of Go fans mostly in East Asia as well as AI scientists.

 

The most famous AI victory to date came in 1997, when the IBM-developed supercomputer Deep Blue beat then-world chess champion Garry Kasparov. But Go, played for centuries mostly in Korea, Japan and China, had long remained the holy grail for AI developers due to its complexity and near-infinite number of potential configurations.

Rescooped by Dr. Stefan Gruenwald from Systems Theory

Xiaoice: The largest Turing test in history is carried out by Microsoft in China

Microsoft's Xiaoice, which means "little Bing" in Chinese, has been described as the "largest Turing test in history" by a top researcher at the company.

 

In a report for Nautilus, the researcher, Yongdong Wang, outlined what the company was doing with Xiaoice — pronounced "Shao-ice" and translated as "little Bing" — in China. The Turing test, created by British scientist Alan Turing, checks whether a human can detect if the entity answering a question is a human or a computer. The test has become a fixture in pop culture, with a starring role in the film "Ex Machina."

 

According to Wang, Xiaoice is used by millions of people every day and can respond with human-like answers, questions, and "thoughts." If a user sends a picture of a broken ankle, Xiaoice will reply with a question asking about how much pain the injury caused, for example.

 

"Xiaoice can exchange views on any topic," Wang wrote. "If it's something she doesn't know much about, she will try to cover it up. If that doesn't work, she might become embarrassed or even angry, just like a human would."

 

A report from GeekWire said millions of Chinese users were telling Xiaoice that they loved it, without any apparent irony. About 25% of Xiaoice's users — 10 million people — had said "I love you" while using the service.

 

But very few people outside Asia have heard of Xiaoice. According to Google Trends, which tracks what people are searching for, interest spiked in August last year but has fallen.


Via Ben van Lier
Scooped by Dr. Stefan Gruenwald

Google is using machine learning to teach robots intelligent reactive behaviors


Using your hand to grasp a pen that’s lying on your desk doesn’t exactly feel like a chore, but for robots, that’s still a really hard thing to do. So to teach robots how to better grasp random objects, Google’s research team dedicated 14 robots to the task. The standard way to solve this problem would be for the robot to survey the environment, create a plan for how to grasp the object, then execute on it. In the real world, though, lots of things can change between formulating that plan and executing on it.

 

Google is now using these robots to train a deep convolutional neural network (a technique that’s all the rage in machine learning right now) to help its robots predict the outcome of their grasps based on the camera input and motor commands. It’s basically hand-eye coordination for robots.
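Schematically, such a learned predictor can close the hand-eye loop by scoring candidate motor commands against the current camera frame and executing the most promising one. The sketch below is purely illustrative: the linear "predictor", the 8×8 image, the three-component commands and the candidate sampling are stand-ins, not Google's actual network or robot interface.

```python
import numpy as np

rng = np.random.default_rng(42)

def predict_success(image, command, weights):
    """Stand-in for the trained convolutional network: maps (camera image,
    candidate motor command) to a probability that the grasp will succeed."""
    features = np.concatenate([image.ravel(), command])
    return 1.0 / (1.0 + np.exp(-features @ weights))

def choose_command(image, weights, n_candidates=64):
    """Sample candidate motor commands and pick the one with the highest
    predicted grasp success -- the 'reactive' part of the control loop."""
    candidates = rng.normal(size=(n_candidates, 3))       # e.g. (dx, dy, dz) moves
    scores = [predict_success(image, c, weights) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Dummy camera frame and dummy (random, untrained) weights, for illustration only.
image = rng.random((8, 8))
weights = rng.normal(size=8 * 8 + 3)

for step in range(5):
    command = choose_command(image, weights)
    # robot.execute(command)  # on real hardware, move the gripper here
    print(f"step {step}: move gripper by {np.round(command, 2)}")
    image = rng.random((8, 8))  # in reality: grab a fresh camera frame and repeat
```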

 

The team says that it took about 3,000 hours of practice (and 800,000 grasp attempts) before it saw “the beginnings of intelligent reactive behaviors.”

 

“The robot observes its own gripper and corrects its motions in real time. It also exhibits interesting pre-grasp behaviors, like isolating a single object from a group,” the team writes. “All of these behaviors emerged naturally from learning, rather than being programmed into the system.”

 

Google’s researchers say the average failure rate without training was 34 percent on the first 30 picking attempts. After training, that number was down to 18 percent. Still not perfect, but the next time a robot comes running after you and tries to grab you, remember that it now has an 80 percent chance of succeeding.

Rescooped by Dr. Stefan Gruenwald from Fragments of Science

Building living, breathing supercomputers


The substance that provides energy to all the cells in our bodies, adenosine triphosphate (ATP), may also be able to power the next generation of supercomputers. That is what an international team of researchers led by Prof. Nicolau, the Chair of the Department of Bioengineering at McGill, believes. They published an article on the subject earlier this week in PNAS, in which they describe a model biological computer they have created that is able to process information very quickly and accurately using parallel networks, in the same way that massive electronic supercomputers do.


Except that the model bio supercomputer they have created is a whole lot smaller than current supercomputers, uses much less energy, and uses proteins present in all living cells to function. "We've managed to create a very complex network in a very small area," says Dan Nicolau, Sr. with a laugh. He began working on the idea with his son, Dan Jr., more than a decade ago and was then joined by colleagues from Germany, Sweden and The Netherlands, some 7 years ago. "This started as a back of an envelope idea, after too much rum I think, with drawings of what looked like small worms exploring mazes."


The model bio-supercomputer that the Nicolaus (father and son) and their colleagues have created came about thanks to a combination of geometrical modelling and engineering knowhow (on the nano scale). It is a first step, in showing that this kind of biological supercomputer can actually work.


The circuit the researchers have created looks a bit like a road map of a busy and very organized city as seen from a plane. Just as in a city, cars and trucks of different sizes, powered by motors of different kinds, navigate through channels that have been created for them, consuming the fuel they need to keep moving.


Via Mariaschnee
Scooped by Dr. Stefan Gruenwald

Human vs. Machine: RNA paper based on a computer game, authorship creates identity crisis


A journal published a paper today that reveals a set of folding constraints in the design of RNA molecules. So far, so normal.

Most of the data for the study come from an online game that crowdsources solutions from thousands of nonexpert players—unusual but not unique. But the lead authors of the paper are the players themselves. Now that is a first. And there's a twist: The journal nearly delayed publication because of "ethical" concerns about authors using only their game names.


The game is called Eterna, and it made a big splash in 2014 with a paper in the Proceedings of the National Academy of Sciences that had 37,000 players as co-authors. The goal was to see whether nonexpert humans can do better than computer algorithms at designing RNA sequences that fold into particular shapes. And indeed, the humans won, even after the computer algorithms were endowed with insights from the human folders. When it comes to the biophysics of RNA folding, John Henry still beats the machine.


That 2014 study was led by card-carrying scientists Adrien Treuille and Rhiju Das, biophysicists at Carnegie Mellon University in Pittsburgh, Pennsylvania, and Stanford University in Palo Alto, California, respectively. The two researchers created the game in 2009. (They both cut their teeth on scientific game design as postdocs in the lab of David Baker at the University of Washington, Seattle, where the blockbuster game FoldIt was conceived.) Since then they have massively scaled up the process and hooked the game to a real-world automated lab that actually tests the folding predictions made by players against the 3D structure of the RNA molecules. They call it the Eterna Massive Open Laboratory.


The newest paper shows how far the effort has come. Among the game's thousands of RNA design "puzzles," there seem to be a small set that are particularly difficult. Among the most challenging structural features to figure out is symmetry, where an RNA strand folds into two or more identically shaped loops. The Eterna game includes an interface for players to propose hypotheses about how particular RNA structures will or will not fold into particular shapes. Those were distilled into a set of "designability" rules. The question was: Do only human designers struggle with thorny design problems, or do computer simulations tussle too?


The answer is that the computers struggled just as much as the people. Researchers report that three of the best existing computer algorithms, running on a supercomputer at Stanford, struggled to solve the very same RNA design problems as the humans. The result shows that the human "designability" rules do indeed correspond to problems that are hard not just for human brains but also for computers, the team reports today in the Journal of Molecular Biology. In fact, the hardest puzzles that could be solved by experienced Eterna players were unsolvable by the computer even after days of crunching. And to help improve the algorithms, computer scientists now have a set of benchmarks—the Eterna100—to gauge the design difficulty of RNA structures.
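To make the flavor of these design puzzles concrete, here is a toy check of the kind a designer (human or algorithm) must satisfy: given a target secondary structure in dot-bracket notation, every paired position in the candidate sequence must hold a pairable base. This drastically simplifies real RNA thermodynamics and Eterna's actual scoring; the hairpin target and sequences below are made up for illustration.

```python
def paired_positions(dotbracket):
    """Return index pairs (i, j) for matching '(' and ')' in a dot-bracket string."""
    stack, pairs = [], []
    for i, ch in enumerate(dotbracket):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            pairs.append((stack.pop(), i))
    return pairs

def satisfies_target(sequence, dotbracket,
                     allowed=frozenset({"AU", "UA", "GC", "CG", "GU", "UG"})):
    """Naive design check: every paired position must hold a pairable base pair.
    (Real scoring uses folding thermodynamics; this is only a toy constraint.)"""
    return all(sequence[i] + sequence[j] in allowed
               for i, j in paired_positions(dotbracket))

target = "((((....))))"            # a small hairpin: 4 base pairs, 4-nt loop
print(satisfies_target("GCGCAAAAGCGC", target))  # True: stem closed by G-C pairs
print(satisfies_target("GCGCAAAAGCGA", target))  # False: outermost pair G-A invalid
```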

Scooped by Dr. Stefan Gruenwald

A Toolkit for Silicon-based Quantum Computing

Before quantum computing becomes practical, researchers will need to find a practical way to store information as quantum bits, or qubits. Researchers are making significant progress toward the creation of electronic devices based on qubits made of single ions implanted in silicon, one of the most practical of all materials.


“Bit” is a contraction of “binary digit,” but unlike a classical bit, which is plain-vanilla binary with a value of either 0 or 1, a quantum bit, or qubit — the theoretical basis of quantum computing — holds both 0 and 1 in a superposed state until it is measured.


A vast computational space can be created with relatively few quantum-mechanically entangled qubits, and the measurement of one qubit can instantly resolve an intricate calculation when all the entangled qubits are “collapsed” to a specific value by the measurement.
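A small NumPy illustration of those two points (a superposition that collapses to 0 or 1 on measurement, and the exponential growth of the state space with the number of entangled qubits); it is generic and not tied to the silicon-dopant devices discussed in the article.

```python
import numpy as np

rng = np.random.default_rng(7)

# A single qubit holds both 0 and 1 in superposition until measured.
qubit = np.array([1, 1j]) / np.sqrt(2)                    # equal-weight superposition
probs = np.abs(qubit) ** 2
print("P(0), P(1) =", probs)                              # 0.5, 0.5
print("one measurement ->", rng.choice([0, 1], p=probs))  # collapses to 0 or 1

# n entangled qubits need a 2**n-dimensional state vector to describe in general:
for n in (2, 10, 30, 50):
    print(f"{n:2d} qubits -> state vector of dimension {2**n:,}")

# Example of entanglement: the 2-qubit Bell state (|00> + |11>)/sqrt(2).
bell = np.zeros(4)
bell[0b00] = bell[0b11] = 1 / np.sqrt(2)
print("Bell state amplitudes:", bell)   # measuring one qubit fixes the other
```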


So how does one make and measure a qubit? The problem has engaged scientists for years. Many arrangements have been proposed and some demonstrated, each with its advantages and disadvantages, including tricky schemes involving superconducting tunnel junctions, quantum dots, neutral atoms in optical lattices, trapped ions probed by lasers, and so on.


In the long run, however, qubits based on individual dopant atoms implanted in silicon may have the edge. The materials and methods of silicon-chip manufacturing are familiar and, when applied to quantum-computer devices, have the potential for easy scale-up.


“There are three pillars to the development program my colleagues and I have been following,” says Thomas Schenkel of Berkeley Lab’s Accelerator and Fusion Research Division. “One is the theory of quantum measurement in the devices we build, led by Professor Birgitta Whaley from the Department of Chemistry at UC Berkeley; another is the fabrication of these devices, headed by myself and Professor Jeff Bokor from UC Berkeley’s Department of Electrical Engineering and Computer Science; and the third is to actually measure quantum states in these devices, an effort led by Professor Steve Lyon from Princeton’s Department of Electrical Engineering. Of course, things don’t necessarily happen in that order.”


Switchable material could enable new memory chips


Two MIT researchers have developed a thin-film material whose phase and electrical properties can be switched between metallic and semiconducting simply by applying a small voltage. The material then stays in its new configuration until switched back by another voltage.


The discovery could pave the way for a new kind of “nonvolatile” computer memory chip that retains information when the power is switched off, and for energy conversion and catalytic applications.

The findings, reported in the journal Nano Letters in a paper by MIT materials science graduate student Qiyang Lu and associate professor Bilge Yildiz, involve a thin-film material called a strontium cobaltite, or SrCoOx.


Usually, Yildiz says, the structural phase of a material is controlled by its composition, temperature, and pressure. “Here for the first time,” she says, “we demonstrate that electrical bias can induce a phase transition in the material. And in fact we achieved this by changing the oxygen content in SrCoOx.”


“It has two different structures that depend on how many oxygen atoms per unit cell it contains, and these two structures have quite different properties,” Lu explains. One of these configurations of the molecular structure is called perovskite, and the other is called brownmillerite. When more oxygen is present, it forms the tightly-enclosed, cage-like crystal structure of perovskite, whereas a lower concentration of oxygen produces the more open structure of brownmillerite.


The two forms have very different chemical, electrical, magnetic, and physical properties, and Lu and Yildiz found that the material can be flipped between the two forms with the application of a very tiny amount of voltage — just 30 millivolts (0.03 volts). And, once changed, the new configuration remains stable until it is flipped back by a second application of voltage.


Strontium cobaltites are just one example of a class of materials known as transition metal oxides, which is considered promising for a variety of applications including electrodes in fuel cells, membranes that allow oxygen to pass through for gas separation, and electronic devices such as memristors — a form of nonvolatile, ultrafast, and energy-efficient memory device. The ability to trigger such a phase change through the use of just a tiny voltage could open up many uses for these materials, the researchers say.


Previous work with strontium cobaltites relied on changes in the oxygen concentration in the surrounding gas atmosphere to control which of the two forms the material would take, but that is inherently a much slower and more difficult process to control, Lu says. “So our idea was, don’t change the atmosphere, just apply a voltage.”


“Voltage modifies the effective oxygen pressure that the material faces,” Yildiz adds. To make that possible, the researchers deposited a very thin film of the material (the brownmillerite phase) onto a substrate, for which they used yttrium-stabilized zirconia.


In that setup, applying a voltage drives oxygen atoms into the material. Applying the opposite voltage has the reverse effect. To observe and demonstrate that the material did indeed go through this phase transition when the voltage was applied, the team used a technique called in-situ X-ray diffraction at MIT’s Center for Materials Science and Engineering.


AI: Deep-learning algorithm predicts photos’ memorability at ‘near-human’ levels


Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a deep-learning algorithm that can predict how memorable or forgettable an image is almost as accurately as humans, and they plan to turn it into an app that tweaks photos to make them more memorable. For each photo, the “MemNet” algorithm also creates a “heat map” (a color-coded overlay) that identifies exactly which parts of the image are most memorable. You can try it out online by uploading your own photos to the project’s “LaMem” dataset.


The research is an extension of a similar algorithm the team developed for facial memorability. The team fed its algorithm tens of thousands of images from several different datasets developed at CSAIL, including LaMem and the scene-oriented SUN and Places. The images had each received a “memorability score” based on the ability of human subjects to remember them in online experiments.
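The underlying recipe is standard supervised learning: show a convolutional network many (image, memorability score) pairs and train it to regress the score. The sketch below is a generic, hypothetical PyTorch version of that idea, not the actual MemNet architecture or weights; the backbone, learning rate, and random data are placeholders.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Turn an image-classification backbone into a memorability regressor by
    # replacing its classifier head with a single sigmoid output in [0, 1].
    # (In practice one would start from pretrained weights.)
    backbone = models.resnet18(weights=None)
    backbone.fc = nn.Sequential(nn.Linear(backbone.fc.in_features, 1), nn.Sigmoid())

    optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()


    def train_step(images: torch.Tensor, scores: torch.Tensor) -> float:
        """One gradient step on a batch of images and their memorability scores."""
        backbone.train()
        optimizer.zero_grad()
        predictions = backbone(images).squeeze(1)  # shape: (batch,)
        loss = loss_fn(predictions, scores)
        loss.backward()
        optimizer.step()
        return loss.item()


    # Placeholder batch: 8 RGB images (224x224) with human-derived scores.
    images = torch.randn(8, 3, 224, 224)
    scores = torch.rand(8)
    print(train_step(images, scores))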


The team then pitted its algorithm against human subjects by having the model predict how memorable a group of people would find a new, never-before-seen image. It performed 30 percent better than existing algorithms and was within a few percentage points of the average human performance. By emphasizing different regions, the algorithm can also potentially increase an image's memorability.
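"Within a few percentage points of human performance" is typically judged by how well the model's ranking of images by predicted memorability agrees with the ranking implied by human recall data, for example via a Spearman rank correlation. A minimal, hypothetical sketch with made-up scores:

    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical memorability scores for ten held-out images.
    human_scores = np.array([0.91, 0.63, 0.72, 0.45, 0.80, 0.55, 0.98, 0.34, 0.60, 0.77])
    model_scores = np.array([0.88, 0.60, 0.75, 0.50, 0.70, 0.58, 0.95, 0.30, 0.66, 0.74])

    # Rank correlation: 1.0 would mean the model orders images exactly as humans do.
    rho, p_value = spearmanr(human_scores, model_scores)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")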


“CSAIL researchers have done such manipulations with faces, but I’m impressed that they have been able to extend it to generic images,” says Alexei Efros, an associate professor of computer science at the University of California at Berkeley. “While you can somewhat easily change the appearance of a face by, say, making it more ‘smiley,’ it is significantly harder to generalize about all image types.”


LaMem is the world’s largest image-memorability dataset. With 60,000 images, each annotated with detailed metadata about qualities such as popularity and emotional impact, LaMem is the team’s effort to spur further research on what they say has often been an under-studied topic in computer vision.


Team members picture a variety of potential applications, from improving the content of ads and social media posts, to developing more effective teaching resources, to creating your own personal “health-assistant” device to help you remember things. The team next plans to try to update the system to be able to predict the memory of a specific person, as well as to better tailor it for individual “expert industries” such as retail clothing and logo design.


The work is supported by grants from the National Science Foundation, as well as the McGovern Institute Neurotechnology Program, the MIT Big Data Initiative at CSAIL, research awards from Google and Xerox, and a hardware donation from Nvidia.


Complexity Theory: Major Advance Reveals the Limits of Computation

A major advance reveals deep connections between the classes of problems that computers can—and can’t—possibly do.


For more than 40 years, researchers had been trying to find a better way to compare two arbitrary strings of characters, such as the long strings of chemical letters within DNA molecules. The most widely used algorithm is slow and not all that clever: It proceeds step-by-step down the two lists, comparing values at each step. If a better method to calculate this “edit distance” could be found, researchers would be able to quickly compare full genomes or large data sets, and computer scientists would have a powerful new tool with which they could attempt to solve additional problems in the field.
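For reference, that "slow and not all that clever" method is the classic quadratic-time dynamic program (Wagner-Fischer): fill a table whose (i, j) entry is the minimum number of single-character insertions, deletions, and substitutions needed to turn the first i characters of one string into the first j characters of the other. A compact Python sketch:

    # Classic O(len(a) * len(b)) edit-distance dynamic program, keeping only
    # the previous row of the table to save memory.
    def edit_distance(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))                # distances from "" to b[:j]
        for i, ca in enumerate(a, start=1):
            curr = [i]                                # distance from a[:i] to ""
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # delete ca
                                curr[j - 1] + 1,      # insert cb
                                prev[j - 1] + cost))  # substitute (or match)
            prev = curr
        return prev[-1]


    print(edit_distance("GATTACA", "GCATGCU"))  # 4

Keeping only one row makes the memory footprint linear, but the running time stays quadratic, and that quadratic barrier is precisely what the new proof is about.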


In a paper presented at the ACM Symposium on Theory of Computing, two researchers from the Massachusetts Institute of Technology put forth a mathematical proof that the current best algorithm was “optimal”—in other words, that finding a more efficient way to compute edit distance was mathematically impossible. The Boston Globe celebrated the hometown researchers’ achievement with a headline that read “For 40 Years, Computer Scientists Looked for a Solution That Doesn’t Exist.”


But researchers aren’t quite ready to record the time of death. One significant loophole remains. The impossibility result is only true if another, famously unproven statement called the strong exponential time hypothesis (SETH) is also true. Most computational complexity researchers assume that this is the case—including Piotr Indyk and Artūrs Bačkurs of MIT, who published the edit-distance finding—but SETH’s validity is still an open question. This makes the article about the edit-distance problem seem like a mathematical version of the legendary report of Mark Twain’s death: greatly exaggerated.


The media’s confusion about edit distance reflects a murkiness in the depths of complexity theory itself, where mathematicians and computer scientists attempt to map out what is and is not feasible to compute as though they were deep-sea explorers charting the bottom of an ocean trench. This algorithmic terrain is just as vast—and poorly understood—as the real seafloor, said Russell Impagliazzo, a complexity theorist who first formulated the exponential-time hypothesis with Ramamohan Paturi in 1999. “The analogy is a good one,” he said. “The oceans are where computational hardness is. What we’re attempting to do is use finer tools to measure the depth of the ocean in different places.”


Skyscraper-style carbon-nanotube chip design 'boosts electronic performance by a factor of a thousand'


Researchers at Stanford and three other universities are creating a revolutionary new skyscraper-like high-rise architecture for computing based on carbon nanotube materials instead of silicon. In Rebooting Computing, a special issue (in press) of the IEEE Computer journal, the team describes its new approach as “Nano-Engineered Computing Systems Technology,” or N3XT.


Suburban-style chip layouts create long commutes and regular traffic jams in electronic circuits, wasting time and energy, they note. N3XT will break data bottlenecks by integrating processors and memory like floors in a skyscraper, and by connecting these components with millions of “vias,” which play the role of tiny electronic elevators.

The N3XT high-rise approach will move more data, much faster, using far less energy, than would be possible using low-rise circuits, according to the researchers.


Stanford researchers including Associate Professor Subhasish Mitra and Professor H.-S. Philip Wong have “assembled a group of top thinkers and advanced technologies to create a platform that can meet the computing demands of the future,” Mitra says.

“When you combine higher speed with lower energy use, N3XT systems outperform conventional approaches by a factor of a thousand,” Wong claims.


Engineers have previously tried to stack silicon chips but with limited success, the researchers suggest. Fabricating a silicon chip requires temperatures close to 1,800 degrees Fahrenheit, making it extremely challenging to build a silicon chip atop another without damaging the first layer. The current approach to what are called 3-D, or stacked, chips is to construct two silicon chips separately, then stack them and connect them with a few thousand wires. But conventional 3-D silicon chips are still prone to traffic jams, and it takes a lot of energy to push data through the relatively few connecting wires.


The N3XT team is taking a radically different approach: building layers of processors and memory directly atop one another, connected by millions of vias that can move more data over shorter distances than traditional wires, using less energy, and merging computation and memory storage into a single electronic super-device.


The key is the use of non-silicon materials that can be fabricated at much lower temperatures than silicon, so that processors can be built on top of memory without the new layer damaging the layer below. As in IBM’s recent chip breakthrough (see “Method to replace silicon with carbon nanotubes developed by IBM Research“), N3XT chips are based on carbon nanotube transistors.


Transistors are the fundamental units of a computer processor, the tiny on-off switches that create digital zeroes and ones. Carbon nanotube transistors are faster and more energy-efficient than silicon transistors, and much thinner. Moreover, in the N3XT architecture, they can be fabricated and placed above and below layers of memory.


How to control information leaks from smartphone apps


A Northeastern University research team has found “extensive” leakage of users’ information — device and user identifiers, locations, and passwords — into network traffic from apps on mobile devices, including iOS, Android, and Windows phones. The researchers have also devised a way to stop the flow.


David Choffnes, an assistant professor in the College of Computer and Information Science, and his colleagues developed a simple, efficient cloud-based system called ReCon. It detects leaks of “personally identifiable information,” alerts users to those breaches, and enables users to control the leaks by specifying what information they want blocked and from whom.
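ReCon itself uses machine learning over network flows to spot likely PII, but the core idea can be illustrated with a much cruder, purely hypothetical sketch: given an outgoing plaintext request, check whether any value the user has registered as sensitive appears in it, and flag suspicious form fields such as plaintext passwords.

    # Toy leak detector (not ReCon's actual method): flag outgoing plaintext
    # requests that contain values the user has marked as sensitive.
    from urllib.parse import parse_qsl

    # Values a hypothetical user wants kept out of plaintext traffic.
    SENSITIVE = {
        "email": "alice@example.com",
        "password": "hunter2",
        "device_id": "35-209900-176148-1",
    }


    def find_leaks(url: str, body: str) -> list[str]:
        """Return which sensitive values appear in a request's URL or body."""
        payload = url + "\n" + body
        leaked = [name for name, value in SENSITIVE.items() if value in payload]
        # Also flag password-like field names in form-encoded bodies.
        for key, _ in parse_qsl(body):
            if key.lower() in {"password", "passwd", "pwd"}:
                leaked.append(f"plaintext field: {key}")
        return leaked


    request_url = "http://tracker.example.net/collect?device_id=35-209900-176148-1"
    request_body = "user=alice%40example.com&password=hunter2"
    print(find_leaks(request_url, request_body))  # flags the device ID and password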


The team’s study followed 31 mobile device users with iOS and Android devices, who used ReCon for periods of one week to 101 days and then monitored their personal leakages through a secure ReCon webpage. The results were alarming. “Depressingly, even in our small user study we found 165 cases of credentials being leaked in plaintext,” the researchers wrote.


Of the top 100 apps in each operating system’s app store that participants were using, more than 50 percent leaked device identifiers, more than 14 percent leaked actual names or other user identifiers, 14–26 percent leaked locations, and three leaked passwords in plaintext. In addition to those top apps, the study found similar password leaks from 10 additional apps that participants had installed and used.


The password-leaking apps included MapMyRun, the language app Duolingo, and the Indian digital music app Gaana. All three developers have since fixed the leaks. Several other apps continue to send plaintext passwords into traffic, including a popular dating app.


“What’s really troubling is that we even see significant numbers of apps sending your password, in plaintext readable form, when you log in,” says Choffnes. In a public Wi-Fi setting, that means anyone running “some pretty simple software” could nab it.

