Amazing Science
Scooped by Dr. Stefan Gruenwald!

Google's Street View address reading software also able to decipher CAPTCHAs


Google engineers working on software to automatically read home and business addresses from photographs taken by Street View vehicles have created a product so good that it can be used not only for address reading but for solving CAPTCHAs as well.

CAPTCHAs are, of course, intentionally distorted words presented to human visitors who wish to enter a web site; to gain access, they must correctly type the word into a box. CAPTCHAs are believed to be difficult, if not impossible, for spam bots to decipher, so they serve to protect the site, at least for now.

It is somewhat ironic that software has inadvertently been created that thwarts the efforts of other software engineers trying to keep spam bots off web sites. The finding was posted by Google Product Manager Vinay Shet on the Google blog.

To make Google Street View (part of Google Maps) ever smarter, engineers have been hard at work developing a sophisticated neural network based on both prior research and new image-recognition techniques. The aim is to make Google's products more accurate. Displaying an image of a house or building from a user-supplied address takes a lot of computational smarts: Google connects new addresses to older known addresses, constantly updating its databases.

Presumably, the goal is to map every building in the known world to an address. But the work has produced an unexpected by-product: the very same software developed for Street View can also decipher CAPTCHAs with 96 percent accuracy (98.8 percent on Google's own reCAPTCHA).
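Google has not published its network's internals in this article, but the core idea of recognizing distorted character sequences can be illustrated with a deliberately tiny sketch: nearest-template matching on noisy glyph bitmaps. The 3x3 "templates" below are invented for illustration; the real system is a deep neural network trained on millions of images.

```python
import numpy as np

# Hypothetical 3x3 glyph templates for a toy alphabet (not Google's model).
TEMPLATES = {
    "0": np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], float),
    "1": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
    "7": np.array([[1, 1, 1], [0, 0, 1], [0, 0, 1]], float),
}

def classify_glyph(img):
    """Return the template label closest to img (nearest-template matching)."""
    return min(TEMPLATES, key=lambda k: np.sum((TEMPLATES[k] - img) ** 2))

def read_string(glyphs):
    """Read a distorted multi-character image, one glyph at a time."""
    return "".join(classify_glyph(g) for g in glyphs)

rng = np.random.default_rng(0)
# Simulate a "CAPTCHA": known glyphs plus mild pixel noise.
distorted = [TEMPLATES[c] + rng.normal(0, 0.2, (3, 3)) for c in "1707"]
print(read_string(distorted))  # prints '1707' despite the distortion
```

The toy survives mild noise because the templates differ in several pixels each; breaking real CAPTCHAs requires learning far richer distortion-invariant features, which is exactly what the Street View network does.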


An Interactive Map Showing Global Cyberattacks In Real Time

Security firm Kaspersky Lab has launched an interactive cyberthreat map that visualizes cyber security incidents occurring worldwide in real time. A quick glance shows that the world is a pretty scary place.

The interactive map is a promotional tool created by Kaspersky Lab, but it's fascinating nonetheless. Threats displayed include malicious objects detected during on-access and on-demand scans, email and web antivirus detections, as well as objects identified by vulnerability and intrusion detection sub-systems.

"Every day Kaspersky Lab handles more than 300,000 malicious objects," the company says. "Three years ago the figure was just 70,000, but antivirus technologies have also changed with the times and we have no problem coping with this huge stream of traffic. Where do the attacks come from? Where do users click on malicious links most often? Which types of malware are the most prevalent? These are the sorts of questions lots of users ask. Our new map of the cyberworld threat landscape allows everyone to see the scale of cyber activity in real time and to get a taste of what it feels like to be one of our experts."


NASA set to release online software catalog


Get ready for a stimulating software catalog. You may want to write "NASA CAT." next to Thursday, April 10, on your calendar. That is the day the National Aeronautics and Space Administration (NASA) will make available to the public, at no cost, more than 1,000 codes with the release of a new online software catalog. The catalog, a master list organized into 15 categories, is intended for industry, academia, other government agencies, and the general public. It covers topics including project management systems, design tools, data handling, image processing, solutions for life support functions, aeronautics, structural analysis, and robotic and autonomous systems. NASA said the codes represent its best solutions to an array of complex mission requirements.

"Software is an increasingly important element of the agency's intellectual asset portfolio," said Jim Adams, deputy chief technologist with NASA. "It makes up about one-third of its reported inventions each year." With this month's release of the software catalog, he said, the software becomes widely available to the public. Each NASA code was evaluated, however, for access restrictions and designated for a specific type of release, ranging from codes open to all U.S. citizens to codes restricted to use by other federal agencies.

The catalog nonetheless fits into NASA's ongoing efforts to transfer more NASA technologies to American industry and U.S. consumers. As Wired's Robert McMillan wrote on Friday, "This NASA software catalog will list more than 1,000 projects, and it will show you how to actually obtain the code you want. The idea [is] to help hackers and entrepreneurs push these ideas in new directions—and help them dream up new ideas."

Adams said, "By making NASA resources more accessible and usable by the public, we are encouraging innovation and entrepreneurship. Our technology transfer program is an important part of bringing the benefit of space exploration back to Earth for the benefit of all people."

Daniel Lockney, technology transfer program executive with NASA's Office of the Chief Technologist, underscored this down-to-earth side of NASA's mission in a 2012 article in Innovation. "NASA really is the gold standard for technology transfer," he said then. "The money spent on research and development doesn't just go up into space; it comes down to earth in the form of some very practical and tangible results."

Lockney said NASA knows the investment in technology creates jobs, boosts the economy and provides benefits beyond the mission focus. "Our technologies have done everything from make hospitals more efficient to making transportation safer and greener. The technology reaches into all aspects of our lives."

Russ Roberts's curator insight, April 5, 10:47 PM

This could prove interesting to anyone interested in science and technology.  Wired writer Robert McMillan says this NASA Software Catalog "will list more than 1,000 projects and it will show you how to actually obtain the codes you want."  NASA deputy chief technologist Jim Adams adds that the release of the catalog "is an important part of bringing the benefits of space exploration back to Earth for the benefit of all people."  Aloha de Russ (KH6JRM).


"Design of a Superconducting Quantum Computer" - Talk by John Martinis

Superconducting quantum computing is now at an important crossroad, where "proof of concept" experiments involving small numbers of qubits can be transitioned to more challenging and systematic approaches that could actually lead to building a quantum computer. Our optimism is based on two recent developments: a new hardware architecture for error detection based on "surface codes" [1], and recent improvements in the coherence of superconducting qubits [2]. I will explain how the surface code is a major advance for quantum computing, as it allows one to use qubits with realistic fidelities, and has a connection architecture that is compatible with integrated circuit technology. Additionally, the surface code allows quantum error detection to be understood using simple principles. I will also discuss how the hardware characteristics of superconducting qubits map into this architecture, and review recent results that suggest gate errors can be reduced to below that needed for the error detection threshold. 


[1] Austin G. Fowler, Matteo Mariantoni, John M. Martinis and Andrew N. Cleland, PRA 86, 032324 (2012). 
[2] R. Barends, J. Kelly, A. Megrant, D. Sank, E. Jeffrey, Y. Chen, Y. Yin, B. Chiaro, J. Mutus, C. Neill, P. O'Malley, P. Roushan, J. Wenner, T. C. White, A. N. Cleland and John M. Martinis, arXiv:1304.2322.


CODE_n: Data Visualizations in Grande Scale Shown at CeBit 2014


I guess that CODE_n, developed by design agency Kram/Weisshaar, is best appreciated in the flesh, that is, at the Hannover Fairgrounds during CeBIT 2014 in Hannover, Germany.

CODE_n consists of more than 3,000 square meters (approx. 33,000 sq ft) of ink-jet-printed textile membranes, stretching more than 260 meters of floor-to-ceiling terapixel graphics. The 12.5-terapixel, 90-meter-long wall-like canopy, titled "Retrospective Trending," shows over 400 lexical-frequency timelines spanning the years 1800 to 2008, each generated using Google's Ngram tool. The hundreds of search terms relate to ethnographic themes of politics, economics, engineering, science, technology, mathematics, and philosophy, yielding historical trajectories of word usage over time.
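The computation behind such a timeline is straightforward: count a term's occurrences per year and normalize by that year's token total. A toy sketch follows; the corpus here is invented, whereas Google's tool runs over millions of scanned books.

```python
from collections import Counter

def frequency_timeline(corpus_by_year, term):
    """Relative frequency of `term` per year, as an Ngram-style viewer
    reports it (occurrences divided by total tokens for that year)."""
    timeline = {}
    for year, text in sorted(corpus_by_year.items()):
        tokens = text.lower().split()
        counts = Counter(tokens)
        timeline[year] = counts[term] / len(tokens) if tokens else 0.0
    return timeline

# Tiny invented corpus standing in for the scanned-book data.
corpus = {
    1900: "steam engine steam rail",
    1950: "engine jet engine",
    2000: "internet engine search internet internet",
}
print(frequency_timeline(corpus, "engine"))
```

Plotting one such dictionary per search term over two centuries of data is, in essence, what each of the 400 printed timelines visualizes.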

The 6.2 terapixel "Hydrosphere Hyperwall" is a visualization of the global ocean as dynamic pathways, polychrome swathes of sea climate, data-collecting swarms of mini robots and sea animals, as well as plumes of narrow current systems. NASA's ECCO2 maps were interwoven with directional arrows that specify wind direction and data vectors that represent buoys, cargo floats, research ships, wave gliders, sea creatures and research stations.

Finally, the 6.6 terapixel "Human Connectome" is a morphological map of the human brain. Consisting of several million multi-coloured fibre bundles and white matter tracts that were captured by diffusion-MRIs, the structural descriptions of the human mind were generated at 40 times the scale of the human body. The 3D map of human neural connections visualizes brain dynamics on an ultra-macro scale as well as the infinitesimal cell-scale.


UCLA engineering team increases power efficiency for future computer processors

Have you ever wondered why your laptop or smartphone feels warm when you're using it? That heat is a byproduct of the microprocessors in your device using electric current to power computer processing functions — and it is actually wasted energy.
Now, a team led by researchers from the UCLA Henry Samueli School of Engineering and Applied Science has made major improvements in computer processing using an emerging class of magnetic materials called "multiferroics," and these advances could make future devices far more energy-efficient than current technologies.
With today's device microprocessors, electric current passes through transistors, which are essentially very small electronic switches. Because current involves the movement of electrons, this process produces heat — which makes devices warm to the touch. These switches can also "leak" electrons, making it difficult to completely turn them off. And as chips continue to get smaller, with more circuits packed into smaller spaces, the amount of wasted heat grows.
The UCLA Engineering team used multiferroic magnetic materials to reduce the amount of power consumed by "logic devices," a type of circuit on a computer chip dedicated to performing functions such as calculations. A multiferroic can be switched on or off by applying alternating voltage — the difference in electrical potential. It then carries power through the material in a cascading wave through the spins of electrons, a process referred to as a spin wave bus.
A spin wave can be thought of as similar to an ocean wave, which keeps water molecules in essentially the same place while the energy is carried through the water, as opposed to an electric current, which can be envisioned as water flowing through a pipe, said principal investigator Kang L. Wang, UCLA's Raytheon Professor of Electrical Engineering and director of the Western Institute of Nanoelectronics (WIN).
"Spin waves open an opportunity to realize fundamentally new ways of computing while solving some of the key challenges faced by scaling of conventional semiconductor technology, potentially creating a new paradigm of spin-based electronics," Wang said.
The UCLA researchers were able to demonstrate that using this multiferroic material to generate spin waves could reduce wasted heat and thereby increase power efficiency for processing by up to 1,000 times. Their research is published in the journal Applied Physics Letters.

Are you ready for the Internet of Cops?


FirstNet — a state-of-the-art communications network for paramedics, firemen and law enforcement at the federal, state and local level — will give cops on the streets unprecedented technological powers, and possibly hand over even more intimate data about our lives to the higher ends of the government and its intelligence agencies, Motherboard reports.

According to a series of presentation slides from December last year, FirstNet will be the “MOST secure wireless network ever built,” as a dedicated 4G network just for first responders.

FirstNet will allow users to “tag” a disaster victim with a small device to allow patients’ vital signs to be monitored from a control center, allowing medical staff to keep an eye on who needs treatment the most at any one time. But FirstNet will also give local law enforcement the ability to take digital “fingerprints from the field,” record and share high-quality video, with facial recognition, and instantaneously marry these freshly sourced data with others over the network.

The uses of FirstNet — biometric data gathering, license plate readers and high speed information sharing — are explicit aims of the project, as laid out in presentations and other documents, along with a possible “kill switch” to disable the civilian network in emergencies.

There is also the possibility that this will create a new means for the federal government to harvest massive quantities of the biometric data being collected by local agencies.

In related news last week, under a surveillance program codenamed Optic Nerve, Britain’s surveillance agency GCHQ, with aid from the NSA, collected millions of still images of Yahoo webcam chats in bulk and saved them to agency databases, The Guardian reported. The agencies also collected gamers’ chats and deployed real-life agents into World of Warcraft and Second Life.


IBM wants to put the power of Watson in your smartphone


Watson, IBM's Jeopardy-conquering supercomputer, has set its sights on mobile apps. Not long ago, the recently created Watson Business Group announced that it would offer APIs to developers to create cloud-based apps built around cognitive computing. Now IBM is launching a competition to lure mobile app creators to its new platform. Over the next three months the company will be taking submissions that leverage Watson's unique capabilities, like deep data analysis and natural language processing, to put impossibly powerful tools in the palm of your hand. IBM is hoping for apps that "change the way consumers and businesses interact with data on their mobile devices." It's an ambitious goal, but considering the way Watson spanked Ken Jennings, it seems well within the machine's reach. Watson has already changed the way we view computers and artificial intelligence, not only by winning Jeopardy, but by making cancer treatment decisions and attending college. Now it wants to make your smartphone smarter than you could ever hope to be.


Scientists create Chameleon virus that could move undetected between Wi-Fi access points


We all know to look out for viruses that can be spread over the internet, or by sharing files between computers. Now, however, scientists at the University of Liverpool have shown for the first time that special viruses could move between wireless access points using existing Wi-Fi networks – as efficiently as the common cold virus spreads between people through the air.

The team computer-simulated an attack by a virus known as Chameleon, which they created. Although the virus didn't affect the functions of the access points (APs) or users' computers, it was able to access and report the credentials of all the people who were using those APs at the time. Some APs were impregnable due to encryption or password protection, but in those cases Chameleon would just move on to other more vulnerable access points.

Due to the fact that existing anti-virus software is only designed to look for viruses in computers or on the internet, the virus itself remained undetected.

The simulated attack was set in London and Belfast. Just as the cold virus spreads more quickly in crowded cities, Chameleon spread faster where multiple APs were located in close proximity to one another.
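The paper's simulation details are not reproduced in this summary, but the proximity-driven spread it describes can be sketched as a toy epidemic over randomly placed APs, with encrypted APs immune to infection. All parameters below are invented for illustration.

```python
import random

def simulate_spread(n_aps=100, radius=0.15, p_protected=0.3, steps=30, seed=1):
    """Toy model: APs at random positions in a unit square; an infection
    hops between unprotected APs within `radius` of each other, so denser
    AP layouts (crowded cities) infect faster."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n_aps)]
    # Encrypted / password-protected APs resist infection entirely.
    protected = [rng.random() < p_protected for _ in range(n_aps)]
    protected[0] = False          # patient zero is an open AP
    infected = {0}
    for _ in range(steps):
        newly = set()
        for i in infected:
            for j in range(n_aps):
                if j not in infected and not protected[j]:
                    dx = pos[i][0] - pos[j][0]
                    dy = pos[i][1] - pos[j][1]
                    if dx * dx + dy * dy <= radius * radius:
                        newly.add(j)
        infected |= newly
        if not newly:             # epidemic has burned out
            break
    return len(infected), n_aps - sum(protected)

hit, vulnerable = simulate_spread()
print(f"{hit} of {vulnerable} vulnerable APs infected")
```

Raising `radius` or lowering `p_protected` in this sketch mirrors the paper's finding that dense, poorly secured AP deployments spread infection fastest.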

"Wi-Fi connections are increasingly a target for computer hackers because of well-documented security vulnerabilities, which make it difficult to detect and defend against a virus," said Prof. Alan Marshall, who took part in the study. "It was assumed, however, that it wasn’t possible to develop a virus that could attack Wi-Fi networks, but we demonstrated that this is possible and that it can spread quickly. We are now able to use the data generated from this study to develop a new technique to identify when an attack is likely."


The Ubi Ubiquitous Computer is Here: Talk to Your Wall and Your Wall will Talk Back


The Ubi is a WiFi-connected, voice-operated computer that plugs into a power outlet and makes the environment around it Internet enabled. Reminiscent of voice controlled computers depicted in science fiction, early uses of the Ubi include Internet search, messaging, and communications without the use of hands or screens. The Ubi also includes sensors that allow for remote monitoring of the environment around it.

Project Ubi Odyssey will allow early adopters of technology to get access to the Ubi, develop connectivity with home automation and Internet services, and create novel human-computer interactions. Those interested can register for the program, and selected candidates will be invited to participate. The beta Ubi costs $299. The program is currently limited to 5,000 participants and to residents of the United States.

The Ubi relies on powerful server technology that processes natural language to infer requests from the user and then pulls data from various Internet sources. Users can easily build voice-driven interactions and connect devices and services through the Ubi Portal. The device is equipped with temperature, humidity, air pressure and ambient light sensors to provide feedback on the environment around it. Also onboard the Ubi are stereo speakers, two microphones, and bright multi-colored LED indicator lights.

Unified Computer Intelligence Corporation CEO Leor Grebler told me the device will also be able to sense devices that are openly connected to the Internet (eventually, the Nest “learning” thermostat and smart smoke/CO2 alarms), “but we’re not controlling devices outright yet. We will add a way to talk to devices/Internet services as well as for them to talk back to the user.”

Here are the impressive specs: Android OS 4.1, 1.6 GHz dual-core ARM Cortex-A9 processor, 1 GB RAM, 802.11 b/g/n Wi-Fi (WPA and WPA2 encryption), stereo speakers and dual microphones, Bluetooth capability, ambient light sensor, cloud-based speech recognition (Google/Android libraries), and natural language understanding.

And you can program its user interface on a computer, or verbally on the Ubi, Grebler said. "We're slowly releasing apps," he said. "We have the first blossoms of an API that will essentially allow any Internet service (email, calendar, Twitter, Facebook, etc.) to have its own voice and be interactive through the Ubi."



Researchers use Google's exacycle cloud computing platform to simulate key drug receptor


Roughly 40 percent of all medications act on cells' G protein-coupled receptors (GPCRs). One of these receptors, the beta-2 adrenergic receptor (B2AR), naturally transforms between two base configurations; knowing the precise location of each of its approximately 4,000 atoms is crucial for ensuring a snug fit between the receptor and a drug.

Now, researchers at Stanford and Google have conducted an unprecedented, atom-scale simulation of the receptor site's transformation, a feat that could have significant impact on drug design. This is the first scientific project to be completed using Google Exacycle's cloud computing platform, which allows scientists to crunch big data on Google's servers during periods of low network demand.

The study was published in the January issue of Nature Chemistry.

As a type of GPCR, the B2AR is a molecule that sits within the membrane of most cells. Various molecules in the body interact with the receptor's exterior, like two hands shaking, to trigger an action inside the cell.

"GPCRs are the gateway between the outside of the cell and the inside," said co-author Vijay Pande, PhD, professor of chemistry and a senior author of the study. "They're so important for biology, and they're a natural, existing signaling pathway for drugs to tap into."

Lead authors of the study were former postdoctoral scholar Kai Kohlhoff, PhD, and current postdoctoral scholars Diwakar Shukla, PhD, and Morgan Lawrenz, PhD.

Roughly half of all known drugs, including pharmaceuticals and naturally occurring molecules such as caffeine, target some GPCR, and many new medications are being designed with these receptor sites in mind. Brian Kobilka, professor of molecular and cellular physiology at Stanford, was awarded the 2012 Nobel Prize in Chemistry for his role in discovering and understanding GPCRs.

Traditionally, maps that detail each atom of GPCRs and other receptors are created through a technique called X-ray crystallography. The technique is industry standard, but it can only visualize a molecule in its resting state; receptors naturally change configurations, and their intermediate forms might also have medical potential.

When developing a drug, scientists will often run a computer program, known as a docking program, that predicts how well the atomic structure of a proposed drug will fit into the known receptor.

In the case of GPCRs, for example, the X-ray crystallography techniques have detailed their "on" and "off" configurations; many medications have been specifically designed to fit into these sites. Scientists expect, however, that other fruitful configurations exist. Many drugs engage with GPCR sites, even though computational models suggest that they don't fit either of the two defined reaction site configurations.
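A docking program's core loop can be caricatured as scoring candidate drug poses against the receptor with a pairwise interaction term and keeping the lowest-energy one. The coordinates below are invented and the Lennard-Jones-like score is a stand-in; real docking software uses far richer force fields and search strategies.

```python
import numpy as np

def pose_score(ligand, receptor):
    """Toy docking score: sum a Lennard-Jones-like term over all
    ligand-receptor atom pairs (lower means a better geometric fit)."""
    d = np.linalg.norm(ligand[:, None, :] - receptor[None, :, :], axis=-1)
    r0 = 1.0                      # assumed ideal contact distance
    return float(np.sum((r0 / d) ** 12 - 2 * (r0 / d) ** 6))

# Two receptor atoms lining a hypothetical binding pocket.
receptor = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
# Two candidate poses of a one-atom "ligand".
pose_a = np.array([[0.75, 1.0, 0.0]])   # sits near the ideal contact distance
pose_b = np.array([[0.75, 0.1, 0.0]])   # clashes with both receptor atoms
best = min([pose_a, pose_b], key=lambda p: pose_score(p, receptor))
```

The article's point is that such scoring is only as good as the receptor structures it is run against, which is why simulating intermediate B2AR configurations matters for drug design.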


Hypnotically beautiful real-time wind map of Earth created by supercomputers


The wind has never been this beautiful! This interactive visualization of wind patterns all around the world is created by a script that downloads weather data from the Global Forecast System at the National Centers for Environmental Prediction, part of NOAA/the National Weather Service. This raw data is then rendered in your browser in the form of a globe that can be moved (drag with the mouse) and zoomed in and out of (use your mouse scroll wheel).

The data is updated every 3 hours, so it is pretty close to real-time considering that this isn't just a small local dataset but covers the whole planet. You can access it here: Earth Wind Map.
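A typical step after downloading the GFS u/v wind components is converting each grid cell to speed and meteorological direction before rendering. A sketch with made-up values standing in for a decoded GFS field:

```python
import numpy as np

# Hypothetical u (eastward) and v (northward) wind components in m/s,
# on a tiny lat-lon grid standing in for a decoded GFS field.
u = np.array([[3.0, 0.0], [-4.0, 5.0]])
v = np.array([[4.0, 2.0], [0.0, 0.0]])

speed = np.hypot(u, v)                      # wind magnitude per grid cell
# Meteorological direction: degrees the wind blows FROM, clockwise from north.
direction = np.degrees(np.arctan2(-u, -v)) % 360
```

With arrays like these, the visualization seeds particles on the globe and advects them along (u, v) each frame, which is what produces the flowing streaks.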

It is reminiscent of this beautiful interactive animated wind map of the U.S., but it's even more beautiful, and it covers the whole planet.


MITRE-Harvard nanocomputer may point the way to future computer miniaturization


An interdisciplinary team of scientists and engineers from The MITRE Corporation and Harvard University has taken key steps toward ultra-small electronic computer systems that push beyond the imminent end of Moore’s Law. They designed and assembled, from the bottom up, a functioning, ultra-tiny control computer (nanocontroller) that they say is the densest nanoelectronic system ever built.

The “nanoelectronic finite-state machine” (“nanoFSM”), or nanocomputer, measures 0.3 x 0.03 millimeters. It is composed of hundreds of nanowire transistors, each a switch less than 20 nanometers wide. The nanowire transistors use very little power because they are “nonvolatile”: the switches remember whether they are on or off, even when no power is supplied to them.

In the nanoFSM, these nanoswitches are assembled and organized into circuits on several “tiles” (modules). Together, the tiles route small electronic signals around the computer, enabling it to perform calculations and process signals that could be used to control tiny systems, such as minuscule medical therapeutic devices, other tiny sensors and actuators, or even insect-sized robots.
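A finite-state machine's control logic is just a transition table mapping (state, input) to (next state, output). A software sketch, with a hypothetical two-state controller standing in for the nanoFSM's actual circuitry:

```python
# Transition table: (state, input) -> (next_state, output).
# Hypothetical controller for a tiny device: it only drives its
# actuator once a "high" sensor reading has been confirmed.
TRANSITIONS = {
    ("idle", "low"):   ("idle", 0),
    ("idle", "high"):  ("armed", 0),
    ("armed", "high"): ("armed", 1),   # fire while the signal stays high
    ("armed", "low"):  ("idle", 0),
}

def run_fsm(inputs, state="idle"):
    """Step the machine over a sequence of input symbols."""
    outputs = []
    for symbol in inputs:
        state, out = TRANSITIONS[(state, symbol)]
        outputs.append(out)
    return outputs

print(run_fsm(["low", "high", "high", "high", "low"]))  # -> [0, 0, 1, 1, 0]
```

In the nanoFSM this table is realized in nonvolatile nanowire switches rather than software, which is why the state survives with no power applied.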

In 2011, the MITRE-Harvard team demonstrated a single such tiny tile capable of performing simple logic operations (ultra-tiny nanocircuits). In their recent collaboration they combined three tiles on a single chip to produce a first-of-its-kind complex programmable nanocomputer.

“It was a challenge to develop a system architecture and nanocircuit designs that would pack the control functions we wanted into such a very tiny system,” according to Shamik Das, chief architect of the nanocomputer, who is also principal engineer and group leader of MITRE’s Nanosystems Group. “Once we had those designs, though, our Harvard collaborators did a brilliant job innovating to be able to realize them.”


New body-hack app shortcuts jet-lag recovery


A new jet-lag mobile app called Entrain released by University of Michigan mathematicians reveals previously unknown shortcuts that can help travelers entrain (synchronize) their circadian rhythms to new time zones as efficiently as possible.

Entrain is built around the premise that light, particularly from the sun and in wavelengths that appear to our eyes as the color blue, is the strongest signal to regulate circadian rhythms. These fluctuations in behaviors and bodily functions, tied to the planet’s 24-hour day, do more than guide us to eat and sleep. They govern processes in each one of our cells.

The study, published April 10, 2014, in Public Library of Science Computational Biology (open access journal), relies on two leading mathematical models that have been shown to accurately describe human circadian rhythms. The researchers used these equations and a technique called optimal control theory to calculate ideal adjustment schedules for more than 1,000 possible trips.
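The actual app solves optimal-control problems on those validated circadian ODE models; a drastically simplified stand-in captures the flavor. Assume well-timed light can shift the body clock by at most a fixed number of hours per day (an invented parameter) and count the days needed to re-entrain:

```python
def days_to_entrain(zone_shift_h, max_daily_shift_h=1.5):
    """Toy model: each day of well-timed light moves the body clock at
    most `max_daily_shift_h` hours toward destination time. The real app
    derives its schedules from circadian ODE models, not this shortcut."""
    # Shift the short way around the 24-hour clock.
    error = ((zone_shift_h + 12) % 24) - 12
    days = 0
    while abs(error) > 1e-9:
        step = max(-max_daily_shift_h, min(max_daily_shift_h, -error))
        error += step
        days += 1
    return days

print(days_to_entrain(9))   # e.g. a 9-hour eastbound trip -> 6 days here
```

The study's contribution is precisely that optimal light scheduling beats such a naive fixed-rate picture, sometimes substantially shortening the adjustment.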

The app gives users access to these schedules. Start by entering your typical hours of light and darkness in your current time zone, then choose the time zone you're traveling to and when, as well as the brightest light you expect to spend the most time in during your trip (indoor or outdoor). The app offers a specialized plan and predicts how long it will take you to adjust.

The shortcuts the app offers are custom schedules of light and darkness depending on the itinerary. The schedules boil down to one block of time each day when you should seek the brightest light possible and another when you should put yourself in the dark, or at least in dim light. You don’t even have to be asleep.

If you must go outside, you can wear pink-tinted glasses to block blue wavelength light, the researchers say. And if the app prescribes “bright outdoor light” in the middle of the night, a therapeutic lightbox can do the job — yes, its shortcuts sometimes require odd hours.

The Entrain app is available now as a free app in the Apple store.


MIT: New ‘switch’ could power quantum computing


MIT: A light lattice that traps atoms may help scientists build networks of quantum information transmitters.

Using a laser to place individual rubidium atoms near the surface of a lattice of light, scientists at MIT and Harvard University have developed a new method for connecting particles — one that could help in the development of powerful quantum computing systems.

The new technique, described in a paper published today in the journal Nature, allows researchers to couple a lone atom of rubidium, a metal, with a single photon, or light particle. This allows both the atom and photon to switch the quantum state of the other particle, providing a mechanism through which quantum-level computing operations could take place.

Moreover, the scientists believe their technique will allow them to increase the number of useful interactions occurring within a small space, thus scaling up the amount of quantum computing processing available.

“This is a major advance of this system,” says Vladan Vuletić, a professor in MIT’s Department of Physics and Research Laboratory of Electronics (RLE), and a co-author of the paper. “We have demonstrated basically an atom can switch the phase of a photon. And the photon can switch the phase of an atom.”

That is, photons can have two polarization states, and interaction with the atom can change the photon from one state to another; conversely, interaction with the photon can change the atom’s phase, which is equivalent to changing the quantum state of the atom from its “ground” state to its “excited” state. In this way the atom-photon coupling can serve as a quantum switch to transmit information — the equivalent of a transistor in a classical computing system. And by placing many atoms within the same field of light, the researchers may be able to build networks that can process quantum information more effectively.
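Conceptually, this mutual phase switching behaves like a controlled-Z gate on the joint atom-photon state. An idealized numpy sketch, ignoring losses and the actual cavity physics (the gate matrix and basis labels below are the textbook idealization, not the experiment's full model):

```python
import numpy as np

# Basis order |atom, photon>: |g0>, |g1>, |e0>, |e1>.
CZ = np.diag([1, 1, 1, -1]).astype(complex)   # idealized atom-photon coupling

g, e = np.array([1, 0], complex), np.array([0, 1], complex)
plus = (g + e) / np.sqrt(2)                   # equal superposition

# Photon in superposition, atom excited: the atom flips the photon's phase.
state = np.kron(e, plus)
out = CZ @ state
# out equals |e> tensor (|0> - |1>)/sqrt(2): the relative phase flipped.
```

The symmetry of the gate is the point of the quote: applying the same matrix with the roles reversed (atom in superposition, photon present) flips the atom's phase instead, which is what lets the coupling act as a quantum switch.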


Where Art Meets Math: The Hypnotic Animated Gifs of David Szakaly


Since 2008 Hungarian/German graphic designer David Szakaly has been churning out some of the most dizzying, hypnotic and wholly original gifs on the web under the name Davidope. His blend of twisting organic forms, flashes of black and white, and forays into pulsing technicolor shapes have inspired legions of others to experiment with the medium, many of whom have been featured here on Colossal. It’s hard to determine the scale of Szakaly’s influence online, but a simple Google image search for “animated gif” brings up dozens of his images that have been shared around Tumblr hundreds of thousands of times.

Szakaly began experimenting with the vector animation program Macromedia Flash back in 1999 where he used the software to create presentations, banners, and other creatives for clients. It was nearly a decade later when he decided to dedicate more time to experimenting with motion graphics and found that Tumblr was a great platform to share his quirky gifs. While he still works in the corporate world on other digital projects, he has also found commercial success making animations for clients around the world. Though it’s his personal work that really stands out. If or when gifs end up on gallery walls, it will be hard to deny Szakaly’s role in getting them there.

Scooped by Dr. Stefan Gruenwald!

Face-To-Face: Crude Mugshots built from DNA data alone

Face-To-Face: Crude Mugshots built from DNA data alone | Amazing Science |
Computer program crudely predicts a facial structure from genetic variations.

Researchers have developed a computer program that uses 24 gene variants to construct a crude three-dimensional (3D) model of facial structure from a DNA sample. Thus, leaving a hair at a crime scene could one day be as damning as leaving a photograph of your face.

Using genes to predict eye and hair color is relatively easy. But the complex structure of the face makes it more valuable as a forensic tool — and more difficult to connect to genetic variation, says anthropologist Mark Shriver of Pennsylvania State University in University Park, who led the work, published today in PLOS Genetics.

Shriver and his colleagues took high-resolution images of the faces of 592 people of mixed European and West African ancestry living in the United States, Brazil and Cape Verde. They used these images to create 3D models, laying a grid of more than 7,000 data points on the surface of the digital face and determining by how much particular points on a given face varied from the average: whether the nose was flatter, for instance, or the cheekbones wider. They had volunteers rate the faces on a scale of masculinity and femininity, as well as on perceived ethnicity.

Next, the authors compared the volunteers’ genomes to identify points at which the DNA differed by a single base, called a single nucleotide polymorphism (SNP). To narrow down the search, they focused on genes thought to be involved in facial development, such as those that shape the head in early embryonic development, and those that are mutated in disorders associated with features such as cleft palate. Then, taking into account the person’s sex and ancestry, they calculated the statistical likelihood that a given SNP was involved in determining a particular facial feature.

This pinpointed 24 SNPs across 20 genes that were significantly associated with facial shape. A computer program the team developed using the data can turn a DNA sequence from an unknown individual into a predictive 3D facial model (see 'Face to face'). Shriver says that the group is now trying to integrate more people and genes, and look at additional traits, such as hair texture and sex-specific differences.
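
As an illustration only, the relationship between gene variants and facial-shape deviations can be sketched as a linear model. This is a simplification of the paper's statistical approach (which also conditions on sex and ancestry), and every number and effect size below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 592 subjects, 24 SNPs (genotypes coded 0/1/2),
# and a face grid reduced here to 30 landmark coordinates for brevity
# (the study used more than 7,000 surface points).
n_subjects, n_snps, n_coords = 592, 24, 30
genotypes = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)

# Simulate landmark deviations from the average face as a linear
# function of genotype plus noise (a stand-in for the real data).
true_effects = rng.normal(0, 0.5, size=(n_snps, n_coords))
deviations = genotypes @ true_effects + rng.normal(0, 0.1, size=(n_subjects, n_coords))

# Fit per-coordinate linear effects by least squares: this captures the
# spirit of relating SNPs to facial-shape variation, not the paper's
# exact statistical model.
effects_hat, *_ = np.linalg.lstsq(genotypes, deviations, rcond=None)

# Predict a face model (landmark deviations) for a new DNA sample.
new_genotype = rng.integers(0, 3, size=n_snps).astype(float)
predicted_face = new_genotype @ effects_hat
print(predicted_face.shape)  # one deviation per landmark coordinate
```

With realistic data the fitted effects would be far noisier, which is why the resulting mugshots are described as crude.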

Scooped by Dr. Stefan Gruenwald!

Internet surveillance predicts disease outbreak before WHO

Internet surveillance predicts disease outbreak before WHO | Amazing Science |

Have you ever Googled for an online diagnosis before visiting a doctor? If so, you may have helped provide early warning of an infectious disease epidemic.

In a new study published in Lancet Infectious Diseases, Internet-based surveillance has been found to detect infectious diseases such as Dengue Fever and Influenza up to two weeks earlier than traditional surveillance methods, according to Queensland University of Technology (QUT) research fellow and senior author of the paper Wenbiao Hu.

Hu, based at the Institute for Health and Biomedical Innovation, said there was often a lag time of two weeks before traditional surveillance methods could detect an emerging infectious disease.

“This is because traditional surveillance relies on the patient recognizing the symptoms and seeking treatment before diagnosis, along with the time taken for health professionals to alert authorities through their health networks. In contrast, digital surveillance can provide real-time detection of epidemics.”

Hu said the study used data from search engine tools such as Google Trends and Google Insights. It found that the 2005–06 avian influenza ("bird flu") outbreak could have been detected one to two weeks earlier than official surveillance reports.

“In another example, a digital data collection network was found to be able to detect the SARS outbreak more than two months before the first publications by the World Health Organization (WHO),” Hu said.
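
The lead-time idea can be sketched with a toy lagged-correlation check. The weekly series below are synthetic, and this is not the study's actual method or data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weekly series: search volume leads reported cases by
# two weeks, mimicking the lag between symptom searches and official
# surveillance described in the study.
weeks = 104
search_volume = np.abs(rng.normal(100, 20, weeks)) + \
    50 * np.exp(-0.5 * ((np.arange(weeks) - 60) / 4) ** 2)  # outbreak near week 60
reported_cases = np.roll(search_volume, 2) + rng.normal(0, 2, weeks)  # 2-week lag

def lead_time(signal, reference, max_lag=8):
    """Return the lag (in weeks) at which `signal` best predicts `reference`."""
    s = (signal - signal.mean()) / signal.std()
    r = (reference - reference.mean()) / reference.std()
    corrs = [np.corrcoef(s[:len(s) - k], r[k:])[0, 1] for k in range(max_lag + 1)]
    return int(np.argmax(corrs))

print(lead_time(search_volume, reported_cases))  # expected: 2
```

The same cross-correlation logic underlies many digital-surveillance evaluations: if the search series anticipates the case series at a positive lag, the web signal provides early warning.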

Rescooped by Dr. Stefan Gruenwald from Tracking the Future!

How do you build a large-scale quantum computer?

How do you build a large-scale quantum computer? | Amazing Science |

How do you build a universal quantum computer? Turns out, this question was addressed by theoretical physicists about 15 years ago. The answer was laid out in a research paper and has become known as the DiVincenzo criteria. The prescription is pretty clear at a glance; yet in practice the physical implementation of a full-scale universal quantum computer remains an extraordinary challenge.

To glimpse the difficulty of this task, consider the guts of a would-be quantum computer. The computational heart is composed of multiple quantum bits, or qubits, that can each store 0 and 1 at the same time. The qubits can become “entangled,” or correlated in ways that are impossible in conventional devices. A quantum computing device must create and maintain these quantum connections in order to have a speed and storage advantage over any conventional computer. That’s the upside. The difficulty arises because harnessing entanglement for computation only works when the qubits are almost completely isolated from the outside world. Isolation and control become much more difficult as more and more qubits are added into the computer. Basically, as quantum systems are made bigger, they generally lose their quantum-ness.

In pursuit of a quantum computer, scientists have gained amazing control over various quantum systems. One leading platform in this broad field of research is trapped atomic ions, where nearly 20 qubits have been juxtaposed in a single quantum register. However, scaling this or any other type of qubit to much larger numbers while still contained in a single register will become increasingly difficult, as the connections will become too numerous to be reliable.

Physicists led by ion-trapper Christopher Monroe at the JQI have now proposed a modular quantum computer architecture that promises scalability to much larger numbers of qubits. This research is described in the journal Physical Review A, a topical journal of the American Physical Society. The components of this architecture have individually been tested and are available, making it a promising approach. In the paper, the authors present expected performance and scaling calculations, demonstrating that their architecture is not only viable, but in some ways, preferable when compared to related schemes.

Individual qubit modules are at the computational center of this design, each one consisting of a small crystal of perhaps 10-100 trapped ions confined with electromagnetic fields. Qubits are stored in each atomic ion’s internal energy levels. Logical gates can be performed locally within a single module, and two or more ions can be entangled using the collective properties of the ions in a module.

One or more qubits from the ion trap modules are then networked through a second layer of optical fiber photonic interconnects. This higher-level layer hybridizes photonic and ion-trap technology, where the quantum state of the ion qubits is linked to that of the photons that the ions themselves emit. Photonics is a natural choice as an information bus as it is proven technology and already used for conventional information flow. In this design, the fibers are directed to a reconfigurable switch, so that any set of modules could be connected.

The switch system, which incorporates special micro-electro-mechanical (MEMS) mirrors to direct light into different fiber ports, would allow for entanglement between arbitrary modules and on-demand distribution of quantum information.
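
A minimal sketch of this layout, using hypothetical module and switch classes of our own invention, shows why inter-module connectivity scales with the number of modules rather than with the total number of qubits:

```python
import itertools

class IonTrapModule:
    """Toy stand-in for one module of trapped-ion qubits."""
    def __init__(self, name, n_ions=20):
        self.name = name
        self.ions = list(range(n_ions))  # qubits stored in ion energy levels

class PhotonicSwitch:
    """Reconfigurable switch connecting any pair of modules on demand,
    loosely modeling the MEMS-mirror fiber switch (highly simplified)."""
    def __init__(self, modules):
        self.modules = modules
        self.links = set()

    def connect(self, a, b):
        # Any-to-any: no fixed point-to-point wiring between modules.
        self.links.add(frozenset((a.name, b.name)))

modules = [IonTrapModule(f"M{i}") for i in range(5)]
switch = PhotonicSwitch(modules)

# Every pair of modules can be linked through the single switch layer,
# regardless of how many ions each module holds internally.
for a, b in itertools.combinations(modules, 2):
    switch.connect(a, b)

print(len(switch.links))  # 10 distinct module pairs for 5 modules
```

The point of the modular design is exactly this separation: dense, reliable gates stay local within a module, while the photonic layer handles the (much sparser) long-range connections.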

Via Szabolcs Kósa
Andreas Pappas's curator insight, March 28, 1:40 AM

This article shows how scientists can increase the scale of quantum machine while still making them behave quantum mechanically by reading the qu-bits with lasers instead of conventional wiring.

Scooped by Dr. Stefan Gruenwald!

Qualcomm Is Developing Brain-Inspired Computing - The Zeroth Processor

Qualcomm Is Developing Brain-Inspired Computing - The Zeroth Processor | Amazing Science |

Qualcomm’s technologies are designed from the ground up with speed and power efficiency in mind, so that devices using our products can run smoothly and maximize battery life. As mobile computing becomes increasingly pervasive, so do our expectations of the devices we use and interact with in our everyday lives. We want these devices to be smarter, to anticipate our needs, and to share our perception of the world so we can interact with them more naturally. Achieving these goals with traditional computing architectures is computationally very challenging, particularly in a power- and size-constrained device rather than in the cloud on supercomputers.

For the past few years our Research and Development teams have been working on a new computer architecture that breaks the traditional mold. We wanted to create a new computer processor that mimics the human brain and nervous system so devices can have embedded cognition driven by brain inspired computing—this is Qualcomm Zeroth processing.

Biologically Inspired Learning

We want Qualcomm Zeroth products to not only mimic human-like perception but also to learn the way biological brains do. Instead of preprogramming behaviors and outcomes with a lot of code, we’ve developed a suite of software tools that enable devices to learn as they go and get feedback from their environment.

In the video below, we outfitted a robot with a Qualcomm Zeroth processor and placed it in an environment with colored boxes. We were then able to teach it to visit white boxes only. We did this through dopaminergic-based learning, a.k.a. positive reinforcement—not by programming lines of code.
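
A toy version of such reward-driven learning (our illustration, not Qualcomm's actual Zeroth software) might look like the following, where the only training signal is a reward for visiting white boxes:

```python
import random

random.seed(42)

# The agent picks a box color, receives a reward only for white,
# and shifts its preferences accordingly -- no behavior is hard-coded.
colors = ["white", "red", "blue"]
preference = {c: 1.0 for c in colors}  # initially uniform tendencies

def choose():
    """Sample a color in proportion to the current preferences."""
    total = sum(preference.values())
    r = random.uniform(0, total)
    for c in colors:
        r -= preference[c]
        if r <= 0:
            return c
    return colors[-1]

for _ in range(500):
    color = choose()
    reward = 1.0 if color == "white" else 0.0  # the "dopamine" signal
    preference[color] += 0.1 * reward           # reinforce rewarded choices
    if reward == 0:
        preference[color] *= 0.995              # let unrewarded ones decay

print(max(preference, key=preference.get))  # expected: white
```

After a few hundred trials the agent strongly prefers white boxes, which is the essence of the positive-reinforcement demonstration described above.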

Another major pillar of Zeroth processor function is striving to replicate the efficiency with which our senses and our brain communicate information. Neuroscientists have created mathematical models that accurately characterize biological neuron behavior when they are sending, receiving or processing information. Neurons send precisely timed electrical pulses referred to as “spikes” only when a certain voltage threshold in a biological cell’s membrane is reached. These spiking neural networks (SNN) encode and transmit data very efficiently in both how our senses gather information from the environment and then how our brain processes and fuses all of it together.
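
The threshold-and-spike behavior can be illustrated with the standard leaky integrate-and-fire model, a textbook sketch rather than Qualcomm's implementation:

```python
# Minimal leaky integrate-and-fire neuron: membrane voltage integrates
# input current, leaks over time, and emits a spike (then resets) only
# when it crosses a threshold -- the efficiency comes from staying
# silent the rest of the time.
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes."""
    v = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        v = leak * v + current       # leaky integration
        if v >= threshold:           # threshold crossing -> spike
            spikes.append(t)
            v = 0.0                  # reset after the spike
    return spikes

weak = simulate_lif([0.05] * 50)    # input too weak: the leak wins, no spikes
strong = simulate_lif([0.3] * 50)   # strong input: periodic spiking
print(len(weak), len(strong))
```

With weak input the voltage settles below threshold and nothing is transmitted; with strong input the neuron fires at regular intervals, encoding the stimulus in precisely timed spikes.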

Scooped by Dr. Stefan Gruenwald!

Prototype holographic memory stores data in magnetic holograms with quick read-out access

Prototype holographic memory stores data in magnetic holograms with quick read-out access | Amazing Science |

A new type of memory device based on the interference of spin waves has been unveiled by scientists in the US and Russia. Data are stored in the form of magnetic bits and read out simultaneously as holographic images. Because the wavelengths of the spin waves are much shorter than those of light, the storage density of the memory has the potential to be much greater than systems based on optical holograms, and could someday be used to store very large amounts of information.

Conventional holography involves splitting a beam of laser light into an illumination beam and a reference beam. The illumination beam is fired at the object of interest and the deflected light is sent to a detector (or photographic film), where it is reunited with the reference beam. The detector records the interference between the two beams and this information is then used to create a 3D image of the object. As well as being used as a security feature on credit cards and banknotes, holograms also have the potential to store and retrieve large amounts of information in a very efficient way.

However, the storage densities that can be achieved using optical holograms are limited by the relatively long wavelengths of visible light – about 500 nm. Now, Alexander Khitun and colleagues at the University of California, Riverside and the Kotel'nikov Institute of Radioengineering and Electronics in Saratov, Russia, have created a holographic memory that uses spin waves, which have much shorter wavelengths.

The team's prototype device comprises two small magnets – each about 360 μm wide – that are connected by a magnetic wire. Data are stored in the device in terms of the orientations of magnetic moments of the magnets. For example, the "00" state corresponds to both magnets being oriented along the x-axis and "01" corresponds to the first magnet being oriented along x-axis and the second along the y-axis.

Data are written to the device using spin waves with relatively large amplitudes that are capable of changing the orientation of the magnet bits. The read-out process involves sending spin waves with smaller amplitudes through the device, where the phases of the waves are affected by the orientations of the two bits. Antennas are then used to measure the interference between the waves. By varying the relative phases of the input spin waves, the team can build up a holographic image of the orientation of the two magnets. This is analogous to how an optical holographic image is built up by varying the angle between the object and the illumination beam.
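
A highly simplified numerical sketch of this phase-interference read-out, assuming invented phase shifts of 0 and π/2 for the two magnet orientations and a fixed reference wave, shows how sweeping the probe phase yields a distinct interference pattern for each stored state:

```python
import numpy as np

# Toy model (our simplification, not the device's actual physics):
# each magnet shifts a passing spin wave's phase by 0 (x-oriented)
# or pi/2 (y-oriented); the detector sees the interference of a
# fixed reference wave with the two probe waves.
PHASE = {"x": 0.0, "y": np.pi / 2}

def readout(bits, probe_phase):
    """Interference amplitude for a given probe phase on the second arm."""
    reference = 1.0  # fixed-phase reference wave
    w1 = np.exp(1j * PHASE[bits[0]])
    w2 = np.exp(1j * (PHASE[bits[1]] + probe_phase))
    return abs(reference + w1 + w2)

# Sweeping the relative probe phase builds up an interference pattern,
# analogous to varying the illumination angle in optical holography.
def signature(bits, n=8):
    return tuple(round(readout(bits, 2 * np.pi * k / n), 6) for k in range(n))

states = ["xx", "xy", "yx", "yy"]
sigs = {s: signature(s) for s in states}
print(len(set(sigs.values())))  # each two-bit state gives a distinct pattern
```

Because every stored state produces its own pattern, measuring the interference over a phase sweep recovers both bits at once, which is the holographic aspect of the read-out.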

Scooped by Dr. Stefan Gruenwald!

IBM sets new speed record for Big Data

IBM sets new speed record for Big Data | Amazing Science |

IBM has announced it has achieved a new data-transmission advancement that will help improve Internet backbone speeds to 200–400 gigabits per second (Gb/s) at extremely low power. The speed boost is based on a new lab prototype chip design that can be used to improve transfer of Big Data between clouds and data centers via fiber four times faster than current 100 Gb/s technology. A previous version of the technology has been licensed to Semtech Corp., a leading supplier of analog and mixed-signal semiconductors. Semtech is using that technology to develop advanced communications platforms expected to be announced later this year, ranging from optical and wireline communications to advanced radar systems.

As Big Data and Internet data traffic continue to grow exponentially, future networking standards have to support higher data rates. For example, in 1992, 100 gigabytes of data was transferred per day; today, traffic has grown to two exabytes per day, a 20-million-fold increase. To support the increase in traffic, scientists at IBM Research and Ecole Polytechnique Fédérale de Lausanne (EPFL) have been developing ultra-fast, energy-efficient, analog-to-digital converter (ADC) technology to enable transmission across long-distance fiber channels.

For example, scientists plan to use ADCs to convert the analog radio signals that originate from the cosmos to digital. It’s part of a collaboration called DOME between ASTRON, the Netherlands Institute for Radio Astronomy, DOME-South Africa, and IBM to develop a fundamental IT roadmap for the Square Kilometer Array (SKA), an international project to build the world’s largest and most sensitive radio telescope.

The analog radio data that the SKA collects from deep space is expected to produce multiple petabits (10^15 bits) per second — 10 times the current global Internet traffic. IBM says the prototype ADC would be an ideal candidate to transport the signals fast and at very low power — a critical requirement considering the ~3,000 radio telescopes, each transmitting ~160 Gb/s, that will be spread over a square kilometer.


Scooped by Dr. Stefan Gruenwald!

Intel finally shares Haswell’s secrets, reveals new work on ultra-low-power chips

Intel finally shares Haswell’s secrets, reveals new work on ultra-low-power chips | Amazing Science |
Intel debuted multiple new papers and low-power advances at ISSCC this year, including a GPU core far more efficient than any the manufacturer has previously produced.

The International Solid-State Circuits Conference (ISSCC) is this week, and it’s a time when companies and researchers meet to discuss cutting-edge advances in semiconductor technology. Intel is giving several presentations at the conference this year, with new details on the future of low-power computing and some previously unknown information about Haswell CPUs.

When Intel launched a version of Haswell with 40 GPU execution units and 128MB on-package EDRAM, codenamed Crystal Well, it played coy with many of the details. Die size, clock speed, and organizational structure were all swept under the rug — until now. We now know that the Crystal Well EDRAM is a separate (but on-package) 77-square-millimeter chip clocked at 1.6GHz with a 1V operating voltage. The interface between the CPU/GPU and Crystal Well is called the OPIO (On-Package I/O) and it’s a simple, flexible design that Intel has deployed in two forms. On Haswell-ULT (ultra-low power) chips, the OPIO link is a 4GB/sec bridge between the on-die Platform Controller Hub and the rest of the core. When deployed alongside Crystal Well, the OPIO can transfer 102GB/s — at a nominal cost of just 1W of power.
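
Taking the quoted OPIO figures at face value, the implied energy cost per transferred bit is easy to check:

```python
# Sanity check on the quoted Crystal Well link figures:
# 102 GB/s at roughly 1 W implies the energy cost per bit.
bandwidth_bytes = 102e9        # 102 GB/s
power_watts = 1.0              # ~1 W, as quoted
joules_per_bit = power_watts / (bandwidth_bytes * 8)
print(f"{joules_per_bit * 1e12:.2f} pJ/bit")  # roughly 1.2 pJ/bit
```

At around a picojoule per bit, the link is remarkably efficient for that bandwidth, which is why Intel could afford to put the EDRAM on-package in power-constrained parts.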

Other disclosures the company made confirmed some of our speculation from a year ago. When Intel announced that Haswell would have an on-die voltage regulator, we speculated that the FIVR (Full Integrated Voltage Regulator) was a step Intel took in order to speed its transition time from idle to full load and back again. 0W has become the new 1GHz — the faster a chip can move in and out of idle, the more horsepower it can bring to bear on specific tasks and the more power it can save in the transitions.

As Anandtech reports, our speculation on this front appears to have been correct. FIVR is highly efficient (90% under load) and can enter/exit sleep in 320 nanoseconds and clock to full Turbo in just 100 nanoseconds.

Rescooped by Dr. Stefan Gruenwald from Tracking the Future!

Geordie Rose (D-wave) Interview: Machine Learning is Progressing Faster Than You Think

Geordie Rose (D-wave) Interview: Machine Learning is Progressing Faster Than You Think | Amazing Science |

D-wave CTO Geordie Rose talks about quantum computing, AI and the technological singularity.

Via Szabolcs Kósa
Scooped by Dr. Stefan Gruenwald!

Researchers bring extensive world temperature records to Google Earth

Researchers bring extensive world temperature records to Google Earth | Amazing Science |

Climate researchers from the University of East Anglia (UEA) in the UK have just given people a whole lot more to talk about. As part of an ongoing effort to increase the accessibility and transparency of data on past climate and climate change, they've made one of the most widely used records of Earth's climate accessible through Google Earth.

Established in 1971, the UEA's Climate Research Unit (CRU) has become one of the leading institutions involved in the study of natural and anthropogenic climate change. Drawing on monthly weather records from some 6,000 weather stations around the globe, some dating back over 150 years, the researchers are responsible for Climatic Research Unit Temperature Version 4 (CRUTEM4), a widely used dataset of land-surface air temperatures.

By making CRUTEM4 data available through Google Earth, users can zoom in on any of the 6,000 weather stations, drill down through some 20,000 graphs and view monthly, seasonal and annual temperature data, some of which dates back to 1850. The interface places a red and green checkerboard over areas where data is available. Since some remote areas lack weather stations, there are gaps in the checkerboard.

"The data itself comes from the latest CRUTEM4 figures, which have been freely available on our website and via the Met Office," said Dr Tim Osborn from the CRU. "But we wanted to make this key temperature dataset as interactive and user-friendly as possible. The beauty of using Google Earth is that you can instantly see where the weather stations are, zoom in on specific countries, and see station datasets much more clearly."

There are already a number of climate datasets available for Google Earth, including those from the US National Oceanic and Atmospheric Administration (NOAA). Those wishing to view these and the CRUTEM4 dataset need only download Google Earth and open the KML format files. Due to the sheer volume of data, the CRU researchers expect there will be a few errors in their dataset and are encouraging users to alert them to any unusual figures.
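
For readers unfamiliar with the format, a minimal KML placemark of the kind Google Earth opens can be generated with standard tools (the station name and coordinates below are invented, not from the CRUTEM4 files):

```python
import xml.etree.ElementTree as ET

def station_kml(name, lon, lat):
    """Build a minimal KML document containing one station placemark."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    pm = ET.SubElement(doc, "Placemark")
    ET.SubElement(pm, "name").text = name
    point = ET.SubElement(pm, "Point")
    # KML coordinate order is "longitude,latitude[,altitude]".
    ET.SubElement(point, "coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

print(station_kml("Example Station", -1.25, 52.07))
```

Saving that output with a `.kml` extension and opening it in Google Earth places a pin at the given coordinates; the CRU files bundle thousands of such placemarks, each linked to its station's temperature graphs.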
