Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald

Virtual human built from more than 5000 slices of a real woman


The virtual body is the work of Sergey Makarov at the Worcester Polytechnic Institute in Massachusetts and his colleagues. They used software to help them stitch the thousands of images together, and the final model was checked by five doctors, each with a different medical specialism. “It needs to be anatomically correct,” says Makarov, who presented the work at the IEEE Engineering in Medicine and Biology Society meeting in Milan, Italy, last month.


Their phantom is the most detailed digital reconstruction of a whole human body ever to be pieced together. She has 231 tissue parts, ranging from windpipe to eyeballs, but is missing nose cartilage and 14 other bits of the body.


Other teams have created phantoms from MRI and CT scans of living volunteers, but the resolution is nowhere near as good. Entire body scans take several hours and any slight movements blur the image. The scans also lack colour, which is important for understanding different tissues, says Makarov.


“Sectioned color images allow you to distinguish virtually all the anatomical structures we are made of,” says Silvia Farcito at the Foundation for Research on Information Technologies in Society, based in Zurich, Switzerland, although she says that blood vessels tend to collapse in cadavers.


“They have ten times as much information as you’d get from an MRI scan,” says Fernando Bello, who develops simulations for medical procedures at Imperial College London. “It means the team will have much more information about organs and their structuring.”


The high resolution of the model makes it ideal for virtual experiments. Each of the woman's tissues has a well-defined set of parameters, such as density and thermal conductivity. This makes it possible to compute the impact that radiation and various imaging techniques, for example, are likely to have on living tissues.
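
As a flavor of what such per-tissue parameters enable, here is a minimal, purely illustrative sketch (not the team's software) that estimates radio-frequency heating from a tissue's conductivity, density and specific heat. The tissue values, field strength and exposure time are placeholder assumptions, not numbers from the phantom.

```python
# Purely illustrative sketch: estimate short radio-frequency exposure heating
# per tissue from basic dielectric and thermal parameters. The tissue values,
# field strength and duration below are rough placeholders, not data taken
# from the phantom described above.

TISSUES = {
    # name: (conductivity S/m, density kg/m^3, specific heat J/(kg*K))
    "muscle": (0.75, 1090.0, 3421.0),
    "fat":    (0.04,  911.0, 2348.0),
    "brain":  (0.60, 1046.0, 3630.0),
}

def sar(conductivity, density, e_field_rms):
    """Specific absorption rate in W/kg for a local RMS E-field in V/m."""
    return conductivity * e_field_rms ** 2 / density

def temperature_rise(sar_value, specific_heat, seconds):
    """Adiabatic temperature rise in kelvin (ignores perfusion and conduction)."""
    return sar_value * seconds / specific_heat

if __name__ == "__main__":
    e_field, duration = 60.0, 360.0   # V/m and seconds, illustrative exposure
    for name, (sigma, rho, c_p) in TISSUES.items():
        s = sar(sigma, rho, e_field)
        dt = temperature_rise(s, c_p, duration)
        print(f"{name:>6}: SAR = {s:.3f} W/kg, dT ~ {dt:.4f} K over {duration:.0f} s")
```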


“The phantom gives us a great opportunity to study human tissues without having to do human studies, which are lengthy and expensive,” says Ara Nazarian, an orthopedic surgeon at Harvard Medical School who is collaborating with Makarov.

Scooped by Dr. Stefan Gruenwald

Delicately opening a band gap in graphene enables high-performance transistors


Electrons can move through graphene with almost no resistance, a property that gives graphene great potential for replacing silicon in next-generation, highly efficient electronic devices. But it is currently very difficult to control the electrons moving through graphene, because graphene has no band gap: the electrons don't need to cross any energy barrier in order to conduct electricity. As a result, the electrons are always conducting, which means that this form of graphene can't be used to build transistors because it has no "off" state. In order to control the electron movement in graphene and enable "off" states in future graphene transistors, graphene needs a non-zero band gap—an energy barrier that can prevent electrons from conducting electricity when desired, making graphene a semiconductor instead of a full conductor.


In a new study, scientists have opened a band gap in graphene by carefully doping both sides of bilayer graphene in a way that avoids creating disorder in the graphene structure. Delicately opening up a band gap in graphene in this way enabled the researchers to fabricate a graphene-based memory transistor with the highest initial program/erase current ratio reported to date for a graphene transistor (34.5 compared to 4), along with the highest on/off ratio for a device of its kind (76.1 compared to 26), while maintaining graphene's naturally high electron mobility (3100 cm²/V·s).


The researchers, led by Professor Young Hee Lee at Sungkyunkwan University and the Institute for Basic Science in Suwon, South Korea, have published their paper on the new method for opening up a band gap in graphene in a recent issue of ACS Nano.

Rescooped by Dr. Stefan Gruenwald from Papers

A Life in Games: The Playful Genius of John Conway

Gnawing on his left index finger with his chipped old British teeth, temporal veins bulging and brow pensively squinched beneath the day-before-yesterday's hair, the mathematician John Horton Conway unapologetically whiles away his hours tinkering and thinkering—which is to say he's ruminating, although he will insist he's doing nothing, being lazy, playing games.


Based at Princeton University, though he found fame at Cambridge (as a student and professor from 1957 to 1987), Conway, 77, claims never to have worked a day in his life. Instead, he purports to have frittered away reams and reams of time playing. Yet he is Princeton’s John von Neumann Professor in Applied and Computational Mathematics (now emeritus). He’s a fellow of the Royal Society. And he is roundly praised as a genius. “The word ‘genius’ gets misused an awful lot,” said Persi Diaconis, a mathematician at Stanford University. “John Conway is a genius. And the thing about John is he’ll think about anything.… He has a real sense of whimsy. You can’t put him in a mathematical box.”


Via Ashish Umre, Complexity Digest
Scooped by Dr. Stefan Gruenwald

A Tricky Path to Quantum-Safe Encryption Relies on a Fine Red Line


On August 11, 2015, the National Security Agency updated an obscure page on its website with an announcement that it plans to shift the encryption of government and military data away from current cryptographic schemes to new ones, yet to be determined, that can resist an attack by quantum computers.


“It is now clear that current Internet security measures and the cryptography behind them will not withstand the new computational capabilities that quantum computers will bring,” NSA spokesperson Vanee’ Vines stated in an email, confirming the change. “NSA’s mission to protect critical national security systems requires the agency to anticipate such developments.”


Quantum computers, once seen as a remote theoretical possibility, are now widely expected to work within five to 30 years. By exploiting the probabilistic rules of quantum physics, the devices could decrypt most of the world’s “secure” data, from NSA secrets to bank records to email passwords. Aware of this looming threat, cryptographers have been racing to develop “quantum-resistant” schemes efficient enough for widespread use.


The most promising schemes are believed to be those based on the mathematics of lattices — multidimensional, repeating grids of points. These schemes depend on how hard it is to find information that is hidden in a lattice with hundreds of spatial dimensions, unless you know the secret route.
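
To make the lattice idea concrete, below is a toy, deliberately insecure sketch in the learning-with-errors (LWE) style that many lattice-based schemes build on. The parameters are tiny illustrative assumptions, and this is not one of the schemes questioned in the GCHQ paper.

```python
# Toy learning-with-errors (LWE) sketch: the secret is a vector s, and the
# public key is a pile of noisy inner products b = <a, s> + e (mod q).
# Recovering s from those pairs is the kind of lattice problem these schemes
# lean on. Parameters are far too small to be secure; structure only.
import random

q, n, m = 2003, 8, 40        # modulus, secret dimension, number of samples

def keygen():
    s = [random.randrange(q) for _ in range(n)]
    A, b = [], []
    for _ in range(m):
        a = [random.randrange(q) for _ in range(n)]
        e = random.randint(-2, 2)                          # small noise
        A.append(a)
        b.append((sum(ai * si for ai, si in zip(a, s)) + e) % q)
    return s, (A, b)

def encrypt(pub, bit):
    A, b = pub
    subset = [i for i in range(m) if random.random() < 0.5]
    u = [sum(A[i][j] for i in subset) % q for j in range(n)]
    v = (sum(b[i] for i in subset) + bit * (q // 2)) % q
    return u, v

def decrypt(s, ciphertext):
    u, v = ciphertext
    d = (v - sum(ui * si for ui, si in zip(u, s))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0             # closer to q/2 means bit was 1

if __name__ == "__main__":
    s, pub = keygen()
    assert all(decrypt(s, encrypt(pub, bit)) == bit for bit in (0, 1, 1, 0))
    print("toy LWE round-trip OK")
```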


But last October, cryptographers at the Government Communications Headquarters (GCHQ), Britain’s electronic surveillance agency, posted an enigmatic paper online that called into question the security of some of the most efficient lattice-based schemes. The findings hinted that vulnerabilities had crept in during a decade-long push for ever-greater efficiency. As cryptographers simplified the underlying lattices on which their schemes were based, they rendered the schemes more susceptible to attack.


Building on the GCHQ claims, two teams of cryptanalysts have spent the past year determining which lattice-based schemes can be broken by quantum computers, and which are safe — for now.

“This is the modern incarnation of the classic cat-and-mouse game between the cryptographer and cryptanalyst,” said Ronald Cramer of the National Research Institute for Mathematics and Computer Science (CWI) and Leiden University in the Netherlands. When cryptanalysts are quiet, cryptographers loosen the security foundations of the schemes to make them more efficient, he said. “But at some point a red line might be crossed. That’s what happened here.” Now, the cryptanalysts are speaking up.

Rescooped by Dr. Stefan Gruenwald from Research Workshop

China is set to complete the installation of the world's longest quantum communication network


China is set to complete the installation of the world's longest quantum communication network stretching 2,000 km (1,240 miles) from Beijing to Shanghai by 2016, say scientists leading the project. Quantum communications technology is considered to be "unhackable" and allows data to be transferred at the speed of light. By 2030, the Chinese network would be extended worldwide, the South China Morning Post reported. It would make the country the first major power to publish a detailed schedule to put the technology into extensive, large-scale use.


The development of quantum communications technology has accelerated in the last five years. The technology works by two people sharing a message which is encrypted by a secret key made up of quantum particles, such as polarized photons. If a third person tries to intercept the photons by copying the secret key as it travels through the network, then the eavesdropper will be revealed by virtue of the laws of quantum mechanics – which dictate that the act of interfering with the network affects the behaviour of the key in an unpredictable manner.


If all goes to schedule, China would be the first country to put a quantum communications satellite in orbit, said Wang Jianyu, deputy director of the China Academy of Science's (CAS) Shanghai branch. At a recent conference on quantum science in Shanghai, Wang said scientists from CAS and other institutions have completed major research and development tasks for launching the satellite equipped with quantum communications gear, South China Morning Post said.


The potential success of the satellite was confirmed by China's leading quantum communications scientist, Pan Jianwei, a CAS academic who is also a professor of quantum physics at the University of Science and Technology of China (USTC) in Hefei, in the eastern province of Anhui. Pan said researchers reported significant progress on systems development after conducting experiments at a test center in the northwest of China.


The satellite would be used to transmit encoded data through a method called quantum key distribution (QKD), which relies on cryptographic keys transmitted via light pulse signals. QKD is said to be nearly impossible to hack, since any attempted eavesdropping would change the quantum states and thus could be quickly detected by dataflow monitors.
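
To get a feel for how quickly such eavesdropping shows up, assume a BB84-style protocol (the article does not say exactly which QKD protocol the network uses): a full intercept-resend attack disturbs about a quarter of the compared key bits, so the chance of an eavesdropper slipping past a check shrinks geometrically.

```python
# Back-of-the-envelope sketch of why attempted eavesdropping "could be quickly
# detected": under a full intercept-resend attack on a BB84-style link, each
# key bit Alice and Bob compare disagrees with probability 1/4, so the chance
# that Eve goes unnoticed after checking n bits shrinks as (3/4)^n.

def escape_probability(n_checked_bits: int) -> float:
    return 0.75 ** n_checked_bits

for n in (10, 50, 100):
    print(f"compare {n:3d} bits -> Eve undetected with probability {escape_probability(n):.2e}")
```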


Via LeapMind, Jocelyn Stoller
Scooped by Dr. Stefan Gruenwald

EMBL: The genome in the cloud


Since the publication of the draft human genome sequence in 2001, technological advances have made sequencing genomes much easier, quicker and cheaper, fueling an explosion in sequencing projects. Today, genomics is well into the era of 'big data', with genomics datasets often containing hundreds of terabytes (10^14 bytes) of information.


The rise of big genomic data offers many scientific opportunities, but also creates new problems, as Jan Korbel, Group Leader in the Genome Biology Unit at EMBL Heidelberg, describes in a new commentary paper authored with an international team of scientists and published today in Nature.


Korbel’s research focuses on genetic variation, especially genetic changes leading to cancer, and relies on computational and experimental techniques. While the majority of current cancer genetic studies assess the 1% of the genome comprising genes, a main research interest of the Korbel group is in studying genetic alterations within ‘intergenic’ regions that drive cancer. As this approach looks at much more of the genome than gene-focused studies, it requires analysis of larger amounts of data. This challenge is exemplified via the Pan-Cancer Analysis of Whole Genomes (PCAWG) project, co-led by Korbel, which brings together nearly 1 petabyte (10^15 bytes) of genome sequencing data from more than 2000 cancer patients.


The problem is not a shortage of data but accessing and analysing it. Genome datasets from cancer patients are typically stored in so-called ‘controlled access’ data archives, such as the European Genome-phenome Archive (EGA). These repositories, however, are ‘static’, says Korbel, meaning that the datasets need to be downloaded to a researcher’s institution before they can be further analysed or integrated with other types of data to address biomedically relevant research questions. “With massive datasets, this can take many months and may be unfeasible altogether depending on the institution’s network bandwidth and computational processing capacities,” says Korbel. “It’s a severe limitation for cancer research, blocking scientists from replicating and building on prior work.”


With data stored in one of the various commercial cloud services on offer from companies such as Amazon Web Services, or on academic community clouds, researchers can analyse vast datasets without first downloading them to their institutions, saving time and money that would otherwise need to be spent on maintaining them locally. Cloud computing also allows researchers to draw on the processing power of distributed computers to significantly speed up analysis without purchasing new equipment for computationally laborious tasks. A large portion of the data from PCAWG, for example, will be analysed through cloud computing using both academic community and commercial cloud providers, thanks to new computational frameworks currently being built.
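
As a small, hypothetical illustration of analysing data where it lives, the sketch below reads a single byte range of a large object directly from cloud storage instead of downloading the whole file. The bucket and key names are invented, and this is not the PCAWG framework itself.

```python
# Hypothetical sketch of the "analyse without downloading" idea: fetch just one
# byte range of a large genomics file straight from object storage rather than
# copying the whole dataset locally. Bucket and key names are made up; this is
# only an illustration of ranged cloud reads, not the PCAWG pipeline.
import boto3

s3 = boto3.client("s3")

def read_range(bucket: str, key: str, start: int, end: int) -> bytes:
    """Fetch bytes [start, end] of an object using an HTTP Range request."""
    resp = s3.get_object(Bucket=bucket, Key=key, Range=f"bytes={start}-{end}")
    return resp["Body"].read()

if __name__ == "__main__":
    # e.g. pull the first megabyte of a (hypothetical) 300 GB BAM file
    chunk = read_range("example-genomics-bucket", "cohort/sample42.bam", 0, 1_048_575)
    print(f"fetched {len(chunk)} bytes without downloading the whole file")
```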


One concern about using cloud computing revolves around the privacy of people who have supplied genetic samples for studies. However, cloud services are now typically as secure as regular institutional data centres, which has diminished this worry: earlier this year, the US National Institutes of Health lifted a 2007 ban on uploading their genomic data into cloud storage. Korbel predicts that the coming months and years will see a big upswing in the use of cloud computing for genomics research, with academic cloud services, such as the EMBL-EBI Embassy Cloud, and commercial cloud providers including Amazon becoming a crucial component of the infrastructure for pursuing research in human genetics.


Yet there remain issues to resolve. One is who should pay for cloud services. Korbel and colleagues urge funding agencies to take on this responsibility given the central role cloud services are predicted to play in future research. Another issue relates to the differing privacy, ethical and normative policies and regulations in Europe, the US, and elsewhere. Some European countries may prefer that patient data remain within their jurisdiction so that they fall under European privacy laws, and not US laws, which apply once a US-based cloud provider is used. Normative and bioethical aspects of patient genome analysis, including in the context of cloud computing, are another specific focus of Korbel’s research, which is being pursued via an inter-disciplinary collaboration with Fruzsina Molnár-Gábor from Heidelberg University faculty of law in a project funded by the Heidelberg Academy of Sciences and Humanities.


Scooped by Dr. Stefan Gruenwald

New internet routing method allows users to avoid sending data through undesired countries

Censorship is one of the greatest threats to open communication on the Internet. Information may be censored by a user's country of residence or by the country of the information's intended destination. But recent studies show that censorship by countries through which the data travels along its route is also a danger.


Now, computer scientists at the University of Maryland have developed a method for providing concrete proof to Internet users that their information did not cross through certain geographic areas. The new system offers advantages over existing systems: it is immediately deployable and does not require knowledge of—or modifications to—the Internet's routing hardware or policies.


"With recent events, such as censorship of Internet traffic, suspicious 'boomerang routing' where data leaves a region only to come back again, and monitoring of users' data, we became increasingly interested in this notion of empowering users to have more control over what happens with their data," says project lead Dave Levin, an assistant research scientist in the University of Maryland Institute for Advanced Computer Studies (UMIACS).


This new system, called Alibi Routing, will be presented on August 20, 2015, at the Association for Computing Machinery Special Interest Group on Data Communication (ACM SIGCOMM) conference in London. Levin teamed with associate professor Neil Spring and professor Bobby Bhattacharjee, who have appointments in UMD's Department of Computer Science and UMIACS, on the paper.


Information transmitted over the Internet, such as website requests or email content, is broken into packets and sent through a series of routers on the way to its destination. However, users have very little control over what parts of the world these packets traverse.


Some parts of the world have been known to modify data returned to users, thus censoring content. In 2012, researchers demonstrated that Domain Name System (DNS) queries that merely pass through China's borders are subject to the same risk as if the requests came from one of the country's own residents.
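
The full protocol is described in the researchers' SIGCOMM paper; the underlying intuition is a speed-of-light alibi: if the measured round-trip time through a chosen relay is shorter than the minimum time any detour through a forbidden region would need, the packets provably never went there. The sketch below illustrates that kind of check with invented distances and an assumed fiber propagation speed; it is not the authors' implementation.

```python
# Simplified sketch of a speed-of-light "alibi" check in the spirit of the
# Alibi Routing idea: if the measured round-trip time through a relay is
# shorter than the minimum time any detour through the forbidden region would
# need, the packet provably never went there. Distances and the propagation
# factor are illustrative assumptions, not values from the paper.

C_FIBER_KM_PER_MS = 200.0   # roughly 2/3 of c: rough signal speed in fiber

def min_detour_rtt_ms(src_to_region_km, region_to_dst_km, dst_to_src_km):
    """Lower bound on RTT for any path whose forward leg touches the region."""
    forward_via_region = src_to_region_km + region_to_dst_km
    return (forward_via_region + dst_to_src_km) / C_FIBER_KM_PER_MS

def provably_avoided(measured_rtt_ms, src_to_region_km, region_to_dst_km, dst_to_src_km):
    """True if the observed RTT is too short for the detour to have happened."""
    return measured_rtt_ms < min_detour_rtt_ms(
        src_to_region_km, region_to_dst_km, dst_to_src_km)

if __name__ == "__main__":
    # Hypothetical geometry: the relay path measures 48 ms RTT, but any route
    # touching the forbidden region would need at least 90 ms.
    print(provably_avoided(measured_rtt_ms=48.0,
                           src_to_region_km=7000.0,
                           region_to_dst_km=8000.0,
                           dst_to_src_km=3000.0))
```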


To evaluate their Alibi Routing method, the researchers simulated a network with 20,000 participants and selected forbidden regions from the 2012 "Enemies of the Internet" report published by Reporters Without Borders—China, Syria, North Korea and Saudi Arabia—as well as three of the countries with the highest numbers of Internet users at the time of the study—the United States, China and Japan.

Scooped by Dr. Stefan Gruenwald

HIV spreads like computer worms, say scientists


HIV spreads throughout the body in a similar way to some computer worms, according to a new model, which also suggests that early treatment is key to finding a cure for the disease.


HIV specialists and network security experts at University College London (UCL) have found that HIV progresses both via the bloodstream and directly between cells – akin to computer worms spreading themselves through two routes to infect as many computers as possible.


Prof Benny Chain, from UCL’s infection and immunity division, the co-senior author of the research, said: “I was involved in a study looking in general at spreading of worms across the internet and then I realised the parallel. They have to consistently find another computer to infect outside. They can either look locally in their own networks, their own computers, or you could remotely transmit out a worm to every computer on the internet. HIV also uses two ways of spreading within the body.”


The model was inspired by similarities between HIV and computer worms such as the highly damaging “Conficker” worm, first detected in 2008, which has infected military and police networks across Europe and is still active today.


The researchers' findings, published on Thursday, showed that just as computer worms spread most efficiently by a combination of two routes, so does HIV – enabling the researchers to create a model for this "hybrid spreading", which accurately predicted patients' progression from HIV to Aids.
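
What "hybrid spreading" means can be caricatured in a few lines: one global route through the bloodstream plus one local cell-to-cell route, both drawing on the same pool of susceptible cells. The rates below are invented for illustration, and this is not the UCL group's model.

```python
# Toy illustration of hybrid spreading: infection grows through a global route
# (free virus in the bloodstream reaching any susceptible cell) plus a local
# route (direct cell-to-cell transfer). The rates are invented for
# illustration; this is not the UCL group's model.

def simulate(days=60, cells=1_000_000, beta_global=0.02, beta_local=0.25):
    infected = 1.0
    for _ in range(days):
        frac_susceptible = (cells - infected) / cells
        new_global = beta_global * frac_susceptible * infected   # bloodstream route
        new_local = beta_local * frac_susceptible * infected     # cell-to-cell route
        infected = min(cells, infected + new_global + new_local)
    return infected

if __name__ == "__main__":
    for label, kwargs in [("hybrid spread", {}), ("global route only", {"beta_local": 0.0})]:
        print(f"{label:>17}: {simulate(**kwargs):,.0f} infected cells after 60 days")
```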


Detailed sample data from 17 HIV patients from London were used to verify the model, suggesting that hybrid spreading provides the best explanation for HIV progression and highlighting the benefits of very prompt treatment.


Chain said the model provided strong evidence of cell-to-cell spread, which he said some HIV scientists remained sceptical about, as it is difficult to observe in human beings because it occurs in tissue.

Scooped by Dr. Stefan Gruenwald

Scientists hope computer modelling can help predict flu outbreaks


There's no shortage of experts monitoring influenza outbreaks around the globe. The Centers for Disease Control and Prevention tracks flu activity in the United States year-round and produces weekly flu activity reports during the peak months of October through May. Likewise, the World Health Organisation constantly gathers epidemiological surveillance data, and releases updates on outbreaks taking place anywhere, anytime.


Still, despite the monitoring and the annual push to administer flu vaccines, influenza sickens millions of people around the world each year, leading to as many as 500,000 deaths annually. Young children and the elderly in particular are at risk. But what if we were better at predicting – and preparing for – seasonal outbreaks? That's the impetus driving a team of researchers trying to show that it is possible to predict the timing and intensity of flu outbreaks in subtropical climates, such as Hong Kong, where flu seasons occur at irregular intervals throughout the year. The group, which includes scientists from Columbia University and the University of Hong Kong, has created a computer model to run various simulations of an outbreak and predict its magnitude and peak, according to a study published in the journal PLOS Computational Biology.
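
The published system couples a mechanistic epidemic model to real-time data assimilation. Purely as a flavor of the mechanistic half, here is a bare-bones SIR simulation with invented parameters; it is not the Columbia/HKU forecasting framework.

```python
# Bare-bones SIR epidemic simulation, just to show the kind of mechanistic
# model that forecasting systems wrap with real-time data assimilation.
# Parameters are invented; this is not the Columbia/HKU forecasting system.

def sir(population=7_300_000, i0=50, beta=0.45, gamma=0.25, days=180):
    s, i, r = population - i0, float(i0), 0.0
    curve = []
    for _ in range(days):
        new_inf = beta * s * i / population   # new infections per day
        new_rec = gamma * i                   # recoveries per day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        curve.append(i)
    return curve

if __name__ == "__main__":
    curve = sir()
    peak_day = max(range(len(curve)), key=curve.__getitem__)
    print(f"predicted peak: day {peak_day}, ~{curve[peak_day]:,.0f} concurrently infected")
```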


As a test case, the researchers gathered flu data from dozens of outpatient clinics and lab reports in Hong Kong between 1998 and 2013, then explored whether their system could accurately predict how outbreaks played out during those years. They said the program did remarkably well at predicting the peak of an outbreak several weeks in advance.


That’s not to say it was perfect. Researchers said the accuracy of the predictions varied, depending on the strength of an outbreak and how far in advance they tried to make a prediction. In addition, forecasts for specific strains of influenza proved more reliable than those for overall epidemics, and it was easier to predict the peak and magnitude of an outbreak than exactly when it would begin or how long it might last.

Scooped by Dr. Stefan Gruenwald

Researchers Build Smallest Transistor from a Molecule and a Few Atoms


A team of physicists from the Paul-Drude-Institut für Festkörperelektronik (PDI) and the Freie Universität Berlin (FUB), Germany, the NTT Basic Research Laboratories (NTT-BRL), Japan, and the U.S. Naval Research Laboratory (NRL), United States, has used a scanning tunneling microscope to create a minute transistor consisting of a single molecule and a small number of atoms. The observed transistor action is markedly different from the conventionally expected behavior and could be important for future device technologies as well as for fundamental studies of electron transport in molecular nanostructures. The complete findings are published in the August 2015 issue of the journal Nature Physics.


Transistors have a channel region between two external contacts and an electrical gate electrode to modulate the current flow through the channel. In atomic-scale transistors, this current is extremely sensitive to single electrons hopping via discrete energy levels. Single-electron transport in molecular transistors has been previously studied using top-down approaches, such as lithography and break junctions. But atomically precise control of the gate – which is crucial to transistor action at the smallest size scales – is not possible with these approaches. 


The team used a highly stable scanning tunneling microscope (STM) to create a transistor consisting of a single organic molecule and positively charged metal atoms, positioning them with the STM tip on the surface of an indium arsenide (InAs) crystal. Kiyoshi Kanisawa, a physicist at NTT-BRL, used the growth technique of molecular beam epitaxy to prepare this surface. Subsequently, the STM approach allowed the researchers, first, to assemble electrical gates from the +1 charged atoms with atomic precision and, then, to place the molecule at various desired positions close to the gates.


Stefan Fölsch, a physicist at the PDI who led the team, explained that “the molecule is only weakly bound to the InAs template. So, when we bring the STM tip very close to the molecule and apply a bias voltage to the tip-sample junction, single electrons can tunnel between template and tip by hopping via nearly unperturbed molecular orbitals, similar to the working principle of a quantum dot gated by an external electrode. In our case, the charged atoms nearby provide the electrostatic gate potential that regulates the electron flow and the charge state of the molecule”.
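
The gating idea can be caricatured in a few lines: the molecular level is shifted by the electrostatic gate, and current can only flow when that level sits inside the bias window between tip and sample. The energies and lever arm below are illustrative assumptions, not values measured in the experiment.

```python
# Minimal cartoon of gating a single level: current can flow only when the
# molecular level, shifted by the electrostatic gate, sits inside the bias
# window between tip and sample. All numbers are illustrative assumptions,
# not measured values from the PDI/NTT/NRL experiment.

def level_energy(eps0_meV: float, gate_mV: float, alpha: float = 0.1) -> float:
    """Orbital energy shifted by the gate, with an assumed lever arm alpha."""
    return eps0_meV - alpha * gate_mV

def conducts(eps0_meV: float, gate_mV: float, bias_mV: float) -> bool:
    """True if the gated level lies inside the bias window [0, eV]."""
    eps = level_energy(eps0_meV, gate_mV)
    lo, hi = sorted((0.0, bias_mV))
    return lo <= eps <= hi

if __name__ == "__main__":
    for vg in (0, 50, 100, 150):
        on = conducts(eps0_meV=12.0, gate_mV=vg, bias_mV=8.0)
        print(f"gate {vg:3d} mV -> current flows: {on}")
```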


But there is a substantial difference between a conventional semiconductor quantum dot – comprising typically hundreds or thousands of atoms – and the present case of a surface-bound molecule: Steven Erwin, a physicist at NRL and expert in density-functional theory, pointed out that “the molecule adopts different rotational orientations, depending on its charge state. We predicted this based on first-principles calculations and confirmed it by imaging the molecule with the STM”. This coupling between charge and orientation has a dramatic effect on the electron flow across the molecule, manifested by a large conductance gap at low bias voltages. Piet Brouwer, a physicist at FUB and expert in quantum transport theory, said that “this intriguing behavior goes beyond the established picture of charge transport through a gated quantum dot. Instead, we developed a generic model that accounts for the coupled electronic and orientational dynamics of the molecule”. This simple and physically transparent model entirely reproduces the experimentally observed single-molecule transistor characteristics.


The perfection and reproducibility offered by these STM-generated transistors will enable the exploration of elementary processes involving current flow through single molecules at a fundamental level. Understanding and controlling these processes – and the new kinds of behavior to which they can lead – will be important for integrating molecule-based devices with existing semiconductor technologies.

Scooped by Dr. Stefan Gruenwald

Broad Institute, Google Genomics combine bioinformatics and computing expertise


Broad Institute of MIT and Harvard is teaming up with Google Genomics to explore how to break down major technical barriers that increasingly hinder biomedical research by addressing the need for computing infrastructure to store and process enormous datasets, and by creating tools to analyze such data and unravel long-standing mysteries about human health.

As a first step, Broad Institute’s Genome Analysis Toolkit, or GATK, will be offered as a service on the Google Cloud Platform, as part of Google Genomics. The goal is to enable any genomic researcher to upload, store, and analyze data in a cloud-based environment that combines the Broad Institute’s best-in-class genomic analysis tools with the scale and computing power of Google.

GATK is a software package developed at the Broad Institute to analyze high-throughput genomic sequencing data. GATK offers a wide variety of analysis tools, with a primary focus on genetic variant discovery and genotyping as well as a strong emphasis on data quality assurance. Its robust architecture, powerful processing engine, and high-performance computing features make it capable of taking on projects of any size.
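
For readers who have not used it, a typical GATK germline variant-calling step looks roughly like the hypothetical invocation below. The file names are placeholders, and options and best practices change between GATK versions, so the current GATK documentation is the authority.

```python
# Hypothetical example of driving one GATK step from Python. File names are
# placeholders and 'gatk' must be on PATH; options and best practices change
# between GATK versions, so check the current GATK documentation.
import subprocess

cmd = [
    "gatk", "HaplotypeCaller",
    "-R", "reference.fasta",     # reference genome
    "-I", "sample.bam",          # aligned, deduplicated reads
    "-O", "sample.g.vcf.gz",     # per-sample variant calls
    "-ERC", "GVCF",              # emit a GVCF for later joint genotyping
]
subprocess.run(cmd, check=True)
```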

GATK is already available for download at no cost to academic and non-profit users. In addition, business users can license GATK from the Broad. To date, more than 20,000 users have processed genomic data using GATK.

The Google Genomics service will provide researchers with a powerful, additional way to use GATK. Researchers will be able to upload genetic data and run GATK-powered analyses on Google Cloud Platform, and may use GATK to analyze genetic data already available for research via Google Genomics. GATK as a service will make best-practice genomic analysis readily available to researchers who don’t have access to the dedicated compute infrastructure and engineering teams required for analyzing genomic data at scale. An initial alpha release of the GATK service will be made available to a limited set of users.

“Large-scale genomic information is accelerating scientific progress in cancer, diabetes, psychiatric disorders, and many other diseases,” said Eric Lander, President and Director of Broad Institute. “Storing, analyzing, and managing these data is becoming a critical challenge for biomedical researchers. We are excited to work with Google’s talented and experienced engineers to develop ways to empower researchers around the world by making it easier to access and use genomic information.”

Scooped by Dr. Stefan Gruenwald

With deep learning and dimensionality reduction, we can visualize the entirety of Wikipedia?


Deep neural networks are an approach to machine learning that has revolutionized computer vision and speech recognition in the last few years, blowing the previous state of the art results out of the water. They’ve also brought promising results to many other areas, including language understanding and machine translation. Despite this, it remains challenging to understand what, exactly, these networks are doing.


Understanding neural networks is just scratching the surface, however, because understanding the network is fundamentally tied to understanding the data it operates on. The combination of neural networks and dimensionality reduction turns out to be a very interesting tool for visualizing high-dimensional data – a much more powerful tool than dimensionality reduction on its own.


Paragraph vectors, introduced by Le & Mikolov (2014), are vectors that represent chunks of text. Paragraph vectors come in a few variations but the simplest one, which we are using here, is basically some really nice features on top of a bag of words representation.


With word embeddings, we learn vectors in order to solve a language task involving the word. With paragraph vectors, we learn vectors in order to predict which words are in a paragraph.


Concretely, the neural network learns a low-dimensional approximation of word statistics for different paragraphs. In the hidden representation of this neural network, we get vectors representing each paragraph. These vectors have nice properties, in particular that similar paragraphs are close together.
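
The same recipe, paragraph vectors followed by dimensionality reduction, can be reproduced at toy scale with off-the-shelf tools such as gensim's Doc2Vec and scikit-learn's t-SNE. The four-document corpus below stands in for Wikipedia articles; this is of course not the pipeline the Google team used.

```python
# Toy reproduction of the recipe with off-the-shelf tools (gensim >= 4 for
# Doc2Vec, scikit-learn for t-SNE). The four tiny "articles" stand in for
# Wikipedia; this is not the pipeline used by the Google team.
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.manifold import TSNE

corpus = {
    "Football":       "football is a team sport played with a ball on a pitch",
    "Basketball":     "basketball is a team sport where players shoot a ball through a hoop",
    "Jazz":           "jazz is a music genre built on improvisation and swing",
    "Photosynthesis": "photosynthesis converts light energy into chemical energy in plants",
}

docs = [TaggedDocument(words=text.split(), tags=[title])
        for title, text in corpus.items()]

# Learn a paragraph vector for each document.
model = Doc2Vec(docs, vector_size=32, min_count=1, epochs=200, seed=1)
vectors = np.array([model.dv[title] for title in corpus])

# Squash the 32-dimensional paragraph vectors down to 2-D for a map-like view.
coords = TSNE(n_components=2, perplexity=2, init="random",
              random_state=0).fit_transform(vectors)

for title, (x, y) in zip(corpus, coords):
    print(f"{title:>14}: ({x:8.2f}, {y:8.2f})")
```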


Now, Google has some pretty awesome people. Andrew Dai, Quoc Le, and Greg Corrado decided to create paragraph vectors for some very interesting data sets. One of those was Wikipedia, creating a vector for every English Wikipedia article. The result is that we get a visualization of the entirety of Wikipedia. A map of Wikipedia. A large fraction of Wikipedia’s articles fall into a few broad topics: sports, music (songs and albums), films, species, and science.

Tom Vandermolen's curator insight, July 1, 2015 1:12 AM

Another great machine learning/semantics tool.  We're getting closer, and it feels like all of these different techniques are homing in on *something* from different directions.

Scooped by Dr. Stefan Gruenwald

Facebook's new AI software can recognize you in photos even if you're not looking


Thanks to the latest advances in computer vision, we now have machines that can pick you out of a line-up. But what if your face is hidden from view? An experimental algorithm out of Facebook's artificial intelligence lab can recognise people in photographs even when it can't see their faces. Instead it looks for other unique characteristics like your hairdo, clothing, body shape and pose.


Modern face-recognition algorithms are so good they've already found their way into social networks, shops and even churches. Yann LeCun, head of artificial intelligence at Facebook, wanted to see if they could be adapted to recognise people in situations where someone's face isn't clear, something humans can already do quite well.


"There are a lot of cues we use. People have characteristic aspects, even if you look at them from the back," LeCun says. "For example, you can recognize Mark Zuckerberg very easily, because he always wears a gray T-shirt."


The research team pulled almost 40,000 public photos from Flickr - some of people with their full face clearly visible, and others where they were turned away - and ran them through a sophisticated neural network.


The final algorithm was able to recognise individual people's identities with 83 per cent accuracy. It was presented earlier this month at the Computer Vision and Pattern Recognition conference in Boston, Massachusetts. An algorithm like this could one day help power photo apps like Facebook's Moments, released last week.


Moments scours through a phone's photos, sorting them into separate events like a friend's wedding or a trip to the beach and tagging whoever it recognises as a Facebook friend. LeCun also imagines such a tool would be useful for the privacy-conscious - alerting someone whenever a photo of themselves, however obscured, pops up on the internet.


The flipside is also true: the ability to identify someone even when they are not looking at the camera raises some serious privacy implications. Last week, talks over rules governing facial recognition collapsed after privacy advocates and industry groups could not agree.


"If, even when you hide your face, you can be successfully linked to your identity, that will certainly concern people," says Ralph Gross at Carnegie Mellon University in Pittsburgh, Pennsylvania, who says the algorithm is impressive. "Now is a time when it's important to discuss these questions."

Scooped by Dr. Stefan Gruenwald

Researchers use disordered matter for computation, evolving breakthrough nanoparticle Boolean logic network


Natural computers, such as evolved brains and cellular automata, express sophisticated interconnected networks and exhibit massive parallelism. They also adapt to exploit local physical properties such as capacitative crosstalk between circuits. By contrast, synthetic computers channel activity according to established design rules and do not adapt to take advantage of their surroundings. Thus, researchers are interested in using matter itself for computation.


Scientists have speculated about the ability to evolve designless nanoscale networks of inanimate matter with the same robust capacities as natural computers, but have not yet realized the concept. Now, a group of researchers reports in Nature Nanotechnology a disordered nanomaterials system that was artificially evolved by optimizing the values of control voltages according to a genetic algorithm.


Using interconnected metal nanoparticles, which act as nonlinear single-electron transistors, the researchers were able to exploit the system's emergent network properties to create a universal, reconfigurable Boolean gate. The authors note that their system meets the requirements for a cellular neural network—universality, compactness, robustness and evolvability. Their approach works around the device-to-device variations that are becoming increasingly difficult to align as semiconductors approach the nanoscale, and which result in uncertainties in performance.


Their system is a disordered nanoparticle network that can be reconfigured in situ into any two-input Boolean logic gate by tuning six static control voltages. It exploits the rich emergent behavior of up to 100 arbitrarily interconnected nanoparticles. For the experiment, the researchers used 20 nm gold nanoparticles interconnected with insulating molecules. These single-electron transistors express strongly nonlinear switching behavior, and the researchers looked for logic gates among the mutual interactions between them.


The fastest method proved to be artificial evolution. They developed a genetic algorithm that followed the well-known rules of natural selection, considering each control voltage as a gene and the complete set of system voltages as a genome. The best-performing (i.e., "fittest") genomes were preserved and improved via a composite cloning-breeding approach. The desirable traits of the initial, mostly low-performing genomes were passed selectively to subsequent generations.
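
In that spirit, the sketch below runs a toy genetic algorithm in which each genome is a set of six control voltages and fitness counts how many rows of a target truth table a black-box device reproduces. The device function is an invented stand-in, not a model of the actual nanoparticle network, where the physics itself plays that role.

```python
# Toy genetic algorithm in the spirit described above: each genome is a set of
# six control voltages, and fitness counts how many rows of a target truth
# table a black-box "device" reproduces. The device function is an invented
# nonlinear stand-in, not a model of the real nanoparticle network.
import random

TRUTH_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # target gate: XOR

def device_output(voltages, a, b):
    """Invented nonlinear 'device' mixing the inputs with the control voltages."""
    x = sum(v * ((i % 2) * a + ((i + 1) % 2) * b + 0.3) for i, v in enumerate(voltages))
    return 1 if (x % 2.0) > 1.0 else 0

def fitness(genome):
    return sum(device_output(genome, a, b) == out
               for (a, b), out in TRUTH_TABLE.items())

def evolve(pop_size=50, genes=6, generations=200):
    pop = [[random.uniform(-1.0, 1.0) for _ in range(genes)] for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TRUTH_TABLE):
            return gen, pop[0]                                # perfect gate found
        survivors = pop[: pop_size // 5]                      # keep the fittest genomes
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(genes)
            child = p1[:cut] + p2[cut:]                       # breeding (crossover)
            child[random.randrange(genes)] += random.gauss(0, 0.1)   # mutation
            children.append(child)
        pop = survivors + children
    return generations, pop[0]

if __name__ == "__main__":
    gen, best = evolve()
    print(f"stopped at generation {gen} with fitness {fitness(best)}/{len(TRUTH_TABLE)}")
```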


For each logic gate evolved, the genetic algorithm almost always converged to a viable genome within less than 200 generations. The researchers note that due to the slow input signals they used, the process took about an hour; optimizing the system set-up could result in faster evolution.

Scooped by Dr. Stefan Gruenwald

Researchers develop simple way to ward off Trojan attacks on quantum cryptographic systems

A team of researchers working for Toshiba in Japan and the U.K. has found a way to prevent Trojan horse attacks on quantum key distribution (QKD) systems. They describe their ideas in a paper they have had published in Physical Review X.


One of the hot areas of study in creating secure computer messaging systems is QKD—encrypted keys can be sent securely across public domain fiber networks safe from prying snoopers—if a key is intercepted, quantum physics ensures that it will be made known to the party on the receiving end, who will then call for a new key. Such systems are not as perfect as they seem, however, as there is always a weak point in any system. In those based on QKD, that weak point typically resides at the sending site—in one scenario, an interloper, in computer circles known as Eve, can simply shine a bright light on the sender's encoder and then measure the reflection of the light that comes back, revealing the information that was used to make the key. This is a form of Trojan attack. Some have suggested that one way to thwart such an attack is to install devices that detect the physical presence of a person or device near an encoder, but that leaves open the possibility of the attacker foiling the new devices as well. In this new research, the team at Toshiba proposes a new approach, modifying the transmitter so that reflected light will be too weak to reveal any useful information.


The modifications to the transmitter would include adding an attenuator which would reduce the pulse to just one photon, an isolator which would only allow out-going light to pass through, and of course, a filter which would prevent the transfer of any wavelengths not initially specified to be in the channel.
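
Rough arithmetic shows why this passive approach bites: Eve's probe light pays the attenuator's loss twice (in and back out) plus the isolator's reverse loss, so the number of photons returned per probe pulse collapses. The decibel figures below are illustrative assumptions, not the values used in the Toshiba system.

```python
# Rough arithmetic only: each decibel of one-way attenuation is paid twice by
# Eve's reflected probe light, and the isolator adds reverse loss on top, so
# the photons returned per probe pulse drop by many orders of magnitude.
# All dB figures and the reflectivity are illustrative assumptions.

def reflected_photons(photons_in, attenuator_db, isolator_reverse_db, reflectivity=0.01):
    total_db = 2 * attenuator_db + isolator_reverse_db   # in and back out through the attenuator
    return photons_in * reflectivity * 10 ** (-total_db / 10)

if __name__ == "__main__":
    out = reflected_photons(photons_in=1e8, attenuator_db=30, isolator_reverse_db=50)
    print(f"~{out:.2e} photons per probe pulse make it back to Eve")
```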


The team has already built and tested a partial system with the new passive countermeasure and reports that it does indeed protect against Trojan attacks. They note also that it is a relatively cheap way to get the job done and the devices can be installed rather easily. Next up is a prototype that will serve as the basis for a product for delivery to customers.

Scooped by Dr. Stefan Gruenwald

Hackers Remotely Inactivate a Car’s Brakes Via a Common Car Gadget

Car hacking demos like last month's over-the-internet hijacking of a Jeep have shown it's possible for digital attackers to cross the gap between a car's cellular-connected infotainment system and its steering and brakes. But a new piece of research suggests there may be an even easier way for hackers to wirelessly access those critical driving functions: through an entire industry of potentially insecure, internet-enabled gadgets plugged directly into cars' most sensitive guts.

At the Usenix security conference today, a group of researchers from the University of California at San Diego plan to reveal a technique they could have used to wirelessly hack into any of thousands of vehicles through a tiny commercial device: A 2-inch-square gadget that’s designed to be plugged into cars’ and trucks’ dashboards and used by insurance firms and trucking fleets to monitor vehicles’ location, speed and efficiency. By sending carefully crafted SMS messages to one of those cheap dongles connected to the dashboard of a Corvette, the researchers were able to transmit commands to the car’s CAN bus—the internal network that controls its physical driving components—turning on the Corvette’s windshield wipers and even enabling or disabling its brakes.

“We acquired some of these things, reverse engineered them, and along the way found that they had a whole bunch of security deficiencies,” says Stefan Savage, the University of California at San Diego computer security professor who led the project. The result, he says, is that the dongles “provide multiple ways to remotely…control just about anything on the vehicle they were connected to.”
Scooped by Dr. Stefan Gruenwald

IBM's $3 Billion Investment In Synthetic Brains And Quantum Computing


IBM is unveiling a massive $3 billion research and development round investing in weird technologies—and, in the process, essentially staking Big Blue’s long-term survival on big data and cognitive computing. Over the next five years, IBM will invest a significant amount of their total revenue in technologies like non-silicon computer chips, quantum computing research, and computers that mimic the human brain.


IBM’s investment is one of the largest for quantum computing to date; the company is one of the biggest researchers in the field, along with a Canadian company named D-Wave which is partnered with Google and NASA to develop quantum computer systems.


The news of the funding round is surprising to some IBM-watchers; for the past year or so, rumors flew that IBM was considering an exit from the microchip business. Barring an unlikely sharp turn by Big Blue in business strategy, the funding round is essentially a shift of strategy by IBM. While it looks likely that IBM will sell their chip unit—the New York Times cites GlobalFoundries as a likely buyer—the investment means that IBM sees a new way to make money from chips: Long-term returns from holding valuable, potentially lucrative patents and intellectual property.


"The point of this announcement is to underscore our commitment to the future of computing," Guha told Fast Company. "As you probably know, silicon technology has taken us a long way. A lot of stuff you see around you is a result of our ability to scale silicon tech, but the community at large realizes the end of silicon scaling is coming. However, performance scaling in computer system will continue in various ways; our R&D efforts are focused on different ways and means by which we do so."


Of all the investments announced in the round, neurosynaptic chips are the most novel. Essentially low-power microchips designed to mimic the behavior of the human brain, IBM has been researching the feasibility of building technology that can mimic human cognition for years. IBM is believed to be building a new programming language around the chips, which will be used for machine learning and cognitive computing systems like Watson. Some proof-of-concept neurosynaptic computing projects IBM announced previously include oral thermometers which identify bacteria by their odor and "conversation flowers" placed on tables which automatically identify speakers by voice and generate real-time transcripts of conversations, rendering transcriptionists obsolete.

Rescooped by Dr. Stefan Gruenwald from Nostri Orbis

IoT mapped: The emerging landscape of smart things


No one really knows how many "things" there are deployed today that have IoT characteristics. IDC's 2013 estimate was about 9.1 billion, growing to about 28 billion by 2020 and over 50 billion by 2025. You can get pretty much any other number you want, but all the estimates are very large. So what are all these IoT things doing and why are they there? Here's our attempt to map out the IoT landscape.


There are a whole lot of possible organizational approaches to the constituent parts of IoT. One can use a “halo” approach, looking at how IoT principles will be applied to individual people, their surroundings (vehicles and homes), the organization of those surroundings (towns and cities and the highways and other transit systems that connect them), the range of social activities (essentially commerce, but also travel, hospitality, entertainment and leisure) that go on in those surroundings and finally the underpinnings of those activities (“industrial” including agriculture, energy and transport and logistics).


This is not an exhaustive taxonomy (excluded are all military and some law enforcement specific uses) or even the best way to organize things, but it’s a useful start and has been helpful in explaining the opportunity to the businesses we advise.


Via Fernando Gil
Rescooped by Dr. Stefan Gruenwald from Limitless learning Universe

Bouncing single photons off satellites for flexible high-end quantum key encryption


Quantum key distribution is regularly touted as the encryption of the future. While the keys are exchanged on an insecure channel, the laws of physics provide a guarantee that two parties can exchange a secret key and know if they're being overheard. This unencrypted-but-secure form of key exchange circumvents one of the potential shortcomings of some forms of public key systems.


However, quantum key distribution (QKD) has one big downside: the two parties need to have a direct link to each other. So, for instance, banks in and around Geneva use dedicated fiber links to perform QKD, but they can only do this because the link distance is less than 100 km. These fixed and short links are an expensive solution. A more flexible solution is required if QKD is going to be used for more general encryption purposes.


A group of Italian researchers have demonstrated the possibility of QKD via a satellite, which in principle (but not in practice) means that any two parties with a view of a satellite can exchange keys.


QKD is based on, essentially, the fact that once you measure the state of a photon, the photon is gone—you need to absorb the photon with a detector to measure its state. To take a particular example, we have Alice and Bob who want to communicate without letting the nefarious Eve into the picture. They begin by generating a secret key, through the laws of quantum physics, with which to encode their future communications.


Alice generates two lists of random ones and zeros. The first list contains bit values, and the second set is used to set the basis (think of this as the orientation of the measurement system) of a string of single photons. An important point is that these two basis sets are not orthogonal. So, for instance, a common example is to choose vertical and horizontal polarization for one basis and two diagonal polarizations for the second. Between the two values, the polarization of the photon is set into four possible states.


These single photons are sent to Bob, who will measure them. But, the quantum measurements don't allow you to ask a photon "What polarization are you?" Instead you end up asking questions like "Are you vertical or horizontally polarized?" So, Bob randomly chooses between the two basis sets. Sometimes he asks the photons which diagonal polarization they have and other times he asks them if they are vertical or horizontally polarized.


Now, if Alice sends a vertically polarized photon to Bob who asks which diagonal polarization it has, the photon will end up randomly choosing 45 degrees or 135 degrees. However, if Alice chooses to send a horizontally polarized photon and Bob asks the photon if it is horizontally or vertically polarized, he will always get horizontally polarized. The key point is that the measurement basis choice determines how the photon must be described. If Bob and Alice make the same choice, the photon is either in one or other state. If their choices are different, the photon, according to Bob, is in a superposition of two states. The upshot is that, in the first case, the measurement process is deterministic. Alice and Bob can know from their instrument settings exactly which of Bob's detector must click. In the second case, however, the measurement process forces the photon to randomly choose from two states: neither Bob nor Alice can predict the outcome of the measurement. It is this uncertainty, and how intervening measurements by Eve modify that uncertainty, that give QKD its security.


After all the photons are sent, Bob has a string of random numbers, but he has no way of knowing which ones to choose to make up a key. To create a common secret key, Bob and Alice publicly announce their choice of basis set for each bit. But, the choice of which polarization is kept secret. Alice and Bob can look for the positions in the string where they made the same choices and choose those bits to generate the common key.


The next step is to reveal Eve. To do this, Alice announces a section of the secret key. How does this reveal Eve? Let's suppose that Eve is intercepting the photons. She randomly chooses a basis set and measures the photons, but Eve doesn't know which basis set Alice chose. When Eve tries to recreate the photon state that Alice sent, she gets it wrong half the time. So, instead of Alice and Bob finding that they get the same result all the time, the number drops to one half. Eve can, of course, be subtler and only intercept every second photon, bringing the statistic closer to full agreement. But, the fewer photons she intercepts, the less information she has.


When Alice and Bob compare statistics for the partial key, they not only know that Eve is there, but how much information Eve is getting. If Eve was not present, they can throw away the revealed section of key and continue to generate more key digits. However, even if Eve is listening in, they can determine if they wish to go on, based on knowing how much of the key Eve is intercepting.
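
The whole exchange described above is straightforward to simulate. The sketch below is a minimal BB84-style toy with ideal devices, no channel noise and a naive intercept-resend Eve: Alice sends random bits in random bases, Bob measures in random bases, they keep only the positions where their bases agree, and Eve shows up as roughly 25 percent disagreement in the bits revealed for checking.

```python
# Minimal BB84-style simulation of the exchange described above: random bits in
# random bases, basis sifting, and an intercept-resend Eve who shows up as ~25%
# disagreement in the checked bits. Ideal devices, no channel noise; toy only.
import random

def bb84(n_photons=20000, eavesdropper=False):
    alice_bits = [random.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [random.randint(0, 1) for _ in range(n_photons)]   # 0: rectilinear, 1: diagonal
    bob_bases = [random.randint(0, 1) for _ in range(n_photons)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdropper:                     # intercept-resend attack
            e_basis = random.randint(0, 1)
            bit = bit if e_basis == a_basis else random.randint(0, 1)
            a_basis = e_basis                # photon is re-sent in Eve's basis
        bob_bits.append(bit if b_basis == a_basis else random.randint(0, 1))

    # Sifting: keep only positions where Alice's and Bob's basis choices agree.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return len(sifted), errors / len(sifted)

if __name__ == "__main__":
    for eve in (False, True):
        kept, qber = bb84(eavesdropper=eve)
        print(f"Eve present: {eve!s:5}  sifted key bits: {kept:5d}  error rate: {qber:.1%}")
```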


Via CineversityTV
Scooped by Dr. Stefan Gruenwald

Top 500 Supercomputer List Reflects Shifting State of Global HPC Trends


If the bi-annual list of the world's fastest, most powerful supercomputers were used as an indicator of key technological, government investment, and scientific progress, one could make some striking predictions about where the next centers of worldwide innovation are likely to rest.


With China clinging to its dramatic top placement for the fifth consecutive iteration of the list, which was just announced at the International Supercomputing Conference in Frankfurt, Germany, the U.S. position only just holding steady, and Japan's new wave of systems taking hold, the global playing field at the highest end of computing is set for some interesting times.


Interestingly, despite its far-and-away win with the Tianhe-2 supercomputer, the number of systems in China has dropped significantly over the last year. In November of 2014, the country claimed 61 systems on the Top 500—a number that has plummeted to 37 with the retirement of several systems that were toward the bottom. The United States is holding steady with 230 machines on the Top 500—an impressive number, but this is the fewest supercomputers on U.S. soil outside of one other drop (down to 226) in the early 2000s.


The stunning supercomputing story of late has been Japan, the #2-ranked user of HPC on the planet (with South Korea as an up-and-comer). In June 2010, Japan had 18 systems on the list. By 2012, there were 35 (including the K Computer, which is still highly ranked); in 2013 there were 30. And in the list for this summer there were a total of 39. In addition to the successful Fujitsu-built K Computer, the TSUBAME 2.5 machine and the upcoming (2016) TSUBAME 3.0 system will take top spots on the list, rounding out the hardware and software investments of Japan's HPCI program, which is the nation's umbrella for reaching eventual exascale targets in the post-2020 timeframe.


The share of European systems is quite large as well—and growing slightly. On the November list there were 130 systems, which has jumped to 141. And while Asian machines are the topic of a great deal of conversation (and an upcoming panel that we will be covering this week) there are 107 systems from Asia now, down from 120 in November.


While developments in Japan, South Korea, and Russia are noteworthy, to be fair, only a relatively small fraction of high performance computing sites run the LINPACK benchmark (estimates are between 15 and 20 percent), which is the metric by which Top 500 placement is determined. To make economic and competitiveness predictions on this alone would be laden with caveats. However, one of the biggest trends of note is that the list overall, across countries and system types, is in a rut, with unprecedentedly low replacement rates and old systems that are still running the benchmark and keeping the list essentially where it was in late 2013-early 2014.


While there might not be any earth-shattering new supercomputers to surprise and awe us on the latest round of the Top 500 fastest systems announcement today, the relative calm of the list speaks a great deal, even for those who do not regularly follow advances in supercomputer performance. Investments and progress toward ever-faster Top 500 machines are an indicator (and outcome) of economic, scientific, and industrial growth, so for this June's list, it is useful to focus a bit less than usual on the systems themselves and more on some of the key movements that define supercomputing in 2015.


Scooped by Dr. Stefan Gruenwald

Computer scientists find mass extinctions can accelerate evolution


A computer science team at The University of Texas at Austin has found that robots evolve more quickly and efficiently after a virtual mass extinction modeled after real-life disasters such as the one that killed off the dinosaurs. Beyond its implications for artificial intelligence, the research supports the idea that mass extinctions actually speed up evolution by unleashing new creativity in adaptations.


Computer scientists Risto Miikkulainen and Joel Lehman co-authored the study published today in the journal PLOS One, which describes how simulations of mass extinctions promote novel features and abilities in surviving lineages.


"Focused destruction can lead to surprising outcomes," said Miikkulainen, a professor of computer science at UT Austin. "Sometimes you have to develop something that seems objectively worse in order to develop the tools you need to get better."


In biology, mass extinctions are known for being highly destructive, erasing a lot of genetic material from the tree of life. But some evolutionary biologists hypothesize that extinction events actually accelerate evolution by promoting those lineages that are the most evolvable, meaning ones that can quickly create useful new features and abilities.


Miikkulainen and Lehman found that, at least with robots, this is the case. For years, computer scientists have used computer algorithms inspired by evolution to train simulated robot brains, called neural networks, to improve at a task from one generation to the next. The UT Austin team's innovation in the latest research was in examining how mass destruction could aid in computational evolution.


In computer simulations, they connected neural networks to simulated robotic legs with the goal of evolving a robot that could walk smoothly and stably. As with real evolution, random mutations were introduced through the computational evolution process. The scientists created many different niches so that a wide range of novel features and abilities would come about.


After hundreds of generations, a wide range of robotic behaviors had evolved to fill these niches, many of which were not directly useful for walking. Then the researchers randomly killed off the robots in 90 percent of the niches, mimicking a mass extinction.


After several such cycles of evolution and extinction, they discovered that the lineages that survived were the most evolvable and, therefore, had the greatest potential to produce new behaviors. Not only that, but overall, better solutions to the task of walking were evolved in simulations with mass extinctions, compared with simulations without them.
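
As a rough illustration of the procedure described above, the Python sketch below evolves toy genomes across many niches, applies random mutation, and periodically wipes out 90 percent of the niches before letting survivors recolonize the empty ones. All names and numbers (niche count, mutation rate, extinction interval, the fitness function) are illustrative assumptions; this is not the authors' code, which evolved neural network controllers for simulated robotic legs.

```python
import random

random.seed(0)

NUM_NICHES = 100           # distinct niches, each holding its best genome
GENOME_LEN = 16            # toy genome: a list of real-valued "weights"
GENERATIONS = 500
EXTINCTION_EVERY = 100     # trigger a mass extinction every N generations
EXTINCTION_FRACTION = 0.9  # fraction of niches wiped out

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def mutate(genome):
    # Random mutation: perturb each weight with small probability.
    return [w + random.gauss(0, 0.1) if random.random() < 0.2 else w
            for w in genome]

def fitness(genome):
    # Stand-in objective for "walks smoothly and stably".
    return -sum((w - 0.5) ** 2 for w in genome)

# Each niche starts with its own random lineage.
niches = {i: random_genome() for i in range(NUM_NICHES)}

for gen in range(1, GENERATIONS + 1):
    # Within each niche, keep the better of parent versus mutated child.
    for i, genome in list(niches.items()):
        child = mutate(genome)
        if fitness(child) > fitness(genome):
            niches[i] = child

    # Periodic mass extinction: randomly kill 90% of the niches, then
    # let surviving lineages recolonize the emptied ones.
    if gen % EXTINCTION_EVERY == 0:
        keep = max(1, int(NUM_NICHES * (1 - EXTINCTION_FRACTION)))
        survivors = random.sample(list(niches), keep)
        niches = {i: mutate(niches[random.choice(survivors)])
                  for i in range(NUM_NICHES)}

best = max(niches.values(), key=fitness)
print("best fitness after", GENERATIONS, "generations:", round(fitness(best), 4))
```

In this kind of setup, lineages that recover quickly after each wipe-out are exactly the "evolvable" ones the study describes, which is why the extinction cycles can improve the final solutions.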

Rescooped by Dr. Stefan Gruenwald from Systems Theory

The Humans Who Dream Of Companies That Won't Need Us

How would Ethereum's network autonomously run transportation apps, delivery services, and other companies? And would we even want that?

Via Spaceweaver, Ben van Lier
Spaceweaver's curator insight, July 19, 2015 7:18 AM

This is very relevant to short and long term Global Brain technologies.

Scooped by Dr. Stefan Gruenwald

Researchers develop basic computing elements for bacteria

Sensors, memory switches, and circuits can be encoded in a common gut bacterium.


The “friendly” bacteria inside our digestive systems are being given an upgrade, which may one day allow them to be programmed to detect and ultimately treat diseases such as colon cancer and immune disorders.


In a paper published today in the journal Cell Systems, researchers at MIT unveil a series of sensors, memory switches, and circuits that can be encoded in the common human gut bacterium Bacteroides thetaiotaomicron.


These basic computing elements will allow the bacteria to sense, memorize, and respond to signals in the gut, with future applications that might include the early detection and treatment of inflammatory bowel disease or colon cancer.


Researchers have previously built genetic circuits inside model organisms such as E. coli. However, such strains are only found at low levels within the human gut, according to Timothy Lu, an associate professor of biological engineering and of electrical engineering and computer science, who led the research alongside Christopher Voigt, a professor of biological engineering at MIT.


“We wanted to work with strains like B. thetaiotaomicron that are present in many people in abundant levels, and can stably colonize the gut for long periods of time,” Lu says. The team developed a series of genetic parts that can be used to precisely program gene expression within the bacteria. “Using these parts, we built four sensors that can be encoded in the bacterium’s DNA that respond to a signal to switch genes on and off inside B. thetaiotaomicron,” Voigt says. These signals can be food additives, including sugars, which allow the bacteria to be controlled by the food eaten by the host, Voigt adds.


To sense and report on pathologies in the gut, including signs of bleeding or inflammation, the bacteria will need to remember this information and report it externally. To enable them to do this, the researchers equipped B. thetaiotaomicron with a form of genetic memory. They used a class of proteins known as recombinases, which can record information into bacterial DNA by recognizing specific DNA addresses and inverting their direction.
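
Conceptually, each recombinase-based memory element behaves like a one-bit switch: a stretch of DNA between recognition sites is flipped once the recombinase is expressed, and the flipped orientation persists after the triggering signal disappears. The toy Python class below sketches that behavior under those assumptions; the sensor name is hypothetical, and the model abstracts away the actual genetic constructs used in the study.

```python
class RecombinaseSwitch:
    """Toy model of a one-bit genetic memory element.

    A DNA segment flanked by recombinase recognition sites starts in the
    'forward' orientation; exposure to the recombinase inverts it, and the
    inverted state persists (is 'remembered') after the signal is gone.
    """

    def __init__(self, name):
        self.name = name
        self.inverted = False  # False means the signal has never been seen

    def expose(self, signal_present):
        # The sensor drives recombinase expression only while the signal
        # (e.g. a dietary sugar) is present; the flip itself is permanent.
        if signal_present and not self.inverted:
            self.inverted = True

    def read_out(self):
        state = "signal was seen" if self.inverted else "no signal recorded"
        return f"{self.name}: {state}"


# Simulate a gut transit: the signal appears briefly, then disappears.
switch = RecombinaseSwitch("sugar-sensor")   # hypothetical sensor name
for signal in [False, False, True, False, False]:
    switch.expose(signal)

print(switch.read_out())  # -> "sugar-sensor: signal was seen"
```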


The researchers also implemented a technology known as CRISPR interference, which can be used to control which genes are turned on or off in the bacterium. The researchers used it to modulate the ability of B. thetaiotaomicron to consume a specific nutrient and to resist being killed by an antimicrobial molecule.


The researchers demonstrated that their set of genetic tools and switches functioned within B. thetaiotaomicron colonizing the gut of mice. When the mice were fed food containing the right ingredients, the researchers showed that the bacteria could remember what the mice had eaten.

Scooped by Dr. Stefan Gruenwald

Most internet anonymity software leaks users’ details


Virtual Private Networks (VPNs) are legal and increasingly popular with individuals wanting to circumvent censorship, avoid mass surveillance or access geographically limited services like Netflix and BBC iPlayer. Used by around 20 per cent of European internet users, they encrypt users’ internet communications, making it more difficult for people to monitor their activities.

The study of fourteen popular VPN providers found that eleven of them leaked information about the user because of a vulnerability known as ‘IPv6 leakage’. The leaked information ranged from the websites a user is accessing to the actual content of user communications, for example comments being posted on forums. Interactions with websites running HTTPS encryption, which includes financial transactions, were not leaked.

The leakage occurs because network operators are increasingly deploying a new version of the protocol used to run the Internet, called IPv6. IPv6 replaces the previous IPv4, but many VPNs only protect users’ IPv4 traffic. The researchers tested their ideas by choosing fourteen of the most popular VPN providers and connecting various devices to a WiFi access point designed to mimic the attacks hackers might use.
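
A rough way to check for this class of leak on your own connection is to compare the public addresses reported over IPv4 and IPv6 while the VPN is connected: if the IPv6 lookup returns an address belonging to your ISP rather than the VPN provider, IPv6 traffic is bypassing the tunnel. The sketch below is a simple diagnostic along those lines, not the researchers' test setup; it assumes the third-party lookup endpoints shown (ipv4.icanhazip.com / ipv6.icanhazip.com) are reachable, and any equivalent "what is my IP" service would do.

```python
import urllib.request

# Public "what is my IP" endpoints (assumed reachable; substitute any
# equivalent service). The two lookups can disagree if the VPN only
# tunnels IPv4 traffic.
IPV4_URL = "https://ipv4.icanhazip.com"
IPV6_URL = "https://ipv6.icanhazip.com"

def public_address(url):
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.read().decode().strip()
    except OSError:
        return None  # no connectivity over this protocol

ipv4 = public_address(IPV4_URL)
ipv6 = public_address(IPV6_URL)

print("public IPv4 address:", ipv4)
print("public IPv6 address:", ipv6)

if ipv6 is None:
    print("No IPv6 route detected; this particular kind of leak is unlikely.")
else:
    print("IPv6 is reachable: confirm the address above belongs to your "
          "VPN provider, not your ISP, otherwise IPv6 traffic is leaking.")
```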

The researchers attempted two kinds of attacks that might be used to gather user data: ‘passive monitoring’, simply collecting the unencrypted information that passed through the access point; and DNS hijacking, redirecting browsers to a controlled web server by pretending to be commonly visited websites like Google and Facebook.

The study also examined the security of various mobile platforms when using VPNs and found that they were much more secure when using Apple’s iOS, but were still vulnerable to leakage when using Google’s Android.

Dr Gareth Tyson, a lecturer from QMUL and co-author of the study, said: “There are a variety of reasons why someone might want to hide their identity online and it’s worrying that they might be vulnerable despite using a service that is specifically designed to protect them.

“We’re most concerned for those people trying to protect their browsing from oppressive regimes. They could be emboldened by their supposed anonymity while actually revealing all their data and online activity and exposing themselves to possible repercussions.”

SoftwarePromoCodes's curator insight, July 8, 2015 5:43 AM

This is why everyone should use multiple forms of protection like encryption, a VPN, and more..... 

Scooped by Dr. Stefan Gruenwald

D-Wave Systems Breaks the 1000 Qubit Quantum Computing Barrier


New Milestone Will Enable System to Address Larger and More Complex Problems


D-Wave Systems Inc., the world's first quantum computing company, today announced that it has broken the 1000 qubit barrier, developing a processor about double the size of D-Wave’s previous generation and far exceeding the number of qubits ever developed by D-Wave or any other quantum effort.


This is a major technological and scientific achievement that will allow significantly more complex computational problems to be solved than was possible on any previous quantum computer.


D-Wave’s quantum computer runs a quantum annealing algorithm to find the lowest points, corresponding to optimal or near-optimal solutions, in a virtual “energy landscape.” Every additional qubit doubles the search space of the processor. At 1000 qubits, the new processor considers 2^1000 possibilities simultaneously, a search space which dwarfs the 2^512 possibilities available to the 512-qubit D-Wave Two. In fact, the new search space contains far more possibilities than there are particles in the observable universe.
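
For intuition about the kind of problem being posed, the sketch below runs plain classical simulated annealing on a small random Ising-style energy landscape: binary spin variables, pairwise couplings, and a cooling schedule that gradually stops accepting uphill moves. It is a classical stand-in chosen purely for illustration, not a simulation of D-Wave's quantum annealing hardware, and the problem size and schedule are arbitrary assumptions.

```python
import math
import random

random.seed(1)

N = 20  # number of binary spin variables (the new D-Wave chip has ~1000 qubits)

# Random couplings J[i][j] and local fields h[i] define the energy landscape.
J = {(i, j): random.uniform(-1, 1) for i in range(N) for j in range(i + 1, N)}
h = [random.uniform(-1, 1) for _ in range(N)]

def energy(spins):
    e = sum(h[i] * spins[i] for i in range(N))
    e += sum(J[i, j] * spins[i] * spins[j] for (i, j) in J)
    return e

# Classical simulated annealing: accept uphill moves with a probability
# that shrinks as the "temperature" is lowered.
spins = [random.choice([-1, 1]) for _ in range(N)]
current = energy(spins)
STEPS = 20000
for step in range(STEPS):
    T = max(0.01, 2.0 * (1 - step / STEPS))  # simple linear cooling schedule
    i = random.randrange(N)
    spins[i] *= -1                           # propose flipping one spin
    proposed = energy(spins)
    if proposed <= current or random.random() < math.exp((current - proposed) / T):
        current = proposed                   # accept the move
    else:
        spins[i] *= -1                       # reject: flip the spin back

print("lowest energy found:", round(current, 4))
```

The annealer is marketed for exactly this family of problems, finding low-energy configurations of coupled binary variables, which is why every added qubit doubles the size of the configuration space being searched.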


“For the high-performance computing industry, the promise of quantum computing is very exciting. It offers the potential to solve important problems that either can’t be solved today or would take an unreasonable amount of time to solve,” said Earl Joseph, IDC program vice president for HPC. “D-Wave is at the forefront of this space today with customers like NASA and Google, and this latest advancement will contribute significantly to the evolution of the Quantum Computing industry.”


As the only manufacturer of scalable quantum processors, D-Wave breaks new ground with every succeeding generation it develops. The new processors, comprising over 128,000 Josephson tunnel junctions, are believed to be the most complex superconductor integrated circuits ever successfully yielded. They are fabricated in part at D-Wave’s facilities in Palo Alto, CA and at Cypress Semiconductor’s wafer foundry located in Bloomington, Minnesota.


“Temperature, noise, and precision all play a profound role in how well quantum processors solve problems.  Beyond scaling up the technology by doubling the number of qubits, we also achieved key technology advances prioritized around their impact on performance,” said Jeremy Hilton, D-Wave vice president, processor development. “We expect to release benchmarking data that demonstrate new levels of performance later this year.”


The 1000-qubit milestone is the result of intensive research and development by D-Wave and reflects a triumph over a variety of design challenges aimed at enhancing performance and boosting solution quality. Beyond the much larger number of qubits, other significant innovations include:


  •  Lower Operating Temperature: While the previous generation processor ran at a temperature close to absolute zero, the new processor runs 40% colder. The lower operating temperature enhances the importance of quantum effects, which increases the ability to discriminate the best result from a collection of good candidates.
  • Reduced Noise: Through a combination of improved design, architectural enhancements and materials changes, noise levels have been reduced by 50% in comparison to the previous generation. The lower noise environment enhances problem-solving performance while boosting reliability and stability.
  • Increased Control Circuitry Precision: In the testing to date, the more precise control circuitry, coupled with the noise reduction, has improved precision by up to 40%. Accomplishing both while also improving manufacturing yield is a significant achievement.
  • Advanced Fabrication:  The new processors comprise over 128,000 Josephson junctions (tunnel junctions with superconducting electrodes) in a 6-metal layer planar process with 0.25μm features, believed to be the most complex superconductor integrated circuits ever built.
  • New Modes of Use: The new technology expands the boundaries of ways to exploit quantum resources. In addition to the discrete optimization performed by its predecessor, firmware and software upgrades will make it easier to use the system for sampling applications.