Amazing Science
Tag: "computers"
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald

The Current State of Machine Intelligence

A few years ago, investors and startups were chasing “big data”. Now we’re seeing a similar explosion of companies calling themselves artificial intelligence, machine learning, or collectively “machine intelligence”. The Bloomberg Beta fund, which is focused on the future of work, has been investing in these approaches.


Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been of a lesser focus).


Machine intelligence technologies cut across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks). All of these technologies are reflected on this landscape.


What this landscape doesn’t include, however important, is “big data” technologies. Some have used this term interchangeably with machine learning and artificial intelligence, but I want to focus on the intelligence methods rather than data, storage, and computation pieces of the puzzle for this landscape (though of course data technologies enable machine intelligence).


We’ve seen a few great articles recently outlining why machine intelligence is experiencing a resurgence and documenting its enabling factors. Kevin Kelly, for example, chalks it up to cheap parallel computing, large datasets, and better algorithms.


Machine intelligence is enabling applications we already expect like automated assistants (Siri), adorable robots (Jibo), and identifying people in images (like the highly effective but unfortunately named DeepFace). However, it’s also doing the unexpected: protecting children from sex trafficking, reducing the chemical content in the lettuce we eat, helping us buy shoes online that fit our feet precisely, and destroying ’80s classic video games.


Big companies have a disproportionate advantage, especially those that build consumer products. The giants in search (Google, Baidu), social networks (Facebook, LinkedIn, Pinterest), content (Netflix, Yahoo!), mobile (Apple) and e-commerce (Amazon) are in an incredible position. They have massive datasets and constant consumer interactions that enable tight feedback loops for their algorithms (and these factors combine to create powerful network effects) — and they have the most to gain from the low hanging fruit that machine intelligence bears.

Best-in-class personalization and recommendation algorithms have enabled these companies’ success (it’s both impressive and disconcerting that Facebook recommends you add the person you had a crush on in college and Netflix tees up that perfect guilty pleasure sitcom).

Now they are all competing in a new battlefield: the move to mobile. Winning mobile will require lots of machine intelligence: state-of-the-art natural language interfaces (like Apple’s Siri), visual search (like Amazon’s “FireFly”), and dynamic question answering technology that tells you the answer instead of providing a menu of links (all of the search companies are wrestling with this). Large enterprise companies (IBM and Microsoft) have also made incredible strides in the field, though they don’t have the same human-facing requirements, so they are focusing their attention more on knowledge representation tasks on large industry datasets, like IBM Watson’s application to assist doctors with diagnoses.
more...
John Vollenbroek's curator insight, April 25, 2:53 AM

I like this overview

pbernardon's curator insight, April 26, 2:33 AM

A clear and very interesting infographic and map of artificial intelligence and the resulting uses that organizations will have to make their own.

 

#bigdata 

Scooped by Dr. Stefan Gruenwald

Photon afterglow could transmit information without transmitting energy

Physicists have theoretically shown that it is possible to transmit information from one location to another without transmitting energy. Instead of using real photons, which always carry energy, the technique uses a small, newly predicted quantum afterglow of virtual photons that do not need to carry energy. Although no energy is transmitted, the receiver must provide the energy needed to detect the incoming signal—similar to the way that an individual must pay to receive a collect call.


The physicists, Robert H. Jonsson, Eduardo Martín-Martínez, and Achim Kempf, at the University of Waterloo (Martín-Martínez and Kempf are also with the Perimeter Institute), have published a paper on the concept in a recent issue of Physical Review Letters. Currently, any information transmission protocol also involves energy transmission. This is because these protocols use real photons to transmit information, and all real photons carry energy, so the information and energy are inherently intertwined.


Most of the time when we talk about electromagnetic fields and photons, we are talking about real photons. The light that reaches our eyes, for example, consists only of real photons, which carry both information and energy. However, all electromagnetic fields contain not only real photons, but also virtual photons, which can be thought of as "imprints on the quantum vacuum." The new discovery shows that, in certain circumstances, virtual photons that do not carry energy can be used to transmit information.


The physicists showed how to achieve this energy-less information transmission by doing two things: "First, we use quantum antennas, i.e., antennas that are in a quantum superposition of states," Kempf told Phys.org. "For example, with current quantum optics technology, atoms can be used as such antennas. Secondly, we use the fact that, when real photons are emitted (and propagate at the speed of light), the photons leave a small afterglow of virtual photons that propagate slower than light. This afterglow does not carry energy (in contrast to real photons), but it does carry information about the event that generated the light. Receivers can 'tap' into that afterglow, spending energy to recover information about light that passed by a long time ago."


The proposed protocol has another somewhat unusual requirement: it can only take place in spacetimes with dimensions in which virtual photons can travel slower than the speed of light. For instance, the afterglow would not occur in our 3+1 dimensional spacetime if spacetime were completely flat. However, our spacetime does have some curvature, and that makes the afterglow possible.


These ideas also have implications for cosmology. In a paper to be published in a future issue of Physical Review Letters, Martín-Martínez and collaborators A. Blasco, L. Garay, and M. Martin-Benito have investigated these implications. "In that work, it is shown that the afterglow of events that happened in the early Universe carries more information than the light that reaches us from those events," Martín-Martínez said. "This is surprising because, up until now, it has been believed that real quanta, such as real photons of light, are the only carriers of information from the early Universe."

Russell Roberts's curator insight, April 24, 12:59 AM

Interesting theoretical concept that could affect the design of antennas and other radio components.  Aloha de Russ (KH6JRM).

Scooped by Dr. Stefan Gruenwald

'Google Maps' for the body: A biomedical revolution down to a single cell

Scientists are using previously top-secret technology to zoom through the human body down to the level of a single cell. Scientists are also using cutting-edge microtome and MRI technology to examine how movement and weight bearing affects the movement of molecules within joints, exploring the relationship between blood, bone, lymphatics and muscle.


UNSW biomedical engineer Melissa Knothe Tate is using previously top-secret semiconductor technology to zoom through organs of the human body, down to the level of a single cell.


A world-first UNSW collaboration that uses previously top-secret technology to zoom through the human body down to the level of a single cell could be a game-changer for medicine, an international research conference in the United States has been told.


The imaging technology, developed by high-tech German optical and industrial measurement manufacturer Zeiss, was originally developed to scan silicon wafers for defects.


UNSW Professor Melissa Knothe Tate, the Paul Trainor Chair of Biomedical Engineering, is leading the project, which is using semiconductor technology to explore osteoporosis and osteoarthritis.


Using Google algorithms, Professor Knothe Tate -- an engineer and expert in cell biology and regenerative medicine -- is able to zoom in and out from the scale of the whole joint down to the cellular level "just as you would with Google Maps," reducing to "a matter of weeks analyses that once took 25 years to complete."


Her team is also using cutting-edge microtome and MRI technology to examine how movement and weight bearing affects the movement of molecules within joints, exploring the relationship between blood, bone, lymphatics and muscle. "For the first time we have the ability to go from the whole body down to how the cells are getting their nutrition and how this is all connected," said Professor Knothe Tate. "This could open the door to as yet unknown new therapies and preventions."


Professor Knothe Tate is the first to use the system in humans. She has forged a pioneering partnership with the US-based Cleveland Clinic, Brown and Stanford Universities, as well as Zeiss and Google to help crunch terabytes of data gathered from human hip studies. Similar research is underway at Harvard University and Heidelberg in Germany to map neural pathways and connections in the brains of mice.


The above story is based on materials provided by University of New South Wales.

CineversityTV's curator insight, March 30, 8:53 PM

What happens with the metadata? Does it stay in the public domain, or end up in the greedy hands of the elite?

Courtney Jones's curator insight, April 2, 4:49 AM

New advances in biomedical technology

Scooped by Dr. Stefan Gruenwald

Brain in your pocket: Smartphone replaces thinking, study shows

In the ancient world — circa, say, 2007 — terabytes of information were not available on sleekly designed devices that fit in our pockets. While we now can turn to iPhones and Samsung Galaxys to quickly access facts both essential and trivial — the fastest way to grandmother’s house, how many cups are in a gallon, the name of the actor who played Newman on “Seinfeld” — we once had to keep such tidbits in our heads or, perhaps, in encyclopedia sets.


With the arrival of the smartphone, such dusty tomes are unnecessary. But new research suggests our devices are more than a convenience — they may be changing the way we think. In “The brain in your pocket: Evidence that Smartphones are used to supplant thinking,” forthcoming from the journal Computers in Human Behavior, lead authors Nathaniel Barr and Gordon Pennycook of the psychology department at the University of Waterloo in Ontario said those who think more intuitively and less analytically are more likely to rely on technology.


“That people typically forego effortful analytic thinking in lieu of fast and easy intuition suggests that individuals may allow their Smartphones to do their thinking for them,” the authors wrote.


What’s the difference between intuitive and analytical thinking? In the paper, the authors cite this problem: “A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?”


The brain-teaser evokes an intuitive response: The ball must cost 10 cents, right? This response, unfortunately, is obviously wrong — 10 cents plus $1.10 equals $1.20, not $1.10. Only through analytic thinking can one arrive at the correct response: The ball costs 5 cents. (Confused? Five cents plus $1.05 equals $1.10.)
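
To make the arithmetic explicit, here is a quick sanity check of both answers; it is only a minimal sketch, and working in cents is an illustrative choice to avoid floating-point rounding.

```python
# Bat-and-ball check, working in cents.
# Let ball = b cents; the bat costs b + 100; together they cost 110 cents.
#   b + (b + 100) = 110  ->  2b = 10  ->  b = 5
ball = (110 - 100) // 2      # 5 cents
bat = ball + 100             # 105 cents
assert ball + bat == 110     # the analytic answer adds up to $1.10

# The intuitive answer (ball = 10 cents) fails the same check:
assert 10 + (10 + 100) == 120   # totals $1.20, not $1.10
```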


It’s just this sort of analytical thinking that avid smartphone users seem to avoid. For the paper, researchers asked subjects how much they used their smartphones, then gave them tests to measure not just their intelligence, but how they processed information.

Scooped by Dr. Stefan Gruenwald

Google collaborates with UCSB to build a quantum device that detects and corrects its own errors

Google launches an effort to build its own quantum computer that has the potential to change computing forever. Google is about to begin designing and building hardware for a quantum computer, a type of machine that can exploit quantum physics to solve problems that would take a conventional computer millions of years. Since 2009, Google has been working with controversial startup D-Wave Systems, which claims to make “the first commercial quantum computer.” Last year, Google purchased one of D-Wave’s machines to be able to test the machine thoroughly. But independent tests published earlier this year found no evidence that D-Wave’s computer uses quantum physics at all to solve problems more efficiently than a conventional machine.


Now, John Martinis, a professor at University of California, Santa Barbara, has joined Google to establish a new quantum hardware lab near the university. He will try to make his own versions of the kind of chip inside a D-Wave machine. Martinis has spent more than a decade working on a more proven approach to quantum computing, and built some of the largest, most error-free systems of qubits, the basic building blocks that encode information in a quantum computer.


“We would like to rethink the design and make the qubits in a different way,” says Martinis of his effort to improve on D-Wave’s hardware. “We think there’s an opportunity in the way we build our qubits to improve the machine.” Martinis has taken a joint position with Google and UCSB that will allow him to continue his own research at the university.


Quantum computers could be immensely faster than any existing computer at certain problems. That’s because qubits working together can use the quirks of quantum mechanics to quickly discard incorrect paths to a solution and home in on the correct one. However, qubits are tricky to operate because quantum states are so delicate.


Chris Monroe, a professor who leads a quantum computing lab at the University of Maryland, welcomed the news that one of the leading lights in the field was going to work on the question of whether designs like D-Wave’s can be useful. “I think this is a great development to have legitimate researchers give it a try,” he says.


Since showing off its first machine in 2007, D-Wave has irritated academic researchers by making claims for its computers without providing the evidence its critics say is needed to back them up. However, the company has attracted over $140 million in funding and sold several of its machines (see “The CIA and Jeff Bezos Bet on Quantum Computing”).


There is no question that D-Wave’s machine can perform certain calculations. And research published in 2011 showed that the machine’s chip harbors the right kind of quantum physics needed for quantum computing. But evidence is lacking that it uses that physics in the way needed to unlock the huge speedups promised by a quantum computer. It could be solving problems using only ordinary physics.


Martinis’s previous work has been focused on the conventional approach to quantum computing. He set a new milestone in the field this April, when his lab announced that it could operate five qubits together with relatively low error rates. Larger systems of such qubits could be configured to run just about any kind of algorithm depending on the problem at hand, much like a conventional computer. To be useful, a quantum computer would probably need to be built with tens of thousands of qubits or more.


Martinis was a coauthor on a paper published in Science earlier this year that took the most rigorous independent look at a D-Wave machine yet. It concluded that in the tests run on the computer, there was “no evidence of quantum speedup.” Without that, critics say, D-Wave is nothing more than an overhyped, and rather weird, conventional computer. The company counters that the tests of its machine involved the wrong kind of problems to demonstrate its benefits.


Martinis’s work on D-Wave’s machine led him into talks with Google, and to his new position. Theory and simulation suggest that it might be possible for annealers to deliver quantum speedups, and he considers it an open question. “There’s some really interesting science that people are trying to figure out,” he says.

Benjamin Chiong's curator insight, March 23, 7:23 PM

Looking at Amdahl's law, it is not only data storage that matters but every component of the computer. As each piece of hardware advances, the rest of the parts should be able to keep up as well. Quantum computing forges a world that allows massive processing power to analyze big data. This gives us an idea of what the future will look like.

Scooped by Dr. Stefan Gruenwald

First general learning system that can learn directly from experience to master a wide range of challenging tasks

The gamer punches in play after endless play of the Atari classic Space Invaders. Through an interminable chain of failures, the gamer adapts the gameplay strategy to reach for the highest score. But this is no human with a joystick in a 1970s basement. Artificial intelligence is learning to play Atari games. The Atari addict is a deep-learning algorithm called DQN.


This algorithm began with no previous information about Space Invaders—or, for that matter, the other 48 Atari 2600 games it is learning to play and sometimes master after two straight weeks of gameplay. In fact, it wasn't even designed to take on old video games; it is a general-purpose, self-teaching computer program. Yet after watching the Atari screen and fiddling with the controls over two weeks, DQN is playing at a level that would humiliate even a professional flesh-and-blood gamer.


Volodymyr Mnih and his team of computer scientists at Google, who have just unveiled DQN in the journal Nature, say their creation is more than just an impressive gamer. Mnih says the general-purpose DQN learning algorithm could be the first rung on a ladder to artificial intelligence.


"This is the first time that anyone has built a single general learning system that can learn directly from experience to master a wide range of challenging tasks," says Demis Hassabis, a member of Google's team. The algorithm runs on little more than a powerful desktop PC with a souped-up graphics card. At its core, DQN combines two separate advances in machine learning in a fascinating way. The first advance is a type of positive-reinforcement learning method called Q-learning. This is where DQN, or Deep Q-Network, gets its middle initial. Q-learning means that DQN is constantly trying to make joystick and button-pressing decisions that will get it closer to a property that computer scientists call "Q." In simple terms, Q is what the algorithm approximates to be the biggest possible future reward for each decision. For Atari games, that reward is the game score.


Knowing what decisions will lead it to the high scorer's list, though, is no simple task. Keep in mind that DQN starts with zero information about each game it plays. To understand how to maximize your score in a game like Space Invaders, you have to recognize a thousand different facts: how the pixelated aliens move, the fact that shooting them gets you points, when to shoot, what shooting does, the fact that you control the tank, and many more assumptions, most of which a human player understands intuitively. And then, if the algorithm changes to a racing game, a side-scroller, or Pac-Man, it must learn an entirely new set of facts. That's where the second machine learning advance comes in. DQN is also built upon a vast, partially human-brain-inspired artificial neural network. Simply put, the neural network is a complex program built to process and sort information from noise. It tells DQN what is and isn't important on the screen.
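
The article stays at the conceptual level, but the Q-learning half of the idea can be sketched in a few lines. The table-based version below uses made-up hyperparameters and is only an illustration; DeepMind's DQN replaces the lookup table with a deep neural network trained on raw screen pixels, which is the second advance described above.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch (illustrative only; DQN approximates this
# table with a deep neural network fed by screen pixels).
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # assumed learning rate, discount, exploration rate
Q = defaultdict(float)                   # Q[(state, action)] -> estimated future reward

def choose_action(state, actions):
    # Epsilon-greedy policy: mostly exploit the best-known action, sometimes explore.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    # Nudge Q(state, action) toward the observed reward plus the best
    # discounted estimate of future reward available from the next state.
    best_next = max(Q[(next_state, a)] for a in actions)
    target = reward + GAMMA * best_next
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
```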


Nature Video of DQN AI

Scooped by Dr. Stefan Gruenwald

New Artificial Lighting Tricks Human Brain into Seeing Sunlight

Access to natural daylight has long been one of the biggest limiting factors in building design – some solutions involve reflecting real daylight from the outdoors, but until now no solution has been able to mimic natural refraction processes and fool our minds into thinking we are surrounded by actual sunlight. Developed by CoeLux in Italy, this new form of artificial light is able to dupe humans, cameras and computers alike using a thin coating of nanoparticles to simulate Rayleigh scattering, a natural process that takes place in Earth’s atmosphere causing diffuse sky radiation. It was not enough to make the lights brighter or bluer – variegation and other elements were needed as well.
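
Rayleigh scattering is why the trick works: scattering strength rises steeply as wavelength shrinks, so a nanoparticle film over a warm light source scatters blue light preferentially, much as air molecules do in a real sky. A minimal sketch of that wavelength dependence (the wavelengths below are illustrative choices):

```python
# Rayleigh scattering intensity scales as 1 / wavelength**4, so short (blue)
# wavelengths scatter far more strongly than long (red) ones.
def relative_rayleigh(wavelength_nm: float, reference_nm: float = 550.0) -> float:
    """Scattering strength relative to a reference wavelength (green light)."""
    return (reference_nm / wavelength_nm) ** 4

blue, red = 450.0, 650.0                 # rough wavelengths in nanometers
print(relative_rayleigh(blue))           # ~2.2x stronger than green
print(relative_rayleigh(red))            # ~0.5x as strong as green
print(relative_rayleigh(blue) / relative_rayleigh(red))  # blue scatters ~4.4x more than red
```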


The result is an effect that carries the same qualities we are used to experiencing outside, from color to light quality. The company also boasts that these photos are untouched and that their fake skylights in showrooms fool people in person just as effectively, appearing to have infinite depth just like one would expect looking up into the sky.


The potential applications are effectively endless, from lighting deep indoor spaces to replacing natural light in places where winters drag on and daylight hours are short. The company sees opportunities in areas like healthcare facilities where it may not be possible to put patients near real windows for spatial or health reasons. Currently, three lighting types are on offer to simulate various broad regions – Mediterranean, Tropical and Nordic – featuring various balances of light, shade, hue and contrast. They are also working on additional offerings, including simulated daytime sequences (sunrise through sunset) and color variations to reflect different kinds of weather conditions.

Scooped by Dr. Stefan Gruenwald

The future of electronics could ultimately lead to electrical conductors that are 100% efficient

The future of electronics could lie in a material from its past, as researchers from The Ohio State University work to turn germanium—the material of 1940s transistors—into a potential replacement for silicon. At the American Association for the Advancement of Science meeting, assistant professor of chemistry Joshua Goldberger reported progress in developing a form of germanium called germanane.


In 2013, Goldberger’s lab at Ohio State became the first to succeed at creating a one-atom-thick sheet of germanane—a sheet so thin, it can be thought of as two-dimensional. Since then, he and his team have been tinkering with the atomic bonds across the top and bottom of the sheet, and creating hybrid versions of the material that incorporate other atoms such as tin.


The goal is to make a material that not only transmits electrons 10 times faster than silicon, but is also better at absorbing and emitting light—a key feature for the advancement of efficient LEDs and lasers. “We’ve found that by tuning the nature of these bonds, we can tune the electronic structure of the material. We can increase or decrease the energy it absorbs,” Goldberger said. “So potentially we could make a material that traverses the entire electromagnetic spectrum, or absorbs different colors, depending on those bonds.”


As they create the various forms of germanane, the researchers are trying to exploit traditional silicon manufacturing methods as much as possible, to make any advancements easily adoptable by industry.


Aside from these traditional semiconductor applications, there have been numerous predictions that a tin version of the material could conduct electricity with 100 percent efficiency at room temperature. The heavier tin atom allows the material to become a 2D “topological insulator,” which conducts electricity only at its edges, Goldberger explained. Such a material is predicted to occur only with specific bonds across the top and bottom surface, such as a hydroxide bond.


Goldberger’s lab has verified that this theoretical material can be chemically stable. His lab has created germanane with up to 9 percent tin atoms incorporated, and shown that the tin atoms have a strong preference to bond to hydroxide above and below the sheet. His group is currently developing routes toward preparing the pure tin 2D derivatives.


Scooped by Dr. Stefan Gruenwald

#WeAreNotWaiting: Confessions of a Diabetes Hacker

There is a quiet hacking revolution taking place in the Type 1 diabetes community. You can identify its followers by the popular hashtag #WeAreNotWaiting on Twitter. There are currently thousands of individuals running an app known as Nightscout to upload real-time blood glucose readings from their Dexcom continuous glucose monitors to their own private servers. This allows hundreds of parents to keep close watch over the health of children with Type 1. While this kind of customized surveillance can be done with current diabetes technology, it has yet to be approved by the FDA. Many have decided that we are not waiting for that agency’s blessing.


Stephen Black reports: "I was diagnosed with Type 1 four months ago. At that time, I knew nothing about diabetes. I was in disbelief when I discovered that if I wanted to see my glucose levels in real time, I would need to carry around an extra, bulky device in my pocket. If I wanted to see that data anywhere else, I would need to plug it into a computer and upload it. If a loved one wanted to check in to see if I was doing alright, they would need to call me and hope I answered. This seemed anachronistic in the wireless age. I promptly got to work on a project I have dubbed DexDrip, a wireless bluetooth bridge that would allow real-time blood glucose readings from a sensor to be delivered straight to my phone. I started by researching the Nightscout project. After doing some digging, I soon discovered an underground community of individuals working on similar projects to mine. Skirting just outside the peripheral vision of the FDA, they are working on all sorts of projects, including their own closed-loop artificial pancreas systems. Some are working in groups, others are working alone, but all share the same goal of making lives for people with diabetes easier, better, and (although they are taking personal risks) safer. Finding these people was difficult because many want to remain anonymous, but I was amazed by the community; everyone was there to help me. Once I discovered how to intercept transmissions from my Dexcom transmitter, it was pretty straightforward to steer the signal to my smartphone. The math presents a bigger challenge. Every 12 hours, the Dexcom receiver asks the user to enter his or her current blood glucose value so it can recalibrate. I couldn’t find a way to work around this automatic recalibration request. If I was going to cut the receiver out of the equation, I would need to write my own calibration algorithm."
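
Black does not describe his calibration algorithm in this excerpt. A common starting point for this kind of problem is a simple linear fit between raw sensor counts and fingerstick readings, sketched below with invented numbers; real continuous-glucose-monitor calibration is considerably more involved.

```python
# Toy linear calibration: estimate glucose (mg/dL) = slope * raw + intercept
# from a few paired (raw sensor value, fingerstick reading) samples.
# All sample values here are invented for illustration.
def fit_calibration(raw_values, fingersticks):
    n = len(raw_values)
    mean_x = sum(raw_values) / n
    mean_y = sum(fingersticks) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(raw_values, fingersticks))
    var = sum((x - mean_x) ** 2 for x in raw_values)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = fit_calibration([120_000, 150_000, 180_000], [80, 120, 160])
estimate = slope * 165_000 + intercept   # estimated glucose for a new raw reading
print(round(estimate))                   # 140 mg/dL with these made-up numbers
```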


Read Stephen Black's full report here

Scooped by Dr. Stefan Gruenwald

Semiconductor Engineering — Will 7nm and 5nm Really Happen?

New materials and transistors could extend Moore’s Law to 1.5nm or beyond, but there are a lot of problems ahead and a lot of unanswered questions.


As leading-edge chipmakers continue to ramp up their 28nm and 20nm devices, vendors are also updating their future technology roadmaps. In fact, IC makers are talking about their new shipment schedules for 10nm. And GlobalFoundries, Intel, Samsung and TSMC are narrowing down the options for 7nm, 5nm and beyond.


There is a high probability that IC makers can scale to 10nm, but vendors face a multitude of challenges at 7nm and beyond. The big question is whether the 7nm node will ever happen. And is 5nm even possible? The 3nm node is too far out in the future and is still up in the air.


If the industry moves beyond 10nm, it won’t be a straightforward process of simply scaling the gate length, as in previous nodes. The migration to 7nm itself requires a monumental and expensive shift towards new transistor architectures, channel materials and interconnects. It also involves the development of new fab tools and materials, which are either immature or don’t exist today.


Technically, it’s possible to make 7nm and 5nm chips in R&D. One challenge is to design and manufacture devices that meet the cost and power requirements for systems. Another challenge is to make the right technology choices, as the roadmap for the various options remains in flux.


Indeed, in previous roadmaps from many organizations, the leading transistor candidate has been the high-mobility or III-V finFET at 7nm, followed by a next-generation transistor type at 5nm.


Now, the options are all over the map. For example, according to Imec’s latest roadmap, III-V finFETs may get pushed out to 5nm, although they could still appear at 7nm. And a next-generation transistor could arrive as early as 7nm, according to Imec.

Scooped by Dr. Stefan Gruenwald

Mastering multicore: Parallelizing common algorithms

Every undergraduate computer-science major takes a course on data structures, which describes different ways of organizing data in a computer’s memory. Every data structure has its own advantages: Some are good for fast retrieval, some for efficient search, some for quick insertions and deletions, and so on.


Today, hardware manufacturers are making computer chips faster by giving them more cores, or processing units. But while some data structures are well adapted to multicore computing, others are not. In principle, doubling the number of cores should double the efficiency of a computation. With algorithms that use a common data structure called a priority queue, that’s been true for up to about eight cores — but adding any more cores actually causes performance to plummet.


At the Association for Computing Machinery’s Symposium on Principles and Practice of Parallel Programming in February, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory will describe a new way of implementing priority queues that lets them keep pace with the addition of new cores. In simulations, algorithms using their data structure continued to demonstrate performance improvement with the addition of new cores, up to a total of 80 cores.


A priority queue is a data structure that, as its name might suggest, sequences data items according to priorities assigned them when they’re stored. At any given time, only the item at the front of the queue — the highest-priority item — can be retrieved. Priority queues are central to the standard algorithms for finding the shortest path across a network and for simulating events, and they’ve been used for a host of other applications, from data compression to network scheduling.


With multicore systems, however, conflicts arise when multiple cores try to access the front of a priority queue at the same time. The problem is compounded by modern chips’ reliance on caches — high-speed memory banks where cores store local copies of frequently used data.
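
For readers who have not met the data structure, the sketch below shows a minimal single-threaded priority queue using Python's standard heapq module. The MIT work concerns the concurrent version of exactly this structure, where many cores contend for the front of the queue; this sketch makes no attempt at that.

```python
import heapq

# Minimal single-threaded priority queue: the lowest priority value comes out first.
pq = []
heapq.heappush(pq, (2, "simulate event B"))
heapq.heappush(pq, (1, "simulate event A"))
heapq.heappush(pq, (3, "simulate event C"))

while pq:
    priority, task = heapq.heappop(pq)   # always retrieves the front (highest-priority) item
    print(priority, task)                # prints 1 A, then 2 B, then 3 C
```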

Scooped by Dr. Stefan Gruenwald

Top Rated Electronic Health Record Software Is Free

Earlier this month, Medscape published the results of their recent survey (here) which asked 18,575 physicians across 25 specialties to rate their Electronic Health Record (EHR) system. For overall satisfaction, the #1 ranked EHR solution was the VA’s Computerized Patient Record System ‒ also known as VistA. It was built using open‒source software and is therefore license free.


There’s also a publicly available version of VistA called OpenVista and several companies leverage a services-only business model for larger OpenVista installations. For smaller installations, a YouTube video (here) suggests the free OpenVista software can be installed in about 10 minutes ‒ bring your own hardware.


Of course free software licensing doesn’t make the hardware, installation or maintenance free, but the lack of any software licensing fees at all does reduce the overall cost ‒ especially for large installations ‒ and that can typically save millions of dollars.


Open-source software also charts a much different course for design changes that are not dependent on the resources, budgets (or revenue requirements) of independent software vendors (ISV’s).


In many ways, VistA’s top rating is no surprise because it’s the only EHR installation in the U.S. with a truly national footprint. As a single software solution, VistA is designed to support almost 9 million vets through about 1,700 different care sites around the country.

Kris Prendergast's curator insight, March 9, 10:01 AM

How vendor lock-in drains customers

Scooped by Dr. Stefan Gruenwald

Wireless technology more than 10 times faster than the best Wi-Fi is coming to market in 2015

Smartphones, tablets and PCs should appear this year that can send and receive data wirelessly more than 10 times faster than a Wi-Fi connection. As well as transferring videos and other large files in a flash, this could do away with the cables used to hook PCs up to displays or projectors.


The wireless technology that will allow this is known as 60 gigahertz—after the radio frequency it uses—and by the name “WiGig.” Computing giants including Apple, Microsoft, and Sony have quietly collaborated on the new standard for years, and a handful of products featuring WiGig are already available. But the technology will get a big push this year, with several companies bringing products featuring WiGig to market.


WiGig carries data much faster than Wi-Fi because its higher frequency radio signal can be used to encode more information. The maximum speed of a wireless channel using the current 60 gigahertz protocol is seven gigabits per second (in perfect conditions). That compares to the 433 megabits per second possible via a single channel using the most advanced Wi-Fi protocol in use today, which transmits at 5 gigahertz. Most Wi-Fi networks use less advanced technology that operates even slower.
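
To put those peak rates in perspective, a back-of-the-envelope comparison for a hypothetical 5 GB file (assuming ideal conditions, which real networks never deliver) looks like this:

```python
# Rough transfer-time comparison at the quoted peak channel rates.
# Assumes ideal conditions and a hypothetical 5 GB file; real-world
# throughput is lower for both technologies.
FILE_GB = 5
file_bits = FILE_GB * 8e9            # gigabytes -> bits (decimal units)

wigig_bps = 7e9                      # 7 gigabits per second (60 GHz channel)
wifi_bps = 433e6                     # 433 megabits per second (one advanced Wi-Fi channel)

print(file_bits / wigig_bps)         # ~5.7 seconds over WiGig
print(file_bits / wifi_bps / 60)     # ~1.5 minutes over the single Wi-Fi channel
```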


Qualcomm, a leading maker of mobile device processors and wireless chips, has invested heavily in WiGig. At the International Consumer Electronics Show in Las Vegas this month, the company demonstrated a wireless router for home or office use with the technology built in. That device will go on sale by the end of 2015.


Qualcomm has also designed the latest in its line of Snapdragon mobile processors to support WiGig. The “reference designs” Qualcomm shows to customers include its 60-gigahertz wireless chips, and the first devices built using the Snapdragon 810 processor are expected to go on sale in mid-2015. At CES, Qualcomm showed tablets built with that processor using WiGig to transfer video.


Those working on WiGig technology predict that demand for high definition video will make the technology necessary. The latest smartphones now record video at extremely high resolution. Grodzinsky says WiGig will start appearing in set-top boxes, making it easier to stream content from mobile devices to high definition TVs, or upload it to the Internet. Qualcomm calculates that its WiGig technology will make it possible to transfer a full-length HD movie in just three minutes.


Besides Qualcomm, Intel is preparing its own WiGig technology, and the company said at its annual developer conference last summer that WiGig chips would appear in laptops in 2015. In demos then and at CES this month, Intel showed a laptop using WiGig to connect with displays and other peripheral devices.


Samsung also expects to launch WiGig products this year. The company announced late in 2014 that it had developed its own implementation, and said it expected to commercialize it in 2015. The technology will appear in Samsung’s mobile, health-care, and smart home products.

Scooped by Dr. Stefan Gruenwald

1 Billion Total Websites - Internet Live Stats

How many websites are there on the Web? Number of websites by year and growth from 1991 to 2015. Historical count and popular websites starting from the first website until today. Charts, real time counter, and interesting info.


After reaching 1 billion websites in September of 2014, a milestone confirmed by NetCraft in its October 2014 Web Server Survey and that Internet Live Stats was the first to announce - as attested by this tweet from the inventor of the World Wide Web himself, Tim Berners-Lee - the number of websites in the world has subsequently declined, reverting back to a level below 1 billion. This is due to the monthly fluctuations in the count of inactive websites. We do expect, however, to exceed 1 billion websites again sometime in 2015 and to stabilize the count above this historic milestone in 2016.


Curious facts
  • The first-ever website (info.cern.ch) was published on August 6, 1991 by British physicist Tim Berners-Lee while at CERN, in Switzerland. [2] On April 30, 1993 CERN made World Wide Web ("W3" for short) technology available on a royalty-free basis to the public domain, allowing the Web to flourish.[3]
  • The World Wide Web was invented in March of 1989 by Tim Berners-Lee (see the original proposal). He also introduced the first web server, the first browser and editor (the “WorldWideWeb.app”), the Hypertext Transfer Protocol (HTTP) and, in October 1990, the first version of the "HyperText Markup Language" (HTML).[4]
  • In 2013 alone, the web grew by more than one third: from about 630 million websites at the start of the year to over 850 million by December 2013 (of which 180 million were active).
  • Over 50% of websites today are hosted on either Apache or nginx (54% of the total as of February 2015, according to NetCraft), both open source web servers.[5] After getting very close and even briefly taking the lead in July of 2014 in terms of server market share, Microsoft has recently fallen back behind Apache. As of February 2015, 39% of servers are hosted on Apache and 29% on Microsoft. However, if the overall trend continues, a few years from now Microsoft could become the leading web server developer for the first time in history.
Scooped by Dr. Stefan Gruenwald

Probabilistic programming does in 50 lines of code what used to take thousands

Most recent advances in artificial intelligence—such as mobile apps that convert speech to text—are the result of machine learning, in which computers are turned loose on huge data sets to look for patterns.


To make machine-learning applications easier to build, computer scientists have begun developing so-called probabilistic programming languages, which let researchers mix and match machine-learning techniques that have worked well in other contexts. In 2013, the U.S. Defense Advanced Research Projects Agency, an incubator of cutting-edge technology, launched a four-year program to fund probabilistic-programming research.


At the Computer Vision and Pattern Recognition conference in June, MIT researchers will demonstrate that on some standard computer-vision tasks, short programs—less than 50 lines long—written in a probabilistic programming language are competitive with conventional systems with thousands of lines of code. "This is the first time that we're introducing probabilistic programming in the vision area," says Tejas Kulkarni, an MIT graduate student in brain and cognitive sciences and first author on the new paper. "The whole hope is to write very flexible models, both generative and discriminative models, as short probabilistic code, and then not do anything else. General-purpose inference schemes solve the problems."


By the standards of conventional computer programs, those "models" can seem absurdly vague. One of the tasks that the researchers investigate, for instance, is constructing a 3-D model of a human face from 2-D images. Their program describes the principal features of the face as being two symmetrically distributed objects (eyes) with two more centrally positioned objects beneath them (the nose and mouth). It requires a little work to translate that description into the syntax of the probabilistic programming language, but at that point, the model is complete. Feed the program enough examples of 2-D images and their corresponding 3-D models, and it will figure out the rest for itself.


"When you think about probabilistic programs, you think very intuitively when you're modeling," Kulkarni says. "You don't think mathematically. It's a very different style of modeling." The new work, Kulkarni says, revives an idea known as inverse graphics, which dates from the infancy of artificial-intelligence research. Even though their computers were painfully slow by today's standards, the artificial intelligence pioneers saw that graphics programs would soon be able to synthesize realistic images by calculating the way in which light reflected off of virtual objects. This is, essentially, how Pixar makes movies.
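
The code below is not the MIT system, only a toy illustration of the programming style the article describes: write a short generative model, then hand inference to a generic routine (here, naive rejection sampling over a single latent value).

```python
import random

# Toy flavor of probabilistic programming: describe a generative model, then let
# a generic inference routine recover the latent cause from an observation.
# Purely illustrative; not the MIT vision system described above.

def generative_model():
    """Draw a latent 'true size', then render a noisy observation of it."""
    true_size = random.uniform(0.0, 10.0)          # latent scene parameter
    observed = true_size + random.gauss(0.0, 0.5)  # noisy rendering/measurement
    return true_size, observed

def infer(observation, samples=100_000, tolerance=0.25):
    """Generic inference: keep latent values whose rendering matches the data."""
    kept = [size for size, obs in (generative_model() for _ in range(samples))
            if abs(obs - observation) < tolerance]
    return sum(kept) / len(kept) if kept else None

print(infer(4.2))   # posterior mean close to 4.2
```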

Russell R. Roberts, Jr.'s curator insight, April 14, 12:14 AM

Machine learning is advancing the development of artificial intelligence (AI) by "leaps and bounds." The key to this remarkable advance in AI is the use of probabilistic programming to crunch huge amounts of data and then see if there are patterns. Futuristic stuff that's happening today. Aloha, Russ.

Scooped by Dr. Stefan Gruenwald

Could analog computing accelerate highly complex scientific computer simulations?

DARPA announced today, March 19, a Request for Information (RFI) on methods for using analog approaches to speed up computation of the complex mathematics that characterize scientific computing. “The standard digital computer cluster equipped with multiple central processing units (CPUs), each programmed to tackle a particular piece of a problem, is just not designed to solve the kinds of equations at the core of large-scale simulations, such as those describing complex fluid dynamics and plasmas,” said Vincent Tang, program manager in DARPA’s Defense Sciences Office.


These critical equations, known as partial differential equations, describe fundamental physical principles like motion, diffusion, and equilibrium, he notes. But they involve continuous rates of change over a large range of physical parameters relating to the problems of interest, so they don’t lend themselves to being broken up and solved in discrete pieces by individual CPUs. Examples of such problems include predicting the spread of an epidemic, understanding the potential impacts of climate change, or modeling the acoustical signature of a newly designed ship hull.


What if there were a processor specially designed for such equations? What might it look like? Analog computers solve equations by manipulating continuously changing values instead of discrete digital measurements, and have been around for more than a century. In the 1930s, for example, Vannevar Bush—who a decade later would help initiate and administer the Manhattan Project—created an analog “differential analyzer” that computed complex integrations through the use of a novel wheel-and-disc mechanism.
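
Bush's differential analyzer integrated continuously with wheels and discs; the closest digital analogue is stepping a differential equation forward in tiny increments. The sketch below integrates dy/dt = -y with Euler steps purely as an illustration; it is not an equation from the DARPA RFI.

```python
# Digital emulation of what a differential analyzer does mechanically:
# accumulate the integral of dy/dt = -y in many small steps.
# The equation, step size, and end time are illustrative choices.
def integrate(y0=1.0, dt=1e-3, t_end=5.0):
    y, t = y0, 0.0
    while t < t_end:
        y += -y * dt        # Euler step: dy = f(y) * dt
        t += dt
    return y

print(integrate())          # ~0.0067, close to the exact answer exp(-5)
```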


Their potential to excel at dynamical problems too challenging for today’s digital processors may today be bolstered by other recent breakthroughs, including advances in microelectromechanical systems, optical engineering, microfluidics, metamaterials and even approaches to using DNA as a computational platform. So it’s conceivable, Tang said, that novel computational substrates could exceed the performance of modern CPUs for certain specialized problems, if they can be scaled and integrated into modern computer architectures.


DARPA’s RFI is called Analog and Continuous-variable Co-processors for Efficient Scientific Simulation (ACCESS), available here: http://go.usa.gov/3CV43. The RFI seeks new processing paradigms that have the potential to overcome current barriers in computing performance. “In general, we’re interested in information on all approaches, analog, digital, or hybrid ones, that have the potential to revolutionize how we perform scientific simulations,” Tang said.

Scooped by Dr. Stefan Gruenwald

Researchers Create A Simulated Mouse Brain in a Virtual Mouse Body

Scientist Marc-Oliver Gewaltig and his team at the Human Brain Project (HBP) built a model mouse brain and a model mouse body, integrating them both into a single simulation and providing a simplified but comprehensive model of how the body and the brain interact with each other. "Replicating sensory input and motor output is one of the best ways to go towards a detailed brain model analogous to the real thing," explains Gewaltig.


As computing technology improves, their goal is to build the tools and the infrastructure that will allow researchers to perform virtual experiments on mice and other virtual organisms. This virtual neurorobotics platform is just one of the collaborative interfaces being developed by the HBP. A first version of the software will be released to collaborators in April. The HBP scientists used biological data about the mouse brain collected by the Allen Brain Institute in Seattle and the Biomedical Informatics Research Network in San Diego. These data contain detailed information about the positions of the mouse brain's 75 million neurons and the connections between different regions of the brain. They integrated this information with complementary data on the shapes, sizes and connectivity of specific types of neurons collected by the Blue Brain Project in Geneva.


A simplified version of the virtual mouse brain (just 200,000 neurons) was then mapped to different parts of the mouse body, including the mouse's spinal cord, whiskers, eyes and skin. For instance, touching the mouse's whiskers activated the corresponding parts of the mouse sensory cortex. And they expect the models to improve as more data comes in and gets incorporated. For Gewaltig, building a virtual organism is an exercise in data integration. By bringing together multiple sources of data of varying detail into a single virtual model and testing this against reality, data integration provides a way of evaluating – and fostering – our own understanding of the brain. In this way, he hopes to provide a big picture of the brain by bringing together separated data sets from around the world. Gewaltig compares the exercise to the 15th century European data integration projects in geography, when scientists had to patch together known smaller scale maps. These first attempts were not to scale and were incomplete, but the resulting globes helped guide further explorations and the development of better tools for mapping the Earth, until reaching today's precision.


Read more: https://www.humanbrainproject.eu
Human Brain Project: http://www.humanbrainproject.eu
NEST simulator software: http://nest-simulator.org/
Largest neuronal network simulation using NEST: http://bit.ly/173mZ5j

Open Source Data Sets:
Allen Institute for Brain Science: http://www.brain-map.org
Bioinformatics Research Network (BIRN): http://www.birncommunity.org

The Behaim Globe : 
Germanisches National Museum, http://www.gnm.de/
Department of Geodesy and Geoinformation, TU Wien, http://www.geo.tuwien.ac.at

Scooped by Dr. Stefan Gruenwald

Dr. Google joins Mayo Clinic

The deal to produce clinical summaries under the Mayo Clinic name for Google searches symbolizes the medical priesthood's acceptance that information technology has reshaped the doctor-patient relationship. More disruptions are already on the way.


If information is power, digitized information is distributed power. While “patient-centered care” has been directed by professionals towards patients, collaborative health – what some call “participatory medicine” or “person-centric care” ­– shifts the perspective from the patient outwards.


Collaboration means sharing. At places like Mayo and Houston’s MD Anderson Cancer Center, the doctor’s detailed notes, long seen only by other clinicians, are available through a mobile app for patients to see when they choose and share how they wish. mHealth makes the process mundane, while the content makes it an utterly radical act.


About 5 million patients nationwide currently have electronic access to open notes. Boston’s Beth Israel Deaconess Medical Center and a few other institutions are planning to allow patients to make additions and corrections to what they call “OurNotes.” Not surprisingly, many doctors remain mortified by this medical sacrilege.


Even more threatening is an imminent deluge of patient-generated health data churned out by a growing list of products from major consumer companies. Sensors are being incorporated into wearables, watches, smartphones and (in a Ford prototype) even a “car that cares” with biometric sensors in the seat and steering wheel. Sitting in your car suddenly becomes telemedicine.


To be sure, traditional information channels remain. For example, a doctor-prescribed, Food and Drug Administration-approved app uses sensors and personalized analytics to prevent severe asthma attacks. Increasingly common, though, is digitized data that doesn’t need a doctor at all. For example, a Microsoft fitness band not only provides constant heart rate monitoring, according to a New York Times review, but is part of a health “platform” employing algorithms to deliver “actionable information” and contextual analysis. By comparison, “Dr. Google” belongs in a Norman Rockwell painting.

Scooped by Dr. Stefan Gruenwald

Brain makes decisions with same method used to break WW2 Enigma code

When making simple decisions, neurons in the brain apply the same statistical trick used by Alan Turing to help break Germany’s Enigma code during World War II, according to a new study in animals by researchers at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute and Department of Neuroscience. Results of the study were published Feb. 5 in Neuron.


As depicted in the film “The Imitation Game,” Alan Turing and his team of codebreakers devised the statistical technique to help them decipher German military messages encrypted with the Enigma machine. The technique today is called Wald’s sequential probability ratio test, after Columbia professor Abraham Wald, who independently developed the test to determine if batches of munitions should be shipped to the front or if they contained too many duds.


Finding pairs of messages encrypted with the same Enigma settings was critical to unlocking the code. Turing’s statistical test, in essence, decided as efficiently as possible if any two messages were a pair.


The test evaluated corresponding pairs of letters from the two messages, aligned one above the other (in the film, codebreakers are often pictured doing this in the background, sliding messages around on grids). Although the letters themselves were gibberish, Turing realized that Enigma would preserve the matching probabilities of the original messages, as some letters are more common than others.

The codebreakers assigned values to aligned pairs of letters in the two messages. Unmatched pairs were given a negative value, matched pairs a positive value.
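
In modern terms, the codebreakers were running a sequential probability ratio test: accumulate log-likelihood evidence letter by letter and stop as soon as a threshold is crossed. The sketch below uses illustrative match probabilities and thresholds, not the values actually used at Bletchley Park.

```python
import math

# Sketch of Wald's sequential probability ratio test for the codebreaking case:
# decide whether two aligned message streams are a "pair" (their letters match
# more often than chance). Probabilities and thresholds are illustrative assumptions.
P_MATCH_IF_PAIR = 0.076      # aligned plaintexts match at roughly natural-language rates
P_MATCH_IF_RANDOM = 1 / 26   # unrelated streams match purely by chance
ACCEPT = math.log(99)        # strong evidence for "pair"
REJECT = -math.log(99)       # strong evidence against

def sprt(letter_matches):
    """letter_matches: iterable of booleans, True where the aligned letters agree."""
    evidence = 0.0
    for matched in letter_matches:
        if matched:   # matches add positive weight, as in Turing's scoring
            evidence += math.log(P_MATCH_IF_PAIR / P_MATCH_IF_RANDOM)
        else:         # mismatches subtract a smaller amount
            evidence += math.log((1 - P_MATCH_IF_PAIR) / (1 - P_MATCH_IF_RANDOM))
        if evidence >= ACCEPT:
            return "pair"
        if evidence <= REJECT:
            return "not a pair"
    return "undecided"   # ran out of letters before a threshold was crossed
```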

Scooped by Dr. Stefan Gruenwald

Data Scientist on a Quest to Turn Computers Into Doctors

Some of the world’s most brilliant minds are working as data scientists at places like Google, Facebook, and Twitter—analyzing the enormous troves of online information generated by these tech giants—and for hacker and entrepreneur Jeremy Howard, that’s a bit depressing. Howard, a data scientist himself, spent a few years as the president of Kaggle, a kind of online data scientist community that sought to feed the growing thirst for information analysis. He came to realize that while many of Kaggle’s online data analysis competitions helped scientists make new breakthroughs, the potential of these new techniques wasn’t being fully realized. “Data science is a very sexy job at the moment,” he says. “But when I look at what a lot of data scientists are actually doing, the vast majority of work out there is on product recommendations and advertising technology and so forth.”


So, after leaving Kaggle last year, Howard decided he would find a better use for data science. Eventually, he settled on medicine. And he even did a kind of end run around the data scientists, leveraging not so much the power of the human brain but the rapidly evolving talents of artificial brains. His new company is called Enlitic, and it wants to use state-of-the-art machine learning algorithms—what’s known as “deep learning”—to diagnose illness and disease.


Publicly revealed for the first time today, the project is only just getting off the ground—“the big opportunities are going to take years to develop,” Howard says—but it’s yet another step forward for deep learning, a form of artificial intelligence that more closely mimics the way our brains work. Facebook is exploring deep learning as a way of recognizing faces in photos. Google uses it for image tagging and voice recognition. Microsoft does real-time translation in Skype. And the list goes on.


But Howard hopes to use deep learning for something more meaningful. His basic idea is to create a system akin to the Star Trek Tricorder, though perhaps not as portable. Enlitic will gather data about a particular patient—from medical images to lab test results to doctors’ notes—and its deep learning algorithms will analyze this data in an effort to reach a diagnosis and suggest treatments. The point, Howard says, isn’t to replace doctors, but to give them the tools they need to work more effectively. With this in mind, the company will share its algorithms with clinics, hospitals, and other medical outfits, hoping they can help refine its techniques. Howard says that the health care industry has been slow to pick up on the deep-learning trend because it was rather expensive to build the computing clusters needed to run deep learning algorithms. But that’s changing.


The real challenge, Howard says, isn’t writing algorithms but getting enough data to train those algorithms. He says Enlitic is working with a number of organizations that specialize in gathering anonymized medical data for this type of research, but he declines to reveal the names of the organizations he’s working with. And while he’s tight-lipped about the company’s technique now, he says that much of the work the company does will eventually be published in research papers.

Mike Dele's curator insight, March 20, 10:00 PM

Why don't we look at the possibility of creating and manufacturing human spare parts, just like for cars, to fix any kind of problem?

Benjamin Mzhari's curator insight, March 27, 8:37 AM

I foresee this type of profession becoming dynamic, in the sense that it will not only look at business data but also at other statistical figures that will aid businesses.


Cryptographers Could Prevent Satellite Collisions

In February 2009 the U.S.'s Iridium 33 satellite collided with the Russian Cosmos 2251, instantly destroying both communications satellites. According to ground-based telescopes tracking Iridium and Cosmos at the time, the two should have missed each other, but onboard instrumentation data from even one of the satellites would have told a different story. Why weren't operators using this positional information?

Orbital data are actually guarded secrets: satellite owners view the locations and trajectories of their on-orbit assets as private. Corporations fear losing competitive advantage—sharing exact positioning could help rivals determine the extent of their capabilities. Meanwhile governments fear that disclosure could weaken national security. But even minor collisions can cause millions of dollars' worth of damage and send debris into the path of other satellites and spacecraft carrying humans, such as the International Space Station, which is why the Iridium-Cosmos crash prompted those in the field to find an immediate fix to the clandestine problem.

In the current working solution, the world's four largest satellite communications providers have teamed up with a trusted third party: Analytical Graphics. The company aggregates their orbital data and alerts participants when satellites are at risk. This arrangement, however, requires that all participants maintain mutual trust of the third party, a situation often difficult or impossible to arrange as more players enter the field and launch more satellites into orbit.

Now experts are thinking cryptography, which can eliminate the need for mutual trust, may be a better option. In the 1980s specialists developed algorithms that allow many parties to jointly compute a function on private data without revealing the private inputs themselves. In 2010 DARPA tasked teams of cryptographers to apply this technology to develop so-called secure multiparty computation (MPC) protocols for satellite data sharing. In this method, each participant loads proprietary data into its own software, which then sends messages back and forth according to a publicly specified MPC protocol. The design of the protocol guarantees that participants can compute a desired output (for example, the probability of collision) but learn nothing else. And because the protocol design is public, anyone involved can write their own software client—there would be no need for all parties to trust one another.
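As a heavily simplified illustration of the secret-sharing idea behind MPC (not the DARPA collision-screening protocols themselves, which compute far richer statistics than a sum), here is a sketch in which three operators jointly compute a total without any one of them seeing another's raw input. The numbers and variable names are hypothetical.

```python
import secrets

Q = 2**61 - 1  # a large prime modulus for additive secret sharing

def share(value: int, n_parties: int):
    """Split a private value into n random shares that sum to it mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

# Each operator secretly holds some proprietary quantity (hypothetical numbers).
private_inputs = [1200, 850, 430]
all_shares = [share(v, 3) for v in private_inputs]   # each party shares its input

# Party i receives the i-th share of every input and sums them locally.
partial_sums = [sum(column) % Q for column in zip(*all_shares)]

# Only the combined result is ever reconstructed; no individual input is revealed.
total = sum(partial_sums) % Q
print(total)  # 2480, computed without any party seeing another's raw value
```

Each share looks like random noise on its own, which is what lets the parties exchange them freely while still recovering the agreed-upon output at the end.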

Google wants to make your internet connection 1,000 times faster (up to 10Gbps)


Google is working on technology to deliver data transfer speeds over the Internet at 10 gigabits per second, 10 times faster than the connections offered by Google Fiber in Kansas City, a Google executive revealed Wednesday, according to a USA Today report. That's roughly 1,000 times faster than the average US connection speed of 7.2 megabits per second in 2014. The project is part of Google's vision of the next-generation Internet, allowing for more stable connections for data-intensive applications and greater adoption of software as a service, Google CFO Patrick Pichette said during the Goldman Sachs Technology and Internet conference.


"That's where the world is going. It's going to happen," Pichette said. It may happen over a decade, but "why wouldn't we make it available in three years? That's what we're working on. There's no need to wait," he added. Few homes need 1Gbps or even 100Mbps broadband today, but existing capacity is steadily being absorbed by emerging technologies such as streaming audio and video, cloud storage, video chats, software updates, and multiplayer games.


Connections will be stretched even thinner with adoption of higher-resolution 4K video, expected to be the next bandwidth-hogging technology. Netflix, which plans to begin streaming content in 4K this year, said the 4K streams will need a 15Mbps connection, roughly twice the bandwidth needed to stream Super HD content.


Of course, Google isn't alone in its quest for faster Internet connections. A team of UK researchers announced last year that they had achieved wireless data transmission speeds of 10Gbps via visible light. Their "Li-fi" system used a micro-LED light bulb to transmit 3.5Gbps on each of the three primary colors of visible light (red, blue, and green), for a combined rate above 10Gbps.


Technique greatly extends duration of fragile quantum states, pointing toward practical quantum computers


Quantum computers are experimental devices that promise exponential speedups on some computational problems. Where a bit in a classical computer can represent either a 0 or a 1, a quantum bit, or qubit, can represent 0 and 1 simultaneously, letting quantum computers explore multiple problem solutions in parallel. But such “superpositions” of quantum states are, in practice, difficult to maintain.
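A qubit's superposition can be written down directly as a pair of complex amplitudes. The following minimal sketch uses standard textbook notation (it is not tied to the MIT experiment) to show an equal superposition and the 50/50 measurement statistics it implies.

```python
import numpy as np

# A qubit state is a unit vector of two complex amplitudes over |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# An equal superposition: the qubit holds both outcomes until it is measured.
psi = (ket0 + ket1) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5] -- a 50/50 chance of reading out 0 or 1
```

Environmental noise destroys such superpositions, which is why extending the superposition (coherence) time, as the diamond work below does, is the practical bottleneck.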


In a paper appearing this week in Nature Communications, MIT researchers and colleagues at Brookhaven National Laboratory and the synthetic-diamond company Element Six describe a new design that in experiments extended the superposition time of a promising type of qubit a hundredfold.


In the long term, the work could lead toward practical quantum computers. But in the shorter term, it could enable the indefinite extension of quantum-secured communication links, a commercial application of quantum information technology that currently has a range of less than 100 miles.


The researchers' qubit design employs nitrogen atoms embedded in synthetic diamond. When a nitrogen atom happens to sit next to a gap in the diamond's crystal lattice, the pair forms a "nitrogen-vacancy center," which enables researchers to optically control the magnetic orientation, or "spin," of individual electrons and atomic nuclei. Spin can be up, down, or a superposition of the two.


To date, the most successful demonstrations of quantum computing have involved atoms trapped in magnetic fields. But “holding an atom in vacuum is difficult, so there’s been a big effort to try to trap them in solids,” says Dirk Englund, the Jamieson Career Development Assistant Professor in Electrical Engineering and Computer Science at MIT and corresponding author on the new paper.


“In particular, you want a transparent solid, so you can send light in and out. Crystals are better than many other solids, like glass, in that their atoms are nice and regular and their electronic structure is well defined. And amongst all the crystals, diamond is a particularly good host for capturing an atom, because it turns out that the nuclei of diamond are mostly free of magnetic dipoles, which can cause noise on the electron spin.”


Malware detection technology identifies malware without examining source code


Hyperion, new malware detection software that can quickly recognize malicious software even if the specific program has not been previously identified as a threat, has been licensed by Oak Ridge National Laboratory (ORNL) to R&K Cyber Solutions LLC (R&K).


Hyperion, which has been under development for a decade, offers more comprehensive scanning capabilities than existing cyber security methods, said one of its inventors, Stacy Prowell of the ORNL Cyber Warfare Research team. By computing and analyzing program behaviors associated with harmful intent, Hyperion can determine the software's behavior without using its source code or even running the program.


“These behaviors can be automatically checked for known malicious operations as well as domain-specific problems,” Prowell said. “This technology helps detect vulnerabilities and can uncover malicious content before it has a chance to execute.”


“This approach is better than signature detection, which only searches for patterns of bytes,” Prowell said. “It’s easy for somebody to hide that — they can break it up and scatter it about the program so it won’t match any signature.”
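A toy example makes Prowell's point about signatures concrete: a byte-pattern scanner finds only contiguous matches, so the same bytes split apart slip through. The signature and "payload" below are made up for illustration, and nothing here reflects Hyperion's actual behavior-computation approach.

```python
# Hypothetical byte signature -- not a real malware pattern.
SIGNATURE = b"\xde\xad\xbe\xef\x13\x37"

def signature_scan(blob: bytes) -> bool:
    """Naive signature detection: flag only a contiguous byte match."""
    return SIGNATURE in blob

intact = b"..." + SIGNATURE + b"..."
scattered = b"..." + SIGNATURE[:3] + b"\x90\x90pad\x90" + SIGNATURE[3:] + b"..."

print(signature_scan(intact))     # True  -- contiguous pattern found
print(signature_scan(scattered))  # False -- same bytes, split apart, missed
```

Behavior computation sidesteps this by reasoning about what the program does rather than which bytes it contains.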


“Software behavior computation is an emerging science and technology that will have a profound effect on malware analysis and software assurance,” said R&K Cyber Solutions CEO Joseph Carter. “Computed behavior based on deep functional semantics is a much-needed cyber security approach that has not been previously available. Unlike current methods, behavior computation does not look at surface structure. Rather, it looks at deeper behavioral patterns.”


Carter adds that the technology's malware-analysis capabilities can be applied to multiple related cyber security problems, including software assurance in the absence of source code, hardware and software data exploitation and forensics, supply chain security analysis, anti-tamper analysis, and potential first intrusion detection systems based on behavior semantics.


Elon Musk reveals plan to put internet connectivity in space


At the SpaceX event held in Seattle, Elon Musk revealed his grand (and expensive) $10 billion plan to build internet connectivity in space. Musk wants to radically change the way we access the internet. His plan involves putting satellites in space, between which data packets would bounce before being passed down to Earth. Right now, data packets hop between the various terrestrial networks via routers.


Some say that Elon Musk's ambitious project would enable a smartphone to access the internet much as it communicates with GPS satellites. SpaceX will launch its satellites into low orbit to reduce communication lag. While geosynchronous communication satellites orbit the Earth at an altitude of 22,000 miles, SpaceX's satellites would orbit at an altitude of about 750 miles.


Once Musk’s system is in place, data packets would simply be sent to space, from where they would bounce about the satellites, and ultimately be sent back to Earth. “The speed of light is 40 percent faster in the vacuum of space than it is for fiber,” says Musk, which is why he believes that his unnamed SpaceX venture is the future of internet connectivity, replacing traditional routers and networks.
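Some back-of-the-envelope latency arithmetic, using the altitudes quoted above and Musk's 40 percent figure for the vacuum-versus-fiber speed difference, illustrates the trade-off. The 5,000-mile fiber route is a hypothetical comparison path, and the numbers ignore routing, queuing, and processing delays.

```python
# Rough propagation-delay arithmetic using the figures quoted above.
C_VACUUM = 186_282          # speed of light in vacuum, miles per second
C_FIBER = C_VACUUM / 1.4    # vacuum is ~40% faster than fiber, per Musk's figure

def round_trip_ms(one_way_miles: float, speed_miles_per_s: float) -> float:
    """Round-trip propagation delay in milliseconds."""
    return 2 * one_way_miles / speed_miles_per_s * 1000

print(f"GEO hop, 22,000 mi up:         {round_trip_ms(22_000, C_VACUUM):5.1f} ms")
print(f"LEO hop, 750 mi up:            {round_trip_ms(750, C_VACUUM):5.1f} ms")
print(f"5,000 mi of terrestrial fiber: {round_trip_ms(5_000, C_FIBER):5.1f} ms")
# roughly 236 ms vs 8 ms for the satellite hops, and about 75 ms for the fiber run
```

The low orbit is what keeps the satellite hop competitive: a geostationary hop alone adds more delay than a transcontinental fiber path.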


The project is based out of SpaceX’s new Seattle office. It will initially start out with 60 workers, but Musk predicts that the workforce may grow to over 1,000 in three to four years. Musk wants “the best engineers that either live in Seattle or that want to move to the Seattle area and work on electronics, software, structures, and power systems,” to work with SpaceX.

JebaQpt's comment, January 21, 11:21 PM
Elon Musk quotes http://www.thequotes.net/2014/10/elon-musk-quotes/
Justin Boersma's curator insight, March 27, 7:12 AM

Global internet connectivity through Low Earth Orbit satellites could prove incredibly useful and revolutionise the way certain information travels, e.g. by designating specific types of data to be transmitted only through this network of satellites. This would increase connectivity and speed across the globe, and would most likely require an overhaul of current networking hardware.