Amazing Science
Scooped by Dr. Stefan Gruenwald!

Software supplies snapshot of gene expression across whole brain


A new tool provides speedy analysis of gene expression patterns in individual neurons from postmortem brain tissue. Researchers have used the method to compare the genetic signatures of more than 3,000 neurons from distant brain regions.


Scientists typically use a technique called RNA-Seq to measure gene expression in neurons isolated from postmortem brains. However, analyzing the data from this approach is daunting because the analysis must be done one cell at a time.


The new method combines RNA-Seq with software that allows researchers to analyze the expression patterns of thousands of neurons at once. The investigators described the automated technique, called single-nucleus RNA sequencing (SNS), in June in Science.


The researchers tested the method on postmortem brain tissue from a 51-year-old woman with no known neurological illnesses. They used a laser to dissect 3,227 neurons from six brain areas, including those involved in language, cognition, vision and social behavior. They then performed RNA-Seq on the cells, getting a readout for RNAs produced in each cell.


The software identifies genes by matching a short segment of each RNA to a gene on a reference map of the human genome. The researchers then quantified each gene’s expression level.
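The mapping-and-counting step described above can be illustrated with a minimal pure-Python sketch: match each short RNA read against a toy reference of gene sequences, then tally reads per gene as a crude expression estimate. The gene names and sequences here are hypothetical, and real pipelines use indexed aligners rather than substring search.

```python
from collections import Counter

# Toy reference "genome": gene name -> sequence (hypothetical data).
REFERENCE = {
    "GAD1": "ATGCGTACGTTA",
    "SLC17A7": "GGCATTACCGAT",
}

def map_read(read, reference):
    """Return the gene whose sequence contains the read, or None."""
    for gene, seq in reference.items():
        if read in seq:
            return gene
    return None

def expression_counts(reads, reference):
    """Count reads mapped to each gene (a crude expression estimate)."""
    counts = Counter()
    for read in reads:
        gene = map_read(read, reference)
        if gene is not None:
            counts[gene] += 1
    return counts

reads = ["ATGCGT", "CGTACG", "GGCATT", "TTTTTT"]  # last read maps nowhere
print(expression_counts(reads, REFERENCE))        # Counter({'GAD1': 2, 'SLC17A7': 1})
```

Repeating this count for every cell yields the cell-by-gene expression matrix that the clustering step works from.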

The process correctly identified the subtypes of 2,253 neurons that ramp up brain activity and 972 neurons that dampen it.


Within these two broad classes, the neurons fell into 16 groups based on their location and their origin in the developing brain. For example, neurons from the visual cortex show different patterns of gene expression than do neurons from the temporal cortex, which processes hearing and language.


The findings expand the list of features that distinguish neurons from other cells in the brain. Researchers could use the method to identify patterns of gene expression in the brains of people with autism.

Rescooped by Dr. Stefan Gruenwald from Limitless learning Universe!

The Most Innovative Video Game in Years Creates Infinite Exoworlds and Creatures

No Man’s Sky promises to be an interstellar sandbox you can explore without strings. Players are set loose in a kind of cosmic petri dish populated by randomly generated lifeforms they can study, exploit or destroy. Resources can be mined to upgrade gear or speed travel, while trade ships crawl through solar systems and tease precious cargo ripe for pirating. Even learning alien languages plays a role in trading effectively, or at least avoiding insults that could end in firefights.
It’s a tall order. Gamers are famously curious, hard to please and quick to judge a developer deemed to have over-promised or, worse, over-hyped. And No Man’s Sky has generated its share of hype, from its quintillions of explorable planets and factions vying for control of the galaxy, to the multitudes of lifeforms you can catalog in a universal online library. Gamers can take those kinds of claims as a dare. From No Man’s Sky, they expect a universe that pushes back when prodded, that feels designed and not haphazard. They want the grail-like feedback loop: an experience that endlessly surprises and potentially lasts forever.

Via CineversityTV
Scooped by Dr. Stefan Gruenwald!

UW professor is digitizing every fish species in the world


Nearly 25,000 species of fish live on our planet, and a University of Washington professor wants to scan and digitize them all. That means each species will soon have a high-resolution, 3D visual replica online, available to all and downloadable for free. Scientists, teachers, students and amateur ichthyologists will be able to look at the fine details of a smoothhead sculpin’s skeleton, or 3-D print an exact replica of an Arctic alligatorfish.


“These scans are transforming the way we think about 3-D data and accessibility,” said Adam Summers, a UW professor of biology and aquatic and fishery sciences who is spearheading the project.


Summers, who is based at the UW’s Friday Harbor Laboratories, uses a small computerized tomography (CT) scanner in the back room of a lab to churn out dozens of fish scans from specimens gathered around the world. The machine works like a standard CT scanner used in hospitals: A series of X-ray images is taken from different angles, then combined using computer processing to create three-dimensional images of the skeleton.
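The reconstruction principle behind the scanner can be sketched with a toy example: take projections (sums) of a 2-D object from different angles, then smear them back across the image. This pure-Python sketch uses only two orthogonal views and unfiltered backprojection; real CT combines many angles with filtered backprojection, and the image values here are made up.

```python
# Toy unfiltered backprojection with two orthogonal views.
def projections(image):
    rows = [sum(r) for r in image]        # view at 0 degrees
    cols = [sum(c) for c in zip(*image)]  # view at 90 degrees
    return rows, cols

def backproject(rows, cols):
    n = len(rows)
    # Smear each projection back across the image and average.
    return [[(rows[i] + cols[j]) / (2 * n) for j in range(n)] for i in range(n)]

image = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
rows, cols = projections(image)
estimate = backproject(rows, cols)
# The bright central block reappears as the largest values in the estimate.
```

Even this crude two-view version recovers where the dense structure sits, which is the intuition behind combining X-ray images from many angles into a 3-D skeleton.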


The goal is to make it possible for scientists to examine the morphology of a particular species, or try to understand why a group of fish all have similar physical characteristics such as bony head “armor” or the ability to burrow into the sand.

“It’s been so fun to throw this data up on the web and have people actually use it,” Summers said.

Rescooped by Dr. Stefan Gruenwald from Fragments of Science!

First completely scalable quantum simulation of a molecule

A team of researchers made up of representatives from Google, Lawrence Berkeley National Laboratory, Tufts University, UC Santa Barbara, University College London and Harvard University reports that they have successfully created a scalable quantum simulation of a molecule for the first time. In a paper published in the open-access journal Physical Review X, the team describes the variational quantum eigensolver (VQE) approach they used to create and solve one of the first real-world quantum computer applications.

As research continues with the development of a true quantum computer, some in the field have turned their attention to selecting certain types of problems that such computers could solve, as opposed to what are now being called classical computers. One such problem is solving the molecular electronic structure problem, which as Google Quantum Software Engineer Ryan Babbush notes in a blog post involves searching for the lowest electron energy configuration of a given molecule. What this means in practice is using a machine to compute the energies of molecules—doing so for some, such as methane, is relatively easy and can be done very quickly on a classical computer, but others, such as propane, can take days. This makes it an ideal test case for a quantum computer.
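The variational idea behind VQE can be illustrated classically: prepare a trial state that depends on a parameter, compute its energy expectation value under the Hamiltonian, and minimize over the parameter. This sketch uses a hypothetical 2x2 "molecular" Hamiltonian and a brute-force parameter scan in place of the quantum processor and optimizer; it is not real chemistry.

```python
import math

# Hypothetical 2x2 Hamiltonian standing in for a molecular electronic problem.
H = [[1.0, 0.5],
     [0.5, -1.0]]

def energy(theta):
    """Expectation value <psi|H|psi> for the ansatz [cos(theta), sin(theta)]."""
    state = [math.cos(theta), math.sin(theta)]  # normalized trial state
    h_state = [sum(H[i][j] * state[j] for j in range(2)) for i in range(2)]
    return sum(state[i] * h_state[i] for i in range(2))

# Crude optimizer: scan the parameter and keep the lowest energy found.
best = min(energy(t / 1000 * math.pi) for t in range(1000))
exact = -math.sqrt(1.0 + 0.5 ** 2)  # exact ground-state energy of H
print(best, exact)
```

In the real algorithm the quantum hardware evaluates `energy(theta)` for states too large to simulate classically, which is exactly where molecules like propane become intractable for classical machines.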

Via Mariaschnee
Scooped by Dr. Stefan Gruenwald!

Garage Biotech: New drugs using only a computer, the internet and free online data


Pharmaceutical companies typically develop new drugs with thousands of staff and budgets that run into the billions of dollars. One estimate puts the cost of bringing a new drug to market at $2.6 billion with others suggesting that it could be double that cost at $5 billion.


One man, Professor Atul Butte, director of the University of California Institute of Computational Health Sciences, believes that, like other Silicon Valley startups, almost anyone can bring a drug to market from their garage with just a computer, the internet, and freely available data. In a talk given at the Science on the Swan conference held in Perth this week, Professor Butte outlined the process for an audience of local and international scientists and medics.


The starting point is the genetic data from thousands of studies on humans, mice and other animals, that is now freely available on sites from the National Institute of Health and the European Molecular Biology Laboratory. The proliferation of genetic data from experiments has been driven by the ever decreasing cost of sequencing genetic information using gene chip technologies.


Professor Butte, students, and research staff have found a range of different ways of using this data to look for new drugs. In one approach, they have constructed a map of how the genetic profiles of people with particular diseases are related to each other. In particular, to look for diseases with very similar genetic profiles. Having done that, they noticed that the genetic profile of people with heart conditions were very closely related to that of the much rarer condition of muscular dystrophy. What this potentially suggested was that drugs that work for one condition could potentially work in the other. This process of discovering other uses of drugs, called “drug repositioning”, is not new.
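The profile-matching step described above can be sketched in a few lines: represent each disease as a vector of up/down-regulation scores over the same genes and compare vectors with cosine similarity. The disease names and numbers here are invented for illustration; real signatures span thousands of genes and use more careful statistics.

```python
import math

# Hypothetical gene-expression signatures (same genes, same order).
signatures = {
    "heart_condition":    [2.1, -1.3, 0.8, -0.2],
    "muscular_dystrophy": [1.9, -1.1, 0.9, -0.4],
    "skin_disorder":      [-0.5, 2.0, -1.8, 1.2],
}

def cosine(a, b):
    """Cosine similarity between two expression vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

ref = signatures["heart_condition"]
for disease, sig in signatures.items():
    if disease != "heart_condition":
        print(disease, round(cosine(ref, sig), 3))
```

A high similarity score is the computational cue that drugs approved for one disease may be worth testing against the other.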


Drugs like Viagra were originally used for treatment of cardiovascular conditions. The difference is that Viagra’s repositioned use resulted from the observation of side-effects in patients taking the drug for its original intended purpose.

Professor Butte, on the other hand, is using "Big Data" and computers to show that, given the close relationship between the genetic profiles of two diseases, a drug that works for one condition could potentially work for the other.


Still in the garage, the next step after discovering a potential drug is to test whether it actually works in an experimental setting on animals. Here again, Professor Butte has turned to the internet and sites like Assay Depot. This is a site, structured like Amazon, from which a researcher can order an experiment to be carried out to test a drug on a range of animal models. It is literally a case of choosing the experiment type you want, adding it to a shopping cart, paying by credit card and getting the experimental results mailed back in a few weeks' time. "Shoppers" are given the choice of laboratory they want to use, including a choice of which country the lab is based in.


Once a new use for a drug has been shown to work in an animal model, the next step would be to test the drug in humans, get approval for the use of the drug for that condition and then finally take the drug to market.

Scooped by Dr. Stefan Gruenwald!

Storage technologies struggle to keep up with big data – is there a biological alternative?


If DNA archives become a plausible method of data storage, it will be thanks to rapid advances in genetic technologies. The sequencing machines that "read out" DNA code have already become exponentially faster and cheaper; the National Institutes of Health shows costs for sequencing a 3-billion-letter genome plummeting from US $100 million in 2001 to a mere $1,000 today. However, DNA synthesis technologies required to "write" the code are much newer and less mature. Synthetic-biology companies like San Francisco's Twist Bioscience have begun manufacturing DNA to customers' specifications only in the last few years, primarily serving biotechnology companies that are tweaking the genomes of microbes to trick them into making some desirable product. Manufacturing DNA for data storage could be a profitable new market, says Twist CEO Emily Leproust.
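The "write" step can be sketched with the simplest possible encoding: map every 2 bits of data to one of the four nucleotides. This is a minimal illustration only; practical DNA storage schemes add error correction and avoid sequences (such as long homopolymer runs) that are hard to synthesize and read.

```python
# Minimal sketch of DNA data storage: 2 bits per nucleotide.
TO_BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
FROM_BASE = {b: v for v, b in TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA strand, 4 nucleotides per byte."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(TO_BASE[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(dna: str) -> bytes:
    """Recover the original bytes from a strand produced by encode()."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for b in dna[i:i + 4]:
            byte = (byte << 2) | FROM_BASE[b]
        out.append(byte)
    return bytes(out)

strand = encode(b"hi")
print(strand)  # CGGACGGC
assert decode(strand) == b"hi"
```

At 2 bits per base, the density ceiling is enormous; the cost problem Leproust describes is in physically synthesizing the strand, not in the encoding.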


Twist sent a representative to the April meeting, and the company is also working with Microsoft on a separate experiment in DNA storage, in which it synthesized 10 million strands of DNA to encode Microsoft’s test file. Leproust says Microsoft and the other tech companies are currently trying to determine “what kind of R&D has to be done to make a viable commercial product.” To make a product that’s competitive with magnetic tape for long-term storage, Leproust estimates that the cost of DNA synthesis must fall to 1/10,000 of today’s price. “That is hard,” she says mildly. But, she adds, her industry can take inspiration from semiconductor manufacturing, where costs have dropped far more dramatically. And just last month, an influential group of geneticists proposed an international effort to reduce the cost of DNA synthesis, suggesting that $100 million could launch the project nicely.

Rescooped by Dr. Stefan Gruenwald from Daily Magazine!

World's first 1,000-processor chip


A microchip containing 1,000 independent programmable processors has been designed by a team at the University of California, Davis, Department of Electrical and Computer Engineering. The energy-efficient "KiloCore" chip has a maximum computation rate of 1.78 trillion instructions per second and contains 621 million transistors. The KiloCore was presented at the 2016 Symposium on VLSI Technology and Circuits in Honolulu on June 16.


"To the best of our knowledge, it is the world's first 1,000-processor chip and it is the highest clock-rate processor ever designed in a university," said Bevan Baas, professor of electrical and computer engineering, who led the team that designed the chip architecture. While other multiple-processor chips have been created, none exceed about 300 processors, according to an analysis by Baas' team. Most were created for research purposes and few are sold commercially. The KiloCore chip was fabricated by IBM using its 32 nm CMOS technology.


Each processor core can run its own small program independently of the others, which is a fundamentally more flexible approach than so-called Single-Instruction-Multiple-Data approaches utilized by processors such as GPUs; the idea is to break an application up into many small pieces, each of which can run in parallel on different processors, enabling high throughput with lower energy use, Baas said.
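The dataflow style Baas describes can be sketched with threads and queues standing in for cores and on-chip links: each stage runs its own small program independently and hands results directly to the next stage, with no shared memory pool. This is an illustrative analogy, not KiloCore's programming model.

```python
import threading
import queue

def stage(fn, inbox, outbox):
    """One 'core': repeatedly take an item, process it, pass it on."""
    while True:
        item = inbox.get()
        if item is None:          # shutdown signal propagates downstream
            outbox.put(None)
            break
        outbox.put(fn(item))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2)).start()
threading.Thread(target=stage, args=(lambda x: x + 1, q2, q3)).start()

for x in [1, 2, 3]:
    q1.put(x)
q1.put(None)

results = []
while (item := q3.get()) is not None:
    results.append(item)
print(results)  # [3, 5, 7]
```

Because each stage only touches its own inbox and outbox, stages can run concurrently and an idle stage simply blocks, loosely mirroring how an unneeded KiloCore processor can shut itself down.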


Because each processor is independently clocked, it can shut itself down to further save energy when not needed, said graduate student Brent Bohnenstiehl, who developed the principal architecture. Cores operate at an average maximum clock frequency of 1.78 GHz, and they transfer data directly to each other rather than using a pooled memory area that can become a bottleneck for data.


The chip is the most energy-efficient "many-core" processor ever reported, Baas said. For example, the 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 Watts, low enough to be powered by a single AA battery. The KiloCore chip executes instructions more than 100 times more efficiently than a modern laptop processor.


Applications already developed for the chip include wireless coding/decoding, video processing, encryption, and others involving large amounts of parallel data such as scientific data applications and datacenter record processing.


The team has completed a compiler and automatic program mapping tools for use in programming the chip.

Scooped by Dr. Stefan Gruenwald!

Worldwide quantum web may be possible with help from graphs

One of the most ambitious endeavors in quantum physics right now is to build a large-scale quantum network that could one day span the entire globe. In a new study, physicists have shown that describing quantum networks in a new way—as mathematical graphs—can help increase the distance that quantum information can be transmitted. Compared to classical networks, quantum networks have potential advantages such as better security and being faster under certain circumstances.
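The graph picture is easy to make concrete: model repeater stations as nodes and entanglement links as edges, and ask for the fewest-hop route between two endpoints, since every extra hop costs fidelity and success probability. The topology below is entirely hypothetical.

```python
from collections import deque

# Hypothetical quantum repeater network: node -> directly linked nodes.
links = {
    "Tokyo": ["Beijing"],
    "Beijing": ["Tokyo", "Moscow"],
    "Moscow": ["Beijing", "Berlin"],
    "Berlin": ["Moscow", "London"],
    "London": ["Berlin"],
}

def fewest_hops(start, goal):
    """Breadth-first search for the minimum number of repeater hops."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if node == goal:
            return hops
        for nxt in links[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return None  # unreachable

print(fewest_hops("Tokyo", "London"))  # 4
```

Graph-theoretic analyses of quantum networks work in this spirit, though with edge weights for loss and entanglement quality rather than plain hop counts.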
Scooped by Dr. Stefan Gruenwald!

In a New Method for Searching Image Databases, a Hand-drawn Sketch Is all it Takes

Computer scientists at the University of Basel have developed a new method for conducting image and video database searches based on hand-drawn sketches. The user draws a sketch on a tablet or interactive paper, and the system searches for a matching image in the database. The new method is free to access for researchers.


People today are increasingly confronted with the challenge of having to find their way around vast collections of photos and videos, both in their work lives and at home. Although search engines such as Google and Bing make it easy to find documents or websites quickly and efficiently using search terms, the options for searching collections of multimedia objects are more limited.


Researchers at the Department of Mathematics and Computer Science at the University of Basel have developed a system known as 'vitrivr', which allows a search for images and videos by means of a sketch. The user creates a sketch of the desired object on a tablet or interactive paper, and the program delivers the images and video clips that most resemble it. For videos, the user can even specify on the sketch in which direction an object is moving in the searched sequence.


In designing the system, the researchers deliberately set a very broad similarity concept and adapted it to different types of sketch; for example, similar colors, shapes or directions of movement.
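One of those similarity features, color, can be sketched as a coarse histogram comparison: bin the pixel values of the sketch and of each database image, then rank images by histogram distance. This is a one-feature toy with made-up pixel data, not vitrivr's actual feature set.

```python
# Compare coarse brightness histograms as a stand-in for color similarity.
def histogram(pixels, bins=4):
    """Normalized histogram of pixel values in 0..255."""
    counts = [0] * bins
    for value in pixels:
        counts[min(value * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def distance(h1, h2):
    """L1 distance between two histograms (0 = identical)."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

sketch = histogram([10, 20, 200, 210])          # dark + bright regions
database = {
    "sunset.jpg": histogram([15, 25, 190, 220]),
    "forest.jpg": histogram([100, 120, 110, 130]),
}
best = min(database, key=lambda name: distance(sketch, database[name]))
print(best)  # sunset.jpg
```

A production system combines several such feature distances (shape, motion direction, search terms) into one ranking, which is what makes the broad similarity concept work.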


Individual searches can then be augmented by a range of other query types -- search terms, examples of images and videos, or combinations of all these. An important feature of the new system is its scalability, a feature that means it can be used even with very large multimedia collections.

Scooped by Dr. Stefan Gruenwald!

IBM scientists achieve storage memory breakthrough of 3 bits per cell

For the first time, scientists at IBM Research have demonstrated reliably storing 3 bits of data per cell using a relatively new memory technology known as phase-change memory (PCM).


The current memory landscape spans from venerable DRAM to hard disk drives to ubiquitous flash. But in the last several years PCM has attracted the industry's attention as a potential universal memory technology based on its combination of read/write speed, endurance, non-volatility and density. For example, PCM doesn't lose data when powered off, unlike DRAM, and the technology can endure at least 10 million write cycles, compared to an average flash USB stick, which tops out at 3,000 write cycles.
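Storing 3 bits per cell means each cell must reliably hold one of 2³ = 8 distinguishable physical levels. The sketch below illustrates the multi-level idea with normalized, evenly spaced levels and nearest-level readback; the actual resistance values and drift-compensation techniques in IBM's PCM are far more involved.

```python
# Sketch of a multi-level cell: 3 bits -> one of 8 resistance levels.
LEVELS = [i / 7 for i in range(8)]  # illustrative normalized levels

def write_cell(bits3):
    """Program the cell to the nominal level for a 3-bit value (0..7)."""
    return LEVELS[bits3]

def read_cell(measured):
    """Map a (noisy) measured level back to the nearest nominal level."""
    return min(range(8), key=lambda i: abs(LEVELS[i] - measured))

value = 0b101
noisy = write_cell(value) + 0.03  # small drift/noise on readback
print(read_cell(noisy))           # 5
```

The engineering challenge is exactly what the sketch glosses over: as levels get closer together, drift and noise must stay below half the level spacing for readback to remain reliable.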


This research breakthrough provides fast and easy storage to capture the exponential growth of data from mobile devices and the Internet of Things.

Rescooped by Dr. Stefan Gruenwald from Conformable Contacts!

Computer Simulation: How An 8.0 Earthquake Would Rock Los Angeles


Earlier this week, an expert from the Southern California Earthquake Center spoke at a conference in Long Beach and called the southern San Andreas fault "locked, loaded and ready to go" for a major 'quake. He said that people should be preparing for something around a magnitude 8.0—that's larger than the devastating San Francisco earthquake back in 1906, the LA Times notes, and that one caused about 3,000 deaths from both the shaking and the fires that followed (which LA's former earthquake czar Lucy Jones has said we should be worried about).


What would an earthquake that big even look like? How would it move and where could we expect the shaking to be felt? For that, there's a video from the SCEC that shows where the movement would occur and how far away it could be felt in the event of a 'quake that starts near San Luis Obispo and moves south along the fault.

Via YEC Geo
Rescooped by Dr. Stefan Gruenwald from Systems Theory!

Scientists Create a 5-atom Quantum Computer That Could Make Today's Encryption Obsolete


MIT scientists have developed a 5-atom quantum computer, one that could eventually render traditional encryption obsolete. The creation of this five-atom quantum computer comes in response to a challenge posed in 1994 by Professor Peter Shor of MIT. Professor Shor developed a quantum algorithm that's able to calculate a large number's prime factors more efficiently than traditional computers, with 15 being the smallest figure to meaningfully demonstrate the algorithm.
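The number theory behind Shor's algorithm can be run classically for N = 15: find the order r of some a modulo N, then gcd(a^(r/2) ± 1, N) yields the factors. The quantum computer's only job is finding r exponentially faster; this sketch simply brute-forces it.

```python
from math import gcd

def order(a, n):
    """Smallest r with a^r = 1 (mod n), found by brute force."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical post-processing of Shor's algorithm for a chosen base a."""
    r = order(a, n)
    if r % 2:
        return None                      # odd order: pick a different a
    half = pow(a, r // 2, n)
    return sorted({gcd(half - 1, n), gcd(half + 1, n)})

print(shor_classical(15, 7))  # [3, 5]
```

For 15 the brute-force order search is trivial, but for the 2048-bit numbers used in RSA it is astronomically expensive classically, which is why a scalable version of this machine threatens today's encryption.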


The new system was able to return the correct factors with a confidence upwards of 99 percent. Professor Isaac Chuang of MIT said: "We show that Shor's algorithm, the most complex quantum algorithm known to date, is realizable in a way where, yes, all you have to do is go in the lab, apply more technology, and you should be able to make a bigger quantum computer."


Of course, this may be a little easier said than done. “It might still cost an enormous amount of money to build—you won’t be building a quantum computer and putting it on your desktop anytime soon—but now it’s much more an engineering effort, and not a basic physics question,” Chuang added.


Yet, Chuang and his team are hopeful for the future of quantum computing, saying that they "foresee it being straightforwardly scalable, once the apparatus can trap more atoms and more laser beams can control the pulses…We see no physical reason why that is not going to be in the cards."


Via Ben van Lier
Scooped by Dr. Stefan Gruenwald!

How EVE Online's Project Discovery is remapping human biology


EVE Online isn't just a game about internet spaceships and sci-fi politics. Since March, developer CCP Games has been running Project Discovery – an initiative to help improve scientific understanding of the human body at the tiniest levels. Run in conjunction with the Human Protein Atlas and Massively Multiplayer Online Science, the project taps into EVE Online's greatest resource – its player base – to help categorise millions of proteins.


"We show them an image, and they can change the colour of it, putting green or red dyes on it to help them analyse it a little bit better," Linzi Campbell, game designer on Project Discovery, tells WIRED. "Then we also show them examples – cytoplasm is their favourite one! We show them what each of the different images should look like, and just get them to pick a few that they identify within the image. The identifications are scrambled each time, so it's not as simple as going 'ok, every time I just pick the one on the right' – they have to really think about it."
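Citizen-science minigames like this typically turn many independent player answers into one trusted label by consensus voting. The sketch below shows that aggregation step with made-up vote counts and thresholds; the actual Project Discovery pipeline's rules are not described in the article.

```python
from collections import Counter

def consensus(votes, min_votes=5, min_agreement=0.6):
    """Return the majority label if enough players agree, else None."""
    tally = Counter(votes)
    label, count = tally.most_common(1)[0]
    if len(votes) >= min_votes and count / len(votes) >= min_agreement:
        return label
    return None  # not enough agreement: show the image to more players

votes = ["cytoplasm", "cytoplasm", "nucleus", "cytoplasm", "cytoplasm"]
print(consensus(votes))  # cytoplasm (4/5 = 80% agreement)
```

Scrambling the answer order per player, as Campbell describes, keeps the votes independent, which is what makes this kind of majority aggregation statistically meaningful.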


The analysis project is worked into EVE Online as a minigame, and works within the context of the game's lore. "We have this NPC organisation called the Drifters – they're like a mysterious entity in New Eden [EVE's interplanetary setting]," Campbell explains. "The players don't know an awful lot about the Drifters at the minute, so we disguised it within the universe as Drifter DNA that they were analysing. I think it just fit perfectly. We branded this as [research being done by] the Sisters of Eve, and they're analysing this Drifter DNA." 


The response has been tremendous. "We've had an amazing number of classifications, way over our greatest expectations," says Emma Lundberg, associate professor at the Human Protein Atlas. "Right now, after six weeks, we've had almost eight million classifications, and the players spent 16.2 million minutes playing the minigame. When we did the math, that translated – in Swedish measures – to 163 working years. It's crazy."


"We had a little guess, internally. We said if we get 40,000+ classifications a day, we're happy. If we get 100,000 per day, then we're amazed," Lundberg adds. "But when it peaked in the beginning, we had 900,000 classifications in one day. Now it's stabilised, but we're still getting around 200,000 a day, so everyone is mind-blown. We never expected it."

Rescooped by Dr. Stefan Gruenwald from Fragments of Science!

Scientists develop small, reprogrammable quantum computer


"Quantum computing has hit another "milestone" with US researchers today unveiling the development of a small quantum computer that can be reprogrammed.

Many research groups have previously created small, functional quantum computers, but most of these have been only able to solve a single problem. However in today's Nature journal, Shantanu Debnath and colleagues at the University of Maryland reveal their new device can solve three algorithms using quantum effects to perform calculations in a single step, where a normal computer would require several operations.

Although the new device consists of just five bits of quantum information (qubits), the team said it had the potential to be scaled up to a larger computer. In traditional computing, bits are either 1 or 0, while in a quantum computer, qubits can be both numbers at the same time. This has the potential to provide faster computation in areas such as materials science, searching large databases, and data security and encryption."

Via Mariaschnee
Scooped by Dr. Stefan Gruenwald!

Experiments point toward memory chips 1,000 times faster than today's

Silicon memory chips come in two broad types: volatile memory, such as computer RAM that loses data when the power is turned off, and nonvolatile flash technologies that store information even after we shut off our smartphones.


In general, volatile memory is much faster than nonvolatile storage, so engineers often balance speed and retention when picking the best memory for the task. That's why slower flash is used for permanent storage. Speedy RAM, on the other hand, works with processors to store data during computations because it operates at speeds measured in nanoseconds, or billionths of a second.


Now Stanford-led research shows that an emerging memory technology, based on a new class of semiconductor materials, could deliver the best of both worlds, storing data permanently while allowing certain operations to occur up to a thousand times faster than today's memory devices. The new approach may also be more energy efficient.


"This work is fundamental but promising," said Aaron Lindenberg, an associate professor of materials science and engineering at Stanford and of photon science at the SLAC National Accelerator Laboratory. "A thousandfold increase in speed coupled with lower energy use suggests a path toward future memory technologies that could far outperform anything previously demonstrated."


Lindenberg led a 19-member team, including researchers at SLAC, who detailed their experiments in Physical Review Letters. Their findings provide new insights into the experimental technology of phase-change memory.

Rescooped by Dr. Stefan Gruenwald from Conformable Contacts!

How the gurus behind Google Earth created Niantic's 'Pokémon Go'


If you spotted dozens of people silently congregating in parks and train stations over the weekend, they were probably just busy trying to catch a Pidgeotto.


Niantic's Pokémon Go, the augmented reality mobile game, has become a global phenomenon since it launched Wednesday in Australia before rolling out in the U.S. The game requires players to explore the real world to find Pokémon, collect items at Pokéstops and conquer gyms, and a lot of work has gone into the game's mapping.


John Hanke, the CEO and founder of Niantic, is a Google veteran. He was one of the founders of Keyhole, the company Google bought to start Google Earth, and had a hand in Google Maps before forming Niantic. The company spun off from Google's parent company Alphabet in 2015.


For Hanke, accurate mapping was integral to Pokémon Go. "A lot of us worked on Google Maps and Google Earth for many, many years, so we want the mapping to be good," he told Mashable. All those Pokémon Go obsessives out there owe some serious thanks to a whole other set of gamers.


Ingress, the augmented-reality multiplayer game, was launched in beta by Niantic in 2011. Its users are responsible for helping create the data pool that determines where Pokéstops and gyms appear in Pokémon Go.


In the early days of Ingress, Niantic formed a beginning pool of portal locations for the game based on historical markers, as well as a data set of public artwork mined from geo-tagged photos on Google. "We basically defined the kinds of places that we wanted to be part of the game," Hanke said. "Things that were public artwork, that were historical sites, that were buildings with some unique architectural history or characteristic, or a unique local businesses."

Via YEC Geo
YEC Geo's curator insight, July 19, 2:44 PM
Why am I not surprised?
Scooped by Dr. Stefan Gruenwald!

How to preserve the information on the web permanently?


If you wanted to write a history of the Internet, one of the first things you would do is dig into the email archives of Vint Cerf. In 1973, he co-created the protocols that Internet servers use to communicate with each other without the need for any kind of centralized authority or control. He has spent the decades since shaping the Internet's development, most recently as Google's "chief Internet evangelist."


Thankfully, Cerf says he has archived about 40 years of old email—a first-hand history of the Internet stretching back almost as far as the Internet itself. But you’d also have a pretty big problem: a whole lot of that email you just wouldn’t be able to open. The programs Cerf used to write those emails, and the formats in which they’re stored, just don’t work on any current computer you’d likely be using to try to read them.


Today, much of the responsibility for preserving the web’s history rests on The Internet Archive. The non-profit’s Wayback Machine crawls the web perpetually, taking snapshots that let you, say, go back and see how WIRED looked in 1997. But the Wayback Machine has to know about a site before it can index it, and it only grabs sites periodically. Based on the Internet Archive’s own findings, the average webpage only lasts about 100 days. In order to preserve a site, the Wayback Machine has to spot it in that brief window before it disappears.

What's more, the Wayback Machine is a centralized silo of information, an irony that's not lost on the inventors of the Internet. If it runs out of money, it could go dark. And because the archives originate from just one web address, it's relatively easy for censors, such as those in China, to block users from accessing the site entirely. The Archive Team, an unrelated organization, is leading an effort to create a more decentralized backup of the Internet Archive. But if Internet Archive founder Brewster Kahle, Cerf, and their allies who recently came together at what they called the Decentralized Web Summit have their way, the world will one day have a web that archives itself and backs itself up automatically.

Some pieces of this new web already exist. Interplanetary File System, or IPFS, is an open source project that taps into ideas pioneered by the decentralized digital currency Bitcoin and the peer-to-peer file sharing system BitTorrent. Sites opt in to IPFS, and the protocol distributes files among participating users. If the original web server goes down, the site will live on thanks to the backups running on other people’s computers. What’s more, these distributed archives will let people browse previous versions of the site, much the way you can browse old edits in Wikipedia or old versions of websites in the Wayback Machine.
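The core mechanism that makes this possible is content addressing: a file's address is a hash of its bytes, so any participating computer can serve an authentic copy and the content cannot silently change under the same address. The sketch below uses a local dict as a stand-in for the peer-to-peer network; it illustrates the principle, not IPFS's actual protocol or APIs.

```python
import hashlib

store = {}  # stand-in for blocks held across many peers

def put(content: bytes) -> str:
    """Store content under the hash of its own bytes and return that address."""
    address = hashlib.sha256(content).hexdigest()
    store[address] = content
    return address

def get(address: str) -> bytes:
    """Fetch content and verify it still hashes to its address."""
    content = store[address]
    assert hashlib.sha256(content).hexdigest() == address  # integrity check
    return content

addr = put(b"<html>my 1997 homepage</html>")
assert get(addr) == b"<html>my 1997 homepage</html>"
print(addr[:12])  # first few hex digits of the content address
```

Because the address is derived from the content itself, archiving becomes automatic: whoever holds a copy can serve it, and old versions keep their old addresses.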

“We are giving digital information print-like quality,” says IPFS founder Juan Benet. “If I print a piece of paper and physically hand it to you, you have it, you can physically archive it and use it in the future.” And you can share that copy with someone else.

What would you do right now if you wanted to read something stored on a floppy disk? On a Zip drive? For now, IPFS is still a tool for the most committed: you need to have IPFS’s software installed on your computer to take part. But Benet says the team has already built a version of the software in JavaScript that can run in your browser without the need to install anything at all. If it winds up in everyone’s browsers, the idea goes, then everyone can help back up the web.

Unlike the early web, the web of today isn’t just a collection of static HTML files. It’s a rich network of interconnected applications, like Facebook and Twitter and Slack, that are constantly changing. A truly decentralized web will need ways to back up not just pages but applications and data as well. That’s where things get really tricky; just ask the team behind the DAO, the decentralized crowdfunding system that was hacked to the tune of $50 million last week.

The IPFS team is already hard at work on a feature that would allow a web app to keep trucking along even if the original server disappears, and it has already built a chat app to demonstrate the concept. Meanwhile, several other projects, such as Ethereum, ZeroNet, and the SAFE Network, aspire to create ways to build websites and applications that don’t depend on a single server or company to keep running. And now, thanks in large part to the Summit, many of them are working to make their systems cross-compatible.

Rescooped by Dr. Stefan Gruenwald from Daily Magazine!

Chinese supercomputer tops list of world's fastest computers of 2016

A Chinese supercomputer has topped a list of the world's fastest computers again this year, and for the first time the winning system uses Chinese-designed processors instead of U.S. technology.


The announcement Monday is a new milestone for Chinese supercomputer development and a further erosion of past U.S. dominance of the field. Last year's Chinese winner in the TOP500 ranking, maintained by researchers in the United States and Germany, slipped to No. 2, followed by a computer at the U.S. government's Oak Ridge National Laboratory in Tennessee.


Also this year, China displaced the United States for the first time as the country with the most supercomputers in the top 500. China had 167 systems and the United States had 165. Japan was a distant No. 3 with 29 systems.


Supercomputers are one of a series of technologies targeted by China's ruling Communist Party for development and have received heavy financial support. Such systems are used for weather forecasting, designing nuclear weapons, analyzing oilfields and other specialized purposes.


"Considering that just 10 years ago, China claimed a mere 28 systems on the list, with none ranked in the top 30, the nation has come further and faster than any other country in the history of supercomputing," the TOP500 organizers said in a statement.


This year's champion is the Sunway TaihuLight at the National Supercomputing Center in Wuxi, west of Shanghai, according to TOP500. It was developed by China's National Research Center of Parallel Computer Engineering & Technology using entirely Chinese-designed processors.


The TaihuLight is capable of 93 petaflops, or quadrillion calculations per second, according to TOP500. It is intended for use in engineering and research including climate, weather, life sciences, advanced manufacturing and data analytics.


Its top speed is about five times that of Oak Ridge's Titan, which uses Cray, NVIDIA and Opteron technology. Other countries with computers in the Top 10 were Japan, Switzerland, Germany and Saudi Arabia.
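The "about five times" figure follows directly from the benchmark numbers: TaihuLight's 93 petaflops against Titan's LINPACK score of roughly 17.6 petaflops (the Titan figure is an assumption of this sketch, not stated above):

```python
taihulight_pflops = 93.0   # Sunway TaihuLight, per TOP500
titan_pflops = 17.6        # Oak Ridge Titan, approximate LINPACK score (assumed)

ratio = taihulight_pflops / titan_pflops
print(f"TaihuLight is about {ratio:.1f}x faster")  # about 5.3x
```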

Scooped by Dr. Stefan Gruenwald!

Custom Processor Speeds Up Robot Motion Planning by Factor of 1,000


A preprogrammed FPGA can take motion planning from frustrating to instantaneous.


If you’ve ever seen a live robot manipulation demo, you’ve almost certainly noticed that the robot probably spends a lot of time looking like it’s not doing anything. It’s tempting to say that the robot is “thinking” when this happens, and that might even be mostly correct: odds are that you’re watching some poor motion-planning algorithm try and figure out how to get the robot’s arm and gripper to do what it’s supposed to do without running into anything. This motion planning process is both one of the most important skills a robot can have (since it’s necessary for robots to “do stuff”), and also one of the most time and processor intensive. 


At the RSS 2016 conference this week, researchers from the Duke Robotics group at Duke University in Durham, N.C., are presenting a paper about “Robot Motion Planning on a Chip,” in which they describe how they can speed up motion planning by three orders of magnitude while using 20 times less power. How? Rather than using general purpose CPUs and GPUs, they instead developed a custom processor that can run collision checking across an entire 3D grid all at once.
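The core trick is precomputing, for each candidate motion in a roadmap, the set of 3D grid cells the robot sweeps through; at run time, planning reduces to overlap checks that the FPGA evaluates for every motion simultaneously. A sequential Python sketch of the same idea, with hypothetical edge names and voxel data:

```python
# Each roadmap edge maps to the set of voxels the arm sweeps through while
# executing that motion (precomputed offline, as on the custom processor).
swept_voxels = {
    "edge_A": {(0, 0, 0), (0, 1, 0), (0, 2, 0)},
    "edge_B": {(1, 0, 0), (1, 1, 0)},
    "edge_C": {(0, 2, 0), (1, 2, 0)},
}

def colliding_edges(obstacle_voxels):
    """Return edges whose swept volume overlaps any sensed obstacle voxel.
    The FPGA checks every edge in parallel; here we simply loop."""
    return {e for e, v in swept_voxels.items() if v & obstacle_voxels}

# An obstacle occupying voxel (0, 2, 0) invalidates edges A and C but not B.
blocked = colliding_edges({(0, 2, 0)})
print(sorted(blocked))  # ['edge_A', 'edge_C']
```

The planner then searches only the edges that survive the check, which is why collision checking, normally the dominant cost, stops being the bottleneck.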

Scooped by Dr. Stefan Gruenwald!

Google combines two main quantum computing ideas in one computer


A team of researchers from Google, the University of the Basque Country, the University of California and IKERBASQUE, Basque Foundation for Science has devised a means for combining the two leading ideas for creating a quantum computer in one machine, offering a possible means for learning more about how to create a true quantum computer sometime in the future. They have published the details in the journal Nature.


Computer scientists would really like to figure out how to build a true quantum computer—doing so would allow for solving problems that are simply unsolvable on conventional machines. But, unfortunately, the idea behind such a computer is still mostly theoretical. To move some of the ideas from theory to reality, the researchers with this new effort have built an actual machine that is based on two of the strongest approaches to building a quantum computer.


The first approach is based on the gate model, where qubits are linked together to form primitive circuits that together form quantum logic gates. In such an arrangement, each logic gate is capable of performing one specific type of operation. Thus, to make use of such a computer, each of the logic gates must be programmed ahead of time to carry out certain tasks.
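As a concrete illustration of the gate model (not taken from the Google paper): a qubit's state can be written as a two-component complex vector, and each gate is a unitary matrix acting on it. The Hadamard gate, for example, puts |0⟩ into an equal superposition:

```python
import numpy as np

# Qubit states as complex amplitude vectors.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate, a standard one-qubit unitary in gate-model circuits.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0            # equal superposition of |0> and |1>
probs = np.abs(state) ** 2  # measurement probabilities
print(probs)                # [0.5 0.5]
```

Chaining such unitaries, and measuring at the end, is what "programming the logic gates ahead of time" amounts to in this model.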


With the second approach, the qubits do not interact; instead, they are kept in a ground state and then caused to evolve into a system capable of solving a particular problem. The result is known as an adiabatic machine—some have actually been built because they are more versatile than the gate model computers. Unfortunately, they are also not expected to ever fully harness the power of quantum computing.


In this new effort, the researchers have attempted to gain the positive attributes of both approaches by starting with a standard gate-model quantum computer and using it to simulate an adiabatic machine. The machine uses 9 qubits and over 1,000 logic gates, and it allows communication between qubits to be turned on and off at will. The end result, the team reports, is a machine that, unlike an adiabatic machine, is able to tackle traditionally difficult computing problems. They expect it to be useful as a research tool, helping lead the way to the development of a true quantum computer.

Scooped by Dr. Stefan Gruenwald!

How bot-to-bot could soon replace APIs

By now it’s clear that bots will cause a major paradigm shift in customer service, e-commerce, and, quite frankly, all aspects of software-to-human interaction.


Today, when two software systems have to talk to each other, software developers need to implement an integration using APIs (application programming interfaces). This integration process is time consuming. That’s why, over the last couple of years, services such as Zapier, Scribe, and IFTTT have become popular. They provide out-of-the-box interfaces to hundreds of software applications, allowing you to connect, for example, your CRM system with a mailing tool or analytics platform.

In the bot-to-bot era, however, each software application can talk to each other system, regardless of whether they have an actual API integration in place. Granted, bot-to-bot communication will not be used to exchange large amounts of data, but it will allow for ad-hoc communication between, for example, my banking software and a web shop. My banking software could talk to the webshop bot and ask for that missing invoice: “Niko needs an invoice for order 45678, can you provide that?”
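A toy sketch of the invoice exchange described above; every function name, the invoice record, and the reply format are hypothetical. The point is that both the request and the response are plain English that a human could later read in an archive:

```python
import re

# Hypothetical webshop-side bot: it extracts an order number from a
# plain-English request and answers in plain English.
INVOICES = {"45678": "invoice_45678.pdf"}  # hypothetical order records

def webshop_bot(message: str) -> str:
    match = re.search(r"order\s+(\d+)", message)
    if not match:
        return "Sorry, I couldn't find an order number in your message."
    order = match.group(1)
    if order in INVOICES:
        return f"Sure, here is {INVOICES[order]} for order {order}."
    return f"The invoice for order {order} is not available yet."

# The banking bot's request, readable by humans and archivable as-is.
reply = webshop_bot("Niko needs an invoice for order 45678, can you provide that?")
print(reply)
```

No pre-negotiated API contract is needed; the "integration" is the shared natural language, which is exactly the trade-off the article describes.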


The beauty of bot-to-bot communication will be that it is in plain English, it will be conversations that every human can understand. Assuming that all conversations between my bot Annie and other bots are archived, I will be able to go back and see how my two little bots came to a certain conclusion. In my banking example, when an invoice is missing after all, I could click on a “details” button, which would show me the conversation Annie had with the webshop. The archived bot-to-bot conversation would show me the webshop bot response, that the invoice will not be available for another couple of weeks.


But it gets better. If my bot is stuck in a conversation with another bot, she can call me in for help: “Niko, it’s Annie here, your finance bot. I’m talking to a supplier, but I’m having some trouble understanding what they are saying.” I could chime in — when I have time of course, a couple of hours later, since bots have unlimited patience — and I would rephrase the question of Annie and get the answer from the other bot. Next, Annie could continue the conversation and handle my business.

Scooped by Dr. Stefan Gruenwald!

Soon We Won’t Program Computers. We’ll Train Them Like Dogs

Welcome to the new world of artificial intelligence. Soon, we won't program computers. We'll train them. Like dolphins. Or dogs. Or humans.


Over the past several years, the biggest tech companies in Silicon Valley have aggressively pursued an approach to computing called machine learning. In traditional programming, an engineer writes explicit, step-by-step instructions for the computer to follow. With machine learning, programmers don’t encode computers with instructions. They train them. If you want to teach a neural network to recognize a cat, for instance, you don’t tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands and thousands of photos of cats, and eventually it works things out. If it keeps misclassifying foxes as cats, you don’t rewrite the code. You just keep coaching it.
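The contrast can be made concrete with a toy perceptron, a deliberately minimal stand-in for the deep networks described above, on made-up data. No classification rule is written anywhere; the weights are learned from labeled examples alone:

```python
import numpy as np

# Labeled examples (no hand-written "if whiskers and fur" rules anywhere).
X = np.array([[2.0, 2.0], [3.0, 1.0], [2.5, 3.0],   # class +1
              [0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # class -1
y = np.array([1, 1, 1, -1, -1, -1])

w, b = np.zeros(2), 0.0
for _ in range(100):                  # "you just keep coaching it"
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:    # misclassified -> nudge the weights
            w += 0.1 * yi * xi
            b += 0.1 * yi

predictions = np.sign(X @ w + b)
print(predictions)  # matches y after training
```

Fixing a mistake means showing more examples (or training longer), not editing code, which is the shift the article is describing.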


This approach is not new—it’s been around for decades—but it has recently become immensely more powerful, thanks in part to the rise of deep neural networks, massively distributed computational systems that mimic the multilayered connections of neurons in the brain. And already, whether you realize it or not, machine learning powers large swaths of our online activity. Facebook uses it to determine which stories show up in your News Feed, and Google Photos uses it to identify faces.


Machine learning runs Microsoft’s Skype Translator, which converts speech to different languages in real time. Self-driving cars use machine learning to avoid accidents. Even Google’s search engine—for so many years a towering edifice of human-written rules—has begun to rely on these deep neural networks. In February the company replaced its longtime head of search with machine-learning expert John Giannandrea, and it has initiated a major program to retrain its engineers in these new techniques. “By building learning systems,” Giannandrea told reporters this fall, “we don’t have to write these rules anymore.”

Scooped by Dr. Stefan Gruenwald!

Is Fog Computing The Next Big Thing In Internet of Things?


One of the reasons why IoT has gained momentum in the recent past is the rise of cloud services. Though the concept of M2M existed for over a decade, organizations never tapped into the rich insights derived from the datasets generated by sensors and devices. Existing infrastructure was just not ready to deal with the massive scale demanded by the connected devices architecture. That’s where cloud becomes an invaluable resource for enterprises.


With abundant storage and ample computing power, cloud became an affordable extension to the enterprise data center. The adoption of cloud resulted in increased usage of Big Data platforms and analytics. Organizations are channeling every bit of data generated from a variety of sources and devices to the cloud, where it is stored, processed, and analyzed to derive valuable insights. The combination of cloud and Big Data is the key enabler of Internet of Things. IoT is all set to become the killer use case for distributed computing and analytics.


Cloud service providers such as Amazon, Google, IBM, Microsoft, Salesforce, and Oracle are offering managed IoT platforms that deliver the entire IoT stack as a service. Customers can onboard devices, ingest data, define data-processing pipelines that analyze streams in real time, and derive insights from the sensor data. Cloud-based IoT platforms are examples of verticalized PaaS offerings, which are designed for a specific use case.


While cloud is a perfect match for the Internet of Things, not every IoT scenario can take advantage of it. Industrial IoT solutions demand low-latency ingestion and immediate processing of data. Organizations cannot afford the delay caused by the round trip between the devices layer and cloud-based IoT platforms. The solution demands instant processing of data streams with quick turnaround. For example, it may be too late if the IoT cloud shuts down an LPG refilling machine only after detecting an unusual combination of pressure and temperature thresholds. Instead, the anomaly should be detected locally within milliseconds, followed by an immediate action triggered by a rule. The other scenario that demands local processing is healthcare. Given the sensitivity of the data, healthcare companies don't want to stream critical data points generated by life-saving systems to the cloud. That data needs to be processed locally, not only for faster turnaround but also to anonymize personally identifiable patient data.
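The LPG example amounts to a rule evaluated at the edge, close to the sensors, rather than in the cloud. A minimal sketch with hypothetical threshold values:

```python
# Hypothetical edge-side safety rule for the LPG refilling example:
# evaluated locally in milliseconds, with no cloud round trip.
PRESSURE_LIMIT_BAR = 18.0   # hypothetical threshold
TEMPERATURE_LIMIT_C = 55.0  # hypothetical threshold

def edge_rule(pressure_bar: float, temperature_c: float) -> str:
    """Return the action the Fog node triggers for one sensor reading."""
    if pressure_bar > PRESSURE_LIMIT_BAR and temperature_c > TEMPERATURE_LIMIT_C:
        return "SHUTDOWN"          # anomaly: act immediately at the edge
    return "FORWARD_TO_CLOUD"      # routine reading: ship to the cloud for analytics

print(edge_rule(19.2, 61.0))  # SHUTDOWN
print(edge_rule(12.0, 40.0))  # FORWARD_TO_CLOUD
```

Only the routine readings travel to the cloud for heavy analytics; the safety-critical decision never leaves the edge, which is the division of labor Fog computing formalizes.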


The demand for distributing IoT workloads between the local data center and the cloud has resulted in an architectural pattern called Fog computing. Large enterprises dealing with industrial automation will have to deploy infrastructure within the data center that's specifically designed for IoT. This infrastructure is a cluster of compute, storage, and networking resources delivering sufficient horsepower to deal with the IoT data locally. The cluster that lives on the edge is called the Fog layer. Fog computing mimics cloud capabilities within the edge location, while still taking advantage of the cloud for heavy lifting. Fog computing is to IoT what hybrid cloud is to enterprise IT. Both architectures deliver the best of both worlds.


Cisco is one of the early movers in the Fog computing market. The company is credited with coining the term even before IoT became a buzzword. Cisco positioned Fog as the layer to reduce latency in hybrid cloud scenarios. With enterprises embracing converged infrastructure in data centers and cloud for distributed computing, Cisco had a vested interest in pushing Fog to stay relevant in the data center. After almost five years of evangelizing Fog computing with little success, Cisco finally found a legitimate use case in the form of IoT.

Scooped by Dr. Stefan Gruenwald!

Teaching assistant wasn't human and nobody guessed it

Jill Watson is a virtual teaching assistant. She was one of nine teaching assistants in an artificial intelligence online course. And none of the students guessed she wasn't a human.


College of Computing Professor Ashok Goel teaches Knowledge Based Artificial Intelligence (KBAI) every semester. It's a core requirement of Georgia Tech's online Master of Science in Computer Science program. And every time he offers it, Goel estimates, his 300 or so students post roughly 10,000 messages in the online forums -- far too many inquiries for him and his eight teaching assistants (TAs) to handle. That's why Goel added a ninth TA this semester. Her name is Jill Watson, and she's unlike any other TA in the world. In fact, she's not even a "she." Jill is a computer -- a virtual TA -- implemented on IBM's Watson platform.


"The world is full of online classes, and they're plagued with low retention rates," Goel said. "One of the main reasons many students drop out is because they don't receive enough teaching support. We created Jill as a way to provide faster answers and feedback."


Goel and his team of Georgia Tech graduate students started to build her last year. They contacted Piazza, the course's online discussion forum, to track down all the questions that had ever been asked in KBAI since the class was launched in fall 2014 (about 40,000 postings in all). Then they started to feed Jill the questions and answers.


"One of the secrets of online classes is that the number of questions increases if you have more students, but the number of different questions doesn't really go up," Goel said. "Students tend to ask the same questions over and over again."


That's an ideal situation for the Watson platform, which specializes in answering questions with distinct, clear solutions. The team wrote code that allows Jill to field routine questions that are asked every semester. For example, students consistently ask where they can find particular assignments and readings.


Jill wasn't very good for the first few weeks after she started in January, often giving odd and irrelevant answers. Her responses were posted in a forum that wasn't visible to students.

"Initially her answers weren't good enough because she would get stuck on keywords," said Lalith Polepeddi, one of the graduate students who co-developed the virtual TA. "For example, a student asked about organizing a meet-up to go over video lessons with others, and Jill gave an answer referencing a textbook that could supplement the video lessons -- same keywords -- but different context. So we learned from mistakes like this one, and gradually made Jill smarter."


After some tinkering by the research team, Jill found her groove and soon was answering questions with 97 percent certainty. When she did, the human TAs would upload her responses to the students. By the end of March, Jill didn't need any assistance: She wrote the class directly if she was 97 percent positive her answer was correct.
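A toy sketch of the confidence-gated pattern described here, not Jill's actual Watson pipeline: score an incoming question against a hypothetical FAQ mined from past semesters, answer only above a threshold, and otherwise escalate to a human TA. The similarity measure and threshold value are illustrative assumptions:

```python
def similarity(a: str, b: str) -> float:
    """Word-overlap (Jaccard) score between two questions, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# Hypothetical FAQ mined from past semesters' forum posts.
FAQ = {
    "where can i find assignment 1": "Assignment 1 is posted under Files > Assignments.",
    "when is the final project due": "The final project is due in week 15.",
}

def answer(question: str, threshold: float = 0.9) -> str:
    """Answer only when confident (the spirit of Jill's 97 percent rule);
    otherwise hand the question to a human TA."""
    best_q = max(FAQ, key=lambda q: similarity(q, question))
    if similarity(best_q, question) >= threshold:
        return FAQ[best_q]
    return "(escalated to a human TA)"

print(answer("where can i find assignment 1"))   # confident: auto-answered
print(answer("can jill come out and play"))      # low confidence: escalated
```

The threshold is the safety valve: routine, repetitive questions get instant answers, while anything novel, like the keyword-confusion failures described above, falls through to a human.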


The students, who were studying artificial intelligence, were unknowingly interacting with it. Goel didn't inform them about Jill's true identity until April 26. The student response was uniformly positive. One admitted her mind was blown. Another asked if Jill could "come out and play." Since then some students have organized a KBAI alumni forum to learn about new developments with Jill after the class ends, and another group of students has launched an open source project to replicate her.

Scooped by Dr. Stefan Gruenwald!

Autonomous quantum error correction method greatly increases qubit coherence times


It might be said that the most difficult part of building a quantum computer is not figuring out how to make it compute, but rather finding a way to deal with all of the errors that it inevitably makes. In order to flip the qubits back to their correct states, physicists have been developing an assortment of quantum error correction techniques. Most of them work by repeatedly making measurements on the system to detect errors and then correct the errors before they can proliferate. These approaches typically have a very large overhead, where a large portion of the computing power goes to correcting errors.


In a new paper published in Physical Review Letters, Eliot Kapit, an assistant professor of physics at Tulane University in New Orleans, has proposed a different approach to quantum error correction. His method takes advantage of a recently discovered unexpected benefit of quantum noise: when carefully tuned, quantum noise can actually protect qubits against unwanted noise. Rather than actively measuring the system, the new method passively and autonomously suppresses and corrects errors, using relatively simple devices and relatively little computing power.


"The most interesting thing about my work is that it shows just how simple and small a fully error corrected quantum circuit can be, which is why I call the device the 'Very Small Logical Qubit,'" Kapit said. "Also, the error correction is fully passive—unwanted error states are quickly repaired by engineered dissipation, without the need for an external computer to watch the circuit and make decisions. While this paper is a theoretical blueprint, it can be built with current technology and doesn't require any new insights to make it a reality."


The new passive error correction circuit consists of just two primary qubits, in contrast to the 10 or more qubits required in most active approaches. The two qubits are coupled to each other, and each one is also coupled to a "lossy" object, such as a resonator, that experiences photon loss.


"In the absence of any errors, there are a pair of oscillating photon configurations that are the 'good' logical states of the device, and they oscillate at a fixed frequency based on the circuit parameters," Kapit explained. "However, like all qubits, the qubits in the circuit are not perfect and will slowly leak photons into the environment. When a photon randomly escapes from the circuit, the oscillation is broken, at which point a second, passive error correction circuit kicks in and quickly inserts two photons, one which restores the lost photon and reconstructs the oscillating logical state, and the other is dumped to a lossy circuit element and quickly leaks back out of the system. The combination of careful tuning of the resonant frequencies of the circuit and adding photons two at a time to correct losses ensures that the passive error correction circuit can operate continuously but won't do anything to the two good qubits unless their oscillation has been broken by a photon loss."
