Bits 'n Pieces on Big Data
Innovative information and insight into Big Data (if you like the content, please consider donating to my bitcoin address #3Pjof6N9xRAYXXSPZ4EAFLfHGn51ZdPcxi)
Curated by onur savas

Researchers hope deep learning algorithms can run on FPGAs and supercomputers

The NSF has funded projects that will investigate how deep learning algorithms run on FPGAs and across systems using the high-performance RDMA interconnect. Another project, led by Andrew Ng and two supercomputing experts, wants to put the models on supercomputers and give them a Python interface.
Rescooped by onur savas from Big Data Analysis in the Clouds

Cray integrates Hadoop Big Data analytics with supercomputers

Cray is bringing integrated open source Hadoop Big Data analytics software to its supercomputing platforms.

Via Pierre Levy

The Graph 500 List


Data-intensive supercomputer applications are increasingly important HPC workloads, but they are ill-suited to platforms designed for 3D physics simulations. Current benchmarks and performance metrics do not provide useful information on the suitability of supercomputing systems for data-intensive applications. A new set of benchmarks is needed to guide the design of hardware architectures and software systems intended to support such applications, and to inform procurement decisions. Graph algorithms are a core part of many analytics workloads.
Backed by a steering committee of over 50 international HPC experts from academia, industry, and national laboratories, Graph 500 will establish a set of large-scale benchmarks for these applications. The Graph 500 steering committee is in the process of developing comprehensive benchmarks to address three application kernels: concurrent search, optimization (single source shortest path), and edge-oriented (maximal independent set). Further, we are in the process of addressing five graph-related business areas: Cybersecurity, Medical Informatics, Data Enrichment, Social Networks, and Symbolic Networks.

onur savas's insight:

List current as of November 2012.
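
As a rough illustration of the concurrent-search kernel mentioned above, here is a minimal single-threaded BFS sketch in Python. This is only a toy: the actual Graph 500 benchmark runs BFS over enormous distributed graphs and reports traversed edges per second (TEPS), and the graph and function names here are illustrative, not part of the benchmark.

```python
from collections import deque

def bfs_parents(adjacency, source):
    """Breadth-first search returning the parent of each reached vertex.

    A toy sketch of the "concurrent search" (BFS) kernel; Graph 500
    validates a run by checking the parent array it produces.
    """
    parent = {source: source}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adjacency.get(u, []):
            if v not in parent:          # first visit: record BFS tree edge
                parent[v] = u
                frontier.append(v)
    return parent

# Tiny example graph (adjacency lists)
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bfs_parents(graph, 0))  # {0: 0, 1: 0, 2: 0, 3: 1}
```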


Project ranks billions of drug interactions


“It’s the largest computational docking ever done by mankind.”


By analysing the chemical structure of a drug, researchers can see whether it is likely to bind to, or 'dock' with, a biological target such as a protein. Researchers have now unveiled a computational effort that used Google's supercomputers to assess billions of potential dockings, drawing on drug and protein information held in public databases. The analysis flagged potentially toxic side effects and lets researchers predict how and where a compound might act in the body.
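
To illustrate the "ranking" idea at toy scale, here is a hypothetical Python sketch that keeps the k best-scoring drug-protein pairs from a stream of docking scores. The compounds, targets, and energies below are made up, and a real pipeline would compute binding energies with docking software rather than read them from a list; this only shows how billions of scored pairs can be ranked with bounded memory.

```python
import heapq

def top_interactions(scores, k):
    """Return the k drug-protein pairs with the best (most negative) energy.

    `scores` is any iterable of (drug, protein, energy) tuples; a more
    negative docking energy means a stronger predicted binding.
    heapq.nsmallest streams the input, so memory stays O(k) even for
    billions of pairs.
    """
    return heapq.nsmallest(k, scores, key=lambda t: t[2])

# Hypothetical scored pairs
pairs = [
    ("aspirin",   "COX-1",   -7.2),
    ("aspirin",   "COX-2",   -6.8),
    ("ibuprofen", "COX-1",   -6.5),
    ("caffeine",  "ADORA2A", -7.9),
]
print(top_interactions(pairs, 2))  # the two most negative energies first
```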


NMC PRObE


PRObE aims to provide the community with a unique large-scale, low-level, and highly instrumentable systems research facility. It does so by re-purposing supercomputers that Los Alamos National Laboratory (LANL) decommissions and making them available to researchers. The goal of the PRObE facility is to further research in Operating Systems, Networking, Storage, Resiliency, and other relevant systems research topics. PRObE is governed by committees of people from the community, for the community.
