Complex Insight - Understanding our world
Latest news on complex systems in life sciences, engineering, education and government
Curated by ComplexInsight
Scooped by ComplexInsight

CUDA Toolkit

The NVIDIA® CUDA® Toolkit provides a comprehensive development environment for C and C++ developers building GPU-accelerated applications. The CUDA Toolkit includes a compiler for NVIDIA GPUs, math libraries, and tools for debugging and optimizing the performance of your applications.
ComplexInsight's insight:

New CUDA 6 toolkit release with 64-bit ARM support, improved CUDA Fortran for scientific applications, replay features in the Visual Profiler and nvprof, and a new multi-GPU BLAS library, cublasXT, that scales BLAS calls across GPUs. A lot of goodness for compute developers - available here: https://developer.nvidia.com/cuda-downloads
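The multi-GPU scaling in cublasXT comes from splitting a large matrix multiply into tiles that can be dispatched to different devices. This pure-NumPy sketch shows only that tiling idea, not NVIDIA's API; `tiled_gemm`, the tile size and the shapes are all illustrative assumptions:

```python
import numpy as np

def tiled_gemm(A, B, tile=256):
    """Compute C = A @ B by splitting C into independent tiles.

    Each tile of C needs only a row-panel of A and a column-panel of B,
    so a multi-GPU BLAS can hand tiles like these to different devices.
    Here the 'devices' are just sequential NumPy calls.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n))
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            # one independent sub-problem per tile of C
            C[i:i+tile, j:j+tile] = A[i:i+tile, :] @ B[:, j:j+tile]
    return C
```

The result is bit-for-bit the same as a single large GEMM; only the scheduling of the work changes.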

Rescooped by ComplexInsight from Papers

Limits on fundamental limits to computation

An indispensable part of our personal and working lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the past fifty years. Such Moore scaling now requires ever-increasing efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and increase our understanding of integrated-circuit scaling, here I review fundamental limits to computation in the areas of manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, I recapitulate how some limits were circumvented, and compare loose and tight limits. Engineering difficulties encountered by emerging technologies may indicate yet unknown limits.

 

Limits on fundamental limits to computation
Igor L. Markov
Nature 512, 147–154 (14 August 2014) http://dx.doi.org/10.1038/nature13570


Via Complexity Digest
ComplexInsight's insight:

Discussion of limits is key to creating new ideas - Igor Markov's paper is worth reading to explore limitations and their engineering implications, and to trigger new discussions and ideas. Worth reading.
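The "periodic doubling of transistor densities" that Markov starts from compounds dramatically. A quick back-of-envelope calc, assuming (as is conventional, not a figure from the paper) one doubling roughly every two years:

```python
# Growth implied by "periodic doubling over the past fifty years".
# Assumption (not from the paper): one doubling roughly every two years.
years = 50
doubling_period = 2
doublings = years / doubling_period   # 25 doublings
growth = 2 ** doublings               # ~3.4e7-fold density increase
print(f"{doublings:.0f} doublings -> {growth:,.0f}x density")
```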

Scooped by ComplexInsight

Big Data & HPC: The Modern Crystal Ball for 2014

ComplexInsight's insight:

Jorge Titinger, CEO of SGI, has a good article on applications of HPC to Big Data, including:

- Graphing and mapping: HPC-powered data mapping and graphing will lead to greater accuracy in business forecasting
- Pattern visualizations: HPC-powered tools will emerge that can provide an intuitive view of complex data sets, enabling rapid identification of relationships for simple analysis
- Scaling in-memory databases: HPC will allow enterprise in-memory systems to handle larger data workloads — allowing closer-to-complete data sets (over partial sets) to benefit from real-time analytics while in motion
- Meta-data: The importance of meta-data will jump dramatically — we’ll see enterprises realize that leveraging meta-data analytics for virtualization and relational mapping can yield enhanced accuracy, new business insights and even reveal security threats

With the advent of on-demand compute and cloud-based processing, it will be interesting to see how the HPC companies continue to differentiate themselves from on-demand suppliers such as Amazon and Microsoft. Jorge's 2014 outlook is interesting and more sane than many big data predictions - certainly worth reading.

Scooped by ComplexInsight

Software through the lens of evolutionary biology | Theory, Evolution ...

My preferred job title is 'theorist', but that is often too ambiguous in casual and non-academic conversation, so I often settle for 'computer scientist'. Unfortunately, it seems that the overwhelming majority of people equate ...
ComplexInsight's insight:

Artem Kaznatcheev, a researcher in theoretical computer science - i.e. the ideas that underpin computing - has a wonderful write-up of Stephanie Forrest's Stanislaw Ulam lecture at the SFI on using inspiration from biology to address challenges in the software industry. The Ulam lecture is available on video - it's a few hours long, though seriously worth watching - and covers modern developments in genetic programming and other approaches. If you need an abbreviated write-up of the key ideas underpinning Professor Forrest's lecture, Artem's is awesomely succinct. Worth reading (and the lectures linked in his article are worth watching!)
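For readers new to the topic, the loop behind genetic programming is easy to sketch. This toy genetic algorithm evolves a bit string toward a target; it is purely illustrative - not Forrest's GenProg or anything from the lecture, and all names and parameters are invented. Real program-repair tools evolve program edits rather than bits:

```python
import random

def evolve(target, pop_size=100, mutation_rate=0.05, generations=200, seed=1):
    """Toy genetic algorithm: selection, crossover, mutation."""
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
    # random initial population of bit strings
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:          # perfect match found
            break
        parents = pop[:pop_size // 2]     # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)     # one-point crossover
            child = a[:cut] + b[cut:]
            # flip each bit with probability mutation_rate
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)
```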

Scooped by ComplexInsight

Media for Thinking the Unthinkable

Presented at the MIT Media Lab on April 4, 2013. A personal preface: http://worrydream.com/MediaForThinkingTheUnthinkable/note.html For more information about…
ComplexInsight's insight:

Bret Victor's talk on media tools for thinking - how representations map to thinking and build the associations which help us understand systems. Worth watching.

Rescooped by ComplexInsight from Papers

Neural Computation and the Computational Theory of Cognition

We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism—neural processes are computations in the generic sense. After that, we reject on empirical grounds the common assimilation of neural computation to either analog or digital computation, concluding that neural computation is sui generis. Analog computation requires continuous signals; digital computation requires strings of digits. But current neuroscientific evidence indicates that typical neural signals, such as spike trains, are graded like continuous signals but are constituted by discrete functional elements (spikes); thus, typical neural signals are neither continuous signals nor strings of digits. It follows that neural computation is sui generis. Finally, we highlight three important consequences of a proper understanding of neural computation for the theory of cognition. First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation. Second, several popular views about neural computation turn out to be incorrect. Third, computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation.
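The abstract's three-way distinction is easy to make concrete. A small illustrative sketch (all numbers invented) of a continuous signal, a digit string, and a spike train read out by rate coding:

```python
import numpy as np

# 1. Analog computation: a continuous signal, a real value at every time.
t = np.linspace(0.0, 1.0, 1000)
analog = np.sin(2 * np.pi * 5 * t)

# 2. Digital computation: a string of digits.
digital = "100110"

# 3. A spike train: discrete functional elements (spikes) whose timing is
#    graded -- spike times can fall anywhere on the real line, so the signal
#    is neither a continuous waveform nor a string of digits.
spike_times = np.array([0.013, 0.112, 0.1187, 0.430, 0.4302, 0.979])

# Rate coding reads a graded quantity off the discrete events:
window = (spike_times >= 0.0) & (spike_times < 0.5)
rate = window.sum() / 0.5   # spikes per second in the first half
print(rate)
```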

 

Neural Computation and the Computational Theory of Cognition

Gualtiero Piccinini, Sonya Bahar

Cognitive Science
Volume 37, Issue 3, pages 453–488, April 2013

http://dx.doi.org/10.1111/cogs.12012


Via Complexity Digest
ComplexInsight's insight:

Re-reading some of John Holland's work on neural network simulation at present while looking into different models of computation and digital physics, so this is a timely paper.  Looks to be an interesting read.

Scooped by ComplexInsight

Intel's 50-Core Xeon Phi: The New Era of Inexpensive Supercomputing


The advent of Intel's massively parallel coprocessor will make every server a supercomputer. This week, Intel unveiled its new Xeon Phi coprocessor, which puts an astonishing 50 x86 cores onto a single PCI-connected card. The term "coprocessor" should be understood in context. Every one of the Phi's cores can boot Linux and run any x86 software. However, the card itself needs to plug into a system that has an independent CPU, which basically oversees the Phi's operations. Hence, the coprocessor appellation. The first model to be released in Q1 of next year will have 50 cores, and the follow-up coprocessor slated for release in mid-2013 will have 60 cores. Each processor supports four threads, making for 200 threads for the initial Phi. The cores run at 1.05 GHz and sport a 512-KB L2 cache each. They collectively share 8 GB of GDDR5 memory. They will likely compete with GPU-based solutions from NVIDIA and AMD - but with a more familiar programming model. Things just got very interesting in the high performance compute world. Click on the image or the title to learn more.
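The article's own per-core figures imply some useful aggregates; nothing here beyond arithmetic on the quoted numbers:

```python
# Aggregates implied by the quoted per-core figures for the initial Phi.
cores = 50
threads_per_core = 4
l2_per_core_kb = 512
clock_ghz = 1.05                # quoted core clock, for reference

total_threads = cores * threads_per_core      # hardware threads on the card
total_l2_mb = cores * l2_per_core_kb / 1024   # aggregate L2 cache
print(total_threads, "threads,", total_l2_mb, "MB L2 at", clock_ghz, "GHz")
```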

Rescooped by ComplexInsight from The Robot Times

Seth Lloyd on Programming the Universe

Seth Lloyd is a Professor in the Department of Mechanical Engineering at the Massachusetts Institute of Technology (MIT). His talk, "Programming the Universe", is about the computational power of atoms, electrons, and elementary particles.


Via Szabolcs Kósa, The Robot Launch Pad
Rescooped by ComplexInsight from Social Simulation

A review of High Performance Computing foundations for scientists

The increase of existing computational capabilities has made simulation emerge as a third discipline of Science, lying midway between experimental and purely theoretical branches [1, 2]. Simulation enables the evaluation of quantities which otherwise would not be accessible, helps to improve experiments and provides new insights on systems which are analysed [3-6]. Knowing the fundamentals of computation can be very useful for scientists, for it can help them to improve the performance of their theoretical models and simulations. This review includes some technical essentials that can be useful to this end, and it is devised as a complement for researchers whose education is focused on scientific issues and not on technological respects. In this document we attempt to discuss the fundamentals of High Performance Computing (HPC) [7] in a way which is easy to understand without much previous background. We sketch the way standard computers and supercomputers work, as well as discuss distributed computing and discuss essential aspects to take into account when running scientific calculations in computers.
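One of the standard fundamentals a review like this covers is the limit on parallel speedup. Amdahl's law is the textbook formula - a general HPC result, not a quote from this particular paper:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Upper bound on speedup when only `parallel_fraction` of the
    runtime parallelizes perfectly across `n_processors`."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even a 95%-parallel code tops out near 20x, however many cores you add.
print(amdahl_speedup(0.95, 1024))
```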


Via Frédéric Amblard
Scooped by ComplexInsight

Oracle Cranks Up The Cores To 32 With Sparc M7 Chip

Say what you will about Oracle co-founder and CEO Larry Ellison, but when the software giant bought Sun Microsystems more than four years ago, for $7.4 billion...
ComplexInsight's insight:

It has been easy to forget, following the acquisition of Sun, that Oracle has continued investing in hardware. The new M7 chip line with 32 SPARC cores and NUMA interconnect options not only continues to promise speed increases for existing Oracle customers, but also shows the benefits of highly integrated large-core systems using optimised connection architectures over off-the-shelf clusters. Good in-depth article - worth reading.

Scooped by ComplexInsight

Machined Learnings: Stranger in a Strange Land

ComplexInsight's insight:

Good article on the differences between big data processing and HPC simulations. Worth reading to see what the two communities focus on, worry about, and can learn from one another.

Scooped by ComplexInsight

How Molecules Matter to Mental Computation

This was just a brilliant paper, talking about exactly what I found wrong with (yet) current computational models: http://t.co/pxP6MZMa7T
ComplexInsight's insight:

I remember reading this when it was first published, and it's a great paper. Any computational model of human cognition needs to integrate both chemical and electrical mechanisms into an integrated whole. Great scoop and awesome paper.

Rescooped by ComplexInsight from Papers

Evolutionary Information Theory

Evolutionary information theory is a constructive approach that studies information in the context of evolutionary processes, which are ubiquitous in nature and society. In this paper, we develop foundations of evolutionary information theory, building several measures of evolutionary information and obtaining their properties. These measures are based on mathematical models of evolutionary computations, machines and automata. 
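For contrast, the classical static measure one would set these evolutionary measures against is Shannon's. A minimal sketch - an illustrative baseline only, not a formula from Burgin's paper:

```python
import math

def shannon_entropy(probs):
    """Classical Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # fair coin -> 1 bit
```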

 

Evolutionary Information Theory
Mark Burgin

Information 2013, 4(2), 124-168; http://dx.doi.org/10.3390/info4020124


Via Complexity Digest
ComplexInsight's insight:

This looks very promising - one for the holiday reading list.

Scooped by ComplexInsight

Trillion Particle Simulation Bridges Gap to Exascale


An unprecedented trillion-particle simulation, which utilized more than 120,000 processors and generated approximately 350 terabytes of data, pushed the performance capability of the National Energy Research Scientific Computing Center’s (NERSC’s) Cray XE6 “Hopper” supercomputer to its limits.

ComplexInsight's insight:

To understand the scale of this simulation: the team simulated more than two trillion particles for nearly 23,000 time steps with VPIC, a large-scale plasma physics application. The simulation used approximately 80 percent of Hopper’s computing resources, 90 percent of the available memory on each node, and 50 percent of the Lustre scratch file system. In total, 10 separate trillion-particle datasets, each ranging between 30 to 42 terabytes in size, were written as HDF5 files on the scratch file system at a sustained rate of approximately 27 gigabytes per second. This type of simulation will become increasingly necessary for many of the large-scale science challenges underway in physics, geophysics and the life sciences. Click on the image or title to learn more.
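The I/O figures are worth a sanity check, using only the numbers quoted above (30-42 TB per dataset, roughly 27 GB/s sustained):

```python
# Sanity check on the quoted I/O figures (all numbers from the article).
dataset_tb_low, dataset_tb_high = 30, 42   # one trillion-particle dataset
rate_gb_s = 27                             # sustained write rate to Lustre

# Time to write one dataset at that rate (using 1 TB = 1024 GB).
low = dataset_tb_low * 1024 / rate_gb_s / 60    # minutes, ~19
high = dataset_tb_high * 1024 / rate_gb_s / 60  # minutes, ~27
print(f"{low:.0f}-{high:.0f} minutes per dataset")
```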

Scooped by ComplexInsight

Biological Computation - Microsoft Research

Application and development of computational methods and tools for modeling and analyzing complex biological systems.
ComplexInsight's insight:

As someone who deeply subscribes to Chris Langton's view that the natural study of computing is to study computation as it is writ across all of nature, this research at Microsoft is deeply interesting (and echoes companies like Autodesk, who come from one discipline and are increasingly looking at life sciences through the lens of computing). The Biological Computation group is conducting research to uncover fundamental principles of biological computation: what cells compute, how and why. They focus primarily on developing computational techniques that enable multiscale modelling, from molecules to cells to systems. Their work currently covers fundamentals of biological computation, with applications in immunology and development, together with principles of programming life, with applications in DNA computing and synthetic biology. Click on the image or title to learn more.

Scooped by ComplexInsight

Intel teaches Xeon Phi x86 coprocessor snappy new tricks


Having been tangentially* impacted by the Intel Larrabee project - which was to create a GPU based on the standard Intel architecture - it has been interesting following how Intel is responding to GPU adoption in the technical/parallel computing market. It took fifteen years for Intel to shrink the computing power of the teraflops-busting ASCI Red massively parallel Pentium II supercomputer down to something that fits inside of a PCI-Express coprocessor card – and the Xeon Phi coprocessor is only the first step in a long journey with coprocessor sidekicks riding posse with CPUs in pursuit of exascale computing. With over 50 Pentium cores on the card, this promises to significantly impact the capabilities of desktop computing, server and cloud compute power. Between Nvidia's Kepler GPU and Intel's Xeon Phi, it's going to be an interesting 12 months - especially given that our new simulation tools are designed for this type of environment. To read the rest of the Register article, click on the image or the title to learn more. * (re: tangentially: they considered buying some software I was involved in bringing to market, and when they didn't, the team created a start-up to commercialize it instead.)

Scooped by ComplexInsight

Faculty Summit 2012 - Microsoft Research


Microsoft Research will make its Faculty Summit accessible to the public, on July 16 and 17, 9:00 a.m. to 2:00 p.m. Pacific Time (noon to 5:00 p.m. Eastern). The summit gives an opportunity to hear leading academic researchers and educators, as well as Microsoft researchers, product group engineers, and architects, explore new opportunities in computer science research and advances that address real-world challenges. Should be at least interesting and hopefully awesome. Click on the image or title to learn more.
