Amazing Science
Scooped by Dr. Stefan Gruenwald!

MIT: Global E-mail Patterns Reveal "Clash of Civilizations"

The global pattern of e-mail communication reflects the cultural fault lines thought to determine future conflict, say computational social scientists.


Most political scientists regard the Cold War as a conflict between the capitalist countries of the West and the Communist Bloc of the East. As such, it was essentially a conflict of ideology. At the end of the Cold War, the question arose of what would drive the next wave of conflicts.


In 1992, the Harvard-based political scientist Samuel Huntington suggested that future conflicts would be driven largely by cultural differences. He went on to map out a new world order in which the people of the world are divided into nine culturally distinct civilisations. These include: Western civilisation; Latin American civilisation; the Orthodox world of former Soviet Union countries; the Sinic civilisation including China, the Koreas and Vietnam; the Muslim world of the greater Middle East; Sub-Saharan Africa; and so on. His argument was that future conflicts would be based around the fault lines at the edges of these civilisations. He published this view in a now famous article called “The Clash of Civilizations?” in the political journal Foreign Affairs.

Today, we get an answer of sorts thanks to the work of Bogdan State at Stanford University in California and a few pals. These guys have analysed a global database of e-mail messages, and their locations, sent by more than 10 million people over the space of a year. State and co say that the pattern of connections between these people clearly reflects the civilisations mapped out by Huntington. In other words, the way we send e-mails is a reflection of the mesh of civilisations that is an important driver of future conflict.
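
The clustering step can be sketched in miniature. The snippet below is a simplified stand-in for the community-detection methods such studies use, not the paper's actual algorithm, and every country pair and density value in it is invented: keep only the strong communication links, then read off the connected components as candidate "civilisation" clusters.

```python
from collections import defaultdict

def clusters_from_density(density, threshold):
    """Group countries into connected components of the strong-link graph."""
    graph = defaultdict(set)
    countries = set()
    for (a, b), d in density.items():
        countries.update((a, b))
        if d >= threshold:
            graph[a].add(b)
            graph[b].add(a)
    seen, components = set(), []
    for start in sorted(countries):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                     # depth-first walk over strong links
            node = stack.pop()
            if node not in comp:
                comp.add(node)
                stack.extend(graph[node] - comp)
        seen |= comp
        components.append(comp)
    return components

# Invented pairwise e-mail densities (higher = more contact per capita).
density = {("DE", "FR"): 0.9, ("FR", "ES"): 0.8, ("DE", "ES"): 0.7,
           ("RU", "UA"): 0.85, ("RU", "KZ"): 0.6,
           ("DE", "RU"): 0.2, ("ES", "UA"): 0.1}   # weak cross-cluster ties

clusters = clusters_from_density(density, threshold=0.5)
```

With the toy numbers above, the weak DE-RU and ES-UA links fall away and two blocs remain, which is exactly the kind of structure the researchers compare against Huntington's map.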

“The findings (unsurprisingly) support the idea that geography, transportation and administrative decisions are all important determinants of between-country communication: distance decreases density, as do visas, while direct flights increase it,” say the researchers. And there are surprising results as well. For example, a common border between two countries actually reduces the communication density between them, perhaps because of increased tensions. “These curious findings do raise the issue of potential problems with European integration, as well as of the higher potential for conflict between countries sharing borders, which may lead to less communication,” say State and pals.


There are one or two caveats with this kind of research, the main one being that the process of rescaling the data can introduce artefacts that then influence the observed effects. However, further research with other data sets should help to iron out these problems. Of course, if our planet is divided by civilisation in the way Huntington suggested, it’s not surprising that this is reflected in the pattern of global communication.

A more interesting question is whether this kind of computational social science can measure the ongoing pulse of global tensions and whether it has any predictive power in spotting where the next conflicts are likely to arise. That’s beyond the current state of the art, but it’s clearly an area worth watching.

Scooped by Dr. Stefan Gruenwald!

Any Two Pages on the Web Are Connected By 19 Clicks or Less


No one knows for sure how many individual pages are on the web, but right now, it’s estimated that there are more than 14 billion. Recently, though, Hungarian physicist Albert-László Barabási discovered something surprising about this massive number: Like actors in Hollywood connected by Kevin Bacon, from every single one of these pages you can navigate to any other in 19 clicks or less.


Barabási’s findings involved a simulated model of the web that he created to better understand its structure. He discovered that of the roughly 1 trillion web documents in existence—the aforementioned 14 billion-plus pages, along with every image, video or other file hosted on every single one of them—the vast majority are poorly connected, linked to perhaps just a few other pages or documents.


Distributed across the entire web, though, are a minority of pages—search engines, indexes and aggregators—that are very highly connected and can be used to move from one area of the web to another. These nodes serve as the “Kevin Bacons” of the web, allowing users to navigate from most areas to most others in 19 clicks or less.


Barabási credits this “small world” of the web to human nature—the fact that we tend to group into communities, whether in real life or the virtual world. The pages of the web aren’t linked randomly, he says: They’re organized in an interconnected hierarchy of organizational themes, including region, country and subject area.


Interestingly, this means that no matter how large the web grows, the same interconnectedness will rule. Barabási analyzed the network looking at a variety of levels—examining anywhere from a tiny slice to the full 1 trillion documents—and found that regardless of scale, the same 19-click-or-less rule applied.
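
The "small world" behaviour described above is easy to reproduce on a toy scale. The sketch below grows a preferential-attachment network (the random-graph model Barabási is known for, here hand-rolled with illustrative parameters, not the real web) and measures click distances with breadth-first search:

```python
import random
from collections import deque

def preferential_attachment(n, m, seed=0):
    """Grow a graph where each new node links to m targets chosen
    with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = {i: set() for i in range(n)}
    pool = list(range(m))          # node ids repeated once per link endpoint
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:     # m distinct, degree-biased targets
            chosen.add(rng.choice(pool))
        for t in chosen:
            edges[new].add(t)
            edges[t].add(new)
        pool.extend(chosen)
        pool.extend([new] * m)
    return edges

def clicks_from(edges, src):
    """Breadth-first search: number of clicks from src to every page."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for nxt in edges[node]:
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

web = preferential_attachment(n=2000, m=3)
dist = clicks_from(web, 0)
longest = max(dist.values())       # stays small even as n grows
```

Rerunning with larger `n` shows `longest` creeping up only logarithmically, which is the scale-free property behind the 19-click claim.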

Scooped by Dr. Stefan Gruenwald!

What Comes After the Cloud? How About Fog Computing?

Startup Symform says its shredded, distributed cloud is more resistant to natural disasters than traditional computing clouds.


The world has embraced the cloud. What’s not to like? Startups can grow rapidly without investing in racks of computers, companies can back up data easily, consumers can travel light and still have access to their huge photo libraries and other personal files.


Back in October, however, real clouds clashed with metaphorical clouds when Hurricane Sandy and its aftermath took down some key data centers in New York and New Jersey: a serious problem for businesses that had their main servers in New York and their backup servers in nearby New Jersey.


Commercial cloud service providers, for the most part, did pretty well; perhaps because some of the largest data centers, like Amazon’s northern Virginia server farm, were not in the disaster zone. But Sandy certainly reminded cloud service providers that redundant files have to be separated by more than a couple of racks, or even a couple of miles.


Startup Symform thinks it can provide better disaster resilience than even data centers hundreds of miles apart. And, says Bassam Tabbara, Symform cofounder and Chief Technical Officer, it can do that in a way that’s extremely cheap—and in some cases free—to its customers.


Tabbara describes Symform’s approach as a “decentralized, distributed, virtual, and crowd-sourced” cloud. Living in the San Francisco Bay Area, I can visualize that kind of cloud. However, we don’t call it a cloud here; we call it fog.
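
The "shredded" idea can be illustrated with the simplest possible erasure code: split the data into k shards plus one XOR parity shard, scatter them across machines, and the loss of any single machine is survivable. Symform's real scheme is more elaborate; this sketch only shows the principle, with made-up data.

```python
def shred(data: bytes, k: int):
    """Split data into k equal shards plus an XOR parity shard."""
    shard_len = -(-len(data) // k)                    # ceiling division
    padded = data.ljust(shard_len * k, b"\0")
    shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(k)]
    parity = bytearray(shard_len)
    for shard in shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return shards + [bytes(parity)]

def recover(shards, lost_index, original_len):
    """Rebuild one lost shard by XOR-ing the survivors, then reassemble."""
    shard_len = len(next(s for s in shards if s is not None))
    rebuilt = bytearray(shard_len)
    for idx, shard in enumerate(shards):
        if idx != lost_index:
            for i, b in enumerate(shard):
                rebuilt[i] ^= b
    whole = list(shards)
    whole[lost_index] = bytes(rebuilt)
    return b"".join(whole[:-1])[:original_len]        # drop parity and padding

data = b"the quick brown fox jumps over the lazy dog"
pieces = shred(data, 4)                               # 4 data shards + parity
```

Real systems use Reed-Solomon-style codes so that several simultaneous losses are survivable, but the XOR case shows why geographically scattered fragments beat two racks in the same flood zone.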

Scooped by Dr. Stefan Gruenwald!

Facebook, ARM, x86, and the future of data centers


What does the future of the data center look like? Complex and evolving. ARM CPUs are going to have a part to play, but creating a full server ecosystem around these products and achieving mass-market penetration is going to take years. Facebook’s Group Hug platform could kneecap traditional server vendors, but it only threatens Intel if Intel can’t build cheap processors that offer better performance per watt than the competition. At the Open Compute Summit last week, all of the vendors in question were confident that their own solutions would prove to be the best option for powering next-generation servers.


AMD has the fruits of its SeaMicro acquisition, new 64-bit ARMv8 processors in the works, and next-generation 28nm chips based on its Jaguar core launching this year, though there’s no information on whether or not Kabini and Temash will show up in servers. Intel has its own server Atom products and will refresh those chips with 22nm processors based on the first quad-core, out-of-order Atom that debuts later in 2013. ARM, of course, has server vendors like Calxeda, as well as companies like Applied Micro, whose 64-bit ARMv8 X-Gene design is planned to ship by the second half of this year.



The winner will be decided by manufacturing, design, and scalability as much as CPU architecture. Historically, Intel has had a better handle on those issues than any other vendor on the planet. (See: Deliberate excellence: Why Intel leads the world in semiconductor manufacturing.) ARM may force Intel to innovate, but the chances of a wholesale takeover are exceedingly small.

Scooped by Dr. Stefan Gruenwald!

Airports Are a Pandemic's Best Friend


After SARS broke out in China in 2002, it reached 29 countries in seven months. Air travel is a major reason why such infectious diseases spread throughout the globe so quickly. And yet even with such examples to study, scientists have had no way to precisely predict how the next infectious disease might spread through the nexus of world air terminals—until now.


In 2010 MIT engineer Ruben Juanes set out to model the movement of a pathogen from a single site of departure to junctions worldwide. If he could predict the flow of disease from a given airport and rank the most contagious ones, government officials could more effectively predict outbreaks and issue lifesaving warnings and vaccines. So Juanes and his team used a computer simulation to seed 40 major U.S. airports with virtual infected travelers. Then they mimicked the individual itineraries of millions of real passengers to model how people move through the system. The travel data included flights, wait times between flights, number of connections to international hubs, flight duration, and length of stay at destinations.


JFK International in New York—one of the world’s most heavily trafficked airports—emerged as the biggest culprit in disease spread. Honolulu, despite having just 40 percent of JFK’s traffic, came in third because of its many long-distance flights. The biggest surprise: The number of passengers per day did not directly correlate to contagion risk.
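
A drastically simplified version of this kind of ranking can be sketched as an earliest-arrival computation: model each route as a travel-time edge and score an airport by the mean time an infection seeded there takes to reach every other airport (Dijkstra's algorithm). The airports and flight times below are invented for illustration, and this is not the MIT team's actual model, which tracked individual itineraries and wait times.

```python
import heapq

def earliest_arrival(routes, source):
    """Dijkstra: earliest hour an outbreak at source reaches each airport."""
    best = {source: 0.0}
    frontier = [(0.0, source)]
    while frontier:
        t, here = heapq.heappop(frontier)
        if t > best.get(here, float("inf")):
            continue
        for nxt, hours in routes.get(here, {}).items():
            arrive = t + hours
            if arrive < best.get(nxt, float("inf")):
                best[nxt] = arrive
                heapq.heappush(frontier, (arrive, nxt))
    return best

def rank_spreaders(routes):
    """Airports sorted by mean arrival time elsewhere (lower = more dangerous)."""
    airports = set(routes)
    for dests in routes.values():
        airports |= set(dests)
    scores = {}
    for src in airports:
        arrival = earliest_arrival(routes, src)
        scores[src] = sum(arrival[a] for a in airports if a != src) / (len(airports) - 1)
    return sorted(airports, key=lambda a: (scores[a], a))

# Invented symmetric flight times in hours.
links = {("JFK", "LHR"): 7, ("JFK", "HNL"): 10, ("JFK", "ORD"): 2,
         ("ORD", "HNL"): 9, ("LHR", "HNL"): 17, ("ORD", "LHR"): 9}
routes = {}
for (a, b), h in links.items():
    routes.setdefault(a, {})[b] = h
    routes.setdefault(b, {})[a] = h

ranking = rank_spreaders(routes)
```

Even this toy version shows why raw traffic alone doesn't determine risk: an airport's score depends on how its connections shorten paths to everywhere else.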

Scooped by Dr. Stefan Gruenwald!

The Best Scientific Figures of 2012: Sperm Trajectories, Evolving Humans and a Tomato Tapestry


Figures contained in scientific reports are a neglected area of the design world. Typically intended for display to academic audiences in the cramped confines of a journal, they tend to be utilitarian and esoteric -- yet in the course of looking through hundreds of articles in 2012, we found that certain figures transcended the technical and rose to the level of communication art. They combined visual clarity, information density and insight into some fact of fundamental interest.

Rescooped by Dr. Stefan Gruenwald from Hopital 2.0!

Who are the doctors most trusted by doctors? Big data can tell you


ZocDoc, Healthgrades, Vitals, Yelp and other sites can tell you what patients think of their doctors. But finding out in any aggregate way what doctors think of their peers has been much harder, if not near impossible, for patients — up until now. By accessing information in government databases through FOIA (Freedom of Information Act) requests, healthcare innovators are now able to share connections between doctors that are based on millions of physician referrals — a valuable indicator of who doctors hold in esteem.


Last month, Fred Trotter, a self-identified “hacktivist,” revealed that he had obtained a dataset of Medicare physician referrals through a FOIA request and was making the initial data available to those who supported a Medstartr crowdfunding campaign meant to build out his “DocGraph” and make it freely available. This week, he announced that he not only blew past his $15,000 funding goal, but was launching a second campaign to integrate his current data with an additional dataset.


The new tool, which reflects 25 million doctor referral connections, enables patients to see how many doctors are linked to a particular doctor, as well as their locations. As patients search for new physicians and specialists, being able to see who their current doctors are linked with could help them decide whom to visit. It also gives doctors an opportunity to build online networks that reflect their offline networks, said Ron Gutman, founder and CEO of HealthTap. In a post about his “DocGraph” project, Trotter said that his data wasn’t strictly a “referral” data set because, in some cases, doctors might be linked through a patient they both happened to see at the same time, not through an active referral. But Gutman emphasized that HealthTap’s DOConnect considered more than Medicare referrals in mapping connections between doctors.
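
A referral graph like DocGraph lends itself to PageRank-style scoring, where each referral counts as a weighted vote of trust. The sketch below is a generic weighted PageRank, not DocGraph's actual methodology, and the doctor names and referral counts are made up.

```python
def pagerank(edges, damping=0.85, iterations=50):
    """Weighted PageRank over directed referral edges {(src, dst): count}."""
    nodes = sorted({n for edge in edges for n in edge})
    out_weight = {n: 0.0 for n in nodes}
    for (src, _), w in edges.items():
        out_weight[src] += w
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        nxt = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for (src, dst), w in edges.items():
            nxt[dst] += damping * rank[src] * w / out_weight[src]
        # doctors with no outgoing referrals spread their rank evenly
        dangling = sum(rank[n] for n in nodes if out_weight[n] == 0.0)
        for n in nodes:
            nxt[n] += damping * dangling / len(nodes)
        rank = nxt
    return rank

# Invented referral counts: two GPs and a dermatologist refer to a cardiologist.
referrals = {("gp_adams", "cardio_chen"): 40,
             ("gp_adams", "derm_diaz"): 5,
             ("gp_baker", "cardio_chen"): 30,
             ("derm_diaz", "cardio_chen"): 2}

scores = pagerank(referrals)
most_trusted = max(scores, key=scores.get)
```

Simple in-degree would give a similar answer here; PageRank's advantage on a real referral graph is that a referral from a highly referred-to doctor counts for more.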

Via Olivier Delannoy, Chanfimao
Rescooped by Dr. Stefan Gruenwald from visual data!

Data Wall: IBM Think Exhibit


Located on Jaffe Drive at Lincoln Center in New York, the THINK exhibit combined three unique experiences to engage visitors in a conversation about how we can improve the way we live and work. 


The IBM data wall was the introduction to the THINK exhibit at Lincoln Center. The exhibit celebrated IBM’s centennial year and 100 years of human progress.

The wall aimed to educate the public about five areas of interest to the New York community. These included air quality, water waste, potential solar energy, fraud detection, and traffic sensing.


Visit the portfolio link to view detail images of the Data Wall, as well as concept sketches and explanations for the design for the solar and traffic sections.

Via Lauren Moss
Rescooped by Dr. Stefan Gruenwald from Algos!

The top 20 scientific data visualisation tools scientists and teachers should know about


From simple charts to complex maps and infographics, Brian Suda's round-up of the best – and mostly free – tools has everything you need to bring your data to life. A common question is how to get started with data visualisations. Beyond following blogs, you need to practice – and to practice, you need to understand the tools available. In this article, get introduced to 20 different tools for creating visualisations.

Via Lauren Moss, Baiba Svenca, Goulu
Randy Rebman's curator insight, January 28, 2013 12:33 PM

This looks like it might be a good source for integrating infographics into the classroom.

National Microscope Exchange's comment, February 18, 2015 12:00 AM
Superb Article
Scooped by Dr. Stefan Gruenwald!

D3, a Data Driven Document Language - Presenting Scientific Data on the Web with Ease


D3.js is a JavaScript library for manipulating documents based on data. D3 helps you bring data to life using HTML, SVG and CSS. D3’s emphasis on web standards gives you the full capabilities of modern browsers without tying yourself to a proprietary framework, combining powerful visualization components and a data-driven approach to DOM manipulation.


D3 allows you to bind arbitrary data to a Document Object Model (DOM), and then apply data-driven transformations to the document. For example, you can use D3 to generate an HTML table from an array of numbers. Or, use the same data to create an interactive SVG bar chart with smooth transitions and interaction.


D3 is not a monolithic framework that seeks to provide every conceivable feature. Instead, D3 solves the crux of the problem: efficient manipulation of documents based on data. This avoids proprietary representation and affords extraordinary flexibility, exposing the full capabilities of web standards such as CSS3, HTML5 and SVG. With minimal overhead, D3 is extremely fast, supporting large datasets and dynamic behaviors for interaction and animation. D3’s functional style allows code reuse through a diverse collection of components and plugins.



Scooped by Dr. Stefan Gruenwald!

The greatest puzzle: How Do You Assemble a Brain? Randomly...


Swiss researchers show how a chance distribution of six neuronal cell types can connect to form the synapses of a working brain without any overall design control. Is this a clue to complex systems design?


How do you assemble and wire an information processing device as complex as the mammalian brain? There are roughly 86 billion neurons in a human brain, forming about a quadrillion synapses. A rat’s brain is just one thousandth that size, but still pretty complex, with 56 million neurons and 500 billion synapses.


How does the brain know to put a nest basket cell here, a small basket cell over there, a large basket cell in the middle, a Martinotti cell on the left and a bi-tufted cell on the right, all wired up to pyramidal cells? There has to be a plan, doesn’t there? I mean, the body doesn’t just throw its inventory of brain cells out there like a bunch of pick-up sticks, to fall where they may.

As it turns out, that may be almost exactly what the brain does. Like so much else in capital-L Life, connections in the brain may be emergent: the developing brain lays out its thinking cells in a nearly random mixture, and then wires them up after the fact.

The Blue Brain group (motto: “Reconstructing the brain piece by piece and building a virtual brain on a supercomputer”) at Switzerland’s École Polytechnique Fédérale de Lausanne (EPFL) has built a computer model of a 298-cell slice of rat cerebral cortex. The model distributed the six types of neurons randomly, according to their frequency in natural tissue. The researchers tracked “the incidental overlap of axonal and dendritic arbors,” the tree-like branchings at either end of the nerve cell that reach out and form synapses.


The researchers used their software tools (BlueBuilder) to model the nervous system. They then reduced the synapse-identification problem to a very large number of computer-graphics “cylinder to cylinder touch” assessments, running on a 16,384-CPU IBM Blue Gene/P computer at the Center for Advanced Modeling Science (CADMOS) in Lausanne.

Finally, they compared the results to data from cross-sections of actual rat brains (stained, digitized and analyzed to show cell types and synapse locations) and found an overall 75% correspondence between the two. Not absolute correspondence…but a good deal better than chance.
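
The "incidental overlap" idea can be caricatured in a few lines: scatter neurons at random and call a pair a candidate synapse whenever their arbor radii overlap. The cell count echoes the 298-cell slice, but the geometry, radii and uniform placement below are invented simplifications, nothing like EPFL's detailed arbor models.

```python
import math
import random

def touch_count(n, arbor_radius, box=100.0, seed=42):
    """Place n neurons uniformly in a cube; count arbor-overlap pairs."""
    rng = random.Random(seed)
    cells = [(rng.uniform(0, box), rng.uniform(0, box), rng.uniform(0, box))
             for _ in range(n)]
    touches = 0
    for i in range(n):
        for j in range(i + 1, n):
            # Spherical stand-in for an arbor: overlap -> possible synapse.
            if math.dist(cells[i], cells[j]) < 2 * arbor_radius:
                touches += 1
    return touches

small_arbors = touch_count(298, arbor_radius=5.0)
large_arbors = touch_count(298, arbor_radius=15.0)
```

Even this crude model makes the paper's point plausible: with realistic arbor sizes, random placement alone produces enormous numbers of incidental contacts for activity to prune and refine.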

Rescooped by Dr. Stefan Gruenwald from Science News!

‘Superorganisations’ – Learning from Nature’s Networks


Fritjof Capra, in his book ‘The Hidden Connections’ applies aspects of complexity theory, particularly the analysis of networks, to global capitalism and the state of the world; and eloquently argues the case that social systems such as organisations and networks are not just like living systems – they are living systems. The concept and theory of living systems (technically known as autopoiesis) was introduced in 1972 by Chilean biologists Humberto Maturana and Francisco Varela.


This is a complete version of a ‘long-blog’ written by Al Kennedy on behalf of ‘The Nature of Business’ blog and BCI: Biomimicry for Creative Innovation www.businessinspired...

Via Peter Vander Auwera, ddrrnt, Spaceweaver, David Hodgson, pdjmoo, Sakis Koukouvis
Monica S Mcfeeters's curator insight, January 18, 2014 8:57 PM

A look at how to go organic with business models in a tech age...

Nevermore Sithole's curator insight, March 14, 2014 9:01 AM

Learning from Nature’s Networks

Scooped by Dr. Stefan Gruenwald!

First predictive computational model of gene networks that control the development of sea-urchin embryos


As an animal develops from an embryo, its cells take diverse paths, eventually forming different body parts—muscles, bones, heart. In order for each cell to know what to do during development, it follows a genetic blueprint, which consists of complex webs of interacting genes called gene regulatory networks.


Biologists at the California Institute of Technology (Caltech) have spent the last decade or so detailing how these gene networks control development in sea-urchin embryos. Now, for the first time, they have built a computational model of one of these networks. This model, the scientists say, does a remarkably good job of calculating what these networks do to control the fates of different cells in the early stages of sea-urchin development—confirming that the interactions among a few dozen genes suffice to tell an embryo how to start the development of different body parts in their respective spatial locations. The model is also a powerful tool for understanding gene regulatory networks in a way not previously possible, allowing scientists to better study the genetic bases of both development and evolution.
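
The logic of a gene regulatory network can be sketched as a Boolean network: genes are on/off switches whose next state is a function of other genes' current states. This is a generic textbook-style sketch, not the Caltech sea-urchin model, and the three-gene circuit below is entirely invented.

```python
def step(state, rules):
    """Apply every gene's Boolean update rule to the current on/off state."""
    return {gene: rule(state) for gene, rule in rules.items()}

def run_to_steady_state(state, rules, max_steps=50):
    """Iterate until the expression pattern stops changing (a fixed point)."""
    for _ in range(max_steps):
        nxt = step(state, rules)
        if nxt == state:
            return nxt
        state = nxt
    return state

# Invented circuit: an external signal switches on an activator and switches
# off a repressor; the structural gene fires only when the activator is on
# and the repressor is off.
rules = {
    "signal":     lambda s: s["signal"],
    "activator":  lambda s: s["signal"],
    "repressor":  lambda s: not s["signal"],
    "structural": lambda s: s["activator"] and not s["repressor"],
}

on = run_to_steady_state(
    {"signal": True, "activator": False, "repressor": True, "structural": False},
    rules)
off = run_to_steady_state(
    {"signal": False, "activator": True, "repressor": False, "structural": True},
    rules)
```

Scaled up to a few dozen genes with spatially varying inputs, exactly this kind of state-update model is what lets a network "compute" which body part each embryonic cell should become.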

Scooped by Dr. Stefan Gruenwald!

International Consortium Builds ‘Google Map’ of Human Metabolism


Building on earlier pioneering work by researchers at the University of California, San Diego, an international consortium of university researchers has produced the most comprehensive virtual reconstruction of human metabolism to date. Scientists could use the model, known as Recon 2, to identify causes of and new treatments for diseases like cancer, diabetes and even psychiatric and neurodegenerative disorders. Each person’s metabolism, which represents the conversion of food sources into energy and the assembly of molecules, is determined by genetics, environment and nutrition.


Doctors have long recognized the importance of metabolic imbalances as an underlying cause of disease, but scientists have been ramping up their research on the connection as a result of compelling evidence enabled by the Human Genome Project and advances in systems biology, which leverages the power of high-powered computing to build vast interactive databases of biological information.


“Recon 2 allows biomedical researchers to study the human metabolic network with more precision than was ever previously possible. This is essential to understanding where and how specific metabolic pathways go off track to create disease,” said Bernhard Palsson, Galletti Professor of Bioengineering at UC San Diego Jacobs School of Engineering.


“It’s like having the coordinates of all the cars in town, but no street map. Without this tool, we don’t know why people are moving the way they are,” said Palsson. He likened Recon 2 to Google Maps for its ability to merge complex details into a single, interactive map. For example, researchers looking at how metabolism sets the stage for cancerous tumor growth could zoom in on the “map” for finely detailed images of individual metabolic reactions, or zoom out to look at patterns and relationships among pathways or different sectors of metabolism. This is not unlike how you can get a street view of a single house or zoom out to see how the house fits into the whole neighborhood, city, state, country and globe. And just as Google Maps brings together a broad set of data – such as images, addresses, streets and traffic flow – into an easily navigated tool, Recon 2 pulls together a vast compendium of data from published literature and existing models of metabolic processes.


Recon 2 is already proving its utility, according to Ines Thiele, a professor at the University of Iceland and UC San Diego alumna, who led the Recon 2 effort. Thiele earned her Ph.D. in bioinformatics as a student of Palsson’s and was part of the original Recon 1 team.


Thiele said Recon 2 has successfully predicted alterations in metabolism that are currently used to diagnose certain inherited metabolic diseases.

“The use of this foundational resource will undoubtedly lead to a myriad of exciting predictions that will accelerate the translation of basic experimental results into clinical applications,” said Thiele. “Ultimately, I envision it being used to personalize diagnosis and treatment to meet the needs of individual patients. In the future, this capability could enable doctors to develop virtual models of their patients’ individual metabolic networks and identify the most efficacious treatment for various diseases including diabetes, cancer and neurodegenerative diseases.”


As much as Recon 2 marks a significant improvement over Recon 1, there is still much work to be done, according to the research team. Thiele said Recon 2 accounts for almost 1,800 genes of an estimated 20,000 protein-coding genes in the human genome. “Clearly, further community effort will be required to capture chemical interactions with and between the rest of the genome,” she said.
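
The core constraint behind a reconstruction like Recon 2 is steady-state mass balance: every internal metabolite's production must equal its consumption (S·v = 0 in stoichiometric-matrix form). The three-reaction toy network below is invented to show just that check; Recon 2 itself couples thousands of such reactions with flux bounds and optimization.

```python
# Stoichiometry: reaction -> {metabolite: coefficient}, negative = consumed.
reactions = {
    "uptake":     {"glucose": +1},                    # exchange with outside
    "glycolysis": {"glucose": -1, "pyruvate": +2},
    "secretion":  {"pyruvate": -1},
}

def is_balanced(reactions, fluxes, internal):
    """True if every internal metabolite is at steady state under fluxes."""
    net = {m: 0.0 for m in internal}
    for rxn, stoich in reactions.items():
        for met, coeff in stoich.items():
            if met in net:
                net[met] += coeff * fluxes[rxn]
    return all(abs(v) < 1e-9 for v in net.values())

internal = {"glucose", "pyruvate"}
ok = is_balanced(reactions,
                 {"uptake": 1.0, "glycolysis": 1.0, "secretion": 2.0}, internal)
bad = is_balanced(reactions,
                  {"uptake": 1.0, "glycolysis": 1.0, "secretion": 1.0}, internal)
```

The second flux vector fails because glycolysis produces two pyruvate per glucose but secretion removes only one; disease predictions from such models amount to asking which balanced flux patterns remain possible when a gene (and its reactions) is removed.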

Victoria Auyeung's curator insight, July 11, 2014 7:49 PM

Look forward to a future in which we can use the human metabolic network to predict how our diets may affect our health? I certainly do!

Scooped by Dr. Stefan Gruenwald!

An amazing invisible truth about Wikipedia hiding inside Wikipedia's GeoTag Information


A large number of Wikipedia articles are geocoded. This means that when an article pertains to a location, its latitude and longitude are linked to the article. As you can imagine, this can be useful to generate insightful and eye-catching infographics.


A while ago, a team at Oxford built this magnificent tool to illustrate the language boundaries in Wikipedia articles. This led me to wonder if it would be possible to extract the different topics in Wikipedia.


This is exactly what I managed to do in the past few days. I downloaded all of Wikipedia, extracted 300 different topics using a powerful clustering algorithm, projected all the geocoded articles on a map and highlighted the different clusters (or topics) in red. The results were much more interesting than I thought. For example, the map on the left shows all the articles related to mountains, peaks, summits, etc. in red on a blue base map.  The highlighted articles from this topic match the main mountain ranges exactly.
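
The final mapping step can be caricatured as follows: assign each geocoded article to the topic whose keyword set it overlaps most, then collect coordinates per topic, ready to plot. This is a drastic simplification of the 300-topic clustering described above, and every article, keyword and coordinate below is invented.

```python
# Hypothetical topic keyword sets (a real pipeline would learn these).
topics = {
    "mountains": {"peak", "summit", "ridge", "glacier"},
    "rivers":    {"river", "tributary", "delta", "basin"},
}

# Hypothetical geocoded articles: (title, text, (lat, lon)).
articles = [
    ("Mont Blanc", "highest peak summit glacier alps", (45.83, 6.86)),
    ("Matterhorn", "famous peak ridge climbing",       (45.98, 7.66)),
    ("Danube",     "longest river basin tributary",    (48.22, 16.39)),
]

def cluster_articles(articles, topics):
    """Map topic -> list of (lat, lon) for articles matching its keywords."""
    clusters = {name: [] for name in topics}
    for title, text, coords in articles:
        words = set(text.split())
        best = max(topics, key=lambda t: len(topics[t] & words))
        clusters[best].append(coords)
    return clusters

layers = cluster_articles(articles, topics)
```

Plotting one topic's coordinate list in red over a base map reproduces, in miniature, the mountain-range figure described in the article.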

Scooped by Dr. Stefan Gruenwald!

SHAMAN: How to store today's data and ensure that it will always be accessible

This project will develop and test a next generation digital preservation framework including tools for analysing, ingesting, managing, accessing and reusing information objects and data.

The SHAMAN Integrated Project aims at developing a new framework for long-term digital preservation (more than one century) by exploring the potential of recent developments in the areas of GRID computing, federated digital library architectures, multivalent emulation and semantic representation and annotation.


The researchers' vision is: "For the longer term, SHAMAN will develop radically new approaches to Digital Preservation, such as those inspired by human capacity to deal with information and knowledge, providing a sound basis and instruments for unleashing the potential of advanced ICT to automatically act on high volumes and dynamic and volatile digital content, guaranteeing its preservation, keeping track of its evolving semantics and usage context and safeguarding its integrity, authenticity and long term accessibility over time."


The project plans to deliver a set of integrated tools supporting the various aspects of the preservation process: analysis/characterisation, ingestion, management, access and reuse. Work includes trials and validation of the tools in three application domains dealing with different types of objects: scientific publishing and government archives, industrial design and engineering (e.g. CAD), and e-science resources.


SHAMAN's dissemination and exploitation plans aim at actively fostering outreach and take-up of results and will be tailored according to the specific needs of the scientific / academic world and of industry users. SHAMAN's work will be coordinated with other digital preservation projects and initiatives at national and international level.
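
One small, concrete piece of the preservation problem mentioned above is integrity safeguarding: record a cryptographic digest for each object at ingest, then re-verify later to detect silent corruption. This is a generic fixity check, not SHAMAN's actual toolchain, and the archive contents are invented.

```python
import hashlib

def fixity_manifest(objects):
    """Map object id -> SHA-256 digest recorded at ingest time."""
    return {oid: hashlib.sha256(data).hexdigest()
            for oid, data in objects.items()}

def verify(objects, manifest):
    """Return ids whose current bytes no longer match the manifest."""
    return sorted(oid for oid, data in objects.items()
                  if hashlib.sha256(data).hexdigest() != manifest[oid])

archive = {"report.pdf": b"original bytes", "scan.tiff": b"image data"}
manifest = fixity_manifest(archive)

archive["scan.tiff"] = b"image dat\x00"     # simulate bit rot on disk
corrupted = verify(archive, manifest)
```

Century-scale preservation layers much more on top (format migration, emulation, semantic annotation), but periodic fixity sweeps like this are the baseline that keeps a repository honest.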

Scooped by Dr. Stefan Gruenwald!

Simulating 25,000 generations of evolution, researchers discover why biological networks tend to organize


By simulating 25,000 generations of evolution within computers, Cornell University engineering and robotics researchers have discovered why biological networks tend to be organized as modules – a finding that will lead to a deeper understanding of the evolution of complexity. The new insight also will help evolve artificial intelligence, so robot brains can acquire the grace and cunning of animals. From brains to gene regulatory networks, many biological entities are organized into modules – dense clusters of interconnected parts within a complex network.

For decades biologists have wanted to know why humans, bacteria and other organisms evolved in a modular fashion. Like engineers, nature builds modularly, creating and combining distinct parts, but that does not explain how such modularity evolved in the first place. Renowned biologists Richard Dawkins, Günter P. Wagner, and the late Stephen Jay Gould identified the question of modularity as central to the debate over "the evolution of complexity." For years, the prevailing assumption was simply that modules evolved because entities that were modular could respond to change more quickly, and therefore had an adaptive advantage over their non-modular competitors. But that may not be enough to explain the origin of the phenomenon.

The team discovered that evolution produces modules not because they produce more adaptable designs, but because modular designs have fewer and shorter network connections, which are costly to build and maintain. As it turned out, it was enough to include a "cost of wiring" to make evolution favor modular architectures.

The results may help explain the near-universal presence of modularity in biological networks as diverse as neural networks – such as animal brains – and vascular networks, gene regulatory networks, protein-protein interaction networks, metabolic networks and even human-constructed networks such as the Internet. "Being able to evolve modularity will let us create more complex, sophisticated computational brains," says Clune. Says Lipson: "We've had various attempts to try to crack the modularity question in lots of different ways. This one by far is the simplest and most elegant."
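
The "cost of wiring" intuition is easy to make concrete: lay nodes out in space and compare the total wire length of a modular connection pattern (mostly short, local links) against an equally sized pattern that ignores the spatial modules. The layout and edge lists below are invented for illustration; the actual study evolved networks over 25,000 generations rather than comparing two fixed designs.

```python
import math

def wiring_cost(positions, edges):
    """Total Euclidean length of all connections."""
    return sum(math.dist(positions[a], positions[b]) for a, b in edges)

# Two spatial modules of four nodes each, far apart on the x-axis.
positions = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1),
             4: (9, 0), 5: (9, 1), 6: (10, 0), 7: (10, 1)}

# Modular: dense short links within each module, one long bridge between.
modular = [(0, 1), (0, 2), (1, 3), (2, 3),
           (4, 5), (4, 6), (5, 7), (6, 7), (3, 4)]

# Non-modular: the same number of links, mostly spanning the two modules.
non_modular = [(0, 4), (1, 5), (2, 6), (3, 7), (0, 6),
               (1, 7), (2, 4), (3, 5), (0, 1)]

cost_modular = wiring_cost(positions, modular)
cost_non_modular = wiring_cost(positions, non_modular)
```

With identical link counts, the modular layout pays for only one long wire, which is exactly the selective pressure that, in the simulations, was enough to make modularity win.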

Scooped by Dr. Stefan Gruenwald!

Solving puzzles without a picture: New algorithm assembles chromosomes from next generation sequencing data


One of the most difficult problems in the field of genomics is assembling relatively short "reads" of DNA into complete chromosomes. In a new paper published in Proceedings of the National Academy of Sciences, an interdisciplinary group of genome and computer scientists has solved this problem, creating an algorithm that can rapidly create "virtual chromosomes" with no prior information about how the genome is organized.


The powerful DNA sequencing methods developed about 15 years ago, known as next generation sequencing (NGS) technologies, create thousands of short fragments. In species whose genetics has already been extensively studied, existing information can be used to organize and order the NGS fragments, rather like using a sketch of the complete picture as a guide to a jigsaw puzzle. But as genome scientists push into less-studied species, it becomes more difficult to finish the puzzle.


To solve this problem, a team led by Harris Lewin, distinguished professor of evolution and ecology and vice chancellor for research at the University of California, Davis, and Jian Ma, an assistant professor at the University of Illinois at Urbana-Champaign, created a computer algorithm that uses the known chromosome organization of one or more known species and NGS information from a newly sequenced genome to create virtual chromosomes.


"We show for the first time that chromosomes can be assembled from NGS data without the aid of a preexisting genetic or physical map of the genome," Lewin said. The new algorithm will be very useful for large-scale sequencing projects such as G10K, an effort to sequence 10,000 vertebrate genomes of which very few have a map, Lewin said.


"As we have shown previously, there is much to learn about phenotypic evolution from understanding how chromosomes are organized in one species relative to other species," he said. The algorithm is called RACA (reference-assisted chromosome assembly) and was co-developed by Jaebum Kim, now at Konkuk University, South Korea, and Denis Larkin of Aberystwyth University, Wales. Kim wrote the software tool, which was evaluated using simulated data, standardized reference genome datasets, and a primary NGS assembly of the newly sequenced Tibetan antelope genome generated by BGI (Shenzhen, China) in collaboration with Professor Ri-Li Ge at Qinghai University, China.


Larkin led the experimental validation, in collaboration with scientists at BGI, proving that predictions of chromosome organization were highly accurate. Ma said that the new RACA algorithm will perform even better as developing NGS technologies produce longer reads of DNA sequence. "Even with what is expected from the newest generation of sequencers, complete chromosome assemblies will always be a difficult technical issue, especially for complex genomes. RACA predictions address this problem and can be incorporated into current NGS assembly pipelines," Ma said.
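The reference-assisted idea can be sketched in a few lines. This is a hypothetical, highly simplified illustration, not the actual RACA algorithm (which works on scaffold adjacencies across multiple references and weighs conflicting evidence): look up where each scaffold's marker genes fall in a related reference genome, flip any scaffold whose genes run backwards relative to that reference, and concatenate the scaffolds in reference order to form a virtual chromosome. The gene and scaffold names below are invented.

```python
# Hypothetical reference gene order and hypothetical NGS scaffolds,
# each scaffold being a short ordered list of marker genes.
reference = ["g1", "g2", "g3", "g4", "g5", "g6", "g7", "g8"]
scaffolds = {
    "scafA": ["g4", "g3"],   # sequenced in reverse orientation
    "scafB": ["g7", "g8"],
    "scafC": ["g1", "g2"],
}

ref_pos = {g: i for i, g in enumerate(reference)}  # gene -> reference index

def place(genes):
    # Where does this scaffold sit in the reference, and which way does it point?
    pos = [ref_pos[g] for g in genes]
    oriented = genes if pos == sorted(pos) else genes[::-1]  # flip if reversed
    return min(pos), oriented

# Sort scaffolds by their leftmost reference position, then concatenate.
virtual = sorted(place(genes) for genes in scaffolds.values())
chromosome = [g for _, genes in virtual for g in genes]
print(chromosome)
```

Here scafA is detected as reversed and flipped, and the three scaffolds are stitched into the virtual chromosome g1 through g4 followed by g7, g8 (g5 and g6 were never sequenced, so a gap remains, as it would in a real assembly).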


Peta, Exa, Yotta And Beyond: Big Data Reaches Cosmic Proportions

Since the advent of big data, it's been a struggle for some to get a real sense of just how big big data really is. You hear strange terms like "peta," "exa" and "yotta"… but what does all that really mean? When managing massive amounts of data, the scales we're talking about can quickly reach astronomical proportions.


Coming from Indiana, where the value of Pi might have once legally been 3, it would be easy to just slap the label "gi-fracking-normous" on the scales we're talking about, but I'm going to push past my native upbringing and focus on recent efforts to quantify big data. A recent infographic from clearCi is one such effort, outlining the scale of data produced on the Internet each day: 2.5 quintillion bytes of data. Storage vendor Seagate figures that a total of 450 exabytes of storage shipped in 2011.


How Big Can Data Get? It will get bigger, of course. The next level of data will be the brontobyte, which is 10^27 bytes. The human body contains 7 octillion atoms. If each atom were a byte of information, that would be 7 brontobytes of data.


No one company, not even Google, will ever get to the upper ends of these scales. But with terabyte amounts of data an everyday occurrence and petabytes not so rare anymore, a lot of companies will need to wrap their heads around the notion of what the big in big data really means and bring it down to Earth.
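For readers who want the ladder of units in one place, here is the arithmetic behind the article's figures. Note that "brontobyte" is an informal, not-yet-official name for 10^27 bytes, unlike the standardized SI prefixes up to yotta.

```python
# Decimal byte prefixes as powers of ten ("brontobyte" is informal).
scales = {
    "kilobyte": 3, "megabyte": 6, "gigabyte": 9, "terabyte": 12,
    "petabyte": 15, "exabyte": 18, "zettabyte": 21, "yottabyte": 24,
    "brontobyte": 27,
}

def to_bytes(n, unit):
    # Convert n units (e.g. 2.5 exabytes) to raw bytes.
    return n * 10 ** scales[unit]

# 2.5 quintillion bytes produced per day (the clearCi figure), in exabytes:
per_day_eb = 2.5e18 / 10 ** scales["exabyte"]

# 7 octillion atoms in the human body, one byte each, in brontobytes:
body_bb = 7e27 / 10 ** scales["brontobyte"]

print(per_day_eb, "exabytes per day;", body_bb, "brontobytes per body")
```

So the Internet's daily output is about 2.5 exabytes, and the body-as-storage thought experiment indeed lands on 7 brontobytes.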


Rescooped by Dr. Stefan Gruenwald from Content Curation World!

A Curated Selection of Data Visualization Charts and Infographics: The Information Is Beautiful Awards


Robin Good: David McCandless, the author of the book Information is Beautiful, celebrates great data visualization and information design work through the Information is Beautiful Awards.

Together with a jury of experts like Brian Eno, Paola Antonelli, Maria Popova, Simon Rogers and Aziz Kami, he has curated a unique selection of 300 designs and a short list of finalists in the following categories:


» Data visualization – A singular visualisation of data or information.


» Infographic – Using multiple data visualisations in service to a theme or story.


» Interactive visualization – Any viz where you can dynamically filter or explore the data.


» Data journalism – A combination of text and visualizations in a journalistic format.


» Motion infographic – Moving and animated visualizations along a theme or story.


» Tool or website – Online tools & apps to aid datavizzing.


The selection itself makes the site and this initiative well worth a tour.




Longlist selection:


Shortlist selection:



Via Robin Good

Scientific Data Visualization and Infographics Resources

Data visualizations and infographics can make complex datasets easier to understand. A graphical representation of data and statistics can make complicated concepts and information comprehensible in less time.


Many visualizations focus on representing a specific set of data or statistical information. Others focus on less-concrete topics, providing a visual representation of abstract concepts. Generally speaking, the first type looks more like graphs or charts, while the latter is often more creative and imaginative.


But visualizations and infographics can be used poorly, too. Putting in too much information (or not enough), using improper formats for the information provided, and other failures are common. Many useful resources for infographics and data visualization are compiled here; most are galleries of effective graphics, though some also provide how-to information for information designers.


Also check out the intelligent, continuously updated wall charts based on this information:


China is building a 100-petaflops supercomputer


As the U.S. launched what's expected to be the world's fastest supercomputer at 20 petaflops (peak performance), China announced it is building a machine intended to be five times faster when it is deployed in 2015. China's Tianhe-2 supercomputer will run at a peak performance of 100 petaflops (a petaflops is a quadrillion floating-point calculations per second). It was designed by China's National University of Defense Technology, according to the Guangzhou Supercomputing Center, where the machine will be housed. Tianhe-2 could help keep China competitive with the future supercomputers of other countries, as industry experts estimate machines will start reaching 1,000 petaflops (1 exaflops) performance by 2018.
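The arithmetic behind these headline numbers is straightforward; as a quick sanity check using only the figures quoted in the article:

```python
# "Peta" = 10^15 and "exa" = 10^18, so 100 petaflops is 10^17 flops.
PETA, EXA = 10 ** 15, 10 ** 18

us_peak      = 20  * PETA   # the U.S. machine the article cites, at peak
tianhe2_peak = 100 * PETA   # Tianhe-2's 2015 design target, at peak

speedup     = tianhe2_peak / us_peak  # how much faster Tianhe-2 would be
exaflop_gap = EXA / tianhe2_peak      # how far it remains from exascale

print(speedup, exaflop_gap)
```

This confirms the "five times faster" claim and shows that even Tianhe-2 would still be a factor of ten short of the 1-exaflops machines projected for 2018.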


Visualizing a Full Day of Airplane Paths over the USA


At any given moment, there can be 30,000 manmade objects in the sky above us: Planes, helicopters, satellites, weather balloons, space debris, and other diverse technologies. They watch, they guide, they protect, they communicate, they transport, they predict, they look out into the stars. In less than 100 years, the deep blue has become a complex web of machinery.


Our lives are closely tied to these networks in the sky, but a disjunction has occurred between us and the aerial technologies we use every day. We rarely consider the hulking, physical machines that have now become core to our lifestyle. By not being aware of the hardware we use every day, we may also be unaware of the social, economic, cultural, and political importance of these technologies. Visualizing them may lead to a better understanding of the forces that are shaping our future.


The Internet Map: Revealing the Hidden Structure of the Network - information aesthetics


The Internet Map encompasses over 350,000 websites based in 196 countries, clustered according to about 2 million mutual links between them. Developed by a small team of Russian enthusiasts, the interactive Internet map is an "attempt to look into the hidden structure of the network, fathom its colossal scale, and examine that which is impossible to understand from the bare figures of statistics." Every circle on the map stands for a unique website, with its size determined by the website's traffic. Its color depends on the country of origin, with red for Russia, yellow for China, purple for Japan, and light-blue for the US.


Mapping population densities: some interesting models


A new kind of cartogram to map population densities on this planet, from an article in the Telegraph. The cartograms use data from the Global Rural-Urban Mapping Project.
