How do oysters attach themselves to rocks? They need a glue, but a glue that can set in a watery environment. In this installment of "Joe's Big Idea," NPR's Joe Palca reports that glue could lead to medical advances.
I'm rather partial to Joe Palca's reports and this one is awesome. Joe interviews Jonathan Wilker, a chemist at Purdue University in West Lafayette who is studying how oysters create the natural adhesive they use to stick to rocks. It might not sound interesting, but the research will likely help create adhesives for surgery, new non-toxic coatings that marine animals such as barnacles cannot stick to, and other applications. Worth reading.
Palantir's data platform, Gotham, enables data integration, search and discovery, knowledge management, secure collaboration, and algorithmic analysis across a wide variety of data sources.
Palantir teamed with OpenGeo to extend PostGIS to support the WGS84 coordinate system and geodetic coordinates. This enabled Palantir to migrate from Oracle Spatial to PostGIS as the underpinning spatial database for their Gotham platform. The benefits to the wider spatial community are significant, since WGS84-based mapping is often a requirement in government and geoscience applications. Palantir's sponsorship of the open source development undertaken by OpenGeo enables others to take advantage of the new features and integrate PostGIS into their workflows. Click on the image or title to learn more.
Networks of interconnected nodes have long played a key role in cognitive science, from artificial neural networks to spreading activation models of semantic memory. Recently, however, a new Network Science has been developed, providing insights into the emergence of global, system-scale properties in contexts as diverse as the Internet, metabolic reactions or collaborations among scientists. Today, the inclusion of network theory into cognitive sciences, and the expansion of complex systems science, promises to significantly change the way in which the organization and dynamics of cognitive and behavioral processes are understood. In this paper, we review recent contributions of network theory at different levels and domains within the cognitive sciences.
Networks in Cognitive Science
Andrea Baronchelli, Ramon Ferrer-i-Cancho, Romualdo Pastor-Satorras, Nick Chater, Morten H. Christiansen
Biotechnological and life science innovations do not only lead to immense progress in diverse fields of natural science and technical research, thereby driving economic development; they also fundamentally affect the relationship between nature, technology and society. Taken seriously, the ethical and societal assessment of emerging biotechnologies such as synthetic biology is challenged not only to address questions of biosafety and biosecurity but also to face the societal questions within the different fields as an interface problem of science and society. In order to map this vague and stirring field, we propose the concept of bio-objects to explore the reciprocal interactions at the interface of science and society, and to have the opportunity to detect possible junctions of societal discontent and unease before they appear.
Ethical discussions walk the thin line between intellectual noise and useful guidance. Given the potential impact of biotech, I think this is a good step in the direction of useful guidance. It will be interesting to see how it develops.
How the blistering pace of technological change could have a profound impact on healthcare.
The combination of sensors and automated tests in areas such as genetics and proteomics enables the collection of large-scale, comprehensive health data for the first time. That data will generate insights into human biology, our bacterial biome and how our health systems work. Advances in large-scale data processing, correlation and machine learning will help to radically change our understanding of human biology over the next decade. As data is collected and in silico experimentation is mapped to in vitro understanding, that data will change our healthcare systems over the next 30 years. This BBC article gives a good insight into how and why this is starting to happen now.
Networks are commonly used to define underlying interaction structures where infections, information, or other quantities may spread. Although the standard approach has been to aggregate all links into a static structure, some studies have shown that the time order in which the links are established may alter the dynamics of spreading. In this paper, we study the impact of the time ordering in the limits of flow on various empirical temporal networks. By using a random walk dynamics, we estimate the flow on links and convert the original undirected network (temporal and static) into a directed flow network. We then introduce the concept of flow motifs and quantify the divergence in the representativity of motifs when using the temporal and static frameworks. We find that the regularity of contacts and persistence of vertices (common in email communication and face-to-face interactions) result in little difference in the limits of flow between the two frameworks. On the other hand, in the case of communication within a dating site and of a sexual network, the flow between vertices changes significantly in the temporal framework such that the static approximation poorly represents the structure of contacts. We have also observed that cliques with 3 and 4 vertices containing only low-flow links are more represented than the same cliques with all high-flow links. The representativity of these low-flow cliques is higher in the temporal framework. Our results suggest that the flow between vertices connected in cliques depends both on the topological context in which they are placed and on the time sequence in which the links are established. The structure of the clique alone does not completely characterize the potential of flow between the vertices.
Flow motifs reveal limitations of the static framework to represent human interactions
In our own research, the temporality of relationships or interactions between agents is a common property of various systems (think infection, traffic, economic exchange). Maybe it's my computer graphics background, but their use of flow motifs reminds me a lot of flow fields and glyph representations in scientific visualization.
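The random-walk flow estimation the abstract describes is easy to sketch. The snippet below is a loose illustration on a made-up toy graph (a plain unbiased walk, not the authors' implementation): simulate a walker on an undirected network and count how often each directed edge is traversed.

```python
import random
from collections import defaultdict

def estimate_flow(edges, steps=100_000, seed=42):
    """Estimate directed flow on an undirected graph by simulating an
    unbiased random walk and counting traversals per directed edge.
    A simplified sketch of the paper's idea, not their exact method."""
    rng = random.Random(seed)
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    flow = defaultdict(int)          # (from, to) -> traversal count
    node = sorted(adj)[0]            # deterministic starting vertex
    for _ in range(steps):
        nxt = rng.choice(adj[node])  # one unbiased random-walk step
        flow[(node, nxt)] += 1
        node = nxt
    # normalise counts into per-step traversal probabilities
    return {edge: count / steps for edge, count in flow.items()}

# Toy example: a triangle plus one pendant vertex.
flows = estimate_flow([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")])
```

Note that on a static undirected network an unbiased walk traverses every directed edge equally often in the long run; the asymmetries the paper studies come from imposing the temporal ordering of contacts, which this static sketch deliberately omits.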
We describe techniques for the robust detection of community structure in some classes of time-dependent networks. Specifically, we consider the use of statistical null models for facilitating the principled identification of structural modules in semi-decomposable systems. Null models play an important role both in the optimization of quality functions such as modularity and in the subsequent assessment of the statistical validity of identified community structure. We examine the sensitivity of such methods to model parameters and show how comparisons to null models can help identify system scales. By considering a large number of optimizations, we quantify the variance of network diagnostics over optimizations (“optimization variance”) and over randomizations of network structure (“randomization variance”). Because the modularity quality function typically has a large number of nearly degenerate local optima for networks constructed using real data, we develop a method to construct representative partitions that uses a null model to correct for statistical noise in sets of partitions. To illustrate our results, we employ ensembles of time-dependent networks extracted from both nonlinear oscillators and empirical neuroscience data.
Robust detection of dynamic community structure in networks Danielle S. Bassett, Mason A. Porter, Nicholas F. Wymbs, Scott T. Grafton, Jean M. Carlson, and Peter J. Mucha
Good catch by Eugene and the Complexity Digest team. The ability to examine community and ad hoc network structure in time-dependent networks will become increasingly important in complex systems analysis. Interesting read.
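As a toy illustration of the modularity quality function the abstract optimizes (with the standard Newman-Girvan configuration-model null term), here is a minimal pure-Python sketch; the graph and partition are invented for the example and this is not the paper's pipeline.

```python
from collections import defaultdict

def modularity(edges, partition):
    """Newman-Girvan modularity: Q = sum_c (e_c/m - (d_c/(2m))^2),
    where e_c counts intra-community edges, d_c sums the degrees in
    community c, and (d_c/(2m))^2 is the configuration-model null
    expectation against which the observed structure is compared."""
    m = len(edges)
    deg = defaultdict(int)           # node -> degree
    intra = defaultdict(int)         # community -> intra-community edges
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        if partition[u] == partition[v]:
            intra[partition[u]] += 1
    d = defaultdict(int)             # community -> total degree
    for node, k in deg.items():
        d[partition[node]] += k
    return sum(intra[c] / m - (d[c] / (2 * m)) ** 2 for c in d)

# Two triangles joined by a single bridge edge, split into the two
# obvious communities: Q = 5/14.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
partition = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
Q = modularity(edges, partition)
```

The paper's "optimization variance" and "randomization variance" would then correspond to re-running a (noisy) optimizer over partitions and re-evaluating Q over degree-preserving randomizations of the edge list.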
In recent years, visualization has become an all-purpose technique for communicating and exploring data within the humanities. There is a wide range of tools available, offering different points of entry, from IBM’s Many Eyes to Gephi to TAPoR 2.0. Projects like the Visual Thesaurus, Mapping the Republic of Letters, and Hypercities, among countless others, all engage with visualization as an integral part of their scholarship. Yet, they do so in very different ways and from a wide variety of disciplinary perspectives, leaving us to question: what is visualization in the humanities? Why do we use it? How do we use it? And to what end?
Great article on when, why, how and which data visualisation tools and techniques to use. It also links to the TAPoR 2.0 project, which is a personal favourite for tracking analysis tools.
In most social, information, and collaboration systems the complex activity of agents generates rapidly evolving time-varying networks. Temporal changes in the network structure and the dynamical processes occurring on its fabric are usually coupled in ways that still challenge our mathematical or computational modelling. Here we analyse a mobile call dataset describing the activity of millions of individuals and investigate the temporal evolution of their egocentric networks. We empirically observe a simple statistical law characterizing the memory of agents that quantitatively signals how much interactions are more likely to happen again on already established connections. We encode the observed dynamics in a reinforcement process defining a generative computational network model with time-varying connectivity patterns. This activity-driven network model spontaneously generates the basic dynamic process for the differentiation between strong and weak ties. The model is used to study the effect of time-varying heterogeneous interactions on the spreading of information on social networks. We observe that the presence of strong ties may severely inhibit the large scale spreading of information by confining the process among agents with recurrent communication patterns. Our results provide the counterintuitive evidence that strong ties may have a negative role in the spreading of information across networks.
The emergence and role of strong ties in time-varying communication networks
Márton Karsai, Nicola Perra, Alessandro Vespignani
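The memory mechanism the abstract describes can be sketched as a simple reinforcement process: an agent who already has k ties contacts a brand-new node with a probability that decays with k, and otherwise returns to an established tie, so a few early ties accumulate weight and become "strong". The functional form c/(c+k) and the uniform choice among existing ties below are assumptions for illustration, not the paper's fitted model.

```python
import random

def grow_ego_network(steps=5000, c=1.0, seed=7):
    """Toy reinforcement process for one agent's egocentric network:
    with probability c/(c + k) establish a new tie (k = current number
    of ties), otherwise reinforce an existing one."""
    rng = random.Random(seed)
    weights = {}        # tie id -> number of interactions on that tie
    next_id = 0
    for _ in range(steps):
        k = len(weights)
        if k == 0 or rng.random() < c / (c + k):
            weights[next_id] = 1                 # establish a new tie
            next_id += 1
        else:
            tie = rng.choice(list(weights))      # reuse an existing tie
            weights[tie] += 1
    return weights

w = grow_ego_network()
```

Even this crude version produces the qualitative effect in the paper: interactions concentrate on a small set of recurrent ties, which is exactly the confinement that can inhibit large-scale spreading.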
A kidney "grown" in the laboratory has been transplanted into animals where it started to produce urine, US scientists say.
While still a long way off, developments such as this hold out the long-term promise of a revolution in organ transplants, potentially removing the need for immunosuppressants and increasing the number of organs available for those waiting for treatment. Click on the image or title to learn more.
Tom Vander Ark is an education advocate, advisor, and author of Getting Smart: How Personal Digital Learning is Changing the World. Tom is Founder and Executive Editor of Getting Smart and a partner in Learn Capital.
Tom Vander Ark makes 7 observations on early smart city adoption. Worth a quick read.
Many years and two jobs ago the company I was working for decided to use Python for game development – I talked about our experiences at the Game Developers Conference in 2002. We felt that the available Python ...
Interesting open source addition to the various python debuggers available. Click on image or title to learn more.
“The Machine Age,” an essay written for The New York Times by Norbert Wiener, a visionary mathematician, languished for six decades in the M.I.T. archives, and now excerpts are being published.
When I was a young teenager I came across Cybernetics by Norbert Wiener at my local lending library. I would borrow it several times over the years to read and re-read, though much was certainly beyond my understanding then. For those familiar with Wiener's work, this essay from 1949 will come as no surprise. For those who are not, it will give an insight into why we owe so much to the insight that helped found the fields of computer science and informatics, and why his work and ideas are often worth revisiting and re-examining. Great essay - worth the read. Click on the image or title to learn more.
Search and discovery platform for big public data that exposes billions of public records across previously siloed datasets. (Looks interesting.)
The Enigma team recently won the TechCrunch New York startup competition - check out the video of their presentation. It's an interesting product that pulls together diverse data sets with accessible query tools and an API.
Background: On March 30, 2013, a novel avian influenza A H7N9 virus that infects human beings was identified. This virus had been detected in six provinces and municipal cities in China as of April 18, 2013. We correlated genomic sequences from avian influenza viruses with ecological information and did phylogenetic and coalescent analyses to extrapolate the potential origins of the virus and possible routes of reassortment events.

Methods: We downloaded H7N9 virus genome sequences from the Global Initiative on Sharing Avian Influenza Data (GISAID) database and public sequences used from the Influenza Virus Resource. We constructed phylogenetic trees and did 1000 bootstrap replicates for each tree. Two rounds of phylogenetic analyses were done. We used at least 100 closely related sequences for each gene to infer the overall topology, removed suspicious sequences from the trees, and focused on the closest clades to the novel H7N9 viruses. We compared our tree topologies with those from a Bayesian evolutionary analysis by sampling trees (BEAST) analysis. We used the Bayesian Markov chain Monte Carlo method to jointly estimate phylogenies, divergence times, and other evolutionary parameters for all eight gene fragments. We used sequence alignment and homology-modelling methods to study specific mutations regarding phenotypes, specifically addressing the human receptor binding properties.

Findings: The novel avian influenza A H7N9 virus originated from multiple reassortment events. The HA gene might have originated from avian influenza viruses of duck origin, and the NA gene might have transferred from migratory birds infected with avian influenza viruses along the east Asian flyway. The six internal genes of this virus probably originated from two different groups of H9N2 avian influenza viruses, which were isolated from chickens. Detailed analyses also showed that ducks and chickens probably acted as the intermediate hosts leading to the emergence of this virulent H7N9 virus. Genotypic and potential phenotypic differences imply that the isolates causing this outbreak form two separate subclades.

Interpretation: The novel avian influenza A H7N9 virus might have evolved from at least four origins. Diversity among isolates implies that the H7N9 virus has evolved into at least two different lineages. Unknown intermediate hosts might be implicated, extensive global surveillance is needed, and domestic-poultry-to-person transmission should be closely watched in the future.

Funding: China Ministry of Science and Technology Project 973, National Natural Science Foundation of China, China Health and Family Planning Commission, Chinese Academy of Sciences.
Firstly, sidestepping the important findings for the H7N9 virus, this paper illustrates the importance of rigorous methodology and key research methods for understanding disease evolution and transmission pathways. The paper correlates genomic sequences and ecological information using phylogenetic and coalescent analyses to extrapolate the potential origins and possible routes of reassortment events in the H7N9 virus. As for the findings - novel avian influenza viruses are a major concern for worldwide public health - the research in this paper raises the need to understand intermediate hosts, viral evolution pathways and domestic-poultry-to-wild-animal contact on a global scale for future health policy. Worth reading.
Present in almost every cell, microRNAs are known to target tens to hundreds of genes each and to be able to repress, or "silence," their expression. What is less well understood is how exactly miRNAs repress target gene expression. Now a team of scientists led by geneticists at the University of California, Riverside has conducted a study on plants (Arabidopsis) showing that the site of action of the repression of target gene expression is the endoplasmic reticulum (ER), a cellular organelle consisting of an interconnected network of membranes - essentially, flattened sacs and branching tubules - that extends like a flat balloon throughout the cytoplasm in plant and animal cells.
I think this is fundamentally important. As a programmer, microRNA reminds me of microcode running on multiple parallel processes, with this work showing that ER membranes are essential for microRNA activity. The last line of the article nails it: "Our work shows that an integral membrane protein, AMP1, is required for the miRNA-mediated target gene repression to be successful. As AMP1 has counterparts in animals, our findings in plants could have broader implications." The full paper is in Cell.
In this paper we argue that if we want to find a more satisfactory approach to tackling the major socio-economic problems we are facing, we need to thoroughly rethink the basic assumptions of macroeconomics and financial theory. Making minor modifications to the standard models to remove “imperfections” is not enough; the whole framework needs to be revisited.
Call it the ultimate nature documentary. Scientists have recorded atomic motions in real time, offering a glimpse into the very essence of chemistry and biology at the atomic level.
Mapping molecular motions -- the "magic" of chemistry revealed. Despite the enormous number of possible arrangements of atoms during a structural transition, such as occurs with changes in charge distribution or chemical processes, the interconversion from one structure to another reduces to a few key types of motions. This enormous reduction in dimensionality is what makes chemical concepts transferable from one molecule to another and has enabled chemists to synthesize nearly any molecule desired, for new drugs to infusing new material properties. This is a still image from a movie that gives a direct atomic level view of this enormous reduction in complexity. The specific trajectories along three different coordinates, as highlighted in the movie, are shown as projections (right view) on a cube. The key atomic motions can be mapped on to three highly simplified coordinates -- the magic of chemistry in its full atomic splendor. (Credit: Lai Chung Liu, University of Toronto)
Historically, such image analysis technology has only been found in complex, expensive systems such as military equipment, industrial robots, and quality-control inspection systems for manufacturing. However, cost, performance, and power consumption advances in digital integrated circuits such as processors, memory devices, and image sensors are now paving the way for the proliferation of embedded vision into high-volume applications.
Products like Microsoft Kinect have popularised 3D depth sensing, and the technologies are now rapidly becoming cost-accessible for a wide range of applications. Embedded has a really good introductory article on the different 3D depth-sensing methods used to augment machine vision systems. If you want to understand how the technologies work, the different methods and insights into forthcoming applications, it is worth reading.
We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism—neural processes are computations in the generic sense. After that, we reject on empirical grounds the common assimilation of neural computation to either analog or digital computation, concluding that neural computation is sui generis. Analog computation requires continuous signals; digital computation requires strings of digits. But current neuroscientific evidence indicates that typical neural signals, such as spike trains, are graded like continuous signals but are constituted by discrete functional elements (spikes); thus, typical neural signals are neither continuous signals nor strings of digits. It follows that neural computation is sui generis. Finally, we highlight three important consequences of a proper understanding of neural computation for the theory of cognition. First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation. Second, several popular views about neural computation turn out to be incorrect. Third, computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation.
Neural Computation and the Computational Theory of Cognition
Gualtiero Piccinini, Sonya Bahar
Cognitive Science Volume 37, Issue 3, pages 453–488, April 2013
I'm re-reading some of John Holland's work on neural network simulation at present while looking into different models of computation and digital physics, so this is a timely paper. Looks to be an interesting read.