Turing in 1939, faced with certain “programming” problems, had of course under his belt his own programming language for pattern matching.
The study of online social networks has revolutionised the way social scientists understand human interaction on a grand scale. It is based on the assumption that the fundamental unit of interaction is the social tie that exists between two individuals. This tie can be a message that one person has sent to another, that one person follows another, that one person ‘likes’ another and so on.
These social ties are the atoms of social network structure. And much of the research on social networks has focused on how these atoms join together to create complex networks of interaction.
Much less thought has been given to the atoms themselves: whether they fall into distinct categories, whether different types of atoms have different social properties, and how combining atoms of different types might be indicative of entirely different relationships.
Today, Luca Maria Aiello at Yahoo Labs in Barcelona, Spain, and a couple of pals change that. They tease apart the nature of the links that form on social networks and say these atoms fall into three different categories. They also show how to extract this information automatically and then characterise the relationships according to the combination of atoms that exist between individuals. Their ultimate goal: to turn anthropology into a full-blooded sub-discipline of computer science.
Neuromarketing firms claim that brain scanning technology can be used to evaluate consumers’ responses to products and predict which ones they prefer, but so far most of these claims are hugely exaggerated.
New research published in the journal Nature Communications adds some hope to the neuromarketing hype, by showing that the brain activity shared by small groups of people in response to film clips can accurately predict how popular those clips will be among larger groups.
Ten years ago, Uri Hasson and his colleagues recruited five participants and used functional magnetic resonance imaging (fMRI) to scan their brains while each one watched the same 30-minute clip of Sergio Leone’s classic spaghetti western, The Good, the Bad and the Ugly. They noticed that the film produced remarkably similar patterns of brain activity in all the participants, synchronising the activity across multiple regions, such that their brains “ticked collectively” while they viewed it.
The researchers went on to show that films differ in their ability to induce this shared brain activity, with the more engaging ones producing a greater degree of synchrony, and more recently others have shown that the stereoscopic effects used in 3D films make viewing more enjoyable by creating a more immersive experience.
The new study, led by Jacek Dmochowski of City College of New York, builds on this earlier work. Dmochowski and his colleagues showed 16 participants scenes from the pilot episode of The Walking Dead, together with 10 commercials that were first aired during the Super Bowl, while recording their brain waves with electroencephalography (EEG).
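The shared-activity measure behind these studies is usually an inter-subject correlation: the same clip drives similar signal time courses in every viewer, so the average pairwise correlation between subjects' recordings is high. Below is a toy version with synthetic signals standing in for fMRI/EEG time series; the stimulus waveform and noise level are invented for illustration.

```python
# Toy inter-subject correlation (ISC): a shared stimulus drives every
# subject's signal, plus independent noise. Real analyses use fMRI/EEG.
import math, random

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

random.seed(1)
stimulus = [math.sin(0.1 * t) for t in range(300)]                 # shared drive
subjects = [[s + random.gauss(0, 0.3) for s in stimulus] for _ in range(5)]

pairs = [(i, j) for i in range(5) for j in range(i + 1, 5)]
isc = sum(pearson(subjects[i], subjects[j]) for i, j in pairs) / len(pairs)
print(round(isc, 2))   # high average correlation, driven by the shared clip
```

With no shared stimulus the pairwise correlations would hover near zero, which is what makes the measure a usable index of engagement.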
We use the maximum entropy method to study how the strength of effective alignment between birds depends on distance. We find in all analyzed flocks that the interaction decays exponentially. Such short-range form is noteworthy, considering that the velocity correlation that is input of the calculation is long-ranged. We use our method to study the directional anisotropy in the alignment interaction and find that the interaction strength along the direction of motion is weaker than in the transverse direction, which may account for the anisotropic spatial distribution of birds observed in natural flocks.
Short-range interaction vs long-range correlation in bird flocks
Via Complexity Digest
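The exponential decay the authors report can be checked on data with an elementary fit: on a log scale an exponential J(r) = J0·exp(−r/λ) is a straight line whose slope gives the interaction range. The sketch below fabricates noisy alignment-strength data (the amplitude, decay length, and noise level are invented; the paper's real input is the measured velocity correlation of a flock) and recovers λ by least squares.

```python
# Recover an exponential decay length from synthetic "alignment strength"
# values by linear least squares on log J versus distance.
import math, random

random.seed(0)
J0, lam = 2.0, 1.5                        # assumed amplitude and decay length
rs = [0.2 * i for i in range(1, 40)]      # inter-bird distances
J = [J0 * math.exp(-r / lam) * (1 + 0.02 * random.gauss(0, 1)) for r in rs]

ys = [math.log(j) for j in J]
n = len(rs)
mx, my = sum(rs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(rs, ys)) / \
        sum((x - mx) ** 2 for x in rs)
lam_est = -1.0 / slope
print(round(lam_est, 2))                  # close to the true decay length
```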
"Ormia ochracea is a little, yellow fly of the American south whose breeding strategy has an outsize ick factor. It deposits its larvae on the bodies of male crickets. The larvae then eat their way into their unwilling hosts, and devour them from the inside. What is most remarkable, though, is that the female fly locates the crickets by sound, homing in on the he-cricket’s stridulations (the chirping that results from the wings rubbing together) with uncanny accuracy. The cricket’s chirp is a smear of sound across the scale from the 5 kilohertz carrier frequency to around 20 kHz. And, as anybody who has tried to evict a passionate cricket from a tent or cabin knows, the sound is maddeningly hard to pinpoint.
With an auditory apparatus—let’s call them ears—only 1.5 millimeter across, ochracea pulls off a major feat of acoustic location; a number of engineering groups are working on devices to duplicate the fly’s sensitivity. Now, a team at the University of Texas at Austin has built a prototype replica of O. ochracea’s ear. Michael L. Kuntzman and Neal A. Hall, researchers in the school’s electrical and computer engineering department, describe the device and its performance in Applied Physics Letters."
Via Miguel Prazeres
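The computational problem the fly solves is the textbook one of acoustic localization: recover the tiny inter-ear arrival-time difference and convert it to a bearing. A minimal sketch of the first step, delay estimation by cross-correlation, using synthetic two-tone signals (the sample counts and the 3-sample delay are invented for illustration):

```python
# Estimate the delay between two microphone signals by picking the lag
# that maximizes their cross-correlation.
import math

def tdoa(sig_a, sig_b, max_lag):
    def xcorr(lag):
        return sum(sig_a[t] * sig_b[t + lag]
                   for t in range(max_lag, len(sig_a) - max_lag))
    return max(range(-max_lag, max_lag + 1), key=xcorr)

n, true_delay = 500, 3                         # b lags a by 3 samples
src = [math.sin(0.3 * t) + 0.5 * math.sin(0.11 * t) for t in range(n + 10)]
a = src[5:5 + n]
b = src[5 - true_delay:5 - true_delay + n]     # delayed copy of the source
print(tdoa(a, b, max_lag=5))                   # recovers the 3-sample delay
```

The engineering challenge the Texas group faces is doing this when the microphones sit only 1.5 mm apart, where the raw delay is far below a sample period.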
According to a report from the Howard Hughes Medical Institute (HHMI), new technologies for monitoring brain activity are generating unparalleled quantities of information. That data could offer new insights into how the brain works, but only if researchers can interpret it.
To help organize the data, neuroscientists can now harness the power of distributed computing using “Thunder,” a library of tools developed at the HHMI Janelia Research Campus. According to the Freeman Lab, Thunder is a library for analyzing large-scale neural data. It’s fast to run, easy to develop for, and can be used interactively. It is built on Spark, a new framework for cluster computing.
A salient dynamic property of social media is bursting behavior. In this paper, we study bursting behavior in terms of the temporal relation between a preceding baseline fluctuation and the successive burst response using a frequency time series of 3,000 keywords on Twitter. We found that there is a fluctuation threshold up to which the burst size increases as the fluctuation increases and that above the threshold, there appears a variety of burst sizes. We call this threshold the critical threshold. Investigating this threshold in relation to endogenous bursts and exogenous bursts based on peak ratio and burst size reveals that the bursts below this threshold are endogenously caused and above this threshold, exogenous bursts emerge. Analysis of the 3,000 keywords shows that all the nouns have both endogenous and exogenous origins of bursts and that each keyword has a critical threshold in the baseline fluctuation value to distinguish between the two. Having a threshold for an input value for activating the system implies that Twitter is an excitable medium. These findings are useful for characterizing how excitable a keyword is on Twitter and could be used, for example, to predict the response to particular information on social media.
Self-organization on social media: endo-exo bursts and baseline fluctuations
Via Complexity Digest
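The relation between baseline fluctuation and burst response can be made concrete with a simple detector: estimate the baseline mean and fluctuation over a trailing window and flag samples that exceed the mean by a multiple of the fluctuation. This mean-plus-k·sigma rule is a simplification of the paper's critical-threshold analysis, and the keyword counts below are invented.

```python
# Flag points in a keyword-frequency series that exceed the trailing
# baseline by k fluctuations (standard deviations).
import statistics

def detect_bursts(series, window=24, k=3.0):
    bursts = []
    for t in range(window, len(series)):
        base = series[t - window:t]
        mu, sigma = statistics.mean(base), statistics.pstdev(base)
        if sigma > 0 and series[t] > mu + k * sigma:
            bursts.append((t, series[t] - mu))   # (time, burst size)
    return bursts

# A quiet baseline with one exogenous-style spike appended at t = 30.
counts = [10, 11, 9, 10, 12, 10, 11, 9, 10, 10] * 3 + [80]
print(detect_bursts(counts))                     # one burst, at t = 30
```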
When it comes to research into Artificial Life, commercial projects have begun to outpace academic ones.
The term “Artificial Life” emerged in 1986 when the American computer scientist Christopher Langton coined it while organizing the first “Workshop on the Synthesis and Simulation of Living Systems.” Since then the idea of artificial life has spread through computer science into gaming, the study of artificial intelligence, and beyond.
One important factor in this spread has been the Web and the way it allows networked computing to generate complex environments in which artificial organisms can thrive and evolve. Today, Tim Taylor from Monash University in Australia outlines the history of artificial life on the Web and the way it might evolve in the future.
He divides the history of Web-based artificial life into two periods: before and after 2005, a characterization that corresponds roughly with the emergence of Web 2.0 and the collaborative behaviors that it allows.
One of the earliest networked artificial life experiments was based on the well-known A-Life system, Tierra. This was created in the early 1990s by the ecologist Tom Ray to simulate in silico the basic processes of evolutionary and ecological dynamics. Ray soon recognized the potential of the Web to create a large, complex environment in which digital organisms could freely evolve, so he set up a project called Network Tierra to exploit it.
Win–win choices cause anxiety, often more so than decisions lacking the opportunity for a highly desired outcome. These anxious feelings can paradoxically co-occur with positive feelings, raising important implications for individual decision styles and general well-being. Across three studies, people chose between products that varied in personal value. Participants reported feeling most positive and most anxious when choosing between similarly high-valued products. Behavioral and neural results suggested that this paradoxical experience resulted from parallel evaluations of the expected outcome (inducing positive affect) versus the cost of choosing a response (inducing anxiety). Positive feelings were reduced when there was no high-value option, and anxiety was reduced when only one option was highly valued. Dissociable regions within the striatum and the medial prefrontal cortex (mPFC) tracked these dueling affective reactions during choice. Ventral regions, associated with stimulus valuation, tracked positive feelings and the value of the best item. Dorsal regions, associated with response valuation, tracked anxiety. In addition to tracking anxiety, the dorsal mPFC was associated with conflict during the current choice, and activity levels across individual items predicted whether that choice would later be reversed during an unexpected reevaluation phase. By revealing how win–win decisions elicit responses in dissociable brain systems, these results help resolve the paradox of win–win choices. They also provide insight into behaviors that are associated with these two forms of affect, such as why we are pulled toward good options but may still decide to delay or avoid choosing among them.
Rafael Hostettler of Myorobotics at the Technical University of Munich takes us on a fascinating journey through how an adorable humanoid robot with muscles, called Roboy, is born in 9 months, and sheds light on the future of robotics and what kind of future it might bring us. Fascinated by the complexity and beauty of everything, Hostettler always had a hard time choosing. That’s why he has an MSc in Computational Science from ETH Zurich, where he learnt to simulate just about everything on computers, so he didn’t have to make a decision. Now he builds robots that imitate the building principles of the human musculoskeletal system and travels the world with Roboy, the 3D-printed robot boy that plays in a theatre, goes to school and captivates audiences with his fascinating stories.
Via Dr. Stefan Gruenwald
Technology is becoming deeply interwoven into the fabric of society. The Internet has become a central source of information for many people when making day-to-day decisions. Here, we present a method to mine the vast data Internet users create when searching for information online, to identify topics of interest before stock market moves. In an analysis of historic data from 2004 until 2012, we draw on records from the search engine Google and online encyclopedia Wikipedia as well as judgments from the service Amazon Mechanical Turk. We find evidence of links between Internet searches relating to politics or business and subsequent stock market moves. In particular, we find that an increase in search volume for these topics tends to precede stock market falls. We suggest that extensions of these analyses could offer insight into large-scale information flow before a range of real-world events.
Social bots are sending a significant amount of information through the Twittersphere. Now there’s a tool to help identify them.
Back in 2011, a team from Texas A&M University carried out a cyber sting to trap nonhuman Twitter users that were polluting the Twittersphere with spam. Their approach was to set up “honeypot” accounts which posted nonsensical content that no human user would ever be interested in. Any account that retweeted this content, or friended the owner, must surely be a nonhuman user known as a social bot.
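The honeypot logic itself is almost trivially simple, which is what made the sting elegant: seed the network with bait content no human would engage with, then flag every account that does. A sketch with invented account names and post IDs:

```python
# Flag accounts that retweeted deliberately nonsensical honeypot posts.
# All identifiers here are invented; real bot-detection pipelines add
# many more behavioral features on top of this seed signal.
HONEYPOT_IDS = {"bait_001", "bait_002"}

def likely_bots(retweets):
    """retweets: iterable of (account, retweeted_post_id) pairs."""
    return sorted({acct for acct, post in retweets if post in HONEYPOT_IDS})

log = [("alice", "news_17"), ("spambot9", "bait_001"),
       ("bob", "news_17"), ("linkfarm3", "bait_002")]
print(likely_bots(log))   # ['linkfarm3', 'spambot9']
```

Accounts caught this way then serve as labeled training data for classifiers that generalize beyond the honeypot.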
In mammals, the developmental path that links the primary behaviours observed during foetal stages to the full-fledged behaviours observed in adults is still beyond our understanding. Theories of motor control often try to deal with the process of incremental learning in an abstract and modular way without establishing any correspondence with the mammalian developmental stages. In this paper, we propose a computational model that links three distinct behaviours which appear at three different stages of development. In order of appearance, these behaviours are: spontaneous motor activity (SMA), reflexes, and coordinated behaviours, such as locomotion. The goal of our model is to address in silico four hypotheses that are currently hard to verify in vivo: First, the hypothesis that spinal reflex circuits can be self-organized from the sensor and motor activity induced by SMA. Second, the hypothesis that supraspinal systems can modulate reflex circuits to achieve coordinated behaviour. Third, the hypothesis that, since SMA is observed in an organism throughout its entire lifetime, it provides a mechanism suitable to maintain the reflex circuits aligned with the musculoskeletal system, and thus adapt to changes in body morphology. And fourth, the hypothesis that by changing the modulation of the reflex circuits over time, one can switch between different coordinated behaviours. Our model is tested in a simulated musculoskeletal leg actuated by six muscles arranged in a number of different ways. Hopping is used as a case study of coordinated behaviour. Our results show that reflex circuits can be self-organized from SMA, and that, once these circuits are in place, they can be modulated to achieve coordinated behaviour. In addition, our results show that our model can naturally adapt to different morphological changes and perform behavioural transitions.
In mid-July Dataversity.net, the sister site of The Semantic Web Blog, hosted a webinar on Understanding The World of Cognitive Computing. Semantic technology naturally came up during the session, which was moderated by Steve Ardire, an advisor to cognitive computing, artificial intelligence, and machine learning startups. You can find a recording of the event here.
Here, you can find a more detailed discussion of the session at large, but below are some excerpts related to how the worlds of cognitive computing and semantic technology interact.
One of the panelists, IBM Big Data Evangelist James Kobielus, discussed his thinking around what’s missing from general discussions of cognitive computing to make it a reality. “How do we normally perceive branches of AI, and clearly the semantic web and semantic analysis related to natural language processing and so much more has been part of the discussion for a long time,” he said. When it comes to finding the sense in multi-structured – including unstructured – content that might be text, audio, images or video, “what’s absolutely essential is that as you extract the patterns you are able to tag the patterns, the data, the streams, really deepen the metadata that gets associated with that content and share that metadata downstream to all consuming applications so that they can fully interpret all that content, those objects…[in] whatever the relevant context is.”
Kobielus noted that it’s in the semantic web community where the standards and technologies – RDF, OWL, ontologies and taxonomies – to support that have resided, and that this needs to become a bigger part of the overall cognitive computing discussion.
Professor Tom Collett, his Sussex colleagues Dr Olena Riabinina and Dr Andy Philippides, and Dr Natalie Hempel de Ibarra from the University of Exeter, found that the insects fly in tiny looping circuits that are centred on the nest, gradually broadening the survey until they have learned enough about their surroundings to guide themselves home at the end of their maiden flight.
Professor Collett, Emeritus Professor in Life Sciences, says: “Bumble bee nests are hidden in the undergrowth. The bees have to learn the exact relationship between objects that define the position of the nest and the nest hole.”
So instead of embarking on an epic journey and keeping their limbs crossed for a safe return home, the novices set about exploring the vicinity.
“One thing that they need to know is whether objects are near or far”, says Professor Collett.
This article concludes the special issue on Biosemiotic Entropy looking toward the future on the basis of current and prior results. It highlights certain aspects of the series, concerning factors that damage and degenerate biosignaling systems. As in ordinary linguistic discourse, well-formedness (coherence) in biological signaling systems depends on valid representations correctly construed: a series of proofs are presented and generalized to all meaningful sign systems. The proofs show why infants must (as empirical evidence shows they do) proceed through a strict sequence of formal steps in acquiring any language. Classical and contemporary conceptions of entropy and information are deployed showing why factors that interfere with coherence in biological signaling systems are necessary and sufficient causes of disorders, diseases, and mortality. Known sources of such formal degeneracy in living organisms (here termed biosemiotic entropy) include: (a) toxicants; (b) pathogens; (c) excessive exposures to radiant energy and/or sufficiently powerful electromagnetic fields; (d) traumatic injuries; and (e) interactions between the foregoing factors. Just as Jaynes proved that irreversible changes invariably increase entropy, the theory of true narrative representations (TNR theory) demonstrates that factors disrupting the well-formedness (coherence) of valid representations, all else being held equal, must increase biosemiotic entropy—the kind impacting biosignaling systems.
Biosemiotic Entropy: Concluding the Series
Via Complexity Digest
The world's most successful data transfer protocol could underlie the next generation chat client: Bleep will provide totally secure, totally peer-to-peer chatting from BitTorrent.
BitTorrent, creator of the protocol that now handles more than a third of all internet traffic, has a new product called Bleep on the horizon. It aims to bring the distributed, anonymous technology that made BitTorrent so successful to the oldest action in the history of the internet: chat. Using the same BitTorrent connection logic that has allowed you to pirate TV shows and movies for over a decade, Bleep will facilitate direct, encrypted connections directly between peers, meaning that no outside observer ever gets its hands on your words. To everyone but the intended recipient, your words are effectively “bleeped.” This could be big news for whistleblowers who are trying to keep their identity secret, for businesses that want to ensure the confidentiality of their communications, or just for normal people who want to escape the ever-watchful eye of the NSA.
Encrypted chat programs like Bleep, or even long-standing encrypted email schemes, are generally pretty difficult to use. If you wanted to send me a totally secure email, you’d need to visit my Twitter account for a PGP key (generously hosted at an external MIT key-server), use that to add me as an encrypted messaging buddy, then use specialized email software to send/receive messages. Most of the difficulty in sending secure messages comes from the fact that those emails must pass (unreadable) through a number of third parties — but BitTorrent’s whole addition to the tech sphere was its circumvention of unnecessary servers to allow direct peer-to-peer (p2p) communication.
As soon as you bring third parties into the system (i.e. remote servers), you bring trust into the equation, and exponentially increase the ways your communications could be attacked. If your conversations are all flowing through some Google, Microsoft, or Apple server somewhere, then it doesn’t really matter how well you protect things on your end; if the NSA/Snowden leaks have taught us anything, it’s that third parties can readily reveal your communications. Even encrypted chat services like ChatCrypt don’t fully get around this problem — though to be fair, they do a pretty good job.
With Bleep, the creators use something called a Distributed Hash Table (DHT) to basically associate public encryption keys (you’ll still need those) with IP addresses. Using this information (encryption+online location) the BitTorrent protocol can establish a direct link between two users with no intermediaries. BitTorrent says there will be absolutely no record of the IP-lookup (this would be a piece of metadata), as each user finds the other through the network’s many distributed nodes rather than a central lookup server. This system might work via a one-time lookup per user, or require a DHT-check to establish a connection at the beginning of every conversation; the documentation is still quite vague.
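A distributed hash table can be sketched in a few lines: the participating nodes divide a hash ring among themselves, and a public-key fingerprint hashes to the node responsible for storing its key-to-IP mapping, so no central server ever handles the lookup. This is a generic consistent-hashing DHT for illustration, not Bleep's actual (still vaguely documented) protocol; the node names and fingerprint are invented.

```python
# A toy DHT: hash node names onto a ring; each key is stored at the
# first node clockwise from the key's own hash position.
import hashlib
from bisect import bisect_left

def h(s):
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % 2**32

class TinyDHT:
    def __init__(self, node_names):
        self.ring = sorted((h(n), n) for n in node_names)
        self.store = {n: {} for n in node_names}

    def _owner(self, key):
        points = [p for p, _ in self.ring]
        i = bisect_left(points, h(key)) % len(self.ring)
        return self.ring[i][1]

    def put(self, pubkey_fpr, ip):
        self.store[self._owner(pubkey_fpr)][pubkey_fpr] = ip

    def get(self, pubkey_fpr):
        return self.store[self._owner(pubkey_fpr)].get(pubkey_fpr)

dht = TinyDHT(["node-a", "node-b", "node-c"])
dht.put("AB:CD:EF", "203.0.113.7")    # announce: key fingerprint -> current IP
print(dht.get("AB:CD:EF"))            # peer lookup, no central server involved
```

In a real DHT each node holds only its own slice of the table and lookups hop between nodes, which is what makes the scheme leave no central metadata record.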
BitTorrent, as an idea, is sort of the apex predator of modern data-giants like Google. Encryption and p2p tech have not hurt Google much because, frankly, they’ve always been too clunky to catch on for any large proportion of users’ time. On the other hand, Bleep will offer features like importing your Google contacts list for easy setup; people will still need to generate an encryption key-pair, but as it becomes more practically feasible to pass around more and more types of data without any assistance, it will ruffle more and more feathers. Don’t be surprised if those pro-tech super-PACs everyone’s so excited about end up opposing anti-cloud efforts like this one, especially when BitTorrent takes its thinking to the logical conclusion and releases a competitor to the onion routing protocol – i.e. Tor – which would allow a fully p2p web browser.
You can sign up for an early-access list for Bleep, but there’s no telling how long you might wait, and you’ll need to convince some friends to be as up in arms about privacy as you are. Though it’s still in a closed alpha phase, Bleep is built on the framework of the old BitTorrent Chat experiment, so it has already had extensive testing in its basic functionality. For those who want it, Bleep could be a near-perfect solution — but relying on supposedly impregnable software has burned many people in the past. We’ll see how well Bleep can measure up to the unforgiving storm of cyber-attacks that come to bear on virtually every “secure” software ever made.
Via Dr. Stefan Gruenwald
We consider statistical-mechanical models for spin systems built on hierarchical structures, which provide a simple example of non-mean-field framework. We show that the coupling decay with spin distance can give rise to peculiar features and phase diagrams much richer than their mean-field counterpart. In particular, we consider the Dyson model, mimicking ferromagnetism in lattices, and we prove the existence of a number of meta-stabilities, beyond the ordered state, which become stable in the thermodynamic limit. Such a feature is retained when the hierarchical structure is coupled with the Hebb rule for learning, hence mimicking the modular architecture of neurons, and gives rise to an associative network able to perform both as a serial processor as well as a parallel processor, depending crucially on the external stimuli and on the rate of interaction decay with distance; however, those emergent multitasking features reduce the network capacity with respect to the mean-field counterpart. The analysis is accomplished through statistical mechanics, graph theory, signal-to-noise technique and numerical simulations in full consistency. Our results shed light on the biological complexity shown by real networks, and suggest future directions for understanding more realistic models.
We’re happy to announce Elastic Path has been recognized in Gartner’s “Hype Cycle for Digital Marketing, 2014” report for the third year in a row, named in the “Digital Commerce” and “Commerce Experiences” categories, along with our partner, SapientNitro.
Despite the projections that big data drives more ROI than other marketing investments, recent research by Gartner reports the average return per dollar is only 55 cents, thanks in part to immature technology, lack of skilled data scientists and poorly defined business use cases.
In Insights 2014: Connecting Technology and Story in an Always-On World, SapientNitro suggests that poor communication between data scientists and marketers, along with poor execution and management of data science efforts, are to blame, and recommends best practices for communicating with and managing the data science team.
Twitter has acquired Madbits, a deep-learning-based computer vision startup founded by proteges of Facebook AI director Yann LeCun. It’s the latest in a spate of deep learning and computer vision acquisitions that also includes Google, Yahoo, Dropbox and Pinterest.
Twitter has acquired a stealthy computer vision startup called Madbits, which was founded by Clément Farabet and Louis-Alexandre Etezad-Heydari, two former students of Facebook AI Lab director and New York University professor Yann LeCun.
Advances in machine vision will let employers, governments, and advertisers spot you in photos and know exactly what you’re doing in them.
When I was an undergraduate 20 years ago, I was so excited about computer vision that I chose to implement a cutting-edge paper on recognizing machine parts as my final-year project. Even though those parts were simple silhouettes of basic shapes like triangles and cogs, my project barely worked at all. Computer vision was a long way from being good enough to use in most real applications, and there was no clear path to dramatic improvements.
My college years weren’t entirely focused on algorithms, though. I was a peaceful participant in the protests that turned into the Criminal Justice Act riots in the U.K., I attended plenty of outdoor rave festivals, and I ended up at parties where lots of the attendees were high as a kite. Thankfully there’s no record of any of this, apart from a few photos buried in friends’ drawers. Teenagers today won’t be as lucky, thanks to the explosion of digital images and the advances in computer vision that have happened since that final-year project of mine.
I’ve spent my professional career building software that makes sense of images, but each project was highly custom, more art than science. If I wanted to detect red-eye in photos, I’d program in rules about the exact hue I expected, and to look for two spots of that color in positions that might be eyes, for example. A couple of years ago, I came across a technique that changed my world completely. Alex Krizhevsky and his team won the prestigious Imagenet image recognition contest with a deep convolutional neural network. Their approach had an error rate of 15 percent. The next best contestant’s approach had an error rate of 26 percent.
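The basic operation those networks learn, and the one that replaced hand-written rules like the red-eye heuristic above, is the convolution: a small filter slid across the image, producing a strong response wherever the underlying pattern appears. Here is the raw operation on a toy grayscale image, with a hand-picked edge kernel standing in for a learned one.

```python
# Minimal 2D convolution (no padding, stride 1) on a toy image.
def conv2d(img, kernel):
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(w)] for i in range(h)]

image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
edge = [[-1, 1]]                     # responds to left-to-right intensity jumps
print(conv2d(image, edge))           # peaks exactly at the vertical edge
```

A deep convolutional network like Krizhevsky's stacks many layers of such filters, with the kernels learned from data rather than hand-picked.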
Efficient searching is crucial for timely location of food and other resources. Recent studies show that diverse living animals use a theoretically optimal scale-free random search for sparse resources known as a Lévy walk, but little is known of the origins and evolution of foraging behavior and the search strategies of extinct organisms. Here, using simulations of self-avoiding trace fossil trails, we show that randomly introduced strophotaxis (U-turns)—initiated by obstructions such as self-trail avoidance or innate cueing—leads to random looping patterns with clustering across increasing scales that is consistent with the presence of Lévy walks. This predicts that optimal Lévy searches may emerge from simple behaviors observed in fossil trails. We then analyzed fossilized trails of benthic marine organisms by using a novel path analysis technique and find the first evidence, to our knowledge, of Lévy-like search strategies in extinct animals. Our results show that simple search behaviors of extinct animals in heterogeneous environments give rise to hierarchically nested Brownian walk clusters that converge to optimal Lévy patterns. Primary productivity collapse and large-scale food scarcity characterizing mass extinctions evident in the fossil record may have triggered adaptation of optimal Lévy-like searches. The findings suggest that Lévy-like behavior has been used by foragers since at least the Eocene but may have a more ancient origin, which might explain recent widespread observations of such patterns among modern taxa.
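The Lévy-walk signature discussed above is a power-law distribution of step lengths, P(l) ∝ l^(−μ) with μ ≈ 2, mixing many short moves with rare, very long relocations. The sketch below samples such steps by inverse-transform sampling and contrasts the tail with a Brownian-like exponential walker; all parameters are illustrative, not taken from the fossil data.

```python
# Heavy-tailed (Levy) step lengths versus exponential (Brownian-like) ones.
import math, random

random.seed(2)

def levy_step(mu=2.0, l_min=1.0):
    # Inverse-transform sampling of a Pareto-type power law P(l) ~ l**(-mu).
    u = random.random()
    return l_min * (1 - u) ** (-1.0 / (mu - 1.0))

levy = [levy_step() for _ in range(10000)]
brown = [random.expovariate(1.0) + 1.0 for _ in range(10000)]

# The Levy walker's longest single step dwarfs the exponential walker's.
print(round(max(levy)), round(max(brown)))
```

It is exactly this scale-free mix of step lengths that path-analysis techniques look for in the looping fossil trails.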
A new module on the Étoile Platform, by Jeffrey Johnson
Based on the course presented at the 4th Ph.D. Summer School-Conference on “Mathematical Modeling of Complex Systems”, Cultural Foundation “Kritiki Estia”, 14–25 July 2014, Athens.
The modern world is complex beyond human understanding and control. The science of complex systems aims to find new ways of thinking about the many interconnected networks of interaction that defy traditional approaches. Thus far, research into networks has largely been restricted to pairwise relationships represented by links between two nodes.
This course marks a major extension of networks to multidimensional hypernetworks for modeling multi-element relationships, such as companies making up the stock market, the neighborhoods forming a city, people making up committees, divisions making up companies, computers making up the internet, men and machines making up armies, or robots working as teams. This course makes an important contribution to the science of complex systems by: (i) extending network theory to include dynamic relationships between many elements; (ii) providing a mathematical theory able to integrate multilevel dynamics in a coherent way; (iii) providing a new methodological approach to analyze complex systems; and (iv) illustrating the theory with practical examples in the design, management and control of complex systems taken from many areas of application.
Via Jorge Louçã, Complexity Digest
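The jump from networks to hypernetworks can be made concrete with a minimal data structure: store each multi-element relation (hyperedge) as a set of nodes, and compare it with the ordinary pairwise graph it induces, which discards the information that the elements were related all at once. The committee data below are invented for illustration.

```python
# Hyperedges as frozensets of nodes, plus their pairwise projection.
from itertools import combinations

hyperedges = [
    frozenset({"ann", "bob", "carol"}),        # a three-person committee
    frozenset({"carol", "dave"}),
    frozenset({"ann", "dave", "eve", "bob"}),
]

def pairwise_projection(edges):
    """Collapse hyperedges to the ordinary graph they induce."""
    return {frozenset(p) for e in edges for p in combinations(sorted(e), 2)}

graph = pairwise_projection(hyperedges)
print(len(hyperedges), len(graph))   # 3 hyperedges induce 9 distinct pairs
```

The projection cannot be inverted: from the nine pairs alone one cannot tell which committees existed, which is the information hypernetwork theory sets out to keep.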
Animals learn some things more easily than others. To explain this so-called prepared learning, investigators commonly appeal to the evolutionary history of stimulus–consequence relationships experienced by a population or species. We offer a simple model that formalizes this long-standing hypothesis. The key variable in our model is the statistical reliability of the association between stimulus, action, and consequence. We use experimental evolution to test this hypothesis in populations of Drosophila. We systematically manipulated the reliability of two types of experience (the pairing of the aversive chemical quinine with color or with odor). Following 40 generations of evolution, data from learning assays support our basic prediction: Changes in learning abilities track the reliability of associations during a population’s selective history. In populations where, for example, quinine–color pairings were unreliable but quinine–odor pairings were reliable, we find increased sensitivity to learning the quinine–odor experience and reduced sensitivity to learning quinine–color. To the best of our knowledge this is the first experimental demonstration of the evolution of prepared learning.
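The hypothesis that learned association strength tracks statistical reliability has a standard computational reading: under a delta-rule (Rescorla–Wagner-style) update, the learned strength converges to the probability that cue and consequence co-occur. This sketch illustrates that reading only; it is not the authors' evolutionary model, and the reliability values are invented.

```python
# Delta-rule learning: the association strength v tracks the empirical
# reliability of the cue-consequence pairing.
import random

def learned_strength(reliability, trials=2000, alpha=0.05, seed=3):
    rng = random.Random(seed)
    v = 0.0
    for _ in range(trials):
        outcome = 1.0 if rng.random() < reliability else 0.0
        v += alpha * (outcome - v)     # move v toward the observed outcome
    return v

# Reliable odor pairing vs unreliable color pairing.
print(round(learned_strength(0.9), 2), round(learned_strength(0.3), 2))
```

The evolutionary claim adds a second timescale on top of this: selection tunes how readily each type of association is learned in the first place.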
Much artificial-intelligence research addresses the problem of making predictions based on large data sets. An obvious example is the recommendation engines at retail sites like Amazon and Netflix.
But some types of data are harder to collect than online click histories — information about geological formations thousands of feet underground, for instance. And in other applications — such as trying to predict the path of a storm — there may just not be enough time to crunch all the available data.
Dan Levine, an MIT graduate student in aeronautics and astronautics, and his advisor, Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics, have developed a new technique that could help with both problems. For a range of common applications in which data is either difficult to collect or too time-consuming to process, the technique can identify the subset of data items that will yield the most reliable predictions. So geologists trying to assess the extent of underground petroleum deposits, or meteorologists trying to forecast the weather, can make do with just a few, targeted measurements, saving time and money.
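Problems of this kind are often cast as maximizing a diminishing-returns (submodular) information measure, for which greedy selection carries a (1 − 1/e) approximation guarantee. Whether Levine and How's technique takes exactly this form isn't stated here, so the sketch below shows the generic max-coverage version: each candidate measurement "informs" a set of region cells, and we greedily pick the measurements that cover the most new cells. The site names and cell sets are invented.

```python
# Greedy max-coverage selection of informative measurements.
def greedy_select(candidates, budget):
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(candidates, key=lambda c: len(candidates[c] - covered))
        if not candidates[best] - covered:
            break                      # nothing new to learn; stop early
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered

wells = {                              # hypothetical survey sites -> cells informed
    "site_A": {1, 2, 3, 4},
    "site_B": {3, 4, 5},
    "site_C": {5, 6},
    "site_D": {1, 6},
}
picked, covered = greedy_select(wells, budget=2)
print(picked, sorted(covered))         # two sites suffice to cover every cell
```

Note the second pick: site_B looks individually strong but overlaps site_A, so the greedy rule prefers site_C, which contributes more new information.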