We consider statistical-mechanical models for spin systems built on hierarchical structures, which provide a simple example of a non-mean-field framework. We show that the decay of the coupling with spin distance can give rise to peculiar features and phase diagrams much richer than their mean-field counterparts. In particular, we consider the Dyson model, mimicking ferromagnetism in lattices, and we prove the existence of a number of metastable states, beyond the ordered one, which become stable in the thermodynamic limit. This feature is retained when the hierarchical structure is coupled with the Hebb rule for learning, hence mimicking the modular architecture of neurons, and gives rise to an associative network able to perform both as a serial and as a parallel processor, depending crucially on the external stimuli and on the rate at which the interaction decays with distance; however, these emergent multitasking features reduce the network capacity with respect to the mean-field counterpart. The analysis is carried out through statistical mechanics, graph theory, signal-to-noise techniques and numerical simulations, all in full agreement. Our results shed light on the biological complexity shown by real networks, and suggest future directions for understanding more realistic models.
We’re happy to announce Elastic Path has been recognized in Gartner’s “Hype Cycle for Digital Marketing, 2014” report for the third year in a row, named in the “Digital Commerce” and “Commerce Experiences” categories, along with our partner, SapientNitro.
Despite the projections that big data drives more ROI than other marketing investments, recent research by Gartner reports the average return per dollar is only 55 cents, thanks in part to immature technology, lack of skilled data scientists and poorly defined business use cases.
In Insights 2014: Connecting Technology and Story in an Always-On World, SapientNitro attributes these disappointing returns to poor communication between data scientists and marketers and to poor execution and management of data science efforts, and recommends best practices for communicating with and managing the data science team.
Twitter has acquired Madbits, a deep-learning-based computer vision startup founded by protégés of Facebook AI director Yann LeCun. It’s the latest in a spate of deep learning and computer vision acquisitions by companies including Google, Yahoo, Dropbox and Pinterest.
Twitter has acquired a stealthy computer vision startup called Madbits, which was founded by Clément Farabet and Louis-Alexandre Etezad-Heydari, two former students of Facebook AI Lab director and New York University professor Yann LeCun.
Advances in machine vision will let employers, governments, and advertisers spot you in photos and know exactly what you’re doing in them.
When I was an undergraduate 20 years ago, I was so excited about computer vision that I chose to implement a cutting-edge paper on recognizing machine parts as my final-year project. Even though those parts were simple silhouettes of basic shapes like triangles and cogs, my project barely worked at all. Computer vision was a long way from being good enough to use in most real applications, and there was no clear path to dramatic improvements.
My college years weren’t entirely focused on algorithms, though. I was a peaceful participant in the protests that turned into the Criminal Justice Act riots in the U.K., I attended plenty of outdoor rave festivals, and I ended up at parties where lots of the attendees were high as a kite. Thankfully there’s no record of any of this, apart from a few photos buried in friends’ drawers. Teenagers today won’t be as lucky, thanks to the explosion of digital images and the advances in computer vision that have happened since that final-year project of mine.
I’ve spent my professional career building software that makes sense of images, but each project was highly custom, more art than science. If I wanted to detect red-eye in photos, for example, I’d program in rules about the exact hue I expected, and look for two spots of that color in positions that might be eyes. A couple of years ago, I came across a technique that changed my world completely. Alex Krizhevsky and his team won the prestigious ImageNet image recognition contest with a deep convolutional neural network. Their approach had an error rate of 15 percent. The next best contestant’s approach had an error rate of 26 percent.
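The hand-tuned, rule-based style of vision code described above can be caricatured in a few lines. The hue thresholds and the single-scanline simplification below are made-up illustrations, not values from any real red-eye detector:

```python
def is_redeye_pixel(r, g, b):
    # Hand-tuned rule: strongly red with little green or blue.
    return r > 150 and r > 2 * g and r > 2 * b

def find_red_runs(row):
    # Group adjacent candidate pixels in one scanline into runs;
    # two well-separated runs are a plausible pair of red eyes.
    runs, in_run = [], False
    for x, px in enumerate(row):
        if is_redeye_pixel(*px):
            if not in_run:
                runs.append([x, x])
                in_run = True
            else:
                runs[-1][1] = x
        else:
            in_run = False
    return runs

# Synthetic scanline: two bright-red runs separated by skin-tone pixels.
skin, red = (200, 160, 140), (220, 40, 30)
row = [skin] * 3 + [red] * 2 + [skin] * 4 + [red] * 2 + [skin] * 3
print(find_red_runs(row))  # -> [[3, 4], [9, 10]]
```

Every threshold here had to be chosen by hand for one specific task, which is exactly the fragility that learned convolutional features did away with.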
Efficient searching is crucial for timely location of food and other resources. Recent studies show that diverse living animals use a theoretically optimal scale-free random search for sparse resources known as a Lévy walk, but little is known of the origins and evolution of foraging behavior and the search strategies of extinct organisms. Here, using simulations of self-avoiding trace fossil trails, we show that randomly introduced strophotaxis (U-turns)—initiated by obstructions such as self-trail avoidance or innate cueing—leads to random looping patterns with clustering across increasing scales that is consistent with the presence of Lévy walks. This predicts that optimal Lévy searches may emerge from simple behaviors observed in fossil trails. We then analyzed fossilized trails of benthic marine organisms using a novel path analysis technique and found the first evidence, to our knowledge, of Lévy-like search strategies in extinct animals. Our results show that simple search behaviors of extinct animals in heterogeneous environments give rise to hierarchically nested Brownian walk clusters that converge to optimal Lévy patterns. Primary productivity collapse and large-scale food scarcity characterizing mass extinctions evident in the fossil record may have triggered adaptation of optimal Lévy-like searches. The findings suggest that Lévy-like behavior has been used by foragers since at least the Eocene but may have a more ancient origin, which might explain recent widespread observations of such patterns among modern taxa.
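The step statistics that make a walk "Lévy" can be sketched with inverse-transform sampling; the exponent and cutoff below are illustrative assumptions, not the paper's fitted values:

```python
import random

# Minimal sketch (our illustration, not the paper's path-analysis method):
# draw Lévy-walk step lengths l >= l_min from a power-law tail P(l > L) ~ L^-(mu-1)
# by inverse-transform sampling. Unlike the short, uniform steps of a
# Brownian walker, the sample occasionally contains very long relocations.

def levy_steps(n, mu=2.0, l_min=1.0, seed=42):
    rng = random.Random(seed)
    # Inverse CDF of the Pareto distribution with exponent mu:
    # l = l_min * u**(-1/(mu-1)) for uniform u in (0, 1).
    return [l_min * rng.random() ** (-1.0 / (mu - 1.0)) for _ in range(n)]

steps = levy_steps(10_000)
print(max(steps))             # rare, very long relocations dominate
print(sum(steps) / len(steps))
```

The heavy tail is the signature the trail analysis looks for: most moves are short, but rare long jumps carry the forager to new patches.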
A new module on the Étoile Platform, by Jeffrey Johnson
Based on the course presented at the 4th Ph.D. Summer School-Conference on “Mathematical Modeling of Complex Systems”, Cultural Foundation “Kritiki Estia”, Athens, 14-25 July 2014.
The modern world is complex beyond human understanding and control. The science of complex systems aims to find new ways of thinking about the many interconnected networks of interaction that defy traditional approaches. Thus far, research into networks has largely been restricted to pairwise relationships represented by links between two nodes.
This course marks a major extension of networks to multidimensional hypernetworks for modeling multi-element relationships, such as companies making up the stock market, the neighborhoods forming a city, people making up committees, divisions making up companies, computers making up the internet, men and machines making up armies, or robots working as teams. This course makes an important contribution to the science of complex systems by: (i) extending network theory to include dynamic relationships between many elements; (ii) providing a mathematical theory able to integrate multilevel dynamics in a coherent way; (iii) providing a new methodological approach to analyze complex systems; and (iv) illustrating the theory with practical examples in the design, management and control of complex systems taken from many areas of application.
Animals learn some things more easily than others. To explain this so-called prepared learning, investigators commonly appeal to the evolutionary history of stimulus–consequence relationships experienced by a population or species. We offer a simple model that formalizes this long-standing hypothesis. The key variable in our model is the statistical reliability of the association between stimulus, action, and consequence. We use experimental evolution to test this hypothesis in populations of Drosophila. We systematically manipulated the reliability of two types of experience (the pairing of the aversive chemical quinine with color or with odor). Following 40 generations of evolution, data from learning assays support our basic prediction: Changes in learning abilities track the reliability of associations during a population’s selective history. In populations where, for example, quinine–color pairings were unreliable but quinine–odor pairings were reliable, we find increased sensitivity to learning the quinine–odor experience and reduced sensitivity to learning quinine–color. To the best of our knowledge this is the first experimental demonstration of the evolution of prepared learning.
Much artificial-intelligence research addresses the problem of making predictions based on large data sets. An obvious example is the recommendation engines at retail sites like Amazon and Netflix.
But some types of data are harder to collect than online click histories — information about geological formations thousands of feet underground, for instance. And in other applications — such as trying to predict the path of a storm — there may just not be enough time to crunch all the available data.
Dan Levine, an MIT graduate student in aeronautics and astronautics, and his advisor, Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics, have developed a new technique that could help with both problems. For a range of common applications in which data is either difficult to collect or too time-consuming to process, the technique can identify the subset of data items that will yield the most reliable predictions. So geologists trying to assess the extent of underground petroleum deposits, or meteorologists trying to forecast the weather, can make do with just a few, targeted measurements, saving time and money.
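As a rough sketch of the general idea (not Levine and How's actual algorithm): under a Gaussian model, one can greedily pick the measurement that most reduces total variance, condition the covariance on it, and repeat. The toy covariance below is an assumed example:

```python
# Conditioning a Gaussian on variable i updates the covariance S by
# S' = S - S[:,i] S[i,:] / S[i,i]; the trace drops by
# sum_a S[a][i]^2 / S[i][i], so we greedily pick the i with the
# largest drop, condition, and repeat until k measurements are chosen.

def condition_on(S, i):
    n = len(S)
    return [[S[a][b] - S[a][i] * S[i][b] / S[i][i] for b in range(n)]
            for a in range(n)]

def greedy_select(S, k):
    chosen = []
    for _ in range(k):
        best = max((i for i in range(len(S)) if i not in chosen),
                   key=lambda i: sum(S[a][i] ** 2 / S[i][i]
                                     for a in range(len(S))))
        chosen.append(best)
        S = condition_on(S, best)
    return chosen

# Toy covariance: variable 0 is strongly correlated with everything,
# so measuring it first removes the most uncertainty.
S = [[4.0, 1.8, 1.8],
     [1.8, 1.0, 0.1],
     [1.8, 0.1, 1.0]]
print(greedy_select(S, 2))
```

The point of such greedy schemes is exactly the saving described above: a few well-chosen measurements can stand in for crunching the whole data set.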
Collective behaviour is a widespread phenomenon in biology, cutting through a huge span of scales, from cell colonies up to bird flocks and fish schools. The most prominent trait of collective behaviour is the emergence of global order: individuals synchronize their states, giving the stunning impression that the group behaves as one. In many biological systems, though, it is unclear whether global order is present. A paradigmatic case is that of insect swarms, whose erratic movements seem to suggest that group formation is a mere epiphenomenon of the independent interaction of each individual with an external landmark. In these cases, whether or not the group behaves truly collectively is debated. Here, we experimentally study swarms of midges in the field and measure how much the change of direction of one midge affects that of other individuals. We discover that, despite the lack of collective order, swarms display very strong correlations, totally incompatible with models of non-interacting particles. We find that correlation increases sharply with the swarm's density, indicating that the interaction between midges is based on a metric perception mechanism. By means of numerical simulations we demonstrate that such growing correlation is typical of a system close to an ordering transition. Our findings suggest that correlation, rather than order, is the true hallmark of collective behaviour in biological systems.
Thalmic Labs is a Waterloo-based startup with a very ambitious goal – to change the way we interact with our everyday computing devices. To that end, they’ve developed the Myo armband, a gesture control device that fits around the meaty part of your forearm and detects slight muscle movements, arm rotations and even electrical impulses as you gesture, translating all that information into real-time input.
We were lucky enough to get one of the first hands-on demos of the new version of the Myo, which is set to start shipping to developers shortly, and to pre-order customers this fall. Thalmic CEO and co-founder Stephen Lake also takes us through the process of building a hardware startup, and shipping that startup’s crucial first product. The hardware design is final, and though there are a few bugs still to be worked out (you can see a couple in the video above), the Myo is just about ready for prime time.
Research on human social interactions has traditionally relied on self-reports. Despite their widespread use, self-reported accounts of behaviour are prone to biases and necessarily reduce the range of behaviours, and the number of subjects, that may be studied simultaneously. The development of ever smaller sensors makes it possible to study group-level human behaviour in naturalistic settings outside research laboratories. We used such sensors, sociometers, to examine gender, talkativeness and interaction style in two different contexts. Here, we find that in the collaborative context, women were much more likely to be physically proximate to other women and were also significantly more talkative than men, especially in small groups. In contrast, there were no gender-based differences in the non-collaborative setting. Our results highlight the importance of objective measurement in the study of human behaviour, here enabling us to discern context specific, gender-based differences in interaction style.
Network methods have had profound influence in many domains and disciplines in the past decade. Community structure is a very important property of complex networks, but the accurate definition of a community remains an open problem. Here we define communities based on three properties, and then propose a simple and novel framework to detect communities based on network topology. We analyzed 16 different types of networks and compared our partitions with those of Infomap, LPA, Fastgreedy and Walktrap, which are popular algorithms for community detection. Most of the partitions generated using our approach compare favorably to those generated by these other algorithms. Furthermore, we define overlapping nodes by combining community structure with shortest paths. We also analyzed the E. coli transcriptional regulatory network in detail, and identified modules with strong functional coherence.
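Of the baseline algorithms named above, label propagation (LPA) is the easiest to sketch. The deterministic tie-breaking and the toy graph below are illustrative choices of ours, not the paper's setup:

```python
def label_propagation(adj, max_iter=100):
    # Toy LPA: every node starts in its own community, then repeatedly
    # adopts the most common label among its neighbours until nothing
    # changes. Visiting nodes in sorted order and breaking ties toward
    # the largest label makes this demo reproducible.
    labels = {v: v for v in adj}
    for _ in range(max_iter):
        changed = False
        for v in sorted(adj):
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            best = max(sorted(counts, reverse=True), key=counts.get)
            if labels[v] != best:
                labels[v] = best
                changed = True
        if not changed:
            break
    return labels

# Two 4-cliques joined by the single bridge edge 3-4.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
labels = label_propagation(adj)
print(sorted(set(labels.values())))  # two communities survive
```

Real LPA randomizes the visiting order and tie-breaks, which is why repeated runs can return different partitions; that instability is one motivation for comparing several algorithms, as the study does.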
Honeybees are some of nature’s finest mathematicians. Not only can they calculate angles and comprehend the roundness of the earth, these smart insects build and live in one of the most mathematically efficient architectural designs around: the beehive. Zack Patterson and Andy Peterson delve into the very smart geometry behind the honeybee’s home.
A salient dynamic property of social media is bursting behavior. In this paper, we study bursting behavior in terms of the temporal relation between a preceding baseline fluctuation and the successive burst response using a frequency time series of 3,000 keywords on Twitter. We found that there is a fluctuation threshold up to which the burst size increases as the fluctuation increases and that above the threshold, there appears a variety of burst sizes. We call this threshold the critical threshold. Investigating this threshold in relation to endogenous bursts and exogenous bursts based on peak ratio and burst size reveals that the bursts below this threshold are endogenously caused and above this threshold, exogenous bursts emerge. Analysis of the 3,000 keywords shows that all the nouns have both endogenous and exogenous origins of bursts and that each keyword has a critical threshold in the baseline fluctuation value to distinguish between the two. Having a threshold for an input value for activating the system implies that Twitter is an excitable medium. These findings are useful for characterizing how excitable a keyword is on Twitter and could be used, for example, to predict the response to particular information on social media.
Self-organization on social media: endo-exo bursts and baseline fluctuations
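The two quantities the study relates, baseline fluctuation and burst size, can be computed for a keyword's frequency series. The definitions below are simplified stand-ins assumed for illustration, not the paper's exact measures:

```python
import statistics

def baseline_and_burst(series):
    # Baseline fluctuation: standard deviation of the series before
    # its peak. Burst size: peak height relative to the pre-peak mean.
    peak = max(range(len(series)), key=series.__getitem__)
    pre = series[:peak]
    fluctuation = statistics.pstdev(pre)
    burst = series[peak] / (sum(pre) / len(pre))
    return fluctuation, burst

# Toy daily-count series for one keyword, with a burst on day 6.
daily_counts = [5, 6, 5, 7, 6, 5, 40, 12, 8]
fluct, burst = baseline_and_burst(daily_counts)
print(round(fluct, 2), round(burst, 2))
```

Scanning many keywords for how burst size varies with this baseline fluctuation is what reveals the critical threshold separating endogenous from exogenous bursts.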
When it comes to research into Artificial Life, commercial projects have begun to outpace academic ones.
The term “Artificial Life” emerged in 1986 when the American computer scientist Christopher Langton coined it while organizing the first “Workshop on the Synthesis and Simulation of Living Systems.” Since then the idea of artificial life has spread through computer science into gaming, the study of artificial intelligence, and beyond.
One important factor in this spread has been the Web and the way it allows networked computing to generate complex environments in which artificial organisms can thrive and evolve. Today, Tim Taylor from Monash University in Australia outlines the history of artificial life on the Web and the way it might evolve in the future.
He divides the history of Web-based artificial life into two periods: before and after 2005, a characterization that corresponds roughly with the emergence of Web 2.0 and the collaborative behaviors that it allows.
One of the earliest networked artificial life experiments was based on the well-known A-Life system, Tierra. This was created in the early 1990s by the ecologist Tom Ray to simulate in silico the basic processes of evolutionary and ecological dynamics. After Ray began his work, he soon recognized the potential of the Web to create a large complex environment in which digital organisms could freely evolve, so he set up a project called Network Tierra to exploit this potential.
Florian Kiersch posted on Google+ about a new experimental Google search feature built into the Knowledge Graph carousel at the top of the results page. It is a timeline view: search for something, and Google may show you the facts and knowledge about that query arranged over time.
Win–win choices cause anxiety, often more so than decisions lacking the opportunity for a highly desired outcome. These anxious feelings can paradoxically co-occur with positive feelings, raising important implications for individual decision styles and general well-being. Across three studies, people chose between products that varied in personal value. Participants reported feeling most positive and most anxious when choosing between similarly high-valued products. Behavioral and neural results suggested that this paradoxical experience resulted from parallel evaluations of the expected outcome (inducing positive affect) versus the cost of choosing a response (inducing anxiety). Positive feelings were reduced when there was no high-value option, and anxiety was reduced when only one option was highly valued. Dissociable regions within the striatum and the medial prefrontal cortex (mPFC) tracked these dueling affective reactions during choice. Ventral regions, associated with stimulus valuation, tracked positive feelings and the value of the best item. Dorsal regions, associated with response valuation, tracked anxiety. In addition to tracking anxiety, the dorsal mPFC was associated with conflict during the current choice, and activity levels across individual items predicted whether that choice would later be reversed during an unexpected reevaluation phase. By revealing how win–win decisions elicit responses in dissociable brain systems, these results help resolve the paradox of win–win choices. They also provide insight into behaviors that are associated with these two forms of affect, such as why we are pulled toward good options but may still decide to delay or avoid choosing among them.
Rafael Hostettler of Myorobotics at the Technical University of Munich takes us on a fascinating journey through how an adorable humanoid robot with muscles, called Roboy, was born in 9 months, and sheds light on the future of robotics and what kind of future it might bring us. Fascinated by the complexity and beauty of everything, Rafael always had a hard time choosing. That’s why he has an MSc in Computational Science from ETH Zurich, where he learned to simulate just about everything on computers, so he didn’t have to make a decision. Now he builds robots that imitate the building principles of the human musculoskeletal system and travels the world with Roboy, the 3D-printed robot boy that performs in theatres, goes to school and captivates audiences with his fascinating stories.
Technology is becoming deeply interwoven into the fabric of society. The Internet has become a central source of information for many people when making day-to-day decisions. Here, we present a method to mine the vast data Internet users create when searching for information online, to identify topics of interest before stock market moves. In an analysis of historic data from 2004 until 2012, we draw on records from the search engine Google and online encyclopedia Wikipedia as well as judgments from the service Amazon Mechanical Turk. We find evidence of links between Internet searches relating to politics or business and subsequent stock market moves. In particular, we find that an increase in search volume for these topics tends to precede stock market falls. We suggest that extensions of these analyses could offer insight into large-scale information flow before a range of real-world events.
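A toy version of the kind of signal the study describes (a simplified reconstruction of ours, not the authors' exact method) treats a rise in this week's search volume over its trailing average as bearish:

```python
# Compare each week's search volume for a politics/business topic with
# its trailing average: a rise is treated as a bearish ("sell") signal,
# since the study finds increased search volume tends to precede falls.

def signals(volume, window=3):
    out = []
    for t in range(window, len(volume)):
        baseline = sum(volume[t - window:t]) / window
        out.append("sell" if volume[t] > baseline else "buy")
    return out

# Toy weekly search-volume series with a spike of interest near the end.
vol = [10, 11, 10, 10, 12, 30, 28, 12]
print(signals(vol))  # -> ['buy', 'sell', 'sell', 'sell', 'buy']
```

The window length and the binary buy/sell rule are illustrative assumptions; the published analyses test such signals against historic market data rather than trading on them directly.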
Social bots are sending a significant amount of information through the Twittersphere. Now there’s a tool to help identify them.
Back in 2011, a team from Texas A&M University carried out a cyber sting to trap nonhuman Twitter users that were polluting the Twittersphere with spam. Their approach was to set up “honeypot” accounts which posted nonsensical content that no human user would ever be interested in. Any account that retweeted this content, or friended the owner, must surely be a nonhuman user known as a social bot.
In mammals, the developmental path that links the primary behaviours observed during foetal stages to the full-fledged behaviours observed in adults is still beyond our understanding. Often theories of motor control try to deal with the process of incremental learning in an abstract and modular way without establishing any correspondence with the mammalian developmental stages. In this paper, we propose a computational model that links three distinct behaviours which appear at three different stages of development. In order of appearance, these behaviours are: spontaneous motor activity (SMA), reflexes, and coordinated behaviours, such as locomotion. The goal of our model is to address in silico four hypotheses that are currently hard to verify in vivo: First, the hypothesis that spinal reflex circuits can be self-organized from the sensor and motor activity induced by SMA. Second, the hypothesis that supraspinal systems can modulate reflex circuits to achieve coordinated behaviour. Third, the hypothesis that, since SMA is observed in an organism throughout its entire lifetime, it provides a mechanism suitable to maintain the reflex circuits aligned with the musculoskeletal system, and thus adapt to changes in body morphology. And fourth, the hypothesis that by changing the modulation of the reflex circuits over time, one can switch between different coordinated behaviours. Our model is tested in a simulated musculoskeletal leg actuated by six muscles arranged in a number of different ways. Hopping is used as a case study of coordinated behaviour. Our results show that reflex circuits can be self-organized from SMA, and that, once these circuits are in place, they can be modulated to achieve coordinated behaviour. In addition, our results show that our model can naturally adapt to different morphological changes and perform behavioural transitions.
Do we have the Internet we deserve? There’s an argument to say that yes, we absolutely do, given web users’ general reluctance to pay for content. We are, of course, paying, just not with cold hard cash but with our privacy — as digital business models rely on gathering and selling intel on their users to make the money to pay (the investors who paid) for the free service.
Users are also increasingly paying with time and attention, as more ad content — and more adverts masquerading as, infiltrating and degrading content — thrusts its way in front of our eyeballs in ever more insidious ways. Whether it’s repurposing our friends’ photos and endorsements to socially engineer us into buying stuff, or resorting to other background tracking and targeting tricks to divert our attention from whatever it was we were actually trying to do online.
The commercialization of the web is the ugly reality of the hidden cost of all the datacenters and servers required to power the Internet. And that commercialization is compounded by the power of the big digital platforms that dominate the web we have today: Google, Facebook, Amazon. Increasingly we’re forced to play by their rules if we want to participate in the digital space where most of our friends are.
Understanding why spectra that are physically the same appear different in different contexts (color contrast), whereas spectra that are physically different appear similar (color constancy) presents a major challenge in vision research. Here, we show that the responses of biologically inspired neural networks evolved on the basis of accumulated experience with spectral stimuli automatically generate contrast and constancy. The results imply that these phenomena are signatures of a strategy that biological vision uses to circumvent the inverse optics problem as it pertains to light spectra, and that double-opponent neurons in early-level vision evolve to serve this purpose. This strategy provides a way of understanding the peculiar relationship between the objective world and subjective color experience, as well as rationalizing the relevant visual circuitry without invoking feature detection or image representation.
Societies are built on social interactions among individuals. Cooperation represents the simplest form of a social interaction: one individual provides a benefit to another one at a cost to itself. Social networks represent a dynamical abstraction of social interactions in a society. The behaviour of an individual towards others and of others towards the individual shape the individual's neighbourhood and hence the local structure of the social network. Here we propose a simple theoretical framework to model dynamic social networks by focussing on each individual's actions instead of interactions between individuals. This eliminates the traditional dichotomy between the strategy of individuals and the structure of the population and easily complements empirical studies. As a consequence, altruists, egoists and fair types are naturally determined by the local social structures, while globally egalitarian networks or stratified structures arise. Cooperative interactions drive the emergence and shape the structure of social networks.
Without sensory feedback, flies cannot fly. Exactly how various feedback controls work in insects is a complex puzzle to solve. What do insects measure to stabilize their flight? How often and how fast must insects adjust their wings to remain stable? To gain insights into algorithms used by insects to control their dynamic instability, we develop a simulation tool to study free flight. To stabilize flight, we construct a control algorithm that modulates wing motion based on discrete measurements of the body-pitch orientation. Our simulations give theoretical bounds on both the sensing rate and the delay time between sensing and actuation. Interpreting our findings together with experimental results on fruit flies’ reaction time and sensory motor reflexes, we conjecture that fruit flies sense their kinematic states every wing beat to stabilize their flight. We further propose a candidate for such a control involving the fly’s haltere and first basalar motor neuron. Although we focus on fruit flies as a case study, the framework for our simulation and discrete control algorithms is applicable to studies of both natural and man-made fliers.
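The discrete sensing-and-actuation loop described above can be illustrated with a toy unstable pitch model. The dynamics and gains below are assumptions for illustration, not the authors' model: frequent sampling holds the angle near zero, sparse sampling does not.

```python
# Toy pitch dynamics theta'' = a*theta diverge on their own, but a
# proportional-derivative correction recomputed only at discrete
# "wing beat" sampling instants (and held constant in between) can
# stabilize the angle if the sampling is frequent enough.

def simulate(sample_every, a=30.0, kp=80.0, kd=10.0,
             dt=0.001, steps=2000):
    theta, omega, torque = 0.1, 0.0, 0.0
    for k in range(steps):
        if k % sample_every == 0:          # sensor fires this step
            torque = -kp * theta - kd * omega
        omega += (a * theta + torque) * dt  # semi-implicit Euler step
        theta += omega * dt
    return abs(theta)

print(simulate(sample_every=5))    # frequent sensing: stays small
print(simulate(sample_every=800))  # sparse sensing: diverges
```

Sweeping `sample_every` and the actuation delay in a model like this is how one obtains the kind of theoretical bounds on sensing rate that the paper compares with fruit flies' wing-beat-rate sensing.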