ECCS'12 Conference on Complex Systems, 3-7 September 2012, Université Libre de Bruxelles
Natural language—spoken and signed—is a multichannel phenomenon, involving facial and body expression, and voice and visual intonation that is often used in the service of a social urge to communicate meaning. Given that iconicity seems easier and less abstract than making arbitrary connections between sound and meaning, iconicity and gesture have often been invoked in the origin of language alongside the urge to convey meaning. To get a fresh perspective, we critically distinguish the origin of a system capable of evolution from the subsequent evolution that system becomes capable of. Human language arose on a substrate of a system already capable of Darwinian evolution; the genetically supported uniquely human ability to learn a language reflects a key contact point between Darwinian evolution and language. Though implemented in brains generated by DNA symbols coding for protein meaning, the second higher-level symbol-using system of language now operates in a world mostly decoupled from Darwinian evolutionary constraints. Examination of Darwinian evolution of vocal learning in other animals suggests that the initial fixation of a key prerequisite to language into the human genome may actually have required initially side-stepping not only iconicity, but the urge to mean itself. If sign languages came later, they would not have faced this constraint.
Origin of symbol-using systems: speech, but not sign, without the semantic urge
Via Complexity Digest
Twitter has launched a tool for brands and advertisers in the UK that makes use of its collated ‘Everyday Moments’ data to show exactly how often, where and when people interact when talking about certain topics.
For example, you might want to check out the UK’s ongoing war between tea and coffee drinkers or you might want to see how many people around the nation are grumbling about traffic delays.
In 2011, Emmanuel Nnaemeka Nnadi needed help to sequence some drug-resistant fungal pathogens. A PhD student studying microbiology in Nigeria, he did not have the expertise and equipment he needed. So he turned to ResearchGate, a free social-networking site for academics, and fired off a few e-mails. When he got a reply from Italian geneticist Orazio Romeo, an international collaboration was born. Over the past three years, the two scientists have worked together on fungal infections in Africa, with Nnadi, now at Plateau State University in Bokkos, shipping his samples to Romeo at the University of Messina for analysis. “It has been a fruitful relationship,” says Nnadi — and they have never even met.
The genetic origin of advanced social organization has long been one of the outstanding problems of evolutionary biology. Here we present an analysis of the major steps in ant evolution, based for the first time, to our knowledge, on combined recent advances in paleontology, phylogeny, and the study of contemporary life histories. We provide evidence of the causal forces of natural selection shaping several key phenomena: (i) the relative lateness and rarity in geological time of the emergence of eusociality in ants and other animal phylads; (ii) the prevalence of monogamy at the time of evolutionary origin; and (iii) the female-biased sex allocation observed in many ant species. We argue that a clear understanding of the evolution of social insects can emerge if, in addition to relatedness-based arguments, we take into account key factors of natural history and study how natural selection acts on alleles that modify social behavior.
Animals learn some things more easily than others. To explain this so-called prepared learning, investigators commonly appeal to the evolutionary history of stimulus–consequence relationships experienced by a population or species. We offer a simple model that formalizes this long-standing hypothesis. The key variable in our model is the statistical reliability of the association between stimulus, action, and consequence. We use experimental evolution to test this hypothesis in populations of Drosophila. We systematically manipulated the reliability of two types of experience (the pairing of the aversive chemical quinine with color or with odor). Following 40 generations of evolution, data from learning assays support our basic prediction: Changes in learning abilities track the reliability of associations during a population’s selective history. In populations where, for example, quinine–color pairings were unreliable but quinine–odor pairings were reliable, we find increased sensitivity to learning the quinine–odor experience and reduced sensitivity to learning quinine–color. To the best of our knowledge, this is the first experimental demonstration of the evolution of prepared learning.
With data generation in the quadrillions of bytes every minute, the European Union joins with researchers to find a way to process all this information
Every single minute, the world generates 1.7 million billion bytes of data, equal to 360,000 DVDs. How can our brain deal with increasingly big and complex data sets? European Union researchers are developing an interactive system that not only presents data the way we want but also changes the presentation constantly in order to prevent brain overload. The project could enable students to study more efficiently or journalists to cross-check sources more quickly. Several museums in Germany, the Netherlands, the United Kingdom and the United States have already showed interest in the new technology.
Learning has been studied extensively in the context of isolated individuals. However, many organisms are social and consequently make decisions both individually and as part of a collective. Reaching consensus necessarily means that a single option is chosen by the group, even when there are dissenting opinions. This decision-making process decouples the otherwise direct relationship between animals' preferences and their experiences (the outcomes of decisions). Instead, because an individual's learned preferences influence what others experience, and therefore learn about, collective decisions couple the learning processes between social organisms. This introduces a new, and previously unexplored, dynamical relationship between preference, action, experience and learning. Here we model collective learning within animal groups that make consensus decisions. We reveal how learning as part of a collective results in behavior that is fundamentally different from that learned in isolation, allowing grouping organisms to spontaneously (and indirectly) detect correlations between group members' observations of environmental cues, adjust strategy as a function of changing group size (even if that group size is not known to the individual), and achieve a decision accuracy that is very close to that which is provably optimal, regardless of environmental contingencies. Because these properties make minimal cognitive demands on individuals, collective learning, and the capabilities it affords, may be widespread among group-living organisms. Our work emphasizes the importance and need for theoretical and experimental work that considers the mechanism and consequences of learning in a social context.
Calling itself the “world’s smartest baby monitor,” Sproutling is launching its pre-order campaign today for its machine-learning monitor that predicts a baby’s sleeping patterns, mood and room conditions to provide personalized insights for parents via an iOS app.
Last year, Sproutling raised $2.6 million and an additional $100,000 this year to design the wearable aimed at raising parenting IQ and relieving stress.
Smart baby monitors aren’t new. Withings’ Smart Baby Monitor and Mimo both have features such as notifying parents if something is wrong with their baby, and the wearable bootie Owlet measures a baby’s heart rate. But Sproutling uses its machine-learning technology to send parents a push notification when they cannot pay full attention, such as in the shower or when taking a nap. It pairs a wearable band with a mobile app to deliver the results of 16 measurements in real time, such as:
- Heart rate
- Predicting when the baby will wake up
- If the baby rolled over
- The baby’s mood when he or she wakes up
- The brightness and noise levels of the room, in case the baby has a hard time sleeping
- The room temperature
The app doesn’t simply provide the heart rate as a number, as co-founder Chris Bruce says it might cause a parent, who doesn’t really know the difference between 130 and 120 beats per minute, to worry. Instead, the wearable learns the baby’s patterns, whether it is the heart rate or sleep cycle, and averages it out to provide insights if an irregular change occurs.
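Sproutling has not published its algorithm, but the idea the article describes — learn a baseline, then flag deviations rather than show raw numbers — can be sketched as a rolling average with a tolerance band. The window size and threshold below are hypothetical, not Sproutling's actual values.

```python
from collections import deque

def make_baseline_monitor(window=60, tolerance=0.15):
    """Track a rolling average of readings and flag irregular changes.

    `window` and `tolerance` are illustrative parameters, not
    Sproutling's actual values.
    """
    readings = deque(maxlen=window)

    def check(value):
        # Build up a baseline before flagging anything.
        if len(readings) < window:
            readings.append(value)
            return None
        baseline = sum(readings) / len(readings)
        readings.append(value)
        # Flag readings that deviate from the baseline by more than
        # the tolerance fraction, instead of reporting raw numbers.
        if abs(value - baseline) / baseline > tolerance:
            return "irregular"
        return "normal"

    return check

monitor = make_baseline_monitor(window=5, tolerance=0.15)
for bpm in [120, 122, 118, 121, 119]:
    monitor(bpm)          # building the baseline
print(monitor(121))       # near baseline: "normal"
print(monitor(150))       # large jump: "irregular"
```

This is why a drop from 130 to 120 beats per minute need not alarm a parent: what gets surfaced is deviation from the learned pattern, not the number itself.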
Web advertisers are stealthily monitoring our browsing habits—even when we tell them not to
In July 1993, The New Yorker published a cartoon by Peter Steiner that depicted a Labrador retriever sitting on a chair in front of a computer, paw on the keyboard, as he turns to his beagle companion and says, “On the Internet, nobody knows you’re a dog.” Two decades later, interested parties not only know you’re a dog, they also have a pretty good idea of the color of your fur, how often you visit the vet, and what your favorite doggy treat is.
How do they get all that information? In a nutshell: Online advertisers collaborate with websites to gather your browsing data, eventually building up a detailed profile of your interests and activities. These browsing profiles can be so specific that they allow advertisers to target populations as narrow as mothers with teenage children or people who require allergy-relief products. When this tracking of our browsing habits is combined with our self-revelations on social media, merchants’ records of our off-line purchases, and logs of our physical whereabouts derived from our mobile phones, the information that commercial organizations, much less government snoops, can compile about us becomes shockingly revealing.
Here we examine the history of such tracking on the Web, paying particular attention to a recent phenomenon called fingerprinting, which enables companies to spy on people even when they configure their browsers to avoid being tracked.
Cookies are small pieces of text that websites cause the user’s browser to store. They are then made available to the website during subsequent visits, allowing those sites to recognize returning customers or to keep track of the state of a given session, such as the items placed in an online shopping cart. Cookies also enable sites to remember that users are logged in, freeing them of the need to repeatedly provide their user names and passwords for each protected page they access.
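Both halves of that exchange can be sketched with Python's standard `http.cookies` module: the server emits `Set-Cookie` response headers, and on the next visit the browser echoes the values back in a single `Cookie` request header that the server parses. The cookie names and values here are made up for illustration.

```python
from http.cookies import SimpleCookie

# A server response sets cookies identifying the session and the cart.
response = SimpleCookie()
response["session_id"] = "abc123"
response["cart"] = "item42"
# Each entry becomes one Set-Cookie header sent to the browser.
print(response.output())

# On the next visit, the browser echoes the stored values back in a
# single Cookie request header, which the server parses to recognize
# the returning user without asking for credentials again.
request = SimpleCookie("session_id=abc123; cart=item42")
print(request["session_id"].value)  # "abc123"
```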
University of Washington engineers have designed a clever new communication system called Wi-Fi backscatter that uses ambient radio frequency signals as a power source for battery-free devices (such as temperature sensors or wearable technology) and also reuses the existing Wi-Fi infrastructure to provide Internet connectivity for these devices.
“If Internet of Things devices are going to take off, we must provide connectivity to the potentially billions of battery-free devices that will be embedded in everyday objects,” said Shyam Gollakota, a UW assistant professor of computer science and engineering.
“We now have the ability to enable Wi-Fi connectivity for devices while consuming orders of magnitude less power than what Wi-Fi typically requires.”
To supply power to the devices, the system uses an “ambient backscatter” scheme previously developed by the UW group, which allows two devices to communicate with each other by harvesting ambient radio, TV, and cellular transmissions, as KurzweilAI described last year.
So the new research takes that a step further by also connecting each individual device to the Internet. But even low-power Wi-Fi consumes 1,000 to 10,000 times more power than can be harvested from these wireless signals. Instead, the new system uses an ultra-low-power “tag” with an antenna and circuitry that can talk to Wi-Fi-enabled laptops or smartphones while consuming negligible power (less than 10 microwatts).
These tags work by essentially “looking” for Wi-Fi signals moving between the router and a laptop or smartphone. The tags encode data in real time by either reflecting or not reflecting the Wi-Fi router’s signals, thus slightly changing the wireless signal. Wi-Fi-enabled devices like laptops and smartphones would detect these minute changes (by analyzing changes in reflected signals) and receive data from the tag.
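The reflect-or-not scheme is essentially on-off keying. A toy decoder can recover the tag's bits by thresholding deviations of the received signal strength from its baseline; the constants and modulation details below are illustrative, not the UW team's actual design.

```python
def decode_backscatter(samples, baseline, threshold=0.1):
    """Decode a tag's bits from received signal-strength samples.

    The tag reflects (bit 1) or absorbs (bit 0) the router's signal,
    nudging received power away from its baseline. Thresholding that
    deviation recovers the bitstream. Illustrative only, not the UW
    team's actual modulation scheme.
    """
    bits = []
    for s in samples:
        deviation = abs(s - baseline) / baseline
        bits.append(1 if deviation > threshold else 0)
    return bits

# Simulated received power: reflections perturb the signal slightly.
baseline = 1.00
samples = [1.00, 1.18, 0.99, 1.21, 1.02]
print(decode_backscatter(samples, baseline))  # [0, 1, 0, 1, 0]
```

The appeal of the scheme is that the tag never generates a radio signal of its own — it only switches its antenna between reflecting and absorbing states, which is what keeps its power draw in the microwatt range.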
So far, the UW’s Wi-Fi backscatter tag has communicated with a Wi-Fi device at rates of 1 kilobit per second with about 2 meters between the devices. The researchers plan to extend the range to about 20 meters and have filed patents on the technology.
The “Internet of Things” would extend connectivity to perhaps billions of devices. Battery-free sensors could be embedded in everyday objects to help monitor and track everything from the structural safety of bridges to the health of your heart. For example, your smart watch could upload your workout data onto a Google spreadsheet.
Or sensors embedded around your home could track minute-by-minute temperature changes and send that information to your thermostat to help conserve energy.
The researchers will publish their results at the Association for Computing Machinery’s Special Interest Group on Data Communication‘s annual conference this month in Chicago. The team also plans to start a company based on the technology.
Via Dr. Stefan Gruenwald
Our thoughts have a limited bandwidth; we can only fully process a few items in mind simultaneously. To compensate, the brain developed attention, the ability to select information relevant to the current task, while filtering out the rest. Therefore, by understanding the neural mechanisms of attention we hope to understand a core component of cognition. Here, we review our recent investigations of the neural mechanisms underlying the control of visual attention in frontal and parietal cortex. This includes the observation that the neural mechanisms that shift attention were synchronized to 25 Hz oscillatory brain rhythms, with each shift in attention falling within a single cycle of the oscillation. We generalize these findings to present a hypothesis that cognition relies on neural mechanisms that operate in discrete, periodic computations, as reflected in ongoing oscillations. We discuss the advantages of the model, experimental support, and make several testable hypotheses.
Without sensory feedback, flies cannot fly. Exactly how various feedback controls work in insects is a complex puzzle to solve. What do insects measure to stabilize their flight? How often and how fast must insects adjust their wings to remain stable? To gain insights into algorithms used by insects to control their dynamic instability, we develop a simulation tool to study free flight. To stabilize flight, we construct a control algorithm that modulates wing motion based on discrete measurements of the body-pitch orientation. Our simulations give theoretical bounds on both the sensing rate and the delay time between sensing and actuation. Interpreting our findings together with experimental results on fruit flies’ reaction time and sensory motor reflexes, we conjecture that fruit flies sense their kinematic states every wing beat to stabilize their flight. We further propose a candidate for such a control involving the fly’s haltere and first basalar motor neuron. Although we focus on fruit flies as a case study, the framework for our simulation and discrete control algorithms is applicable to studies of both natural and man-made fliers.
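The control idea in the abstract — modulate wing motion from discrete, delayed measurements of body pitch — can be sketched as a discrete-time feedback loop around an unstable pitch mode. The dynamics, gains, time step, and delay below are illustrative stand-ins, not the paper's flight simulation.

```python
def simulate_pitch(kp=60.0, kd=8.0, dt=0.004, steps=500, delay=2):
    """Stabilize an unstable pitch mode with sampled, delayed feedback.

    Toy model: theta'' = a*theta + u, where a > 0 makes unperturbed
    flight diverge. The controller samples pitch once per time step
    and actuates `delay` steps late, mimicking a sensorimotor lag.
    All constants are illustrative, not the paper's values.
    """
    a = 20.0                   # open-loop instability
    theta, omega = 0.1, 0.0    # initial pitch perturbation (rad, rad/s)
    history = [(theta, omega)]
    for _ in range(steps):
        # Actuate using a measurement taken `delay` steps ago.
        th_m, om_m = history[max(0, len(history) - 1 - delay)]
        u = -kp * th_m - kd * om_m
        # Integrate the pitch dynamics one step (forward Euler).
        alpha = a * theta + u
        omega += alpha * dt
        theta += omega * dt
        history.append((theta, omega))
    return theta

# With feedback, the initial 0.1 rad perturbation decays toward zero.
print(abs(simulate_pitch()) < 0.01)
```

Increasing `delay` or lowering the sampling rate in this sketch eventually destabilizes the loop, which is the kind of bound the paper derives for real flies.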
It’s good to be Yann LeCun.
Mark Zuckerberg recently handpicked the longtime NYU professor to run Facebook’s new artificial intelligence lab. The IEEE Computational Society just gave him its prestigious Neural Network Pioneer Award, in honor of his work on deep learning, a form of artificial intelligence meant to more closely mimic the human brain. And, perhaps most of all, deep learning has suddenly spread across the commercial tech world, from Google to Microsoft to Baidu to Twitter, just a few years after most AI researchers openly scoffed at it.
All of these tech companies are now exploring a particular type of deep learning called convolutional neural networks, aiming to build web services that can do things like automatically understand natural language and recognize images. At Google, “convnets” power the voice recognition system available on Android phones. At China’s Baidu, they drive a new visual search engine. This kind of deep learning has many fathers, but its success should resonate with LeCun more than anyone. “Convolutional neural nets for vision—that’s what he pushed more than anybody else,” says Microsoft’s Leon Bottou, one of LeCun’s earliest collaborators.
He pushed it in the face of enormous skepticism. In the ’80s, when LeCun first got behind the idea of convnets—an approximation of the networks of neurons in the brain—the powerful computers and enormous data sets needed to make them work just didn’t exist. The very notion of a neural network had fallen into disrepute after it failed to deliver on the promises of scientists who first dreamed of artificial intelligence at the dawn of the computer age. It was hard to publish anything related to neural nets in the major academic journals, and this would remain the case in the ’90s and on into the aughts.
But LeCun persisted. “He kind of carried the torch through the dark ages,” says Geoffrey Hinton, the central figure in the deep learning movement. And eventually, computer power caught up with the remarkable technology.
Twitter is a major social media platform in which users send and read messages (“tweets”) of up to 140 characters. In recent years this communication medium has been used by those affected by crises to organize demonstrations or find relief. Because traffic on this media platform is extremely heavy, with hundreds of millions of tweets sent every day, it is difficult to differentiate between times of turmoil and times of typical discussion. In this work we present a new approach to addressing this problem. We first assess several possible “thermostats” of activity on social media for their effectiveness in finding important time periods. We compare methods commonly found in the literature with a method from economics. By combining methods from computational social science with methods from economics, we introduce an approach that can effectively locate crisis events in the mountains of data generated on Twitter. We demonstrate the strength of this method by using it to locate the social events relating to the Occupy Wall Street movement protests at the end of 2011.
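The abstract does not spell out the economics-derived method, so as a baseline illustration only: a simple z-score "thermostat" flags days whose tweet volume departs sharply from the recent past. This is one of the common approaches such a method would be compared against, not the paper's own detector.

```python
from statistics import mean, stdev

def burst_scores(daily_counts, window=7):
    """Score each day's tweet volume against the preceding window.

    A plain z-score burst detector: an illustrative 'thermostat' of
    social-media activity, not the economics-based method the paper
    introduces.
    """
    scores = []
    for i in range(window, len(daily_counts)):
        past = daily_counts[i - window:i]
        mu, sigma = mean(past), stdev(past)
        scores.append((daily_counts[i] - mu) / sigma if sigma else 0.0)
    return scores

# Seven quiet days, then a spike like a protest day would produce.
counts = [100, 98, 103, 99, 101, 97, 102, 350]
print(burst_scores(counts)[-1] > 3)  # the spike stands far outside the noise
```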
Existing visual search research has demonstrated that the receipt of reward will be beneficial for subsequent perceptual and attentional processing of features that have characterized targets, but detrimental for processing of features that have characterized irrelevant distractors. Here we report a similar effect of reward on location. Observers completed a visual search task in which they selected a target, ignored a salient distractor, and received random-magnitude reward for correct performance. Results show that when target selection garnered a rewarding outcome, attention is subsequently (a) primed to return to the target location, and (b) biased away from the location that was occupied by the salient, task-irrelevant distractor. These results suggest that in addition to priming features, reward acts to guide visual search by priming contextual locations of visual stimuli.
Technology is becoming deeply interwoven into the fabric of society. The Internet has become a central source of information for many people when making day-to-day decisions. Here, we present a method to mine the vast data Internet users create when searching for information online, to identify topics of interest before stock market moves. In an analysis of historic data from 2004 until 2012, we draw on records from the search engine Google and online encyclopedia Wikipedia as well as judgments from the service Amazon Mechanical Turk. We find evidence of links between Internet searches relating to politics or business and subsequent stock market moves. In particular, we find that an increase in search volume for these topics tends to precede stock market falls. We suggest that extensions of these analyses could offer insight into large-scale information flow before a range of real-world events.
In standard attractor neural network models, specific patterns of activity are stored in the synaptic matrix, so that they become fixed point attractors of the network dynamics. The storage capacity of such networks has been quantified in two ways: the maximal number of patterns that can be stored, and the stored information measured in bits per synapse. In this paper, we compute both quantities in fully connected networks of N binary neurons with binary synapses, storing patterns with coding level f, in the large N and sparse coding limits (N → ∞, f → 0). We also derive finite-size corrections that accurately reproduce the results of simulations in networks of tens of thousands of neurons. These methods are applied to three different scenarios: (1) the classic Willshaw model, (2) networks with stochastic learning in which patterns are shown only once (one-shot learning), (3) networks with stochastic learning in which patterns are shown multiple times. The storage capacities are optimized over network parameters, which allows us to compare the performance of the different models. We show that finite-size effects strongly reduce the capacity, even for networks of realistic sizes. We discuss the implications of these results for memory storage in the hippocampus and cerebral cortex.
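The classic Willshaw model in scenario (1) is simple enough to sketch: a binary synapse switches on whenever its input and output units are co-active in a stored pattern pair and never switches off, and recall fires an output unit only if its summed input equals the number of active cue bits. The tiny patterns below are made up for illustration.

```python
def willshaw_store(pairs, n_in, n_out):
    """Willshaw network: binary synapses set by Hebbian co-activation.

    Synapse w[i][j] becomes 1 (and stays 1) whenever input unit i and
    output unit j are active together in a stored pattern pair.
    """
    w = [[0] * n_out for _ in range(n_in)]
    for x, y in pairs:
        for i in range(n_in):
            if x[i]:
                for j in range(n_out):
                    if y[j]:
                        w[i][j] = 1
    return w

def willshaw_recall(w, x):
    # An output unit fires iff every active input unit projects to it,
    # i.e. its summed input equals the number of active cue bits.
    active = sum(x)
    sums = [sum(w[i][j] for i in range(len(x)) if x[i])
            for j in range(len(w[0]))]
    return [1 if s == active else 0 for s in sums]

# Sparse patterns: two bits active out of eight (low coding level f).
x1 = [1, 1, 0, 0, 0, 0, 0, 0]; y1 = [0, 0, 1, 1, 0, 0, 0, 0]
x2 = [0, 0, 0, 0, 1, 1, 0, 0]; y2 = [0, 0, 0, 0, 0, 0, 1, 1]
w = willshaw_store([(x1, y1), (x2, y2)], 8, 8)
print(willshaw_recall(w, x1))  # recovers y1
```

Sparse coding matters here exactly as in the paper: because synapses only ever switch on, dense patterns quickly saturate the matrix with 1s and recall starts producing false positives.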
Encoding and decoding in functional magnetic resonance imaging has recently emerged as an area of research to noninvasively characterize the relationship between stimulus features and human brain activity. To overcome the challenge of formalizing what stimulus features should modulate single voxel responses, we introduce a general approach for making directly testable predictions of single voxel responses to statistically adapted representations of ecologically valid stimuli. These representations are learned from unlabeled data without supervision. Our approach is validated using a parsimonious computational model of (i) how early visual cortical representations are adapted to statistical regularities in natural images and (ii) how populations of these representations are pooled by single voxels. This computational model is used to predict single voxel responses to natural images and identify natural images from stimulus-evoked multiple voxel responses. We show that statistically adapted low-level sparse and invariant representations of natural images better span the space of early visual cortical representations and can be more effectively exploited in stimulus identification than hand-designed Gabor wavelets. Our results demonstrate the potential of our approach to better probe unknown cortical representations.
Ah, Hollywood. Our glowing beacon of modern hope and dreams. But before Hollywood, there was New York, and before New York there was Berlin, Paris, Rome and Greece. History’s most creative people have always flocked to cultural and intellectual hubs, and now, thanks to an amazing visualization from researchers at the University of Texas at Dallas, we can see how that migration has changed over time.
Last week in the journal Science, the researchers (led by University of Texas art historian Maximilian Schich) published a study that looked at the cultural history of Europe and North America by mapping the births and deaths of more than 150,000 notable figures—including everyone from Leonardo da Vinci to Ernest Hemingway. That data was turned into an amazing animated infographic that looks strikingly similar to the illustrated flight paths you find in the back of your inflight magazine. Blue dots indicate a birth, red ones a death.
A smartphone app developed by a University of Sussex academic to boost phone signal in big stadiums is heading to more football grounds around the country.
With the new football season getting underway this weekend, thousands of fans will be able to give the red card to poor Wi-Fi and phone signal during matches by downloading the free digitalStadium app, which creates a network between phones in the stadium to share bandwidth.
The digitalStadium technology enables fans and the club to communicate with each other during a match, providing real-time information on other key games, league table stats and travel information.
Fans can also take part in Twitter debates and competitions such as Rate the Ref while watching the game, while a live ticker feed delivers the latest news, views and special offers from the club.
The technology was developed by a team led by Dr Ian Wakeman, Senior Lecturer in Software Systems, and has been on trial for over a year at Brighton and Hove Albion FC, whose stadium is just across the road from the University.
A dialect is a particular form of language limited to a specific region or social group. Linguists are fascinated by dialects because they reveal social classes, patterns of immigration and how groups have influenced each other in the past.
But studying dialects is hard work. Traditionally, linguists do this by interviewing a relatively small number of people, typically a few hundred, and asking them to fill out questionnaires. Researchers then use the results to create linguistic atlases but these are naturally limited by the choice of the locations and individuals who have been studied.
Today, Bruno Gonçalves at the University of Toulon in France and David Sánchez at the Institute for Cross-Disciplinary Physics and Complex Systems on the island of Majorca, Spain, say they have found a new way to study dialects on a global scale using messages posted on Twitter. The results reveal a major surprise about the way dialects are distributed around the world and provide a fascinating snapshot of how they are evolving under various new pressures, such as global communication mechanisms like Twitter.
Gonçalves and Sánchez begin by sampling all the tweets written in Spanish over a two-year period that also contain geolocation information. That gave them a database of 50 million geolocated tweets, with most from Spain, Spanish America, and the United States.
In Harvard researcher Robert Wood’s lab, a robot the size of a quarter lifts off the ground, its wings a blur. This micromachine, or RoboBee, is a marvel of modern robotics, able to hover and steer by independently flapping its wings 120 times a second.
RoboBee’s inventors think it might one day pollinate crops, supporting bee populations that are struggling to overcome colony collapse disorder—a phenomenon in which bee keepers are losing an abnormally high number of hives to as yet unconfirmed causes.
But there’s a catch.
To do anything as complicated as crop pollination, RoboBee needs to be autonomous—and it isn’t. Being as lightweight as possible is crucial for flying robots. And while RoboBee has advanced over the years, from flying in only a straight line to making turns, it still trails an electrical umbilical cord for power because it can’t lift batteries.
Being dead for sixty years hasn’t stopped Alan Turing’s giant contributions to science. While Turing is primarily famous for laying the foundations for computing (and cracking the Enigma Code), this work refers to a different sort of digit: an explanation of how fingers and toes form.
Aside from his work on computing, artificial intelligence and code breaking, Turing also proposed a theory for how identical embryonic cells could self-organize into the complex shapes and patterning we see in nature.
Turing demonstrated that remarkable features could be the product of just two chemicals, which he called morphogens. In Turing’s model one morphogen acted as an activator, the other as an inhibitor. Both diffuse away from the source cell. On a leopard's fur, concentrations reduce with distance, but they do not do so evenly. While the inhibitor shows a linear decline, the activator declines exponentially from a higher starting point. Where the concentration of activator is higher than that of the inhibitor, cells can change color, creating dark spots on a light background.
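That threshold picture can be sketched directly: compare an exponentially decaying activator against a linearly declining inhibitor along the distance from a source cell, and mark where the activator wins. All constants below are illustrative choices, not values from Turing's paper.

```python
import math

def spot_region(a0=2.0, decay=1.0, i0=1.0, slope=0.08, r_max=10.0, dr=0.1):
    """Find where the activator exceeds the inhibitor around a source.

    As described above: the activator starts higher and decays
    exponentially; the inhibitor starts lower and declines linearly.
    Cells darken where activator > inhibitor. Constants illustrative.
    """
    dark = []
    r = 0.0
    while r <= r_max:
        activator = a0 * math.exp(-decay * r)
        inhibitor = i0 - slope * r
        if activator > inhibitor:
            dark.append(round(r, 1))
        r += dr
    return dark

region = spot_region()
print(region[-1])  # with these constants, the dark spot ends near r = 0.7
```

Near the source the fast-starting activator dominates and the cell darkens; past the crossover the slowly declining inhibitor wins, giving a dark spot of finite size on a light background.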
The elegance of Turing's theory saw it win near universal support as an explanation for not just spots but other colouring such as zebra stripes and many plant features. Earlier this year experimental evidence for Turing’s explanation was published for the first time using synthetic cell-like structures.
However, support for the idea that Turing’s mechanism explains features such as fingers and toes has waned in recent decades. In 2012 Professor James Sharpe of the Spanish Center for Genomic Regulation produced a paper demonstrating that Hox genes, which control the overall body plan of embryos in animals, operate on a Turing system.