In evolutionary games, reproductive success is determined by payoffs. Weak selection means that even large differences in game outcomes translate into small fitness differences. Many results have been derived using weak selection approximations, in which perturbation analysis facilitates the derivation of analytical results. Here, we ask whether results derived under weak selection are also qualitatively valid for intermediate and strong selection. By “qualitatively valid” we mean that the ranking of strategies induced by an evolutionary process does not change when the intensity of selection increases. For two-strategy games, we show that the ranking obtained under weak selection cannot be carried over to higher selection intensity if the number of players exceeds two. For games with three (or more) strategies, previous examples for multiplayer games have shown that the ranking of strategies can change with the intensity of selection. In particular, rank changes imply that the most abundant strategy at one intensity of selection can become the least abundant at another. We show that this applies already to pairwise interactions for a broad class of evolutionary processes. Even when both weak and strong selection limits lead to consistent predictions, rank changes can occur at intermediate intensities of selection. To analyze how common such games are, we show numerically that for randomly drawn two-player games with three or more strategies, rank changes frequently occur and their likelihood increases rapidly with the number of strategies. In particular, rank changes become almost certain once the number of strategies is sufficiently large, which jeopardizes the predictive power of results derived for weak selection.
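The kind of ranking the abstract refers to can be computed concretely for one standard process. The sketch below is an illustrative assumption, not the paper's exact setup: a frequency-dependent Moran process with exponential payoff-to-fitness mapping f = exp(β · payoff), evaluated in the small-mutation limit, where long-run strategy abundances follow from pairwise fixation probabilities. Scanning β then probes whether the abundance ranking changes with selection intensity.

```python
import numpy as np

def fixation_prob(payoff, i, j, N, beta):
    # Probability that a single strategy-i mutant takes over a population of
    # N-1 strategy-j residents under a frequency-dependent Moran process with
    # exponential payoff-to-fitness mapping f = exp(beta * payoff).
    total, log_prod = 1.0, 0.0
    for k in range(1, N):  # k mutants currently present
        pi_i = (payoff[i, i] * (k - 1) + payoff[i, j] * (N - k)) / (N - 1)
        pi_j = (payoff[j, i] * k + payoff[j, j] * (N - k - 1)) / (N - 1)
        log_prod += beta * (pi_j - pi_i)
        total += np.exp(log_prod)
    return 1.0 / total

def stationary_abundance(payoff, N, beta):
    # Small-mutation limit: the population hops between monomorphic states,
    # and long-run abundances are the stationary distribution of a Markov
    # chain whose transition rates are pairwise fixation probabilities.
    n = payoff.shape[0]
    T = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if i != j:
                T[j, i] = fixation_prob(payoff, i, j, N, beta) / (n - 1)
        T[j, j] = 1.0 - T[j].sum()
    w, v = np.linalg.eig(T.T)  # left eigenvector for eigenvalue 1
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()
```

For a two-strategy Prisoner's Dilemma, e.g. the payoff matrix [[3, 0], [5, 1]], this recovers the standard prediction that defectors are the more abundant strategy at every selection intensity; for randomly drawn games with three or more strategies, comparing the ordering of `stationary_abundance` at small and large `beta` probes for the rank changes discussed above.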
"[...] people are also starting to look to nature not just for technical assistance, but for system-wide strategic solutions. Whether it is working out the best strategy to deal with economic recessions or contemplating the best way to lay out a new town, problem solvers are looking to nature for deeper insights. And little wonder. Over millions of years nature has managed thousands of interrelated components and living systems that collaborate to deliver a sustainable and self-generating system that benefits all its members. It is the way that nature organises itself to deal with this complexity that is the key for a new way of thinking about our problems, according to Tim Winton, the founder of Pattern Dynamics. “Biomimicry takes the tactics of nature to make actual physical mechanisms, but Pattern Dynamics uses the patterns in nature to develop high level principles that can be used to build generative strategies,” he said."
The 2014 Multi-Agent-Based Simulation (MABS) workshop is the fifteenth of a series that began in 1998. Its scientific focus lies at the confluence of social sciences and multi-agent systems, with a strong application/empirical vein, and its emphasis is on (i) exploratory agent-based simulation as a principled way of undertaking scientific research in the social sciences and (ii) using social theories as an inspiration for new frameworks and developments in multi-agent systems.
January 20th, 2014
Electronic abstract submission.
January 22nd, 2014
Paper submission deadline.
February 19th, 2014
Notification of acceptance/rejection.
March 10th, 2014
Camera-ready due date.
May 5th-6th, 2014
Preparation of Post-Proceedings.
The excellent quality level of this workshop has been recognized since its inception and its proceedings have been regularly published in Springer's Lecture Notes series. MABS 2014 will be hosted at AAMAS 2014, the 13th International Conference on Autonomous Agents and Multiagent Systems, which will take place in Paris, France, on May 5th-9th, 2014.
If you want a good answer, ask a decent question. That’s the startling conclusion to a study of online Q&As.
If you spend any time programming, you’ll probably have come across the question and answer site Stack Overflow. The site allows anybody to post a question related to programming and receive answers from the community.
And it has been hugely successful. According to Alexa, the site is the 3rd most popular Q&A site in the world and 79th most popular website overall.
But this success has naturally led to a problem: the sheer number of questions and answers the site has to deal with. To help filter this information, users can rank both the questions and the answers, gaining a reputation for themselves as they contribute.
Nevertheless, Stack Overflow still struggles to weed out off-topic and irrelevant questions and answers. This requires considerable input from experienced moderators. So an interesting question is whether it is possible to automate the process of weeding out the less useful questions and answers as they are posted.
Today we get an answer of sorts thanks to the work of Yuan Yao at the State Key Laboratory for Novel Software Technology in China and a team of buddies who say they’ve developed an algorithm that does the job.
And they say their work reveals an interesting insight: if you want good answers, ask a decent question. That may sound like a truism, but these guys point out that there has been no evidence to support this insight, until now.
“To the best of our knowledge, we are the first to quantitatively validate the correlation between the question quality and its associated answer quality,” say Yuan and co.
Whether you're a casual user of social media sites like Facebook and Twitter or an avid online dater accessing eHarmony or Match.com, chances are you've created a personal online profile and been faced with a decision: What should you post for your profile picture? Many people post head shots or selfies, while others opt for pictures of their children, spouses, pets, or even favorite quotes or symbols. If your goal is to be perceived as attractive (and let's be honest, whose isn't?), then new research by Drew Walker and Edward Vul at the University of California, San Diego suggests your best bet is to opt for a group shot with friends.
A photo with friends conveys the fact that you are amiable and well-liked, but oddly enough that is not what makes you more appealing. Instead, the new research shows that individual faces appear more attractive when presented in a group than when presented alone — a perceptually driven phenomenon known as the cheerleader effect.
Consider the Laker Girls or the Dallas Cowboys Cheerleaders. To many, these women are beautiful and sexy. However, their perceived beauty is in part a visual illusion, created by the fact that cheerleaders appear as a group rather than solo operators. Any one cheerleader seems far more attractive when she is with her team than when she is alone.
The financial crisis clearly illustrated the importance of characterizing the level of ‘systemic’ risk associated with an entire credit network, rather than with single institutions. However, the interplay between financial distress and topological changes is still poorly understood. Here we analyze the quarterly interbank exposures among Dutch banks over the period 1998–2008, ending with the crisis. After controlling for the link density, many topological properties display an abrupt change in 2008, providing a clear – but unpredictable – signature of the crisis. By contrast, if the heterogeneity of banks' connectivity is controlled for, the same properties show a gradual transition to the crisis, starting in 2005 and preceded by an even earlier period during which anomalous debt loops could have led to the underestimation of counter-party risk. These early-warning signals are undetectable if the network is reconstructed from partial bank-specific data, as routinely done. We discuss important implications for bank regulatory policies.
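The "anomalous debt loops" mentioned above are cyclic chains of interbank exposures. As a minimal illustration of one such topological indicator (not the paper's density- and heterogeneity-controlled estimators), the number of directed three-bank loops in a binary exposure matrix can be counted from the trace of its cube:

```python
import numpy as np

def directed_3_cycles(A):
    # Number of directed three-bank loops i -> j -> k -> i in a binary
    # exposure matrix A (A[i, j] = 1 if bank i lends to bank j, no self-loops).
    # Each loop contributes 3 to trace(A^3), once per starting bank.
    return int(np.trace(A @ A @ A)) // 3
```

Tracking such cycle counts quarter by quarter, relative to what a null model with the same density would predict, is the flavor of early-warning analysis the abstract describes.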
Statistics, time-frequency analysis, and low-dimensional reductions: the blend of these ideas provides meaningful insight into the data sets one is faced with in every scientific subject today, including those generated from complex dynamical systems. This is a particularly exciting field and much of the final part of the book is driven by intuitive examples from it, showing how the three areas can be used in combination to give critical insight into the fundamental workings of various problems.
Scientists have long suspected that corvids – the family of birds including ravens, crows and magpies – are highly intelligent.
The Tübingen researchers are the first to investigate the brain physiology of crows' intelligent behavior. They trained crows to carry out memory tests on a computer. The crows were shown an image and had to remember it. Shortly afterwards, they had to select one of two test images on a touchscreen with their beaks, based on switching behavioral rules. One of the test images was identical to the first image, the other different. Sometimes the rule of the game was to select the same image, and sometimes it was to select the different one. The crows were able to carry out both tasks and to switch between them as appropriate. That demonstrates a high level of concentration and mental flexibility which few animal species can manage – and which is an effort even for humans.
The crows were quickly able to carry out these tasks even when given new sets of images. The researchers observed neuronal activity in the nidopallium caudolaterale, a brain region associated with the highest levels of cognition in birds. One group of nerve cells responded exclusively when the crows had to choose the same image – while another group of cells always responded when they were operating on the "different image" rule. By observing this cell activity, the researchers were often able to predict which rule the crow was following even before it made its choice.
The study published in Nature Communications provides valuable insights into the parallel evolution of intelligent behavior. "Many functions are realized differently in birds because a long evolutionary history separates us from these direct descendants of the dinosaurs," says Lena Veit. "This means that bird brains can show us an alternative solution for how intelligent behavior is produced with a different anatomy." Crows and primates have different brains, but the cells regulating decision-making are very similar. They represent a general principle which has re-emerged throughout the history of evolution. "Just as we can draw valid conclusions on aerodynamics from a comparison of the very differently constructed wings of birds and bats, here we are able to draw conclusions about how the brain works by investigating the functional similarities and differences of the relevant brain areas in avian and mammalian brains," says Professor Andreas Nieder.
Tiny sap-sucking insects that are a scourge to gardeners also have the upside of helping trees survive in seasonally dry forests in Central America. How? Scale insects use carbon they get from Cordia alliodora trees to make sugar-rich “honeydew” for Azteca pittieri ants, which in turn defend the trees against leaf-munching insects. Mutualism is often stronger when resources are scarce, but this interdependence usually involves a commodity that is traded directly between species. Now, in this issue of PLOS Biology, Pringle and colleagues show that lack of a resource that is not traded—water—intensifies the bonds between C. alliodora, scale insects, and ants.
Found from southern Mexico through South America, C. alliodora has stem hollows where ants nest and tend flocks of scale insects. Named for their protective coverings, scale insects are vampires to the vegetable kingdom, piercing plants with tubular mouths to drink straight from the vascular system. As they imbibe, they secrete honeydew for ants to harvest and eat. Rounding out this mutualistic circle, the ants patrol their C. alliodora host for beetle larvae, caterpillars, and other herbivores, biting them until they leave.
Previous studies suggested that plants may invest more carbon in ant defense during water stress. This scenario is particularly taxing for C. alliodora, which drops its leaves during the dry season and must make its carbon stores last long enough to grow new leaves during the next rainy season. But ant colonies must be maintained year-round to ensure defense of leaves during the growing season, safeguarding the production of carbon to get the trees through the next dry season.
Consciousness is an emergent property of the complex brain network. In order to understand how consciousness is constructed, neural interactions within this network must be elucidated. Previous studies have shown that specific neural interactions between the thalamus and frontoparietal cortices; frontal and parietal cortices; and parietal and temporal cortices are correlated with levels of consciousness. However, due to technical limitations, the network underlying consciousness has not been investigated in terms of large-scale interactions with high temporal and spectral resolution. In this study, we recorded neural activity with dense electrocorticogram (ECoG) arrays and used spectral Granger causality to generate a more comprehensive network that relates to consciousness in monkeys. We found that neural interactions were significantly different between conscious and unconscious states in all combinations of cortical region pairs. Furthermore, the difference in neural interactions between conscious and unconscious states could be represented in 4 frequency-specific large-scale networks with unique interaction patterns: 2 networks were related to consciousness and showed peaks in alpha and beta bands, while the other 2 networks were related to unconsciousness and showed peaks in theta and gamma bands. Moreover, networks in the unconscious state were shared amongst 3 different unconscious conditions, induced by ketamine and medetomidine, by propofol, or by sleep. Our results suggest a novel picture in which the difference between conscious and unconscious states is characterized by a switch in frequency-specific modes of large-scale communication across the entire cortex, rather than the cessation of interactions between specific cortical regions.
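Granger causality quantifies whether the past of one signal improves the prediction of another. The study uses the spectral (frequency-resolved) version; as a minimal sketch of the underlying idea, the time-domain variant below (the function name and AR order are my own choices) compares the residual variance of an autoregressive model of x fitted without vs. with lags of y:

```python
import numpy as np

def granger_gain(x, y, p=5):
    # Time-domain Granger causality y -> x: log ratio of the residual
    # variance of an AR(p) model of x fitted without vs. with p lags of y.
    # A positive value means the past of y helps predict x.
    n = len(x)
    lags = lambda s: np.column_stack([s[p - k - 1:n - k - 1] for k in range(p)])
    target = x[p:]
    resid_var = lambda X: np.var(target - X @ np.linalg.lstsq(X, target, rcond=None)[0])
    return np.log(resid_var(lags(x)) / resid_var(np.column_stack([lags(x), lags(y)])))
```

The spectral version used in the paper decomposes this same quantity by frequency, which is what allows the alpha/beta vs. theta/gamma networks above to be separated.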
When choosing between immediate and temporally delayed goods, people sometimes decide disadvantageously. Here, we aim to provide process-level insight into differences between individually determined advantageous and disadvantageous choices. Participants played a computer game, deciding between two different rewards of varying size and distance by moving an agent towards the chosen reward. We calculated individual models of advantageous choices and characterized the decision process by analyzing mouse movements. The majority of participants’ choices were classified as advantageous, and the disadvantageous choices were biased towards sooner/smaller rewards. The deflection of mouse movements indicated more conflict in disadvantageous than in advantageous choices when the utilities of the options differed clearly. Further process-oriented analysis revealed that disadvantageous choices were biased by a tendency for choice repetition and an undervaluation of the value information in favour of the delay information, making rather simple choices harder than could be expected from the properties of the decision situation.
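The "deflection of mouse movements" is typically summarized as the maximum perpendicular deviation of the cursor path from the straight start-to-end line. A minimal sketch of that index (assuming 2-D trajectories sampled as (x, y) points; the function name is mine):

```python
import numpy as np

def max_deviation(traj):
    # Maximum perpendicular deviation of a 2-D cursor trajectory from the
    # straight line joining its start and end points; larger values are
    # commonly read as more decision conflict in mouse-tracking studies.
    traj = np.asarray(traj, dtype=float)
    start, end = traj[0], traj[-1]
    d = end - start
    rel = traj - start
    # unsigned point-to-line distance via the 2-D cross product
    return np.max(np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])) / np.linalg.norm(d)
```

A perfectly direct movement scores 0; an arc that bows toward the unchosen option scores higher, which is the signature of conflict reported above.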
With the rapid growth of the Internet and the overwhelming amount of information and choices that people are confronted with, recommender systems have been developed to effectively support users’ decision-making in online systems. However, many recommendation algorithms suffer from the data sparsity problem, i.e. the user-object bipartite networks are so sparse that algorithms cannot accurately recommend objects for users. This data sparsity problem makes many well-known recommendation algorithms perform poorly. To solve the problem, we propose a recommendation algorithm based on a semi-local diffusion process on the user-object bipartite network. The simulation results on two sparse datasets, Amazon and Bookcross, show that our method significantly outperforms the state-of-the-art methods, especially for small-degree users. Two personalized semi-local diffusion methods are proposed which further improve the recommendation accuracy. Finally, our work indicates that sparse online systems are essentially different from dense online systems, so it is necessary to reexamine algorithms and conclusions based on dense data when they are applied to sparse systems.
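For context, the standard (fully local) two-step mass-diffusion baseline on a user-object bipartite network works as sketched below; the paper's semi-local method extends the diffusion beyond a user's immediate neighborhood, which is not reproduced here.

```python
import numpy as np

def mass_diffusion_scores(A, user):
    # Two-step mass diffusion (ProbS) on a user-object bipartite network.
    # A: binary user-by-object adjacency matrix; returns recommendation
    # scores over objects for the given user.
    A = np.asarray(A, dtype=float)
    k_obj, k_user = A.sum(axis=0), A.sum(axis=1)
    f0 = A[user].copy()  # unit resource placed on the user's collected objects
    # step 1: objects -> users, each object splits its resource by its degree
    u = A @ np.divide(f0, k_obj, out=np.zeros_like(f0), where=k_obj > 0)
    # step 2: users -> objects, each user splits the received resource
    f1 = A.T @ np.divide(u, k_user, out=np.zeros_like(u), where=k_user > 0)
    f1[A[user] > 0] = 0.0  # exclude already-collected objects
    return f1
```

On very sparse networks this two-step walk reaches few objects, which is exactly the failure mode that motivates continuing the diffusion semi-locally for additional steps.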
In the real world, human speech recognition nearly always involves listening in background noise. The impact of such noise on speech signals and on intelligibility performance increases with the separation of the listener from the speaker. The present behavioral experiment provides an overview of the effects of such acoustic disturbances on speech perception in conditions approaching ecologically valid contexts. We analysed the intelligibility loss in spoken word lists with increasing listener-to-speaker distance in a typical low-level natural background noise. The noise was combined with the simple spherical amplitude attenuation due to distance, basically changing the signal-to-noise ratio (SNR). Therefore, our study draws attention to some of the most basic environmental constraints that have pervaded spoken communication throughout human history. We evaluated the ability of native French participants to recognize French monosyllabic words (spoken at 65.3 dB(A), reference at 1 meter) at distances between 11 and 33 meters, which corresponded to the SNRs most revealing of the progressive effect of the selected natural noise (−8.8 dB to −18.4 dB). Our results showed that in such conditions vowel identity is largely preserved, with strikingly few vowel confusions. The results also confirmed the functional role of consonants during lexical identification. The extensive analysis of recognition scores, confusion patterns and associated acoustic cues revealed that sonorant, sibilant and burst properties were the most important parameters influencing phoneme recognition. Altogether these analyses allowed us to extract a resistance scale from consonant recognition scores. We also identified specific perceptual consonant confusion groups depending on their position within words (onset vs. coda). Finally, our data suggested that listeners may access some acoustic cues of the CV transition, opening interesting perspectives for future studies.
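The reported SNRs follow from simple spherical spreading: speech produced at 65.3 dB(A) (re 1 m) falls off by 20·log10(distance), while the ambient noise stays constant. The background noise level is not stated in this summary; a value of about 53.3 dB(A), an assumption inferred here, reproduces the reported −8.8 dB at 11 m and −18.4 dB at 33 m:

```python
import math

SPEECH_AT_1M = 65.3   # dB(A) at 1 m, as reported
NOISE_LEVEL = 53.3    # dB(A) -- assumed ambient level, chosen to match the reported SNRs

def snr_at(distance_m):
    # Spherical spreading: speech level drops by 20*log10(d) dB re 1 m,
    # while the ambient noise level stays constant.
    return SPEECH_AT_1M - 20 * math.log10(distance_m) - NOISE_LEVEL
```

Under these assumptions, tripling the distance (11 m to 33 m) costs about 9.5 dB of SNR, which matches the −8.8 to −18.4 dB range used in the experiment.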
Large-scale analyses of protein-protein interactions, based on coarse-grain molecular docking simulations and on binding site predictions resulting from evolutionary sequence analysis, are possible and realizable on hundreds of proteins with varied structures and interfaces. We demonstrated this on the 168 proteins of the Mintseris Benchmark 2.0. On the one hand, we evaluated the quality of the interaction signal and the contribution of docking information compared to evolutionary information, showing that the combination of the two improves partner identification. On the other hand, since protein interactions usually occur in crowded environments with several competing partners, we carried out a thorough analysis of the interactions of proteins with true partners but also with non-partners, to evaluate whether proteins in the environment, competing with the true partner, affect its identification. We found three populations of proteins: strongly competing, never competing, and interacting with different levels of strength. Populations and levels of strength are numerically characterized and provide a signature for the behavior of a protein in the crowded environment. We showed that partner identification, to some extent, does not depend on the competing partners present in the environment, that certain biochemical classes of proteins are intrinsically easier to analyze than others, and that small proteins are not more promiscuous than large ones. Our approach brings to light that knowledge of the binding site can be used to reduce the high computational cost of docking simulations with no consequence for the quality of the results, demonstrating the possibility of applying coarse-grain docking to datasets made of thousands of proteins. A comparison with all available large-scale analyses aimed at partner prediction is also provided.
We release the complete decoy set produced by coarse-grain docking simulations of both true and false interacting partners, together with the evolutionary sequence analysis leading to the binding site predictions. Download site: http://www.lgm.upmc.fr/CCDMintseris/
Does the availability of instant reference checking and “find more like this” research on the Internet change the standards by which academics should feel “obligated” to cite the work of others? Is the deliberate refusal to look for the existence of parallel work by others an ethical lapse or merely negligence? At a minimum, the Dutch standard of Slodderwetenschap (sloppy science) is clearly at work. At a maximum, so is plagiarism. In between sits what might be labeled ‘plagiarism by negligence’. This article seeks to expose the intellectual folly of allowing such plagiarism to be tolerated by the academy through a discussion of the cases of Terrence Deacon and Stephen Wolfram.
Subliminal Influence or Plagiarism by Negligence? The Slodderwetenschap of Ignoring the Internet
Talk about taking a dim view of things. Researchers have obtained ultrasharp images of weakly illuminated objects using a bare minimum of photons: mathematically stitching together information from single particles of light recorded by each pixel of a solid-state detector.
The achievement is likely to support studies of fragile biological materials, such as the human eye, that could be damaged or destroyed by higher levels of illumination. The development could also have applications for military surveillance, such as in a spy camera that records a scene with a minimum of illumination to elude detection.
Trendiness in the brain sciences often has an obscure, esoteric quality that belies the prominence accorded neuro in both academia and popular culture. Toward the top of the list of arcana resides the ponderously titled “embodied cognition.” This is the idea that cognitive processes—thought, emotion—arise from our interactions with the physical world around us. Reduced to its simplest level: holding a warm tea cup might make you feel well disposed toward your lunch guest.
This may seem an odd question, but can financial engineering cure cancer? No less of an intellectual light than Andrew W. Lo of the Massachusetts Institute of Technology, a member of the Future of Finance Advisory Council, believes financial engineering may be a potent weapon in the quest to find a cure. In fact, this was the topic of Lo’s presentation at the recent Fixed-Income Management Conference in Boston.
Lo’s thesis rests on several key points:
- Applying portfolio theory to finding a cure for cancer helps increase expected returns and lower expected risks for the capital deployed.
- Applying financial engineering through securitization allows for financing a cure for cancer in a smarter way that ensures greater participation from prospective investors.
- Recent anecdotal evidence suggests that human genome mapping allows for the identification of problematic genes that may be targeted by customized medicines to fight specific cancers.
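The portfolio-theory point can be made with a toy calculation (all numbers are invented for illustration, not Lo's figures): if each drug-development project succeeds independently with a small probability, pooling many projects into one securitized fund makes at least one success nearly certain and shrinks the relative risk as 1/√n.

```python
import math

# Toy illustration of the diversification argument (numbers invented):
# each drug project succeeds independently with small probability p.
p, n = 0.05, 150

# Pooling n projects makes at least one success nearly certain ...
p_at_least_one = 1 - (1 - p) ** n

# ... and shrinks the relative risk (sd / mean of the success count) as 1/sqrt(n).
rel_sd_single = math.sqrt(p * (1 - p)) / p
rel_sd_pooled = math.sqrt(n * p * (1 - p)) / (n * p)
```

With these made-up inputs, the chance of at least one success exceeds 99.9%, and the relative volatility of the pooled fund is about 8% of that of a single bet, which is what makes the pooled vehicle palatable to a broader class of investors.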
Biotech investments are notorious capital destroyers: in aggregate, the more than $400 billion invested has never generated returns covering its cost of capital. In fact, venture capital firms are so discouraged by their returns that the number and size of biotech investments have steadily declined from their peaks in 2007–2008.
We've known big data has had big impacts in business, and in lots of prediction tasks. I want to understand, what does big data mean for what we do for science? Specifically, I want to think about the following context: You have a scientist who has a hypothesis that they would like to test, and I want to think about how the testing of that hypothesis might change as data gets bigger and bigger. So that's going to be the rule of the game. Scientists start with a hypothesis and they want to test it; what's going to happen?
It's undeniable—technology is changing the way we think. But is it for the better? Amid a chorus of doomsayers, Clive Thompson delivers a resounding "yes." The Internet age has produced a radical new style of human intelligence, worthy of both celebration and analysis. We learn more and retain it longer, write and think with global audiences, and even gain an ESP-like awareness of the world around us. Modern technology is making us smarter, better connected, and often deeper—both as individuals and as a society.
Abiotic environmental variables strongly affect the outcomes of species interactions. For example, mutualistic interactions between species are often stronger when resources are limited. The effect might be indirect: water stress on plants can lead to carbon stress, which could alter carbon-mediated plant mutualisms. In mutualistic ant–plant symbioses, plants host ant colonies that defend them against herbivores. Here we show that the partners' investments in a widespread ant–plant symbiosis increase with water stress across 26 sites along a Mesoamerican precipitation gradient. At lower precipitation levels, Cordia alliodora trees invest more carbon in Azteca ants via phloem-feeding scale insects that provide the ants with sugars, and the ants provide better defense of the carbon-producing leaves. Under water stress, the trees have smaller carbon pools. A model of the carbon trade-offs for the mutualistic partners shows that the observed strategies can arise from the carbon costs of rare but extreme events of herbivory in the rainy season. Thus, water limitation, together with the risk of herbivory, increases the strength of a carbon-based mutualism.
Understanding how prey capture rates are influenced by feeding ecology and environmental conditions is fundamental to assessing anthropogenic impacts on marine higher predators. We compared how prey capture rates varied in relation to prey size, prey patch distribution and prey density for two species of alcid, common guillemot (Uria aalge) and razorbill (Alca torda) during the chick-rearing period. We developed a Monte Carlo approach parameterised with foraging behaviour from bird-borne data loggers, observations of prey fed to chicks, and adult diet from water-offloading, to construct a bio-energetics model. Our primary goal was to estimate prey capture rates, and a secondary aim was to test responses to a set of biologically plausible environmental scenarios. Estimated prey capture rates were 1.5±0.8 items per dive (0.8±0.4 and 1.1±0.6 items per minute foraging and underwater, respectively) for guillemots and 3.7±2.4 items per dive (4.9±3.1 and 7.3±4.0 items per minute foraging and underwater, respectively) for razorbills. Based on species' ecology, diet and flight costs, we predicted that razorbills would be more sensitive to decreases in 0-group sandeel (Ammodytes marinus) length (prediction 1), but guillemots would be more sensitive to prey patches that were more widely spaced (prediction 2), and lower in prey density (prediction 3). Estimated prey capture rates increased non-linearly as 0-group sandeel length declined, with the slope being steeper in razorbills, supporting prediction 1. When prey patches were more dispersed, estimated daily energy expenditure increased by a factor of 3.0 for guillemots and 2.3 for razorbills, suggesting guillemots were more sensitive to patchier prey, supporting prediction 2. However, both species responded similarly to reduced prey density (guillemot expenditure increased by 1.7; razorbill by 1.6), thus not supporting prediction 3. 
This bio-energetics approach complements other foraging models in predicting likely impacts of environmental change on marine higher predators dependent on species-specific foraging ecologies.
A recent study by van Ede et al. (2012) shows that the accuracy and reaction time of tactile perceptual decisions in humans are affected by an attentional cue via distinct cognitive and neural processes. These results are controversial, as they undermine the notion that accuracy and reaction time are influenced by the same latent process underlying the decision. Typically, accumulation-to-bound models (like the drift diffusion model) can explain variability in both accuracy and reaction time by a change of a single parameter. To elaborate the findings of van Ede et al., we fitted the drift diffusion model to their behavioral data. Results show that changes in both accuracy and reaction time can be partly explained by an increase in the accumulation of sensory evidence (drift rate). In addition, a change in non-decision time is necessary to account for reaction time changes as well. These results provide a subtle explanation of how the underlying dynamics of the decision process might give rise to differences in both the speed and accuracy of perceptual tactile decisions. Furthermore, our analyses highlight the importance of applying a model-based approach, as the observed changes in the model parameters might be ecologically more valid, since they have an intuitive relationship with the neuronal processes underlying perceptual decision making.
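A minimal simulation makes the model's logic concrete (parameter values are illustrative, not fitted to van Ede et al.'s data): raising the drift rate increases accuracy and shortens decision times, while non-decision time shifts all reaction times without affecting accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift, ndt, n_trials=1000, bound=1.0, dt=0.005, noise=1.0):
    # Euler-Maruyama simulation of the drift diffusion model: evidence
    # accumulates noisily with the given drift until it hits +bound
    # (correct) or -bound (error); non-decision time (ndt) is added to
    # every reaction time. Returns (accuracy, mean RT).
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + ndt)
        correct.append(x >= bound)
    return float(np.mean(correct)), float(np.mean(rts))
```

This is why a single drift-rate change can jointly move accuracy and RT, whereas the dissociation reported above additionally requires the non-decision-time parameter to shift.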
We identify and explain the mechanisms that account for the emergence of fairness preferences and altruistic punishment in voluntary contribution mechanisms by combining an evolutionary perspective with an expected utility model. We aim at filling a gap between the literature on the theory of evolution applied to cooperation and punishment, and the empirical findings from experimental economics. The approach is motivated by previous findings on other-regarding behavior, the co-evolution of culture, genes and social norms, as well as bounded rationality. Our first result reveals the emergence of two distinct evolutionary regimes that force agents to converge either to a defection state or to a state of coordination, depending on the predominant set of self- or other-regarding preferences. Our second result indicates that subjects in laboratory experiments of public goods games with punishment coordinate and punish defectors as a result of an aversion against disadvantageous inequitable outcomes. Our third finding identifies disadvantageous inequity aversion as evolutionarily dominant and stable in a heterogeneous population of agents endowed initially only with purely self-regarding preferences. We validate our model using previously obtained results from three independently conducted experiments of public goods games with punishment.
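The "aversion against disadvantageous inequitable outcomes" is standardly formalized with the Fehr-Schmidt utility function, sketched below (the alpha and beta values are illustrative defaults, not the parameters estimated in the paper):

```python
def fehr_schmidt(payoffs, i, alpha=0.9, beta=0.3):
    # Fehr-Schmidt inequity-averse utility of player i: material payoff,
    # minus alpha-weighted envy when others earn more (disadvantageous
    # inequity) and beta-weighted guilt when others earn less
    # (advantageous inequity). alpha and beta here are illustrative.
    n = len(payoffs)
    x = payoffs[i]
    disadv = sum(max(xj - x, 0) for xj in payoffs) / (n - 1)
    adv = sum(max(x - xj, 0) for xj in payoffs) / (n - 1)
    return x - alpha * disadv - beta * adv
```

In a public goods game, a free-rider earns a higher material payoff but imposes disadvantageous inequity on contributors; with a sufficiently large alpha, the utility loss from that inequity can make costly punishment of defectors worthwhile, which is the mechanism the second result refers to.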