Network methods have had a profound influence in many domains and disciplines in the past decade. Community structure is a very important property of complex networks, but the accurate definition of a community remains an open problem. Here we define a community based on three properties and then propose a simple and novel framework to detect communities based on network topology. We analyze 16 different types of networks and compare our partitions with those of Infomap, LPA, Fastgreedy and Walktrap, which are popular algorithms for community detection. Most of the partitions generated using our approach compare favorably to those generated by these other algorithms. Furthermore, we define overlapping nodes by combining community structure with shortest paths. We also analyze the E. coli transcriptional regulatory network in detail and identify modules with strong functional coherence.
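LPA, one of the comparison baselines named in the abstract, is simple enough to sketch in a few lines. Below is a minimal, illustrative implementation of asynchronous label propagation on a toy two-clique network; the toy graph, the tie-breaking rule and all names are assumptions made for this sketch, not details of the paper's own framework.

```python
import random
from collections import Counter

def label_propagation(neighbors, rng, max_sweeps=100):
    """Asynchronous label propagation (LPA): every node starts with a
    unique label and repeatedly adopts the label held by the majority
    of its neighbors (keeping its own label on ties) until no label
    changes in a full sweep."""
    labels = {v: v for v in neighbors}
    nodes = list(neighbors)
    for _ in range(max_sweeps):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            counts = Counter(labels[u] for u in neighbors[v])
            best = max(counts.values())
            top = [lab for lab, c in counts.items() if c == best]
            if labels[v] not in top:   # tie-break: keep current label
                labels[v] = rng.choice(top)
                changed = True
        if not changed:
            break
    return labels

# toy network: two 5-cliques joined by a single bridge edge (4-5)
neighbors = {v: set() for v in range(10)}
for block in (range(0, 5), range(5, 10)):
    for a in block:
        for b in block:
            if a != b:
                neighbors[a].add(b)
neighbors[4].add(5)
neighbors[5].add(4)

labels = label_propagation(neighbors, random.Random(3))
print(labels)  # each clique ends up with a single internal label
```

At convergence every clique is internally uniform, since any mixed labeling inside a 5-clique leaves some node in a strict minority among its neighbors.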
We investigate the emergence and persistence of communities through a recently proposed mechanism of adaptive rewiring in coevolutionary networks. We characterize the topological structures arising in a coevolutionary network subject to an adaptive rewiring process and a node dynamics given by a simple voter-like rule. We find that, for some values of the parameters describing the adaptive rewiring process, a community structure emerges on a connected network. We show that the emergence of communities is associated with a decrease in the number of active links in the system, i.e., links that connect two nodes in different states. The lifetime of the community structure state scales exponentially with the size of the system. Additionally, we find that a small noise in the node dynamics can sustain a diversity of states and a community structure in time in a finite-size system. Thus, large system size and/or local noise can explain the persistence of communities and diversity in many real systems.
Emergence and persistence of communities in coevolutionary networks J. C. González-Avella, M. G. Cosenza, J. L. Herrera, K. Tucci
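The mechanism the abstract describes can be sketched as a minimal simulation: nodes carry binary states, and each update either rewires an active link toward a like-minded node or resolves it by voter-like imitation. The parameter name `p_rewire`, the initial random graph and all numeric settings are assumptions for illustration, not the authors' exact model.

```python
import random

def coevolving_voter_step(neighbors, state, p_rewire, rng):
    """One update: pick a random node; if it has a disagreeing
    neighbor, either rewire that active link to a like-minded node
    (with prob. p_rewire) or copy the neighbor's state (voter rule)."""
    nodes = list(neighbors)
    i = rng.choice(nodes)
    discordant = [j for j in neighbors[i] if state[j] != state[i]]
    if not discordant:
        return
    j = rng.choice(discordant)
    if rng.random() < p_rewire:
        # adaptive rewiring: drop the active link, attach to a
        # non-neighbor sharing i's state (if one exists)
        candidates = [k for k in nodes
                      if k != i and k not in neighbors[i]
                      and state[k] == state[i]]
        if candidates:
            k = rng.choice(candidates)
            neighbors[i].remove(j); neighbors[j].remove(i)
            neighbors[i].add(k); neighbors[k].add(i)
    else:
        state[i] = state[j]  # voter-like imitation

def active_links(neighbors, state):
    """Count links joining nodes in different states."""
    return sum(1 for i in neighbors for j in neighbors[i]
               if i < j and state[i] != state[j])

rng = random.Random(1)
n = 60
neighbors = {i: set() for i in range(n)}
while sum(len(s) for s in neighbors.values()) // 2 < 4 * n:  # mean degree ~8
    a, b = rng.sample(range(n), 2)
    neighbors[a].add(b); neighbors[b].add(a)
state = {i: rng.randint(0, 1) for i in range(n)}

before = active_links(neighbors, state)
for _ in range(20000):
    coevolving_voter_step(neighbors, state, p_rewire=0.5, rng=rng)
after = active_links(neighbors, state)
print(before, after)  # active links typically drop as like-minded groups segregate
```

Tracking `active_links` over time reproduces, in miniature, the paper's observation that community emergence coincides with the decay of active links.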
Evolutionary biologists have long thought that lying ought to destroy societies. Now computational anthropologists have shown that nothing could be further from the truth.
Everybody learns as a child that lying is wrong. We all learn something else too—that some kinds of lies are worse than others. What’s more, certain kinds of fibs, so-called white lies, are actually quite acceptable, even necessary at times.
Consequently, humans become sophisticated liars. Indeed, various studies have shown that we lie all the time, perhaps as often as twice a day on average.
It’s easy to see how lying reduces the level of trust between individuals and so threatens the stability of societies. So how do societies survive all this lying?
That’s something of a puzzle for evolutionary biologists. The very fact that lying is so prevalent in human society suggests that it might offer some kind of evolutionary advantage. In other words, we all benefit from lying in some way. But how?
Today, we get an answer thanks to the work of Gerardo Iñiguez at Aalto University in Finland and a few pals (including Robin Dunbar, an anthropologist from the University of Oxford of Dunbar’s number fame). These guys have simulated the effect that lies have on the strength of connections that exist within a social network.
But they’ve added a fascinating twist. These guys have made a clear distinction between lies that benefit the person being lied to versus lies that benefit the person doing the lying. In other words, their model captures the difference between “white” lies, which are prosocial, and “black” lies, which are antisocial.
Interesting, although it holds no real surprises; this is one of those things that just needed confirmation. Obviously white and black lies both have their explanation in evolution: a white lie helps you because you help the other, a black lie because you don't. Is it so hard or strange to understand? Fun stuff though!
Terminator-style cyborgs may be on the horizon as a result of bionic supra-particles invented at the Universities of Michigan and Pittsburgh.
Arjen ten Have's insight:
This is not my actual idea of ascension, but I must admit it might help. Then why isn't this my idea of human ascension? Well, although science in itself is neutral, its impact for sure isn't. Technology is great and great things can be achieved, but since I have a hard time accepting that we are ready for real group selection, where group equals complete human society, I am afraid this technology will first be used by the NSA and other US military organizations. Science fiction will come true, but I am not so sure that the plastic (as in flexible) human will actually prevail. So should we stop or fear this? No, we should guide it, just as we do for nuclear proliferation (albeit that it would be better if the nuclear states would at least reduce their stockpiles).
Requests are at the core of many social media systems such as question & answer sites and online philanthropy communities. While the success of such requests is critical to the success of the community, the factors that lead community members to satisfy a request are largely unknown. Success of a request depends on factors like who is asking, how they are asking, when they are asking, and most critically what is being requested, ranging from small favors to substantial monetary donations. We present a case study of altruistic requests in an online community where all requests ask for the very same contribution and do not offer anything tangible in return, allowing us to disentangle what is requested from textual and social factors. Drawing from the social psychology literature, we extract high-level social features from text that operationalize social relations between recipient and donor and demonstrate that these extracted relations are predictive of success. More specifically, we find that clearly communicating need through the narrative is essential and that linguistic indications of gratitude, evidentiality, and generalized reciprocity, as well as high status of the asker, further increase the likelihood of success. Building on this understanding, we develop a model that can predict the success of unseen requests, significantly improving over several baselines. We link these findings to research in psychology on helping behavior, providing a basis for further analysis of success in social media systems.
How to Ask for a Favor: A Case Study on the Success of Altruistic Requests Tim Althoff, Cristian Danescu-Niculescu-Mizil, Dan Jurafsky
"If we take the time to look, we realize that nature provides us with a time-tested R&D lab for re-imagined industry and its contributing forces. The natural world has already mastered renewable energy use, closed production cycles, collaborative networks, sustainable materials, and green chemistry. Underlying these proven successes are principles [...] including rampant resource efficiency, real-time responsiveness, and systems intelligence, among others. These principles enable entire natural "economies" to be not merely productive but resilient and regenerative. "
Artificial intelligence: the next step in evolution? The Age American philosopher Daniel Dennett sums up the feelings of some scientists when suggesting that humans are immensely complex and able computational machines.
Arjen ten Have's insight:
Cool elaboration on AI. Quote: “When we start to design intelligent systems to include motives and the emotional signalling that accompanies them – and to use these as a reference standard against which perceived events and objects can be sorted, evaluated and organised – we’ll have made a major step towards achieving true machine intelligence.”
Research on human social interactions has traditionally relied on self-reports. Despite their widespread use, self-reported accounts of behaviour are prone to biases and necessarily reduce the range of behaviours, and the number of subjects, that may be studied simultaneously. The development of ever smaller sensors makes it possible to study group-level human behaviour in naturalistic settings outside research laboratories. We used such sensors, sociometers, to examine gender, talkativeness and interaction style in two different contexts. Here, we find that in the collaborative context, women were much more likely to be physically proximate to other women and were also significantly more talkative than men, especially in small groups. In contrast, there were no gender-based differences in the non-collaborative setting. Our results highlight the importance of objective measurement in the study of human behaviour, here enabling us to discern context specific, gender-based differences in interaction style.
Overexploitation of renewable resources today has a high cost on the welfare of future generations. Unlike in other public goods games, however, future generations cannot reciprocate actions made today. What mechanisms can maintain cooperation with the future? To answer this question, we devise a new experimental paradigm, the 'Intergenerational Goods Game'. A line-up of successive groups (generations) can each either extract a resource to exhaustion or leave something for the next group. Exhausting the resource maximizes the payoff for the present generation, but leaves all future generations empty-handed. Here we show that the resource is almost always destroyed if extraction decisions are made individually. This failure to cooperate with the future is driven primarily by a minority of individuals who extract far more than what is sustainable. In contrast, when extractions are democratically decided by vote, the resource is consistently sustained. Voting is effective for two reasons. First, it allows a majority of cooperators to restrain defectors. Second, it reassures conditional cooperators that their efforts are not futile. Voting, however, only promotes sustainability if it is binding for all involved. Our results have implications for policy interventions designed to sustain intergenerational public goods.
Cooperating with the future Oliver P. Hauser, David G. Rand, Alexander Peysakhovich & Martin A. Nowak
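A toy simulation conveys the paradigm's core contrast between individual extraction and a binding median vote. All numbers here (pool size, sustainability threshold, the 20% "greedy minority") are invented for illustration and are not the experiment's actual parameters.

```python
import random

def play_generations(pool, threshold, n_gens, group_size, decide, rng):
    """Run successive generations; the resource survives to the next
    group only if total extraction stays at or below threshold."""
    survived = 0
    for _ in range(n_gens):
        takes = decide(group_size, pool, rng)
        if sum(takes) <= threshold:
            survived += 1   # pool regenerates for the next group
        else:
            break           # resource exhausted; later groups get nothing
    return survived

def individual_choice(group_size, pool, rng):
    # mostly sustainable extractors, plus a greedy minority (20%)
    return [pool // 2 if rng.random() < 0.2 else pool // (2 * group_size)
            for _ in range(group_size)]

def median_vote(group_size, pool, rng):
    # binding vote: everyone extracts the median proposal
    proposals = individual_choice(group_size, pool, rng)
    proposals.sort()
    m = proposals[group_size // 2]
    return [m] * group_size

rng = random.Random(7)
pool, threshold = 100, 50
solo = play_generations(pool, threshold, 12, 5, individual_choice, rng)
vote = play_generations(pool, threshold, 12, 5, median_vote, rng)
print(solo, vote)
```

Under individual choice a single over-extractor dooms the chain of generations, while the median vote lets the cooperative majority restrain defectors, mirroring the abstract's two reasons why voting works.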
Evolutionary Robotics is a field that “aims to apply evolutionary computation techniques to evolve the overall design or controllers, or both, for real and simulated autonomous robots” (Vargas et al., 2014). This approach is “useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes” (Floreano et al., 2008). However, as noted in Bongard (2013) “the use of metaheuristics (i.e., evolution) sets this subfield of robotics apart from the mainstream of robotics research,” which “aims to continuously generate better behavior for a given robot, while the long-term goal of Evolutionary Robotics is to create general, robot-generating algorithms.”
Many hostile scenarios exist in real-life situations, where cooperation is disfavored and the collective behavior needs intervention for system efficiency improvement. Towards this end, the framework of soft control provides a powerful tool by introducing controllable agents called shills, who are allowed to follow well-designed updating rules for varying missions. Inspired by the swarm intelligence emerging from flocks of birds, we explore here the dependence of the evolution of cooperation on soft control in an evolutionary iterated prisoner's dilemma (IPD) game staged on square lattices, where the shills adopt a particle swarm optimization (PSO) mechanism for strategy updating. We demonstrate not only that cooperation can be promoted by shills seeking potentially better strategies and spreading them to others, but also that the frequency of cooperation can be arbitrarily controlled by choosing appropriate parameter settings. Moreover, we show that adding more shills does not contribute to further cooperation promotion, while assigning higher weights to the collective knowledge for strategy updating proves an efficient way to induce cooperative behavior. Our research provides insights into cooperation evolution in the presence of PSO-inspired shills, and we hope it will be inspirational for future studies focusing on swarm intelligence based soft control.
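The PSO strategy-updating rule the shills adopt can be sketched by treating a shill's strategy as its probability of cooperating. The inertia and acceleration parameters (`w`, `c1`, `c2`) and the fixed best-known strategies are standard PSO conventions assumed for illustration, not the paper's exact settings; the lattice game itself is omitted.

```python
import random

def pso_update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """One PSO update of a shill's mixed strategy x (its probability
    of cooperating), pulled toward its personal best pbest and the
    swarm's global best gbest; the position is clipped to [0, 1]."""
    v = (w * v
         + c1 * rng.random() * (pbest - x)
         + c2 * rng.random() * (gbest - x))
    x = min(1.0, max(0.0, x + v))
    return x, v

rng = random.Random(11)
x, v = 0.2, 0.0          # start as a mostly-defecting shill
trajectory = [x]
for _ in range(25):
    # assume the best-known strategies found so far are highly cooperative
    x, v = pso_update(x, v, pbest=0.9, gbest=0.9, rng=rng)
    trajectory.append(x)
print(trajectory[0], trajectory[-1])
```

Raising `c2` relative to `c1` corresponds to the abstract's "higher weights to the collective knowledge": the shill is pulled more strongly toward the swarm's global best strategy.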
We consider the effects of social learning on the individual learning and genetic evolution of a colony of artificial agents capable of genetic, individual and social modes of adaptation. We confirm that there is strong selection pressure to acquire traits of individual learning and social learning when these are adaptive traits. We show that selection pressure for learning of either kind can suppress selection pressure for reproduction or greater fitness. We show that social learning differs from individual learning in that it can support a second evolutionary system that is decoupled from the biological evolutionary system. This decoupling leads to an emergent interaction where immature agents are more likely to engage in learning activities than mature agents.
The Effect of Social Learning on Individual Learning and Evolution Chris Marriott, Jobran Chebib
It is a commonly held belief in our culture that competition is good for our professional and personal development, our business growth and our economy. We believe that competition motivates people to work harder and that, as a result, the most talented individuals win out over those less competent. Survival of the fittest, right? In her latest book, A Bigger Prize: How We Can Do Better Than the Competition, author Margaret Heffernan challenges us to look at competition differently. It does not bring out the best in us. In fact, Heffernan makes the point that competition causes us to focus solely on the end goal, the prize. Not only do we lose out personally; businesses also lose their ability to innovate and succeed in today’s economy.
Complex adaptive systems (cas), including ecosystems, governments, biological cells, and markets, are characterized by intricate hierarchical arrangements of boundaries and signals. In ecosystems, for example, niches act as semi-permeable boundaries, and smells and visual patterns serve as signals; governments have departmental hierarchies with memoranda acting as signals; and so it is with other cas. Despite a wealth of data and descriptions concerning different cas, there remain many unanswered questions about "steering" these systems. In Signals and Boundaries, John Holland argues that understanding the origin of the intricate signal/border hierarchies of these systems is the key to answering such questions. He develops an overarching framework for comparing and steering cas through the mechanisms that generate their signal/boundary hierarchies.
Holland lays out a path for developing the framework that emphasizes agents, niches, theory, and mathematical models. He discusses, among other topics, theory construction; signal-processing agents; networks as representations of signal/boundary interaction; adaptation; recombination and reproduction; the use of tagged urn models (adapted from elementary probability theory) to represent boundary hierarchies; finitely generated systems as a way to tie the models examined into a single framework; the framework itself, illustrated by a simple finitely generated version of the development of a multi-celled organism; and Markov processes.