We investigate the emergence and persistence of communities through a recently proposed mechanism of adaptive rewiring in coevolutionary networks. We characterize the topological structures arising in a coevolutionary network subject to an adaptive rewiring process and a node dynamics given by a simple voter-like rule. We find that, for some values of the parameters describing the adaptive rewiring process, a community structure emerges on a connected network. We show that the emergence of communities is associated with a decrease in the number of active links in the system, i.e., links that connect two nodes in different states. The lifetime of the community structure state scales exponentially with the size of the system. Additionally, we find that a small noise in the node dynamics can sustain a diversity of states and a community structure in time in a finite-size system. Thus, large system size and/or local noise can explain the persistence of communities and diversity in many real systems.
Emergence and persistence of communities in coevolutionary networks J. C. González-Avella, M. G. Cosenza, J. L. Herrera, K. Tucci
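A minimal sketch of the kind of coevolutionary dynamics the abstract describes: binary node states evolve by a voter-like copy rule, while active links (those joining unlike states) are rewired with probability p. The parameters (n, m, p, steps) and the specific rewiring choice (reconnecting to a random node sharing the first endpoint's state) are illustrative assumptions, not the paper's exact model.

```python
import random

def simulate(n=100, m=300, p=0.5, steps=10000, seed=1):
    """Voter-like dynamics with adaptive rewiring (illustrative sketch).

    Each step: pick a random link; if it is 'active' (its endpoints hold
    different states), then with probability p rewire it toward a node in
    the same state as the first endpoint, otherwise apply the voter rule
    and let the second endpoint copy the first endpoint's state.
    """
    rng = random.Random(seed)
    states = [rng.choice([0, 1]) for _ in range(n)]
    links = set()
    while len(links) < m:                       # random initial graph
        i, j = rng.sample(range(n), 2)
        links.add((min(i, j), max(i, j)))
    links = list(links)
    for _ in range(steps):
        idx = rng.randrange(len(links))
        i, j = links[idx]
        if states[i] == states[j]:
            continue                            # inert link: nothing happens
        if rng.random() < p:
            # adaptive rewiring: attach i to a node in its own state
            same = [k for k in range(n) if k != i and states[k] == states[i]]
            if same:
                links[idx] = tuple(sorted((i, rng.choice(same))))
        else:
            states[j] = states[i]               # voter rule: copy the state
    active = sum(1 for a, b in links if states[a] != states[b])
    return states, links, active
```

Running this and tracking `active` over time shows the decay in active links that the abstract associates with the emergence of communities.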
Evolutionary biologists have long thought that lying ought to destroy societies. Now computational anthropologists have shown that nothing could be further from the truth.
Everybody learns as a child that lying is wrong. We all learn something else too: some kinds of lies are worse than others. What's more, certain kinds of fibs, the so-called white lies, are actually quite acceptable, even necessary at times.
Consequently, humans become sophisticated liars. Indeed, various studies have shown that we lie all the time, perhaps as often as twice a day on average.
It’s easy to see how lying reduces the level of trust between individuals and so threatens the stability of societies. So how do societies survive all this lying?
That’s something of a puzzle for evolutionary biologists. The very fact that lying is so prevalent in human society suggests that it might offer some kind of evolutionary advantage. In other words, we all benefit from lying in some way. But how?
Today, we get an answer thanks to the work of Gerardo Iñiguez at Aalto University in Finland and a few pals (including Robin Dunbar, an anthropologist from the University of Oxford of Dunbar’s number fame). These guys have simulated the effect that lies have on the strength of connections that exist within a social network.
But they’ve added a fascinating twist. These guys have made a clear distinction between lies that benefit the person being lied to versus lies that benefit the person doing the lying. In other words, their model captures the difference between “white” lies, which are prosocial, and “black” lies, which are antisocial.
Interesting, although it holds no real surprises; this is one of those things that just needed confirmation. Obviously both white and black lies have their explanation in evolution: a white lie helps you because you help the other, a black lie because you don't. Is it so hard or strange to understand? Fun stuff though!
Terminator cyborgs may be on the horizon as a result of bionic supra-particles invented at the Universities of Michigan and Pittsburgh.
Arjen ten Have's insight:
This is not my actual idea of ascension, but I must admit it might help. Then why isn't this my idea of human ascension? Well, although science in itself is neutral, its impact for sure isn't. Technology is great and great things can be achieved, but since I have a hard time accepting that we are ready for real group selection, where group equals complete human society, I am afraid this technology will first be used by the NSA and other US military organizations. Science fiction will come true, but I am not so sure that the plastic (as in flexible) human will actually prevail. So should we stop or fear this? No, we should guide it, just as we do for nuclear proliferation (albeit that it would be better if the nuclear states would at least reduce their stocks).
Requests are at the core of many social media systems such as question & answer sites and online philanthropy communities. While the success of such requests is critical to the success of the community, the factors that lead community members to satisfy a request are largely unknown. Success of a request depends on factors like who is asking, how they are asking, when they are asking, and most critically what is being requested, ranging from small favors to substantial monetary donations. We present a case study of altruistic requests in an online community where all requests ask for the very same contribution and do not offer anything tangible in return, allowing us to disentangle what is requested from textual and social factors. Drawing from social psychology literature, we extract high-level social features from text that operationalize social relations between recipient and donor and demonstrate that these extracted relations are predictive of success. More specifically, we find that clearly communicating need through the narrative is essential, and that linguistic indications of gratitude, evidentiality, and generalized reciprocity, as well as high status of the asker, further increase the likelihood of success. Building on this understanding, we develop a model that can predict the success of unseen requests, significantly improving over several baselines. We link these findings to research in psychology on helping behavior, providing a basis for further analysis of success in social media systems.
How to Ask for a Favor: A Case Study on the Success of Altruistic Requests Tim Althoff, Cristian Danescu-Niculescu-Mizil, Dan Jurafsky
"If we take the time to look, we realize that nature provides us with a time-tested R&D lab for re-imagined industry and its contributing forces. The natural world has already mastered renewable energy use, closed production cycles, collaborative networks, sustainable materials, and green chemistry. Underlying these proven successes are principles [...] including rampant resource efficiency, real-time responsiveness, and systems intelligence, among others. These principles enable entire natural "economies" to be not merely productive but resilient and regenerative. "
PubMed comprises more than 23 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full-text content from PubMed Central and publisher web sites.
Arjen ten Have's insight:
A logical addition to the HMMER package, although it would have been more interesting to include xHMMMER.
An emergent property arises when individual components or actions, combined together, collectively generate a higher-level aggregate experience. Think democracy. Or plague. Or community.
Arjen ten Have's insight:
This is interesting stuff, but somehow it just does not add up. It might be a semantic problem; if not, it would be a logic problem, making it useless.
For me this statement is controversial:
"The first is that sustainability, like any emergent property, must be developed collectively. Like an ant in his colony, the individual's primary value is as a component of the whole. The second implication is that sustainability, as an emergent property, cannot be mandated from above. It arises, to some extent inexplicably, from the ground up."
"Must be developed" requires an active role, whereas "it arises" is something passive. An emergent property is always passive. Hence, in the end you cannot really see sustainability as an emergent property. The fact that ant societies are sustainable and are an emergent property does not mean that striving towards sustainability equals an emergent property. Ant societies are rather particular in that they should be seen as a sort of superorganism: an organism built from organisms. If you take away one single caste, the superorganism ceases to exist, just like taking away your liver will kill you. As such it is logical that sustainability is an important constraint in the evolution of ants, and hence in that case is an emergent property.
For human society that will be a bit different. Although I am not sure we need to build our society in a similar manner, I do think we should learn from it in order to build a better society.
Evolutionary Robotics is a field that “aims to apply evolutionary computation techniques to evolve the overall design or controllers, or both, for real and simulated autonomous robots” (Vargas et al., 2014). This approach is “useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes” (Floreano et al., 2008). However, as noted in Bongard (2013) “the use of metaheuristics (i.e., evolution) sets this subfield of robotics apart from the mainstream of robotics research,” which “aims to continuously generate better behavior for a given robot, while the long-term goal of Evolutionary Robotics is to create general, robot-generating algorithms.”
Many hostile scenarios exist in real-life situations, where cooperation is disfavored and the collective behavior needs intervention for system efficiency improvement. Towards this end, the framework of soft control provides a powerful tool by introducing controllable agents called shills, who are allowed to follow well-designed updating rules for varying missions. Inspired by swarm intelligence emerging from flocks of birds, we explore here the dependence of the evolution of cooperation on soft control by an evolutionary iterated prisoner's dilemma (IPD) game staged on square lattices, where the shills adopt a particle swarm optimization (PSO) mechanism for strategy updating. We demonstrate not only that cooperation can be promoted by shills effectively seeking potentially better strategies and spreading them to others, but also that the frequency of cooperation can be arbitrarily controlled by choosing appropriate parameter settings. Moreover, we show that adding more shills does not contribute to further cooperation promotion, while assigning higher weights to the collective knowledge for strategy updating proves an efficient way to induce cooperative behavior. Our research provides insights into cooperation evolution in the presence of PSO-inspired shills, and we hope it will be inspirational for future studies focusing on swarm intelligence based soft control.
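A toy sketch of the PSO-style strategy updating the abstract describes. Everything here is a simplifying assumption, not the paper's model: agents sit on a ring rather than a square lattice, a strategy is reduced to a single cooperation probability, fitness is the payoff from noisy prisoner's-dilemma rounds against neighbors, and the inertia/learning weights (w, c1, c2) are generic PSO defaults. It illustrates only the mechanism: shills pull their strategies toward personal and collective bests.

```python
import random

# One prisoner's-dilemma round: (my move, their move) -> my payoff.
# Moves: 1 = cooperate, 0 = defect; T > R > P > S is the usual ordering.
PAYOFF = {(1, 1): 3, (1, 0): 0, (0, 1): 5, (0, 0): 1}

def play_round(p_i, p_j, rng):
    """Each agent cooperates with its own probability; return i's payoff."""
    mi = 1 if rng.random() < p_i else 0
    mj = 1 if rng.random() < p_j else 0
    return PAYOFF[(mi, mj)]

def pso_shills(n=20, steps=200, w=0.7, c1=1.5, c2=1.5, seed=2):
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]   # strategy: cooperation probability
    v = [0.0] * n                          # PSO velocity per shill
    pbest = x[:]                           # personal best strategy seen so far
    pfit = [float("-inf")] * n             # fitness at that personal best
    for _ in range(steps):
        for i in range(n):
            # fitness: payoff against the two ring neighbors
            f = sum(play_round(x[i], x[j], rng)
                    for j in ((i - 1) % n, (i + 1) % n))
            if f > pfit[i]:
                pfit[i], pbest[i] = f, x[i]
        g = pbest[max(range(n), key=lambda i: pfit[i])]  # collective best
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (g - x[i])
            x[i] = min(1.0, max(0.0, x[i] + v[i]))       # keep in [0, 1]
    return x
```

The weight c2 plays the role of the "collective knowledge" weight the abstract mentions: raising it pulls all shills harder toward the swarm's best-known strategy.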
We consider the effects of social learning on the individual learning and genetic evolution of a colony of artificial agents capable of genetic, individual and social modes of adaptation. We confirm that there is strong selection pressure to acquire traits of individual learning and social learning when these are adaptive traits. We show that selection pressure for learning of either kind can suppress selection pressure for reproduction or greater fitness. We show that social learning differs from individual learning in that it can support a second evolutionary system that is decoupled from the biological evolutionary system. This decoupling leads to an emergent interaction where immature agents are more likely to engage in learning activities than mature agents.
The Effect of Social Learning on Individual Learning and Evolution Chris Marriott, Jobran Chebib
It is a commonly held belief in our culture that competition is good for our professional and personal development, our business growth and our economy. We believe that competition motivates people to work harder and that, as a result, the most talented individuals win out over those less competent. Survival of the fittest, right? In her latest book, A Bigger Prize: How We Can Do Better Than the Competition, author Margaret Heffernan challenges us to look at competition differently. It does not bring out the best in us. In fact, Heffernan makes the point that competition causes us to focus solely on the end goal, the prize. Not only do we lose out personally, but businesses also lose their ability to innovate and succeed in today's economy.
Complex adaptive systems (cas), including ecosystems, governments, biological cells, and markets, are characterized by intricate hierarchical arrangements of boundaries and signals. In ecosystems, for example, niches act as semi-permeable boundaries, and smells and visual patterns serve as signals; governments have departmental hierarchies with memoranda acting as signals; and so it is with other cas. Despite a wealth of data and descriptions concerning different cas, there remain many unanswered questions about "steering" these systems. In Signals and Boundaries, John Holland argues that understanding the origin of the intricate signal/border hierarchies of these systems is the key to answering such questions. He develops an overarching framework for comparing and steering cas through the mechanisms that generate their signal/boundary hierarchies.
Holland lays out a path for developing the framework that emphasizes agents, niches, theory, and mathematical models. He discusses, among other topics, theory construction; signal-processing agents; networks as representations of signal/boundary interaction; adaptation; recombination and reproduction; the use of tagged urn models (adapted from elementary probability theory) to represent boundary hierarchies; finitely generated systems as a way to tie the models examined into a single framework; the framework itself, illustrated by a simple finitely generated version of the development of a multi-celled organism; and Markov processes.
It is commonly believed that information spreads between individuals like a pathogen, with each exposure by an informed friend potentially resulting in a naive individual becoming infected. However, empirical studies of social media suggest that individual response to repeated exposure to information is far more complex. As a proxy for intervention experiments, we compare user responses to multiple exposures on two different social media sites, Twitter and Digg. We show that the position of exposing messages on the user-interface strongly affects social contagion. Accounting for this visibility significantly simplifies the dynamics of social contagion. The likelihood an individual will spread information increases monotonically with exposure, while explicit feedback about how many friends have previously spread it increases the likelihood of a response. We provide a framework for unifying information visibility, divided attention, and explicit social feedback to predict the temporal dynamics of user behavior.
"The widespread existence of cooperation is difficult to explain because individuals face strong incentives to exploit the cooperative tendencies of others. In the research reported here, we examined how the spread of reputational information through gossip promotes cooperation in mixed-motive settings. Results showed that individuals readily communicated reputational information about others, and recipients used this information to selectively interact with cooperative individuals and ostracize those who had behaved selfishly, which enabled group members to contribute to the public good with reduced threat of exploitation. Additionally, ostracized individuals responded to exclusion by subsequently cooperating at levels comparable to those who were not ostracized. These results suggest that the spread of reputational information through gossip can mitigate egoistic behavior by facilitating partner selection, thereby helping to solve the problem of cooperation even in noniterated interactions."
The concept of Singularity envisages a technology-driven explosion in intelligence. I argue that the resulting suprahuman intelligence will not be centralized in a single AI system, but distributed across all people and artifacts, as connected via the Internet. This global brain will function to tackle all challenges confronting the "global superorganism". Its capabilities will extend so far beyond our present abilities that they may be best conveyed as a pragmatic version of the "divine" attributes: omniscience (knowing everything needed to solve our problems), omnipresence (being available anywhere anytime), omnipotence (being able to provide any product or service at negligible cost) and omnibenevolence (aiming at the greatest happiness for the greatest number). By extrapolating present trends, technologies and evolutionary mechanisms, I argue that these abilities are likely to be realized within the next few decades. The resulting solution to all our individual and societal problems can be seen as a return to "Eden", the idyllic state of abundance and peace that supposedly existed before civilization. In this utopian society, individuals would be supported and challenged by the global brain to maximally develop their abilities, and to continuously create new knowledge. However, side effects of technological innovation are likely to create serious disturbances on the road to this utopia. The most important dangers are cascading failures facilitated by hyperconnectivity, the spread of psychological parasites that make people lose touch with reality, the loss of human abilities caused by an unnatural, passive lifestyle, and a conservative backlash triggered by too rapid changes. Because of the non-linearity of the system, the precise impact of such disturbances cannot be predicted. However, a range of precautionary measures, including a "global immune system", may pre-empt the greatest risks.
Return to Eden? Promises and Perils on the Road to an Omnipotent Global Intelligence Prof. Dr. Francis Heylighen