Memes were originally framed in relation to genes. In The Selfish Gene, Dawkins claimed that humans are “survival machines” for our genes, the replicating molecules that emerged from the primordial soup and that, through mutation and natural selection, evolved to generate beings that were more effective as carriers and propagators of genes. Still, Dawkins explained, genes could not account for all of human behavior, particularly the evolution of cultures. So he identified a second replicator, a “unit of cultural transmission” that he believed was “leaping from brain to brain” through imitation. He named these units “memes,” an adaptation of the Greek word mimeme, “that which is imitated.” Dawkins’ memes include everything from ideas, songs, and religious ideals to pottery fads. Like genes, memes mutate and evolve, competing for a limited resource—namely, our attention. Memes are, in Dawkins’ view, viruses of the mind—infectious. The successful ones grow exponentially, like a super flu. While memes are sometimes malignant (hellfire and faith, for atheist Dawkins), sometimes benign (catchy songs), and sometimes terrible for our genes (abstinence), memes do not have conscious motives. But still, he claims, memes parasitize us and drive us.
Systems of many interacting components — be they species, integers or subatomic particles — kept producing the same statistical curve, which had become known as the Tracy-Widom distribution. This puzzling curve seemed to be the complex cousin of the familiar bell curve, or Gaussian distribution, which represents the natural variation of independent random variables like the heights of students in a classroom or their test scores. Like the Gaussian, the Tracy-Widom distribution exhibits “universality,” a mysterious phenomenon in which diverse microscopic effects give rise to the same collective behavior. “The surprise is it’s as universal as it is,” said Tracy, a professor at the University of California, Davis.
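The classic setting where the Tracy-Widom distribution appears is the fluctuation of the largest eigenvalue of a random symmetric matrix. A minimal numerical sketch (the matrix size, trial count, and GOE-like construction are illustrative choices, not from the article):

```python
import numpy as np

def largest_eigenvalue_samples(n=100, trials=200, seed=0):
    """Collect the largest eigenvalue of `trials` random symmetric
    (GOE-like) matrices.  For large n these values concentrate near
    sqrt(2 * n), and their fluctuations follow the Tracy-Widom
    distribution rather than a Gaussian.
    """
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(trials):
        a = rng.normal(size=(n, n))
        h = (a + a.T) / 2          # symmetrize: off-diagonal variance 1/2
        samples.append(np.linalg.eigvalsh(h)[-1])  # eigenvalues ascend
    return np.array(samples)
```

A histogram of the shifted and rescaled samples shows the characteristic left-skewed Tracy-Widom shape, visibly different from the symmetric bell curve.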
Memes are the cultural equivalent of genes that spread across human culture by means of imitation. What makes a meme and what distinguishes it from other forms of information, however, is still poorly understood. Here we propose a simple formula for describing the characteristic properties of memes in the scientific literature, which is based on their frequency of occurrence and the degree to which they propagate along the citation graph. The product of the frequency and the propagation degree is the meme score, which accurately identifies important and interesting memes within a scientific field. We use data from close to 50 million publication records from the Web of Science, PubMed Central and the American Physical Society to demonstrate the effectiveness of our approach. Evaluations relying on human annotators, citation network randomizations, and comparisons with several alternative metrics confirm that the meme score is highly effective, while requiring no external resources or arbitrary thresholds and filters.
António F Fonseca's insight:
A simple truth: memes are repetition and imitation. A nice paper based on this simple idea.
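The meme score multiplies how often a term occurs by how faithfully it propagates along citations. A toy sketch of that product (the data layout and the simple propagation ratio are my own simplifications of the paper's definition, which uses a more careful, smoothed propagation measure):

```python
def meme_score(papers, citations, term):
    """Toy meme score: term frequency times a simple propagation
    ratio along the citation graph.

    papers    : dict paper_id -> set of terms in the paper
    citations : dict paper_id -> list of cited paper ids
    term      : candidate meme string
    """
    n = len(papers)
    frequency = sum(term in terms for terms in papers.values()) / n

    # Propagation: among papers citing at least one paper containing
    # the term, the fraction that carry the term themselves.
    inherited = exposed = 0
    for pid, cited in citations.items():
        if any(term in papers.get(c, set()) for c in cited):
            exposed += 1
            if term in papers.get(pid, set()):
                inherited += 1
    propagation = inherited / exposed if exposed else 0.0
    return frequency * propagation

# Tiny example: papers 2 and 3 cite paper 1; "graphene" appears in 1 and 2.
papers = {1: {"graphene"}, 2: {"graphene"}, 3: set()}
citations = {2: [1], 3: [1]}
score = meme_score(papers, citations, "graphene")  # frequency 2/3 times propagation 1/2
```

A term that is frequent but never inherited by citing papers scores zero, which is exactly the filtering behavior the abstract describes.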
Social networks have many counter-intuitive properties, including the "friendship paradox," which states that, on average, your friends have more friends than you do. Recently, a variety of other paradoxes were demonstrated in online social networks. This paper explores the origins of these network paradoxes. Specifically, we ask whether they arise from mathematical properties of the networks or whether they have a behavioral origin. We show that sampling from heavy-tailed distributions always gives rise to a paradox in the mean, but not the median. We propose a strong form of network paradoxes, based on utilizing the median, and validate it empirically using data from two online social networks. Specifically, we show that for any user the majority of the user's friends and followers have more friends, followers, etc. than the user, and that this cannot be explained by statistical properties of sampling. Next, we explore the behavioral origins of the paradoxes by using the shuffle test to remove correlations between node degrees and attributes. We find that paradoxes for the mean persist in the shuffled network, but not for the median. We demonstrate that strong paradoxes arise due to the assortativity of user attributes, including degree, and correlation between degree and attribute.
Network Weirdness: Exploring the Origins of Network Paradoxes Farshad Kooti, Nathan O. Hodas, Kristina Lerman
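The mean-based paradox already appears when degrees are drawn i.i.d. from a heavy-tailed distribution and "friends" are sampled proportionally to degree, as happens when you reach a node by following a random edge. A self-contained sketch (the Pareto shape parameter and sample size are illustrative):

```python
import random

def friendship_paradox_demo(n=100_000, shape=1.5, seed=1):
    """Compare the mean degree of a random node with the mean degree
    of a random friend under i.i.d. heavy-tailed degrees.

    Following a random edge reaches a node with probability
    proportional to its degree, so the friend-side mean is the
    size-biased mean E[D^2] / E[D], which exceeds E[D] whenever
    the degrees are not all equal (Cauchy-Schwarz).
    """
    rng = random.Random(seed)
    degrees = [int(rng.paretovariate(shape)) for _ in range(n)]  # each >= 1
    mean_node = sum(degrees) / n
    mean_friend = sum(d * d for d in degrees) / sum(degrees)
    return mean_node, mean_friend
```

With a heavy tail the friend-side mean is many times the node-side mean, purely from sampling; the paper's point is that the median-based version of the paradox does not follow from this mechanism and needs a behavioral explanation.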
Complex adaptive systems (cas), including ecosystems, governments, biological cells, and markets, are characterized by intricate hierarchical arrangements of boundaries and signals. In ecosystems, for example, niches act as semi-permeable boundaries, and smells and visual patterns serve as signals; governments have departmental hierarchies with memoranda acting as signals; and so it is with other cas. Despite a wealth of data and descriptions concerning different cas, there remain many unanswered questions about "steering" these systems. In Signals and Boundaries, John Holland argues that understanding the origin of the intricate signal/border hierarchies of these systems is the key to answering such questions. He develops an overarching framework for comparing and steering cas through the mechanisms that generate their signal/boundary hierarchies.
Holland lays out a path for developing the framework that emphasizes agents, niches, theory, and mathematical models. He discusses, among other topics, theory construction; signal-processing agents; networks as representations of signal/boundary interaction; adaptation; recombination and reproduction; the use of tagged urn models (adapted from elementary probability theory) to represent boundary hierarchies; finitely generated systems as a way to tie the models examined into a single framework; the framework itself, illustrated by a simple finitely generated version of the development of a multi-celled organism; and Markov processes.
It is commonly believed that information spreads between individuals like a pathogen, with each exposure by an informed friend potentially resulting in a naive individual becoming infected. However, empirical studies of social media suggest that individual response to repeated exposure to information is far more complex. As a proxy for intervention experiments, we compare user responses to multiple exposures on two different social media sites, Twitter and Digg. We show that the position of exposing messages on the user-interface strongly affects social contagion. Accounting for this visibility significantly simplifies the dynamics of social contagion. The likelihood an individual will spread information increases monotonically with exposure, while explicit feedback about how many friends have previously spread it increases the likelihood of a response. We provide a framework for unifying information visibility, divided attention, and explicit social feedback to predict the temporal dynamics of user behavior.
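One way to read the abstract's claim: each exposing message is seen only with a probability that decays with its position in the feed, and each seen exposure independently may trigger a share. A toy model under those assumptions (both parameter values are illustrative, not fitted to the Twitter/Digg data):

```python
import math

def share_probability(positions, p_respond=0.05, visibility_decay=0.05):
    """Toy social-contagion model with divided attention.

    positions: feed positions of the exposing messages (0 = top).
    A message at position p is noticed with probability
    exp(-visibility_decay * p); each noticed exposure independently
    triggers a share with probability p_respond.
    """
    q = 1.0  # probability that no exposure triggers a share
    for p in positions:
        seen = math.exp(-visibility_decay * p)
        q *= 1.0 - p_respond * seen
    return 1.0 - q
```

The probability rises monotonically with the number of exposures, while pushing the same exposure deeper into the feed lowers it, matching the two qualitative effects the abstract describes.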
Economics is changing. In the last few years it has generated a number of new approaches. One of the most promising - complexity economics - was pioneered in the 1980s and 1990s by a small team at the Santa Fe Institute. Economist and complexity theorist W. Brian Arthur led that team, and in this book he collects many of his articles on this new approach. The traditional framework sees behavior in the economy as in an equilibrium steady state. People in the economy face well-defined problems and use perfect deductive reasoning to base their actions on. The complexity framework, by contrast, sees the economy as always in process, always changing. People try to make sense of the situations they face using whatever reasoning they have at hand, and together create outcomes they must individually react to anew. The resulting economy is not a well-ordered machine, but a complex evolving system that is imperfect, perpetually constructing itself anew, and brimming with vitality.
The new vision complements and widens the standard one, and it helps answer many questions: Why does the stock market show moods and a psychology? Why do high-tech markets tend to lock in to the dominance of one or two very large players? How do economies form, and how do they continually alter in structure over time?
The papers collected here were among the first to use evolutionary computation, agent-based modeling, and cognitive psychology. They cover topics as disparate as how markets form out of beliefs; how technology evolves over the long span of time; why systems and bureaucracies get more complicated as they evolve; and how financial crises can be foreseen and prevented in the future.
The Western Ghats in India rise like a wall between the Arabian Sea and the heart of the subcontinent to the east. The 1,000-mile-long chain of coastal mountains is dense with lush rainforest and grasslands, and each year, clouds bearing monsoon rains blow in from the southwest and break against the mountains’ flanks, unloading water…
A nice multi-agent experiment showing the emergence of friendliness and of modeling the minds of others, after all a human advantage; neuroscientists explain it by mirror neurons. It is the ultimate reason for the existence of Facebook and the like.
This article is an attempt to capture, in a reasonable space, some of the major developments and currents of thought in information theory and the relations between them. I have particularly tried to include changes in the views of key authors in the field. The domains addressed range from mathematical-categorial, philosophical and computational approaches to systems, causal-compositional, biological and religious approaches and messaging theory. I have related key concepts in each domain to my non-standard extension of logic to real processes that I call Logic in Reality (LIR). The result is not another attempt at a General Theory of Information such as that of Burgin, or a Unified Theory of Information like that of Hofkirchner. It is not a compendium of papers presented at a conference, more or less unified around a particular theme. It is rather a highly personal, limited synthesis which nonetheless may facilitate comparison of insights, including contradictory ones, from different lines of inquiry. As such, it may be an example of the concept proposed by Marijuan, still little developed, of the recombination of knowledge. Like the best of the work to which it refers, the finality of this synthesis is the possible contribution that an improved understanding of the nature and dynamics of information may make to the ethical development of the information society.
Bremer and Daniel Cohnitz have a very good book on the subject, "Information and Information Flow," which covers almost all aspects of information theory. Unfortunately, "The Mathematical Theory of Information" by Jan Kåhre has not yet received the same attention.
Decisions in a group often result in imitation and aggregation, which are enhanced in panic, dangerous, stressful or negative situations. Current explanations of this enhancement are restricted to particular contexts, such as anti-predatory behavior, deflection of responsibility in humans, or cases in which the negative situation is associated with an increase in uncertainty. But this effect is observed across taxa and in very diverse conditions, suggesting that it may arise from a more general cause, such as a fundamental characteristic of social decision-making. Current decision-making theories do not explain it, but we noted that they concentrate on estimating which of the available options is the best one, implicitly neglecting the cases in which several options can be good at the same time. We explore a more general model of decision-making that instead estimates the probability that each option is good, allowing several options to be good simultaneously. This model predicts with great generality the enhanced imitation in negative situations. Fish and human behavioral data showing an increased imitation behavior in negative circumstances are well described by this type of decisions to choose a good option.
The Informative Herd: why humans and other animals imitate more when conditions are adverse Alfonso Pérez-Escudero, Gonzalo G. de Polavieja
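The key modeling move is to estimate, for each option separately, the probability that it is good, so several options can be good at once. A minimal Bayesian sketch of that estimate (the prior and the likelihood ratio `s` are illustrative placeholders for the quantities the paper fits to fish and human data):

```python
def prob_good(n_choosers, prior=0.5, s=2.0):
    """Posterior probability that an option is good, given that
    n_choosers individuals were seen choosing it.

    Independent-options sketch: each observed chooser multiplies the
    odds of 'good' by s = P(choose | good) / P(choose | bad) > 1.
    """
    odds = prior / (1 - prior) * s ** n_choosers
    return odds / (1 + odds)
```

Lowering the prior, as in adverse conditions where most options are likely bad, makes the same social evidence shift the estimate proportionally more: `prob_good(3, prior=0.1)` is nearly five times `prob_good(0, prior=0.1)`, versus a factor below two when `prior=0.5`. That relative gain is the model's route to enhanced imitation in negative situations.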
There is a rapidly expanding literature on the application of complex networks in economics, which has focused mostly on stock markets. In this paper, we discuss an application of complex networks to study international business cycles.
We construct complex networks based on GDP data from two data sets on G7 and OECD economies. Besides the well-known correlation-based networks, we also use a specific tool for representing causality in economics, Granger causality. We consider different filtering methods to derive the stationary component of the GDP series for each of the countries in the samples. The networks were found to be sensitive to the detrending method. While the correlation networks provide information on comovement between the national economies, the Granger causality networks can better predict fluctuations in countries’ GDP. Using them, we obtain directed networks that allow us to determine the relative influence of different countries on the global economy network. The US appears as the key player for both the G7 and OECD samples.
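A correlation-based version of such a network can be sketched in a few lines: detrend each GDP series (here via log first differences, i.e. growth rates, one of the filters the paper compares) and link countries whose growth correlations exceed a cutoff. The threshold and country labels below are illustrative; the Granger-causality variant would replace the symmetric correlation with a directed predictive test:

```python
import numpy as np

def correlation_network(gdp, threshold=0.5):
    """Correlation-based network from GDP levels.

    gdp: dict country -> 1-D array of positive GDP levels (equal length).
    Detrending: log first differences (growth rates).  Edges link
    countries whose growth-rate correlation exceeds `threshold`.
    """
    names = sorted(gdp)
    growth = np.array([np.diff(np.log(gdp[c])) for c in names])
    corr = np.corrcoef(growth)
    return [(names[i], names[j])
            for i in range(len(names))
            for j in range(i + 1, len(names))
            if corr[i, j] > threshold]
```

On synthetic data where two countries share a growth pattern and a third moves independently, only the first pair is linked, which is the comovement information the abstract attributes to correlation networks.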
The modern world is complex beyond human understanding and control. The science of complex systems aims to find new ways of thinking about the many interconnected networks of interaction that defy traditional approaches. Thus far, research into networks has largely been restricted to pairwise relationships represented by links between two nodes. This volume marks a major extension of networks to multidimensional hypernetworks for modeling multi-element relationships, such as companies making up the stock market, the neighborhoods forming a city, people making up committees, divisions making up companies, computers making up the internet, men and machines making up armies, or robots working as teams.
This volume makes an important contribution to the science of complex systems by: (i) extending network theory to include dynamic relationships between many elements; (ii) providing a mathematical theory able to integrate multilevel dynamics in a coherent way; (iii) providing a new methodological approach to analyze complex systems; and (iv) illustrating the theory with practical examples in the design, management and control of complex systems taken from many areas of application.
This lecture treats some enduring misconceptions about modeling. One of these is that the goal is always prediction. The lecture distinguishes between explanation and prediction as modeling goals, and offers sixteen reasons other than prediction to build a model. It also challenges the common assumption that scientific theories arise from and 'summarize' data, when often, theories precede and guide data collection; without theory, in other words, it is not clear what data to collect. Among other things, it also argues that the modeling enterprise enforces habits of mind essential to freedom. It is based on the author's 2008 Bastille Day keynote address to the Second World Congress on Social Simulation, George Mason University, and earlier addresses at the Institute of Medicine, the University of Michigan, and the Santa Fe Institute.
Social networks pervade our everyday lives: we interact, influence, and are influenced by our friends and acquaintances. With the advent of the World Wide Web, large amounts of data on social networks have become available, allowing the quantitative analysis of the distribution of information on them, including behavioral traits and fads. Recent studies of correlations among members of a social network, who exhibit the same trait, have shown that individuals influence not only their direct contacts but also friends’ friends, up to a network distance extending beyond their closest peers. Here, we show how such patterns of correlations between peers emerge in networked populations. We use standard models (yet reflecting intrinsically different mechanisms) of information spreading to argue that empirically observed patterns of correlation among peers emerge naturally from a wide range of dynamics, being essentially independent of the type of information, of how it spreads, and even of the class of underlying network that interconnects individuals. Finally, we show that the sparser and more clustered the network, the more far reaching the influence of each individual will be. DOI: http://dx.doi.org/10.1103/PhysRevLett.112.098702
Origin of Peer Influence in Social Networks Phys. Rev. Lett. 112, 098702 – Published 6 March 2014 Flávio L. Pinheiro, Marta D. Santos, Francisco C. Santos, and Jorge M. Pacheco
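The mechanism can be seen in a deliberately minimal setting: spread a binary trait in short contiguous bursts around a ring network and measure the trait correlation between nodes as a function of their distance. Burst length, ring size, and seed are arbitrary choices for illustration:

```python
import random

def trait_correlation_by_distance(n=2000, burst=5, max_d=6, seed=7):
    """Adoption spreads in contiguous bursts of `burst` nodes around a
    ring; return the trait correlation at distances 1..max_d.
    Purely local spreading still correlates nodes several hops apart.
    """
    rng = random.Random(seed)
    trait = [0] * n
    for _ in range(n // (2 * burst)):        # cover roughly 40% of nodes
        start = rng.randrange(n)
        for k in range(burst):
            trait[(start + k) % n] = 1
    mean = sum(trait) / n
    var = mean * (1 - mean)
    corrs = []
    for d in range(1, max_d + 1):
        cov = sum((trait[i] - mean) * (trait[(i + d) % n] - mean)
                  for i in range(n)) / n
        corrs.append(cov / var)
    return corrs
```

Correlation is strongest between direct neighbours and decays towards zero beyond the burst length, so nodes end up correlated with peers they never interacted with directly, which is the pattern the paper explains.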
The predominant opinion, shared by the majority of people, emerges rapidly on Twitter, whatever the topic, and once it has stabilized it can hardly be changed. This is the finding of a new automated analysis, which could be used to predict, but perhaps also to influence, how public opinion will lean.