This chapter introduces statistical methods used in the analysis of social networks and in the rapidly evolving parallel field of network science. Although several instances of social network analysis in health services research have appeared recently, the majority involve only the most basic methods and thus scratch the surface of what might be accomplished. Cutting-edge methods using relevant examples and illustrations in health services research are provided. By A. James O'Malley and Jukka-Pekka Onnela, arXiv:1404.0067 [physics.soc-ph]
Via NESS, Complejidad y Economía
Across broad areas of the environmental and social sciences, simulation models are an important way to study systems inaccessible to scientific experimental and observational methods, and also an essential complement to those more conventional approaches. The contemporary research literature is teeming with abstract simulation models whose presentation is mathematically demanding and requires a high level of knowledge of quantitative and computational methods and approaches. Furthermore, simulation models designed to represent specific systems and phenomena are often complicated, and, as a result, difficult to reconstruct from their descriptions in the literature. Spatial Simulation: Exploring Pattern and Process aims to provide a practical and accessible account of dynamic spatial modelling, while also equipping readers with a sound conceptual foundation in the subject, and a useful introduction to the wide-ranging literature.
Can one hear the 'sound' of a growing network? We address the problem of recognizing the topology of evolving biological or social networks. Starting from percolation theory, we analytically prove a linear inverse relationship between two simple graph parameters (the logarithm of the average cluster size and the logarithm of the ratio of the edges of the graph to the theoretical maximum number of edges for that graph) that holds for all growing power-law graphs. The result establishes a novel property of evolving power-law networks in the asymptotic limit of network size. Numerical simulations as well as fitting to real-world citation and coauthorship networks demonstrate that the result holds for networks of finite size, and provides a convenient measure of the extent to which an evolving family of networks belongs to the same power-law class.
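The two graph parameters in the claimed relation are easy to measure for any edge list. A minimal stdlib sketch (our own illustrative helpers, not the authors' code) that computes both:

```python
def edge_ratio(n, m):
    """Ratio of the m actual edges to the theoretical maximum n*(n-1)/2."""
    return m / (n * (n - 1) / 2)

def avg_cluster_size(n, edges):
    """Mean connected-component (cluster) size, via union-find."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    sizes = {}
    for node in range(n):
        root = find(node)
        sizes[root] = sizes.get(root, 0) + 1
    return sum(sizes.values()) / len(sizes)
```

Tracking the logarithm of `avg_cluster_size` against the logarithm of `edge_ratio` across snapshots of a growing graph is then a direct way to probe the claimed linear inverse relationship numerically.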
Many optimization algorithms have been developed by drawing inspiration from swarm intelligence (SI). These SI-based algorithms can have some advantages over traditional algorithms. In this paper, we carry out a critical analysis of these SI-based algorithms by analyzing the ways they mimic evolutionary operators. We also analyze the ways of achieving exploration and exploitation in algorithms by using mutation, crossover and selection. In addition, we also look at algorithms using dynamic systems, self-organization and the Markov chain framework. Finally, we provide some discussions and topics for further research.
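To make the exploration/exploitation trade-off concrete, here is a textbook particle swarm optimizer in plain Python (a generic sketch of the SI family under discussion, not any specific algorithm from the paper; all parameter values are conventional defaults):

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over R^dim with a basic particle swarm."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]               # each particle's best-so-far
    pbest_val = [f(x) for x in xs]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = list(pbest[g]), pbest_val[g]  # swarm's best-so-far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive term (own memory, exploitation)
                # + social term (swarm's best); the random factors r1, r2
                # supply the exploration the abstract analyzes
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = list(xs[i]), val
                if val < gbest_val:
                    gbest, gbest_val = list(xs[i]), val
    return gbest, gbest_val
```

On a smooth test function such as the 2-d sphere `f(x) = sum(xi**2)`, this converges close to the optimum within the default budget; the selection step is implicit in keeping only improving personal and global bests.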
A 1989 program, with Lorenz
Graph theory is a valuable framework to study the organization of functional and anatomical connections in the brain. Its use for comparing network topologies, however, is not without difficulties. Graph measures may be influenced by the number of nodes (N) and the average degree (k) of the network. The explicit form of that influence depends on the type of network topology, which is usually unknown for experimental data. Direct comparisons of graph measures between empirical networks with different N and/or k can therefore yield spurious results. We list benefits and pitfalls of various approaches that intend to overcome these difficulties. We discuss the initial graph definition of unweighted graphs via fixed thresholds, average degrees or edge densities, and the use of weighted graphs. For instance, choosing a threshold to fix N and k does eliminate size and density effects but may lead to modifications of the network by enforcing (ignoring) non-significant (significant) connections. As opposed to fixing N and k, graph measures are often normalized via random surrogates but, in fact, this may even increase the sensitivity to differences in N and k for the commonly used clustering coefficient and small-world index. To avoid such a bias we tried to estimate the N,k-dependence for empirical networks, which, if successful, can serve to correct for size effects. We also add a number of methods used in the social sciences that build on statistics of local network structures, including exponential random graph models and motif counting. We show that none of the methods investigated here allows for a reliable and fully unbiased comparison, but some perform better than others.
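The surrogate normalization criticized above is simple to reproduce: compute a measure on the empirical graph and divide by its mean over degree-preserving randomized copies. A stdlib sketch (our own illustration; graphs are dicts of neighbor sets):

```python
import random

def clustering(adj):
    """Average local clustering coefficient; degree < 2 nodes count as 0."""
    total = 0.0
    for u in adj:
        nbrs = list(adj[u])
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

def rewired(adj, n_swaps, seed=0):
    """Degree-preserving surrogate via random double-edge swaps:
    pick edges (a, b) and (c, d), replace them with (a, d) and (c, b)."""
    rng = random.Random(seed)
    new = {u: set(adj[u]) for u in adj}
    for _ in range(n_swaps):
        edges = sorted({tuple(sorted((u, v))) for u in new for v in new[u]})
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4 or d in new[a] or b in new[c]:
            continue  # would create a self-loop or multi-edge
        new[a].discard(b); new[b].discard(a)
        new[c].discard(d); new[d].discard(c)
        new[a].add(d); new[d].add(a)
        new[c].add(b); new[b].add(c)
    return new
```

The normalized coefficient is then `clustering(adj)` divided by the average of `clustering(rewired(adj, n_swaps, seed))` over several seeds; the abstract's warning is precisely that this ratio can itself remain N,k-sensitive.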
A Gömböc is a strange thing. It looks like an egg with sharp edges, and when you put it down it starts wriggling and rolling around with an apparent will of its own. Until quite recently, no one knew whether Gömböcs even existed. Even now, Gábor Domokos, one of their discoverers, reckons that in some sense they barely exist at all. So what are Gömböcs and what makes them special?
This lecture treats some enduring misconceptions about modeling. One of these is that the goal is always prediction. The lecture distinguishes between explanation and prediction as modeling goals, and offers sixteen reasons other than prediction to build a model. It also challenges the common assumption that scientific theories arise from and 'summarize' data, when often, theories precede and guide data collection; without theory, in other words, it is not clear what data to collect. Among other things, it also argues that the modeling enterprise enforces habits of mind essential to freedom. It is based on the author's 2008 Bastille Day keynote address to the Second World Congress on Social Simulation, George Mason University, and earlier addresses at the Institute of Medicine, the University of Michigan, and the Santa Fe Institute.
Many man-made and natural phenomena, including the intensity of earthquakes, population of cities, and size of international wars, are believed to follow power-law distributions. The accurate identification of power-law patterns has significant consequences for developing an understanding of complex systems. However, statistical evidence for or against the power-law hypothesis is complicated by large fluctuations in the empirical distribution's tail, and these are worsened when information is lost from binning the data. We adapt the statistically principled framework for testing the power-law hypothesis, developed by Clauset, Shalizi and Newman, to the case of binned data. This approach includes maximum-likelihood fitting, a hypothesis test based on the Kolmogorov-Smirnov goodness-of-fit statistic, and likelihood ratio tests for comparing against alternative explanations. We evaluate the effectiveness of these methods on synthetic binned data with known structure and apply them to twelve real-world binned data sets with heavy-tailed patterns.
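For orientation, the continuous (unbinned) version of the Clauset-Shalizi-Newman machinery fits in a few lines; the paper's contribution is adapting exactly these two ingredients, the maximum-likelihood exponent and the KS distance, to binned data, which this sketch does not do:

```python
import math

def fit_alpha(data, xmin):
    """Continuous maximum-likelihood power-law exponent
    (Clauset-Shalizi-Newman), using only values >= xmin."""
    tail = [x for x in data if x >= xmin]
    return 1 + len(tail) / sum(math.log(x / xmin) for x in tail)

def ks_distance(data, xmin, alpha):
    """Kolmogorov-Smirnov distance between the empirical tail CDF
    and the fitted power-law CDF 1 - (x / xmin)**(1 - alpha)."""
    tail = sorted(x for x in data if x >= xmin)
    n = len(tail)
    d = 0.0
    for i, x in enumerate(tail):
        model = 1 - (x / xmin) ** (1 - alpha)
        d = max(d, abs(model - i / n), abs(model - (i + 1) / n))
    return d
```

In the full framework the KS distance of the fit is compared against KS distances of synthetic data sets drawn from the fitted model, yielding a p-value for the power-law hypothesis.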
Most people who invest in stock markets want to be rich; thus, many technical methods have been created to beat the market. If one knew the predictability of the price series in different markets, it would be easier to conduct technical analysis, at least to some extent. Here we use one of the most basic sold-and-bought trading strategies to establish the profit landscape, and then calculate the parameters that characterize the strength of predictability. According to the analysis of the scaling of the profit landscape, we find that Chinese individual stocks are harder to predict than US ones, and that individual stocks are harder to predict than indexes in both the Chinese and US stock markets. Since the Chinese (US) stock market is representative of emerging (developed) markets, our comparative study on the markets of these two countries is of potential value not only for conducting technical analysis, but also for understanding the physical mechanisms of different kinds of markets in terms of scaling.
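One plausible reading of such a threshold strategy (our own simplified interpretation, not necessarily the authors' exact rules) buys after a fractional drop and sells after a fractional rise relative to the last transaction price:

```python
def profit(prices, buy_drop, sell_rise, fee=0.0):
    """Return of a simple threshold strategy: buy after the price falls
    by a fraction buy_drop below the reference price, sell after it
    rises by sell_rise above it. Starts with cash 1.0 and returns
    final wealth minus 1 (any leftover shares valued at the last price)."""
    cash, shares = 1.0, 0.0
    ref = prices[0]  # last transaction (or starting) price
    for p in prices:
        if shares == 0.0 and p <= ref * (1 - buy_drop):
            shares = cash * (1 - fee) / p
            cash, ref = 0.0, p
        elif shares > 0.0 and p >= ref * (1 + sell_rise):
            cash = shares * p * (1 - fee)
            shares, ref = 0.0, p
    return cash + shares * prices[-1] - 1.0
```

Evaluating `profit` on a grid of `(buy_drop, sell_rise)` pairs produces a profit landscape whose roughness and scaling can then be compared across markets, in the spirit of the abstract.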
Social systems have recently attracted much attention, with attempts to understand social behavior with the aid of statistical mechanics applied to complex systems. Collective properties of such systems emerge from couplings between components, for example, individual persons, transportation nodes such as airports or subway stations, and administrative districts. Among various collective properties, criticality is known as a characteristic property of a complex system, which helps the system to respond flexibly to external perturbations. This work considers the criticality of the urban transportation system entailed in the massive smart card data on the Seoul transportation network. Analyzing the passenger flow on the Seoul bus system during one week, we find explicit power-law correlations in the system, that is, power-law behavior of the strength correlation function of bus stops, and verify scale invariance of the strength fluctuations. Such criticality is probed by means of the scaling and renormalization analysis of the modified gravity model applied to the system. Here a group of nearby (bare) bus stops is transformed into a (renormalized) “block stop” and the scaling relations of the network density turn out to be closely related to the fractal dimensions of the system, revealing the underlying structure. Specifically, the resulting renormalized values of the gravity exponent and of the Hill coefficient give a good description of the Seoul bus system: the former measures the characteristic dimensionality of the network whereas the latter reflects the coupling between distinct transportation modes. It is thus demonstrated that such ideas of physics as scaling and renormalization can be applied successfully to social phenomena, exemplified here by the passenger flow.
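The plain gravity model underlying this analysis posits flows T = G * s_i * s_j / d^gamma between stops with strengths s_i, s_j at distance d. A minimal sketch (the paper uses a modified model with a Hill coefficient; here we only estimate the basic gravity exponent by log-linear least squares, with our own function name):

```python
import math

def gravity_exponent(flows):
    """Estimate gamma in the gravity model T = G * s_i * s_j / d**gamma
    from records (T, s_i, s_j, d), by least squares on the identity
    log T - log(s_i * s_j) = log G - gamma * log d."""
    xs = [math.log(d) for _, _, _, d in flows]
    ys = [math.log(T) - math.log(si * sj) for T, si, sj, _ in flows]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope
```

Repeating the estimate after coarse-graining nearby stops into block stops, as the abstract describes, is what reveals how the exponent behaves under renormalization.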

“The city is not only a community, it is a conflux. … The real city, as a center of industry, is a conflux of streams of traffic; as a center of culture, it is a conflux of streams of thought.” So wrote Benton MacKaye in 1928 in his book The New Exploration: A Philosophy of Regional Planning. When I sent a copy of my own recent book The New Science of Cities to my erstwhile colleague and old friend Lionel March, he quickly scoured it and said: “I see in your Preamble that you cite Castells' 'space of flows' and that your approach makes much of flows and networks. I immediately turned to your bibliography to search for the name Benton MacKaye. It is not there! The author of The New Exploration (1928) is my hero of metropolitan/regional development. I'm sure you know of him.”
This article is based on the keynote address presented to the European Meetings on Cybernetics and Systems Research (EMCSR) in 2012, on the occasion of Edgar Morin receiving the Bertalanffy Prize in Complexity Thinking, awarded by the Bertalanffy Centre for the Study of Systems Science (BCSSS). The following theses will be elaborated on: (a) The whole is at the same time more and less than its parts; (b) We must abandon the term "object" for systems, because all objects are systems and parts of systems; (c) System and organization are the two faces of the same reality; (d) Ecosystems illustrate self-organization.
Arrogant physicists: do they think economics is easy? No. But their ideas can help improve economics, and here's why.
Almost universally, wealth is not distributed uniformly within societies or economies. Even though wealth data have been collected in various forms for centuries, the origins of the observed wealth disparity and social inequality are not yet fully understood. In particular, the impact and connections of human behavior on wealth could so far not be inferred from data. Here we study wealth data from the virtual economy of the massive multiplayer online game (MMOG) Pardus. This data set not only contains every player's wealth at every point in time, but also all actions of every player over a time span of almost a decade. We find that wealth distributions in the virtual world are very similar to those in western countries. In particular, we find an approximate exponential for low wealth and a power-law tail. The Gini index is found to be 0.65, which is close to the indices of many Western countries. We find that wealth-increase rates depend on the time when players entered the game. Players that entered the game early on tend to have remarkably higher wealth-increase rates than those who joined later. Studying the players' positions within their social networks, we find that the local position in the trade network is most relevant for wealth. Wealthy people have high in- and out-degree in the trade network, relatively low nearest-neighbor degree and a low clustering coefficient. Wealthy players have many mutual friendships and are socially well respected by others, but spend more time on business than on socializing. We find that players that are not organized within social groups with at least three members are significantly poorer on average. We observe that high 'political' status and high wealth go hand in hand. Wealthy players have few personal enemies, but show animosity towards players that behave as public enemies.
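The Gini index cited above (0.65 for Pardus) is a one-line computation from a sorted wealth vector; a minimal sketch using the standard mean-difference formula:

```python
def gini(wealth):
    """Gini index of a list of non-negative wealths (0 = perfect
    equality, approaching 1 = maximal inequality). Uses the standard
    formula on sorted values: G = 2 * sum_i i*w_i / (n * sum w)
    minus (n + 1) / n, with ranks i = 1..n."""
    w = sorted(wealth)
    n = len(w)
    total = sum(w)
    cum = sum((i + 1) * wi for i, wi in enumerate(w))
    return 2 * cum / (n * total) - (n + 1) / n
```

For a perfectly equal population the index is 0; if a single player holds everything it approaches (n - 1) / n.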
We give a tutorial for the study of dynamical systems on networks, and we focus in particular on "simple" situations that are tractable analytically. We briefly motivate why examining dynamical systems on networks is interesting and important. We then give several fascinating examples and discuss some theoretical results. We also discuss dynamical systems on dynamical (i.e., time-dependent) networks, overview software implementations, and give our outlook on the field.
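A canonical example of such a dynamical system is stochastic SIS (susceptible-infected-susceptible) spreading; a minimal synchronous-update sketch (one illustrative modeling choice among several in the tutorial literature, not a specific algorithm from this paper):

```python
import random

def sis_step(adj, infected, beta, mu, rng=random):
    """One synchronous step of SIS dynamics on a network: every
    infected-susceptible contact transmits with probability beta,
    then each originally infected node recovers with probability mu
    (a node reinfected in the same step may thus recover immediately;
    that tie-breaking is a modeling choice)."""
    new = set(infected)
    for u in infected:
        for v in adj[u]:
            if v not in infected and rng.random() < beta:
                new.add(v)
        if rng.random() < mu:
            new.discard(u)
    return new
```

Iterating `sis_step` and recording `len(infected)` traces the prevalence over time; swapping in other local rules (voter model updates, coupled oscillators) follows the same pattern of a per-step map on node states.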
The friendship paradox states that your friends have, on average, more friends than you have. Does the paradox "hold" for other individual characteristics, like income or happiness? To address this question, we generalize the friendship paradox for arbitrary node characteristics in complex networks. By analyzing two coauthorship networks of Physical Review journals and Google Scholar profiles, we find that the generalized friendship paradox (GFP) holds at the individual and network levels for various characteristics, including the number of coauthors, the number of citations, and the number of publications. The origin of the GFP is shown to be rooted in positive correlations between degree and characteristics. As a fruitful application of the GFP, we suggest effective and efficient sampling methods for identifying high-characteristic nodes in large-scale networks. Our study of the GFP can shed light on the interplay between network structure and node characteristics in complex networks.
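The individual-level GFP can be checked directly: for each node, compare its own value of a characteristic with the mean value among its neighbors. A minimal sketch (our own helper, with the graph as a dict of neighbor lists):

```python
def paradox_fraction(adj, char):
    """Fraction of (non-isolated) nodes whose neighbors' mean value of
    the characteristic `char` (degree, citations, ...) exceeds their own."""
    holds, n = 0, 0
    for u, nbrs in adj.items():
        if not nbrs:
            continue
        n += 1
        if sum(char[v] for v in nbrs) / len(nbrs) > char[u]:
            holds += 1
    return holds / n
```

The classic friendship paradox is the special case `char = {u: len(adj[u]) for u in adj}`; on a star graph every leaf's single friend (the hub) out-degrees it, so the paradox holds for all but the hub.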
Background
There is a rapidly expanding literature on the application of complex networks in economics that has focused mostly on stock markets. In this paper, we discuss an application of complex networks to study international business cycles.

Methodology/Principal Findings
We construct complex networks based on GDP data from two data sets on G7 and OECD economies. Besides the well-known correlation-based networks, we also use a specific tool for representing causality in economics, Granger causality. We consider different filtering methods to derive the stationary component of the GDP series for each of the countries in the samples. The networks were found to be sensitive to the detrending method. While the correlation networks provide information on co-movement between the national economies, the Granger causality networks can better predict fluctuations in countries' GDP. By using them, we obtain directed networks that allow us to determine the relative influence of different countries on the global economy network. The US appears as the key player for both the G7 and OECD samples.
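A directed edge in such a network typically corresponds to a significant pairwise Granger test. A pure-Python sketch of the single-lag version (our own minimal implementation for illustration; real analyses would select lag order and apply a significance threshold, e.g. via statsmodels' `grangercausalitytests`):

```python
def ols_rss(X, y):
    """Residual sum of squares of an OLS fit of y on the rows of X,
    solving the normal equations by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(row[p] * row[q] for row in X) for q in range(k)] for p in range(k)]
    b = [sum(row[p] * yi for row, yi in zip(X, y)) for p in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):  # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return sum((yi - sum(ri * bi for ri, bi in zip(row, beta))) ** 2
               for row, yi in zip(X, y))

def granger_f(x, y):
    """F-statistic for 'y Granger-causes x' with a single lag: does
    adding y[t-1] to the autoregression of x[t] on x[t-1] cut the RSS?"""
    n = len(x)
    target = [x[t] for t in range(1, n)]
    restricted = [[1.0, x[t - 1]] for t in range(1, n)]
    full = [[1.0, x[t - 1], y[t - 1]] for t in range(1, n)]
    rss_r, rss_f = ols_rss(restricted, target), ols_rss(full, target)
    return (rss_r - rss_f) / (rss_f / (len(target) - 3))
```

Computing `granger_f` for every ordered pair of detrended GDP series and keeping the significant statistics yields the adjacency matrix of a directed influence network of the kind described above.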
This chapter aims at reviewing complex networks models and methods that were either developed for or applied to socioeconomic issues, and pertinent to the theme of New Economic Geography. After an introduction to the foundations of the field of complex networks, the present summary adds insights on the statistical mechanical approach, and on the most relevant computational aspects for the treatment of these systems. As the most frequently used model for interacting agent-based systems, a brief description of the statistical mechanics of the classical Ising model on regular lattices is included, together with recent extensions of the same model on small-world Watts-Strogatz and scale-free Albert-Barabási complex networks. Other sections of the chapter are devoted to applications of complex networks to economics, finance, spreading of innovations, and regional trade and developments. The chapter also reviews results involving applications of complex networks to other relevant socioeconomic issues, including results for opinion and citation networks. Finally, some avenues for future research are introduced before summarizing the main conclusions of the chapter.
Complex adaptive systems (cas), including ecosystems, governments, biological cells, and markets, are characterized by intricate hierarchical arrangements of boundaries and signals. In ecosystems, for example, niches act as semipermeable boundaries, and smells and visual patterns serve as signals; governments have departmental hierarchies with memoranda acting as signals; and so it is with other cas. Despite a wealth of data and descriptions concerning different cas, there remain many unanswered questions about "steering" these systems. In Signals and Boundaries, John Holland argues that understanding the origin of the intricate signal/border hierarchies of these systems is the key to answering such questions. He develops an overarching framework for comparing and steering cas through the mechanisms that generate their signal/boundary hierarchies. Holland lays out a path for developing the framework that emphasizes agents, niches, theory, and mathematical models. He discusses, among other topics, theory construction; signal-processing agents; networks as representations of signal/boundary interaction; adaptation; recombination and reproduction; the use of tagged urn models (adapted from elementary probability theory) to represent boundary hierarchies; finitely generated systems as a way to tie the models examined into a single framework; the framework itself, illustrated by a simple finitely generated version of the development of a multicelled organism; and Markov processes.
Via Complexity Digest, António F Fonseca
Nafeez Ahmed: Natural and social scientists develop new model of how 'perfect storm' of crises could unravel global system
It is part of our daily social-media experience that seemingly ordinary items (videos, news, publications, etc.) unexpectedly gain an enormous amount of attention. Here we investigate how unexpected these events are. We propose a method that, given some information on the items, quantifies the predictability of events, i.e., the potential of identifying in advance the most successful items, defined as the upper bound for the quality of any prediction based on the same information. Applying this method to different data, ranging from views of YouTube videos to posts in Usenet discussion groups, we invariantly find that predictability increases for the most extreme events. This indicates that, despite the inherently stochastic collective dynamics of users, efficient prediction is possible for the most extreme events.
Activities such as distributed collaboration are becoming more common as organizations become geographically diverse, and they have important consequences when the group makes collective decisions.
Via Roger D. Jones, PhD
