By W. Brian Arthur; External Professor, Santa Fe Institute; Visiting Researcher, Palo Alto Research Center.
Economics is a stately subject, one that has altered little since its modern foundations were laid in Victorian times. Now it is changing radically. Standard economics is suddenly being challenged by a number of new approaches: behavioral economics, neuroeconomics, new institutional economics. One of the new approaches came to life at the Santa Fe Institute: complexity economics.
Complexity economics got its start in 1987, when a now-famous conference of scientists and economists, convened by physicist Philip Anderson and economist Kenneth Arrow, met to discuss the economy as an evolving complex system. That conference gave birth a year later to the Institute’s first research program – the Economy as an Evolving Complex System – and I was asked to lead it. That program has in turn laid down a new and different way of looking at the economy.
The Creative Economy is diverse, housing artists, computer programmers, architects, designers and many more. An unfortunate by-product of this diversity is that the sector lacks a clear identity, and it can be overlooked in economic debate. But the Creative Economy is extremely important to the UK: it employs 8 per cent of all workers and is growing three times faster than the rest of the economy.
When I attended university in 1984 as a psychology undergraduate in the States, the pathway to scientific literacy was pure and simple: you took a research methods course, followed by a statistics course or two, and that was it – you were prepared to do social science! Okay, if you were lucky, you could also take a qualitative methods or historical methods course, but my professors were pretty clear: the real science was quantitative methods and statistics.
Later, when I moved on to graduate school in clinical psychology and then medical sociology, little changed. Certainly, the statistics got more interesting and esoteric, which I very much liked. But the same old distinctions seemed dominant, with quantitative method and statistics holding the upper hand: hard science over soft science; quantitative method over qualitative method; math over metaphor; method over theory; representation over interpretation; experiment over description; prediction over understanding; variables over cases; on and on it went.
And why? Because, my quantitative professors argued, statistics (and pretty much statistics alone) made ‘sense’ of the complexity of social reality – what Warren Weaver, in his brilliant 1948 article “Science and Complexity,” called the problem of disorganized complexity. According to Weaver, nuances aside, the problems of science can be organized, historically speaking, into three main types. The first are simple systems, composed of a few variables and amenable to near-complete mathematical description – clocks, pendulums, basic machines. Second are disorganized complex systems, where the unpredictable microscopic behavior of a very large number of variables – gases, crowds, atoms – makes them highly resistant to simple formulas and calls instead for the tools of statistics. Third are organized complex systems, where the interrelationships among a large number of variables, and the organic whole they create, determine their complexity – human bodies, immune systems, formal organizations, social institutions, networks, cities, global economies.
Renowned public thinker Cass Sunstein defends his groundbreaking nudge theory. When the state seeks to influence our choices in “our best interests” is this liberty-infringing meddling, or simply good government?
Every now and again a paper is published on the number of errors made in academic articles. These papers document the frequency of conceptual errors, factual errors, errors in abstracts, errors in quotations, and errors in reference lists. James Hartley reports that the data are alarming, but suggests a possible way of reducing them. Perhaps in future there might be a single computer program that matches references in the text with correct (pre-stored) references as one writes the text.
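Hartley’s imagined reference-checker can be sketched in a few lines. This is a hypothetical illustration, not an existing tool: the citation format, function name and sample data below are all my own assumptions, and real citation styles would need a proper parser.

```python
import re

def check_citations(text, reference_keys):
    """Flag in-text author-year citations with no matching entry in the
    pre-stored reference list. Assumes simple citations like (Hartley, 2008)."""
    cited = re.findall(r"\(([A-Z][A-Za-z]+),\s*(\d{4})\)", text)
    return [f"{author}, {year}" for author, year in cited
            if f"{author}, {year}" not in reference_keys]

refs = {"Hartley, 2008"}
draft = ("Errors in quotations are common (Hartley, 2008), as are "
         "errors in abstracts (Weaver, 1948).")
print(check_citations(draft, refs))  # the Weaver entry is missing from refs
```

A real implementation would also have to match references the other way round, flagging stored entries that are never cited.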
We show that the behaviour of Bitcoin has interesting similarities to stock and precious-metal markets, such as gold and silver. We report that whilst Litecoin, the second-largest cryptocurrency, closely follows Bitcoin's behaviour, it does not show all of Bitcoin's reported properties. Agreements between apparently disparate complexity measures have been found, and we show that statistical, information-theoretic, algorithmic and fractal measures have different but interesting capabilities for clustering families of markets by type. The report is particularly interesting because of the range and novel use of some measures of complexity to characterize price behaviour, because of the IRS's designation of Bitcoin as investment property rather than a currency, and because of the Canadian government's announcement of its own electronic currency, MintChip.
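One of the information-theoretic measures alluded to can be illustrated with the Shannon entropy of discretised log-returns. This is a minimal sketch of the general idea, not the authors' actual pipeline; the price series here is synthetic, and the bin count is an arbitrary choice.

```python
import numpy as np

def shannon_entropy(series, bins=10):
    """Shannon entropy (in bits) of the histogram of a series' log-returns.

    A crude stand-in for the information-theoretic complexity measures
    used to compare markets; higher values mean more uniform returns."""
    returns = np.diff(np.log(series))
    counts, _ = np.histogram(returns, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
prices = np.exp(np.cumsum(rng.normal(0, 0.02, 500)))  # synthetic random walk
print(round(shannon_entropy(prices), 2))
```

Markets could then be clustered by computing several such measures per price series and comparing the resulting feature vectors.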
It’s official, at least according to the OECD. Rising inequality is estimated to have knocked more than 10 percentage points off growth in Mexico, New Zealand, Sweden, Finland and Norway over the past two decades. In Italy, the United Kingdom and the United States, the cumulative growth rate would have been six to nine percentage points higher had income disparities not widened. On the other hand, greater equality helped increase GDP per capita in Spain, France and Ireland prior to the crisis.
All Horizon 2020 calls for proposals are published on the Horizon 2020 Participant Portal, a one-stop-shop website that has been available since January 2014. The Participant Portal provides all the information needed for responding to a call: its opening and closing dates, its overall budget, the relevant Work Programme and all other documents related to the call. Calls can be searched and filtered using various parameters, such as open, closed or forthcoming status, as well as keywords. Another way to learn about open or forthcoming calls is to consult the Work Programme of a specific domain.
Since social media sites such as “Facebook” burst onto the scene 10 years ago, researchers and market analysts have been looking for a way to tap into the content on these sites. In recent years, there have been several attempts to do this, some more successful than others (Lewis, Zamith & Hermida, 2013), particularly with regard to the scale of the medium in question. For the uninitiated (apologies to those who already are), the term “Big Data” is the catch-all for the enormous trails of information generated by consumers going about their day in an increasingly digitized world (Manyika et al., 2011). It is this sheer volume of information that poses the first hurdle to be overcome when conducting research online. For example, earlier this year I was collecting data on the European Parliamentary Election and gathered over 16,000 tweets in about three weeks. Bearing in mind that on average a tweet contains approximately 12 words in 1.5 sentences (Twitter, 2013), for those three weeks I had 196,500 words or 24,500 sentences to come to terms with. That is a lot of data for one person to deal with alone, especially if only applying manual techniques such as content analysis.
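The back-of-the-envelope corpus sizing above can be reproduced directly. The exact tweet count is my own rounding consistent with “over 16,000 tweets” and the word total quoted; only the per-tweet averages come from the source.

```python
# Corpus-size estimate from the averages quoted above:
# 12 words and 1.5 sentences per tweet (Twitter, 2013).
TWEETS = 16_375              # assumed count, consistent with "over 16,000"
WORDS_PER_TWEET = 12
SENTENCES_PER_TWEET = 1.5

words = TWEETS * WORDS_PER_TWEET
sentences = TWEETS * SENTENCES_PER_TWEET
print(words, round(sentences))   # roughly the 196,500 words / ~24,500 sentences cited
```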
Dynamics of Multi-Level Systems
Seminar/School: 1–12 June 2015
Workshop: 15–19 June 2015
Scientific Coordinators:
Fatihcan Atay (Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig, Germany)
Kristian Lindgren (Chalmers University of Technology, Gothenburg, Sweden)
Eckehard Olbrich (Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig, Germany)
Predicting which scientific discoveries will change the world is, arguably, a fool's game. Who knows what the future will bring? Yet every year a handful of developments—say, the arrival of the quickest, cheapest genome-editing tool yet—get us so excited that we cannot help ourselves. This year those breakthroughs include tools for reprogramming living cells and rendering lab animals transparent; ways of powering electronics with sound waves and saliva; smartphone screens that correct for the flaws in your vision; Lego-like atomic structures that could produce major advances in superconductivity research; and others. Read about them now, then pay attention in the coming years to see what they do.
Human beings have always lived in groups, and their individual lives have invariably depended on group decisions. But, given the daunting challenges of group choice, owing to the divergent interests and concerns of the group’s members, how should collective decision-making be carried out?
NO ENDORSEMENT carries more weight than an investment by Warren Buffett. He became the world’s second-richest man by buying safe, reliable businesses and holding them for ever. So when his company increased its stake in Tesco to 5% in 2012, it sent a strong message that the giant British grocer would rebound from its disastrous attempt to compete in America.
But it turned out that even the Oracle of Omaha can fall victim to dodgy accounting. On September 22nd Tesco announced that its profit guidance for the first half of 2014 was £250m ($408m) too high, because it had overstated the rebate income it would receive from suppliers. Britain’s Serious Fraud Office has begun a criminal investigation into the errors. The company’s fortunes have worsened since then: on December 9th it cut its profit forecast by 30%, partly because its new boss said it would stop “artificially” improving results by reducing service near the end of a quarter. Mr Buffett, whose firm has lost $750m on Tesco, now calls the trade a “huge mistake”.
Speak or write in English, and the world will hear you. Speak or write in Tamil or Portuguese, and you may have a harder time getting your message out. Now, a new method for mapping how information flows around the globe identifies the best languages to spread your ideas far and wide. One hint: If you’re considering a second language, try Spanish instead of Chinese.
The study was spurred by a conversation about an untranslated book, says Shahar Ronen, a Microsoft program manager whose Massachusetts Institute of Technology (MIT) master’s thesis formed the basis of the new work. A bilingual Hebrew-English speaker from Israel, he told his MIT adviser, César Hidalgo (himself a Spanish-English speaker), about a book written in Hebrew whose translation into English he wasn’t yet aware of. “I was able to bridge a certain culture gap because I was multilingual,” Ronen says. He began thinking about how to create worldwide maps of how multilingual people transmit information and ideas.
New method of measuring cultural transmission suggests some tongues spread ideas better than others
The growth of social media sees us heading towards a radically open society. David R. Brake aims to provide an overview of the harms that unwary social media use can pose for both adults and children. He then draws on in-depth interviews and a range of related theories of human behaviour to consider why this happens. This is an interesting resource for students and scholars in the fields of digital media and interpersonal communication, concludes Stefania Vicari.
The aim of this paper is to introduce the complex-system perspective into retail market analysis. Currently, understanding the retail market means searching for local patterns at the micro level, involving the segmentation, separation and profiling of diverse groups of consumers. In other contexts, however, markets are modelled as complex systems. Such a strategy can uncover emerging regularities and patterns that make markets more predictable – for example, enabling predictions of how much a country’s GDP will grow. Rather than isolating actors in homogeneous groups, this strategy requires considering the system as a whole, as the emerging pattern can be detected only as a result of the interaction between its self-organizing parts. This assumption holds also in the retail market: each customer can be seen as an independent unit maximizing its own utility function. As a consequence, the global behaviour of the retail market naturally emerges, enabling a novel description of its properties, complementary to the local-pattern approach. Such a task demands a data-driven empirical framework. In this paper, we analyse a unique transaction database, recording the micro-purchases of a million customers observed for several years in the stores of a national supermarket chain. We show the emergence of the fundamental pattern of this complex system, connecting the products’ volumes of sales with the customers’ volumes of purchases. This pattern has a number of applications, and we provide three of them. By enabling us to evaluate the sophistication of the needs that a customer has and a product satisfies, the pattern can be used to uncover the hierarchy of customers’ needs, to hint at the next product a customer could be interested in buying, and to predict in which shop she is likely to buy it.
The retail market as a complex system. Pennacchioli D, Coscia M, Rinzivillo S, Giannotti F, Pedreschi D. EPJ Data Science 2014, 3:33 (11 December 2014)
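The volume–volume pattern the abstract describes can be sketched on a toy purchase matrix. The data and the “sophistication” proxy below are invented stand-ins for the paper’s million-customer transaction database, not its actual method: in a nested matrix, customers with broader baskets reach rarer products.

```python
import numpy as np

# Toy purchase matrix: rows = customers, cols = products, entry = bought?
# Invented data, arranged to be perfectly nested for illustration.
M = np.array([
    [1, 1, 1, 1],   # broad-basket customer
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],   # narrow-basket customer
])

product_popularity = M.sum(axis=0)   # how many customers buy each product
customer_breadth = M.sum(axis=1)     # how many products each customer buys

# A crude "sophistication" proxy: the rarest product each customer buys.
rarest_bought = [int(product_popularity[row == 1].min()) for row in M]
print(customer_breadth.tolist(), rarest_bought)
```

On this toy matrix, the broadest customer (4 products) reaches the rarest product (bought by only 1 customer), mirroring the kind of volume–volume relation the paper reports.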
Human crowds often bear a striking resemblance to interacting particle systems, and this has prompted many researchers to describe pedestrian dynamics in terms of interaction forces and potential energies. The correct quantitative form of this interaction, however, has remained an open question. Here, we introduce a novel statistical-mechanical approach to directly measure the interaction energy between pedestrians. This analysis, when applied to a large collection of human motion data, reveals a simple power law interaction that is based not on the physical separation between pedestrians but on their projected time to a potential future collision, and is therefore fundamentally anticipatory in nature. Remarkably, this simple law is able to describe human interactions across a wide variety of situations, speeds and densities. We further show, through simulations, that the interaction law we identify is sufficient to reproduce many known crowd phenomena.
Universal Power Law Governing Pedestrian Interactions Phys. Rev. Lett. 113, 238701 – Published 2 December 2014 Ioannis Karamouzas, Brian Skinner, and Stephen J. Guy
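The anticipatory interaction can be sketched as follows: compute the projected time to collision between two pedestrians under constant velocity, then evaluate a power-law energy in that time. The constants `k` and `tau0`, and the pedestrian radius, are illustrative placeholders, not the fitted values from the paper.

```python
import numpy as np

def time_to_collision(p1, v1, p2, v2, radius=0.3):
    """Projected time until two discs of the given radius first touch,
    assuming both pedestrians keep their current velocities.
    Returns inf if no collision is projected."""
    r = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(v2, float) - np.asarray(v1, float)
    a = v @ v
    b = 2 * (r @ v)
    c = r @ r - (2 * radius) ** 2
    disc = b * b - 4 * a * c
    if a == 0 or disc < 0:
        return float("inf")
    t = (-b - np.sqrt(disc)) / (2 * a)  # earlier root of |r + v t| = 2*radius
    return t if t > 0 else float("inf")

def interaction_energy(tau, k=1.0, tau0=3.0):
    """Power-law interaction of the kind reported: E ~ k / tau^2,
    with an exponential cutoff at horizon tau0 (constants illustrative)."""
    return k / tau**2 * np.exp(-tau / tau0)

# Two pedestrians walking toward each other along the x-axis
tau = time_to_collision([0, 0], [1.0, 0], [4, 0], [-1.0, 0])
print(round(tau, 2), round(float(interaction_energy(tau)), 3))
```

Pedestrians on a collision course a short time away thus feel a strong repulsion regardless of their physical separation, which is what makes the law anticipatory.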
How do we ensure that the ‘next big thing’ – the Internet of Things – is harnessed for the public good? Sonia Bussu of Involve argues that involving the public is key to ensuring that a common language is developed and that societal values are put at the centre of technological developments.
The advances in understanding complex networks have generated increasing interest in dynamical processes occurring on them. Pattern formation in activator-inhibitor systems has been studied in networks, revealing differences from the classical continuous media. Here we study pattern formation in a new framework, namely multiplex networks: systems where activator and inhibitor species occupy separate nodes in different layers. Species react across layers but diffuse only within their own layer of distinct network topology. This multiplicity generates heterogeneous patterns with significant differences from those observed in single-layer networks. Remarkably, diffusion-induced instability can occur even if the two species have the same mobility rates – a condition that can never destabilize single-layer networks. The instability condition is derived using perturbation theory and is expressed as a combination of degrees in the different layers. Our theory demonstrates that the existence of such topology-driven instabilities is generic in multiplex networks, providing a new mechanism of pattern formation.
by Nikos E. Kouvaris, Shigefumi Hata, Albert Díaz-Guilera
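A minimal toy simulation of the setup described – the activator diffusing on one layer, the inhibitor on another, with node-wise reactions coupling them – can be sketched as below. The kinetics and random layer topologies are my own illustrative choices, not the paper’s model, and whether a pattern actually grows depends on the parameters and layer structure.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20  # nodes per layer

def random_laplacian(p):
    """Graph Laplacian of an Erdos-Renyi layer (toy topology)."""
    A = (rng.random((N, N)) < p).astype(float)
    A = np.triu(A, 1)
    A = A + A.T
    return np.diag(A.sum(axis=1)) - A

Lap1, Lap2 = random_laplacian(0.2), random_laplacian(0.6)  # distinct layers

# Generic activator-inhibitor kinetics (illustrative choice, stable at (0,0)):
f = lambda u, v: u - u**3 - v        # activator
g = lambda u, v: 3 * u - 2 * v       # inhibitor

u = 0.01 * rng.standard_normal(N)    # small perturbation of the
v = 0.01 * rng.standard_normal(N)    # homogeneous state (0, 0)
Du = Dv = 0.5                        # equal mobility rates, as in the abstract

dt = 0.01
for _ in range(2000):                # explicit Euler integration
    u = u + dt * (f(u, v) - Du * (Lap1 @ u))  # activator diffuses on layer 1
    v = v + dt * (g(u, v) - Dv * (Lap2 @ v))  # inhibitor diffuses on layer 2

print(round(float(u.std()), 3))      # spread of the activator across nodes
```

A large final spread of `u` across nodes would indicate a heterogeneous (Turing-like) pattern; the paper’s actual instability condition combines node degrees across the two layers.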
We use these apps and websites because of their benefits. We discover new music, restaurants and movies; we meet new friends and reconnect with old ones; we trade goods and services. The paradox of this situation is that while we gain from digital connectivity, the accompanying invasion into our private lives makes our personal data ripe for abuse — revealing things we thought we had not even disclosed.