One of the major resource requirements of computers—ranging from biological cells to human brains to high-performance digital computers—is the energy used to run them. The energy required to perform a computation has been a long-standing focus of research in statistical physics, going back (at least) to the early work of Landauer and colleagues.
However, one of the most prominent aspects of computers is that they are inherently non-equilibrium systems. They are also often quite small, far from the thermodynamic limit. Unfortunately, the research by Landauer and co-workers was grounded in the statistical physics of the 20th century, which could not properly address the thermodynamics of non-equilibrium, nanoscale systems.
Fortunately, recent revolutionary breakthroughs in stochastic thermodynamics have overcome the limitations of 20th century statistical physics. We can now analyze arbitrarily off-equilibrium systems, of arbitrary size. Here I show how to apply these recent breakthroughs to analyze the thermodynamics of computation. Specifically, I present formulas for the thermodynamic costs of implementing (loop-free) digital circuits, of implementing Turing machines, and of implementing multipartite processes like the interacting organelles in a cell.
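For context, the classic result from the Landauer line of work mentioned above is the lower bound on the heat that must be dissipated to erase one bit of information in an environment at temperature T (a textbook statement, not a formula specific to this talk):

\[
Q \ge k_B T \ln 2 ,
\]

where k_B is Boltzmann's constant; the stochastic-thermodynamics results discussed here extend this kind of bound to non-equilibrium, nanoscale computations.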
Those of us who think that the laws of physics underlying everyday life are completely known tend to also think that consciousness is an emergent phenomenon that must be compatible with those laws. To hold such a position in a principled way, it’s important to have a clear understanding of “emergence” and when it happens. Anil Seth is a leading researcher in the neuroscience of consciousness, who has also done foundational work (often in collaboration with Lionel Barnett) on what emergence means. We talk about information theory, entropy, and what they have to do with how things emerge.
What is the economy? People used to tell stories about the exchange of goods and services in terms of flows and processes — but over the last few hundred years, economic theory veered toward measuring discrete amounts of objects. Why? The change has less to do with the objective nature of economies and more to do with what tools theorists had available. And scientific instruments — be they material technologies or concepts — don’t just make new things visible, but also hide things in new blind spots. For instance, algebra does very well with ratios and quantities…but fails to properly address what markets do: how innovation works, where value comes from, and how economic actors navigate (and change) a fundamentally uncertain, shifting landscape. With the advent of computers, new opportunities emerge to study that which cannot be contained in an equation. Using algorithms, scientists can formalize complex behaviors, and thinking about economics in both nouns and verbs provides a more complete and useful stereoscopic view of what we are and do.
This week we speak with W. Brian Arthur of The Santa Fe Institute, Stanford University, and Xerox PARC about his recent essay, “Economics in Nouns and Verbs.” In this first part of a two-part conversation, we explore how a mathematics of static objects fails to describe economies in motion — and how a process-based approach can fill gaps in our understanding.
We find open-ended evolution (OEE) only in the development of human technology and in the evolution of life itself. The research on OEE at ALIFE aims to discover a mechanism that generates OEE automatically in a computer or machine. A potential mechanism and the conditions required have been discussed in three previous workshops. In this study, we propose and discuss man–machine interaction experiments as a new OEE mechanism. The pertinent definition of OEE here is whether we can continue to create new movements that are distinguishable to us. We consider the development of body movement patterns generated when Alter3 androids imitate each other and when Alter3 androids and humans imitate each other. We use UMAP contraction and transfer entropy to measure these changes and demonstrate that man–machine communication is far more dynamic and complex than machine–machine interaction. We discuss how human subjects can engender OEE via communication with the android.
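For reference, the transfer entropy mentioned above is, in Schreiber's standard formulation (shown here with history length one; the study's exact estimator may differ):

\[
T_{X \to Y} \;=\; \sum_{y_{t+1},\, y_t,\, x_t} p(y_{t+1}, y_t, x_t)\, \log \frac{p(y_{t+1} \mid y_t, x_t)}{p(y_{t+1} \mid y_t)} ,
\]

which measures how much the past of one movement stream (for instance the human's) improves prediction of the other (the android's) beyond what the latter's own past already provides.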
Economies in the modern world are incredibly complex systems. But when we sit down to think about them in quantitative ways, it’s natural to keep things simple at first. We look for reliable relations between small numbers of variables, seek equilibrium configurations, and so forth. But those approaches don’t always work in complex systems, and sometimes we have to use methods that are specifically adapted to the challenges of complexity. That’s the perspective of W. Brian Arthur, a pioneer in the field of complexity economics, according to which economies are typically not in equilibrium, not made of homogeneous agents, and are being constantly updated. We talk about the basic ideas of complexity economics, how it differs from more standard approaches, and what it teaches us about the operation of real economies.
The popular conception of ants is that “anatomy is destiny”: an ant’s body type determines its role in the colony, once and for all. But this is not the case; rather than forming rigid castes, ants act like a distributed computer in which tasks are re-allocated as the situation changes. “Division of labor” implies a constant “assembly line” environment, not fluid adaptation to evolving conditions. But ants do not just “graduate” from one task to another as they age; they pivot to accept the work required by their colony in any given moment. In this “agile” and dynamic process, ants act more like verbs than nouns — light on specialization and identity, heavy on collaboration and responsiveness.
What can we learn from ants about the strategies for thriving in times of uncertainty and turbulence? What are the algorithms that ants use to navigate environmental change, and how might they inform the ways that we design technologies? How might they teach us to invest more wisely, to explore more thoughtfully?
Dashun is an Associate Professor and the Founding Director of the Center for Science of Science and Innovation at Northwestern University.
He works on the Science of Science, turning the scientific method upon ourselves, using amazing new datasets and tools from complexity sciences and artificial intelligence.
His research has been published repeatedly in journals like Nature and Science, and has been featured in virtually all major global media outlets. Dashun is a recipient of multiple awards for his research and teaching, including Young Investigator awards, Poets & Quants' Best 40 Under 40 Professors, the Junior Scientific Award from the Complex Systems Society, the Thinkers50 Radar List, and more.
In this wide-ranging conversation, we talk about his life, career and his new book The Science of Science.
You observe a phenomenon, and come up with an explanation for it. That’s true for scientists, but also for literally every person. (Why won’t my car start? I bet it’s out of gas.) But there are literally an infinite number of possible explanations for every phenomenon we observe. How do we invent ones we think are promising, and then decide between them once invented? Simon DeDeo (in collaboration with Zachary Wojtowicz) has proposed a way to connect explanatory values (“simplicity,” “fitting the data,” etc) to specific mathematical expressions in Bayesian reasoning. We talk about what makes explanations good, and how they can get out of control, leading to conspiracy theories or general crackpottery, from QAnon to flat earthers.
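One schematic way to see how such explanatory values can attach to pieces of Bayesian reasoning is the standard log-posterior decomposition (a textbook identity, not necessarily the exact formalism of DeDeo and Wojtowicz):

\[
\log P(h \mid d) \;=\; \underbrace{\log P(d \mid h)}_{\text{fit to the data}} \;+\; \underbrace{\log P(h)}_{\text{prior plausibility, e.g. simplicity}} \;-\; \log P(d).
\]

On this reading, an explanation that accounts for every anomaly can buy likelihood at the cost of an implausibly small prior, which is one way to think about how explanations spiral into conspiracy theories.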
Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI Spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI Winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars and housekeeping robots has turned out to be much harder than we thought.
One reason for these repeating cycles is a lack of understanding of the nature and complexity of intelligence itself. In this talk I will discuss some fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I will also speculate on what is needed for the grand challenge of making AI systems more robust, general, and adaptable—in short, more intelligent.
Speaker Bio: Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute, and Professor of Computer Science (currently on leave) at Portland State University. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her latest book is Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux).
I’ve got a treat for you today. Today’s authors are Gourab Ghoshal and Petter Holme, who are here to talk about a classic paper, one they co-authored and published in PRL in 2006. The paper has a fantastic title, which is basically also a mini abstract. It is called “Dynamics of Networking Agents Competing for High Centrality and Low Degree” (1). In the podcast we get into it!
Gourab is at the University of Rochester, where he is an Associate Professor of Physics and Astronomy with joint appointments in the departments of Computer Science and Mathematics. He works in the field of Complex Systems. His research interests are in the theory and applications of Complex Networks as well as Non-equilibrium Statistical Physics, Game Theory, Econophysics, Dynamical Systems, and the Origins of Life.
Petter is a Swedish scientist living and working in Japan, where he is a Specially Appointed Professor at the Institute of Innovative Research at the Tokyo Institute of Technology. His research focuses on large-scale structures in society, technology, and biology, mostly trying to understand them as networks.
Our Episode 4 guest, Leidy Klotz, is a Professor at the University of Virginia. He studies the science of design: how we transform things from how they are to how we want them to be. Leidy wants to apply his work outside of academia to address climate change and systemic inequality; to that end, he also works directly with organizations including the World Bank.
Virtual Complexity Colloquium at C3-UNAM, Universities for Science Consortium
"Computational Epidemiology at the time of COVID-19" Alessandro Vespignani Network Science Institute at Northeastern University
Abstract: The data science revolution is finally enabling the development of large-scale data-driven models that provide real- or near-real-time forecasts and risk analysis for infectious disease threats. These models also provide rationales and quantitative analysis to support policy-making decisions and intervention plans. At the same time, the non-incremental advance of the field presents a broad range of challenges: algorithmic (multiscale constitutive equations, scalability, parallelization) and the real-time integration of novel digital data streams (social networks, participatory platforms, human mobility, etc.). I will review and discuss recent results and challenges in the area, and focus on ongoing work aimed at responding to the COVID-19 pandemic.
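As a baseline illustration of the kind of compartmental dynamics that sit underneath such large-scale data-driven epidemic models, here is a minimal SIR sketch in Python (the parameter values are arbitrary, and this is not the multiscale machinery discussed in the talk):

```python
def sir_step(S, I, R, beta, gamma, dt=0.1):
    """One Euler step of the classic SIR model (S, I, R are population fractions)."""
    new_infections = beta * S * I * dt
    new_recoveries = gamma * I * dt
    return S - new_infections, I + new_infections - new_recoveries, R + new_recoveries

# Toy run with basic reproduction number R0 = beta / gamma = 2.5.
S, I, R = 0.999, 0.001, 0.0
beta, gamma = 0.5, 0.2
for _ in range(2000):
    S, I, R = sir_step(S, I, R, beta, gamma)
print(f"final attack rate: {1 - S:.2f}")
```

Real forecasting systems layer mobility data, age structure, and stochasticity on top of this skeleton, which is where the data-integration challenges in the abstract come in.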
Short Bio: Alessandro Vespignani is the Director of the Network Science Institute and Sternberg Family Distinguished University Professor at Northeastern University. He is a professor with interdisciplinary appointments in the College of Computer and Information Science, College of Science, and the Bouvé College of Health Sciences. Dr. Vespignani's work focuses on statistical and numerical simulation methods to model spreading phenomena, including the realistic and data-driven computational modeling of biological, social, and technological systems. For several years his work has focused on the spreading of infectious diseases, working closely with the CDC and the WHO.
The increasing availability of large-scale datasets that trace the entirety of the scientific enterprise has created an unprecedented opportunity to explore scientific production and reward. Parallel developments in data science, network science, and artificial intelligence offer us powerful tools and techniques to make sense of these millions of data points. Together, they tell a complex yet insightful story about how scientific careers unfold, how collaborations contribute to discovery, and how scientific progress emerges through a combination of multiple interconnected factors. These opportunities—and the challenges that come with them—have fueled the emergence of a multidisciplinary community of scientists united by the goal of understanding science. These practitioners of the science of science use the scientific method to study themselves, examine projects that work as well as those that fail, quantify the patterns that characterize discovery and invention, and offer lessons to improve science as a whole. In this talk, I’ll highlight some examples of research in this area, hoping to illustrate the promise of science of science as well as its limitations.
Today we're talking to IU Professor Johan Bollen about the impact social media is having on us, and the complex relationship we have with the tech companies that run them.
From the spreading of diseases and memes to the development of opinions and social influence, dynamical processes are influenced heavily by the networks on which they occur. In this talk, I'll discuss social influence and opinion models on networks. I'll present a few types of models, including threshold models of social contagions, voter models that coevolve with network structure, and bounded-confidence models with continuous opinions, and illustrate how such processes are affected by the networks on which they occur. I'll also connect these models to opinion polarization and the development of echo chambers in online social networks.
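As an illustration of one of the model families named above, here is a minimal sketch of a Deffuant-style bounded-confidence model on a network (the parameter names epsilon and mu are illustrative choices, not taken from the talk):

```python
import random
import networkx as nx

def bounded_confidence_step(G, opinions, epsilon=0.2, mu=0.5):
    """Pick a random edge; if the endpoints' opinions differ by less than the
    confidence bound epsilon, both opinions move toward each other by a factor mu."""
    u, v = random.choice(list(G.edges()))
    if abs(opinions[u] - opinions[v]) < epsilon:
        shift = mu * (opinions[v] - opinions[u])
        opinions[u] += shift
        opinions[v] -= shift

# Toy usage: random initial opinions in [0, 1] on a small-world network.
G = nx.watts_strogatz_graph(n=100, k=6, p=0.1)
opinions = {node: random.random() for node in G.nodes()}
for _ in range(10_000):
    bounded_confidence_step(G, opinions)
```

With a small confidence bound the population typically fragments into several persistent opinion clusters, which is one simple mechanism such models use to study polarization and echo chambers.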
In 1984 computer scientist Aaron Sloman published a paper called “The structure of the space of possible minds.” It called for systematic thinking about the vague yet intuitive notion of mind, thinking capable of admitting into the conversation what we had by then learnt about animal cognition and artificial intelligence. Almost four decades later, we are in a far better position to examine Sloman’s proposal: to consider what kinds of minds can exist within the laws of physics, to compare those we already recognize (including the diversity of human minds), and to speculate about the possibilities for artificial “mind design”. In this talk I will explore this question, looking at our current understanding of the functions and capabilities of biological minds, what this might imply for efforts to create artificial “minds”, and what the implications are for ideas about consciousness, agency and free will.
Speaker Bio: Philip Ball is a freelance writer and author, and worked for many years as an editor of Nature. His many books include Critical Mass (which won the 2005 Aventis Science Books prize), Beyond Weird and How to Grow a Human. His next book, The Book of Minds, will be published in early 2022.
The Complexity in the Social World series of interviews (and YouTube Playlist) follows on from the seminar we organised in March 2021. The aim of this series is to capture some of the foundational thinkers in conversation about how to apply complexity thinking to the social world: the world of managers, economists, change agents, and societies. In this way, these foundational thinkers, many of whom began their work in the 1980s, are represented, and their differing perspectives and differing foci of application are gathered in one place.
It’s not easy, figuring out the fundamental laws of physics. It’s even harder when your chosen methodology is to essentially start from scratch, positing a simple underlying system and a simple set of rules for it, and hope that everything we know about the world somehow pops out. That’s the project being undertaken by Stephen Wolfram and his collaborators, who are working with a kind of discrete system called “hypergraphs.” We talk about what the basic ideas are, why one would choose this particular angle of attack on fundamental physics, and how ideas like quantum mechanics and general relativity might emerge from this simple framework.
Our planet is experiencing an accelerated process of change associated with a variety of anthropogenic phenomena. The future of this transformation is uncertain, but there is general agreement about its negative unfolding, which might threaten our own survival. Furthermore, the pace of the expected changes is likely to be abrupt: catastrophic shifts might be the most likely outcome of this ongoing, apparently slow process. Although different strategies for geo-engineering the planet have been advanced, none seem likely to safely reverse the large-scale problems associated with carbon dioxide accumulation or ecosystem degradation. An alternative possibility considered here is inspired by the rapidly growing potential for engineering living systems. It would involve designing synthetic organisms capable of reproducing and expanding to large geographic scales with the goal of achieving a long-term or a transient restoration of ecosystem-level homeostasis. Such regional or even planetary-scale engineering would have to deal with the complexity of our biosphere. It will require not only the proper design of organisms but also an understanding of their place within ecological networks and of their evolvability. This is a likely future scenario that will require the integration of ideas from currently weakly connected domains, including synthetic biology, ecological and genome engineering, evolutionary theory, climate science, biogeography and invasion ecology, among others.
Today on the pod are Marta Sales-Pardo and Roger Guimera. What a great talk. We could have gone on for hours. Peer review, power laws, becoming scientists, Bayesian statistics, and much, much more. Marta and Roger study fundamental problems in all areas of science, including the natural, social, and economic sciences. They have expertise in a broad set of tools from statistical physics, network science, statistics, and computer science. Both spent many years at Northwestern before starting a group at URV in Catalonia. They are authors of many classic papers in Network Science and lots of important work, e.g. on community detection. We talk about their paper “A Bayesian machine scientist to aid in the solution of challenging scientific problems”.
Many people might not bother to define complexity, thinking that we know it when we see it. Scientists cannot afford that luxury. I will provide a compact but comprehensive overview of the different ways that systems can be complex, offering an aggregate definition. I will discuss the role of complexity measures, and why complexity cannot be captured by a single number. This work was done in collaboration with James Ladyman and published by Yale University Press in 2020. At the other end of the spectrum of complexity science is its application to real-world problems. I will present two examples from recent work. The project 'Aiding the mitigation of and adaptation to climate change using the tools of complexity science' was done in collaboration with the Green Climate Fund, founded by the UN members in 2014. Political systems, too, are increasingly the focus of computational and mathematical investigation. I will present conceptual work on the stability of democracy, a collaboration with an international and interdisciplinary group of scientists.
In this episode, we speak to SFI Resident Professor Sidney Redner, author of A Guide to First-Passage Processes, about how he finds inspiration for his complex systems research in the everyday — and how he uses math and physics to explore hot hands, heat waves, parking lots, and more…
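For a flavor of what a first-passage process is, here is a minimal, purely illustrative Python sketch (not drawn from the episode) that estimates first-passage times of a symmetric 1D random walk:

```python
import random

def first_passage_time(start=10, target=0, max_steps=1_000_000):
    """Number of steps a symmetric random walk needs to first reach `target`
    from `start`; returns None if it has not arrived within max_steps.
    (For this walk arrival is eventually certain, but the mean first-passage
    time diverges, which is part of what makes these processes interesting.)"""
    position = start
    for step in range(1, max_steps + 1):
        position += random.choice((-1, 1))
        if position == target:
            return step
    return None

times = [first_passage_time() for _ in range(100)]
hits = [t for t in times if t is not None]
print(f"{len(hits)} of 100 walks reached the target within the step budget")
```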
Renaud is an associate professor at the Mathematical Institute of Oxford University, investigating processes taking place on large networks.
In the episode, we talk about his story in science, the joy and value of exploring without a particular purpose, doing a PhD without publishing any papers, … and how reading classical texts by Boltzmann and others early on has shaped the work Renaud does even to this day.
When we get to the paper, we talk about Renaud’s recent work “Variance and covariance of distributions on graphs” (1) with co-authors Karel Devriendt and Samuel Martin-Gutierrez.
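To give a concrete sense of the kind of quantity involved, here is one illustrative way to compute a "variance" of a probability distribution over the nodes of a graph, using shortest-path distances; this is a generic sketch for intuition, not necessarily the definition used in the paper by Devriendt, Martin-Gutierrez, and Lambiotte:

```python
import networkx as nx

def graph_variance(G, p):
    """Illustrative spread of a node distribution p (dict: node -> probability),
    taken here as half the p-weighted average of squared pairwise shortest-path
    distances (the same identity that recovers the ordinary variance on the line)."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    return 0.5 * sum(
        p[u] * p[v] * dist[u][v] ** 2
        for u in G.nodes()
        for v in G.nodes()
    )

# Toy usage: mass split between the two endpoints of a 10-node path graph.
G = nx.path_graph(10)
p = {node: 0.0 for node in G.nodes()}
p[0], p[9] = 0.5, 0.5
print(graph_variance(G, p))  # 20.25, matching the variance of the same split on a line
```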
This episode’s guest is Dirk Brockmann. Dirk is a physicist and complex systems researcher. He’s a professor at the Department of Biology, Humboldt University of Berlin, and at the Robert Koch Institute, Berlin. Before returning to his native Germany, he was a professor at Northwestern University.