|
Scooped by
Complexity Digest
June 29, 11:10 AM
|
W. Barfuss, J. Flack, C.S. Gokhale, L. Hammond, C. Hilbe, E. Hughes, J.Z. Leibo, T. Lenaerts, N. Leonard, S. Levin, U. Madhushani Sehwag, A. McAvoy, J.M. Meylahn, & F.P. Santos PNAS 122 (25) e2319948121 Cooperation at scale is critical for achieving a sustainable future for humanity. However, achieving collective, cooperative behavior—in which intelligent actors in complex environments jointly improve their well-being—remains poorly understood. Complex systems science (CSS) provides a rich understanding of collective phenomena, the evolution of cooperation, and the institutions that can sustain both. Yet, much of the theory in this area fails to fully consider individual-level complexity and environmental context—largely for the sake of tractability and because it has not been clear how to do so rigorously. These elements are well captured in multiagent reinforcement learning (MARL), which has recently turned its focus to cooperative (artificial) intelligence. However, typical MARL simulations can be computationally expensive and challenging to interpret. In this Perspective, we propose that bridging CSS and MARL affords new directions forward. The two fields can complement each other in their goals, methods, and scope: MARL offers CSS concrete ways to formalize cognitive processes in dynamic environments, while CSS offers MARL improved qualitative insight into emergent collective phenomena. We see this approach as providing the necessary foundations for a proper science of collective, cooperative intelligence. We highlight work that is already heading in this direction and discuss concrete steps for future research. Read the full article at: www.pnas.org
|
Scooped by
Complexity Digest
June 28, 11:12 AM
|
Teresa Lázaro, Roger Guimerà & Marta Sales-Pardo Nature Communications volume 16, Article number: 3949 (2025) The network alignment problem appears in many areas of science and involves finding the optimal mapping between nodes in two or more networks, so as to identify corresponding entities across networks. We propose a probabilistic approach to the problem of network alignment, as well as the corresponding inference algorithms. Unlike heuristic approaches, our approach is transparent in that all model assumptions are explicit; therefore, it is amenable to being extended and fine-tuned by incorporating contextual information that is relevant to a given alignment problem. Also in contrast to current approaches, our method does not yield a single alignment, but rather the whole posterior distribution over alignments. We show that using the whole posterior leads to correct matching of nodes, even in situations where the single most plausible alignment mismatches them. Our approach opens the door to a whole new family of network alignment algorithms, and to their application to problems for which existing methods are perhaps inappropriate. Read the full article at: www.nature.com
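To make the idea of a posterior over alignments concrete, here is a toy sketch for two three-node graphs. The edge-flip likelihood and all numbers are illustrative assumptions, not the paper's model, and real inference would not enumerate permutations by brute force.

```python
import itertools
import numpy as np

# Toy posterior over alignments of two small graphs (assumed likelihood:
# each edge is independently flipped with probability eps under the
# candidate node mapping; this is not the paper's model).
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])  # graph 1: star centered at node 0
B = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]])  # graph 2: star centered at node 2
eps = 0.1

posterior = {}
for perm in itertools.permutations(range(3)):
    P = np.eye(3, dtype=int)[list(perm)]             # permutation matrix
    mismatches = np.abs(A - P @ B @ P.T).sum() / 2   # disagreeing node pairs
    matches = 3 - mismatches                         # 3 pairs in a 3-node graph
    posterior[perm] = eps**mismatches * (1 - eps)**matches

Z = sum(posterior.values())
for perm, w in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(perm, round(w / Z, 3))
```

Note that two mappings tie for the highest posterior weight here (the star's two leaves are interchangeable), which is exactly the situation where committing to a single "best" alignment discards information.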
|
Scooped by
Complexity Digest
June 27, 11:13 AM
|
Tyler H. Coale, et al. Science, 11 Apr 2024, Vol 384, Issue 6692, pp. 217-222 Symbiotic interactions were key to the evolution of chloroplast and mitochondria organelles, which mediate carbon and energy metabolism in eukaryotes. Biological nitrogen fixation, the reduction of abundant atmospheric nitrogen gas (N2) to biologically available ammonia, is a key metabolic process performed exclusively by prokaryotes. Candidatus Atelocyanobacterium thalassa, or UCYN-A, is a metabolically streamlined N2-fixing cyanobacterium previously reported to be an endosymbiont of a marine unicellular alga. Here we show that UCYN-A has been tightly integrated into algal cell architecture and organellar division and that it imports proteins encoded by the algal genome. These are characteristics of organelles and show that UCYN-A has evolved beyond endosymbiosis and functions as an early evolutionary stage N2-fixing organelle, or “nitroplast.” Read the full article at: www.science.org
|
Scooped by
Complexity Digest
June 26, 6:07 PM
|
Mingzhen Lu, Sili Wang, Avni Malhotra, Shersingh Joseph Tumber-Dávila, Samantha Weintraub-Leff, M. Luke McCormack, Xingchen Tony Wang & Robert B. Jackson Nature Communications volume 16, Article number: 5281 (2025)
An improved understanding of root vertical distribution is crucial for assessing plant-soil-atmosphere interactions and their influence on the land carbon sink. Here, we analyze a continental-scale dataset of fine roots reaching 2 meters depth, spanning from Alaskan tundra to Puerto Rican forests. Contrary to the expectation that fine root abundance decays exponentially with depth, we found root bimodality at ~20% of 44 sites, with secondary biomass peaks often below 1 m. Root bimodality was more likely in areas with low total fine root biomass and was more frequent in shrublands than grasslands. Notably, secondary peaks coincided with high soil nitrogen content at depth. Our analyses suggest that deep soil nutrients tend to be underexploited, while root bimodality offers plants a mechanism to tap into deep soil resources. Our findings add to the growing recognition that deep soil dynamics are systematically overlooked, and call for more research attention to this deep frontier in the face of global environmental change. Read the full article at: www.nature.com
|
Scooped by
Complexity Digest
June 8, 12:48 PM
|
Sai Krishna Katla, Chenyu Lin, and Juan Pérez-Mercader PNAS 122 (22) e2412514122 Self-reproduction is one of the most fundamental features of natural life. This study introduces a biochemistry-free method for creating self-reproducing polymeric vesicles. In this process, nonamphiphilic molecules are mixed and illuminated with green light, initiating polymerization into amphiphiles that self-assemble into vesicles. These vesicles evolve through feedback between polymerization, degradation, and chemiosmotic gradients, resulting in self-reproduction. As vesicles grow, they polymerize their contents, leading to their partial release and their reproduction into new vesicles, exhibiting a loose form of heritable variation. This process mimics key aspects of living systems, offering a path for developing a broad class of abiotic, life-like systems. Read the full article at: www.pnas.org
|
Scooped by
Complexity Digest
June 6, 4:45 PM
|
Miguel A. González-Casado, Andreia Sofia Teixeira & Angel Sánchez Communications Physics volume 8, Article number: 227 (2025) How do networks of social relationships evolve over time? This study addresses the lack of longitudinal analyses of social networks grounded in mathematical modelling. We analyse a dataset tracking the social interactions of 900 individuals over four years. Despite shifts in individual relationships, the macroscopic structure of the network remains stable, fluctuating within predictable bounds. We link this stability to the concept of equilibrium in statistical physics. Specifically, we show that the probabilities governing link dynamics are stationary over time, and that key network features align with equilibrium predictions. Moreover, the dynamics also satisfy the detailed balance condition. This equilibrium persists despite ongoing turnover, as individuals join, leave, and shift connections. This suggests that equilibrium arises not from specific individuals but from the balancing act of human needs, cognitive limits, and social pressures. Practically, this equilibrium simplifies data collection, supports methods relying on single network snapshots (like Exponential Random Graph Models), and aids in designing interventions for social challenges. Theoretically, it offers insights into collective human behaviour, revealing how emergent properties of complex social systems can be captured by simple mathematical models. Read the full article at: www.nature.com
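For reference, the detailed balance condition invoked here takes the following schematic form (notation assumed for illustration; the paper states it for link-level transition probabilities):

```latex
% Detailed balance for network dynamics: P(G) is the stationary
% probability of network configuration G, and W(G -> G') the transition
% probability between configurations differing by one link change.
P(G)\, W(G \to G') \;=\; P(G')\, W(G' \to G)
```

When this holds for every pair of configurations, the stationary distribution carries no probability currents, which is the statistical-physics sense of equilibrium the authors test.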
|
Scooped by
Complexity Digest
May 30, 12:09 PM
|
Ariel Flint Ashery, Luca Maria Aiello, and Andrea Baronchelli Science Advances, 14 May 2025, Vol 11, Issue 20 Social conventions are the backbone of social coordination, shaping how individuals form a group. As growing populations of artificial intelligence (AI) agents communicate through natural language, a fundamental question is whether they can bootstrap the foundations of a society. Here, we present experimental results that demonstrate the spontaneous emergence of universally adopted social conventions in decentralized populations of large language model (LLM) agents. We then show how strong collective biases can emerge during this process, even when agents exhibit no bias individually. Last, we examine how committed minority groups of adversarial LLM agents can drive social change by imposing alternative social conventions on the larger population. Our results show that AI systems can autonomously develop social conventions without explicit programming and have implications for designing AI systems that align, and remain aligned, with human values and societal goals. Read the full article at: www.science.org
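This line of work builds on the naming game model of convention emergence. Below is a minimal sketch with memoryless random agents standing in for the LLM agents; the population size, step count, and name encoding are arbitrary choices.

```python
import random

# Minimal naming game: agents pairwise negotiate a shared name and,
# without central coordination, converge on a single convention.
N, STEPS = 50, 20000
vocab = [set() for _ in range(N)]  # each agent's current name inventory

for _ in range(STEPS):
    speaker, hearer = random.sample(range(N), 2)
    if not vocab[speaker]:
        vocab[speaker].add(random.getrandbits(32))  # invent a new name
    name = random.choice(tuple(vocab[speaker]))
    if name in vocab[hearer]:
        vocab[speaker] = {name}  # success: both collapse to the shared name
        vocab[hearer] = {name}
    else:
        vocab[hearer].add(name)  # failure: hearer learns the name

print("distinct names left:", len({n for v in vocab for n in v}))  # tends to 1
```

The paper's experiments replace these minimal agents with LLMs communicating in natural language, and additionally study the collective biases and committed minorities described in the abstract.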
|
Suggested by
lidiamariaomorais@gmail.com
May 29, 3:54 PM
|
Lidia Maria de Oliveira Morais, et al. Front. Public Health, 18 May 2025 Volume 13 - 2025 Introduction: Cities in the Global South face escalating climate change challenges, including extreme weather events that disproportionately affect marginalized populations and exacerbate health risks, such as non-communicable diseases (NCDs). Climate resilience, defined as the capacity to adapt and recover from climate-related events, requires intersectoral collaboration between governments and civil society. Methods: This study employs a Community-based System Dynamics approach, leveraging shared learning across four cities—Belo Horizonte (BH, Brazil), Yaoundé (Cameroon), Kingston (Jamaica), and Kisumu (Kenya)—through the Global Diet and Activity Research Network (GDAR). An implementation of the method in BH is detailed, examining drivers and interdependencies shaping community-based climate resilience strategies against heavy rainfalls. Results: In BH, findings highlight the interplay between urbanization risks, vulnerabilities, heavy rainfall, and NCDs, with visibility, resources, education, and training identified as critical intervention points. Conclusion: This study underscores the importance of aligning community action with public policy and highlights opportunities for collective learning and resilience-building for climate change in Global South cities. Read the full article at: www.frontiersin.org
|
Scooped by
Complexity Digest
May 27, 12:55 PM
|
Andy Clark Nature Communications volume 16, Article number: 4627 (2025) As human-AI collaborations become the norm, we should remind ourselves that it is our basic nature to build hybrid thinking systems – ones that fluidly incorporate non-biological resources. Recognizing this invites us to change the way we think about both the threats and promises of the coming age. Read the full article at: www.nature.com
|
Scooped by
Complexity Digest
May 25, 12:13 PM
|
Neuroscientist Anil Seth lays out three reasons why people tend to overestimate the odds of AI becoming conscious. No one knows what it would take to build a conscious machine — but as Seth notes, we can’t rule it out. Given the unknowns, he warns against trying to deliberately create artificial consciousness. Read the full article at: bigthink.com
|
Scooped by
Complexity Digest
May 24, 11:10 AM
|
Giulia Menichetti, Ph.D., Albert-László Barabási, Ph.D., and Joseph Loscalzo, M.D., Ph.D. N Engl J Med 2025;392:1836-1845 Food contains more than 139,000 molecules, which influence nearly half the human proteome. Systematic analysis of food–chemical interactions can potentially advance nutrition science and drug discovery. Read the full article at: www.nejm.org
|
Scooped by
Complexity Digest
May 21, 1:26 PM
|
Evangelos Pournaras, Srijoni Majumdar, Thomas Wellings, Joshua C. Yang, Fatemeh B. Heravan, Regula Hänggli Fricker, Dirk Helbing Voting methods are an instrumental design element of democracies. Citizens use them to express and aggregate their preferences to reach a collective decision. However, voting outcomes can be as sensitive to voting rules as they are to people's voting choices. Despite the significance and interdisciplinary scientific progress on voting methods, several democracies keep relying on outdated voting methods that do not fit modern, pluralistic societies well, while lacking social innovation. Here, we demonstrate how one can upgrade real-world democracies, namely by using alternative preferential voting methods such as cumulative voting and the method of equal shares designed for a proportional representation of voters' preferences. By rigorously assessing a new participatory budgeting approach applied in the city of Aarau, Switzerland, we unravel the striking voting outcomes of fair voting methods: more winning projects with the same budget and broader geographic and preference representation of citizens by the elected projects, in particular for voters who used to be under-represented, while promoting novel project ideas. We provide profound causal evidence showing that citizens prefer proportional voting methods, which possess strong legitimacy without the need for very technical, specialized explanations. We also reveal strong underlying democratic values exhibited by citizens who support fair voting methods, such as altruism and compromise. These findings arrive amid global momentum to unleash a new and long-awaited participation blueprint for upgrading democracies. Read the full article at: arxiv.org
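For readers unfamiliar with it, the method of equal shares gives each voter an equal slice of the budget and funds a project only if its supporters can jointly pay for it from their remaining shares. The sketch below is a simplified version for approval ballots; the rule deployed in Aarau includes refinements (tie-breaking, a completion step for leftover budget) omitted here, and the example ballots are invented.

```python
def equal_shares(voters, cost, budget):
    """Simplified Method of Equal Shares for approval ballots (sketch).

    voters: dict voter -> set of approved projects
    cost:   dict project -> cost
    """
    share = {v: budget / len(voters) for v in voters}  # equal endowments
    funded, remaining = [], set(cost)
    while True:
        best, best_rho = None, float("inf")
        for p in remaining:
            supp = [v for v in voters if p in voters[v]]
            if sum(share[v] for v in supp) < cost[p]:
                continue  # supporters cannot jointly afford the project
            # rho = equal per-supporter price; poorer supporters pay
            # their whole remaining share instead
            left, k = cost[p], len(supp)
            for v in sorted(supp, key=lambda v: share[v]):
                if share[v] * k >= left:
                    rho = left / k
                    break
                left -= share[v]
                k -= 1
            if rho < best_rho:
                best, best_rho = p, rho
        if best is None:
            return funded  # nothing affordable remains
        for v in voters:   # charge the winning project's supporters
            if best in voters[v]:
                share[v] -= min(share[v], best_rho)
        funded.append(best)
        remaining.discard(best)

ballots = {"ann": {"park", "bikes"}, "bo": {"park"}, "cy": {"bikes", "pool"}}
print(equal_shares(ballots, {"park": 400, "bikes": 300, "pool": 500}, 900))
# -> ['bikes', 'park']: each round funds the project with the lowest
#    equal per-supporter price that its supporters can still cover
```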
|
Scooped by
Complexity Digest
May 7, 1:16 PM
|
It started as a fantasy, then a promise — inspired by biology and animated by the ideas of physicists — and grew to become a powerful research tool. Now artificial intelligence has evolved into something else: a junior colleague, a partner in creativity, an impressive if unreliable wish-granting genie. It has changed everything, from how we relate to data and truth, to how researchers devise experiments and mathematicians think about proofs. In this special series, we explore how AI is changing what it means to do science and math, and what it means to be a scientist. Read the full article at: www.quantamagazine.org
|
Scooped by
Complexity Digest
June 29, 7:31 AM
|
Qianyang Chen and Mikhail Prokopenko Royal Society Open Science Collective behaviours are frequently observed to self-organize to criticality. Existing proposals to explain these phenomena are fragmented across disciplines and only partially answer the question. This primer compares the underlying intrinsic utilities that may explain the self-organization of collective behaviours near criticality. We focus on information-driven approaches (predictive information, empowerment and active inference), as well as an approach incorporating both information theory and thermodynamics (thermodynamic efficiency). By interpreting the Ising model as a perception-action loop, we compare how different intrinsic utilities shape collective behaviour and analyse the distinct characteristics that arise when each is optimized. In particular, we highlight that thermodynamic efficiency—measuring the ratio of predictability gained by the system to its energy costs—reaches its maximum at the critical regime. Finally, we propose the Principle of Super-efficiency, suggesting that collective behaviours self-organize to the critical regime, where optimal efficiency is achieved with respect to the entropy reduction relative to the thermodynamic costs. Read the full article at: royalsocietypublishing.org
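As a concrete anchor for the perception-action reading, here is a minimal Metropolis simulation of the 2D Ising model at the Onsager critical temperature, the regime where the proposed super-efficiency principle locates collective behaviour. The lattice size and sweep count are arbitrary, and nothing below computes the paper's thermodynamic-efficiency measure itself.

```python
import numpy as np

# 2D Ising model with Metropolis dynamics at the critical coupling.
rng = np.random.default_rng(0)
L = 32
beta_c = np.log(1 + np.sqrt(2)) / 2  # Onsager's exact critical point
spins = rng.choice([-1, 1], size=(L, L))

def sweep(s, beta):
    """One Monte Carlo sweep: L*L single-spin-flip attempts."""
    for _ in range(s.size):
        i, j = rng.integers(L, size=2)
        nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
              + s[i, (j + 1) % L] + s[i, (j - 1) % L])
        dE = 2 * s[i, j] * nb  # energy change of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]

for _ in range(200):
    sweep(spins, beta_c)
print("magnetization per spin:", spins.mean())
```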
|
Scooped by
Complexity Digest
June 27, 12:29 PM
|
Yu Tian, Sadamori Kojaku, Hiroki Sayama, and Renaud Lambiotte Phys. Rev. Lett. 134, 237401
Networks are powerful tools for modeling interactions in complex systems. While traditional networks use scalar edge weights, many real-world systems involve multidimensional interactions. For example, in social networks, individuals often have multiple interconnected opinions that can affect different opinions of other individuals, which can be better characterized by matrices. We propose a general framework for modeling such multidimensional interacting dynamics: matrix-weighted networks (MWNs). We present the mathematical foundations of MWNs and examine consensus dynamics and random walks within this context. Our results reveal that the coherence of MWNs gives rise to nontrivial steady states that generalize the notions of communities and structural balance in traditional networks. Read the full article at: link.aps.org
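A minimal sketch of consensus dynamics on a matrix-weighted network, using a toy three-node triangle with assumed 2x2 edge weights (not an example from the paper). With identity weights the dynamics reduce to ordinary Laplacian consensus; replacing a weight with, say, a rotation matrix is the kind of change that produces the nontrivial steady states the authors analyse.

```python
import numpy as np

# Consensus on a matrix-weighted triangle: each node holds a 2-d opinion
# vector, and each edge weight is a 2x2 matrix coupling the dimensions.
rng = np.random.default_rng(1)
n, d, eps, steps = 3, 2, 0.1, 500
x = rng.standard_normal((n, d))  # initial opinions

W = {}
for i, j in [(0, 1), (1, 2), (0, 2)]:
    W[i, j] = W[j, i] = np.eye(d)  # identity coupling on every edge

for _ in range(steps):
    x_next = x.copy()
    for i in range(n):
        for j in range(n):
            if (i, j) in W:
                x_next[i] += eps * W[i, j] @ (x[j] - x[i])
    x = x_next

print(x)  # all three rows converge to the same 2-d vector
```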
|
Scooped by
Complexity Digest
June 27, 7:29 AM
|
David C. Krakauer, John W. Krakauer, Melanie Mitchell Emergence is a concept in complexity science that describes how many-body systems manifest novel higher-level properties, properties that can be described by replacing high-dimensional mechanisms with lower-dimensional effective variables and theories. This is captured by the idea "more is different". Intelligence is a consummate emergent property manifesting increasingly efficient (cheaper and faster) uses of emergent capabilities to solve problems. This is captured by the idea "less is more". In this paper, we first examine claims that Large Language Models exhibit emergent capabilities, reviewing several approaches to quantifying emergence, and second ask whether LLMs possess emergent intelligence. Read the full article at: arxiv.org
|
Scooped by
Complexity Digest
June 8, 4:49 PM
|
Reed Bender, Karina Kofman, Blaise Agüera y Arcas, Michael Levin The question of "what is life?" has challenged scientists and philosophers for centuries, producing an array of definitions that reflect both the mystery of its emergence and the diversity of disciplinary perspectives brought to bear on the question. Despite significant progress in our understanding of biological systems, psychology, computation, and information theory, no single definition for life has yet achieved universal acceptance. This challenge becomes increasingly urgent as advances in synthetic biology, artificial intelligence, and astrobiology challenge our traditional conceptions of what it means to be alive. We undertook a methodological approach that leverages large language models (LLMs) to analyze a set of definitions of life provided by a curated set of cross-disciplinary experts. We used a novel pairwise correlation analysis to map the definitions into distinct feature vectors, followed by agglomerative clustering, intra-cluster semantic analysis, and t-SNE projection to reveal underlying conceptual archetypes. This methodology revealed a continuous landscape of the themes relating to the definition of life, suggesting that what has historically been approached as a binary taxonomic problem should be instead conceived as differentiated perspectives within a unified conceptual latent space. We offer a new methodological bridge between reductionist and holistic approaches to fundamental questions in science and philosophy, demonstrating how computational semantic analysis can reveal conceptual patterns across disciplinary boundaries, and opening similar pathways for addressing other contested definitional territories across the sciences. Read the full article at: arxiv.org
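The analysis pipeline described (feature vectors, agglomerative clustering, low-dimensional projection) maps directly onto standard tooling. The sketch below uses synthetic data as a stand-in for the LLM-derived pairwise scores; the cluster count and t-SNE settings are arbitrary assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_defs = 40
# stand-in for the pairwise-correlation feature vectors: one row per
# definition, scored against every other definition
features = rng.standard_normal((n_defs, n_defs))
features = (features + features.T) / 2  # make the scores symmetric

clusters = AgglomerativeClustering(n_clusters=5).fit_predict(features)
embedding = TSNE(n_components=2, perplexity=10,
                 random_state=0).fit_transform(features)
print(clusters[:10])
print(embedding.shape)  # (40, 2): a 2-d map of the conceptual landscape
```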
|
Scooped by
Complexity Digest
June 7, 4:46 PM
|
Ryan Hill, Yian Yin, Carolyn Stein, Xizhao Wang, Dashun Wang & Benjamin F. Jones Nature (2025) Scientists and inventors set the direction of their work amid evolving questions, opportunities and challenges, yet the understanding of pivots between research areas and their outcomes remains limited [1-5]. Theories of creative search highlight the potential benefits of exploration but also emphasize difficulties in moving beyond one’s expertise [6-14]. Here we introduce a measurement framework to quantify how far researchers move from their existing work, and apply it to millions of papers and patents. We find a pervasive ‘pivot penalty’, in which the impact of new research steeply declines the further a researcher moves from their previous work. The pivot penalty applies nearly universally across science and patenting, and has been growing in magnitude over the past five decades. Larger pivots further exhibit weak engagement with established mixtures of prior knowledge, lower publication success rates and less market impact. Unexpected shocks to the research landscape, which may push researchers away from existing areas or pull them into new ones, further demonstrate substantial pivot penalties, including in the context of the COVID-19 pandemic. The pivot penalty generalizes across fields, career stage, productivity, collaboration and funding contexts, highlighting both the breadth and depth of the adaptive challenge. Overall, the findings point to large and increasing challenges in effectively adapting to new opportunities and threats, with implications for individual researchers, research organizations, science policy and the capacity of science and society as a whole to confront emergent demands. Read the full article at: www.nature.com
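One simple way to operationalize how far a researcher moves, in the spirit of (though not identical to) the paper's measurement framework, is a cosine distance between the field profiles of prior and new work. The construction below is an illustrative assumption, not the authors' estimator.

```python
import numpy as np

def pivot_size(prior_fields, new_fields):
    """Cosine distance between reference-field count vectors (a sketch)."""
    p = np.asarray(prior_fields, dtype=float)
    q = np.asarray(new_fields, dtype=float)
    cos = p @ q / (np.linalg.norm(p) * np.linalg.norm(q))
    return 1.0 - cos  # 0 = same territory, 1 = orthogonal pivot

print(pivot_size([5, 3, 0, 0], [4, 3, 1, 0]))  # small pivot (~0.03)
print(pivot_size([5, 3, 0, 0], [0, 0, 4, 2]))  # large pivot (1.0)
```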
|
Scooped by
Complexity Digest
May 30, 4:00 PM
|
Michael Levin and Richard Watson Cell and developmental biology offer numerous remarkable examples of collective intelligence and adaptive plasticity to novel circumstances, as cells implement large-scale form and function. Many of these capabilities seem different from the behavior of machines or the results of computations. And yet, they are implemented by biochemical, biophysical, and bioelectrical events which are often interpreted with the machine metaphor that dominates molecular and cell biology. The seeming incongruity between molecular mechanisms and the emergence of self-constructing and goal-driven intentional living agents has driven a perennial debate between mechanist and organicist thinkers. Here, we discuss the inadequacies of, on the one hand, the (unminded) mechanist and computationalist frameworks, and on the other, dualistic conceptions of machine vs. mind. Both fail to provide an integration of agential and mechanistic aspects evident in biology. We propose that a new kind of cognitivism, cognition all the way down, provides the necessary unification of ‘bottom-up’ and ‘top-down’ causal flows evident in living systems. We illustrate how the organizational layers between genotype and phenotype provide problem-solving intelligence, not merely complexity, and discuss the benefits and inadequacies of specific machine metaphors in this context. By taking a pragmatist approach to the hypothesis that life and mind are fundamentally the same problem, formalisms are emerging that embrace the unique quality of the agential material of life while fully benefitting from the advances of modern machine science. New ways to map formal concepts of machine and data to biology provide a route toward unifying evolutionary and developmental biology, and rich substrates for the use of truly bio-inspired principles to advance engineering and computer science. Read the full article at: osf.io
|
Scooped by
Complexity Digest
May 29, 6:02 PM
|
Ahmed Allibhoy, Arthur N. Montanari, Fabio Pasqualetti, Adilson E. Motter Oscillator Ising machines (OIMs) are networks of coupled oscillators that seek the minimum energy state of an Ising model. Since many NP-hard problems are equivalent to the minimization of an Ising Hamiltonian, OIMs have emerged as a promising computing paradigm for solving complex optimization problems that are intractable on existing digital computers. However, their performance is sensitive to the choice of tunable parameters, and convergence guarantees are mostly lacking. Here, we show that lower energy states are more likely to be stable, and that convergence to the global minimizer is often improved by introducing random heterogeneities in the regularization parameters. Our analysis relates the stability properties of Ising configurations to the spectral properties of a signed graph Laplacian. By examining the spectra of random ensembles of these graphs, we show that the probability of an equilibrium being asymptotically stable depends inversely on the value of the Ising Hamiltonian, biasing the system toward low-energy states. Our numerical results confirm our findings and demonstrate that heterogeneously designed OIMs efficiently converge to globally optimal solutions with high probability. Read the full article at: arxiv.org
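A minimal sketch of oscillator-Ising-machine dynamics: a Kuramoto-type gradient flow with second-harmonic injection locking, where the settled phases near 0 or pi encode the Ising spins. The couplings, gains, and heterogeneous locking strengths below are toy values illustrating the randomized-regularization idea, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n, K, dt, steps = 8, 1.0, 0.05, 4000
J = np.triu(rng.choice([-1, 1], size=(n, n)), 1)
J = J + J.T                                  # symmetric +/-1 couplings
Ks = 1.0 + 0.2 * rng.standard_normal(n)      # heterogeneous regularization

theta = rng.uniform(0, 2 * np.pi, n)
for _ in range(steps):
    coupling = (J * np.sin(theta[:, None] - theta[None, :])).sum(axis=1)
    theta -= dt * (K * coupling + Ks * np.sin(2 * theta))  # gradient descent

spins = np.sign(np.cos(theta))        # binarize the settled phases
print("spins: ", spins)
print("energy:", -0.5 * spins @ J @ spins)  # Ising Hamiltonian value
```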
|
Scooped by
Complexity Digest
May 28, 11:59 AM
|
Matthew Russell Barnes, Vincenzo Nicosia & Richard G. Clegg Scientific Reports volume 15, Article number: 5941 (2025) In complex networks, the “rich-get-richer” effect (nodes with high degree at one point in time gain more degree in their future) is commonly observed. In practice this is often studied on a static network snapshot, for example, a preferential attachment model assumed to explain the more highly connected nodes or a rich-club effect that analyses the most highly connected nodes. In this paper, we consider temporal measures of how success (measured here as node degree) propagates across time. By analogy with social mobility (a measure of people moving within a social hierarchy through their life) we define hierarchical mobility to measure how a node’s propensity to gain degree changes over time. We introduce an associated taxonomy of temporal correlation statistics including mobility, philanthropy and community. Mobility measures the extent to which a node’s degree gain in one time period predicts its degree gain in the next. Philanthropy and community measure similar properties related to node neighbourhood. We apply these statistics both to artificial models and to 26 real temporal networks. We find that most of our networks show a tendency for individual nodes and their neighbourhoods to remain in similar hierarchical positions over time, while most networks show low correlative effects between individuals and their neighbourhoods. Moreover, we show that the mobility taxonomy can discriminate between networks from different fields. We also generate artificial network models to gain intuition about the behaviour and expected range of the statistics. The artificial models show that the opposite of the “rich-get-richer” effect requires the existence of inequality of degree in a network. Overall, we show that measuring the hierarchical mobility of a temporal network is an invaluable resource for discovering its underlying structural dynamics. Read the full article at: www.nature.com
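A sketch of the mobility statistic as described: correlate each node's degree gain in one time window with its gain in the next. The synthetic degree sequences below stand in for a real temporal network, and the authors' exact estimator and normalization may differ.

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes, n_windows = 200, 10
# synthetic per-window degree gains with node-specific rates, so that
# "successful" nodes tend to keep gaining (a rich-get-richer stand-in)
rates = np.linspace(1.0, 3.0, n_nodes)[:, None]
gains = rng.poisson(lam=rates, size=(n_nodes, n_windows))

# mobility: correlation between a node's gain in window t and t + 1
prev = gains[:, :-1].ravel()
nxt = gains[:, 1:].ravel()
mobility = np.corrcoef(prev, nxt)[0, 1]
print(f"mobility = {mobility:.3f}")  # > 0: nodes hold their rank over time
```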
|
Scooped by
Complexity Digest
May 27, 11:51 AM
|
Luís A. Nunes Amaral, Arthur Capozzi, Dirk Helbing Organizations learn from the market, political, and societal responses to their actions. While in some cases both the actions and responses take place in an open manner, in many others, some aspects may be hidden from external observers. The Eurovision Song Contest offers an interesting example to study organizational learning at two levels: organizers and participants. We find evidence for changes in the rules of the Contest in response to undesired outcomes such as runaway winners. We also find strong evidence of participant learning in the characteristics of competing songs over the 70 years of the Contest. English has been adopted as the lingua franca of the competing songs and pop has become the standard genre. The number of words in lyrics has also grown in response to this collective learning. Remarkably, we find evidence that four participating countries have chosen to ignore the "lesson" that English lyrics increase winning probability. This choice is consistent with utility functions that award greater value to featuring the national language than to winning the Contest. Indeed, we find evidence that some countries (but not Germany) appear to be less susceptible to "peer" pressure. These observations appear to be valid beyond Eurovision. Read the full article at: arxiv.org
|
Scooped by
Complexity Digest
May 24, 12:12 PM
|
David Deutsch, Chiara Marletto Constructor theory asserts that the laws of physics are expressible as specifications of which transformations of physical systems can or cannot be brought about with unbounded accuracy by devices capable of operating in a cycle ('constructors'). Hence, in particular, such specifications cannot refer to time. Thus, laws expressed in constructor-theoretic form automatically avoid the anomalous properties of time in traditional formulations of fundamental theories. But that raises the problem of how they can nevertheless give meaning to duration and dynamics, and thereby be compatible with traditionally formulated laws. Here we show how. Read the full article at: arxiv.org
|
Scooped by
Complexity Digest
May 23, 11:48 AM
|
Fernando Rosas, Alexander Boyd, Manuel Baltieri Recent work proposes using world models to generate controlled virtual environments in which AI agents can be tested before deployment to ensure their reliability and safety. However, accurate world models often have high computational demands that can severely restrict the scope and depth of such assessments. Inspired by the classic 'brain in a vat' thought experiment, here we investigate ways of simplifying world models that remain agnostic to the AI agent under evaluation. By following principles from computational mechanics, our approach reveals a fundamental trade-off in world model construction between efficiency and interpretability, demonstrating that no single world model can optimise all desirable characteristics. Building on this trade-off, we identify procedures to build world models that either minimise memory requirements, delineate the boundaries of what is learnable, or allow tracking causes of undesirable outcomes. In doing so, this work establishes fundamental limits in world modelling, leading to actionable guidelines that inform core design choices related to effective agent evaluation. Read the full article at: arxiv.org
|
Scooped by
Complexity Digest
May 9, 5:44 PM
|
Linzhuo Li, Yiling Lin, Lingfei Wu Using large-scale citation data and a breakthrough metric, the study systematically evaluates the inevitability of scientific breakthroughs. We find that scientific breakthroughs emerge as multiple discoveries rather than singular events. Through analysis of over 40 million journal articles, we identify multiple discoveries as papers that independently displace the same reference using the Disruption Index (D-index), suggesting functional equivalence. Our findings support Merton's core argument that scientific discoveries arise from historical context rather than individual genius. The results reveal a long-tail distribution pattern of multiple discoveries across various datasets, challenging Merton's Poisson model while reinforcing the structural inevitability of scientific progress. Read the full article at: arxiv.org
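The Disruption Index mentioned here has a standard closed form: among papers citing either the focal paper or its references, count those citing only the focal paper (n_i), both (n_j), and only the references (n_k). A sketch with toy citation sets:

```python
def disruption_index(citers_focal, citers_refs):
    """D = (n_i - n_j) / (n_i + n_j + n_k) for a focal paper.

    citers_focal: set of papers citing the focal paper
    citers_refs:  set of papers citing any of its references
    """
    n_i = len(citers_focal - citers_refs)  # cite the focal paper only
    n_j = len(citers_focal & citers_refs)  # cite focal paper and references
    n_k = len(citers_refs - citers_focal)  # cite the references only
    return (n_i - n_j) / (n_i + n_j + n_k)

# toy data: papers 1-3 cite only the focal work, 4 cites both, 5 only refs
print(disruption_index({1, 2, 3, 4}, {4, 5}))  # (3 - 1) / 5 = 0.4
```

In the study, papers that independently displace the same reference in this sense are flagged as candidate multiple discoveries.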
|