|
Scooped by
Complexity Digest
April 4, 6:52 PM
|
Adams, Fred C. Both the fundamental constants that describe the laws of physics and the cosmological parameters that determine the properties of our universe must fall within a range of values in order for the cosmos to develop astrophysical structures and ultimately support life. This paper reviews the current constraints on these quantities. The discussion starts with an assessment of the parameters that are allowed to vary. The standard model of particle physics contains both coupling constants (α, α_s, α_w) and particle masses (m_u, m_d, m_e), and the allowed ranges of these parameters are discussed first. We then consider cosmological parameters, including the total energy density of the universe (Ω), the contribution from vacuum energy (ρ_Λ), the baryon-to-photon ratio (η), the dark matter contribution (δ), and the amplitude of primordial density fluctuations (Q). These quantities are constrained by the requirements that the universe lives for a sufficiently long time, emerges from the epoch of Big Bang Nucleosynthesis with an acceptable chemical composition, and can successfully produce large scale structures such as galaxies. On smaller scales, stars and planets must be able to form and function. The stars must be sufficiently long-lived, have high enough surface temperatures, and have smaller masses than their host galaxies. The planets must be massive enough to hold onto an atmosphere, yet small enough to remain non-degenerate, and contain enough particles to support a biosphere of sufficient complexity. These requirements place constraints on the gravitational structure constant (α_G), the fine structure constant (α), and composite parameters (C_⋆) that specify nuclear reaction rates. We then consider specific instances of possible fine-tuning in stellar nucleosynthesis, including the triple alpha reaction that produces carbon, the case of unstable deuterium, and the possibility of stable diprotons. For all of the issues outlined above, viable universes exist over a range of parameter space, which is delineated herein. Finally, for universes with significantly different parameters, new types of astrophysical processes can generate energy and thereby support habitability. Read the full article at: ui.adsabs.harvard.edu
|
Scooped by
Complexity Digest
April 2, 7:50 PM
|
Steven D Shaw, Gideon Nave People increasingly consult generative artificial intelligence (AI) while reasoning. As AI becomes embedded in daily thought, what becomes of human judgment? We introduce Tri-System Theory, extending dual-process accounts of reasoning by positing System 3: artificial cognition that operates outside the brain. System 3 can supplement or supplant internal processes, introducing novel cognitive pathways. A key prediction of the theory is "cognitive surrender": adopting AI outputs with minimal scrutiny, overriding intuition (System 1) and deliberation (System 2). Across three preregistered experiments using an adapted Cognitive Reflection Test (N = 1,372; 9,593 trials), we randomized AI accuracy via hidden seed prompts. Participants chose to consult an AI assistant on a majority of trials (>50%). Relative to baseline (no System 3 access), accuracy significantly rose when AI was accurate and fell when it erred (+25/-15 percentage points; Study 1), the behavioral signature of cognitive surrender (AI-Accurate vs. AI-Faulty contrast; Cohen's h = 0.81). Engaging System 3 also increased confidence, even following errors. Time pressure (Study 2) and per-item incentives and feedback (Study 3) shifted baseline performance but did not eliminate this pattern: when accurate, AI buffered time-pressure costs and amplified incentive gains; when faulty, it consistently reduced accuracy regardless of situational moderators. Across studies, participants with higher trust in AI and lower need for cognition and fluid intelligence showed greater surrender to System 3. Tri-System Theory thus characterizes a triadic cognitive ecology, revealing how System 3 reframes human reasoning and may reshape autonomy and accountability in the age of AI. Read the full article at: papers.ssrn.com
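For readers less familiar with the effect size reported above, Cohen's h for a difference between two proportions is h = 2·arcsin(√p1) − 2·arcsin(√p2). A minimal sketch in Python, using placeholder accuracies rather than the study's data:

```python
import math

def cohens_h(p1: float, p2: float) -> float:
    """Effect size for the difference between two proportions."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# Placeholder accuracies for the AI-Accurate vs. AI-Faulty conditions
# (illustrative only; not the values reported in the paper).
print(round(abs(cohens_h(0.85, 0.45)), 2))
```

With these placeholder values |h| comes out near 0.88, i.e., a large effect by conventional benchmarks.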
|
Scooped by
Complexity Digest
March 29, 12:53 PM
|
Thomas F. Varley, Josh Bongard The study of complex systems has produced a huge library of different descriptive statistics that scientists can use to describe the various emergent patterns that characterize complex systems. The problem of engineering systems to display those patterns from first principles is a much harder one, however, as a hallmark of complexity is that macro-scale emergent properties are often difficult to predict from micro-scale features. Here, we propose a general optimization-based pipeline to automate the difficult problem of engineering emergent features by re-purposing descriptive statistics as loss functions, and letting a gradient descent optimizer do the hard work of designing the relevant micro-scale features and interactions. Using Kuramoto systems of coupled oscillators as a test bed, we show that our approach can reliably produce systems with non-trivial global properties, including higher-order synergistic information, multi-attractor metastability, and meso-scale structures such as modules and integrated information. We further show that this pipeline can also account for and accommodate constraints on the system properties, such as the costs of connections, or topological restrictions. This work is a step forward on the path moving complex systems science from a field predicated largely on description and post-hoc storytelling towards one capable of engineering real-world systems with desirable emergent meso-scale and macro-scale properties. Read the full article at: arxiv.org
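The pipeline described above can be mimicked in miniature: pick a descriptive statistic (here the Kuramoto order parameter), re-purpose it as a loss, and let an optimizer adjust the micro-scale couplings. The sketch below is not the authors' code; it uses a crude finite-difference gradient and an assumed target synchrony of R = 0.6 purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 8, 300, 0.05
omega = rng.normal(0.0, 0.5, N)            # natural frequencies
theta0 = rng.uniform(0, 2 * np.pi, N)      # fixed initial phases

def order_parameter(K):
    """Time-averaged Kuramoto order parameter R under coupling matrix K."""
    theta = theta0.copy()
    rs = []
    for _ in range(T):
        diff = theta[None, :] - theta[:, None]          # diff[i, j] = theta_j - theta_i
        theta = theta + dt * (omega + (K * np.sin(diff)).mean(axis=1))
        rs.append(np.abs(np.exp(1j * theta).mean()))
    return float(np.mean(rs[T // 2:]))                  # discard the transient

def loss(K, target=0.6):
    """A descriptive statistic re-purposed as a loss: distance to a target synchrony."""
    return (order_parameter(K) - target) ** 2

K = rng.uniform(0.0, 1.0, (N, N))
np.fill_diagonal(K, 0.0)
eps, lr = 1e-3, 2.0
for step in range(20):                                   # finite-difference gradient descent
    base = loss(K)
    grad = np.zeros_like(K)
    for i in range(N):
        for j in range(N):
            if i != j:
                Kp = K.copy()
                Kp[i, j] += eps
                grad[i, j] = (loss(Kp) - base) / eps
    K -= lr * grad
    if step % 5 == 0:
        print(f"step {step}: loss = {base:.5f}")
```

In the paper the same logic is applied with differentiable simulators and richer statistics (synergy, metastability, integrated information); the finite-difference gradient here is only a stand-in.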
|
Scooped by
Complexity Digest
March 15, 4:39 PM
|
Vicky Chuqiao Yang, James Holehouse, Hyejin Youn, José Ignacio Arroyo, Sidney Redner, Geoffrey B. West, and Christopher P. Kempes PNAS 123 (7) e2509729123 Diversification and specialization are central to complex adaptive systems, yet overarching principles across domains remain elusive. We introduce a general theory that unifies diversity and specialization across disparate systems, including microbes, federal agencies, companies, universities, and cities, characterized by two key parameters. We show from extensive data that function diversity scales with system size as a sublinear power law, resembling Heaps’ law, in all but cities, where it is logarithmic. Our theory explains both behaviors and suggests that function creation depends on system goals and structure: federal agencies tend to ensure functional coverage; cities slow new function growth as old ones expand, and cells occupy an intermediate position. Once functions are introduced, their growth follows a remarkably universal pattern across all systems. Read the full article at: www.pnas.org
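The reported scaling, function diversity growing as a sublinear power of system size, can be checked on any size–diversity table with a log–log fit. A short sketch on synthetic data (the exponent 0.45 and the data themselves are invented, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic size / function-diversity data obeying D ~ a * N^beta (illustrative only).
N = np.logspace(2, 6, 40)
D = 3.0 * N ** 0.45 * np.exp(rng.normal(0, 0.05, N.size))

beta, log_a = np.polyfit(np.log(N), np.log(D), 1)  # slope = scaling exponent
print(f"estimated exponent beta = {beta:.2f} (sublinear if beta < 1)")
```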
|
Scooped by
Complexity Digest
March 13, 2:40 PM
|
Abbas K Rizi PNAS Nexus, Volume 5, Issue 2, February 2026, pgag010. The term emergence is increasingly used across scientific disciplines to describe phenomena that arise from interactions among a system's components but cannot be readily inferred by examining those components in isolation. While often invoked to explain higher-level behaviors—such as flocking, synchronization, or collective intelligence—the term is frequently used without precision, sometimes giving rise to ambiguity or even mystique. In this perspective paper, I clarify the scientific meaning of emergence as a measurable and physically grounded phenomenon. Through concrete examples—such as temperature, magnetism, and herd immunity in social networks—I review how collective behavior can arise from local interactions that are constrained by global boundaries. By refining the concept of emergence, it is possible to gain a clearer and more grounded understanding of complex systems. My goal is to show that emergence, when properly framed, offers not mysticism, but rather insight. Read the full article at: academic.oup.com
|
Scooped by
Complexity Digest
March 12, 9:25 AM
|
Sara Imari Walker One of the longest-standing open problems in science is how life arises from non-living matter. If it is possible to measure this transition in the lab, then it might be possible to understand the physical mechanisms by which the emergence of life occurs, which so far have evaded scientific understanding. A significant hurdle is the lack of standards or a framework for cross comparison across different experimental contexts and planetary environments. In this essay, I review current challenges in experimental approaches to origin of life chemistry, focusing on those associated with quantifying experimental selectivity versus de novo generation of molecular complexity, and I highlight new methods using molecular assembly theory to measure molecular complexity. This metrology-centered approach can enable rigorous testing of hypotheses about the cascade of major transitions in molecular order marking the emergence of life, while potentially bridging traditional divides between metabolism-first and genetics-first scenarios. Grounding the study of life's origins in measurable complexity has significant implications for the search for life beyond Earth, suggesting paths toward theory-driven detection of biological complexity in diverse planetary contexts. As the field moves forward, standardized measurements of molecular complexity may help unify currently disparate approaches to understanding how matter transforms to life. Much remains to be done in this exciting frontier. Read the full article at: arxiv.org
|
Scooped by
Complexity Digest
March 11, 2:32 PM
|
Costolo, Michael This paper introduces a constraint-limited model of combinatorial growth that examines how feasibility scales with increasing system dimensionality. The framework analyzes the balance between expanding possibility spaces and constraint structures that prune feasible configurations. The model shows that when feasible configurations grow as c^n within a combinatorial space of size 2^n, the feasible fraction collapses for constant c < 2. Sustained novelty generation therefore requires c(n) to approach the combinatorial base, producing a narrow “complexity corridor” between regimes of trivial repetition and combinatorial sparsity. The paper derives the analytic structure of this corridor and explores it through numerical simulations and visualizations. The results suggest a possible structural explanation for why complex systems may emerge only within a narrow range where combinatorial expansion and constraint relaxation operate at comparable scales. The manuscript includes the full mathematical derivation, simulation results, and discussion of implications for complex systems. Read the full article at: zenodo.org
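The quantitative core of the argument is easy to reproduce: if feasible configurations grow as c^n inside a 2^n space, the feasible fraction is (c/2)^n. A toy sketch, with an assumed form c(n) = 2 − 1/n for the constraint-relaxation case:

```python
import numpy as np

n = np.arange(1, 201)

def feasible_fraction(c_of_n):
    """Fraction of the 2^n configuration space that remains feasible."""
    return (c_of_n / 2.0) ** n   # = c^n / 2^n

const = feasible_fraction(np.full_like(n, 1.9, dtype=float))   # constant c < 2
approach = feasible_fraction(2.0 - 1.0 / n)                    # c(n) -> 2 (assumed form)

print("n=200, constant c=1.9 :", const[-1])     # collapses exponentially toward 0
print("n=200, c(n)=2-1/n     :", approach[-1])  # settles near exp(-1/2) ~ 0.61
```

The constant-c case decays exponentially, while letting c(n) approach the combinatorial base keeps a finite fraction feasible, which is the corridor the paper delineates.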
|
Scooped by
Complexity Digest
March 1, 10:42 AM
|
Junhua Yuan Nature Physics (2026) Spontaneous switching between active and inactive states in bacterial chemosensory arrays is shown to operate near a critical point. Through biologically controlled disorder, cells balance high signal gain with fast response. Read the full article at: www.nature.com
|
Scooped by
Complexity Digest
February 28, 10:53 AM
|
Erik Hoel Scientific theories of consciousness should be falsifiable and non-trivial. Recent research has given us formal tools to analyze these requirements of falsifiability and non-triviality for theories of consciousness. Surprisingly, many contemporary theories of consciousness fail to pass this bar, including theories based on causal structure but also (as I demonstrate) theories based on function. Herein, I show these requirements of falsifiability and non-triviality especially constrain the potential consciousness of contemporary Large Language Models (LLMs) because of their proximity to systems that are equivalent to LLMs in terms of input/output function; yet, for these functionally equivalent systems, there cannot be any falsifiable and non-trivial theory of consciousness that judges them conscious. This forms the basis of a disproof of contemporary LLM consciousness. I then show a positive result, which is that theories of consciousness based on (or requiring) continual learning do satisfy the stringent formal constraints for a theory of consciousness in humans. Intriguingly, this work supports a hypothesis: If continual learning is linked to consciousness in humans, the current limitations of LLMs (which do not continually learn) are intimately tied to their lack of consciousness. Read the full article at: arxiv.org
|
Scooped by
Complexity Digest
February 23, 10:28 AM
|
Sergey Gavrilets, Johannes Karl, and Michele J. Gelfand PNAS 123 (7) e2522998123 People often get public opinion wrong, assuming their own views are unpopular when in fact many others share them. This widespread misperception, called pluralistic ignorance, can trap societies in harmful or outdated norms. We build a mathematical model showing how these misperceptions form and change over time, depending on whether cultures are “tight” (with strict norms) or “loose” (with flexible ones). Our results explain why support for issues like climate action or women’s rights is often underestimated, and why change happens faster in some societies than others. The model also points to practical solutions: in loose cultures, sharing accurate information works best, while in tight ones, lowering the costs of speaking up can spark social change. Read the full article at: www.pnas.org
|
Suggested by
John Stewart
February 20, 10:11 PM
|
John E. Stewart BioSystems Volume 262, April 2026, 105733 Our universe appears to be fine-tuned for life. But once life emerges, it does not evolve randomly. Evolution has a trajectory. Both evolvability and cooperative integration increase as evolution proceeds. Until now, this trajectory has largely been driven blindly by gene-based natural selection. But humans are developing cognitive capacities that are far superior to natural selection at adapting and evolving humanity. These capacities will enable humanity to use an understanding of evolution's future trajectory to guide its own evolution, avoiding the destructive selection that will otherwise reinforce the trajectory. Humans who help realize this potential will be fulfilling vital evolutionary roles that are meaningful and purposeful in a much larger scheme of things. The paper considers whether these roles remain meaningful when considered in the wider context of possible origins of the universe. But this analysis is faced with a potentially infinite number of origin hypotheses (including innumerable ‘God hypotheses’), which are not falsified by current knowledge. The paper addresses this challenge using methods that enable rational decision-making despite radical uncertainty. Broadly, this approach reinforces the conclusions reached by consideration of the evolutionary trajectory within the universe, and opens some new possibilities. Finally, the paper demonstrates that extending this analysis also largely overcomes Hume's critique of induction, placing scientific methodologies on a firmer footing. It achieves this by recognising that a universe which exhibits a trajectory towards increasing evolvability must contain discoverable regularities that provide adaptive advantages for evolvability. Read the full article at: www.sciencedirect.com
|
Scooped by
Complexity Digest
February 20, 4:26 PM
|
Giovanni Pezzulo, Michael Levin Achieving advanced machine intelligence remains a central challenge in AI research, often approached through scaling neural architectures and generative models. However, biological systems offer a broader repertoire of strategies for adaptive, goal-directed behavior - strategies that emerged long before nervous systems evolved. This paper advocates a genuinely life-inspired approach to machine intelligence, drawing on principles from biology that enable robustness, autonomy, and open-ended problem-solving across scales. We frame intelligence as flexible problem-solving, following William James, and develop the concept of "cognitive light cones" to characterize the continuum of intelligence in living systems and machines. We argue that biological evolution has discovered a scalable recipe for intelligence - the progressive expansion of organisms' "cognitive light cones", their predictive and control capacities. To explain how this is possible, we distill five design principles - multiscale autonomy, growth through self-assemblage of active components, continuous reconstruction of capabilities, exploitation of physical and embodied constraints, and pervasive signaling enabling self-organization and top-down control from goals - that underpin life's ability to creatively navigate diverse problem spaces. We discuss how these principles contrast with current AI paradigms and outline pathways for integrating them into future autonomous, embodied, and resilient artificial systems. Read the full article at: arxiv.org
|
Scooped by
Complexity Digest
February 19, 8:23 PM
|
Fernando Rodriguez-Vergara and Phil Husbands Mathematics 2026, 14(3), 535 According to the Church–Turing thesis, the limit of what is computable is bounded by Turing machines. Following from this, given that general computable functions formally describe the notion of recursive mechanisms, it is sometimes argued that every organismic process that specifies consistent cognitive responses should be both limited to Turing machine capabilities and amenable to formalization. There is, however, a deep intuitive conviction permeating contemporary cognitive science, according to which mental phenomena, such as consciousness and agency, cannot be explained by resorting to this kind of framework. In spite of some exceptions, the overall tacit assumption is that whatever the mind is, it exceeds the reach of what is described by notions of computability. This issue, namely the nature of the relation between cognition and computation, becomes particularly pertinent and increasingly more relevant as a possible source of better understanding the inner workings of the mind, as well as the limits of artificial implementations thereof. Moreover, although it is often overlooked or omitted so as to simplify our models, it will probably define, or so we argue, the direction of future research on artificial life, cognitive science, artificial intelligence, and related fields. Read the full article at: www.mdpi.com
|
Scooped by
Complexity Digest
April 3, 6:47 PM
|
Danielle L. Chase, Daniel Zhu, Mahi Kathait, Henry Robertson, Jash Shah, Sully Harrer, Gary Nave, Nolan R. Bonnie, Orit Peleg When honeybee colonies reproduce by fission, several thousand bees and their queen depart the parental nest and temporarily form a dense cluster on a tree branch or other surface while searching for a new nest site. Once the new nest site is selected, the swarm disassembles and flies toward it. How honeybees transition rapidly between dispersed flight and an aggregated cluster remains an open question. Here, we develop an experimental system and three-dimensional imaging pipeline to track individual flying bees together with the evolving morphology of the swarm during formation and dissolution. We report results from a representative swarming event. During assembly, swarms rapidly form low-density clusters before undergoing a slower contraction to a denser steady-state configuration. In contrast, disassembly occurs significantly faster than assembly and is characterized by strongly divergent flight, with bees departing the swarm in all directions. Overall, this method demonstrates the coupled flight and morphological dynamics that underlie honeybee swarm assembly. Because the system is relatively low-cost and low-power, it is readily adaptable for three-dimensional imaging of other biological collectives in naturalistic environments. Read the full article at: www.biorxiv.org
|
Scooped by
Complexity Digest
April 2, 4:54 PM
|
Tenta Tani We theoretically investigate how information flows when two particles interact with each other. Understanding the physical mechanisms of directional information flow is crucial for advancing information thermodynamics and stochastic computing. However, the fundamental connection between mechanical motion and causal information transfer remains elusive. To focus only on essential effects of physical dynamics, we examine two interacting Brownian particles confined in a one-dimensional potential. By simulating their Langevin dynamics, we quantify the causal information exchange using transfer entropy. We demonstrate that a mass asymmetry inherently breaks the symmetry of information flow, inducing a net directional transfer from the heavier to the lighter particle. Physically, the heavier particle, possessing larger inertia and higher active information storage, retains the memory of its trajectory longer against thermal fluctuations, thereby acting as a source of information. We analytically clarify that this net transfer is governed by a competition between the difference in memory capacity and the predictability of the particle trajectories. Furthermore, we reveal that the net information flow scales logarithmically with the mass ratio. These findings provide essential insights into the physical significance of transfer entropy and the nature of information flow in general physical systems. Read the full article at: arxiv.org
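A rough sense of the setup can be had from a toy simulation: two underdamped Langevin particles in a harmonic trap, coupled by a spring, with transfer entropy estimated from binned trajectories. All parameters below (masses, spring constants, bin counts) are assumptions for illustration, and the toy is not expected to reproduce the paper's quantitative results:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, steps = 1e-2, 100_000
m = np.array([5.0, 1.0])                  # heavy vs. light particle (assumed masses)
gamma, kT, k_trap, k_int = 1.0, 1.0, 1.0, 0.5

x, v = np.zeros(2), np.zeros(2)
traj = np.empty((steps, 2))
for t in range(steps):
    # forces: confining harmonic potential + harmonic coupling between the particles
    f = -k_trap * x + k_int * (x[::-1] - x)
    v += dt * (f - gamma * v) / m + np.sqrt(2 * gamma * kT * dt) / m * rng.normal(size=2)
    x += dt * v
    traj[t] = x

def transfer_entropy(src, dst, bins=8):
    """Histogram estimate of TE(src -> dst) with one-step histories (in nats)."""
    s = np.digitize(src, np.quantile(src, np.linspace(0, 1, bins + 1)[1:-1]))
    d = np.digitize(dst, np.quantile(dst, np.linspace(0, 1, bins + 1)[1:-1]))
    trip = np.stack([d[1:], d[:-1], s[:-1]], axis=1)        # (y_next, y_now, x_now)
    p_xyz, _ = np.histogramdd(trip, bins=[np.arange(bins + 1) - 0.5] * 3)
    p_xyz /= p_xyz.sum()
    p_yz = p_xyz.sum(axis=0)          # p(y_now, x_now)
    p_xy = p_xyz.sum(axis=2)          # p(y_next, y_now)
    p_y = p_xyz.sum(axis=(0, 2))      # p(y_now)
    te = 0.0
    for i in range(bins):
        for j in range(bins):
            for l in range(bins):
                p = p_xyz[i, j, l]
                if p > 0:
                    te += p * np.log(p * p_y[j] / (p_xy[i, j] * p_yz[j, l]))
    return te

sub = traj[::5]                        # subsample to reduce correlations between samples
print("TE heavy -> light:", transfer_entropy(sub[:, 0], sub[:, 1]))
print("TE light -> heavy:", transfer_entropy(sub[:, 1], sub[:, 0]))
```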
|
Scooped by
Complexity Digest
March 25, 9:09 AM
|
Jonas Wickman, Christopher A. Klausmeier, and Elena Litchman The American Naturalist Environmental variability, in the form of either temporal fluctuations or intermittent perturbations, affects virtually all ecological systems. However, while temporal variability is widely recognized to play an important role across many ecological and evolutionary subdisciplines, there is no high-level cross-cutting concept that describes how species, communities, and ecosystems respond to variability. In this article we propose that “antifragility” could serve well as such a concept. Initially used in economics, antifragility denotes that a property or metric of performance increases with variability. To showcase the breadth of applicability and utility of the concept, we examine two mathematical models for antifragility in ecosystem services and competition. We also demonstrate some of the nuances and possible misapplications of the concept. Under global change, the variability of environmental conditions is expected to change. We believe that antifragility could serve as a useful concept in coordinating research efforts toward understanding the effects of these changes. Read the full article at: www.journals.uchicago.edu
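The operational definition, a performance metric that increases with variability, can be illustrated with Jensen's inequality: a convex response gains from a fluctuating environment, a concave one loses. A minimal numerical check (the response functions are invented, not the article's models):

```python
import numpy as np

rng = np.random.default_rng(3)
env = 1.0 + 0.5 * rng.normal(size=100_000)        # fluctuating environmental driver

convex = lambda x: x ** 2                          # "antifragile" response (convex)
concave = lambda x: np.sqrt(np.clip(x, 0, None))   # "fragile" response (concave)

for name, f in [("convex (gains from variability)", convex),
                ("concave (loses from variability)", concave)]:
    # positive difference: mean performance under fluctuations beats the mean environment
    print(name, ":", f(env).mean() - f(env.mean()))
```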
|
Scooped by
Complexity Digest
March 15, 2:35 PM
|
Dashun Wang As artificial-intelligence systems take on more of the scientific workflow, the central goal should not be complete automation, but designing platforms that preserve creativity, responsibility and surprise. Read the full article at: www.nature.com
|
Scooped by
Complexity Digest
March 12, 2:37 PM
|
Lucas Lacasa In network science, collective dynamics of complex systems are typically modelled as (nonlinear, often including many-body) vertex-level update rules evolving over a graph interaction structure. In recent years, frameworks that explicitly model such higher-order interactions in the interaction backbone (i.e. hypergraphs) have been advanced, somehow shifting the imputation of the effective nonlinearity from the dynamics to the interaction structure. In this work we discuss such structural--dynamical representation duality, and investigate how and when a nonlinear dynamics defined on the vertex set of a graph allows an equivalent representation in terms of a linear dynamics defined on the state space of a sufficiently richer, higher-order interaction structure. Using Carleman linearisation arguments, we show that finite polynomial dynamics defined on the |V| vertices of a graph admit an exact representation as linear dynamics on the state space of an hb-graph of order |V|, a combinatorial structure that extends hypergraphs by allowing vertex multiplicity, where the specific shape of the nonlinearity indicates whether the hb-graph is either finite or infinite (in terms of the number of hb-edges). For more general analytic nonlinearities, an exact linear representation always requires an hb-graph of infinite size, and its finite-size truncation provides an approximate representation of the original nonlinear graph-based dynamics. Read the full article at: arxiv.org
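The Carleman construction at the heart of the argument can be seen in a single-variable example: for dx/dt = −x + x², the monomials y_k = x^k satisfy dy_k/dt = −k·y_k + k·y_{k+1}, an infinite linear system whose finite truncation approximates the nonlinear flow. A sketch (truncation order and initial condition are assumed; SciPy is used for the integration):

```python
import numpy as np
from scipy.integrate import solve_ivp

K, x0 = 6, 0.5             # truncation order and initial state (assumed values)

# Carleman lift of dx/dt = -x + x^2: the monomials y_k = x^k obey
# dy_k/dt = -k*y_k + k*y_{k+1}; the truncation drops the y_{K+1} term.
A = np.zeros((K, K))
for k in range(1, K + 1):
    A[k - 1, k - 1] = -k
    if k < K:
        A[k - 1, k] = k

y0 = np.array([x0 ** k for k in range(1, K + 1)])
lin = solve_ivp(lambda t, y: A @ y, (0, 4), y0, dense_output=True)
nonlin = solve_ivp(lambda t, x: -x + x ** 2, (0, 4), [x0], dense_output=True)

for t in (1.0, 2.0, 4.0):
    print(f"t={t}: truncated-linear {lin.sol(t)[0]:.4f} vs nonlinear {nonlin.sol(t)[0]:.4f}")
```

The first component of the truncated linear system tracks the nonlinear solution closely here; for other nonlinearities or longer horizons the truncation error grows, which is the approximation discussed in the paper.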
|
Scooped by
Complexity Digest
March 11, 3:14 PM
|
Onerva Korhonen Advances in Complex Systems Vol. 28, No. 08, 2530001 (2025) Network analysis has become a powerful tool in various fields. However, the increasing popularity comes with potential problems. Unfamiliarity with the characteristics of the systems under investigation complicates network model construction and interpretation of analysis outcomes. While these issues require special attention in studies that apply the increasingly complex higher-order connectivity models, similar problems are associated with all, even the simplest, network models. Alongside technical issues, network scientists face a philosophical question: can the network approach discover the fundamental nature of a system, on the one hand, and produce useful information, on the other hand? In this perspective, I review the potential problems of the network approach and propose two solutions to address them: active evaluation of the potential and limitations of the network framework before applying a network model and a transition toward an interdisciplinary research practice to interpret analysis outcomes in their right context. Read the full article at: www.worldscientific.com
|
Scooped by
Complexity Digest
March 10, 6:28 PM
|
Georgi Yordanov Georgiev BioSystems Volume 262, April 2026, 105647 How and why do complex chemical and biological systems self-organize into ordered states far from thermodynamic equilibrium? Despite advances in thermodynamics, kinetics, and information theory, a unifying principle that links organization and efficiency across scales has remained elusive. In open systems, productive-event trajectories are conditioned on starting at a source and ending at a sink. This work proposes a stochastic–dissipative least-action triad framework in which (i) a path-ensemble weighting biases trajectories by their action cost, (ii) feedback processes sharpen this distribution, and (iii) the ensemble evolves toward a least-average-action attractor, decreasing during self-organization and increasing during decay. A parametric cross-scale metric—Average Action Efficiency (AAE)—is defined, which is inversely proportional to the average action per productive event. Under reinforcing feedback, identities derived from the exponential-family path measure show that the average action decreases and AAE rises monotonically. In future extensions, this formulation could help bridge quantum, classical, and biological regimes while remaining computationally tractable, because its empirical version relies on aggregate energetic and timing data rather than enumerating individual trajectories. AAE reaches a local maximum at a non-equilibrium steady state under fixed operational context, consistent with the present formulation, and connections to thermodynamic and informational measures are made. A companion article (Part II) details empirical estimation strategies and applications (Georgiev, 2025a). Read the full article at: www.sciencedirect.com
See Also: Part II: Empirical estimation, Average Action Efficiency, and applications to ATP synthase
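The mechanism sketched in the abstract, an exponential-family path measure sharpened by feedback so that the average action falls and AAE rises, can be mimicked with a toy discrete path ensemble. Everything below (the number of paths, their action costs, the feedback rule) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
actions = rng.uniform(1.0, 10.0, 500)          # action cost of each candidate path
weights = np.exp(-actions)                     # exponential-family path measure
weights /= weights.sum()

for step in range(6):
    avg_action = (weights * actions).sum()
    aae = 1.0 / avg_action                     # AAE ~ inverse average action per event
    print(f"step {step}: <action> = {avg_action:.3f}, AAE = {aae:.3f}")
    # "reinforcing feedback": paths cheaper than the current average gain weight
    weights *= np.exp(-(actions - avg_action))
    weights /= weights.sum()
```

Each feedback step reweights toward cheaper paths, so the average action decreases monotonically and the toy AAE rises, mirroring the qualitative claim of the abstract.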
|
Scooped by
Complexity Digest
February 28, 11:12 AM
|
Viktor Stojkoski, César A. Hidalgo Research Policy Volume 55, Issue 4, May 2026, 105454 Efforts to apply economic complexity to identify diversification opportunities often rely on diagrams comparing the relatedness and complexity of products, technologies, or industries. Yet, the use of these diagrams is not based on empirical or theoretical evidence supporting some notion of optimality. Here, we introduce an optimization-based framework that identifies diversification opportunities by minimizing a cost function capturing the constraints imposed by an economy's pattern of specialization. We show that the resulting portfolios often differ from those implied by relatedness–complexity diagrams, providing a target-oriented optimization layer to the economic complexity toolkit. Read the full article at: www.sciencedirect.com
|
Scooped by
Complexity Digest
February 24, 2:21 PM
|
Ivan Shpurov and Tom Froese Phys. Rev. E 113, 024405 The collective behavior of numerous animal species, including insects, exhibits scale-free behavior indicative of a critical (second-order) phase transition. Previous research uncovered such phenomena in the behavior of honeybees, most notably the long-range correlations in space and time. Furthermore, it was demonstrated that bee activity in the hive manifests the hallmarks of the jamming process. We follow up by presenting a discrete model of the system that faithfully replicates some of the key features found in the data, such as the divergence of correlation length and scale-free distribution of jammed clusters. The dependence of the correlation length on the control parameter, density, is demonstrated for both the real data and the model. We conclude with a brief discussion on the contribution of the insights provided by the model to our understanding of the insects' collective behavior. Read the full article at: link.aps.org
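A generic site-occupancy toy (ordinary site percolation, not the authors' model) gives a feel for how cluster sizes depend on the density control parameter, growing sharply near a critical density:

```python
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(5)
L = 200

def cluster_stats(density):
    """Occupy lattice sites at the given density and measure cluster sizes."""
    grid = rng.random((L, L)) < density
    labels, n_clusters = label(grid)
    sizes = np.bincount(labels.ravel())[1:]      # drop label 0 (empty sites)
    return sizes.max(), sizes.mean()

for rho in (0.3, 0.5, 0.59, 0.7):
    largest, mean = cluster_stats(rho)
    print(f"density {rho}: largest cluster {largest}, mean cluster size {mean:.1f}")
```

On the square lattice the site-percolation threshold is near 0.593, which is why cluster statistics change so abruptly between densities 0.5 and 0.7 in this toy; the paper's model adds the behavioral ingredients specific to honeybees.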
|
Scooped by
Complexity Digest
February 21, 4:29 PM
|
Federico Naldini, Fabio Oddi, Leo D'Amato, Grégory Marlière, Vito Trianni, Paola Pellegrini Improving traffic management in case of perturbation is one of the main challenges in today's railway research. The great majority of the existing literature proposes approaches to make centralized decisions to minimize delay propagation. In this paper, we propose a new paradigm with the same aim: we design and implement a modular process to allow trains to self-organize. This process consists of trains identifying their neighbors, formulating traffic management hypotheses, checking their compatibility and selecting the best ones through a consensus mechanism. Finally, these hypotheses are merged into a directly applicable traffic plan. In a thorough experimental analysis on a portion of the Italian network, we compare the results of self-organization with those of a state-of-the-art centralized approach. In particular, we make this comparison mimicking a realistic deployment thanks to a closed-loop framework including a microscopic railway simulator. The results indicate that self-organization achieves better results than the centralized algorithm, specifically thanks to the definition and exploitation of the instance decomposition allowed by the proposed approach. Read the full article at: arxiv.org
|
Scooped by
Complexity Digest
February 20, 7:29 PM
|
Kleber Andrade Oliveira, Henrique Ferraz de Arruda, Yamir Moreno PNAS Nexus, Volume 5, Issue 1, January 2026, pgaf402 We investigate how information-spreading mechanisms affect opinion dynamics and vice versa via an agent-based simulation on adaptive social networks. First, we characterize the impact of reposting on user behavior with limited memory, a feature that introduces novel system states. Then, we build an experiment mimicking information-limiting environments seen on social media platforms and study how the model parameters can determine the configuration of opinions. In this scenario, different posting behaviors may sustain polarization or reverse it. We further show the adaptability of the model by calibrating it to reproduce the statistical organization of information cascades as seen empirically in a microblogging social media platform. Our model combines mechanisms for platform content recommendation, connection rewiring, and limited-attention user behavior, paving the way for a robust understanding of echo chambers as a specialized phenomenon of opinion polarization. Read the full article at: academic.oup.com
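A schematic agent-based sketch of two of the ingredients named above, reposting and limited-memory feeds, is given below; it omits the model's rewiring and recommendation mechanisms, and all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(6)
N, MEM, STEPS = 200, 5, 20_000

opinion = rng.choice([-1, 1], N)                             # binary opinions
feeds = [list(rng.choice([-1, 1], MEM)) for _ in range(N)]   # limited-memory feeds

for _ in range(STEPS):
    i = rng.integers(N)
    # the poster either posts its own opinion or reposts something from its feed
    post = opinion[i] if rng.random() < 0.5 else feeds[i][rng.integers(MEM)]
    j = rng.integers(N)                                      # a random receiver
    feeds[j].pop(0)
    feeds[j].append(post)
    s = sum(feeds[j])
    # the receiver adopts the majority opinion of its (limited) feed
    opinion[j] = 1 if s > 0 else -1 if s < 0 else opinion[j]

print("final opinion balance:", opinion.mean())
```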
|
Scooped by
Complexity Digest
February 20, 1:31 PM
|
Tiago P. Peixoto, Leto Peel, Thilo Gross, Manlio De Domenico We demonstrate that graph-based models are fully capable of representing higher-order interactions, and have a long history of being used for precisely this purpose. This stands in contrast to a common claim in the recent literature on "higher-order networks" that graph-based representations are fundamentally limited to "pairwise" interactions, requiring hypergraph formulations to capture richer dependencies. We clarify this issue by emphasizing two frequently overlooked facts. First, graph-based models are not restricted to pairwise interactions, as they naturally accommodate interactions that depend simultaneously on multiple adjacent nodes. Second, hypergraph formulations are strict special cases of more general graph-based representations, as they impose additional constraints on the allowable interactions between adjacent elements rather than expanding the space of possibilities. We show that key phenomenology commonly attributed to hypergraphs -- such as abrupt transitions -- can, in general, be recovered exactly using graph models, even locally tree-like ones, and thus does not constitute a class of phenomena that is inherently contingent on hypergraph models. Finally, we argue that the broad relevance of hypergraphs for applications that is sometimes claimed in the literature is not supported by evidence. Instead, it is likely grounded in misconceptions that network models cannot accommodate multibody interactions or that certain phenomena can only be captured with hypergraphs. We argue that clearly distinguishing between multivariate interactions, parametrized by graphs, and the functions that define them enables a more unified and flexible foundation for modeling interacting systems. Read the full article at: arxiv.org
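The paper's first point, that a graph-based model can host interactions depending jointly on several adjacent nodes, is easy to make concrete: a parity (XOR) update over a node's neighborhood is defined purely on a graph yet is not decomposable into pairwise terms. A minimal sketch (the random graph and update rule are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 6
adj = rng.random((N, N)) < 0.5
adj = np.triu(adj, 1)
adj = adj | adj.T                      # undirected random graph

state = rng.integers(0, 2, N)          # binary node states

def step(state):
    """Graph-based update where each node's new state is the parity (XOR)
    of all its neighbors' states: a joint, non-pairwise-decomposable rule."""
    new = state.copy()
    for i in range(N):
        nbrs = np.where(adj[i])[0]
        if nbrs.size:
            new[i] = state[nbrs].sum() % 2
    return new

for t in range(5):
    print(t, state)
    state = step(state)
```

The interaction structure here is an ordinary graph; the multibody character lives entirely in the update function, which is exactly the distinction between interactions and the functions that define them drawn in the abstract.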
|