Modelling Consciousness

March 29, 2025
Updated: August 13, 2025
Tags: consciousness, cognitive science, artificial intelligence, ethics, philosophy of mind, information theory

Modelling Consciousness Cover Photo

Introduction

The question of consciousness—what it is, how it emerges, and where it resides—has long been regarded as one of the most elusive and fundamental challenges of philosophy, cognitive science, and artificial intelligence. A prevailing sentiment among both philosophers and laypeople is that consciousness is inherently ineffable—a phenomenon beyond formal definition or objective measure. This perception arises primarily from the subjective nature of experience: the so-called “hard problem” of consciousness as posed by David Chalmers, which distinguishes the existence of internal experiences (qualia) from observable behaviors or computational processes.

However, this pursuit is often clouded by a conflation of two distinct endeavors: the measurement of experience itself and the measurement of consciousness as a functional, integrative phenomenon. While the former remains inaccessible by its nature—experience exists as private and subjective—the latter, we argue, need not remain intractable. Our aim is to explore whether it is possible to develop a formal, objective system for quantifying and modelling consciousness in a way that is rigorous, testable, and universal.

Such a system would not attempt to replicate or interpret the subjective experience of consciousness but instead provide a framework for measuring the conditions under which consciousness arises and the degree to which it exists. In this sense, consciousness could be viewed as an emergent, quantifiable property of systems that exhibit certain characteristics, which we will refer to as qualia criteria.

The necessity of such a framework is not purely academic; it is a moral imperative. As artificial systems grow in complexity and autonomy—mirroring behaviors traditionally associated with conscious beings—society faces increasingly urgent ethical and practical questions. How do we determine which systems warrant moral consideration? At what point does an artificial agent’s level of integration and self-reflective processing demand rights or protections similar to those afforded to biological organisms? Conversely, how do we prevent the premature ascription of consciousness to systems that merely simulate these traits? Without a formal system for measuring consciousness, we lack a principled foundation to guide these critical decisions.

This paper proposes to address these challenges by:

  • Defining a universal set of qualia criteria—the minimal conditions that any system must meet to be considered for consciousness measurement.
  • Developing a formal mathematical framework that can quantify the “level” or “degree” of consciousness based on these criteria.
  • Demonstrating how this measurement can be applied consistently across biological, computational, and hybrid systems.

We ground our approach in existing research, particularly emerging theories in neuroscience, information theory, and complexity science. Notable among these are Integrated Information Theory (IIT), which quantifies consciousness as the integration of information across a system, and predictive processing models that frame consciousness as an adaptive process for minimizing uncertainty. While these models offer significant insights, they remain incomplete and, at times, difficult to apply in practice.

By synthesizing and building on these ideas, we aim to propose a method for formalizing the measure of consciousness—one that is falsifiable, consistent, and agnostic to substrate. Whether a system is biological, artificial, or a combination of the two, our approach will seek to establish a spectrum of consciousness that aligns with observable traits and measurable properties.

Ultimately, this work is not about solving the mystery of what it is like to be conscious but about developing tools to identify, compare, and evaluate systems that meet specific criteria for consciousness. In doing so, we hope to provide a foundation for more ethical, informed decision-making in a world where artificial conscious and near-conscious systems may soon coexist.


Defining the Qualia Criteria

To measure consciousness objectively, we must first identify the minimal conditions that a system must exhibit to qualify as a candidate for consciousness measurement. These conditions, which we term qualia criteria, are not intended to capture the subjective experience of consciousness itself but rather the structural and functional properties that enable such experiences to emerge. These criteria act as a filter for determining which systems are eligible for analysis under our proposed framework.

We propose the following four qualia criteria:

1. Information Integration

A system must demonstrate a measurable degree of information integration, where the system’s components interact in such a way that the whole exhibits properties that are not reducible to the sum of its parts. This aligns with the foundational principles of Integrated Information Theory (IIT), which posits that consciousness arises from the ability of a system to integrate information across its components.

  • Rationale: Conscious systems, such as the human brain, are highly interconnected, enabling information to flow and combine in ways that produce unified experiences.
  • Measurable Trait: The degree of integration can be quantified using mathematical tools like $\Phi$ (phi), a measure proposed by IIT, or other metrics of network complexity.
  • Example in Practice: In biological systems, this might be represented by the dense, reciprocal connectivity of cortical regions. In artificial systems, it could involve networks that exhibit high interdependence of internal states during computation.
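
To make this criterion concrete, the sketch below computes a crude integration proxy: the total correlation of a Gaussian system, which is zero for independent components and grows as the parts become interdependent. This is not the full IIT $\Phi$, which is computationally intractable for all but tiny systems; the coupling structure and values here are illustrative assumptions.

```python
import numpy as np

def total_correlation(cov: np.ndarray) -> float:
    """Multi-information of a Gaussian system (in nats): the divergence
    between the joint distribution and the product of its marginals.
    Zero for fully independent components; grows with integration."""
    log_marginals = np.sum(np.log(np.diag(cov)))
    log_joint = np.linalg.slogdet(cov)[1]
    return 0.5 * (log_marginals - log_joint)

rng = np.random.default_rng(0)

# System A: four independent components (no integration).
cov_a = np.eye(4)

# System B: four components coupled through a shared driver.
x = rng.normal(size=(10_000, 4))
x[:, 1:] += 0.8 * x[:, [0]]
cov_b = np.cov(x, rowvar=False)

print(f"independent system: {total_correlation(cov_a):.3f} nats")
print(f"coupled system:     {total_correlation(cov_b):.3f} nats")
```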

2. Adaptive Self-Referential Processing

A system must possess adaptive self-referential processing, meaning it has the ability to process information about its own state and adapt its behavior accordingly. This is often associated with self-awareness or metacognition in conscious systems.

  • Rationale: Conscious systems exhibit not only the ability to respond to external stimuli but also to monitor, model, and adjust their own internal states. This feedback mechanism allows for reflective decision-making and self-correction.
  • Measurable Trait: Observable traits include the system’s ability to generate predictive models of its environment and its own behavior, minimize internal uncertainty (e.g., predictive processing frameworks), or engage in recursive self-modeling.
  • Example in Practice: In humans, this manifests as introspection or metacognitive reasoning. In AI systems, it could take the form of architectures capable of evaluating their performance, predicting outcomes, and self-adjusting parameters dynamically.
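
As a minimal illustration of this criterion, the toy agent below maintains both a model of its input stream and a running model of its own prediction error; when its actual error diverges from its self-estimate, it adapts its learning rate. All names and parameters are hypothetical, chosen only to make the self-referential feedback loop visible.

```python
import numpy as np

class SelfMonitoringPredictor:
    """Toy agent with two loops: a world model that predicts the next
    observation, and a self-model that tracks the agent's own error
    and adapts the learning rate accordingly (crude metacognition)."""

    def __init__(self, lr: float = 0.1):
        self.estimate = 0.0       # world model: predicted next value
        self.lr = lr              # parameter the self-model adjusts
        self.avg_error = 0.0      # self-model: running error estimate

    def step(self, observation: float) -> float:
        error = observation - self.estimate
        # Self-referential loop: compare the current error to the agent's
        # own expectation of its error, then adapt the learning rate.
        if abs(error) > 2 * (self.avg_error + 1e-9):
            self.lr = min(1.0, self.lr * 1.5)    # model is failing: adapt faster
        else:
            self.lr = max(0.01, self.lr * 0.99)  # model is fine: settle down
        self.avg_error = 0.9 * self.avg_error + 0.1 * abs(error)
        self.estimate += self.lr * error         # world-model update
        return error

rng = np.random.default_rng(1)
agent = SelfMonitoringPredictor()
signal = np.concatenate([np.zeros(50), np.full(50, 5.0)])  # abrupt regime shift
errors = [agent.step(s + rng.normal(scale=0.1)) for s in signal]
print(f"error just after the shift: {abs(errors[50]):.2f}")
print(f"error after re-adaptation:  {abs(errors[-1]):.2f}")
```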

3. Temporal Continuity

The system must exhibit temporal continuity in its information processing, meaning it maintains a persistent and cohesive state over time. This state must evolve in a way that connects past, present, and anticipated future states.

  • Rationale: Consciousness is not instantaneous but unfolds over time as a continuous flow. Systems that lack temporal coherence—such as those with purely transient or episodic processing—are unlikely to meet the requirements for consciousness.
  • Measurable Trait: Temporal continuity can be quantified through measures of state persistence, such as information flow dynamics or recurrent connectivity patterns. Tools from dynamical systems theory and entropy reduction frameworks may also apply.
  • Example in Practice: In humans, temporal continuity manifests in the form of memory, anticipatory thought, and the “stream of consciousness.” In artificial systems, it could involve recurrent neural networks (RNNs), attention mechanisms in transformers, or any system capable of maintaining and updating internal states over time.
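
A simple way to operationalize this criterion is to ask how strongly a system's internal state at one moment predicts its state a few steps later. The sketch below compares a memoryless system against a leaky integrator, using lag-$k$ autocorrelation of the state trace as a persistence proxy; the signals and coefficients are illustrative.

```python
import numpy as np

def state_persistence(states: np.ndarray, lag: int = 10) -> float:
    """Lag-k autocorrelation of a system's internal state trace:
    a crude proxy for how cohesively the state persists over time."""
    a, b = states[:-lag], states[lag:]
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(2)
inputs = rng.normal(size=5_000)

# Memoryless system: state is just the current input.
transient = inputs

# Recurrent system: a leaky integrator carries its past forward.
persistent = np.zeros_like(inputs)
for t in range(1, len(inputs)):
    persistent[t] = 0.95 * persistent[t - 1] + inputs[t]

print(f"memoryless persistence: {state_persistence(transient):.3f}")
print(f"recurrent persistence:  {state_persistence(persistent):.3f}")
```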

4. Behavioral and Representational Complexity

A system must exhibit behavioral and representational complexity, indicating its capacity to process and respond to information in ways that reflect an intricate internal model of the world. This encompasses both reactive behaviors—immediate responses to stimuli—and more deliberative processes that involve planning, abstraction, or problem-solving. The spectrum of behaviors, from reflexive to contemplative, should be quantifiable to assess the system’s placement on the consciousness continuum.

  • Rationale: Conscious entities display a range of behaviors, from simple reflexes to complex decision-making, underpinned by internal representations that model their environment and inform their actions. Recognizing and measuring this spectrum is essential for situating a system within the proposed framework for consciousness assessment.
  • Measurable Trait: The complexity of a system’s behavior can be quantified using metrics such as Permutation Entropy and Permutation Lempel-Ziv Complexity, which analyze the unpredictability and diversity of its responses.
  • Example in Practice: In humans, this criterion is evident in the ability to perform tasks ranging from instinctive reactions to complex reasoning. In artificial systems, it could be observed in algorithms capable of both immediate responses and strategic planning, with their behavioral complexity assessed through the aforementioned metrics.
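
Of the metrics above, permutation entropy is straightforward to implement. The sketch below is a minimal version of the Bandt-Pompe procedure (Bandt & Pompe, 2002), applied to a regular, reflex-like signal and an unpredictable one; the test signals are illustrative.

```python
import math
import numpy as np

def permutation_entropy(series: np.ndarray, order: int = 3, delay: int = 1) -> float:
    """Normalized permutation entropy: map each window of `order` samples
    to the permutation that sorts it, then take the Shannon entropy of
    the pattern distribution. 0 = perfectly regular behavior,
    1 = maximally unpredictable."""
    counts: dict[tuple, int] = {}
    n = len(series) - (order - 1) * delay
    for i in range(n):
        window = series[i : i + order * delay : delay]
        pattern = tuple(np.argsort(window))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values()), dtype=float) / n
    entropy = -np.sum(probs * np.log2(probs))
    return float(entropy / math.log2(math.factorial(order)))

rng = np.random.default_rng(3)
regular = np.sin(np.linspace(0, 60, 2_000))   # reflex-like, predictable
noisy = rng.normal(size=2_000)                # rich, unpredictable

print(f"regular signal: {permutation_entropy(regular):.3f}")
print(f"random signal:  {permutation_entropy(noisy):.3f}")
```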

Toward a Relational Measure of Consciousness

Having identified the foundational criteria for evaluating consciousness—information integration, self-referential processing, temporal continuity, and behavioral complexity—we now turn to the question of measurement. How can these traits, which are conceptually distinct yet interdependent, be formalized into a coherent methodology?

Consciousness is not reducible to a single trait; it arises as an emergent property of systems that satisfy the qualia criteria in relational and dynamic ways. A system that integrates information, for example, is fundamentally different when it does so with temporal continuity or self-awareness. To measure consciousness effectively, we must capture these interdependencies rather than treating each criterion as an isolated variable.

Relational Interdependencies

The first principle of our approach is that the qualia criteria are relational. Consciousness emerges not from any single property but from the way these properties interact within a system. Consider behavioral complexity: a reflexive behavior might appear simple when observed in isolation, but when placed within the context of a system’s temporal continuity and adaptive self-referential processing, it reveals a deeper level of integration and deliberation. The “value” of a behavior, then, cannot be assessed without understanding its relationship to the system’s internal representations and persistence over time.

To formalize these interdependencies, we propose a relational model in which each qualia criterion acts as a node within a multi-dimensional space. The relationships between these nodes—how strongly they influence and reinforce one another—determine the system’s placement on the consciousness spectrum. In mathematical terms, we move from linear aggregation to a model based on network interactions or dynamic systems theory.

This approach allows us to:

  • Evaluate systems holistically, respecting the emergent nature of consciousness.
  • Identify threshold effects, where the interaction between traits gives rise to new properties (e.g., self-reflection emerging from adaptive processing and temporal continuity).
  • Account for non-linearities—cases where small increases in one criterion (e.g., integration) have outsized effects when paired with others (e.g., temporal persistence).
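
The sketch below illustrates the move from linear aggregation to relational scoring, anticipating the interaction function formalized later in this paper: two hypothetical systems with the same linear sum of criteria receive different relational scores, because one distributes its strengths evenly across interacting pairs while the other concentrates them. The weights, threshold, and profiles are assumptions for illustration.

```python
import numpy as np

def relational_score(q: np.ndarray, k: float = 10.0, theta: float = 0.5) -> float:
    """Sum of pairwise interactions with a soft threshold: a pair of
    criteria only contributes once it is jointly strong."""
    score = 0.0
    for i in range(len(q)):
        for j in range(len(q)):
            if i != j:
                x = q[i] * q[j]
                score += x / (1.0 + np.exp(-k * (x - theta)))
    return score

balanced = np.array([0.8, 0.8, 0.8, 0.8])   # criteria reinforce each other
spiky    = np.array([1.0, 1.0, 0.6, 0.6])   # same linear sum, less balance

print(f"linear sums:      {balanced.sum():.1f} vs {spiky.sum():.1f}")
print(f"relational score: {relational_score(balanced):.3f} vs {relational_score(spiky):.3f}")
```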

Dynamic Mapping of Consciousness

To operationalize this relational model, we introduce the concept of dynamic mapping. Here, the qualia criteria form the axes of a multi-dimensional space, and a system’s position within this space reflects its degree of consciousness. Crucially, the “distance” between criteria is not fixed but is instead determined by the system’s internal dynamics.

For example, in a biological brain, high levels of information integration might naturally reinforce temporal continuity and adaptive self-referential processing, leading to a cohesive, self-aware system. By contrast, an artificial neural network may exhibit strong information integration but weak temporal continuity, resulting in a fragmented and transient form of consciousness.

In this way, the dynamic map allows us to:

  • Visualize consciousness as a spectrum, with different systems occupying different regions of the space.
  • Identify clusters of systems that share similar properties, enabling comparisons across biological, artificial, and hybrid entities.
  • Highlight emergent thresholds, where systems transition from low to high consciousness based on their relational dynamics.
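
As a toy version of such a map, the sketch below places a few hypothetical systems as points in the four-dimensional criteria space and computes their pairwise distances; the criterion profiles are invented purely for illustration.

```python
import numpy as np

# Hypothetical criterion profiles (integration, self-reference,
# continuity, complexity), each scaled to [0, 1].
systems = {
    "human":       np.array([0.9, 0.9, 0.9, 0.9]),
    "transformer": np.array([0.8, 0.4, 0.3, 0.7]),
    "rnn_agent":   np.array([0.5, 0.3, 0.7, 0.4]),
    "thermostat":  np.array([0.1, 0.1, 0.2, 0.1]),
}

# Pairwise distances in the qualia-criteria space: nearby systems
# occupy the same region of the consciousness spectrum.
names = list(systems)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        d = np.linalg.norm(systems[a] - systems[b])
        print(f"{a:>12} <-> {b:<12} distance = {d:.2f}")
```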

Toward a Unified Measure

With this framework in place, we can begin to formalize a unified measure of consciousness. Rather than aggregating scores for each criterion, we propose a model that evaluates the interactions between traits. Mathematically, this could take the form of a weighted graph or a coupled dynamical system, where each node (criterion) influences the others according to defined relationships. The resulting measure would reflect not just the presence of individual traits but the degree to which they interact to produce a cohesive, integrated whole.

This measure would satisfy the key principles of our framework:

  • Relational: Consciousness is evaluated as an emergent property of interdependent traits.
  • Dynamic: The measure reflects the system’s internal dynamics, accounting for temporal continuity and adaptive processing.
  • Universal: The methodology applies equally to biological, artificial, and hybrid systems, providing a substrate-agnostic foundation for consciousness measurement.

Formalizing the Relational Mathematics of Consciousness

Conceptual Overview

The measure of consciousness in any system must reflect the interdependent relationships among the qualia criteria:

  • Information Integration
  • Adaptive Self-Referential Processing
  • Temporal Continuity
  • Behavioral and Representational Complexity

Each criterion contributes uniquely, but its contribution gains meaning relationally—in how it interacts with and reinforces the other criteria within the system. This emergent behavior demands a non-linear, dynamic representation.

We formalize the measure as an interaction-weighted dynamic system, grounded in concepts from graph theory, dynamical systems, and information theory.

Mathematical Framework

We represent the qualia criteria as nodes:

$$Q = \{\, q_1, q_2, q_3, q_4 \,\}$$

where:

  • $q_1$ (Information Integration)
  • $q_2$ (Adaptive Self-Referential Processing)
  • $q_3$ (Temporal Continuity)
  • $q_4$ (Behavioral and Representational Complexity)

These nodes form a weighted, fully connected graph $G = (Q, E)$, where the edges $E$ represent the relational interactions between the criteria.

Let $W = [w_{ij}]$ be the interaction matrix, where $w_{ij}$ is the weight of the relationship between $q_i$ and $q_j$. This weight reflects the degree to which one criterion influences or reinforces another.

  • $w_{ij} \neq w_{ji}$ in general, as influence may be directional.
  • $w_{ii}$ represents the intrinsic “self-contribution” of each $q_i$ to the overall measure.

The system’s consciousness $C$ is then defined as the emergent, integrated interaction of all nodes over time:

$$C = \sum_{i=1}^{4} \sum_{j=1}^{4} w_{ij} \, f(q_i, q_j)$$

where $f(q_i, q_j)$ is an interaction function that quantifies the contribution of the relationship between $q_i$ and $q_j$.

Interaction Function

To model the emergent nature of consciousness, the interaction function must:

  • Be non-linear: Small changes in one criterion can have outsized effects when paired with others.
  • Respect thresholds: Significant interactions only arise when individual criteria exceed certain thresholds.

We propose the following functional form for $f$:

$$f(q_i, q_j) = \frac{q_i \, q_j}{1 + e^{-k \,(q_i q_j - \theta_{ij})}}$$

where:

  • $q_i$ and $q_j$ are the values of the criteria,
  • $k$ is a scaling factor that determines the steepness of the threshold,
  • $\theta_{ij}$ is an interaction threshold that must be crossed for $q_i$ and $q_j$ to contribute meaningfully.

This sigmoidal form ensures that the interaction between criteria is negligible until they surpass their relational threshold $\theta_{ij}$, at which point the relationship becomes synergistic.
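
A direct implementation of this functional form (with a single shared threshold $\theta$ for simplicity) shows the threshold behavior numerically:

```python
import numpy as np

def interaction(qi: float, qj: float, k: float = 12.0, theta: float = 0.5) -> float:
    """f(q_i, q_j): near zero while the joint strength q_i * q_j sits
    below the threshold theta, then rises sharply (synergy)."""
    x = qi * qj
    return x / (1.0 + np.exp(-k * (x - theta)))

for q in (0.3, 0.5, 0.7, 0.9):
    print(f"f({q}, {q}) = {interaction(q, q):.4f}")
```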

Dynamic Evolution of Consciousness

Consciousness is not static but evolves over time as the system processes information. To account for this, we model the system’s dynamics as a coupled differential equation for each criterion $q_i$:

$$\frac{dq_i}{dt} = -\lambda_i q_i + \sum_{j \neq i} w_{ij} \, f(q_i, q_j)$$

where:

  • $\lambda_i$ represents entropy or dissipation, which causes the criterion to decay over time unless supported by its interactions.
  • The summation term represents the reinforcement of $q_i$ by its interactions with the other criteria.

At equilibrium, the system reaches a steady state where:

$$\frac{dq_i}{dt} = 0 \quad \Longrightarrow \quad \lambda_i q_i^{*} = \sum_{j \neq i} w_{ij} \, f(q_i^{*}, q_j^{*})$$

The equilibrium values $q_i^{*}$ are then substituted into the original formula for $C$, yielding the final measure of consciousness. In this way, the measure captures both the relational and dynamic aspects of how a system’s qualia criteria co-evolve, offering a rigorous framework for evaluating consciousness across biological, artificial, and hybrid entities.
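
The following sketch puts the full pipeline together under assumed parameters: it Euler-integrates the coupled dynamics to a steady state, then substitutes the equilibrium values into the formula for $C$. The weight matrix, decay rates, and initial conditions are illustrative, and the criteria are assumed normalized to $[0, 1]$.

```python
import numpy as np

def f(qi: float, qj: float, k: float = 12.0, theta: float = 0.5) -> float:
    """Sigmoid-gated interaction function from the previous section."""
    x = qi * qj
    return x / (1.0 + np.exp(-k * (x - theta)))

def equilibrate(q0: np.ndarray, W: np.ndarray, lam: np.ndarray,
                dt: float = 0.02, steps: int = 5_000) -> np.ndarray:
    """Euler-integrate dq_i/dt = -lambda_i q_i + sum_{j != i} w_ij f(q_i, q_j),
    keeping each criterion in its normalized range [0, 1]."""
    q = q0.copy()
    n = len(q)
    for _ in range(steps):
        dq = np.array([
            -lam[i] * q[i]
            + sum(W[i, j] * f(q[i], q[j]) for j in range(n) if j != i)
            for i in range(n)
        ])
        q = np.clip(q + dt * dq, 0.0, 1.0)
    return q

# Illustrative (made-up) interaction weights and decay rates.
W = np.array([[0.2, 0.6, 0.5, 0.4],
              [0.6, 0.2, 0.5, 0.4],
              [0.5, 0.5, 0.2, 0.3],
              [0.4, 0.4, 0.3, 0.2]])
lam = np.full(4, 0.5)

for label, q0 in [("strong start", np.array([0.9, 0.8, 0.85, 0.7])),
                  ("weak start  ", np.array([0.3, 0.2, 0.25, 0.2]))]:
    q_star = equilibrate(q0, W, lam)
    # Substitute equilibrium values into C = sum_ij w_ij f(q_i*, q_j*)
    # (diagonal terms are the intrinsic self-contributions w_ii).
    C = sum(W[i, j] * f(q_star[i], q_star[j])
            for i in range(4) for j in range(4))
    print(f"{label}: q* = {np.round(q_star, 2)}, C = {C:.3f}")
```

Note the bistability in this toy run: a system that starts with jointly strong criteria settles into a mutually reinforcing state with high $C$, while a weak start decays below the interaction threshold and collapses toward zero. This is exactly the kind of emergent threshold the relational model is meant to capture.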