"If God created this multiverse and left it to run on its own for billions of years with the entire code written in a self-correcting way, what could be some of the rules affecting humans?"

— The question that started this inquiry
More Profound Than Fire — The Ten Laws of Human Information Dynamics

Google CEO Sundar Pichai has called artificial intelligence "more profound than fire or electricity." The Dalai Lama has spent decades inviting scientists to Dharamsala to probe the nature of the mind. What happens when you connect the mathematics of information theory to both — and discover they are describing the same ten structural laws that govern every human system?

This article proposes something specific: that the mathematical tools developed by Claude Shannon, Ilya Prigogine, Karl Friston, and Albert-László Barabási to describe how information is generated, transmitted, compressed, and degraded provide the most precise available language for understanding why humans grow, why institutions decay, why empires fall, why time seems to accelerate with age, and why meaning is not a destination but a rate of change.

Each of the ten laws below meets three criteria. It is derived from established science — thermodynamics, neuroscience, network mathematics. It generates falsifiable predictions — measurable claims that can be proven wrong. And it is augmented by two forces reshaping our century: agentic artificial intelligence that can absorb institutional entropy, and contemplative neuroscience that reveals how the mind itself can be optimised to process genuine novelty.

The Ten Laws

Law I
The Entropy Tax
All order — in bodies, relationships, institutions, and civilisations — requires continuous energy expenditure to maintain. The cost of maintenance grows non-linearly with the complexity it sustains. Neglect is not neutral; it is entropic.

The Second Law of Thermodynamics establishes that entropy in an isolated system never decreases. Schrödinger extended this to biology in 1944: organisms maintain their ordered state by importing negative entropy from their environment. Prigogine formalised how open systems maintain local order only by exporting entropy — at an energy cost that increases with distance from equilibrium.

A marriage, a business, a body, and a nation are all dissipative structures. The moment energy input falls below the maintenance threshold, the system doesn't pause — it degrades. Ginsberg documented that US universities added 517,636 administrators and professional employees between 1987 and 2012 while educational output stagnated. Tainter demonstrated across seventeen civilisations that collapse occurs when the marginal cost of complexity exceeds its marginal benefit.

AI Integration Agentic AI systems absorb bureaucratic entropy — handling logistics, compliance, and administrative complexity that would otherwise overwhelm human operators. Google's Universal Commerce Protocol (UCP) exemplifies this: machines negotiate supply chains so humans don't have to.
Contemplative Wisdom Buddhism's Anicca (impermanence) is the entropy tax recognised from the inside. Suffering arises when we grant "ontological stability" to transient phenomena — fighting a mathematical impossibility. Acceptance is energy conservation.
Law II
The Channel Limit
Every relationship, mind, and institution has a maximum information throughput. Attempting to transmit more signal than the channel can carry does not produce more understanding — it produces noise.

Shannon's noisy channel coding theorem proves mathematically that for any communication channel with a given noise level, there exists a maximum rate C at which information can be reliably transmitted. This is not a guideline — it is as absolute as the speed of light. A person can process approximately 120 bits per second of conscious information. Understanding speech requires roughly 60 bits per second. Attending to two simultaneous conversations is not merely difficult; it is physically impossible.

The epidemic of burnout is not a motivational failure — it is a channel capacity violation. The solution is never to try harder. It is to reduce noise or increase bandwidth through rest, focus, or structural simplification.
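The arithmetic behind the channel limit can be sketched in a few lines. The 120 and 60 bits-per-second figures are the approximations quoted above, and the hard-saturation model is deliberately simplified:

```python
import math

def shannon_hartley(bandwidth_hz: float, snr: float) -> float:
    """Shannon-Hartley theorem: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr)

def effective_throughput(demand_bps: float, capacity_bps: float) -> float:
    """Pushing more than C through a channel does not raise throughput;
    the excess is simply lost as errors (noise)."""
    return min(demand_bps, capacity_bps)

CAPACITY = 120.0  # bits/s of conscious processing (the article's figure)
SPEECH = 60.0     # bits/s for one speech stream (the article's figure)

for streams in (1, 2, 3):
    demand = streams * SPEECH
    understood = effective_throughput(demand, CAPACITY)
    print(f"{streams} conversation(s): demand {demand:.0f} bits/s, "
          f"understood at most {understood:.0f} bits/s")
```

Two streams already consume the entire budget, leaving nothing for any other conscious processing; a third guarantees information loss regardless of effort.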

AI Integration Ambient AI acts as a pre-processing layer, compressing vast data streams into low-noise, high-signal interfaces before they reach the human sensory threshold — functioning as a noise-cancellation layer for human attention.
Contemplative Wisdom Brewer et al. (2011, PNAS) showed meditators exhibit reduced default-mode network activity — the brain regions responsible for self-referential rumination. Mindfulness is internal noise reduction, mathematically expanding available channel capacity.
Law III
The Compression Gradient
Subjective time, memory density, and perceived meaning are inversely proportional to the predictability of experience. Routine compresses life. Novelty expands it. A year of repetition encodes as a single memory; a week of genuine novelty encodes as a lifetime.

Eagleman & Pariyadath (2009) proposed that subjective duration is a signature of coding efficiency. Repeated stimuli undergo neural repetition suppression: the brain allocates fewer resources because familiar events carry less information in the Shannon sense.

This creates a fundamental asymmetry: routine feels slow while happening but fast in retrospect (few memories encoded). Novelty reverses both — novel experiences feel fast in the moment but slow looking back (dense memory encoding). This is why a two-week holiday in a new country feels like it lasted a month, while six months of office routine compresses into a blur.
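The compression claim can be made literal with a general-purpose compressor. This is of course a toy proxy (DEFLATE, not the brain's encoding), but it shows why a repetitive year needs far fewer bits to store than a novel one of identical length:

```python
import zlib

def stored_bits(text: str) -> int:
    """Bits needed after DEFLATE compression: a crude proxy for how much
    information an experience stream actually contains."""
    return 8 * len(zlib.compress(text.encode("utf-8")))

# A "year" of identical days versus an equal-length stream of distinct days.
routine = "wake commute work commute sleep " * 365
novelty = " ".join(f"day-{i}-{(i * i) % 9973}" for i in range(365)).ljust(len(routine))

print(stored_bits(routine), stored_bits(novelty))
```

The routine stream collapses to almost nothing once its redundancy is removed; the novel stream cannot be compressed away, because each day genuinely adds information.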

AI Integration By delegating predictable, repetitive tasks to machines, humans are freed from temporally compressed, low-meaning labour — structurally expanding experienced time.
Contemplative Wisdom The Zen concept of Shoshin (Beginner's Mind) — approaching even familiar activities with conscious awareness — extracts subtle novelty from routine, expanding subjective experience without changing external circumstances.
Law IV
The Prediction Error Engine
All learning, growth, and adaptation in biological systems are driven by the gap between expectation and reality. Eliminate surprise and you eliminate growth. The brain does not learn from what it already knows; it learns from what violates its model of the world.

Schultz, Dayan & Montague (1997, Science) established that midbrain dopamine neurons fire in direct proportion to reward prediction errors. This is the physical mechanism by which the brain updates its model of reality. In Shannon's framework, the information content of an event is I = −log₂(p). An event with certainty carries zero information. The dopamine system is, functionally, a biological implementation of Shannon's surprise metric.
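Both quantities in the paragraph above fit in a few lines; the 0.5 learning rate is an arbitrary illustration, not a biological constant:

```python
import math

def self_information(p: float) -> float:
    """Shannon surprise of an event with probability p: I = -log2(p) bits.
    A certain event (p = 1) carries zero information."""
    return -math.log2(p)

def prediction_error(actual: float, expected: float) -> float:
    """The quantity midbrain dopamine neurons track: actual minus expected."""
    return actual - expected

# Repeated identical rewards: the error, and therefore learning, decays to zero.
expected = 0.0
errors = []
for reward in (1.0, 1.0, 1.0, 1.0, 1.0):
    delta = prediction_error(reward, expected)
    errors.append(delta)
    expected += 0.5 * delta  # update the model only in proportion to surprise

print(self_information(0.5), self_information(0.01))  # 1 bit vs ~6.64 bits
print(errors)  # each repetition halves the surprise: 1.0, 0.5, 0.25, ...
```

Once the reward is fully predicted, delta is effectively zero and the model stops changing — the formal sense in which "eliminate surprise and you eliminate growth."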

Karl Friston's Free Energy Principle resolves an apparent paradox: if organisms minimise surprise (to maintain homeostasis), why do they actively seek novelty? The answer lies in active inference — organisms deliberately forage for information that they have the current channel capacity to integrate, because reducing future uncertainty requires seeking present prediction errors. Curiosity is mathematically required by surprise-minimisation.

AI Integration Deep-thinking models map the boundaries of human knowledge, identifying optimal zones of prediction error — not too predictable (zero learning), not too chaotic (system overwhelm) — and delivering structured surprise calibrated to the user's capacity.
Contemplative Wisdom Contemplative practice reframes discomfort as the necessary mechanism for growth. The violated expectation is not failure — it is the only moment when the system is actually learning. This aligns with Vygotsky's Zone of Proximal Development: structured challenge within capacity.
Law V
The Dependency Topology
Your value to any system — family, organisation, market, civilisation — is not measured by what you accumulate but by the cost the system would incur to route around your absence. Accumulation is storage. Indispensability is architecture.

In network science, betweenness centrality measures the fraction of shortest paths between all node pairs passing through a given node. Removing a node with high betweenness centrality forces the entire network to find costlier alternative paths — or splits it entirely. Burt's structural holes theory demonstrated that individuals who bridge disparate networks receive disproportionate returns in salary, promotions, and influence.

A person who hoards resources but connects to few other nodes is a peripheral storage device. The system loses nothing by routing around them. A person who bridges industries, cultures, or disciplines has high centrality. Removing them is catastrophically expensive.
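Betweenness can be computed by brute force for a toy graph. Real libraries use Brandes' algorithm; this sketch enumerates shortest paths directly and uses a simplified normalisation:

```python
from collections import deque
from itertools import combinations

def all_shortest_paths(graph: dict, s: str, t: str) -> list:
    """Enumerate every shortest s-t path in an unweighted graph via BFS."""
    paths, best = [], None
    queue = deque([[s]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # BFS yields paths in nondecreasing length
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for nbr in graph[node]:
            if nbr not in path:
                queue.append(path + [nbr])
    return paths

def betweenness(graph: dict, v: str) -> float:
    """Fraction of shortest paths between other node pairs passing through v."""
    through = total = 0
    for s, t in combinations(graph, 2):
        if v in (s, t):
            continue
        paths = all_shortest_paths(graph, s, t)
        total += len(paths)
        through += sum(v in p for p in paths)
    return through / total

# Two clusters connected only through the bridge node "B".
graph = {
    "a1": ["a2", "B"], "a2": ["a1", "B"],
    "b1": ["b2", "B"], "b2": ["b1", "B"],
    "B": ["a1", "a2", "b1", "b2"],
}
print(betweenness(graph, "B"), betweenness(graph, "a1"))  # bridge vs periphery
```

Remove "B" and the two clusters disconnect entirely; remove "a1" and no path changes at all — the quantitative meaning of "indispensability is architecture."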

AI Integration Protocols like Google's UCP create friction-free bridges across digital ecosystems, enabling seamless agent-to-agent negotiation — raising the system's total connectivity and reducing single-point-of-failure vulnerability.
Contemplative Wisdom The Buddhist concept of Śūnyatā (Emptiness) — no entity possesses independent self-nature; identity exists only through relationships. A node is defined entirely by its edges. Hoarding is mathematically suboptimal because it isolates the node from the network.
Law VI
The Attachment Resonance
High-intensity need for a specific outcome creates a feedback loop that destabilises the system's capacity to reach that outcome. The more desperately a variable is pursued, the more the system's resources are consumed by the pursuit itself.

In control theory, a feedback loop with excessive gain becomes unstable — oscillating with increasing amplitude until the system saturates or destroys itself. When a system dedicates increasing processing capacity to monitoring a single variable, it reduces the bandwidth available for processing all other variables. The environmental signals that would actually lead to the desired outcome are missed because the channel is saturated with self-referential monitoring noise.

This is why people often achieve goals after ceasing to pursue them obsessively. It is not mystical — it is informational. The practical prescription: set the direction, then redirect processing resources to the environment rather than to the internal state of wanting.
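The instability threshold is visible in the simplest possible control loop; the gain values below are arbitrary, chosen only to sit on either side of it:

```python
def run_controller(gain: float, steps: int = 12, error: float = 1.0) -> list:
    """Discrete proportional control pushing an error toward zero:
    error <- error - gain * error. For gain in (0, 2) the error decays;
    beyond 2, each correction overshoots more than it fixes, and the
    oscillation grows until the system saturates."""
    history = [error]
    for _ in range(steps):
        error = error - gain * error
        history.append(error)
    return history

calm = run_controller(gain=0.5)      # relaxed monitoring: converges
obsessed = run_controller(gain=2.5)  # excessive gain: diverging oscillation
print(abs(calm[-1]), abs(obsessed[-1]))
```

The obsessed controller is not merely slower; it ends further from the goal than it started, with each over-correction amplifying the last.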

AI Integration In AI alignment, single-metric optimisation causes "reward hacking." The solution — holistic, multi-variable objective functions — mirrors the human prescription: avoid fixation on one metric, monitor the full system.
Contemplative Wisdom Buddhism's central teaching: Upādāna (clinging) is the root cause of Dukkha (suffering). Suffering is not an emotional state — it is an informational bottleneck. Attachment blinds the agent to present reality, restricting novel information flow.
Law VII
The Complexity Ceiling
Every system accumulates complexity until the cost of maintaining internal coherence exceeds the value of its output. At this threshold, the system does not gradually decline — it undergoes a phase transition into stagnation, fragmentation, or collapse.

Tainter's archaeological analysis across seventeen civilisations showed that complexity is initially adaptive but eventually becomes maladaptive. Olson's institutional sclerosis theory formalised how stable democracies accumulate rent-seeking coalitions that increase regulatory complexity while reducing adaptive capacity. Klimek, Hanel & Thurner identified the mathematical phase-transition boundary where bureaucratic growth becomes self-sustaining and exponential.

As a system adds rules, its internal state space grows combinatorially. Each new rule can interact with every existing rule. Beyond the critical threshold, the system spends more energy on self-coherence than on its ostensible purpose.
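A toy model makes the phase boundary concrete. The value and cost coefficients below are invented for illustration, not empirical estimates:

```python
def pairwise_interactions(n_rules: int) -> int:
    """Each new rule can interact with every existing rule, so potential
    interactions grow as n * (n - 1) / 2 -- quadratically, not linearly."""
    return n_rules * (n_rules - 1) // 2

def net_output(n_rules: int, value_per_rule: float = 10.0,
               coherence_cost: float = 0.0625) -> float:
    """Toy model (assumed coefficients): output scales linearly with rules,
    while the cost of keeping rules mutually coherent scales with their
    pairwise interactions."""
    return n_rules * value_per_rule - coherence_cost * pairwise_interactions(n_rules)

# The complexity ceiling: the point where one more rule reduces net output.
n = 1
while net_output(n + 1) > net_output(n):
    n += 1
print(f"ceiling at {n} rules; net output {net_output(n):.1f}")
```

Linear benefit against quadratic coherence cost guarantees such a ceiling exists for any positive coefficients; the parameters only move where it sits.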

AI Integration AI enables continuous algorithmic refactoring — pruning dead code, outdated regulations, and redundant processes before the critical complexity threshold is reached. This artificially raises the ceiling.
Contemplative Wisdom Voluntary reduction of complexity. The contemplative traditions advocate simplicity of being — finding meaning in elegant, minimalist modes of existence, not in accumulation. Pruning is a spiritual practice.
Law VIII
The Novelty Premium
Systems — biological, economic, and social — disproportionately allocate resources to agents generating non-redundant information. Repetition is maintenance. Novelty is investment.

Sexual reproduction exists despite being energetically expensive because it generates genetic novelty. Schumpeter's creative destruction formalised how innovation generates disproportionate returns. Bunzeck & Düzel (2006) demonstrated the brain allocates preferential encoding resources to novel stimuli. From the system's perspective, agents generating novel information are the only ones expanding its state space — the only ones making it capable of new responses.

If your work can be described by someone who has not seen it, its information content is zero. A consulting report that says what every consulting report says, a CV that lists what every CV lists — these are high-redundancy, low-information outputs. The system does not reward them because they do not teach it anything new.

AI Integration AI commoditises information utility — retrieval, formatting, summarisation. What remains uniquely human is semantic novelty: the expansion of the moral, philosophical, and creative state space. Recent research estimates the semantic payload of language at ~20 bits per clause — the irreducible meaning that drives model updating.
Contemplative Wisdom Valuing the generation of genuine meaning and moral evolution over the mere recycling of existing information. Depth over volume. The contemplative traditions have always prioritised the quality of attention over the quantity of output.
Law IX
The Conservation of Attention
Conscious processing is a scarce resource allocated dynamically, not a constant state. Most human behaviour executes without it. You are fully "online" only when processing novel, high-prediction-error information. The rest of the time, you are running cached scripts.

Dehaene's Global Neuronal Workspace theory established that conscious awareness requires coordinated activation across prefrontal, parietal, and sensory cortices — an energetically expensive state the brain cannot maintain continuously. Kahneman's System 1/System 2 framework distinguished between automatic and deliberate processing. The danger in modern environments: either demanding conscious attention for mundane tasks (burnout) or rendering the environment so predictable that consciousness is rarely triggered (cognitive dormancy).

Google Cloud's 2026 AI Agent Trends Report surveyed 3,466 global executives and found the shift from "instruction-based" to "intent-based" computing: employees define outcomes, agents determine steps. Telus reported 57,000 team members saving 40 minutes per AI interaction. This isn't mere efficiency — it is the conservation of humanity's scarcest resource.

AI Integration Design technology that respects human attention — interrupting only when ethical judgment or creative synthesis is required. The "agent-first web" eliminates the "interface tax," conserving attention for high-value cognition.
Contemplative Wisdom Lutz et al. (2004, PNAS) found that long-term meditators (10,000–50,000 hours) show sustained high-amplitude gamma oscillations even at baseline — they have reconfigured their default neural processing to be more conscious more often. Meditation trains the system to override cached scripts.
Law X
The Regression Attractor
Extreme deviations from system baselines — in wealth, power, health, or emotion — are thermodynamically unstable and will revert toward the mean. Peaks are as temporary as troughs. The only durable position is one maintained by continuous energy input.

Galton's regression to the mean is one of the most robust empirical findings in statistics, observed across genetics, sports, economics, and clinical outcomes. Kahneman identified it as the phenomenon most consistently confused with causal explanation — people attribute reversion to skill or karma when it is purely mathematical. Extreme states are low-entropy configurations: statistically improbable given the system's state space.

This law dissolves the false dichotomy between optimism and pessimism. Suffering is temporary because extreme negative states are as unstable as extreme positive ones. The wise response to both extremes is the same: invest the energy that would be spent on emotional response into structural work that moves the baseline itself.
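Regression to the mean requires no mechanism beyond noisy measurement. A simulation under assumed parameters (skill and luck both standard normal, top 5% selected):

```python
import random

random.seed(42)  # deterministic illustration

def observe(skill: float, luck_sd: float = 1.0) -> float:
    """One measurement = stable skill plus transient luck."""
    return skill + random.gauss(0.0, luck_sd)

skills = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Rank everyone by a first noisy measurement and keep the top 5%.
first = sorted(((observe(s), s) for s in skills), reverse=True)[:500]
mean_first = sum(score for score, _ in first) / len(first)

# Remeasure the same people: skill persists, luck is redrawn.
mean_second = sum(observe(s) for _, s in first) / len(first)

print(f"top 5% scored {mean_first:.2f} at selection, {mean_second:.2f} on remeasurement")
```

The drop is purely mathematical: extreme first scores were part skill, part luck, and only the skill component survives the second draw. No causal story is needed to explain it.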

AI Integration Counter-cyclical system design: building redundancy and diverse portfolios that remain robust during inevitable regressions. AI enables organisations to plan for regression in all projections rather than being blindsided.
Contemplative Wisdom Upekkhā (Equanimity): maintaining emotional balance amidst inevitable oscillations. Kral et al. (2018) found that long-term meditators averaging 9,081 hours of practice showed significantly lower amygdala reactivity — a quantitative dose-response relationship for cultivated equanimity.

The Meta-Structure: How the Laws Interact

The ten laws are not independent axioms — they form an interconnected system. The Growth Engine (Laws IV + VIII + III) describes how humans create value: encounter surprise → update model → generate novel output → receive disproportionate reward → expand experienced time. The Decay Engine (Laws I + VII + II) describes how systems fail: accumulated complexity → rising maintenance cost → channel overload → signal degradation → collapse. The Equilibrium Mechanism (Laws VI + X) operates as a stabiliser: attachment accelerates regression, while detachment allows baselines to shift sustainably.

Law Pair | Type | Observable Effect
I + VII | Amplifying | Entropy tax compounds with complexity: old institutions decay faster than young ones
II + I | Cascading | Channel overload accelerates entropy: overwhelmed systems lose coherence faster
III + IV | Enabling | Novelty expands time and drives learning: the same mechanism serves both
IV + VIII | Rewarding | Prediction errors drive growth; the novelty premium rewards the output of growth
VI + X | Counterbalancing | Attachment accelerates regression; detachment allows the baseline to shift
IX + IV | Gating | Attention is allocated only when prediction error exceeds threshold

The Convergence: Where Fire Meets Wisdom

"If we can hold these kind of dialogues every now and then it will be really wonderful. From a Buddhist point of view too, engaging in such dialogues, rather than performing rituals and so on, is very helpful."

— His Holiness the Dalai Lama, 39th Mind & Life Dialogue, Dharamsala, October 2025

In October 2025, 120 scientists, Buddhist scholars, and AI researchers gathered in Dharamsala for the 39th Mind & Life Dialogue on "Minds, Artificial Intelligence, and Ethics" — with Richard Davidson, Emily Bender, Iason Gabriel from Google DeepMind, and Molly Crockett from Princeton among the participants. The central question: how can humanity coexist with AI while safeguarding wisdom and compassion?

This framework offers a structural answer. Friston's Free Energy Principle provides the mathematical backbone: intelligent systems minimise surprise by building better models of their world, and this requires seeking out prediction errors. Contemplative neuroscience provides empirical evidence that human brains can be trained to reduce self-referential noise, improve signal-to-noise ratios, and cultivate equanimity — optimising the biological system's capacity to process meaningful information. Agentic AI creates an external entropy buffer that handles logistical complexity so human attention can be directed toward strategic, creative, and meaning-making work.

The highest function of technology is to liberate human attention from the entropic friction of daily maintenance. The highest function of human attention is to cultivate the mental clarity, structural compassion, and semantic novelty necessary to guide those technological systems. In this synthesis, artificial intelligence and ancient wisdom do not compete — they interlock perfectly, providing both the computational infrastructure and the ethical telemetry required for the next epoch of human evolution.

Through this balanced orchestration, the relentless laws of human information dynamics are not merely endured, but mastered.

Twenty Falsifiable Predictions

A framework that cannot be wrong is not science. The following twenty predictions are specific, measurable, and testable. If five or more are empirically falsified, the framework requires fundamental revision.

ID | Law(s) | Prediction
P1 | I | Organisations where administrative overhead exceeds 60% of budget will show declining core output within 5 years.
P2 | II | Individuals reporting >4 simultaneous major life demands will show degraded performance on cognitive tasks compared to those reporting ≤2.
P3 | III | Participants introducing 2+ novel weekly activities will estimate the elapsed month as 15–20% longer than routine-matched controls.
P4 | IV | Students in prediction-error-optimised classrooms (structured surprise) will show 20%+ higher retention at 30 days versus lecture-only controls.
P5 | V | Professionals in the top quartile of network betweenness centrality will earn 30%+ more than credential-matched peers in the bottom quartile.
P6 | VI | Goal-obsessed individuals (measured by monitoring frequency) will show longer time-to-goal than moderate-monitoring individuals with equivalent baseline ability.
P7 | VII | Nations where regulatory word count grows >5% annually for a decade will show declining GDP growth per regulatory word added.
P8 | VIII | First-to-market companies in new categories will capture >50% of total category profit over a 10-year period.
P9 | IX | fMRI studies will show reduced prefrontal activation in high-routine individuals compared to high-novelty individuals during identical decision tasks.
P10 | X | Forbes 400 wealth rankings will show >40% turnover per decade, consistent with regression dynamics.
P11 | I + VII | Universities >100 years old will have higher admin-to-faculty ratios than universities <30 years old, controlling for size.
P12 | III + IX | Individuals in high-routine jobs will report both faster-passing years AND lower life satisfaction scores.
P13 | IV + VIII | Patent filings per researcher will be higher in interdisciplinary labs (high prediction error) than single-discipline labs.
P14 | II + I | Healthcare workers reporting >50% administrative burden will show burnout rates 2× those of workers reporting <30%.
P15 | V + VIII | Individuals bridging two or more distinct industries will generate more commercially viable innovations than specialists.
P16 | VI + X | Lottery winners will revert to baseline life satisfaction within 2 years (replication of Brickman et al.).
P17 | VII | Empires at peak territorial extent will collapse within 150 years (testable against historical data).
P18 | III | Travellers in novel countries will produce 3× more diary entries per day than travellers revisiting familiar destinations.
P19 | IX | Meditation practitioners (≥30 min/day, 2+ years) will show enhanced prefrontal activation in novel-stimulus tasks.
P20 | IV + III | Retirees who pursue structured learning will report slower subjective ageing than retirees in routine environments.