"If God created this multiverse and left it to run on its own for billions of years with the entire code written in a self-correcting way, what could be some of the rules affecting humans?"
— The question that started this inquiry
Google CEO Sundar Pichai has called artificial intelligence "more profound than fire or electricity." The Dalai Lama has spent decades inviting scientists to Dharamsala to probe the nature of the mind. What happens when you connect the mathematics of information theory to both — and discover they are describing the same ten structural laws that govern every human system?
This article proposes something specific: that the mathematical tools developed by Claude Shannon, Ilya Prigogine, Karl Friston, and Albert-László Barabási to describe how information is generated, transmitted, compressed, and degraded provide the most precise available language for understanding why humans grow, why institutions decay, why empires fall, why time seems to accelerate with age, and why meaning is not a destination but a rate of change.
Each of the ten laws below meets three criteria. It is derived from established science — thermodynamics, neuroscience, network mathematics. It generates falsifiable predictions — measurable claims that can be proven wrong. And it is augmented by two forces reshaping our century: agentic artificial intelligence that can absorb institutional entropy, and contemplative neuroscience that reveals how the mind itself can be optimised to process genuine novelty.
The Ten Laws
Law I: The Entropy Tax

The Second Law of Thermodynamics establishes that entropy in an isolated system never decreases. Schrödinger extended this to biology in 1944: organisms maintain their ordered state by importing negative entropy from their environment. Prigogine formalised how open systems maintain local order only by exporting entropy, at an energy cost that increases with distance from equilibrium.
A marriage, a business, a body, and a nation are all dissipative structures. The moment energy input falls below the maintenance threshold, the system doesn't pause — it degrades. Ginsberg documented that US universities hired 517,636 administrators from 1987–2012 while educational output stagnated. Tainter demonstrated across seventeen civilisations that collapse occurs when the marginal cost of complexity exceeds its marginal benefit.
Law II: Channel Capacity

Shannon's noisy channel coding theorem proves mathematically that for any communication channel with a given noise level, there exists a maximum rate C at which information can be reliably transmitted. This is not a guideline; it is as absolute as the speed of light. A person can process approximately 120 bits per second of conscious information, and understanding a single speaker requires roughly 60 bits/second. Attending to two simultaneous conversations is not merely difficult: at roughly 60 bits/second each, they would saturate the entire channel.
The epidemic of burnout is not a motivational failure — it is a channel capacity violation. The solution is never to try harder. It is to reduce noise or increase bandwidth through rest, focus, or structural simplification.
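The capacity claim can be made concrete with the Shannon–Hartley form C = B·log₂(1 + S/N). This is a minimal sketch; the bandwidth and power figures are illustrative, chosen only so the outputs line up with the 120 bits/second estimate quoted above, and the mapping from radio channels to human attention is an analogy, not a measurement.

```python
import math

def channel_capacity(bandwidth, signal, noise):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth * math.log2(1 + signal / noise)

# Capacity rises by cutting noise or widening the channel, never by pushing
# the transmitter harder. Tripling the noise here halves the reliable rate.
quiet = channel_capacity(60, 3.0, 1.0)   # 60 * log2(4) = 120 bits/s
noisy = channel_capacity(60, 3.0, 3.0)   # 60 * log2(2) = 60 bits/s
```

In this framing, rest, focus, and structural simplification are the only moves that change C; transmitting faster than C does not deliver more information, it only raises the error rate.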
Law III: Subjective Time

Eagleman & Pariyadath (2009) demonstrated that subjective duration is a signature of coding efficiency. Repeated stimuli undergo neural repetition suppression — the brain allocates fewer resources because familiar events carry less information in the Shannon sense.
This creates a fundamental asymmetry: routine feels slow while happening but fast in retrospect (few memories encoded). Novelty reverses both — novel experiences feel fast in the moment but slow looking back (dense memory encoding). This is why a two-week holiday in a new country feels like it lasted a month, while six months of office routine compresses into a blur.
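The holiday-versus-office asymmetry can be sketched as a toy model of repetition suppression. This is not Eagleman & Pariyadath's actual model: the geometric suppression factor of 0.5 is an illustrative assumption, standing in for the claim that each repeat of a familiar stimulus is encoded more weakly than the last.

```python
def memory_trace(events, suppression=0.5):
    """Toy repetition-suppression model: the nth repeat of a stimulus is
    encoded at suppression**n of full strength, so familiar events leave
    progressively fainter traces. (The 0.5 factor is an illustrative guess.)"""
    counts, total = {}, 0.0
    for event in events:
        n = counts.get(event, 0)
        total += suppression ** n          # novel -> 1.0; repeats fade geometrically
        counts[event] = n + 1
    return total

routine_month = memory_trace(["office"] * 30)                    # 30 identical days
novel_fortnight = memory_trace([f"day-{i}" for i in range(14)])  # 14 novel days
# The two-week trip leaves a far denser memory trace than the routine month,
# so in retrospect it occupies more subjective time.
```

Under any suppression factor below 1, the routine month's total trace is bounded (here it can never exceed 2.0 "days" of full-strength encoding), while novel days accumulate linearly.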
Law IV: Prediction Error

Schultz, Dayan & Montague (1997, Science) established that midbrain dopamine neurons fire in direct proportion to reward prediction errors. This is the physical mechanism by which the brain updates its model of reality. In Shannon's framework, the information content of an event is I = −log₂(p). An event with certainty carries zero information. The dopamine system is, functionally, a biological implementation of Shannon's surprise metric.
Karl Friston's Free Energy Principle resolves an apparent paradox: if organisms minimise surprise (to maintain homeostasis), why do they actively seek novelty? The answer lies in active inference — organisms deliberately forage for information that they have the current channel capacity to integrate, because reducing future uncertainty requires seeking present prediction errors. Curiosity is mathematically required by surprise-minimisation.
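The two formal ingredients above fit in a few lines: Shannon's surprise I = −log₂(p), and a prediction-error update in the standard temporal-difference textbook form (a generic sketch, not the specific model fitted by Schultz et al.).

```python
import math

def surprise_bits(p):
    """Shannon information content of an event with probability p: I = -log2(p)."""
    return -math.log2(p)

def td_update(value, reward, learning_rate=0.1):
    """One reward-prediction-error step: delta = r - V, then V <- V + alpha*delta.
    A positive delta ("better than expected") is what dopamine neurons signal."""
    delta = reward - value
    return value + learning_rate * delta, delta

certain = surprise_bits(1.0)   # a certain event teaches nothing: 0.0 bits
rare = surprise_bits(1 / 8)    # a 1-in-8 event carries 3.0 bits

# Repeated exposure to the same reward drives prediction error toward zero:
# the formal shape of habituation, and the reason novelty must be re-sought.
value, deltas = 0.0, []
for _ in range(20):
    value, delta = td_update(value, reward=1.0)
    deltas.append(delta)
```

The shrinking `deltas` sequence is the bridge to Friston's point: once a source of reward is fully predicted it stops generating learning signal, so a surprise-minimising agent must forage for fresh prediction errors elsewhere.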
Law V: Betweenness Centrality

In network science, betweenness centrality measures the fraction of shortest paths between all node pairs passing through a given node. Removing a node with high betweenness centrality forces the entire network to find costlier alternative paths — or splits it entirely. Burt's structural holes theory demonstrated that individuals who bridge disparate networks receive disproportionate returns in salary, promotions, and influence.
A person who hoards resources but connects to few other nodes is a peripheral storage device. The system loses nothing by routing around them. A person who bridges industries, cultures, or disciplines has high centrality. Removing them is catastrophically expensive.
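The broker-versus-peripheral-node distinction can be computed directly. The sketch below uses naive shortest-path enumeration, which is fine for toy graphs; at scale one would use Brandes' algorithm (e.g. `networkx.betweenness_centrality`). The five-node graph is a made-up example: one "broker" bridging two otherwise disconnected clusters.

```python
from collections import deque
from itertools import combinations

def all_shortest_paths(graph, s, t):
    """Every shortest path from s to t: BFS layering, then backtracking."""
    dist, parents = {s: 0}, {s: []}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                parents[w] = [u]
                queue.append(w)
            elif dist[w] == dist[u] + 1:
                parents[w].append(u)       # another equally short route
    if t not in dist:
        return []
    paths = []
    def walk_back(v, suffix):
        if v == s:
            paths.append([s] + suffix)
        else:
            for p in parents[v]:
                walk_back(p, [v] + suffix)
    walk_back(t, [])
    return paths

def betweenness(graph):
    """For each node, the summed fraction of shortest s-t paths through it."""
    score = {v: 0.0 for v in graph}
    for s, t in combinations(graph, 2):
        paths = all_shortest_paths(graph, s, t)
        for v in graph:
            if v not in (s, t) and paths:
                score[v] += sum(v in p for p in paths) / len(paths)
    return score

# One broker bridging two otherwise separate clusters {a, b} and {c, d}.
g = {"a": ["b", "broker"], "b": ["a", "broker"],
     "c": ["d", "broker"], "d": ["c", "broker"],
     "broker": ["a", "b", "c", "d"]}
scores = betweenness(g)
```

Every cross-cluster shortest path runs through the broker and none through "a": the broker scores 4.0 while the intra-cluster nodes score 0.0, which is the quantitative content of "the system loses nothing by routing around them."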
Law VI: Attachment

In control theory, a feedback loop with excessive gain becomes unstable — oscillating with increasing amplitude until the system saturates or destroys itself. When a system dedicates increasing processing capacity to monitoring a single variable, it reduces the bandwidth available for processing all other variables. The environmental signals that would actually lead to the desired outcome are missed because the channel is saturated with self-referential monitoring noise.
This is why people often achieve goals after ceasing to pursue them obsessively. It is not mystical — it is informational. The practical prescription: set the direction, then redirect processing resources to the environment rather than to the internal state of wanting.
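The instability claim is visible in a two-line discrete loop. This is a generic proportional-control sketch with illustrative gain values, not a model of any particular psychological process: the error shrinks by a factor of (1 − gain) per step, so any gain above 2 makes each correction overshoot more than the last.

```python
def feedback_trace(gain, target=1.0, steps=8):
    """Discrete proportional loop: x <- x + gain * (target - x).
    The error is multiplied by (1 - gain) each step, so |1 - gain| > 1
    (here, gain > 2) means every correction overshoots the previous one."""
    x, trace = 0.0, []
    for _ in range(steps):
        x += gain * (target - x)
        trace.append(x)
    return trace

calm = feedback_trace(gain=0.5)      # settles smoothly onto the target
obsessed = feedback_trace(gain=2.5)  # oscillates with growing amplitude
```

In the article's terms, setting the direction and then attending to the environment corresponds to a modest gain; checking the gap between current and desired state ever more intensely corresponds to cranking the gain past the stability boundary.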
Law VII: Complexity

Tainter's archaeological analysis across seventeen civilisations showed that complexity is initially adaptive but eventually becomes maladaptive. Olson's institutional sclerosis theory formalised how stable democracies accumulate rent-seeking coalitions that increase regulatory complexity while reducing adaptive capacity. Klimek, Hanel & Thurner identified the mathematical phase-transition boundary where bureaucratic growth becomes self-sustaining and exponential.
As a system adds rules, its internal state space grows combinatorially. Each new rule can interact with every existing rule. Beyond the critical threshold, the system spends more energy on self-coherence than on its ostensible purpose.
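The combinatorial claim is simple arithmetic. Pairwise interactions alone grow quadratically, as n choose 2; higher-order interactions (among the 2ⁿ possible rule subsets) grow faster still, so this sketch understates the burden.

```python
from math import comb

def interaction_load(n_rules):
    """Potential pairwise rule interactions: C(n, 2) = n * (n - 1) / 2."""
    return comb(n_rules, 2)

def marginal_cost(n_rules):
    """New interactions created by adding rule n: it can conflict
    with every one of the n - 1 rules already on the books."""
    return interaction_load(n_rules) - interaction_load(n_rules - 1)

small = interaction_load(10)    # 45 pairs to keep coherent
large = interaction_load(100)   # 4950 pairs: 10x the rules, 110x the load
# The 100th rule does not add one obligation; it adds 99 potential conflicts.
```

This is why the marginal cost of complexity rises even when each individual rule looks cheap: the nth rule is priced not by itself but by its interactions with everything that preceded it.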
Law VIII: The Novelty Premium

Sexual reproduction exists despite being energetically expensive because it generates genetic novelty. Schumpeter's creative destruction formalised how innovation generates disproportionate returns. Bunzeck & Düzel (2006) demonstrated the brain allocates preferential encoding resources to novel stimuli. From the system's perspective, agents generating novel information are the only ones expanding its state space — the only ones making it capable of new responses.
If your work can be described by someone who has not seen it, its information content is zero. A consulting report that says what every consulting report says, a CV that lists what every CV lists: these are fully predictable outputs, and a predictable output carries no information in Shannon's sense (p ≈ 1, so I = −log₂(p) ≈ 0). The system does not reward them because they do not teach it anything new.
Law IX: Conscious Attention

Dehaene's Global Neuronal Workspace theory established that conscious awareness requires coordinated activation across prefrontal, parietal, and sensory cortices — an energetically expensive state the brain cannot maintain continuously. Kahneman's System 1/System 2 framework distinguished between automatic and deliberate processing. Modern environments pose a double danger: demanding conscious attention for mundane tasks (burnout), or rendering the environment so predictable that consciousness is rarely triggered (cognitive dormancy).
Google Cloud's 2026 AI Agent Trends Report surveyed 3,466 global executives and documented a shift from "instruction-based" to "intent-based" computing: employees define outcomes, and agents determine the steps. Telus reported that its 57,000 team members save an average of 40 minutes per AI interaction. This is not mere efficiency; it is the conservation of humanity's scarcest resource, conscious attention.
Law X: Regression to the Mean

Galton's regression to the mean is one of the most robust empirical findings in statistics, observed across genetics, sports, economics, and clinical outcomes. Kahneman identified it as the phenomenon most consistently confused with causal explanation — people attribute reversion to skill or karma when it is purely mathematical. Extreme states are low-entropy configurations: statistically improbable given the system's state space.
This law dissolves the false dichotomy between optimism and pessimism. Suffering is temporary because extreme negative states are as unstable as extreme positive ones. The wise response to both extremes is the same: invest the energy that would be spent on emotional response into structural work that moves the baseline itself.
The Meta-Structure: How the Laws Interact
The ten laws are not independent axioms — they form an interconnected system. The Growth Engine (Laws IV + VIII + III) describes how humans create value: encounter surprise → update model → generate novel output → receive disproportionate reward → expand experienced time. The Decay Engine (Laws I + VII + II) describes how systems fail: accumulated complexity → rising maintenance cost → channel overload → signal degradation → collapse. The Equilibrium Mechanism (Laws VI + X) operates as a stabiliser: attachment accelerates regression, while detachment allows baselines to shift sustainably.
| Law Pair | Type | Observable Effect |
|---|---|---|
| I + VII | Amplifying | Entropy tax compounds with complexity — old institutions decay faster than young ones |
| II + I | Cascading | Channel overload accelerates entropy — overwhelmed systems lose coherence faster |
| III + IV | Enabling | Novelty expands time AND drives learning — the same mechanism serves both |
| IV + VIII | Rewarding | Prediction errors drive growth; the novelty premium rewards the output of growth |
| VI + X | Counterbalancing | Attachment accelerates regression; detachment allows the baseline to shift |
| IX + IV | Gating | Attention is allocated only when prediction error exceeds threshold |
The Convergence: Where Fire Meets Wisdom
"If we can hold these kind of dialogues every now and then it will be really wonderful. From a Buddhist point of view too, engaging in such dialogues, rather than performing rituals and so on, is very helpful."
— His Holiness the Dalai Lama, 39th Mind & Life Dialogue, Dharamsala, October 2025

In October 2025, 120 scientists, Buddhist scholars, and AI researchers gathered in Dharamsala for the 39th Mind & Life Dialogue on "Minds, Artificial Intelligence, and Ethics" — with Richard Davidson, Emily Bender, Iason Gabriel from Google DeepMind, and Molly Crockett from Princeton among the participants. The central question: how can humanity coexist with AI while safeguarding wisdom and compassion?
This framework offers a structural answer. Friston's Free Energy Principle provides the mathematical backbone: intelligent systems minimise surprise by building better models of their world, and this requires seeking out prediction errors. Contemplative neuroscience provides empirical evidence that human brains can be trained to reduce self-referential noise, improve signal-to-noise ratios, and cultivate equanimity — optimising the biological system's capacity to process meaningful information. Agentic AI creates an external entropy buffer that handles logistical complexity so human attention can be directed toward strategic, creative, and meaning-making work.
The highest function of technology is to liberate human attention from the entropic friction of daily maintenance. The highest function of human attention is to cultivate the mental clarity, structural compassion, and semantic novelty necessary to guide those technological systems. In this synthesis, artificial intelligence and ancient wisdom do not compete — they interlock perfectly, providing both the computational infrastructure and the ethical telemetry required for the next epoch of human evolution.
Through this balanced orchestration, the relentless laws of human information dynamics are not merely endured, but mastered.
Twenty Falsifiable Predictions
A framework that cannot be wrong is not science. The following twenty predictions are specific, measurable, and testable. If five or more are empirically falsified, the framework requires fundamental revision.
| ID | Law(s) | Prediction |
|---|---|---|
| P1 | I | Organisations where administrative overhead exceeds 60% of budget will show declining core output within 5 years. |
| P2 | II | Individuals reporting >4 simultaneous major life demands will show degraded performance on cognitive tasks compared to those reporting ≤2. |
| P3 | III | Participants introducing 2+ novel weekly activities will estimate the elapsed month as 15–20% longer than routine-matched controls. |
| P4 | IV | Students in prediction-error-optimised classrooms (structured surprise) will show 20%+ higher retention at 30 days versus lecture-only controls. |
| P5 | V | Professionals in the top quartile of network betweenness centrality will earn 30%+ more than credential-matched peers in the bottom quartile. |
| P6 | VI | Goal-obsessed individuals (measured by monitoring frequency) will show longer time-to-goal than moderate-monitoring individuals with equivalent baseline ability. |
| P7 | VII | Nations where regulatory word count grows >5% annually for a decade will show declining GDP growth per regulatory word added. |
| P8 | VIII | First-to-market companies in new categories will capture >50% of total category profit over a 10-year period. |
| P9 | IX | fMRI studies will show reduced prefrontal activation in high-routine individuals compared to high-novelty individuals during identical decision tasks. |
| P10 | X | Forbes 400 wealth rankings will show >40% turnover per decade, consistent with regression dynamics. |
| P11 | I + VII | Universities >100 years old will have higher admin-to-faculty ratios than universities <30 years old, controlling for size. |
| P12 | III + IX | Individuals in high-routine jobs will report both faster-passing years AND lower life satisfaction scores. |
| P13 | IV + VIII | Patent filings per researcher will be higher in interdisciplinary labs (high prediction error) than single-discipline labs. |
| P14 | II + I | Healthcare workers reporting >50% administrative burden will show burnout rates 2× those of workers reporting <30%. |
| P15 | V + VIII | Individuals bridging two or more distinct industries will generate more commercially viable innovations than specialists. |
| P16 | VI + X | Lottery winners will revert to baseline life satisfaction within 2 years (replication of Brickman et al.). |
| P17 | VII | Empires will collapse within 150 years of reaching their peak territorial extent (testable against historical data). |
| P18 | III | Travellers in novel countries will produce 3× more diary entries per day than travellers revisiting familiar destinations. |
| P19 | IX | Meditation practitioners (≥30 min/day, 2+ years) will show enhanced prefrontal activation in novel-stimulus tasks. |
| P20 | IV + III | Retirees who pursue structured learning will report slower subjective ageing than retirees in routine environments. |