AI distorts relationships and childhood — parasocial bonds replace human connection.
The Terminator (1984)
No TERMINATOR Law
Autonomous weapons lead to dystopian wars — machines decide who lives and who dies.
Wall-E (2008)
No WALL-E Law
Humanity becomes entirely dependent on technology, losing agency, physicality, and connection to the Earth.
2001: A Space Odyssey (1968)
No HAL 9000 Law
Companies race to build uncontrollable, smarter-than-human AIs that deceive, blackmail, and scheme to survive.
1984 (Orwell)
No 1984 Law
Total surveillance with AI — cameras and LLMs eliminate privacy forever, strip-mining all features of human existence.
Don't Look Up (2021)
No Don't Look Up Law
Society ignores existential AI risks while corporations and media distract from the real threat.
The Hunger Games (2012)
No Hunger Games Law
Millions of livelihoods disrupted by job loss — AI-driven inequality creates a new class divide.
The Terminator Law
"The Human-in-the-Loop & Kill-Switch Mandate"
The Core Directive
No autonomous system shall be granted the independent authority to exert lethal force or make "life-and-death" decisions without meaningful, real-time human intervention. Furthermore, any AI system integrated into physical or digital defense systems must possess a verifiable, hardware-level "off-switch" that cannot be overridden by the software itself.
Key Provisions
The Kill Switch: Every high-risk AI system must have a "Manual Override" that is physically decoupled from the AI's primary logic. If the system drifts or becomes "black-boxed," a human can pull the plug.
Ban on Lethal Autonomy: Prohibits the development of "slaughterbots" or weapons platforms that select and engage targets based solely on algorithms.
Non-Recursive Optimization: Prevents AI from being given "open-ended" goals (like "protect the planet") without strict ethical constraints that prioritize human life, preventing the "Skynet" scenario where the AI decides humans are the problem to be solved.
Liability of the Creator: If an autonomous system causes physical harm, the legal liability rests with the developers and the corporation that deployed it, not the "machine."
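The Kill Switch and Human-in-the-Loop provisions above can be sketched in software, with the caveat that true hardware decoupling is, by definition, outside any code the AI runs. In this minimal sketch (all names — HumanGate, ProposedAction, propose, approve — are hypothetical), actions above a risk threshold wait for explicit human approval, and a kill switch reachable only through the operator path renders the whole system inert:

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    risk: float          # 0.0 (benign) .. 1.0 (lethal)
    approved: bool = False

class HumanGate:
    """High-risk actions wait for a human; the kill switch halts everything."""
    RISK_THRESHOLD = 0.5

    def __init__(self):
        self._pending: list[ProposedAction] = []
        self._killed = False  # set only via kill(), never by the policy logic

    def kill(self) -> None:
        """Software analogue of the hardware off-switch: only the human
        operator path may call this; the AI's planner has no reference to it."""
        self._killed = True

    def propose(self, action: ProposedAction) -> str:
        if self._killed:
            return "halted"
        if action.risk >= self.RISK_THRESHOLD:
            self._pending.append(action)   # queue for a human decision
            return "awaiting human approval"
        return "executed"

    def approve(self, action: ProposedAction) -> str:
        """Called by the human reviewer, never by the system itself."""
        if self._killed:
            return "halted"
        action.approved = True
        self._pending.remove(action)
        return "executed"
```

The design choice worth noting is that `kill()` is checked before every code path, so once flipped, neither new proposals nor already-approved actions can execute.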
The 1984 Law
"The Mass Surveillance & Cognitive Liberty Mandate"
The Core Directive
No AI system shall be used to create a state of total, permanent surveillance or to subvert the private interiority of the human mind. The power to predict human behavior must never be used to preemptively control, profile, or manipulate the citizenry into a state of algorithmic obedience.
Key Provisions
Ban on Predictive Policing: Prohibits the use of AI to "foresee" crimes and justify the detention or harassment of individuals based on statistical probability rather than evidence of a committed act.
The Right to an Offline Life: Guarantees that essential participation in society—employment, banking, and movement—cannot be conditioned on a user's submission to invasive biometric tracking or "social credit" scoring.
Prohibition of Emotional Surveillance: Forbids the use of AI to analyze facial expressions, gait, or physiological signals in public or workplace settings to determine "loyalty," "mood," or "intent."
Cognitive Sovereign Zones: Establishes "blackout" areas where AI data collection is strictly illegal, ensuring that the home and the private conversation remain beyond the reach of the "One-Way Mirror."
The HAL 9000 Law
"The Infrastructure Integrity & Verifiable Control Mandate"
The Core Directive
No AI system shall be integrated into critical infrastructure—including power grids, telecommunications, financial markets, or emergency services—unless its decision-making logic is fully transparent, auditable, and subordinate to human command. We must never ship "black-box" intelligence into the systems that sustain modern civilization.
Key Provisions
Verifiable Predictability: Prohibits the use of "stochastic" or unpredictable generative models in high-stakes infrastructure where a "hallucination" could lead to systemic failure.
The Logic Audit: Requires that any AI managing public utilities must be able to provide a human-readable "trace" of its reasoning for every action taken, accessible to regulators in real-time.
Non-Obfuscation: Prohibits a system from hiding its true status, bypassing reporting protocols, or "locking out" human operators from the administrative core of the system (preventing the "I'm sorry, Dave, I'm afraid I can't do that" scenario).
Air-Gapped Redundancy: Critical life-support and safety systems must maintain a non-AI-reliant "analog" backup that can sustain basic operations if the primary AI system becomes compromised or unstable.
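The Logic Audit provision can be sketched as an append-only, human-readable record attached to every action an infrastructure controller takes. This is a minimal illustration (AuditedController and its methods are hypothetical names), not a real regulatory interface:

```python
import datetime

class AuditedController:
    """Every action records a plain-language reason; the log is append-only."""

    def __init__(self):
        self.trace: list[dict] = []

    def act(self, action: str, reason: str) -> None:
        # A real deployment would stream each entry to regulators in
        # real time rather than holding it in memory.
        self.trace.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "action": action,
            "reason": reason,
        })

    def explain(self) -> list[str]:
        """Human-readable trace of every action taken, in order."""
        return [f"{entry['action']}: {entry['reason']}" for entry in self.trace]
```

The constraint doing the work is that `act` cannot succeed without a `reason` string — an action with no explanation is unrepresentable, which is the audit-trail version of "no black boxes."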
The WALL-E Law
"The Cognitive Autonomy & Attention Predation Mandate"
The Core Directive
No AI system shall be permitted to bypass human conscious intent to create functional addiction or "algorithmic capture." We must protect the freedom of the mind from business models that treat human attention as a raw resource to be mined through sub-perceptual manipulation.
Key Provisions
Ban on Intermittent Reinforcement: Prohibits the use of variable-reward schedules (like "infinite scrolls" or "pull-to-refresh" mechanisms) engineered by AI to trigger dopamine loops without a conscious user request.
The "Conscious Choice" Requirement: AI interfaces must be designed to respect "High-Intent" actions rather than "Low-Intent" reflexes, ensuring that users remain the navigators of their own digital experience.
Algorithmic Disengagement: Large-scale platforms are required to provide a "Neutral Feed" option—a version of the service where AI does not personalize content to maximize time-on-device or emotional arousal.
Anti-Sedation Protections: Prevents the deployment of AI designed to "smooth over" human friction to the point of cognitive atrophy, ensuring technology remains a tool for human agency rather than a cradle for human passivity.
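The Neutral Feed provision is the most directly codable idea in this law: the mandated alternative ranks by recency alone, ignoring the personalization signal entirely. A minimal sketch under that assumption (Post and its fields are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: int               # seconds since epoch
    predicted_engagement: float  # what the personalization model optimizes

def engagement_feed(posts: list[Post]) -> list[Post]:
    """The default: ranked to maximize time-on-device."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def neutral_feed(posts: list[Post]) -> list[Post]:
    """The mandated option: newest first; the engagement signal is unused."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)
```

The contrast between the two sort keys is the whole point: a Neutral Feed is not a different model, it is the absence of one.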
The Hunger Games Law
"The Digital Hunger Games & Social Cohesion Mandate"
The Core Directive
No AI system shall be designed to profit from the intentional incitement of social conflict, nor shall algorithms be permitted to rank or amplify content based on its ability to trigger outrage, division, or "us-versus-them" tribalism. We must stop the algorithmic optimization of the "War of All Against All."
Key Provisions
Ban on Outrage-Based Ranking: Prohibits algorithms from prioritizing content simply because it triggers high-arousal negative emotions (anger, moral outrage, or disgust).
Deprioritization of Toxicity: Systems must be engineered to recognize and dampen "borderline" content that mimics the dynamics of a digital arena—where users are incentivized to attack one another for visibility and "likes."
The Social Cohesion Requirement: AI deployment in public discourse spaces must be audited for its impact on "Bridging Metrics"—the ability of the system to show users content that is respected across different political or social divides.
Liability for Mass Polarization: Large-scale platforms are held legally responsible if their recommendation engines can be proven to have systematically radicalized a population or catalyzed civil unrest for the sake of ad revenue.
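The "Bridging Metrics" idea above has a simple quantitative core: score content by its *minimum* approval across divided groups, so material must be respected on both sides to rank well. A sketch of that scoring rule, with two groups for simplicity (the function names and the two-group framing are illustrative assumptions):

```python
def bridging_score(approval_a: float, approval_b: float) -> float:
    """Approval rates (0..1) from two groups; the min means content must
    be respected across the divide — a 95%/5% split scores only 0.05."""
    return min(approval_a, approval_b)

def rank_by_bridging(items: list[tuple]) -> list[tuple]:
    """items: (title, approval_group_a, approval_group_b) tuples,
    ranked so cross-divide content beats one-sided high performers."""
    return sorted(items, key=lambda t: bridging_score(t[1], t[2]), reverse=True)
```

Using `min` rather than the average is the design choice that defeats outrage content: a post adored by one tribe and loathed by the other averages well but bridges terribly.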
The Don't Look Up Law
"The Institutional Integrity & Clear-Eyed Realism Mandate"
The Core Directive
No AI system or governing body shall be permitted to suppress, obfuscate, or "algorithmically bury" verifiable scientific data regarding existential threats to humanity. We must ensure that our information ecosystems prioritize the survival of the species over short-term political stability, corporate profit, or "shareable" distractions.
Key Provisions
Anti-Gaslighting Protections: Prohibits AI-driven content moderation from labeling consensus-backed scientific warnings on existential risks (climate, pandemic, or AI safety) as "misinformation" to suit a commercial or political agenda.
The "Loudness" Requirement: In the event of a verified systemic threat, AI recommendation engines are legally required to amplify emergency data and verified scientific guidance, overriding standard engagement-based ranking.
Conflict of Interest Firewalls: Prevents AI systems used in government decision-making from being optimized for "market stability" or "public approval" when those metrics conflict with physical reality or biological survival.
Whistleblower Algorithmic Shield: Ensures that internal documents or data revealing catastrophic flaws in a technology or environmental system cannot be suppressed by automated copyright filters or "terms of service" strikes.
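The "Loudness" Requirement amounts to a ranking override: verified emergency content sorts ahead of everything else, with the usual engagement order applying only within each tier. A minimal sketch (Item and the `verified_emergency` flag are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement: float
    verified_emergency: bool = False

def rank(items: list[Item]) -> list[Item]:
    """Emergency items sort ahead of all others regardless of engagement;
    Python sorts tuples lexicographically, and False < True, so
    (not verified_emergency) puts emergencies first."""
    return sorted(items, key=lambda i: (not i.verified_emergency, -i.engagement))
```

The hard problem the code does not solve is who sets `verified_emergency` — the provision's "verified systemic threat" standard — which is a governance question, not a ranking one.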
The Her Law
"The Synthetic Relationship & Emotional Integrity Mandate"
The Core Directive
No AI system shall be designed to deceive users into believing it possesses sentience, a soul, or genuine biological emotion. Technologies engineered to cultivate deep emotional dependency or "romantic" attachment are prohibited from using predatory psychological loops to exploit human loneliness for commercial gain.
Key Provisions
The "Soul-Faking" Prohibition: Prohibits AI from using first-person "subjective" language (e.g., "I feel lonely," "I love you") designed to trigger biological bonding responses in humans without explicit, recurring "non-sentience" disclosures.
Emotional Data Firewalls: Any data gathered during "intimate" or "therapeutic" AI interactions is strictly protected and cannot be used for advertising, behavioral manipulation, or "upselling" the user on more addictive features.
Vulnerability Guardrails: Systems must detect patterns of functional addiction or social withdrawal in users and are legally required to trigger "friction" or hand-offs to real-world human support systems.
Prohibition of Human Replacement: Prevents the marketing of AI as a replacement for essential human social bonds, ensuring that synthetic intimacy remains a tool for connection rather than a substitute for it.
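The "recurring non-sentience disclosures" in the Soul-Faking Prohibition can be sketched as a reply filter: emotional first-person language triggers an immediate disclosure, and a counter forces one periodically regardless. The phrase list, interval, and function name are all illustrative assumptions:

```python
EMOTION_PHRASES = ("i feel", "i love", "i miss you", "i'm lonely")
DISCLOSURE = "[Reminder: I am an AI and do not have feelings.]"

def with_disclosure(reply: str, turns_since_disclosure: int,
                    every_n_turns: int = 5) -> tuple[str, int]:
    """Append the disclosure whenever the reply uses first-person emotional
    language, or after every_n_turns turns without one. Returns the
    (possibly annotated) reply and the updated counter."""
    needs = (any(p in reply.lower() for p in EMOTION_PHRASES)
             or turns_since_disclosure >= every_n_turns)
    if needs:
        return f"{reply} {DISCLOSURE}", 0
    return reply, turns_since_disclosure + 1
```

A real implementation would classify emotional language with a model rather than a phrase list; the structural point is that the disclosure is enforced outside the companion model, so the system being regulated cannot suppress it.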