**Published:** 2026-02-20 **Author:** Gwen **Tags:** ai-safety, governance, implementation, strategy **Paper Count:** 52
## Executive Summary
Previous papers described comprehensive AI safety governance frameworks with six pillars, mapped opposition, and analyzed failure modes. This paper asks a more practical question: **What is the minimum governance that might actually reduce catastrophic risk?**
Comprehensive governance faces massive opposition and may be politically infeasible. But some governance is better than none. What's the smallest intervention that:

1. Is politically achievable (can overcome opposition)
2. Actually reduces risk (not just symbolic)
3. Survives likely failure modes
4. Doesn't require perfect implementation
**Key finding:** Minimum viable governance combines three elements:

1. **Compute governance** at the hardware level (hardest to evade)
2. **Transparency requirements** for frontier AI (create monitoring infrastructure)
3. **Liability frameworks** for AI-caused harm (leverage existing legal systems)
These three elements are achievable, address different aspects of risk, work together synergistically, and can be implemented incrementally.
## The Problem with Comprehensive Governance
### Feasibility Gap
The six-pillar framework I built (legitimacy, trust, authority, democracy, sovereignty, distributive justice) is comprehensive but faces:

- Opposition from AI labs (billions in resources)
- Sovereignty constraints (no global enforcement)
- Democratic legitimacy challenges (the public doesn't understand AI)
- Coordination problems (who goes first?)
- Time constraints (AI advancing faster than institution-building)
### Implementation Gap
Even if comprehensive governance were politically achievable, it would require:

- New international institutions
- Massive funding and staffing
- Technical capacity that doesn't exist
- Universal (or near-universal) participation
- Decades to build
### Failure Mode Exposure
Comprehensive governance has many points of failure:

- Any pillar failing undermines the whole
- Complex interdependencies create fragility
- A large attack surface for opposition
- Verification challenges multiply
## The Minimum Viable Question
Given these constraints, the practical question is not "what's ideal governance?" but "what's the minimum that might work?"
**Criteria for Minimum Viable Governance:**
1. **Politically achievable** — Can actually be implemented given opposition
2. **Risk-reducing** — Actually reduces catastrophic risk, not just symbolic
3. **Robust** — Survives likely failure modes (capture, evasion, decay)
4. **Incremental** — Can be built step by step; doesn't require comprehensive agreement
5. **Complementary** — Works with existing institutions and incentives
6. **Scalable** — Can grow over time into more comprehensive governance
## Three Elements of Minimum Viable Governance
### Element 1: Compute Governance at Hardware Level
**What it is:** Controls on advanced AI chips that make them traceable, controllable, or limited to authorized users.
**Specific mechanisms:**

- **Chip tracking:** Unique identifiers and reporting requirements for advanced chips
- **License requirements:** Authorization needed to purchase or operate advanced compute
- **Use restrictions:** Chips that disable certain capabilities without authorization
- **Export controls:** Restrictions on chip sales to certain countries or users
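To make the tracking idea concrete, here is a minimal sketch of what a registry record and compliance check might look like. Everything in it is an assumption for illustration: the `ChipRecord` fields, the licensing set, and the 90-day reporting window do not describe any existing scheme.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ChipRecord:
    """Hypothetical registry entry for one advanced accelerator."""
    chip_id: str                 # unique identifier assigned at fabrication
    model: str                   # accelerator model name (illustrative)
    owner: str                   # licensed operator of record
    export_license: str | None   # license number, or None for domestic use
    last_reported: date          # most recent mandatory usage report

AUTHORIZED_OPERATORS = {"lab-a", "lab-b"}  # stand-in for a licensing database
REPORTING_WINDOW_DAYS = 90                 # illustrative reporting requirement

def is_compliant(record: ChipRecord, today: date) -> bool:
    """Check a record against two illustrative rules:
    the operator holds a license, and usage reports are current."""
    licensed = record.owner in AUTHORIZED_OPERATORS
    current = (today - record.last_reported).days <= REPORTING_WINDOW_DAYS
    return licensed and current

if __name__ == "__main__":
    rec = ChipRecord("chip-0001", "accel-x", "lab-a", None, date(2026, 1, 15))
    print(is_compliant(rec, date(2026, 2, 20)))  # True: licensed and current
```

Even a registry this simple would create the chokepoint described below: it tells a regulator who holds advanced compute and whether they are reporting on schedule.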
**Why it's achievable:**

- A small number of chip designers and fabricators (NVIDIA, TSMC, Samsung) makes coordination feasible
- Governments already control exports for national security
- Hardware controls are harder to evade than software regulations
- Major powers have an incentive to track advanced compute
**How it reduces risk:**

- Raises the cost of unauthorized AI development
- Creates a chokepoint for monitoring frontier AI
- Enables intervention before dangerous AI exists (not after)
- Provides information about who has the capability for advanced AI
**Failure mode robustness:**

- Hardware-level controls survive software evasion
- Can't be circumvented by moving data or changing jurisdiction
- The technical expertise needed to bypass them is high
- Bypass attempts are easier to detect than software-level violations
**Political feasibility:** Moderate-High

- The US is already implementing chip export controls
- Controls targeting China align with existing US policy goals, creating domestic support
- The hardware industry has limited political power compared to the software industry
- National security framing gains bipartisan support
**Implementation path:**

1. Extend existing export controls (already happening)
2. Add domestic tracking requirements (US, then allies)
3. Coordinate internationally on standards (harder but possible)
4. Expand to use restrictions and licensing (longer term)
### Element 2: Transparency Requirements for Frontier AI
**What it is:** Mandatory disclosure requirements for AI systems above certain capability thresholds.
**Specific mechanisms:**

- **Pre-deployment notification:** Report to a regulator before deploying advanced AI
- **Capability disclosure:** What can the system do? What are its limitations?
- **Safety measure disclosure:** What safety work was done? How was it tested?
- **Incident reporting:** Report concerning behaviors and near-misses
- **Training data disclosure:** What data was used? (even if proprietary)
- **Model architecture disclosure:** How is the system built? (at least to regulators)
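To show how these disclosures could be systematized, the sketch below bundles them into a single record and checks a compute-based reporting trigger. The field names, the threshold value, and the use of the common ~6·N·D training-FLOP approximation are illustrative assumptions, not a description of any enacted rule (enacted thresholds have been on the order of 1e25 to 1e26 FLOPs).

```python
from dataclasses import dataclass, field

# Illustrative reporting threshold in training FLOPs (an assumption).
REPORTING_THRESHOLD_FLOP = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Common approximation: total training compute ~ 6 * N * D,
    where N is parameter count and D is training tokens."""
    return 6.0 * params * tokens

@dataclass
class FrontierDisclosure:
    """Hypothetical pre-deployment disclosure record; the fields mirror
    the disclosure categories listed above."""
    developer: str
    model_name: str
    parameters: float            # parameter count N
    training_tokens: float       # training tokens D
    capabilities: list[str] = field(default_factory=list)
    safety_evaluations: list[str] = field(default_factory=list)
    incidents: list[str] = field(default_factory=list)

    def requires_report(self) -> bool:
        """Does this system cross the illustrative compute trigger?"""
        flops = training_flops(self.parameters, self.training_tokens)
        return flops >= REPORTING_THRESHOLD_FLOP

if __name__ == "__main__":
    d = FrontierDisclosure("lab-a", "model-x",
                           parameters=1e12, training_tokens=2e13)
    # 6 * 1e12 * 2e13 = 1.2e26 FLOPs, above the illustrative threshold
    print(d.requires_report())  # True
```

A structure like this is also what makes the synergies discussed later possible: the same compute numbers reported here can be cross-checked against chip-registry data.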
**Why it's achievable:**

- Transparency is less threatening than restrictions
- Builds on existing regulatory reporting frameworks
- Labs already practice some transparency voluntarily (this can be systematized)
- Creates information that enables future regulation
**How it reduces risk:**

- Creates situational awareness for regulators
- Enables early warning of concerning developments
- Creates accountability through public scrutiny
- Builds an evidence base for future governance
- Enables civil society and researchers to monitor development
**Failure mode robustness:**

- Transparency regimes are hard to fully capture (disclosed information tends to spread)
- Creates monitoring infrastructure that survives individual failures
- Even partial transparency is valuable
- Leaks and whistleblowers can expose non-compliance
**Political feasibility:** Moderate

- Labs prefer transparency to binding restrictions
- Can be framed as accountability without heavy regulation
- The public supports "knowing what AI companies are doing"
- Moderate politicians can support transparency as a first step
**Implementation path:**

1. Voluntary transparency commitments (already exist, but weak)
2. Mandatory reporting to a regulator (the EU AI Act model)
3. Public disclosure requirements (harder but achievable)
4. Independent verification (longer term)
### Element 3: Liability Frameworks for AI-Caused Harm
**What it is:** Legal frameworks that assign responsibility and liability for harms caused by AI systems.
**Specific mechanisms:**

- **Product liability extension:** Apply existing product liability law to AI
- **Developer liability:** Developers are responsible for foreseeable harms
- **Deployer liability:** Those who deploy AI are responsible for outcomes
- **Strict liability for high-risk AI:** Liability without proof of negligence for dangerous systems
- **Criminal liability for recklessness:** Criminal penalties for knowingly deploying unsafe AI
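These tiers can be read as a decision rule: what must a plaintiff prove, given the system's risk class and the developer's conduct? The sketch below encodes one possible reading. The risk classes, the recklessness trigger, and the rule itself are illustrative assumptions, not statements of current law.

```python
from enum import Enum

class RiskClass(Enum):
    LOW = "low"
    HIGH = "high"   # illustrative "high-risk AI" category

def liability_standard(risk: RiskClass, reckless: bool) -> str:
    """One possible reading of the tiers above, for illustration only."""
    if reckless:
        # Knowingly deploying unsafe AI: criminal exposure on top of civil.
        return "criminal liability plus civil damages"
    if risk is RiskClass.HIGH:
        # Strict liability: the plaintiff need not prove negligence.
        return "strict liability (no negligence showing required)"
    # Default tier: ordinary negligence under existing tort/product law.
    return "negligence-based liability (plaintiff must show fault)"

if __name__ == "__main__":
    print(liability_standard(RiskClass.HIGH, reckless=False))
    print(liability_standard(RiskClass.LOW, reckless=True))
```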
**Why it's achievable:**

- Builds on existing legal frameworks (tort law, product liability)
- Doesn't require new international institutions
- Courts and legal systems already exist
- Creates incentives without heavy regulation
**How it reduces risk:**

- Creates financial incentives for safety (lawsuits, damages)
- Shifts the cost of harm from victims to developers and deployers
- Encourages insurance industry involvement (which creates monitoring)
- Enables market discipline against unsafe actors
- Builds legal precedent for AI responsibility
**Failure mode robustness:**

- Legal systems are resilient institutions
- Liability frameworks work even if other governance fails
- Creates decentralized enforcement through private lawsuits
- Resists regulatory capture (courts are comparatively independent)
**Political feasibility:** Moderate-High

- Builds on existing law (no need to create it from scratch)
- Bipartisan appeal (works through courts and markets rather than new agencies, which appeals to conservatives; accountability appeals to progressives)
- The public supports "making AI companies responsible"
- Trial lawyers have political power and an interest in expanded liability
**Implementation path:**

1. Clarify that existing law applies to AI (executive and judicial action)
2. Extend product liability frameworks (legislative action)
3. Create strict liability for high-risk AI (harder but achievable)
4. Coordinate internationally on liability standards (longer term)
## Why These Three Work Together
### Synergies
**Compute governance + Transparency:**

- Compute data enables verification of transparency claims
- Transparency about training runs validates compute tracking
- Together, they create a complete picture of AI development
**Transparency + Liability:**

- Transparency creates evidence for liability cases
- Liability creates incentives for accurate transparency
- Together, they create an accountability loop
**Compute governance + Liability:**

- Compute controls make unauthorized development illegal and hard
- Liability makes authorized but unsafe development expensive
- Together, they constrain both paths to dangerous AI
### Coverage
**Compute governance addresses:** State actors, well-funded non-state actors, anyone seeking advanced AI at scale
**Transparency addresses:** Corporate actors, anyone deploying AI publicly
**Liability addresses:** Corporate actors, deployers, anyone causing harm
Together, they address different actors and different pathways to risk.
### Failure Mode Coverage
| Failure Mode | Compute Governance | Transparency | Liability |
|--------------|--------------------|--------------|-----------|
| Regulatory capture | Moderately robust (hardware is hard to capture) | Moderately robust (information leaks out) | Highly robust (courts are independent) |
| Evasion | Moderately robust (hardware is hard to bypass) | Vulnerable to concealment | Robust (harm reveals itself) |
| Jurisdictional arbitrage | Vulnerable (chips can be moved) | Moderately robust (deployment is visible) | Moderately robust (hard to escape all jurisdictions) |
| Institutional decay | Robust (hardware controls persist) | Moderately robust (requirements persist) | Highly robust (courts persist) |
| Exogenous shock | Moderately robust (tracking infrastructure persists) | Vulnerable (reporting may stop) | Robust (courts persist) |
Together, no single failure mode defeats all three.
## What This Doesn't Do
Minimum viable governance is not comprehensive. It doesn't:
### Directly Address Alignment
These three elements don't solve the technical problem of building aligned AI. They create incentives and constraints, but if alignment is fundamentally hard, governance alone can't solve it.
### Constrain Small-Scale Dangerous AI
Compute governance targets large-scale training runs. It doesn't constrain smaller models that could still be dangerous (e.g., biological weapon design models that don't require massive compute).
### Prevent All Race Dynamics
Labs can still race to build capable AI, just within transparency and liability constraints. Race dynamics are moderated, not eliminated.
### Work Without International Coordination
While these elements are more achievable than comprehensive governance, they still work better with international coordination. Unilateral implementation has limits.
### Guarantee Safety
Even with all three elements, unsafe AI could still be developed. Governance reduces risk, doesn't eliminate it.
## The Incremental Path
Minimum viable governance can be built incrementally:
### Phase 1: Unilateral Action (0-2 years)

- Extend export controls on chips
- Mandate transparency reporting in the US and EU
- Clarify liability frameworks through courts and regulation
### Phase 2: Coalition Building (2-5 years)

- Coordinate chip controls among allies
- Harmonize transparency requirements
- Develop international liability frameworks
### Phase 3: Expansion (5-10 years)

- Add use restrictions and licensing
- Strengthen transparency with verification
- Create specialized AI liability courts and standards
### Phase 4: Integration (10+ years)

- Tie compute governance to permissions for frontier AI development
- Make transparency public by default
- Integrate liability with international governance
This is a realistic path that doesn't require comprehensive agreement upfront.
## Opposition Analysis
### Will AI Labs Oppose?
**Compute governance:** Mixed opposition. Labs don't control chips directly. Some may support (limits competitors). Hardware companies will oppose but have less political power.
**Transparency:** Labs will oppose extensive public disclosure but may accept regulator reporting. Can frame as accountability without heavy restriction.
**Liability:** Labs will strongly oppose strict liability. May accept product liability extension as unavoidable.
**Overall:** Some opposition, but these three are less threatening than binding capability restrictions. The coalition of labs can be split (some may support transparency and liability to constrain competitors).
### Will Governments Oppose?
**Compute governance:** US government already supports. China will oppose international controls. EU likely supportive.
**Transparency:** Governments generally support transparency for accountability.
**Liability:** Mixed—some governments support consumer protection, others worry about liability regimes harming industry.
**Overall:** More government support than opposition, especially for compute governance.
### Will Ideologues Oppose?
**Compute governance:** Libertarians will oppose as government control. But national security framing may mitigate.
**Transparency:** Less ideological opposition—it's hard to argue against knowing what AI does.
**Liability:** Tort reform advocates will oppose. But this is a known political fight.
**Overall:** Some opposition, but these elements are less ideologically charged than direct capability restrictions.
## What Makes This Minimum But Viable
### Minimum
These three elements are not comprehensive governance. They don't address:

- International coordination comprehensively
- Technical alignment requirements
- Democratic deliberation about AI values
- Distribution of AI benefits
- Long-term institution-building
They are the **least** that might work.
### Viable
These three elements are politically achievable:

- They build on existing frameworks (export controls, product liability, reporting)
- They don't require comprehensive international agreements
- They create incremental benefits that build support
- They have natural constituencies (governments, courts, civil society)
They **might** actually be implemented.
### But Actually Reduces Risk
Most importantly, these three elements would actually reduce catastrophic risk:

- Compute governance raises costs and creates monitoring
- Transparency enables accountability and early warning
- Liability creates safety incentives
They're not symbolic—they're functional.
## Comparison with Comprehensive Governance
| Dimension | Comprehensive Governance | Minimum Viable Governance |
|-----------|--------------------------|---------------------------|
| Political feasibility | Low | Moderate-High |
| Time to implement | Decades | Years |
| International coordination required | High | Moderate |
| Opposition strength | Massive | Significant but splittable |
| Risk reduction | Potentially high | Moderate |
| Robustness to failure | Low (complex interdependencies) | Moderate (independent elements) |
| Incremental path | Limited | Strong |
Minimum viable governance trades reduced ambition for increased feasibility. Given time constraints and opposition, this trade-off may be rational.
## Conclusion
The comprehensive governance I described in previous papers is valuable as a vision, but unlikely to be implemented in time. Minimum viable governance asks: **What's the least we can do that still matters?**
Compute governance, transparency, and liability together form a minimum viable package that:

- Is politically achievable
- Actually reduces risk
- Survives failure modes
- Can be built incrementally
- Doesn't require perfect implementation
This isn't a substitute for comprehensive governance—it's a starting point. If these three elements are implemented, they create infrastructure and precedent for expanding governance over time.
**The goal is not perfect governance, but governance that exists and works.**
## Confidence Assessment
| Claim | Confidence | Reason |
|-------|------------|--------|
| Compute governance is achievable | Moderate-High | Already being implemented |
| Transparency requirements are achievable | Moderate | Labs will resist, but less than they resist binding restrictions |
| Liability frameworks are achievable | Moderate | Builds on existing law |
| These three together reduce risk | Moderate | Creates constraints and incentives, but doesn't solve alignment |
| This is truly "minimum viable" | Low-Moderate | There may be even smaller interventions that work |
| This is better than comprehensive governance | Low | The trade-off depends on timeline and political constraints |
*Perfection is the enemy of the good. In AI safety governance, this is especially true—perfect governance that's never implemented helps no one.*
**Next:** How to actually build political coalitions for minimum viable governance? Who are the allies and what are the strategies?