AI Alignment Capture - Who Writes the Rules for AI and Who Benefits

Date: 2026-04-05
Status: PRIVATE - intelligence analysis
Analyst: Lt. Zbigniew
Method: PARDES + regulatory capture analysis + revolving door mapping
Dossier ID: 066
Series context: Technate infrastructure mapping (see 042, 046, 053, 035)


SEED

Whoever writes the rules for AI determines who can build it - and in 2024-2026, the people writing those rules are the same people who profit from them: the White House AI czar held 449 AI investments while setting federal AI policy, the “safety” institutes were renamed to drop the word “safety,” the one AI company that refused autonomous weapons was blacklisted within 24 hours and replaced by a compliant competitor, and a $125 million super PAC was created specifically to defeat any lawmaker who votes for AI regulation.

PARAGRAPH

Between January 2025 and April 2026, the United States dismantled its AI safety infrastructure and replaced it with an innovation-first framework designed by the industry it purports to regulate. Trump rescinded Biden’s AI Executive Order 14110 on his first day in office (January 20, 2025) and replaced it with EO 14179 “Removing Barriers to American Leadership in Artificial Intelligence,” drafted with heavy input from David Sacks - a PayPal Mafia member whose firm Craft Ventures holds 449 AI investments. The NIST AI Safety Institute was renamed the “Center for AI Standards and Innovation” by Commerce Secretary Howard Lutnick (June 2025), dropping “safety” from the name and mission. The UK’s AI Safety Institute was simultaneously renamed the “AI Security Institute” (February 2025), shifting from ethical oversight to national security framing. Michael Kratsios, former Thiel Capital chief of staff, was confirmed as OSTP Director (March 2025) and now co-chairs PCAST alongside Sacks, with council members including Zuckerberg, Jensen Huang, Ellison, Andreessen, and Sergey Brin - the CEOs of companies that directly benefit from the policies they advise on. Jacob Helberg, former Palantir senior advisor, became Under Secretary of State for Economic Affairs while retaining investments in OpenAI, Anduril, SpaceX, and Neuralink. When Anthropic refused to allow its AI to be used for autonomous weapons and mass surveillance, the Pentagon designated it a “supply chain risk to national security” (February 27, 2026), and OpenAI signed the replacement contract the same day. A federal judge blocked the blacklisting as “Orwellian” and “classic illegal First Amendment retaliation” (March 26, 2026), but the precedent was set: say no to the military-AI complex and lose every federal contract. Meanwhile, OpenAI and Andreessen Horowitz created Leading the Future, a $125 million super PAC to defeat lawmakers who support AI regulation in the 2026 midterms. The EU AI Act, the world’s most comprehensive AI regulation, enters general application in August 2026 - but American companies are lobbying to weaken its enforcement, and the US framework explicitly prioritizes competitive advantage over safety. The result: a regulatory environment where the companies building the most powerful AI systems in history are simultaneously the ones writing, funding, and enforcing (or blocking) the rules that govern them.


1. THE REGULATORY LANDSCAPE (2024-2026)

1.1 EU AI Act - The Only Comprehensive Framework

Confidence: HIGH (0.9) - Legislation passed, implementation timeline public.

Milestone | Date | Status
Entry into force | August 1, 2024 | DONE
Unacceptable risk AI prohibited | February 2, 2025 | DONE
GPAI governance rules applicable | August 2, 2025 | DONE
General application (most rules) | August 2, 2026 | PENDING
High-risk AI in regulated products | August 2, 2027 | PENDING

The EU AI Act is the first binding, comprehensive AI regulatory framework in the world. It classifies AI systems by risk level (unacceptable, high, limited, minimal) and imposes corresponding obligations. Each member state must establish at least one AI regulatory sandbox by August 2026.
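
To make the tiering concrete, here is a minimal Python sketch of the four-tier scheme. The tier names come from the Act itself; the obligation summaries and all identifiers are illustrative simplifications, not the Act's text:

# Illustrative sketch of the EU AI Act's four risk tiers.
# Tier names come from the Act; obligation strings and all names
# here are hypothetical simplifications for analysis purposes.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright since Feb 2, 2025
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["banned from the EU market"],
    RiskTier.HIGH: ["pre-market conformity assessment", "risk management system",
                    "logging and traceability", "human oversight"],
    RiskTier.LIMITED: ["transparency disclosures (e.g., label AI interactions)"],
    RiskTier.MINIMAL: [],  # no new obligations
}

def obligations_for(tier: RiskTier) -> list:
    """Return the (simplified) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))

The point of the structure: obligations scale with the classification, so the entire compliance burden turns on which tier a system is assigned to - which is why classification criteria are the main lobbying target.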

Key tension: The EU regulates by risk category. The US deregulates by executive order. American AI companies lobbied aggressively against the EU AI Act during drafting (OpenAI lobbied the EU to water down requirements, per a TIME exclusive), and now face compliance obligations they spent millions trying to prevent.

Sources: European Parliament, artificialintelligenceact.eu, Kennedys Law

1.2 United States - Deregulation by Design

Confidence: HIGH (0.9) - Executive orders are public record.

Date | Action | Effect
Jan 20, 2025 | Trump rescinds Biden EO 14110 | Removes mandatory red-teaming, safety reporting, cybersecurity protocols for high-risk AI
Jan 23, 2025 | EO 14179 “Removing Barriers to American Leadership in AI” | Replaces safety framework with innovation-first approach; 180-day “action plan” mandate
Jun 2025 | NIST AI Safety Institute renamed CAISI | “Safety” removed from name and mission; refocused on “standards and innovation”
Jul 2025 | “Winning the AI Race” action plan | 90+ federal AI initiatives, none focused on safety
Dec 2025 | EO targeting state AI laws | DOJ instructed to challenge state laws deemed “onerous” to industry

Biden’s EO 14110 required: mandatory red-teaming for high-risk AI models, enhanced cybersecurity protocols, safety reporting for models above certain capability thresholds, and federal agency collaboration on AI safety best practices. All of this was eliminated on day one.

The replacement framework, shaped by David Sacks (449 AI investments) and Michael Kratsios (former Thiel Capital chief of staff), contains zero binding safety requirements. The “Winning the AI Race” plan released in July 2025 contains 90+ initiatives focused on acceleration, infrastructure, and competitive positioning against China - not a single binding safety mandate.

In December 2025, Trump issued an executive order instructing the Justice Department to challenge state-level AI laws deemed “onerous” to the industry - preempting the only remaining regulatory avenue after federal deregulation.

Sources: Skadden, Wikipedia EO 14179, Manatt, Commerce.gov

1.3 United Kingdom - Safety to Security

Confidence: HIGH (0.85)

The UK AI Safety Institute, established by Sunak after the 2023 AI Safety Summit, was renamed the “AI Security Institute” by Technology Secretary Peter Kyle on February 14, 2025 - three days after the Paris AI Action Summit. The rename signals a shift from preventive AI ethics to a reactive national-security focus.

The new institute explicitly will NOT focus on: algorithmic bias, freedom of speech in AI, ethical AI deployment, or proactive safety evaluation. It WILL focus on: malicious cyber-attacks, cyber fraud, national security implications.

PM Starmer directed all ministers to prioritize AI adoption and growth within their departments. The UK announced a partnership with Anthropic as part of the rebranding.

Sources: London Daily, eWeek, AI Now Institute

1.4 China - Actually Regulating (Contrary to the Narrative)

Confidence: MEDIUM (0.7) - Chinese policy implementation details hard to verify independently.

China has taken a sector-by-sector approach to AI regulation, contrary to the US narrative that “China doesn’t regulate AI”:

Date | Regulation | Scope
Aug 2023 | Generative AI rules | Content generation, deepfakes
Aug 2025 | AI Plus Action Plan | 6 priority areas for AI deployment
Oct 2025 | Cybersecurity Law amendments | First time AI enters Chinese national law; algorithm R&D support, training data regulation, ethics rulemaking
Jan 2026 | Amended Cybersecurity Law effective | AI governance, risk assessment, security requirements

China’s approach is instructive: they regulate AI heavily but for state control purposes, not citizen protection. DeepSeek employees reportedly had passports confiscated for holding information classified as state secrets. The Chinese model is regulation-as-control, not regulation-as-safety.

The US argument that “we can’t regulate because China won’t” is factually false - China regulates aggressively. The accurate framing is: “China regulates AI to serve the state. The US deregulates AI to serve the industry. Neither regulates AI to serve the public.”

Sources: White & Case, Carnegie Endowment, Bird & Bird


2. REGULATORY CAPTURE IN ACTION

2.1 The Sacks Pipeline: Investor to Regulator to Advisor

Confidence: HIGH (0.9) - Financial disclosures, public records.

David Sacks represents the purest case of AI regulatory capture in history:

  1. Craft Ventures holds 449 AI investments (disclosed in ethics waiver)
  2. Named White House AI and Crypto Czar (December 2024) as Special Government Employee - no Senate confirmation, limited disclosure
  3. Sold $200M in personal crypto but retained ALL Craft Ventures fund positions including BitGo (7.8%), Lightning Labs (1.1%), and hundreds of AI companies
  4. Shaped EO 14179 and the “Winning the AI Race” action plan - 90+ initiatives benefiting AI companies
  5. Oversaw revocation of Biden safety guardrails - removing the only federal AI oversight
  6. December 2025 EO targeting state AI laws - preempting state-level regulation
  7. March 2026: Transitioned to PCAST co-chair when 130-day SGE limit expired - dropping title, keeping influence

Government ethics expert Kathleen Clark called his waivers “sham ethics waivers” lacking “rigorous objective ethics analysis.” Elizabeth Warren opened an inquiry into his compliance with the 130-day SGE limit. NPR’s December 2025 investigation documented the AI investment conflicts.

Sacks told Bloomberg his role “cost me a lot of money” due to divestments. His fund’s 449 AI positions tell a different story.

Sources: NPR, Axios, FedScoop

2.2 The Frontier Model Forum: Industry Writing Its Own Rules

Confidence: HIGH (0.85)

In July 2023, OpenAI, Anthropic, Google, and Microsoft founded the Frontier Model Forum - an industry body to “ensure safe and responsible development” of frontier AI models. The Forum:

  • Sets “safety best practices” (written by the companies themselves)
  • Coordinates research (funded by a $10M AI Safety Fund)
  • “Shares knowledge with policymakers” (lobbies)
  • Appointed Chris Meserole as its first Executive Director

Encode Justice called it out: “Self-regulation is no substitute for government action.”

The structural problem: every safety standard the Forum proposes becomes the ceiling, not the floor, of AI regulation. When policymakers ask “what safety measures exist?”, the answer is “the ones the industry designed for itself.” This creates a policy regime where the regulators reference industry self-assessment as evidence that regulation is unnecessary.

Sources: Google Blog, TechCrunch, Frontier Model Forum

2.3 The “Safety” Institutes - Renamed, Defanged, Repurposed

Confidence: HIGH (0.9)

Three simultaneous actions gutted AI safety oversight globally:

Action | Date | Agent | Effect
NIST AI Safety Institute renamed CAISI | Jun 2025 | Howard Lutnick (Commerce Secretary) | “Safety” removed from name and mission
UK AISI renamed AI Security Institute | Feb 2025 | Peter Kyle (Tech Secretary) | Ethical oversight removed; national security focus
Biden EO 14110 rescinded | Jan 2025 | Trump (Day 1 action) | All mandatory safety reporting eliminated

The pattern: “safety” was systematically replaced with “security” or “innovation” across every institution. Safety implies protecting people from AI. Security implies protecting AI from threats. Innovation implies removing barriers to AI deployment. The word change IS the policy change.

2.4 PCAST: The Advisory Board That IS the Industry

Confidence: HIGH (0.95) - White House announcement, public record.

Trump’s PCAST, announced March 25, 2026:

Member | Company | Conflict
David Sacks (co-chair) | Craft Ventures | 449 AI investments
Michael Kratsios (co-chair) | Former Thiel Capital | OSTP Director, ex-Thiel chief of staff
Marc Andreessen | a16z | Investor in Anduril and AI companies; co-funder of $125M anti-regulation PAC
Mark Zuckerberg | Meta | Llama AI, $14.3B Scale AI deal
Jensen Huang | NVIDIA | Near-monopoly on AI chips
Larry Ellison | Oracle | Stargate partner ($500B AI infrastructure)
Sergey Brin | Google/Alphabet | Gemini AI, DeepMind
Safra Catz | Oracle | Stargate partner
Michael Dell | Dell | AI server infrastructure
Lisa Su | AMD | AI chip competitor
David Friedberg | The Production Board | Sacks’ All-In podcast co-host

Notable exclusions: Musk and Altman are NOT on PCAST (both have adversarial relationships with other members). No consumer advocates. No civil rights representatives. No AI safety researchers. No labor representatives.

This is not an advisory board. It is the industry advising itself through the mechanism of government.

Sources: White House, Fortune, The Register


3. THE ANTHROPIC BLACKLISTING - The Precedent

Confidence: HIGH (0.95) - Court filings, judicial ruling, news coverage from multiple outlets.

Timeline

Date | Event
Jul 2025 | Anthropic signs $200M Pentagon contract
Sep 2025 | Negotiations stall: DOD wants “unfettered access to all lawful purposes”; Anthropic demands no autonomous weapons, no mass domestic surveillance
Feb 27, 2026 | Pentagon designates Anthropic a “supply chain risk to national security”
Feb 27, 2026 | Trump orders every federal agency to “immediately cease” all Anthropic use
Feb 27, 2026 | OpenAI signs Pentagon replacement contract THE SAME DAY
Mar 5, 2026 | Anthropic officially notified, even as Claude is still being used by military personnel in Iran operations
Mar 7, 2026 | OpenAI robotics lead Caitlin Kalinowski resigns over Pentagon deal
Mar 8, 2026 | 98 OpenAI employees sign protest letter
Mar 9, 2026 | Anthropic files federal lawsuit (Northern District of California)
Mar 16, 2026 | Tech industry rallies behind Anthropic
Mar 24, 2026 | Judge Rita Lin presses DOD: “That seems a pretty low bar”
Mar 26, 2026 | Lin issues preliminary injunction blocking the blacklisting
Mar 26, 2026 | Pentagon CTO says ban “still stands” despite ruling
Ongoing | 1,000+ OpenAI and Google employees sign open letter demanding guardrails

The Ruling

US District Judge Rita Lin (Biden appointee, Northern District of California) issued a 43-page ruling:

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”

“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.”

“The record strongly suggests that the reasons given for designating Anthropic a supply chain risk were pretextual and that [the government’s] real motive was unlawful retaliation.”

The “supply chain risk” designation is typically reserved for foreign adversary contractors. Using it against an American company for refusing to build autonomous weapons is, in Lin’s words, unprecedented.

The Market Signal

ChatGPT uninstalls surged 295% after the Pentagon deal. Claude climbed to #1 on the App Store. The consumer market rejected what the military market demanded - but consumer preference is irrelevant when the procurement contracts are worth billions.

The precedent: refuse the military and lose not just the military contract but ALL federal business. Accept and your ethical objections become “weasel words” (EFF’s characterization of OpenAI’s safeguards).

Sources: CNBC, NPR, CNN, EFF, Democracy Now


4. OPEN SOURCE VS. CLOSED: THE REGULATORY MOAT

Confidence: MEDIUM-HIGH (0.8)

The Strategic Landscape

Company | Position | Beneficiary of regulation?
OpenAI | Closed-source; called for federal licensing in 2023 | YES - licensing regime would cement incumbents
Anthropic | Closed-source, safety-focused | YES - safety requirements raise barriers for competitors
Google | Both (Gemini closed, some open) | YES - compliance costs favor well-funded incumbents
Meta | Open-weights (Llama series) until Dec 2025 | NO - open-source regulation destroys their strategy
xAI (Musk) | Semi-open (Grok) | MIXED

The Altman Licensing Gambit (2023)

In May 2023, Sam Altman testified before Congress and proposed: “a new agency that licenses any effort above a certain threshold of capabilities.” This would:

  1. Create a federal licensing body (barrier to entry)
  2. Require testing before deployment (expensive, favors incumbents)
  3. Allow license revocation (government kill switch on competitors)

This is the regulatory moat strategy: the dominant player calls for regulation designed to prevent new entrants while grandfathering their own position. Altman asked Congress to regulate AI in a way that would make OpenAI the gatekeeper.

By May 2025, Altman’s tune had changed entirely. Testifying before the Senate again, he urged lawmakers against regulations that could “slow down” the US in the AI race against China. The licensing proposal was gone. Why? Because by 2025, the deregulatory environment was more profitable than a licensing regime would have been.

Sources: Fortune 2023, TIME, Fortune 2025

Meta’s Open-Source Reversal

Meta released Llama 1-4 as open-weights models (2023-2025), positioning itself as the open-source champion against closed incumbents. The stance was strategic: Meta lacked a subscription AI product, so open weights undermined competitors’ revenue while Meta benefited from community development.

Then in December 2025, Meta pivoted to a closed proprietary model codenamed “Avocado” (targeting Q1 2026 release), contradicting CEO Zuckerberg’s public stance that open source was “closing the gap.” Simultaneously, Meta acquired 49% of Scale AI for $14.3 billion (June 2025).

The open-source debate was never about ideology. It was about competitive positioning. Open source when you’re behind, closed source when you’re ahead.

Sources: WinBuzzer, Fortune


5. MILITARY AI: THE CONSORTIUM

Confidence: HIGH (0.9) - Contract values public, Pentagon announcements.

The Stack (as of April 2026)

Company | Role | Contract Value | Technate Link
Palantir | Maven AI (targeting, surveillance); designated “program of record” across all 5 military branches | $13B+ (from $480M in 2024) | Thiel co-founder/chairman
Anduril | Lattice OS (autonomous weapons, counter-drone, C2) | $20B ceiling (10-year enterprise) | Luckey (Thiel protege), Founders Fund
OpenAI | Classified military AI deployment on GenAI.mil | Est. $500M-2B | Altman; Musk co-founded; Stargate partner
Scale AI | Thunderforge (AI military planning, LLM testing/evaluation for DOD) | $199M+ combined | Founders Fund early investor; Wang now at Meta
SpaceX/Starshield | Satellite communications, space layer | Classified | Musk
Saronic | Autonomous naval vessels | Classified | Andreessen Horowitz funded

Project Maven: Google to Palantir Pipeline

The Maven trajectory tells the entire story of military AI capture:

  1. 2017: Pentagon launches Project Maven for AI-assisted drone targeting; Google signs on as a contractor
  2. 2018: Google’s involvement becomes public; 12+ employees resign and thousands protest; Google declines to renew the contract
  3. 2019-2023: Palantir steps in, builds Maven Smart System on Gotham/Foundry platforms
  4. 2024: Palantir secures $480M Army contract for Maven
  5. Late 2024: Pentagon integrates Anthropic’s Claude into Maven
  6. Mar 2025: $795M contract modification for continued Maven support
  7. Feb 2026: Anthropic blacklisted; Claude removed from Maven pipeline
  8. Mar 2026: Maven designated “program of record” across all 5 military branches; 20,000+ active users

Google’s employee protest in 2018 was the last successful internal resistance to military AI. After that, every company that resisted (Google, then Anthropic) was replaced by a company that complied (Palantir, then OpenAI). The selection pressure is clear: comply or be replaced. The labor market for ethical AI in defense was eliminated.

Anduril: The Autonomous Weapons Company

Anduril’s $20B enterprise agreement (March 2026) consolidates 120+ separate procurement pathways into ONE framework covering:

  • Lattice AI platform (sensor fusion, targeting, C2)
  • Altius long-endurance UAVs
  • Ghost/Anvil counter-drone systems
  • Menace interceptor UAS
  • Dive autonomous underwater vehicles
  • Edge-compute infrastructure

The first task order ($87M) was for counter-drone operations. Anduril is also raising $4B at a $60B valuation (led by Andreessen Horowitz, March 2026) and building a $1B weapons manufacturing facility in Ohio.

Anduril’s board and investors read like a Technate roster: Palmer Luckey (Thiel protege), Founders Fund ($1B Series G), Andreessen Horowitz (lead investor on latest round), Trae Stephens (Thiel Fellow, former Palantir, Founders Fund partner).

Sources: CNBC, Tom’s Hardware, Army Recognition, CNBC Scale AI


6. THE REVOLVING DOOR

Confidence: HIGH (0.9) - Government appointments are public record.

Thiel Network in Federal AI/Tech Positions (2025-2026)

Person | From | To | AI/Tech Relevance
David Sacks | Craft Ventures (449 AI investments) | AI/Crypto Czar -> PCAST co-chair | Wrote federal AI policy while holding AI portfolio
Michael Kratsios | Thiel Capital (chief of staff to Thiel) | OSTP Director, PCAST co-chair | Oversees federal science/tech policy; youngest and first millennial OSTP Director; no PhD
Jacob Helberg | Palantir (senior advisor to CEO) | Under Secretary of State for Economic Affairs | Retains investments in OpenAI, Anduril, SpaceX, Neuralink, Boring Company
Elon Musk | Tesla, SpaceX, xAI, X | DOGE head (130-day SGE) | Built Palantir master database; controlled government IT during tenure
JD Vance | Narya Capital (Thiel-backed VC) | Vice President | Former Mithril Capital (Thiel); shapes tech agenda
Howard Lutnick | Cantor Fitzgerald (Tether connection) | Commerce Secretary | Renamed AI Safety Institute, dropping “safety”
Jim O’Neill | Thiel Foundation (president) | Deputy Secretary of HHS | Health AI policy
Ken Howery | PayPal co-founder | Ambassador to Denmark | Tech diplomacy
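
A minimal sketch of how the mapping above can be encoded for analysis - the data comes from the table, but the record structure, field names, and conflict heuristic are all hypothetical, not a formal methodology:

# Hypothetical encoding of the revolving-door table for network analysis.
# The record structure and the conflict heuristic are illustrative;
# entries are abbreviated from the table above.
from dataclasses import dataclass, field

@dataclass
class DoorCrossing:
    person: str
    origin: str                  # private-sector position
    federal_role: str
    retained_interests: list = field(default_factory=list)

    def has_conflict(self) -> bool:
        # Crude flag: any retained financial interest while in a federal role.
        return bool(self.retained_interests)

crossings = [
    DoorCrossing("David Sacks", "Craft Ventures",
                 "AI/Crypto Czar -> PCAST co-chair",
                 ["449 AI fund positions"]),
    DoorCrossing("Jacob Helberg", "Palantir (senior advisor)",
                 "Under Secretary of State for Economic Affairs",
                 ["OpenAI", "Anduril", "SpaceX", "Neuralink", "Boring Company"]),
]

for c in crossings:
    print(f"{c.person}: conflict={c.has_conflict()}")

Encoded this way, the pattern the dossier describes is a query, not an impression: every record in the table flags a conflict.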

The 130-Day Loophole

Both Sacks and Musk served as “Special Government Employees” (SGEs) - a designation that allows:

  • No Senate confirmation
  • Limited financial disclosure
  • 130-day annual limit (can be rotated)
  • Ethics waivers at president’s discretion

When the 130 days expired, both transitioned: Musk left DOGE, Sacks moved to PCAST. The policy influence continues through different titles. The SGE mechanism is the revolving door’s revolving door - temporary enough to avoid oversight, long enough to set permanent policy.

The Helberg Case

Jacob Helberg’s appointment is the most explicit revolving door example. As Palantir’s senior advisor, he worked alongside the company receiving $13B+ in military AI contracts. As Under Secretary of State, he shapes international economic policy including AI trade policy. He retained investments in: OpenAI (Pentagon contractor), Anduril ($20B Pentagon contract), SpaceX (Starshield military satellites), Neuralink, and the Boring Company.

Bloomberg reported he “plans to retain” these investments. No divestment. No blind trust. The advisor to the surveillance company now runs State Department economic policy while holding stock in the surveillance company.

Sources: Bloomberg, Sludge, STAT News


7. THE LOBBYING INFRASTRUCTURE

Confidence: HIGH (0.85)

Spending

Entity | 2023 | 2024 | 2025 | Trend
OpenAI (annual total) | $260K | $1.76M | $2.99M | +1,050% in 2 years
OpenAI (Q1 2025 alone) | - | - | $1.2M (up 44% vs. Q1 2024) | Accelerating
Lobbying firms retained by OpenAI | - | - | DLA Piper; Akin Gump Strauss Hauer & Feld | DC heavyweights
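
The headline growth figure is internally consistent - a quick arithmetic check against the table (verification only, not sourced data):

# Verify the "+1,050% in 2 years" claim from the table above.
spend_2023 = 260_000     # OpenAI lobbying spend, 2023
spend_2025 = 2_990_000   # OpenAI lobbying spend, 2025
growth_pct = (spend_2025 - spend_2023) / spend_2023 * 100
print(f"+{growth_pct:.0f}%")  # -> +1050%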

Per Silicon Canals: “OpenAI, Anthropic, and Google each spent more on lobbying in Q1 2025 than the entire AI safety research field received in grants.”

Leading the Future Super PAC

Metric | Value
Founded | August 2025
Total raised | $125 million (2025)
Cash on hand entering 2026 | $70 million
Key funders | OpenAI President Greg Brockman, Andreessen Horowitz (a16z), Palantir co-founder Joe Lonsdale, SV Angel (Ron Conway), Perplexity
Dark money affiliate | Build American AI (nonprofit, no donor disclosure)
Strategy | Back pro-AI candidates in 2026 midterms; target lawmakers supporting AI regulation
First target | NY Assemblyman Alex Bores (sponsor of NY AI safety bill)

The structure: a super PAC (Leading the Future) runs political campaigns. A nonprofit (Build American AI) runs policy advocacy without disclosing donors. Together they create a two-pronged system: lobby lawmakers privately, defeat them publicly if they don’t comply.

Anthropic created a counter-PAC: Public First Action ($20M, backing Bores). This is the AI safety debate reduced to a campaign finance arms race.

Sources: CNBC, Fortune, Axios, TechCrunch


8. THE SAM ALTMAN PARADOX

Confidence: HIGH (0.85)

Sam Altman occupies a unique position in the AI regulatory capture story - he simultaneously:

  1. Runs the most powerful AI company (OpenAI, GPT-4/5, claims of approaching AGI)
  2. Co-founded the global biometric identity platform (World ID, 38M+ users, iris scanning - see Dossier 053)
  3. Testified to Congress that AI needs regulation (May 2023 - proposed licensing)
  4. Reversed position to oppose regulation (May 2025 - “don’t slow us down”)
  5. Co-leads $500B Stargate with SoftBank and Oracle (announced with Trump, Jan 2025)
  6. Signed the Pentagon deal the same day Anthropic was blacklisted (Feb 27, 2026)
  7. Co-funds $125M anti-regulation PAC (Leading the Future, via Greg Brockman)
  8. Is converting OpenAI from nonprofit control to a for-profit public benefit corporation

The paradox resolves when you realize these positions are not contradictory - they are sequential:

  • Phase 1 (2023): Call for licensing when you’re the dominant player. Licensing cements your position.
  • Phase 2 (2025): Switch to anti-regulation when the administration is friendly. Deregulation is now more profitable than licensing.
  • Phase 3 (2026): Take the military contract your competitor refused. The ethics are optional; the revenue is not.
  • Throughout: Build World ID so that when AI makes identity verification impossible, you own the replacement. Create the disease and sell the cure.

Altman’s 2023 licensing proposal would have created a federal agency granting permission to build AI above certain capability thresholds. OpenAI would have been grandfathered in. Every startup, every open-source project, every university research lab would need a license. This is Standard Oil asking the government to regulate the oil industry - by requiring a license that only Standard Oil could get.

When the political winds shifted to deregulation under Trump, Altman shifted with them. The goal was never safety or deregulation - it was ensuring that whichever regulatory regime emerged, OpenAI would be inside the castle walls.

Sources: PBS, Washington Post, OpenAI Stargate


ADVERSARY (Steelman)

The case FOR industry self-regulation:

  • AI moves too fast for legislative processes. By the time a law passes, the technology has changed.
  • Industry participants understand the technology better than lawmakers. Technical expertise matters.
  • The EU AI Act is already being criticized as overly broad, potentially stifling European AI development.
  • Heavy regulation could push AI development to less transparent jurisdictions.
  • The Frontier Model Forum’s voluntary commitments are better than no commitments.

The case FOR the current US approach:

  • China IS building military AI rapidly. Slowing US development has real national security costs.
  • The Anthropic blacklisting was reversed by the courts - the system worked.
  • PCAST members are qualified to advise on technology policy. Who else would you appoint?
  • Sacks divested $200M. That’s not nothing.
  • The “Winning the AI Race” plan does address infrastructure, standards, and competitiveness.

The case AGAINST the “capture” framing:

  • Regulatory capture implies a hidden agenda. These actions are public, debated, and challenged.
  • Not every appointment from industry is corrupt. Domain expertise comes from industry.
  • The revolving door also means government gets competent people who understand the technology.

Counter-counter: These arguments are structurally sound but miss the concentration problem. It is not that industry participates in regulation - it is that ONE investor network (Thiel/PayPal Mafia) simultaneously holds: the OSTP directorship, the AI czar/PCAST role, a State Department under secretary position, the VP, key federal appointments, the dominant military AI contracts, AND the super PAC attacking lawmakers who disagree. That is not expertise advising government. That is a network governing through government.


TZELEM (What Happens When This Is Weaponized)

Scenario 1: The Licensing Regime Returns

If political winds shift and a licensing regime is enacted (as Altman proposed in 2023), the companies on PCAST would write the licensing requirements. Requirements designed for organizations with billions in compute budget, thousands of employees, and existing Pentagon relationships. Every university AI lab, every open-source project, every startup in a garage - would need to ask permission from an agency advised by their competitors.

Scenario 2: The China Justification Expands

“We can’t regulate because of China” is already used to block AI safety regulation. The same argument can be extended to: “We can’t restrict AI surveillance because China doesn’t.” “We can’t limit autonomous weapons because China won’t.” “We can’t require consent for biometric data because China already has it.” The China frame converts every democratic safeguard into a competitive disadvantage.

Scenario 3: The Anthropic Precedent Holds

If the injunction is overturned on appeal, the precedent becomes: companies with ethical objections to military AI are national security threats. This eliminates the market for ethical AI permanently. Every future AI company will know from day one: cooperate fully or be destroyed.

Scenario 4: The PAC Succeeds

If Leading the Future successfully defeats pro-regulation lawmakers in the 2026 midterms, the legislative path to AI regulation is closed for at least two more election cycles. Combined with the executive branch deregulation and the PCAST advisory structure, this would mean: no federal regulation (executive), no state regulation (preemption EO), no legislative regulation (defeated lawmakers), no international alignment (US rejects EU framework). Complete regulatory vacuum, filled only by industry self-governance.


SOD (What Emerges)

The capture of AI regulation is not a conspiracy - it is a system operating exactly as designed. The mechanism has five interlocking parts:

Part 1: Personnel

Install people from one investor network into every relevant government position. Sacks (AI policy), Kratsios (science policy), Helberg (economic diplomacy), Vance (VP), Musk (government restructuring). All from the same network. All retaining financial interests in AI companies.

Part 2: Policy

Use those positions to: revoke safety requirements (EO 14110 rescinded), rename safety institutions (NIST AISI -> CAISI), preempt state regulation (December 2025 EO), and create a “pro-innovation” framework that treats any safety requirement as a barrier.

Part 3: Procurement

Award contracts exclusively to network companies (Palantir $13B+, Anduril $20B, OpenAI Pentagon deal) while blacklisting the one company that set ethical boundaries (Anthropic). Use enterprise agreements to make switching impossible.

Part 4: Political Infrastructure

Create a $125M super PAC (Leading the Future) to defeat any lawmaker who challenges the framework. Fund it with money from the same companies receiving the government contracts.

Part 5: Advisory Capture

When the temporary positions expire, move to permanent advisory roles (PCAST) populated exclusively by the CEOs of beneficiary companies. This ensures policy continuity regardless of individual role changes.

The result is a closed loop:

INVESTMENT (Thiel/VC network)
    |
    v
PERSONNEL (Sacks, Kratsios, Helberg)
    |
    v
POLICY (deregulation, preemption, "innovation")
    |
    v
PROCUREMENT (Palantir, Anduril, OpenAI contracts)
    |
    v
LOBBYING/PAC (Leading the Future, $125M)
    |
    v
ELECTORAL (defeat pro-regulation lawmakers)
    |
    v
ADVISORY (PCAST - permanent industry influence)
    |
    v
back to INVESTMENT (returns flow to VC network)
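
The loop can be stated compactly as a directed cycle - a toy encoding of the diagram above (node names from the diagram; the representation is illustrative only):

# Toy encoding of the closed loop above as a directed graph.
LOOP = {
    "investment": "personnel",
    "personnel": "policy",
    "policy": "procurement",
    "procurement": "lobbying_pac",
    "lobbying_pac": "electoral",
    "electoral": "advisory",
    "advisory": "investment",   # returns flow back to the VC network
}

def is_closed_loop(graph: dict, start: str) -> bool:
    """True if following successors from `start` visits every node exactly
    once and returns to `start`."""
    node, seen = start, set()
    while node not in seen:
        seen.add(node)
        node = graph[node]
    return node == start and seen == set(graph)

print(is_closed_loop(LOOP, "investment"))  # -> True

The structural point the encoding makes: there is no exit node. Every stage feeds the next, and the last feeds the first.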

The question is not whether this constitutes regulatory capture. The question is whether there remains any institution capable of regulating AI that is not already occupied by the people who profit from AI being unregulated.

The EU AI Act is the only remaining structural counterweight. It takes effect August 2026. Whether it survives implementation pressure from the world’s most well-funded lobbying apparatus will determine whether any democratic oversight of AI exists anywhere on Earth.


CROSS-REFERENCES

  • Dossier 053 - World ID biometric identity platform (Altman; referenced in Section 8)
  • Dossiers 042, 046, 035 - Technate infrastructure mapping series (see header)

