‘Hundreds of millions into super PACs’: money from AI firms could shape 2026 races and the rules for the technology. Watchdogs call for stronger disclosure.

AI Firms Funding Political Super PACs

Henry Jollster

As the 2026 midterms approach, a surge of political money from artificial intelligence companies is setting up a high-stakes fight over how the technology will be governed. Industry players are moving fast to back candidates and causes that align with their policy goals across Congress and key states.

The AI industry is pouring hundreds of millions of dollars into super PACs ahead of the 2026 midterm elections.

The funding push suggests a coordinated effort to influence debates on safety standards, copyright, national security, and data rules. It also raises familiar questions about the role of large donors and outside groups in U.S. elections.

Why this money matters now

AI policy has shifted from academic debate to legislative action. Lawmakers in both parties have floated proposals on model disclosures, training data, and export controls. The Biden administration's 2023 executive order directed developers of the most capable models to report safety-test results to the government. State legislatures are weighing deepfake limits and rules for automated hiring.

Super PACs can spend unlimited sums to support or oppose candidates as long as they do not coordinate with campaigns. The influx from AI firms and their executives could tilt close races and influence committee agendas once the next Congress is seated.

Lessons from recent cycles

Outside spending from the broader tech and crypto sectors has climbed in recent years. OpenSecrets reported that crypto-aligned super PACs surpassed $100 million in the 2024 cycle, signaling how a young industry can rapidly mobilize cash. AI firms appear to be following a similar playbook, building donor networks, testing messages, and focusing on primaries where smaller amounts can have outsize effects.

Election lawyers say the strategy is straightforward: back candidates open to industry input on regulation and oppose those who favor strict limits or heavy liability. That pattern is likely to continue into 2026 as the committees with jurisdiction over commerce, the judiciary, and intelligence become key battlegrounds.

What AI companies want from policymakers

Industry leaders argue that clear rules will support growth while addressing real risks. They often cite the need for research funding, immigration policies that attract talent, and predictable liability standards for model providers and deployers.

  • Safety and reporting rules tied to model size and capability
  • Copyright and data-use frameworks for training material
  • National security controls for advanced chips and model exports
  • Privacy protections and consent standards
  • Support for workforce training and education

Supporters of the spending push say lawmakers need technical input to avoid blunt measures that could slow useful applications. They argue that targeted guardrails and audits can reduce harms without choking off research.

Concerns from watchdogs and civil society

Campaign finance watchdogs warn that large independent expenditures can narrow the policy debate to the preferences of a few wealthy donors. They point to the opacity of some super PAC funding streams and the growing use of nonprofit intermediaries that do not disclose donors.

Advocates for artists, workers, and small businesses worry that outsized spending could weaken protections on data use, compensation, and transparency. They favor strict rules for AI-generated deepfakes in elections, clearer copyright remedies, and strong privacy laws with real enforcement.

Some election security experts also flag the risk of synthetic media in close races. They say Congress and states should fund rapid takedown processes, watermarking standards, and voter education to reduce confusion.

How the money could shape the 2026 map

Analysts expect spending to cluster in swing House districts, Senate toss-ups, and state races that control tech policy. Primary contests may see early waves of ads testing messages about jobs, innovation, and safety. General elections could feature contrasts on copyright and consumer protection.

If pro-industry candidates win key committee seats, hearings and markups in 2027 could favor light-touch rules, voluntary standards, and preemption of stricter state laws. If skeptics gain ground, expect stronger audits, liability paths for harmed parties, and tighter controls on advanced model releases.

What to watch next

Filings with the Federal Election Commission will reveal which super PACs lead the charge and how they spend. State disclosures could show where ballot measures on AI, privacy, or deepfakes are in play. Media buyers say early ad reservations will offer clues about target districts and narratives.

Reform groups are pushing for faster disclosure of digital ads, stronger disclaimers for synthetic content, and bans on deepfakes in political campaigns. Some propose small-donor matching to dilute the influence of large checks.

The money is arriving well ahead of the vote, signaling a long campaign to write the rules of AI. The next year will test whether policymakers can craft protections that match the speed of the technology. Voters should expect more ads, sharper arguments, and higher stakes as both parties define what responsible AI looks like in law.