In March 2026, a political group called Innovation Council Action quietly opened a Washington office, hired a staff, built a lawmaker scorecard, and announced it would spend more than $100 million in the 2026 midterm elections. Its goal was not to elect a president or pass a specific law. Its goal was to ensure that artificial intelligence remains unregulated — and to punish the politicians who try to change that.
The group has the blessing of David Sacks, the Silicon Valley investor who served as Donald Trump’s AI and crypto advisor. It is led by Taylor Budowich, a former Trump White House deputy chief of staff who previously ran the MAGA Inc. super PAC. Its stated mission is to advance what it calls Trump’s “innovation agenda” — which in practice means opposing any federal or state regulation of AI that the technology industry finds inconvenient.
Innovation Council Action is not alone. The AI industry has now committed more than $300 million to the 2026 midterm elections. Pro-AI groups are outspending pro-regulation advocates by approximately three to one. AI-backed candidates have won 10 of 11 congressional primaries so far this cycle. The technology industry is not just lobbying on AI policy. It is buying the legislators who will write it.
How the Money Works
The mechanics of AI political spending follow a model that the cryptocurrency industry refined in the 2024 election cycle. Identify the races where regulation-friendly candidates can be defeated or replaced. Build scorecards that rank lawmakers by their alignment with industry positions. Direct spending to primaries, where turnout is low, money goes further, and ideologically extreme candidates can win by narrow margins. Make the cost of opposing the industry’s position clear to incumbents before they vote.
Innovation Council Action has already built its lawmaker scorecard, ranking members of Congress based on their alignment with Trump’s AI agenda. The scorecard is explicitly designed as a spending guide — a map of which politicians will receive support and which will face opposition. Budowich’s public statement made the threat explicit: “The cavalry is coming to back up the policymakers who stand with the president and will hold accountable the ones who don’t.”
The other major players in the AI political spending ecosystem are following similar strategies. Leading the Future, backed by tech executives including Greg Brockman, Joe Lonsdale, and Marc Andreessen, has raised approximately $125 million. Meta has launched a pro-AI super PAC effort expected to spend around $65 million, focusing on state-level races where consumer protection and data privacy legislation has been advancing. The combined spending targets races in Iowa, Kentucky, Maine, Michigan, and North Carolina: states where regulatory sentiment remains divided and where relatively small sums can shift outcomes.
What They Are Actually Buying
The framing of this spending as support for “innovation” and American competitiveness against China is politically effective and analytically misleading.
The policy positions that AI political spending is designed to protect are not abstract innovation principles. They are specific regulatory outcomes that benefit specific companies. The campaign against state-level AI regulation is primarily a campaign to prevent states from requiring AI companies to disclose how their systems make decisions, from prohibiting AI systems that produce discriminatory outcomes in housing and employment, from regulating the use of AI in criminal justice and benefits determination, and from holding AI companies liable when their systems cause harm.
These are not frivolous regulations dreamed up by technophobes. They are the kinds of consumer and civil rights protections that the legal system applies to every other industry that affects people’s lives at scale. The argument that applying them to AI would harm American competitiveness against China is the same argument that was made against financial regulation after the 2008 crisis, against pharmaceutical regulation in the 1950s, and against automobile safety standards in the 1960s. The argument has never been correct. It has always been effective.
The Sacks Problem
The figure at the center of the AI deregulation push — David Sacks — is himself a complicated political asset. As Trump’s AI and crypto advisor, Sacks championed positions that have now run into significant resistance from multiple directions.
His two attempts to block states from regulating AI through federal preemption legislation both failed on Capitol Hill, defeated by a coalition of Republicans and Democrats who were unwilling to hand the federal government the power to override state consumer protection laws. His China policy positions have drawn criticism from Republican national security hawks who view his approach as insufficiently aggressive. His general hostility toward Anthropic — one of the few AI companies that has publicly argued for safety-oriented regulation — has created friction with Defense Department officials who have found Anthropic’s security credentials persuasive.
Former Trump advisor Steve Bannon delivered a pointed assessment to Axios: “Sacks brought policies that have been resoundingly defeated — FULL STOP.” Sources close to the White House told Axios that Sacks’ deregulatory vision is increasingly out of step with voter concerns, including within the MAGA movement itself.
The $100 million bet may be partly an attempt to reshape the political environment that has made Sacks’ preferred policies difficult to advance — to build, through campaign spending, the congressional coalition that the lobbying and policy work have failed to produce.
What Voters Actually Think
The most inconvenient fact for the AI deregulation campaign is what polling consistently shows about public opinion on the subject.
Americans across party lines are skeptical of AI and broadly supportive of regulation. A Pew Research survey from 2025 found that more Americans are worried about AI than excited about it, with concern running highest among older Americans and those in communities most directly affected by AI-driven job displacement. Notably, recent polling finds that more Republicans than Democrats favor some form of AI regulation, a finding in direct tension with the political coalition that Innovation Council Action is trying to build.
The crypto industry’s experience in 2024 provides a relevant precedent. Crypto-aligned PACs spent heavily to elect sympathetic candidates and succeeded in a number of races. The result was a more crypto-friendly Congress — but not a transformation of public opinion, which remained skeptical. The industry bought legislative access, not legitimacy.
AI political spending will likely yield a similar outcome. The 2026 midterms will almost certainly produce a more AI-industry-friendly Congress than would otherwise have emerged. But campaign spending will not dissolve the underlying public skepticism of AI, and it will not address the specific harms that are generating demands for accountability. Those harms will continue to accumulate.
The Regulation That Cannot Be Stopped
The AI industry’s campaign against regulation faces a structural problem that money cannot solve: the harms it is trying to avoid regulating are real, documented, and affecting people with political voices.
AI systems used in criminal sentencing have been shown to produce racially biased outcomes. Algorithmic hiring systems have been documented discriminating against women and older workers. AI-powered content recommendation systems have been linked to radicalization and mental health harm in young users. AI-generated misinformation has already affected elections. These are not hypothetical future risks. They are current realities generating current political pressure.
Every major technology that affected public life at scale eventually got regulated — not because regulators were wiser than innovators, but because the harms produced by unregulated deployment accumulated to the point where political resistance became impossible to sustain. Automobiles, pharmaceuticals, financial instruments, telecommunications — all followed this trajectory. The specific timeline varies. The direction does not.
The $300 million that the AI industry is spending in 2026 is a bet that it can delay this trajectory long enough to build facts on the ground: deployed systems, economic dependencies, and trained workforces that make regulation practically difficult even if it becomes politically inevitable. It is a bet that has bought other industries time before. It is also a bet that, in every case, ultimately lost.
Who Decides
The most important question the AI political spending campaign raises is not whether regulation is good or bad for innovation. It is who gets to decide.
The argument for deregulation is usually framed as an argument about economics — regulation slows innovation, innovation creates growth, growth benefits everyone. The argument against it is usually framed as an argument about safety and fairness — unregulated AI causes harm, harm falls disproportionately on vulnerable people, accountability matters.
Both of these framings miss the underlying political question: in a democracy, who has the authority to set the terms under which powerful technologies operate? The answer democratic systems have historically given is: the public, through elected representatives, through regulatory agencies, and through the legal system.
The AI industry’s $300 million campaign is an attempt to use the formal mechanisms of democratic participation — campaign contributions, political organizing, electoral spending — to ensure that the answer to this question is different for AI than it has been for every other powerful technology. It is using democracy to limit democracy’s reach over a technology that will affect everyone.
Whether it succeeds will be determined not just by how much money is spent, but by how quickly the harms of unregulated AI become visible enough, and widespread enough, to generate the political response that no amount of campaign spending can permanently suppress.