Robot Taxes and Wealth Funds: OpenAI's New Deal Pitch Meets a Credibility Crisis
- David Borish


The most valuable private technology company in the world just published a detailed plan for how the government should tax, regulate, and redistribute the wealth generated by the very technology it is racing to build. The question is whether anyone should take it at face value.
OpenAI's "Industrial Policy for the Intelligence Age," released April 6, lays out six core proposals for restructuring the American economy around what it describes as approaching superintelligence. The document calls for a national public wealth fund, taxes on automated labor, a shift in the tax base from payroll to capital gains, four-day workweek pilots at full pay, automatic safety-net triggers tied to economic data, and containment playbooks for AI systems that can't be easily recalled.
CEO Sam Altman told Axios the scale of disruption ahead is comparable to the Progressive Era and the New Deal, and that the two most immediate threats are AI-enabled cyberattacks and biological weapons. The document frames itself as a starting point for debate, and OpenAI is backing that framing with up to $100,000 in research grants and $1 million in API credits for policy work that builds on these ideas.
A Sovereign Wealth Fund for AI
The most structurally ambitious proposal is the public wealth fund. OpenAI envisions a nationally managed fund, seeded in part by mandatory contributions from AI companies, that would invest in diversified assets across both AI firms and the broader set of businesses adopting the technology. Returns would be distributed directly to American citizens.
The model draws from Alaska's Permanent Fund, which pays annual dividends from oil revenues. In OpenAI's version, every American would receive a direct ownership stake in AI-driven economic growth, regardless of whether they hold stocks or have access to capital markets. The logic is straightforward: as AI generates enormous value, most of it risks concentrating at the top unless new mechanisms route it outward.
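For scale, the Alaska-style mechanics reduce to simple arithmetic. Every figure below is invented for illustration; OpenAI's document names no fund size, payout rate, or eligibility rule:

```python
# Toy arithmetic for the Alaska-style dividend model the proposal draws on.
# All figures here are hypothetical; the document commits to no numbers.

fund_value = 500e9          # hypothetical fund size: $500 billion
payout_rate = 0.05          # hypothetical annual payout: 5% of fund value
eligible_citizens = 330e6   # rough U.S. population

dividend_per_citizen = fund_value * payout_rate / eligible_citizens
print(f"${dividend_per_citizen:.2f} per citizen per year")  # → $75.76
```

Even at these generous assumptions, the per-person dividend is modest, which is why the size of the mandatory contributions would be the politically decisive detail.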
Anthropic proposed a similar idea in its own economic policy paper last October, suggesting sovereign wealth funds that would allow governments to acquire positions in AI-related assets. The convergence is notable. Two of the three leading frontier AI labs have now explicitly argued that AI's gains need to be redistributed through public ownership structures.
Robot Taxes and a Shifting Tax Base
OpenAI's tax proposals address what the company calls a structural problem in the making. As AI displaces workers, the wage-and-payroll tax revenue that currently funds Social Security, Medicaid, SNAP, and housing assistance could collapse. The company proposes shifting the tax base from payroll toward capital gains and corporate income, and floats taxes specifically tied to automated labor.
This puts OpenAI in peculiar company. Bill Gates proposed a robot tax in 2017. Marc Andreessen backed Donald Trump in 2024 partly in opposition to Biden's proposed capital gains tax increases. OpenAI is now advocating for policies that some of the most powerful figures in its own investor base have actively fought against.
The document stops short of specifying rates. It identifies the problem and points in a general direction without committing to numbers. That vagueness is part of what has drawn criticism.
The Four-Day Week and Automatic Tripwires
Two of the more concrete proposals involve converting AI-driven productivity into tangible worker benefits. OpenAI suggests government-backed pilots of 32-hour workweeks at full pay, framing the reduced hours as an "efficiency dividend" that redirects productivity gains to time rather than output. The company also calls for portable workplace benefits untethered from any single employer, a recognition that AI may accelerate the erosion of traditional employment relationships.
Perhaps the most operationally interesting proposal is the automatic safety-net trigger. OpenAI envisions preset economic thresholds, tied to displacement data, that would automatically expand unemployment benefits, wage insurance, and cash assistance when hit. When conditions stabilize, the measures phase out. The mechanism borrows from fiscal stabilizer logic that already exists in some forms but has never been tied specifically to technology-driven displacement metrics.
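The trigger-and-phase-out logic can be made concrete with a toy sketch. The 25% threshold, the three-month phase-out window, and the use of "share of layoffs attributed to automation" as the metric are all invented for illustration; the proposal specifies none of these:

```python
# Toy sketch of an automatic safety-net trigger with a phase-out window.
# Thresholds and the displacement metric are hypothetical, not from the proposal.

TRIGGER = 0.25    # hypothetical: expand benefits when 25% of layoffs are automation-driven
PHASE_OUT = 3     # hypothetical: wind down after 3 consecutive calm months

def stabilizer_active(monthly_rates: list[float]) -> bool:
    """Return True while expanded benefits should be in effect.

    Benefits switch on when the latest monthly displacement rate crosses
    the threshold, and stay on through a phase-out window after it recedes.
    """
    if not monthly_rates:
        return False
    if monthly_rates[-1] >= TRIGGER:
        return True
    # Count calm months since the last breach; if no breach ever
    # occurred, the stabilizer was never armed.
    calm = 0
    for rate in reversed(monthly_rates):
        if rate >= TRIGGER:
            break
        calm += 1
    else:
        return False
    return calm < PHASE_OUT
```

With these invented numbers, a single month of elevated displacement keeps expanded benefits active for up to three further months before they lapse, which is the "phase out when conditions stabilize" behavior the document describes.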
Containment Playbooks for Rogue AI
The document's most arresting passage acknowledges scenarios where dangerous AI systems "cannot be easily recalled" because they are autonomous and capable of replicating themselves. OpenAI's proposed response: coordinated government-industry containment playbooks.
This is the kind of language that typically appears in safety research papers, not in lobbying documents from the company building the systems in question. Whether it reads as responsible foresight or as an attempt to normalize catastrophic risk scenarios while continuing to build toward them depends largely on how much credibility you assign to the messenger.
The Credibility Problem
The timing of this document is difficult to separate from its content. OpenAI released it on the same day The New Yorker published an investigation, a year and a half in the making, into Altman's leadership. That report, based on more than 100 interviews and hundreds of pages of previously undisclosed internal documents, includes allegations that Altman repeatedly misrepresented safety protocols to his own board, that the company's superalignment team received a fraction of the compute it was publicly promised, and that former chief scientist Ilya Sutskever compiled roughly 70 pages of evidence documenting what he called a consistent pattern of deception.
Former CTO Mira Murati told The New Yorker that institutions need to be worthy of the power they wield. One board member described Altman as having a "sociopathic lack of concern for the consequences" of misleading people. Whether or not one accepts these characterizations, their publication on the same day as a sweeping policy proposal creates an unavoidable tension.
There is also the matter of OpenAI's lobbying record. Nathan Calvin, vice president of state affairs at Encode AI, noted that OpenAI's political arm, Leading the Future PAC, has actively worked against state-level AI regulation, including bills like New York's RAISE Act and California's SB 53, while the company simultaneously publishes documents calling for stronger governance. Anton Leicht, a visiting scholar at the Carnegie Endowment, was more direct, writing on X that the document's vague nature and timing suggest "comms work to provide cover for regulatory nihilism."
Policy experts who reviewed the document offered a more measured assessment. Soribel Feliz, a former senior AI policy advisor to the U.S. Senate, said OpenAI deserves credit for putting these ideas on paper. Lucia Velasco, a senior economist at the Inter-American Development Bank, noted the proposals themselves are mostly familiar from existing governance conversations but acknowledged that the company is correct in saying governments are falling behind.
The Labor Market Context
Whatever one makes of OpenAI's motives, the document lands against a labor backdrop that gives its warnings real weight. White-collar payrolls in the United States have contracted for 29 consecutive months, a stretch that economists describe as unprecedented outside a recession. At elite business schools, the share of graduates still seeking employment three months after graduation has tripled or quadrupled compared to 2019. AI was cited as the reason for over 15,000 of the 60,000 planned job cuts announced in March alone.
The headline unemployment rate remains around 4.3%, but that number increasingly obscures the structural shift underneath it. As former Glassdoor chief economist Aaron Terrazas has observed, labor market slack is showing up as underemployment and workforce exits rather than in formal unemployment figures.
OpenAI's document effectively names this dynamic and proposes fiscal responses. The question is whether a company valued at $852 billion, preparing for a public listing, and under fresh scrutiny for the gap between its stated values and its behavior, is the right entity to lead this conversation.
What Comes Next
The proposals in "Industrial Policy for the Intelligence Age" are politically ambitious. As Axios noted, shifting the tax base from labor to capital would require inverting decades of Republican economic orthodoxy. A national public wealth fund would require the kind of legislative consensus that currently does not exist. Four-day workweek mandates face opposition from business coalitions that fund both parties.
OpenAI frames all of this as a starting point. The company is opening a Washington, D.C. workshop in May, soliciting feedback at a dedicated email address, and funding research grants. Whether these moves produce policy outcomes or serve primarily as reputation management during a sensitive IPO window will become clearer in the months ahead.
What is clear is that the company building some of the most capable AI systems on Earth has now publicly stated that those systems will hollow out the labor market, erode the tax base, and concentrate wealth unless governments intervene at a scale not seen since the New Deal. That admission alone, regardless of the source, is worth taking seriously. The hard part is building institutions that act on it before the displacement curves steepen beyond what any policy can catch.