Hacker-City
Technology | April 6, 2026 | 5 min read

Sam Altman says AI superintelligence is so big that we need a 'New Deal'—critics say OpenAI's policy ideas are a cover for 'regulatory nihilism'

OpenAI released a 13-page policy paper outlining how the world needs to prepare for superintelligence, proposing everything from tax system reforms to shorter workweeks. Critics question the company's motives and trustworthiness, noting that OpenAI is hardly a neutral party in AI regulation discussions.


OpenAI has released a comprehensive policy framework advocating for fundamental societal restructuring to accommodate the emergence of superintelligence—AI systems capable of surpassing human cognitive abilities across all domains. The company argues that transformative changes spanning tax systems, labor structures, and economic frameworks will be essential to navigate this technological transition effectively.

Released on Monday, the 13-page document, titled "Industrial Policy for the Intelligence Age," presents what OpenAI characterizes as a "slate of people-first policy ideas" designed to initiate broader policy discussions. The release coincided with a detailed New Yorker investigation examining CEO Sam Altman's track record on AI safety commitments, raising questions about the company's credibility in policy advocacy.

The policy paper, authored by OpenAI's global affairs team, addresses anticipated economic disruptions from superintelligence while proposing various mitigation strategies. "We offer them not as a comprehensive or final set of recommendations, but as a starting point for discussion that we invite others to build on, refine, challenge, or choose among through the democratic process," the company stated in its accompanying blog post.

The document's wide-ranging proposals—encompassing public wealth funds and reduced working hours—reflect the scale of transformation OpenAI anticipates. However, policy experts emphasize the inherent tension in having a leading AI company shape regulatory discussions that directly impact its business interests.

Lucia Velasco, a senior economist and AI policy specialist at the Inter-American Development Bank and former UN digital technologies policy head, highlighted this fundamental conflict. "OpenAI is the most interested party in how this conversation turns out, and the proposals it advances shape an environment in which OpenAI operates with significant freedom under constraints it has largely helped define," she observed.

Despite these concerns, Velasco acknowledged the document's value in addressing governmental policy gaps. "Most are still treating AI as a technology problem when it's actually a structural economic shift that needs proper industrial policy," she noted. "That's a useful contribution, and the document deserves to be taken seriously as an agenda-setting exercise, even if it's a starting point."

Soribel Feliz, an independent AI policy advisor with previous Senate experience, credited OpenAI for formalizing these discussions while noting the familiar nature of the proposals. "Some of these pillars—'share prosperity broadly, mitigate risks, democratize access'—have been the framework for every major AI governance conversation since ChatGPT came out in November 2022," she explained.

Feliz emphasized that many concepts outlined in the paper have been extensively discussed in policy circles, referencing nine AI Policy Forum sessions during her Senate tenure where similar ideas were presented. "The language around public-private partnerships, AI literacy and worker voice reads like it came out of a UNESCO or OECD AI policy framework report. The ideas are not wrong. The problem is the gap between naming the solutions and building real mechanisms to achieve them."

The document appears strategically targeted toward Washington policymakers who have grappled with AI regulation since ChatGPT's public launch. Some observers noted improvements in specificity compared to previous OpenAI policy communications.

Nathan Calvin, vice president of state affairs and general counsel at Encode AI, recognized substantive improvements in the document's approach. "I found this document to genuinely be a real improvement from previous documents that were even more floaty and high level," he said, highlighting concrete suggestions regarding auditing, incident reporting, and government AI usage restrictions.

However, Calvin raised concerns about OpenAI's simultaneous lobbying activities through the Leading the Future PAC, which advocates for industry-favorable policies. The organization, conceived by OpenAI global affairs head Chris Lehane and significantly funded by OpenAI president Greg Brockman, has actively opposed certain AI safety measures.

"I hope this document signals a move toward more constructive engagement, instead of attacking politicians pushing the very policies OpenAI is now endorsing," Calvin stated, referencing the PAC's opposition to New York congressional candidate Alex Bores, who sponsored the RAISE Act—New York's AI safety and transparency legislation recently signed into law.

Calvin has previously accused OpenAI of employing aggressive tactics against California's SB 53 transparency legislation and using litigation with Elon Musk to intimidate policy critics, including his organization Encode, which OpenAI allegedly suggested was secretly funded by Musk.

The policy paper represents a significant moment in AI governance discussions, offering detailed proposals while raising fundamental questions about industry self-regulation and the appropriate role of AI companies in shaping their own oversight frameworks.