Shifting tides at OpenAI
A prominent figure in OpenAI’s hardware division has stepped away from the company, underscoring how sharply the industry now weighs safety against rapid innovation. Caitlin Kalinowski cited two core issues she believes warrant deeper scrutiny: the possibility of surveilling Americans without judicial oversight and the deployment of lethal autonomous systems without human authorization.
The decision comes on the heels of OpenAI’s recent engagement with the US Department of War, a move that amplified debates about guardrails, accountability, and how far AI should be permitted to operate in national security applications.
The resignation and the message behind it
In a message posted on the social platform X, Kalinowski described the choice as difficult but principled. She argued that while artificial intelligence has a meaningful role to play in national security, the two red lines she highlighted deserved more deliberate consideration than the process afforded them at the time.
A broader context
The episode unfolds against a backdrop of tension between AI developers and the US government over military applications. Prior to Kalinowski’s departure, Anthropic had its own disagreements with the Department of War over safeguards against weaponization and mass surveillance. OpenAI stepped into the fray by pursuing a government deal, a move that drew scrutiny from industry observers and safety advocates alike.
What executives said about safety and speed
OpenAI’s leadership publicly acknowledged the pacing issue. CEO Sam Altman conceded that some steps were rushed, and clarified that OpenAI’s tools would not be used within US intelligence agencies under the Department of War arrangement. He stressed that future work would unfold more cautiously, with safety safeguards and thorough risk assessments conducted in partnership with the government.
What Kalinowski plans next
According to her LinkedIn profile, Kalinowski led planning and operations for a rapidly expanding robotics program at OpenAI. She indicated she is taking personal time before moving to a role focused on building responsible physical AI, signaling a shift toward governance and practical safety in real-world robotics work.
Why this matters for the AI landscape
The resignation spotlights the ongoing debate over how far government collaboration should go in shaping AI capabilities. As major labs push forward with rapid development, policymakers and industry voices alike are calling for clearer guardrails to protect civil liberties while still pursuing innovation.
Sources: PC Gamer, March 9, 2026; context on Anthropic and DoW discussions; statements from Kalinowski and Altman.