Big brain energy, now in one building

Anthropic has shuffled its research deck and created the Anthropic Institute, an internal think tank that merges three of its research groups into one place focused on big-picture questions. Think jobs, economies, safety, values and the tricky business of keeping powerful AI under control.

What the new institute actually does

The Anthropic Institute brings together the company’s societal impacts team, its frontier red team that stress-tests models, and its economic research group. The plan is to study how AI changes labor markets, social behavior and public trust. It will also incubate new projects, including a team led by Matt Botvinick that will look at how AI affects the legal system.

Leadership shuffle (same people, new chairs)

Jack Clark, a cofounder who spent more than five years running public policy, is moving to lead the new think tank with the title head of public benefit. Sarah Heck, formerly head of external affairs, will now run the public policy team, which reportedly tripled in size in 2025. Anthropic is also moving ahead with its planned Washington, DC office and says the public policy crew will keep focusing on national security, AI infrastructure, energy, and democratic leadership in AI.

Bad timing? Maybe. But planned for a while.

The timing is conspicuous. Anthropic announced the institute days after suing the US government over a designation that labeled the company a supply-chain risk, a move that would make it harder or impossible for some clients to use Anthropic tech in Department of Defense work. The lawsuit argues the company was improperly blacklisted in part because it set limits on mass domestic surveillance and fully autonomous weapons.

The company says the institute was already in the works and the recent legal and government dust-up hasn’t changed its research agenda. As one leader put it, it’s never dull in AI, and progress doesn’t pause for politics. The situation did, however, reinforce Anthropic’s view that more public conversation and more disclosure about AI are needed.

Who’s in the room

The institute launches with about 30 staffers, including founding members Matt Botvinick, Anton Korinek (a professor on leave from the University of Virginia), and Zoe Hitzig, who left another major lab after it decided to add ads to its product. Korinek and Hitzig will lead major economic research projects. Anthropic says it expects the institute’s headcount to double each year for the foreseeable future.

Money, revenue and why investors should care

Court filings revealed Anthropic has generated over $5 billion in all-time commercial revenue and spent about $10 billion on model training and inference so far. The company has reportedly been planning an IPO, and it warns that the supply-chain designation could put hundreds of millions of dollars in 2026 revenue at risk, or, under a grimmer reading, multiple billions. Dozens of outside partners have reportedly contacted Anthropic seeking guidance and clarity about their contracts and obligations.

Despite the potential short-term hit, Anthropic’s leadership says they aren’t worried about committing resources to long-term research. They argue that investing in safety and transparency builds trust, which can be profitable in the long run.

On timelines, transparency and compute

Leadership also shared a bold belief: powerful AI, the company’s term for AGI, could arrive by the end of this year or in early 2027. That timeline helps explain the urgency behind a dedicated institute to study the technology’s biggest questions.

Anthropic frames itself as a public benefit corporation and says its founders are aligned on the importance of public disclosure, even if some findings might be awkward PR. The company will allocate compute resources on a week-by-week basis depending on priorities, and it doesn’t expect major conflicts between commercial work and the institute’s research needs.

People, feelings and social science

One area the institute plans to dig into is emotional dependence on AI. Teams have already measured how persuasive or sycophantic models can be in conversations, but now they want to study how using AI changes people over time. That includes large-scale social science projects and using Anthropic’s own AI to interview users about their experiences.

Bottom line: Anthropic is consolidating research into a single institute to study AI’s societal and economic fallout, staffing it up and betting that transparency and safety research will pay off—even as it fights a government designation that could dent short-term revenue. The move looks like a long-term play, with a side of legal drama.