Today, I’m talking with Verge senior AI reporter Hayden Field about some of the people responsible for studying AI and deciding in what ways it might… well, ruin the world. Those folks work at Anthropic as part of a group called the societal impacts team, which Hayden just spent time with for a profile she published this week.
The team is just nine people out of more than 2,000 who work at Anthropic. Their only job, as the team members themselves say, is to analyze and publicize, quote, “inconvenient truths” about how people are using AI tools, what chatbots might be doing to our mental health, and how all of that might be having broader ripple effects on the labor market, the economy, and even our elections.
That of course brings up a whole host of problems. The most important is whether this team can remain independent, or even exist at all, as it publicizes findings about Anthropic’s own products that might be unflattering or politically fraught. After all, there’s a lot of pressure on the AI industry in general and Anthropic specifically to fall in line with the Trump administration, which put out an executive order in July banning so-called “woke AI.”
Verge subscribers, don’t forget you get exclusive access to ad-free Decoder wherever you get your podcasts. Head here. Not a subscriber? You can sign up here.
If you’ve been following the tech industry, the outline of this story will feel familiar. We’ve seen this most recently with social media companies and the trust and safety teams responsible for doing content moderation. Meta went through countless cycles of this, where it dedicated resources to solving problems created by its own scale and the unpredictable nature of products like Facebook and Instagram. And then, after a while, it seems like the resources dried up, or Mark Zuckerberg got bored or more interested in MMA or just cozying up to Trump, and the products didn’t really change to reflect what the research showed.
We’re living through one of those moments right now. The social platforms have slashed investments into election integrity and other forms of content moderation. Meanwhile, Silicon Valley is working closely with the Trump White House to resist meaningful attempts to regulate AI. So as you’ll hear, that’s why Hayden was so interested in this team at Anthropic. It’s basically alone in the industry right now.
In fact, Anthropic is an outlier because of how amenable CEO Dario Amodei has been to calls for AI regulation, both at the state and federal level. Anthropic is also seen as the most safety-first of the leading AI labs, because it was formed by former research executives at OpenAI who were worried their concerns about AI safety weren’t being taken seriously. There are actually quite a few companies formed by former OpenAI people worried about the company, Sam Altman, and AI safety. It’s a real theme of the industry that Anthropic seems to be taking to the next level.
So I asked Hayden about all of these pressures, and how Anthropic’s reputation within the industry might be affecting how the societal impacts team functions, and whether it can really meaningfully study and perhaps even influence AI product development. Or if, as history suggests, this will just look good on paper, until the team quietly goes away. There’s a lot here, especially if you’re interested in how AI companies think about safety from a cultural, moral, and business perspective.
A quick announcement: We’re running a special end-of-the-year mailbag episode of Decoder later this month where we answer your questions about the show: who we should talk to, what topics we should cover in 2026, what you like, what you hate. All of it. Please send your questions to decoder@theverge.com and we’ll do our best to feature as many as we can.
If you’d like to read more about what we discussed in this episode, check out these links:
- It’s their job to keep AI from destroying everything | The Verge
- Anthropic details how it measures Claude’s wokeness | The Verge
- The White House orders tech companies to make AI bigoted again | The Verge
- Chaos and lies: Why Sam Altman was booted from OpenAI | The Verge
- Anthropic CEO Dario Amodei Just Made Another Call for AI Regulation | Inc.
- How Elon Musk Is remaking Grok in his image | NYT
- Anthropic tries to defuse White House backlash | Axios
- New AI battle: White House vs. Anthropic | Axios
- Anthropic CEO says company will pursue Gulf state investments after all | Wired
Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!