
Hey there, and welcome to Decoder! I'm Hayden Field, senior AI reporter at The Verge — and your Thursday episode guest host. I have another couple of shows for you while Nilay is out on parental leave, and we're going to be spending more time diving into some of the unforeseen consequences of the generative AI boom.
Today, I'm talking with Heidy Khlaaf, who is chief AI scientist at the AI Now Institute and one of the industry's leading experts in the safety of AI within autonomous weapons systems. Heidy has actually worked with OpenAI in the past; from late 2020 to mid-2021, she was a senior systems safety engineer for the company during a critical time, when it was developing safety and risk assessment frameworks for the company's Codex coding tool.
Now, the same companies that have previously seemed to champion safety and ethics in their mission statements are actively selling and developing new technology for military applications.
In 2024, OpenAI removed a ban on "military and warfare" use cases from its terms of service. Since then, the company has signed a deal with autonomous weapons maker Anduril and, this past June, signed a $200 million Department of Defense contract.
OpenAI is not alone. Anthropic, which has a reputation as one of the most safety-oriented AI labs, has partnered with Palantir to allow its models to be used for US defense and intelligence purposes, and it also landed its own $200 million DoD contract. And Big Tech players like Amazon, Google, and Microsoft, who have long worked with the government, are now also pushing AI products for defense and intelligence, despite growing outcry from critics and employee activist groups.
So I wanted to have Heidy on the show to walk me through this big shift in the AI industry, what's motivating it, and why she thinks some of the leading AI companies are being far too cavalier about deploying generative AI in high-risk scenarios. I also wanted to know what this push to deploy military-grade AI means for bad actors who might want to use AI systems to create chemical, biological, radiological, and nuclear weapons, a risk the AI companies themselves say they're increasingly worried about.
Okay, here's Heidy Khlaaf on AI in the military. Here we go.
If you'd like to read more on what we talked about in this episode, check out the links below:
- OpenAI is softening its stance on military use | The Verge
- OpenAI awarded $200 million US defense contract | The Verge
- OpenAI is partnering with defense tech company Anduril | The Verge
- Anthropic launches new Claude service for military and intelligence use | The Verge
- Anthropic, Palantir, Amazon team up on defense AI | Axios
- Google scraps promise not to develop AI weapons | The Verge
- Microsoft employees occupy office in protest of Israel contracts | The Verge
- Microsoft's employee protests have reached a boiling point | The Verge
Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!