<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Anthropic | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2026-04-30T18:16:57+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/anthropic" />
	<id>https://www.theverge.com/rss/anthropic/index.xml</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/rss/anthropic/index.xml" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[Elon Musk confirms xAI used OpenAI’s models to train Grok]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/921546/elon-musk-xai-openai-trial-model-distillation" />
			<id>https://www.theverge.com/?p=921546</id>
			<updated>2026-04-30T14:16:57-04:00</updated>
			<published>2026-04-30T14:16:57-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="Elon Musk" /><category scheme="https://www.theverge.com" term="Law" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="xAI" />
							<summary type="html"><![CDATA[In a federal courtroom in California on Thursday, Elon Musk testified that his own AI startup, xAI, has used OpenAI's models to improve its own. The matter in question is model distillation, a common industry practice by which one larger AI model acts as a "teacher" of sorts to pass on knowledge to a smaller [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Elon Musk in front of a background of justice scales." data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/STK022_ELON_MUSK_CVIRGINIA4_G.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">In a federal courtroom in California on Thursday, Elon Musk testified that his own AI startup, xAI, has used OpenAI's models to improve its own. </p>
<p class="has-text-align-none">The matter in question is model distillation, a common industry practice by which one larger AI model acts as a "teacher" of sorts to pass on knowledge to a smaller AI model, the "student." Although it's often used legitimately within a company, with one of its own AI models training another, it's also a practice that smaller AI labs sometimes use to try to mimic the performance of a larger competitor's model. </p>
<p class="has-text-align-none">Asked on the stand whether he knew what model distillation …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/921546/elon-musk-xai-openai-trial-model-distillation">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Jess Weatherbed</name>
			</author>
			
			<title type="html"><![CDATA[Claude can now plug directly into Photoshop, Blender, and Ableton]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/919648/anthropic-claude-creative-connectors-adobe-blender" />
			<id>https://www.theverge.com/?p=919648</id>
			<updated>2026-04-28T12:49:08-04:00</updated>
			<published>2026-04-28T12:49:08-04:00</published>
			<category scheme="https://www.theverge.com" term="Adobe" /><category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="Creators" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Anthropic has launched a set of connectors for Claude that allow the AI chatbot to tap into popular creative software, including Adobe's Creative Cloud apps, Affinity, Blender, Ableton, Autodesk, and more. This marks the company's latest effort to break into the creative industry following its launch of Claude Design earlier this month. The new connectors [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Examples of Anthropic’s Claude connector for the Blender 3D modelling software." data-caption="Claude’s new Blender connector lets you debug scenes, build new tools, and batch-apply object changes directly from the chatbot interface. | Image: Anthropic" data-portal-copyright="Image: Anthropic" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/Claude-creative-connectors-Blender-.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Claude’s new Blender connector lets you debug scenes, build new tools, and batch-apply object changes directly from the chatbot interface. | Image: Anthropic	</figcaption>
</figure>
<p class="has-text-align-none">Anthropic has launched a set of connectors for Claude that allow the AI chatbot to tap into popular creative software, including Adobe's Creative Cloud apps, Affinity, Blender, Ableton, Autodesk, and more. </p>
<p class="has-text-align-none">This marks the company's latest effort to break into the creative industry following its launch of <a href="https://www.theverge.com/ai-artificial-intelligence/913963/anthropic-launched-a-new-design-product">Claude Design</a> earlier this month. The new connectors - which <a href="https://www.theverge.com/ai-artificial-intelligence/917871/anthropic-claude-personal-app-connectors">enable Claude to access apps</a>, retrieve data, and take actions within connected services - are "designed to make it easier to use Claude for creative work," according to Anthropic, and can be used for specific functions in each app. </p>
<p class="has-text-align-none">The Adobe for creativity connector can draw fr …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/919648/anthropic-claude-creative-connectors-adobe-blender">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Stevie Bonifield</name>
			</author>
			
			<title type="html"><![CDATA[Claude is connecting directly to your personal apps like Spotify, Uber Eats, and TurboTax]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/917871/anthropic-claude-personal-app-connectors" />
			<id>https://www.theverge.com/?p=917871</id>
			<updated>2026-04-24T06:02:25-04:00</updated>
			<published>2026-04-23T18:27:11-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Claude users can now access more apps with Anthropic's AI, thanks to new connectors for everything from hiking to grocery shopping. Anthropic already supported connecting numerous work-related apps to Claude, like Microsoft apps, but this expansion focuses on personal apps like Audible, Spotify, Uber, AllTrails, TripAdvisor, Instacart, TurboTax, and others. Some of these apps, such [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Screenshots of personal apps running in Claude" data-caption="" data-portal-copyright="Image: Anthropic" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/anthropic-claude-personal-apps.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Claude users can now access more apps with Anthropic's AI, thanks to <a href="https://claude.com/blog/connectors-for-everyday-life">new connectors</a> for everything from hiking to grocery shopping. Anthropic already supported connecting numerous work-related apps to Claude, <a href="https://www.theverge.com/news/801487/anthropic-claude-microsoft-365-connector-ai">like Microsoft apps</a>, but this expansion focuses on personal apps like Audible, Spotify, Uber, AllTrails, TripAdvisor, Instacart, TurboTax, and others. </p>
<p class="has-text-align-none">Some of these apps, such as Spotify, already have <a href="https://www.theverge.com/news/793081/chagpt-apps-sdk-spotify-zillow-openai">similar connectors in OpenAI's ChatGPT</a>. Once an app is connected, Claude will suggest relevant connected apps directly in your conversations, like using AllTrails for hike recommendations. Anthropic notes in its blog post announcing the n …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/917871/anthropic-claude-personal-app-connectors">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic&#8217;s Mythos breach was humiliating]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/917644/anthropic-claude-mythos-breach-humiliation" />
			<id>https://www.theverge.com/?p=917644</id>
			<updated>2026-04-23T14:24:56-04:00</updated>
			<published>2026-04-23T14:24:56-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Security" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Anthropic's tightly controlled rollout of Claude Mythos has taken an awkward turn. After spending weeks insisting the AI model is so capable at cybersecurity that it is too dangerous to release publicly, it appears the model fell into the wrong hands anyway. According to Bloomberg, a "small group of unauthorized users" has had access to [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A number of cursors point toward an unhappy face on a laptop" data-caption="" data-portal-copyright="Photo by Amelia Holowaty Krales / The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/23318433/akrales_220309_4977_0182.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Anthropic's tightly controlled rollout of Claude Mythos has taken an awkward turn. After spending weeks insisting the AI model is so capable at cybersecurity that it is too dangerous to release publicly, it appears the model <a href="https://www.theverge.com/ai-artificial-intelligence/916501/anthropic-mythos-unauthorized-users-access-security">fell into the wrong hands</a> anyway. </p>
<p class="has-text-align-none"><a href="https://www.bloomberg.com/news/articles/2026-04-21/anthropic-s-mythos-model-is-being-accessed-by-unauthorized-users">According to <em>Bloomberg</em></a>, a "small group of unauthorized users" has had access to Mythos - whose <a href="https://www.theverge.com/ai-artificial-intelligence/902272/anthropics-apparent-security-lapse-yielded-details-of-its-next-model-release">existence</a> was first <a href="https://www.theverge.com/ai-artificial-intelligence/902272/anthropics-apparent-security-lapse-yielded-details-of-its-next-model-release">revealed</a> in a leak - since the day Anthropic announced <a href="https://www.theverge.com/ai-artificial-intelligence/908114/anthropic-project-glasswing-cybersecurity">plans</a> to offer it to a select group of companies for testing. Anthropic says it is investigating. That's a rough look for a company that has built its brand on taking AI safety seriously while touting the cybersecurity …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/917644/anthropic-claude-mythos-breach-humiliation">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Lauren Feiner</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic&#8217;s Mythos rollout has missed America’s cybersecurity agency]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/policy/916758/anthropic-mythos-preview-cisa-left-out" />
			<id>https://www.theverge.com/?p=916758</id>
			<updated>2026-04-22T13:12:21-04:00</updated>
			<published>2026-04-22T12:57:36-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Politics" /><category scheme="https://www.theverge.com" term="Security" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Several US federal agencies are taking up Anthropic's new cybersecurity model to find vulnerabilities, but one is reportedly not getting in on the action: the nation's central cybersecurity coordinator. On Tuesday, Axios reported that the Cybersecurity and Infrastructure Security Agency (CISA) didn't have access to Mythos Preview, which Anthropic has touted as a powerful tool [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/STK269_ANTHROPIC_2_D.webp?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Several US federal agencies are taking up Anthropic's new cybersecurity model to find vulnerabilities, but one is reportedly not getting in on the action: the nation's central cybersecurity coordinator. </p>
<p class="has-text-align-none">On Tuesday, <a href="https://www.axios.com/2026/04/21/cisa-anthropic-mythos-ai-security"><em>Axios </em>reported</a> that the Cybersecurity and Infrastructure Security Agency (CISA) didn't have access to Mythos Preview, which Anthropic has touted as a powerful tool for finding and patching security vulnerabilities. Meanwhile, other agencies like the <a href="https://www.politico.com/news/2026/04/14/anthropic-mythos-federal-agency-testing-00872439">Commerce Department</a> and the <a href="https://www.axios.com/2026/04/19/nsa-anthropic-mythos-pentagon">National Security Agency (NSA)</a> are reportedly using the model, and President Donald Trump's administration has been negotiating broader access, <a href="https://www.axios.com/2026/04/16/white-house-anthropic-ai-mythos-government-national-security"><em>Axios</em> wrote</a> last w …</p>
<p><a href="https://www.theverge.com/policy/916758/anthropic-mythos-preview-cisa-left-out">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Jess Weatherbed</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic’s most dangerous AI model just fell into the wrong hands]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/916501/anthropic-mythos-unauthorized-users-access-security" />
			<id>https://www.theverge.com/?p=916501</id>
			<updated>2026-04-22T05:30:13-04:00</updated>
			<published>2026-04-22T05:18:40-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Security" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Anthropic's Mythos AI model, a powerful cybersecurity tool that the company said could be dangerous in the wrong hands, has been accessed by a "small group of unauthorized users," Bloomberg reports. An unnamed member of the group, identified only as "a third-party contractor for Anthropic," told the publication that members of a private online forum [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Vector illustration of the Anthropic logo." data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25469782/STK269_ANTHROPIC_D.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Anthropic's Mythos AI model, a powerful cybersecurity tool that the company said could be dangerous in the wrong hands, has been accessed by a "small group of unauthorized users," <a href="https://www.bloomberg.com/news/articles/2026-04-21/anthropic-s-mythos-model-is-being-accessed-by-unauthorized-users"><em>Bloomberg</em></a> reports. An unnamed member of the group, identified only as "a third-party contractor for Anthropic," told the publication that members of a private online forum got into Mythos via a mix of tactics, utilizing the contractor's access and "commonly used internet sleuthing tools."</p>
<p class="has-text-align-none">The Claude Mythos Preview is a new general-purpose model that's capable of identifying and exploiting vulnerabilities "in every major operating system and every major web browser …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/916501/anthropic-mythos-unauthorized-users-access-security">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic’s new cybersecurity model could get it back in the government’s good graces]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/914229/tides-turning-anthropic-trump-administration-cybersecurity-mythos-preview" />
			<id>https://www.theverge.com/?p=914229</id>
			<updated>2026-04-21T09:36:22-04:00</updated>
			<published>2026-04-17T16:14:21-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Policy" />
							<summary type="html"><![CDATA[The Trump administration has spent nearly two months fighting with AI company Anthropic. It's dubbed the company a "RADICAL LEFT, WOKE COMPANY" full of "Leftwing nut jobs" and a menace to national security. But some of the ice may reportedly be melting between the two, thanks to Anthropic's buzzy new cybersecurity-focused model: Claude Mythos Preview. [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Photo illustration of Dario Amodei of Anthropic." data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25469941/STK202_DARIO_AMODEI_CVIRGINIA_D.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">The Trump administration has spent nearly two months fighting with AI company Anthropic. It's <a href="https://truthsocial.com/@realDonaldTrump/posts/116144552969293195">dubbed the company</a> a "RADICAL LEFT, WOKE COMPANY" full of "Leftwing nut jobs" and a menace to national security. But some of the ice may reportedly be melting between the two, thanks to Anthropic's <a href="https://www.theverge.com/ai-artificial-intelligence/908114/anthropic-project-glasswing-cybersecurity">buzzy new cybersecurity-focused model</a>: Claude Mythos Preview.</p>
<p class="has-text-align-none">Anthropic's relationship with the Pentagon <a href="https://www.theverge.com/ai-artificial-intelligence/883456/anthropic-pentagon-department-of-defense-negotiations">soured quickly</a> in late February after the company refused to budge on two red lines: using its technology for domestic mass surveillance or lethal fully autonomous weapons with no human in the loop. Anthropic's tech has in the past been used heavily b …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/914229/tides-turning-anthropic-trump-administration-cybersecurity-mythos-preview">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic releases a new Opus model amid Mythos Preview buzz]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/913184/anthropic-claude-opus-4-7-cybersecurity" />
			<id>https://www.theverge.com/?p=913184</id>
			<updated>2026-04-16T12:00:23-04:00</updated>
			<published>2026-04-16T11:59:24-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="News" />
							<summary type="html"><![CDATA[Anthropic has released its most powerful "generally available" model to date: Claude Opus 4.7. The company called it a step up from Opus 4.6 for advanced software engineering tasks, particularly in complex coding areas that in the past required more hand-holding. It's also supposed to be better at analyzing images and following instructions, and it [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/01/STKB364_CLAUDE_2_C.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Anthropic has released its most powerful "generally available" model to date: Claude Opus 4.7. </p>
<p class="has-text-align-none">The company called it a step up from Opus 4.6 for advanced software engineering tasks, particularly in complex coding areas that in the past required more hand-holding. It's also supposed to be better at analyzing images and following instructions, and it can exhibit more "creativity" when creating slides and documents, per Anthropic.</p>
<p class="has-text-align-none">Opus 4.7 comes on the heels of <a href="https://www.theverge.com/ai-artificial-intelligence/908114/anthropic-project-glasswing-cybersecurity">Mythos Preview</a>, the buzzy cybersecurity-focused model Anthropic announced earlier this month, which the company has said is its most powerful model overall. Comparatively, Opus 4.7 is …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/913184/anthropic-claude-opus-4-7-cybersecurity">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>David Pierce</name>
			</author>
			
			<title type="html"><![CDATA[The AI code wars are heating up]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/column/910019/ai-coding-wars-openai-google-anthropic" />
			<id>https://www.theverge.com/?p=910019</id>
			<updated>2026-04-21T12:08:39-04:00</updated>
			<published>2026-04-12T08:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="Column" /><category scheme="https://www.theverge.com" term="Google" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="The Stepback" />
							<summary type="html"><![CDATA[This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on the AI coding and vibe-coding booms, follow David Pierce. The Stepback arrives in our subscribers' inboxes at 8AM ET. Opt in for The Stepback here. How it started Writing code was a killer app for AI [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="An animation of laptops racing with live code being generated on their screens" data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge, Turbosquid" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/268441_AI_CODING_RACE_CVIRGINIA.gif?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none"><em>This is </em><a href="https://www.theverge.com/the-stepback-newsletter">The Stepback</a><em>,</em> <em>a weekly newsletter breaking down one essential story from the tech world. For more on the AI coding and vibe-coding booms, <a href="https://www.theverge.com/authors/david-pierce" data-type="link" data-id="https://www.theverge.com/authors/david-pierce">follow David Pierce</a>. </em>The Stepback<em> arrives in our subscribers' inboxes at 8AM ET. Opt in for </em>The Stepback <a href="https://www.theverge.com/newsletters"><em>here</em></a><em>.</em></p>
<h2 class="wp-block-heading has-text-align-none">How it started</h2>
<p class="has-text-align-none">Writing code was a killer app for AI even before anyone was really talking about AI. In the spring of 2021, 18 months before the world knew the word "ChatGPT," Microsoft debuted the very first product of a partnership with a nonprofit called OpenAI: <a href="https://www.theverge.com/2021/6/29/22555777/github-openai-ai-tool-autocomplete-code">a tool called GitHub Copilot</a> that watched developers as they wrote code and tried to autocomplete snippets and lines for them …</p>
<p><a href="https://www.theverge.com/column/910019/ai-coding-wars-openai-google-anthropic">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Nilay Patel</name>
			</author>
			
			<title type="html"><![CDATA[The AI industry’s race for profits is now existential]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/podcast/909042/ai-monetization-cliff-anthropic-openai-profitable-ai-existential-moment" />
			<id>https://www.theverge.com/?p=909042</id>
			<updated>2026-04-10T05:07:31-04:00</updated>
			<published>2026-04-09T10:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="Business" /><category scheme="https://www.theverge.com" term="Decoder" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Podcasts" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Today on Decoder, let’s talk about the looming AI monetization cliff, and whether some of the biggest companies in the space can become real, profitable businesses before they careen right off it. My guest today is Hayden Field, who’s our senior AI reporter here at The Verge. She’s been keeping close tabs on both Anthropic [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A photo illustration of OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei superimposed over a cliff." data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/DCD_0409.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-drop-cap has-text-align-none">Today on <em>Decoder</em>, let’s talk about the looming AI monetization cliff, and whether some of the biggest companies in the space can become real, profitable businesses before they careen right off it.</p>

<p class="has-text-align-none">My guest today is Hayden Field, who’s our senior AI reporter here at <em>The Verge</em>. She’s been keeping close tabs on both Anthropic and OpenAI, and how these two companies in particular tell us a whole lot about the AI industry in 2026.&nbsp;</p>

<p class="has-text-align-none">You’ve certainly heard a version of the monetization cliff story before. The biggest AI firms are built off the back of hundreds of billions in capital investment, and they’re linked to even greater amounts of forward-looking investment in data center build-out, chips, and other infrastructure spend. At some point, the profits have to materialize, or the bubble pops. Maybe AGI arrives, maybe the economy crashes, who knows.&nbsp;</p>

<p class="has-text-align-none">You’ve heard me ask some version of this question to scores of CEOs here on this show, and a majority of them have hinted toward the bubble popping — they think some companies will fail in spectacular fashion, some will succeed, and the opportunities, especially the money, are simply too big to ignore. We’re doing this, whether we want to or not — the market depends on it.&nbsp;</p>

<div class="wp-block-vox-media-highlight vox-media-highlight"><img src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/24792604/The_Verge_Decoder_Tileart.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="" title="" data-has-syndication-rights="1" data-caption="" data-portal-copyright="" />


<p><em>Verge</em> subscribers, don&#8217;t forget you get exclusive access to ad-free <em>Decoder</em> wherever you get your podcasts. Head <a href="https://www.theverge.com/account/podcasts">here</a>. Not a subscriber? You can <a href="https://www.theverge.com/subscribe">sign up here</a>. </p>
</div>

<p class="has-text-align-none">So these last few weeks have felt like a very important inflection point, as both Anthropic and OpenAI have started to react to the reality of needing to go public — needing to make money. </p>

<p class="has-text-align-none">The catalyst for this change is AI agents: products like Claude Code and Cowork, as well as the open-source OpenClaw and OpenAI’s Codex, have radically changed how these companies are thinking about their resources. And this is starting to affect how they behave — the products they support or suddenly kill, the restrictions they impose on customers, and the money they’re willing to burn toward their next big milestone.&nbsp;</p>

<p class="has-text-align-none">That&#8217;s because agents are valuable to customers right now, but agents also use far more compute. So the way people are using agents is burning tokens at a rate way faster than these companies anticipated, and that’s causing them to make hard decisions.&nbsp;</p>

<p class="has-text-align-none">We saw this most evidently last month when OpenAI abruptly <a href="https://www.theverge.com/ai-artificial-intelligence/899850/openai-sora-ai-chatgpt">killed its video-generation app Sora</a>, ditching a $1 billion Disney licensing deal in the process. Why? It costs too much to run, and OpenAI needs the compute for Codex. We saw it again just last week, when Anthropic decided it would no longer let Claude users burn through compute resources using the OpenClaw agent framework through a standard subscription plan, instead forcing those users <a href="https://www.theverge.com/ai-artificial-intelligence/907074/anthropic-openclaw-claude-subscription-ban">onto pay-as-you-go plans</a>, which cost substantially more.&nbsp;</p>

<p class="has-text-align-none">As you’ll hear Hayden explain here, these are glimmers of a make-or-break moment for the AI industry, as both Anthropic and OpenAI barrel toward two of the biggest IPOs in history. And the pressure on these companies to make money has never been this intense.&nbsp;</p>

<p class="has-text-align-none">The projections these companies have made, which just this week were <a href="https://www.wsj.com/tech/ai/openai-anthropic-ipo-finances-04b3cfb9?">leaked to the <em>Wall Street Journal</em></a>, tell a story of mind-boggling growth, to the tune of hundreds of billions in revenue and profitability by the end of the decade. But the most important questions now are whether these companies can pull this off, and what compromises they’ll make to reach that goal and avoid crashing and burning.&nbsp;</p>

<p class="has-text-align-none">Okay: <em>Verge</em> senior AI reporter Hayden Field on the AI monetization cliff and the race to profitability. Here we go.</p>

<iframe frameborder="0" height="200" src="https://playlist.megaphone.fm?e=VMP1417581812" width="100%"></iframe>

<p class="has-text-align-none"><em>If you’d like to read more about what we discussed in this episode, check out these links:</em></p>

<ul class="wp-block-list">
<li>The vibes are off at OpenAI | <a href="https://www.theverge.com/ai-artificial-intelligence/908513/the-vibes-are-off-at-openai"><em>The Verge</em></a></li>



<li>Anthropic essentially bans OpenClaw from Claude | <a href="https://www.theverge.com/ai-artificial-intelligence/907074/anthropic-openclaw-claude-subscription-ban"><em>The Verge</em></a></li>



<li>Why OpenAI killed Sora | <a href="https://www.theverge.com/ai-artificial-intelligence/902368/openai-sora-dead-ai-video-generation-competition"><em>The Verge</em></a></li>



<li>OpenAI just bought TBPN | <a href="https://www.theverge.com/ai-artificial-intelligence/906022/openai-buys-tbpn"><em>The Verge</em></a></li>



<li>National poll shows voters like AI less than ICE | <a href="https://www.theverge.com/ai-artificial-intelligence/891724/nbc-news-march-2026-poll-ai-ice"><em>The Verge</em></a></li>



<li>The spiraling cost of making AI | <a href="https://www.wsj.com/tech/ai/the-spiraling-cost-of-making-ai-0679bcea?mod=WTRN_pos1"><em>WSJ</em></a></li>



<li>OpenAI’s Fidji Simo taking leave amid exec shake-up | <a href="https://www.wired.com/story/openais-fidji-simo-is-taking-a-leave-of-absence/"><em>Wired</em></a></li>



<li>OpenAI raises another $122B at $850B valuation | <a href="https://www.theverge.com/ai-artificial-intelligence/904727/openai-chatgpt-investment"><em>The Verge</em></a></li>
</ul>

<p class="has-text-align-none"><em><sub>Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!</sub></em></p>
						]]>
									</content>
			
					</entry>
	</feed>
