<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Robert Hart | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2026-04-30T16:48:13+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/author/robert-hart" />
	<id>https://www.theverge.com/authors/robert-hart/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/authors/robert-hart/rss" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[Meta is running get-rich-quick ads for its AI tools]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/915970/meta-manus-ai-ads-website-slop" />
			<id>https://www.theverge.com/?p=915970</id>
			<updated>2026-04-30T12:48:13-04:00</updated>
			<published>2026-04-30T12:48:13-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Meta" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Social Media" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Manus, an AI company Meta acquired for $2 billion last year, is running ads promising quick, easy money with AI: Find local businesses without websites or with bad websites, have AI build them one, then call them up and sell it to them. As part of the campaign, Manus was paying content creators to build [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/get-rich-quick_70fa93.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Manus, an AI <a href="https://www.theverge.com/news/851113/meta-acquires-general-purpose-ai-agent-startup-manus">company Meta acquired</a> for <a href="https://www.wsj.com/tech/ai/meta-buys-ai-startup-manus-adding-millions-of-paying-users-f1dc7ef8" data-type="link" data-id="https://www.wsj.com/tech/ai/meta-buys-ai-startup-manus-adding-millions-of-paying-users-f1dc7ef8">$2 billion</a> last year, is running ads promising quick, easy money with AI: Find local businesses without websites or with bad websites, have AI build them one, then call them up and sell it to them.</p>

<p class="has-text-align-none">As part of the campaign, Manus was paying content creators to build out Instagram, YouTube, and TikTok accounts that promote its AI product as an easy, lucrative gig. (The creators’ TikTok accounts were taken down after <em>The Verge</em> inquired about them.) Some of these videos would also appear as official ads for Manus, but the posts on the paid creator accounts themselves often obscured their ties to the company.</p>

<p class="has-text-align-none">The ads were not subtle. Posted by an account called “Manus AI by Meta,” one video presented Manus’ AI agent as an “Easy side hustle” that “absolutely anybody can do” — one that supposedly “takes less than 10 minutes” and can bring in a “potential $5k a month.” The young person in the video says, “There is literally no limit.” Except, I guess, the number of businesses willing to buy an AI-generated website from a stranger on the internet.&nbsp;</p>

<p class="has-text-align-none">The ad did not tag the creator featured in it, but their TikTok account, which has since been removed, was filled almost entirely with Manus content. Their Instagram <a href="https://www.instagram.com/reel/DRdU9EtkvEE/">account</a>, which is still live, is nearly identical. Neither disclosed any connection to Manus in its bio or posts.&nbsp;</p>

<p class="has-text-align-none">Across TikTok and Instagram, I found a network of other accounts posting near-identical Manus content, much of it hyping the website scheme, but also selling vibe-coded apps. The accounts were strikingly similar. They looked the same, used the same language, and promised the same thing: “The art of Manus” with a close-up of their face, “my websites don’t look vibe-coded anymore,” “don’t get a part-time job,” and a “making [thousands of dollars] without talking challenge” as the creators put tape over their mouths. Most accounts were only a few months old, had only ever posted about Manus, and appeared to be run by creators in their late teens or early 20s.&nbsp;The majority of posts had no noticeable engagement, though some were viral hits with tens of thousands of likes, comments, and shares.</p>
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/268477_make_money_without_talking_cvirginia.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="" title="" data-has-syndication-rights="1" data-caption="&lt;em&gt;Make money without talking.&lt;/em&gt; | Image: TikTok, The Verge" data-portal-copyright="Image: TikTok, The Verge" />
<p class="has-text-align-none">Some accounts vaguely referred to “building with Manus” in their bio, or something similar. A few listed what appeared to be real names, and those led to LinkedIn profiles identifying them as contractors producing content for Manus. There was also a person whose LinkedIn profile said that Manus hired them in January as a contract “viral growth expert” to “lead a team of 10–20 content creators,” enforce “strict brand guidelines and quality benchmarks,” guide creators on persona-specific content, and run coaching sessions training creators on how to go viral. The person did not respond to a request for comment. Manus spokesperson Ronghui Li confirmed the company “works with third-party agency partners on paid UGC creator programs across platforms including TikTok, Instagram, and YouTube” and said the individuals and accounts I referenced were real “external partners involved in this program.”</p>

<p class="has-text-align-none">Manus declined to answer questions on Meta’s role in the program, including whether the parent company was aware of it or whether it complied with Meta’s own policies. Asked about disclosure and advertising rules, Li said that Manus occasionally licensed some of its creator videos as formal ads on the platforms, where they were posted with the usual advertisement labeling. However, Li claimed responsibility for disclosure on creators’ posts lay with the creators themselves and said that Manus is now reviewing the specific accounts and posts in question.</p>

<p class="has-text-align-none">Asked why Manus was promoting the tool as an “easy side hustle,” Li said the company does “not endorse exaggerated or misleading earnings claims” and was reviewing the content I flagged.&nbsp;Li did not say whether that review is of the program as a whole.&nbsp;Li also did not answer my specific question about what evidence, if any, Manus had to support the earnings claims made in the videos.&nbsp;</p>

<p class="has-text-align-none"><a href="https://www.facebook.com/business/help/221149188908254?locale=en_GB">Meta</a>, <a href="https://support.google.com/youtube/answer/154235?hl=en-GB#zippy=%2Cdo-i-need-to-notify-youtube-if-my-video-has-a-paid-product-placement-endorsement-or-other-commercial-relationship">YouTube</a>, and <a href="https://support.tiktok.com/en/business-and-creator/creator-and-business-accounts/promoting-a-brand-product-or-service">TikTok</a> all unambiguously require creators to clearly disclose paid promotions.&nbsp;Multiple legal and advertising experts I spoke to said the undisclosed relationships don’t just run afoul of major platforms’ advertising policies; in multiple jurisdictions, they probably break the law.&nbsp;Sonal Patel Oliva, an advertising lawyer at Fieldfisher, said British regulators “take a firm position on undisclosed commercial relationships in influencer marketing,” requiring incentivized content to be clearly labeled as an ad. Alexandros Antoniou, a law professor at the University of Essex in England, echoed this, saying that vague brand-adjacent language “won’t cut it” as a disclosure. </p>

<p class="has-text-align-none">Meta did not respond to multiple requests for comment asking whether it was aware of the program and whether the campaign complied with its advertising policies. TikTok declined to speak on the record, but since I reached out, most of the Manus hype videos appear to have been removed, and many of the accounts that posted them seem to have been banned. YouTube did not respond to a request for comment.&nbsp;</p>

<p class="has-text-align-none">Antoniou added that earnings claims are “riskier” than disclosure omissions, given tight rules on misleading consumers in the UK. The experts agreed that the same broad principles apply elsewhere too, including in the EU and US.  </p>

<p class="has-text-align-none">Meta owned Manus throughout the campaign described here and had <a href="https://www.scmp.com/tech/article/3351718/meta-manus-ai-deal-difficult-undo-how-will-beijing-exert-its-authority">reportedly</a> already begun <a href="https://www.ft.com/content/1e4c269a-5258-406c-a308-e55c3d5d640f?syn-25a6b1a6=1">integrating</a> the startup and its systems. It now faces the prospect of <a href="https://www.theverge.com/ai-artificial-intelligence/918913/china-blocks-metas-2-billion-acquisition-of-ai-agent-startup-manus">having to unwind</a> the deal after Chinese regulators blocked it, even as the company <a href="https://edition.cnn.com/2026/04/27/tech/china-blocks-meta-manus-intl-hnk">insists</a> it complied with relevant laws and says, without elaborating, that it expects to reach a resolution with Beijing.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[OpenAI’s new security model is for ‘critical cyber defenders’ only]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/921073/openai-sam-altman-new-cybersecurity-model-gpt-5-5-cyber" />
			<id>https://www.theverge.com/?p=921073</id>
			<updated>2026-04-30T07:13:54-04:00</updated>
			<published>2026-04-30T07:09:01-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="OpenAI" />
							<summary type="html"><![CDATA[OpenAI is preparing to launch a new frontier cybersecurity model, GPT-5.5-Cyber. CEO Sam Altman said the model will not be available to the general public, but will be first rolled out to a select group of trusted “cyber defenders” in order for institutions to shore up their cyberdefenses.&#160; The limited rollout will take place “in [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/STK155_OPEN_AI_CVirginia_C.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">OpenAI is preparing to launch a new frontier cybersecurity model, GPT-5.5-Cyber. CEO Sam Altman said the model will not be available to the general public; it will first be rolled out to a select group of trusted “cyber defenders” so that institutions can shore up their cyberdefenses.&nbsp;</p>

<p class="has-text-align-none">The limited rollout will take place “in the next few days,” <a href="https://x.com/sama/status/2049712078836170843?s=46&amp;t=fRkDIqgNCkTkvg8ZBiLA9A">Altman said on X</a>. “We will work with the entire ecosystem and the government to figure out trusted access for Cyber.”</p>

<p class="has-text-align-none">It’s not clear who will get access to the model first, though <a href="https://openai.com/index/trusted-access-for-cyber/">previous</a> &#8220;trusted access” <a href="https://openai.com/index/scaling-trusted-access-for-cyber-defense/">schemes</a> involved vetted professionals and institutions. Details of the model and its capabilities are also unclear; OpenAI has not released any technical details or specifications. The name indicates it is a specialized version of the <a href="https://www.theverge.com/ai-artificial-intelligence/917612/openai-gpt-5-5-chatgpt">recently released GPT-5.5</a>, which it called its “smartest and most intuitive to use model yet.”&nbsp;</p>

<p class="has-text-align-none">The staggered rollout is part of a growing trend in the AI industry of companies branding their top models too dangerous for public release due to their potential for misuse. OpenAI has staggered the release of previous cybersecurity-focused models, in addition to its new purpose-built <a href="https://openai.com/index/introducing-gpt-rosalind/">life sciences model GPT-Rosalind</a>, which is intended to support biology research and drug discovery. This month, Anthropic followed a similar playbook with Claude Mythos, though with much greater fanfare, and it <a href="https://www.theverge.com/ai-artificial-intelligence/917644/anthropic-claude-mythos-breach-humiliation">bungled the model’s secure release</a> in embarrassing ways.&nbsp;</p>

<p class="has-text-align-none">The White House has taken a keen interest in Mythos’ rollout, despite lingering tensions with Anthropic after its fight with the Pentagon. It has recently opposed plans to expand access to Mythos further, <a href="https://www.wsj.com/tech/ai/white-house-opposes-anthropics-plan-to-expand-access-to-mythos-model-dc281ab5?mod=rss_Technology">according</a> to <em>The Wall Street Journal</em>. The unnamed White House officials cited in the report point to two reasons for the pushback: cybersecurity concerns about more people having access to Mythos, and worries that increased demand would hamper the government’s own ability to use the system.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[Taylor Swift deepfakes are pushing scams on TikTok]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/920351/ai-celebrity-deepfake-ads-tiktok-copyleaks" />
			<id>https://www.theverge.com/?p=920351</id>
			<updated>2026-04-29T10:43:42-04:00</updated>
			<published>2026-04-29T09:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" />
							<summary type="html"><![CDATA[Scammers are using AI-generated videos of celebrities including Taylor Swift and Rihanna to promote shady services on TikTok, according to authentication company Copyleaks.&#160; The ads typically show celebrities in interview settings, such as red carpets, podcasts, or talk shows, and often manipulate real footage with AI, the company said. Many promote rewards programs claiming users [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: Billboard via Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/gettyimages-2267995710.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Scammers are using AI-generated videos of celebrities including Taylor Swift and Rihanna to promote shady services on TikTok, according to authentication company Copyleaks.&nbsp;</p>

<p class="has-text-align-none">The ads typically show celebrities in interview settings, such as red carpets, podcasts, or talk shows, and often manipulate real footage with AI, the company <a href="https://copyleaks.com/blog/tiktok-ads-impersonate-celebrities">said</a>. Many promote rewards programs claiming users can earn money by watching TikTok content and giving feedback. TikTok’s official branding appears in some of the ads, though users are redirected to third-party services that ask for personal information.</p>

<p class="has-text-align-none">In one ad, a realistic AI avatar of Swift urges users to sign up for a feature called TikTok Pay. In another, a fake Rihanna says “you literally just watch content and give your opinion.”&nbsp;</p>

<p class="has-text-align-none">It’s another example of how social platforms are struggling to keep up with a surge of convincing deepfakes, which are becoming a messy, everyday problem for users. TikTok is far from alone here. Reports suggest users of Meta’s platforms including Instagram and Facebook <a href="https://www.theverge.com/tech/820906/meta-scam-ads-failure-remove-consequences">see billions of scam ads a day</a>, and the company’s own oversight board has <a href="https://www.theverge.com/news/680857/the-oversight-board-says-meta-has-an-ai-deepfake-problem">acknowledged it has a deepfake problem.</a> YouTube also says it is &#8220;<a href="https://www.theverge.com/2024/1/25/24050443/youtube-is-investing-heavily-in-its-ability-to-stop-ai-celebrity-scam-ads">investing heavily</a>” in combating celebrity scam ads.</p>

<p class="has-text-align-none">Celebrities are also searching for new ways to fight back: <a href="https://www.theverge.com/ai-artificial-intelligence/919827/taylor-swift-trademarks-ai-copycats">last week Swift filed new trademark</a> applications for clips of her voice in an attempt to protect herself from AI copycats.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[China freezes new robotaxi licenses after Baidu chaos]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/920312/china-suspends-autonomous-vehicle-permits-baidu-chaos" />
			<id>https://www.theverge.com/?p=920312</id>
			<updated>2026-04-29T06:39:21-04:00</updated>
			<published>2026-04-29T06:39:21-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Autonomous Cars" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Transportation" />
							<summary type="html"><![CDATA[China has suspended new licenses for autonomous vehicles, Bloomberg reports, citing unnamed people familiar with the matter. The move comes after dozens of robotaxis operated by Chinese tech giant Baidu ground to a halt in traffic last month in Wuhan, creating chaos.&#160; The restrictions will prevent companies from adding new driverless cars to their fleets, [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="A Baidu Apollo Go robotaxi in Wuhan, China. | Image: Bloomberg via Getty Images" data-portal-copyright="Image: Bloomberg via Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/gettyimages-2152484525.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	A Baidu Apollo Go robotaxi in Wuhan, China. | Image: Bloomberg via Getty Images	</figcaption>
</figure>
<p class="has-text-align-none">China has suspended new licenses for autonomous vehicles, <em>Bloomberg</em> <a href="https://www.bloomberg.com/news/articles/2026-04-29/china-suspends-new-autonomous-driving-permits-after-baidu-outage">reports</a>, citing unnamed people familiar with the matter. The move comes after dozens of robotaxis operated by Chinese tech giant Baidu <a href="https://www.theverge.com/ai-artificial-intelligence/905012/baidu-apollo-robotaxi-freeze-china">ground to a halt in traffic last month</a> in Wuhan, creating chaos.&nbsp;</p>

<p class="has-text-align-none">The restrictions will prevent companies from adding new driverless cars to their fleets, expanding into new cities, or starting new test projects. It is unclear when officials will start issuing new licenses again.</p>

<p class="has-text-align-none"><em>Bloomberg</em> said the Wuhan incident alarmed authorities in Beijing, prompting regulators to urge local governments to review the sector to prevent similar episodes. It is at least the second time regulators have intervened after a Baidu-related incident, the report said, and the company&#8217;s Wuhan operations remain on pause while local authorities investigate the matter.&nbsp;&nbsp;</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[China’s DeepSeek previews new AI model a year after jolting US rivals ]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/918035/deepseek-preview-v4-ai-model" />
			<id>https://www.theverge.com/?p=918035</id>
			<updated>2026-04-24T05:45:30-04:00</updated>
			<published>2026-04-24T05:45:30-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" />
							<summary type="html"><![CDATA[Chinese AI company DeepSeek released a preview of its hotly anticipated next-generation AI model V4 on Friday, saying that the open-source model can compete with leading closed-source systems from US rivals including Anthropic, Google, and OpenAI. DeepSeek says V4 marks a major improvement over prior models, especially in coding, a capability that has become central [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/STKB320_DEEPSEEK_AI_CVIRGINIA_C.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Chinese AI company DeepSeek released a preview of its hotly anticipated next-generation AI model V4 on Friday, <a href="https://x.com/deepseek_ai/status/2047516922263285776?s=20">saying</a> that the open-source model can compete with leading closed-source systems from US rivals including Anthropic, Google, and OpenAI.</p>

<p class="has-text-align-none">DeepSeek says V4 marks a major improvement over prior models, especially in coding, a capability that has become central to AI agents and helped drive the success of tools like <a href="https://www.theverge.com/ai-artificial-intelligence/913034/openai-codex-updates-use-macos">ChatGPT Codex</a> and <a href="https://www.theverge.com/report/874308/anthropic-claude-code-opus-hype-moment">Claude Code</a>. The release is also a milestone for China’s chip industry, with DeepSeek <a href="https://www.scmp.com/tech/big-tech/article/3351239/deepseek-releases-next-gen-ai-model-world-leading-efficiency">explicitly highlighting</a> compatibility with domestic Huawei technology.</p>

<p class="has-text-align-none">The release comes a year after DeepSeek <a href="https://www.theverge.com/ai-artificial-intelligence/598846/deepseek-big-tech-ai-industry-nvidia-impac">rattled the US AI industry</a> with R1, a model it claimed was trained at a fraction of the cost of leading US systems. DeepSeek has not disclosed V4’s training costs or what hardware it was trained on. US officials have accused the company of <a href="https://www.reuters.com/world/china/chinas-deepseek-trained-ai-model-nvidias-best-chip-despite-us-ban-official-says-2026-02-24/">using banned Nvidia chips</a> and Anthropic <a href="https://www.theverge.com/ai-artificial-intelligence/883243/anthropic-claude-deepseek-china-ai-distillation">claims</a> DeepSeek misused Claude to improve its own products.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic&#8217;s Mythos breach was humiliating]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/917644/anthropic-claude-mythos-breach-humiliation" />
			<id>https://www.theverge.com/?p=917644</id>
			<updated>2026-04-23T14:24:56-04:00</updated>
			<published>2026-04-23T14:24:56-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Security" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Anthropic’s tightly controlled rollout of Claude Mythos has taken an awkward turn. After spending weeks insisting the AI model is so capable at cybersecurity that it is too dangerous to release publicly, it appears the model fell into the wrong hands anyway.&#160; According to Bloomberg, a “small group of unauthorized users” has had access to [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A number of cursors point toward an unhappy face on a laptop" data-caption="" data-portal-copyright="Photo by Amelia Holowaty Krales / The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/23318433/akrales_220309_4977_0182.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Anthropic’s tightly controlled rollout of Claude Mythos has taken an awkward turn. After spending weeks insisting the AI model is so capable at cybersecurity that it is too dangerous to release publicly, it appears the model <a href="https://www.theverge.com/ai-artificial-intelligence/916501/anthropic-mythos-unauthorized-users-access-security">fell into the wrong hands</a> anyway.&nbsp;</p>

<p class="has-text-align-none"><a href="https://www.bloomberg.com/news/articles/2026-04-21/anthropic-s-mythos-model-is-being-accessed-by-unauthorized-users">According to <em>Bloomberg</em></a>, a “small group of unauthorized users” has had access to Mythos — whose <a href="https://www.theverge.com/ai-artificial-intelligence/902272/anthropics-apparent-security-lapse-yielded-details-of-its-next-model-release">existence</a> was first <a href="https://www.theverge.com/ai-artificial-intelligence/902272/anthropics-apparent-security-lapse-yielded-details-of-its-next-model-release">revealed</a> in a leak — since the day Anthropic announced <a href="https://www.theverge.com/ai-artificial-intelligence/908114/anthropic-project-glasswing-cybersecurity">plans</a> to offer it to a select group of companies for testing. Anthropic says it is investigating. That’s a rough look for a company that has built its brand on taking AI safety seriously while touting the cybersecurity prowess of its latest model.</p>

<p class="has-text-align-none">From a technological standpoint, the Mythos breach is embarrassingly unsophisticated. <em>Bloomberg</em> reports the group accessed Mythos by making “an educated guess about the model’s online location,” using information about Anthropic’s other models exposed in the <a href="https://www.theverge.com/ai-artificial-intelligence/907083/a-company-that-makes-ai-training-data-has-been-hit-by-a-security-breach">breach of Mercor</a> — a company that makes AI training data — along with access one member had through contract work evaluating Anthropic models. The group got unauthorized access to Mythos through a combination of insider knowledge and a lucky guess, not some sophisticated technological exploit or wholesale theft of the model.&nbsp;</p>

<p class="has-text-align-none">Security vulnerabilities are inevitable, and it was Mercor, not Anthropic, that revealed the information the hackers used to guess Mythos’ location. Pia Hüsch, a research fellow at the British think tank Royal United Services Institute (RUSI), told me that no company is ever completely secure and humans are often the weakest link, though it “does initially seem a bit lucky” that there were no serious consequences.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>Anthropic failed to anticipate an ‘entirely imaginable’ kind of failure</p></blockquote></figure>

<p class="has-text-align-none">But it’s not entirely bad luck. These kinds of educated guesses are a very standard hacking technique, and the Mercor breach was already known before Mythos’ release. Security researcher Lukasz Olejnik described it to me as an “entirely imaginable” kind of failure that the cybersecurity industry has been routinely dealing with for the last 20 years. So Anthropic should have anticipated it and should have prepared accordingly, particularly knowing that its information had been compromised.&nbsp;</p>

<p class="has-text-align-none">Anthropic also appears to have had the means to spot the breach. The company is able to “log and track model use,” Olejnik said, which should make it possible to stop unauthorized or malicious access, especially since the Mythos rollout was supposed to be highly limited. Evidently, Anthropic wasn’t monitoring closely enough — and given how dangerous it says the model is, it’s reasonable to ask why.</p>

<p class="has-text-align-none">By <em>Bloomberg’s</em> account, the group was not using Mythos for cybersecurity tasks, partly because they just wanted to mess around with the new model and partly because doing so could have tipped Anthropic off. If Anthropic’s messaging surrounding Mythos is to be taken seriously, that is a lucky break. The company has <a href="https://red.anthropic.com/2026/mythos-preview/">framed</a> Mythos as a “watershed moment for security,” claiming it found vulnerabilities in “every major operating system and web browser,” and said its release must be coordinated to allow time to “reinforce the world’s cyber defenses.”&nbsp;</p>

<p class="has-text-align-none">Anthropic has a habit of using dramatic, alarming-sounding language that can be tough to interrogate cleanly, including flirting with the idea that its <a href="https://www.theverge.com/report/883769/anthropic-claude-conscious-alive-moral-patient-constitution">Claude model might be conscious</a>. Even so, early reports from parties with access suggest Mythos is particularly adept in cybersecurity. Mozilla CTO Bobby Holley <a href="https://www.theverge.com/ai-artificial-intelligence/916500/mythos-v-firefox">said</a> it found hundreds of bugs in Firefox 150 and may finally give defenders a chance at complete victory over attackers. Unsurprisingly, <a href="https://www.theverge.com/ai-artificial-intelligence/913516/now-the-white-house-is-reportedly-preparing-for-access-to-mythos">governments</a> and <a href="https://www.reuters.com/business/finance/anthropic-plans-provide-mythos-access-european-banks-soon-sources-say-2026-04-21/">financial institutions</a> around the world have been eager to get their hands on it. The NSA and other US agencies <a href="https://www.theverge.com/ai-artificial-intelligence/914748/the-nsa-reportedly-has-access-to-anthropics-mythos-despite-being-labeled-a-supply-chain-risk">reportedly</a> have access despite Anthropic’s <a href="https://www.theverge.com/ai-artificial-intelligence/890347/pentagon-anthropic-supply-chain-risk">designation as a supply chain risk</a>, though the rollout <a href="https://www.theverge.com/policy/916758/anthropic-mythos-preview-cisa-left-out">appears to have bypassed</a> the US cybersecurity agency, CISA, so far.</p>

<figure class="wp-block-pullquote"><blockquote><p>“Anthropic claims to be at the absolute forefront of all these technologies, but also positions itself as the responsible actor in all of this.”</p></blockquote></figure>

<p class="has-text-align-none">The fact that the breach was uncovered by a reporter rather than Anthropic also raises the obvious question of whether it is an isolated incident. It “really illustrates how wide the circle of people who may be able to do this is, even if they don’t have super technically sophisticated means,” Hüsch said. Anthropic will likely comb through its supply chain to see how this happened and plug gaps, but she said there is a wide range of actors who would want access to a model like this, some of them with a great deal of money behind them. There is no reason to assume anyone else who gained access would be as restrained as the group <em>Bloomberg</em> reported on.</p>

<p class="has-text-align-none">Anthropic has, to some extent, shot itself in the foot. The company has built its identity around taking AI safety more seriously than its rivals, creating sky-high expectations for model security that jar with its apparent carelessness; the fact that Mythos was exposed through such a basic and predictable failure only underscores that. Worse still, by hyping Mythos as an unusually powerful tool too dangerous for public release, Anthropic turned it into an obvious target, whether for malicious actors or hackers simply looking for a challenge.</p>

<p class="has-text-align-none">This isn’t even the first awkward security incident around Mythos. The model’s existence was accidentally revealed before release through an “<a href="https://fortune.com/2026/03/26/anthropic-leaked-unreleased-model-exclusive-event-security-issues-cybersecurity-unsecured-data-store/">unsecured data trove</a>” on a central system containing content for its website. Now, that model has been secretly accessed via a wholly predictable vulnerability Anthropic didn’t think to patch. Perfection is impossible, but for a company that has anointed itself the vanguard of AI safety, such a basic misstep is hard to justify, even with some of the bad luck it’s had.</p>

<p class="has-text-align-none">To Hüsch, the whole episode can be summed up in one word: humiliation. “Anthropic claims to be at the absolute forefront of all these technologies, but also positions itself as the responsible actor in all of this,” she said. “The fact that this has now been accessed through unauthorized means so quickly, and through such an unsophisticated attempt, is really a humiliation for them.”</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[Yelp is making its AI chatbot way more useful]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/915626/yelp-ai-assistant-chatbot-major-upgrade" />
			<id>https://www.theverge.com/?p=915626</id>
			<updated>2026-04-21T06:42:54-04:00</updated>
			<published>2026-04-21T07:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Yelp is giving its chatbot assistant a major upgrade, turning the platform into something closer to a digital concierge with a suite of new features designed for “getting things done.” The move, one of several AI-focused updates in recent months, is part of a broader industry push to make AI more relevant and practically useful [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: Yelp" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/Yelp-Assistant_-Making-Beauty-Appointments-via-Vagaro.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Yelp is giving its chatbot assistant a major upgrade, turning the platform into something closer to a digital concierge with a suite of new features designed for “getting things done.” The move, one of <a href="https://www.theverge.com/news/714944/yelp-ai-stitched-videos">several</a> AI-focused <a href="https://www.theverge.com/news/802529/yelp-ai-host-receptionist">updates</a> in recent months, is part of a broader industry push to make AI more relevant and practically useful to consumers while turning huge troves of user-generated data into a competitive edge. </p>

<p class="has-text-align-none">In a press release, Yelp says the Yelp Assistant chatbot will be at “the center of the app experience,” where it can answer questions, make recommendations, and even handle bookings in a single conversation. The bot will be available through a new “Assistant” tab spanning every category on Yelp, a significant expansion from its <a href="https://www.theverge.com/2024/4/30/24144812/yelp-assistant-ai-chatbot-services-search">2024 debut</a> as a tool for hiring service professionals.</p>
<div class="youtube-embed"><iframe title="Yelp&#039;s 2026 Spring Release: The new Yelp Assistant, booking integrations, and enhanced Menu Vision" src="https://www.youtube.com/embed/bP74xqkossw?rel=0" allowfullscreen allow="accelerometer *; clipboard-write *; encrypted-media *; gyroscope *; picture-in-picture *; web-share *;"></iframe></div>
<p class="has-text-align-none">Yelp is also broadening Assistant’s reach with a set of app integrations that let users order takeout or delivery through DoorDash, Grubhub, and other delivery services, request quotes from local professionals in areas like auto and pet care, and book appointments with beauty, wellness, fitness, and healthcare providers through Vagaro and Zocdoc. Support for Yelp Waitlist is “coming soon,” as is Calendly integration for scheduling appointments.</p>

<p class="has-text-align-none">Craig Saldanha, Yelp’s chief product officer, described the update as the company’s “most significant AI product evolution yet,” adding that it is “only the beginning of a more conversational, personalized and action-oriented Yelp experience.”</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[OpenAI’s big Codex update is a direct shot at Claude Code]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/913034/openai-codex-updates-use-macos" />
			<id>https://www.theverge.com/?p=913034</id>
			<updated>2026-04-16T19:42:40-04:00</updated>
			<published>2026-04-16T13:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Tech" />
<summary type="html"><![CDATA[OpenAI is beefing up its agentic coding and development system, Codex, with a suite of updates that let it use your computer, generate images, and remember past experiences.&#160;The package of updates comes as OpenAI’s rivalry with Anthropic intensifies, following the stellar successes of Claude Code and OpenAI aggressively shifting resources to catch up. Codex [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="Codex can control apps on your desktop like Tic Tac Toe. | Image: OpenAI" data-portal-copyright="Image: OpenAI" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/Screenshot-2026-04-16-at-13.01.11.png?quality=90&#038;strip=all&#038;crop=0,5.8356559824806,100,89.31166633486" />
	<figcaption>
	Codex can control apps on your desktop like Tic Tac Toe. | Image: OpenAI	</figcaption>
</figure>
<p class="has-text-align-none">OpenAI is beefing up its agentic coding and development system, Codex, with <a href="https://openai.com/index/codex-for-almost-everything/">a suite of updates</a> that let it use your computer, generate images, and remember past experiences.&nbsp;The package of updates comes as OpenAI’s rivalry with Anthropic intensifies, following the <a href="https://www.theverge.com/report/874308/anthropic-claude-code-opus-hype-moment">stellar successes of Claude Code</a> and OpenAI <a href="https://www.theverge.com/ai-artificial-intelligence/911118/openai-memo-cro-ai-competition-anthropic">aggressively shifting resources</a> to catch up.</p>

<p class="has-text-align-none">Codex will now be able to operate desktop apps on your computer, OpenAI <a href="https://openai.com/index/codex-for-almost-everything/">says in a blog post announcing the update</a>. It can work in the background, meaning it won’t interfere with your own work in other apps, and multiple agents can work in parallel. For developers, OpenAI says, “this is helpful for testing and iterating on frontend changes, testing apps, or working in apps that don’t expose an API.”&nbsp;</p>

<p class="has-text-align-none">The feature will start rolling out to Codex desktop app users signed in with ChatGPT today and will initially be limited to macOS. OpenAI did not indicate when support will expand to other operating systems. EU users will also have to wait, it said, adding that the update will roll out to users there “soon.”</p>

<p class="has-text-align-none">Codex is also getting the ability to generate and iterate on images with gpt-image-1.5, new plug-ins for tools like GitLab, Atlassian Rovo, and Microsoft Suite, and native web browsing through an in-app browser, “where you can comment directly on pages to provide precise instructions to the agent.” OpenAI also said it will be easier to automate tasks, with users able to reuse existing conversation threads and Codex now able to schedule future work for itself and wake up automatically to continue on a long-term task.&nbsp;</p>

<p class="has-text-align-none">Codex will also be getting a memory feature, allowing it to remember useful context from past experience, such as personal preferences, corrections, and information that took time to gather. OpenAI said it hopes the opt-in feature, which will be released as a preview, will help complete future tasks faster and to a quality that previously required detailed custom instructions. The personalization features will roll out to Enterprise, Edu, and EU users “soon.”</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[Character.AI’s new Books mode turns reading into roleplay]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/912997/character-ai-books-mode" />
			<id>https://www.theverge.com/?p=912997</id>
			<updated>2026-04-16T10:34:28-04:00</updated>
			<published>2026-04-16T10:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" />
							<summary type="html"><![CDATA[Mired in controversy and legal woes over concerns about its chatbots’ interactions with users, particularly teens, Character.AI seems to be playing it safer with a new “Books” mode. The new format lets users step inside familiar worlds for a more structured roleplaying experience, one the company hopes will broaden perceptions of what AI roleplay can [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: Character.AI" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/c.aiBooks-Header-Image.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Mired in controversy and legal woes over concerns about its chatbots’ interactions with users, particularly teens, Character.AI seems to be playing it safer with a new “Books” mode. The new format lets users step inside familiar worlds for a more structured roleplaying experience, one the company hopes will broaden perceptions of what AI roleplay can be beyond <a href="https://www.theverge.com/2024/12/12/24319050/character-ai-chatbots-teen-model-training-parental-controls">romancing minors</a>, <a href="https://www.theverge.com/ai-artificial-intelligence/892978/ai-chatbots-investigation-help-teens-plan-violence">encouraging violence</a>, and <a href="https://www.theverge.com/2024/12/10/24317839/character-ai-lawsuit-teen-harmful-messages-mental-health">promoting self-harm</a>.</p>

<p class="has-text-align-none">In a blog post, Character.AI said Books is launching with a catalog of more than 20 classic public domain titles sourced from Project Gutenberg, including <em>Alice in Wonderland</em>, <em>Pride and Prejudice</em>, <em>Dracula</em>, <em>Frankenstein</em>, <em>Romeo and Juliet</em>, and <em>The Great Gatsby</em>. “Every book lets you choose who you want to be,” the company said, allowing users to step into the narrative as an existing character or as one of their own Character.AI personas.</p>

<p class="has-text-align-none">There are a few ways to play through each story. The purist “book arc mode” follows the original narrative, plot points, and stakes while weaving the user into the story. There’s also a looser “off-script mode” that lets users interact with the world and characters more freely. Character.AI said a “more guided experience, TapTale, is coming soon,” offering pre-written prompts users can pick to drive the story forward in addition to freeform typing.</p>

<p class="has-text-align-none">For those wanting to push things even further, Books will also let users rework a book’s premise entirely through what Character calls alternative universe remixes. Think <em>Alice in Wonderland</em> as a romcom set in space, or <em>The Wizard of Oz</em> with Toto running the show. Users will be able to share their alternative universes and explore those made by other people.</p>

<div class="image-slider">
	<div class="image-slider">
		
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/c.aiBooks-Community-AUs.png?quality=90&#038;strip=all&#038;crop=7.8125,0,84.375,100" alt="" title="" data-has-syndication-rights="1" data-caption="&lt;em&gt;Alice in ‘Everyone becomes employees building AI’&lt;/em&gt; &lt;em&gt;land&lt;/em&gt;. | Image: Character.AI" data-portal-copyright="Image: Character.AI" />

<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/c.aiBooks-Character-Selection.png?quality=90&#038;strip=all&#038;crop=7.8125,0,84.375,100" alt="" title="" data-has-syndication-rights="1" data-caption="&lt;em&gt;Choose your character.&lt;/em&gt; | Image: Character.AI" data-portal-copyright="Image: Character.AI" />

<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/c.aiBooks-Chat.png?quality=90&#038;strip=all&#038;crop=7.8125,0,84.375,100" alt="" title="" data-has-syndication-rights="1" data-caption="&lt;em&gt;Down the rabbit hole.&lt;/em&gt; | Image: Character.AI" data-portal-copyright="Image: Character.AI" />
	</div>
</div>

<p class="has-text-align-none">The feature is available to everyone through Character’s mobile app or web-based prototype hub, Labs. Even free users can try it out, though the company said they’ll only get a “handful of free turns.”</p>

<p class="has-text-align-none">It’s not clear whether minors will be able to use the more guided features in Books. Character, facing <a href="https://www.theverge.com/2024/10/23/24277962/character-ai-google-wrongful-death-lawsuit">lawsuits</a> <a href="https://www.theverge.com/2024/12/10/24317839/character-ai-lawsuit-teen-harmful-messages-mental-health">accusing</a> it of harming teens’ mental health, <a href="https://www.theverge.com/ai-artificial-intelligence/808081/character-ai-under-18-chat-ban">shut down</a> open-ended chat features for minors last year and <a href="https://www.theverge.com/news/829892/character-ai-stories-launch-teens">introduced more structured experiences called Stories</a>.</p>

						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[Grok’s sexual deepfakes almost got it banned from Apple’s App Store. Almost. ]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/912297/apple-app-store-ban-grok-x-deepfakes" />
			<id>https://www.theverge.com/?p=912297</id>
			<updated>2026-04-15T07:21:43-04:00</updated>
			<published>2026-04-15T06:55:22-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Apple" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="xAI" />
							<summary type="html"><![CDATA[Apple quietly threatened to kick Elon Musk’s AI app, Grok, from its App Store in January over its failure to curb the surge of nonconsensual sexual deepfakes flooding X, according to NBC News. It was a muted show of force from one of tech’s most powerful gatekeepers, made behind closed doors even as the undressing [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/STK468_APPLE_ANTITRUST_CVIRGINIA_G.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Apple quietly threatened to kick Elon Musk’s AI app, Grok, from its App Store in January over its failure to curb the surge of nonconsensual sexual deepfakes flooding X, <a href="https://www.nbcnews.com/tech/tech-news/apple-threat-remove-grok-app-store-deepfake-letter-musk-x-ai-rcna331677">according</a> to <em>NBC News</em>. It was a muted show of force from one of tech’s most powerful gatekeepers, made behind closed doors even as the <a href="https://www.theverge.com/news/859715/x-grok-ai-deepfakes">undressing crisis unfolded in full public view</a> and <a href="https://www.theverge.com/news/862460/apple-google-app-stores-ditch-grok-x-open-letters">criticism over Apple’s cowardice</a> mounted.</p>

<p class="has-text-align-none">In a letter <a href="https://www.nbcnews.com/tech/tech-news/apple-threat-remove-grok-app-store-deepfake-letter-musk-x-ai-rcna331677">obtained</a> by <em>NBC News</em>, Apple told US senators it “contacted the teams behind both X and Grok after it received complaints and saw news coverage of the scandal” and demanded that the developers “create a plan to improve content moderation.” At the time, xAI’s chatbot Grok was freely accessible on X and as a standalone app, with flimsy safeguards that allowed users to easily generate and share sexualized deepfakes and “undress” images of real people, disproportionately women and <a href="https://www.theverge.com/ai-artificial-intelligence/855832/grok-undressing-children-csam-law-x-elon-musk">some of them apparently minors</a>.&nbsp;</p>

<p class="has-text-align-none">As we <a href="https://www.theverge.com/policy/859902/apple-google-run-by-cowards">reported at the time</a>, these were flagrant and unambiguous violations of App Store guidelines that Apple often applies with an iron fist. Apple, which profits from having apps like X and Grok on its digital store, has not spoken publicly about the issue or its behind-the-scenes intervention. Google, through its Google Play app store, profits similarly and has also not commented publicly on the matter.</p>

<p class="has-text-align-none">Apple said it reviewed proposed changes to the X and Grok apps. While the company concluded X had “substantially resolved its violations,” Grok “remained out of compliance.” Apple said it warned the developer that “additional changes to remedy the violation would be required, or the app could be removed from the App Store.” Only after further back-and-forth did Apple determine Grok had “substantially improved” and approved its submission. </p>

<p class="has-text-align-none">Throughout this covert back-and-forth, Grok and X appear to have remained live on the App Store, a drawn-out process that may help explain the confusing, haphazard rollout of moderation changes announced in real time. This included <a href="https://www.theverge.com/news/859309/grok-undressing-limit-access-gaslighting">limiting Grok on X to paying subscribers</a> and <a href="https://www.theverge.com/news/861894/grok-still-undressing-in-uk">attempting to stop Grok from undressing women</a>. Our investigations revealed that neither was particularly effective beyond making the tool a bit harder to access. Later interventions, like X <a href="https://www.theverge.com/tech/891352/x-grok-xai-edit-blocker-photo-toggle">letting users block Grok from editing their photos</a>, are also easily circumvented.</p>

<p class="has-text-align-none">Despite Apple’s approval and xAI’s claims it has tightened safeguards, Grok still appears to be able to generate sexualized deepfakes with relative ease. Cybersecurity sources told me they have been able to create explicit images of celebrities and political figures using the tool, and I have been able to <a href="https://www.theverge.com/report/872062/grok-still-undressing-men">produce similar images of myself</a> and other consenting adults. <em>NBC </em>also <a href="https://www.nbcnews.com/tech/tech-news/musks-ai-chatbot-grok-xai-making-sexual-deepfakes-imagine-rcna265855">reported</a> similar findings yesterday.</p>
						]]>
									</content>
			
					</entry>
	</feed>
