<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Elizabeth Lopatto | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2026-05-02T23:16:54+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/author/elizabeth-lopatto-2-2-2-2-2-2-2-2-2-2-2-2-2-2-2-2" />
	<id>https://www.theverge.com/authors/elizabeth-lopatto-2-2-2-2-2-2-2-2-2-2-2-2-2-2-2-2/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/authors/elizabeth-lopatto-2-2-2-2-2-2-2-2-2-2-2-2-2-2-2-2/rss" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Elizabeth Lopatto</name>
			</author>
			
			<title type="html"><![CDATA[The craziest part of Musk v. Altman happened while the jury was out of the room]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/921713/musk-v-altman-jared-birchall-screw-up-xai" />
			<id>https://www.theverge.com/?p=921713</id>
			<updated>2026-05-01T07:50:06-04:00</updated>
			<published>2026-04-30T18:59:51-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Elon Musk" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Okay, I am not a lawyer so I only understood about half of what just happened. But I am fairly sure, given the context, that Elon Musk’s lawyers may have just fucked up big. Jared “James Brickhouse” Birchall, Musk’s finance guy and all-around fixer, took the stand after Musk today. Most of his testimony was [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Graphic photo collage of Sam Altman and Elon Musk." data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/268474_musk_vs_altman_CVirginia4.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Okay, I am not a lawyer so I only understood about half of what just happened. But I am fairly sure, given the context, that Elon Musk’s lawyers may have just fucked up big.</p>

<p class="has-text-align-none">Jared “<a href="https://www.theverge.com/2019/12/5/20996680/elon-musk-testimony-vernon-unsworth-tweet-humiliated-dirtied-defamation">James Brickhouse</a>” Birchall, Musk’s finance guy and all-around fixer, took the stand after Musk today. Most of his testimony was dull and seemed to exist primarily to get some documents read into the record, which sucks but is a normal part of sitting through trials. But at the <em>very </em>end of his boring testimony, something interesting happened. I believe we all got a surprise, which rarely happens in courtrooms.</p>

<p class="has-text-align-none">The lawyer conducting his direct examination was passed a note by another member of the team, and asked Birchall what was apparently contained in the note: was he familiar with the xAI bid for OpenAI’s assets?</p>

<figure class="wp-block-pullquote"><blockquote><p>“Sam Altman was on both sides of the table.”</p></blockquote></figure>

<p class="has-text-align-none">“As I recall, a lawyer we were working with had asked the attorney general of California to ensure that in their fiduciary duty, proper value was being given to the assets of the nonprofit of OpenAI,” Birchall said. In his understanding, there was a negotiation “between Sam Altman and himself on both sides of the table, the for-profit and the nonprofit, attempting to discount the value of the nonprofit assets. And we made that bid in an attempt to properly account for the value the foundation had, and create a market bid that would need to be considered by the attorney general.”</p>

<p class="has-text-align-none">Here’s some lore: in February 2025, <a href="https://www.wsj.com/tech/elon-musk-openai-bid-4af12827">a Musk-led coalition made a $97.4 billion bid for the nonprofit that controls OpenAI</a>. The bid was submitted by Marc Toberoff, one of Musk’s lawyers in the current case. It came as OpenAI was restructuring itself so that the for-profit arm could be cleared to go public. Per Birchall’s testimony, the bid was made because Musk, Birchall, and others thought Altman might undervalue the nonprofit as the company restructured itself. (I’m not really sure why that would be a problem for Musk and xAI, frankly, but whatever.)</p>

<p class="has-text-align-none">The defense counsel objected, and Birchall’s rant was struck for lack of foundation. So we did this piece by piece to establish the foundation, ending with Birchall saying, again, “Sam Altman was on both sides of the table.”</p>

<p class="has-text-align-none">On cross-examination, Bradley Wilson from Wachtell Lipton —&nbsp;OpenAI’s lawyers —&nbsp;picked the thread back up. Wilson asked how much of this Birchall had learned from sources other than lawyers. Birchall said he’d have a hard time being able to untangle that. After a few more exchanges, Wilson moved to strike all of Birchall’s testimony about the xAI bid on grounds that would not be discussed in front of the jury.</p>

<figure class="wp-block-pullquote"><blockquote><p>“You must have been very convincing. You’re not very convincing today.”</p></blockquote></figure>

<p class="has-text-align-none">The jury got to leave early while the lawyers duked it out, and this is where it got weird. Judge Yvonne Gonzalez Rogers started asking Birchall questions herself, and it clearly was making Birchall nervous. Birchall said he doesn’t remember discussing the xAI bid with Musk or Shivon Zilis or any other principal of the Musk organization. It sure sounded like Musk’s lawyers hadn’t given OpenAI proper discovery on this topic in the depositions, and so we were doing a fast and dirty deposition <em>with the judge</em> right then. At one point, Gonzalez Rogers told the plaintiff’s counsel to quit coaching the witness.</p>

<p class="has-text-align-none">Birchall said he’d spoken to the other members of the consortium about the bid, but that he wasn’t involved in discussions with Musk about when to send the bid letter. He claimed he’d heard some things from Toberoff, but that he wasn’t aware that Toberoff represented some of the other bidders. He didn’t know if xAI was aware that Toberoff represented some of the other bidders, either.</p>

<p class="has-text-align-none">Birchall didn’t know whether other investors had firsthand information about OpenAI, he claimed. No one had documents from inside OpenAI as far as he knew. Gonzalez Rogers remained unconvinced. “I’m still struggling with how you can have conversations with these individuals to raise $97.5 billion but have no recollections even in a general sense,” she said. Birchall said he had a general sense — he called each of the people involved to see if they were interested in joining Musk on the bid.</p>

<p class="has-text-align-none">“Why would they do that?” Gonzalez Rogers asked. Birchall said these were people with whom Musk et al. had longstanding relationships. “You must have been very convincing,” she said. “You’re not very convincing today.”</p>

<figure class="wp-block-pullquote"><blockquote><p>Opened the door to what?</p></blockquote></figure>

<p class="has-text-align-none">Birchall said there were no numbers besides the topline one floated when he called prospective investors, and that after speaking with him, they were passed off to lawyers. He didn’t remember who chose the $97.4 billion number, and said he got it from the legal team, telling Gonzalez Rogers he didn’t get it from Musk. Gonzalez Rogers asked if that analysis was created by anyone besides Toberoff. Birchall said not that he could recall.</p>

<p class="has-text-align-none">“Did a lawyer tell you this was part of litigation?” Gonzalez Rogers asked.&nbsp;</p>

<p class="has-text-align-none">No, Birchall said. It was strictly a business deal.</p>

<p class="has-text-align-none">Apparently Steven Molo, who’d been defending Musk during the deposition, had made multiple objections to questions about the deal, citing privileged communications. Business deals, apparently, aren&#8217;t privileged. But all discovery into the xAI bid for OpenAI had been blocked before the trial began. Unfortunately, by asking Birchall about the xAI deal at the very end of the direct examination, Musk’s team may have opened the door for more digging into it. You may be wondering, “Opened the door to what?” Your guess is as good as mine. More discovery? Maybe something about anticompetitive behavior from Musk? It doesn’t sound like it’s going to be good for Musk, I can tell you that much.</p>

<p class="has-text-align-none">Gonzalez Rogers then asked who’d passed the note, and all the lawyers just sat there like guilty children. Finally, the guy responsible said he’d passed it, but he didn’t write it; a junior lawyer did. Who wrote it? More silence. Finally Toberoff —&nbsp;hardly a junior lawyer — stood up and took responsibility. Why had he done it? “I thought it was appropriate.”</p>

<p class="has-text-align-none">“Sounds like you wanted to open the door, then,” Gonzalez Rogers said. We adjourned while she said she’d consider what to do with this testimony. She will probably rule on it tomorrow.</p>

<p class="has-text-align-none"><em><strong>Correction</strong>, <strong>April 30th:</strong> It is Shivon Zilis, not Sharon Zilis.</em></p>

						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elizabeth Lopatto</name>
			</author>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[Live updates from Elon Musk and Sam Altman’s court battle over the future of OpenAI]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/917225/sam-altman-elon-musk-openai-lawsuit" />
			<id>https://www.theverge.com/?post_type=vm_stream&#038;p=917225</id>
			<updated>2026-05-02T19:16:54-04:00</updated>
			<published>2026-04-30T12:57:14-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Elon Musk" /><category scheme="https://www.theverge.com" term="Law" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="xAI" />
							<summary type="html"><![CDATA[Sam Altman and Elon Musk are facing off in a high-stakes trial that could alter the future of OpenAI and its most well-known product, ChatGPT. In 2024, Musk filed a lawsuit accusing OpenAI of abandoning its founding mission of developing AI to benefit humanity and shifting focus to boosting profits instead. The trial began with [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Graphic photo collage of Sam Altman and Elon Musk." data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/268474_musk_vs_altman_CVirginia.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Sam Altman and Elon Musk are facing off in a high-stakes trial that could alter the future of OpenAI and its most well-known product, <a href="https://www.theverge.com/ai-artificial-intelligence/904727/openai-chatgpt-investment">ChatGPT</a>. In <a href="https://www.theverge.com/2024/3/1/24087473/elon-musk-openai-lawsuit-nonprofit-mission" target="_blank" rel="noreferrer noopener">2024, Musk filed a lawsuit</a> accusing OpenAI of abandoning its founding mission of developing AI to benefit humanity and shifting focus to boosting profits instead. </p>

<p class="has-text-align-none">The trial began with jury selection on April 27th. Elon Musk <a href="https://www.theverge.com/ai-artificial-intelligence/920191/elon-musk-sam-altman-trial-day-one">took the stand on Tuesday</a> as the first witness called, portraying his interest in founding OpenAI <a href="https://www.theverge.com/ai-artificial-intelligence/920048/elon-musk-testimony-save-humanity">as an effort to help save humanity</a>. He returned to the <a href="https://www.theverge.com/tech/921022/elon-musk-cross-openai-altman">stand on Wednesday</a> and again for a third day of testimony on Thursday, before his financial manager and Neuralink CEO, Jared Birchall, took the stand.</p>

<p class="has-text-align-none">Musk was a cofounder of OpenAI and claims that Altman and cofounder Greg Brockman tricked him into giving the company money, only to turn their backs on their original goal. However, <a href="https://x.com/OpenAINewsroom/status/2048776645142872368?s=20">OpenAI says</a> that “This lawsuit has always been a baseless and jealous bid to derail a competitor” and a way to boost Musk’s own SpaceX, xAI, and X companies, which have launched Grok as a competitor to ChatGPT.</p>

<p class="has-text-align-none">In his lawsuit, <a href="https://www.theverge.com/ai-artificial-intelligence/917755/musk-altman-openai-xai-gossip" target="_blank" rel="noreferrer noopener">Musk is asking for the removal of Altman and Brockman</a>, and for OpenAI to stop operating as a public benefit corporation. He is also seeking up to $150 billion in damages, which he has demanded go to OpenAI’s nonprofit if he wins the case.</p>
<div class="youtube-embed"><iframe title="Every day new evidence is added to the Musk v Altman trial" src="https://www.youtube.com/embed/5D5ahC-YjKg?rel=0" allowfullscreen allow="accelerometer *; clipboard-write *; encrypted-media *; gyroscope *; picture-in-picture *; web-share *;"></iframe></div>
<p class="has-text-align-none"><em>Here’s all the latest on the trial between Musk and Altman:</em></p>
<ul>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/922826/musk-v-altman-youtube-audio">Musk v. Altman is getting a live audio stream next week.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/922550/openai-tesla-receipts-and-other-musk-v-altman-documents">OpenAI Tesla receipts and other Musk v. Altman documents.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920775/evidence-exhibits-elon-musk-sam-altman-openai-trial">All the evidence revealed so far in Musk v. Altman</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921821/gabe-newell-kojima-musk-v-altman">Here&#8217;s how Gabe Newell and Hideo Kojima ended up in the Musk v. Altman evidence.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921713/musk-v-altman-jared-birchall-screw-up-xai">The craziest part of Musk v. Altman happened while the jury was out of the room</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921640/jury-is-being-dismissed-early-so-ygr-can-deal-with-an-objection-to-birchalls-testimony">Jury is being dismissed early so YGR can deal with an objection to Birchall&#8217;s testimony.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921627/birchall-is-actually-very-funny-outside-of-court-good-for-him">Birchall is actually very funny outside of court? Good for him.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921616/we-are-now-hearing-about-the-pause-in-quarterly-donations">We are now hearing about the pause in quarterly donations.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921591/were-back">We&#8217;re back.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921574/second-break-of-the-day">Second break of the day.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921563/birchall-cross">Birchall cross.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921546/elon-musk-xai-openai-trial-model-distillation">Elon Musk confirms xAI used OpenAI’s models to train Grok</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921540/birchall-has-just-been-asked-about-the-four-teslas">Birchall has just been asked about the four Teslas.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921491/birchall-testifies-about-musks-contributions-to-openai">Birchall testifies about Musk’s contributions to OpenAI.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921525/a-woman-in-the-gallery-has-lowered-a-sleep-mask-over-her-eyes-and-is-attempting-to-sleep">A woman in the gallery has lowered a sleep mask over her eyes and is attempting to sleep.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921509/musk-steps-down-he-may-be-recalled">Musk steps down. He may be recalled.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921505/we-are-on-re-cross-musk-is-getting-testy-again">We are on re-cross. Musk is getting testy again.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921445/the-microsoft-investment-comes-back-up">The Microsoft investment comes back up.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921471/and-were-back">And we&#8217;re back.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921465/were-in-break-and-i-just-checked-out-something-interesting">We&#8217;re in break — and I just checked out something interesting.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921426/elon-musks-robot-army-definitely-will-not-kill-you">Elon Musk’s robot army definitely will not kill you.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921396/musk-insists-he-wasnt-kneecapping-openai">Musk insists he wasn’t kneecapping OpenAI.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921409/musk-seems-notably-more-subdued-today">Musk seems notably more subdued today.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921379/at-least-change-the-name-musk-says-he-told-altman">“At least change the name,” Musk says he told Altman.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921352/elon-musk-v-capitalism">Elon Musk v. Capitalism.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921338/an-ongoing-conversation-around-open-source">An “ongoing conversation” around open source.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921311/were-still-talking-about-whether-musk-read-the-term-sheet">We’re still talking about whether Musk read the term sheet.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921308/the-jurors-have-been-seated">The jurors have been seated.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921297/musk-has-just-entered-the-courtroom">Musk has just entered the courtroom.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921285/issues-of-extinction-are-excluded">&#8220;Issues of extinction are excluded.&#8221;</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/921259/good-morning">Good morning!</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/921022/elon-musk-cross-openai-altman">Elon Musk’s worst enemy in court is Elon Musk</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920982/freedom">Freedom!</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920959/unfortunately-we-will-not-be-talking-about-safety-details-of-any-specific-product">Unfortunately we will not be talking about safety details of any specific product.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920941/the-jury-is-leaving-for-the-day-i-suspect-its-a-nice-day-out-there-ygr-says">The jury is leaving for the day. &#8220;I suspect it&#8217;s a nice day out there,&#8221; YGR says.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920925/mechahitler-might-be-a-bad-look-for-the-ai-safety-defender">Mechahitler might be a bad look for the AI safety defender.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920885/musks-broader-ai-safety-commitment-or-lack-thereof-comes-up">Musk’s broader AI safety commitment (or lack thereof) comes up.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920887/this-is-so-testy">This is so testy.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920863/did-musk-even-read-the-openai-term-sheet">Did Musk even read the OpenAI term sheet?</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920856/musk-asked-shivon-zilis-to-stay-close-and-friendly-with-openai-to-keep-info-flowing">Musk asked Shivon Zilis to stay ‘close and friendly’ with OpenAI to keep info flowing.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920850/musk-says-xai-probably-wont-be-the-first-to-get-to-agi">Musk says xAI probably won’t be the first to get to AGI.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920839/were-back-from-a-break-talking-about-spacex-and-xai">We’re back from a break, talking about SpaceX and xAI.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920835/dont-worry-about-teslas-robot-army">Don’t worry about Tesla’s robot army!</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920824/you-mostly-do-unfair-questions">“You mostly do unfair questions.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920819/its-a-free-country">“It’s a free country.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920804/will-you-answer-my-question">“Will you answer my question?”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920796/musks-desire-for-control-comes-up-again">Musk’s desire for control comes up again.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920781/this-is-a-hypothetical">“This is a hypothetical.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920769/did-musk-initially-envision-openai-as-a-corporation">Did Musk initially envision OpenAI as a corporation?</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920772/musk-is-being-combative-on-cross-already">Musk is being combative on cross already</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920764/i-did-say-that-i-would-commit-up-to-a-billion-dollars-yes">“I did say that I would commit up to a billion dollars, yes.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920756/is-tesla-really-not-working-on-agi">Is Tesla really not working on AGI?</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920728/musk-is-returning-to-the-stand">Musk is returning to the stand.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920752/at-times-being-a-judge-is-much-like-being-a-kindergarden-teacher">At times, being a judge is much like being a kindergarten teacher.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920718/were-on-a-break">We’re on a break.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920697/i-mean-all-due-respect-to-microsoft-do-you-really-want-microsoft-controlling-digital-superintelligence">“I mean, all due respect to Microsoft, do you really want Microsoft controlling digital superintelligence?”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920691/whats-going-on-here-this-is-a-bati-and-switch">“What’s going on here? This is a bait and switch.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920674/a-musk-altman-twitter-spat">A Musk-Altman spat about Microsoft.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920678/musk-really-cannot-help-himself">Musk really cannot help himself.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920663/capped-profit-wasnt-an-issue-even-when-microsoft-got-involved">“Capped profit” wasn’t an issue, even when Microsoft got involved.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920639/tesla-is-not-pursuing-agi">“Tesla is not pursuing AGI.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920644/musk-is-more-on-his-game-today">Musk is more on his game today.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920635/after-i-received-these-reassurances-that-openai-would-continue-to-be-a-non-profit-i-continued-to-donate-over-10-million">“After I received these reassurances that OpenAI would continue to be a non-profit I continued to donate over $10 million.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920624/i-actually-was-a-fool-who-provided-free-funding-for-them-to-create-a-startup">“I actually was a fool who provided free funding for them to create a startup.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920609/more-discussion-of-who-would-own-openai">More discussion of who would own OpenAI.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920599/i-dont-lose-my-temper-says-elon-musk">“I don’t lose my temper,” says Elon Musk.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920578/2017-was-a-hard-year-and-weve-made-mistakes">“2017 was a hard year, and we’ve made mistakes.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920594/i-formed-many-for-profit-tech-companies-and-could-have-done-so-with-oai">&#8220;I formed many for-profit tech companies, and could have done so with OAI.&#8221;</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920586/crystal-clear-focus">&#8220;Crystal clear focus.&#8221;</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920568/sam-altman-has-just-entered-the-room-right-ahead-of-the-jury">Sam Altman has just entered the room, right ahead of the jury.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920563/a-member-of-the-public-just-got-dressed-down-by-ygr-about-taking-photos">A member of the public just got dressed down by YGR about taking photos.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920549/musk-v-altman-et-al-is-back-in-session">Musk v. Altman et al. is back in session.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920463/in-naming-openai-elon-musk-worried-anything-related-to-the-turing-test-could-mean-bad-pr">In naming OpenAI, Elon Musk worried anything related to the Turing Test could mean bad PR.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920191/elon-musk-sam-altman-trial-day-one">Elon Musk appeared more petty than prepared</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920082/thats-a-wrap">That&#8217;s a wrap!</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920079/ygr-scolds-openai-for-taking-inconsistent-positions-on-the-origin-of-its-name">YGR scolds OpenAI for taking inconsistent positions on the origin of its name.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/920048/elon-musk-testimony-save-humanity">Elon Musk tells the jury that all he wants to do is save humanity</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/920051/arguments-over-ownership">Arguments over ownership.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/920037/apparently-openai-could-have-had-an-ico">Apparently OpenAI could have had an ICO.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/920025/i-was-not-averse-to-a-small-for-profit-musk-says">“I was not averse to a small for-profit,” Musk says.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/920008/were-reading-emails-between-musk-and-jensen-huang">We’re reading emails between Musk and Jensen Huang.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/919999/musk-says-nonprofit-was-non-negotiable-for-openai">Musk says nonprofit was non-negotiable for OpenAI.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/919989/were-at-the-founding-of-openai">We’re at the founding of OpenAI.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/919982/musk-says-he-would-have-created-something-like-openai-on-his-own">Musk says he would have created something like OpenAI on his own.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/919961/musk-recalls-meeting-sam-altman">Musk recalls meeting Sam Altman.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919966/sam-altman-left-during-a-break-but-elon-musks-lawyer-didnt-notice">Sam Altman left during a break, but Elon Musk&#8217;s lawyer didn&#8217;t notice.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/919952/here-we-are-in-2026-and-ai-is-scary-smart">“Here we are in 2026 and AI is scary smart.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/919945/i-have-extreme-concerns-about-ai-says-musk">“I have extreme concerns about AI,” says Musk.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/919931/ai-will-be-as-smart-as-any-human-as-soon-as-next-year">AI will be as smart as ‘any human as soon as next year.’</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919936/musk-claims-he-has-time-for-spacex-tesla-neuralink-and-the-boring-company-because-he-works-a-lot">Musk claims he has time for SpaceX, Tesla, Neuralink, and the Boring Company because he works a lot.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/919913/musk-is-telling-the-jury-he-cofounded-tesla">Musk is telling the jury he (co)founded Tesla.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919925/neuralinks-long-term-goal-is-now-ai">Neuralink&#8217;s long-term goal is&#8230; AI?</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/919902/there-need-to-be-things-that-people-are-excited-about-that-make-life-worth-living-being-out-there-among-the-stars-can-excite-everyone">“There need to be things that people are excited about that make life worth living … being out there among the stars can excite everyone.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/919864/a-little-musk-biography">A little Musk biography.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919890/elon-musk-looking-funereal-in-a-black-suit-with-a-black-tie-says-its-not-okay-to-steal-a-charity">Elon Musk, looking funereal in a black suit with a black tie, says &#8220;it&#8217;s not okay to steal a charity.&#8221;</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/917052/elon-musk-takes-stand-trial-openai-sam-altman">Elon Musk takes the stand in high-profile trial against OpenAI</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919870/we-are-back-from-a-break">We are back from a break.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919852/elon-musk-will-be-the-first-witness-in-musk-v-altman">Elon Musk will be the first witness in Musk v. Altman.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919843/microsoft-unlocked-with-openai-a-virtuous-cycle">“Microsoft unlocked with OpenAI a virtuous cycle.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919838/microsoft-enters-the-chat">Microsoft enters the chat.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919760/we-are-here-because-mr-musk-didnt-get-his-way-at-openai">“We are here because Mr Musk didn’t get his way at OpenAI.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919750/musk-demanded-control-he-demanded-the-ability-to-make-all-the-decisions-without-regard-to-the-other-founders">“[Musk] demanded control, he demanded the ability to make all the decisions without regard to the other founders.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919734/openai-lawyers-argue-that-elon-right-in-the-middle-of-discussions-about-a-for-profit-pivot">OpenAI lawyers argue that Elon was right in the middle of discussions about a for-profit pivot.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919730/musk-was-furious-that-openai-succeeded">“Musk was furious that OpenAI succeeded.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919724/openai-mr-musks-lawsuit-is-a-pageant-of-hypocrisy">OpenAI: Musk’s lawsuit is a “pageant of hypocrisy.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919669/sam-altmans-related-party-conflicted-transactions-are-how-he-made-money-on-openai-molo-says">Sam Altman&#8217;s &#8220;related party conflicted transactions&#8221; are how he made money on OpenAI, Molo says.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919654/technical-difficulties">Technical difficulties.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919649/openai-is-like-a-museum-store-that-has-looted-the-picassos-and-pocketed-the-profits">OpenAI is like a museum store that has looted the Picassos and pocketed the profits.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919630/agi-might-be-out-of-fashion-in-the-ai-world-but-it-will-be-at-the-center-of-this-trial">AGI might be out of fashion in the AI world, but it will be at the center of this trial</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919626/the-defendants-in-this-case-stole-a-charity">&#8220;The defendants in this case stole a charity.&#8221;</a>
			</li>
					<li>
				<a href="https://www.theverge.com/podcast/919534/musk-openai-trial-vergecast">Musk and Altman go to court</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919559/good-morning-from-the-musk-v-altman-line-outside-the-courtroom">Good morning from the Musk v Altman line outside the courtroom.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/919469/elon-musk-dont-like">Jury selection in Musk v. Altman: ‘People don’t like him’</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919424/we-have-a-jury">We have a jury.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919413/elon-musks-lawyer-tried-to-get-some-jurors-thrown-out-for-disliking-musk">Elon Musk&#8217;s lawyer tried to get some jurors thrown out for disliking Musk.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919388/apparently-things-are-exciting-outside">Apparently things are exciting outside.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919339/we-have-gone-through-the-first-20-potential-jurors">We have gone through the first 20 potential jurors.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919251/voir-dire-has-begun">Voir dire has begun.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/919149/trial-starts-today">The Elon Musk vs. OpenAI trial starts today.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/918909/elon-musk-drops-fraud-claims-against-openai-and-sam-altman-before-trial">Elon Musk drops fraud claims against OpenAI and Sam Altman before trial.</a>
			</li>
					<li>
				<a href="https://www.theverge.com/ai-artificial-intelligence/917755/musk-altman-openai-xai-gossip">Musk vs. Altman is here, and it&#8217;s going to get messy</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/906027/elon-musk-lawsuit-ipo-spacex-tesla">Elon Musk is about to be a very busy boy!</a>
			</li>
					<li>
				<a href="https://www.theverge.com/column/863319/highlights-musk-v-altman-openai">‘Sideshow’ concerns and billionaire dreams: What I learned from Elon Musk’s lawsuit against OpenAI</a>
			</li>
					<li>
				<a href="https://www.theverge.com/news/765171/elon-musk-apple-openai-antitrust-lawsuit">Elon Musk&#8217;s xAI is suing OpenAI and Apple</a>
			</li>
					<li>
				<a href="https://www.theverge.com/2024/11/18/24299787/elon-musk-openai-lawsuit-sam-altman-xai-google-deepmind">Inside Elon Musk’s messy breakup with OpenAI</a>
			</li>
					<li>
				<a href="https://www.theverge.com/2024/8/5/24213557/elon-musk-openai-lawsuit-sam-altman-greg-brockman-revived">Elon Musk is suing OpenAI and Sam Altman again</a>
			</li>
			</ul>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elizabeth Lopatto</name>
			</author>
			
			<title type="html"><![CDATA[Elon Musk’s worst enemy in court is Elon Musk]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/921022/elon-musk-cross-openai-altman" />
			<id>https://www.theverge.com/?p=921022</id>
			<updated>2026-04-30T19:41:32-04:00</updated>
			<published>2026-04-29T20:01:24-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Elon Musk" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="xAI" />
							<summary type="html"><![CDATA[About five hours into Elon Musk’s testimony, I typed the following sentence into my notes: “I have never been more sympathetic to Sam Altman in my life.” Musk’s direct testimony was an improvement over yesterday — even if his lawyer kept asking leading questions to cue him on how to answer. But that memory was [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Elon Musk in front of a background of court gavels." data-caption="Just a simple country CEO who was being tricked by that sophisticated lawyer. | Image: Cath Virginia / The Verge, Getty Images" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/STK022_ELON_MUSK_CVIRGINIA4_H.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Just a simple country CEO who was being tricked by that sophisticated lawyer. | Image: Cath Virginia / The Verge, Getty Images	</figcaption>
</figure>
<p class="has-drop-cap has-text-align-none">About five hours into Elon Musk’s testimony, I typed the following sentence into my notes: “I have never been more sympathetic to Sam Altman in my life.”</p>

<p class="has-text-align-none">Musk’s direct testimony was an improvement over yesterday — even if his lawyer kept asking leading questions to cue him on how to answer. But that memory was immediately obliterated by an absolutely miserable cross-examination. For hours, Musk refused to answer yes-or-no questions with yes or no, occasionally “forgot” things he’d testified to in the morning, and scolded defense lawyer William Savitt. I watched a few jury members glance at each other. During one testy exchange, one woman was rubbing her head. Me too, babe.</p>

<p class="has-text-align-none">Even the judge, who at times prompted Musk to answer “yes” or “no,” was having a bad time. “He was at times difficult,” said Yvonne Gonzalez Rogers of Musk after the jury left the room. (At one point, when she’d cut off his argumentative answer, she got the biggest laugh of the day.) “Part of management from my perspective is just to get through testimony.”</p>

<figure class="wp-block-pullquote"><blockquote><p>“I don’t yell at people,” Musk said.</p></blockquote></figure>

<p class="has-text-align-none">Musk spent a lot of yesterday painting this heroic picture of himself, and this morning, near the end of his direct examination, said, “I don’t lose my temper,” and “I don’t yell at people.” He said he might have called someone a “jackass,” but only in the spirit of saying something like, “don’t be a jackass.”&nbsp;</p>

<p class="has-text-align-none">Immediately afterward, Savitt baited him into being petty, irritating, and generally hard to deal with. At one point, we all watched Musk lose his temper. He spent hours quibbling over simple questions. Again and again, Savitt referred back to Musk’s deposition, where he’d answered questions slightly differently, calling Musk’s accounts into question. Even if the average juror didn’t think he was lying, he was certainly inconsistent.</p>

<p class="has-text-align-none">Savitt’s cross-examination left the distinct impression that Musk quit his quarterly payments to OpenAI because he wasn’t going to get full control of the company, then tried to kneecap it and fold it into Tesla. Initially, Musk wanted four board seats and 51 percent of the shares. The other cofounders would get three seats, together, to be voted on by shareholders (including other employees). Though Musk said that the eventual plan was to expand to 12 seats, it was obvious that he would have full control of the initial board of seven.&nbsp;</p>

<p class="has-text-align-none">When Musk didn’t get what he wanted, he pulled the plug on his funding commitment and hired Andrej Karpathy, OpenAI’s second-best engineer, away to Tesla in 2017. Despite his fiduciary duty to OpenAI as a board member, he did not try to get Karpathy to stay at OpenAI when he said he heard Karpathy wanted to leave. (“I think people should have a right to work where they want to work,” Musk said on the stand.)</p>

<figure class="wp-block-pullquote"><blockquote><p>“In my and Andrej’s opinion, Tesla is the only path that could even hope to hold a candle to Google.”</p></blockquote></figure>

<p class="has-text-align-none">By 2018, Musk was saying that OpenAI had no path forward with its current structure, declaring it was on “a path of certain failure” in emails to Ilya Sutskever and Greg Brockman. His proposed solution was to merge Tesla and OpenAI. “In my and Andrej’s opinion, Tesla is the only path that could even hope to hold a candle to Google,” Musk said. The plan never came to fruition, and Musk resigned from OpenAI’s board that year.</p>

<p class="has-text-align-none">As early as 2016, Musk had his own concerns about OpenAI as a nonprofit. In an email to a colleague at Neuralink, he wrote “Deepmind is moving very fast. I am concerned that OpenAI is not on a path to catch up. Setting it up as non-profit might, in hindsight, have been the wrong move. Sense of urgency is not as high.”</p>

<p class="has-text-align-none">Asked about this, Musk said he was just speculating. Savitt said, “Those are your words, yes or no?”&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>“You mostly do unfair questions.”</p></blockquote></figure>

<p class="has-text-align-none">Musk replied, “This is a hypothetical.”</p>

<p class="has-text-align-none">Savitt said, “So you thought it might have been a wrong move? That’s what you said?”</p>

<p class="has-text-align-none">Finally, Musk said yes.</p>

<p class="has-text-align-none">Getting Musk to put any of that on the record was intensely difficult. He refused repeatedly to answer questions like whether he knew cutting off OpenAI donations would create financial pressure, or whether he’d asked Karpathy to stay at OpenAI. He accused Savitt of asking questions that were “designed to trick me,” and we got multiple versions of this:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">Musk: You mostly do unfair questions.</p>



<p class="has-text-align-none">Savitt: I am trying to put the questions as fairly as I can. I am doing my best.</p>



<p class="has-text-align-none">Musk: That’s not true.</p>
</blockquote>

<p class="has-text-align-none">Musk was trying to make this as painful as possible for Savitt, but he also made it as painful as possible for everyone else, including the jury. Watching him simply refuse to answer questions during cross he’d easily answered during direct was annoying. Watching him refuse to admit he understood the nature of linear time —&nbsp;and therefore the fact that he was still a director of OpenAI’s board before he resigned in 2018 —&nbsp;was infuriating. It made him look dishonest.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>“I’d lost trust in Altman and I was concerned they were really trying to steal the charity.”</p></blockquote></figure>

<p class="has-text-align-none">Musk’s basic, oft-repeated story during this week’s testimony has been that OpenAI is “stealing a charity” and “looting a nonprofit.” He maintains that he was all right with some limited for-profit activity, but not anything that would overshadow OpenAI’s nonprofit work and constitute “the tail wagging the dog” — another phrase he reached for, over and over, like a security blanket. In direct testimony, he painted himself as a trusting “fool” who had believed the wily promises of Sam Altman and his cohort: “I gave them $38 million of essentially free funding, which they used to create an $800 billion for-profit company,” he lamented. His own lawyer’s questioning wrapped up with Musk being purportedly blindsided by a multibillion-dollar deal with Microsoft.&nbsp;</p>

<p class="has-text-align-none">“I’d lost trust in Altman and I was concerned they were really trying to steal the charity,” Musk said. “It turned out to be true.”</p>

<figure class="wp-block-pullquote"><blockquote><p>“I said I didn’t look closely! I read the headline!”</p></blockquote></figure>

<p class="has-text-align-none">On cross-examination, Musk would barely even explain how much he bothered to learn about OpenAI’s operations before suing over them a few years later. When OpenAI proposed a for-profit arm around 2018, he got an email outlining the proposed corporate structure. On the stand, he said he’d only read the very first section of it, which said that contributors should consider the investments as donations that may have no return. “I read the highlighted box with ‘important warning,’” Musk said.</p>

<p class="has-text-align-none">Savitt asked Musk if he’d raised any objection to the structure then, when he’d received the documents. Musk said that he didn’t read beyond that first box.</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">Musk: I didn’t read the fine print. We’re going into the fine print of this document.</p>



<p class="has-text-align-none">Savitt: It’s a four-page document.</p>
</blockquote>

<p class="has-text-align-none">Musk then said he hadn’t read beyond taking this in the “spirit of a donation.” And then we got the deposition, where Musk said, “I don’t think I read this term sheet… I’m not sure I actually read this term sheet… I did not closely look at this term sheet.” Savitt pointed out that nowhere in the deposition did Musk say he’d read the first paragraph and Musk, raising his voice and effectively undermining his claims from the morning that he doesn’t lose his temper (lol) or yell at people (lmao), said, “I said I didn’t look closely! I read the headline!”</p>

<p class="has-text-align-none">Imagine having to deal with this man as your cofounder. I think I would sooner open a vein.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elizabeth Lopatto</name>
			</author>
			
			<title type="html"><![CDATA[Larry’s risky business]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/920378/oracle-openai-datacenter-buildout" />
			<id>https://www.theverge.com/?p=920378</id>
			<updated>2026-04-30T08:09:40-04:00</updated>
			<published>2026-04-29T09:57:16-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Business" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Report" />
							<summary type="html"><![CDATA[If you want to know whether the AI bubble is bursting, there’s only one publicly traded company that will tell you: Oracle. That’s right, the database company. Oracle has burned its boats and pivoted to AI, but not in any kind of usual way. It is not a foundation model builder like OpenAI or Anthropic, [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="An image of Larry Ellison with a basket of eggs balanced on his head in a basket with the OpenAI logo." data-caption="Oracular spectacular? | Image: Cath Virginia / The Verge" data-portal-copyright="Image: Cath Virginia / The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/68456_ORACLE_BELLWEATHER7.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Oracular spectacular? | Image: Cath Virginia / The Verge	</figcaption>
</figure>
<p class="has-drop-cap has-text-align-none">If you want to know whether the AI bubble is bursting, there’s only one publicly traded company that will tell you: Oracle.</p>

<p class="has-text-align-none">That’s right, the database company. Oracle has burned its boats and pivoted to AI, but not in any kind of usual way. It is not a foundation model builder like OpenAI or Anthropic, obviously. It’s not quite a neocloud, though it has entered the same bare-metal business <a href="https://www.theverge.com/ai-artificial-intelligence/822011/coreweave-debt-data-center-ai">as CoreWeave</a>. It is a software-as-a-service company that has made an audacious bet on a very specific future version of AI as Oracle’s traditional business has gracefully declined. It is significantly older than any of its AI competitors, save Microsoft, and it has decided its future involves<a href="https://www.wsj.com/business/openai-oracle-sign-300-billion-computing-deal-among-biggest-in-history-ff27c8fe"> an enormous compute deal</a> with OpenAI, a company that does not make money.</p>

<p class="has-text-align-none">Whether OpenAI is good for its commitments to Oracle depends a lot on how much money it can raise and how quickly it can become profitable. The risk for Oracle is that it may be sinking a lot of money into building data centers for OpenAI, only for OpenAI to be unable to pay Oracle the $300 billion it agreed to in their contract. Oracle and OpenAI did not respond to requests for comment.</p>

<p class="has-text-align-none">But the OpenAI play — and the pivot to AI generally —&nbsp;suggests a specific vision: The key place to make money isn’t training foundation models. The real money is inference, or using AI models to output results on data that isn’t in the training set. So the company has looked at some startups’ businesses and decided that they are actually just features that can be added to Oracle’s existing capabilities — which is pretty much what Oracle has been up to for the entirety of its existence.</p>

<p class="has-text-align-none">Oracle, of course, is already an enterprise business, so it has the existing relationships and large salesforce to go out there and sell its vision, one that suggests there isn’t much room for the AI stack to fragment. Rather, it will consolidate under existing players. Oracle intends to be the dominant player in that game.</p>

<p class="has-text-align-none">Wall Street wants to bet on AI, and it can’t bet on OpenAI because it’s not public yet. So the best way to do it now is through Microsoft and Oracle. Microsoft has a more complicated business, so it’s not a pure AI bet. Oracle, on the other hand, is cleaner. That means you can take the temperature of the entire AI boom by checking in on how many people are betting, via credit default swaps, that Oracle won’t repay its loans on time. Oracle’s stock price also reacts to assorted industry events, providing a bellwether for the AI revolution —&nbsp;or the AI bubble, depending on how you view it.</p>

<p class="has-text-align-none">But there’s always a tremendous gap between vision and execution, as Oracle’s history shows.</p>

<figure class="wp-block-pullquote"><blockquote><p>“The orthodox company is low-growth and high-margin and makes him feel old and uncool.”</p></blockquote></figure>

<p class="has-text-align-none">Let’s get it out of the way: Oracle founder Larry “<a href="https://pagesix.com/2026/03/19/hollywood/larry-ellisons-incredible-nickname-revealed-in-hollywoods-bombshell-legal-drama/">Bad Doggy</a>” Ellison is out of his fucking mind. He has a short attention span, a willingness to promise things his engineers have not yet built, a tremendous ego, and a competitive drive that could power every AI data center on Earth and then some. Ellison is nominally the chief technology officer and executive chairman of Oracle, and Clay Magouyrk and Mike Sicilia are nominally the co-CEOs. But Oracle has always been the Larry show, starring Larry, even when he’s busy <a href="https://www.reuters.com/article/sports/oracle-hit-with-unprecedented-penalties-for-americas-cup-cheating-idUSBRE98211Y/">cheating at yacht races</a> or whatever.&nbsp;</p>

<p class="has-text-align-none">Oracle’s move to focus on AI means leaving behind the high-margin, low-growth, low-capital-expenditure database business that is Oracle’s bread and butter to jump to the low-margin, high-growth, high-capex neocloud business that Oracle has taken out $43 billion in debt to build in <em>just</em> fiscal 2026. Why do that? Well, according to Paul Kedrosky, a longtime VC at SK Ventures, Larry got bored.</p>

<p class="has-text-align-none">“This is the story of Larry forever,” says Kedrosky. “Whenever he left to go sailing, he’d say, ‘This company’s not as much fun as it used to be.’ The high-level take is that the orthodox company is low-growth and high-margin and makes him feel old and uncool.”</p>

<p class="has-text-align-none">In the 1990s, one of the reasons that Oracle became a hot property was Ellison. He was among the various futurists making predictions about what the internet would do to society. In 1996, Ellison appeared on <em>The Oprah Winfrey Show</em> to hype what he called “<a href="https://www.wired.com/2009/12/fail-oracle/">the network computer</a>.” (As part of the appearance, Oracle promised to <a href="https://www.cnet.com/tech/tech-industry/kemp-lauds-ellisons-nc-donation/">give a network computer to each of almost 300 kids</a> at a primary school in Menlo Park.) This was a lightweight, even disposable, device that would connect to applications stored online. If you are thinking, <em>Boy, that sure sounds like a modern phone</em>, you’re right. If you are also thinking, <em>Boy, that sure sounds like the cloud</em>, you’re also right.</p>

<p class="has-text-align-none">The network computer flopped. The iPhone, which kicked off the modern era of lightweight, disposable computing devices, was introduced more than a decade later, in 2007. Oracle veered away from its bold vision of the cloud, while a true believer peeled off to form his own company: Marc Benioff, who founded Salesforce in 1999. Amazon launched AWS, its cloud computing venture, in 2006, a decade after Ellison had predicted that people wouldn’t need to keep software on their own computers.&nbsp;</p>

<p class="has-text-align-none">So why didn’t Oracle lead both of those revolutions, if Ellison saw them coming a decade out? Well, the iPhone was a consumer product, and Oracle made primarily enterprise databases. Oracle knew how to sell to businesses — it’s why it had so thoroughly stomped competitors such as Relational Technology Inc. and <a href="https://www.linkedin.com/posts/rickbennett_my-headline-that-lasted-decades-killed-cullinet-activity-7396202703812612096-g3p1">Cullinet</a> in the first place — but Ellison didn’t know how to make consumers <em>choose</em> to buy things rather than be forced to use them by their employer.</p>

<p class="has-text-align-none">The failure of the network computer also made Ellison weirdly recalcitrant about the cloud. He refused to take a second crack at the idea <a href="https://www.wired.com/2011/10/ellison-answers-benioff-with-oracle-cloud/">until 2011</a>, even mocking it as “<a href="https://www.wsj.com/articles/BL-BB-1199">complete gibberish</a>.” Oracle never really recovered from its lost lead. Despite its strong enterprise software business, <a href="https://www.statista.com/chart/18819/worldwide-market-share-of-leading-cloud-infrastructure-service-providers/">it lags Amazon, Google, Microsoft, and Alibaba in market share</a>, and is barely ahead of Salesforce. Given Ellison’s competitive streak — <a href="https://www.nytimes.com/2004/02/15/books/chapters/everyone-else-must-fail.html">one of his biographies is titled <em>Everyone Else Must Fail</em></a> — this has to sting. The worst part might have been losing to one of Oracle’s biggest rivals, Microsoft.</p>

<p class="has-text-align-none">Still, the majority of Oracle’s business, as of its most recent earnings results, is “cloud and software.” The category represented 88 percent of the company’s revenue in the three months ended February 28th, which is the third quarter in Oracle’s 2026 fiscal year. (There are also hardware and services businesses, but for our purposes, they are negligible.) The majority of <em>that</em> is software support, which “substantially all” customers renew every year “in order to continue to benefit from technical support services and the periodic issuance of unspecified updates and enhancements” to the applications and infrastructure they also use. That brought in a shade under $5 billion in Oracle’s third quarter. The next biggest was “cloud infrastructure,” which had revenue of about $4.9 billion.</p>

<p class="has-text-align-none">The customer support business had zero percent growth in the third quarter. Its database and applications businesses, though very profitable, aren’t growing and may even be declining, says analyst Gil Luria of DA Davidson.</p>

<p class="has-text-align-none">The cloud business, on the other hand, is growing. It’s an “okay business, very fast-growing with low profitability,” says Luria. “Oracle cloud has single-digit margins, maybe at best teens. But they’ve been growing it very fast.”</p>

<p class="has-text-align-none">So when ChatGPT launched the modern era of AI hype in Silicon Valley, it was inevitable that Ellison would take an interest. By February 2025, <a href="https://youtu.be/lIYIKpvFQOM?si=UDcib-VSvpwfAFy1&amp;t=88">Ellison was telling former UK Prime Minister Tony Blair</a> that AI was “a much bigger deal than the Industrial Revolution, electricity, and everything that’s come before.” In September, Oracle “<a href="https://techcrunch.com/2025/09/12/why-the-oracle-openai-deal-caught-wall-street-by-surprise/">shocked the markets</a>” with a $300 billion deal with OpenAI to build data centers, one of the largest cloud deals ever. Oracle’s move into the bare-metal business —&nbsp;renting out servers to AI companies — can be thought of as an extension of the cloud business. Having missed the initial run on the cloud, it seems that Ellison has decided Oracle can’t be left out this time.</p>

<figure class="wp-block-pullquote"><blockquote><p>When ChatGPT launched the modern era of AI hype, it was inevitable Ellison would take an interest</p></blockquote></figure>

<p class="has-text-align-none">In some ways, Oracle was an obvious partner for OpenAI. It’s one of the few Big Boys that isn’t trying to compete with Nvidia by building its own chips — though it does have a very close relationship with AMD. But there’s one other benefit that probably tickled Ellison, <a href="https://www.cnn.com/TECH/computing/9812/17/ellison.idg/index.html">a longtime Microsoft hater</a> who even resorted to sending private detectives to <a href="https://www.theguardian.com/technology/2000/jun/29/billgates.microsoft">sort through Microsoft’s trash</a>: OpenAI’s biggest partner for compute <em>used to be</em> Microsoft. The deal was pure, flashy, competitive Ellison —&nbsp;and propelled Oracle’s shares to an all-time high.</p>

<p class="has-text-align-none">But with the deal came another big personality: Sam Altman, who has <a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted">a reputation in Silicon Valley</a> as a sociopathic liar with a people-pleasing streak. OpenAI is the Sam Altman show, as became <em>very </em>clear in 2023 <a href="https://www.theverge.com/ai-artificial-intelligence/814876/ilya-sutskever-deposition-openai-sam-altman-elon-musk-lawsuit">when he was briefly deposed as OpenAI’s philosopher-king</a>. Tying Oracle so closely to OpenAI meant that Oracle was no longer the arbiter of its own fate.&nbsp;</p>

<p class="has-text-align-none">And indeed, as OpenAI soon announced a series of other massive deals, Oracle’s shares fell. Now, Oracle serves essentially as a <a href="https://www.ft.com/content/064bbca0-1cb2-45ab-85f4-25fdfc318d89">public market proxy for betting on OpenAI’s future</a> — for better and for worse.&nbsp;</p>

<p class="has-text-align-none">This time when Ellison predicted the future, he wasn’t the sole true believer, points out Nick Patience, the AI lead at The Futurum Group. “It’s a more grounded bet” than the network computer, Patience says. Microsoft’s Satya Nadella and Google’s Sundar Pichai have basically the same vision. On the other hand, Ellison is “piggybacking on Sam Altman, which is probably a dangerous place to be,” Patience notes.</p>

<figure class="wp-block-pullquote"><blockquote><p>Oracle’s OpenAI deal was basically kismet after Musk left it in the lurch</p></blockquote></figure>

<p class="has-text-align-none">Oracle’s OpenAI deal was basically kismet. Oracle had been working on a data center in Texas for Elon Musk, a friend of Ellison’s, who made an abrupt about-face when he decided his company xAI could build his own data center faster. Just as Musk left Oracle in the lurch, a LinkedIn message from OpenAI infrastructure chief Peter Hoeschele arrived in the inbox of a sales leader at Oracle, <em>Bloomberg</em> reported. The resulting deal was significantly larger than the one Oracle had been discussing with Musk, <a href="https://www.bloomberg.com/news/features/2025-12-12/oracle-s-300-billion-openai-deal-has-investors-worried-about-its-ai-spending">with options to expand it further</a>.</p>

<p class="has-text-align-none">To fulfill the deal, Oracle will build five <em>very large</em> data centers. “All told, they’ll require millions of chips and consume 4.5 gigawatts of power — more than all the homes in Chicago,”<em> Bloomberg </em>wrote of the deal. Oracle is planning to build them with an initial completion date in 2027, though according to <em>Bloomberg</em>, that has already slipped to 2028. It’s a more aggressive bet than any other major company has made on AI, and one that the less reckless —&nbsp;or perhaps, less desperate —&nbsp;Microsoft shied away from.&nbsp;</p>

<p class="has-text-align-none">Oracle’s previous chief executive officer, Safra Catz, was skeptical of the financial benefits of the cloud. It had lower margins and required costly data centers, <em>Bloomberg </em>reported, citing employees who’d heard Catz’s reservations. She was replaced last year, shortly after the OpenAI deal, by Magouyrk and Sicilia, who previously ran Oracle’s cloud business and applications. In the announcement, Ellison, unsubtly, is quoted saying that “Clay and Mike committed Oracle’s Infrastructure and Applications businesses to AI.” Oracle had burned the boats.</p>

<p class="has-text-align-none">OpenAI, for its part, needs Oracle for its investment-grade credit rating, notes Stijn Van Nieuwerburgh, a professor at Columbia Business School. OpenAI doesn’t have one, and couldn’t support the necessary compute buildout on its own. Effectively, OpenAI is renting Oracle’s creditworthiness.</p>

<p class="has-text-align-none">Of the hyperscalers, however, Oracle has the lowest credit rating. It also has the greatest debt load, even before the infrastructure buildout came into play. What’s more, when I say “Oracle is building data centers,” I am doing a little sleight of hand. Unlike Google and Meta, Oracle doesn’t actually build its own data centers. It’s leasing data centers that other companies are building on its behalf. In Abilene, Texas, that’s Crusoe, with whom Oracle has signed a 15-year lease; Oracle committed to paying more than a billion dollars a year despite Crusoe’s relative lack of experience, according to <em>SemiAnalysis.,</em> an industry newsletter.</p>

<p class="has-text-align-none">Crusoe’s inexperience is a relatively minor risk compared to the much larger one Oracle is taking on OpenAI, which is by far the biggest customer represented in Oracle’s remaining performance obligations (RPOs), which represent how much money Oracle is slated to earn from its existing contracts. Of the $553 billion in RPOs that Oracle reported in its most recent earnings release, more than $300 billion is OpenAI. So how’s OpenAI doing?</p>
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/68456_ORACLE_BELLWEATHER8.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="A cracked egg on a red background. On the yolk is the OpenAI logo." title="A cracked egg on a red background. On the yolk is the OpenAI logo." data-has-syndication-rights="1" data-caption="&lt;em&gt;Is OpenAI cooked?&lt;/em&gt; | Image: Cath Virginia / The Verge" data-portal-copyright="Image: Cath Virginia / The Verge" />
<p class="has-text-align-none">In November 2025, Sam Altman sent a memo to OpenAI in which he wrote, “I expect the vibes out there to be rough for a bit.”</p>

<p class="has-text-align-none">The vibes had been rough for a bit before the memo, actually. There was the whole thing where <a href="https://www.theverge.com/23966325/openai-sam-altman-fired-turmoil-chatgpt">Altman got booted and then reinstated</a>. Meta <a href="https://www.forbes.com/sites/geruiwang/2025/07/12/meta-and-openais-talent-wars-how-ai-mints-elites-but-displaces-others/">raided OpenAI for talent</a>; several key OpenAI players such as <a href="https://www.cnbc.com/2024/06/19/openai-co-founder-ilya-sutskever-announces-safe-superintelligence.html">Ilya Sutskever</a>, <a href="https://www.wsj.com/tech/ai/openais-former-research-chief-aims-to-automate-manufacturing-with-ai-8871f265">Bob McGrew</a>, and <a href="https://techcrunch.com/2025/02/18/thinking-machines-lab-is-ex-openai-cto-mira-muratis-new-startup/">Mira Murati</a> went on to found their own companies. Anthropic, one of OpenAI’s biggest competitors, is <a href="https://x.com/max_a_schwarzer/status/2028939154944585989">also</a> <a href="https://fortune.com/2026/02/07/anthropic-cofounder-daniela-amodei-humanities-majors-soft-skills-hiring-ai-stem/">composed</a> of <a href="https://fortune.com/2023/07/10/anthropic-ceo-dario-amodei-ai-risks-short-medium-long-term/">former OpenAI talent</a>. The executive reshuffling is basically constant.</p>

<p class="has-text-align-none">That’s not all on the corporate chaos front. Because OpenAI is trying to go public, it had to repeatedly renegotiate its deal with Microsoft. Not only is Microsoft freed up to partner with other companies, <a href="https://techcrunch.com/2025/09/09/microsoft-to-lessen-reliance-on-openai-by-buying-ai-from-rival-anthropic/">like Anthropic</a> — the <a href="https://www.theverge.com/ai-artificial-intelligence/918981/openai-microsoft-renegotiate-contract">revenue-sharing agreement between the two companies</a> ends in 2030 (rather than whenever OpenAI hits a development milestone) and the total payments are capped. “OpenAI products will ship first on Azure, unless Microsoft cannot and chooses not to support the necessary capabilities,” Microsoft announced. But OpenAI can now partner with other cloud providers to try to build its enterprise business.</p>

<p class="has-text-align-none">OpenAI’s approach to AI is decidedly unfocused, especially in comparison to Anthropic. There’s something of a profit panic as the company tries to figure out how to make money from its scattershot AI enterprises, while Claude Code and Cowork from Anthropic emerged as the winners for enterprise AI spending. Anthropic doesn’t have image generation or video generation products. It has instead stayed laser-focused on the enterprise market.&nbsp;</p>

<p class="has-text-align-none">OpenAI has, by contrast, chased the consumer market, exposing itself to other kinds of risk in the process. The company is facing multiple lawsuits from <a href="https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis">people</a> who <a href="https://www.cbsnews.com/news/chatgpt-lawsuit-colordo-man-suicide-openai-sam-altman/">say ChatGPT</a> <a href="https://www.nytimes.com/2025/11/06/technology/chatgpt-lawsuit-suicides-delusions.html">encouraged loved ones</a> to <a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html">kill themselves</a> —&nbsp;and, <a href="https://arstechnica.com/tech-policy/2025/12/openai-refuses-to-say-where-chatgpt-logs-go-when-users-die/">in some cases, others</a>. This has, perhaps predictably, <a href="https://www.npr.org/sections/shots-health-news/2025/09/19/nx-s1-5545749/ai-chatbots-safety-openai-meta-characterai-teens-suicide">resulted in political pressure</a>.</p>

<p class="has-text-align-none">Also there’s <a href="https://www.reuters.com/legal/litigation/musk-lawsuit-over-openai-for-profit-conversion-can-head-trial-us-judge-says-2026-01-07/">a lawsuit from Elon Musk</a> over OpenAI’s for-profit arm that — if Musk wins — <a href="https://www.wsj.com/tech/ai/elon-musk-asks-for-openais-nonprofit-to-get-any-damages-from-his-lawsuit-76089f6f">may threaten its public offering</a>.</p>

<figure class="wp-block-pullquote"><blockquote><p>OpenAI’s chaos does not inspire confidence in Altman’s management skills</p></blockquote></figure>

<p class="has-text-align-none">This degree of chaos does not inspire confidence in Altman’s management skills. And OpenAI, like all AI labs, is a money furnace. Recently, OpenAI projected that it will spend <a href="https://www.theinformation.com/articles/openai-boost-revenue-forecasts-predicts-112-billion-cash-burn-2030?rc=jznb2j">$665 billion by 2030</a> —&nbsp;$111 billion more in cash burn than it previously predicted. That’s not all. Its gross profit margins last year were lower than the company predicted, as it had to buy last-minute compute to meet demand. OpenAI projects it will be cash-flow positive in 2030, two years later than its rival Anthropic. Both companies are threatening to go public this year.</p>

<p class="has-text-align-none">The thing that’s driving up OpenAI’s costs is inference, the very thing Ellison is betting on. This is probably a positive sign for Ellison’s intuitions about AI use, but it might not be the best thing for his partner. OpenAI has <a href="https://www.wsj.com/tech/ai/openai-anthropic-profitability-e9f5bcd6">promised $1.4 trillion in its contracts</a>.&nbsp;</p>

<p class="has-text-align-none">OpenAI recently raised <a href="https://openai.com/index/accelerating-the-next-phase-ai/">$122 billion</a>. “That could last them a few years,” Luria says. “I am 100 percent sure they can get to $1.4 trillion? Probably not, but they do have money now, and that makes a difference for Oracle.”&nbsp;</p>

<p class="has-text-align-none">Luria’s skepticism is understandable. Take Stargate, the flashy data center project that OpenAI announced, Altman standing shoulder to shoulder with Ellison and Donald Trump. The joint venture hasn’t hired staff and isn’t developing any data centers for OpenAI, <a href="https://www.theinformation.com/articles/inside-openais-scramble-get-computing-power-stargate-stalled?rc=jznb2j"><em>The Information</em> reported in February</a>, describing Stargate as a “shelved idea.” Earlier this month, <em>The Information </em>discovered several <a href="https://www.theinformation.com/articles/openai-stargate-leaders-depart-latest-shakeup-data-center-strategy?utm_campaign=%5BREBRAND%5D+%5BTI-AM%5D+Th&amp;utm_content=1095&amp;utm_medium=email&amp;utm_source=cio&amp;utm_term=124&amp;rc=jznb2j">Stargate leaders, including Hoeschele, ditched OpenAI</a>; they <a href="https://www.theinformation.com/briefings/openai-stargate-execs-join-metas-new-compute-unit">washed ashore at Meta</a>.</p>

<p class="has-text-align-none">OpenAI didn’t get its planned 10GW of data center capacity from Oracle and SoftBank last year, either. Part of the problem for OpenAI is that its credit wasn’t as good as Oracle’s —&nbsp;so OpenAI just made its deal with Oracle directly. The two companies also made an unusual arrangement where if there was a delay or the project came in over budget, OpenAI and Oracle would share costs. (They also both benefit if things are under budget.)</p>

<p class="has-text-align-none">As for the other member of the project, SoftBank has its own OpenAI agreement, which has led to slapfights over who controls the 1GW facility in Milam County, Texas. OpenAI has signed a long-term lease with subsidiary SoftBank Energy, which would develop and own the data center. Between the complications with Oracle and SoftBank, it seems likely that OpenAI doesn’t have the money or the compute to achieve all of its ambitious goals.</p>

<p class="has-text-align-none">Despite all that, Altman is targeting a public offering by the end of 2026, <a href="https://www.wsj.com/tech/ai/openai-ipo-anthropic-race-69f06a42"><em>The Wall Street Journal </em>reported in January</a>. That may be in doubt now, after the <em>Journal</em> also reported OpenAI <a href="https://www.wsj.com/tech/ai/openai-misses-key-revenue-user-targets-in-high-stakes-sprint-toward-ipo-94a95273?mod=hp_lead_pos1">missed revenue and user growth targets</a>. “Chief Financial Officer Sarah Friar has told other company leaders that she is worried the company might not be able to pay for future computing contracts if revenue doesn’t grow fast enough,” the <em>Journal</em> wrote, citing anonymous sources. In response, <em>Dealbook</em> analyst Harrison Rolfes issued a note explaining that OpenAI likely won’t go public this year.</p>

<p class="has-text-align-none">OpenAI is racing to beat Anthropic and Musk’s xAI, which has been subsumed by SpaceX and <a href="https://www.theverge.com/tech/887899/spacex-ipo-risks-ai">is trying to IPO this summer.</a> SpaceX is seeking <a href="https://www.wsj.com/tech/ai/openai-ipo-anthropic-race-69f06a42?mod=article_inline">a valuation of $1 trillion or more</a>. OpenAI’s most recent funding round gave it a valuation of more than $850 billion. Granted, valuations are more art than science, but it’s also the case that if OpenAI were to IPO with that valuation, it would be <a href="https://www.theinformation.com/articles/openais-ipo-hopes-face-skeptical-investor-community?utm_campaign=article_email&amp;utm_content=article-16722&amp;utm_medium=email&amp;utm_source=sg&amp;rc=jznb2j">valued at 28 times its projected 2026 revenue</a>. By way of comparison, Nvidia — a company that is making an actual profit from the AI boom — is valued at 12 times its projected 2026 revenue. OpenAI’s rich valuation, the company’s long way to profitability, and its aim at consumers rather than enterprises may lead some investors to sit out the IPO, <em>The Information</em> reported.</p>

<p class="has-text-align-none">If and when the mandatory paperwork associated with an IPO filing is made public, we’ll get a sense of exactly how challenging the environment is for OpenAI — but it seems that OpenAI doesn’t have an easy road to do everything it’s trying to do and still stay solvent.</p>

<figure class="wp-block-pullquote"><blockquote><p>OpenAI is racing to beat Anthropic and xAI, and it’s got a long path to profit</p></blockquote></figure>

<p class="has-text-align-none">OpenAI is a flaky partner for Oracle. While that’s the largest challenge for Oracle’s data center buildout, it’s not the only risk. Payments on both Oracle’s bonds and its data center leases will occur on a fixed timeline. Anything that slows the buildout threatens Oracle — money will be flowing out on schedule, but if the build doesn’t happen on time, there may not be enough money flowing in.</p>

<p class="has-text-align-none">There are a lot of things that could potentially slow Oracle’s mad dash to build data centers: increasing objections from communities near data centers, supply chain risks from Trump’s war on Iran, and energy risks for the same. What’s more, Oracle has data centers in the now-destabilized Middle East, which could lead to surprise costs that have nothing to do with OpenAI, but nonetheless make it harder for Oracle to pay its bills.</p>

<p class="has-text-align-none">Increasingly, communities are objecting to the mad rush to build AI data centers, so much so that <a href="https://www.ncsl.org/state-legislatures-news/details/why-states-are-considering-temporary-bans-on-new-data-centers">11 states are considering moratoriums</a>. In the case of one of Oracle’s attempted data centers in Doña Ana County, New Mexico, the attempt to quickly build a data center by <a href="https://www.nytimes.com/2025/12/07/business/boarderplex-new-mexico-data-center-mystery.html">buttering up local officials</a> —&nbsp;without even really consulting the community the facility would be built in —&nbsp;is now <a href="https://sourcenm.com/2026/02/10/lawsuit-alleges-nm-countys-project-jupiter-data-center-vote-violated-open-meetings-act/">facing several lawsuits</a> from a local environmental group. The data center, dubbed Project Jupiter, would emit more greenhouse gases than the state’s two largest cities, Albuquerque and Las Cruces, combined. The state’s land commissioner <a href="https://sourcenm.com/2026/03/31/nm-state-land-commissioner-rejects-application-for-gas-pipeline-to-power-project-jupiter-data-center/">has rejected an application for a segment of gas pipeline to power</a> the data center. Though construction has already started, it’s still awaiting two air quality permits —&nbsp;and the decision, with an original deadline of April 22nd, has been <a href="https://sourcenm.com/briefs/nm-environment-officials-will-hold-public-hearing-on-project-jupiter-air-permits-push-back-decision/">delayed several months</a> to allow for a public hearing as opposition to the data center has mounted.</p>

<p class="has-text-align-none">Another Oracle-OpenAI data center, in Port Washington, Wisconsin, has similarly drawn <a href="https://abc7chicago.com/post/600-acre-ai-data-center-could-cost-wisconsin-residents-land/18616706/">pushback from locals</a>. Several protesters were arrested at a city council meeting in December, and one was dragged out for chanting “Recall” at the mayor. Construction on this data center is also underway. Among the people who oppose the data center is the <a href="https://www.instagram.com/reel/DPfBND_DDeK/?igsh=MWd6bDhqNjFvdGF0bQ%3D%3D">comedian Charlie Berens</a>, who has 3 million subscribers to his YouTube channel of mostly Midwestern humor. This data center project also <a href="https://biztimes.com/uihlein-family-member-files-lawsuit-over-records-tied-to-port-washington-data-center/">faces lawsuits</a> from locals, including <a href="https://www.wpr.org/news/community-challenges-port-washingtons-data-center-tid-court#:~:text=A%20community%20group%20opposed%20to%20the%20planned,try%20to%20block%20a%20tax%20increment%20financing">a challenge to tax incentives for the project</a> worth nearly half a billion dollars. An investigation is now taking place about whether meetings that pushed the data center development forward <a href="https://www.wisn.com/article/port-washington-neighbors-file-complaint-over-data-center-approval-process/70424962">violated open records laws</a>; Port Washington has also been accused of <a href="https://pbswisconsin.org/news-item/wisconsin-communities-face-scrutiny-over-data-center-secrecy-beyond-use-of-ndas/">not turning over public records in response to a request</a>. 
The construction itself, going around the clock to avoid delays, <a href="https://www.wpr.org/news/frustration-grows-over-24-hour-data-center-construction-port-washington">has also irritated neighbors</a>; <a href="https://www.tmj4.com/news/ozaukee-county/24-hour-construction-at-the-port-washington-data-center-is-set-to-end-in-april-neighbors-say-the-new-city-rules-dont-go-far-enough">new rules limit construction time</a>.</p>

<figure class="wp-block-pullquote"><blockquote><p>There are signs that the OpenAI and Oracle alliance could be getting shaky</p></blockquote></figure>

<p class="has-text-align-none">There are other signs that the OpenAI alliance may be shaky. <a href="https://www.bloomberg.com/news/articles/2026-03-06/oracle-and-openai-end-plans-to-expand-flagship-data-center">OpenAI declined to expand the Abilene, Texas, data center</a> it partners with Oracle on, possibly <a href="https://www.cnbc.com/2026/03/09/oracle-is-building-yesterdays-data-centers-with-tomorrows-debt.html">because it doesn’t have the newest clusters of Nvidia chips</a>. It wasn’t just OpenAI who didn’t want to work with Oracle on this; lenders didn’t want to finance an expansion with Oracle as the tenant, according to <em>The Wall Street Journal.</em> Banks have <a href="https://www.businessinsider.com/oracle-openai-stargate-loans-jpmorgan-diminishing-interest-debt-2026-1">reportedly grown wary of Oracle debt</a> as private credit investors have <a href="https://www.bloomberg.com/news/articles/2026-04-06/blue-owl-stock-closes-at-a-record-low-amid-private-credit-exodus">gotten anxious about their funds</a>. Should Oracle require more money, it may be harder to find.</p>

<p class="has-text-align-none">And now that Ellison’s pal Donald Trump has started a war in Iran with no end in sight, new risks are stacking up for data centers broadly — including Oracle’s. The New York Fed has said that <a href="https://www.reuters.com/markets/us/ny-fed-says-supply-chain-pressures-heated-up-march-2026-04-06/">supply chains are facing mounting pressure</a>. In particular, Iran’s blockade on the Strait of Hormuz is <a href="https://www.npr.org/2026/04/03/nx-s1-5762568/strait-of-hormuz-closure-deflates-global-helium-supply">a threat to the global helium supply</a>; helium is used in <a href="https://www.cnbc.com/2026/03/10/iran-war-semiconductor-memory-chip-impact.html">manufacturing semiconductors</a>, and there is <a href="https://www.semiconductors.org/wp-content/uploads/2023/03/SIA-Comments-to-USGS-Request-for-Comment-on-Helium-Supply-Risks-3_16_23.pdf">nothing that can replace it</a>. There is also <a href="https://www.reuters.com/world/middle-east/iran-blows-hole-us-aluminium-supply-chain-with-smelter-strikes-2026-03-30/">an aluminum crisis</a>; the material is used in data center server racks and cooling units. <a href="https://finance.yahoo.com/news/ai-data-centers-massive-demand-for-aluminum-is-crushing-the-us-aluminum-industry-110035572.html">Data centers had already driven up the price of aluminum</a>, while also making it more difficult to manufacture aluminum in the US by increasing energy prices.&nbsp;</p>

<p class="has-text-align-none">Speaking of energy prices, there is one more obvious problem that has been created by the Iran war: more expensive energy. Infrastructure damage from the war, along with the closure of the Strait of Hormuz, has sent prices up&nbsp;<a href="https://www.reuters.com/business/energy/services-firms-feel-squeeze-oil-rally-iran-war-fails-spur-drilling-2026-03-27/">without spurring new drilling</a>. That may mean increases in the price of other kinds of energy as people seek out alternatives. Expensive energy could turbocharge objections to data center buildouts, as well as making those builds more pricey. It may also force purveyors of compute to raise their prices, which could make AI even more expensive than it already is.</p>

<p class="has-text-align-none">On top of all of that, Oracle’s existing data centers are also in greater jeopardy than before Trump went to war. A Dubai data center <a href="https://www.businessinsider.com/oracle-office-dubai-strike-iran-war-2026-4">has been hit by debris</a> already. Iran claims to be proactively targeting data centers from US companies, including Oracle, and has struck several Amazon facilities. Should Iran take out more Oracle hubs, the company may be forced to rebuild those at the same time as it attempts to build its AI centers — leading to a cash crunch. It may also limit revenue.</p>

<p class="has-text-align-none">An increase in the cost of construction and energy, a decrease in revenue, or both could make it much harder for Oracle to meet its lease and debt obligations, which are fixed. Repayment of the $43 billion Oracle raised in fiscal 2026 is split up over a series of years, starting in 2029 and ending in 2066, with a total effective interest rate of 4.9 percent.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>Now, on top of everything else, there’s the war in Iran</p></blockquote></figure>

<p class="has-text-align-none">In the next five years, Oracle will have to repay the $9.5 billion it borrowed in fixed-rate notes, plus another $500 million in floating-rate notes, plus interest. This suggests an aggressive view of what the company can accomplish in five years, since we know Oracle isn’t making enough cash right now to cover its operating expenses <em>and </em>capital expenditures. To turn that around, Oracle has to build data centers, fast, to turn its theoretical future revenue into actual money.</p>

<p class="has-text-align-none">The bond market has reflected uncertainty about Oracle’s plans. In December, Oracle’s investment-grade notes were trading like junk bonds, because <a href="https://www.bloomberg.com/news/articles/2026-04-25/oracle-data-center-16-billion-financing-gets-over-the-line">investors feared data centers would be delayed</a>, according to <em>Bloomberg</em>. Also in December, Oracle’s credit default swaps (CDS) — a kind of insurance in the case of a default — <a href="https://www.reuters.com/business/finance/global-markets-cds-explainer-2025-12-11/">got expensive</a>. But by February, investors were <a href="https://www.cnbc.com/2026/02/02/oracles-credit-default-swaps-are-plummeting-after-financing-plans.html">feeling better</a> after newly announced financing plans suggested that Oracle would avoid having its credit downgraded. After Oracle’s strong earnings were announced in March, the five-year CDS got cheaper —&nbsp;then, later that month, <a href="https://www.bloomberg.com/news/articles/2026-03-27/oracle-s-credit-risk-measure-nears-record-high-on-ai-debt-fears">hit an all-time high</a>, suggesting investors were nervous again.</p>

<p class="has-text-align-none">“Oracle’s CDS has become the credit market’s proxy for AI risk,” John Lloyd, global head of multisector credit and a portfolio manager at Janus Henderson Investors, told <em>Bloomberg</em>.</p>

<p class="has-text-align-none">In April, another <a href="https://www.bloomberg.com/news/articles/2026-04-25/oracle-data-center-16-billion-financing-gets-over-the-line">$14 billion of bonds for an Oracle data center were issued</a> in a special purpose vehicle, keeping <a href="https://www.ft.com/content/e9682adb-f29a-4169-8bf0-19e299e906e2?syn-25a6b1a6=1">the debt off Oracle’s balance sheet</a>.</p>

<p class="has-text-align-none">Investors have reason to be nervous. More than half of the data centers scheduled to be built this year <a href="https://www.bloomberg.com/news/features/2026-04-01/us-ai-data-center-expansion-relies-on-chinese-electrical-equipment-imports">may be delayed by equipment shortages</a> — or even canceled. While some builders — like Oracle’s partner Crusoe — are refurbishing old transformers or relying on other strategies, uncertainty around the AI buildout has been rising.</p>

<p class="has-text-align-none">It’s not all bad news for Oracle. For instance, ByteDance has been renting chips from Oracle to circumvent export prohibitions of Nvidia’s most advanced chips. According to <em>The Information</em>, <a href="https://www.theinformation.com/articles/chinas-nvidia-loophole-how-bytedance-got-the-best-ai-chips-despite-u-s-restrictions?rc=jznb2j">ByteDance has become one of Oracle’s largest cloud customers</a>. Oracle also has a <a href="https://www.cnbc.com/2026/03/11/oracle-tiktok-us-stake-outages.html">$2 billion stake in TikTok’s newly spun-off US operations</a> and hosts all of the company’s user data.</p>

<p class="has-text-align-none">What’s more, <em>SemiAnalysis</em> has suggested that in addition to deals in Northern Virginia, ByteDance is a major customer of Oracle’s in Southeast Asia. As ByteDance is planning to grow in Southeast Asia, Europe, and Latin America, Oracle will benefit, according to <em>SemiAnalysis</em>. “The scale of the Oracle and Bytedance partnership remains under the radar,” <em>SemiAnalysis </em>noted, rating Oracle’s GPU service as Gold on <a href="https://www.clustermax.ai/v2">its most recent ranking chart</a>.</p>

<p class="has-text-align-none">Oracle had strong results in its most recent earnings, too. The company did better than expected at keeping its costs low. It also showed strong growth in its cloud infrastructure business, and 90 percent of its database projects were on or ahead of schedule. “A strong record of on-time delivery is evidence of solid execution,” <a href="https://www.morningstar.com/stocks/oracle-earnings-solid-execution-secures-revenue-target-mitigates-investor-concerns">wrote Luke Yang</a>, an analyst with the financial firm Morningstar. Still, Yang said that there was a lot of uncertainty around Oracle, since the AI landscape changes quickly.</p>

<p class="has-text-align-none">But more significant than the bare-metal business may be Ellison’s vision of private AI, deployed within databases Oracle already runs. Sure, Oracle has talked about efficiencies from using AI coding tools. That’s not really the big play, though.</p>

<p class="has-text-align-none">Oracle already has sensitive data for a number of businesses, including healthcare records. Having an AI software stack means being able to deploy AI agents into that data to better organize it —&nbsp;with fewer concerns about leakage than there would be with general-purpose third-party LLMs. “Training AI models on public data is the largest, fastest-growing business in history,” Ellison said <a href="https://www.inc.com/ava-levinson/oracle-larry-ellison-identifies-next-big-ai-business-opportunity/91277613">in December, on an earnings call</a>. “AI models reasoning on private data will be an even larger and more valuable business. Oracle databases contain most of the world’s high-value private data.”</p>

<figure class="wp-block-pullquote"><blockquote><p> “Training AI models on public data is the largest, fastest-growing business in history.”</p></blockquote></figure>

<p class="has-text-align-none">There’s reason to believe that training is no longer going to be the same kind of growth industry for AI bare-metal providers; inference will be. After all, one line of thinking goes, the big LLMs have already scraped everything available on the web. But that doesn’t really matter for the application of AI to businesses —&nbsp;inference is what they’d want anyway. Maybe they don’t have everything a business might need yet, says Patience, the analyst with Futurum. But it’s clearly where Ellison is heading.</p>

<p class="has-text-align-none">“I don’t think he’s early this time,” Patience said, suggesting this moment was unlike the network phone. “A lot of people would have to be completely wrong, so he’s more protected.”</p>

<p class="has-text-align-none">Oracle is also a go-to vendor for the Trump administration, Patience points out. Oracle has, for instance, just <a href="https://www.oracle.com/news/announcement/oracle-cloud-infrastructure-to-support-centers-for-medicare-and-medicaids-modernization-initiative-2026-02-11/">won a contract with the Centers for Medicare and Medicaid Services</a> to modernize the agency’s data. It also just <a href="https://www.oracle.com/news/announcement/us-department-of-the-air-force-accelerates-cloud-modernization-with-oracle-2026-02-12/#:~:text=The%20Department%20of%20War%20can%20now%20access,Infrastructure%20technologies%2C%20including%20Oracle%20AI%20Database%2026ai.">won a similar contract with the Air Force</a>. And now US government customers can use a number of Oracle services, <a href="https://blogs.oracle.com/cloud-infrastructure/oci-adds-new-authorized-services-us-government">including its generative AI</a>.</p>

<p class="has-text-align-none">This has, perhaps understandably, <a href="https://thedreydossier.substack.com/p/the-merger-that-needed-a-war">freaked people out</a>. Oracle has a history of unauthorized data collection —&nbsp;in 2024, <a href="https://www.reuters.com/legal/oracle-reaches-115-mln-consumer-privacy-settlement-2024-07-19/">it settled a class-action lawsuit</a> claiming that the company illegally compiled “digital dossiers” including where people browsed online, bought gas, banked, and ate, and sold the information to marketers. The same year, Ellison also <a href="https://www.oracle.com/events/financial-analyst-meeting-2024/">suggested, in an analyst meeting</a>, that AI and surveillance will make sure that &#8220;<a href="https://www.businessinsider.com/larry-ellison-ai-surveillance-keep-citizens-on-their-best-behavior-2024-9#:~:text=Ellison's%20comments%20include:%20*%20%22AI%20will%20ensure,to%20replace%20police%20cars%20in%20high%2Dspeed%20chases.">citizens will be on their best behavior</a> because we’re constantly recording everything that’s going on.&#8221;&nbsp;</p>

<p class="has-text-align-none">He described a world in which police officers, wearing Oracle body cams, tell their cameras they need two minutes to go to the bathroom. “We’ll turn it off,” Ellison says. “The truth is, we don’t really turn it off. What we do is, we record it so no one can see it. No one can get into that recording without a court order.” And an AI is always watching, Ellison says. “These are the kind of next-generation systems we can build using AI.”</p>

<p class="has-text-align-none">And of course, databases are central to the vision. “We need to unify all the national data, put it into a database where it’s easily consumable by the AI model, and then ask whatever question you like,” Ellison said in <a href="https://www.youtube.com/watch?v=FG2AtiInwKM">another speech at the World Governments Summit</a>. “Right now, countries’ data is fragmented.” Ellison’s prophecy is, effectively, government by database. AI tools and government contracts may make it competitive with Palantir, the current AI standard-bearer in assembling government data <a href="https://www.nytimes.com/2026/01/30/technology/tech-ice-facial-recognition-palantir.html">so the secret police can stalk their victims</a>.</p>

<p class="has-text-align-none">This vision is, obviously, bad news for democracy, but it’s great news for Oracle! I find myself curious about whether the company will simply use its private enterprise data to help add, hmmm, <em>efficiencies </em>to its government efforts, making it easier for surveillance-minded authoritarians to track citizens. Oracle has been wooing authoritarian governments, including <a href="https://theintercept.com/2021/02/18/oracle-china-police-surveillance/">China</a>, and has suggested that “<a href="https://www.cnbc.com/2024/04/07/oracle-chatgpt-and-the-sovereign-cloud-nations-will-seek-in-future.html">pretty much every government</a> is going to want a sovereign cloud and a dedicated region for that government.” And because Oracle is so boring, most people may not even recognize it as a threat.&nbsp;</p>

<p class="has-text-align-none">Oh, and there’s one more thing. Besides the risk of Oracle snooping into your business, there’s also the possibility of regulatory capture — that is, because Ellison is so tight with the Trumps, what remains of the government watchdogs won’t stop it. That means there’s no one to prevent Oracle from doing assorted dirty deeds — but presumably not dirt cheap. This may create some downside risk if, say, Democrats ever win back power, but perhaps Ellison is betting that if he deploys his technology correctly, that simply won’t happen.</p>

<p class="has-text-align-none">Which brings me, finally, to Oracle’s core competency: lock-in. A lot of companies remain on Oracle databases because it is difficult and expensive to relocate. If Oracle’s inference is good enough, the company basically becomes Hotel California for anyone who’s put data there —&nbsp;because to leave is to leave the inference behind.</p>

<figure class="wp-block-pullquote"><blockquote><p>There’s bad news for democracy here, but good news for Oracle</p></blockquote></figure>

<p class="has-text-align-none">Despite some fairly serious risks to Oracle — largely from its OpenAI deal, and to a lesser extent from the war in Iran — the company may be positioned to succeed. The degree to which you have faith in Ellison’s vision is also the degree to which that vision is disquieting.</p>

<p class="has-text-align-none">Oracle’s AI buildout doesn’t necessarily make a lot of financial sense; Ellison may royally piss off his shareholders by the time this is all through. But making financial sense has never been Ellison’s strong point.&nbsp;And besides, what’s he going to do, let <em>Microsoft</em> beat him to the hot new technology?</p>

<p class="has-text-align-none">In Oracle’s most recent earnings, the AI buildout shows up in its financial statements as capital expenditures. Its most recent earnings were for the third quarter of fiscal 2026, which ended in February. Oracle spent $39 billion on capital expenditures, more than three times as much as in the previous fiscal year. As a result, the company now has negative free cash flow.</p>

<p class="has-text-align-none">Oracle also told investors that it expected to spend a total of $50 billion in capital expenditures in 2026, and that it was forecasting $67 billion in revenue. In fiscal 2027, Oracle expects $90 billion in revenue. The company did not say how much it forecast in capital expenditures.</p>

<p class="has-text-align-none">Morgan Stanley analysts suggest Oracle will need “$100 billion or more for 2027 and the first half of 2028,” <a href="https://www.wsj.com/tech/ai/oracle-ai-demand-debt-04977749">according to <em>The Wall Street Journal</em></a><em>.</em></p>

<p class="has-text-align-none">“Oracle is about execution right now,” says Luria. “The number one thing is the ability to build data centers and deploy capital to create data centers.” The thing to watch for is whether Oracle can get financing.</p>

<p class="has-text-align-none">But even if everything goes smoothly on the building front, Ellison still has to deal with Sam Altman. If OpenAI’s chaos gets too out of hand, it may suck in Oracle, too.&nbsp;</p>

<p class="has-text-align-none">What happens to Oracle if OpenAI shits the bed? One possibility is that it sacrifices its software stock premium and gets priced like a utility, which is effectively what the bare-metal business is. Like the telecoms of the ’90s internet boom, it (and its bellwether status) would fade in significance as the AI-native companies that survive a bursting bubble eventually emerge from the wreckage to reshape our society, however that may go. It’s not impossible that the company goes bankrupt, if enough things go wrong at the same time. Because for Oracle to be the dastardly surveillance company of Larry Ellison’s dreams, it has to nail the timing. And that’s never been his strong suit.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elizabeth Lopatto</name>
			</author>
			
			<title type="html"><![CDATA[Elon Musk appeared more petty than prepared]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/920191/elon-musk-sam-altman-trial-day-one" />
			<id>https://www.theverge.com/?p=920191</id>
			<updated>2026-04-29T07:31:51-04:00</updated>
			<published>2026-04-28T19:17:12-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Elon Musk" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Today the first witness was sworn in in Musk v. Altman: Elon Musk. I was surprised by how flat he seemed. This is not the first time I’ve seen Musk in court. During his defamation suit, he turned on the charm and the jury responded by finding him not guilty. Today he looked adrift and [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Elon Musk in front of a background of justice scales." data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/STK022_ELON_MUSK_CVIRGINIA4_G.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Today the first witness was sworn in in <em>Musk v. Altman: </em>Elon Musk. I was surprised by how flat he seemed.</p>

<p class="has-text-align-none">This is not the first time I’ve seen Musk in court. During his defamation suit, <a href="https://www.theverge.com/2019/12/4/20994895/elon-musk-testimony-vernon-unsworth-tweet-negligence-la-courthouse">he turned on the charm</a> and the jury responded by <a href="https://www.theverge.com/2019/12/6/20998821/elon-musk-wins-loses-twitter-defamation-trial-testimony-caver-vernon-unsworth-cave-rescue">finding him not guilty.</a> Today he looked adrift and unprepared. The only times he showed real animation were when he was <a href="https://www.theverge.com/ai-artificial-intelligence/920048/elon-musk-testimony-save-humanity">bragging about how much he’d done for OpenAI</a>.</p>

<p class="has-text-align-none">The direct examination is a way of telling a story through questions; it’s important to make the narrative clear. For a suit that accuses Sam Altman of straying from OpenAI’s mission, Musk spent a weird amount of time talking about himself, recounting his biography, and hyping up the various ventures he’s undertaken that have nothing to do with OpenAI.</p>

<figure class="wp-block-pullquote"><blockquote><p>“I came up with the idea, the name, recruited the key people, taught them everything I know, provided all the initial funding. Besides that, nothing.”</p></blockquote></figure>

<p class="has-text-align-none">For instance, he told jurors that he worked between “80 to 100 hours a week,” which was how he got so much done. It is unclear to me whether <a href="https://techcrunch.com/2025/01/14/elon-musk-tweets-so-much-people-bet-over-1m-weekly-to-guess-how-many-posts/">his prolific posting habits</a> count as part of the workweek. I hope the defense asks.</p>

<p class="has-text-align-none">We did eventually get around to OpenAI, where Musk portrayed himself as the driving force. He’d been worried about AI since childhood, and had finally felt that someone needed to prevent Google from developing it. He testified that he became involved in AI safety because he had a conversation with Google’s own Larry Page and asked, “What if AI wipes out all the humans?” Page essentially shrugged —&nbsp;as far as he was concerned, as long as the AI didn’t also go extinct, things were all right. “I said, ‘That’s insane,’ and he called me a species-ist for being pro-human.” So OpenAI, for Musk, was born <em>specifically</em> to keep Google from having too much power in AI. Petty! Musk also said that after he recruited Ilya Sutskever, then a research scientist at Google, to OpenAI, “Larry Page refused to speak to me ever again.”&nbsp;</p>

<p class="has-text-align-none">What did Musk do at OpenAI? “I came up with the idea, the name, recruited the key people, taught them everything I know, provided all the initial funding. Besides that, nothing.” He paused for laughter, and one or two people obligingly chuckled. But most of the courtroom was silent. I thought he sounded petulant. “I could have started it as a for-profit and I chose not to,” Musk said.</p>

<figure class="wp-block-pullquote"><blockquote><p>It’s hard to preempt the argument you are expecting without making it yourself&nbsp;</p></blockquote></figure>

<p class="has-text-align-none">I do wonder how much of this the jury is following. We went very quickly through a lot of ideas, including “artificial general intelligence,” <a href="https://www.scientificamerican.com/article/what-does-artificial-general-intelligence-actually-mean/">an imaginary thing</a> that many AI researchers are nonetheless afraid of. Musk defined this as being when a computer “becomes as smart as any human, arguably smarter than any human.” (<a href="https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems">Large language models are not the same as intelligence</a>, and AGI has been <a href="https://www.theverge.com/2024/12/4/24313130/sam-altman-openai-agi-lower-the-bar">defined downward</a> for quite some time. But whatever! This case is not about that!)&nbsp;</p>

<p class="has-text-align-none">At another point, Musk was asked to explain who former OpenAI board member Shivon Zilis was. “Shivon was the, um, my chief of staff and, uh, you know,” Musk said. One person in the gallery — presumably familiar with the fact that Zilis is the mother of a few of Musk’s kids — burst out in loud laughter. But the jury looked puzzled.</p>

<p class="has-text-align-none">During discussions of how best to get OpenAI the vast amounts of funding it would need for compute, Musk was indeed involved in talks about a for-profit arm of OpenAI. The strategy here, I think, was to make clear that Musk’s intentions were very different from the for-profit that came to pass. (That’s true! He did not get 55 percent equity in it, as one possible cap table suggested he should.) This all seemed pretty mushy, and we got bogged down in a discussion of what, in Musk’s opinion, a reasonable equity split between founders and funders would be; it’s hard to preempt the argument you are expecting without making it yourself.&nbsp;</p>

<p class="has-text-align-none">This is also kind of a distraction from the core point of the trial: Did OpenAI betray its mission statement and fool Musk into making a charitable donation? <em>I agreed to a for-profit model but not THAT for-profit model</em> isn’t a strong argument.</p>

<p class="has-text-align-none">We’ll be back with more Musk testimony and presumably his cross-examination. If there’s a clearer story from the defense, this trial is effectively all over but the shouting. I’ve seen a strong performance from Musk on the stand before. Today he just didn’t seem dialed in. Maybe he’s grumpy about this trial because he knows he’s wasting his own time.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elizabeth Lopatto</name>
			</author>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[Musk vs. Altman is here, and it&#8217;s going to get messy]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/917755/musk-altman-openai-xai-gossip" />
			<id>https://www.theverge.com/?p=917755</id>
			<updated>2026-04-27T12:21:14-04:00</updated>
			<published>2026-04-24T08:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Analysis" /><category scheme="https://www.theverge.com" term="Elon Musk" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Politics" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="xAI" />
							<summary type="html"><![CDATA[Elon Musk cofounded OpenAI, and then flounced off in a huff when he wasn’t anointed CEO, leaving Sam Altman as the last power-hungry man standing. Now, Musk is back with a lawsuit, and a trial is scheduled to start in Oakland, California, on April 27th. Theoretically, it’s a legal case about whether OpenAI defrauded Musk. [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Elon Musk is jumping in front of a courthouse while Sam Altman looks puzzled" data-caption="Might as well jump, as the poet David Lee Roth once said. | Image: Cath Virginia / The Verge" data-portal-copyright="Image: Cath Virginia / The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/268474_musk_vs_altman_CVirginia2.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Might as well jump, as the poet David Lee Roth once said. | Image: Cath Virginia / The Verge	</figcaption>
</figure>
<p class="has-drop-cap has-text-align-none">Elon Musk cofounded OpenAI, and then flounced off in a huff when he wasn’t anointed CEO, leaving Sam Altman as the last power-hungry man standing. Now, Musk is back with a lawsuit, and a trial is scheduled to start in Oakland, California, on April 27th. <em>Theoretically</em>, it’s a legal case about whether OpenAI defrauded Musk. But that’s not really what we’re all doing here. This is about mess.&nbsp;</p>

<p class="has-text-align-none">Over the past couple of years, Musk’s legal theories for punishing OpenAI have run the gamut from breach of contract to unfair business practices to false advertising. Now, he and Altman will be getting called to the stand at a particularly delicate time. Musk’s xAI, <a href="https://www.theverge.com/tech/872619/elon-musk-merges-spacex-with-xai-and-x">now a part of SpaceX</a>, has <a href="https://www.theverge.com/tech/906027/elon-musk-lawsuit-ipo-spacex-tesla">filed for an initial public offering</a>. OpenAI is rumored to be considering an IPO itself. There are only billions of dollars at stake.</p>

<p class="has-text-align-none">And so naturally, there’s a lot of internal tech gossip coming to light. Questions about Elon’s <a href="https://www.ft.com/content/5fc6429e-2e6a-4be5-a81d-c188536cee0d?syn-25a6b1a6=1">“rhino ket” use won’t make it into the trial</a>… but it’s in news stories because <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.433688/gov.uscourts.cand.433688.454.2.pdf">it showed up in the docket</a>. We got <a href="https://chatgptiseatingtheworld.com/2026/01/18/are-diary-entries-of-greg-brockman-for-openai-elon-musks-best-evidence-in-case-v-openai/">excerpts</a> from OpenAI President Greg “What will take me to $1B?” Brockman’s diary. Mark Zuckerberg, not involved in the litigation at all, <a href="https://www.techdirt.com/2026/03/31/weeks-after-denouncing-government-censorship-on-rogan-zuckerberg-texted-elon-musk-offering-to-take-down-content-for-doge/">has had several embarrassing texts made public</a>, like allegedly telling Musk that he had Meta teams “on alert to take down content doxxing or threatening the people on your [DOGE] team,” weeks <a href="https://www.theverge.com/2025/1/10/24341117/mark-zuckerberg-facebook-joe-rogan-lies">after claiming he refuses</a> to moderate based on government requests. Also, Musk thinks Jeff Bezos <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.433688/gov.uscourts.cand.433688.379.38.pdf">“is a bit of a tool.”</a></p>

<figure class="wp-block-pullquote"><blockquote><p>Lawsuits appear to be Musk’s preferred alternative to therapy</p></blockquote></figure>

<p class="has-text-align-none">The fact that the case got to trial at all is a win for Musk, who seems to be trying to damage OpenAI’s reputation however he can — from lawsuits to general shit-talking to, apparently, <a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted">a homophobic dossier on Sam Altman</a> that’s getting passed around Silicon Valley by “Musk intermediaries.” <em>Musk v. Altman</em> “only ended up at trial because Elon Musk can pay his attorneys to argue a losing case,” said Sam Brunson, a professor of law at Loyola University of Chicago. “If I were doing this on contingency, I’d assume I wouldn’t be getting paid.”</p>

<p class="has-text-align-none">Over the next few weeks, high-profile AI executives, such as Microsoft’s CEO Satya Nadella and CTO Kevin Scott, will likely testify. Former OpenAI executives, such as cofounder Ilya Sutskever and former CTO Mira Murati, <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.433688/gov.uscourts.cand.433688.431.0.pdf">may be called</a>. The former board members involved in Altman’s temporary 2023 ouster from his CEO role may also testify.&nbsp;</p>

<p class="has-text-align-none">Lawsuits appear to be Musk’s preferred alternative to therapy. Musk has sued perceived adversaries of his X social media platform, including a suit against a nonprofit that was <a href="https://counterhate.com/blog/elon-musk-vs-ccdh-nonprofit-wins-dismissal-of-baseless-and-intimidatory-lawsuit/">dismissed as “baseless”</a> and another <a href="https://www.reuters.com/legal/government/musks-x-ends-90-million-lawsuit-against-law-firm-wachtell-2025-11-20/">against the firm</a> that successfully made him follow through on his agreement to buy Twitter. Tesla and SpaceX are hotbeds of litigation. Let’s not go over the numerous family law matters that Musk is involved in due to his 14 known children.&nbsp;</p>

<p class="has-text-align-none">Musk has actually filed four lawsuits against OpenAI. The first —&nbsp;for a breach of the founding agreement —&nbsp;was <a href="https://www.courthousenews.com/wp-content/uploads/2024/02/musk-v-altman-openai-complaint-sf.pdf">in state court</a>, in 2024; Musk withdrew it immediately before a major hearing. We then got <a href="https://www.courtlistener.com/docket/69013420/musk-v-altman/">the current lawsuit</a>, also filed in 2024, in which Musk alleged &#8220;Shakespearean&#8221; deceit. Several of its claims, including a laughable invocation of racketeering law, have been dismissed. Another suit, <a href="https://www.courtlistener.com/docket/71191818/x-corp-v-apple-inc/">filed a year later</a>, this time by xAI, accused Apple and OpenAI of engaging in anticompetitive behavior by making an agreement to exclusively put ChatGPT into iPhones. (The <a href="https://www.courtlistener.com/docket/71191818/x-corp-v-apple-inc/?filed_after=&amp;filed_before=&amp;entry_gte=&amp;entry_lte=&amp;order_by=desc">case is ongoing</a>.) A fourth case <a href="https://www.courtlistener.com/docket/71450167/xai-corp-v-openai-inc/">accused OpenAI</a> of poaching xAI employees and stealing trade secrets. It was <a href="https://www.courthousenews.com/wp-content/uploads/2026/02/xai-v-openai-court-order.pdf">dismissed</a>.</p>

<figure class="wp-block-pullquote"><blockquote><p>“Since launching a competing artificial intelligence company, xAI, Musk has been trying to leverage the judicial system for an edge.”</p></blockquote></figure>

<p class="has-text-align-none">In court starting next week, Musk will be making three main claims: that Altman and Brockman, <em>et al.</em>, breached OpenAI’s charitable trust; that they participated in unjust enrichment (at Musk’s expense); and that they committed fraud. His lawyer will tell a jury that he was duped into giving OpenAI money on terms that Altman and Brockman didn’t live up to. Among other things, he’s demanding that Altman and Brockman be removed from their company roles, that OpenAI be required to award a certain amount of money to its nonprofit, and that it cease operating in its current structure as a public benefit corporation.</p>

<p class="has-text-align-none">OpenAI has countered that Musk failed to prove that Altman and Brockman ever made him a “cognizable promise” that could amount to unlawful activity, and that he lacks the standing for some claims, among other objections. It’s pointed out that Musk could have intervened in the company’s 2025 recapitalization in the time between OpenAI filing and its review by two attorneys general, and he did not. “This suit is the latest move in Elon Musk’s increasingly blusterous campaign to harass OpenAI for his own competitive advantage,” OpenAI wrote in <a href="https://www.courtlistener.com/docket/69013420/25/musk-v-altman/">one filing</a>. “Since launching a competing artificial intelligence company, xAI, Musk has been trying to leverage the judicial system for an edge. The effort should fail.”&nbsp;</p>

<p class="has-text-align-none">In court, OpenAI could argue that it engaged in self-help — such as starting its for-profit arm — because Musk left it in the lurch when he pulled promised funding from the nonprofit, says Peter Molk, a professor of law at the University of Florida. But that may not be enough to protect OpenAI. “My walking away doesn’t mean you can break any agreement we have,” Molk says. Musk may argue that OpenAI should have brought him to court and forced him to pay up. Of course, if OpenAI had done that, it likely would have gone bankrupt.</p>

<p class="has-text-align-none">The details that come out in the trial about OpenAI “will absolutely change its reputation, if it’s still trying to claim it’s doing this in some high-minded, ‘we want to make AI safe for humanity’ way,” said Deven Desai, a professor of business law and ethics at the Georgia Institute of Technology. “The court documents and testimonies will make it harder and harder for OpenAI to keep claiming that’s what it’s about.”&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>&#8220;I think at this point OpenAI has the leverage to ask for a soft promise for new investors not to invest in competitors.” </p></blockquote></figure>

<p class="has-text-align-none">Since this lawsuit was initially filed, OpenAI’s reputation has cratered. Besides the <a href="https://www.nytimes.com/2025/11/06/technology/chatgpt-lawsuit-suicides-delusions.html">assorted lawsuits</a> from people who say ChatGPT encouraged loved ones’ suicides, <a href="https://www.theverge.com/ai-artificial-intelligence/908513/the-vibes-are-off-at-openai">constant exec reshuffling,</a> and the dominance of Anthropic’s enterprise product have significantly cooled enthusiasm for the company. And there’s always the question of whether OpenAI, one of the most expensive startups ever, <a href="https://www.theverge.com/ai-artificial-intelligence/917380/ai-monetization-anthropic-openai-token-economics-revenue">will make a profit</a> that meets investors’ expectations.</p>

<p class="has-text-align-none">Already, <a href="https://www.theverge.com/column/863319/highlights-musk-v-altman-openai">we’ve found out</a> that Sutskever and others were worried about the success of Stability AI, a then-rival open-source lab. Sutskever also felt “betrayed” when Reid Hoffman, an early investor, founded his own AI lab; in response, Altman said, &#8220;I think at this point OpenAI has the leverage to ask for a soft promise for new investors not to invest in competitors.” Altman also didn’t tell the board he was personally running an OpenAI VC fund, according to a deposition of former board member Helen Toner.</p>

<p class="has-text-align-none">Some of the damages Musk is calling for in his lawsuit — like the demands to unseat executives and change the company’s business structure —&nbsp;are likely unrealistic. State attorneys general from California and Delaware both blessed OpenAI’s restructuring. But Georgia Tech’s Desai says that even if the federal court doesn’t move to act on such requests, Musk might still get what he wants. The suit&nbsp;could do real damage, especially ahead of OpenAI’s impending IPO, and amid some shareholders <a href="https://www.wsj.com/tech/ai/chatgpt-openai-ipo-altman-029ae6d5">reportedly questioning</a> if Altman is the person to lead the company during that process — especially as <a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted">allegations</a> of his untrustworthiness and manipulative behavior resurface.&nbsp;</p>

<p class="has-text-align-none">Musk’s strategy here is likely not just about angling to win in court, but also to “go after OpenAI as it currently exists,” says Desai. “It’s to create enough issues around how it has evolved to cause trouble&nbsp;and possibly get to the point where even if Musk doesn’t win, he’ll have made it look like it’s not worth keeping Mr. Altman in his position.”&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>Any scandalous information about OpenAI’s C-suite could derail the IPO</p></blockquote></figure>

<p class="has-text-align-none">Musk has his own AI project with xAI and is nakedly trying to damage a competitor, says Molk. Much of the damage — bad PR, the cost of litigation, and distracting Altman and other executives as they begin to prepare for an initial public offering — occurs outside the purview of the courtroom, Molk says. “As long as there is some credibility [to Musk’s case], the motivation doesn’t matter,” he says.</p>

<p class="has-text-align-none">The suit is part of a yearslong feud between Altman and Musk, ever since their personal and working relationships both soured in a public way. Musk’s xAI is aimed at the same government contracts and consumer chatbot users as OpenAI. He also folded xAI into SpaceX, as he had planned to do with OpenAI and Tesla.&nbsp;</p>

<p class="has-text-align-none">OpenAI has <a href="https://openai.com/index/the-truth-elon-left-out/">responded</a> <a href="https://openai.com/index/openai-elon-musk/">to Musk</a> <a href="https://openai.com/index/elon-musk-wanted-an-openai-for-profit/">by blogging</a>. According to OpenAI, Musk himself wanted a for-profit arm of OpenAI —&nbsp;and further, 50 to 60 percent of its equity, and to be its CEO. He also proposed merging OpenAI with Tesla. OpenAI’s lawyers, playing dirtier, asked in depositions about Musk’s purported escapades at Burning Man, including whether he’d ingested “rhino ketamine.” This fits with its legal tactics elsewhere —&nbsp;including <a href="https://www.theverge.com/ai-artificial-intelligence/801994/openai-subpoenas-intimidation-tactics-nonprofits-elon-musk-restructuring">subpoenaing nonprofits</a> and <a href="https://www.ft.com/content/47b00423-1060-43c9-8c28-23631cb7a4d1?syn-25a6b1a6=1">requesting a full list of who attended a deceased 16-year-old’s memorial services</a>.</p>

<p class="has-text-align-none">OpenAI is racing both SpaceX and Anthropic for an initial public offering. And there’s more investor pressure than ever to generate revenue, after companies like OpenAI and Anthropic raised billions of dollars without generating profit. <a href="https://www.theinformation.com/articles/openai-ceo-cfo-diverge-ipo-timing">Reportedly, even OpenAI’s CFO</a> doesn’t believe the company is ready to go public in 2026 due to its sizable spending commitments. Other executives have nevertheless <a href="https://www.wsj.com/tech/ai/openai-ipo-anthropic-race-69f06a42">expressed a desire to beat Anthropic</a> to the public market. Any scandalous information about OpenAI’s C-suite could derail the IPO.</p>

<figure class="wp-block-pullquote"><blockquote><p>Zilis: “‘Relationship’ is a relative term. But there have been romantic moments.”</p></blockquote></figure>

<p class="has-text-align-none">Musk is also potentially vulnerable to mud-slinging. Musk’s SpaceX has filed confidentially for an IPO, which may happen as soon as June. Anything revealed in the courtroom about xAI or Musk personally could potentially affect investor interest or confidence, particularly since the IPO may follow so quickly on the heels of the trial. We already know from texts surfaced in discovery that <a href="https://www.courtlistener.com/docket/69013420/455/2/musk-v-altman/">Musk was actively recruiting from OpenAI</a>.</p>

<p class="has-text-align-none">There are other topics — such as Musk’s relationship with Shivon Zilis, the former board member of OpenAI and mother of several of his children — that are particularly sensitive, too. Zilis, who was effectively OpenAI’s “Elon whisperer,” has already questioned what the meaning of a “romantic relationship” <a href="https://www.courtlistener.com/docket/69013420/455/2/musk-v-altman/">is in her deposition</a>. (Zilis: “‘Relationship’ is a relative term. But there have been romantic moments.”) And in a 2018 text message, Zilis asks Musk if she should stay “close and friendly” to OpenAI to “keep info flowing.” She also told Musk that after he hung up on a call, Sutskever was “visibly devastated” and Musk could probably recruit him if he wanted to.</p>

<p class="has-text-align-none">Another key witness on OpenAI’s list is Jared Birchall, Musk’s fixer — who, <a href="https://www.theverge.com/2019/12/4/20994895/elon-musk-testimony-vernon-unsworth-tweet-negligence-la-courthouse">among other things</a>, <a href="https://www.wsj.com/tech/elon-musk-fortune-fight-jared-birchall-igor-kurganov-11657308426?st=HLTGhu&amp;reflink=desktopwebshare_permalink">runs Musk’s family office</a>. Birchall is likely privy to a number of Musk’s secrets, and his testimony could be damaging.</p>

<p class="has-text-align-none">On the other hand, “I’m not sure how much reputation Musk has left to lose,” says Brunson.&nbsp;</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elizabeth Lopatto</name>
			</author>
			
			<title type="html"><![CDATA[Silicon Valley has forgotten what normal people want]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tldr/915176/nft-metaverse-ai-weirdos" />
			<id>https://www.theverge.com/?p=915176</id>
			<updated>2026-04-20T16:19:49-04:00</updated>
			<published>2026-04-20T16:30:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Analysis" /><category scheme="https://www.theverge.com" term="Business" /><category scheme="https://www.theverge.com" term="Crypto" /><category scheme="https://www.theverge.com" term="Meta" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="TL;DR" />
							<summary type="html"><![CDATA[One of the most mortifying things about knowing a lot of techies is listening to them tell me excitedly about some very important discovery that they believe they have made. Recently, I ran into an acquaintance of mine, who began talking my ear off about an amazing discovery he’d made with LLMs. Knowledge, it turns [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A brain is shown, melting in the sun" data-caption="The long-term risks of the All-In Podcast, illustrated. | Image: Cath Virginia / The Verge, Turbosquid, Getty Images" data-portal-copyright="Image: Cath Virginia / The Verge, Turbosquid, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/268457_HUBRIS_CVIRGINIA.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	The long-term risks of the All-In Podcast, illustrated. | Image: Cath Virginia / The Verge, Turbosquid, Getty Images	</figcaption>
</figure>
<p class="has-text-align-none">One of the most mortifying things about knowing a lot of techies is listening to them tell me excitedly about some very important discovery that they believe they have made. Recently, I ran into an acquaintance of mine, who began talking my ear off about an amazing discovery he’d made with LLMs. Knowledge, it turns out, is structured into language! You could put one word into ChatGPT and it might understand what you wanted, or make up a word and see if it understood what you meant! These amazing new tools have revealed that the English corpus contains so <em>much</em> about its speakers!</p>

<p class="has-text-align-none">He concluded that LLMs are a discovery on par with writing.&nbsp;</p>

<p class="has-text-align-none">Regular humans hit on this idea about a century ago; my most generous interpretation of what he was telling me was that he’d hit on a kind of naive, confused version of Structuralism; Saussure via a game of telephone. (There has been <a href="https://www.upress.umn.edu/9781517919320/language-machines/">recent work</a> on a similar point, which argues that one needs to <a href="https://www.jhiblog.org/2025/06/11/language-and-image-minus-cognition-an-interview-with-leif-weatherby/">understand LLMs via literary theory</a>, but it <a href="https://3quarksdaily.com/3quarksdaily/2025/08/attention-is-all-we-need-on-leif-weatherbys-language-machines.html">starts with Saussure</a>.) I tried to get out of the conversation as quickly as I could, not least because he seemed frustrated that I didn’t see things exactly as he did —&nbsp;a new behavior and likely a symptom of LLM overuse.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>There is a certain amount of hubris required to throw oneself at an unsolved problem. But elsewhere, that hubris is a liability.</p></blockquote></figure>

<p class="has-text-align-none">Not every discovery that’s new to you is actually new. For instance, there’s Elon Musk <a href="https://x.com/MarioNawfal/status/1989414278936629595">marveling at the complexity of hands</a>; I could point to a variety of disciplines for which this is 101-level stuff: artists, who have to figure out how to draw them; surgeons, who have to figure out how to operate on them; musicians and magicians, who rely on extremely fine motor skill to produce their work; neuroscientists and psychologists, who doubtless encountered the cortical homunculus early in their careers. Or Palmer Luckey claiming that <a href="https://x.com/PalmerLuckey/status/1939520689662484906">“no one has done a postmortem”</a> on the One Laptop Per Child computing project —&nbsp;because he didn’t know there’s a whole book about it called <em>The Charisma Machine</em>.</p>

<p class="has-text-align-none">At its nadir, one is reminded of Juicero, a company that sold a $400 juicer that did the same work as squeezing its proprietary juice packs with one’s bare hands.</p>

<p class="has-text-align-none">Look, discovering something that’s new to you is exciting —&nbsp;ask anyone who listened to me yell about the joys of European (higher-fat) butter — but you can’t take for granted that something that’s new to you is new to <em>everyone. </em>These things share an incuriosity that I have found endemic among a certain kind of tech enthusiast, particularly the ones most interested in startups and entrepreneurship. Perhaps they have been so siloed that they did not realize their “discovery” was well-known elsewhere, or perhaps their self-conception is that they are the smartest, and if they don’t know something, no one knows it.&nbsp;</p>

<p class="has-text-align-none">There is a certain amount of hubris required to throw oneself at an unsolved problem — you have to believe you can solve it. But elsewhere, that hubris is a liability. It leads you to do weird things, like <a href="https://www.theverge.com/tldr/897566/marc-andreessen-is-a-philosophical-zombie">announce that Freud invented introspection</a> and that it is a bonus that you simply do not engage in it.</p>

<figure class="wp-block-pullquote"><blockquote><p>Within recent memory, people who made software and hardware understood their job was to serve their customer</p></blockquote></figure>

<p class="has-text-align-none">When I think I have observed something important, my first impulse is to go to a library, or Wikipedia, or a person who I think may be knowledgeable, and see what else has been observed. For instance, when I had a concussion, I wanted to see if anyone else had written about what it was <em>like</em> to recover — the dry medical descriptions did very little for me. When I couldn’t easily find an account, <a href="https://www.theverge.com/2017/9/27/16086018/concussion-diary-brain-injury-recovery-symptoms">I wrote my own</a>. I still receive emails about it, years later, from people who are doing the same search I did, following their own concussions. But doing something like this requires you to take for granted that other people are smart, that smart people have always existed, and that very little in the human experience is new. That requires, you know, intellectual humility —&nbsp;and a willingness to think about other people’s experiences.</p>

<p class="has-text-align-none">While this particular kind of hubris makes people crashing bores, it’s not just an annoying personal trait. It seems to have seeped into the professional side of Silicon Valley as well.</p>

<p class="has-text-align-none">Within recent memory, people who made software and hardware understood their job was to serve their customer. It was to identify a need, and then fill it. But at some point following the financial crisis, would-be entrepreneurs got it into their heads that their job was <em>to invent the future</em>, and consumers’ job was to go along with that invented future. My guess is that they’re aping what they thought Steve Jobs was doing when he, for instance, got rid of the optical drives on the MacBook Air.&nbsp;</p>

<p class="has-text-align-none">But Steve Jobs, famously, failed at inventing the future in the 1980s and got booted from Apple. We all know how things changed when he came back. The iMac, the iPod, and the iPhone were built with a need in mind. The iMac won because it was easy to use. The iPod was easier to take with you than a CD player and a stack of CDs. (It was also a way to play the MP3s you might have illegally downloaded.) The iPhone had the App Store, which expanded its utility well beyond any other mobile device.</p>

<figure class="wp-block-pullquote"><blockquote><p>At some point, our Silicon Valley overlords forgot that in order for their vision of the future to be adopted, <em>people had to want it.</em></p></blockquote></figure>

<p class="has-text-align-none">Some of this was luck — <a href="https://www.theverge.com/2021/10/7/22711230/springboard-handspring-documentary-secret-history-first-real-smartphone">introducing the right product at the right time</a>. But each product offered consumers a distinct value proposition. Sure, early adopters jumped on each of these things because they were cool, but the uncool masses don’t care about that. They’ll buy something if it improves their life in a distinct way.&nbsp;</p>

<p class="has-text-align-none">In the place of problem-solving technology, companies have jumped on successive bandwagons like NFTs, the metaverse, and large language models. What these all have in common is that they are not built to really solve a market problem. They are built to make VCs and companies rich. NFTs, like crypto, let VCs quickly unload investments with abbreviated lockup periods. The metaverse promised to enrich companies like Facebook by having people move <em>all</em> their socializing online, where it could be surveilled and monetized. In addition, Facebook’s metaverse required the purchase of hardware, which would then need regular upgrades.</p>

<p class="has-text-align-none">At some point, our Silicon Valley overlords forgot that in order for their vision of the future to be adopted, <em>people had to want it.</em> That’s why NFTs, the metaverse, and the Oculus and Vision Pro never really found their customer base. AI is, admittedly, more useful — it’s good for organizing large swaths of data, for instance. LLMs have had widespread consumer adoption, at least as long as they remain free. But there is only really one customer for LLMs that can justify the massive cash incineration process that was required to build them: the US government.</p>

<p class="has-text-align-none">There can only be a few winners on government contracts, though. So we are now treated to the spectacle of watching AI companies scramble. OpenAI is perhaps the funniest, because it is attempting to position itself as a consumer product.</p>

<figure class="wp-block-pullquote"><blockquote><p>The people who tell us that AI will dominate our future and take our jobs are the people who are <em>hoping</em> that will be true.</p></blockquote></figure>

<p class="has-text-align-none">Consider Sam Altman telling the world that he <a href="https://www.businessinsider.com/sam-altman-chatgpt-parenting-jimmy-fallon-2025-12?ref=platformer.news">needed ChatGPT to tell him how to raise a baby</a>. You exist. I exist. Our parents did not have LLMs, or even AI, and yet somehow we survived our childhoods, as did almost everyone else we knew growing up, because <a href="https://www.cdc.gov/nchs/data-visualization/mortality-trends/index.htm#data-tables">childhood death rates</a> in the US have been extraordinarily low —&nbsp;compared to most of the rest of human history — for decades. The technologies that allowed us all to survive our childhoods were sanitation, vaccines, and antibiotics. I would put money down that a mandatory measles vaccine will do more for the survival of American children than anything OpenAI has accomplished with all of its billions of dollars to date.</p>

<p class="has-text-align-none">In any event, I presume what Altman actually did was hire a nanny.</p>

<p class="has-text-align-none">Or consider Elon Musk telling us about our future humanoid robot servants. I have a robot servant. Several, actually: a dishwasher, a washer for my clothing, and a dryer. They aren’t very mobile, and yet they have saved me tremendous labor. My fridge is from the ’90s, and my microwave isn’t much younger, and both of those things have been remarkable in what they have done for me: made food storage and cooking easy, without AI involvement. It doesn’t seem like there’s much AI can do to improve things over the baseline that these machines have already established, especially since my “dumb” technology hasn’t required an update in more than 20 years. Saving money is valuable to me, too.</p>

<p class="has-text-align-none">The people who tell us that AI will dominate our future and take our jobs are the people who are <em>hoping</em> that will be true. They may be hoping this because it makes them feel important, or because they want to be billionaires, or because they simply do not understand other people. I think that final point is underestimated. If you are going to provide me with a robot servant, I have a very clear bar: It’s gotta be at least as much bang for my buck as my dishwasher.</p>

<figure class="wp-block-pullquote"><blockquote><p>There are places in our lives where efficiency isn’t desirable</p></blockquote></figure>

<p class="has-text-align-none">Normal people aren’t running around like chickens with their heads cut off, trying to automate every single part of their lives. Indeed, there are places in our lives where efficiency isn’t desirable. Vacation planning is sometimes suggested as a place where AI can make our lives easier. For me, at least, planning the vacation is a pleasure in and of itself; it allows me to browse information about a place, consider what might be fun, and imagine myself doing it. If I have friends who have been to that place before, it gives me an excuse to talk to them, getting their recommendations. The entire process sharpens the anticipation I feel as the date for the vacation approaches. But if I wish to outsource that, I can do so already — that’s what cruise ships and theme parks are for.</p>

<p class="has-text-align-none">LLMs are, at best, an enterprise technology that may make certain kinds of data organization easier, or coding faster. This has almost nothing to do with most people’s lives. Dinking around with code is a hobby many tech people enjoy and one the rest of us simply don’t care about. Making it easier to write code doesn’t change that <em>I don’t want to write code.</em> I have other hobbies!</p>

<p class="has-text-align-none">The actual use for LLMs in most normal people’s lives is <a href="https://nymag.com/intelligencer/article/openai-chatgpt-ai-cheating-education-college-students-school.html">cheating on schoolwork</a>. For adults, it’s <a href="https://www.theverge.com/2024/12/5/24313222/chatgpt-pardon-biden-bush-esquire">looking up information</a> —&nbsp;LLMs are in the process of supplanting Google Search. Google had been <a href="https://pluralistic.net/2024/02/21/im-feeling-unlucky/">degrading its search product for some time</a>, and the results just kept getting worse. This opened the door for an alternative, and <a href="https://www.theverge.com/2024/8/30/24230975/openai-publisher-deals-web-search">the LLMs stepped through</a>. How long that will last, I don’t know —&nbsp;the LLMs themselves will require money at some point, and their frequently inaccurate (and <a href="https://www.theverge.com/2024/6/27/24187405/perplexity-ai-twitter-lie-plagiarism">sometimes plagiarized</a>) results are killing the websites they rely on to <a href="https://www.theverge.com/24167865/google-zero-search-crash-housefresh-ai-overviews-traffic-data-audience">generate information</a>. Sure, it’s less efficient to click through to a high-quality source, but how else do you plan to continue to have people generate high-quality information? No one has solved this problem.</p>

<figure class="wp-block-pullquote"><blockquote><p>Musicians aren’t bogarting creativity — they are people who enjoy making music</p></blockquote></figure>

<p class="has-text-align-none">Sometimes inefficiency is load-bearing. Take, for instance, the stock market. It is only open during certain hours, and only on certain days of the week. That means that during a panic, there is an artificial boundary that gives people time to calm down. This is effective; it’s one of the reasons that individual stocks sometimes undergo a trading halt during periods of hysteria. Now consider crypto, which is open for business 24/7/365: There is no way to pause a panic. One of the reasons the crashes in crypto are so huge and so fast is that there is no circuit breaker to trip and no break in trading to allow traders to regroup. In fact, crypto panics are arguably exacerbated by the fact that many people <em>literally cannot sleep</em> because the market does not close.</p>

<p class="has-text-align-none">There are other ways in which consumer AI is weird. Take the AI music apps, which are predicated on the idea that there are people in the world who <em>want</em> to make music but simply haven’t taken the time to learn how to play an instrument. There are likely very few of those people! Musicians aren’t bogarting creativity —&nbsp;they are people who enjoy making music. The rest of us just enjoy listening, which is an end in itself.</p>

<p class="has-text-align-none">The place where AI music is most useful is for people who want to figure out how to get themselves onto Spotify playlists, accrue streams, and make money —&nbsp;<a href="https://www.theverge.com/2024/11/14/24294995/spotify-ai-fake-albums-scam-distributors-metadata">that is to say, scammers</a>. Similarly, the self-publishing market is rampant with AI slop, not because people are desperately trying to express themselves, but because it is easy to trick other people into buying slop on Amazon. And it’s not just the casual readers who get swindled, as demonstrated by the scandal around <em>Shy Girl</em>, the <a href="https://www.nytimes.com/2026/03/19/books/shy-girl-book-ai.html">now-withdrawn novel that fooled Hachette</a>. The end result for most people is that these <a href="https://www.youtube.com/watch?v=GbeKTa5xhZo">AI tools make it harder for them to access art made by other people</a>. And the end result for artists, of course, is that it’s harder to make a living.</p>

<figure class="wp-block-pullquote"><blockquote><p>Did Mark Zuckerberg’s Meta utopia ever develop legs?</p></blockquote></figure>

<p class="has-text-align-none">How is it that all these wunderkinds trying to build the next product to take over the world haven’t thought about this? I think the answer is simple. They do not have much in common with normal people, and haven’t thought much about what normal people’s lives are like, or what normal people value. What they have been doing instead is getting high on their own supply —&nbsp;listening to VC podcasts, freaking themselves out about whether they’ll be able to keep up with AI agents, and otherwise becoming increasingly detached from reality.</p>

<p class="has-text-align-none">I suspect this is how we wound up with NFTs, the metaverse, and the clunky VR/AR headsets. These are things that appeal to a very narrow set of people who are overrepresented in the VC and wannabe-tech-entrepreneur spaces. The Silicon Valley hype cycle worked overtime for those things, and I think we all know how this turned out. When was the last time <em>you </em>heard about a Bored Ape, or a CryptoKitty, or any of the other novelties that briefly swept the nation? Did those novelties translate into a real, durable income stream for artists, musicians, and other creators, as we were promised? When was the last time you saw someone wearing Apple’s headset? Did Mark Zuckerberg’s <a href="https://www.theverge.com/tech/897396/meta-vr-horizon-worlds-metaverse-andrew-bosworth">Meta utopia ever develop legs</a>?</p>

<p class="has-text-align-none">Look, we all had <a href="https://www.businessinsider.com/marc-andreessen-zero-introspection-debate-2026-3">a bit of fun at Marc Andreessen’s expense</a> about his lack of introspection —&nbsp;but this is precisely the reason Silicon Valley keeps trying to force futures on consumers that they emphatically don’t want. A VC who is incapable of self-reflection will never notice that his bets on the future of consumerism are failing in exactly the same way every time. That VC hasn’t noticed, and indeed can’t notice, that his experience isn’t representative of what the ordinary person wants or needs.</p>

<figure class="wp-block-pullquote"><blockquote><p>“They come out much more at peace, but then they tend to quit their companies.”</p></blockquote></figure>

<p class="has-text-align-none">Actually, while I’m picking on Andreessen, I want to point to <a href="https://youtu.be/qBVe3M2g_SA?si=xGWA3K0Kk94vVU0U&amp;t=205">a bit of that interview that didn’t go viral</a>. It occurs right after the fatal introspection quote, but I think it gets to the real rot at the heart of Silicon Valley’s current culture. In it, Andreessen mentions psychedelics, saying he was discussing them with podcaster Andrew Huberman. “I was describing this phenomenon we see in Silicon Valley, where there are these guys who get under pressure, and they feel anxious or whatever, and someone tells them about psychedelics, and they try it,” Andreessen says. “And they kind of come out the other end as a changed person. They come out much more at peace, but then they tend to quit their companies.”</p>

<p class="has-text-align-none">In Andreessen’s telling, Huberman suggests that these people may be happier, and better off. And Andreessen says, “Yeah, but their company is failing.”</p>

<p class="has-text-align-none">The hubristic entrepreneurs (and the VCs who need them) are a relatively small slice of the population. The majority of us would much rather be happy than try to found a company that takes over the world —&nbsp;sacrificing the majority of our waking hours, our hobbies, and likely many of our relationships in the process. It may be the case that the real way to shape the future isn’t to dictate it to consumers. It is simpler just to give people things they actually want.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elizabeth Lopatto</name>
			</author>
			
			<title type="html"><![CDATA[Elon Musk is about to be a very busy boy!]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/906027/elon-musk-lawsuit-ipo-spacex-tesla" />
			<id>https://www.theverge.com/?p=906027</id>
			<updated>2026-04-02T14:53:38-04:00</updated>
			<published>2026-04-02T14:15:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Business" /><category scheme="https://www.theverge.com" term="Electric Cars" /><category scheme="https://www.theverge.com" term="Elon Musk" /><category scheme="https://www.theverge.com" term="Law" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="Tesla" /><category scheme="https://www.theverge.com" term="Transportation" /><category scheme="https://www.theverge.com" term="xAI" />
							<summary type="html"><![CDATA[To be honest, I thought Elon Musk would confidentially file for SpaceX’s IPO on the 20th of this month, rather than the 1st. But maybe that just means he’s moved on to other numbers, and we should all mark our calendars for June 7th as an IPO date just in case. Based on the April [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Elon Musk smiling with big green arrow and Space X rockets." data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/03/STKB355_SPACE_X_IPO_A.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">To be honest, I thought Elon Musk would confidentially file for SpaceX’s IPO on the 20th of this month, rather than the 1st. But maybe that just means he’s moved on to other numbers, and we should all <a href="https://en.wikipedia.org/wiki/6-7_meme">mark our calendars for June 7th</a> as an IPO date just in case.</p>

<p class="has-text-align-none">Based on the April 1st filing, and the general length of an SEC review before the S-1 document becomes public, the earliest I am expecting a SpaceX IPO is June. (At least, assuming there is still anyone left at the SEC who wants to do their job instead of just glancing at the first page of the filing, saying, “Seems fine!” and then going out for a smoke break.) Of course, <a href="https://www.reuters.com/business/finance/filing-first-trade-inside-us-ipo-process-2026-04-01/">this process could take longer</a> — for instance, WeWork filed for an IPO in <a href="https://www.cnbc.com/2019/04/29/the-we-company-formerly-known-as-wework-files-confidentially-for-ipo.html">April 2019</a>, and its S-1 was <a href="https://www.theverge.com/2019/8/15/20806366/we-company-wework-ipo-adam-neumann">released in August for us all to laugh at</a>.</p>

<figure class="wp-block-pullquote"><blockquote><p>We all deserve a break from this guy, but we aren’t going to get one</p></blockquote></figure>

<p class="has-text-align-none">Elon Musk is going to be busy! Frankly, after the DOGE bullshit last year, we all deserve a break from this guy, but we aren’t going to get one. There are assorted Tesla announcements to expect, a trial where Musk will grandstand about Sam Altman and OpenAI, and then, of course, the S-1. Will those events occur in that order? I sure fucking hope so or else I may collapse from exhaustion.</p>

<p class="has-text-align-none">Tesla, which has <a href="https://www.theverge.com/transportation/870332/tesla-throws-in-the-towel-on-car-sales">decided it doesn’t like being a car company anymore</a>, has set a deadline of April to start production on its Cybercabs. The <a href="https://www.statesman.com/business/technology/article/tesla-cybercab-sxsw-22075456.php">design that the <em>Austin American-Statesman</em> saw at South by Southwest</a> didn’t have a steering wheel, pedals, or a rear window. <a href="https://www.theverge.com/transportation/900559/robotaxi-two-seater-tesla-lucid-drag-cost">It is a two-seater</a>, because why would you ever go anywhere with your friends? Elon Musk doesn’t have any and neither should you.</p>

<p class="has-text-align-none">As you might expect, there are some problems with the Cybercab program. First of all, people keep leaving Tesla. Mark Lupkey, who was the manufacturing guy for the Cybercab at Giga Texas, has departed —&nbsp;<a href="https://electrek.co/2026/03/30/tesla-cybercab-assembly-leader-mark-lupkey-leaves/">the third senior Cybercab leader to ankle the joint</a>, after Victor Nechita, who was the program manager, and Thomas Dmytryk, who built the ridehailing backbone. The <a href="https://finance.yahoo.com/markets/stocks/articles/tesla-loses-two-more-leaders-172334078.html?fr=sycsrp_catchall">talent drain from Tesla over the last two years</a> has been substantial.</p>

<p class="has-text-align-none">That’s just the personnel. The robotaxi service itself is not doing so hot either! While I have been promised by various dorks that autonomous driving will be safer than human driving, this robotaxi program doesn’t seem to be. <em>Electrek</em> has been <a href="https://electrek.co/2026/02/17/tesla-robotaxi-adds-5-more-crashes-austin-month-4x-worse-than-humans/">keeping an eye on the crashes</a>, and it turns out that as of February, humans are four times safer than Tesla’s autonomous vehicles. Compared to Waymo, which looks like it may actually be safer than people, this is atrocious.</p>

<figure class="wp-block-pullquote"><blockquote><p>The next-gen Roadster may be revealed later this month, or never</p></blockquote></figure>

<p class="has-text-align-none">We also may or may not hear from Optimus, Tesla’s crack at the <a href="https://www.washingtonpost.com/technology/2026/03/27/musk-optimus-robot-physical-ai/">now-buzzy “physical AI,”</a> formerly known as robots. We were <em>supposed</em> to hear about Optimus in the first quarter of this year, but can you believe it, <a href="https://x.com/elonmusk/status/2038890797760934115">that got delayed</a>. I suppose it has improved over “<a href="https://www.theverge.com/2021/8/20/22633958/tesla-bot-elon-musk-ai-day">guy in a robot costume</a>,” but I’m not convinced it’s ready for prime time, not least because I am old enough to remember when Musk wanted an “<a href="https://www.businessinsider.com/elon-musk-revives-his-alien-dreadnought-factory-dream-for-batteries-2020-9">alien dreadnought</a>” manufacturing Teslas, which wound up as <a href="https://www.wired.com/story/elon-musk-tesla-life-inside-gigafactory/">an expensive failure</a> that fucked up the schedule for the Model 3. (Many Model 3s were built instead in a tent in the Tesla Fremont factory parking lot.) I get that Silicon Valley hates human workers, but the fact is that we are <a href="https://www.theverge.com/2023/3/23/23651976/ai-money-investment-vc-hype">largely cheaper</a> than our competition.&nbsp;</p>

<p class="has-text-align-none">Finally, there’s the next-gen Roadster, which was supposed to be <a href="https://www.theverge.com/transportation/816120/tesla-roadster-demo-coming-next-april-musk-says">demoed on April 1st</a> —&nbsp;but it also got mysteriously delayed. It may be revealed later this month, or never. It was <a href="https://www.theverge.com/2017/11/17/16669024/tesla-roadster-2017-fastest-car-world">originally supposed to start production in 2020</a>. Sure, production delays are kind of the norm for Tesla, but six years is egregious.</p>

<p class="has-text-align-none">How <em>is</em> Tesla, the publicly traded company Musk already has, doing? <a href="https://www.theverge.com/transportation/869603/tesla-q4-2025-earnings-revenue-profit-musk-robotaxis">In decline</a>, it seems. Its lineup is aging, the <a href="https://www.cbsnews.com/news/tesla-cybertruck-sales-elon-musk/">Cybertruck is a failure</a>, and Musk’s far-right political activity <a href="https://www.theverge.com/news/635249/tesla-takedown-protest-stock-elon-musk-future">has alienated a wide swath of car buyers</a>. Its misleadingly named Full Self-Driving <a href="https://www.theverge.com/transportation/897303/tesla-full-self-driving-nhtsa-probe-march-2026">may be in danger of a recall</a>. At least things are looking up for Tesla’s energy business, <a href="https://techcrunch.com/2026/03/12/tesla-becomes-a-utility-in-the-uk-setting-up-showdown-with-octopus-energy/">which will be a utility in the UK</a> and seems to be planning <a href="https://www.reuters.com/sustainability/climate-energy/tesla-plans-india-push-into-energy-storage-it-expands-beyond-cars-job-ad-shows-2026-03-20/">an expansion into India</a>.</p>

<p class="has-text-align-none">Anyway, I suppose we now have to discuss <em>Musk v. Altman</em>, which is scheduled for a jury trial beginning on April 27th. Musk, you may recall, cofounded research nonprofit OpenAI alongside Sam Altman and others in the halcyon days of the 2010s, when he was <a href="https://www.youtube.com/watch?v=2KK_kzrJPS8">running around telling us we were living in a simulation</a> and doing <a href="https://www.youtube.com/watch?v=KGR5HP3KSBk">cameos on <em>The Big Bang Theory</em></a>. Altman and Musk were even friends at the time, if <a href="https://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/">contemporaneous <em>Wired</em> reporting is to be believed</a>. Both of them had been yelling about the danger of Skynet before they decided they would save us all from Skynet by simply building it themselves.</p>

<figure class="wp-block-pullquote"><blockquote><p>Musk is notably vicious, and OpenAI’s lawyers are notably nasty</p></blockquote></figure>

<p class="has-text-align-none">The problem with a company founded by two power-hungry monsters is that there can be only one king. By 2017, even <a href="https://openai.com/index/elon-musk-wanted-an-openai-for-profit/#summer-2017-we-and-elon-agreed-that-a-for-profit-was-the-next-step-for-openai-to-advance-the-mission">Musk had agreed OpenAI needed a for-profit arm</a> to raise the money the organization needed. Musk left in a huff because he wasn’t going to be the CEO, and he cut off the cash that he’d promised the nonprofit-slash-startup. This left OpenAI in the lurch —&nbsp;ultimately leading to its deal with Microsoft.</p>

<p class="has-text-align-none"><a href="https://www.theverge.com/2024/3/1/24087937/elon-musk-suing-openai-nightmare-1l-contracts-exam">Musk says OpenAI has betrayed its vision or whatever</a> by going for-profit. OpenAI thinks <a href="https://www.theverge.com/2024/3/5/24091773/openai-response-elon-musk-breach-of-contract-lawsuit">Musk is trying to eliminate competition</a> for xAI, the rival company he founded after flouncing off. xAI was later bought by SpaceX, which is now set to go public, so one amusing thing about this trial’s timing is that anything it reveals about xAI may affect the IPO following on its heels.</p>

<p class="has-text-align-none">It’s <a href="https://www.theverge.com/column/863319/highlights-musk-v-altman-openai">all very messy</a> and likely to get messier, since Musk is <a href="https://www.theverge.com/2023/10/1/23895069/walter-isaacson-biography-musk-review">notably vicious</a>, and <a href="https://www.theatlantic.com/technology/2025/11/openai-lawsuit-subpoenas/684861/">OpenAI’s lawyers are notably nasty</a>. Recently, there was <a href="https://www.courtlistener.com/docket/69013420/454/1/musk-v-altman/">a fun filing on the docket</a> that suggested what we were liable to get in the case: Musk’s lawyers wanted his recreational drug use and <a href="https://www.techdirt.com/2026/03/31/weeks-after-denouncing-government-censorship-on-rogan-zuckerberg-texted-elon-musk-offering-to-take-down-content-for-doge/">ties to the Trump administration excluded</a> from the trial. What’s even more fun is the piece of deposition attached, in which Musk is <a href="https://bsky.app/profile/lopatto.bsky.social/post/3mi2rqceaoc2u">asked if he knows what “rhino ket” is</a> and whether he attended Burning Man 2017. Among <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.433688/gov.uscourts.cand.433688.455.1.pdf">other contentious issues</a>: whether the romantic relationship between Musk and his baby mama Shivon Zilis can be mentioned at trial — because, OpenAI alleges, she was also an OpenAI board member during the relationship. Zilis, originally a plaintiff in the case, is the mother of four of <a href="https://www.wsj.com/politics/elon-musk-children-mothers-ashley-st-clair-grimes-dc7ba05c">Musk’s known children</a>. The judge has not yet ruled on this particular mess.</p>

<p class="has-text-align-none">In the meantime, SpaceX will <a href="https://www.spacex.com/launches">continue launching rockets</a>. Bringing down the cost of getting to space apparently hasn’t made people want to go more often, as SpaceX’s biggest customer appears to be SpaceX, and the <a href="https://www.theverge.com/tech/887899/spacex-ipo-risks-ai">Starlink internet service is what’s actually keeping the company afloat</a>. Perhaps we will see some movement on the various investigations and lawsuits around <a href="https://www.theverge.com/ai-artificial-intelligence/867874/stripe-visa-mastercard-amex-csam-grok">xAI’s bespoke child sexual abuse material generator</a>, Grok. Maybe Musk will say goofy stuff about putting AI data centers in space, I don’t know.</p>

<p class="has-text-align-none">What I do know is that we are about to enter another heavy season of Elon Elonning, but this time there’s less goodwill for him than ever. I don’t think the SpaceX IPO will totally flop, but weirder things have happened! I for one am <a href="https://www.theverge.com/tech/887899/spacex-ipo-risks-ai">looking forward to the S-1</a>. In the meantime, I will simply remind you that the gray market drug of choice for most productivity nutjobs isn’t ketamine. Do with that information what you will.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elizabeth Lopatto</name>
			</author>
			
			<title type="html"><![CDATA[Marc Andreessen is a philosophical zombie]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tldr/897566/marc-andreessen-is-a-philosophical-zombie" />
			<id>https://www.theverge.com/?p=897566</id>
			<updated>2026-03-19T16:07:54-04:00</updated>
			<published>2026-03-19T16:07:54-04:00</published>
			<category scheme="https://www.theverge.com" term="Business" /><category scheme="https://www.theverge.com" term="Science" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="TL;DR" />
							<summary type="html"><![CDATA[I admit, this is an innovation I did not see coming: Silicon Valley has invented the philosophical zombie from the classic thought experiment “lol how crazy would it be if there were a philosophical zombie.” Until recently, the philosophical zombie was a concept closely associated with Australian philosopher David Chalmers, who defines it as “someone [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A photo of Marc Andreessen’s head opened up, with nothing inside." data-caption="What inner life? | Image: Cath Virginia / The Verge, Getty Images" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/03/268412_Marc_Andreessen_is_a_philosophical_zombie_CVirginia2.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	What inner life? | Image: Cath Virginia / The Verge, Getty Images	</figcaption>
</figure>
<p class="has-drop-cap has-text-align-none">I admit, this is an innovation I did not see coming: Silicon Valley has invented the <a href="https://plato.stanford.edu/entries/zombies/">philosophical zombie</a> from the classic thought experiment “lol how crazy would it be if there were a philosophical zombie.”</p>

<p class="has-text-align-none">Until recently, the philosophical zombie was a concept <a href="https://personal.lse.ac.uk/ROBERT49/teaching/ph103/pdf/Chalmers_The_Conscious_Mind.pdf">closely associated with Australian philosopher David Chalmers</a>, who defines it as “someone or something physically identical to me (or to any other conscious being), but lacking conscious experiences altogether.” Chalmers’ zombie twin is identical to him functionally and psychologically —&nbsp;except that he feels nothing. This is different from a Hollywood zombie, which has “little capacity for introspection and lack[s] a refined ability to voluntarily control behavior.”</p>

<p class="has-text-align-none">So okay, Marc Andreessen is even shallower than our standard philosophical zombie —&nbsp;but still, I think there is a strong case he should be hunted and captured by the Stanford philosophy department so they can try their thought experiments out on him in real life. (Humanely, of course.) Either way, for all of us who are interested in consciousness, Andreessen is certainly a specimen.</p>
<div class="youtube-embed"><iframe title="Marc Andreessen: The World Is More Malleable Than You Think" src="https://www.youtube.com/embed/qBVe3M2g_SA?rel=0&#038;start=56" allowfullscreen allow="accelerometer *; clipboard-write *; encrypted-media *; gyroscope *; picture-in-picture *; web-share *;"></iframe></div>
<p class="has-text-align-none">For those of you who are not as internet-poisoned as I am, let me recap: A <a href="https://youtu.be/qBVe3M2g_SA?si=a53cDLqH1sd145vr&amp;t=56">video of Andreessen</a> on David Senra’s podcast — podcasts being <a href="https://www.theverge.com/2024/7/24/24204706/marc-andreessen-ben-horowitz-a16z-trump-donations">Andreessen’s favored form of self-disclosure</a> — has been making the rounds. In the video, Andreessen cheerfully says he has “zero” levels of introspection —&nbsp;“as little as possible.” This is a positive for entrepreneurs, we are told. “And you know, if you go back 400 years ago, it never would have occurred to anybody to be introspective,” Andreessen says, thus setting himself up for thunderous dunking.&nbsp;</p>

<p class="has-text-align-none">I could point to the many ancient traditions of introspection (set out variously in the Bhagavad Gita, Plato, or <a href="https://www.biblegateway.com/passage/?search=Psalm+119&amp;version=NIV">Psalm 119</a>) or note that his comments about a “guilt-based whammy” that showed up from Vienna in the 1910s and 1920s due to the work of Sigmund Freud suggest Andreessen has never met a Catholic (and is unfamiliar with major Catholic thought, to boot, as introspection is <a href="https://en.wikipedia.org/wiki/Confessions_(Augustine)">famously important to St. Augustine</a>). I could post a portrait of René “I think therefore I am” Descartes. I could note that this is such a significant misunderstanding of Freud as to suggest Andreessen is totally unfamiliar with him. I am not going to do any of those things, as I believe they have already been done by internet commenters.</p>

<p class="has-text-align-none">I am instead going to try to set out what happened, which is that I believe Andreessen read a book.</p>

<figure class="wp-block-pullquote"><blockquote><p> Chater proposes that the idea of an “inner self” is an illusion</p></blockquote></figure>

<p class="has-text-align-none">The reference Andreessen gives is <em>The Mind Is Flat</em> by Nick Chater, a professor of behavioral science at the Warwick Business School at the University of Warwick in the UK. Now personally I am somewhat skeptical of how serious a book is when its title <a href="https://www.nypress.com/news/flathead-OVNP1020050426304269992">references Thomas Friedman</a>, especially when it’s written by some business school guy. Still, the basic thrust of the book —&nbsp;as far as I can tell — is a polemic against the notion of an unconscious mind.</p>

<p class="has-text-align-none">I have not read the book, so I am relying on <a href="https://www.youtube.com/watch?v=vspX6NaLxdc">a lecture Chater gave at Google</a> and <a href="https://www.susanblackmore.uk/review-of-the-mind-is-flat-the-remarkable-shallowness-of-the-improving-brain/">reviews</a> <a href="https://nautil.us/many-of-our-beliefs-are-unconscious-a-response-to-nick-chater-237157">by</a> <a href="https://tomstafford.substack.com/p/is-the-mind-flat">his peers</a> in order to summarize it; if this seems unfair to Andreessen, I am happy at his request to actually read the book and do this again in much more irritating detail. In Chater’s Google lecture, he begins by providing a series of optical illusions, demonstrating that our vision cannot work the way we believe it works. This is pretty convincing stuff! It turns out perception is very weird. (That noise you just heard was everyone who’s ever taken a philosophy 101 class yelling “<a href="https://plato.stanford.edu/entries/qualia/">qualia</a>” in unison. That’s the philosophy word for the “red-ness” of the color red, or the sensation of the smell of napalm in the morning.)&nbsp;</p>

<p class="has-text-align-none">From here, Chater proposes that the idea of an “inner self” is an illusion similar to the illusion we experience when we think we are perceiving more than one color at once. According to Chater, our minds can only do one thing at a time — that is, that we cannot have two thoughts at once, and the idea that we can is an illusion. From there, <a href="https://www.researchgate.net/publication/329946789_The_mind_is_flat_The_remarkable_shallowness_of_the_improvising_brain">he suggests that there are no mental depths</a>, there’s no such thing as <a href="https://www.health.harvard.edu/blog/unconscious-or-subconscious-20100801255">the unconscious</a>, and there’s no organizing principle to anyone’s mind. To the degree that an “inner self” exists, <a href="https://youtu.be/vspX6NaLxdc?si=XY9rJkUlEL_YG8m8&amp;t=1699">it is a kind of “tradition,”</a> the cumulative effect of your accumulated experiences, just as a canyon exists because of the cumulative effects of water.</p>

<p class="has-text-align-none">There are some obvious problems with this polemic, which are highlighted in the reviews. The key thing I think Andreessen is badly summarizing is <a href="https://youtu.be/vspX6NaLxdc?si=WnHJE861KsOlPaF4&amp;t=44">an assertion Chater makes in his Google speech</a>: The idea that the conscious mind is like the part of the iceberg above water and that there is a mysterious part of the self below the water is attributable to Sigmund Freud, and this conception of the “larger self” is wrong. There’s only the part above the water.</p>

<figure class="wp-block-pullquote"><blockquote><p>“TLDR There is no inner self, you&#8217;re chasing an imaginary concept, the end.”</p></blockquote></figure>

<p class="has-text-align-none">Here’s <a href="https://x.com/pmarca/status/2033682638771482636">Andreessen’s read of Chater</a>: “TLDR There is no inner self, you&#8217;re chasing an imaginary concept, the end.”</p>

<p class="has-text-align-none">In one of several follow-up posts to the interview on X, Andreessen went on to <a href="https://x.com/pmarca/status/2033683166905700676">something even sillier</a>: “You&#8217;re a 15 second sliding context window with the working memory of a goldfish, your long term memory is mainly fake, and it&#8217;s a minor miracle you can get out of the door in the morning.” It’s impressive in the sense that almost every single part of Andreessen’s sentence is wrong.</p>

<p class="has-text-align-none">I will begin, pedantically, with the goldfish, which actually have <a href="https://www.livescience.com/goldfish-memory.html">impressive memories</a>, like, years’ worth. But let’s get to the point: “your long term memory is mainly fake.” I suppose this depends on what Andreessen means, doesn’t it? On the one hand, any cradle Catholic can do an “Our Father” or a “Hail Mary” basically at will for the rest of their lives, even if they haven’t set foot in a church in 20 years. There are other indications that memory is in some sense “real”: <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC4511859/">Small studies</a> suggest that <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC2914108/">musical memory</a> is <a href="https://pubmed.ncbi.nlm.nih.gov/15607545/">preserved</a> even in <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC6525150/">Alzheimer’s patients</a>. Finally, most of us have attended a play, where actors recite speeches at will. How this is possible if long-term memory is fake, I don’t know. On the other hand, long-term memory can be <a href="https://www.psychologytoday.com/us/blog/hidden-motives/201203/unreliable-memory"><em>unreliable</em></a> — it is possible even to <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10567586/">implant false memories</a>. But <em>“mainly fake”</em>? I hope Andreessen stretched before that reach.</p>

<p class="has-text-align-none">Obviously, if you’re a “15 second sliding context window,” it would be impossible to have consistent thoughts, preferences, and beliefs over time — <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC5144810/">except</a>, <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC5742083/">of course</a>, <a href="https://psycnet.apa.org/record/2025-11398-001">people do</a>. Personality may <a href="https://pubmed.ncbi.nlm.nih.gov/21859226/">change</a> over <a href="https://pubmed.ncbi.nlm.nih.gov/15709943/">a lifetime</a>, but there is <a href="https://www.tandfonline.com/doi/abs/10.1207/S15327965PLI1202_01">often stability</a> for <a href="https://pubmed.ncbi.nlm.nih.gov/15709943/">years or decades</a>.</p>

<figure class="wp-block-pullquote"><blockquote><p>At this point, a reader may be tempted to point out to me that Andreessen is trolling</p></blockquote></figure>

<p class="has-text-align-none">As for “it’s a minor miracle you can get out of the door in the morning,” well, I have to assume this reflects Andreessen’s own mornings. (Also, any parent of a toddler can tell you that gross motor skills are independent of sophisticated thought, memory, or introspection.) This is the only part of the sentence I can’t really argue with — I personally don’t have much trouble getting out the door, but I can’t speak for him.</p>

<p class="has-text-align-none">At this point, a reader <a href="https://x.com/pmarca/status/2033937776182448213">may be tempted</a> to <a href="https://x.com/pmarca/status/2034076647151071496">point out to me</a> that <a href="https://x.com/pmarca/status/2034047908044280285">Andreessen is trolling</a>, and I am <a href="https://x.com/pmarca/status/2033996232574800201">taking him</a> far <a href="https://x.com/pmarca/status/2033989507213889615">too seriously</a>. But unfortunately for everyone, Andreessen has waded into an interest of mine, making it intersect with my actual job. He has been posting about his theory of mind for days now — a repeated behavior, which suggests it must be important to him.&nbsp;</p>

<p class="has-text-align-none">So since we’re doing quasi-scientific polemics, I’ll throw my own in. I’ve been wondering for a while why many of our Silicon Valley overlords behave like they’ve <a href="https://www.mayoclinic.org/diseases-conditions/chronic-traumatic-encephalopathy/symptoms-causes/syc-20370921">repeatedly sustained traumatic brain injuries</a>, and I think I’ve come to a conclusion. One thing we know from <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC3191246/">a variety</a> of <a href="https://pubmed.ncbi.nlm.nih.gov/28322422/">studies</a> is that cognition <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11881919/">doesn’t decline</a> for <a href="https://pubmed.ncbi.nlm.nih.gov/22149165/">some people</a> as they age — <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC7812665/">those people</a> tend to have <a href="https://pubmed.ncbi.nlm.nih.gov/23079557/">stayed in school longer</a> and tend to have greater <a href="https://pubmed.ncbi.nlm.nih.gov/33554903/">intellectual engagement</a>. We also know that if you don’t use <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/lang.12665">one of your languages</a> on a regular basis, you <a href="https://pubmed.ncbi.nlm.nih.gov/28424634/">start to forget it</a> — <a href="https://pubmed.ncbi.nlm.nih.gov/30177893/">even if it was your first language</a>. Writing by <a href="https://www.scientificamerican.com/article/why-writing-by-hand-is-better-for-memory-and-learning/">hand is better for memory</a>; predictive text leads to <a href="https://dl.acm.org/doi/abs/10.1145/3377325.3377523">boring writing</a>.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>So since we’re doing quasi-scientific polemics, I’ll throw my own in</p></blockquote></figure>

<p class="has-text-align-none">I am now going to extrapolate from this: If you stop using some skill, mental or physical, you lose it and its benefits. We’ve already <a href="https://arxiv.org/pdf/2506.08872v1">seen signs of this</a> with heavy users of AI, along with <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC12712562/">AI psychosis</a>, which may occur because chatbots are <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC12805049/">too sycophantic</a>. (I have pitched my editors on attempting to give myself AI psychosis, but they seem to think I am making a joke.) So I’m going to suggest that overreliance on AI is bad for you — which implies a corollary: Whatever is happening to normal people as a result of AI overreliance <em>has already happened to the ultra-wealthy.</em></p>

<p class="has-text-align-none">Consider: I spend a lot of time standing in line (at the supermarket, at the post office, at a bakery, at a coffee shop). While I’m standing there, I can either be annoyed at how long it’s taking me to get to the front of the line or do something else: goof around on social media or retreat into my mind palace, where I might daydream, review events from earlier in the day, or think about something interesting, like <a href="https://archive.org/details/beyondfreedomdig0000skin/mode/2up">whether it matters</a> if <a href="https://patriciachurchland.com/wp-content/uploads/2020/05/2006-Do-We-Have-Free-Will.pdf">I don’t have free will</a>. I have a lot of opportunities to hang out in my mind palace, because I do a lot of mundane chores, like grocery shopping, laundry, and picking up after myself.</p>

<p class="has-text-align-none">The ultra-wealthy don’t have to do any of this. They pay people to do it for them — cooking, cleaning, shopping, you name it. There are no periods of the day where they are forced to figure out how to entertain themselves in the face of repetitive chores. If you don’t use it, you lose it — so maybe a lot of them lose introspection. In fact, they can offload any cognitive activity they don’t enjoy! That’s what subordinates are for, isn’t it? And whatever gets consistently offloaded, they lose.</p>

<p class="has-text-align-none">But, worse, they don’t notice they’re losing any skills because they are constantly surrounded by people on their payroll, or people who want something from them. People who are, you know, <em>aggressively financially motivated to flatter them and agree with them. </em>If that yes-man tendency in chatbots is what’s driving AI psychosis or social skill atrophy in normal people, then the same thing has <a href="https://www.apa.org/news/podcasts/speaking-of-psychology/wealth-empathy">already happened</a> to the very rich.</p>

<figure class="wp-block-pullquote"><blockquote><p>We’ve got ourselves a philosophical zombie right here, and possibly a methodology for creating them at will</p></blockquote></figure>

<p class="has-text-align-none">So I am inclined to take Andreessen at his word. He’s having a little meltdown, as he sometimes does, first of all because people are making fun of him online — a thing he doesn’t encounter on a regular basis outside social media due to the yes-man phenomenon of wealth, and which is for this reason <em>much </em>more upsetting for him than it would be to a normal person. He has wholesale adopted Chater’s thinking, or at least a version of it, and it probably felt revelatory and correct to him because his own mind is shallow. (He doesn’t strike me as the kind of guy who’s thought about <a href="https://iep.utm.edu/hard-problem-of-conciousness/">the hard problem of consciousness</a> much, you know?) I’m not convinced he knows he was reading a polemic.</p>

<p class="has-text-align-none">But second, I also believe him when he says he doesn’t introspect. Because we know he agrees with Chater, we can just use Chater’s definition of thought here, bracketing whatever problems that definition may have. Because introspection under Chater’s definition is just another kind of perception, Andreessen has just functionally told us that he is missing qualia. We’ve got ourselves a philosophical zombie right here, and possibly a methodology for creating them at will<em>.</em> Someone should get on the horn to Australia — Chalmers is going to be <em>so excited</em>.</p>

<p class="has-text-align-none">I am also going to presume that in his posts, Andreessen is describing himself. But obviously there’s some overlap with a thing that Andreessen invests in, which is to say AI — which also has no long-term memory, no actual perception (and no qualia), and no ability to get out of bed. It is possible that Andreessen likes Chater because Chater suggests there is no depth to humans, which means that AI “intelligence” is very close to human intelligence, which means AI can replace humans at their jobs and Andreessen’s investments will make a boatload more money.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>So I decided to ask ChatGPTeessen another question</p></blockquote></figure>

<p class="has-text-align-none">So, <a href="https://www.vanityfair.com/news/story/dario-amodei-anthropic-ai?srsltid=AfmBOoribWwYBosFI0sj2C8p_9erzHVt3xGRailOhInUuXcpvB2trl9-">inspired by <em>Vanity Fair</em></a>, I thought it might be fun to interview Andreessen via ChatGPT —&nbsp;an a16z investment — since he seems to be implying that humans aren’t much different than AI. (Andreessen’s firm is <a href="https://x.ai/news/series-b">also invested in xAI,</a> but I think asking <a href="https://www.theverge.com/x-ai/707442/grok-antisemitic-hitler-elon-musk-opinion-reprogrammed">Mecha Hitler</a> to impersonate Andreessen is probably too rude, even for me.) I asked ChatGPT, <a href="https://chatgpt.com/s/t_69bb3a4ada9c81918ed775d9c36fa65c">roleplaying as Andreessen</a>, why he didn’t introspect; it told me “I wouldn’t actually say that,” before going on at length about why people who want to be entrepreneurs should use it but also “validate everything against external feedback.” But hey, if improvising nonsense is all Andreessen does, this is as much of an answer as the real guy can give us.</p>

<p class="has-text-align-none">So I decided to ask ChatGPTeessen another question: What did Andreessen mean when he wrote “Introspection = neuroticism x narcissism x thumbsucking”? <a href="https://chatgpt.com/s/t_69bb3a4ada9c81918ed775d9c36fa65c">Here’s the answer</a>:</p>

<div class="wp-block-vox-media-highlight vox-media-highlight">



<p class="has-text-align-none">What I meant is: most of what people call “introspection” is not some high-fidelity path to truth—it’s a distortion loop.</p>



<p class="has-text-align-none">Break it down.</p>



<ul class="wp-block-list">
<li><strong>Neuroticism</strong>: you’re obsessing over your own thoughts, replaying decisions, second-guessing. That’s not clarity—that’s anxiety masquerading as insight.</li>



<li><strong>Narcissism</strong>: you’re assuming your internal state is uniquely important, that your feelings are the central object worth analyzing. In reality, the world doesn’t care. Markets don’t care. Users don’t care.</li>



<li><strong>Thumbsucking</strong>: you’re not acting. You’re sitting there, self-soothing with analysis instead of making decisions and moving forward.</li>
</ul>



<p class="has-text-align-none">So the equation is deliberately provocative, but the point is practical: <strong>introspection is often a trap</strong>.</p>



<p class="has-text-align-none">In startups, this kills you. Founders sit around debating their own motivations, their fears, their identity—meanwhile someone else is shipping product, talking to customers, and taking the market.</p>



<p class="has-text-align-none">The bias should be toward <strong>external reality and action</strong>:</p>



<ul class="wp-block-list">
<li>What do users do, not what do you feel</li>



<li>What works, not what you think should work</li>



<li>What ships, not what you’ve analyzed</li>
</ul>



<p class="has-text-align-none">Use introspection sparingly, like a diagnostic tool. But if it becomes your default mode, you’re just spinning in place.</p>
</div>



<p class="has-text-align-none">This answer, much like the first one, doesn’t sound <em>stylistically</em> like Andreessen: It’s not succinct or tendentious enough. On the other hand, it’s an improvising 15-second context window with no introspection telling me what Marc Andreessen thinks, which is what Andreessen also claims to be. And the gist <em>does</em> sound like his thinking, doesn’t it?&nbsp;If there’s no “inner self,” what’s the difference?</p>

<p class="has-text-align-none">So I guess we’ve all learned something important: Marc Andreessen, a philosophical zombie, can be easily replaced with AI. I bet that’s great news for a16z’s investors, since it means Andreessen no longer needs to be paid —&nbsp;and it frees him up to be the subject of some philosophy experiments. (Well, unless the neuroscientists get to him first, I guess.) I look forward to finding out what it’s like to be a philosophical zombie — or maybe that <a href="https://philosophy.uconn.edu/wp-content/uploads/sites/365/2020/03/Nagel-What-is-it-like-to-be-a-bat.pdf">it’s not like anything</a> to be a philosophical zombie.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elizabeth Lopatto</name>
			</author>
			
			<title type="html"><![CDATA[Oh, you think the government will regulate Kalshi and Polymarket? Wanna bet?]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/business/896517/kalshi-cftc-insider-trading-polymarket" />
			<id>https://www.theverge.com/?p=896517</id>
			<updated>2026-03-18T10:20:24-04:00</updated>
			<published>2026-03-18T10:30:00-04:00</published>
			<category scheme="https://www.theverge.com" term="Business" /><category scheme="https://www.theverge.com" term="Crypto" /><category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Politics" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[The Commodity Futures Trading Commission has a problem: It’s not very good at policing insider trading. And insider trading has become a pressing concern for prediction markets. Even Kalshi’s recently publicized fines for insider trading —&#160;levied against a politician and an employee of YouTube influencer MrBeast — were effectively self-policing. The exchange says it’s opened [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A large ear is featured among graphs and a terminal" data-caption="Who’s keeping an ear out for insiders? | Image: Cath Virginia / The Verge, Getty Images" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/03/268401_Betting_markets_and_regulators_insider_trading_problem_CVirginia.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Who’s keeping an ear out for insiders? | Image: Cath Virginia / The Verge, Getty Images	</figcaption>
</figure>
<p class="has-drop-cap has-text-align-none">The Commodity Futures Trading Commission has a problem: It’s not very good at policing insider trading. And insider trading has become a pressing concern for prediction markets.</p>

<p class="has-text-align-none">Even Kalshi’s recently publicized <a href="https://www.theverge.com/policy/884570/kalshi-insider-trading-mrbeast-fines">fines for insider trading</a> —&nbsp;levied against a politician and an employee of YouTube influencer MrBeast — were effectively self-policing. The exchange says it’s opened 200 investigations, frozen some accounts, and had a dozen of its investigations turn into active cases.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>“The volume of suspicious activity we see is significantly higher than what any platform publicly acknowledges.”</p></blockquote></figure>

<p class="has-text-align-none">In response to Kalshi’s announcement, the CFTC put out a statement that effectively read as a sad trombone noise about <a href="https://www.cftc.gov/PressRoom/PressReleases/9185-26">its own ability to police insiders</a>: “While Kalshi’s internal enforcement program handled these matters, under the Act, the Commission has full authority to police illegal trading practices,” the statement said. Sure, yeah, the CFTC <em>can </em>police this. But it didn’t. And there aren’t a lot of reasons to believe it will.&nbsp;</p>

<p class="has-text-align-none">The CFTC, which had about <a href="https://www.cftc.gov/sites/default/files/CFTC_FY2026_Presidents_Budget.pdf">120 staffers assigned to enforcement</a> as of 2025, oversees not only prediction markets, but agricultural and stock futures, and <a href="https://www.coindesk.com/policy/2026/03/10/u-s-sec-chief-atkins-said-bond-with-sister-agency-cftc-to-include-joint-meetings-exams">part or possibly all of the crypto market</a>. The number of people assigned to enforcement has been shrinking even as the agency’s supervision portfolio has grown — it had 160 full-time employees in 2024, and is <a href="https://www.cftc.gov/sites/default/files/CFTC_FY2026_Presidents_Budget.pdf">requesting a budget for only 114 in 2026</a>.&nbsp;</p>

<p class="has-text-align-none">There are probably other insider trades that are being missed. “The volume of suspicious activity we see is significantly higher than what any platform publicly acknowledges,” says Trevor I. Lasn, who built an information dashboard called <a href="https://0xinsider.com/">0xInsider</a> to track suspicious trades on Kalshi and its primary competitor, Polymarket. “Whether that&#8217;s insiders, sophisticated researchers, or a mix of both, the pattern data is there and it&#8217;s worth examining.”</p>

<p class="has-text-align-none">Prediction markets aren’t the only thing that’s relatively new; the CFTC’s ability to police insider trading is, too. Until the passage of the Dodd-Frank Act of 2010, the only prohibited insider trading was that done by CFTC staffers and those of the exchanges it supervises. In some respects, this makes sense, given the agricultural origins of the futures market — like, who has a fiduciary duty to <em>corn</em>? Using the authority newly granted to it by the Dodd-Frank Act, <a href="https://www.ecfr.gov/current/title-17/chapter-I/part-180/section-180.1">the CFTC came up with a rule</a> modeled on the Securities and Exchange Commission’s insider trading ban.</p>

<figure class="wp-block-pullquote"><blockquote><p>“The intensity of enforcement is way different.”</p></blockquote></figure>

<p class="has-text-align-none">“At least some of the insider trading that we dislike in stocks and bonds is illegal” in commodities markets, including prediction markets, says Andrew Verstein, the faculty co-director of the Lowell Milken Institute for Business Law and Policy at UCLA. “But the intensity of enforcement is way different.”</p>

<p class="has-text-align-none">Because the rule is so new, there have been very few enforcement actions. Two cases, one against <a href="https://www.cftc.gov/PressRoom/PressReleases/7286-15">a gas trader named Arya Motazedi</a> and one against <a href="https://www.cftc.gov/PressRoom/PressReleases/7459-16">oil trader Jon Ruggles</a>, were filed and settled simultaneously. A third case, against <a href="https://www.cftc.gov/PressRoom/PressReleases/8490-22">a natural gas trader named Matthew Clark</a>, alleged Clark told a friend about his employer’s planned trades so that he could trade on this information and share the profits with Clark; Clark pleaded guilty in a related criminal case.</p>

<p class="has-text-align-none">In the absence of a real enforcement mechanism, Kalshi and Polymarket are <a href="https://www.npr.org/2026/03/06/nx-s1-5735893/iran-war-kalshi-polymarket-feud">staking out two very different positions</a>. Kalshi is trying to position itself as a law-abiding, self-policing exchange —&nbsp;even upsetting its user base by <a href="https://www.theverge.com/tech/887210/kalshi-void-bets-khamenei-death">refusing to pay out on the death of Iran’s Ayatollah Ali Khamenei</a>. Kalshi says it’s avoiding violating a <a href="https://www.law.cornell.edu/cfr/text/17/40.11">CFTC rule</a> against listing any contract “that involves, relates to, or references terrorism, assassination, war, gaming, or an activity that is unlawful under any State or Federal law.” That’s the “no assassination markets” rule, which <a href="https://www.bloomberg.com/opinion/newsletters/2026-03-02/war-markets-have-some-bugs?srnd=undefined">the CFTC may or may not be serious about enforcing</a>, who knows.&nbsp;</p>

<p class="has-text-align-none">Polymarket emphatically doesn’t give a shit, and has been merrily listing geopolitical bets, including war contracts, <a href="https://www.bloomberg.com/news/articles/2026-03-07/polymarket-founder-says-war-bets-are-facing-growing-resistance?srnd=phx-technology">to the tune of $425.4 million</a> on the week ending March 1st. Its main market operates offshore, <a href="https://gizmodo.com/best-vpn/polymarket">wink-wink-nudge-nudge</a>, and so I suppose its stance is that the CFTC simply doesn’t regulate it. I say “I suppose,” because Polymarket’s press email —&nbsp;as well as its CEO, Shayne Coplan — didn’t respond to my requests to comment.</p>

<figure class="wp-block-pullquote"><blockquote><p>“If you’re insider trading, you look at prediction markets and think, ‘No one is watching, and if anyone were, no cases have been vigorously contested and no one gets in trouble.’”</p></blockquote></figure>

<p class="has-text-align-none">Stocks and bonds, which operate under the SEC, have a more established track record with insider trading. (Even there, some inside trades likely slip through the cracks —&nbsp;hunting insider traders is something of a Red Queen’s race.) But there are also more people watching. If you attempt to engage in insider trading, your broker is obligated to report you. The exchanges have sophisticated software to detect weird trades, and they are obligated to report you. The Financial Industry Regulatory Authority, an organization that regulates members of the finance industry, monitors trades using software. And, of course, the SEC is watching. “All these layers of people are trying to catch you,” Verstein says. “And they may not, people get away with it a lot. But they are in an environment where they have to hide. And that’s really not true for prediction markets yet.”</p>

<p class="has-text-align-none">The CFTC has never been an aggressive prosecutor of insider trading, Verstein says. And while Kalshi is at least attempting to police itself at the exchange level, there just aren’t as many layers of enforcement. Kalshi spokesperson Elisabeth Diana declined to say how big its enforcement staff was. What’s more, the trade group Kalshi belongs to, the Coalition for Prediction Markets, doesn’t monitor insider trading the way FINRA does.</p>

<p class="has-text-align-none">“If you’re insider trading, you look at prediction markets and think, ‘No one is watching, and if anyone were, no cases have been vigorously contested and no one gets in trouble,’” Verstein says. Besides, if a case does somehow make it to a jury, you can just tell a story that you’re a really diligent researcher. The rules the CFTC operates under look similar to those that the SEC uses, but “in reality, the law is completely open.”</p>

<p class="has-text-align-none">Proponents of betting markets will tell you that <a href="https://www.theverge.com/podcast/889177/prediction-markets-news-gambling-polymarket-kalshi-iran-war-regulation">they surface information</a>, including insider information, and <a href="https://www.forbes.com/sites/aliciapark/2026/01/09/why-prediction-markets-need-insider-trading-according-to-their-godfather/">that’s a good thing</a>. They claim that Kalshi and Polymarket are “information institutions,” <a href="https://www.theverge.com/business/881967/polymarket-kalshi-journalism-sponsorship-ad">just like news organizations</a>, and that <a href="https://mason.gmu.edu/~rhanson/insiderbet.pdf">paying for insider information provides social value</a> by making it public. (And also that <a href="https://www.overcomingbias.com/p/insider-journalism">it’s just <em>so unfair</em></a> that journalists have been pointing out that leaking for reasons other than the public good — such as personal profit — might be a problem.) This is not an uncontested view; several states disagree, including Arizona, which has charged Kalshi with <a href="https://www.reuters.com/world/kalshi-charged-criminally-arizona-operating-illegal-gambling-business-2026-03-17/">operating an illegal gambling business</a>. Those states are contesting the CFTC’s jurisdiction.</p>

<figure class="wp-block-pullquote"><blockquote><p>“If you can trade on when and where bombings happen, you may be tempted to change your bombing target.”</p></blockquote></figure>

<p class="has-text-align-none">Insider trading often comes paired with its “sister sin,” market manipulation, says Verstein. “If you can trade on when and where bombings happen, you may be tempted to change your bombing target,” he says. “There are bad incentives that hide in the shadow of insider trading, so we ban it in areas where we most fear the bad incentives.”</p>

<p class="has-text-align-none">Attempts at market manipulation have already started polluting the internet. A live map tracking the war in Ukraine <a href="https://www.404media.co/unauthorized-edit-to-ukraines-frontline-maps-point-to-polymarkets-war-betting/">showed a fictitious Russian advance</a> for just long enough for a Polymarket bet to resolve. And both Polymarket and Kalshi have given X posters “affiliate badges” — despite <a href="https://www.nytimes.com/athletic/6876922/2025/12/29/kalshi-polymarket-predictive-betting-markets-x-sports-insiders/">a history of those accounts faking reports</a>. As a result of a Polymarket contract, <a href="https://www.timesofisrael.com/gamblers-trying-to-win-a-bet-on-polymarket-are-vowing-to-kill-me-if-i-dont-rewrite-an-iran-missile-story/">a reporter at <em>The Times of Israel</em> has been getting death threats</a> because he refuses to change his reporting about a missile falling in Israel. But there are less aggressive forms of market manipulation, too — <a href="https://rajivsethi.substack.com/p/guessing-games">it’s possible to manipulate the odds</a> with trades, then cash out.</p>

<p class="has-text-align-none">“This field offers even more potential for abuse than insider trading does because you can both insider trade on information that will influence prediction markets, but you can also influence events that influence the prediction markets from an insider&#8217;s position,” said Sen. Adam Schiff of California, who has brought forward a bill to <a href="https://www.schiff.senate.gov/news/press-releases/news-sen-schiff-introduces-legislation-to-explicitly-ban-death-and-war-prediction-contracts/">codify bans on certain prediction market activity</a>, in an interview with <em>The Verge</em>. “So there are going to need to be very similar and perhaps even greater safeguards in place. And there are certain gambling and prediction betting that make no sense.”</p>

<p class="has-text-align-none">Schiff additionally says he doesn’t trust the CFTC chair, Michael Selig. According to Schiff, Selig assured Congress in his confirmation hearing that he’d let courts decide the litigation that came before them, and meet with stakeholders to determine the processes that suit prediction markets. “And he’s done the exact opposite,” Schiff says. “It certainly seems like he has a very strong predisposition that he came into this role with in favor of these prediction betting markets.”</p>

<figure class="wp-block-pullquote"><blockquote><p>“There’s clearly room for new regulation.”</p></blockquote></figure>

<p class="has-text-align-none">Selig’s apparent support isn’t enough for everyone. Polymarket’s <a href="https://www.bloomberg.com/news/articles/2026-03-07/polymarket-founder-says-war-bets-are-facing-growing-resistance">Coplan has complained</a> that “there’s still a lot of resistance to innovation” when it comes to prediction markets. He added, without providing evidence, that he receives emails from “people in the Middle East,” telling him that they decide whether to sleep by a bomb shelter based on Polymarket bets.</p>

<p class="has-text-align-none">It’s not clear whether Coplan’s comment references insider information leaked on Polymarket, or just informed guessing. Israel has <a href="https://www.npr.org/2026/02/12/nx-s1-5712801/polymarket-bets-traders-israel-military">arrested two people for Polymarket bets</a>, saying they used classified information. Insider information from the US government likely has also made it onto Polymarket. For instance, just before the US government snatched Nicolás Maduro, the leader of Venezuela, someone made a bet he’d be out of office, <a href="https://www.wsj.com/world/americas/maduro-polymarket-bet-a2e5d100?gaa_at=eafs&amp;gaa_n=AWEtsqdSE7inMukTf-1CLiOZ7ZTj3LKv-ZOUbVxHNvkOhopaInhcYwqzkNU9UAeFD0U%3D&amp;gaa_ts=695ed05e&amp;gaa_sig=Ht3t8_c7hl6xPoVI8cu_TdettmTa22belJSnfCO8k0pJCWrQQ_7eGn5IQtQabEHj0HfyxyLGq0TTyq7bas-3AA%3D%3D">snaring $400,000 in profit</a>. Someone else made <a href="https://www.npr.org/2026/03/01/nx-s1-5731568/polymarket-trade-iran-supreme-leader-killing">more than half a million dollars</a> with a suspicious bet on the death of Ayatollah Ali Khamenei.</p>

<p class="has-text-align-none">“There’s clearly room for new regulation,” says Cindy Schipani, a professor of business law at the University of Michigan Ross School of Business. In particular, Schipani suggests it should be illegal to trade on information that may convey national security concerns. Even trading on government secrets isn’t “cleanly illegal under any laws I could cite for you, other than maybe being fraud and violating some duties you might have to your office.”</p>

<p class="has-text-align-none">While insider trading does surface insider information, it’s obviously not an unalloyed good, points out Schipani. “Only certain people benefit, and others are harmed,” she says. This matters less in the case of, say, sports betting. When it comes to national security, however, it seems distinctly dangerous.</p>

<figure class="wp-block-pullquote"><blockquote><p>The best possible type of prediction market has traders who are <em>informed</em> without being <em>insiders</em></p></blockquote></figure>

<p class="has-text-align-none">This may be why Kalshi is trying to make sure that it isn’t associated with the anything-goes style of Polymarket —&nbsp;and why I get terse emails from Kalshi’s PR when I talk about prediction markets and insider trading. Kalshi uses software to monitor activity and flag unusual trades, and because the company collects the customer data of everyone on there, it can conduct a thorough investigation, says Diana, the spokesperson. Kalshi and <a href="http://crypto.com/">Crypto.com</a> have even formed an industry group, the Coalition for Prediction Markets. Notably, it doesn’t include Polymarket.</p>

<p class="has-text-align-none">Of course, it is possible <a href="https://www.wsj.com/finance/investing/the-tax-nerd-who-bet-his-life-savings-against-doge-6b59eda2?gaa_at=eafs&amp;gaa_n=AWEtsqe61rAUPOEbk4wCKFhknLRc_DSjFCvipWdKLZ2vnVgQSof3lOobqW16&amp;gaa_ts=69b1e2e4&amp;gaa_sig=TZ9ZWiel4IZ8OFg21isSjX4uOKA-3LHSg0PT5spT_lWsns1A2SYbJgfUzaW2SSDiCELwThQn0two8mYomUKAjw%3D%3D">to make informed guesses</a> about events <em>without</em> insider information. One might suggest the best possible type of prediction market has traders who are <em>informed</em> without being <em>insiders</em>. Markets should reward good research, says Verstein. He points to short sellers, who have surfaced fraud (for instance, <a href="https://hindenburgresearch.com/nikola/">at Nikola</a>) that was then <a href="https://www.theverge.com/2023/12/18/24006571/nikola-trevor-milton-sentence-prison-fraud-electric-truck">prosecuted by the Justice Department</a>. The ability to short Nikola before making the information public was the reason someone researched the company in the first place. In that sense, the financial incentives are positive, Verstein says.</p>

<p class="has-text-align-none">Similarly, if you see tanks rolling down the street in Pasadena, it might make sense to buy a “yes” option on “Will there be an invasion in Pasadena,” Verstein notes. If that weird little contract goes crazy, that means people notice the information. In that sense, prediction markets can function as “truth machines,” Verstein says. “That’s what made them popular with academics and political scientists for decades,” he says.</p>

<figure class="wp-block-pullquote"><blockquote><p>“If I knew insiders were always allowed to make a buck, I’d feel the market is rigged.”</p></blockquote></figure>

<p class="has-text-align-none">But informed trading isn’t the majority of the order flow on these markets — it would be too hard for insiders to make money otherwise. The majority, the people that Polymarket and Kalshi are picking up through advertising, <a href="https://murmurationstwo.substack.com/p/prediction-markets-are-not-good-markets">are people betting their fun money</a>, says Nic Carter, a partner at Castle Island Ventures. Those people won’t bet if they think the markets are unfair, Carter notes. Schipani agrees. “If I knew insiders were always allowed to make a buck, I’d feel the market is rigged.”</p>

<p class="has-text-align-none">Meanwhile, some people are trying to track insider trades so they, too, can profit. Tre Upshaw, a former memecoin trader, built the “Insider Finder” portion of his tracker Polysights. He told <em>Bloomberg</em> in January that <a href="https://www.bloomberg.com/news/articles/2026-01-12/the-man-betting-on-insider-trading-as-prediction-markets-boom">85 percent of the cases he’s identified</a> resulted in a win. Lasn’s platform, 0xInsider, attempts to identify patterns that are unlikely to come from research alone. Lasn says that insider information should be surfaced quickly, to make the markets fairer.&nbsp;</p>

<p class="has-text-align-none">“When we surface suspicious activity publicly, two things happen,” Lasn said in an email. “Regular traders can factor that signal into their decisions. And the mere existence of surveillance changes behavior.”</p>

<figure class="wp-block-pullquote"><blockquote><p>“People are like, is this insider information?”</p></blockquote></figure>

<p class="has-text-align-none">Which brings us back to the CFTC. While this may be the preferred regulatory agency for prediction markets —&nbsp;since it has consistent rules, a lighter regulatory hand, and lower taxes than state gambling authorities —&nbsp;its anemic enforcement of insider trading is something of a problem for Kalshi, and anyone else wanting to make sure prediction markets are fair. And that’s just on the cut-and-dried cases. Prediction markets have weirder problems too.</p>

<p class="has-text-align-none">For instance, <a href="https://www.wsj.com/business/media/prediction-markets-campus-e57cd19f?gaa_at=eafs&amp;gaa_n=AWEtsqcSxPkRQtCxHVT8GkEJ57J823s6WPGQMXV78U2qIo0wRn4EFsSOXJM-&amp;gaa_ts=69b0e821&amp;gaa_sig=5ePTZV9FOsY6QyT7loGConL_GuhsDqDhoHN__39Z7FgxJmh5qIJS0PstN0iQNhiqdPzECYIt0gUSI9P495nGwA%3D%3D">Jeff Bezos’ stepson may have told his University of Miami frat brothers</a> that Bezos would not attend the Super Bowl, a thing that Kalshi makes it possible to wager on. “Bets among the fraternity brothers and their friends ran as high as $10,000 per person, with returns hitting more than 30 percent,” <em>The Wall Street Journal </em>wrote.</p>

<p class="has-text-align-none">“People are like, is this insider information?” one frat brother told the <em>Journal</em>.</p>

<p class="has-text-align-none">Honestly, great question. Traditionally, insider trading involves a breach of confidence and fiduciary duty, says Schipani. What fiduciary duty would Bezos’ stepson be violating by telling his frat brothers Bezos’ location? “It’s like the wild west right now,” Schipani says.</p>

<p class="has-text-align-none"><em>With reporting by Lauren Feiner.</em></p>
						]]>
									</content>
			
					</entry>
	</feed>
