<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Joshua Dzieza | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2026-04-24T16:42:02+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/author/joshua-dzieza" />
	<id>https://www.theverge.com/authors/joshua-dzieza/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/authors/joshua-dzieza/rss" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Joshua Dzieza</name>
			</author>
			
			<title type="html"><![CDATA[How Project Maven taught the military to love AI]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/917996/project-maven-military-ai-katrina-manson" />
			<id>https://www.theverge.com/?p=917996</id>
			<updated>2026-04-24T12:42:02-04:00</updated>
			<published>2026-04-24T13:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Books" /><category scheme="https://www.theverge.com" term="Entertainment" />
							<summary type="html"><![CDATA[In the first 24 hours of the assault on Iran, the US military struck more than 1,000 targets, nearly double the scale of the “shock and awe” attack on Iraq over two decades ago. This acceleration was made possible by AI systems that speed up the targeting process. Chief among them is the Maven Smart [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: The Verge, WW Norton" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/268484_How_the_US_embraced_AI_warfare_CVirginia.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-drop-cap has-text-align-none">In the first 24 hours of the assault on Iran, the US military struck more than 1,000 targets, nearly double the scale of the “shock and awe” attack on Iraq over two decades ago. This acceleration was made possible by AI systems that speed up the targeting process. Chief among them is the Maven Smart System.</p>

<p class="has-text-align-none">In her new book,<em> </em><a href="https://wwnorton.com/books/project-maven"><em>Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare</em></a>, journalist Katrina Manson investigates the development of Maven from its inception in 2017 as an experiment in applying computer vision to drone footage. The project spurred employee <a href="https://www.theverge.com/2018/4/4/17199818/google-pentagon-project-maven-pull-out-letter-ceo-sundar-pichai">protests at Google</a>, the military’s initial contractor, prompting the company to back out. Pushed forward by a Marine intelligence officer named Drew Cukor, whose story forms the backbone of <em>Project Maven</em>, the system ended up being built by Palantir and draws on technologies developed by Microsoft, Amazon, Anthropic, and others. Now used across the US armed forces and recently purchased by <a href="https://defensescoop.com/2025/04/14/nato-palantir-maven-smart-system-contract/">NATO</a>, Maven synthesizes satellite imagery, radar, social media, and dozens of other data sources to identify and target entities on the battlefield. It also speeds up what’s called the “kill chain.”</p>

<p class="has-text-align-none">Maven combines computer vision with a sort of workflow management system that finds targets, pairs them with weapons, and allows users to quickly click through the other steps of a targeting cycle. A process that once took hours can now be completed in seconds. An official tells Manson that the technology has allowed the US to go from hitting under a hundred targets a day to a thousand, and with the addition of LLMs, up to five thousand targets a day.&nbsp;</p>

<p class="has-text-align-none">One of the thousand targets struck on the first day of the Iran war was a <a href="https://www.reuters.com/investigations/bombed-iranian-girls-school-had-vivid-website-yearslong-online-presence-2026-03-12/">girls’ school</a>, killing more than 150 people, mostly children. The school had previously been part of an Iranian naval base, yet it was listed online as a school and playgrounds were visible on satellite imagery. While much of the coverage after the strike focused on possible hallucinations by Claude, the technology historian Kevin Baker wrote in <em>The Guardian</em> that Maven and the acceleration it enabled is the more relevant place to look. “A chatbot did not kill those children,” he <a href="https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying">wrote</a>. “People failed to update a database, and other people built a system fast enough to make that failure lethal.”</p>

<p class="has-text-align-none">The pace of war is set to accelerate further. Manson uncovers military programs to develop fully autonomous weapons — including an explosive-laden drone Jet Ski — capable of targeting and destroying targets on their own.&nbsp;</p>

<p class="has-text-align-none">I spoke to Manson about Maven and how AI is changing warfare.&nbsp;</p>

<hr class="wp-block-separator has-alpha-channel-opacity" />

<p class="has-text-align-none"><em>This interview has been condensed and edited for clarity.&nbsp;</em></p>

<p class="has-text-align-none"><strong>Colonel Cukor was an early and determined proponent of AI. Can you say a bit about him and what his initial motivations were?&nbsp;</strong></p>

<p class="has-text-align-none">He is chief of Project Maven, so he was the day-to-day doer and leader, but he also had this very long-term vision, which comes from his frustration that US military operators in Afghanistan were equipped with very poor intelligence tools. There was this idea that the US essentially fought that war 40 times over, every six months, because information wasn’t being handed over [when troops rotated in]. He was frustrated that data was in Excel and PowerPoint and he wanted an analytic tool that would bring intelligence to the frontline military operators. But he also had this vision for what he called “white dots” — that there would be white dots shown on a map infused with intelligence information, like a coordinate, what is there, the elevation, what is known about it. And this becomes one of the driving forces of what he tries to create through Project Maven.</p>

<p class="has-text-align-none"><strong>How was Maven initially conceived in the military, was it as this interface and information management system?&nbsp;</strong></p>

<p class="has-text-align-none">It comes out of this project called Project Maven that starts in 2017. The actual project already existed and had already got a funding stream. It was to use AI against satellite imagery, but then it got repurposed for drone video imagery. This is because the US is thinking about how to develop AI for technologies for any potential conflict against China. They had this idea that eventually war would run faster than humans could think, so they wanted to bring AI into this. The initial idea proposed by Colonel Cukor is to apply AI to drone video footage. They were sometimes managing to analyze as little as 4 percent of the collection, so they wanted AI essentially to take the place of human eyes in analyzing what was there, but it was always bigger.</p>

<p class="has-text-align-none"><strong>The public first heard about Maven with the </strong><a href="https://www.theverge.com/2018/4/4/17199818/google-pentagon-project-maven-pull-out-letter-ceo-sundar-pichai"><strong>Google protests</strong></a><strong> in 2018, and I remember Google at the time saying that this technology would not be used to kill people. But it sounds like targeting was always the intention?</strong><br><br>A spokesperson from Google at the time said that flagging images for review on the drone feed with the help of AI was intended to save lives and was for non-offensive uses only. That is not what my reporting shows. My reporting shows that many of the US military operators were motivated by the aim to save US lives and reduce civilian harm, so in that sense, it is “not offensive” because you&#8217;re analyzing intelligence information. But in the wider sense and very quickly, in the very real sense, AI target selection was intended for targeting.&nbsp;</p>

<p class="has-text-align-none">I asked someone in the book if targeting offensive weapon strikes were intended to be part of Project Maven, and he replied, “yeah, of course, it&#8217;s not like we&#8217;re doing it for kicks. The goal of the intel is to take out high-value targets.”</p>

<p class="has-text-align-none"><strong>When the Google deal falls apart, that’s when Palantir steps in. Can you tell me about Palantir’s role in the project?&nbsp;</strong></p>

<p class="has-text-align-none">Two things happen. Microsoft and AWS [Amazon Web Services] take a much bigger role in producing the algorithms and also in the compute, and alongside that, Cukor goes to Palantir and says, “Can you help?” He&#8217;s pitching this idea of the white dots on a screen. He has this 10-year vision for how the US military will remake themselves, and they&#8217;ve been trying out algorithms, which at that stage are not very good at identifying anything, and are also having to sit in systems that aren&#8217;t fit for purpose. They had a lot of problems with users not believing in AI and finding the displays very distracting. So he wants a user interface that will please the user.&nbsp;</p>

<p class="has-text-align-none">So he pitches to Palantir that they create a user interface, which actually Palantir doesn&#8217;t want to do. I&#8217;m told they didn&#8217;t believe that AI was going to take off, and they also didn&#8217;t want to just make a fancy user interface. They wanted to crunch the data. But that wasn&#8217;t initially what Cukor was pitching them and he was very persuasive. He also wanted them to be less arrogant, and he ends up counseling them on how to attempt to remake their reputation inside the Department of Defense and to get these contracts, which initially, I don&#8217;t think are worth much money. But today, nearly 10 years later, I&#8217;ve reported that Maven Smart System is going to become by the end of September a “program of record” and Palantir is the prime contractor, so in the end, it&#8217;s going to be lucrative for them.<br></p>

<p class="has-text-align-none"><strong>Ukraine sounded like a pretty big inflection point in the development of these systems. What happened there?</strong></p>

<p class="has-text-align-none">This becomes a really important moment where the artillery fire team realizes that AI can help them speed up their operations and targeting. It becomes much more explicit that intelligence is going to feed into operations. When the US is supporting Ukraine, even before the invasion of Russia, the 18th Airborne Corps is over in Wiesbaden in Germany and very quickly they start to use computer vision on the Maven Smart System to figure out where the Russian positions are, where the tanks are, what is happening. The algorithms fail very quickly. The algorithms were used to the desert in the Middle East and in Afghanistan. The algorithms couldn&#8217;t recognize tanks and other features in the snow. They collect new satellite footage over the Russian tanks and other equipment and send them back to the US to retrain the algorithms really quickly, so they become much better at spotting tanks.&nbsp;</p>

<p class="has-text-align-none">The US starts sending what they end up calling “points of interest” to the Ukrainians, who then use that to target Russian equipment and personnel. The language of “points of interest” is interesting because the US is trying to thread this needle to provide support to the Ukrainians without becoming seen in Russia&#8217;s eyes as a direct participant in the war. So they evolved this idea that a “target” is something that has gone through a process, and they are giving the Ukrainians everything just shy of that. I&#8217;m able to report that at the high point on one day in 2022, the US passes 267 points of interest to the Ukraine.</p>

<p class="has-text-align-none"><strong>What are the parts of the targeting process that are getting automated that cause that kind of acceleration?</strong></p>

<p class="has-text-align-none">The US military would say nothing is yet automated, because there is this extra stage of targeting, which is really key, which is the legal decision to strike something. In the case of why the kill chain is speeding up, what I&#8217;ve been told is that a lot of the processes involved in getting permission to strike a target have traditionally been extremely analog and slow, involving telephones and swivel chairs. So this is part of shifting this process onto digital platforms and then eventually getting to automate it.</p>

<p class="has-text-align-none">The 18th Airborne Corps had humans at six key steps. So the human decides when and how to shoot at a target. They assess what&#8217;s called an operational approach. They assess the data collected, they decide to act, communicate the decision, execute the fire, and then communicate what happened. And then with the arrival of Maven&#8217;s AI, they reduced the human role in the loop to only two places: the decision to act and the action itself. They can supervise the machine making the decision during the automated collection process, but the assessments throughout would all be AI enabled. Even at the NGA [National Geospatial-Intelligence Agency], they are producing intelligence reports that no human eyes or hands have touched that are entirely AI generated. So there&#8217;s been this huge shift into really making data and the system king.&nbsp;</p>

<p class="has-text-align-none">The other reason that they&#8217;re able to get to so many targets in a day is because the Maven Smart System is using large language models. I&#8217;ve reported [they’re using] Claude from Anthropic, and I was told it was helping speed up the processes. And Centcom [US Central Command] themselves said that with the help of AI, they were able to speed up processes that used to take days and hours down to as little as seconds. The commander, the US would say, is still making the decision. But I&#8217;ve also spoken to US military ethicists who say that there is a risk of the gamification of war, and that people may end up trusting the targets that they&#8217;re being offered on screen without understanding fully the data that&#8217;s supporting it.</p>

<p class="has-text-align-none">Now, the pushback is that this is data that&#8217;s better tagged than ever been before, that this AI-based system, essentially being a database system, means that you can audit the data and go deep into it and also give headquarters a way of following what military operators at the edge are doing with much greater transparency and accountability than ever before. This enormous operation that the US has undertaken in Iran will ultimately be a case in point. And we&#8217;ll be looking for data and accountability about how the US has, in the end, used this platform.</p>

<p class="has-text-align-none"><strong>There&#8217;s a technology scholar, Kevin Baker, who wrote a piece about how Claude got a lot of blame initially for the school strike in Iran. But he pointed to this longer term acceleration and said that these steps may have left time for deliberation or noticing errors or contradictory intelligence. I&#8217;m curious if there were concerns in the military that things were getting too fast?</strong></p>

<p class="has-text-align-none">There&#8217;s a really significant debate inside the US military about how far they should lean into this. Some are saying it&#8217;s inevitable, and others are really warning that that human assessment at the last minute is the thing that can save lives. And I don&#8217;t think that the debates proved out, but the direction of travel is clear in that the Maven Smart System is becoming a program of record. That Central Command commander is taking time out of these operations to go on to X and say that they are using AI and that they&#8217;re finding it helpful. Then you have people like retired Defense Secretary Jim Mattis saying that targeting is no substitute for strategy, that hitting a lot of things, essentially, doesn&#8217;t get you to victory.&nbsp;</p>

<p class="has-text-align-none">There&#8217;s one example that I keep going back to in my mind, which is in 1999, when the US strikes the Chinese Embassy in Belgrade. In the analysis that the US offers publicly afterwards, they say that the embassy was incorrectly labeled on a map. The embassy had moved recently. The map hadn&#8217;t been updated. One map had; others hadn&#8217;t. Someone even tried to make a call because they got worried and wanted to check, but they weren&#8217;t able to reach someone in time.&nbsp;</p>

<p class="has-text-align-none">In an example like that, if your systems flag a problem and they&#8217;re digitally connected, on the one hand, it could be much easier to raise anomalies, problems, risks of mistake. On the other, the target selection from what could be an erroneous targeting database could be made even quicker without those checks. So the decision that the US military makes about leaning into AI on the targeting cycle will only be as good as the data that is feeding it.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Joshua Dzieza</name>
			</author>
			
			<title type="html"><![CDATA[Why are Epstein’s emails full of equals signs?]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/policy/879016/epstein-files-emails-text-errors-encoding" />
			<id>https://www.theverge.com/?p=879016</id>
			<updated>2026-02-15T10:22:35-05:00</updated>
			<published>2026-02-15T08:00:00-05:00</published>
			<category scheme="https://www.theverge.com" term="Policy" />
							<summary type="html"><![CDATA[Many of the emails released by the Department of Justice from its investigation into Jeffrey Epstein are full of garbled symbols like: Or: The scrambled text is so ubiquitous that it’s spurred conspiracy theories that it could be some kind of code. But as believable as it might be that a cabal of elite sex [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Photo collage of red lines connecting Jeffrey Epstein to Donald Trump, Melania Trump, and Ghislaine Maxwell, along with a UFO, lizard, and Bigfoot." data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/07/STKS516_EPSTEIN_CONSPIRACY_THEORIES_A.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Many of the emails released by the Department of Justice from its investigation into Jeffrey Epstein are <a href="https://www.justice.gov/epstein/files/DataSet%2011/EFTA02474673.pdf">full of garbled symbols like</a>:</p>
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/02/Screenshot-2026-02-13-at-6.56.54%E2%80%AFPM.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="" title="" data-has-syndication-rights="1" data-caption="" data-portal-copyright="" />
<p class="has-text-align-none"><a href="https://www.justice.gov/epstein/files/DataSet%2011/EFTA02638940.pdf">Or:</a></p>
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/02/Screenshot-2026-02-13-at-6.59.44%E2%80%AFPM.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="" title="" data-has-syndication-rights="1" data-caption="" data-portal-copyright="" />
<p class="has-text-align-none">The scrambled text is so ubiquitous that it’s spurred conspiracy theories that it could be some kind of code. But as believable as it might be that a cabal of elite sex traffickers would communicate in a secret language, the reality is probably more boring: The symbols are likely artifacts from the way the Department of Justice converted the emails to PDFs.&nbsp;</p>

<p class="has-text-align-none">“The glyphs and symbols are probably some artifact of a poor conversion process,” said Chris Prom, professor and archivist at the University of Illinois Urbana-Champaign. Specifically, the symbols look like remnants of <a href="https://www.rfc-editor.org/rfc/rfc2045#section-6.7">Multipurpose Internet Mail Extensions</a>, or MIME, a 30-year-old standard for encoding emails. The protocol underlying email transmits messages as short strings of simple ASCII characters, so as people started writing longer messages and trying to include formatting and symbols, MIME was developed as a way of encoding them in ASCII.&nbsp;</p>

<p class="has-text-align-none">With MIME, the “=” is used to signal either that a string of text should be broken for transmission and rejoined — a “soft line break” — or, when followed by two other characters, that it should be converted to a particular non-ASCII mark. If you wanted to actually write “=” in an email, for example, it would be encoded as “=3D.” During normal use, the recipient’s email client decodes these symbols before displaying the formatted message.&nbsp;</p>

<p class="has-text-align-none">Whatever software the Department of Justice used to extract the emails and convert them to PDFs appears to have mangled some of the decoding, said Peter Wyatt, the chief technology officer of the PDF Association, who examined a batch of the <a href="https://pdfa.org/a-case-study-in-pdf-forensics-the-epstein-pdfs/">Epstein documents</a>.&nbsp;</p>

<p class="has-text-align-none">“It was in the news, and it was a whole lot of PDFs,” he said. The association performed similar analyses of the Mueller report and Manafort documents. “Generally speaking, we&#8217;re interested in anything to do with PDF. That&#8217;s kind of what we do and what we&#8217;re about.”</p>

<p class="has-text-align-none">The clarity of the text and URLs led Wyatt to believe these documents were extracted digitally then converted to PDF, rather than physically printed and scanned, as the <a href="https://pdfa.org/a-technical-and-cultural-assessment-of-the-mueller-report-pdf/">Mueller report was</a>. “So things have improved since that time,” Wyatt said.&nbsp;</p>

<p class="has-text-align-none">Specifically, the Department of Justice likely extracted the email data, converted it to PDF, then redacted it. In order to strip the document of metadata and bake in the redactions so that the black bars couldn’t be removed, they then converted the documents to image files like JPEG before converting them <em>back</em> into PDF. The software used to initially extract and convert the data also captured portions of the underlying MIME format instead of properly decoding it. Or more simply: emails, sometimes partially decoded, converted to PDF, converted to JPEG, converted to PDF.&nbsp;</p>

<p class="has-text-align-none">That at least explains the profusion of “=”. But it doesn’t fully explain why the “=” sometimes replaces letters, like the “J” in “Jeffrey.” No one I spoke to could definitively answer this question, except to say that email is hard and converting it to PDF is harder, and the DoJ was converting a lot of documents in a hurry. (The redactions have been notably inconsistent throughout the files, too.)</p>

<p class="has-text-align-none">Prom thought it might be a character set conversion problem, which he saw frequently when the archival tool he was testing couldn&#8217;t find the specific character set or font the email server was using. </p>

<p class="has-text-align-none">Craig Ball, a forensic examiner who teaches at the University of Texas at Austin School of Law pointed out that different email clients implement standards in slightly different ways, adding to the difficulty of conversion. “My hunch is that this is an incompatibility between the code pages used by the transmitting mail client (possibly a BlackBerry) and the application used to print the messages to PDF,” Ball wrote. “The presence of BlackBerry and iPhone signatures in these emails suggests the messages traversed multiple systems with different encoding practices, compounding the decoding issues during PDF generation.”</p>

<p class="has-text-align-none">“You&#8217;re looking at hundreds of different methods of converting these files from hundreds of different people using whatever software they had available to them, some of which might have been good, some of which might not have been,” said Prom. </p>

<p class="has-text-align-none">“The PDF standard is quite complex,” wrote Prom. “And email to PDF is particularly fraught.”</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Joshua Dzieza</name>
			</author>
			
			<title type="html"><![CDATA[Jimmy Wales trusts the process]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/846184/jimmy-wales-trusts-the-process" />
			<id>https://www.theverge.com/?p=846184</id>
			<updated>2025-12-17T11:46:29-05:00</updated>
			<published>2025-12-17T12:00:00-05:00</published>
			<category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Wikipedia will be 25 years old in January. During that time, the encyclopedia has gone from a punchline about the unreliability of online information to the factual foundation of the web. The project’s status as a trusted source of facts has made it a target of authoritarian governments and powerful individuals, who are attempting to [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/05/257773_Jimmy_Wales_HBenoit_0013.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Wikipedia will be 25 years old in January. During that time, the encyclopedia has gone from a punchline about the unreliability of online information to the factual foundation of the web. The project’s status as a trusted source of facts has made it a target of authoritarian governments and powerful individuals, who are attempting to undermine the site and threaten the volunteer editors who maintain it. (For more on this conflict and how Wikipedia is responding, <a href="https://www.theverge.com/cs/features/717322/wikipedia-attacks-neutrality-history-jimmy-wales">you can read my feature from September</a>.)&nbsp;</p>

<p class="has-text-align-none">Now Wikipedia’s cofounder Jimmy Wales has written a new book, <em>The Seven Rules of Trust: A Blueprint for Building Things That Last. </em>In it, Wales describes a global decline in people’s trust in government, media, and each other, instead looking to Wikipedia and other organizations for lessons about how trust can be maintained or recovered. Trust, he writes, is at its core an interpersonal assessment of someone’s reliability and is best thought of in personal terms, even at the scale of organizations. Transparency, reciprocity — you have to give trust to get trust — and a common purpose are other ingredients that he attributes to Wikipedia’s success.&nbsp;</p>

<p class="has-text-align-none">We spoke over video call about his book, how Wikipedia handles contentious topics, and the threats facing the project and other fact-based institutions.</p>

<hr class="wp-block-separator has-alpha-channel-opacity" />
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/05/257773_Jimmy_Wales_HBenoit_0004.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="" title="" data-has-syndication-rights="1" data-caption="" data-portal-copyright="Photo by Hayley Benoit / The Verge" />
<p class="has-text-align-none"><em>The interview has been condensed and edited for clarity.&nbsp;</em></p>

<p class="has-text-align-none"><strong><em>The Verge</em></strong><strong>: You wrote a book about trust, and a global crisis in trust. Can you tell me what that crisis is and how we got there?</strong></p>

<p class="has-text-align-none">Jimmy Wales: If you look at the Edelman Trust Barometer survey, which has been going since 2000, you&#8217;ve seen this steady erosion of trust in journalism and media and business and to some degree in each other. I think it gives rise in a business context to a lot of increased cost and complexity, and politically, I think it&#8217;s tied up with the rise of populism. So I think it&#8217;s important that we focus on this issue and think about, <em>What&#8217;s gone wrong? How do we get back to a culture of trust?</em></p>

<p class="has-text-align-none"><strong>What do you think has gone wrong?</strong></p>

<p class="has-text-align-none">I think there&#8217;s a number of things that have gone wrong. The trend actually goes back to before the Edelman data. Some of the things I would point to are the decline of the business model for local journalism. To the extent that the business model for journalism has been very difficult, full stop, you see the rise of low-quality outlets, clickbait headlines, all of that. But also that local piece means people aren&#8217;t necessarily getting information that they can verify with their own eyes, and I think that tends to undermine trust. In more recent times, obviously the toxicity of social media hasn&#8217;t been helpful.</p>

<p class="has-text-align-none"><strong>Why has Wikipedia so far bucked that trend and continued to be fairly widely trusted?</strong></p>

<p class="has-text-align-none">Part of the rationale for writing the book is to say, “Look, Wikipedia has gone from being kind of a joke to one of the few things people trust, even though we&#8217;re far from perfect.” I think transparency is hugely important. The idea that Wikipedia is an open, collaborative system and you can come and see how decisions are made, you can join and participate in those decisions — that&#8217;s been very helpful. I think neutrality is really important. The idea that we shouldn&#8217;t take sides on controversial topics is one that resonates with a lot of people. I don&#8217;t want to come to an encyclopedia or frankly a newspaper and be told only one side of the story. I want to get the full picture so I can understand the situation for myself.&nbsp;</p>

<p class="has-text-align-none"><strong>You brought up the Edelman survey and decline in trust in media, government, and to a lesser extent individuals. Are we seeing a decline in trust or a transfer of trust from institutions to individuals? In the book, you say we are hardwired to trust at an interpersonal level by gauging other people&#8217;s authenticity, which is a trait that plays very well on social platforms, where some very trusted figures also gain extra trust by telling their followers </strong><strong><em>not</em></strong><strong> to trust in the media, the FDA, the universities. Do you see this dynamic playing a role, and if you do, how has Wikipedia, which is an institution, continued to be trusted?&nbsp;</strong></p>

<p class="has-text-align-none">I think there&#8217;s some truth to that. But I also think it&#8217;s incomplete because I think a lot of people who support Donald Trump will also say they don&#8217;t really trust him. They just think it&#8217;s not relevant. They&#8217;ve sort of lost faith in the idea of people being honest. So they&#8217;re more likely to say, “All politicians lie, so why is that a big deal?” I obviously think it is a big deal. I think that&#8217;s very problematic.&nbsp;</p>

<p class="has-text-align-none">Similarly, I think a lot of the people who are jumping on a bandwagon undermining trust in science, for example, basically see it as a way to get successful. I mean, that&#8217;s a pretty cynical view of those particular people, and I&#8217;m not a very cynical person, but sometimes it&#8217;s hard to come to any other conclusion: there&#8217;s a lot of grifting going on.</p>

<p class="has-text-align-none">I interviewed Frances Frei for the book, and she&#8217;s a Harvard academic who also has business experience. One of the things she said to me was, people often say that once you&#8217;ve lost trust — that’s it, you&#8217;ll never get it back. And she says that&#8217;s not true. You can rebuild trust. There are certain definable things that organizations and people can do to rebuild trust. So when we think about institutions being attacked, they probably should reflect on what made them vulnerable.&nbsp;</p>

<p class="has-text-align-none"><strong>You have some examples in the book, like the back-and-forth about masking and covid, and obviously journalists do make errors. But I tend to think that most publications are fairly transparent about issuing corrections, though maybe not to the level of Wikipedia. How much of the decline in trust has to do with actual mistakes made by those institutions, versus people or groups that want to be able to define their own reality undermining what they see as rival centers of facts, whether that&#8217;s academia or science or journalism?</strong></p>

<p class="has-text-align-none">I absolutely think it&#8217;s both. In many cases, we have seen media with a real blind spot, and I typically would view it more often as a blind spot problem, rather than deliberate bias. I live in London. All three of the major political parties were opposed to Brexit, and in London you could not really find anybody who was openly supporting Brexit, at least not among my social group. Everybody thought it was a completely ridiculous idea. And yet the public voted for it.&nbsp;</p>

<p class="has-text-align-none">I think a big part of that was that London wasn&#8217;t listening and the media tended too often to portray Brexit support as having to do with racism and so on. Which, of course, if that&#8217;s how you come at people, they tend to not go, “Oh, you&#8217;re right, I&#8217;m sorry. I&#8217;m going to stop being racist now and change my political views.” They&#8217;re more likely to say, “Hold on a minute, you&#8217;re not listening to me. I&#8217;m not being racist. There are these problems, functional problems, and I don&#8217;t think I&#8217;m being listened to.” To the extent the media isn&#8217;t representative of broader segments of society and isn&#8217;t listening to problems that people are having, that&#8217;s a problem. And then we also have people who are taking advantage of it and who see that opportunity to campaign and build trust by pointing the finger at the other guy.</p>

<p class="has-text-align-none"><strong>Debates on Wikipedia talk pages can get heated. People rebut other people&#8217;s proposals without a lot of pleasantry. There is real conflict, but it is generally productive. People keep engaging with each other and usually reach a compromise, which feels rare in online discourse. What do you think the mechanism or mechanisms are that make this possible?</strong>&nbsp;</p>

<p class="has-text-align-none">We have a purpose to build an encyclopedia, to be high-quality and neutral, and we have a commitment to civility as a virtue in the community. We&#8217;re human beings, so of course sometimes those conversations are, I might say, a bit brusque but hopefully not stretching quite into personal attacks. There&#8217;s also this view that you really shouldn&#8217;t attack people personally. And if it gets overheated, you should probably apologize, and things like that, which is not that unusual except in online contexts. I mean, normally I think most people in real life, if you get into a proper nasty quarrel with someone, there is a sort of feeling like, <em>Yeah, that wasn&#8217;t productive and maybe we need to apologize to each other and find a better way to deal with each other</em>. In terms of how do we foster more of that? I think in online spaces, it has to do with changing culture. And in many cases, I think it&#8217;s the design of algorithms.&nbsp;</p>

<p class="has-text-align-none">I don&#8217;t go on Facebook very much anymore, but if one day I logged in and Facebook had an option that said, “We&#8217;d like to show you things we think you will disagree with, but that we have some signals in our algorithm that are of quality. Would you like to see that?” I&#8217;d be like, yes, sign me up for that. As opposed to: “Our research has shown that you tend to get agitated about trolls, so we&#8217;re going to send more trolls your way because you stay on the site longer.” Or “we&#8217;re only going to send you stuff we think you&#8217;re going to agree with,” which is also not really healthy intellectually.</p>

<p class="has-text-align-none"><strong>One of your other examples of a functional online space was the subreddit /changemyview, which feels similar to Wikipedia in some ways. It&#8217;s text-based. There are rules. You&#8217;re there for a specific purpose. Is it possible for a big platform like Facebook or X or whatever to become a healthy space, or do you need to be kind of constrained and purpose-built?</strong></p>

<p class="has-text-align-none">I think it&#8217;s hard for sure. And I think that&#8217;s a great question because I don&#8217;t think anybody knows right now. On Facebook, you’ll find pockets of groups that have good, well-run community members who are keeping the peace and insisting on certain standards. And you find horrible places as well. I think Reddit it’s the same. And another thing that I do think is interesting is looking back, because I&#8217;m now old, and I remember before the World Wide Web and I remember Usenet, which was a giant, enormous, largely unmoderated message board. That was super toxic. It had endless flame wars and horribleness and spam and all kinds of nonsense. So I always try to mention that when people have this view of the lovely, sweet days of the early internet — <em>it was such a utopia</em>. I&#8217;m like, it was kind of horrible then too. It turns out we don&#8217;t need algorithms to be horrible to each other. That&#8217;s actually something humans can do, and humans can be great to each other at the same time. But I do think, as consumers of internet spaces, I think we should say, “Actually, I really would much rather be in places that are good for me.”</p>

<p class="has-text-align-none"><strong>You recently weighed in on one of the most contentious topics on Wikipedia or anywhere, the Israel-Gaza conflict. You wrote that you thought that it shouldn&#8217;t be called a genocide in wiki voice. You normally stay out of content debates on Wikipedia. Why did you decide to weigh in on that one?</strong></p>

<p class="has-text-align-none">I think it&#8217;s really important that Wikipedia remain neutral and that we refrain from saying things that are controversial in wiki voice. I think that&#8217;s not healthy for us and not healthy for the world. So it felt important to weigh in and say, “Let&#8217;s take a deeper look at this.” And the other thing is normally, we have this idea of consensus in the community, and I would say it has a certain usually constructive ambiguity, like what is consensus? How do you define that? We&#8217;ve avoided for good reason, I think, saying, “it&#8217;s 80 percent” or any kind of simple rule like that. And the reason is because there are so many different areas in editing where there are different levels of certainty and different levels of consensus. My simplest example is, which picture of the Eiffel Tower should we have as the main picture on the Eiffel Tower wiki page? Well, maybe somebody does a straw poll and it&#8217;s 60-40. Personally, if I&#8217;m in the 40 percent, I&#8217;m going to go, <em>Most people don&#8217;t agree with me, oh well</em>, because it isn&#8217;t that important.&nbsp;</p>

<p class="has-text-align-none">Whereas in other cases, if you&#8217;ve got a significant number of good Wikipedians who are saying, “I don&#8217;t agree with this, I don&#8217;t think this should be in wiki voice,” you shouldn&#8217;t go for 60 percent. That&#8217;s nowhere near good enough, particularly not if it has enormous implications for the reputation of Wikipedia and neutrality. We should hold ourselves to a very high standard. This is the kind of thing that over the years, we have to reexamine over and over and over. Where are we drawing these lines? And are we doing a good job of it? And should we ratchet it up and be more serious about it? And over the years, we have gotten more serious about it. And I think we should be even more serious about it.</p>

<p class="has-text-align-none"><strong>Some of the editors said they felt that there was a consensus, that they&#8217;d debated this question for months, and that to frame the article as you wanted would be to give both sides of the debate equal weight, rather than to represent the proportional view of experts and institutions. What are your thoughts on that critique?</strong></p>

<p class="has-text-align-none">Yeah, I think they&#8217;re wrong. I think we have to always dig deep and examine it, and I think it&#8217;s absolutely fine to say, “The consensus of academic genocide researchers is that this was genocide.” That, as far as I can tell, is a fact, so that&#8217;s fine. Report on that fact. That doesn&#8217;t mean that Wikipedia should say it in our own voice.&nbsp;</p>

<p class="has-text-align-none">And that&#8217;s actually important more broadly that if there&#8217;s significant disagreement within the community of Wikipedians and we don&#8217;t have consensus, and if people are putting forward policy-based reasons to disagree with that, which they are, then hold on. We should always be looking for as much agreement as possible. So what can we all agree on? Oftentimes that may be stepping back, going meta and saying, “Okay, well, we can all agree to report on the facts. We&#8217;re not all going to agree on using wiki voice here. So we&#8217;re not going to do that. But we are going to report the facts that we can all agree on.”</p>

<p class="has-text-align-none">And it&#8217;s important for two reasons. One, it&#8217;s what you want from an encyclopedia. You don&#8217;t want to be jumping to a conclusion while there&#8217;s still live debate. And two, socially within the community, it means we can all have a win-win situation where we can all point at this and say, “Yeah, we disagree but we can point to this with pride and say, ‘Actually, this is a good presentation. If you read this, you&#8217;ll understand the debate.’” Brilliant. That&#8217;s where we want to be.</p>

<p class="has-text-align-none"><strong>When I see people attack Wikipedia for bias, it often comes down to which sources editors deem reliable. They’ll say, “Well, you don&#8217;t let us cite Breitbart, so now it&#8217;s going to be biased.” How are you thinking about how to draw the line of what is an acceptable source, and how to maintain neutrality as these decisions no longer seem neutral to people who have a completely different media diet made up of sources deemed unreliable?</strong></p>

<p class="has-text-align-none">It&#8217;s something we will always be grappling with. Wikipedia does not have firm rules. That&#8217;s one of the core pillars. We don&#8217;t completely ban sources. We may deprecate them and say, “Well, it’s not <em>preferred</em> as a source. We&#8217;d rather have something better.” And then I make no apologies at all for saying not all sources are equal. I always say, if I have a choice between <em>The New England Journal of Medicine</em> and Breitbart, I&#8217;m going with <em>The New England Journal of Medicine</em>. That&#8217;s just the way it is, and I think that&#8217;s fine. When I say we have to grapple with it and take seriously the question of bias, I think we do. But sometimes we&#8217;re going to conclude, <em>Actually, I think we&#8217;re fine here</em>.&nbsp;</p>

<p class="has-text-align-none"><strong>Elon Musk has been a loud voice complaining about bias on Wikipedia. Now he has Grokipedia, an AI-rewritten version of Wikipedia that draws on a bunch of sources that Wikipedia won&#8217;t allow. Have you looked at Grokipedia?</strong></p>

<p class="has-text-align-none">A little. Not enough. I need to do a deep dive.</p>

<p class="has-text-align-none"><strong>What are your thoughts on it?</strong></p>

<p class="has-text-align-none">I think a lot of the criticism that it&#8217;s getting is not surprising to me. I use large language models a lot and I know about the hallucination problem, and I see it all the time. Large language models really aren&#8217;t good enough to write an encyclopedia. And the more obscure the topic, the more likely they are to hallucinate. I also think in terms of the question of trust, I&#8217;m not sure anybody&#8217;s going to trust an encyclopedia that has a thumb on the scales. Which is to say, when I&#8217;m not happy about something in Wikipedia, I open a conversation and enter the discourse. I&#8217;m sure if Elon doesn&#8217;t like something, it&#8217;s just going to change. I don&#8217;t see how you can trust a process like that. You know, it is reported that Grokipedia seems to agree with Elon Musk&#8217;s political views quite well. Fine. It&#8217;s Elon, but that might not be what we all want from an encyclopedia.</p>

<p class="has-text-align-none"><strong>Are you concerned that it could be what some people want, or that people will start to use or prefer an AI-revised version of Wikipedia that conforms to their worldview?&nbsp;</strong></p>

<p class="has-text-align-none">Obviously you can&#8217;t dismiss that out of hand, but I actually reflect on various research that we cite in the book about trust, that if people feel like there&#8217;s a thumb on the scale, then even if they agree with that thumb on the scale, they are likely to trust it less.</p>

<p class="has-text-align-none">I have great confidence in ordinary people. I think that if you ask people, “Would you prefer to have a news source that reflects all your own prejudices and biases and that you agree with every day?” or “Would you rather get something that is neutral and gives you insight into things you might not agree with?” I don&#8217;t think it&#8217;d be a contest. Most people would prefer the latter. That doesn&#8217;t mean they automatically click on it, and they may still reach for their favored outlet. That&#8217;s fine. That&#8217;s humanity. But I don&#8217;t think we&#8217;re about to all go off into our little mind bubbles permanently.&nbsp;</p>

<p class="has-text-align-none"><strong>How are you thinking about Wikipedia and AI more generally? The internet is increasingly full of AI-generated slop, and the foundation noted earlier this year that bots scraping the site were straining Wikipedia’s servers. Do you see AI presenting a threat, possible benefit, both?&nbsp;</strong></p>

<p class="has-text-align-none">Both. AI slop on the internet I don&#8217;t think is a huge issue for Wikipedia because we&#8217;ve spent, you know, now nearly 25 years studying sources and debating the quality of sources. And so I think Wikipedians aren&#8217;t likely to be fooled by, you know, sort of fluff content that is generated by AI.</p>

<p class="has-text-align-none">Obviously, crawling Wikipedia and hammering our servers, that&#8217;s not cool. So we hope we find a reasonable solution to that. The money that supports Wikipedia comes from small donors giving an average of just over $10. They&#8217;re not donating to subsidize billion-dollar companies crawling Wikipedia. So you know, “pay for what you&#8217;re using” seems like a fair request.&nbsp;</p>

<p class="has-text-align-none">Then the other thing that I think is super interesting is the question of how we, the community, might use the technology in new ways. I&#8217;m not a very good programmer, but I&#8217;m a programmer, and I just wrote a little thing where I can feed it a short Wikipedia entry that maybe has five sources, feed it the five sources, and say, “Is there anything in the sources that should be in Wikipedia but isn&#8217;t? Or is there anything in Wikipedia that isn&#8217;t supported by the sources?” I haven&#8217;t even had time to play with it, but even at a first pass, I thought, this is actually not terrible.</p>
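The cross-checking tool described above can be sketched as a simple prompt builder. The function below is a hypothetical illustration (the function name, prompt wording, and structure are assumptions, not the actual tool); the call to an LLM API is deliberately left out.

```python
def build_verification_prompt(article_text: str, sources: list[str]) -> str:
    """Assemble a prompt asking an LLM to cross-check a short
    Wikipedia entry against its cited sources.

    Hypothetical sketch: the real tool's prompt is not public.
    """
    # Lead with the article text, then number each source.
    parts = [f"Article:\n{article_text}\n"]
    for i, src in enumerate(sources, start=1):
        parts.append(f"Source {i}:\n{src}\n")
    # Close with the two cross-checking questions from the interview.
    parts.append(
        "1. Is there anything in the sources that should be in the "
        "article but isn't?\n"
        "2. Is there anything in the article that isn't supported by "
        "the sources?"
    )
    return "\n".join(parts)
```

The resulting string would then be sent to whatever LLM the editor has access to; the interesting part is that the model is asked to compare two bounded texts rather than to generate facts from scratch, which limits the room for hallucination.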

<p class="has-text-align-none"><strong>Going back to why Wikipedia works, editors do seem to largely trust each other to be working in good faith, but it also seems like they have a lot of trust or respect for Wikipedia’s rules and processes in a way that feels rare in online communities. Where does that come from?</strong>&nbsp;</p>

<p class="has-text-align-none">I think it probably has to do with everything being genuinely community-driven and genuinely consensus-driven. The rules aren&#8217;t imposed, the rules are people writing down accepted best practices. Certainly in the early days, that was absolutely how it worked. We would be doing something for a while and then we would notice, like, <em>Oh, actually, you know, best practice is this, so we should maybe write that down as a guide for people</em>, and it becomes policy at some point. That helps to build trust in the rules, that they&#8217;re genuinely not imposed top-down, that they are the product of our values and a process and the purpose of Wikipedia.</p>
						]]>
									</content>
			
					</entry>
	</feed>
