<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Janus Rose | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2026-04-30T15:37:44+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/author/janusrose" />
	<id>https://www.theverge.com/authors/janusrose/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/authors/janusrose/rss" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Janus Rose</name>
			</author>
			
			<title type="html"><![CDATA[The more young people use AI, the more they hate it]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/920401/gen-z-ai" />
			<id>https://www.theverge.com/?p=920401</id>
			<updated>2026-04-30T11:37:44-04:00</updated>
			<published>2026-04-30T07:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[It’s been almost three years since Silicon Valley started aggressively pushing large language model-based chatbots like ChatGPT as the supposedly inevitable future of everything, and there’s no group that has felt the pressure quite like Gen Z. Like with many tech trends before it, it’s no surprise that young people are among the biggest adopters [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Thumbs down from robot symbolizing dislike of AI by the youths" data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/STK_414_AI_J-copy.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">It’s been almost three years since Silicon Valley started aggressively pushing large language model-based chatbots like ChatGPT as the supposedly inevitable future of everything, and there’s no group that has felt the pressure quite like Gen Z.</p>

<p class="has-text-align-none">As with many tech trends before it, it’s no surprise that young people are among the biggest adopters of AI chatbot tools. But contrary to the tales spun by tech companies like OpenAI and Google, polling data shows that Gen Z students and workers are a big part of the wider cultural backlash against AI. And even as they use these tools, vast swaths of young people are deeply wary of, and even resentful toward, the AI-centric future that many feel is being forced on them.</p>

<figure class="wp-block-pullquote"><blockquote><p>“The part that feels scariest to me is the human impact … their ability to have relationships or just basic communication.”</p></blockquote></figure>

<p class="has-text-align-none">Far from the stereotype of lazy young people looking for shortcuts, Gen Zers have had some of the loudest and most detailed objections to generative AI use. Their attitudes also reflect a <a href="https://www.theverge.com/policy/916210/ai-midterm-elections-data-centers-jobs">much wider backlash against AI</a> and the tech industry in general, which has recently resulted in a nonpartisan <a href="https://www.datacenterwatch.org/">movement against data centers</a> across the country and threatened both <a href="https://www.theverge.com/ai-artificial-intelligence/911778/ai-violence-sam-altman-home">CEOs</a> and <a href="https://www.theverge.com/policy/916210/ai-midterm-elections-data-centers-jobs">politicians</a> supportive of Silicon Valley’s AI frenzy.</p>

<p class="has-text-align-none">Meg Aubuchon, a 27-year-old art teacher living in Los Angeles, says their response and that of many of their peers has been to avoid chatbot tools entirely. “It just makes me want to dig my heels into a career where I never have to use AI, even if that’s a career that isn’t going to pay as well,” Aubuchon told <em>The Verge</em>.</p>

<p class="has-text-align-none">Emerging from academia and into the vice grip of an<a href="https://www.nytimes.com/2026/03/24/business/economy/college-graduates-job-market-hiring.html"> increasingly brutal job market</a>, young people face an impossible contradiction. They are being told, on the one hand, that these tools are going to eliminate millions of jobs, and on the other that they<a href="https://www.edweek.org/technology/the-ed-dept-wants-to-steer-grant-money-to-ai-what-that-means-for-schools/2025/07"> have to use them</a> if they don’t want to fall behind. They’re the first new generation of adults to navigate a world flooded with chatbots and <a href="https://www.404media.co/study-finds-a-third-of-new-websites-are-ai-generated/">generative AI slop</a>, after having already lost years of their youth to the covid-19 pandemic. And all the while, Silicon Valley’s<a href="https://www.forbes.com/sites/gilpress/2026/02/27/the-state-of-the-17-trillion-ai-bubble-the-end-of-thinking/"> multitrillion-dollar push</a> for AI adoption is clashing with their fears of its well-documented impacts — on the<a href="https://www.theguardian.com/technology/2025/dec/18/2025-ai-boom-huge-co2-emissions-use-water-research-finds"> environment</a>,<a href="https://www.cnet.com/tech/services-and-software/what-is-ai-slop-everything-to-know-about-the-terrible-content-taking-over-the-internet/"> disinformation</a>,<a href="https://www.computer.org/publications/tech-news/trends/cognitive-offloading"> academic integrity</a>, and our social fabric and<a href="https://www.theguardian.com/society/2026/mar/31/teenager-asked-chatgpt-most-successful-ways-take-life-inquest-told"> emotional well-being</a>, to name just a few.</p>

<p class="has-text-align-none">“The part that feels scariest to me is the human impact, because it impacts people on an individual level and how they relate to other people, whether that be their ability to have relationships or just basic communication,” said Aubuchon.</p>

<p class="has-text-align-none">Sharon Freystaetter, 25, started studying computer science at a young age and spent three years working as a cloud infrastructure engineer at a major Silicon Valley company. But right as AI hype really started to take off, she left the company, citing ethical concerns and anxiety over the environmental impacts of data centers. Now, she has left the tech industry for good, and says she avoids chatbots and disables AI features in applications whenever possible.</p>

<p class="has-text-align-none">“I think everyone in my immediate peer group is not using AI and is actively against it, besides my friends who are in computer science and are essentially mandated to use it,” Freystaetter, who is now a food service worker in New York, told <em>The Verge</em>. “When I came back and started to look around [for tech jobs], suddenly everything was saying ‘You need to use AI to get this job’ in the requirements.”</p>

<p class="has-text-align-none">Fears that chatbots are wrecking critical thinking and social skills are common among many groups of young adults, even as a wide majority of them admit to using chatbot tools regularly. According to a<a href="https://hbsp.harvard.edu/inspiring-minds/how-gen-z-is-using-ai"> recent Harvard-Gallup study</a>, 74 percent of young adults surveyed in the United States said they use a chatbot at least once a month (<a href="https://news.gallup.com/poll/704090/routine-college-students-despite-campus-limits.aspx">another study found</a> more than half of US college students admit to using the tools for their coursework on a weekly basis). At the same time, 79 percent of those surveyed by Gallup “expressed concern that AI makes people lazier,” and 65 percent said that using chatbots “promotes instant gratification, not real understanding” and prevents people from engaging with ideas in a critical or meaningful way.</p>

<figure class="wp-block-pullquote"><blockquote><p>“I’ve personally come to the conclusion that it’s a load of bullshit for outsourcing jobs.”</p></blockquote></figure>

<p class="has-text-align-none">And in a more<a href="https://www.theverge.com/ai-artificial-intelligence/909687/gen-z-doesnt-like-ai-gallup"> recent Gallup poll</a>, Gen Z’s opinion of AI tools hit a new low: Only 18 percent now say they are hopeful about the technology, down from 27 percent last year, and only 22 percent say they are excited, down from 36 percent. The number of Gen Z workers who think AI’s risks outweigh its benefits has also increased over the past year by 11 points, to almost 50 percent. And even though 56 percent say the tools help them finish work faster, eight in 10 now admit that using AI in this way makes actual learning more difficult in the future.</p>

<p class="has-text-align-none">To make matters worse, many university students are seeing school administrations awkwardly shoehorn AI into their higher education, consolidate computer science and engineering departments into new “AI” majors, and<a href="https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2026/03/27/faculty-push-back-against-openai-deals"> pen multimillion-dollar deals with AI companies</a> like OpenAI and Anthropic to integrate chatbot tools into academic curricula. And at the same time, young people are<a href="https://futurism.com/artificial-intelligence/graduates-college-ai-jobs"> graduating into a brutal job market</a> that they complain has been made virtually impossible to navigate as AI automation tools opaquely and arbitrarily filter out their job applications.</p>

<p class="has-text-align-none">Alex Hanna, the director of research at the Distributed AI Research Institute (DAIR), says the way students are being inundated by AI and its accompanying hype is driving their resentment, leading to<a href="https://www.latimes.com/california/story/2026-04-01/csu-ai-survey-students-faculty"> widespread backlash</a> both inside and outside academia.</p>

<p class="has-text-align-none">“Universities are hearing from employers that they want students who know how to use these tools,” Hanna told <em>The Verge</em>. “This is not because the tools actually have shown much value-add — they want Gen Z to show them where the value-add is. That, or the university is investing or has donors heavily involved in the supply side (e.g., in the tech industry).”</p>

<p class="has-text-align-none">In other words, AI companies and universities are taking an “integrate first, find use cases later” approach that essentially recruits students as marketing for the AI industry while baking these tools deep into the core of academia. At Arizona State University, for example, the school’s administration is using a beta tool called ASU Atomic that uses AI to automatically synthesize professors’ lectures into bite-sized learning materials, <a href="https://www.404media.co/email/f20bb25f-7b9c-48a6-b8bf-7bc22f18a25f/?ref=daily-stories-newsletter"><em>404 Media</em> recently reported</a>.</p>

<figure class="wp-block-pullquote"><blockquote><p>74 percent of young adults surveyed in the United States said they use a chatbot at least once a month … 65 percent said that using chatbots prevents people from engaging with ideas in a critical or meaningful way.</p></blockquote></figure>

<p class="has-text-align-none">Last month, the editorial board of the University of Pennsylvania’s<a href="https://www.thedp.com/article/2026/03/penn-ai-dominance-education"> student newspaper published</a> a scathing piece criticizing the university administration’s adoption of chatbot tools and its integration of AI topics into nearly every part of its curriculum. While acknowledging the widespread use of chatbots by students, the authors wrote that by uncritically embracing the technology without any clear rules, the school is “only quickening its own demise.”</p>

<p class="has-text-align-none">“AI cannot coexist with education — it can only degrade it. As technology advances and workers are replaced by machines, schools are some of the only places we have left to explore and wrestle with human thought,” the students wrote. “With our own university leading the charge, AI is now corrupting those few sacred spaces and leaving us with nowhere to engage in true scholarship.”</p>

<p class="has-text-align-none">In another letter written by the Oberlin College Luddite Club (appropriately, using a typewriter), students rejected a similar initiative by their school administration to “experiment” with AI-centric education.</p>

<p class="has-text-align-none">“[E]ven one semester of accepted (even encouraged) chat-bot use will jettison our student body down a lazy, irredeemable tunnel of intellectual destruction,” the Oberlin students wrote. “We will not stand by and witness the further atrophying of our liberal arts education. Rather than strengthening Silicon Valley, we build our own skills and generative sweat.”</p>

<p class="has-text-align-none">The fear that chatbot tools will lead to a permanent loss of critical thinking skills <a href="https://futurism.com/artificial-intelligence/gen-z-brain-humanity-ai">ranks high among the worries</a> held by young people about the technology. It’s also backed up by data: A recent<a href="https://arxiv.org/abs/2506.08872"> study from the MIT Media Lab</a> <a href="https://www.media.mit.edu/publications/your-brain-on-chatgpt/">found</a> that EEG scans of the brain showed decreased activity in people who have been writing essays using AI tools. Other research has found that this process, known as “<a href="https://www.mdpi.com/2075-4698/15/1/6">cognitive offloading</a>,” has<a href="https://www.computer.org/publications/tech-news/trends/cognitive-offloading"> a wide range of negative impacts</a> on humans, including diminishing people’s skepticism and their ability to discern truth from deception, leading to “heightened manipulation and weakened democratic decision-making processes.”</p>

<p class="has-text-align-none">The fact that so many young people are well aware of these dangers even as they make use of the tools shows that they aren’t buying the hype of AI boosters like OpenAI’s Sam Altman, who has frequently tried to pitch chatbots as tools for doing everything from writing essays to<a href="https://www.theguardian.com/commentisfree/2025/dec/13/openai-parental-assistance"> raising a child</a>. Instead, it suggests that Gen Z is hyper-aware of the tools’ limitations — from their <a href="https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html">well-documented tendency</a> to “hallucinate” made-up information to the <a href="https://www.storyboard18.com/digital/us-man-dies-by-suicide-after-developing-emotional-bond-with-ai-chatbot-i-am-scared-to-die-ws-l-95587.htm">social</a> and <a href="https://hoodline.com/2026/04/after-son-s-suicide-sacramento-mom-joins-ai-chatbot-crackdown-at-capitol/">emotional</a> hazards of relying on machines for human advice.</p>

<p class="has-text-align-none">“Altman talks about the technology like it is magic. He has used those words precisely,<a href="https://www.buzzsprout.com/2126417/episodes/17341004"> calling</a> ChatGPT ‘Magic Intelligence in the Cloud,’” said Hanna. “Gen Z is more realistic about what the tools actually can do. They can handle text-based work that they don&#8217;t want to do or feel pressured to do. But they are often rather savvy about their limits.”</p>

<p class="has-text-align-none">This is true even among those who aren’t “anti-AI” and say they find chatbot tools useful.</p>

<p class="has-text-align-none">“I spend a lot of time thinking about this stuff and I’ve personally come to the conclusion that it’s a load of bullshit for outsourcing jobs,” Emma Gottlieb, a borderline Zoomer-millennial who works in technical sales for a company that makes equipment for the film industry, told <em>The Verge</em>. Gottlieb says she often uses AI tools to quickly sift through large volumes of technical documents for her job. But she knows better than to take the systems’ outputs at face value.&nbsp;</p>

<p class="has-text-align-none">“I definitely do double-checks, personally. It’s important because somebody will mislabel an eBay listing for a component part, and then the AI will say it has this feature when it really doesn’t,” said Gottlieb. “I wouldn’t say it’s a significant time-saver, but I think it’s just like fast food — it’s easy, it’s cheap, and it’s there.”</p>

<figure class="wp-block-pullquote"><blockquote><p>AI companies and universities are taking an “integrate first, find use cases later” approach.</p></blockquote></figure>

<p class="has-text-align-none">There’s one other explanation for Gen Z’s stance on AI tools that isn’t measured in data points:<a href="https://abbybinder.substack.com/p/lets-talk-about-ai-shame"> AI use has become culturally toxic</a>, and many young people (like their older counterparts) won’t admit to using it out of social shame. The use of AI-generated visuals and text is frequently a subject of ridicule on social media, and any anecdotal sampling of young people will suggest that most find it fake and deeply uncool — especially when it’s used to circumvent the creative process and <a href="https://www.404media.co/disneys-openai-sora-disaster-shows-ai-will-not-save-hollywood/">pass off ugly-looking slop</a> as “AI art.”&nbsp;</p>

<p class="has-text-align-none">Lacking any clear-cut rules, AI use also causes distrust and anxiety within academia, not just between students and professors, but among peers. According to one <a href="https://theconversation.com/university-students-feel-anxious-confused-and-distrustful-about-ai-in-the-classroom-and-among-their-peers-258665">University of Pittsburgh study</a>, students viewed the use of AI tools as a “red flag” that causes them to “think less” of their peers.</p>

<p class="has-text-align-none">But Hanna says that a more critical approach is necessary — one that “punches up” at the CEOs, marketing teams, and school administrations that are pushing these tools as universal thinking machines, and focuses on the material conditions that pressure young people to use them in the first place.</p>

<p class="has-text-align-none">“Speaking as an elder millennial, I approach Zoomers who use these tools with a bit more empathy,” said Hanna. “Why do they feel compelled to use them? What material conditions do they face at school such that they are feeling so pressured? Is there a way to offer them another kind of pressure valve? … That&#8217;s likely a better place to begin from.”</p>

<p class="has-text-align-none">Freystaetter and Gottlieb both say that instead of their own generation, they are more worried about Gen Alpha and other young people who come after them, who may never get the chance to develop healthy relationships with these technologies once they become mandatory and ubiquitous.</p>

<p class="has-text-align-none">“These are the kids who are growing up with [AI] integrated into everything, and with ease of access,” Freystaetter said. “They grow up not knowing that they should be critical of it, and that they’re being influenced by it.”</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Janus Rose</name>
			</author>
			
			<title type="html"><![CDATA[‘Age Verification’ could force trans people to out themselves to use the internet]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/policy/892075/age-verification-kansas-id-trans" />
			<id>https://www.theverge.com/?p=892075</id>
			<updated>2026-03-10T19:11:25-04:00</updated>
			<published>2026-03-10T11:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="Policy" />
							<summary type="html"><![CDATA[In 2026, a photo ID is not just paperwork — it essentially grants you permission to exist in society. Last month, the Kansas legislature passed a law categorically invalidating trans people’s driver’s licenses and IDs overnight, requiring them to obtain new IDs with incorrect gender markers. Now, with a slew of online “Age Verification” laws requiring [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Kansas ID distorted" data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/03/Vrg_illo_trans_drivers_license_kansas_online.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-drop-cap has-text-align-none">In 2026, a photo ID is not just paperwork — it essentially grants you permission to exist in society. Last month, the Kansas legislature passed a law categorically<a href="https://theconversation.com/kansas-revoked-transgender-peoples-ids-overnight-researchers-anticipate-cascading-health-and-social-consequences-277052"> invalidating trans people’s driver’s licenses and IDs</a> overnight, requiring them to obtain new IDs with incorrect gender markers. Now, with a slew of “Age Verification” laws requiring online platforms to perform digital identity checks, tech policy experts warn that these dangers are expanding onto the internet, where biased automated systems threaten to out trans people and lock them out of websites, apps, and public services.</p>

<p class="has-text-align-none">As of March 2026, <a href="https://breached.company/half-of-us-states-now-enforce-age-verification-laws-the-2026-mass-rollout-of-digital-id-requirements/">over half of US states</a> have passed “Age Verification” and “Digital ID” laws. These verification systems (sometimes called “age-gating”) add a new dimension to problems that trans people have been dealing with for decades.</p>

<p class="has-text-align-none">“This is yet another step in requiring people to identify themselves everywhere, in physical and online spaces, as their so-called gender assigned at birth,” Dia Kayyali, an independent tech and human rights consultant, told <em>The Verge</em>.&nbsp;</p>

<p class="has-text-align-none">As many have pointed out, having an ID that doesn’t match your appearance or lived reality is not a matter of pronouns or “validation,” but<a href="https://gracebyron.substack.com/p/validity-is-not-political"> one of material consequence</a>: it prevents trans people from freely moving about the world without risking constant harassment, violence, and discrimination. Advocates for Trans Equality notes in its<a href="https://transequality.org/issues/identity-documents-privacy"> guide on trans identity documents</a> that “incorrect identification exposes people to a range of negative outcomes, from denial of employment, housing, and public benefits to harassment and physical violence.”</p>

<p class="has-text-align-none">In January 2025, the Trump administration issued a broad anti-trans executive order claiming the federal government would only recognize a person’s “immutable biological classification as either male or female,” defying the<a href="https://www.sciencenews.org/article/biological-sex-male-female-intersex"> overwhelming consensus of medical science</a>, which shows that sex is neither immutable nor purely biological. In November 2025, the Supreme Court overturned a court order that had temporarily stopped the Trump administration from<a href="https://19thnews.org/2025/11/supreme-court-transgender-passport-updates/"> blocking gender changes on US passports</a>. While the executive order isn&#8217;t legally binding outside the federal government, states rely on it to justify laws like Kansas’ SB 244, which invalidated trans people’s licenses.</p>

<p class="has-text-align-none">Automated online ID checking systems add new potential dangers to having a mismatched ID. <a href="https://dl.acm.org/doi/10.1145/3274357">Research shows</a> that the very design of online ID checking virtually guarantees that trans people — and <a href="https://www.theguardian.com/news/2025/sep/19/how-accurate-are-age-checks-for-australias-under-16s-social-media-ban-what-trial-data-reveals">people of color</a> — will experience issues disproportionately.</p>

<figure class="wp-block-pullquote"><blockquote><p>“These systems are specifically designed to look for discrepancies, and they’re going to find them.”</p></blockquote></figure>

<p class="has-text-align-none">Digital ID and age verification services generally fall into two categories. The systems typically used by government agencies to verify identity (like ID.me, which is used in some states to verify benefits like SNAP) compare an uploaded picture of a person’s ID against information stored in a government database. Others mandate biometric scans and AI “Facial Age Estimation,” an unproven computer vision technique that purports to determine a person’s age by analyzing facial features. This technique is based on facial recognition, and is currently being used by platforms like Meta,<a href="https://www.biometricupdate.com/202503/mistake-on-threshold-for-facial-age-estimation-costs-onlyfans-parent-1-4m"> OnlyFans</a>, and Roblox, where it’s being<a href="https://piunikaweb.com/2026/01/21/teens-bypassing-age-verification/"> outsmarted by teenagers</a> and is generally<a href="https://www.wired.com/story/robloxs-ai-powered-age-verification-is-a-complete-mess/"> a huge disaster</a>.</p>

<p class="has-text-align-none">“Both approaches have issues and disproportionate failure rates for trans people,” Os Keyes, a postdoctoral fellow at the University of Massachusetts who researches algorithmic bias against trans people, told <em>The</em> <em>Verge</em>. Technical experts like Keyes have criticized these systems as inherently biased against trans people, whose identities don’t always fit neatly into government boxes, and whose facial features often change dramatically as a result of hormone replacement therapy (HRT).</p>

<p class="has-text-align-none">“These systems are specifically designed to look for discrepancies, and they’re going to find them,” said Kayyali. “If you are a woman and anyone on the street would say ‘that’s a woman,’ but that’s not what your ID says, that’s a discrepancy.&#8221; The danger of these discrepancies extends not just to trans people, but to<a href="https://www.advocate.com/news/lesbian-mistaken-transgender-arizona-walmart"> anyone else</a> whose<a href="https://www.refinery29.com/en-gb/bathroom-transphobia-butch-women"> appearance doesn’t match</a> normative gendered expectations.</p>

<p class="has-text-align-none">“A lot of age estimation systems are built on a combination of anthropological sex markers and skin texture. This means they fall over and provide inaccurate results when faced with people whose markers and skin texture, well, don&#8217;t match,” explains Keyes. For example, one of the most prominent markers algorithms measure to determine sex is the brow ridge. “Suppose you have a trans man on HRT and a trans woman on HRT, the former with low brow ridges and rougher skin, the latter with high ridges and softer skin,” Keyes explains. “The former is likely to have their age overestimated; the latter, underestimated.”</p>

<p class="has-text-align-none">Making this even more Kafkaesque is the fact that many of these systems are black boxes, and lack even a basic method of redress where automated decisions can be appealed — mostly because the age verification laws don’t specifically require them. Many of the laws, including in<a href="https://kslegislature.gov/li_2024/b2023_24/measures/documents/sb394_enrolled.pdf"> Kansas</a>, incorporate language that only requires platforms to conduct age verification through “a commercially available database” or “any other commercially reasonable method” — to say nothing about the transparency or accuracy of the systems.</p>

<p class="has-text-align-none">Kendra Albert, a technology lawyer and partner at Albert Sellars, LLP, says that the open-ended language of the bills allows companies to avoid legal liability as long as they implement <em>some kind of</em> age-verification solution, regardless of its effectiveness or whether it offers a way to appeal algorithmic decisions.</p>

<p class="has-text-align-none">“In a lot of cases, it’s not saying you have to, it’s just saying you may be liable if you don’t,” Albert said. “That makes it harder to hold anyone accountable for the decisions to implement these tools, which are gonna have negative effects on particular populations of folks.”</p>

<p class="has-text-align-none">This leaves many platforms seeking to remove legal liability by relying on third-party age verification vendors like Yoti and k-ID — many of which, Albert notes, usually disclaim liability for algorithmic decision-making as part of their Terms of Service. Smaller platforms that can’t afford these vendors, meanwhile, will either implement their own forms of verification (which also means they become responsible for securely storing extremely sensitive user data), or simply shut down to avoid the legal risk.</p>

<p class="has-text-align-none">Using third-party vendors also means that companies may or may not be securely storing private data, or selling user information to other companies or the government. Last month, Discord, a platform hugely popular among LGBTQ+ gamers, ended its partnership with Persona, a Peter Thiel-backed identity verification company, after<a href="https://vmfunc.re/blog/persona"> hackers claimed</a> the system was sending the private data it collected to federal agencies and comparing photos and biometric data against government watch lists. Persona CEO Rick Song disputed these claims, <a href="https://x.com/rickcsong/status/2026422287977836608" data-type="link" data-id="https://x.com/rickcsong/status/2026422287977836608">posting on X</a> that Persona does “not work with any federal agency” but is “competing for government contracts.” The Trump administration has shown it has no qualms about using such lists to label and target activists and anyone it deems an enemy. Last year, <a href="https://time.com/7322106/trump-nspm-7-domestic-terrorism/">Trump signed NSPM-7, an executive order</a> that describes a wide range of political views as “domestic terrorism” to be targeted by law enforcement — specifically including “radical gender ideology,” the administration’s shorthand for anything acknowledging the fact that trans people exist.</p>

<p class="has-text-align-none">Needless to say, using platforms that require you to submit your ID and biometric data to a third-party company may not be an appealing option for many trans people, who already face disproportionate risk of <a href="https://journals.sagepub.com/doi/10.1177/17416590251345736">doxxing</a> and <a href="https://glaad.org/glaad-alert-desk-data-shows-dramatic-rise-in-anti-trans-hate-incidents/">targeted violence</a>. Last year, it was reported that<a href="https://www.houstonpublicmedia.org/articles/lgbtq/2025/12/15/538666/texas-trans-transgender-drivers-license-id-list-privacy/"> Texas was making a list of trans people</a> from the names of those who had requested to change the gender marker on their state ID.&nbsp;</p>

<p class="has-text-align-none">Speaking about the Kansas law, Albert said that “in a lot of these circumstances, [the government’s] power comes from the ability to weaponize this information against individual people.”</p>

<p class="has-text-align-none">And then there’s the types of content that digital ID checks are restricting. While the stated goal of most of these laws is blocking minors from accessing porn, <a href="https://www.them.us/story/kids-online-safety-act-kosa-youth-lgbtq-content">advocates have warned</a> the definitions of content “harmful to children” are flexible enough to encompass all sorts of supposedly “harmful” materials, from online LGBTQ+ communities to information on birth control. For example, the controversial Kids Online Safety Act (KOSA) has proposed an online censorship regime under a “duty of care” that requires platforms to <a href="https://www.eff.org/deeplinks/2025/05/kids-online-safety-act-will-make-internet-worse-everyone">avoid showing content</a> that is “harmful to minors,” and designates the Trump-appointed Federal Trade Commission (FTC) as the entity that would determine what kinds of material meet that description. (A companion bill, called the <a href="https://www.theverge.com/policy/890367/house-lawmakers-online-safety-bills-kids-act">Kids Internet and Digital Safety Act</a>, has also advanced in the House of Representatives, though it has notably removed language describing the duty of care.)</p>

<p class="has-text-align-none">“I think it’s fair to say that if you look at the history of obscenity in the US and what’s considered explicit material, stuff with queer and trans material is much more likely to be considered sexually explicit even though it’s not,” said Albert. “You may be in a circumstance where sites with more content about queer and trans people are more likely to face repercussions for not implementing appropriate age-gating or being tagged as explicit.”</p>

<p class="has-text-align-none">For trans people, many of whom <a href="https://www.theverge.com/cs/features/798493/trans-underground-organizing">discovered community and acceptance for the first time online</a>, this is a major shift. In combination, anti-trans ID and age-gating laws will block access to invaluable resources — and in some cases destroy them — leaving few alternatives.&nbsp;</p>

<p class="has-text-align-none">“If you can’t afford a VPN, you’re going to use a free VPN that steals your data, or just not access that site at all,” said Kayyali.</p>

<p class="has-text-align-none">Faced with onerous systems that weren’t designed with them in mind, experts worry that LGBTQ+ site operators and many trans folks attempting to access age-gated websites may decide that it isn’t worth the risk.</p>

<p class="has-text-align-none"><em>Correction: A previous version of this article stated that the Kansas Department of Revenue is following the instructions of the January 2025 executive order. KDOR was following SB 244; the state uses the executive order to justify the law.</em></p>

<p class="has-text-align-none"><em>Update: Clarified Persona&#8217;s claims and added comment from Persona CEO Rick Song.</em></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Janus Rose</name>
			</author>
			
			<title type="html"><![CDATA[Privacy laws can’t keep up with ‘luxury surveillance’]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/807834/meta-smart-glasses-privacy-laws-wearables" />
			<id>https://www.theverge.com/?p=807834</id>
			<updated>2025-10-28T07:53:11-04:00</updated>
			<published>2025-10-28T10:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="Gadgets" /><category scheme="https://www.theverge.com" term="Meta" /><category scheme="https://www.theverge.com" term="Privacy" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="Wearable" />
							<summary type="html"><![CDATA[With Meta’s new range of smart glasses, Mark Zuckerberg is pitching a vision of the future that sci-fi authors have been warning about for decades — one where privacy is truly dead, and everyone is recording everyone else at all times. This in itself is nothing new. Introduced at the company’s recent Meta Connect event, [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/10/257966_ethics_of_recording_people_with_smart_glasses__CVirginia3-1.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-drop-cap has-text-align-none">With Meta’s new range of smart glasses, Mark Zuckerberg is pitching a vision of the future that sci-fi authors have been warning about for decades — one where privacy is truly dead, and everyone is recording everyone else at all times.</p>

<p class="has-text-align-none">This in itself is nothing new. Introduced at the company’s recent Meta Connect event, the glasses represent the tech industry’s second major attempt at normalizing ubiquitous wearable surveillance devices, more than a decade after Google’s failed entry into the space with Google Glass. Back then, people wearing the experimental (and stupid-looking) tech were mocked as “Glassholes” — reminiscent of characters from Neal Stephenson’s<a href="https://marksarney.wordpress.com/2012/08/15/10-signs-that-snow-crashs-gargoyles-already-exist/"> 1992 novel <em>Snow Crash</em></a>, where despicable high-tech busybodies called “gargoyles” make a living by scanning and snitching on everyone around them for a Google-esque company called the Central Intelligence Corporation.</p>

<p class="has-text-align-none">But unlike Google in 2012, Meta’s wearable ambitions seem to be on better footing — at least in terms of making products that don’t immediately compel people to shove you into a locker. The new devices have major brand partnerships and are far less conspicuous than previous iterations. Tiny cameras are located on either the nose bridge or the outer rim of the glasses frames, and a small pulsing LED serves as the only hint that the device is recording. The Meta Ray-Ban Display glasses also include a built-in display, a voice-controlled “Live AI” feature that<a href="https://futurism.com/mark-zuckerberg-ai-glasses-demo-fails"> failed spectacularly on stage</a>, and a wearable wristband that operates the device with hand gestures, meaning a quick flick of the wrist is all it takes for someone to start livestreaming their surroundings to the company’s servers.</p>

<p class="has-text-align-none">Of course, it didn’t take long for the inevitable to happen. Photos have already emerged showing<a href="https://www.404media.co/a-cbp-agent-wore-meta-smart-glasses-to-an-immigration-raid-in-los-angeles/"> CBP and ICE agents</a> wearing Meta smart glasses during immigration raids in Los Angeles and Chicago. And last week, the University of San Francisco’s Department of Public Safety sent an alert to students after a man wearing Ray-Ban Meta glasses was<a href="https://www.wane.com/news/university-warns-of-man-recording-women-on-campus-with-meta-glasses/"> seen recording and harassing women on campus</a>. Given everything else going on right now — like the Trump administration<a href="https://www.politico.com/news/2025/09/30/judge-young-ruling-trump-deportation-free-speech-00588114"> cracking down</a> on<a href="https://www.usatoday.com/story/news/2025/09/19/jimmy-kimmel-abc-first-amendment/86243235007/"> political speech</a> and summoning National Guard troops to<a href="https://www.thedailybeast.com/trump-orders-troops-into-blue-city-for-war-on-ice-protests/"> invade American cities</a> — you’d be forgiven for thinking that people walking around with AI-powered cameras on their faces is an absolute nightmare.</p>

<p class="has-text-align-none">Which raises the same question that comes up every time tech companies move fast and break things: How is any of this even legal — let alone ethical? Legal experts say that Meta’s smart glasses exist in the ever-widening chasm between what the law says and how it actually works in practice.</p>

<p class="has-text-align-none">“Most [privacy] laws are inadequate to address this new technology,” Fred Jennings, an independent data privacy attorney based in New York, told <em>The Verge</em>. “The [legal] damages are too small, the enforcement process is too cumbersome, and they weren&#8217;t written with anything like this kind of ubiquitous private recording in mind.”</p>

<hr class="wp-block-separator has-alpha-channel-opacity" />

<p class="has-drop-cap has-text-align-none">When it comes to internet-connected devices that capture audio and video, the conventional wisdom of the smartphone era has been that everything in public is “fair game.” But while this has proven mostly true for activists<a href="https://www.eff.org/deeplinks/2020/06/you-have-first-amendment-right-record-police"> recording the police</a>, legal experts say this idea that private citizens have absolutely no reasonable expectation of privacy in public has been distorted to extremes over the years. Today, many people have a kind of “privacy nihilism” driven by the ubiquitous presence of cameras and internet-connected devices. The assumption is that everyone is being recorded in public anyway, so what’s the big deal? This gets further complicated by body-worn devices that can instantly and surreptitiously record a person’s surroundings.&nbsp;</p>

<p class="has-text-align-none">Historically, the rules around public recording and surveillance come from a patchwork of different laws and legal principles. One of them is something called the “plain view doctrine”: the idea that evidence used to justify a search has to be in “plain view” — something that could be easily seen by a casual observer without enhancement tools. The Supreme Court reinforced this principle in the 2001 case <em>Kyllo v. United States</em>, which involved a police raid on an indoor cannabis grow in Oregon after cops had used thermal cameras to detect warmer temperatures inside the building. The court ruled that this violated the Fourth Amendment, because the thermal cameras augmented regular vision and allowed police to “explore details of the home that would previously have been unknowable without physical intrusion.”</p>

<p class="has-text-align-none">Of course, none of this anticipated that internet-connected cameras would soon be on every street corner, let alone that average citizens would have wearable, AI-powered personal devices that can record and upload everything around them.</p>

<p class="has-text-align-none">“Most people have a <em>Law &amp; Order SVU</em>-level understanding of this doctrine, and took it to assume everything is fair game and therefore there’s no reasonable expectation of privacy in public,” said Jennings. “A lot of technology, these Meta glasses being a perfect example, get built off of this public mentality.”</p>

<p class="has-text-align-none">Kendra Albert, a technology lawyer and partner at Albert Sellars LLP, said that just because there is less expectation of privacy in public versus in private doesn’t mean that anything goes. Especially when things like facial recognition and live speech transcription can use an image or audio recording to unlock previously inaccessible troves of data about a person. Facial recognition on Meta Ray-Ban glasses is currently only possible using <a href="https://www.theverge.com/2024/10/2/24260262/ray-ban-meta-smart-glasses-doxxing-privacy">third-party tools,</a> but <a href="https://www.theinformation.com/articles/meta-renews-work-facial-recognition-tech-privacy-worries-fade"><em>The Information</em> reported</a> in May that the company <a href="https://www.404media.co/well-well-well-meta-to-add-facial-recognition-to-glasses-after-all/">is developing facial recognition features</a> for the devices.</p>

<p class="has-text-align-none">“The Meta glasses clash with folks’ normal assumptions regarding public space because we don’t expect people around us to be surveilling us, or able to tie our legal name or the rest of our identity to us without some effort,” Albert told <em>The</em> <em>Verge</em>. “If I’m at the coffee shop and I’m complaining about something, I might not expect that other people in the coffee shop can just attribute those comments to me with my real name as they could if I was making them online on an account that’s under my name.”</p>

<hr class="wp-block-separator has-alpha-channel-opacity" />

<p class="has-drop-cap has-text-align-none">In the US, the laws governing recording in public spaces vary from state to state and depend on whether you’re recording video, audio, or both. For audio recordings, states have one of two types of restrictions: “single-party” consent or “all-party” consent (also known as “two-party” consent). Most states have single-party consent laws, meaning there’s nothing legally stopping you from secretly recording a conversation as long as you’re one of the parties involved. Only <a href="https://worldpopulationreview.com/state-rankings/two-party-consent-states">11 states</a> require everyone involved to consent to the recording, hence “all-party.”</p>

<p class="has-text-align-none">For commercial recordings — like a film crew shooting a busy street corner — other rules can apply. Some states have laws that protect commercial recording as long as visible notices are posted letting people know that a recording is taking place in the area. States also have “rights of publicity” protecting individuals from having their likeness used in a commercial recording without their consent.&nbsp;</p>

<p class="has-text-align-none">Obviously, the reality of this is way more complicated now that we are surrounded by internet-connected cameras that send data to tech companies. So does the law protect us when a consumer device captures our voice and likeness without consent and then transmits that data to Meta’s servers, where the company can use it for all sorts of purposes?</p>

<p class="has-text-align-none">“That&#8217;s the million dollar question, essentially,” said Jennings. “If I record someone and that gets uploaded to Meta’s cloud storage, I have captured that person’s likeness and transmitted it to a third party.” Users have plenty of good reasons to be concerned, given Meta’s history. The company has <a href="https://arstechnica.com/tech-policy/2025/08/jury-finds-meta-broke-wiretap-law-by-collecting-data-from-period-tracker-app/">violated wiretapping laws</a> and <a href="https://www.theverge.com/2022/8/10/23299502/facebook-chat-messenger-history-nebraska-teen-abortion-case">helped police investigate alleged abortion seekers</a> by turning over their chat histories, and more recently joined other tech companies in very publicly <a href="https://www.theverge.com/policy/772760/tech-ceos-ai-trump-white-house-dinner">cozying up to the Trump administration</a>.</p>

<p class="has-text-align-none">But whether or not consent violations with Meta glasses could actually result in any legal action depends heavily on the situation, including what the user and the company do with the recording, said Jennings. In many cases, individual damages are extremely small and often handled by class-action lawsuits, like the<a href="https://www.forbes.com/sites/kateoflahertyuk/2025/05/08/apple-siri-eavesdropping-payout-approved-heres-how-to-make-a-claim/"> Siri eavesdropping settlement</a> earlier this year that saw Apple pay out a paltry $95 million — hardly a disincentive for massive companies that produce the privacy-invasive technologies in question.</p>

<p class="has-text-align-none">“Even if a state hypothetically passed a law that held companies responsible and gave people individual right to sue, it would still be backwards-looking. You would only be able to do that after someone had already had their privacy violated,” said Jennings.</p>

<p class="has-text-align-none">Proving harm in individual cases would be difficult and time-consuming, too, legal experts say. One potential factor could be whether or not the person gave enough notification to bystanders that the devices are recording. On <a href="https://www.meta.com/ai-glasses/privacy/">Meta’s website</a>, the company advises users of the Meta glasses to “use your voice or a clear gesture when controlling your glasses to let them know you’re about to capture, particularly before going Live,” and to “stop recording if anyone expresses that they would rather opt out.”&nbsp;</p>

<p class="has-text-align-none">The devices also have a security feature that prevents recording if the indicator light is covered by something, like a piece of tape. But some people have <a href="https://www.404media.co/how-to-disable-meta-rayban-led-light/">already found ways to disable this feature</a>, and lawyers aren’t sure whether it would actually stand up in court.</p>

<p class="has-text-align-none">“It’s not clear to me that a small red light would be sufficient notification in some states for someone to consent to being recorded,” said Albert, noting how someone having a camera on their face is visually a lot different from someone holding up their phone to record. “The fact that when you’re recording on a cameraphone you have to have your [device] out, and people know that, changes how people behave.”</p>

<p class="has-text-align-none">In private spaces, however, the rules become a lot less ambiguous.</p>

<p class="has-text-align-none">Recording people without consent in a home or office is an obvious no-no, and in many states violators can be <a href="https://law.justia.com/codes/new-york/2014/pen/part-3/title-n/article-250/250.05">charged with a felony</a>. On the other hand, a private business that’s open to the public — like a coffee shop — may allow some forms of recording, but also has the discretion to kick someone out for violating the privacy of customers and staff. Laws governing these spaces vary from state to state, but the enforcement is left mostly up to the owners. In either case, a pulsing recording light on a pair of glasses is probably too legally ambiguous to allow for proper consent. Jennings says one thing business owners and semi-public spaces can do to make things clearer is hang up signs telling people to remove the devices while inside. But ultimately, true privacy would mean getting the law, the tech, and the written / unwritten social rules to align.</p>

<p class="has-text-align-none">“To really protect people, what we&#8217;d need is more akin to the recreational camera-drone ‘no-fly zones’ — proactive restrictions baked into the technology as well as encoded in law that punish end users and manufacturers alike for their violations of recording consent boundaries,” said Jennings.</p>

<hr class="wp-block-separator has-alpha-channel-opacity" />

<p class="has-drop-cap has-text-align-none">Failing that, good old-fashioned shame is still the most powerful check we have on nonconsensual recording, privacy advocates say.</p>

<p class="has-text-align-none">“We saw this with Google Glass. People made clear that people weren’t welcome in an area if they were wearing these things,” Chris Gilliard, a privacy scholar and co-director of the Critical Internet Studies Institute, told <em>The</em> <em>Verge</em>.</p>

<p class="has-text-align-none">The Ray-Ban Meta glasses and other wearable smart devices are what Gilliard calls “<a href="https://digitaldemocracies.org/chris-gilliard-luxury-surveillance/">Luxury Surveillance</a>,” a class of consumer product that attempts to redefine social norms around consent by making surveillance into a chic fashion accessory. Companies like Meta invest in these devices believing they can create conditions where the tech is normalized and accepted, or at least very difficult for people to reject. But regardless of what other hypothetical use cases the companies pitch to justify these products, Gilliard said, they are still ultimately surveillance tools designed to violate consent.</p>

<p class="has-text-align-none">“I think they are a profoundly antisocial technology that should be rejected in every way possible,” said Gilliard. “Their very existence is toxic to the social fabric.”</p>

<p class="has-text-align-none">It’s still up in the air whether Meta’s gamble on glasses will pay off. Beyond their horrifying privacy implications,<a href="https://www.theverge.com/reviews/627056/bee-review-ai-wearable"> wearable AI-powered devices</a> like Bee and Friend so far have proven <a href="https://www.theverge.com/column/791010/optimizer-friend-ai-companion-wearables">more obnoxious than useful</a>, and it’s unclear whether people who buy them<a href="https://arstechnica.com/gadgets/2023/08/even-people-who-bought-metas-ray-ban-smart-glasses-dont-want-to-use-them/"> will even want to use them</a>. But one thing many privacy experts agree on is that even if we can’t change the law, we can change peoples’ attitudes around consent.</p>

<p class="has-text-align-none">“One way to think about it is protecting your community and the people you care about,” said Gilliard. “When you’re wearing these glasses, when you use your video doorbell, when you record everyone’s conversations, you’re not just surveilling yourself. And there’s no consistent and foolproof way to guarantee that information won’t be used against people you care about — to hurt trans and queer people, or hurt immigrant communities. I wish people would think about it in those terms instead of ‘did my package get delivered.’”</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Janus Rose</name>
			</author>
			
			<title type="html"><![CDATA[The return of the trans underground]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/cs/features/798493/trans-underground-organizing" />
			<id>https://www.theverge.com/?post_type=vm_custom_story&#038;p=798493</id>
			<updated>2026-04-01T12:07:04-04:00</updated>
			<published>2025-10-14T09:00:02-04:00</published>
			<category scheme="https://www.theverge.com" term="Features" />
							<summary type="html"><![CDATA[In the early 1970s, long before social media and more than a decade before the earliest internet forums, a woman named Peggie Ames became a human rolodex for trans women in New York state. Born in Buffalo, Ames spent years working for gay rights organizations in the rural and suburban areas of Western New York. [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/10/257907_future_of_being_trans_UNDERGROUND_CVirginia.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">In the early 1970s, long before social media and more than a decade before the<a href="https://blog.avast.com/transgender-women-community-1980s-internet"> earliest internet forums</a>, a woman named Peggie Ames became a human rolodex for trans women in New York state.</p>

<p class="has-text-align-none">Born in Buffalo, Ames spent years working for gay rights organizations in the rural and suburban areas of Western New York. In the days before the internet, it wasn’t easy to meet other trans folks outside the densely populated boroughs of New York City. But Ames had built an extensive social network of trans women and cis allies through her work with the Erickson Educational Foundation, which funded research on trans medical care, and the Mattachine Society of the Niagara Frontier, a local offshoot of the<a href="https://time.com/5600191/mattachine-society/"> pre-Stonewall-era gay rights group</a> of the same name.</p>

<p class="has-text-align-none">After she was forcibly outed in 1973, Ames became one of the relatively few openly transsexual women with a public profile at the time. In the pre-internet days, this made her someone that trans women turned to in the hopes of reaching others like them. By the end of the decade,<a href="https://theestablishment.co/the-life-and-legacy-of-trans-activist-peggie-ames/index.html"> Ames estimated</a> that she knew around 100 other trans people in the Western New York area alone. As a public figure, she saw it as her responsibility to help connect the scattered members of her community.</p>

<p class="has-text-align-none">Ames was one of several trans women who ran underground trans social networks like this in the ’70s and ’80s. It worked like this: A well-connected and more publicly visible trans woman would receive letters from other trans people from around the country. She would then dig through her little black book and write back to the sender, including contact information for other trans people she had previously connected with. At a time when many trans people were still closeted and isolated, these ad hoc pen pal networks were a lifeline.</p>

<hr class="wp-block-separator has-alpha-channel-opacity" />

<p class="has-text-align-none">This model of trans activism seems quaint compared to the Extremely Online communities of today. Even the terms trans people use to refer to themselves have changed — first adopting “transgender,” with some more recently reclaiming “transsexual” to emphasize the material conditions of living in a trans body.&nbsp;</p>

<p class="has-text-align-none">This intentional use of “transsexual” is, in part, a rejection of the utopian, <a href="https://gracebyron.substack.com/p/validity-is-not-political">assimilationist identity politics</a> that dominated the latter half of the previous decade. With an explosion of online social media, policy wins, and <a href="https://www.thedailybeast.com/whatever-happened-to-the-transgender-tipping-point/">glossy magazine covers</a> featuring stars like Laverne Cox, the 2010s saw trans identity and visibility become the political spearhead of a supposedly inevitable progressive shift for LGBTQ rights, promising an end to the centuries of discrimination and closeted shame that had preceded it.</p>

<p class="has-text-align-none">Of course, we all know what happened next.</p>

<p class="has-text-align-none">The obsessive campaign to <a href="https://www.motherjones.com/politics/2025/09/charlie-kirk-shooter-trans-ideology-false/">blame trans people for the killing of Charlie Kirk</a> is just the latest and most extreme chapter of the anti-trans backlash that has been intensifying for years — creeping from the fixations of<a href="https://www.bbc.com/news/articles/cn0x2kx08wdo"> D-list celebrities</a> and doxxing forums like<a href="https://www.nbcnews.com/tech/internet/cloudflare-kiwi-farms-keffals-anti-trans-rcna44834"> Kiwi Farms</a> to a mainstream fascist movement supported by the highest levels of government. In just a few short decades, trans people went from living in relative obscurity to being a scapegoat of the reactionary right, absurdly blamed for gun violence they’re <a href="https://thehill.com/opinion/civil-rights/5497025-trumps-idea-to-ban-guns-for-trans-people-would-backfire/">statistically unlikely to commit</a> and subjected to bad-faith media “debates” and <a href="https://translegislation.com/">discriminatory laws</a> challenging their right to exist in public. For many, it’s an impossible situation: once believing the arc of history to be bending in their favor, countless trans people now live out their lives publicly online at the very moment that an unhinged authoritarian surveillance state has <a href="https://newrepublic.com/article/200816/fbi-baseless-project-trans-extremism">declared them public enemies</a> and<a href="https://www.yahoo.com/news/articles/trump-administration-quietly-blocks-gender-195903525.html"> targeted them</a> for elimination.</p>

<figure class="wp-block-pullquote"><blockquote><p><strong>I don’t see this as a moment to despair — it’s merely a sign to change tactics</strong></p></blockquote></figure>

<p class="has-text-align-none">At the same time, social media — <a href="https://www.wired.com/2009/04/inside-moldovas/">once hailed</a> as the tool of 21st-century revolutionaries — has been transformed into a weapon of surveillance and distraction. Instead of organizing and<a href="https://www.404media.co/you-cant-post-your-way-out-of-fascism/"> building political power in our communities</a>, many of us find ourselves doomscrolling through hot takes on algorithmic hamster wheels owned by billionaire reprobates like Elon Musk and Mark Zuckerberg.</p>

<p class="has-text-align-none">AI-powered social media surveillance has been supercharged under Donald Trump, and given his administration’s <a href="https://www.them.us/story/trump-anti-trans-executive-order">crusade against trans people</a>, it’s not hard to do the math. According to public records, the Trump administration has contracted with at least four different<a href="https://www.politico.com/newsletters/digital-future-daily/2025/04/08/the-worries-about-ai-in-trumps-social-media-surveillance-00279255"> AI-driven surveillance companies that analyze social media posts</a> and claim the ability to perform “sentiment &amp; emotion analysis” for federal agencies like ICE. Even offline, the rise of <a href="https://www.theswaddle.com/how-facial-recognition-ai-reinforces-discrimination-against-trans-people">facial recognition</a> combined with transphobic policing of public spaces like bathrooms creates new risks for trans people and <a href="https://www.advocate.com/news/lesbian-mistaken-transgender-arizona-walmart">anyone else</a> whose appearance doesn’t conform to gendered norms. And of course, any trans person posting or merely existing online always risks breaking containment and drawing the attention of the <a href="https://www.them.us/story/libs-of-tik-tok-twitter-facebook-instagram-explained-childrens-hospitals-grooming">right-wing Griftosphere</a>, resulting in<a href="https://www.them.us/story/queer-teachers-are-under-attack-libsoftiktok-conservatives-school-board-protests"> doxxing</a> or worse.</p>

<p class="has-text-align-none">Still, I don’t see this as a moment to despair — it’s merely a sign to change tactics.</p>

<hr class="wp-block-separator has-alpha-channel-opacity" />

<p class="has-text-align-none">The more I read about people like Peggie Ames, the more I think it’s time for us to ask whether the public internet has outlived its usefulness as our primary tool for political activism — trans or otherwise. I don’t mean that we should all throw away our phones and go back into the closet, but rather reconsider the logic that so much of our lives needs to unfold over public networks. If queer and trans folks are going to survive, we’ll need to once again embrace the underground, and learn when to be visible and when to shut the fuck up.</p>

<p class="has-text-align-none">In <em>Going Stealth: Transgender Politics and U.S. Surveillance Practices</em>, Toby Beauchamp outlines the long history of state surveillance as a tool for policing the bodies of trans and gender-nonconforming people. The term “going stealth” refers to the long-standing practice of trans people selectively obscuring their transsexual status — not as a form of deception but in order to regain some level of control over their lives and safety, knowing that perfect obscurity is usually impossible. Beauchamp frames this practice as a response to a society where suspicion and guilt are often preemptively assigned to people whose bodies are perceived as disabled, non-white, or gender-transgressive. (He recalls how in the immediate aftermath of the <a href="https://abcnews.go.com/US/room-211-massacre-virginia-tech-remembered-10-years/story?id=46701034">Virginia Tech shooting</a>, for example, police were called to investigate a “suspicious” person on a school campus near Detroit, Michigan, who was <a href="https://www.seattletimes.com/nation-world/copycat-threats-force-lockdowns-at-schools-in-9-states/">described as a man wearing a blonde wig and makeup</a>.) More recently, the Trump administration ramped up its efforts to <a href="https://www.newyorker.com/news/the-lede/the-bureaucratic-nightmares-of-being-trans-under-trump">nullify gender marker changes on IDs</a> like passports, making it so the information on a person’s documents doesn’t match up with their appearance. The goal of this is crystal clear: to make a person’s transsexual status legible, and thus subject to discrimination by agents of state violence in airports, bathrooms, and anywhere else policing is present.&nbsp;</p>

<p class="has-text-align-none">These are just a few examples of why trans people often construct their entire lives around navigating the state’s gaze. And it shouldn’t be surprising, given this reality, that more trans people are now choosing to take back control and embrace lifestyles that deprioritize online visibility in favor of personal safety.</p>

<p class="has-text-align-none">This approach doesn’t have to be all or nothing. In a <a href="https://substack.com/home/post/p-158945448">recent essay</a>, trans author Margaret Killjoy coins the term “demiground” to describe what a post-internet hybrid activism might look like. The idea of this paradigm is to compartmentalize your online / offline life into multiple discrete boxes, all with varying degrees of visibility and measured risk. Your “A” life includes all your social media with your most “palatable” / non-spicy persona, providing cover for your “B” and “C” lives, which prioritize in-person communication and unfold with different levels of public obscurity (and sometimes legality).</p>

<p class="has-text-align-none">The goal isn’t to retreat from online spaces and give fascists what they want, but to create a more disciplined level of control over your digital footprint. “In order to populate the demiground, we need to make it as inviting as possible,” Killjoy writes. “It needs to be clear that not only is there political value in being obscure to the state, but that it is also a better and more fulfilling way to live.”</p>

<figure class="wp-block-pullquote"><blockquote><p><strong>Online social networks are just a tool, and tools need to be constantly reevaluated to make sure they’re still serving our needs</strong></p></blockquote></figure>

<p class="has-text-align-none">This idea is not new, and has been widely practiced by people who live in precarious relation to state violence, like sex workers. I’ve seen this “hybrid” approach manifesting in my own queer and trans social circles as an insistence on moving more discussions to end-to-end encrypted platforms like Signal (or for less risky chats, server-based platforms like Discord, which can be subject to <a href="https://www.ign.com/articles/nintendo-requests-subpoena-of-discord-to-track-down-user-behind-last-years-pokemon-teraleak">court subpoenas</a>). But more important than the tools themselves is the mindset that determines how and when they’re used. It might be a good idea to regularly check in with your people via encrypted group chats, Discord calls, or even Bluesky. But we can’t let them stand in for time spent organizing face-to-face with neighbors.&nbsp;</p>

<p class="has-text-align-none">In other words, we should be thinking of online tools as a means of facilitating — and not replacing — the kind of connection and local organizing that help queer and trans people survive.</p>

<p class="has-text-align-none">While talking about any specific efforts in detail would violate the aforementioned golden rule (Shut The Fuck Up), suffice it to say that the dicier the political situation gets for trans people and other marginalized folks, the more survival work will need to be carried out in the underground. Localized networks that help trans people access medical care, combat discrimination, and relocate away from states hostile to their lives already exist. By taking an “if you need to know, you know” approach to these activities — especially when they exist in gray areas of the law, like <a href="https://apnews.com/article/abortion-help-navigators-pills-roe-v-wade-f760b2817126d56e6cfa5144c9f7e547">abortion</a> — we can create social buffers that resist the gaze of the state and the insatiable viral hunger of corporate internet platforms.</p>

<hr class="wp-block-separator has-alpha-channel-opacity" />

<p class="has-text-align-none">The essential activism being done in the underground and its various levels of online / offline community is not meant to be glamorous. It’s not “content” to be shared by influencers as a slickly edited TikTok, or an edgy tweet, or an Instagram slide deck. It’s the un-monetizeable and deeply unsexy work done by people like Peggie Ames, who saw it as her responsibility to help people like her connect and organize outside of the spaces that scorned and rejected them. As a trans lesbian, Peggie had struggled to be accepted by many cis feminists, and was expelled from several lesbian groups in the Buffalo area, where her mannerisms and more “traditional” style of feminine dress were ruthlessly scrutinized as “evidence” that she was really a man. At the same time, her personal connections and long history of activism made her a kind of local celebrity in the LGBTQ community, giving her a unique opportunity to help unite the disparate trans community in the days before the internet.</p>

<p class="has-text-align-none">This is not to understate the role that online communities — and social media in particular — played in uniting many trans people. The rise of the internet empowered once-isolated and confused trans youth and adults to name and explain long-suppressed feelings by talking to others like them. While right-wing reactionaries manufactured moral panics about a “social contagion” turning our kids trans, it wasn’t the number of trans people that had grown — it was the reach of light-speed communication networks that could show them they’re not alone.&nbsp;</p>

<p class="has-text-align-none">Even still, online social networks are just a tool, and tools need to be constantly reevaluated to make sure they’re still serving our needs. The ad hoc networks created by trans women like Peggie Ames may not be a blueprint for trans liberation in 2025. But they are a reminder that queer and trans people have always found ways to survive in the underground — and the various shades of gray that exist in between.</p>
						]]>
									</content>
			
					</entry>
	</feed>
