bad art is so critical for being a healthy human being. you gotta see some awful
movies or read some dogshit books or play a horrendous video game every now and
then. I'm not talking about "I like it despite its flaws" I mean fuckin bad.
just to keep your tastes well-calibrated you know?
She turned away from anti-trans activism and leaked a trove of emails that exposed the inner workings of the right-wing network dedicated to rolling back our rights. It was a great gift to the trans community and journalism.
[Image: Elisa Rae Shupe flying a trans flag in 2016. Photo by Sandra E. Shupe, Creative Commons Attribution-Share Alike 4.0 International.]
by Evan Urquhart
Elisa Rae Shupe was a transfemme veteran, the first American legally recognized as nonbinary, and a whistleblower who exposed the inner machinations of the movement to roll back transgender rights. She took her own life on Jan. 27, her body wrapped in a trans pride flag, outside the Syracuse VA Medical Center.
Although rumors had circulated widely online that Shupe was the one who had died, her identity was not confirmed by any news outlet until today, when syracuse.com published a story connecting her life to the tragedy in the medical center parking lot.
I corresponded with Elisa occasionally by email, starting in 2022. I first reached out after a tipster told me that she had turned away from the anti-trans activism in which she had participated for several years as a prominent detransitioner. Her Wikipedia talk page included messages I later confirmed were from Elisa, asking that her page be updated to reflect that she had retransitioned and had denounced her anti-trans statements. I emailed to ask if she might be willing to tell her story.
Elisa’s first email to me mentioned mental illness. “I realized something … How, myself included, a lot of these [detransitioner] women have serious mental health issues and way too much time on their hands,” she wrote. “They're sitting at home on disability with nothing to do, just like I was and still am, and that plays a role in them getting sucked into the GC movement and subsequently radicalized.”
No story of mine came out of this, but when stories relating to Elisa’s disillusionment with anti-trans activism finally broke, they represented some of the most significant journalism on the anti-trans right ever published.
Madison Pauly, writing for Mother Jones, published an in-depth examination of the inner workings of the “religious-right networks behind transgender health care bans” on March 8, 2023, that relied upon Elisa’s trove of email correspondence with the architects of the anti-trans movement.
A few days later a profile of Shupe by Jude Doyle appeared on the website Xtra. Doyle’s piece was an intimate portrayal of the way mental illness made Shupe vulnerable to exploitation by people who saw her as a weapon to be used against trans people. Assigned Media’s Trans Data Library project relied heavily on this reporting in many of its entries.
By leaking these emails, Elisa gave a great gift to both the trans community and journalism. The emails, and Elisa’s extensive cooperation with the reporters who wrote about them, epitomized courage, because coming forward necessarily involved Elisa taking full responsibility for her own participation in anti-trans extremism. It also involved exposing her mental health struggles before an American public that heavily stigmatizes mental illness, particularly personality disorders and severe illness that requires hospitalization.
But Elisa believed that her borderline personality disorder lay at the heart of what made her exploitable by bad-faith actors, and she wanted the world to know it.
Borderline personality disorder may also have made her exploitable by journalists. Or, if you prefer to look at it another way, the restlessness, risk-taking, and self-destructiveness of her BPD were what allowed Elisa to strike one of the most significant blows against the anti-trans movement.
Perhaps bravery by another name is mental illness. Or, at least, maybe it was like that for Elisa.
Elisa’s death hit me hard. I’m not always good at remaining in touch with former sources (or almost-sources), but I made an effort with Elisa because she was so clearly struggling, and because her history of anti-trans activism made it difficult for trans people to fully support her. I tried to stay in touch, though not as well as I’d have liked. I mostly wanted her to know, when she experienced dark times, that she would always have my respect and support.
Mental illness is also deeply personal for me. I’ve written occasionally about my struggles with self-harm and disordered eating pre-transition, and very occasionally mentioned that this included inpatient stays. Elisa’s illness held a familiarity that was too close for comfort, yet so far from where I am now, healthy and whole after transitioning.
Now she is dead. Could I have done more? Yes. Would it have changed things? Probably not. I know I’m not that powerful. Still, I wish I’d reached out, just to say hello, a bit more often, and tried harder to be a companion in dark places.
If I had, at least I might have known her better.
I will remember Elisa as a hero and a martyr, a complicated person who I could have known better if I’d only made time to do so. I will miss her occasional emails, and honor her for her contributions to history and to the trans rights movement.
World where demons are real and around and ontologically evil but with an extremely Christian-fundamentalist definition of "evil". They don't care what sins they facilitate as long as they're sins. Murder is just bad economics: you're getting someone killed who would do sins of their own. Even labor exploitation isn't great, because ideally everyone has free time for the real big-ticket sins like lust and sloth and gluttony.
So of course a bunch of demons end up working as pornographers, musicians, chefs, and so on. But it's also rare to see a queer youth center without at least a few demon volunteers absolutely dead set on helping everyone accept themselves. There are even a few who really play the long game doing labor organizing and base-building so that one day humans will be free from the shackles of capitalism and the sins of idleness will really start rolling in.
Angels exist in this setting too but they're such dour killjoys that barely any humans will tolerate spending time with them and they mostly just live off in their own towns in the middle of otherwise deserted areas.
Google’s featured snippet is pulling in an Amazon AI summary of Adolf Hitler’s Nazi manifesto Mein Kampf that calls it “a true work of art” in the latest AI-related fuckup affecting top search results.
As of writing, searching for “mein kampf positive reviews” returned a result that was pulled from an AI-generated summary of an Amazon listing’s customer reviews. So, it’s a search algorithm attempting to summarize an AI summary. The full AI summary on Amazon says: “Customers find the book easy to read and interesting. They appreciate the insightful and intelligent rants. The print looks nice and is plain. Readers describe the book as a true work of art. However, some find the content boring and grim. Opinions vary on the suspenseful content, historical accuracy, and value for money.”
As I’m writing this, Google says “An AI Overview is not available for this search,” but the Amazon AI summary appears in large text directly below it, in the space where an overview would typically be, above other web results. This is what Google calls a featured snippet: "Google's automated systems select featured snippets based on how well they answer the specific search request and how helpful they are to the user," the company says. A highlight, added by Google, appears over the phrase “easy to read and interesting.” Notably, the featured snippet result here doesn’t quote everything from Amazon’s AI, so it is itself a summary of a summary.
[Screenshot: Google's result for "mein kampf positive reviews" as of early Thursday morning, showing the Amazon review as a "featured snippet."]
[Screenshot: Amazon's AI-generated review summary.]
Alexios Mantzarlis, the director of the security, trust, and safety initiative at Cornell Tech and formerly principal of Trust & Safety Intelligence at Google, first spotted the result, writing: “Uh... Amazon's AI summary of Mein Kampf is even worse, and pollutes Google results for [Mein Kampf positive reviews].”
After I contacted Google for comment (the company hasn’t responded as of writing), an AI Overview did appear, noting that the book is “widely condemned for its hateful and racist ideology,” but that historical analyses “might point to aspects of the book that could be considered ‘positive’ from a purely literary or rhetorical perspective.”
[Screenshot: Google's search result for "mein kampf positive reviews" as of late Thursday morning, showing the AI Overview result.]
This is, at least, a better summation of the conversation around Hitler’s book than Amazon’s AI summary gives. The AI-generated review summary on the Amazon listing also shows links to see reviews that mention specific words, like “readability,” “read pace,” and “suspenseful content.” Enough people mentioned Mein Kampf being boring that there’s a “boredom” link, too.
Amazon did not immediately respond to a request for comment.
The 2,067 reviews for this specific copy of Hitler’s fascist manifesto are mostly positive, and taken extremely literally, the blueprint for Nazism is easy to read and, in some sense, “interesting.” But the reviews are much more nuanced than that. Giving the roadmap for the Holocaust, from the world’s most infamous genocidal dictator, “five stars” seems twisted, but these reviews carry a nuance that a human can grasp and AI clearly doesn’t.
“Mein Kampf, by Adolf Hitler, should be read by everyone in the world who are interested in a world of peace, social responsibility, and worldwide cooperation,” one reviewer wrote, in an honestly pretty concerning start to a very long review. But they go on to write more that clarifies their point of view: “This evil book presents a dark vision of how to go about creating tyranny in a democratic society so that one, similar to Russia, is created. [...] Also, Hitler is an excellent writer; he is not a rambling madman writing disconnected ideas and expressing a confusing methodology. His text is easy reading, and it is a world classic that is a must read.”
Another five-star review says: “Chilling to begin reading this book and realize that these are the words written by Adolf Hitler. Read it and absorb what he says in his own words and you soon grasp what he means. [...] We are bound to repeat History if we don't understand mistakes that were made in the past.”
These aren’t “positive” reviews; most of the five-star reviews are noting the quality of the print or shipping, and not endorsing the contents of the book.
Mein Kampf has never been banned in the U.S. (unlike plenty of other books about race, gender, and sex), but Amazon did briefly ban listings of the book from its platform in 2020 before reinstating it.
Google’s AI Overview shoots itself in the algorithmic foot frequently, so it’s noteworthy that it’s sitting this result out. When it launched in May 2024 as a default feature on searches, it was an immediate and often hysterical mess, telling people it’s chill to eat glue and that they should consume one small rock a day. In January, the feature was telling users to use the most famous sex toy in the world with children for behavioral issues. These weird results are beside the bigger point: Google’s perversion of its own search function—its most popular and important product—is a deep problem that it still hasn’t fixed, and that has real repercussions for the health of the internet. At first, AI Overview was so bad Google added an option to turn it off entirely, but the company is still hanging on to the feature despite all of this.
The Mein Kampf AI summaries are also an example of how AI is starting to eat itself online, and the cracks are showing. Studies in the last few years show that AI models are consuming AI-generated content as training data in a way that’s polluting and destroying the models themselves.
Elon Musk’s team of weird rationalists at DOGE continues to bring “move fast and break things” to functions that really need not to be broken. Anyone who knows anything about how anything works is worried. [CNN; Washington Post, archive]
The process is to feed data from a government process into an LLM and just assume the output is good.
But if you want to build a machine learning tool, let alone train an LLM, you have to understand the data and what it’s for.
DOGE has so far failed to realize that when the ancient Social Security database lists someone as 150 years old, it means they’re dead and not receiving payments. Musk told Joe Rogan on his podcast that he still thinks these people are getting money, because he’s not here to take in new information. [AP]
DOGE employee Jordan Wick has a GitHub full of tools for feeding government employees’ digital histories into an LLM to analyze them for political loyalty. [Fast Company]
These bozos are suspicious of people who understand things. DOGE fired the entire 18F office — the department tasked with improving technical efficiency across the US federal government, which is exactly what DOGE claims to be doing. But conservative media considered 18F a “far-left agency that viciously subverted Trump during his first term” — so it had to go. [Blood In The Machine]
Musk is firmly on the side of AI as magic. He told Rogan: [transcript]
In terms of silicon consciousness, I think we’re trending toward having something that’s smarter than any human, smarter than the smartest human by maybe next year or something, I mean, a couple years.
What’s Musk’s idea of an AI risk?
If there’s like a super oppressive, like woke nanny AI that is omnipotent. That would be a miserable outcome … And just like execute you if you misgender someone or something like that, you know?
Roko’s basilisk is clearly gendered female, as with AI assistant voices. Alt-right AI bros’ greatest fear is Mommy spanking them. [CNN, 2011]
Like most episodes of Rogan, this is a three-hour dive into mind-numbing stupidity. Musk and Rogan discuss their favorite conspiracy beliefs, such as stopping new vaccines. Musk also insists he really founded Tesla, whatever the historical record might say. “The fundamental weakness of Western civilization is empathy. The empathy exploit.” Yeah, thanks, Elon.