WPATH and USPATH Trash Talk the Cass Review

A response highlighted the lack of expertise involved in the Cass Review, the lack of evidence for many of the claims, and the lack of any new information that might change the picture of gender-affirming care.

by Evan Urquhart

The main professional bodies overseeing the standards for treatment of gender diverse people, WPATH and USPATH, have issued a joint response to the Cass Review, saying they are “extremely concerned” that the Review will serve as a barrier to necessary healthcare for transgender youth. The Review itself, which was published over one month ago, was billed as an independent look at the state of transgender healthcare in England. However, the Review’s objectivity has been questioned due to its asymmetric treatment of the evidence for and against gender-affirming care, resulting in very conservative recommendations that would remove access to medical and even social transition for most patients of the NHS. This is true despite the Review having found little to no evidence of regret, detransition, patients having been rushed into treatment, or any other form of harm stemming from mainline treatments for gender dysphoria in youth.

(Curious about what’s in the Cass Review? Read our summary here.)

The primary concern evinced by WPATH/USPATH is for the safety and health of England’s transgender youth. A conservative estimate for the prevalence of trans people in the population is 0.5 percent, meaning that we would expect at least 70,000 young people in that country to grow up to be trans. After the Cass Review published an interim report, the only available service for gender dysphoria in youth was shut down, and a replacement has yet to be made available.

By one recent estimate, only twelve patients have been seen since the GIDS service was shut down. There are reportedly 5,000 young people on the waiting list, still only a small fraction of the number of youth who can conservatively be expected to seek to transition as adults. Despite this, the Cass Review concluded, with no evidence given, that the number of youth seeking treatment was too high to be explained by trans identity alone.
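For concreteness, here is the arithmetic the two paragraphs above rely on, as a quick Python sketch. The roughly 14 million base of young people is simply recovered from the article's own figures, not an independent population estimate:

```python
# Quick check of the figures above, taken directly from the article.
prevalence = 0.005             # conservative 0.5 percent estimate
expected_trans_youth = 70_000  # article's "at least 70,000" figure
waitlist = 5_000               # reported waiting list

implied_young_people = expected_trans_youth / prevalence
print(f"Implied base of young people: {implied_young_people:,.0f}")  # 14,000,000
print(f"Waitlist as a share of expected trans youth: "
      f"{waitlist / expected_trans_youth:.0%}")                      # 7%
```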

Writing about the current state of trans youth healthcare in England, the WPATH/USPATH statement says, “This is a devastating situation for transgender youth and their families, whose rights are breached as they are being denied medically necessary care. We believe this to be a complete breach of the seven core values enshrined in the NHS Constitution.”

WPATH and USPATH also directly take on the credentials of Dr. Hilary Cass, who was tapped to lead the four-year Cass Review. “Hillary Cass is a pediatrician with hardly any clinical experience or expertise in providing transgender healthcare for young people. Furthermore, Hillary Cass lacks significant research qualifications or research expertise in transgender health,” the statement reads.

Transgender healthcare became controversial in the UK slowly, after a years-long campaign by right-wing tabloids that made the trans community the subject of unceasing negative attention, misinformation, and scare tactics. Although Cass claims in the introduction to the Report that her goal is to improve healthcare for young people (whom she proceeds to misgender incessantly throughout the 388-page Report), there is currently no sign of improvement to be had in England, and even trans adults are expressing fears of treatment being withheld, with the specter of forced detransition looming for young and old alike on that foggy, unhappy isle.


Evan Urquhart is the founder of Assigned Media and an incoming member of the 2024-2025 Knight Science Journalism fellowship class at MIT.


Here Is What Axon’s Bodycam Report Writing AI Looks Like

This article was primarily reported using public records requests. We are making it available to all readers as a public service. FOIA reporting can be expensive, so please consider subscribing to 404 Media to support this work. Or send us a one-time donation via our tip jar here.

In April, law enforcement contracting giant Axon announced Draft One, a new product that uses OpenAI’s GPT-4 Turbo to automatically generate police reports from officers’ bodycam audio. The product promises to make police work more efficient, freeing up resources for officers to direct elsewhere, while also raising serious questions about bias, accuracy, and the role of AI in policing. 

Since then, Axon has held webinars with prosecutors and law enforcement officials around the country, according to emails obtained by 404 Media and a review of recordings of the webinars online. Those recordings capture some of the concerns and hopes prosecutors have about the technology, and show Draft One in action, providing more insight into how exactly it works.

“I’d have to see many reports to understand that the AI was capable of writing a cogent, relatively accurate report,” Bryan Porter, Commonwealth’s Attorney for the City of Alexandria, VA, said in one of the webinars. He added that he would need lawyers who are working on cases to review artificially generated reports to make sure “that they are on point and helpful in the prosecution of their cases.”

On the day of Draft One’s announcement, Axon emailed law enforcement officials about the product launch and provided details about the upcoming webinar called “AI in Criminal Justice: What Prosecutors Need to Know,” according to a copy of an email sent to the San Diego Police. 404 Media obtained the email and others through a public records act request.

“We know that this kind of new AI technology will impact the criminal justice process, and so we wanted you to know about it early and give you the chance to ask questions,” the email reads. “We’ve spoken with a number of prosecutors across the United States already, and you’ll even hear from three different prosecutors during this webinar.”

Included in the webinar is a video demonstration of Draft One. It starts with Noah Spitzer-Williams, a senior principal product manager at Axon, showing a video filmed with an Axon bodycam. In it, a man named Marcus tells a police officer about a suspect who was yelling at a family, and provides a description of the suspect. Axon told 404 Media in an email that this incident was not genuine and was filmed for demonstration purposes.

Spitzer-Williams then opens up Draft One. The tool has a white, grey, and blue interface, with an option to start dictation, or click a button on the right which says “generate a draft narrative from the audio recordings of your incident.” He clicks that button, navigates to a search tab, and selects the previously shown video. 

The system then asks the user to select what type of incident they are making a narrative from. A multiple choice question has boxes for “domestic dispute,” “sexual offense,” “drug related,” “fraud and financial,” and many more. Spitzer-Williams says this selection impacts the generated output, because “we know that an impaired driving incident looks very different than a simple theft, for example.”

The user also needs to select the charge severity of the incident, be that no charge, an infraction, a misdemeanor, or a felony. This is done, Spitzer-Williams says, so customers can restrict Draft One’s use based on the incident type or the level of charges. (At a recent Axon event, Spitzer-Williams claimed that most agencies using the tool are already using it for all incident types.)

Finally, Spitzer-Williams clicks “generate draft.” The screen then tells officers to proofread the draft, make any corrections, and sign off on its accuracy. A few seconds later, Draft One produces the report. It includes sections where the officer is asked to provide additional information or context. 

“Marcus described the suspect as wearing a green jacket with the hood up and blue jeans, approximately the same height as himself. [INSERT officer’s observation of Marcus’s height for reference],” the report reads. In the demonstration, the system also deliberately inserted some errors that the officer must manually remove before the report can be submitted; this is to ensure that the officer is proofreading the report, Spitzer-Williams said. In this case, one of the mistakes was that “an emotional support alligator was stolen,” which did not happen in the bodycam footage.

A screenshot from the Draft One demonstration video.

After making their changes, the officer has to sign an acknowledgment that they used Draft One to generate the report and that they are willing to testify to its accuracy.
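Axon hasn’t published Draft One’s prompts or internals, but the flow shown in the demo can be sketched roughly in Python against OpenAI’s public API. This is a speculative sketch: the prompt wording, the [INSERT ...] placeholder convention, and the canary-error step are assumptions reconstructed from the demonstration, not Axon’s actual code.

```python
# Speculative sketch of a Draft One-style pipeline, reconstructed from the
# demo. Prompt wording, placeholders, and the canary step are assumptions.
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Known-false sentences the officer must find and delete before submitting,
# to prove the draft was actually proofread (per the demo).
CANARY_ERRORS = ["An emotional support alligator was stolen."]

def draft_report(transcript: str, incident_type: str, severity: str) -> str:
    """Generate a draft police narrative from a bodycam audio transcript."""
    prompt = (
        f"Write a first-person police narrative report for a {severity} "
        f"{incident_type} incident, using only the transcript below. Where "
        "a detail the officer observed is missing from the transcript, "
        "write [INSERT <missing detail>] rather than guessing.\n\n"
        f"Transcript:\n{transcript}"
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # the model Axon says Draft One uses
        messages=[{"role": "user", "content": prompt}],
    )
    draft = response.choices[0].message.content

    # Deliberately insert a canary error at a random position.
    paragraphs = draft.split("\n\n")
    paragraphs.insert(random.randrange(len(paragraphs) + 1),
                      random.choice(CANARY_ERRORS))
    return "\n\n".join(paragraphs)
```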

Matthew Guariglia, senior policy analyst at activist organization the Electronic Frontier Foundation (EFF), told 404 Media in a statement that “Policing, with its legal power to kill, detain, or ultimately deny people’s freedom, is too powerful an institution to outsource its official narratives to technologies in a way that makes officers immune to critique, transparency, or accountability because they have theoretical plausible deniability that they did not review the report thoroughly and therefore were not aware of mistakes the AI made.”

“We still have a tremendous number of questions about how the technology would reflect action, spoken words, and introduce uncertainty into reports—for instance, an officer shouting ‘stop resisting’ and whether it would report it as the officer's command or as the hypothetical resistance as fact. For that reason, we believe municipalities with the ability to control police purchases should show extreme restraint when it comes to deploying this new and problematic technology,” he added.

Axon told 404 Media in a statement that “While Draft One provides several benefits to law enforcement, we also believe it will benefit prosecutors who work with police agencies that leverage Draft One. Feedback from trial agencies indicate that prosecutor offices will receive reports that are more consistent, clear and detailed, which can help accelerate the justice process by delivering higher quality reports, faster.” The company added that “With Draft One, police narrative reports continue to be the responsibility of officers. Critical safeguards require every report to be edited, reviewed and approved by a human officer, ensuring accuracy and accountability of the information.”




Scientists Use Ultrasound to Make Cold Brew Coffee in 3 Minutes Instead of 24 Hours


A team of scientists in Australia say that they have found a way to make cold brew coffee in less than three minutes using an ultrasonic reactor. This is a potentially massive deal because cold brew normally takes between 12 and 24 hours to brew, a problem for me, personally, when I do not carefully manage my cold brew stock. The lead scientist on the research team tells me he has also created a “cold espresso,” which is his personal favorite and sounds very intriguing.

The researchers at the University of New South Wales Sydney claim that their ultrasonic extraction held up to a “sensory analysis” and blind taste tests by trained experts: “A sensory analysis was conducted to evaluate appearance, aroma, texture, flavor, and aftertaste, which demonstrated that coffee brewed for 1 and 3 min in the sonoreactor exhibited almost undistinguishable properties compared to a standard 24 hour [cold] brewing without ultrasound,” they write in a paper about the method in the journal Ultrasonics Sonochemistry.

For the uninitiated, cold brewed coffee is made by soaking coffee grounds in cold or room temperature water in large batches to create a concentrate that you can keep in the fridge for a week or two. Because the water is not hot, the extraction from ground coffee beans takes much longer than it does with traditional hot brewing. The resulting cold brew is less acidic, less bitter, and sweeter. This long brew time isn’t a problem if you plan ahead, but, as mentioned, if you do not plan ahead, you cannot really speed up the cold brew time while continuing to have cold brew. As lead author Francisco Trujillo notes in the paper, the resulting large batches of cold brew concentrate also take up a lot of counter and fridge space, meaning that not every coffee shop or restaurant has it on hand. This is a phenomenon I am very familiar with, as many establishments currently on my shitlist claim that they have “cold brew” that is actually hot coffee poured over ice. 

Trujillo’s new method uses room temperature water in a normal espresso machine that has been modified to turn the boiler off (or down) and to add a device that hits the beans with ultrasonic waves at a specific frequency that makes them shake. In layman’s terms, they are blasting the beans with ultrasound, which causes the beans to vibrate and their cell walls to burst, allowing the rapid extraction of coffee without heat. Trujillo explains in the paper that extraction happens because of “acoustic cavitation. When acoustic bubbles, also called inertial bubbles, collapse near solid materials, such as coffee grounds, they generate micro jets with the force to fracture the cell walls of plant tissues, intensifying the extraction of the intracellular content.”

Trujillo told me that he learned this was possible in a study he published in 2020, and set out to “superimpose ultrasound in the coffee basket of an existing espresso machine. We purchased a few Breville espresso machines, opened them up, and started the journey. Mathematical modeling of the sound transmission system and of acoustic cavitation was key for the success of the design.” Some of that mathematical modeling is available in the paper:

For those of you into mathematical modeling

He said the team experimented with a variety of frequencies and found that anything between 20 and 100 kHz is good at extracting coffee. “The lower the frequency, the larger the transducer and the horn,” he said. “If the frequency is in the low range, there are harmonics that can be heard. We worked at 28 kHz and at 38-40 kHz, and we chose 38-40 kHz as it was more compact and with a quieter performance.”
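The size tradeoff Trujillo describes falls out of basic acoustics: an ultrasonic horn is typically designed as a half-wavelength resonator, and wavelength is inversely proportional to frequency. A back-of-envelope sketch, assuming a longitudinal sound speed of about 5,000 m/s (in the right range for the metal alloys horns are usually machined from; the actual geometry of Trujillo’s device isn’t detailed here):

```python
# Why lower ultrasonic frequencies need bigger horns: a half-wave resonator
# is half an acoustic wavelength long, and wavelength = speed / frequency.
SOUND_SPEED_M_S = 5_000  # assumed longitudinal speed in the horn metal

for freq_khz in (20, 28, 40, 100):
    wavelength_mm = SOUND_SPEED_M_S / (freq_khz * 1_000) * 1_000
    print(f"{freq_khz:>3} kHz: wavelength ~{wavelength_mm:.0f} mm, "
          f"half-wave horn ~{wavelength_mm / 2:.0f} mm")
```

At 28 kHz the horn comes out roughly 40 percent longer than at 40 kHz, consistent with the team choosing the higher band for compactness.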

Essentially, his team modified an existing Breville espresso machine to do this, then experimented with different brew times and water temperatures (104 degrees F, well below boiling, was the hottest they tried) to create a variety of different cold extractions. One of these, not mentioned in the paper, is what Trujillo calls “cold espresso.” He told me these are his “favorite ones” and “offer a unique sensory experience like nothing in the market. It is bold and rich in flavor and aroma, less bitter, and with a great acidity. It is more viscous and with a very nice finishing (according to coffee experts that have tried our brews). That will be a unique and novel coffee beverage to be appreciated by coffee lovers, hopefully worldwide.”

The various ultrasonic cold brews the team produced were tested at the Queensland Alliance for Agriculture and Food Innovation by a group of “11 trained sensory panelists” who “had previously been screened of their sensory acuity.” They scored the ultrasonic extractions very similarly to real cold brew, though of course whether the ultrasonic coffee is actually “almost undistinguishable” from real cold brew will depend on each person’s taste.

The "sensory profiles" of the different coffees as judged by testers.

I have long been interested in the science of coffee. When I was a freelancer, I went to Manizales, Colombia, to a national laboratory called “Cenicafe.” A scientist there called it the “NASA of Colombia,” referring to how seriously the institute takes the scientific pursuit of growing, roasting, and brewing ever-improving coffee. Cenicafe was easily one of the coolest places I’ve been in my life: researchers there were genetically sequencing different species of coffee, hybridizing arabica and robusta in attempts to create strains that taste good but are also resistant to both climate change and “coffee rust,” a fungus that regularly blights huge amounts of the coffee harvest in many countries, and experimenting with new ways to brew coffee. I include this to say that, while inventing a new type of coffee brewing may seem frivolous, there is actually a huge amount of time, effort, and funding going into ensuring that there is ongoing innovation in coffee growing and brewing tech, which is particularly important considering that coffee plants are especially susceptible to climate change.

Trujillo said that he plans to license the technology to coffee maker companies so that it can be used in both commercial coffee shops and in people’s homes. 

“I love cold brew, and coffee in general,” he said. “I am Colombian and my grandfather had a business of buying coffee beans from the local producers, he then dried the beans under the sun on ‘costales’ (a traditional Colombian strong fabric) that he placed on the street. That was in Ortega, a little town in Colombia. There were other gentlemen like my grandfather who had the same business. So, during the season period, the streets of Ortega were filled with costales with coffee beans drying under the sun!”




Flood of AI-Generated Submissions ‘Final Straw’ for Small 22-Year-Old Publisher


A small publisher for speculative fiction and roleplaying games is shuttering after 22 years, and the “final straw,” its founder said, is an influx of AI-generated submissions.

In a notice posted to the site, founder Julie Ann Dawson wrote that effective March 6, she was winding down operations to focus on her health and “day job” that’s separate from the press. “All of these issues impacted my decision. However, I also have to confess to what may have been the final straws. AI...and authors behaving badly,” she wrote.

Dawson founded Bards and Sages in 2002 as a place to publish her own supplemental content for tabletop role-playing games like Pathfinder. It expanded over the years into anthologies, novels, short story collections and a quarterly magazine. 

“The AI problem is a time problem. If I was not dealing with a host of other issues, I'd fight through the AI either until the phase passes or the AI Bros gave up on bothering me. But with everything else, I just don't have the time,” Dawson told me. “The number of submissions have just flooded the inbox. And I don't have hours a day to deal with it. As an example, I haven't checked my business email in the last week. My submission inbox...despite the fact that we are no longer accepting submissions...has 30 emails in it.” 

Dawson said that after more than 20 years of reading and writing speculative fiction, she’s become adept at spotting the differences between human-crafted writing and content churned out by a large language model like ChatGPT. She sees two types of AI-generated submissions, she said: the “hilariously obvious,” with missing words or repeated statements that the submitter didn’t seem to bother to proofread—and the “E for effort submissions,” where the submitter did seemingly revise the generated text, but with awkward sentence structures or unnatural word orders.

“It is soulless. There is no personality to it. There is no voice. Read a bunch of dialogue in an AI generated story and all the dialogue reads the same. No character personality comes through,” she said. Generated text also tends to lack a strong sense of place, she’s observed; the settings of the stories are either overly detailed for popular locations, or too vague, because large language models can’t imagine new worlds and can only draw from existing works that have been scraped into their training data.
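There is no reliable automated test for what Dawson spots by ear, but one of her tells, dialogue in which every character sounds the same, can at least be caricatured numerically. A toy sketch follows; the heuristic and the sample data are invented for illustration, and this is in no way a real AI detector:

```python
# Toy heuristic for "all the dialogue reads the same": average pairwise
# vocabulary overlap between speakers. Invented for illustration only.
import re

def speaker_vocab_overlap(lines: dict[str, list[str]]) -> float:
    """Mean Jaccard similarity between each pair of speakers' vocabularies."""
    vocabs = {
        who: set(re.findall(r"[a-z']+", " ".join(said).lower()))
        for who, said in lines.items()
    }
    speakers = list(vocabs)
    pairs = [(a, b) for i, a in enumerate(speakers) for b in speakers[i + 1:]]
    if not pairs:
        return 0.0
    return sum(
        len(vocabs[a] & vocabs[b]) / len(vocabs[a] | vocabs[b])
        for a, b in pairs
    ) / len(pairs)

# Two "characters" with suspiciously similar word choices score high.
sample = {
    "Ava": ["I reckon the storm will pass.", "We should wait it out."],
    "Bo": ["The storm will pass.", "We wait it out, then."],
}
print(f"{speaker_vocab_overlap(sample):.2f}")  # closer to 1.0 = more uniform
```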

“The problem with AI is the people who use AI. They don't respect the written word,” Dawson told me. “These are people who think their ‘ideas’ are more important than the actual craft of writing, so they churn out all these ‘ideas’ and enter their idea prompts and think the output is a story. But they never bothered to learn the craft of writing. Most of them don't even read recreationally. They are more enamored with the idea of being a writer than the process of being a writer. They think in terms of quantity and not quality.” Another clue that submitters are using AI is that they send multiple submissions, even though the Bards and Sages guidelines state no more than one submission per person.

Dawson said there are no full-time employees at the press, and that it’s more of a labor of love than her primary income. Rights for authors who’ve published with Bards and Sages will be returned to them. But bad behavior by people posing as writers, who infiltrate independent and small-scale publishers and communities, happens elsewhere, too—in February, I talked to fanfiction authors about how Etsy sellers have been ripping off their works in bound, printed form and reselling them as physical books on the platform.

ChatGPT-generated writing is increasingly a problem for scientific journals and the peer review process. Google indexes AI-generated garbage, which threatens the usefulness of the Google Ngram Viewer, an important tool researchers use to track how language is used over time. AI-generated books are all over Amazon, and in some cases contain potentially dangerous misinformation. These big platforms try to moderate AI-generated content, but the closure of Bards and Sages shows how the influx of AI infiltrating every corner of the internet affects small businesses and publishers, too.




Nurses Protest 'Deeply Troubling' Use of AI in Hospitals


Hundreds of union nurses held a demonstration in front of Kaiser Permanente in San Francisco on Monday morning protesting the use of AI in healthcare, as hospitals and researchers become increasingly enthusiastic about integrating AI into patient care.

“It is deeply troubling to see Kaiser promote itself as a leader in AI in healthcare, when we know their use of these technologies comes at the expense of patient care, all in service of boosting profits,” Michelle Gutierrez Vo, a co-president of the California Nurses Association (CNA), said in a statement at the time. “We demand that workers and unions be involved at every step of the development of data-driven technologies and be empowered to decide whether and how AI is deployed in the workplace.”

National Nurses United, the CNA’s parent union, has repeatedly warned about AI being used for a variety of applications in healthcare, ranging from patient monitoring to nurse scheduling to automated patient charting to predictive algorithms for patient outcomes and diagnoses. The union wrote in a blog post that using AI to monitor patients and flag potential changes in their condition would force nurses to respond to “excessive, if not faulty, alerts” while overlooking patients whose conditions are at risk of getting worse.

“Life-and-death decisions relating to patient acuity, treatment decisions, and staffing levels cannot be made without the assessment skills and critical thinking of registered nurses,” the union wrote in the post. “For example, tell-tale signs of a patient’s condition, such as the smell of a patient’s breath and their skin tone, affect, or demeanor, are often not detected by AI and algorithms.”

“Nurses are not against scientific or technological advancement, but we will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care,” they added.

There is certainly a role for AI and new analytical technologies in improving healthcare; they could, for example, help mitigate medical burnout and staffing shortages. But that’s not the concern here. The introduction of AI, the nurses say, will most likely be used to decrease reliance on human nurses by automating much of their work, which in turn suggests that more patients would be assigned to each nurse. That would, in theory, both increase pressure on nurses and decrease the quality of care on all fronts.

Last year, in a statement to the U.S. Senate, the executive director of National Nurses United said that AI was “negatively impacting nursing practice and limiting the use of nurses’ professional judgment,” which was “putting patients and nurses at risk.”

The nurses protested outside Kaiser Permanente, which uses an “Advance Alert” AI monitoring system to automatically analyze patient data every hour. Kaiser Permanente wrote in a blog post that the system was estimated to save 500 lives per year. The healthcare giant also hosted a conference this week, which was slated to cover AI and advanced analytics.
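Kaiser has published outcome claims for Advance Alert but not the model itself, so purely as an illustration of the category, hourly risk-score monitoring reduces to something like the sketch below. The score weights and thresholds here are invented; real systems fit statistical models to historical data rather than hand-setting cutoffs.

```python
# Invented illustration of hourly risk-score monitoring -- NOT Kaiser's
# proprietary Advance Alert model. It also shows where alert fatigue comes
# from: every borderline reading, re-checked hourly, pages a nurse again.
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: int    # beats per minute
    resp_rate: int     # breaths per minute
    spo2: int          # oxygen saturation, percent
    systolic_bp: int   # mmHg

def risk_score(v: Vitals) -> int:
    """Crude additive score with hand-set cutoffs (illustrative only)."""
    score = 0
    if v.heart_rate > 120 or v.heart_rate < 45:
        score += 2
    if v.resp_rate > 24 or v.resp_rate < 10:
        score += 2
    if v.spo2 < 90:
        score += 3
    elif v.spo2 < 94:
        score += 1
    if v.systolic_bp < 90:
        score += 2
    return score

def hourly_check(v: Vitals, threshold: int = 3) -> bool:
    """Alert when the score crosses the threshold. Lowering the threshold
    catches more deterioration but multiplies false alarms."""
    return risk_score(v) >= threshold
```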

“Kaiser Permanente is empowering nurses with state-of-the-art tools and technologies that support our mission of providing high-quality, affordable health care to best meet our members’ and patients’ needs,” a Kaiser Permanente spokesperson wrote in an emailed statement. “At Kaiser Permanente, AI tools don’t make medical decisions, our physicians and care teams are always at the center of decision making with our patients. We believe that AI may be able to help our physicians and employees, and enhance our members’ experience. As an organization dedicated to inclusiveness and health equity, we ensure the results from AI tools are correct and unbiased; AI does not replace human assessment.”

Kaiser Permanente is not the only company promoting AI in healthcare. Last month, Nvidia announced a series of AI healthcare microservices, including a collaboration with Hippocratic AI, a company providing “always available, real-time generative AI healthcare agents” whose motto is “Do No Harm.” The company’s model was rated by human nurses as having a 0.04 percent chance of doing “severe harm” to patients, and by human physicians as having a 0.15 percent chance of doing so. In the same study, no advice from human nurses was rated by other nurses as risking “severe harm.”

Healthcare providers have been increasingly keen to start using AI, particularly to handle MRI, CT scan, and other imaging analysis and to sift through large amounts of data. A recent study, for example, found that radiologists who supplemented their work with AI “demonstrated superior performance” to radiologists or AI analysis individually. The U.S. National Institutes of Health have implemented multiple programs to make their data “usable with artificial intelligence and machine learning applications.” Even the White House has encouraged the development of AI-based healthcare tools. 

Generative AI images, meanwhile, have made it into healthcare and scientific research. One report on gender-affirming healthcare commissioned by the U.K. National Health Service used AI-generated images of nonbinary children. 

“I have been a Kaiser nurse for more than 40 years, and I promise you, union nurses will never stop fighting for a health care system that guarantees person-to-person, hands-on care for every patient,” said CNA co-president Cathy Kennedy in a statement. “We know there is nothing inevitable about AI’s advancement into health care. No patient should be a guinea pig and no nurse should be replaced by a robot.”




We Can, and We Must, Clown on the Humane AI Pin Forever


In April 2017, Bloomberg published an article about the Juicero, a product that was intended to be essentially the Keurig of juice but ended up being an expensive, over-engineered, $700 disaster of a product that exemplified Silicon Valley’s obsession with the internet of things, digital rights management, flashy hardware startups, and solving problems that do not exist. 

The article found that Juicero’s proprietary juice packs, which couldn’t be used by the machine after their expiration date, didn’t actually need to be run through the machine at all. They could be “squeezed by hand” to create juice of roughly the same quality. The article led to one of the more entertaining tech news cycles I can remember: Juicero, which had raised $120 million in venture capital funding, was relentlessly and rightfully dunked on and clowned upon by the masses. The company shut down several months later. Its founder, Doug Evans, disappeared for a while, and later got really into harvesting “raw water.”

This was a better, simpler time when people who took gigantic sums of money to make shitty products that sucked were widely dunked on for lighting that money on fire and for disappointing or ripping-off customers. Juicero was not the only hardware startup to meet this fate: The Coolest Cooler, a cooler/speaker/blender/iPhone charger hybrid that was at the time the most-funded product on Kickstarter, had to admit that its own product was a “clusterfuck” and essentially disappeared. There are many such cases: Ouya, Magic Leap, the hyperloop. For the gadget historians out there, there was the N-Gage.

An entire culture arose around exposing “Shitty Kickstarters” and other startups that promised society-shifting products and then delivered either nothing or something deeply underwhelming. A new slogan emerged: “Hardware is hard,” a warning both to founders and to the general public: Making a new product is hard, and we should be suspicious of relative newcomers who promise some huge leap forward.

Last week, a company called Humane launched the Ai Pin, a $700-plus-$24-monthly-subscription AI assistant gadget that was absolutely savaged by a group of reviewers who were obviously open-minded about the product but found it to be incredibly lacking in the ‘does this thing actually work’ department.

The pin was hyped for months in excruciating, unconvincing, and uncomfortable pitch videos by its founders, Imran Chaudhri and Bethany Bongiorno (both formerly of Apple). The idea was that it would be a screenless AI assistant that would help people be more present because they would not be looking at their phone all the time. The reviews are almost uniformly negative, and largely in the same ways: The pin gets confused often; it sometimes gives wrong answers; it overheats; its battery life is short; it sometimes takes a very long time to respond to basic inquiries; its laser projection display is hard to see in sunlight, is hard to navigate, and makes you look like a dweeb; it doesn’t have an app; it only works with T-Mobile; it can take calls but can’t use your real phone number, etc.

This version of the Ai Pin seems very likely to be a flop in the same vein as Juicero and continues a long tradition of deeply hyped, very expensive gadgets that received either large amounts of VC funding or huge backing on Kickstarter and then were underwhelming, useless, or otherwise incredibly disappointing. 

Unlike Juicero, though, which was clowned on by seemingly everyone until the product mercifully faded from the discourse, a very specific type of VC and AI bro is pushing the absurd narrative that we as a society must be nice to the clowns who have created this useless piece of soon-to-be e-waste, because they are trying a hard thing. On this telling, it is the CEOs, developers, and engineers who have spent years burning through hundreds of millions of dollars and have delivered an incredibly disappointing product who deserve our support and sympathy. This hit a fever pitch after the YouTuber MKBHD, who rarely finds a product he does not like, called the pin “the worst product I’ve ever reviewed… for now.”

In a tweet that is now viral across every social media platform that I am aware of, Daniel Vassallo, who runs a hustlebro newsletter called Small Bets, wrote “I find it distasteful, almost unethical, to say this when you have 18 million subscribers. Hard to explain why, but with great reach comes great responsibility. Potentially killing someone else’s nascent project reeks of carelessness. First, do no harm,” as though society at large has a Hippocratic Oath to support expensive, underwhelming products.

The tweet in question (text in article)

This has started an excruciating, days-long discourse about whether it is OK to say that something bad is bad, and whether we should actually say that things that are bad are good. The debate has centered on whether product reviewers and people on the internet more broadly should be nicer to people who make products that are very bad. Much of this has remained at the top of my timeline on Threads, specifically, because it is a platform whose algorithm is fundamentally broken and keeps topics floating at the top of a feed for days on end. It is also a platform that is full of people who work in tech and who think that journalists and society are too mean to them. In a world where public attention spans for actually important news last somewhere between a few minutes and a few hours, the “are-we-being-too-mean-to-the-Ai-Pin-people” discourse has somehow lasted for days and days and days.

I am now, of course, adding to this neverending discourse with this article. But I want to be clear: No one is under any obligation to be nice to the creators of the Humane pin or the product itself, which, even if it worked, is a gadget that relies on mass content theft and the scraping of huge amounts of human knowledge and creativity to make a product that is marketed as making us more “human.” The people making this argument are people who have a vested interest in the general public continuing to canonize, support, and spend money on a Silicon Valley vision of the future that involves the automation of everything, the displacement of huge numbers of workers, and a new, AI-led internet that has so far done little but flood the web with low-quality junk, power fake porn used to harass women, and lead eager-beaver, know-nothing CEOs to prematurely lay off huge numbers of workers and replace them with AI tools built on the back of uncompensated human labor and training largely done by underpaid “ghost workers” in the developing world.

This does not mean I want every product to fail, or that I never want there to be another good product again. But the existence of the Humane Ai Pin shows that even in a post-Juicero age, there is an endless appetite among rich people for funding absurd products at great cost to everyone involved.

I will admit that it is easy to revel in the easy-to-predict failure of this product, the first version of which was obviously not going to be good for a variety of reasons, chief among them the fact that it relies on the same type of AI that we have seen making obvious and bad mistakes all day every day for years. But it is not mean, or unethical, or distasteful to say that disappointing things are disappointing and are not worth spending large amounts of money on to support some hope that maybe in the future the same people who made this very bad product at great expense will make a version of it that will suck less. It is absurd to suggest otherwise. 


