
Scientists Use Ultrasound to Make Cold Brew Coffee in 3 Minutes Instead of 24 Hours


A team of scientists in Australia says it has found a way to make cold brew coffee in less than three minutes using an ultrasonic reactor. This is a potentially massive deal because cold brew normally takes between 12 and 24 hours to brew, a problem for me, personally, when I do not carefully manage my cold brew stock. The lead scientist on the research team tells me he has also created a “cold espresso,” which is his personal favorite and sounds very intriguing.

The researchers at the University of New South Wales Sydney claim that their ultrasonic extraction held up to a “sensory analysis” and blind taste tests by trained experts: “A sensory analysis was conducted to evaluate appearance, aroma, texture, flavor, and aftertaste, which demonstrated that coffee brewed for 1 and 3 min in the sonoreactor exhibited almost undistinguishable properties compared to a standard 24 hour [cold] brewing without ultrasound,” they write in a paper about the method in the journal Ultrasonics Sonochemistry.

For the uninitiated, cold brew coffee is made by soaking coffee grounds in cold or room temperature water in large batches to create a concentrate that you can keep in the fridge for a week or two. Because the water is not hot, the extraction from ground coffee beans takes much longer than it does with traditional hot brewing. The resulting cold brew is less acidic, less bitter, and sweeter. This long brew time isn’t a problem if you plan ahead, but, as mentioned, if you do not plan ahead, there is no real way to speed up the process and still end up with cold brew. As lead author Francisco Trujillo notes in the paper, the resulting large batches of cold brew concentrate also take up a lot of counter and fridge space, meaning that not every coffee shop or restaurant has it on hand. This is a phenomenon I am very familiar with, as many establishments currently on my shitlist claim that they have “cold brew” that is actually hot coffee poured over ice.

Trujillo’s new method uses room temperature water in a normal espresso machine that has been modified to turn the boiler off (or down) and to add a device that hits the beans with ultrasonic waves at a specific frequency, making them shake. In layman’s terms, they are blasting the beans with ultrasound, which causes the beans to vibrate and their cell walls to burst, allowing the rapid extraction of coffee without heat. Trujillo explains in the paper that extraction happens because of “acoustic cavitation. When acoustic bubbles, also called inertial bubbles, collapse near solid materials, such as coffee grounds, they generate micro jets with the force to fracture the cell walls of plant tissues, intensifying the extraction of the intracellular content.”

Trujillo told me that he learned this was possible in a study he published in 2020, and set out to “superimpose ultrasound in the coffee basket of an existing espresso machine. We purchased a few Breville espresso machines, opened them up, and started the journey. Mathematical modeling of the sound transmission system and of acoustic cavitation was key for the success of the design.” Some of that mathematical modeling is available in the paper here:

For those of you into mathematical modeling
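The paper’s own equations aren’t reproduced here, but for a rough sense of what that modeling involves, the textbook starting point for describing how an acoustic bubble grows and collapses is the Rayleigh-Plesset equation. To be clear, this is the standard form from the cavitation literature, not necessarily the exact formulation Trujillo’s team used:

\[
\rho\left(R\ddot{R} + \tfrac{3}{2}\dot{R}^{2}\right) = p_B(t) - p_\infty(t) - \frac{2\sigma}{R} - \frac{4\mu\dot{R}}{R}
\]

Here R(t) is the bubble radius, ρ the density of the liquid, σ its surface tension, μ its viscosity, p_B the pressure inside the bubble, and p_∞ the acoustic driving pressure far from it. As the acoustic pressure oscillates, the bubble grows and then collapses violently; near a solid surface like a coffee ground, that collapse is asymmetric, which is what produces the micro jets Trujillo describes.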

He said that they experimented with a variety of different frequencies, and that frequencies between 20 and 100 kHz are all good at extracting coffee. “The lower the frequency, the larger the transducer and the horn,” he said. “If the frequency is in the low range, there are harmonics that can be heard. We worked at 28 kHz and at 38-40 kHz, and we chose 38-40 kHz as it was more compact and with a quieter performance.”
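To put rough numbers on that trade-off: ultrasonic horns are typically designed as half-wavelength resonators, so their length scales inversely with the operating frequency. Assuming a sound speed of roughly 5,000 m/s in the metal of the horn (a typical value for a titanium or aluminum rod), the back-of-the-envelope sizes are

\[
L \approx \frac{c}{2f} \approx \frac{5000\ \text{m/s}}{2 \times 28\ \text{kHz}} \approx 8.9\ \text{cm}
\qquad \text{versus} \qquad
\frac{5000\ \text{m/s}}{2 \times 40\ \text{kHz}} \approx 6.3\ \text{cm},
\]

which is consistent with the team picking 38-40 kHz for a more compact, quieter device. These are illustrative estimates, not dimensions taken from the paper.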

Essentially, his team was able to modify an existing Breville espresso machine to do this. He said they experimented with different brew times and water temperatures (104 degrees F, well below boiling, was the hottest they tried) and were able to create a variety of different cold extractions, including one that is not mentioned in the paper, which Trujillo calls “cold espresso.” These are his “favorite ones,” he told me, and they “offer a unique sensory experience like nothing in the market. It is bold and rich in flavor and aroma, less bitter, and with a great acidity. It is more viscous and with a very nice finishing (according to coffee experts that have tried our brews). That will be a unique and novel coffee beverage to be appreciated by coffee lovers, hopefully worldwide.”

The various ultrasonic cold brews the team produced were tested at the Queensland Alliance for Agriculture and Food Innovation by a group of “11 trained sensory panelists” who “had previously been screened of their sensory acuity.” They scored the ultrasonic extractions very similarly to real cold brew, though of course whether the ultrasonic coffee is actually “almost undistinguishable” from real cold brew will depend on each person’s taste.

The “sensory profiles” of the different coffees as judged by testers.

I have long been interested in the science of coffee. When I was a freelancer, I went to Manizales, Colombia, to a national laboratory called “Cenicafe.” A scientist there called it the “NASA of Colombia,” referring to how seriously the institute takes the scientific pursuit of growing, roasting, and brewing ever-improving coffee. Cenicafe was easily one of the coolest places I’ve been in my life; they were genetically sequencing different species of coffee, hybridizing arabica and robusta coffee in attempts to create strains that taste good but are also resistant to both climate change and “coffee rust,” a fungus that regularly blights huge amounts of the coffee harvest in many countries, and were experimenting with new ways to brew coffee. I include this to say that, while inventing a new type of coffee brewing may seem frivolous, there is actually a huge amount of time, effort, and funding going into ensuring that there is ongoing innovation in coffee growing and brewing tech, which is especially important considering that coffee plants are particularly susceptible to climate change.

Trujillo said that he plans to license the technology to coffee maker companies so that it can be used in both commercial coffee shops and in people’s homes. 

“I love cold brew, and coffee in general,” he said. “I am Colombian and my grandfather had a business of buying coffee beans from the local producers, he then dried the beans under the sun on ‘costales’ (a traditional Colombian strong fabric) that he placed on the street. That was in Ortega, a little town in Colombia. There were other gentlemen like my grandfather who had the same business. So, during the season period, the streets of Ortega were filled with costales with coffee beans drying under the sun!”




Flood of AI-Generated Submissions ‘Final Straw’ for Small 22-Year-Old Publisher


A small publisher for speculative fiction and roleplaying games is shuttering after 22 years, and the “final straw,” its founder said, is an influx of AI-generated submissions.

In a notice posted to the site, founder Julie Ann Dawson wrote that effective March 6, she was winding down operations to focus on her health and “day job” that’s separate from the press. “All of these issues impacted my decision. However, I also have to confess to what may have been the final straws. AI...and authors behaving badly,” she wrote.

Dawson founded Bards and Sages in 2002 as a place to publish her own supplemental content for tabletop role-playing games like Pathfinder. It expanded over the years into anthologies, novels, short story collections and a quarterly magazine. 

“The AI problem is a time problem. If I was not dealing with a host of other issues, I'd fight through the AI either until the phase passes or the AI Bros gave up on bothering me. But with everything else, I just don't have the time,” Dawson told me. “The number of submissions have just flooded the inbox. And I don't have hours a day to deal with it. As an example, I haven't checked my business email in the last week. My submission inbox...despite the fact that we are no longer accepting submissions...has 30 emails in it.” 

Dawson said that after more than 20 years of reading and writing speculative fiction, she’s become adept at spotting the differences between human-crafted writing and content churned out by a large language model like ChatGPT. She sees two types of AI-generated submissions, she said: the “hilariously obvious,” with missing words or repeated statements that the submitter didn’t seem to bother to proofread—and the “E for effort submissions,” where the submitter seemingly did revise the generated text, but left awkward sentence structures or unnatural word orders.

“It is soulless. There is no personality to it. There is no voice. Read a bunch of dialogue in an AI generated story and all the dialogue reads the same. No character personality comes through,” she said. Generated text also tends to lack a strong sense of place, she’s observed; the settings of the stories are either overly detailed for popular locations, or too vague, because large language models can’t imagine new worlds and can only draw from existing works that have been scraped into their training data.

“The problem with AI is the people who use AI. They don't respect the written word,” Dawson told me. “These are people who think their ‘ideas’ are more important than the actual craft of writing, so they churn out all these ‘ideas’ and enter their idea prompts and think the output is a story. But they never bothered to learn the craft of writing. Most of them don't even read recreationally. They are more enamored with the idea of being a writer than the process of being a writer. They think in terms of quantity and not quality.” Another clue that submitters are using AI is that they send multiple submissions, even though the Bards and Sages guidelines state no more than one submission per person.

Dawson said there are no full-time employees at the press, and that it’s more of a labor of love than her primary income. Rights for authors who’ve published with Bards and Sages will be returned to them. But bad behavior by people posing as writers, who infiltrate independent and small-scale publishers and communities, happens elsewhere, too—in February, I talked to fanfiction authors about how Etsy sellers have been ripping off their works in bound, printed form, and reselling them as physical books on the platform.

ChatGPT-generated writing is increasingly a problem for scientific journals and the peer review process. AI-generated garbage indexed by Google threatens the usefulness of the Google Ngram Viewer, an important tool researchers use to track how language is used over time. AI-generated books are all over Amazon, and in some cases contain potentially dangerous misinformation. We see these big platforms try to moderate AI-generated content, but the closure of Bards and Sages shows how the influx of AI infiltrating every corner of the internet affects small businesses and publishers, too.




Nurses Protest 'Deeply Troubling' Use of AI in Hospitals


Hundreds of union nurses held a demonstration in front of Kaiser Permanente in San Francisco on Monday morning protesting the use of AI in healthcare, as hospitals and researchers become increasingly enthusiastic about integrating AI into patient care.

“It is deeply troubling to see Kaiser promote itself as a leader in AI in healthcare, when we know their use of these technologies comes at the expense of patient care, all in service of boosting profits,” Michelle Gutierrez Vo, a co-president of the California Nurses Association (CNA), said in a statement at the time. “We demand that workers and unions be involved at every step of the development of data-driven technologies and be empowered to decide whether and how AI is deployed in the workplace.”

National Nurses United, the CNA’s parent union, has repeatedly warned about AI being used for a variety of applications in healthcare, ranging from patient monitoring to nurse scheduling to automated patient charting to the use of predictive algorithms on patient outcomes and diagnoses. The union wrote in a blog post that using AI for monitoring patients and flagging potential changes in their state would force nurses to respond to “excessive, if not faulty, alerts” while overlooking patients whose conditions are at risk of getting worse.

“Life-and-death decisions relating to patient acuity, treatment decisions, and staffing levels cannot be made without the assessment skills and critical thinking of registered nurses,” the union wrote in the post. “For example, tell-tale signs of a patient’s condition, such as the smell of a patient’s breath and their skin tone, affect, or demeanor, are often not detected by AI and algorithms.”

“Nurses are not against scientific or technological advancement, but we will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care,” they added.

There is certainly a role for AI and new analytical technologies in improving healthcare. They could, for example, help mitigate medical burnout and staffing shortages. But that’s not the concern here. AI, the nurses say, will most likely be used to decrease reliance on human nurses by automating much of their work, which in turn suggests that more patients would be assigned to each nurse. That would, in theory, both increase pressure on nurses and decrease the quality of care on all fronts.

Last year, in a statement to the U.S. Senate, the executive director of National Nurses United said that AI was “negatively impacting nursing practice and limiting the use of nurses’ professional judgment,” which was “putting patients and nurses at risk.”

The nurses protested outside Kaiser Permanente, which uses an “Advance Alert” AI monitoring system to automatically analyze patient data every hour. Kaiser Permanente wrote in a blog post that the system was estimated to save 500 lives per year. The healthcare giant also hosted a conference this week, which was slated to cover AI and advanced analytics.

“Kaiser Permanente is empowering nurses with state-of-the-art tools and technologies that support our mission of providing high-quality, affordable health care to best meet our members’ and patients’ needs,” a Kaiser Permanente spokesperson wrote in an emailed statement. “At Kaiser Permanente, AI tools don’t make medical decisions, our physicians and care teams are always at the center of decision making with our patients. We believe that AI may be able to help our physicians and employees, and enhance our members’ experience. As an organization dedicated to inclusiveness and health equity, we ensure the results from AI tools are correct and unbiased; AI does not replace human assessment.”

Kaiser Permanente is not the only company promoting AI healthcare. Last month, Nvidia announced a series of AI healthcare microservices, including one from Hippocratic AI, a company providing “always available, real-time generative AI healthcare agents” whose motto is “Do No Harm.” The model was rated by human nurses as having a 0.04 percent chance of doing “severe harm” to patients, and by human physicians as having a 0.15 percent chance of doing so. In the same study, no medical advice provided by human nurses was rated by other nurses as having any chance of doing “severe harm.”

Healthcare providers have been increasingly keen to start using AI, particularly to handle MRI, CT scan, and other imaging analysis and to sift through large amounts of data. A recent study, for example, found that radiologists who supplemented their work with AI “demonstrated superior performance” to radiologists or AI analysis individually. The U.S. National Institutes of Health have implemented multiple programs to make their data “usable with artificial intelligence and machine learning applications.” Even the White House has encouraged the development of AI-based healthcare tools. 

Generative AI images, meanwhile, have made it into healthcare and scientific research. One report on gender-affirming healthcare commissioned by the U.K. National Health Service used AI-generated images of nonbinary children. 

“I have been a Kaiser nurse for more than 40 years, and I promise you, union nurses will never stop fighting for a health care system that guarantees person-to-person, hands-on care for every patient,” said CNA co-president Cathy Kennedy in a statement. “We know there is nothing inevitable about AI’s advancement into health care. No patient should be a guinea pig and no nurse should be replaced by a robot.”




We Can, and We Must, Clown on the Humane AI Pin Forever


In April 2017, Bloomberg published an article about the Juicero, a product that was intended to be essentially the Keurig of juice but ended up being an expensive, over-engineered, $700 disaster that exemplified Silicon Valley’s obsession with the internet of things, digital rights management, flashy hardware startups, and solving problems that do not exist.

The article found that Juicero’s proprietary juice packs, which couldn’t be used by the machine after their expiration date, didn’t actually need to be run through the machine at all. They could be “squeezed by hand” to create juice of roughly the same quality. The article led to one of the more entertaining tech news cycles I can remember: Juicero, which had raised $120 million in venture capital funding, was righteously, relentlessly, and rightfully dunked on and clowned upon by the masses. The company shut down several months later. Its founder, Doug Evans, disappeared for a while, and later got really into harvesting “raw water.” 

This was a better, simpler time when people who took gigantic sums of money to make shitty products that sucked were widely dunked on for lighting that money on fire and for disappointing or ripping off customers. Juicero was not the only hardware startup to meet this fate: The Coolest Cooler, a cooler/speaker/blender/iPhone charger hybrid that was at the time the most-funded product on Kickstarter, had to admit that its own product was a “clusterfuck” and essentially disappeared. There are many such cases: Ouya, Magic Leap, the hyperloop. For the gadget historians out there, there was the N-Gage.

An entire culture arose around exposing “Shitty Kickstarters” and other startups that promised society-shifting products then delivered either nothing or something deeply underwhelming. A new slogan arose: “Hardware is hard,” which was a warning both to founders and to the general public: Making a new product is hard, and we should be suspicious of relative newcomers who promise some huge leap forward. 

Last week, a company called Humane launched the Ai Pin, a $700-plus-$24-monthly-subscription AI assistant gadget that was absolutely savaged by a group of reviewers who were obviously open-minded about the product but found it to be incredibly lacking in the ‘does this thing actually work’ department.

The pin was hyped for months in excruciating, unconvincing, and uncomfortable pitch videos by its founders, Imran Chaudhri and Bethany Bongiorno (both formerly of Apple). The idea was that it would be a screenless AI assistant that would help people be more present because they would not be looking at their phone all the time. The reviews are almost uniformly negative and largely in the same ways: The pin gets confused often; it sometimes gives wrong answers; it overheats; its battery life is short; it sometimes takes a very long time to respond to basic inquiries; its laser projection display is hard to see in sunlight, is hard to navigate, and makes you look like a dweeb; it doesn’t have an app; it only works with T-Mobile; it can take calls but can’t use your real phone number, etc.

This version of the Ai Pin seems very likely to be a flop in the same vein as Juicero and continues a long tradition of deeply hyped, very expensive gadgets that received either large amounts of VC funding or huge backing on Kickstarter and then were underwhelming, useless, or otherwise incredibly disappointing. 

Unlike Juicero, though, which was clowned on by seemingly everyone until the product mercifully faded from the discourse, a very specific type of VC and AI bro is pushing the absurd narrative that we as a society must be nice to the clowns who have created this useless piece of soon-to-be e-waste, because they are trying a hard thing. In this narrative, it is the CEOs, developers, and engineers who have spent years burning through hundreds of millions of dollars and have delivered an incredibly disappointing product who deserve our support and sympathy. This hit a fever pitch after the YouTuber MKBHD, who rarely finds a product he does not like, called the pin “the worst product I’ve ever reviewed… for now.”

In a tweet that is now viral across every social media platform that I am aware of, Daniel Vassallo, who runs a hustlebro newsletter called Small Bets, wrote “I find it distasteful, almost unethical, to say this when you have 18 million subscribers. Hard to explain why, but with great reach comes great responsibility. Potentially killing someone else’s nascent project reeks of carelessness. First, do no harm,” as though society at large has a Hippocratic Oath to support expensive, underwhelming products.

The tweet in question (text in article)

This has started an excruciating, days-long discourse about whether it is OK to say that something bad is bad, and whether we should actually say that things that are bad are good. The debate has centered on whether product reviewers and people on the internet more broadly should be nicer to people who make products that are very bad. Much of this has remained at the top of my timeline on Threads, specifically, because it is a platform whose algorithm is fundamentally broken and keeps topics floating at the top of a feed for days on end. It is also a platform that is full of people who work in tech and who think that journalists and society are too mean to them. In a world where public attention spans for actually important news last somewhere between a few minutes and a few hours, the “are-we-being-too-mean-to-the-Ai-Pin-people” discourse has somehow lasted for days and days and days.

I am now, of course, adding to this neverending discourse with this article. But I want to be clear: No one is under any obligation to be nice to the creators of the Humane pin or the product itself, which, even if it worked, is a gadget that relies on mass content theft and the scraping of huge amounts of human knowledge and creativity to make a product that is marketed as making us more “human.” The people making this argument are people who have a vested interest in the general public continuing to canonize, support, and spend money on a Silicon Valley vision of the future that involves the automation of everything, the displacement of huge numbers of workers, and a new, AI-led internet that has so far done little but flood the web with low-quality junk, fuel fake porn used to harass women, and lead eager-beaver, know-nothing CEOs to prematurely lay off huge numbers of workers and replace them with AI tools built on the back of uncompensated human labor and training largely done by underpaid “ghost workers” in the developing world.

This does not mean I want every product to fail, or for there never to be another good product again. The existence of the Humane Ai Pin shows that even in a post-Juicero age, there is endless appetite for rich people to spend money funding people to make absurd products at great cost to everyone involved.

I will admit that it is easy to revel in the easy-to-predict failure of this product, the first version of which was obviously not going to be good for a variety of reasons, chief among them the fact that it relies on the same type of AI that we have seen making obvious and bad mistakes all day every day for years. But it is not mean, or unethical, or distasteful to say that disappointing things are disappointing and are not worth spending large amounts of money on to support some hope that maybe in the future the same people who made this very bad product at great expense will make a version of it that will suck less. It is absurd to suggest otherwise. 




Institute That Pioneered AI ‘Existential Risk’ Research Shuts Down


The Future of Humanity Institute (FHI), an almost two-decade-old organization focused on researching and mitigating the existential risks posed by artificial intelligence, has shut down.

“Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute’s organizational home [at Oxford University]),” a post on the organization's website announcing its closure says. “Starting in 2020, the Faculty imposed a freeze on fundraising and hiring.  In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down.”

FHI was established at Oxford University in 2005 by Swedish philosopher Nick Bostrom, and received funding from Elon Musk, the European Research Council, the Future of Life Institute, and others.

“During its 19-year existence, the team at FHI made a series of research contributions that helped change our conversation about the future and contributed to the creation of several new fields and paradigms,” the post on The Future of Humanity Institute website says. “FHI was involved in the germination of a wide range of ideas including existential risk, effective altruism, longtermism, AI alignment, AI governance, global catastrophic risk, grand futures, information hazards, the unilateralist’s curse, and moral uncertainty.”

FHI and Bostrom specifically were instrumental in shaping popular perception of the existential and entirely theoretical risk posed by artificial intelligence, best summed up by his paperclip maximizer thought experiment. The paperclip maximizer imagines a scenario in which a powerful AI, given a simple human instruction to produce paperclips, follows it to the maximum in a way that is not broadly aligned with human interests, eliminating all life and turning all matter in the universe into paperclips.

In one of its final publications titled “Final Report,” which chronicles the history of FHI, the organization clearly blames Oxford for shutting it down. 

“While FHI had achieved significant academic and policy impact, the final years were affected by a gradual suffocation by Faculty bureaucracy,” the article says. “The flexible, fast-moving approach of the institute did not function well with the rigid rules and slow decision-making of the surrounding organization. (One of our administrators developed a joke measurement unit, ‘the Oxford’. 1 Oxford is the amount of work it takes to read and write 308 emails. This is the actual administrative effort it took for FHI to have a small grant disbursed into its account within the Philosophy Faculty so that we could start using it - after both the funder and the University had already approved the grant.)”

The article goes on to explain that Oxford imposed a freeze on fundraising and hiring in 2020, and that this eventually led to the loss of lead researchers.

“I wish it were possible to convey the heroic efforts of our core administrative team that were required to keep the FHI organizational apparatus semi-performant and dynamic for all those years until its final demise!” Bostrom is quoted as saying in the article.

This type of concern about AI reached an absurd peak in 2023, when the Future of Life Institute, which has funded the Future of Humanity Institute and is focused on the same issues but is a different entity, published a letter calling on technology companies and governments to “pause” AI development for six months, since it was developing at a dangerously fast pace. The letter illustrates many of the problems with the organization, effective altruism, longtermism, and a certain faction within the debate about how AI should be developed.

As Chloe Xiang wrote for Motherboard at the time, the letter was scrutinized by many AI experts, including some who signed it. More importantly, AI experts criticized the letter as hyping up theoretical risks that have more to do with science fiction than with reality, rather than focusing on the actual harm AI is currently causing, like the dangers of algorithmic bias, AI’s ability to easily create nonconsensual porn of anyone, its widespread use in surveillance tools, and flooding every corner of the internet with garbage, all very real and current harms we cover at 404 Media daily.

“It's essentially misdirection: bringing everyone's attention to hypothetical powers and harms of LLMs and proposing a (very vague and ineffective) way of addressing them, instead of looking at the harms here and now and addressing those—for instance, requiring more transparency when it comes to the training data and capabilities of LLMs, or legislation regarding where and when they can be used,” Sasha Luccioni, a research scientist and climate lead at Hugging Face, told Motherboard at the time. 

Regardless of its actual impact, FHI shutting down is going to be viewed as a huge win for techno-optimists, effective accelerationists (e/acc), and other AI maxers who have a vested interest in developing and deploying AI tools rapidly and hyping them up. This crowd sees the field of “AI safety” and effective altruism as regressive and against technological progress. 

“EA (effective altruism) institutions literally crumbling before our eyes. It's over,” Guillaume Verdon, CEO of Extropic and one of the main figures in the effective accelerationist movement under the name Beff Jezos, tweeted about the news of FHI shutting down. 

I will grant him that while the paperclip maximizer is a very fun thought experiment (and game), FHI, the Future of Life Institute, Bostrom, longtermism, effective altruism, etc., are not the most useful ways of thinking about AI at our current moment, and are cringe.

However, EA and e/acc are two sides of the same coin. One is focused on a future where all-powerful AI goes rogue and turns all of humanity into paperclips. The other is focused on a future where all-powerful AI has been successfully wielded to create a utopia of infinite intelligence, energy, and wealth.

Both gloss over the reality that there’s little evidence to support any of these possibilities, and that at least for now AI is mostly being used to create nonconsensual porn, mountains of poorly written garbage, and the worst-looking romantic comedy we’ve ever seen.




Review Used By UK to Limit Gender Affirming Care Uses Images of AI-Generated Kids


A major review condemning the U.K. National Health Service’s gender-affirming treatment for minors uses AI-generated images of kids. The review, released Tuesday, noted the “weak evidence” and “poor quality” of studies on hormone treatments for children, which led to NHS England pausing all gender-affirming care for people under 18. 

The so-called “Cass Review” is peppered with pictures of schoolchildren and students, half of which appear to be generated by AI. Most notably, an image at the end of the “Service model” section of the report, which delineates the 32 recommendations it makes, features a non-binary child with a bleached and light pink quiff haircut. This is consistent with how generative AI tends to represent queer people—namely, white people with short textured purple or pink hair. 

A screenshot of the Cass Review's AI-generated nonbinary child.

The report’s cover image of a child doing a wall sit in jeans also appears to be AI-generated, which is evident from their hands. Two pairs of fingers appear to be merged, while one thumb cuts into the other one. 

A screenshot of the Review's cover image.

Dr. Hilary Cass’s team, who conducted the review, told 404 Media in an email that, “The report uses Adobe stock images—some real and some AI. In selecting images the Review was conscious of the sensitive and contentious nature of the subject matter and made effort not to use images that could identify any individuals.”

Reverse-image searching the AI-generated child with pink hair leads to this AI-generated image on Adobe Stock called “Non binary teen in school hallway with kids in the background. Generative AI.” This image is part of a “series” on Adobe Stock that contains 35 images. In all of the images in this series where someone in the image has pink or purple hair, they are labeled as “nonbinary,” and in some cases they are labeled as a “young nonbinary teen girl.” This specific group of AI images also has a series of group images where all of the teens are labeled as either “nonbinary” or, separately, as “happy teenage students.” These images imagine a world in which every nonbinary person has some variation of the exact same haircut and hair color and in which they exclusively hang out with and pose for pictures with other nonbinary students. AI-generated cis students in this series only hang out with other cis students, meanwhile. 

The review does not appear to acknowledge these AI-generated images in any way. There are no references to “AI”, “artificial intelligence”, or “generate” in the context of images. Nor are there any references to Midjourney, Stable Diffusion, Dall-E, or any other common AI image generators. 

When asked for comment, the NHS, which commissioned the report on gender-affirming healthcare in the U.K., directed 404 Media to contact Dr. Cass’s team. 

AI-generated images have recently begun making appearances in scientific reports. A particularly memorable study on stem cell signaling pathways featured a giant AI-generated rat penis, and was quickly retracted when the image was shared online. Last year, Amnesty International used bizarre, AI-generated images to depict violence and police brutality in Colombia. 

A study published in December in the Findings of the Association for Computational Linguistics found that Stable Diffusion’s depiction of a “person” was by default a white man. “People of nonbinary gender are farthest from this baseline depiction of ‘person,’” the study states. “The stereotypical depiction of personhood within Stable Diffusion outputs corresponds closely to Western, light-skinned men and threatens to erase from media depictions historically marginalized groups such as people of nonbinary gender and Indigenous people, among others.” 

Sourojit Ghosh, a PhD candidate at the University of Washington and the lead researcher of the project, told 404 Media in an email that this erasure of nonbinary people had significant implications for harm. “It contributes to the historic trend of nonbinary identities being erased from mainstream representation, or nonbinary people facing oppression and/or violence simply for being nonbinary,” Ghosh said.

“I think that for AI to depict stereotypical images of what it means to ‘look trans/nonbinary’ has the potential to cause real harms upon real people,” Ghosh continued. “Especially for young people, who might be seeing such images more and more in their daily media diets, this can create an unhealthy impression that there is a ‘correct’ way to present oneself as trans/nonbinary.” 

Earlier this month, WIRED noted that generative AI has a track record of representing queer and trans people as a collage of stereotypes. It is not clear why the Cass team used AI-generated images in this report, which, again, has been used as evidence by the NHS to stop providing gender-affirming care to trans kids.


