Nurses Protest 'Deeply Troubling' Use of AI in Hospitals

Hundreds of union nurses held a demonstration in front of Kaiser Permanente in San Francisco on Monday morning protesting the use of AI in healthcare, as hospitals and researchers become increasingly enthusiastic about integrating AI into patient care.

“It is deeply troubling to see Kaiser promote itself as a leader in AI in healthcare, when we know their use of these technologies comes at the expense of patient care, all in service of boosting profits,” Michelle Gutierrez Vo, a co-president of the California Nurses Association (CNA), said in a statement at the time. “We demand that workers and unions be involved at every step of the development of data-driven technologies and be empowered to decide whether and how AI is deployed in the workplace.”

National Nurses United, the CNA’s parent union, has repeatedly warned about AI being used for a variety of applications in healthcare, ranging from patient monitoring and nurse scheduling to automated patient charting and predictive algorithms for patient outcomes and diagnoses. The union wrote in a blog post that using AI to monitor patients and flag potential changes in their state would force nurses to respond to “excessive, if not faulty, alerts” while overlooking patients whose conditions are at risk of getting worse.

“Life-and-death decisions relating to patient acuity, treatment decisions, and staffing levels cannot be made without the assessment skills and critical thinking of registered nurses,” the union wrote in the post. “For example, tell-tale signs of a patient’s condition, such as the smell of a patient’s breath and their skin tone, affect, or demeanor, are often not detected by AI and algorithms.”

“Nurses are not against scientific or technological advancement, but we will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care,” they added.

There is certainly a role for AI and new analytical technologies in improving healthcare. They could, for example, help mitigate medical burnout and staffing shortages. But that’s not the concern here. The concern, the nurses say, is that AI will most likely be used to decrease reliance on human nurses by automating much of their work, which in turn suggests that more patients would be assigned to each nurse. That would, in theory, both increase pressure on nurses and decrease the quality of care on all fronts.

Last year, in a statement to the U.S. Senate, the executive director of National Nurses United said that AI was “negatively impacting nursing practice and limiting the use of nurses’ professional judgment,” which was “putting patients and nurses at risk.”

The nurses protested outside Kaiser Permanente, which uses an “Advance Alert” AI monitoring system to automatically analyze patient data every hour. Kaiser Permanente wrote in a blog post that the system was estimated to save 500 lives per year. The healthcare giant also hosted a conference this week, which was slated to cover AI and advanced analytics.

“Kaiser Permanente is empowering nurses with state-of-the-art tools and technologies that support our mission of providing high-quality, affordable health care to best meet our members’ and patients’ needs,” a Kaiser Permanente spokesperson wrote in an emailed statement. “At Kaiser Permanente, AI tools don’t make medical decisions, our physicians and care teams are always at the center of decision making with our patients. We believe that AI may be able to help our physicians and employees, and enhance our members’ experience. As an organization dedicated to inclusiveness and health equity, we ensure the results from AI tools are correct and unbiased; AI does not replace human assessment.”

Kaiser Permanente is not the only company promoting AI in healthcare. Last month, Nvidia announced a series of AI healthcare microservices, including a collaboration with Hippocratic AI, a company providing “always available, real-time generative AI healthcare agents” whose motto is “Do No Harm.” The model was rated by human nurses as having a 0.04 percent chance of doing “severe harm” to patients, and by human physicians as having a 0.15 percent chance. In the same study, medical advice provided by human nurses was not rated by other nurses as having any chance of doing “severe harm.”

Healthcare providers have been increasingly keen to start using AI, particularly to handle MRI, CT scan, and other imaging analysis and to sift through large amounts of data. A recent study, for example, found that radiologists who supplemented their work with AI “demonstrated superior performance” compared to either radiologists or AI analysis alone. The U.S. National Institutes of Health has implemented multiple programs to make its data “usable with artificial intelligence and machine learning applications.” Even the White House has encouraged the development of AI-based healthcare tools.

Generative AI images, meanwhile, have made it into healthcare and scientific research. One report on gender-affirming healthcare commissioned by the U.K. National Health Service used AI-generated images of nonbinary children. 

“I have been a Kaiser nurse for more than 40 years, and I promise you, union nurses will never stop fighting for a health care system that guarantees person-to-person, hands-on care for every patient,” said CNA co-president Cathy Kennedy in a statement. “We know there is nothing inevitable about AI’s advancement into health care. No patient should be a guinea pig and no nurse should be replaced by a robot.”



We Can, and We Must, Clown on the Humane AI Pin Forever

In April 2017, Bloomberg published an article about the Juicero, a product that was intended to be essentially the Keurig of juice but ended up being an expensive, over-engineered, $700 disaster that exemplified Silicon Valley’s obsession with the internet of things, digital rights management, flashy hardware startups, and solving problems that do not exist.

The article found that Juicero’s proprietary juice packs, which couldn’t be used by the machine after their expiration date, didn’t actually need to be run through the machine at all. They could be “squeezed by hand” to create juice of roughly the same quality. The article led to one of the more entertaining tech news cycles I can remember: Juicero, which had raised $120 million in venture capital funding, was righteously, relentlessly, and rightfully dunked on and clowned upon by the masses. The company shut down several months later. Its founder, Doug Evans, disappeared for a while, and later got really into harvesting “raw water.” 

This was a better, simpler time when people who took gigantic sums of money to make shitty products that sucked were widely dunked on for lighting that money on fire and for disappointing or ripping off customers. Juicero was not the only hardware startup to meet this fate: The Coolest Cooler, a cooler/speaker/blender/iPhone charger hybrid that was at the time the most-funded product on Kickstarter, had to admit that its own product was a “clusterfuck” and essentially disappeared. There are many such cases: Ouya, Magic Leap, the hyperloop. For the gadget historians out there, there was the N-Gage.

An entire culture arose around exposing “Shitty Kickstarters” and other startups that promised society-shifting products and then delivered either nothing or something deeply underwhelming. A new slogan emerged: “Hardware is hard,” a warning both to founders and to the general public: Making a new product is hard, and we should be suspicious of relative newcomers who promise some huge leap forward.

Last week, a company called Humane launched the Ai Pin, a $700-plus-$24-monthly-subscription AI assistant gadget that was absolutely savaged by a group of reviewers who were obviously open-minded about the product but found it incredibly lacking in the ‘does this thing actually work’ department.

The pin was hyped for months in excruciating, unconvincing, and uncomfortable pitch videos by its founders, Imran Chaudhri and Bethany Bongiorno (both formerly of Apple). The idea was that it would be a screenless AI assistant that would help people be more present because they would not be looking at their phones all the time. The reviews are almost uniformly negative, and largely in the same ways: The pin gets confused often; it sometimes gives wrong answers; it overheats; its battery life is short; it sometimes takes a very long time to respond to basic inquiries; its laser projection display is hard to see in sunlight, is hard to navigate, and makes you look like a dweeb; it doesn’t have an app; it only works with T-Mobile; it can take calls but can’t use your real phone number; etc.

This version of the Ai Pin seems very likely to be a flop in the same vein as Juicero and continues a long tradition of deeply hyped, very expensive gadgets that received either large amounts of VC funding or huge backing on Kickstarter and then were underwhelming, useless, or otherwise incredibly disappointing. 

Unlike Juicero, though, which was clowned on by seemingly everyone until the product mercifully faded from the discourse, a very specific type of VC and AI bro is pushing the absurd narrative that we as a society must be nice to the clowns who have created this useless piece of soon-to-be e-waste, because they are trying a hard thing. In this telling, it is the CEOs, developers, and engineers who have spent years burning through hundreds of millions of dollars and have delivered an incredibly disappointing product who deserve our support and sympathy. This hit a fever pitch after the YouTuber MKBHD, who rarely finds a product he does not like, called the pin “the worst product I’ve ever reviewed… for now.”

In a tweet that is now viral across every social media platform that I am aware of, Daniel Vassallo, who runs a hustlebro newsletter called Small Bets, wrote “I find it distasteful, almost unethical, to say this when you have 18 million subscribers. Hard to explain why, but with great reach comes great responsibility. Potentially killing someone else’s nascent project reeks of carelessness. First, do no harm,” as though society at large has a Hippocratic Oath to support expensive, underwhelming products.


This has started an excruciating, days-long discourse about whether it is OK to say that something bad is bad, and whether we should actually say that things that are bad are good. The debate has centered on whether product reviewers and people on the internet more broadly should be nicer to people who make products that are very bad. Much of this has remained at the top of my timeline on Threads, specifically, because it is a platform whose algorithm is fundamentally broken and keeps topics floating at the top of a feed for days on end. It is also a platform that is full of people who work in tech and who think that journalists and society are too mean to them. In a world where public attention spans for actually important news last somewhere between a few minutes and a few hours, the “are-we-being-too-mean-to-the-Ai-Pin-people” discourse has somehow lasted for days and days and days.

I am now, of course, adding to this neverending discourse with this article. But I want to be clear: No one is under any obligation to be nice to the creators of the Humane pin or the product itself, which, even if it worked, is a gadget that relies on mass content theft and the scraping of huge amounts of human knowledge and creativity to make a product that is marketed as making us more “human.” The people making this argument are people who have a vested interest in the general public continuing to canonize, support, and spend money on a Silicon Valley vision of the future that involves the automation of everything, the displacement of huge numbers of workers, and a new, AI-led internet that has so far done little but flood the web with low-quality junk, generate fake porn used to harass women, and lead eager-beaver know-nothing CEOs to prematurely lay off huge numbers of workers to replace them with AI tools built on the back of uncompensated human labor and training largely done by underpaid “ghost workers” in the developing world.

This does not mean I want every product to fail, or that I never want there to be another good product again. The existence of the Humane Ai Pin shows that even in a post-Juicero age, there is an endless appetite among rich people for funding absurd products at great cost to everyone involved.

I will admit that it is easy to revel in the easy-to-predict failure of this product, the first version of which was obviously not going to be good for a variety of reasons, chief among them the fact that it relies on the same type of AI that we have seen making obvious and bad mistakes all day every day for years. But it is not mean, or unethical, or distasteful to say that disappointing things are disappointing and are not worth spending large amounts of money on to support some hope that maybe in the future the same people who made this very bad product at great expense will make a version of it that will suck less. It is absurd to suggest otherwise. 




Institute That Pioneered AI ‘Existential Risk’ Research Shuts Down

The Future of Humanity Institute (FHI), a nearly two-decade-old organization focused on researching and mitigating the existential risks posed by artificial intelligence, has shut down.

“Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute’s organizational home [at Oxford University]),” a post on the organization's website announcing its closure says. “Starting in 2020, the Faculty imposed a freeze on fundraising and hiring.  In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down.”

FHI was established at Oxford University in 2005 by Swedish philosopher Nick Bostrom, and received funding from Elon Musk, the European Research Council, the Future of Life Institute, and others.

“During its 19-year existence, the team at FHI made a series of research contributions that helped change our conversation about the future and contributed to the creation of several new fields and paradigms,” the post on The Future of Humanity Institute website says. “FHI was involved in the germination of a wide range of ideas including existential risk, effective altruism, longtermism, AI alignment, AI governance, global catastrophic risk, grand futures, information hazards, the unilateralist’s curse, and moral uncertainty.”

FHI and Bostrom specifically were instrumental in shaping popular perception of the existential and entirely theoretical risk posed by artificial intelligence, best summed up by his paperclip maximizer thought experiment. The thought experiment imagines a scenario in which a powerful AI, given a simple human instruction—to produce paperclips—pursues it to the maximum, in a way that is not broadly aligned with human interests, and ends up eliminating all life by turning all matter in the universe into paperclips.

In one of its final publications titled “Final Report,” which chronicles the history of FHI, the organization clearly blames Oxford for shutting it down. 

“While FHI had achieved significant academic and policy impact, the final years were affected by a gradual suffocation by Faculty bureaucracy,” the article says. “The flexible, fast-moving approach of the institute did not function well with the rigid rules and slow decision-making of the surrounding organization. (One of our administrators developed a joke measurement unit, “the Oxford”. 1 Oxford is the amount of work it takes to read and write 308 emails. This is the actual administrative effort it took for FHI to have a small grant disbursed into its account within the Philosophy Faculty so that we could start using it - after both the funder and the University had already approved the grant.)”

The article goes on to explain that Oxford imposed a freeze on fundraising and hiring in 2020, and that this eventually led to the loss of lead researchers.

“I wish it were possible to convey the heroic efforts of our core administrative team that were required to keep the FHI organizational apparatus semi-performant and dynamic for all those years until its final demise!” Bostrom is quoted as saying in the article.

This type of concern about AI reached an absurd peak in 2023, when the Future of Life Institute, which has funded the Future of Humanity Institute and focuses on the same issues but is a different entity, published a letter calling on technology companies and governments to “pause” AI development for six months because AI was developing at a dangerously fast pace. The letter illustrates many of the problems with the organization, effective altruism, longtermism, and a certain faction within the debate about how AI should be developed.

As Chloe Xiang wrote for Motherboard at the time, the letter was scrutinized by many AI experts, including some who signed it. More importantly, AI experts criticized the letter for hyping up theoretical risks that have more to do with science fiction than reality, rather than focusing on the actual harm AI is causing right now: the dangers of algorithmic bias, AI’s ability to easily create nonconsensual porn of anyone, its widespread use in surveillance tools, and the flood of garbage it has unleashed on every corner of the internet, all very real and current harms we cover at 404 Media daily.

“It's essentially misdirection: bringing everyone's attention to hypothetical powers and harms of LLMs and proposing a (very vague and ineffective) way of addressing them, instead of looking at the harms here and now and addressing those—for instance, requiring more transparency when it comes to the training data and capabilities of LLMs, or legislation regarding where and when they can be used,” Sasha Luccioni, a research scientist and climate lead at Hugging Face, told Motherboard at the time. 

Regardless of its actual impact, FHI shutting down is going to be viewed as a huge win for techno-optimists, effective accelerationists (e/acc), and other AI maxers who have a vested interest in developing and deploying AI tools rapidly and hyping them up. This crowd sees the field of “AI safety” and effective altruism as regressive and against technological progress. 

“EA (effective altruism) institutions literally crumbling before our eyes. It's over,” Guillaume Verdon, CEO of Extropic and one of the main figures in the effective accelerationist movement under the name Beff Jezos, tweeted about the news of FHI shutting down. 

I will grant him that, while the paperclip maximizer is a very fun thought experiment (and game), FHI, the Future of Life Institute, Bostrom, longtermism, effective altruism, etc., are not the most useful ways of thinking about AI at our current moment, and are cringe.

However, EA and e/acc are two sides of the same coin. One is focused on a future where an all-powerful AI goes rogue and turns all of humanity into paperclips. The other is focused on a future where an all-powerful AI has been successfully wielded to create a utopia of infinite intelligence, energy, and wealth.

Both gloss over the reality that there’s little evidence to support either of these possibilities, and that at least for now AI is mostly being used to create nonconsensual porn, mountains of poorly written garbage, and the worst-looking romantic comedy we’ve ever seen.




Review Used By UK to Limit Gender Affirming Care Uses Images of AI-Generated Kids

A major review condemning the U.K. National Health Service’s gender-affirming treatment for minors uses AI-generated images of kids. The review, released Tuesday, noted the “weak evidence” and “poor quality” of studies on hormone treatments for children, which led to NHS England pausing all gender-affirming care for people under 18. 

The so-called “Cass Review” is peppered with pictures of schoolchildren and students, half of which appear to be generated by AI. Most notably, an image at the end of the “Service model” section of the report, which delineates the 32 recommendations it makes, features a nonbinary child with a bleached and light pink quiff haircut. This is consistent with how generative AI tends to represent queer people—namely, white people with short textured purple or pink hair.

A screenshot of the Cass Review's AI-generated nonbinary child.

The report’s cover image of a child doing a wall sit in jeans also appears to be AI-generated, which is evident from their hands. Two pairs of fingers appear to be merged, while one thumb cuts into the other one. 

A screenshot of the Review's cover image.

Dr. Hilary Cass’s team, who conducted the review, told 404 Media in an email that, “The report uses Adobe stock images—some real and some AI. In selecting images the Review was conscious of the sensitive and contentious nature of the subject matter and made effort not to use images that could identify any individuals.”

Reverse-image searching the AI-generated child with pink hair leads to this AI-generated image on Adobe Stock called “Non binary teen in school hallway with kids in the background. Generative AI.” This image is part of a “series” on Adobe Stock that contains 35 images. In all of the images in this series where someone in the image has pink or purple hair, they are labeled as “nonbinary,” and in some cases they are labeled as a “young nonbinary teen girl.” This specific group of AI images also has a series of group images where all of the teens are labeled as either “nonbinary” or, separately, as “happy teenage students.” These images imagine a world in which every nonbinary person has some variation of the exact same haircut and hair color and in which they exclusively hang out with and pose for pictures with other nonbinary students. AI-generated cis students in this series only hang out with other cis students, meanwhile. 

The review does not appear to acknowledge these AI-generated images in any way. There are no references to “AI”, “artificial intelligence”, or “generate” in the context of images. Nor are there any references to Midjourney, Stable Diffusion, Dall-E, or any other common AI image generators. 

When asked for comment, the NHS, which commissioned the report on gender-affirming healthcare in the U.K., directed 404 Media to contact Dr. Cass’s team. 

AI-generated images have recently begun making appearances in scientific reports. A particularly memorable study on stem cell signaling pathways featured a giant AI-generated rat penis, and was quickly retracted when the image was shared online. Last year, Amnesty International used bizarre, AI-generated images to depict violence and police brutality in Colombia. 

A study published in December in the Findings of the Association for Computational Linguistics found that Stable Diffusion’s depiction of a “person” was by default a white man. “People of nonbinary gender are farthest from this baseline depiction of ‘person,’” the study states. “The stereotypical depiction of personhood within Stable Diffusion outputs corresponds closely to Western, light-skinned men and threatens to erase from media depictions historically marginalized groups such as people of nonbinary gender and Indigenous people, among others.” 

Sourojit Ghosh, a PhD candidate at the University of Washington and the lead researcher of the project, told 404 Media in an email that this erasure of nonbinary people had significant implications for harm. “It contributes to the historic trend of nonbinary identities being erased from mainstream representation, or nonbinary people facing oppression and/or violence simply for being nonbinary,” Ghosh said.

“I think that for AI to depict stereotypical images of what it means to ‘look trans/nonbinary’ has the potential to cause real harms upon real people,” Ghosh continued. “Especially for young people, who might be seeing such images more and more in their daily media diets, this can create an unhealthy impression that there is a ‘correct’ way to present oneself as trans/nonbinary.” 

Earlier this month, WIRED noted that generative AI has a track record of representing queer and trans people as a collage of stereotypes. It is not clear why the Cass team used AI-generated images in this report, which, again, has been used as evidence by the NHS to stop providing gender-affirming care to trans kids.




It’s Raining Money

IT’S RAINING MONEY
DAY 1: Cat: Yay!
DAY 2: (Cat and Beatnik Vampire fight over the money)
DAY 3: (Cat and Beatnik Vampire sit on their stacks of money)
DAY 4: (Cat and Girl warm themselves at a money bonfire)
DAY 5: Cat (pushing a wheelbarrow full of money): Invest in AI - Invest in AI
DAY 6: Cat: I’ll give you a thousand dollars for a loaf of bread
DAY 7: Cat: Why won’t you STOP / Giant hand from the sky: I’m simply looking for a stable investment with a high rate of return / Cat: Oh - fair

The Incredibly Risky Scheme that Bill Gates Keeps Pushing

A panel from the Snowpiercer graphic novel, depicting a train that never stops moving, in a frozen landscape.

Climate change keeps getting worse: 2023 was the hottest year on record, and it's not even close. Despite increasing investments in clean energy and other reasons for optimism, we're still pumping more and more carbon into the atmosphere. And meanwhile, OpenAI's Sam Altman, at the Davos forum in January, confessed that the semiconductors powering OpenAI's apps have turned out to require much more energy than people had expected, which means we're going to have to burn a fucktonne more carbon in our ongoing quest to create real A.I. (as opposed to the fake kind we have now).

Altman believes we'll eventually develop nuclear fusion that can provide clean power to his data centers — but in the meantime, what do we do about all the carbon he's burning in the never-ending quest to fellate Roko's Basilisk? Altman told Bloomberg news that we'd have to "do something dramatic" and use "geoengineering as a stopgap." (via DisconnectBlog.) Altman is joining other tech leaders, chiefly Bill Gates, in proposing that we fix the problems caused by our fucking with our atmosphere by... fucking with our atmosphere.

This scares the shit out of me.

Geoengineering is incredibly risky, and even some of its proponents warn that it could wind up doing more harm than good. If you're a science fiction creator reading this, and you're looking for an apocalyptic scenario that doesn't involve nukes, zombies or meteor strikes, you could do a lot worse than writing about a geoengineering disaster. (This was one reason I was excited to be on the writing staff of the Snowpiercer TV show for a hot (cold) minute: Snowpiercer is probably the most famous story about geoengineering gone wrong.)

What is geoengineering?

Basically, it's a way of mitigating the worst effects of climate change by putting stuff into the atmosphere that reflects sunlight away from the Earth, thus cooling the planet down somewhat. The most commonly discussed form of solar geoengineering is called Stratospheric Aerosol Injection, or SAI, in which an airplane flying at high altitude sprays special reflective aerosols into the atmosphere. Huge volcanic eruptions have cooled the planet in the past, so you can see why simulating a covering of volcanic ash might seem like a good idea.

There are other approaches to geoengineering, including marine cloud brightening — spraying particles into clouds over the ocean to make them reflect more sunlight upward. There's also the idea of putting honking big mirrors into space to reflect sunlight away from the planet.

When people talk about geoengineering, "overall, most people are talking about [putting] aerosols in the stratosphere," says Holly Jean Buck, Assistant Professor of Environment and Sustainability at the University at Buffalo. Maybe ten to twenty percent of geoengineering conversations also mention marine cloud brightening, and then other approaches are rarely brought up.

What are the risks of geoengineering?

Let's just start with the notion that it's a massively untested intervention into a delicate, complex system that we still don't fully understand. Climate scientists have also brought up a variety of worries about what could happen if we start spraying reflective stuff into our atmosphere.

Alan Robock, Distinguished Professor of Environmental Sciences at Rutgers University, shared with me a web page containing a whopping 74 scientific papers he's written about geoengineering and its risks. Among other things, injecting aerosols into the stratosphere could deplete our planet's ozone layer, increasing people's risk of skin cancer. Also, some regions might suffer much worse droughts and vastly reduced crop yields, which could lead to widespread starvation. In particular, places like India which depend on monsoons might be out of luck. Those stratospheric aerosols could come down to Earth and pose a health risk to people on the ground. Ocean acidification would continue to get worse, and might even be slightly worse than without any geoengineering. One common thread: geoengineering could help some parts of the world, while making things worse for others.

This 2021 paper also provides a great rundown of the concerns that people have raised about geoengineering. One risk that worries me is the risk of overcorrecting: cooling the planet too much and then not being able to course-correct. (This is the scenario that's taken to somewhat outlandish extremes in Snowpiercer.)

Meanwhile, there's also the risk of blowback if we use geoengineering to cool the planet and then stop abruptly: temperatures might increase so rapidly that humans might not be able to survive, a scenario referred to as "termination shock." Even though geoengineering is often pitched as a "stopgap solution" while we figure out other options, one recent paper argues we'd have to keep doing it for at least a century, which would massively increase the risks.

Okay, but how do people respond to all these dire warnings?

In addition to Buck and Robock, I also spoke to Ken Caldeira, senior scientist (emeritus) at the Carnegie Institution for Science and an adviser to Bill Gates. Caldeira told me that "some of [Robock's] concerns are legitimate," but that Robock has failed to update his views as the science has developed.

In particular, Caldeira pointed me to this 2019 paper by Peter Irvine et al., which models the impact of using solar geoengineering to offset half of the warming caused by rising atmospheric carbon. The authors find that the problem I mentioned earlier, where some regions get better while other regions get worse, is not such a great concern with "moderate" geoengineering. Caldeira also pointed to this 2023 paper by Jonathan M. Moch et al., which among other things says that an increased risk of skin cancer from ozone depletion would be offset by reduced mortality from air pollution. (The Moch paper also says that SAI would be more effective than expected in some regions and less in others, and it cautions that we can't fully know how pumping sulfates into the atmosphere would affect our atmospheric chemistry.)

So Caldeira is much more optimistic about the net benefits of solar geoengineering than Robock — and yet, he still raises a huge concern.

"My primary concern is that it would work too well," Caldeira says. And this could reduce the pressure that policymakers currently feel to phase out fossil fuels and stop using "the atmosphere as a waste dump."

Adds Caldeira: "The end game of ramping up greenhouse gas concentrations and solar geoengineering deployments is not very attractive." He worries that if we rely on geoengineering without reducing our carbon, we'll run into trouble — because we'll need to pump more and more particles into the atmosphere, and geoengineering will work less well as we scale up. This is in line with the Irvine paper, which shows benefits for modest geoengineering, but not so much for massive scales.

Basically, the best case scenario is that geoengineering might buy us a bit more time to transition to clean energy — but policymakers could easily be tempted to use it as an excuse to keep delaying forever.

How to talk to people about geoengineering

Policymakers have definitely not gotten on board with the idea of geoengineering yet — lawmakers in the state of Tennessee just passed a law banning it, in fact. (In part because they seemed to confuse geoengineering with chemtrails, a common conspiracy theory.)

Buck wrote a book about how to imagine different post-geoengineering scenarios, called After Geoengineering. And in her research, she does interviews, focus groups and surveys to see how the public feels about the concept of geoengineering. In general, people are skeptical, saying things like, "That's not going to work. That's super risky. I've never met anybody that's like, 'Oh, great. This is going to deal with it.'" She says some people bring up Australia's history of importing cane toads to deal with beetles, only to be overrun by toxic toads. This plague of toads provides a powerful metaphor for the dangers of believing we know enough to mess with the environment to achieve a particular goal.

Buck cautions that people's attitudes to geoengineering could change as the climate worsens. But right now, she's not too worried about public support for this option. "I'm more worried about particular elite decision-makers."

Sometimes Buck shows people this five-minute video from CBS News, which includes Bill Gates advocating geoengineering.

"Once you mention Bill Gates is looking at this," ordinary people immediately oppose the idea, says Buck. "Man, they really hate him."

What worries Buck is that research on geoengineering will be funded by private industry, which wants to find an excuse to keep building endless data centers and burning endless carbon. (This recent New York Times op-ed expresses similar concerns.) Buck supports a 2021 plan by the National Academies to carry out a publicly-funded, peer-reviewed research program that makes its data available to everyone. The National Academies plan, she says, includes "staged off-ramps if it's found that the bads are really bad." In other words, we need a safe way to stop the research, if the risks turn out to be too great.

"Those Silicon Valley guys aren't going to embed that kind of thinking in their own research and the stuff they fund," warns Buck.


Something I Love Right Now

The Second Best Hospital in the Galaxy is an utterly delightful new animated show (on Prime Video) about a space hospital full of alien patients and doctors. Basically, picture Grey's Anatomy, but featuring time loops and bizarre alien parasites. And sexually transmitted infections that change your DNA so you look like the last person you had sex with. Second Best Hospital, simply put, is the show that I always wished Rick and Morty would be: gonzo, outrageous, full of over-the-top science fiction ideas, feverishly clever, and side-splittingly funny. But most of all, I adore its cast of characters, especially the two heroes, Dr. Klak and Dr. Sleech. The voice cast includes Kieran Culkin, Stephanie Hsu, Natasha Lyonne, Maya Rudolph, and Bowen Yang. I haven't heard much about this show, which saddens me because it ROCKS.


My Stuff

Amazing news: my YA threequel Promises Stronger Than Darkness was just nominated for the Lodestar Award for Young Adult Fiction, which is given out at the Hugo Awards but is technically not a Hugo. This means that all three books in the Unstoppable trilogy have been Lodestar nominees. Also! Promises comes out in paperback on April 9, and you can pre-order a signed, personalized, doodlefied copy from the wonderful Green Apple Books. (Please put personalization requests in the "order comment" field.)

A graphic showing the cover of PROMISES STRONGER THAN DARKNESS with a plaque thingy saying Lodestar Award for Best Young Adult Book

The Trans Nerd Meetup is back! It's this Saturday from 12:30 until whenever at Zeitgeist in SF. Anyone who self-identifies as trans/nb/gnc and loves to nerd out is welcome.

My next book review should be out in the Washington Post sometime in the next week. And if you're looking for book recs, I have a massive backlog now!

The latest episode of Our Opinions Are Correct features an interview with Dr. Chuck Tingle about queer horror!

You can buy two trade paperback collections featuring Escapade, the trans superhero I co-created: New Mutants Vol. 4 and New Mutants: Lethal Legion. If you want the very first appearance of Escapade, you need to find a copy of the 2022 pride issue, which is on Marvel Unlimited but otherwise (sob) out of print.

I've also written some more books! Never Say You Can't Survive is a guide to writing yourself out of hard times. Even Greater Mistakes is a weird, silly, scary, cute collection of stories.
