I asked ChatGPT to write an article about trans athletes. It didn’t go well

OPINION: Large language models are a threat to queer journalism that we should be taking very seriously

One of the defining features of the early 2020s has been the tech industry’s serial obsession with whatever technology is shiniest at the moment. These obsessions have typically been glorified get-rich-quick schemes, like NFTs and cryptocurrency, each exploding with hype before the bubble burst, leaving investors out in the cold. The latest in this long line is large language models, or AI, as they have more commonly come to be called.

Large language models are computer programs trained on enormous collections of written work; from that data, they learn a probability distribution over words, which they then use to generate new text one likely word at a time. It’s not really artificial intelligence. That label seems to be marketing jargon used by techies to make their language algorithms more appealing to the general public.
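To make that mechanism concrete, here is a toy sketch of next-word prediction in Python. It is my own minimal example, nothing like ChatGPT’s actual architecture: it counts which words follow which in a tiny made-up corpus and then samples from that probability distribution. Real models are vastly larger and more sophisticated, but the core statistical idea is the same, and so is the limitation: the output can only recombine patterns already present in the input text.

```python
import random
from collections import Counter, defaultdict

# A tiny stand-in for the mountains of scraped writing a real model trains on.
corpus = ("trans athletes deserve fair coverage and "
          "trans athletes deserve accurate reporting").split()

# Count how often each word follows each other word (a "bigram" model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Sample the next word from the learned probability distribution."""
    candidates = following[prev]
    if not candidates:  # the model has never seen this word lead anywhere
        return None
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# Generate text by sampling word after word. The result sounds plausible,
# but it can only ever reshuffle words already present in the corpus.
word = "trans"
output = [word]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Scaled up to billions of parameters and trained on the open web, the same statistical trick can convincingly mimic a particular writer’s cadence, which is what makes everything described below possible.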

But regardless of what we call them, large language models are a threat to queer journalism that we should be taking very seriously.

Several online publications have already taken steps to publish journalism produced by “AI.” It hasn’t always gone well (see the Gizmodo Star Wars listicle that went viral for being hilariously incoherent, or the recent Microsoft travel listicle that suggested tourists in Ottawa visit the city’s local food bank). It’s easy to see why media owners are so keen on large language models: writers and editors are expensive, while computers are significantly cheaper to buy and run. If you can produce written work that gets views without paying real humans to produce it, you can dramatically increase your profits.

We’re also seeing this play out in the current WGA strike, in which the future use of AI in writing is under heavy dispute.

But that comes with an untold cost as well. It would erase the human element behind writers’ words and strangle literary and journalistic innovation. Large language models, by design, need existing text as input in order to produce any output at all. If all of our future journalism is produced by computers, it will consist essentially of recycled words and phrases, incapable of change.

I first became concerned about this after a conversation with a writer friend of mine. She explained that she could ask ChatGPT, a prominent large language model, to write an article in her voice. The program would produce something that read like her work, but it was empty and lacked coherence. It reused some of her more common phrases without making any argument she would actually have made.

Curious, I signed up for ChatGPT myself earlier this week and asked it to “write a 700 word Katelyn Burns article on trans athletes,” an issue I’ve written about frequently during my eight-year journalism career. What the program produced took my breath away.

The piece read like something I would write. It matched my cadence and the sentence and paragraph structures I use most often. But the words were empty, full of platitudes about general inclusivity. There was nothing about the complex hormonal science that is commonly misunderstood by the public and that has been a staple of my work on the topic.

Worst of all, the program produced numerous fake quotes purporting to be from me personally. I ended up Googling each attributed quote, only to discover that I had never written those words and phrases.

My initial concern was that a publisher might someday want the work I produce without wanting to pay me for it. ChatGPT and other large language models give them a way to cut me, my brain and my work out of the picture for their own profit. I’m already a freelancer living at the lowest edge of the middle class; more widespread adoption of language-based AI could be devastating for me personally.

But as I thought about this more, I grew increasingly angry. Most media organizations are run by cishet people, especially at the ownership level. With language AI programs, they can now get “queer” perspectives that don’t involve any queer people, simply by loading up a computer algorithm that steals queer people’s words.

If we accept the premise that lived experience enhances journalism, especially for marginalized demographics, then relying on a computer algorithm erases that dynamic. A computer doesn’t feel or experience discrimination; it can only draw from previously written work and use its programming to reproduce similar sentences. When a new issue pops up, as has happened with trans people over the last several years, a computer can’t generate a unique take grounded in personal or even professional experience.

Beyond that, there’s the issue of accuracy. With such a deluge of false information being peddled in the press lately on trans issues, we can’t trust a computer to generate reliable and accurate journalism. A computer is fundamentally incapable of fact-checking the input it receives.

This doesn’t just go for LGBTQ2S+ people, either: any marginalized journalist should be concerned about the direction written media is headed with language AI programs. It reminds me of The Little Mermaid, in which Ursula steals Ariel’s voice and then uses it to woo Eric away from her.

AI threatens to hijack our collective voice as queer and trans writers, stealing our past work to produce empty, platitude-filled garbage for the profit of rich cishet people. We must oppose this; it’s a matter of protecting our own speech.

Katelyn Burns is a freelance journalist and columnist for Xtra and MSNBC. She was the first openly trans Capitol Hill reporter in U.S. history.
