Every new development in media and technology means a new opportunity to see a reflection of our own queer identities. With the generative AI turn at the end of 2022, I—like so many others—experimented with ChatGPT, DALL-E, Bard (now known as Gemini) and other AI-infused platforms to see what queerness might look like through an AI-generated lens. Like many others, I was both delighted and terrified by the results.
DALL-E, for instance, created images that almost looked like real people when prompted with text-based inputs that included “gay” or “queer.” The platform has since become notorious for getting certain features wrong—hands and feet, for example—but figures whose wrists bent so far back as to appear broken gave me pause. Was the output I found your standard, run-of-the-mill glitch, or was this a queer stereotype deeply embedded in the model’s training set?
On my Instagram feed, queer AI art has become increasingly common. Some users, like Emily Martinez and Jake Elwes (the brilliant mind behind the absolutely unsettling Machine Learning Porn), push against generative AI’s tendency toward homogeneity, wielding queerness as a tool that might reshape our understanding of AI. Much more frequently, however, I encounter perfectly chiselled bodies, white and sweaty, in hypermasculine settings: images produced by a training set that might as well consist solely of bara (beefy gay manga) and Tom of Finland. Like those earlier counterparts, so much of AI art seems poised to generate gay male fantasies. If these works are aiming for erotic consumption, the comments responding to the posts, so often in the form of fire and sweat-droplet emojis, suggest that they’ve hit their mark.
Gymdreams8, an account with over 12,000 followers, places pairs of nearly identical young men—“boyfriend twins,” if you will—embracing on rugby fields, in construction sites, even in historic locations like the Colosseum. In a series created by another popular account, even the couple’s infant son ripples with muscles. On his official website, the artist behind Gymdreams8 writes that his work is meant to explore “representation” bridging emotions and an “inability to fully elucidate them through logical reasoning.” The idea that AI art can produce new ways of grappling with queer themes is an exciting one, but I linger on the question of what these works do in fact “represent.”
Since the initial release of DALL-E 2, Midjourney and Stable Diffusion (all three of which first appeared in 2022), text-to-image models have improved in a number of ways. Much of the output has been fixed—or at least restricted—by new parameters, but new issues are constantly emerging as well. Last month, Xtra senior editor Mel Woods pointed out one such instance in connection with the Cass Review’s use of AI-generated images intended to represent queer and trans youth. The use of these fabricated images, which suggest that literally all non-binary youth sport short, pink hair, is just one of the ways that the U.K.-based report misrepresents its focus demographic.
The coloured hairdos of the Cass Review highlight the troublesome way in which AI art tends to reproduce stereotypes. This is the exact opposite of what we so often want when we seek out new forms of representation. In WIRED Magazine last month, Reece Rogers observed that this limiting effect holds across all generative AI platforms, and for all queer identities. “Lesbian women are shown with nose rings and stern expressions. Gay men are all fashionable dressers with killer abs.” The result, Rogers observes, is “a simplistic, whitewashed version of queer life.” A 2023 study of text-to-image models similarly suggests that the stereotypes these models reinforce are worst when they attempt to depict trans and non-binary identities. According to the study’s authors, these images “consistently (mis)represented” trans and non-binary individuals “as less human, more stereotyped and more sexualized.”
The fact that AI art so frequently produces more of the same led to a series of more comical—yet still troubling—examples that one user of Adobe’s stock image site recently highlighted. The user reported that the service’s corpus of AI-generated images meant to represent queer couples had an “incestuous vibe” that was “deeply offensive.” Looking at the examples that this user included in their post reveals a number of images that portray gay and lesbian couples as mirror images of one another, suggesting—it seems—that every queer couple looks to be closely related. “I suspect the contributors that submit these [images] are probably not in the LGBTQ+ community,” concludes the unsatisfied Adobe user.
This final comment raises a number of broader questions. What qualifies as “queer AI art”? Does it have to be created by someone, as the Adobe user above suggests, who is “in the LGBTQ+ community”? The question is difficult enough to answer even without considering AI: does queer art need to come from queer artists, authors, filmmakers and so on, or is it more about the content that’s being depicted? Then again, maybe it’s about sparking queer desire—some type of feeling or affinity—in an audience of consumers? Adding AI into the mix means adding another layer of abstraction; if queer AI art refers to the identity of the creator, who is the creator? Is it the user who types out the prompts, the programmer designing the model or the artist whose creations are used as training data for the model (and whose work remains uncompensated)?
Kay Siebler published the book Learning Queer Identity in the Digital Age before much of the development that’s come out of our AI spring (the ongoing boom in the field of artificial intelligence). Yet many of the observations she raised in her 2016 text continue to speak to the present. Through focus group studies and in-depth media analyses, Siebler observed that despite growing access to LGBTQ2S+ representation, the images we’ve gained access to remain constricting: limited and limiting. Siebler argued that this narrow depiction of queer individuals ultimately hinders the viewer. “Facebook, YouTube and fan lists/blogs more often reinscribe stereotypes of LGBT folks rather than disrupt them,” she wrote. For gay men, this leads to what writers like INTO contributor Phillip Henry have called “gay body fascism,” an insistent pressure that “tells gay men their worth is determined by their race, waist size and their proximity to masculine beauty standards displayed by straight counterparts.” Is queer AI amplifying these pressures? We would do well to heed Siebler’s warning, which just so happens to be a common refrain among digital culture researchers: be careful how our digital tools might shape us.
In a conversation posted to Queer in AI, an online and in-person community of queer AI researchers, computer scientist Sabine Weber and digital scholar Eddie Ungless addressed how “misrepresentation is baked into text-to-image systems.” It’s not just the training data, they observed (although there is that); so many of our society’s stereotypes run through every facet of our lives. It would be naive of us to think they won’t seep into our digital tools, especially those whose inner workings remain largely opaque to the average user. And it would be foolish of us not to suspect that a steady diet of “gay body fascism” will affect us in complex but often explicitly negative ways. Ungless proposed that we hit the brakes and reroute before it’s too late. “What we’re doing now is bad,” he wrote. “I don’t know what good looks like, but we need to start looking at alternatives.… Anything is better than what we have now.” Ungless, for one, seems intent on expanding beyond representation solely of boyfriend twins.
As with all things, we might make a list of pros and cons, benefits and harms, when it comes to queer AI art. Despite all of its flaws, glitches, limitations and controversies, users like Gymdreams8, aimusclestories and cuteguys.art have found it a worthwhile tool for representing gay male fantasies, many of which turn out to be rooted in the traditional and unvarying aesthetics of old physique magazines (now on digital steroids). The trouble comes when we consider that this is not everyone’s fantasy; those like Rogers, Weber and Ungless have all critiqued generative AI for the limits it places on queer representation. If we’re to conjure new fantasies, we’ll have to be intentional about creating inclusive prompts (on the part of AI artists) and algorithms that produce greater variation (on the part of AI programmers). In short: we’ll have to move beyond yesterday’s training sets.