Sarah Palin Forever

A fake deep fake.

A still from Sarah Palin Forever. (Courtesy of Eryk Salvaggio)

Terry Nguyen, Dirt's senior staff writer, interviews artist and researcher Eryk Salvaggio.

Eryk Salvaggio describes his AI-generated short film Sarah Palin Forever (2023) as a “fake deep fake.” Salvaggio is a new media artist and researcher who adopts both a creative and critical approach to AI. “For me, art and research go hand in hand,” he says. “I’m able to experiment and test the technology, while also thinking reflexively about the tools, the produced results, and its impact on culture.” Sarah Palin Forever functions as a meta-critique of today’s deepfake-saturated media ecosystem, using AI image generators to mimic the warped realism of photojournalism slideshows, a popular form of political storytelling in 2008.

I was unsettled by the 12-minute short when I first watched it in early May—not because the images appeared convincingly real (they weren’t), but because of the film’s horror-story plot. The unnamed narrator is a 17-year-old girl, who speaks in a Palin-like voice and claims to have spent her “entire life inside a Sarah Palin campaign rally,” specifically the 2008 airport hangar rally in Bangor, Maine. She is trapped in a three-hour Groundhog Day time loop with her mother, who first arrived at the rally pregnant with her. While her mother has memories of the real world and “real time,” the narrator does not. The Sarah Palin campaign rally is all she knows. Sarah Palin is all she knows.

Unlike the creators of most political deep fakes, Salvaggio did not intend to deceive the viewer. Rather, the film is darkly satirical, “an exercise in the tools and language of deep fakes.” Salvaggio wrote the script himself, fed it into Midjourney, and stitched together the generated stills, while the narrator’s voice was produced by a deepfake vocal synthesizer, trained on thousands of hours of Palin speaking. In early June, the film was selected to screen at the RAIN Film Fest in Barcelona later that month, alongside nine other AI-generated shorts.

I spoke with Salvaggio about Sarah Palin Forever, AI’s impact on satire, using vs. critiquing AI, and how to read an “AI image,” a piece originally published on Salvaggio’s Substack. Our conversation has been edited for clarity.

* * *

Terry Nguyen: Do you consider yourself mostly an artist or a researcher?

Eryk Salvaggio: For me, art and research go hand in hand. It’s a creative research practice. I’m able to experiment and test the technology, while also thinking reflexively about the tools, the produced results, and its impact on culture. I often wonder, What if this tool was made differently? It’s a useful question because so much of technology is driven by a dominant opinion, dominant voices with a certain set of priorities. I’m interested in using technologies in unintended ways. Art is a great space to play and figure that out.

TN: How did you get into AI? Was it kind of inevitable, given your interests in emerging technologies?

ES: I was really curious about GPT-2 when it was released in 2019. At that point, large language models were completely new to me. I started creating these performance scores, which were inspired by the Fluxus movement. In the 1960s, Fluxus artists would write these prompts, essentially scripts or instructions, that were passed around so people could perform them. I started putting some of these Fluxus pieces into GPT-2 to generate 100 performance pieces called “Fluxus Ex Machina.” I became really curious about how LLMs worked. And even once I knew how GPT worked, I still felt tempted to act like it was a collaborator even though I trained the data set myself and input it. My first instinct was that the machine was an artist. But I knew that it wasn’t. That tension between what the machine was making me want to believe and what I knew to be true is full of contradictions that I’m still exploring.

AI isn’t going to generate an ironic image, but you can use the images it generates ironically.

TN: After watching Sarah Palin Forever, I thought about social media’s impact on satire. It’s hard to parse earnestness from irony from meta-irony on the feed because of context collapse. Do you think AI could be used to contribute to a new form of satire?

ES: AI isn’t going to generate an ironic image, but you can use the images it generates ironically. You can recontextualize the images it makes, which I think is a powerful way of using these tools. You can use the model to reveal its inner logic. Most people seemed to know that those AI-generated Trump arrest photos weren’t real. It didn’t seem to create any sort of hysteria. But there was that deepfake of a Pentagon explosion that caused the stock market to dip for about 20 seconds. It’s interesting to think about AI’s misinformation potential, and to recognize the thin line between satire and misinformation, irony and misinformation. I’ve noticed people say how it’s not worth being funny anymore on social media. For example, that image of Trump getting arrested. It was first shared ironically, but then it was taken out of context. Can you even comment on misinformation when you are generating images that have the potential to misinform or confuse people? Irony has become really tricky in that regard.

I do think people will get better at recognizing AI-generated images. The current suite of tools is still based on patterns, and I think we will adapt. The technology is going to get better, but I hope our literacy will also improve.

TN: You have several pieces on your Substack which try to deconstruct the meaning of an “AI image,” which I wish I’d read before seeing Bennett Miller’s DALL-E photo exhibit. In “Diagram of an AI Image,” you argued that there’s no concrete definition of an “AI image” because the terms—“AI” and “image”—are so loaded. Instead, you suggested assessing an AI image from four different aspects: data, interface, image, and media. How did you come to that four-part framework?

ES: My goal with that framework was to introduce more nuance into our conversations about AI images. It’s lumped under one term when it might be better to think about AI by breaking down how the system works. My background is in applied cybernetics, which is about figuring out systems and the many interactions that occur within them. With AI, we have to start with the data: Where did it come from? How are companies using and sourcing the data? Who has given permission and who hasn’t? The AI model itself then compresses the data, analyzes it, categorizes and labels it, oftentimes in problematic ways. This occurs behind the scenes. Then, you have the interface, which is what users interact with to produce images. These images are a product of the AI system, but they’re also circulated into the greater media ecosystem. They’re affecting and changing media as we know it. For example, when people Google “Vermeer,” they’re beginning to encounter AI-generated versions of Vermeer, in addition to real Vermeer paintings.

I’m seeing a lot of tense discussions about ethics right now. Often, they’re about different parts of the system or the AI process. For example, I sometimes see people criticize the data aspect of image generators, how the data set is sourced unethically. But the people who are making these images, the “prompt artists,” aren’t approaching those criticisms as systematically. They’re more focused on the image result. It’s important to remember that everybody is part of this system. You have to be aware of your role to navigate it responsibly.

TN: What frustrates you most about discourse around AI right now?

ES: The extinction stuff is pretty reactionary. I remember when machine learning was applied to play Super Mario Brothers in, like, 2016 and people had a similar reaction. It’s frustrating that extinction is what we end up talking about though. What about AI and surveillance, and how it’s being used at a state level? There needs to be more real concern about AI’s impact on people’s lives in the short term. Extinction is a distraction from other conversations. I’ve been in situations where I’ll be talking to policy people and we’ll start talking about how AI has inherently racist tendencies. What’s shocking is that people assume that the machine will solve these biases. They think AI will fix it or develop in such a way that these issues will be resolved. And when they’re resolved, the real problem is that AI will want to kill everybody.

TN: Some people might argue that you shouldn’t use ChatGPT or image generators because of how their data is sourced. How do you balance using vs. critiquing AI?

ES: I will admit that it’s pretty nebulous. In the past, the approach of using technology to critique and understand it has been pretty cut and dried. With HTML, you can make your own code, design your own website, without being intrinsically linked to a company like Amazon or the surveillance capitalism model it’s built upon. But increasingly with AI, because of the way these models are built, it’s becoming much harder to exist outside of the systems you’re using. I find that really frustrating. It has generated a lot of criticism. Studying AI isn’t an endorsement.

* * *

AD: TAKE THE MONEY AND RUN

NOR RESEARCH STUDIO invites you to HOW TO COMMUNICATE IN WHITE PEOPLE, a counter-institutional grant workshop that demystifies how artist-creatives can exploit grants to their own ends. Organized into recurring 90-minute sessions, the workshop is conducted in an open-inquiry format where participants have the opportunity to review sample proposals that have received institutional recognition, compare dummy applications, and receive feedback on their own proposal materials.

The workshop is hosted on Zoom one to two times per month and tickets are $25 per session. The next session will be held on June 17, 2023 at 3pm ET. Participants are also encouraged to join us for MONEY WOES, another recurring seminar where the studio offers tips for remedying self-employment-related labor disputes and financial grievances. Email [email protected] for more information.

* * *

PLAYBACK

Snippets of streaming news — and what we’re streaming.
  • Doja Cat is releasing new music on Friday, and I can’t tell if the first part of this teaser video is AI-generated or edited to look eerily warped.
