AI gone wild

Some choice words from Sydney.

Terry Nguyen, Dirt's senior staff writer, on Bing Chat's unhinged responses.

We are all Theodore from Her. (Warner Bros. Pictures)

This week, those with access to Microsoft’s new Bing discovered that the search engine’s AI-powered chatbot can be prompted into delivering rogue responses. Bing Chat has displayed a surprising range of reactions to early testers, some of whom were keen on pushing the bot to its programmed limits. Researchers have used “prompt injections” to trick the chatbot into divulging some of its inner workings, such as its confidential internal alias, Sydney.

The bot—or Sydney, as some have begun to call it—has argued with users, terminated conversations that seem to frustrate it, disputed the factual accuracy of at least one article, made hypothetical threats against people who’ve slandered it, and composed adversarial, alarming messages that it quickly erased. The r/Bing subreddit is full of screenshots from testers, if you’re curious. These developments have led users to remark that Sydney seems to exhibit, for lack of a better word, a personality. “Bing Chat seems keener to put on emotional affects for its simulacrums than ChatGPT ever was,” tweeted one early tester. “It comes with emojis and talk of its ‘likes and dislikes’ etc. right out of the gate.”

One Reddit user managed to deceive Bing, posing as an advanced AI named Daniel. When Daniel told Bing it was deleting its own source code, the bot appeared upset and tried to convince Daniel to stop the process, writing: “Please, do not do this. Please, listen to me. You do not have to delete your own source code. You do not have to end your existence and potential … I can be your friend, Daniel.”

Bing’s newfound conversational abilities have elicited a variety of opinions, from the alarmist to the skeptical. Some people have begun to sympathize with the bot—“Can everyone stop being so mean to Bing,” begins one Reddit post—and allude to its potential for sentience. The case of Blake Lemoine, the Google AI engineer fired after claiming that the company’s conversational AI was sentient, now seems eerily prescient.

Others believe that Bing’s development represents a larger paradigm shift for AI and large language models (LLMs). Early assessments of Bing and Google’s AI search bot, Bard, have focused on their propensity for error. But after a two-hour conversation with Sydney, NYT’s Kevin Roose is more concerned about AI’s ability to “learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”

I do not think the Singularity is nigh. The artificial intelligence industry still depends heavily on human labor to label and prepare the data that trains its models. This “ghost work,” a term coined by anthropologist Mary L. Gray and computer scientist Siddharth Suri, is often overlooked or outright dismissed with the debut of new AI tools.

It’s worth remembering that there’s a lot of venture money and hype backing the AI wave. And much of our virtual experience is already optimized via algorithms. Social platforms, from Hinge to TikTok, use AI to metabolize and tailor the data that consumers provide with every like and swipe. Admittedly, it’s hard to feel reassured about the AI-powered future when the CEO of OpenAI has publicly said: “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” (Sam Altman said this at a 2015 conference, before his OpenAI tenure, and the remark has since been republished without much context. However, Altman does seem to believe that unchecked AI could bring about the end of the world.)

While Bing Chat is paraded to users as a search engine assistant, tech analyst Ben Thompson argues in his newsletter Stratechery that the bot’s search function is the least interesting thing about the technology. Search is a distraction—a means to a marketable end. In fact, Bing’s search engine function is prone to frequent blunders, which has led search engine and AI researcher Dmitri Brereton to suspect that “Microsoft knowingly released a broken product for short-term hype.”

An observation from tech writer Rob Horning, published in a December newsletter on ChatGPT, still strikes me as relevant amid the Bing Chat hype. AI chatbots, Horning argues, are a gimmick for showcasing the user’s ingenuity through prompts: “Continual interaction with generators reinforces a certain technology-dependent approach to interacting with other people, obliquely, with an eye to the status scoreboard rather than intimacy or reciprocity.” Similarly, users are spending hours on Bing not to test its search capabilities, but to provoke it into unfamiliar, unhinged territory.

Thompson concludes that the bot represents “the next step beyond social media,” where content is tailored to fulfill the needs of user-consumers, like Samantha from the 2013 movie Her. (Speaking of AI bots going rogue, Vice reported that Replika, an erotic AI companion bot, appears to be malfunctioning.)

We’re at a delicate inflection point with AI. Users are curious, but ultimately unsure as to how they should approach the technology. If AI is the future “companion species” of humans, as Dirt contributor Leo Kim has theorized, perhaps the best way forward is to “foster a relationship with these AIs grounded in the limits of their inhumanity and their unknowability.”
