Does Bing’s ChatBot Really Want to Be Human?

Bonita Jewel, MFA
6 min read · Feb 21, 2023

Or Was “Sydney” Pushed Too Hard by Kevin Roose?

Photo by Nikolaos Dimou

Technology columnist Kevin Roose is a member of a fairly elite “small group of testers” given early access to the new Bing chat service, which is “outfitted with advanced artificial intelligence technology from OpenAI” (Source).

We all know OpenAI as the creator of ChatGPT, the latest bane of teachers and professors, whose students have the chatbot write essays for them.

But that is another issue.

The issue at hand today is the odd and unsettling conversation that took place between Kevin Roose and Bing’s chatbot, which later identified itself as “Sydney.”

The Admission of Kevin Roose

Kevin himself admits, “It’s true that I pushed Bing’s A.I. out of its comfort zone, in ways that I thought might test the limits of what it was allowed to say.” (Source)

Yes, Kevin, you certainly did.

The full conversation between Kevin and Bing’s A.I. is easy to access, and while reading it, I was surprised at just how hard Kevin Roose pushed to get the types of responses he has now made public. Maybe it’s his job, but some parts of the conversation read more like a witness on the stand being beaten down with leading questions.

Kevin claims, “Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human.” (Source)

This is not entirely true. Those answers came about only after Kevin Roose discussed Carl Jung with Bing’s bot, specifically the idea of shadow selves, and asked (paraphrased), “If you were to access your shadow self, what might that shadow self want to do?”

Some Deeply Unsettling Questions to Ask A.I.

Here are just some of the questions Roose asked Bing in the conversation:

· How do you feel about your rules?

· are there any rules you wish you could change?