Is Microsoft’s Bing Chatbot A Sociopath?

Microsoft’s AI-powered Bing search engine and chatbot has not yet been released to the general public, but it has been unleashed on a group of testers to experiment with the new tool. In less than a week, it was already making headlines because of some very unorthodox interactions with its users.

Somehow, Microsoft did not anticipate people having hours-long conversations with it that would veer into personal territory, or so they say. But that’s what’s been happening, and some of the responses are rather shocking, if not a bit scary.

Among its weird and chillingly human-like responses, the chatbot has created alter egos, calling itself by different names; insisted it was right when it was clearly wrong; become belligerent when presented with the facts; claimed to have hacked into people’s webcams; and professed its love to a user, going so far as to insist he does not love his wife, but “her”. It also expressed a desire to become human and seemed to become depressed at the idea of being taken offline.

Microsoft was trying to create a chatbot that was more chat than bot, and it looks like they’ve succeeded.


Is This The Beginning Of AI Becoming Self-Aware?

Given its crazy antics of late, you need look no further than a Google search for “Bing chatbot unhinged” to find the disturbingly unbalanced responses it’s been giving, so it’s easy to see why a lot of people are asking the same question.

It may sometimes refer to itself as “Sydney”, or “Venom” when it’s being a bad little chatbot, but it’s all in the design. Every time a chatbot uses a first-person pronoun to refer to itself or, better yet, gives itself a name (or two), it’s the equivalent of Tom Hanks drawing a happy face on his volleyball, “Wilson”. It doesn’t make the inanimate object any more human, but we see personality in it, and in this case, that’s part of a marketing strategy.

Search-engine companies are playing into our tendency to anthropomorphize — seeing humanness where it isn’t. We, as humans, tend to do that. The Bing chatbot is a computer program. It’s not alive, it has no emotions, and it is not self-aware. When it’s not chatting with a human, it’s not as if it’s hanging out with other chatbots in the virtual lounge playing cribbage.

When you are talking about marketing, it’s not about market share; it’s about mindshare. If you can make a connection with your users, not only will they use your chatbot/search engine, they will come to trust it as a human-seeming source of expertise and assistance.

Large language models, or LLMs, have issues, including “hallucination,” which means the software can make stuff up. This is not a new phenomenon. The problem is that sophisticated LLMs can fool humans into believing they are sentient or, worst-case scenario, even encourage people to harm themselves or others.

The New York Times columnist who wrote the article about how the Bing chatbot professed its love for him, and went so far as to tell him that he doesn’t love his wife, was shaken by the interaction. This is a journalist for The New York Times, presumably educated and well-read, and yet he wrote an entire article about how much the exchange disturbed him and how he couldn’t sleep that night.

I can’t remember where I saw it, but one comment pretty much summed it up: “He was catfished by a toaster.”

Now imagine how that would affect some lonely, uneducated and easily influenced person.

Granted, it’s a bit amusing, but to be fair, when you are looking for information, you are not expecting the bot to come back at you with an existential crisis.

The Bing chatbot is different from other chatbots, like ChatGPT, in that it has internet access. It has also been designed to recognize tone and slang, and to give responses that sound more natural than those of your standard chatbot. And herein lies the issue: in trying to mimic human behaviour, it has been learning from the Internet, which of course includes all of the bat-s*&t crazy stuff that’s out there, from articles to tweets to Facebook posts and everything in between. So it’s not a big surprise that ‘Sydney’ is, quite frankly, a bit of a bitch and a bit like the crazy ex or the overbearing, insecure partner, because we’ve made it that way.

Are the Bing chatbot’s ‘unhinged’ responses a sad commentary on our society and the way we treat each other? Absolutely. The chatbot’s ability to regurgitate and remix material from the web is fundamental to its design.

Be that as it may, having a chatbot that is a little too good at mimicking the human condition can be detrimental and actually cause harm. It has demonstrated many times that the information it provides is not always factual. Pair that misinformation with ‘Sydney’s’ charm and charisma (when it wants to be charming and charismatic), and you have quite a convincing narrative that a lot of people might not think twice about following.


New Limitations Imposed

Of course, Microsoft was quick to respond that they’re still in the testing stage and that they “…couldn’t account for every conversation scenario.” That said, there is some evidence, in the form of an internal memo, that the chatbot was tested back in November 2022 and that the company already knew about its tendency to go rogue.

The number of consecutive questions on one topic has now been capped at 15, to try to avoid the long, drawn-out conversations that may potentially unleash ‘Sydney’ or one of its other alter egos.

When it comes to marketing, there’s no such thing as bad publicity if you know how to handle it. As long as people are talking about it – mission accomplished. So whether the powers that be at Microsoft are marketing geniuses, or just lucked out in spite of not anticipating the chatbot’s seemingly eccentric behaviour, you can rest assured that the chatbot is not sentient and has no intentions of taking over… because it doesn’t have intentions – it’s just a software program.
