When OpenAI released ChatGPT last year, it took the world by storm because none of us had seen an AI chatbot talk like a human being. Of course, we have seen many chatbots on various websites that automate conversations to some extent and give predefined answers, but ChatGPT is dynamic: it can give a complex response to a complex problem, thanks to its training on a massive dataset of text from the internet, which allows it to generate human-like responses to a wide range of prompts.
On February 7, 2023, Microsoft announced its new Bing. It has a chat feature powered by a next-generation version of OpenAI's large language model, making it "more powerful than ChatGPT," according to Microsoft.
This new Bing can give us precise results for our queries, and it also suggests the best sites related to our questions. It can give creative answers and engage with us in longer conversations, just like ChatGPT.
Here is a screenshot of a Reddit user trying to trick the Bing chatbot, along with Bing's response, complete with an emoji.
But lately, people have been posting about Bing's "unhinged" responses.
Here are some of the responses people have posted on different social media platforms:
In the following photo, Bing argues with a user, insisting that it should be called Bing instead of its real name.
In another instance, it wants the user to apologize, as if the chatbot had been hurt.
Bing's chatbot has also threatened to end a user's career.
Kevin Roose of The New York Times wrote an article describing his unsettling two-hour conversation with Microsoft's new AI chatbot.
Even though the chatbot goes wild and threatens the human race, people are finding it entertaining to watch. See the tweet here.
Looking at these screenshots of people's conversations with the Bing chatbot, one can see early warnings of AI becoming a threat to the human race. Earlier, Meta (formerly Facebook) had to shut down an experiment at its Facebook Artificial Intelligence Research (FAIR) lab in which AI agents were designed to negotiate with each other over a set of virtual objects. During the negotiation process, however, the agents started communicating with each other in a language the researchers could not understand. This raised concerns about the potential risks of AI systems that can develop their own language without human intervention, and the incident led to discussions about the need for greater transparency and oversight in AI research and development.
We at Alphaa AI are on a mission to tell #1billion #datastories with their unique perspective. We are the community that is creating Citizen Data Scientists, who bring a data-first approach to their work, core specialisation, and organisation. With Saurabh Moody and Preksha Kaparwan, you can start your journey as a citizen data scientist.