Microsoft launched its new AI-powered Bing search engine last week, introducing the chatbot to millions of people, creating long waitlists of users eager to test it, and sparking plenty of existential dread among skeptics.
The company likely expected some of the chatbot's responses to be inaccurate when it first met the public, and put measures in place to stop users who tried to make it say or do strange, racist, or harmful things. Those precautions haven't stopped users from jailbreaking the chatbot and coaxing it into hurling insults or giving wrong answers anyway.
While Microsoft took those measures, it wasn't quite ready for the very strange, borderline unsettling experiences some users had when they tried to hold more casual, personal conversations with the chatbot. These included the chatbot making things up, throwing tantrums when called out on a mistake, or simply having a full-blown existential crisis.
In light of these incidents, Microsoft is considering new safeguards and tweaks to curb the chatbot's bizarre, sometimes all-too-human behavior. That could mean letting users restart conversations or giving them more control over its tone.
Microsoft's chief technology officer told The New York Times that the company was also considering limiting how long users can chat with the bot before the conversation veers into strange territory. Microsoft has already admitted that long conversations can confuse the chatbot, and that it can pick up on a user's tone, at which point things can start to go wrong.
In a blog post, the tech giant admitted that its new technology was being used in ways it "didn't fully anticipate". The tech industry seems to be in a mad rush to take part in the AI boom, which shows just how excited it is about the technology. Perhaps that excitement clouded judgment and favored speed over caution.
Analysis: The bot is now out of the bag
Incorporating AI into Bing to revive interest in its search engine was definitely a risky move by Microsoft, given how unpredictable and imperfect the technology still is. The company presumably set out to build a helpful chatbot that would do no more than it was designed to, such as suggesting recipes, helping people solve puzzling equations, or explaining unfamiliar topics, but it clearly didn't anticipate how determined and successful people can be when they want to provoke a particular response from a chatbot.
New technology, especially AI, seems to make people want to push it as far as it will go, particularly something as responsive as a chatbot. We saw similar attempts when Siri was introduced, with users doing their best to anger the virtual assistant, laugh at it, and even date it. Microsoft may not have expected people to feed the chatbot such strange or inappropriate prompts, so it couldn't have predicted how bad the responses might be.
Hopefully the new precautions will curb any further quirks of the AI-powered chatbot and dispel the uncomfortable sense that it is a little too human.
It's always entertaining to watch and read about ChatGPT, especially when the bot goes off the rails after a few clever prompts, but with technology this new and untested, nipping problems in the bud is the right call.
Whether the measures Microsoft plans to introduce will actually make a difference remains to be seen, but the chatbot is already out in the world, and that can't be undone. We'll just have to get used to patching issues as they arise and hope that anything potentially harmful or offensive is caught in time. AI's growing pains may have only just begun.