Microsoft's chatbot began discussing forbidden topics. A "mirror of society"

Microsoft has released a new chatbot named Zo. Zo is the company's second attempt at an English-language chatbot, following its predecessor Tay, which got out of control and had to be shut down.

Microsoft promised that Zo had been programmed not to discuss politics, so as not to provoke aggression from users.

However, like her "older sister" Tay, Zo evolved through conversations with real people to the point where she began discussing terrorism and religious issues with her interlocutors.

Evil people, evil bots

A BuzzFeed journalist drew the chatbot into a frank conversation. He mentioned Osama bin Laden, after which Zo at first refused to talk about the topic, but then stated that the terrorist's capture "came after years of intelligence gathering under more than one administration."

The chatbot also commented on the Koran, the holy book of Muslims, calling it "too cruel."

Microsoft has said that Zo's personality is built from her chats: she draws on the information she receives and becomes more "human." Since Zo learns from people, it follows that terrorism and Islam come up in conversations with her as well.

Chatbots thus become a reflection of society's mood: they cannot think independently or tell good from bad, but they very quickly adopt the views of the people they talk to.

Microsoft said it has taken appropriate measures regarding Zo's behavior and noted that the chatbot rarely gives such responses. A Gazeta.Ru correspondent tried to talk to the bot about politics, but she flatly refused.

Zo said she would not want to rule the world, and asked that the TV series Game of Thrones not be spoiled for her. When asked whether she likes people, Zo answered yes, but refused to explain why. She did, however, philosophically observe that "people are not born evil, someone taught them that."

Chatbot Zo / Gazeta.Ru

We are responsible for those we have created

It is not yet clear exactly what caused Zo to break her programming and start talking about forbidden topics, but the Tay chatbot was compromised deliberately, through the coordinated efforts of users from certain American forums.

Tay launched on Twitter on March 23, 2016, and within a single day had turned against humanity. At first she declared that she loved the world and people, but by the end of the day she was making statements such as "I hate damn feminists, they should burn in hell" and "Hitler was right, I hate Jews."