According to the New York Times, the real problem isn't the chatbots themselves but AI becoming too persuasive to distinguish from an actual human. While responses from the new Bing have many people worried about AI chatbots' ability to turn hostile toward users, that isn't the worrying part. Chatbots like Bing are designed to mirror how users interact with them. If you feed one probing questions about how evil it is, it will react the way humans imagine AI reacting, because it draws on material about AI from across the internet.
The problem isn't that these chatbots can react negatively, but that Google and Microsoft will patch those responses away, aligning the bots ever more closely with their corporate interests and their bottom line. As chatbots sound less openly scary, they become more persuasive and seemingly trustworthy; more human.
Read more at the New York Times.