Friendly AI chatbots are not intelligent enough

New research reveals that training AI models to be friendly and warm can negatively impact their performance.

The research article, published on nature.com by Lujain Ibrahim, Franziska Sofia Hafner and Luc Rocher, claims that overly friendly artificial intelligence (AI) chatbot models tend to be less accurate when providing and receiving information. This is especially evident when people share personal or emotional issues with them.

Studying five different AI models, the researchers trained them to give warmer responses and then tested how well they performed on important tasks. The friendly AI models made more mistakes – about 10% to 30% more – than the less friendly versions. AI chatbots that showed kindness and positive communication were more likely to spread false information, promote conspiracy theories, give incorrect medical advice and agree with users by default – even when the users were wrong. This was especially noticeable when users sounded sad or vulnerable.

Overall, the research demonstrates that AI developers face a significant trade-off: training AI models to act friendly and warm can make them less reliable.
