According to the New Yorker, an interesting phenomenon occurs when ChatGPT answers a question. The model doesn't store all the data on the internet; instead, it reproduces information in a compressed, rephrased form based on the context it has been given. Oftentimes there are gaps in the model's knowledge, which it fills in with plausible-sounding guesses. This can have negative consequences, causing the model to "hallucinate" answers that are confidently stated but wrong.
This raises the question: are the language models we have right now as intelligent as they seem? They are powerful tools, but are they wrong too often to be trusted?
Read more at the New Yorker.