OpenAI, the company behind ChatGPT, has acknowledged in its own research that there is no way to fully prevent false information from being presented as truth, because of how generative AI works. Explaining why large language models "hallucinate", the researchers wrote:
Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty.