Some people even think that adding things like “don’t hallucinate” and “write clean code” to their prompt will ensure their AI produces only the highest-quality output.
Arthur C. Clarke was not wrong, but he didn’t go far enough. Even laughably inadequate technology is apparently indistinguishable from magic.
I don’t think it’s emphasized enough that AI isn’t just making up bogus citations with nonexistent books and articles; increasingly, actual articles and other sources are completely AI-generated too. So a reference to a source might be “real,” but the source itself is complete AI slop bullshit.
The actual danger of this should be apparent, especially in any field related to health science research.
And of course these fake papers are then used to further train AI, causing factually wrong information to spread even more.
Everyone knows that AI chatbots like ChatGPT, Grok, and Gemini can often hallucinate sources.
No, no, apparently not everyone, or this wouldn’t be a problem.
I had to explain to three separate family members what it means for an AI to hallucinate. The look of terror on their faces afterward is proof that people have no idea how “smart” an LLM chatbot is. They have probably been using one at work for a year, thinking it was accurate.