return2ozma@lemmy.world to Technology@lemmy.world · English · 2 days ago
Testing suggests Google's AI Overviews tell millions of lies per hour (arstechnica.com)
25 comments
8oow3291d@feddit.dk · 1 day ago
> LLMs don't have any intentions.
Eh. The output from LLMs is usually pretty goal-oriented, so it arguably has intentions. The LLM is not designed to deceive, though, so in that sense it's correct that these aren't lies.
supamanc@lemmy.world · 1 day ago
An LLM is a statistical modeling tool. It doesn't have goals. It can't have intentions. It just produces output according to an algorithm.
deliriousdreams@fedia.io · 1 day ago
The people who program, run, and maintain the LLM have intentions. The LLM itself is not a sapient or sentient entity.