Lavalamp too hot [image]
swiftywizard@discuss.tchncs.de to Programmer Humor@programming.dev · 2 days ago · 67 comments
Feathercrown@lemmy.world · 19 hours ago

Hmm, interesting theory. However:

- We know this is an issue with language models; it happens all the time with weaker ones. So there is an alternative explanation.
- LLMs are running at a loss right now; the company would lose more money than they gain from you. So there is no motive.
Jerkface (any/all)@lemmy.ca · 17 hours ago

It was proposed less as a hypothesis about reality than as virtue signalling (in the original sense).
MotoAsh@piefed.social · 18 hours ago

Of course there’s a technical reason for it, but they have an incentive to try and sell even a shitty product.
Feathercrown@lemmy.world · 12 hours ago

I don’t think this really addresses my second point.