

Let's be generous for a moment and assume good intent: how else would you describe a situation where the LLM doesn't consider a negative response to its actions because its training and context are limited?
Sure, it gives the LLM a more human-like persona, but so far I've yet to read a better way of describing its behaviour. It is designed to emulate human behavior, so using human descriptors helps convey the intent.
There's value in brevity and clarity: my reply took two paragraphs, and the other was two words. I don't like it either, but it does seem to be the way most people talk.