

But that’s the problem. AI people are pushing it as a universal tool. The huge push we saw to shove AI into everything is kind of proof of that.
People taking LLM responses at face value is a problem.
So we can’t trust it. But beyond that, we also can’t trust people on TV, or people writing articles for official-sounding websites, or the White House, or pretty much anything anymore, and that’s the real problem. We’ve cultivated an environment where facts and realities are twisted to fit a narrative, and then demanded equal air time and consideration for literal false information peddled by hucksters. These LLMs probably wouldn’t be so bad if we didn’t feed them the same derivative, nonsensical BS we consume on a daily basis. But at this point we’ve introduced, and now rely on, a flawed tool that bases its knowledge on flawed information, and that just creates a feedback loop of bullshit. People are using AI to write BS articles that are then referenced by AI. It won’t ever get better; it will only get worse.

You are really reaching to justify this stuff, it’s wild. No, I disagree. Using a flawed tool doesn’t increase your critical thinking skills. All it will do is confuse and misinform the vast majority of people. Not everybody is an astronaut.