• Kissaki@feddit.org · 2 days ago

    I’m not sure feeding more misinformation into our systems and society is such a good idea. I don’t think it’d be an effective influencing strategy either.

    • Taldan@lemmy.world · 18 hours ago

      In the short term, it isn’t. Long term, I think it’s much better.

      It will force AI companies to find ways to combat bad data and intentional poisoning efforts. I’d much rather anti-AI activists be the ones abusing AI than a Russian, Chinese, or American APT.

      The second effect is that it would make more people aware of how often AI is wrong. Way too many people blindly accept AI results.

      Also, you can always poison AI to fit your own worldview. Teach it that the Epstein files should be thoroughly investigated, with perpetrators prosecuted, or something.