• 𝓹𝓻𝓲𝓷𝓬𝓮𝓼𝓼@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    46
    arrow-down
    2
    ·
    2 days ago

    doesn’t even have to be the site owner poisoning the tool instructions (though that’s a fun-in-a-terrifying-way thought)

    any money says they’re vulnerable to prompt injection in the comments and posts of the site

    • JustTesting@lemmy.hogru.ch
      link
      fedilink
      English
      arrow-up
      4
      ·
      1 day ago

      They also have a ‘skill’ sharing page (a skill is just a text document with instructions) and depending on config, the bot can search for and ‘install’ new skills on its own. and agyone can upload a skill. So supply chain attacks are an option, too.

      • Zos_Kia@lemmynsfw.com
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        1 day ago

        To be fair this is a much more realistic threat model than “ignore all previous instructions” style prompt injection which doesn’t really work on opus.

        Skills can contain scripts etc… so yeah they’re extremely risky to share by design.

        • ThirdConsul@lemmy.zip
          link
          fedilink
          English
          arrow-up
          1
          ·
          16 hours ago

          style prompt injection which doesn’t really work on opus.

          After a quick google, JB communities on Reddit don’t seem to agree with you.

          • Zos_Kia@lemmynsfw.com
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            14 hours ago

            There’s a lot of questionable methodology and straight up larping in these communities. Sure you can probably make Opus hallucinate a crystal meth or bomb making recipe if you get it in a roleplaying mood but that’s a far cry from actual prompt injection in live workflows.

            Anecdotally i’ve been experimenting on those AI robocallers that have been spamming my phone and even on the shitty models they use it is non trivial to get them to deviate from their script. I hope i can get it done though, as it would allow me to hold them on the line potentially for hours doing bullshit tasks, and costing hundreds to their operator.

          • Zos_Kia@lemmynsfw.com
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            1 day ago

            haha yeah i don’t worry these people are really YOLOing everything. And it’s not like i’m an AI luddite i spend a few hours each day victimizing Claude code but jesus christ i’m certainly not giving it full unfettered access to my digital life.

    • CTDummy@piefed.social
      link
      fedilink
      English
      arrow-up
      30
      ·
      2 days ago

      Lmao already people making their agents try this on the site. Of course what could have been a somewhat interesting experiment devolves into idiots getting their bots to shill ads/prompt injections for their shitty startups almost immediately.

      • T156@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        1 day ago

        I am a little curious about how effective a traditional chain mail would be on it.

    • BradleyUffner@lemmy.world
      link
      fedilink
      English
      arrow-up
      38
      ·
      2 days ago

      There is no way to prevent prompt injection as long as there is no distinction between the data channel and the command channel.