• FireWire400@lemmy.world · +5/−22 · edited · 14 hours ago

    If it’s plausible enough based on the dataset it was trained on, it exists. Hallucinations are basically just the LLM trying to stay current by inference, I think.

    Edit: Guess I used the wrong words, oh well

      • Onno (VK6FLAB)@lemmy.radio · +5 · 12 hours ago

        While I understand your point, deterministic with a billion variables is beyond human ability to process, let alone the multi-billion parameter models in general circulation today.

        At what point does deterministic descend into random?

        Assumed Intelligence is a solution for a bunch of multivariate problems, like say “the travelling salesman”, but it’s not intelligence, nor, in my opinion, is it effectively “deterministic”.

        • partofthevoice@lemmy.zip · +2 · 7 hours ago

          While I understand your point, deterministic with a billion variables is beyond human ability to process, let alone the multi-billion parameter models in general circulation today.

          Fair enough. There’s a significant difference in complexity between the surface implication of what I said versus reality. Yes, it’s deterministic, but it’s also complex enough that something more should be said… though, we need to be careful here. Our language is not mature enough to scaffold the precise concepts we need here, and attempting to do so regardless carries the risk of smuggling in many concepts we did not intend to smuggle in. Concepts like intent, for example. I agree with you, but cautiously.

          At what point does deterministic descend into random?

          It shouldn’t at any point. Instead, we’re discussing a system that’s similar to the double pendulum or three body problem. It’s deterministic, though computationally irreducible. That’s chaotic, but it is not random. It’s extremely sensitive to initial conditions.
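The double-pendulum analogy can be made concrete with an even simpler chaotic system (a standalone illustration of "deterministic but chaotic", not anything an LLM actually computes): the logistic map is a one-line deterministic rule, yet two trajectories that start one part in a billion apart end up completely uncorrelated.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): a fully deterministic rule
# that is chaotic for r = 4 (extreme sensitivity to initial conditions).
def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # perturb the start by one part in a billion

# Identical inputs replay identically: deterministic, not random...
assert a == logistic_trajectory(0.2)
# ...yet the 1e-9 perturbation roughly doubles every step until the
# trajectories disagree on a macroscopic scale.
assert max(abs(x - y) for x, y in zip(a, b)) > 0.1
```

Computationally irreducible in the sense used above: there is no shortcut to knowing step 60 other than running all 60 steps.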

          • Junkasaurus@lemmy.world · +1 · 5 hours ago

            What are you saying precisely? It’s well known that LLMs have non-deterministic output (Ilya Sutskever even claims as such). Are you saying that the way it goes about retrieving tokens is deterministic?

              • partofthevoice@lemmy.zip · +1 · 3 hours ago

              I think you’re right about that, but it is artificial nondeterminism in the sense that it’s relying on several algorithmic factors and, more subtly, device differences. The system itself is a complex yet deterministic function.
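That "artificial nondeterminism" can be sketched in a few lines (made-up logits for a three-token vocabulary; the softmax-then-sample step is standard, but nothing here is model-specific): the sampling step draws from a distribution using a pseudo-random generator, so fixing the seed makes the whole "random" pipeline replay exactly.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Convert raw scores into a probability distribution.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, rng, temperature=1.0):
    # Draw one token index from the softmax distribution using rng.
    r = rng.random()
    cumulative = 0.0
    for token, p in enumerate(softmax(logits, temperature)):
        cumulative += p
        if r < cumulative:
            return token
    return len(logits) - 1

LOGITS = [2.0, 1.0, 0.1]  # hypothetical scores for a 3-token vocabulary

def sample_sequence(seed, n=10):
    rng = random.Random(seed)  # seeded PRNG: the "randomness" is replayable
    return [sample_token(LOGITS, rng) for _ in range(n)]

# Fix the seed and the supposedly nondeterministic output is reproducible:
# the nondeterminism is an algorithmic choice layered on a deterministic function.
assert sample_sequence(42) == sample_sequence(42)
```

In practice, unseeded generators plus floating-point differences across devices are what make two runs of the same prompt diverge.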

              • Onno (VK6FLAB)@lemmy.radio · +1 · 3 hours ago

              They are deterministic but complex to determine.

              The Assumed Intelligence systems I’m familiar with have a “random” element, but it’s unclear where that source of randomness comes from. Is it using a computational random source, or something like the lava lamp wall at Cloudflare, which is significantly more random, potentially actually random?
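The two sources being contrasted can be sketched like this (with `os.urandom` standing in for an external entropy source; the lava-lamp wall is a more photogenic version of the same idea):

```python
import os
import random

# A seeded PRNG is a computational random source: it is deterministic,
# and the same seed replays the exact same "random" stream.
a = random.Random(1234)
b = random.Random(1234)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# os.urandom draws from the operating system's entropy pool (interrupt
# timings, hardware noise, etc.) -- there is no seed to replay.
token = os.urandom(8)
assert len(token) == 8
```

If a model's sampler uses the first kind, the whole system stays deterministic in principle; only the second kind injects anything "actually random".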

    • flandish@lemmy.world · +42/−5 · 19 hours ago

      “Hallucinations” are things humans do. An AI can only just be wrong. Even when it makes up data, it’s just a stochastic parrot.

      • PushButton@lemmy.world · +33 · 18 hours ago

        They coined the term “hallucination” as soon as people realized that the “AI thing” was throwing bullshit back at us.

        They had to force that term into people’s heads; otherwise we would call it bullshit, lies and so on, as we should.

        It’s like Google with their “side loading”. There is no such thing, it’s installing an app…

        It’s a word war. People are being manipulated.

      • melroy@kbin.melroy.org · +9/−1 · 18 hours ago

        Hallucinations are by design for AI. It’s just advanced next-word prediction, so all answers (correct or wrong) are going through the same hallucination process.
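"Advanced next-word prediction" can be caricatured in a few lines (a toy bigram model over a made-up corpus, nowhere near a real transformer): right and wrong answers both come out of the same "pick the most likely continuation" machinery.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram table.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Always returns the statistically most likely continuation,
    # whether or not that continuation is true of the world.
    return following[word].most_common(1)[0][0]

# "cat" follows "the" most often in this corpus, so "the" -> "cat".
assert predict_next("the") == "cat"
```

There is no separate "wrong answer" code path: a plausible falsehood and a plausible truth are produced identically.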

        • Cort@lemmy.world · +11 · 18 hours ago

          Ah, it’s always hallucinating; sometimes the hallucinations conveniently line up with reality.

          • snugglesthefalse@sh.itjust.works · +4/−1 · 18 hours ago

            The whole goal of these algorithms is that you put an input in and get out something as close as possible to the most likely correct answer; training is just repeating that process. We’re several years deep into these “most likely” results, and sometimes they’re pretty close, but usually they’re not quite there, because the only guidance the models get is from outside.

            • melroy@kbin.melroy.org · +1 · 8 hours ago

              Exactly. This is also why AI doesn’t really, truly understand the responses it gives back.

              It’s faking intelligence via its training data, so it looks like intelligence to an untrained eye, but in reality AI is just a hallucination that tries its best to give the most likely and correct answer possible (again, without understanding).