• Alex@lemmy.ml · 2 days ago

    If you have ever read the “thought” process on some of the reasoning models, you can catch them going into loops of circular reasoning, just slowly burning tokens. I’m not even sure this isn’t by design.

    • dream_weasel@sh.itjust.works · 20 hours ago

      This kind of stuff happens on any model you train from scratch, even before training for multi-step reasoning. It seems to happen more when there’s not enough data in the training set, but it’s not an intentional add. Output length is a whole deal on its own.

      • MotoAsh@piefed.social · 2 days ago

        You have to pay for tokens on many of the “AI” tools that you do not run on your own computer.

        • Feathercrown@lemmy.world · 22 hours ago

          Hmm, interesting theory. However:

          1. We know this is an issue with language models; it happens all the time with weaker ones, so there is an alternative explanation.

          2. LLMs are running at a loss right now; the company would lose more money than it gains from you, so there is no motive.

            • MotoAsh@piefed.social · 2 hours ago

              No, it wasn’t a virtue signal, you fucking dingdongs.

              Capitalism is rife with undercooked products, because getting a product out there starts the income flowing sooner. They don’t have to be making a profit for a revenue stream to make sense. Some money is better than no money. Get it?

              Fuck, it’s like all you idiots can do is project your lack of understanding on others…

          • MotoAsh@piefed.social · 21 hours ago

            Of course there’s a technical reason for it, but they have an incentive to try and sell even a shitty product.

              • MotoAsh@piefed.social · 2 hours ago

                How does it not? This isn’t a fucking debate. How would artificially bloating the number of tokens they sell not help their bottom line?

          • MotoAsh@piefed.social · 21 hours ago

            I think many of them do, but there are also many “AI” tools that will automatically add a ton of stuff to the prompt to try to make it spit out more intelligent responses, or even re-prompt the model multiple times to try to make sure it’s not handing back hallucinations.

            It really adds up in their attempt to make fancy autocomplete seem “intelligent”.
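            As a rough illustration of why that adds up, here’s a minimal Python sketch. call_model() is a hypothetical stand-in for whatever hosted LLM API the tool wraps (not a real client library), and the padding text and number of verification rounds are made up; the point is just that every re-prompt re-sends the question plus the previous answer, so the billed tokens grow with each pass.

            ```python
            # Hypothetical sketch: how a wrapper tool multiplies token usage.
            # call_model() stands in for a hosted LLM API call; not a real library.

            def call_model(prompt: str) -> str:
                return "(model response)"  # placeholder so the sketch runs

            def answer_with_checks(user_question: str, verify_rounds: int = 2) -> str:
                # The tool silently pads the user's question with extra instructions,
                # so the billed input is already larger than what the user typed.
                padded_prompt = (
                    "You are a careful assistant. Think step by step and cite sources.\n\n"
                    + user_question
                )
                answer = call_model(padded_prompt)

                # Then it re-prompts the model to double-check itself, paying again
                # for the question plus the previous answer on every round.
                for _ in range(verify_rounds):
                    review_prompt = (
                        "Check the following answer for hallucinations and fix it if needed.\n\n"
                        "Question: " + user_question + "\n\nAnswer: " + answer
                    )
                    answer = call_model(review_prompt)

                return answer
            ```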

            • piccolo@sh.itjust.works · 21 hours ago

              Yes, reasoning models… but I don’t think they would charge for that… that would be insane, but AI executives are insane, so who the fuck knows.

              • MotoAsh@piefed.social · 2 hours ago

                Not the models. AI tools that integrate with the models. The “AI” would be akin to the backend of the tool. If you’re using Claude as the backend, the tool would be asking Claude more questions, and repeating questions, via the API. As in, more input.
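                A rough sketch of what “more input” means in practice: send_to_backend() is a hypothetical placeholder rather than the real Anthropic client, and the 4-characters-per-token figure is only a crude estimate, but it shows how the tool ends up paying for every repeated question it sends to the backend.

                ```python
                # Hypothetical sketch: repeat questions to the backend inflate billed input.
                # send_to_backend() is a placeholder, not the real Anthropic API client.

                def send_to_backend(prompt: str) -> str:
                    return "(model response)"  # placeholder so the sketch runs

                def estimate_tokens(text: str) -> int:
                    return max(1, len(text) // 4)  # very rough: ~4 characters per token

                def run_tool(user_question: str) -> tuple[str, int]:
                    billed_input_tokens = 0
                    # The tool asks the backend several variations of the same question
                    # and pays for the input tokens of every single call.
                    prompts = [
                        user_question,
                        "Answer again, more concisely: " + user_question,
                        "List any assumptions you made while answering: " + user_question,
                    ]
                    answer = ""
                    for prompt in prompts:
                        billed_input_tokens += estimate_tokens(prompt)
                        answer = send_to_backend(prompt)
                    return answer, billed_input_tokens
                ```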