If you have ever read the “thought” process on some of the reasoning models, you can catch them going into loops of circular reasoning, just slowly burning tokens. I’m not even sure this isn’t by design.
This kind of stuff happens on any model you train from scratch, even before training for multi-step reasoning. It seems to happen more when there’s not enough data in the training set, but it’s not an intentional add. Output length is a whole issue of its own.
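You can reproduce the failure mode with a toy sketch, no real LLM needed. This is deliberately not how production models work; it’s just a bigram model “trained” on a few words, where likelihood-maximizing (greedy) decoding walks straight into a cycle because the sparse data contains one:

```python
from collections import Counter, defaultdict

# Tiny "training set": far too little data, like an undertrained model.
corpus = "the model thinks about the answer and the model thinks about the end".split()

# "Train" a bigram model: count which word follows which.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def greedy_generate(start, max_new_tokens=12):
    out = [start]
    for _ in range(max_new_tokens):
        followers = transitions.get(out[-1])
        if not followers:
            break  # dead end: no continuation ever observed
        # Always pick the most frequent follower; with sparse counts the
        # argmax path revisits the same words and never escapes.
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(greedy_generate("the"))
# -> "the model thinks about the model thinks about the model thinks about the"
```

Real models are vastly bigger, but the same mechanics apply: sampling temperature and repetition penalties exist largely to kick decoding out of exactly these attractors.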
I dunno, let’s waste some water
They are trying to get rid of us by wasting our resources.
So, it’s Nestlé behind things again.
Why would it be by design? What does that even mean in this context?
You have to pay for tokens on many of the “AI” tools that you do not run on your own computer.
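For a sense of scale, here’s a back-of-the-envelope cost sketch. The per-million-token prices are made-up placeholders, not any vendor’s actual rates, but the structure (input and output billed separately, with “thinking” tokens commonly billed as output) matches how the hosted APIs are generally priced:

```python
# Hypothetical prices (USD per million tokens); placeholders, not real rates.
PRICE_PER_1M_INPUT = 3.00
PRICE_PER_1M_OUTPUT = 15.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one API call, billing input and output separately."""
    return (input_tokens / 1_000_000) * PRICE_PER_1M_INPUT \
         + (output_tokens / 1_000_000) * PRICE_PER_1M_OUTPUT

# A reply that burns 5,000 tokens of circular "reasoning" is billed the
# same as 5,000 tokens of useful answer.
print(f"${request_cost(1_500, 5_000):.4f}")  # $0.0795
```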
Hmm, interesting theory. However:
1. We know this is an issue with language models; it happens all the time with weaker ones, so there is an alternative explanation.
2. LLMs are running at a loss right now; the company would lose more money than they gain from you, so there is no motive.
3. It was proposed less as a hypothesis about reality than as virtue signalling (in the original sense).
No, it wasn’t a virtue signal, you fucking dingdongs.
Capitalism is rife with undercooked products, because getting a product out there starts the income flowing sooner. They don’t have to be making a profit for a revenue stream to make sense. Some money is better than no money. Get it?
Fuck, it’s like all you idiots can do is project your lack of understanding on others…
Of course there’s a technical reason for it, but they have an incentive to try and sell even a shitty product.
I don’t think this really addresses my second point.
How does it not? This isn’t a fucking debate. How would artificially bloating the number of tokens they sell not help their bottom line?
Don’t they charge by input tokens? E.g. your prompt, not the output.
I think many of them do, but there are also many “AI” tools that will automatically add a ton of stuff to the prompt to try and make it spit out more intelligent responses, or even re-prompt the model multiple times to try and make sure it’s not handing back hallucinations.
It really adds up in their attempt to make fancy autocomplete seem “intelligent”.
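Here’s a sketch of what that tool layer can look like. Everything in it, the injected system prompt, the pasted-in context, the verification pass, is an assumed design to illustrate the token multiplication, not any specific product’s pipeline, and call_model is a stand-in for a paid per-token API:

```python
billed_input_tokens = []

def call_model(prompt: str) -> str:
    # Stand-in for a paid per-token API call: record what we'd be billed
    # for and return a canned answer so the sketch runs end to end.
    billed_input_tokens.append(len(prompt.split()))  # crude word-count "tokens"
    return "Paris is the capital of France."

# Boilerplate the tool silently injects around every user question.
SYSTEM_PROMPT = "You are an expert assistant. Think step by step. " * 20
CONTEXT = "(retrieved documents pasted in here) " * 50

def answer(user_question: str) -> str:
    # Pass 1: the short question gets wrapped in a much larger prompt.
    draft = call_model(f"{SYSTEM_PROMPT}\nContext: {CONTEXT}\nQ: {user_question}")
    # Pass 2: re-prompt to check the draft for hallucinations, resending
    # system prompt + context + question + draft all over again.
    call_model(
        f"{SYSTEM_PROMPT}\nContext: {CONTEXT}\nQ: {user_question}\n"
        f"Draft: {draft}\nDoes the draft contradict the context?"
    )
    return draft

answer("What is the capital of France?")
print(billed_input_tokens)  # two ~450-token bills for one short question
```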
Yes, reasoning models… but I don’t think they would charge for that… that would be insane, but AI executives are insane, so who the fuck knows.
Not the models. AI tools that integrate with the models. The “AI” would be akin to the backend of the tool. If you’re using Claude as the backend, the tool would be asking Claude more questions and repeated questions via the API. As in, more input.
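And because chat APIs are stateless, every follow-up resends the whole conversation so far as fresh input. A toy sketch, with word counts standing in for real token counts:

```python
history = []

def send(user_message: str) -> int:
    history.append(user_message)
    # Every prior turn goes back over the wire as input on this call.
    input_tokens = sum(len(m.split()) for m in history)
    history.append("(assistant reply placeholder)")
    return input_tokens

for i, q in enumerate(["first question", "a clarifying follow-up", "one more check"], 1):
    print(f"call {i}: billed for {send(q)} input tokens")
# Each call pays for everything said so far, so a chatty tool's input
# bill grows roughly quadratically with the number of turns.
```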
Compute costs?
I’m pretty sure training is purely result-oriented, so anything that works goes.
Exactly why this shit isn’t and never will be trustworthy.