  • So you know how when you’re interacting with an AI chatbot (ChatGPT, Copilot, Gemini, Claude, etc.) you have a chat history with it, and it “remembers” its previous output and can reference it if you ask follow-up questions.

    You can use that behavior to try to get the model to give you a better answer by having it “think” about the prompt you fed it: asking further probing questions, refining your initial prompt, or saying “hey, this thing you wrote doesn’t make sense,” and it’ll try to fix it.

    Well, what this does is take the initial prompt and feed it back to the bot after it spits out a response, then repeat that again and again with the same prompt. The idea is that making the AI look at its own output over and over lets it “learn” from itself and improve the answer.

    In other words, it’s as if you copied your prompt and, after every response, pasted it back in and sent it through again, roughly like the loop sketched below.
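
    A minimal sketch of that loop (my guess at what the tool does, not its actual code; ask_model() is a hypothetical stand-in for whatever chat-completion API it really calls):

    ```python
    # Sketch of the re-prompting loop described above. ask_model() is a
    # hypothetical stand-in for a real chat API call; here it returns a
    # canned string so the example runs on its own.

    def ask_model(history: list[dict]) -> str:
        return f"(model reply to {len(history)} messages)"

    def iterate_prompt(prompt: str, rounds: int = 3) -> str:
        history = [{"role": "user", "content": prompt}]
        answer = ask_model(history)
        for _ in range(rounds - 1):
            # Keep the previous answer in the history, then feed the exact
            # same prompt back in and ask again.
            history.append({"role": "assistant", "content": answer})
            history.append({"role": "user", "content": prompt})
            answer = ask_model(history)
        return answer

    print(iterate_prompt("Explain why the sky is blue.", rounds=3))
    ```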

    I’m very skeptical this works well, but I’m not an expert in LLMs or how they work; I just know enough about how they work to know this AI craze is most assuredly a market bubble.