• Cethin@lemmy.zip · 4 hours ago

    You’re still correct. The thing about LLMs is that they’re statistical models: at each step they sample the next token from a probability distribution over likely continuations, so there’s some randomness in the output. You can tune that randomness (the “temperature”), but zero randomness tends to produce repetitive, degenerate text, and too much randomness generates stupid garbage. With a large enough output, any nonzero randomness makes some garbage statistically likely. A compiler is sufficiently large and complex that the model is going to emit garbage somewhere, even if it’s trained on these exact compilers.
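To make the sampling point concrete, here’s a minimal Python sketch of temperature-scaled sampling over next-token logits. This isn’t any real model’s implementation; the logit values are made up purely for illustration.

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature before normalizing:
    # low temperature sharpens the distribution toward the top token,
    # high temperature flattens it toward uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature, rng):
    # Draw one token index from the temperature-scaled distribution.
    probs = softmax_with_temperature(logits, temperature)
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Made-up logits for three candidate tokens.
logits = [2.0, 1.0, 0.1]
sharp = softmax_with_temperature(logits, 0.5)  # top token dominates
flat = softmax_with_temperature(logits, 5.0)   # nearly uniform
```

Even with the sharper distribution, `sample_token` still occasionally picks a low-probability token, which is exactly the “some garbage over a long enough output” effect described above.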

    • the rizzler@lemmygrad.ml · 3 hours ago

      that’s a great point, but wouldn’t the output for a solved problem like “make a working C compiler in Rust” be better if the temperature/randomness were zero? or am i fundamentally misunderstanding?

      • Cethin@lemmy.zip · 2 hours ago

        Probably, but at that point you might as well just copy/paste the existing compiler. The temperature is a big part of what makes it seem intelligent: it gives different responses each time, so it looks like it’s thinking. Always picking the single most likely token would probably be more reliable for a problem like this, but it also probably wouldn’t play well with copyright law when you end up with the exact same code as an existing compiler.
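A quick Python sketch of the distinction in this exchange, with made-up logits: in the temperature→0 limit, sampling collapses to picking the argmax every time (greedy decoding), which is fully deterministic, while sampling at temperature 1 gives different tokens across draws.

```python
import math
import random

def softmax(logits, temperature):
    # Temperature-scaled softmax over raw scores.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_pick(logits):
    # The temperature -> 0 limit: always the single most likely token.
    return max(range(len(logits)), key=logits.__getitem__)

def sample_pick(logits, temperature, rng):
    # Nonzero temperature: draw from the scaled distribution.
    probs = softmax(logits, temperature)
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.9, 0.5]  # made-up scores for three candidate tokens
rng = random.Random(0)
greedy_picks = {greedy_pick(logits) for _ in range(10)}             # one index, every time
sampled_picks = {sample_pick(logits, 1.0, rng) for _ in range(50)}  # varies across draws
```

Greedy decoding always returns the same output for the same prompt, which is why it would reproduce training data (like an existing compiler) far more faithfully than sampled output.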