For background, I'm a programmer, but I've largely ignored everything to do with AI (read: LLMs) for the past few years.

I just got to wondering, though. Why are these LLMs generating high-level programming language code instead of skipping the middleman and spitting out raw 1s and 0s for x86 to execute?

Is it that they aren't trained on this sort of thing? Is it so that human code reviewers can make their own edits on top of the AI-generated code? Are there AIs doing this that I'm just not aware of?

I just feel like there might be some optimization to be gained by something that understands both the code and the machine at this level.
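
For concreteness, here's roughly what I mean by "raw 1s and 0s" (a tiny Python sketch; the bytes assume x86-64 and the System V calling convention, where the return value goes in eax):

```python
# "Skipping the middleman" would mean emitting these bytes directly,
# instead of the C source `int answer(void) { return 42; }`.
code = bytes([
    0xB8, 0x2A, 0x00, 0x00, 0x00,  # mov eax, 42  ; return value in eax
    0xC3,                          # ret
])
print(code.hex(" "))  # b8 2a 00 00 00 c3
```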

    • TauZero@mander.xyz · 4 hours ago

      Language is language. To an LLM, English is as good as Java is as good as machine code to train on. I like to imagine that if we suddenly uncovered a library of books left over from ancient aliens, we could train an LLM on it (as long as the symbols themselves are legible), and it would generate stories in the alien language that would sound correct to the aliens, even though the alien world and alien life are completely unknown and incomprehensible to us.

    • naught101@lemmy.world · 7 hours ago

      I think on top of this, the question rests on an incorrect implicit assumption: that LLMs understand what they produce (understanding would be necessary for them to produce code in languages other than the ones they're trained on).

      LLMs don't produce intelligent output. They produce plausible strings of symbols, based on what is common in a given context. That can look intelligent only insofar as the training dataset contains intelligently produced material.
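
      As a toy illustration of "plausible strings of symbols", here's a word-level Markov chain in Python. It's nothing like a real LLM's neural network, but it shows the same basic idea: predict what plausibly comes next from counts of the context, with zero understanding of the text.

```python
import random
from collections import defaultdict

# Tiny "training corpus"; a real model would see billions of tokens.
corpus = "the cat sat on the mat and the cat ran".split()

# Record what follows each word: pure statistics, no understanding.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

word = "the"
out = [word]
for _ in range(8):
    # Pick a plausible next symbol; fall back to any word at a dead end.
    word = random.choice(follows[word] if follows[word] else corpus)
    out.append(word)

print(" ".join(out))  # e.g. "the cat sat on the mat and the cat sat"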