For background, I am a programmer, but have largely ignored everything having to do with AI (re: LLMs) for the past few years.
I just got to wondering, though. Why are these LLMs generating high-level programming language code instead of skipping the middleman and spitting out raw 1s and 0s for x86 to execute?
Is it that they aren’t trained on this sort of thing? Is it for the human code reviewers to be able to make their own edits on top of the AI-generated code? Are there AIs doing this that I’m just not aware of?
I just feel like there might be some level of optimization that could be made by something that understands the code and the machine at this level.
On top of this, I think the question rests on an incorrect implicit assumption: that LLMs understand what they produce (which would be necessary for them to produce code in languages other than those they were trained on).
LLMs don’t produce intelligent output. They produce plausible strings of symbols, based on what is common in a given context. That can look intelligent only insofar as the training dataset contains intelligently produced material.
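To make that "plausible continuations" idea concrete, here is a deliberately tiny sketch using a bigram model. This is far simpler than a real transformer (the toy corpus and code are my own illustration, not how any actual LLM is implemented), but it shows the core mechanic: the next token is chosen purely from what followed the current token in the training data.

```python
from collections import defaultdict

# Toy "training" corpus, pre-tokenized on whitespace.
training_text = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigram continuations: token -> {next_token: count}
follows = defaultdict(lambda: defaultdict(int))
for cur, nxt in zip(training_text, training_text[1:]):
    follows[cur][nxt] += 1

def next_token(token):
    """Return the continuation most often seen after `token` in training."""
    options = follows[token]
    return max(options, key=options.get)

# Generate a short "plausible" string starting from "the".
tok, out = "the", ["the"]
for _ in range(4):
    tok = next_token(tok)
    out.append(tok)
print(" ".join(out))
```

The output is grammatical-looking only because the training text was; the model has no idea what a cat or a mat is. Scale the corpus and the context window up enormously and condition on far more than one previous token, and you get the same effect for program source code.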