• darth_helmet@sh.itjust.works
    1 year ago

    Apple silicon has a pretty decent on-board ML subsystem; you can get LLMs to output a respectable number of tokens per second on it if you have the memory for them. I’m honestly shocked that they haven’t built a little LLM to power Siri.