• holomorphic@lemmy.world · 21 days ago

    Those models will almost certainly use essentially the same transformer architecture as the LLMs, simply because transformers beat most other architectures in almost every field people have tried them in. An LLM is, after all, just a classifier with an unusually large set of classes (all possible tokens) which gets applied repeatedly.
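
    That "classifier applied repeatedly" view is easy to sketch. Here's a minimal greedy decoding loop, assuming a hypothetical decoder `model` that returns per-position logits over the vocabulary (the names are stand-ins, not any particular library's API):

    ```python
    import torch

    def generate(model, prompt_ids: torch.Tensor, max_new_tokens: int = 32) -> torch.Tensor:
        """Autoregressive decoding: one classification over the vocab per new token."""
        ids = prompt_ids  # shape: (seq_len,), token ids so far
        for _ in range(max_new_tokens):
            logits = model(ids.unsqueeze(0))         # (1, seq_len, vocab_size)
            next_id = logits[0, -1].argmax()         # classify: pick 1 of vocab_size classes
            ids = torch.cat([ids, next_id.view(1)])  # append the winner and repeat
        return ids
    ```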

    • FatCrab@slrpnk.net · 21 days ago

      A quick search turns up that AlphaFold 3, which is what they are using for this, is a diffusion architecture, not a transformer. It works more like the image generators than the GPT-style text generators. It isn’t really the same as “the LLMs”.
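
      For contrast with the token-by-token loop above: a diffusion model generates by iteratively denoising the whole output at once. A rough DDPM-style sampling sketch, where `denoiser(x, t)` is a hypothetical model predicting the noise in `x` at step `t` (illustrative only, not AlphaFold 3's actual pipeline):

      ```python
      import torch

      def sample(denoiser, shape, steps: int = 1000) -> torch.Tensor:
          """DDPM-style sampling: refine a full sample from pure noise."""
          betas = torch.linspace(1e-4, 0.02, steps)
          alphas = 1.0 - betas
          alpha_bars = torch.cumprod(alphas, dim=0)
          x = torch.randn(shape)  # start from pure Gaussian noise
          for t in reversed(range(steps)):
              eps = denoiser(x, t)  # predict the noise component at step t
              x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
              if t > 0:
                  x = x + torch.sqrt(betas[t]) * torch.randn_like(x)  # re-inject noise
          return x
      ```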