If this is the way to superintelligence, it remains a bizarre one. “This is back to a million monkeys typing for a million years generating the works of Shakespeare,” Emily Bender told me. But OpenAI’s technology effectively crunches those years down to seconds. A company blog boasts that an o1 model scored better than most humans on a recent coding test that allowed participants to submit 50 possible solutions to each problem—but only when o1 was allowed 10,000 submissions instead. No human could come up with that many possibilities in a reasonable length of time, which is exactly the point. To OpenAI, unlimited time and resources are an advantage that its hardware-grounded models have over biology. Not even two weeks after the launch of the o1 preview, the start-up presented plans to build data centers that would each require the power generated by approximately five large nuclear reactors, enough for almost 3 million homes.

https://archive.is/xUJMG

  • Kongar@lemmy.dbzer0.com · 47 points · 15 days ago

    I’ve been playing around with AI a lot lately for work purposes. A neat trick LLM vendors like OpenAI have pushed onto the scene is the ability for a large language model to “answer questions” about a dataset of files. This is done by building a RAG (retrieval-augmented generation) agent. It’s neat, but I’ve come to two conclusions after about a year of screwing around.
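    A minimal sketch of the retrieval half of such a RAG agent, using a hand-rolled bag-of-words retriever for illustration (real setups use embedding models and a vector store; the documents, filenames, and query below are made up):

```python
from collections import Counter
import math

# Toy document store; in practice this would be your actual work files.
docs = {
    "notes_2019.txt": "Q3 meeting notes: badge reader rollout delayed",
    "notes_2020.txt": "Annual review of the badge access policy",
    "recipes.txt": "Grandma's chili recipe with three kinds of beans",
}

def tokens(text):
    return [w.lower().strip(".,:") for w in text.split()]

def cosine(a, b):
    """Cosine similarity between two token lists."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Rank documents by similarity to the query, return the top k names."""
    q = tokens(query)
    ranked = sorted(docs, key=lambda d: cosine(q, tokens(docs[d])), reverse=True)
    return ranked[:k]

# The retrieved text is then pasted into the LLM prompt as context.
context = "\n".join(docs[d] for d in retrieve("badge reader policy"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
```

    The LLM never "knows" the files; it only sees whatever chunks the retriever happens to surface, which is one reason these agents fail on questions that require scanning every row of a dataset.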

    1. It’s pretty good with words, e.g. asking it to summarize multiple documents. But it’s still pretty terrible at data. As an example: scanning through an Excel log export or CSV file and asking it to perform a calculation, “based on this badge data, how many people are in the building right now, and who are they?” It would be super helpful to get answers to those types of questions, but I haven’t found any tool or combination of models that can do it accurately even most of the time. I think this is exactly what happened to Spotify Wrapped this year: instead of doing the data analysis, they tried to have an LLM/RAG agent do it, and it’s hallucinating.
    2. These models can be run locally, and just about as fast. Yeah, it takes some nerd power to set them up now, but it’s only a matter of time before it’s as simple as installing a program. I can’t imagine how companies like OpenAI are going to survive.
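    The badge question in point 1 is the kind of thing a few lines of ordinary code answer deterministically, no model required. A sketch with made-up names and events:

```python
# Hypothetical badge log: (person, action) events in chronological order.
badge_events = [
    ("alice", "in"),
    ("bob", "in"),
    ("alice", "out"),
    ("carol", "in"),
    ("bob", "out"),
    ("bob", "in"),
]

def people_in_building(events):
    """Replay in/out events in order; a person is inside iff their
    most recent event is 'in'."""
    inside = set()
    for person, action in events:
        if action == "in":
            inside.add(person)
        else:
            inside.discard(person)
    return inside

print(sorted(people_in_building(badge_events)))  # ['bob', 'carol']
```

    An LLM asked the same question has to "eyeball" the rows and frequently miscounts; the replay above is exact every time.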
    • 9488fcea02a9@sh.itjust.works · 47 points · 15 days ago

      This is exactly how we use LLMs at work… the LLM is trained on our work data so it can answer questions about meeting notes from 5 years ago or something. There are a few genuinely helpful use cases like this amongst a sea of hype and mania. I wish Lemmy would understand this instead of having a blanket policy of hating on everything AI.

      The Spotify thing is so stupid… there is simply no use case for AI here. Just spit back some numbers from my listening history like in the past; no need for AI commentary and hallucinations.

      The even more infuriating part of all this is that I can think of ways that AI/ML (not necessarily LLMs) could actually be really useful for Spotify, like tagging genres, styles, instruments, etc.: “Spotify, find me all songs by X with Y instrument in them…”

      • conciselyverbose@sh.itjust.works · 44 points · edited · 15 days ago

        The problem is that the actual use cases (which are still incredibly unreliable) don’t justify even 1% of the investment or energy usage the market is spending on them. (Also, as you mentioned, there are actual approaches that are useful that aren’t LLMs that are being starved by the stupid attempt at a magic bullet.)

        It’s hard to be positive about a simple, moderately useful technology when every person making money from it is lying through their teeth.

      • HubertManne@moist.catsweat.com · 4 points · 15 days ago

        This is, to me, what it’s useful for. There’s so much reinventing the wheel at workplaces, but if the proper information could be found quickly enough, we could use a wheel we already have.

    • cyberwolfie@lemmy.ml · 13 points · 15 days ago

      I think this is exactly what happened to Spotify Wrapped this year: instead of doing the data analysis, they tried to have an LLM/RAG agent do it, and it’s hallucinating.

      Interesting. I don’t use Spotify anymore, but I overheard a conversation on the train yesterday where some teens were complaining about the results being super weird; they couldn’t recognize themselves in it at all. It seems really strange to me to use LLMs for this purpose, except perhaps for coming up with different ways of phrasing the summary sentences so that it feels more unique. Showing the most played songs and artists is not a difficult analysis task and does not require any machine learning. Unless Wrapped has changed completely in the two years since I got my last one…

      • BakerBagel@midwest.social · 14 points · 15 days ago

        They are using LLMs because these companies are run by tech bros who bet big on “AI” and now have to justify that.

      • barsoap@lemm.ee · 1 point · 14 days ago

        Showing the most played songs and artists is not a difficult analysis task and does not require any machine learning.

        You want to dimensionality-reduce the listening data to get that “people who listen to stuff like you also like to listen to” recommendation. And to have an idea whom to play a new song to, you ideally want to analyse the song itself, not just people’s reactions to it, and there we’re deep in the weeds of classifiers.
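        The dimensionality-reduction step can be sketched with a toy play-count matrix and a truncated SVD (all numbers invented; production recommenders use far richer models):

```python
import numpy as np

# Toy play-count matrix: rows = users, columns = songs.
# Users 0 and 1 share tastes, as do users 2 and 3.
plays = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [0, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Truncated SVD projects each user into a low-dimensional "taste" space.
U, s, Vt = np.linalg.svd(plays, full_matrices=False)
k = 2
user_taste = U[:, :k] * s[:k]  # each row is a user's 2-d taste vector

def most_similar_user(i):
    """Cosine similarity in taste space; returns the closest other user."""
    v = user_taste[i]
    sims = user_taste @ v / (
        np.linalg.norm(user_taste, axis=1) * np.linalg.norm(v) + 1e-12
    )
    sims[i] = -np.inf  # exclude the user themselves
    return int(np.argmax(sims))

print(most_similar_user(0))  # user 1, who plays the same two songs
```

        “People who listen like you” is then just nearest neighbours in that compressed taste space; no language model anywhere in the pipeline.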

        Using LLMs in particular, though, is probably suit-driven development, because when you’re trying to figure out whether a song sounds like pop or rock or classical, LLMs are overkill at best. Analysing lyrics might warrant LLMs, but I don’t think it’d gain you much. If you re-trained them on music instead of language you might also get something interesting, classifying music by phrasal structure and whatnot (don’t look at me, I may own a guitar but I’m no musician). And of course “interesting” doesn’t necessarily mean “business case”, unless you’re in the business of giving Adam Neely video ideas. “Spotify, play me all pop songs that sing ‘caught in the middle’ in the same way” is not a search that’s going to make Spotify money, and not one anyone asked for.