So, I was reading the privacy notice and the terms of use, and I saw some sketchy stuff in there (data used for advertising, keystroke logging). How bad is it? Is it like ChatGPT, or worse? Anything I can do about it?

  • ByteMe@lemmy.world (OP) · 2 days ago

    Wow, that’s a thorough explanation. Thanks! I also have 16 GB of RAM and a 6th-gen i7.

    • voracitude@lemmy.world · 2 days ago (edited)

      No problem - and that’s not thorough, that’s the cut-down version haha!

      Yeah, that hardware’s a little old so the token generation might be slow-ish (your RAM speed will make a big difference, so make sure you have the fastest RAM the system will support), but you should be able to run smaller models without issue 😊 Glad to help, I hope you manage to get something up and running!

        • voracitude@lemmy.world · 1 hour ago

          From that thread, switching runtimes in LMStudio might help. On Windows the shortcut is apparently Ctrl+Shift+R. There are three main kinds: Vulkan, CUDA, and CPU. Vulkan is a cross-vendor API (it’s the usual pick for AMD cards), CUDA is nVidia-only, and CPU is the fallback for when the other two aren’t working, because it is sssslllllooooooowwwwwww.

          In the thread one of the posters said they got it running on CUDA, and I imagine that would work well for you since yours is an nVidia chip; or, if it’s already using CUDA, try llama.cpp or Vulkan instead.
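
          If CUDA is the one giving you errors, a quick sanity check before fiddling with runtimes is whether the nVidia driver even sees the card. Here’s a rough Python sketch that just wraps the standard nvidia-smi tool that ships with the driver (usually already on the PATH on Windows) - nothing LMStudio-specific, only a way to rule out a broken driver/CUDA stack:

          ```python
          import subprocess

          def gpu_visible() -> bool:
              """Return True if the nVidia driver reports at least one GPU."""
              try:
                  # nvidia-smi ships with the nVidia driver; if it's missing or
                  # exits non-zero, the driver/CUDA stack is probably the real
                  # problem rather than LMStudio's runtime.
                  result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
              except FileNotFoundError:
                  return False
              print(result.stdout or result.stderr)
              return result.returncode == 0

          if __name__ == "__main__":
              print("GPU visible:", gpu_visible())
          ```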

              • ByteMe@lemmy.world (OP) · 42 minutes ago

                CUDA gives the error I told you about before, and Vulkan works once and then also stops working. I didn’t try the CPU one because I figured it would be too slow to be worth it.

                • voracitude@lemmy.world · 32 minutes ago

                  Okay, no worries. I’d at least try the llama.cpp CPU runtime just to see how fast it is and to verify it works at all. If it doesn’t work, or only works once and then quits, maybe the problem is LMStudio itself. In that case you might want to try GPT4All (https://www.nomic.ai/gpt4all); it’s the one I started with way back in the day.
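
                  If you want to take LMStudio out of the picture entirely, the llama.cpp Python bindings are a quick way to test the same GGUF file from a script. This is just a rough sketch - the package is llama-cpp-python, and the model path and prompt below are placeholders for whatever you’ve already downloaded:

                  ```python
                  # pip install llama-cpp-python  (the default build runs on CPU)
                  from llama_cpp import Llama

                  llm = Llama(
                      model_path="models/your-model.Q4_K_M.gguf",  # placeholder: point at a GGUF you already have
                      n_ctx=2048,       # context window; smaller means less RAM
                      n_gpu_layers=0,   # 0 = pure CPU; -1 offloads all layers to the GPU (needs a CUDA/Vulkan build)
                  )

                  out = llm("Q: What is the capital of France? A:", max_tokens=32)
                  print(out["choices"][0]["text"])
                  ```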

                  If you want to post the logs from LMStudio after it crashes, I’m happy to take a look and see if I can spot the issue as well 🙌