Just a stranger trying things.

  • 7 Posts
  • 145 Comments
Joined 1 year ago
Cake day: July 16th, 2023


  • I didn’t say it can’t. But I’m not sure how well it is optimized for it. From my initial testing, it queues queries and submits them to the model one after another; I have not seen it batch-compute the queries, but maybe it’s a setup issue on my side. vLLM, on the other hand, is designed specifically for the multiple-concurrent-user use case and has several optimizations for it.
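
    To illustrate what I mean, this is roughly how I start vLLM’s OpenAI-compatible server (a minimal sketch; the model name, context length and memory fraction are placeholders you’d swap for your own):

    ```bash
    # install vLLM (needs a CUDA-capable GPU and a recent PyTorch)
    pip install vllm

    # start the OpenAI-compatible server; requests from many clients
    # are continuously batched on the GPU instead of queued one by one
    python -m vllm.entrypoints.openai.api_server \
      --model mistralai/Mistral-Nemo-Instruct-2407 \
      --max-model-len 8192 \
      --gpu-memory-utilization 0.90 \
      --port 8000
    ```

    Any OpenAI-style client (including Open WebUI) can then be pointed at http://localhost:8000/v1.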


  • The Hobbyist@lemmy.zip to Selfhosted@lemmy.world · Self-hosting LLMs

    I run Mistral-Nemo (12B) and Mistral-Small (22B) on my GPU and they are pretty good. As others have said, GPU memory is one of the most limiting factors. 8B models are decent, 15-25B models are good and 70B+ models are excellent (solely based on my own experience). Go for q4_K quantizations, as they run many times faster than higher-precision quantizations with little quality degradation. They typically come in S (Small), M (Medium) and L (Large) variants; take the largest that fits in your GPU memory (a rough command sketch follows at the end of this comment). If you go below q4, you may see more severe and noticeable quality degradation.

    If you need to serve only one user at a time, Ollama + Open WebUI works great. If you need multiple users at the same time, check out vLLM.

    Edit: I’m simplifying things a lot, but hopefully it is simple and actionable as a starting point. I’ve also seen great results from Gemma2-27B.

    Edit2: added links

    Edit3: a decent GPU in terms of bang for buck IMO is the RTX 3060 with 12GB. It may be available on the used market for a decent price and offers a good amount of VRAM and GPU performance for the cost. I would have liked to recommend AMD GPUs, as they offer much more GPU memory for the price, but not all of them are supported by ROCm and I’m not sure about compatibility with these tools, so perhaps others can chime in.

    Edit4: you can also use Open WebUI with VS Code via the continue.dev extension, so that you have a Copilot-style LLM in your editor.
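
    As a rough sketch of what the q4_K advice looks like in practice (the exact tag names are whatever the Ollama library lists, so treat these as examples rather than gospel):

    ```bash
    # pull a 4-bit K-quant, medium variant, of Mistral-Nemo 12B
    # (check the Ollama library page for the exact tag)
    ollama pull mistral-nemo:12b-instruct-2407-q4_K_M

    # quick interactive test in the terminal
    ollama run mistral-nemo:12b-instruct-2407-q4_K_M

    # check that the model actually fits in VRAM
    # (look at the SIZE and PROCESSOR columns)
    ollama ps
    ```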




  • I understand your position. There is a learning curve to containers, but I can assure you that getting the basics down will open up a whole new world of possibilities and also make everything much easier for yourself. The vast majority of people run containers. They make services less brittle, because each service gets its own tailored environment and doesn’t depend on the host’s libraries and packages. They also bring increased security, because services can’t easily escape their boundaries, so their potential vulnerabilities are less of an issue than when running the same services on bare metal.

    I started on Synology too. There is a website called Marius Hosting which focuses on container tutorials for Synology, but over the last few years his instructions have shifted toward spinning up containers manually rather than through the UI, which makes it more intimidating than it needs to be for beginners. I’ll link it here just as a reference. I’ll check whether the Wayback Machine still has the easier way and report back if I find something.

    Edit: yes, here is an original tutorial for Jellyfin (this method still works for me and is still how I use Docker today): https://web.archive.org/web/20210305002024/https://mariushosting.com/how-to-install-jellyfin-on-your-synology-nas/
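
    For reference, the container route boils down to a single command along these lines (a sketch only; the /volume1 paths are assumptions based on a typical Synology layout, adjust them to yours):

    ```bash
    docker run -d --name jellyfin \
      -p 8096:8096 \
      -v /volume1/docker/jellyfin/config:/config \
      -v /volume1/docker/jellyfin/cache:/cache \
      -v /volume1/media:/media:ro \
      --restart unless-stopped \
      jellyfin/jellyfin:latest
    ```

    After that the web UI is at http://your-nas-ip:8096.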


  • To answer your question more specifically, most people set up the Pi with Docker, using services that expose a front end in the browser. They basically use their browser to navigate to the front end of the service they want and administer it there: for instance Portainer to manage their Docker containers (a minimal command for it is sketched after this comment), Pi-hole for network-wide DNS filtering and ad blocking, or Jellyfin for their media, which is both the website where you consume the media and an administrator dashboard.

    Edit: this complements something like Tailscale, which basically allows you to access these services away from home. They work in conjunction.
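
    If it helps, getting the Portainer front end up is essentially a one-liner (this mirrors its standard install command; 9443 is its default HTTPS UI port):

    ```bash
    docker run -d --name portainer \
      -p 9443:9443 \
      --restart=always \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer-ce:latest
    ```

    Then you browse to https://pi-address:9443 and manage your containers from there.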




  • Indeed, quite surprising. You’ve got to “stroke their fur the right way”, so to speak, haha.

    Also, I’m increasingly impressed with the rapid progress of open-weights models: initially I was playing with Llama3.1-8B, which is already quite useful for simple queries. Lately I’ve been trying out Mistral-Nemo (12B) and Mistral-Small (22B) and they are much more capable. I have a 12GB GPU and so far those are the most powerful models I can run decently. I’m using them to help me with writing tasks for Ansible, learning the inner workings of the Linux kernel and some bootloader stuff. I find them quite helpful!



  • Have you installed Google services on your phone? They are available through GrapheneOS’s official “App Store” app. This should be installed before WhatsApp (at least that is the instruction for apps that depend on Google services in general).

    Perhaps you have done so already, but just as general advice: when using Google services and invasive apps like WhatsApp, it can be a good idea to install them in their own dedicated profile and let the notifications pipe through to your main profile, instead of installing both in your main profile. If you need help configuring it, let me know.


  • I have no idea whether Ollama can handle multi-GPU. The 70B in its q2_K quantized form already requires about 26GB of memory, so you would need at least that much just to fit it entirely on the GPU, which is the best-case scenario, and even then that says nothing about speed.

    I know some people with Apple silicon who have enough memory to run the 70B model and for them it runs fast enough to be usable. You may be able to find more info about it online.
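
    As a rough sanity check on that number: 70 × 10⁹ weights at roughly 3 bits per weight (q2_K’s effective rate once you include the block scales) works out to about 70e9 × 3 / 8 ≈ 26 GB for the weights alone, before the KV cache and runtime overhead, so a single 24GB card won’t cut it.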





  • The interface, called Open WebUI, can run in a container, but Ollama runs as a service on your system, from my understanding.

    The models are local and only answer queries by default. Everything happens on the system without any additional tools. Now, if you want to give them internet access, you can: it is an option you have to set up, and Open WebUI makes that possible, though I have not tried it myself. I’ve only seen the option.

    I have never heard of any LLM that “answer base queries offline before contacting their provider for support”. It’s almost impossible for the LLM to do that by itself unless you set things up for it that way.
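
    Concretely, this is roughly how the container setup looks (a sketch based on Open WebUI’s documented run command; the port and volume names are the defaults, adjust as needed):

    ```bash
    # assumes Ollama is already running as a service on the host, listening on 11434
    docker run -d --name open-webui \
      -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
      -v open-webui:/app/backend/data \
      ghcr.io/open-webui/open-webui:main
    ```

    Nothing in there phones home to a provider; the queries go straight to the local Ollama instance.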


  • What’s great is that with Ollama and Open WebUI, you can just as easily run it all on one computer locally using the open-webui pip package, or on a remote server using the container version of open-webui.

    I’ve run both and the web UI is really well done. It offers a number of advanced options, like the system prompt but also memory features, documents for RAG and even built-in Python code execution for when you want to run Python functions. You can even enable web browsing for your model.

    I’m personally very pleased with Open WebUI and Ollama; they work wonders together. Highly recommend it! And the latest Llama3.1 (in 8 and 70B variants) and Llama3.2 (in 1 and 3B variants) work very well, the latter even on CPU only! Give it a shot, it is so easy to set up :)
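
    For completeness, the all-on-one-computer route I mentioned looks roughly like this (package and model tags as they exist at the time of writing, so double-check them):

    ```bash
    # the pip route: UI and models on the same machine
    pip install open-webui
    open-webui serve          # UI on http://localhost:8080 by default

    # a small model that runs fine on CPU only
    ollama pull llama3.2:3b
    ```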