
Split A GPU Between Multiple Computers – Proxmox LXC (Unprivileged)

This video shows how to split a GPU between multiple computers using unprivileged LXCs. With this, you can maximise your GPU usage, consolidate your lab, save money, and remain secure. By the end you will have hardware transcoding working in Jellyfin (or anything else) using Docker.

LXC Demo:
https://github.com/JamesTurland/JimsGarage/tree/main/LXC/Jellyfin
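As a rough illustration of the end state, this is approximately what the Jellyfin Docker Compose service looks like once the GPU render node is visible inside the LXC. It is a minimal sketch, not the exact file from the repo above; the device path, image tag, and volume paths are assumptions to adjust for your own setup:

    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        devices:
          - /dev/dri/renderD128:/dev/dri/renderD128   # render node passed through from the LXC
        volumes:
          - ./config:/config
          - ./media:/media
        ports:
          - "8096:8096"
        restart: unless-stopped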

Recommended Hardware: https://github.com/JamesTurland/JimsGarage/blob/main/Homelab-Buyer’s-Guide/Q3-2023.md

Discord: https://discord.gg/qW5vEBekz5
Twitter: https://twitter.com/jimsgarage_
Reddit: https://www.reddit.com/user/Jims-Garage
GitHub: https://github.com/JamesTurland/JimsGarage

00:00 – Introduction to Proxmox LXC GPU Passthrough (Unprivileged)
03:03 – Proxmox Setup & Example
04:25 – Getting Started (Overview of Configuration)
12:56 – Full Walkthrough
15:20 – Starting LXC
17:10 – Deploying Jellyfin with Docker
23:17 – Quad Passthrough
24:57 – Outro


by Jim’s Garage


Alice AUSTIN

Alice AUSTIN is studying Cisco Systems Engineering. He has a passion for both hardware and software and writes articles and reviews for many IT websites.

31 thoughts on “Split A GPU Between Multiple Computers – Proxmox LXC (Unprivileged)”

  • Three questions:
    1) Have you tried gaming with this, simultaneously?

    2) Have you tested this method with an AMD GPU and/or an NVIDIA GPU?

    3) Do you ever run into a situation where the first container "hangs on" to the Intel Arc A380 and won't let go of it, such that the other containers aren't able to access said Intel Arc A380 anymore?

    I am asking because I am running into this problem right now with my NVIDIA RTX A2000: the first container sees it, and even WITHOUT that container being started and in a running state, when I try to run "nvidia-smi" in my second container (Plex), it says: "Failed to initialize NVML: Unknown Error".

    But if I remove my first container, then the second container is able to "get" the RTX A2000 passed through to it without any issues.

  • For anyone else struggling to determine which GPU is which, run `ls -l /dev/dri/by-path`, and cross reference the addresses in that output with the output of `lspci`, which will also list the full GPU name.
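    For example (illustrative output only; the PCI addresses and card numbers will differ on your system):

    ls -l /dev/dri/by-path
    # pci-0000:00:02.0-card   -> ../card0
    # pci-0000:00:02.0-render -> ../renderD128
    lspci -s 00:02.0
    # 00:02.0 VGA compatible controller: Intel Corporation ...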

  • Ty for sharing your knowledge
    Two questions, if you happen to know the answers:
    1. Can Proxmox install the Nvidia Linux drivers over Nouveau and still share the video card?
    2. If one adds a newer headless GPU like the Nvidia L4, can you use this as a secondary or even primary video card in a VM or CT?

  • What is the command to get the gid or uid when you mention the LXC namespace or host namespace?
    Great video, I hope this helps me to solve the HWA.

  • Is this simplified if one were to go with a privileged container?

  • This is really useful information, thank you! Is a similar process required when using a privileged LXC, or is the GPU available without any extra steps?

  • You do have an error in your GitHub notes. After carefully following the directions and copy/pasting from your notes, I thought it odd that no /etc/subguid could be found. Still, I proceeded, but the container wouldn't start. After looking around a bit I noticed that /etc/subguid should have been /etc/subgid. After fixing the issue the container started just fine. Regardless, great video and you gained a new sub. Thanks.

  • Do you make your own thumbnails? Yours are top tier!!!

  • 2:40 Actually, it's possible to split some Intel GPUs between VMs, but I didn't benchmark it and had no use for it, so I went for a privileged LXC at the time I was setting up mine. Now I'm considering redoing it unprivileged, thanks for the video!

  • Really awesome! But how does this work at the technical level without any GPU virtualization at all?

  • I didn't quite catch this, so: is this a way that works only with many LXCs with Docker inside, or with many LXCs with anything inside? That is, can I run, say, 4 Debian LXC containers and, in each one of them, one Windows 10 VM? If so, it is interesting and great! Otherwise (LXC+Docker)… isn't it already possible to share the GPU with every Docker container after installing NVIDIA's CUDA Docker runtime and passing --gpus all?

  • You have solved just one of my little problems. I've moved Jellyfin from one server to another and Frigate VA worked, but Jellyfin was giving me an error.

    Stream mapping:

    Stream #0:0 -> #0:0 (h264 (native) -> h264 (h264_amf))

    Stream #0:1 -> #0:1 (aac (native) -> aac (libfdk_aac))

    Press [q] to stop, [?] for help

    [h264_amf @ 0x557e719b81c0] DLL libamfrt64.so.1 failed to open

    double free or corruption (fasttop)

    Could not work out what it was; it was from a backup so the same configs etc. Looked at your notes and there was an oops: I forgot to run usermod -aG render,video root

    Now all working again.
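    In case it helps anyone else hitting the same error, the fix and a quick sanity check inside the LXC (render and video are the usual Debian group names; adjust if yours differ):

    usermod -aG render,video root
    id root                      # should now list render and video
    ls -l /dev/dri/renderD128    # confirm the render node exists and which group owns it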

  • Would there be a use case for a higher-end card like a spare RTX 3070?

  • I also run my Kubernetes test env. in LXC on my laptop, makes a lot of sense.

  • I really love your channel Jim. I learn(ed) a lot from you!!
    I would love to see how to get the low power encoding working 🙂

  • Great video! How does this work with nvidia drivers with a GPU? Does the driver need to be installed on the host and then in each LXC?

  • My weekend project right here. I run Unraid in a VM with some Docker containers running in it. I want to move all containers outside the Unraid VM. Now I can test this and also share the iGPU!!! Instead of passing it straight through to a single VM. NICE!

  • Came from the Selfhosted Newsletter a few days ago and I am loving it. Great video, and I will definitely try that as soon as I have time.

  • Nice Jim 😀. You keep making great content👌🤌

  • What you say 6 minutes into the video about the /etc/subgid file is wrong. These entries are not mappings but ranges of gids: a start gid and a count.

    I'm still trying to get my head dialled in on the lxc.idmap entries in the .conf file. Getting closer. Thanks for the video.
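    For reference, a minimal sketch of how the two files fit together (the gid numbers are assumptions; check getent group video render on the host and in the container, and see Jim's notes for the exact values):

    # /etc/subgid on the host: one range per line, <user>:<first gid>:<count>
    root:100000:65536
    root:44:1     # allow root to map host gid 44 (video)
    root:104:1    # allow root to map host gid 104 (render)

    # /etc/pve/lxc/<CTID>.conf: lxc.idmap: <u|g> <container id> <host id> <count>
    lxc.idmap: u 0 100000 65536    # all container uids to the high range
    lxc.idmap: g 0 100000 44       # container gids 0-43 to 100000-100043
    lxc.idmap: g 44 44 1           # container gid 44 (video) to host gid 44
    lxc.idmap: g 45 100045 59      # container gids 45-103 to 100045-100103
    lxc.idmap: g 104 104 1         # container gid 104 (render) to host gid 104
    lxc.idmap: g 105 100105 65431  # remainder of the range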

  • Can I ask why you're running Docker inside of an LXC container?

  • Thank you. Your tutorials are some of the best, very well explained and functional.

  • Great Content! Would be nice if you elaborated more on the low power encoder in one of your next videos.

  • Is there a way to do the opposite? As in consolidate multiple GPUs, RAM etc. into one server? I have 2 laptops and an external GPU I want to connect together to combine their compute to then be able to redistribute it out to multiple devices similar to this video. Is it possible?

  • Great video. I did a similar thing ages ago to pass through a couple of printers to an unprivileged LXC CUPS print server! Was a headache to figure everything out at the time hehehe

  • This is quickly becoming my favorite channel to watch 😀 Great stuff! Can't wait to see what you have for us next!

Comments are closed.