GPU Passthrough to Proxmox LXC Container for Plex (or Jellyfin) https://www.techopt.io/servers-networking/gpu-passthrough-to-proxmox-lxc-container-for-plex-or-jellyfin

When I added an Intel Arc GPU to my Proxmox server for Plex hardware transcoding, I quickly realized there isn’t much solid documentation on how to properly passthrough a GPU to a Proxmox LXC container. Most guides focus on VMs, not LXCs. After some trial and error, I figured out a process that works. These same steps can also apply if you’re running Jellyfin inside an LXC.

In this guide, I’ll walk you through setting up GPU passthrough to a Plex (or Jellyfin) LXC container in Proxmox, step by step.

Step 1: Enabling PCI(e) Passthrough in Proxmox

The first step is enabling PCI passthrough at the Proxmox host level. I mostly followed the official documentation here: Proxmox PCI Passthrough Wiki.

I’ll summarize what should be done below.

Enable IOMMU in BIOS

Before you continue, enable IOMMU (Intel VT-d / AMD-Vi) in your system’s BIOS. This setting lets GPUs pass directly through to containers or VMs.

Each BIOS is different, so if you’re not sure, check your motherboard’s manual for the exact setting.

Enable IOMMU Passthrough Mode

Not all hardware supports IOMMU passthrough, but if yours does, you’ll see a big performance boost. Even if your system doesn’t support it, enabling it won’t cause problems, so it’s worth turning on.

Edit the GRUB configuration with:

nano /etc/default/grub

Locate the GRUB_CMDLINE_LINUX_DEFAULT line and add:

iommu=pt

Intel vs AMD Notes

  • On Intel CPUs with a kernel older than 6.8 (Proxmox releases older than 9 that haven’t received the 6.8 kernel update), also add: intel_iommu=on
  • On Intel CPUs with kernel 6.8 or newer (a fully updated Proxmox 8, or Proxmox 9+), intel_iommu=on is enabled by default, so you don’t need to add it.
  • On AMD CPUs, IOMMU is enabled by default.

With an AMD CPU on Proxmox 9, my file looked like this in the end:

[Screenshot: GRUB config with iommu=pt added for GPU passthrough]
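
For reference, the edited line should look something like this (assuming the default quiet parameter was already present; keep any other parameters your file already has):

GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"

On older Intel setups that still need the extra parameter, it would instead be:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"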

Save the file and exit by pressing CTRL+X, then Y to confirm, then Enter.

Then update GRUB by running:

update-grub

Load VFIO Kernel Modules

Next, we need to load the VFIO modules so the GPU can be bound for passthrough. Edit the modules file with:

nano /etc/modules

Add the following lines:

vfio
vfio_iommu_type1
vfio_pci

My file looked like this in the end:

[Screenshot: /etc/modules with the VFIO modules added]

Save and exit, then update initramfs:

update-initramfs -u -k all

Reboot and Verify

Reboot your Proxmox host and verify the modules are loaded:

lsmod | grep vfio

You should see the vfio modules listed. This is what my output looks like:

[Screenshot: lsmod output showing the VFIO modules loaded]
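
If the modules loaded, the output will look roughly like this (module sizes and use counts will differ, and newer kernels may list a few extra vfio modules):

vfio_pci               16384  0
vfio_iommu_type1       49152  0
vfio                   57344  2 vfio_pci,vfio_iommu_type1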

If you don’t get any output, the kernel modules have not loaded correctly; make sure you ran the update-initramfs command above and rebooted afterwards.

To double-check IOMMU is active, run:

dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

Depending on your hardware, you should see confirmation that IOMMU or Directed I/O is enabled:

[Screenshot: dmesg output when IOMMU is active]
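
For example, an Intel system typically prints a line like the first one below, while an AMD system prints something like the second (exact wording varies by kernel version):

DMAR: Intel(R) Virtualization Technology for Directed I/O
AMD-Vi: Interrupt remapping enabled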

Step 2: Finding the GPU Renderer Device Path

Now that PCI passthrough support is enabled, the next step is to figure out the device path of the renderer for the GPU you want to pass through.

Run the following on your Proxmox host:

ls /dev/dri

This lists the DRM render and card device nodes for all detected GPUs. For example, my output looked like this:

by-path  card0  card1  renderD128  renderD129

If you only have a single GPU, you can usually assume it will be something like renderD128. But if you have multiple GPUs (as I did), you’ll need to identify which renderer belongs to your Intel Arc card.

First, run:

lspci

This will list all PCI devices. From there, I found my Intel Arc GPU at 0b:00.0:

[Screenshot: lspci output showing the Intel Arc GPU]
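
The relevant line will look something like this (your PCI address will almost certainly differ):

0b:00.0 VGA compatible controller: Intel Corporation DG2 [Arc A380]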

Next, run:

ls -l /dev/dri/by-path/

My output looked like this:

lrwxrwxrwx 1 root root  8 Aug 24 11:54 pci-0000:04:00.0-card -> ../card0
lrwxrwxrwx 1 root root 13 Aug 24 11:54 pci-0000:04:00.0-render -> ../renderD129
lrwxrwxrwx 1 root root  8 Aug 24 11:54 pci-0000:0b:00.0-card -> ../card1
lrwxrwxrwx 1 root root 13 Aug 24 11:54 pci-0000:0b:00.0-render -> ../renderD128

From this, I confirmed that my Intel Arc GPU was associated with renderD128. That means the full device path I need to pass to my LXC container is:

/dev/dri/renderD128

Step 3: Passthrough GPU Device to the Plex LXC Container

Now that we know the correct device path, we can pass it through to the LXC container.

  1. Stop the container you want to pass the GPU into.
  2. In the Proxmox web UI, select your Plex LXC container.
  3. Go to Resources.
  4. At the top, click Add → Device Passthrough.
    [Screenshot: the Add → Device Passthrough menu]
  5. In the popup, set Device Path to the GPU renderer path you identified in Step 2 (in my case, /dev/dri/renderD128).
  6. In the Access mode in CT field, type: 0666
    [Screenshot: the Device Passthrough dialog with the device path and access mode filled in]
  7. Click Add.
  8. Once added, you can start the container again. (A config-file equivalent is sketched after this list.)
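
If you prefer working in the shell, the same passthrough can be added directly to the container config at /etc/pve/lxc/<CTID>.conf. Treat this as a sketch, since the exact device option syntax can vary slightly between Proxmox versions:

dev0: /dev/dri/renderD128,mode=0666

If you do get the group-based approach working (see the note below), you would use something like dev0: /dev/dri/renderD128,gid=104 instead, where 104 is a hypothetical GID of the render group inside the container.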

A Note About Permissions

Normally, you would configure a proper UID or GID in CT for the render group inside the container so only that group has access. However, in my testing I wasn’t able to get the GPU working correctly with that method.

Using 0666 permissions allows read/write access for everyone. Since this is a GPU device node and not a directory containing files, I’m not too concerned, but it’s worth noting for anyone who takes their Linux permissions very seriously.

Step 4: Installing GPU Drivers Inside the LXC Container

With the GPU device passed through, the container now needs the proper drivers installed. This step varies depending on the Linux distribution you’re running inside the container.

Some distros may already include GPU drivers by default. But if you reach Step 5 and don’t see your GPU as an option in Plex (or Jellyfin), chances are you’re missing the driver inside your container.

In my case, I’m using a Debian container, which does not include Intel Arc drivers by default. Here’s what I did:

  1. Edit your apt sources to enable the non-free repo: nano /etc/apt/sources.list
  2. Add non-free to the end of each Debian repository line:
    [Screenshot: sources.list with non-free added to each Debian repository line]
  3. Refresh apt sources: apt update
  4. Install the Intel Arc driver package: apt install intel-media-va-driver-non-free (the full command sequence is consolidated after this list)
  5. Restart the container.
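
For convenience, here is the same sequence as shell commands, plus an optional check with vainfo (provided by the libva-utils package) to confirm the driver is detected inside the container:

nano /etc/apt/sources.list                 # add non-free to each Debian repository line
apt update
apt install intel-media-va-driver-non-free
apt install libva-utils                    # optional, provides vainfo
vainfo                                     # should list VAProfile entries if the GPU and driver are working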

For Debian, I followed the official documentation here: Debian Hardware Video Acceleration Wiki.

Note: These steps will vary depending on your GPU make, model and container distribution. Make sure to check the official documentation for your hardware and distro.

Step 5: Enabling Hardware Rendering in Plex

Now that your GPU and drivers are ready, the final step is to enable hardware transcoding inside Plex.

  1. Open the Plex Web UI.
  2. Go to Settings (top-right corner).
  3. In the left sidebar, scroll down and click Transcoder under Settings.
  4. Make sure the following options are checked:
    • Use hardware acceleration when available
    • Use hardware-accelerated video encoding
  5. Under Hardware transcoding device, you should now see your GPU. If not, double-check Step 4. In my case, it showed up as Intel DG2 [Arc A380].
  6. Select your GPU and click Save Changes.
[Screenshot: the hardware transcoding options in the Plex settings]

Testing Hardware Transcoding

To verify that hardware transcoding is working:

  1. Play back any movie or TV show.
  2. In the playback settings, change the quality to a lower resolution that forces Plex to transcode:
    [Screenshot: selecting a lower playback quality to force Plex to transcode]
  3. While it’s playing, go to Settings → Dashboard.
  4. If the GPU is handling transcoding, you’ll see (hw) beside the stream being transcoded:
    [Screenshot: the Plex dashboard showing (hw) beside the transcoding stream]

That’s it! You’ve successfully set up GPU passthrough in Proxmox LXC for Plex. These same steps should also work for Jellyfin with minor adjustments for the Jellyfin UI.

LXC Containers (CTs) vs. Virtual Machines (VMs) in Proxmox https://www.techopt.io/servers-networking/lxc-containers-vs-virtual-machines-in-proxmox

Proxmox is a powerful open-source platform that makes it easy to create and manage both LXC containers (CTs) and virtual machines (VMs). When considering LXC containers vs virtual machines in Proxmox, it’s essential to understand their differences and best use cases.

When setting up a new environment, you might wonder whether you should deploy your workload inside an LXC container or a full VM. The choice depends on what you are trying to achieve.

LXC Containers: Lightweight and Efficient

LXC (Linux Containers) provides an efficient way to run isolated environments on a Proxmox system. Unlike traditional VMs, containers share the host system’s kernel while maintaining their own isolated user space. This means they use fewer resources, start up quickly, and offer near-native performance.

When to Use LXC Containers:

  • Single Applications – If you need to run a single application in an isolated environment, an LXC container is an excellent choice.
  • Docker Workloads – If an application is only available as a Docker image, you can run Docker inside an LXC container, avoiding the overhead of a full VM (a short setup sketch follows this list).
  • Resource Efficiency – LXC containers consume fewer resources, making them ideal for lightweight applications that don’t require their own kernel.
  • Speed – Since LXC containers don’t require full emulation, they start almost instantly compared to VMs.
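
As a quick sketch of the Docker-in-LXC case: the container needs the nesting feature (and usually keyctl) enabled before Docker will run inside it. Assuming a container with ID 101, on the Proxmox host:

pct set 101 --features nesting=1,keyctl=1
pct reboot 101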

Considerations for LXC Containers:

  • Less Isolation – Since they share the host kernel, they are not as isolated as a full VM, which can pose security risks if an attacker exploits vulnerabilities in the kernel or improperly configured permissions.
  • Compatibility Issues – Some applications that expect a full OS environment may not work well inside an LXC container.
  • Limited System Control – You don’t have complete control over kernel settings like you would in a VM.

Virtual Machines: Full System Isolation

Virtual machines in Proxmox use KVM (Kernel-based Virtual Machine) technology to provide a fully virtualized system. Each VM runs its own operating system with its own kernel, making it functionally identical to a physical machine.

When to Use Virtual Machines:

  • Multiple Applications Working Together – If you need to run a system with multiple interacting services, a VM provides a fully isolated environment.
  • Custom Kernel or OS Requirements – If your application requires a specific kernel version or a non-Linux operating system (e.g., Windows or BSD), a VM is the way to go.
  • Strict Security Requirements – Since VMs have strong isolation from the host system, they provide better security for untrusted workloads.
  • Compatibility – Any software that runs on a physical machine will run in a VM without modification.

Considerations for Virtual Machines:

  • Higher Resource Usage – VMs require more CPU, RAM, and disk space compared to containers.
  • Slower Start Times – Because they emulate an entire system, VMs take longer to boot up.
  • More Maintenance – You’ll need to manage full OS installations, updates, and security patches for each VM separately.

Final Thoughts: When to Choose LXC Containers vs. Virtual Machines in Proxmox

In general, if you need to run a single application in isolation, or if your application is only available as a Docker image, an LXC container is the better choice. Containers are lightweight, fast, and efficient. However, if you’re running a more complex system with multiple interacting applications, need complete OS independence, or require strong isolation, a VM is the better solution.

Proxmox makes it easy to work with both LXC and VMs, so understanding your workload’s needs will help you choose the right tool for the job. By leveraging the strengths of each, you can optimize performance, security, and resource usage in your environment.

Importing a QCOW2 Image in Proxmox https://www.techopt.io/servers-networking/importing-a-qcow2-image-in-proxmox

If you’re setting up a virtual machine with a QCOW2 image in Proxmox, you might feel overwhelmed at first. However, understanding the steps can make it a manageable task. This guide will explain how to import a QCOW2 image into Proxmox, including uploading the file using SFTP via the command line.

Working with QCOW2 in Proxmox setups involves a few terminal commands and some configuration in the Proxmox web interface, but it’s straightforward once you’ve done it once.

Steps to Import a QCOW2 Image in Proxmox

1. Access Your Proxmox Terminal

Log in to the terminal of your Proxmox host. This can be done directly or via SSH. The terminal is essential for transferring and managing files during the setup process.

2. Prepare the QCOW2 Storage Directory

Store QCOW2 images in a dedicated directory for better organization. Proxmox doesn’t create this directory by default, so you may need to set it up. A good place to store QCOW2 images is in /var/lib/vz/template/qcow.

  • Check if the directory exists:

    ls /var/lib/vz/template/qcow

  • If it doesn’t exist, create it:

    mkdir -p /var/lib/vz/template/qcow

3. Upload the QCOW2 Image Using SFTP

Uploading the QCOW2 image requires an SFTP connection. Here’s how to do it using the command line:

  1. Open a terminal on your local machine.
  2. Use the sftp command to connect to your Proxmox host. Replace user with your Proxmox username and host with the IP address of your Proxmox server:

    sftp user@host

  3. Navigate to the directory where you want to upload the image:

    cd /var/lib/vz/template/qcow

  4. Upload the QCOW2 file from your local system:

    put /path/to/your_image.qcow2

    Replace /path/to/your_image.qcow2 with the actual path to the QCOW2 file on your local machine.
  5. Exit the SFTP session with bye.
[Screenshot: uploading a QCOW2 image to Proxmox over SFTP]
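
If you’d rather skip the interactive session, a single scp command does the same job (replace user, host and the file path as in the sftp example):

scp /path/to/your_image.qcow2 user@host:/var/lib/vz/template/qcow/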

4. Create a New Virtual Machine

Now, create a VM in Proxmox using the web interface:

  1. Log in to the Proxmox web interface.
  2. Select Create VM and configure the basic settings, such as the name and operating system. Make sure to select Do not use any media on the OS tab.
  3. On the Disks tab, leave all settings at their default values:
    • The default settings work because QCOW2 images already define their initial drive size. If needed, you can expand the drive size later.
    • Select any bus type for now, as you will replace this with the QCOW2 disk.
[Screenshot: selecting Do not use any media when creating the VM]

5. Import the QCOW2 Image

After creating the VM, use the qm importdisk command to import the QCOW2 image to the VM.

  1. Identify the VM’s ID from the Proxmox web interface (e.g., 120).
  2. Run the following command in the Proxmox terminal:

    qm importdisk 120 /var/lib/vz/template/qcow/your_image.qcow2 local

This imports the QCOW2 image as an unused disk linked to the VM.

Make sure to replace the VM ID, image location and storage name with the values for your system.

Here is a sample output from the command (in my case the target storage was named VMs rather than local):

importing disk '/var/lib/vz/template/qcow/haos_ova-14.1.qcow2' to VM 120 ...
transferred 0.0 B of 32.0 GiB (0.00%)
transferred 396.5 MiB of 32.0 GiB (1.21%)
transferred 737.3 MiB of 32.0 GiB (2.25%)
transferred 1.1 GiB of 32.0 GiB (3.36%)
transferred 1.4 GiB of 32.0 GiB (4.47%)
transferred 1.8 GiB of 32.0 GiB (5.59%)
transferred 2.1 GiB of 32.0 GiB (6.70%)
transferred 2.5 GiB of 32.0 GiB (7.81%)
transferred 2.9 GiB of 32.0 GiB (8.92%)
...
transferred 20.9 GiB of 32.0 GiB (65.44%)
transferred 21.3 GiB of 32.0 GiB (66.56%)
transferred 21.7 GiB of 32.0 GiB (67.67%)
transferred 22.0 GiB of 32.0 GiB (68.78%)
transferred 22.4 GiB of 32.0 GiB (69.89%)
transferred 22.7 GiB of 32.0 GiB (71.01%)
transferred 32.0 GiB of 32.0 GiB (100.00%)
unused0: successfully imported disk 'VMs:vm-120-disk-3'

In this case, the command created a new 32.0 GiB disk from our image, which we can now attach to the VM.

6. Attach the Disk to the VM

Once imported, attach the QCOW2 image to the VM:

  1. In the Proxmox web interface, go to the VM and select Hardware.
  2. Find the unused disk and attach it as a VirtIO Block Device, or as a SATA device if your guest OS doesn’t have VirtIO drivers. (A CLI equivalent is shown after this list.)
  3. Optionally, remove or detach the original disk created during VM setup if it’s no longer needed.
[Screenshot: attaching the unused disk created from the QCOW2 image]
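
The attachment can also be done from the Proxmox terminal with qm set. Using the IDs from the example above (VM 120, imported volume VMs:vm-120-disk-3), it would look roughly like this:

qm set 120 --virtio0 VMs:vm-120-disk-3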

7. Update the Boot Order

Ensure the VM boots from the newly attached QCOW2 image:

  1. Go to the Options tab in the VM’s settings.
  2. Set the newly attached VirtIO disk as the primary boot device (or use the command shown below).
[Screenshot: setting the imported disk as the primary boot device]
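
Alternatively, on current Proxmox versions this can be done from the terminal (assuming the disk was attached as virtio0):

qm set 120 --boot order=virtio0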

8. Start the VM

Boot up the VM. If everything is set up correctly, the system should load from the QCOW2 image!

Remarks

  • When creating the VM, default disk settings are sufficient since the QCOW2 image specifies its own initial drive size. You can expand the size later if needed (an example follows these notes).
  • The sftp command is a straightforward way to upload the QCOW2 file. If you prefer a graphical interface, tools like FileZilla can achieve the same result.
  • Use the /var/lib/vz/template/qcow directory to keep QCOW2 images organized and accessible for future use. You can use any directory, but this is an easy place to remember since it’s with other templates.
  • You can also use any other method to upload your QCOW2 images to this directory, or even download them to this directory directly with a tool such as wget.
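
For example, growing the imported disk later is a single command on the host (assuming VM 120 with the disk attached as virtio0); you’ll still need to expand the partition and filesystem inside the guest afterwards:

qm resize 120 virtio0 +8G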
