Virtualization Post Tag - TechOpt.io
https://www.techopt.io/tag/virtualization
Programming, servers, Linux, Windows, macOS & more

GPU Passthrough to Proxmox LXC Container for Plex (or Jellyfin)
https://www.techopt.io/servers-networking/gpu-passthrough-to-proxmox-lxc-container-for-plex-or-jellyfin

When I added an Intel Arc GPU to my Proxmox server for Plex hardware transcoding, I quickly realized there isn’t much solid documentation on how to properly pass a GPU through to a Proxmox LXC container. Most guides focus on VMs, not LXCs. After some trial and error, I figured out a process that works. These same steps can also apply if you’re running Jellyfin inside an LXC.

In this guide, I’ll walk you through enabling GPU passthrough to a Plex (or Jellyfin) LXC in Proxmox, step by step.

Step 1: Enabling PCI(e) Passthrough in Proxmox

The first step is enabling PCI passthrough at the Proxmox host level. I mostly followed the official documentation here: Proxmox PCI Passthrough Wiki.

I’ll summarize what should be done below.

Enable IOMMU in BIOS

Before you continue, enable IOMMU (Intel VT-d / AMD-Vi) in your system’s BIOS. This setting lets GPUs pass directly through to containers or VMs.

Each BIOS is different, so if you’re not sure, check your motherboard’s instruction manual.

Enable IOMMU Passthrough Mode

Not all hardware supports IOMMU passthrough, but if yours does, you’ll see a big performance boost. Even if your system doesn’t support it, enabling it won’t cause problems, so it’s worth turning on.

Edit the GRUB configuration with:

nano /etc/default/grub

Locate the GRUB_CMDLINE_LINUX_DEFAULT line and add:

iommu=pt

Intel vs AMD Notes

  • On Intel CPUs with a kernel older than 6.8 (Proxmox releases before v9 that haven’t been updated), also add: intel_iommu=on
  • On Intel CPUs with kernel 6.8 or newer (an updated Proxmox 8, or Proxmox 9+), you don’t need the intel_iommu=on parameter.
  • On AMD CPUs, IOMMU is enabled by default.

With an AMD CPU on Proxmox 9, my file looked like this in the end:

grub config to passthrough gpu to proxmox lxc with iommu
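For reference, the relevant line typically ends up looking something like this (a sketch; keep any other default options you already have, such as quiet, on the same line):

GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"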

Save the file by pressing CTRL+X, then Y to confirm, then Enter to save.

Then update grub by running:

update-grub

Load VFIO Kernel Modules

Next, we need to load the VFIO modules so the GPU can be bound for passthrough. Edit the modules file with:

nano /etc/modules

Add the following lines:

vfio
vfio_iommu_type1
vfio_pci

My file looked like this in the end:

added kernel modules needed for gpu passthrough to lxc in proxmox

Save and exit, then update initramfs:

update-initramfs -u -k all

Reboot and Verify

Reboot your Proxmox host and verify the modules are loaded:

lsmod | grep vfio

You should see the vfio modules listed. This is what my output looks like:

vfio kernel modules are loaded

If you don’t get any output, the kernel modules have not loaded correctly and you probably forgot to run the update-initramfs command above.

To double-check IOMMU is active, run:

dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

Depending on your hardware, you should see confirmation that IOMMU or Directed I/O is enabled:

output when IOMMU is active (or similar)

Step 2: Finding the GPU Renderer Device Path

Now that PCI passthrough support is enabled, the next step is to figure out the device path of the renderer for the GPU you want to pass through.

Run the following on your Proxmox host:

ls /dev/dri

This will print all the detected GPUs. For example, my output looked like this:

by-path  card0  card1  renderD128  renderD129

If you only have a single GPU, you can usually assume it will be something like renderD128. But if you have multiple GPUs (as I did), you’ll need to identify which renderer belongs to your Intel Arc card.

First, run:

lspci

This will list all PCI devices. From there, I found my Intel Arc GPU at 0b:00.0:

lspci output showing my intel arc GPU card

Next, run:

ls -l /dev/dri/by-path/

My output looked like this:

lrwxrwxrwx 1 root root  8 Aug 24 11:54 pci-0000:04:00.0-card -> ../card0
lrwxrwxrwx 1 root root 13 Aug 24 11:54 pci-0000:04:00.0-render -> ../renderD129
lrwxrwxrwx 1 root root  8 Aug 24 11:54 pci-0000:0b:00.0-card -> ../card1
lrwxrwxrwx 1 root root 13 Aug 24 11:54 pci-0000:0b:00.0-render -> ../renderD128

From this, I confirmed that my Intel Arc GPU was associated with renderD128. That means the full device path I need to pass to my LXC container is:

/dev/dri/renderD128

Step 3: Passthrough GPU Device to the Plex LXC Container

Now that we know the correct device path, we can pass it through to the LXC container.

  1. Stop the container you want to pass the GPU into.
  2. In the Proxmox web UI, select your Plex LXC container.
  3. Go to Resources.
  4. At the top, click Add → Device Passthrough.
    steps to get to device passthrough menu
  5. In the popup, set Device Path to the GPU renderer path you identified in Step 2 (in my case, /dev/dri/renderD128).
  6. In the Access mode in CT field, type: 0666
    add passthrough device to lxc config
  7. Click Add.
  8. Once added, you can start the container again.
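If you prefer working from the shell instead of the web UI, the passthrough added above is normally reflected as a device entry in the container’s config file. This is a sketch of what /etc/pve/lxc/<CTID>.conf typically contains after this step (the device index and exact options may differ between Proxmox versions):

dev0: /dev/dri/renderD128,mode=0666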

A Note About Permissions

Normally, you would configure a proper UID or GID in CT for the render group inside the container so only that group has access. However, in my testing I wasn’t able to get the GPU working correctly with that method.

Using 0666 permissions allows read/write access for everyone. Since this is a GPU device node and not a directory containing files, I’m not too concerned, but it’s worth noting for anyone who takes their Linux permissions very seriously.

Step 4: Installing GPU Drivers Inside the LXC Container

With the GPU device passed through, the container now needs the proper drivers installed. This step varies depending on the Linux distribution you’re running inside the container.

Some distros may already include GPU drivers by default. But if you reach Step 5 and don’t see your GPU as an option in Plex (or Jellyfin), chances are you’re missing the driver inside your container.

In my case, I’m using a Debian container, which does not include Intel Arc drivers by default. Here’s what I did:

  1. Edit your apt sources to enable the non-free repo: nano /etc/apt/sources.list
  2. Add non-free to the end of each Debian repository line (see the example after this list):
    my sources.list after adding the non-free Debian repository
  3. Refresh apt sources: apt update
  4. Install the Intel Arc driver package: apt install intel-media-va-driver-non-free
  5. Restart the container.
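For step 2, a typical Debian 12 (bookworm) repository line with non-free enabled looks roughly like this (a sketch; your mirror, release name and existing components may differ):

deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware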

For Debian, I followed the official documentation here: Debian Hardware Video Acceleration Wiki.

Note: These steps will vary depending on your GPU make, model and container distribution. Make sure to check the official documentation for your hardware and distro.
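Optionally, you can confirm from inside the container that VA-API can open the GPU before touching Plex. The vainfo utility works for this; the sketch below assumes a Debian-based container and the renderD128 path from Step 2 (on older vainfo builds, plain vainfo with no arguments may be enough):

apt install vainfo
vainfo --display drm --device /dev/dri/renderD128

If vainfo prints a list of supported VA-API profiles instead of an error, the driver and the device passthrough are both working.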

Step 5: Enabling Hardware Rendering in Plex

Now that your GPU and drivers are ready, the final step is to enable hardware transcoding inside Plex.

  1. Open the Plex Web UI.
  2. Go to Settings (top-right corner).
  3. In the left sidebar, scroll down and click Transcoder under Settings.
  4. Make sure the following options are checked:
    • Use hardware acceleration when available
    • Use hardware-accelerated video encoding
  5. Under Hardware transcoding device, you should now see your GPU. If not, double-check Step 4. In my case, it showed up as Intel DG2 [Arc A380].
  6. Select your GPU and click Save Changes.
Steps to enable hardware encoding in plex ui

Testing Hardware Transcoding

To verify that hardware transcoding is working:

  1. Play back any movie or TV show.
  2. In the playback settings, change the quality to a lower resolution that forces Plex to transcode:
    Force Plex to transcode by selecting a lower quality
  3. While it’s playing, go to Settings → Dashboard.
  4. If the GPU is handling transcoding, you’ll see (hw) beside the stream being transcoded:
    The GPU passthrough to the proxmox lxc for plex can be confirmed working from Plex dashboard: here are the steps

That’s it! You’ve successfully set up GPU passthrough in Proxmox LXC for Plex. These same steps should also work for Jellyfin with minor adjustments for the Jellyfin UI.

How to Run Tails OS in VirtualBox
https://www.techopt.io/linux/how-to-run-tails-os-in-virtualbox

If you want to use Tails OS without a USB stick and without a separate computer, running it inside VirtualBox is a great option. VirtualBox is open-source, just like Tails OS, making it an ideal match for users who value transparency and privacy. This aligns perfectly with the Tails OS philosophy of using free and open technologies to ensure security. In this guide, you’ll learn how to run Tails OS in VirtualBox in a few easy steps.

You can test or use Tails OS securely on your computer with this method, and nothing is saved after you power off the virtual machine.

Step 1: Download Tails OS ISO

First, download the latest Tails OS ISO file from the official Tails website.

Download ISO for Tails OS in virtualbox

The ISO is found under the Burning Tails on a DVD section. Even though you aren’t burning a DVD, you need to use the ISO file for VirtualBox because VirtualBox boots operating systems from CD/DVD images.

The default Tails OS download on the homepage is a .img file for USB sticks, which supports persistence. However, VirtualBox cannot use persistence and does not support virtual USB drives via .img files, so always choose the ISO.

Step 2: Create a New Virtual Machine for Tails OS in VirtualBox

Open VirtualBox and click on New to create a new virtual machine.

Name and Operating System

Give your VM a name, such as Tails OS.

Select the Tails OS ISO file you downloaded as the ISO Image.

Ensure the following settings are automatically detected, and set them if not:

  • Set the type to Linux
  • Set the subtype to Debian
  • Set the version to Debian (64-bit)

Be sure to check Skip Unattended Installation. This is very important for Tails OS because you want to prevent VirtualBox from creating a default user account or setting a password. Tails OS boots directly into its own secure environment by design.

tails os on virtualbox name and operating system settings

Hardware

The technical minimum for Tails OS is 2048 MB RAM and 1 CPU core. However, for a smoother experience, I recommend:

  • Setting the base memory (RAM) to 4096 MB
  • Setting processors to 2 CPU cores or more, if your system allows
Hardware settings creating a tails os virtual machine in VirtualBox

Hard Disk

Under the Hard Disk section, select Do Not Add a Virtual Hard Disk.

do not add virtual hard disk to vm

This setup ensures it’s physically impossible to save anything inside the Tails OS virtual machine. When you stop the VM, you erase everything, keeping your session private and secure.

When you’re happy with your virtual machine settings, click Finish.
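If you’d rather script the VM creation, a roughly equivalent VBoxManage sequence looks like this (a sketch; the VM name, ISO path, memory and CPU counts are placeholders to adjust for your system):

VBoxManage createvm --name "Tails OS" --ostype Debian_64 --register
VBoxManage modifyvm "Tails OS" --memory 4096 --cpus 2
VBoxManage storagectl "Tails OS" --name "IDE" --add ide
VBoxManage storageattach "Tails OS" --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium /path/to/tails.iso
VBoxManage startvm "Tails OS"

Note that there is intentionally no hard-disk creation or attachment step, matching the no-virtual-hard-disk setup described above.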

Step 3: Boot and Use Tails OS in VirtualBox

Start your virtual machine. Tails OS will boot from the ISO.

Set your language and keyboard layout settings, and click Start Tails.

Start Tails OS language and keyboard settings

Tails OS will ask you how you want to connect to the Tor network. If you’re not sure, I suggest choosing Connect to Tor automatically and clicking Connect to Tor.

Connect to Tor settings

That’s it! Launch the Tor browser and start browsing the web anonymously from Tails OS.

Tails OS running in VirtualBox

Now you can use Tails OS in VirtualBox safely, knowing that all your activities are wiped when you shut down the VM!

Remarks

  • Nothing will be saved when you shut down the virtual machine. Tails OS booted from the ISO does not support persistence. If you need persistence (the ability to save files or settings between sessions), you should use the default USB installation method instead.
  • Do not install VirtualBox Guest Additions. Guest Additions can expose parts of your host system to the virtual machine, which goes against Tails OS’s privacy goals. Besides, when you power off the VM, it will wipe Guest Additions anyway.
  • Keep the ISO file. Do not delete the ISO file you downloaded, because your virtual machine will need it every time it boots Tails OS.
  • Use open-source virtualization software. Tails OS recommends using open-source virtualization tools like VirtualBox or KVM to run Tails OS because their transparency and auditability align with Tails OS’s privacy philosophy. Proprietary alternatives (such as VMware) are not as easily audited for privacy.
  • The Tails OS documentation advises against using VirtualBox because it gets stuck at 800×600 resolution. I’ve found this advice to be outdated: you can set a variety of screen resolutions from the Tails OS display settings menu by right-clicking the desktop. VirtualBox runs Tails OS very well and is a much easier open-source alternative to KVM.


Run Virtual Machines on OPNsense with bhyve
https://www.techopt.io/servers-networking/run-virtual-machines-on-opnsense-with-bhyve

If you’ve ever looked at your OPNsense box and thought, “this thing is barely breaking a sweat,” you’re not alone. Many users with overpowered hardware are now looking for ways to run virtual machines on OPNsense to take full advantage of those idle resources with additional software and services. In my case, I had 8GB of RAM and 120GB of storage sitting mostly idle, with CPU usage rarely spiking beyond a modest blip.

Instead of virtualizing OPNsense itself using something like Proxmox (which is a common suggestion), I wanted to keep OPNsense on bare metal for reliability and stability. But I also wanted to move some VMs directly onto my router box, such as my Ubuntu Server VM running the TP-Link Omada software that controls my WiFi. That led me down the rabbit hole of running a virtual machine on OPNsense—and the tool that makes this possible is bhyve, the native FreeBSD hypervisor.

This is definitely not officially supported and could break with any OPNsense update, so proceed with caution. My setup is loosely based on a 2023 forum post I found in the OPNsense community forums.

Step 1: Installing bhyve

To get bhyve running, we first need to enable the FreeBSD repository temporarily. OPNsense locks its package manager to prevent upgrades from mismatched repos, so we need to handle this carefully.

Lock the pkg tool

pkg lock -y pkg

Enable the FreeBSD repository

sed -i '' 's/enabled: no/enabled: yes/' /usr/local/etc/pkg/repos/FreeBSD.conf

Install bhyve and required packages

pkg install -y vm-bhyve grub2-bhyve bhyve-firmware

Disable the FreeBSD repository again

sed -i '' 's/enabled: yes/enabled: no/' /usr/local/etc/pkg/repos/FreeBSD.conf

Unlock pkg

pkg unlock -y pkg

⚠ Leaving the FreeBSD repo enabled may interfere with future OPNsense updates. Disabling it again helps maintain system stability, but means bhyve won’t update automatically. If you want to update bhyve later, you’ll need to repeat this process.
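As a quick sanity check before moving on, you can confirm the packages actually installed:

pkg info vm-bhyve grub2-bhyve bhyve-firmware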

Step 2: Configuring the Firewall for Virtual Machines on OPNsense

Next, we need to create a virtual bridge to let our bhyve virtual machines talk to each other and the rest of the network.

This part is all done from the OPNsense UI.

Create a bridge interface

  • Navigate to Interfaces → Devices → Bridge
  • Add a new bridge with your LAN interface as a member
  • Enable link-local IPv6 if you use IPv6 on your network
  • Note the name of your bridge (e.g., bridge0)
Creating the LAN bridge switch for virtual machines on OPNsense

Assign and configure the bridge interface

  • Go to Interfaces → Assignments
  • Assign bridge0 to a new interface (give it a description like bhyve_switch_public)
  • Enable the interface
  • Check Lock – Prevent interface removal to avoid losing it on reboot
Assigning an interface to the VM bridge

Allow traffic through the bridge

  • Navigate to Firewall → Rules → bhyve_switch_public
  • Add a rule to allow all traffic (you can tighten this later if needed)
Allow all firewall rule on switch for virtual machines on opnsense

One thing to note: the forum post I referenced above did not mention anything about assigning an interface or adding a firewall rule for the bridge. However, in my experience, my virtual machines in OPNsense had no network connectivity until I completed both of these steps.

Step 3: Setting Up bhyve

With bhyve installed and your network bridge configured, the next step is to prepare the virtual machine manager and your storage directory. You have two options here: using ZFS (ideal for advanced snapshots and performance features) or a standard directory (simpler and perfectly fine for one or two VMs).

Option 1: Using ZFS for VM Storage

If you’re using ZFS (like zroot), create a dataset for your VMs:

zfs create zroot/bhyve

Then enable and configure vm-bhyve:

sysrc vm_enable="YES"
sysrc vm_dir="zfs:zroot/bhyve"
vm init

Option 2: Using a Standard Directory

If you’re not using ZFS or want a simpler setup for running just a few virtual machines on OPNsense:

mkdir /bhyve
sysrc vm_enable="YES"
sysrc vm_dir="/bhyve"
vm init

This sets up /bhyve as the default VM storage directory. You’ll now be able to manage and create virtual machines using the vm command-line tool, with bhyve handling the hypervisor backend.

This is the option that I personally chose for my setup, since I only plan on running 1 or 2 VMs.

Step 4: Configuring bhyve

With the storage and base configuration out of the way, the next step is to configure networking for bhyve. To do this, we’ll create a virtual switch that connects bhyve’s virtual NICs to the bridge interface we created in Step 2.

Setting up Networking

Run the following command to create a manual switch that binds to the bridge0 interface:

vm switch create -t manual -b bridge0 public

This tells vm-bhyve to create a virtual switch named public and associate it with bridge0, allowing your VMs to communicate with the rest of your network. Any virtual machine you create can now be attached to this switch to access LAN or internet resources just like any other device on your network.
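You can confirm the switch exists and is bound to the right bridge with the following command (the exact columns shown vary between vm-bhyve versions):

vm switch list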

Copying VM Templates

Before you start creating virtual machines in OPNsense, it’s helpful to copy the sample configuration templates that come with vm-bhyve. These templates make it easier to define virtual machines for different operating systems like FreeBSD, Linux, or Windows.

If you’re using ZFS and followed the zroot/bhyve setup as described in Option 1 above:

cp /usr/local/share/examples/vm-bhyve/* /zroot/bhyve/.templates/

If you’re using a standard directory setup like /bhyve, as described in Option 2 above:

cp /usr/local/share/examples/vm-bhyve/* /bhyve/.templates/

This copies example VM configuration templates into the .templates directory within your VM storage location. These templates provide base config files for creating new VMs and are a helpful starting point for most operating systems.

Step 5: Setting Up Your Virtual Machine

In this step, we’ll walk through creating your first VM using bhyve. Since there’s more to this than just launching a template, I’ve broken it down into three parts: setting up the VM itself (including creating a config), installing the operating system, and then configuring the firewall for the VM.

Configuring the VM

For my setup, I was creating an Ubuntu Server instance that runs TP-Link Omada controller software.

First, navigate into your templates directory. If you’re using ZFS, it’ll look like this:

cd /zroot/bhyve/.templates/

Or if you’re using a regular directory setup:

cd /bhyve/.templates/

Inside, you’ll find configuration files for different OS templates. I used the one for Ubuntu, found at ubuntu.conf, which contains the following:

loader="grub"
cpu=1
memory=512M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"

This basic config uses GRUB as the loader and allocates 1 CPU and 512MB of RAM. It attaches to the public switch we created earlier and uses a virtual block device for storage.

To create a custom config for my Omada controller VM, I simply copied the Ubuntu template:

cp ubuntu.conf omada-controller.conf

This gave me a dedicated configuration file I could tweak without touching the original Ubuntu template.

Next, I edited the omada-controller.conf file using nano to better suit the needs of the Omada software:

nano omada-controller.conf

And the contents:

loader="uefi"
cpu=2
memory=3G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"

This configuration increases the resources available to the VM—allocating 2 CPU cores and 3GB of RAM, which is more appropriate for running the Omada controller software.

Initially, I tried using the GRUB loader as shown in the Ubuntu template, but I ran into problems booting the OS after installation. After doing some research, I found that this is a fairly common issue when using GRUB with certain Linux distributions on bhyve. Switching to uefi resolved the problem for me and allowed the VM to boot normally. Your mileage may vary, but if you’re stuck at boot, switching the loader to uefi is worth trying.

Starting Guest OS Installation

Before you can install the operating system, you’ll need to download and import the installation media (ISO file) for your OS into bhyve. This is easy to do with the vm iso command.

I downloaded the Ubuntu 22.04.5 server installer ISO using:

vm iso https://mirror.digitaloceans.dev/ubuntu-releases/22.04.5/ubuntu-22.04.5-live-server-amd64.iso

This downloaded the ISO file directly into the /bhyve/.iso directory (or /zroot/bhyve/.iso if you’re using ZFS).

Once the ISO was downloaded, I started the installation process for my VM using:

vm install -f omada-controller ubuntu-22.04.5-live-server-amd64.iso

This command tells vm-bhyve to boot the VM using the downloaded ISO so you can proceed with installing the operating system in console mode.

In my case, when using the GRUB loader, the console mode installer worked fine. However, after switching to UEFI mode, I ran into a problem where the console installer would no longer boot properly. After doing some research, I found that this is a common issue with bhyve when using UEFI.

To work around this, I edited my omada-controller.conf and temporarily added the following line to enable graphical output:

graphics="yes"

The updated configuration file looked like this:

loader="uefi"
cpu=2
memory=3G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"

This allowed the ISO to boot into the graphical installer, which I accessed using VNC. After installation, I planned to enable SSH on the VM to manage it more easily and remove the graphics option.
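As a side note, graphics="yes" isn’t the only knob here. Based on my understanding of vm-bhyve’s documented options (double-check man vm on your install), the VNC console can also be tuned in the VM config, roughly like this:

# enable the UEFI framebuffer / VNC console
graphics="yes"
# VNC port to listen on (typically 5900 for the first VM)
graphics_port="5900"
# address the VNC server binds to
graphics_listen="0.0.0.0"
# framebuffer resolution
graphics_res="1280x720"
# pause booting until a VNC client connects
graphics_wait="yes"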

However, to use VNC to complete the installation, I needed to add additional firewall rules to allow VNC access to the virtual machine after it was created and booted.

Step 6: Configuring the Firewall (Again)

When a bhyve virtual machine boots, it creates a new network interface on the host system, usually with the prefix tap. When my bhyve VM booted, the firewall blocked all network access on the new VM interface. As a result, I was unable to connect to the VM, and the VM itself had no network connectivity.

Here’s what I did to properly assign the VM network interface and open up traffic:

  • Run ifconfig from the console to see the list of interfaces.
  • Identify the new interface created by bhyve (it will usually start with tap). In my case, it was named tap0.
  • Rename the tap interface so it can be properly assigned in the OPNsense GUI:
ifconfig tap0 name bhyve0
  • Go to Interfaces → Assignments in the OPNsense UI.
  • Assign the newly renamed bhyve0 interface.
  • Give it a description like bhyve_vm_omada_controller.
  • Enable the interface.
Assigning the vm to an interface in the OPNsense UI
  • Go to Firewall → Rules → [bhyve_vm_omada_controller].
  • Add a rule to allow all traffic through the interface.

This setup ensured that the virtual machine had full network access through its own dedicated interface, while still keeping things clean and organized within OPNsense.

Keep in mind that each time the VM is powered off and started again, a new tap interface is created. Because of this, you must manually rename and reassign the interface every time the VM boots until we set up a persistent interface configuration after the OS installation is complete.

Keeping the interface name consistent between shutdowns so firewall rules apply correctly was one of the trickiest parts of the entire setup for me. I’ll dive deeper into the different solutions I tested, and what finally solved the issue reliably, in the final step of this article.

Step 7: Connecting with VNC and Installing the OS

Now that our virtual machine on OPNsense is configured, the ISO is loaded, and the firewall rules are in place, it’s time to connect to the VM and install the OS.

  • Open your preferred VNC client. I personally used Remmina for this, but other popular options like TightVNC and TigerVNC will also work fine.
  • Connect to your OPNsense router’s IP address on port 5900.
  • You should see your OS installer’s screen boot up!
Remotely connect to virtual machines on OPNsense with VNC

Proceed through the guest OS installation like you normally would. During installation, I made sure to:

  • Enable the OpenSSH server in Ubuntu so I could manage the VM over SSH instead of VNC afterward.
  • Configure the VM with a static IP address within my LAN subnet.

Once installation was completed, I rebooted the VM. If you enabled SSH, you should now be able to connect to your VM via its IP address without needing to rely on VNC anymore.

After confirming SSH access, I edited the VM’s configuration to remove the graphics="yes" line from omada-controller.conf for security and resource efficiency.

After powering off the VM to make these changes, I had to manually rename the network adapter again following the steps from Step 6, before I could access it via SSH.

Now let’s configure the VM to start automatically at boot, and find a more permanent solution for the network adapter issue in the next step.

Step 8: Starting the VM at Boot and Fixing the Network Interface Name Issue

Starting the VM at Boot

By default, bhyve VMs don’t start automatically with the system. You can set individual VMs to start using:

vm set omada-controller boot=on

However, there’s a more flexible method that allows you to define a list of VMs that should start at boot. I used the following command to specify mine:

sysrc vm_list="omada-controller"

This ensures vm-bhyve starts the omada-controller VM whenever the system boots.

If you want to start multiple VMs, just list them separated by spaces:

sysrc vm_list="omada-controller another-vm some-other-vm"

This is useful if you plan to run multiple VMs on your OPNsense machine via bhyve.

Fixing the Networking Interface Name at Boot

One of the trickiest parts of my setup was keeping the VM’s network interface name consistent between reboots. I initially tried using the following line in my VM config:

network0_name="bhyve0"

This is supposed to create the VM’s network interface with the name bhyve0 when it boots.

However, I found that while this approach worked with loader="grub" in the VM config (BIOS mode), it caused the VM to crash immediately at startup when using loader="uefi".

Instead, I leaned into ifconfig and ran the following:

sysrc ifconfig_tap0_name="bhyve0"

This ensures the tap0 interface is automatically renamed to bhyve0 at boot time, which has been working well for me. It also means the firewall rules we created earlier apply without any manual intervention.

We have now configured the VM to start at boot, and the network interface it creates is automatically renamed at the same time.

Keep in mind that with the ifconfig method, you will have to manually run ifconfig again if the VM is powered off and on, but the OPNsense host is not:

ifconfig tap0 name bhyve0

This is because the interface gets destroyed and recreated with the default tap0 name.

Running Virtual Machines on OPNsense: My Final Thoughts

Running virtual machines directly on OPNsense using bhyve is an advanced but rewarding undertaking. It allows you to consolidate infrastructure and put underutilized hardware to work, all while keeping your firewall on bare metal for maximum reliability. While the process involves a lot of manual setup—especially around networking and boot configuration—it ultimately gives you a lightweight, headless VM host tightly integrated into your router.

Just remember that this is an unofficial and unsupported use case. Updates to OPNsense or FreeBSD may break things, so keep good backups and approach with a tinkerer’s mindset. But if you’re comfortable on the command line and like squeezing every drop of utility out of your hardware, this setup is a powerful way to do just that.

Remarks

  • vm list is a good command to help see loaded VMs and their status. You can also start and stop VMs with vm start vm-name and vm stop vm-name.
  • You will have to configure the network adapter settings for each VM you create and apply the firewall rules for each VM in the OPNsense UI as we did above.
  • It’s important to note the configuration differences between using the UEFI loader and the BIOS loader when setting up virtual machines on OPNsense, as stated throughout the article.
  • To see a sample VM configuration, you can take a look at an example on GitHub here.
  • Again, this is all unsupported. Follow at your own risk.
    • This is for advanced users only. None of it is manageable through the OPNsense UI, except firewall rules. Know what you’re doing in the terminal before following this guide.

LXC Containers (CTs) vs. Virtual Machines (VMs) in Proxmox
https://www.techopt.io/servers-networking/lxc-containers-vs-virtual-machines-in-proxmox

Proxmox is a powerful open-source platform that makes it easy to create and manage both LXC containers (CTs) and virtual machines (VMs). When considering LXC containers vs virtual machines in Proxmox, it’s essential to understand their differences and best use cases.

When setting up a new environment, you might wonder whether you should deploy your workload inside an LXC container or a full VM. The choice depends on what you are trying to achieve.

LXC Containers: Lightweight and Efficient

LXC (Linux Containers) provides an efficient way to run isolated environments on a Proxmox system. Unlike traditional VMs, containers share the host system’s kernel while maintaining their own isolated user space. This means they use fewer resources, start up quickly, and offer near-native performance.

When to Use LXC Containers:

  • Single Applications – If you need to run a single application in an isolated environment, an LXC container is an excellent choice.
  • Docker Workloads – If an application is only available as a Docker image, you can run Docker inside an LXC container, avoiding the overhead of a full VM.
  • Resource Efficiency – LXC containers consume fewer resources, making them ideal for lightweight applications that don’t require their own kernel.
  • Speed – Since LXC containers don’t require full emulation, they start almost instantly compared to VMs.

Considerations for LXC Containers:

  • Less Isolation – Since they share the host kernel, they are not as isolated as a full VM, which can pose security risks if an attacker exploits vulnerabilities in the kernel or improperly configured permissions.
  • Compatibility Issues – Some applications that expect a full OS environment may not work well inside an LXC container.
  • Limited System Control – You don’t have complete control over kernel settings like you would in a VM.

Virtual Machines: Full System Isolation

Virtual machines in Proxmox use KVM (Kernel-based Virtual Machine) technology to provide a fully virtualized system. Each VM runs its own operating system with its own kernel, making it functionally identical to a physical machine.

When to Use Virtual Machines:

  • Multiple Applications Working Together – If you need to run a system with multiple interacting services, a VM provides a fully isolated environment.
  • Custom Kernel or OS Requirements – If your application requires a specific kernel version or a non-Linux operating system (e.g., Windows or BSD), a VM is the way to go.
  • Strict Security Requirements – Since VMs have strong isolation from the host system, they provide better security for untrusted workloads.
  • Compatibility – Any software that runs on a physical machine will run in a VM without modification.

Considerations for Virtual Machines:

  • Higher Resource Usage – VMs require more CPU, RAM, and disk space compared to containers.
  • Slower Start Times – Because they emulate an entire system, VMs take longer to boot up.
  • More Maintenance – You’ll need to manage full OS installations, updates, and security patches for each VM separately.

Final Thoughts: When to Choose LXC Containers vs. Virtual Machines in Proxmox

In general, if you need to run a single application in isolation, or if your application is only available as a Docker image, an LXC container is the better choice. Containers are lightweight, fast, and efficient. However, if you’re running a more complex system with multiple interacting applications, need complete OS independence, or require strong isolation, a VM is the better solution.

Proxmox makes it easy to work with both LXC and VMs, so understanding your workload’s needs will help you choose the right tool for the job. By leveraging the strengths of each, you can optimize performance, security, and resource usage in your environment.

Importing a QCOW2 Image in Proxmox
https://www.techopt.io/servers-networking/importing-a-qcow2-image-in-proxmox

If you’re setting up a virtual machine with a QCOW2 image in Proxmox, you might feel overwhelmed at first. However, understanding the steps can make it a manageable task. This guide will explain how to import a QCOW2 image into Proxmox, including uploading the file using SFTP via the command line.

Working with QCOW2 in Proxmox setups involves a few terminal commands and some configuration in the Proxmox web interface, but it’s straightforward once you’ve done it once.

Steps to Import a QCOW2 Image in Proxmox

1. Access Your Proxmox Terminal

Log in to the terminal of your Proxmox host. This can be done directly or via SSH. The terminal is essential for transferring and managing files during the setup process.

2. Prepare the QCOW2 Storage Directory

Store QCOW2 images in a dedicated directory for better organization. Proxmox doesn’t create this directory by default, so you may need to set it up. A good place to store QCOW2 images is in /var/lib/vz/template/qcow.

  • Check if the directory exists:

    ls /var/lib/vz/template/qcow

  • If it doesn’t exist, create it:

    mkdir -p /var/lib/vz/template/qcow

3. Upload the QCOW2 Image Using SFTP

Uploading the QCOW2 image requires an SFTP connection. Here’s how to do it using the command line:

  1. Open a terminal on your local machine.
  2. Use the sftp command to connect to your Proxmox host. Replace user with your Proxmox username and host with the IP address of your Proxmox server:

    sftp user@host

  3. Navigate to the directory where you want to upload the image:

    cd /var/lib/vz/template/qcow

  4. Upload the QCOW2 file from your local system:

    put /path/to/your_image.qcow2

    Replace /path/to/your_image.qcow2 with the actual path to the QCOW2 file on your local machine.
  5. Exit the SFTP session with bye.
uploading qcow2 images to proxmox with sftp

4. Create a New Virtual Machine

Now, create a VM in Proxmox using the web interface:

  1. Log in to the Proxmox web interface.
  2. Select Create VM and configure the basic settings, such as the name and operating system. Make sure to select Do not use any media on the OS tab.
  3. On the Disks tab, leave all settings at their default values:
    • The default settings work because QCOW2 images already define their initial drive size. If needed, you can expand the drive size later.
    • Select any bus type for now, as you will replace this with the QCOW2 disk.
Do not use any media when uploading qcow2 image to proxmox

5. Import the QCOW2 Image

After creating the VM, use the qm importdisk command to import the QCOW2 image to the VM.

  1. Identify the VM’s ID from the Proxmox web interface (e.g., 120).
  2. Run the following command in the Proxmox terminal:

    qm importdisk 120 /var/lib/vz/template/qcow/your_image.qcow2 local

This imports the QCOW2 image as an unused disk linked to the VM.

Make sure to replace the VM ID, image location and storage name with the values for your system.
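If you’re not sure what your storage is called, you can list the storages Proxmox knows about; the name in the first column is what goes at the end of the importdisk command:

pvesm status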

Here is a sample output from the qm importdisk command:

importing disk '/var/lib/vz/template/qcow/haos_ova-14.1.qcow2' to VM 120 ...
transferred 0.0 B of 32.0 GiB (0.00%)
transferred 396.5 MiB of 32.0 GiB (1.21%)
transferred 737.3 MiB of 32.0 GiB (2.25%)
transferred 1.1 GiB of 32.0 GiB (3.36%)
transferred 1.4 GiB of 32.0 GiB (4.47%)
transferred 1.8 GiB of 32.0 GiB (5.59%)
transferred 2.1 GiB of 32.0 GiB (6.70%)
transferred 2.5 GiB of 32.0 GiB (7.81%)
transferred 2.9 GiB of 32.0 GiB (8.92%)
...
transferred 20.9 GiB of 32.0 GiB (65.44%)
transferred 21.3 GiB of 32.0 GiB (66.56%)
transferred 21.7 GiB of 32.0 GiB (67.67%)
transferred 22.0 GiB of 32.0 GiB (68.78%)
transferred 22.4 GiB of 32.0 GiB (69.89%)
transferred 22.7 GiB of 32.0 GiB (71.01%)
transferred 32.0 GiB of 32.0 GiB (100.00%)
unused0: successfully imported disk 'VMs:vm-120-disk-3'

In this case, the import created a new 32.0 GiB disk, as defined by the image, which we can now attach to the VM.

6. Attach the Disk to the VM

Once imported, attach the QCOW2 image to the VM:

  1. In the Proxmox web interface, go to the VM and select Hardware.
  2. Find the unused disk and attach it as a VirtIO Block Device or a SATA device (if your guest OS doesn’t support QEMU agent and VirtIO drivers).
  3. Optionally, remove or detach the original disk created during VM setup if it’s no longer needed.
Add the unused disk created from the qcow2 image in proxmox

7. Update the Boot Order

Ensure the VM boots from the newly attached QCOW2 image:

  1. Go to the Options tab in the VM’s settings.
  2. Set the newly attached VirtIO disk as the primary boot device.
Change the primary boot entry to the imported qcow2 image attached drive

8. Start the VM

Boot up the VM. If everything is set up correctly, the system should load from the QCOW2 image!

Remarks

  • When creating the VM, default disk settings are sufficient since the QCOW2 image specifies its own initial drive size. You can expand the size later if needed (see the example after this list).
  • The sftp command is a straightforward way to upload the QCOW2 file. If you prefer a graphical interface, tools like FileZilla can achieve the same result.
  • Use the /var/lib/vz/template/qcow directory to keep QCOW2 images organized and accessible for future use. You can use any directory, but this is an easy place to remember since it’s with other templates.
  • You can also use any other method to upload your QCOW2 images to this directory, or even download them to this directory directly with a tool such as wget.
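For example, expanding the imported disk later is typically a single command with qm resize (a sketch assuming VM ID 120 and the disk attached as virtio0; adjust both for your setup):

qm resize 120 virtio0 +8G

Note that this only grows the virtual disk; you still need to extend the partition and filesystem inside the guest.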
