Tutorial Post Tag - TechOpt.io
https://www.techopt.io/tag/tutorial
Programming, servers, Linux, Windows, macOS & more

GPU Passthrough to Proxmox LXC Container for Plex (or Jellyfin)
https://www.techopt.io/servers-networking/gpu-passthrough-to-proxmox-lxc-container-for-plex-or-jellyfin
Sun, 24 Aug 2025

The post GPU Passthrough to Proxmox LXC Container for Plex (or Jellyfin) appeared first on TechOpt.

When I added an Intel Arc GPU to my Proxmox server for Plex hardware transcoding, I quickly realized there isn’t much solid documentation on how to properly passthrough a GPU to a Proxmox LXC container. Most guides focus on VMs, not LXCs. After some trial and error, I figured out a process that works. These same steps can also apply if you’re running Jellyfin inside an LXC.

In this guide, I’ll walk you through enabling GPU passthrough to a Plex (or Jellyfin) LXC in Proxmox, step by step.

Step 1: Enabling PCI(e) Passthrough in Proxmox

The first step is enabling PCI passthrough at the Proxmox host level. I mostly followed the official documentation here: Proxmox PCI Passthrough Wiki.

I’ll summarize what should be done below.

Enable IOMMU in BIOS

Before you continue, enable IOMMU (Intel VT-d / AMD-Vi) in your system’s BIOS. This setting lets GPUs pass directly through to containers or VMs.

Each BIOS is different, so if you’re not sure, check your motherboard’s manual.

Enable IOMMU Passthrough Mode

Not all hardware supports IOMMU passthrough, but if yours does you’ll see a big performance boost. Even if your system doesn’t, enabling it won’t cause problems, so it’s worth turning on.

Edit the GRUB configuration with:

nano /etc/default/grub

Locate the GRUB_CMDLINE_LINUX_DEFAULT line and add:

iommu=pt

Intel vs AMD Notes

  • On Intel CPUs with Proxmox older than v9 (kernel <6.8), also add: intel_iommu=on
  • On AMD CPUs, IOMMU is enabled by default.
  • On Intel CPUs with kernel 6.8 or newer (Proxmox 8 updated or Proxmox 9+), you don’t need the intel_iommu=on parameter.

With an AMD CPU on Proxmox 9, my file looked like this in the end:

[Screenshot: GRUB config with iommu=pt added for GPU passthrough to a Proxmox LXC]
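For reference, here’s a minimal example of the resulting line. This is a sketch assuming only the default quiet parameter was present; keep any parameters your file already has:

```shell
# /etc/default/grub (excerpt) — existing parameters stay, iommu=pt is appended
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"
```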

Save the file by pressing CTRL+X, then Y to confirm, and Enter to write the file.

Then update grub by running:

update-grub

Load VFIO Kernel Modules

Next, we need to load the VFIO modules so the GPU can be bound for passthrough. Edit the modules file with:

nano /etc/modules

Add the following lines:

vfio
vfio_iommu_type1
vfio_pci

My file looked like this in the end:

[Screenshot: /etc/modules with the VFIO kernel modules added]

Save and exit, then update initramfs:

update-initramfs -u -k all

Reboot and Verify

Reboot your Proxmox host and verify the modules are loaded:

lsmod | grep vfio

You should see the vfio modules listed. This is what my output looks like:

[Screenshot: lsmod output showing the vfio kernel modules loaded]

If you don’t get any output, the kernel modules have not loaded correctly and you probably forgot to run the update-initramfs command above.

To double-check IOMMU is active, run:

dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

Depending on your hardware, you should see confirmation that IOMMU or Directed I/O is enabled:

[Screenshot: dmesg output confirming IOMMU is active]

Step 2: Finding the GPU Renderer Device Path

Now that PCI passthrough support is enabled, the next step is to figure out the device path of the renderer for the GPU you want to pass through.

Run the following on your Proxmox host:

ls /dev/dri

This will print all the detected GPUs. For example, my output looked like this:

by-path  card0  card1  renderD128  renderD129

If you only have a single GPU, you can usually assume it will be something like renderD128. But if you have multiple GPUs (as I did), you’ll need to identify which renderer belongs to your Intel Arc card.

First, run:

lspci

This will list all PCI devices. From there, I found my Intel Arc GPU at 0b:00.0:

[Screenshot: lspci output showing the Intel Arc GPU]

Next, run:

ls -l /dev/dri/by-path/

My output looked like this:

lrwxrwxrwx 1 root root  8 Aug 24 11:54 pci-0000:04:00.0-card -> ../card0
lrwxrwxrwx 1 root root 13 Aug 24 11:54 pci-0000:04:00.0-render -> ../renderD129
lrwxrwxrwx 1 root root  8 Aug 24 11:54 pci-0000:0b:00.0-card -> ../card1
lrwxrwxrwx 1 root root 13 Aug 24 11:54 pci-0000:0b:00.0-render -> ../renderD128

From this, I confirmed that my Intel Arc GPU was associated with renderD128. That means the full device path I need to pass to my LXC container is:

/dev/dri/renderD128
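If you want to script this mapping, the by-path symlinks can be resolved directly. Here’s a small sketch; render_for_slot is a hypothetical helper name, and the optional directory argument exists only so the logic can be exercised against a mock tree:

```shell
#!/bin/sh
# Print the render node for a given PCI slot (e.g. "0b:00.0") by resolving
# the corresponding /dev/dri/by-path symlink.
render_for_slot() {
  slot="$1"
  dri_dir="${2:-/dev/dri/by-path}"
  readlink -f "${dri_dir}/pci-0000:${slot}-render"
}

# Example on my host: render_for_slot 0b:00.0 prints /dev/dri/renderD128
```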

Step 3: Passthrough GPU Device to the Plex LXC Container

Now that we know the correct device path, we can pass it through to the LXC container.

  1. Stop the container you want to pass the GPU into.
  2. In the Proxmox web UI, select your Plex LXC container.
  3. Go to Resources.
  4. At the top, click Add → Device Passthrough.
    [Screenshot: navigating to the Device Passthrough menu]
  5. In the popup, set Device Path to the GPU renderer path you identified in Step 2 (in my case, /dev/dri/renderD128).
  6. In the Access mode in CT field, type: 0666
    [Screenshot: adding the passthrough device to the LXC config]
  7. Click Add.
  8. Once added, you can start the container again.
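Under the hood, these UI steps should end up as a dev[n] entry in the container’s config file. A sketch of the equivalent entry, based on the documented dev[n] syntax in Proxmox VE 8.2 and newer (101 is a placeholder container ID):

```
# /etc/pve/lxc/101.conf (excerpt) — equivalent of steps 4–7 above
dev0: /dev/dri/renderD128,mode=0666
```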

A Note About Permissions

Normally, you would configure a proper UID or GID in CT for the render group inside the container so only that group has access. However, in my testing I wasn’t able to get the GPU working correctly with that method.

Using 0666 permissions allows read/write access for everyone. Since this is a GPU device node and not a directory containing files, I’m not too concerned, but it’s worth noting for anyone who takes their Linux permissions very seriously.

Step 4: Installing GPU Drivers Inside the LXC Container

With the GPU device passed through, the container now needs the proper drivers installed. This step varies depending on the Linux distribution you’re running inside the container.

Some distros may already include GPU drivers by default. But if you reach Step 5 and don’t see your GPU as an option in Plex (or Jellyfin), chances are you’re missing the driver inside your container.

In my case, I’m using a Debian container, which does not include Intel Arc drivers by default. Here’s what I did:

  1. Edit your apt sources to enable the non-free repo: nano /etc/apt/sources.list
  2. Add non-free to the end of each Debian repository line:
    [Screenshot: sources.list after adding the non-free Debian repository]
  3. Refresh apt sources: apt update
  4. Install the Intel Arc driver package: apt install intel-media-va-driver-non-free
  5. Restart the container.
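If you prefer to script step 2, a sed one-liner can append the component to each repository line. This is a sketch (add_nonfree is my own helper name); review the result against a copy before touching your real sources.list, and note that on Debian 12 you may also want non-free-firmware:

```shell
# Append " non-free" to every "deb ... main ..." line of a sources file.
add_nonfree() {
  sed -i 's/^\(deb .*main.*\)$/\1 non-free/' "$1"
}

# Usage, against a copy first:
# cp /etc/apt/sources.list /tmp/sources.list && add_nonfree /tmp/sources.list
```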

For Debian, I followed the official documentation here: Debian Hardware Video Acceleration Wiki.

Note: These steps will vary depending on your GPU make, model and container distribution. Make sure to check the official documentation for your hardware and distro.

Step 5: Enabling Hardware Rendering in Plex

Now that your GPU and drivers are ready, the final step is to enable hardware transcoding inside Plex.

  1. Open the Plex Web UI.
  2. Go to Settings (top-right corner).
  3. In the left sidebar, scroll down and click Transcoder under Settings.
  4. Make sure the following options are checked:
    • Use hardware acceleration when available
    • Use hardware-accelerated video encoding
  5. Under Hardware transcoding device, you should now see your GPU. If not, double-check Step 4. In my case, it showed up as Intel DG2 [Arc A380].
  6. Select your GPU and click Save Changes.
[Screenshot: enabling hardware encoding in the Plex UI]

Testing Hardware Transcoding

To verify that hardware transcoding is working:

  1. Play back any movie or TV show.
  2. In the playback settings, change the quality to a lower resolution that forces Plex to transcode:
    [Screenshot: forcing Plex to transcode by selecting a lower quality]
  3. While it’s playing, go to Settings → Dashboard.
  4. If the GPU is handling transcoding, you’ll see (hw) beside the stream being transcoded:
    [Screenshot: Plex dashboard showing (hw) beside the transcoded stream]

That’s it! You’ve successfully set up GPU passthrough in Proxmox LXC for Plex. These same steps should also work for Jellyfin with minor adjustments for the Jellyfin UI.

How to Run Tails OS in VirtualBox
https://www.techopt.io/linux/how-to-run-tails-os-in-virtualbox
Mon, 02 Jun 2025

If you want to use Tails OS without a USB stick and without a separate computer, running it inside VirtualBox is a great option. VirtualBox is open-source, just like Tails OS, making it an ideal match for users who value transparency and privacy. This aligns perfectly with the Tails OS philosophy of using free and open technologies to ensure security. In this guide, you’ll learn how to run Tails OS in VirtualBox in a few easy steps.

You can test or use Tails OS securely on your computer with this method, and nothing is saved after you power off the virtual machine.

Step 1: Download Tails OS ISO

First, download the latest Tails OS ISO file from the official Tails website.

[Screenshot: downloading the ISO for Tails OS in VirtualBox]

The ISO is found under the Burning Tails on a DVD section. Even though you aren’t burning a DVD, you need to use the ISO file for VirtualBox because VirtualBox boots operating systems from CD/DVD images.

The default Tails OS download on the homepage is a .img file for USB sticks, which supports persistence. However, VirtualBox cannot use persistence and does not support virtual USB drives via .img files, so always choose the ISO.

Step 2: Create a New Virtual Machine for Tails OS in VirtualBox

Open VirtualBox and click on New to create a new virtual machine.

Name and Operating System

Give your VM a name, such as Tails OS.

Select the Tails OS ISO file you downloaded as the ISO Image.

Ensure the following settings are automatically detected, and set them if not:

  • Set the type to Linux
  • Set the subtype to Debian
  • Set the version to Debian (64-bit)

Be sure to check Skip Unattended Installation. This is very important for Tails OS because you want to prevent VirtualBox from creating a default user account or setting a password. Tails OS boots directly into its own secure environment by design.

[Screenshot: Tails OS VM name and operating system settings in VirtualBox]

Hardware

The technical minimum for Tails OS is 2048 MB RAM and 1 CPU core. However, for a smoother experience, I recommend:

  • Setting the base memory (RAM) to 4096 MB
  • Setting processors to 2 CPU cores or more, if your system allows
[Screenshot: hardware settings when creating the Tails OS virtual machine]

Hard Disk

Under the Hard Disk section, select Do Not Add a Virtual Hard Disk.

[Screenshot: Do Not Add a Virtual Hard Disk option]

This setup ensures nothing can be saved inside the Tails OS virtual machine. When you stop the VM, everything is erased, keeping your session private and secure.

When you’re happy with your virtual machine settings, click Finish.
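If you’d rather script the VM creation, the same settings can be sketched with VBoxManage. The VM name and ISO path below are examples, and creating the VM this way performs no unattended installation:

```shell
# Create and register the VM, size it, and attach the Tails ISO as a DVD
VBoxManage createvm --name "Tails OS" --ostype Debian_64 --register
VBoxManage modifyvm "Tails OS" --memory 4096 --cpus 2
VBoxManage storagectl "Tails OS" --name "IDE" --add ide
VBoxManage storageattach "Tails OS" --storagectl "IDE" --port 0 --device 0 \
  --type dvddrive --medium ~/Downloads/tails.iso
```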

Step 3: Boot and Use Tails OS in VirtualBox

Start your virtual machine. Tails OS will boot from the ISO.

Set your language and keyboard layout settings, and click Start Tails.

[Screenshot: Tails OS language and keyboard settings]

Tails OS will ask you how you want to connect to the Tor network. If you’re not sure, I suggest choosing Connect to Tor automatically and clicking Connect to Tor.

[Screenshot: Connect to Tor settings]

That’s it! Launch the Tor browser and start browsing the web anonymously from Tails OS.

[Screenshot: Tails OS running in VirtualBox]

Now you can use Tails OS in VirtualBox safely, knowing that all your activities are wiped when you shut down the VM!

Remarks

  • Nothing will be saved when you shut down the virtual machine. Tails OS booted from the ISO does not support persistence. If you need persistence (the ability to save files or settings between sessions), you should use the default USB installation method instead.
  • Do not install VirtualBox Guest Additions. Guest Additions can expose parts of your host system to the virtual machine, which goes against Tails OS’s privacy goals. Besides, when you power off the VM, it will wipe Guest Additions anyway.
  • Keep the ISO file. Do not delete the ISO file you downloaded, because your virtual machine will need it every time it boots Tails OS.
  • Use open-source virtualization software. Tails OS recommends using open-source virtualization tools like VirtualBox or KVM to run Tails OS because their transparency and auditability align with Tails OS’s privacy philosophy. Proprietary alternatives (such as VMware) are not as easily audited for privacy.
  • The Tails OS documentation advises against using VirtualBox because it gets stuck at 800×600 resolution. I’ve found this advice seems outdated. You can set a variety of screen resolutions from the Tails OS display settings menu by right-clicking the desktop. VirtualBox runs Tails OS very well and is a much easier open-source alternative to KVM.

If you prefer a video guide, you can follow along with the video below:

Setting a Static IP Address and DNS in Ubuntu Server
https://www.techopt.io/linux/setting-a-static-ip-address-and-dns-in-ubuntu-server
Wed, 09 Apr 2025

If you’re running Ubuntu Server and need to configure a static IP address, you might have seen guides mentioning /etc/network/interfaces or resolvconf. However, these methods are outdated. The recommended way today is to use netplan.

In this guide, you’ll discover how to set a static IP in Ubuntu and define custom DNS settings, including nameservers and search domains. Additionally, we’ll explain how to keep DHCP while specifying DNS servers for better control.

Why Should You Set a Static IP on Ubuntu Server?

Assigning a static IP ensures your server retains the same address across reboots. This reliability is essential for servers running web services, databases, or acting as internal network resources.

Step 1: Identify Your Ubuntu Server Network Interface

To begin, list your network interfaces:

ip link

You’ll usually see names like eth0, ens33, or enp0s3.

Step 2: Edit the Netplan Configuration

Netplan configurations are stored in /etc/netplan/. View the files with:

ls /etc/netplan/

Next, edit the YAML file (replace with your actual file name):

sudo nano /etc/netplan/50-cloud-init.yaml

Here’s an example static IP configuration for Ubuntu Server:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses:
        - 192.168.1.100/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        search: [yourdomain.local]
        addresses:
          - 1.1.1.1
          - 1.0.0.1

Replace eth0 with your interface name. Adjust the IP, gateway, and DNS to match your network.

Important: Some older guides might mention using the gateway4 parameter. However, gateway4 has been deprecated. It’s better to use the routes section, as demonstrated above, for better compatibility with future Ubuntu versions.

Step 3: Apply the Static IP Ubuntu Configuration

Once you have finished editing, apply the changes with:

sudo netplan apply
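If you’re connected over SSH, netplan try is a safer alternative: it applies the config and rolls back automatically (by default after 120 seconds) unless you confirm it, so a typo can’t lock you out:

```shell
sudo netplan try   # press Enter to keep the new config, or let it roll back
```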

To confirm your new settings, run:

ip a

This command will display your active IP address. To confirm your DNS configuration is working, you can run:

sudo apt update

This will refresh the package lists from the configured repositories, and as long as it succeeds you know that your DNS configuration is working.

Alternative: Keep DHCP but Configure DNS in Ubuntu Server

If you prefer to use DHCP for IP assignment but still want to control DNS servers, use this configuration:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: yes
      dhcp4-overrides:
        use-dns: no
      nameservers:
        search: [yourdomain.local]
        addresses:
          - 1.1.1.1
          - 1.0.0.1

This method allows the server to receive its IP address from the DHCP server, while your specified DNS servers handle name resolution.

Conclusion

To sum up, netplan is the modern, recommended tool for configuring a static IP Ubuntu setup. You should avoid older methods like resolvconf or editing /etc/network/interfaces, as they are deprecated in the latest Ubuntu versions. Whether you need a full static IP or simply want to control your DNS while keeping DHCP, netplan makes the process clear and manageable.

If you would like to learn about all the configuration options for netplan, you can read the official Netplan documentation.

If you would prefer to view this guide in video form, I’ve created a video explaining these instructions on the TechOpt.io YouTube channel, which you can watch below:

Reading my Water Meter in Home Assistant with USB SDR
https://www.techopt.io/smart-home/reading-my-water-meter-in-home-assistant-with-usb-sdr
Sat, 29 Mar 2025

One of the most interesting things I’ve done recently is integrating my residential city water meter into Home Assistant using a USB Software Defined Radio (SDR). I’ve been exploring ways to import utility meter data, like gas, water, and electricity into Home Assistant, and I discovered many of these meters emit signals over radio frequency.

Using RTL-SDR and rtl_433 for Water Meter Home Assistant Integration

After reading success stories online, I picked up the Nooelec RTL-SDR v5 Bundle from Amazon. This little USB SDR dongle can tune from 100kHz to 1.75GHz, which is perfect for picking up utility meter signals.

My first attempt was to plug the SDR into my Raspberry Pi 4 running Home Assistant and use the rtl_433 addon. Unfortunately, I ran into power issues and odd addon errors, likely because the SDR dongle was drawing too much power alongside everything else I already had plugged in.

Setting Up a Dedicated SDR Host for Home Assistant

To fix this, I decided to run the SDR on a dedicated Raspberry Pi. I had an old, original Pi 1 Model B lying around. With a decent 2A power supply and a wireless N dongle for network connectivity, I installed Raspbian Lite and the rtl_433 tool.

I experimented with both 433 MHz and 915 MHz frequencies, using the coiled antennas included in the kit. The 900 MHz antenna ended up being the winner. On the 915 MHz band, I finally started seeing data in the logs:

[Screenshot: example data in the rtl_433 logs]

Identifying My Water Meter in Home Assistant Logs

One key data point stood out: a field called Consumption Data showing a value around 168000. After comparing this with the value on my actual Neptune T-10 water meter (used by the City of Ottawa), I realized I was picking up my own water meter’s signal!

I also saw signals from neighboring meters, but by monitoring the data for a few hours, I confirmed which one was mine. The value updated every 15 to 30 minutes.

Fixing USB Stability for Water Meter Data Collection

One hiccup: after several hours, rtl_433 would crash with libusb errors. Unplugging and replugging fixed it temporarily, but the long-term solution was to use a powered USB hub (4A supply) to give the SDR the juice it needed. I also ditched the USB extension cable from the kit and plugged the SDR directly into the hub. That seemed to fix the issue.

The nice thing with this hub is that it can also power the Pi, so there’s just one power cable to the hub: a micro-USB cable runs from the hub to the Pi for power, and the RTL-SDR and Wi-Fi dongle plug into the hub’s USB ports. You can see the final setup up close in the picture below:

[Photo: final setup with the USB hub, SDR, Wi-Fi dongle and Pi connected]

Publishing Water Meter Data to Home Assistant via MQTT

With the setup stable, I configured rtl_433 to output data to MQTT. Since I already had the Mosquitto addon running in Home Assistant, I pointed rtl_433 to it and monitored the output using MQTT Explorer. I found the data for my meter which matched up to the logs I was seeing directly on the Pi:

[Screenshot: water meter consumption data in MQTT Explorer]

Creating a Systemd Service for Water Meter Integration

To make everything persistent, I created a systemd service at /etc/systemd/system/ha-sdr-915M.service:

[Unit]
Description=rtl_433 on 915M SDR
After=network.target

[Service]
ExecStart=/usr/local/bin/ha-sdr-915M.sh
Restart=always
WorkingDirectory=/usr/local/bin

[Install]
WantedBy=multi-user.target

The script at /usr/local/bin/ha-sdr-915M.sh:

#!/bin/bash

source /etc/ha-sdr.env

LOGFILE="/var/log/ha-sdr/ha-sdr-915M.log"

/usr/bin/rtl_433 -f 915M -F mqtt://homeassistant.domain.com:1883,user=$MQTT_USER,pass=$MQTT_PASS -F kv 2>&1 | while IFS= read -r line
do
    echo "$(date '+%Y-%m-%d %H:%M:%S') $line"
done >> "$LOGFILE"

I enabled the service by running systemctl daemon-reload and systemctl enable ha-sdr-915M.service.

Adding a Water Meter MQTT Sensor in Home Assistant

In my Home Assistant YAML configuration, I added an MQTT sensor (shown here in the current mqtt: sensor format):

mqtt:
  sensor:
    - name: "Water Meter Consumption Data"
      object_id: water_meter_consumption_data
      state_topic: "rtl_433/pisdrfrontoh/devices/ERT-SCM/33391039/consumption_data"
      unit_of_measurement: "gal"
      state_class: total_increasing
      device_class: water

I noticed the values from my meter were in gallons, even though I’m in Canada where cubic meters or litres are common. To convert to litres, I added a template sensor:

template:
  - sensor:
      - name: "Water Meter Consumption (Litres)"
        state: >
          {{ (states('sensor.water_meter_consumption_data') | float(0) * 3.78541178) | round(2) }}
        unit_of_measurement: "L"
        device_class: water
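As a quick sanity check of that conversion factor (1 US gallon = 3.78541178 L), a raw reading around the 168000 gal I saw earlier works out to roughly 636,000 L:

```shell
# Convert 168000 US gallons to litres, rounded to two decimals
awk 'BEGIN { printf "%.2f\n", 168000 * 3.78541178 }'
# prints 635949.18
```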

Setting Up a Utility Meter for Daily Water Tracking

Finally, I created a utility meter entity to track daily water usage:

utility_meter:
  daily_water:
    source: sensor.water_meter_consumption_litres
    cycle: daily

This entity allowed me to graph and display daily water usage data directly from my water meter in Home Assistant:

[Screenshot: daily water usage graph in litres in Home Assistant]

Final Thoughts on Monitoring My Water Meter with Home Assistant

This project took a few days of tuning and monitoring, but I’m thrilled with the results. For now, I’m only picking up water meter data, but I’m hopeful I’ll find more signals soon.

If you’re thinking about tracking your water meter in Home Assistant, using an SDR like the Nooelec RTL-SDR v5 and rtl_433 software is a great DIY approach. The insight into water usage is already super useful!

How to Write an ISO to USB Drive in Linux
https://www.techopt.io/linux/how-to-write-an-iso-to-usb-drive-in-linux
Sun, 16 Mar 2025

Creating a bootable USB drive from an ISO in Linux is a straightforward process using built-in command-line tools. Most Linux distributions include fdisk and dd by default, making this method widely applicable. In this guide, we’ll walk through how to safely write an ISO to USB in Linux using dd.

Step 1: Identify the USB Drive

Before writing the ISO to USB in Linux, you need to determine the correct device name for your USB drive. Plug in the USB drive and run:

sudo fdisk -l

This command lists all storage devices connected to your system. Look for your USB drive by checking the model, size and type. It will usually be listed as /dev/sdX (e.g., /dev/sdb) or /dev/diskX (on some systems).

In my case, this happens to be /dev/sdc, as you can see in the screenshot below:

[Screenshot: finding the USB drive in the fdisk output]

⚠ Warning: Be very careful when selecting the device name, as writing to the wrong disk will result in data loss!

Step 2: Write the ISO to the USB Drive

Once you’ve identified the correct device, use the dd command to write the ISO to the USB drive:

sudo dd if=/path/to/iso.iso of=/dev/sdX bs=1M

Replace /path/to/iso.iso with the actual path to your ISO file and /dev/sdX with your USB drive’s identifier.

Explanation of Parameters:

  • if=/path/to/iso.iso – Input file (the ISO image).
  • of=/dev/sdX – Output file (your USB drive). Do not include a partition number (e.g., /dev/sdX1), as you need to write to the whole disk.
  • bs=1M – Block size (1 MiB). Without this, dd defaults to a 512-byte block size, which can significantly slow down the process.

Step 3: Monitor Progress

When the dd command completes, it will output a summary similar to this:

123456789+0 records in
123456789+0 records out
123456789 bytes (X GB) copied, XX.XXXX s, XX.X MB/s

This confirms that all data has been written successfully to the USB drive:

[Screenshot: dd output after writing the ISO to USB in Linux]

The dd command does not provide real-time progress updates by default. On some distributions, you can add status=progress to see the write progress:

sudo dd if=/path/to/iso.iso of=/dev/sdX bs=1M status=progress

Step 4: Safely Eject the USB Drive

Once the process is complete, ensure all data is written by running:

sudo sync

Then, safely remove the USB drive:

sudo eject /dev/sdX
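To double-check the write, you can compare the image against the leading bytes of the drive. A hedged sketch (verify_write is my own helper name; replace /dev/sdX with your device, and note the device is usually larger than the ISO, hence the head):

```shell
# Compare an image against the first image-sized chunk of a target device
# (or file). Succeeds when the SHA-256 hashes match.
verify_write() {
  img="$1"
  dev="$2"
  bytes=$(stat -c %s "$img")
  [ "$(head -c "$bytes" "$dev" | sha256sum | cut -d' ' -f1)" = \
    "$(sha256sum "$img" | cut -d' ' -f1)" ]
}

# Usage: verify_write /path/to/iso.iso /dev/sdX && echo "write verified"
```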

Remarks

  • For bigger ISO images, the process will take longer.
  • You should use the root device name (e.g., /dev/sdX) and not a partition (e.g., /dev/sdX1).
  • For all options you can use with dd, you can consult the dd man page.

Now, your USB drive is ready to boot into the written ISO image!

If you would like a video guide, I also have one available:

How to Create a Custom Warning in Zabbix
https://www.techopt.io/servers-networking/how-to-create-a-custom-warning-in-zabbix
Sun, 16 Mar 2025

Zabbix actively monitors business infrastructure, but sometimes the built-in alerts do not cover all needs. In my case, I wanted to create a custom warning in Zabbix if a Btrfs snapshot hadn’t been completed in the last 24 hours on my Linux server. With this setup, I get an alert whenever my snapshot process fails or experiences a delay.

Why Use Custom Warnings in Zabbix?

Zabbix provides various alerts by default, but I needed a custom warning to:

  • Ensure Btrfs snapshots are being created regularly
  • Get notified if the snapshot process fails
  • Integrate this check into my existing monitoring system

There are numerous use cases for custom warnings in Zabbix. They provide greater flexibility, allowing you to adapt your monitoring to your specific needs.

Setting Up a Custom Warning in Zabbix

1. Create a Custom Script

First, I created a dedicated directory for custom scripts in Zabbix on my Linux host:

mkdir -p /etc/zabbix/scripts

Then, I created the script file and opened it for editing:

nano /etc/zabbix/scripts/check_btrfs_snapshot.sh

I added the following script to check if a Btrfs snapshot exists within the last 24 hours:

#!/bin/bash

SNAPSHOT_DIR="/mnt/btrfs/.snapshots"  # Adjust to your snapshot location
THRESHOLD=$(date -d '24 hours ago' +%s)

# -mindepth 1 keeps $SNAPSHOT_DIR itself out of the results
latest_snapshot=$(find "$SNAPSHOT_DIR" -mindepth 1 -maxdepth 1 -type d -printf '%T@ %p\n' | sort -nr | head -n1 | awk '{print $1}')

if [[ -z "$latest_snapshot" ]]; then
    echo 0  # No snapshots found
elif (( $(echo "$latest_snapshot >= $THRESHOLD" | bc -l) )); then
    echo 1  # Recent snapshot exists
else
    echo 0  # No recent snapshot
fi

After saving the script, I made it executable:

chmod +x /etc/zabbix/scripts/check_btrfs_snapshot.sh

2. Add the Script to Zabbix Agent 2

I’m using Zabbix Agent 2 for advanced features. Since Zabbix Agent 2 supports system.run, I modified zabbix_agent2.conf to add the script:

  1. Open the configuration file: nano /etc/zabbix/zabbix_agent2.conf
  2. Add this line to allow the script: AllowKey=system.run[/etc/zabbix/scripts/check_btrfs_snapshot.sh]
  3. Restart the agent: systemctl restart zabbix-agent2
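Before wiring the item up in the UI, you can confirm the agent accepts the key by querying it from the Zabbix server with zabbix_get (192.0.2.10 is a placeholder for your host’s address):

```shell
zabbix_get -s 192.0.2.10 -k 'system.run[/etc/zabbix/scripts/check_btrfs_snapshot.sh]'
# should print 1 when a recent snapshot exists, 0 otherwise
```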

3. Create a Custom Item in Zabbix

  1. Go to Zabbix Web InterfaceMonitoringHosts
  2. Select the host where Btrfs is monitored (or where you’re adding your custom script) and click Items.
  3. Click Create Item and set your parameters:
    • Name: Btrfs Snapshot in the last 24 hours
    • Type: Zabbix Agent
    • Key: system.run[/etc/zabbix/scripts/check_btrfs_snapshot.sh]
    • Type of information: Text
    • Update interval: 30m
Parameters for custom zabbix item

I also recommend testing your script with the Test option at the bottom of the item form to make sure it runs properly.

4. Set Up a Custom Trigger

  1. Navigate to the Triggers tab.
  2. Click Create Trigger.
  3. Use this expression to trigger a warning if no snapshot exists in the last 24 hours: last(/example-hostname/system.run[/etc/zabbix/scripts/check_btrfs_snapshot.sh])=0
    You can also use the condition builder in the side panel (click Add) to help generate the correct expression.
  4. Set severity (e.g., Warning).
  5. Click Add.
Zabbix custom trigger parameters to create custom warning

Final Thoughts

Setting up this custom warning in Zabbix ensures my Btrfs snapshots are created on schedule. If a snapshot fails, I get a warning immediately, allowing me to troubleshoot the issue before it becomes a bigger problem.

If you’re using Zabbix for monitoring, creating custom warnings like this can significantly improve your ability to track and respond to important system events. Try it out and enhance your monitoring experience!

The post How to Create a Custom Warning in Zabbix appeared first on TechOpt.

]]>
https://www.techopt.io/servers-networking/how-to-create-a-custom-warning-in-zabbix/feed 0
Upgrade Ubuntu 20.04 LTS to 22.04 LTS https://www.techopt.io/linux/upgrade-ubuntu-20-04-lts-to-22-04-lts https://www.techopt.io/linux/upgrade-ubuntu-20-04-lts-to-22-04-lts#respond Sun, 23 Feb 2025 17:24:13 +0000 https://www.techopt.io/?p=814 Ubuntu 20.04 LTS (Focal Fossa) is nearing its end-of-life (EOL) in April 2025. If you’re still running Ubuntu 20.04 LTS on a system, now is a good time to consider upgrading to Ubuntu 22.04 LTS (Jammy Jellyfish). In my case, I have been using Ubuntu 20.04 LTS specifically for running the TP-Link Omada Controller, which […]

The post Upgrade Ubuntu 20.04 LTS to 22.04 LTS appeared first on TechOpt.

]]>
Ubuntu 20.04 LTS (Focal Fossa) is nearing its end-of-life (EOL) in April 2025. If you’re still running Ubuntu 20.04 LTS on a system, now is a good time to consider upgrading to Ubuntu 22.04 LTS (Jammy Jellyfish). In my case, I have been using Ubuntu 20.04 LTS specifically for running the TP-Link Omada Controller, which historically ran best on older LTS versions. Upgrading from Ubuntu 20.04 LTS to 22.04 LTS ensures continued security updates, bug fixes, and stability while staying on a supported version.

Why Upgrade Now?

Ubuntu LTS (Long-Term Support) versions receive five years of updates, meaning 20.04 will stop receiving standard support in April 2025. While Ubuntu does offer Extended Security Maintenance (ESM) for an additional five years, that requires an Ubuntu Pro subscription, which costs money and may not be ideal for all users.

Checking Software Compatibility Before Upgrading

One of the main reasons I stayed on Ubuntu 20.04 was that I run the TP-Link Omada Controller, which historically ran best on older LTS versions. However, Omada now supports Ubuntu 22.04 LTS, making this a good time to upgrade.

If you’re running custom software, self-hosted services, or enterprise applications, check their documentation or forums to confirm compatibility before proceeding.

Steps to Upgrade from Ubuntu 20.04 LTS to 22.04 LTS

To ensure a smooth upgrade, follow these steps:

1. Back Up Your System

Before making major changes, always back up important data. If you’re running a production server, consider creating a full system snapshot using tools like:

  • Timeshift (for desktop users)
  • rsync or tar for manual backups
  • Btrfs snapshots (if you’re using a Btrfs filesystem)

I also backed up my Omada configuration through the web administration page, as well as made a snapshot of the virtual machine in Proxmox.
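For manual backups with tar, a small helper keeps the invocation consistent. This is a rough sketch; the paths in the usage comment are examples, not my actual layout:

```shell
# Minimal sketch of a tar-based directory backup; adjust paths to your system
backup_dir() {
  local src=$1 dest=$2
  # -C archives the directory relative to its parent, so restores land cleanly
  tar -czf "$dest" -C "$(dirname "$src")" "$(basename "$src")"
}

# Example usage (system paths like /etc generally require root):
# backup_dir /etc /mnt/backup/etc-$(date +%F).tar.gz
```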

2. Update Ubuntu 20.04 Fully

Before upgrading, ensure your system is fully up to date. Run the following commands:

sudo apt update && sudo apt upgrade -y
sudo apt full-upgrade -y
sudo apt autoremove --purge -y

After the updates complete, reboot your system:

sudo reboot

3. Start the Upgrade Process

Once back online, use the following command to begin the upgrade:

sudo do-release-upgrade

This will check for a new release and guide you through the upgrade process.
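One prerequisite worth checking: do-release-upgrade only offers the jump from one LTS to the next when the release upgrader is set to track LTS releases. This is the default on LTS installs, but you can confirm it in /etc/update-manager/release-upgrades:

```ini
# /etc/update-manager/release-upgrades
[DEFAULT]
Prompt=lts
```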

4. Follow the On-Screen Prompts

  • The upgrade tool will download necessary packages and warn you about changes.
  • If prompted to replace configuration files, choose the default option unless you’ve manually customized them.
  • When prompted to remove obsolete packages, confirm with Y.
  • The process may take some time depending on your internet speed and system resources.

5. Reboot Into Ubuntu 22.04

After the upgrade is complete, reboot your system:

sudo reboot

Once your system boots up, confirm the upgrade with:

lsb_release -a

This should display Ubuntu 22.04 LTS.

Post-Upgrade Checks

  1. Verify Software Functionality – Ensure your applications and services, like the TP-Link Omada Controller, are running properly.
  2. Check for Remaining Updates – Run: sudo apt update && sudo apt upgrade -y
  3. Remove Old Packages – Clean up unnecessary files: sudo apt autoremove --purge -y

Conclusion

Upgrading from Ubuntu 20.04 LTS to 22.04 LTS is straightforward but requires some preparation. Since Ubuntu 20.04 reaches EOL in April 2025, it’s best to upgrade sooner rather than later to stay secure and supported. If you run software that depends on specific Ubuntu versions (such as the TP-Link Omada Controller), confirm compatibility before proceeding.

The post Upgrade Ubuntu 20.04 LTS to 22.04 LTS appeared first on TechOpt.

]]>
https://www.techopt.io/linux/upgrade-ubuntu-20-04-lts-to-22-04-lts/feed 0
Rollback openSUSE with Btrfs: A Filesystem with Snapshots https://www.techopt.io/linux/rollback-opensuse-with-btrfs-a-filesystem-with-snapshots https://www.techopt.io/linux/rollback-opensuse-with-btrfs-a-filesystem-with-snapshots#respond Sun, 23 Feb 2025 01:00:39 +0000 https://www.techopt.io/?p=801 Rollback openSUSE easily with Btrfs, a powerful copy-on-write (CoW) filesystem that provides advanced features like data integrity, transparent compression, and built-in snapshot capabilities. These features make it particularly useful for system stability and recovery. A key advantage of using Btrfs on openSUSE is its integration with Snapper, a tool that automatically creates snapshots before critical […]

The post Rollback openSUSE with Btrfs: A Filesystem with Snapshots appeared first on TechOpt.

]]>
Rollback openSUSE easily with Btrfs, a powerful copy-on-write (CoW) filesystem that provides advanced features like data integrity, transparent compression, and built-in snapshot capabilities. These features make it particularly useful for system stability and recovery.

A key advantage of using Btrfs on openSUSE is its integration with Snapper, a tool that automatically creates snapshots before critical system changes, such as package installations through YaST or zypper. This allows users to quickly rollback openSUSE if something goes wrong.

While openSUSE is known for its strong Btrfs integration, the filesystem is not exclusive to it. Other Linux distributions, such as Fedora, Ubuntu, and Arch Linux, also support Btrfs, though the level of integration varies.

In this guide, we’ll explore how Btrfs works in openSUSE, how automatic snapshots function, and how to manually create and restore snapshots when needed.

Automatic Snapshots in openSUSE

openSUSE, through Snapper, automatically creates Btrfs snapshots before major system changes. These snapshots act as a safety net, allowing you to rollback openSUSE to a working system state if something breaks.

For example, when installing or updating software via YaST or zypper, a snapshot is taken before the changes are applied. This means that if an update causes issues, you can easily revert to the state before the update.

To list all available snapshots, run:

snapper list

You’ll see a list of snapshots with IDs, timestamps, and descriptions of what triggered them.
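How many snapshots Snapper keeps, and whether it also takes periodic timeline snapshots, is controlled per configuration under /etc/snapper/configs/. An excerpt of the root config, with example values:

```ini
# /etc/snapper/configs/root (excerpt; values are examples)
TIMELINE_CREATE="yes"      # take periodic timeline snapshots
TIMELINE_LIMIT_DAILY="7"   # keep up to 7 daily timeline snapshots
NUMBER_LIMIT="50"          # cap on number-cleanup (package manager) snapshots
```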

Creating a Manual Snapshot

While automatic snapshots provide great protection, there may be times when you want to create a manual snapshot before making significant changes.

To create a new snapshot manually, run:

sudo snapper create --description "Before major change"

This will create a snapshot that you can revert to if necessary. You can confirm its creation by running snapper list.

Rollback openSUSE to a Snapshot

If something goes wrong after an update or system change, rolling back to a previous snapshot is straightforward. You can do this in two ways: using Snapper or from the GRUB boot menu.

Method 1: Rollback via Snapper (Live System)

To rollback openSUSE to a previous snapshot while still inside your running system, first identify the snapshot ID from snapper list, then run:

sudo snapper rollback <snapshot-ID>

For example, if you want to rollback to snapshot 20, you would run:

sudo snapper rollback 20

After rolling back, reboot your system to apply the changes:

sudo reboot

Method 2: Rollback via GRUB

If your system becomes unbootable after an update or change, you can rollback openSUSE from the GRUB menu:

  1. Reboot your computer.
  2. In the GRUB menu, select Advanced options for openSUSE.
  3. Choose Start bootloader from a snapshot.
  4. Select a snapshot from the list and boot into it.
  5. If the system works fine in this snapshot, make it permanent by running sudo snapper rollback, then reboot with sudo reboot.

This will set the selected snapshot as the new baseline.

Btrfs and Snapper Make a Rollback in openSUSE so Easy!

Btrfs, combined with Snapper, provides openSUSE users with a robust and reliable way to manage system changes and roll back when necessary. Automatic snapshots ensure that package updates and system modifications can be easily undone if needed, and manual snapshots give users additional control over their system state.

Although Btrfs is especially well-integrated in openSUSE, it is also available on other Linux distributions like Fedora and Ubuntu. However, openSUSE’s implementation with Snapper makes it one of the most user-friendly and reliable choices for those looking to take full advantage of Btrfs.

The post Rollback openSUSE with Btrfs: A Filesystem with Snapshots appeared first on TechOpt.

]]>
https://www.techopt.io/linux/rollback-opensuse-with-btrfs-a-filesystem-with-snapshots/feed 0
Set Custom Device Icons in Zigbee2MQTT in Home Assistant https://www.techopt.io/smart-home/set-custom-device-icons-in-zigbee2mqtt-in-home-assistant https://www.techopt.io/smart-home/set-custom-device-icons-in-zigbee2mqtt-in-home-assistant#respond Mon, 17 Feb 2025 16:19:01 +0000 https://www.techopt.io/?p=790 Zigbee2MQTT is a fantastic tool for integrating Zigbee devices into Home Assistant, but sometimes the default icons don’t match the specific version of your device. For example, I’m using an IKEA E2204 smart plug (North American version, the TRETAKT), but Zigbee2MQTT displays the European version. To fix this, I set a custom icon, and in […]

The post Set Custom Device Icons in Zigbee2MQTT in Home Assistant appeared first on TechOpt.

]]>
Zigbee2MQTT is a fantastic tool for integrating Zigbee devices into Home Assistant, but sometimes the default icons don’t match the specific version of your device. For example, I’m using an IKEA E2204 smart plug (North American version, the TRETAKT), but Zigbee2MQTT displays the European version. To fix this, I set a custom icon, and in this guide, I’ll show you how to do the same for any device.

Steps to Set Custom Icons in Zigbee2MQTT Home Assistant Add-On

1. Open Your Home Assistant Configuration Folder

You’ll need to access the zigbee2mqtt folder in your Home Assistant config directory. I use the Studio Code Server add-on to do this, but you can use any method you prefer.

  • If using Studio Code Server, navigate to config/zigbee2mqtt/.
Zigbee2MQTT config folder in Home Assistant

2. Create the device_icons Folder

If the device_icons folder doesn’t already exist inside the zigbee2mqtt directory, create it.

  • In Studio Code Server, right-click inside the zigbee2mqtt folder and select New Folder.
  • Name the folder device_icons.
Custom device icons folder Zigbee2MQTT in Home Assistant

3. Add Your Custom Icon

For best results, your icon should be:

  • A PNG file
  • 512×512 pixels
  • Have a transparent background

To add it:

  • Drag and drop the PNG file into the device_icons folder using the Studio Code Server file browser.
  • Alternatively, upload the file using an SCP tool or any other method that works for you.

For my IKEA E2204 smart plug, I named my file:

ikea-tretakt-smart-plug.png

4. Assign the Icon in Zigbee2MQTT

Now that the icon is in place, it’s time to assign it to your device in Zigbee2MQTT.

  1. Open Zigbee2MQTT in Home Assistant.
  2. Find your device in the Devices list and click on its name.
  3. Go to the Settings tab.
  4. Scroll all the way down to the Icon box.
  5. Enter the relative path of your icon, e.g. device_icons/ikea-tretakt-smart-plug.png
  6. Click Submit.
Set custom icon path in Zigbee2MQTT in Home Assistant
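Submitting through the UI saves the icon path into your Zigbee2MQTT device configuration. If you prefer editing the configuration directly, the equivalent entry looks roughly like this (the IEEE address and friendly name below are placeholders):

```yaml
# In zigbee2mqtt/devices.yaml (or the devices section of configuration.yaml)
'0x842e14fffe123abc':
  friendly_name: living_room_plug
  icon: device_icons/ikea-tretakt-smart-plug.png
```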

5. Verify Your New Icon

Once you’ve assigned the custom icon:

  • Go back to the Devices page in Zigbee2MQTT.
  • Your new icon should now appear next to the device name!
Showing custom icons in Zigbee2MQTT in Home Assistant

Final Thoughts

Using custom icons in Zigbee2MQTT in Home Assistant is a great way to make your smart home setup more intuitive and visually appealing. Whether you’re correcting an incorrect default icon or just want a personalized look, this method is quick and easy!

The post Set Custom Device Icons in Zigbee2MQTT in Home Assistant appeared first on TechOpt.

]]>
https://www.techopt.io/smart-home/set-custom-device-icons-in-zigbee2mqtt-in-home-assistant/feed 0
Aggregate Audio Devices for DAW Input and Output on macOS https://www.techopt.io/music-production/aggregate-audio-devices-for-daw-input-and-output-on-macos https://www.techopt.io/music-production/aggregate-audio-devices-for-daw-input-and-output-on-macos#respond Sun, 26 Jan 2025 19:12:10 +0000 https://www.techopt.io/?p=703 For music producers and audio engineers, using both input and output from the same device in your Digital Audio Workstation (DAW) on macOS can be tricky, especially when your hardware doesn’t natively support it. Thankfully, macOS offers a feature called Aggregate Devices that allows you to combine multiple audio devices into a single virtual device. […]

The post Aggregate Audio Devices for DAW Input and Output on macOS appeared first on TechOpt.

]]>
For music producers and audio engineers, using both input and output from the same device in your Digital Audio Workstation (DAW) on macOS can be tricky, especially when your hardware doesn’t natively support it.

Thankfully, macOS offers a feature called Aggregate Devices that allows you to combine multiple audio devices into a single virtual device. Here’s a straightforward guide to setting up an Aggregate Device, allowing you to seamlessly use inputs and outputs simultaneously in DAWs like Pro Tools, Logic Pro, Ableton Live, REAPER, Reason, FL Studio, or Garageband.

What Are Aggregate Devices?

An Aggregate Device is a virtual audio device on macOS that combines multiple physical devices into one. This lets your DAW see all the inputs and outputs as a single source, even if they’re from different devices or the same device that doesn’t fully support simultaneous I/O.

Step-by-Step Guide to Creating an Aggregate Audio Device for Audio Input and Output on macOS

1. Open Audio MIDI Setup

Navigate to Applications > Utilities and open Audio MIDI Setup.

Audio MIDI Setup in Application Utilities folder on macOS

If you don’t see the list of devices, go to the top menu and select Window > Show Audio Devices.

2. Create a New Aggregate Device

In the bottom-left corner, click the + button and select Create Aggregate Device.

Create aggregate device to combine audio input and output on macOS

A new device called “Aggregate Device” will appear in the device list.

New device called Aggregate Device in Audio MIDI Setup on macOS

3. Configure Your Aggregate Device

On the right-hand side, you’ll see a list of all available audio devices. Check the boxes next to the devices you want to include. For example, if you’re using a USB microphone and the built-in audio output, check both of these devices.

In my case, my USB mixer was showing up as USB Audio CODEC 1 for the output, and USB Audio CODEC 2 for the input. So, these are what I selected in my aggregate device configuration.

Selecting audio devices to combine for aggregate device on macOS

Drag devices in the list to arrange their priority. The top device becomes the master clock source, which ensures synchronization.

Ensure that the sample rates for all selected devices match. Mismatched sample rates can cause glitches or audio dropouts.

4. Rename Your Aggregate Device (Optional)

To keep things organized, double-click on Aggregate Device in the device list and rename it to something meaningful. In my case, I gave it the actual name of my hardware, Behringer XENYX X1832USB Mixer.

5. Set the Aggregate Device as Default (Optional)

If you’d like all system audio to use the Aggregate Device, right-click it and select Use This Device for Sound Input and/or Use This Device for Sound Output.

Configuring Your DAW for Aggregate Device Input and Output Usage on macOS

Once your Aggregate Device is set up, you’ll need to configure your DAW to recognize it. Below are general steps for popular DAWs:

Pro Tools

  • Go to Setup > Playback Engine.
  • Select your Aggregate Device from the dropdown menu.
  • Restart Pro Tools if prompted.

REAPER

  • Go to Options > Preferences > Audio > Device.
  • Choose your Aggregate Device as the Audio System.
Using an aggregate device for audio input and output on macos

Logic Pro

  • Navigate to Logic Pro > Settings > Audio.
  • Under the Devices tab, select your Aggregate Device for both Input and Output.

Ableton Live

  • Open Preferences > Audio.
  • Set the Audio Input Device and Audio Output Device to your Aggregate Device.

Reason

  • Open Preferences > Audio.
  • Select your Aggregate Device under both Audio Input and Output.
Using aggregate audio device for audio input and output on macOS in Reason

FL Studio

  • Open Options > Audio Settings.
  • Select your Aggregate Device as the Input/Output device.

Garageband

  • Go to Garageband > Preferences > Audio/MIDI.
  • Set both Input Device and Output Device to your Aggregate Device.

Additional Tips

  • Latency: Combining devices can introduce latency. If you notice delays, check the buffer size settings in your DAW and adjust as needed. I recommend starting with a buffer size of 64 samples and increasing by 2x each time until you no longer experience audio glitching or dropouts (i.e. 64 samples, 128 samples, 256 samples, 512 samples).
  • Consistency: Ensure all devices are connected before opening your DAW to avoid configuration errors.
  • Troubleshooting: If one device’s input or output isn’t working, double-check its sample rate and sync settings in Audio MIDI Setup.

Conclusion

Setting up an Aggregate Device on macOS makes it easy to overcome hardware limitations, enabling simultaneous input and output in your favorite DAW. Whether you’re recording vocals, mixing tracks, or experimenting with sound design, this powerful feature ensures your workflow stays smooth and uninterrupted.

For more detailed instructions, you can check Apple’s official guide here.

The post Aggregate Audio Devices for DAW Input and Output on macOS appeared first on TechOpt.

]]>
https://www.techopt.io/music-production/aggregate-audio-devices-for-daw-input-and-output-on-macos/feed 0