How to Make Ethernet Cables: A Complete Step-by-Step Guide

Learning how to make ethernet cables yourself is a cost-effective and customizable way to build a network setup that fits your exact needs. Buying premade cables limits you to fixed lengths and can quickly get expensive, especially if you need several cables of different sizes. By crimping your own cables, you can create perfect lengths for your home or office, improve cable management, and even ensure higher quality by using better materials.

This comprehensive guide will walk you through everything you need to know, from selecting the right cable and connectors, to crimping, testing, and troubleshooting your custom cables.


Why Make Your Own Ethernet Cable?

There are several advantages to building your own network cables:

  • Custom Lengths: No more coiled-up mess or cables that come up just short. Instead, you can make cables the exact length you need.
  • Cost Savings: Bulk ethernet cable and connectors are far cheaper per foot than buying pre-made cables.
  • Better Quality Control: You choose the cable type, shielding, and connectors, so you can avoid cheap copper-clad aluminum (CCA) cables.
  • Skill Building: This is a useful DIY skill for anyone interested in networking, home labs, or IT work.

Tools and Materials Needed

Here’s what you’ll need to make DIY ethernet cables successfully:

  • Ethernet Cable (Cat6, Cat6a, or Cat5e): Prefer solid copper rather than CCA for best performance and compliance with standards. Cat6 bulk cable on Amazon
  • RJ45 Connectors: Choose connectors rated for your cable type (Cat6, Cat6a, or Cat5e). Passthrough connectors are easier for beginners. Cat6 RJ45 passthrough connectors
  • RJ45 Crimping Tool: Used to secure connectors to the cable. Most crimpers also include a wire cutter and stripper. RJ45 crimp tool
  • Cable Tester (Recommended, but optional): Ensures your wiring is correct and detects any faults. Basic cable tester or advanced network tester
  • Strain Relief Boots (Recommended, but optional): Add durability to the connector ends. Strain relief boots
  • Wire Cutters/Scissors: For trimming cable and internal wires. Wire strippers/cutters

Tip: Avoid cheap “Cat7” or “Cat8” cables sold online. “Cat7” was never adopted as a TIA standard for RJ45 Ethernet, “Cat8” only matters for short data-center runs, and bargain cables sold under these labels often use questionable materials.


Step 1: Measure and Cut Your Cable

Pull the amount of cable you need from the box, then add roughly 30 cm (about 1 foot) of extra length for trimming and flexibility. Cut the cable cleanly using the crimper’s cutting blade or a pair of wire cutters.

cut the ethernet cable

Step 2: Strip the Outer Jacket

Use the stripping blade on your crimping tool (or a dedicated wire stripper) to remove about 5–8 cm (2–3 inches) of the outer jacket from both ends of the cable. Be careful not to nick the internal wires.

strip outer insulation of ethernet cable

After that, remove the internal string, if present.

cut the string in the cable

At this stage, slide on the strain relief boots if you’re using them; forgetting them is a common mistake, so it’s best to add them now. On both ends, orient the boot so its larger side faces away from the end of the cable.

add strain relief boots to cable

Step 3: Untwist and Arrange the Wires

Inside the jacket are four twisted pairs of wires (8 total). Untwist the pairs and straighten them.

Then, arrange them in either T-568A or T-568B wiring order. Use the same standard on both ends.

T-568A Wiring Order:

  1. White/Green
  2. Green
  3. White/Orange
  4. Blue
  5. White/Blue
  6. Orange
  7. White/Brown
  8. Brown

T-568B Wiring Order (Most Common in North America):

  1. White/Orange
  2. Orange
  3. White/Green
  4. Blue
  5. White/Blue
  6. Green
  7. White/Brown
  8. Brown
T-568A vs. T-568B ethernet standards wiring diagram.

Lay the wires flat and keep them in the correct order. Finally, flatten them gently with your thumb for easier insertion.

My wires are arranged in the T-568B standard.

Step 4: Trim Wires to Length

For non-passthrough connectors, trim the wires so that they are just long enough to reach the end of the connector when inserted. Cut them evenly so they line up perfectly.

For passthrough connectors, leave them a bit longer since the ends will protrude and be trimmed after crimping.


Step 5: Insert Wires Into the RJ45 Connector

Slide the wires into the connector carefully, ensuring they remain in the correct order. Push firmly until:

  • Each wire reaches the very end of the connector.
  • The outer jacket passes the strain relief tab for a strong connection.
wires in RJ45 connector

For passthrough connectors, the wires should stick out slightly from the other side.

As soon as you confirm the order, you’re ready to crimp.


Step 6: Crimp the Connector

Place the connector into the crimping tool and squeeze firmly until the pins press down into the wires and the strain relief tab locks onto the outer jacket.

crimping the RJ45 connector to the cable

Additionally, for passthrough connectors, trim the wire ends flush with the connector after crimping.

Then, repeat this entire process for the other end of the cable!


Step 7: Test Your Cable

Use a cable tester to confirm that all eight wires are connected in the correct order.

testing the ethernet cable with a cable tester

The lights on both ends should flash in sequence.

If any wires are misaligned, cut off the connector and repeat the process on that side.

Once confirmed, your custom ethernet cable is ready for use!


Cat6 vs Cat6a vs Cat5e: Which Should You Choose?

Not all Ethernet cables are created equal. Here’s a quick comparison to help you decide:

Category | Maximum Speed               | Maximum Bandwidth | Maximum Recommended Length | Best Use Case
Cat6     | 1 Gbps (10 Gbps up to 55 m) | 250 MHz           | 100 m                      | Home and small office networks, gaming, streaming
Cat6a    | 10 Gbps up to 100 m         | 500 MHz           | 100 m                      | High-performance networks, data-heavy tasks, future-proofing
Cat5e    | 1 Gbps                      | 100 MHz           | 100 m                      | Budget builds, basic home networking

Recommendation: Use Cat6 for most home setups, Cat6a if you want to future-proof or need maximum performance for longer runs, and Cat5e only if you already have it on hand or are working with very low-cost builds.


Frequently Asked Questions

Q: Can I mix T-568A on one end and T-568B on the other?
A: Only if you are intentionally creating a crossover cable. Otherwise, use the same wiring standard on both ends.

Q: How long can an Ethernet cable be?
A: Standard twisted-pair Ethernet cables (Cat5e, Cat6, Cat6a) are rated for up to 100 meters (328 feet) in total length. This includes patch cables at both ends. Beyond this length, you may experience signal loss or reduced speeds.

For 10 Gbps on Cat6, keep runs under 55 meters; use Cat6a for longer 10 Gbps runs.

Q: Do I really need a cable tester?
A: While optional, it saves time and frustration by catching miswires before you plug into your network.

Q: Should I ever use CCA cable?
A: No. Instead, always use solid copper cable for performance, safety, and compliance with Ethernet standards.


Troubleshooting Common Issues When Making Ethernet Cables

  • Tester Shows Miswired Pair: Re-check wiring order on both ends, re-crimp if needed.
  • Cable Doesn’t Click Securely: Ensure the strain relief tab is pressed down properly during crimping. Also check that the covers on your strain relief boots aren’t so stiff that they press down on the RJ45 connectors’ release tabs; if they are, work the rubber with your thumbs a bit to stretch and break them in.
  • Poor Network Speeds: Test on another device and verify that you’re using solid copper cable, not CCA.
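
If you want to sanity-check real-world throughput over a finished cable, iperf3 is a quick way to do it. This is a minimal sketch assuming iperf3 is installed on two machines connected by the cable, with 192.168.1.10 standing in for the server machine’s IP:

# On the machine at one end of the link, start an iperf3 server
iperf3 -s

# On the machine at the other end, run the client (replace the IP)
iperf3 -c 192.168.1.10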

Final Remarks

Learning how to make ethernet cables saves money, eliminates clutter, and gives you full control over your network setup. Whether you’re wiring a home office, building a home lab, or just need a few short patch cables, this DIY approach is a game changer.

Practice a few times and you’ll be making professional-quality network cables in minutes!

If you prefer a video guide, you can watch mine below.

GPU Passthrough to Proxmox LXC Container for Plex (or Jellyfin)

When I added an Intel Arc GPU to my Proxmox server for Plex hardware transcoding, I quickly realized there isn’t much solid documentation on how to properly passthrough a GPU to a Proxmox LXC container. Most guides focus on VMs, not LXCs. After some trial and error, I figured out a process that works. These same steps can also apply if you’re running Jellyfin inside an LXC.

In this guide, I’ll walk you through enabling GPU passthrough to a Plex (or Jellyfin) LXC in Proxmox, step by step.

Step 1: Enabling PCI(e) Passthrough in Proxmox

The first step is enabling PCI passthrough at the Proxmox host level. I mostly followed the official documentation here: Proxmox PCI Passthrough Wiki.

I’ll summarize the required steps below.

Enable IOMMU in BIOS

Before you continue, enable IOMMU (Intel VT-d / AMD-Vi) in your system’s BIOS. This setting lets GPUs pass directly through to containers or VMs.

Every BIOS is different, so if you’re not sure, check your motherboard’s instruction manual.

Enable IOMMU Passthrough Mode

Not all hardware supports IOMMU passthrough, but if yours does you’ll see a big performance boost. Even if your system doesn’t, enabling it won’t cause problems, so it’s worth turning on.

Edit the GRUB configuration with:

nano /etc/default/grub

Locate the GRUB_CMDLINE_LINUX_DEFAULT line and add:

iommu=pt

Intel vs AMD Notes

  • On Intel CPUs running kernels older than 6.8 (Proxmox 8 before updates), also add: intel_iommu=on
  • On Intel CPUs with kernel 6.8 or newer (a fully updated Proxmox 8, or Proxmox 9+), the intel_iommu=on parameter is no longer needed.
  • On AMD CPUs, IOMMU is enabled by default.

With an AMD CPU on Proxmox 9, my file looked like this in the end:

grub config to passthrough gpu to proxmox lxc with iommu
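
For reference, the finished line should look something like this (quiet is the stock Proxmox default; your line may carry additional parameters):

GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"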

Save the file by pressing CTRL+X, then Y to confirm, then Enter.

Then update GRUB by running:

update-grub
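
Note: if your host boots with systemd-boot instead of GRUB (common on ZFS-on-root installs), the kernel command line lives in /etc/kernel/cmdline rather than /etc/default/grub. In that case, append iommu=pt there and refresh the boot entries with:

proxmox-boot-tool refresh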

Load VFIO Kernel Modules

Next, we need to load the VFIO modules so the GPU can be bound for passthrough. Edit the modules file with:

nano /etc/modules

Add the following lines:

vfio
vfio_iommu_type1
vfio_pci

My file looked like this in the end:

added kernel modules needed for gpu passthrough to lxc in proxmox

Save and exit, then update initramfs:

update-initramfs -u -k all

Reboot and Verify

Reboot your Proxmox host and verify the modules are loaded:

lsmod | grep vfio

You should see the vfio modules listed. This is what my output looks like:

vfio kernel modules are loaded

If you don’t get any output, the kernel modules have not loaded correctly and you probably forgot to run the update-initramfs command above.

To double-check IOMMU is active, run:

dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

Depending on your hardware, you should see confirmation that IOMMU or Directed I/O is enabled:

output when IOMMU is active (or similar)

Step 2: Finding the GPU Renderer Device Path

Now that PCI passthrough support is enabled, the next step is to figure out the device path of the renderer for the GPU you want to pass through.

Run the following on your Proxmox host:

ls /dev/dri

This will print all the detected GPUs. For example, my output looked like this:

by-path  card0  card1  renderD128  renderD129

If you only have a single GPU, you can usually assume it will be something like renderD128. But if you have multiple GPUs (as I did), you’ll need to identify which renderer belongs to your Intel Arc card.

First, run:

lspci

This will list all PCI devices. From there, I found my Intel Arc GPU at 0b:00.0:

lspci output showing my intel arc GPU card

Next, run:

ls -l /dev/dri/by-path/

My output looked like this:

lrwxrwxrwx 1 root root  8 Aug 24 11:54 pci-0000:04:00.0-card -> ../card0
lrwxrwxrwx 1 root root 13 Aug 24 11:54 pci-0000:04:00.0-render -> ../renderD129
lrwxrwxrwx 1 root root  8 Aug 24 11:54 pci-0000:0b:00.0-card -> ../card1
lrwxrwxrwx 1 root root 13 Aug 24 11:54 pci-0000:0b:00.0-render -> ../renderD128

From this, I confirmed that my Intel Arc GPU was associated with renderD128. That means the full device path I need to pass to my LXC container is:

/dev/dri/renderD128

Step 3: Passthrough GPU Device to the Plex LXC Container

Now that we know the correct device path, we can pass it through to the LXC container.

  1. Stop the container you want to pass the GPU into.
  2. In the Proxmox web UI, select your Plex LXC container.
  3. Go to Resources.
  4. At the top, click Add → Device Passthrough.
    steps to get to device passthrough menu
  5. In the popup, set Device Path to the GPU renderer path you identified in Step 2 (in my case, /dev/dri/renderD128).
  6. In the Access mode in CT field, type: 0666
    add passthrough device to lxc config
  7. Click Add.
  8. Once added, you can start the container again.
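
If you prefer working from the command line, the same passthrough can be expressed directly in the container’s config file. This is a sketch assuming a hypothetical container ID of 101; on recent Proxmox releases the UI steps above produce an equivalent dev0 line (see man pct.conf for the exact options your version supports):

# /etc/pve/lxc/101.conf (101 is a hypothetical container ID)
dev0: /dev/dri/renderD128,mode=0666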

A Note About Permissions

Normally, you would configure a proper UID or GID in CT for the render group inside the container so only that group has access. However, in my testing I wasn’t able to get the GPU working correctly with that method.

Using 0666 permissions allows read/write access for everyone. Since this is a GPU device node and not a directory containing files, I’m not too concerned, but it’s worth noting for anyone who takes their Linux permissions very seriously.

Step 4: Installing GPU Drivers Inside the LXC Container

With the GPU device passed through, the container now needs the proper drivers installed. This step varies depending on the Linux distribution you’re running inside the container.

Some distros may already include GPU drivers by default. But if you reach Step 5 and don’t see your GPU as an option in Plex (or Jellyfin), chances are you’re missing the driver inside your container.

In my case, I’m using a Debian container, which does not include Intel Arc drivers by default. Here’s what I did:

  1. Edit your apt sources to enable the non-free repo: nano /etc/apt/sources.list
  2. Add non-free to the end of each Debian repository line:
    my sources.list after adding the non-free Debian repository
  3. Refresh apt sources: apt update
  4. Install the Intel Arc driver package: apt install intel-media-va-driver-non-free
  5. Restart the container.

For Debian, I followed the official documentation here: Debian Hardware Video Acceleration Wiki.

Note: These steps will vary depending on your GPU make, model and container distribution. Make sure to check the official documentation for your hardware and distro.
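
Optionally, you can sanity-check the driver from inside the container before touching Plex. On Debian-based containers, the vainfo utility reports the VA-API driver in use and the codec profiles it supports; if it errors out, the driver or passthrough still isn’t right:

apt install vainfo
vainfo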

Step 5: Enabling Hardware Rendering in Plex

Now that your GPU and drivers are ready, the final step is to enable hardware transcoding inside Plex.

  1. Open the Plex Web UI.
  2. Go to Settings (top-right corner).
  3. In the left sidebar, scroll down and click Transcoder under Settings.
  4. Make sure the following options are checked:
    • Use hardware acceleration when available
    • Use hardware-accelerated video encoding
  5. Under Hardware transcoding device, you should now see your GPU. If not, double-check Step 4. In my case, it showed up as Intel DG2 [Arc A380]
  6. Select your GPU and click Save Changes.
Steps to enable hardware encoding in plex ui

Testing Hardware Transcoding

To verify that hardware transcoding is working:

  1. Play back any movie or TV show.
  2. In the playback settings, change the quality to a lower resolution that forces Plex to transcode:
    Force Plex to transcode by selecting a lower quality
  3. While it’s playing, go to Settings → Dashboard.
  4. If the GPU is handling transcoding, you’ll see (hw) beside the stream being transcoded:
    The GPU passthrough to the proxmox lxc for plex can be confirmed working from Plex dashboard: here are the steps

That’s it! You’ve successfully set up GPU passthrough in Proxmox LXC for Plex. These same steps should also work for Jellyfin with minor adjustments for the Jellyfin UI.

Run Virtual Machines on OPNsense with bhyve

If you’ve ever looked at your OPNsense box and thought, “this thing is barely breaking a sweat,” you’re not alone. Many users with overpowered hardware are now looking for ways to run virtual machines on OPNsense to take full advantage of those idle resources with additional software and services. In my case, I had 8GB of RAM and 120GB of storage sitting mostly idle, with CPU usage rarely spiking beyond a modest blip.

Instead of virtualizing OPNsense itself using something like Proxmox (which is a common suggestion), I wanted to keep OPNsense on bare metal for reliability and stability. But I also wanted to move some VMs directly onto my router box, such as my Ubuntu Server VM running the TP-Link Omada software that controls my WiFi. That led me down the rabbit hole of running a virtual machine on OPNsense—and the tool that makes this possible is bhyve, the native FreeBSD hypervisor.

This is definitely not officially supported and could break with any OPNsense update, so proceed with caution. My setup is loosely based on a 2023 forum post I found in the OPNsense community forums.

Step 1: Installing bhyve

To get bhyve running, we first need to enable the FreeBSD repository temporarily. OPNsense locks its package manager to prevent upgrades from mismatched repos, so we need to handle this carefully.

Lock the pkg tool

pkg lock -y pkg

Enable the FreeBSD repository

sed -i '' 's/enabled: no/enabled: yes/' /usr/local/etc/pkg/repos/FreeBSD.conf

Install bhyve and required packages

pkg install -y vm-bhyve grub2-bhyve bhyve-firmware

Disable the FreeBSD repository again

sed -i '' 's/enabled: yes/enabled: no/' /usr/local/etc/pkg/repos/FreeBSD.conf

Unlock pkg

pkg unlock -y pkg

⚠ Leaving the FreeBSD repo enabled may interfere with future OPNsense updates. Disabling it again helps maintain system stability, but means bhyve won’t update automatically. If you want to update bhyve later, you’ll need to repeat this process.

Step 2: Configuring the Firewall for Virtual Machines on OPNsense

Next, we need to create a virtual bridge to let our bhyve virtual machines talk to each other and the rest of the network.

This part is all done from the OPNsense UI.

Create a bridge interface

  • Navigate to Interfaces → Devices → Bridge
  • Add a new bridge with your LAN interface as a member
  • Enable link-local IPv6 if you use IPv6 on your network
  • Note the name of your bridge (e.g., bridge0)
Creating the LAN bridge switch for virtual machines on OPNsense

Assign and configure the bridge interface

  • Go to Interfaces → Assignments
  • Assign bridge0 to a new interface (give it a description like bhyve_switch_public)
  • Enable the interface
  • Check Lock – Prevent interface removal to avoid losing it on reboot
Assigning an interface to the VM bridge

Allow traffic through the bridge

  • Navigate to Firewall → Rules → bhyve_switch_public
  • Add a rule to allow all traffic (you can tighten this later if needed)
Allow all firewall rule on switch for virtual machines on opnsense

One thing to note: the forum post I referenced above did not mention anything about assigning an interface or adding a firewall rule for the bridge. However, in my experience, my virtual machines in OPNsense had no network connectivity until I completed both of these steps.

Step 3: Setting Up bhyve

With bhyve installed and your network bridge configured, the next step is to prepare the virtual machine manager and your storage directory. You have two options here: using ZFS (ideal for advanced snapshots and performance features) or a standard directory (simpler and perfectly fine for one or two VMs).

Option 1: Using ZFS for VM Storage

If you’re using ZFS (like zroot), create a dataset for your VMs:

zfs create zroot/bhyve

Then enable and configure vm-bhyve:

sysrc vm_enable="YES"
sysrc vm_dir="zfs:zroot/bhyve"
vm init

Option 2: Using a Standard Directory

If you’re not using ZFS or want a simpler setup for running just a few virtual machines on OPNsense:

mkdir /bhyve
sysrc vm_enable="YES"
sysrc vm_dir="/bhyve"
vm init

This sets up /bhyve as the default VM storage directory. You’ll now be able to manage and create virtual machines using the vm command-line tool, with bhyve handling the hypervisor backend.

This is the option that I personally chose for my setup, since I only plan on running 1 or 2 VMs.

Step 4: Configuring bhyve

With the storage and base configuration out of the way, the next step is to configure networking for bhyve. To do this, we’ll create a virtual switch that connects bhyve’s virtual NICs to the bridge interface we created in Step 2.

Setting up Networking

Run the following command to create a manual switch that binds to the bridge0 interface:

vm switch create -t manual -b bridge0 public

This tells vm-bhyve to create a virtual switch named public and associate it with bridge0, allowing your VMs to communicate with the rest of your network. Any virtual machine you create can now be attached to this switch to access LAN or internet resources just like any other device on your network.

Copying VM Templates

Before you start creating virtual machines in OPNsense, it’s helpful to copy the sample configuration templates that come with vm-bhyve. These templates make it easier to define virtual machines for different operating systems like FreeBSD, Linux, or Windows.

If you’re using ZFS and followed the zroot/bhyve setup as described in Option 1 above:

cp /usr/local/share/examples/vm-bhyve/* /zroot/bhyve/.templates/

If you’re using a standard directory setup like /bhyve, as described in Option 2 above:

cp /usr/local/share/examples/vm-bhyve/* /bhyve/.templates/

This copies example VM configuration templates into the .templates directory within your VM storage location. These templates provide base config files for creating new VMs and are a helpful starting point for most operating systems.

Step 5: Setting Up Your Virtual Machine

In this step, we’ll walk through creating your first VM using bhyve. Since there’s more to this than just launching a template, I’ve broken it down into three parts: setting up the VM itself (including creating a config), installing the operating system, and then configuring the firewall for the VM.

Configuring the VM

For my setup, I was creating an Ubuntu Server instance that runs TP-Link Omada controller software.

First, navigate into your templates directory. If you’re using ZFS, it’ll look like this:

cd /zroot/bhyve/.templates/

Or if you’re using a regular directory setup:

cd /bhyve/.templates/

Inside, you’ll find configuration files for different OS templates. I used the one for Ubuntu, found at ubuntu.conf, which contains the following:

loader="grub"
cpu=1
memory=512M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"

This basic config uses GRUB as the loader and allocates 1 CPU and 512MB of RAM. It attaches to the public switch we created earlier and uses a virtual block device for storage.

To create a custom config for my Omada controller VM, I simply copied the Ubuntu template:

cp ubuntu.conf omada-controller.conf

This gave me a dedicated configuration file I could tweak without touching the original Ubuntu template.

Next, I edited the omada-controller.conf file using nano to better suit the needs of the Omada software:

nano omada-controller.conf

And the contents:

loader="uefi"
cpu=2
memory=3G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"

This configuration increases the resources available to the VM—allocating 2 CPU cores and 3GB of RAM, which is more appropriate for running the Omada controller software.

Initially, I tried using the GRUB loader as shown in the Ubuntu template, but I ran into problems booting the OS after installation. After doing some research, I found that this is a fairly common issue when using GRUB with certain Linux distributions on bhyve. Switching to uefi resolved the problem for me and allowed the VM to boot normally. Your mileage may vary, but if you’re stuck at boot, switching the loader to uefi is worth trying.

Starting Guest OS Installation

Before you can install the operating system, you’ll need to download and import the installation media (ISO file) for your OS into bhyve. This is easy to do with the vm iso command.

I downloaded the Ubuntu 22.04.5 server installer ISO using:

vm iso https://mirror.digitaloceans.dev/ubuntu-releases/22.04.5/ubuntu-22.04.5-live-server-amd64.iso

This downloaded the ISO file directly into the /bhyve/.iso directory (or /zroot/bhyve/.iso if you’re using ZFS).

Once the ISO was downloaded, I started the installation process for my VM using:

vm install -f omada-controller ubuntu-22.04.5-live-server-amd64.iso

This command tells vm-bhyve to boot the VM using the downloaded ISO so you can proceed with installing the operating system in console mode.
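
To follow the text-mode installer, attach to the VM’s serial console using vm-bhyve’s console command (it wraps cu, so the usual ~. escape sequence disconnects):

vm console omada-controller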

In my case, when using the GRUB loader, the console mode installer worked fine. However, after switching to UEFI mode, I ran into a problem where the console installer would no longer boot properly. After doing some research, I found that this is a common issue with bhyve when using UEFI.

To work around this, I edited my omada-controller.conf and temporarily added the following line to enable graphical output:

graphics="yes"

The updated configuration file looked like this:

loader="uefi"
cpu=2
memory=3G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"

This allowed the ISO to boot into the graphical installer, which I accessed using VNC. After installation, I planned to enable SSH on the VM to manage it more easily and remove the graphics option.

However, to use VNC to complete the installation, I needed to add additional firewall rules to allow VNC access to the virtual machine after it was created and booted.

Step 6: Configuring the Firewall (Again)

When a bhyve virtual machine boots, it creates a new network interface on the host system, usually with the prefix tap. When my bhyve VM booted, the firewall blocked all network access on the new VM interface. As a result, I was unable to connect to the VM, and the VM itself had no network connectivity.

Here’s what I did to properly assign the VM network interface and open up traffic:

  • Run ifconfig from the console to see the list of interfaces.
  • Identify the new interface created by bhyve (it will usually start with tap). In my case, it was named tap0.
  • Rename the tap interface so it can be properly assigned in the OPNsense GUI:
ifconfig tap0 name bhyve0
  • Go to Interfaces → Assignments in the OPNsense UI.
  • Assign the newly renamed bhyve0 interface.
  • Give it a description like bhyve_vm_omada_controller.
  • Enable the interface.
Assigning the vm to an interface in the OPNsense UI
  • Go to Firewall → Rules → [bhyve_vm_omada_controller].
  • Add a rule to allow all traffic through the interface.

This setup ensured that the virtual machine had full network access through its own dedicated interface, while still keeping things clean and organized within OPNsense.

Keep in mind that each time the VM is powered off and started again, a new tap interface is created. Because of this, you must manually rename and reassign the interface every time the VM boots until we set up a persistent interface configuration after the OS installation is complete.

Keeping the interface name consistent between shutdowns so firewall rules apply correctly was one of the trickiest parts of the entire setup for me. I’ll dive deeper into the different solutions I tested, and what finally solved the issue reliably, in the final step of this article.

Step 7: Connecting with VNC and Installing the OS

Now that our virtual machine on OPNsense is configured, the ISO is loaded, and the firewall rules are in place, it’s time to connect to the VM and install the OS.

  • Open your preferred VNC client. I personally used Remmina for this, but other popular options like TightVNC and TigerVNC will also work fine.
  • Connect to your OPNsense router’s IP address on port 5900.
  • You should see your OS installer’s screen boot up!
Remotely connect to virtual machines on OPNsense with VNC

Proceed through the guest OS installation like you normally would. During installation, I made sure to:

  • Enable the OpenSSH server in Ubuntu so I could manage the VM over SSH instead of VNC afterward.
  • Configure the VM with a static IP address within my LAN subnet.

Once installation was completed, I rebooted the VM. If you enabled SSH, you should now be able to connect to your VM via its IP address without needing to rely on VNC anymore.

After confirming SSH access, I edited the VM’s configuration to remove the graphics="yes" line from omada-controller.conf for security and resource efficiency.

After powering off the VM to make these changes, I had to manually rename the network adapter again following the steps from Step 6, before I could access it via SSH.

Now let’s configure the VM to start automatically at boot, and find a more permanent solution for the network adapter issue in the next step.

Step 8: Starting the VM at Boot and Fixing the Network Interface Name Issue

Starting the VM at Boot

By default, bhyve VMs don’t start automatically with the system. You can set individual VMs to start using:

vm set omada-controller boot=on

However, there’s a more flexible method that allows you to define a list of VMs that should start at boot. I used the following command to specify mine:

sysrc vm_list="omada-controller"

This ensures vm-bhyve starts the omada-controller VM whenever the system boots.

If you want to start multiple VMs, just list them separated by spaces:

sysrc vm_list="omada-controller another-vm some-other-vm"

This is useful if you plan to run multiple VMs on your OPNsense machine via bhyve.

Fixing the Networking Interface Name at Boot

One of the trickiest parts of my setup was keeping the VM’s network interface name consistent between reboots. I initially tried using the following line in my VM config:

network0_name="bhyve0"

This is supposed to create the VM’s network interface with the name bhyve0 when it boots.

However, I found that while this approach worked with loader="grub" in the VM config (BIOS mode), it caused the VM to crash immediately at startup when using loader="uefi".

Instead, I leaned into ifconfig and ran the following:

sysrc ifconfig_tap0_name="bhyve0"

This ensures the tap0 interface is automatically renamed to bhyve0 at boot time, which seems to be working well for me. This seems to allow the firewall rules we created earlier to be applied successfully without manual intervention.

Now we have configured our VM to start running at boot, and we automatically rename the created network interface at boot.

Keep in mind that with the ifconfig method, you will have to manually run ifconfig again if the VM is powered off and on, but the OPNsense host is not:

ifconfig tap0 name bhyve0

This is because the interface gets destroyed and recreated with the default tap0 name.

Running Virtual Machines on OPNsense: My Final Thoughts

Running virtual machines directly on OPNsense using bhyve is an advanced but rewarding undertaking. It allows you to consolidate infrastructure and put underutilized hardware to work, all while keeping your firewall on bare metal for maximum reliability. While the process involves a lot of manual setup—especially around networking and boot configuration—it ultimately gives you a lightweight, headless VM host tightly integrated into your router.

Just remember that this is an unofficial and unsupported use case. Updates to OPNsense or FreeBSD may break things, so keep good backups and approach with a tinkerer’s mindset. But if you’re comfortable on the command line and like squeezing every drop of utility out of your hardware, this setup is a powerful way to do just that.

Remarks

  • vm list is a good command to help see loaded VMs and their status. You can also start and stop VMs with vm start vm-name and vm stop vm-name.
  • You will have to configure the network adapter settings for each VM you create and apply the firewall rules for each VM in the OPNsense UI as we did above.
  • It’s important to note the configuration differences between using the UEFI loader and the BIOS loader when setting up virtual machines on OPNsense, as stated throughout the article.
  • To see a sample VM configuration, you can take a look at an example on GitHub here.
  • Again, this is all unsupported. Follow at your own risk.
    • This is for advanced users only. None of it is manageable through the OPNsense UI, except firewall rules. Know what you’re doing in the terminal before following this guide.

Setting a Static IP Address and DNS in Ubuntu Server

If you’re running Ubuntu Server and need to configure a static IP address, you might have seen guides mentioning /etc/network/interfaces or resolvconf. However, these methods are outdated. The recommended way today is to use netplan.

In this guide, you’ll discover how to set a static IP in Ubuntu and define custom DNS settings, including nameservers and search domains. Additionally, we’ll explain how to keep DHCP while specifying DNS servers for better control.

Why Should You Set a Static IP on Ubuntu Server?

Assigning a static IP ensures your server retains the same address across reboots. This reliability is essential for servers running web services, databases, or acting as internal network resources.

Step 1: Identify Your Ubuntu Server Network Interface

To begin, list your network interfaces:

ip link

You’ll usually see names like eth0, ens33, or enp0s3.

Step 2: Edit the Netplan Configuration

Netplan configurations are stored in /etc/netplan/. View the files with:

ls /etc/netplan/

Next, edit the YAML file (replace with your actual file name):

sudo nano /etc/netplan/50-cloud-init.yaml

Here’s an example static IP configuration for Ubuntu Server:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses:
        - 192.168.1.100/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        search: [yourdomain.local]
        addresses:
          - 1.1.1.1
          - 1.0.0.1

Replace eth0 with your interface name. Adjust the IP, gateway, and DNS to match your network.

Important: Some older guides might mention using the gateway4 parameter. However, gateway4 has been deprecated. It’s better to use the routes section, as demonstrated above, for better compatibility with future Ubuntu versions.

Step 3: Apply the Static IP Ubuntu Configuration

Once you have finished editing, apply the changes with:

sudo netplan apply
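
If you’re editing over SSH, consider running sudo netplan try first; it applies the configuration but rolls it back automatically after a timeout unless you confirm, protecting you from locking yourself out with a bad config:

sudo netplan try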

To confirm your new settings, run:

ip a

This command will display your active IP address. To confirm your DNS configuration is working, you can run:

sudo apt update

This refreshes Ubuntu’s package lists, and as long as it succeeds you know that your DNS configuration is working.
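
You can also inspect which DNS servers the system is actually using via systemd-resolved, the default resolver on modern Ubuntu Server:

resolvectl status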

Alternative: Keep DHCP but Configure DNS in Ubuntu Server

If you prefer to use DHCP for IP assignment but still want to control DNS servers, use this configuration:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: yes
      dhcp4-overrides:
        use-dns: no
      nameservers:
        search: [yourdomain.local]
        addresses:
          - 1.1.1.1
          - 1.0.0.1

This method allows the server to receive its IP address from the DHCP server, while your specified DNS servers handle name resolution.

Conclusion

To sum up, netplan is the modern, recommended tool for configuring a static IP Ubuntu setup. You should avoid older methods like resolvconf or editing /etc/network/interfaces, as they are deprecated in the latest Ubuntu versions. Whether you need a full static IP or simply want to control your DNS while keeping DHCP, netplan makes the process clear and manageable.

If you would like to learn about all the configuration options for netplan, you can read the official Netplan documentation.

If you would prefer to view this guide in video form, I’ve created a video explaining these instructions on the TechOpt.io YouTube channel, which you can watch below:

How to Create a Custom Warning in Zabbix

Zabbix actively monitors business infrastructure, but sometimes the built-in alerts do not cover all needs. In my case, I wanted to create a custom warning in Zabbix if a Btrfs snapshot hadn’t been completed in the last 24 hours on my Linux server. With this setup, I get an alert whenever my snapshot process fails or experiences a delay.

Why Use Custom Warnings in Zabbix?

Zabbix provides various alerts by default, but I needed a custom warning to:

  • Ensure Btrfs snapshots are being created regularly
  • Get notified if the snapshot process fails
  • Integrate this check into my existing monitoring system

There are many other use cases for custom warnings in Zabbix. Custom warnings provide greater flexibility, allowing you to adapt them to your specific monitoring needs.

Setting Up a Custom Warning in Zabbix

1. Create a Custom Script

First, I created a dedicated directory for custom scripts in Zabbix on my Linux host:

mkdir -p /etc/zabbix/scripts

Then, I created the script file and opened it for editing:

nano /etc/zabbix/scripts/check_btrfs_snapshot.sh

I added the following script to check if a Btrfs snapshot exists within the last 24 hours:

#!/bin/bash

SNAPSHOT_DIR="/mnt/btrfs/.snapshots"  # Adjust to your snapshot location
THRESHOLD=$(date -d '24 hours ago' +%s)

# Timestamp (epoch seconds) of the newest snapshot directory; -mindepth 1 excludes $SNAPSHOT_DIR itself
latest_snapshot=$(find "$SNAPSHOT_DIR" -mindepth 1 -maxdepth 1 -type d -printf '%T@ %p\n' | sort -nr | head -n1 | awk '{print $1}')

if [[ -z "$latest_snapshot" ]]; then
    echo 0  # No snapshots found
elif (( $(echo "$latest_snapshot >= $THRESHOLD" | bc -l) )); then
    echo 1  # Recent snapshot exists
else
    echo 0  # No recent snapshot
fi

After saving the script, I made it executable:

chmod +x /etc/zabbix/scripts/check_btrfs_snapshot.sh
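
Before involving Zabbix at all, it’s worth running the script by hand to confirm it prints 1 (recent snapshot found) or 0 (none):

/etc/zabbix/scripts/check_btrfs_snapshot.sh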

2. Add the Script to Zabbix Agent 2

I’m using Zabbix Agent 2 for advanced features. Since Zabbix Agent 2 supports system.run, I modified zabbix_agent2.conf to add the script:

  1. Open the configuration file: nano /etc/zabbix/zabbix_agent2.conf
  2. Add this line to allow the script: AllowKey=system.run[/etc/zabbix/scripts/check_btrfs_snapshot.sh]
  3. Restart the agent: systemctl restart zabbix-agent2
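
You can then test the key end to end before creating the item. This sketch assumes the zabbix-get utility is installed on your Zabbix server and 192.168.1.50 stands in for the monitored host’s IP:

zabbix_get -s 192.168.1.50 -k 'system.run[/etc/zabbix/scripts/check_btrfs_snapshot.sh]'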

3. Create a Custom Item in Zabbix

  1. Go to Zabbix Web InterfaceMonitoringHosts
  2. Select the host where Btrfs is monitored (or where you’re adding your custom script) and click Items.
  3. Click Create Item and set your parameters:
    • Name: Btrfs Snapshot in the last 24 hours
    • Type: Zabbix Agent
    • Key: system.run[/etc/zabbix/scripts/check_btrfs_snapshot.sh]
    • Type of information: Text
    • Update interval: 30m
Parameters for custom zabbix item

I also recommend testing your script to make sure it runs properly with the Test option at the bottom of the Item UI from above.

4. Set Up a Custom Trigger

  1. Navigate to the Triggers tab.
  2. Click Create Trigger.
  3. Use this expression to trigger a warning if no snapshot exists in the last 24 hours: last(/example-hostname/system.run[/etc/zabbix/scripts/check_btrfs_snapshot.sh])=0
    You can also use the “Add Condition” UI in the side panel by clicking Add to help generate the correct expression.
  4. Set severity (e.g., Warning).
  5. Click Add.
Zabbix custom trigger parameters to create custom warning

Final Thoughts

Setting up this custom warning in Zabbix ensures my Btrfs snapshots are created on schedule. If a snapshot fails, I get a warning immediately, allowing me to troubleshoot the issue before it becomes a bigger problem.

If you’re using Zabbix for monitoring, creating custom warnings like this can significantly improve your ability to track and respond to important system events. Try it out and enhance your monitoring experience!

Adding a Script Tag to HTML Using Nginx

Recently, I needed to add a script to an HTML file using Nginx. Specifically, I wanted to inject an analytics script into the <head> section of a helpdesk software’s HTML. The problem? The software had no built-in way to integrate custom scripts. Since modifying the source code wasn’t an option, I turned to Nginx as a workaround.

Warning

Use this method at your own risk. Modifying HTML responses through Nginx can easily break your webpage if not handled carefully. Always test changes in a controlled environment before deploying them to production.

Nginx is not designed for content manipulation, and this approach should only be used as a last resort. Before proceeding, exhaust all other options, such as modifying the source code, using a built-in integration, or leveraging a client-side solution.

How to Add a Script to HTML Using Nginx

If you need to add a script tag, or any other HTML, to a page served through Nginx, you can use the sub_filter module to modify response content on the fly. By leveraging this, we can insert a <script> tag before the closing </head> tag in the HTML document.

Configuration Example

To achieve this, add the following to your Nginx configuration:

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        proxy_buffering off;

        # sub_filter can't rewrite compressed responses, so ask the backend for uncompressed HTML
        proxy_set_header Accept-Encoding "";

        sub_filter '</head>' '<script src="https://example.com/analytics.js"></script></head>';
        sub_filter_types text/html;
        sub_filter_once on;
    }
}

Explanation

  • sub_filter '</head>' '<script src="https://example.com/analytics.js"></script></head>': This replaces </head> with our script tag, ensuring it appears in the document head.
  • sub_filter_types text/html;: Ensures the filter applies only to HTML responses.
  • sub_filter_once on;: Ensures that the replacement happens only once, as </head> should appear only once in a valid HTML document.
  • proxy_set_header Accept-Encoding "";: Asks the backend for uncompressed responses, since sub_filter cannot rewrite gzip- or brotli-compressed HTML; without this, the injection may silently do nothing.

Adding an Nginx Proxy for Script Injection

To implement this solution without modifying the existing helpdesk software, I set up another Nginx instance in front of it. This new Nginx proxy handles incoming requests, applies the sub_filter modification, and then forwards the requests to the helpdesk backend.

Here’s how the setup works:

  1. The client sends a request to example.com.
  2. Nginx intercepts the request, modifies the HTML response using sub_filter, and injects the script.
  3. The modified response is then sent to the client, appearing as if it were served directly by the helpdesk software.

This approach keeps the original application untouched while allowing script injection through the proxy layer.
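
A quick way to confirm the injection is working is to request the page through the proxy and search the response for the injected tag (example.com being the placeholder domain from the config above):

curl -s http://example.com/ | grep analytics.js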

Remarks

  • Nginx is primarily a proxy and web server, not a content manipulation tool. Modifying content in this way should be a last resort after exhausting all other options, such as modifying the source code, using a built-in integration, or leveraging a client-side solution. Overuse of sub_filter can introduce unexpected behavior, break page functionality, or impact performance.
  • sub_filter requires proxy_buffering off;, which may degrade performance, especially for high-throughput sites, by preventing response buffering and increasing load on the backend.
  • If you’re adding multiple scripts or need flexibility, consider using a tag manager such as Google Tag Manager instead.
  • You can use this method to modify or inject any HTML, not just scripts.

LXC Containers (CTs) vs. Virtual Machines (VMs) in Proxmox

Proxmox is a powerful open-source platform that makes it easy to create and manage both LXC containers (CTs) and virtual machines (VMs). When considering LXC containers vs virtual machines in Proxmox, it’s essential to understand their differences and best use cases.

When setting up a new environment, you might wonder whether you should deploy your workload inside an LXC container or a full VM. The choice depends on what you are trying to achieve.

LXC Containers: Lightweight and Efficient

LXC (Linux Containers) provides an efficient way to run isolated environments on a Proxmox system. Unlike traditional VMs, containers share the host system’s kernel while maintaining their own isolated user space. This means they use fewer resources, start up quickly, and offer near-native performance.

When to Use LXC Containers:

  • Single Applications – If you need to run a single application in an isolated environment, an LXC container is an excellent choice.
  • Docker Workloads – If an application is only available as a Docker image, you can run Docker inside an LXC container, avoiding the overhead of a full VM.
  • Resource Efficiency – LXC containers consume fewer resources, making them ideal for lightweight applications that don’t require their own kernel.
  • Speed – Since LXC containers don’t require full emulation, they start almost instantly compared to VMs.

Considerations for LXC Containers:

  • Less Isolation – Since they share the host kernel, they are not as isolated as a full VM, which can pose security risks if an attacker exploits vulnerabilities in the kernel or improperly configured permissions.
  • Compatibility Issues – Some applications that expect a full OS environment may not work well inside an LXC container.
  • Limited System Control – You don’t have complete control over kernel settings like you would in a VM.

Virtual Machines: Full System Isolation

Virtual machines in Proxmox use KVM (Kernel-based Virtual Machine) technology to provide a fully virtualized system. Each VM runs its own operating system with its own kernel, making it functionally identical to a physical machine.

When to Use Virtual Machines:

  • Multiple Applications Working Together – If you need to run a system with multiple interacting services, a VM provides a fully isolated environment.
  • Custom Kernel or OS Requirements – If your application requires a specific kernel version or a non-Linux operating system (e.g., Windows or BSD), a VM is the way to go.
  • Strict Security Requirements – Since VMs have strong isolation from the host system, they provide better security for untrusted workloads.
  • Compatibility – Any software that runs on a physical machine will run in a VM without modification.

Considerations for Virtual Machines:

  • Higher Resource Usage – VMs require more CPU, RAM, and disk space compared to containers.
  • Slower Start Times – Because they emulate an entire system, VMs take longer to boot up.
  • More Maintenance – You’ll need to manage full OS installations, updates, and security patches for each VM separately.

Final Thoughts: When to Choose LXC Containers vs. Virtual Machines in Proxmox

In general, if you need to run a single application in isolation, or if your application is only available as a Docker image, an LXC container is the better choice. Containers are lightweight, fast, and efficient. However, if you’re running a more complex system with multiple interacting applications, need complete OS independence, or require strong isolation, a VM is the better solution.

Proxmox makes it easy to work with both LXC and VMs, so understanding your workload’s needs will help you choose the right tool for the job. By leveraging the strengths of each, you can optimize performance, security, and resource usage in your environment.

Five Important Technical SEO Tips for 2025

As we step into 2025, staying ahead in the digital landscape requires adopting cutting-edge practices. To help you optimize your website, here are five important technical SEO tips for 2025 that will enhance your site’s performance and search visibility. From improving loading speeds to balancing lazy loading and server-side rendering, these tips will ensure your website meets modern SEO standards.

As a web developer, part of our job is to make sure we’re using available optimizations and adhering to the latest best coding practices. Many of these optimizations rely heavily on clever use of HTML and JavaScript, so technical expertise is key.

1. Optimize LCP Loading Speed

Largest Contentful Paint (LCP) is a critical metric in Google’s Core Web Vitals. It measures the time it takes for the largest visible content element—usually an image or block of text—to load and become visible to users. A slow LCP can negatively impact user experience and search rankings.

To improve LCP:

  • Avoid lazy loading the largest image or hero image on your page. This ensures the browser can prioritize its rendering immediately.
  • Use efficient image formats like WebP or AVIF for better compression.
  • Preload critical resources, such as fonts and above-the-fold images, to help the browser fetch them early.
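
For example, a preload hint for an above-the-fold hero image can be placed in the <head>; the file name here is just a placeholder:

<link rel="preload" href="hero.webp" as="image" type="image/webp">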

These changes often involve direct modifications to your HTML structure and strategic resource management through JavaScript to ensure optimized delivery.

2. Lazy Load Other Images

While the largest image should not be lazy-loaded, smaller images and those below the fold can and should be. Lazy loading these assets reduces the initial page size and improves loading speed, leading to a better user experience and higher SEO performance.

Use the loading="lazy" attribute for images or leverage JavaScript libraries for more control. For example:

<img src="example.jpg" loading="lazy" alt="Descriptive Alt Text">

Strategic use of HTML attributes and JavaScript allows you to control how and when resources load, ensuring optimal performance.

3. Lazy Load Unnecessary JavaScript and Unnecessary Content Below the Fold

Lazy loading isn’t just for images—you can also apply it to JavaScript and other content below the fold. This minimizes the amount of resources the browser processes initially, reducing the time to interactive (TTI).

Here’s an example using React:

import React, { lazy, Suspense } from 'react';

// The modal is split into its own bundle chunk and only fetched on demand
const LoginModal = lazy(() => import('./LoginModal'));

function App() {
  const [showModal, setShowModal] = React.useState(false);

  return (
    <div>
      <button onClick={() => setShowModal(true)}>Open Login</button>
      {/* The chunk downloads the first time the user opens the modal */}
      {showModal && (
        <Suspense fallback={<div>Loading...</div>}>
          <LoginModal />
        </Suspense>
      )}
    </div>
  );
}

export default App;

This approach defers loading the login modal until the user clicks the button. Frameworks like Vue and Angular, as well as vanilla JavaScript, support similar lazy loading techniques through dynamic import().

Implementing these optimizations requires careful use of JavaScript to balance resource management and functionality.

4. Don’t Lazy Load Content Vital to Search Engines

While lazy loading has its benefits, overusing it can backfire. Content critical for SEO, like metadata, structured data, and primary text visible to users, should not be lazy-loaded. Search engines may not fully index this content, harming your rankings.

To ensure vital information is always available:

  • Use Server-Side Rendering (SSR) for pages you need to rank well in search engines. SSR renders content on the server before sending it to the browser, ensuring it’s accessible to search engines and users.
  • Prioritize preloading critical content while deferring less essential resources.

This balance often involves structuring your HTML so critical content is delivered upfront while JavaScript handles secondary features. In short, avoid over-optimization that harms your site's accessibility and SEO.

5. Minimize Time to Interactive

Time to Interactive (TTI) measures how quickly a page becomes fully interactive. A high TTI frustrates users and can hurt rankings.

To optimize TTI:

  • Use SSR to render the initial view faster.
  • Choose smaller, lightweight JavaScript libraries and avoid running unnecessary scripts on load.
  • Combine lazy loading with efficient bundling to defer non-critical scripts until needed.

Reducing TTI requires fine-tuning your JavaScript execution and crafting your HTML to load essential resources efficiently. By optimizing these elements, you can enhance user satisfaction and meet Google’s performance benchmarks.
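For example, simply making sure scripts don't block HTML parsing goes a long way, and non-essential features can be fetched on demand. The element ID and file paths below are hypothetical:

<!-- Parsed in parallel with the document, executed in order once parsing finishes -->
<script src="/js/app.js" defer></script>

<!-- A non-critical feature fetched only when the user asks for it -->
<script type="module">
  const btn = document.querySelector('#chat-button'); // hypothetical element
  btn?.addEventListener('click', async () => {
    const { openChat } = await import('/js/chat-widget.js'); // hypothetical module
    openChat();
  });
</script>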

Conclusion

By following these five technical SEO tips for 2025, you can improve your site’s speed, usability, and search engine visibility. Many of these strategies rely on making deliberate adjustments to your HTML and JavaScript to strike the perfect balance between performance and accessibility. Stay proactive, and your website will thrive in the ever-changing SEO landscape.

Importing a QCOW2 Image in Proxmox https://www.techopt.io/servers-networking/importing-a-qcow2-image-in-proxmox Tue, 31 Dec 2024 17:01:00 +0000

If you’re setting up a virtual machine with a QCOW2 image in Proxmox, you might feel overwhelmed at first. However, understanding the steps can make it a manageable task. This guide will explain how to import a QCOW2 image into Proxmox, including uploading the file using SFTP via the command line.

Working with QCOW2 images in Proxmox involves a few terminal commands and some configuration in the Proxmox web interface, but it's straightforward after the first time.

Steps to Import a QCOW2 Image in Proxmox

1. Access Your Proxmox Terminal

Log in to the terminal of your Proxmox host. This can be done directly or via SSH. The terminal is essential for transferring and managing files during the setup process.

2. Prepare the QCOW2 Storage Directory

Store QCOW2 images in a dedicated directory for better organization. Proxmox doesn’t create this directory by default, so you may need to set it up. A good place to store QCOW2 images is in /var/lib/vz/template/qcow.

  • Check if the directory exists:

    ls /var/lib/vz/template/qcow

  • If it doesn’t exist, create it:

    mkdir -p /var/lib/vz/template/qcow

3. Upload the QCOW2 Image Using SFTP

Uploading the QCOW2 image requires an SFTP connection. Here’s how to do it using the command line:

  1. Open a terminal on your local machine.
  2. Use the sftp command to connect to your Proxmox host. Replace user with your Proxmox username and host with the IP address of your Proxmox server:

    sftp user@host

  3. Navigate to the directory where you want to upload the image:

    cd /var/lib/vz/template/qcow

  4. Upload the QCOW2 file from your local system:

    put /path/to/your_image.qcow2

    Replace /path/to/your_image.qcow2 with the actual path to the QCOW2 file on your local machine.
  5. Exit the SFTP session with bye.
[Screenshot: uploading a QCOW2 image to Proxmox with SFTP]
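As a side note, if you'd rather skip the interactive session, a single scp command uploads the file in one step. This is a sketch assuming the same user, host, and paths as above (the account needs write access to the target directory, typically root):

scp /path/to/your_image.qcow2 user@host:/var/lib/vz/template/qcow/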

4. Create a New Virtual Machine

Now, create a VM in Proxmox using the web interface:

  1. Log in to the Proxmox web interface.
  2. Select Create VM and configure the basic settings, such as the name and operating system. Make sure to select Do not use any media on the OS tab.
  3. On the Disks tab, leave all settings at their default values:
    • The default settings work because QCOW2 images already define their initial drive size. If needed, you can expand the drive size later.
    • Select any bus type for now, as you will replace this with the QCOW2 disk.
[Screenshot: selecting Do not use any media on the OS tab]

5. Import the QCOW2 Image

After creating the VM, use the qm importdisk command to import the QCOW2 image to the VM.

  1. Identify the VM’s ID from the Proxmox web interface (e.g., 120).
  2. Run the following command in the Proxmox terminal:

    qm importdisk 120 /var/lib/vz/template/qcow/your_image.qcow2 local

This imports the QCOW2 image as an unused disk linked to the VM.

Make sure to replace the VM ID, image location, and storage name with the correct values for your system.

Here is a sample output from the command:

importing disk '/var/lib/vz/template/qcow/haos_ova-14.1.qcow2' to VM 120 ...
transferred 0.0 B of 32.0 GiB (0.00%)
transferred 396.5 MiB of 32.0 GiB (1.21%)
transferred 737.3 MiB of 32.0 GiB (2.25%)
transferred 1.1 GiB of 32.0 GiB (3.36%)
transferred 1.4 GiB of 32.0 GiB (4.47%)
transferred 1.8 GiB of 32.0 GiB (5.59%)
transferred 2.1 GiB of 32.0 GiB (6.70%)
transferred 2.5 GiB of 32.0 GiB (7.81%)
transferred 2.9 GiB of 32.0 GiB (8.92%)
...
transferred 20.9 GiB of 32.0 GiB (65.44%)
transferred 21.3 GiB of 32.0 GiB (66.56%)
transferred 21.7 GiB of 32.0 GiB (67.67%)
transferred 22.0 GiB of 32.0 GiB (68.78%)
transferred 22.4 GiB of 32.0 GiB (69.89%)
transferred 22.7 GiB of 32.0 GiB (71.01%)
transferred 32.0 GiB of 32.0 GiB (100.00%)
unused0: successfully imported disk 'VMs:vm-120-disk-3'

In this case, the command created a new 32.0 GiB disk from our image (on a storage named VMs rather than local) that we can now attach to the VM.

6. Attach the Disk to the VM

Once imported, attach the QCOW2 image to the VM:

  1. In the Proxmox web interface, go to the VM and select Hardware.
  2. Find the unused disk and attach it as a VirtIO Block Device or a SATA device (if your guest OS doesn’t support QEMU agent and VirtIO drivers).
  3. Optionally, remove or detach the original disk created during VM setup if it’s no longer needed.
[Screenshot: adding the unused disk created from the QCOW2 image]
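If you prefer to stay in the terminal, the same attachment can be done with qm set. This sketch uses the disk name from the sample output above; adjust the VM ID, bus, and storage for your system:

qm set 120 --virtio0 VMs:vm-120-disk-3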

7. Update the Boot Order

Ensure the VM boots from the newly attached QCOW2 image:

  1. Go to the Options tab in the VM’s settings.
  2. Set the newly attached VirtIO disk as the primary boot device.
[Screenshot: setting the disk imported from the QCOW2 image as the primary boot entry]

8. Start the VM

Boot up the VM. If everything is set up correctly, the system should load from the QCOW2 image!

Remarks

  • When creating the VM, default disk settings are sufficient since the QCOW2 image specifies its own initial drive size. You can expand the size later if needed (see the example after this list).
  • The sftp command is a straightforward way to upload the QCOW2 file. If you prefer a graphical interface, tools like FileZilla can achieve the same result.
  • Use the /var/lib/vz/template/qcow directory to keep QCOW2 images organized and accessible for future use. You can use any directory, but this is an easy place to remember since it’s with other templates.
  • You can also use any other method to upload your QCOW2 images to this directory, or even download them to this directory directly with a tool such as wget.
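For instance, growing the imported disk later is a one-liner with qm resize. This is a sketch; the VM ID, disk bus, and size increment are values to adjust for your setup (+16G grows the disk by 16 GiB):

qm resize 120 virtio0 +16G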

Inform Search Engines of Your Website to Rank Quicker https://www.techopt.io/servers-networking/inform-search-engines-of-website-to-rank-quicker Wed, 25 Dec 2024 00:57:14 +0000

To ensure your website ranks quicker, one of the most effective SEO strategies is to inform search engines about your pages. By submitting individual URLs and sitemaps to major search engines, you can speed up the process of indexing and help your site get discovered faster. This not only increases your chances of ranking higher, but also improves visibility in search results. In this blog, we will explore how to use webmaster tools from Google and Bing to submit your website’s content, along with the benefits of taking these steps.

Why It’s Important to Inform Search Engines

Search engines like Google and Bing rely on web crawlers to discover and index content on the internet. By submitting your individual pages and sitemaps, you give these crawlers a more direct way to access your content. This process can help new pages appear in search results faster, making it easier for users to find your site. Additionally, informing search engines of changes or updates to your content helps keep your rankings accurate and up-to-date.

Inform Google Using Google Search Console

Google Search Console (GSC) is a powerful tool that allows you to monitor and improve your website’s presence in Google search results. Here’s how to use it to inform search engines about your website:

  1. Sign In to Google Search Console
    If you haven’t already, sign in or create a Google Search Console account. Add your website by verifying ownership through methods like HTML tags, DNS records, or Google Analytics.
  2. Submit a Sitemap
    To submit a sitemap, navigate to the Sitemaps section under the Index tab in GSC. Here, you can enter the URL of your sitemap and click Submit. If you don’t have a sitemap, there are several tools and plugins (like Yoast SEO) that can help you generate one.
  3. Submit Individual Pages
    If you want to inform Google about a specific page quickly, you can use the URL Inspection Tool. Enter the URL in the search bar, then click Request Indexing. This is especially useful when you’ve updated a page or published new content.
  4. Monitor Performance
    In the Performance section, you can track how your submitted pages are performing in search results, including clicks, impressions, and average position.
[Screenshot: submitting an individual page in Google Search Console]

By following these steps, you can efficiently submit your website’s pages and sitemaps to Google, helping it rank faster.
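For reference, a sitemap is just an XML file that lists the URLs you want crawled. A minimal one looks something like this (the URL and date are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/my-first-post</loc>
    <lastmod>2024-12-01</lastmod>
  </url>
</urlset>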

[Screenshot: informing search engines of website URLs and changes with a sitemap]

Inform Microsoft Bing Using Bing Webmaster Tools

Bing Webmaster Tools provides a similar service to Google’s Search Console. It helps you submit URLs and sitemaps for indexing. Here’s how to get started:

  1. Sign Up for Bing Webmaster Tools
    First, sign up for a free Bing Webmaster Tools account and add your website by verifying ownership via HTML meta tags, XML file, or a DNS record.
  2. Submit Your Sitemap
    Once your site is verified, navigate to the Sitemaps section and click on Submit a Sitemap. Enter your sitemap’s URL and submit it to help Bing crawl your site faster.
  3. Submit Individual URLs
    If you want Bing to crawl a specific page, go to the URL Submission section under the Configure My Site tab. Paste the URL of the page you want to submit and click Submit. This helps Bing index important or updated pages more quickly.
  4. Track Your Performance
    Bing Webmaster Tools also allows you to monitor the performance of your site by providing data on search queries, backlinks, and indexing status. Check this regularly to ensure your pages are indexed properly.

By submitting your content through Bing Webmaster Tools, you can help your site appear in Bing’s search results faster.

[Screenshot: informing Bing of website URLs and changes with a sitemap]

Additional Benefits of Informing Search Engines

Submitting your website’s pages and sitemaps to search engines can have several key benefits, including:

  • Faster Indexing: When you inform search engines directly, they can crawl your site faster, reducing the time it takes for your new content to appear in search results.
  • Better Visibility: Submitting sitemaps and URLs ensures that all your pages are indexed, improving the chances of ranking higher.
  • Error Resolution: Webmaster tools can highlight issues with indexing or crawling, allowing you to fix errors that might hinder your site’s performance.
  • Control Over Content: By directly submitting pages and sitemaps, you gain more control over how your content is indexed and ranked in search results.

Conclusion

In conclusion, informing search engines of your website is a simple but effective way to help your site rank quicker and improve its visibility. By using the webmaster tools from Google and Bing, you can submit individual pages and sitemaps to ensure that search engines crawl your site more efficiently. The benefits include faster indexing, better visibility, and more control over your content. Regularly submitting your updates can help maintain a healthy site that ranks well in search results.

By following this SEO tip, you can ensure your website stays competitive and discoverable. So, start submitting your pages today and enjoy quicker rankings and increased traffic!
