TechOpt https://www.techopt.io/ Programming, servers, Linux, Windows, macOS & more Sun, 19 Oct 2025 17:33:11 +0000

Upgrade openSUSE Leap to 16.0 from 15.6 https://www.techopt.io/linux/upgrade-opensuse-leap-to-16-0-from-15-6 Sun, 19 Oct 2025 17:33:07 +0000

Upgrading openSUSE Leap has evolved! If you’ve tried the old method using the --releasever flag that I talked about in my 15.5 to 15.6 upgrade guide, you’ve probably run into problems. In this updated guide, I’ll cover the new, official and recommended method to upgrade openSUSE Leap to 16.0 from 15.6 using the openSUSE Migration Tool.

Why the Old --releasever Method No Longer Works

In my previous guide, I showed how you could upgrade openSUSE Leap releases with:

sudo zypper --releasever=16.0 dup

That used to work reliably in earlier Leap versions. However, with SLE (SUSE Linux Enterprise) 16, SUSE introduced major backend and repository format changes. The new openSUSE Leap 16.0 release merges more closely with SLE infrastructure, which means the repositories and release metadata formats have changed significantly.

If you try to use the --releasever flag now, you’ll likely see repository or GPG key errors during the upgrade. That’s because the old repository layout no longer matches Leap 16’s new structure.

The New Official Method: opensuse-migration-tool

Instead of manually changing repositories, Leap 16 introduces a dedicated migration utility designed to handle all the details for you. The tool automatically adjusts your repositories, resolves new dependencies, and manages system configuration changes.

Step 1: Install the Migration Tool

First, fully update your Leap 15.6 system:

sudo zypper refresh
sudo zypper up

Then install the new migration package:

sudo zypper install opensuse-migration-tool

Step 2: Run the Migration Process

Start the migration utility:

sudo opensuse-migration-tool

The tool will analyze your current system, identify obsolete packages, and suggest repository transitions for Leap 16.0. It will prompt you to confirm before proceeding with the distribution upgrade.

Upgrade openSUSE with opensuse-migration-tool

You’ll want to select openSUSE Leap 16.0 with the arrow keys, then select OK and hit Enter.

You will probably encounter the following screen about disabling third-party repositories:

Repositories not recognized opensuse-migration-tool

This happens because Leap 16.0 changes how repositories are structured. You can simply hit Enter to confirm.

The upgrade process will then start! Wait a few minutes, then reboot into Leap 16.0 once the process finishes.

opensuse-migration-tool run complete

Step 3: Reboot into Leap 16.0

After the migration completes, simply reboot:

sudo reboot

You’ll now be running openSUSE Leap 16.0 with the updated repository structure.
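To double-check after the reboot, you can read the version straight from /etc/os-release. A small sketch (the helper name and its optional file argument are my own, added only so the logic can be exercised against a sample file):

```shell
# Print the VERSION= value from an os-release style file.
# With no argument it reads the real /etc/os-release.
leap_version() {
  grep '^VERSION=' "${1:-/etc/os-release}" | tr -d '"' | cut -d= -f2
}
```

On the upgraded system, `leap_version` with no argument should print something like 16.0.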

Troubleshooting Tips

  • Do not use zypper dup --releasever=16.0. It may break dependencies.
  • If you encounter repository signature errors, remove or rename old .repo files in /etc/zypp/repos.d/ before re-running the migration tool.
  • Ensure your disk has sufficient space and that all third-party repositories are disabled before starting the upgrade.
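As a sketch of the second tip, here is one way to back up (rather than delete) the old .repo files before re-running the migration tool. The helper is my own; on a real system you would run it as root against /etc/zypp/repos.d:

```shell
# Move all .repo files into a dated backup subdirectory.
# Real-world usage (as root): backup_repos /etc/zypp/repos.d
backup_repos() {
  repo_dir="$1"
  backup_dir="$repo_dir/backup-$(date +%Y%m%d)"
  mkdir -p "$backup_dir"
  for f in "$repo_dir"/*.repo; do
    [ -e "$f" ] || continue   # glob didn't match: nothing to back up
    mv "$f" "$backup_dir/"
  done
}
```

This keeps the old definitions around in case you need to inspect them after the migration.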

Final Thoughts

The openSUSE team has streamlined the upgrade path to make system migrations more reliable and aligned with SUSE’s enterprise ecosystem. While older zypper --releasever methods are now deprecated, the openSUSE Migration Tool simplifies the process and ensures compatibility with the new Leap 16 architecture.

How to Make Ethernet Cables: A Complete Step-by-Step Guide https://www.techopt.io/servers-networking/how-to-make-ethernet-cables-a-complete-step-by-step-guide Sun, 21 Sep 2025 18:39:42 +0000

Learning how to make ethernet cables yourself is a cost-effective and customizable way to build a network setup that fits your exact needs. Buying premade cables limits you to fixed lengths and can quickly get expensive, especially if you need several cables of different sizes. By crimping your own cables, you can create perfect lengths for your home or office, improve cable management, and even ensure higher quality by using better materials.

This comprehensive guide will walk you through everything you need to know, from selecting the right cable and connectors, to crimping, testing, and troubleshooting your custom cables.


Why Make Your Own Ethernet Cable?

There are several advantages to building your own network cables:

  • Custom Lengths: No more coiled-up mess or cables that come up just short. Instead, you can make cables the exact length you need.
  • Cost Savings: Bulk ethernet cable and connectors are far cheaper per foot than buying pre-made cables.
  • Better Quality Control: You choose the cable type, shielding, and connectors, avoiding cheap copper-clad aluminum (CCA) cables.
  • Skill Building: This is a useful DIY skill for anyone interested in networking, home labs, or IT work.

Tools and Materials Needed

Here’s what you’ll need to make DIY ethernet cables successfully:

  • Ethernet Cable (Cat6, Cat6a, or Cat5e): Prefer solid copper rather than CCA for best performance and compliance with standards. Cat6 bulk cable on Amazon
  • RJ45 Connectors: Choose connectors rated for your cable type (Cat6, Cat6a, or Cat5e). Passthrough connectors are easier for beginners. Cat6 RJ45 passthrough connectors
  • RJ45 Crimping Tool: Used to secure connectors to the cable. Most crimpers also include a wire cutter and stripper. RJ45 crimp tool
  • Cable Tester (Recommended, but optional): Ensures your wiring is correct and detects any faults. Basic cable tester or advanced network tester
  • Strain Relief Boots (Recommended, but optional): Add durability to the connector ends. Strain relief boots
  • Wire Cutters/Scissors: For trimming cable and internal wires. Wire strippers/cutters

Tip: Avoid “Cat7” or “Cat8” cables sold cheaply online. These are not officially recognized Ethernet standards and often use questionable materials.


Step 1: Measure and Cut Your Cable

Pull the amount of cable you need from the box, then add roughly 30 cm (about 1 foot) of extra length for trimming and flexibility. Cut the cable cleanly using the crimper’s cutting blade or a pair of wire cutters.

cut the ethernet cable

Step 2: Strip the Outer Jacket

Use the stripping blade on your crimping tool (or a dedicated wire stripper) to remove 5–8 cm (2–3 inches) of the outer jacket from both ends of the cable. Be careful not to nick the internal wires.

strip outer insulation of ethernet cable

After that, remove the internal string, if present.

cut the string in the cable

At this stage, slide on the strain relief boots if you’re using them; forgetting them is a common mistake, so it’s best to add them now. The larger side should face outward from the end of the cable on both sides.

add strain relief boots to cable

Step 3: Untwist and Arrange the Wires

Inside the jacket are four twisted pairs of wires (8 total). Untwist the pairs and straighten them.

Then, arrange them in either T-568A or T-568B wiring order. Use the same standard on both ends.

T-568A Wiring Order:

  1. White/Green
  2. Green
  3. White/Orange
  4. Blue
  5. White/Blue
  6. Orange
  7. White/Brown
  8. Brown

T-568B Wiring Order (Most Common in North America):

  1. White/Orange
  2. Orange
  3. White/Green
  4. Blue
  5. White/Blue
  6. Green
  7. White/Brown
  8. Brown
T-568A vs T-568B wiring standard diagram
T-568A vs. T-568B ethernet standards wiring diagram.

Lay the wires flat and keep them in the correct order. Finally, flatten them gently with your thumb for easier insertion.

T-568B standard wired ethernet cable
My wires are arranged in the T-568B standard.
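If you’d like a quick reference at the terminal while wiring, here’s a throwaway helper (my own addition, not part of the original guide) that prints the pin order for either standard:

```shell
# Print the 8-pin wiring order (pin 1 first) for T-568A or T-568B.
pinout() {
  case "$1" in
    T-568A|568A)
      printf '%s\n' "White/Green" "Green" "White/Orange" "Blue" \
                    "White/Blue" "Orange" "White/Brown" "Brown" ;;
    T-568B|568B)
      printf '%s\n' "White/Orange" "Orange" "White/Green" "Blue" \
                    "White/Blue" "Green" "White/Brown" "Brown" ;;
    *) echo "usage: pinout T-568A|T-568B" >&2; return 1 ;;
  esac
}
```

For example, `pinout T-568B` lists the eight colors top to bottom in the order you should arrange the wires.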

Step 4: Trim Wires to Length

For non-passthrough connectors, trim the wires so that they are just long enough to reach the end of the connector when inserted. Cut them evenly so they line up perfectly.

For passthrough connectors, leave them a bit longer since the ends will protrude and be trimmed after crimping.


Step 5: Insert Wires Into the RJ45 Connector

Slide the wires into the connector carefully, ensuring they remain in the correct order. Push firmly until:

  • Each wire reaches the very end of the connector.
  • The outer jacket passes the strain relief tab for a strong connection.
wires in RJ45 connector

For passthrough connectors, the wires should stick out slightly from the other side.

As soon as you confirm the order, you’re ready to crimp.


Step 6: Crimp the Connector

Place the connector into the crimping tool and squeeze firmly until the pins press down into the wires and the strain relief tab locks onto the outer jacket.

crimping the RJ45 connector to the cable

Additionally, for passthrough connectors, trim the wire ends flush with the connector after crimping.

Then, repeat this entire process for the other end of the cable!


Step 7: Test Your Cable

Use a cable tester to confirm that all eight wires are connected in the correct order.

testing the ethernet cable with a cable tester

The lights on both ends should flash in sequence.

If any wires are misaligned, cut off the connector and repeat the process on that side.

Once confirmed, your custom ethernet cable is ready for use!


Cat6 vs Cat6a vs Cat5e: Which Should You Choose?

Not all Ethernet cables are created equal, so here’s a quick comparison to help you decide:

Category | Maximum Speed | Maximum Bandwidth | Maximum Recommended Length | Best Use Case
Cat6 | Up to 1 Gbps (10 Gbps up to 55 m) | 250 MHz | 100 m | Home and small office networks, gaming, streaming
Cat6a | 10 Gbps up to 100 m | 500 MHz | 100 m | High-performance networks, data-heavy tasks, future-proofing
Cat5e | 1 Gbps | 100 MHz | 100 m | Budget builds, basic home networking

Recommendation: Use Cat6 for most home setups, Cat6a if you want to future-proof or need maximum performance for longer runs, and Cat5e only if you already have it on hand or are working with very low-cost builds.
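The recommendation above can be condensed into a small decision helper. This is my own sketch of the logic, not an official rule (and Cat6 is of course also fine for 1 Gbps runs):

```shell
# Suggest a minimum cable category for a target speed (Gbps) and run
# length (meters), following the comparison table above.
suggest_cable() {
  speed_gbps=$1
  length_m=$2
  if [ "$length_m" -gt 100 ]; then
    echo "none: keep twisted-pair runs under 100 m"
  elif [ "$speed_gbps" -le 1 ]; then
    echo "Cat5e"
  elif [ "$length_m" -le 55 ]; then
    echo "Cat6"
  else
    echo "Cat6a"
  fi
}
```

For example, `suggest_cable 10 90` points you at Cat6a, since Cat6 only carries 10 Gbps up to 55 m.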


Frequently Asked Questions

Q: Can I mix T-568A on one end and T-568B on the other?
A: Only if you are intentionally creating a crossover cable. Otherwise, use the same wiring standard on both ends.

Q: How long can an Ethernet cable be?
A: Standard twisted-pair Ethernet cables (Cat5e, Cat6, Cat6a) are rated for up to 100 meters (328 feet) in total length. This includes patch cables at both ends. Beyond this length, you may experience signal loss or reduced speeds.

For 10 Gbps on Cat6, keep runs under 55 meters; use Cat6a for longer 10 Gbps runs.

Q: Do I really need a cable tester?
A: While optional, it saves time and frustration by catching miswires before you plug into your network.

Q: Should I ever use CCA cable?
A: No. Instead, always use solid copper cable for performance, safety, and compliance with Ethernet standards.


Troubleshooting Common Issues When Making Ethernet Cables

  • Tester Shows Miswired Pair: Re-check wiring order on both ends, re-crimp if needed.
  • Cable Doesn’t Click Securely: Ensure the strain relief tab is pressed down properly during crimping. Also check that the covers on your strain relief boots aren’t so stiff that they press down on the release tabs of the RJ45 connectors; if they are, work the rubber with your thumbs a bit to stretch and break them in.
  • Poor Network Speeds: Test on another device and verify that you’re using solid copper cable, not CCA.

Final Remarks

Learning how to make ethernet cables saves money, eliminates clutter, and gives you full control over your network setup. Whether you’re wiring a home office, building a home lab, or just need a few short patch cables, this DIY approach is a game changer.

Practice a few times and you’ll be making professional-quality network cables in minutes!

If you prefer video, you can watch my guide below.

GPU Passthrough to Proxmox LXC Container for Plex (or Jellyfin) https://www.techopt.io/servers-networking/gpu-passthrough-to-proxmox-lxc-container-for-plex-or-jellyfin Sun, 24 Aug 2025 20:40:06 +0000

When I added an Intel Arc GPU to my Proxmox server for Plex hardware transcoding, I quickly realized there isn’t much solid documentation on how to properly passthrough a GPU to a Proxmox LXC container. Most guides focus on VMs, not LXCs. After some trial and error, I figured out a process that works. These same steps can also apply if you’re running Jellyfin inside an LXC.

In this guide, I’ll walk you through enabling GPU passthrough to a Proxmox LXC container for Plex (or Jellyfin) step by step.

Step 1: Enabling PCI(e) Passthrough in Proxmox

The first step is enabling PCI passthrough at the Proxmox host level. I mostly followed the official documentation here: Proxmox PCI Passthrough Wiki.

I’ll summarize what should be done below.

Enable IOMMU in BIOS

Before you continue, enable IOMMU (Intel VT-d / AMD-Vi) in your system’s BIOS. This setting lets GPUs pass directly through to containers or VMs.

Each BIOS is different, so if you’re not sure, check your motherboard’s instruction manual.

Enable IOMMU Passthrough Mode

Not all hardware supports IOMMU passthrough, but if yours does you’ll see a big performance boost. Even if your system doesn’t, enabling it won’t cause problems, so it’s worth turning on.

Edit the GRUB configuration with:

nano /etc/default/grub

Locate the GRUB_CMDLINE_LINUX_DEFAULT line and add:

iommu=pt

Intel vs AMD Notes

  • On Intel CPUs with Proxmox older than v9 (kernel <6.8), also add: intel_iommu=on
  • On AMD CPUs, IOMMU is enabled by default.
  • On Intel CPUs with kernel 6.8 or newer (Proxmox 8 updated or Proxmox 9+), you don’t need the intel_iommu=on parameter.

With an AMD CPU on Proxmox 9, my file looked like this in the end:

grub config to passthrough gpu to proxmox lxc with iommu
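For reference, the resulting line should look something like this (a sketch: any existing flags, such as quiet, stay in place and may differ on your system):

```
# /etc/default/grub on an AMD CPU with Proxmox 9: only iommu=pt added
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"
```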

Save the file by pressing CTRL+X, then Y to confirm, then Enter.

Then update grub by running:

update-grub

Load VFIO Kernel Modules

Next, we need to load the VFIO modules so the GPU can be bound for passthrough. Edit the modules file with:

nano /etc/modules

Add the following lines:

vfio
vfio_iommu_type1
vfio_pci

My file looked like this in the end:

added kernel modules needed for gpu passthrough to lxc in proxmox

Save and exit, then update initramfs:

update-initramfs -u -k all

Reboot and Verify

Reboot your Proxmox host and verify the modules are loaded:

lsmod | grep vfio

You should see the vfio modules listed. This is what my output looks like:

vfio kernel modules are loaded

If you don’t get any output, the kernel modules have not loaded correctly and you probably forgot to run the update-initramfs command above.

To double-check IOMMU is active, run:

dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

Depending on your hardware, you should see confirmation that IOMMU or Directed I/O is enabled:

output when IOMMU is active (or similar)

Step 2: Finding the GPU Renderer Device Path

Now that PCI passthrough support is enabled, the next step is to figure out the device path of the renderer for the GPU you want to pass through.

Run the following on your Proxmox host:

ls /dev/dri

This will print all the detected GPUs. For example, my output looked like this:

by-path  card0  card1  renderD128  renderD129

If you only have a single GPU, you can usually assume it will be something like renderD128. But if you have multiple GPUs (as I did), you’ll need to identify which renderer belongs to your Intel Arc card.

First, run:

lspci

This will list all PCI devices. From there, I found my Intel Arc GPU at 0b:00.0:

lspci output showing my intel arc GPU card

Next, run:

ls -l /dev/dri/by-path/

My output looked like this:

lrwxrwxrwx 1 root root  8 Aug 24 11:54 pci-0000:04:00.0-card -> ../card0
lrwxrwxrwx 1 root root 13 Aug 24 11:54 pci-0000:04:00.0-render -> ../renderD129
lrwxrwxrwx 1 root root  8 Aug 24 11:54 pci-0000:0b:00.0-card -> ../card1
lrwxrwxrwx 1 root root 13 Aug 24 11:54 pci-0000:0b:00.0-render -> ../renderD128

From this, I confirmed that my Intel Arc GPU was associated with renderD128. That means the full device path I need to pass to my LXC container is:

/dev/dri/renderD128
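The two lookups above can be combined into a small helper. This is my own sketch (the DRI_BASE override exists only so the function can be exercised outside a real /dev/dri; on the Proxmox host you’d just call it with your GPU’s PCI address):

```shell
# Resolve the render node behind a PCI address via the by-path symlinks.
# On my host: render_node_for_pci 0000:0b:00.0  ->  /dev/dri/renderD128
render_node_for_pci() {
  base="${DRI_BASE:-/dev/dri}"
  readlink -f "$base/by-path/pci-$1-render"
}
```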

Step 3: Passthrough GPU Device to the Plex LXC Container

Now that we know the correct device path, we can pass it through to the LXC container.

  1. Stop the container you want to pass the GPU into.
  2. In the Proxmox web UI, select your Plex LXC container.
  3. Go to Resources.
  4. At the top, click Add → Device Passthrough.
    steps to get to device passthrough menu
  5. In the popup, set Device Path to the GPU renderer path you identified in Step 2 (in my case, /dev/dri/renderD128).
  6. In the Access mode in CT field, type: 0666
    add passthrough device to lxc config
  7. Click Add.
  8. Once added, you can start the container again.
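For reference, the web UI writes this passthrough into the container’s config file. A sketch of what that entry likely looks like (101 is a hypothetical container ID, and I’m assuming the devN device-passthrough syntax of recent Proxmox releases):

```
# /etc/pve/lxc/101.conf — device passthrough entry added by the web UI
dev0: /dev/dri/renderD128,mode=0666
```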

A Note About Permissions

Normally, you would configure a proper UID or GID in CT for the render group inside the container so only that group has access. However, in my testing I wasn’t able to get the GPU working correctly with that method.

Using 0666 permissions allows read/write access for everyone. Since this is a GPU device node and not a directory containing files, I’m not too concerned, but it’s worth noting for anyone who takes their Linux permissions very seriously.

Step 4: Installing GPU Drivers Inside the LXC Container

With the GPU device passed through, the container now needs the proper drivers installed. This step varies depending on the Linux distribution you’re running inside the container.

Some distros may already include GPU drivers by default. But if you reach Step 5 and don’t see your GPU as an option in Plex (or Jellyfin), chances are you’re missing the driver inside your container.

In my case, I’m using a Debian container, which does not include Intel Arc drivers by default. Here’s what I did:

  1. Edit your apt sources to enable the non-free repo: nano /etc/apt/sources.list
  2. Add non-free to the end of each Debian repository line:
    my sources.list after adding the non-free Debian repository
  3. Refresh apt sources: apt update
  4. Install the Intel Arc driver package: apt install intel-media-va-driver-non-free
  5. Restart the container.
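As a sketch of step 2, assuming Debian 12 (“bookworm”), an edited repository line might look like:

```
# /etc/apt/sources.list — non-free components enabled
deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware
```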

For Debian, I followed the official documentation here: Debian Hardware Video Acceleration Wiki.

Note: These steps will vary depending on your GPU make, model and container distribution. Make sure to check the official documentation for your hardware and distro.

Step 5: Enabling Hardware Rendering in Plex

Now that your GPU and drivers are ready, the final step is to enable hardware transcoding inside Plex.

  1. Open the Plex Web UI.
  2. Go to Settings (top-right corner).
  3. In the left sidebar, scroll down and click Transcoder under Settings.
  4. Make sure the following options are checked:
    • Use hardware acceleration when available
    • Use hardware-accelerated video encoding
  5. Under Hardware transcoding device, you should now see your GPU. If not, double-check Step 4. In my case, it showed up as Intel DG2 [Arc A380]
  6. Select your GPU and click Save Changes.
Steps to enable hardware encoding in plex ui

Testing Hardware Transcoding

To verify that hardware transcoding is working:

  1. Play back any movie or TV show.
  2. In the playback settings, change the quality to a lower resolution that forces Plex to transcode:
    Force Plex to transcode by selecting a lower quality
  3. While it’s playing, go to Settings → Dashboard.
  4. If the GPU is handling transcoding, you’ll see (hw) beside the stream being transcoded:
    The GPU passthrough to the proxmox lxc for plex can be confirmed working from Plex dashboard: here are the steps

That’s it! You’ve successfully set up GPU passthrough in Proxmox LXC for Plex. These same steps should also work for Jellyfin with minor adjustments for the Jellyfin UI.

Solving Next.js dynamic() Flicker with React.lazy https://www.techopt.io/programming/solving-next-js-dynamic-flicker-with-react-lazy Sat, 12 Jul 2025 21:40:24 +0000

If you’re working with the Next.js App Router and using the dynamic() function for component-level code splitting, you may have encountered an annoying issue: flickering during rendering of a conditionally-rendered dynamic component. Unfortunately, this is a known issue with the App Router and the dynamic function in Next.js. This behavior can degrade user experience, so solving the Next.js dynamic flicker on your website is crucial.

In this post, I’ll break down:

  • Why next/dynamic causes flickering
  • Why it’s worse with nested dynamic components
  • When dynamic() is still safe to use
  • A practical alternative using React.lazy() and Suspense
  • What trade-offs to expect when switching

The Flickering Problem in Next.js

Using dynamic() from next/dynamic is a great way to lazy-load components and reduce your JavaScript bundle size. It also supports options like { ssr: false } to only load components on the client side.

However, when you use these components with the App Router, they often cause a flash of missing or unstyled content, especially during fast navigation or when conditionally rendering dynamic components.

Nested dynamic() calls tend to amplify this issue. For example, a parent component conditionally loading a child via dynamic(), which in turn loads another sub-component dynamically, can make the flickering more severe.

This issue has been reported in GitHub issues and community threads, but a rock-solid fix hasn’t yet made it into the framework.

Interestingly, this flicker seems to affect nested dynamic components more than top-level ones. In my testing, first-level dynamically rendered components used directly in the page file rarely exhibit the issue, which means it’s generally safe to use next/dynamic there to avoid flash of unstyled content (FOUC) during initial mount.

The Better Alternative: React.lazy() + Suspense

One workaround that has proven effective is switching from next/dynamic to native React.lazy() with Suspense. This approach introduces fewer hydration inconsistencies and minimizes flickering, even with nested lazy-loaded components.

Use next/dynamic for components initially rendered on the page, and use React.lazy() for nested components that are rendered conditionally inside those components.

Example 1: Top-level safe usage with next/dynamic

import dynamic from 'next/dynamic';
import { isSignedInAsync } from '../auth';

const PageShell = dynamic(() => import('../components/PageShell'));

export default async function Home() {
  const isSignedIn = await isSignedInAsync();

  if (isSignedIn) return null;

  return <PageShell />;
}

In this example, PageShell is conditionally rendered on the server via next/dynamic. This is safe because the dynamic component is included in the initial HTML sent from the server.

Example 2: Nesting with React.lazy() and Suspense

"use client";
import { lazy, Suspense, useState } from 'react';

const NestedComponent = lazy(() => import('./NestedComponent'));

export default function PageShell() {
  const [showNested, setShowNested] = useState(false);

  return (
    <div>
      <h1>Welcome</h1>
      <button onClick={() => setShowNested(true)}>Load Nested Component</button>
      {showNested && (
        <Suspense fallback={<div>Loading nested...</div>}>
          <NestedComponent />
        </Suspense>
      )}
    </div>
  );
}

We can safely use React.lazy() and Suspense inside our dynamically-rendered PageShell component to conditionally render our NestedComponent, and still benefit from lazy-loading and code-splitting.

If we try using the dynamic function instead of React.lazy here, we may get the Next.js dynamic flicker.

Trade-offs of Using React.lazy() Instead of dynamic

While React.lazy() and Suspense often result in smoother rendering, there are two notable downsides:

1. No Server-Side Rendering

Unlike next/dynamic, which lets you disable or enable SSR, React.lazy() only supports client-side rendering. This might hurt SEO if your component needs to be visible to crawlers.

2. Flash of Unstyled Content (FOUC) on Mount

If you do use a React.lazy() component in server-rendered HTML, you may see a brief flash of unstyled content, because the Next.js bundler doesn’t automatically include styles for components loaded through React.lazy() in the server-rendered output. This limitation can lead to inconsistent rendering.

This is why it’s best to use next/dynamic for components that are visible in the server-rendered HTML, ensuring that styles and structure are present at first paint, while reserving React.lazy() for non-critical or nested components. Using next/dynamic in the initial server-rendered HTML does not seem to cause flickering.

Final Thoughts on Preventing the Next.js Dynamic Flicker

If you’re seeing flickering with next/dynamic and conditional rendering, especially in complex nested layouts, you’re not alone. While the Next.js team continues to evolve App Router, switching to React.lazy() and Suspense where you can may provide a smoother user experience at this time.

To summarize:

  • Use next/dynamic safely for top-level page components
  • Use React.lazy() for nested dynamic imports to reduce flicker

Fixing ‘Sequence contains more than one matching element’ Android Build https://www.techopt.io/programming/fixing-sequence-contains-more-than-one-matching-element-android-build Sun, 06 Jul 2025 01:20:26 +0000

I just spent the last 3 days wrestling with this “Sequence contains more than one matching element” Android build error:

> Task :react-native-device-country:prepareLintJarForPublish
> Task :react-native-device-info:createFullJarRelease
> Task :react-native-device-info:extractProguardFiles
> Task :react-native-device-info:generateReleaseLintModel
> Task :react-native-device-info:prepareLintJarForPublish
> Task :react-native-fbsdk-next:createFullJarRelease
> Task :react-native-fbsdk-next:extractProguardFiles
> Task :app:stripReleaseDebugSymbols
> Task :react-native-fbsdk-next:generateReleaseLintModel
> Task :app:buildReleasePreBundle FAILED
> Task :app:uploadCrashlyticsMappingFileRelease
[Incubating] Problems report is available at: file:///Users/dev/Documents/app/android/build/reports/problems/problems-report.html
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:buildReleasePreBundle'.
> Sequence contains more than one matching element.
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.
Deprecated Gradle features were used in this build, making it incompatible with Gradle 9.0.
You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
For more on this, please refer to https://docs.gradle.org/8.14.1/userguide/command_line_interface.html#sec:command_line_warnings in the Gradle documentation.
BUILD FAILED in 1h 6m 21s
1222 actionable tasks: 1208 executed, 14 up-to-date
node:child_process:966
    throw err;
    ^
Error: Command failed: ./gradlew bundleRelease
    at genericNodeError (node:internal/errors:984:15)
    at wrappedFn (node:internal/errors:538:14)
    at checkExecSyncError (node:child_process:891:11)
    at Object.execSync (node:child_process:963:15)
    at /Users/dev/Documents/app/buildscripts/buildserv/build/build-android.js:8:23
    at Object.<anonymous> (/Users/dev/Documents/app/buildscripts/buildserv/build/build-android.js:11:3)
    at Module._compile (node:internal/modules/cjs/loader:1529:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1613:10)
    at Module.load (node:internal/modules/cjs/loader:1275:32)
    at Module._load (node:internal/modules/cjs/loader:1096:12) {
  status: 1,
  signal: null,
  output: [ null, null, null ],
  pid: 22055,
  stdout: null,
  stderr: null
}
Node.js v20.19.3
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit status 1

If you noticed from the stack trace above, this is a React Native app. I tried deleting my node_modules folder, deleting my build folders, running ./gradlew clean. I would run the build again and again, but nothing worked. The same error kept popping up every time, right near the end of the build.

No amount of --debug or --stacktrace gave me any additional information. Everything the error had to tell me was already in the output above.

ChatGPT and Copilot were no help, suggesting that this is a Kotlin error and most likely resides in a native library I’m using within the app.

But this didn’t make sense, because I was able to build the project on my local system with the latest dependencies just fine. It was only once I sent the build to my GitLab instance, which runs the build on a macOS VM with gitlab-runner, that I started getting this error.

So is the error with the build process, or one of the build tools itself?

Narrowing Down the Cause

After a ton of googling this error, I finally came across a Google IssueTracker post that pointed me in the right direction. This person describes the exact same issue I was having.

This person also says that this error started happening after an upgrade to AGP 8.9.0.

Now we're getting somewhere. It doesn't look like they're using React Native, but at this point I was confident the issue wasn't stemming from anything to do with React Native.

AGP (the Android Gradle Plugin) is a core Android build tool. It's possible that my macOS VM was resolving a newer version of AGP than my local system, which would explain why the error only appeared once I sent the app to build in the macOS VM.

So, what’s the problem?

Well, it can be traced back to this section here in the app’s build.gradle:

...
splits {
    abi {
        reset()
        enable enableSeparateBuildPerCPUArchitecture
        universalApk false
        include "armeabi-v7a", "x86", "arm64-v8a", "x86_64"
    }
}
...

This section of the build.gradle file tells Gradle to output separate APK files for different CPU architectures.

When the bundleRelease Gradle task hits this part of the build.gradle file, it throws the "sequence contains more than one matching element" exception: bundleRelease expects to generate a single universal AAB file that can be uploaded to the Google Play Store, not separate per-architecture APK files.

The Fix

All I did was remove this section from our build.gradle file:

...
splits {
    abi {
        reset()
        enable enableSeparateBuildPerCPUArchitecture
        universalApk false
        include "armeabi-v7a", "x86", "arm64-v8a", "x86_64"
    }
}
...

And it resolved the issue! We weren't using the multiple APKs anyway, so I'm not even sure why we had this in our build.gradle file in the first place. We only upload the single universal AAB to the Play Store.

Additional Notes

In the issue tracker linked above, Google states that they do not plan on fixing this, since they don’t officially support creating multiple APKs when running bundleRelease. However, if you still need multiple APK support, someone on the issue tracker suggests the following fix:

splits {
        abi {
            // Detect app bundle and conditionally disable split abis
            // This is needed due to a "Sequence contains more than one matching element" error
            // present since AGP 8.9.0, for more info see:
            // https://issuetracker.google.com/issues/402800800

            // AppBundle tasks usually contain "bundle" in their name
            val isBuildingBundle = gradle.startParameter.taskNames.any { it.lowercase().contains("bundle") }

            // Disable split abis when building appBundle
            isEnable = !isBuildingBundle

            reset()
            //noinspection ChromeOsAbiSupport
            include("armeabi-v7a", "arm64-v8a", "x86_64")

            isUniversalApk = true
        }
}

This keeps APK splitting enabled for regular APK builds while disabling it for bundle tasks, preventing the "sequence contains more than one matching element" error. Note that this snippet is written in the Gradle Kotlin DSL (isEnable, isUniversalApk, include(...)); in a Groovy build.gradle like the one shown earlier, the equivalent properties are enable and universalApk.

The post Fixing ‘Sequence contains more than one matching element’ Android Build appeared first on TechOpt.

How to Identify Fake FLAC Files https://www.techopt.io/music-production/how-to-identify-fake-flac-files Sun, 15 Jun 2025 23:00:47 +0000

If you're a music enthusiast like me, chances are you've built up a library of lossless audio files. Why settle for anything less than the best sound quality? But if you aren't ripping CDs or vinyl yourself, how can you be sure the FLAC files you've collected are actually lossless? The reality is that not all FLAC files are created equal. Some may be "fake FLAC": files that have been transcoded from lossy formats like MP3 and re-saved as FLAC, a conversion that doesn't magically restore the lost audio data.

While there’s no foolproof method to detect a fake FLAC, there are some telltale signs based on bitrate and frequency response that can help you spot them. One of my go-to tools for this task is Spek, a free and open-source audio spectrum analyzer.

What Are Fake FLAC or Fake Lossless Files?

People create fake FLAC files by converting lossy formats—like MP3 or AAC—into lossless containers such as FLAC. Although the file extension and size might suggest high quality, the underlying audio data remains compromised. These files often originate from people who re-encode lossy sources and redistribute them under the guise of high fidelity.

Also note that while FLAC is the most common lossless audio format, other containers such as WAV and ALAC do exist as well. The indicators mentioned in this article for spotting fake lossless audio files are generic and apply regardless of the container format.

Using Spek to Analyze Frequency Spectrum

When you open a file in Spek, it displays the audio spectrum across the entire track. This visual representation reveals how much of the frequency range the file actually contains. A true lossless FLAC will have no abrupt cutoffs in the upper frequencies, whereas fake FLACs often exhibit sharp drop-offs.

Here’s a general guideline for identifying the cutoff frequencies and their corresponding bitrates:

  • 11 kHz = 64 kbps
  • 16 kHz = 128 kbps
  • 19 kHz = 192 kbps
  • 20 kHz = 320 kbps

If you notice a sharp cutoff around these frequencies, the file may have been upsampled from a lossy source.

Fake FLAC from an upsampled MP3
This fake FLAC file was upsampled from a 320 kbps MP3 file. We can see a very visible cutoff of all frequencies above 20 kHz.
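The cutoff principle behind these spectrogram checks can be demonstrated numerically. The sketch below is plain Python with synthetic sine waves standing in for real audio (the names true_like and fake_like are my own illustrative stand-ins, not anything Spek produces): a single-frequency DFT correlation finds substantial energy near the 22.05 kHz Nyquist limit in the "lossless-like" signal, and essentially none in the "lossy-like" one.

```python
import math

SR = 44100   # CD sample rate; the Nyquist limit is SR/2 = 22.05 kHz
N = 4096     # analysis window length in samples

def tone(freq_hz, n=N, sr=SR):
    """A pure sine at freq_hz, sampled at sr."""
    return [math.sin(2 * math.pi * freq_hz * t / sr) for t in range(n)]

def magnitude_at(signal, freq_hz, sr=SR):
    """Single-frequency DFT correlation: average magnitude at freq_hz."""
    re = sum(s * math.cos(2 * math.pi * freq_hz * t / sr) for t, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * t / sr) for t, s in enumerate(signal))
    return math.hypot(re, im) / len(signal)

# "True lossless" stand-in: content right up near Nyquist (5 kHz + 21 kHz)
true_like = [a + b for a, b in zip(tone(5000), tone(21000))]
# "Fake FLAC" stand-in: a 128 kbps-style source with nothing above its cutoff
fake_like = tone(5000)

print(magnitude_at(true_like, 21000))  # substantial (~0.5): energy near Nyquist
print(magnitude_at(fake_like, 21000))  # near zero: the telltale empty top end
```

In practice you would run this kind of measurement over a real decoded waveform; the point is simply that a hard spectral ceiling well below Nyquist is easy to detect, which is exactly what Spek visualizes.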

What to Expect from True Lossless FLACs

Depending on the sample rate and bit depth, a legitimate FLAC file should show frequency content extending to the upper limits of the spectrum:

  • 44.1 kHz, 16-bit: Should display frequencies up to 22 kHz
  • 48 kHz, 16-bit: Should reach up to 24 kHz
  • 96 kHz, 24-bit: May extend up to 48 kHz, but a smooth fade to nothing somewhere between 20-30 kHz is normal
  • 192 kHz, 24-bit: May extend up to 96 kHz, but a smooth fade to nothing somewhere between 20-30 kHz is normal
A true 44.1 kHz/16-bit CD-quality FLAC file
The same song as above, in true 44.1 kHz/16-bit FLAC format. We can see that the whole frequency spectrum right up to 22 kHz is used.

You can often spot upsampling when you see a sharp cutoff at 22 kHz. There may also be very faint or random noise in the 22 kHz and up range. This pattern usually means someone took a 44.1 kHz file and padded it to 48 or 96 kHz.

This file was upsampled from 44.1 kHz to 48 kHz, as made clear by the sharp frequency cutoff visible at 22 kHz.

It’s also worth noting that there’s ongoing debate about whether audio content above 20 kHz contributes meaningfully to music. An audio engineer or producer might even intentionally apply a low-pass filter to cut out all frequencies above a certain inaudible range. This will result in a steeper drop-off, even in a genuine lossless file.

Additionally, not all instruments produce frequencies in this high range, so a natural lack of content above 20 kHz doesn’t necessarily indicate the file is fake.

Bitrate as Another Indicator of a Fake FLAC

Another clue is the file’s bitrate. While FLAC is a variable bitrate format, files with noticeably low average bitrates may be suspect. Here are some average bitrate ranges you might expect from real FLAC files:

  • 44.1 kHz / 16-bit (CD quality): ~700–1100 kbps
  • 48 kHz / 16-bit or 24-bit: ~800–1400 kbps
  • 96 kHz / 24-bit: ~2000–3000 kbps (can vary widely depending on the content)
  • 192 kHz / 24-bit: ~4000–7000 kbps (can vary widely depending on the content)

If you see a file with a much lower bitrate than expected and frequency cutoffs that match the patterns listed above, the file is almost certainly a fake FLAC.
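These ranges follow directly from the underlying PCM math: uncompressed CD audio is 44,100 samples/s × 16 bits × 2 channels = 1,411.2 kbps, and FLAC typically compresses that to roughly 50–70%. Here's a small sketch of that arithmetic; the 30% red-flag threshold is my own illustrative heuristic, not an official rule:

```python
def pcm_bitrate_kbps(sample_rate, bit_depth, channels=2):
    # Raw (uncompressed) PCM bitrate in kbps
    return sample_rate * bit_depth * channels / 1000

def looks_suspicious(avg_flac_kbps, sample_rate, bit_depth):
    # Heuristic threshold: FLAC usually keeps ~50-70% of the PCM bitrate,
    # so an average far below ~30% suggests a transcoded lossy source.
    return avg_flac_kbps < 0.3 * pcm_bitrate_kbps(sample_rate, bit_depth)

print(pcm_bitrate_kbps(44100, 16))       # 1411.2 kbps uncompressed CD audio
print(looks_suspicious(320, 44100, 16))  # True: 320 kbps "CD-quality FLAC" is a red flag
print(looks_suspicious(900, 44100, 16))  # False: within the normal ~700-1100 kbps range
```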

Trust Your Ears

While visual analysis is helpful, always trust your ears. A song that sounds dull, muffled, or artifacted is likely not true lossless. That said, some minimal or acoustic recordings might not use the entire frequency spectrum and can still be genuine FLACs.

For example, a solo vocal track, acoustic guitar piece, or lo-fi bedroom recording may naturally have limited frequency content, especially in the high end. These types of recordings often focus on midrange clarity rather than full-spectrum detail, so a sparse frequency graph in Spek doesn’t always mean the file is fake.

Final Thoughts

Detecting fake FLAC files takes a combination of tools, knowledge, and critical listening. While Spek and bitrate guidelines provide strong indicators, no method is 100% reliable. Still, by learning to recognize the red flags, you can better curate a truly lossless music library.

The post How to Identify Fake FLAC Files appeared first on TechOpt.

How to Run Tails OS in VirtualBox https://www.techopt.io/linux/how-to-run-tails-os-in-virtualbox Mon, 02 Jun 2025 23:00:05 +0000

If you want to use Tails OS without a USB stick and without a separate computer, running it inside VirtualBox is a great option. VirtualBox is open-source, just like Tails OS, making it an ideal match for users who value transparency and privacy. This aligns perfectly with the Tails OS philosophy of using free and open technologies to ensure security. In this guide, you’ll learn how to run Tails OS in VirtualBox in a few easy steps.

You can test or use Tails OS securely on your computer with this method, and nothing is saved after you power off the virtual machine.

Step 1: Download Tails OS ISO

First, download the latest Tails OS ISO file from the official Tails website.

Download ISO for Tails OS in VirtualBox

The ISO is found under the Burning Tails on a DVD section. Even though you aren’t burning a DVD, you need to use the ISO file for VirtualBox because VirtualBox boots operating systems from CD/DVD images.

The default Tails OS download on the homepage is a .img file for USB sticks, which supports persistence. However, VirtualBox cannot use persistence and does not support virtual USB drives via .img files, so always choose the ISO.

Step 2: Create a New Virtual Machine for Tails OS in VirtualBox

Open VirtualBox and click on New to create a new virtual machine.

Name and Operating System

Give your VM a name, such as Tails OS.

Select the Tails OS ISO file you downloaded as the ISO Image.

Ensure the following settings are automatically detected, and set them if not:

  • Set the type to Linux
  • Set the subtype to Debian
  • Set the version to Debian (64-bit)

Be sure to check Skip Unattended Installation. This is very important for Tails OS because you want to prevent VirtualBox from creating a default user account or setting a password. Tails OS boots directly into its own secure environment by design.

tails os on virtualbox name and operating system settings

Hardware

The technical minimum for Tails OS is 2048 MB RAM and 1 CPU core. However, for a smoother experience, I recommend:

  • Setting the base memory (RAM) to 4096 MB
  • Setting processors to 2 CPU cores or more, if your system allows
Hardware settings creating a tails os virtual machine in VirtualBox

Hard Disk

Under the Hard Disk section, select Do Not Add a Virtual Hard Disk.

do not add virtual hard disk to vm

This setup ensures it’s physically impossible to save anything inside the Tails OS virtual machine. When you stop the VM, you erase everything, keeping your session private and secure.

When you’re happy with your virtual machine settings, click Finish.

Step 3: Boot and Use Tails OS in VirtualBox

Start your virtual machine. Tails OS will boot from the ISO.

Set your language and keyboard layout settings, and click Start Tails.

Start Tails OS language and keyboard settings

Tails OS will ask you how you want to connect to the Tor network. If you’re not sure, I suggest choosing Connect to Tor automatically and clicking Connect to Tor.

Connect to Tor settings

That’s it! Launch the Tor browser and start browsing the web anonymously from Tails OS.

Tails OS running in VirtualBox

Now you can use Tails OS in VirtualBox safely, knowing that all your activities are wiped when you shut down the VM!

Remarks

  • Nothing will be saved when you shut down the virtual machine. Tails OS booted from the ISO does not support persistence. If you need persistence (the ability to save files or settings between sessions), you should use the default USB installation method instead.
  • Do not install VirtualBox Guest Additions. Guest Additions can expose parts of your host system to the virtual machine, which goes against Tails OS’s privacy goals. Besides, when you power off the VM, it will wipe Guest Additions anyway.
  • Keep the ISO file. Do not delete the ISO file you downloaded, because your virtual machine will need it every time it boots Tails OS.
  • Use open-source virtualization software. Tails OS recommends using open-source virtualization tools like VirtualBox or KVM to run Tails OS because their transparency and auditability align with Tails OS’s privacy philosophy. Proprietary alternatives (such as VMware) are not as easily audited for privacy.
  • The Tails OS documentation advises against using VirtualBox because it gets stuck at 800×600 resolution. I’ve found this advice seems outdated. You can set a variety of screen resolutions from the Tails OS display settings menu by right-clicking the desktop. VirtualBox runs Tails OS very well and is a much easier open-source alternative to KVM.

If you prefer a video guide, you can follow along with the video below:

The post How to Run Tails OS in VirtualBox appeared first on TechOpt.

Local Account Creation During Windows 11 Setup https://www.techopt.io/windows/local-account-creation-during-windows-11-setup Fri, 09 May 2025 00:41:00 +0000

If you have recently set up a new computer with Windows 11, you probably noticed that you can no longer choose a local account instead of a Microsoft account. Previous workarounds, such as entering an invalid email address or disconnecting the internet, no longer seem to work.

There are benefits to using a Microsoft account with your PC, but you might still want to use a local account. Some people prefer a local account for administrative or privacy reasons. Also, you can always sign in to a Microsoft account at a later time.

You can still create a local account during initial setup, but it’s harder than before. Here are the steps for skipping Microsoft account login and creating a local account in newer versions of the Windows 11 Home out-of-box setup experience.

1. Proceed Through Initial Setup Until the Microsoft Account Screen

Proceed through the initial setup steps by configuring the options and clicking Next. You’ll eventually wind up at the Microsoft account screen.

Microsoft account screen during local account creation on Windows 11

2. Open Command Prompt from Setup with SHIFT+F10

Once you’re in the initial setup wizard shown above, press SHIFT+F10 on your keyboard. This will open a command prompt window.

3. Type start ms-cxh:localonly and Hit Enter

Click inside of the command prompt window and type start ms-cxh:localonly.

start ms-cxh:localonly in command prompt to create a local account on Windows 11

Hit Enter.

4. Create a Local Account

A new screen will open to create a local account. Enter a username and password, and choose your security questions.

Create a user window to create a local account on Windows 11 Home

When you’re done, simply click Next to proceed to the desktop as you normally would!

Remarks

  • Although you can technically run this command at the beginning of setup, it’s best to get to the Microsoft account screen first to easily configure your system’s region, keyboard settings and network.
  • As of right now, you can still create a local account on Windows 11 Pro and Enterprise versions from the setup wizard.
  • My opinion is that this clearly shows the direction Microsoft is taking with its consumer line of products. They want us to be reliant on their cloud as much as possible!

Update 05/08/2025: The command previously suggested in this article, oobe\bypassnro, has been disabled by Microsoft as of Windows 11 build 26100. Only the start ms-cxh:localonly command should be used going forward.

If you prefer a video to follow along, you can watch the tutorial on my YouTube channel down below:

The post Local Account Creation During Windows 11 Setup appeared first on TechOpt.

React Native vs Flutter in 2025: Which to Choose for a New App https://www.techopt.io/programming/react-native-vs-flutter-in-2025-which-to-choose-for-a-new-app Sat, 03 May 2025 19:56:02 +0000

When you’re building a cross-platform mobile app in 2025, one of the first questions is React Native vs Flutter: which framework should you choose? Both are powerful cross-platform tools. They let you build for iOS and Android from a shared codebase. But for long-term success, React Native is the better option in most cases.

Performance Is No Longer the Dealbreaker

In the past, Flutter outperformed React Native due to its direct rendering model. React Native used a bridge that communicated asynchronously between JavaScript and native code. This used to cause performance bottlenecks. Today, things have changed.

React Native’s new architecture has narrowed the gap. With the Fabric rendering engine and TurboModules, apps now run with near-native speed. Interactions and animations are smooth. The old performance argument simply doesn’t apply anymore.

Hermes, a lightweight JavaScript engine, further improves speed. It reduces memory usage and startup times. React Native apps now feel fast and efficient.

Issues like navigation lag or gesture delays have mostly disappeared. Thanks to Reanimated and Gesture Handler by Software Mansion, modern React Native apps rival the performance of native Swift or Kotlin apps.

UI Flexibility and Customization

Flutter uses Google's Material Design by default. It's polished and consistent, but it can feel restrictive. If you want a unique design or to match native iOS components, Flutter takes more effort.

React Native, on the other hand, gives you a blank canvas. It renders real native components. This means your app looks and behaves like a native app on each platform.

Customization in React Native is straightforward. You can easily build your own components or bring in existing native modules. Want to use Swift or Kotlin? No problem.

There are also countless libraries that give you freedom over design. Libraries like React Native Paper and Nativewind help developers build beautiful UIs without limitations. Tailwind CSS is a popular option for web developers in 2025, and Nativewind lets developers use Tailwind CSS to style their React Native components.

Libraries like these give you a quick start with React Native, but even if you don't want to use a UI library, it's generally quicker and easier to achieve the look you're going for.

Apps with demanding UX needs benefit here. Whether you want to follow iOS’s Cupertino look, Android’s native style, or go for a completely custom look, React Native makes it easier.

Community, Adoption, and Real-World Usage

React Native has a massive and experienced community. It’s used by Discord, Shopify, Microsoft, Walmart, and many others. These companies have built large-scale apps and actively contribute to the ecosystem. You can see a more extensive list in the React Native Showcase.

Flutter is still growing. It’s backed by Google and has seen adoption in some sectors, but it’s less common in enterprise apps. Yes, Flutter does have an impressive showcase of its own, with a lot of popular companies using it. But the biggest companies are still choosing React Native.

More developers and teams rely on React Native. That means better tools, more tutorials, more plugins, and faster support. Need to hire? You’ll find it easier to find experienced React Native developers.

React Native meetups, conferences, and job listings still outnumber Flutter’s. It’s the more mature option for serious production apps.

JavaScript/TypeScript vs Dart

React Native is powered by JavaScript and TypeScript. Most developers already know these languages. The ecosystem is enormous. You can find libraries, tools, and community help for just about anything.

Flutter uses Dart. It’s improved over time and offers some nice features. But it’s still niche. Fewer developers know Dart, and fewer tools exist for it.

Using JS/TS also makes it easier to integrate with full-stack solutions. Node.js for backend, React for web, and React Native for mobile? You can reuse code across all of them.

For team productivity and long-term maintenance, JavaScript and TypeScript have the edge.

Better Web and Desktop Pathways

React Native is mobile-first but has solid web support through React Native for Web. Combined with React for traditional websites, you get a unified developer experience.

Flutter officially supports web and desktop, but real-world usage is limited. Web performance is hit or miss. The experience often feels heavy and not optimized for browsers.

React Native’s ecosystem supports better CI/CD and deployment too. Tools like Expo and Fastlane streamline everything from build to publish.

Project Longevity and Trust

Google has ended many popular projects before. Think Google Reader, Stadia, or the Chromecast. That uncertainty affects how developers view Flutter.

Meta backs React Native. Could they pull back? Possibly. But the difference is that React Native has critical mass. Even if Meta dropped it tomorrow, the community would keep it alive. Many companies depend on it.

Microsoft has even built React Native for Windows and macOS. That means React Native is not just a Meta project anymore—it’s supported by multiple big players.

This gives it a much stronger foundation for long-term stability.

Rich Ecosystem and Tooling

React Native has everything you need. Navigation with React Navigation. State management with Redux or Zustand. Animations with Reanimated or Lottie. It all fits together.

You can develop faster with Expo. You get reliable TypeScript support. Debugging is better too. Flipper and Chrome DevTools make the developer experience smoother.

Third-party integrations like Stripe, Firebase, and Google Maps are easier to implement. They’re better documented and more widely tested in React Native.

React Native vs Flutter: React Native Wins in 2025

So, in the battle of React Native vs Flutter, React Native still leads in 2025.

It offers better performance than ever before. It’s easier to customize, more widely adopted, and backed by a huge community. You get the benefits of the JS/TS ecosystem and peace of mind with long-term support.

Flutter is a solid choice in some scenarios, especially if you’re deep in the Google ecosystem. But for most teams, React Native remains the best bet.

If you’re launching a new app, React Native gives you speed, flexibility, and staying power. It’s the smarter investment for today, and for the future.

The post React Native vs Flutter in 2025: Which to Choose for a New App appeared first on TechOpt.

Run Virtual Machines on OPNsense with bhyve https://www.techopt.io/servers-networking/run-virtual-machines-on-opnsense-with-bhyve Wed, 30 Apr 2025 02:00:16 +0000

If you’ve ever looked at your OPNsense box and thought, “this thing is barely breaking a sweat,” you’re not alone. Many users with overpowered hardware are now looking for ways to run virtual machines on OPNsense to take full advantage of those idle resources with additional software and services. In my case, I had 8GB of RAM and 120GB of storage sitting mostly idle, with CPU usage rarely spiking beyond a modest blip.

Instead of virtualizing OPNsense itself using something like Proxmox (which is a common suggestion), I wanted to keep OPNsense on bare metal for reliability and stability. But I also wanted to move some VMs directly onto my router box, such as my Ubuntu Server VM running the TP-Link Omada software that controls my WiFi. That led me down the rabbit hole of running a virtual machine on OPNsense—and the tool that makes this possible is bhyve, the native FreeBSD hypervisor.

This is definitely not officially supported and could break with any OPNsense update, so proceed with caution. My setup is loosely based on a 2023 forum post I found in the OPNsense community forums.

Step 1: Installing bhyve

To get bhyve running, we first need to enable the FreeBSD repository temporarily. OPNsense locks its package manager to prevent upgrades from mismatched repos, so we need to handle this carefully.

Lock the pkg tool

pkg lock -y pkg

Enable the FreeBSD repository

sed -i '' 's/enabled: no/enabled: yes/' /usr/local/etc/pkg/repos/FreeBSD.conf

Install bhyve and required packages

pkg install -y vm-bhyve grub2-bhyve bhyve-firmware

Disable the FreeBSD repository again

sed -i '' 's/enabled: yes/enabled: no/' /usr/local/etc/pkg/repos/FreeBSD.conf

Unlock pkg

pkg unlock -y pkg

⚠ Leaving the FreeBSD repo enabled may interfere with future OPNsense updates. Disabling it again helps maintain system stability, but means bhyve won’t update automatically. If you want to update bhyve later, you’ll need to repeat this process.

Step 2: Configuring the Firewall for Virtual Machines on OPNsense

Next, we need to create a virtual bridge to let our bhyve virtual machines talk to each other and the rest of the network.

This part is all done from the OPNsense UI.

Create a bridge interface

  • Navigate to Interfaces → Devices → Bridge
  • Add a new bridge with your LAN interface as a member
  • Enable link-local IPv6 if you use IPv6 on your network
  • Note the name of your bridge (e.g., bridge0)
Creating the LAN bridge switch for virtual machines on OPNsense

Assign and configure the bridge interface

  • Go to Interfaces → Assignments
  • Assign bridge0 to a new interface (give it a description like bhyve_switch_public)
  • Enable the interface
  • Check Lock – Prevent interface removal to avoid losing it on reboot
Assigning an interface to the VM bridge

Allow traffic through the bridge

  • Navigate to Firewall → Rules → bhyve_switch_public
  • Add a rule to allow all traffic (you can tighten this later if needed)
Allow all firewall rule on switch for virtual machines on opnsense

One thing to note: the forum post I referenced above did not mention anything about assigning an interface or adding a firewall rule for the bridge. However, in my experience, my virtual machines in OPNsense had no network connectivity until I completed both of these steps.

Step 3: Setting Up bhyve

With bhyve installed and your network bridge configured, the next step is to prepare the virtual machine manager and your storage directory. You have two options here: using ZFS (ideal for advanced snapshots and performance features) or a standard directory (simpler and perfectly fine for one or two VMs).

Option 1: Using ZFS for VM Storage

If you’re using ZFS (like zroot), create a dataset for your VMs:

zfs create zroot/bhyve

Then enable and configure vm-bhyve:

sysrc vm_enable="YES"
sysrc vm_dir="zfs:zroot/bhyve"
vm init

Option 2: Using a Standard Directory

If you’re not using ZFS or want a simpler setup for running just a few virtual machines on OPNsense:

mkdir /bhyve
sysrc vm_enable="YES"
sysrc vm_dir="/bhyve"
vm init

This sets up /bhyve as the default VM storage directory. You’ll now be able to manage and create virtual machines using the vm command-line tool, with bhyve handling the hypervisor backend.

This is the option that I personally chose for my setup, since I only plan on running 1 or 2 VMs.

Step 4: Configuring bhyve

With the storage and base configuration out of the way, the next step is to configure networking for bhyve. To do this, we’ll create a virtual switch that connects bhyve’s virtual NICs to the bridge interface we created in Step 2.

Setting up Networking

Run the following command to create a manual switch that binds to the bridge0 interface:

vm switch create -t manual -b bridge0 public

This tells vm-bhyve to create a virtual switch named public and associate it with bridge0, allowing your VMs to communicate with the rest of your network. Any virtual machine you create can now be attached to this switch to access LAN or internet resources just like any other device on your network.

Copying VM Templates

Before you start creating virtual machines in OPNsense, it’s helpful to copy the sample configuration templates that come with vm-bhyve. These templates make it easier to define virtual machines for different operating systems like FreeBSD, Linux, or Windows.

If you’re using ZFS and followed the zroot/bhyve setup as described in Option 1 above:

cp /usr/local/share/examples/vm-bhyve/* /zroot/bhyve/.templates/

If you’re using a standard directory setup like /bhyve, as described in Option 2 above:

cp /usr/local/share/examples/vm-bhyve/* /bhyve/.templates/

This copies example VM configuration templates into the .templates directory within your VM storage location. These templates provide base config files for creating new VMs and are a helpful starting point for most operating systems.

Step 5: Setting Up Your Virtual Machine

In this step, we’ll walk through creating your first VM using bhyve. Since there’s more to this than just launching a template, I’ve broken it down into three parts: setting up the VM itself (including creating a config), installing the operating system, and then configuring the firewall for the VM.

Configuring the VM

For my setup, I was creating an Ubuntu Server instance that runs TP-Link Omada controller software.

First, navigate into your templates directory. If you’re using ZFS, it’ll look like this:

cd /zroot/bhyve/.templates/

Or if you’re using a regular directory setup:

cd /bhyve/.templates/

Inside, you’ll find configuration files for different OS templates. I used the one for Ubuntu, found at ubuntu.conf, which contains the following:

loader="grub"
cpu=1
memory=512M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"

This basic config uses GRUB as the loader and allocates 1 CPU and 512MB of RAM. It attaches to the public switch we created earlier and uses a virtual block device for storage.

To create a custom config for my Omada controller VM, I simply copied the Ubuntu template:

cp ubuntu.conf omada-controller.conf

This gave me a dedicated configuration file I could tweak without touching the original Ubuntu template.

Next, I edited the omada-controller.conf file using nano to better suit the needs of the Omada software:

nano omada-controller.conf

And the contents:

loader="uefi"
cpu=2
memory=3G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"

This configuration increases the resources available to the VM—allocating 2 CPU cores and 3GB of RAM, which is more appropriate for running the Omada controller software.

Initially, I tried using the GRUB loader as shown in the Ubuntu template, but I ran into problems booting the OS after installation. After doing some research, I found that this is a fairly common issue when using GRUB with certain Linux distributions on bhyve. Switching to uefi resolved the problem for me and allowed the VM to boot normally. Your mileage may vary, but if you’re stuck at boot, switching the loader to uefi is worth trying.

Starting Guest OS Installation

Before you can install the operating system, you’ll need to download and import the installation media (ISO file) for your OS into bhyve. This is easy to do with the vm iso command.

I downloaded the Ubuntu 22.04.5 server installer ISO using:

vm iso https://mirror.digitaloceans.dev/ubuntu-releases/22.04.5/ubuntu-22.04.5-live-server-amd64.iso

This downloaded the ISO file directly into the /bhyve/.iso directory (or /zroot/bhyve/.iso if you’re using ZFS).

Once the ISO was downloaded, I started the installation process for my VM using:

vm install -f omada-controller ubuntu-22.04.5-live-server-amd64.iso

This command tells vm-bhyve to boot the VM using the downloaded ISO so you can proceed with installing the operating system in console mode.
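If the installer doesn't appear right away, you can attach to the VM's serial console with vm-bhyve. Behind the scenes it uses cu, so you detach with the standard ~. escape sequence typed at the start of a line:

```shell
# Attach to the serial console of the running VM
# (detach with ~. on a fresh line)
vm console omada-controller
```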

In my case, when using the GRUB loader, the console mode installer worked fine. However, after switching to UEFI mode, I ran into a problem where the console installer would no longer boot properly. After doing some research, I found that this is a common issue with bhyve when using UEFI.

To work around this, I edited my omada-controller.conf and temporarily added the following line to enable graphical output:

graphics="yes"

The updated configuration file looked like this:

loader="uefi"
cpu=2
memory=3G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"

This allowed the ISO to boot into the graphical installer, which I accessed using VNC. After installation, I planned to enable SSH on the VM to manage it more easily and remove the graphics option.

However, to use VNC to complete the installation, I needed to add additional firewall rules to allow VNC access to the virtual machine after it was created and booted.

Step 6: Configuring the Firewall (Again)

When a bhyve virtual machine boots, it creates a new network interface on the host system, usually with the prefix tap. When my bhyve VM booted, the firewall blocked all network access on the new VM interface. As a result, I was unable to connect to the VM, and the VM itself had no network connectivity.

Here’s what I did to properly assign the VM network interface and open up traffic:

  • Run ifconfig from the console to see the list of interfaces.
  • Identify the new interface created by bhyve (it will usually start with tap). In my case, it was named tap0.
  • Rename the tap interface so it can be properly assigned in the OPNsense GUI:
ifconfig tap0 name bhyve0
  • Go to Interfaces → Assignments in the OPNsense UI.
  • Assign the newly renamed bhyve0 interface.
  • Give it a description like bhyve_vm_omada_controller.
  • Enable the interface.
Assigning the VM to an interface in the OPNsense UI
  • Go to Firewall → Rules → [bhyve_vm_omada_controller].
  • Add a rule to allow all traffic through the interface.
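The command-line portion of the steps above can be sketched as follows. This assumes bhyve created tap0, as it did in my case; check the ifconfig output first if you have other tap devices:

```shell
# Show all interface names and pick out the tap device bhyve just created
ifconfig -l | tr ' ' '\n' | grep '^tap'

# Rename it so the OPNsense interface assignment stays stable
ifconfig tap0 name bhyve0
```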

This setup ensured that the virtual machine had full network access through its own dedicated interface, while still keeping things clean and organized within OPNsense.

Keep in mind that each time the VM is powered off and started again, a new tap interface is created. Because of this, you must manually rename and reassign the interface every time the VM boots until we set up a persistent interface configuration after the OS installation is complete.

Keeping the interface name consistent between shutdowns so firewall rules apply correctly was one of the trickiest parts of the entire setup for me. I’ll dive deeper into the different solutions I tested, and what finally solved the issue reliably, in the final step of this article.

Step 7: Connecting with VNC and Installing the OS

Now that our virtual machine on OPNsense is configured, the ISO is loaded, and the firewall rules are in place, it’s time to connect to the VM and install the OS.

  • Open your preferred VNC client. I personally used Remmina for this, but other popular options like TightVNC and TigerVNC will also work fine.
  • Connect to your OPNsense router’s IP address on port 5900.
  • You should see your OS installer’s screen boot up!
Remotely connect to virtual machines on OPNsense with VNC

Proceed through the guest OS installation like you normally would. During installation, I made sure to:

  • Enable the OpenSSH server in Ubuntu so I could manage the VM over SSH instead of VNC afterward.
  • Configure the VM with a static IP address within my LAN subnet.

Once installation was completed, I rebooted the VM. If you enabled SSH, you should now be able to connect to your VM via its IP address without needing to rely on VNC anymore.

After confirming SSH access, I edited the VM’s configuration to remove the graphics="yes" line from omada-controller.conf for security and resource efficiency.

After powering off the VM to make these changes, I had to manually rename the network adapter again following the steps from Step 6, before I could access it via SSH.

Now let’s configure the VM to start automatically at boot, and find a more permanent solution for the network adapter issue in the next step.

Step 8: Starting the VM at Boot and Fixing the Network Interface Name Issue

Starting the VM at Boot

By default, bhyve VMs don’t start automatically with the system. You can set individual VMs to start using:

vm set omada-controller boot=on

However, there’s a more flexible method that allows you to define a list of VMs that should start at boot. I used the following command to specify mine:

sysrc vm_list="omada-controller"

This ensures vm-bhyve starts the omada-controller VM whenever the system boots.

If you want to start multiple VMs, just list them separated by spaces:

sysrc vm_list="omada-controller another-vm some-other-vm"

This is useful if you plan to run multiple VMs on your OPNsense machine via bhyve.
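You can read the value back at any time to confirm which VMs are registered to start (sysrc's -n flag prints just the value, without the variable name):

```shell
# Print the current vm_list setting from /etc/rc.conf
sysrc -n vm_list
```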

Fixing the Networking Interface Name at Boot

One of the trickiest parts of my setup was keeping the VM’s network interface name consistent between reboots. I initially tried using the following line in my VM config:

network0_name="bhyve0"

This is supposed to create the VM’s network interface with the name bhyve0 when it boots.

However, I found that while this approach worked with loader="grub" in the VM config (BIOS mode), it caused the VM to crash immediately at startup when using loader="uefi".

Instead, I leaned into ifconfig and ran the following:

sysrc ifconfig_tap0_name="bhyve0"

This renames the tap0 interface to bhyve0 automatically at boot, which has worked reliably for me and allows the firewall rules we created earlier to apply without manual intervention.

The VM now starts automatically at boot, and its network interface is renamed as soon as the host comes up.

Keep in mind that with the ifconfig method, you will have to manually run ifconfig again if the VM is powered off and on, but the OPNsense host is not:

ifconfig tap0 name bhyve0

This is because the interface gets destroyed and recreated with the default tap0 name.
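To avoid forgetting the rename, a small wrapper script can start the VM and rename the interface in one step. This is just a sketch of one approach, not a vm-bhyve feature; the script name and the five-second delay are my own choices, and it assumes the new device comes up as tap0:

```shell
#!/bin/sh
# start-omada.sh (hypothetical helper): start the VM, then rename the
# tap interface bhyve creates so existing firewall rules keep matching.
vm start omada-controller

# Give bhyve a moment to create the tap device
sleep 5

# Only rename if tap0 actually exists
if ifconfig tap0 >/dev/null 2>&1; then
    ifconfig tap0 name bhyve0
fi
```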

Running Virtual Machines on OPNsense: My Final Thoughts

Running virtual machines directly on OPNsense using bhyve is an advanced but rewarding undertaking. It allows you to consolidate infrastructure and put underutilized hardware to work, all while keeping your firewall on bare metal for maximum reliability. While the process involves a lot of manual setup—especially around networking and boot configuration—it ultimately gives you a lightweight, headless VM host tightly integrated into your router.

Just remember that this is an unofficial and unsupported use case. Updates to OPNsense or FreeBSD may break things, so keep good backups and approach with a tinkerer’s mindset. But if you’re comfortable on the command line and like squeezing every drop of utility out of your hardware, this setup is a powerful way to do just that.

Remarks

  • vm list is a good command to help see loaded VMs and their status. You can also start and stop VMs with vm start vm-name and vm stop vm-name.
  • You will have to configure the network adapter settings for each VM you create and apply the firewall rules for each VM in the OPNsense UI as we did above.
  • It’s important to note the configuration differences between using the UEFI loader and the BIOS loader when setting up virtual machines on OPNsense, as stated throughout the article.
  • To see a sample VM configuration, you can take a look at an example on GitHub here.
  • Again, this is all unsupported. Follow at your own risk.
    • This is for advanced users only. None of it is manageable through the OPNsense UI, except firewall rules. Know what you’re doing in the terminal before following this guide.

The post Run Virtual Machines on OPNsense with bhyve appeared first on TechOpt.

]]>