Building a router - the build
To my surprise I found some time to make progress on the router build while puppy-sitting. The build itself was unspectacular, but there were still a few surprises on both the hardware and the software side. Building a router with OPNsense is as easy as building any other computer - you just might want to do some research to make sure OPNsense supports your network cards. Most of the complexity and issues surfaced on the virtualisation layer. If you only have one network to serve, you will not have to deal with any of that.
The most significant hardware change was going from a quad port NIC to a dual port one for the public part of the network. Let us call the public network "cloud" and the private network "private". The cards that were delivered had proprietary Lenovo connectors on them - something the vendor still does not understand is an issue when selling them as PCIe x4 cards. Taking a step back I realised two things:
- For the cloud network a backup connection is pointless. If the line goes down, the static IP and thus the services are unreachable anyway.
- I do not need an interface for services. There is no need for DNS based ad blocking nor controller software for access points.
My replacement order was a quad and a dual port NIC with Intel's i350 chipset. I also added two 80mm fans to the case - contrary to the listing's description, they were not included. I do not expect the box to produce any significant amount of heat, but there is also no reason not to put in as much cooling as possible. Keeping things cool is rarely a bad idea.
Starting the first virtual machine, all I got was an error message: "Virtualisation is not supported", so only containers would work. Strange, but okay - likely a BIOS setting I missed. Fixing this was not as simple as you would expect. To enable AMD's virtualisation extension "SVM" you have to enable "Expert mode". So far so good. Now go to the "Overclocking" menu. Wait, what? Set "OC Explore Mode" to "Expert". "The what?!" Once you do this, a new "CPU Features" entry appears, letting you enable SVM Mode. The start and the end of this process seem okay to me. The middle makes me question whether it would not have been better if Gamers Nexus had taken MSI down for good.
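Once SVM is enabled, you can verify from the Proxmox shell that the flag actually made it through to the OS. A minimal sketch - the flag is `svm` on AMD CPUs, `vmx` on Intel:

```shell
#!/bin/sh
# Check /proc/cpuinfo for hardware virtualisation flags.
if grep -qE 'svm|vmx' /proc/cpuinfo; then
  echo "hardware virtualisation enabled"
else
  echo "hardware virtualisation disabled - check the BIOS"
fi
```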
It shows that the board runs a consumer chipset - the B550. The IOMMU mappings only allow me to pass the PCIe x16 slot through to a VM. The other x16 slot, which is actually an x4, shares an IOMMU group with other system resources. If you set up passthrough for a card in this slot, it will crash the whole Proxmox host. Pro tip: never set a VM to start on boot before you are sure it works as expected. Listen to someone who stood in his basement at 8pm with a USB drive, importing a zpool just to edit a VM's config file to not auto-boot.
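You can inspect the grouping yourself before committing to a slot. A small sketch that lists every PCI device together with its IOMMU group via sysfs - devices sharing a group can only be passed through together:

```shell
#!/bin/sh
# Print "group N: <pci-address>" for every device, sorted by group number.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
  [ -e "$dev" ] || { echo "no IOMMU groups found - IOMMU disabled?"; break; }
  group=${dev#/sys/kernel/iommu_groups/}
  echo "group ${group%%/*}: ${dev##*/}"
done | sort -V
```

Feeding the printed PCI addresses to `lspci -nns <address>` gives you human-readable device names.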
I could force the passthrough with the `pcie_acs_override=downstream` kernel patch. But that would also mean turning off PCIe device isolation. It exists for a good reason, and I will not take chances with the one box on the whole network that talks to the Internet.
There was not a lot to it to get PCIe passthrough to work:

- Append `amd_iommu=on iommu=pt` to the end of the existing arguments in `/etc/kernel/cmdline`.
- Make sure all required modules load when booting - add `vfio`, `vfio_iommu_type1`, `vfio_pci` and `vfio_virqfd` to `/etc/modules`.
- Run `proxmox-boot-tool refresh` and reboot.

To make sure everything is working, a quick `dmesg | grep -e DMAR -e IOMMU` is all you need.
Easy enough - except for the IOMMU grouping issue mentioned above, which was a bit of a surprise.
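With the host prepared, the actual passthrough is a single line of VM configuration. A sketch of what ends up in the VM's config file - the PCI address and VM ID are placeholders for illustration, and `pcie=1` requires the q35 machine type:

```
# /etc/pve/qemu-server/<vmid>.conf
# 01:00.0 is the NIC's address as reported by lspci
hostpci0: 01:00.0,pcie=1
```

The same line can be added from the shell with `qm set <vmid> -hostpci0 01:00.0,pcie=1`.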
I have tried various tools for DNS based ad blocking in the past. Pi-hole usually did a pretty good job. Usability was never the best, but not bad at all. This time I decided to go with AdGuard Home.
There is, as usual, a shell script you can paste into your command prompt, hoping for the best. Luckily there are also some distribution and architecture specific packages.
Port 53 in the LXC container I set up was already in use by systemd-resolved. To fix this, create a drop-in directory for configuration files, `/etc/systemd/resolved.conf.d`, and add three lines to a `.conf` file inside it:

```
[Resolve]
DNS=127.0.0.1
DNSStubListener=no
```

Then point `/etc/resolv.conf` at the local resolver:

```
nameserver 127.0.0.1
search .
```

And restart the service: `systemctl restart systemd-resolved`. When going through the AdGuard Home setup, it should not complain about port 53 being in use anymore.
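A quick way to double-check before running the setup - list anything still bound to port 53 (this assumes `ss` from iproute2 is available in the container):

```shell
#!/bin/sh
# Show listeners on port 53; print a confirmation if none are left.
ss -lntu | grep -w ':53' || echo "port 53 is free"
```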
I am not sure if I will stick with AdGuard Home. It is a regular DNS based ad blocker, but to its credit it offers per-client configuration options. Individual allow and deny lists are useful when two people use it and do not agree on what to filter. Different upstream servers can be useful for testing sometimes. So far it is holding up pretty well.
How did you spell this again?
I am evaluating TP-Link Omada access points. For whatever reason I cannot remember the name or misspell it on a regular basis. The APs can run in standalone mode, use a cloud controller, or you can self-host a controller. While I tried standalone mode, roaming seems to work a lot better with a controller.
Besides the usual TP-Link overhead of running MongoDB, the setup is easy: spin up a Debian 10 container, install all dependencies and install the dpkg. Debian 10 is a supported version and I was too lazy to test whether 11 would work as well.
The software itself is pretty good. The UI is responsive and lets you fine-tune all settings - and when I say all settings, I mean all settings. I do not think you will miss a single one while setting up a well performing wireless network. The only thing that failed was adopting the existing APs: the default credentials I provided did not work and I had to enter them again for each AP.
The new router has now been running our home network for a bit over two weeks. No issues so far. The router sat on top of a Coca-Cola Zero box with four network cables hanging across the room for a week - professionalism when burning in a new system, am I right? CPU load is minor, except for an occasional spike when an analytics script runs. Latency and jitter improved over the UDMP. WireGuard was set up in 15 minutes, allowing our phones to have an on-demand VPN and always-on ad blocking. I should have done this way sooner instead of fighting with Ubiquiti gear for so long.
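For reference, the phone side of such a WireGuard setup needs little more than this. Keys, addresses and the endpoint below are placeholders, not my actual values - the important detail is pointing DNS at the router, which is what makes ad blocking work over the tunnel:

```
[Interface]
PrivateKey = <phone-private-key>
Address = 10.10.10.2/32
# The router's tunnel address, i.e. AdGuard Home - so filtering also applies on the go.
DNS = 10.10.10.1

[Peer]
PublicKey = <router-public-key>
Endpoint = <static-ip>:51820
# Route all traffic through the tunnel.
AllowedIPs = 0.0.0.0/0, ::/0
```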