Why¶
It happened sooner than I thought it would. My “production” mini-pc croaked within 6 months of “production” use. RIP.
I thought I had documented the steps but apparently not.
So I am reinstalling my “production” homelab server from scratch. This document is to help me recreate it when my current server inevitably dies too. The eventual goal is to be able to recreate it completely with code; for now, I am documenting the manual steps.
Hardware¶
BOSGAME Linux Mini PC Dual LAN Intel Alder Lake N100 16 GB RAM 500 GB SSD - I was impressed by the price/performance of the Intel N100 and hence stuck with it. I do not use more than 2 ports, so I picked a model with 2 LAN ports. I also picked one with a fan, since I assumed the G48S mini-pc went kaput because it only had passive cooling.
Cluster Hardware¶
These are the nodes in the cluster:
| Node | IPs | Model | CPU | Mem (GB) | GPU |
|---|---|---|---|---|---|
| rapid-civet | 192.168.4.11 | HP 260 G1 DM | i3-4030U @ 1.9 GHz | 3.75 | Haswell IG (0b) |
| still-fawn | 192.168.4.17, 192.168.4.195 | ASUS B85M-G R2.0 | i5-4460 @ 3.2 GHz | 15.53 | NVIDIA RTX 3070 (a1) |
| chief-horse | 192.168.4.19, 192.168.4.197 | Intel NUC D34010WYK | i3-4010U @ 1.7 GHz | 7.66 | Haswell IG (09) |
| pve | 192.168.86.194, 192.168.1.122, 192.168.4.122, 10.10.10.1 | BOSGAME DNB10M | N100 @ 2.8 GHz | 15.37 | Alder Lake UHD |
```mermaid
graph TD
  subgraph homeCluster [Home lab Proxmox Cluster]
    pve["pve<br>IPs: 192.168.86.194, 192.168.1.122, 192.168.4.122, 10.10.10.1<br>Board: BOSGAME DNB10M<br>CPU: N100 @2.8<br>Mem: 15.37GB<br>GPU: Alder UHD"]
    rapid["rapid-civet<br>IPs: 192.168.4.11<br>Board: HP 8000<br>CPU: i3-4030U @1.9<br>Mem: 3.75GB<br>GPU: Haswell IG (0b)"]
    still["still-fawn<br>IPs: 192.168.4.17, 192.168.4.195<br>Board: ASUS B85M R2.0<br>CPU: i5-4460 @3.2<br>Mem: 15.53GB<br>GPU: NVIDIA RTX3070 (a1)"]
    chief["chief-horse<br>IPs: 192.168.4.19, 192.168.4.197<br>Board: Intel D34010WYK<br>CPU: i3-4010U @1.7<br>Mem: 7.66GB<br>GPU: Haswell IG (09)"]
  end

  subgraph pve_VMs [pve Key VMs]
    opnsense["OPNsense VM"]
    maas["Ubuntu MAAS VM"]
  end

  pve --> opnsense
  pve --> maas
```
Install Proxmox server¶
- Download the Proxmox VE ISO from https://www.proxmox.com/en/downloads
- Write it to a USB drive (I used Balena Etcher)
- Install Proxmox on the mini-pc
  - I plugged in only the “WAN” port, connected to my home router. This way I can access Proxmox from my laptop, and there is no confusion in Proxmox about which interface is the management one.
- Use ProxmoxVE Helper-Scripts to run the Proxmox VE Post Install script
RIP Tteck you were gone too soon. Your work lives on.
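The post-install step is a one-liner run in the Proxmox host shell. This is a sketch; the URL below points at the community-scripts fork that carries tteck's work forward, but the path has moved over time, so verify it on the Helper-Scripts site before piping anything to bash:

```shell
# Run from the Proxmox VE host shell. Verify the current script URL on the
# Helper-Scripts site first -- piping a remote script into bash is a trust decision.
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/misc/post-pve-install.sh)"
```

The script interactively offers to fix the enterprise repo, add the no-subscription repo, and disable the subscription nag.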
Setup Opnsense VM¶
Following combined instructions from:
https://homenetworkguy.com/how-to/virtualize-opnsense-on-proxmox-as-your-primary-router/
Download and pre-setup¶
Mainly from https://kb.protectli.com/kb/opnsense-on-proxmox-ve/ as it is simpler so far:
- Download the OPNsense installer - I chose the DVD option.
- Add the comment `WAN` to the existing Linux bridge `vmbr0` with the network device (`enp1s0`).
  - Note: any changes you make are not applied until you hit `Apply configuration`. Learnt that the hard way.
- Add a `LAN` Linux bridge `vmbr1` with the network device (`enp2s0`).
- Add a `WIFI` Linux bridge `vmbr2` with the network device (`wlp0s20f3`).
  - Note that this is highly discouraged by almost everyone, due to the unreliability of WiFi compared to wired LAN. I want to connect my home network devices (cameras, plugs) to machines in the Proxmox cluster, so I am doing it anyway.
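Proxmox persists these bridges in `/etc/network/interfaces`, so after the GUI steps the file should look roughly like this. A sketch, not my actual file: the `dhcp`/`manual` choices are assumptions, and only the bridge/NIC names come from this post:

```
# /etc/network/interfaces (sketch; NIC names from this post, addressing assumed)

auto vmbr0
iface vmbr0 inet dhcp
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
# WAN

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0
# LAN

auto vmbr2
iface vmbr2 inet manual
    bridge-ports wlp0s20f3
    bridge-stp off
    bridge-fd 0
# WIFI
```

Bridging a wireless NIC directly is flaky on Linux (most drivers refuse to bridge in station mode), which is part of why the NAT-based design later in this post exists.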
Setup OPNsense¶
- Expand the `OPNsense-25.1-dvd-amd64.iso.bz2` file to get the ISO file. On my Mac, the default archive expander didn't work; `Stuffit Expander` handled the `.bz2` file.
- Upload the ISO file by clicking on `local (pve) -> ISO Images`.
- Then click on `Create VM` at the top.
- Type `OPNsense` for the VM name and choose `Start at boot`.
- In OS:
  - For `ISO image`, choose the ISO image you uploaded.
  - In Guest OS, switch from Linux to `Other` - OPNsense is a BSD-based OS, not Linux.
- Leave the defaults for `System`.
- In `Disks`, choose `VirtIO Block` (apparently for better performance). I left the `32 GB` as is.
- In `CPU`, I left the defaults as is.
- In `Memory`, choose `4096` (4 GB RAM).
- In `Network`, choose the default bridge `vmbr0` and change `Model` to `VirtIO`.
- Click on `Finish`.
- Add the `LAN` device in `Hardware -> Add -> Network Device`. Choose `vmbr1` and `VirtIO`.
Start and Configure OPNSense¶
- Start the VM.
- Remember to plug the `LAN` network cable into the server.
- Change the `LAN` network interface to `192.168.4.1/24` via the command line.
- Connect your laptop to the `LAN` network switch via Ethernet and use a static IP such as `192.168.4.23`.
- The OPNsense web GUI should now be available at `192.168.4.1`.
- Go through the wizard and set the time zone.
Setup Canonical/Ubuntu MAAS VM¶
Canonical's Ubuntu MAAS is a great way to manage your home lab server installs.

> Self-service, remote installation of Windows, CentOS, ESXi and Ubuntu on real servers turns your data centre into a bare metal cloud.

Install Ubuntu MAAS as a VM and make it the DHCP and TFTP server. An initial attempt to use OPNsense as the external DHCP server didn't work.
Install Ubuntu server¶
Download Ubuntu server ISO - https://ubuntu.com/download/server
Create a VM with `Linux` as the OS type. The requirements are small; I used the following settings:

- Memory: 4096
- Storage: 100 GB

Go through the install:

- Since the underlying storage is ZFS, there is no need to set it up with ZFS as the boot filesystem.
- Enable `openssh-server`.
- Choose `lxd` as a package to be installed.
Install MAAS¶
Follow instructions for installing MAAS
Remember to use static IPs for the MAAS and OPNsense VMs.
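On an Ubuntu server VM, the static IP goes in netplan. A sketch, assuming the defaults for a Proxmox guest: the NIC name `ens18`, the `.53` address, and the OPNsense box as gateway/DNS are all illustrative, not values from this post:

```yaml
# /etc/netplan/50-cloud-init.yaml (sketch -- NIC name and addresses are examples)
network:
  version: 2
  ethernets:
    ens18:
      addresses:
        - 192.168.4.53/24        # static IP reserved for the MAAS VM
      routes:
        - to: default
          via: 192.168.4.1       # OPNsense LAN interface
      nameservers:
        addresses: [192.168.4.1]
```

Apply with `netplan apply` and confirm with `ip addr show ens18`.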
Build the custom proxmox boot image¶
A custom boot image is needed so that MAAS can install Proxmox on cluster nodes.
Install the dependencies on the MAAS machine¶
Clone the packer-maas repo:

```shell
git clone https://github.com/canonical/packer-maas.git
```

Install `packer` following https://developer.hashicorp.com/packer/tutorials/docker-get-started/get-started-install-cli:

```shell
curl -fsSL https://apt.releases.hashicorp.com/gpg | apt-key add -
apt install -y lsb-release software-properties-common
apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
apt-get update && apt-get install packer
```

Install the other dependencies:

```shell
apt install -y qemu-utils libnbd-bin nbdkit fuse2fs qemu-system ovmf cloud-image-utils
```

apt got upset with one or more of the above packages; this subset installed cleanly:

```shell
apt install -y libnbd-bin nbdkit fuse2fs ovmf cloud-image-utils
```
- Since KVM is not available in the Ubuntu MAAS VM, I switched to building the image on the Proxmox host itself.
- Build the image:
  - Create the `my-changes.sh` file in the `debian` directory.
  - The build seemed to run out of disk space; I increased the disk size in `deb*.pkr.hcl` from 4 GB to 16 GB.
- You have to reboot (technically, only restart the maas service) for the new image to be picked up.
- Ran into the "No MBR magic" issue: https://discourse.maas.io/t/no-mbr-magic-treating-disk-as-raw/7675/5
- Also ran into the missing ga22*/debian image issue; had to reboot MAAS to get it working: https://bugs.launchpad.net/maas/+bug/2046557
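Once the build produces the custom image tarball, it gets registered with MAAS via the CLI. A sketch based on the packer-maas README pattern - the profile name `admin`, the resource name, and the tarball filename are assumptions to adapt:

```shell
# Sketch: upload the packer-maas output to MAAS (names/filenames are assumptions).
# "admin" is the logged-in MAAS CLI profile.
maas admin boot-resources create \
    name='custom/debian-12-proxmox' \
    title='Debian 12 + Proxmox VE' \
    architecture='amd64/generic' \
    filetype='tgz' \
    content@=debian-custom-cloudimg.tar.gz
```

After the upload (and the service restart noted above), the image should appear as a deployable OS choice in the MAAS UI.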
Adguard plugin setup within OPNsense along with Unbound DNS¶
Follow instructions in this guide
Adding the wifi interface¶
Follow instructions and enable the WiFi interface
Network diagram to enable WiFi device access on all Proxmox VMs and LXC¶
This network diagram shows how other Proxmox cluster hosts can access the WiFi network.
```mermaid
graph TD
  %% WiFi Network Block
  subgraph WiFi_Network [WiFi Network 192.168.86.X]
    WR[WiFi Router: 192.168.86.1]
    WD[WiFi Device: 192.168.86.100]
  end

  %% Host1 Proxmox Block
  subgraph Host1_Proxmox [Host1 Proxmox - Intel N100 with 2 NICS and 1 Wireless NIC]
    B1[vmbr1 LAN Bridge: 192.168.4.X]
    B2[vmbr2 NAT Bridge: 10.10.10.1/24]
    DNSM[DNSMasquerade Rule: iptables NAT from 10.10.10.0/24 via physical interface]
  end

  %% OPNsense VM Block on Host1
  subgraph OPNsense_VM [OPNsense VM running on Host 1]
    LAN[LAN Interface: 192.168.4.1]
    NAT[OPT1 NAT Interface: 10.10.10.254]
    STATIC[Static Route: Dest 192.168.86.0/24 via Gateway 10.10.10.1]
  end

  %% Host2 Proxmox Block
  subgraph Host2_Proxmox [Host2 Proxmox]
    H2B[vmbr0 LAN Bridge: 192.168.4.X]
    CT[Container: IP 192.168.4.10, GW 192.168.4.1]
  end

  %% Connections
  B1 --- LAN
  B2 --- DNSM
  H2B --- CT

  %% Outbound Traffic Flow
  CT -- "Traffic from 192.168.4.10" --> LAN
  LAN -- "Uses Static Route" --> STATIC
  STATIC -- "Routes traffic via NAT Bridge" --> B2
  B2 -- "DNSMasquerade translates traffic" --> DNSM
  DNSM -- "Forwards traffic to WiFi Router" --> WR

  %% Return Traffic Flow (Simplified)
  WR -- "Return traffic" --> DNSM
  DNSM -- "Forwards to NAT Bridge" --> B2
  B2 -- "Delivers traffic to OPNsense NAT Interface" --> NAT
  NAT -- "Delivers traffic to LAN Interface" --> LAN
  LAN -- "Routes traffic to Container" --> CT
```
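The "DNSMasquerade Rule" box in the diagram boils down to IP forwarding plus a masquerade rule on the `pve` host. A sketch: the subnet and WiFi NIC name come from this post, but making the rules persistent (e.g. via `iptables-persistent` or hooks in `/etc/network/interfaces`) is left as an assumption:

```shell
# Sketch of the NAT setup on the pve host (not persistent across reboots as written).
# 10.10.10.0/24 is the vmbr2 NAT bridge subnet; wlp0s20f3 is the WiFi NIC.

# Allow the host to forward packets between interfaces
sysctl -w net.ipv4.ip_forward=1

# Masquerade traffic from the NAT bridge out of the WiFi NIC, so the
# 192.168.86.X network only ever sees the host's own WiFi address
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o wlp0s20f3 -j MASQUERADE
```

With this in place, the OPNsense static route (192.168.86.0/24 via 10.10.10.1) gives every LAN container a path to the WiFi devices without bridging the wireless NIC.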
Guides¶
Frigate VA-API Acceleration
MA90 packer-MAAS build
K3s NVIDIA GPU Passthrough Guide — Proxmox VE + K3s
Ollama GPU Server Guide