Home Infrastructure as Code (homeiac)
--------------------------------------

Documentation on setting up home servers using IaC.

Goal
****

Make home automation and other services available in a secure fashion while
managing them all via code.

Hardware
********

- 1 Raspberry Pi - for running monitoring / Pi-hole services using https://balena.io/
- 1 Raspberry Pi 4 - for running the backup / media server using ZFS
- 1 Raspberry Pi - for running k3s and other development builds
- One Windows PC for running heavy-duty workloads

Setup
*****

Setup Balena Managed Raspberry Pi
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Get the Balena image and follow the instructions up to the ``git clone`` step
  from https://www.balena.io/docs/learn/getting-started/raspberrypi3/nodejs/
- Add the homeiac application

  * Conceptually, an application is a fleet of devices onto which a set of
    applications is deployed
  * Current understanding is that you cannot pick and choose which applications
    run on a device; it is all or nothing

- Add your GitHub SSH key so that Balena can talk to GitHub repos
- Add https://github.com/marketplace/actions/balena-push to your GitHub
  workflow after setting ``BALENA_API_TOKEN`` (at the org level) and
  ``BALENA_APPLICATION_NAME`` at the repo level
  (https://github.com/homeiac/home/settings/secrets)
- The workflow itself lives at
  https://github.com/homeiac/home/blob/master/.github/workflows/balena_cloud_push.yml
- With this in place, the goal of IaC for Raspberry Pi devices is achieved.
  There is no need to keep the OS and other services up to date; everything is
  managed by Balena. All changes (unless you override using local mode) go
  through GitHub, the central ``docker-compose.yml`` controls what gets
  deployed, and each push to master automatically updates the devices.
- To disable wifi at runtime:

  * Run ``nmcli radio wifi off`` from the cloud shell
    (https://dashboard.balena-cloud.com/devices/)
  * (To be verified) Move the resin wifi config out of the way:
    ``cd /mnt/boot/system-connections && mv resin-wifi-01 resin-wifi-01.ignore``

Setup media / backup Raspberry Pi
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Set up using the Ubuntu 64-bit image from https://ubuntu.com/download/raspberry-pi
- Create the ``pi`` user
- Upgrade the OS and install ``zfs-dkms``
- Move ``/var`` and ``/home`` onto ZFS using the instructions from
  https://www.cyberciti.biz/faq/freebsd-linux-unix-zfs-automatic-mount-points-command/

.. code-block:: bash

    # create the pool and move /var onto it
    zpool create data /dev/sdb
    zfs create data/var
    cd /var
    cp -ax * /data/var
    cd .. && mv var var.old
    zfs set mountpoint=/var data/var

    # move /home the same way
    zfs create data/home
    cd /home
    cp -ax * /data/home
    cd .. && mv home home.old
    zfs set mountpoint=/home data/home

    reboot

    # once everything checks out, remove the old copies
    rm -rf /home.old
    rm -rf /var.old
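After the reboot, it is worth confirming that ``/var`` and ``/home`` are really
being served from the pool before removing the old copies. A quick sanity
check (a sketch, assuming the pool and dataset names used above):

.. code-block:: bash

    # pool health and capacity
    zpool status data
    zpool list data

    # confirm the datasets are mounted on the expected paths
    zfs list -o name,mountpoint,used data/var data/home

    # both should report a zfs filesystem
    df -hT /var /home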
- Install node exporter using the instructions from
  https://linuxhit.com/prometheus-node-exporter-on-raspberry-pi-how-to-install/#3-node-exporter-setup-on-raspberry-pi-running-raspbian

  * Download node exporter
  * Unpack and install it under ``/usr/local/bin``
  * Install the systemd service

.. code-block:: bash

    # download and install the node_exporter binary
    cd ~
    wget https://github.com/prometheus/node_exporter/releases/download/v1.0.0/node_exporter-1.0.0.linux-armv7.tar.gz
    tar -xvzf node_exporter-1.0.0.linux-armv7.tar.gz
    sudo cp node_exporter-1.0.0.linux-armv7/node_exporter /usr/local/bin
    sudo chmod +x /usr/local/bin/node_exporter

    # create a dedicated user and data directory
    sudo useradd -m -s /bin/bash node_exporter
    sudo mkdir /var/lib/node_exporter
    sudo chown -R node_exporter:node_exporter /var/lib/node_exporter

    # install the systemd unit and start the service
    cd /etc/systemd/system/
    sudo wget https://gist.githubusercontent.com/gshiva/9c476796c8da54afe9fb231e984f49a0/raw/b05e28a6ca1c89e815747e8f7e186a634518f9c1/node_exporter.service
    sudo systemctl daemon-reload
    sudo systemctl enable node_exporter.service
    sudo systemctl start node_exporter.service
    systemctl status node_exporter.service
    cd ~

Setup iSCSI server
~~~~~~~~~~~~~~~~~~

The following steps are required to create the iSCSI targets for k3s.

.. code-block:: bash

    # install targetcli to set up the iscsi targets
    # from https://linuxlasse.net/linux/howtos/ISCSI_and_ZFS_ZVOL
    sudo apt-get install targetcli-fb open-iscsi

    # create a sparse volume per netboot RPi to back its /var/lib/rancher mount
    # (k3s does not work over NFS)
    sudo zfs create -s -V 50g data/4ce07a49data
    sudo zfs create -s -V 50g data/7b1d489edata
    sudo zfs create -s -V 50g data/e44d4260data

    # use targetcli to create the targets
    sudo targetcli

    # *** VERY IMPORTANT ***
    # in order to restore the config after a reboot, enable the following
    # service and run it once
    sudo systemctl enable rtslib-fb-targetctl
    sudo systemctl start rtslib-fb-targetctl

Setup k3s (Kubernetes)
~~~~~~~~~~~~~~~~~~~~~~

Enable cgroup support by adding the following to ``/boot/cmdline.txt``:

.. code-block:: bash

    cgroup_memory=1 cgroup_enable=memory

The resulting ``/boot/cmdline.txt`` looks like this:

.. code-block:: bash

    cat /boot/cmdline.txt
    dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=PARTUUID=6f18a865-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait cgroup_memory=1 cgroup_enable=memory

Then install k3s:

.. code-block:: bash

    curl -sfL https://get.k3s.io | sh -

The instructions are from
https://opensource.com/article/20/3/kubernetes-raspberry-pi-k3s.

Setup Helm
~~~~~~~~~~

From https://helm.sh/docs/intro/install/:

.. code-block:: bash

    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
    chmod 700 get_helm.sh
    ./get_helm.sh

    # add the repos
    helm repo add stable https://kubernetes-charts.storage.googleapis.com/
    helm repo add bitnami https://charts.bitnami.com/bitnami

Setup cloudflare for dynamic DNS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After setting up a Cloudflare account, get the API token from
https://dash.cloudflare.com/profile/api-tokens.

Use the Kubernetes manifest ``cloudflare-ddns-deployment.yaml`` to run the
https://hub.docker.com/r/oznu/cloudflare-ddns/ image.

Setup minion for k3s
~~~~~~~~~~~~~~~~~~~~

Follow the instructions in
https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/net_tutorial.md.

.. code-block:: bash

    sudo mkdir -p /nfs/client1
    sudo apt install rsync
    sudo rsync -xa --progress --exclude /nfs / /nfs/client1

After many attempts and an all-nighter, the Raspberry Pi Model B Rev 2 could
not be made to work, either as a TFTP client or as a k3s node (it was not able
to start any pods).
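Once a netbooted Pi comes up, it still has to be joined to the cluster as an
agent. A minimal sketch of how the k3s installer handles this (the server
address is a placeholder, and the token is read from
``/var/lib/rancher/k3s/server/node-token`` on the server node):

.. code-block:: bash

    # on the k3s server: print the cluster join token
    sudo cat /var/lib/rancher/k3s/server/node-token

    # on each agent (replace the placeholders): install k3s in agent mode
    curl -sfL https://get.k3s.io | \
        K3S_URL=https://<server-ip>:6443 \
        K3S_TOKEN=<token-from-above> \
        sh -

    # back on the server: the new node should appear shortly
    sudo kubectl get nodes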
Setup LetsEncrypt + Traefik
~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Traefik is already set up with k3s, so no additional work is required for it
  per se

Follow the instructions at https://opensource.com/article/20/3/ssl-letsencrypt-k3s
for setting up LetsEncrypt:

.. code-block:: bash

    kubectl create namespace cert-manager
    curl -sL \
        https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml |\
        sed -r 's/(image:.*):(v.*)$/\1-arm:\2/g' > cert-manager-arm.yaml

    # change example.com to home.minibloks.com... not sure whether this really
    # made a difference, but changing it showed the padlock icon in chrome
    sed -i -r 's/example.com/home.minibloks.com/g' cert-manager-arm.yaml

    kubectl apply -f cert-manager-arm.yaml

Create ``letsencrypt-issuer-staging.yaml`` with the following contents. This is
only required if you want to test; for prod you can skip it.

.. code-block:: yaml

    apiVersion: cert-manager.io/v1alpha2
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-staging
    spec:
      acme:
        # The ACME server URL
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        # Email address used for ACME registration
        email: g_skumar@yahoo.com
        # Name of a secret used to store the ACME account private key
        privateKeySecretRef:
          name: letsencrypt-staging
        # Enable the HTTP-01 challenge provider
        solvers:
        - http01:
            ingress:
              class: traefik

Run the command:

.. code-block:: bash

    sudo kubectl apply -f letsencrypt-issuer-staging.yaml

Create the certificate yaml ``le-test-certificate.yaml``:

.. code-block:: yaml

    apiVersion: cert-manager.io/v1alpha2
    kind: Certificate
    metadata:
      name: home-minibloks-net
      namespace: default
    spec:
      secretName: home-minibloks-net-tls
      issuerRef:
        name: letsencrypt-staging
        kind: ClusterIssuer
      commonName: home.minibloks.com
      dnsNames:
      - home.minibloks.com

Run the command:

.. code-block:: bash

    sudo kubectl apply -f le-test-certificate.yaml

Create the ``letsencrypt-issuer-prod.yaml``:

.. code-block:: yaml

    apiVersion: cert-manager.io/v1alpha2
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        # The ACME server URL
        server: https://acme-v02.api.letsencrypt.org/directory
        # Email address used for ACME registration
        email: g_skumar@yahoo.com
        # Name of a secret used to store the ACME account private key
        privateKeySecretRef:
          name: letsencrypt-prod
        # Enable the HTTP-01 challenge provider
        solvers:
        - http01:
            ingress:
              class: traefik

Apply it:

.. code-block:: bash

    sudo kubectl apply -f letsencrypt-issuer-prod.yaml

Create the sample site (optional):

.. code-block:: html