Kubernetes (Single-Node) Cluster on a $5/month VPS

Gabrio Tognozzi
7 min read · Nov 15, 2020

I had been waiting a long time for an opportunity to refine my Kubernetes skills. I have this $5/month (20GB HDD, 2GB RAM) machine from OVH and decided to build a single-node Kubernetes cluster on it with:

  1. Microk8s as a simple plug-and-play implementation of Kubernetes.
  2. Traefik as our ingress controller and certificate provider: it will handle the process of generating and renewing the certificates for our DNS domain for us. That’s awesome.
  3. A deployed Docker Registry image, as a private and authenticated docker-registry to push our private images with ease.

General Advice

If you fail while configuring the cluster, just run sudo snap remove microk8s --purge to restart from scratch. I did it almost a hundred times.

Keep tailing logs with k logs my-pod-or-service -f to see what happens inside the containers: check whether the docker-registry starts correctly, the Traefik pods may say something useful for debugging, the website deployment may not be able to pull docker images, and so on. Just read the logs.

Use netstat -tlpn to see whether or not the right ports are bound. You may also decide to add iptables rules to block connections on some of the ports that microk8s opens (if you find out how to close unnecessary ports, e.g. gunicorn, please let me know).
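For example, a minimal iptables sketch that blocks outside access to one of those ports while still allowing local traffic (10255 is only a placeholder; substitute whatever port you actually want to close):

# accept the port from localhost, drop it from everywhere else
sudo iptables -A INPUT -p tcp --dport 10255 -s 127.0.0.1 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 10255 -j DROP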

Don’t use this configuration as the infrastructure of a serious production environment: this is for studying and maybe maintaining a couple of websites, nothing more.

MicroK8s and Kind

Thanks to a friend of mine (Suleiman Ali) I was first introduced to kind, which was actually mind-blowing for me: I finally had the opportunity to practice Kubernetes concepts locally, with ease. From there, it was a short step to deciding I was going to deploy my website in a single-node cluster.

While investigating I found out that microk8s is the best alternative to work with. Microk8s is an awesome product from Canonical that makes it easy to run a cluster on your machines. They go further by calling it production-grade Kubernetes, but I’m still not sure I would actually use microk8s in a serious production environment.

Setting Up Microk8s and some aliases

First we need to install microk8s. I also define some aliases for simplicity:

sudo snap install microk8s --classic
alias k="microk8s kubectl"
alias m="microk8s"
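If you don’t want to prefix every command with sudo, the standard MicroK8s setup also adds your user to the microk8s group; you may also want to persist the aliases in your shell profile. A short sketch (the usermod and chown commands are the ones from the MicroK8s documentation):

# let your user talk to microk8s without sudo (log out and back in afterwards)
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
# persist the aliases across sessions
echo 'alias k="microk8s kubectl"' >> ~/.bashrc
echo 'alias m="microk8s"' >> ~/.bashrc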

If you want to check the status of the Kubernetes cluster, you just need to run:

# check microk8s status
m status | head -n4
# look at the deployed resources
k get all -A

During the installation the CPU was actually struggling, and I was afraid of receiving the intimidating email from the provider that says “don’t run CPU-bound tasks for too long on our VPS!!” (yes, I did that once: I tried to mine bitcoins on a $5/month VPS, but that’s another story, LOL). Once the installation completed, though, the situation was under control: no heavy load on either the CPU or the RAM. Great!

Just 2GB of persistent storage were used, while CPU and RAM usage were 24% and 50% respectively. This is reasonable.

Once the microk8s cluster has started, we need to enable the following addons:

  1. helm3 will be used to install helm charts in our cluster,
  2. RBAC stands for role-based access control, and will be used by our ingress controller, and who knows what else
  3. DNS deploys a CoreDNS pod that allows for inter-service communication
  4. Storage allows us to use VolumeClaims and therefore Volumes ( e.g. for the docker-registry to persist pushed images )
m enable helm3 rbac dns storage
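Before moving on you can check that the addons actually came up; the commands below just reuse the aliases defined earlier:

# wait for the cluster to be ready and show its status
m status --wait-ready | head -n20
# dns and storage should have spawned coredns and hostpath-provisioner pods
k get pods -n kube-system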

Failed Attempts (ArgoCD, Cert-Manager, etc.)

At the beginning I also tried to run a GitOps, infrastructure-as-code CD pipeline in our cluster by deploying the ArgoCD controller. Unfortunately the machine kept complaining the whole time, so much that I became convinced this was not a good idea.

I tried to use the integrated NGINX ingress controller, but it was not the proper solution for my needs: I also want to dynamically generate certificates for new domains, and this ingress controller doesn’t have integrated support for Let’s Encrypt, while Traefik does.

I encountered the same problem when trying to use Cert-Manager, which is an awesome operator that automates the process of generating certificates for the domains specified in our ingresses. It simply won’t work with the amount of resources we have. The second approach was to deploy a linuxserver swag (formerly letsencrypt) docker image as some sort of sidecar or reverse proxy, but I felt guilty because it was really an unpleasant solution. Traefik was a better choice in my opinion.

Configure the Docker Registry

In this step we will configure the docker registry that we will use to store our images, which will then be pulled from inside our cluster. Once the docker registry is configured, we will be able to use our custom-built images for the containers of our deployments, using an image value such as
image: gabrio.tognozzi.net:5000/image:tag.

Below follows a gist that defines a Service and a StatefulSet resource. The Service is necessary to allow the ingress to reach the docker registry. We use a StatefulSet instead of a Deployment for our registry because we want the pushed images to be persisted.
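The gist itself is not embedded here, so here is a rough structural sketch of such a Service plus StatefulSet. The registry:2 image, the 5Gi volume size and the omission of any auth configuration are my assumptions, not necessarily what the original gist uses (the author’s registry apparently prints a generated password at startup, which the stock registry:2 image does not do on its own):

apiVersion: v1
kind: Service
metadata:
  name: docker-registry
spec:
  selector:
    app: docker-registry
  ports:
    - port: 5000
      targetPort: 5000
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: docker-registry
spec:
  serviceName: docker-registry
  replicas: 1
  selector:
    matchLabels:
      app: docker-registry
  template:
    metadata:
      labels:
        app: docker-registry
    spec:
      containers:
        - name: registry
          image: registry:2          # assumed image; auth setup omitted
          ports:
            - containerPort: 5000
          volumeMounts:
            - name: registry-data
              mountPath: /var/lib/registry
  volumeClaimTemplates:
    - metadata:
        name: registry-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi             # assumed size; the VPS only has 20GB in total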

After applying this yaml, wait for the pods to start. Once the docker registry has started, by reading the logs with k logs service/docker-registry you will find a generated password to use with the user docker for logging in.

If you run docker login from the terminal and complete the login, you will now be able to docker tag and docker push images to the newly configured docker registry. Furthermore, you will find a credentials file under ~/.docker/config.json; we need this file to create a Kubernetes Secret resource that will be used by the pods to authenticate themselves to the registry and download their images from it.
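For example, assuming the registry hostname used elsewhere in this article and a hypothetical image called my-image:

docker login gabrio.tognozzi.net:5000 -u docker
docker tag my-image:latest gabrio.tognozzi.net:5000/my-image:latest
docker push gabrio.tognozzi.net:5000/my-image:latest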

k create secret generic registry-credentials --from-file=.dockerconfigjson=/path/to/.docker/config.json --type=kubernetes.io/dockerconfigjson

An example of a deployment that uses the generated secret follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gabrio-tognozzi-net
  labels:
    app: gabrio-tognozzi-net
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gabrio-tognozzi-net
  template:
    metadata:
      labels:
        app: gabrio-tognozzi-net
    spec:
      containers:
        - name: gabrio-tognozzi-net
          image: gabrio.tognozzi.net:5000/gabrio-tognozzi-net:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: registry-credentials
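The IngressRoute we will create later expects a Service named gabrio-tognozzi-net in front of this Deployment; a minimal sketch of one, assuming the container listens on port 80 as declared above:

apiVersion: v1
kind: Service
metadata:
  name: gabrio-tognozzi-net
spec:
  selector:
    app: gabrio-tognozzi-net
  ports:
    - port: 80
      targetPort: 80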

Configure the Traefik Ingress Controller

The most painful part of this article was figuring out which approach to use to allow access to our cluster from the outside world and to dynamically generate TLS certificates for the domains living inside the cluster.

Long story short, I decided to go with Traefik, a (painful) Edge Router that makes publishing your services a fun and easy experience. If you choose to use it, consider that it will cost you three days to get sufficiently used to its concepts to have a reasonable hope of making it work. But the effort is worth it: once you make it work, it offers a lot of features, such as automatic generation and renewal of certificates, a fancy dashboard, and integration with Prometheus metrics. It is cool.

I added Traefik to the helm repo and fetched it in order to inspect the values of its Chart:

m helm3 repo add traefik https://helm.traefik.io/traefik
helm fetch --untar traefik/traefik

After a couple of days of study I came up with this set of values; their analysis follows the GitHub gist:
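The gist is not embedded here, but based on the analysis below a values.yml along these lines captures the idea. The IP address and email are placeholders, and the exact keys are my recollection of the 2020-era traefik/traefik chart, so double-check them against the values.yaml you just fetched:

ports:
  web:
    exposedPort: 80
    redirectTo: websecure            # 1. port 80 only redirects to 443
  websecure:
    exposedPort: 443
    tls:
      enabled: true                  # 2. TLS-secured port
      certResolver: letsencryptresolver

service:
  type: LoadBalancer                 # 3. statically assigned IP of the VPS
  spec:
    loadBalancerIP: 203.0.113.10     # placeholder, use your VPS IP

additionalArguments:                 # 4. the letsencryptresolver itself
  - "--certificatesresolvers.letsencryptresolver.acme.email=you@example.com"
  - "--certificatesresolvers.letsencryptresolver.acme.storage=/data/acme.json"
  - "--certificatesresolvers.letsencryptresolver.acme.tlschallenge=true"
  # staging CA server, handy while testing:
  # - "--certificatesresolvers.letsencryptresolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"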

The above values correspond to:

  1. expose port 80 just to redirect to port 443
  2. expose port 443, which is a tls-secured port, with letsencryptresolver configuration as certResolver
  3. use Traefik as a LoadBalancer resource, with a statically assigned IP address (in this case it has to be the IP of our VPS)
  4. configure the letsencryptresolver, with the email of the owner and the ACME challenge type (tlschallenge is the most widely supported one; e.g. you can’t use httpchallenge when port 80 only redirects to 443). I also made a couple of tries with debugging enabled and the ACME server set to the staging endpoint provided by Let’s Encrypt, which can be a good alternative while testing.

Consider that without Ingresses/IngressRoutes, the ports exposed above just do nothing. To install the customized Traefik helm chart you can run:

m helm3 install traefik traefik/traefik -f values.yml

We now need to apply the IngressRoutes for our services. I’m assuming you have already deployed a service and a deployment; in my case the service associated with the website is named gabrio-tognozzi-net, while the docker-registry can be found at docker-registry. After applying the following yaml and waiting a minute or two, we should be able to reach the services through the newly configured ingress controller.
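The yaml is again a gist in the original article; a sketch of what the IngressRoute for the website could look like with the Traefik v2 CRDs of that era follows (the docker-registry would get a very similar one, pointing at its own Service and, if exposed on port 5000, at a dedicated entry point declared in the Traefik values):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: gabrio-tognozzi-net
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`gabrio.tognozzi.net`)
      kind: Rule
      services:
        - name: gabrio-tognozzi-net
          port: 80
  tls:
    certResolver: letsencryptresolver   # this is what triggers certificate generation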

Conclusions

We have now configured an ingress controller that will obtain valid certificates every time an IngressRoute resource is deployed, and a Docker registry that can be used to push and pull private images, all in a single-node cluster that can be used to deploy Kubernetes resources on a $5/month VPS.

Thank you if you’ve read until this point, see you the next time! 👋
