I’m using an Ubuntu 22.04 server to play around with Kubernetes. Since Ubuntu ships Microk8s and Docker as snaps, I ran into a few problems that I hadn’t seen before…

Snaps

Snaps are app packages for desktop, cloud and IoT that are easy to install, secure, cross-platform and dependency-free. Snaps are discoverable and installable from the Snap Store, the app store for Linux with an audience of millions.

A snap is a bundle of an app and its dependencies that works without modification across Linux distributions. –Canonical Snapcraft

Benefits

  • Snaps run on various Linux distributions.
  • A background service, snapd, checks for updated versions of any installed snaps and updates them automatically when new versions are available.
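
The automatic refresh schedule can be inspected, and a refresh triggered manually, with snapd’s own commands:

snap refresh --time
sudo snap refresh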

Restrictions

  • According to the documentation (and advice from AI chatbots), services such as Microk8s and Docker are more restricted when installed as snaps than when installed as normal packages (via apt-get etc.).

Applications in a Snap run in a “container” [sandbox] with limited access to the host system. The Snap sandbox heavily relies on the AppArmor Linux Security Module from the upstream Linux kernel. –Wikipedia

In theory this means that services running as snaps cannot access certain paths on the host system; however, in my own deployments I was able to mount local paths into Docker containers.
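
For example, a bind mount along these lines worked for me (the paths here are illustrative; the Docker snap’s confinement reportedly limits host paths to those under the user’s home directory):

docker run --rm -v /home/ubuntu/test-data:/data alpine ls /data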

Docker

With Docker installed as a snap, it behaved pretty much as expected, at least for the simple testing I did.
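
For instance, the standard smoke test runs as usual:

docker run --rm hello-world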

microk8s

microk8s kubectl

For those used to simply running kubectl commands: with Microk8s these have to be “wrapped” by the microk8s command, because the snap bundles its own kubectl. For example:

microk8s kubectl get all -n kube-system
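
If typing the prefix gets tedious, snapd can alias the bundled kubectl to the usual name (assuming no other kubectl is installed on the system):

sudo snap alias microk8s.kubectl kubectl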

microk8s configuration files

Microk8s is a lightweight Kubernetes distribution, designed for small systems, test deployments, etc., and like any Kubernetes cluster it is configured by applying YAML configuration files. To avoid losing these config files, I have saved them in a Microk8s folder, with sub-folders per namespace.

microk8s kubectl apply -f <file_name> -n <namespace>
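
As an example, a layout like this keeps the manifests grouped by namespace (the file names are hypothetical):

Microk8s/
├── kube-system/
│   └── dashboard-nodeport.yaml
└── default/
    └── webapp.yaml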

microk8s Dashboard

To help see what is going on within the Kubernetes deployment, the microk8s dashboard was enabled (microk8s enable dashboard); however, it wasn’t immediately visible on the local network. To expose it I had to:

  • Add a NodePort for the dashboard
  • Create a ServiceAccount (user) for the dashboard
  • Create a Secret to enable persistence of the user’s token
  • Create a ClusterRole (role) with limited permissions for this user
  • Create a ClusterRoleBinding to link the user to the role

Note that one of the restrictions was that the NodePort port needed to be within Kubernetes’ default NodePort range of 30000 to 32767.
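
For reference, here is a minimal sketch of the manifests described above. All of the names and the port number are illustrative; the namespace, selector and target port assume the defaults used by the Microk8s dashboard addon, so check them against your own deployment:

apiVersion: v1
kind: Service
metadata:
  name: dashboard-nodeport            # hypothetical name
  namespace: kube-system              # Microk8s deploys the dashboard here
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard     # label used by the dashboard pods
  ports:
    - port: 443
      targetPort: 8443                # the dashboard serves HTTPS on 8443
      nodePort: 30443                 # must be within 30000 to 32767
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-viewer              # hypothetical user
  namespace: kube-system
---
apiVersion: v1
kind: Secret
metadata:
  name: dashboard-viewer-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: dashboard-viewer
type: kubernetes.io/service-account-token   # keeps the user's token persistent
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-viewer
rules:
  - apiGroups: ["", "apps"]           # read-only access; adjust to taste
    resources: ["pods", "services", "deployments", "replicasets", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dashboard-viewer
subjects:
  - kind: ServiceAccount
    name: dashboard-viewer
    namespace: kube-system

Once applied, the token for logging in to the dashboard can be read back from the Secret:

microk8s kubectl get secret dashboard-viewer-token -n kube-system -o jsonpath='{.data.token}' | base64 -d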