MicroK8s is a powerful yet lightweight Kubernetes distribution perfect for:
- Developers testing Kubernetes locally
- CI/CD environments
- Homelabs
- Edge and IoT systems
- Single-node or small clusters
This configuration is ideal for running MicroK8s with multiple workloads, monitoring tools, and CI/CD pipelines.
What You'll Learn
In this guide, you will:
- Install MicroK8s
- Configure user permissions
- Enable essential add-ons
- Verify the cluster is running
- Deploy a test application
- Ensure services autostart on boot
Why MicroK8s?
MicroK8s is:
- Lightweight: minimal resource footprint
- Easy to install: one-command setup
- Zero external dependencies: everything is bundled
- Production-ready: enterprise-grade components
- Beginner-friendly and pro-friendly: great for learning and for production
It includes core Kubernetes components and plug-and-play add-ons like DNS, Ingress, Helm, and the Dashboard.
Install MicroK8s
Update your system
sudo apt update && sudo apt upgrade -y
Install MicroK8s via Snap
sudo snap install microk8s --classic
This will take a few minutes. MicroK8s installs the latest stable Kubernetes release.
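If you need to pin a specific Kubernetes release instead of the latest stable one, Snap channels can do that. This is a sketch only: the 1.31 track below is an illustrative example, so check the available tracks first and use this in place of the install command above.

```bash
# List the MicroK8s tracks/channels currently published
snap info microk8s

# Example: install from a specific track (1.31/stable is illustrative)
sudo snap install microk8s --classic --channel=1.31/stable
```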
Verify installation
sudo microk8s status --wait-ready
Expected output:
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
Kubernetes is up and running!
Add Your User to the MicroK8s Group
MicroK8s creates its own Unix group to manage access. Add your user to avoid using sudo every time:
sudo usermod -a -G microk8s $USER
Apply group changes immediately without logging out:
newgrp microk8s
Now you can run microk8s commands without sudo!
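As an optional sanity check, you can confirm the group change is active in your current shell before moving on. The ~/.kube ownership fix is taken from common MicroK8s setup guidance and only matters if that directory already exists.

```bash
# "microk8s" should appear in the list of your groups
id -nG

# Optional: make sure your user owns the kubectl cache/config directory
sudo chown -f -R $USER ~/.kube
```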
Pro Tip: Create a Kubectl Alias
Typing microk8s kubectl every time is tedious. Let's create an alias so you can just type kubectl.
Add the alias to your bash configuration:
echo 'alias kubectl="microk8s kubectl"' >> ~/.bashrc
Apply the changes to your current session:
source ~/.bashrc
Test it:
kubectl version --client
From this point forward, all commands in this guide will use kubectl for brevity, but remember it is running the MicroK8s version.
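If you would rather have the alias work outside of bash (for example in scripts or other shells), a snap-level alias is an alternative worth knowing about. This is a sketch using Snap's standard aliasing mechanism:

```bash
# Register "kubectl" as a system-wide alias for microk8s.kubectl
sudo snap alias microk8s.kubectl kubectl

# Remove it later if it conflicts with a standalone kubectl install
sudo snap unalias kubectl
```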
Enable Essential Kubernetes Add-ons
Recommended add-ons
Enable DNS for internal service discovery:
microk8s enable dns
Enable storage for persistent volumes (databases, file storage):
microk8s enable storage
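Note that recent MicroK8s releases renamed this add-on. If `microk8s enable storage` reports it as deprecated or missing, the following should be the equivalent (name taken from current MicroK8s add-on listings):

```bash
# Newer add-on name for the default hostPath-based storage class
microk8s enable hostpath-storage
```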
Enable Ingress for external access (domain routing, SSL):
microk8s enable ingress
Optional but useful add-ons
Enable Helm 3 package manager:
microk8s enable helm3
Enable Kubernetes Dashboard (UI):
microk8s enable dashboard
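Once the dashboard add-on is enabled, MicroK8s ships a helper that proxies the UI to your machine and prints a login token. The command below reflects the documented helper; if it is unavailable on your version, check `microk8s --help`.

```bash
# Start a local proxy to the Kubernetes Dashboard and print an access token
microk8s dashboard-proxy
```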
Enable Metrics Server (for monitoring CPU/RAM usage):
microk8s enable metrics-server
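After the metrics server has been running for a minute or two, you can pull live CPU and memory figures straight from kubectl; this is standard kubectl behaviour once metrics-server is available:

```bash
# Per-node resource usage
kubectl top nodes

# Per-pod resource usage across all namespaces
kubectl top pods --all-namespaces
```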
Check status
microk8s status
You should see all enabled add-ons listed as active.
Test Your Kubernetes Cluster
Check node status
kubectl get nodes
Expected output:
NAME STATUS ROLES AGE VERSION
hostname Ready <none> 5m v1.31.3
STATUS should show Ready.
Check system pods
kubectl get pods --all-namespaces
All components should show Running or Completed status:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-xxxxx 1/1 Running 0 5m
kube-system coredns-xxxxx 1/1 Running 0 4m
ingress nginx-ingress-microk8s-controller-xxxxx 1/1 Running 0 3m
If everything shows Running, your cluster is healthy.
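If some pods are still starting up, you can block until everything is Ready instead of polling by hand. This is a plain kubectl one-liner (it needs a reasonably recent kubectl, and the timeout value is arbitrary):

```bash
# Wait up to 5 minutes for every pod in every namespace to become Ready
# (may time out if any pod is in a Completed state rather than Running)
kubectl wait --for=condition=Ready pod --all --all-namespaces --timeout=300s
```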
Deploy a Test NGINX App
Let's deploy a simple NGINX workload to verify everything works end-to-end:
Create deployment
kubectl create deployment nginx --image=nginx
Expose the deployment
kubectl expose deployment nginx --port=80 --type=NodePort
Get service details
kubectl get svc nginx
You'll see output like:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 10.152.183.45 <none> 80:31234/TCP 10s
Note the NodePort (e.g., 31234).
Access the application
Open your browser and visit:
http://your-server-ip:31234
If you see the NGINX welcome page, your Kubernetes cluster is working end to end!
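You can also check from the server itself without a browser. The jsonpath query below simply extracts whatever NodePort your cluster actually assigned, so you do not have to hard-code 31234:

```bash
# Look up the assigned NodePort and fetch the NGINX welcome page
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -s "http://localhost:${NODE_PORT}" | grep -i "welcome to nginx"
```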
Ensuring MicroK8s Auto-Starts on Boot
Since MicroK8s is installed via Snap, its services are managed through snap commands rather than directly through systemctl. Snap handles startup behaviour automatically, but it is worth verifying that autostart is actually enabled.
Step 1: Check Service Status
sudo snap services microk8s
You should see:
Service Startup
microk8s.daemon-apiserver enabled
microk8s.daemon-containerd enabled
...
If all services show Startup: enabled, MicroK8s is set to autostart.
Step 2: Enable Autostart (If Needed)
If services show disabled, simply run:
sudo snap start --enable microk8s
This starts MicroK8s immediately and enables all of its internal daemons to start automatically at boot.
Step 3: Reboot & Validate
Reboot the server to test autostart:
sudo reboot
After logging back in, run:
microk8s status --wait-ready
If you see microk8s is running, autostart is confirmed.
Verify your NGINX app survived the reboot
kubectl get pods
Your nginx pod should be running.
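For a fuller end-to-end check after the reboot, you can confirm the deployment, the service, and the HTTP response in one go; this reuses the same NodePort lookup shown earlier:

```bash
# The deployment and service should both still exist
kubectl get deployment nginx
kubectl get svc nginx

# And NGINX should still answer on its NodePort
curl -s "http://localhost:$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')" | grep -i "welcome to nginx"
```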
Resource Usage Check
Check how much of your 8GB RAM MicroK8s is using:
free -h
Check disk usage (out of 120GB):
df -h
MicroK8s typically uses:
- ~500MB-1GB RAM at idle
- ~2-3GB storage for base installation
This leaves plenty of resources for your workloads.
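If you enabled the metrics-server add-on earlier, you can also get a Kubernetes-level view of usage rather than the host-level one that free and df provide:

```bash
# Kubernetes' view of node CPU and memory consumption
kubectl top nodes

# Which pods are using the most memory
kubectl top pods --all-namespaces --sort-by=memory
```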
Troubleshooting Tools
Check cluster health (generates a diagnostic report)
sudo microk8s inspect
Restart MicroK8s
sudo microk8s stop
sudo microk8s start
Check specific logs
# Check kubelet logs
journalctl -u snap.microk8s.daemon-kubelet -f
# Check API server logs
journalctl -u snap.microk8s.daemon-apiserver -f
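For problems with a specific workload rather than the cluster itself, the usual kubectl tooling applies. Using the test NGINX deployment from this guide as the example:

```bash
# Events and scheduling details for the deployment and its pods
kubectl describe deployment nginx

# Container logs (add -f to follow)
kubectl logs deployment/nginx

# Recent cluster events, oldest first
kubectl get events --sort-by=.metadata.creationTimestamp
```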
Summary Checklist
| Step | Purpose | Status |
|---|---|---|
| Install MicroK8s | Kubernetes runtime | Done |
| Add user to group | Non-root access | Done |
| Set up kubectl alias | Simplify commands | Done |
| Enable add-ons | DNS, storage, ingress | Done |
| Test with kubectl | Validate cluster | Done |
| Deploy NGINX | Confirm workload scheduling | Done |
| Enable autostart | Ensure services start on boot | Done |
Clean Up Test Deployment (Optional)
Once you've verified everything works, you can remove the test NGINX deployment:
kubectl delete deployment nginx
kubectl delete service nginx
Final Thoughts
Congratulations! Your 4-core, 8GB RAM, 120GB Ubuntu server is now running a fully functional MicroK8s cluster.
You have a robust foundation ready for production workloads.
Previous Guide
Back to Part 1: Prerequisites & Basic Ubuntu Setup
What's Next?
Ready to deploy production applications?
Continue to Part 3: Production Namespace, App Deployment & Configuration
In Part 3, you'll learn how to:
- Create production namespaces for workload isolation
- Deploy PostgreSQL and n8n with proper configurations
- Set up Persistent Volume Claims for databases
- Secure sensitive data using Kubernetes Secrets