This tutorial shows how to set up a running k8s system using the k3s distribution within just a few minutes. Additionally, it provides a rudimentary introduction to some selected k8s key concepts.
As an example container we use an httpd webserver container (rather than nginx, to avoid confusion with nginx-ingress), but of course you may use any other container instead.
For k8s we use k3s, a lightweight but complete k8s distribution, which is also suitable for production environments.
We've tested this tutorial on Ubuntu, but it probably works the same or at least similarly on other systems.
Install k3s by curl -sfL https://get.k3s.io | sh -
or have a look at: k3s.io
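After the installation has finished, you can verify that the node is up, for example with:
sudo k3s kubectl get nodes
which should list your machine with status "Ready" after a short while.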
As mentioned above, our example uses an Apache httpd container, which serves as a web server.
To run the container within k3s, we use the following common configuration structure:
Source: wiki.ciscolinux.co.uk
The deployment file for our example looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: httpd-demo
  name: httpd-demo
spec:
  replicas: 1                 # defines a ReplicaSet with 1 replica
  selector:
    matchLabels:
      app: httpd-demo         # manages pods with label httpd-demo
  template:
    metadata:
      labels:
        app: httpd-demo
    spec:
      containers:
      - image: httpd
        name: httpd
        ports:
        - containerPort: 80   # (optional) info about the port to be used
Save the file as httpd-deployment.yaml
and apply it to k3s by:
sudo k3s kubectl apply -f httpd-deployment.yaml
After some seconds, you can check with:
sudo k3s kubectl get deployments
Or find out the pod's IP with sudo k3s kubectl get pods -o wide
and check the httpd server with curl <ip>, which should show something similar to: <html><body><h1>It works!</h1></body></html>
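If you don't want to copy the IP by hand, you can also fetch it with a label selector and jsonpath (using the app: httpd-demo label from the deployment above) and curl it in one go:
POD_IP=$(sudo k3s kubectl get pods -l app=httpd-demo -o jsonpath='{.items[0].status.podIP}')
curl "$POD_IP"
This is just a convenience variant of the manual lookup above.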
Each pod has its own IP address, but pods are ephemeral in nature. A Deployment may destroy pods dynamically and recreate them with different IP addresses.
To ensure that a pod can always be reached (independently of its IP address), we use a k8s Service:
apiVersion: v1
kind: Service
metadata:
  name: webserver-svc
spec:
  ports:
  - name: http
    port: 80
  selector:
    app: httpd-demo   # the service targets any pod with label "app: httpd-demo"
This (default) service type is also called a "ClusterIP" service.
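Save the file (e.g. as httpd-service.yaml, the filename is arbitrary) and apply it in the same way as the deployment:
sudo k3s kubectl apply -f httpd-service.yaml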
After applying, you can check the connection to the pod via the service: sudo k3s kubectl get services shows the service's cluster IP, which you can then test with curl <ip>.
To make the service (and thereby the pod) accessible from the outside world, we need to configure ingress. Ingress is a k8s object that provides reverse proxy functionality, allowing access to internal services from outside k8s. An ingress configuration consists of 3 parts:
1. an ingress service (a load balancer that exposes a port to the outside world)
2. an ingress controller (which implements the actual routing)
3. ingress rules (which define which requests are routed to which service)
Points 1 and 2 are already covered by k3s, as k3s contains a built-in ingress service (of type LoadBalancer) as well as a controller. As ingress controller, k3s uses "traefik".
So, we just need to create an ingress rule.
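If you want to convince yourself that the bundled traefik controller is running, you can list the pods in the kube-system namespace (on a default k3s installation) and look for a traefik pod:
sudo k3s kubectl get pods -n kube-system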
The config below will route the URL path / to our webserver service (which in turn calls our httpd container in the end :-) ).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mywebserver-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webserver-svc
            port:
              number: 80
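Save the file (e.g. as httpd-ingress.yaml, again the filename is arbitrary) and apply it:
sudo k3s kubectl apply -f httpd-ingress.yaml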
Again, after applying you may check with sudo k3s kubectl get ingress --all-namespaces. Or open the address http://localhost in your browser, which will show the httpd message "It works!".
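Or, from the command line:
curl http://localhost
which should again return <html><body><h1>It works!</h1></body></html>.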
You have set up and configured a simple but complete k8s system with the container of your choice (or the httpd webserver from the demo)!
If you want to uninstall your k3s installation, run /usr/local/bin/k3s-uninstall.sh.