Hi. I'm jon.404, a Unix/Linux/Database/Openstack/Kubernetes Administrator, AWS/GCP/Azure Engineer, mathematics enthusiast, and amateur philosopher. This is where I rant about that which upsets me, laugh about that which amuses me, and jabber about that which holds my interest most: *nix.
On-prem kubernetes, Part 4

Posted 12-15-2023 at 09:02 PM by rocket357
Updated 01-08-2024 at 11:04 AM by rocket357

Posts in this series:
  1. Background info and goals
  2. pxeboot configs
  3. installing Debian/Xen dom0
  4. installing the k8s domUs
  5. Bootstrapping a bare-bones HA Kubernetes Cluster
  6. Installing the CNI/Network Infrastructure
  7. Installing the CSIs for Persistent Volumes
  8. (this post) Installing/Configuring cert-manager
  9. Installing/Configuring ArgoCD and GitOps Concepts
  10. Installing/Configuring Authelia/Vault and LDAP/OAuth Integrations
  11. Securing Applications with Authelia
  12. Keeping your cluster up-to-date

Github for example configuration files: rocket357/on-prem-kubernetes

Overview

Unless a webapp is trivially simple and handles no sensitive data, it's a good idea to secure it with TLS. Given that automation for TLS exists today (it wasn't always so!), it is essentially zero cost (once set up) to request and utilize signed certificates. This blog post will cover setting up cert-manager (the bits that interact with an automated CA, such as LetsEncrypt) and configuring ingress-nginx to automatically request certs from cert-manager whenever a new ingress with an associated hostname is created.

Why is this so cool, you ask?

The end result: You deploy an application to kubernetes with an associated hostname/URL, and...you get a signed certificate automatically requested, downloaded, and integrated with your ingress so you can immediately use it. And before the certificate expires, it is automatically renewed for you. Ad infinitum.

Coming from an "old school" background where you had to generate a key, then use the key to create a certificate signing request, then send the signing request to a Certificate Authority, then pay them many monies and provide absurd amounts of verification that you are who you say you are and you represent the company you say you represent, then wait, then receive the cert notification, download the cert bundle, then unpack the bundle and configure your services to use the new certificate Every. Single. Time. The Certificate. Expired... cert-manager is a breath of fresh air. Most of this innovation is due to automated CAs, of course, which cert-manager leverages, but it even automates requesting certificate renewals on a schedule so you never have to screw around with key files again. Java be damned.

Install cert-manager

Let's stay true to form and utilize helm:

Code:
helm repo add jetstack https://charts.jetstack.io && helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
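If you want a quick sanity check before moving on, the chart should have created three workloads in the cert-manager namespace:

Code:
# expect cert-manager, cert-manager-cainjector, and cert-manager-webhook pods in Running state
kubectl get pods -n cert-manager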
Now cert-manager should be running in your cluster, but we still haven't configured it to know which automated CA to reach out to. This is as simple as creating a yaml file (I named it cert-manager-le-prod.yaml):

Code:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  # ClusterIssuer is cluster-scoped, so no namespace is needed here
  name: letsencrypt-prod
spec:
  acme:
    email: $YOU@$YOUREMAILPROVIDER
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: prod-account-key
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
This configures cert-manager to leverage LetsEncrypt for certificates and to automate the .well-known challenge paths via ingress-nginx. Thus, when a certificate needs to be issued or renewed, cert-manager will stand up a pod with the appropriate challenge data and configure an ingress to point to that pod for that specific path, so LetsEncrypt can retrieve the challenge data and validate the challenge. Once validated, cert-manager will download the certificate bundle and store it in a kubernetes secret for applications to use.
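To put the issuer in place, apply the file and check that the ACME account registers (a minimal sketch, using the file name from above):

Code:
kubectl apply -f cert-manager-le-prod.yaml
# the READY column should flip to True once registration with LetsEncrypt succeeds
kubectl get clusterissuer letsencrypt-prod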

Now What?

But that's only half of the "challenge" (pun intended) here. We also need to configure our services to actually utilize the certificates, right?

This is where ingress-nginx comes into play (once again). In my private gitea server I keep the yaml for all of the ingresses I'll be deploying, and within each ingress's yaml file I place yaml for a certificate as well. When the ingress is created, the certificate is created alongside it. ingress-nginx sees the tls configuration on the ingress and automatically retrieves and configures the TLS secret that cert-manager placed the signed certificate in. When the TLS secret is updated, ingress-nginx reloads automatically to utilize it (that's the theory, at least).

I use a project called WBO, which is essentially an online whiteboard application that can be deployed via docker. I spend a lot of time with my kids making sure they understand the math and science concepts they're being taught in school, so having a whiteboard application like this (especially a self-hosted one that is local and thus blazing fast) is invaluable.

One of the requirements to run within kubernetes, of course, is that the application can be containerized. Since WBO can be containerized, we can deploy it. But first, a certificate/ingress yaml definition for wbo might look like this:

Code:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wbo-tls
  namespace: wbo
spec:
  secretName: wbo-k8s-$MYTLD-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - wbo.k8s.$MYTLD
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wbo-ingress
  namespace: wbo
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    #nginx.ingress.kubernetes.io/whitelist-source-range: 10.1.0.0/16
spec:
  tls:
  - hosts:
    - wbo.k8s.$MYTLD
    secretName: wbo-k8s-$MYTLD-tls
  ingressClassName: nginx
  rules:
  - host: wbo.k8s.$MYTLD
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: wbo
            port:
              number: 10354
There's a lot going on here, so let's break it down. The first definition (yaml documents are separated by a line that contains just '---') tells cert-manager to create a Certificate with the given name in the listed namespace, store the result under the specified secretName, and use the given ClusterIssuer (you can create namespaced Issuers as well, but for simplicity I'm going with a cluster-wide ClusterIssuer here). The dns name attached to this certificate is wbo.k8s.$MYTLD (replace $MYTLD with your own domain, of course).

The second definition details the ingress that will utilize the certificate. The same metadata applies (name, namespace), but now we also have annotations, which are basically metadata that applications within kubernetes can use to "communicate" with each other for configs and the like. These annotations state that the nginx ingress should use http on the backend (you could use https instead if the application requires it), and the commented-out annotation lets us set a source IP range that is allowed to utilize this ingress. There are some complexities involved with Kubernetes networking, and depending on your configuration the actual client source IP might not be the one that reaches the ingress (proxying takes place), so I've commented that out for now for testing. The important thing here is that the ingress spec.tls.secretName must match the Certificate's spec.secretName. If not, the ingress may end up using the default self-signed dummy certificate and all of this will be in vain.
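If you want to double-check that the two names line up (and, once the cert is issued, that the secret actually exists), something like this does the trick (the resource names come from the yaml above):

Code:
# both commands should print the same secret name
kubectl get certificate wbo-tls -n wbo -o jsonpath='{.spec.secretName}'; echo
kubectl get ingress wbo-ingress -n wbo -o jsonpath='{.spec.tls[0].secretName}'; echo
# once issued, a kubernetes.io/tls secret with that name should show up here
kubectl get secrets -n wbo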

The remainder of the ingress definition deals with the paths on the host and which service to send traffic to. Services allow you to expose pod applications within kubernetes to other pods. A good example of this is the postgres-operator primary service and replica service: as long as your application points to the service (and not the specific pods!), when a database failover occurs the services are updated to point to the new primary/replicas, and after a brief reconnect your application is back in business without a reconfiguration. Services have some fun additional capabilities such as honoring readiness checks (i.e. if a pod is failing its readiness check, the service won't send traffic to it, so a broken pod is automatically taken out of rotation, and if it remains broken long enough, it'll get recreated automatically by the Deployment/DaemonSet/ReplicaSet/StatefulSet/etc. that created it).
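For completeness, the Service that the ingress above routes to might look roughly like this (a sketch only: the app label and targetPort are my assumptions, while the name, namespace, and port 10354 come straight from the ingress definition):

Code:
apiVersion: v1
kind: Service
metadata:
  name: wbo
  namespace: wbo
spec:
  selector:
    app: wbo           # assumption: the wbo pods carry this label
  ports:
  - protocol: TCP
    port: 10354        # matches the backend port in the ingress above
    targetPort: 80     # assumption: the port the container listens on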

A Side Quest Appears: Deployment? DaemonSet? ReplicaSet? StatefulSet? Huh?!

These types of objects in kubernetes oversee the creation, update, and deletion of application pods. It's *rarely* a good idea to deploy a pod directly to kubernetes. Typically you would want to utilize a Deployment (for mostly-stateless applications) or a statefulset (for applications where state matters, such as a database cluster needing a specific EBS volume attached to a specific pod). If instead you need to ensure a given pod is created on every node, a DaemonSet would be used (say, for instance, for host-level monitoring). Most (if not all, though I cannot claim knowledge of *every* operator in existence for kubernetes) operators transparently utilize daemonsets, deployments, replicasets, or statefulsets in some fashion to create the application pods. When writing your own helm charts (should you ignore the proverbial "Thar be dragons" and go this route bravely), you'll need to know the differences between the types of "Sets" in kubernetes. When utilizing third party helm charts (much like utilizing third party software that you didn't write yourself), you have to trust the author did the right thing, or at least exposed knobs for you to be able to pick for yourself.
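To make that concrete, a bare-bones Deployment for a mostly-stateless app like wbo might look roughly like this (a sketch only; the image tag, replica count, and container port are assumptions rather than what my cluster actually runs):

Code:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wbo
  namespace: wbo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wbo
  template:
    metadata:
      labels:
        app: wbo                    # matches the Service selector shown earlier
    spec:
      containers:
      - name: wbo
        image: lovasoa/wbo:latest   # assumption: the upstream WBO image
        ports:
        - containerPort: 80         # assumption: WBO's listening port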

In my experience, most "official" helm charts do the right thing when it comes to "Sets", though it gets a bit dodgy when you start using unofficial charts. Word to the wise: it's always good to check (usually within the "templates" folder of a given helm chart you'll be able to find the specifics on this...though be prepared to read up on go templates at some point in your adventure. Thar be dragons.).
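A low-effort way to see what a chart will actually create, without reading every template by hand, is to render it locally and grep for the workload kinds (using the cert-manager chart from earlier as an example):

Code:
helm template cert-manager jetstack/cert-manager --namespace cert-manager --set installCRDs=true | grep -E '^kind:' | sort | uniq -c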

Deployment of the Certificate and Ingress

At some point I'm either going to write/publish helm charts to wrap all of the ingresses/certificates/etc. that are kubectl applied to the cluster, or I'm going to leverage something like ArgoCD/FluxCD (dons flame-retardant suit for the inevitable flamewar) to GitOps-automate deploying them. Probably the latter: we do leverage ArgoCD at work, but I'm leaning toward FluxCD so I can learn something new. =X

For now, let's see if we're ready to kubectl apply the above yaml file to create the certificate/ingress.

Code:
# check we can locally resolve wbo.k8s.$MYTLD...
$ dig wbo.k8s.$MYTLD

; <<>> DiG 9.18.12-0ubuntu0.22.04.1-Ubuntu <<>> wbo.k8s.$MYTLD
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44135
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;wbo.k8s.$MYTLD.               IN      A

;; ANSWER SECTION:
wbo.k8s.$MYTLD.        86400   IN      CNAME   k8s-loadbalancer.k8s.$MYTLD.
k8s-loadbalancer.k8s.$MYTLD. 60047 IN  A       10.1.15.0

;; Query time: 2 msec
;; SERVER: 10.1.0.1#53(10.1.0.1) (UDP)
;; WHEN: Fri Dec 15 20:49:13 CST 2023
;; MSG SIZE  rcvd: 91

# Check letsencrypt will be able to resolve (if using separate public/private DNS for your domain)
# same thing, but should resolve to your public IP and not an RFC1918 IP...
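# e.g. query a well-known public resolver directly; this should return your public IP
$ dig @1.1.1.1 wbo.k8s.$MYTLD +short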
There's an easy way to automate the public side, of course (we're talking about kubernetes here): external-dns with a "target" annotation (create the dns record pointing at a given public IP, such as your home internet IP if you're running a #HomeLab, or a specific public IP if you're running kubernetes in the cloudz in a VPC) comes in handy here. I haven't set that up just yet, so I'm a noob doing things manually at the moment. YMMV, but expect updated blog posts in the future. (Look at present me shoving work off onto future me! So proud!)
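For reference, that "target" annotation lives on the ingress metadata and looks something like the snippet below (a sketch: it assumes external-dns is deployed and watching ingresses, which mine isn't yet, and the IP is just a placeholder):

Code:
metadata:
  annotations:
    # external-dns would publish wbo.k8s.$MYTLD as this public IP
    # instead of the internal load balancer address it sees on the ingress
    external-dns.alpha.kubernetes.io/target: "203.0.113.10"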

Next up: can we actually reach kubernetes from the public internet? I don't mean the kube-apiserver stuff from earlier (haproxy/keepalived), I mean the ingresses. My network utilizes relayd for this, and sadly I have yet to find a way to automate host configurations in relayd, so I've (again, manually, sadface) added the configuration necessary in my router to have wbo.k8s.$MYTLD reach a kubernetes IP (which is fairly simple, given that MetalLB/Calico are advertising all of that via BGP!).

Assuming LetsEncrypt can resolve and connect to your kubernetes cluster, you should be able to kubectl apply the yaml above and see a cert and challenge created within the wbo namespace. If you watch closely, you'll see a new pod as well (that the service points to for challenge validation) that will be deleted once the challenge is validated and the cert is issued. The cert should go to READY "True" once it is issued. If not, kubectl describe challenge is your friend.
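Assuming the combined certificate/ingress yaml above is saved as wbo-ingress.yaml (my placeholder name), the apply-and-watch sequence looks roughly like this:

Code:
kubectl apply -f wbo-ingress.yaml
# watch the certificate, challenge, and temporary solver pod progress
kubectl get certificate,challenge,pods -n wbo
# once issued, READY should show True
kubectl get certificate wbo-tls -n wbo
# if it stays False, dig into the challenge for details
kubectl describe challenge -n wbo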

Next Steps

That about wraps it up for this blog post. Next time we'll look at automating the ingress/certificate configs as well as ensuring our configs are safely stored in git.

Cheers!