On-prem Kubernetes, Part 5

Posted 12-16-2023 at 07:28 PM by rocket357
Updated 01-08-2024 at 11:02 AM by rocket357

Posts in this series:
  1. Background info and goals
  2. pxeboot configs
  3. installing Debian/Xen dom0
  4. installing the k8s domUs
  5. Bootstrapping a bare-bones HA Kubernetes Cluster
  6. Installing the CNI/Network Infrastructure
  7. Installing the CSIs for Persistent Volumes
  8. Installing/Configuring cert-manager
  9. (this post) Installing/Configuring ArgoCD and GitOps Concepts
  10. Installing/Configuring Authelia/Vault and LDAP/OAuth Integrations
  11. Securing Applications with Authelia
  12. Keeping your cluster up-to-date

GitHub repo for example configuration files: rocket357/on-prem-kubernetes

Overview

We've come a long way in this blog series, but there's still plenty left to do. Migrating from kubectl apply to helm install was a huge step forward, much as moving from a standalone binary install to a package manager is preferable in the vast majority of Linux situations.

Right now most of our applications are installed via helm, but a few bits still have to be handled manually. These tend to be ingress and certificate configurations, since many helm charts don't include them, which leaves us with manual steps before the applications are actually useful. In today's blog post we're going to automate that (as much as is possible).

Welcome to GitOps

The term "GitOps" refers to relying on git repos as the "source of truth" for an infrastructure and associated applications. Essentially you want to get to the point that you can check out a repo, make a few modifications that you commit to a testing branch of the repo, make a pull request so other team members can review it, then commit the changes and have the code automatically deployed to your infrastructure. This is a terribly simplified view, of course, as many companies have automation that pushes testing branches to testing infrastructure, performs tests against the new code, and either passes or fails the build based on the results (so it may not even be eligible for a pull request).

But hey, we're working on #HomeLabs here, and I tried asking my wife and kids to review code changes (without success), so I'm going to forgo the pull request part here and just...automate my changes. Helm gives us the ability to roll back when I (inevitably) break stuff, so it's a good thing we've been installing everything with Helm!

Automation Engines, or CI/CD

CI/CD, as it is known, is the practice of continuously integrating changes and continuously delivering those changes to your application infrastructure. The "CI" part is what I described above: branches, automated tests, and pull requests. The "CD" part is automating the deployment of said changes. Since we're a one-man band, we're largely skipping the "CI" side, so we just need to automate the "CD" side. Two tools I mentioned previously (ArgoCD and FluxCD, both fittingly with "CD" in their names) can help with this. I said I'd install and test FluxCD, but like most decisions I've disclosed in this blog series, I changed my mind and stuck with ArgoCD. I like learning new things, but I also like blogging results, so let's go with ArgoCD for today and I'll visit FluxCD in a future post (again with present me pushing work off to future me...progress!).

Prerequisites

We need to set up a "Git" repo we can utilize in our "Ops". Hard to GitOps without git, you know.

Let's assume you have a private repo on github (or, preferably, a private gitea server). All you need to do is create a repo, let's call it "argocd", and put in it all the yamls we've been manually applying to kubernetes. Again, this is mostly ingresses and certificates, but some of the other stuff (postgresql cluster definitions come to mind) may end up here as well. And as I'll discuss in a bit, we'll even automate Helm installations...

Git is fairly straightforward. If you write code and you've never used it (it could happen, even today), I highly recommend reading up on it and using it. Like other version control systems (cvs, mercurial, etc...), it is a "safe place" to put your code. I put quotes around "safe place" because, like email servers, source repos are only as "safe" as the infrastructure they're running on. If you have a gitea server running on a single SATA drive that's been clicking and making noise and you aren't backing it up regularly, you don't really have a source repo =)
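
If that describes your gitea server, a mirror clone on another box (run nightly from cron, say) is a cheap insurance policy. The repo URL and backup path here are just my hypothetical setup:

Code:
# first run: take a full mirror of the repo
git clone --mirror ssh://git@gitea.$MYTLD/myproject/argocd.git /backups/argocd.git
# subsequent runs: refresh the existing mirror in place
git --git-dir=/backups/argocd.git remote update --prune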

Example workflow for setting up the repo itself:

Code:
# create the repo locally and name the branch "main"
git init
git config --global init.defaultBranch main
git branch -m main
# identify ourselves (note: --global affects all repos for this user)
git config --global user.email "argo@$MYTLD"
git config --global user.name "argocd"
# point at the remote repo, then add, commit, and push the yamls
git remote add origin ssh://git@gitea.$MYTLD/myproject/argocd.git
git add $YAMLFILES
git commit -m 'Initial commit of all teh yamls'
git push --set-upstream origin main
If you look in your git repo, you should now see all the yamls.

From "Git" to "Ops"

We have a git repo with our yaml configurations, now how do we deploy those automatically to kubernetes? Let's install ArgoCD and have some fun automating stuff!

Code:
cat > argo_ha-values.yaml <<EOF
redis-ha:
  enabled: true
controller:
  replicas: 1
server:
  replicas: 2
repoServer:
  replicas: 2
applicationSet:
  replicas: 2
configs:
  params:
    server.insecure: true # we're terminating SSL on the ingress.
EOF

# ArgoCD makes the "fun" choice to serve both gRPC and HTTPS on a single port, so...let's separate those out
cat > argocd-ingresses.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-http-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argo-cd-argocd-server
            port:
              name: http
    host: argocd.k8s.$MYTLD
  tls:
  - hosts:
    - argocd.k8s.$MYTLD
    secretName: argocd-ingress-http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-grpc-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argo-cd-argocd-server
            port:
              name: https
    host: grpcargocd.k8s.$MYTLD
  tls:
  - hosts:
    - grpcargocd.k8s.$MYTLD
    secretName: argocd-ingress-grpc
EOF

helm repo add argo https://argoproj.github.io/argo-helm
helm install argo-cd argo/argo-cd --values argo_ha-values.yaml --namespace argocd --create-namespace

# just like yesterday, we need to ensure DNS records exist for both hostnames above...
# then apply the ingresses:
kubectl apply -f argocd-ingresses.yaml
If you look closely at the ingress definitions above, you might notice something different from yesterday. Yesterday we were providing a certificate definition along with an ingress definition, but today we're just supplying the ingress. cert-manager is smart enough to see the cluster-issuer annotation and automatically request certificates for the hosts listed in the ingresses, so we can get away with just the ingress definitions. You could still supply the certificate definitions (that works, just like it did yesterday), but you don't have to unless you want to.

Argo creates a kubernetes secret containing a randomly generated admin password that you'll need to log in via the web interface, so grab that and hit the argocd.k8s.$MYTLD page in a browser. Log in as "admin" with the generated password, then let's get ArgoCD configured.
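
On a stock install that secret should be named argocd-initial-admin-secret (if your chart version differs, poke around with kubectl get secrets -n argocd), so something like this prints the password:

Code:
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d; echo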

Configuring ArgoCD

If you create a custom project in ArgoCD like I did, make sure you provide appropriate resource allow lists and destinations in the project configuration. Save yourself a bit of trouble!

You'll need to add your source repo to your project as well (ArgoCD won't pull from anything that isn't "scoped" in the default configuration), along with the helm repos you plan on using. It's also a good idea to add your ssh host keys under "Repository certificates and known hosts", and double check that the "in-cluster" kubernetes cluster is listed under clusters (we're deploying to the same kubernetes cluster ArgoCD runs in, but you could add external clusters here too, if you so desired). Once all of that is complete, you should be able to add your first application.
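
For reference, the declarative equivalent of a custom project looks roughly like this. The project name, repo URLs, and the wide-open allow lists below are placeholders for my setup; tighten them to taste:

Code:
cat > argocd-project.yaml <<EOF
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: k8s.$MYTLD
  namespace: argocd
spec:
  description: homelab apps
  sourceRepos:
  - ssh://git@gitea.$MYTLD/myproject/argocd.git
  - https://opensource.zalando.com/postgres-operator/charts/postgres-operator
  destinations:
  - server: https://kubernetes.default.svc
    namespace: '*'
  clusterResourceWhitelist:
  - group: '*'
    kind: '*'
EOF

kubectl apply -f argocd-project.yaml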

Under "applications", hit "+New App" and provide an application name, the project name, whether or not you want to manually or automatically sync the application, and any sync options you want. Under that add the source repo (our git repo) and provide a path. I talked a bit about wbo yesterday, and those configs are under the repo in the wbo folder, so for path I'm going to put "wbo". This way, only the yaml files in the wbo folder are considered part of this application. Destination should be your "in-cluster" in whatever namespace you choose, and then you can hit "Create".

Now watch as ArgoCD goes out to the repo, grabs the yaml definitions under the path in your repo, and applies them to kubernetes automatically (you did set it to automatically sync, right?). Magic.

Now the fun begins. Make a change to the yaml you just deployed. ArgoCD should pick up the change (during the next sync period, which might not be immediate) and deploy those changes for you...without you having to lift a finger (well, besides writing the code and committing it to the repo).
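
If you're impatient (and have installed the argocd CLI), you can kick off a sync immediately instead of waiting for the next period:

Code:
argocd login argocd.k8s.$MYTLD
argocd app sync wbo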

And as they say, the rest is just "manually writing the automations". Add whatever you want to ArgoCD, and it'll watch the repo for changes and deploy it for you. All you have to do is write the yaml definitions and commit them.

But what about Helm?

Now comes the real fun. Let's add some helm repos in our project settings (say, postgres-operator...I feel like getting that deployed and managed by ArgoCD now). Once that's added, create the following yaml (ArgoCD has the capability to do what we're about to do, but the frontend doesn't directly support it yet, it seems!):

Code:
# this will look and feel dirty...just trust me...

cat > postgres-operator-helm-install.yaml <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: postgres-operator
  namespace: argocd
spec:
  ignoreDifferences:
  - group: "*"
    kind: OperatorConfiguration
    jsonPointers:
    - /configuration/kubernetes/persistent_volume_claim_retention_policy
  destination:
    name: ''
    namespace: postgres-operator
    server: 'https://kubernetes.default.svc'
  sources:
    - repoURL: >-
        https://opensource.zalando.com/postgres-operator/charts/postgres-operator
      targetRevision: 1.10.1
      chart: postgres-operator
      helm:
        valueFiles:
          - $values/postgres-operator/postgres-operator-values.yaml
    - repoURL: 'ssh://git@gitea.$MYTLD/$MYTLD/argocd.git'
      targetRevision: HEAD
      ref: values
  project: k8s.$MYTLD
  syncPolicy:
    automated:
      #prune: true # BIG NOPE
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
EOF

kubectl apply -f postgres-operator-helm-install.yaml
But dude, you said...I know. Trust me, this is one of the last times we'll have to...wait a minute!

What if you put that yaml in a git repo, then deployed it via ArgoCD? Could you automate adding helm installs to Kubernetes?!?

Yes, my friend, you can.

(NOTE: do not set auto-prune in the application yaml if you self-reference the application (i.e. postgres-operator-helm-install.yaml resides in the same repo directory as the postgres-operator values/helm/etc...). If you do, the deployment will notice the "application" yaml is missing from the newly-created application and delete that application, which removes it from ArgoCD and uninstalls it from Kubernetes...good times! TL;DR: don't store "app of apps" code in the same path as the apps it maintains!)
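
Put another way, a repo layout roughly like this keeps the "app of apps" yamls out of harm's way (the directory names are just my convention):

Code:
argocd/                  # the git repo
├── apps/                # Application yamls (the "app of apps" lives here)
│   └── postgres-operator-helm-install.yaml
├── postgres-operator/   # values file referenced by the Application above
│   └── postgres-operator-values.yaml
└── wbo/                 # plain yamls for the wbo application
    ├── ingress.yaml
    └── certificate.yaml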

Notice the sources section of the Application yaml above. You're providing a helm chart from one repo (that you may or may not control) and a git repo for the values file. Helm uses a "values file" to customize the installation. For example, let's say you have two kubernetes clusters, dev.k8s.$MYTLD and prod.k8s.$MYTLD. The hostnames in dev are, of course, something along the lines of wbo.dev.k8s.$MYTLD, and in production something along the lines of wbo.prod.k8s.$MYTLD. You can template these differences without making any modifications to the helm chart itself, just by supplying the values the chart expects in your values file (which we automate here via ArgoCD).
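
A minimal pair of values files might differ only in the hostname. The keys below depend entirely on what the chart in question expects; these are purely illustrative:

Code:
# values-dev.yaml
ingress:
  host: wbo.dev.k8s.$MYTLD
---
# values-prod.yaml
ingress:
  host: wbo.prod.k8s.$MYTLD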

Also notice the "ignoreDifferences" portion of the Application yaml. From time to time you'll run into changes made by admission controllers or other automatic modifications that occur in kubernetes, and ArgoCD will get "stuck" trying to sync the application (say, for instance, an application is modified by an admission controller to remove a particular block of the yaml, but ArgoCD doesn't see that...it just sees that it syncs and the change doesn't go away, so it repeatedly tries to sync). You can tell ArgoCD that it's ok to ignore certain paths that get modified, so it won't die a lonely death inside your kubernetes cluster thinking it was a complete failure.

Now what?

Now I sit back and chuckle at having converted my blog readers into mindless yaml automatons. Ha! Just kidding. Now you go about writing and committing yaml to your repo, setting up "applications" in ArgoCD for Argo to manage, and once you've added everything you can, you sit back and enjoy it. If a change needs to be made, update the repo and ArgoCD will merrily update your installations for you.

Next Steps

Next time I'll go over more of the automations and start committing a bunch of these changes to the public github examples listed above.

Until then, cheers!