On-prem kubernetes, Part 6.5

Posted 12-21-2023 at 11:39 AM by rocket357
Updated 01-08-2024 at 11:01 AM by rocket357

Posts in this series:
  1. Background info and goals
  2. pxeboot configs
  3. installing Debian/Xen dom0
  4. installing the k8s domUs
  5. Bootstrapping a bare-bones HA Kubernetes Cluster
  6. Installing the CNI/Network Infrastructure
  7. Installing the CSIs for Persistent Volumes
  8. Installing/Configuring cert-manager
  9. Installing/Configuring ArgoCD and GitOps Concepts
  10. Installing/Configuring Authelia/Vault and LDAP/OAuth Integrations
  11. (this post) Securing Applications with Authelia
  12. Keeping your cluster up-to-date

GitHub repo for example configuration files: rocket357/on-prem-kubernetes

Overview

As tough as the Authelia install was, today is going to be a cakewalk. I say that because the hard bits for webapp authentication are done (famous last words, amirite?). Today we're going to look at how to integrate Authelia with the various webapps we have running, and how best to secure each application (some support external authentication better than others).

For this we're going to look at a few different types of applications, with example applications given for each. Those types are:
  1. Applications that have their own auth and also support third-party authentication such as OpenID (examples: Vault, ArgoCD)
  2. "Public" apps that don't support authentication, except perhaps an admin account (examples: WBO, changedetection.io, Uptime Kuma)
  3. Applications that have their own auth but don't support external authentication (examples: Komga, Gotify)

Authelia doesn't just handle authentication; it also has authorization capabilities, so you can set ACLs on your webapps (e.g. a request is denied unless it comes from a specific network, or from a group of users with an active MFA session). So at bare minimum, we gain the ability to set ACLs and control traffic in that fashion. Not as solid as full OpenID integration, but better than nothing, I suppose.

The OpenID integration Authelia supports is specifically as a Provider, not as a Relying Party. This means that Authelia can act as an authentication source, but cannot consume external authentication sources. In short, you can't set up Authelia to let you log in to Authelia with your Google or GitHub credentials, but you can set up other sites to let you sign in with Authelia.

So without further ado, let's start with the easiest class of applications: those with full support for OIDC/OpenID.

But first, some fun with Google Security

I need to preface this with a comment about Google indexing that gave me headaches yesterday. While asking a friend to hit the auth endpoint and test his credentials, it came to light that Google was marking my vault.k8s.$TLD as a "dangerous site" that was offering malware to users. I had literally *just* gotten Authelia up and running and integrated for testing purposes (for ACLs), and Google started doing this. Fairly soon I was receiving the same warning in Chrome on my own machine, so I contacted Google and filed a request to have the alert cleared. They cleared it the following morning, but it caused a lot of issues during testing, so I decided to nip the whole thing in the bud: I performed per-ASN IPv4 and IPv6 prefix lookups on Google's ASNs, then constructed a monster pf table to hold those prefixes and dropped new incoming connections to my firewall from any of those IPs. Overkill? Probably. But I don't take kindly to corporations wasting my time when I've already indicated in robots.txt that I'd rather they not index or scan any of my stuff (sidenote: Google's security scans intentionally do *not* respect robots.txt. Now they get to respect a connection refused error).
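
For the curious, the pf side of that boils down to a table and a block rule; a minimal sketch, assuming the collected prefixes are dumped one per line into a file (the filename is arbitrary):

Code:
    # load the collected Google prefixes into a persistent pf table
    table <google> persist file "/etc/pf.google-prefixes"
    # drop new inbound connections from any of those prefixes
    block drop in quick from <google> to any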

My only concern at this point is whether Chrome will run those scans from inside my own network. Scary shit, but this is the world we live in. (Seriously, who is distributing/running the malware now?)

On to Vault (ACLs first!)

First let's tackle the ACL bits. This is as simple as adding a rule like the one below to your authelia-values.yaml file:

Code:
    - domain_regex: '(vault)\.k8s\.$MYTLD'
      policy: two_factor
      networks:
      - mynet
Note that I could have just specified the subnet under networks, but since I'm using this specific subnet repeatedly in my rules, I defined a named network under access_control so I can refer to it by name (sketched below). Now if I ever need to change it, I update the alias in one place rather than potentially dozens of lines across my values file. Resync in ArgoCD and the rule should be in place.
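
For reference, the named network definition sits alongside the rules in the access_control section of Authelia's configuration; a minimal sketch (the subnet here is a placeholder, substitute your own):

Code:
    access_control:
      default_policy: deny
      networks:
        - name: mynet
          networks:
          - 192.168.10.0/24
      rules:
        - domain_regex: '(vault)\.k8s\.$MYTLD'
          policy: two_factor
          networks:
          - mynet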

This is only part of the equation, though. This tells Authelia what to do with traffic destined for vault.k8s.$MYTLD, but now we have to actually get that traffic routed to Authelia. And again, security concerns need to be addressed here.

The configuration that lets us route traffic to Authelia requires what's called a configuration snippet annotation, which tells the ingress-nginx controller to inject a few configuration directives into the protected ingress configuration (these annotations go on the ingress you are protecting, i.e. vault.k8s.$MYTLD, not the Authelia ingress). Problem is, configuration snippets have been known to let authenticated, non-admin Kubernetes users who have permission to create ingresses add arbitrary directives to the nginx configuration, which can leak credentials (this is the issue behind CVE-2021-25742). In other words, it can lead to privilege escalation within your cluster, where a non-admin authenticated Kubernetes user gains additional privileges by stealing credentials.

I looked at a few options to work around this, namely the global-auth-snippet directive (which in theory lets you replace the auth-snippet required by each individual ingress with a single configuration snippet that applies to all ingresses), but I couldn't get it to work correctly. Now that I think about it, I wonder if it was interfering with the Authelia ingress? I don't know.

Truth is, this is a single-tenant cluster (for the time being), and if I do go multi-tenant, I'm not going to allow non-admin ingress creation for this specific reason (besides, there's no automation at my firewall to allow traffic in via relayd, so non-admin ingress creation would be pointless in my environment anyway). So in my specific environment, turning on configuration snippets isn't going to introduce a vulnerability. YMMV: realize this is *my* environment I'm describing, not yours. You need to decide whether configuration snippets would be an issue for you.

That said, turning on configuration snippets is a simple matter of editing the ingress-nginx configmap.
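A minimal sketch of that edit, assuming the default configmap name and namespace from the ingress-nginx Helm chart (adjust both to match your install):

Code:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    data:
      allow-snippet-annotations: "true"

(If you manage ingress-nginx via Helm, setting controller.allowSnippetAnnotations: true in its values does the same thing.) Once that's done, we can move on to setting up the annotations: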

Code:
    nginx.ingress.kubernetes.io/auth-method: GET
    nginx.ingress.kubernetes.io/auth-response-headers: Remote-User,Remote-Name,Remote-Groups,Remote-Email
    nginx.ingress.kubernetes.io/auth-signin: https://auth.k8s.$MYTLD?rm=$request_method
    nginx.ingress.kubernetes.io/auth-snippet: |
      proxy_set_header X-Forwarded-Method $request_method;
    nginx.ingress.kubernetes.io/auth-url: http://authelia.authelia.svc.cluster.local/api/verify
Notice the traffic stays within Kubernetes (the auth-url uses the internal service, not the ingress) so we don't have to bounce back outside of k8s (unless of course the user isn't authenticated, in which case Authelia will redirect them to the sign-in page, which does point to the ingress). It'd be inefficient to send traffic outside the cluster when it doesn't need to go that route.

And that's it. This basic configuration (the rules and annotations) can be rolled out as needed to protect whatever applications we have in the cluster. It really is that simple (for ACLs, at least).

Test hitting the Vault URL (where Google interfered, grumble, grumble) and you should get an Authelia sign-in page instead of Vault. Once you authenticate to Authelia, you should get the Vault login page (having to log in twice isn't super useful, so let's fix that with OpenID next).

Authelia OpenID Provider Config

At the heart of OpenID (and most remote authentication systems like it) is a signature from the provider that essentially states the user is who they claim to be. This is typically built on asymmetric keys: the provider signs a bit of information, along with a hash that shows the data hasn't been altered in transit, using its private key (which only the provider knows, at least in theory!). If the relying party can verify that signature with the provider's public key, the hash it calculates matches the hash the provider signed, and the data states the user is authenticated (and, depending on the system, contains additional information like a session key or something else the end user can use to speed up verification in the future), then the relying party can rest assured no monkey business is taking place, and remotely authenticated user "Bob" is actually Bob and not Eve.

That's a super-dumbed-down version of OpenID, but at the core that's the process. This means we need a key pair, and some signatures are going to take place. Any time you create and use asymmetric keys, you really need to be careful how you handle the private key. It goes without saying, but for an authentication system we should use the strongest key algorithm and length possible (versus keys used to bootstrap a bulk TLS session, which often have reduced requirements unless you're setting up a US DoD website or something similar). OpenID also uses certificates, and that will introduce some complexities that need to be addressed as well.

At this point you might just say "well, the pros here don't seem to outweigh the cons, so I'm sticking with ACLs", and that's totally cool with me. Authelia at this point will do a commendable job protecting your ingresses behind an authentication "wall". I've worked for a lot of large companies, though, and many of them struggle with SSO (single sign-on) capabilities, so it's a bit of a pet peeve of mine having to remember 17 passwords just to access three different internal webapps. This is a hill I'll die on, if need be. =)

At first I thought I'd be really clever and use the auth.k8s.$MYTLD certificate and key pair that already exists. I wanted to bump it up from 2048-bit RSA to at least 4096-bit RSA, but cert-manager wasn't having any of that. Apparently in the past this worked with a few config tweaks, but I wasn't able to get the keySize and keyAlgorithm settings working with my version of cert-manager. I figured that was fine; I'd just roll with RSA 2048 for testing.
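
(For what it's worth, in the cert-manager v1 API those knobs moved under spec.privateKey on the Certificate resource. A minimal sketch, where the resource name and issuer are assumptions, so adjust to match your setup:)

Code:
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: auth-k8s-tls
    spec:
      secretName: auth-k8s-$MYTLD-tls
      dnsNames:
        - auth.k8s.$MYTLD
      issuerRef:
        name: letsencrypt-prod
        kind: ClusterIssuer
      privateKey:
        algorithm: RSA
        size: 4096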

Now, a bit of background is in order here. We need to pass the OIDC certificate and key to Authelia in a secure fashion. Right now the key and cert chain are stored in a secret, which we *could* add to Vault (like we'll do for the HMAC key, which is just a random 64+ character string hashed to create the key Authelia needs), but a problem arises: this certificate is signed by Let's Encrypt, so it will rotate. When it does, Vault will need to be updated...so keeping this in Vault is going to introduce some overhead. We could pass the values in as environment variables, but those only update on pod start, so when a certificate rotation occurs, we'd have to restart the Authelia pods. The only real solution is to pass an environment variable pointing to a file that contains the key/cert, and mount the secret as files under a volume mount. This way, when the file updates, hopefully Authelia will do the right thing and re-read the files (testing is needed for this).

But how do we pass an arbitrary secret in as a volume mount? Luckily, the deployments.yaml template in the Authelia Helm chart has a provision for that:

Code:
        {{- with $mounts := .Values.pod.extraVolumeMounts }}
          {{- toYaml $mounts | nindent 8 }}
        {{- end }}
This basically says: if our values file contains the path "pod.extraVolumeMounts", render it as YAML and indent it to match the rest of the file (so it gets pulled in as if it were part of the template). That wasn't listed in the default values file (that I saw; it's possible I overlooked it before I started editing), so we'll add it in. (Update: it's in the default values file, not sure how I missed it haha.)

The config that should work looks like this:

Code:
pod:
  extraVolumes:
  - name: tls-secrets
    secret:
      secretName: auth-k8s-$MYTLD-tls
  extraVolumeMounts:
  - mountPath: /tls-secrets
    name: tls-secrets
    readOnly: true
And then, in the pod env section:

Code:
  - name: AUTHELIA_IDENTITY_PROVIDERS_OIDC_ISSUER_CERTIFICATE_CHAIN_FILE
    value: /tls-secrets/tls.crt
  - name: AUTHELIA_IDENTITY_PROVIDERS_OIDC_ISSUER_PRIVATE_KEY_FILE
    value: /tls-secrets/tls.key
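As an aside, Authelia can read most of its secrets from *_FILE environment variables in the same way, so the HMAC secret mentioned earlier can also be fed from a mounted file (the mount path below is just an assumption following the same pattern):

Code:
  - name: AUTHELIA_IDENTITY_PROVIDERS_OIDC_HMAC_SECRET_FILE
    value: /secrets/oidc.hmac.key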
Next up we need to add the Vault client configuration for OIDC to our values file.
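A minimal sketch of that client entry, which lives under identity_providers.oidc in Authelia's configuration (the hashed client secret is elided here, and the redirect URI is the callback Vault's UI uses):

Code:
    identity_providers:
      oidc:
        clients:
          - id: vault
            description: HashiCorp Vault
            secret: '$pbkdf2-sha512$...'
            authorization_policy: two_factor
            redirect_uris:
              - https://vault.k8s.$MYTLD/ui/vault/auth/oidc/oidc/callback
            scopes:
              - openid
              - profile
              - groups
              - email

Once that's in place, redeploy in ArgoCD.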

Ok, moment of truth...let's test logging in.

Fun fact: If you plan on integrating a service with OIDC authentication, it usually helps to actually configure that service for OIDC.

After hitting up the Vault docs and setting up the oidc_discovery_url and default role (and actually creating the default role, which wasn't as straightforward as it should've been due to some UI limitations, so I had to use the CLI for that portion), I tested again and was greeted by a successful login to Vault with my chosen "read-only" role. Success!
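
For reference, the CLI portion amounts to something like the following (the role name, policy, and redirect URI are assumptions matching my setup, so adjust to taste):

Code:
    # enable the OIDC auth method and point it at Authelia
    vault auth enable oidc
    vault write auth/oidc/config \
        oidc_discovery_url="https://auth.k8s.$MYTLD" \
        oidc_client_id="vault" \
        oidc_client_secret="$CLIENT_SECRET" \
        default_role="read-only"
    # create the default role that OIDC users log in with
    vault write auth/oidc/role/read-only \
        bound_audiences="vault" \
        allowed_redirect_uris="https://vault.k8s.$MYTLD/ui/vault/auth/oidc/oidc/callback" \
        user_claim="sub" \
        token_policies="default"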

Fun fact: while reading the docs I realized that Vault can itself serve as an OIDC Provider. Neat.

In the present state, we're running on a 2048-bit RSA private key for our OIDC provider. This is far from ideal, but until I can get cert-manager to honor a stronger key, it will have to do. Oh, and our testing client secret needs to be moved somewhere safe, like, uhh, well, I would say Vault, but...hrmmm. That portion shouldn't matter, though, because the pods auth via the Kubernetes mechanism, not OIDC. So yeah, Vault is OK as long as the Kubernetes auth roles can read the client secrets.

Next Steps

At this point we can utilize Authelia to create ACLs for any of our webapps in k8s, as well as integrate OpenID authentication for the apps that support it. I think that's a pretty decent stopping point for today.

Cheers!