On-prem kubernetes, Part 8

Posted 01-09-2024 at 01:32 PM by rocket357

Posts in this series:
  1. Background info and goals
  2. pxeboot configs
  3. installing Debian/Xen dom0
  4. installing the k8s domUs
  5. Bootstrapping a bare-bones HA Kubernetes Cluster
  6. Installing the CNI/Network Infrastructure
  7. Installing the CSIs for Persistent Volumes
  8. Installing/Configuring cert-manager
  9. Installing/Configuring ArgoCD and GitOps Concepts
  10. Installing/Configuring Authelia/Vault and LDAP/OAuth Integrations
  11. Securing Applications with Authelia
  12. Keeping your cluster up-to-date
  13. (this post) Securing kubernetes traffic with Calico Network Policies

GitHub repo for example configuration files: rocket357/on-prem-kubernetes

Overview

I failed to properly address network policies in my Calico installation/configuration blog post, so today seems as good a day as any to implement them. It makes zero sense to have LDAP-enabled OIDC integrations if bad actors can simply hit a pod directly from a compromised container (as opposed to going through the ingress, which requires authentication), or, worse, directly reach the database that backs the pods. Yes, we're doing things out of order here, but it's getting done! (Seriously, though, this should be set up before the other bits; it makes everything easier.)

At $DAYJOB we have a bunch of automation that auto-generates NetworkPolicies, helm values files, etc... from cluster templates that define dns domains, geolocations, and cluster naming schemes, as well as which pods should be able to reach which other pods, cidrs, services, and so on (and, not entirely unironically, those network policies get deployed to the myriad of kubernetes clusters we operate via ArgoCD...SMH). That would definitely be overkill for a single on-prem home cluster (the templating portion, I mean, not ArgoCD =) ). So instead, I'll write up some simple yaml definitions of the desired network policies and deploy them alongside the ingress definitions we're already deploying via ArgoCD.

An interesting application I use in kubernetes for monitoring is uptime-kuma, which lets me define hosts, tcp/udp ports, and URLs (among other entities) to monitor, and can send notifications via a ton of different services if (when) those entities go offline. For receiving notifications I primarily use gotify (along with the related Android app for phone notifications) and Discord (since my wife and kids use it for gaming, I find myself there fairly often... though I suggest setting up a separate "server" for notifications unless you really like trolling your family). uptime-kuma has the added bonus that it checks every 60 seconds by default, so you can play around with network policies and get near-realtime feedback on their effects without having to run checks by hand. uptime-kuma is also a fairly simple application with minimal external runtime dependencies (i.e. no external database to connect to), so you won't break the application itself by enabling netpols (well, not without the *right* netpols... more on that in a bit!).
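
For context, a bare-bones uptime-kuma Deployment and Service might look like the sketch below. The image tag and port are the project's defaults, but resource limits and persistent storage for /app/data are deliberately left out, so treat this as a sketch rather than a production manifest; the important part for what follows is the app: uptime-kuma label, which the Calico selector matches on.

Code:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
  namespace: uptime-kuma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma   # matched by the Calico selector app == 'uptime-kuma'
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1
          ports:
            - containerPort: 3001   # uptime-kuma's default web port
---
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma
  namespace: uptime-kuma
spec:
  selector:
    app: uptime-kuma
  ports:
    - port: 3001
      targetPort: 3001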

For Calico, a typical yaml definition would look something like this (ref: Calico NetPol docs):

Code:
---
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: uptime-kuma-ingress-egress-netpols
  namespace: uptime-kuma
spec:
  selector: app == 'uptime-kuma'
  types:
    - Ingress
    - Egress
  ingress:
    - action: Allow # allow ingress traffic for now so we can connect to uptime-kuma and watch the netpol effects in near-realtime
  egress:
    - action: Allow # Allow ICMP traffic outbound from uptime-kuma to monitor various IPs outside of kubernetes.
      protocol: ICMP
      icmp:
        type: 8
      destination:
        nets:
          - 10.1.0.0/16
    - action: Allow # allow DNS traffic to both the in-cluster resolver and upstream resolver
      protocol: TCP
      destination:
        nets:
          - 10.1.0.1/32
          - 10.96.0.10/32
        ports:
          - 53
    - action: Allow # same, via UDP
      protocol: UDP
      destination:
        nets:
          - 10.1.0.1/32
          - 10.96.0.10/32
        ports:
          - 53
    - action: Allow
      protocol: TCP
      destination:
        nets:
          - 10.1.15.0/32  # this is our load balancer shared ip, which is required to reach the ingresses and verify the pods are handling requests
        ports:
          - 80
          - 443
If a pod is not selected by any network policy, Calico allows all traffic to and from it. Once a pod is selected by a policy that lists Ingress (or Egress) in its types, traffic in that direction defaults to deny, and only the traffic explicitly allowed by the matching policies gets through.
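
If you'd rather make that deny behavior explicit instead of relying on the implicit default, a namespace-wide default-deny policy is a common Calico pattern. This is just a sketch (the namespace is assumed to match the example above):

Code:
---
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: uptime-kuma
spec:
  selector: all()  # select every pod in the namespace
  types:
    - Ingress
    - Egress
  # no ingress/egress rules listed: anything not allowed by another policy is denied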

CoreDNS Caveat

One caveat is that CoreDNS does not do recursion, meaning it will hand back referrals to upstream resolvers rather than querying them on your behalf. That would force you to open outbound dns to any IP address, because we can't control where companies/domains put their authoritative resolvers. What we can do, however, is configure CoreDNS to forward any non-local requests to an upstream recursive resolver we control (i.e. 10.1.0.1 above). Accomplishing that is easy: we just need to edit the CoreDNS ConfigMap in kube-system so the relevant portion of the Corefile reads:

Code:
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        forward . 10.1.0.1
Note the forward line added at the bottom. It conflicts with the existing forward . /etc/resolv.conf line, so I removed that as well. Now CoreDNS "handles" recursive dns (thanks to OpenBSD, in this particular instance). If you aren't comfortable reconfiguring the Kubernetes core dns service like this, you can always open outbound dns to 0.0.0.0/0 and utilize the referrals, but I prefer to lock dns traffic down to CoreDNS and my upstream caching recursive resolver (per the example policy above).
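
For reference, the full ConfigMap would then look roughly like this. The surrounding plugins (errors, health, cache, and so on) are the stock kubeadm defaults and may differ slightly on your cluster, so treat this as a sketch; the only real change is the forward line:

Code:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . 10.1.0.1
        cache 30
        loop
        reload
        loadbalance
    }
Since the stock Corefile includes the reload plugin, CoreDNS should pick up the edited ConfigMap on its own after a short delay; if not, a kubectl -n kube-system rollout restart deployment coredns will do it (assuming the deployment is named coredns, as it is on kubeadm-built clusters).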

At this point, you should be able to add and remove IPs/ports as required in your netpol yaml to allow or deny specific traffic. Since this is a namespaced definition, it will only apply within that specific namespace (uptime-kuma, in the example above). If you want non-namespaced policies, you would need to create a GlobalNetworkPolicy instead.
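
As a sketch of what that looks like, here's a hypothetical cluster-wide policy that allows every pod to reach the resolvers on port 53, so the DNS rules don't have to be repeated in every namespaced policy (the addresses are carried over from the example above):

Code:
---
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-dns-egress  # no namespace: GlobalNetworkPolicies apply cluster-wide
spec:
  selector: all()         # every workload endpoint in the cluster
  order: 100              # lower order values are evaluated before unordered policies
  types:
    - Egress
  egress:
    - action: Allow
      protocol: UDP
      destination:
        nets:
          - 10.1.0.1/32   # upstream recursive resolver
          - 10.96.0.10/32 # in-cluster CoreDNS service IP
        ports:
          - 53
    - action: Allow
      protocol: TCP
      destination:
        nets:
          - 10.1.0.1/32
          - 10.96.0.10/32
        ports:
          - 53
Keep in mind that once a pod is selected by any egress policy (including a global one like this), its egress traffic defaults to deny, so a cluster-wide rule like this effectively locks every pod down to DNS-only egress until other policies allow more. That may be exactly what you want, but roll it out carefully.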

Moving On...

With regards to the remaining services, setting up either GlobalNetworkPolicies or namespaced NetworkPolicies is easy; just remember to include DNS and any services your application needs to run (i.e. the backing database, caching layer, LDAP, etc...). Another bonus of using uptime-kuma is that once we have all of our (non-NetworkPolicy-restricted) apps running and being monitored, we can easily tell if a NetworkPolicy change breaks an application.
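
As a sketch of that pattern (the app name, labels, and database selector here are hypothetical, purely for illustration), a policy for an app that talks to a PostgreSQL backend plus DNS might look like:

Code:
---
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: my-app-egress-netpols  # hypothetical app, for illustration only
  namespace: my-app
spec:
  selector: app == 'my-app'
  types:
    - Egress
  egress:
    - action: Allow  # DNS, same resolvers as the uptime-kuma example (TCP 53 omitted for brevity)
      protocol: UDP
      destination:
        nets:
          - 10.1.0.1/32
          - 10.96.0.10/32
        ports:
          - 53
    - action: Allow  # the backing PostgreSQL database, selected by label within the same namespace
      protocol: TCP
      destination:
        selector: app == 'my-app-postgres'  # or use nets: with the database's IP/CIDR
        ports:
          - 5432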

Until Next Time...

I have a ton of NetworkPolicies to write up, so I'll end this here. Next time we'll discuss mechanisms for keeping applications up to date within the cluster.

Cheers!