09-30-2021, 01:56 PM   #1
vincix (Senior Member)
Registered: Feb 2011 | Distribution: Ubuntu, Centos | Posts: 1,240
iptables in kubernetes - nodeport service isn't exposed outside of the cluster


Hello,

I've deployed the nginx ingress controller (https://kubernetes.github.io/ingress-nginx/deploy/) as a reverse proxy on Kubernetes using the bare-metal manifest (https://raw.githubusercontent.com/ku...al/deploy.yaml). That manifest creates a NodePort service (making it independent of cloud load balancers and the like), which in turn is supposed to expose the service directly outside the cluster.
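For reference, this is roughly how I check what that manifest actually created; the service and namespace names below are the ones the bare-metal manifest normally uses, so adjust them if yours differ:
Code:
# List the NodePort service created by the bare-metal manifest
kubectl -n ingress-nginx get svc ingress-nginx-controller -o wide
# Show which nodePort was assigned to each named port (http/https)
kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{range .spec.ports[*]}{.name}: nodePort {.nodePort}{"\n"}{end}'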

I'm using this on a kubeadm-generated single-node Kubernetes cluster (v1.22.1) on Ubuntu 20.04.

For some reason, Kubernetes doesn't seem to create the iptables rules that would make the NodePort service accessible.

This is how the iptables flow should work:
PREROUTING -> KUBE-SERVICES -> KUBE-NODEPORTS -> KUBE-SVC-EDNDUDH2C75GIR6O (for 443) -> KUBE-SEP-HUGYM3C6SQL7A6WP -> DNAT to 10.220.196.158:443 (which is the IP of the nginx controller pod).
I've added several LOG rules and was able to track the requests (curl over HTTPS from 10.88.88.158) up to the KUBE-NODEPORTS chain.
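These are the kind of LOG rules I mean. A rough sketch only: I insert them by hand into the nat table, and the rule positions and prefixes are just what I happened to use:
Code:
# Rate-limited LOG rules to follow one client (10.88.88.158) through the nat chains;
# kube-proxy may rewrite its own chains on resync, so these can disappear again
iptables -t nat -I PREROUTING 2 -s 10.88.88.158 \
  -m limit --limit 30/min --limit-burst 5 \
  -j LOG --log-prefix "CURL from work machine: "
iptables -t nat -I KUBE-NODEPORTS 1 -s 10.88.88.158 \
  -m addrtype --dst-type LOCAL \
  -m limit --limit 30/min --limit-burst 5 \
  -j LOG --log-prefix "***WORK MACHINE-nodeports**: "
# Watch the matches in the kernel log
journalctl -kf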

This is what the relevant chains look like (verbose):
Code:
Chain PREROUTING (policy ACCEPT 5677 packets, 253K bytes)
num   pkts bytes target     prot opt in     out     source               destination
1    36084 1609K cali-PREROUTING  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* cali:6gwbT8clXdHdC1b1 */
2       74  4440 LOG        all  --  *      *       10.88.88.158         0.0.0.0/0            limit: avg 30/min burst 5 LOG flags 0 level 4 prefix "CURL from work machine: "
3    36084 1609K KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
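(For completeness, these listings come from something like the command below, which is where the num/pkts/bytes columns come from:)
Code:
iptables -t nat -L PREROUTING -n -v --line-numbers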
In KUBE-SERVICES, the packet goes straight to KUBE-NODEPORTS, as I've already mentioned:
Code:
Chain KUBE-SERVICES (2 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 KUBE-SVC-KQVGIOWQAVNMB2ZL  tcp  --  *      *       0.0.0.0/0            10.100.247.166       /* calico-system/calico-kube-controllers-metrics:metrics-port cluster IP */ tcp dpt:9094
2        0     0 KUBE-SVC-EDNDUDH2C75GIR6O  tcp  --  *      *       0.0.0.0/0            10.99.23.104         /* ingress-nginx/ingress-nginx-controller:https cluster IP */ tcp dpt:443
3        0     0 KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  *      *       0.0.0.0/0            10.96.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:443
4        0     0 KUBE-SVC-TCOU7JCQXEZGVUNU  udp  --  *      *       0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
5        0     0 KUBE-SVC-JD5MR3NA4I4DYORP  tcp  --  *      *       0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
6        0     0 KUBE-SVC-RK657RLKDNVNU64O  tcp  --  *      *       0.0.0.0/0            10.99.96.225         /* calico-system/calico-typha:calico-typha cluster IP */ tcp dpt:5473
7        0     0 KUBE-SVC-ZXHPMTAUOSTCCLP4  tcp  --  *      *       0.0.0.0/0            10.97.230.91         /* netbox-community/netbox:http cluster IP */ tcp dpt:80
8        0     0 KUBE-SVC-EZYNCFY2F7N6OQA2  tcp  --  *      *       0.0.0.0/0            10.110.131.17        /* ingress-nginx/ingress-nginx-controller-admission:https-webhook cluster IP */ tcp dpt:443
9       24  1440 KUBE-SVC-I24EZXP75AX5E7TU  tcp  --  *      *       0.0.0.0/0            10.111.108.110       /* calico-apiserver/calico-api:apiserver cluster IP */ tcp dpt:443
10       0     0 KUBE-SVC-ZY426XBBEVRNTJAC  tcp  --  *      *       0.0.0.0/0            10.96.37.163         /* netbox-community/netbox-redis-cache:redis-cache cluster IP */ tcp dpt:6379
11       0     0 KUBE-SVC-ERIFXISQEP7F7OF4  tcp  --  *      *       0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
12       0     0 KUBE-SVC-CPHNSTXR3772MNPS  tcp  --  *      *       0.0.0.0/0            10.98.65.130         /* netbox-community/netbox-postgres:postgres cluster IP */ tcp dpt:5432
13       0     0 KUBE-SVC-EY46YKVNWPMC4DIF  tcp  --  *      *       0.0.0.0/0            10.109.99.206        /* netbox-community/netbox-redis:redis cluster IP */ tcp dpt:6379
14       0     0 KUBE-SVC-CG5I4G2RS3ZVWGLK  tcp  --  *      *       0.0.0.0/0            10.99.23.104         /* ingress-nginx/ingress-nginx-controller:http cluster IP */ tcp dpt:80
15      53  3180 LOG        all  --  *      *       10.88.88.158         0.0.0.0/0            limit: avg 30/min burst 5 ADDRTYPE match dst-type LOCAL LOG flags 0 level 4 prefix "***WORK MACHINE**: "
16    6282  377K KUBE-NODEPORTS  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
So the problem seems to be here, in KUBE-NODEPORTS: there's no rule in this chain that matches ports 80/443, and I don't understand why. Then again, maybe I'm misunderstanding how nodePort is supposed to work (see the quick check after the listing below).

Code:
Chain KUBE-NODEPORTS (1 references)
num   pkts bytes target     prot opt in     out     source               destination
1       46  2760 LOG        all  --  *      *       10.88.88.158         0.0.0.0/0            limit: avg 30/min burst 5 ADDRTYPE match dst-type LOCAL LOG flags 0 level 4 prefix "***WORK MACHINE-nodeports**: "
2        0     0 KUBE-SVC-EDNDUDH2C75GIR6O  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* ingress-nginx/ingress-nginx-controller:https */ tcp dpt:30905
3        0     0 KUBE-SVC-CG5I4G2RS3ZVWGLK  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* ingress-nginx/ingress-nginx-controller:http */ tcp dpt:30923
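In case it's relevant, this is roughly how I compared what kube-proxy programmed against what the Service asked for (names are again the ingress-nginx defaults):
Code:
# Node-port rules kube-proxy actually wrote
iptables -t nat -S KUBE-NODEPORTS
# nodePorts assigned to the ingress-nginx Service (30905/30923 in my case)
kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.spec.ports[*].nodePort}'; echo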
For reference, here are the next chains, which are supposed to lead the packets to the final DNAT towards the nginx controller:
Code:
Chain KUBE-SVC-EDNDUDH2C75GIR6O (2 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 LOG        all  --  *      *       10.88.88.158         0.0.0.0/0            limit: avg 30/min burst 5 ADDRTYPE match dst-type LOCAL LOG flags 0 level 4 prefix "***WORK MACHINE-443CHAIN-1***"
2        0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.220.0.0/16        10.99.23.104         /* ingress-nginx/ingress-nginx-controller:https cluster IP */ tcp dpt:443
3        0     0 KUBE-MARK-MASQ  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* ingress-nginx/ingress-nginx-controller:https */ tcp dpt:30905
4        0     0 KUBE-SEP-HUGYM3C6SQL7A6WP  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* ingress-nginx/ingress-nginx-controller:https */
And the DNAT chain:
Code:
Chain KUBE-SEP-HUGYM3C6SQL7A6WP (1 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 KUBE-MARK-MASQ  all  --  *      *       10.220.196.158       0.0.0.0/0            /* ingress-nginx/ingress-nginx-controller:https */
2        0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* ingress-nginx/ingress-nginx-controller:https */ tcp to:10.220.196.158:443
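Just to rule out a stale endpoint, I also checked that the DNAT target really is the controller pod's IP, roughly like this (names assumed from the ingress-nginx manifest):
Code:
# Pod IP of the controller (should match the DNAT target, 10.220.196.158 here)
kubectl -n ingress-nginx get pods -o wide | grep controller
# Endpoints actually registered behind the Service
kubectl -n ingress-nginx get endpoints ingress-nginx-controller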
Any ideas why this is happening? What am I missing?

Thanks!
 
09-30-2021, 03:02 PM   #2
vincix (Senior Member, Original Poster)
I'd been struggling with this problem for some time, but as it happens I solved it shortly after posting.
In the controller manifest I added explicit hostPort directives (they weren't there at all before), as suggested here: https://stackoverflow.com/questions/...-nginx-ingress
Code:
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              hostPort: 443
              protocol: TCP
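After re-applying the manifest, ports 80/443 started answering directly on the node. A rough way to check (the node IP, Host header and local file name are just placeholders for my setup):
Code:
# Re-apply the modified manifest and make sure the controller pod restarts cleanly
kubectl apply -f deploy.yaml
kubectl -n ingress-nginx get pods -o wide
# Ports 80/443 on the node itself should now reach the controller
curl -k -H 'Host: example.local' https://<node-ip>/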
 
  

