Kubernetes Networking

Kubernetes Services:

A Pod's IP address is assigned by the CNI plugin and is only routable inside the cluster network; it also changes whenever the Pod is recreated, so it is not a stable endpoint even for other nodes, including the control-plane nodes. This is the basic behaviour of Pod networking. To make an application reliably reachable over the network, we use a Kubernetes Service, which exposes the Pods behind a stable address and load-balances traffic across them.

Types of Kubernetes Services:

ClusterIP (default):

Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
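As a minimal sketch, a ClusterIP Service could look like this (the name, ports, and `app: web` label are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative name
spec:
  # type: ClusterIP is the default and may be omitted
  selector:
    app: web           # matches Pods labeled app=web
  ports:
    - port: 80         # port exposed on the Service's cluster-internal IP
      targetPort: 8080 # port the containers actually listen on
```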

NodePort:

Exposes the Service on a static port on each Node's IP. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
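For illustration, the same Service as a NodePort; the `nodePort` value here is an assumption, and if it is omitted Kubernetes picks one from its NodePort range (30000-32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080  # reachable as <NodeIP>:30080 from outside the cluster
```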

LoadBalancer:

Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
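A hedged sketch of a LoadBalancer Service; the cloud provider (if supported) provisions the external load balancer and fills in its address on the Service's status:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # cloud controller provisions an external LB
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```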

ExternalName:

Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name.
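For example, an ExternalName Service that maps a cluster-internal name to an external host (the DNS name is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  type: ExternalName
  externalName: db.example.com  # cluster DNS returns a CNAME to this name
```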

Ingress:

In any Kubernetes cluster, applications must be reachable from outside the cluster so that end users can access them. Kubernetes Ingress is one option available for this purpose.

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

Ingress is a powerful tool for managing traffic to your Kubernetes services.

Kubernetes Ingress components:

  • Ingress API object. The API object declares which Services should be exposed outside the cluster and defines the routing rules for reaching them.

  • Ingress Controller. The Ingress Controller is the actual implementation of Ingress. It is usually a reverse proxy or load balancer that watches Ingress resources and routes incoming traffic to the desired Services within the Kubernetes cluster.
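The two components come together in an Ingress resource like the following sketch; the host, path, and Service name are illustrative, and a running Ingress Controller is assumed:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: example.com          # illustrative host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web        # route matching traffic to Service "web"
                port:
                  number: 80
```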

Network Policies:

Network Policies provide a way to control traffic flow at the pod level. Network Policies can be used to control ingress and egress traffic, restrict traffic to specific pods, and restrict traffic based on the source or destination IP address.

Network Policies are important for securing the network infrastructure of Kubernetes clusters. As with any network security configuration, we should employ several best practices for our Kubernetes network policy:

  • Use the default deny-all network policy to ensure that only explicitly permitted communication occurs.

  • Group Pods that must communicate with one another using the PodSelector parameter.

  • Only allow inter-namespace communication when necessary.

  • Don’t allow unnecessary network communication — even within the Kubernetes cluster.

  • Use caution when allowing Pods within the cluster to receive non-cluster network traffic.

  • Denying outgoing public internet traffic might interfere with specific application updates or validation processes.
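The default deny-all practice above can be sketched as a NetworkPolicy applied per namespace; an empty podSelector selects every Pod in that namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}      # empty selector = all Pods in the namespace
  policyTypes:
    - Ingress
    - Egress           # deny all incoming and outgoing traffic by default
```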

DNS for Services and Pods:

Kubernetes creates DNS records for Services and Pods. You can contact Services with consistent DNS names instead of IP addresses.

Kubernetes publishes information about Pods and Services which is used to program DNS. Kubelet configures Pods' DNS so that running containers can look up Services by name rather than IP.

Services defined in the cluster are assigned DNS names. By default, a client Pod's DNS search list includes the Pod's own namespace and the cluster's default domain.
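To illustrate the search list, the DNS configuration the kubelet writes into a Pod might look like this; the nameserver IP and cluster domain vary by cluster, and `prod` is an assumed namespace:

```
# /etc/resolv.conf inside a Pod in namespace "prod" (illustrative)
nameserver 10.96.0.10                                         # cluster DNS Service IP
search prod.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```

A lookup of the bare name `web` is tried against the search list first, so `web` resolves to the Service `web` in the Pod's own namespace, while `web.other-ns` reaches a Service in another namespace.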

Container Network Interface (CNI) in Kubernetes:

A CNI plugin provides connectivity by assigning IP addresses to Pods, and reachability by programming routes between nodes; how this is done depends on the plugin (for example, a routing daemon or overlay tunnels).

CNI (Container Network Interface) is a CNCF project that defines a specification for configuring network interfaces in Linux containers. Kubernetes uses CNI plugins to provide networking between Pods.

In the CNI architecture, the orchestrator (in Kubernetes, the container runtime on behalf of the kubelet) invokes the CNI plugin with ADD and DEL commands, via the CNI libraries, to attach a container to the network or detach it.
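As a sketch, a CNI network configuration of the kind typically placed in /etc/cni/net.d/ on each node and handed to the plugin on ADD/DEL; the network name, bridge name, and subnet are illustrative:

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24"
  }
}
```

Here the `bridge` plugin attaches containers to a Linux bridge, and the `host-local` IPAM plugin allocates each container an address from the given subnet.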

Some plugins are:

Flannel:

  • one of the most popular plugins

  • provides VXLAN tunneling solution

  • configuration and management are very simple

  • does not support Network Policies

  • has a mode called host-gw which provides a tunnelless solution as long as hosts are connected with direct layer 2 connectivity

Calico:

  • one of the most popular plugins

  • the default choice of many Kubernetes platforms (Kubespray, Docker Enterprise, etc.)

  • uses BGP through the BIRD daemon; another daemon, Felix, programs routes and policy on each node, which BIRD then advertises

  • supports IP-IP encapsulation if BGP cannot be used

  • supports Network Policies

  • uses iptables for its dataplane, and can be used alongside kube-proxy's IPVS mode

  • has a CLI tool named calicoctl

Weave:

  • provides VXLAN tunneling solution

  • all of the nodes are connected in a mesh, which allows it to run on partially connected networks

  • does not scale well because of the mesh structure

  • stores configuration files on pods instead of Kubernetes CRDs or etcd

  • has an encryption library

  • supports Network Policies

  • has a CLI tool called weave

Cilium:

  • provides VXLAN tunneling solution but it can be used with kube-router to provide a tunnelless solution

  • uses BPF and XDP for routing

  • supports Network Policies

  • also has Cilium Network Policies which provides functionality not yet supported in Kubernetes Network Policies (e.g. HTTP request filters)

  • has a CLI tool called cilium

  • requires Linux kernel 4.9 or newer

Kube-router:

  • uses BGP

  • uses IPVS for routing

  • simpler and smaller than Calico (one daemonset vs Felix)

  • supports Network Policies

Thank you for reading! Happy learning!