Control Traffic Flow in Your Kubernetes Cluster Using Network Policies

When running your application in a production environment, you may want to specify which components your application can communicate with as well as which components can communicate with your application.

It’s preferable to begin with a zero-trust security policy to make sure all the applications and services you install are not accessible or exposed by default. Then you can specify how other components can communicate with your applications and services.

What Are Network Policies?

In Kubernetes, network policies are the constructs used to control the network traffic at the IP address and port level.

Network plugins enable communication between the different network entities in a Kubernetes cluster. The network plugin installed on your cluster must support network policies; otherwise, creating network policies will have no effect.
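
There is no single command that reports whether your plugin enforces network policies, but one quick way to see which plugin is running (assuming a fairly standard installation that deploys its CNI components into kube-system) is to list the system pods:

# Look for the CNI plugin pods (e.g., Calico, Cilium or Weave Net).
# Plugins such as Calico and Cilium enforce network policies; Flannel on its own does not.
kubectl get pods -n kube-system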

Network policies are configured by creating NetworkPolicy resources, just as you would create any other Kubernetes resource. The NetworkPolicy resource is available under apiVersion networking.k8s.io/v1; more details about the resource can be found in the official Kubernetes documentation.

Types of Network Policies

While specifying the control flow for an application, you can either specify rules for the requests that go out of the application (outbound requests) or rules for the requests that come into your application (inbound requests). In the case of network policies, these outbound and inbound requests map to the egress and ingress policy types, respectively.

By default, your application (which we’ll refer to as a “pod” from here on) is reachable by other components (via ingress requests) and can communicate with other applications (via egress requests) without any restrictions.
To ensure your pod can only send outbound requests to specific sets of pods or IP ranges, you will have to create a network policy of type egress. Similarly, if you want your pod to accept requests only from specific pods or IP ranges, you will have to create a network policy of type ingress.

Let’s talk about a scenario in which you have two pods – app and db – and both have egress and ingress policies specified. Even if the egress policy says that app is allowed to communicate with db, the communication won’t work until the ingress policy for db also specifies that traffic from app is allowed.
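
To make that concrete, here is a minimal sketch of what such a pair of policies could look like. The names and the app=app / app=db labels are purely illustrative, and both pods are assumed to live in the same namespace:

# Sketch only: an egress policy on app and an ingress policy on db.
# Both are needed before traffic from app can reach db.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-egress-to-db
spec:
  podSelector:
    matchLabels:
      app: app
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: db
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-ingress-from-app
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: app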

A Closer Look into the Network Policy Resource

To configure a network policy, we can create a Kubernetes resource of type NetworkPolicy. Below is an example from the Kubernetes documentation describing the resource:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978

To create the network policy, we can create a file netpol.yaml with the content above, then run the following command:

kubectl create -f netpol.yaml
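
If the policy was created successfully, you can confirm it and inspect the parsed rules with:

kubectl get networkpolicy test-network-policy -n default
kubectl describe networkpolicy test-network-policy -n default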

Let’s take a look at what this network policy does inside our Kubernetes cluster.

spec.podSelector

One of the most important fields is spec.podSelector. Recall that pods are not secured by default: anyone can communicate with them, and they can communicate with anyone. But as soon as this network policy is created, pods with the label role: db in the default namespace (the namespace of the network policy) become restricted. They can only send requests to components allowed by the egress rules and only receive requests from components allowed by the ingress rules.

spec.policyTypes

The spec.policyTypes field lists the policy types (Ingress, Egress, or both) that this network policy configures.

spec.ingress

Using the spec.ingress field, we can specify the permitted sources of requests to the pods that this network policy selects. In our case, those are the pods with the role: db label in the default namespace.
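
One subtlety worth calling out: separate entries in the from list are evaluated independently (OR), while selectors combined inside a single entry must all match (AND). The example above uses three separate entries, so traffic matching any one of them is allowed. The illustrative policy below (the name is hypothetical) shows both forms side by side, reusing the labels from the example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-or-vs-and   # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  # Rule 1: two separate "from" entries (OR). Traffic is allowed from any pod in a
  # namespace labeled project=myproject, or from any pod labeled role=frontend in
  # this namespace.
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
  # Rule 2: one "from" entry combining both selectors (AND). Traffic is allowed only
  # from pods labeled role=frontend that run in a namespace labeled project=myproject.
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject
      podSelector:
        matchLabels:
          role: frontend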

spec.egress 

This field specifies the destinations to which the pods selected by this network policy are allowed to send requests.
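
One practical point to keep in mind: as soon as a pod is selected by any egress policy, every outbound connection that isn’t explicitly allowed is blocked, including DNS lookups. A common companion policy (shown here only as a sketch, assuming the cluster DNS runs in kube-system with the usual k8s-app=kube-dns label) allows DNS alongside whatever application traffic you permit:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress   # illustrative name; create it in the namespace whose pods need DNS
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:
  - Egress
  egress:
  # Assumes CoreDNS/kube-dns pods in kube-system carry the standard k8s-app=kube-dns label.
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53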

Details about the network policy can be found on the Kubernetes official docs at https://kubernetes.io/docs/concepts/services-networking/network-policies/.

Mimicking the Zero Trust Scenario

Now let’s look at how to install an application so that its components are isolated by default and then explicitly allowed to communicate with each other.
In this scenario, we have deployed two applications – “frontend” and “backend” – in two separate namespaces named “frontend” and “backend.”

» kubectl create ns frontend
namespace/frontend created
» kubectl create ns backend
namespace/backend created
» kubectl run nginx --image nginx -n frontend
pod/nginx created
» kubectl run nginx --image nginx -n backend
pod/nginx created
» kubectl label pod -n frontend nginx app=frontend
pod/nginx labeled
» kubectl label pod -n backend nginx app=backend
pod/nginx labeled
# create service for backend pod
» kubectl expose pod -n backend nginx --port 80
service/nginx exposed
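
Before adding any policies, it’s worth confirming that the labels and the backend service are in place:

# Verify the pod labels and the backend service.
kubectl get pods -n frontend --show-labels
kubectl get pods -n backend --show-labels
kubectl get svc -n backend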

To make sure that nobody is able to communicate with our backend application, we can start by denying all ingress traffic to the pods in the backend namespace. The network policy below ensures that any ingress request to any pod in the backend namespace is denied by default.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Ingress

As you can see, here we apply an ingress-type policy to all pods (podSelector: {}) of the backend namespace, but we don’t specify any ingress rules. This simply means that no ingress requests will be allowed to any pod in the backend namespace.
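
If you want to take zero trust one step further, the same pattern works for outbound traffic too. A policy like the sketch below (not required for the rest of this walkthrough) would also deny all egress from pods in the backend namespace by default:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress   # illustrative; not used in the steps that follow
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Egress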

Before creating the default-deny-ingress policy shown above, we were able to communicate with backend pods from the frontend namespace.

~ » kubectl exec -it -n frontend nginx -- bash
root@nginx:/# curl nginx.backend.svc.cluster.local:80
<!DOCTYPE html> 
<html> 
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and 
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@nginx:/#

As soon as we create the default-deny-ingress policy, we will no longer be able to communicate with the backend pods from the frontend namespace.

~ » kubectl exec -it -n frontend nginx -- bash
root@nginx:/# curl nginx.backend.svc.cluster.local
curl: (28) Failed to connect to nginx.backend.svc.cluster.local port 80: Connection timed out

To make this communication happen, we will have to create another network policy that allows ingress requests from the frontend pods (in the frontend namespace) to the backend pods. Note that, since we want to allow incoming requests into the backend pods, we will create an ingress-type network policy for the backend pods.
The network policy below allows traffic from pods with the app=frontend label in the frontend namespace to pods with the app=backend label in the backend namespace.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-ingress-to-backend
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: frontend # this label is added automatically by Kubernetes, so we can use it here
        podSelector:
          matchLabels:
            app: frontend
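
As before, save the manifest to a file and create it (the file name here is just an example):

kubectl create -f allow-ingress-to-backend.yaml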

As soon as we create the above network policy, we can verify that we are able to communicate with the backend pods from the frontend pods again.

» kubectl exec -it -n frontend nginx -- bash
root@nginx:/# curl nginx.backend.svc.cluster.local:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
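
To review what we created during this walkthrough, you can list the network policies across all namespaces:

# Lists every NetworkPolicy in the cluster.
kubectl get networkpolicy --all-namespaces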

Thanks for reading this blog!

(For more information about implementing default policies and to see more examples, see the Kubernetes documentation at https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-policies.)
