In this blog, you will learn what a Cilium policy is, how pod-to-pod connections work, which pods are allowed to connect and which are denied, the problems you may face while creating and validating a policy, and finally how to check the logs. Let us first understand what Cilium can do:

  • Securing container-based infrastructure
  • Enabling visibility & controls
  • The basis for network controls

What you’ll learn:

  • Create and Apply a Cilium Policy
  • Test Pod-to-Pod Connections
  • Troubleshoot a Policy Misconfiguration
  • Check the Logs for an Applied Cilium Policy

What you'll need:

  • A Google Cloud Platform project to create GKE Cluster

Example 1:

Let me show you how this policy works. Here I’ll deploy the multiubuntu example, which creates five Ubuntu pods. The first pod can connect to the second pod and vice versa, but if any other pod tries to connect to the first pod, the connection is denied. Let us see how to create the pods and how to restrict the connections between them.

First, let us deploy the five multiubuntu pods. Just copy and paste the following command in your terminal.

-> kubectl apply -f https://raw.githubusercontent.com/kubearmor/KubeArmor/main/examples/multiubuntu/multiubuntu-deployment.yaml

namespace/multiubuntu created

deployment.apps/ubuntu-1-deployment created
deployment.apps/ubuntu-2-deployment created
deployment.apps/ubuntu-3-deployment created
deployment.apps/ubuntu-4-deployment created
deployment.apps/ubuntu-5-deployment created

Below you can see that the multiubuntu namespace has been created and the 5 pods are running successfully. You can check this in your terminal using the following command. [Note: pod names may vary]

-> kubectl get pods -n multiubuntu
NAME                                   READY   STATUS    RESTARTS   AGE
ubuntu-1-deployment-5d6b975744-njr57   1/1     Running   0          7m26s
ubuntu-2-deployment-6c464dc68-npsj2    1/1     Running   0          7m25s
ubuntu-3-deployment-7cb8ff55fb-4r2w4   1/1     Running   0          7m24s
ubuntu-4-deployment-666b6dd9-jzmbk     1/1     Running   0          7m23s
ubuntu-5-deployment-7f746bfc45-dlwzx   1/1     Running   0          7m22s

Now let us apply a rule to allow the ubuntu-1 pod to connect to the ubuntu-2 pod. Just copy and paste the following command in your terminal.

kubectl apply -f https://raw.githubusercontent.com/tamilmaran-7/cilium-example/main/pod-network-ingress-egress-allow.yaml

Let us see if the policy is created and running. Just copy and paste the following command in your terminal.

-> kubectl apply -f https://raw.githubusercontent.com/tamilmaran-7/cilium-example/main/pod-network-ingress-egress-allow.yaml
ciliumnetworkpolicy.cilium.io/pod-network-ingress-egress-allow created

-> kubectl get cnp -n multiubuntu
NAME                               AGE
pod-network-ingress-egress-allow   37s
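
For reference, here is a minimal sketch of what such a policy could look like. The labels used in the selectors (container: ubuntu-1, container: ubuntu-2) are assumptions based on the multiubuntu example; check the linked YAML for the actual rule.

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: pod-network-ingress-egress-allow
  namespace: multiubuntu
spec:
  endpointSelector:
    matchLabels:
      container: ubuntu-1        # the policy applies to the ubuntu-1 pod
  ingress:
  - fromEndpoints:
    - matchLabels:
        container: ubuntu-2      # only ubuntu-2 may connect in
  egress:
  - toEndpoints:
    - matchLabels:
        container: ubuntu-2      # ubuntu-1 may only connect out to ubuntu-2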

To check if the policy works, let us get inside the ubuntu-1 pod and connect to the ubuntu-2 pod. Before that, we need to get the pods’ internal IPs. Just copy and paste the following command in your terminal.

kubectl get pods -n multiubuntu -o wide
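
If you only need a single pod’s IP, you can also extract it directly with jsonpath (the pod name below is the one from our output; yours will differ):

kubectl get pod ubuntu-2-deployment-6c464dc68-npsj2 -n multiubuntu -o jsonpath='{.status.podIP}'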

Now let us get into the ubuntu-1 pod, and then we will use telnet to connect to the ubuntu-2 pod. Just copy and paste the following command in your terminal.

-> kubectl exec -it -n multiubuntu ubuntu-1-deployment-5d6b975744-njr57 -- bash

root@ubuntu-1-deployment-5d6b975744-njr57:/# telnet 10.4.0.130 80
Trying 10.4.0.130...
Connected to 10.4.0.130.

Here you can see we are able to connect to ubuntu-2 using its internal IP. Now let us get into the ubuntu-2 pod and connect back to the ubuntu-1 pod using its internal IP. Just copy and paste the following command in your terminal.

-> kubectl exec -it -n multiubuntu ubuntu-2-deployment-6c464dc68-npsj2 -- bash

root@ubuntu-2-deployment-6c464dc68-npsj2:/# telnet 10.4.0.99 80
Trying 10.4.0.99...
Connected to 10.4.0.99.

Let us now get inside the ubuntu-3 pod and try to connect to the ubuntu-1 pod. To do this, just copy and paste the following command in your terminal.

-> kubectl exec -it -n multiubuntu ubuntu-3-deployment-7cb8ff55fb-4r2w4 -- bash

root@ubuntu-3-deployment-7cb8ff55fb-4r2w4:/# telnet 10.4.0.99 80
Trying 10.4.0.99...
telnet: Unable to connect to remote host: Connection timed out

Here the policy denies the connection: ubuntu-3 is not allowed by the policy, so its traffic to ubuntu-1 is dropped.

How to check logs?

Let us now retrieve the logs of a Cilium agent pod. First, we’ll see how many Cilium pods are running. Just copy and paste the following command in your terminal.

kubectl -n kube-system get pods -l k8s-app=cilium
NAME           READY   STATUS    RESTARTS   AGE
cilium-2bwqv   1/1     Running   0          132m
cilium-8hsj4   1/1     Running   0          132m
cilium-znczb   1/1     Running   0          132m

[Note: Cilium pods may vary]
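
As a quick sanity check, you can also ask any of these agent pods which endpoints it manages and whether policy enforcement is enabled for them. cilium endpoint list is a standard agent CLI command, though the exact columns can vary between Cilium versions:

kubectl -n kube-system exec cilium-2bwqv -- cilium endpoint list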

Now let us fetch the logs of one of the Cilium pods and use the grep command to search for entries about our policy.

kubectl -n kube-system logs --timestamps cilium-2bwqv | grep pod-network-ingress-egress-allow
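
You can also dump the policy rules the agent has actually loaded. Inside a Cilium agent pod, cilium policy get prints the imported rules as JSON, which helps confirm that your CiliumNetworkPolicy was translated the way you expect (the output format depends on your Cilium version):

kubectl -n kube-system exec cilium-2bwqv -- cilium policy get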

Cilium monitor is a built-in command that can be used to listen to events in real time. We will get inside the Cilium pod and then run the command. Just copy and paste the following command in your terminal.

-> kubectl -n kube-system exec -it cilium-8hsj4 -- bash

root@<node-name>:/home/cilium# cilium monitor
Listening for events on 2 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
level=info msg="Initializing dissection cache..." subsys=monitor
-> stack flow 0xacc0e68e identity health->remote-node state reply ifindex 0 orig-ip 0.0.0.0: 10.4.0.148:4240 -> 10.128.0.52:36424 tcp ACK
-> stack flow 0xf91bcb45 identity health->remote-node state reply ifindex 0 orig-ip 0.0.0.0: 10.4.0.148:4240 -> 10.128.0.53:37592 tcp ACK
-> endpoint 2892 flow 0x0 identity remote-node->health state established ifindex 0 orig-ip 10.128.0.53: 10.128.0.53:37592 -> 10.4.0.148:4240 tcp ACK
-> endpoint 2892 flow 0x0 identity remote-node->health state established ifindex 0 orig-ip 10.128.0.52: 10.128.0.52:36424 -> 10.4.0.148:4240 tcp ACK
-> endpoint 490 flow 0xfbd03762 identity host->459 state new ifindex 0 orig-ip 10.128.0.51: 10.128.0.51:51510 -> 10.4.0.218:8080 tcp SYN
-> stack flow 0x8e32aa8e identity 459->host state reply ifindex 0 orig-ip 0.0.0.0: 10.4.0.218:8080 -> 10.128.0.51:51510 tcp SYN, ACK
-> endpoint 490 flow 0xfbd03762 identity host->459 state established ifindex 0 orig-ip 10.128.0.51: 10.128.0.51:51510 -> 10.4.0.218:8080 tcp ACK
-> endpoint 490 flow 0xfbd03762 identity host->459 state established ifindex 0 orig-ip 10.128.0.51: 10.128.0.51:51510 -> 10.4.0.218:8080 tcp ACK
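
By default the monitor prints every event, which gets noisy. If you only want to see traffic that policy is dropping (for example, the denied ubuntu-3 connection above), you can filter by event type with the --type flag:

root@<node-name>:/home/cilium# cilium monitor --type drop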

Let’s see how we can troubleshoot a policy if there is a misconfiguration. Sometimes a policy is written in the wrong format and won’t work as expected; there are certain rules that should be followed while writing a policy.

For example, let us take the policy we applied earlier and make a small change to it. Here is the link to the edited policy. Just copy and paste it into your terminal.

-> kubectl apply -f https://raw.githubusercontent.com/tamilmaran-7/cilium-example/main/modified.yaml
ciliumnetworkpolicy.cilium.io/modified created

Now let us get inside the pod and we will try to connect.

kubectl exec -it -n multiubuntu ubuntu-1-deployment-5d6b975744-njr57 -- bash
root@ubuntu-1-deployment-5d6b975744-njr57:/# telnet 10.4.0.130 80
Trying 10.4.0.130...

Here the policy is created, but we are not able to connect to the pod. If you look at the modified policy, you can see that instead of selecting the ubuntu pod by its labels, it points directly at the pod’s IP address. Pod IPs are ephemeral in Kubernetes, so this should not be done and it won’t work as intended. Instead, follow the label-based approach from example 1 to control pod-to-pod connections.
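
To make the anti-pattern concrete, here is a rough sketch of the two approaches (the exact fields in modified.yaml may differ). A CIDR rule pinned to a pod IP breaks as soon as the pod is rescheduled, while a label selector keeps working:

  # Fragile: pins the peer to a pod IP that can change at any time
  egress:
  - toCIDR:
    - 10.4.0.130/32

  # Robust: selects the peer by its labels, whatever IP it gets
  egress:
  - toEndpoints:
    - matchLabels:
        container: ubuntu-2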

Example 2: DNS egress allow only matchName:

Create a Test Pod:

To test the sample Cilium policy, you need to deploy a test pod. Just copy and apply the following commands.

-> kubectl apply -f https://raw.githubusercontent.com/tamilmaran-7/cilium-example/main/ciliumpodtest.yaml

pod/testpod created

-> kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
testpod   1/1     Running   0          54s

You can see the testpod is running successfully, and you can verify this on your own cluster using the same kubectl get pods command. Now let us look at an example.

Let’s take the example of accessing one particular subdomain of Twitter; in this case, we’re trying to allow access to api.twitter.com only. The point of this policy is that only known domains can be reached, which can help protect against phishing attacks.
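
The linked policy template does this with Cilium’s DNS-aware egress rules. As a rough sketch (the selector label is an assumption; see the linked template for the real rule), the key part is a toFQDNs block, plus a DNS rule so the agent can observe the pod’s lookups:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: dns-egress-allow-only-matchname
spec:
  endpointSelector:
    matchLabels:
      app: testpod               # assumed label on the test pod
  egress:
  # allow DNS lookups via kube-dns and let Cilium inspect them
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  # allow traffic only to this exact resolved name
  - toFQDNs:
    - matchName: "api.twitter.com"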

Scenario 1:

Let us see what result we get without applying the policy. Instead of api.twitter.com, we can try mail.google.com.

-> kubectl exec testpod -- curl -I https://mail.google.com | head -1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
HTTP/1.1 301 Moved Permanently

Scenario 2:

Now let's apply the policy and see the difference in execution.

-> kubectl apply -f https://raw.githubusercontent.com/kubearmor/policy-templates/main/mitre/network/cpn-dns-egress-allow-only-matchname.yaml

-> kubectl exec testpod -- curl -I https://api.twitter.com | head -1

% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0  4078    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
HTTP/1.1 404 Not Found

Difference between the two scenarios:

In the first scenario, without the policy, the pod can reach any subdomain. After applying the policy, only the subdomain named in the rule is reachable, which here means you can only access api.twitter.com.

Now let us try to access another Twitter subdomain, say help.twitter.com. Just copy the following command.

-> kubectl exec testpod -- curl -I --max-time 5 https://help.twitter.com | head -1

Here the request fails with a connection timeout, since our policy only allows the one subdomain we mentioned.
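
If you wanted to allow a whole family of subdomains instead of a single name, toFQDNs also supports matchPattern. For example (a sketch, not part of the applied template):

  - toFQDNs:
    - matchPattern: "*.twitter.com"   # any direct subdomain of twitter.com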

Conclusion

This blog provided knowledge about how connections work between pods and how we can restrict them; without proper knowledge of these techniques, misconfigurations can easily happen.

To know more about the KubeArmor security policies, please check out the Policy-Templates and the KubeArmor GitHub.