In this blog, we are going to see how to troubleshoot KubeArmor policies in GKE (Google Kubernetes Engine).

This blog assumes that you have read the concepts guide, which explains all the components and fundamentals of KubeArmor. The blog will cover the following key steps:

  • Creating policies
  • Applying policies
  • Testing policies
  • Troubleshooting policies

Creating Policies

  1. To deploy KubeArmor in GKE, apply this manifest: https://raw.githubusercontent.com/kubearmor/KubeArmor/master/deployments/GKE/kubearmor.yaml
  2. Alternatively, follow this guide to deploy KubeArmor in GKE with the karmor CLI, which makes the installation very simple.
  3. A sample PostgreSQL policy is given below:

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-stigs-postgresql-console
spec:
  tags: ["STIGS", "POSTGRESQL"]
  message: "Log files access has been denied"
  selector:
    matchLabels:
      pod: testpod
  process:
    severity: 5
    matchPaths:
    - path: /usr/bin/psql
      ownerOnly: true
    - path: /bin/psql
      ownerOnly: true
    action: Block

4. The above policy has been created for a PostgreSQL workload. psql is the client binary for PostgreSQL; if anyone tries to access psql, the policy will deny the access. Such binaries can usually be found in the bin directories of the Linux filesystem, as the check below shows.
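Before writing matchPaths entries, it helps to confirm where the binary actually lives inside the container. A minimal sketch, assuming a running pod named postgres-pod (a hypothetical name):

$ kubectl exec -it postgres-pod -- which psql     # first match on PATH
$ kubectl exec -it postgres-pod -- whereis psql   # all known locations of the binary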

5. Let’s apply the policy and see it in action!


Applying Policies

  1. When applying policies, make sure to verify the structure and indentation of the YAML files with a YAML lint tool. If the YAML is malformed, the policy will fail to apply.

Note: The policy must follow the required policy structure of KubeArmor. View here
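As a quick local check before applying, you can lint the file or do a client-side dry run. A minimal sketch, assuming yamllint is installed and the policy is saved as postgresql-policy.yaml:

yamllint postgresql-policy.yaml
kubectl apply --dry-run=client -f postgresql-policy.yaml

If the YAML is malformed, kubectl fails with a parsing error like the one below.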

$ kubectl apply -f postgresql-policy.yaml
error: error parsing postgresql-policy.yaml: error converting YAML to JSON: yaml: line 13: mapping values are not allowed in this context

2. The above error shows that the YAML indentation is wrong, as shown below.

12  process:
13 severity: 5
14   matchPaths:
15   - path: /usr/bin/psql

3. The above error can simply be fixed by correcting the indentation. Use a YAML lint extension in your code editor to avoid these kinds of errors.

12  process:
13   severity: 5
14   matchPaths:
15   - path: /usr/bin/psql

4. Now we have fixed the policy. Let’s try to apply that policy and see it in action.

$ kubectl apply -f postgresql-policy.yaml
kubearmorpolicy.security.kubearmor.com/ksp-stigs-postgresql-console created

5. Now our policy has been created and applied in the GKE cluster. You can verify this as shown below.
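A quick way to confirm the policy object exists is to list the KubeArmorPolicy resources in the cluster:

# list the policies in the current namespace
kubectl get kubearmorpolicy

# inspect the applied spec to confirm it matches what you wrote
kubectl describe kubearmorpolicy ksp-stigs-postgresql-console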

6. Check here for a complete guide and specification for writing a KubeArmor policy.

Testing Policies

  1. When you are testing policies, check them against the KubeArmor security policy specification: https://github.com/kubearmor/KubeArmor/blob/main/getting-started/security_policy_specification.md
  2. Make sure the field names are spelled correctly. Otherwise, kubectl throws an error like the one shown below. These errors typically happen when we misspell field names, so always check against the specification linked above.
$ kubectl apply -f postgres-policy.yaml
error: error validating "postgres-policy.yaml": error validating data: [ValidationError(KubeArmorPolicy.spec.process): unknown field "matchPath" in com.kubearmor.secur
ity.v1.KubeArmorPolicy.spec.process, ValidationError(KubeArmorPolicy.spec.selector): unknown field "matchLabel" in com.kubearmor.security.v1.KubeArmorPolicy.spec.selec
tor, ValidationError(KubeArmorPolicy.spec): unknown field "tag" in com.kubearmor.security.v1.KubeArmorPolicy.spec]; if you choose to ignore these errors, turn validati
on off with --validate=false

3. The above error shows we wrote “matchPath” instead of “matchPaths”, “matchLabel” instead of “matchLabels”, and “tag” instead of “tags”. Let’s fix those errors and test the policy again.
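A hedged tip: if the KubeArmorPolicy CRD in your cluster publishes a structural schema (recent versions do), you can look up the valid field names directly instead of guessing:

# show the fields accepted under spec.process
kubectl explain kubearmorpolicy.spec.process

# show the fields accepted under spec.selector
kubectl explain kubearmorpolicy.spec.selector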

4. This is the fixed policy with correct spellings.

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-stigs-postgresql-console
spec:
  tags: ["STIGS", "POSTGRESQL"]
  message: "Log files access has been denied"
  selector:
    matchLabels:
      pod: testpod
  process:
    severity: 5
    matchPaths:
    - path: /usr/bin/psql
      ownerOnly: true
    - path: /bin/psql
      ownerOnly: true
    action: Block

5. Now our policy is applied and ready.

$ kubectl apply -f postgres-policy.yaml
kubearmorpolicy.security.kubearmor.com/ksp-stigs-postgresql-console created
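Optionally, assuming the karmor CLI is installed, karmor probe reports whether KubeArmor is running and, in recent versions, which pods are armored:

# check KubeArmor status on the cluster
karmor probe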

6. Let’s deploy a PostgreSQL deployment in our cluster to test the policy.

7. To deploy a Postgres service in your cluster, apply the manifest below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgresdb
      volumes:
        - name: postgresdb
          persistentVolumeClaim:
            claimName: postgres-pv-claim

---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/var/lib/postgresql/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: root
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  ports:
    - name: postgres
      port: 5432
      nodePort: 30432
  type: NodePort
  selector:
    app: postgres
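Save the manifest above (for example as postgres.yaml, a hypothetical filename) and apply it:

kubectl apply -f postgres.yaml
kubectl get pods -w   # wait until the postgres pod is Running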

8. Now we have our working postgres pod running in our k8s cluster.


$ kubectl get po
NAME                       READY   STATUS    RESTARTS   AGE
postgres-98c7c5945-wwrvs   1/1     Running   0          4h58m

9. Now let’s try to access the psql binary to test whether the policy is working. The goal is to block access to the psql binary.

$ kubectl exec -it postgres-98c7c5945-wwrvs -- bash
root@postgres-98c7c5945-wwrvs:/# psql
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  role "root" does not exist
root@postgres-98c7c5945-wwrvs:/#

10. As you can see above, the psql binary is still accessible even after applying the policy. The policy is not working as we expected. Let’s troubleshoot where it went wrong and fix the policy.

Troubleshooting Policies

  1. Even after using the correct format and parameters, the policy is not working and the psql binary is still accessible. Why?
  2. Let’s check the binary location with commands such as “whereis” and “which”, so we can figure out whether we added the correct path or not.
  3. Here we checked the psql binary location with the “whereis” command. It shows that we added the correct path in our policy, but the policy is still not working as we expected.

root@postgres-98c7c5945-wwrvs:/# whereis psql
psql: /usr/bin/psql /usr/lib/postgresql/14/bin/psql /usr/share/man/man1/psql.1.gz

4. Let’s dig deeper and find out whether anything else is linked to the psql binary. If there is, we can add that to our policy.


❯ kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
postgres-c86f4884f-z76j8   1/1     Running   0          9h
❯ kubectl exec -it postgres-c86f4884f-z76j8  -- bash
root@postgres-c86f4884f-z76j8:/# ls -la /usr/bin/psql
lrwxrwxrwx 1 root root 37 Nov 11 16:17 /usr/bin/psql -> ../share/postgresql-common/pg_wrapper
root@postgres-c86f4884f-z76j8:/#

5. When we check psql with the “ls -la” command, we see that the psql binary is a symbolic link to “../share/postgresql-common/pg_wrapper”.

6. Since psql is linked to another binary, blocking /usr/bin/psql alone does not block the real executable. We can add the link target to our policy and test again.
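As a side note, readlink can resolve the full symlink chain in one step; a minimal sketch inside the same pod:

root@postgres-c86f4884f-z76j8:/# readlink -f /usr/bin/psql
/usr/share/postgresql-common/pg_wrapper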

7. Here’s the updated policy with that path included:


apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-stigs-postgresql-console
  namespace: default # Change your namespace
spec:
  tags: ["STIGS", "PSQL"]
  message: "Alert! Access to psql database files has been denied"
  selector:
    matchLabels:
      app: postgres # Change this to match your pod's labels
  process:
    severity: 5
    matchPaths:
    - path: /usr/bin/psql
      ownerOnly: true
    - path: /bin/psql
      ownerOnly: true
    - path: /usr/share/postgresql-common/pg_wrapper
      ownerOnly: true
    action: Block

8. Let’s apply this policy and check whether it is blocking the psql binary or not.

9. Now our policy is created without any errors.

❯ kubectl apply -f  ksp-postgres-psql-block.yaml
kubearmorpolicy.security.kubearmor.com/ksp-stigs-postgresql-console created

10. Now our policy successfully blocked the psql binary completely.

❯ kubectl exec -it postgres-c86f4884f-z76j8  -- bash
root@postgres-c86f4884f-z76j8:/# psql
bash: /usr/bin/psql: Permission denied
root@postgres-c86f4884f-z76j8:/#

11. Now we have a working policy that blocks the psql binary in the postgres pod, so we can control who is authorized to use the postgres server.

KubeArmor Logs

  1. To check the logs generated by KubeArmor, first list the KubeArmor pods with the command below.

2. kubectl get pods -A | grep kubearmor

3. This command will list all the KubeArmor pods, as shown below.


> kubectl get pods -A | grep kubearmor
kube-system   kubearmor-cb49x                                       1/1     Running   0          7h45m
kube-system   kubearmor-gcklf                                       1/1     Running   0          7h45m
kube-system   kubearmor-host-policy-manager-5bcccfc4f5-n99jk        2/2     Running   0          7h41m
kube-system   kubearmor-policy-manager-986bd8dbc-k4n9v              2/2     Running   0          7h41m

4. Select one of the KubeArmor DaemonSet pods, named like “kubearmor-xxxxx” (the first two pods in the listing above; the names will differ in your environment), and run the command shown below.

5. kubectl -n kube-system exec -it <kube-armor-pod> -- tail /tmp/kubearmor.log

6. You will see the log entries printed in the terminal as raw, single-line JSON.

7. We can format the output as pretty JSON using a website like https://jsonformatter.org/, or pipe it through a local tool as shown below.
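A minimal sketch, assuming jq is installed on your workstation:

# pretty-print the last few KubeArmor log entries locally
kubectl -n kube-system exec <kube-armor-pod> -- tail /tmp/kubearmor.log | jq .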

8. Here is a sample log entry after formatting:


{
    "timestamp": 1636462493,
    "updatedTime": "2021-11-09T12:54:53.202994Z",
    "hostName": "gke-cluster-1-default-pool-f03ca967-10jc",
    "namespaceName": "default",
    "podName": "postgresdb-test-6c65dd9d7b-r7kqj",
    "containerID": "69290ed611738308d17533d3b16381fe254cb7ed0eae8d6a2f41ae9e133c4555",
    "containerName": "postgresdb",
    "hostPid": 267404,
    "ppid": 97,
    "pid": 14908,
    "uid": 0,
    "type": "ContainerLog",
    "source": "/bin/bash /bin/psql",
    "operation": "Process",
    "resource": "/bin/psql",
    "data": "syscall=SYS_OPEN flags=/bin/psql",
    "result": "Passed"
}

9. Logs like the one above confirm that KubeArmor is capturing psql activity, which lets us verify that our policy is working as expected.

Conclusion

This article covered some of the scenarios a developer or user can face while writing KubeArmor policies, and the steps to troubleshoot them. In our experience, troubleshooting issues commonly fall into two categories:

  1. Incorrect policy format and/or misspelled field names.
  2. Missed exact and/or dependent (symlinked) paths of binaries.

This article covered both scenarios above in depth and highlighted the steps needed to troubleshoot each. An effective policy, written without errors or misconfigurations, allows KubeArmor to protect your workloads!

To know more, connect with us using the social links given below.

KubeArmor website: https://kubearmor.com/

KubeArmor GitHub: https://github.com/kubearmor/KubeArmor

KubeArmor Slack: https://kubearmor.herokuapp.com/