Introduction

Elasticsearch is a sophisticated search and analytics engine used by a majority of enterprises today. It allows users to quickly search and analyse a wide range of data. However, because Elasticsearch is easy to set up, people frequently overlook the need to strengthen its security in order to safeguard data and ensure only authorized access. In this article, we will learn how to protect the Elastic Stack at runtime using AccuKnox's open-source tooling. AccuKnox hardens and protects workloads using Linux Security Modules (LSMs) such as AppArmor and SELinux, as well as eBPF for observability and network enforcement.

Attacks and Countermeasures

In general, there are four common attacks that can happen to the Elastic Stack:

  1. Port Scanning
  2. Data theft
  3. Data deletion
  4. Logfile manipulation

The countermeasures are often simple and straightforward: minimising exposure, setting up secure access, taking backups, and configuring log auditing and alerts. But what if we fail to put these countermeasures in place? AccuKnox's open-source tools have you covered in such unforeseen events, whether they hit your Kubernetes or VM workloads.

Setting up an Elastic Stack to demonstrate runtime security

In this blog, we will demonstrate how to protect your Elastic Stack against such threats by implementing runtime security tools from AccuKnox. These tools also analyse the application and generate policies that can be enforced by Linux Security Modules (LSMs) like AppArmor and SELinux.

Let us deploy the Elastic Stack to a Kubernetes environment, use AccuKnox open-source tools to generate zero-trust runtime security policies, apply them to the workload, and compare the state of the cluster before and after installing the AccuKnox agents.

The purpose of this scenario is to demonstrate AccuKnox's zero-trust protection in a realistic environment.

Let's create a Kubernetes cluster

gcloud container clusters create sample-cluster --zone us-central1-c 

Once the cluster is up and running, we will install Elasticsearch, Kibana, and Fluentd into the elk namespace.
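
The manifests assume an elk namespace; if it does not already exist in your cluster, create it first (a small assumed step, in case the manifests do not create it themselves):

kubectl create namespace elk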

kubectl -n elk apply -f https://raw.githubusercontent.com/accuknox/samples/main/elasticstack/elasticsearch_statefulset.yaml
kubectl -n elk apply -f https://raw.githubusercontent.com/accuknox/samples/main/elasticstack/kibana-deployment.yaml
kubectl -n elk apply -f https://raw.githubusercontent.com/accuknox/samples/main/elasticstack/fluentd.yaml
kubectl -n elk apply -f https://raw.githubusercontent.com/accuknox/samples/main/elasticstack/elasticsearch_svc.yaml
kubectl -n elk apply -f https://raw.githubusercontent.com/accuknox/samples/main/elasticstack/elasticsearch-pvc.yaml
kubectl -n elk apply -f https://raw.githubusercontent.com/accuknox/samples/main/elasticstack/kibana-svc.yaml

We will check the status of our Elastic Stack before continuing:

kubectl -n elk get all

NAME                         READY   STATUS             RESTARTS   AGE
pod/es-cluster-0             1/1     Running            0          110m
pod/fluentd-c898r            1/1     Running            0          110m
pod/kibana-857548474-5bdjs   1/1     Running            0          110m

NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                         AGE
service/elasticsearch   LoadBalancer   10.60.8.108   34.133.120.128   9200:30363/TCP,9300:31644/TCP   118m
service/kibana          LoadBalancer   10.60.7.106   104.154.94.157   5601:30614/TCP                  118m

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/fluentd   1         1         1       1            1           <none>          118m

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kibana   1/1     1            1           118m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/kibana-84cf7f59c   0         0         0       118m
replicaset.apps/kibana-857548474   1         1         0       118m

NAME                          READY   AGE
statefulset.apps/es-cluster   1/1     118m

From the output, we can see that all the deployments are running fine in the elk namespace.

For the complete code, refer to the accuknox/samples GitHub page.
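
As an optional sanity check, you can hit Elasticsearch's cluster-health endpoint on the LoadBalancer address shown above (the external IP will differ in your cluster):

curl "http://34.133.120.128:9200/_cluster/health?pretty"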

Runtime protection using AccuKnox open-source tools

AccuKnox enables you to protect your workloads at runtime. It does this by letting you configure policies for application and network behavior (or auto-discover them) using KubeArmor, Cilium, and the Auto Policy Discovery tool.

KubeArmor
KubeArmor is open-source software that enables you to protect your cloud workloads at runtime.

The problem KubeArmor solves is preventing cloud workloads from executing malicious activity at runtime. Malicious activity is any activity that the workload was not designed for or is not supposed to perform.
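
To make this concrete, here is a minimal, hand-written sketch of a KubeArmor policy (not one of the auto-discovered policies shown later) that would block package-manager binaries from ever executing inside the Elasticsearch pods; the policy name and paths are illustrative:

cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-pkg-mgmt-tools   # illustrative name
  namespace: elk
spec:
  selector:
    matchLabels:
      app: elasticsearch
  process:
    matchPaths:
    - path: /usr/bin/apt       # assumed paths; blocking is harmless if they are absent from the image
    - path: /usr/bin/apt-get
  action: Block
EOF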

Cilium
Cilium is an open-source project that provides eBPF-based networking, security, and observability for cloud-native environments such as Kubernetes clusters and other container orchestration platforms.
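
Beyond plain L3/L4 rules, Cilium can also enforce L7 policies. As an illustration only (we will rely on the auto-discovered policies later in this post), a rule allowing Kibana to issue only GET requests to Elasticsearch's health endpoint could look like this:

cat <<EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: es-allow-health-only   # illustrative name, not auto-generated
  namespace: elk
spec:
  endpointSelector:
    matchLabels:
      app: elasticsearch
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: kibana
    toPorts:
    - ports:
      - port: "9200"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/_cluster/health"
EOF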

Auto Policy Discovery for your Elastic Stack

Even though writing KubeArmor and Cilium (system and network) policies is not a big challenge, AccuKnox open-source simplifies it one step further with a CLI tool for auto-discovered policies. The Auto-Discovery module helps users by observing the application's flows and generating policies based on them.

Discovering policies has never been easier: with two simple commands you can set everything up and generate policies without any trouble.

Let us now make use of AccuKnox's Auto Discovered Policies to generate zero-trust runtime security policies to secure our workload.

The auto-discovered zero-trust runtime security policies can be generated with two commands: we deploy Cilium and KubeArmor to the cluster along with a MySQL pod that stores the discovered policies, from where they can be downloaded with a single command.

First, we will use the below command to install all prerequisites.

curl -s https://raw.githubusercontent.com/accuknox/tools/main/install.sh | bash

Once the command runs successfully, it will install the following components into your cluster (a quick verification sketch follows the list):

  • KubeArmor protection engine
  • Cilium CNI
  • Auto policy discovery engine
  • MySQL database to keep discovered policies
  • Hubble Relay and KubeArmor Relay
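
Before moving on, you can optionally verify that the agents came up; the exact namespaces and labels may vary slightly with the script version, so treat these as a sketch:

kubectl get pods -n kube-system -l k8s-app=cilium
kubectl get pods -A -l kubearmor-app=kubearmor
kubectl get pods -n explorer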

Once this is done, we can invoke the second script, which will download the auto-discovered policies from the MySQL database and store them locally. For this we will issue the command below:

curl -s https://raw.githubusercontent.com/accuknox/tools/main/get_discovered_yamls.sh | bash

You should see output similar to the following:

Got 17 cilium policies in file cilium_policies.yaml
Got 1 kubearmor policies in file kubearmor_policies_default_elk_elasticsearch_jsyhtyat.yaml
Got 1 kubearmor policies in file kubearmor_policies_default_elk_kibana_mlqknccc.yaml
Got 1 kubearmor policies in file kubearmor_policies_default_explorer_knoxautopolicy_ubksnuie.yaml
Got 1 kubearmor policies in file kubearmor_policies_default_explorer_mysql_uyvpobhy.yaml

Within seconds of running the auto policy discovery tool, it generated 17 Cilium policies and 4 curated KubeArmor policies.

Let us take a look at some of the auto-discovered policies

Cilium Policy #1

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: autopol-egress-svqwtqlywrwmymw
  namespace: elk
spec:
  endpointSelector:
    matchLabels:
      app: elasticsearch
  egress:
  - toEndpoints:
    - matchLabels:
        k8s-app: kube-dns
        k8s:io.kubernetes.pod.namespace: kube-system
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP

Cilium Policy #2

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: autopol-egress-fdjkltpvfovbbfj
  namespace: elk
spec:
  endpointSelector:
    matchLabels:
      app: kibana
  egress:
  - toEndpoints:
    - matchLabels:
        app: elasticsearch
        k8s:io.kubernetes.pod.namespace: elk
    toPorts:
    - ports:
      - port: "9200"
        protocol: TCP

Cilium Policy #3

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: autopol-ingress-qtlysohntqyqwmm
  namespace: elk
spec:
  endpointSelector:
    matchLabels:
      app: elasticsearch
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: kibana
        k8s:io.kubernetes.pod.namespace: elk
    toPorts:
    - ports:
      - port: "9200"
        protocol: TCP

KubeArmor Policy #1

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: autopol-system-2236087602
  namespace: elk
spec:
  severity: 1
  selector:
    matchLabels:
      app: elasticsearch
  file:
    matchPaths:
    - path: /etc/hosts
      fromSource:
      - path: /usr/share/elasticsearch/jdk/bin/java
    - path: /usr/share/elasticsearch/config
      fromSource:
      - path: /usr/share/elasticsearch/jdk/bin/java
    matchDirectories:
    - dir: /sys/
      fromSource:
      - path: /usr/share/elasticsearch/jdk/bin/java
  network:
    matchProtocols:
    - protocol: tcp
      fromSource:
      - path: /usr/share/elasticsearch/jdk/bin/java
    - protocol: udp
      fromSource:
      - path: /usr/share/elasticsearch/jdk/bin/java
  action: Allow
---

KubeArmor Policy #2

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: autopol-system-876225259
  namespace: elk
spec:
  severity: 1
  selector:
    matchLabels:
      app: kibana
  process:
    matchPaths:
    - path: /usr/bin/tr
      fromSource:
      - path: /usr/bin/bash
    - path: /usr/local/bin/kibana-docker
------------------------------SNIP--------------------------------------------
  file:
    matchPaths:
    - path: /dev/null
      fromSource:
      - path: /usr/share/kibana/data/headless_shell-linux/headless_shell
      - path: /usr/share/kibana/node/bin/node
------------------------------SNIP--------------------------------------------
  network:
    matchProtocols:
    - protocol: tcp
      fromSource:
      - path: /usr/share/kibana/node/bin/node
    - protocol: udp
      fromSource:
      - path: /usr/share/kibana/node/bin/node
  action: Allow

We also have predefined policies in the policy-templates GitHub repository that can be used to achieve the same level of runtime security without generating auto-discovered policies. You can find them at

policy-templates/elastic at main · kubearmor/policy-templates. The only downside is that you need to know the namespace and labels of your Elastic and Kibana workloads.
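
A rough sketch of how you might consume these templates (the repository layout may change over time; review each file and adjust metadata.namespace and the selector labels to match your deployment before applying):

git clone https://github.com/kubearmor/policy-templates.git
# edit namespace/labels in the Elastic templates, then:
kubectl apply -f policy-templates/elastic/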

The Policies in action

It is time to verify whether we achieved zero trust using the auto-discovered policies generated by the AccuKnox open-source tools. To test this, we will attack the deployment with a publicly known exploit.
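
If you have not already applied the downloaded policy files to the cluster, do so now (file names taken from the earlier output; the random suffixes will differ in your run):

kubectl apply -f cilium_policies.yaml
kubectl apply -f kubearmor_policies_default_elk_elasticsearch_jsyhtyat.yaml
kubectl apply -f kubearmor_policies_default_elk_kibana_mlqknccc.yaml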

Before launching the attack, let us verify that the policies are applied correctly to the cluster:

kubectl get cnp,ksp -A

NAMESPACE   NAME                                                            AGE
elk         ciliumnetworkpolicy.cilium.io/autopol-egress-fdjkltpvfovbbfj    15m
elk         ciliumnetworkpolicy.cilium.io/autopol-egress-svqwtqlywrwmymw    15m
elk         ciliumnetworkpolicy.cilium.io/autopol-ingress-qtlysohntqyqwmm   14m

NAMESPACE   NAME                                                               AGE
elk         kubearmorpolicy.security.kubearmor.com/autopol-system-2236087602   14m
elk         kubearmorpolicy.security.kubearmor.com/autopol-system-876225259    13m

Initiating the Attack scenario

Let us reproduce a recent attack against Elasticsearch: the anonymous database dump tracked as CVE-2021-22146. With all the policies applied to the cluster, we will launch the attack. You can download the exploit code from our samples repository:

samples/CVE-2021-22146.py at main · accuknox/samples

wget -O exploit1.py https://raw.githubusercontent.com/accuknox/samples/main/elasticstack/CVE-2021-22146.py

All done! Let's run the exploit and see what we get.

python3 exploit1.py -s 34.133.120.128 -p 9200 -i 5
      _           _   _         _
  ___| | __ _ ___| |_(_) ___ __| |_   _ _ __ ___  _ __
 / _ \ |/ _` / __| __| |/ __/ _` | | | | '_ ` _ \| '_ \
|  __/ | (_| \__ \ |_| | (_| (_| | |_| | | | | | | |_) |
 \___|_|\__,_|___/\__|_|\___\__,_|\__,_|_| |_| |_| .__/
                                                 |_|

The attack does not succeed, because the policies deny external connections to the Elasticsearch pod. To make sure that the Kibana and Elasticsearch pods can still talk to each other, we will exec into the Kibana pod and send a curl request to Elasticsearch:

 kubectl -n elk exec -it pod/kibana-857548474-vrkps -- bash

curl 34.133.120.128:9200
{
  "name" : "es-cluster-0",
  "cluster_name" : "k8s-logs",
  "cluster_uuid" : "_na_",
  "version" : {
    "number" : "7.2.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "508c38a",
    "build_date" : "2019-06-20T15:54:18.811730Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
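
Note that we curled the external LoadBalancer address here; from inside the cluster you could equally use the Kubernetes service name (assuming DNS egress from the Kibana pod is permitted by one of the other auto-discovered policies):

curl http://elasticsearch.elk.svc.cluster.local:9200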

If we try to curl the Elasticsearch endpoint from anywhere else, we get a connection timeout error:

curl 34.133.120.128:9200

curl: (28) Failed to connect to 34.133.120.128 port 9200: Connection timed out

Now it's time to see how the Elastic and Kibana deployments behave once the runtime security policies are removed:

kubectl -n elk delete cnp,ksp --all

ciliumnetworkpolicy.cilium.io "autopol-egress-fdjkltpvfovbbfj" deleted
ciliumnetworkpolicy.cilium.io "autopol-egress-svqwtqlywrwmymw" deleted
ciliumnetworkpolicy.cilium.io "autopol-ingress-qtlysohntqyqwmm" deleted
ciliumnetworkpolicy.cilium.io "cnp-kibana-policy" deleted
kubearmorpolicy.security.kubearmor.com "autopol-system-2236087602" deleted
kubearmorpolicy.security.kubearmor.com "autopol-system-876225259" deleted

Time to attack and see the difference.

python3 exploit1.py -s 34.133.120.128 -p 9200 -i 5
      _           _   _         _
  ___| | __ _ ___| |_(_) ___ __| |_   _ _ __ ___  _ __
 / _ \ |/ _` / __| __| |/ __/ _` | | | | '_ ` _ \| '_ \
|  __/ | (_| \__ \ |_| | (_| (_| | |_| | | | | | | |_) |
 \___|_|\__,_|___/\__|_|\___\__,_|\__,_|_| |_| |_| .__/
                                                 |_|
                                                 
  {"error":{"root_cause":[{"type":"json_parse_exception","reason":
  "Unrecognized token '$': was expecting ('true', 'false' or 'null')\n at 
  [Source: [email protected]; line: 1, 
  column: 3]"}],"type":"json_parse_exception","reason":"Unrecognized token '$': was 
  expecting ('true', 'false' or 'null')\n at 
  [Source: [email protected]; line: 1, 
  column: 3]"},"status":500}{"error":{"root_cause":[{"type":"json_parse_exception",
  "reason":"Unrecognized token '$': was expecting ('true', 'false' or 'null')\n at 
  [Source: [email protected]; line: 1, 
  column: 3]"}],"type":"json_parse_exception","reason":"Unrecognized token '$': was 
  expecting ('true', 'false' or 'null')\n at 
  [Source: [email protected]; line: 1, 
  column: 3]"},"status":500}{"error":{"root_cause":[{"type":"json_parse_exception",
  "reason":"Unrecognized token '$': was expecting ('true', 'false' or 'null')\n at 
  [Source: [email protected]; line: 1, 
  column: 3]"}],"type":"json_parse_exception","reason":"Unrecognized token '$': was 
  expecting ('true', 'false' or 'null')\n at 
  [Source: [email protected]; line: 1, 
  column: 3]"},"status":500}{"error":{"root_cause":[{"type":"json_parse_exception",
  "reason":"Unrecognized token '$': was expecting ('true', 'false' or 'null')\n at 
  [Source: [email protected]; line: 1, 
  column: 3]"}],"type":"json_parse_exception","reason":"Unrecognized token '$': was 
  expecting ('true', 'false' or 'null')\n at 
  [Source: [email protected]; line: 1, 
  column: 3]"},"status":500}{"error":{"root_cause":[{"type":"json_parse_exception",
  "reason":"Unrecognized token '$': was expecting ('true', 'false' or 'null')\n at 
  [Source: [email protected]; line: 1, 
  column: 3]"}],"type":"json_parse_exception","reason":"Unrecognized token '$': was 
  expecting ('true', 'false' or 'null')\n at 
  [Source: [email protected]; line: 1, 
  column: 3]"},"status":500}{"error":{"root_cause":[{"type":"json_parse_exception",
  "reason":"Unrecognized token '$': was expecting ('true', 'false' or 'null')\n at 
  [Source: [email protected]; line: 1, 
  column: 3]"}],"type":"json_parse_exception","reason":"Unrecognized token '$': was 
  expecting ('true', 'false' or 'null')\n at 
  [Source: [email protected]; line: 1, 
  column: 3]"},"status":500}%

The attack succeeded, and we were able to dump database values anonymously. We'll do one more test to confirm that anyone can now access the Elastic and Kibana workloads.

curl 34.133.120.128:9200

{
  "name" : "es-cluster-0",
  "cluster_name" : "k8s-logs",
  "cluster_uuid" : "_na_",
  "version" : {
    "number" : "7.2.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "508c38a",
    "build_date" : "2019-06-20T15:54:18.811730Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

We can see that the attack succeeded only after we deleted the policies that had been auto-discovered in a safe environment. In other words, applying the auto-discovered policies kept the workload protected at runtime.

AccuKnox's policy-templates repository

AccuKnox's policy-templates is an open-source repository that contains a wide range of attack-prevention policies, including ones mapped to MITRE techniques, as well as hardening policies for your workloads. To download and apply policy templates, please visit

GitHub - kubearmor/policy-templates: Community curated list of system and network policy templates for KubeArmor and Cilium.

Conclusion

Despite the difficulty of detecting and mitigating an Elasticsearch attack, the AccuKnox open-source tools can secure your workloads within minutes with just a couple of commands.

You can now protect your workloads in minutes using AccuKnox; it secures your Kubernetes and other cloud workloads using kernel-native primitives such as AppArmor, SELinux, and eBPF.

Reach out to us if you are seeking additional guidance in planning your cloud security program.

Read more blogs from the Cloud Security category here.