Preparing for your Certified Kubernetes Administrator (CKA) Exam

A set of scenarios to help you build muscle memory in preparation for your Certified Kubernetes Administrator (CKA) exam.


I've recently been preparing for my CKA exam and have realised that I rely on the documentation too much when working with clusters. I doubt anyone is going to remember every aspect of the API, but there are some common operations which are worth committing to muscle memory.

To help with this, I prepared a set of scenarios that exercise the parts of the API which are used most often and are worth knowing well for the exam itself.

Below are the scenarios I crafted, with a possible solution sketched after each group; try each group yourself first. I set myself a time limit of 30 minutes to complete them all without referring to the documentation.


Getting started

  1. Create a new namespace named zoo.
  2. Update your local configuration to use the zoo namespace by default.
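
If you want to check your approach, both of these are single kubectl commands:

    kubectl create namespace zoo
    kubectl config set-context --current --namespace=zoo

You can confirm the change with kubectl config view --minify | grep namespace.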

Working with pods, secrets & persistent volumes

  1. Create a pod named lion running the nginx:1.23-alpine image in a container called bernard. The pod should expose port 80 and should have appropriate liveness and readiness checks.
  2. Create a secret named passwords with the following values: LION_PASSWORD=simba and BEAR_PASSWORD=baloo. Use a single kubectl command; don't create a manifest.
  3. Create a new pod named zookeeper running the curlimages/curl:7.80 image which will need to keep running. This should mount the secrets from passwords into /secrets and expose them all as environment variables. It should also set another environment variable named KEEPER_NAME to marjorie. Finally, an environment variable called NODE_NAME should be set to the name of the node where the pod is running. Verify the environment within this pod using exec.
  4. Create a ReadWriteOnce persistent volume on the cluster called toolbox which mounts /data/toolbox from the host the pod is running on. The directory should have a maximum size of 1Gi. No need to specify a storage class name.
  5. Create a persistent volume claim called toolbox-pvc which requests 1Gi of ReadWriteOnce storage.
  6. Create a new pod named tiger running the nginx:1.23-alpine image with the toolbox-pvc claim mounted at /toolbox. The container should request 50Mi of memory and 50 millicores of CPU. Add appropriate configuration to resolve bigsuperzoo.com to 1.2.3.4. Verify using exec.
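
One possible approach to the pod and secret scenarios is sketched below. The probe paths and the sleep duration are my own choices rather than part of the scenarios:

    kubectl create secret generic passwords \
      --from-literal=LION_PASSWORD=simba \
      --from-literal=BEAR_PASSWORD=baloo

The lion and zookeeper pods need manifests (kubectl apply -f):

    apiVersion: v1
    kind: Pod
    metadata:
      name: lion
    spec:
      containers:
      - name: bernard
        image: nginx:1.23-alpine
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: zookeeper
    spec:
      containers:
      - name: zookeeper
        image: curlimages/curl:7.80
        command: ["sleep", "3600"]     # keep the container running
        envFrom:
        - secretRef:
            name: passwords            # exposes LION_PASSWORD and BEAR_PASSWORD
        env:
        - name: KEEPER_NAME
          value: marjorie
        - name: NODE_NAME
          valueFrom:
            fieldRef:                  # downward API: name of the scheduled node
              fieldPath: spec.nodeName
        volumeMounts:
        - name: secrets
          mountPath: /secrets
      volumes:
      - name: secrets
        secret:
          secretName: passwords

    kubectl exec zookeeper -- env
    kubectl exec zookeeper -- ls /secrets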
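And a sketch of the storage scenarios. With no storage class named on either object, the claim binds to the toolbox volume because the size and access mode match:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: toolbox
    spec:
      capacity:
        storage: 1Gi
      accessModes:
      - ReadWriteOnce
      hostPath:
        path: /data/toolbox
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: toolbox-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: tiger
    spec:
      hostAliases:                     # adds an /etc/hosts entry inside the pod
      - ip: "1.2.3.4"
        hostnames:
        - bigsuperzoo.com
      containers:
      - name: tiger
        image: nginx:1.23-alpine
        resources:
          requests:
            memory: 50Mi
            cpu: 50m                   # 50 millicores
        volumeMounts:
        - name: toolbox
          mountPath: /toolbox
      volumes:
      - name: toolbox
        persistentVolumeClaim:
          claimName: toolbox-pvc

    kubectl exec tiger -- cat /etc/hosts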

Working with deployments & daemon sets

  1. Create a new deployment named monkey with 2 pods running the nginx:1.23-alpine image. Use a single kubectl command; don't create a manifest.
  2. Create a new deployment named tortoise running the nginx:1.23-alpine image with a replica count of [number of worker nodes in cluster + 1]. Pods should not be allowed to co-exist on the same node as another pod in this deployment. Verify that one pod is not scheduled.
  3. Create a new daemon set named frog which runs on all nodes, including control plane nodes, and mounts /data/pond from the node. There should be two containers: one named writer which writes the current time to /data/time every second, and one named reader which reads /data/time and prints it to STDOUT. reader should be able to see the data from writer. Both containers can use the busybox image.
  4. Scale the monkey deployment from 2 pods to 4 pods. Verify it rolls out successfully using kubectl.
  5. Configure the monkey deployment to scale up automatically when CPU utilization goes above 60% and scale back down when it drops. Ensure there are always at least 2 pods and at most 8 pods.
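
For the first three scenarios, something like the following should work; the app labels, container names and replica count are my own choices:

    kubectl create deployment monkey --image=nginx:1.23-alpine --replicas=2

The tortoise deployment and frog daemon set need manifests:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: tortoise
    spec:
      replicas: 4                      # assumes 3 worker nodes; use nodes + 1
      selector:
        matchLabels:
          app: tortoise
      template:
        metadata:
          labels:
            app: tortoise
        spec:
          affinity:
            podAntiAffinity:           # hard rule: pods never share a node
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app: tortoise
                topologyKey: kubernetes.io/hostname
          containers:
          - name: nginx
            image: nginx:1.23-alpine
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: frog
    spec:
      selector:
        matchLabels:
          app: frog
      template:
        metadata:
          labels:
            app: frog
        spec:
          tolerations:                 # older clusters use node-role.kubernetes.io/master
          - key: node-role.kubernetes.io/control-plane
            operator: Exists
            effect: NoSchedule
          containers:
          - name: writer
            image: busybox
            command: ["sh", "-c", "while true; do date > /data/time; sleep 1; done"]
            volumeMounts:
            - name: pond
              mountPath: /data         # host /data/pond appears as /data
          - name: reader
            image: busybox
            command: ["sh", "-c", "while true; do cat /data/time; sleep 1; done"]
            volumeMounts:
            - name: pond
              mountPath: /data
          volumes:
          - name: pond
            hostPath:
              path: /data/pond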
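Scaling and autoscaling are one-liners, though the horizontal pod autoscaler only works if metrics-server is running and the pods have a CPU request set (kubectl create deployment doesn't add one):

    kubectl scale deployment monkey --replicas=4
    kubectl rollout status deployment monkey

    kubectl set resources deployment monkey --requests=cpu=100m    # my own value
    kubectl autoscale deployment monkey --min=2 --max=8 --cpu-percent=60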

Working with services and ingresses

  1. If you don't already have an ingress controller on your cluster, install one.
  2. Create a new NodePort service named monkies which exposes the pods in the monkey deployment on port 80. Verify it can be accessed.
  3. Create a new ingress (and associated objects) called tortoises which exposes the pods from the tortoise deployment. The ingress should use the hostname tortoises.zoo.com. To verify, create a new pod which resolves the hostname to the IP address of your service and use curl within that pod to check.
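
A possible approach, assuming the ingress controller is ingress-nginx (the ingressClassName will differ for other controllers, and the node IP and port are placeholders):

    kubectl expose deployment monkey --name=monkies --type=NodePort --port=80
    kubectl get service monkies        # note the assigned node port
    curl http://<any-node-ip>:<node-port>

The ingress needs a service in front of the tortoise pods first:

    kubectl expose deployment tortoise --name=tortoises --port=80

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: tortoises
    spec:
      ingressClassName: nginx          # assumption: ingress-nginx controller
      rules:
      - host: tortoises.zoo.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tortoises
                port:
                  number: 80

To verify, run a pod whose hostAliases map tortoises.zoo.com to the ingress controller's service IP, then curl http://tortoises.zoo.com from inside it.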

Events & Metrics

  1. Get a list of all events on the cluster and order them by the date they occurred.
  2. Choose one of your more recently created pods. Get a list of all events that affect that pod only, ordered by the date they occurred.
  3. Produce a list of all containers across the whole cluster ordered by their memory usage.
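
All three can be done with kubectl alone; kubectl top requires metrics-server, and <pod-name> is a placeholder:

    kubectl get events -A --sort-by=.metadata.creationTimestamp
    kubectl get events --field-selector involvedObject.name=<pod-name> \
      --sort-by=.metadata.creationTimestamp
    kubectl top pod -A --containers --sort-by=memory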

JSONPath

  1. Produce a list of all pods on your cluster where the list only includes the pod name, the namespace, the IP address allocated to the pod and the name of the node the pod is scheduled on.
  2. Produce a list of all pods on your cluster including the name of the pod and, if the pod was created from a ReplicaSet, the name of the replica set.
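
For the first, custom-columns (where each column is a JSONPath expression) is quicker to type than raw jsonpath output; the second needs a range loop with a filter over ownerReferences:

    kubectl get pods -A -o custom-columns='NAME:.metadata.name,NAMESPACE:.metadata.namespace,IP:.status.podIP,NODE:.spec.nodeName'

    kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[?(@.kind=="ReplicaSet")].name}{"\n"}{end}'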

Working with RBAC

  1. Create a new service account named config-map-sa.
  2. Create a new role which allows config maps to be listed. It should not allow any other access to the Kubernetes API.
  3. Configure the service account to use the newly created role within the zoo namespace only.
  4. Create a new pod called config-map-reader with the nginx:1.23-alpine image and configured to use the new service account. Construct an appropriate curl command within this pod to make a successful request to the Kubernetes API and list all config maps in the zoo namespace.
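
The account, role and binding are all kubectl create one-liners (the role and binding names are my own). The pod manifest must set serviceAccountName: config-map-sa; the token and CA bundle are then mounted automatically under /var/run/secrets/kubernetes.io/serviceaccount:

    kubectl create serviceaccount config-map-sa
    kubectl create role config-map-lister --verb=list --resource=configmaps
    kubectl create rolebinding config-map-lister \
      --role=config-map-lister --serviceaccount=zoo:config-map-sa

    # inside the pod (kubectl exec -it config-map-reader -- sh):
    TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
      -H "Authorization: Bearer $TOKEN" \
      https://kubernetes.default.svc/api/v1/namespaces/zoo/configmaps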

Working with etcd

  1. Take a backup of your etcd database.
  2. Restore your backup back to your cluster. Make a change to your cluster before doing so and verify that it is reverted when the restore is complete.
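
A sketch using etcdctl. The certificate paths below are the usual kubeadm locations and the snapshot path is my own choice; adjust both for your cluster:

    ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key

    # restore into a fresh data directory, then point the etcd static pod
    # manifest (/etc/kubernetes/manifests/etcd.yaml) at the new directory
    ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
      --data-dir=/var/lib/etcd-restore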

User management

  1. Create a new user named john, allowing them to authenticate to the Kubernetes API using a certificate and key. John should have full access to the cluster. Use the Certificate Signing Request API and a 4096-bit RSA key.
  2. Create a valid kubeconfig file for john containing everything they need to connect to the cluster. Use the file yourself to verify they can connect.
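
One way through this, using openssl and the CSR API. Binding to the built-in cluster-admin role is the simplest way to grant full access; the file and binding names are my own:

    openssl genrsa -out john.key 4096
    openssl req -new -key john.key -subj "/CN=john" -out john.csr

    cat <<EOF | kubectl apply -f -
    apiVersion: certificates.k8s.io/v1
    kind: CertificateSigningRequest
    metadata:
      name: john
    spec:
      request: $(base64 john.csr | tr -d '\n')
      signerName: kubernetes.io/kube-apiserver-client
      usages: ["client auth"]
    EOF

    kubectl certificate approve john
    kubectl get csr john -o jsonpath='{.status.certificate}' | base64 -d > john.crt
    kubectl create clusterrolebinding john-admin --clusterrole=cluster-admin --user=john

For the kubeconfig, <api-server> is a placeholder for your API server address:

    kubectl config set-cluster zoo --server=https://<api-server>:6443 \
      --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true \
      --kubeconfig=john.kubeconfig
    kubectl config set-credentials john --client-certificate=john.crt \
      --client-key=john.key --embed-certs=true --kubeconfig=john.kubeconfig
    kubectl config set-context john@zoo --cluster=zoo --user=john \
      --kubeconfig=john.kubeconfig
    kubectl config use-context john@zoo --kubeconfig=john.kubeconfig
    kubectl --kubeconfig=john.kubeconfig get nodes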

Working with kubeadm

  1. Check the expiry date of all certificates on one of your control plane nodes.
  2. Check the expiry date of the certificate issued to Kubelet on one of your worker nodes.
  3. Generate a command needed to join a new node to your cluster (including accurate certificate & token information).
  4. Upgrade all control plane components to the latest version of Kubernetes.
  5. Upgrade all worker nodes to the latest version of Kubernetes.
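
kubeadm handles most of this directly. The package commands below assume a Debian/Ubuntu host, and <version> and <node> are placeholders:

    # 1. certificate expiry on a control plane node
    kubeadm certs check-expiration

    # 2. kubelet client certificate on a worker node (typical kubeadm path)
    openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem

    # 3. join command with a fresh token
    kubeadm token create --print-join-command

    # 4. on the first control plane node
    apt-get update && apt-get install -y kubeadm=<version>
    kubeadm upgrade plan
    kubeadm upgrade apply v<version>
    apt-get install -y kubelet=<version> kubectl=<version>
    systemctl daemon-reload && systemctl restart kubelet

    # 5. for each worker: drain, upgrade, uncordon
    kubectl drain <node> --ignore-daemonsets     # from a control plane node
    apt-get install -y kubeadm=<version>         # on the worker itself
    kubeadm upgrade node
    apt-get install -y kubelet=<version>
    systemctl daemon-reload && systemctl restart kubelet
    kubectl uncordon <node>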


