Preparing for your Certified Kubernetes Administrator (CKA) Exam
I've recently been preparing for my CKA exam and have realised that I rely on the documentation too much when working with clusters. I doubt anyone is going to remember every aspect of the API but there are some common things which are worth trying to commit to muscle memory.
To help with this, I prepared a set of scenarios that exercise the parts of the API I use most often and that are worth knowing well for the exam itself.
Below are the scenarios I crafted. I set myself a time limit of 30 minutes to complete them without referring to the documentation. After each set of scenarios I've included a rough sketch of the commands I'd reach for, but try them yourself first.
Getting started
- Create a new namespace named `zoo`.
- Update your local configuration to use the `zoo` namespace by default.
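A quick sketch of how I'd tackle these two (assuming your kubeconfig already points at the cluster):

```bash
# Create the namespace
kubectl create namespace zoo

# Make it the default namespace for the current context
kubectl config set-context --current --namespace=zoo
```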
Working with pods, secrets & persistent volumes
- Create a pod named `lion` running the `nginx:1.23-alpine` image in a container called `bernard`. The pod should expose port `80` and should have appropriate liveness and readiness checks.
- Create a secret named `passwords` with the following values: `LION_PASSWORD=simba` and `BEAR_PASSWORD=baloo`. Use a single kubectl command, don't create a manifest.
- Create a new pod named `zookeeper` running the `curlimages/curl:7.80` image which will need to keep running. This should mount the secrets from `passwords` into `/secrets` and expose them all as environment variables. It should also set another environment variable named `KEEPER_NAME` to `marjorie`. Finally, an environment variable called `NODE_NAME` should be set to the name of the node where the pod is running. Verify the environment within this pod using `exec`.
- Create a ReadWriteOnce persistent volume on the cluster called `toolbox` which mounts `/data/toolbox` from the host the pod is running on. The directory should have a maximum size of `1Gi`. No need to specify a storage class name.
- Create a persistent volume claim called `toolbox-pvc` which requests `1Gi` of ReadWriteOnce storage.
- Create a new pod named `tiger` running the `nginx:1.23-alpine` image with the `toolbox-pvc` mounted at `/toolbox`. The container should request 50Mi of memory and 50 millicores of CPU. Add appropriate configuration to resolve `bigsuperzoo.com` to `1.2.3.4`. Verify using `exec`.
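A rough sketch of what I'd start from for this section. The imperative commands only get you part of the way; the container name, probes, the `NODE_NAME` downward-API variable, resource requests and `hostAliases` still need editing into the generated YAML, and the PV/PVC pairing below assumes no default storage class on the cluster.

```bash
# Starting manifest for the lion pod (rename the container to bernard and
# add the probes before applying)
kubectl run lion --image=nginx:1.23-alpine --port=80 --dry-run=client -o yaml > lion.yaml

# Secret with both values in a single command
kubectl create secret generic passwords \
  --from-literal=LION_PASSWORD=simba \
  --from-literal=BEAR_PASSWORD=baloo

# hostPath persistent volume and a claim that binds to it
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: toolbox
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/toolbox
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: toolbox-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```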
Working with deployments & daemon sets
- Create a new deployment named `monkey` with 2 pods running the `nginx:1.23-alpine` image. Use a single kubectl command, don't create a manifest.
- Create a new deployment named `tortoise` running the `nginx:1.23-alpine` image with a replica count of [number of worker nodes in cluster + 1]. Pods should not be allowed to co-exist on the same node as another pod in this deployment. Verify one pod is not scheduled.
- Create a new daemon set named `frog` which runs on all nodes, including control planes, and mounts `/data/pond` from the node. There should be two containers: one named `writer` which should write the current time to `/data/time` every second, and one named `reader` which will read the data from `/data/time` and print it to STDOUT. `reader` should be able to see the data from `writer`. It can use the `busybox` image.
- Scale the `monkey` deployment from 2 pods to 4 pods. Verify it rolls out successfully using kubectl.
- Configure the `monkey` deployment to automatically scale when CPU utilization goes above 60% and scale down when it reduces. Ensure there are always at least 2 pods and a maximum of 8 pods.
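For `monkey`, the one-liners below are roughly what I'd use (the autoscaling step assumes metrics-server is installed). The `tortoise` anti-affinity and the `frog` daemon set still need manifests; the anti-affinity is a required `podAntiAffinity` rule keyed on `kubernetes.io/hostname`.

```bash
# Deployment with two replicas in a single command
kubectl create deployment monkey --image=nginx:1.23-alpine --replicas=2

# Scale up and watch the rollout complete
kubectl scale deployment monkey --replicas=4
kubectl rollout status deployment monkey

# Autoscale between 2 and 8 pods at 60% CPU utilisation
kubectl autoscale deployment monkey --cpu-percent=60 --min=2 --max=8
```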
Working with services and ingresses
- If you don't already have an ingress controller on your cluster, install one.
- Create a new `NodePort` service named `monkies` which exposes the pods in the `monkey` deployment on port 80. Verify it can be accessed.
- Create a new ingress (and associated objects) called `tortoises` which exposes the pods from the `tortoise` deployment. The ingress should use the hostname `tortoises.zoo.com`. To verify, create a new pod which resolves the hostname to the IP address of your service and use curl within that pod to check.
Events & Metrics
- Get a list of all events on the cluster and order them by the date they occurred.
- Choose one of your more recently created pods. Get a list of all events that affect that pod only, ordered by the date they occurred.
- Produce a list of all containers across the whole cluster ordered by their memory usage.
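These all come down to `--sort-by` and `kubectl top` (which needs metrics-server). The pod name below is just a placeholder for whichever pod you pick:

```bash
# All events ordered by when they were created
kubectl get events -A --sort-by=.metadata.creationTimestamp

# Events for a single pod (replace "tiger" with your pod's name)
kubectl get events --field-selector involvedObject.name=tiger \
  --sort-by=.metadata.creationTimestamp

# Container memory usage across the cluster
kubectl top pod -A --containers --sort-by=memory
```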
JSONPath
- Produce a list of all pods on your cluster where the list only includes the pod name, the namespace, the IP address allocated to the pod and the name of the node the pod is scheduled on.
- Produce a list of all pods on your cluster including the name of the pod and, if the pod was created from a ReplicaSet, the name of the replica set.
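Custom columns cover the first task and a JSONPath range with an `ownerReferences` filter covers the second; a sketch:

```bash
# Pod name, namespace, pod IP and node
kubectl get pods -A -o custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,IP:.status.podIP,NODE:.spec.nodeName

# Pod name plus owning ReplicaSet (blank when the owner is something else)
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[?(@.kind=="ReplicaSet")].name}{"\n"}{end}'
```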
Working with RBAC
- Create a new service account named `config-map-sa`.
- Create a new role which allows config maps to be listed. It should not allow any other access to the Kubernetes API.
- Configure the service account to use the newly created role within the `zoo` namespace only.
- Create a new pod called `config-map-reader` with the `nginx:1.23-alpine` image and configured to use the new service account. Construct an appropriate curl command within this pod to make a successful request to the Kubernetes API and list all config maps in the `zoo` namespace.
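A sketch of the RBAC objects (the role name `config-map-lister` is my own choice) and the curl you'd run from inside the pod using the mounted service account token:

```bash
# Service account, role and binding, all in the zoo namespace
kubectl create serviceaccount config-map-sa
kubectl create role config-map-lister --verb=list --resource=configmaps
kubectl create rolebinding config-map-sa-binding \
  --role=config-map-lister --serviceaccount=zoo:config-map-sa

# Sanity check without leaving your shell
kubectl auth can-i list configmaps -n zoo \
  --as=system:serviceaccount:zoo:config-map-sa

# Inside the config-map-reader pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/zoo/configmaps
```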
Working with etcd
- Take a backup of your etcd database.
- Restore your backup back to your cluster. Make a change to your cluster before doing so and verify that it is reverted when the restore is complete.
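The snapshot and restore are both `etcdctl` one-liners; the certificate paths below are the kubeadm defaults and may differ on your cluster. After restoring, point the etcd static pod manifest at the new data directory.

```bash
# Take a snapshot
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Restore it into a fresh data directory, then update
# /etc/kubernetes/manifests/etcd.yaml to use it
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
  --data-dir=/var/lib/etcd-restore
```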
User management
- Create a new user for `john` allowing them to authenticate to the Kubernetes API using a certificate and key. John should have full access to the cluster. Use the Certificate Signing Request API and a 4096-bit RSA key.
- Create a valid kubeconfig file for `john` containing everything they need to connect to the cluster. Use the file yourself to verify they can connect.
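A sketch of the CSR flow. The cluster name `kubernetes` is the kubeadm default and `base64 -w0` is the GNU coreutils flag; adjust both for your environment. Full access here is granted with a `cluster-admin` binding.

```bash
# Key and certificate signing request for john
openssl genrsa -out john.key 4096
openssl req -new -key john.key -subj "/CN=john" -out john.csr

# Submit via the CertificateSigningRequest API, approve and collect the cert
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john
spec:
  request: $(base64 -w0 < john.csr)
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF
kubectl certificate approve john
kubectl get csr john -o jsonpath='{.status.certificate}' | base64 -d > john.crt

# Give john full access
kubectl create clusterrolebinding john-admin --clusterrole=cluster-admin --user=john

# Add credentials and a context, then test as john
kubectl config set-credentials john --client-certificate=john.crt --client-key=john.key --embed-certs=true
kubectl config set-context john --cluster=kubernetes --user=john
kubectl --context=john get nodes
```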
Working with kubeadm
- Check the expiry date of all certificates on one of your control plane nodes.
- Check the expiry date of the certificate issued to Kubelet on one of your worker nodes.
- Generate a command needed to join a new node to your cluster (including accurate certificate & token information).
- Upgrade all control plane components to the latest version of Kubernetes.
- Upgrade all worker nodes to the latest version of Kubernetes.
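Roughly the commands I'd expect to need, assuming a kubeadm-built cluster; the upgrade target version is a placeholder, and the kubeadm, kubelet and kubectl packages need upgrading with your package manager as part of the process.

```bash
# Certificate expiry on a control plane node
kubeadm certs check-expiration

# Kubelet client certificate on a worker node
openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem

# Join command with a fresh token and the CA cert hash
kubeadm token create --print-join-command

# Control plane upgrade (substitute the real target version)
kubeadm upgrade plan
kubeadm upgrade apply <target-version>

# On each worker node, after upgrading the kubeadm package
kubeadm upgrade node
```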