Certified Kubernetes Administrator (CKA) Exam Questions

Question: 1
You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.
Task –
Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types:
✑ Deployment
✑ StatefulSet
✑ DaemonSet
Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.
kubectl config use-context k8s
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
kubectl create sa cicd-token --namespace app-team1
kubectl create rolebinding deploy-b --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token --namespace=app-team1
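To verify the setup (an optional check, not part of the task), impersonate the ServiceAccount and confirm that it can create deployments in app-team1:
kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1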
Question: 2
Set the node labelled with name=ek8s-node-1 as unavailable and reschedule all the pods running on it.
kubectl drain ek8s-node-1 --delete-local-data --ignore-daemonsets --force
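As an optional check, the node should now report SchedulingDisabled and its non-DaemonSet pods should have been evicted:
kubectl get node ek8s-node-1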
Question: 3
Given an existing Kubernetes cluster running version 1.21.1, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.21.2. You are also expected to upgrade kubelet and kubectl on the master node.
# Upgrade kubeadm:
apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.21.2-00 && apt-mark hold kubeadm
# Now drain the master node:
kubectl drain master --ignore-daemonsets
# Now run the kubeadm upgrade command with the required version:
kubeadm upgrade apply v1.21.2
# Now upgrade kubelet and kubectl with the below command:
apt-mark unhold kubelet kubectl && apt-get install -y kubelet=1.21.2-00 kubectl=1.21.2-00 && apt-mark hold kubelet kubectl
# Then restart the kubelet and uncordon the master node:
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon master
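As an optional verification step, the master node should now report version v1.21.2:
kubectl get nodes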
 
Question: 4
Create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 saving the snapshot to /srv/data/etcd-snapshot.db
 
Next, restore an existing, previous snapshot located at /var/lib/backup/etcd-snapshot-previous.db.
 
The following TLS certificates/key are supplied for connecting to the server with etcdctl:
 
CA certificate: /opt/KUIN00601/ca.crt 
Client certificate: /opt/KUIN00601/etcd-client.crt 
Client key: /opt/KUIN00601/etcd-client.key
# Backing up the etcd.
ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot save /srv/data/etcd-snapshot.db

#Restoring from etcd:
ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot restore /var/lib/backup/etcd-snapshot-previous.db
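Note that etcdctl snapshot restore writes the restored data into a new data directory (./default.etcd by default). A common pattern, sketched here with an assumed target directory, is to restore into a fresh path and then point etcd at it:
ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot restore /var/lib/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd-from-backup
# /var/lib/etcd-from-backup is an assumed example path; update the hostPath volume in the etcd static-pod manifest (/etc/kubernetes/manifests/etcd.yaml) to match it.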
Question: 5
Create a new NetworkPolicy name “allow-port-from-namespace” that allows Pods in the existing namespace “echo” to connect to port 9000 of other Pods in the same namespace. Ensure that the new NetworkPolicy:
 
Does not allow access to Pods not listening on port 9000
 
Does not allow access from Pods which are not in namespace internal
# vim netpol.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: echo
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: internal
      podSelector: {}
    ports:
    - port: 9000

#Now run the below command to create the network policy.

kubectl create -f netpol.yaml
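Optionally, verify that the policy was created in the echo namespace with the expected selectors and port:
kubectl describe networkpolicy allow-port-from-namespace -n echo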
Question: 6
Reconfigure the existing deployment “front-end” and add a port specification named “http” exposing port “80/tcp” of the existing container “nginx”
 
Create a new service named “front-end-svc” exposing the container port “http”.
 
Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled
# Change the deployment configuration (for example with kubectl edit deployment front-end) and add the named port to the nginx container:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-end
spec:
  template:
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
          name: http

# To create the new service to expose the container port http.

kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=http --type=NodePort
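Optionally, confirm that the service is of type NodePort and targets the named container port:
kubectl get svc front-end-svc -o wide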
Question: 7
Create a new nginx Ingress resource as follows:
 
Name: pong
 
Namespace: ing-internal
 
Exposing service hi on path /hi using service port 5678. The availability of service hi can be checked using the following command, which should return hi:
 
Example:  curl -kL <INTERNAL_IP>/hi
# Create the namespace ing-internal.
kubectl create ns ing-internal

# Now prepare the manifest for ingress resource.
# vi ingress-resource.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678

# Now create the ingress resource with the below kubectl command:
kubectl create -f ingress-resource.yaml
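Optionally, verify the ingress and check the /hi path once it has been assigned an address (reusing the curl example from the question):
kubectl get ingress pong -n ing-internal
curl -kL <INTERNAL_IP>/hi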
Question: 8
Scale the deployment loadbalancer to 6 pods.
kubectl scale deployment loadbalancer --replicas=6
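Optionally, confirm the new replica count:
kubectl get deployment loadbalancer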
Question: 9
Schedule a pod as follows:
 
Name: nginx-kusc00401
Image: nginx
Node selector: disk=spinning
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: Always
  nodeSelector:
    disk: spinning
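Save the manifest to a file (nginx-kusc00401.yaml is just an assumed example name), create the pod, and confirm it was scheduled onto a node carrying the disk=spinning label:
kubectl apply -f nginx-kusc00401.yaml
kubectl get pod nginx-kusc00401 -o wide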
Question: 10
Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/kubernetes/nodenum
# Count the Ready nodes, then count those carrying a NoSchedule taint; subtract the second number from the first and write the result to the file.
kubectl get nodes | grep -w Ready | wc -l
kubectl describe nodes | grep Taint | grep NoSchedule | wc -l
echo <result> > /opt/kubernetes/nodenum
Question: 11
Create a pod named kucc4 with a single container for each of the following images running inside:
 
Image: redis
Image: consul
# Run the command to generate the manifest file for the pod
kubectl run kucc4 --image=redis --dry-run=client -o yaml > multipod.yaml
# Now edit the multipod.yaml file
# vi multipod.yaml


apiVersion: v1
kind: Pod
metadata:
  name: kucc4
spec:
  containers:
  - image: redis
    name: redis
  - image: consul
    name: consul

# Now run the command to create the pod.
kubectl apply -f multipod.yaml
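Optionally, confirm that both containers are running (the READY column should show 2/2):
kubectl get pod kucc4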
Question: 12
 Create a persistent volume with name app-config of capacity 1Gi and access mode ReadWriteOnce. 
The type of volume is hostPath and its location is /srv/app-config
#Create the manifest file for the pv.
#vi pv-volume.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/srv/app-config"
#Now create the persistent volume with below kubectl command.
kubectl create -f pv-volume.yaml
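Optionally, confirm the capacity and access mode of the new volume:
kubectl get pv app-config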
Question: 13
Create a new PVC with below specifications:
 
Name: pv-volume
Class: csi-hostpath-sc
Capacity: 10Mi 
 
Create a new Pod which mounts the PVC as a volume:
Name: web-server
Image: nginx
Mount path: /usr/share/nginx/html 
 
Configure the new Pod to have ReadWriteOnce access on the volume. 
 
Finally, using kubectl edit or kubectl patch, expand the PVC to a capacity of 70Mi and record that change.
#Check the available storage-class for the pvc.
kubectl get sc
Output: csi-hostpath-sc
#Now create the pvc with below manifest.

# vi pv-vol.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc


# Now run the kubectl apply command to create the pvc.

kubectl apply -f pv-vol.yaml
# Now create the pod to use the above pvc.
# vi web-server.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: pv-volume
  volumes:
  - name: pv-volume
    persistentVolumeClaim:
      claimName: pv-volume
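# The pod manifest above still needs to be applied; create the pod with the below command.
kubectl apply -f web-server.yaml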
#Now edit pvc pv-volume to change the capacity.
kubectl edit pvc pv-volume --record
#Change the capacity from 10Mi to 70Mi. To check the change, run the command.
kubectl get pvc
Question: 14
Monitor the logs of pod loadbalancer and extract log lines corresponding to Error  and unable-to-access-website
Write them to /opt/KUTR00101/
#Run the below command to fetch the pod's logs, grep for the required strings, and write the matching lines to the desired file.

kubectl logs loadbalancer |grep Error |grep unable-to-access-website > /opt/KUTR00101/loadbalancer
Question: 15
A Kubernetes worker node, named wk8s-node-0 is in state NotReady. Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.
# ssh into worker node which is in NotReady state.
ssh wk8s-node-0
sudo -i
# Check the status of the kubelet.
systemctl status kubelet.service
#You will see the kubelet is in stopped state. Now start the kubelet and enable it.


systemctl start kubelet.service
systemctl enable kubelet.service

#This starts the kubelet and, because it is also enabled, makes the change permanent across reboots. After a short time the node should return to the Ready state.
systemctl status kubelet
kubectl get nodes

 

Question: 16
From the pod label name=app, find pods running high CPU workloads and write the name of the pod consuming most CPU to the file /opt/KUT00401/KUT0001.txt (which already exists).
kubectl top pod --sort-by=cpu --selector name=app |head -2 |tail -1 |cut -d' ' -f1 >/opt/KUT00401/KUT0001.txt
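An equivalent approach (optional), using --no-headers to skip the header row instead of head/tail:
kubectl top pod -l name=app --sort-by=cpu --no-headers | head -1 | awk '{print $1}' > /opt/KUT00401/KUT0001.txt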
Question: 17
Without changing its existing containers, an existing Pod needs to be integrated into Kubernetes' built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good and common way to accomplish this requirement.
 
Add a busybox sidecar container to the existing Pod legacy-app. The new sidecar container has to run the following command: /bin/sh -c tail -n+1 -f /var/log/legacy-app.log
 
Use a volume mount named logs to make the file /var/log/legacy-app.log available to the sidecar container.
 
Don’t modify the existing container.
 
Don’t modify the path of the log file, both containers must access it at /var/log/legacy-app.log.
# Get the manifest of the pod legacy-app, save it into the pod-legacy.yaml file, and then edit the file according to the question.
kubectl get pod legacy-app -o yaml > pod-legacy.yaml
# vi pod-legacy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/legacy-app.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: count-log-1
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
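Because containers cannot be added to a running Pod, the edited manifest has to be applied by recreating the Pod. A common way to do this (assuming it is acceptable to delete and recreate legacy-app) and then verify the sidecar:
kubectl replace --force -f pod-legacy.yaml
kubectl logs legacy-app -c count-log-1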