
Certified Kubernetes Application Developer (CKAD) Cheatsheet

Personal notes and cheatsheet made during CKAD exam prep. Covers all five exam domains: application design and build, deployment, observability, environment and security, and services and networking.

Overview

The Certified Kubernetes Application Developer (CKAD) exam is a hands-on, performance-based certification offered by the Linux Foundation and CNCF. The exam runs for 2 hours in a live Kubernetes cluster environment. There is no multiple choice: every task is solved on the CLI, with the Kubernetes documentation available inside the exam environment.

Official syllabus (as per the CNCF curriculum v1.32, which was in effect when this exam was taken in May 2025):

Domain | Weight
Application Design and Build | 20%
Application Deployment | 20%
Application Observability and Maintenance | 15%
Application Environment, Configuration and Security | 25%
Services and Networking | 20%

Domain breakdown:

  • Application Design and Build – container images, workload resources (Deployment, DaemonSet, CronJob), multi-container Pod patterns, persistent and ephemeral volumes
  • Application Deployment – rolling updates, blue/green and canary strategies, Helm, Kustomize
  • Application Observability and Maintenance – probes, health checks, CLI monitoring tools, logs, debugging, API deprecations
  • Application Environment, Configuration and Security – CRDs, Operators, RBAC, authentication/authorization, resource requests/limits, ConfigMaps, Secrets, ServiceAccounts, SecurityContexts
  • Services and Networking – Services (ClusterIP/NodePort/LoadBalancer), Ingress, NetworkPolicies

The exam is based on Kubernetes v1.32.


Exam Tips

Use kubectl explain to Look Up Fields

kubectl explain is documentation in the terminal.

# Top-level fields for a resource
kubectl explain pod
kubectl explain deployment

# Nested fields
kubectl explain pod.spec
kubectl explain pod.spec.containers
kubectl explain pod.spec.containers.resources
kubectl explain pod.spec.containers.livenessProbe
kubectl explain networkpolicy.spec.ingress

Add --recursive to see the full field tree in one shot:

kubectl explain pod.spec --recursive

Generate YAML with --dry-run=client -o yaml

# Pod
kubectl run my-pod --image=nginx --dry-run=client -o yaml

# Service
kubectl expose deployment my-app --port=80 --target-port=8080 --dry-run=client -o yaml

Use Aliases and Shortcuts

The kubectl binary should already be aliased as k in the exam environment (it was in mine).

alias k=kubectl

k get po          # pods
k get deploy      # deployments
k get svc         # services
k get cm          # configmaps
k get ns          # namespaces
k get sa          # serviceaccounts
k get pv          # persistentvolumes
k get pvc         # persistentvolumeclaims
k get netpol      # networkpolicies
k get ing         # ingresses

Force Delete Stuck Pods

Pods can get stuck in Terminating due to node issues:

kubectl delete pod <name> --force --grace-period=0

Edit a Running Resource

For quick changes without re-applying a file:

kubectl edit deployment my-app

To change a specific field directly from the CLI:

kubectl set image deployment/my-app my-container=nginx:1.25
kubectl set resources deployment/my-app --limits=cpu=200m,memory=256Mi
kubectl scale deployment my-app --replicas=5

Patch Resources

For one-field changes without opening an editor:

kubectl patch deployment my-app -p '{"spec": {"replicas": 3}}'

1. Basics

kubectl Commands

Command | Description
kubectl get <type> | List resources
kubectl get <type> -l <key>=<value> | Filter by label
kubectl get pods --show-labels | Show all pod labels
kubectl describe <type> <name> | Detailed resource info + events
kubectl apply -f <file> | Create or update from file
kubectl delete <type> <name> | Delete resource
kubectl edit <type> <name> | Edit live resource in-place
kubectl set <field> <type>/<name> <key>=<value> | Modify a live resource field
kubectl expose <resource> | Expose a pod/deployment as a service
kubectl exec -it <pod> -- <program> | Interactive session inside pod
kubectl exec <pod> -- <cmd> | Run a single command in pod
kubectl logs -f <pod> -c <container> | Stream logs from a container
kubectl logs -p <pod> | Logs from previous (crashed) container
kubectl top pod | Show pod CPU/memory usage
kubectl <cmd> --dry-run=client | Simulate without creating
kubectl <cmd> -o yaml | Output as YAML
kubectl explain <path> | Show valid fields for a resource path

Kubernetes Context

A context bundles three things stored in ~/.kube/config: a cluster, user auth info, and a namespace.

Command | Description
kubectl config get-contexts | List all contexts
kubectl config current-context | Show active context
kubectl config use-context <name> | Switch context
kubectl config set-context --current --namespace=<ns> | Switch namespace
kubectl config view | View full kubeconfig

Resource Types Reference

Resource | Description
Pod | Basic unit. Runs one or more containers.
ReplicaSet | Maintains N identical pod replicas.
Deployment | Manages ReplicaSets with rolling updates and rollback.
DaemonSet | Runs one pod per node. Used for logging agents, monitoring, etc.
StatefulSet | Ordered pods with stable identity and per-pod storage. Used for databases.
Job | Runs pods to completion. For one-off batch tasks.
CronJob | Schedules Jobs on a cron expression.
Service | Stable networking endpoint for pods. Types: ClusterIP, NodePort, LoadBalancer.
Ingress | Routes external HTTP/HTTPS to Services by hostname and path.
NetworkPolicy | Firewall rules for pod-to-pod traffic.
ConfigMap | Non-sensitive key-value config data.
Secret | Sensitive data (passwords, tokens), base64-encoded.
PersistentVolume (PV) | Cluster-wide storage resource backed by a physical disk or cloud volume.
PersistentVolumeClaim (PVC) | Request for storage. Binds to a PV.
ServiceAccount | Pod identity for Kubernetes API access.
Role / ClusterRole | Defines allowed actions on resources. Namespaced vs cluster-wide.
RoleBinding / ClusterRoleBinding | Grants a Role to a subject.
CustomResourceDefinition (CRD) | Extends Kubernetes with new resource types.
HorizontalPodAutoscaler (HPA) | Scales pods based on CPU/memory usage.

YAML Structure

Every Kubernetes manifest shares the same top-level structure:

apiVersion: apps/v1        # API group + version
kind: Deployment           # Resource type
metadata:
  name: my-app
  namespace: dev
  labels:
    app: my-app
spec:                      # Resource-specific configuration
  ...

Deployment hierarchy:

Deployment
└── spec (Deployment-level)
    ├── replicas
    ├── selector
    └── template (Pod blueprint)
        └── spec (Pod-level)
            ├── containers[]
            │   ├── image
            │   ├── ports
            │   ├── resources
            │   ├── env
            │   └── volumeMounts
            └── volumes

2. Pods, Config, and Resource Management

ConfigMaps

Store non-sensitive config data as key-value pairs or files.

kubectl create configmap my-config --from-literal=DB_HOST=localhost
kubectl create configmap app-config --from-file=database.txt
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  DB_HOST: localhost
  database.txt: |
    DB_HOST=localhost
    DB_PORT=3306

Secrets

Sensitive data stored as base64-encoded values. Default type is Opaque.

kubectl create secret generic my-secret --from-literal=DB_PASS=pass123
kubectl create secret generic my-secret --from-file=./db-details.txt
kubectl create secret docker-registry regcred \
  --docker-server=<registry> \
  --docker-username=<user> \
  --docker-password=<pass>
kubectl create secret tls my-tls --cert=tls.crt --key=tls.key
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  DB_PASS: cGFzczEyMw==    # base64 of "pass123"
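
kubectl create secret encodes values for you; when writing a manifest by hand, encode them yourself with coreutils base64 (note the -n, so no trailing newline gets encoded):

```shell
# Encode a value for the data: field of a Secret manifest
echo -n "pass123" | base64
# -> cGFzczEyMw==

# Decode a value read back from an existing Secret
echo "cGFzczEyMw==" | base64 --decode
# -> pass123
```

Alternatively, a Secret manifest can use stringData: instead of data:, and Kubernetes encodes the plain-text values on write.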

Common Secret types:

Type | Use Case
Opaque | Generic key-value secrets (default)
kubernetes.io/dockerconfigjson | Private registry credentials for image pulls
kubernetes.io/tls | TLS certificate + private key (used by Ingress)
kubernetes.io/service-account-token | Auto-generated SA token (managed by Kubernetes)

Pods

The most basic unit in Kubernetes. A pod runs one or more containers.

kubectl run mypod --image=nginx:alpine

Full pod spec with ConfigMap/Secret injection, volumes, and resource limits:

apiVersion: v1
kind: Pod
metadata:
  name: full-example-pod
  labels:
    app: my-app
spec:
  serviceAccountName: default
  restartPolicy: Always    # Always | OnFailure | Never
  imagePullSecrets:
  - name: regcred           # Reference a kubernetes.io/dockerconfigjson secret

  volumes:
  - name: cache-volume
    emptyDir: {}
  - name: config-volume
    configMap:
      name: my-config
  - name: secret-volume
    secret:
      secretName: my-secret

  containers:
  - name: main-app
    image: nginx:1.27
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"
    env:
    - name: ENVIRONMENT
      value: production
    - name: DB_HOST
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: DB_HOST
    - name: DB_PASS
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: DB_PASS
    envFrom:
    - configMapRef:
        name: my-config
    - secretRef:
        name: my-secret
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
    - name: config-volume
      mountPath: /etc/config
    - name: secret-volume
      mountPath: /etc/secret

Ephemeral Volumes (emptyDir)

Temporary volume that only lives while the pod runs. Useful for caches and inter-container file sharing.

volumes:
- name: cache-volume
  emptyDir: {}
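
emptyDir also accepts a RAM-backed medium and a size cap, both standard fields on the emptyDir volume source:

```yaml
volumes:
- name: cache-volume
  emptyDir:
    medium: Memory     # tmpfs-backed; usage counts against container memory
    sizeLimit: 256Mi   # pod is evicted if the volume grows past this
```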

Resource Requests and Limits

Requests are the scheduler's guaranteed minimum; limits are the hard maximum. A container exceeding its CPU limit is throttled; one exceeding its memory limit is OOM-killed.
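
As a container-level fragment:

```yaml
resources:
  requests:
    cpu: "250m"        # scheduler guarantees this much
    memory: "64Mi"
  limits:
    cpu: "500m"        # throttled beyond this
    memory: "128Mi"    # OOM-killed beyond this
```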

ResourceQuotas

Enforces namespace-wide limits on object counts and total compute resources.

kubectl create quota sample-quota --hard=pods=10
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: dev-team
spec:
  hard:
    pods: "10"
    configmaps: "20"
    persistentvolumeclaims: "5"
    requests.cpu: "2"
    requests.memory: 1Gi
    limits.cpu: "4"
    limits.memory: 2Gi

LimitRanges

Automatically assigns default requests/limits and enforces min/max per container. Ensures pods always have resource constraints even if the developer forgot to set them.

apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: dev-team
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: "100m"
      memory: "128Mi"
    default:
      cpu: "500m"
      memory: "256Mi"
    max:
      cpu: "1"
      memory: "512Mi"
    min:
      cpu: "50m"
      memory: "64Mi"
    maxLimitRequestRatio:
      cpu: "2"
      memory: "4"

3. Workloads and Deployments

Jobs

Runs pods to completion. Retries on failure.

kubectl create job sample-job --image=busybox
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  completions: 3      # Total pods to run to completion
  parallelism: 1       # How many run at the same time
  backoffLimit: 4      # Retry attempts before marking failed
  activeDeadlineSeconds: 120    # Kill the whole job after this many seconds regardless of backoffLimit
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo", "Hello CKAD!"]
      restartPolicy: Never    # Jobs require Never or OnFailure

CronJobs

Schedules Jobs on a recurring cron expression.

kubectl create cronjob sample-cronjob --image=busybox --schedule="*/5 * * * *"
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cronjob
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid       # Allow | Forbid | Replace
  startingDeadlineSeconds: 60
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["date"]
          restartPolicy: Never

Cron schedule format:

* * * * *
| | | | |
| | | | └─ Weekday (0-6, Sun=0)
| | | └─── Month (1-12)
| | └───── Day of month (1-31)
| └─────── Hour (0-23)
└───────── Minute (0-59)

Expression | Meaning
0 * * * * | Every hour on the hour
30 14 * * * | Daily at 14:30
*/10 * * * * | Every 10 minutes
0 9 * * 1-5 | Weekdays at 09:00
0 0 * * * | Midnight every day
0 0 1 1 * | Once a year, Jan 1st at midnight

Deployments

Manages ReplicaSets with rolling updates and rollback support.

kubectl create deployment sample-deployment --image=nginx
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  minReadySeconds: 10
  revisionHistoryLimit: 5
  progressDeadlineSeconds: 600
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.27
        ports:
        - containerPort: 80

Rollout Commands

Command | Purpose
kubectl set image deployment/<name> nginx=nginx:1.28 | Update image
kubectl rollout status deployment/<name> | Check rollout progress
kubectl rollout history deployment/<name> | View revision history
kubectl rollout undo deployment/<name> | Roll back to previous version
kubectl rollout undo deployment/<name> --to-revision=3 | Roll back to specific revision
kubectl rollout pause deployment/<name> | Pause mid-rollout
kubectl rollout resume deployment/<name> | Resume paused rollout

Deployment Strategies

RollingUpdate (default): Replaces pods gradually with zero downtime. Controlled by maxUnavailable and maxSurge.

Recreate: Kills all old pods first, then starts new ones. Causes downtime but guarantees no two versions run simultaneously.
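
As a manifest fragment:

```yaml
spec:
  strategy:
    type: Recreate    # rollingUpdate settings are not valid with this type
```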

Blue-Green: Run two deployments in parallel (old=blue, new=green). Switch traffic instantly by updating the Service selector. The Service selects by pod label, not deployment name.

# Service pointing to green
selector:
  app: my-app
  version: green
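
In full, the traffic-switching Service might look like this (names assumed for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: green    # flip to "blue" to roll traffic back instantly
  ports:
  - port: 80
    targetPort: 8080
```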

Canary: Gradually shift traffic to the new version by running both deployments with a shared label and adjusting replica counts.

# stable: 9 replicas, canary: 1 replica = 10% canary traffic
# Service selects both via shared label:
selector:
  app: my-app

DaemonSets

Ensures one pod runs on every node. Used for log collectors, monitoring agents, and node-level utilities.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  updateStrategy:
    type: RollingUpdate    # Or OnDelete
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log

HorizontalPodAutoscaler (HPA)

Automatically scales the number of pod replicas in a Deployment based on observed CPU or memory usage. Requires the Metrics Server to be running in the cluster.

# Create from CLI (CPU-based)
kubectl autoscale deployment my-app --cpu-percent=50 --min=2 --max=10
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50    # Scale up when average CPU > 50% of requests
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70
kubectl get hpa
kubectl describe hpa my-app-hpa

The Deployment must have CPU requests set, otherwise the HPA has nothing to calculate a percentage against.


4. Observability and Debugging

Probes

Probe | Purpose | Action on Failure
Liveness | Is the container alive? | Restart container
Readiness | Is the container ready for traffic? | Remove from Service endpoints (no restart)
Startup | Has the app finished starting? | Block liveness/readiness until passed

Probe check methods:

  • httpGet – HTTP GET to a path/port; success if 2xx/3xx
  • tcpSocket – checks if TCP port is open
  • exec – runs a command; success if exit code is 0
containers:
- name: my-app
  image: busybox
  startupProbe:
    exec:
      command: ["cat", "/tmp/ready"]
    initialDelaySeconds: 5
    periodSeconds: 5
    failureThreshold: 10
  livenessProbe:
    tcpSocket:
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 10
    failureThreshold: 3
  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 5
    failureThreshold: 3
    successThreshold: 1

Debugging Flow

kubectl get pods                       # Check status
kubectl describe pod <pod-name>        # Events section has the root cause
kubectl logs <pod-name>                # App logs
kubectl logs -p <pod-name>             # Logs from previous (crashed) container
kubectl exec -it <pod-name> -- sh      # Shell into container

Symptom | Likely Cause | Debug Command
ImagePullBackOff | Wrong image name or missing registry credentials | kubectl describe pod <pod> – check Events
CrashLoopBackOff | App crashes on startup | kubectl logs <pod> or kubectl logs -p <pod>
Pod Pending | Insufficient resources or bad nodeSelector | kubectl describe pod <pod>
Readiness failing | Wrong probe path or dependency not ready | kubectl describe pod <pod> – check Events
Config issues | Wrong env vars or missing mounts | kubectl exec -it <pod> -- sh

API Deprecations

Each resource is identified by an API group, version, and kind. Kubernetes graduates APIs through alpha, beta, and stable (GA), and eventually removes old versions.

Stage | Characteristics
Alpha (v1alpha1) | Experimental, off by default.
Beta (v1beta1) | Enabled by default in many clusters, but may still change before GA.
Stable (v1) | Production-ready, long-term supported.
kubectl api-versions                    # Check available API versions in the cluster
kubectl explain deployment              # Check which apiVersion to use

Common migrations:

Resource | Old API | Current Stable
Deployment | extensions/v1beta1 | apps/v1
DaemonSet | extensions/v1beta1 | apps/v1
StatefulSet | apps/v1beta1 | apps/v1
NetworkPolicy | extensions/v1beta1 | networking.k8s.io/v1
Ingress | extensions/v1beta1 | networking.k8s.io/v1
CronJob | batch/v1beta1 | batch/v1

apps/v1 Deployments require .spec.selector, which was optional in older APIs.
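
The selector must match the pod template labels exactly:

```yaml
spec:
  selector:
    matchLabels:
      app: my-app        # required in apps/v1...
  template:
    metadata:
      labels:
        app: my-app      # ...and must match these labels
```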


5. Storage

PersistentVolumes (PV)

Cluster-wide storage resource provisioned by an admin. Backed by real storage (disk, NFS, cloud volumes).

[ Physical Storage ] <--> [ PV ] <--> [ PVC ] <--> [ Pod ]
apiVersion: v1
kind: PersistentVolume
metadata:
  name: aws-ebs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce         # RWO: one node | ROX: many nodes read-only | RWX: many nodes read-write
  storageClassName: gp2
  persistentVolumeReclaimPolicy: Delete    # Retain | Delete | Recycle (deprecated)
  awsElasticBlockStore:
    volumeID: vol-0abcd1234efgh5678
    fsType: ext4

PV lifecycle phases: Available -> Bound -> Released -> Failed

PersistentVolumeClaims (PVC)

A namespaced request for storage. Kubernetes matches and binds it to a compatible PV (matching accessMode, storageClass, and size).

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 1Gi
# Pod using a PVC
volumes:
- name: my-storage
  persistentVolumeClaim:
    claimName: my-pvc

Binding rules: PVC accessMode must match PV; PVC requested size must be <= PV capacity; Kubernetes picks the smallest matching PV.

StorageClasses

Enables dynamic PV provisioning. When a PVC references a StorageClass, Kubernetes creates a PV automatically – no manual PV creation needed.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: ebs.csi.aws.com    # CSI driver (in-tree kubernetes.io/aws-ebs was removed in v1.27)
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer    # Immediate | WaitForFirstConsumer

StatefulSets

For stateful apps that need stable pod identity and per-pod storage. Unlike Deployments, pods are named with an ordinal index (mysql-0, mysql-1) and scale up/down sequentially.

Requires a headless Service (clusterIP: None) for stable DNS.

# Headless Service
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None
  selector:
    app: mysql
  ports:
  - port: 3306
# DNS: mysql-0.mysql.default.svc.cluster.local
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: my-secret-pw
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi

6. Multi-Container Pod Patterns

Sidecar

A helper container runs alongside the main container in the same pod, sharing network and volumes. Used to add functionality without modifying the main container.

# nginx writes logs to a shared volume; fluentd sidecar streams them out
spec:
  containers:
  - name: nginx
    image: nginx:1.27
    volumeMounts:
    - name: log-volume
      mountPath: /var/log/nginx
  - name: fluentd-sidecar
    image: fluent/fluentd:v1.16
    volumeMounts:
    - name: log-volume
      mountPath: /var/log/nginx
  volumes:
  - name: log-volume
    emptyDir: {}

Init Containers

Runs before the main containers start. All init containers must complete successfully (exit 0) in sequence before the pod proceeds. Used for pre-start setup like waiting for a dependency, running migrations, or writing config.

spec:
  volumes:
  - name: config-volume
    emptyDir: {}
  initContainers:
  - name: setup-config
    image: busybox
    command: ['sh', '-c', 'echo "config initialized" > /work/config.txt']
    volumeMounts:
    - name: config-volume
      mountPath: /work
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config

7. Configuration, Security, and RBAC

ServiceAccounts

A ServiceAccount provides a pod with an identity for authenticating to the Kubernetes API. Every namespace gets a default SA that is auto-mounted into pods. Credentials are mounted at /var/run/secrets/kubernetes.io/serviceaccount/token.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: custom-sa
  namespace: dev
# Assign SA to a pod
spec:
  serviceAccountName: custom-sa
  automountServiceAccountToken: false    # Disable token injection if API access not needed

By itself a ServiceAccount has no permissions. It needs a Role and RoleBinding.

RBAC

Key Idea: ServiceAccounts (or users) are the who. Roles/ClusterRoles define the what. Bindings connect them.

Component | Scope | Purpose
Role | Namespaced | Permissions within one namespace
ClusterRole | Cluster-wide | Permissions across all namespaces, or for non-namespaced resources
RoleBinding | Namespaced | Grants a Role to a subject in a namespace
ClusterRoleBinding | Cluster-wide | Grants a ClusterRole to a subject globally

Role

kubectl create role pod-reader --verb=get --verb=list --resource=pods
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]               # "" = core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

RoleBinding

kubectl create rolebinding read-pods-binding --role=pod-reader --serviceaccount=default:custom-sa
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: custom-sa
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

ClusterRole works the same as Role but applies cluster-wide, and can also target non-namespaced resources (nodes, persistentvolumes) and non-resource URLs (/healthz, /metrics).

Testing permissions

kubectl auth can-i list pods --as=system:serviceaccount:prod:web-sa -n prod
kubectl auth can-i list pods -A --as=system:serviceaccount:prod:web-sa

SecurityContext

Security settings applied at pod level (defaults for all containers) or container level (overrides pod-level).

Setting | Pod-level | Container-level
runAsUser, runAsGroup | Yes | Yes (overrides)
fsGroup | Yes | No
privileged | No | Yes
capabilities | No | Yes
readOnlyRootFilesystem | No | Yes
# Pod-level: all containers run as UID 1000
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
# Container-level: least privilege with capabilities
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]
    add: ["NET_BIND_SERVICE"]    # Allow binding to ports < 1024

Common Linux capabilities:

Capability | Purpose
NET_BIND_SERVICE | Bind to privileged ports (<1024)
CHOWN | Change file ownership
KILL | Send signals to other processes
SYS_TIME | Modify system clock
ALL | Every capability – always drop this first

CRDs and Operators

Kubernetes lets you define custom resource types via CustomResourceDefinitions, extending the API with your own kinds.

# 1. Define the CRD
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: postgresdatabases.mycompany.com
spec:
  group: mycompany.com
  names:
    kind: PostgresDatabase
    plural: postgresdatabases
    singular: postgresdatabase
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
# 2. Create a custom resource
apiVersion: mycompany.com/v1
kind: PostgresDatabase
metadata:
  name: customer-db
spec:
  size: small
  version: "15.0"

An Operator is a controller pod (installed from OperatorHub, Helm, or GitHub) that watches custom resources and acts on them – creating pods, running migrations, handling upgrades, etc.

kubectl get crd
kubectl describe crd postgresdatabases.mycompany.com
kubectl get postgresdatabases

8. Services and Networking

Services

Services provide stable networking for pods. Since pod IPs change on restart, a Service gives a fixed ClusterIP and DNS name backed by an Endpoints object.

kubectl expose pod my-pod --port=80 --target-port=8080 --name=my-svc
kubectl expose deployment my-deploy --type=NodePort --port=80 --target-port=8080
kubectl expose deployment my-deploy --type=LoadBalancer --port=80 --target-port=8080

Type | Access | Use Case
ClusterIP (default) | Cluster-internal only | Service-to-service communication
NodePort | External via node IP + static port (30000-32767) | Dev/testing without a load balancer
LoadBalancer | External via cloud load balancer | Production ingress from the internet
# ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80           # Service port
    targetPort: 8080   # Container port
# NodePort
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080    # Optional, auto-assigned if omitted

Testing:

kubectl exec -it <pod> -- curl http://my-service:80         # ClusterIP
kubectl get nodes -o wide && curl http://<NodeIP>:30080     # NodePort
kubectl get endpoints <service-name>                         # Verify endpoint exists

Ingress

Single external entry point for HTTP/HTTPS routing to multiple Services by hostname and path. Requires an Ingress Controller (NGINX, Traefik, etc.) to be installed.

Internet --> [Ingress Controller] --> Ingress Rules --> Services --> Pods
kubectl get ingressclass
kubectl create ingress web-ing --rule="example.com/=web:80" --class=nginx
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: example-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80

Testing:

kubectl port-forward svc/ingress-nginx-controller -n ingress-nginx 8080:80
curl http://localhost:8080
curl http://localhost:8080/api

NetworkPolicies

Controls pod-to-pod and pod-to-external traffic. Requires a CNI plugin that supports it (Calico, Cilium, Weave Net).

When any NetworkPolicy selects a pod, all traffic not explicitly allowed is denied.
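
A common baseline is a default-deny policy: it selects every pod in the namespace and lists no ingress rules, so all inbound traffic is denied until other policies allow it.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}      # empty selector = every pod in the namespace
  policyTypes:
  - Ingress            # no ingress rules listed, so nothing is allowed in
```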

Restrict ingress by pod label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
      - podSelector:
          matchLabels:
            role: frontend
    ports:
    - protocol: TCP
      port: 5432

Restrict by IP range (Ingress + Egress):

spec:
  podSelector:
    matchLabels:
      role: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
      - ipBlock:
          cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 80
  egress:
  - to:
      - ipBlock:
          cidr: 0.0.0.0/0
    ports:
    - protocol: TCP
      port: 443

9. Tooling

Docker

Action | Command
Build image | docker build -t myimage:1.0 .
List images | docker images
Run container | docker run -it myimage:1.0
Tag image | docker tag myimage:1.0 myrepo/myimage:1.0
Push to registry | docker push myrepo/myimage:1.0
Inspect container | docker inspect <container-id>
Login to registry | docker login <registry-url>

Helm

Manages Kubernetes apps as versioned packages (charts). Uses templating to support multiple environments via values.yaml.

Term | Meaning
Chart | Package of Kubernetes manifests (templates + values)
Release | Deployed instance of a chart
values.yaml | Configurable parameters for the chart

Action | Command
Add repo | helm repo add bitnami https://charts.bitnami.com/bitnami
Update repos | helm repo update
Search | helm search repo nginx
Install | helm install my-nginx bitnami/nginx
List releases | helm list
Upgrade | helm upgrade my-nginx bitnami/nginx --set service.type=NodePort
Uninstall | helm uninstall my-nginx
Dry-run | helm install --dry-run --debug my-nginx bitnami/nginx
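
--set overrides can be collected into a values file instead; for example, the service.type override above as YAML (key layout assumed from the chart):

```yaml
# my-values.yaml, passed with: helm install my-nginx bitnami/nginx -f my-values.yaml
service:
  type: NodePort
```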

Kustomize

Built into kubectl. Customizes Kubernetes manifests for different environments without forking the base files. Uses a declarative overlay approach.

Term | Description
Base | Original generic manifests
Overlay | Environment-specific patches on top of the base
kustomization.yaml | Declares which resources and patches to apply
# kustomization.yaml to apply multiple manifests at once
resources:
  - deployment.yaml
  - service.yaml
kubectl kustomize my-app/           # Preview final manifest
kubectl apply -k my-app/            # Apply with kustomize

For multiple environments, keep one base and write small patch files per environment:

base/
  deployment.yaml
  service.yaml
overlays/
  dev/
    kustomization.yaml
    patch-replicas.yaml
  prod/
    kustomization.yaml
    patch-replicas.yaml
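
A sketch of the dev overlay files (file contents and the my-app name are assumed for illustration):

```yaml
# overlays/dev/kustomization.yaml
resources:
  - ../../base
patches:
  - path: patch-replicas.yaml

# overlays/dev/patch-replicas.yaml – strategic merge patch; the name
# must match the Deployment in the base
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
```

Applied with kubectl apply -k overlays/dev/.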
