What happens if only a limit is specified for a resource and no admission-time mechanism has applied a default request?
Kubernetes will create the container but it will fail with CrashLoopBackOff.
Kubernetes does not allow containers to be created without request values, causing eviction.
Kubernetes copies the specified limit and uses it as the requested value for the resource.
Kubernetes chooses a random value and uses it as the requested value for the resource.
In Kubernetes, resource management for containers is based on requests and limits. Requests represent the minimum amount of CPU or memory required for scheduling decisions, while limits define the maximum amount a container is allowed to consume at runtime. Understanding how Kubernetes behaves when only a limit is specified is important for predictable scheduling and resource utilization.
If a container specifies a resource limit but does not explicitly specify a resource request, Kubernetes applies a well-defined default behavior. In this case, Kubernetes automatically sets the request equal to the specified limit. This behavior ensures that the scheduler has a concrete request value to use when deciding where to place the Pod. Without a request value, the scheduler would not be able to make accurate placement decisions, as scheduling is entirely request-based.
This defaulting behavior applies independently to each resource type, such as CPU and memory. For example, if a container sets a memory limit of 512Mi but does not define a memory request, Kubernetes treats the memory request as 512Mi as well. The same applies to CPU limits. As a result, the Pod is scheduled as if it requires the full amount of resources defined by the limit.
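As a concrete sketch (the Pod name and image here are placeholders, not from the question), a manifest that sets only limits ends up with matching requests after admission:

    apiVersion: v1
    kind: Pod
    metadata:
      name: limits-only            # hypothetical name
    spec:
      containers:
      - name: app
        image: nginx               # placeholder image
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          # No requests block: the API server defaults requests.cpu to 500m
          # and requests.memory to 512Mi, matching the limits above.

Inspecting the stored object (for example with kubectl get pod limits-only -o yaml) would show requests populated with the same values as the limits.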
Option A is incorrect because specifying only a limit does not cause a container to crash or enter CrashLoopBackOff. CrashLoopBackOff is related to application failures, not resource specification defaults. Option B is incorrect because Kubernetes allows containers to be created without explicit requests, relying on defaulting behavior instead. Option D is incorrect because Kubernetes never assigns random values for resource requests.
This behavior is clearly defined in Kubernetes resource management documentation and is especially relevant when admission controllers like LimitRange are not applying default requests. While valid, relying solely on limits can reduce cluster efficiency, as Pods may reserve more resources than they actually need. Therefore, best practice is to explicitly define both requests and limits.
Thus, the correct and verified answer is Option C.
=========
What framework does Kubernetes use to authenticate users with JSON Web Tokens?
OpenID Connect
OpenID Container
OpenID Cluster
OpenID CNCF
Kubernetes commonly authenticates users using OpenID Connect (OIDC) when JSON Web Tokens (JWTs) are involved, so A is correct. OIDC is an identity layer on top of OAuth 2.0 that standardizes how clients obtain identity information and how JWTs are issued and validated.
In Kubernetes, authentication happens at the API server. When OIDC is configured, the API server validates incoming bearer tokens (JWTs) by checking token signature and claims against the configured OIDC issuer and client settings. Kubernetes can use OIDC claims (such as sub, email, groups) to map the authenticated identity to Kubernetes RBAC subjects. This is how enterprises integrate clusters with identity providers such as Okta, Dex, Azure AD, or other OIDC-compliant IdPs.
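As a rough sketch of the wiring (the issuer URL, client ID, and claim names below are placeholder values; real settings vary by IdP), the API server is started with OIDC flags along these lines:

    # Placeholder issuer and client ID; username/groups claims depend on the IdP.
    kube-apiserver \
      --oidc-issuer-url=https://idp.example.com \
      --oidc-client-id=kubernetes \
      --oidc-username-claim=email \
      --oidc-groups-claim=groups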
Options B, C, and D are fabricated phrases and not real frameworks. Kubernetes documentation explicitly references OIDC as a supported method for token-based user authentication (alongside client certificates, bearer tokens, static token files, and webhook authentication). The key point is that Kubernetes does not “invent” JWT auth; it integrates with standard identity providers through OIDC so clusters can participate in centralized SSO and group-based authorization.
Operationally, OIDC authentication is typically paired with:
RBAC for authorization (“what you can do”)
Audit logging for traceability
Short-lived tokens and rotation practices for security
Group claim mapping to simplify permission management
So, the verified framework Kubernetes uses with JWTs for user authentication is OpenID Connect.
=========
What is the role of a NetworkPolicy in Kubernetes?
The ability to cryptic and obscure all traffic.
The ability to classify the Pods as isolated and non-isolated.
The ability to prevent loopback or incoming host traffic.
The ability to log network security events.
A Kubernetes NetworkPolicy defines which traffic is allowed to and from Pods by selecting Pods and specifying ingress/egress rules. A key conceptual effect is that it can make Pods “isolated” (default deny except what is allowed) versus “non-isolated” (default allow). This aligns best with option B, so B is correct.
By default, Kubernetes networking is permissive: Pods can typically talk to any other Pod. When you apply a NetworkPolicy that selects a set of Pods, those selected Pods become “isolated” for the direction(s) covered by the policy (ingress and/or egress). That means only traffic explicitly allowed by the policy is permitted; everything else is denied (again, for the selected Pods and direction). This classification concept—isolated vs non-isolated—is a common way the Kubernetes documentation explains NetworkPolicy behavior.
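A minimal sketch (the app=db and app=web labels are hypothetical): this policy isolates the database Pods for ingress, then re-allows traffic only from the web Pods on port 5432:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: db-allow-web           # hypothetical name
    spec:
      podSelector:
        matchLabels:
          app: db                  # selected Pods become "isolated" for ingress
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: web             # only Pods with this label may connect
        ports:
        - protocol: TCP
          port: 5432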
Option A is incorrect: NetworkPolicy does not encrypt (“cryptic and obscure”) traffic. Encryption is typically handled by mTLS via a service mesh or application-layer TLS. Option C is not the primary role; loopback and host traffic handling depend on the network plugin and node configuration, and NetworkPolicy is not a “prevent loopback” mechanism. Option D is incorrect because NetworkPolicy is not a logging system; while some CNIs can produce logs about policy decisions, logging is not NetworkPolicy’s role in the API.
One critical Kubernetes detail: NetworkPolicy enforcement is performed by the CNI/network plugin. If your CNI doesn’t implement NetworkPolicy, creating these objects won’t change runtime traffic. In CNIs that do support it, NetworkPolicy becomes a foundational security primitive for segmentation and least privilege: restricting database access to app Pods only, isolating namespaces, and reducing lateral movement risk.
So, in the language of the provided answers, NetworkPolicy’s role is best captured as the ability to classify Pods into isolated/non-isolated by applying traffic-allow rules—option B.
=========
What is the order of 4C’s in Cloud Native Security, starting with the layer that a user has the most control over?
Cloud -> Container -> Cluster -> Code
Container -> Cluster -> Code -> Cloud
Cluster -> Container -> Code -> Cloud
Code -> Container -> Cluster -> Cloud
The Cloud Native Security “4C’s” model is commonly presented as Code, Container, Cluster, Cloud, ordered from the layer you control most directly to the one you control least—therefore D is correct. The idea is defense-in-depth across layers, recognizing that responsibilities are shared between developers, platform teams, and cloud providers.
Code is where users have the most direct control: application logic, dependencies, secure coding practices, secrets handling patterns, and testing. This includes validating inputs, avoiding vulnerabilities, and scanning dependencies. Next is the Container layer: building secure images, minimizing image size/attack surface, using non-root users, setting file permissions, and scanning images for known CVEs. Container security is about ensuring the artifact you run is trustworthy and hardened.
Then comes the Cluster layer: Kubernetes configuration and runtime controls, including RBAC, admission policies (OPA/Gatekeeper), Pod Security standards, network policies, runtime security, audit logging, and node hardening practices. Cluster controls determine what can run and how workloads interact. Finally, the Cloud layer includes the infrastructure and provider controls—IAM, VPC/networking, KMS, managed control plane protections, and physical security—which users influence through configuration but do not fully own.
The model’s value is prioritization: start with what you control most (code), then harden the container artifact, then enforce cluster policy and runtime protections, and finally ensure cloud controls are configured properly. This layered approach aligns well with Kubernetes security guidance and modern shared-responsibility models.
=========
Kubernetes Secrets are specifically intended to hold confidential data. Which API object should be used to hold non-confidential data?
CNI
CSI
ConfigMaps
RBAC
In Kubernetes, different API objects are designed for different categories of configuration and operational data. Secrets are used to store sensitive information such as passwords, API tokens, and encryption keys. For data that is not confidential, Kubernetes provides the ConfigMap resource, making option C the correct answer.
ConfigMaps are intended to hold non-sensitive configuration data that applications need at runtime. Examples include application configuration files, feature flags, environment-specific settings, URLs, port numbers, and command-line arguments. ConfigMaps allow developers to decouple configuration from application code, which aligns with cloud-native and twelve-factor app principles. This separation makes applications more portable, easier to manage, and simpler to update without rebuilding container images.
ConfigMaps can be consumed by Pods in several ways: as environment variables, as command-line arguments, or as files mounted into a container’s filesystem. Because they are not designed for confidential data, ConfigMaps store values in plaintext and do not provide encryption by default. This is why sensitive data must always be stored in Secrets instead.
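The sketch below (names, image, and keys are hypothetical) shows two common consumption paths side by side, an environment variable and a mounted file:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config             # hypothetical name
    data:
      LOG_LEVEL: "info"
      app.properties: |
        feature.flag=true
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: app
    spec:
      containers:
      - name: app
        image: busybox             # placeholder image
        command: ["sh", "-c", "env; sleep 3600"]
        env:
        - name: LOG_LEVEL          # single key exposed as an env var
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
        volumeMounts:
        - name: config
          mountPath: /etc/config   # keys appear as files in this directory
      volumes:
      - name: config
        configMap:
          name: app-config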
Option A, CNI (Container Network Interface), is a networking specification used to configure Pod networking and is unrelated to data storage. Option B, CSI (Container Storage Interface), is used for integrating external storage systems with Kubernetes and does not store configuration data. Option D, RBAC, defines authorization policies and access controls within the cluster and is not a data storage mechanism.
While both Secrets and ConfigMaps can technically be accessed in similar ways by Pods, Kubernetes clearly distinguishes their intended use cases based on data sensitivity. Using ConfigMaps for non-confidential data improves clarity, security posture, and maintainability of Kubernetes configurations.
Therefore, the correct and verified answer is Option C: ConfigMaps, which are explicitly designed to hold non-confidential configuration data in Kubernetes.
=========
What is the core functionality of GitOps tools like Argo CD and Flux?
They track production changes made by a human in a Git repository and generate a human-readable audit trail.
They replace human operations with an agent that tracks Git commands.
They automatically create pull requests when dependencies are outdated.
They continuously compare the desired state in Git with the actual production state and notify or act upon differences.
The defining capability of GitOps controllers such as Argo CD and Flux is continuous reconciliation: they compare the desired state stored in Git to the actual state in the cluster and then alert and/or correct drift, making D correct. In GitOps, Git becomes the single source of truth for declarative configuration (Kubernetes manifests, Helm charts, Kustomize overlays). The controller watches Git for changes and applies them, and it also watches the cluster for divergence.
This is more than “auditing human changes” (option A). GitOps does provide auditability because changes are made via commits and pull requests, but the core functionality is the reconciliation loop that keeps cluster state aligned with Git, including optional automated sync/remediation. Option B is not accurate because GitOps is not about tracking user Git commands; it’s about reconciling desired state definitions. Option C (automatically creating pull requests for outdated dependencies) is a useful feature in some tooling ecosystems, but it is not the central defining behavior of GitOps controllers.
In Kubernetes delivery terms, this approach improves reliability: rollouts become repeatable, configuration drift is detected, and recovery is simpler (reapply known-good state from Git). It also supports separation of duties: platform teams can control policies and base layers, while app teams propose changes via PRs.
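For a flavor of how this is expressed declaratively, here is a rough Argo CD Application sketch (repository URL, path, and names are placeholders), with automated sync enabled so drift is corrected rather than just reported:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-app                 # hypothetical name
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://git.example.com/org/repo.git   # placeholder repo
        targetRevision: main
        path: deploy
      destination:
        server: https://kubernetes.default.svc
        namespace: my-app
      syncPolicy:
        automated:
          prune: true              # remove resources deleted from Git
          selfHeal: true           # revert manual changes made in the cluster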
So the verified statement is: GitOps tools continuously reconcile Git desired state with cluster actual state—exactly option D.
=========
Which type of Service requires manual creation of Endpoints?
LoadBalancer
Services without selectors
NodePort
ClusterIP with selectors
A Kubernetes Service without selectors requires you to manage its backend endpoints manually, so B is correct. Normally, a Service uses a selector to match a set of Pods (by labels). Kubernetes then automatically maintains the backend list (historically Endpoints, now commonly EndpointSlice) by tracking which Pods match the selector and are Ready. This automation is one of the key reasons Services provide stable connectivity to dynamic Pods.
When you create a Service without a selector, Kubernetes has no way to know which Pods (or external IPs) should receive traffic. In that pattern, you explicitly create an Endpoints object (or EndpointSlices, depending on your approach and controller support) that maps the Service name to one or more IP:port tuples. This is commonly used to represent external services (e.g., a database running outside the cluster) while still providing a stable Kubernetes Service DNS name for in-cluster clients. Another use case is advanced migration scenarios where endpoints are controlled by custom controllers rather than label selection.
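A minimal sketch of this pattern (the name and IP are placeholders); note that the Endpoints object must share the Service's name:

    apiVersion: v1
    kind: Service
    metadata:
      name: external-db            # hypothetical name
    spec:
      ports:                       # no selector field: endpoints are managed manually
      - port: 5432
        targetPort: 5432
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: external-db            # must match the Service name
    subsets:
    - addresses:
      - ip: 10.0.0.50              # placeholder IP of the external database
      ports:
      - port: 5432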
Why the other options are wrong: Service types like ClusterIP, NodePort, and LoadBalancer describe how a Service is exposed, but they do not inherently require manual endpoint management. A ClusterIP Service with selectors (D) is the standard case where endpoints are automatically created and updated. NodePort and LoadBalancer Services also typically use selectors and therefore inherit automatic endpoint management; the difference is in how traffic enters the cluster, not how backends are discovered.
Operationally, when using Services without selectors, you must ensure endpoint IPs remain correct, health is accounted for (often via external tooling), and you update endpoints when backends change. The key concept is: no selector → Kubernetes can’t auto-populate endpoints → you must provide them.
=========
In the Kubernetes platform, which component is responsible for running containers?
etcd
CRI-O
cloud-controller-manager
kube-controller-manager
In Kubernetes, the actual act of running containers on a node is performed by the container runtime. The kubelet instructs the runtime via CRI, and the runtime pulls images, creates containers, and manages their lifecycle. Among the options provided, CRI-O is the only container runtime, so B is correct.
It’s important to be precise: the component that “runs containers” is not the control plane and not etcd. etcd (option A) stores cluster state (API objects) as the backing datastore. It never runs containers. cloud-controller-manager (option C) integrates with cloud APIs for infrastructure like load balancers and nodes. kube-controller-manager (option D) runs controllers that reconcile Kubernetes objects (Deployments, Jobs, Nodes, etc.) but does not execute containers on worker nodes.
CRI-O is a CRI implementation that is optimized for Kubernetes and typically uses an OCI runtime (like runc) under the hood to start containers. Another widely used runtime is containerd. The runtime is installed on nodes and is a prerequisite for kubelet to start Pods. When a Pod is scheduled to a node, kubelet reads the PodSpec and asks the runtime to create a “pod sandbox” and then start the container processes. Runtime behavior also includes pulling images, setting up namespaces/cgroups, and exposing logs/stdout streams back to Kubernetes tooling.
So while “the container runtime” is the most general answer, the question’s option list makes CRI-O the correct selection because it is a container runtime responsible for running containers in Kubernetes.
=========
In a serverless computing architecture:
Users of the cloud provider are charged based on the number of requests to a function.
Serverless functions are incompatible with containerized functions.
Users should make a reservation to the cloud provider based on an estimation of usage.
Containers serving requests are running in the background in idle status.
Serverless architectures typically bill based on actual consumption, often measured as number of requests and execution duration (and sometimes memory/CPU allocated), so A is correct. The defining trait is that you don’t provision or manage servers directly; the platform scales execution up and down automatically, including down to zero for many models, and charges you for what you use.
Option B is incorrect: many serverless platforms can run container-based workloads (and some are explicitly “serverless containers”). The idea is the operational abstraction and billing model, not incompatibility with containers. Option C is incorrect because “making a reservation based on estimation” describes reserved capacity purchasing, which is the opposite of the typical serverless pay-per-use model. Option D is misleading: serverless systems aim to avoid charging for idle compute; while platforms may keep some warm capacity for latency reasons, the customer-facing model is not “containers running idle in the background.”
In cloud-native architecture, serverless is often chosen for spiky, event-driven workloads where you want minimal ops overhead and cost efficiency at low utilization. It pairs naturally with eventing systems (queues, pub/sub) and can be integrated with Kubernetes ecosystems via event-driven autoscaling frameworks or managed serverless offerings.
So the correct statement is A: charging is commonly based on requests (and usage), which captures the cost and operational model that differentiates serverless from always-on infrastructure.
=========
Which authorization-mode allows granular control over the operations that different entities can perform on different objects in a Kubernetes cluster?
Webhook Mode Authorization Control
Role Based Access Control
Node Authorization Access Control
Attribute Based Access Control
Role Based Access Control (RBAC) is the standard Kubernetes authorization mode that provides granular control over what users and service accounts can do to which resources, so B is correct. RBAC works by defining Roles (namespaced) and ClusterRoles (cluster-wide) that contain sets of rules. Each rule specifies API groups, resource types, resource names (optional), and allowed verbs such as get, list, watch, create, update, patch, and delete. You then attach these roles to identities using RoleBindings or ClusterRoleBindings.
This gives fine-grained, auditable access control. For example, you can allow a CI service account to create and patch Deployments only in a specific namespace, while restricting it from reading Secrets. You can allow developers to view Pods and logs but prevent them from changing cluster-wide networking resources. This is exactly the “granular control over operations on objects” described by the question.
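To make the CI example concrete, here is a hedged sketch (namespace and names are hypothetical) granting a service account only Deployment management in one namespace; because RBAC is deny-by-default, Secrets stay unreadable simply by not being listed:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: deploy-manager         # hypothetical name
      namespace: ci
    rules:
    - apiGroups: ["apps"]
      resources: ["deployments"]
      verbs: ["get", "list", "create", "patch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: ci-deploy-manager
      namespace: ci
    subjects:
    - kind: ServiceAccount
      name: ci-bot                 # hypothetical service account
      namespace: ci
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: deploy-manager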
Why other options are not the best answer: “Webhook mode” is an authorization mechanism where Kubernetes calls an external service to decide authorization. While it can be granular depending on the external system, Kubernetes’ common built-in answer for granular object-level control is RBAC. “Node authorization” is a specialized authorizer for kubelets/nodes to access resources they need; it’s not the general-purpose system for all cluster entities. ABAC (Attribute-Based Access Control) is an older mechanism and is not the primary recommended authorization model; it can be expressive but is less commonly used and not the default best-practice for Kubernetes authorization today.
In Kubernetes security practice, RBAC is typically paired with authentication (certs/OIDC), admission controls, and namespaces to build a defense-in-depth security posture. RBAC policy is also central to least privilege: granting only what is necessary for a workload or user role to function. This reduces blast radius if credentials are compromised.
Therefore, the verified answer is B: Role Based Access Control.
=========
What is the difference between a Deployment and a ReplicaSet?
With a Deployment, you can’t control the number of pod replicas.
A ReplicaSet does not guarantee a stable set of replica pods running.
A Deployment is basically the same as a ReplicaSet with annotations.
A Deployment is a higher-level concept that manages ReplicaSets.
A Deployment is a higher-level controller that manages ReplicaSets and provides rollout/rollback behavior, so D is correct. A ReplicaSet’s primary job is to ensure that a specified number of Pod replicas are running at any time, based on a label selector and Pod template. It’s a fundamental “keep N Pods alive” controller.
Deployments build on that by managing the lifecycle of ReplicaSets over time. When you update a Deployment (for example, changing the container image tag or environment variables), Kubernetes creates a new ReplicaSet for the new Pod template and gradually shifts replicas from the old ReplicaSet to the new one according to the rollout strategy (RollingUpdate by default). Deployments also retain revision history, making it possible to roll back to a previous ReplicaSet if a rollout fails.
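A quick illustration with kubectl (the deployment/web name, container name, and image tag are placeholders):

    kubectl set image deployment/web app=web:v2   # new Pod template -> new ReplicaSet
    kubectl rollout status deployment/web         # watch replicas shift old -> new
    kubectl get replicasets                       # both ReplicaSets visible mid-rollout
    kubectl rollout undo deployment/web           # roll back to the previous revision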
Why the other options are incorrect:
A is false: Deployments absolutely control the number of replicas via spec.replicas and can also be controlled by HPA.
B is false: ReplicaSets do guarantee that a stable number of replicas is running (that is their core purpose).
C is false: a Deployment is not “a ReplicaSet with annotations.” It is a distinct API resource with additional controller logic for declarative updates, rollouts, and revision tracking.
Operationally, most teams create Deployments rather than ReplicaSets directly because Deployments are safer and more feature-complete for application delivery. ReplicaSets still appear in real clusters because Deployments create them automatically; you’ll commonly see multiple ReplicaSets during rollout transitions. Understanding the hierarchy is crucial for troubleshooting: if Pods aren’t behaving as expected, you often trace from Deployment → ReplicaSet → Pod, checking selectors, events, and rollout status.
So the key difference is: ReplicaSet maintains replica count; Deployment manages ReplicaSets and orchestrates updates. Therefore, D is the verified answer.
=========
To visualize data from Prometheus, you can use the expression browser or console templates. What is the other data visualization tool commonly used together with Prometheus?
Grafana
Graphite
Nirvana
GraphQL
The most common visualization tool used with Prometheus is Grafana, so A is correct. Prometheus includes a built-in expression browser that can graph query results, but Grafana provides a much richer dashboarding experience: reusable dashboards, variables, templating, annotations, alerting integrations, and multi-data-source support.
In Kubernetes observability stacks, Prometheus scrapes and stores time-series metrics (cluster and application metrics). Grafana queries Prometheus using PromQL and renders the results into dashboards for SREs and developers. This pairing is widespread because it cleanly separates concerns: Prometheus is the metrics store and query engine; Grafana is the UI and dashboard layer.
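For example, a Grafana panel might chart a PromQL query like the one below (http_requests_total is the conventional example metric; your applications may expose different names):

    # Per-second HTTP request rate over the last 5 minutes, split by status code
    sum(rate(http_requests_total[5m])) by (code)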
Option B (Graphite) is a separate metrics system with its own storage/query model; while Grafana can visualize Graphite too, the question asks what is commonly used together with Prometheus, which is Grafana. Option D (GraphQL) is an API query language, not a metrics visualization tool. Option C (“Nirvana”) is not a standard Prometheus visualization tool in common Kubernetes stacks.
In practice, this combo enables operational outcomes: dashboards for error rates and latency (often derived from histograms), capacity monitoring (node CPU/memory), workload behavior (Pod restarts, HPA scaling), and SLO reporting. Grafana dashboards often serve as the shared language during incidents: teams correlate alerts with time-series patterns and quickly identify when regressions began.
Therefore, the verified correct tool commonly used with Prometheus for visualization is Grafana (A).
=========
Ceph is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. Which open-source cloud native storage orchestrator automates deployment and management of Ceph to provide self-managing, self-scaling, and self-healing storage services?
CubeFS
OpenEBS
Rook
MinIO
Rook is the open-source, cloud-native storage orchestrator specifically designed to automate the deployment, configuration, and lifecycle management of Ceph within Kubernetes environments. Its primary goal is to transform complex, traditionally manual storage systems like Ceph into Kubernetes-native services that are easy to operate and highly resilient.
Ceph itself is a mature and powerful distributed storage platform that supports block storage (RBD), object storage (RGW), and shared filesystems (CephFS). However, operating Ceph directly requires deep expertise, careful configuration, and continuous operational management. Rook addresses this challenge by running Ceph as a set of Kubernetes-managed components and exposing storage capabilities through Kubernetes Custom Resource Definitions (CRDs). This allows administrators to declaratively define storage clusters, pools, filesystems, and object stores using familiar Kubernetes patterns.
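As a rough illustration of that declarative style (the image tag and settings are placeholder values, not a production layout), a CephCluster custom resource looks something like this:

    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      cephVersion:
        image: quay.io/ceph/ceph:v18   # placeholder Ceph image tag
      dataDirHostPath: /var/lib/rook
      mon:
        count: 3                       # three monitors for quorum
      storage:
        useAllNodes: true
        useAllDevices: true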
Rook continuously monitors the health of the Ceph cluster and takes automated actions to maintain the desired state. If a Ceph daemon fails or a node becomes unavailable, Rook works with Kubernetes scheduling and Ceph’s internal replication mechanisms to ensure data durability and service continuity. This enables self-healing behavior. Scaling storage capacity is also simplified—adding nodes or disks allows Rook and Ceph to automatically rebalance data, providing self-scaling capabilities without manual intervention.
The other options are incorrect for this use case. CubeFS is a distributed filesystem but is not a Ceph orchestrator. OpenEBS focuses on container-attached storage and local or replicated volumes rather than managing Ceph itself. MinIO is an object storage server compatible with S3 APIs, but it does not orchestrate Ceph or provide block and filesystem services.
Therefore, the correct and verified answer is Option C: Rook, which is the officially recognized Kubernetes-native orchestrator for Ceph, delivering automated, resilient, and scalable storage management aligned with cloud-native principles.
=========
What is Serverless computing?
A computing method of providing backend services on an as-used basis.
A computing method of providing services for AI and ML operating systems.
A computing method of providing services for quantum computing operating systems.
A computing method of providing services for cloud computing operating systems.
Serverless computing is a cloud execution model where the provider manages infrastructure concerns and you consume compute as a service, typically billed based on actual usage (requests, execution time, memory), which matches A. In other words, you deploy code (functions) or sometimes containers, configure triggers (HTTP events, queues, schedules), and the platform automatically provisions capacity, scales it up/down, and handles much of availability and fault tolerance behind the scenes.
From a cloud-native architecture standpoint, “serverless” doesn’t mean there are no servers; it means developers don’t manage servers. The platform abstracts away node provisioning, OS patching, and much of runtime scaling logic. This aligns with the “as-used basis” phrasing: you pay for what you run rather than maintaining always-on capacity.
It’s also useful to distinguish serverless from Kubernetes. Kubernetes automates orchestration (scheduling, self-healing, scaling), but operating Kubernetes still involves cluster-level capacity decisions, node pools, upgrades, networking baseline, and policy. With serverless, those responsibilities are pushed further toward the provider/platform. Kubernetes can enable serverless experiences (for example, event-driven autoscaling frameworks), but serverless as a model is about a higher level of abstraction than “orchestrate containers yourself.”
Options B, C, and D are incorrect because they describe specialized or vague “operating system” services rather than the commonly accepted definition. Serverless is not specifically about AI/ML OSs or quantum OSs; it’s a general compute delivery model that can host many kinds of workloads.
Therefore, the correct definition in this question is A: providing backend services on an as-used basis.
=========
In a Kubernetes cluster, which scenario best illustrates the use case for a StatefulSet?
A web application that requires multiple replicas for load balancing.
A service that routes traffic to various microservices in the cluster.
A background job that runs periodically and does not maintain state.
A database that requires persistent storage and stable network identities.
A StatefulSet is a Kubernetes workload API object specifically designed to manage stateful applications. Unlike Deployments or ReplicaSets, which are intended for stateless workloads, StatefulSets provide guarantees about the ordering, uniqueness, and persistence of Pods. These guarantees are critical for applications that rely on stable identities and durable storage, such as databases, message brokers, and distributed systems.
The defining characteristics of a StatefulSet include stable network identities, persistent storage, and ordered deployment and scaling. Each Pod created by a StatefulSet receives a unique and predictable name (for example, database-0, database-1), which remains consistent across Pod restarts. This stable identity is essential for stateful applications that depend on fixed hostnames for leader election, replication, or peer discovery. Additionally, StatefulSets are commonly used with PersistentVolumeClaims, ensuring that each Pod is bound to its own persistent storage that is retained even if the Pod is rescheduled or restarted.
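A condensed sketch (name, image, and sizes are placeholders) showing the stable-identity and per-Pod-storage pieces together:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: database               # Pods become database-0, database-1, ...
    spec:
      serviceName: database        # headless Service that provides stable DNS names
      replicas: 2
      selector:
        matchLabels:
          app: database
      template:
        metadata:
          labels:
            app: database
        spec:
          containers:
          - name: db
            image: postgres:16     # placeholder image
            volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:        # one PersistentVolumeClaim per Pod, kept across restarts
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi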
Option A is incorrect because web applications that scale horizontally for load balancing are typically stateless and are best managed by Deployments, which allow Pods to be created and destroyed freely without preserving identity. Option B is incorrect because traffic routing to microservices is handled by Services or Ingress resources, not StatefulSets. Option C is incorrect because periodic background jobs that do not maintain state are better suited for Jobs or CronJobs.
Option D correctly represents the ideal use case for a StatefulSet. Databases require persistent data storage, stable network identities, and predictable startup and shutdown behavior. StatefulSets ensure that Pods are started, stopped, and updated in a controlled order, which helps maintain data consistency and application reliability. According to Kubernetes documentation, whenever an application requires stable identities, ordered deployment, and persistent state, a StatefulSet is the recommended and verified solution, making option D the correct answer.
=========
What factors influence the Kubernetes scheduler when it places Pods on nodes?
Pod memory requests, node taints, and Pod affinity.
Pod labels, node labels, and request labels.
Node taints, node level, and Pod priority.
Pod priority, container command, and node labels.
The Kubernetes scheduler chooses a node for a Pod by evaluating scheduling constraints and cluster state. Key inputs include resource requests (CPU/memory), taints/tolerations, and affinity/anti-affinity rules. Option A directly names three real, high-impact scheduling factors—Pod memory requests, node taints, and Pod affinity—so A is correct.
Resource requests are fundamental: the scheduler must ensure the target node has enough allocatable CPU/memory to satisfy the Pod’s requests. Requests (not limits) drive placement decisions. Taints on nodes repel Pods unless the Pod has a matching toleration, which is commonly used to reserve nodes for special workloads (GPU nodes, system nodes, restricted nodes) or to protect nodes under certain conditions. Affinity and anti-affinity allow expressing “place me near” or “place me away” rules—e.g., keep replicas spread across failure domains or co-locate components for latency.
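The three factors named in option A show up in a Pod spec roughly as follows (labels, taint key, and image are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-0                  # hypothetical name
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: nginx               # placeholder image
        resources:
          requests:
            cpu: 250m              # the scheduler filters nodes on requests
            memory: 256Mi
      tolerations:
      - key: dedicated             # placeholder taint key
        operator: Exists
        effect: NoSchedule
      affinity:
        podAntiAffinity:           # spread replicas across nodes
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web
            topologyKey: kubernetes.io/hostname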
Option B includes labels, which do matter, but “request labels” is not a standard scheduler concept; labels influence scheduling mainly through selectors and affinity, not as a direct category called “request labels.” Option C mixes a real concept (taints, priority) with “node level,” which isn’t a standard scheduling factor term. Option D includes “container command,” which does not influence scheduling; the scheduler does not care what command the container runs, only placement constraints and resources.
Under the hood, kube-scheduler uses a two-phase process (filtering then scoring) to select a node, but the inputs it filters/scores include exactly the kinds of constraints in A. Therefore, the verified best answer is A.
=========
Which component in Kubernetes is responsible for watching newly created Pods with no assigned node and selecting a node for them to run on?
etcd
kube-controller-manager
kube-proxy
kube-scheduler
The correct answer is D: kube-scheduler. The kube-scheduler is the control plane component responsible for assigning Pods to nodes. It watches for newly created Pods that do not have a spec.nodeName set (i.e., unscheduled Pods). For each such Pod, it evaluates the available nodes against scheduling constraints and chooses the best node, then performs a “bind” operation by setting the Pod’s spec.nodeName.
Scheduling decisions consider many factors: resource requests vs node allocatable capacity, taints/tolerations, node selectors and affinity/anti-affinity, topology spread constraints, and other policy inputs. The scheduler typically runs a two-phase process: filtering (find feasible nodes) and scoring (rank feasible nodes) before selecting one.
Option A (etcd) is the datastore that persists cluster state; it does not make scheduling decisions. Option B (kube-controller-manager) runs controllers (Deployment, Node, Job controllers, etc.) but not scheduling. Option C (kube-proxy) is a node component for Service networking; it doesn’t place Pods.
Understanding this separation is key for troubleshooting. If Pods are stuck Pending with “no nodes available,” the scheduler’s feasibility checks are failing (insufficient CPU/memory, taints not tolerated, affinity mismatch). If Pods schedule but land unexpectedly, it’s often due to scoring preferences or missing constraints. In all cases, the component that performs the node selection is the kube-scheduler.
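A few commands that help verify this behavior (the Pod name is a placeholder):

    kubectl get pods --field-selector=status.phase=Pending
    kubectl describe pod my-pod                             # Events explain failed feasibility checks
    kubectl get pod my-pod -o jsonpath='{.spec.nodeName}'   # empty until the scheduler binds the Pod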
Therefore, the verified correct answer is D.
=========
Which of these commands is used to retrieve the documentation and field definitions for a Kubernetes resource?
kubectl explain
kubectl api-resources
kubectl get --help
kubectl show
kubectl explain is the command that shows documentation and field definitions for Kubernetes resource schemas, so A is correct. Kubernetes resources have a structured schema: top-level fields like apiVersion, kind, and metadata, and resource-specific structures like spec and status. kubectl explain lets you explore these structures directly from your cluster’s API discovery information, including field types, descriptions, and nested fields.
For example, kubectl explain deployment describes the Deployment resource, and kubectl explain deployment.spec dives into the spec structure. You can continue deeper, such as kubectl explain deployment.spec.template.spec.containers to discover container fields. This is especially useful when writing or troubleshooting manifests, because it reduces guesswork and prevents invalid YAML fields that would be rejected by the API server. It also helps when APIs evolve: you can confirm which fields exist in your cluster’s current version and what they mean.
The other commands do different things. kubectl api-resources lists resource types and their shortnames, whether they are namespaced, and supported verbs—useful discovery, but not detailed field definitions. kubectl get --help shows CLI usage help for kubectl get, not the Kubernetes object schema. kubectl show is not a standard kubectl subcommand.
From a Kubernetes “declarative configuration” perspective, correct manifests are critical: controllers reconcile desired state from spec, and subtle field mistakes can change runtime behavior. kubectl explain is a built-in way to learn the schema and write manifests that align with the Kubernetes API’s expectations. That’s why it’s commonly recommended in Kubernetes documentation and troubleshooting workflows.
=========
What components are common in a service mesh?
Tracing and log storage
Circuit breaking and Pod scheduling
Data plane and runtime plane
Service proxy and control plane
A service mesh is an architectural pattern that manages service-to-service communication in a microservices environment by inserting a dedicated networking layer. The two most common building blocks you’ll see across service mesh implementations are (1) a data plane of proxies and (2) a control plane that configures and manages those proxies—this aligns best with “service proxy and control plane,” option D.
In practice, the data plane is usually implemented via sidecar proxies (or sometimes node/ambient proxies) that sit “next to” workloads and handle traffic functions such as mTLS encryption, retries, timeouts, load balancing policies, traffic splitting, and telemetry generation. These proxies can capture inbound and outbound traffic without requiring changes to application code, which is one of the defining benefits of a mesh.
The control plane provides the management layer: it distributes policy and configuration to the proxies (routing rules, security policies, identities/certificates), discovers services/endpoints, and often coordinates certificate rotation and workload identity. In Kubernetes environments, meshes typically integrate with the Kubernetes API for service discovery and configuration.
Option C is close in spirit but uses non-standard wording (“runtime plane” is not a typical service mesh term; “control plane” is). Options A and B describe capabilities that may exist in a mesh ecosystem (telemetry, circuit breaking), but they are not the universal “core components” across meshes. Tracing/log storage, for example, is usually handled by external observability backends (e.g., Jaeger, Tempo, Loki) rather than being intrinsic “mesh components.”
So, the most correct and broadly accepted answer is D: service proxy and control plane.
=========
Which of the following is a recommended security habit in Kubernetes?
Run the containers as the user with group ID 0 (root) and any user ID.
Disallow privilege escalation from within a container as the default option.
Run the containers as the user with user ID 0 (root) and any group ID.
Allow privilege escalation from within a container as the default option.
The correct answer is B. A widely recommended Kubernetes security best practice is to disallow privilege escalation inside containers by default. In Kubernetes Pod/Container security context, this is represented by allowPrivilegeEscalation: false. This setting prevents a process from gaining more privileges than its parent process—commonly via setuid/setgid binaries or other privilege-escalation mechanisms. Disallowing privilege escalation reduces the blast radius of a compromised container and aligns with least-privilege principles.
Options A and C are explicitly unsafe because they encourage running as root (UID 0 and/or GID 0). Running containers as root increases risk: if an attacker breaks out of the application process or exploits kernel/runtime vulnerabilities, having root inside the container can make privilege escalation and lateral movement easier. Modern Kubernetes security guidance strongly favors running as non-root (runAsNonRoot: true, explicit runAsUser), dropping Linux capabilities, using read-only root filesystems, and applying restrictive seccomp/AppArmor/SELinux profiles where possible.
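Taken together, these recommendations translate into a security context along these lines (Pod name and image are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: hardened-app           # hypothetical name
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: app
        image: nginx               # placeholder image
        securityContext:
          allowPrivilegeEscalation: false   # the habit named in option B
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]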
Option D is the opposite of best practice. Allowing privilege escalation by default increases the attack surface and violates the idea of secure defaults.
Operationally, this habit is often enforced via admission controls and policies (e.g., Pod Security Admission in “restricted” mode, or policy engines like OPA Gatekeeper/Kyverno). It’s also important for compliance: many security baselines require containers to run as non-root and to prevent privilege escalation.
So, the recommended security habit among the choices is clearly B: Disallow privilege escalation.
=========
A CronJob is scheduled to run by a user every one hour. What happens in the cluster when it’s time for this CronJob to run?
Kubelet watches API Server for CronJob objects. When it’s time for a Job to run, it runs the Pod directly.
Kube-scheduler watches API Server for CronJob objects, and this is why it’s called kube-scheduler.
CronJob controller component creates a Pod and waits until it finishes to run.
CronJob controller component creates a Job. Then the Job controller creates a Pod and waits until it finishes to run.
CronJobs are implemented through Kubernetes controllers that reconcile desired state. When the scheduled time arrives, the CronJob controller (part of the controller-manager set of control plane controllers) evaluates the CronJob object’s schedule and determines whether a run should be started. Importantly, CronJob does not create Pods directly as its primary mechanism. Instead, it creates a Job object for each scheduled execution. That Job object then becomes the responsibility of the Job controller, which creates one or more Pods to complete the Job’s work and monitors them until completion. This separation of concerns is why option D is correct.
This design has practical benefits. Jobs encapsulate “run-to-completion” semantics: retries, backoff limits, completion counts, and tracking whether the work has succeeded. CronJob focuses on the temporal triggering aspect (schedule, concurrency policy, starting deadlines, history limits), while Job focuses on the execution aspect (create Pods, ensure completion, retry on failure).
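A sketch of an hourly CronJob showing that split (name, image, and command are placeholders): schedule and concurrency live on the CronJob, while retry behavior lives on the Job template:

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: hourly-task            # hypothetical name
    spec:
      schedule: "0 * * * *"        # top of every hour
      concurrencyPolicy: Forbid    # temporal concern: skip overlapping runs
      jobTemplate:                 # template for the Job created on each trigger
        spec:
          backoffLimit: 3          # execution concern: Job-level retries
          template:
            spec:
              restartPolicy: OnFailure
              containers:
              - name: task
                image: busybox     # placeholder image
                command: ["sh", "-c", "echo run"]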
Option A is incorrect because kubelet is a node agent; it does not watch CronJob objects and doesn’t decide when a schedule triggers. Kubelet reacts to Pods assigned to its node and ensures containers run there. Option B is incorrect because kube-scheduler schedules Pods to nodes after they exist (or are created by controllers); it does not trigger CronJobs. Option C is incorrect because CronJob does not usually create a Pod and wait directly; it delegates via a Job, which then manages Pods and completion.
So, at runtime: CronJob controller creates a Job; Job controller creates the Pod(s); scheduler assigns those Pods to nodes; kubelet runs them; Job controller observes success/failure and updates status; CronJob controller manages run history and concurrency rules.
=========
Why do administrators need a container orchestration tool?
To manage the lifecycle of an elevated number of containers.
To assess the security risks of the container images used in production.
To learn how to transform monolithic applications into microservices.
Container orchestration tools such as Kubernetes are the future.
The correct answer is A. Container orchestration exists because running containers at scale is hard: you need to schedule workloads onto machines, keep them healthy, scale them up and down, roll out updates safely, and recover from failures automatically. Administrators (and platform teams) use orchestration tools like Kubernetes to manage the lifecycle of many containers across many nodes—handling placement, restart, rescheduling, networking/service discovery, and desired-state reconciliation.
At small scale, you can run containers manually or with basic scripts. But at “elevated” scale (many services, many replicas, many nodes), manual management becomes unreliable and brittle. Orchestration provides primitives and controllers that continuously converge actual state toward desired state: if a container crashes, it is restarted; if a node dies, replacement Pods are scheduled; if traffic increases, replicas can be increased via autoscaling; if configuration changes, rolling updates can be coordinated with readiness checks.
Option B (security risk assessment) is important, but it’s not why orchestration tools exist. Image scanning and supply-chain security are typically handled by CI/CD tooling and registries, not by orchestration as the primary purpose. Option C is a separate architectural modernization effort; orchestration can support microservices, but it isn’t required “to learn transformation.” Option D is an opinion statement rather than a functional need.
So the core administrator need is lifecycle management at scale: ensuring workloads run reliably, predictably, and efficiently across a fleet. That is exactly what option A states.
=========
Which of the following capabilities are you allowed to add to a container using the Restricted policy?
CHOWN
SYS_CHROOT
SETUID
NET_BIND_SERVICE
Under the Kubernetes Pod Security Standards (PSS), the Restricted profile is the most locked-down baseline intended to reduce container privilege and host attack surface. In that profile, adding Linux capabilities is generally prohibited except for very limited cases. Among the listed capabilities, NET_BIND_SERVICE is the one commonly permitted in restricted-like policies, so D is correct.
NET_BIND_SERVICE allows a process to bind to “privileged” ports below 1024 (like 80/443) without running as root. This aligns with restricted security guidance: applications should run as non-root, but still sometimes need to listen on standard ports. Allowing NET_BIND_SERVICE enables that pattern without granting broad privileges.
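In a container spec this looks roughly like the fragment below (name and image are placeholders): every capability is dropped, then only NET_BIND_SERVICE is added back:

    containers:
    - name: web
      image: nginx                 # placeholder image
      ports:
      - containerPort: 80          # privileged port, bound without root
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
          add: ["NET_BIND_SERVICE"]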
The other capabilities listed are more sensitive and typically not allowed in a restricted profile: CHOWN can be used to change file ownership, SETUID relates to privilege changes and can be abused, and SYS_CHROOT is a broader system-level capability associated with filesystem root changes. In hardened Kubernetes environments, these are normally disallowed because they increase the risk of privilege escalation or container breakout paths, especially if combined with other misconfigurations.
A practical note: exact enforcement depends on the cluster’s admission configuration (e.g., the built-in Pod Security Admission controller) and any additional policy engines (OPA/Gatekeeper). But the security intent of “Restricted” is consistent: run as non-root, disallow privilege escalation, restrict capabilities, and lock down host access. NET_BIND_SERVICE is a well-known exception used to support common application networking needs while staying non-root.
So, the verified correct choice for an allowed capability in Restricted among these options is D: NET_BIND_SERVICE.
=========
Which persona is normally responsible for defining, testing, and running an incident management process?
Site Reliability Engineers
Project Managers
Application Developers
Quality Engineers
The role most commonly responsible for defining, testing, and running an incident management process is Site Reliability Engineers (SREs), so A is correct. SRE is an operational engineering discipline focused on ensuring reliability, availability, and performance of services in production. Incident management is a core part of that mission: when outages or severe degradations occur, someone must coordinate response, restore service quickly, and then drive follow-up improvements to prevent recurrence.
In cloud native environments (including Kubernetes), incident response involves both technical and process elements. On the technical side, SREs ensure observability is in place—metrics, logs, traces, dashboards, and actionable alerts—so incidents can be detected and diagnosed quickly. They also validate operational readiness: runbooks, escalation paths, on-call rotations, and post-incident review practices. On the process side, SREs often establish severity classifications, response roles (incident commander, communications lead, subject matter experts), and “game day” exercises or simulated incidents to test preparedness.
Project managers may help coordinate schedules and communication for projects, but they are not typically the owners of operational incident response mechanics. Application developers are crucial participants during incidents, especially for debugging application-level failures, but they are not usually the primary maintainers of the incident management framework. Quality engineers focus on testing and quality assurance, and while they contribute to preventing defects, they are not usually the owners of real-time incident operations.
In Kubernetes specifically, incidents often span multiple layers: workload behavior, cluster resources, networking, storage, and platform dependencies. SREs are positioned to manage the cross-cutting operational view and to continuously improve reliability through error budgets, SLOs/SLIs, and iterative hardening. That’s why the correct persona is Site Reliability Engineers.
=========
How many different Kubernetes service types can you define?
2
3
4
5
Kubernetes defines four primary Service types, which is why C (4) is correct. The commonly recognized Service spec.type values are:
ClusterIP: The default type. Exposes the Service on an internal virtual IP reachable only within the cluster. This supports typical east-west traffic between workloads.
NodePort: Exposes the Service on a static port on each node. Traffic to <NodeIP>:<NodePort> is forwarded to the Service endpoints. This is often used for simple external access in environments without load balancers, or as a building block for other systems.
LoadBalancer: Integrates with a cloud provider (or load balancer implementation) to provision an external load balancer and route traffic to the Service. This is common in managed Kubernetes.
ExternalName: Maps the Service name to an external DNS name via a CNAME record, allowing in-cluster clients to use a consistent Service DNS name to reach an external dependency.
Some people also talk about “Headless Services,” but headless is not a separate type; it’s a behavior achieved by setting clusterIP: None. Headless Services still use the Service API object but change DNS and virtual-IP behavior to return endpoint IPs directly rather than a ClusterIP. That’s why the canonical count of “Service types” is four.
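For contrast, a headless Service sketch (name and selector are placeholders) is still an ordinary Service object, just with no virtual IP:

    apiVersion: v1
    kind: Service
    metadata:
      name: database               # hypothetical name
    spec:
      clusterIP: None              # headless: DNS returns the Pod IPs directly
      selector:
        app: database
      ports:
      - port: 5432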
This question tests understanding of the Service abstraction: Service type controls how a stable service identity is exposed (internal VIP, node port, external LB, or DNS alias), while selectors/endpoints control where traffic goes (the backend Pods). Different environments will favor different types: ClusterIP for internal microservices, LoadBalancer for external exposure in cloud, NodePort for bare-metal or simple access, ExternalName for bridging to outside services.
Therefore, the verified answer is C (4).
=========
What is a Kubernetes Service Endpoint?
It is the API endpoint of our Kubernetes cluster.
It is a name of special Pod in kube-system namespace.
It is an IP address that we can access from the Internet.
It is an object that gets IP addresses of individual Pods assigned to it.
A Kubernetes Service routes traffic to a dynamic set of backends (usually Pods). The set of backend IPs and ports is represented by endpoint-tracking resources. Historically this was the Endpoints object; today Kubernetes commonly uses EndpointSlice for scalability, but the concept remains the same: endpoints represent the concrete network destinations behind a Service. That’s why D is correct: a Service endpoint is an object that contains the IP addresses (and ports) of the individual Pods (or other backends) associated with that Service.
When a Service has a selector, Kubernetes automatically maintains endpoints by watching which Pods match the selector and are Ready, then publishing those Pod IPs into Endpoints/EndpointSlices. Consumers don’t usually use endpoints directly; instead they call the Service DNS name, and kube-proxy (or an alternate dataplane) forwards traffic to one of the endpoints. Still, endpoints are critical because they are what make Service routing accurate and up to date during scaling events, rolling updates, and failures.
Option A confuses this with the Kubernetes API server endpoint (the cluster API URL). Option B is incorrect; there’s no special “Service Endpoint Pod.” Option C describes an external/public IP concept, which may exist for LoadBalancer Services, but “Service endpoint” in Kubernetes vocabulary is about the backend destinations, not the public entrypoint.
Operationally, endpoints are useful for debugging: if a Service isn’t routing traffic, checking Endpoints/EndpointSlices shows whether the Service actually has backends and whether readiness is excluding Pods. This ties directly into Kubernetes service discovery and load balancing: the Service is the stable front door; endpoints are the actual backends.
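Typical debugging commands (my-service is a placeholder name):

    kubectl get endpoints my-service
    kubectl get endpointslices -l kubernetes.io/service-name=my-service
    kubectl describe service my-service   # output includes the current endpoint list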
=========
Which of the following is a good habit for cloud native cost efficiency?
Follow an automated approach to cost optimization, including visibility and forecasting.
Follow manual processes for cost analysis, including visibility and forecasting.
Use only one cloud provider to simplify the cost analysis.
Keep your legacy workloads unchanged, to avoid cloud costs.
The correct answer is A. In cloud-native environments, costs are highly dynamic: autoscaling changes compute footprint, ephemeral environments come and go, and usage-based billing applies to storage, network egress, load balancers, and observability tooling. Because of this variability, automation is the most sustainable way to achieve cost efficiency. Automated visibility (dashboards, chargeback/showback), anomaly detection, and forecasting help teams understand where spend is coming from and how it changes over time. Automated optimization actions can include right-sizing requests/limits, enforcing TTLs on preview environments, scaling down idle clusters, and cleaning unused resources.
Manual processes (B) don’t scale as complexity grows. By the time someone reviews a spreadsheet or dashboard weekly, cost spikes may have already occurred. Automation enables fast feedback loops and guardrails, which is essential for preventing runaway spend caused by misconfiguration (e.g., excessive log ingestion, unbounded autoscaling, oversized node pools).
Option C is not a cost-efficiency “habit.” Single-provider strategies may simplify some billing views, but they can also reduce leverage and may not be feasible for resilience/compliance; it’s a business choice, not a best practice for cloud-native cost management. Option D is counterproductive: keeping legacy workloads unchanged often wastes money because cloud efficiency typically requires adapting workloads—right-sizing, adopting autoscaling, and using managed services appropriately.
In Kubernetes specifically, cost efficiency is tightly linked to resource management: accurate CPU/memory requests, limits where appropriate, cluster autoscaler tuning, and avoiding overprovisioning. Observability also matters because you can’t optimize what you can’t measure. Therefore, the best habit is an automated cost optimization approach with strong visibility and forecasting—A.
=========
Which of the following best describes horizontally scaling an application deployment?
The act of adding/removing node instances to the cluster to meet demand.
The act of adding/removing applications to meet demand.
The act of adding/removing application instances of the same application to meet demand.
The act of adding/removing resources to application instances to meet demand.
Horizontal scaling means changing how many instances of an application are running, not changing how big each instance is. Therefore, the best description is C: adding/removing application instances of the same application to meet demand. In Kubernetes, “instances” typically correspond to Pod replicas managed by a controller like a Deployment. When you scale horizontally, you increase or decrease the replica count, which increases or decreases total throughput and resilience by distributing load across more Pods.
Option A is about cluster/node scaling (adding or removing nodes), which is infrastructure scaling typically handled by a cluster autoscaler in cloud environments. Node scaling can enable more Pods to be scheduled, but it’s not the definition of horizontal application scaling itself. Option D describes vertical scaling —adding/removing CPU or memory resources to a given instance (Pod/container) by changing requests/limits or using VPA. Option B is vague and not the standard definition.
Horizontal scaling is a core cloud-native pattern because it improves availability and elasticity. If one Pod fails, other replicas continue serving traffic. In Kubernetes, scaling can be manual (kubectl scale deployment ... --replicas=N) or automatic using the Horizontal Pod Autoscaler (HPA) . HPA adjusts replicas based on observed metrics like CPU utilization, memory, or custom/external metrics (for example, request rate or queue length). This creates responsive systems that can handle variable traffic.
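To make HPA concrete, here is a minimal sketch (the workload name, replica bounds, and threshold are hypothetical, not from this document) of an autoscaler that keeps average CPU utilization near 70% for a Deployment:

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: my-app                  # hypothetical name
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: my-app                # the Deployment whose replica count is adjusted
    minReplicas: 2
    maxReplicas: 10
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale out when average CPU rises above ~70%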
From an architecture perspective, designing for horizontal scaling often means ensuring your application is stateless (or manages state externally), uses idempotent request handling, and supports multiple concurrent instances. Stateful workloads can also scale horizontally, but usually with additional constraints (StatefulSets, sharding, quorum membership, stable identity).
So the verified definition and correct choice is C .
=========
A Kubernetes Pod is returning a CrashLoopBackOff status. What is the most likely reason for this behavior?
There are insufficient resources allocated for the Pod.
The application inside the container crashed after starting.
The container’s image is missing or cannot be pulled.
The Pod is unable to communicate with the Kubernetes API server.
A CrashLoopBackOff status in Kubernetes indicates that a container within a Pod is repeatedly starting, crashing, and being restarted by Kubernetes. This behavior occurs when the container process exits shortly after starting and Kubernetes applies an increasing back-off delay between restart attempts to prevent excessive restarts.
Option B is the correct answer because CrashLoopBackOff most commonly occurs when the application inside the container crashes after it has started . Typical causes include application runtime errors, misconfigured environment variables, missing configuration files, invalid command or entrypoint definitions, failed dependencies, or unhandled exceptions during application startup. Kubernetes itself is functioning as expected by restarting the container according to the Pod’s restart policy.
Option A is incorrect because insufficient resources usually produce different symptoms. For example, a container that exceeds its memory limit is terminated with the reason OOMKilled; repeated OOM kills can eventually surface as CrashLoopBackOff, but the distinguishing signal is the termination reason, not the status itself. A Pod that cannot be scheduled at all due to insufficient cluster resources simply stays Pending. Resource constraints can indirectly cause crashes, but they are not the defining reason for a CrashLoopBackOff state.
Option C is incorrect because an image that cannot be pulled results in statuses such as ImagePullBackOff or ErrImagePull , not CrashLoopBackOff. In those cases, the container never successfully starts.
Option D is incorrect because Pods do not need to communicate directly with the Kubernetes API server for normal application execution. Issues with API server communication affect control plane components or scheduling, not container restart behavior.
From a troubleshooting perspective, Kubernetes documentation recommends inspecting container logs using kubectl logs and reviewing Pod events with kubectl describe pod to identify the root cause of the crash. Fixing the underlying application error typically resolves the CrashLoopBackOff condition.
In summary, CrashLoopBackOff is a protective mechanism that signals a repeatedly failing container process. The most likely and verified cause is that the application inside the container is crashing after startup , making option B the correct answer.
=========
What is a probe within Kubernetes?
A monitoring mechanism of the Kubernetes API.
A pre-operational scope issued by the kubectl agent.
A diagnostic performed periodically by the kubelet on a container.
A logging mechanism of the Kubernetes API.
In Kubernetes, a probe is a health check mechanism that the kubelet executes against containers, so C is correct. Probes are part of how Kubernetes implements self-healing and safe traffic management. The kubelet runs probes periodically according to the configuration in the Pod spec and uses the results to decide whether a container is healthy, ready to receive traffic, or still starting up.
Kubernetes supports three primary probe types:
Liveness probe : determines whether the container should be restarted. If liveness fails repeatedly, kubelet restarts the container (subject to restartPolicy).
Readiness probe : determines whether the Pod should receive traffic via Services. If readiness fails, the Pod is removed from Service endpoints, preventing traffic from being routed to it until it becomes ready again.
Startup probe : used for slow-starting containers. While the startup probe is running, liveness and readiness checks are held off; once it succeeds, the other probes take over, preventing premature restarts during initialization.
Probe mechanisms can be HTTP GET , TCP socket checks , or exec commands run inside the container. These checks are performed by kubelet on the node where the Pod is running, not by the API server.
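As a hedged sketch of how this looks in a Pod spec (the endpoint paths, port, and timings are illustrative assumptions):

  apiVersion: v1
  kind: Pod
  metadata:
    name: probe-demo              # hypothetical name
  spec:
    containers:
    - name: app
      image: nginx                # example image
      startupProbe:               # gates the other probes until the app has started
        httpGet: { path: /healthz, port: 80 }
        failureThreshold: 30
        periodSeconds: 5
      livenessProbe:              # kubelet restarts the container if this keeps failing
        httpGet: { path: /healthz, port: 80 }
        periodSeconds: 10
      readinessProbe:             # failing removes the Pod from Service endpoints
        httpGet: { path: /ready, port: 80 }
        periodSeconds: 5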
Options A and D incorrectly attribute probes to the Kubernetes API. While probe configuration is stored in the API as part of Pod specs, execution is node-local. Option B is not a Kubernetes concept.
So the correct definition is: a probe is a periodic diagnostic run by kubelet to assess container health/readiness, enabling reliable rollouts, traffic gating, and automatic recovery.
=========
What is a Dockerfile?
A bash script that is used to automatically build a docker image.
A config file that defines which image registry a container should be pushed to.
A text file that contains all the commands a user could call on the command line to assemble an image.
An image layer created by a running container stored on the host.
A Dockerfile is a text file that contains a sequence of instructions used to build a container image, so C is correct. These instructions include choosing a base image (FROM), copying files (COPY/ADD), installing dependencies (RUN), setting environment variables (ENV), defining working directories (WORKDIR), exposing ports (EXPOSE), and specifying the default startup command (CMD/ENTRYPOINT). When you run docker build (or compatible tools like BuildKit), the builder executes these instructions to produce an image composed of immutable layers.
In cloud-native application delivery, Dockerfiles (more generally, OCI image build definitions) are a key step in the supply chain. The resulting image artifact is what Kubernetes runs in Pods. Best practices include using minimal base images, pinning versions, avoiding embedding secrets, and using multi-stage builds to keep runtime images small. These practices improve security and performance, and make delivery pipelines more reliable.
Option A is incorrect because a Dockerfile is not a bash script, even though it can run shell commands through RUN. Option B is incorrect because registry destinations are handled by tooling and tagging/push commands (or CI pipeline configuration), not by the Dockerfile itself. Option D is incorrect because an image layer created by a running container is more closely related to container filesystem changes and commits; a Dockerfile is the build recipe, not a runtime-generated layer.
Although the question uses “Dockerfile,” the concept maps well to OCI-based container image creation generally: you define a reproducible build recipe that produces an immutable image artifact. That artifact is then versioned, scanned, signed, stored in a registry, and deployed to Kubernetes through manifests/Helm/GitOps. Therefore, C is the correct and verified definition.
=========
Which of the following is a responsibility of the governance board of an open source project?
Decide about the marketing strategy of the project.
Review the pull requests in the main branch.
Outline the project's “terms of engagement”.
Define the license to be used in the project.
A governance board in an open source project typically defines how the community operates—its decision-making rules, roles, conflict resolution, and contribution expectations—so C (“Outline the project's terms of engagement”) is correct. In large cloud-native projects (Kubernetes being a prime example), clear governance is essential to coordinate many contributors, companies, and stakeholders. Governance establishes the “rules of the road” that keep collaboration productive and fair.
“Terms of engagement” commonly includes: how maintainers are selected, how proposals are reviewed (e.g., enhancement processes), how meetings and SIGs operate, what constitutes consensus, how voting works when consensus fails, and what code-of-conduct expectations apply. It also defines escalation and dispute resolution paths so technical disagreements don’t become community-breaking conflicts. In other words, governance is about ensuring the project has durable, transparent processes that outlive any individual contributor and support vendor-neutral decision making.
Option B (reviewing pull requests) is usually the responsibility of maintainers and SIG owners, not a governance board. The governance body may define the structure that empowers maintainers, but it generally does not do day-to-day code review. Option A (marketing strategy) is often handled by foundations, steering committees, or separate outreach groups, not governance boards as their primary responsibility. Option D (defining the license) is usually decided early and may be influenced by a foundation or legal process; while governance can shape legal/policy direction, the core governance responsibility is broader community operating rules rather than selecting a license.
In cloud-native ecosystems, strong governance supports sustainability: it encourages contributions, protects neutrality, and provides predictable processes for evolution. Therefore, the best verified answer is C .
=========
Which of the following are tasks performed by a container orchestration tool?
Schedule, scale, and manage the health of containers.
Create images, scale, and manage the health of containers.
Debug applications, and manage the health of containers.
Store images, scale, and manage the health of containers.
A container orchestration tool (like Kubernetes) is responsible for scheduling , scaling , and health management of workloads, making A correct. Orchestration sits above individual containers and focuses on running applications reliably across a fleet of machines. Scheduling means deciding which node should run a workload based on resource requests, constraints, affinities, taints/tolerations, and current cluster state. Scaling means changing the number of running instances (replicas) to meet demand (manually or automatically through autoscalers). Health management includes monitoring whether containers and Pods are alive and ready, replacing failed instances, and maintaining the declared desired state.
Options B and D include “create images” and “store images,” which are not orchestration responsibilities. Image creation is a CI/build responsibility (Docker/BuildKit/build systems), and image storage is a container registry responsibility (Harbor, ECR, GCR, Docker Hub, etc.). Kubernetes consumes images from registries but does not build or store them. Option C includes “debug applications,” which is not a core orchestration function. While Kubernetes provides tools that help debugging (logs, exec, events), debugging is a human/operator activity rather than the orchestrator’s fundamental responsibility.
In Kubernetes specifically, these orchestration tasks are implemented through controllers and control loops: Deployments/ReplicaSets manage replica counts and rollouts, kube-scheduler assigns Pods to nodes, kubelet ensures containers run, and probes plus controller logic replace unhealthy replicas. This is exactly what makes Kubernetes valuable at scale: instead of manually starting/stopping containers on individual hosts, you declare your intent and let the orchestration system continually reconcile reality to match. That combination— placement + elasticity + self-healing —is the core of container orchestration, matching option A precisely.
=========
What does the "nodeSelector" within a PodSpec use to place Pods on the target nodes?
Annotations
IP Addresses
Hostnames
Labels
nodeSelector is a simple scheduling constraint that matches node labels , so the correct answer is D (Labels) . In Kubernetes, nodes have key/value labels (for example, disktype=ssd, topology.kubernetes.io/zone=us-east-1a, kubernetes.io/os=linux). When you set spec.nodeSelector in a Pod template, you provide a map of required label key/value pairs. The kube-scheduler will then only consider nodes that have all those labels with matching values as eligible placement targets for that Pod.
This is different from annotations: annotations are also key/value metadata, but they are not intended for selection logic and are not used by the scheduler for nodeSelector. IP addresses and hostnames are not the mechanism used by nodeSelector either. While Kubernetes nodes do have hostnames and IPs, nodeSelector specifically operates on labels because labels are designed for selection, grouping, and placement constraints.
Operationally, nodeSelector is the most basic form of node placement control. It is commonly used to pin workloads to specialized hardware (GPU nodes), compliance zones, or certain OS/architecture pools. However, it has limitations: it only supports exact match on labels and cannot express more complex rules (like “in this set of zones” or “prefer but don’t require”). For that, Kubernetes offers node affinity (requiredDuringSchedulingIgnoredDuringExecution, preferredDuringSchedulingIgnoredDuringExecution) which supports richer expressions.
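For illustration, here is the same placement intent expressed both ways (the disktype label is a hypothetical example):

  apiVersion: v1
  kind: Pod
  metadata:
    name: ssd-pinned              # hypothetical name
  spec:
    nodeSelector:                 # exact match: node must carry this label
      disktype: ssd
    containers:
    - name: app
      image: nginx
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: ssd-pinned-affinity
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: disktype
              operator: In        # richer operators: In, NotIn, Exists, ...
              values: ["ssd"]
    containers:
    - name: app
      image: nginx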
Still, the underlying mechanism is the same concept: the scheduler evaluates your Pod’s placement requirements against node metadata, and for nodeSelector, that metadata is labels . Therefore, the verified correct answer is D .
=========
What is the purpose of the kube-proxy?
The kube-proxy balances network requests to Pods.
The kube-proxy maintains network rules on nodes.
The kube-proxy ensures the cluster connectivity with the internet.
The kube-proxy maintains the DNS rules of the cluster.
The correct answer is B : kube-proxy maintains network rules on nodes . kube-proxy is a node component that implements part of the Kubernetes Service abstraction. It watches the Kubernetes API for Service and EndpointSlice/Endpoints changes, and then programs the node’s dataplane rules (commonly iptables or IPVS , depending on configuration) so that traffic sent to a Service virtual IP and port is correctly forwarded to one of the backing Pod endpoints.
This is how Kubernetes provides stable Service addresses even though Pod IPs are ephemeral. When Pods scale up/down or are replaced during a rollout, endpoints change; kube-proxy updates the node rules accordingly. From the perspective of a client, the Service name and ClusterIP remain stable, while the actual backend endpoints are load-distributed.
Option A is a tempting phrasing but incomplete: load distribution is an outcome of the forwarding rules, but kube-proxy’s primary role is maintaining the network forwarding rules that make Services work. Option C is incorrect because internet connectivity depends on cluster networking, routing, NAT, and often CNI configuration—not kube-proxy’s job description. Option D is incorrect because DNS is typically handled by CoreDNS; kube-proxy does not “maintain DNS rules.”
Operationally, kube-proxy failures often manifest as Service connectivity issues: Pod-to-Service traffic fails, ClusterIP routing breaks, NodePort behavior becomes inconsistent, or endpoints aren’t updated correctly. Modern Kubernetes environments sometimes replace kube-proxy with eBPF-based dataplanes, but in the classic architecture the correct statement remains: kube-proxy runs on each node and maintains the rules needed for Service traffic steering.
=========
A site reliability engineer needs to temporarily prevent new Pods from being scheduled on node-2 while keeping the existing workloads running without disruption. Which kubectl command should be used?
kubectl cordon node-2
kubectl delete node-2
kubectl drain node-2
kubectl pause deployment
In Kubernetes, node maintenance and availability are common operational tasks, and the platform provides specific commands to control how the scheduler places Pods on nodes. When the requirement is to temporarily prevent new Pods from being scheduled on a node without affecting the currently running Pods , the correct approach is to cordon the node.
The command kubectl cordon node-2 marks the node as unschedulable . This means the Kubernetes scheduler will no longer place any new Pods onto that node. Importantly, cordoning a node does not evict, restart, or interrupt existing Pods. All workloads already running on the node continue operating normally. This makes cordoning ideal for scenarios such as diagnostics, monitoring, or preparing for future maintenance while ensuring zero workload disruption.
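Under the hood, cordoning is a declarative change to the Node object; a minimal sketch of the equivalent state:

  # 'kubectl cordon node-2' effectively sets this field on the Node:
  apiVersion: v1
  kind: Node
  metadata:
    name: node-2
  spec:
    unschedulable: true   # scheduler skips this node; running Pods are untouched
  # 'kubectl uncordon node-2' flips it back to false.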
Option B, kubectl delete node-2 , is incorrect because deleting a node removes it entirely from the cluster. This action would cause Pods running on that node to be terminated and rescheduled elsewhere, resulting in disruption—exactly what the question specifies must be avoided.
Option C, kubectl drain node-2 , is also incorrect in this context. Draining a node safely evicts Pods (except for certain exclusions like DaemonSets) and reschedules them onto other nodes. While drain is useful for maintenance and upgrades, it does not keep existing workloads running on the node, making it unsuitable here.
Option D, kubectl pause deployment , applies only to Deployments and merely pauses rollout updates. It does not affect node-level scheduling behavior and has no impact on where Pods are placed by the scheduler.
Therefore, the correct and verified answer is Option A: kubectl cordon node-2 , which aligns with Kubernetes operational best practices and official documentation for non-disruptive node management.
=========
Which of the following is a feature Kubernetes provides by default as a container orchestration tool?
A portable operating system.
File system redundancy.
A container image registry.
Automated rollouts and rollbacks.
Kubernetes provides automated rollouts and rollbacks for workloads by default (via controllers like Deployments), so D is correct. In Kubernetes, application delivery is controller-driven: you declare the desired state (new image, new config), and controllers reconcile the cluster toward that state. Deployments implement rolling updates, gradually replacing old Pods with new ones while respecting availability constraints. Kubernetes tracks rollout history and supports rollback to previous ReplicaSets when an update fails or is deemed unhealthy.
This is a core orchestration capability: it reduces manual intervention and makes change safer. Rollouts use readiness checks and update strategies to avoid taking the service down, and kubectl rollout status/history/undo supports day-to-day release operations.
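A minimal sketch of a Deployment tuned for rolling updates (all names and values are illustrative assumptions):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web                 # hypothetical name
  spec:
    replicas: 3
    selector:
      matchLabels: { app: web }
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1     # at most one Pod below desired count during a rollout
        maxSurge: 1           # at most one extra Pod above desired count
    template:
      metadata:
        labels: { app: web }
      spec:
        containers:
        - name: web
          image: nginx:1.25   # changing this tag triggers a new rollout
  # 'kubectl rollout undo deployment/web' reverts to the previous ReplicaSet.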
The other options are not “default Kubernetes orchestration features”:
Kubernetes is not a portable operating system (A). It’s a platform for orchestrating containers on top of an OS.
Kubernetes does not provide filesystem redundancy by itself (B). Storage redundancy is handled by underlying storage systems and CSI drivers (e.g., replicated block storage, distributed filesystems).
Kubernetes does not include a built-in container image registry (C). You use external registries (Docker Hub, ECR, GCR, Harbor, etc.). Kubernetes pulls images but does not host them as a core feature.
So the correct “provided by default” orchestration feature in this list is the ability to safely manage application updates via automated rollouts and rollbacks .
=========
A Pod named my-app must be created to run a simple nginx container. Which kubectl command should be used?
kubectl create nginx --name=my-app
kubectl run my-app --image=nginx
kubectl create my-app --image=nginx
kubectl run nginx --name=my-app
In Kubernetes, the simplest and most direct way to create a Pod that runs a single container is to use the kubectl run command with the appropriate image specification. The command kubectl run my-app --image=nginx explicitly instructs Kubernetes to create a Pod named my-app using the nginx container image, which makes option B the correct answer.
The kubectl run command is designed to quickly create and run a Pod from the command line. (Older kubectl versions used generators that could also produce higher-level workload resources depending on flags such as --restart, but modern kubectl creates a standalone Pod.) This is ideal for simple use cases like testing, demonstrations, or learning scenarios where only a single container is required.
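For reference, the Pod that kubectl run my-app --image=nginx produces is roughly equivalent to applying this manifest (defaulted fields omitted; the run label shown is what current kubectl adds):

  apiVersion: v1
  kind: Pod
  metadata:
    name: my-app
    labels:
      run: my-app           # label added by kubectl run
  spec:
    containers:
    - name: my-app
      image: nginx
    restartPolicy: Always   # the Pod-level default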
Option A is incorrect because kubectl create nginx --name=my-app is not valid syntax; the create subcommand requires a resource type (such as pod, deployment, or service) or a manifest file. Option C is also incorrect because kubectl create my-app --image=nginx omits the resource type and therefore is not a valid kubectl create command. Option D is incorrect because kubectl run nginx --name=my-app uses a --name flag that kubectl run does not accept; the Pod name is supplied as the positional argument, so the intended name my-app could not be set this way.
Using kubectl run with explicit naming and image flags is consistent with Kubernetes command-line conventions and is widely documented as the correct approach for creating simple Pods. The resulting Pod can be verified using commands such as kubectl get pods and kubectl describe pod my-app .
In summary, Option B is the correct and verified answer because it uses valid kubectl syntax to create a Pod named my-app running the nginx container image in a straightforward and predictable way.
=========
What is an important consideration when choosing a base image for a container in a Kubernetes deployment?
It should be minimal and purpose-built for the application to reduce attack surface and improve performance.
It should always be the latest version to ensure access to the newest features.
It should be the largest available image to ensure all dependencies are included.
It can be any existing image from the public repository without consideration of its contents.
Choosing an appropriate base image is a critical decision in building containerized applications for Kubernetes, as it directly impacts security, performance, reliability, and operational efficiency. A key best practice is to select a minimal, purpose-built base image , making option A the correct answer.
Minimal base images—such as distroless images or slim variants of common distributions—contain only the essential components required to run the application. By excluding unnecessary packages, shells, and utilities, these images significantly reduce the attack surface . Fewer components mean fewer potential vulnerabilities, which is especially important in Kubernetes environments where containers are often deployed at scale and exposed to dynamic network traffic.
Smaller images also improve performance and efficiency . They reduce image size, leading to faster image pulls, quicker Pod startup times, and lower network and storage overhead. This is particularly beneficial in large clusters or during frequent deployments, scaling events, or rolling updates. Kubernetes’ design emphasizes fast, repeatable deployments, and lightweight images align well with these goals.
Option B is incorrect because always using the latest image version can introduce instability or unexpected breaking changes. Kubernetes best practices recommend using explicitly versioned and tested images to ensure predictable behavior and reproducibility. Option C is incorrect because large images increase the attack surface, slow down deployments, and often include unnecessary dependencies that are never used by the application. Option D is incorrect because blindly using public images without inspecting their contents or provenance introduces serious security and compliance risks.
Kubernetes documentation and cloud-native security guidance consistently emphasize the principle of least privilege and minimalism in container images. A well-chosen base image supports secure defaults, faster operations, and easier maintenance, all of which are essential for running reliable workloads in production Kubernetes environments.
Therefore, the correct and verified answer is Option A .
=========
Which statement about Secrets is correct?
A Secret is part of a Pod specification.
Secret data is encrypted with the cluster private key by default.
Secret data is base64 encoded and stored unencrypted by default.
A Secret can only be used for confidential data.
The correct answer is C . By default, Kubernetes Secrets store their data as base64-encoded values in the API (backed by etcd). Base64 is an encoding mechanism, not encryption, so this does not provide confidentiality. Unless you explicitly configure encryption at rest for etcd (via the API server encryption provider configuration) and secure access controls, Secret contents should be treated as potentially readable by anyone with sufficient API access or access to etcd backups.
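A short sketch makes the point (the credential is obviously fake): base64 decodes back to the original with one command, so it is encoding, not encryption.

  apiVersion: v1
  kind: Secret
  metadata:
    name: db-credentials          # hypothetical name
  type: Opaque
  data:
    password: cGFzc3dvcmQxMjM=    # base64 of "password123"; trivially reversible
  # echo 'cGFzc3dvcmQxMjM=' | base64 -d   ->  password123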
Option A is misleading: a Secret is its own Kubernetes resource (kind: Secret). While Pods can reference Secrets (as environment variables or mounted volumes), the Secret itself is not “part of the Pod spec” as an embedded object. Option B is incorrect because Kubernetes does not automatically encrypt Secret data with a cluster private key by default; encryption at rest is optional and must be enabled. Option D is incorrect because Secrets can store a range of sensitive or semi-sensitive data (tokens, certs, passwords), but Kubernetes does not enforce “only confidential data” semantics; it’s a storage mechanism with size and format constraints.
Operationally, best practices include: enabling encryption at rest, limiting access via RBAC, avoiding broad “list/get secrets” permissions, using dedicated service accounts, auditing access, and considering external secrets managers (Vault, cloud KMS-backed solutions) for higher assurance. Also, don’t confuse “Secret” with “secure by default.” The default protection is mainly about avoiding accidental plaintext exposure in manifests, not about cryptographic security.
So the only correct statement in the options is C .
=========
What is the name of the lightweight Kubernetes distribution built for IoT and edge computing?
OpenShift
k3s
RKE
k1s
Edge and IoT environments often have constraints that differ from traditional datacenters: limited CPU/RAM, intermittent connectivity, smaller footprints, and a desire for simpler operations. k3s is a well-known lightweight Kubernetes distribution designed specifically to run in these environments, making B the correct answer.
What makes k3s “lightweight” is that it packages Kubernetes components in a simplified way and reduces operational overhead. It typically uses a single binary distribution and can run with an embedded datastore option for smaller installations (while also supporting external datastores for HA use cases). It streamlines dependencies and is aimed at faster installation and reduced resource consumption, which is ideal for edge nodes, IoT gateways, small servers, labs, and development environments.
By contrast, OpenShift is a Kubernetes distribution focused on enterprise platform capabilities, with additional security defaults, integrated developer tooling, and a larger operational footprint—excellent for many enterprises but not “built for IoT and edge” as the defining characteristic. RKE (Rancher Kubernetes Engine) is a Kubernetes installer/engine used to deploy Kubernetes, but it’s not specifically the lightweight edge-focused distribution in the way k3s is. “k1s” is not a standard, widely recognized Kubernetes distribution name in this context.
From a cloud native architecture perspective, edge Kubernetes distributions extend the same declarative and API-driven model to places where you want consistent operations across cloud, datacenter, and edge. You can apply GitOps patterns, standard manifests, and Kubernetes-native controllers across heterogeneous footprints. k3s provides that familiar Kubernetes experience while optimizing for constrained environments, which is why it has become a common choice for edge/IoT Kubernetes deployments.
=========
What best describes cloud native service discovery?
It's a mechanism for applications and microservices to locate each other on a network.
It's a procedure for discovering the MAC address associated with a given IP address.
It's used for automatically assigning IP addresses to devices connected to the network.
It's a protocol that turns human-readable domain names into IP addresses on the Internet.
Cloud native service discovery is fundamentally about how services and microservices find and connect to each other reliably in a dynamic environment, so A is correct. In cloud native systems (especially Kubernetes), instances are ephemeral: Pods can be created, destroyed, rescheduled, and scaled at any time. Hardcoding IPs breaks quickly. Service discovery provides stable names and lookup mechanisms so that one component can locate another even as underlying endpoints change.
In Kubernetes, service discovery is commonly achieved through Services (stable virtual IP + DNS name) and cluster DNS (CoreDNS). A Service selects a group of Pods via labels, and Kubernetes maintains the set of endpoints behind that Service. Clients connect to the Service name (DNS) and Kubernetes routes traffic to the current healthy Pods. For some workloads, headless Services provide DNS records that map directly to Pod IPs for per-instance discovery.
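A hedged sketch (Service name and namespace are hypothetical): the Service below stays reachable at a stable DNS name regardless of Pod churn.

  apiVersion: v1
  kind: Service
  metadata:
    name: backend
    namespace: shop          # hypothetical namespace
  spec:
    selector:
      app: backend           # Pods carrying this label become the endpoints
    ports:
    - port: 80               # Service port
      targetPort: 8080       # container port on the backing Pods
  # Clients resolve backend.shop.svc.cluster.local to the ClusterIP.
  # Adding 'clusterIP: None' would make it headless: DNS then returns Pod IPs directly.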
The other options describe different networking concepts: B is ARP (MAC discovery), C is DHCP (IP assignment), and D is DNS in a general internet sense. DNS is often used as a mechanism for service discovery, but cloud native service discovery is broader: it’s the overall mechanism enabling dynamic location of services, often implemented via DNS and/or environment variables and sometimes enhanced by service meshes.
So the best description remains A : a mechanism that allows applications and microservices to locate each other on a network in a dynamic environment.
=========
What default level of protection is applied to the data in Secrets in the Kubernetes API?
The values use AES symmetric encryption
The values are stored in plain text
The values are encoded with SHA256 hashes
The values are base64 encoded
Kubernetes Secrets are designed to store sensitive data such as tokens, passwords, or certificates and make them available to Pods in controlled ways (as environment variables or mounted files). However, the default protection applied to Secret values in the Kubernetes API is base64 encoding , not encryption. That is why D is correct. Base64 is an encoding scheme that converts binary data into ASCII text; it is reversible and does not provide confidentiality.
By default, Secret objects are stored in the cluster’s backing datastore (commonly etcd) as base64-encoded strings inside the Secret manifest. Unless the cluster is configured for encryption at rest, those values are effectively stored unencrypted in etcd and may be visible to anyone who can read etcd directly or who has API permissions to read Secrets. This distinction is critical for security: base64 can prevent accidental issues with special characters in YAML/JSON, but it does not protect against attackers.
Option A is only correct if encryption at rest is explicitly configured on the API server using an EncryptionConfiguration (for example, AES-CBC or AES-GCM providers). Many managed Kubernetes offerings enable encryption at rest for etcd as an option or by default, but that is a deployment choice, not the universal Kubernetes default. Option C is incorrect because hashing is used for verification, not for secret retrieval; you typically need to recover the original value, so hashing isn’t suitable for Secrets. Option B (“plain text”) is misleading: the stored representation is base64-encoded, but because base64 is reversible, the security outcome is close to plain text unless encryption at rest and strict RBAC are in place.
The correct operational stance is: treat Kubernetes Secrets as sensitive; lock down access with RBAC, enable encryption at rest, avoid broad Secret read permissions, and consider external secret managers when appropriate. But strictly for the question’s wording—default level of protection— base64 encoding is the right answer.
=========
Which of the following is the correct command to run an nginx deployment with 2 replicas?
kubectl run deploy nginx --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --replicas=2
kubectl create nginx deployment --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --count=2
The correct answer is B : kubectl create deploy nginx --image=nginx --replicas=2. This uses kubectl create deployment (shorthand create deploy) to generate a Deployment resource named nginx with the specified container image. The --replicas=2 flag sets the desired replica count, so Kubernetes will create two Pod replicas (via a ReplicaSet) and keep that number stable.
Option A is incorrect because kubectl run creates a Pod in modern kubectl (the generators that once let it produce other resource types were removed), and run deploy is not valid syntax in any case. Option C is invalid syntax: the resource type must follow create directly, so kubectl create nginx deployment is rejected. Option D uses a non-existent --count flag for Deployment replicas.
From a Kubernetes fundamentals perspective, this question tests two ideas: (1) Deployments are the standard controller for running stateless workloads with a desired number of replicas, and (2) kubectl create deployment is a common imperative shortcut for generating that resource. After running the command, you can confirm with kubectl get deploy nginx, kubectl get rs, and kubectl get pods -l app=nginx (label may vary depending on kubectl version). You’ll see a ReplicaSet created and two Pods brought up.
In production, teams typically use declarative manifests (kubectl apply -f) or GitOps, but knowing the imperative command is useful for quick labs and validation. The key is that replicas are managed by the controller, not by manually starting containers—Kubernetes reconciles the state continuously.
Therefore, B is the verified correct command.
=========
What is the Kubernetes abstraction that allows groups of Pods to be exposed inside a Kubernetes cluster?
Deployment
Daemon
Unit
Service
In Kubernetes, Pods are ephemeral by design. They can be created, destroyed, rescheduled, or replaced at any time, and each Pod receives its own IP address. Because of this dynamic nature, directly relying on Pod IPs for communication is unreliable. To solve this problem, Kubernetes provides the Service abstraction, which allows a stable way to expose and access a group of Pods inside (and sometimes outside) the cluster.
A Service defines a logical set of Pods using label selectors and provides a consistent virtual IP address and DNS name for accessing them. Even if individual Pods fail or are replaced, the Service remains stable, and traffic is automatically routed to healthy Pods that match the selector. This makes Services a fundamental building block for internal communication between applications within a Kubernetes cluster.
Deployments (Option A) are responsible for managing the lifecycle of Pods, including scaling, rolling updates, and self-healing. However, Deployments do not provide networking or exposure capabilities. They control how Pods run, not how they are accessed.
Option B, “Daemon,” is not a valid Kubernetes resource. The correct resource is a DaemonSet , which ensures that a copy of a Pod runs on each (or selected) node in the cluster. DaemonSets are used for node-level workloads like logging or monitoring agents, not for exposing Pods.
Option C, “Unit,” is not a Kubernetes concept at all and does not exist in Kubernetes architecture.
Services can be configured in different ways depending on access requirements, such as ClusterIP for internal access, NodePort or LoadBalancer for external access, and Headless Services for direct Pod discovery. Regardless of type, the core purpose of a Service is to expose a group of Pods in a stable and reliable way.
Therefore, the correct and verified answer is Option D: Service , which is the Kubernetes abstraction specifically designed to expose groups of Pods within a cluster.
=========
What is the main purpose of the Ingress in Kubernetes?
Access HTTP and HTTPS services running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their path.
Access HTTP and HTTPS services running in the cluster based on their path.
D is correct. Ingress is a Kubernetes API object that defines rules for external access to HTTP/HTTPS services in a cluster. The defining capability is Layer 7 routing—commonly host-based and path-based routing—so you can route requests like example.com/app1 to one Service and example.com/app2 to another. While the question mentions “based on their path,” that’s a classic and correct Ingress use case (and host routing is also common).
Ingress itself is only the specification of routing rules. An Ingress controller (e.g., NGINX Ingress Controller, HAProxy, Traefik, cloud-provider controllers) is what actually implements those rules by configuring a reverse proxy/load balancer. Ingress typically terminates TLS (HTTPS) and forwards traffic to internal Services, giving a more expressive alternative to exposing every service via NodePort/LoadBalancer.
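An illustrative path-routing Ingress (the host, paths, and Service names are hypothetical):

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: web-routes          # hypothetical name
  spec:
    rules:
    - host: example.com
      http:
        paths:
        - path: /app1
          pathType: Prefix
          backend:
            service:
              name: app1-svc  # example.com/app1 -> this Service
              port: { number: 80 }
        - path: /app2
          pathType: Prefix
          backend:
            service:
              name: app2-svc  # example.com/app2 -> this Service
              port: { number: 80 }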
Why the other options are wrong:
A suggests routing by IP address; Ingress is fundamentally about HTTP(S) routing rules (host/path), not direct Service IP access.
B and C describe non-HTTP protocols; Ingress is specifically for HTTP/HTTPS. For TCP/UDP or other protocols, you generally use Services of type LoadBalancer/NodePort, Gateway API implementations, or controller-specific TCP/UDP configuration.
Ingress is a foundational building block for cloud-native application delivery because it centralizes edge routing, enables TLS management, and supports gradual adoption patterns (multiple services under one domain). Therefore, the main purpose described here matches D .
=========
Which tool is used to streamline installing and managing Kubernetes applications?
apt
helm
service
brew
Helm is the Kubernetes package manager used to streamline installing and managing applications, so B is correct. Helm packages Kubernetes resources into charts , which contain templates, default values, and metadata. When you install a chart, Helm renders templates into concrete manifests and applies them to the cluster. Helm also tracks a “release,” enabling upgrades, rollbacks, and consistent lifecycle operations across environments.
This is why Helm is widely used for complex applications that require multiple Kubernetes objects (Deployments/StatefulSets, Services, Ingresses, ConfigMaps, RBAC, CRDs). Rather than manually maintaining many YAML files per environment, teams can parameterize configuration with values and reuse the same chart across dev/stage/prod with different overrides.
Option A (apt) and option D (brew) are OS package managers (Debian/Ubuntu and macOS/Linuxbrew respectively), not Kubernetes application managers. Option C (service) is a Linux service manager command pattern and not relevant here.
In cloud-native delivery pipelines, Helm often integrates with GitOps and CI/CD: the pipeline builds an image, updates chart values (image tag/digest), and deploys via Helm or via GitOps controllers that render/apply Helm charts. Helm also supports chart repositories and versioning, making it easier to standardize deployments and manage dependencies.
So, the verified tool for streamlined Kubernetes app install/management is Helm (B) .
=========
What is the default value for authorization-mode in Kubernetes API server?
--authorization-mode=RBAC
--authorization-mode=AlwaysAllow
--authorization-mode=AlwaysDeny
--authorization-mode=ABAC
The Kubernetes API server supports multiple authorization modes that determine whether an authenticated request is allowed to perform an action (verb) on a resource. Historically, the API server’s default authorization mode was AlwaysAllow , meaning that once a request was authenticated, it would be authorized without further checks. That is why the correct answer here is B .
However, it’s crucial to distinguish “default flag value” from “recommended configuration.” In production clusters, running with AlwaysAllow is insecure because it effectively removes authorization controls—any authenticated user (or component credential) could do anything the API permits. Modern Kubernetes best practices strongly recommend enabling RBAC (Role-Based Access Control), often alongside Node and Webhook authorization, so that permissions are granted explicitly using Roles/ClusterRoles and RoleBindings/ClusterRoleBindings. Many managed Kubernetes distributions and kubeadm-based setups commonly enable RBAC by default as part of cluster bootstrap profiles, even if the API server’s historical default flag value is AlwaysAllow.
So, the exam-style interpretation of this question is about the API server flag default, not what most real clusters should run. With RBAC enabled, authorization becomes granular: you can control who can read Secrets, who can create Deployments, who can exec into Pods, and so on, scoped to namespaces or cluster-wide. ABAC (Attribute-Based Access Control) exists but is generally discouraged compared to RBAC because it relies on policy files and is less ergonomic and less commonly used. AlwaysDeny is useful for hard lockdown testing but not for normal clusters.
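To illustrate that granularity, a minimal RBAC sketch (the namespace, role, and user names are hypothetical):

  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: pod-reader
    namespace: dev                # hypothetical namespace
  rules:
  - apiGroups: [""]               # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: read-pods
    namespace: dev
  subjects:
  - kind: User
    name: jane                    # hypothetical user, as reported by the authenticator
    apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: Role
    name: pod-reader
    apiGroup: rbac.authorization.k8s.io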
In short: AlwaysAllow is the API server’s default mode (answer B), but RBAC is the secure, recommended choice you should expect to see enabled in almost any serious Kubernetes environment.
=========
The Container Runtime Interface (CRI) defines the protocol for the communication between:
The kubelet and the container runtime.
The container runtime and etcd.
The kube-apiserver and the kubelet.
The container runtime and the image registry.
The CRI (Container Runtime Interface) defines how the kubelet talks to the container runtime , so A is correct. The kubelet is the node agent responsible for ensuring containers are running in Pods on that node. It needs a standardized way to request operations such as: create a Pod sandbox, pull an image, start/stop containers, execute commands, attach streams, and retrieve logs. CRI provides that contract so kubelet does not need runtime-specific integrations.
This interface is a key part of Kubernetes’ modular design. Different container runtimes implement the CRI, allowing Kubernetes to run with containerd , CRI-O , and other CRI-compliant runtimes. This separation of concerns lets Kubernetes focus on orchestration, while runtimes focus on executing containers according to the OCI runtime spec, managing images, and handling low-level container lifecycle.
Why the other options are incorrect:
etcd is the control plane datastore; container runtimes do not communicate with etcd via CRI.
kube-apiserver and kubelet communicate using Kubernetes APIs, but CRI is not their protocol; CRI is specifically kubelet ↔ runtime.
container runtime and image registry communicate using registry protocols (image pull/push APIs), but that is not CRI. CRI may trigger image pulls via runtime requests, yet the actual registry communication is separate.
Operationally, this distinction matters when debugging node issues. If Pods are stuck in “ContainerCreating” due to image pull failures or runtime errors, you often investigate kubelet logs and the runtime (containerd/CRI-O) logs. Kubernetes administrators also care about CRI streaming (exec/attach/logs streaming), runtime configuration, and compatibility across Kubernetes versions.
So, the verified answer is A: the kubelet and the container runtime .
=========
What is a Service?
A static network mapping from a Pod to a port.
A way to expose an application running on a set of Pods.
The network configuration for a group of Pods.
An NGINX load balancer that gets deployed for an application.
The correct answer is B : a Kubernetes Service is a stable way to expose an application running on a set of Pods. Pods are ephemeral—IPs can change when Pods are recreated, rescheduled, or scaled. A Service provides a consistent network identity (DNS name and usually a ClusterIP virtual IP) and a policy for routing traffic to the current healthy backends.
Typically, a Service uses a label selector to determine which Pods are part of the backend set. Kubernetes then maintains the corresponding endpoint data (Endpoints/EndpointSlice), and the cluster dataplane (kube-proxy or an eBPF-based implementation) forwards traffic from the Service IP/port to one of the Pod IPs. This enables reliable service discovery and load distribution across replicas, especially during rolling updates where Pods are constantly replaced.
Option A is incorrect because Service routing is not a “static mapping from a Pod to a port.” It’s dynamic and targets a set of Pods. Option C is too vague and misstates the concept; while Services relate to networking, they are not “the network configuration for a group of Pods” (that’s closer to NetworkPolicy/CNI configuration). Option D is incorrect because Kubernetes does not automatically deploy an NGINX load balancer when you create a Service. NGINX might be used as an Ingress controller or external load balancer in some setups, but a Service is a Kubernetes API abstraction, not a specific NGINX component.
Services come in several types (ClusterIP, NodePort, LoadBalancer, ExternalName), but the core definition remains the same: stable access to a dynamic set of Pods . This is foundational for microservices and for decoupling clients from the churn of Pod lifecycles.
So, the verified correct definition is B .
=========
What are the most important resources to guarantee the performance of an etcd cluster?
CPU and disk capacity.
Network throughput and disk I/O.
CPU and RAM memory.
Network throughput and CPU.
etcd is the strongly consistent key-value store backing Kubernetes cluster state. Its performance directly affects the entire control plane because most API operations require reads/writes to etcd. The most critical resources for etcd performance are disk I/O (especially latency) and network throughput/latency between etcd members and API servers—so B is correct.
etcd is write-ahead-log (WAL) based and relies heavily on stable, low-latency storage. Slow disks increase commit latency, which slows down object updates, watches, and controller loops. In busy clusters, poor disk performance can cause request backlogs and timeouts, showing up as slow kubectl operations and delayed controller reconciliation. That’s why production guidance commonly emphasizes fast SSD-backed storage and careful monitoring of fsync latency.
Network performance matters because etcd uses the Raft consensus protocol. Writes must be replicated to a quorum of members, and leader-follower communication is continuous. High network latency or low throughput can slow replication and increase the time to commit writes. Unreliable networking can also cause leader elections or cluster instability, further degrading performance and availability.
CPU and memory are still relevant, but they are usually not the first bottleneck compared to disk and network. CPU affects request processing and encryption overhead if enabled, while memory affects caching and compaction behavior. Disk “capacity” alone (size) is less relevant than disk I/O characteristics (latency, IOPS), because etcd performance is sensitive to fsync and write latency.
In Kubernetes operations, ensuring etcd health includes: using dedicated fast disks, keeping network stable, enabling regular compaction/defragmentation strategies where appropriate, sizing correctly (typically odd-numbered members for quorum), and monitoring key metrics (commit latency, fsync duration, leader changes). Because etcd is the persistence layer of the API, disk I/O and network quality are the primary determinants of control-plane responsiveness—hence B .
=========
What Linux namespace is shared by default by containers running within a Kubernetes Pod?
Host Network
Network
Process ID
Process Name
By default, containers in the same Kubernetes Pod share the network namespace , which means they share the same IP address and port space. Therefore, the correct answer is B (Network) .
This shared network namespace is a key part of the Pod abstraction. Because all containers in a Pod share networking, they can communicate with each other over localhost and coordinate tightly, which is the basis for patterns like sidecars (service mesh proxies, log shippers, config reloaders). It also means containers must coordinate port usage: if two containers try to bind the same port on 0.0.0.0, they’ll conflict because they share the same port namespace.
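A minimal sketch of localhost communication inside one Pod (the image choices are illustrative assumptions):

  apiVersion: v1
  kind: Pod
  metadata:
    name: localhost-demo          # hypothetical name
  spec:
    containers:
    - name: app
      image: nginx                # serves on port 80
    - name: sidecar
      image: curlimages/curl      # example helper image with a shell
      command: ["sh", "-c", "while true; do curl -s http://localhost:80/ > /dev/null; sleep 10; done"]
      # 'localhost' reaches the app container because both share one network namespace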
Option A (“Host Network”) is different: hostNetwork: true is an optional Pod setting that puts the Pod into the node’s network namespace, not the Pod’s shared namespace. It is not the default and is generally used sparingly due to security and port-collision risks. Option C (“Process ID”) is not shared by default in Kubernetes; PID namespace sharing requires explicitly enabling process namespace sharing (e.g., shareProcessNamespace: true). Option D (“Process Name”) is not a Linux namespace concept.
The Pod model also commonly implies shared storage volumes (if defined) and shared IPC namespace in some configurations, but the universally shared-by-default namespace across containers in the same Pod is the network namespace . This default behavior is why Kubernetes documentation explains a Pod as a “logical host” for one or more containers: the containers are co-located and share certain namespaces as if they ran on the same host.
So, the correct, verified answer is B : containers in the same Pod share the Network namespace by default.
=========
What’s the difference between a security profile and a security context?
Security Contexts configure Clusters and Namespaces at runtime. Security profiles are control plane mechanisms to enforce specific settings in the Security Context.
Security Contexts configure Pods and Containers at runtime. Security profiles are control plane mechanisms to enforce specific settings in the Security Context.
Security Profiles configure Pods and Containers at runtime. Security Contexts are control plane mechanisms to enforce specific settings in the Security Profile.
Security Profiles configure Clusters and Namespaces at runtime. Security Contexts are control plane mechanisms to enforce specific settings in the Security Profile.
The correct answer is B . In Kubernetes, a securityContext is part of the Pod and container specification that configures runtime security settings for that workload—things like runAsUser, runAsNonRoot, Linux capabilities, readOnlyRootFilesystem, allowPrivilegeEscalation, SELinux options, seccomp profile selection, and filesystem group (fsGroup). These settings directly affect how the Pod’s containers run on the node.
A security profile , in contrast, is a higher-level policy/standard enforced by the cluster control plane (typically via admission control) to ensure workloads meet required security constraints. In modern Kubernetes, this concept aligns with mechanisms like Pod Security Standards (Privileged, Baseline, Restricted) enforced through Pod Security Admission . The “profile” defines what is allowed or forbidden (for example, disallow privileged containers, disallow hostPath mounts, require non-root, restrict capabilities). The control plane enforces these constraints by validating or rejecting Pod specs that do not comply—ensuring consistent security posture across namespaces and teams.
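A hedged sketch of both layers together (the namespace and workload names are hypothetical; note the image must genuinely run as non-root for the Pod to start under these settings):

  apiVersion: v1
  kind: Namespace
  metadata:
    name: team-a
    labels:
      pod-security.kubernetes.io/enforce: restricted   # the profile/guardrail
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: hardened-app
    namespace: team-a
  spec:
    securityContext:              # workload-level knobs: the security context
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
    containers:
    - name: app
      image: nginxinc/nginx-unprivileged   # example non-root image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]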
Option A and D are incorrect because security contexts do not “configure clusters and namespaces at runtime”; security contexts apply to Pods/containers. Option C reverses the relationship: security profiles don’t configure Pods at runtime; they constrain what security context settings (and other fields) are acceptable.
Practically, you can think of it as:
SecurityContext = workload-level configuration knobs (declared in manifests, applied at runtime).
SecurityProfile/Standards = cluster-level guardrails that determine which knobs/settings are permitted.
This separation supports least privilege: developers declare needed runtime settings, and cluster governance ensures those settings stay within approved boundaries. Therefore, B is the verified answer.
=========
Which of the following is a challenge derived from running cloud native applications?
The operational costs of maintaining the data center of the company.
Cost optimization is complex to maintain across different public cloud environments.
The lack of different container images available in public image repositories.
The lack of services provided by the most common public clouds.
The correct answer is B . Cloud-native applications often run across multiple environments—different cloud providers, regions, accounts/projects, and sometimes hybrid deployments. This introduces real cost-management complexity: pricing models differ (compute types, storage tiers, network egress), discount mechanisms vary (reserved capacity, savings plans), and telemetry/charge attribution can be inconsistent. When you add Kubernetes, the abstraction layer can further obscure cost drivers because costs are incurred at the infrastructure level (nodes, disks, load balancers) while consumption happens at the workload level (namespaces, Pods, services).
Option A is less relevant because cloud-native adoption often reduces dependence on maintaining a private datacenter; many organizations adopt cloud-native specifically to avoid datacenter CapEx/ops overhead. Option C is generally untrue—public registries and vendor registries contain vast numbers of images; the challenge is more about provenance, security, and supply chain than “lack of images.” Option D is incorrect because major clouds offer abundant services; the difficulty is choosing among them and controlling cost/complexity, not a lack of services.
Cost optimization being complex is a recognized challenge because cloud-native architectures include microservices sprawl, autoscaling, ephemeral environments, and pay-per-use dependencies (managed databases, message queues, observability). Small misconfigurations can cause big bills: noisy logs, over-requested resources, unbounded HPA scaling, and egress-heavy architectures. That’s why practices like FinOps, tagging/labeling for allocation, and automated guardrails are emphasized.
So the best answer describing a real, common cloud-native challenge is B .
=========
What are the characteristics for building every cloud-native application?
Resiliency, Operability, Observability, Availability
Resiliency, Containerd, Observability, Agility
Kubernetes, Operability, Observability, Availability
Resiliency, Agility, Operability, Observability
Cloud-native applications are typically designed to thrive in dynamic, distributed environments where infrastructure is elastic and failures are expected. The best set of characteristics listed is Resiliency, Agility, Operability, Observability , making D correct.
Resiliency means the application and its supporting platform can tolerate failures and continue providing service. In Kubernetes terms, resiliency is supported through self-healing controllers, replica management, health probes, and safe rollout mechanisms, but the application must also be designed to handle transient failures, retries, and graceful degradation.
Agility reflects the ability to deliver changes quickly and safely. Cloud-native systems emphasize automation, CI/CD, declarative configuration, and small, frequent releases—often enabled by Kubernetes primitives like Deployments and rollout strategies. Agility is about reducing the friction to ship improvements while maintaining reliability.
Operability is how manageable the system is in production: clear configuration, predictable deployments, safe scaling, and automation-friendly operations. Kubernetes encourages operability through consistent APIs, controllers, and standardized patterns for configuration and lifecycle.
Observability means you can understand what’s happening inside the system using telemetry—metrics, logs, and traces—so you can troubleshoot issues, measure SLOs, and improve performance. Kubernetes provides many integration points for observability, but cloud-native apps must also emit meaningful signals.
Options B and C include items that are not “characteristics” (containerd is a runtime; Kubernetes is a platform). Option A includes “availability,” which is important, but the canonical cloud-native framing in this question emphasizes the four qualities in D as the foundational build characteristics.
=========
Which Kubernetes component is the smallest deployable unit of computing?
StatefulSet
Deployment
Pod
Container
In Kubernetes, the Pod is the smallest deployable and schedulable unit, making C correct. Kubernetes does not schedule individual containers directly; instead, it schedules Pods, each of which encapsulates one or more containers that must run together on the same node. This design supports both single-container Pods (the most common) and multi-container Pods (for sidecars, adapters, and co-located helper processes).
Pods provide shared context: containers in a Pod share the same network namespace (one IP address and port space) and can share storage volumes. This enables tight coupling where needed—for example, a service mesh proxy sidecar and the application container communicate via localhost, or a log-forwarding sidecar reads logs from a shared volume. Kubernetes manages lifecycle at the Pod level: kubelet ensures the containers defined in the PodSpec are running and uses probes to determine readiness and liveness.
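A minimal sketch of that shared context, with hypothetical images: an app container and a log-forwarding sidecar sharing an emptyDir volume:

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-log-sidecar
    spec:
      volumes:
      - name: logs
        emptyDir: {}    # shared scratch space that lives as long as the Pod
      containers:
      - name: app
        image: registry.example.com/app:1.0         # hypothetical image
        volumeMounts:
        - name: logs
          mountPath: /var/log/app                   # the app writes logs here
      - name: log-forwarder
        image: registry.example.com/forwarder:1.0   # hypothetical sidecar image
        volumeMounts:
        - name: logs
          mountPath: /logs                          # the sidecar reads the same files

Both containers also share one network namespace, so they could equally communicate over localhost.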
StatefulSet and Deployment are controllers that manage sets of Pods. A Deployment manages ReplicaSets for stateless workloads and provides rollout/rollback features; a StatefulSet provides stable identities, ordered operations, and stable storage for stateful replicas. These are higher-level constructs, not the smallest units.
Option D (“Container”) is smaller in an abstract sense, but it is not the smallest Kubernetes deployable unit because Kubernetes APIs and scheduling work at the Pod boundary. You don’t “kubectl apply” a container; you apply a Pod object, or a Pod template embedded in a controller such as a Deployment.
Understanding Pods as the atomic unit is crucial: Services select Pods, autoscalers scale Pods (replica counts), and scheduling decisions are made per Pod. That’s why Kubernetes documentation consistently refers to Pods as the fundamental building block for running workloads.
=========
How can you load and generate data required before Pod startup?
Use an init container with shared file storage.
Use a PVC volume.
Use a sidecar container with shared volume.
Use another Pod with a PVC.
The Kubernetes-native mechanism to run setup steps before the main application containers start is an init container, so A is correct. Init containers run sequentially and must complete successfully before the regular containers in the Pod are started. This makes them ideal for preparing configuration, downloading artifacts, performing migrations, generating files, or waiting for dependencies.
The question specifically asks how to “load and generate data required before Pod startup.” The most common pattern is: an init container writes files into a shared volume (like an emptyDir volume) mounted by both the init container and the app container. When the init container finishes, the app container starts and reads the generated files. This is deterministic and aligns with Kubernetes Pod lifecycle semantics.
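A minimal sketch of that pattern, assuming a hypothetical app image and a trivial generation step:

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-init
    spec:
      volumes:
      - name: workdir
        emptyDir: {}
      initContainers:
      - name: generate-data       # must complete before the app container starts
        image: busybox:1.36
        command: ["sh", "-c", "echo 'generated config' > /work/config.txt"]
        volumeMounts:
        - name: workdir
          mountPath: /work
      containers:
      - name: app
        image: registry.example.com/app:1.0   # hypothetical image
        volumeMounts:
        - name: workdir
          mountPath: /data        # the app reads /data/config.txt at startup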
A sidecar container (option C) runs concurrently with the main container, so it is not guaranteed to complete work before startup. Sidecars are great for ongoing concerns (log shipping, proxies, config reloaders), but they are not the primary “before startup” mechanism. A PVC volume (option B) is just storage; it doesn’t itself perform generation or ensure ordering. “Another Pod with a PVC” (option D) introduces coordination complexity and still does not guarantee the data is prepared before this Pod starts unless you build additional synchronization.
Init containers are explicitly designed for this kind of pre-flight work, and Kubernetes guarantees ordering: all init containers complete in order, then the app containers begin. That guarantee is why A is the best and verified answer.
=========
How does Horizontal Pod autoscaling work in Kubernetes?
The Horizontal Pod Autoscaler controller adds more CPU or memory to the pods when the load is above the configured threshold, and reduces CPU or memory when the load is below.
The Horizontal Pod Autoscaler controller adds more pods when the load is above the configured threshold, but does not reduce the number of pods when the load is below.
The Horizontal Pod Autoscaler controller adds more pods to the specified DaemonSet when the load is above the configured threshold, and reduces the number of pods when the load is below.
The Horizontal Pod Autoscaler controller adds more pods when the load is above the configured threshold, and reduces the number of pods when the load is below.
Horizontal Pod Autoscaling (HPA) adjusts the number of Pod replicas for a workload controller (most commonly a Deployment) based on observed metrics, increasing replicas when load is high and decreasing when load drops. That matches D, so D is correct.
HPA does not add CPU or memory to existing Pods—that would be vertical scaling (VPA). Instead, HPA changes spec.replicas on the target resource, and the controller then creates or removes Pods accordingly. HPA commonly scales based on CPU utilization and memory (resource metrics), and it can also scale using custom or external metrics if those are exposed via the appropriate Kubernetes metrics APIs.
Option A is vertical scaling behavior, not HPA. Option B is incorrect because HPA can scale down as well as up (subject to stabilization windows and configuration), so it’s not “scale up only.” Option C is incorrect because HPA does not scale DaemonSets in the usual model; DaemonSets are designed to run one Pod per node (or per selected nodes) rather than a replica count. HPA targets resources like Deployments, ReplicaSets (via Deployment), and StatefulSets in typical usage, where replica count is a meaningful knob.
Operationally, HPA works as a control loop: it periodically reads metrics (for example, via metrics-server for CPU/memory, or via adapters for custom metrics), compares the current value to the desired target, and calculates a desired replica count within min/max bounds. To avoid flapping, HPA includes stabilization behavior and cooldown logic so it doesn’t scale too aggressively in response to short spikes or dips. You can configure minimum and maximum replicas and behavior policies to tune responsiveness.
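A minimal sketch of an HPA targeting a hypothetical Deployment at 70% average CPU utilization:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: myapp-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: myapp            # hypothetical Deployment name
      minReplicas: 2
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70   # scale out when average CPU exceeds ~70%

Note that Utilization targets are computed against the Pods’ CPU requests, so the target workload must declare resource requests for this to work.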
In cloud-native systems, HPA is a key elasticity mechanism: it enables services to handle variable traffic while controlling cost by scaling down during low demand. Therefore, the verified correct answer is D.
=========
What is ephemeral storage?
Storage space that need not persist across restarts.
Storage that may grow dynamically.
Storage used by multiple consumers (e.g., multiple Pods).
Storage that is always provisioned locally.
The correct answer is A: ephemeral storage is non-persistent storage whose data does not need to survive Pod restarts or rescheduling. In Kubernetes, ephemeral storage typically refers to storage tied to the Pod’s lifetime—such as the container writable layer, emptyDir volumes, and other temporary storage types. When a Pod is deleted or moved to a different node, that data is generally lost.
This is different from persistent storage, which is backed by PersistentVolumes and PersistentVolumeClaims and is designed to outlive individual Pod instances. Ephemeral storage is commonly used for caches, scratch space, temporary files, and intermediate build artifacts—data that can be recreated and is not the authoritative system of record.
Option B is incorrect because “may grow dynamically” describes an allocation behavior, not the defining characteristic of ephemeral storage. Option C is incorrect because multiple consumers is about access semantics (ReadWriteMany etc.) and shared volumes, not ephemerality. Option D is incorrect because ephemeral storage is not “always provisioned locally” in a strict sense; while many ephemeral forms are local to the node, the definition is about lifecycle and persistence guarantees, not necessarily physical locality.
Operationally, ephemeral storage is an important scheduling and reliability consideration. Pods can request/limit ephemeral storage similarly to CPU/memory, and nodes can evict Pods under disk pressure. Mismanaged ephemeral storage (logs written to the container filesystem, runaway temp files) can cause node disk exhaustion and cascading failures. Best practices include shipping logs off-node, using emptyDir intentionally with size limits where supported, and using persistent volumes for state that must survive restarts.
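A minimal sketch of both controls, with a hypothetical image: ephemeral-storage requests/limits on the container plus a sizeLimit on an emptyDir volume:

    apiVersion: v1
    kind: Pod
    metadata:
      name: scratch-space-demo
    spec:
      volumes:
      - name: scratch
        emptyDir:
          sizeLimit: 1Gi        # exceeding this can evict the Pod
      containers:
      - name: app
        image: registry.example.com/app:1.0   # hypothetical image
        volumeMounts:
        - name: scratch
          mountPath: /tmp/scratch
        resources:
          requests:
            ephemeral-storage: 500Mi   # used for scheduling decisions
          limits:
            ephemeral-storage: 2Gi     # exceeding this triggers eviction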
So, ephemeral storage is best defined as storage that does not need to persist across restarts/rescheduling, matching option A.
=========
Which statement best describes the role of kubelet on a Kubernetes worker node?
kubelet manages the container runtime and ensures that all Pods scheduled to the node are running as expected.
kubelet configures networking rules on each node to handle traffic routing for Services in the cluster.
kubelet monitors cluster-wide resource usage and assigns Pods to the most suitable nodes for execution.
kubelet acts as the primary API component that stores and manages cluster state information.
The kubelet is the primary node-level agent in Kubernetes and is responsible for ensuring that workloads assigned to a worker node are executed correctly. Its core function is to manage container execution on the node and ensure that all Pods scheduled to that node are running as expected, which makes option A the correct answer.
Once the Kubernetes scheduler assigns a Pod to a node, the kubelet on that node takes over responsibility for running the Pod. It continuously watches the API server for Pod specifications that target its node and then interacts with the container runtime (such as containerd or CRI-O) through the Container Runtime Interface (CRI). The kubelet starts, stops, and restarts containers to match the desired state defined in the Pod specification.
In addition to lifecycle management, the kubelet performs ongoing health monitoring. It executes liveness, readiness, and startup probes, reports Pod and node status back to the API server, and enforces resource limits defined in the Pod specification. If a container crashes or becomes unhealthy, the kubelet initiates recovery actions such as restarting the container.
Option B is incorrect because configuring Service traffic routing is the responsibility of kube-proxy and the cluster’s networking layer, not the kubelet. Option C is incorrect because cluster-wide resource monitoring and Pod placement decisions are handled by the kube-scheduler. Option D is incorrect because cluster state is managed by the API server and stored in etcd, not by the kubelet.
In summary, the kubelet acts as the executor and supervisor of Pods on each worker node. It bridges the Kubernetes control plane and the actual runtime environment, ensuring that containers are running, healthy, and aligned with the declared configuration. Therefore, Option A is the correct and verified answer.
=========
What is the purpose of the kubelet component within a Kubernetes cluster?
A dashboard for Kubernetes clusters that allows management and troubleshooting of applications.
A network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
A component that watches for newly created Pods with no assigned node, and selects a node for them to run on.
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet is the primary node agent in Kubernetes. It runs on every worker node (and often on control-plane nodes too if they run workloads) and is responsible for ensuring that containers described by PodSpecs are actually running and healthy on that node. The kubelet continuously watches the Kubernetes API (via the control plane) for Pods that have been scheduled to its node, then it collaborates with the node’s container runtime (through CRI) to pull images, create containers, start them, and manage their lifecycle. It also mounts volumes, configures the Pod’s networking (working with the CNI plugin), and reports Pod and node status back to the API server.
Option D captures the core: “an agent on each node that makes sure containers are running in a Pod.” That includes executing probes (liveness, readiness, startup), restarting containers based on the Pod’s restartPolicy, and enforcing resource constraints in coordination with the runtime and OS.
Why the other options are wrong: A describes the Kubernetes Dashboard (or similar UI tools), not kubelet. B describes kube-proxy, which programs node-level networking rules (iptables/ipvs/eBPF depending on implementation) to implement Service virtual IP behavior. C describes the kube-scheduler, which selects a node for Pods that do not yet have an assigned node.
A useful way to remember kubelet’s role is: scheduler decides where, kubelet makes it happen there. Once the scheduler binds a Pod to a node, kubelet becomes responsible for reconciling “desired state” (PodSpec) with “observed state” (running containers). If a container crashes, kubelet will restart it according to policy; if an image is missing, it will pull it; if a Pod is deleted, it will stop containers and clean up. This node-local reconciliation loop is fundamental to Kubernetes’ self-healing and declarative operation model.
=========
Which of the following scenarios would benefit the most from a service mesh architecture?
A few applications with hundreds of Pod replicas running in multiple clusters, each one providing multiple services.
Thousands of distributed applications running in a single cluster, each one providing multiple services.
Tens of distributed applications running in multiple clusters, each one providing multiple services.
Thousands of distributed applications running in multiple clusters, each one providing multiple services.
A service mesh is most valuable when service-to-service communication becomes complex at large scale—many services, many teams, and often multiple clusters. That’s why D is the best fit: thousands of distributed applications across multiple clusters. In that scenario, the operational burden of securing, observing, and controlling east-west traffic grows dramatically. A service mesh (e.g., Istio, Linkerd) addresses this by introducing a dedicated networking layer (usually sidecar proxies such as Envoy) that standardizes capabilities across services without requiring each application to implement them consistently.
The common “mesh” value-adds are: mTLS for service identity and encryption, fine-grained traffic policy (retries, timeouts, circuit breaking), traffic shifting (canary, mirroring), and consistent telemetry (metrics, traces, access logs). Those features become increasingly beneficial as the number of services and cross-service calls rises, and as you add multi-cluster routing, failover, and policy management across environments. With thousands of applications, inconsistent libraries and configurations become a reliability and security risk; the mesh centralizes and standardizes these behaviors.
In smaller environments (A or C), you can often meet requirements with simpler approaches: Kubernetes Services, Ingress/Gateway, basic mTLS at the edge, and application-level libraries. A single large cluster (B) can still benefit from a mesh, but adding multiple clusters increases complexity: traffic management across clusters, identity trust domains, global observability correlation, and consistent policy enforcement. That’s where mesh architectures typically justify their additional overhead (extra proxies, control plane components, operational complexity).
So, the “most benefit” scenario is the largest, most distributed footprint—D.
=========
What is the minimum number of etcd members that are required for a highly available Kubernetes cluster?
Two etcd members.
Five etcd members.
Six etcd members.
Three etcd members.
D (three etcd members) is correct. etcd is a distributed key-value store that uses the Raft consensus algorithm. High availability in consensus systems depends on maintaining a quorum (majority) of members to continue serving writes reliably. With 3 members, the cluster can tolerate 1 failure and still have 2 of 3 available—enough for quorum.
A 2-member cluster is a common trap: with 2, a single failure leaves 1 of 2, which is not a majority, so the cluster cannot safely make progress. That means a 2-member etcd is not HA; it is fragile and can be taken down by one node loss, network partition, or maintenance event. Five members can tolerate 2 failures and is a valid HA configuration, but it is not the minimum. Six is even-sized and generally discouraged for consensus because it doesn’t improve failure tolerance compared to five (quorum still requires 4), while increasing coordination overhead.
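The underlying arithmetic: with N members, quorum = floor(N/2) + 1, and the cluster tolerates N minus quorum failures:

    Members   Quorum   Failures tolerated
    1         1        0
    2         2        0
    3         2        1
    4         3        1
    5         3        2
    6         4        2

Even sizes tolerate no more failures than the odd size below them, which is why odd-member topologies are recommended.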
In Kubernetes, etcd reliability directly affects the API server and the entire control plane because etcd stores cluster state: object specs, status, controller state, and more. If etcd loses quorum, the API server will be unable to persist or reliably read/write state, leading to cluster management outages. That’s why the minimum HA baseline is three etcd members, often across distinct failure domains (nodes/AZs), with strong disk performance and consistent low-latency networking.
So, the smallest etcd topology that provides true fault tolerance is 3 members, which corresponds to option D.
=========
If kubectl is failing to retrieve information from the cluster, where can you find Pod logs to troubleshoot?
/var/log/pods/
~/.kube/config
/var/log/k8s/
/etc/kubernetes/
The correct answer is A: /var/log/pods/. When kubectl logs can’t retrieve logs (for example, API connectivity issues, auth problems, or kubelet/API proxy issues), you can often troubleshoot directly on the node where the Pod ran. Kubernetes nodes typically store container logs on disk, and a common location is under /var/log/pods/, organized by namespace, Pod name/UID, and container. This directory contains symlinks or files that map to the underlying container runtime log location (often under /var/log/containers/ as well, depending on distro/runtime setup).
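On a typical node the layout is one directory per Pod, named <namespace>_<pod-name>_<pod-uid>, with a subdirectory per container and a numbered file per restart. A sketch with illustrative names:

    # list log directories for all Pods that ran on this node
    ls /var/log/pods/

    # tail a specific container's current log (0.log is the first instance)
    tail -f /var/log/pods/default_myapp-7c9f8d_<pod-uid>/app/0.log

    # many setups also expose flattened symlinks for log collectors
    ls /var/log/containers/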
Option B (~/.kube/config) is your local kubeconfig file; it contains cluster endpoints and credentials, not Pod logs. Option D (/etc/kubernetes/) contains Kubernetes component configuration/manifests on some installations (especially control plane), not application logs. Option C (/var/log/k8s/) is not a standard Kubernetes log path.
Operationally, the node-level log locations depend on the container runtime and logging configuration, but the Kubernetes convention is that kubelet writes container logs to a known location and exposes them through the API so kubectl logs works. If the API path is broken, node access becomes your fallback. This is also why secure node access is sensitive: anyone with node root access can potentially read logs (and other data), which is part of the threat model.
So, the best answer for where to look on the node for Pod logs when kubectl can’t retrieve them is /var/log/pods/, option A.
=========
Which of the following observability data streams would be most useful when desiring to plot resource consumption and predicted future resource exhaustion?
stdout
Traces
Logs
Metrics
The correct answer is D: Metrics. Metrics are numeric time-series measurements collected at regular intervals, making them ideal for plotting resource consumption over time and forecasting future exhaustion. In Kubernetes, this includes CPU usage, memory usage, disk I/O, network throughput, filesystem usage, Pod restarts, and node allocatable vs requested resources. Because metrics are structured and queryable (often with Prometheus), you can compute rates, aggregates, percentiles, and trends, and then apply forecasting methods to predict when a resource will run out.
Logs and traces have different purposes. Logs are event records (strings) that are great for debugging and auditing, but they are not naturally suited to continuous quantitative plotting unless you transform them into metrics (log-based metrics). Traces capture end-to-end request paths and latency breakdowns; they help you find slow spans and dependency bottlenecks, not forecast CPU/memory exhaustion. stdout is just a stream where logs might be written; by itself it’s not an observability data type used for capacity trending.
In Kubernetes observability stacks, metrics are typically scraped from components and workloads: kubelet/cAdvisor exports container metrics, node exporters expose host metrics, and applications expose business/system metrics. The metrics pipeline (Prometheus, OpenTelemetry metrics, managed monitoring) enables dashboards and alerting. For resource exhaustion, you often alert on “time to fill” (e.g., predicted disk fill in < N hours), high sustained utilization, or rapidly increasing error rates due to throttling.
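For example, Prometheus’s predict_linear() function fits a linear trend over a sample window. A sketch, assuming standard node_exporter metric names:

    # fire if the root filesystem is predicted to fill within 4 hours,
    # based on the usage trend over the last 6 hours
    predict_linear(node_filesystem_avail_bytes{mountpoint="/"}[6h], 4 * 3600) < 0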
Therefore, the most appropriate data stream for plotting consumption and predicting exhaustion is Metrics, option D.
=========
What is Flux constructed with?
GitLab Environment Toolkit
GitOps Toolkit
Helm Toolkit
GitHub Actions Toolkit
The correct answer is B: GitOps Toolkit. Flux is a GitOps solution for Kubernetes, and in Flux v2 the project is built as a set of Kubernetes controllers and supporting components collectively referred to as the GitOps Toolkit. This toolkit provides the building blocks for implementing GitOps reconciliation: sourcing artifacts (Git repositories, Helm repositories, OCI artifacts), applying manifests (Kustomize/Helm), and continuously reconciling cluster state to match the desired state declared in Git.
This construction matters because it reflects Flux’s modular architecture. Instead of being a single monolithic daemon, Flux is composed of controllers that each handle a part of the GitOps workflow: fetching sources, rendering configuration, and applying changes. This makes it more Kubernetes-native: everything is declarative, runs in the cluster, and can be managed like other workloads (RBAC, namespaces, upgrades, observability).
Why the other options are wrong:
“GitLab Environment Toolkit” and “GitHub Actions Toolkit” are not what Flux is built from. Flux can integrate with many SCM providers and CI systems, but it is not “constructed with” those.
“Helm Toolkit” is not the named foundational set Flux is built upon. Flux can deploy Helm charts, but that’s a capability, not its underlying construction.
In cloud-native delivery, Flux implements the key GitOps control loop: detect changes in Git (or other declared sources), compute desired Kubernetes state, and apply it while continuously checking for drift. The GitOps Toolkit is the set of controllers enabling that loop.
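Concretely, the toolkit includes controllers such as source-controller, kustomize-controller, helm-controller, and notification-controller, each managing its own custom resources. A minimal sketch of that loop (repository URL and path are illustrative; check the API versions against your Flux release):

    apiVersion: source.toolkit.fluxcd.io/v1
    kind: GitRepository
    metadata:
      name: app-repo
      namespace: flux-system
    spec:
      interval: 1m                                  # how often to poll the source
      url: https://github.com/example/app-config    # hypothetical repository
      ref:
        branch: main
    ---
    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: app
      namespace: flux-system
    spec:
      interval: 10m
      sourceRef:
        kind: GitRepository
        name: app-repo
      path: ./deploy     # directory of manifests to apply
      prune: true        # remove cluster objects that disappear from Git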
Therefore, the verified correct answer is B.
=========
Which is the correct kubectl command to display logs in real time?
kubectl logs -p test-container-1
kubectl logs -c test-container-1
kubectl logs -l test-container-1
kubectl logs -f test-container-1
To stream logs in real time with kubectl, you use the follow option -f, so D is correct. In Kubernetes, kubectl logs retrieves logs from containers in a Pod. By default, it returns the current log output and exits. When you add -f, kubectl keeps the connection open and continuously prints new log lines as they are produced, similar to tail -f on Linux. This is especially useful for debugging live behavior, watching startup sequences, or monitoring an application during a rollout.
The other flags serve different purposes. -p (as seen in option A) requests logs from the previous instance of a container (useful after a restart/crash), not real-time streaming. -c (option B) selects a specific container within a multi-container Pod; it doesn’t stream by itself (though it can be combined with -f). -l (option C) is used with kubectl logs to select Pods by label, but again it is not the streaming flag; streaming requires -f.
In real troubleshooting, you commonly combine flags, e.g. kubectl logs -f pod-name -c container-name for streaming logs from a specific container, or kubectl logs -f -l app=myapp to stream from Pods matching a label selector (depending on kubectl behavior/version). But the key answer to “display logs in real time” is the follow flag: -f.
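A few concrete combinations (Pod and label names are illustrative; exact flag behavior can vary by kubectl version):

    # stream logs from a single-container Pod
    kubectl logs -f test-container-1

    # stream a specific container in a multi-container Pod
    kubectl logs -f mypod -c sidecar

    # stream from all Pods matching a label selector
    kubectl logs -f -l app=myapp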
Therefore, the correct selection is D.
=========
What does “continuous” mean in the context of CI/CD?
Frequent releases, manual processes, repeatable, fast processing
Periodic releases, manual processes, repeatable, automated processing
Frequent releases, automated processes, repeatable, fast processing
Periodic releases, automated processes, repeatable, automated processing
The correct answer is C: in CI/CD, “continuous” implies frequent releases, automation, repeatability, and fast feedback/processing. The intent is to reduce batch size and latency between code change and validation/deployment. Instead of integrating or releasing in large, risky chunks, teams integrate changes continually and rely on automation to validate and deliver them safely.
“Continuous” does not mean “periodic” (which eliminates B and D). It also does not mean “manual processes” (which eliminates A and B). Automation is core: build, test, security checks, and deployment steps are consistently executed by pipeline systems, producing reliable outcomes and auditability.
In practice, CI means every merge triggers automated builds and tests so the main branch stays in a healthy state. CD means those validated artifacts are promoted through environments with minimal manual steps, often including progressive delivery controls (canary, blue/green), automated rollbacks on health signal failures, and policy checks. Kubernetes works well with CI/CD because it is declarative and supports rollout primitives: Deployments, readiness probes, and rollback revision history enable safer continuous delivery when paired with pipeline automation.
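For example, a pipeline stage might drive and verify a rollout with standard kubectl commands (names and paths are illustrative):

    # apply the new desired state declaratively
    kubectl apply -f deploy/myapp.yaml

    # block until the rollout completes; a non-zero exit fails the pipeline
    kubectl rollout status deployment/myapp --timeout=120s

    # roll back if post-deploy checks fail
    kubectl rollout undo deployment/myapp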
Repeatability is a major part of “continuous.” The same pipeline should run the same way every time, producing consistent artifacts and deployments. This reduces “works on my machine” issues and shortens incident resolution because changes are traceable and reproducible. Fast processing and frequent releases also mean smaller diffs, easier debugging, and quicker customer value delivery.
So, the combination that accurately reflects “continuous” in CI/CD is frequent + automated + repeatable + fast, which is option C.
=========