
CCDAK Confluent Certified Developer for Apache Kafka Certification Examination Questions and Answers

Question 4

Which two statements are correct when assigning partitions to the consumers in a consumer group using the assign() API?

(Select two.)

Options:

A.

It is mandatory to subscribe to a topic before calling assign() to assign partitions.

B.

The consumer chooses which partition to read without any assignment from brokers.

C.

The consumer group will not be rebalanced if a consumer leaves the group.

D.

All topics must have the same number of partitions to use the assign() API.
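
For context on the assign() API, a minimal sketch of manual partition assignment, assuming a plain Java consumer and a hypothetical topic t1:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ManualAssignExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // No subscribe() call: the consumer itself chooses which partitions
            // to read, brokers perform no assignment, and a consumer leaving
            // does not trigger a group rebalance.
            consumer.assign(List.of(new TopicPartition("t1", 0)));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            records.forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
        }
    }
}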

Question 5

A consumer application needs to use at-most-once delivery semantics.

What is the best consumer configuration and code skeleton to avoid duplicate messages being read?

Options:

A.

auto.offset.reset=latest and enable.auto.commit=true

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    for (var record : records) {
        // Any processing
    }
    consumer.commitAsync();
}

B.

auto.offset.reset=earliest and enable.auto.commit=false

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    consumer.commitAsync();
    for (var record : records) {
        // Any processing
    }
}

C.

auto.offset.reset=earliest and enable.auto.commit=false

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    for (var record : records) {
        // Any processing
    }
    consumer.commitAsync();
}

D.

auto.offset.reset=earliest and enable.auto.commit=true

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    consumer.commitAsync();
    for (var record : records) {
        // Any processing
    }
}
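
For reference, at-most-once delivery is usually described as committing offsets before processing, so a crash loses records rather than re-reading them. A minimal sketch under that assumption, reusing the consumer and POLL_TIMEOUT from the options above and a hypothetical process() helper:

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    // Commit first: if the application crashes during processing, the
    // uncommitted work is lost rather than read a second time.
    consumer.commitSync();
    for (var record : records) {
        process(record); // hypothetical processing hook
    }
}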

Question 6

You use Kafka Connect with the JDBC source connector to extract data from a large database and push it into Kafka.

The database contains tens of tables, and the current connector is unable to process the data fast enough.

You add more Kafka Connect workers, but throughput doesn't improve.

What should you do next?

Options:

A.

Increase the number of Kafka partitions for the topics.

B.

Increase the value of the connector's property tasks.max.

C.

Add more Kafka brokers to the cluster.

D.

Modify the database schemas to enable horizontal sharding.
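
For context, the JDBC source connector assigns tables to tasks, and a connector runs at most tasks.max tasks no matter how many workers exist, so parallelism tops out at min(tasks.max, table count). A hypothetical connector config (connection details are placeholders):

{
  "name": "jdbc-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db-host:5432/shop",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "tasks.max": "10"
  }
}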

Question 7

What is a consequence of increasing the number of partitions in an existing Kafka topic?

Options:

A.

Existing data will be redistributed across the new number of partitions, temporarily increasing cluster load.

B.

Records with the same key could be located in different partitions.

C.

Consumers will need to process data from more partitions, which will significantly increase consumer lag.

D.

The acknowledgment process will increase latency for producers using acks=all.
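
For context, the default partitioner for keyed records hashes the key modulo the partition count, so changing the partition count can move a key to a different partition. A sketch of that rule using Kafka's own hash utilities:

import org.apache.kafka.common.utils.Utils;

public class DefaultPartitionRule {
    // Mirrors the default keyed-record rule: murmur2 hash of the key bytes,
    // made non-negative, modulo the number of partitions. With a new
    // partition count, the same key can land in a different partition.
    static int partitionFor(byte[] keyBytes, int numPartitions) {
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }
}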

Question 8

You are building a system for a retail store selling products to customers.

Which three datasets should you model as a GlobalKTable?

(Select three.)

Options:

A.

Inventory of products at a warehouse

B.

All purchases at a retail store occurring in real time

C.

Customer profile information

D.

Log of payment transactions

E.

Catalog of products
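
For context, a GlobalKTable fully replicates a topic to every application instance, which suits small, slowly changing reference data. A minimal sketch (topic name is an assumption):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.GlobalKTable;

StreamsBuilder builder = new StreamsBuilder();
// Reference data such as a product catalog is read in full by every
// instance, enabling key-based lookups without repartitioning.
GlobalKTable<String, String> catalog = builder.globalTable("product-catalog");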

Question 9

You have a topic with four partitions. The application reads from it using two consumers in a single consumer group.

Processing is CPU-bound, and lag is increasing.

What should you do?

Options:

A.

Add more consumers to increase the level of parallelism of the processing.

B.

Add more partitions to the topic to increase the level of parallelism of the processing.

C.

Increase the max.poll.records property of consumers.

D.

Decrease the max.poll.records property of consumers.

Question 10

Your Kafka cluster has five brokers. The topic t1 on the cluster has:

Two partitions

Replication factor = 4

min.insync.replicas = 3

You need strong durability guarantees for messages written to topic t1. You configure the producer with acks=all, and all replicas of t1 are in-sync.

How many brokers need to acknowledge a message before it is considered committed?

Options:

A.

2

B.

3

C.

4

D.

5
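
For reference, a topic like t1 could be created with the standard CLI (broker address is a placeholder):

kafka-topics --bootstrap-server localhost:9092 --create --topic t1 \
  --partitions 2 --replication-factor 4 \
  --config min.insync.replicas=3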

Question 11

Match the topic configuration setting with the reason the setting affects topic durability.

(You are given settings like unclean.leader.election.enable=false, replication.factor, min.insync.replicas=2)


Options:

Question 12

Which statement describes the storage location for a sink connector’s offsets?

Options:

A.

The __consumer_offsets topic, like any other consumer

B.

The topic specified in the offsets.storage.topic configuration parameter

C.

In a file specified by the offset.storage.file.filename configuration parameter

D.

In memory which is then periodically flushed to a RocksDB instance

Question 13

You are developing a Java application using a Kafka consumer.

You need to integrate Kafka’s client logs with your own application’s logs using log4j2.

Which Java library dependency must you include in your project?

Options:

A.

SLF4J implementation for Log4j 1.2 (org.slf4j:slf4j-log4j12)

B.

SLF4J implementation for Log4j2 (org.apache.logging.log4j:log4j-slf4j-impl)

C.

None; the correct dependency is pulled in transitively by the Kafka client dependency.

D.

Just the log4j2 dependency of the application
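
For reference, SLF4J bindings are added as build dependencies. A hypothetical Maven fragment for the Log4j2 binding (version is illustrative):

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
    <version>2.20.0</version>
</dependency>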

Question 14

What is the default maximum size of a message the Apache Kafka broker can accept?

Options:

A.

1MB

B.

2MB

C.

5MB

D.

10MB
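
For reference, the broker's record size cap is message.max.bytes, with max.message.bytes as the per-topic override; an illustrative override raising it to 2 MB:

# server.properties (broker-wide)
message.max.bytes=2097152
# or per topic:
# max.message.bytes=2097152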

Question 15

You create an Orders topic with 10 partitions.

The topic receives data at high velocity.

Your Kafka Streams application initially runs on a server with four CPU threads.

You move the application to another server with 10 CPU threads to improve performance.

What does this example describe?

Options:

A.

Horizontal Scaling

B.

Vertical Scaling

C.

Plain Scaling

D.

Scaling Out
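
For context, a single Kafka Streams instance scales on one machine through num.stream.threads; a minimal sketch, with the application id and broker address as placeholders:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// One stream thread per partition of the 10-partition Orders topic.
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 10);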

Question 16

You create a topic named IoT-Data with 10 partitions and a replication factor of three.

A producer sends 1 MB messages compressed with Gzip.

Which two statements are true in this scenario?

(Select two.)

Options:

A.

Compression type will be stored in batch attributes.

B.

By default, compression is the producer’s responsibility.

C.

The message is already compressed, so it will not be serialized.

D.

All compressed messages will be stored in the same topic partition.
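
For context, a minimal sketch of producer-side compression configuration:

import org.apache.kafka.clients.producer.ProducerConfig;

// compression.type defaults to "none" on the producer; brokers keep the
// producer's compression unless the topic's own compression.type overrides it.
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");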

Question 17

You are configuring a source connector that writes records to an Orders topic.

You need to send some of the records to a different topic.

Which Single Message Transform (SMT) is best suited for this requirement?

Options:

A.

RegexRouter

B.

InsertField

C.

TombstoneHandler

D.

HeaderFrom
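
For reference, SMTs are declared in the connector configuration. A hypothetical fragment applying RegexRouter (regex and replacement are illustrative):

"transforms": "route",
"transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
"transforms.route.regex": "Orders",
"transforms.route.replacement": "OrdersArchive"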

Question 18

You have a Kafka Connect cluster with multiple connectors.

One connector is not working as expected.

How can you find logs related to that specific connector?

Options:

A.

Modify the log4j.properties file to enable connector context.

B.

Modify the log4j.properties file to add a dedicated log appender for the connector.

C.

Change the log level to DEBUG to have connector context information in logs.

D.

Make no change, there is no way to find logs other than by stopping all the other connectors.
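
For reference, Connect workers since Apache Kafka 2.3 (KIP-449) can include a connector context in every log line through the log4j layout; an illustrative log4j.properties pattern:

log4j.appender.stdout.layout.ConversionPattern=[%d] %p %X{connector.context}%m (%c:%L)%n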

Question 19

What is accomplished by producing data to a topic with a message key?

Options:

A.

Messages with the same key are routed to a deterministically selected partition, enabling order guarantees within that partition.

B.

Kafka brokers allow you to add more partitions to a given topic, without impacting the data flow for existing keys.

C.

It provides a mechanism for encrypting messages at the partition level to ensure secure data transmission.

D.

Consumers can filter messages in real time based on the message key without processing unrelated messages.
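
For context, a minimal sketch of keyed production, assuming an existing producer and a hypothetical orders topic:

import org.apache.kafka.clients.producer.ProducerRecord;

// All records keyed "customer-42" hash to the same partition, so they are
// consumed in the order they were written.
producer.send(new ProducerRecord<>("orders", "customer-42", "order-created"));
producer.send(new ProducerRecord<>("orders", "customer-42", "order-paid"));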

Question 20

You are writing to a topic with acks=all.

The producer receives acknowledgments, but you notice duplicate messages.

You find that timeouts due to network delay are causing resends.

Which configuration should you use to prevent duplicates?

Options:

A.

enable.auto.commit=true

B.

retries=2147483647
max.in.flight.requests.per.connection=5
enable.idempotence=true

C.

retries=0
max.in.flight.requests.per.connection=5
enable.idempotence=true

D.

retries=2147483647
max.in.flight.requests.per.connection=1
enable.idempotence=false
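
For context, a minimal sketch of enabling the idempotent producer, which lets brokers de-duplicate retried batches using a producer id and per-partition sequence numbers:

import org.apache.kafka.clients.producer.ProducerConfig;

props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
props.put(ProducerConfig.ACKS_CONFIG, "all"); // required for idempotence
// In recent clients, the defaults for retries (Integer.MAX_VALUE) and
// max.in.flight.requests.per.connection (5) are compatible with idempotence.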

Question 21

You are writing lightweight XML messages to a Kafka topic named userinfo.

Which serializer should you use for the value field?

Options:

A.

XmlSerializer

B.

StringSerializer

C.

ByteSerializer

D.

VoidSerializer

Question 22

You create a producer that writes messages about bank account transactions from tens of thousands of different customers into a topic.

Your consumers must process these messages with low latency and minimize consumer lag.

Processing takes ~6x longer than producing.

Transactions for each bank account must be processed in order.

Which strategy should you use?

Options:

A.

Use the timestamp of the message's arrival as its key.

B.

Use the bank account number found in the message as the message key.

C.

Use a combination of the bank account number and the transaction timestamp as the message key.

D.

Use a unique identifier such as a universally unique identifier (UUID) as the message key.

Question 23

You deploy a Kafka Streams application with five application instances.

Kafka Streams stores application metadata using internal topics.

Auto-topic creation is disabled in the Kafka cluster.

Which statement about this scenario is true?

Options:

A.

The application will continue to work and internal topics will be created, even if auto-topic creation is disabled.

B.

The application will terminate with a non-retriable exception.

C.

The application will work, but application metadata will not be stored.

D.

The application will be on hold until internal topics are created manually.

Question 24

A stream processing application tracks user activity in online shopping carts, including items added, removed, and ordered throughout the day for each user.

You need to capture data to identify possible periods of user inactivity.

Which type of Kafka Streams window should you use?

Options:

A.

Session

B.

Hopping

C.

Tumbling

D.

Sliding
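
For context, a minimal sketch of a session window with an assumed 5-minute inactivity gap, applied to an existing grouped KStream:

import java.time.Duration;
import org.apache.kafka.streams.kstream.SessionWindows;

// A session closes when no activity arrives for the inactivity gap, so the
// gaps between sessions mark each user's idle periods.
stream.groupByKey()
      .windowedBy(SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(5)))
      .count();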

Question 25

Your application is consuming from a topic configured with a deserializer.

It needs to be resilient to badly formatted records ("poison pills"). You surround the poll() call with a try/catch for RecordDeserializationException.

You need to log the bad record, skip it, and continue processing.

Which action should you take in the catch block?

Options:

A.

Log the bad record, no other action needed.

B.

Log the bad record and seek the consumer to the offset of the next record.

C.

Log the bad record and call the consumer.skip() method.

D.

Throw a runtime exception to trigger a restart of the application.
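
For reference, RecordDeserializationException exposes the failed record's partition and offset, which is enough to log it and step over it; a minimal sketch (logger and consumer are assumed):

import java.time.Duration;
import org.apache.kafka.common.errors.RecordDeserializationException;

try {
    var records = consumer.poll(Duration.ofMillis(100));
    // ... normal processing ...
} catch (RecordDeserializationException e) {
    // Log the poison pill, then seek past it so the next poll() resumes.
    log.error("Bad record at {} offset {}", e.topicPartition(), e.offset(), e);
    consumer.seek(e.topicPartition(), e.offset() + 1);
}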

Question 26

Match each configuration parameter with the correct deployment step in installing a Kafka connector.


Options:

Question 27

Match the testing tool with the type of test it is typically used to perform.

Options:
