- Rahul Neelakantan
Apache Kafka is an open-source, publish-subscribe message broker application. It is written in Scala, and its design is largely modeled on the transaction (commit) log.
Table of Contents
- List out the components of Apache Kafka?
- Explain the role of the offset?
- What is a Consumer Group?
- What is the role of the ZooKeeper in Kafka?
- Is it possible to use Kafka without ZooKeeper?
- What do you know about Partition in Kafka?
- List out the advantages of Kafka?
- List out the main APIs for Kafka?
- How does Kafka ensure load balancing of the server?
- Why are Replications critical in Kafka?
- What can you do with Kafka?
- Explain Multi-tenancy?
- Compare RabbitMQ vs. Apache Kafka?
- What are the differences between Traditional queuing systems & Apache Kafka?
- What is Data Log in Kafka?
- List out the features of Kafka Streams?
- What is the main difference between Kafka and Flume?
- How do you define a Partitioning Key in Kafka?
- What is the process for starting a Kafka server?
- What is ZooKeeper?
- What do you mean by ZNode?
- List the different types Of Znodes?
- What is the ZooKeeper ensemble?
- What is ZooKeeper quorum?
- What is ZooKeeper Atomic Broadcast (ZAB) protocol?
- What is the Paxos algorithm?
- Explain about watch event in ZooKeeper?
- What is the role of Kafka Producer API?
- What is the difference between Partition & Replica for Topic in Kafka?
- What is Geo-Replication in Kafka?
- Explain how you can get precisely one message from Kafka during data protection?
- How can the Kafka cluster be rebalanced?
- What are the three broker configuration files?
- What maximum message size can the Kafka server receive?
- How can the throughput of a remote consumer be improved?
- What is ISR in Kafka?
- How can churn be reduced in ISR in Kafka, and when does the broker leave it?
- What is the consumer lag in Kafka?
- What is Kafka producer Acknowledgement?
- What is a Smart producer/ dumb broker?
List out the components of Apache Kafka?
- Topics – Named collections (streams) of messages.
- Producers – Issue communications and publish messages to a Kafka topic.
- Consumers – Subscribe to one or more topics and read and process messages from them.
- Brokers – Manage the storage of messages in the topics we use.
Explain the role of the offset?
Each message within a partition is assigned a sequential ID number called an offset. These offsets uniquely identify every message inside its partition.
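As a rough mental model (not Kafka's actual storage format), a partition behaves like an append-only list, and the offset is simply a message's position in that list:

```python
# Sketch: a partition as an append-only log; the offset is just the
# index at which a message was appended.
partition = []

def produce(message: str) -> int:
    """Append a message to the partition and return its offset."""
    partition.append(message)
    return len(partition) - 1

def read(offset: int) -> str:
    """Fetch the message stored at a given offset."""
    return partition[offset]

first = produce("order-created")   # gets offset 0
second = produce("order-shipped")  # gets offset 1
```

Because offsets are assigned in append order, a consumer can resume from any position simply by remembering the last offset it processed.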
What is a Consumer Group?
The concept of Consumer Groups is exclusive to Apache Kafka. Every Kafka consumer group consists of one or more consumers that jointly consume a set of subscribed topics.
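The key property of a consumer group is that each partition is consumed by exactly one member of the group. A minimal sketch of a round-robin assignment (Kafka's real assignors, such as range and cooperative-sticky, are more sophisticated):

```python
def assign_partitions(partitions, consumers):
    """Round-robin sketch: every partition goes to exactly one consumer."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# Six partitions shared by a three-member group
plan = assign_partitions([0, 1, 2, 3, 4, 5], ["c1", "c2", "c3"])
```

Note that if the group has more consumers than the topic has partitions, the surplus consumers sit idle.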
What is the role of the ZooKeeper in Kafka?
Apache Kafka is a distributed system built to use ZooKeeper. ZooKeeper's central role is to coordinate between the different nodes in a cluster. We also use ZooKeeper to recover from a previously committed offset if any node fails, since consumer offsets were periodically committed to it in older Kafka versions.
Is it possible to use Kafka without ZooKeeper?
No. It is impossible to bypass ZooKeeper and connect directly to the Kafka server, and if ZooKeeper is down, it is impossible to service any client request. (Recent Kafka releases can run without ZooKeeper in KRaft mode, but this answer holds for classic ZooKeeper-based deployments.)
What do you know about Partition in Kafka?
Every Kafka broker hosts one or more partitions. For each partition of a topic, a broker can hold either the leader replica or a follower replica.
List out the advantages of Kafka?
- High Throughput
- Low Latency
List out the main APIs for Kafka?
- Producer API
- Consumer API
- Streams API
- Connector API
How does Kafka ensure load balancing of the server?
The leader performs all read and write requests for the partition, while the followers passively replicate it.
If the leader fails, one of the followers takes over as leader. This process ensures that load is balanced across the servers.
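The failover step can be sketched as follows (a simplification: real Kafka elects the new leader from the in-sync replica set via the controller):

```python
def elect_new_leader(failed_leader, followers):
    """On leader failure, promote the first available follower (sketch)."""
    if not followers:
        raise RuntimeError("no in-sync follower available for promotion")
    new_leader = followers[0]
    remaining_followers = followers[1:]
    return new_leader, remaining_followers

# broker-1 fails; broker-2 is promoted, broker-3 keeps following
leader, followers = elect_new_leader("broker-1", ["broker-2", "broker-3"])
```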
Why are Replications critical in Kafka?
Because of Replication, we can be sure that published messages are not lost and can be consumed in the event of any machine error, program error, or frequent software upgrades.
What can you do with Kafka?
- To transmit data between two systems, we can build a real-time stream of data pipelines with it.
- Also, we can build a real-time streaming platform with Kafka that can react to the data.
Explain Multi-tenancy?
Multi-tenancy means that a single instance of the software and its supporting infrastructure serves multiple customers. Customers share the application and a single database, while each tenant's data is isolated and remains invisible to other tenants.
We can readily deploy Kafka as a multi-tenant solution. Multi-tenancy is enabled by configuring which topics each client can produce to or consume from, and Kafka also provides operational support for quotas.
Compare RabbitMQ vs. Apache Kafka?
|Apache Kafka|RabbitMQ|
|---|---|
|Distributed, durable, and highly available; data is shared and replicated.|There are no such built-in features.|
|Can support to the tune of 100,000 messages/second.|The performance rate is around 20,000 messages/second.|
What are the differences between Traditional queuing systems & Apache Kafka?
|Apache Kafka|Traditional queuing systems|
|---|---|
|Messages persist even after being processed; they are not removed as consumers receive them.|Messages are deleted just after processing completes, typically from the end of the queue.|
|Enables processing logic based on similar messages or events.|Does not permit processing logic based on similar messages or events.|
What is Data Log in Kafka?
Messages are retained for a considerable amount of time in Kafka, and consumers have the flexibility to read them at their convenience.
However, if Kafka is configured to keep messages for only 24 hours and a consumer is down for longer than 24 hours, the consumer will lose messages.
We can still read messages from the last known offset, but only if the consumer's downtime stays within the retention period. Kafka itself keeps no state about what consumers are reading from a topic.
List out the features of Kafka Streams?
- Kafka Streams are highly scalable and fault-tolerant.
- Kafka deploys to containers, VMs, bare metal, cloud.
- Kafka streams are equally viable for small, medium, & significant use cases.
- Full integration with Kafka security.
- Interoperability with Java Applications
- Exactly-once processing semantics.
- There is no need for a separate processing cluster.

In the Kafka Producer, when does QueueFullException occur?
QueueFullException typically occurs when the Producer attempts to send messages at a pace that the Broker cannot handle. Since the Producer doesn’t block, users will need to add enough brokers to handle the increased load collaboratively.
What is the main difference between Kafka and Flume?
Even though both are used for real-time processing, Kafka is a general-purpose, scalable publish-subscribe system that ensures message durability for many independent consumers, whereas Flume is purpose-built for ingesting data into Hadoop (e.g., HDFS and HBase).
How do you define a Partitioning Key in Kafka?
Within the Producer, the role of a Partitioning Key is to determine the destination partition of the message. By default, a hashing-based Partitioner derives the partition ID from the key. Alternatively, users can supply customized Partitioners.
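The idea behind the default partitioner can be sketched like this. Note the hedge: Kafka's Java client actually hashes key bytes with murmur2; MD5 is used here only as a stand-in deterministic hash for illustration:

```python
import hashlib

def choose_partition(key: bytes, num_partitions: int) -> int:
    """Map a message key to a partition: hash the key, then take it
    modulo the partition count (sketch of hash-based partitioning)."""
    h = int.from_bytes(hashlib.md5(key).digest()[:4], "big")
    return h % num_partitions

p = choose_partition(b"user-42", 6)
```

The important property is determinism: the same key always lands on the same partition, which is what gives Kafka per-key ordering.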
What is the process for starting a Kafka server?
Since Kafka uses ZooKeeper, it is essential to initialize the ZooKeeper server and then fire up the Kafka server.
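With a standard Kafka distribution (paths assume you are in the Kafka installation directory), the two steps look like this:

```shell
# 1. Start ZooKeeper first
bin/zookeeper-server-start.sh config/zookeeper.properties

# 2. Then start the Kafka broker
bin/kafka-server-start.sh config/server.properties
```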
What is ZooKeeper?
ZooKeeper is a highly available service for maintaining small amounts of coordination data, notifying clients of changes in that data, and monitoring clients for failures.
To manage the large set of hosts, we use the ZooKeeper distributed coordination service. Since it was challenging to Coordinate and operate in a distributed environment, ZooKeeper makes it easy with its simple architecture and API.
In addition, developers can focus on core application logic without even worrying about the distributed nature of the application with the help of Zookeeper.
What do you mean by ZNode?
The term ZNode refers to every node in a ZooKeeper tree. Each znode maintains a stat structure, which includes version numbers for data changes and Access Control List (ACL) changes.
List the different types Of Znodes?
- Persistence ZNode (Default Node) – This node is alive even after the client, which created that particular znode, is disconnected.
- Ephemeral ZNode – The ephemeral znodes get deleted automatically when a client gets disconnected from the ZooKeeper ensemble
- Sequential ZNode – Sequential znodes can be either persistent or ephemeral; ZooKeeper appends a monotonically increasing counter to their names.
What is the ZooKeeper ensemble?
A group of nodes (or servers) that together form the distributed coordination service is what we call an ensemble. We run multiple ZooKeeper servers as an ensemble when we want high availability from the ZooKeeper service.
What is ZooKeeper quorum?
In production, ZooKeeper runs in replicated mode across a group of servers. A quorum is the minimum number of those servers (a strict majority of the ensemble) that must be running and in agreement for the service to operate.
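The majority rule is simple arithmetic, which is worth making concrete:

```python
def quorum_size(ensemble_size: int) -> int:
    """Minimum servers that must agree: a strict majority of the ensemble."""
    return ensemble_size // 2 + 1

# A 5-server ensemble tolerates 2 failures; a 3-server ensemble tolerates 1.
assert quorum_size(5) == 3
assert quorum_size(3) == 2
```

This is why ensembles use odd sizes: a 4-server ensemble needs a quorum of 3 and so tolerates no more failures than a 3-server one.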
What is ZooKeeper Atomic Broadcast (ZAB) protocol?
Zookeeper Atomic Broadcast (ZAB) is the protocol under the hood that drives the ZooKeeper replication order guarantee. It also handles electing a leader and the recovery of failing leaders and nodes.
What is the Paxos algorithm?
Paxos is a family of protocols for solving consensus in a network of unreliable or fallible nodes. Consensus is the process of agreeing on one result among a group of participants. It relies on a quorum for durability.
Explain about watch event in ZooKeeper?
Watches are one-time triggers. If you get a watch event and want to get notified of future changes, you must set another watch.
Because watches are one-time triggers, and there is the latency between getting the event and sending a new request to get a watch, you cannot reliably see every change that happens to a node in ZooKeeper. So be prepared to handle the case where the znode changes multiple times between getting the event and setting the watch again.
Also, a watch object, or function/context pair, will only be triggered once for a given notification.
What is the role of Kafka Producer API?
The goal is to expose all the producer functionality through a single API to the client.
What is the difference between Partition & Replica for Topic in Kafka?
Partitions: A single piece of a Kafka topic. The number of partitions is configurable on a per-topic basis. More partitions allow greater parallelism when reading from the topic, and the partition count caps how many consumers in a consumer group can read in parallel. The right number is hard to decide until you know how fast you produce data and how quickly you consume it; a topic you know will be high volume will need more partitions.
Replicas: These are copies of the partitions. Clients never write to or read from them directly; their only purpose is data redundancy. If your topic has n replicas, n-1 brokers can fail before there is any data loss. Additionally, you cannot have a topic with a replication factor higher than the number of brokers you have.
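The two sizing rules above can be stated as a small sketch (illustrative helper names, not Kafka APIs):

```python
def tolerated_failures(replication_factor: int) -> int:
    """With n replicas of a partition, up to n - 1 brokers can fail
    before that partition's data is lost."""
    return replication_factor - 1

def max_parallel_consumers(num_partitions: int) -> int:
    """Within one consumer group, at most one consumer reads each
    partition, so parallelism is capped by the partition count."""
    return num_partitions

# Replication factor 3 survives 2 broker failures; a 12-partition
# topic can be read by at most 12 consumers in one group.
```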
What is Geo-Replication in Kafka?
Kafka MirrorMaker provides geo-replication support for your clusters. With MirrorMaker, messages are replicated across multiple data centers or cloud regions. You can use this in active/passive scenarios for backup and recovery, or in active/active scenarios to place data closer to your users or support data-locality requirements.
Explain how you can get precisely one message from Kafka during data protection?
To get an exact message from Kafka, you have to avoid duplicates during data consumption and avoid duplication in production. Here are the two ways to get precisely one semantics during data production:
- Use a single writer per partition, and every time you get a network error, check the last message in that partition to see whether your previous write succeeded.
- In the message, include a primary key (UUID or something) and de-duplicate on the consumer.
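The second approach, consumer-side de-duplication keyed on a message ID, can be sketched like this (an in-memory illustration; a production consumer would persist the seen-ID set):

```python
import uuid

def deduplicate(messages):
    """Drop messages whose 'id' has already been seen, keeping the
    first occurrence of each (consumer-side de-duplication sketch)."""
    seen, unique = set(), []
    for msg in messages:
        if msg["id"] not in seen:
            seen.add(msg["id"])
            unique.append(msg)
    return unique

# A producer retry delivered the same payment message twice
mid = str(uuid.uuid4())
msgs = [{"id": mid, "body": "charge $10"}, {"id": mid, "body": "charge $10"}]
result = deduplicate(msgs)
```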
How can the Kafka cluster be rebalanced?
When new disks or nodes are added to an existing cluster, partitions are not rebalanced automatically; adding disks to existing nodes does not move existing partitions, so by itself it will not help rebalance. Instead, running the kafka-reassign-partitions.sh tool is recommended after adding new hosts.
What are the three broker configuration files?
The essential broker configuration properties are broker.id, log.dirs, and zookeeper.connect; they are set in the broker's server.properties file.
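A minimal sketch of the relevant server.properties entries (the log directory path and ZooKeeper address here are illustrative assumptions):

```properties
# Unique ID of this broker within the cluster
broker.id=0
# Directory (or comma-separated directories) for the message logs
log.dirs=/var/lib/kafka/logs
# ZooKeeper connection string
zookeeper.connect=localhost:2181
```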
What maximum message size can the Kafka server receive?
By default, the maximum message size the Kafka server can receive is about 1 million bytes (roughly 1 MB). This limit is controlled by the message.max.bytes broker setting.
How can the throughput of a remote consumer be improved?
If the consumer is not located in the same data center as the broker, tune the consumer's socket buffer size (socket.receive.buffer.bytes) to amortize the long network latency.
What is ISR in Kafka?
In-Sync Replicas are the replicated partitions in sync with their leader, i.e., those who have the same messages (or in sync). It’s not mandatory to have ISR equal to the number of replicas.
The definition of “in-sync” depends on the topic configuration, but by default it means that a replica is or has been fully caught up with the leader within the last 10 seconds. The setting for this period is replica.lag.time.max.ms and has a server default that can be overridden on a per-topic basis.
How can churn be reduced in ISR in Kafka, and when does the broker leave it?
The ISR contains all committed messages and should include every replica until there is a genuine failure. A replica is dropped out of the ISR when it falls behind the leader by more than replica.lag.time.max.ms. Churn is reduced by setting that threshold high enough that transient slowness (such as GC pauses or brief network hiccups) does not evict replicas, so brokers only leave the ISR on real failures.
What is the consumer lag in Kafka?
Reads in Kafka lag behind Writes as there is always some delay between writing and consuming the message. This delta between the consuming offset and the latest offset is called consumer lag.
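The delta is a simple subtraction per partition, which a monitoring script might compute as follows (illustrative helper, not a Kafka API):

```python
def consumer_lag(latest_offset: int, committed_offset: int) -> int:
    """Lag = how far the consumer's committed position trails the
    partition's log-end offset."""
    return latest_offset - committed_offset

# The log end is at offset 1500; the consumer has committed up to 1480.
lag = consumer_lag(latest_offset=1500, committed_offset=1480)
```

A lag of zero means the consumer is fully caught up; a steadily growing lag means it cannot keep pace with producers.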
What is Kafka producer Acknowledgement?
An acknowledgment, or ack, is sent to the producer by a broker to confirm receipt of a message. The acks level defines how many acknowledgments the producer requires before considering a request complete: acks=0 (do not wait), acks=1 (leader only), or acks=all (all in-sync replicas).
What is a Smart producer/ dumb broker?
In the smart producer/dumb broker model, the broker does not attempt to track which messages have been read by each consumer. It simply retains messages for a configured period, and consumers are responsible for tracking their own position (offset) in the log.