When not to use Kafka?

They tried to fit Kafka into the design without much thought.

It’s like they had one tool in their toolbox, so they tried to fix every problem with it. Their reasoning?

“This will make the system fail-proof. If any component fails, Kafka will rescue it.”

The result?

  • Overengineering.
  • Missing the core problem.
  • Wasted time in design discussions.

Here’s what they should’ve done instead:

STEP 1: Learn When to Use Events

Not every problem needs Kafka or even an event-driven solution.

Understand these patterns first:

  • Message Queues: For decoupling systems.
  • Pub-Sub: For fanout scenarios where one event triggers multiple actions.
  • Event Sourcing: When you need to rebuild system state based on events.
  • Event Streaming: For continuous data pipelines.
  • Request-Response: For simple synchronous tasks (still the default in many cases).
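The difference between the first two patterns is easy to blur in interviews, so here is a minimal in-memory sketch (plain Python, not a real broker): in a message queue each message is consumed by exactly one worker, while in pub-sub every subscriber receives every event.

```python
from collections import defaultdict, deque

class Broker:
    """Toy in-memory broker contrasting queue vs. pub-sub semantics."""

    def __init__(self):
        self.queue = deque()                  # queue: each message goes to ONE worker
        self.subscribers = defaultdict(list)  # pub-sub: each event goes to ALL subscribers

    def enqueue(self, msg):
        self.queue.append(msg)

    def dequeue(self):
        # A worker takes a message; no other worker will see it again.
        return self.queue.popleft() if self.queue else None

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fanout: every registered handler gets its own copy of the event.
        for handler in self.subscribers[topic]:
            handler(event)
```

If one event must trigger several independent actions, you want the `publish` path; if you just need to decouple a producer from a pool of interchangeable workers, the `dequeue` path is enough.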

STEP 2: Understand Where Kafka Is Required

Kafka works best when you need:

  1. Independent use cases that process the same data.
  2. High throughput systems where scalability is critical.

It supports:

  • Multiple consumer groups handling data in parallel.
  • Offset management so consumers can track what’s processed and replay events if needed.
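The offset mechanism above can be sketched in a few lines (an in-memory toy, not the real Kafka client): the log is append-only, each consumer group tracks its own position, and replay is just seeking an offset backwards.

```python
class TopicLog:
    """Append-only log; consumer groups track their own offsets, Kafka-style."""

    def __init__(self):
        self.log = []
        self.offsets = {}  # group_id -> next offset to read

    def append(self, event):
        self.log.append(event)

    def poll(self, group_id, max_records=10):
        # Each group reads from its own position; groups don't affect each other.
        start = self.offsets.get(group_id, 0)
        records = self.log[start:start + max_records]
        self.offsets[group_id] = start + len(records)
        return records

    def seek(self, group_id, offset):
        # Replay: rewind this group's offset; the log itself never changes.
        self.offsets[group_id] = offset
```

Two groups polling the same topic each see the full stream, and `seek(group, 0)` re-delivers everything to that group alone.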

But it’s not a fix for:

  • Capacity issues without proper partitioning and infrastructure planning.
  • Poor API design.
  • Data consistency problems (it provides durability but doesn’t enforce strong consistency).
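On the partitioning point: throughput comes from spreading keys across partitions, not from Kafka itself. A rough sketch of key-based partitioning (Kafka's default partitioner hashes the key with murmur2; md5 is used here only as a deterministic stand-in):

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Map a key to a partition deterministically.

    Same key -> same partition, which preserves per-key ordering, but it also
    means parallelism is capped at num_partitions no matter how many
    consumers you add.
    """
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions
```

This is why adding Kafka to a system with too few partitions (or one hot key) does not fix capacity: the bottleneck just moves into a single partition.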

STEP 3: Learn Real-World Use Cases

Example 1: E-commerce App

After a payment is completed:

  1. Send a notification to the customer.
  2. Update the inventory.

Why does Kafka work here?

Both tasks are independent and need to happen in parallel.

Kafka allows fanout where each consumer group processes its part.
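The fanout above can be sketched like this (handler names such as `notify_customer` are illustrative, not a real API; in Kafka each handler would be its own consumer group on the payments topic):

```python
def notify_customer(event):
    # Stand-in for an email/push service call.
    return f"email sent to {event['customer']}"

def update_inventory(event):
    # Stand-in for a stock-adjustment write.
    return f"stock reduced for {event['item']}"

def handle_payment_completed(event, consumers):
    """Fan one event out to every consumer; each runs independently."""
    return {name: handler(event) for name, handler in consumers.items()}
```

The key property: neither consumer knows about the other, and a slow inventory update never blocks the customer notification.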

Example 2: Video Streaming Platform

After a video upload:

  1. Convert the video to 3 different resolutions (1080p, 720p, 480p).
  2. Encode each resolution into 2 formats.

Why does Kafka work here?

  • Resolution processing happens independently.
  • Encoding starts only when resolutions are ready.
  • High throughput requirements.
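The two-stage pipeline can be sketched as pure data flow (format names `h264`/`vp9` are illustrative; in a real system each stage would be a topic, with encoders consuming the resolution-ready events):

```python
RESOLUTIONS = ["1080p", "720p", "480p"]
FORMATS = ["h264", "vp9"]  # illustrative choices

def transcode(video_id):
    # Stage 1: one independent job per resolution (parallel consumers).
    resolution_jobs = [(video_id, r) for r in RESOLUTIONS]
    # Stage 2: encoding fans out only from completed resolution jobs,
    # so 3 resolutions x 2 formats = 6 encoded outputs.
    return [(vid, r, f) for vid, r in resolution_jobs for f in FORMATS]
```

Modeling each stage as events keeps the stages decoupled: adding a fourth resolution or a third format is a configuration change, not a rewrite of the pipeline.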

Conclusion - System design is about trade-offs, not throwing a single solution at every problem.

Next time you face a design interview, start by understanding:

  1. Does the use case even need events?
  2. Can the problem be solved more simply?
  3. Is Kafka helping scalability, or just hiding underlying issues?