Using Kafka for Remote Procedure Calls (RPC) might raise eyebrows among seasoned developers. At its core, RPC is a programming technique that creates the illusion of running a function on a local machine when it executes on a remote server. When you make an RPC call, your application sends a request to a remote service, waits for it to execute some code, and then receives the results – all while making it feel like a regular function call in your code.
Apache Kafka, however, was designed as an event log, optimizing for throughput over latency. Yet, sometimes unconventional approaches yield surprising results. This article explores implementing RPC patterns with Kafka using the Karafka framework.
The idea emerged from discussing synchronous communication in event-driven architectures. What started as a theoretical question – “Could we implement RPC with Kafka?” – evolved into a working proof of concept that achieved millisecond response times in local testing.
In modern distributed systems, the default response to new requirements often involves adding another specialized tool to the technology stack. However, this approach comes with its own costs: another system to deploy, secure, and monitor, more operational knowledge for the team to maintain, and a broader surface for failures.
Sometimes, stretching the capabilities of existing infrastructure – even in unconventional ways – can provide a pragmatic solution that avoids these downsides.
Disclaimer: This implementation serves as a proof-of-concept and learning resource. While functional, it lacks production-ready features like proper timeout handling, resource cleanup after timeouts, error propagation, retries, message validation, security measures, and proper metrics/monitoring. The implementation also doesn’t handle edge cases like Kafka cluster failures. Use this as a starting point to build a more robust solution.
Building an RPC pattern on top of Kafka requires careful consideration of both synchronous and asynchronous aspects of communication. At its core, we’re creating a synchronous-feeling operation by orchestrating asynchronous message flows underneath. From the client’s perspective, making an RPC call should feel synchronous – send a request and wait for a response. However, once a command enters Kafka, all the underlying operations are asynchronous.
Such an architecture has to rely on several key components working together: single-partition topics for commands and their results, a consumer that executes commands and publishes responses, correlation IDs that match each response to its request, and a client-side synchronization mechanism that blocks until a matching response arrives.
A unique correlation ID is always generated when a client initiates an RPC call. The command is then published to Kafka, where it's processed asynchronously. The client blocks execution using a mutex and condition variable while waiting for the response. Meanwhile, the message flows through several stages: the command lands in the commands topic, the consumer processes it and publishes the result to the commands_results topic under the same correlation key, and the client's background listener matches that key and wakes the blocked thread.
Below, you can find a visual representation of the RPC flow over Kafka. The diagram shows the journey of a single request-response cycle:
This architecture makes several conscious trade-offs. We use single-partition topics to ensure strict ordering, which limits throughput but simplifies correlation and keeps processing strictly in order – the partition count and the correlation strategy could be adjusted if higher scale becomes necessary. The custom consumer approach avoids consumer group rebalancing delays, while the synchronization mechanism bridges the gap between Kafka's asynchronous nature and our desired synchronous behavior. While this design prioritizes correctness over maximum throughput, it aligns well with typical RPC use cases where reliability and simplicity are key requirements.
Getting from concept to working code requires several key components to work together. Let’s examine the implementation of our RPC pattern with Kafka.
First, we need to define our topics. We use a single-partition configuration to maintain message ordering:
topic :commands do
  config(partitions: 1)
  consumer CommandsConsumer
end

topic :commands_results do
  config(partitions: 1)
  active false
end
This configuration defines two essential topics: commands, where clients publish code to execute and which is consumed by CommandsConsumer, and commands_results, where results are published back – marked active false because the client reads it with a custom iterator rather than a Karafka consumer group.
The consumer handles incoming commands and publishes results back to the results topic:
class CommandsConsumer < ApplicationConsumer
  def consume
    messages.each do |message|
      Karafka.producer.produce_async(
        topic: 'commands_results',
        # We evaluate whatever Ruby code comes in the payload
        # and return the stringified result of the evaluation
        payload: eval(message.raw_payload).to_s,
        key: message.key
      )

      mark_as_consumed(message)
    end
  end
end
We're using a simple eval to process commands for demonstration purposes. In production, you'd want to implement proper command validation, deserialization, and secure processing logic.
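For illustration, here is a minimal sketch of one safer alternative: a whitelist of named command handlers fed by a JSON envelope. The COMMANDS registry and the envelope format are assumptions for this example, not part of the original proof of concept.

require 'json'

# Hypothetical whitelist of permitted commands (illustrative only)
COMMANDS = {
  'add'    => ->(args) { args.sum },
  'upcase' => ->(args) { args.first.to_s.upcase }
}.freeze

# Dispatches a JSON-encoded request to a whitelisted handler
# instead of eval-ing arbitrary Ruby code
def process_command(raw_payload)
  request = JSON.parse(raw_payload)
  handler = COMMANDS.fetch(request['name']) do
    raise ArgumentError, "Unknown command: #{request['name']}"
  end
  handler.call(request['args'])
end

process_command('{"name":"add","args":[1,2]}') #=> 3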
To bridge Kafka's asynchronous nature with synchronous RPC behavior, we implement a synchronization mechanism using Ruby's mutex and condition variables:
require 'singleton'

class Accu
  include Singleton

  def initialize
    # Pending requests: correlation ID => [mutex, condition variable]
    @running = {}
    # Delivered results awaiting pickup, keyed by correlation ID
    @results = {}
  end

  # Registers a pending request and returns its synchronization pair
  def register(id)
    @running[id] = [Mutex.new, ConditionVariable.new]
  end

  # Stores the result and wakes the thread blocked on this ID
  def unlock(id, result)
    return false unless @running.key?(id)

    @results[id] = result
    mutex, cond = @running.delete(id)
    mutex.synchronize { cond.signal }
  end

  # Picks up (and removes) the result of a completed request
  def result(id)
    @results.delete(id)
  end
end
This mechanism maintains a registry of pending requests and coordinates the blocking and unblocking of client threads based on correlation IDs. When a response arrives, it signals the corresponding condition variable to unblock the waiting thread.
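To make the handshake concrete, here is a minimal sketch of the mechanism in isolation, without Kafka in the loop. The inline thread and the hardcoded '42' result are stand-ins for the consumer's response, not part of the original code.

require 'securerandom'

id = SecureRandom.uuid
mutex, cond = Accu.instance.register(id)

responder = nil

mutex.synchronize do
  # The responder cannot signal until cond.wait releases the mutex,
  # so the wakeup cannot be lost
  responder = Thread.new { Accu.instance.unlock(id, '42') }
  cond.wait(mutex)
end

responder.join
Accu.instance.result(id) #=> "42"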
Our client implementation brings everything together with two main components: a background listener (run) that consumes responses and unblocks waiting threads, and a perform method that issues commands and blocks until their results arrive:
require 'securerandom'

class Client
  class << self
    # Background listener: consumes responses and unblocks waiters
    def run
      iterator = Karafka::Pro::Iterator.new(
        { 'commands_results' => true },
        settings: {
          'bootstrap.servers': '127.0.0.1:9092',
          'enable.partition.eof': false,
          'auto.offset.reset': 'latest'
        },
        yield_nil: true,
        max_wait_time: 100
      )

      iterator.each do |message|
        next unless message

        Accu.instance.unlock(message.key, message.raw_payload)
      rescue StandardError => e
        puts e
        sleep(rand)
        next
      end
    end

    # Issues a command and blocks until its result arrives
    def perform(ruby_remote_code)
      cmd_id = SecureRandom.uuid

      # Register before publishing so a fast response cannot arrive
      # before we are ready to wait for it
      mutex, cond = Accu.instance.register(cmd_id)

      mutex.synchronize do
        Karafka.producer.produce_sync(
          topic: 'commands',
          payload: ruby_remote_code,
          key: cmd_id
        )

        # Holding the mutex while publishing means the listener's
        # signal cannot fire between publishing and waiting
        cond.wait(mutex)
      end

      Accu.instance.result(cmd_id)
    end
  end
end
The client uses Karafka's Iterator to consume responses without joining a consumer group, which avoids rebalancing delays and ensures we only process new messages. The perform method handles the synchronous side: it generates a correlation ID, registers the request for a wakeup, publishes the command, and blocks on the condition variable until the listener signals that the response has arrived.
To use this RPC implementation, first start the response listener in a background thread:
# Do this only once per process
Thread.new { Client.run }
Then, you can make synchronous RPC calls from your application:
Client.perform('1 + 1')
#=> "2"
Each call blocks until the response arrives, making it feel like a regular synchronous method call despite the underlying asynchronous message flow.
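Since responses are matched by correlation ID rather than by arrival order, several threads can issue calls concurrently over the same two topics. A small sketch (the arithmetic payloads are just examples):

# Five concurrent RPC calls; each thread blocks independently until
# its own correlation ID is unlocked by the listener
threads = 5.times.map do |i|
  Thread.new { Client.perform("#{i} * 10") }
end

threads.map(&:value) #=> ["0", "10", "20", "30", "40"]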
Despite its simplicity, this implementation achieves impressive performance in local testing, with roundtrip times as low as 3ms. However, remember this assumes ideal conditions and minimal command processing time. Real-world usage would need additional error handling, timeouts, and more robust command processing logic.
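As one example of the missing timeout handling, the hedged sketch below shows how a client-side timeout could be bolted on. ConditionVariable#wait accepts a timeout in seconds; perform_with_timeout is a hypothetical variant, not part of the original proof of concept, and a complete solution would also deregister the pending request so the registry does not leak.

# Hypothetical variant of Client.perform with a client-side timeout
def perform_with_timeout(ruby_remote_code, timeout: 5)
  cmd_id = SecureRandom.uuid
  mutex, cond = Accu.instance.register(cmd_id)

  mutex.synchronize do
    Karafka.producer.produce_sync(
      topic: 'commands',
      payload: ruby_remote_code,
      key: cmd_id
    )

    # Wakes up either when signaled or after `timeout` seconds
    cond.wait(mutex, timeout)
  end

  # nil means the response never arrived in time
  Accu.instance.result(cmd_id) || raise('RPC call timed out')
end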
The performance characteristics of this RPC implementation are surprisingly good, but they come with important caveats and considerations that need to be understood for proper usage.
In our local testing environment, the implementation showed impressive numbers. A single roundtrip can complete in as little as 3ms, and even 100 sequential commands finish in about a third of a second:
require 'benchmark'

Benchmark.measure do
  100.times { Client.perform('1 + 1') }
end
#=> 0.035734 0.011570 0.047304 ( 0.316631)
However, it's crucial to understand that these numbers represent ideal conditions: a local single-broker setup with no real network latency, trivial command processing time, and no competing load on the brokers or consumers.
While Kafka wasn't designed for RPC patterns, this implementation demonstrates that with careful consideration and proper use of Karafka's features, we can build reliable request-response patterns on top of it. The approach shines particularly in environments where Kafka is already a central infrastructure, allowing messaging architecture to be extended without introducing additional technologies.
However, this isn't a silver bullet solution. Success with this pattern requires careful attention to timeouts, error handling, and monitoring. It works best when Kafka is already part of your stack, and your use case can tolerate slightly higher latencies than traditional RPC solutions.
This fascinating pattern challenges our preconceptions about messaging systems and RPC. It demonstrates that understanding your tools deeply often reveals capabilities beyond their primary use cases. While unsuitable for every situation, it provides a pragmatic alternative when adding new infrastructure components isn't desirable.