Using Service Fabric Partitioning to Concurrently Receive From Event Hub Partitions

Prologue

The classic tale: you have an Azure Event Hub with a number of partitions, and you run your receiver process as a singleton Azure Web Job or a single-instance Azure Worker Role cloud service. Both will use the Event Processor Host to pump events into your process. The host externalizes offset state storage to Azure Blob Storage (to survive processor crashes and restarts). This works fine, yet it has one problem: you cannot scale it out easily. You have to create more separate cloud services or web jobs, each handling some of your Event Hub partitions, which complicates your upgrade and deployment processes.

This post is about solving this problem using a different approach.

Service Fabric Azure Event Hub Listener

Azure Service Fabric partitions are, in a sense, logical processes (for the purists, I am talking about primary replicas). They can each live separately on a server, or coexist on a node, depending on how big your cluster is and your placement approach.

Using this fundamental knowledge we can create a service that receives events from Azure Event Hub as follows:

  • If the number of Event Hub partitions is greater than the number of Service Fabric partitions, then each Service Fabric partition receives from multiple Event Hub partitions at once.
  • If the number of Service Fabric partitions equals the number of Event Hub partitions, then each Service Fabric partition receives from exactly one Event Hub partition.

Because we have absolute control over how the above mapping is done, we can also add a 1:1 mapping (irrespective of the number of partitions on either side), or even custom funky logic where some Service Fabric service partitions do the receiving while the rest do something else.
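To make the mapping concrete, here is a minimal sketch (the helper name and signature are my own, not the repo's) that distributes Event Hub partitions across Service Fabric partitions by simple modulo arithmetic:

```csharp
using System.Collections.Generic;

// Hypothetical sketch: assign Event Hub partitions to a Service Fabric partition
// by modulo distribution. Names and shape are illustrative only.
public static class PartitionMapper
{
    // servicePartitionIndex: ordinal of this Service Fabric partition (0-based)
    // servicePartitionCount: total Service Fabric partitions of the service
    // eventHubPartitionIds: partition ids reported by the Event Hub runtime info
    public static IEnumerable<string> GetAssignedEventHubPartitions(
        int servicePartitionIndex,
        int servicePartitionCount,
        IReadOnlyList<string> eventHubPartitionIds)
    {
        for (int i = 0; i < eventHubPartitionIds.Count; i++)
        {
            // Event Hub partition i goes to Service Fabric partition (i % count).
            // More Event Hub partitions than service partitions: each service
            // partition receives from several. Equal counts: the mapping is 1:1.
            if (i % servicePartitionCount == servicePartitionIndex)
                yield return eventHubPartitionIds[i];
        }
    }
}
```

Any other assignment policy (including the "some partitions receive, the rest do other work" case) is just a different implementation of the same small function.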

The second piece of the new tale is state. Service Fabric has its own state via stateful services, so we can save a few network I/Os by using them to store the offsets reliably inside the cluster with a Reliable Dictionary. The state management for our receiver is pluggable, meaning you can replace it with whatever custom storage you want (one of the scenarios I had in mind is running the listener in a stateless service and using another, backing stateful service for the state).
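As an illustration of what "pluggable" could mean here, a minimal sketch of a checkpoint store with a Reliable Dictionary-backed implementation might look like this (the interface name and shape are assumptions for this post, not the repo's actual API):

```csharp
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data;
using Microsoft.ServiceFabric.Data.Collections;

// Hypothetical pluggable checkpoint store: swap this implementation for blob
// storage (or anything else) without touching the receive loop.
public interface ICheckpointStore
{
    Task SaveOffsetAsync(string eventHubPartitionId, string offset);
    Task<string> GetOffsetAsync(string eventHubPartitionId);
}

// Sketch of a Reliable Dictionary-backed store for stateful services.
public class ReliableCheckpointStore : ICheckpointStore
{
    private readonly IReliableStateManager stateManager;

    public ReliableCheckpointStore(IReliableStateManager stateManager)
    {
        this.stateManager = stateManager;
    }

    public async Task SaveOffsetAsync(string eventHubPartitionId, string offset)
    {
        var offsets = await stateManager
            .GetOrAddAsync<IReliableDictionary<string, string>>("eventhub-offsets");

        using (var tx = stateManager.CreateTransaction())
        {
            await offsets.SetAsync(tx, eventHubPartitionId, offset);
            await tx.CommitAsync();
        }
    }

    public async Task<string> GetOffsetAsync(string eventHubPartitionId)
    {
        var offsets = await stateManager
            .GetOrAddAsync<IReliableDictionary<string, string>>("eventhub-offsets");

        using (var tx = stateManager.CreateTransaction())
        {
            var value = await offsets.TryGetValueAsync(tx, eventHubPartitionId);
            return value.HasValue ? value.Value : null;
        }
    }
}
```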

The third piece of the new tale is usage. Event Processor Host is great, but it requires a few interfaces to be implemented (because it is a general-purpose component). Ours is a highly specialized, single-purpose component, so we can simplify the entire process down to implementing one interface (which represents what happens to the events and when to store the offset).
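For example, the single consumer-facing contract could be as small as the following (hypothetical names; the real interface lives in the repo linked below):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

// Hypothetical single-interface contract: the listener hands you a batch of
// events, and you decide when the current offset is safe to checkpoint.
public interface IEventDataHandler
{
    // Called for each batch received from an Event Hub partition.
    Task ProcessEventsAsync(string eventHubPartitionId, IEnumerable<EventData> events);

    // Called by the listener to ask whether the current offset should be saved now.
    Task<bool> ShouldCheckpointAsync(string eventHubPartitionId, string currentOffset);
}
```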

The last piece of our new tale is that the entire thing is wrapped in a Service Fabric ICommunicationListener implementation, allowing you to use it just like any other Service Fabric listener.
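A bare skeleton of such a wrapper, assuming only the standard ICommunicationListener contract and a class name I made up for this sketch, would look roughly like this:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Communication.Runtime;

// Skeleton sketch: wrapping the receive pump in ICommunicationListener so it
// opens and closes with the replica, like any other Service Fabric listener.
public class EventHubCommunicationListener : ICommunicationListener
{
    public Task<string> OpenAsync(CancellationToken cancellationToken)
    {
        // Here the real implementation would resolve its Event Hub partition
        // assignments, load offsets from the checkpoint store, and start the
        // receive loop. The returned string is the listener's published address
        // (it can be empty for a receive-only listener).
        return Task.FromResult(string.Empty);
    }

    public Task CloseAsync(CancellationToken cancellationToken)
    {
        // Stop receiving and flush the last checkpoint before the replica closes.
        return Task.CompletedTask;
    }

    public void Abort()
    {
        // Best-effort shutdown when the replica is aborted.
    }
}
```

From CreateServiceReplicaListeners you would then return a ServiceReplicaListener that constructs this listener, exactly the same way you register any other one.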

The code for the above is on GitHub: https://github.com/khenidak/service-fabric-eventhub-listener

till next time

@khnidk
