Getting Started With Kafka and Event-Driven Microservices

OpsLevel | November 23, 2021

Event-driven architecture (EDA) is more scalable, responsive, and flexible than its synchronous counterpart. That’s because it processes and distributes information as it arrives instead of storing it for processing when a client requests it.

One of the fastest and easiest ways to get started with EDA is to put Kafka behind your microservices.

What’s special about EDA and Kafka? How do they achieve their speed and scalability? Let’s look at Kafka, EDA, and how you can fit them into your microservice strategy.

What Is Kafka?

Apache Kafka’s docs call it an “event streaming platform.” That’s a good definition because Kafka is more than just an API or an application. So, let’s start there.

Kafka’s Event Streaming

Event streaming creates durable feeds with data captured from sensors, mobile devices, application services, databases, and other sources.

Kafka shapes that data and routes it to various destinations as event streams. The concept of durable streams is important with Kafka because you can receive those streams in real time or replay them later.

That means your services can respond to events as they happen and replay streams on demand, such as when they need to recover from a restart.
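
To make this concrete, here's a minimal sketch of a consumer that replays a durable stream from the beginning. The broker address, group id, and "orders" topic are all assumptions, not anything specific to your setup:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ReplayConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("group.id", "replay-demo");             // hypothetical group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));           // hypothetical topic
            consumer.poll(Duration.ofMillis(500));           // join the group, get partitions
            consumer.seekToBeginning(consumer.assignment()); // rewind: replay the stream
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```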

Many companies are already using event streaming for a broad set of applications, such as:

• monitoring medical equipment
• capturing events from IoT devices
• processing, monitoring, and storing financial transactions
• building event-driven architectures

That last bullet is why we’re here. But we need to look at platforms first.

Kafka’s Event Platform

Kafka isn’t a single application you install on a server or an API you add to your code. It’s a platform, a collection of components for collecting data, creating low-latency and high-throughput event streams, and consuming events from them.

The platform ships with the tools to do all of this.

Kafka already has adapters for a wide variety of external systems via its Kafka Connect API and tools. Here are a few examples:

• JDBC Source and Sink can import and export data to any JDBC-compliant database
• JMS Source will republish data from any JMS messaging system
• ActiveMQ Sink and Source can send and receive data from ActiveMQ
• Amazon CloudWatch Source imports data from CloudWatch logs and publishes it as events

There are more than 120 connectors available for sending and receiving events in Kafka.

Kafka also provides a stream processing library, Kafka Streams. You can use it to process data arriving through existing connectors, or to build your own stream processing applications.
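
As a sketch of what the Streams API looks like, the topology below filters events from one hypothetical topic ("payments") into another ("failed-payments"). The application id, broker address, topic names, and filter condition are all assumptions:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class FilterTopology {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "payment-filter"); // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> payments = builder.stream("payments"); // hypothetical topic
        // Keep only failed payments and republish them to their own stream.
        payments.filter((key, value) -> value.contains("\"status\":\"failed\""))
                .to("failed-payments");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```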

Finally, it has a message broker for routing events between sources and sinks. It runs in a cluster of one or more nodes.

What Is a Microservice?

We probably don’t need to spend a lot of time on microservices, but let’s go over a quick refresher before discussing how they can benefit from EDA.

Microservices are discrete services you design to do one and only one thing well, as part of a greater system.

Here’s a definition from Martin Fowler:

"In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies."

These small services act together to make up a larger application. Few dependencies exist between them, which reduces complexity and increases agility. You can add new features to one service without breaking another. And you can add new services, too.

But this decoupling doesn't come for free. Individual services might not need to talk to each other, but unless they present a consistent view of data and operations to clients, they're not an application; they're just a collection of potentially related services.

Now, let’s see how event-driven architecture can help with that.

Event-Driven Architecture

Event-driven architecture (EDA) comprises components that produce, consume, and react to events.

It’s a pattern that works well with loosely coupled elements like microservices.

EDA Components

EDA systems have three parts: event sources, sinks, and channels.

Event sources publish events. They detect a state change, collect the associated information, and publish it. Sources aren’t aware of event listeners or their state.

Event sinks consume events. Similar to sources, sinks are unaware of their specific counterparts. More than one source may publish an event type, and a new source can replace another. Sinks react to an event when it’s published.

Event channels are the paths events take between sources and sinks.

Channels are essential because sinks don’t solicit events; they see them as they occur. Sources don’t offer events; instead, they publish to a channel.

A sink subscribes to the proper channels and gets the events it’s interested in. The channel is a demarcation point between sources and sinks.

The separation of concerns between sources and sinks is important. Sources publish events to channels. Sinks take in events from those same channels. To anthropomorphize things a bit, they shouldn’t know about each other.
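
A minimal source sketch makes the point: the producer below addresses only a channel (a Kafka topic), never a consumer. The broker address, topic name, key, and payload are all hypothetical:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SensorSource {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The source knows only the channel ("device-readings"). It has no
            // idea which sinks, if any, are subscribed on the other side.
            producer.send(new ProducerRecord<>("device-readings",
                    "device-42", "{\"temperature\": 21.5}"));
        }
    }
}
```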

Why Kafka and Microservices?

That separation of concerns is what gives event-driven systems their speed and scalability, and it's why you want to see how Kafka can help you.

A source publishes events that any number of sinks will consume, so it only needs to post each event once. If it publishes the event to a durable stream, the event remains available for as long as the stream retains it.

The source will never have to publish it again. So, services that supply data to your clients don’t have to work as hard.

A sink consumes events that may come from hundreds of sources. So, once you develop a sink that knows how to take in an event, you’re done. You can reuse that code or the entire application as often as you need. Less code. More reuse.

Getting Started With Kafka and Microservices

So how can you get started with Kafka and microservices? Why should you? Why shouldn’t you?

Event Sources

Chances are, your microservices are already consuming data from sources supported by Kafka. That means you won’t have to write much new code, if any, to create your event sources. So, set one up and see what your data looks like as an event stream.

Events and Event Channels

Now that you’ve set up that connector and see how easy it is to generate events, you’re ready to start pointing your microservices at it, right? Not so fast!

First, you need to organize your data into events and then arrange your events into channels.

If you’re retrofitting an existing service, you’re probably tempted to use your current schema. But will it make sense if more than one service consumes it in the future? Does it make sense if it’s going to represent more than one event?

You should also look at value-added services like Confluent Schema Registry for storing your schemas and making them available to your services.
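
As a sketch of what that looks like with Confluent's Avro serializer, the producer below registers its schema with Schema Registry on first use and embeds the schema id in every message. The broker and registry URLs, topic, and schema are assumptions:

```java
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroSource {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081"); // assumption

        // Hypothetical event schema; in practice you'd manage this in the registry.
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"OrderPlaced\",\"fields\":"
              + "[{\"name\":\"orderId\",\"type\":\"string\"}]}");
        GenericRecord event = new GenericData.Record(schema);
        event.put("orderId", "o-123");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", event)); // hypothetical topic
        }
    }
}
```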

Remember, event channels are a critical part of the infrastructure. They maintain that vital separation of concerns between sources and sinks. So don’t fall into the trap of one big channel that all sources and sinks connect to. Map related events to channels that will make sense as your system grows.
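
For example, you might provision one topic per event domain up front. Here's a sketch using Kafka's AdminClient; the topic names, partition counts, and replication factor are hypothetical sizing choices:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateChannels {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker

        try (AdminClient admin = AdminClient.create(props)) {
            // One channel per event domain, not one catch-all topic.
            admin.createTopics(List.of(
                    new NewTopic("orders", 6, (short) 3),
                    new NewTopic("payments", 6, (short) 3),
                    new NewTopic("shipments", 6, (short) 3)))
                .all().get(); // block until the broker confirms creation
        }
    }
}
```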

Event Sinks

Now, with your schemas and channels defined, you can work on your sinks. You’ve already done most of the hard work by organizing your data into events and channels.

Which service will watch each channel? Some services will need more than one. Others may take in data from one and republish it to another after filtering and transforming it.
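
Here's a rough sketch of that last pattern: a sink that consumes from one channel, filters, and republishes to another. The broker address, topic names, and filter condition are all hypothetical:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EnrichingSink {
    public static void main(String[] args) {
        Properties cProps = new Properties();
        cProps.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        cProps.put("group.id", "enricher");                // hypothetical group
        cProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        cProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        Properties pProps = new Properties();
        pProps.put("bootstrap.servers", "localhost:9092");
        pProps.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        pProps.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pProps)) {
            consumer.subscribe(List.of("raw-readings")); // hypothetical input channel
            while (true) {
                for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(1))) {
                    // Filter, then republish to a second channel.
                    if (r.value().contains("\"unit\":\"C\"")) {
                        producer.send(new ProducerRecord<>("celsius-readings",
                                r.key(), r.value()));
                    }
                }
            }
        }
    }
}
```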

Monitoring

How do you monitor these loosely coupled processes?

You can monitor Kafka with any JMX reporter or with Confluent tools such as Confluent Health+.
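
Beyond JMX, each Kafka client also exposes the same metrics programmatically. A small sketch, assuming a local broker and a hypothetical "orders" topic:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ClientMetrics {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("group.id", "metrics-demo");            // hypothetical group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders")); // hypothetical topic
            consumer.poll(Duration.ofSeconds(1));
            // Print one useful lag metric; metrics() returns everything JMX would.
            consumer.metrics().forEach((name, metric) -> {
                if ("records-lag-max".equals(name.name())) {
                    System.out.printf("%s = %s%n", name.name(), metric.metricValue());
                }
            });
        }
    }
}
```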

If you use Kafka through a SaaS provider, it will offer monitoring tools, too. Kafka itself is a robust platform that scales to very high throughput, with fault tolerance built into its messaging layer.

Kafka monitoring covers your sources and your sinks, but you’re going to want to observe how they’re doing before and after they process events. EDA makes it easier to create smaller services that do that one thing well, so they’re going to proliferate. You need a catalog to keep track of your services, who owns them, and how they’re supported. OpsLevel can help with that.

Recap: Kafka and Microservices

We’ve covered what Kafka and event-driven architecture (EDA) are, how EDA can make your microservices more robust and easier to scale, and how to introduce Kafka into your environment. Take a good look at Kafka and EDA today!

If you want to learn more about OpsLevel, request a custom demo and see how we can help you manage your microservices.

This post was written by Eric Goebelbecker. Eric has worked in the financial markets in New York City for 25 years, developing infrastructure for market data and financial information exchange (FIX) protocol networks. He loves to talk about what makes teams effective (or not so effective!).
