Designed for those new to Kafka, this tutorial will introduce participants to the fundamentals of Kafka and the Confluent platform. Participants will learn how to build an application that can publish data to, and receive data from, Apache Kafka, a distributed streaming platform. The tutorial includes short hands-on exercises to help solidify the learning.
We will discuss what Kafka is, explain how it works, and teach the fundamentals of how to build modern data applications with Kafka. We’ll also discuss key architectural concepts and developer APIs. This tutorial is ideal for application developers, ETL (extract, transform, load) developers, or data scientists who are new to Kafka.
Topics will include:
- Introduction to what Kafka is, its capabilities, and major components
- Types of data appropriate for Kafka
- Producers, consumers, and brokers, and their roles in a Kafka cluster
- Developer APIs in various languages for publishing to and subscribing to Kafka topics
- Common patterns for application development with Kafka
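The producer/consumer/broker roles listed above can be illustrated with a toy in-memory model. This is a conceptual sketch in plain Python, not the actual Kafka client API (real applications would use a client library such as confluent-kafka): the broker keeps an append-only log per topic, producers append records, and each consumer tracks its own read offset.

```python
# Toy model of Kafka's publish/subscribe pattern -- conceptual only,
# NOT the real Kafka client API.

class Broker:
    """Stores records per topic in an append-only log, as a Kafka broker does."""
    def __init__(self):
        self.topics = {}  # topic name -> list of records (the "log")

    def append(self, topic, record):
        self.topics.setdefault(topic, []).append(record)
        return len(self.topics[topic]) - 1  # offset of the new record

    def read(self, topic, offset):
        return self.topics.get(topic, [])[offset:]  # records at/after offset


class Producer:
    """Publishes records to a topic on the broker."""
    def __init__(self, broker):
        self.broker = broker

    def send(self, topic, record):
        return self.broker.append(topic, record)


class Consumer:
    """Reads records from a topic, tracking its own offset."""
    def __init__(self, broker, topic):
        self.broker, self.topic, self.offset = broker, topic, 0

    def poll(self):
        records = self.broker.read(self.topic, self.offset)
        self.offset += len(records)  # advance past consumed records
        return records


broker = Broker()
producer = Producer(broker)
consumer = Consumer(broker, "page-views")  # "page-views" is a hypothetical topic

producer.send("page-views", {"user": "alice", "page": "/home"})
producer.send("page-views", {"user": "bob", "page": "/docs"})

first_batch = consumer.poll()   # returns both records
second_batch = consumer.poll()  # empty: the consumer's offset has advanced
```

Note how the consumer, not the broker, owns the read position; this mirrors Kafka's design, where consumers track offsets and the broker simply retains the log.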