This slide deck comes from the DevConf.cz session on building an event-streaming pipeline running on Kubernetes, using Apache Kafka for ingestion, Apache Camel for integration, and InfluxDB with Grafana for dashboards.
Showcases Category: Kafka
Hippo is a data ingestor service for gRPC- and REST-based clients. It publishes your messages to a Kafka topic and eventually persists them to InfluxDB. It is built to be easily scalable and to prevent a SPOF for your mission-critical data collection.
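The ingestion path Hippo describes (client payload → Kafka → InfluxDB) can be sketched in a few lines. This is a minimal illustration, not code from the project: the function names and payload fields are hypothetical, and the Kafka and InfluxDB clients themselves are left out so the transformation logic stands alone.

```python
import json

def to_kafka_message(measurement, tags, fields, ts):
    """Serialize an incoming REST/gRPC payload for publishing to a Kafka topic.
    Field names here are illustrative, not Hippo's actual schema."""
    return json.dumps({"measurement": measurement, "tags": tags,
                       "fields": fields, "time": ts}).encode("utf-8")

def to_line_protocol(msg_bytes):
    """Render a consumed Kafka message as an InfluxDB line-protocol point."""
    m = json.loads(msg_bytes)
    tags = ",".join(f"{k}={v}" for k, v in sorted(m["tags"].items()))
    fields = ",".join(f"{k}={v}" for k, v in sorted(m["fields"].items()))
    return f'{m["measurement"]},{tags} {fields} {m["time"]}'

msg = to_kafka_message("telemetry", {"host": "car-44"}, {"speed": 301.5}, 1)
print(to_line_protocol(msg))  # telemetry,host=car-44 speed=301.5 1
```

Decoupling the two steps through Kafka is what makes this pattern scalable: producers and the InfluxDB writer can be scaled independently.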
This video shows a quick demo of the F1 Telemetry – Kafka project.
I am building a data streaming platform for a visual effects production company. This article walks through the platform's current architecture.
Horus is a distributed tool that collects SNMP and ICMP metrics from various network equipment and sends the results to Kafka, Prometheus, and/or InfluxDB.
If you are dealing with streaming analysis of your data, there are tools that can deliver performant and easy-to-interpret results. First, there is Kafka, a distributed streaming platform that lets its users send and receive live messages carrying data (you can read more about it here). We will use it as our streaming environment. Then, if we want to visualize our results in real time, we need a tool that can capture our data and predictions: that is Grafana, which, among its data sources, can be connected to InfluxDB, an open-source time-series database. So, in this article we will build an ML algorithm that can extract information and make predictions on our data in real time, through the following steps:
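The core idea of real-time prediction on a stream can be shown without reproducing the article's model. Below is a minimal sketch using an exponentially weighted moving average as a stand-in for the ML algorithm; the class name is made up, and the list stands in for messages consumed from a Kafka topic, with each prediction being what you would write to InfluxDB for Grafana to chart.

```python
class EwmaPredictor:
    """Exponentially weighted moving average: a tiny online 'model'
    whose state updates with every message consumed from the stream."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha   # weight given to the newest observation
        self.state = None

    def update(self, value):
        self.state = value if self.state is None else (
            self.alpha * value + (1 - self.alpha) * self.state)
        return self.state    # the prediction to write to InfluxDB

predictor = EwmaPredictor(alpha=0.5)
stream = [10.0, 12.0, 11.0, 13.0]   # stand-in for a Kafka topic
predictions = [predictor.update(v) for v in stream]
print(predictions)  # [10.0, 11.0, 11.0, 12.0]
```

The key property, shared by any streaming ML setup, is that the model updates incrementally per message rather than retraining on the full dataset.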
This post focuses on monitoring your Kafka deployment in Kubernetes if you can’t or won’t use Prometheus. Kafka exposes its metrics through JMX. To be able to collect metrics in your favourite reporting backend (e.g. InfluxDB or Graphite) you need a way to query metrics using the JMX protocol and transport them. This is where jmxtrans comes in handy. With a few small tweaks it turns out it’s pretty effective to run this as a sidecar in your Kafka pods, have it query for metrics and transport them into your reporting backend. For the impatient: all sample code is available here.
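The post's sample code is linked above; as a rough illustration of what the sidecar consumes, here is a jmxtrans-style JSON query assembled in Python. The MBean name is a real Kafka broker metric, but the writer class and its settings are assumptions that should be checked against the jmxtrans documentation before use.

```python
import json

# Illustrative jmxtrans-style config: one query against a well-known Kafka
# broker MBean, shipped to InfluxDB. The outputWriter class name and options
# are assumptions; verify them against the jmxtrans documentation.
config = {
    "servers": [{
        "host": "localhost",   # JMX endpoint reachable inside the same pod
        "port": "9999",
        "queries": [{
            "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
            "attr": ["Count", "OneMinuteRate"],
            "outputWriters": [{
                "@class": "com.googlecode.jmxtrans.model.output.InfluxDbWriterFactory",
                "url": "http://influxdb:8086/",
                "database": "kafka_metrics",
            }],
        }],
    }]
}
print(json.dumps(config, indent=2))
```

Because the sidecar shares the pod's network namespace, it can reach the broker's JMX port on localhost without exposing JMX outside the pod.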
This project provides an API for developers to track live geographical data. The process: 1) a client sends formatted events to an API, which 2) places the event data onto a messaging queue, 3) from which another API reads it, and 4) serves it live on a heatmap hosted in the browser. The backend layer is written in Go. Other components used include gRPC, Kafka, Couchbase, InfluxDB, Leaflet Maps, and Heatmap.js.
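The project's backend is Go, but the event flow is language-agnostic; the sketch below (in Python, with invented field names) shows the two ends of the pipeline: shaping a geo event for the queue, and binning a consumed event into a coarse grid cell the way a heatmap front end might aggregate points before rendering.

```python
import json

def encode_event(client_id, lat, lon, ts):
    """Shape a formatted geo event as the first API might publish it
    onto the messaging queue. Field names are illustrative."""
    return json.dumps({"client": client_id, "lat": lat, "lon": lon, "ts": ts})

def heatmap_bin(event_json, cell_deg=0.1):
    """Bucket a consumed event into a grid cell of cell_deg degrees,
    a common aggregation step before rendering a heatmap."""
    e = json.loads(event_json)
    return (round(e["lat"] // cell_deg * cell_deg, 6),
            round(e["lon"] // cell_deg * cell_deg, 6))

evt = encode_event("c1", 51.5074, -0.1278, 1700000000)
print(heatmap_bin(evt))
```

Putting the queue between the two APIs means slow map clients never back-pressure the ingestion side, which is the main point of the architecture.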
Would you like to learn how to do stream processing with Apache Kafka on DC/OS? If so, read on!