Ceph and Microsoft Fabric Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+ Telegraf downloads
#1 time series database (source: DB-Engines)
1B+ downloads of InfluxDB
2,800+ contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
The Ceph plugin for Telegraf gathers performance metrics from both MON and OSD nodes in a Ceph storage cluster, enabling effective monitoring and management.
The Microsoft Fabric plugin writes metrics to Real-Time Analytics services in Microsoft Fabric, enabling powerful data storage and analysis capabilities.
Integration details
Ceph
The Ceph Storage Telegraf plugin collects performance metrics from Monitor (MON) and Object Storage Daemon (OSD) nodes within a Ceph storage cluster. Ceph, a highly scalable storage system, exposes its metrics through this plugin, making it straightforward to monitor its components. Starting with the Ceph 13.x Mimic release, users can gather detailed insights into the performance and health of their Ceph infrastructure. The plugin works by scanning configured socket directories for specific Ceph service socket files, executing commands via the Ceph administrative interface, and parsing the returned JSON data for metrics. The metrics are organized by top-level keys, allowing for efficient monitoring and analysis of cluster performance. By helping administrators understand system behavior and identify potential issues proactively, the plugin provides valuable capabilities for managing and maintaining the performance of a Ceph cluster.
Microsoft Fabric
This plugin allows you to leverage Microsoft Fabric’s capabilities to store and analyze your Telegraf metrics. Eventhouse is a high-performance, scalable data store designed for real-time analytics; it allows you to ingest, store, and query large volumes of data with low latency. The plugin supports both events and metrics with versatile grouping options. It provides various configuration parameters, including connection strings that specify details such as the data source, ingestion type, and which tables to use for storage. With support for streaming ingestion and event streams, this plugin enables seamless integration and data flow into Microsoft’s analytics ecosystem, allowing for rich data querying capabilities and near-real-time processing.
Configuration
Ceph
[[inputs.ceph]]
## This is the recommended interval to poll. Too frequent and you
## will lose data points due to timeouts during rebalancing and recovery
interval = '1m'
## All configuration values are optional, defaults are shown below
## location of ceph binary
ceph_binary = "/usr/bin/ceph"
## directory in which to look for socket files
socket_dir = "/var/run/ceph"
## prefix of MON and OSD socket files, used to determine socket type
mon_prefix = "ceph-mon"
osd_prefix = "ceph-osd"
mds_prefix = "ceph-mds"
rgw_prefix = "ceph-client"
## suffix used to identify socket files
socket_suffix = "asok"
## Ceph user to authenticate as, ceph will search for the corresponding
## keyring e.g. client.admin.keyring in /etc/ceph, or the explicit path
## defined in the client section of ceph.conf for example:
##
## [client.telegraf]
## keyring = /etc/ceph/client.telegraf.keyring
##
## Consult the ceph documentation for more detail on keyring generation.
ceph_user = "client.admin"
## Ceph configuration to use to locate the cluster
ceph_config = "/etc/ceph/ceph.conf"
## Whether to gather statistics via the admin socket
gather_admin_socket_stats = true
## Whether to gather statistics via ceph commands, requires ceph_user
## and ceph_config to be specified
gather_cluster_stats = false
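The input above only collects metrics; pairing it with an output plugin in the same configuration ships them somewhere useful. The following is a minimal sketch, assuming an InfluxDB v2 destination via the influxdb_v2 output plugin; the URL, token, organization, and bucket values are placeholders to replace with your own.
[[inputs.ceph]]
## Poll once per minute, as recommended above
interval = "1m"
socket_dir = "/var/run/ceph"
gather_admin_socket_stats = true
[[outputs.influxdb_v2]]
## Placeholder connection details -- substitute your own instance
urls = ["http://localhost:8086"]
token = "$INFLUX_TOKEN"
organization = "example-org"
bucket = "ceph-metrics"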
Microsoft Fabric
[[outputs.microsoft_fabric]]
## The URI property of the resource on Azure
connection_string = "https://trd-abcd.xx.kusto.fabric.microsoft.com;Database=kusto_eh;Table Name=telegraf_dump;Key=value"
## Client timeout
# timeout = "30s"
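To try the output end to end, the sketch below pairs it with Telegraf’s standard cpu input and a one-minute agent interval; the cpu input and agent settings are generic Telegraf configuration, and the connection string simply mirrors the placeholder format shown above, so replace it with the values for your own Eventhouse.
[agent]
interval = "60s"
## Any Telegraf input works here; cpu is used as a simple example source
[[inputs.cpu]]
percpu = false
totalcpu = true
[[outputs.microsoft_fabric]]
## Placeholder Eventhouse connection string -- replace with your resource URI,
## database, and target table
connection_string = "https://trd-abcd.xx.kusto.fabric.microsoft.com;Database=kusto_eh;Table Name=telegraf_dump;Key=value"
timeout = "30s"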
Input and output integration examples
Ceph
- Dynamic Monitoring Dashboard: Utilize the Ceph plugin to create a real-time monitoring dashboard that visually represents the performance metrics of your Ceph cluster. By integrating these metrics into a centralized dashboard, system administrators can gain immediate insights into the health of the storage infrastructure, which aids in quickly identifying and addressing potential issues before they escalate.
- Automated Alerting System: Implement the Ceph plugin in conjunction with an alerting solution to automatically notify administrators of performance degradation or operational issues within the Ceph cluster. By defining thresholds for key metrics, organizations can ensure prompt response actions, thereby improving overall system reliability and performance.
- Performance Benchmarking: Use the metrics collected by this plugin to conduct performance benchmarking tests across different configurations or hardware setups of your Ceph storage cluster. This process can assist organizations in identifying optimal configurations that enhance performance and resource utilization, promoting a more efficient storage environment.
- Capacity Planning and Forecasting: Integrate the metrics gathered from the Ceph storage plugin into broader data analytics and reporting tools to facilitate capacity planning. By analyzing historical metrics, organizations can forecast future utilization trends, enabling informed decisions about scaling storage resources effectively. A configuration sketch for collecting the cluster-level statistics this scenario relies on follows this list.
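For the capacity-planning scenario, cluster-wide statistics such as pool usage come from the gather_cluster_stats path rather than the per-daemon admin sockets. The sketch below is one way to enable them, assuming a dedicated client.telegraf user and keyring have been created as described in the configuration comments earlier; adjust the user and paths to match your cluster.
[[inputs.ceph]]
interval = "1m"
## Cluster-level statistics require a Ceph user and configuration file
ceph_user = "client.telegraf"
ceph_config = "/etc/ceph/ceph.conf"
gather_admin_socket_stats = true
gather_cluster_stats = true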
Microsoft Fabric
- Real-time Monitoring Dashboards: Utilize the Microsoft Fabric plugin to feed live metrics from your applications into a real-time dashboard on Microsoft Fabric. This allows teams to visualize key performance indicators instantly, enabling quick decision-making and timely responses to performance issues.
- Automated Data Ingestion from IoT Devices: Use this plugin in scenarios where metrics from IoT devices need to be ingested into Azure for analysis. Using the plugin’s capabilities, data can be streamed continuously, facilitating real-time analytics and reporting without complex coding efforts.
- Cross-Platform Data Aggregation: Leverage the plugin to consolidate metrics from multiple systems and applications into a single Azure Data Explorer table. This use case enables easier data management and analysis by centralizing disparate data sources within a unified analytics framework.
- Enhanced Event Transformation Workflows: Integrate the plugin with Eventstreams to facilitate real-time event ingestion and transformation. By configuring different metrics and partition keys, users can manipulate the flow of data as it enters the system, allowing for advanced processing before the data reaches its final destination. A hedged connection-string sketch for this scenario follows this list.
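For the Eventstream scenario, Fabric Eventstreams can expose an Event Hubs-compatible custom endpoint; under that assumption, the connection string takes the familiar Endpoint=sb://... form rather than the Kusto URI shown earlier. The values below are placeholders, and the exact string for your Eventstream should be copied from the custom endpoint’s connection settings in Fabric.
[[outputs.microsoft_fabric]]
## Placeholder Event Hubs-style connection string for an Eventstream custom endpoint
connection_string = "Endpoint=sb://example-namespace.servicebus.windows.net/;SharedAccessKeyName=key_1;SharedAccessKey=<access-key>;EntityPath=es_example"
# timeout = "30s"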
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration