Amazon CloudWatch and Databricks Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+ Telegraf downloads
#1 time series database (source: DB-Engines)
1B+ downloads of InfluxDB
2,800+ contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data with InfluxDB, the #1 time series platform built to scale with Telegraf. Any data is more valuable when you think of it as time series data.
See Ways to Get Started
Input and output integration overview
This plugin pulls metric statistics from Amazon CloudWatch, streamlining the process of monitoring and analyzing AWS resources.
Use Telegraf’s HTTP output plugin to push metrics straight into a Databricks Lakehouse by calling the SQL Statement Execution API with a JSON-wrapped INSERT or volume PUT command.
Integration details
Amazon CloudWatch
The Amazon CloudWatch Plugin allows users to pull detailed metric statistics from Amazon’s CloudWatch service. As a monitoring solution, CloudWatch enables users to track various metrics related to AWS resources and applications, facilitating improved operational and performance insights. The plugin uses a structured authentication method that prioritizes security and flexibility through a combination of STS (Security Token Service), shared credentials, environment variables, and EC2 instance profiles, ensuring robust access control to AWS resources. Key features include the ability to define specific metric namespaces, aggregated periods for metrics, and optional inclusion of linked accounts for cross-account monitoring. A significant aspect of this plugin is its capacity to handle both sparse and dense metric formats, allowing for varied output structures depending on user preference. Thus, it supports versatile use cases in cloud monitoring and analytics by providing comprehensive, timely data directly from CloudWatch.
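For the cross-account case specifically, a minimal configuration sketch might look like the following; the role ARN and namespace list are placeholders for illustration, not values from this page:

[[inputs.cloudwatch]]
region = "us-east-1"
## Hypothetical read-only role in the central monitoring account
role_arn = "arn:aws:iam::123456789012:role/telegraf-cloudwatch-readonly"
## Also pull metrics from source accounts linked for cross-account observability
include_linked_accounts = true
namespaces = ["AWS/EC2", "AWS/ELB"]
period = "5m"
delay = "5m"
interval = "5m"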
Databricks
This configuration turns Telegraf into a lightweight ingestion agent for the Databricks Lakehouse. It leverages the Databricks SQL Statement Execution API 2.0, which accepts authenticated POST requests containing a JSON payload with a statement field. Each Telegraf flush dynamically renders a SQL INSERT (or, for file-based workflows, a PUT ... INTO /Volumes/... command) that lands the metrics in a Unity Catalog table or volume governed by Lakehouse security. Under the hood, Databricks stores successful inserts as Delta Lake transactions, enabling ACID guarantees, time travel, and scalable analytics. Operators can point the warehouse_id at any serverless or classic SQL warehouse, and all authentication is handled with a PAT or service-principal token; no agents or JDBC drivers are required. Because Telegraf's HTTP output supports custom headers, batching, TLS, and proxy settings, the same pattern scales from edge IoT gateways to container sidecars, consolidating infrastructure telemetry, application logs, or business KPIs directly into the Lakehouse for BI, ML, and Lakehouse Monitoring. Unity Catalog volumes provide a governed staging layer when file uploads and COPY INTO are preferred, and the approach aligns with Databricks' recommended ingestion practices for partners and ISVs.
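Concretely, every flush POSTs a small JSON document to /api/2.0/sql/statements. A representative rendered body, with a hypothetical table name and warehouse ID, might look like:

{
  "statement": "INSERT INTO main.telemetry.cpu VALUES (from_unixtime(1730000000000/1000), 42.5, 'edge-01')",
  "warehouse_id": "abc123def456"
}

The API executes the statement on the named SQL warehouse and returns a statement ID and status that can be polled for completion.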
Configuration
Amazon CloudWatch
[[inputs.cloudwatch]]
## AWS region to request metrics from
region = "us-east-1"
## Credentials are resolved in order: STS assumed role, explicit static
## credentials, shared profile, environment variables, then EC2 instance profile.
# access_key = ""
# secret_key = ""
# token = ""
# role_arn = ""
# web_identity_token_file = ""
# role_session_name = ""
# profile = ""
# shared_credential_file = ""
## Also pull metrics from accounts linked for cross-account observability
# include_linked_accounts = false
## Override the CloudWatch endpoint (for example, a VPC endpoint)
# endpoint_url = ""
# use_system_proxy = false
# http_proxy_url = "http://localhost:8888"
## Granularity of the aggregated statistics
period = "5m"
## Collection lag, giving CloudWatch time to finalize datapoints
delay = "5m"
## How often to poll for new metrics
interval = "5m"
## Only request metrics that were active within this ISO 8601 window
# recently_active = "PT3H"
## How long to cache the list of available metrics
# cache_ttl = "1h"
## Metric namespaces to pull
namespaces = ["AWS/ELB"]
## "sparse" emits one field per statistic; "dense" adds a "statistic" tag
# metric_format = "sparse"
## Maximum API requests per second
# ratelimit = 25
# timeout = "5s"
## Metrics per GetMetricData request (500 is the API maximum)
# batch_size = 500
## Statistics to gather
# statistic_include = ["average", "sum", "minimum", "maximum", "sample_count"]
# statistic_exclude = []
## Optionally restrict collection to explicit metrics and dimensions
# [[inputs.cloudwatch.metrics]]
# names = ["Latency", "RequestCount"]
# [[inputs.cloudwatch.metrics.dimensions]]
# name = "LoadBalancerName"
# value = "p-example"
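To illustrate the metric_format option, here is a hedged sketch of the line protocol each mode produces; exact measurement, tag, and field names depend on the namespace, and the values below are representative for AWS/ELB:

# sparse (default): one field per statistic
cloudwatch_aws_elb,load_balancer_name=p-example,region=us-east-1 latency_average=0.123,latency_maximum=0.456 1730000000000000000
# dense: a single field per metric, with the statistic identified by a tag
cloudwatch_aws_elb,load_balancer_name=p-example,region=us-east-1,statistic=average latency=0.123 1730000000000000000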
Databricks
[[outputs.http]]
## Databricks SQL Statement Execution API endpoint
url = "https://${DATABRICKS_HOST}/api/2.0/sql/statements"
## Use POST to submit each Telegraf batch as a SQL request
method = "POST"
## Personal-access token (PAT) for workspace or service principal
headers = { Authorization = "Bearer ${DATABRICKS_TOKEN}" }
## Send JSON that wraps the metrics batch in a SQL INSERT (or PUT into a Volume)
content_type = "application/json"
## Serialize metrics as JSON so they can be embedded in the SQL statement
data_format = "json"
json_timestamp_units = "1ms"
## Build the request body. Telegraf replaces the template variables at runtime.
## Example inserts a row per metric into a Unity-Catalog table.
body_template = """
{
\"statement\": \"INSERT INTO ${TARGET_TABLE} VALUES {{range .Metrics}}(from_unixtime({{.timestamp}}/1000), {{.fields.usage}}, '{{.tags.host}}'){{end}}\",
\"warehouse_id\": \"${WAREHOUSE_ID}\"
}
"""
## Optional: add batching limits or TLS settings
# batch_size = 500
# timeout = "10s"
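For reference, a two-metric batch rendered through the template above would produce a statement along these lines, assuming TARGET_TABLE is set to a hypothetical main.telemetry.cpu:

INSERT INTO main.telemetry.cpu VALUES
  (from_unixtime(1730000000000/1000), 42.5, 'edge-01'),
  (from_unixtime(1730000060000/1000), 43.1, 'edge-02')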
Input and output integration examples
Amazon CloudWatch
- Cross-Account Monitoring: Utilize this plugin to monitor resources across multiple AWS accounts by enabling the include_linked_accounts option. This scenario allows companies managing multiple AWS accounts to aggregate metrics into a central monitoring dashboard, providing a unified view of all metrics while ensuring secure data access and compliance through proper role management.
- Dynamic Alerting System: Integrate this plugin with alerting tools to create an automated system that triggers alerts based on defined thresholds for CloudWatch metrics. For instance, if latency metrics exceed specified limits, alerts can be sent to relevant teams, enabling proactive responses to performance issues and reducing downtime.
- Cost Management Dashboard: Use the metrics gathered from the plugin to build a cost management dashboard that visualizes AWS service usage metrics over time. By correlating these metrics with billing data, organizations can identify high-cost services and take informed actions to optimize their resource usage and spending.
- Performance Benchmarking for Applications: Leverage the metrics collected from applications running on AWS to perform performance benchmarks. For example, by tracking latency and request count metrics for an ELB, developers can assess the impact of application changes on its performance, making data-driven decisions for optimization (see the configuration sketch after this list).
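A minimal sketch for the ELB benchmarking scenario, built from the plugin options shown above; the load balancer name is a placeholder:

[[inputs.cloudwatch]]
region = "us-east-1"
namespaces = ["AWS/ELB"]
period = "1m"
delay = "5m"
interval = "1m"
## Collect only the statistics needed for latency benchmarking
statistic_include = ["average", "maximum", "sample_count"]
[[inputs.cloudwatch.metrics]]
names = ["Latency", "RequestCount"]
[[inputs.cloudwatch.metrics.dimensions]]
name = "LoadBalancerName"
value = "p-example"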
Databricks
- Edge-to-Lakehouse Telemetry Pipe: Deploy Telegraf on factory PLCs to sample vibration metrics and post them every second to a serverless SQL warehouse. Delta tables power Power BI dashboards that alert engineers when thresholds drift.
- Blue-Green CI/CD Rollout Metrics: Attach a Telegraf sidecar to each Kubernetes canary pod; it inserts container stats into a Unity Catalog table tagged by deployment_id, letting Databricks SQL compare error-rate percentiles and auto-rollback underperforming versions.
- SaaS Usage Metering: Insert per-tenant API-call counters via the HTTP plugin; a nightly Lakehouse query aggregates usage into invoices, eliminating custom metering micro-services.
- Security Forensics Lake: Upload JSON batches of Suricata IDS events to a Unity Catalog volume using PUT commands, then run COPY INTO for near-real-time enrichment with Delta Live Tables, producing a searchable threat-intel lake that joins network logs with user session data (a COPY INTO sketch follows this list).
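A hedged sketch of the follow-up load step for the forensics scenario, assuming a hypothetical volume path and target table:

-- Load staged IDS events from a Unity Catalog volume into a Delta table
COPY INTO main.security.suricata_events
FROM '/Volumes/main/security/ids_staging/'
FILEFORMAT = JSON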
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration