Quick Fix: Updating Telegraf Configs to Send Data to InfluxDB 3.0

We recently introduced a new version of InfluxDB, rewritten from the ground up to improve performance across the board. As with any undertaking of this nature, developers will need to make some adjustments to their applications in order to incorporate the new database. We even faced this challenge internally.

We had many Telegraf instances sending data to legacy versions of InfluxDB. We didn’t want to turn off those data pipelines just yet, so we needed that data to flow to both our 1.x/2.x instances and our 3.0 instances of InfluxDB, and we needed a simple way to make that happen without causing any interruptions or issues.

As part of our internal migration to InfluxDB Cloud Dedicated, we needed to update our Telegraf writers so that data going to a legacy instance of InfluxDB also went to a new Dedicated instance. After some experimentation, we learned it was as easy as adding a new outputs.influxdb_v2 entry to our Telegraf config that points at our InfluxDB Cloud Dedicated cluster. Let’s explore the process.

The basic Telegraf configuration file consists of global agent settings, input plugins that gather data, and output plugins that send data to a destination. By adding a new outputs.influxdb_v2 plugin entry (in addition to the outputs.influxdb entry already in the config pointing to the legacy version of InfluxDB), we can send all the data we’re already collecting to our new InfluxDB Cloud Dedicated cluster without any other configuration changes.
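
For reference, a stripped-down Telegraf config has this overall shape; the specific plugins and values below are purely illustrative:

[agent]                # global agent settings
  interval = "10s"
  flush_interval = "10s"

[[inputs.cpu]]         # an input plugin that gathers data
  totalcpu = true

[[outputs.influxdb]]   # an output plugin that sends data to a destination
  urls = ["http://localhost:8086"]
  database = "telegraf"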

Example:

Our Telegraf config was already set up to send data to our 1.x instance:

[[outputs.influxdb]]
  urls = ["http://our_old_db:8086"]
  database = "telegraf"
  username = "telegraf_user"
  password = "MY_PRECIOUS"
  retention_policy = ""
  write_consistency = "any"
  timeout = "5s"

So, all that the configuration needed was a new output plugin with the proper credentials to authenticate with the Dedicated cluster. But where do you get the credentials?

As part of onboarding, customers receive their own unique cluster URL and account ID. However, it is up to them to create their own databases and tokens as needed. You can find the full documentation here, but we’ll provide a quick walkthrough with examples below.

Begin by contacting InfluxData Support to acquire download information for the latest administrative client for your OS. Download and unzip the file, then execute the command influxctl init. Something similar to the following should appear; fill in the values to match your environment:

[root@server1]# ./influxctl init
info    Welcome to the interactive prompt to set up profile!
info    Name: using a profile name other than 'default' requires
info      	specifying the `--profile` option for all commands.
> What is the name of the profile? [default]:
info    Account ID: This was provided as a UUID
> What is the account ID: account-UUID-goes-here
info    Cluster ID: This was provided as a UUID
> What is the cluster ID: clusterID-goes-here
info    profile default successfully created and ready for use

With that step completed, we can now add a database and create a token.

  1. Use the ./influxctl login command to authenticate with the system. This command should open a web browser or display a link that allows you to authenticate with the cluster using your provided Auth0 credentials.
  2. Use the ./influxctl database create command to create your new database:
    [techops@server1]$ ./influxctl database create telegraf
    database "telegraf" successfully created
  3. Use the ./influxctl token create command to create a new database token and specify the database permissions to grant to the token:
    ./influxctl token create --read-database telegraf --write-database telegraf "R/W token for telegraf"
    This creates a token with read and write capabilities for the telegraf database with a description of R/W token for telegraf (you can confirm the new database and token with the listing commands shown after this list). A successful execution should look like this:
    warn    please copy the token and store in a safe place
    warn    this is the *only time* you will see the token
    apiv1_abcdefghijkl123456789
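
If you want to double-check what was created, influxctl also includes listing subcommands; the exact output format may vary between versions, so treat these as a sketch:

[techops@server1]$ ./influxctl database list
[techops@server1]$ ./influxctl token list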

Use the provided token string to populate the token configuration option in the new output plugin in your Telegraf configuration file:

[[outputs.influxdb_v2]]
  urls = ["https://my_url.a.influxdb.io"]
  token = "apiv1_abcdefghijkl123456789"
  bucket = "telegraf"
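
Before restarting, it can help to confirm that the merged config still parses and that the inputs gather cleanly. A quick sanity check, assuming a typical package install with the config at /etc/telegraf/telegraf.conf (the --test flag runs the inputs once and prints the gathered metrics to stdout without writing to any outputs):

[root@server1]# telegraf --config /etc/telegraf/telegraf.conf --test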

Restart the Telegraf agent, and data should now flow to your existing 1.x database as well as the new InfluxDB Cloud Dedicated database! Check the logs to ensure there are no errors.
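
If Telegraf runs as a systemd service (the packaged unit name is typically telegraf), one way to follow the logs is:

[root@server1]# journalctl -u telegraf -f

Then use these queries as examples to test your database: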

[root@server1]# curl --get https://my_test_url.a.influxdb.io/query --header "Authorization: Token apiv1_abcdefghijkl123456789" --data-urlencode "db=telegraf" --data-urlencode "q=show measurements"

{"results":[{"statement_id":0,"series":[{"name":"measurements","columns":["name"],"values":[["cpu"],["disk"],["diskio"],["influxdb"],["influxdb_ae"],["influxdb_cluster"],["influxdb_cq"],["influxdb_database"],["influxdb_entitlements"],["influxdb_hh"],["influxdb_hh_database"],["influxdb_hh_node"],["influxdb_hh_processor"],["influxdb_httpd"],["influxdb_localStore"],["influxdb_memstats"],["influxdb_qc_all_active"],["influxdb_qc_all_duration_seconds"],["influxdb_qc_compiling_active"],["influxdb_qc_compiling_duration_seconds"],["influxdb_qc_executing_active"],["influxdb_qc_executing_duration_seconds"],["influxdb_qc_memory_unused_bytes"],["influxdb_qc_queueing_active"],["influxdb_qc_queueing_duration_seconds"],["influxdb_qc_requests_total"],["influxdb_queryExecutor"],["influxdb_rpc"],["influxdb_runtime"],["influxdb_shard"],["influxdb_subscriber"],["influxdb_tsm1_cache"],["influxdb_tsm1_engine"],["influxdb_tsm1_filestore"],["influxdb_tsm1_wal"],["influxdb_write"],["kapacitor"],["kapacitor_edges"],["kapacitor_ingress"],["kapacitor_load"],["kapacitor_memstats"],["kapacitor_nodes"],["kapacitor_topics"],["mem"],["net"],["netstat"],["ping"],["processes"],["swap"],["syslog"],["system"]]}]}]}

[root@server1]# curl --get https://my_test_url.a.influxdb.io/query --header "Authorization: Token apiv1_abcdefghijkl123456789" --data-urlencode "db=telegraf" --data-urlencode "q=SELECT time, host, usage_user from cpu group by host limit 1" --header "Accept: application/csv"

name,tags,time,host,usage_user
cpu,host=data3,1684005120000000000,data3,2.4729520867069734
cpu,host=data4,1684005120000000000,data4,2.4318349298448556
cpu,host=meta0,1684005120000000000,meta0,0.2734107997271841
cpu,host=meta1,1684005110000000000,meta1,0.20491803264512043
cpu,host=meta2,1684005120000000000,meta2,0.06738544474608282

Congratulations! You’re now writing data to your InfluxDB Cloud Dedicated cluster! Hopefully this brief tutorial is helpful. For more information on working with InfluxDB Cloud Dedicated, check out our documentation. While this post specifically references the Cloud Dedicated product, the same process works for InfluxDB Cloud Serverless.