Category Archives: IoT

Improved MQTT Support in InfluxDB with SurgeMQ


As InfluxDB has grown in popularity over the past two and a half years, we’ve seen it used in many different use cases across custom DevOps monitoring, real-time analytics, and the Internet of Things (IoT). Each of these domains has its own patterns, standards, and protocols, so we’ve been working hard to make Telegraf, our open-source data collection agent, support as many services and systems as possible. This helps our users get data into and out of InfluxDB easily and enables developers to keep building awesome new applications.

MQTT is an example of one of the aforementioned protocols; it is heavily used in industrial monitoring and has become increasingly popular in IoT applications. We recently added support to Telegraf for consuming data from MQTT brokers, but given the amount of segmentation in the MQTT broker space and the specific requirements of high-performance brokers (e.g. very high numbers of concurrent connections), we started to pursue the possibility of building our own MQTT broker. Doing so ensures the tightest possible integration with the rest of our products and allows us to work more effectively with customers and partners to build IoT solutions backed by InfluxDB.

We’re huge supporters of the Go programming language, as we’ve discussed in the past, so it was important to us that the project be written in Go for optimal productivity, performance, and community engagement. When we began our investigation, we discovered the SurgeMQ project, written in Go by the brilliant and incredibly talented Jian Zhen. After a few weeks of initial discussions, we received Jian’s blessing to officially take over the project. Says Jian, “I’ve been watching the InfluxDB project from its early days, and I’m a huge fan. I’m happy to see SurgeMQ helping people use InfluxDB in new ways.”

Effective immediately, we’ll be working to finish turning SurgeMQ into a standalone server and begin building packages for it alongside the rest of our products. Continue reading Improved MQTT Support in InfluxDB with SurgeMQ

Part 7: How-to Create an IoT Project with the TICK Stack on the Google Cloud Platform


Part 7 : Collecting System Sensor Data with Telegraf

The last part of this tutorial looks at Telegraf, the “T” in the TICK Stack. Telegraf is an agent that is used to collect metrics from various input channels and write them to output channels. It supports over 60 plugins which can function as the input source or output target of data.

The agent is completely plugin driven; while it supports many plugins out of the box, you can also write your own.

Our tutorial so far has looked at collecting temperature data from multiple weather stations and persisting it in InfluxDB. We also looked at setting up Chronograf to view the temperature data via a dashboard, and at setting up alerts via Kapacitor that pushed notifications to Slack whenever the temperature went over a certain limit.

At this point, the temperature data is being collected via the Raspberry Pi stations and the flow is largely in place. Where we will use Telegraf is to monitor the CPU, memory, and other system parameters of the InfluxDB server.

  • Telegraf comes with an input plugin named `system`. This plugin captures various metrics about the system it is running on, such as memory usage, CPU, disk usage, and more. We will use this plugin to capture the CPU and memory metrics of the InfluxDB server.
  • The captured input metrics need to be sent to an output system. In our case, we will push this data into InfluxDB itself. This will capture these metrics into an InfluxDB database, on which we could then build dashboards and alerts via Chronograf and Kapacitor. The output plugin therefore will be InfluxDB.

The diagram below depicts what we are going to do:

[Image: tele1]
Installing Telegraf

We are going to install Telegraf on the InfluxDB Server instance. Currently we just have one instance running in the Google Cloud and we will be setting it up on that.

As mentioned earlier, the VM runs Debian Linux, so we can follow the steps for installing Telegraf given at the official documentation site. Install the latest distribution of Telegraf as shown below:

wget http://get.influxdb.org/telegraf/telegraf_0.10.2-1_amd64.deb

sudo dpkg -i telegraf_0.10.2-1_amd64.deb

Configuring Telegraf

We need to provide a configuration file to Telegraf. This configuration file will contain not just Agent configuration parameters but also the input and output plugins that you wish to configure.

Telegraf supports a ton of input and output plugins, and it provides a command to generate a telegraf.conf (configuration file) containing the configuration sections for every input and output plugin. That is a useful reference to keep around, but it is more than we need here.

We will be using the following generic command to generate a Telegraf configuration file for us:

telegraf -sample-config -input-filter <pluginname>[:<pluginname>] -output-filter <outputname>[:<outputname>] > telegraf.conf

In our case, we want the `cpu` and `mem` input plugins and the `influxdb` output plugin, so we generate `telegraf.conf` as shown below:

telegraf -sample-config -input-filter cpu:mem -output-filter influxdb > telegraf.conf

Let us look at the key sections in the generated `telegraf.conf` file:

  • [agent] : This is the section for the Telegraf agent itself. Ideally we do not want to tweak too much here. Do note that you can change the frequency (time interval) at which data collection is done for all inputs via the `interval` property.
  • The next section is one or more `outputs`. In our case, it is just the `influxdb` output, i.e. `[[outputs.influxdb]]`. Two properties are key here: `urls` and `database`. The `urls` property is a list of InfluxDB instances. In our case there is just one, and we are running Telegraf on the same machine as the InfluxDB instance, so the endpoint points to the InfluxDB API endpoint at `http://localhost:8086`. Similarly, the `database` property is the database in which the input metrics will be collected. By default it is set to `telegraf`, but you can change it; we will go with the default. An illustrative excerpt of the generated file follows this list.
  • The next sections are for the inputs. You can see that it has created the `[[inputs.cpu]]` and `[[inputs.mem]]` inputs. Check out the documentation for both the cpu and mem inputs.
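For reference, here is an illustrative excerpt of the kind of file this command generates. This is only a sketch: exact defaults and available plugin options vary by Telegraf version, but the `interval`, `urls`, and `database` properties discussed above are the ones we care about.

# Illustrative excerpt of a generated telegraf.conf; defaults vary by version
[agent]
  # How often to gather metrics from all configured inputs
  interval = "10s"

[[outputs.influxdb]]
  # InfluxDB API endpoint(s) to write the collected metrics to
  urls = ["http://localhost:8086"]
  # Database the metrics are written into
  database = "telegraf"

[[inputs.cpu]]

[[inputs.mem]]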

Starting Telegraf and collecting metrics

Let us start the Telegraf Agent now via the following command:

telegraf -config telegraf.conf

We could have pushed the generated `telegraf.conf` into the `/etc/telegraf` folder and started Telegraf as a service, but for the purposes of this tutorial, running it directly is fine.
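If you do want to run it as a service instead, a minimal sketch looks like the following. This assumes the packaged default configuration path and init script for the Telegraf 0.10.x Debian package; paths and service commands may differ by version.

$ sudo cp telegraf.conf /etc/telegraf/telegraf.conf
$ sudo service telegraf restart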

On successful startup, Telegraf displays output as shown below:

$ telegraf -config telegraf.conf
2016/02/15 04:36:39 Starting Telegraf (version 0.10.2)
2016/02/15 04:36:39 Loaded outputs: influxdb
2016/02/15 04:36:39 Loaded inputs: cpu mem
2016/02/15 04:36:39 Tags enabled: host=instance-1
2016/02/15 04:36:39 Agent Config: Interval:10s, Debug:false, Quiet:false, Hostname:"instance-1", Flush Interval:10s

Recall that one of the properties of the Telegraf agent was the `interval` property, which was set to 10 seconds. This is the interval at which it polls all the inputs for data.

Here is the output from several data collection intervals:

2016/02/15 04:36:40 Gathered metrics, (10s interval), from 2 inputs in 531.909µs
2016/02/15 04:36:50 Gathered metrics, (10s interval), from 2 inputs in 447.937µs
2016/02/15 04:36:50 Wrote 4 metrics to output influxdb in 3.39839ms
2016/02/15 04:37:00 Gathered metrics, (10s interval), from 2 inputs in 482.658µs
2016/02/15 04:37:00 Wrote 3 metrics to output influxdb in 4.324979ms
2016/02/15 04:37:10 Gathered metrics, (10s interval), from 2 inputs in 775.612µs
2016/02/15 04:37:10 Wrote 3 metrics to output influxdb in 7.472159ms
2016/02/15 04:37:20 Gathered metrics, (10s interval), from 2 inputs in 438.388µs
2016/02/15 04:37:20 Wrote 3 metrics to output influxdb in 3.219223ms
2016/02/15 04:37:30 Gathered metrics, (10s interval), from 2 inputs in 419.607µs
2016/02/15 04:37:30 Wrote 3 metrics to output influxdb in 3.159644ms
2016/02/15 04:37:40 Gathered metrics, (10s interval), from 2 inputs in 426.761µs
2016/02/15 04:37:40 Wrote 3 metrics to output influxdb in 3.894155ms
2016/02/15 04:37:50 Gathered metrics, (10s interval), from 2 inputs in 449.508µs
2016/02/15 04:37:50 Wrote 3 metrics to output influxdb in 3.192695ms
2016/02/15 04:38:00 Gathered metrics, (10s interval), from 2 inputs in 498.035µs
2016/02/15 04:38:00 Wrote 3 metrics to output influxdb in 3.831951ms
2016/02/15 04:38:10 Gathered metrics, (10s interval), from 2 inputs in 448.709µs
2016/02/15 04:38:10 Wrote 3 metrics to output influxdb in 3.246991ms
2016/02/15 04:38:20 Gathered metrics, (10s interval), from 2 inputs in 514.15µs
2016/02/15 04:38:20 Wrote 3 metrics to output influxdb in 3.838368ms
2016/02/15 04:38:30 Gathered metrics, (10s interval), from 2 inputs in 520.263µs
2016/02/15 04:38:30 Wrote 3 metrics to output influxdb in 3.76034ms
2016/02/15 04:38:40 Gathered metrics, (10s interval), from 2 inputs in 543.151µs
2016/02/15 04:38:40 Wrote 3 metrics to output influxdb in 3.917381ms
2016/02/15 04:38:50 Gathered metrics, (10s interval), from 2 inputs in 487.683µs
2016/02/15 04:38:50 Wrote 3 metrics to output influxdb in 3.787101ms
2016/02/15 04:39:00 Gathered metrics, (10s interval), from 2 inputs in 617.025µs
2016/02/15 04:39:00 Wrote 3 metrics to output influxdb in 4.364542ms
2016/02/15 04:39:10 Gathered metrics, (10s interval), from 2 inputs in 517.546µs
2016/02/15 04:39:10 Wrote 3 metrics to output influxdb in 4.595062ms
2016/02/15 04:39:20 Gathered metrics, (10s interval), from 2 inputs in 542.686µs
2016/02/15 04:39:20 Wrote 3 metrics to output influxdb in 3.680957ms
2016/02/15 04:39:30 Gathered metrics, (10s interval), from 2 inputs in 526.083µs
2016/02/15 04:39:30 Wrote 3 metrics to output influxdb in 4.32718ms
2016/02/15 04:39:40 Gathered metrics, (10s interval), from 2 inputs in 504.632µs
2016/02/15 04:39:40 Wrote 3 metrics to output influxdb in 3.676524ms
2016/02/15 04:39:50 Gathered metrics, (10s interval), from 2 inputs in 640.896µs
2016/02/15 04:39:50 Wrote 3 metrics to output influxdb in 3.773236ms
2016/02/15 04:40:00 Gathered metrics, (10s interval), from 2 inputs in 491.794µs
2016/02/15 04:40:00 Wrote 3 metrics to output influxdb in 3.608919ms
2016/02/15 04:40:10 Gathered metrics, (10s interval), from 2 inputs in 571.12µs
2016/02/15 04:40:10 Wrote 3 metrics to output influxdb in 3.739155ms
2016/02/15 04:40:20 Gathered metrics, (10s interval), from 2 inputs in 505.122µs
2016/02/15 04:40:20 Wrote 3 metrics to output influxdb in 4.151489ms

Since the InfluxDB server is running with its Admin interface endpoint exposed, we can investigate the `telegraf` database from the Admin interface itself (you could also do that via the InfluxDB shell).
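For example, from the InfluxDB shell you could inspect the collected data with queries along these lines (illustrative only; the exact field names depend on the plugin version):

$ influx
> USE telegraf
> SHOW MEASUREMENTS
> SELECT * FROM cpu LIMIT 5
> SELECT * FROM mem LIMIT 5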

[Image: tele2]
Here are some of the `cpu` measurement records:

[Image: tele3]
Here are some of the `mem` measurement records:

[Image: tele4]
As a next step, you could hook visualization (Chronograf) or alerts (Kapacitor) into this Telegraf database.

Conclusion

This concludes the 7-part tutorial on using the TICK stack from InfluxDB. The TICK stack provides a best-in-class set of components for building modern and extensible solutions on a time series database. We hope this tutorial gave you a glimpse of its potential and gets you started creating winning applications.

What’s next?

  • Get started with InfluxDB here.
  • Looking to level up your InfluxDB knowledge? Check out our economically priced virtual and public trainings.

Part 6: How-to Create an IoT Project with the TICK Stack on the Google Cloud Platform


Part 6 : Setting up Alerts with Kapacitor

In this part, we are going to take a look at Kapacitor, the “K” in the TICK stack. Kapacitor is a stream and batch processing engine that acts as both a data processor and an alerting engine. In our case, we are going to use it in the following way:

  • Define an Alert that monitors the temperature data and checks if it crosses a threshold of 30 degrees Celsius.
  • If the temperature reported is greater than that, we would like to log this record in a file and also raise a notification in our Slack channel.

Kapacitor’s capabilities go well beyond that: it comes with a sophisticated engine to detect data patterns and funnel alerts to the many channels it supports out of the box. In our case, logging the high temperature to a file and raising a notification via Slack are just two of the integrations it can perform.

So let’s get started with setting up Kapacitor and seeing our temperature alert in action.

Installing Kapacitor

We are going to run Kapacitor on the same instance as InfluxDB. This instance is running on Google Cloud, so the best way to install the software is by SSH’ing into it.

To set up Kapacitor on our VM instance (the instance running InfluxDB), we need to SSH into it. Follow these steps:

  • Login to Google Developers Console and select your project.
  • From the sliding menu on top, go to Compute –> Compute Engine –> VM Instances
  • You will see your VM instance listed.
  • Look out for the SSH button at the end of the row.
  • Click that and wait for the SSH session to get initialized and set up for you. If all is well, you should see another browser window that will transport you to the VM instance as shown below:

[Image: k1]
The next thing is to install Kapacitor. Since we opted for Debian Linux when creating the VM, we can follow the steps for installing Kapacitor as given at the official documentation site.

wget https://s3.amazonaws.com/kapacitor/kapacitor_0.10.1-1_amd64.deb
sudo dpkg -i kapacitor_0.10.1-1_amd64.deb

On successful installation, you will have two applications that we will be using:

  • `kapacitord` : This is the Kapacitor daemon that will need to be running to process the data coming into InfluxDB.
  • `kapacitor` : This is the CLI that we will use to talk to the kapacitord daemon, set up our tasks, and more.

Generating a default Configuration file

Kapacitor is a powerful product with many configuration options, which makes it challenging to create an initial configuration file from scratch. To make this easier, we can use the kapacitord application to generate a default configuration file.

We go ahead and generate a default configuration file, `kapacitor.conf`, as shown below:

$ kapacitord config > kapacitor.conf

The Configuration file (kapacitor.conf) has multiple configuration sections including connection to InfluxDB, the various channels that one can configure and more.

Here are few configuration sections of interest in the `kapacitor.conf` file:

  • `[http]` : This is the API endpoint that kapacitord exposes and that the Kapacitor client communicates with.
  • `[influxdb]` : On startup, Kapacitor sets up multiple subscriptions to InfluxDB databases by default. This section has various configuration properties for connecting to the InfluxDB instance. You will notice that it is a localhost URL, since the InfluxDB instance is running on the same machine.
  • `[logging]` : This section has the default logging level. This can be changed if needed via the Kapacitor client.
  • `[slack]` : The section we are interested in for this tutorial, since we want to get notified via Slack. The various properties include the Slack channel we want to post messages to, the incoming Webhook URL for the Slack team, and so on. We will look at this a little later, when we set up the Slack Incoming Webhook integration.

Start the Kapacitor Service

We do not make any changes to our `kapacitor.conf` file at the moment. We simply launch the Kapacitor Service as shown below:

$ kapacitord -config kapacitor.conf

This starts up the Kapacitor service, and towards the end of the console logging you will notice that a number of subscriptions are set up, including one on the temperature_db database that we are interested in.

Kapacitor Client

The Kapacitor CLI (kapacitor) is the client application you use to communicate with the Kapacitor daemon. You can use the client not just to configure alerts and enable/disable them, but also to check on their status and more.

One way to check whether any tasks are set up in Kapacitor is the list tasks command. We run it as shown below:

$ kapacitor list tasks
Name       Type      Enabled   Executing Databases and Retention Policies

This shows that currently there are no tasks configured.

Create the High Temperature Alert Task via TICKscript

The next thing we are going to do is set up the task that detects whether the temperature reported by station S1 reaches or exceeds 30 degrees Celsius. The task script is written in a DSL called TICKscript.

The TICKscript for our High Temperature alert is shown below:

stream
   .from()
   .database('temperature_db')
   .measurement('temperature')
   .where(lambda:"Station" == 'S1')
   .alert()
   .message('{{index .Tags "Station" }} has high temperature : {{ index .Fields "value" }}')
   .warn(lambda:"value" >= 30)
   .log('/tmp/high_temp.log')

The script is intuitive enough to read:

  • We are working in stream mode, which means Kapacitor subscribes to a real-time data feed from InfluxDB, versus batch mode, where Kapacitor queries InfluxDB in batches.
  • We then specify which InfluxDB database via database(). This will monitor the stream of data going into our temperature_db database.
  • A filter is specified for the Tag Station. The value that we are interested in is the “S1” station.
  • For the above criteria, we would like to be alerted only if the measured value is greater than or equal to 30.
  • If it is, then we would like to log that data in a temporary file for now (we will see the Slack integration in a while).
  • The custom message that we would like to capture is also specified, for example “S1 has high temperature : 30.5”.

We save the above TICKscript in `temperature_alert.tick` file.

Configure the High Temperature Alert Task

The next step is to use the Kapacitor client to define this task and make it available to Kapacitor. We do that via the define command as shown below:

$ kapacitor define \
-name temp_alert \
-type stream \
-tick temperature_alert.tick \
-dbrp temperature_db.default

Notice the following parameters:

  • We name our Task as temp_alert.
  • We specify that we want to use this in stream mode.
  • We specify the TICKscript file : temperature_alert.tick.
  • The Database Retention Policy is selected as the default one (infinite duration and a replication factor set to the number of nodes in the cluster) from the temperature_db database.

We can now look at the tasks that the Kapacitor Service knows about as given below:

$ kapacitor list tasks

Name                 Type      Enabled   Executing  Databases and Retention Policies
temp_alert           stream    false     false      ["temperature_db"."default"]

You can see that the Enabled and Executing properties are set to false.

Dry Run : Temperature Alert

One of the challenges you face while developing an alerting system is testing it out before it goes into production. A great feature of Kapacitor is the ability to do a dry run of the alert against a snapshot/recording of data.

The steps are straightforward and at a high level we have to do the following:

  • Make sure that the Alert (temp_alert) is not enabled. We verified that in the previous section.
  • We ask Kapacitor to record a stream of data that is coming into InfluxDB for a given interval of time (say 20 seconds or 30 seconds). While recording this data, we ensure that some of the data coming in meets the condition to fire the Alert as we are expecting. In our case, if the temperature is above or equal to 30, then it should log the data.
  • Kapacitor records the data in the defined time interval above and gives us a recording id.
  • We then replay that data and tell Kapacitor to run it across the Alert (temp_alert) that we have defined.
  • We check if our TICKscript associated with the alert is working fine by checking our log file (/tmp/high_temp.log) for any entries.
  • If the Test runs fine, we will then enable the task.

Let’s get going on this. Our `temp_alert` task is not yet enabled, i.e. the value of the `Enabled` attribute is false, as we saw in the `kapacitor list tasks` output.

The next step is to ask Kapacitor to start recording the data for our alert. We ask it to record the data in stream mode (the other options are batch and query). We specify the duration as 30 seconds and also specify the task name (`temp_alert`).

kapacitor record stream -name temp_alert -duration 30s

This makes Kapacitor record the live stream of data for 30 seconds, using the database and retention policy from the specified task. If your data is streaming in, give it the full 30 seconds to record. Alternatively, you can generate INSERT statements using the influx client.

Just ensure that the time interval from the first INSERT to the last is equal to or longer than the specified duration (30 seconds), whether you send data via manual INSERT statements or it is streaming in.
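For example, a few manual data points can be generated from the influx shell as shown below. The values are illustrative; the line protocol matches the measurement, tag, and field used earlier in this series.

$ influx
> USE temperature_db
> INSERT temperature,Station=S1 value=28
> INSERT temperature,Station=S1 value=31
> INSERT temperature,Station=S1 value=32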

Once the duration elapses, the `kapacitor record` command completes and outputs a recording id, an example of which is shown below:

`fbd79eaa-50c5-4591-bbb0-e76f354ef074`

You can check if the recordings are available in Kapacitor by using the following command:

kapacitor list recordings <recording-id>

A sample output is shown below:

ID                                      Type    Size      Created
fbd79eaa-50c5-4591-bbb0-e76f354ef074    stream  159 B     17 Feb 16 22:18 IST   

A size greater than zero indicates that the data was recorded. Now all we need to do is replay the recorded data against the alert we have defined. The -fast parameter replays the data as fast as possible rather than waiting out the full duration over which it was recorded (in our case, 30 seconds):

kapacitor replay -id $rid -name temp_alert -fast

where `$rid` is a variable that contains the value of the `Recording Id`.
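For example, in a bash shell you could store the id printed by the record command in a variable and pass it to the replay (the id below is just the sample value from above):

$ rid=fbd79eaa-50c5-4591-bbb0-e76f354ef074
$ kapacitor replay -id $rid -name temp_alert -fast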

The data I used during the recording phase contained values over 30 degrees Celsius for some of the records, so I would expect the alert to fire on those records and write them to the `/tmp/high_temp.log` file.

On checking the file `/tmp/high_temp.log` for entries, we do notice the entries as shown below:

$ cat /tmp/high_temp.log
{"id":"temperature:nil","message":"S1 has high temperature : 31", … }
{"id":"temperature:nil","message":"S1 has high temperature : 32", … }
{"id":"temperature:nil","message":"S1 has high temperature : 31”, … }

Enable the Task

Now that we have validated that our Alert is working fine, we need to go live with it. This means we need to enable the task as shown below:

$ kapacitor enable temp_alert

You can now check up on the details of your task via the `show` command as shown below:

$ kapacitor show temp_alert

This will print out details on the task along with the TICKscript for the Task as given below:

Name: temp_alert
Error:
Type: stream
Enabled: true
Executing: true
Databases Retention Policies: ["temperature_db"."default"]
TICKscript:
stream
   .from()
   .database('temperature_db')
   .measurement('temperature')
   .where(lambda:"Station" == 'S1')
   .alert()
   .message('{{index .Tags "Station" }} has high temperature : {{ index .Fields "value" }}')
   .warn(lambda:"value" >= 30)
   .log('/tmp/high_temp.log')

DOT:
digraph temp_alert {
stream0 -> stream1 [label="0"];
stream1 -> alert2 [label="0"];
}

Note that the `Enabled` and `Executing` properties are now true.

High Temperature Alert in Action

If the temperature values are coming in, the Task will be executed and the record will be written to the log file. A specific record from the `/tmp/high_temp.log` file is shown below:

{"id":"temperature:nil","message":"S1 has high temperature : 30","time":"2016-01-22T06:37:58.83553813Z","level":"WARNING","data":{"series":[{"name":"temperature","tags":{"Station":"S1"},"columns":["time","value"],"values":[["2016-01-22T06:37:58.83553813Z",30]]}]}}

Notice that the message attribute has the message along with other tags, values and timestamp.

This confirms that our High Temperature Alert task has been set up correctly and is working fine. The next thing to do is to set up the Slack channel notification.

Slack Incoming Hook Integration

The Slack API provides multiple mechanisms for external applications to integrate with it. One of them is the Incoming Webhooks integration. Via this mechanism, external applications can post a message to a particular channel or user inside a Slack team.

Kapacitor supports posting messages to your Slack Team via this mechanism, so all we need to do is provide the details to the Kapacitor configuration, specify the slack notification in our TICKscript and we are all set.
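Under the hood, an incoming webhook is simply an HTTP POST of a JSON payload to the Webhook URL. As a rough sketch of what such a post looks like (the URL below is a placeholder for the one you will obtain in the next step, and the message text is illustrative):

$ curl -X POST -H 'Content-type: application/json' \
    --data '{"text": "S1 has high temperature : 31"}' \
    https://hooks.slack.com/services/<rest of Webhook URL>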

Enable Slack Channel

The first step is to enable this integration inside of your Slack Team. To do that, we will assume that you are logged in to your Slack Team and you are the Administrator.

Go to Slack App Directory and click on Make a Custom Integration as shown below:

[Image: k2]

This will bring up a list of Custom Integrations that you can build for your team and we will select the Incoming WebHooks as shown below:

[Image: k3]

We want the message to be posted to the #general channel, so we select that channel and click on the Add Incoming WebHooks integration.

[Image: k4]

This completes the webhook setup and leads you to the details page for the integration that you just set up. This page contains the Webhook URL, which you need to note down. Kapacitor just needs this information so that it can post the JSON payload to Slack, which in turn delivers it to your #general channel.

[Image: k5]

Configuring Slack Channel in Kapacitor Configuration file

The next thing that we need to do is go back to the `kapacitor.conf` file that our Kapacitor service was using.

In that file, you will find the `[slack]` configuration section, which we fill out as follows:

[slack]
 enabled = true
 url     = "https://hooks.slack.com/services/<rest of Webhook URL>"
 channel = "#general"
 global  = false

Notice that the Webhook URL we got from the previous section is set as the `url` property. We also enable this channel, specify the channel (`#general`) to post to, and set `global` to false, since we would like to explicitly enable the Slack integration in our TICKscript.

Save this file and restart the Kapacitor service again.

You should see the last few lines in the startup console as shown below:

[udp:temperature_db.default] 2016/01/22 06:46:53 I! Started listening on UDP: 127.0.0.1:35958
[influxdb] 2016/01/22 06:46:53 I! started UDP listener for temperature_db default
[task_master] 2016/01/22 06:46:53 I! Started task: temp_alert

Notice that the listener has been started for our temperature_db database and our task has also been started.

Add Slack Channel to TICKscript

We have not yet modified our TICKscript, which only logged the high temperature to a file. We will add the Slack channel now.

Open up the `temperature_alert.tick` file in an editor and add the `.slack()` line as shown below:

stream
   .from()
   .database('temperature_db')
   .measurement('temperature')
   .where(lambda:"Station" == 'S1')
   .alert()
   .message('{{index .Tags "Station" }} has high temperature : {{ index .Fields "value" }}')
   .warn(lambda:"value" >= 30)
   .slack()
   .log('/tmp/high_temp.log')

Save the `temperature_alert.tick` file.

Reload Task

We now need to reload the task because we have changed the script. To do that, you define the task again (using the same name) as shown below. The `define` command automatically reloads an enabled task:

$ kapacitor define -name temp_alert -tick temperature_alert.tick

Slack Channel Notification

We are all set now to receive the Slack notification. If the temperature data is streaming in and a temperature value is greater than 30 degrees Celsius, you will see a notification in Slack. Shown below is a sample record in our #general channel:

[Image: k6]
This concludes the integration of Kapacitor into our IoT sensor application.

What’s next?

  • In part seven, we will explore how to use Telegraf to collect system data about our temperature sensors. Follow us on Twitter @influxdb to catch the next blog in this series.
  • Looking to level up your InfluxDB knowledge? Check out our economically priced virtual and public trainings.

Part 5: How-to Create an IoT Project with the TICK Stack on the Google Cloud Platform


Part 5 : Visualizing IoT Sensor Data with Chronograf

So far in the series, we’ve managed to set up InfluxDB, and it is now receiving data from various temperature stations. While we can use the InfluxDB web application to query for data, it falls short when it comes to visualizing that data via charts and dashboards, for example the ability to continuously monitor the data and see it on a graph.

In this part of the tutorial, we are going to look at Chronograf. Chronograf is the “C” in the TICK stack and is used for visualization of InfluxDB data. We can use this tool to look at the temperature readings that have been reported via various stations.

Chronograf is a standalone application that you need to set up on a system. It could reside on premises, in the cloud, or even run on your laptop, as we are going to see. As long as you configure it to point to the appropriate InfluxDB data and set up your graphs/dashboards, you can run it from anywhere. Below is a simple GIF that gives you an idea of how the interface looks: Continue reading Part 5: How-to Create an IoT Project with the TICK Stack on the Google Cloud Platform

Part 4: How-to Create an IoT Project with the TICK Stack on the Google Cloud Platform


Part 4 : Integrating InfluxDB into the IoT Project

So far in the series, we have looked at what InfluxDB is, set up an InfluxDB host on Google Compute Engine, and written a Python application that can interact with it.

This part now integrates the Python Client application code that we wrote in the previous section with an actual IoT project that uses an Arduino, a Temperature Sensor and InfluxDB as our database to store all the temperature readings that we collect.

First up, let me explain what the eventual goal is and how this is a first step in that process. The goal is to set up a series of low cost climate/environment modules that capture various types of data like temperature, humidity and more. Then take all this data and put it in the cloud where we can eventually build out dashboards, alerts and more. All the good stuff in the cloud will be powered by InfluxDB.

We will now explain a setup where we create a system comprising an Arduino Uno, a temperature sensor, a Python application that can read the data from the Arduino Uno (yes, I did not use an Arduino Internet Shield) and post that data to the cloud.

The Hardware Setup

I used the following:

  • Arduino Uno microcontroller
  • LM35 Temperature Sensor
  • Eventually a Raspberry Pi will interface with the Arduino to read and transmit the values, but to validate things for now, the Uno was powered via a laptop/desktop with Python installed on it. The Uno and the PC communicate over a serial port.

Arduino Uno + Temperature Sensor Setup

Here is how the LM35 sensor is connected to the Arduino Uno board.

[Image: lm35]

Of course, we used a breadboard to connect all this together but I am simplifying the diagram here so that you know what is connected to which pin.

The LM35 has 3 pins:

  • The first pin goes to the 5V power pin on the Arduino.
  • The third pin goes to GND.
  • The middle pin is VOUT, which outputs the voltage we need to capture. We connect it to analog pin A0 on the Arduino and read its value in the Arduino code shown next.

Arduino Code

The Arduino Code is straightforward as given below:

// The LM35 VOUT pin is connected to analog pin A0
float temp;
int tempPin = 0;

void setup()
{
  // Open the serial port so the attached PC can read the values
  Serial.begin(9600);
}

void loop()
{
  // Read the raw 10-bit ADC value (0-1023 over a 0-5V range)
  temp = analogRead(tempPin);
  // Convert to degrees Celsius: 5V / 1024 steps is about 4.88 mV per step,
  // and the LM35 outputs 10 mV per degree C, hence the 0.48828125 factor
  temp = temp * 0.48828125;
  Serial.print(temp);
  Serial.println();
  // Wait 10 seconds between readings
  delay(10000);
}

You will notice in the loop that every 10 seconds, we are printing out the temperature value that was read from the Analog Pin (#0).

If you run the serial port monitor that comes with the Arduino IDE and if the Arduino is powered up and connected as per the diagram shown, then you will find the Temperature value being printed on the serial monitor as given below:

[Image: com15]
Once this happens, we know that the Arduino setup is working, and all we need to do now is write a client program on the PC that interfaces with the Arduino, reads the values via the serial port, and writes those values to the InfluxDB database.

We can now integrate the code that we had used in the previous section to write to the InfluxDB database. The integrated code is shown below:

import serial
import time
import datetime
from influxdb import InfluxDBClient

# Set up some constants with the InfluxDB host and database name
INFLUXDB_HOST = '<PublicIPInfluxDBHost>'
INFLUXDB_NAME = 'temperature_db'

# Connect to the serial port for communication with the Arduino Uno
ser = serial.Serial('COM15', 9600, timeout=0)

# Send temperature values at a fixed interval (in seconds)
fixed_interval = 10

while True:
    try:
        # Temperature value obtained from the Arduino + LM35 temperature sensor;
        # decode the serial bytes and strip the trailing newline
        reading = ser.readline().decode('ascii', 'ignore').strip()
        if not reading:
            # Nothing on the serial port yet; wait for the next interval
            time.sleep(fixed_interval)
            continue
        temperature_c = float(reading)

        # Timestamp
        timestamp = datetime.datetime.utcnow().isoformat()

        # Station name that is recording the temperature
        station_name = "S2"

        # Initialize the InfluxDB client (could also be created once, outside the loop)
        client = InfluxDBClient(INFLUXDB_HOST, '8086', '', '', INFLUXDB_NAME)

        # Write a record
        json_data = [
            {
                "measurement": "temperature",
                "time": timestamp,
                "tags": {
                    "Station": station_name
                },
                "fields": {
                    "value": temperature_c
                }
            }
        ]

        bResult = client.write_points(json_data)
        print("Result of Write Data : ", bResult)
        time.sleep(fixed_interval)
    except (ValueError, serial.SerialTimeoutException):
        print('Error! Could not read the Temperature Value from unit')
        time.sleep(fixed_interval)

That completes the integration. We now have an end-to-end IoT prototype application that collects a temperature reading every 10 seconds and stores it in InfluxDB. This is just one weather station reading this data; we can now provision and deploy multiple such weather stations across the city. Each weather station will have the same setup and code, and the only change will be the station name, which will be set to that particular station's identifier.

Since we made the station name a tag, our InfluxDB setup lets us store the data and query it for a single station, multiple stations, or all stations. A couple of illustrative queries are shown below.
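For example, the following InfluxQL queries (illustrative) pull recent readings for a single station and an aggregate across all stations:

SELECT * FROM temperature WHERE "Station" = 'S2' LIMIT 10
SELECT MEAN(value) FROM temperature GROUP BY "Station"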

This concludes the tutorial for setting up InfluxDB and integrating it with an IoT project.

What’s next?

  • In Part Five, we will investigate other parts of the TICK stack and see how it can further help us to collect and visualize the sensor data that we are going to be collecting. We will also see how to setup alerts depending on certain threshold values we see in the data. Follow us on Twitter @influxdb to catch the next blog in this series.
  • Looking to level up your InfluxDB knowledge? Check out our economically priced virtual and public trainings.

Part 3: How-to Create an IoT Project with the TICK Stack on the Google Cloud Platform


Part 3 : Using InfluxDB Client Libraries

InfluxDB provides support for Client Libraries in multiple programming languages. A client library goes a long way in wrapping the core HTTP API with a high level wrapper, so that you can work directly with the language and not worry about the low level mechanics of the API.

Hop over to HTTP API client libraries documentation and check out the list of programming languages that InfluxDB supports.

In part 3, we are going to look at the Python client library to perform the same operations that we have been doing so far i.e. query for records and insert some records.

The first step is to install the InfluxDB Python library in your local setup; the documentation provides information on that. You can do that via the pip install mechanism.

`$ pip install influxdb`

The steps for using the Python client library are conceptually the same as what we have been doing so far (a minimal sketch follows this list):

  • Import the python library
  • Establish a client connection to the InfluxDB host. In our case, this is the host running on Google Compute Engine.
  • Use the client object to query for records. You can use the queries similar to the ones that we saw in the previous part.
  • Use the client object to write a record or two to the InfluxDB database.
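Here is a minimal sketch of these steps. It is not the full `influxdbclient.py` program from the post; the host placeholder and database name simply follow the conventions used earlier in this series.

from influxdb import InfluxDBClient

# Connect to the InfluxDB host running on Google Compute Engine
client = InfluxDBClient('<PublicIPInfluxDBHost>', 8086, '', '', 'temperature_db')

# Query for a few records
result = client.query('SELECT * FROM temperature LIMIT 5')
print(list(result.get_points()))

# Write a record
point = [
    {
        "measurement": "temperature",
        "tags": {"Station": "S1"},
        "fields": {"value": 29.5}
    }
]
print(client.write_points(point))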

The `influxdbclient.py` Python program is shown below:
Continue reading Part 3: How-to Create an IoT Project with the TICK Stack on the Google Cloud Platform

Part 2: How-to Create an IoT Project with the TICK Stack on the Google Cloud Platform


Part 2: Using the InfluxDB API

InfluxDB comes with a built-in HTTP API that you can use to perform all the basic operations. The API is powerful enough to allow both administrative operations, like creating databases, and data management operations, along with querying.

To recap from Part 1: we have installed InfluxDB on the Google Cloud Platform, powered by a single Compute Engine instance. Remember that InfluxDB uses two ports, as we saw in the previous part: 8083 (Admin web application) and 8086 (API). We exposed both of these ports to traffic by configuring the firewall rules in Google Compute Engine.

The InfluxDB API starting page is available here. It exposes 3 endpoints that are self-explanatory:

  • /ping
  • /query
  • /write

Since the HTTP API is exposed on port 8086, the endpoints take the following form (shown here for the /ping endpoint; substitute your instance's public IP for the placeholder):

http://<PublicIPInfluxDBHost>:8086/ping
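As a quick sanity check (illustrative), you can call the /ping endpoint with curl; a healthy instance responds with an HTTP 204 status:

$ curl -i http://<PublicIPInfluxDBHost>:8086/ping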

InfluxDB also has a set of client libraries that you can use from your programming language of choice. That definitely makes things easier, and we will take a look at it too, but for now we will deal with the basic HTTP calls so that we understand them better.

What we are going to do is execute some HTTP POST calls to the InfluxDB API Server to insert some records. Remember that in the previous section, we created the following:

  • A database named temperature_db
  • Inserted a few records with the following characteristics:
  • A measurement named temperature
  • Each measurement has a single tag named Station, and its value is something like S1, S2 and so on.
  • Each measurement has a value field, which is the temperature reading in degrees Celsius.

The Writing Data API guide for InfluxDB API comes in handy here.
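As a sketch of what such a call looks like, a single temperature reading can be written with the request below. The values are illustrative and the line protocol matches the schema described above; substitute your instance's public IP.

$ curl -i -XPOST 'http://<PublicIPInfluxDBHost>:8086/write?db=temperature_db' \
    --data-binary 'temperature,Station=S1 value=29.5'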

Continue reading Part 2: How-to Create an IoT Project with the TICK Stack on the Google Cloud Platform

Part 1: How-to Create an IoT Project with the TICK Stack on the Google Cloud Platform


InfluxData is a platform for collecting, storing, visualizing and managing time series data. My particular interest in this platform stems from a lot of work that I had done a few years back in the RFID space. A time series database was the key to creating “Track and Trace” applications.

Since then, with the advent of hardware like the Raspberry Pi, Arduino, and Tessel, and of low cost sensors, it has become straightforward to start collecting data. In most sensor systems, a time series database is again very useful, since you typically end up collecting a series of sensor readings over time, for example capturing the temperature every 5 minutes at a particular location. Replace temperature with your favorite sensor data and you get the picture. While my requirement for time series data might be restricted to IoT and sensors, there are fairly wide-ranging applications of a time series database. Check out the use cases that InfluxData has put up.

You always have the option to build out your own solution to capture this data. But in addition to capturing data, you will need dashboards, alerts based on data values, and more. It makes pragmatic sense to look at platforms that have been created to handle all of this and more.

At a high level, what you want from a platform like this are functional requirements like:

  • Storing time series data with flexibility in the data schema to allow for more fields or tags moving forward.
  • Collection or integration mechanisms to translate/push time series data from various sources into this single normalized database.
  • Standard dashboards that allow for querying/visualization in the product itself. For most projects, this is sufficient unless you have custom requirements.
  • Ability to define alerts on the data and notify users/applications.

There will definitely be more requirements, but the above is a general list that would be expected from any such platform. In addition, there could be cross-cutting concerns like security, logging, etc. that would be requirements too.

InfluxData caught my interest because it has elements (modules / applications) that address all of the above points. Its modules form the TICK stack, and I am reproducing the diagram from the official documentation:

[Image: tick-stack-grid]

This tutorial series will start off with InfluxDB and get it up and running on the Google Cloud Platform. Once we get that in place, we will build out an IoT project with Arduino/Python and feed that data into InfluxDB. Then we could look at visualization and/or alerts via other products in the TICK stack.

Part 1 : Get InfluxDB up and running on Google Cloud Platform

Google Cloud Platform – create a project

The first step is to create a project. Follow these steps:

  • Visit Google Developers Console and login with your account.
  • Click on Create Project. This will bring up the New Project dialog. Enter a name for the project and ensure that you have selected the correct Billing Account. An example screenshot is shown below:

Continue reading Part 1: How-to Create an IoT Project with the TICK Stack on the Google Cloud Platform

How to send sensor data to InfluxDB from an Arduino Uno


Introduction

Since InfluxDB is a time series database, it’s the perfect candidate for storing data collected from sensors and various other devices that comprise the Internet of Things. In this article, we’ll look at a basic use case involving data collection from a temperature sensor connected to an Arduino Uno, then sent to InfluxDB via the native UDP plugin. Continue reading How to send sensor data to InfluxDB from an Arduino Uno