Webinar Date: 2018-08-08 08:00:00 (Pacific Time)
In this webinar, InfluxData and Particle share how to use these two popular tools to build a solid end-to-end IoT platform. With a combined community of over 280,000, this solution has already proven itself at gathering metrics and events from IoT devices and sensors to help you manage your devices successfully.
Watch the webinar “How to Use InfluxDB & Particle to Create That Perfect End-to-End IoT Solution” by filling out the form and clicking on the download button on the right. This will open the recording.
Here is an unedited transcript of the webinar “How to Use InfluxDB & Particle to Create That Perfect End-to-End IoT Solution”, provided for those who prefer reading to watching the webinar. Please note that the transcript is raw; we apologize for any transcription errors.
• Chris Churilo: Director Product Marketing, InfluxData
• David Simmons: IoT Developer Advocate, InfluxData
• Jeff Eiden: Senior Product Manager, Particle
David Simmons 00:00:01.103 Great. Thanks, Chris. I’m going to be—I’ll be your driver today, and Jeff will be our [inaudible] color analyst, as we’re calling it. So, Chris, if you could also feel free to put my Twitter handle in the chat. People are welcome to follow me on Twitter and ask questions there as well. I respond fairly quickly there.
Jeff Eiden 00:00:25.705 Looking forward to doing this with you, David. And looking forward to having a chance to speak about this and answer questions from folks out there in the world.
David Simmons 00:00:37.958 Right. I’m super-excited about this one. I have been using Particle for quite a while and this integration with InfluxData was sort of one of the first projects I did when joining InfluxData, and it has been super-useful for me. I use it all the time. So I’m hoping you guys find it useful as well. And we’ll go through sort of a little bit of the background of InfluxData, and a little bit of background of Particle, and why we did this, and why this is all happening as well. So just to start off, you know, the Internet of Things, everybody talks about the number of things that are going to be added to the Internet, you know, 20 billion devices, and that’s great for Particle as they make things, right? Right, Jeff?
Jeff Eiden 00:01:26.423 Yeah.
David Simmons 00:01:27.839 Well, what’s often not talked about, is this number, which is the amount of data that those things are going to generate. So even though we called it the Internet of Things, it’s also sort of the Internet of data because each one of these things is generating a stream of data. And so just to give you a little bit of perspective on what a zettabyte is, it’s huge. Right?
Jeff Eiden 00:01:56.605 And real quick, David, just wanted to add there: it’s interesting that you can see that number and think, “Okay, how do you manage that data once it does get into the cloud?” A lot of what we’ve been focused on in our history at Particle is that first very important step: how do you get the physical world connected to the Internet to be able to send that quantity of data? So both halves—which we’ll go into on the next slide—are very important in terms of how the Internet of Things matures as an industry: making it easier for the physical world to connect to the Internet and send that data in a secure and scalable way, as well as being able to manage that data once it does get into the cloud.
David Simmons 00:02:44.529 That’s a great point, Jeff. Thanks for that. So if you look at sort of an IoT platform, you’ve got your devices and your sensors out on the—really the far edge, and those are the ones that are instrumenting your world. Those are your Particle devices that have their sensors attached to them and they’re actually collecting data. And some of them are connected to a gateway, but they’re again feeding their data back into a cloud like the Particle cloud, and driving those enterprise applications because again, it’s all about what your sensors are delivering in terms of data to your applications, so that you can actually begin to instrument your physical world. And so, we see these layers that go across all of that, right? There’s the connectivity to connect the devices. You’ve got to manage the devices. You’ve got to have security on the devices. And then on top of that is this data layer, these data services that run from device to application. And one of the great things about this engagement with Influx and Particle is, we really provide that data service, but those other three layers are really what Particle excels at. You want to step in on that a little bit, Jeff?
Jeff Eiden 00:04:13.081 Yeah, sure. One of the things that I like about this slide is that it paints the picture of IoT as a stack, right? And that stack, compared to other kinds of technologies that are perhaps software-only, is tremendously wide. So when you think about something like security in IoT, that really means a security suite, if you will. Because you need that sense of security, knowing that your data isn’t being tampered with, isn’t subject to man-in-the-middle attacks, and is being encrypted all the way through the stack, from the hardware, through the connectivity layer, into the cloud, and then beyond, once it goes into those enterprise applications, as David mentioned. So the same is true for all three of these bands here.
Jeff Eiden 00:04:56.973 So Particle focuses on providing you with these tools as a kind of end-to-end, fully integrated IoT solution. And to David’s point, really Influx picks up where Particle leaves off, in that we help you connect the physical world to the Internet using our security features, our hardware, which we’ll talk about, our Device OS, and our Device Cloud, which offers the device management and connectivity piece. But once that data gets to the cloud, it’s really important to deal with that large surge of data coming from these connected devices and know what to do with it. And with the TICK Stack, with the time series nature of Influx, its ability to handle data in a very scalable way, and its tools to make exploration of that data easy and to glean insight from it, it’s really a match made in heaven. I’ve been really thankful for David’s work in laying down the foundations for how you can integrate Particle with Influx. And I think you’ll see later, as we go through the actual tutorial, that it’s really quite easy to get them working together. The power of these tools together makes for a really special experience, and it will help realize that ultimate goal when you start to build your IoT products: getting some of that business value that has driven so much of the interest in this space to begin with.
David Simmons 00:06:26.591 Great. Thanks. Slide not advancing. Great. So the Eclipse Foundation did a survey of IoT developers earlier this year, and 62% of IoT data, from the developers’ point of view, is viewed as time series data. I found that a little low for what I would have thought it was, but IoT data is time series data, right? Because if you look at what time series data is, it’s this: it’s a sensor reading at a time, right? It’s your altitude, it’s your speed, it’s your course. It’s all of these readings with a timestamp, right? And so what you’re doing is, you’re not running a traditional database where you’re updating values; you’re basically streaming time series values into this database and you’re just adding to the end of it continuously. It’s not like you’re going to go back later and say, “Well, actually the temperature was something different,” right? It’s not something that you’re going to update; it’s really a steady stream of data points over time. And so on the left side, you can see some raw data that gives you the time value and various sensor readings, right? And then on the right side, you can see this graph that shows you sort of what those values are over time and how they change. It gives you good insight into what’s going on with the instrumented environment in real time, and over time, so you can see where my temperature is going up and down and the pressure’s not changing, or various other things, right?
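As a concrete illustration of the kind of raw data shown on that slide, here is a sketch of a single point in InfluxDB's line protocol (the measurement, tag, and field names are illustrative, not from the webinar):

```
weather,device=photon-office temp_f=70.70,pressure=1013.2 1533715200000000000
```

One measurement name, a tag identifying the source device, two numeric field values, and a nanosecond timestamp: a sensor reading at a time, appended to the end of the series rather than updated in place.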
Jeff Eiden 00:08:39.665 Yeah. These tools that Influx provides for data exploration are phenomenal. And I did just want to comment on the “62% of IoT data is time series data” figure. That’s over half, right? So it is a fair chunk. But I also think people might consider IoT data to be a bit broader, including things you might not think of as being IoT data. For instance, sending firmware updates, over-the-air firmware binaries to devices, or device configuration. If the question had been asked in terms of what percentage of IoT sensor data is time series data, I imagine that percentage would be even higher. So if we think about it through the lens of data collected by devices through sensors and actuators, sensing their physical environment, I think you would find the percentage of time series data to be even higher than what was reported.
David Simmons 00:09:39.248 Good point. And a lot of that is also some of what we call sensor metadata: the model number, the deployment date, and other information that you may have about that sensor as it’s deployed. So in terms of IoT data services, there’s a bunch of things that InfluxData provides, mostly around collection, streaming analytics, storage, visualization, and control. And what that means is, you want to be able to collect your data as fast as it is generated, right? You don’t want your collection point to be the bottleneck in your data stream. You really want to be able to stream that data to your real-time analytics, so that you can do your visualizations and respond to data in real time, right? One of the things that I’ve been saying for years about IoT data is that if you’re not actually taking action on your data, there’s not a whole lot of point in collecting it, right? If what you’re doing is collecting things like vibrational analysis on a motor, and you go back six months later and look at your data and say, “Yep, that vibration went out of whack.” And sure enough, three days later the motor blew up and we had to shut the factory down and fix the motor, right? That’s the kind of information that you want in real time so that you can do predictive analytics and predictive maintenance on things, rather than just verifying what you already know, which is that the motor blew up, right? So it’s that streaming analytics: being able to store large amounts of this time series data, visualize that data, and then use that data in real time to control the output and control what’s going on in the environment, right?
Jeff Eiden 00:11:33.890 That’s absolutely spot-on, David. And actually, predictive maintenance, preventative maintenance, and real-time monitoring of assets is something that we see as one of the main use cases of real IoT products solving real business problems at Particle. And to that extent, driving significant business value in terms of minimizing downtime and avoiding costly truck rolls when they’re not necessary. All these things add up to creating quite a bit of value, and quite a bit of efficiency, for companies who are building IoT products.
David Simmons 00:12:15.340 Right. And there’s actually a great case study on this on your website, Jeff, about how Logical Advantage used Particle to deploy just such a system to textile manufacturing.
Jeff Eiden 00:12:28.450 Exactly. Yes, they did so with their spindle machines, to monitor them, because any downtime that they experience is quite costly for their business. And to your point, it’s not just the real-time understanding of “are the machines working or not”; they can go back and analyze that data and understand, “Perhaps there were certain indicators that certain parts were failing,” and then three months later it led to a much bigger outage. And they can use that information to build predictive models to maximize uptime moving forward.
David Simmons 00:13:00.172 Exactly. That’s my buddy, Dan. He’s great at that stuff [crosstalk] Logical Advantage.
Jeff Eiden 00:13:04.132 Right on.
David Simmons 00:13:07.558 So why InfluxData, right? I mean, why are we doing this? And I’m not going to spend a whole lot of time on this, even though it’s super-complicated looking. But really, what this boils down to is the functional graph of what we call the TICK Stack. So the “T” is Telegraf, and that’s our collection agent. We’ve got over 160 plugins, one of which is for Particle, that collect data from various sources. And it’s a very high-performance collection engine that allows you to collect data from a lot of different sources. That feeds into the “I” in the TICK Stack, which is InfluxDB. It’s a time series database purpose-built from the ground up, specialized for the special case of time series data, which is different from other kinds of data, right? The “C” in that is Chronograf. That’s our graphical front end, and you’ll see a screenshot of it later, showing how you can build visual dashboards of what’s going on with your IoT data. And the last part, which we probably won’t get to much today, is Kapacitor. That is a real-time data processing engine that allows you to react to your incoming data in real time. And we can have another webinar at a later date on some of the things that I’ve done with Kapacitor in regard to responding to IoT data and making changes. In fact, that’s what the 3D printer is printing right now: part of a Kapacitor response-in-the-physical-world demo that I’m building.
Jeff Eiden 00:14:54.785 David, I feel like a webinar on IoT wouldn’t be complete without 3D printing in the background, so. Kudos to you.
David Simmons 00:15:03.430 You probably do have a point there. So this is InfluxData, right? We are purpose-built for time series data. It’s what we do; it’s pretty much the only thing we do. We ingest huge volumes of data, on the order of hundreds of thousands to millions of points per second, right? On top of that, you can do real-time queries on those very large data sets. We have a built-in facility for evicting your data, so that you can set timeouts on it. I don’t really care about millisecond-level temperature data from my sensors that’s older than two months, right? I’m not going to go back and look at it with that granularity. So I can transform that data, and downsample that data, and then throw the old data away automatically. And just a little plug: if you want to show up tomorrow, I’ll be talking about how to do some of that data downsampling in InfluxDB. And we also do really well at storage optimization and compression, so we’re very lean on disk in terms of storing your IoT data. Which, again, when you’re talking about 11 and a half zettabytes of data by 2020, being able to store it efficiently and quickly is going to be key. So I’m going to let Jeff take over for the “Why Particle” part, because he’s the Particle guy.
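The retention and downsampling features described here can be sketched in InfluxQL for InfluxDB 1.x (the database, policy, measurement, and field names below are hypothetical, chosen only to illustrate the pattern):

```sql
-- Keep raw sensor data for 60 days, then evict it automatically
CREATE RETENTION POLICY "raw_60d" ON "iot" DURATION 60d REPLICATION 1 DEFAULT

-- A second policy that keeps downsampled data indefinitely
CREATE RETENTION POLICY "longterm" ON "iot" DURATION INF REPLICATION 1

-- Continuously downsample raw readings to hourly means for long-term storage
CREATE CONTINUOUS QUERY "cq_temp_1h" ON "iot"
BEGIN
  SELECT mean("temp_f") AS "temp_f"
  INTO "iot"."longterm"."temperature_1h"
  FROM "temperature"
  GROUP BY time(1h)
END
```

The continuous query runs automatically on the server, so the millisecond-granularity data ages out under the default retention policy while the hourly aggregates survive.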
Jeff Eiden 00:16:46.144 Thanks. Yeah. Can you toggle to the next slide? So where does Particle fall into place in the IoT stack? As I kind of alluded to earlier on, a lot of the customers that come to us, and developers that enter our ecosystem, have large ambitions for what they want to build with IoT. And we see a wide variety of use cases, from preventative maintenance, to monitoring of remote assets, to building certain consumer-facing IoT products. We really support a broad swath of different use cases.
Jeff Eiden 00:17:24.894 But with these grand ambitions that people come to Particle with, their first step is: how do I get my thing connected to the Internet? And, honestly, for many years the emphasis was on the cloud side first and the hardware second. What we did was kind of flip that on its head. So what we do is make that hardware piece and that firmware piece, to begin prototyping your IoT product, quite easy and quite fast. That’s really where Particle begins: we make our own hardware development kits to get started quickly, to prototype, to breadboard. And we also have industrial-grade connectivity modules for when you are ready to transition from prototype to production. If you are used to working with Arduino and Raspberry Pi, and you actually want to take something to market, what you realize quite quickly is that you pretty much have to tear everything down, start from scratch, and work on a platform that is conducive to those kinds of professional products. With Particle, we wanted to solve that problem by making it really easy for you to transition from our prototyping dev kits to our modules that are built for scale.
Jeff Eiden 00:18:37.878 We also help you with the connectivity piece. So one of our hardware families is the Electron, which is our cellular-powered development kit, with the E Series as the industrial module. And so we actually offer Particle SIM cards. We do a lot to optimize the connection between the device and the cellular towers to minimize data usage and make sure that you’re not paying for any more data than you absolutely need to be paying for.
Jeff Eiden 00:19:06.173 We also have an IoT Device Cloud, which is essentially the device management piece of the IoT stack. So you can do things like send over-the-air firmware updates, and not just to a single device, but you can actually do coordinated firmware deploys as you scale up to a fleet of thousands or millions of units, while still retaining the ability to create segmentation when you need it, using things like device groups. Recently we also rolled out a feature called remote diagnostics that lets you understand the health of your fleet even after you’ve deployed it into the field, to keep that visibility. And then the transition between Particle and a tool like Influx is the IoT App piece. So how do you use that data to create interfaces, to create business logic, and to do things like data storage and analytics? That’s where we have things like SDKs and our integrations service, which allows you to funnel your data from Particle to other tools, services, and databases, like Influx, and we’re really proud of the flexibility that our platform offers.
Jeff Eiden 00:20:08.342 We know that data, really the whole subject of this webinar, is the gold of the IoT industry: being able to collect that data, analyze it, and use it to create business value. And we know that for many people, retaining ownership over their data is super important. So we make it super easy to keep that ownership and, once that data gets to the Particle cloud, pipe it to exactly where you need it to go, into the tools that you already use for your business. So those are the four main buckets of the functionality that Particle provides. Can you go to the next slide, David?
David Simmons 00:20:41.895 Sure. And I just want to say: I have literally dozens, if not more, of different IoT development platforms around, and I always come back to Particle because of this, the hardware, the connectivity, and the cloud, and being able to easily manage a fleet of devices. It’s really quite nice.
Jeff Eiden 00:21:08.250 So this is basically to illustrate what that getting-started experience is like, as David is mentioning, and that’s what we focus so much of our time on at Particle. We make hardware, unlike other IoT providers that focus perhaps more on the cloud side. Again, we feel it’s really important that that first unboxing experience—whether you’ve been an embedded engineer for 30 years, or you’re just coming to IoT for the first time—makes sense, and that you’re able to build an end-to-end prototype to accomplish what you came to IoT to accomplish. So this is a screenshot of our Photon, which is our Wi-Fi development kit, which we’ve had around for a couple of years now. And there are useful components that come in the packaging for a dev kit, like resistors, LEDs, and a breadboard so you can easily prototype. We also make it super easy to use our developer tools. We have a Web IDE, a Desktop IDE, and a Device Management Console, and all of these devices run what we call our Device OS, which essentially ensures that they can easily connect to the Particle Device Cloud and start to send data to and from the cloud in a secure, encrypted fashion, so you never really have to worry about that piece of the puzzle.
Jeff Eiden 00:22:29.992 So I would encourage all of you to get started with one of our development kits and really put it to the test. And I hope you understand what we’re going for in solving that piece of the IoT puzzle, which is the hardware. Getting the hardware connected is really quite difficult if you go about it yourself. Our goal is to alleviate a lot of that pain and frustration and give you a smooth path from prototype to production once you are ready to move forward, with things like fleet management tools and those industrial-grade IoT modules.
Jeff Eiden 00:23:03.954 But all IoT projects start with a breadboard and a development kit, just like this. And I’d be remiss if I didn’t add a quick plug. We always get really excited about new hardware and networking technologies that we adopt. We announced Particle Mesh, which is our third generation of Particle hardware. Right now, all of our devices are essentially standalone. So this is a Photon that has its own connection to the Device Cloud; we have our Electron that connects to the Device Cloud over cellular. For the first time, we’re going to be offering the ability to create local networks. So like the slide earlier, you’ll actually have a device acting as a gateway, and then a few, if not many, edge nodes that create a localized mesh network. And the power of that is to be able to run devices at the edge that are super low power, to be able to aggregate data using a gateway, and to be able to have a local network communicate from one device to the next. Or the ability to have redundancy, right? Imagine if you had one Wi-Fi gateway and one cellular gateway: if one of those gateways goes down, your mesh network still stays up because of the power of meshing technology. So we’re really excited about the capabilities of Mesh to open up new use cases in the IoT space that we couldn’t address quite as well before. Those are shipping in October, so we look forward to getting them into the hands of our community and, again, continuing to work with Influx to push the technology forward in terms of what you can do with that data once it is in the cloud.
David Simmons 00:24:53.266 And I am seriously jumping up and down waiting for those to arrive. I can’t wait to start playing with them. I think they’re going to be a huge addition to getting IoT data in and out of the cloud. So why the two of us together? Well, I would hope that you’ve sort of figured that out by now, based on everything that I’ve said and everything that Jeff said about what Particle offers in terms of devices, development, apps, the Particle Cloud, security, and deployment, and what Influx offers in terms of data collection, storage, aggregation, reaction to data, and dashboards, right? So this is sort of an overall architecture of what that looks like, from connecting the Particle Cloud to the Telegraf agent, to collecting, storing, processing, and analyzing all of your data in InfluxDB and visualizing it with Chronograf.
Jeff Eiden 00:26:06.589 Couldn’t have said it better. I mean this is basically a high-level view of what any end-to-end IoT system would look like. You have your devices at the edge. You have a cloud to manage those devices. You have a data layer that deals with storing, processing, analyzing the data. And then you have interfaces that your employees, your end-users, your customers can use to interface with these IoT devices. So having these tools together is—I can’t overstate the power that is at your fingertips, with a very easy integration process.
David Simmons 00:26:47.695 So let’s actually get into a little bit more of the meat of what that integration process is. And I know you’ve all been sitting here wondering when we were going to get to that. But it’s super easy. It’s almost embarrassingly easy. So basically you install Telegraf—and I’m running Telegraf for my integration on a DigitalOcean Droplet—but you can run it in Amazon Cloud, or Google Cloud, or pretty much anywhere you want, on your own server. Install and start Telegraf, and then just edit the configuration file. You’ll see that in that configuration file there is a webhooks input, where you define your service address, and within that there’s a webhooks section for Particle, where the path is going to be /particle. You edit that file, restart Telegraf, and you’ll notice that you now have an endpoint listening on port 1619, which you can access at the /particle URL. Like I said, the Telegraf part of it is almost embarrassingly easy to get set up.
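For reference, the Telegraf configuration being described looks roughly like this (a sketch: the port and path follow what David says, while the InfluxDB URL and database name are illustrative):

```toml
# Webhooks input: listen for incoming Particle webhooks on port 1619
[[inputs.webhooks]]
  service_address = ":1619"

  # Particle-specific webhook handler, reachable at http://<host>:1619/particle
  [inputs.webhooks.particle]
    path = "/particle"

# Write the collected points into InfluxDB
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "particle"
```

After editing the file, restarting Telegraf (`systemctl restart telegraf` on most Linux installs) brings up the listening endpoint.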
Jeff Eiden 00:28:12.381 And this was a Telegraf plugin that you wrote, David, is that correct?
David Simmons 00:28:15.037 Right. I wrote this Telegraf plugin, and it turns out that writing plugins for Telegraf is pretty easy as well. That was my first one; I’ve now written a bunch of them. And then you can simply go to the Particle Console, click on webhook, and configure a webhook to point to that endpoint. One of the great things about the Particle Console is that it’s really simple to set these things up to integrate with the service that you’re looking for.
Jeff Eiden 00:28:52.760 Yeah. And this screenshot here, as David mentioned, is of our console. We consider this the device management hub, if you will; it’s the central place where you see information and real-time data coming back from your devices, and where you’re able to take device management actions like releasing firmware, assigning devices to groups, and seeing remote diagnostics. But a big piece, which is the focus of this presentation, is integrations. That is, as I mentioned, the pipe that allows you to easily set up streaming data from the Particle Device Cloud to external services. And webhooks make it quite easy to hit any REST API endpoint that is exposed on the Internet. So what David did is he cleverly figured out how to tie these two things together, but at the end of the day, if there is an HTTP endpoint that can be hit, it is something that you can integrate with Particle. And the console tries to add a nice user interface layer to make it very easy to configure these webhooks.
David Simmons 00:29:59.365 Great. So when you click on that webhooks button, you have to fill out a few bits of information. You’ll name your event, and that is what the event will be when it comes into the Particle cloud. Then there’s the URL of your Telegraf server, and it looks like I might have left off the port number, 1619, on that, right? And what we’re going to be sending is a JSON object, so you’ll just set the request format to JSON.
Jeff Eiden 00:30:48.994 Yeah. And just to add some color there, as the “color commentator”: the event name field on the top, if you’re new to Particle, might not immediately make sense. So on the next slide, or the next couple of slides, David will show you that our Device OS basically allows you to write firmware applications for your devices using really easy firmware APIs. And one of the communication primitives that we offer is Pub/Sub: publish and subscribe. So from your devices, you are able to write code like Particle.publish, which allows you to publish an event name, and then a data payload along with that event name. So essentially our webhook system, our integration system, listens for devices publishing a certain event, and it will only trigger that webhook when that event name is published. So that is what you configure when you type an event name here. In your firmware you would call the Particle.publish function and give it the event name, and then you would plug that event name into the webhook builder here.
David Simmons 00:31:50.625 And that’s a great point, because you may want to send different types of data from your device to different places. So you can send your time series sensor data to Influx based on the event type, or the event name.
Jeff Eiden 00:32:12.275 Exactly.
David Simmons 00:32:15.479 So then you’re going to basically define what the JSON object looks like. And most of this is boilerplate, but down here at the bottom you’ll see that I added this influx_db field. (And it went to the next slide on me.) This influx_db field tells the plugin which database—which time series database within Influx—I’m going to store my data in, right? The rest of it is basically boilerplate from the predefined JSON data that comes from the Particle cloud. So you really only have to define that one field, which tells Telegraf which database to put your data into.
Jeff Eiden 00:33:03.829 And those values there, with the double curly braces—those are dynamic. So at runtime, when an event comes through, they will be populated with the real device ID, the real event name, and the real event data payload, and passed along with the event. It’s not like we’re sending those literal strings, right? That’s a little bit of a templating framework that we’ve built to work with webhooks.
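A sketch of what that webhook definition might look like as JSON (the host name and database name are illustrative; the triple-brace variables are Particle's webhook template syntax, expanded at runtime as Jeff describes):

```json
{
  "event": "my_event",
  "url": "http://my-telegraf-host:1619/particle",
  "requestType": "POST",
  "json": {
    "influx_db": "particle",
    "coreid": "{{{PARTICLE_DEVICE_ID}}}",
    "event": "{{{PARTICLE_EVENT_NAME}}}",
    "data": "{{{PARTICLE_EVENT_VALUE}}}",
    "published_at": "{{{PARTICLE_PUBLISHED_AT}}}"
  }
}
```

The only non-boilerplate field is influx_db, which tells the Telegraf plugin which InfluxDB database to write into; everything else is filled in by the Particle cloud when the event fires.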
David Simmons 00:33:27.553 Right. It makes it super easy to get all that data into your data stream. So I wanted to go through a little bit of how you’re going to format this data, because you need to actually send a JSON object as your payload. On the left here, I’ve broken out the structure of the JSON object. We have a top-level field, which is the data. And in that data, we have a couple of fields, and I’ll go through those. One is tags. Tags are a way to identify your data in the InfluxDB database. So you can tag your data with a location, or a name, or things like that. And that will allow you to group your data when you go and do queries on it. Let’s say you make a tag of Particle Photon—then you can go and do a search in your database and get all the data that has come from any Particle Photon, right?
David Simmons 00:34:43.914 Or you can add another tag for the device ID, and then you can get all the data that has come from that device ID, right? And then the values are the actual numeric sensor readings that you’re sending. I’ve also added the time to live, the published_at, which is a timestamp, the ID of the device, and the name. And if you look over here on the right side, you’ll see how I actually write that data into a formatted JSON object, right? So I’m filling in those values with, let’s say, the temperature: the raw temperature and what I calculated as the temperature in Fahrenheit, right? And then I’m simply publishing that data under the “My Event” name. So I probably should have thought about this a little bit before, but in this webhook setup, I would have put the event name as “My Event”. Then this data would have triggered that webhook, and all of it would have gone to the InfluxDB instance via Telegraf. I hope I made that clear. Did you want to add to that at all?
Jeff Eiden 00:36:16.188 Yeah. No, that’s perfect. So that Particle.publish is one of the communication primitives that you can call on the device. Being able to write a line as simple as Particle.publish is something that we had to work very hard to enable. And you can write this kind of firmware on any one of our devices, regardless of whether it’s a Photon, an Electron, or one of our industrial modules. Some of the other primitives that we offer are Particle.subscribe, so you could actually have the device listen for events from the cloud, and get that back to the device, which can be useful in many situations. We also have Particle.function, which is “tell a device to do something”: “Open the valve,” or, “Turn on the light.” That is something that our customers often use to instruct a device to take an action, or send a command, remotely from the cloud. And then, lastly, we have Particle.variable. So if you’re not trying to send periodic data like Particle.publish, and you’re really just interested in querying, “What is the value of this sensor right now?” That’s what Particle.variable allows you to do. But for this purpose, where you’re sending data periodically, formatting that data in a way that Influx will be able to understand, and integrating with webhooks, Particle.publish is the primitive that is most relevant.
David Simmons 00:37:30.365 And if you’ve written other Arduino-type code, you know that in order to publish data, you typically have to spin up an entire Wi-Fi client and probably an HTTP client, and all sorts of other network plumbing, and then format everything, and then write to it, and then read from it, to make sure that you got the correct response, all that. And basically, with Particle you can do that in one call with Particle.publish, and it all goes securely and is transmitted to the cloud, and you’re done.
Jeff Eiden 00:38:07.553 Yeah. That was the other part I was going to mention about Particle.publish—that data is encrypted when it is sent to the cloud. Our devices use public and private keys along with our Device Cloud, and that data is encrypted when it does get transmitted. So all things that you don’t have to worry about when you use Particle.
David Simmons 00:38:34.944 So here’s a little animation that shows how you can quickly go into your Chronograf frontend and make sure that your data has gotten to InfluxDB. So you can go into the Data Explorer and you’ll see that I have my IoT database, and in there I have my influxdata_sensors measurement, which is what we defined again in that webhook, right? And here we can see where that data came from. It came from my Telegraf instance. And then over in the fields, I can see all the sensor data that’s come from there, and if I click on one, like the temperature Celsius, you’ll see that in the top you’ll see the query built, and in the bottom, you can see the actual graphing of the data in real time. It’s really handy to be able to see what’s going on with your data as you’re doing it, right? And verify that your data’s getting there and see what your data looks like. And from there you can actually build dashboards. So you can go and use that graphical frontend to define queries based on the data, and then different UI elements to display that data in real time.
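Behind that graph, the Data Explorer generates an InfluxQL query along these lines; the database and measurement names follow the example above, while the field name `temperature_c` is an assumption:

```sql
SELECT mean("temperature_c")
FROM "iot"."autogen"."influxdata_sensors"
WHERE time > now() - 15m
GROUP BY time(10s) FILL(null)
```

The same query can be copied out of Chronograf and reused in a dashboard cell.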
David Simmons 00:40:10.106 So here you’ll see a dashboard that I have that is displaying my CO2 values in both a dial gauge, and as a graph over time, right? And infrared, I’ve got two lines there: one for the light, and one for the infrared. I’ve got the temperature; again, a dial gauge and a graph over time. And some other things that I’m monitoring in that dashboard, so that I can quickly see at a glance where things are. Not just where things are now, especially like if I’m looking at CO2, I can see where things are now, but I can also get a quick view of what the trend was over the last 15 minutes. Because as you see up here in the upper right-hand corner, I’m showing the past 15 minutes’ worth of data, and I can see what was actually going on in my data over time, as well as just a quick glance of what it is right now.
Jeff Eiden 00:41:14.386 Yeah. And our infrastructure SRE team uses Influx for our cloud management, and one of the things that is super valuable is that tagging system that David mentioned. And that’s super useful in IoT as well: if you tag each data point that comes in with the device ID, or perhaps the device group, you can slice this data and hone in on a particular device, or set of devices, that you care about at that moment. Or also the ability to get different granularities of that data as you change the timeframe. That is incredibly useful, especially as you’re dealing with the large volume of sensor readings that come from your fleet.
David Simmons 00:41:59.305 And actually if you look down here at the bottom, I have these two little strip graphs that show you—I actually have some Kapacitor alerting going on in the background, and you can see when those alerts happen, right? And so I can get a glimpse of what time those alerts happened, and how many of them happened over time as well. So if you want to know how to do this yourself, there is an entire tutorial that goes through all of this in excruciating detail: how you set up all of the various parts and integrate them to get this end-to-end, from device to dashboard, IoT data solution.
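An alert like the ones shown in those strip graphs can be defined in Kapacitor with a short TICKscript; the threshold, field name, and log path here are assumptions for illustration:

```js
// Alert when CO2 in the influxdata_sensors measurement crosses a
// threshold; the "co2" field name and 1000 ppm limit are assumptions.
stream
    |from()
        .database('iot')
        .measurement('influxdata_sensors')
    |alert()
        .crit(lambda: "co2" > 1000)
        .message('CO2 level is {{ index .Fields "co2" }} ppm')
        .log('/tmp/co2_alerts.log')
```

Kapacitor records each alert event, which is what lets Chronograf plot them on a strip graph over time.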
Jeff Eiden 00:42:57.830 And that’s on docs.particle.io that I have—the link is there. But if you just want to navigate to docs.particle.io, and you click on that Tutorials header there, it’s really easy to find. And David did a really nice job writing this tutorial in a way that’s super easy to follow.
David Simmons 00:43:13.406 Right. And you’ll see when you click on “Tutorials” you’ll see the integrations. There’s the Google Cloud, Google Maps, Azure, and then InfluxData is actually listed as one of those integrations, and there are steps on how to do that. So hopefully it’s fairly straightforward, and that, combined with this webinar, will give you a good start to getting this stuff up and running. So one of the things that we say at Influx is—and we actually trademarked this phrase because we’re so driven towards it—is “Time to Awesome.” It’s got to be easy to use, easy to deploy, easy to work with. And everything’s open source, right? So I know that I have done a complete deployment of all of the Influx parts of the TICK Stack on Linux in three and a half minutes, and there’s a video to prove it. I posted it, there’s a video on our website called, “Time to Awesome,” that you can go and check out, and it shows you basically how to do an entire install on Linux in three and a half minutes. And that’s kind of a “Time to Awesome,” and one of the things that I like about Particle is they seem to have—they don’t get to use the phrase “Time to Awesome.” But they really do have a great “Time to Awesome” with their cloud infrastructure, and how you write, and deploy, and manage code for devices.
Jeff Eiden 00:44:44.723 That’s awesome. I love that phrase. I kind of want to steal it, but I won’t steal it from you guys.
David Simmons 00:44:54.004 We actually went out and trademarked it, believe it or not.
Jeff Eiden 00:44:56.901 We’ll have to find our own catch phrase, but that’s amazing. I mean, we’re totally—we are aligned in the desire and the focus on creating those kinds of experiences for our users. Both in the data world, and also when you’re getting started with IoT prototyping. So I think that’s why you see so much overlap and opportunity to have these products work seamlessly together because there is a shared goal and shared focus between Influx and Particle to make what was quite difficult something that can be done in a matter of hours, minutes instead of months and years.
David Simmons 00:45:37.854 Right. And that, you know, as I said I’ve been using Particle for a long time and it’s—one of the reasons is that it’s very fast to get prototypes and things up and running and doing what you need them to do without a whole lot of unnecessary hassle. So that’s it for us. And I guess it’s time for questions.
Chris Churilo 00:46:03.254 And you have a lot of questions. So what I would appreciate—if you guys wouldn’t mind—reading the questions aloud, and then answering them. And even the questions that Jeff was so kind to answer during your presentation, so that we can capture it on the video. And they’re both in the chat and the Q&A and there is some overlap, so. Just start with one of them and let’s just plow through those questions.
Jeff Eiden 00:46:28.979 Yeah. So I’ll take a couple. So Tushar asked, “Does Particle have its own C-based language like Arduino?” So when you use the Particle developer tool IDEs, we have one for the web and one for desktop. You write C++, and when we first started we knew that there was already a thriving community of Arduino developers, and we wanted to make sure that those folks had the ability to kind of start to experiment with Particle. So we actually use Wiring, which is the same framework on top of C++ that Arduino uses. So there’s a lot of compatibility between the ecosystem of Arduino and the ecosystem of Particle. So, yes, it’s C++ and it uses the Wiring framework. But there are some kind of Particle-specific-isms, if you will, as we mentioned in the presentation, like Particle.publish, like Particle.function, that are specific to our ecosystem.
Jeff Eiden 00:47:26.954 There was another question around mesh, and specifically the number of devices that a mesh network will be able to support. I will caveat by saying we are still in the process of doing testing, as we are getting closer to mass manufacturing and shipping out to customers. But our early tests indicate that we can have something close to at least a hundred devices connected in a Particle mesh network, which is pretty exciting, to get those early indications of that capability. Because if you dig into the details, we’re actually using the same meshing technology that Nest uses, which is OpenThread. And most of the applications of OpenThread, if not all of them, are really based on Linux-based systems and a little bit beefier computing technologies. And Particle was the first one to really try and apply this technology to the embedded world, optimizing it for super low-power and low-bandwidth devices. And when you make the tradeoff of low power and low bandwidth, you, of course, make the tradeoff of less computing power, and RAM, and memory, and CPU, and things like that. So really, they didn’t have a whole lot of data to indicate how many devices these kinds of networks could support. They’re as curious as we are, and so we’re in the process of doing some of those tests to understand what the capabilities are of embedded-based mesh networks.
Jeff Eiden 00:49:13.025 So let’s see if I can answer a couple of other ones here. So Bill Evans asked, “Does the tutorial work with AWS, or just Google and Azure?” So backing up to our integration system: we have a few what we call first-class integrations that we have kind of branded, which are Google Cloud Platform, Google Maps, and Azure IoT Hub. And we are quickly approaching that world with Influx, and we have what is essentially a first-class integration with Influx that David has put together. Right now we don’t have one for AWS, but our webhooks feature of integrations makes it completely possible and straightforward to integrate Particle with AWS as well. That is something that many of our customers have done.
David Simmons 00:50:04.328 Also, if I’m not mistaken, and I wrote the tutorial a long time ago, it actually uses an AWS instance to run Telegraf—
Jeff Eiden 00:50:19.762 Oh, gotcha. There you go.
David Simmons 00:50:19.409 And InfluxDB.
Jeff Eiden 00:50:21.340 Right. Good point.
David Simmons 00:50:21.625 And in fact there’s a prebuilt AMI that you can just grab and spin up under your account, and it will have the whole Influx TICK Stack already running there. And then you can just point your webhook at it.
Jeff Eiden 00:50:44.069 Yeah. That’s a great point. Thanks, David, for clarifying. One of the first questions that was asked was, “Edge/gateway versus fog node.” I think what this kind of gets at is a lot of the jargon that’s often associated with IoT. And I’ve heard these terms being thrown around, but essentially, I think of terms like “edge” and “fog computing” as somewhat similar. Which is to say basically low-power devices in the physical world that are sensing some sort of data and doing some level of computing in the physical world outside of the cloud. Gateway is a term that I think makes more sense when you do have this kind of hub-and-spoke model like Particle Mesh, where you have one device that’s kind of acting as the channel to the Device Cloud, and then you have many nodes that are participants in that mesh network, but use the gateway as a proxy to the cloud. And I think what’s really special about the way that Particle is going about implementing Mesh is, while the edge nodes, the Xenons that we are going to be offering that don’t have their own Wi-Fi or cellular radios, they don’t have the ability to communicate directly to the device cloud. But when you get these devices and start playing around with it, what you’ll realize is that really, we’re treating the gateway as just another jump to be able to connect to the Device Cloud, and it will feel like those devices have their own dedicated connection.
Jeff Eiden 00:52:19.674 So what do I mean by that? You will be able to write something like Particle.publish on an edge node, like a Xenon, and that data will make its way to the Device Cloud, and under the hood it’s getting routed through a gateway, but to you, you are publishing data from a device that is a participant in a local mesh network to a Device Cloud, even though it doesn’t have its own dedicated connection. So that’s something that I’m really excited about to kind of give you the power to talk to the Device Cloud, while at the same time giving you the ability to have that low power consumption and data usage that meshing technologies provide.
David Simmons 00:52:59.758 And Robbie asked about Telegraf being able to pull or accept unsolicited data. And basically Telegraf, depending on the plugin, can do either one. There are some Telegraf plugins that poll for data, and then there are others, like the Particle one, that listen for incoming data. And with regard to drivers for PLCs and things like that, most of that depends on what protocol is coming out of the PLC. So we at Influx are working closely with the Eclipse Foundation IoT working group in trying to get plugins for Telegraf built that can handle some of the more common PLC output formats like PPMP and CoAP and things like that. So those are not necessarily supported right now, but definitely some of them are; MQTT, for example, is supported out of the box with Telegraf. You can just turn it on.
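For the listening case, Telegraf’s webhooks input plugin has a Particle handler. A minimal config sketch looks like this; the port and path are assumptions, and need to match the URL you configure in the Particle webhook integration:

```toml
# Telegraf listening for incoming Particle webhook events
# (inputs.webhooks plugin). Port and path are assumptions here.
[[inputs.webhooks]]
  service_address = ":1619"

  [inputs.webhooks.particle]
    path = "/particle"

# Forward everything to a local InfluxDB; URL and database name
# are assumptions for illustration.
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "iot"
```

Check the webhooks plugin README for your Telegraf version, as option names can differ between releases.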
Chris Churilo 00:55:36.196 If you don’t mind, there’s just a couple of other questions I think are worth covering. So a question about, “Why CoAP and not MQTT?”
Jeff Eiden 00:55:46.043 That’s a very good question. I think that gets quite technical quite quickly. I will say at the time that Particle was built, there was an analysis that was done on all of the different IoT communication protocols. We evaluated CoAP; we evaluated MQTT. And for a variety of reasons, CoAP just made the most sense, and it was ultimately what we decided to build on top of with Particle. But what I will say is to you, the end developer, the person that is building an IoT product, it doesn’t really matter, frankly. There’s all sorts of layers on top of those communication protocols that really provide the functionality to allow you to focus on what matters, which is what you’re building and the specific use case that you’re going after. In many ways CoAP and MQTT offer a lot of the same things, and the difference is more or less in semantics. So I would say, focus more on what the goals are of your IoT initiatives and less on some of the low-level networking protocol details. Because at the end of the day, they do largely the same thing.
Chris Churilo 00:57:09.486 And then there’s just one—I don’t think we answered this one, but I think it’s worth answering. And then we’ll end with the questions. But Andrew asks, “Most building automation systems operate on the BACnet protocol. Do either of the solutions support this protocol?”
Jeff Eiden 00:57:26.621 David, are you familiar with that?
David Simmons 00:57:29.156 I am not. But at some point, I guess, BACnet would talk to a gateway, which could then run some sort of—you know—an MQTT client to collect BACnet data, and you could have Telegraf collect from there.
Chris Churilo 00:57:56.323 Okay, so Andrew, I don’t know if that—if you want to ask, or give us more color on that. Just feel free to throw that in the community, or send me a note, and I can definitely send it over to the guys and we can dig into that one a little. Because I think—we were all, “I’ve never seen that before.” But doesn’t mean it’s not popular. So if there are any other questions, please feel free to just shoot them over to me, and I’ll just forward them to the guys, and we’ll get them answered, because it is past the top of the hour. In addition, if you do have other questions, especially on the InfluxData side—actually even on both—you can also post them into the community site. David lives on our community site, and he’ll be able to answer them, and he can easily reach out to Jeff to make sure that we get those questions answered. And then, Jeff, do you guys have a URL that you want to share for any other support, kind of?
Jeff Eiden 00:58:54.365 Yeah. If folks have more questions, we have a very thriving community around the Particle ecosystem. So if you go to community.particle.io that’s a great place to ask more questions. If you’re looking to get started, we recommend starting with the dev kit, so if you go to our website you can find links to our store, or if you want to go directly there it’s store.particle.io. And you’ll find links to picking up a dev kit to get started.
Chris Churilo 00:59:23.972 That’s awesome. And as you guys all know, we all are now familiar with one of the Number One Fan Boys of Particle, and that’s Mr. David Simmons himself. And he is pretty particular about his tech, so. I think [crosstalk].
Jeff Eiden 00:59:40.404 Well, the feeling is mutual. We have a lot of respect for what Influx does and we really appreciate the enthusiasm that you guys showed in working with us and David’s work contributing that amazing tutorial that you saw today to our Docs, which, again, you all can find on our Docs site, which is docs.particle.io.
Chris Churilo 00:59:59.707 Excellent. All right, so I’ll do a quick edit of the video and then I’ll post it later on today, and if you want to take another listen to it, it’s the same URL that we used for registration. Otherwise, wait for the email that comes in tomorrow. And if you have any other questions, just let us know. And especially let us know about the projects that you build using these two great technologies. We always love to be able to hear from you guys and share your experiences with other users in the community.
Jeff Eiden 01:00:26.763 Absolutely.
Chris Churilo 01:00:27.408 Thanks, Jeff. Thanks, David.
Jeff Eiden 01:00:29.156 [crosstalk]
David Simmons 01:00:30.618 It was great.
Chris Churilo 01:00:29.752 [crosstalk] presentation. And we’ll see you guys later. Bye, bye.
David Simmons 01:00:34.073 Bye.