Introduction to the TICK Stack and InfluxDB Enterprise
Webinar Date: 2019-03-07 08:00:00 (Pacific Time)
In this webinar, we will provide an introduction to the components of the TICK Stack and a review of the features of InfluxDB Enterprise and InfluxDB Cloud. Katy Farmer will also demo how to install the TICK Stack.
Watch the Webinar
Watch the webinar “Introduction to the TICK Stack and InfluxDB Enterprise” by filling out the form and clicking on the download button on the right. This will open the recording.
Transcript
Here is an unedited transcript of the webinar “Introduction to the TICK Stack and InfluxDB Enterprise.” This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcription errors.
Speakers:
• Chris Churilo: Director Product Marketing, InfluxData
• Katy Farmer: DevRel, InfluxData
Katy Farmer 00:00:01.382 Great. Thank you, Chris. Welcome, everybody, to our Getting Started series. We’re going to go over some of the more basic things this morning, but if you have any questions, please, like Chris said, put them in the chat or the Q&A. We’re going to cover what the TICK Stack is, what’s involved with InfluxDB Enterprise and InfluxDB Cloud, and then we’re going to do a demo, so it’s pretty straightforward this morning, but, again, feel free to ask any questions. And I firmly believe that there is no such thing as a stupid question, because I have asked all of them. So if you think it’s a stupid question, just be like, “Nope, Katy has definitely asked that,” and feel free.
Katy Farmer 00:00:42.719 So we’re going to start off just talking about what time series is and what makes it different. Time series data is high in volume and it needs to be ingested really quickly, right? So if you think about things that produce time series data, like a sensor, let’s say I have a drone and it’s sending back information about its location every second or so. The longer that drone is out, the more points I’m getting back. And while it may only be sending information every second, it could be recording all the way down to the nanosecond. So the longer a drone stays out, the more and more points we have and then, I think it’s something like after 1,200 hours—I did the math on this before—you reach over a million points for a single device. So if you have a database that’s tracking all of your company’s drones, and let’s say that’s 100 or something, you’re talking about at least a million points per device getting written to the same database. That’s kind of crazy. So time series databases are built to handle that volume and that speed of ingest.
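(For reference, a back-of-the-envelope check of that math: at one point per second, a single device crosses 1,000,000 points after 1,000,000 ÷ 3,600 ≈ 278 hours, and by 1,200 hours it has written 1,200 × 3,600 = 4,320,000 points.)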
Katy Farmer 00:01:59.599 It’s also really good for real-time queries on large data sets. Right? So as data comes in, you want to be able to query your database that’s holding all of your drone information and say, “Hey, where is drone A at this second?” or, “Where was drone A at this time when this bad thing happened? I want to make sure that drone A wasn’t responsible.” It’s also really good at the rapid eviction and transformation of data. And what we mean by rapid eviction is that we don’t keep it around for longer than we need it because, again, we’re talking about such high volumes that the more data you keep, the faster you run out of space, right? So most people don’t keep the entirety of their time series data forever. They keep aggregates and then they use our retention policies to get rid of data when it’s not useful anymore. We also talk about downsampling of high-precision data, which relates to what we talked about a second ago: data comes in as all these individual points. But the thing about time series is that individual points aren’t as valuable as the whole data set. So when I look at a single point, it may not offer me that much value to know exactly what all of a drone’s properties were at a specific time. What may be more valuable to me is knowing what that drone did over a period of an hour or a day. So downsampling is the process of taking that high-precision data down to some aggregates. Time series data is also different in the way that we need to optimize and compress it in order to store it.
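For a concrete sketch of what retention policies and downsampling look like in InfluxQL, assuming a hypothetical “drones” database (all names here are made up):

```sql
-- Keep raw drone data for 30 days, then let it auto-expire.
CREATE RETENTION POLICY "thirty_days" ON "drones" DURATION 30d REPLICATION 1 DEFAULT

-- Keep downsampled aggregates forever in a second policy.
CREATE RETENTION POLICY "forever" ON "drones" DURATION INF REPLICATION 1

-- Continuously downsample raw points into hourly averages.
CREATE CONTINUOUS QUERY "drone_hourly" ON "drones" BEGIN
  SELECT mean("speed") AS "mean_speed"
  INTO "drones"."forever"."drone_summary"
  FROM "drones"."thirty_days"."drone"
  GROUP BY time(1h), *
END
```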
Katy Farmer 00:03:56.737 So let’s get into this a little bit more when it comes to Influx. At Influx, we’re really concerned with you, the developer. We care about developer happiness. I think that complicated tools shouldn’t be hard to use. You should be able to do tough and challenging things with products that are built to do it in a way that doesn’t cause you pain. So we’re worried about developer happiness. We have this principle that our CTO came up with, which is called Time to Awesome, which basically just means, how much time do you have to spend setting up something in order to be able to use it? Right? We don’t want to spend a lot of time configuring. I wish I could have back all the time I spent in my career just configuring things. And then also we want it to be really easy to scale out and deploy, because those, in my mind, those sort of fit into the first two categories. If it’s easy to scale out and deploy, then that contributes to developer happiness and also Time to Awesome. So at Influx, we will talk about sort of our architecture for a second.
Katy Farmer 00:05:14.269 So our platform is built for metrics and events, which is to say that part of our platform is to be able to accumulate metrics and events, to analyze them, and then to act on them. Now, you can see—so if we’re using our drone example, as the drone sends back metrics and events, we collect them in our database, but that’s not enough. Why would we store them if we didn’t have some use for them, right? So then we want to move on to the analyze section, where we say, “Every time this drone reports back that it’s too far away, I’m going to automate a process that turns it around and sends it closer. Keeps it in this boundary.” And then the automation just sort of naturally leads to that act portion, right, and the act would be to turn it around, send it back home. I do want to talk to you a little bit about how Influx works from a product standpoint, which is basically, you can see our open source core here. There are four pieces of our stack. It’s InfluxDB, Telegraf, Kapacitor, and Chronograf, and we’ll get into the details of how they’re different and what they do in a couple minutes.
Katy Farmer 00:06:45.424 We’re just going to talk a smidge about the difference between Enterprise and Cloud and the open source core. So probably something like 90% of our users use the open source version. It’s really performant. I have seen people do some really cool and impressive things with it. So if you’re thinking about using Influx, you should really start with the open source and see what you can do. And then, from there, decide if you need Enterprise or Cloud. So Enterprise is different in that it offers clustering, which is sort of the main thing that people need and get from the Enterprise version, and also some high availability. So some people really can’t afford for things to be down, ever. And then InfluxDB Cloud is more of a managed service, so that we would take care of all of your problems for you and you wouldn’t have to think about it too much. Which is always, I mean, if I could afford it, I would use managed services for everything in my life. I’d just be like, “Laundry, managed service.” So let’s talk a little bit about the pieces. This is our best sort of visualization of what our stack looks like. But even this almost doesn’t do it justice. So let’s start with Telegraf, which is our collection agent. Basically, Telegraf collects those metrics and events we talked about before. And that could be anything from system stats to database information to message queues, and then Telegraf can pipe those metrics and events on to another piece of the Stack. In this case, either InfluxDB or Kapacitor.
Katy Farmer 00:08:50.660 So let’s talk about Kapacitor a little bit first. Kapacitor is our real-time data processing engine, which basically means that it’s a really good place for you to do data transformation, really complex queries, and things that involve really heavy workloads. So Kapacitor is a place for you to do work that would otherwise take up resources and time in the database. So the database itself is sort of the heart of us. InfluxDB is our purpose-built Time Series Database. It was specially built to be optimized for time series, and as we move forward as a company, that database is always improving. I’m always really impressed with the new releases and what the engineers are able to accomplish. And then there’s Chronograf, which is our UI. So we want to have a complete interface where you can use and interact with the database and Kapacitor, all through this UI. So why would you use this Stack?
Katy Farmer 00:10:06.368 First of all, it’s easy to use. When I was interviewing to work at Influx, they asked me to write some code and to use the open source, and I was able to set up the database, query it, get information back, all within an hour or so. And if I can do it, I firmly believe that means anyone can do it. The TICK Stack also has no external dependencies, right? You don’t need to have anything else installed in order for the TICK Stack to work. You just need—let’s say, if you just want to use Influx, you don’t even need the rest of our Stack for Influx to work, right? You can just install Influx and use it on its own. The same with Telegraf. The other two pieces, not as much, but then still, you don’t need—there’s no configuration. You don’t have to go update anything else. You just install the TICK Stack. There’s also this important distinction that we didn’t really cover: the difference between metrics and events. So a lot of services either work for metrics or events, but not both. So metrics are things that happen regularly. They’re things you expect. And events are irregular. So a metric is more like if you were listening to a system and you’re just getting back maybe web stats or traffic stats, something like that, and events would be more along the lines of, “Your server has crashed. I need help. Who’s out there?” Right?
Katy Farmer 00:11:53.928 The TICK Stack is also horizontally scalable, which is always something people ask about. It’s also a full platform. So it’s not just a single tool, although you can use many of the pieces as single tools. They’re made to work together. They’re a bit like Lego. They’re modular, but they’re made to go together. We also don’t choose a side in the pull versus push debate. So if you like to pull, then that’s totally fine. You should go ahead and query—or pull, sort of meaning that you ask for metrics. And pull and push, there’s a lot of debate about how people like to set up their applications and we have decided to just make either work. We don’t want to take a side, so. Yeah. It’s kind of hard for me to explain this morning, mostly because I haven’t had enough caffeine. But if you’re not familiar, please ask it in the Q&A and I’ll do a better job.
Katy Farmer 00:13:09.874 Okay. So now we’re going to walk through each of the pieces of the Stack and just get in more to how they work and what they do. So Telegraf is our plug-in-driven server agent for collecting and reporting metrics. So Telegraf polls metrics from third-party APIs. It can listen to metrics via anything, like StatsD or Kafka. Telegraf is built around plug-ins and most of those plug-ins are written by our community. They’re all open source. So there’s—I think the last time I checked, there were over 140 plug-ins for Telegraf: input, output, aggregator, that kind of thing. So output plug-ins send metrics to other data stores, or services, or message queues. Some examples would be InfluxDB, or Graphite, or OpenTSDB, Datadog, Librato. You sort of get the idea here. The thing that’s really great about plug-ins is that if you need to send metrics to or receive metrics from a service that we don’t currently support, then you can build a Telegraf plug-in to make it happen. We often have trainings on how to write Telegraf plug-ins. They’re written in Go, but they’re pretty easy to get going. We have good support for it and we have excellent engineers on Telegraf who are good at making these things get merged and get working. Telegraf also has a minimal memory footprint. It’s not a resource hog. It’s a streamlined sort of collection agent. I think it might be our most popular piece of the Stack in a lot of ways because it’s so good at just being this pipeline between services, or message queues, or what have you, that a lot of people use it just as this direct pipeline.
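For a rough idea of how those plug-ins fit together, here is a minimal telegraf.conf sketch wiring one input plug-in to one output plug-in (the URL and database name are just the local defaults):

```toml
# Input plug-in: collect basic CPU stats from the host.
[[inputs.cpu]]
  percpu = true
  totalcpu = true

# Output plug-in: write everything to a local InfluxDB.
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"
```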
Katy Farmer 00:15:30.107 So InfluxDB is our custom, high-performance data store written specifically for time series data. The database allows for high ingest speed and data compression, which is really important, as we talked about earlier, for time series data. It needs to be able to accept a huge number of data points per second, and also it needs to be able to compress that data because it’s such high volume that if it’s not compressed, the storage would be sort of implausible. The database is also written entirely in Go, so it compiles into a single binary with no external dependencies, meaning you could download that binary and install from it, or just run it, and it would work. We also, right now, have a query language called InfluxQL, which is SQL-like. It’s really similar. I think my SQL knowledge, when I started using InfluxQL, was like a solid B minus, and when I read InfluxQL, it looked very much like SQL to me. It was easy to understand and easy for me to write.
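To give a feel for how SQL-like InfluxQL is, a simple query against Telegraf’s CPU metrics might look like this:

```sql
SELECT mean("usage_system") FROM "cpu"
WHERE time > now() - 1h
GROUP BY time(10m)
```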
Katy Farmer 00:16:57.286 So our database is also built around tags, fields, and measurements, and tags allow us to index a series, so queries on tags are faster and more efficient. So, essentially, anything that you assign as a tag in your database will be automatically indexed. So those things that you do assign to be tags should be the things that you think you’re going to query by, if that makes sense. Our retention policies efficiently auto-expire stale data. So when you write data to the database, you can also assign it a retention policy at the same time, and say, “I just want to keep this for 30 days,” or however long it makes sense for you to keep it, and the database will sort of take care of that for you. Then you don’t have to worry about deleting millions of points. The retention policies offer that. The database also supports ephemeral time series with TSI. And TSI is a relatively new feature in the database, which is, I think, super exciting, but I just get kind of excited about the database. Essentially, TSI allows for the index to be moved between memory and disk as needed, so that it can be more performant. And with TSI, that means we can be even faster and be more efficient, more optimized than we ever have been. Right now, it is in the open source and Enterprise and Cloud versions. It’s not automatically enabled, so if you want to try it, you do have to go in and enable it. But, other than that, it’s there and it’s production-ready.
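For illustration, here is what a single point looks like in InfluxDB’s line protocol, with the measurement, tag set, field set, and timestamp called out (the drone names and values are made up), along with the InfluxDB 1.x config setting that enables TSI:

```text
# measurement,tag_set field_set timestamp(ns)
drone_status,drone_id=A,region=west lat=37.77,lon=-122.41,battery=88i 1551967200000000000
```

```toml
# influxdb.conf: switch the index from in-memory to TSI
[data]
  index-version = "tsi1"
```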
Katy Farmer 00:19:03.130 So let’s talk a little bit about performance. I’ve sort of said a couple times that it’s fast, but what does that mean? So a low load for the database is essentially lower than 250,000 writes per second. It’s less than 25 queries per second. The cardinality without TSI would be around 100k, but with TSI, it’s over a billion. A moderate load is somewhere around 500,000 writes a second. It’s over 25 queries per second. The cardinality should still be—it’ll be above 100k and, again, still over a billion with TSI enabled. And then for a high load, writes are over 750,000 per second, queries over 50 per second, and cardinality over a million. And then—I mean, I probably should have prefaced this with: cardinality with TSI is basically around this one-billion mark, and we’re really excited about the difference that TSI can make, but right now, the only important piece is that it’s a lot. With TSI enabled, there’s a really good chance you could get over a billion with your cardinality.
Katy Farmer 00:20:40.272 So also with those performance requirements, obviously, come some hardware recommendations. So if you have a low load, you’re looking at about 4 cores, about 4 to 8 gigs of RAM, and about 50 IOPS. Moderate is 8 cores, between 8 and 16 gigs, and 500 to 1,000 IOPS. And then high is over 8 cores, over 16 gigs of RAM, and about 1,000 IOPS. And I think that you’ll notice that for the performance that we’re offering here, for this number of writes and queries, these hardware recommendations are not that crazy. There was definitely a time when you needed—4 cores was considered the very top of the line, so it’s exciting that you can accomplish so much on relatively available hardware.
Katy Farmer 00:21:44.659 All right. We’re moving on to talk about Chronograf, which is always exciting. It’s our integrated user experience for the TICK Stack. So it’s this beautiful UI where you can go in and do some admin. You can create databases. You can create users with different permissions for those databases. You can configure alerts. You can also explore your data. You can create and visualize queries, meaning you can build a query in Chronograf and then see the graph of whatever type makes the most sense for you. So you can build your query and then you can say, “I want to see this as a single stat,” or, “I want to see this as a multi-line graph,” whatever sort of suits your need. You can also build custom dashboards using your collected metrics and events. So you have access to your InfluxDB databases in Chronograf, then you can go in there and build dashboards because you know you always want to see where your drone was. Or, let’s say, more to the point, you want to know your drone’s CPU usage. You want to make sure that it’s not hitting some threshold. You can build a dashboard that just always shows you how much CPU usage your drone is using so you don’t have to re-query it every time.
Katy Farmer 00:23:14.640 There’s also a rapid time to value. You have pre-built dashboards that are based on metrics from Telegraf. So when we—we’ll do this later in the demo, but when you install the Stack, Telegraf will start collecting some basic metrics from its installation location. If I install it on my local laptop, it’s going to collect CPU usage, some memory, some disk numbers, and then that dashboard will just already be available for me to see. From Chronograf, you can also create Kapacitor tasks. And we’ll get into Kapacitor a little bit more in a minute, but essentially you can create rules, rules for alerts, and there’s a TICKscript editor, and TICKscript is the DSL for Kapacitor where we can sort of write scripts for customized tasks. It’s really important to me that all of this happens in Chronograf because, while I am generally very happy in the command line and I would use it for everything if I could, I understand that that’s not going to be everyone. And especially, if you think about it, oftentimes, as developers, we’re happier in the command line, or at least we may not be happier, but we’re comfortable there. But you also want to be able—you want your boss to be able to use it. You want non-developers to be able to—maybe you want marketing to be able to use it, or sales, or anyone else in your office. We want to make sure that these tools are available to everyone.
Katy Farmer 00:24:54.940 Okay. Now we’re in Kapacitor. So Kapacitor is our open source data processing framework that makes it easy to create alerts, run ETL jobs, and detect anomalies. That’s a lot of things. And I wish I could tell you that there was a limit to what Kapacitor can do, but Kapacitor is built in such a way that when our other pieces of the Stack sort of are missing something, it’s usually still doable in Kapacitor. Kapacitor is largely in charge of downsampling, which we talked about earlier. So if you have all of your drone metrics in an InfluxDB instance, but you only care about sort of the averages, like on the average, how fast was it going for the past 24 hours, you can use Kapacitor to do that aggregation and then spit the aggregate back out. And it’s up to you where you send that data, right? You can store it somewhere, like back in InfluxDB. You can send it, really—if you want to just send an alert out with it, you can do that. You can also query data from InfluxDB on a schedule and receive that data. So a lot of people, when they use InfluxDB, they use something called continuous queries and continuous queries are really helpful, but they often use a lot of resources and they can sort of slow down the performance of your database. So Kapacitor is a great place to do that work instead. So when you say, “I always want to query this at the end of the day,” Kapacitor can do that.
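As a sketch of that scheduled-downsampling pattern, a minimal batch TICKscript might look like the following (the database, measurement, and field names are hypothetical):

```tickscript
// Query hourly average speed from InfluxDB on a schedule...
batch
    |query('SELECT mean("speed") AS mean_speed FROM "drones"."autogen"."drone"')
        .period(1h)
        .every(1h)
    // ...and write the aggregate back to InfluxDB.
    |influxDBOut()
        .database('drones')
        .retentionPolicy('autogen')
        .measurement('drone_hourly')
```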
Katy Farmer 00:27:03.401 It can also perform data transformation and enrichment. So a lot of times when we’re passing data from the database to Kapacitor, it’s because we want to do something to that data. So this is where the data transformation will happen, and then, like I said before, you can store that transformed data back in InfluxDB. You can also add user-defined functions to detect anomalies, and since they’re user-defined functions, really, you could do a lot with them. So, essentially, using TICKscript, which, again, is the DSL that we built for Kapacitor, you can write a user-defined function made by you to do something highly specific. So we have people who—from TICKscript, basically you can just make a call to an API or something and call your function—which doesn’t need to be in Go. Our Stack is written in Go, but your functions don’t need to be. And I love Go, I love writing Go, but I also think it’s important that we—when possible, we’re language agnostic because you don’t really want to get into debates about which languages are best for which things because then we’d never get anything done. Kapacitor is also compatible with Prometheus scrapers and it can integrate with a lot of services for alerts, like HipChat, OpsGenie, Alerta, Sensu, PagerDuty, Slack, VictorOps, and a lot more.
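And on the alerting side, a minimal stream TICKscript that pushes a CPU alert to one of those services might look like this (the threshold and channel are made up):

```tickscript
// Watch CPU points as they stream in and alert to Slack when
// system usage crosses a critical threshold.
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_system" > 90)
        .slack()
        .channel('#ops')
```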
Katy Farmer 00:29:00.241 So we want to talk a little bit about Kapacitor’s performance, since it’s doing so much of this work for us. A low load for Kapacitor is fewer than 10 tasks. It’s fewer than 100,000 writes per second, and the cardinality is still right around 100,000 or less. Moderate is over 10 tasks. It’s over 100,000 writes per second, and it’s over 100,000 for cardinality. And then a high load is still over 100 tasks; it’s over 300,000 writes per second, and it’s higher than a million in cardinality. The hardware recommendations are: for low, which is still fewer than 10 tasks, we’re looking at 4 cores and 4 to 8 gigs of RAM. For a moderate load, which is less than 100 tasks, you’re looking at 6 cores and 8 to 16 gigs of RAM, and a high load, which is over 100 tasks, is 8 or more cores and 16 gigs of RAM.
Katy Farmer 00:30:16.179 Okay. So we’re going to touch a little bit on InfluxDB Enterprise now. The core of what Enterprise has to offer is its clustering capabilities, which offer high availability and scalability. There’s also Kapacitor clustering capabilities for the same sort of high-availability options, and we’re working on scalability there too. It’s also got enhanced backup and restore. It’s got more fine-grained auth. It’s got enhanced security. It’s battle-tested. I’m pretty sure that our support team would be proud to say that it’s battle-tested. And it’s got enhanced production deployment capabilities. InfluxDB Cloud is all the things that are in InfluxDB Enterprise plus Chronograf. It’s got basic alerting. It has a Kapacitor add-on and it’s fully managed by us at InfluxData. It’s monitored by us. It’s optimized by us, and it runs in AWS. So Cloud basically takes all of the work out of your hands, gives it to us, and we make sure that it’s working the way it’s supposed to for you.
Katy Farmer 00:31:35.236 So just to go over sort of the differences here in our product offerings. Starting with our open source TICK Stack, we have this open source core, which is really at the heart of everything that we do, and again, probably 90% of our users are using the open source. It’s extensible. It’s got support for regular and irregular data, metrics and events. So the TICK Stack can run on-prem or on the cloud. In Enterprise, it’s still got the open source core. It’s extensible. It’s got support for metrics and events. It’s got high-availability clustering and scalability. It’s got advanced backup and restore. It’s got complete platform support. It runs on any cloud or on-prem. And then InfluxDB Cloud, still, again, our little heart of the open source core, it’s got support for metrics and events. It’s got high availability. It’s got scalability, advanced backup and restore. It’s got the complete platform. It’s managed by us and it runs on AWS.
Katy Farmer 00:32:52.430 So it’s demo time. Check in and see if there are questions, otherwise I’m going to hop into a demo. All right. So what I’m going to start with is how to install the Stack. I’m on a Mac. I use Homebrew, so that’s what I’m going to demo today. I’m more than happy to answer questions about how to—sorry—install it on different operating systems, but for today, I’m going to do brew install influxdb. Luckily, my laptop totally got destroyed a couple days ago, so it’s a fresh, fresh install. So you can see, I just did brew install influxdb. It happened. I’m going to install Telegraf next. Great. Even faster. And then—oops, I spelled install wrong. I’m going to do Chronograf. Chronograf will actually install Kapacitor. You can see there that it was listed as a dependency of Chronograf.
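For reference, the Homebrew commands from this part of the demo are simply:

```sh
brew install influxdb
brew install telegraf
brew install chronograf   # pulls in kapacitor as a dependency
```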
Katy Farmer 00:34:27.296 All right. So now I need to make sure that I start those services. You know, I always write these backwards every time. Whatever I think it is, is always wrong. I should always do the opposite of what I think is the answer. So I tend to start them as soon as I install them, but that’s up to you. But since I know we’re going to be using them, I like to start the services and leave them running. As soon as I start all these things, now I can hop into the command line tool for Influx. And, you can see it says connected to localhost. I’m in my shell now. I can say, “What databases are available? Show databases.” So you can see there are two that are already there. There’s internal, which is just kind of tracking some performance stuff, and then Telegraf, which is the name of the database, again, that is there to display what Telegraf is tracking on the machine it’s installed on, in this case, my laptop. So I could—from in here, I can just write straight InfluxQL. So I can also say—I can’t write that yet. What I need to do first is choose a database. So I can say, “Use Telegraf,” and then I’m using that database, then I could say, “Show me the measurements that are available,” and I can see all these. So then I could write queries to say, “Show me when the CPU is over this—over 90%,” or whatever.
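Roughly, that sequence looks like the following (output omitted; the service names assume the Homebrew install above):

```sh
brew services start influxdb
brew services start telegraf
brew services start kapacitor
brew services start chronograf

influx    # opens the InfluxQL shell, connected to localhost:8086
```

```sql
SHOW DATABASES
USE telegraf
SHOW MEASUREMENTS
SELECT * FROM "cpu" WHERE "usage_system" > 90
```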
Katy Farmer 00:36:49.241 So what I am going to do now is open up Chronograf and sort of show you what that looks like. All right. So you see I come here. It’s kind of—it’s blank because I don’t have any alerts set up or anything right now. This sort of left-hand side is where all of your actions are going to live. So if I go to this host list and I go to apps and I click on system, these are where these pre-built dashboards live. Right now, there’s nothing on them because I haven’t really been collecting metrics on that Telegraf database for more than a minute, but essentially—here we go. This one’s got some stuff. These are where these pre-built dashboards live. So if I hop into the data explorer, I can show you a little bit about just how to build a query in Chronograf.
Katy Farmer 00:37:59.841 So you select your database. In this case, I know that Telegraf is the name of my database. This piece here, in Chronograf, that says .autogen, that’s the name of the retention policy. So our default retention policy here is called autogen, which is basically—I think it’s an infinite retention policy. So let’s see what my CPU usage is like. Usage System. Cool. Okay. So I select my measurement, which is CPU, and then I select the field. And I can select any number of these, right? I could say Usage User and Usage System. Usage Guest is nothing because I’m the only one using this laptop. And you can see the graph being built down here as I go. Again, since I haven’t been recording these metrics for very long, there’s not too much data here, but hopefully, you can start to see how you’re building this. And then you can see the query up here. Now, it’s worth noting that the query that you build in the data explorer doesn’t look exactly like InfluxQL because Chronograf has things called template variables, which are just there to help Chronograf understand InfluxQL. But it’s extremely similar and, for the most part, I wouldn’t worry about it. You can’t copy and paste this into the command line and have it work exactly the same way, I guess.
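The query the data explorer builds for that selection looks roughly like this; :dashboardTime: and :interval: are Chronograf’s template variables, which is why it isn’t quite copy-pasteable InfluxQL:

```sql
SELECT mean("usage_system") AS "mean_usage_system",
       mean("usage_user") AS "mean_usage_user"
FROM "telegraf"."autogen"."cpu"
WHERE time > :dashboardTime:
GROUP BY time(:interval:)
```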
Katy Farmer 00:39:47.142 Okay. So you’ll see that I built this visualization, but I can’t really do anything with it here, right? The data explorer is about—just sort of—well, just like it says, right? You’re just exploring your data. But if I want to save or build some info, then I will go create a dashboard. I click Create Dashboard, I click Add graph, I click Add Query, and now I can do pretty much the same thing, right? Let’s say I just want to track my own system usage, right? Here’s my system, and then here’s this, and then I have a lot of the same things available to me, right? I can still see the query, but I can also choose, over here, in this visualization, I can choose what kind of graph works best for me. Line graph, stacked graph, step-plot, bar graph, which isn’t, in this case, really the right choice. Some of these are going to obviously not be the right choice. Single stat, which doesn’t work for this type of data, but if you were dealing with an aggregate, would be really helpful. A gauge, which is a similar idea. Like, for this case, I’m going to say a stacked graph is the best choice, right? And then there are all these controls, so you can make sure this looks like you want it to look. So I’m going to pause there so we can answer any questions you guys have. Yeah.
Chris Churilo 00:41:31.405 Okay. I’m going to unmute [inaudible], so you can ask your question if you have your microphone on. Or you can type your question in. It’s up to you.
Guest 00:41:50.202 Hello, Katy.
Katy Farmer 00:41:52.085 Hey.
Guest 00:41:56.174 Hello?
Katy Farmer 00:41:57.700 Hi.
Guest 00:41:59.274 Hi. So I have a question regarding something. Particularly, what my understanding is, TICK is getting some metrics and logs from somewhere. So can we push application logs through Telegraf?
Katy Farmer 00:42:20.020 Yeah. Good question. So I have done something similar to this. I write a lot in Ruby on Rails and I have pushed metrics from my Ruby on Rails application through Influx. So, essentially, the piece that you need is the instrumentation, right? So you need to instrument your applications so that those metrics or events are available to be sent through Telegraf. But in my case, I was using MySQL as the database, and there’s a MySQL Telegraf plug-in. So I used the MySQL Telegraf plug-in to send data about HTTP events, and web events, as well as site traffic, stuff like that, all the way through to Influx so that I could track my site usage in my application.
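As a sketch of that instrumentation step using the Ruby client library (the influxdb-ruby gem; the database, series, and values here are hypothetical):

```ruby
require 'influxdb'

# Connect to a local InfluxDB and write one point per web request.
influxdb = InfluxDB::Client.new 'myapp'
influxdb.write_point('http_requests',
  tags:   { method: 'GET', status: '200' },
  values: { response_time_ms: 42 }
)
```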
Guest 00:43:11.804 Okay. One more thing—have you heard about the ELK Stack also?
Katy Farmer 00:43:18.907 Yeah.
Guest 00:43:19.178 Yeah. So in the ELK Stack, we have Kibana and all. So Kibana is also doing the same kind of work, like displaying the data in chart format. And also, how are TICK and ELK different?
Katy Farmer 00:43:37.273 Yeah. That’s also a really good question. ELK is really great at what it does, and I think the thing that makes it different is just that they’re optimized for different types of data. So ELK is very much about a specific kind of analytics, and a lot of the time the key difference is in the Elasticsearch portion. Elasticsearch is really good at what it does, but what it does is not time series. Right? What it does really well is text search and text indexing, and that kind of thing, whereas our sort of biggest strength is that, for time series data, for those really high numbers of points, for anything that’s coming from a sensor, or that you need to track with really high-precision time, all the way down to the nanosecond, Influx is going to be optimized better for that. That’s sort of the key difference, is just use case, right?
Chris Churilo 00:44:47.931 It’s also—let me just chime in here just a little bit. So, also, when you think of the ELK Stack, it’s more about being able to access a collection of logs that you’re going to be sending from all your various components, and a lot of people will actually use InfluxDB with the ELK Stack. So they’ll have this collection of logs which is just sitting there, which is really great, but then what they do is they collect this set of metrics using InfluxDB. So when there is an incident where they realize, “Hey, there’s something wrong with my MySQL database. It’s not performing correctly,” now that they know something went wrong at that point, they can go in and take a look at the logs, if they need to. So a lot of people use them together.
Katy Farmer 00:45:33.554 Yeah. Great point, Chris.
Chris Churilo 00:45:33.699 [crosstalk] themselves. You need to have some kind of indicator, right, that something’s wrong.
Guest 00:45:41.063 Oh, okay. So I have a bit more doubts, can I ask? Can I proceed?
Katy Farmer 00:45:49.134 Yeah.
Guest 00:45:50.141 Yeah. So particularly, like you talked about application logs and all. You said that when you are integrating your Ruby on Rails application with the TICK Stack, is there any inbuilt plug-in already made, or in place, or if we have logs, do we have to parse them into the format that Influx is expecting? Is it like this, or something else?
Katy Farmer 00:46:25.470 Yeah. Okay. Also good question, you’re doing a great job. So I think it’s sort of—my first instinct will be to say it depends on your application. But, that being said, so we have really good client libraries, depending on the language and stack that you’re using, which I have used, so that I don’t have to—I could sort of skip that step. We have client libraries in Ruby, JavaScript, Go, Java, Python, so if you are using one of those client libraries, that can sort of abstract away that pain and just enable you to send what you want without having to worry about parsing it. There is, definitely, a—I would have to check the list. I’ll just go look at it. I’m going to look at the list of Telegraf—
Chris Churilo 00:47:21.257 Yeah, so I actually put that link in there for you. So there is a log parser Telegraf plug-in, so.
Katy Farmer 00:47:27.158 Yeah, log parser.
Chris Churilo 00:47:28.369 I just put a blog up there.
Guest 00:47:29.723 Yeah. Yeah. I will come to this point also. You talked about the log parser plug-in. So it will work well with Tomcat and all: it will read whatever logs are produced by Tomcat, and after that it will do some integration. But, for example, if the logs are something else, for example, if we use some JSON loggers and put log messages in JSON, and we want to put those logs into Influx, is there any plug-in for Telegraf which will do this for me?
Katy Farmer 00:48:12.934 That’s a good question. I don’t know for sure, off the top of my head. I would probably have to ask our Telegraf team.
Guest 00:48:24.407 Because what I know is that there is a log parser, yeah, exactly, you are right. What it will do is take whatever logs are produced by the Tomcat or Apache server and log them into InfluxDB. But, for example, if a team, or whoever, is using Log4j or something, and we want to put the logs it produces into InfluxDB, is there any kind of log parser you have made to do the formatting and all? If it is not there, then do we have to write our own plug-in? Is it like this?
Katy Farmer 00:49:08.323 Yeah. So I’m not sure if there’s one. I don’t think there’s a plug-in that changes JSON to our protocol, is the short answer. I don’t think that there is one, but I know that this is a problem that a lot of people in our community have had, and so a lot of times—so we have a community site where people will ask these kinds of questions, and I think there may be a solution there. Maybe I’ll have Chris send out the link for it, but I know I have run into this problem before, I just can’t quite remember what the solution was.
Guest 00:49:49.771 Yeah. I searched in your community also. I’m there in the community.
Katy Farmer 00:49:53.744 Cool.
Guest 00:49:54.075 So yeah. So, particularly, I have not found any such queries. I’ve seen that people are searching for a log parser there, like the Apache Tomcat log parser they are talking about, but they never talked about a JSON parser and all which would put the data into InfluxDB.
Chris Churilo 00:50:12.896 Actually, we do have a JSON parser. I just put the link in there. You should take a look at that one. I don’t know if it’s going to do what you need, though. I just put it in the GitHub link in the chat panel.
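A hedged sketch of the approach being discussed, using Telegraf’s tail input with its JSON data format (the file path and database name are hypothetical, and how fields map depends on your log shape):

```toml
[[inputs.tail]]
  files = ["/var/log/myapp/app.log"]
  data_format = "json"          # parse each log line as JSON

[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "applogs"
```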
Guest 00:50:23.934 Okay. Okay. So, okay. So you continue with your presentation, I will ask whatever question later. Please.
Katy Farmer 00:50:32.423 Okay. Yeah. Thank you.
Guest 00:50:35.876 Thank you. Thank you, Katy.
Katy Farmer 00:50:38.548 All right. I mean, really my next bit is about the community. So I want to point out that we have a community site, which gets a lot of usage, and sometimes there are a lot of solutions there and it’s always worth asking questions there because our Influx team is on it a lot. So we get a lot of our sort of information about how to move forward from our community. That’s one of the cores of open source, right, is that you sort of listen to what people need. So the community site is sort of a place where we listen and figure out what features we need, what plug-ins Telegraf needs, that sort of thing. We also have recently started meet-ups in San Francisco. We just had our third one last night, and it was really great. We also have them in New York, Boston, London, and Denver. So that’s always a really good place to come with these questions. There’s always lots of Influx people there and lots of Influx users, which is the more sort of important piece because I’m of the opinion that it is always hard to know as much as the people who use the product every day. A lot of times, our users are better at answering each other’s questions than we can ever be because they are deeper in the use of the product than we are.
Katy Farmer 00:52:15.293 We also have a really great blog series on our website where we try to answer some of the most common—sometimes, really common questions. Sometimes, really complicated questions. Sometimes, just how to get started. So I always recommend checking out the blog series. And if you find yourself writing something and really running into trouble, you can always tweet at us, or you can tweet at me. We’re InfluxDB on Twitter and I am TheKaterTot. I’m always happy to answer questions there, or you can mention me, or DM me, or whatever you need. If you need to get my attention or the company’s attention, feel free. So that is the end of the presentation and I’ll let Chris handle any Q&A.
Guest 00:53:10.562 So hello?
Katy Farmer 00:53:11.866 Yeah?
Guest 00:53:12.409 Yeah, Katy, one more thing that you said. So if I want to contact your team and all, particularly you, to ask some questions, can I get that ID and all? So if I have some parsing, like this JSON logger, if I am building it, I will contribute it.
Katy Farmer 00:53:38.352 Yeah. That would be great. Let me—
Guest 00:53:43.533 Could you please ping your email ID to me so that—?
Katy Farmer 00:53:48.075 Yes. Absolutely. Chris, feel free to put my email address.
Chris Churilo 00:53:54.625 Just put it in the chat?
Katy Farmer 00:53:56.087 Yeah. Chat, chat, chat.
Chris Churilo 00:53:59.737 Here. I’ll just do it.
Katy Farmer 00:54:00.710 Thank you. Clearly, chat is beyond my abilities.
Chris Churilo 00:54:08.871 Okay. Cool.
Guest 00:54:10.712 Okay. Thank you, Katy. And I hope I will build on this: something capable of parsing all these things, like a JSON-format message. It will parse it, take the timestamp field from your logs, and put it directly into Influx. So it will be very useful.
Katy Farmer 00:54:33.794 Yeah. That would be great.
Chris Churilo 00:54:37.825 Awesome. Thank you so much. And thanks to Katy and that was, I think, the last of our questions, so we’ll end our training now and we’ll see you guys next week.
Guest 00:54:51.109 Okay. And one more thing, Katy. I want to—you have meet-ups in San Francisco and all. When will there be one in India?
Katy Farmer 00:55:02.159 Probably—
Chris Churilo 00:55:03.552 Maybe you can start one.
Guest 00:55:07.722 Okay. Okay. Okay. Thank you, Katy.
Katy Farmer 00:55:11.446 Thank you.
Guest 00:55:12.536 Thank you. Thanks a lot.

Katy Farmer
DevRel
Katy lives in Oakland, CA with her husband & two dogs (at least one of whom talks to her about fun, technical stuff). She loves to experiment with code, break stuff, and try to fix it. She learned to code at Turing School of Software and Design in Denver, CO, and it gave her the perfect chance to break stuff before she knew how to fix it. Ask her about Ruby, OOP, Go, & natural language processing.