In this session you’ll get a detailed overview of Kapacitor, InfluxDB’s native data processing engine. We’ll cover how to install, configure and build custom TICKscripts to enable alerting and anomaly detection.
Watch the webinar “Intro to Kapacitor for Alerting and Anomaly Detection” by filling out the form and clicking on the download button on the right. This will open the recording.
Here is an unedited transcript of the webinar “Intro to Kapacitor for Alerting and Anomaly Detection.” This is provided for those who prefer to read than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
• Chris Churilo: Director Product Marketing, InfluxData
• Michael DeSa: Software Engineer, InfluxData
Chris Churilo 00:00:00.494 Okay. With that, it’s three minutes after the hour, so hello everybody. My name is Chris Churilo and I run the trainings and the webinars here at InfluxData. And today’s training is on the introduction to Kapacitor. I just want to remind everybody that we are recording this session so you can take another listen to it. In addition, if you have any questions, just put them in the chat or the Q&A, whatever is convenient for you. And when we get a couple of breaks during this session today, Michael or myself will answer them, depending on the question. And it doesn’t have to be just about Kapacitor. It’s your chance to ask questions about any of the components in the TICK Stack. So with that, I’m going to put myself on mute and let Michael take it away.
Michael DeSa 00:00:47.708 Thank you very much, Chris. Awesome. So as Chris mentioned, today is kind of a getting-started series with intro to Kapacitor. My name is Michael DeSa. I’m a software engineer and I’ve worked on kind of a number of projects here at InfluxData, including Kapacitor. So just in case anybody doesn’t know kind of the high-level overview of what InfluxData is, and our commercial offerings, and the TICK Stack, and how those things kind of come into play with one another, so we have an open kind of core, or an opensource core of the TICK Stack which is Telegraf, our agent for collecting metrics; we have InfluxDB for storing those metrics, which is the database; we have Chronograf, which is our user interface for visualizing and kind of interacting with those metrics, kind of like a UI layer to everything within the TICK Stack; then we have Kapacitor, which is processing and alerting of that time series data. So all of those are opensource and free to use. Where we start to think about a commercial offering is if you need high availability and so you need to have a cluster or you need scale out performance. Or if you don’t want to manage an instance yourself, it’s where we start to think about our commercial offerings. So InfluxEnterprise is an on-prem solution that has clustering so you can have a highly available system and also scale it out. And then InfluxCloud is a hosted version of InfluxEnterprise, where we will host the database for you and a number of other auxiliary services.
Michael DeSa 00:02:30.378 So what is it that we’ll be covering today? Today we’re going to be covering what Kapacitor is. We’re going to talk a little bit about how to install Kapacitor. Just like all the other InfluxData tools, very easy to install. We’ll go into TICKscript a little bit. So TICKscript is a domain-specific language used in Kapacitor to define tasks. We’ll talk about some of the weirdnesses in there, namely the weirdnesses around quoting rules. And then we’re going to go through the process of creating a task. We’ll start with a simple one and kind of gradually make it more and more complex. We’ll do that for stream tasks and we’ll do it for batch tasks. We’ll talk a little about batch versus stream and when you might want to use one over the other. And then we’re going to use Kapacitor as a downsampling engine instead of continuous queries. So something we’ve kind of been encouraging people to do is, if they have continuous queries, just move those continuous queries into Kapacitor rather than have them run in InfluxDB. So just kind of highlight at a very high level the kind of core capabilities of Kapacitor. Whenever you hear Kapacitor, I want you to think processing and alerting. Those are the two things that should kind of come to your mind. Processing and alerting—Kapacitor. If I need to do anything with processing or alerting, I use Kapacitor.
Michael DeSa 00:03:56.277 And just to kind of give you a little bit more of a detailed explanation of what Kapacitor is, it’s a real-time stream processing engine for time series data. It can process data in either batches or streams. Both types of data will be coming from Influx. The difference between a batch and a stream is that, with a batch, Kapacitor will periodically query InfluxDB, and with a stream, InfluxDB will write data to Kapacitor in real time. Kapacitor has the ability to plug in custom logic or user-defined functions. So suppose that you have some kind of custom function that you want to apply to your data, some kind of anomaly detection or anything like that, you can apply your custom logic in a UDF, or you can do a custom kind of math or logic in actual TICKscript. So there are kind of two ways you can go about that. And it integrates with a number of different alert providers, so things like HipChat, or OpsGenie, or Alerta, or Sensu, PagerDuty, Slack. There’s a huge array of various kinds of options that you can pick from for doing your alerting.
Michael DeSa 00:05:12.673 So that’s kind of Kapacitor in a more expressive manner. To install and start Kapacitor, if you are on Debian, it should be as easy as wget and then this link right here, where it’s basically just the standard Kapacitor. If you are on another kind of system, feel free to go to our downloads page and you can find your particular version of Kapacitor for your operating system, and just download that and you should just be able to just install it from there, then kind of start it up. Or if you’re on OS X, you can simply just install it through brew. So once you’ve downloaded it and installed it, you can start it either in the background using systemctl start, or you could start it in the foreground with kapacitord.
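Roughly, the install-and-start sequence Michael describes might look like this on Debian/Ubuntu. The package URL and version here are illustrative, not the exact link from the slide; check the downloads page for the current release:

```shell
# Debian/Ubuntu: download and install the package (version is illustrative)
wget https://dl.influxdata.com/kapacitor/releases/kapacitor_1.5.3_amd64.deb
sudo dpkg -i kapacitor_1.5.3_amd64.deb

# start in the background
sudo systemctl start kapacitor

# or run in the foreground
kapacitord -config /etc/kapacitor/kapacitor.conf
```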
Michael DeSa 00:06:08.962 So once we have the actual program running, we can start defining tasks. And as I mentioned earlier, tasks are defined from what is called TICKscript, which is a domain-specific language used to define Kapacitor tasks. It is a chaining invocation language, meaning that you have these kind of pipes that separate different nodes and then these dots that describe specific attributes on those nodes. So those are referred to as chaining methods with the pipes and property methods with the dots. As you can see kind of at a high level, what we’re doing here is we’re kind of defining what’s called a pipeline, where you can think of basically kind of like a stream of data where things are flowing from one node to the next node and different kind of pieces of logic happen at each place along the chain. And as you can see, it has a number of things. It’s got strings, which are these single-quoted values here. It has variables, which can store Strings, Ints, Floats, Bools, etc. You have duration literals. So as you can see, as a first-class type, you have duration literals, like 5 minutes, 10 minutes, 20 minutes, etc. And then you have pipelines which are kind of these streams of things where you have a chain of different functions kind of pulled together under a common branch.
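As a sketch, the pieces Michael just named—chaining methods, property methods, variables, and duration literals—look like this in TICKscript (the measurement name is just an example):

```tick
// variables can hold strings, ints, floats, bools, and duration literals
var measurement = 'cpu'
var period = 5m

stream
    |from()                       // | introduces a chaining method (a new node)
        .measurement(measurement) // . introduces a property method on that node
    |window()
        .period(period)
        .every(1m)
```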
Michael DeSa 00:07:53.134 So as I mentioned, there’s a little bit of weird quoting rules in TICKscript, and so it’s kind of something that takes a bit of time to get used to in that strings are represented by single quotes. So a single quote refers to the string literal, and a double quote refers to referencing a value. Right? So whenever you see double quotes, you should think, “I’m referencing the value stored under this particular tag or field,” whereas whenever you see single quotes, you mean the literal string. And so another way you can kind of think of this is, you should really only ever use double quotes in lambda expressions. So in this example here, we have lambda is_up equals true. This would be a case where we are referencing the value of is_up and checking to see whether or not it is true. The one place where this gets a little bit confusing is if I have function count that operates on the field value. I use single quotes here and I use the string literal to refer to the field value in the function count. So this is a little bit of a place where it gets a little bit weird, and just the kind of rule of thumb is, is it in a lambda expression? No. So then we use single quotes.
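A small illustration of that quoting rule, with example measurement and field names:

```tick
stream
    |from()
        .measurement('cpu')             // single quotes: the literal string 'cpu'
        .where(lambda: "is_up" == TRUE) // double quotes: reference the value stored in is_up
    |count('value')                     // not in a lambda, so single quotes name the field
```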
Michael DeSa 00:09:23.930 So now we’ve got some TICKscript stuff down, we’re going to kind of go through the building up of basic TICKscripts. We’re going to start with the simplest thing we can do, which just logs all of the data from the CPU measurement, and to say this, we say stream, pipe from, measurement, CPU, where it’s specified in single quotes, and then pipe log. So this is a very simple TICKscript. All it does is it streams all the data from the measurement CPU, and then logs it to the Kapacitor logs. Once we have that task or that TICKscript, we can define a task from that TICKscript by saying Kapacitor define, the name of the task, which in this case we’ll call it CPU, dash tick, which is the path to the TICKscript, dash type, which specifies whether it’s a stream or a batch—this has actually been deprecated; you do not need to specify type anymore—and then you say dash DBRP, which are the databases and retention policies that the task is allowed to access.
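Reconstructed from that description, the TICKscript might look like this (the file and task names are illustrative):

```tick
// cpu_log.tick -- stream all data from the cpu measurement and log it
stream
    |from()
        .measurement('cpu')
    |log()
```

It would be defined and enabled with something like `kapacitor define cpu_log -tick cpu_log.tick -dbrp telegraf.autogen`, followed by `kapacitor enable cpu_log`.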
Michael DeSa 00:10:32.002 Once we’ve defined the task, we can enable the task, and once we’ve enabled the task, we can say Kapacitor show and then the name of the task, and it outputs this nice structure that we see here. So just a bunch of information about the task as it is. So we see stuff like if there are any errors, is it a template, the type, status, if it’s executing, when it was created, modified, last enabled, the databases and retention policies it’s allowed to access, the TICKscript, and then we get a dot representation of that actual task. So this is particularly useful for debugging things. So as data starts to flow through our TICKscript or our task, we can actually see information about it in this dot representation. So whenever I’m debugging a TICKscript, that’s one of the things I usually ask for, is kind of get Kapacitor show, the name of the task, and I start looking at this dot diagram down here to get an idea of what’s actually happening.
Michael DeSa 00:11:38.633 So now that we have a basic TICKscript, we’re going to do something a little bit more interesting. So we previously just logged our results, not the most interesting thing in the world. So now what we’re going to do is we’re going to create five-minute windows of data and then emit that every minute. So it’s kind of a rolling window of five minutes of data. And once we have done that, we want to compute the mean of the usage user field. We’re going to name it mean usage user, and then we’re going to log the result. So what that ends up looking like is: stream, pipe from measurement CPU, pipe window; period, five minutes, every one minute—so period is the size of the window; every is how frequently that window emits—and then we say pipe mean, which is just applying the average function to the field usage user—so all of the data in that window is kind of aggregated together using the mean function—and then we name the resulting value from that mean usage user, and then we log the output. So this is kind of just a standard bread and butter, computing means of time windows.
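A sketch of that rolling windowed-mean task as TICKscript (field names follow Telegraf’s cpu measurement):

```tick
stream
    |from()
        .measurement('cpu')
    |window()
        .period(5m)  // size of the window
        .every(1m)   // how frequently the window emits
    |mean('usage_user')
        .as('mean_usage_user')
    |log()
```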
Michael DeSa 00:12:54.402 We can make it even a little bit more interesting. Instead of doing it for all of the particular CPUs underneath the CPU measurement, we can do a where filter where we say the same exact TICKscript that we had before, except we only want to see it for CPU total, not for each CPU individually. So we can do something along the lines of stream from measurement CPU where lambda of CPU is equal equal cpu-total. So what I’m saying here is, look up the tag values underneath the tag CPU and see if it is equal to cpu-total. If it is, let the data continue through. Then we want to window that into five-minute periods and emit that period every minute, and then we compute the mean of the usage user, name it as mean_usage_user. Again, this is essentially the same thing that we were doing previously, except we’ve added this nice where condition in here.
Michael DeSa 00:13:56.117 So now that we have the where condition and we have the mean, what we probably want to do is issue a critical alert. We’ve got some value, and we want to issue an alert whenever that value reaches some critical threshold. Right? So in this case, let’s assume that if the mean usage user is over 80% our CPU usage is way too high and we need to be alerted about it. So what we would do is we have the same TICKscript that we had previously: stream from measurement CPU where lambda CPU is equal to CPU total; window period, five minutes, every minute; mean usage user as mean usage user, pipe alert, and then we say .crit, and we pass in a lambda expression to crit that says mean_usage_user greater than 80. And then we have a message that is, “This CPU usage is too high,” and what we want you to do is ping the alerts channel in Slack, and also email whoever is on call. So we’ve kind of taken some basic processing and ended up creating an alert around it. So we can do the exact same kind of logic, or exact same kind of processing, with a stream task as we can with a batch task. So we just did something with a stream. We’re going to move on to doing it with a batch.
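Putting the where filter, window, mean, and alert together, the task Michael describes might be written like this (the Slack channel and email address are placeholders; both handlers also need to be configured in kapacitor.conf):

```tick
stream
    |from()
        .measurement('cpu')
        .where(lambda: "cpu" == 'cpu-total')
    |window()
        .period(5m)
        .every(1m)
    |mean('usage_user')
        .as('mean_usage_user')
    |alert()
        .crit(lambda: "mean_usage_user" > 80)
        .message('CPU usage is too high')
        .slack()
        .channel('#alerts')
        .email('oncall@example.com')
```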
Michael DeSa 00:15:19.896 So in this case, to compute the mean usage user as mean_usage_user, what we simply do is we put the InfluxQL query inside of the query node of a batch that’s chained off of batch. So say, batch, query, select mean usage_user as mean_usage_user from Telegraf, autogen, CPU; and we say a period of five minutes, every five minutes—so this is that same windowing kind of logic that we had been doing previously—and then we log the result. So we can then define the batch task in the same way that we define the stream task. We say Kapacitor define batch CPU, the name of the task, and I say dash tick for the path to the TICKscript, dash type batch, which again is deprecated—you no longer need to specify whether or not it is batch or stream—and then you specify the database and retention policies that the task is scoped to. So we can have our batch task do the exact same thing that our stream task had been doing, which is to issue a critical alert if mean usage user is over 80 with, “CPU usage is too high,” and output the result to Slack and send an email about it. And so what we’re doing there is just issuing the same alert that we’ve been doing on our stream task to our batch task.
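The batch version, as a sketch; the InfluxQL string and database/retention-policy names follow the telegraf.autogen defaults mentioned in the talk:

```tick
batch
    |query('SELECT mean("usage_user") AS mean_usage_user FROM "telegraf"."autogen"."cpu"')
        .period(5m)
        .every(5m)
    |log()
```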
Michael DeSa 00:16:57.921 So one of the questions that I get asked all the time is, when should you use batching versus streaming? The answer that it comes down to is, which instance is under more load. And so batching is very good if you have a very high throughput InfluxDB, where with a stream task all of that InfluxDB write load would be placed onto a Kapacitor instance. And so that can be a little bit much for the Kapacitor because it has to do all of this processing in addition to all of the buffering of data. So for that reason, if you have a very high throughput environment, a very high throughput InfluxDB, I generally recommend using batch tasks instead of stream tasks. One of the upsides of batching is that Kapacitor doesn’t buffer all that data in RAM. The downside of batching, though, is that it places additional query load on Influx, which, if you have a very high query load on Influx already, may be undesirable.
Michael DeSa 00:18:04.419 And then one other kind of issue with batching is that you really don’t quite get as low of latency as you can with something like a streaming job. So in a streaming job, all writes are mirrored from the InfluxDB instance onto Kapacitor. It has the upside of, it’s a lot lower latency. So if you need to be alerting on specific values, then it usually works pretty well. It has the downside that all of the data will be buffered in RAM, and one of the issues with streaming is it can place a lot of additional write load on Kapacitor, which, if you have a very high throughput InfluxDB instance or cluster, can actually be problematic. So the next kind of thing that we want to talk a little bit about is using Kapacitor for downsampling.
Chris Churilo 00:19:10.203 Hey, Michael?
Michael DeSa 00:19:10.855 In particular—what was that?
Chris Churilo 00:19:12.644 Hey, Michael?
Michael DeSa 00:19:13.698 Yeah?
Chris Churilo 00:19:13.684 Yeah. Maybe let’s pause here and take a look at these functions so that we can go back to those samples. In the chat, there’s a couple of questions.
Michael DeSa 00:19:23.159 Sure. All right. So we have a question in the chat. It says: “Can we express a lambda that filters for data points with a specific field that is defined when you have the same type of measurement data points with different fields?” So you can make what is called a templated task to do that. Alternatively, you can convert the name of a field to be a consistent one across them. But typically we recommend using a templated task with the lambda expressions. Hopefully, Anthony, that answers your question. So the next question is, “Do we need InfluxDB when we use Kapacitor since it supports alerting along with processing? What makes us go for InfluxDB over Kapacitor?” So you don’t need to use Influx with Kapacitor. We actually have some users that use Kapacitor only by itself and they do alerting on Kapacitor. They don’t care about actually storing their data. They just want to alert on it and then kind of drop it on the floor. So if that’s the kind of workflow that you have, or if that’s okay with you, then you can simply use Kapacitor on its own. But typically what we see is people will have their time series data stored in Influx, and then they process on that data with Kapacitor, or they alert on that data with Kapacitor. Any other questions?
Michael DeSa 00:21:06.082 All right. Let me check the Q&A and see if there’s anything there. Doesn’t look like there’s anything quite in the Q&A. All right. If you have any more questions, please do post them in the chat. I’m happy to answer them. So as I mentioned, one thing I want to talk about is using Kapacitor for downsampling and that is in place of continuous queries. So just at a high level, what is downsampling? Downsampling is the process of reducing a sequence of data points in a series down to a single data point. So it’s things like computing the average, the max, the min of a window of data for a particular series or a particular time series. Why would somebody ever want to downsample? And I think the answer is that you get faster queries, you have to send less data over the wire when you do stuff, and you can actually just end up storing less data. And so why would one downsample using Kapacitor in Influx since Influx has the ability to do continuous queries? The answer for that is you can offload some of the computation and some of the processing onto Kapacitor rather than having Influx handle all of it. On top of that, the way that continuous queries are implemented in the database is a little bit less performant than just using Kapacitor to do the same thing, particularly if you are in a cluster.
Michael DeSa 00:22:42.122 So the idea of downsampling in Kapacitor is you simply create a task that aggregates your data—it can be stream or batch, just depending on your use case—and then you just write that data back into Influx into a different retention policy. So an example would be to downsample data into five-minute windows and then just write that data back into Influx into a different retention policy. And so you’d say something like, batch, query, select mean usage user as usage user from Telegraf, autogen, CPU; period, five minutes, every five minutes; and then pipe InfluxDBout, the database Telegraf, retention policy, five minutes; and then I like to tag it with some additional information here. In particular, I like to tag it with where this data came from, which in this case was Kapacitor.
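That downsampling task might look like the following; the target retention policy name and the source tag value are illustrative:

```tick
batch
    |query('SELECT mean("usage_user") AS usage_user FROM "telegraf"."autogen"."cpu"')
        .period(5m)
        .every(5m)
        .groupBy(*)  // preserve all tags on the downsampled points
    |influxDBOut()
        .database('telegraf')
        .retentionPolicy('five_minutes')
        .tag('source', 'kapacitor')  // tag where this data came from
```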
Michael DeSa 00:23:32.512 So again, when should you use batch versus stream? It kind of comes down to, if you have larger windows and a lot of series, I might consider doing batching. And if you have smaller windows with not as many series, I would maybe recommend streaming, and depending on your throughput and all of these things. But as kind of a general rule of thumb, I typically recommend people start with stream, and then when stream stops working for them, switching to batch. And so if you can do streaming things, I think streaming has a lot of upsides. But as we start to get into very high-throughput, high-performance systems, batching tends to become necessary. And that is it. Again, I’m Michael at InfluxData. Happy to answer any questions. I’ll hang around and chat for some time after this. And thank you for hanging out with me today.
Chris Churilo 00:24:28.814 Thank you, Michael. And there is a question from Dennis in the Q&A.
Michael DeSa 00:24:35.914 All right. So we have a question, it’s: “Can we use wildcards for downsampling to downsample a whole database in one query?” You can. However, you can’t do it in Kapacitor. You can do it through a continuous query in Influx, but it’ll end up putting a whole lot of load on your instance. There’s not really a solid way to downsample all of the data in the entire database in a single continuous query and to maintain things like measurement names and field names across all of them. It’s something that we’re actively working on and a place where I would describe it as a little bit of a pain point for the database. That being said, if you’re coming from other systems like Graphite where you have this kind of automatic rollups that happen for you, in Influx we’ve noticed that, since our compression is actually so good, people can usually keep around raw data almost indefinitely, really. And so I feel like we’ve kind of gotten away with maybe not a great story around downsampling everything in the database, simply because we can actually store a ton of data with a relatively small amount of space. Any other questions?
Chris Churilo 00:26:09.079 So we’ll keep the line open for just a few minutes if you have any other questions for Michael. It doesn’t have to be about Kapacitor. It can be about any of the other components of our opensource TICK Stack or even InfluxEnterprise. We’ll leave the lines open for you guys. Just want to remind everybody, I will post this recording later on today and you’ll be able to actually access it via the same URL that you registered to make it easy for everybody. And we have more training next week on optimizing the TICK Stack and then actually go into a lot more details on downsampling the week after that. So I encourage everybody to take a look at the schedule and join us for those other trainings. And then another plug for InfluxDays, which will be coming up in London on June 14th, and you can take a look at the agenda at influxdays.com. I think we had a really good time. Michael and I were there in both the New York and San Francisco events. Got a chance to meet a lot of you guys, which is always really helpful for us. All right. We’ve got a question in the Q&A from Ram.
Michael DeSa 00:27:18.967 Yeah. So the question is, “Can we use InfluxDB for big data use cases like telemetry for a continuous stream?” And yes, yeah, that’s our kind of bread and butter. That’s what we see people do all the time.
Michael DeSa 00:27:44.571 That’s probably one of our strongest kind of use cases. Sometimes we see people maybe have a different kind of architectures, but we really do see them storing the data in Influx pretty universally. I think we’re kind of willing to go up head to head with really any other kind of time-series-data platform or other kind of streaming tools out there, and I think we can usually perform pretty well.
Chris Churilo 00:28:18.670 And it looks like there’s a couple questions in the Q&A as well.
Michael DeSa 00:28:24.951 So we got another question. It’s, “Can we use wildcards for a retention policy in Kapacitor?” You cannot use a wildcard for a retention policy. You have to define it for each specific retention policy. And the other question is: “Does downsampling automatically maintain tags or do they have to be added back in?” Whatever tags you had grouped by are the tags that will be maintained. So if you grouped by a star, all of your tags will be maintained. If you only group by host, the only one that will be maintained is the host tag. So group by star, you maintain all your tags; group by host, you only maintain the host. The next question is, “With a stream or a batch, is it possible to use a single TICKscript to compare results from two measurements on two different DBs and alert with complex logic on top of that?” Yes, it is. It’s entirely possible. So what you would do is you would create a single task that read data from two different either databases or measurements. You could just use both of them, and then you can combine via a join. You can do complex logic in that join, compute the average between them and then take some sigma of that, and then when that sigma is above a certain criteria, kick off an alert. That’s entirely something that we see people do pretty regularly.
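A hedged sketch of the two-source join Michael describes—the database, measurement, and field names here are all hypothetical:

```tick
// two streams from different databases (names are hypothetical)
var a = stream
    |from()
        .database('db_one')
        .measurement('requests')

var b = stream
    |from()
        .database('db_two')
        .measurement('requests')

// join on time, compute the average of the two values, alert on it
a
    |join(b)
        .as('one', 'two')
    |eval(lambda: ("one.value" + "two.value") / 2.0)
        .as('avg_value')
    |alert()
        .crit(lambda: "avg_value" > 100.0)
```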
Michael DeSa 00:30:01.234 The next question is, “Does a batch query automatically query just for the needed window size?” Yes, it does. So based off of what you specified for your period and your every, it will issue a query for that time range. It just queries the time range that you asked. You can have it go longer and group by time if you’d like. So if I want to say do rolling 10-minute intervals, where I query the last 60 minutes’ worth of data grouped into 10-minute buckets every 10 minutes, I can do that as that’s something that’s pretty common as well.
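The rolling pattern Michael mentions—querying the last 60 minutes every 10 minutes, grouped into 10-minute buckets—might be expressed like this:

```tick
batch
    |query('SELECT mean("usage_user") AS mean_usage_user FROM "telegraf"."autogen"."cpu"')
        .period(60m)          // query the last 60 minutes of data
        .every(10m)           // issue the query every 10 minutes
        .groupBy(time(10m))   // bucket the results into 10-minute intervals
    |log()
```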
Chris Churilo 00:30:54.089 Okay. We’ll do another last call for questions. Let’s wait for a few seconds more. And we’ve got one from Anthony in the Q&A.
Michael DeSa 00:31:09.673 “Can we use an external API, JSON, in TICKscript?” You can. There’s two ways you can do an [inaudible] output of JSON from a task. So you can output JSON in a format that we defined. You can output it in an alert format or you can output it in a custom format. So there’s kind of a couple different ways that you can do each of those things. So the answer is yes, you can output a custom. So the next question is, “Can we join more than two data sets based on time?” Yes. Kapacitor has the ability to join data from different data sets based on time. So when we join data, we do a strict inner join on time and the tag set.
Michael DeSa 00:32:16.338 The question is: “How to integrate external anomaly detectors, Python scripts, etc.?” So there’s two ways you can do it. You can do it with Python or you can do it in Go. We have some documentation on what are called UDFs or user-defined functions that describe specifically how one does this. But essentially you make a task that consumes some data, and then you make a Python program that hooks into Kapacitor, and then you use a node in TICKscript that calls out to that Python script or that Go script. There’s examples of this in the Kapacitor repository or in our documentation.
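On the TICKscript side, calling out to a UDF looks roughly like this; the function name, its option, and the resulting field are all hypothetical and depend on how the UDF is registered (under `[udf.functions]` in kapacitor.conf) and what the UDF itself emits:

```tick
stream
    |from()
        .measurement('cpu')
    // @ invokes a UDF registered as [udf.functions.myDetector] in kapacitor.conf
    @myDetector()
        .field('usage_user')
    |alert()
        .crit(lambda: "anomaly_score" > 0.9)
```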
Chris Churilo 00:33:09.653 I’m afraid to unmute myself. We might get another question [laughter].
Michael DeSa 00:33:17.695 Yeah, that’s good. We got lots of questions today.
Chris Churilo 00:33:19.996 Yeah. Michael likes questions, guys, so don’t be shy.
Michael DeSa 00:33:23.099 Love questions.
Chris Churilo 00:33:28.125 And he always has an answer, so let’s stump him. And don’t worry, if you have a question afterwards, you can always go to the community site. Michael, and the DevRels, and the other engineers are often there, answering questions about Kapacitor and the other components in the TICK Stack. And if you can join us live at InfluxDays or any of the events that we’re going to be going to, I really would encourage it. We could always get to a lot more of your questions. All right, Rahul. Rahul is asking you, Michael: “How exactly is this used without InfluxDB?”
Michael DeSa 00:34:15.664 So what you would do is you would create a stream task and then Kapacitor exposes an HTTP endpoint that is the exact same as the InfluxDB endpoint, so /write. And you specify the same semantics that you would as if you were writing to InfluxDB with line protocol data, and it will kind of just operate on that. So it doesn’t need to be hooked up to an InfluxDB. You can simply just write data to it in the manner that you would write to InfluxDB and it will accept that data. So you don’t have to use Influx at all so long as you’re using line protocol. That is kind of the one requirement.
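Concretely, writing line protocol straight to Kapacitor might look like this; 9092 is Kapacitor’s default port, and the host, db, rp, and point data are illustrative:

```shell
# POST InfluxDB line protocol to Kapacitor's InfluxDB-compatible /write endpoint;
# db and rp should match the dbrp the stream task is scoped to
curl -i -X POST 'http://localhost:9092/write?db=telegraf&rp=autogen' \
  --data-binary 'cpu,host=server01,cpu=cpu-total usage_user=42.5'
```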
Chris Churilo 00:35:18.760 All right. I’ll leave the line open for one more minute, and then we will shut down the training for today.
Chris Churilo 00:35:51.366 Michael, we can’t hear you.
Michael DeSa 00:35:54.852 Thank you. So the question is: “Do you plan to add more built-in anomaly detection algorithms other than holt-winters?” So there’s a little bit of tension—we generally shy away from picking anomaly detection, and it kind of comes down to our kind of belief system about anomaly detection. People usually say that they want anomaly detection. What they’re really saying is: “I just want you to magically solve my problem for me.” And a lot of the information that you can get out of anomaly detection systems isn’t necessarily always super beneficial. So we do support those types of workloads, but we generally recommend that people do them via UDFs and keeping it to something that’s in their own domain rather than having something that we kind of just implement for you that’s a black box anomaly detection. It means a lot of different things to a lot of different people. We have a really good talk actually, that was in one of our InfluxDays, about anomaly detection and kind of our stance on anomaly detection and how we think about anomaly detection. Generally, when we see people talking about anomaly detection, we think that it’s not necessarily the right way to go. Even if you do find something maybe anomalous, you may not be able to do anything about it. It just may be anomalous.
Michael DeSa 00:37:22.685 So we like to focus on things that are actionable, and we feel that not all anomaly detection systems provide data that is actually actionable. And if it’s not actionable, then it’s not quite necessarily useful. And this is something that we’ve kind of seen in the space over the last few years. There was a lot of promise about this anomaly detection and, “This AI will solve my problem of just telling me whenever something was wrong,” and we’ve kind of gradually started to see that that really hasn’t delivered in quite the way that we had imagined it would. There’s a number of vendors out there that will tell you they have some great anomaly detection capabilities, but what we’ve really noticed is it’s not really super necessary. So we probably won’t be adding our own specific anomaly detection. There’s a few of them out there that integrate with our products. But yeah, that’s currently where we’re at. I highly recommend, we’ve got a video on InfluxDays about anomaly detection by Baron Schwartz. Yeah, exactly. Exactly. VividCortex talk is the one that I’m referring to. It’s a great talk, so I highly recommend checking that out. Yep.
Chris Churilo 00:38:56.046 Okay. Well, with that, I think we will end our session today. But if you do have any other questions, please post it to community or send it to me and I’ll post to community and make sure I get it answered for you guys as quickly as possible. And check out those talks. I think they’re definitely worth listening to. You could find them on our YouTube Channel, or you can just go to InfluxDays and I put the links in there for everybody. And with that, I wish you a fantastic day and I hope you’re having fun with our opensource projects. Thanks, everyone.