Installation of InfluxDB Enterprise (Cluster)
Webinar Date: 2019-01-17 08:00:00 (Pacific Time)
In this webinar, you will learn how to install an InfluxDB Enterprise cluster in your own environment.
Watch the Webinar
Watch the webinar “Installation of InfluxDB Enterprise (cluster)” by filling out the form and clicking on the download button on the right. This will open the recording.
Transcript
Here is an unedited transcript of the webinar “Installation of InfluxDB Enterprise (cluster).” This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
Speakers:
• Chris Churilo: Director Product Marketing, InfluxData
• Katy Farmer: Developer Advocate, InfluxData
Chris Churilo 00:00:04.091 Alright. Good morning, good afternoon, everybody. Thank you for joining us on our training today. Usually, I get started at three minutes after the hour. I’m just going to wait another 30 seconds. It looks like people are joining a little bit late this morning. No problem. Today’s training is on installing your InfluxDB Enterprise cluster. My name is Chris Churilo and I work here at InfluxData. We do our trainings every Thursday and we also have a webinar every Tuesday, where we try to review various customer use cases. Thanks for joining us today. Just a couple of housekeeping items. If you have any questions during the training, feel free to put your questions in the chat or the Q and A panel. Either one will do, and if we happen to get a break in the middle of the training today, then we’ll make sure we answer your questions and we’ll definitely get them all answered before the end of the training today. And you don’t have to limit your questions to just InfluxDB Enterprise. Any questions at all, we will try to get to them as best that we can.
Chris Churilo 00:01:05.584 Today’s training is being conducted by Katy Farmer, who’s one of our fantastic developer advocates here in San Francisco. And want to remind everyone, one more thing. I am recording this session, so I’ll post this before the end of the day and you can go back, you can go to the same link and be able to see it, take another listen to it. And if you have any other ideas about other trainings or other webinars that you would like to see us conduct, just shoot me an email. Everyone has my email. It’s the same email that you got with the invite for today’s training, so I’m always happy to oblige. So with that, why don’t we go ahead and get started, Katy.
Katy Farmer 00:01:52.257 We’re installing your Enterprise cluster today. So, hopefully this’ll be useful to you and will get you started with the things that you need. And if you feel like this presentation is missing anything, feel free to let us know. So, we’re just going to start with, you know, what are we and what are we doing here today. So, we are the market-leading time series platform here at InfluxData. We’re built specifically for metrics, events, and other time-based data. We deal specifically with time series data, which is essentially anything where the timestamp is just as important to you as the value of the data itself. So, we offer something we call time to awesome, which basically means you shouldn’t have to spend a lot of time worrying about configuration or how to install. You should be able to start using the tool and get what you need out of it.
Katy Farmer 00:02:53.500 We also think that you should be able to take action while it matters. You don’t really want to know about problems that happened yesterday. You want to know about them while they’re happening, and we’re fast, scalable, and available, which we’ll get into as we move forward. So, we’re just going to do sort of a brief overview here, and we’ll get more into the details of what each product is in a little bit, but we offer an open source core, which is our full stack: Telegraf, InfluxDB, Chronograf, and Kapacitor. Now, a lot of our users use the open source and they’re super happy with it. The difference here is that when you upgrade to Enterprise or InfluxDB Cloud, you can access clustering, you have improved manageability, improved security. This is really good for people who need high availability and who need enhanced security. So, today we’re going to talk about what is InfluxDB Enterprise, what is InfluxDB Cloud, what is an InfluxDB cluster, especially regarding what are meta nodes versus data nodes, what type of hardware you need to run your cluster on, and then we’re going to do a little demo of the installation of the cluster.
Katy Farmer 00:04:31.567 So, what is InfluxDB Enterprise? So, InfluxDB Enterprise is the full set of InfluxData components, which I listed earlier: Telegraf, InfluxDB, Chronograf, and Kapacitor. Some of the features that are available only in Enterprise are the InfluxDB clustering capabilities, and the clustering essentially offers you two things: high availability and scalability. In Enterprise, you can also have Kapacitor clusters. Kapacitor, right now, offers you high availability; scalability is something that we know people want with Kapacitor and it’s something we’re certainly looking into. You also get an enhanced backup and restore option, and “fine-grained auth”, which essentially, in case you’re wondering, because I certainly did when I saw this, “fine-grained auth” means that you can decide who sees what. So you can, for example, if you have different teams who are all sending their CPU usage into the same cluster, you can make sure that teams can only see their own data. Right, so you could put permissions in so that I can only see my own CPU usage, but I can’t see yours. It also has enhanced security. It’s battle-tested by some of our current customers and a lot of our open source users, as well as all the engineers on our team, and it has enhanced production deployment capabilities.
Katy Farmer 00:06:13.550 And InfluxDB Cloud is really similar. It includes all of these Enterprise features, the difference being that it is hosted and managed by my friends here at Influx. So, there would be no on-prem solution, instead a cloud solution that offers the same components. So I just want to make a note here about the time series index (TSI) in InfluxDB. This is something that a lot of our users have wanted for a long time, and it came out in our newest release, which is 1.5, because in the past people would hit high cardinality in their databases and they wanted to know how to scale. And scalability can mean a lot of things, depending on what we’re talking about, right. Scalability sometimes just means more nodes. Sometimes it means better hardware, but in this case, people wanted to scale in terms of the number of series that they had. And that was something that, you know, we couldn’t handle as well as we would have liked for a long time, so now you can scale out with TSI. So it’s not automatically enabled, but it is production-ready. You can go turn it on, and if you want to know more of the details of how it works I’m happy to answer that in the Q&A, but for now, sort of the important piece to know is that it allows you to handle much higher series cardinality, with the trade-off of some increased disk I/O.
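For reference, turning TSI on is a one-line change in the [data] section of the InfluxDB configuration file. This is a minimal sketch, assuming the default Linux paths; existing shards written with the in-memory index can afterwards be converted with the influx_inspect buildtsi tool.

```toml
# Minimal sketch: enable TSI on a data node (default paths assumed)
# in /etc/influxdb/influxdb.conf
[data]
  dir = "/var/lib/influxdb/data"
  wal-dir = "/var/lib/influxdb/wal"
  index-version = "tsi1"   # default is "inmem"; "tsi1" moves the series index to disk
```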
Katy Farmer 00:07:56.476 Alright, so now I’m going to talk to you a little bit about what each piece is and what it does, starting with Telegraf. Telegraf is our agent for collecting and reporting metrics and events. Essentially, it helps us build a pipeline between sources of data, right. So you can see in this example, you can send information from a database into Telegraf and then into another database, like InfluxDB, or into Kapacitor if you wanted to get an alert. You can send just about anything into Telegraf these days. Telegraf is plugin-driven, so we have a huge number of plugins, I think upwards of 140, for different types of agents that we can collect from and report to, and it’s always increasing. We have an amazing team, and also, we have a really amazing community, and they build most of the plugins that are available. So let’s jump to InfluxDB, which is sort of the heart of who we are. It was our first product. It’s a purpose-built time series database. Essentially, our team built this database because they saw a problem that needed to be solved. So, if you’re wondering or asking how it’s different from other databases, the short answer is that it was built specifically with time series data in mind, which means that it’s optimized for a higher number of writes than other databases, it handles ephemeral data better, and it really takes into consideration the time aspect of this data, which other databases can do but not as efficiently.
Katy Farmer 00:09:56.226 Kapacitor is our real-time streaming data processing engine, which is a lot of words in a row, but essentially, Kapacitor can be in charge of a lot of extra tasks that we have, such as any ETL that happens to your data, right. If you want to transform the data somehow, then Kapacitor is a really good place to do that. Kapacitor can handle anomaly detection, machine learning algorithms you want to plug into it, or any user-defined functions that you have. So, you can set up custom functions on your end and have them run and be computed in Kapacitor, and then you have a few options about what you can do with that data once it’s gone through the processing engine. You can send an alert with it, and we have a lot of integrations with different alerting services. You can go to Slack or PagerDuty or a bunch of others. You can just do it by email, or you can, from Kapacitor, send data in, transform it, and then send it back to InfluxDB to be stored again. And, last but not least, we have Chronograf, which is the complete interface for the InfluxData platform, meaning it’s sort of the visual layer. This is the UI. This is where you can go see visualizations of your data, configure settings, add users, and really manage the whole platform from there.
Katy Farmer 00:11:39.092 So, the InfluxData architecture consists of three separate software processes: data nodes, meta nodes, and Chronograf. To run an InfluxDB cluster, you only need the meta and data nodes. Chronograf is an added layer for ease of use, to help you manage and configure and query your data, but it’s not strictly required. But the meta and data nodes are necessary for your setup to work. You can see that, in the diagram on this slide, we have three meta nodes that communicate with each other, and then two data nodes talk to each other and also to the meta nodes. So, if at this point you’re wondering, “What is a meta node?” you are not alone. Meta nodes keep the meta state consistent across the cluster. Information like users, databases, continuous queries, retention policies, that’s data that the cluster needs to know about, but without meta nodes it wouldn’t have a good way of sharing that information accurately. So, meta nodes are in charge of the meta state, which means information that helps the whole cluster be accurate and consistent. You need three meta nodes for high availability. The short explanation for that is that you must always have an odd number of meta nodes. If you’re interested in why that is, again you can ask in the Q&A. It’s maybe not super important for now, but three meta nodes gives you high availability. It means you can afford to sacrifice one and the other two will still be able to manage, but you need at least three.
Katy Farmer 00:13:49.001 So, typically, meta nodes don’t need a large amount of resources. Meta state isn’t a resource hog, and they run pretty efficiently in relation to the data nodes, which handle the actual storage of the time series data. Data nodes respond to queries and they do not participate in consensus. Now, consensus happens in the meta nodes, so the reason that we have an odd number of meta nodes is so that they can reach consensus, which is part of how we determine load balancing. So, you need two data nodes for high availability, which is sometimes obvious, but we’ve seen people use one data node and not totally get why their backup didn’t work, right. But so, two data nodes for high availability means that if one goes down, or is unavailable for some reason, then you still have the second node to query and store data. Typically, data nodes need a large number of resources, mostly because storing and querying data tends to be a more expensive use of your resources.
Katy Farmer 00:15:21.306 Chronograf handles user management, pre-built dashboards, custom dashboards, database management, and retention policy management. Now, you can do a lot of these things through the Influx command line, with the exception of the dashboards, but Chronograf is there to just sort of make things easier. I, for example, am usually more comfortable in the command line than I am with visualization tools, but that’s not true for a lot of people. And, you know, developers might be more comfortable in the command line, but managers or executives or other people who just want to go in and view the data might be more comfortable in the UI layer. So Chronograf just gives you an easy way to access some of these things. And the pre-built dashboards, I just want to touch on before we move on. A lot of configurations come with pre-built dashboards, so for example, if you were sending data from MySQL and you were using the Telegraf MySQL plugin and sending information to InfluxDB that way, then when you went to Chronograf, you could have some pre-built MySQL dashboards that would tell you the reads and writes per second, you know, active use, that sort of thing.
Katy Farmer 00:16:50.869 We try to build in as many pre-built dashboards as we can, so that you can get to your data as quickly as possible. So a complete InfluxDB Enterprise installation should include a dedicated InfluxDB instance that monitors the InfluxDB cluster. So you need a separate InfluxDB instance that tracks your cluster, with Telegraf on each node and Chronograf for visualization. Because we’re all about visibility into what the cluster is doing, right? So if you just set up your cluster and start letting it do its thing without monitoring it, then that often means that you don’t know when something has gone wrong in the cluster, and neither do we, right? So we want to make sure that we have visibility into what those clusters are doing, and the way to do that is through this separate InfluxDB instance. And then, you can just go into Chronograf and easily see that everything is working as it should be.
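As a rough sketch of what that monitoring pipeline can look like, here is a minimal Telegraf configuration you could run on each cluster node. The hostname monitor.example.com and the database name cluster_monitoring are placeholders for your dedicated monitoring instance.

```toml
# telegraf.conf on each cluster node (sketch): ship node metrics to the
# dedicated monitoring InfluxDB instance, not into the cluster itself.
# "monitor.example.com" and "cluster_monitoring" are placeholder names.
[[outputs.influxdb]]
  urls = ["http://monitor.example.com:8086"]
  database = "cluster_monitoring"

[[inputs.cpu]]
[[inputs.mem]]
[[inputs.disk]]
[[inputs.diskio]]
```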
Katy Farmer 00:18:08.983 So, some general advice for your clusters is to put a load balancer in front of your data nodes, so that queries are spread across each node in the cluster. Essentially, this makes sure that one node doesn’t have a really slow latency because it’s doing all of the work while the other nodes are doing nothing, right. The load balancer makes sure that the load is spread evenly, so that the time they spend on each task is more reasonable. And something else to note is that higher replication factors result in lower query latency, but higher write latency. So we haven’t touched on replication factors quite yet, but we are going to, and it’s important to know right now that how you set up your data and configure your data is really important to how performant your cluster is going to be, right. So, we’ll get to replication factors in a minute, but right now let’s talk about hardware recommendations. For data nodes you need four CPU cores minimum, 16 gigs of RAM minimum. Like we said, they need more resources because they handle the data storage and retrieval. The meta nodes require one CPU core, two gigs of RAM, and again, you need to have an odd number of meta nodes to reach consensus.
Katy Farmer 00:19:56.515 Replication factor and anti-entropy. So, in your cluster, all of your data must go to a shard or shard group, and a replication factor is what determines how many—sorry, I want to make sure I say this right for you all, because I have messed it up before. That’s okay, you could all come with me to my speaker notes for a second. So replication factor determines how many copies of any one shard should exist. So essentially, what this helps you with is saying, if you just want one copy of it, maybe you’re okay with the idea that something happens to it and it’s not available, right. But if you need high availability all the time, then that might mean, for you, a higher replication factor. You want more copies of your data so that it’s never unavailable to you. This also introduces this concept that’s relatively new to, I think, 1.5 or maybe a little bit earlier, called anti-entropy. So, anti-entropy makes sure that there is accuracy between the replicated shards, and if they’re not equivalent or accurate, then anti-entropy goes about repairing that. And then we reach what is referred to as eventual consistency. So maybe you’ve heard that term kind of tossed around. That’s what that means, is that we have checks and balances in place to make sure that your data is eventually consistent.
Katy Farmer 00:22:12.779 So when you’re creating a database, you can set up the replication factor fairly easily. You would, like, create database, mydb, with replication two. So that would automatically give you a replication factor of two, meaning if one was unavailable or the server was down or something, then you would have a shard somewhere else with the same information on it. When you’re creating a retention policy, you can do the same thing. Create retention policy, myrp, on mydb. Duration, one hour. Replication, one. So let’s go over the pieces of this. In the first section, this is just create retention policy, right. This would be what it looked like in the command line tool. Myrp is the name of our retention policy. Mydb is the name of our database. Duration one hour is the retention policy duration, so how long do we want to keep this data? One hour. And then, replication one is the replication factor. In this case, replication one means that we’re fine with just one copy of this data.
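Written out as InfluxQL, the two statements being described look roughly like this, run from the influx CLI; mydb and myrp are the example names used above.

```sql
-- Create a database whose default retention policy keeps two copies of each shard
CREATE DATABASE mydb WITH REPLICATION 2

-- Create a retention policy that keeps one copy of the data for one hour
CREATE RETENTION POLICY myrp ON mydb DURATION 1h REPLICATION 1
```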
Katy Farmer 00:23:47.452 So, now we’re back to this piece of advice. Higher replication factors result in lower query latency, but higher write latency. So when we have higher replication factors, so let’s imagine that here we have replication two. Right, we have two copies of our data. That means that when we go to query our data, there’s a really good chance that we’re going to get it quickly because we have it in two places. If one is unavailable for some reason, we can just go to the other one. So that means that our time in retrieving the data is going to be lower, but it does equal a higher write latency because every time I write data to mydb, it has a replication factor of two, so it has to write each data point twice. So what we mean here is that the higher the replication factor, the longer it will take to write data because you have to write it more times. You have to write it the same number of times as your replication factor. So, if my replication factor is 10, then I have to write the same data 10 times, but when I go to retrieve that data, my chances of finding it are really good because I have it stored in so many places. So, hopefully you can start to see the use case for this is that there’s a trade-off here, and the trade-off is that you’re okay with it taking a little longer to write because you always need it to be available for reads.
Katy Farmer 00:25:45.774 So, how do we set up a two data node cluster? The first step is to get a license key, and we start five machines: three meta instances and two data instances. This five-machine setup is sort of the minimum high-availability setup. For the meta instances, you download the package, configure the nodes, start the nodes, and then join the nodes. For the data instances, you download the package, configure the nodes, start the nodes, and add the nodes. And right now, it’s okay if you don’t totally know what that means. We’re going to go through it here in a minute. So, I want to touch on Enterprise Kapacitor. So I actually wasn’t sure for a while what Enterprise Kapacitor was offering that the open source one wasn’t. I’m relatively new to Influx, and sometimes it’s easier to just assume there is some, like, engineering magic that you don’t understand, but someone recently explained it to me in a way that I thought made a ton of sense. So it’s not that you necessarily have features in Kapacitor that you don’t have in the open source. What you do have is high availability.
Katy Farmer 00:27:24.685 So if you have data streaming to two instances of Kapacitor, there’s a couple things you need to know. Those Kapacitor nodes need to be configured the same. They need to have the same alert rules and the same setup entirely. But, they will only trigger one event. So, let me give you an example. I want an alert every time my CPU usage is above 80% on my server. I don’t want it to, you know, crash or melt or start a fire, I don’t know. If I have two Kapacitor nodes, both waiting for that, they both have the same rule. They both know that I want to know when my usage is over 80%. When my CPU usage hits 80% and my alert is triggered, both nodes say, “Uh-oh, it’s time to send this alert,” but I only get one alert. I’m not going to get one from each node. So if you had four Kapacitor nodes, you wouldn’t want four alerts. You would just want one alert for, hey, your CPU usage is over 80%. So, it de-duplicates the data on the output side. So only one event is triggered from the set, but it is important that the set of Kapacitor nodes are configured exactly the same way. So we are going to head into a bit of a demo now. Give me one second, and we will start it up.
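For reference, an alert rule like the one in that example could look roughly like the following TICKscript. This is a minimal sketch that assumes the standard Telegraf cpu measurement in a database named telegraf and a configured Slack handler; you would load the identical script onto every Kapacitor node in the set so the cluster can de-duplicate the alert.

```js
// Sketch: alert when CPU usage goes above 80% (i.e. usage_idle drops below 20%).
// Assumes the default Telegraf "cpu" measurement in a database called "telegraf".
stream
    |from()
        .database('telegraf')
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 20)
        .message('CPU usage is over 80% on host {{ index .Tags "host" }}')
        .slack()
```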
Katy Farmer 00:29:44.003 So I’m going to do things a tiny bit differently on my machine today. I am building from source, which just means I have access to the codebase. You all will get a binary that you can download and install. So that’s a little bit different, but basically the difference is just how we start up this process. So, I’m going to use a command now that starts up my Influx server. So you can see, it’s not really important where this path lives—again, that’s specific to me building from source—but influxd, this command at the end here, is the important piece for you. Influxd is the command that will start our server. So now you can see it running, you get this beautiful ASCII art. So that’s the first thing we want to do, is just start the Influx service. Now, when you install from the binary, you will probably start InfluxDB as a service, rather than how I’m starting it here. And depending on your setup, that could mean starting it with Brew, or starting it with whatever package manager or service manager you have. So I’m going to open a new tab here, and the first thing I want to do is figure out, like, what do I want to do first, right. And the first thing I want to do is configure my meta node. So, I have typed influxd-meta config. The commands should always be, you know, pretty descriptive of what it is they do. So there are just a couple pieces here that are important for us to know. The bind address and the hostname, you’ll have to fill out, and then under the Enterprise section, you need to fill out this registration information. So, if you don’t fill out your license key and license path—or, sorry, not both. One or the other. If you don’t fill out your license key or your license path, then you won’t be able to access your copy of Enterprise.
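As a rough sketch of the fields called out here, the relevant parts of the meta node configuration (typically /etc/influxdb/influxdb-meta.conf) look something like the following. The hostname and license values are placeholders, and keys can vary slightly between versions, so check the output of influxd-meta config on your own install.

```toml
# Sketch of the meta node config — only the fields discussed above.
# "meta-node-1" and the license values are placeholders.
hostname = "meta-node-1"                          # must be resolvable by the other nodes

[enterprise]
  license-key = "xxxx-xxxx-xxxx"                  # set either license-key ...
  # license-path = "/etc/influxdb/license.json"   # ... or license-path, not both

[meta]
  dir = "/var/lib/influxdb/meta"
  bind-address = ":8089"                          # intra-cluster (raft) communication
  http-bind-address = ":8091"                     # HTTP API used by influxd-ctl
```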
Katy Farmer 00:32:28.818 Everything in the meta section has a default that should be good for you. You know, there are definitely situations where you’re going to want to configure it, but most of the time it’s fine to start with the defaults and you shouldn’t have to worry about it. So again, when you set up this config, the important thing that you want to notice is that you need to set up the bind address, the hostname, and then the license key and the server URL. So, this Enterprise section and this bind address and hostname. So, the next thing I’m going to do is type influxd-meta, and what this does is, it starts my meta node. So you can see it running here, again great ASCII art, and the M here should tell you meta. So we’re going to leave that running and open up another window, even. So now we know that that’s running, we’re going to do influxd-ctl, okay. And because I didn’t type any options after this, what happens is it shows me, like, what do you think that you want right now? Here are all the available commands. You can add-data, which means add a data node, add-meta, add a meta node. Here are the backups, copy shard, join, remove a data node, remove a meta node. So, you can see all of the options that you have here. So influxd-ctl is our control tool—sorry, when you see “ctl”, most people will say “cuddle”, and also it’s fun, because shouldn’t there always just be more cuddling? So influxd-ctl is the command that we care about here, and if you type it with no arguments afterward it will prompt you with some help.
Katy Farmer 00:34:57.335 So what we want to do right now is add a meta node, so let’s see what happens when I just type influxd-ctl add-meta. It says, “HTTP address value is empty.” Now, that’s because add-meta takes an argument, which is the address of the meta node. So let me just go up, I’m just going to go up through my history. So add-meta, and then you say localhost:8091, that’s where my meta node lives. 8091 is the default port, and you can see when I hit enter, it says, “added meta node 1 at localhost:8091,” so it’s confirmed that I did the thing which is always good to pay attention to. So influxd-ctl, add-meta, and then the address of where that meta node lives, which in my case is localhost 8091. So we’ve added it, now we need to join it to the cluster. So I’m going to do the same thing. I’m going to type influxd-ctl, join. And then we’re going to see, what does it want from me? So it says, “Joining meta node,” “Searching for the meta node,” “Successfully created cluster.” Great, and if you see some extra stuff here, well that’s because I have done some previous joining and removing. But again, we need to start our meta node, and once our meta node is started, we need to add it, and then it needs to join the cluster. So right now, I’m only setting up a single meta node, but keep in mind that you would have to do this same process for all three of your meta nodes. Three is your minimum number of meta nodes, and so you would do the same process for each of those nodes. But I’m just going to show you a single meta node and a single data node, but the process would be exactly the same for two data nodes or for three meta nodes.
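Condensed, the meta node steps from the demo are just these two commands once the meta service is running; 8091 is the default meta node HTTP port, and localhost would be the meta node’s own hostname in a real multi-machine cluster. You would repeat this for each of your three meta nodes.

```bash
# Register the meta node, then join it to (or create) the cluster
influxd-ctl add-meta localhost:8091
influxd-ctl join
```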
Katy Farmer 00:37:40.540 So let’s try adding a data node. Again, I’m going through my history of commands here. So, if we want to add a data node, the process is really similar. influxd-ctl add-data, and then the address, again, of where that data node lives. And the default for the data nodes, the default port, is 8088. So I’m going to hit enter and see what happens. Oh, it says it added it. Great. Added data node at localhost:8088. And we’re getting some confirmation here, but we’re not, maybe, sure if what we want to happen has happened, so we can do influx—oh, I forgot the D. Don’t forget that. influxd-ctl show. And now I can see a list of the data nodes and meta nodes that are currently configured and set up in my cluster. So, right now I can see this single data node and this meta node, but we still need to join the data node that we just added. So let’s see what happens. We hit join again, and now it has added my meta node and my data node that I just created. Up here I added the data node at the default address, which is localhost:8088, and then I did influxd-ctl join. So that way, I have joined all of my meta and data nodes into my cluster. Now remember that if you had one meta node and one data node, this would not be a functioning high-availability setup. You need a minimum of three meta nodes and two data nodes for high availability.
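The data node side follows the same pattern; 8088 is the default data node TCP port, and again localhost stands in for the data node’s hostname. You would repeat the add-data step for each of your two data nodes.

```bash
# Register the data node, check cluster membership, then join
influxd-ctl add-data localhost:8088
influxd-ctl show
influxd-ctl join
```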
Katy Farmer 00:40:08.356 So now that we’ve done that, I also want to show you that you can do influxd-ctl -h. This is for when you’re like, “I don’t remember how to do what I want to do and I need help.” So, when you type influxd-ctl -h, it gives you all of the available commands and options. So, available commands are adding data nodes, adding meta nodes, making backups, copying shards, you know, joining, removing, doing a restore. Show is how we show all the members of our cluster. We can do show-shards. Let’s try that. Let’s try that right now because, frankly, I’m not sure what it shows. So let’s go influxd-ctl show-shards. Cool, okay. So here I can see the IDs, I can see the name of the retention policy, the name of the database, the number of desired replicas, the shard group it lives in, when it expires, and who’s the owner, so that’s pretty convenient. So those are shards that, I’m going to be honest, I didn’t even know I had on my little local machine right here. The most important thing for you to remember here is not step-by-step how to do this, but knowing how to access this help menu is really helpful. influxd-ctl -h gives you all of the available commands.
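For quick reference, the three inspection commands used here are the following; this is a sketch, run from a machine that can reach a meta node.

```bash
influxd-ctl -h            # list all available influxd-ctl commands and options
influxd-ctl show          # list the meta and data nodes currently in the cluster
influxd-ctl show-shards   # shard IDs, database, retention policy, replicas, shard group, expiry, owners
```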
Katy Farmer 00:41:58.114 And I think I’m going to pause there for some questions. [inaudible]
Chris Churilo 00:42:17.890 Alright, so we actually had a number of questions during the presentation, so I’m just going to read some of those out loud, in case people missed the conversations happening in the chat, and maybe this will also spark some other questions from everybody here. So, I think there was a little bit of confusion about what’s included in InfluxDB Enterprise and InfluxDB Cloud, and what is optional, what is not. So can you just jump back to the—
Katy Farmer 00:42:47.407 Yeah, of course.
Chris Churilo 00:42:48.622 —to that slide. So the key difference between—there we go. Perfect. So the key difference between InfluxDB Enterprise and Cloud and our open source projects is that if you want HA or some of the other features that Katy mentioned, clustering, manageability, security, those capabilities are all tied to the database and actually, also, Kapacitor. So within InfluxDB Enterprise and Cloud, you can use all four projects, Chronograf, InfluxDB, Kapacitor, and Telegraf, but the features that we’ve outlined up there are actually part of the Enterprise versions of InfluxDB and Kapacitor. And if you only want to use InfluxDB, that’s not a problem for any of the three options, open source or the commercial offerings, and we price our commercial offerings by the node, which makes sense when you talk about a database or with Kapacitor as well, because that’s also related to data. Telegraf, there’s no price tag at all with Telegraf. You can use it with the open source core projects or with the commercial offerings. Chronograf is also optional, and if you would prefer to write directly into InfluxDB, you can avoid using Telegraf completely. I’ve listed out a number of different options for how to do that. You can write directly into it, you can write via the HTTP API, you can write via the various client libraries that we have.
Chris Churilo 00:44:25.054 In addition, if you want to have your own UI, you can definitely avoid using Chronograf if you want to. You can write your own UI. We have a lot of customers that actually do that. They may have a smartphone app or a particular dashboard that they’ve created to do the query of the data in InfluxDB. That’s not a problem. If you want to use Grafana, that’s also an option. And if you don’t even want to use Kapacitor as well, that’s also okay. So, you can also just use InfluxDB. So we’ve broken these projects out into these four different pieces so that we can offer you the flexibility, but I wanted to make sure that everyone on the call understood that because we had a couple of questions. Let me see. So, we don’t have any other questions in the Q&A or the chat, but we’re going to leave the lines open for a few more minutes so if you have any other questions—and it can be about anything, doesn’t have to be just about InfluxDB Enterprise—feel free to put those questions in there. We also have a community site where you can also answer—or, answer? Get your questions answered. Let me just throw that link in there for everybody. There we go.
Chris Churilo 00:45:48.424 And a lot of our developers and Katy and the other developer evangelists also sit there looking for any unanswered questions to make sure we get them answered for you. Alternatively, there’s also, as I mentioned, a number of webinars where we review various case studies, where users will share with our audience their implementations. And you’ll notice that there’s a lot of variance in their architectures, using all the components of the TICK Stack, using only InfluxDB, there’s a lot of different configurations that our users have opted to use, so I recommend checking out some of those webinars, as well.
Katy Farmer 00:46:32.549 I think that’s one of the things people really like about what we have to offer, is that it’s kind of modular. I like to think of it like LEGO pieces. You know, you can use one or four or two. You can use them in so many different configurations that it’s hard for us to cover them all, so I’m sorry if that was not clear when I did it the first time, but yeah, you can totally use just Telegraf or just InfluxDB. We don’t force you to use any piece that you don’t want to, if you don’t need it. And like Chris said, there are a lot of good options for the visualization. If you want to use Chronograf, or write your own, or if you have some other dashboarding tool that you just love and know how to use, then by all means. We’re not proprietary in that way.
Chris Churilo 00:47:23.542 Okay, so like I said, we’ll just leave the line open for maybe two more minutes, see if we get any other questions. And just want to remind everyone, I will post the recording later on today, so you can come back and take another listen to this. I also listed, in the chat panel, the pertinent documentation for InfluxDB Enterprise, so you can also take a look at that to understand how to install and also configure, and then use some of these other features as well.
Katy Farmer 00:47:54.851 Yeah, and while we’re waiting for more questions, I just want to mention that we are sponsoring meetups now. So we have a time series meetup in San Francisco, Denver, New York, and we’re about to start one in Boston. So if you’re interested in starting one up where you are or attending one, then let us know and we’re happy to help. I organize the one in San Francisco, so if you’re in the Bay Area you can always stop by and chat with me. In May, we’re having our VP of Products talk about the future of Chronograf and what that looks like, so it’s going to be a good option if you want to bring some of your questions.
Chris Churilo 00:48:49.841 Okay, looks like we don’t have any more questions, so we will conclude our training today. Thank you so much for joining—Oops, as I said that, Simon asked, “How do I download the slides?” Simon, I’ll post them to SlideShare. Let me just get the link now, so you can go back. So I’ll post them after this. Just give me a second.
[silence]
Chris Churilo 00:49:57.268 Okay, here are the links, and I’ll put this latest one in, in a few minutes. Any uses with both Telegraf—Okay, so Simon, let me just—so the slides will be up on SlideShare five minutes or so after we conclude this. And then a follow-up question from Sean is, “Any uses with both Telegraf and InfluxDB use cases?” Yeah, so we do have a couple, and let me just get those links and then share them with everybody. That’ll be the easiest. So, I recommend watching the NewVoiceMedia video. Jack does a really good job of sharing his experience with using various configurations of InfluxDB and Telegraf. And also another one that’s worth looking at is Coupa. And then, the third one worth looking at that’s slightly different…let me just pull the third one up. This way you can hear from other users what their infrastructure looks like. There you go, there’s three different use cases there that you guys can take a look at. So, BBOXX—Oops, that’s Coupa. BBOXX is the one. BBOXX is the third one. So we’ve got Coupa, BBOXX, and NewVoiceMedia. I recommend that you guys take a look at those. All three have different data flow architectures, so you can really see the flexibility of this solution.
Chris Churilo 00:51:54.262 Okay, one more time, any other questions? Okay, awesome. Great. Thank you so much for joining us, and we’ll see you again next week.