Kubernetes is a powerful system for managing containerized applications in a clustered environment and offers a better way of managing related, distributed components across a varied infrastructure. In this webinar, Jason Dobies from Red Hat OpenShift and Gunnar Aasen from InfluxData will demonstrate how to use InfluxData to orchestrate OpenShift container scaling based on any metric collected about the application running inside. In addition, they will show you how to use InfluxData as the long-term store for all metrics collected with Prometheus to fulfill historical analysis (forecasting) and audit compliance.
Watch the webinar “Use InfluxDB to Orchestrate Your OpenShift Containers to Keep Your Apps Performing” by filling out the form and clicking on the download button on the right. This will open the recording.
Here is a lightly edited transcript of the webinar “Use InfluxDB to Orchestrate Your OpenShift Containers to Keep Your Apps Performing”. This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript has been cleaned up for readability. We apologize for any remaining transcribing errors.
Chris Churilo 00:00:05.161 All right. Good morning, good afternoon, everybody. Thanks for joining us today in our webinar. We have a great topic today where we're going to review container orchestration with InfluxDB and Red Hat OpenShift. I'm just going to cover some housekeeping items really quickly. If you do have any questions, feel free to post your questions in either the Q&A or the chat panel, and we'll definitely get those answered either in the break, during the presentation, or definitely at the end of the presentation. We're recording the session and I'll post this before the end of the day, and the automated email should hit your inbox tomorrow morning so you can take another listen to this. And that's it; those are our general housekeeping items. So with that, we'll go ahead and introduce our speakers today. Today we have Jason Dobies from Red Hat and Gunnar Aasen from InfluxData, and I'll let them both introduce themselves and take it away. So with that I will take the guys off of—
Jason Dobies 00:01:09.718 Gunnar, do you want to introduce yourself first?
Gunnar Aasen 00:01:12.082 Yeah. Hi, everyone. My name is Gunnar and I run Partner Engineering at InfluxData, so I bring deep engineering expertise in the InfluxDB stack.
Jason Dobies 00:01:25.127 My name is Jason Dobies. I work in the OpenShift business unit at Red Hat and I handle the technical strategy for our ISV partners. And with that, let's get started. So first we're going to begin with me talking a little about OpenShift, what the problem space is, what we're trying to solve here. I'll keep this fairly quick. I can normally go a little bit further down the rabbit hole on this particular area, but let's start very high level. What is it that we're trying to solve and what brought us to this point? It really ultimately comes down to everyone who needs to deploy their applications faster and more reliably. What previously used to be acceptable, using even something as recent as the agile methodology where we would look at three-week sprints, is no longer fast enough. We need a much faster turnaround time for getting application builds deployed, with as minimal downtime as possible. And it has become a real problem, I'm sure you know, over the past couple of years, that everyone has been trying to solve in their own way. One of the building blocks toward this solution is the rise of container technologies. The idea being we can package our application in a very lightweight, very portable format, run that through our entire CI/CD pipeline and then get it deployed, again, with relatively fast turnaround time, with low churn, with consistent environments being seen from development up through QA and production. So it's important to realize containers are one piece of it; the rest of this talk I'm going to build that story on top of them. But before I go on, there are two really different views to take on containers. From an infrastructure standpoint, you can think of them as similar to virtual machines in what they're trying to accomplish: a largely self-sufficient, encapsulated runtime environment, but significantly lighter weight.
Whereas you used to think in terms of a couple of dozen virtual machines running on a host, now we're talking in terms of hundreds to thousands of containers being able to run concurrently. And they do this by not having an emulated hardware layer, like the hypervisor that provides virtual machines; instead they simply run on the Linux kernel itself as processes. There's a lot that can be said about that, but I'm going to leave it fairly high level there. I'll just point out that they do ultimately run in the host kernel, but sectioned off in terms of being able to keep multi-tenancy security concerns in place.
Jason Dobies 00:04:02.633 Now, from the application standpoint, or from your developer's standpoint, it's a means for packaging your entire application up. And that means all of its dependencies, all of its environment needs, getting all of that into a single thing that can then be shared and moved around between different environments. Again, virtual machines had the idea of being portable; it didn't quite pan out that way. Containers are significantly better in this approach. You will download some kind of image that contains all of the information your container needs to run. And, again, that's everything from a very lightweight base operating system up to the specific environment and tooling. It's a very easy way to have something—I can throw an example out there with Python. Multiple versions of Python are in the wild today being used in production, with the big distinction between Python 2 and Python 3, obviously extremely different environments. They can coexist very easily on the same host by having all of that packaged inside of a container, and all of it stays encapsulated inside of the running container. So our DevOps picture, this sounds great, right? We have these running containers; we can easily move them around to different hosts and get them deployed. So on paper this looks like a great solution; it seems to solve all of our needs. The reality of the situation is a bit different. Because containers are so lightweight, they're leading to more of a microservices architecture: splitting up your giant monolithic application into these very small parts. Now, again, these small parts ultimately go toward our end goal of rapid deployment: being able to just ship updates to the individual services that we're updating, or to push out a security release, things like that.
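[Editor's note: to make the packaging idea above concrete, a container image like the Python example is typically described by a Dockerfile. A minimal, hypothetical sketch follows; the app and file names are illustrative and not from the webinar.]

```dockerfile
# Pin the Python 3 runtime inside the image, so this app can coexist
# with Python 2 apps running on the same host in their own containers.
FROM python:3.6-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Everything the service needs, from the base operating system up, travels with the image.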
But the reality of applications now is, instead of having a single container that runs our application, we may have multiple—we may have 5, we may have 10, we may have 70 different containers that all compose our application but can be deployed and scaled individually, so they may find themselves on different hosts. They're going to have different networking and different storage requirements. They're going to have different interactions between those services, as your frontend is going to talk to your various backend systems, but we shouldn't be able to access those backend systems externally, and so on and so forth. So the reality is, as much as containers are a great building block toward this, we need more than that. We need some extra features on top. We need scheduling: we need something to handle when and where to deploy our containers for us. It doesn't scale well at all for us to manually start taking containers and placing them in various data centers.
Jason Dobies 00:06:49.989 Lifecycle and health become extremely important, because instead of monitoring our larger, more monolithic applications, now we have all of these different pieces running in different areas, in different environments, and getting that full picture is more difficult than ever. Obviously security remains a large issue, as it always has: the ability to look at all of these running containers and understand which ones may be susceptible to a new CVE that's been released, or how do I go and deploy the fixes, and how would that affect all of the applicable containers? Scaling comes into play when you hear the term web scale. A lot of these cloud-native applications are meant to suit extremely large-scale use cases. So you have to have the ability to say, out of my 10 services, these 3 in particular need to be scaled up, either permanently because they're more necessary, or simply to respond to a particular load increase for whatever reason. And you need to be able to very dynamically, very simply say, hey, I'm going to scale these couple of services up, make sure they're still using the same data store on the backend, but make them capable of handling whatever spikes you may run into. And then data on the backend: we need a persistence layer underneath as well. Containers are ephemeral. I start one up, I make some changes or I run some applications in it, but it won't save the data back into the container and into the image. So the next time I spawn a container off of that image, or if I scale it up, I lose all of that state. So we do need some kind of persistence layer to be able to attach storage that's going to be nonvolatile and is going to last through all of these operations that we take. So what we find is, on top of containers, we need some sort of orchestration engine. Now, a couple of years ago, OpenShift version 2 had similar ideas of encapsulating the application and providing these add-on services.
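[Editor's note: the scaling scenario Jason describes, scaling a few of your services in response to load, maps onto Kubernetes' Horizontal Pod Autoscaler. A minimal sketch follows; the service name is illustrative, and this sketch uses the built-in CPU target, whereas the webinar later discusses driving scaling from arbitrary metrics via InfluxDB.]

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend              # hypothetical service name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend            # the deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # add replicas when average CPU exceeds 80%
```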
We looked around at the environment, or the industry I should say, and realized that containers were taking off as the more standardized or adopted version of this technology. Kubernetes had just been released. It is an open-source orchestration engine for handling largely the scheduling of containers and the scaling needs that we have. So in OpenShift version 3, we rewrote it, rebased everything on top of Kubernetes, and built on top of it, adding the extra features that we need outside of what just base Kubernetes offers. But I want to mention this to start with: we've now gone from just the individual imaging technology for containers up to an orchestration engine.
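[Editor's note: as a concrete example of handing scheduling over to Kubernetes, a Deployment declares the desired image and replica count, and the scheduler decides when and where those containers run. A minimal hypothetical manifest; names and registry are illustrative.]

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3                 # Kubernetes keeps 3 copies running, placed across the cluster
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: registry.example.com/hello-app:1.0   # placeholder image reference
        ports:
        - containerPort: 8080
```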
Jason Dobies 00:09:37.263 So the idea is we start with our developer, and our DevOps flow looks fairly standard up to a point. We have our developer, they work with the version control system, which ultimately feeds into some sort of CI/CD system, and then the images pop out. At this point we're talking about container images in the Docker format. And then Kubernetes comes in and decides when and where it's going to deploy these containers across your data center. The thing is, Kubernetes is not quite enough by itself. It's answering the orchestration needs, but we have more needs than that. We need networking capabilities. We need to be able to segment off networks for multi-tenant purposes so that certain applications cannot see other applications. We may want, within a single application, to say which services can talk to one another. So we needed an advanced networking layer that we can plug into when necessary to provide different options, one that's going to allow us to configure the network much more than the flat, everything-is-visible-to-the-world network that Kubernetes provides. We also need an image registry. Kubernetes by itself doesn't provide any mechanism for hosting or supplying images. Yes, we can obviously get them out of Docker Hub, but many enterprises are finding the need and the desire to host their own registry for whatever purposes: for security purposes, for the ability to plug in a security scanner to keep an eye on the images and make sure there are no CVEs cropping up, or for simple load issues where we don't want to go out to the Internet every time we pull these images; we want our CI/CD system to feed directly into a registry that we control. Still not quite enough. We need metrics and logging as well. Like I said earlier, we need to be able to understand, hey, this is our application, this is how it's broken out, how is it functioning?
We can't just look at a single service in the entire stack. We need to understand all of them and how they relate to each other. And like I said, it's becoming even more important as we start to break things up across the data center and we start to scale each service individually. Still not quite enough. We have a fairly strict need for managing these deployments in terms of, if I have all of these services, they have to get rolled out in a particular order or be upgraded in a particular order. And we need to handle those relationships so we maintain a zero-downtime SLA.
Jason Dobies 00:12:15.071 We need full-blown lifecycle management. So this promotion of an image from dev to [inaudible] to production environments: we want the orchestration platform to handle that for us. That's a lot of manual work that people shouldn't need to go through on their own; it should be as simple as possible to facilitate that. On top of that, we need the application services. We should provide these as part of our framework, or part of our platform, to our application developers. So if they need access to a database or they need access to a message queue, it should all be supported easily and deployable inside of the platform as well. We need some kind of self-service portal. If we're trying to achieve the rapid delivery and speed that the industry is moving toward, we can't be waiting on filing an IT ticket and getting an answer two or three days later before we get access to the resources. Developers nowadays need a self-service portal where they can get the resources they need, provision these application services, and deploy all of the necessary pieces of their applications for testing. So what we see is, this is where OpenShift fits in. We built up from containers to an orchestration engine; we need a full-blown platform, and OpenShift provides all the pieces we built up to on the previous slides. So, again, this is a very high-level overview, which is to state that we take our container images, and OpenShift then handles all of the lifecycle for them, including things like configuring networking and providing security benefits on top of that, as well as a number of other topics we covered. So that's an extremely high-level overview of OpenShift and, again, if we have time at the end, we can get into some more specific questions. But I'm going to hand it off to Gunnar now to start to dig into the InfluxData integrations.
Gunnar Aasen 00:14:09.086 Great. Thanks, Jason. Let me just pull up the slides real quick.
Gunnar Aasen 00:14:29.446 All right. So obviously OpenShift is a great opportunity to basically re-architect and make things that previously weren't possible or easy much easier, and also to relieve a lot of the burden of everyday tasks that previously might have taken dedicated, trained people to manage and work with. And I want to talk about one of the issues that comes up with using containers and these more automated systems like OpenShift, which people find is a huge issue: figuring out when things are breaking and keeping track of the system when there are many more moving components behind it. And so with that I just want to introduce the concept of time series and why that's relevant in an OpenShift world. If you're not familiar with what time series are, essentially they're just any kind of timestamped values. Typically you see this in server monitoring, application monitoring, stuff like that. But essentially, when you're dealing with a lot of containers in a system where there are a bunch of moving components, like in OpenShift, you start to generate a lot of data. And this data starts to come in all at once and is sometimes hard to handle, especially if you're starting to scale up to thousands of containers and, say, have a fairly heavy usage of OpenShift. And so basically there are a couple of unique things about time series, and if you've ever used any kind of server monitoring or DevOps tool, you're probably familiar with a typical graph of CPU or memory or anything like that over a past time period.
The graphs themselves aren't too hard to render and store the data for, but once you start to get to a certain scale, which is easy to do when you're working with containers in OpenShift versus, let's say, individual servers in data centers, you start to accrue a bunch of stats. And this is especially important for monitoring from the low-level infrastructure side of things, all the way up to your applications and even your build pipeline. It's very important to start to understand where things are getting held up and where problems are occurring, because it becomes harder to track this stuff down when you're looking at something that's much more virtualized in terms of the workload. It could be occurring on any number of nodes; it could be a slow disk somewhere backing up a bunch of different things. And if you don't have the metrics to be able to look into that and dig into it, it can be hard to operate these kinds of systems.
Gunnar Aasen 00:18:40.630 And so that's why we at InfluxData built InfluxDB, and InfluxDB is basically a database for time series data. We've also built out a whole platform around it, which I'll explain in just a minute. But basically InfluxDB handles this time series workload incredibly well. It scales very, very well. It's open-source. It's very easy to use. One of the key design guidelines of InfluxDB is that it's trying to reduce the time to awesome; it's just trying to get you up and running and productive in your environment, which is something that OpenShift does very well too. So if you look at InfluxDB in the context of the InfluxData platform, which I'll explain in a second, basically when you're working with time series data, typically you're taking a bunch of data from various different sources. So whether that would be your pods and your OpenShift containers, or maybe some on-prem server somewhere, or monitoring [inaudible] pipeline backup and stuff like that, you can have time series data being generated from a bunch of different sources, and you want to ingest that. And then obviously, if you're ingesting that, you want to do something with that data, and the real value comes from being able to take that data, visualize it, eventually figure out what you care about and alert on it, and then eventually automate responses to issues as they occur or alert people that there are issues occurring. So that makes both your infrastructure and your applications easier to manage. So at InfluxData, we have basically this open-source core down here. We call this the TICK Stack. So that's T-I-C-K. It comprises a platform upon which you do collection with Telegraf, ingestion and storage with InfluxDB, visualization with Chronograf, and alerting and processing with Kapacitor.
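[Editor's note: to make the ingestion step concrete, InfluxDB accepts points in a plain-text format called line protocol: `measurement,tags fields timestamp`. The sketch below, which uses no client library, shows how a point could be serialized; the measurement and tag names are illustrative.]

```python
import time

def to_line_protocol(measurement, tags, fields, timestamp_ns=None):
    """Serialize one point into InfluxDB line protocol:
    measurement,tag1=v1 field1=v1 <nanosecond timestamp>"""
    tag_str = "".join(f",{k}={v}" for k, v in sorted(tags.items()))

    def fmt(value):
        if isinstance(value, bool):   # check bool before int (bool subclasses int)
            return "true" if value else "false"
        if isinstance(value, int):
            return f"{value}i"        # integer fields carry an 'i' suffix
        if isinstance(value, float):
            return repr(value)
        return f'"{value}"'           # string fields are double-quoted

    field_str = ",".join(f"{k}={fmt(v)}" for k, v in sorted(fields.items()))
    ts = time.time_ns() if timestamp_ns is None else timestamp_ns
    return f"{measurement}{tag_str} {field_str} {ts}"

print(to_line_protocol("cpu", {"host": "node1"}, {"usage_idle": 87.5},
                       timestamp_ns=1500000000000000000))
# cpu,host=node1 usage_idle=87.5 1500000000000000000
```

In practice you would POST such lines to InfluxDB's `/write` HTTP endpoint, or let Telegraf handle serialization for you.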
And then on top of that platform—that level of the platform is all open-source and free to use—in terms of commercial offerings, we have an Enterprise version, which simply adds clustering (so high availability and additional scale-out) plus security features, bells and whistles, and we also have a hosted offering as well. But I just want to talk about the open-source TICK Stack, since that's where most people find real value in the development that we do.
Gunnar Aasen 00:22:26.693 Basically, we have a collection agent called Telegraf, which, if you're familiar with the OpenShift world, is essentially like a node exporter but push-based, although you can also do pull-based deployments. It's a fairly lightweight agent that you can drop anywhere from bare metal, to an IoT edge gateway, all the way to sitting next to the containers running your applications, or even pulling from data queues like Kafka or another message broker. And it's got a bunch of plugins that make it super easy to do config-based collection of stats for most popular software. And then you can use that to send data into InfluxDB. So InfluxDB is the storage: Telegraf collects your data and sends it to InfluxDB to store it. Chronograf is our visualization component. So if you're familiar with, let's say, Grafana or another visualization tool, Chronograf is essentially a very well-tailored experience specifically for InfluxDB and Kapacitor. And Kapacitor, which rounds out the TICK Stack, is basically a processing engine that lets you alert and transform your data into whatever state you want it in so that you can get value from it. And so this all comes together to create almost a pipeline flow. There are multiple ways to set this up and play around with it. At the moment, within the OpenShift ecosystem, we have Helm charts, which aren't the recommended way to deploy applications on OpenShift. So we're actually working on a new service broker for the TICK Stack; however, it's not quite done yet. In the meantime, we'll also be publishing some converted templates for the TICK Stack, which you can then deploy to OpenShift.
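[Editor's note: as an illustration of the config-based collection Gunnar mentions, a Telegraf agent is driven by a TOML file that names input and output plugins. A minimal sketch follows; the InfluxDB URL is a placeholder.]

```toml
# Collect host CPU and memory stats and push them to InfluxDB.
[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[inputs.mem]]

[[outputs.influxdb]]
  urls = ["http://influxdb.example.com:8086"]   # placeholder endpoint
  database = "telegraf"
```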
OpenShift does a bunch of monitoring as well, primarily within the Kubernetes infrastructure, through Prometheus. Prometheus, if you're not familiar with it, is essentially the name for all the built-in metrics machinery within Kubernetes. Essentially, Prometheus will do short- to medium-term collection of metrics about how different nodes are running in the actual underlying Kubernetes infrastructure. So what that means is that the TICK Stack can complement Prometheus: it can provide a long-term store for your Prometheus metrics. So basically, you can dump all of your Prometheus metrics into InfluxDB, and instead of having a week or a few weeks of history, you can keep metrics for all time. You can also combine metrics from your underlying infrastructure with high-level application metrics. And this is actually what ends up being the most useful: if you want to correlate, let's say, an increase in response time, can you trace that back down to either an increase in HTTP requests against some API endpoint, or, say, some server going offline for whatever reason? So being able to combine your infrastructure and application metrics ends up being very helpful. And InfluxDB ends up being a bit more flexible for collecting application metrics; specifically, InfluxDB supports event data, so you don't need to have timestamps landing on regular intervals. You can push data in as it's generated. And this allows someone to take more fine-grained snapshots of the whole OpenShift stack all at once.
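[Editor's note: using InfluxDB as a long-term store for Prometheus is configured on the Prometheus side through its remote read/write API; InfluxDB 1.4 and later expose matching endpoints. A sketch of the relevant `prometheus.yml` fragment follows; the host and database name are placeholders.]

```yaml
# prometheus.yml fragment: mirror samples into InfluxDB and read history back
remote_write:
  - url: "http://influxdb.example.com:8086/api/v1/prom/write?db=prometheus"
remote_read:
  - url: "http://influxdb.example.com:8086/api/v1/prom/read?db=prometheus"
```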
Gunnar Aasen 00:28:27.911 And with that, I'm just going to do a quick little demo of Chronograf. I was going to have this run in OpenShift, but I'm actually just going to run this in Minishift, and Minishift is the local development environment for OpenShift. However, my Minishift is not healthy right now. So I'm just going to give you all a quick overview of what Influx and the TICK Stack look like in terms of visualization. So essentially you can build dashboards and set up monitoring and overview dashboards. You can also explore via the Data Explorer and dig deep into what different things are happening underneath in your underlying metrics. And then you can also go through and actually define alerts, and this is the Kapacitor component right here: you can create alerts and essentially create thresholds very easily, and then expand this further to actually trigger alerts. And that ends up being a scripting language underneath.
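[Editor's note: the scripting language behind Kapacitor alerts is TICKscript. As a small illustrative example, not shown in the demo, a threshold alert on CPU idle might look like the following; the log path is a placeholder.]

```
stream
    |from()
        .measurement('cpu')
    |alert()
        // fire a critical alert when CPU idle drops below 10%
        .crit(lambda: "usage_idle" < 10)
        .message('CPU idle below 10% on {{ index .Tags "host" }}')
        .log('/tmp/cpu_alert.log')
```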
Gunnar Aasen 00:30:21.860 And with that I’m just going to end the demo there. And I’ll hand it off to Chris for questions.
Chris Churilo 00:30:31.450 So if anybody has any questions as it pertains to the demo or the overview of OpenShift and InfluxDB, please put your questions in the Q & A or the chat panel. And we have reserved some time now to be able to answer those questions or talk a little bit more about these two products.
Chris Churilo 00:31:03.065 So Jason, you did a really fantastic job of providing us an overview of how—I guess how the industry is kind of oversimplifying how easy it is to run containers. I think you did a really nice job of articulating that there are so many things that we still need to think about when it comes to containers and orchestration: the need for advanced networking capabilities, the ability to collect metrics and logs, and more complex deployment or application lifecycle management capabilities. You've had a lot of experience with these technologies. What would your recommendation be to the audience here? Where should they start when they are considering containers, and also an orchestrator to go with them?
Jason Dobies 00:31:58.193 Oh, wow. So I'll throw my chips in and say, when you're considering an orchestrator, you go with OpenShift. As a more real answer to it, the first thing most people have to start to consider is how you want to migrate your existing legacy applications. Do you want to do a simple lift and shift of the entire thing, drop it into a container, and then slowly start to pull out services? Or do you want to start to pull out services first and adapt your legacy system to connect to them? We see both. We see both types of approaches. We are obviously seeing some [inaudible] development where people can, out of the gate, start with a containerized approach from the beginning. So it's a bit of a cop-out to say, yes, we're seeing all the models, but it also is very relevant to say that there's no set solution. There's no single path of, hey, this is how you migrate your legacy app into a containerized deployment. Everyone is still figuring it out, which is actually one of the more interesting parts about this. In all of this, there's no incumbent, there's no set play for doing things in this containerized world, and it's been kind of fascinating watching it all crop up. In terms of some more specifics, it comes down to exactly what it is you're trying to do, keeping in mind that there are options in OpenShift to provide hints as to where your containers will be deployed. So if you have, for instance, machine learning that will use GPUs, there are new features coming out in the near future that are going to let you say these types of workloads should run on these particular nodes because they have the hardware to support them. Similarly with storage, if you need fast I/O or access to large disks. So there are definitely options in terms of customizing your environments. And as Gunnar alluded to, there's Minishift for a development environment.
Getting started with playing with OpenShift, or even just Kubernetes by itself, there are some very lightweight mechanisms for it. On my laptop right now, I've used the client to simply stand up a cluster running in Docker directly on my laptop. Minishift is a slightly larger installation but, again, it's all pretty self-contained. So where I'm going with this is, it's very easy to get your own instance running. Red Hat also provides a hosted offering called OpenShift Online. It has a free tier available for when, hey, I just have something I want to try, and I want to play around with containers and see what happens. You get access to a couple of projects inside of a running OpenShift cluster that, again, Red Hat maintains, but it's a good way of just getting your hands dirty and playing around with this stuff. So the barrier to entry is extremely low now. If you really want to start with containers and see what happens and what's available to you, it's fairly easy to get one of these types of orchestration platforms set up. And I think that's a good takeaway: it's a good starting point to say, all right, let's get it up and running, see what happens, and then start to build on it from there.
Chris Churilo 00:35:16.390 Yeah. I think that's really great advice. I mean, let's just get started with something small. There are so many things that you can do with it; don't be overwhelmed, just kind of take small little bites, if you will, and then determine your deployment mechanism, which way you want to go. Do you want to attack the entire set of things that you already described? So Gunnar, maybe you can talk a little bit about InfluxDB. First of all, where should it fit into this lifecycle? Like, when you're adopting containers, is it something that you think should happen up front, or is it something that can happen a little bit later on, once people have kind of established themselves with this? And then also, what are the things that they should start monitoring up front, versus maybe it's just nice to have down the road, or maybe everything should be done up front?
Gunnar Aasen 00:36:10.910 I think it's definitely something to start considering as you work on new applications. Obviously, for anything you have running that you're not monitoring, it's always best to really get on that and start at least hooking up some kind of metrics, just to understand if it's actually doing what you think it's doing, and that the experience you're giving to users is the experience that you want to give them. Especially for certain high-[inaudible] services, you know, where a 1% error rate means you're actually down for a significant number of users, versus, say, some kind of little app that runs in the background and pushes a message to your chat every once in a while; that kind of stuff is maybe not quite as important to automate. But basically, if you are starting new development, I think, in this day and age, with containers and the ephemeral nature of the way applications run now, and especially as different deployment architectures come into their own, such as function-as-a-service platforms as they continue to increase in use, or even the use of SaaS dependencies and stuff like that for infrastructure, it becomes important to think during development, where can things break? And if they do break, how can I know that this WebSocket message isn't actually coming back from the browser to your servers? Or how do you know if you, say, have high latency on some database call?
And especially when you have many, many different layers, where you've got not just a load balancer but also multiple layers of routing and microservices sending messages back and forth, a lot of stuff can break. Even small glitches on the radar, which back in the day, when you were running a couple of servers in a closet, wouldn't have been a huge issue, suddenly start to add up and become a bigger thing as you scale these services. And so I think the most important part when starting out and thinking about new services that you're developing, especially in a monitoring and metrics kind of sense, is to understand, number one, where are things going to break? And also, what kind of impact will certain things breaking have, and how? And then, how desirable is it to provide certain levels of reliability? Deciding those factors ends up being a really useful way to then figure out what to actually focus on, because in working with containers, OpenShift, and other big systems with a lot of components, there's so much to look at that you'll never create a perfect system. So decide what you actually care about delivering at the end of the day, and then drive your metrics based on that.
Chris Churilo 00:40:35.344 Well, I suppose you can also gather feedback from the rest of the team as well, right? About what they think could be most harmful to the user experience.
Gunnar Aasen 00:40:45.842 Yep.
Chris Churilo 00:40:46.939 Okay. Cool. So you guys have provided a list at the end of the presentation of how people can get started, but we can also guide today’s visitors to the OpenShift catalog, where they can get both OpenShift and the InfluxData open-source bits from Red Hat and start playing around with these technologies. Any other last words of advice?
Jason Dobies 00:41:20.293 You know, I’m struggling right now to find something super motivational to say, so I’m just going to say, have fun. It’s a really cool time to be in technology.
Gunnar Aasen 00:41:29.607 Yeah, I’ll second that. There’s just so much happening right now. Like Jason mentioned earlier in the webinar, all of this is still really just developing. A lot of the code behind all of these tools, even containers and Docker itself, has been written in the last few years. So yeah, it’s a really exciting time, and a huge shift in the way everyone’s going to operate, deploy, and run their services in the future.
Chris Churilo 00:42:07.095 Excellent. All right, so download it, check it out, try it for yourself. And if you do have any questions, feel free to post them in the InfluxData community. We get a lot of questions about monitoring, managing, and orchestrating containers there, so please feel free to check out the conversations that are happening, or start a new one. So thank you so much, Jason and Gunnar. And as I mentioned at the beginning of the webinar, the recording will be posted, and if you have any other questions, feel free to send me a note or post them on our community site. All right. Thanks, gentlemen. And we wish everyone a great rest of your day.
Jason Dobies 00:42:50.111 Thank you, Chris.
Gunnar Aasen 00:42:52.133 Thank you.
Track and graph your Aerospike node statistics as well as statistics for all of the configured namespaces.
Knowing how well your webserver is handling your traffic helps you build great experiences for your users. Collect server statistics to maintain exceptional performance.
Collect and graph performance metrics from the MON and OSD nodes in a Ceph storage cluster.
Use the Dovecot stats protocol to collect and graph metrics on configured domains.
Easily monitor and track key web server performance metrics from any running HAProxy instance.
Gather metrics about running Kubernetes pods and containers on a single host.
Collect and act on a set of Mesos statistics and metrics that enable you to monitor resource usage and detect abnormal situations early.
Gather and graph metrics from this simple and lightweight messaging protocol ideal for IoT devices.
Gather Phusion Passenger stats to securely operate web apps, microservices, and APIs with outstanding reliability, performance, and control.
The Prometheus plugin gathers metrics from any endpoint exposing metrics in the Prometheus exposition format.
Monitor the status of the Puppet server: the success or failure of actual Puppet runs on the end nodes themselves.
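Each of the items above corresponds to a Telegraf input plugin that feeds metrics into InfluxDB. As a minimal sketch of how a couple of them fit together, here is a hypothetical Telegraf configuration that scrapes a Prometheus-format endpoint and a local kubelet, then writes to InfluxDB; all addresses, ports, and the database name are placeholder assumptions, not values from the webinar:

```toml
# Minimal Telegraf sketch (addresses and database name are placeholders).

[[inputs.prometheus]]
  # Any endpoint exposing metrics in the Prometheus exposition format.
  urls = ["http://localhost:9100/metrics"]

[[inputs.kubernetes]]
  # Kubelet read-only API on the local node.
  url = "http://127.0.0.1:10255"

[[outputs.influxdb]]
  # Local InfluxDB instance acting as the long-term metrics store.
  urls = ["http://localhost:8086"]
  database = "telegraf"
```

Adding another input from the list above (HAProxy, Ceph, Mesos, and so on) is just another `[[inputs.*]]` block in the same file.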