Introduction to Docker & Monitoring with InfluxData
In this webinar, Gary Forghetti, Technical Alliance Engineer at Docker, and Gunnar Aasen, Partner Engineering, provide an introduction to Docker and InfluxData. From there, they show you how to use the two together to set up and monitor your containers and microservices, to properly manage your infrastructure, and to track key metrics (CPU, RAM, storage, network utilization), as well as the availability of your application endpoints.
Watch the Webinar
Watch the webinar “Introduction to Docker & Monitoring with InfluxData” by filling out the form and clicking on the download button on the right. This will open the recording.
Here is an unedited transcript of the webinar “Introduction to Docker & Monitoring with InfluxData.” This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
• Chris Churilo: Director Product Marketing, InfluxData
• Gunnar Aasen: Partner Engineering, InfluxData
• Gary Forghetti: Technical Alliance Engineer, Docker
Chris Churilo 00:01.838 All right, as promised we’re going to get started. It’s three minutes after the hour. My name is Chris Churilo, and I am the Director of Product Marketing here at InfluxData, and I welcome you to our webinar this morning, an Introduction to Docker and Monitoring with InfluxData. And we have two really great speakers today. We have Gary Forghetti, who is from Docker, and we also have Gunnar Aasen, who’s from InfluxData, and we’re just going to go ahead and start getting into this. So Gunnar, can you move to the next slide, please? All right, so those are our speakers, and I apologize. So here’s our agenda. This is what we’re going to go into. We’re going to actually start with getting into Docker, and Gary will give you a really nice, in-depth overview of Docker images and containers, and then he’ll go directly into Docker Compose and Swarm. And then he’ll finish up by going into what’s new in the latest version of Docker. From there we’ll switch over to Gunnar, and we’ll lay out what time series is really trying to address and what the problems are, and do a brief introduction to InfluxData and also a live demo. So with that, Gary, do you want to take over the presentation, or do you want Gunnar to move the slides for you?
Gary Forghetti 01:25.558 All right, everybody, see my screen okay?
Chris Churilo 01:28.365 Yes.
Gary Forghetti 01:29.029 All right, very good. All right, so as Chris mentioned earlier, this is the agenda I’m going to try to follow. Just a quick intro on myself: I’m a Technical Alliance Engineer at Docker, but I wear many hats. I’ve only been with Docker since April of this year, but I’ve got over 40, yes, over 40 years of experience in the IT industry. Enough said on that. So the agenda—I’m going to start with a quick overview of Docker, and I’m going to get into images and containers, just to help explain the differences and how they pertain to Docker. Then I’ll go into a simple example of using Docker Compose to automate the deployment of a stack. And then I’ll get into Docker Swarm. I’ll give you a quick overview of Docker Swarm and then get into an example of deploying a multi-tier stack in Docker using Docker Swarm. And then I’ll finish with a quick slide on what’s new in Docker 17.06. Okay, so let’s start off with: what’s Docker? Well, Docker is the world’s leading software container platform. It’s been around since 2013. It originally came out of a company called dotCloud as a Linux developer tool. And what Docker solves is the age-old problem of: well, the application works on my machine, but when I deploy it somewhere else and somebody else runs it, it’s got issues. So one of the things Docker lets you do—if you have legacy applications, it lets you transform and modernize your applications so they run in containers, with the efficiency benefits of container virtualization, and that allows you to reduce your costs and quickly develop, test, and deploy your applications, with all kinds of other side benefits. Your applications become very portable. They can run on different nodes, different hosts, in the cloud, or even on physical hardware. So you become very agile, very efficient during your testing, because once you test your application in a container, it can run anywhere Docker can run.
Gary Forghetti 04:05.032 And Docker has built-in security right out of the box to help your applications run securely. And again, Docker can run on all different platforms: physical hardware, your desktop, virtual machines, and public and private clouds. So there are endless opportunities for where you can run your application once you Dockerize it. As far as the Docker platform, it contains lots of products and tools which allow you to quickly develop, test, and deploy your applications. Docker’s got both a Community Edition and an Enterprise Edition. The Community Edition is the free edition, with community support, and Docker Enterprise Edition is the paid subscription. There are three flavors of that: Basic, Standard, and Advanced. The difference between them is that with Standard and Advanced you get the Docker Datacenter platform, which includes the UI and also contains the trusted registry. The Advanced option, on top of Docker Datacenter, also gives you security scanning.
Gary Forghetti 05:36.503 Okay, let’s get into it quickly, and I’ll talk a little bit about images and containers. So if you think about a Docker image, it’s a template. And that’s what gets used to run your application, which runs in a container. If you have a cloud background, if you’re familiar with cloud computing, a Docker image is somewhat similar to a machine image. You’ll deploy that image into a running application. In the cloud world, your machine image turns into an instance, a running virtual machine. In Docker, your image is used to deploy a running container. So your image contains everything your application needs to run: your application code, any binaries it needs, any libraries, configuration files, and also metadata that tells Docker how to run your application. And you build your Docker images using the docker image build command. I’m not going to get into a lot of detail on that today. I’m assuming you have that background; if not, Docker’s got great documentation on that. The nice thing about Docker images is they don’t contain a full operating system, so they’re lightweight. They don’t contain a kernel and the kernel modules. That way, the Docker image is lighter weight, it’s small, and it’s going to take advantage of the kernel on the host that you’re running Docker on. That also makes it very, very portable. And your Docker images, once they’re built, are stored in registries, and that’s how they’re accessed and run. They’re pulled down and run from a registry, and there are three different types of registries. There’s Docker Hub, which is the default free registry; there’s Docker Store for more curated and certified content; and if you installed Docker Enterprise Edition, Standard or Advanced, you get private registries. You can create your own private registry and lock it down.
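As a rough sketch of that build-and-push workflow (the image name `myorg/myapp`, the tag, and the base image here are hypothetical examples, not taken from the webinar):

```shell
# Write a minimal Dockerfile: base image, application files, run metadata.
cat > Dockerfile <<'EOF'
# Start from a lightweight official base image
FROM alpine:3.18
# Add the application files
COPY app.sh /usr/local/bin/app.sh
# Metadata telling Docker how to run the application
CMD ["/usr/local/bin/app.sh"]
EOF

# Build the image locally, then push it to a registry (Docker Hub by default).
docker image build -t myorg/myapp:1.0 .
docker image push myorg/myapp:1.0
```

Once pushed, any host running Docker can pull and run the same image, which is what makes the application portable.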
Gary Forghetti 07:52.675 So basically, think about a Docker image as a container at rest. It’s a template which you’re going to use to launch your application in a container. The Docker image basically contains multiple layers. When you build your image, there are steps that Docker goes through to create it, and each step can create a layer. The bottom layer is the base layer; that’s what you build your own image from. Typically, you build from a base image for an operating system like Ubuntu or CentOS or Alpine, etc. Docker provides official base images in Docker Hub and Docker Store that you can build your image on, and that official base image with the operating system does not contain the entire operating system; it just contains the userland files and libraries you’re going to need.
Gary Forghetti 09:00.560 Again, so your Docker image is built on a base image, and then as you build and install your applications on top of that image, layers are created, and Docker uses what it calls a copy-on-write system. So each layer gets written on top of the other layer, and once the image is built, everything is read-only. Okay? When you launch your image in a container, Docker pulls the image down, launches it, and creates a writable layer on top for your application to use at runtime. And that writable layer that Docker generates when your container runs goes away once you terminate your application in the container.
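If you want to see those read-only layers for yourself, the image inspection commands list them; a small sketch (requires a Docker daemon; `alpine:3.18` is just an example image):

```shell
# Pull a small official image, then list the layers it is built from.
docker image pull alpine:3.18

# One row per build step; most layers of a base image are tiny or zero-size.
docker image history alpine:3.18

# The content-addressed digests of the read-only layers.
docker image inspect --format '{{json .RootFS.Layers}}' alpine:3.18
```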
Gary Forghetti 09:54.280 So a Docker container, basically, is a process. Docker containers run your applications; they run as processes. And again, they’re created from your Docker images. They’re very lightweight and portable, and like I mentioned before, because they don’t contain a full operating system, they don’t contain a kernel. Okay? They use the existing kernel on the host where you’re running Docker. Okay? So in the diagram on the left, you can see three containers that are running. One container is running Tomcat, another container is running SQL Server, and another container has just got some binary applications running on top of an Alpine image. They’re all using the underlying kernel on the virtual machine or physical machine that they’re running on. So they’re sharing that, which makes them lighter weight. And Docker containers run on open standards, right? So they can run on all the major Linux distros—Ubuntu, CentOS, Alpine, SUSE—and also macOS and Microsoft Windows. And again, the containers, your applications, can run on virtual machines, they can run on bare metal, and they can also run in the cloud, private and public clouds. Anywhere you can run Docker, your container, your application, can run.
Gary Forghetti 11:25.908 So let’s quickly go through some comparisons. Let’s compare an application running on physical hardware, on a virtual machine, and then in a Docker container, and see what the differences are and the benefits and so forth. This is your typical legacy application. It’s running on bare metal. Your application’s running on top of an operating system, on a physical server, and I think you’re all aware of the limitations here. It’s a slower deployment time. You have to install your application on top of this physical machine. You have to make sure it runs, test it thoroughly. There are high costs involved here because you have to make sure your hardware contains enough resources for your application to run at its highest threshold. So you’re over-provisioning your physical server, and there are times when you’re wasting resources. It’s difficult to scale your application. If your application needs more resources, well, you’re running on a physical box, so you’re going to have to bring the box down to add more memory or add more CPUs. So it makes it very difficult to scale, and difficult to migrate your application, because it has to be installed on another box and retested. You have to test it against different versions of the operating system. So there are a lot of variables involved here that make it hard for your applications to be easily deployed, migrated, and moved to other hardware boxes.
Gary Forghetti 12:59.404 In some cases, you might be locked in by the vendor. You might be locked into a certain type of box, a certain kernel, or even certain components or a certain operating system. So I think you’re all familiar with this environment, and it’s not the most efficient environment from a cost standpoint, a resources standpoint, or a time standpoint. So let’s compare the physical environment to the hypervisor environment. Definitely, there are a lot of benefits here, because now you’re running multiple virtual machines on a single hardware box. You’re sharing the resources. You’re taking a physical machine and dividing it into multiple virtual machines, so it’s easier to scale. Okay? If your virtual machine needs more resources, you can allocate those on the fly. So your applications are more elastic, because the virtual machines can grow both vertically and horizontally. And it’s also very good from a cost standpoint, because you only pay for what you use. And again, you can take advantage of multiple applications running on the same physical box in a virtualized environment, so they can share the resources on the machine.
Gary Forghetti 14:17.909 There are still some limitations here, though, because you still have to allocate to each virtual machine the resources it needs: CPUs, memory, and storage. Also, each virtual machine that runs your application has its own guest operating system, so there’s overhead there. There are resources you’re wasting, because every virtual machine running on that physical box, on that hypervisor, has its own operating system. Also, you’re not guaranteed that your application’s going to be portable in this environment. You could put your application on a different virtual machine running on a different box, and it may not run. There might be differences there. Okay? It may run on one cloud and not another. Okay? So it’s not as portable in this environment, but there are some benefits over bare metal. In the Docker world, with Docker containers, you get a lot more benefits, because the Docker container runs as a package with everything the application needs, and it runs on top of Docker. So Docker is the virtualization layer running on top of the operating system. Okay?
Gary Forghetti 15:27.570 So in this example here, you have a physical machine, it’s running one host OS, one kernel, Docker’s running on top of that, and you’re running three applications, three Docker containers. Okay? In the diagram on the right, you’re running a virtualized environment with a hypervisor, and you’re running two Docker nodes. One Docker node is running two containers, the other Docker node is running one container, and they’re sharing the hypervisor, okay. So again, the Docker containers take up less space because there’s no operating system, okay. They run as isolated processes, so they share the same resources, right, that are available on that virtual machine, or on that physical machine if you’re running in the physical environment. Since they’re smaller, or lighter weight, they start quicker, okay. They take up less space, and there’s a smaller attack surface, okay, because they don’t contain the kernel and the kernel modules. They’re easier to scale, okay. If they need more resources on that machine, they can grab them very easily. They’re very portable. I can take that container and run it on a different virtual machine, regardless of what that virtual machine is running, as long as Docker is running there. And that can be a virtual machine in a different cloud, or it could be a virtual machine running on my desktop, and so on and so forth. So Docker containers are much more portable [inaudible] the packaged environment that’s required for the application, and that container can be moved, okay. So used together, Docker containers and virtual machines give you a lot more flexibility in how to deploy your applications, take advantage of system resources more efficiently, reduce your costs, and also reduce the time it takes to deploy your applications.
Gary Forghetti 17:28.433 Okay, I’m going to quickly get into some terminology here that I’m going to refer to in the follow-on slides in this presentation: stacks, services, and tasks. Okay, so in this presentation, and as far as Docker is concerned, a stack is a collection of services that make up an application. So you have an application, and you’re going to deploy it as a stack, and that stack can have multiple services. A service is basically a Docker image that you’re going to run, so you might build a Docker image that runs Apache, okay. And you’re going to run an Apache web application; that’s a service. Okay, and that Apache web application is a service that’s going to run from that Apache Docker image. Okay, you may want to run a service that has a load balancer, or you may have a database associated with your application, and you might have a business logic backend application that does your processing. So your stack could have four services. Your stack is running a load balancer; your load balancer is balancing traffic to your web frontend application; your web frontend application could be talking to your business logic backend application, which runs your business logic; and your business logic application could be talking to your database, as well as your web application talking to your database. So this is a stack of four services, okay. And a service can run one or more tasks, okay. So think of a task as an individual container running as part of a service. So, for instance, I could run my web frontend application, an Apache application, as a service. And I can say, “Well, I want to run three of those at once.” That way I can have a load balancer, and I could have the load balancer balance the traffic to those three Apache web frontend applications.
So a stack is one or more services, a service runs a specific application, and a service can contain one or more tasks, one or more containers, where a container is also a process running on the operating system.
Gary Forghetti 19:55.007 Okay, let’s quickly talk about how to manually deploy an application stack with Docker. This is using commands. So in this example, I have a stack with two applications. I have a stack that contains a Tomcat application, and that Tomcat application requires a MySQL database server. So using Docker commands, I’m going to create a network, and I’m going to run two containers. One container is going to run my database server application. The other container is going to run my Tomcat web application. So the first thing I do is run the docker network create command to create my own network that these containers are going to run in, so I can isolate them from the rest of the activity running in Docker. That’s a nice security feature. So I create the network with the docker network create command. Then I run a container for the database, and my database has to be up before my application, so I start it first. I specify the necessary parameters that this database application requires. I specify the network I want it to run in; that’s the network I created previously. And I specify some environment variables which contain information that this application needs in order to connect to the database.
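A hedged sketch of that manual deployment (the network name, image tags, and credentials here are illustrative stand-ins, not the exact values from the demo):

```shell
# Create an isolated user-defined network for the two containers.
docker network create tomcat-mysql

# Start the database first; the environment variables configure MySQL.
docker container run -d --name mysql --network tomcat-mysql \
  -e MYSQL_ROOT_PASSWORD=example -e MYSQL_DATABASE=appdb \
  mysql:5.7

# Then start the Tomcat web application on the same network, exposing port 8080.
docker container run -d --name tomcat --network tomcat-mysql \
  -p 8080:8080 tomcat:9

# Display the two running containers.
docker container ls
```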
Gary Forghetti 21:18.384 Then I start up the web application as a container. And then, after the containers come up, I display them with the docker container ls command. So using these three commands, I can bring up this application stack. So let’s talk about how to automate that stack with Docker Compose instead of typing all these commands. Docker Compose is a tool that Docker provides which lets you automatically deploy a multi-container application without having to type in all the commands. It handles dependencies for you. It can also be used to bring down your application and make sure that everything comes down normally and everything gets removed. Okay? So basically, you define your application stack in a Compose YAML file, which is kind of an industry-standard format for a configuration file. Docker has its own unique keywords and values that you’ll code in this Compose file. And then you use the docker-compose command; you specify that Compose file, which contains those definitions, to bring up your stack. You can also use the docker-compose command to bring down that stack, without having to worry about the order that these things are brought up in, and also without having to worry about the syntax. Docker Compose is really intended for development and testing. Yeah, you could use it in production, but if you’re going to production, you really want to get into using Docker Swarm, and we’ll talk about that in a little bit and why that’s better.
Gary Forghetti 23:09.324 So here’s a very high-level template for a Docker Compose YAML file. You have a version you can specify, which determines what level of functionality you require in this YAML file. And then you define your services, your volumes, your networks. Okay? And under your services, you have different options you can specify for each service: what Docker image you want to deploy in your container, what command you want to use to start it, any environment variables, and then which of the volumes defined in this YAML file you want to give this container access to. So for every service you have, you’ve got to define a service options section in your YAML file. The next slide has the actual sample that I used to deploy the two-service stack, the same stack that was deployed manually in the previous example with commands. I’ve defined two services, the Tomcat application and the database application. I’ve specified what image I want each container to be deployed with. And I’ve got the dependencies set up: the Tomcat application has a dependency on the database, what networks to deploy it in, what ports to expose so I can get to the application, and so on and so forth. So then by using the docker-compose up -d command, it’s going to read the contents of that Docker Compose YAML file and deploy my stack automatically, with the dependencies I’ve defined, and bring it up. In this example, the application is a Tomcat application. It is listening on port 8080. So by pointing my web browser to the Docker node—the IP address or host of the Docker node—and specifying the port, I’m able to get to the application. So that’s Docker Compose.
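A minimal Compose file along those lines might look like this (a sketch only: the image names, port, network name, and environment values are illustrative, not the exact file from the demo):

```yaml
version: "3.3"
services:
  web:
    image: tomcat:9            # illustrative web application image
    ports:
      - "8080:8080"            # expose the application to the outside
    depends_on:
      - db                     # bring the database up first
    networks:
      - appnet
  db:
    image: mysql:5.7           # illustrative database image
    environment:
      MYSQL_ROOT_PASSWORD: example
    networks:
      - appnet
networks:
  appnet:                      # isolated network for this stack
```

You would bring this up with docker-compose up -d and tear it down again with docker-compose down.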
Gary Forghetti 25:18.853 Let’s talk a little about Docker Swarm. Docker Swarm is Docker’s orchestration, scheduling, and clustering product. On top of what Docker Compose does, it allows you to scale up and scale down. It allows you to perform upgrades and rollbacks. You can update your applications that are running in Docker Swarm. Okay. It has built-in service discovery and load balancing inside the swarm, so the nodes in the swarm can communicate with each other very easily. And it also load-balances traffic coming into the swarm looking for applications; we’ll talk about that. Docker Swarm ensures that your containers are restarted if they fail. So once you define your application in Docker Swarm, Docker Swarm will make sure that your application stays up. If a container fails, it will restart it automatically. You can control where in the swarm your applications run—which nodes. You can be very granular with constraints. There’s built-in security. All the traffic between the nodes in the swarm is securely encrypted. You can ensure that only trusted servers that have joined your swarm are running your containers. Docker Swarm has a great facility for defining secrets, passwords, and keys, and then you can control which containers can access those secrets.
Gary Forghetti 26:45.526 Docker Swarm lets you run both Windows and Linux workloads in the same swarm. Very nice. And to deploy your application in a swarm, you use a very similar YAML file to Compose, but with some new options. You’re going to use the docker stack deploy command to deploy your stack in Docker Swarm, and the docker stack rm command to remove it. The nice thing about Docker Swarm is that it makes all the nodes in the swarm appear as one. So you communicate with it and say, “I want to deploy my application to this swarm,” and Docker Swarm will deploy all your containers to nodes in the swarm as if it were one cohesive unit. By default, Docker Swarm is not enabled. So if you want to use swarm mode, you have to enable it with the docker swarm init command. We’ll talk about that.
Gary Forghetti 27:35.271 So a Docker swarm contains both manager and worker nodes. The managers have the most power. They manage and control the swarm. All the managers have a database, called the Raft database, which stores your configuration information, state information, and security information. Only one manager leads and manages the swarm at one time. You have follower managers which basically route requests to the leader, and they provide backup. So if something happens to the leader, the follower managers will hold a vote and elect a new leader to lead the swarm. Managers are also workers. They can do work; they can run containers, but you can restrict that. You can control where your containers run with constraints. If you don’t want your containers to run on managers, you can do that. You can change the nodes on the fly, dynamically, with Docker commands. You can make a manager a worker, or a worker a manager. You can add new managers to a swarm or remove managers from a swarm. You can add and remove nodes, all with commands. Again, all the traffic between the managers and workers is securely encrypted. For high availability, you definitely want to have more than one manager, and you want an odd number, so they can vote; you need a majority to elect a leader. So you want to deploy your managers in threes, fives, sevens, and so forth.
Gary Forghetti 29:07.159 In Docker Swarm, there are different commands to manage the swarm. There are five different types of commands. There’s the docker swarm command, which lets you manage the swarm: create the swarm, initialize it, join nodes to the swarm, and display information on the swarm. There’s the docker node command, to let you manage the nodes: it lets you display the nodes and promote nodes from workers to managers and vice versa; there’s a demote; and there’s an update command that lets you update your nodes in the swarm. To deploy your stacks and manage your stacks, there’s the docker stack command. Okay? Individually, inside the stack, you have services; there’s the docker service command, which lets you manage your services: create them, inspect them, list them, remove them, and so forth. And Docker has the docker secret command for creating and managing secrets that you can make available to the containers running your applications in the swarm.
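One illustrative command from each of those five families (the IP address is a placeholder, and all of these assume a Docker engine running in swarm mode):

```shell
docker swarm init --advertise-addr 10.0.0.1   # swarm: initialize this node as a manager
docker node ls                                # node: list nodes (see also docker node promote / demote)
docker stack ls                               # stack: list deployed stacks
docker service ls                             # service: list services and their replica counts
docker secret ls                              # secret: list secrets available to services
```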
Gary Forghetti 30:07.151 Here’s a high-level example. I’ve got a swarm manager, and I’ve got three nodes. And I want to deploy an Apache web server as a service with three replicas; I want three containers running at once. So I can issue the docker service create command. I name my service apache. I say I want three replicas. I want to publish a port; I want to make port 80 available. And I specify my Docker image. In this case here, I’m using the Docker official httpd version 2.4 image. And I issue that command on the Docker Swarm manager, and the swarm manager deploys three Apache containers in that swarm. All right. Let’s talk about automating the deployment of a stack with Docker Swarm. So I’ve got an example. I only have one manager and two workers; in a production environment, you definitely want to have more managers, and you’re probably going to want more workers also. To keep it simple, I only have one manager and two workers, so I’m running Docker on three machines. In this case, I’m running virtual machines. So I’ve got three nodes: node1, node2, and node3. I’m going to initialize a swarm on node1. So I do a docker swarm init, and I tell the swarm which IP address I want to advertise to the other nodes. And after I issue the docker swarm init command, I have initialized a swarm. This node becomes a manager. And it spits out a command so I can join workers. So I go to the other two nodes, and I issue that command. And these nodes now join the swarm as workers. If I wanted a node to join as a manager, I would run the docker swarm join-token manager command on this manager node. It would echo out a join command for a manager, and I would then run that manager join command on the nodes I want to be managers. In this example, I only want one manager and two worker nodes, so I just do the join as workers. So after I do that, I go back to node1. I can do a docker node ls command, and I can see I’ve got three nodes. I have a manager leader, and the other two nodes are workers.
And how do I know they’re workers? If they were managers, there’d be a status shown in the manager column here, because only one manager leads, and the other managers would be followers. Since there’s nothing in that column for nodes two and three, I know they’re workers.
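The sequence above, sketched as commands (the IP address is a placeholder, and the join token is elided because swarm generates it for you):

```shell
# On node1: initialize the swarm, advertising this node's address to the others.
docker swarm init --advertise-addr 10.0.0.11
# init prints a "docker swarm join --token <worker-token> 10.0.0.11:2377" command;
# run that printed command on node2 and node3 to join them as workers.

# To join a node as a manager instead, print the manager join command first:
docker swarm join-token manager

# Back on node1: list the nodes. The manager status column shows Leader for
# node1 and is empty for the two workers.
docker node ls

# Deploy the earlier example: an Apache service with three replicas on port 80.
docker service create --name apache --replicas 3 --publish 80:80 httpd:2.4
```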
Gary Forghetti 32:56.392 I want to go through a very quick application called Pets. I’m going to deploy a Pets application to this swarm. Pets is a multi-service Dockerized application written by a Docker employee. It basically displays some random pictures of pets. It’s got a web frontend and a backend database. It’s written in Python. The stack I’m going to deploy has two services. The Pets frontend application is going to use a Docker image called chrch/docker-pets, which was written by the Docker employee. The backend database is going to use the consul Docker image, which runs Consul, a key-value store. This application is used a lot for demos. It’s been used a lot in labs at DockerCon; they frequently use this application. It’s publicly available on a GitHub repo. You can pull it down and run it yourself.
Gary Forghetti 34:00.185 So let’s quickly look at the Docker Compose file that I’m going to use to deploy this application. This compose file is part of the GitHub repo. It contains the definitions to deploy two different services, the web service and the database service, named web and db. The web service is going to deploy the Pets application. The database service is going to deploy the consul Docker image. I want to run two replicas of the web frontend application and three replicas of the backend application. So that’s going to be two containers and three containers. So what I do is go to my Docker node1, which is the manager. I do a docker stack deploy, and I give it that compose file. And I say I want this stack to be called pets, and it processes the requests in that Docker Compose YAML file, creates the network, and creates the two services: the web and the database application. I can then do a docker stack ls, and it says: “You’ve got one stack running called pets with two services.” I can run the docker stack services pets command to display the two services. I’ve got a pets_web with two containers running, and I’ve asked for two. And I’ve got the database service running three containers, and I’ve asked for three containers. Running the docker stack services pets command, I can display the two services in a little more detail. I can see which Docker image is used to run each service. Now, if you notice, when I started this stack, I gave it the name pets. Docker is going to prefix the web service and the database service from the compose YAML file with the stack name and an underscore. So I can see the two services running here: pets_web and pets_db. If I do a docker stack ps command to display the processes for the pets stack, it gives me even more detail.
I can see the actual containers, the actual tasks that are running, I’m running one, two, three database tasks and one, two web tasks, and I can see where they’re running. DB1 task is running on node1, DB2 is running on node2, DB3 is running on node3, web.1 is running on node2, web.2 is running on node1.
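A compose file along the lines described above might look like the following. This is a sketch, not the actual file from the chrch/docker-pets repo—the image tags and the port mapping are assumptions based on the walkthrough:

```yaml
version: "3.3"

services:
  web:
    # Hypothetical tag; check the chrch/docker-pets repo for the real one
    image: chrch/docker-pets:1.0
    ports:
      - "5000:5000"        # the web frontend listens on port 5000
    deploy:
      replicas: 2          # two web containers across the swarm

  db:
    image: consul:7.2      # consul key-value store; tag as quoted in the talk
    deploy:
      replicas: 3          # three database containers
```

Deploying it as a stack named pets would then be `docker stack deploy -c docker-compose.yml pets`, as in the demo.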
Gary Forghetti 36:57.577 I can get more granular. I can do a docker service ps on an individual service. So here I did a docker service ps on pets_web, and it only displays the web service, and I can see it’s running two web tasks, two web containers, and they’re running on these nodes. Same thing for the database service: I can do a docker service ps on pets_db, and I can see I’m running three of those, on node1, 2, and 3. Now, inside the swarm, inside that network, the Docker services—my containers, my tasks, my applications—can all communicate amongst themselves using the service name. So on node2, I displayed the containers. I’m running a pets_web.1 and a db container on this Docker node, on node2. If I run a ping command inside the web container and I try to ping the database, I can ping it by the service name, db. So this web application running on this Docker node can communicate with the database by referencing the db name, the service name. Okay. It can also reference it by the fully qualified name, prefixed with pets_, which is the stack name. That’s very powerful. My web application running in a container can access the database by specifying the service name. So that’s how I can code the connection information for my web application to talk to my database application. Very powerful.
Gary Forghetti 38:39.448 Likewise, coming into the node: if I need to get to the web application—the web application is running on port 5000—I can point my web browser at any one of the Docker nodes in my swarm, and Docker Swarm will route that request to one of the application containers running in the swarm. I’m on Docker node1. I know that there’s one pets_web application container running on node2 and one running on node1. If I go to node2, I can see that I’m running the web.1 application here and the db.2 here. If I go to node3, I’m only running the database application; I’m not running the web application here. And on node3, my IP address ends in 77. I can point my web browser to that node on port 5000, and Docker will route the request to one of the web applications, either on node1 or node2, automatically for me, without me having to worry about the routing. So there’s some load balancing built in. Once I get into the application, I can log in and play with it a little bit. Okay. And I can see that it’s using the database. It’s talking to the database; it’s keeping track of what I’m doing inside the database. I can serve another pet, display another pet. Up it comes, and I can see that now I’ve got two pets served. So the web application is recording information in the database, and it’s able to retrieve it and display it. So, quickly, let’s talk about removing a container. I’m going to go to node2. I display the containers, and I can see I’m running a couple of containers. I’m going to remove one of them, pets_web.1, with the Docker remove command. I go over to node1 now, and I display the service for pets_web. I can see that one of the containers failed four seconds ago, and Docker has automatically started up a new container. So one of the tasks in that service was brought down, and Docker automatically restarted it for me.
Gary Forghetti 41:11.413 The last example I’m going to do here is going to show you how easy it is to update your application running in the swarm. I’m going to go back to the same Docker Compose file that I used to bring up the stack, and I’m going to bump the consul Docker image from version 7.2 to version 9.2. So I go into this compose file and I change the Docker image. And I’ve added a stanza here called update_config to tell it how to do this update. With the parallelism keyword, I said: “Only replace one container at a time, and delay 10 seconds between updates.” So instead of updating all three database containers that are running—because I asked for three replicas when I first brought this up—it’s going to replace one at a time and wait 10 seconds between updates. And there are more options here for updating your containers; this is just a very simple example. So now I go up and I display the current pets database service. I can see I’m running three containers, all running the 7.2 image. I then run the same docker stack deploy command I did before, and this time it’s going to go through and do updates. I can see it’s updating the services. I now go back and display the database service with the docker service ps command. I can see that it has already started to do the updates. It already updated pets_db.1 and 2; it hasn’t gotten to 3 yet. I can see it’s delayed 3 seconds, 13 seconds. I can see now that pets_db.1 is running 9.2, and so is pets_db.2. Wait a few more seconds, run the command one more time, and I can see that all three containers have been updated to 9.2. It’s shut down the old containers and replaced them with the new, updated containers. So now I’m running updated versions of my application.
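The rolling-update behavior described above comes from an update_config stanza on the service. As a sketch (the image tags here are the ones quoted in the talk and may not match real consul tags):

```yaml
services:
  db:
    image: consul:9.2        # bumped from 7.2
    deploy:
      replicas: 3
      update_config:
        parallelism: 1       # replace one container at a time
        delay: 10s           # wait 10 seconds between updates
```

Re-running the same `docker stack deploy` command with this file triggers the rolling update.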
Gary Forghetti 43:22.806 All right, let me quickly go through what’s new in Docker 17.06 and then pass control over to Gunnar. So, what’s new in 17.06: we now have support for running Docker on IBM Z, the mainframe, and on Windows Server 2016. There’s now the ability to have custom roles, so there’s more granularity for controlling what you’re doing in Docker, down to the API function. There’s role-based access control for nodes, so you can restrict which users and which teams can deploy applications to which nodes in your swarm. There’s mixed cluster support: you can run both Linux and Windows in the same cluster. However, you’re going to have to use placement constraints, because you can’t run Windows containers on Linux nodes and you can’t run Linux containers on Windows nodes. But you can run those containers in the same cluster; you just restrict where they run—Windows containers on Windows nodes, Linux containers on Linux nodes. We have some policy-based automation, so you can automate image promotion using a predefined policy. So you can say, “In my private registry, if I have an image that’s been scanned successfully, I want to move it from one repo to another repo in my registry—maybe from my test repo to my production repo.” You can also restrict your repositories from being updated or deleted with policy-based automation. There’s also a very popular feature called multi-stage builds, which lets you write Dockerfiles with multiple FROM statements. What that gives you is the ability to build your application a stage at a time and not carry over artifacts, tools, and secrets from prior stages in your Dockerfile. So at the end of your build, you’re only building an image with your application; you’re not building the image with tools that are only needed at build time. There’s more info at this Docker link. And I think that’s all I’ve got.
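A minimal multi-stage Dockerfile, as a sketch of the idea described above (the Go program name and paths are invented for illustration):

```dockerfile
# Stage 1: build stage — has the Go toolchain, which we don't want to ship
FROM golang:1.8 AS builder
WORKDIR /src
COPY app.go .
# static build so the binary runs on a minimal base image
RUN CGO_ENABLED=0 go build -o /app app.go

# Stage 2: final image — only the compiled binary is carried over
FROM alpine:3.6
COPY --from=builder /app /usr/local/bin/app
CMD ["app"]
```

The resulting image contains only the second stage; the compiler and source from the first stage are left behind.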
I do have some additional information, some more links. And I also have the Vagrant file I used to bring up that Docker swarm—I ran that exercise using Vagrant and VirtualBox. You can download my Vagrant file, bring up that test swarm, and play with it yourself. And that’s all I have. I think it’s time to pass control over to Gunnar.
Gunnar Aasen 46:06.691 Great. Thank you, Gary. Definitely looking forward to using those multi-stage builds. So anyway, I’m going to start sharing my screen, but I think you need to stop sharing yours, Gary.
Gary Forghetti 46:25.762 Should be set.
Gunnar Aasen 46:26.898 Yep. Thank you. All right. We’re getting up there in time, so I’m going to try to fly through some of this intro stuff. If you’re not fully familiar with InfluxData and the whole TICK Stack—InfluxDB and Telegraf and all that—I highly recommend checking out our website and documentation afterwards. We’ve got a lot of getting-started guides out there. So if you’re completely new to InfluxDB and InfluxData, I’m sorry, but I’m not going to go into too much detail on how everything works and a lot of the terminology; you can look that up after the webinar. So, one of the things that Docker has introduced into the world of systems operations and DevOps is this concept of a lot of metrics, right? Instead of having a couple of servers that you’re attending to and keeping an eye on—maybe you have three servers, and when load grows on your website you add another server—now, thanks to Docker, we have a different paradigm, where people are running fairly large systems with potentially tens or even hundreds of Docker containers per host. I like to think of a Docker container as almost a lighter-weight VM, in a sense. It allows you to run all of these applications that previously you maybe had to run on separate boxes somewhere. Now you can run a lot of these containers, running different images, to meet different workloads.
Gunnar Aasen 49:01.711 And one of the issues that pops up with this is, essentially, once you start to run all these different workloads, you start to realize: where is my visibility into making sure everything is running? What’s actually happening in a container if you’re segmenting up the memory on a large machine into smaller little bits for each container? How can you tell when a specific container is starting to run out of memory, or maybe there’s some other issue with something that’s crunching CPU? So Docker introduces an additional problem: it’s harder to maintain visibility in this containerized world that is fast becoming a standard in technology. And what we do here at InfluxData is, we’re the primary developer of InfluxDB, which is a time series database. Time series data is essentially what you can think of as the monitoring data that is output when you’re, say, checking the memory being used at any point in time and then storing that data. And it’s very useful for keeping track of how things are actually performing at any given point in time.
Gunnar Aasen 50:47.301 And there are a couple of other things about time series that make it a separate data problem from a lot of regular relational databases, or other databases you might be interacting with. You have a really high velocity of data: you’ve got a ton of writes coming in, because you’re collecting new data pretty much constantly and writing it. And then you also want to read that data, because you want routine monitoring, things like that. You also want a real-time view of what data is actually coming in. And it’s usually fairly unimportant to know a specific individual value; rather, it’s more important to know the general trend, right? So if you think of it like looking at a graph, or looking at the derivative of the way that CPU, memory, or disk usage is trending over a certain time, that becomes more useful than a specific value at one point in time.
Gunnar Aasen 52:04.203 So InfluxDB is a purpose-built database, built specifically to handle this type of data, and specifically to handle this Docker containerized world that we live in now, where we’re generating lots of data for each container and trying to build visibility into the systems that we set up. We started out developing InfluxDB, and we’ve slowly built out a full stack, or platform, around time series that we call the TICK Stack. That’s just an acronym for the different components in the stack. There’s Telegraf, which is our collection agent—it’s the thing that you would deploy to an actual machine to go out and collect stats from the Docker daemon, or collect stats from the local OS to grab useful information, or maybe even grab stats from an application running in a Docker container, and then ship those stats back to an InfluxDB server, which is the actual database itself. And InfluxDB, like I mentioned earlier, was built from the ground up to handle time series, so it’s got a very useful SQL-like query language, really high performance, and very good compression, as well as some other things that make it easier to handle time series data.
Gunnar Aasen 53:46.108 We also have the other components in the TICK Stack: Chronograf, which is basically our UI and graphing/visualization engine that ties the entire TICK Stack together into a cohesive whole—I’ll be showing that a little bit later. And then we also have Kapacitor, which is our event processing and alerting engine. So if you want to make your time series actionable—to be able to trigger some kind of action if a subset of containers happens to restart at the same time, or say you want to monitor a specific container or some specific setting—you can have that alert go to Slack or PagerDuty or something like that. And in addition to developing the TICK Stack, which is open source, InfluxDB and Kapacitor themselves are both open core, so we actually have Enterprise, clustered, high-availability versions of InfluxDB and Kapacitor as well. And we also have a managed hosting platform. So, I’ve just gone over the TICK Stack, and that brings me directly to the demo.
Gunnar Aasen 55:16.852 And so, specifically for the TICK Stack and Docker, we have quite a few different options that you can use here. Essentially, we publish containers for all of the different components of the TICK Stack—so we’ve got a Telegraf container, an InfluxDB container, and so on—and you can actually go onto our GitHub and see the base Dockerfiles that are used to generate these containers. We run our InfluxDB Cloud platform using Docker containers ourselves, so while there are definitely some things to keep in mind, generally we totally recommend using Docker here, especially for things like Telegraf. If you’re deploying a set of containers, just dropping in a Telegraf container to collect the stats and ship them away is super easy, and I think it makes working with the TICK Stack much easier as well. In addition to that, we have a Docker Compose file set up with the whole TICK Stack—we’ll have some links at the end of the presentation so you can grab it—and that’s what I’ll be showing in just a second.
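A stripped-down compose file for the TICK Stack might look like the following. This is a sketch, not the actual file from the influxdata repo; the image tags and ports are the commonly documented defaults for the 1.3 line:

```yaml
version: "3"

services:
  influxdb:
    image: influxdb:1.3
    ports:
      - "8086:8086"        # InfluxDB HTTP API

  telegraf:
    image: telegraf:1.3
    volumes:
      # give Telegraf access to the host's Docker daemon socket
      - /var/run/docker.sock:/var/run/docker.sock

  chronograf:
    image: chronograf:1.3
    ports:
      - "8888:8888"        # Chronograf web UI

  kapacitor:
    image: kapacitor:1.3
```

Mounting the Docker socket into the Telegraf container is what lets it collect stats about the other containers, as demonstrated below.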
Gunnar Aasen 56:54.993 But basically, Docker Compose is really easy to get up and running. And the bread-and-butter of the TICK Stack working with Docker is this Docker input plugin in Telegraf for collecting Docker stats. This is the canonical way to grab your container stats using Telegraf, and generally using the TICK Stack as a whole. I’ll show this in a little bit using the Docker Compose TICK Stack setup I just showed you—how you can collect Docker stats with the TICK Stack and use Chronograf with that as well. And you can see here, this input plugin will allow you to collect a bunch of different stats; I’ll go over some of the caveats of using this plugin in a little bit, too. The other thing I want to mention is that Docker also has an events API—a sort of streaming API—and we would like to add support for that in Telegraf, but it is an open issue. So if you’re feeling particularly comfortable with Docker and want to contribute to the TICK Stack, and help the community get better visibility into Docker events, I would totally recommend making some contributions there.
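Enabling the Docker input plugin is a short stanza in telegraf.conf. A minimal sketch (the endpoint shown is the default local daemon socket; the output URL assumes an InfluxDB container named influxdb on the same compose network):

```toml
# Read metrics about Docker containers from the local Docker daemon
[[inputs.docker]]
  ## Docker daemon endpoint; the unix socket is the default
  endpoint = "unix:///var/run/docker.sock"
  ## An empty list means collect stats for all containers
  container_names = []

# Ship everything to InfluxDB
[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
  database = "telegraf"
```

With this in place, Telegraf writes measurements such as docker_container_cpu and docker_container_mem to the telegraf database.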
Gunnar Aasen 58:34.965 So let me share the terminal here. All right, so like I showed earlier, we have a Docker Compose setup for the TICK Stack, and essentially this makes it super easy to spin up. So if we go here, you can see that I’m in the TICK-docker repository, and on GitHub we basically have a different compose version for each of the major versions of the TICK Stack. The latest version right now is 1.3. In the next couple of months, hopefully, we’re releasing the 1.4 cycle, but 1.3 is the latest right now. So we’re just going to change into this directory, and basically, we have our Docker Compose file here, and we can come in here—and let me share this as well.
Gunnar Aasen 59:59.663 All right, as you can see here, here’s our Docker Compose file. Gary went over some of this before, but basically, this will just spin up a container for each of the components of the TICK Stack: Kapacitor, Telegraf, InfluxDB, and Chronograf. And we can go back here, and I’ll show you that if you do docker-compose up -d, to run it in detached mode, you will see that we start up all the containers for the TICK Stack. You can see here we have Chronograf, InfluxDB, Kapacitor, and Telegraf running. And the way this works right now is, Telegraf is collecting metrics on my local laptop—the CPU, memory, and all the other stuff like that—but I’ve also set it up to collect metrics from the actual Docker daemon that’s running on my laptop right now, and to collect metrics on the containers that are running as well. And we’ll see that in just a second—or actually, I can just show you right now: if we use Docker Compose to spin up the InfluxDB CLI, we can come in here, and we can show the databases, and we can use the telegraf database.
Gunnar Aasen 61:56.497 So Telegraf is reporting stats to the telegraf database, and we can show the measurements that we are collecting, and you can see right here that we are collecting some Docker stats. This isn’t going to be pretty output, but we can look at the Docker stats. You can see that we are collecting CPU stats for the different Docker containers here—for Kapacitor and Chronograf as well. So as I said earlier, we’ve got this whole TICK Stack running right here. Now, you can interact with the TICK Stack using the InfluxDB CLI, and there’s also a Kapacitor CLI for setting up alerts, but probably the easiest way to interact with the TICK Stack is to use Chronograf. Chronograf is a UI on top of InfluxDB, similar to Grafana, if you’ve ever used that. We’re developing Chronograf specifically to be very tightly integrated with the InfluxDB and Kapacitor stack, and so you’ll see here that we’ve built in some easy ways to interact with Kapacitor, as well as building dashboards and other things like that.
Gunnar Aasen 63:38.776 So this is the homepage when you spin up Chronograf. As you can see here, I actually already have some alerts set up—I think these are some basic CPU alerts—and you can scroll through them here. But all the important, or fun, stuff is over on the left-hand side here in Chronograf. What’s nice is that Chronograf will automatically recognize Telegraf stats that are reporting to an InfluxDB server, and you can see here that I’ve had a couple of hosts that I’ve been running different things on, so we have some pre-canned dashboards that are all spun up here. We’re also able to use the Data Explorer, which is a way to easily create InfluxQL queries. InfluxQL is the InfluxDB-specific SQL-like language. It’s definitely not SQL, but it certainly reads like it and acts like it in most ways. Basically, this allows us to do some interesting things, like explore our data. So we can come in here and take a look at the Docker data itself, and, let’s see, we’ll select that. One of the nice things about InfluxDB is that it uses a concept of tags. All data is tagged, and tagging allows you to easily segment your time series data. If you just think of this specific line here as a specific thing—in this case, the number of containers the Docker daemon is running—that is what’s called a time series, an individual time series. And these tags and fields allow you to group various time series together and split them apart as well.
Gunnar Aasen 66:14.003 And so we can do this and come over to, say, the memory. Let’s see—we can split it up by container name, and we can look at the usage percent. And we can see here, for example, the Docker memory usage for each of the containers on the system. My screen is a little bit constrained, but you can see here that the CLI containers have no value right now, and we can see that InfluxDB is using 3.57% of my memory. We also have a dashboard builder—I had created a dashboard, but it looks like I may have blown it away. You can create dashboards similar to the Data Explorer; in this case, let’s recreate that memory graph here, and we’ll do usage percent. So this allows you to create dashboards, and the nice thing here is we can also add template variables, to make it easier if you’re running a fleet of machines that are all running Docker—or in the case of Docker Swarm or something like that—and you want to split your dashboards up based on specific services or specific hosts. Creating template variables is an easy way to set that up as well.
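As a sketch of the kind of InfluxQL the Data Explorer generates for the memory view above (the measurement and field names follow Telegraf’s Docker plugin conventions, e.g. docker_container_mem):

```sql
-- Mean memory usage percent per container over the last hour,
-- in 1-minute buckets, split by the container_name tag
SELECT mean("usage_percent")
FROM "telegraf"."autogen"."docker_container_mem"
WHERE time > now() - 1h
GROUP BY time(1m), "container_name"
```

The GROUP BY on the container_name tag is what splits one query into a separate line per container.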
Gunnar Aasen 68:12.829 Finally, I think one of the more useful things in Chronograf is that you can set alerts and alert rules. And so here—this one isn’t Docker-specific, but I’m just going to say, for our total CPU, if it goes over, let’s see, 6%, let’s create an alert. You can set it up here to alert to a number of different endpoints. In this case, I don’t have anything set up, but you could send it to Slack or PagerDuty; the way you would configure that would be down here in the configurations. And, anyway, you can do some more data management as well: you can set up users, view queries, and view your databases and retention policies. So, that is it for my presentation on using Docker and the TICK Stack. I encourage you to check out the links I showed you and play around with the TICK-docker Compose demo. And Chris will leave the webinar open for a little bit more if you want to ask us questions about monitoring Docker with the TICK Stack.
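Under the hood, a Chronograf alert rule like the CPU threshold above is handed to Kapacitor as a TICKscript. As a rough sketch (the 6% threshold mirrors the demo; the field names and the Slack handler are assumptions—Slack would need to be configured in kapacitor.conf):

```
stream
    |from()
        .measurement('cpu')
        .groupBy('host')
    |alert()
        // fire when combined user+system CPU usage exceeds 6%, as in the demo
        .crit(lambda: "usage_user" + "usage_system" > 6.0)
        .message('CPU usage high on {{ index .Tags "host" }}')
        .slack()
```

Chronograf generates and manages scripts like this for you, which is why the alert rule UI is usually the easier path.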
Chris Churilo 70:00.843 Cool, thank you. So we will leave the line open for a little bit, and Gunnar, would you mind reading some of the questions that were asked earlier that you answered?
Gunnar Aasen 70:15.447 Yeah. All right. So we’ll go through some of the open questions right now. Archie asks, “If you set up Telegraf in a Docker container to monitor host status, specifically using the disk plugin, is it possible to monitor logical volumes?” I believe so, yes. I’m pretty sure you have to do some setup—I think it depends on how your volumes are set up, and there are some Docker specifics you have to expose to your Telegraf container—but it is definitely possible to monitor logical volumes with Telegraf. Let’s see: “Can we get a recording?” Yes, the recording will be posted. In terms of questions we’ve answered already: “Is a single image able to be deployed on multiple OSes, or would you build an image for each distro?” I think both Gary and I answered this, but essentially, the Docker daemon runs on top of the OS and exposes that OS to the containers running from these images, so you don’t really need to think of the images as having to be built for specific underlying OSes. As Gary explained in his part of the presentation, Docker abstracts all of that underlying deployment information away within the image and the container. You can expose some things from the underlying OS to the Docker containers, and Docker containers can also be created with an almost minimal amount of data within themselves, but essentially, you do not need to create new images based on the underlying OS on which the Docker daemon is deployed, generally.
I think there are some exceptions for Windows. And then Anonymous asks, “In some diagrams you showed Windows being run on the same parent OS. Additionally, you said that Docker shares the kernel. I was under the impression that Docker ran via a lightweight VM layer (Docker Machine) when the kernel doesn’t match the parent. Is that the case?” So, Gary did a pretty good job of explaining this, but essentially Docker runs containers as processes on the underlying OS, and Docker manages the setup—I’m not a Docker expert, so maybe Gary wants to—
Gary Forghetti 74:24.866 Yeah, let me answer that. So—
Gunnar Aasen 74:27.570 Yeah. Okay.
Gary Forghetti 74:28.115—in that same chat question, I posted a couple of commands. So, basically, my Docker node—the node that’s running Docker—is running Ubuntu 16.04.3. I then started up a container running the Docker official image for Apache httpd. And once that container came up, I ran a command to display what the OS was inside that container, and that container is running Debian 8 (Jessie). But that container has a bare-bones, minimal Debian Jessie operating system. There’s no kernel, there are no drivers. It just contains some libraries and some code so it can communicate with the underlying Ubuntu operating system. So the container’s running a Debian-like OS, right? Without the Debian kernel and kernel drivers, but it’s running on top of Ubuntu. The real underlying operating system that Docker—the Docker daemon—is running on is actually Ubuntu 16.04. And so the container that’s running Apache has no idea that Ubuntu is the real underlying operating system.
Gunnar Aasen 75:52.310 Great. Thanks, Gary. We’ve got another couple of questions. Our question from Archie is: “Wondering, can Telegraf detect logical volumes to monitor? In the sense that, if Telegraf gets deployed on multiple hosts, where each may have different logical volumes set up, does it need to be specifically configured for each host’s volume setup?” So Telegraf—I think the disk plugin—will collect data from anything that is exposed to it. So in this case, I believe the Docker container will have to be set up to expose the volumes to [inaudible]. And I do not know, off the top of my head—maybe Gary knows this—if Docker has a way to automatically expose logical volumes to containers without some kind of configuration for that kind of thing. I want to say no, but I may be proven wrong [laughter]. Gary, do you have any—?
Gary Forghetti 77:06.751 Yeah, I’m looking at the question. Can it detect logical volumes to monitor? No—you are going to have to expose the volumes to the application running in the Docker container.
Gunnar Aasen 77:20.503 Yeah. Okay, great. And then Nickolas asks: “Are there any performance restrictions to putting InfluxDB into Docker instead of putting it onto the host OS?” Generally, we’ve found there are not a ton of restrictions, actually. Obviously, you have to think a little bit more about the fact that you’re using the Docker daemon—in some sense, you have to choose whether you want the Docker daemon to manage the process, essentially, or shunt that off to systemd. And we actually find at InfluxData that Docker is very useful for easily setting limits on containers—restricting the memory or CPU available to the container, specifically. So it does require a little bit more configuration than just setting up the base package on the host OS. But you get the traditional Docker benefits of containerization and having it be able to run anywhere, and you also get some additional benefits as well, with the caveat that you do need to set it up, and it can be an extra step.
Gunnar Aasen 79:02.258 We’ve got one more question from Armond and I think we’re going to end the webinar. So the last question is: “Is there any possibility to limit server resource usage between containers?” I think this is asking whether server resources can be limited between a set of containers, or specifically between the connection of two containers? I don’t know. Gary, do you know if that’s possible?
Gary Forghetti 79:40.460 On a per-container basis, you can control how much CPU, memory, and so forth that container gets. There are options on the docker container run command, and there are options in the compose YAML files, depending on how you’re deploying your application, to restrict the usage of system resources. So, for instance, if you do a docker container run --help, you’ll see loads of options there to restrict what a container’s doing—to prevent it from completely using all the resources on the node.
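For example, the same limits can be expressed either as docker run flags or, for a swarm service, in the compose file. A sketch (the image tag and the limit values are arbitrary illustrations):

```yaml
# CLI equivalent: docker container run --cpus 0.5 --memory 256m <image>
services:
  web:
    image: chrch/docker-pets:1.0   # hypothetical image/tag
    deploy:
      resources:
        limits:
          cpus: "0.50"             # at most half a CPU core
          memory: 256M             # hard memory cap for the container
```

These caps keep one misbehaving container from starving the rest of the node.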
Gunnar Aasen 80:23.998 Yeah, I think that question might need some clarification. But yes, you can set limits on individual containers for sure. In terms of what other kinds of resource limits you want to set, you’ll probably have to look at the documentation there. And then also—I missed the name—someone asked a question right at the very beginning. He basically says he’s looking at an immutable image build chain that uses Docker to package and deliver services that change dozens of times in a week. One of the services they deliver is Telegraf, running in a container next to other containers, in a swarm compose, with each worker a VM with these containers starting on boot, mostly. So the question is: “What are some recommendations for tagging the metrics collected by a sister container?” So the Telegraf Docker input plugin will add—and I think I showed this in my demo—a bunch of tags to the metrics it collects. Some of those tags are the container name, the container version, the container image, and various things like that.
Gunnar Aasen 81:56.486 So we typically find it’s quite easy to use those various tags that are automatically collected—that information that’s already there—to designate which containers are running where, and which containers are associated with which other containers. Telegraf has some other options as well, in terms of being able to set tags on various metrics based on the tags that are set on the metrics it collects, which is a little bit confusing, but there is some documentation in Telegraf on how to do that. Basically, my recommendation for collecting sister-container stats would be to namespace something in your Docker container name, or something else—add a namespace in there—or specifically add a tag for the various sister-container segments or sets. You could be as explicit as just setting a tag called sister_container_group, or just container_group, and setting it equal to the load balancer group or the application group or something like that.
Gunnar Aasen 83:33.198 Then he also asks, “What have you seen for folks integrating build versions into metrics?” Again, the image version is collected by the Telegraf Docker input plugin, so that's pretty easy to instrument. And finally, “Do you have any great examples of Dockerized Telegraf collecting stats on the Docker [inaudible] that's running it?” I don't know if there's a specific example I have in mind, but I'd recommend checking out the branch I used for this demo of the TICK Docker Compose project—specifically, the 1.3-Telegraf-docker-input branch. You can see in that branch that Telegraf is set up to collect Docker container stats. Aside from that, there are probably some examples floating around for InfluxDB, but I don't have one off the top of my head.
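The sidecar arrangement described here can be sketched roughly as follows; the service names, image tags, and file paths are assumptions for illustration, not taken from the actual 1.3-Telegraf-docker-input branch:

```yaml
# docker-compose.yml sketch: Telegraf as a sister container monitoring
# the Docker host it runs on, via a read-only mount of the Docker socket.
version: "3"
services:
  telegraf:
    image: telegraf:1.3          # version chosen to echo the demo branch name
    volumes:
      # Expose the host's Docker daemon to Telegraf's docker input plugin.
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Mount a local config containing the [[inputs.docker]] section.
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
  app:
    image: nginx                 # placeholder for the application container
```

The key detail is the `/var/run/docker.sock` mount, which is what lets the Dockerized Telegraf see stats for the other containers on the same host.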
Chris Churilo 85:01.122 Okay. I know we've got more questions, but I think we're going to have to ask people to put your questions at community.influxdata.com. I will post this webinar later today, and I appreciate Gunnar and Gary reading out these questions, because I think it helps the recording for people to experience that portion—a lot of times people have the same questions themselves. So overall, thank you so much, everybody. This was very well-attended, and we had a lot of great questions and a lot of great information. And we always love to hear feedback from everybody, so if you have any other recommendations on topics that you want us to cover, please just shoot me a line, and we'll do our best to accommodate you. And hopefully, Rob, your questions got answered. If not, just shoot me a line and we'll make sure we move your questions over to the community site and go from there. All right. When I get a chance, I will post the presentation on SlideShare as well. And you'll be getting an email from me with a link to the webinar. I want to thank Gary and Gunnar for doing such a fantastic job. And of course, we want to thank you all for participating today. Thank you.
Gunnar Aasen 86:26.118 Thanks, Chris. Thank you, Gary.
Gary Forghetti 86:27.151 Thank you. Yep.
Chris Churilo 86:28.164 Bye.
Gunnar Aasen 86:29.705 Bye.
Gary Forghetti 86:30.263 Bye.