In the service of increasing business agility, enterprises are turning to the cloud, microservices, and containers to accelerate software delivery. These sweeping architectural changes are creating a whole new set of security concerns, for which existing security methods and tools are at best inadequate.
Cloud-native security can provide one consistent security model that works across sites and clouds, with no source changes required of developers; it can give InfoSec teams clear real-time and historical application visibility; and it can avoid network gymnastics by decoupling security from the network infrastructure.
Don Chouinard and Bernard Van De Walle of Aporeto will demonstrate how to secure cloud-native workloads without complex network gymnastics while using InfluxDB to maintain longitudinal data for visualization and troubleshooting.
Watch the Webinar
Watch the webinar “How Aporeto Secures Cloud-Native Workloads with InfluxData” by filling out the form and clicking on the download button on the right. This will open the recording.
Here is an unedited transcript of the webinar “How Aporeto Secures Cloud-Native Workloads with InfluxData.” This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
• Chris Churilo: Director Product Marketing, InfluxData
• Don Chouinard: Product Marketing Lead, Aporeto
• Bernard Van De Walle: Engineering/Product, Aporeto
Chris Churilo 00:00:02.629 All right. So let’s go ahead and get started today. Today, I’m pleased to announce that we have a customer webinar with Aporeto. They’ll be going over their solution and also talking about how they use InfluxData to help drive their solution. So, Don and Bernard, are you guys ready?
Don Chouinard 00:00:22.328 Yes. Thank you very much, Chris. Thanks for having us this morning.
Chris Churilo 00:00:24.971 Awesome. I’ll let you take it away.
Don Chouinard 00:00:26.755 Okay. Great. So let’s start with the first slide here then. And thank you for joining us this morning, everybody. So this is one approach to security. Just want to start you off with a chuckle. So clearly, we can do better than this, and that’s why we’re here to speak to you today about cloud-native security. We’re from Aporeto. Aporeto are leaders in security for cloud-native applications. And with this product that we have, which uses InfluxDB, people are able to achieve much stronger security than they otherwise would be able to have. And strangely, the operations actually become simpler rather than more difficult. And lastly, when this is used, no changes need to be made to the actual source code. This is done in the mesh, and you’ll see the details of that. So I’m going to tee it up with just a couple of slides, just to frame this, and then Bernard’s going to do a demo of our product using Kubernetes, and then I’ll wrap up at the end. Sound good? All right. Let’s do this.
Don Chouinard 00:01:45.189 So we’re getting a lot of feedback from the people that we’re calling on in the area of cloud-native security, and they’re telling us their frustrations. They’re telling us what they need from a product, and you might find that you resonate with some of these things that they’ve been telling us. Perhaps you have some of your own items that you’ll have in mind. So the first is that you’ve got all of these containers. You’ve got all these microservices and they’re running around on-premises, maybe on a hybrid cloud or a multi-cloud environment, and trying to secure all of these application components using IP address range rules is basically not working. These components are popping up, closing down, maybe within seconds, they’re moving around at the instructions of the orchestrator. Or for HA or performance reasons, multiple copies are being spun up and spun down. It’s just crazy to try and keep an environment like that secure, using IP address ranges. This is something we used to be able to do on-prem with the monoliths, where things were moving very slowly. But when we move to cloud-native and we start to break apart the monolith and decompose it, that model just completely falls apart. So we need to move to a new model. And what people are interested in is moving to this model where rather than being tied to IP address ranges, protection is tied to the actual application components themselves.
Don Chouinard 00:03:20.573 Now, just take a leap with me for a minute. Imagine if security were just attached to the application components. All of those problems go away. Now security is always current, it’s easy to keep up, security follows the application components wherever they run, however many of them there might be at any given time, and life just gets a heck of a lot simpler. And so people are looking for that one consistent model that they can use on-premises, prove it out, and then use across the hybrid cloud and the multi-cloud. And it is possible to do that, of course, with Aporeto. But you don’t want to have to make changes to your source code, because impacting the organization’s velocity of innovation is death. In today’s highly competitive world, the developers need to be able to just plow forward, developing new capabilities that are going to fuel the business and the key business success factors, and not be concerned with having to pepper code throughout everything they’re doing in order to address security. Now, security is a very specialized area; to do it right is very difficult and very complex, so it should be done by a specialist. It should be done by a framework or a solution, with no source code changes required, and you’ll see a little bit about that in a minute.
Don Chouinard 00:04:38.865 Now, if you have three application components, you can probably write those security policies. They’re human-readable, they’re easy to write, and life would be good. But when you have 500, all of a sudden you don’t want to be writing all of those security policies yourself. So what we do is we go into an observation mode, and we write all the policies for you. And that is something that people, in their feedback to us, very much value. So security now can be codified. You can actually build it into your CI/CD pipeline. This is the great hope that everyone has: that not only will the current components, unchanged, be properly secured with fine-grained security, but also any new components that are made – even by a new programmer that just came onboard – will have the same level of security around them, so that the overall application is protected. And it’s possible to do this with Aporeto, certainly. And what people have been telling us is that when they use the product for a little while, the big surprise for them is not only the automation in the CI/CD pipeline and the joys that can come from that – which you’ll see when we show you on Kubernetes here in a minute – but also the incredible visibility that you get into your security profile with your application. So sometimes you just want to do something simple, like encrypt the traffic from Component A to Component Q, and that’s something that would take a lot of work to do properly. So that is actually another thing that the Aporeto product will do. With one simple rule, you can say, “I need to encrypt this traffic because these two components are in different zones, and I just feel better doing it that way.” So that’s your quick overview of the pain that people are having and the design center that we have for Aporeto. And now, with one picture, I’m going to show you what Aporeto actually does.
Don Chouinard 00:06:44.969 So let’s start at the bottom: we’ve got some hosts. These hosts could be virtual, or they could be physical. And they’re running some application components, so these would be Linux processes, or containers in pods if you’re on Kubernetes. You could use any orchestrator you want. And it’s very easy to just install the enforcer onto each host. It’s just a container, deployed as a DaemonSet, so you get it running on each host. And then the enforcer is going to wrap all of those application components, and it’s going to protect them from each other and from application components that might be running on other hosts. All of the security policies live up in the cloud, in software as a service. Now, this can run in our cloud, which is a multi-tenant cloud, or you can run your own copy of it to have more control. The security policies are up there, and they get pushed down to each of the enforcers. This is what you want: centralized management but distributed enforcement. And this is how we’re able to scale to the massive levels that our customers need us to scale to. The local enforcement is the key, and you only have one footprint on each host, so it’s easy to get down there. So the security policies are up in the cloud. Now, the other thing that’s happening is that the telemetry information from each of the enforcers is going up into the cloud, and this is going right into InfluxDB. So we’ve got a Time Series Database to capture it in, and now we can do things like give you some great dashboards, good visibility as to what’s going on with your application components and their security, and advanced analytics, even to protect against things that are on the other side of the horizon that aren’t quite visible yet.
Don Chouinard 00:08:33.269 So now, all of this has to be secured with cryptographically pure security. So we start with a root CA, right? An X.509 root certificate, and that empowers the security service. So now the security service is blessed. It’s known. It’s checked in. And now it blesses the individual enforcers, and the enforcers bless the individual application components. So now every entity in this picture is known, with a cryptographically strong identity for each of the application components. And the security will be managed by the enforcer, which is part of the chain of trust. And it will do its enforcement based on the security policies that, as I mentioned earlier, can be automatically generated. Which is important, especially when you get a lot of application components. So that’s it. That’s the picture of what we have and how it works. And so now, I’d like to hand over to Bernard to take you through some detail of seeing this product in action, and then you’ll get a little more of an idea of what I’m talking about. Bernard?
Bernard Van De Walle 00:09:42.657 Yeah. Thanks, Don, for that nice introduction. So I would like to explain why we actually need a Time Series Database, and why we chose InfluxDB as our Time Series Database of choice. Monitoring and enforcing every single event in your cloud or your cluster—let’s take, for example, a Kubernetes cluster—generates a lot of different events. To illustrate this, I would like to take the example of a two-tier application, which you can see on the right side. What you’ve got is a client, a typical web client; a content server, which could be an Nginx container; and a database process in the backend. Typically, this application accepts HTTP requests coming into the Nginx container frontend, and these generate backend database requests. And so all of this activity in your cluster is going to generate a lot of events, and we are going to monitor and record those events in the Time Series Database. For example, one of those events could be the Nginx container going up. This could happen at a specific moment in time. And for some reason, that container goes down and then comes up again, and we want to be able to monitor those events and replay them for auditing and logging reasons. The other thing that we can fully monitor and record is all the flow events between those different components. So for example, the client is going to generate an HTTP request to the frontend at a specific moment in time, and this will also generate a request to the database in the backend. And every single one of those flow events, whether accepted or rejected, will generate a point in time in the InfluxDB Time Series Database. So what we really do is record two types of events: the ones related to the processing units, the container processes—basically when a process or container comes up or down—and the ones related to flows.
And this means that every single time you open a new TCP or UDP connection, we are going to generate an event and push it to the TSDB.
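The two event types Bernard describes map naturally onto InfluxDB's line protocol, where each event becomes one timestamped point with tags and fields. Here is a minimal sketch in Python; the measurement, tag, and field names are illustrative only, not Aporeto's actual schema:

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Format one event as an InfluxDB line-protocol point:
    measurement,tag=value field=value timestamp."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}i"
        for k, v in fields.items()
    )
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# A processing-unit lifecycle event: the Nginx frontend container comes up.
pu_event = to_line_protocol(
    "processing_units",
    {"namespace": "beer", "role": "frontend"},
    {"event": "start", "pid": 4120},
    1515151515000000000,
)

# A flow event: the frontend opens a TCP connection to the backend.
flow_event = to_line_protocol(
    "flows",
    {"src_role": "frontend", "dst_role": "backend", "action": "accept"},
    {"port": 5432},
    1515151516000000000,
)
```

In practice an enforcer would batch many such lines into a single write, which is what makes the high ingestion rates Bernard mentions sustainable.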
Bernard Van De Walle 00:11:59.215 And so the reason why we do this is because we want to be able to replay and audit what happened in your cluster over the past hour, or day, or week—that’s up to you. Okay? And so when we decided to use a TSDB to do this, the biggest requirement for us was a very high ingestion rate, because we can install Aporeto and the enforcers on up to hundreds of hosts. This means that every time you open a TCP connection in your cluster between those hosts, it will generate an event, and those events altogether generate a lot of data. So we looked around at a couple of databases, mainly KairosDB and OpenTSDB to start with, but none of them were highly available or easy to use. At the end of the day, we chose InfluxDB, mainly because it was very easy to set up in HA, and also because it was written in Go, which was a small plus for us, but always nice to have. The performance was also key, by the way. And so what I want to do next is illustrate all of this with a very simple demo, which is called Apobeer. It’s on GitHub. It’s basically a simple two-tier application that will generate requests between a client and a server, and you will see how this generates a couple of events between containers and processes.
Bernard Van De Walle 00:13:26.955 Okay. So this is the demo application. Basically, we’ve got an architecture with two clients, one tagged as frontend and one tagged as external. And then you’ve got a server, which is tagged as role=backend. What we want to show is that a client which is allowed to open a connection to the backend will succeed in doing so, and a client which is not allowed to open a connection to the backend will be rejected. And you will also see how Aporeto keeps track of those events, so you’ll be able to replay them in time and see which container or processing unit was trying to connect to which other processing units. Okay? So for this demo, let me switch to Kubernetes. What I’ve got right now in my cloud, running on GKE, is a full Kubernetes cluster. And if I do a kubectl get nodes, for those of you that know Kubernetes, you will directly see that I’ve got three nodes ready to accept new container requests. And in this cluster, I already installed Aporeto. So if I do kubectl get pods, like this, what you can see is that we pre-installed the Aporeto enforcers, and you’ve got one enforcer on each node. And this cluster is connected to our Aporeto backend, with the InfluxDB Time Series Database. This means that if I switch to our backend, this is the console, the UI for the product. What you’ll be able to see is a dashboard in which you see all the enforcers that are up, running, and connected. And at this moment in time, you have no processing units running yet, which means no container in your cloud is running.
Bernard Van De Walle 00:15:18.043 So what I’m going to do very simply is start the application. To do this, I simply create the application based on the definition file. Here we go. There we go. And automatically, by doing this, Kubernetes scheduled some containers in my cloud. And you can see this: we’ve got the backend, which is a server, and then external and frontend. And so at this point, we’ve got nine containers up and running. And if I go back to my cloud, I see a new namespace called Beer. Here we go. And what you see is that automatically we were able to monitor those containers coming up, and so I’m going to go into live mode. And by the way, this visualization is coming directly from InfluxDB. We are querying InfluxDB for all the events that all the enforcers are pushing up to Influx. And what you can see is that right now we are displaying the last 20 seconds, and the flows from the last hour, but you can change this. So, for example, I’m going to change it to the flows from the last 30 seconds. And what you can see right now is that all the flows between frontend and backend, and all the flows between external and backend, are red, which means that nothing is allowed. So what we want to do, of course, is allow frontend to backend as an allowed set of flows. And to do this, I’m going back to Kubernetes and I’m going to instantiate a network policy. So I do kubectl and I’m creating my policy. All right. And as soon as I do this, the flows between frontend and backend will start to be allowed, and you’ll be able to monitor this live in our tool. And some of those flows will now turn orange, which means that over the last interval of 30 seconds, those flows were both accepted and rejected. Okay. So this is the Time Series Database at work: we aggregate the data that we’ve got for the last 30 seconds, and we display it based on the data we received. Okay. So let’s check what’s going on here.
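The red/orange/green edge coloring described above is an aggregation over the flow events stored for the chosen window. A simplified sketch of the logic (the real UI computes this from an InfluxDB query; this is not Aporeto's code):

```python
def edge_color(flow_events, window_start, window_end):
    """Classify one src->dst edge from the flow events that fall
    inside the selected time window:
      green  - every flow in the window was accepted
      red    - every flow in the window was rejected
      orange - the window saw both accepted and rejected flows
    """
    accepted = rejected = 0
    for ts, action in flow_events:
        if window_start <= ts <= window_end:
            if action == "accept":
                accepted += 1
            else:
                rejected += 1
    if accepted and rejected:
        return "orange"
    if accepted:
        return "green"
    if rejected:
        return "red"
    return None  # no traffic on this edge in the window

# Frontend->backend: rejected before the policy landed, accepted after.
events = [(10, "reject"), (20, "reject"), (25, "accept")]
```

Widening or narrowing the window, as in the demo, simply changes which points are aggregated, which is why an edge can shift from orange to green as the older rejected flows age out of the interval.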
Bernard Van De Walle 00:17:57.644 Something else that we can do very simply is, for example, scale the deployment, and you will see how this is directly displayed in our UI. I do a scale with Kubernetes, and I bump the number of containers from three to five. There we go. Here we go. So as you can see, some of the flows are becoming green, which means that they’ve started to be allowed by Kubernetes itself and by Aporeto. And here we go, more and more of those flows are becoming green. This is because I can re-aggregate data from InfluxDB, which means we display over the last 40 seconds. But if you want a bigger picture, we can display the flows from the last five minutes, and so on and so on. And the other thing we can do with the help of InfluxDB is simply go out of live mode, so here we go. And this will display all the flows and all the containers that we’ve got in our cluster for the last two days, and you can choose that interval. So for example, between Monday and Tuesday—so between yesterday and today—and you can even replay some of those, like this. And since this cluster was just brought up, nothing happened in that cluster until this morning. And so I’m just going to fast-forward, and we can replay all those events, and there we go. So this is really key for auditing and replaying what happened in your cluster. Every time a container or flow event happens in your cluster, we push that event into InfluxDB and it is recorded, so we can replay it at a later moment in time. And, of course, we use the feature of Influx that allows you to age data and aggregate it over weeks and months after it is put into the DB. And so the last thing I want to show you is, if I switch to another cluster that I’ve had running for a long time, such as a cluster I can replay. That one actually launched yesterday, I think.
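The data-aging Bernard mentions corresponds to InfluxDB retention policies combined with continuous queries: raw per-connection points are rolled up into coarser summary points before the raw data expires. A hypothetical sketch of that rollup step:

```python
from collections import defaultdict

def downsample(points, bucket_seconds):
    """Roll raw per-connection flow points up into coarser time buckets,
    the way a continuous query would aggregate data before the raw
    points age out of a short retention policy."""
    buckets = defaultdict(lambda: {"accepted": 0, "rejected": 0})
    for ts, action in points:
        bucket = ts - (ts % bucket_seconds)  # floor to the bucket boundary
        key = "accepted" if action == "accept" else "rejected"
        buckets[bucket][key] += 1
    return dict(buckets)

# Four raw flow events collapse into two one-minute summary points.
raw = [(3, "accept"), (7, "reject"), (63, "accept"), (64, "accept")]
summary = downsample(raw, 60)
```

In InfluxDB itself, this kind of rollup is typically expressed as a continuous query that SELECTs aggregates INTO a rollup measurement with a GROUP BY time(...) clause, paired with retention policies of different durations.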
If I take all the flows from Monday to today and I replay this, this will be [inaudible]. Yeah, here we go. And if I replay this data, what you can see is, yeah, nothing happened. And then at some point, the cluster went through three nodes, like this, for a little bit of time, then it prompted and then it died. And so this is the UI. But, of course, everything you see in the UI can be queried directly through an API. And so you can get a very granular amount of data on which container is coming up, which container is dying, restarting, and so on. And more importantly, which flows are attempting to connect to which container. Okay. So let me go back to the presentation.
Don Chouinard 00:21:05.762 Great. Well, thank you very much, Bernard. So that was interesting, because the security policies were attached to the individual application components. We had frontend, we had external, and then we had backend. And as more of these components were cranked up using Kubernetes, they were automatically protected because of the tags associated with the application components. And that’s one thing that I wanted to mention earlier: that is how we’re picking them out. We’re doing it based on label, and so it’s very scalable for really any environment that you would have. And speaking of which, that was just two clients going against a server. Let me bring your attention now to a little bit bigger picture. Here’s your typical three-tier application, where you have your web servers sitting in the first zone – the security zone – and the firewall rules say anyone can come into the web servers. Okay. Then you put your middle-tier logic, your containers, your microservices, whatever, into zone two to keep them protected, and only zone one can get to zone two. And then you put your databases into zone three, and to protect them you make a rule that says, “Only those programs whose address originates in zone two should be able to get to programs whose address is in zone three.” So typically, you would do these zones based on IP address ranges, and that’s where the trouble begins. What you want to have happen are these flows in green. Someone comes into the web server; the web server goes to pick a beer, to get a beer to give to the person. To pick a beer, it needs to get a random number, so it can pick a random beer. To pick a beer, it needs to get a price for the beer. Those go back to databases, and then someone gets a beer at a certain price. So that’s how it’s supposed to work.
Don Chouinard 00:23:03.132 But think of all the different things that can happen to this application that nobody thought could happen, or wished would never happen, because there is a rule that says that anyone from zone one can get into zone two. So the first arrow at the very top: you have an unintended access path to the microservice called Random Number. And someone could take control of that in a land-and-expand attack. Maybe that’s the weak link, and that’s where they start, and they expand their attack through all of the rest of the containers. It isn’t the land that ends up in the headlines. It’s the expand, because that’s when they get to the really good stuff and they steal the databases and so forth.
So all of these red arrows, and those of you that have been looking at it are like, “Don, you left half of the red arrows off.” I know, it got too confusing. There were all kinds of unintended access paths. And so when I do security based on IP address ranges, it’s just crazy. All of the paths that are left open, it is very, very difficult for me to report back to my boss, “What is the security posture of our application?” Well, I have one answer, “Horrible.” “Well, make it better.” “No, I have no visibility.” “You make it better.” “I don’t have the tools.” So it’s a very tough situation.
Don Chouinard 00:24:24.772 Now, things can be dramatically better with Aporeto in the picture. And this is why I’ve come back on after Bernard, just to paint this picture for you. When security is tied to the application components themselves, which are selected through labels, then your security will always be current. There are no old, obsolete firewall rules kicking around. You will have no unintended access paths. All traffic will be blocked. You can rest well at night. All traffic will be blocked unless you have specifically allowed it with one of these security policies. This is very easy to administer. It’s very easy to think about, because you’re out of the space of IP address ranges and you’re into, “These are the components of my application, and this is how they interact. Therefore, this is how I want them to interact, and only this way.” It just makes things very easy. Now, this is all done without making any changes to the source code; we’re just picking out these components based on their labels and putting the protection into place. And as I mentioned earlier, those application security policies can actually be automatically generated. You can edit them yourself too; they’re very human-readable. So here we have a three-tier application running across, maybe, on-premises, maybe across a hybrid cloud, Azure, AWS, Google Cloud Platform, multi-cloud. The security travels with the application components, and that’s what people want. Now maybe you’re in a Kubernetes environment and you’re thinking, “Well, is this going to work in Kubernetes?” because Kubernetes is the orchestrator; it’s got a lot of control over what’s going on. In fact, they have a thing in Kubernetes. Aren’t they all set with security? Don’t they have their network policy resource objects? Well, they do have network policy resources, but those are very limited in what they can do.
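The label-based model Don describes can be illustrated with a toy policy matcher: a rule selects source and destination workloads by their labels rather than by CIDR ranges, so the decision follows the workload wherever it runs. All names here are hypothetical, not Aporeto's actual policy format:

```python
def allowed(policies, src_labels, dst_labels):
    """Return True if any policy's selectors match the flow's source
    and destination labels. No IP addresses are involved: the decision
    follows the workload's identity, not its network location."""
    def matches(selector, labels):
        return all(labels.get(k) == v for k, v in selector.items())
    return any(
        matches(p["from"], src_labels) and matches(p["to"], dst_labels)
        for p in policies
    )

# One rule: anything labeled role=frontend may talk to role=backend.
policies = [{"from": {"role": "frontend"}, "to": {"role": "backend"}}]
```

New replicas spun up by the orchestrator carry the same labels, so they fall under the same rule automatically, with no policy churn as containers come and go.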
Don Chouinard 00:26:18.972 So what we’ve done is we’ve gone in and implemented a proper, 100% compatible network plugin for Kubernetes, and this is how Aporeto gets in. If you have network policy resources in place, we abide by them. But you can go much further once you put Aporeto into your Kubernetes environment, because rather than just being able to control ingress and egress traffic based on IP address ranges – which is what you get with stock Kubernetes – you can go on to get the benefits that I’ve been talking about this morning, which is that you can control traffic based on the identities of the application components. Not where they run, but what they are. You can request encryption between two pods and, boom, we’ll just do it. So with one policy, it happens. All the key rotation, chain of trust, all of that, we just take care of it. You don’t have to worry about it. We’ll automatically generate editable security rules for your hundreds of application components. Without us, you’re going to have to code those by hand, right? And jam them into your YAML. Well, that’s not going to be fun. So it gets much easier with Aporeto in the picture. We’re going to use our very highly scalable and proven plugin, and the security settings are going to be very easy to set up and keep running. You’re going to get that historical view that Bernard showed, with the ability to go back in time for forensic purposes and see what’s going on with your security posture at all points in time. And not all applications are 100% Kubernetes. They span Kubernetes and non-Kubernetes environments; boom, we’ve got you covered there. And lastly, even if you are all-Kubernetes, you might have individual clusters. And so the ability to have one security model that works across clusters, Aporeto’s going to bring that to you.
Don Chouinard 00:28:11.190 So lots of benefits of using Aporeto with your Kubernetes. We love Kubernetes. We love the other orchestrators as well, but I just wanted to point out specifically some of the things we’re doing with Kubernetes. And so in summary, I want to thank you for your time and point out that we love InfluxDB here; it is absolutely central to the Aporeto solution. We love its high availability: we can run it on three servers, we’ve got a cluster going, and we don’t have to worry about the time series data going missing. It gives us the awesome performance profile that’s required for the huge-scale environments that our customers are putting us to use in, and the complexity is very low. With Aporeto, we’re coming in and giving you identity based on the application components, and that’s a much more powerful model. So now you’ve got an enforcer on each host, access policies determining what can move, and one consistent security policy across on-prem and clouds, with fine-grained security for individual services. So you can’t have land-and-expand attacks based on open access routes that you didn’t even realize were open. You’ll have access based on multiple factors beyond just, “Who is this user that’s trying to access this process?” No, no, no. Now it’s, “Who is this user, running what process, trying to get to this other process?” So you’ve actually scoped down superuser privileges, which is really good for keeping control of land-and-expand attacks. So it’s going to be easier to administer and keep current than anything we’ve had in the past, and you’ll end up with an unrivaled security posture across sites and clouds. So with that, I would like to thank you for coming to the webinar, and let’s see if we have some questions.
Chris Churilo 00:30:00.802 So if you do have any questions, feel free to put them either in the Q&A or the chat panel, and I think I will prime the pump while we’re waiting for some of the questions. I just want to say thank you so much for this demo. I think watching your UI really highlights the need for tracking what containers are doing. A lot of people that I speak to feel like, “Well, containers often have a short life span. Why do we need to track that data?” But looking at your demo, you could see very quickly that that historical information is actually paramount for any kind of security audit.
Don Chouinard 00:30:40.295 Yeah. In the old days, a service, a monolith, would come up and it would run for weeks or months, and it would run at a fixed address. The old address-based security. I mean, it wasn’t that good back then. You see it in the headlines all the time. What happens is land-and-expand attacks. People get the privileged credentials. They get into some low-value target, and then they expand from there. And that’s before things are even popping up, shutting down within seconds, and moving around; there is no perimeter to secure even before you get there. With the old monoliths, people would get into that gooey center. It’s called the M&M theory: they get past the hard outer shell, and then they gorge on the gooey inner center and get everything that they’re looking for. So when you have security around each of the individual Linux processes, or individual containers, or pods, now you’ve got a degree of isolation where land-and-expand attacks will not be able to take place. So that finer-grained security can really pay off. I see a question here: “Can you give us a picture of your growth since adopting InfluxDB?” Bernard, we’ve had InfluxDB from the very early times, right?
Bernard Van De Walle 00:31:59.243 We started with KairosDB, but then we switched to InfluxDB, partly because the ingestion rate with KairosDB was not good enough.
Don Chouinard 00:32:08.174 Yeah. Yeah. And we’re a private company, so we don’t share too much information about our size and our customer base. But for most of what we’ve been doing, it’s been on the InfluxDB.
Chris Churilo 00:32:19.297 And then, do you also use InfluxDB to monitor just other performance metrics about your SaaS solution?
Bernard Van De Walle 00:32:27.799 Not right now, but this is something that we are looking at, maybe toward monitoring related to the nodes in Kubernetes, [inaudible] pushing some logs to InfluxDB.
Don Chouinard 00:32:38.477 Yeah. Okay. Now, we’ve got one here. We’ve got—Amir has an application that runs on AWS and on-premise. Okay. Right now, they’re using a VPN between their on-prem and their AWS. That’s pretty much what AWS tells you to do. So now, how would—if he were to use the Aporeto solution, how would he do things differently, Bernard? I can answer.
Bernard Van De Walle 00:33:15.236 No, no, it’s okay. Just give me—
Don Chouinard 00:33:16.076 I think I can get this one. So you’ve got your on-site, you’ve got a VPN up to your AWS, and so you’ve tried to have a degree of protection. So now you’ve got these address ranges, and they’re somehow linked together through the VPN that you’ve created.
Bernard Van De Walle 00:33:30.772 I think the key thing here is, [inaudible] for your networking. What we do is go on top of your networking, so we are completely disjoint from your network environment. This means you can apply security without being in the way of networking, which means you can use whatever networking provider you want. With [inaudible], you might use something like Flannel with Kubernetes, or whatever provider is available to you. Whatever you choose to use doesn’t really matter for us. We just come on top of it and plug in nicely.
Don Chouinard 00:34:00.787 Yeah. Usually these things go in stages, where people leave their VPN in place and run their application components across the environments. And once they get confidence in Aporeto, they can turn the VPN off. You don’t need it anymore, because components can run at any location and we’re going to be able to attend to the security. Am I seeing any more questions?
Chris Churilo 00:34:30.951 No, but it could be everyone’s just a little bit shy. So we’ll keep the lines open. And if you do have questions afterward, you can always email them to me and I will make sure I forward them to the guys and we’ll get them answered. I just want to remind everybody that this is being recorded, so I’ll post this at the end of the day and you’ll be able to take another listen to it as well. The more I think about what you guys are doing—I’ve always had this theory that security is really about monitoring. It’s about watching what’s going on, understanding the good state, and then getting that event triggered when there’s a bad state or something unusual is happening. So I’m pretty excited to hear one of our customers is using InfluxDB in this situation, because it just makes a lot of sense to me.
Bernard Van De Walle 00:35:32.351 Right. Exactly. And I just want to say again that Aporeto is two things: it’s mainly monitoring, and then we enforce once we know what’s going on in your cluster. When we started doing this, the funny thing was that a lot of people had no idea what was going on in their Kubernetes cluster or their set of Docker hosts and containers. So with this solution, what you can really do with InfluxDB is push all that information, get insight, and then apply security rules on top of it.
Don Chouinard 00:36:01.303 Yeah. Even to see the dependencies of the application components has value. Just right there. Before you even start going in and securing them, either manually or automatically.
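As a concrete illustration of the flow monitoring Bernard describes, here is a minimal sketch of how a container network-flow event might be shaped as an InfluxDB point. The measurement name, tags, and fields are illustrative assumptions, not Aporeto's actual schema.

```python
# Hypothetical sketch: shaping a container network-flow event as an
# InfluxDB point. Measurement, tag, and field names are assumptions
# for illustration only -- they are not Aporeto's real schema.
from datetime import datetime, timezone

def flow_point(source, dest, action, bytes_sent):
    """Build a JSON-style point suitable for influxdb-python's write_points()."""
    return {
        "measurement": "network_flows",
        "tags": {
            "source": source,   # label of the originating workload
            "dest": dest,       # label of the destination workload
            "action": action,   # e.g. "accepted" or "rejected"
        },
        "fields": {"bytes": bytes_sent},
        "time": datetime.now(timezone.utc).isoformat(),
    }

point = flow_point("frontend", "backend", "accepted", 2048)

# Against a live server you would then write it, for example:
# from influxdb import InfluxDBClient
# client = InfluxDBClient("localhost", 8086, database="flows")
# client.write_points([point])
```

Tagging by source and destination label (rather than IP address) is what lets the time series survive pods churning and moving between nodes.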
Chris Churilo 00:36:12.761 I think the other thing that I appreciated about your solution is that you don’t tie the hands of the developer. A lot of times, whenever you have a security breach, everything gets locked down completely, and that frustrates the staff—they understand, but it’s still frustrating. With your solution, they still get the benefits of being able to build very quickly in their environment, but with the confidence that things are being monitored and the right kind of policies can get applied.
Don Chouinard 00:36:47.231 Yeah. That’s been coming through loud and clear from our customers. They’re talking about the velocity of innovation not being impacted by security. A lot of people now are playing around with Kubernetes with no security, and they know their application cannot go live. They’re just ignoring the problem—kind of like the joke slide that I led off with: see no evil, hear no evil, speak no evil. And yet, there’s hope. All they need to do is introduce Aporeto into the environment, and all of the components that have been written, with no changes, can now get the level of security they need to go live.
Chris Churilo 00:37:27.253 Looks like we have another question from Amir.
Don Chouinard 00:37:33.394 Okay. Can you elaborate on the developer point? How is it that Aporeto does not impact developer progress? Because there’s no library that you need to include in your code, right? You don’t have to come to grips with, “Oh, I have a root CA. I’ve got a chain of trust. What am I doing for key rotation?” All of those problems are taken away. You just code your application to do what you need it to do, and then you label it. Each component needs a label. Once components have their labels, the security policies can select the components and apply themselves appropriately. And that’s what I really liked about the demo that Bernard did, where he cranked up more front-ends, more backends, more externals. And you know what? They were automatically secured. Because of the labels they had on them, we just picked them up and secured them. So you get that consistent model.
Bernard Van De Walle 00:38:29.561 Yeah. If you look at Kubernetes, for example, the way we do this means the user doesn’t even know we are in the cluster. From a developer perspective, you just write your code, and the only thing you need to do is label your containers, like you would for any Kubernetes deployment. We automatically pick up those labels and apply security based on them. So it’s really transparent from a user or developer perspective: we sit on top of the cluster and monitor what’s going on. The developer per se doesn’t really know that we are there. It’s more the InfoSec people who will use our tool and set up those rules on top of what the deployment already does.
Don Chouinard 00:39:15.524 Right. And let me just add that it’s more than InfoSec—it backs right up to DevSecOps now. The rules say: when a component has this label, it comes under these security policies, and therefore these are the things it will and will not be able to reach. All of that is codified in your CI/CD pipeline. So it keeps everyone happy. The developers don’t have to worry about security; it automatically gets applied as things come down through the pipeline, through test, staging, and deployment. There are never any situations where security is out of step with the actual application itself, and that’s pretty much as good as it gets.
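The label-based mechanism described above can be sketched in a few lines: policies select workloads by their labels, so a newly scaled-up replica with the same labels is covered automatically. The policy names, selector keys, and rule shape below are hypothetical, for illustration only.

```python
# Hypothetical sketch of label-based policy selection. Policy names,
# selector keys, and the rule shape are illustrative assumptions --
# this is not Aporeto's actual policy model.

POLICIES = [
    # Each policy selects workloads whose labels contain its selector.
    {"name": "allow-frontend-to-backend",
     "selector": {"role": "backend"},
     "allow_from": {"role": "frontend"}},
]

def policies_for(labels):
    """Return every policy whose selector matches the workload's labels."""
    return [p for p in POLICIES
            if all(labels.get(k) == v for k, v in p["selector"].items())]

# A freshly scaled-up backend replica is covered with no policy changes,
# because it carries the same labels as its siblings:
new_pod_labels = {"app": "demo", "role": "backend"}
matched = [p["name"] for p in policies_for(new_pod_labels)]
# matched == ["allow-frontend-to-backend"]
```

Because selection is by label rather than by address, the same policy applies consistently across clusters and clouds, which is the consistency the speakers emphasize.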
Chris Churilo 00:39:55.986 So unfortunately, in the security space, the thing that really propels people to do something is when a terrible event happens. Have there been any events reported in the press, or to the outside world, about breaches involving containers or any kind of orchestrator?
Don Chouinard 00:40:22.492 Well, they usually don’t go into that level of detail on the breaches. But certainly, we know that whether there’s an orchestrator involved or not, that security is something that you cannot just hide from. You need to deal with it. You need to make sure that things are attended to properly.
Chris Churilo 00:40:45.623 Cool. So if there’s any other questions, please feel free to put them in the chat or the Q&A, and we’ll keep the lines open for just maybe two more minutes.
Don Chouinard 00:40:55.702 Yeah. I don’t see any more questions coming in, but I certainly appreciate the interactivity and the questions we’ve gotten. Hopefully, you feel your time was well spent this morning. I’d like to invite people to reach out to us through aporeto.com. Come to our website—we’ve got data sheets and white papers where you can learn more. You can sign up for a more in-depth demonstration of the product and have a conversation with our security experts to talk through what this would mean for you, what it could do for you, and how you would get started. It’s actually very easy to get started with. We look forward to more interactions with people who are interested and want to learn more.
Chris Churilo 00:41:48.890 Excellent. Thank you so much. Just as a reminder, we have our training on Thursday for anybody that wants to sign up. It’s Introduction to InfluxDB. And as I mentioned, we’ll post this webinar and also we’ll be building a companion paper that goes with it. So you can get more information from us. Or, of course, you can go to the Aporeto website as well. So with that, I want to thank everybody. I want to thank our wonderful speakers and also thank our attendees. And with that, I wish you a good day.
Don Chouinard 00:42:21.880 Thank you.
Bernard Van De Walle 00:42:23.358 Thank you.
Chris Churilo 00:42:24.199 Thank you.
Bernard Van De Walle 00:42:25.088 Bye, Chris.