Coming soon! Our webinar just ended. Check back soon to watch the video.
Webinar Date: 2019-11-12 08:00:00 (Pacific Time)
Performing analytics at the edge, in the data center or in the cloud, is needed in today’s distributed landscape. Nebbiolo Technologies™ and AnalyticsPlus built an Edge Computing Platform that brings the flexibility of virtualized computation, network and storage resources to the edge, as an integrated solution combined with ML and AI libraries into their fogOS middleware. At the heart of the solution is the open-source time series database, InfluxDB, and the data processing framework Kapacitor.
In this webinar, they will share how they built this point-and-click solution to help customers in 3 specific verticals (manufacturing, healthcare, and finance) to unlock the power of high-frequency data in real-time to become a data-driven organization.
Watch the webinar “Edge Computing at its Finest with Real-time Analytics at Scale with High-Velocity data” by filling out the form and clicking on the download button on the right. This will open the recording.
Here is an unedited transcript of the webinar “Edge Computing at its Finest with Real-time Analytics at Scale with High-Velocity data”. This is provided for those who prefer to read than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
Anil Joshi: 00:00:02.455 Okay. Thanks, Chris. And good morning, everyone, here in the US. And if you’re joining from outside the US, then just hello. And what we will do today, I think — my name is Anil Joshi. I am the founder and CEO of AnalyticsPlus. And then, Pankaj, if you can get to the next slide, we can talk a little bit about the AnalyticsPlus. And then you can talk a little bit about Nebbiolo Technologies. So it is a joint presentation. And I will give you a brief description of AnalyticsPlus, and then Pankaj will talk about Nebbiolo Technologies.
Anil Joshi: 00:00:49.055 So the company is a Chicago-based company, primarily focused on advanced analytics and predictive modeling in the IoT and healthcare platforms. Most of our products and services are related to healthcare. But as you know, analytics is kind of a horizontal discipline, and it cuts across different industries. So we have done quite a bit of work in the healthcare, manufacturing, industrial, finance, and other verticals. And we are very delighted to actually have partnered with Nebbiolo Technologies to develop the analytics functionalities of the platform. So with that, I will ask Pankaj to talk about his company.
Pankaj Bhagra: 00:01:46.417 Good morning. Good morning, everybody. My name is Pankaj. I’m a co-founder and software architect at Nebbiolo. Primary background is in distributed system, networking, building complex systems. And last four, five year, we are busy building Nebbiolo Technologies. It’s a pioneer in edge computing, fog computing platforms. We’re trying to give the same cloud-like infrastructure at the edge so that the application can migrate flexibly between the cloud and an edge, and we should have a freedom of running any kind of analytics. And that’s that we partner with AnalyticsPlus. Nebbiolo Technologies is a family fund targeting the industrial automation, but that of a platform is fairly horizontal. It applies to any IoT verticals. So we also have some part is going on into energy, in oil and gas, but main focus continues to be as a small startup industrial automation. And between Nebbiolo Technologies and AnalyticsPlus, we’re able to bring a very holistic platform for deploying applications, running analytics, and that’s what we’re going to show you today. So I won’t take more time, and I’ll pass it back to Anil to start walking us through what we’re doing, what this can be applied and [inaudible].
Anil Joshi: 00:03:12.182 Okay. Thanks, Pankaj. And if you can get to the next slide. Yeah. So I think before I get into the platform capabilities, I just wanted to actually extend a little bit of my conversation earlier. So I think the team has the experience in both the traditional hypothesis-driven analytics as well as the machine learning AI-based analytics. And as the data becomes larger and bigger, the ML approaches are much more efficient and insightful than the traditional methods of statistical analysis. But the team has those capabilities both on the traditional data science as well as the evolving ML-based analytics. The focus, I think, is changing more and more on the streaming analytics as compared to the batch analytics, I think, that has been traditionally prevalent in most verticals for last 50 to 60 years. In the last five years, the whole game is actually changing, and that’s where, I think, the platforms like what we have built is quite useful. As the machines are getting smarter, every item is getting — have some kind of censors in it, and they’re sending data. So you need those kind of capabilities that can actually analyze and find the insight about the machine, which is sending the data and aggregating it with the other censors to come up with some insights about a business problem. So that’s sort of like the capabilities of the team, and where we are headed in terms of our analytics capabilities.
Anil Joshi: 00:04:58.982 The platform capabilities, I think Pankaj will actually talk more details and dive deep into the system and its infrastructure. But primarily, I think there are six or so feature set of the platform. So the whole platform has the compute capabilities, networking, storage, security, and analytics. As I said earlier, it’s pretty comprehensive platform to work with. For any kind of streaming data, time series data, this platform has the capabilities to develop applications and to find insights into those data. The good thing is that, I think, these are all kind of virtually and centrally managed to a distributed set of nodes, the compute nodes that are available there. So even though these are distributed compute nodes, you can actually manage them and work with them through a centralized management system that sits on top of it. Of course, the security is the key, and then specially, I think, in the healthcare and the financial work. So we have taken special kind of reinforcement in terms of developing features that actually adds to the security of the system. And again, I think this is a platform that is highly scalable, and I think Pankaj will talk a little bit more detail about how this is actually scalable.
Anil Joshi: 00:06:32.449 And one of the highlights of the platform is that you do not have to be a data scientist, you do not have to be a statistician to use this platform to do the analytics to solve your business cases. It is a programless AI, ML pipelines that you can build once you’re trained in the system for about a very, very short period of time. So the learning curve is very small, but you can do quite a bit of work and quite a bit of analytics using the platform. So that’s sort of like the overall feature set of the platform. So Pankaj, yeah.
Anil Joshi: 00:07:13.607 So I think I have given you kind of an overview, but the rest of the conversation here would be sort of like talk about the standout features, and I just actually outlined some of them. We’ll talk a little bit about the use cases that we actually see that can be developed or used on the platform. And then I will hand it over to Pankaj, and he will talk about the platform architecture. Those of you who are really deep-dive into the technology, I think you would appreciate the kind of work and the kind of thinking that has gone into building this platform and the data analysis framework; how we actually thought about what kind of functionality that we want to bring into the platform given the kind of data that is available these days and going forward. And then some of the decision-making early on, why did we choose InfluxDB and TICK Stack? What were some of the comparable technologies available at that time, and what actually led us to use this and some of the benchmarking that you will see? And time permitting, we will give you a brief demo of the platform so you actually get to see it how this platform actually works.
Anil Joshi: 00:08:35.452 So I think in a very fundamental discussion about cloud computing versus the edge computing, right —? Everybody is now using — or a lot of companies are now very well-versed with the cloud-based infrastructure and computing on the cloud versus on-prem. So why this edge computing is becoming more and more prevalent, and where this actually has an edge — pardon the pun — over cloud, right? So I think one of the fundamental tenets of edge computing is that it reduces the latency. If you actually have an application where you need to have the response rate immediately, you want to respond, you want to figure out what’s going on, and if you want to change your action based upon this information, you need to do the computing on the side of the device or the machinery where the data is being collected. So latency or the immediacy of response is one of the key reasons, I think, this edge computing is quite popular. And as the IoT expands, all these different devices have sensors associated with them, this will become more and more prevalent.
Anil Joshi: 00:10:02.452 Second aspect of comparing this cloud versus edge is that in the cloud infrastructure, cloud framework, all the data is actually coming on the cloud. And that means it has associated cost of the processing and storage, correct? On the edge side, because you have distributed the processing at the side of the device, not all data needs to be used because you can actually analyze it, and if it is kind of a normal data coming through, if the machine is working normally just sending the data that is expected to send, you just do not have to work with that data. Only the anomalous data could be actually used to actually take some action. So it actually reduces the data load on the system while maintaining the latency of the response rate. So I think that’s the other kind of real benefit of this edge infrastructure.
Anil Joshi: 00:11:16.751 And certainly, I think the scalable aspects of it, right? So you can install as many nodes as you want on a factory floor or in a hospital system with all these different kinds of machines are being collected and being used. And you can collect all the data that is coming through these machines, make the processing next to the machine, and take the action that is required rather than sending all the data to the cloud and then figuring out what kind of analytics you want to perform and do a response or create a response for those machines; whether they’re being maintained properly or not, or whether they’re functioning properly or not, or any kind of a feedback mechanism that you want to provide. So the point here is that as the things become computers — they actually have sensors — the more you actually analyze and process the data next to it, less expensive it will become, more secure it will become, and it is much more scalable environment. So that’s sort of like the main point I wanted to make here. And again, Pankaj can talk a little bit more when he discusses the details of this infrastructure. Next, Pankaj. Yeah.
Anil Joshi: 00:12:48.921 So I think we have discussed most of the things in my last couple of slides, and I think I’ve got to speed it up here a little bit. But again, one side is, basically, the infrastructure. On the left-hand side, you can see it is being layered in terms of the — the bottom layer is the edge. Middle is the fog. And on the top is the cloud. And I think we talked about how these things are connected together. Then the second part here talked about the distributed, centrally managed federation of fogNodes. It’s a highly scalable infrastructure. You can have these nodes connected with each other. And you can actually have the compute, and you can put the different kinds of applications on node, and that can be managed centrally. And that is the core piece of it. So they compute in a distributed environment but can be managed centrally.
Anil Joshi: 00:13:45.974 On the right-hand side of this screen, what you see is, basically, the analytics. And one point I would make here is that we have used the existing functions available in the Kapacitor. But more importantly, a lot of work for the last three years has been to develop some user-defined functions. These are mostly the quality assurance, quality checks, and the machine learning algorithms. That includes the neural networks. That includes any kind of anomaly detection algorithms. So in multiple of them, we have actually built. Depending upon the kind of situation that you have, you can actually pick and choose which algorithm you want to run on a given set of data. So that, I think, is one of the key highlights of the platform.
Anil Joshi: 00:14:39.186 And again, fogSM is the manager that actually will allow you to manage different nodes and different applications running on those nodes. And we talked about the intuitive drag and drop UI to build the analytics. You do not have to be a data scientist. Many supervisors at the factory floor do not have that kind of a training. They can still actually find the insights on the machines that they’re managing and supervising a process in a manufacturing facility. So we want to make it as easy for them to use, but underneath is the complexity and the depth of the analytics that actually gets filtered out in this drag and drop kind of UI that we have built.
Chris Churilo: 00:15:32.436 Hey, Anil. We have a question that I think would be good to talk to right now. So Shisha asked, “What is the accuracy differences between analytics at the edge versus the cloud? And are there plans to consolidate models?”
Anil Joshi: 00:15:47.358 Great question. I think in terms of the accuracy, if the model you have built, whether you have built on the edge, or you have built on the cloud, I think that should not actually make a difference. If you are actually using huge amounts of data on the cloud, you could actually say that the precision could be a little bit higher on the cloud side because you just have a lot of data available on the cloud compared to what you actually will work with given the capacity of the nodes at which actually these models are built. In practice, what you actually want to do is that you would build — depending upon the situation, you will build those models that require huge amounts of historical data on the cloud, but you will actually deploy them on the edge. So you can actually have the best of both the worlds. So depending upon the situation — for example, in those use cases where a machine is sending data at a very high velocity, let’s say, 100 data points per millisecond, right? So high velocity data, and all you want to do is see if there’s any pattern that is developing in the stream of data, and you want to just find an anomalous behavior. You can actually run that quite well on the edge because we have built some algorithms that are — we have multiple anomaly detection algorithm that you can actually run, and it will actually find out whether something is anomalous or not at any sort of like window length in which you’re trying to look at. So those kind of applications, I think the edge models are as good as you can get, right?
Anil Joshi: 00:17:40.943 On the other hand, let’s say you’re trying to find out — I mean, I may be taking a little bit longer time here, but let’s say you actually want to find out — you want to classify some kind of a — let’s say I can come up with some claims data. One of the great issues in the healthcare is that many insurance companies actually deny claims that are sent by the hospitals and the physician offices, right? And so the interest of the hospital is to make sure what actually goes to the insurance companies get paid. So you can actually run models and classify those claims into: will be paid, will be partially paid, or will not be paid. For those kind of systems, you need a lot of feature sets to build the model and to actually analyze those claims before they go to the insurance companies. In those situations, I would actually think that those cloud-computing-based models will be a little bit more accurate because of the amount of data and the history of the data that you will get.
Pankaj Bhagra: 00:18:46.525 But Anil, is it fair to say that once the model has been built, where you run the inference engine, whether it’s you run the inference engine in the cloud or inference engine at the edge, the results are going to be identical?
Anil Joshi: 00:18:57.668 Correct. Correct. That’s what I said. So the precision will not matter, whether you’re actually deploying it — whether you’re actually deploying it on the edge or on the cloud, I think the difference is when you’re building a model that requires a lot of, let’s say, 200 different feature set. If I’m going to analyze a claims data that has about 200 different items, right, and I need five years of history, right, and these claims actually — for any big hospitals, are millions of these claims. So building the model, I think, I will actually try to sort of like work on the cloud. But in terms of the deployment, once the model has been built — because then you’re just processing one claim at a time, right? So build the model on the cloud, deploy it on the edge, and then you’re passing through this. So in real time, you can actually provide a feedback to the coding team whether this is the right claim to be sent to insurance company, or they need to be tweaked before it goes out. So in that kind of a situation, we can use the edge infrastructure to run those.
Anil Joshi: 00:20:11.562 So I think I will take another couple of minutes just here to sort of talk about where this platform is being used mostly, and where it is highly applicable. So industrial automation certainly is one key area, and I think in the scheme of things, this is where the edge computing is most popular today. And then there are multiple use cases. There are use cases in quality control. There are use cases in predictive maintenance of these machines. And there are use cases where you want to prevent any kind of anomalous behavior that could actually develop in any particular machine, and you want to stop it so that the damage is much less than would have been if there was no kind of analytics happening real time around those data we’re coming through. So those are some of the early applications of edge computing in the industrial factory floors. And predictive maintenance is quite useful because I think it does save companies quite a bit of money and the downtime because now, as the data is coming through, you can actually check the pattern if the signals that are coming through — whether it is the temperature of the machine, whether it’s the vibration of the machine, whether it’s some other kind of data that is coming through, is the pattern changing? And if this pattern is changing, is this means that something will actually happen down the line, whatever the time frame could be? So this is a very good use case for the kind of platform that we have built.
Anil Joshi: 00:21:59.064 In healthcare, I think they’re still trying to figure out where it would be most useful. But I actually see this as hospital is also like a factory floor where all these machines, the X-ray machine, the MRI, the PET scans, all those are available, and you do want to make sure that you need kind of predictive maintenance that you can provide to them. That’s one use case. Second use case is — many situations, I think this remote patient monitoring, right, is a typical use case of edge computing. So you have all these devices that are collecting data like your blood pressure, your pulse ox, your weight, your ECG data. All those data are coming, and then more and more machines will actually have sensors in them. So I think you get to actually understand the data and the patient’s condition even when they’re not at home. That’s what the remote patient monitoring is all about, right? So you’re able to collect the data like an ICU data while the patient is at home, and the computations are going to occur locally. And that can be aggregated in such a way that it can provide some insight from there.
Anil Joshi: 00:23:14.030 Another kind of example that I think we’re sort of like brainstorming here — you’ve not seen an example, but I think there’s a use case — is the blockchain, right, where you can actually have a distributed ledger technology. For example, let’s say you go to a physician. Physician actually takes a look at it and does the diagnosis and provides the treatment. Before the bill, the claim actually goes to the insurance companies, it may actually go through the patient who was actually treated. And then it gets approved and then goes to the insurance company for payment. Today, there is almost 40 to 60 billion dollar of fraud and abuse in healthcare claims filing. But if you can create a blockchain where there’s a linkage that actually goes through the patient, now, any kind of fake claims that are coming from a hospital or a clinic to the insurance company could all be avoided because you need a signature from the patient or the approval of the patient, and this all could be actually done locally. So we’re just sort of thinking through that kind of use cases, but that is something that could be thought through and developed into application.
Anil Joshi: 00:24:34.791 So I think I will stop there. If I can have one more screen — same thing, I think, other applications are there in retail augmented reality. You want to go to a store, and you want to try your dress. You don’t have to go to a room to actually take on that dress and see how you look. You can actually have this applications that actually allows you to see how the dress will look on you with this augmented reality. Many companies are trying to sort of create this online, but there are some applications of similar situations in a retail store, physical stores where you can actually have this kind of a computation take place. Of course, one of the largest application of edge computing is the autonomous vehicles where you need to actually make the decision immediately, and that actually has to happen locally. So multiple situations in the self-driving cars are all, basically, edge computing. So with that, I will actually stop here. If there are questions, I think maybe perhaps we can do it now. And all depends upon Chris and Pankaj, but we can do it at the end as well. So with that, I will hand it over to Pankaj to take the discussion forward.
Pankaj Bhagra: 00:25:55.545 So it’s like the question at this stage — I’ll dig a little deeper. Thanks, Anil, for setting the landscape, explaining how wide this technology can be used in various verticals. Nebbiolo’s focus more is on right now taking the same setup, edge technologies, [into set?] of analytics, whether it’s cloud-driven models or edge-driven models. We’re applying heavily this into the industrial. So we’re focusing more on the industriality, and I’ll show you some of the use cases which we’re driving these real-time insights, data-driven insights at the edge.
Pankaj Bhagra: 00:26:34.876 Now, it’s like two use cases come to my mind. One, we’re trying to do with this big welding and riveting shop based of a very big manufacturer in Germany. And the fundamental problem they had is that they were doing a random sampling of the produced — or the cars they’re manufacturing. And what they want to move forward was that instead of random sampling, the parts they’re producing, they would like to sample each and every part and use the analytics and data-driven models to predict which part is going to fail. Now, in this particular use case, I mean, they were oversampling or overproducing the parts, in this case, actually the welds. The welding on the car was done 20% more than what it should be. And the reason was that the quality control was not done on every part. So that’s why they were overproducing or overcommitting. And thus, as you can see, it has significant impact on the cost of production, the time it takes to produce, and the weight of the vehicle.
Pankaj Bhagra: 00:27:43.014 Now, our approach of doing inspection on each and every part being produced, doing the analytics on each part, gathering the sensor data, and doing analysis at the edge, we’re able to shave off significant portion of the parts or the overcommitting of the welds they were doing on the part. And the amount of data we’re producing — and we’ll show you how we’re doing in subsequent slide — you could see that this couldn’t have been done if you want to do it for every parts you’re producing across multiple factory lines. It can [be hard?] to even do it in the cloud. And the response time, what they needed was that if they identify that the continuous parts that they’re getting produced, they’re of a bad quality, they would like to even change the control loop. They would like to go back and fix the weld head so that the subsequent parts are produced with a better quality. And that loop is 18 milliseconds. Imagine doing your analysis, identifying the events, determining the control event, which fixes the subsequent part within 18 milliseconds. You have to absolutely real-time, and we’re using our full real-time streaming analytics identifying these bad parts and impacting the quality of the subsequent parts within that tight loop.
Pankaj Bhagra: 00:29:05.769 Very similar use case on the right-hand side. This is an engine head, which is basically producing the engine heads for the car. Again, a very big manufacturer in Alabama. Here also it’s a very similar use case, timely using the insights for doing that quality control and identifying that what’s the quality of the engine heads being produced. Very similar use, instead of random sampling, tell me which part is going to fail, look at each and every data sensor, predict using various models. And this is where we use a lot of our data science team expertise timely from the Anil’s team and finally figuring it out which part has a high predictability of failure. And we’ll show you how the architecture looks like for both of them, and you could see that this architecture can, basically, replicate itself for similar use cases are even in adjacent verticals.
Pankaj Bhagra: 00:30:09.610 A little bit introduction of the architecture. So what does the edge computing infrastructure looks like? As you can see that, let’s say we have three fundamental pillars of our solution. There’s a component of what we call the edge virtualization which enables to run your VMs and your containers at the edge. So whether the VMs are the customer VMs, or we bring our VMs in which you can put your workload as a docker container, or if it’s VMs or real-time operating system or Windows operating system, then you can run your native executables and your native cycles for the real time. We also have a component for industrial security. This is significantly important. It’s not a feature somebody will buy a product, but it’s the very first feature if you don’t have it, you would not be installed. So in terms of, let’ say, doing the secure zoning, intrusion prevention system, we have done significant work in terms of data ownership. This is the area where — let’s say when we ingest the data, we stamp the data with the — it’s a unique ID, and that ID is maintained through all the system, and only the right users with a role-based access can access the data, whether the data lives within our system or leaves out of our ecosystem. They can also choose to encrypt the data with their own specified keys. That way, as the data leaves out of our system, the data only can be encrypted out by the user keys.
Pankaj Bhagra: 00:31:44.753 And the third portion of our infrastructure, which is the edge streaming analytics, which today we’re going to focus the most, is the ability to deploy the pipelines, the data pipelines from the cloud in a programless fashion and deploy these pipelines at the edge to do either the statistical analysis, or you have the ability to run your AI, ML pipelines at the edge. Now, the key difference in this approach versus the other approach is that it’s totally programless, A. B is that it’s fully extendable. That means you don’t have to stay put or get limited by the set of tools we provide you. It’s fully extendable. As Anil was initially talking about that we significantly used the user-defined function of the Kapacitor to add the capabilities. And that’s an open API, so you can extend your user-defined functions on top of what we provide to add an augment — and this allows you to keep your data logic private to yourself. It could be proprietary. It could be significant amount of value-add you don’t want to share with anybody. So you can bring your own UDF and stick it into this pipeline and manage this pipeline from something what we call the fogSM, which is our central management system, from which we manage all these three functions: the edge virtualization, the industrial security, and streaming analytics at the edge on distributed nodes. And this is deployed in various, I mean, spaces. You can see these things on the website. So I’m not going to delve too much time on it. But let me go directly into what we’re — I mean to say, how does the architecture deep-dive looks like.
Pankaj Bhagra: 00:33:35.433 Now, we touch upon three components. So you see that we have use cases for workload modernization. There you can bring your VMs, your dockers, and we manage them. This could be a legacy workload. This could be a real-time workload, or this could be a modern workload for IoT. The second pillar is about the streaming analytics, giving the capability of doing the streaming analytics at the edge using a user-defined function, storing your data locally on the nodes, key difference from streaming this data to the cloud. So each of our node which is deployed at the edge has its own time series database, has its own streaming analytics infrastructure, has its own capability to visualize the data locally and selectively choose what data you want to send to adjacent node or send it to the cloud. That whole infrastructure runs on each nodes, and you manage this thing from the central entity. And of course, there’s a significant portion of our industrial security. But today’s focus stays with the streaming analytics topic, so I’ll talk about it. If you’re interested more about workload modernization or industrial security, we can take it in the subsequent webinars.
Pankaj Bhagra: 00:34:48.952 Key differences or key highlights of this platform is that it can be remote managed. We have a capability or the function of what we call the application store and the config store. This is where the users can bring in their applications. And then once the applications are onboarded in your app store or config store, you can push these applications securely and in a scale fashion to multiple nodes. The same way as we’re deploying our application and config, we also deploy our data pipelines. So you compose your application or your data pipeline in a simple user-defined canvas, and we’ll show you this part a little bit more in demo. The user configure the data pipelines in fogSM, our central management system, and then pushes these data pipelines to the edges. And this whole system is highly available. There’s no single point of failure. And the key difference between the edge versus the cloud center is that even in the edge, we’re trying to make a full cluster, so there’s no single point of failure. If one node fails, another adjacent node pick up the workloads. It’s a fully resilient and fully, highly available cluster at the edges and the other distributed nodes. We provide a capability where you take a full snapshot and recovery of your applications which are running. And then we also allow for data ingestion a lot of people’s integrations.
Pankaj Bhagra: 00:36:16.166 The system is fully open and extendable. It’s distributed, runs on any hardware, can run on on-prem or on cloud depending upon your choice and have an ability to do the data or the cloud federation with a multi-cloud. And there’s where a lot of people use significant amount of data reduction. A lot of data is kept private, but in cases where you have to have certain application running in the cloud, you would like to federate it securely towards multi-cloud. We don’t participate in digitized services. These are the applications which run on top of our infrastructure. However, we participate and facilitate their running and their security and their ability to migrate and scale.
Pankaj Bhagra: 00:37:04.361 Looking a little deeper, now when we talk about the data analysis, the first part of it is about how securely you can bring the data into the system. So significant effort goes into data ingestion, data cleaning, data storage, data visualization, and then comes the whole part of the streaming analysis of analyzing, processing, and driving insights from it. So this platform provides us the full, holistic functionality starting from data ingestion. So there are use cases where the devices you’re trying to talk to have some kind of a standard protocol. So in an industrial world, typically OPC UA, MQTT are one of the common standard protocols. But if you go and see that, hardly 10 to 20 percent of devices talk to those standard protocols. Rest of the devices always have the legacy protocols. And you need to have some kind of either a proprietary protocol extraction, or you leverage some of these hundred or well-established industrial connectors who provide you the data ingestion capability. So we heavily use [inaudible], [inaudible], in certain cases [inaudible]. And then [inaudible] is another very well-established industrial connectors where we can do the protocol conversion from machine data and bring the data into something much more standard protocol like OPC UA and MQTT.
Pankaj Bhagra: 00:38:42.360 Now, on each node, we run our data broker, which provide the publish-subscribe semantics. So we can produce — multiple devices can produce the data into the data broker bus and various consumer applications can consume the data from that on their interested topics. Ask and register on a topic that gives me a data, sends the data which is coming from any robotic device that this serial access — or you can have even only a very, very refined topic that if the conveyor belt is moving more than 100 miles per hour, then give me a notification. So you can have a topic-based subscription. And on top of that, the broker bus, what we have as an ability is to have all this data exposed to streaming analytics. Now, as the data — it’s like our infrastructure allows — we can allow to start building pipelines which suggested on a topic, and then the data goes to streaming analytics infrastructure. This is where the entire magic happens. So now you start registering on topics, and you say what do you want to do. You want to clean the data. You want to transform the data. You want to do a statistical sample on that data, or you wanted to do the machine learning on that data. So all of that is getting done — your inference engine is getting run on the data or the topics the user is interested in, and this is where the streaming analytics is managed in form of user configuring the data pipeline. These data pipelines are built in the fogSM and pushed down on the edges.
Pankaj Bhagra: 00:40:22.490 Now, as you can see, the whole management system is done from the fogSM, whether we’re pushing the connectors, which are helping us doing the data ingestion, or the pipelines, which are telling how to massage the data, how to analyze the data, how to prepare the data, and how to look the data in a rich fashion. All of this including the AI, ML models are pushed from the fogSM. So this is your configuration or your management station, and this is your execution agent. I hope you’re able to see my mouse. I’m trying to highlight some of the pieces. If not, Chris or Anil, give me a cue that people are not able to — you should see my mouse.
Chris Churilo: 00:41:05.736 Yeah. We can see it. Yeah. We can see it, for sure.
Pankaj Bhagra: 00:41:10.758 Okay. Excellent. So now, as you can see that, we have talked about the data ingestion, the data brokerage. We talked about the streaming analytics, the pipelines, and the connectors which are getting pushed from the cloud. And then the entire data is, basically, stored into the time series database locally, which can be locally visualized or can be — the actions or insights can be driven outside even to the machine control. Now, this is the area where we’re using the TICK stack heavily. So this is the InfluxDB. This is the Kapacitor. This is the Grafana. And the rest of the things are, basically, managed from the fogSM to push and manage the TICK stack on each of these nodes which are distributed across factories, across deals, and all of them managed from the central entity called fogSM. So that would give you a high-level composition how and where the TICK stack is. We run the TICK stack on the cloud. We run TICK stack on each edge and replicate this across the clusters, across the deals.
Pankaj Bhagra: 00:42:14.451 Going a little deep into the data analysis framework, and this is where one of the question also popped in is that how do you build your models? What’s the accuracy of the models you have when you run in the cloud, and when you run it on the edge? If you see that — to build the model, we look at the historical data. We look at the live data. We look at continuous shop floor data, right? So this is where a lot of enriching of the data happens, and using that data, you continuously keep building your models. And the richer your model is — it’s like the more label sets you have, the richer your model; the more accurate your model is. But once the model has been built, we push this model down to the inference engine which runs on the edge. So now, if you look at this hierarchy, we’re trying to show you that in an industrial pyramid, if you’re familiar, there is a notion of the cell, which is where the real production gets done. And then at the level above that — they call it the line level or the plant level where the [paddler?] lines are or the plant data center level — you have another layer of compute.
Pankaj Bhagra: 00:43:26.596 You can push your model either at the plant level, at the aggregated level, or you can push it all the way to the least level of your model where you would like to run it. And using the data pipelines, and we will show you what the data pipelines looks like, and what they’re composed of. It’s primarily Kapacitor functions with added user-defined functions. So you, basically, start producing a stream data, and you say that like, hey, what you would like to do with it. You would like to clean the data. You would like to run from the data. And you would like to analyze and predict something out of. But all of this is running in a distributed fashion, and in a fashion in which you’re managing and looking at the results from a central entity called fogSM. And this drives the insights, and it’s all stitched together as a single platform. If time permits, we would like to show you this in a demo also.
Pankaj Bhagra: 00:44:23.635 Zooming a little bit more what the pipelines looks like, now, this might resonate a lot with the people who are writing the TICK script by hand. They would see that these are standard Kapacitor node or InfluxDB nodes. You start doing the stream processing, and you start writing your TICK script, which is basically streamed. Then you do a topic selection; which topic you would like to listen from; which measurement you would like to hear from; if you want to do group by — and then you would start using the vendor nodes and additional nodes for filtering transformations. You could potentially join multiple streams, and then you do a lambda transformation. So you take the EvalNode, and the TICK script allows you that it could be a directed graph. You can split your graph also. So you can take the same stream and split it into two different parts, and you apply different functions on different part of your tree traversals.
Pankaj Bhagra: 00:45:31.434 So you can say that like, hey, take that result. You select on certain topic. Clean that stream, join it with some other data, and then do a lambda transformation. And as part of it, this resulted few streams — you would like to expose it to two different kind of protocols and would like to see the result from both and compare. And a lot of our users, the way they use it is they have multiple models. And they’re not sure during the cleaning phase which model is yielding the best result. So they expose it to multiple models running in parallel and looking at the accuracy of each model in parallel. And then once they run it for a few months and feel confident this model works better than the other, they basically purge the part of the stream. So during your training, your testing and fitting part, you might have your graph in which you might run it parallelly with multiple models. But once you have refined, and you have seen the results which one is yielding the best, you push the other part of it.
Pankaj Bhagra: 00:46:36.117 And this is where we have written some wrappers on top of TICK script, basically, to simplify or give the ability for people who are not familiar of writing either the script or people who are not understanding the deeps of the statistical or the machine learning. We are providing them these kind of tool sets, or what we call the “lego” set. And people can try the applications very quickly just by a drag and drop. So we’ll show you that. This is just a screenshot of what the function looks, but you see that there’s a path for input. You select whether you want to start with a stream or a batch. Then you go into some transformation functions. How do you like to select? How would like to filter? Then you go to some analytics; which analytics you would like to. And these are all user-defined functions. So Anil and team [inaudible], a lot of data science has been going to add these user-defined functions into this platform, which are basically — we feel that they’re very commonly used functions. So we have done the principal component analysis — heavily used for people to do — even for bringing and identifying which features are the most important and reducing the data or like the density-based scan, emphasis continues to be on unsupervised learning because as your solution grows, you don’t how people are going to use it. So any techniques which we can do with unsupervised learning, we are spending a lot of time to improve those. Nothing more I would like to add. Maybe a little more of this in the demo.
Pankaj Bhagra: 00:48:23.729 Just another screenshot of what the functions look like. I think the people who have used the TICK script would see that the majority of these nodes are the Kapacitor nodes, except in the areas where we have done significant amount of user-defined functions. So these are all user-defined functions which we have added, just using the standard definition or the templates of the UDFs as documented. But now, inside this, this whole code is [done?] outside the scope of the Kapacitor, which runs as a pythonic code. Or in certain cases, we have also written a block code to do the direct or partial performance. A little bit of extensions in the output nodes also. So let’s say you would like to — there are some built-in nodes. The Alert, the HTTPOut are built-in nodes on InfluxDB. We have added extensions because our user community would like — MQTT, AMQP, Kafka. And one interesting node is a PuntNode where we, basically, punt the packet out of the Kapacitor to an additional docker. And this way though, you can extend this pipeline to any function. So now, you don’t even need to write the function inside the Kapacitor. It’s punted out to a docker, and the docker chain can, basically, take it up and — whichever way they would like to ingest the data, process the data and doesn’t even want to return back the Kapacitor pipelines of it, or it can loop it back into the Kapacitor pipeline and continue the processing.
Pankaj Bhagra: 00:49:55.130 So those are the basic building blocks or the nodes you can use to build the pipelines and build very quickly your application in the fogSM. Think about that as a template. And then you will deploy these templates to your edges, to do your function. And that’s the way you compose an application. So I took a very complex example of building a pipeline, but your pipeline could be much more simpler than that. But just wanted to give you one flavor that using this, you can build a very complex event processing or a very simplified one based on the user interest. And this is all done by the user. It’s not done by, let’s say, us or any — it’s a solution integrator.
Pankaj Bhagra: 00:50:37.421 A little bit on why did we pick the InfluxDB, or why did we choose the TICK stack, right? What were our requirement, and how did the TICK stack become a natural choice? So in an industrial world, one of the biggest challenge is that the amount of data which we’re exposed to was — the data ingestion rate was just too humongous. We were getting like 100K records per second in certain places. And we needed some mechanism in which even if we’re not analyzing each bit by bit, we’re storing entire of this data and not shoving this data to the big data lakes and have the data locally, process the data locally, ingest the data locally. And in certain cases, we’re not very interested in the historical data going back in two years. People are interested in last one week of data just because if something happens, we can go back. So having very configurable retention policies, auto-purging capability of the InfluxDB was very important for us. So in majority of cases, on the edges, it’s either a week worth or a month worth of a data, and only the important data is replicated into the big data lakes. So fast ingestion, configurable retention policies was absolutely needed, and InfluxDB just does the best in this world.
Pankaj Bhagra: 00:52:05.184 Query responses, timely — since we’re doing streaming analytics, we didn’t have a heavy query requirements. It was timely use for the visualization and doing tens of queries per node was sufficient for us. So if your queries are well-behaved, it just works. We needed native visualization support. So we ended up using Grafana for that. And we have done a lot of this stitching with a single sign-on, all the security pieces. We’ll touch upon this in the bullet number eight where we have used the encryption, Auth, RBAC, certificates. All of them hook from the TICK but stitches together in our product as a single sign-on. We have definitely needed streaming analytics capability, and Kapacitors and UDFs did the trick for us.
Pankaj Bhagra: 00:53:00.480 We also needed a — before you can analyze something, you need to do a lot of data collection. This is not a machine data but with the compute nodes, what are they doing? How well the TICK stack is running its own metrics? How your dockers are doing? So we collect the metrics or the information of all the nodes, and we timely use the Telegraf for doing the metric collection both for the workloads on our nodes or the user application, whether they’re running inside the Windows or inside the Linux. All data is collected together, and we store these things locally on the local InfluxDB and visualize it and provide the insights and actionable events on top of it. There is no single point of failure. The data is replicated. The TICK stack is open and extendable, so we’re able to write user-defined functions. We’re able to write our wrappers on top of it, how to convert the graph to TICK and back and forth. The single sign-on integration and Grafana integration prove that.
Pankaj Bhagra: 00:54:10.637 And last but not the least, the lower footprint was the most important. We’re not running these things in a cloud where the compute is unbounded. We have limited bounds, so we need to run in a very, very tight footprint fashion. So we do that. Some benchmarks, I leave it here. I think we’re a little behind time, so I would like to show a little bit of demo. I’m not going to the benchmarks. I leave it here just at the highlights like high data ingestion rate — if you’re doing very simple analytics, you can do it 100K records per second per node. But if you’re doing complex event processing like you’re doing ML kind of things where you’re exposing it to sidekick partners, NumPy, performance drops a little bit, but it’s still humongous. It’s 5K records per second. We have done a great amount of acceleration with the machine learning libraries for pythonic, and comparing with the open standards, you see 3X performance. Some very significant use cases where we were able to store more than a million series per node with the 4 billion records per node, and this is where — let’s say, the series were exploding, and we had to go to InfluxDB 1.6 or using the data engines to store the indexes much more efficiently. I leave the benchmark for people to read it later on. So first, do we have time for demo, or we’re running out of time?
Chris Churilo: 00:55:46.104 No. Absolutely. Let’s go ahead. And so if anybody who’s on the call, if you need to head out because it is at the top of the hour, we’ll keep going. And I want to make sure that we can get this recorded, and so you can see and watch, or you can come back later and watch the recording at your own time.
Pankaj Bhagra: 00:56:10.881 Let me show you a quick demo. I’ll try to keep this in for next 10 minutes so that I’m not — overwhelm and not significantly overstretch your time. So what we’re doing is we’re going to login into the fog system manager, central entity. You run it on the cloud or you run it on-prem. This is your command and control center. You log in, and you basically see the entire inventory. This is just my test topology, so you don’t see that many node. You only see two compute nodes, the node number 151-1 and 151-0. These are the two compute nodes, which are stitched to my inventory tree. So that’s what I have. But in real world, you’ll see hundreds of nodes connected to this fog system manager. And this is where you, basically, start deploying applications, deploying pipelines into these distributed edges.
Pankaj Bhagra: 00:57:13.712 So you start from the top. And we talked probably a little bit about the functions we have, about the app store, the config stores, the data stream, the security pieces. These are controlled from the fogSM. So now, once you’re into the inventory, you start seeing that like, “Okay. What do we have here?” If you look at the host node, you can look at the native — the time series database on this, right? When we pointed our nodes to node number 151, the data is still there. The data is not pulled out. It’s not streamed to the central entity. At this time, we have requested that node on demand on that time to look at that data. The data remains locally. The data is private to the node. Only on-demand application can reach out to that data. And this data is what Telegraf is pushing into the local InfluxDB node, and you’re visualizing it using Grafana, right? So all of this is stitched to your management station, but the data remains local to the node. That’s the way we scale. We’re not streaming all the data together all the time to one entity and making big data lakes.
Pankaj Bhagra: 00:58:31.397 This is just raw system; what the metrics looks like, the Telegraf output. So you have how long the system is being up, how much was the disk usage, the network usage, and the other kind of usage — interesting, how many and which machines you’re running and stuff like that, right? Now, on this node there, you can see that I’m just running some other applications, some docker application and some devices being connected to it. Now, how do you deploy an application here? So you go to the app store and, basically, onboard applications. You can bring your own docker containers here. You would see some standard machine learning dockers or some of the analytics dockers which you can add to this app store. And once it’s there, you can just deploy it from here into the system. I’m not going to walk through that pieces. Similarly, you can deploy VMs, whether it’s the Windows VMs or the Linux VMs or the native executables. If it’s Windows, you can push the applications also in that. But our clear interest is more about the data stream, right? So let’s switch topic to the data stream.
Pankaj Bhagra: 00:59:41.728 And this is where basically, you would see the canvas, right? So you can create a new data stream. So how do you build a new data stream? So you take a canvas, and you say that, “I would like to build an application.” And this is where we show to you a screenshot where you can start taking your nodes. You can say that, “Hey, I would like to start my application by doing stream analytics.” Take a stream and put some stream, and then I would like to transform this data. So I would like to evaluate certain things, but before evaluation, I would like to do some kind of filtering. So I would say do some topic selection, do some windowing on that node. And after doing all these things, I would, basically, start building the connectivity between them. So I’ll just say, “Stitch these nodes together.” And I’ll say that after doing this, I would like to evaluate this stream. And then let’s go and do some interesting stuff.
Pankaj Bhagra: 01:00:36.140 I have done the topic selection. I’ve done some micro-windowing, collected some batches and then giving it to evaluation where you can make your lambda functions. But this is where I would like to do something interesting. I would like to do some kind of a density-based scan, or I want to do some kind of a correlations function. These are some user-defined functions which we added into pipelines to do the real analytics work. This is where the AnalyticsPlus team helps us to write these functions, extend our ability to do the real things people are interested in the world. So user can build up this application, select these things, build a graph, and say that what you would like to do the output of this result. Now, you can say that like, “Hey, I would like to send this output to an AlertNode, or I would like to send this output to the InfluxDBOut node, or I would like to send this to something else.”
Pankaj Bhagra: 01:01:32.684 So very quickly you can build this application. Once all your nodes being fully configured — as you can see that StreamNode, you don’t have to do anything. But FromNode, you would like to specify what topic you would like. So I would like to listen from which database, which measurement, what’s your retention policies, what do you want the group by. You fill it up, and you save it. This node is now configured. Once you configure each and every node, your pipeline is ready. You can save your pipeline, and then you start deploying your pipeline. That’s all the end user is doing. So he’s not doing the programming at this TICK script. He doesn’t end up — he doesn’t need to know how the density-based scan is written. He needs to understand, “Yes, I’m doing unsupervised learning, and I’m doing clustering,” but he doesn’t need to know how to write. He needs to know how to use it but not necessarily write. So that’s where we’re simplifying or making this experience of the user to be much more richer where if they’re familiar with data science, if they’re familiar with [how fast?] the analysis is, they don’t have to all the time go to the machine learning. There are a lot of times it’s just aggregations and visualizations and those things helped, right?
Pankaj Bhagra: 01:02:50.973 So once you save it — once you build all this thing, you save it, right? And then the rest of time, I’m not going to save this entire — build this entire thing. I’ll go and show you some of the previously pre-built pipelines. So you have a pipeline for, let’s say for example, a quality control, right, where you selected from stream, from certain topic. You did window on that, so you’re doing every five seconds, last five seconds picture of the data. You evaluate this, and you expose it to a pre-built algorithm for anomaly detection. And then the result of this, you throw it out to the InfluxDBOut node. And you’re storing that into the [inaudible] so that we can see the results for the anomaly. And once this pipeline has being built, you can deploy it. I can see that where it’s been deployed. It’s already been deployed on one of the node, so I can take a look at the status of it, and I can even see how it’s been doing.
Pankaj Bhagra: 01:03:52.345 So if I just click on the status of it, now it’s showing me that, hey, this pipeline has been working. It’s processing nodes. The data is coming into the system. And it’s analyzing the result and showing this result out into the InfluxDB node. Now, we gather the stats every five second and see the response of the pipeline. If the pipelines stops, the failure is being exposed to. And you can, basically, zoom open these things, and it’s showing you the exact error happening on that particular node on the pipeline. So this is a tick node, which basically is that average execution time is that many nanoseconds; how many emitted points it has. If it has an error, it will flag you out. If it has a series — if it has a cardinality increase, if there’s more go threads, you would see that like, “Hey, it’s falling behind,” so it’s working of more go threads to process that.
Pankaj Bhagra: 01:04:48.005 So it’s just not that you build an application, but you also look at the performance of your pipelines, the results of pipeline because we’re throwing back into the InfluxDBOut node. Now, let’s go and take a look at the result of this thing, right? So I’m going to go back into the inventory tree. I have deployed some data sources, which are extracting some machine data. And they are bound towards some kind of a device or an asset, right? So it’s kind of here simulating a robot, robot IDs from some randomly generated number here. And if I pointed out to say that like, “Hey, show me what data it’s been sending,” right? Now, I’m pointing it to our database, and it’s saying it’s sending a sinusoidal data, right, some kind of it and then regenerated data, right? So you can see that it’s a sinusoidal data, and it’s varying in magnitude. Sometimes it’s high amplitude. Sometimes it’s lower amplitude. And it’s randomly also generating some kind of a noise.
Pankaj Bhagra: 01:05:57.812 So if I go back on that node one more time — oops. I clicked on — so if I go back here — so earlier we looked at the raw data, but what’s more interesting is that let’s look at the analyzed data. I go into a different dashboard. I go to analytics dashboard. And on top of just the raw data, which is basically what we saw, that sinusoidal data with some random noise in it, how do you analyze the data, right? So we have built some pipelines, which we showed, let’s say — pipelines which have been deployed on the node. And now, the result of those pipeline is also being individualized on it. So here, it’s basically the anomaly detection result, which the pipeline has been deployed by the end user. And now, you can zoom into any part of the pipeline, and you can see the result of this —
Pankaj Bhagra: 01:06:54.041 So using your machine learning, now you can even identify the very, very small blips in the data. Let’s say, for example, there’s a blip over here. It’s been captured like — that’s like, hey, even though there’s a small blip, we can capture it as an anomalous point. So this way, you can expose a very high rich function to the end user without them really learning the data science stuff. Now, we have just not — we deployed one pipeline, which I showed you, the quality control pipeline. There are more pipelines which are being deployed. For example, we have a pipeline for correlation, which is doing simple correlation, pairwise correlation between various sensor inputs, and it’s throwing its output on the same data. It’s looking at the same raw data, and it’s throwing its output results after doing a pairwise correlation, which is a user-defined function. It’s throwing its results into the correlation measurement. And we can go back to our device and see that when the pipeline is being deployed, can we take a richer look at that device? So if I go back to analytics dashboard, let’s say — just I have a raw data. I have anomaly detection data. And I have a pairwise correlation. So you, basically, can expose the same data to various kind of algorithms because — look at result. If it doesn’t look good, you go back and change the pipeline and deploy the pipeline.
Pankaj Bhagra: 01:08:28.706 So we showed you how the pipelines are being deployed, but let’s deploy one of them. So let me take a look at another pipeline, a much, much more simpler pipeline. This is a pipeline which is saying that look at the raw data, do some windowing — five seconds, every five seconds take the last five minutes of data. You do some very lambda function, very simple. Here, it’s saying that if the data is more than zero, basically, you’re looking at the positive data; then you store the data. Otherwise, you zero it. So you’re not interested in negative data, a very simple business logic. And you store this result into a reduced data, right? So you select the node. You say that like, “I’m interested in dropping all the negative data. I want to only retain the positive data as is, and I would like to show this in the result.” You would have written this in a flip of a second with TICK script if you’re a TICK user. But if not, then it would be a rocket science for you. So you simplify this by writing a template like this. And then you come in here, and you say deploy, right? So you just go on this deploy button, and it will ask you, “Where would you like to deploy?” You remember that I don’t have too many nodes, so they’re not showing up. If I have too many edges, I can deploy in all or on subset of them. I’m going to select one node where I would like to deploy it, and I will just push the deploy button.
Pankaj Bhagra: 01:09:51.242 Basically, it now starts communicating to that node. If that node is able to ingest it, it’s able to ingest that pipeline, it’s been deployed. I look at the status of this node. Now, it’s being deployed onto that edge node. I can look at the status of this and see how it’s been doing. So with that, let’s say, you can control these pipelines. You build these pipelines in the cloud. You push these pipelines, whether they’re doing a very simple data reduction or doing very complex machine learning. You build these things, plug in your model and push these things at date, right? Now, it’s producing results. I go back and look at the result of the device. Now, if you look at it, right, I’m looking at, say, the different dashboard. I have a sample raw data coming in, and I deploy a pipeline just to say, “Reduce the data. Drop anything which is less than zero.” And I have a data which is a reduced data set. A simple Grafana shot here, what are you looking for is a reduced data, which is what the InfluxDBOut node is putting. And very quickly you can build your dashboard. Very quickly you can build your pipelines. Very quickly you can deploy and look at the results of this through a single pane of glass through fogSM.
Pankaj Bhagra: 01:11:10.972 Now, you’re not worrying about — and this is the way people scale. The technology remains the same, but this is how you securely scale your system. You’ve just not deployed your algorithm, and you don’t know the result of it, and you’re relying on something else. This is where the data science people love it, right? They would like to iterate. They would like to see the results. Things change from what you deployed three months back to what’s today. If somebody reports your model accuracy is not accurate anymore, why it’s not accurate? How do I see it? How do I get a feel of it? How do I iterate over it? How do I tweak it? So all you have to — the data science does, basically, come in here. Build a pipeline, deploy a pipeline, take a look at the result of this pipeline, security and simple management station. Whether you run this in cloud, whether you run it in on-prem, this is a fully scalable system to build and manage your entire inventory, right? So that’s what I wanted to give you a glimpse of it. We didn’t touch on certain other pieces of this platform, which is about deploying and managing your workloads and VMs and docker management, deploying your applications, deploying your configs, deploying — all of this is stitched together. But eventually, if you’re looking more from the data sides of it, pipelines, how do you build your pipelines, how do you deploy your pipelines, how you look at the data in secure fashion and iterate over it is what we’re trying to build. I just wanted to give a glimpse of it. If there’s more interest, we can take questions. I’ll try to address as part of the journey and some of the —
Chris Churilo: 01:12:48.214 Yeah. So Shisha has been waiting patiently, and Shisha asks, “Is there way —?” And I think you’ve covered this, but let’s still articulate that. “Is there a way to choose the algorithm automatically based on data and use cases because users may not be data scientists after all?”
Pankaj Bhagra: 01:13:06.000 Right. So it’s like can we — if the question is like, can we choose the algorithms automatically? No. We don’t choose the algorithms automatically. We let user pick up the algorithms. However, we give the capability to end user to deploy more than one algorithm. So you would like to see the results of density-based scan and one [cluster?] VM together on the same data set, you can do so. But it’s like the pipeline on its own will not infer that in this kind of a data set we will recommend you to use this model.
Anil Joshi: 01:13:45.644 Right. And I think, Pankaj, just to add what you’re saying. I think the current capabilities exist in terms of — let’s say there are three different classification models. So you can actually deploy them in the pipeline and see the accuracy of those three, and then you can select one out of them. That exists today.
Pankaj Bhagra: 01:14:06.041 Exactly.
Anil Joshi: 01:14:06.608 But you’re asking, I think — in the ideal world, what you want to do is I suppose you just — even in those situation, you have to identify what is it that you want to understand from this data. Are you interested in the correlation? Are you interested in anomaly detection? Are you interested just in descriptive statistics? So the end user still has to articulate that. And maybe this is a natural language processing problem. You define what would you like to see from the data, and based upon that data, we can actually make some recommendations. And there are some systems now, I think, being created — to actually create these systems a little bit smarter so the algorithm can be chosen. But still it is far away from practicality. And then I think, at some point, you can have — the end user have to specify what is it that they would like to understand from this data. That problem statement has to come. Once the problem statement is there, then perhaps a recommendation could be made by the system. But that does not exist today, but we still are thinking about how to make it smarter so that algorithm could be chosen — once the statement is available, what is it that they would like to do.
Pankaj Bhagra: 01:15:25.539 Right.
Chris Churilo: 01:15:26.890 So Shisha, hopefully, that answers your question. So it’s not quite there yet. It’s not going to do everything for us, but it’s definitely a lot closer. I mean, the fact that we don’t have to create our own TICK scripts is pure joy for me because troubleshooting those are not easy. So I guess one of the questions that I have is, what led you to coming up with that UI for the TICK script?
Pankaj Bhagra: 01:15:55.188 It’s our own pain point. If I can’t write it, how can I convince my user to write it, right? So that’s the necessity. And it is not about writing the TICK script. Basically, it’s the whole — eventually, it’s an end user. How do you enable the bigger set of users to use your platform? So how do they build an application, underlying it’s a [inaudible] anything else? How do you build an application? How do you deploy it, and how do you monitor it? How do you look the results of it, right? And it’s like how do you know that — the previous question which the audience had, how do I know which algorithm is yielding better results? I do not know. It’s like unless you experiment the things, a lot of data exploration, a lot of — it’s like during the model setting time, you have to try out — yes. You have your — it’s my data science team would be laughing at me at this stage but — they know this, but not everybody knows. So which algorithm is going to yield the best result. So every trigger and every end use case is different. So the more tools you give to your community, the more empowerment you have for them. You’re out of the loop, right, so you can do more interesting stuff next.
Chris Churilo: 01:17:18.282 Yeah. I mean, if we think back to the slides that were describing the devices having some kind of data they send to some system and eventually doing analytics. It’s already a formidable task, right, just collecting all that data. And then if you were to layer on top of it making it really hard to then be able to even do any kind of basic analysis yet, then people would just definitely walk away from it. So yeah.
Pankaj Bhagra: 01:17:48.328 They [hold?] results of data ingestion, data storage, data management in secure fashion, right? So now, you’re talking about the — Anil talked about the healthcare or the privacy in the industrial rights data that it’s like — yes. They let you play with the data, but it comes with a very high bar. You cannot just expose this data without a proper authentication or role-based access or the data leakage or — it’s like if that happens, they kick you out of the door right there. So those are the important pieces you need to build into the platform.
Chris Churilo: 01:18:23.889 Right. And I don’t know if the audience will know this or not, but even just like trying to collect that data from all those legacy systems is quite a feed in itself. As you were talking about, there are still lots of legacy systems, all kinds of very close protocols that you guys have built some conversion, so you have some critical conversions from those various systems. But no factory has machines from one single vendor. You’ve got them from — there’s the machines, subunits, everything else from lots and lots of different vendors, so.
Pankaj Bhagra: 01:19:02.192 We still deal with — we laugh at it, but we’re still exposed to Windows XPs, Windows 2000s, and things running on fumes, RS-430 to -485. And people only talk about those, and you say like, “Where is my AMQP and rest?” Say, it’s come 5 years down the road, 10 years down the road; we’d have it.
Chris Churilo: 01:19:24.863 Right. And I think the one thing that you didn’t mention, but I bet you guys deal with it a lot is I hear in a lot of these industrial settings that there are still a lot of paper and a lot of spreadsheets. Is that the case with a lot of your customers as well? It’s that you have to also help them just put that data into some kind of electronic format in some cases.
Pankaj Bhagra: 01:19:49.771 I mean, it’s like — that’s the genesis here, right? It’s like some of our customers — we talked about some of the use cases. They’re doing a random sampling of the parts they’re producing, one part a day. For a quality control, they’re not looking into each and every part. And yes. They might be doing a microscopic analysis on one randomly chosen part in a day in an entire shift, right? That happens with the 99 percentile of it. It just comes with a big faith and big trust in that like, “Hey, I can hear the sound, or I can write it on a spreadsheet better.” So data-driven insight is a key, fundamental driving force in the industrial world. We have seen that time and again. Systems are not. I shouldn’t say that they’re not automated, but they’re automated with, let’s say, 10-years-old or 20-years-old technologies. Their prime time are far — or [disruption?] it.
Chris Churilo: 01:20:49.961 Yeah. Well, I mean, unfortunately, these are expensive equipment, and it’s been working for a long time. They were just trying to get the best ROI out of the systems that they had. And now, that they’re in a bit of a jam because a lot of that stuff, as you mentioned, is very old, it’s probably even hard to find people that even understand or even know how to work with some of those operating system. So I appreciate our audience today — it’s 25 minutes after the hour — for hanging out with us. If you do have any questions, I will leave the line open for just a couple of more minutes. But as I mentioned in the beginning of the call, if you do have questions that you want to just email to me, I would be happy to send those questions to our speakers so they can answer them for you as well. And I will do an edited video, as I mentioned, and post it so you can take another listen. And maybe while we’re waiting for questions, why don’t we just look at some of the — the section that you kind of glossed over on some of the testing that you did.
Pankaj Bhagra: 01:21:55.921 Sorry, Chris. Say that again. Some of the testing or some of the —?
Chris Churilo: 01:21:58.590 Yeah. Some of the testing that you did of the performance for InfluxDB. Just as we’re waiting for a couple of other questions to come in.
Pankaj Bhagra: 01:22:07.132 Right. So let’s say, for example, benchmark — everybody would be interested in it; let’s say, how well the platform suits; where does it fit in, right? I mean, is it fitting like one record per second or a million records per second? So we’re somewhere in between. We are not yet touting the cloud scale, the systems which are built for cloud scale. But when you have — if you try to solve this problem locally at the edges, we have seen typically — this is the kind of a maximum we have seen even though you’re trying to hit the 100K records per second. We would like to analyze all of them through simple things, but the subset of these records, you would go to the — in the order of complex analytics is 5K records per second. In terms of benchmarking, I would say what we did is that we took a standard Influx, TICK suite itself, and we expose this just as a [inaudible] open source. And then we did a lot of tuning and tweaking around the x86 making it —
Pankaj Bhagra: 01:23:12.871 And the key thing is that here when we run it in our system, it’s turn on with full security. So the data is encrypted, full RBAC, authentication turned on, and multiple token-based authentication, whether system is in local or outside. With all of that, our big ask was that like, “Hey, you should not dip the performance of this to native.” So we had to do a lot of work of tuning and tweaking pieces of the nodes which are available in the TICK stack to make sure that it’s rightly configured, rightly suited, optimized for a given platform, and we’re able to at least meet or beat the open source numbers. So we have a write throughput — it’s like it’s not exactly 3X, 4X scale here but just want to show you that we’re able to — given the Advantech MIC-7900 is a standard industrial PC. It comes with, I think, four-core Intel i5 series processor, 8 gig of RAM, 256 gig disk. That’s the compute engine on which — and we have benchmarks on our website for various platforms, so on Advantech system, on Dell system, on a Kontron system, Siemens system. We have done a lot of extensive benchmarks. And we thought the write throughput, query throughput — simple analytics which is, basically, just looking at the data. And just doing statistical analysis, you can pretty match what the write throughput is. And the write throughput, we’re talking about just not writing the data into the broker bus but ingesting and indexing the InfluxDB. That’s what we’re hitting, 62K records per second.
Pankaj Bhagra: 01:25:05.810 And the same data, because we’re doing steaming analytics — if you’re doing simple analytics, you’re able to match, let’s say, a full line rate of that ingestion. So that’s gives you capability that you can look and analyze each and every data if you’re doing simple analytics. But now, if you’re saying that like, “I would like to do clustering on that data,” right? So now, we have to take the data out of the TICK script and give it to the user-defined function. Now, it’s basically, some guys have taken it out of the user-defined function, and then you’re doing rich analytics stuff. So you’re throwing it to the — throwing it to the more interesting algorithms, performance definitely — but you’re not throwing all of that data into the complex event processing because here is a multi-node and — so the hair that we have seen on this kind of a platform, we can still look at the 2,000 records per second. And then we’re doing a very, very complex event processing. In slightly busier nodes, let’s say kind of an i7 class — we’re not yet talking about the Xeon class clusters. On i7 class cluster, we hit 5,000 records per second. That’s what we claimed; that on a single-node basis, you can do up to 5K records per second, but that’s where our limits lies. But remember, this is on a per-node basis, and you have thousands and hundreds of these nodes working. You distribute the problems. You’re not aggregating the data and looking at it one shot. You’re distributing the problem and solving the problem at the edges itself. That’s what we have done. The write throughput, the query throughput, the simple analytics throughput, and the very, very complex event processing throughput.
Pankaj Bhagra: 01:26:52.557 One key thing I would like to highlight here is that this is where we have done significant amount of work of tuning a lot of machine learning libraries, which is written in Python. We have done significant improvement on compiling and optimizing that for x86 platform. That means statically using the assembly instructions of x86. And this is where the graph would show you that we have seen 3X performance. And this is where it matters the most because a lot of time for visualization, you’re doing simple analytics, but for real insights and driving MLs, you need to have those libraries to finely tuned. And of course, some people end up going into all of it, into hardware acceleration of VPUs and GPUs. We’re trying to do still at the CPU level. We have some other project going on with those accelerations, but for machine data, your digital data, which is node data, vision data, you’re able to achieve 3X performance direct into the CPU itself. That’s one thing where it was needed, and we had to put significant effort to make it better.
Chris Churilo: 01:28:02.087 Yeah. But you’re absolutely right. I mean, you have to tune it for your use case, or else the benchmarks are going to be meaningless.
Pankaj Bhagra: 01:28:12.610 Yeah. Yeah. And this is like a standard benchmark test which we — I forgot the name, but it’s InfluxDB bulk test where it’s — we just wanted to — we don’t want to use something which cannot be industry compared. So we took that benchmark test — and then we have the compared numbers that we can see that with the open source being released on a given platform, feed on the test on our system, and then run it on this. People are [inaudible] what is the fogOS? It’s basically built on top of Linux or even on top of CentOS or Red Hat Linux and a significant part of open source integration in this. TICK stack being integral part of it in all the places, but it has other components of virtualization, containerization, managing containers, VMs, config management, and other security pieces for that. Primarily used for industrial IOD but also applied in other IOD verticals.
Chris Churilo: 01:29:25.902 All right. Looks like we don’t have any questions. But as I mentioned, if you do have questions, you guys are welcome to send me an email. Everyone has my email with the invite for today’s webinar. I do want to thank our speakers. It was very informative. I’m super impressed with what you guys have built. You’ve really done a good job of making it simple, especially given that complex environment that your customers are faced with lots of legacy systems, lots of machines, lots of different architectures that they also have to face with — based on their adoption of technologies. It might be slow. It might be fast. A lot of variances that they have to deal with. So thank you so much. And I hope everyone who was on the call has a great day. And I hope you guys look forward to getting the email with the recording. Any last words from you, Anil or Pankaj?
Anil Joshi: 01:30:20.957 No. I think I’m good. But then thanks for the opportunity, Chris, to present this to the audience. And I’m glad that I think we were able to share some of the work that we have done in the last few years.
Chris Churilo: 01:30:37.900 Awesome.
Pankaj Bhagra: 01:30:39.068 Thanks, Chris. It was wonderful. There were some great questions. And I’m sure if there’s any follow-up, we’re happy to take those questions.
Chris Churilo: 01:30:49.282 Fantastic. All right. We hope everyone has a great day and thanks for joining us. Bye-bye.
Anil Joshi: 01:30:54.207 Thank you. Bye-bye.
Anil Joshi, Founder and CEO; AnalyticsPlus, an Advanced Analytics and predictive modeling company. Anil has cross-industry experience in analytics and has led multi-disciplinary teams to build software and analytics solutions. Formerly, Assistant Professor of Oral Health Policy and Epidemiology at Harvard School of Dental Medicine (HSDM) he taught Biostatistics and Research methods and directed HSDM’s pre-doctoral research program. He has published over 20 research papers in peer-reviewed journals in the area of dentistry, medicine, and population health. As Senior VP at Zacks Investment Research in Chiago, he managed company’s quantitative product lines and sales to financial institutions and hedge funds. His team built quantitative trading models for Wall Street firms and created research products that were sold to institutional investors. Anil also founded IntelliH, Inc., a remote patient monitoring platform using IOT medical devices and wearables where real-time analytics is used to identify high risk patients. Anil holds a Masters in Public Health from the University of Michigan, Ann Arbor. He completed his Postdoctoral Research Fellowship in Medical Informatics at Brigham and Women’s Hospital, Harvard University and holds a Bachelors in Dentistry from Calcutta University, India.
Pankaj Bhagra, Co-Founder and Software Architect at Nebbiolo Technologies, a leader of Edge computing platform with focus on enabling and managing application for IoT and Analytics at the Edge. Pankaj is leading an effort at Nebbiolo to purpose build secure distributed computing platform for the edge to enable streaming analytics, and enabling cloud native technologies, which can be managed from centralized location from On-Prem or On-Cloud. Prior to founding Nebbiolo Technologies, Pankaj was a Principal Engineer at Cisco Systems. He has extensive experience in networking technologies, software architecture, and distributed and open embedded systems with real-time characteristics. Pankaj has a Master’s degree in Computer Science & Engineering from the University of Texas, Austin, and a Bachelor’s degree in Electrical and Computer Engineering from the Indian Institute of Technology, Varanasi, India."
Track and graph your Aerospike node statistics as well as statistics for all of the configured namespaces.
Knowing how well your webserver is handling your traffic helps you build great experiences for your users. Collect server statistics to maintain exceptional performance.
Collect and graph performance metrics from the MON and OSD nodes in a Ceph storage cluster.
Use the Dovecot stats protocol to collect and graph metrics on configured domains.
Easily monitor and track key web server performance metrics from any running HAProxy instance.
Gather metrics about the running Kubernetes pods and containers for a single host.
Collect and act on a set of Mesos statistics and metrics that enable you to monitor resource usage and detect abnormal situations early.
Gather and graph metrics from this simple and lightweight messaging protocol ideal for IoT devices.
Gather phusion passenger stats to securely operate web apps, microservices & APIs with outstanding reliability, performance and control.
The Prometheus plugin gathers metrics from any webpage exposing metrics with Prometheus format.
Monitor the status of the puppet server – the success or failure of actual puppet runs on the end nodes themselves.