ADLINK and InfluxDB Deliver Operational Efficiency for Defence Industry with Edge IoT
Session date: Jun 25, 2019 08:00am (Pacific Time)
A major military and aerospace contractor operating globally is using ADLINK Edge to connect existing software, equipment and systems securely, seamlessly and cost-effectively. The company is now able to extract real-time operational data from its materials testing chambers and stream it to drive efficiency by enabling predictive maintenance.
In this webinar, Chris Montague, Senior Solutions Architect, IoT Solutions and Technology at ADLINK, will share the benefits gained, such as reduced downtime from planned and unplanned maintenance shutdowns, improved machine performance, greater accuracy and reduced cost. He will explain how ADLINK Edge and InfluxDB fit together in a highly effective Edge IoT deployment that delivers operational efficiency for the end user.
Watch the Webinar
Watch the webinar “ADLINK and InfluxDB Deliver Operational Efficiency for Defence Industry with Edge IoT” by filling out the form and clicking on the download button on the right. This will open the recording.
[et_pb_toggle _builder_version=”3.17.6” title=”Transcript” title_font_size=”26” border_width_all=”0px” border_width_bottom=”1px” module_class=”transcript-toggle” closed_toggle_background_color=”rgba(255,255,255,0)”]
Here is an unedited transcript of the webinar “ADLINK and InfluxDB Deliver Operational Efficiency for Defence Industry with Edge IoT”. This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
Speakers:
Chris Churilo: Director Product Marketing, InfluxData
Chris Montague: Senior Solutions Architect, IoT Solutions and Technology, ADLINK
Chris Montague: 00:00:01.197 Okay. Thanks, Chris. Right. So I don’t know if any of you look at bios these days, but I did put one up. So I’ll just give a very short introduction to myself. So yeah. I’m Chris Montague. I’m a senior solutions architect, and I’ve been working for ADLINK now for just about a year. And what I do on a day-to-day basis is try to help customers connect unconnected bits of machinery, be that a PLC, a power machine or sensors on the outside of the machine, or just a whole bunch of CNC machines or something like that. So what I do is I try and work out the best way that they can actually get data, so they can actually do something with it - as my boss would say, sort out the data plumbing. Everybody wants lots of data, but you’ve got to have some way of getting to it first. And we’re usually one of those first calls before you can do anything with it. So with that, let me take you through a couple of the slides. So ADLINK - I’m not going to go into too much detail here, but we’re a global company, and we’re one of the leaders in Edge computing. We provide very robust platforms in this space for real-time data connectivity, solutions, applications, etc. Now, we’ve got a couple of different divisions within the company. So we actually do have a hardware division. So we actually do make a lot of hardware. And we also have a software division, which is the division that I work for. And so that’s what I’m going to be focusing on today. Chris Montague: 00:01:59.474 I am going to show you a demo, which is one of the things I like to do rather than just PowerPoint you to death. I think it’s better to see stuff. So with that, let me just roll this forward. Yeah. We are global. And like I said, we’ve got offices all around the world, and even though those offices might not be next door to you, that doesn’t mean we can’t get to anywhere around the globe.
So my region is Europe, but I work with colleagues in the States as well. So what do we do? Okay. Right. So one of the most important things for ADLINK software is something which we call the Data River. Now, the Data River isn’t a physical thing. Okay? It’s the way that we transport data in real time, right, from whatever device is connected at the Edge. Now, when I’m saying in real time, I do mean in real time. And it doesn’t matter what form the data is in or how it gets there, but once it actually makes it to our Data River, we can then push that transformed data to wherever it needs to go. Now, one of the most important things, of course, when you’re shipping lots of data anywhere is you need somewhere to store that data. We do have a way of storing some data in transit, but you do tend to need a database for that, and that’s where Influx has been used in most of our applications. And I’m going to show you that later on. Don’t worry if the diagram doesn’t make sense. It will become very clear. Chris Montague: 00:03:59.270 So what does the Data River give you, and why are we interested? Well, it is very easy to integrate, and it doesn’t matter what the topology is. It’s very highly performant, with low-latency throughput, because it’s - what’s called - a publish-subscribe model. And what that means is we only publish the data once, and any number of nodes can subscribe to see that data securely. Okay? Once again, I’m not going to go into full detail, because I think lots of it will actually come out as I push through the slides here. Okay? So just as a mini example there on the bottom of the page, what I’m saying is that we can take real data - and it doesn’t really matter what form it’s in, whether it’s Modbus data, whether it’s MQTT or CAN bus. It doesn’t matter. As soon as it’s normalized and pushed to the Data River, we can then push that to any node in any format that we need to.
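The publish-once, subscribe-many pattern described here can be sketched as a minimal in-process bus. This is purely illustrative Python, not the Data River’s actual API; the class and names below are invented for the example.

```python
from collections import defaultdict

class DataBus:
    """Minimal in-process publish-subscribe bus (illustration only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Any number of nodes can subscribe to the same topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, sample):
        # The sample is published once and fanned out to every subscriber.
        for callback in self._subscribers[topic]:
            callback(sample)

bus = DataBus()
dashboard, database = [], []
bus.subscribe("temperature", dashboard.append)  # e.g. a Node-RED dashboard
bus.subscribe("temperature", database.append)   # e.g. an InfluxDB writer
bus.publish("temperature", {"value": 23.5, "unit": "C"})
```

Each subscriber receives the same sample even though it was published only once, which is the efficiency point being made.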
Chris Montague: 00:05:06.881 So the use case I’m talking about today. Okay. So why is it important, and how have we used InfluxDB? Well, the challenge was for one of our customers - and they are in the defense industry. And they have these large test chambers made by Thermotron, an American company, where they test equipment to make sure that it’s fit for use in the field. These machines are, by definition, I would say, a simple device in terms of what they’re actually doing, because they’re either baking something or freezing something or whatever. But the mechanisms attached to the refrigeration units, the heating and the cooling that go into that, and the circuitry do often mean that it can fail. And it very often does fail. Now, why do they care? Well, when you actually have a number of different devices to test to a schedule, that schedule can be easily upset when one of your test machines, all right, is out of commission. So these machines will typically take, say, a couple of hours to power cycle up. And it’s only at that point in time that they would actually say, “Hey, I’ve got a problem.” And at that stage, you’d have to move any of the priority materials you’re testing, or communications devices, to another machine. And that’s only just the start of the issue, because then you have to authorize overtime. And then there’s testing around the clock, 24/7. And then that means that there’s more stress on the other machines, because this one machine is out of use. And that isn’t even the issue. Chris Montague: 00:07:03.447 The issue really is loss of revenue. And call-outs were costing into the $100,000s per year for the company, just for call-out charges, because you have to be a specialist to fix these machines. And the first thing that somebody would do is they would actually come to the panel which you can see on the little diagram there.
They would punch in some codes, and it would spit back a code to them and say, “Yeah, you need this part.” And then that would take another week after they’ve ordered that part, right? So there’s a long process there. Let’s say on average it’s taking a week. And so that means that machine is out of commission for a week. And you don’t necessarily always get hold of the engineer immediately, right? Sometimes they’re available. Sometimes it can take a day at least, just on a 24-hour callout. So that creates a bit of an issue and also has a knock-on effect for all these devices that you need to test. So what they actually wanted to do is they wanted to be able to connect to this machine and its data - the PLC that’s in there - to work out why the machines were breaking down. And Thermotron themselves were very open to it: “Yeah. Hey, please do. Not a problem at all. Obviously we’re happy to fix it, but it would be nice to know why these things are breaking when they do.” But the only way you can do that is by gathering lots of data. The machine knows only what it knows about. It doesn’t know, for example, that there’s water pushed into the machine, into the chamber, at a certain pressure. Chris Montague: 00:08:51.091 So there are lots of external effects as well that need to be monitored. So the two main ones were water pressure and water temperature. Okay. So we need a sensor for that as well. And that’s what we’ve done. We connected an external sensor into a branch in a pipe, so we can actually monitor the water going into the machine, because the pumping station was on the other side of the site, a very long way away - we’re talking over half a mile. Okay? So what did we do? Okay. So we connected to these chambers, and as I just mentioned, we actually strapped on some external sensors so we could monitor what’s actually going into the machine, and monitored the PLC controller within the machine itself.
And within a couple of days, we were streaming data from the machine and at the same time from the external sensors. This is important because the only way to diagnose any issue is you need the data collected in a way that you can monitor it, in a way that you can analyze it. And the best way to do that is in a time series fashion. That’s why we’re talking about it today. So the first thing we’ve done is we actually created some dashboards for them. So the engineers that were available on that shift, they could see the key metrics and the key tags that were pulled out from that PLC and on a dashboard, on a screen. So I always put it down to a couple of different things. So the first thing is that if you’re going to connect to a machine or - whatever the site, whether it’s in the field or it’s in the factory, the first thing you need to be able to do is you need to visualize your data in some way, shape, or form. Nobody wants to see zeros and ones. You need to make it meaningful. Chris Montague: 00:10:48.897 And what we’ve done is we’ve provided some dashboards so they could see a certain temperature of the chamber, when it’s ramping up, when it’s ramping down, and a whole bunch of other stats which were identified as being important. Now, the issue around that is that you or any one person - unless they’re looking at the screen all the time - you can’t capture absolutely every element. And then, even with just the screen itself, you still don’t have those external elements of the water pressure, water temperature there. So we put the water pressure and the water temperature on the dashboard as well. Great. So now we can visualize everything. That’s good. Now, if the machine breaks, and somebody actually wasn’t looking at the dashboard, the first thing they need to do is say, “Okay. What time frame did it break down in?” So we can actually then turn around and say, “Okay. 
Let’s have a look at why this machine is breaking down.” And the only way to do that is to look retrospectively. Even if it’s in near time, you still need to do a retrospective. So that’s why we collected data into a database, and Influx was perfect for that. So we’re collecting that data and being able to visualize it on the dashboard. What we also had was a secondary dashboard, but this wasn’t in real time. This was near-real time. So this dashboard was running with a delay of about 10 seconds. And that was a Grafana dashboard. So we used Grafana in front of Influx so we could visualize and then batch that data to see what was going on for the key metrics at that certain point in time. Okay. Chris Montague: 00:12:42.511 We can also extract from that database using Grafana, and you can put it into management reporting, export it to an external source, and start analyzing it to try to work out why that machine is failing, which they did do. Okay. Now, I’ll come back to that a bit later on. But as well as the ability to visualize the data, right, you also need the ability to store data. You need to be able to extract that data to do something with it, whether that’s pushing it to an enterprise service, or extracting it so you can actually use it for some form of reporting, or both. Okay? I would say you always need those elements involved. And then the most important one which we provide for the customer is that we are doing this in real time. There’s no point in saying, “Yeah. We think the machine has failed because of these reasons, but we can’t get to the data for a couple of days,” when we can get to it instantly. Okay. So with that, to bring it all together, to bring it a bit to life, I’m going to run what we call the ADLINK Edge Demo, which is a demo I created last year, just so we can show exactly how we solve some of our customers’ issues and problems, and how they can visualize how we can help them. I can’t see any questions.
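The retrospective look described here - pulling the data for the time window in which a machine broke down - comes down to a windowed query against InfluxDB. A small sketch of building such an InfluxQL query in Python; the measurement and field names are invented for the example, not the ones used in the actual deployment.

```python
def breakdown_query(measurement, start, end, fields=("value",)):
    """Build an InfluxQL SELECT over a breakdown time window.

    `start` and `end` are RFC3339 timestamps bracketing the failure.
    """
    cols = ", ".join(f'"{f}"' for f in fields)
    return (
        f'SELECT {cols} FROM "{measurement}" '
        f"WHERE time >= '{start}' AND time <= '{end}'"
    )

# Hypothetical two-hour window around a chamber failure:
q = breakdown_query(
    "chamber_temperature",
    "2019-06-01T08:00:00Z",
    "2019-06-01T10:00:00Z",
)
```

The resulting string can be handed to any InfluxDB 1.x client or the `influx` CLI to replay exactly what the sensors saw around the failure.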
If there are any - I don’t know, Chris, if you’re there - are there any questions, as a natural pause, if you like? Chris Churilo: 00:14:27.568 No. Yeah. No. If there are, guys, don’t forget - feel free to write them in. And we’ll get to them at the end. Chris Montague: 00:14:35.550 Yeah. No. Absolutely no problem at all. No problem at all. Okay. Right. So let’s get to the demo. So first off, I’m going to show you what our platform can do. It’s a platform of ever-expanding services, and it’s capable of reading data - as I’ve just described - from various sources and pushing it to the IoT Edge. And then what we do is transform and interpret that information. And then we push it to multiple streams to display that live and historic data through dashboards and enterprise-type services as required. So before I go into showing you that, let me walk you through what you can see on my screen at the moment. Let’s make this a little larger for you. So this is the hardware I brought with me today. And on the screen - the one you can see me touching here - is one of our small, powerful gateways. And it contains an Atom processor, some memory and SSD storage, which is used to both store and run our Edge software solution. Right. This is attached to this temperature sensor here, okay, through this [inaudible], connected over RS-485 to my gateway using the Modbus protocol, because that’s still the protocol of choice for most machines out there today, we’re finding. Now, what I’ve also got attached to it is this Raspberry Pi. And that’s on this Ethernet port here. And that’s running the OPC UA protocol. And the data is generated using this Sense HAT on top of the Raspberry Pi there. Chris Montague: 00:16:32.224 And for those that don’t know, the Sense HAT will produce pressure, humidity, and temperature as values that you can interpret and use. So that’s the hardware side of things. Okay? So let’s go and have a look at the ADLINK Edge dashboard.
Hopefully everyone can still see my screen. Can you confirm that for me quickly, Chris? Chris Churilo: 00:17:15.023 Yeah. Yeah. I think we’re all still in awe that we actually saw your hand touch [laughter] that gateway. Chris Montague: 00:17:21.688 Yes. Yes. Live demos are dangerous, but I like living dangerously. Chris Churilo: 00:17:29.312 Well, no. You’re living on the Edge. Chris Montague: 00:17:31.734 Yeah. Exactly. Exactly. So what are we looking at now? So the software on the screen is our ADLINK Edge software UI, right? And this configuration is optional. So it’s not required to run any of the services. They run anyway. It’s just a way of being able to get to, configure, and install a number of services on the gateway. We make it easy for people. So everything that you’re seeing here on the screen, all these services here, are running locally on the gateway. Okay? So it’s important to actually just establish that, that what we’re talking about here is ring-fenced. Yeah? So this would be on the Edge, what I call true Edge. Just a little sidebar: I did have a good chat with one of my friends who works for Microsoft, and he’s an architect there. And he was talking about Edge, and I’m saying, “Sorry. No. Please stop talking about Edge. The Edge is on your premises, inside your firewall, inside your DMZ. Edge is not cloud.” And he goes, “Yeah. Okay.” So that’s where I’m coming from. It’s the Edge of your devices, the Edge of what you’re doing. Cloud is not the Edge. The Edge is your platform, in the way I’m describing it, anyway. So as you can see, I’ve got a number of different services installed here. Okay? And what I’m going to do is just walk you through some of these services so you can actually see how we run them. So you can see the core technologies here and our software on the screen. And all of these services are stored locally and used for configuration, or to show dashboarding, or how data is used.
So let’s go and do that through Node-RED. Chris Montague: 00:19:25.375 Now, Node-RED is built into our solution. In this case, it’s used to transform data and generate a dashboard with its built-in capabilities. And what I’m going to do here is shrink this slightly, so you can see my hand again. Okay. So as I was mentioning, this is the temperature sensor. So when I actually touch the temperature sensor, you can see it reacting in live time, right, to the temperature on the dashboard here. Okay? Now, a lot of the time I do that with a cup of coffee because it has a larger effect, right, but I’m only drinking water. I’ve had a lot of coffee today; it’s the afternoon here in the UK. So that’s great. So what I’ve shown you there is that we can stream live data, okay, in live time. So you can see that there’s an instant reaction to me touching that temperature sensor. Okay? Now, we want to take it to the next stage, because we can read that same source of information and then send it simultaneously to a database to be published in near-live time. And I’ve got another dashboard to demonstrate that, and that is Grafana. I will make this a little bit larger. So for those of you who don’t know, it’s just another dashboard solution. It differs slightly from Node-RED in this instance because it’s connecting into InfluxDB to display that near-live and historical data. You can see I’ve got quite a quick refresh rate here, just for demo purposes, of five seconds. So when I actually put my finger on the temperature sensor, you’ll see that within those five seconds, straightaway, you can see it going up there. Okay. Chris Montague: 00:21:22.189 And so this is important because what I’m showing here is that I’ve got a combination of different data sources with different protocols being displayed at the same time in near-live time on a dashboard, being streamed through a database, yeah - so Modbus and OPC UA. Okay?
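Landing those sensor readings in InfluxDB ultimately means producing InfluxDB line protocol: a measurement name, a tag set, a field set, and a timestamp. A minimal sketch of that rendering step; the tag names (`source`, `id`) are assumptions for illustration, not the deployment’s actual schema.

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Render one sample as InfluxDB line protocol:
    measurement,tag=val field=val timestamp_ns"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# One Modbus temperature sample, already scaled to degrees C:
line = to_line_protocol(
    "temperature",                                # measurement
    {"source": "modbus", "id": "temperature1"},   # tags (assumed names)
    {"value": 23.5},                              # fields
    1561468800000000000,                          # nanosecond timestamp
)
```

Real client libraries do this (plus escaping and batching) for you; the sketch just makes the wire format visible.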
Now, the good thing about these Raspberry Pis is that you can affect the humidity and the pressure, or the temperature - though I always leave temperature out of it, because the sensor is right next to the processor. But if I just breathe on that, you slowly start to see the humidity start to climb from the humidity sensor on my Raspberry Pi. Okay? So that’s great, right, because what I’m saying here is that we have shown, right, that we can stream live data. Yeah? And from multiple sources, internally, using our Data River technology, and all of this is running on this little, small, but powerful gateway. Okay? But the important thing here is that the same subscription - sorry, the same published event on our Data River is being subscribed to by a Node-RED service, an Influx service, and a Grafana service through InfluxDB. So we are reading once and displaying it multiple times, or pushing data out into multiple streams at the same time. So it’s an extremely efficient way of doing things. Chris Montague: 00:23:08.021 Now, that’s great. Right? So, so far, what have we got? We’ve got a dashboard. That’s great. And then we’ve got some internal resources. It’s running on an internal database. And that’s great. Now, let’s say we’ve got a number of different branches or offices in multiple locations as well. They might want to see that data as well. You want to push to a remote station somewhere - very important for some of our oil and gas customers. And so for that, you need to be able to connect to some external services. And as an example of that, I have a few running here, when IBM Watson wakes itself up. I’m on IBM Watson. Chris Churilo: 00:24:00.851 Well, so finally the live demo got you a little bit [laughter]. Chris Montague: 00:24:04.909 It will come back. It’s unreachable every now and again. I mean, I’ve been talking on purpose a bit longer than I normally do when I run through this, because I know that it expires a lot of the time. Chris Churilo: 00:24:18.727 There we go.
Perfect. Chris Montague: 00:24:20.537 So what I’m showing you there is that I’m putting my hand on a temperature sensor. You can take my word for it, or I can actually just show you again quickly. But what we have is that I am touching it, and the temperature sensor is changing. Now, the difference here is, as you can see, that it’s raw data. Yeah? I just wanted to show you that we stream raw data as it is. Now, it’s a very simple example - it’s just temperature, and we just divide it by 10. So yeah. It’s a very simple transformation. But for pressure or for something else, from psi to kPa, it might be a bit more difficult. Or you might have a number of derived values where you actually want to pick only the last value within the last, say, second or so. Okay? We can handle all of that. And once again, yeah, the humidity sensor down below - I breathe on that, and you’ll slowly start to see that change as well. It’s always quite strange, I think, because it always goes down before it goes up [laughter]. No idea why. I’m putting that one down to the Raspberry Pi. Somebody on the call can probably tell me why. So that’s great. So now we are streaming our data to a live enterprise cloud service. Yeah? And it is near-live time now. It’s actually quicker than the way Grafana is handling it from Influx, which is one of the issues I’ll talk about later on. So that’s great. And we can also do the same to any number of different cloud services. So I’ve got the same here. AWS doesn’t have the same array of dashboards built in that IBM Watson’s cloud does have, so you can just see the Shadow value there. So putting a finger on the temperature sensor, you’ll see once again - yeah, immediate effect. Okay? Chris Montague: 00:26:16.996 So I’m streaming and updating in live time to multiple nodes, right, internally and externally, reading it once. Yeah? And so I’m consuming data from our Data River, right, different types of data, all at the same time.
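The transformations mentioned above are simple to express in code. A sketch of the two examples given - the raw Modbus register holding tenths of a degree, and a psi-to-kPa conversion - with the register value 235 chosen purely for illustration.

```python
PSI_TO_KPA = 6.894757  # 1 psi expressed in kilopascals

def scale_raw_temperature(raw):
    """The raw register holds tenths of a degree, so divide by 10."""
    return raw / 10

def psi_to_kpa(psi):
    """The slightly harder pressure conversion mentioned in the talk."""
    return psi * PSI_TO_KPA

temp_c = scale_raw_temperature(235)  # raw 235 -> 23.5 degrees C
pressure = psi_to_kpa(14.7)          # roughly one atmosphere in kPa
```

In practice this kind of normalization runs on the gateway, so every downstream consumer sees engineering units rather than raw register values.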
So that’s great. And finally, we’ve got an API connector for all the main types of [inaudible], and this one’s Google. The reason this one is called Hannover Messe is that that’s the last time I ran this demo - at the big European Hannover Messe show - showing Google that. So that’s great. But what if it’s not possible to, say, [inaudible] that stream locally? Or connectivity is limited internally, like you would have in a factory, for example? Well, we have a solution for that. And give me two seconds, because I just need to switch on my iPad. It’s the one thing I did forget, so this is a natural break. Two seconds. [silence] Chris Montague: 00:28:03.871 Okay. Right. So what I’m going to show you is - here we go - I’ve got an iPad here. That’s the right way up. I’m connected. So as you can see, on the back of the gateway we have a Wi-Fi card in there, right? It could be a LoRa card or Bluetooth or whatever is actually required. But I’ve got Wi-Fi. And so that means I have the ability to connect to my gateway securely over Wi-Fi. Okay. So what I’m going to do is connect to that spot, to an IP address. And hopefully it will connect us. So now you can see, all right, we have a remote version of my Node-RED dashboard running. Yeah? It’s the same as the Node-RED dashboard here on my desktop. One more thing: I’m going to touch the temperature sensors so you can see that they are pretty much in sync with each other. You would hope so, because everything’s in the same room and very close together. But we’re streaming data in live time. Now, that’s all great, because we’re pushing data northbound, right? But what happens if we actually want to push a command back to a machine - which is something that has been called for by some clients? If you work in a hazardous environment, you might want to switch a machine off without going into a room, or something like that.
Well, you can do that. And what I’ve got here is - you can see that it’s a test slider. Yeah? So if I actually just put my finger on here and push the slider, you can actually see it changing over here. Yeah? Chris Montague: 00:29:59.202 So what I’m showing you here is that I can actually control from the [inaudible] as well. So I can adjust and control and push that back down to the Edge immediately. Okay? And there you go. That’s it in a nutshell, right, in terms of what I’m actually showing you here. So what I’ve demonstrated is that we can connect the unconnected using both standard and nonstandard connectors of all types, be that serial or digital. We can stream the data anywhere, as I’ve just demonstrated - I’ve streamed to lots of different things. And all the while we’re controlling the security at the Edge. And just to put a bit more meat around that, let me just summarize it in this diagram. And then I’ll show you the data movement within another diagram I’ve created. So what I’ve shown you is that, with a variable input - we’ve got a temperature sensor - I’m pushing that through an analog-to-digital converter, and one of our app services which runs on our gateway picks that up. Then I’m pushing that same kind of information, for humidity and pressure, using OPC UA through app services from our Raspberry Pi. I am then displaying it onto a Node-RED dashboard. And then I’m also displaying it onto a Grafana dashboard through InfluxDB. I’ve shown I can connect to it via Wi-Fi if I want that roaming capability. And then also, you can actually connect to any enterprise services, should it be allowed northbound. So all of that is exactly [inaudible] the enterprise services at the top - what we’ve actually done for the use case I showed you before the demo. Chris Montague: 00:31:57.646 Albeit the sensors are a little bit different - some are bolted into a pipe - but it’s virtually exactly the same thing.
So just let me find a diagram here before we go into a bit more of the session. What I’ve shown you there is that we’ve got temperature, and we’re pushing it through [the adapt?], through the Modbus protocol, and pushing that onto our Data River. Now, I’ve only shown you a couple of protocols here. And as already explained, we can run multiple protocols at the same time - I think we’re up to 150 different types of protocols at the moment. Once that data is published on our Data River, we can then push it on - which I have done, and I’ve shown you most of these - back through InfluxDB to the Grafana dashboard, and onto Node-RED directly. I’ve shown you the temperature sensor being shown in IBM Watson, AWS, and I’ve shown you Google as well. And this is also going to other sources. I’ve done exactly the same thing with my Raspberry Pi, pushing pressure and humidity onto the Data River. And then I’m consuming that data on a couple of nodes I’ve configured to subscribe to it, which is InfluxDB, and then through to Grafana and onto IBM Watson, as I’ve also already shown you. I’ve shown you our Edge UI. And that allows me to connect through the Wi-Fi to push and control and stream data back to the Edge as well. So that’s pretty much it in a nutshell. And I’ll take a natural pause there, just in case there are any questions, before I dive into the bit about how we use Influx and the challenges and that sort of stuff. Chris Churilo: 00:33:45.738 I just have a really quick question. So why InfluxDB? It looks like this Data River could just go directly into Watson or AWS. So why did you guys need InfluxDB in this case? Chris Montague: 00:34:02.358 Right. So in most cases we’ve seen so far, and especially around Europe, most customers are quite sensitive and don’t trust the data going to an enterprise cloud service of any description - because most of these people are engineers, or they just want their machines to run properly. And that’s a step too far for them.
So the only way you can actually get around that is by making sure you store that data somewhere which is local and somewhere which can take those multiple streams of data in a highly performant way. And that’s exactly what it does. And most of the time, when we do demos, or we leave some demo kits for customers to play with in a digital experiment or a - what’s the word I’m looking for - proof of concept, you need somewhere where you can secure that data so it’s not being sent somewhere else, so it’s all stored nice and efficiently on our device, right at the Edge. And that means we can then use that data in any way we see fit. So it makes sense to bake it into our solution. So we do have a connector for that, which I’m going to explain - how we actually do what we do. So that makes sense. Chris Churilo: 00:35:39.323 Excellent. Okay. Yeah. No. That totally makes sense. Yeah. No. And I think we hear that from a lot of process industries. Yeah. On-prem is still very, very important. And not everything can go to the cloud yet. Chris Montague: 00:35:52.593 [inaudible]. And even if you don’t want to push all of your data immediately, you do need some way of storing at least a day’s worth of data. And a lot of the data that comes off [inaudible] - you might not actually want it all, but the first thing you need to do, before you decide and start filtering that data, is to be able to see it, to know what’s good and what’s bad and what’s different. Yeah? So okay. It’s going to get a bit geeky. So hopefully everybody can see my screen. So what is under the covers? So under the covers we use a containerized platform - Docker. Yeah? So each one of our apps is effectively a Docker container. And as you can see, there are some things you might recognize. One of them there is InfluxDB running there.
And we actually have a way of connecting to that InfluxDB service, because we’ve written our own connector for it, using this service here. We call it our InfluxDB Service. And that’s how we actually stream the data onto that platform. Now, what does that look like? Well, it’s just a simple connector. Chris Montague: 00:37:22.051 So if I just pop back - and let me just stretch this out. [inaudible] Edge - if I go to the InfluxDB Service, which is what that container’s running, through a UI. What I’m doing - it’s a local host, because we actually have a local network set up for all our Docker containers. And we’re connecting to the port which certainly most people know about. And then you can subscribe to a number of different topics, depending on the type of data. Now, the reason we set it up that way is because our underlying technology is DDS, right, the Data Distribution Service. So it’s a military-standard type of way of shipping data from A to B and back again. Yeah. And it’s that publish-subscribe model. So we have a number of topics. So one topic would be for temperature, for example. And we’d use a different topic for something else. And all this is - really, this is just an XML configuration. Right. And so we’re just setting that up. It can be more complex than that, but I’m just showing you a simple version. So how do we actually get some data from certain services? Let me show you the Modbus service. Well, that’s a serial port connection, TTY. It’s using Modbus RTU. And these are the data bits and the stop bits and the serial mode it’s running in. And then I put in the values that I’m interested in. And this is where Influx pulls the data and its tags, if you like, that it understands. Yeah. This is where we put what we programmed into our Influx service. So I have decided to call my Entity and my Identifier “temperature” and “temperature1”.
Chris Montague: 00:39:13.943 And then there's a whole bunch of settings, I think, you need to configure for it. And at the bottom here, I've got a poll rate in nanoseconds. And so at this point in time, what I'm doing is polling something like a million times a second, right, to get a temperature value over a serial protocol. Total overkill, but it's just proof that we can do it. Right? That might be okay for a digital interface, for OPC UA or something else over Ethernet. But for something running at a baud rate of 9600, it is probably overkill. But the important thing here, and the reason why it's so efficient before it even reaches the database, is that we are grabbing that last value. Yeah? So if you really want one value per second, right, and we're scanning a million times a second, when I finally go looking for that value, when I want to read it, I don't care about all the other values. Right? I just want to grab the value at the moment I require it. Yeah? And you can actually tune that per service.

Chris Montague: 00:40:22.202 So coming back to Influx, what we have here is our own influxdb-service. So if I just drop into the Influx container - influxd, influx - and if I show databases, I've really got one database on here. No. Actually, databases, plural. Right? I'm showing there what my database is actually called, running for my InfluxDB. If I show you some of our data - I'll use my DB, show series - and lo and behold, what can we see there? What I'm showing there is the data which I've configured, which I mentioned before: temperature, temperature1, for instance. And for OPC UA, if I come back to this as well, look at my OPC UA connector and scroll down to it, you can actually see pressure and humidity there, yeah, and the instance ID I've given, which matches what we can actually see there. Okay? So that's basically how we're getting data into InfluxDB. Okay? Should I show you anything else?
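The "grab the last value" sampling described above can be sketched as follows. This is an illustrative stand-in, not ADLINK's implementation: a background thread polls a source at a high rate and simply overwrites a single latest value, which a slower reader picks up on demand, so nothing queues up between the fast poll rate and the slow read rate.

```python
import random
import threading
import time

class LastValuePoller:
    """Poll a source at a high rate but keep only the most recent value;
    a slower consumer reads just the latest sample whenever it wants one."""

    def __init__(self, read_fn, interval_s):
        self._read_fn = read_fn
        self._interval = interval_s
        self._latest = None
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            self._latest = self._read_fn()  # overwrite, never queue
            time.sleep(self._interval)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def latest(self):
        return self._latest

# Stand-in for a Modbus RTU temperature read, polled every millisecond.
poller = LastValuePoller(lambda: random.uniform(20.0, 22.0), interval_s=0.001)
poller.start()
time.sleep(0.05)           # let it poll for a while
reading = poller.latest()  # the consumer only cares about the newest value
poller.stop()
```

The per-service tuning Chris mentions would then just be this poll interval, set independently for each connector.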
Let's just show the tag keys, maybe, or show measurements.

Chris Montague: 00:42:11.165 So you can see there that the database is actually streaming - we're streaming data into the database. And you know that it's live because I've shown you that. So that's all great. Right? And we're streaming from multiple sources, because we don't care where it's coming from, or what type of data it is, or what rate it's coming at, or if we need to sample it up. What about the issues? Right. Okay. Now, some are related to [inaudible] services, and some are because certain things weren't available. So for example, why do we use Grafana? Okay. Grafana, just like everything else, is great for showing things, but it's not live. It's five seconds. That can be a lifetime for some people, which is why we also run Node-RED, right? Just so we've got that live element. Node-RED can do some things Grafana can't, and vice versa. You can actually connect to a database with Node-RED as well, but it's not as easy for a customer to do. It requires some customization. So the issues, then. When you have a lot of time series data, right, Grafana starts to slow considerably. And a lot of that can be down to how much data you're actually looking at. Now, in my instance here, you're not going to see much, because what I do is I regularly go into the database - because it's a demo, and I want it to be slick, and I want it to be fast - and I just clear down the data. I actually go in there and just delete it from the database where the time is, say, more than a month old, right, so I've still got some data to play with.

Chris Montague: 00:44:03.761 Because over a period of time we found, and we found on customer sites as well, that the dashboard does start to get laggy very, very quickly. And it all depends, obviously, on the refresh rate you actually have. Okay? And while it's updating, it's taking up processor time.
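Clearing down old data like this is typically done with an InfluxQL `DELETE` against a time predicate. A small sketch that just builds the statement, with a hypothetical measurement name and retention window:

```python
def prune_statement(measurement, keep_days):
    """Build an InfluxQL statement that deletes points older than
    `keep_days` from one measurement, keeping the recent window."""
    return f'DELETE FROM "{measurement}" WHERE time < now() - {keep_days}d'

# Keep roughly the last month of demo data:
stmt = prune_statement("temperature", 30)
print(stmt)
# DELETE FROM "temperature" WHERE time < now() - 30d
```

In a real deployment this would more likely be handled by an InfluxDB retention policy, which expires old data automatically instead of requiring a periodic manual delete.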
And one of the things which I was particularly glad to see a couple of weeks ago at InfluxDays was that Influx is coming out with their own dashboards, because I don't always need complex dashboards. I just need some way to show that live data. Can I show live data in version two? Well, we've had a play with it, and we've seen it as well - and yes, we can. And so that means, in the nicest way possible - I'm not saying we would stop using Grafana, but I wouldn't need it in some instances. And I also wouldn't need Node-RED, because straight away I'm getting that visualization with the up-and-coming version two. So that's one of the things I would say has been one of the pains. And actually, [inaudible] been able to show lots of data at the same time without having to have a slow refresh rate or trying to cut the sample down to a smaller window - it's not an easy thing to do. And that's an everybody problem. Yeah? And then also from a tuning viewpoint as well.

Chris Montague: 00:45:43.521 I was a DBA years ago - not for time series databases; a database called Progress. I used to do a lot of work with those. And it was a pain trying to track down some issues because, for example, somebody forgot to put an index on one table, which means instead of going straight to the record, it read every row, every time. And what should be two reads turns into a million reads very quickly. Okay? Well, a time series database works in a totally different way, but we still have similar challenges when it comes to volume of data. So being able to batch up or see volumes of data in a nice way, without having to have a huge server, would be a huge benefit for our customers [inaudible], as well as for what we showcase. It would also be a very good way of being able to do things at the Edge. And what I mean by that is running models at the Edge. I still wouldn't build the model at the Edge, especially not on an Atom processor, if I was doing some AI or some analytics.
I would still model it, especially if it's hard-hitting, somewhere else. And then push your model to the Edge, so you just run the model at the Edge rather than building it all at the Edge. So I hope that makes sense. I've just [noticed we've got?] nine minutes left. Any questions?

Chris Churilo: 00:47:17.159 Yeah. And if you do have any questions for Chris, just please put them in the Q&A or the chat. Or if you want to have a conversation, just raise your hand, and I can unmute all lines. That was really great, Chris. What I was really impressed with is how deeply you integrated all these things together. I thought the UI that you guys created was really nice, providing a really nice experience for your customers. Do you guys have any plans on doing any kind of predictions with the time series data that you're collecting, to try to help - especially with the one customer example that you gave, with the Thermotron - letting them know perhaps when a part might need to be replaced?

Chris Montague: 00:48:03.734 Very good question. Thank you for asking that. Yes, we do. Because anything that has moving parts either falls under a maintenance schedule or it breaks. And sometimes it just breaks. And so one of the things that we would like to offer - which we can't currently, because we've only got a small box connected to a couple of different machines up there, and so we're constrained - is this. At the moment, the first thing they have to do is download the data, which they do using Grafana, and do some calculations somewhere else. And a lot of that could be done instantly using Influx straight out of the box. And then we could easily just put a maintenance window on there. Once you actually have that data collected, you could say, "Okay.
Based on the facts of what we have, the maintenance window isn't 20 days away; it's actually 10, based on the usage of this machine or the way it's actually being used." So there's a whole host of ways we can actually showcase that. And they've showcased it internally, because they've just started the journey, this customer. For them, Industry 4.0 and predictive maintenance is the first hurdle on the way to being able to do something about that issue - whether it's automation, or whether it's actually getting some intelligence down to the Edge so you know how to react, so that maintenance window can be reduced. Or, even better than that, we go from break/fix to predictive. We're like, "Yeah. This machine is going to break. Let's not use that one today. Let's use one of the other ones." Yeah?

Chris Churilo: 00:49:50.663 Yeah. Now that's a really great point. And also, just listening to you, there are so many little things that can add to the time that [inaudible] machine's out of commission. You talked about the team - whether the team's available. Is the part even available? Did you even order it? It takes a week. Being able to reduce those, I'm sure, is a really important goal for any of your customers.

Chris Montague: 00:50:16.825 Yeah. Absolutely. And another example I gave: a customer working within the UAE, a large drinks manufacturer. I had mentioned the name. And they've got a problem. They supply the soft drinks for the entire UAE region. And when it gets hot there, in the factories - there's no cooling inside these factories - the plastic wrap that goes around the bottles stretches. When it stretches, the machine jams. When the machine jams, the [inaudible] on it stops. And it's a 40-minute stop in production, because they've got guys who have all those [inaudible] and a [inaudible] during [inaudible] on site.
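The maintenance-window recalculation described above ("not 20 days away, actually 10") could be sketched as a toy function. The linear usage model here is purely illustrative; the real calculation would come from the customer's own data.

```python
def adjusted_window(scheduled_days, usage_factor):
    """Shrink a scheduled maintenance window by observed usage.

    usage_factor = observed usage / assumed usage, so running a
    machine twice as hard halves the time until maintenance is due.
    (A made-up linear model, for illustration only.)
    """
    if usage_factor <= 0:
        raise ValueError("usage_factor must be positive")
    return scheduled_days / usage_factor

# Machine is being used twice as hard as the schedule assumed:
print(adjusted_window(20, 2.0))  # 10.0 days, not 20
```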
But when it breaks - let's say it breaks for that minimum of, say, 40 minutes, and then it takes an hour for that machine to come back online - all of a sudden, they've lost a run of 50,000 bottles. Yeah? In a low-margin business, that's huge. So they want to increase their efficiency. So one of the things they want, one of the things we're doing for them now - they haven't got it fully up-and-running yet. The way they got around it initially was they put their plastic wrap in cold storage. But all that meant was that, instead of breaking at 12 o'clock, it broke at 1 o'clock. Right [laughter]? So they bought some time. But that's because they didn't understand the exact issue. And the issue is: what's the maximum temperature you can run these machines at before things start to stretch and machines start to jam? That's what we need to know.

Chris Montague: 00:51:58.373 So we've got some [inaudible] sensors on there. And unfortunately, yeah, it's going to have to break so we can work out what the thresholds are, of course. But we're monitoring that, and we're collecting that data to be able to then control that machine - and we can even slow it down. We actually connect to the PLCs, the Siemens S7s, to say, "Hold on. I'm running too fast. And based on the speed that the plastic is being pushed through this machine, it's generating another one degree of temperature, and that's one degree too much. So you either need to slow down so we can actually be more efficient and we don't break. Right? Or we might need to stop altogether for a bit. Right?" And that's better than break/fixing all the time. There are a lot more efficient ways of doing it. And the other problem they've got is a lot of manual processes. They've got guys who are actually looking at counters of how many bottles are being produced. And they go down, and they get their little spreadsheet out, and they fill in a number.
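The control decision Chris describes - comparing wrap temperature against a jam threshold and telling the PLC to slow or stop the line - might look like this in outline. The threshold values and command names are hypothetical:

```python
def line_command(temp_c, warn_c=34.0, jam_c=35.0):
    """Decide what to send to the PLC for a given wrap temperature.
    Thresholds are illustrative; in practice they would be learned
    from the temperature data collected when the machine jams."""
    if temp_c >= jam_c:
        return "STOP"       # wrap will stretch and jam: halt the line
    if temp_c >= warn_c:
        return "SLOW_DOWN"  # running too hot: reduce line speed
    return "RUN"            # within safe operating range

print(line_command(33.0))  # RUN
print(line_command(34.5))  # SLOW_DOWN
print(line_command(36.0))  # STOP
```

Slowing the line one degree before the jam threshold trades a little throughput for avoiding the 40-minute break/fix stop entirely.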
And then they go and have a meeting, and they discuss those numbers at the end of the day, and then they pass those [inaudible] numbers up to management. And they're always a rough estimate, because one person takes a reading at one minute past four, and the other person takes it at one minute past four and 20 seconds, or at 3:59, whatever. So there are logging issues as well. And so they don't know where and at what volume they should be doing things, because they're not getting that level of reporting. So we're automatically solving that issue for them just by collecting the data. And that wasn't something somebody asked for; it was just an added benefit.

Chris Churilo: 00:53:43.954 Yeah. And then those employees can do something that's probably more important than just looking at counters. Which, I mean, no human wants to do that. It's boring to begin with.

Chris Montague: 00:53:55.240 Quite.

Chris Churilo: 00:53:59.208 Awesome. We don't have any questions, but everybody that's on - we are recording this session, so after a quick edit, I'll post it. And if you do have questions after the fact, which is often the case, don't fret. Just send me an email, and I will get you in contact with Chris, and you can chat with him that way. Hey, Chris, this was really great. I loved the live demo. I think you have nerves of steel to be able to do that [laughter].

Chris Montague: 00:54:29.585 Yeah. It's well-practiced, and like they say, you've got to live on the Edge a little bit.

Chris Churilo: 00:54:36.339 Awesome. All right. Thanks, everybody, for joining us today, and come back - we always have a lot of great customer use cases. And if you do have any questions for Chris, just let me know. Chris, it was a pleasure, and I look forward to continuing to work with you.

Chris Montague: 00:54:52.812 Brilliant. No problem at all. Thank you so much, Chris.

Chris Churilo: 00:54:55.041 Bye, bye.

Chris Montague: 00:54:55.977 Okay. Thanks. Bye, everyone.
Senior Solutions Architect, IoT Solutions and Technology at ADLINK
Chris is a Senior Solutions Architect at ADLINK, working closely with industrial customers to enable them to optimise operational efficiency by connecting the unconnected and streaming data from the edge to the enterprise. Chris is an experienced IoT professional with over 20 years' experience in the software and IT systems market. He originally started his IT career writing code to optimise and streamline databases for large public sector clients, before graduating to Systems Administration and DBA roles to put his analytic skills to good use. Chris has architected and delivered large-scale IT projects for customers in the Manufacturing and Finance verticals (for the likes of BP and the Bank of England). He has also presented at multiple IT events on many IT subjects, such as how to optimise your data and architecting highly resilient solutions. In his private life, Chris is an avid photographer and loves to take pictures of many different subjects, mainly portraits, landscapes, and sports. This, coupled with his love for the great outdoors, provides him with many opportunities to see the world from a different viewpoint. He is also a keen cook and creates many of his own recipes.