Using OPC-UA to Extract IIoT Time Series Data from PLC and SCADA Systems
Webinar Date: 2021-03-30 08:00:00 (Pacific Time)
Algist Bruggeman NV produces yeast for large-scale bakeries and home bakers. The company lacked insight into its fermentation process as its sensor data collection process was manual. Production data was committed to paper, making it difficult to compare batches, aggregate production parameters or detect anomalies.
Factry.IO’s data historian, built on InfluxDB, has helped the company collect process data, enabling it to gain more insight into its production process and perform predictive maintenance.
In this webinar, learn about Algist Bruggeman NV’s business outcomes and the technical setup of linking time series data with ERP, planning and quality data for operational improvement.
Watch the Webinar
Watch the webinar “Using OPC-UA to Extract IIoT Time Series Data from PLC and SCADA Systems” by filling out the form and clicking on the Watch Webinar button on the right. This will open the recording.
Here is an unedited transcript of the webinar “Using OPC-UA to Extract IIoT Time Series Data from PLC and SCADA Systems”. This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
- Caitlin Croft: Customer Marketing Manager, InfluxData
- Frederik Van Leeckwyck: Co-Founder and Business Development Manager, Factry.IO
- Ivo Lemmens: Project Manager Automation, Algist Bruggeman
Caitlin Croft: 00:00:04.347 Hello, everyone, and welcome to today’s webinar. My name is Caitlin Croft, and I am really excited to be joined by Frederik and Ivo, who will be showcasing this industrial IoT use case. Once again, please feel free to post any questions you may have in the Q&A or the chat. We will answer all the questions at the end. Without further ado, I’m going to hand things off to Frederik and Ivo.
Frederik Van Leeckwyck: 00:00:38.661 All right. Thanks, Caitlin, for the introduction, and let me welcome you all. So good morning, good afternoon, and good evening to all of you. In today’s talk, which will take about an hour, we’ll be using OPC-UA to extract IIoT time-series data from PLCs and SCADA systems. And of course, this story will boil down to data collection on one end, using this protocol, and next to that, we’ll use that data in an integration project as well. So the story that we’ll share stems from a project that Algist Bruggeman and Factry did in 2016, 2017. And the goal there was to improve data integrity, gain more insight into the production process of yeast, and ultimately, hopefully, make better yeast. So that was the start of the project, the goal of the project, and of course, we established many data collection and integration systems for that. So in today’s talk, I’ll hand over immediately to Ivo, who will introduce himself and the company, Algist Bruggeman. Then I’ll introduce myself and the company, Factry. Then we’ll dive into the main part of the presentation: the situation and the challenge, the solution and outcomes, some unexpected benefits that came around in the end, and we’ll end with a Q&A session. So with this, I’m going to pass the word to Ivo already. Ivo?
Ivo Lemmens: 00:02:29.796 Hello, Frederik. Thank you for this nice introduction, and welcome to everybody, and thank you for joining this webinar. I’m always thrilled to see that so many people are interested in using this time series database in an industrial environment, as we do and as we have done since we started some years ago. First, I’m Ivo. I’m project manager automation at Algist Bruggeman. And in that position, I’m responsible for automation, for electricity, as I used to say years ago, everything that is wired. But nowadays, some things are wireless, so those parts are coming into my field as well. And especially in this position, I was project lead for the MES implementation, the database implementation, and everything that goes around this Historian project. And in my free time, I’m a runner and a scuba diver. So now you know my personality. Something about Algist Bruggeman: Algist Bruggeman is a factory, a production plant, located in Belgium near the nice historic city of Ghent, which you can visit when the pandemic is over, and we’d really like to invite everyone to come and see.
Ivo Lemmens: 00:04:09.596 So Algist Bruggeman, located in Ghent, started production in 1884, just as a reminder that it’s not just a startup but the result of nearly 140 years of evolving new technologies. About Algist Bruggeman, we can say that we produce fresh yeast, primarily for the bakery industry, in all kinds of forms — fresh yeast, liquid yeast, dried yeast — for the big industrial bakeries, the small bakeries just around the corner, and every bakery in between, whatever their size and whatever products they make. Besides that bakery yeast, we also produce yeast specialties for wine, for beer, beer specialties, and also some ingredients for the pharmaceutical industries. For the ones who are interested: no, we do not produce any vaccines. Algist Bruggeman is part of a very big international group, the Lesaffre Group, which is a global leader in fermentation, especially of yeast. And over a century and a half, they have been continuously growing, and they now have a turnover of €2 billion and 10,000-plus employees in almost every country in the world, with products, production sites, and sales offices all over.
Ivo Lemmens: 00:06:01.227 The Belgian site, however, is more modest: it employs about 170 people and has a turnover of €120 million. Roughly, we can say that the Lesaffre Group is making yeast for one out of three breads made worldwide and that about one billion people in the world are eating, on a daily basis, some of the ingredients made by the Lesaffre Group. Just to situate Algist Bruggeman: as part of a very large group, it’s not the small company just around the corner, you can say. Maybe it’s interesting for you to see how our production process runs. Our production of yeast is done in something you could compare to breweries or dairy production sites, but of course, it is rather special. We are producing yeast, and yeast is a small single-celled organism, somewhere in between an animal and a plant. We produce about 50 or 60 different types of yeast, and we start from one single cell, which we grow in a very hygienic environment on a petri dish. We multiply the yeast when we supply the right ingredients. It goes into a small bottle, and the bottle becomes bigger, and we have bigger bottles. And then I don’t know if you — you don’t see my pointer, do you? No.
Ivo Lemmens: 00:08:08.004 But we see a fermentation — yeah, Frederik, thank you. That is our first fermentation tank. It contains thousands of liters of yeast in a water solution. After about 12 to 15 hours, one batch is completed. The water is separated from the yeast, the yeast cream. The water goes one way: to our water treatment plant, where every ingredient in it is recuperated, revalorized into fertilizer or for the feed industry. And the water, of course, goes back to where it belongs, into nature, but very well purified. The cream yeast is our product, and that goes to the bakeries in liquid form via a tank or in smaller containers of 1,000 liters; or as compact yeast, pressed yeast as we call it, a fresh yeast in blocks of one kilo or half a kilo for smaller bakeries; or in a dried form, and then we have a product in powder form.
Ivo Lemmens: 00:09:31.682 In the next slide, we can see a typical view of our production facility on the inside — some pumps, some pipes, a very big fermenter — and it’s all located in the nice photo, as Frederik is pointing out. That’s the location near Ghent. On the left side of the next slide here, we see the rotary vacuum filter where we produce the compressed yeast. And on the right side, we have the dried yeast packed in half-kilogram packages, which is suited for export, because the difference between the dried yeast and the fresh yeast is the shelf life. Fresh yeast, as it is a living product in a still-wet environment, has a shelf life of three weeks, while the dry yeast has a shelf life of two years or even longer and can also be stored in warmer storage rooms. The history, Frederik, I think that’s more up to you, to define where we started and how we took our journey to implement the Historian.
Frederik Van Leeckwyck: 00:11:01.346 Yeah. Thanks, Ivo. So indeed, you remember, Ivo, it was in 2016 that we started working together with our open source project, the OPC-UA logger, which we used to [crosstalk] —
Ivo Lemmens: 00:11:20.605 [crosstalk] interrupt you, Frederik, but in 2016, indeed, we started with InfluxDB. But I have to admit that a few years earlier, we started with someone else, not InfluxDB, but something industrial. It did not work out for us.
Frederik Van Leeckwyck: 00:11:38.827 Yeah, yeah. So let’s call it, we started in 2016 with this open system. Is that a good way of putting it?
Ivo Lemmens: 00:11:45.358 Exactly what I meant.
Frederik Van Leeckwyck: 00:11:47.395 [laughter] Okay, Ivo. So indeed, in 2016, we started with this, yeah, more open system based on the OPC-UA logger and InfluxDB, the first versions that came out back in 2016. And like Ivo said, they were using something before. And as with all new things, you have to start to trust the system. So they started with logging temperatures, purely for food safety, and then gradually expanded the data collection on site. And right now, if I’m not mistaken, we are collecting data from a lot of data sources on that site: from about 50 PLCs, a bit over 4,000 tags, some at 1 hertz resolution, sometimes also a little bit lower resolution. But that’s the footprint we have there. Right?
Ivo Lemmens: 00:12:45.868 That’s correct. But the number of PLCs is still growing, and the number of tags exponentially.
Frederik Van Leeckwyck: 00:12:51.184 Yeah. Mostly, it doesn’t go down. [laughter] So that has been running for a couple of years in the meantime. And a little bit about me: I’m the co-founder of Factry, and I’m responsible for business development. I have an engineering degree, a bioscience engineering degree, which means that already back in my studies, I was looking at the scientific approach to fermentation processes. And here, some of these worlds come together. Right? And a couple of years ago, I did large parts of the development of this software project together with my colleague here today. And so I mentioned I was co-founder of Factry. What we do, and what we’re going to be talking about today, is the digital factory. Right? On the one hand, you have the OT technologies — so the operational technologies, the PLCs, the data systems here at the bottom — and on top, you have the world of IT, in which, for example, InfluxDB is situated. And for a long time, both worlds have evolved separately. And of course, now, more and more, these worlds are converging again, and the digital factory is happening right across both worlds. So what we constantly try to do is to take the data that is present or generated in the OT systems, extract that process data as quickly as possible, and get it into IT systems, because at that level and in those systems, the fastest and the largest progress is being made. So that’s the story of the digital factory and what we are going to be talking about. To do that, Factry builds Factry Historian, which is a system to collect and store data from the processing equipment. And then finally, once it is stored, we’re going to visualize it. And the storage happens in InfluxDB, which is why we are here, of course.
Frederik Van Leeckwyck: 00:15:03.533 So collecting all of this data, what problems does this solve, and which points will we be touching on today? If you collect high-resolution data in the InfluxDB time-series database, we see in practice that companies do three things with it. On the one hand, they use it to look back. Something went wrong: what happened? And Ivo will share some examples of that later in this presentation. On the other hand, as data is coming in near real time, you can also use it to display information about, for example, the progress of your production process in near real time. And since we’re gathering lots and lots of data, you can also start using it as a basis to predict what’s going to happen in the future, maybe. And then, as a last slide, we are still looking for talented people to join our team, so have a look at the Jobs page on our website.
Frederik Van Leeckwyck: 00:15:59.276 All right. So we come to the main part of this presentation now in which we are going to explain the starting point, the questions that Algist Bruggeman and we had, and how we overcame these with a stepwise implementation of an architecture that we put together. Now, what is this challenge? We talked in the beginning about we want to improve this production process, make a learned run from that data. So what are the typical questions you get if you want to improve that production? The typical questions you ask are, for example, how does a specific fermentation perform, or does a specific fermentation or a specific batch perform differently on the one fermenter compared to the other, or how well does a certain fermentation parameter follow its reference curve set out by the recipe? So these are questions that are asked, and if you can answer these, you can learn from your processes the things that are actually happening in the production site and hopefully improve on them.
Frederik Van Leeckwyck: 00:17:19.080 Now, what is curious is that asking these questions is pretty easy, but answering them seems to be hard. Now, why is it hard to answer these questions quickly? Well, we found that one of the reasons is that data is not readily available. And so, for example, fermentations, they are followed up on paper. So the operators in the control room, they note down values from what’s happening in the fermentation process, note these on paper. But what happens with this paper in the end, this ends up in a folder. And this is, of course, useful. If something went wrong or you have something to investigate, you can still find it in that folder. But if you have to answer these questions that we asked earlier, you might have to gather hundreds of these pages to really gain insights. So you would need to bring that together. On the other hand, there’s, of course, already IT systems in place. So some data is already registered, but sometimes this is isolated in systems that are specifically built for one or another purpose. So the end result is that information is a bit scattered and it’s cumbersome to answer these questions.
Frederik Van Leeckwyck: 00:18:39.831 So the problem is you have different data sources. And in Algist Bruggeman’s case, that was, for example, recipe management in an Excel file. Fermentation progress happens on paper. Lab results are recorded in LIMS software, which is excellent in its own way. But to answer these questions, you need to somehow bring all of this together, which is an integration challenge. So why is that? This is just an example; this is not specific to Algist Bruggeman. But you can see that data is not inherently — or explicitly linked. At the moment you ask that question, it’s up to the human or humans to start putting all of this information together by going into one system, then into another system, and into another, and so on. So at the moment you start answering these questions, you’re going to look up all of this information by hand and hopefully find an answer. So the goal here is to bring everything together so that we have an error-free link and we can learn from the data more quickly.
Frederik Van Leeckwyck: 00:19:47.019 So let’s bring this data together. That’s the challenge. And when can we say that we have succeeded? I think, based on these two criteria, this should work out. So we can say that we brought this data together in a good way if the human that is initially here in the center has become a user, not the person linking the data. So that person is more or less right outside the — is not in the center anymore. And the three questions, [inaudible] example, of course, can be answered in a reasonable time frame, so more or less automatically. If you manage to do that, I believe we have managed to bring this data together and we can, in fact, learn from everything that’s happening already in the production site.
Frederik Van Leeckwyck: 00:20:39.590 So to make that happen, we took a holistic approach. There’s multiple steps in the production process, and we need some information about a few of these steps. First of all, we have the parts before the fermentation, so things that are happening beforehand, which is, for example, planning and recipe management in this case. Once the fermentation is happening or we get closer to the actual fermentation, we have to manage the dispatching of fermentation, we have to collect data from what’s happening in our fermentation process in InfluxDB, and we can use that to automate the completion of these fermentation sheets. And then after fermentation, the people in the lab are doing analyses to get information about the quality and other parameters of the end result, which are, as mentioned, put into a special software called LIMS. And if we want to get information about the output of our fermentation, we should link those lab results as well. And the end result is then finally used for reporting. So now we’re going to go through each of these steps — before, during, and after — and show how we linked all of this data together.
Frederik Van Leeckwyck: 00:22:11.362 So Step 1: first, we need to get the batch information. So in time, the company is scheduling the batches that they are going to produce on their equipment, their fermenters. So there’s a planning of production orders, and that’s done in special planning software that is tightly linked with the ERP system. These production orders more or less boil down to start and end times, or expected start and end times, of upcoming batches, the equipment these batches will run on, and of course, some further information like a batch ID, a recipe ID, a recipe version if recipes change over time, that kind of information. To synchronize data from the ERP system to an MES system, which is more or less what we built, there’s a standard called B2MML, the Business To Manufacturing Markup Language. And for this project, we implemented this in Go and also open-sourced it. So this is a messaging system you can use to go, as it says, from business to manufacturing, from the ERP to the production floor. So this graph you’ll see coming back a couple of times. We’ll start closing the loop. Well, now we built the first components. We have information from our ERP about our upcoming batches, and we send that to the Factry MES.
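As an aside for readers, the ERP-to-MES production-order sync described above can be sketched in a few lines. This is a minimal illustration only: real B2MML uses the namespaced ISA-95 XML schemas, and the element names and batch fields below are simplified, hypothetical stand-ins.

```python
import xml.etree.ElementTree as ET

def build_production_schedule(batches):
    """ERP side: build a B2MML-style ProductionSchedule message
    (simplified; real B2MML uses the namespaced ISA-95 schemas)."""
    root = ET.Element("ProductionSchedule")
    for b in batches:
        req = ET.SubElement(root, "ProductionRequest")
        ET.SubElement(req, "ID").text = b["batch_id"]
        ET.SubElement(req, "StartTime").text = b["start"]
        ET.SubElement(req, "EndTime").text = b["end"]
        ET.SubElement(req, "EquipmentID").text = b["fermenter"]
        ET.SubElement(req, "RecipeID").text = b["recipe"]
    return ET.tostring(root, encoding="unicode")

def parse_production_schedule(xml_text):
    """MES side: turn the message back into production-order dicts."""
    root = ET.fromstring(xml_text)
    return [{child.tag: child.text for child in req}
            for req in root.findall("ProductionRequest")]

# One hypothetical upcoming batch, synchronized ERP -> MES.
msg = build_production_schedule([{
    "batch_id": "B-2021-0042",
    "start": "2021-03-30T06:00:00", "end": "2021-03-30T20:00:00",
    "fermenter": "FERM-03", "recipe": "YEAST-STD-V2",
}])
orders = parse_production_schedule(msg)
```

A real integration would validate against the B2MML schema and carry many more fields; the point here is just the shape of the message flowing from business planning to the production floor.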
Frederik Van Leeckwyck: 00:23:47.523 Then the next step: we know what we’re going to produce, but we also need some parameters. So we need to get data from our recipes. The recipes contain all reference values for many different recipes: the material feed, so which materials to feed at what rate and so on, and some critical process parameters to check how the process is evolving. Finally, these recipes are used to provide set points to the PLCs that will in the end actually control the production process. Furthermore, these recipes also provide reference values to compare with the actuals that will be recorded later on in the production process. So on the one hand, you have the steering parameters. On the other hand, you also get some references to see if the fermentation is actually happening as expected, what’s actually going on, and whether we are not deviating too far from what is expected. So we also have this recipe data in the system. So if we summarize this, from both systems, we now know when we will produce, we know what we will produce, we know how we’re going to identify what we’re going to produce, and also where, so which fermenter we are going to use to run a certain batch. Now, what’s missing? The actuals, the actual process. So let’s go to the next step.
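To make the recipe idea concrete, here is a small sketch of what a recipe with set points and a reference curve might look like, with linear interpolation to read off the expected value at any batch hour. All names and numbers are illustrative assumptions, not Algist Bruggeman’s actual recipe format.

```python
# Hypothetical recipe structure; names and values are illustrative.
RECIPE = {
    "id": "YEAST-STD-V2",
    "setpoints": {"temperature_C": 30.0, "pH": 4.5},
    "reference_curve": {
        # parameter -> list of (batch hour, expected value) points
        "dry_matter_pct": [(0, 4.0), (6, 12.0), (12, 18.0)],
    },
}

def reference_at(curve, hour):
    """Linearly interpolate the expected value at a given batch hour."""
    points = sorted(curve)
    if hour <= points[0][0]:
        return points[0][1]
    for (h0, v0), (h1, v1) in zip(points, points[1:]):
        if h0 <= hour <= h1:
            return v0 + (v1 - v0) * (hour - h0) / (h1 - h0)
    return points[-1][1]
```

With this shape, the same curve serves both purposes from the talk: steering (set points to the PLCs) and later comparison of actuals against what the recipe expected.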
Frederik Van Leeckwyck: 00:25:26.664 In Step 3, we’re close to the actual process already. So now we have the recipes. We have all the parameters that will control the fermentation process. So we have the upcoming batches that are synched from the planning and ERP system, and we also have the batch IDs, the expected start and end times. We have the recipe information. Actually, we have all information together in a central system to load into the SCADA system so that the PLCs can start with actually controlling the fermentation process. So from this moment on, the operators can just press Start, and the SCADA system is preconfigured to actually start the fermentation process. So from this moment on, we have an error-free link between planning and production. So no more manual selecting of a recipe from a drop-down list or typing in a batch ID that might contain errors, like a small o or a capital O, small letters or large letters, or O versus zero, those kinds of things. There’s an error-free link between planning and production.
Frederik Van Leeckwyck: 00:26:38.857 So right now, this is a bit of the overview that we get. So we have, again, ERP and recipe data into the MES. This is used to send to the SCADA system, which will, of course, interface with the PLCs that will actually control the process. And the operators are using the system already to start the fermentations from the MES system. Of course, on the SCADA system itself, they’re also using that to control the fermentation process. Now the fermentation has started, the batch has started. So from now on, we’re really — well, actually, all the time, but now this is becoming important for the fermentation process. We’re collecting high-resolution data from the different PLCs and SCADA systems that are controlling the production process. As we mentioned in the beginning, this is, in fact, much broader than just the fermentation process. This is happening for a lot of things, for a lot of PLCs and controllers across the whole site, but in this case, we’re focusing only on the fermentation, of course.
Frederik Van Leeckwyck: 00:27:48.703 So from that moment on, you really get this high-resolution data from what’s happening in your fermenters, and you can use this for dashboarding, for process analysis with the raw data coming out of your fermentation process. So this becomes a data source for storing what actually happened during production. In the recipes, we put forward what we will expect to happen. And now we are recording what actually happened during production. And for that, we use the OPC-UA protocol. So this protocol has changed quite a lot. It has opened a lot of, sometimes in the past, rather closed systems to control industrial processes. So with the OPC-UA protocol, we can now get to data that are made available from PLCs and SCADA systems much more easily. So newer PLCs, they sometimes have OPC-UA servers on board, or SCADA systems have them. Sometimes you need to do some protocol translation. But this is, in fact, a good standard to rely on to get your really high-resolution process data from. So in this case —
Ivo Lemmens: 00:29:07.239 Can I interrupt, Frederik?
Frederik Van Leeckwyck: 00:29:08.751 Absolutely, Ivo, go ahead.
Ivo Lemmens: 00:29:10.546 Well, you mentioned OPC-UA. Well, it’s not the only protocol we use. There are many different protocols, but OPC obviously is most important. And what you don’t know yet, or you know it, is we are actually ingesting our HVAC from our new lab via a BACnet protocol. Just [crosstalk] —
Frederik Van Leeckwyck: 00:29:33.996 Yeah, I don’t. [laughter]
Ivo Lemmens: 00:29:34.258 In BACnet?
Frederik Van Leeckwyck: 00:29:36.292 Yes, yes, indeed, with BACnet. And yeah, what we’re actually doing here, there’s two big types of metrics. On the one hand, we have polled metrics that are typically used for sensor values, like every second, every five seconds: analog values, more or less, temperatures, pressures, those kinds of things. And there’s also monitored-type metrics that are more used for logging states, like production steps or a valve that’s opening or closing, where you will only collect a data point on change rather than sampling every second or five seconds or a few minutes. Important there is that for monitored values, the actual timestamp of change is recorded as well, rather than the rounded second or five seconds of polled metrics. So the end result at Algist Bruggeman looks a little bit like this. What we see is a sensor-rich Grafana dashboard with the raw process data coming out of the production system. Do you want to add anything to that, Ivo?
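The two metric types Frederik describes can be sketched as follows: polling samples a value at fixed ticks, while monitoring keeps a point only when the value changes, preserving the exact change timestamp. A minimal illustration, where the read function and the valve event list are made up:

```python
def poll(read_value, timestamps):
    """Polled metric: sample the value at fixed ticks, one point each."""
    return [(t, read_value(t)) for t in timestamps]

def monitor(events):
    """Monitored metric: keep a point only when the value changes,
    preserving the exact timestamp of the change."""
    out, last = [], object()   # sentinel: the first value always differs
    for t, v in events:
        if v != last:
            out.append((t, v))
            last = v
    return out

# Made-up analog temperature, polled every 5 seconds.
temps = poll(lambda t: 30.0 + 0.1 * t, [0, 5, 10])

# Made-up valve state log: raw events vs. the points actually stored.
valve_events = [(0.0, "closed"), (1.0, "closed"), (2.7, "open"),
                (3.0, "open"), (5.2, "closed")]
states = monitor(valve_events)
# states == [(0.0, "closed"), (2.7, "open"), (5.2, "closed")]
```

Note how the valve’s change at 2.7 seconds survives with its real timestamp, which a 5-second poll would have rounded away.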
Ivo Lemmens: 00:30:59.120 Well, it’s a fermentation process, and some recipes are mentioned here. So it’s not only numeric values but also literal string values, which are supported. And we can monitor almost everything. Aside from our direct SCADA system where the operators work, we have this Grafana to look into the past at what happened.
Frederik Van Leeckwyck: 00:31:34.742 So that’s the visualization of the raw data coming out of the process. And how does this work? We have this data. Sometimes you have PLCs. You have other data sources that are being collected. So the collectors are responsible for [inaudible] OPC-UA. For example, with the SCADA system, they retrieve this data every second, every five seconds, or on change, and then forward this information to a backend where potentially other [metadata] is added to it, like InfluxDB tags, for example. And then finally, the end result is forwarded to the time-series database, InfluxDB in particular. And once it’s there, we mostly use Grafana to finally visualize the end result. So it’s this flow from the data source, the SCADA system or the PLC, into the database, and then finally visualized, as we’ve just seen.
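The flow described here, from a hierarchical tag name to a tagged point in the time-series database, can be sketched roughly like this. It is a simplified illustration: real InfluxDB line protocol also requires escaping of spaces and commas, and the tag name and measurement name are hypothetical.

```python
def parse_tag_name(name):
    """Split a hierarchical AREA.EQUIPMENT.SENSORID name into InfluxDB
    tags, so dashboards can later be templated per area or equipment."""
    area, equipment, sensor = name.split(".", 2)
    return {"area": area, "equipment": equipment, "sensor": sensor}

def to_line_protocol(measurement, tags, fields, ts_ns):
    """Format one point in simplified InfluxDB line protocol:
    measurement,tag=v,... field=v,... timestamp  (real line protocol
    also escapes spaces and commas, omitted here)."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
                         for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# Hypothetical sensor reading on its way from collector to InfluxDB.
line = to_line_protocol("process_value",
                        parse_tag_name("FERM.FERM03.TT01"),
                        {"value": 30.2},
                        1617094800000000000)
```

The design point is that the metadata (area, equipment, sensor) becomes indexed tags in the database, which is exactly what makes the Grafana drop-down templating mentioned later possible.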
Frederik Van Leeckwyck: 00:32:36.290 So as mentioned, these collectors talk an industrial protocol such as OPC-UA on one end and HTTP on the other. And they’re also built according to a store-and-forward mechanism: they’re responsible for local buffering. So in case of bad network connectivity, they will store their data, first in memory, then on disk. And finally, they will also keep a local copy of their configuration for cold starts. And this is also something, I believe, that Algist Bruggeman has done a really good job in — I cannot stress enough — the importance of naming: following, for example, a hierarchical structure or another logical structure. For example, Algist Bruggeman uses AREA.EQUIPMENT.SENSORID, and this is really well thought out. If you do this well in the beginning, there are a lot of benefits to reap there. You can use a lot of the more advanced functionality, for example, templating, which happens much more easily if your naming is done well. There are a lot of benefits to that. And we’ve seen in the Grafana screenshot a little bit earlier, here on the top left, an example of how you can use this drop-down functionality, this templated functionality, because of proper naming.
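The store-and-forward behavior can be sketched as a small buffer that keeps points queued while the backend is unreachable and flushes them, in order, once it is back. This is only an illustration of the idea; the actual collectors, as described above, also spill from memory to disk, which is omitted here.

```python
from collections import deque

class StoreAndForwardBuffer:
    """Sketch of a collector's store-and-forward behavior (illustrative):
    points queue up locally while the backend is unreachable and are
    flushed, in order, once it comes back."""

    def __init__(self, send):
        self.send = send        # callable that raises ConnectionError when offline
        self.queue = deque()

    def collect(self, point):
        self.queue.append(point)
        self.flush()

    def flush(self):
        while self.queue:
            try:
                self.send(self.queue[0])
            except ConnectionError:
                return          # backend down: keep the point buffered
            self.queue.popleft()

# Simulated backend that is offline at first, then comes back.
received, online = [], [False]

def send(point):
    if not online[0]:
        raise ConnectionError("backend unreachable")
    received.append(point)

buf = StoreAndForwardBuffer(send)
buf.collect(1)      # buffered: backend still down
buf.collect(2)      # buffered
online[0] = True
buf.collect(3)      # reconnected: flushes 1, 2, 3 in order
# received is now [1, 2, 3]
```

Note the buffer only pops a point after a successful send, so nothing is lost across an outage and ordering is preserved.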
Frederik Van Leeckwyck: 00:33:58.466 So right now, we’ve added another piece to this puzzle. We have the raw data collection from the SCADA system, from the PLCs, into the Historian in InfluxDB. Now we have this really high-resolution data, and now we’re going to use this to complete our fermentation sheets. So at regular intervals, the operators note down on paper the values that they see on the SCADA system screen, for example. But actually, these values are already present in InfluxDB at this point. So we can sample the relevant columns, let’s say, from this fermentation sheet at the right intervals and put these on a digital fermentation sheet, more or less. The operator is partially relieved of repetitive tasks. And in their user interface, they automatically see whether some critical parameters deviate too much from the expected values that we know from the recipe, so that they are guided and [can] take action if that would be necessary.
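The automated sheet filling just described can be illustrated with a short sketch: for each sheet hour, pick the stored sample nearest in time and flag it when it deviates more than some tolerance from the recipe reference. The sample values, reference curve, and tolerance below are all made up.

```python
def fill_sheet(samples, reference, tolerance):
    """For each sheet hour, take the stored sample nearest in time and
    flag it when it deviates more than `tolerance` from the recipe
    reference. `samples` is a list of (hour, value) pairs; `reference`
    maps sheet hour -> expected value."""
    rows = []
    for h in sorted(reference):
        value = min(samples, key=lambda s: abs(s[0] - h))[1]
        deviation = value - reference[h]
        rows.append({"hour": h, "value": value, "deviation": deviation,
                     "alert": abs(deviation) > tolerance})
    return rows

# Made-up samples from the Historian and a made-up reference curve.
rows = fill_sheet(samples=[(0.0, 4.1), (5.9, 11.0), (12.1, 19.5)],
                  reference={0: 4.0, 6: 12.0, 12: 18.0},
                  tolerance=1.0)
# The hour-12 row is flagged: 19.5 deviates 1.5 from the expected 18.0.
```

This is the same loop the talk describes: expected values come from the recipe, actuals come from InfluxDB, and the operator only needs to act on the flagged rows.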
Frederik Van Leeckwyck: 00:35:11.462 So what we did, we added this small arrow here. And what we see right now is that we have kind of like a loop now. We know from the recipes and the planning what we want to have, more or less, the expected start and end times and the expected values from the recipes. And then it records in the Historian in InfluxDB what’s actually happening. And we can now compare this with what we actually expected. Right? So we get this closing of this loop. So the fermentation process goes on. It ends. And then there’s the further downstream processing, as we have seen in the beginning, into a dried yeast and a liquid yeast, fresh yeast. The lab analysis is done in the LIMS system, so a separate piece of software built specifically for that. So they perform analyses for each batch. And this information is also synched with all centrally collected data. So yield, dry matter content, those kinds of things, they’re also added to this and linked to these batches.
Frederik Van Leeckwyck: 00:36:22.058 So in the end, what we now have is this error-free link between the planning, the recipes, what we wanted to do in production, what we actually did, and then finally, the end result from the lab. And all of this is now together. So now, because we have all this information together, we can start answering the questions that we had before, like: show me all the batches from February 2021 that followed recipe X and had a dry matter content of at least Y, for example. Or: show me the reference curve of parameter X and how it evolves for a specific batch. And how does this compare with the reference values from all batches that followed the same recipe? For this, you need to remap all your batches to relative time, for example mapping them to hour 0 through hour 12, rather than the absolute time as they are stored in the time-series database. But you can, of course, always see the raw process data at second-level resolution for a batch in Grafana.
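The remapping to relative time mentioned here is straightforward to sketch: subtract the batch start from each absolute timestamp so every batch runs from hour 0, which makes curves from different batches comparable on one axis. The timestamps and values are illustrative.

```python
from datetime import datetime, timedelta

def to_relative_hours(points, batch_start):
    """Remap absolutely-timestamped (time, value) points to hours since
    batch start, so curves from different batches can be overlaid."""
    return [((t - batch_start).total_seconds() / 3600.0, v)
            for t, v in points]

# Illustrative batch: two points, at batch start and six hours in.
start = datetime(2021, 2, 1, 6, 0)
curve = to_relative_hours([(start, 4.0),
                           (start + timedelta(hours=6), 12.0)], start)
# curve == [(0.0, 4.0), (6.0, 12.0)]
```

Applied to every batch of the same recipe, this puts all runs on a common hour-0-to-hour-12 axis, ready for comparison against the recipe’s reference curve.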
Frederik Van Leeckwyck: 00:37:27.657 So this is the end result of the whole setup we put together, this link, as we’ve seen many times before. Once all the data is synchronized and properly linked in the MES system, it is relational data which can then be used for reporting, while the raw process data is also present, of course, and can be used for more detailed analysis in the dashboards we created, for example. And all of this has been done with open technologies, luckily, like OPC-UA and standards. This gives vendor independence, or technology independence at least, in cases where the source code is available. And we’ve seen that this setup — and we mentioned this in the beginning, 2016, 2017 — has been running in production for years already, with maintenance over time, of course. But these tools have really shown to be of great value in putting such an industrial information system together.
Frederik Van Leeckwyck: 00:38:38.426 So I believe with this story, we can really show that we helped take a step towards this digital factory by linking what’s happening in the control of the actual process and getting that information out and storing that [inaudible] to get new insights and learn from that. But the story, in fact, doesn’t really end here, because that’s kind of like the big project we put together. But as soon as we start collecting data, as we mentioned in the beginning — we did it at first only for temperatures, and then more and more data was added. And as Ivo just told us, we can expect more PLCs and more tags, more data, to be added as well. So in the end, there are still more benefits coming out of the system that maybe initially weren’t really foreseen. And Ivo would like to present a couple of these unexpected benefits, as we call them, as well. So, Ivo, maybe back to you?
Ivo Lemmens: 00:39:43.462 Yeah. Thank you, Frederik. Well, unexpected, yeah, for sure, but very, very welcome benefits. As I pointed out earlier, there are four items here: our MRP functionality, where we do some prediction; real-time machine monitoring; a simple story of cold-room debugging; and the monitoring of our MES and our Historian itself. To start with, our MRP functionality was really, really welcome. You remember the schedule of our production, where we have large storage tanks holding products we need during our fermentation. One of the issues was that we, as a production plant, are still growing, but the storage tanks do not grow with production. So we have to manage our storage capacity in a better way, meaning we have to make sure that unloading trucks have enough space in our storage tanks to unload their complete load, or take decisions in time about when to order new loadings.
Ivo Lemmens: 00:41:17.547 And that we couldn't do before, or it was very difficult and time consuming. But now our system knows what we are going to produce in the coming days, at least for seven days, as we make our production schedule. So we know what we will produce, we know exactly when we will produce it, and also which ingredients will be used during that production. So we can also forecast how much of a certain raw material we need. And when we set that against the levels of our storage tanks, we know when a storage tank is running empty or when it has to be refilled. We do this not only for our storage tanks but also for internal production. One example is the production of [a loss?], I think, where we sterilize, and an operator has to know for his shift, which normally is 8 hours [inaudible], what flow he has to produce, how many liters per hour or over his complete shift. And Factry made some nice buttons so he can very quickly see the production flow for the next 8 hours, the next 24 hours, or the next few days.
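The MRP-style forecast Ivo describes, projecting a tank level forward from the production schedule, can be sketched as follows. All names and figures here are hypothetical, not Algist Bruggeman's real data.

```python
# Sketch: given the ingredient consumption implied by the 7-day production
# schedule, project a storage tank's level forward and flag the first day
# it drops below the reorder level, so a new truckload can be ordered in time.

def forecast_refill_day(level, daily_consumption, reorder_level):
    """Return (day, remaining_level) for the first day the tank falls
    below the reorder level, or (None, remaining_level) if it never does."""
    for day, used in enumerate(daily_consumption, start=1):
        level -= used
        if level < reorder_level:
            return day, level
    return None, level

day, remaining = forecast_refill_day(
    level=50_000,                                   # liters in the tank now
    daily_consumption=[8_000, 12_000, 10_000,       # liters needed per day,
                       9_000, 11_000, 7_000, 10_000],  # from the schedule
    reorder_level=15_000,                           # order below this level
)
print(day, remaining)   # day 4: 50,000 - 8,000 - 12,000 - 10,000 - 9,000 = 11,000
```

The same projection also tells you whether there is enough headroom in the tank for an arriving truck to unload its complete load.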
Ivo Lemmens: 00:43:02.825 Another unexpected but very useful thing is that with this monitoring we can watch machines in real time in a way we did not before. With our standard SCADA systems, we see how motors behave and how vibrations are; that's all watched by the operator. But for things that would overwhelm the operator, for example valve maintenance, we log the opening and closing of a valve via our Historian database. The next slide gives an example. Here we have the logging of two valves, and below we see the cutout for one valve. When the output of a PLC, the control signal, does not immediately match the input of that same valve, the feedback, then we have some time delay, some delta T. And that delta T is a measure for the wear of a valve. The longer it takes from the start of opening a valve until the valve is completely open, the stronger the indication that the valve is ready for repair or replacement. So that is one thing.
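The delta-T check above can be sketched in a few lines: compare the timestamp at which the PLC commands the valve open with the timestamp at which the open feedback arrives. The function name and the wear threshold are illustrative assumptions.

```python
# Sketch: valve wear estimation from logged signals. The PLC output
# (open command) and the valve's feedback input are both archived in the
# historian; the gap between them is the valve's travel time, delta T.
from datetime import datetime

def valve_travel_time(command_ts, feedback_ts):
    """Seconds between the PLC output going high and the open feedback."""
    return (feedback_ts - command_ts).total_seconds()

cmd = datetime(2021, 3, 1, 10, 0, 0)            # open command logged
fb = datetime(2021, 3, 1, 10, 0, 2, 500000)     # open feedback logged

dt = valve_travel_time(cmd, fb)
print(dt)                  # 2.5 seconds of travel time
needs_service = dt > 2.0   # hypothetical wear threshold for this valve type
print(needs_service)
```

Trending this delta T per valve over weeks is what turns the raw logs into a maintenance indicator: a slowly growing travel time flags wear before the valve fails.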
Ivo Lemmens: 00:44:55.908 Another real practical example was the debugging of a cold room. It was a very funny one. We have one cold room which is fully automatic, a bit too automatic even. The doors open automatically while carriages take products in and out. And once in a while, at weekends when there is no production, those doors opened and closed, and nobody could figure out the real reason. That was when we started logging all the inputs and outputs of that door, with the little wagon running in and out. And it appeared that once a week, or twice a month, there was a problem with one detector that was not positioned right relative to its reflector. And because there is no one inside the cold room, certainly not during the weekends, we had a very good example here of how the monitoring (not prediction, but monitoring) can help us in the root-cause analysis of failures.
Ivo Lemmens: 00:46:39.401 And then, of course, we can also monitor our MES and our Historian itself. Factry made some nice dashboards, logging all the services that are running. And when one service runs out of time, that is monitored, it pops up, and we can take action to put it right. So these are four interesting benefits we have, but of course, we are also thinking about what to add next. We are certainly thinking about something like predictive maintenance, which we don't have right now. But this database will certainly give us sufficient data to train an algorithm and help us in maintenance. Frederik, up to you. Oh, yeah. [crosstalk].
Frederik Van Leeckwyck: 00:47:53.733 But that’s your advice. [laughter].
Ivo Lemmens: 00:47:56.092 Frederik mentioned it also. The system is so easy to install, so easy to implement and to use, and the cost is low. I can only advise, when you are in doubt whether to install it or not: just give it a try. Just try it and you will learn at least something. You will learn how to use it and what benefit it can give you. And if it isn't right for you, you can try something else. But we are very, very confident in InfluxDB.
Frederik Van Leeckwyck: 00:48:47.263 Thanks, Ivo. And you can read a bit more about this on the blog as well. So with this, I would like to start wrapping up. Going back to the beginning: after putting all of this together, we were able to answer those three questions in a reasonable amount of time. The human is no longer at the center of the information flow, no longer the centerpiece linking all the data together; this person has become a user of information that flows more or less automatically and is properly linked. Then we've seen with these last four examples that additional benefits popped up because we now have a common data platform where we can add more and more things. As Ivo mentioned with the detector and the reflector, for example: the database is there, just add the signals and you learn from them. Same with the valves, and same with the monitoring of the systems that are doing all of this work. So there are additional benefits coming all the time because you look at it as a platform rather than a point solution. And all of this has been done with open protocols and open source software.
Frederik Van Leeckwyck: 00:50:13.831 So the takeaways here, I believe, are we would propose to build a platform for your process data and not a collection of point solutions that do one thing and only one thing. Build a platform, collect everything there, and build your applications on top of that. Think really well about your naming structure because you will reap benefits in the end. Work iteratively with all stakeholders. Nobody has the magic solution with your data. But if you have many stakeholders on your production site, working together with that data, asking questions, wanting to collect more data so they can get more insights, this is a really beautiful engine to see running. And I believe we can definitely say that data has had a clear impact on the business with this case. So with this, I’d like to thank you and open the floor for questions.
Caitlin Croft: 00:51:19.741 Awesome. Thank you, Frederik and Ivo. We have a ton of questions for you. So a couple people did ask if the slides are going to be made available. Yes. They will be made available later today as well as the recording. So the first question is, is the download of the recipe done from the SCADA or through a HMI of the MES system?
Ivo Lemmens: 00:51:47.578 A technical question, Frederik?
Frederik Van Leeckwyck: 00:51:51.569 Would you like to answer that, Ivo?
Ivo Lemmens: 00:51:56.246 Yeah. So the recipes are stored in our MES system, and the MES system pushes the recipes down to the SCADA database. That gives us the enormous advantage that when there is a network interruption, we don't need the MES system or the upper system; we can continue producing with our local SCADA system. But when there is a change in a recipe, it is immediately pushed down to the SCADA systems.
Caitlin Croft: 00:52:43.550 I’m not quite sure if this question is asking if you guys are using other protocols or if the system does. Anyway, any other protocols besides OPC-UA and BACnet, for example, Modbus and MQTT?
Ivo Lemmens: 00:53:01.724 Well, in our factory, no. We like to standardize as much as possible. But everything, in fact, is possible: Modbus is possible, MQTT is possible, [Internet IP] is possible, whatever you want, in fact.
Caitlin Croft: 00:53:23.163 Awesome. Do you use Grafana for alerting, or do you use something else?
Ivo Lemmens: 00:53:30.165 We primarily use our SCADA system, because we train our operators to look primarily at the SCADA system; that is how they control the equipment and how they are alerted in the first place. Grafana is also used on the shop floor for the operators at a higher level, say, for the MES system and the MRP systems, etc., and for people at a higher level in [the factory].
Caitlin Croft: 00:54:15.296 Great. Do you keep your asset information in separate storage and maintain it centrally, or do you have that distributed across systems, for example, using templates for Influx?
Ivo Lemmens: 00:54:32.978 Frederik, maybe you can answer that.
Frederik Van Leeckwyck: 00:54:35.825 That question is not very clear to me, in fact, what is meant with asset information.
Ivo Lemmens: 00:54:40.358 We hand it off to you.
Frederik Van Leeckwyck: 00:54:43.773 [laughter] Yeah. Maybe that person can elaborate on the question — or, Ivo, do you know what this [crosstalk]?
Ivo Lemmens: 00:54:52.621 No. But I can answer maybe partially. We do not store everything in one database [inaudible] right. I think we have several databases installed, well, in the same Historian. But several databases, yes, about eight or nine, I think.
Caitlin Croft: 00:55:18.638 Are you using the OPC-UA Telegraf plugin to pull the data from the PLCs and SCADA?
Frederik Van Leeckwyck: 00:55:27.857 No. We are using the open source collector that we shared in the beginning. We’ve been using that for a couple of years, and now it’s one of Factry’s collectors.
Caitlin Croft: 00:55:45.719 The data from your ERP and recipe system, is that mirrored in Factry? If so, in which database, as this is not time-series data?
Frederik Van Leeckwyck: 00:55:57.395 Yeah, you're right. This is not time-series data. This is relational data, or at least you can map it to relational data, definitely for the recipes. So that is indeed mirrored in the Factry database. For that we use a Postgres database that holds all the reference values for certain recipes, the versions, etc., those kinds of things. And it is that information that is finally synced with the SCADA system, as Ivo mentioned. The time-series database is then used to store what is actually happening in production. And it is those two things that we bring together again in the reporting: what did we expect from the recipes, which is in the relational database, and what has actually happened, which is in the time-series database.
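The reporting join Frederik describes, relational recipe references on one side and time-series actuals on the other, can be sketched like this. The dictionaries stand in for the Postgres and InfluxDB queries; all names and values are hypothetical.

```python
# Sketch: per-batch reporting that combines what we expected (recipe
# reference values, relational) with what actually happened (aggregated
# time-series measurements). Plain dicts stand in for the two databases.

recipe_refs = {   # would come from the Postgres recipe database
    ("recipe-X", "temperature"): (30.0, 2.0),   # (target, tolerance)
}
batch_actuals = {  # would come from an aggregation over InfluxDB data
    "batch-001": {"recipe": "recipe-X", "temperature_avg": 31.2},
}

def batch_within_spec(batch_id):
    """True if the batch's average stayed within the recipe's tolerance."""
    batch = batch_actuals[batch_id]
    target, tol = recipe_refs[(batch["recipe"], "temperature")]
    return abs(batch["temperature_avg"] - target) <= tol

print(batch_within_spec("batch-001"))   # True: 31.2 is within 30.0 +/- 2.0
```

In the real setup the same per-batch comparison would run over every parameter in the recipe, which is what makes questions like "show me all batches on recipe X within spec" answerable.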
Caitlin Croft: 00:56:44.880 What kind of security do you have in place for the SCADA system?
Ivo Lemmens: 00:56:51.556 Well, this relates to the SCADA system as well as the Influx database. The production network is strictly separated from the rest of the network. It's virtualized. Every entry is blocked, so no hardware entry, I mean USB ports or whatever. Everything is closed and only accessible via special gateways and firewalls.
Caitlin Croft: 00:57:27.508 In the environments — oh.
Frederik Van Leeckwyck: 00:57:28.047 [crosstalk] To add to that: one of the benefits of using these kinds of technologies is that if you extract all your raw data and put it into InfluxDB, which sits at a higher level, you can have more people accessing that data without having to mess with firewalls and without giving people access to something that is strictly for production.
Caitlin Croft: 00:57:57.148 In the environments I work with, we sometimes already have 1,000 tags per machine, and we have more than 200 machines. This environment uses a historian which is a very closed system. What is your experience with the data volumes and throughput of InfluxDB in this environment? I’m assuming it’s on-prem.
Frederik Van Leeckwyck: 00:58:19.979 From our experience (maybe the people from InfluxDB can add to this), this is a really, really performant system, even on single [nodes]. But there are, of course, cases where this is not enough and you would need to scale, for example with clustering. It all depends a bit on how your data is going to be structured and also at what resolution you're going to collect your data. So 1,000 tags per machine and 200 machines: if that would mean, for example, 1 write per minute per tag, that's a lot, but it's a big difference compared to that amount once per second, of course. So the write speed is to be taken into account, and on the other hand, the read speed [inaudible]: what is consuming that data? All of that should be taken into account to design your systems. But maybe the people from Influx can add [inaudible] to that answer as well.
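The sampling-rate difference Frederik points out is worth making concrete. A quick back-of-the-envelope calculation for the questioner's scenario:

```python
# Back-of-the-envelope throughput for the scenario in the question:
# 1,000 tags per machine across 200 machines, at two sampling rates.

tags = 1_000 * 200           # 200,000 tags in total

writes_per_sec_at_1min = tags / 60   # each tag written once per minute
writes_per_sec_at_1sec = tags        # each tag written once per second

print(round(writes_per_sec_at_1min))   # roughly 3,333 points/second
print(writes_per_sec_at_1sec)          # 200,000 points/second
```

The two rates differ by a factor of 60, which is exactly why the answer to "will a single node cope?" depends on resolution as much as on tag count.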
Caitlin Croft: 00:59:23.574 Okay. I think it speaks louder when our users say it's really [crosstalk]. We, of course, will say that. So I know we're already a couple minutes over. There are a bunch more questions; we'll try to get through a few more of them. And if there's a question of yours that hasn't been answered today, you can always email me, and I'm happy to connect you with Frederik and Ivo. All right. Let's see here. Are you collecting electricity and energy consumption data?
Ivo Lemmens: 01:00:01.159 Yeah, for sure. Not only electricity but also water consumption, [inaudible] consumption, everything that goes with it. Not only the raw production, but also everything around it. Yeah.
Caitlin Croft: 01:00:24.075 Are you using an asset performance management system, like Maximo, to track all assets and their work orders?
Ivo Lemmens: 01:00:34.259 At the moment we are using a computerized maintenance management system called Coswin, but it is not currently connected to our InfluxDB or any [of our] systems. I say not at this moment, because that is certainly in our heads for the future. Yeah.
Caitlin Croft: 01:01:01.747 Okay. Great. What kinds or types of sensors are you using to collect the data?
Ivo Lemmens: 01:01:12.830 Well, since it's an industrial plant, most of the sensors, when we're talking analog, are 4 to 20 milliamps, and when it's a digital signal, it is a 1 or a 0. They are all connected to a PLC. That PLC is then connected to the collector, and the collector pushes the data up to InfluxDB. And like Frederik said, I mentioned 50 earlier, but I think we are closer to 60 PLCs right now.
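Those 4-20 mA loop signals are conventionally scaled linearly into engineering units, with 4 mA at the bottom of the instrument's range and 20 mA at the top. A minimal sketch of that scaling, with a hypothetical function name and range:

```python
# Sketch: convert a 4-20 mA analog loop current into an engineering value.
# 4 mA maps to the low end of the instrument's range, 20 mA to the high end;
# currents outside the loop range usually indicate a fault (e.g. broken wire).

def scale_4_20ma(current_ma, low, high):
    """Linearly scale a 4-20 mA loop current into [low, high]."""
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("signal outside the 4-20 mA loop range")
    return low + (current_ma - 4.0) * (high - low) / 16.0

# Hypothetical temperature transmitter ranged 0-100 degrees C:
print(scale_4_20ma(12.0, 0.0, 100.0))   # 50.0: mid-scale current
```

A convenient side effect of the 4 mA live zero is that a reading of 0 mA is distinguishable from a genuine low measurement, which is why out-of-range currents are treated as faults rather than data.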
Caitlin Croft: 01:01:54.442 Are any of those sensors wireless? Was there any issues with wiring them?
Ivo Lemmens: 01:02:00.619 Issues in wiring? No, because the wiring is not done especially for InfluxDB; it's there anyway for controlling, measuring, updating, whatever. Wireless sensors? I would say no, but there are some wireless protocols we use, where a wireless link between two decentralized I/O islands is better than running cable. But no wireless sensors, no.
Caitlin Croft: 01:02:40.786 This might be a question for Frederik. Where is InfluxDB running, cloud or local infrastructure? [crosstalk] —
Frederik Van Leeckwyck: 01:02:48.656 This is running locally. Yeah.
Caitlin Croft: 01:02:50.764 Okay. Yeah. It sounds like this person might be also working with [factories], and there’s always concerns about having things on-prem versus in the cloud. Do you bring all of the process parameters to SCADA and then send it to your database, or do you use independent gateways, like Modbus to MQTT, at different stages of the process to send the data to your servers?
Ivo Lemmens: 01:03:24.296 So we mainly use OPC-UA to send the data. Frederik, help me if I'm explaining this wrong. We have, I can name a brand here, a SIMATIC NET server running, which collects the data from the PLCs via a special SIMATIC protocol, I think, and then it goes via the Factry collector to InfluxDB. That is one part of it. We also have PLCs connected directly via OPC-UA, because the new PLCs have OPC-UA on board. And then we have BACnet connected via an [on-prem] server; it's a Matrikon server, I think. Yeah, it's a Matrikon server. But there was another part of the question, Caitlin, about the different stages?
Caitlin Croft: 01:04:41.330 Yes. Yeah.
Ivo Lemmens: 01:04:42.179 We always use the same connection for a given PLC. But what we do is not only archive the raw data, I mean the analog and digital values of the PLC, but also log some internal tags of the PLC, say, the program steps or some internal values, to follow the stages of the recipe or the running process, if that's an answer to the question.
Caitlin Croft: 01:05:25.530 So I'm going to take one more question live. I apologize to everyone that we aren't going to be able to get all of the questions answered; I just want to be respectful of everyone's time. But of course, you can always email me, and I'm happy to connect you with Frederik and Ivo. The last question is: Is BACnet used directly to send data to InfluxDB, or is it mapped to OPC-UA and then to InfluxDB?
Ivo Lemmens: 01:06:00.324 It is BACnet to OPC-UA. So the Matrikon server is, in fact, a converter between BACnet and OPC-UA. That's right.
Caitlin Croft: 01:06:13.518 Great. All right. Well, thank you, everyone, today for joining today’s webinar. Once again, it has been recorded, and the recording as well as the slides will be made available later today. And please feel free to email me if you have any other questions for Frederik and Ivo. Thank you, everyone, and I hope you have a good day.
Frederik Van Leeckwyck: 01:06:37.036 Thank you.
Ivo Lemmens: 01:06:37.114 You’re welcome, everyone.
Frederik Van Leeckwyck
Co-Founder and Business Development Manager, Factry.IO
Frederik Van Leeckwyck is the Co-Founder & Business Development Manager at Factry.IO, a solution that provides real-time and historical insights to everyone in a factory environment, from the plant manager to the operator, all using open source technologies. His many years of experience in the Industrial IoT industry made him realize that the industry needed a fresh approach to control and automation.
Project Manager Automation, Algist Bruggeman
Ivo Lemmens is the Project Manager Automation at Algist Bruggeman. In this role, he is responsible for automation and electrical maintenance. Since 2016, he has pioneered the use of open source software, for example InfluxDB, in the industrial setting.