How Sensor Data Can Help Manufacturers Gain Insight to Reduce Waste, Energy Consumption, and Get Rid of Pesky Spreadsheets
Webinar Date: 2019-07-30 08:00:00 (Pacific Time)
In this webinar, learn how a long-time Industrial IT Consultant helps his customer make the leap into providing visibility of their processes to everyone in the plant. This journey led to the discovery of untapped opportunity to improve operations, reduce energy consumption, and minimize plant downtime. The collection of data from the individual sensors has led to powerful Grafana dashboards shared across the organization.
Watch the Webinar
Watch the webinar “How Sensor Data Can Help Manufacturers Gain Insight to Reduce Waste, Energy Consumption, and Get Rid of Pesky Spreadsheets” by filling out the form and clicking on the download button on the right. This will open the recording.
Here is a transcript of the webinar “How Sensor Data Can Help Manufacturers Gain Insight to Reduce Waste, Energy Consumption, and Get Rid of Pesky Spreadsheets”. This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
- Chris Churilo: Director Product Marketing, InfluxData
- Bastian Maeuser: IT Consultant, NETZConsult
[Bastian Maeuser] I’m Bastian Maeuser. I run a consulting company in Germany. Thanks for having me here. It’s an honor to share my experience with other Influx users. Today, I will focus on a project I did at a manufacturing plant in the print industry. I tried to keep everything as IT-ish as possible, but you will probably pick up a few print-industry specifics along the way. We cannot avoid this completely.
So I think I will start off. The situation, which is not specific to the print industry but common to many manufacturing industries, is that on the one hand, we put manpower, consumables, and energy into a black box. There’s a bunch of machines. And on the other hand, there’s a product falling out of the machine, and hopefully the dollars spent are less than the dollars I get for the product. Much business-number validation is made on assumptions. Too little validation of what is actually happening inside my black box is really done. This was an issue, and this was a question that came up specifically at this customer. So we made a project out of it.
We had fast results. We did the initial implementation as a proof of concept, and that was really a matter of just a few days, because the time to awesome on the TICK Stack is really very short, so you can build a proof of concept very quickly. Most of the project work was to interpret the numbers we get, to validate them, and to look into the process where we gain the metrics from. Do we get a measurement directly from the instrument, or is it already a calculated value? This was a major part of all the work, not the programming itself, or setting up the database, or doing the technical stuff. I will say a few words about the machinery in operation here. It is high-volume offset printing machinery: one Manroland LITHOMAN IV 80-page machine, two LITHOMAN S 96-page web offset printing machines, and a few smaller machines. The high-volume machines have an output of roughly 4.8 million A4 pages per hour per unit. Usually, the whole compound runs in 24/7 production, with a variable maintenance window at the weekend of between 24 and 36 hours. And if you want to run such a machine, or such a company, you don’t buy it turnkey-ready. You have equipment from different suppliers, and this brings a lot of complexity into all of this.
Okay. The dilemma. Industrial plants — and this is not only print; I have customers from the textile industry and the foil industry as well — suffer from a notoriously high heterogeneity of data sources and access protocols. They all have their own subsystems. They all have their own development philosophy. It’s hard to bring this together. If you wanted permanent monitoring or reporting — until we did something about it, it was work from hell. Twice a year, the technical director aggregated the data from all the separate reporting systems that were delivered with the subunits into what we called the Excel sheet of hell. Besides the fact that it was always weeks of work, it was prone to errors, and a frustrating thing for the technical director as well. And to be honest, what is the benefit of knowing you didn’t perform well half a year later? It’s not really very helpful. And this is a problem with all the reporting: most of it is job-based, so you get a report of what was done after the job. A job runs between one and three days, and usually, if something is going wrong or developing in the wrong direction, you want to know when it happens — not when the job is finished and you have already had massive waste and problems. So this was one of the main intentions in starting the whole project: to have real-time data.
None of these systems is really capable of delivering real-time data. If you are lucky, you have some database interface you can pull data from; then you have to write custom code to pull and aggregate the data so you can put it into another database. For some systems — I’ll talk about this later as well — you have to do some moderate reverse engineering, because you don’t always have the cooperation of the machine vendor. It is very difficult sometimes. They tend to lock you in and say, “Okay, we have this and this and this tool. It’s a super-duper Java application, and you can have reports after the job and see what went wrong,” but that doesn’t really give you what you want. So we took the opportunity and made it ourselves. We have the plant control data from the main plant control system. It is called P-Compi MI. It’s a Postgres database with near real-time data which we can pull at an interval. Meanwhile — since a few weeks ago, after a six-month negotiation — Manroland, the main vendor, streams us important metrics from the color control to an MQTT broker. That was a huge step for them, and for us of course as well. So we have real-time quality KPIs we can derive, which nobody had paid attention to until now, and this was very, very useful for us. On the 80-page machine, we’re still working on that. There is another vendor for the color control, but they promised to deliver it over MQTT as well.
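As a rough sketch of what ingesting such a stream can look like: the function below flattens one color-control MQTT message into InfluxDB line protocol, ready to be forwarded to Telegraf or written directly. The topic layout (`colorcontrol/<machine>/density`) and the field names are illustrative assumptions, not the actual Manroland schema.

```python
import json

def color_payload_to_line(topic: str, payload: bytes, ts_ns: int) -> str:
    """Flatten one color-control MQTT message into InfluxDB line protocol.

    Topic layout ("colorcontrol/<machine>/density") and field names are
    assumptions for illustration, not the vendor's actual schema.
    """
    data = json.loads(payload)
    machine = topic.split("/")[1]
    # sort fields so the output line is deterministic
    fields = ",".join(f"{k}={float(v)}" for k, v in sorted(data.items()))
    return f"color_control,machine={machine} {fields} {ts_ns}"
```

In practice this kind of transformation sat in a Node-RED flow rather than a standalone script, but the shape of the data is the same.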
Another important thing in the compound is energy. We use a lot of electrical energy — the peak is around 4.5 to 4.6 megawatts — so we want to know where the energy is going. There was already an energy measurement solution in place, but it was not consistently deployed. We had to add sensors in specific places to follow the energy flows from the main transformer stations: which subunit uses which amount of energy in which machine regime, or state of the machine. But for business-number validation, we also want ERP data. There is a very special solution called Chroma, from a German software development company. It’s a traditional Oracle 12-based client-server Windows application without any usable API. So we had to build a direct Oracle connector. We just looked at the data model — it’s 1,001 tables, a database of hell — but we worked through it and can extract the essentials we need on our dashboards.
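A minimal sketch of the “extract the essentials” step: once rows come back from the Oracle database (the actual connection and SQL against the Chroma schema are omitted here; the column order and measurement name below are hypothetical), they only need to be reshaped into line protocol.

```python
def erp_rows_to_lines(rows):
    """Turn fetched ERP rows into InfluxDB line protocol.

    `rows` stands in for a cursor.fetchall() result; the column order
    (job_id, machine, paper_kg, ts_ns) and measurement name are made up
    for illustration — the real Chroma schema has 1,001 tables.
    """
    return [
        f"erp_job,machine={machine},job={job_id} paper_kg={float(paper_kg)} {ts_ns}"
        for job_id, machine, paper_kg, ts_ns in rows
    ]
```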
We have the robot cell. It’s running internally on MS SQL, but we don’t have any client access there. After some talks, we gained access to an OPC UA server and were able to build a bridge between the OPC UA system and an MQTT broker, from where it finally ends up in Influx. Then we have a fluid management system. We are promised data from it, but we don’t have it yet. The fluid management system is, like the name says, for managing fluids — for remoistening the web when it comes out of the dryer. This is not done yet, but we will get it on an MQTT broker as well. And we have a central ink supply system. We use big amounts of ink, and this is why we want to know what’s going on there as well.
So at the beginning of the project: what approaches do we have to solve the problem, or to improve the situation? Excel — we’ve been there. Really, no way. That wouldn’t work. It doesn’t scale. That’s not an option. RRD collectors — that’s how statistics were collected 10 years ago. They don’t scale well in terms of flexibility of the data model, and with high amounts of data they get slow and sluggish. You don’t want that. We thought about building a specific application on a table-relational structure — a data model tailored for our data and for the applications we would build on top of it. But this wasn’t such a great idea either, because table-relational structures tend not to scale well with time series data: tables grow by millions or billions of records and get slow. You cannot break them up. It ends up in a database of hell. You don’t want that either.
The Elastic Stack — we looked into it, but I knew from IT projects already that it’s more aimed at alphanumeric information, log information, and doing stuff with textual data. We mostly have numbers we want to work with, so this was not the best fit. The best options turned out to be either Graphite with Grafana or TICK with Grafana. And from other open-source projects, I had already experienced some scalability problems with Graphite, which tended to get a little sluggish at high ingest rates. So InfluxDB, Telegraf, Chronograf, and Kapacitor were the database back end we decided to use, with Grafana for the visualization. The nice thing about that — I think it’s been on the market, or available, for about a year — is that if you use this stack, you have the option to use Loud ML as a machine learning API and do some kind of one-click machine learning, which comes in handy in specific situations.
The reasons for choosing the TICK Stack specifically: the ingest rates — we have no problem feeding it at very high ingest rates. We made some tests and stopped testing at 500 values per second on a single instance on a bare-metal machine; we don’t come even close to that here, so this won’t be a blocking factor. The storage engine concept, in my opinion, is compelling — the speed on old data as well; the shard concept comes in very handy there. The space efficiency — the compression — is amazing. And the space reclaim: if data runs out of a retention policy, it simply deletes the shard file from the hard disk and you have the space available again. For comparison, in a table in an SQL database, if you have one billion records and delete 100 million, you usually don’t reclaim the space; it can be reused for new data, but reclaiming the disk space won’t work on a normal SQL database that way. Using this stack, we have an extensive ecosystem of plugins for input and output, which comes in very handy — this is specifically Telegraf. It acts for us as a kind of Swiss army knife for connecting various MQTT sources and OPC UA sources, and that made it very attractive. I’m aware of the fact that you can use Telegraf with other databases as well, but why should we? And last but not least, it has proven to be production-ready, because many big names in IT already rely on it. There is [PayPal?], for instance, there’s Tado, there’s Tesla, which uses it for the Mothership API. So we were no beta testers. That was important for us as well in making the decision.
This is a little simplified diagram of the components we use. On the one hand, some of our data sources we have to pull. Some data we can get from a REST API. There are some SQL connectors we use for the main plant control system, which we connect to with ODBC, and there is our ERP system, which we pull with the Oracle connector. We can do all this with a Node-RED flow, which comes in very handy here. On the other hand, we have the streaming metrics. We would love to have all metrics streaming, but we don’t have that yet. We receive the color control data from the color control system — since a few weeks ago — on the MQTT broker, so every second we get specific values from a densitometric camera, and we have performance data from the robot cells. Those are two ABB robots for palletizing; they have an OPC UA server, and there’s a little tool you can find on GitHub that puts this data from OPC UA onto MQTT. Then we use Node-RED to transform it in the way we can use it best to put it into InfluxDB. We first put it back onto MQTT, from there it goes to Telegraf, and then we have a unified structure in InfluxDB from which we can build our dashboards. The main visualization is done with Grafana — the dashboards which are used in the company are made with it.
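The “unified structure” step can be sketched as a small normalization function: each OPC UA sample from the robot cells gets mapped onto one measurement/field pair before it goes back onto MQTT. The node ids and the mapping table below are invented for illustration; the real flow lives in Node-RED.

```python
# Hypothetical mapping from OPC UA node ids to our unified measurement/field schema.
NODE_MAP = {
    "ns=2;s=Robot1.Torque": ("robot_cell", "torque"),
    "ns=2;s=Robot1.FanRpm": ("robot_cell", "fan_rpm"),
}

def normalize(node_id: str, value, ts_ns: int) -> str:
    """One OPC UA sample -> one InfluxDB line, tagged with the source unit."""
    measurement, field = NODE_MAP[node_id]
    unit = node_id.split(";s=")[1].split(".")[0].lower()
    return f"{measurement},unit={unit} {field}={float(value)} {ts_ns}"
```

The point of the mapping table is that every new data source only needs one more entry, instead of one more custom script.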
Steps we had to take which are quite important: Identify the data sources that matter. You don’t want to monitor everything, because then you drown in a sea of data you cannot work with, and that gets a little unwieldy. We had to deploy instrumentation and extend it where required. Like I said at the beginning, there were places where we had to add power usage sensors, and places where we had to add pressurized-air sensors — we also monitor our pressurized air usage in the plant. We had to do some technical interface design: some of it works with plain Telegraf, some REST APIs we can call, some require moderate coding. Most is done in Node-RED, and some with custom Python code to adapt to our ERP system’s Oracle database, because Node-RED cannot talk natively to Oracle. But that was done in a few days — this is no rocket science at all. Then there was the dashboard design — finally giving the data some colorful life, some nice and shiny dashboards — and finally deriving KPIs from it. That was a thing where we had to have intensive talks with the technical directors and all the people who are deep into print: what data, or what aggregation of what data, makes sense to derive a KPI from? What is an indicator of performance for you? What is helpful for you to see, to know, to have as a real-time value?
And of course, we wanted to define criteria for alerts. There are very simple things. For instance, if the machine hasn’t been running for two hours but the dryer is running, send a little Mattermost or Slack notification to the director’s channel that just informs them the machine isn’t producing but the dryer is running. He can ignore it, or he can see: okay, we now have 12 hours of maintenance — maybe I should shut off the dryer. A nice side story about this energy thing: usually, when there was maintenance going on at the weekend, they always said, “Okay, we can leave the dryer running. It only uses gas for like €300 in that 36-hour window.” That is true — it only uses that amount of gas. But if the dryer is running, they didn’t keep in mind that the cooling tower is running as well. And the cooling tower uses a whopping 300 to 350 kilowatts, and this is like 7,000 to 8,000 euros in that 36-hour time span. So it actually made sense to turn off the dryer, because there was a cascade of other things that depended on it. And this was also important for us in defining those alerts: for instance, if the dryer is not running, someone has to make sure the cooling tower isn’t running as well. These are all things that hadn’t been done before, and that gave us an enormous opportunity to save money on energy for a few hours of work, very early in the project.
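The alert rule and the cost estimate above can be sketched in a few lines. The two-hour threshold and the kilowatt figure come from the talk; the message text and the energy price are illustrative parameters, not actual values from the project.

```python
def dryer_alert(machine_running: bool, dryer_running: bool, idle_hours: float):
    """The rule from the talk: machine idle for two hours while the dryer
    still runs -> notify the director's channel (message text is made up)."""
    if not machine_running and dryer_running and idle_hours >= 2:
        return "Machine is not producing but the dryer is still running"
    return None

def cooling_tower_cost_eur(kw: float, hours: float, eur_per_kwh: float) -> float:
    """Cost of the cooling tower that runs whenever the dryer runs."""
    return kw * hours * eur_per_kwh
```

In the deployed system such rules ran as Kapacitor triggers rather than standalone Python, but the logic is this simple.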
We ran into a few difficulties. What you always have to do — or what probably has to be done — is a moderate amount of reverse engineering, because you don’t always have a machine vendor that helps you. Often, in the machine-building industry, they are machine builders: they have one or two developers, and software isn’t their main thing. They designed the software that runs the machine; it has been on the market for 10, 15, 20 years, and nothing, or only a few things, have changed since then. And if you approach them and say, okay, we want a metrics export, or we want an export of the live data onto an MQTT broker, they often don’t even know MQTT, and they don’t know InfluxDB either. They are not into current technology for some reason, and this is where the job starts to get a little complicated. You sometimes have to be a psychologist to convince them, without giving them the feeling that they are behind — at least that was my impression — and get them on board. Sometimes it works; sometimes it doesn’t. You have to deal with very outdated hardware and software. Some of the subunit controls are still running Windows 98 and Windows XP, and they probably won’t release newer versions — this is probably not going to happen. You have to cope with it and see whether it is feasible to get an export, or some kind of interface, some simple port to extract data from, or whether you can get a more recent version of their product.
Now, like I said, the negotiations with the machine suppliers can be challenging. The bigger companies are quite open to this. Sometimes they want to sell you their own solution — they call it Maintellizence; it’s a Siemens product, I think. But then you’re still stuck with their subunit, or with their specific sub-instrumentation. They don’t see what the other units are doing, because the interfaces are too simple. So you don’t want that. And the data validation was partially difficult. You have one value coming from three units, and all three values are different. You have to find out why they are different, which is the real measured value, and which value has already been calculated or tailored in some way by some fancy algorithm in the unit control. That meant much reading of documentation, where it was available, or trying to talk with their engineering departments.
Good habits turned out to be a few. Implement security right away: if you set up the MQTT broker to connect the various data sources, implement authentication right from the start — better yet, use TLS client certificates. You won’t do it afterwards once everything is running; do it when you start the project. It costs like five minutes. We put everything in separate VLANs to have it completely separated from the internet and to keep things clean. Like I said a few slides ago, collecting every metric that may be available isn’t a good idea either, because you drown in a sea of data you might not use or need. So don’t do it. Also, avoid redundancy of values. Sometimes you get the same value with slightly different timestamps, because a unit at the back of the machine might give you a specific sensor value a little later than the one at the front. So decide where you want to take the value from, and don’t calculate with it twice — and don’t make the error of using the value from the front sensor in one dashboard and the value from the other sensor or unit in another. That isn’t a good idea.
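The redundancy problem above can be handled with a tiny deduplication pass: keep one sample per tolerance window and drop the near-duplicate that the second sensor reports slightly later. This is a sketch under the assumption that duplicates land within a known time tolerance; the tuple layout is invented for illustration.

```python
def dedupe(points, tolerance_ns: int):
    """Keep one sample per tolerance window; near-duplicates reported
    slightly later by a redundant sensor are dropped.

    `points` are (ts_ns, value, source) tuples, any order.
    """
    kept = []
    for point in sorted(points):
        if kept and point[0] - kept[-1][0] < tolerance_ns:
            continue  # same physical reading, just reported later
        kept.append(point)
    return kept
```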
And write interpretation documentation. It’s not so easy to keep a good overview of how measurement values come together. Where is the physical location of the sensor? Is it a raw value, or has some kind of processing already been applied to it? And if that’s the case — because you cannot get the raw value — what is the formula for that alteration? It’s always good to have a few words written about it. We did that in a wiki, and that turned out to come in very handy. And don’t end up with a directory full of custom scripts. We’re very happy to just put all data transformations in Node-RED flows. It’s much more structured, it’s easier to keep an overview of what has been changed, and it keeps your data collection standardized. You can have one sub-flow, for instance, that converts a data source or a raw value to an Influx-compliant value. If you have a separate Python script or something like that for every data source, it’s not a good idea. It doesn’t scale well.
Okay. Some words about what we are currently gathering. Like I said, we are gathering detailed data about electricity consumption — that was indeed the biggest saving aspect we were able to achieve. We got very specific metrics about paper usage and paper waste. We use about 100,000 metric tonnes of paper per year, and you want to quantify the waste. When did the waste happen? Where are the waste-intensive time spans? Do they correlate with specific paper or ink use, or with other environmental parameters? The visibility tools can be very helpful for identifying waste causes. We are currently working on the last step in this industry 4.0 approach: using signals we generate from the data we collect in InfluxDB to go away from fixed washing cycles, which are standard in the offset printing industry, to variable washing cycles. Because we have the color data over time, we can have a little algorithm that detects if a dot gain value — I can show it on the next slide — if a specific value is getting steeper, then you know you have to wash. But you don’t have to wash, for instance, every two reels. Washing is usually done — has to be done — to keep the print quality clean.
But you can save massive amounts of energy and paper if you only wash when required, and not when the experience of the people says you should wash every two reels. Sometimes you can wash every four reels, because the print product or the constellation of ink and environmental parameters allows you to do so. And we want to predict situations to avoid unplanned downtime. A little side story: we have the post-processing area where all the print products are picked, and there are conveyor belts with plenty of drives we’re getting metrics from — seven values per drive, like torque, revolutions, amperage, voltage, temperature, fan speed. And once there was an outage. We noticed a slight deviation in the amperage the motor was drawing; it wasn’t too obvious, because it depends on the torque being used in that specific moment — the value varies and fluctuates throughout the running of the machine. So we could use Loud ML to learn the situation leading up to that outage. And indeed, the TensorFlow used by Loud ML detected this pattern again some weeks later — it was about two months later. We were able to order the conveyor belt drive in advance, and we were able to avoid the 6 to 10 hours of downtime it usually takes to get the spare parts. That was a big success in using the metrics we gather in InfluxDB with a machine learning tool.
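The “value is getting steeper” wash trigger can be sketched as a simple slope check over the most recent dot-gain samples. The window size and threshold below are illustrative defaults, not the tuned values from the project, which detects a specific geometric shape in the curve.

```python
def needs_wash(dot_gain, window: int = 3, slope_threshold: float = 0.3) -> bool:
    """Trigger a wash when the average increase over the last `window`
    sample-to-sample steps exceeds the threshold (both numbers illustrative)."""
    if len(dot_gain) < window + 1:
        return False  # not enough history to judge the slope
    recent = dot_gain[-(window + 1):]
    deltas = [b - a for a, b in zip(recent, recent[1:])]
    return sum(deltas) / len(deltas) > slope_threshold
```

A rising average delta means the plate is dirtying faster, so a wash is due; a flat curve means the fixed every-two-reels wash can be skipped.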
The ink supply is also an interesting thing we took a closer look at. The company uses massive amounts of ink — 2,700 metric tonnes in 2018. We wanted to validate the consumption. We are now able to better forecast required deliveries, because with very simple algorithms we can see when the tank will probably be empty, based on the history. This is helpful on the one hand. On the other hand, we have the volumetric measurement units on the printing machines themselves, and it turned out — here’s an interesting side story — that 800 tonnes of ink were missing in the calculation. Nobody knew where they were. We used 800 tonnes more in 2018 than were calculated in the ERP system, because what the machine counted was totally wrong. There is this volumetric counter the ink flows through — a mechanical thing with gear drives. The machine assumes that 250 ticks of that gear-drive counter correspond to one liter. That is right when the machine is new. But those volumetric counters suffer mechanical degradation, because there are particles in the ink and it’s a very thick fluid — it’s prone to wear. So we took a closer look, took a bucket, and found out that if we run one real volumetric liter through, depending on the color and the machine, we get far fewer ticks. So we were able to calculate a correction key for this counter, and now we have the real ink values. And this is where our 800 tonnes of ink were missing. This was very helpful for the business-number validation, because what the machine — the [PCOM?] system — said it had consumed was totally wrong. Because they don’t even have a correction key, or the possibility to adjust what the machine interprets as a volumetric liter from the tick counter, we now do it in Grafana. So this was a very helpful thing for us, and especially for the sales and controlling people.
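The arithmetic behind the correction key is worth spelling out. The firmware’s 250 ticks per liter is from the talk; the 200 ticks in the example is a made-up bucket-test result, and the real project applies the correction in Grafana rather than in code like this.

```python
NOMINAL_TICKS_PER_LITER = 250  # what the machine firmware assumes when new

def correction_key(measured_ticks_per_liter: float) -> float:
    """Derived from the bucket test: run one real liter through the worn
    counter and count the ticks (200 here is a made-up example value)."""
    return NOMINAL_TICKS_PER_LITER / measured_ticks_per_liter

def corrected_liters(raw_ticks: float, measured_ticks_per_liter: float) -> float:
    """Real consumption, instead of the firmware's raw_ticks / 250."""
    return raw_ticks / measured_ticks_per_liter
```

A worn counter that only produces 200 ticks per real liter makes the machine under-report by 20%, which is exactly how 800 tonnes of ink can go missing from the books.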
This is the big tactical overview for four of the bigger printing machines, shared across the organization, so everybody knows what’s going on in the machines. At the top we have the names and the speeds of the machines; we have what we call the delta E deviation for the upper and lower side of the web — this is an indicator of quality. We have the delta E deviation across all regulating bands — there are 72 of them across the web width. This is very print specific, I know. And we have the dot gain summary, which tells us about the print quality and about how closely the actually detected printed product matches what the print plate production assumed it would be. We have the print plate production in-house, of course, as well, and this is important for us to see: do we hit the quality target we want to hit?
This is all of it in a time series overview. The green bar is the area where we say, okay, this is within the tolerance of the dot gain — we call it dot gain; it’s measured by density cameras which traverse across the web. As long as everything is within the green area — which is not fixed, it’s variable — things are fine. We get all this data via MQTT from the plant control system. And you immediately see if a value is running out of tolerance. This is a sample where it’s looking really good. When we started doing this, we had samples where it really wasn’t looking good. So it was important for us to have this data visible to everyone, to create awareness of it. They didn’t even use the densitometric values for quality control before, and this was a huge leap. This next slide — I hope you can see it and it’s not too choppy — this, actually, is the camera we use, where we get the raw data that leads to these graphs, where we can see it for each of the colors we print, for CMYK. So it’s quite intuitive for the plant operator and the print operator to see whether he’s doing well or not.
Okay. Let’s load the next slide. Having videos in slides is always a bad idea. Okay. This is, for instance, a waste quantification at the beginning of a job. We can see how the waste develops over time while the job is running. Here, a new job — I had to blur this out because it is sensitive data. Where the green starts, the job is running. We get this from the ERP system, because we know when a new job is loaded into the machine. And here we see how the waste percentage accumulates throughout the job. This is nice for seeing if a shift, or other things, are causing too much waste. We have more interesting metrics: overall waste, washing waste, reel numbers. The blue lines are washing cycles; the red lines are web cuts — these are events the machine generates. And at the lower end, we have the different reels we used, with a number, so we can, for instance, trace quality issues to specific reels and talk with the paper supplier about it. This is very helpful as well.
We see how much waste we produced through washing — within these two days, it was 2.1 tonnes, out of 179 tonnes of paper used in total, of which 174 tonnes were actually good printed product. We have a deep analysis — I had to blur several things here as well — but we see the real measured power consumption: over two days, we used 32 megawatt-hours and 249 cubic meters of gas. We did 31,000 good revolutions; all revolutions were 33,000. And these are the copies — a revolution always produces more than one copy per revolution. We see who was working on the machine, what job was in the machine from the ERP system, and which standard was loaded in the machine from the [PCOM?] plant control system. So we have consumption versus efficiency, versus staff, versus job, versus consumables, versus the quality KPIs we were able to derive. That’s a deeper analysis. We now have the exact measured ink amounts. We have business numbers here — the controlling people are very happy about this. We can now validate exactly whether the power and ink we say we would use for a job really matches the calculation. And we have efficiency analytics in terms of cost of waste. If we have a web cut, it always takes time to get the machine running again — in this case, it was 58 minutes within two days — and we can correlate this to a business number: consumables in dollars and incidents in time in dollars. So the people in controlling and in management are very happy about this.
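The dashboard KPIs in this paragraph reduce to two simple ratios, shown here with the figures from the slide (179 t total paper vs. 174 t good product; 31,000 good of 33,000 total revolutions):

```python
def waste_percent(total_tonnes: float, good_tonnes: float) -> float:
    """Share of paper that did not end up as good printed product."""
    return 100.0 * (total_tonnes - good_tonnes) / total_tonnes

def revolution_efficiency(good_revs: float, total_revs: float) -> float:
    """Share of machine revolutions that produced sellable copies."""
    return 100.0 * good_revs / total_revs
```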
Achievements so far: we finally got production real-time data. Some of it is near real-time — the data we pull from the databases; some we pull every 10 seconds, some every 30 seconds. Not all SQL queries were super performant, so we cannot hammer the database of the PCOM system, or the ERP system, with them. But for some data, 30 seconds is quite okay for us. Meanwhile, we also got some of the metrics streaming: the color control data comes in real time from the camera through MQTT to Node-RED to InfluxDB, and we have it immediately. We were able to achieve significant energy savings. The example I told about the dryer, where it became visible through the visibility tools that they left the dryer running and used a little bit of gas but high amounts of electrical energy — that made up savings of a high six-digit number in euros per year. And this is something you can count.
We have fine-grained values. Even if we want to look at a job that ran six months ago — because it was a similar paper — we can look at it in depth and still have the very fine-grained values, because InfluxDB allows us to do so. We don’t have to downsample the values to coarser resolution to be able to handle them; we can just keep them. We have been collecting all plant metrics for 9, 10 months now, at about 800 values per second per unit. It takes, I think, 15 to 16 gigabytes on disk at the moment, so it’s really nothing. It can handle that very well. We have Loud ML and TensorFlow in place, and we are experimenting with them. There are values where it makes sense, or where it’s useful, to develop ML models, and there are other metrics where it doesn’t make sense or where it’s difficult to apply. But the metrics from the conveyor belt drives, for instance, are a perfect example where machine learning works. We have anomaly detection — this is what we do, for instance, with the triggers on the energy consumption. We have a rule set: if this unit isn’t running, then that unit shouldn’t be running either. This wasn’t available before. And some reminders: if there is no production, you should double-check whether specific units which are running really have to run. Sometimes they do, because the revision window or the maintenance window requires them to be running. But sometimes they don’t need to run, because the machine is simply cold and dark — or should simply be cold and dark.
We now have the possibility of closely validating business numbers against the actual, real measurements. Super helpful; people are happy with it. And we have successfully escaped the vendor lock-in that kept us inflexible. We only ever had a view of part of our data, never the whole picture, and the TICK Stack helped us get that; the TICK Stack with Grafana is really super helpful for this. As for what we are planning for the future: we want to deploy more instrumentation. There are ideas to do vibration and waveform analysis for predicting [inaudible] failures, and to build more sophisticated machine learning models for the conveyor belt drives and the dryer fans. We will probably need specialized hardware for this; we are testing things. We also want even more metrics. For some data sources it would be better if the data were delivered to an MQTT broker; for some sources we still get the data in a hacky way, because we reverse-engineered it and just extracted what we needed from wherever we found it, which is not so clean. It would be nicer to have a proper process for this, and we have ongoing talks with the vendors to expand the possibilities for working with MQTT. For instance, Manroland, where we already got this first version for the color control, is planning to make this parameterizable, so that we can go into the plant configuration, select which values we want on an MQTT broker, and the machine or the process control system exports them to the broker. Currently there is still some hacky stuff in it, but I'm really hoping this will improve in the future.
And the last step we are currently taking is to use the signals we generate from the color control to reduce the washing waste we have. Let me go back two slides. We have these values; for instance, this is a density value. And there is a specific geometric shape the value can take which tells us we should do a washing cycle. We are currently testing detecting this development in the value: if it takes this specific shape, we send a signal to trigger the washing cycle. This is the actual ongoing task I'm working on, to close the feedback loop and use the signals we have to automate a physical interaction in the machine. Then we are where we want to be: a modern solution that actually saves us money, saves waste, and saves CO2 and electricity. That's it so far; those were my experiences over the past nine months implementing metrics in an industrial environment.
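The exact geometric shape the detector matches in the density curve isn't spelled out in the talk, so the sketch below uses a simpler proxy for the same idea: flag a sustained upward drift in recent density readings as the signal to trigger a wash. The window size and drift threshold are invented for illustration.

```python
def needs_wash(densities, window=5, drift=0.05):
    """Flag a drift pattern in color density that could precede a wash.

    Proxy logic (assumption, not the plant's real shape detector): the
    mean of the last `window` readings exceeds the mean of the previous
    `window` readings by more than `drift`.
    """
    if len(densities) < 2 * window:
        return False  # not enough history to compare two windows
    recent = sum(densities[-window:]) / window
    earlier = sum(densities[-2 * window:-window]) / window
    return recent - earlier > drift

stable = [1.40, 1.41, 1.40, 1.39, 1.40, 1.41, 1.40, 1.40, 1.41, 1.40]
drifting = [1.40, 1.40, 1.41, 1.40, 1.41, 1.47, 1.49, 1.50, 1.52, 1.53]
print(needs_wash(stable), needs_wash(drifting))
# False True
```

In the closed-loop setup described in the talk, a `True` result would be published as a signal (e.g. over MQTT) that the machine's washing cycle listens for.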
[Chris Churilo] Awesome. That was even better than the first time I listened to you talk about this, Bastian. So many details here. What really comes to light for me is that it's not just about collecting metrics. What you're really trying to do is understand, all the way down to the job level, what's going on, so you can help your customer understand, if there are any quality inconsistencies, whether it's because of the paper, the ink, the personnel, or the machine. Having that kind of visibility lets you really fine-tune everything, and hopefully your customer is able to produce even more, right? At a higher quality.
[Bastian Maeuser] This is exactly the point. And there are already things that became visible and actions that were taken. We had specific ink and paper combinations that simply didn't work well. In the past, they just coped with it: more web cuts, more incidents during printing, quantity issues. Now we know from the data that if they want to run a job with that combination of paper and ink again, we need to talk beforehand about whether we really want to do it, or about changing parameters before we start the job. There are lots of parameters in the production process that can be changed to avoid, for instance, many web cuts, which are a big cost driver that nobody pays for; that is always our risk. And this only becomes visible with Grafana, because we see it directly in the graph. We can always look at it on the job level, the paper level, or even the staff level. Regarding transparency, the other aspect is that not everyone is happy at first to be so transparent. But once they see that they can perform better, they come around. So in the beginning you're a little bit of a psychologist. But once they see it works and that it's beneficial for their work, they're not so against it.
[Chris Churilo] Yeah, because it’s not about replacing them with the solutions. It’s about enhancing the production.
[Bastian Maeuser] Yeah, you're right. It's about enhancing the production. The replacing phase has already happened in the industry [inaudible] where the automation already took place. A whole unit is run with four people, and it could be done with fewer, but of course we don't want to do it with fewer; we want the opportunity to produce less waste and optimize the process. Because, in the long run, and this is some kind of closing word for me, I'm pretty certain that companies from traditional industry sectors that don't gain those few percent of efficiency by using these technologies, especially in non-growing market segments, will be the ones that vanish from the market when times get a little rougher. And there are always times when it gets a little rougher and some companies vanish from the market. You have to prepare for that now.

[Chris Churilo] That's right. If anybody else has any questions for Bastian, please feel free to post them in the chat or in the Q&A, or you can raise your hand as well. We just have a few more minutes, and I'd be happy to have you ask your question out loud. This is your chance. But if you have questions later on, feel free to shoot me an email and I can pass them on to Bastian. It happens all the time: after we listen to a webinar, we start to realize, "Oh, maybe I should have asked a little more about his data architecture," or, "How was he able to get some of these vendors to open up the pipe, so to speak, so we could get those metrics?" So don't fear. If you have questions afterwards, we can definitely make sure those get answered as well.
So we'll keep the lines open for another minute or two. And I have to say, when we talked before, Bastian, I was also super impressed with your really great Grafana dashboards. The way you laid out those panels was pretty impressive and really simple to understand. Even if you hadn't talked me through it, I could tell what was going on in those panels, which is pretty cool. So we do have a question from the audience. Jorge asks: in which areas did you save the most energy?
[Bastian Maeuser] Well, pardon. Which year did we save the most–?
[Chris Churilo] No. I’m sorry. Which area. Which part of the production run.
[Bastian Maeuser] Yeah. It was definitely the fact that, obviously, there was no central energy management in place before. In the maintenance hours, that's 24 to 36 hours on the weekends when we don't produce, when 24/7 production isn't in place and maintenance can be done, they had the dryer running. They turned off the lights, okay, fine, and they turned off the post-processing. But one of the major energy consumers is the dryer, because the dryer is there to dry the web when it comes out of the printing units. It uses a bit of gas, but that's not the big cost driver. The dryer has to be cooled, and the cooling tower uses, depending on the weather, between 300 and 350 kilowatts permanently. So you have 350 kilowatts running for a day and a half without any use. So why not shut down the dryer? The argument was, "If we start on Monday morning, we have to wait about an hour until the dryer is hot and we can produce again." But this can all be handled, because there is a timer in the dryer. You can say: start on Monday at 6 o'clock, two hours before production, and then it's already heated up. It just needs to be done.
And so there was an energy saving of about €800,000, just by keeping in mind that you shouldn't leave the dryer running over the weekend. It's one click they need to do on the machine when they leave on Saturday morning, when the maintenance window opens. That was the biggest cost saving. The key was making it visible where energy is being used: we have Grafana dashboards showing all subunits, and I can see how many kilowatts are used by which subunit. If we see a machine that isn't producing but is using 300 to 400 kilowatts, then something must be wrong. The idle usage is usually about 50 kilowatts, I think, which goes to pressurized air and a few things that can't be shut off during the maintenance window. But the main energy saving was electricity.
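To give a feel for the arithmetic, here is a back-of-envelope calculation for the cooling-tower share of the weekend waste alone. The tariff and hours are assumptions; the €800,000 figure quoted above covers the whole dryer system over the year, of which this is only one contributor.

```python
# Back-of-envelope: cooling tower left running through the weekend
# maintenance window. Figures from the talk except where noted.
POWER_KW = 350            # cooling tower draw (300-350 kW per the talk)
HOURS_PER_WEEKEND = 36    # upper end of the 24-36 h maintenance window
WEEKENDS_PER_YEAR = 52
EUR_PER_KWH = 0.20        # assumed industrial electricity tariff

kwh_per_year = POWER_KW * HOURS_PER_WEEKEND * WEEKENDS_PER_YEAR
cost_per_year = kwh_per_year * EUR_PER_KWH
print(f"{kwh_per_year:,} kWh/year ≈ €{cost_per_year:,.0f}")
# 655,200 kWh/year ≈ €131,040
```

Even under these conservative assumptions, one subunit left idling accounts for a six-figure annual sum, which is why per-subunit visibility in the dashboards paid off so quickly.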
[Chris Churilo] That's really cool. Jorge, hopefully that answered your question. It was kind of a reminder of our childhood, when our parents used to yell at us to turn the lights off to save energy. Every little bit makes a difference, right? And Jorge responded, "Yes, thank you."
[Bastian Maeuser] Okay. Cool.
[Chris Churilo] All right. Well, we're a little bit over and I don't see any other questions. But as I mentioned, don't worry: if you do have questions, which oftentimes people do, feel free to shoot me an email and I will forward it to Bastian. We will also do a quick edit of this video and post it; it will go live later today. You'll also get an automated email first thing in the morning so you can take another listen to the webinar, and I absolutely recommend it, because having heard this for a second time, I already learned a couple more things that I had completely missed. This was a super-rich presentation, Bastian, and we're so thrilled that you were willing to share your story with us.
[Bastian Maeuser] I'm happy about it as well. May I say one closing word?
[Chris Churilo] Yeah.
[Bastian Maeuser] About the whole thing: I think this is, and will stay, a consultant's job [laughter], because the vendors want to keep you locked in. The manufacturer has a different view on the whole topic; he wants to save money, and the vendor doesn't care whether he saves money, the vendor just wants to keep his customer. So from the manufacturer's standpoint, it's not a bad thing to bring in a consultant for this, and not to get it all from one hand, from one of the machine vendors or one of the unit vendors, because that didn't work for the last 50 years and it won't work for the next 50 years.
[Chris Churilo] That's a really good point, and one that I hadn't even considered. But yeah, it isn't in the vendors' best interest, because their neck isn't on the line. They don't have an SLA they have to adhere to when it comes to the actual production of these units. Very good ending. Thanks again so much. This was really wonderful, and I hope everyone has a wonderful day and starts thinking about some really cool ways they can implement InfluxDB in their environments. Once again, thank you so much, Bastian. We really appreciate it.
[Bastian Maeuser] Thank you as well. It was an honor.
[Chris Churilo] Thank you. Bye-bye.
Bastian Maeuser is an IT consultant at NETZConsult in Germany. He has over 22 years of experience in network engineering and has been building modern IoT solutions for industrial organizations for the past 5 years.