How an Industrial DataOps Solution Improves OEE With a Time Series Database
Session date: May 04, 2021 08:00am (Pacific Time)
HighByte develops industrial software used by manufacturers to address the data context and integration challenges created by Industry 4.0. Their Intelligence Hub collects, merges and models data from industrial assets, products, processes and systems at the Edge using open standards and native connections. Discover how HighByte uses InfluxDB for industrial data collection and entity-mapping.
In this webinar, Aron Semle will dive into:
- HighByte's approach to providing context to industrial data
- How to use Intelligence Hub and InfluxDB to store time series data with context
- How to use Flux to quickly enable OEE (overall equipment effectiveness) applications
Watch the Webinar
Watch the webinar “How an Industrial DataOps Solution Improves OEE With a Time Series Database” by filling out the form and clicking on the Watch Webinar button on the right. This will open the recording.
[et_pb_toggle _builder_version="3.17.6" title="Transcript" title_font_size="26" border_width_all="0px" border_width_bottom="1px" module_class="transcript-toggle" closed_toggle_background_color="rgba(255,255,255,0)"]
Here is an unedited transcript of the webinar “How an Industrial DataOps Solution Improves OEE With a Time Series Database”. This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
- Caitlin Croft: Customer Marketing Manager, InfluxData
- Aron Semle: CTO, HighByte
Caitlin Croft: 00:00 Hello, everyone. Welcome to today’s webinar. My name is Caitlin. And I’m really excited to have Aron from HighByte joining us today. Super excited to have all of you here. This session is being recorded and will be made available for replay tomorrow. Please post any questions you may have for Aron in the Q&A, and we will answer all of them at the end of the session. And without further ado, I’m going to hand things off to Aron.
Aron Semle: 00:33 Hello, everyone. Thanks for joining us. Again, I’m excited - I’m the CTO at HighByte. I’m excited to do this with Influx because I can remember, probably seven or eight years ago in manufacturing, when the new time series data storage stuff was coming out - NoSQL, distributed data storage. And Influx kind of started to come onto the scene, some questions about what it was. And it’s cool to be able to see how much it’s evolved over the years to where we’re at now, and how HighByte and Influx can work together to create a better solution. All right. So the agenda quickly: I’ll run through some slides just to introduce you to HighByte because there are probably a lot of people on the call that aren’t familiar with us. We’re a startup. And then I’ll jump into more of a demo - I’ll build a project for you in real time, showing how HighByte and Influx can work together to store data in Influx with context, which will be critical. And then lastly, I’ll overlay that with an OEE application - some new capabilities that Influx built into the Flux language to calculate OEE from some raw data, which is pretty cool. So I’m from HighByte, right? HighByte, we’re a startup founded in August 2018. We’re headquartered here in Portland, Maine, where you can still wear flannel in - what is it, May? I think it’s May now. Yeah. So still a little chilly, but come visit us when the pandemic is over - it will be over - awesome town. So our mission is to provide manufacturers with the critical data infrastructure that they need to achieve Industry 4.0 or industrial transformation. And specifically, in terms of the genealogy of the HighByte team, a lot of us come from Kepware Technologies. Originally, we were focused on connectivity on the manufacturing floor and communicating with machines.
Aron Semle: 02:20 We’re very intimately familiar with the operations side of the business, what the data is like, and the skill set of the folks in that field. And we’re helping bridge that gap between that OT layer and the IT layer in terms of getting data with context up to new sources. So in terms of a broader picture of what’s happening in the manufacturing ecosystem, it’s a big step change, right? When you look to the left, Industry 3.0, this is what we call the Purdue model in manufacturing, which has historically been the architecture. There are different levels, right? We have wired I/O, machine PLCs from Siemens, Allen-Bradley, you name it. We have a connectivity layer with OPC that collects that data from the machines and then provides it up to local applications, which are SCADA (Supervisory Control and Data Acquisition) and MES (Manufacturing Execution Systems). These are specific to the factory [inaudible] system. Now, in Industry 3.0, the critical part is that these are all their own segments and the data flow is - it’s somewhat bidirectional, but you go wired I/O to PLC, PLC to OPC. There’s no connectivity between the layers other than up and down. So Industry 4.0 comes along, and what’s happening? Well, it’s kind of blown up on either side, right? The whole wired I/O and the sources of data in the factory are exploding. It’s not just PLCs anymore. It’s things that sort of look like PLCs. It’s [inaudible] sensors. It’s, you name it, where we’re sourcing data. And then the demand for the data up top is also expanding pretty much exponentially as companies look at data as an asset, right? And they’re trying to create data flows and DataOps to get that data up to these new systems. And that could be at the very top - that’s Azure and AWS and the machine learning and all the things that they offer. But even inside the factory, that’s maintenance systems, ERP systems.
Aron Semle: 04:15 And what’s happening is you see this spider web of connectivity, because the traditional connectivity between the layers was pretty simple. Now, we’re trying to create all these point-to-point connections between systems, and that’s getting really complex and really difficult. So the source of the problem that HighByte is focused on, that we’re very intimately familiar with, is that industrial data, in general, lacks context and uniformity. And what does that mean? As an analogy, if I look back six or seven years ago when we started pushing data to the cloud from the OPC layer, the equivalent of what we did is we basically took all the data and sent it to a giant drive, a file system right out on the internet. And on that file system, if you looked at it, every data point - or a tag, as we define it in manufacturing - ended up being its own CSV file. So just consider each data point - pressure, temperature, whatever - ended up as a file on this file drive. And then it got this cryptic name as to what it was. And then we just dumped the data in there. And we kind of wiped our hands and said, “IT, you deal with it.” Right. So IT comes in and they get this drive full of like 10,000 data points with these cryptic names, and they have no idea what any of it means. And they’re just trying to do a Power BI report on the data. So you ended up spending a lot of time on the IT side trying to do simple things like, “Okay, of all these files, which ones relate to this machine, and what was the state of this machine at 12:00 noon on a Friday?” Right. That’s actually a lot of computation, to go write scripts and pull apart those files and figure out what’s what. And then once you do that, the problem is it doesn’t scale, right? When you go to the next machine, it’s going to be a different set of files with a different set of names, and the same when you go across factories. It’s not a great solution.
So you ended up spending a lot of money and time on the IT side trying to put the context back around the data - context which actually lives down on the factory floor with the people and the processes that understand it, right? So that’s created a huge problem in the industry.
Aron Semle: 06:11 And what you’ll see is there’s a huge need in Industry 4.0 for that data to be accessible in the cloud and other environments, right? So you have this big demand on the data. You have the data we’re providing, which lacks context. And the way people are trying to solve this today is they’re writing custom Python scripts, or they’re writing Azure Functions or AWS Lambda expressions, to try to piecemeal this context back into the data. And it’s just not a scalable solution, and it ends up costing a lot of money and time. So enter HighByte, right? What we’re doing is we’re focused on the edge - that edge could be at the machine layer or the factory layer - connecting to the multitude of sources that are in the factory to bring that data in, model it - which is the core of what we do, to say that every injection molding machine you have conforms to this model - and then allowing you to move that model up fully contextualized to whatever end system, either ERP, AWS, Azure, etc. And when you do that, it has this exponential effect of just making it much more scalable, because in the cloud, I can develop one dashboard that can work on all my press machines across all my factories. And since we’re intimately familiar with the OT or operations layer, we’re building this such that the OT environment can quickly take it, use it, fill in those models, and there you go. And I’ll show this. And Intelligence Hub - I’ll skip this slide. And I do want to make sure I don’t miss this one, which is InfluxDays, May 10th and 11th of this year, coming right up. So make sure to register. And Caitlin will have more information on that. All right. Enough with slides. So I’m going to build this from the ground up, for better or worse. Cross your fingers. I think it’ll go fine, but to provide some quick context, I have InfluxDB running in a Docker image - I’m on Windows - specifically version 2.0.5.
And in that version, they added the OEE capability. So that’s important to note if you want to try this as well.
Aron Semle: 08:08 The one issue I had running this in Docker is that the Docker image timestamp - this is a Windows problem, I think - fell behind, right? And right now, you can see the date’s correct. But the problem is, if this time falls behind and you’re trying to push data with a newer timestamp, Influx will reject the data, like any time series store will. So if you do run into that problem, restart your machine or make sure you time-sync the Docker image. All right. So I’ve got it running in Docker. And then I have HighByte here running on my local machine out of my dev environment. So what I’m going to do first is I’m going to jump over to Influx. And once you have that on Docker, localhost:8086 will get you there. And in here, quickly, I’m just going to create two things. I’m going to create two buckets, right? One bucket, I’m going to call OEE data in. And that’s going to be my raw OEE data coming from HighByte. And then I’m going to create another bucket just called OEE, and that’s going to be my processed OEE data, ultimately. And the last thing I’m going to need in here is a token, which is basically just Influx’s authorization to allow me to write to those buckets. So I could create an all-access token, which would let me do anything through the REST API - probably not a best practice - so I’m going to create one specifically that limits me to these buckets. I’m just going to call this OEE. And that token is really just a large number here, so I’m going to copy that. All right. Then I’m going to jump over to HighByte. So HighByte is a Java-based application that can run inside Docker. Again, I’m running it off my host. This is the configuration screen that you first see. In HighByte, we have the concept of connections. Those are the inputs and outputs - sources I can collect from and sources I can push data to. Then modeling, which is kind of the core of the platform.
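The bucket and scoped-token setup Aron clicks through in the UI can also be done against InfluxDB’s v2 REST API (POST /api/v2/buckets and POST /api/v2/authorizations). The sketch below only builds the request bodies; the org ID and bucket IDs are placeholders you would look up from your own instance, so treat this as an illustration of the demo, not part of it:

```python
import json

def bucket_payload(org_id: str, name: str) -> dict:
    """Request body for POST /api/v2/buckets (empty retentionRules
    means the bucket keeps data forever)."""
    return {"orgID": org_id, "name": name, "retentionRules": []}

def scoped_token_payload(org_id: str, bucket_ids: list, description: str) -> dict:
    """Request body for POST /api/v2/authorizations: read/write on the
    listed buckets only, instead of an all-access token."""
    permissions = [
        {"action": action,
         "resource": {"type": "buckets", "id": bucket_id, "orgID": org_id}}
        for bucket_id in bucket_ids
        for action in ("read", "write")
    ]
    return {"orgID": org_id, "description": description, "permissions": permissions}

# Example: a token limited to the two demo buckets (the IDs are made up)
payload = scoped_token_payload("org123", ["bkt-oee-in", "bkt-oee"], "OEE")
print(json.dumps(payload, indent=2))
```

POSTing these bodies with the appropriate Authorization header would create the same two buckets and the same bucket-limited token shown in the demo.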
This is where I define that every injection molding machine or every press machine looks like this for this application.
Aron Semle: 10:08 And then lastly is flows. And flows are just going to connect the data sources - the inputs - to the outputs. So the first thing I’m going to do is I’m going to create an InfluxDB connection. And Influx has a REST API - again, in here, you can see the various connections that we have, but I’m going to choose a REST client for this. And the base URL - I’ve got a cheat sheet if you see me looking left - is localhost:8086. And the one thing we’re going to want to add is the authorization. So this is just a header. And it’s the token that I just copied - this is the token that we just created. And the one change - this is HighByte-specific - is we want to change this to URL-encoded. With URL encoding, an equals sign becomes %3D. So that’s the only thing you need to change. I think all the tokens I’ve seen generated by Influx have the equals at the end. Everything else is fine. So I’m going to submit that. And then I’m going to create an output. So I have a connection now to Influx. Now, this will be an output on that connection. I’m going to call it OEE. And I’m going to have the endpoint, which is the rest of the URL that I’m going to push to over REST, and that’s going to look like this. So this is in the Influx documentation, but it’s API v2, right? I specify the bucket - that OEE data in bucket I defined - and then the org is HighByte. So when you set up, at least in the Docker image, you have to define an org. There’s not a default. So I set mine to HighByte. And you do need that as a namespace when you go and write data. And the last piece is the precision of the timestamps that HighByte’s going to write to Influx, and those are in millisecond precision, they’re [inaudible].
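The endpoint and header Aron configures can be sketched like this. The helper below assembles the same /api/v2/write URL (bucket, org, millisecond precision) and shows the URL encoding that turns the trailing ‘=’ in an Influx token into %3D; the names are the demo’s, and the helpers themselves are an illustration, not HighByte’s code:

```python
from urllib.parse import urlencode, quote

BASE = "http://localhost:8086"  # the Docker instance from the demo

def write_endpoint(bucket: str, org: str, precision: str = "ms") -> str:
    """Build the v2 write URL the REST output pushes to."""
    return BASE + "/api/v2/write?" + urlencode(
        {"bucket": bucket, "org": org, "precision": precision})

def encoded_auth_value(token: str) -> str:
    """HighByte's REST client wants the Authorization header value
    URL-encoded, so the '=' most Influx tokens end with becomes %3D."""
    return quote("Token " + token, safe=" ")  # keep the space literal

print(write_endpoint("OEE data in", "HighByte"))
# http://localhost:8086/api/v2/write?bucket=OEE+data+in&org=HighByte&precision=ms
print(encoded_auth_value("abc123="))
# Token abc123%3D
```

The only HighByte-specific part is the header encoding; a plain HTTP client would send `Authorization: Token abc123=` as-is.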
Aron Semle: 12:01 So I’ve got that. And the last thing we have is - this is a generic REST output, right? And by default, it will output JSON format. Now, Influx has what they call line protocol. And so what we’re doing - I’m just going to paste this in. This is Apache FreeMarker, is what it’s called. Sorry - the Zoom controls get in the way, but let me just move that. So we use a technology called Apache FreeMarker, which is a templating engine, which basically says, with these inputs, I can generate any output. It’s used a lot to generate HTML, but you can use it to generate anything. And specifically for this case, if you Google Influx line protocol, you’ll see this is kind of Influx’s default input: this line protocol where there’s a measurement, there’s these things called tags - tag sets - field sets, which are the actual values, and then a timestamp. So what this is doing is using that Apache FreeMarker to format the data in that line format. So this is going to be the model type, which we’ll see. This will be the name of the instance, which would be like our press machine, and then it’s taking all the elements that we define and kind of flattening those out. And I’ll give you a demo of what that actually looks like. But for now, just know that we changed the output format. So the other thing I’m actually going to create, just so you can visualize that, is a generic REST output. And what I’m going to do is - there’s a site I use called webhooks, which I’m going to push data to. And this just gives you a unique URL that you can push data to. And I’m going to set up an output called test with an endpoint, and that’s the GUID from the URL. And I’m going to copy the same Influx formatter over here, and that’s just going to output it to the web so you can see what the line format actually looks like.
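What the FreeMarker template produces is line protocol: a measurement, a tag set, a field set, and a timestamp. As a rough equivalent of that template (the attribute names and the `name` tag follow the demo; this is a sketch rather than HighByte’s formatter, and it writes numbers as Influx’s default float fields):

```python
def to_line_protocol(model: str, instance: str, fields: dict, ts_ms: int) -> str:
    """Render one model instance as a line protocol point:
    <measurement>,<tags> <fields> <timestamp>.
    The model name becomes the measurement; the instance name becomes a tag."""
    def fmt(value):
        if isinstance(value, bool):            # booleans are bare true/false
            return "true" if value else "false"
        if isinstance(value, str):             # string fields are quoted
            return '"' + value.replace('"', '\\"') + '"'
        return str(value)                      # numbers as-is (float fields)

    tag_value = instance.replace(" ", "\\ ")   # spaces in tag values are escaped
    field_set = ",".join(f"{k}={fmt(v)}" for k, v in fields.items())
    return f"{model},name={tag_value} {field_set} {ts_ms}"

line = to_line_protocol("OEE", "press1",
                        {"partCount": 41, "badCount": 2, "state": "running"},
                        1620000000000)
print(line)  # OEE,name=press1 partCount=41,badCount=2,state="running" 1620000000000
```

The precision=ms query parameter on the write endpoint is what tells Influx to interpret that trailing number as milliseconds.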
Aron Semle: 14:01 So I’ve got my outputs defined, but now I’ve got to go source the data from the factory, right? So to do that, I’m going to create a connection to OPC. And I’ve got a local KEPServer running that this will connect to. It’s localhost, and the port is 49320. There’s no security on here, so everything’s anonymous. And this is going to be an input, right? So I’m going to select the input and hit the browse button. And this is me browsing the OPC address space for the data. And specifically in here, I’ve set up a press machine called press one. And I’m just going to bring in all these tags. I don’t necessarily need them all, but I’m going to import those. And you’ll see on my imports, I can then go in and do a read, and that value’s zero right now, so I can test some of that. And I’m just going to make sure - launch my quick client - so it starts generating live data. So now that I have that, I’m going to move over to my model. And what I want to do is generate a model for this OEE application. I could model anything, right? I could model the asset. I could model the end application. But here, we’re going to generate an OEE model. And a model in HighByte is just a definition, right? We’re just going to define that, for anybody who wants to enable that OEE capability, these are the set of attributes you need to support. So let me bring my cheat sheet back up. All right. We need a part count - the number of parts we’ve produced - then we need a bad count, the bad-quality parts, and then a state. And I’m going to be a little rigorous and set the data types to these. And the state is actually going to be a string.
Aron Semle: 17:47 And then for output, I’m going to select two things. I’m going to select my OEE output on Influx, and I’m going to select my REST output for testing. And then this is a trigger interval. We’re going to keep it really simple and just execute every five seconds. So every five seconds, we’re going to go grab the data, fill in the press one model, and send it off to Influx and to the REST endpoint. And we’ll turn this on, and I’m going to bring in my debug environment. Everything looks good. So here we go. So this is my REST test endpoint, right? We’ll jump to Influx after this. But this is just to show you what the data format, that line protocol, looks like coming out of HighByte to Influx. And you can see that’s my model definition. I called it OEE. The name is my instance name, which is press one. And this is my field set. So these are my actual data values that I’m pushing. And here’s that timestamp in milliseconds. And you’ll see the part count is increasing - 41, 43, etc. So if I jump back to Influx, now that I’ve defined those buckets, I’ve got the OEE data in bucket, and you can see this is the model name, right? Here are the fields off of that model. So I could look at bad count, part count, or state. And this is the name. So if I had press machines one, two, three, four, I could filter on that. This is using their built-in viewer, right? And I’ll set it to five minutes. And you can see this is my part count going up, this is my bad count going up. This is all simulated data, but you can see it’s really easy to visualize that data. And I have the full context.
Aron Semle: 19:30 So I basically have that data model that we defined in HighByte, which we could change, too, right? We could add attributes to this and they would automatically show up in Influx. It’s one of the great things about Influx. It’s schema-on-write, I think, is how they refer to it, so I can make dynamic changes to the data that I’m writing in. My clients have to respond to that and pull the new attributes, but it doesn’t require any SQL to go do a table transform or any of that. So it’s fairly dynamic. So now, I want to do some OEE on this data. So what I’m going to do here - this is the new feature that was added in 2.0.5. So I’m going to create a new query. And what I’m going to do is skip the nice visual builder and go straight to what they call the script editor, so I can write the actual query out in their syntax. And again, I’m going to use a quick cheat sheet. I’m going to copy this in and then just explain what it’s doing. All right. So first, let’s run it and see if it works. Looks like it worked. And I’m going to use the table format. So you can quickly - histograms kind of show the data however you want; table will just show the raw data. And right now, it’s showing me the results of both queries. So I’m actually going to get rid of this one and go to the OEE one only. And what you’ll see is - I have very preliminary data in here, right? So the numbers look a little funny, but they’re reasonable. Maybe not availability yet, but as the data builds, it will be, right? So it’s taking the raw data, those three attributes we provided, and then running the calculation to calculate OEE over a timeframe.
Aron Semle: 21:18 So in the query, what I’m saying is, from my raw data in bucket, OEE data in, I’m going to filter on a time range. And this is a very small time range for OEE, right? I’m just saying from now to five minutes ago - the last five minutes. I’m keying off my measurement, which is OEE - that was the model that I pushed in there - and then I’m looking at press one specifically. So that’s the instance in HighByte. And then this is a little - I’m not an Influx expert, but basically, I’m creating a pivot table to move the data around such that I can then move it into the new capability for OEE. And this is it, right? So I brought in this experimental OEE library that was added in 2.0.5, and then this is the OEE calculation. So I’m taking the data set and passing it along. I’m defining what the running state is, so it’s “running”. If you’ll remember, back here, I had that simple calculation to change it when there was a one. And this is why the numbers don’t make any sense: my planned time is one hour and I don’t have an hour’s worth of data, so I’m going to change that to five minutes. So that was my planned uptime for the period, right? I was planning to be running, in this case, five minutes - not that realistic, but just as an example. And then cycle time: of that running time, every five seconds I produce a part. This is some of the static data you feed in, and then you feed in the raw data that’s coming from HighByte. And then down here, what I’m doing is outputting it to another bucket, which is the OEE bucket that I created. And if I submit this, it’ll start to look a little better, right? So OEE is about 81%, availability is 90%, performance is just below that. So that looks like pretty decent OEE, and as you can see, I’m taking that raw data stream and calculating OEE from it.
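A query along the lines of what Aron describes, using Flux’s experimental OEE package, would look roughly like this. The bucket, measurement, and instance names follow the demo; the column names (state, partCount, badCount) are what oee.APQ expects after the pivot, and the exact field names in your own data may differ, so treat this as a sketch rather than the query from the recording:

```flux
import "experimental/oee"

from(bucket: "OEE data in")
    |> range(start: -5m)
    |> filter(fn: (r) => r._measurement == "OEE" and r.name == "press1")
    |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
    |> oee.APQ(runningState: "running", plannedTime: 5m, idealCycleTime: 5s)
    |> to(bucket: "OEE", org: "HighByte")
```

oee.APQ computes availability, performance, quality, and the combined OEE over the queried window; plannedTime and idealCycleTime are the static inputs Aron mentions (five minutes planned, one part every five seconds), and the final `to()` writes the results into the processed OEE bucket.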
Aron Semle: 23:05 And you could go build visuals off of this. Again, I’m not an Influx expert, so I’m not going to demo that part of it, but at least it shows you the basic capability. If you format the data and push it in a certain format, Influx can go and figure out the time intervals between up and down, calculate your uptime, etc. And they did this pretty quick, right? We kind of partnered with them. We talked about OEE, and I think within a few days, maybe, they had built some of this capability into the experimental stream. So pretty cool. The last thing I want to show you is just the dynamic schema, right? So you can come into HighByte - one of the things with HighByte, we get asked a lot about modeling, right? Should we adopt the ISA-95 data model, or should we adopt MTConnect, or whatever data models are out there? And the answer is absolutely - start from somewhere, if that’s a good starting point. But we do believe that data models are application-specific. And since applications are all code, things are in flux all the time, right? They’re changing. So although you might base it off a data model, chances are that you’re going to need to modify and change that over time. So one of the things we thought about in the platform is the ability to quickly and dynamically add attributes. So for example, I’m just going to go in and add cycle time to this. So I’m adding it to the base model. This is all running - everything’s still going, right? And I’m going to come into my instance, and over OPC, I’m going to grab my cycle time tag and save that.
Aron Semle: 24:36 And again, everything’s flowing. This wouldn’t be true with a lot of data storage, but since Influx is schema-on-write, I can jump into the explorer again. And I’m going to go back to the training wheels version here for me. OEE data in - if I expand this, you can see cycle time was just instantly added. And I can filter on that. So that data property just showed up, right, without any additional work. We didn’t need to go in and change a SQL database, etc. That’s a really cool capability, especially in manufacturing, as we’re trying to figure out this data modeling piece and we’re looking for data storage for, I’ll say, the unified namespace - this concept, of which an MQTT broker is one implementation, that has all the data. The ability to store that data very easily and know that it can dynamically change, and that storage is still going to work, especially in these early stages as an industry while we’re trying to figure that out, is really powerful, right? And it’s pretty cool. So one other thing I’ll show quick. In terms of HighByte being able to source data, I showed OPC, I showed Influx. There are a number of other connections. SQL’s a big one in the factory - like Microsoft SQL Server. We’ve got connectivity to that.
Aron Semle: 25:54 So I have a local instance on 18.104.22.168. And if I can remember my login - 1, 2, 3, 4 - just to briefly show you - oops. I may have forgotten my login. But anyway, you’d be able to see all the SQL tables and select data from SQL. We can source data from multiple sources. We don’t currently have the ability to ingest data from Influx over the REST API - just anticipating that might be a question - because the REST API returns CSV data, and our REST input currently expects JSON data. But we know we need to extend that to support XML and CSV in the future. So currently, it’s pushing data into Influx. Again, I showed the Docker version, but Influx actually has a web-hosted version that works just as well. I don’t know if the OEE capability is up there yet, but that’s another really easy way to get started with Influx and pushing data in the model format. So I know that was quick, Caitlin, but I think I’ll pause there, and if there are any open questions, I can branch off and demo some other stuff if there’s interest.
Caitlin Croft: 27:25 I think you’ve kind of just surprised everyone. It doesn’t look like there’s any questions right now. But I thought someone wanted to - just give me one second here. Thought I saw a hand raised, but I think you must have answered their question.
Aron Semle: 27:44 I see one now from -
Caitlin Croft: 27:47 Oh, here we go. First of all, thank you, Aron. I have two questions. One, what if the time interval for a flow is 500 milliseconds, but the REST API for one of its inputs takes more than 500 milliseconds to return a response? How do we diagnose this issue?
Aron Semle: 28:08 Okay. So the time interval for the flow is half a second, but the REST API for one of its inputs takes more than 500 milliseconds to return a response. It’s slower. I’m guessing that means it’s slower.
Caitlin Croft: 28:19 Yes, it sounds like it. Yeah.
Aron Semle: 28:21 Yeah. So the way to do that in HighByte is you would set your flow to 500 milliseconds, right? And in the triggers, we have these modes - on change, on true, while true, right? And if you set it to on change - wait, no, one second. Set the trigger to always; you do it in the publish mode. So what you could do is basically set this up so it’s only going to send to the output if there are changes. So you would be polling the REST API at 500 milliseconds, but if it’s only changing once a second, only every other flow trigger would actually publish to Influx, as an example. That would be the way to do that.
Caitlin Croft: 29:11 So the next question is: we created an OPC and an MQTT connection. After that, we created multiple inputs for each connection. Does HighByte subscribe to these inputs via data change and cache them somewhere, or are the inputs only used when the model needs to be created?
Aron Semle: 29:31 Oh, we got an advanced user. Yeah. So I can answer that question. Let me show you. All right. So I’m going to create an MQTT input. I’ve got a local Mosquitto broker running, and I’m going to define an input on a topic called data. And what I’ll do is bring up my client - I use MQTT.fx, but it has trouble on multiple screens. So I’m basically going to publish to that data topic a simple JSON message. One second. All right. So I’m going to read the input here. The short answer is we cache, right? So I just sent this over - I misspelled it, hello work - and we subscribe to these data inputs. So data came in. This was the message that I wrote over in MQTT.fx. This is cached in HighByte. So if I continue to read this, I’m going to get the same value. In the future, we’ll add something like cache timeouts or maybe read-once as options, but this is the way it works currently. So any kind of non-request/response input - MQTT, and I think Sparkplug as well, might be the only ones - will cache that input value. And again, if you only want the changes, the way to do that is in your flow trigger: configure it to trigger only on changes. And then we’ll basically read the cache, see that nothing changed, and we won’t output the value.
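The caching behavior described here - a subscription callback stores each message, and reads return the last cached value until a new one arrives - can be sketched generically. This is an illustration of the pattern, not HighByte’s implementation:

```python
import threading

class SubscriptionCache:
    """Last-known-value cache for push-style inputs like MQTT or Sparkplug.
    The broker callback writes into the cache; input reads just return
    whatever was last cached, so repeated reads between messages see the
    same payload."""

    def __init__(self):
        self._values = {}
        self._lock = threading.Lock()

    def on_message(self, topic: str, payload: str) -> None:
        # Called by the MQTT client on every publish to a subscribed topic.
        with self._lock:
            self._values[topic] = payload

    def read(self, topic: str):
        # A flow trigger reading the input gets the cached value (or None).
        with self._lock:
            return self._values.get(topic)

cache = SubscriptionCache()
cache.on_message("data", '{"msg": "hello work"}')
print(cache.read("data"))  # same value on every read until a new message
```

An on-change flow trigger on top of this would compare the cached value against the last one it published and skip the output when nothing changed.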
Caitlin Croft: 31:15 Perfect. It looks like you did answer their question. They said thank you very much.
Aron Semle: 31:21 Oh, perfect. Yeah.
Caitlin Croft: 31:22 All right. Next question, do we need to flatten the data to send it to InfluxDB?
Aron Semle: 31:28 Yes. So I can demo that really quick, too. So in modeling, when you look across, there are model hierarchies and relationships, right? So I could create a child model. And in here, I’m just going to call it child. And this is all inline, right? So I’m going to give it an attribute and save that. So now, if you go into modeling, through that view, I’ve created a child model that has an attribute, and then the OEE model references it, right? So this is a little more complex of a use case, but what I’ll do is, for the child model, I’ll go create an instance of it. Child instance is fine. And I’m going to fill it in with a random - you know what? Let’s just grab the MQTT info we were just messing with. So you can see that’s there - the string thing that I was playing with. So I’ll bring that in and save it. And then in my instance of press one, I’m going to reference that child model. Now, if I tried to send this to Influx, which it’s trying to do right now, that’s probably not going to work. So I’m just going to send it to the test output because I’m actually curious what it’ll look like. Yeah, I see. So the line protocol looks all good until we get to the child instance, and then it gets weird, right? It’s basically outputting it as a JSON string, which is fine in some platforms that can handle hierarchy, but not in the Influx case.
Aron Semle: 33:08 So the way to fix that is to go to the connections - on Influx, we have this flatten model values option, which basically means, if there’s any hierarchy in the model, we’re going to flatten it out. So I’m going to turn that on. And then go turn on Influx. Oops, I forgot I’m using the REST test output. So we’ll enable it in both spots - the REST test output too. So I’m going to turn on the REST test output and you’ll see, hopefully, the new one. Let me send it to Influx and see - I’m in a dev environment, so if this doesn’t work, and it should, I’ll probably fix it right after the call. So let me get to Influx again. There we go. But sometimes you’ve got to jump off the page for it to refresh - part count, bad count, state. Okay. Yeah. It doesn’t look like it’s there - but essentially, that would be the flow. So I’ll go look at whether that’s broken in my dev environment. But essentially, what would happen is that child model would appear as the child instance name, underscore, attribute, and that would be a field. So that’s the concept of flattening the hierarchy: the hierarchy just becomes a unique field name that would then go to the output. But for some reason, when I turned it on, even in the REST output, it doesn’t look like it’s doing that. So that was Paul - or Pollo. Yeah. So I’ll go look at that. But that’s how that should work, Pollo, because we have a number of endpoints that can’t handle hierarchy - SQL is another one, right? It can’t handle hierarchy in data models, so you have to flatten it out.
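The flatten-model-values behavior - collapsing a hierarchical model so each leaf becomes a uniquely named field like childInstance_attribute - can be sketched as a simple recursive walk. This is illustrative only; HighByte’s actual naming scheme may differ:

```python
def flatten(model: dict, sep: str = "_", prefix: str = "") -> dict:
    """Flatten nested model values so flat sinks (line protocol fields,
    SQL columns) get one uniquely named field per leaf."""
    flat = {}
    for key, value in model.items():
        name = prefix + key
        if isinstance(value, dict):
            # Recurse, carrying the parent name as a prefix.
            flat.update(flatten(value, sep, name + sep))
        else:
            flat[name] = value
    return flat

instance = {"partCount": 41, "badCount": 2, "childInstance": {"attribute": "hi"}}
print(flatten(instance))
# {'partCount': 41, 'badCount': 2, 'childInstance_attribute': 'hi'}
```

Once flattened, the result is an ordinary field set that a line-protocol formatter or a SQL insert can consume without any special handling for hierarchy.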
Caitlin Croft: 35:12 The joys of doing live demos during a webinar, right?
Aron Semle: 35:16 Something’s got to give. I’ll take that one.
Caitlin Croft: 35:22 Awesome. Well, we'll stay on the line just a little bit longer to see if anyone else has any more questions. In the meantime, I just want to remind everyone, again, we have InfluxDays coming up. So next week, we have the Flux trainings. If anyone is interested in attending, there is a fee attached to it. But if there's anyone who's interested, please email me and we can look into getting you a free ticket. So there is the Flux training, and there is also the conference itself. And the conference is completely free, and it's on May 18th and 19th. So I hope to see you all there. It's a lot of fun. Well, okay, another question: what do you think about storing a single field in each point, using the OPC tag names as Influx tags?
Aron Semle: 36:26 "You think about storing a single field in each point." So let me jump out of here. I think what you mean is essentially these would be the tag names, which is - it's interesting. So if you look at the traditional way to send OPC data to a system like InfluxDB, what you'd end up with is your bucket, and then your measurements would be all the tag names. And when you click on a tag name, you would get just a value, right? And potentially, if the OPC product wanted to send quality and timestamp, it could too. So you'd see value, quality, and timestamp. My opinion is that that's the equivalent of sending all the data to a bunch of CSV files out on a file share drive and telling IT to deal with it, because those tag names are fairly cryptic. So I would guard against that. As an example, right, if these are just flat tags and I've started to actually do some more customization on the UI and build a screen for an asset like that press machine, if I'm just working off the measurements as tag names - I don't know how I'd actually do it in Flux. I'd have to have some scripting or something to go map the set of tag names to this part of the dashboard, versus if I build that one dashboard and I've modeled my data like this, right? I could have press machine two over here. And actually, I could demo that really quick. So what I'll do is I'll just - let me nix the tag hierarchy stuff we created because that wasn't quite working.
Aron Semle: 38:05 But basically, I can go and create a new press machine, right - press two. Based on the OEE model, I can go fill that in with OPC data. And I'm not going to map it correctly. And then just bring that into my flow, right? So in my instances, now I'm going to select press two and keep that flow running, assuming - might not like it - I'm going to nix the cycle time too. All right. Turn that back on. There we go. It had - the child model was still in one of the instances. But basically, now if I go off this page and come back, you'll see - hopefully pretty soon; sometimes there's a delay. There it is. So there's press two, right? So I could build this dashboard and filter off press one or press two or both. And since they're using the same data model, right, that becomes really trivial. But if these are all tag names, yeah, it's easy to go pull up and chart a single tag, but good luck if you're trying to build anything more complex on top of that. And then even on the query side, if you're writing the queries against Influx, you're going to have to piece together those tag names. So can Influx store tag names, straight tags? Absolutely. Do I think that's the best approach in terms of manageability over time? No.
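The two approaches Aron contrasts can be sketched side by side in line protocol. With modeled data, the instance becomes a tag on a shared measurement, so a dashboard filter is a simple tag match; with raw OPC tags, every tag path is its own measurement and there's nothing to filter on. All names below are hypothetical.

```python
# Raw OPC style: one measurement per cryptic tag path.
raw_tag_style = [
    "Channel1.Device1.Press1.Count value=120i 1620000000000000000",
    "Channel1.Device1.Press2.Count value=98i 1620000000000000000",
]

# Modeled style: one shared measurement, the instance as a tag,
# model attributes as fields.
modeled_style = [
    "OEE,press=Press1 count=120i,state=1i 1620000000000000000",
    "OEE,press=Press2 count=98i,state=0i 1620000000000000000",
]

def press_instances(lines):
    """Enumerate instances by scanning the 'press' tag."""
    found = set()
    for line in lines:
        series = line.split(" ")[0]           # e.g. "OEE,press=Press1"
        for tag in series.split(",")[1:]:
            key, value = tag.split("=")
            if key == "press":
                found.add(value)
    return found

print(sorted(press_instances(modeled_style)))  # ['Press1', 'Press2']
```

Against the modeled data the dashboard "just works" for any number of presses; against the raw tag style the same enumeration returns nothing, because the asset identity is buried inside the measurement name.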
Caitlin Croft: 39:57 Great. Hi, Aron, do you expect that we can read data from InfluxDB into HighByte in the next HighByte release?
Aron Semle: 40:07 Yeah, pretty soon. So I think we've got a 2.0 major release coming in July. I don't know if it'll be in there, but potentially shortly after. Actually, as part of the prep for this demo was the first time I tried to do a read, and I realized that with Influx, it's pretty cool. I mean, you can essentially write the query language and post that. And we're pretty close to it. We can send the request, but the response comes back in that CSV format. It's a pretty light lift. It's more that we're trying to prioritize based on market demand. So if you're definitely interested and can use that feature, reach out to me and let us know. We could get it in pretty easily because we already have a CSV input, so we can handle that format pretty easily. It's just not part of the REST connector currently.
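The CSV format Aron mentions is the annotated CSV that InfluxDB 2.x's query endpoint returns: optional annotation lines prefixed with `#`, then a header row, then data rows. A minimal sketch of parsing it with the standard library, using a made-up response body (real responses include more columns, such as `_start` and `_stop`):

```python
import csv
import io

# Hypothetical annotated-CSV body like the /api/v2/query endpoint returns.
body = """#datatype,string,long,dateTime:RFC3339,double,string,string
,result,table,_time,_value,_field,_measurement
,_result,0,2021-05-04T15:00:00Z,120,count,OEE
,_result,0,2021-05-04T15:00:10Z,121,count,OEE
"""

def parse_rows(text):
    """Drop '#'-prefixed annotation lines, return data rows as dicts."""
    data = "\n".join(
        line for line in text.splitlines()
        if line and not line.startswith("#")
    )
    return list(csv.DictReader(io.StringIO(data)))

rows = parse_rows(body)
print(rows[0]["_value"], rows[0]["_measurement"])  # 120 OEE
```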
Caitlin Croft: 40:59 Yeah. And for anyone who wants to connect with Aron directly, all of you should have my email address. So feel free to email me if you want to reach out to Aron. And Aron's email is up there on the screen as well, so I'm sure he'd be happy to hear from you too. All right. Well, thank you, everyone, for joining today's webinar. If there aren't any more questions, we'll stay on here for a minute or so. It was a really great session. Once again, it has been recorded, and the slides as well as the recording will be made available for review by tomorrow morning. So thank you, everyone, for joining today's webinar. And I hope to see a bunch of you at InfluxDays and the Flux training next week. It'll be a lot of fun. Thank you, everyone, and thank you, Aron.
Aron Semle: 42:01 Thank you.
Chief Technology Officer, HighByte
Aron Semle is the CTO of HighByte, focused on guiding technology and product strategy through product development, technical evangelism, and supporting customer success during the pre-sales, post-sales, and renewal cycle. Aron previously worked at Kepware and PTC from 2008 until 2018 in a variety of roles including software engineer, product manager, R&D lead, and director of solutions management, helping to shape the company's strategy in the manufacturing operations market. For the past two years, Aron has worked as an entrepreneur and co-founder of upBed, a Maine-based startup developing technology to provide autonomy and person-centered care for elderly populations. Aron has a bachelor's degree in Computer Engineering from the University of Maine, Orono.