How Cisco Reduces Air Pollution With a Digital Twin Created Using Python, LoRa, Google Cloud, and InfluxDB
Session date: Oct 04, 2022 08:00am (Pacific Time)
Cisco is the worldwide leader in IT and networking technology. Their hardware, software, and services are used to help provide easy access to information anywhere and anytime. Nearly every internet connection in the UK uses Cisco technology and they are investing in projects to support innovation — including smart cities, transportation, healthcare, manufacturing, and cybersecurity. There is a team dedicated to improving sustainability and renewable energy practices within the organization.
Cisco aims to help customers reduce their carbon footprint, improve operational efficiency, and address health and safety at roadway construction sites. Cisco's solutions are being used to collect metrics about the environment, construction activities, GPS, and machine vibration. Discover how Cisco is using InfluxDB to aggregate sensor, telematic, and network data to measure and reduce noise pollution generated by excavators and large machinery. Their solutions are being used to help collect air pollution, air pressure, CO2, NO2, O2, PM, O3, SO2, humidity, barometric pressure, and temperature data. Cisco's goals include improving air quality, health, and safety at construction sites.
Join this webinar as Ehsan Fazel dives into:
- How to ingest metrics from network routers into a time series platform
- Overall architecture of the IoT monitoring system
- Best practices to collect and analyze industrial IoT sensor data using LoRa at the edge — learn how to collect noise pollution metrics
- How to overcome challenges for remote sites without reliable power and communication
- Tips on how to use an SDK to send data to/from InfluxDB for cybersecurity analysis
Watch the Webinar
Watch the webinar "How Cisco Reduces Air Pollution With a Digital Twin Created Using Python, LoRa, Google Cloud, and InfluxDB" by filling out the form and clicking on the Watch Webinar button on the right. This will open the recording.
[et_pb_toggle _builder_version=”3.17.6” title=”Transcript” title_font_size=”26” border_width_all=”0px” border_width_bottom=”1px” module_class=”transcript-toggle” closed_toggle_background_color=”rgba(255,255,255,0)”]
Here is an unedited transcript of the webinar "How Cisco Reduces Air Pollution With a Digital Twin Created Using Python, LoRa, Google Cloud, and InfluxDB". This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
- Caitlin Croft: Sr. Manager, Customer and Community Marketing, InfluxData
- Ehsan Fazel: Systems Architect, Cisco
Caitlin Croft: 00:00:00.876 Hello everyone, and welcome to today's webinar. My name is Caitlin and I'm joined today by Bria on the InfluxData side and Ehsan from Cisco. And today, Ehsan is going to be talking about how Cisco is using InfluxDB, LoRa, Google Cloud, and Python to create a digital twin that is helping reduce air pollution in the UK. I won't steal his thunder but really excited to have you all here. Please post any questions you may have for Ehsan in the Q&A or the chat, and I will be monitoring both, and we'll answer them at the end. And this session is being recorded so you will be able to check out the recording as well as the slides by tomorrow morning. And I just want to remind everyone to please be courteous and friendly to all attendees and speakers. We want to make sure this is a fun and safe place for all. Without further ado, I'm going to hand things off to Ehsan.
Ehsan Fazel: 00:01:05.025 Thanks Caitlin. And thanks to everybody for joining the webinar and listening to the presentation that I have for you. My name is Ehsan Fazel. I'm a systems architect working for Cisco. And this webinar is about how we have helped our customer put control over the amount of pollution that they exert into the environment. I'll start with my bio. I'm a presales systems architect. What that means is I help my customers solve their business challenges by leveraging our technology, and not just Cisco's technology but also the ecosystem around it, in order to deliver exactly what our customer wanted to achieve. I have years of experience building IT and OT solutions. And I've recently been interested in exploring how to connect the unconnected, or the IoT, if you like, right, leveraging my networking expertise in order to connect the unconnected, get data out of it, make sense of it, and put it to good use. At a personal level, I love cycling and group exercises, whatever makes my heart beat faster and pumps the energy in, right? I love sharing what I learned, and this is an example of that. My family and I live in the historical, beautiful city of Cambridge in England.
Ehsan Fazel: 00:02:47.341 Now I have an agenda for you. I'll be talking about Cisco; I'm sure you may have heard of our name, and if you haven't, I'm sure you have now. We'll be talking about who we are, what we do, and what we care about; the project and the sensors that we deployed; the challenges we had operationalizing the environment; the technical elements, with the architecture and how different parts have been put together in order to deliver a solution end to end; the data flow, decoding and writing data into InfluxDB; and some tips and tricks that I've learned while working with Influx. This is not my first project; there's a few, and I put a few of them together. And then, last but not least, I'll talk about InfluxDB as well. So about Cisco, I'm sure you've heard the name. The first thing is we care about the environment. We care about climate change. We have a corporate social responsibility team that is investing in various projects in order to tackle climate change, through the supply chain and the circular economy. And the products and solutions that we make are not just going to be more energy efficient, but also how they are used helps address the challenges. In terms of the verticals that we operate in, specific to IoT, I put it into buckets of transportation, distributed assets, factories, utilities, and smart and connected cities. And all of them are underpinned by the technology that we built over the years. They're addressing specific and very challenging environments: connecting the unconnected, the last mile, different interfaces, the scale, the speeds, and all kinds of various challenges that have been thrown at Cisco. So we have a solution that addresses that.
Ehsan Fazel: 00:05:03.566 So now switching gears, talking about the project itself: the project was about building a dual carriageway, nine miles of road, in the southwest of England. The site is of significant archaeological importance, and the environmental impact is significant as well because it is an ecologically fragile environment. So there has been quite a lot of sensitivity around it. And we worked in partnership with the construction company who builds the road, in order to identify the areas where we can help them deliver efficiency and reduce the impact on the environment. There were two main areas where we helped them, as I mentioned, working with the construction company and with the site, with the site manager who manages the construction activities. The first was plant efficiency. What I mean by the plant: the diggers, the bulldozers, the excavators, the earth movers that move tons of soil to different parts of the construction site. How can we help them by giving ideas as to how they operate and what they can do differently in order to make the plant operate more efficiently with less impact on the environment? We're not only there for that, but also helping by giving data to the construction site such that they can use that data to plan various activities. The nine miles of road, obviously, has different phases. Different activities happen across different sections of the road. And by giving the right information to the people who are carrying out the activities, they can make better decisions to deliver more efficiency.
Ehsan Fazel: 00:07:05.538 So it all starts with sensing. Obviously, you sense something. You give information to the site. And the site will make a decision as to what the best course of action is in order to address specific challenges. The first is the noise. So we measure the noise level on site. You will see, if my flash shows on the screen, there is a noise sensor hooked on the fence that actually senses the amount of noise being generated by various activities around this very specific location. And there are two reasons for that: one is the health and safety of the construction workers, and the other is the local authorities, who are responsible for having a safe environment for their residents nearby. So this is something we can help with in order to address the health and safety of the workers as well as nearby residents. Air quality, needless to say, has two impacts as well, again, on both the residents and the construction workers. And the impact of the construction activity: if a plant is moving around, how much CO2, CO, sulfur compounds, or other hazardous gases are being generated in the area compared to other locations where there is no activity? We can report on that. And obviously, it is essential to advise the site on whether it is necessary to put specific measures in place to address them. We're also measuring temperature, humidity, and pressure. We report live data to the site, and that helps them plan various activities based on the climatic information, the moisture in the air. And that would help them to reduce the impact on the environment. We attached magnetic GPS trackers and also vibration sensors to a number of construction plants. The reason for that is so we can track where a vehicle is and, based on the path that it takes, whether that's the most optimized path or whether it uses traffic lanes which it shouldn't.
And that gives them some information to work on how they can optimize the routes that they’re taking.
Ehsan Fazel: 00:09:39.105 And vibration, obviously, is another element. We're measuring the plant's activities: whether the engine is on but not moving, and what the ratio is between the total time the engine is on versus moving. And that gives them a good parameter to measure the efficiency of the construction plants. The picture on the right is actually an air quality sensor measuring various parameters, and here is one of the excavators operating in the area. So moving on, we had a number of challenges getting data out, right? And that's what we had to tackle. The first was we didn't have electricity. We've not been blessed with electricity, I would say. You may have noticed some solar panels powering some of the sensors, but the site itself did not have any power grid at all, just solar panels. And we've been limited in terms of how much electricity we had. There was very patchy coverage. So, patchy 4G coverage; there's no comms, very little 4G. And then, obviously, we had to measure a wide range of parameters being sensed within the environment. And then, more importantly, we had to share this live data with the site so that they can take the right course of action. So in terms of how we handled that: we did have a little bit of electricity, and our equipment thankfully consumes just nine watts of power. It's based on an ARM processor. Since the product was developed, the focus has been on being adaptable to environments where the amount of available electricity is limited, which, obviously, helped us to be able to get nine watts of power from the site. So we didn't have much resistance from them.
Ehsan Fazel: 00:11:41.162 And equally, we didn't have much problem with the heat. But if you were to deploy these gateways where the housing for the equipment is small, then you need to care about the heat dissipation. Obviously, low power consumption means less heat, and therefore you can put it in smaller boxes. So that helps a lot. The patchy 4G coverage has been handled by a specific feature of InfluxDB that allows us to do store and forward. As for the wide range of measurements for various sensors, we addressed this by, in fact, Cisco does have a LoRaWAN technology, but we chose to use a third party that could provide a wider range of sensors suiting our needs. And then, again, we take advantage of InfluxDB to be able to share the live data, or near real-time data, with the site. Now moving into the architecture, I'll do multiple double clicks on this. But to start with, at a very high level, we had the sensors sending data over LoRaWAN. And if you're not familiar with LoRaWAN technology, it is a wireless technology that goes quite far, a range of a few miles if you've got line of sight. It consumes very little power, so the sensors are relatively inexpensive to run. They're not cheap sensors, but generally, they are cost effective for what we wanted to do. And they're low power consumption. They have a really high range, but the data rate is small, and that's perfect for what we wanted to do.
Ehsan Fazel: 00:13:30.503 So they send the data over the air, and it's received by the LoRaWAN gateway. There is an onboarding process just to make sure the sensors that we have put in are legitimate and are the ones actually authorized to communicate. And then we have a Cisco router, the IR1101 model, that has got multiple features: serial ports, ethernet ports, and it can run compute as well, and 4G too. So it gives us plenty of choices of connectivity with a very small footprint. Our specific use case was using the ethernet side as well as the built-in compute feature, which allows us to perform the activities that are necessary where we had limited connectivity. And then, also, obviously, we have 4G. We chose a roaming SIM card that allows us to connect with the best available signal. And then there's some code running in there, and I had to be able to update the code to make some changes, monitor it, and do all the rest of the required activities. In order to do that, I run a VPN gateway on the Google Cloud platform as a software router. And I had a flexible VPN that I could tap into wherever I wanted. That gives me an anchor point, somewhere fixed with a fixed IP address, from which I could then manage the gateway.
Ehsan Fazel: 00:15:08.429 And then I had a management platform where I limited where the system could be accessed from. So it makes it secure and accessible, both at the same time. And the internet was available for me to pump the data out of the environment. So that's a high-level architecture of the system. Now talking about the data flow: the sensors, as I mentioned, broadcast data every now and then, at the specific interval they have been configured to send data at. The data is received by the LoRaWAN gateway. The LoRaWAN gateway does basic processing, and as soon as it processes the data, it sends the data via an API POST to the router. The destination is, in fact, a container that runs inside the router, and that container does the decoding, parsing the data, validating, logging, and all the other necessary things. And then I use Google Cloud Storage as a repository; I always suggest having a main repository that is not going to be impacted. Once the container does all the decoding, parsing, and logging, it sends the data into InfluxDB, and that gives me visualization and gives access to the other third parties involved in this project. So now, again, double clicking on the gateway, or on the container itself: how did I go about it? Any data that I receive, apart from some basic checks, I tend not to check a lot to start with. I have some basic rate-limiting applied. Beyond that, I do check the API key, just in case some adversaries try to get into our system. And then I send a copy of the raw data into Google Cloud Storage for future analysis. It always helps to collect as much forensic data as you can. The cost is very slim, and I could set up a lifecycle policy to purge the data that is old and that I no longer need. So that way, I try to keep the costs down while still having a copy of the data.
Ehsan Fazel: 00:17:51.170 Then the next thing is actually JSON object key validation. It is very important before you do anything else. So here, on the bigger square, I've got a sample of the data that I'm receiving. The first thing I do is check the fields that I need to be able to read the data. The payload is Base64 data. It's a little bit of encoding, so anybody looking at it will say, "Oh, that's Base64." But it's in hex format: once you decode that Base64, you need to convert it into hex. The interesting thing about LoRaWAN, because it's very low power, is that you don't have tons of bandwidth and tons of energy to assemble your packet before you send it out. Therefore, they're using hex in order to limit the amount of data being broadcast over the air. So I need to make sure the payload exists, and also the timestamp as well as the device name, because that's how I can distinguish: is it the noise sensor? Is it an air quality sensor? Is it a CO2 sensor? What sensor is that? So inside the code, I do the JSON object key validation, right? If it fails, there's nothing we can do about it, or I can do about it, because basically the packet is malformed and there's no use for it. But if it passes, the next thing I do is check the device name to identify the device type. And then, based on the specific device, I call the decoder function. A decoder function goes through slicing and dicing the bits and bytes of the data based on the instructions supplied by the sensor manufacturer. And then I can say, "Right now, the CO level is the number you see down below," or "SO2 is at that level," or the particulate matter, humidity, pressure, and the battery level as well, right? These are what the function returns back to me. Now I have clean data that I can start using. So that's the process of the data flow and the decoding of the data inside the container itself.
Ehsan Fazel: 00:20:25.605 And the next part is the tips and tricks that I'm going to share with you: how did I write the data into InfluxDB? I'll start by saying why I chose a time series database. Obviously, I had data that varies by time, so naturally, it's structured as a time series. Therefore, my natural choice was a time series database. And we have a number of choices in the market. I had a choice of SQL, and I had choices of document-based databases as well. Obviously, you can use those, but you're not going to make much out of them; there are going to be more challenges to deal with later on. So why InfluxDB Cloud? That was my obvious choice. I had other projects in the past. My time series database is cloud-based, so I don't need to maintain any infrastructure. It's all there. All I need to do is make an API call, and wherever I need it to be, I can query the data whenever I want. That made InfluxDB the obvious choice for me, along with having visualization capabilities, amongst others. I mean, the Python SDK was there as well, which is awesome, because the code that I wrote is actually based on a Flask API. And that Flask API gateway allows me to have one single monolith application that handles just about all the operations I needed. But one of the things you need to bear in mind: if you have the sensor, you still need a network. Once you double click on the solution itself, just don't forget, you still need networks. You still need your sensors to be able to connect different parts together. Yeah, and monitoring your application and infrastructure is key. As soon as you start running your application, you need to see: is it running? Is it not running? Just make sure you handle your exceptions quite well, right? You build test cases. You make sure your application is sustainable and runs all the time. I ran it on my laptop, just as a container locally.
I just tried to hammer it as hard as I could to make sure it works, can handle the exceptions, and is robust and won't fail. Otherwise, if my Flask API stops working, the entire operation would stop. And obviously, I would not have data; I would have gaps, right? And that's not good. Try to avoid that. So if you write your application, make sure you test it so that it can handle all kinds of exceptions.
Ehsan Fazel: 00:23:20.627 One thing that came to mind: if you try to use InfluxDB, you go through the wizards for the Python SDK, and that's exactly what it's going to give you. It's all well and good. It's great. But here's a challenge I had. I had to monitor around 30 different parameters across multiple sensors. And I had to create a point, then tag it and identify which fields I want to record. And if I have new sensors, they come with different fields. That means tons and tons of reworking has to happen. And when I searched to see if InfluxDB, or the Python SDK, can handle a JSON object, I actually didn't get a straight answer. But after a bit of digging, I realized that if I put it inside a list, and from the payload I extract the time value and the measurement, just putting it as the sensor name, I can basically put my entire set of fields, no matter what, in just one nested object inside it. And the beauty of InfluxDB, and the surprise to me, was that it just understood the whole lot. It decomposed everything and put it in the right location. So if I had other or new sensors, I did not need to rewrite the code. It was just there, and it was so convenient for me. It was surprising because I thought, "Well, I need to make sure every single tag and field is absolutely correct." And it would have been a nightmare for me to manage multiple different sensors and types. So that was an interesting thing. Needless to say, make sure you have a good CI/CD pipeline. The code may need to be changed from time to time. You may need to adjust something. You may want to add the battery measurements because a sensor may have died and you don't know what the cause was; maybe the battery ran out. But you can find a trend and be proactive. So these are the things where, if I were to make changes, having a CI/CD pipeline always helps me make changes to the code as I need to.
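The trick described above, writing one nested object per uplink instead of hand-building a tagged point per field, can be sketched like this. The influxdb-client Python SDK accepts plain dicts with `measurement`, `tags`, `time`, and `fields` keys as write records; the tag name, bucket name, and the `to_record` helper below are illustrative assumptions, not the project's code.

```python
# Shape each decoded uplink as a plain dict record: whatever fields the
# decoder produced go in as-is, so new sensors with new fields need no
# code changes in the write path.
def to_record(decoded):
    """Map one decoded uplink to the dict record shape the SDK accepts."""
    return {
        "measurement": decoded["sensor"],          # e.g. the device name
        "tags": {"site": "construction-site"},     # illustrative tag
        "time": decoded["time"],
        "fields": decoded["fields"],               # nested object, any keys
    }

# With the real client (not run here; URL/TOKEN/ORG are placeholders):
# from influxdb_client import InfluxDBClient
# from influxdb_client.client.write_api import SYNCHRONOUS
# with InfluxDBClient(url=URL, token=TOKEN, org=ORG) as client:
#     client.write_api(write_options=SYNCHRONOUS).write(
#         bucket="sensors", record=[to_record(d) for d in decoded_uplinks])
```

Because the SDK decomposes the `fields` dict itself, adding a sensor with ten new parameters only changes the decoder's output, not the write code.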
Ehsan Fazel: 00:25:43.673 Now let's look into some of the outcomes of adding InfluxDB: what are the things I really liked? It was so easy to use and cost effective. Obviously, a time series database runs very efficiently, and it was very easy for me to use. And there's a lifecycle around the data. I did not need to store the data for more than 60 days, so the cost was quite slim for this specific project. What I really loved was just sharing the link. I'd done all the work on the backend, but the construction site was asking me, "Can I see the data?" So I could do two things, right? One, I could create a dashboard, which is pretty good. Really powerful. So the top one, let's double click on it while I'm here as well: I'm measuring the noise sensor. During the night, the noise level goes down. During the day, the noise level becomes higher, 57 dB, which is still within the norm. And that just shows a natural trend that is happening. Humidity, again, exactly the same: a higher humidity level during the night compared to the day, when you have the sun. It's always good to actually do some analysis on the payload itself, not just saying the computer says X, Y, or Z. If you have entire visibility into the payload and the sensors, then you're going to be in much better shape. And air temperature as well: it follows a specific trend. And then you can probably do other work to remove the outliers. I've been so lucky; I did not have many outliers on this one, but I did some other, deeper analysis. The share link was pretty good. The top right actually gives the humidity. And I took this this morning, actually. All I had to do was create a shared link and then click on the link. I did it on my mobile. So I had the data at my fingertips, right? I did not need to build a mobile app or do anything. It's literally there, so simple.
So very small amount of time from where you are to the time that you deliver value to the end users, if you like. So that’s something I really liked.
Ehsan Fazel: 00:28:15.516 The Data Explorer is really good, very easy to use. It gives you tons of options to consume and visualize the data based on your specific needs. So you don't need to be a data scientist. Just a basic understanding of the data and what to expect puts you in a very good spot to start delivering valuable visualizations from the data. And then, needless to say, the Python SDK was a bonus, because it's a very popular language. There's tons of flexibility in terms of what you want to do, and you don't need to integrate or learn yet another language. So that was a bonus for me. An area I would say would be good for improvement: although the UI is beautiful and works well in a desktop browser, it would be good to see some enhancement there. And I know it's really hard to make something work for all kinds of scenarios. You still get quite a lot of built-in functionality, but it would be good to have something more flexible, specifically for browsers at different sizes. Given that so many mobiles are in use, I think that would be very handy and helpful. The other area, needless to say: the documentation is pretty good, but there are always improvements for documentation. Comparing the older version versus the newer version, I think the InfluxDB 2.0 documentation is far better, but migrating from one to the other was a little bit cumbersome. Apart from that, my experience with InfluxDB was pretty amazing. And yeah, thank you, Caitlin, for organizing this. I'm really delighted that our project went well and the customer saw the value. And thanks for sponsoring me. I just want to say thank you and stay in touch. If you find this interesting or useful, or you have feedback, I'm more than willing to listen. If you've got a project in mind, I'm happy to have a chat and see what you're looking to achieve.
And then I can give you my feedback, my ideas. Or if there's Cisco technology specifically involved, I would be happy to give you some ideas as to how best you can use it, or alternative options, or maybe put you in touch with the relevant account team that can help you further. With that, thank you so much. I'll hand it back to Caitlin.
Caitlin Croft: 00:31:12.041 Fantastic. Thank you, Ehsan. That was a great presentation. If you guys have any questions, please feel free to post them in the Q&A and chat. Don't be shy. I know Ehsan would love to answer your questions. While we wait, we'll just give everyone a minute. I just want to remind everyone of InfluxDays coming up. It is completely free. It's coming up November 2nd and 3rd. So please be sure to come. We'd love to see you there. Hang out. There's going to be lots of amazing sessions and some fun activities for everyone who's there. And we'd love to see you guys at the watch parties. All right. So, Ehsan, I know we've talked about this, but can you tell us a little bit more about how you are querying your data? Are you using Flux? How are you doing that?
Ehsan Fazel: 00:32:05.983 Yeah, sure. I didn't really need to use the Flux language. I know it's super powerful. It's great. And that's an area, I guess, where if I had very complex data sets, then definitely I would need the Flux language to query specific ones, detect anomalies, all kinds of things. It is on the roadmap, but we really didn't need it. There were just tons of things built in and ready for me to consume. So I did not need to use Flux.
Caitlin Croft: 00:32:41.154 Totally fair. It can do a lot. It's kind of amazing what Flux can do. Let's see, someone would like to know if you are using any sort of analytics service or if you have applied any AI to the data?
Ehsan Fazel: 00:32:58.507 Very good question. We have done it as a kind of secondary data set to identify the outliers. In fact, we had very few. It wasn't done by myself; we had a data scientist who used various methods to model the data and to see if it repeats itself. Some of the things, like temperature, yes, you would see a cyclical pattern with upward and downward trends based on the season. But one thing that could not really be measured or detected was the noise sensor, because you could have a clap, right? And the clap is normal; it happens. If you detect it as an anomaly, that obviously wouldn't be valid. The short answer: we have done some work on it, but that's separate from this project. And I wish I had the results to share; I'm sure I'll get them next week. Probably, I'll add them to my LinkedIn afterwards.
Caitlin Croft: 00:34:07.492 Awesome. Yeah. It's always fascinating to see what people are doing once you have that time series data: creating those machine learning models, applying AI to it to kind of expand the use of your data.
Ehsan Fazel: 00:34:23.883 Absolutely. Absolutely. Yeah.
Caitlin Croft: 00:34:26.862 So I know you kind of touched on this a little bit, but it sounds like you guys have already gained a lot from collecting your timestamped data. What are you hoping for next? Is there something that people are still trying to understand, or some anomalies happening on site that you're not quite sure of, and you're kind of thinking, "I've solved this problem. What's the next problem I want to solve"?
Ehsan Fazel: 00:34:54.039 Yeah, of course. So that was an environmental sensing project, I would say. Very basic checks. It becomes more significant when you're dealing with industrial settings, if you like. That's the next project that we're exploring: how can we monitor SCADA systems? We look at the challenges from Cisco's angle, the networking angle, where you have to convert different protocols into IP and then do data analysis on them. And mainly they are time-series based, and that's the next thing we're actually exploring with different customer sets.
Caitlin Croft: 00:35:41.373 That's fantastic. Yeah. I'm always amazed at all the different things that you guys are doing over there at Cisco. And with trying to understand noise pollution and everything, how has the response from the community been? I can only imagine that they are appreciative of trying to understand it better, to reduce it and make life better for them.
Ehsan Fazel: 00:36:08.713 Yeah, there are two aspects to that. Obviously, the construction companies are businesses, right? They need to protect themselves, and if there are complaints saying the noise was too loud, they are liable and have to pay fines to local authorities. This way, they had grounds to say whether it did or did not happen. And given we had proof points from a certified sensor, not just the cheapest thing from somewhere, it did have CE certification on the sensor itself, we had a proof point to say where the data is sourced from and what journey it took to the point that now we have it. So it was really helpful for the construction company to invest in this technology to protect themselves in case there are complaints of any kind. Equally, for the air pollution, the sensors are industrial-grade sensors, and they have certification. And that was actually one of the main reasons why we cared so much about the data. We could have done it as cheaply as we could, but we said, "Let's do it as best as we can." This is the Cisco way of doing things. We like to do things right, not just do it and move on. So, yeah, I think it's a great point, but I suggest for any project you want to do, just make sure you do it right, with full visibility and proof points. That will put you in a better spot. You can do more things.
Caitlin Croft: 00:37:54.632 Absolutely. And you showed some of the dashboarding that you’ve done in the UI. Was there any anomalous data that you’re like, “Oh, that’s a weird spike,” or “that’s a weird dip. I wonder what that is”? And what was it?
Ehsan Fazel: 00:38:13.889 Another great question. It needed very deep analysis on every single data point. There were two things to consider. One, we didn't have many sensors of the same type, so it was very hard for us to compare an anomaly with the norm and the outliers. The only way we had was comparing the same data set with itself, to see if we see the same data in the same range or not. And that was just fitting a polynomial against our previous data and then repeating the same, just to see if there is a spike or anomaly. We didn't have a lot of them. We checked. Again, I don't have the full details right now, but from my colleagues who did the data analysis on this, we didn't have tons of anomalies.
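The polynomial-fit check described here can be sketched in a few lines of NumPy: fit a low-order polynomial to the series and flag points whose residual sits far from the fitted trend. The polynomial degree and the threshold below are illustrative assumptions, not the values the team used.

```python
# Flag spikes by fitting a polynomial trend and checking residuals.
import numpy as np


def find_spikes(values, degree=3, k=3.0):
    """Return indices of points more than k standard deviations away
    from a degree-`degree` polynomial fitted to the series."""
    x = np.arange(len(values), dtype=float)
    y = np.asarray(values, dtype=float)
    coeffs = np.polyfit(x, y, degree)          # least-squares trend fit
    resid = y - np.polyval(coeffs, x)          # distance from the trend
    return np.flatnonzero(np.abs(resid) > k * resid.std()).tolist()
```

As Ehsan notes, this kind of check suits slowly varying series like temperature better than noise, where a legitimate one-off event such as a clap would be flagged even though it is normal.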
Caitlin Croft: 00:39:20.024 Well, that's great. I mean, that's always good news.
Ehsan Fazel: 00:39:24.286 Sure.
Caitlin Croft: 00:39:25.264 Well, awesome. Thank you, Ehsan. If anyone has any last minute questions, please feel free to post them. And if you were using InfluxDB, we would love to hear from you and learn more about your use case and send some swag your way and all sorts of fun stuff. So Bria has already posted the link. So if you’re using InfluxDB and you’d love to share that with us, we’d love to hop on the phone and learn more. Once again, this session has been recorded and will be made available by tomorrow morning. So really appreciate everyone joining today’s webinar. And thank you so much to Ehsan for sharing your use case.
Ehsan Fazel: 00:40:08.139 Thank you so much.
Caitlin Croft: 00:40:09.239 Thank you. Bye.
Ehsan Fazel: 00:40:10.915 Bye-bye.
Systems Architect, Cisco
Ehsan Fazel has over 20 years of experience in the IT industry, starting as a network engineer, then moving into consulting, and now as a Systems Architect in the sales organization. Ehsan was born in the UK and completed his schooling in Iran, including a bachelor's degree in mechanical engineering. He ventured into IT and networking in 2001, starting with Internet Service Providers during their boom and then working as a customer solution engineer for various telcos. Ehsan joined Cisco as a customer support engineer in 2010 at Learning at Cisco, overseeing all aspects of operations for expert-level certification (also known as CCIE). In his most recent engagement, Ehsan is creating a data visualisation pipeline that ingests sensor data and parses it in order to create a real-time view of the environmental data for roadway construction sites.