How InfluxDB Enables NodeSource to Run Extreme Levels of Node.js Processes
Webinar Date: 2020-07-21 08:00:00 (Pacific Time)
NodeSource empowers developers by providing OSS and Enterprise levels of Node.js runtime, tooling and support. Their flagship product, N|Solid Runtime, includes hardened Node.js LTS releases, increased visibility into production applications and better security monitoring and alerts. NodeSource aims to solve pain points across the software development lifecycle for developers and DevOps engineers. Their experts help organizations improve the security, compliance and risk position for all npm packages. A time series database has strengthened NodeSource’s competitive advantage.
In this webinar, Nathan White and Mike Nedelko will dive into:
- Their ability to simplify the complexity of Node.js
- How they help customers run mission-critical Node.js applications with software, training and support
- NodeSource’s approach to handling up to 9k processes by using InfluxDB
Watch the Webinar
Watch the webinar “How InfluxDB Enables NodeSource to Run Extreme Levels of Node.js Processes” by filling out the form and clicking on the Watch Webinar button on the right. This will open the recording.
Here is an unedited transcript of the webinar “How InfluxDB Enables NodeSource to Run Extreme Levels of Node.js Processes”. This is provided for those who prefer to read than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
- Caitlin Croft: Customer Marketing Manager, InfluxData
- Nathan White: Senior Solutions Architect, NodeSource
- Mike Nedelko: Vice President of Products, NodeSource
Caitlin Croft: 00:00:04.686 Hello everyone. Once again, my name is Caitlin Croft. I’m super excited to be joined today by Nate White and Mike Nedelko of NodeSource who will be presenting on how InfluxDB enables NodeSource to run extreme levels of Node.js processes. Once again, friendly reminder, please feel free to add any questions you may have in the Q&A box or the chat window. And without further ado, I am going to hand it off to Nate and Mike.
Mike Nedelko: 00:00:42.572 Hi everyone. Nice to meet you all. We’re very excited to be part of this webinar today. Today, some of the things that we’re going to go through is we’re going to introduce you to NodeSource and how we are using InfluxDB, specifically how we came to using Influx, what kind of solution we needed, how this looks, essentially, in action, what the implementation looks like, as well as some challenges and some strengths that we’re definitely observing with the InfluxDB suite. Some of the things that — right from the get-go, who is NodeSource? Well, we are the principal distributor of Node.js on Linux. Whenever people are deploying Node into production on any Linux environment they’re usually getting their Node from us, which is something that we’re super proud of. We are, essentially, taking that understanding and that position within the ecosystem, and we are considering ourselves as acting as an interface to the enterprise, especially our enterprise customers. There we are productizing our expertise that is represented through, not only our position within the ecosystem, but also through our team members who are core contributors to the Node.js project, in and of itself, and also very critical package maintainers, including Nate, himself, from whom you will hear in a second.
Mike Nedelko: 00:02:07.601 In order to make our expertise available, provide performance as well as security insights into what’s happening under the hood inside of Node, and make it really accessible, our expertise is ultimately made available in two buckets. One is through our services, as you can see here on the left, and the other is through our products, which you can see here on the right. The product that we’re specifically going to talk about is our Node.js enterprise runtime called N|Solid, which is an enterprise version of the open-source project that is available out in the wild. And what we’re doing is we’re essentially making some augmentations that allow you to access the internal behavior of what is going on inside of the runtime, and we’re exposing this through a console — the screenshots of which you can see here — that allow you to not only access performance details, performance metrics, diagnostic capabilities, and security insights, but also provide a bi-directional control mechanism to control what’s happening in the runtime and how the runtime behaves. As you can see here on that slide, we’re using Influx to keep track of all the process data. And without further ado, I’ll hand over to Nate who’s going to tell you a little bit about how that all works, in detail.
Nathan White: 00:03:43.307 Great. Thank you, Mike. So, yeah, N|Solid is just a drop-in replacement for Node, but we’ve done that by actually augmenting the runtime and building an agent off the side — a sidecar — to make sure that it’s very performant and that we can get metrics out of the runtime that are generally not possible or very difficult to do without huge amounts of overhead. And with all these metrics and analytics that we’re getting, we’re looking at serving large installations of Node; hundreds or thousands of processes running at the same time across different environments. And in order to do that, we’re using InfluxDB, and we’ll look at that a little bit closer. Some of the things that InfluxDB drives are the data aggregation; and the things that we do, as far as getting rich views, are around the analytics of each individual process, in terms of live metrics. Diagnostic data — we’re able to capture CPU profiles or memory snapshots in order to detect memory leaks, and also security auditing. There’s a lot of security tools out there that can help you out, and they’re all excellent. What’s unique about N|Solid is the ability to actually run security scans on the actual code that’s actually being executed. So we can audit those things. And so if you had some kind of process that slipped through auditing procedures, we can make that visible for you. So we’re going to take a little bit of a step back and kind of look at the need for a solution and how we kind of — how we came to using InfluxDB with N|Solid.
Nathan White: 00:05:37.332 So as Mike was pointing out, some of the features of the N|Solid console is the ability to see all these processes that are actually being manipulated at the same time, to be able to query them, to graph them, and even to build filters around there that actually create real-time triggers or alert mechanisms, or even actionable items to capture forensics data, like CPU profiles and the live security auditing as well. The console also has an API and integrations with other interfaces, like StatsD. Some of the feature things is memory leak detection actually doing — instead of the manual process [inaudible] going a little bit further and actually doing that proactively for the end user, and then other things around Async tracing and whatnot. In order to be able to do all these robust features and this advanced kind of insight into these processes to help make sure that environments stay up and are proactive, we do use InfluxDB to kind of be that data-central hub within this architecture. And so we’ll go into this a little bit more, but our product is built off of the open-source version of InfluxDB, and we actually distribute it with the open-source version. However, we’ve had customers that have had higher demands with the number of processes and kind of the constraints of what they need to be able to do with that data, and InfluxDB Enterprise actually offers a robust opportunity to kind of scale that out and even reach higher capacities.
Nathan White: 00:07:44.856 And when we’re talking about this huge 3X improvement, which is actually phenomenal, that’s actually not a limitation of InfluxDB itself. Influx could go even higher. We are still solving issues with — or, I wouldn’t say, “Issues,” because 9,000’s amazing — is constraints with the console — with a web interface. So the decision-making process of how we came about selecting InfluxDB and the process for using this in our architecture. The important thing to realize is that we’re not a SaaS-based solution; we’re an on-prem solution. Because of the enterprise and constraints around security and compliance issues and — being able to have full control or kind of autonomy over that data and all those kind of regulatory factors — this is actually something that you can install into your cloud or into your solution or into your on-prem solution. So with that said, there’s a need to kind of bundle this up and to kind of figure out how all those kind of interplay, so we needed that potential analytics hub for processing. So we’re going to take a look at this historically. So N|Solid — work started on this — we started working on this in early 2014; InfluxDB was launched late 2013. So this was a very new technology at that time. We were very new at that time. And at that time we kind of drank the Kool-Aid of what was going on in the community — what was going on in the ecosystem.
Nathan White: 00:09:32.008 And so we leaned into Etcd at that time, because it seemed to have a little bit more community support. And Etcd is — not taking away from Etcd at all. It’s extremely powerful. It has its use case. It’s being used extensively in configuration management for Kubernetes, and it serves its purpose there extremely well. It’s high-availability. It’s trusted. It kind of was the first thing out there with kind of this cloud-native aspect. Some of the cons for us, in terms of initially choosing Etcd was prioritizing its high availability so early on in our product development life cycle. It’s limited, in terms of how you access the data. So if you’re doing configuration management and key-value stores, like environmental variables or kind of configuration stuff, Etcd is great, but for what we wanted to do with data-rich, kind of analytics drawn, and different views or lenses, and kind of slicing that data, it didn’t really serve that purpose well. And at that time, Etcd was being really driven kind of hard by that Docker community. So we kind of went — I guess you could say that we were a little bit biased by that, in all those kind of factors. So that led us to — after that initial release we started to see these constraints and these limitations within our product offering, and what we could actually do and what we could achieve and what our vision was for the N|Solid product. So we quickly kind of started — the engineering team quickly started gathering and looking at other viable options.
Nathan White: 00:11:32.445 So we knew we kind of wanted to lean into a time series database, and InfluxDB quickly rose to the top of the list, so we quickly worked to migrate to InfluxDB. One of the things that was really important to us is — one of the unique value propositions of N|Solid is the real-time aspect. So there are a lot of APM tools out there across the board, from Datadog to New Relic and whatnot, and there’s a variance, in terms of how available that data is. It’s not necessarily real time; there’s actually a staging period. And what we’ll see sometimes is anywhere between a minute to five-minute delay before you actually see those results. What we want is to be proactive. So our sampling mechanism is every three seconds, so there’s a three-second latency between what is happening and what you’re actually seeing and what you’re being alerted on. So because of that there’s a huge amount [of writes?] occurring. InfluxDB is really poised to deliver on that. The other thing is, it’s because of these queries and these kind of security things and performance kind of intelligence that we’re providing into the product, having a rich query capability is actually extremely important and makes that much easier for us, where we don’t have to roll that solution and solves a lot of that right out of the gate, and InfluxDB does a great job. And we’ll look at some of those advantages [inaudible] looking at scale or in different environments.
Nathan White: 00:13:34.475 There’s actually a lot of different things that you can do to kind of configure and to fine-tune it for your application needs. And the other thing — this is probably a little bit more unique to our use case, which we’ll get into, is around kind of the self-contained. A single binary is all you need to run InfluxDB [inaudible] actually [inaudible]. So the ease of distributing it was actually a critical aspect for us as well; it simplified a lot of steps. So when using InfluxDB, how did we integrate this into our product? So we actually try to limit what the customer has to do with configuring InfluxDB. So out of the box, our product just works. And InfluxDB is just kind of magically there and it’s provided. However, from a security, from a configuration standpoint, we have a lot of different configuration mechanisms that customers can do to actually control the cardinality, change their permission, and even change how the indexing works within InfluxDB. There’s a plethora of other options [inaudible] that [inaudible]. So looking at the console — so talking about this, so that as we go further in this conversation — so you guys have a little bit of more context here.
Nathan White: 00:15:27.299 So going into our dashboard here, this is a live — this is a live dashboard of our N|Solid console. So right here we have 5,000 processes [inaudible]. These are just kind of demo processes that are up and running. So we see this being broken down by application. So if you have multiple microservices, we could actually see these broken out by those individual microservices. So [inaudible] we can actually see these processes across our environment and clustering in the different loads, the CPU utilization, and the heap usage across these individual processes. What we can do is we can actually create filters, like CPU used, and we can go ahead and say, “Hey. If something goes over 60% —” we can set a filter. Right? And in that aspect, what we can then do is allow a user to, then, proactively trigger actions or other kind of mechanisms or alerts, or even send that into a Slack message. And then we can get reports on security vulnerabilities within your npm package or your ecosystem, and tell you remedies, as far as what’s going on within those individual processes. So all this data that you’re seeing and looking at all these processes is being driven by InfluxDB. I just kind of wanted to hand wave and let you see that from a higher-level view, of some of the power that Influx provides to us.
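The filter-and-trigger flow described here — flagging processes whose CPU crosses a threshold and then firing an action — can be sketched roughly as follows. The process shape, threshold, and logging action are purely illustrative; this is not N|Solid's actual API:

```javascript
// Hypothetical sketch of a metric filter that flags processes crossing a
// threshold, similar to the console filters demonstrated above.
function makeFilter(metric, threshold) {
  return (processes) => processes.filter((p) => p[metric] > threshold);
}

// Illustrative per-process samples (in a real deployment these would come
// from InfluxDB queries over the agent's 3-second samples).
const samples = [
  { id: 'svc-a-1', cpu: 72, heapUsed: 120 },
  { id: 'svc-a-2', cpu: 41, heapUsed: 95 },
  { id: 'svc-b-1', cpu: 88, heapUsed: 300 },
];

const highCpu = makeFilter('cpu', 60);
const flagged = highCpu(samples);

// A trigger could forward these to Slack, a webhook, a pager, etc.
flagged.forEach((p) => console.log(`ALERT: ${p.id} CPU at ${p.cpu}%`));
```

In the real product the filter feeds triggers that can also capture forensic artifacts (CPU profiles, heap snapshots) rather than just logging.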
Nathan White: 00:17:26.603 So getting into the implementation details a little bit about how we’re using InfluxDB. So it’s important to kind of highlight and kind of reiterate that we’re kind of a unique user of it since we’re packaging InfluxDB into a product. And as a result, we’re actually offering 24/7 support to our customers on a unique set of issues. So we do support the issues that might come up with InfluxDB, related to our product, and a plethora of other things. And also, since we are distributing a Node binary, and we’re distributing N|Solid across multiple different environments, including OSX — the Mac OS — and Unix, different Linux platforms and different builds and different architectures there, even including Alpine. We have a lot of different unique challenges that kind of arise from that. So we’ll talk a little bit about our use case. I think a lot of this is going to be general and kind of best practices along the way. So in our journey with using InfluxDB we learned quite a few things that were probably obvious, but wanted to highlight some of the things that we had to run into — that we learned along the way. So logging was kind of a critical one, in terms of kind of squeezing out performance or even kind of managing InfluxDB in a sane manner. So the verbosity of the logs that are actually spit out from Influx can be insane.
Nathan White: 00:19:25.866 And so tuning those properly is actually a really important attribute, in terms of actually getting the optimal performance out of your application, and also out of InfluxDB itself, since it has to write those logs out. So finding that balance of what you need in order to have the retention of the log, so if there is an issue. We found that turning off the HTTP log helped, because that’s actually just logging every kind of action or transaction that’s going into the database, which can be actually frightening to try to parse through. And because of those logs, you do need to have a log rotation. And since we were dealing with customer installations we don’t have, I guess — sometimes we run into limitations in what that environment looks like or how much disk space [inaudible]. So having the awareness of some of those things and kind of proactively being on top of those things was a really important thing. So just kind of highlighting that, and that’s just kind of a general best practice around any kind of tool or database. Right? Performance and availability. Right? So squeezing out the best performance out of Influx. What we did was we actually wrote a custom implementation of buffering writes internally. And so there were pros and cons of that, but what we were doing was actually — for our availability needs, if a process, for some reason, got disconnected or had a network hiccup, it could actually buffer those writes out to Influx and they wouldn’t be lost. So we could write batch — but one of the problems with this is our write batch sizes grew massive under load, so we had issues there.
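The internal write buffering described above — batching points so a brief disconnect doesn't lose writes, while capping batch size so it can't grow without bound under load — might be sketched like this. All names here are illustrative, not NodeSource's implementation:

```javascript
// Buffers points and flushes them to InfluxDB in bounded batches. Points
// that fail to send are re-queued, so a network hiccup doesn't drop data;
// the maxBatch cap avoids the "batches grew massive under load" problem.
class WriteBuffer {
  constructor(flushFn, { maxBatch = 5000 } = {}) {
    this.flushFn = flushFn; // e.g. an HTTP POST to InfluxDB's /write endpoint
    this.maxBatch = maxBatch;
    this.queue = [];
  }

  push(point) {
    this.queue.push(point);
  }

  async flush() {
    while (this.queue.length > 0) {
      const batch = this.queue.splice(0, this.maxBatch);
      try {
        await this.flushFn(batch);
      } catch (err) {
        // Put the batch back so the points survive a failed write.
        this.queue.unshift(...batch);
        throw err;
      }
    }
  }
}
```

A caller would push a point per sample and flush on a timer; pairing this with backpressure (refusing pushes past a high-water mark) is one way to keep memory bounded during a long outage.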
Nathan White: 00:21:27.487 HTTP keep-alive support, where we could actually keep the connection open, did actually offer help with writes, and it also provided relay features for slave DB’s. So we will talk a little bit about that more later. So that will go into querying and that kind of stuff. So if you need to do expensive operations on your InfluxDB or, actually, kind of do more investigative forensics work, having some of those — being able to kind of provision those other databases is really helpful. Fine-tuning the retention policies of your data and finding the use cases around your data is actually really vital, in terms of actually making it performant and actually scaling out Influx in the long term. So we always want to keep the data as long as we want, right, ideally. However, what we found is live data, one week kind of threshold was, in 99.9% of the cases, was more than adequate for investigation and for forensics work to cover up anomalies or to kind of go back retroactively. And what that allowed us to do was to — we didn’t get rid of the data after a week, but it allowed us to kind of rebalance those indexes and to optimize around what Influx was good at and to kind of find a nice balancing point there. So around some of the performance and availability, we were looking at how we could squeeze out more out of this mechanism within our architecture.
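The retention tuning described — keeping roughly a week of live, high-resolution data as the default — maps onto InfluxQL retention policies. A hypothetical example (the database and policy names are invented for illustration):

```javascript
// InfluxQL statements for bounding live data to about a week, of the kind
// described above. Database and policy names are illustrative only.
const db = 'nsolid_metrics';

// Default policy: high-resolution data is kept for one week.
const createRp =
  `CREATE RETENTION POLICY "one_week" ON "${db}" ` +
  `DURATION 7d REPLICATION 1 DEFAULT`;

// Older data can be down-sampled into a longer-lived policy rather than
// discarded outright, which is what "we didn't get rid of the data after
// a week" suggests.
const createLongRp =
  `CREATE RETENTION POLICY "one_year" ON "${db}" ` +
  `DURATION 52w REPLICATION 1`;
```

These statements would be issued once against the InfluxDB HTTP API (or the `influx` CLI) at provisioning time.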
Nathan White: 00:23:26.466 We did consider UDP for a split second, but, thankfully, we didn’t go down that route. Utilizing multiple sockets within Influx was beneficial. We removed and adjusted delays in our user end-code applications and some of those mechanisms within Influx. One of the things that we had was a — we removed record order insertions constraint. So before we had kind of this requirement there, and we removed that, and that actually helped a lot with our queuing kind of mechanism or our batch write. So we realized that we didn’t need that, especially with how we started to develop our apps and how we wrote our queries to kind of coincide with that. And that was a tremendous win for us. Preventing DNS lookups just kind of helped to optimize that network performance and that network configuration. And those are just some of those flags that are — or, environmental variables that are out there that Influx provides to kind of, again, fine-tune everything. But I want to take a step back and kind of point out something. These are some lower-level things. If you’re just starting off with InfluxDB, out of the box, it’s amazing how well it performs and the ease and simplicity. This is, again, what we’re talking about here is scaling this out to hundreds of thousands, if not millions of writes or transactions happening per second. So at that level, some of these things, then, become important.
Nathan White: 00:25:14.355 So one of the great things about Influx is that it actually provides [inaudible] of — I think the learning curve is actually very nice. It’s actually typically fairly gentle to get in. The documentation is great. The community’s excellent. But if you need those power features and you go under the hood a little bit more, there’s actually all kinds of bells and whistles and flags to kind of fine-tune it for your needs. So with that said, when we look at some of the [inaudible] at those things in our use case, what are some of the challenges that we faced utilizing Influx? So I think data integrity is one of those things that we’re constantly kind of butting our head up against. So as you look at utilizing Influx, it’s really important — and this is more about understanding your application, understanding your customers or your use case and the shape of your data and how you want to access that. Because it’s always going to be a game of trade-offs of what’s going to be performant; having a lot of rich views is going to consume a lot of memory and a lot of space. So striking that balance of your data and how that looks is really important. So understanding that and really mapping out those use cases is extremely vital to achieving the best performance and the best experience with Influx.
Nathan White: 00:27:02.449 We ran into, early on — you could say these are just things that you have to — with any kind of data transformation tool or kind of hydration out of a database: proper escaping, comma-delimited. We had some records that are comma-delimited strings in our application layer. We convert those to an array and making sure that those things are handled. We have info records that are kind of just like metadata. We were writing those with unique sub-millisecond timestamps. And we’re using the timestamp data type within Influx, but we were using those kind of features to ensure that there was no data collision within InfluxDB, because of the nature of how our application utilizes that [inaudible] processor on an application layer. So there could be some of those in — so those were some ways that we could avoid some of those things. So you need to kind of understand your use case and be able to kind of plan those things out accordingly. And I think the big one is understanding the cardinality for your data and planning around that. Because that’s going to be the real big factor, in terms of your expense and the overhead of how much memory and how powerful of a machine InfluxDB needs to run on, based off of your needs and how you access that data.
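Two of the application-layer safeguards mentioned above can be sketched concretely: converting comma-delimited strings to arrays, and giving metadata records unique sub-millisecond timestamps so that InfluxDB — which treats points with identical measurement, tag set, and timestamp as the same point — never silently overwrites one. This is an illustrative sketch, not NodeSource's code:

```javascript
// Sketch: unique nanosecond timestamps for "info" records that would
// otherwise collide on the same millisecond. InfluxDB timestamps have
// nanosecond precision, leaving room to nudge colliding points apart.
let lastNs = 0n;
function uniqueTimestampNs() {
  const now = BigInt(Date.now()) * 1000000n; // ms -> ns
  lastNs = now > lastNs ? now : lastNs + 1n; // bump by 1 ns on collision
  return lastNs;
}

// Comma-delimited strings from the application layer are converted to
// arrays (and trimmed) before being written, as described above.
function parseDelimited(record) {
  return record.split(',').map((s) => s.trim());
}

const t1 = uniqueTimestampNs();
const t2 = uniqueTimestampNs();
// t1 and t2 differ even when generated within the same millisecond.
```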
Nathan White: 00:28:47.484 So another pain point that we had, which is relatively minor, but — and this is unique to — this goes back to what I was sharing before, which is our unique use case of distributing InfluxDB to our customers and embedding that into a product, since we were targeting multiple platforms. And we’re trying to make this as easy for customers as possible and working across different scenarios. It did present challenges for us. So this is something that a lot of people can avoid, depending if you’re just building it for a single application or if it’s for your own personal application. You likely are not going to experience any of these kind of issues. So we ended up having to kind of build a little bit of intelligence before we started up InfluxDB to detect what kind of environment we were running in and what kind of kernel flags were actually set, and also understanding that we might have to limit ourselves in some of those without kind of actively, or with an iron fist, overwriting settings that a customer might’ve already had and not knowing what the impact of that might be. So with all those kind of factors it added a little bit more overhead, in terms of the robustness and the configuration around how we deploy that. Because we wanted to kind of make that hardened and not understanding those environments, you’re kind of being blinded sometimes. We were proactively kind of guarding against some of that stuff. So you could say it puts shackles on us a little bit, in terms of how we use Influx, and it can prevent — or, it can prevent other issues around bugs that might be in one platform or another.
Nathan White: 00:30:49.297 But generally, the releases around — this was a while ago, we had an Alpine — a particular feature that we were using broke in Alpine. But in general, the releases around InfluxDB are extremely stable, and the feature-set parity across those environments is, actually, extremely good. So again, those are just a unique set of challenges. I think this is the big one; the richness of data that’s actually provided by Influx is amazing. And so there’s kind of a tendency to put your — like a kid in a candy store, and you just want to kind of grab at everything. And you got to be careful with that because of what that can actually mean for your performance. So you can cause huge CPU spikes and memory explosions depending on how your queries are written and how you’re actually accessing InfluxDB. So understanding that and constraining yourself, possibly, in your application layer, putting those constraints in place to prevent queries or access to the data that could degrade your performance to a level that’s unacceptable, where you might have outages or delays in some of those reporting mechanisms. So some of those things are — pagination is an obvious one. More filters; make sure that there’s actually a limit. Within our console, we actually provide an API that queries Influx directly.
Nathan White: 00:32:46.874 However, the customer doesn’t talk to — it’s a wrapper that talks directly to Influx, but we don’t allow the customer to go directly there. As a result, we require certain filters to make sure that the queries are not — that they’re going to be performant and that they’re going to be limited in the scope, as far as what’s being returned back, and validations and restrictions on those queries to kind of make sure that you’re not hitting expensive indexes. One of the things I was pointing out before, if you do have those needs to kind of do data science works or investigations or you’re triaging an issue or you have an ongoing outage and you need to kind of investigate those things, doing a master slave kind of configuration and creating that within InfluxDB can be very helpful to not impact your write performance or impact the performance on those applications, so that you can be able to do that without actually impeding on the nature of your applications. So that’s another great aspect of Influx in those kind of capacities. This is kind of the funny one, because Influx is a NoSQL database, so to speak, but one of the growing pains is schema versioning; it’s vital and you need to have a plan of action. Because we had growing pains with this along the way with, actually, how our data shapes — how we were writing into our database was changing or evolving as the nature of our application — or, as we added new features.
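The query guard described — a wrapper API that refuses unbounded or unfiltered queries before they ever reach Influx — could look roughly like this. The rules and names are hypothetical, not the actual console API:

```javascript
// Rejects queries that would be unbounded or too expensive before they
// reach InfluxDB, in the spirit of the wrapper API described above.
function guardQuery(query, { maxLimit = 1000 } = {}) {
  const q = query.trim();

  const limitMatch = /\bLIMIT\s+(\d+)\b/i.exec(q);
  if (!limitMatch) {
    throw new Error('query must include a LIMIT clause');
  }
  if (Number(limitMatch[1]) > maxLimit) {
    throw new Error(`LIMIT must be <= ${maxLimit}`);
  }
  // Requiring a filter (typically a time-range predicate) keeps queries
  // off expensive full-measurement scans.
  if (!/\bWHERE\b/i.test(q)) {
    throw new Error('query must be filtered with a WHERE clause');
  }
  return q; // safe to forward to InfluxDB
}
```

A real guard would likely also whitelist measurements and enforce a maximum time range, but the principle is the same: constrain at the application layer so no single API call can degrade write performance.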
Nathan White: 00:34:45.820 So we would need to transition or migrate that data to those new schemas and kind of keep track of how we did those migration processes. Because if you’re trying to query something that’s not there or the key changed or whatnot, things can start to go a little wonky. So having a good understanding or a good mechanism for keeping that in place is something that I would highly recommend, or some kind of plan. And it could be as simple as having a single key that you start off with that’s like, “Version 1.” And you just increment that as you roll out new features, in terms of tables or the schema of your database, and that will save you a lot of time and energy and frustrating evenings.
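The "single version key you increment" plan recommended here might look something like this ordered-migration sketch. The migration itself (renaming a field key) is an invented example:

```javascript
// Minimal schema-version tracker: migrations are an ordered list, and a
// stored version number says how many have already been applied to a
// record. Each migration transforms a point written under the old shape.
const migrations = [
  // v1 -> v2: a field key was renamed as a feature evolved (illustrative).
  (point) => {
    if ('heap' in point.fields) {
      point.fields.heapUsed = point.fields.heap;
      delete point.fields.heap;
    }
    return point;
  },
];

// Applies every migration from `fromVersion` onward and reports the new
// schema version, so queries only ever see the current shape.
function migrate(point, fromVersion) {
  let p = point;
  for (let v = fromVersion; v <= migrations.length; v++) {
    p = migrations[v - 1](p);
  }
  return { version: migrations.length + 1, point: p };
}
```

Adding a feature that changes the data shape then means appending one function to `migrations` and bumping readers to expect the new version.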
Nathan White: 00:35:40.745 So let’s talk about all the fun stuff around Influx. So some of the strengths that we see — I think this is — by far the most obvious is the continuous queries with regards to InfluxDB. So the query nature, which is SQL within Influx, makes it very familiar coming from other database technologies. And it’s very rich, in terms of what it provides and what you can actually accomplish with those queries. But the most extremely kind of compelling feature of those is continuous queries, where you can actually have — you can make a query and leave it open. So as data changes or new data comes in that matches that criteria, it continues down that stream so you can kind of get an updating aggregation of what’s going on. So this is great for dashboards or live graphing of any kind of metrics or insight into your data. And it also can be great for doing aggregation reports, like around our security vulnerability features. So this, alone, has been a game changer, in terms of being able to provide features and the richness and the robustness; and it’s so much fun to play with. So if you haven’t played with continuous queries, I highly encourage you to go explore those and learn how to work with those data streams and some of the power and features that you can kind of get out of that. And obviously, Influx was made for writing, so you can throw a huge amount of data at it, and it can really handle it.
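In InfluxQL, a continuous query of the kind praised here looks like the following; the measurement, database, and tag names are invented for illustration:

```javascript
// A continuous query that down-samples high-frequency process metrics
// into one-minute means, keeping a dashboard-friendly aggregate fresh.
const cq =
  'CREATE CONTINUOUS QUERY "cq_cpu_1m" ON "nsolid_metrics" BEGIN ' +
  'SELECT mean("cpu") AS "cpu_mean" INTO "cpu_1m" FROM "process_metrics" ' +
  'GROUP BY time(1m), "app" ' +
  'END';
```

(Strictly speaking, InfluxDB runs a continuous query on a schedule per `GROUP BY time()` interval rather than holding a single query open, but the effect is exactly the continuously updated aggregation described: dashboards read the cheap `cpu_1m` rollup instead of re-scanning raw samples.)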
Nathan White: 00:37:40.098 It’s fast, it’s scalable, and it allows for aggregating that in real time. So these are all amazing things that Influx kind of offers; a unique set and a nice balance for these types of time series-related problem sets. Right? So in terms of the — it’s likely that most of us won’t hit those kind of performance constraints. Influx is there and it can really kind of meet those demands of huge amounts of data being thrown at it. The neat thing for us that InfluxDB offered was — it’s actually really easy to test and debug. And the nature of the data — and when I was talking about understanding your data shapes and understanding your schemas, once we got a really good handle or a lockdown on that we were able to take huge chunks of data emulated or simulated over a whole course of a year for like 2,000 processes and squeeze that in and be able to replay that data. And so we can actually run scenario tests and do capacity testing and test out the capacity of Influx, but also test out the capacity of our application and work around some of those mechanisms that some of our larger enterprise customers may be running into and see if some of those things — the logs and the tools are great, if used wisely. And there’s a great community. There’s a lot of great documentation. As you run into issues, Influx is pretty good at actually uncovering what is going on and exposing that to you. With a little bit — with a little bit of work, you can kind of get right back on track and back to your problem instead of trying to figure out what’s going on with Influx.
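The replay-based capacity testing described — simulating long stretches of data for thousands of processes and pushing it through the stack — might be sketched like this. The point shape, rates, and counts are illustrative:

```javascript
// Generates simulated per-process samples over a time range, so capacity
// tests can replay realistic load into InfluxDB and the application.
function* simulateSamples({ processes, startMs, endMs, intervalMs }) {
  for (let t = startMs; t < endMs; t += intervalMs) {
    for (let p = 0; p < processes; p++) {
      yield {
        measurement: 'process_metrics',
        tags: { pid: `sim-${p}` },
        fields: { cpu: Math.random() * 100 },
        timestamp: t,
      };
    }
  }
}

// One hour at 3-second sampling for 10 processes: 1,200 intervals x 10
// processes = 12,000 points to replay.
const points = [...simulateSamples({
  processes: 10,
  startMs: 0,
  endMs: 3600000,
  intervalMs: 3000,
})];
console.log(points.length); // → 12000
```

Scaling the same generator to a year and 2,000 processes (and streaming it rather than materializing an array) gives the kind of replay corpus described above.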
Nathan White: 00:39:40.776 Overall, it’s an extremely stable tool and it will tell you when you’re not using it properly. So with that said, I’m going to go ahead and — I encourage you, if you want to see more in action, go ahead and try out N|Solid. You can go ahead and try it for free through nodesource.com, and you can get this running on your application. If you’re running Node.js right now, there’s no modifications to your application. You just run it and you will actually get instant visibility and be able to get all this performance insight and forensics insight into your application. And I guess I’ll pass that back over to the Influx team.
Caitlin Croft: 00:40:34.354 Thank you, Nate. That was great. So I’m sure you all know we have InfluxDays. Normally in the fall we would be in San Francisco in person, hanging out with all of you, but, of course, this year it is going to be virtual. It’s still going to be a really fun event. InfluxDays will be held on November 10th and 11th, completely virtual, and about two weeks prior to that we’ll have Flux training. I know the hands-on Flux training is super popular. Right now we are looking for speakers, so please go online and submit a proposal through our call for papers. We’d love to see what you’re doing, see what the community is doing, and share those stories with the much broader community. Even though this is considered our North America event, as it is virtual, anyone can join. It was really cool — at the London event we actually had people from around the world join, so I’m really excited to see where everyone joins from in November. So while everyone thinks of their questions for Mike and Nate, I just wanted to give everyone a couple of reminders and helpful links. If you were having fun here and you want to learn a little bit more about InfluxDB today, we do have our virtual time series meetup, which is at 10:00 am Pacific, which is 5:00 pm GMT. I will throw the link into the chat. Feel free to join — it’s free, and there will be some swag giveaways. There has been a change in the speaker, but it’ll still be a really good virtual event.
Caitlin Croft: 00:42:30.374 And also, here is the InfluxDays link, just in case you are interested in submitting to our call for papers. And lastly, if you are interested in joining our Slack channel, please feel free to join us there as well. So there are lots of different resources. I love getting to meet all of you over Slack. It is fun at InfluxDays, even with it being virtual — there are tons of conversations going on, and just like an in-person event, you start recognizing people’s names and so on; it’s a really good event. All right. I’m just checking to see if there are any questions. We’ll keep the lines open for another minute in case people have some last-minute questions. That was such a great presentation. I think it’s great how you’ve chosen to embed InfluxDB into your product, and no one knows it’s there, which is kind of cool. It’s kind of stealthy that way. It’s always fun seeing the different applications InfluxDB can have. So I think you guys have done a fantastic job of helping the Node.js community as well by using InfluxDB.
Mike Nedelko: 00:44:02.074 Sweet. Thanks, Caitlin. Yeah, we think so too. I mean, the sign of a good database implementation is that the user doesn’t necessarily know it’s there or need to feel that it’s there, so we’re very happy using Influx. Generally, as Nate already mentioned, if you’re interested, go to nodesource.com and check it out. We do have a self-guided demo where you can see all of this. We firmly believe that N|Solid is the only Node you should be running in production, because it gives you all the insights and metrics and security goodness, as well as diagnostics. So if people want to head over there, you can easily sign up for a free trial. Check it out. Run a couple of processes. Take a couple of CPU snapshots and get going.
Caitlin Croft: 00:44:44.288 Great. Well, that looks like that is everything. So thank you everyone again for joining today’s webinar. It has been recorded and will be available for replay later tonight. And we always have lots of these different webinars, so be sure to check out our website to see what other webinars we have lined up. Thank you everyone.
Mike Nedelko: 00:45:08.391 Thanks. Take care.
Nathan White: 00:45:09.162 Great. Thank you.
Mike Nedelko: 00:45:10.473 Bye.
Nathan White: Senior Solutions Architect, NodeSource
Nathan started his career in the FinTech and military sectors in the late 90s. Passionate about open source, Nathan has worked with a wide variety of companies solving a copious set of problems. While working at LearnBoost, he launched a Node.js platform on version 0.1.133 and helped co-author Mongoose. In his spare time, he likes to contribute back to the community through educational outreach. Working with local community colleges, he helped develop a grant and curriculum to bring under-employed individuals back into the workforce through technology, with an 87% job placement rate.
Mike Nedelko: Vice President of Products, NodeSource
Mike is a Product Director with 10+ years of experience leading the delivery of products for start-ups, UN agencies, educators, and Fortune 100 partners. He particularly enjoys working with global teams across business, design, engineering, and marketing to develop appealing developer experiences. He firmly believes in technology for good and remains involved with several initiatives designed to address humanitarian issues.