Simplifying the Complexity of Node.js with InfluxDB

Learn how NodeSource strengthens its competitive advantage by building its product on InfluxDB, increasing visibility into production applications and bringing better security monitoring and alerting into its solution.

8 min
09 Jun, 2021

Video Summary and Transcription

NodeSource's NSolid simplifies the complexity of Node.js with InfluxDB and provides analytics, diagnostics, and security. InfluxDB is used for data aggregation and real-time monitoring, with a three-second-latency sampling mechanism. There are challenges in utilizing InfluxDB, but it handles large amounts of data and is easy to test and debug. NSolid is recommended for production to benefit from its insights, security, and diagnostics.

1. Introduction to NodeSource and NSolid

Short description:

I'm going to talk about simplifying the complexity of Node.js with InfluxDB. NodeSource is the main Node.js distributor on Linux. Today, we are going to talk about our Node.js Enterprise Runtime called NSolid, which provides analytics, diagnostics, security, and is production-ready. We use InfluxDB for data aggregation and real-time monitoring. InfluxDB quickly became our top choice due to its time series database capabilities.

Hi, everybody. My name is Mariam Bilan. I'm a full stack product designer at NodeSource. And today, I'm going to talk about simplifying the complexity of Node.js with InfluxDB. It is important to note that this talk would not be possible without the incredible team of NodeSource engineers who curated the content. As expert navigators, they've managed to simplify JavaScript, specifically Node.js, for me, and to explain why we use InfluxDB in our infrastructure. So, let's start.

At NodeSource, we are the main Node.js distributor on Linux. Our value is centered on our expertise and our ability to translate performance data into a product that is accessible, interpretable, and actionable, and to do so in production. We are expert Node.js guides that help organizations and developers use Node to its fullest through our tools and consulting. For years, we have been known as the Node company, always focused on Node.js.

Specifically, today we are going to talk about our Node.js Enterprise Runtime called NSolid, which is an enterprise version of the open source project that is available out on the web. What we are doing is essentially adding some instrumentation that allows you to access the internal behavior of what is going on inside the runtime, and we're exposing this to a console. We have amazing case studies supporting the unique features of NSolid. You can access performance details, performance metrics, diagnostic capabilities, and security insights, and we also provide a bi-directional control mechanism to control what's happening in the runtime and how the runtime behaves. So with NSolid you have analytics, diagnostics, security, and best of all, it works directly in production. NSolid also gives you flexible integration and specialized alerts, and it is cloud native and container ready.

And probably you are thinking, how does it work? We're using InfluxDB to keep track of all the process data. With all of these metrics and analytics that we're getting, we're looking at serving large installations of Node, hundreds or thousands of processes running at the same time across different environments. In order to do that, we are using InfluxDB. InfluxDB drives the data aggregation. InfluxDB gives us a rich view of each individual process and their supplied metrics and diagnostic data, lets us capture CPU profiles or memory snapshots in order to detect memory leaks, and also covers security. So we knew we wanted to lean into a time series database, and InfluxDB quickly rose to the top of the list, so we quickly worked to migrate to it. One of the things that was really important to us, one of the unique value propositions of NSolid, is the real-time aspect. There are a lot of APM tools across the board, from Datadog to New Relic and whatnot, and there's a variance in terms of how available the data is. It's not necessarily real time. There's an actual staging period.
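To make this concrete, here is a minimal sketch, assuming the official @influxdata/influxdb-client npm package and an InfluxDB 2.x instance, of how per-process Node.js metrics could be written as tagged time series points. This is not NSolid's actual implementation; the URL, token, org, bucket, and service name are placeholders.

```js
// Minimal sketch (not NSolid's implementation): push per-process Node.js metrics
// into InfluxDB, tagging each point by app, host, and pid so the time series
// database can aggregate hundreds or thousands of processes.
const os = require('node:os');
const { InfluxDB, Point } = require('@influxdata/influxdb-client');

// Placeholder connection details for a local InfluxDB 2.x instance.
const client = new InfluxDB({ url: 'http://localhost:8086', token: 'example-token' });
const writeApi = client.getWriteApi('example-org', 'node-metrics');

function reportProcessMetrics() {
  const mem = process.memoryUsage();
  const point = new Point('process_metrics')
    .tag('app', 'example-service')       // hypothetical service name
    .tag('host', os.hostname())
    .tag('pid', String(process.pid))
    .floatField('rss_bytes', mem.rss)
    .floatField('heap_used_bytes', mem.heapUsed)
    .floatField('uptime_seconds', process.uptime());
  writeApi.writePoint(point);
}

reportProcessMetrics();
writeApi.close().then(() => console.log('metrics flushed'));
```

Tagging each point by process identity is what lets a single query roll the same measurement up across an entire fleet of Node processes.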

2. InfluxDB Integration and Challenges

Short description:

We want to be proactive, so our sampling mechanism has a three-second latency. InfluxDB simplifies distribution and integration into our product. We offer configuration mechanisms for customers to control cardinality and permissions. We face some challenges in utilizing InfluxDB, but it meets the demands of handling large amounts of data and is easy to test and debug. We recommend using NSolid in production to benefit from its insights, security, and diagnostics.

And what we'll see sometimes is anywhere between a one and five minute delay before you actually see those results. What we want is to be proactive. So our sampling mechanism runs every three seconds, which means there's a three-second latency between what's happening and what you are actually seeing and being alerted on. And because there's a huge amount of processes occurring, InfluxDB is really well poised to deliver that.
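As an illustration of that cadence, here is a minimal sketch, not NSolid's actual sampler, of a Node.js process collecting event loop delay and memory usage every three seconds and handing each sample to a publish callback, which could be the InfluxDB write shown earlier. The names are illustrative.

```js
// Illustrative three-second sampling loop (not NSolid's internal sampler).
// Uses Node's built-in perf_hooks to measure event loop delay and
// process.memoryUsage() for heap/RSS, then hands each sample to a publisher.
const { monitorEventLoopDelay } = require('node:perf_hooks');

function startSampler(publish, intervalMs = 3000) {
  const histogram = monitorEventLoopDelay({ resolution: 20 });
  histogram.enable();

  return setInterval(() => {
    const mem = process.memoryUsage();
    publish({
      timestamp: Date.now(),
      eventLoopDelayMeanMs: histogram.mean / 1e6, // nanoseconds -> milliseconds
      eventLoopDelayMaxMs: histogram.max / 1e6,
      heapUsedBytes: mem.heapUsed,
      rssBytes: mem.rss,
    });
    histogram.reset(); // start a fresh three-second window
  }, intervalMs);
}

// Hypothetical usage: log each sample; in practice the callback could write
// an InfluxDB point instead.
startSampler((sample) => console.log(sample));
```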

A single binary is actually all you need to run InfluxDB. So the ease of distribution was a critical aspect for us as well; it simplified a lot of the steps. So when using InfluxDB, how did we integrate this into our product? We actually tried to limit what the customer has to do to configure InfluxDB. Out of the box, our product just works, and InfluxDB is just kind of magically there and provided. However, from a security and configuration standpoint, we have a lot of different configuration mechanisms that customers can use to actually control the cardinality, change their permissions, and even change how the indexing works with InfluxDB.
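One of the knobs behind "controlling cardinality" is deciding what becomes a tag: in InfluxDB, tag values are indexed, so every distinct tag value creates a new series. The sketch below shows a common client-side practice, not NSolid's actual configuration: keep bounded identifiers as tags and push unbounded values, such as request IDs, into fields.

```js
// Hedged sketch of client-side cardinality control (not NSolid's configuration).
// Bounded values (app, host, route template) become indexed tags; unbounded
// values (request IDs) are stored as fields so they don't explode series count.
const { Point } = require('@influxdata/influxdb-client');

function requestPoint({ app, host, route, requestId, durationMs }) {
  return new Point('http_request')
    .tag('app', app)                       // bounded: a handful of services
    .tag('host', host)                     // bounded: number of machines
    .tag('route', route)                   // bounded: route template, e.g. '/users/:id'
    .stringField('request_id', requestId)  // unbounded: keep it out of the index
    .floatField('duration_ms', durationMs);
}

// Hypothetical usage with the writeApi from the earlier sketch:
// writeApi.writePoint(requestPoint({
//   app: 'example-service', host: 'node-1', route: '/users/:id',
//   requestId: 'abc-123', durationMs: 12.5,
// }));
```

Server-side settings such as the index engine, retention, and user permissions are then handled through InfluxDB's own configuration.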

So it's important to highlight and reiterate that we are a unique user in the sense that we're packaging InfluxDB into a product. As a result, we're actually offering 24/7 support to our customers on a unique set of issues, so we support the issues that might come up with InfluxDB as it relates to our product, on top of everything else we have to cover. One of the great things about InfluxDB is the learning curve: it's actually very gentle to get into. The documentation is great, and the community is excellent. But if you need the more advanced features and you go under the hood a little bit more, there are all kinds of bells and whistles and flags to fine-tune it for your needs. So, looking at some of those things in our use case, what are some of the challenges that we face utilizing InfluxDB?

I think that data integrity is one of those things that we're constantly guarding against. As you look at utilizing InfluxDB, that's really important, and it is more about understanding your application, your customers, your use case, the shape of your data, and how you want to access it. InfluxDB is there, and it can really meet those demands of huge amounts of data being provided. Everything that InfluxDB offers is actually really easy for us to test and debug. The sign of a good database implementation is that the user doesn't necessarily know about it or need to think about it. So we're happy with using InfluxDB. Generally, if you're interested, go to Nodesource.com and check it out; we firmly believe that NSolid is the only Node you should be running in production, because it gives you all the insights and magic and security goodness, as well as diagnostics. So if you want to head over there, you can easily sign up, check it out, run a couple of processes, take a couple of CPU snapshots, and get going right now on NSolid. Thanks for watching.

Check out more articles and videos

We constantly think of articles and videos that might spark people's interest, skill us up, or help build a stellar career

Node Congress 2022
26 min
It's a Jungle Out There: What's Really Going on Inside Your Node_Modules Folder
Do you know what’s really going on in your node_modules folder? Software supply chain attacks have exploded over the past 12 months and they’re only accelerating in 2022 and beyond. We’ll dive into examples of recent supply chain attacks and what concrete steps you can take to protect your team from this emerging threat.
You can check the slides for Feross' talk here.
Node Congress 2022
34 min
Out of the Box Node.js Diagnostics
In the early years of Node.js, diagnostics and debugging were considerable pain points. Modern versions of Node have improved considerably in these areas. Features like async stack traces, heap snapshots, and CPU profiling no longer require third party modules or modifications to application source code. This talk explores the various diagnostic features that have recently been built into Node.
You can check the slides for Colin's talk here. 
JSNation 2023
22 min
ESM Loaders: Enhancing Module Loading in Node.js
Native ESM support for Node.js was a chance for the Node.js project to release official support for enhancing the module loading experience, to enable use cases such as on the fly transpilation, module stubbing, support for loading modules from HTTP, and monitoring.
While CommonJS has support for all this, it was never officially supported and was done by hacking into the Node.js runtime code. ESM has fixed all this. We will look at the architecture of ESM loading in Node.js, and discuss the loader API that supports enhancing it. We will also look into advanced features such as loader chaining and off thread execution.
JSNation Live 2021
19 min
Multithreaded Logging with Pino
Top Content
Almost every developer thinks that adding one more log line would not decrease the performance of their server... until logging becomes the biggest bottleneck for their systems! We created one of the fastest JSON loggers for Node.js: pino. One of our key decisions was to remove all "transport" to another process (or infrastructure): it reduced both CPU and memory consumption, removing any bottleneck from logging. However, this created friction and lowered the developer experience of using Pino, and in-process transports are the most requested feature from our users. In the upcoming version 7, we will solve this problem and increase throughput at the same time: we are introducing pino.transport() to start a worker thread that you can use to transfer your logs safely to other destinations, without sacrificing either performance or the developer experience.

Workshops on related topic

Remix Conf Europe 2022
195 min
How to Solve Real-World Problems with Remix
Featured Workshop
- Errors? How to render and log your server and client errors
  - When to return errors vs throw
  - Setup logging service like Sentry, LogRocket, and Bugsnag
- Forms? How to validate and handle multi-page forms
  - Use zod to validate form data in your action
  - Step through multi-page forms without losing data
- Stuck? How to patch bugs or missing features in Remix so you can move on
  - Use patch-package to quickly fix your Remix install
  - Show tool for managing multiple patches and cherry-pick open PRs
- Users? How to handle multi-tenant apps with Prisma
  - Determine tenant by host or by user
  - Multiple databases or single database/multiple schemas
  - Ensure tenant data always stays separate from others
GraphQL Galaxy 2020
106 min
Relational Database Modeling for GraphQL
Top Content
Workshop (Free)
In this workshop we'll dig deeper into data modeling. We'll start with a discussion about various database types and how they map to GraphQL. Once that groundwork is laid out, the focus will shift to specific types of databases and how to build data models that work best for GraphQL within various scenarios.
Table of contents
Part 1 - Hour 1
  a. Relational Database Data Modeling
  b. Comparing Relational and NoSQL Databases
  c. GraphQL with the Database in mind
Part 2 - Hour 2
  a. Designing Relational Data Models
  b. Relationships, Building Multi-join Tables
  c. GraphQL & Relational Data Modeling Query Complexities
Prerequisites
  a. Data modeling tool. The trainer will be using dbdiagram
  b. Postgres, albeit no need to install this locally, as I'll be using a Postgres Docker image from Docker Hub for all examples
  c. Hasura
Node Congress 2023
109 min
Node.js Masterclass
Workshop
Have you ever struggled with designing and structuring your Node.js applications? Building applications that are well organised, testable and extendable is not always easy. It can often turn out to be a lot more complicated than you expect it to be. In this live event Matteo will show you how he builds Node.js applications from scratch. You’ll learn how he approaches application design, and the philosophies that he applies to create modular, maintainable and effective applications.

Level: intermediate
Node Congress 2023
63 min
0 to Auth in an Hour Using NodeJS SDK
Workshop (Free)
Passwordless authentication may seem complex, but it is simple to add it to any app using the right tool.
We will enhance a full-stack JS application (Node.JS backend + React frontend) to authenticate users with OAuth (social login) and One Time Passwords (email), including:
- User authentication - Managing user interactions, returning session / refresh JWTs
- Session management and validation - Storing the session for subsequent client requests, validating / refreshing sessions
At the end of the workshop, we will also touch on another approach to code authentication using frontend Descope Flows (drag-and-drop workflows), while keeping only session validation in the backend. With this, we will also show how easy it is to enable biometrics and other passwordless authentication methods.
Table of contents
- A quick intro to core authentication concepts
- Coding
- Why passwordless matters
Prerequisites
- IDE of your choice
- Node 18 or higher
JSNation 2023
104 min
Build and Deploy a Backend With Fastify & Platformatic
Workshop (Free)
Platformatic allows you to rapidly develop GraphQL and REST APIs with minimal effort. The best part is that it also allows you to unleash the full potential of Node.js and Fastify whenever you need to. You can fully customise a Platformatic application by writing your own additional features and plugins. In the workshop, we'll cover both our Open Source modules and our Cloud offering:
- Platformatic OSS (open-source software) — Tools and libraries for rapidly building robust applications with Node.js (https://oss.platformatic.dev/).
- Platformatic Cloud (currently in beta) — Our hosting platform that includes features such as preview apps, built-in metrics and integration with your Git flow (https://platformatic.dev/).
In this workshop you'll learn how to develop APIs with Fastify and deploy them to the Platformatic Cloud.
JSNation Live 2021
156 min
Building a Hyper Fast Web Server with Deno
Workshop (Free)
Deno 1.9 introduced a new web server API that takes advantage of Hyper, a fast and correct HTTP implementation for Rust. Using this API instead of the std/http implementation increases performance and provides support for HTTP2. In this workshop, learn how to create a web server utilizing Hyper under the hood and boost the performance for your web apps.