Experimenting with Deno for Easier Kubernetes Deployments


As we all know, dealing with Kubernetes YAML is not very intuitive (especially for those just getting started), and the more resources and dependencies are added, the messier and more complex the process becomes. In this talk, we'll explore how we can use TypeScript and Deno to bring typing, composition, code reuse, and testing as an alternative to YAML, which lacks these capabilities, all while still remaining declarative and easy to use.

31 min
25 Mar, 2022

Video Summary and Transcription

The Talk discusses using Deno and TypeScript to simplify writing and managing Kubernetes YAML configurations. It explores the challenges of working with large YAML files and introduces a unique solution. The Talk also highlights the features and benefits of Deno, such as its secure runtime and powerful typing capabilities. It demonstrates how Deno can be used to create and modify Kubernetes objects, and emphasizes the advantages of using a general-purpose language for configuration. The Talk concludes by discussing the potential applications of this approach beyond Kubernetes deployments.


1. Introduction to Deno and YAML in Kubernetes

Short description:

I'm going to talk about how we can use Deno to define our Kubernetes deployments in an easier way. Let's talk about YAML and its challenges. Kubernetes YAMLs are huge and complex, making manual writing difficult. Today, we'll discuss how to write YAMLs more easily using Deno and TypeScript. I'm a co-founder of Livecycle and an open source maintainer of Tweek. Livecycle is a collaboration tool for development teams. We'll also explore how resources in Kubernetes work.

Hi, everyone. I'm Ishai. I'm the CTO of Livecycle. Today I'm going to talk about how we can use Deno to define our Kubernetes deployments in a much easier way than what we are usually doing.

To give some context, let's talk about YAML. Because the configuration of Kubernetes is basically this big, giant YAML file. And even if you haven't used Kubernetes, you probably know YAML files from other places, like GitHub Actions or Docker Compose or CloudFormation. So many tools in the DevOps ecosystem use YAML to define their configuration. And to be honest, the first time I encountered these YAML files, it was terrible. It wasn't convenient, copying from different YAMLs and merging them, and the indentation was weird, and the array and object syntax differences. But I got used to it. I'm writing lots of YAMLs, and it works well. The thing is that Kubernetes YAMLs are huge and complex, and there are so many of them. In this case, writing the YAMLs manually simply doesn't cut it in terms of scale and correctness, and even fun and sanity.

So, today I'm going to talk about how we can write these YAMLs more easily, and more specifically, how we can do it using Deno and TypeScript.

A few words about myself. As I mentioned, I'm a co-founder of Livecycle. I'm a software engineer. I've been using Kubernetes for the past five years. I'm also an open source maintainer of Tweek, a cloud-native feature management solution. A few words on Livecycle: it's a collaboration tool for development teams based on preview environments. It's a product in public beta that we launched a few weeks ago, and we are very excited about it. You are welcome to check it out at livecycle.io.

Okay, so, what's on the agenda today? We are going to look at Kubernetes YAMLs. We are going to define the same configuration in Deno, and we'll summarize. A few words about how resources in Kubernetes work. Basically, the idea is that we have the Kubernetes users, which can be developers or administrators. They push a definition to the API server using YAML, and we have the controllers of Kubernetes that make all the magic happen. Now, these resources can be everything from a cron job, or a network thing, or a volume, or a bucket.

2. Introduction to YAML and Kubernetes Resources

Short description:

Kubernetes is vast and awesome. Everything is described with YAML, making it extensible and enabling the creation of custom resources. I'll show a naive use case with a Tetris application and demonstrate how to run and delete it. Then, I'll explain a naive definition of a service using a deployment object, which is more powerful than a pod.

It's really vast and it's pretty awesome. I mean, it's really one of the selling points of Kubernetes, in my opinion. Everything is data, everything is a resource. Every resource is described with YAML. It's extensible, so we can have custom resources, and we really have those kinds of things for monitoring, or for scaling, or for platform stuff, like build or CI tooling. And this ecosystem of Kubernetes is huge, and everything is defined in YAML. So it's pretty amazing.

So I'm going to show what these YAMLs look like in a very naive use case, so we'll be on the same page about what we are trying to solve with these kinds of YAMLs. In the first example, I'm going to show a Tetris application. We can see that we have the definition of Kubernetes. Every resource has the version of its group — the API version, it's called in Kubernetes. The kind, in this case, is a pod. A pod is simply a compute unit that runs containers. And if I want to run it, I'm using the Kubernetes CLI and just running it. I already have a working Kubernetes cluster set up — it's a real cluster — so we can see these examples live. So, I run the Tetris application, and basically we now have a container that runs Tetris. It's not very useful yet, because I want to interact with it. So, I'm going to delete this definition the same way: instead of applying the file, I'm deleting it.

And let's look at how, again, a very naive definition of a service will usually look. So, I have the same configuration of the application, but with CPU and memory limits, and I'm using a deployment object, which is a bit more powerful than a pod, because it basically creates a number of pods. In this case, it's two pods. Also, if one of them goes down, it will revive it. It's also designed for rollout strategies. Basically, a deployment is the way we usually define an application.

3. Defining Services and Exposing Applications

Short description:

I'm going to define a service connected to the same deployment, and use Ingress to expose it externally. Running and deleting the Tetris application demonstrates the complexity of YAML configurations. A single Kubernetes service definition can be huge, especially for cloud native distributed apps. Multiple YAMLs are often used in real-world scenarios. The orchestration capability is showcased with a complex application. Working with large YAML files can be exhausting and error-prone. There are many tools and solutions to simplify this problem, and today I'll introduce a unique solution.

Another thing I'm going to define is a service that is connected to the same deployment using this selector. The service is designed to create a DNS name and a load balancer for the pods that are created by the deployment. So we have the deployment, we have the service, and the last thing we need is to expose it externally, because I want to open it in a browser that is not running in the cluster, so I'm going to use another resource called Ingress.

So all these resources are simply for running an application and exposing it, and let's see that it works. I'm going to run the service, and run the Tetris here, and we see that it's working. We have a Tetris application running here, and again I'm going to delete it. And notice that this is like fifty lines of code for a very naive configuration. Most configurations are much more complex. We can have volumes, autoscaling configuration, environment variables, maybe things related to monitoring, and lots and lots of stuff.

And that's the idea here: a single Kubernetes service definition can be huge, and if we are running a full cloud-native distributed app, it's even more difficult. Take, for example, this application, which is one of the Docker examples. We have a voting app, a result app, Redis, a DB — a very complex application for a very simple thing. It's over-engineered on purpose. Here I have the multi-service YAML in one place, but in the real world, it would probably be several YAMLs and not a single file. We have the namespace, the deployment of the DB, the service for the DB, the deployment of the Redis, the service of the Redis, and the applications — two applications, one in Python, one in Node.js, plus the .NET worker — and you can see that we have lots and lots of stuff here. And again, if I run it, we can see all these pods running — the vote, the Redis, the DB — and we have services and stuff like that as well. And we can see it here, too. So, we have the voting application and the result. Here we can choose cats or dogs, and you can see that it's changing. A very complex application for very simple stuff, but it's designed to show the orchestration capability. I'll just delete it. The thing is that this YAML was, again, very naive. And when you are working with these YAMLs, which are so big even for the simple cases, it can be exhausting — not to mention very error-prone. So, can it be simpler than that? Yes. And there are many tools and solutions for this problem. Actually, there are so many tools, because everyone that uses Kubernetes encounters this problem in one way or another, whether it is solved by using different abstractions or using templates or something like that. There are multiple solutions.
And today, I'm going to show another one that I think has specific properties that make it very interesting and unique.

4. Introduction to Deno Runtime

Short description:

Deno is a runtime for JavaScript and TypeScript, similar to Node.js but with a different feel. It's a secure runtime for both languages, powered by browser APIs and built-in ecosystem tooling.

So, it's based on Deno. And for those of you who are not familiar with Deno, Deno is a runtime for JavaScript and TypeScript. That's the Wikipedia definition. In a nutshell, it's something very similar to Node.js — it was actually created by the same creator. It's a secure runtime for both TypeScript and JavaScript. It's powered by browser APIs and built-in ecosystem tooling, so it has a different feel than Node, even though it's also based on V8. And some will say it's the next evolution of Node.js. I don't necessarily think so — that's why we have the question mark here.

5. Introduction to Deno CLI and TypeScript

Short description:

Let's take a look at Deno and its CLI. Deno does everything from bundling to testing. It's similar to Go and supports TypeScript out of the box. TypeScript has a powerful type system. I'll show how easy it is to write Kubernetes definitions in TypeScript. We import Kubernetes definition files from URLs and have autocomplete and type checking. The types are generated, and we import using URLs in Deno.

Let's take a look at Deno. The first thing is the CLI. Everything is in Deno — the only tool you need to work with Deno is the CLI. It's not like Node, where you have Node and npm, and maybe Jest for testing, and the TypeScript compiler if you want to compile TypeScript code. The Deno CLI does everything: bundling, compilation, running, installing, testing, and everything else. So, it's really nice. It's very similar to Go, if you use Go.

Deno supports TypeScript out of the box, so we don't need a TypeScript compiler — it's already built in. And TypeScript is a superset of JavaScript with an extremely powerful type system; I'm sure many of you have used it in the past. Today I'm going to show how easy it is to write these Kubernetes definitions in TypeScript. So, let's look at some examples. I'll start with a simple one — basically, the example of the pod. You can notice here that, first of all, I'm importing this Kubernetes definition file from a URL. And you can see that I have the API, and basically everything that's possible to send to Kubernetes, I have with full autocomplete and type correctness. So, here we are adding a container without a name, and it's going to tell me the name is missing. The same goes for using the wrong type — again, regular type correctness. And the thing is that these types are generated; I generated the types for all the common Kubernetes APIs. The other thing I'm importing here is from the std library of Deno. In Deno we don't have npm and packages like that — every import is just a URL. So, it makes it very, very friendly to consume.
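To make the idea concrete, here is a minimal, self-contained sketch of such a typed pod definition. The interfaces are hand-written stand-ins for the generated Kubernetes types the talk imports by URL, and the image name is purely illustrative:

```typescript
// Hand-written stand-ins for the generated Kubernetes types
// (in the talk, these come from a URL-imported, generated module).
interface Container {
  name: string; // omitting `name` would be a compile-time error
  image: string;
  ports?: { containerPort: number }[];
}

interface Pod {
  apiVersion: "v1";
  kind: "Pod";
  metadata: { name: string; labels?: Record<string, string> };
  spec: { containers: Container[] };
}

const tetrisPod: Pod = {
  apiVersion: "v1",
  kind: "Pod",
  metadata: { name: "tetris" },
  spec: {
    // `example/tetris` is a hypothetical image name.
    containers: [{ name: "tetris", image: "example/tetris", ports: [{ containerPort: 80 }] }],
  },
};

console.log(JSON.stringify(tetrisPod, null, 2));
```

Because the object is typed, misspelled fields or a missing container name fail at compile time instead of at `kubectl apply` time.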

6. Running Example and Creating YAML in TypeScript

Short description:

Now I'm going to run this example and get the relevant YAML created in TypeScript. We can use variables to avoid replication and errors. Tests can be created without the need for additional tools. Using a general-purpose language for configuration provides composition and abstraction benefits. Creating abstractions in TypeScript makes it easy to define and consume configurations. Kubernetes allows for patching resources.

Now I'm going to run this example, and I'm getting the relevant YAML, only created in TypeScript. In this example, it doesn't provide that much value, but it will if we start looking back at the more complex example.

So, that's the example of the deployment, service, and Ingress I showed before. First of all, you can see that we are using variables. I'm using a labels variable and metadata, and then I can use them in several places — I don't need the replication that can also cause errors. And I'm doing the same thing: I'm just dumping it to YAML. So, if I run this definition, we'll have pretty much the same YAML we had before, with a few lines of code. And I can also apply it to the cluster — that way, I'm just piping the YAML created by the TypeScript to my Kubernetes.
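The shared-variables idea can be sketched like this. The resource shapes are simplified stand-ins; the talk serializes to YAML via Deno's std library, but plain JSON works for the sketch too, since `kubectl apply -f -` also accepts JSON manifests:

```typescript
// One labels object shared by both resources, so the service selector
// can never drift out of sync with the pod labels — a classic
// YAML copy-paste bug this style eliminates.
const labels = { app: "tetris" };

const deployment = {
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: { name: "tetris", labels },
  spec: {
    replicas: 2,
    selector: { matchLabels: labels },
    template: {
      metadata: { labels },
      spec: { containers: [{ name: "tetris", image: "example/tetris" }] },
    },
  },
};

const service = {
  apiVersion: "v1",
  kind: "Service",
  metadata: { name: "tetris", labels },
  spec: { selector: labels, ports: [{ port: 80 }] },
};

// The talk dumps YAML here; JSON is shown for self-containment.
console.log([deployment, service].map((r) => JSON.stringify(r)).join("\n"));
```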

Okay. Another interesting thing is we can have tests. I created a test for this service. And again, I'm not using any other tool — just the Deno CLI. I don't need Mocha or Chai or any other tool for running tests. So, I can run tests, and I can define this configuration in TypeScript. And the moment I'm using a real, general-purpose language for these definitions, I also get all the benefits of the language — composition and abstraction are much more powerful.
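A sketch of what such a test could assert. In the talk this would be a `Deno.test` block using the std assertions; here a plain function stands in so the snippet is runtime-agnostic, and the resource shapes are illustrative:

```typescript
// Simplified stand-ins for the generated definitions.
const labels = { app: "tetris" };
const deployment = { spec: { selector: { matchLabels: labels } } };
const service = { spec: { selector: { app: "tetris" } } };

// In the talk: Deno.test("selector matches", () => { assertEquals(...) });
function testSelectorMatchesPodLabels(): void {
  const dep = deployment.spec.selector.matchLabels;
  for (const [key, value] of Object.entries(service.spec.selector)) {
    if (dep[key as keyof typeof dep] !== value) {
      throw new Error(`service selector ${key}=${value} does not match deployment labels`);
    }
  }
}

// Throws if the two definitions ever drift apart.
testSelectorMatchesPodLabels();
```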

So, here I created an abstraction called application. Again, it's the same definition we saw before, but notice that I just defined the name, the image, and the hostname, and everything is derived from parameters. You can see the definition here. The definition itself is not important; what's important is that it's very easy to create these kinds of definitions, and it's very easy to consume them. And the good thing about it is that, because it creates a resource set, I can still patch the resources afterwards. So, my abstraction takes these five or six fields, but Kubernetes allows us to modify the resources.
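A hypothetical version of such an `application` abstraction might look like this — all parameter and field names here are illustrative, not the talk's actual code:

```typescript
// A few parameters in, a full resource set out.
interface AppParams {
  name: string;
  image: string;
  host?: string; // only when set do we emit an Ingress
}

function application({ name, image, host }: AppParams) {
  const labels = { app: name };
  return {
    deployment: {
      apiVersion: "apps/v1",
      kind: "Deployment",
      metadata: { name, labels },
      spec: {
        replicas: 1,
        selector: { matchLabels: labels },
        template: { metadata: { labels }, spec: { containers: [{ name, image }] } },
      },
    },
    service: {
      apiVersion: "v1",
      kind: "Service",
      metadata: { name, labels },
      spec: { selector: labels, ports: [{ port: 80 }] },
    },
    ingress: host
      ? {
          apiVersion: "networking.k8s.io/v1",
          kind: "Ingress",
          metadata: { name },
          spec: { rules: [{ host }] },
        }
      : undefined,
  };
}

const tetris = application({ name: "tetris", image: "example/tetris", host: "tetris.local" });
```

Because the return value is a plain object of resources, callers can still reach in and patch any field afterwards, as the next section shows.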

7. Creating Services and Abstractions

Short description:

I'm creating services and patching their properties, such as the replica number and TLS settings. The resulting YAML is concise and allows for higher-level abstractions. I'm also adding volumes and using expose to expose applications externally. This approach reduces code and provides flexibility. Tests can be created for application abstractions, and a fluent API can be used to add and remove components. The powerful type system of TypeScript catches errors and enhances productivity.

The resources have many more features that I can tweak. So, here I'm creating the service, and I'm going to patch the replica number to three, or change the TLS of the Ingress to use a TLS certificate based on a secret that I have in the cluster, or change the hostname.

Again, afterwards, I'm just printing this YAML, and if I run this example, we'll have the same YAML definition. And it gets much more useful the moment we try to create multiple services. So here is the example we saw before with the big YAMLs, but with almost four times less code. And most important, it's not just the number of lines of code — it's how concise they are. We are just specifying the things we want, and we can be talking in higher-level abstractions.

So I'm creating a namespace, I'm creating the DB application. Here is an example where I'm adding the volume — it's not part of the contract of the application, but again, I can patch it afterwards. The same goes for Redis. We have the vote application, where I'm using expose to expose it externally, and the same with the result application, because these are two different applications; the worker is just running the Docker image. So, it's less code, and we still have the same flexibility, because we can always change these properties. We can also do some logic — based on some properties in one object, we can derive the right values for other properties as well. So, there can also be far fewer points of error.
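Patching after the fact can be sketched like this — the abstraction returns plain objects, so nothing stops us from tweaking them afterwards (resource shapes and names are illustrative):

```typescript
// A resource set as a hypothetical abstraction might return it.
const db = {
  deployment: {
    apiVersion: "apps/v1",
    kind: "Deployment",
    metadata: { name: "db" },
    spec: {
      replicas: 1,
      template: {
        spec: {
          containers: [{ name: "db", image: "postgres:15" }],
          volumes: [] as { name: string; emptyDir: Record<string, never> }[],
        },
      },
    },
  },
};

// Patch the replica count...
db.deployment.spec.replicas = 3;

// ...and add a volume that was never part of the abstraction's contract.
db.deployment.spec.template.spec.volumes.push({ name: "db-data", emptyDir: {} });
```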

For this application abstraction, I also created tests. That way we can create reusable objects that are testable, and everyone can use them. Another example is creating an API that is more fluent, like the application API I created here. It's more functional — we can add and remove stuff. It's like a fluent API. And what's interesting in this example is how powerful the type system of TypeScript is. You can notice that if I remove the add-service call here, we don't have the expose method anymore, so it's not going to work. And because we don't have the expose, we don't have the Ingress as well. If I add the service back, we still don't have the Ingress — I'll have just a service. In order to have the Ingress object, I need to expose it. So, basically, it safeguards me from making mistakes, for example forgetting to do something. The type system can catch lots of these errors. And it's not just good for errors — it also gives me nice autocomplete, so I can be more productive.
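The "expose requires a service" guarantee can be sketched with a tiny builder where `expose()` only exists on the type returned by `addService()` — a simplified illustration of the idea, not the talk's actual API:

```typescript
// Base builder: no service yet, so no `expose()` method in its type.
class App {
  constructor(readonly resources: string[]) {}

  addService(): ServiceApp {
    return new ServiceApp([...this.resources, "Service"]);
  }
}

// Only the post-addService type offers expose(), so "ingress without
// a service" is a compile-time error, not a runtime surprise.
class ServiceApp extends App {
  expose(host: string): ServiceApp {
    return new ServiceApp([...this.resources, `Ingress(${host})`]);
  }
}

const app = new App(["Deployment"]).addService().expose("tetris.local");
// new App(["Deployment"]).expose("...") would not even type-check.
```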

8. Creating Service Blueprint and Flexibility

Short description:

I created a service blueprint that bundles all this configuration, including a ServiceMonitor custom resource taken from the Prometheus Operator project, so other developers can easily configure and monitor their services in Kubernetes. This blueprint gives developers the flexibility to customize their deployments while using predefined resources.

This approach I can also take to create what I call a blueprint, but it's very similar. So, I created a service blueprint that simply does all this kind of configuration. In this case, it also uses a custom resource called ServiceMonitor, which comes from the Prometheus Operator project — a ServiceMonitor that monitors my service. Because, again, a Kubernetes resource can be anything, especially once we use custom resources. The custom resource types here are also generated: I've generated the types for the Prometheus Operator resources, so I can use them here as well. This service blueprint is something I can give to other developers and they can use it — something that the platform team can build and then other developers can consume. But the good thing about it is that they still have the flexibility to change these properties afterwards. So, if there is something in their deployment they want to change, they can do it. It's a good thing to have this flexibility.
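A sketch of such a blueprint. The `monitoring.coreos.com/v1` group and `ServiceMonitor` kind are the real Prometheus Operator custom resource; everything else (function name, field shapes) is illustrative:

```typescript
// A platform-team blueprint: one call emits the deployment, the
// service, and a ServiceMonitor wired to the same labels.
function serviceBlueprint(name: string) {
  const labels = { app: name };
  return [
    { apiVersion: "apps/v1", kind: "Deployment", metadata: { name, labels } },
    { apiVersion: "v1", kind: "Service", metadata: { name, labels } },
    {
      apiVersion: "monitoring.coreos.com/v1", // Prometheus Operator CRD group
      kind: "ServiceMonitor",
      metadata: { name },
      spec: { selector: { matchLabels: labels } },
    },
  ];
}

const resources = serviceBlueprint("vote");
```

Because the blueprint returns plain resources, consumers can still patch any of them before serializing, which is the flexibility the talk emphasizes.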

9. Running Malicious File in Deno Sandbox

Short description:

I'm going to show an example of running a malicious file in Deno and how it fails due to the sandbox environment. By default, Deno restricts access to environment variables, file systems, and networks. Running the malicious file without permissions results in a permission denied error. However, running it with the `--allow-env` flag grants access to environment variables.

And the last two examples I'm going to show are things that are very interesting in Deno and in TypeScript. The first one: you've seen that I'm always importing scripts from URLs, and you're thinking, what if this URL is poisoned or malicious? So, I'm going to run this example, where we have a malicious file, and we'll see that it fails. The reason is that, by default, it runs in a sandbox. When I do `deno run`, by default the TypeScript file doesn't have access to environment variables, to the file system, or to the network. In this example, this evil utility is reading my environment variables, stealing my SSH credentials, and sending them to an evil service. We can see that it's permission denied — env access is required — so I'll run it again with `--allow-env`.

10. Limiting Access and Use in CI Systems

Short description:

I can limit the folders Deno is allowed to read and restrict network access to api.github.com. This prevents scripts from stealing environment variables or reading files from the file system. It's a useful feature for applications like CI systems.

So again, I run the script with `--allow-env`, and we see in the same example that it now requires read access to the SSH folder in my home directory. And if I do `--allow-read=.`, it still won't work, because the other permissions are missing as well. Let's see. So currently it fails on network access, but if I add `--allow-net`... You see that we don't have read access to that folder, because when I did `--allow-read=.`, I allowed it to read only the current folder and not, for example, the home SSH folder. So that's another thing: we can limit not only the folders it's allowed to read, but also the network. If I do something like `--allow-net=api.github.com`, it means the script can only reach the network at api.github.com. So a script that tries the naive thing — stealing our environment variables or reading a file from the file system and then sending it somewhere — won't work in Deno. That's a nice thing when we think about how we are going to use these scripts, which is usually in CI systems.

11. Power of Typing in Deno

Short description:

In this section, I demonstrate the power of typing in Deno. By importing resources and leveraging the TypeScript type system, we can easily modify Kubernetes objects. We can use it for patching, validation, and more. The tooling in Deno is simple yet powerful, requiring minimal code and providing autocomplete and correctness. It eliminates the need for project configuration and external dependencies. While not perfect, Deno offers composition features, declarative composition, parameters, abstraction, overlays, and patching capabilities. It's a versatile tool for working with Kubernetes.

Another thing that is very interesting is the power of typing. In this case, I'm going to show an example of modifying an existing file. If we take the Kubernetes file that you remember from my previous example, here I have a deployment with replicas two, and I'm going to run it through a pipe that is going to change it, and we can see that we now have replicas three. Let's look at the code.

So basically, I'm importing the resources from stdin, and I'm doing a traversal over resources of specific types — here it's deployment. And notice again the power of the TypeScript type system. Here I have the definitions of all my API groups in Kubernetes, and here I'll have only the resources under apps/v1. If I choose, for example, daemon set, the deployment here is actually typed as a DaemonSet object, so it doesn't have replicas. But if I use deployment, it works, because deployment is under apps/v1 and it has replicas. Everything has autocomplete and correctness, which basically means we filter for the correct type and also get the right object that we can modify if we want.
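The pipe step can be sketched as: read resources, patch only `apps/v1` Deployments, pass everything else through unchanged. Here the input is inlined instead of coming from stdin, and the `Resource` type is a simplified stand-in for the generated types:

```typescript
// Simplified stand-in for the generated Kubernetes resource types.
interface Resource {
  apiVersion: string;
  kind: string;
  spec?: { replicas?: number };
}

// Patch replicas on Deployments only; everything else passes through.
function bumpDeploymentReplicas(resources: Resource[], replicas: number): Resource[] {
  return resources.map((r) =>
    r.apiVersion === "apps/v1" && r.kind === "Deployment"
      ? { ...r, spec: { ...r.spec, replicas } }
      : r
  );
}

// In the talk this input is parsed from stdin and the output is
// serialized back to YAML for the next stage of the pipe.
const input: Resource[] = [
  { apiVersion: "apps/v1", kind: "Deployment", spec: { replicas: 2 } },
  { apiVersion: "v1", kind: "Service" },
];
const output = bumpDeploymentReplicas(input, 3);
```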

We can use it for modifying objects, for patching, and also for validation — quick and very easy validation. So here I define that if I have an Ingress without TLS, I throw an exception, and when I run it, we get this exception. And again, everything has autocomplete and is very easy to consume. This file can run from anywhere: we just need the Deno CLI and this file, and we simply run it. We don't need project configuration, we don't need to install dependencies — everything is in this file. So the tooling of Deno is quite amazing, in my opinion. In all the examples I've shown, the ingredients are not some very powerful, complex framework or tooling. It's simply Deno, some type generation, and a minimal amount of code using building blocks — that's all. It's very simple, and we get so much from it. It's not perfect, because we still need code generation to get the Kubernetes types, the definitions of the Kubernetes API are sometimes not reliable enough, and we need some utilities. But it's very powerful. If we take a step back and think about what's important in a tool like this — or at least what's important for me — it's that I can have composition features, composition that is declarative. I can use parameters. I can create abstractions. I can do overlays: I have a resource and I want to apply some patch to it. We saw an example of this.
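The TLS validation can be sketched as a small guard function; the resource shapes are simplified for illustration:

```typescript
// Simplified stand-in for an Ingress resource.
interface IngressLike {
  kind: string;
  spec?: { tls?: { secretName: string }[] };
}

// Fail fast if any Ingress in the set lacks a TLS configuration.
function validateIngressTls(resources: IngressLike[]): void {
  for (const r of resources) {
    if (r.kind === "Ingress" && !r.spec?.tls?.length) {
      throw new Error("Ingress without TLS is not allowed");
    }
  }
}

// An Ingress with no TLS block trips the validation...
let failed = false;
try {
  validateIngressTls([{ kind: "Ingress", spec: {} }]);
} catch {
  failed = true;
}

// ...while one carrying a TLS secret passes.
validateIngressTls([{ kind: "Ingress", spec: { tls: [{ secretName: "tls-cert" }] } }]);
```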

12. Benefits of Using Deno for Configuration

Short description:

I want correctness, type safety, testability, and easy code sharing. I want a developer-friendly language with minimal boilerplate. Security is important, and Deno compares well to popular tools like Helm and Kustomize. Using a general-purpose programming language for configuration has many benefits, and Deno offers specific advantages such as no dependencies, TypeScript support, and a good ecosystem. The code generation is from an open-source toolkit created at Livecycle.

I want correctness. I want type safety. I want testability. I want code sharing — it should be easy to import other abstractions or configuration, or share my configuration with others. I want it to be developer-friendly: a good, familiar programming language with IDE support. Minimal boilerplate — I don't want a project setup just for defining my file. I want something very, very minimal.

And there's also security. I don't want something that will put my CI/CD process at risk. I think Deno is not bad on these kinds of metrics. I compared it to Helm and Kustomize — of course, it's not an exact comparison, but Helm and Kustomize are very popular tools in the Kubernetes ecosystem — and I think that Deno covers lots of these capabilities in a very good way.

To summarize: first of all, I think there are many benefits to using a general-purpose programming language for configuration. It has been done before — Pulumi does it very well; they are probably the pioneers of this approach. And the AWS CDK, cdk8s, and jkcfg all do something similar, or actually something much more powerful. But I think that Deno has specific benefits: no dependencies, no boilerplate, TypeScript out of the box, easy code reuse, a very good ecosystem, and good, flexible security. And the code generation I imported here comes from a small open-source library that we created at Livecycle. It's a toolkit that can be easily integrated with other tools. It's open source — you are more than welcome to check it out.

13. Simplifying YAML Configuration and Its Challenges

Short description:

This approach is not just for Kubernetes, but can be applied to other complex configuration systems. The goal is to simplify configuration files. A poll showed that developers encounter YAML configuration in many tools. YAML can be intimidating due to the amount of configuration required. Writing YAML for CI/CD pipelines and Kubernetes deployments is tedious and difficult. YAML is notorious for causing headaches if malformatted. The audience is interested in using this approach for non-Kubernetes deployments and the speaker explains how it can be done.

It's a very early stage, obviously. And I think this approach is not necessarily just for Kubernetes — it can work for other complex configuration systems as well. And the point of this talk is that everything here was an experiment, but the end goal is that our configuration files can and should be a lot simpler. Thank you very much.

How many tools, frameworks, and software-as-a-service products in your stack use YAML-based configuration? So this poll showed that most of us, as developers, tackle the challenges of YAML configuration in lots of our tools. I originally thought that most developers would pick fewer than three. I expected that nobody would pick none at all, because I think we see it everywhere, but the fact that most developers use more than three tools with YAML configuration shows that it's something we encounter quite a lot.

Absolutely, absolutely. And I think — well, this is my opinion, but it's also backed up by previous polls I ran myself on Twitter — that the folks that come from a front-end developer background, particularly, don't want to do any configuration at all. What's your opinion on this? Yeah, I think — when I started developing with YAML configuration, even when I worked on front-end and back-end — the thing is that YAML can be a bit intimidating. At first, I just wanted to write a JSON file, because that was familiar to me, although YAML is supposed to be a more human-readable version of the same format. The thing about YAML, I think, is that the amount of configuration is intimidating. And we must use it. When I'm dealing with, for example, a package.json or other JSON configuration files, they are usually handled by some script or CLI or tool. But many of these YAMLs we are writing by hand, and there is so much configuration. It's a bit like Webpack configuration — if you did it manually, it's very tedious, very difficult. And we need to write lots of configuration in YAML for configuring our CI/CD pipelines and our Kubernetes deployments. We basically see it everywhere, but at a much larger scale than we were used to when dealing with configuration while writing applications.

Absolutely. And YAML is a language that is notorious for giving you headaches — if you malformat anything, you're immediately in trouble with everything. So, yeah, that was a very interesting question. It's also very interesting to learn that folks are using it in more than three tools in their stack. And there is a question from the crowd: how difficult would it be to use the approach you're proposing in non-Kubernetes deployments? Yeah, so basically the examples I've shown in this presentation are how we can take these big chunks of Kubernetes YAML, write them in TypeScript, and enjoy all the good tooling we have from TypeScript — composition and all the benefits we get from a general-purpose language.

14. Using OpenAPI and Challenges of Deno

Short description:

These type definitions are generated from the OpenAPI spec of Kubernetes and can be used for other configurations with JSON schema. Deno has challenges in terms of API compatibility with Node and requires learning new standard libraries. Good editor support is crucial. The speaker's background in developer tools and dealing with complex configurations led to co-founding a company in this space.

And the thing is that these type definitions are generated from the OpenAPI spec of Kubernetes, and since OpenAPI describes its formats using JSON Schema, we can use the same approach for any other configuration that has a JSON schema, and most configurations have one. We just need tools that generate the types, which won't be that different from the tool I used in the presentation, and there are other tools that convert a JSON Schema definition to TypeScript. All the other steps are basically the same, so it should be relatively easy to accomplish.
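As a rough sketch of that schema-to-types mapping, here is an illustrative (not real) JSON Schema fragment alongside the TypeScript interface a schema-to-types generator (such as json-schema-to-typescript) could emit for it. The names and the schema itself are made up for the example:

```typescript
// An illustrative JSON Schema fragment (hypothetical, not from a real spec):
const serviceSchema = {
  type: "object",
  required: ["name", "port"],
  properties: {
    name: { type: "string" },
    port: { type: "integer" },
    protocol: { type: "string", enum: ["TCP", "UDP"] },
  },
} as const;

// The interface a schema-to-types generator would produce for that schema:
// required properties are non-optional, enums become union types.
interface ServiceConfig {
  name: string;
  port: number;
  protocol?: "TCP" | "UDP";
}

// With the generated type, the editor flags typos and wrong value types
// that hand-written YAML would only reveal at deploy time.
const svc: ServiceConfig = { name: "web", port: 8080, protocol: "TCP" };
console.log(JSON.stringify(svc));
```

Once the types exist, everything downstream (composition, testing, serialization back to YAML) works exactly as in the Kubernetes examples.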

So basically what you're saying is you just need to adapt the schema to the needs of the tool you're trying to configure, and then it's possible? Yeah. We are converting the schema to TypeScript, and then we are just writing TypeScript and serializing it back to YAML. Yeah.

Okay. And we have another question coming from the folks. What are the challenges of using Deno, in general, from your perspective? So I think one of the challenges is that Deno is different from Node in terms of API. It's not compatible with Node, so we can't use Node modules that are very specific to Node, and we need to learn new standard libraries. Although Deno is, by design, similar to the browser APIs, so if we are front-end developers, we'll be comfortable using Deno's API. And because it's a new tool, we need good editor support, since it has features we're not used to, like importing packages by URL. So these are some challenges, but it does get easier after playing with it a bit, and afterward it's really nice. Absolutely.
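A small sketch of what that browser-API familiarity looks like in practice. The globals below are web standards that behave the same in Deno and the browser (and, since Node 18, largely in Node as well); the URL-import line is shown only as a comment because it is Deno-specific syntax:

```typescript
// Deno favors web-standard APIs over Node-specific ones.
// `URL` and `TextEncoder` are the same globals a front-end developer
// already knows from the browser:
const url = new URL("https://deno.land/std/path/mod.ts");
const enc = new TextEncoder().encode("hello");

// There is no `require` in Deno; third-party code is imported by URL, e.g.:
//   import { serve } from "https://deno.land/std@0.140.0/http/server.ts";
// This is one of the features that needs editor support to work smoothly.
console.log(url.hostname, enc.length);
```

The trade-off the answer describes: Node-specific modules are out, but anyone comfortable with browser APIs feels at home quickly.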

And this is a question I asked Matias as well, because I am very interested in how developers end up working on tooling or on configuration with YAML. What's your background? How did you end up working on, or even founding, a startup? Okay. So it's like two different... I mean... Different questions. How did you get here, and why? The first one. Yeah. So I basically love developer tools. At the previous company I worked for, I was building tools like these internally, and the configuration focus specifically came from lots of challenges dealing with massive, complex configurations. And one of the reasons I co-founded a company in this landscape of developer tools is that I really love this area. So you wanted to make it easier for other developers, right? Yeah. They're two different products, but yeah, definitely. Absolutely.

Okay. Thank you very much for the talk and for everything you have taught us today. Remember that you can continue to ask questions to Ishai on Discord, in the devops-talk-qa channel, and that you can join the special speaker-room chat, where Ishai will be answering even more questions. Thank you so much. Thank you. Bye.
