Deno 2.0


Deno 2.0 is imminent and it's bringing some big changes to the JavaScript runtime. In this talk, we'll introduce the new features including import maps, package.json auto-discovery, and bare specifiers. We'll discuss how these improvements will help address issues like duplicate dependencies and disappearing dependencies. Additionally, we'll delve into the built-in support for deno: specifiers on the registry and its role in providing a recommended path for publishing. Come learn about how these updates will shape the future of the JavaScript ecosystem and improve backwards compatibility with Node applications.


Hello. Good morning, everyone. My talk is not called Deno 2.0. It's called Forced Optimization. It's this trick you do when you apply for conference talks, where you just give some title and some description, and then the night before you make it up as you're going along. Node is quite old at this point, maybe 13, 14 years old. My original goal with Node was to force developers to easily build optimal servers by forcing them to only use async I/O. That's not 100% true; there's synchronous I/O in Node, but to a large extent, at least with network I/O, you're forced to use non-blocking I/O. This is really standard these days; every platform makes use of non-blocking I/O. But in 2008 this was not the case. There were a lot of people writing threaded, blocking-I/O servers. These days, easily building optimal servers needs more than just async I/O. There's a lot that goes into this. You're managing cloud configurations. You're choosing a database. You're thinking about how data might be replicated around the world. You're navigating, especially if you're using Node, a plethora of toolchains and workflows that may or may not work nicely together. You're dealing with supply chain security. Just doing non-blocking I/O isn't getting you all of the way there. With Deno, the goal is really a continuation of this original goal, but a little more expansive and modern. Deno continues the pursuit of the same goals, but thinks about this holistically, as a service that you're building and deploying to a public cloud. In order to achieve this, certain requirements I think are obvious. First and foremost, I'm interested in building systems that are maximally accessible, that have a very large developer base. That is why JavaScript. JavaScript is not necessarily the greatest language on earth, but it is the most accessible language on earth. This system needs to have excellent latency everywhere.
Whether you're accessing the system from Japan or from New York City, you should not be penalized for where you are in the world. The system should certainly be serverless. You want things that scale down to zero and scale up to as large as necessary. A big deal these days is that there is a lot of configuration involved. You're dealing with Terraform, with various config files from every possible library, with lots of boilerplate, lots of frameworks. What are frameworks anyway? Just boilerplate that you lay down in advance in order to get running. We want to reduce that as much as possible in order to move people forward. It should be secure by default. JavaScript is a great language for security because it is actually a sandbox and has the ability to restrict programs from accessing the underlying system. Deno is attempting to meet these requirements and is getting ever closer. Deno 2.0 is coming out this summer, and we're working towards this, always thinking in terms of how we build optimal cloud services, optimal servers. I want to go through a couple of features of Deno 2.0 that are under development, demo them, and give you a sense for how this thing works. First and foremost: when we started Deno a couple of years ago, it was very much on a parallel track to Node, and this has been difficult for people to adopt. A lot of the JavaScript ecosystem depends on npm libraries and on Node APIs, and implementing these built-in modules is relatively important for people to get up and running quickly. So let me just try to demo some of this. Is that visible? So, builtin.js: you can import readFileSync from node:fs, and you can readFileSync, say, some file.
/etc/passwd doesn't actually have any nefarious details in it these days, but it is nevertheless a good example for security. So let's just log out this file here. When we run this with Deno, Deno's big thing is that there is no access to the system by default. So every time you try to access the file system, you're going to get prompted. What it's asking me here is: do you want to actually allow this? You can say no, in which case the program is going to fail, or you can say yes, in which case you get out this buffer. In Node, you're probably used to this without the node: specifier, though Node these days is encouraging people to use the node: scheme in that import specifier. Deno takes a hard-line stance here: we are not going to have these bare fs specifiers and whatnot. So this induces a little bit of incompatibility with Node, but I think for good reason. This is not too big, and you get a nice error message that tells you what to do, so hopefully it's not too confusing for people. So we've got built-in Node modules. I just showed the fs module, but the compatibility layer is quite robust at this point. npm is this add-on to Node that Isaac Schlueter created, and there has always been a tension between Node itself and npm. But it's clear that the entire JavaScript ecosystem is in npm, and it is very useful to be able to reach into that ecosystem and make use of modules. For that reason, Deno has built-in support for npm now. Think of it as the npm client built directly into Deno. It can go and download those packages. It doesn't actually create a local node_modules folder. It's pretty deep compatibility; it even supports things like add-ons with the Node API. And it does apply these security constraints.
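The builtin.js demo can be sketched like this (the file path follows the talk; any readable file works):

```javascript
// builtin.js -- reading a file through Deno's Node compatibility layer.
// Under `deno run builtin.js`, Deno prompts for read access before the
// file system is touched; `deno run --allow-read builtin.js` skips the prompt.
import { readFileSync } from "node:fs";

// /etc/passwd is the file from the talk; it holds no secrets these days,
// but it is a good stand-in for a sensitive path.
const data = readFileSync("/etc/passwd");
console.log(data);
```

The same file runs unchanged under Node, which is the point of the compatibility layer; only the permission prompt is Deno-specific.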
So if that npm package is trying to reach into the network or access the disk, that is going to be blocked by Deno's permission system. The npm ecosystem, when you start looking into it, has a lot of old code out there, a lot of CommonJS, a lot of bizarre semantics around package.json. Deno implements all of that, but it's very much hidden behind this cute npm: specifier. You can't actually use CommonJS as a Deno developer; we're very adamant that everything should be ECMAScript modules now, and you cannot use require. But when you reach into this ecosystem through this npm: specifier portal, internally it is actually loading CommonJS modules and doing pretty crazy stuff to make this possible. As a demo of this, let me code up a little Express server here and run it in Deno. Is that still visible? Yeah, I'll do this. And forgive my vimming here; I'm extremely old and crotchety about this stuff. So: import express from npm:express. This is kind of all you need here. With just a little bit of boilerplate, you call express(), and that gives you an app. Then you can do app.get, and if you get the / route, you get a request and a response, blah, blah, blah. Then you do res.send, I think. Yes. Send hello, maybe with a newline. And I guess you have to listen on port 3000. It's always good to console.log something, otherwise it's a little confusing. So this is going to be running at localhost:3000. Now I've got something that is going to be using some CommonJS under the hood; Express is relatively old. Let's see how this runs in Deno. I'm just going to do deno run express.js. You'll see that it's trying to access the CWD. I think this is something inside of Express, actually, maybe reading argv or something. We have to allow access for that. It's also trying to read environment variables.
I'm not sure why Express is trying to read environment variables, maybe for NODE_ENV or something, but let's allow that. Then it's trying to get access to run the server on port 3000, so we'll allow that. And finally, we've got our little web server running that is using Express under the hood. As you can see, there's no node_modules folder; this all got installed automatically. If you know what you're going to do, you can pass --allow-read to let it read things in the local directory, and --allow-net and --allow-env to bypass those prompts. I'm going to add this --reload flag just so you can see it download these npm modules. Luckily, the Wi-Fi is working great. So that is what an npm install looks like in Deno. It's completely transparent, it happens very fast in the background, it kind of just works, and you get these security permissions for free. So this is making Deno more accessible and getting us closer to this goal of building cloud services that you can just hammer out in a couple of seconds. There's a new feature in Deno that we haven't really talked about yet, and you'll have to keep it secret because it's not yet announced: a feature called Deno KV. This is a pretty advanced key-value database that is built directly into Deno. It allows you to store JavaScript objects. It has normal key-value operations like get, set, list, delete. It has atomic transactions, and I'll talk a little more about that; I'll just say it's relatively advanced in that regard. It's built into the CLI and backed by SQLite. This is already in Deno, behind an unstable flag because it's still being developed, but when Deno 2 is released in a couple of months, this is going to be a core part of the API. The idea is that you can get started building application servers. Maybe not things that really require a lot of relations and a whole lot of data.
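The Express demo assembled above looks roughly like this. One adjustment of my own: the dynamic import and the `typeof Deno` guard keep the sketch parseable outside Deno, where the npm: scheme is not understood.

```javascript
// express.js -- the npm: specifier tells Deno to fetch express from the
// npm registry; no node_modules folder is created. Run with:
//   deno run --allow-read --allow-env --allow-net --reload express.js

// Wires the routes onto an express app; factored out so the routing
// logic is independent of how express itself is loaded.
function makeApp(express) {
  const app = express();
  app.get("/", (req, res) => {
    res.send("hello\n");
  });
  return app;
}

if (typeof Deno !== "undefined") {
  import("npm:express").then(({ default: express }) => {
    makeApp(express).listen(3000);
    console.log("Listening on http://localhost:3000");
  });
}
```

Each prompt the talk walks through (cwd, env, net) corresponds to one of the --allow-* flags in the run command above.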
But I think a lot of applications actually fit pretty well into a KV model like this, especially one that is consistent and has transactions. So let me demo that a little bit, and let me use VS Code for this one just so I have some completion. And oops, muscle memory is just Vim, always. So I've auto-generated some boilerplate here. I'm using VS Code for type completion and stuff with this KV API. The way this works is you call Deno.openKv, and you await that. It's red here because this is an unstable API, so I need to pass --unstable; you won't have to do this once this is stable. What this gives you is a reference to the database. Let's just console.log this. By the way, is this visible? You guys should let me know if this is completely unintelligible. Okay, and terminal, okay. So: deno run main.ts. Again, this is behind an unstable flag, so if you're running it without anything, it's going to crash; you have to provide --unstable. But yeah, now it's opened the database. So what? Let's build up this example a little bit. kv.set, and the key: this is a key-value store, so there's this hierarchical key space, and the key is actually a JavaScript array. Instead of serializing out a string with slashes in between, which gets relatively messy, we just use an array here. Let's pretend that we're creating a users table, and Alice, I think, is my example. So we're going to set ["users", "alice"], and the value is just going to be something, some JavaScript object, right? If we await that, then we will have set this thing. And of course you can kv.get this; we'll just grab that key, and I'll console.log this x here, and delete this one. So right, now we've pulled out this Alice object. You'll see there's this versionstamp there. That's a little interesting.
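The set/get sequence from the demo, as a sketch (it assumes Deno's unstable KV API, run with `deno run --unstable main.js`; the value stored for Alice is made up):

```javascript
// main.js -- keys are JavaScript arrays forming a hierarchical key space,
// so ["users", "alice"] plays the role of a users/alice path.
const aliceKey = ["users", "alice"];

async function demo() {
  const kv = await Deno.openKv(); // default database, backed by SQLite
  await kv.set(aliceKey, { name: "Alice" }); // store a plain JS object
  const entry = await kv.get(aliceKey);
  // entry carries { key, value, versionstamp }
  console.log(entry.value, entry.versionstamp);
}

// Only run under Deno; no permission flags are required for KV itself.
if (typeof Deno !== "undefined") demo();
```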
We'll get into that. But another thing to note here is that no permissions are necessary. There is a default database here, and you can configure this to store it in a file. This is backed by SQLite, so these are actually stored in a SQLite database. So let me expand this example a little bit into a server that counts how many visits, how many requests, have gone to the server. We're just updating a counter. The purpose of this is to demonstrate the transactional API. So let me modify my example here. Rather than setting a users table, first of all, let me set up a server. So: import serve from the std http/server.ts module. This is the built-in Deno server API, and it takes a request and returns a response: return new Response. Servers are always much harder when you're standing in front of a bunch of people like this. Response. There we go. Okay, so this sets up a little web server. Let's just comment out these bits here and run this. Now that I'm running it, it's asking for net access. So we don't have to do that each time, let's just do --allow-net. Okay, now it's listening on port 8000, and we can curl localhost:8000 and get hello. Great. Maybe make it slightly smaller. So what we want to do is update this counter now. Let's define this key to be ["counter"]; this is where we're going to store this integer. And what we'll do is say kv.atomic().mutate(), and here we'll say the type is "sum", the key, and then the value, it's going to be a little messy here, but just bear with me, is a new Deno.KvU64. Let's just ignore what that means for the time being. We'll commit that transaction and await it. It's complaining because I don't have an async here. So now we're incrementing this counter value. And we need to grab that counter value out of here.
So we'll get the key, and let's await that; it's all asynchronous. And I'm just going to console.log that for the time being. Makes sense? So let's try this one more time. Now when I curl this, this is the value that I'm getting back from this get request, and you can see that the value here is a KvU64 of 1. If I hit it again, you'll see that it increments. So let's grab out that value, call it counter, and, oops, not console.log it, let's actually return that in a response here. Okay, so when I'm running it now, I should get 6, 7, 8, 9. And if I kill this program and restart it, that counter is persisted. Cool. So we've got a little stateful application where I didn't need to install any dependencies, and I don't need any sort of database. Let me segue a little bit here. This is all local development, and I started this by saying that what we're trying to do is build a platform for real applications: not just a single instance of this, but instances that run worldwide. Deno Deploy is kind of the other half of Deno. Deno is open source, MIT licensed, very much like Node, a thing that runs in your terminal. Deno Deploy uses a lot of that technology, runs in the cloud, in 35 data centers worldwide, and is a serverless edge functions platform. It's powering, for example, Netlify Edge Functions and Supabase Edge Functions. So now here's the real part of this. Let's take this example and deploy it worldwide to all of these data centers, so that we are incrementing this counter statefully, worldwide. The interesting thing about this API is that it maps not just to a SQLite database that runs locally, but to a geo-replicated, strongly consistent database in Deno Deploy. This is powered by FoundationDB, which, look it up if you don't know it, is a scalable database powering things like iCloud and Snowflake.
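Put together, the counter server from the demo, which is also the code that gets deployed next, looks roughly like this. It's a sketch assuming the unstable KV API and the std library serve of the time; the handler is factored into a `bump` helper for clarity.

```javascript
// counter.js -- run with: deno run --unstable --allow-net counter.js
const COUNTER_KEY = ["counter"];

// Increments the counter atomically and returns the new value.
// The "sum" mutation adds a KvU64 to whatever is stored under the key.
async function bump(kv) {
  await kv.atomic()
    .mutate({ type: "sum", key: COUNTER_KEY, value: new Deno.KvU64(1n) })
    .commit();
  const entry = await kv.get(COUNTER_KEY);
  return entry.value.value; // unwrap the KvU64 into a bigint
}

if (typeof Deno !== "undefined") {
  const main = async () => {
    const { serve } = await import("https://deno.land/std/http/server.ts");
    const kv = await Deno.openKv();
    // Listens on port 8000 by default; the count survives restarts.
    serve(async () => new Response(`${await bump(kv)}\n`));
  };
  main();
}
```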
This requires zero configuration, zero provisioning, zero orchestration, and it's really optimized for fast local reads, in something like 20 milliseconds, because that is often what you're doing. If you're building an e-commerce site, you are listing a lot of products from the database, and that needs to be really fast. If you're running at the edge, running your application server everywhere all at once, you need to be able to read that data very, very fast. By the way, I've got some instructions here if you want to try it out, but let me just wave my hands and say this is very much under development still, so you'll have to beg me for access if you want that. But let me demo it here. So this is Deno Deploy. I won't go through this, but I'll create a new project, and I'm going to use the little built-in editor that we have here. So I've created a new domain name, hopefully this is visible: funny-swan-75. Let me just rename this to node-congress; hopefully that's not taken. Okay, so now I've got a project returning a Hello World server. Let me just copy this code, delete what's there, paste it in, and save and deploy. Hopefully I'll get a zero returned here for a zero counter on the left side. Oh, five already. Jeez, you guys are fast at this. So every time I'm reloading this, I'm checking the value of that counter. And yeah, this page should load fast in Tokyo or wherever. I guess let me go back to my slides here. The idea is that you can just open an editor, hammer out a couple of lines of code, and have a real application server that doesn't just run locally but is prepared to scale. I think this is getting you very close to building real applications in just a couple lines of code, and really fulfilling this idea of forcing optimization on users. There's more under development here.
There's cache, persistent queues, background workers, object store, and of course ever better backwards compatibility. Like I said, Deno 2.0 is coming out this summer, and this KV stuff will be released in Deno Deploy soon. My goal is still the same. With Node, the goal was to restrict programs to async I/O primitives to help developers build optimal local servers. Deno is just an expanded version of that: restricting programs to distributed cloud primitives, the right primitives, to help developers build optimal global services. And let me just leave you with this analogy. I think there is a new computing abstraction emerging here: Bash is to ELF as JavaScript is to Wasm. There is a post-UNIX future emerging, and people like James are working on this in WinterCG. I'll leave it at that. So thank you for your time. Thank you, Ryan. You can take a seat. Please remember to ask your questions on Slido; we have two right now. So the first question, and thank you for the great talk, by the way, it was very interesting: would Deno try to fix JavaScript floating-point arithmetic in the future? No, I don't think so. I mean, Deno is trying to be browser compliant, and that is really an issue for TC39 and the definition of the language. Deno is not going to step outside of that. Okay, cool. The next question: is there a way to allow access only to specific libs? No. I think what this question is referring to is allowing, for example, network access to one dependency but not to other dependencies. This is not possible as far as we can see. We would love this feature. There's work happening in TC39, and I think there are going to be some talks about this later, but this is very much research. At the moment, the permission system is process-based. Cool. And will people be able to host KV on existing cloud providers, for trust and GDPR reasons? Yes.
So you can take Deno and deploy it wherever you want. You can put it on AWS Lambda; you'll have a local SQLite version of this. You can put it on Cloud Run. You can put it wherever you want; that's all accessible. The FoundationDB backend is specific to Deno Deploy. What do you envision KV being used for the most? Yeah, I think traditionally KV data stores are pretty limited in their feature set, and so people tend to reach for relational databases very often. With these transactions and this consistency model, we still believe there's a world where relational databases are necessary in applications, obviously, but I think this expands the scope quite a lot. So, it's hard to put a number on it, but some number of simple-ish to medium-complexity applications can fit very well into this KV abstraction. And what is ELF? ELF is the executable binary format in Linux. The analogy I was drawing, and I should have explained it a little more, is this: you're using bash, or whatever, zsh, in Linux, and that's a dynamic programming language. It is a dynamic programming language, a really crappy one, but from it you can launch into binaries written in C, C++, or Rust. Those are ELF executables. The analogy is that with JavaScript you have this shell, this dynamic language, from which you can launch into Wasm executables written in C++, Rust, et cetera, for more complicated behavior. Thank you. We just have one minute left. And why are you always using the root user in the terminal? Sorry? Yeah, that's one of the questions: are you always using the root user in the terminal? Oh, no, I'm using my own user. Was I logged in as root? No. Cool. In your first KV example, would it be possible to list all the users? Yeah, and you should look at the documentation. There's a cursor API, so you can do pagination through the...
You can list, say, the users table and paginate through that pretty easily. Cool. Is there a way to self-host with the same functionality as provided by Deno Deploy? To some extent. This FoundationDB backend, this distributed KV store, is proprietary. Okay. Thank you. We ran out of time, but thank you very much. We practiced our clap, so let's give it a 10. Thank you.
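The list-and-paginate answer can be sketched like this (again assuming the unstable Deno KV API; the seeded users are made up):

```javascript
// list.js -- run with: deno run --unstable list.js
// Collects every entry under the ["users", ...] prefix. kv.list returns
// an async iterator, and a cursor option allows resuming pagination.
async function listUsers(kv) {
  const names = [];
  for await (const entry of kv.list({ prefix: ["users"] })) {
    names.push(entry.key[1]); // the second key part is the user name
  }
  return names;
}

if (typeof Deno !== "undefined") {
  const main = async () => {
    const kv = await Deno.openKv();
    await kv.set(["users", "alice"], { name: "Alice" });
    await kv.set(["users", "bob"], { name: "Bob" });
    console.log(await listUsers(kv));
  };
  main();
}
```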
36 min
14 Apr, 2023
