Transcription
So hello, everybody. I'd like to start by thanking the organization for inviting me here and giving me the opportunity to present some of the tech that we build, especially since today is a special day for me: it's my birthday. Thanks a lot for coming here to celebrate with me, I really appreciate it. I am Alessandro Pignotti, founder and CTO of Leaning Technologies. I was born and raised in Rome, I moved to Pisa for my studies, and then I moved here in 2014, and I've been a proud Amsterdammer since. If you wish, you can follow me on Twitter, but I would recommend not holding your breath while you wait for me to post something. So what do we do? We are a small company that specializes in the niche of compile-to-JavaScript and compile-to-WebAssembly solutions. In this small niche, I think we do some pretty cool things, and over the years we have released three different products. The first of them was Cheerp, which is a C++ to JavaScript and WebAssembly compiler. The second was CheerpJ, which is not just a compiler for Java, really; it's more like a full Java virtual machine that can run in the browser, and we can use it to run even fully graphical Java applications in the browser right now. And then we decided to move one step further, and we made CheerpX. CheerpX is not just a product right now; we consider it to be more of a technology, a very generic solution for doing a bunch of different things, and we will talk about them. As a first experiment to see how we can eventually turn this into a product that we can sell, we built WebVM, which is just one of the possible things we can build with this, and we will discuss that. Each one of these products would probably require its own couple-of-hours talk, and we have 20 minutes, so we need to cut to the meat. So CheerpX is a technology to securely run binary code in the browser, and there are three main ideas that we followed when making this.
We wanted to build something which is generic, robust, and scalable. You might have your own intuition about what these terms mean, but I will go in depth later about what exactly I mean by these words. In practical terms, CheerpX is a C++ application that we wrote from scratch, ourselves, and we compiled it to JavaScript and WebAssembly using Cheerp, our other product, so that it can run in the browser. Now, I know that it's nice to talk about things, but it's also nice to see how things actually work in practice. Everybody recommended that I not do a live demo, but, you know, who am I to ever follow good advice? Let's try this. What I'd like to do is prove to you that we can run a full virtualized system in the browser. What I have here right now is Bash, the shell, from a Debian distribution, running in the browser. To prove it to you, I can, for example, list the file system, and there is the bunch of things you would expect from an actual running system. But I'd like to prove that I can run a binary that has never been seen before, so I guess I'll just write one. And I don't know about you guys, but when I want to write a binary from scratch, the first thing I do is open my text editor. Let's do that. It's incredibly difficult to spell correctly on stage. So we have Vim running, and I can now type a very small test case. What's going on right now is that this whole thing is running from the same binaries that you run on your own computer. I plan to do a very simple hello world. And to do something which is not completely trivial, instead of returning the usual zero code that tells the system that the executable has completed successfully, I want to return an error code, to see if the shell can deal with that. So let's try this out. Cool. Now I want to compile this. To compile C++ code, C code actually, we of course use GCC.
And we can also enable some optimization, because you never know. And this looks correct. Okay. What's going on in the background here? GCC is a fairly big executable, and currently the system is loading the required data from the network. This data comes in blocks, because this is actually a full ext2 implementation that runs from a disk device which is backed by a CDN, Cloudflare. And this execution has actually completed. We can test if this can even run. It does what we expect it to do. Let's check the error code. It is what we expect. So this is fairly interesting, I think. But maybe, as I said myself before, we can compile C++ to WebAssembly, so you might be thinking that there is some trick here, that maybe this is a special version of GCC that can magically generate code that runs in the browser. I'd like to prove to you that this is not the case. To do that, we can actually ask the system: what is the type of the file that we just ran? The system says it's ELF, which is the binary executable format on Linux, 32-bit, Intel x86, which is what we expect. And as the last proof, we can even show the code itself. We can dump this program that we built and take a look, and this is binary code, which is what you would expect. Okay, but there are other ways of running C++ code in the browser, so why would we go through all this complexity? The issue is that it's complicated, really. It is true that you can compile C++ for the browser; it is not a given that you can compile any application without manual intervention. But even putting this aside, the point is that this is an extremely generic solution. So we can try something completely different now. Let's try Python. What I've done now is fire up the Python interpreter, and I can type commands directly into the shell one more time. So I'll do the same thing I've done before. Hello. This is also pretty easy.
I can return another code, which will also be displayed on screen. Okay, now Python is nice, but it's also a relatively simple executable. It's just an interpreter overall; it does not do much more than that. So let's try something funnier. Let's see if we can run JavaScript. I will open Vim one more time and write my simple test case, which is misspelled again. And, not to do something completely obvious, I'm actually going to enable the --print-code option. What this option does is print out all the native code that Node.js, and actually its internal engine, which is V8, the same engine that is used in Chrome, is generating just to run this small example. Keep in mind that what you're seeing is not something that happens specifically because it's running in this virtualized environment. This happens every single time you start up Node.js. And what I find interesting is that this shows you that CheerpX is able to deal with fairly sophisticated executables, because this code is being generated at runtime. It was never, ever seen before by the engine; it's just been generated, written to memory, and eventually executed. So how do we build something like this? Oh, by the way, this is a live website. If you want to play with this, you can go and do so, and if you find bugs, you can report them on GitHub, and members of my team will take care of them. So let's try to define the terminology we've been using. By building something generic, I mean that we want something that does not require any preprocessing or any special metadata. We should not need a special compiler, special build options, or special libraries. None of this. We take the binaries as they come out of the Debian packages, and we use them.
Robust means pretty much being able to run Node.js, since we need to be able to deal with situations where code is not only generated at runtime, but also changed at runtime, maybe modified in place, or maybe just deleted and moved somewhere else. This also happens when running code with V8, because the code itself is garbage collected and moved around memory over time. And then we wanted to build something scalable. What this means is that, although I showed you guys just a handful of lines of code, this thing can work on much bigger code bases. We wanted to build something that can work with programs in the wild, which means we want to support multiprocessing, multithreading, thousands of files, and all the sorts of features that are effectively used by real programs, not just toys. To give you an idea of what you've seen so far: CheerpX is this environment in which all the execution happens, and it's all client-side, all in the browser. There is no server-side component doing the execution for us. This is not a trick. The first thing that starts is Bash, so Bash is the parent process, and then Bash itself can spawn child processes; I showed you GCC and Python. All of these are completely independent address spaces, okay? They have their own code and they have their own data. But the issue is that, from the point of view of the system, we don't really know what is code and what is data. It's just bytes; it's all data in memory. To solve this problem, we have a two-tiered execution engine. The first tier is an interpreter, and the second tier is an actual JIT engine that can generate highly optimized code. The interpreter is able to run code with pretty much no information. It will start from the first instruction, proceed to the next, and so on and so forth. And as it does this, it will also build metadata internally about how the code is structured.
With that, it's now possible to fire up the JIT engine and generate optimized WebAssembly code. Eventually, all these applications will need to reach the browser somehow, because we need to display text on screen, for example. And this happens, as you would expect on a native system, via syscalls. The syscalls, we implemented ourselves. So what you saw so far is not a Linux kernel. It is a Linux-compatible ABI, so it's able to run any Linux executable, but it's not Linux itself. A system call is the place where we stop and handle the call manually, so that the application can interact with the browser. Now you might wonder, why don't we just run everything in the JIT, since it's most likely more efficient? The issue is that this is not necessarily the case. The way I see it, JIT compilation is pretty much an investment, and you want to make sure that you recover your investment. It's an investment of execution time, really: you pay some execution time now in the hope that in the future the same code will run faster, so that overall you run faster as well. Just JITting everything would be inefficient. So, thanks to the interpreter, we can build this metadata; we build what we call the control flow graph of the program. Then, when blocks of code become sufficiently hot, when they run a sufficient number of times, we start generating JIT code, and only for the parts that are executed a sufficiently high number of times. In this way, we can achieve good runtime performance without exceeding the resources of the browser in terms of compiled code. So what can we do with this thing? In terms of features, what we have right now is fairly complete support for the core x86 instruction set. We do support x87, but it's not as fast. MMX and SSE are both supported, but they are currently scalarized.
So we expand them to the equivalent scalar operations, which is of course slower. Our plan for the future is, of course, to use WebAssembly's SIMD extension to shrink this gap. At the level of the OS, we have support for most of the file system and process handling system calls. The data comes from a disk backend, which is an ext2 implementation. We chose ext2 because it will be possible for us to extend it in the future to support further extensions and reach the ext3 and ext4 level without having to rewrite everything from scratch. In terms of persistence, this is pretty interesting: if you change a file in this VM, it will stay there. The persistence is local. It's done using IndexedDB, which is great because it's privacy preserving. We are not going to look at your data; your stuff is yours, and it's going to be stored on your machine. And with this limited set of features, we can already do a bunch of interesting things. In the context of education, for example, it makes it possible for schools to set up a zero-maintenance environment that students can fire up without ever having to worry about "will this thing run on my computer?" or "maybe today the setup executable is not working correctly". For developers like us, it might make it possible to have not just web-based IDEs, but full web-based development environments where you can actually build and run the full pipeline on the client. This would be useful for documentation, to have live documentation for any programming language, not just for the languages that can already run in the browser. And it may also open the web to a new category of applications, in particular heavy-duty engineering applications like computer-aided design programs. Usually, these sorts of applications do not actually have the full source code available, not even to the developer, because they use binary components which are sold by other companies.
Thanks to this system, it doesn't matter that you don't have the code. You can run the binaries as they are. Fundamentally, you don't care. And this is what we have now. What about the future? Well, of course, one thing on my mind is gaming. For that, we first need to have some sort of graphical support. We're getting there. The plan is to actually make the full X.Org server run in the browser. Believe it or not, this can work. I made a prototype some months ago, and it's totally possible. Then we need to figure out a way to map OpenGL directly to WebGL. What's funny is that with this setup, it's quite possible that the overhead may not even be that high: virtualization implies an overhead in terms of CPU execution, of course, but since we will map OpenGL directly to WebGL, the overhead there is probably going to be much less. And with networking support, which is a whole complicated topic, we may be able to have full development environments where you can fire up a little web application, including server-side code, from your browser tab, which is then reachable from all over the world by other people with their own browsers. My own personal goal is to reach the point where we can run a fully virtualized desktop environment in a tab, so that you can access the website, log in, and you have your data; you close the tab, you're done, and you can continue your work somewhere else on another system. And this is it, really. Feel free to get in touch. We are actually hiring; we are looking for an intern right now. So if you are interested yourself, or you know somebody that could be interested in working with our tech, there is a spot. Thank you. Thank you very much, Alessandro. Let's check if we have some questions on Slido. Could I please see this on the screen? Yeah, I think we have some of them. At least this is what I see. Okay, no problem, I will read them from my mobile phone.
Yeah, so basically, the first question is very similar to my very first one: is it, or will it be, possible to run Windows applications in the browser from their .exe file? For example, a browser, like I mentioned. So, fundamentally, the issue is that we implement system calls, and it is in theory possible to implement the Windows system calls and run a full Windows stack. The tricky part with that is licensing: we don't have a license from Microsoft to use all the DLLs from Windows. But what you can do is run Wine, the Windows compatibility layer for Linux, and run the Windows application on top of that. Fair enough. The next one: do you have auto-completion, or are there some limitations? I mean, in the command line. Well, it's an actual Bash. It is exactly what you would get on your own system, so if auto-completion is properly configured, you will get that as well. Next question: is it all open source? Of course it's not. The thing is that this might change in time. As I was saying, currently we are still trying to figure out what the productization of this technology will be, and right now it seems to us that we keep more paths open by keeping it proprietary. But this might change in the future; we honestly don't know. For now, only we get to look at the code, but we would like you guys to try the thing if you like. Okey-doke. You said WebVM has a virtual file system backed by Cloudflare. Does it lazy load files, or preload the whole VM at once? Does it perform well after all? So, the backend is served by a CDN, by Cloudflare, but it's not truly based on files; it's based on blocks. It's a block device, so it's a very traditional block-based file system, and each block is downloaded on demand, only when required. This means we can actually support pretty large disk images. The image you've seen so far is 2 gigs, but this is mostly due to technical limitations.
And we plan to make it possible to go much, much higher in terms of size. Cool, cool, cool. So the next question is: is there an offline mode? You demonstrated this website, like a playground; is it somehow possible to run this cool stuff when you're disconnected? So, if you're asking whether I, on my own computer, can have an offline setup: yes, of course. But it's not yet available to the public. This is again part of the fact that we still need to understand exactly what we're going to ship, what the APIs will be, and these sorts of things. So right now there is no truly offline mode, but we are working on it. Cool. Let's take this one. Do you also have access to the system somehow? For example, can you run the top command? Well, to be fair, you cannot yet run top. Ideally, you will be able to run top, but you will get information only about the virtualized system. Of course, you can never access the reality of the underlying system, right? This is secure; this is not a security hole in your system. Nice. And from theory to practice, the question is: who is using WebVM right now, and for what? In terms of customers, we don't have one yet, but we have partners that we are working with to try to build the first real product for this. The main interest we have is from the education sector and the web-based IDE sector; these are the people that seem to be the most interested right now. But my personal goal is to have some gaming people on board. Sounds good. Let's take this one, and I believe this will be the last question, and then we have the lunch break. What kind of browsers and devices are required? Any. Chrome, Firefox, Safari will all work. So that's easy. Cool, that's the best answer. Thank you very much, folks, for your questions. I'll leave some coins on the stage, so if you asked questions, please feel free to come and grab one. And now we have a one-hour break for lunch.
Some calories needed. Thank you. Thanks. We'll be back.