Building a Voice-Enabled AI Assistant With JavaScript


In this talk, we'll build our own Jarvis using Web APIs and LangChain. There will be live coding.

21 min
05 Jun, 2023


Video Summary and Transcription

This talk discusses building a voice-activated AI assistant using web APIs and JavaScript. It covers the Web Speech API for speech recognition and the SpeechSynthesis API for text-to-speech, and demonstrates how to communicate with the OpenAI API and handle its response, how to start speech recognition from a button click, and how to address the user via the system prompt. The speaker concludes by mentioning the possibility of turning the project into a product and using Tauri for native desktop-like experiences.


1. Introduction to DevRel and AI

Short description:

Hi, I'm Tejas Kumar, and I run a small but effective developer relations consultancy. We help other developer-oriented companies have great relationships with developers through strategic discussions, mentorship, and hands-on execution. Today, we're going to build a voice-activated AI assistant using web APIs and JavaScript. The purpose is to have fun while learning and celebrating JavaScript and AI.

Hi, I'm Tejas Kumar, and I run a small but effective developer relations consultancy. What that means is we help other developer-oriented companies have great relationships with developers. We do this through high-level strategic discussions, mentorship, and hiring, or through low-level, hands-on execution: we literally sometimes write the docs, give the talks, and so on.

In that spirit, it's important for us to stay in the loop and be relevant and relatable to developers, in order to have great developer relationships. And sometimes, to do that, you just have to build stuff. You see, a lot of conferences these days are a bunch of DevRel people trying to sell you stuff, and we don't like that. It's DevRel, not DevSell.

And in that spirit, we're not going to sell you anything here; we're just going to hack together. The purpose is to have some fun, to learn a bit, and so on. What we're gonna do in our time together is build a voice-activated AI assistant, like Jarvis from Iron Man, using only web APIs, just JavaScript. We'll use Vite for a dev server, but that's it. We're gonna be using some non-standard APIs that do require prefixes and such, but if you really wanted to, you could use this in production. You could supply your own grammars and so on. The point today, though, is not that; it's to have fun while learning a bit and also vibing a little bit, all in the spirit of celebrating JavaScript and AI.

2. Building the AI Assistant Plan

Short description:

We're going to use the Web Speech API for speech-to-text and the SpeechSynthesis API for text-to-speech. We'll give the text to OpenAI's GPT-3.5 Turbo model and then speak the response. It's a straightforward process using browser APIs that have been around for a while.

So with that, let's get into it by drawing a plan in tldraw. We're gonna go to tldraw, and what do we want to do? Well, first we want speech to text. This is using the Web Speech API. From there, we want to take this text and give it to OpenAI, the GPT-3.5 Turbo model. From there, we want to speak. So, text to speech from OpenAI. This is the plan, and we want to do it with browser APIs. We want to reopen the microphone after GPT-3.5 talks and have it come back here. Let's draw some lines. So it's really just speech to text, an AJAX request, and text to speech. Not necessarily hard. There are some functions here. This is the SpeechRecognition API we're going to use; that's actually a thing introduced in 2013, so it's been around for a while. And this is the SpeechSynthesis API. Both of these exist in JavaScript in your browser runtime, ready to use. What we're going to do is use them to fulfill this diagram.
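As a sketch of that diagram in code (recognition, askOpenAI, and speakStringOfText are the pieces we build in the following sections; the names are just what we'll end up calling them):

```js
// An idealized sketch of the plan. The data flow is:
// speech -> text -> OpenAI -> speech -> listen again.
recognition.addEventListener('result', async (event) => {
  const text = event.results[0][0].transcript; // speech to text
  const answer = await askOpenAI(text);        // AJAX request to GPT-3.5
  await speakStringOfText(answer);             // text to speech
  recognition.start();                         // reopen the microphone
});
```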

3. Building the Speech Recognition Functionality

Short description:

To build ourselves an assistant, we'll use Chrome's speech recognizer. We'll create a new speech recognition object and add an event listener for the result event. When we get a result, we'll extract the transcript from the first attempt. This API may provide multiple guesses, but we'll stick with the first one.

Now, to do that, we're going to use Chrome, because this really works in Chrome, but there are ways you can get it to work in other browsers. We're going to open up VS Code and get started. We have a blank page with a button that says hi. If we look at the code, index.html is HTML: some head, removing the default margin. There's actually a little thing here, a little black box, just so I know what my face is covering; you can see it if I bring this down a little bit. That's where my face goes. Anyway. And then we have the button, which does literally nothing, in index.tsx.

Let's start by recognizing my speech. Chrome has a speech recognizer built in. It's had it since 2013, and it just works. Other browsers have different implementations and so on. But the goal is to build ourselves an assistant. We're not building a product to sell; we're just learning and having fun building ourselves an assistant. So in that spirit, what we'll do is say const recognition is a new SpeechRecognition. And this will predictably fail, because you need a vendor prefix in Chrome. But Chrome doesn't use WebKit; Safari uses WebKit. So what's the prefix to use this in Chrome? It's webkit. I don't know why, but there. And this now should give us no error. So it is there. So what do we want to do? We need an event listener. We'll add an event listener to this, listening on the result event. And when we get a result, we'll pull const text out of the event's results: the first result, and the first attempt of that result. This API, if we let it, will make many guesses about what I said, and I feel like it's good enough that we just run with the first one. We'll iterate if we need to, but we take the first result, then the first attempt of that result, and its transcript.
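In code, what we just typed looks roughly like this (a sketch; webkitSpeechRecognition is Chrome's prefixed constructor):

```js
// Chrome ships its recognizer behind a WebKit prefix, oddly enough.
const recognition = new webkitSpeechRecognition();

recognition.addEventListener('result', (event) => {
  // The recognizer can return several alternatives per result;
  // the first attempt of the first result is good enough here.
  const text = event.results[0][0].transcript;
  console.log(`You said: ${text}`);
});
```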

4. Communicating with Open AI API

Short description:

We console.log the recognized text, and we have speech to text. Now let's talk to OpenAI and see what it says. We'll communicate with the OpenAI API by following the API documentation and starting from a curl request, then convert the curl command to a JavaScript fetch request using the GitHub Copilot Labs extension. It's like Copilot on steroids and allows code conversions. It works pretty well.

And let's console.log that and say: you said, plus the text. We need to also start recognizing: recognition.start. Hello. My name is Tejas and I run a DevRel agency. Oh, fantastic: hello, my name is Tejas and I run Deverell. Close enough. It's working. We have speech to text.

What do we do now? Let's talk to OpenAI: give it the text and then see what it says. To do that, we're going to communicate with the OpenAI API. So we'll open up the API documentation and grab a curl request right here. This one is an image edit; I want a chat completion.

So I'm going to come here, copy this curl snippet, open Visual Studio Code, and create a function, const askOpenAI, and this is probably an async function. And what we have is a curl command; I want to turn this into a fetch. There's a powerful extension called GitHub Copilot Labs, and this is new. It's like Copilot but on steroids, and it allows code conversions and things. It doesn't work very reliably, but I figured we could try. So let's go here: Copilot Labs. I'm going to open that, highlight this text and, using the custom brush, say: convert this curl command to a JavaScript fetch request. And it's going to spin a bit. Okay, wow. Not bad.
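The converted request ends up looking something like this sketch (OPENAI_API_KEY is a placeholder for your own key, and the body gets fleshed out in the next section):

```js
// A sketch of the curl-to-fetch conversion for the chat completions endpoint.
async function askOpenAI(text) {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${OPENAI_API_KEY}`, // placeholder: your own key
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: text }],
    }),
  });
  return response.json();
}
```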

5. Authorization, Body, and Logging

Short description:

We need an authorization header with a bearer token, and a request body. The body should be a JSON string with a model and messages. We'll use the gpt-3.5-turbo-0301 model and start with a system prompt introducing Jarvis, Tony Stark's personal AI assistant, instructed to keep responses concise. We'll keep a log of everything said and map it in as user content.

We need an authorization header, which contains a bearer token. And of course we also need a body. What's the matter here? Right, we need another curly brace. We need a request body; that's very important. So we'll add a comma and body. And what does this thing expect? A JSON string, first of all. And it needs a model and messages. So we'll do that; we'll just give it this object here.

I'm going to use gpt-3.5-turbo-0301, just because it's often under less load. And we'll start with a system prompt. So: system, and we'll tell it who it is; we'll give it an identity statement. Okay: you are Jarvis, Tony Stark's personal AI assistant. Tony Stark, of course, is also Iron Man. Keep your responses as terse and concise as possible. So that's an instruction.

Now, everything that's said, we need to keep in a log, because, you know, ChatGPT is conversational. So every time we recognize speech, we need to append it to a list. Okay, so let's do that. We'll say const thingsSaid is an empty array. And not only are we going to console.log the text; instead, we'll thingsSaid.push the text, which is a string. Okay, perfect. Now we'll just map: thingsSaid.map, role is user, content is the text. This is perfect.
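Put together, the log and the request body look something like this sketch (note the spread of the mapped array into messages, which matters in a moment):

```js
// Everything the user has said so far; ChatGPT is conversational,
// so we resend the whole log on every request.
const thingsSaid = [];

// In the result handler: thingsSaid.push(text);

const body = JSON.stringify({
  model: 'gpt-3.5-turbo-0301', // often under less load
  messages: [
    {
      role: 'system',
      content:
        "You are Jarvis, Tony Stark's personal AI assistant. " +
        'Keep your responses as terse and concise as possible.',
    },
    // Map each thing said in as a user message.
    ...thingsSaid.map((content) => ({ role: 'user', content })),
  ],
});
```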

6. Asking Open AI and Handling Response

Short description:

And so now we're asking OpenAI: we're pushing what was said into the list, then we console.log the response to see what we get. It's a 401, because there's no bearer token yet. Hello, I need a suit immediately. Then a 400: probably talking to the wrong model? No; the error is an invalid request error, because role, user, content is not of type object, so we spread the mapped array into the request. After that, we get back undefined, but the request passes; the answer is at choices[0].message.content.

And so now we're asking OpenAI. We're pushing it there. And then another one: const response is await askOpenAI. Oh, this is not in an async function; let's fix that. And now that looks good. So we'll just console.log the response and see what we get.

Okay, let's take a look. So far, so good. Wait. Hello, I need a suit immediately. Okay, well, nothing. It's a 401, and that's because I don't have a bearer token. I'm about to show you my API key; please don't copy it. Be a nice person, okay? It can be expensive if you abuse it. Anyway, got it in. You saw nothing, you saw nothing, you saw nothing.

Hello, I need a brand new suit of armor immediately. How do I do it? 400. Probably because I'm talking to the wrong model. Let's take a look here. What's the problem? Error, invalid request error: role, user, content is not of type object. Right, I need to spread that. Thank you. Hello, I need a suit of armor immediately. Okay, we got back undefined, but the request passed. What we want is choices[0].message.content, and that's what we want to console.log as the response.

7. Speaking the Answer Using Speech Synthesis API

Short description:

First, serialize the response to JSON. Get the answer out of it and speak it using the SpeechSynthesis API: write a speakStringOfText function and set the voice to the desired one.

First of all, let's return this, serialized to JSON. And now we need response.choices[0].message.content. All right, this will be our answer, and then we'll just console.log this answer, just to be sure. Right: answer.
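At the call site, that digging looks something like this sketch:

```js
// askOpenAI returns the parsed JSON body; the answer lives in the
// first choice.
const response = await askOpenAI(text);
const answer = response.choices[0].message.content;
console.log(answer); // just to be sure
```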

Okay, let's try this again. I need a suit of armor around the world. What should I call it? Avengers Initiative. Oooh, it's happening. So we have speech to text, and we are talking to OpenAI. Now we need text to speech, okay? How can we do this? We can do it using the SpeechSynthesis API. This is also just a native web API. Keep in mind, we're writing TypeScript, but there's no build tool or anything; this runs straight in the browser.

So let's use SpeechSynthesis. We get the answer; we need to speak the answer. So how do we do this? We'll have a function called speakStringOfText, and what we want is const utterance. Exactly; I should have let Copilot write this. A SpeechSynthesisUtterance is an utterance of a string. Okay, that's pretty basic, but we also want to set a voice. So we'll say const voice is speechSynthesis.getVoices, and we'll just take the first voice, which is usually the British one, the one that I want. And we'll say utterance.voice is this voice. And then we speak. And then, let's actually just leave it there. And what we'll do is say: speak the answer. How much money do I need to build Avengers Tower? That's cool. But it didn't speak it.
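Here's speakStringOfText as described so far, as a sketch (no promise yet; that comes when we close the loop):

```js
function speakStringOfText(text) {
  // An utterance is a speakable wrapper around a string.
  const utterance = new SpeechSynthesisUtterance(text);
  // getVoices() returns the operating system's voices; the first one
  // is usually the British-sounding one I want.
  const [voice] = speechSynthesis.getVoices();
  utterance.voice = voice;
  speechSynthesis.speak(utterance);
}
```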

8. Enabling Speech Recognition and Addressing User

Short description:

To kick things off, a click event needs to be added to the button; browsers won't let a page speak without a user interaction. By giving the button an ID and adding an event listener, we can start the recognition process on click. However, the AI assistant may still address the user as Mr. Stark unless told otherwise through the system prompt.

It didn't speak it because it needs a user event. This is a security consideration: you can't just have things speak to you without a user interaction. You need a click event or something like it.

So, to start listening, we'll add a click event to the button that already exists. The browser is protective here so that the computer isn't just randomly speaking at you, which can be a bit of a scary experience.

Okay. So, instead of calling recognition.start directly, we'll go back to our button in the HTML. What's the ID? Let's give it one: id is start. And this now makes it a global variable; isn't that ridiculous? So what we'll do is start.addEventListener, click, and then we'll recognition.start. We'll do this, save. So now it's not listening by default, but I'll click the button, then speak, and then it should work.
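In code, the gesture gating is just this (a sketch; the id="start" element does become a window global, but being explicit reads better):

```js
// Don't listen by default; start recognition behind a user gesture.
const startButton = document.getElementById('start');
startButton.addEventListener('click', () => recognition.start());
```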

Hey Jarvis, how much money is it going to take to build a new car? I'm sorry, Mr. Stark has not provided me with sufficient details to estimate the cost of building a new car. Please provide more information. Why did it talk about Mr. Stark in the third person? It doesn't know that I am Mr. Stark. Maybe we can tell it, through the system prompt, that I'm Mr. Stark. Okay, let's do that. System prompt: you are Jarvis; Tony Stark, of course, is also Iron Man; your user is Iron Man, or Tony. Let's try this again. Jarvis, what is my favorite color on my soup? I'm sorry, Tony.

9. Closing the Loop and Enabling Conversation

Short description:

We have speech-to-text, we're talking to OpenAI, and we have text-to-speech, but it's not yet a conversation. I want it to just stay on and support a long conversation. Let's close the loop and summarize everything we did: when the utterance finishes speaking, we resolve a promise, then start recognition again and have an actual conversation.

I cannot determine your favorite soup color as it is not a standard preference. Thinks on Tony.

Okay, so, it's good. We have speech-to-text, we're talking to OpenAI, and we have text-to-speech. But it's not a conversation: it just stops and then it's done, and then I have to click the button to start speaking again. I want it to just be on forever and have a long conversation, okay? Let's close the loop and then summarize everything we did. So, how are we gonna do this? When we finish speaking, so here, in utterance.onend, what we want to do is... let's return a new Promise: Promise, resolve. And notice how we're not handling errors; that's because I like chaos sometimes. When it finishes speaking, we'll resolve the promise. Now we can await speak, and when speak is over, we can start recognition again, and now we can have an actual conversation.
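The promise-wrapped version looks something like this sketch:

```js
// Resolve when the utterance finishes, so the caller can await the speech
// and then reopen the microphone. No error handling; chaos, as mentioned.
function speakStringOfText(text) {
  return new Promise((resolve) => {
    const utterance = new SpeechSynthesisUtterance(text);
    const [voice] = speechSynthesis.getVoices();
    utterance.voice = voice;
    utterance.onend = resolve;
    speechSynthesis.speak(utterance);
  });
}

// Back in the result handler:
// await speakStringOfText(answer);
// recognition.start(); // and now it's a conversation
```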

10. JavaScript Conference and Flight to Amsterdam

Short description:

Hey, Jarvis, what's the greatest JavaScript conference? Some popular conferences include JSConf, ReactConf, and Node Summit. Can I fly to Amsterdam in my suit? Yes, it would take approximately 7 to 8 hours to fly from New York to Amsterdam at supersonic speeds. Thanks, Jarvis. I'm going to close the tab now. We just created a voice-commanded Jarvis-like AI using browser-based JavaScript APIs. The Speech Recognition API varies across browsers and is not yet standard. The Speech Synthesis API uses voices from the user's operating system. Let's browse the final code and identify any mistakes or improvements. We have new WebKit speech recognition that starts when the button is clicked.

So, let's start. Hey, Jarvis, what's the greatest JavaScript conference on Earth? Sorry, Tony, as an AI assistant, I am programmed to provide factual responses. There is no objectively greatest conference as it can vary depending on individual preferences. However, some popular conferences for JavaScript include JSConf, ReactConf, and Node Summit.

Okay, around June 1st in Amsterdam, what is the greatest conference around? The greatest JavaScript conference in Amsterdam around the 1st of June is JavaScript Conference. Jarvis, that's not a real conference. I apologize. Tony, there is no conference around June 1st in Amsterdam that matches your criteria for being the greatest.

Okay, can I still fly to Amsterdam in my suit? How fast? How long is it going to take me to get to Amsterdam from the United States? Yes, you can fly to Amsterdam in your suit. It would take approximately 7 to 8 hours to fly from New York to Amsterdam, traveling at supersonic speeds. Thanks a lot, Jarvis. I'm going to close the tab now, okay? Because I didn't create a stop mechanism. Understood, user. Stay safe! We just did that. We just created a voice-commanded, Jarvis-like AI using only browser-based, native JavaScript APIs. The SpeechRecognition API uses whatever recognizer the browser provides; that's why we used Chrome. It varies from browser to browser and is not standard yet. It probably will be, although it has been in browsers for 10 years and is still not standard. So your mileage may vary. The SpeechSynthesis API uses voices from the user's operating system only; you can't give it custom voices, although there are other solutions for that. I would invite you to experiment and play. Let's browse the final code one last time, identify mistakes we made or things we could do better, and then wrap up. Okay, let's go. I have Google here. Let's open up the thing again, that's fine. What did we do? We have new webkitSpeechRecognition, and when we click the button, we start the recognition.

11. Final Conversations and Conclusion

Short description:

The user event is important as it prevents the browser from randomly listening and spying on people. We keep an array of things said and feed it to OpenAI for more context. We have a loop to listen, speak, and resolve the promise. We make a fetch request to the OpenAI completions API. This project is less than 50 lines of code and uses only native web APIs. You can create a product out of this and consider using Tauri, a tool for creating native desktop-like experiences using web languages and Rust. Thank you for joining the session and supporting our DevRel work.

This user event is important because your browser doesn't want to just randomly start listening to things and, you know, spy on people. We keep an array of things said and feed this to OpenAI. Notice, we're making a bit of a mistake, because when we get an answer, we should actually append that as well, thingsSaid.push, and this will give the AI more context.
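One way to fix that (a sketch; it changes thingsSaid from plain strings to message objects so both roles fit):

```js
// Log both sides of the conversation.
thingsSaid.push({ role: 'user', content: text });
// ...and once the response comes back:
thingsSaid.push({ role: 'assistant', content: answer });

// The request body then spreads thingsSaid in directly after the
// system prompt: messages: [systemPrompt, ...thingsSaid]
```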

This looks good, and then we can maybe remove some console.logs. And we have this loop where we start listening, and then once you say something and the machine answers, you start listening again. To speak, we are using a SpeechSynthesisUtterance that just utters some text, and we set the voice to a system voice. This is the default one; we can maybe even change it and see what happens. And this looks good; then, when it ends, we resolve the promise so that we can come back and start again. Lastly, we have a fetch to the OpenAI completions API. This is just a copy-paste, and we send all the things said. So this isn't really that hard: it's less than 50 lines of code, and we have a voice-activated, Jarvis-style assistant using only native web APIs.
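Assembled, the final code is something like this sketch (folding in the history fix above; the key is a placeholder):

```js
const OPENAI_API_KEY = 'sk-...'; // placeholder: your own key

const thingsSaid = [];
const recognition = new webkitSpeechRecognition();

async function askOpenAI() {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo-0301',
      messages: [
        {
          role: 'system',
          content:
            "You are Jarvis, Tony Stark's personal AI assistant. " +
            'Your user is Iron Man, or Tony. Keep your responses as ' +
            'terse and concise as possible.',
        },
        ...thingsSaid,
      ],
    }),
  });
  return response.json();
}

function speakStringOfText(text) {
  return new Promise((resolve) => {
    const utterance = new SpeechSynthesisUtterance(text);
    const [voice] = speechSynthesis.getVoices();
    utterance.voice = voice;
    utterance.onend = resolve;
    speechSynthesis.speak(utterance);
  });
}

recognition.addEventListener('result', async (event) => {
  const text = event.results[0][0].transcript;
  thingsSaid.push({ role: 'user', content: text });
  const response = await askOpenAI();
  const answer = response.choices[0].message.content;
  thingsSaid.push({ role: 'assistant', content: answer });
  await speakStringOfText(answer); // speak, then listen again
  recognition.start();
});

// Listening starts behind a user gesture.
document.getElementById('start')
  .addEventListener('click', () => recognition.start());
```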

Let's have one last conversation with it, in an optimised way, with a different voice, and then wrap up. Okay, let's do it. So: hey Jarvis, what is the coolest thing about Amsterdam, June 1st? Sorry, I am not programmed to provide subjective opinions. Would you like me to look up some interesting events happening in Amsterdam on the first of June? Sure, that sounds good. Sometimes it takes a while. Based on my search, here are some events happening in Amsterdam on June 1st. One such event is the Exly dance festival, a music festival featuring various DJs. Another is the Apple Arts & Culture Festival, featuring a variety of performances and events.

You can also create a product out of this, with the caveats of browser compatibility and so on. One, you could turn it into an open source project, invite contributions, and actually have something. Two, I would recommend the use of a tool like Tauri. For those who haven't heard of Tauri, it's a way to create native desktop-like experiences using web languages (HTML, CSS, JavaScript), with the back end in Rust, where you can pass messages between your browser-based front end and Rust to create performant things. Indeed, everybody is rewriting things in Rust, and they think they're cool because of it. And indeed, Rust is very cool. So you could really make a native desktop app using Tauri and this, and just give people their own Jarvis. I think that's actually pretty cool, especially if it's connected to their own OpenAI account that really knows them. There are many ways you can take this forward, but I'm going to leave it here. One last mention of Tauri.app, if you want to look into that. Thank you so much for entertaining this fun little session, and I hope it was meaningful and valuable. If you'd like to support me and our DevRel work, feel free to follow me. And with that, I want to thank you so much for having me, and I hope you enjoy the rest of the conference.
