Taming Language Models through TypeScript


You've probably played with bots like ChatGPT and used them to brainstorm. Maybe you've noticed that getting responses in the right structure or format has been a challenge. As people, we're okay with that - but programs are much more finicky. We've been experimenting with using TypeScript itself as a way of guiding and validating responses from language models.

Daniel Rosenwasser
26 min
21 Sep, 2023

Video Summary and Transcription

TypeChat is an open-source library that uses TypeScript types to guide and validate responses from language models. It allows for the creation of complex responses and provides a way to repair errors in the model. TypeChat programs enable better flow of data and the ability to refer to individual steps. The math example demonstrates the use of a program translator and metaprogramming techniques for type safety. Language models trained on both code and prose perform well in this context.


1. Introduction to TypeChat

Short description:

Hi everyone, my name is Daniel Rosenwasser and I'm the program manager on the TypeScript team at Microsoft. Today, I want to talk to you about TypeChat, another thing I've been working on recently with others on my team. In the last few months, you've probably seen a lot about artificial intelligence and large language models. These language models are very powerful, and you may have had the idea of trying to bring the smarts of one of these language models into your app, maybe to add a natural language interface in some way.

Hi everyone, my name is Daniel Rosenwasser and I'm the program manager on the TypeScript team at Microsoft. Today, I want to talk to you about TypeChat, another thing I've been working on recently with others on my team.

So, in the last few months, you've probably seen a lot about artificial intelligence and large language models. These language models are very powerful, and you've probably even gotten the chance to use them in the form of something like ChatGPT or a similar sort of chat program, where you can ask the model questions, it can give you answers, you can iterate through ideas, and it's just a great way to creatively work through concepts.

That works really well for people, but these language models are very powerful, and you may have had the idea of trying to bring the smarts of one of these language models into your app, maybe to add a natural language interface in some way. So for example, let's say we have some sort of app that's supposed to help us plan out a day, or a weekend, maybe specific to a location. That doesn't seem impractical, given that these models have been trained on so much of the world's data, so they might know a lot about Seattle or some other city that you're trying to scout out ideas for.

2. Using TypeScript types to guide language models

Short description:

In this app, you want to get data from language models. However, language models often produce data that is not easy to parse. You can get language models to respond in JSON, but it may not always match the expected format. TypeScript types can guide the language model and provide the desired format. However, the language model may still produce responses that don't conform exactly. Validation is needed to push the AI to try again.

In this app, maybe we just want to be able to ask a question, that's our user intent, and then get a set of results with a couple of venues and each of their descriptions as well. So that seems all good, but how would you go about trying to get one of these language models to produce data that you can use in this app?

The thing that you may have realized in trying to do something like this is that you often end up trying to coax the AI into answering in a specific format, and even once it's in that format, it comes up with natural language that's just not always going to be easy to parse. So for example, here's a prompt and response that took a couple of tries to get even reasonable, right? One of the things that you might notice is that I end up trying to give it a little bit of guidance on how I expect the answer to look, right?

This isn't so bad, right? But mostly because it's come up with a regular answer in a specific format. And so it's actually given me a format of list items. Each of them is numbered. And then between the venue and the description, I have a colon. Now, is the AI always going to come back with that schema or format? Not necessarily. A lot of these language models are non-deterministic. So you really can't count on this format. But even if you could count on that format, you can't always trust that the data is going to be uniformly parsable, right?

So for example, in this case, I have a list and I have everything in this format. What about trying to split on the colon, right? You might try to say, let me just try to shave each of the numbers off and then split by the colon. But what if one of the items in your list has a colon inside of the title or something like that, right? You're basically trying to do natural language parsing at this point, right? And now you have a bug, and now you have to figure out how to be resilient against that. And so this ends up being a little bit impractical for most people, right? It is very hard to parse natural language.

But many of you probably also realize that you can get the language models to respond in the form of JSON. And that's great, right? Now you actually have something that your app can easily just do a JSON.parse or whatever, and get the data that you need and work off of that. But that only really works for sort of simple examples, right? Here I was able to say, here's an example of the JSON that I want to get back, right? I have a venue and a description, a venue and a description. And the AI is pretty good at figuring that out. But it doesn't tell us about maybe optional properties, maybe the case where you have three different kinds of objects that you expect to be in a specific position, things like that. And so just giving examples would be impractical because you would get into these combinatorial explosions of all the types of things that you would want to actually provide. So examples aren't enough. What you need is something a little bit more.

And it turns out there is a format that does work out pretty well, for the most part, in our experience. And it's something that you're all familiar with here at this TypeScript conference, which is types. TypeScript types are actually a great format for describing the exact format that we want out of a language model and all the sorts of shapes that we're expecting. So types are actually really good at guiding the language model toward the thing that we want, right? And you can actually take the types in an application, like the actual text of the types in your program, take a user intent, and craft a prompt that you can send into an AI service, into a language model. And that will provide you with JSON. You would say: give me a response in the form of JSON, and here's the type that you should conform to when you provide that response. And so now you're able to guide the language model.
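
To make that concrete, here is a rough sketch of what such a prompt could look like. The VenueResponse type and the exact wording are illustrative; they are not TypeChat's actual prompt text.

```ts
// Illustrative sketch: embed the schema text and the user's intent in one prompt.
const schemaText = `
interface VenueResponse {
  venues: { name: string; description: string }[];
}
`;

function makePrompt(userRequest: string): string {
  return (
    `You are a service that translates user requests into JSON.\n` +
    `The JSON must satisfy the following TypeScript type:\n` +
    `${schemaText}\n` +
    `The user request is: "${userRequest}"\n` +
    `Respond only with JSON.`
  );
}

console.log(makePrompt("What can I do this weekend in Seattle?"));
```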

But like I said, the language model isn't always going to come back with a response that is exactly what we expect, right? So maybe it comes back with JSON, but not precisely in the format we asked for. We can always say, hey, you didn't give me the correct JSON, try again. But you need something that tells you when to push the AI to try again. And that's the validation.

3. Using TypeScript types for response validation

Short description:

Once you have the types, you can use them to validate the response from the language model. By constructing a program that checks each property and value, TypeScript can ensure the response is correct. If there's an error, it can be fed back into the model for repair. This approach allows for the creation of complex responses, such as taking orders for coffee. TypeChat, an open-source library, encapsulates these ideas and makes it easy to get started. The sentiment example, which categorizes user sentiment as positive, negative, or neutral, demonstrates the capabilities of TypeChat.

And so, actually, once you have the types, you can use the types to validate the response from the language model as well. And so, what we actually just decided to do is, we said, hey, if we have the types and we have the JSON, it's very trivial to construct a small program where you can get TypeScript itself to check each of the properties and each of the values that have come back, because it's very simple at that point. Because JSON is really just roughly a subset of JavaScript. And JavaScript is a subset of TypeScript.
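
The gist of that trick, sketched very roughly below: take the JSON text the model returned, wrap it in a tiny module that assigns it to a value of the expected type, and let the TypeScript compiler do the checking. This is only an approximation of what TypeChat does internally, and the "./venueSchema" module name is made up for the example.

```ts
// Roughly: turn "here's some JSON, does it match VenueResponse?" into a
// type-checking problem that the TypeScript compiler can answer.
function makeCheckableSource(jsonText: string, typeName: string): string {
  return (
    `import { ${typeName} } from "./venueSchema";\n` +
    `const response: ${typeName} = ${jsonText};\n`
  );
}

// If the model answered { "venues": "Pike Place Market" } instead of an array,
// compiling this source yields a diagnostic that can be fed back to the model.
console.log(makeCheckableSource(`{ "venues": "Pike Place Market" }`, "VenueResponse"));
```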

And so, you can actually create a program, have TypeScript type check it, and if it has an error, you feed that error back into the language model, and have the language model try to repair whatever its response was from the last time. That works out pretty dang well. And so, you can use this to actually construct even more complex responses, right? So, you can actually say, hey, maybe I am a kiosk that takes orders for coffee or something like that, right? And so, the orders are effectively in the shape of a cart. A cart has many items. Those items can be one of several different kinds of items. And we give that schema. We give a user intent, like I want two lattes, one tall and one grande. And good language models can typically respond with well-formed JSON that conforms to the type. It depends on the quality of your model, right? You can have it repair a couple of times. But generally speaking, many of the models that are broadly available to people today do come back with a good response here.
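
That validate-and-repair loop can be sketched roughly like this. The helpers passed in (makePrompt, completePrompt, typeCheckJson) are placeholders standing in for "build the prompt", "call the model", and "type-check with TypeScript"; they are not real TypeChat APIs.

```ts
// A hedged sketch of the repair loop described above.
async function translateWithRepair(
  request: string,
  makePrompt: (request: string) => string,
  completePrompt: (prompt: string) => Promise<string>, // call the language model
  typeCheckJson: (jsonText: string) => string[],       // TypeScript diagnostics
  maxRepairs = 1
): Promise<unknown> {
  let prompt = makePrompt(request);
  for (let attempt = 0; attempt <= maxRepairs; attempt++) {
    const jsonText = await completePrompt(prompt);
    const diagnostics = typeCheckJson(jsonText);
    if (diagnostics.length === 0) {
      return JSON.parse(jsonText); // validated, well-typed data
    }
    // Feed the compiler errors back and ask the model to repair its answer.
    prompt =
      `The JSON you produced has these problems:\n${diagnostics.join("\n")}\n` +
      `Please return corrected JSON for the same request: "${request}"`;
  }
  throw new Error("Could not get well-typed JSON from the model");
}
```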

And so I think that this is a good time to give a quick demo of TypeChat, which is this library that we put together to encapsulate all these ideas and make it easy to get running with them. So I'm going to open up the TypeChat repository, right? It's all open source. You can get it today. We announced it about a month ago or so. And so we want to first look at the readme for the examples, right? So this is the readme at the top level. But to get a sense of TypeChat itself, we want to take a look at the examples. In the examples folder, we have a readme with a table that describes each of the examples that we have. And they basically go from easiest to understand to maybe the most difficult. So the sentiment example that we have is kind of like the hello world of TypeChat. It's a classifier which takes a user intent, right? It takes a sentence from the user in a prompt, sends that to a language model, and tries to categorize the sentiment of that sentence as positive, negative, or neutral. So let's go into that. We have a sentiment schema. And this schema file is what's actually going to be used when we send it over to the language model. We're actually going to take the full text of this schema file. But notice that it's just a very simple response, right? It's just an object with a property on it that gets a string that is either the string negative, neutral, or positive. Also, just something to keep in mind is I mentioned we're going to literally send the entire text of the file over to the language model.

4. Constructing the Language Model

Short description:

Comments can be useful for coaxing the language model to give a response that fits a desired shape. In the sentiment example, the language model categorizes different sentences. The code for constructing the language model involves the 'create language model' function, which takes environment variables and loads the schema file. The 'JSON translator' is used to ensure the response conforms to the desired type.

So, that also means that things like comments are also going to be useful for kind of coaxing the language model to give you a response that fits a shape better than just the raw types themselves. So, you can also communicate more than just the types.
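
For reference, the schema described here looks roughly like the sketch below; see examples/sentiment in the TypeChat repo for the exact file. Note how the comments travel with the type text that gets sent to the model.

```ts
// sentimentSchema.ts (sketch)

// The following is a schema to represent the sentiment of a user's message.
export interface SentimentResponse {
  // Comments like this one are sent to the model along with the types, and
  // help coax it toward the shape of response you want.
  sentiment: "negative" | "neutral" | "positive";
}
```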

So, we're already in the sentiment example. Everything is already built here. So, what I'm going to do is run the example. So, this thing creates a prompt and I'm going to just write hello world. So, the language model that we're using right now, which is GPT-3.5 Turbo, is categorizing this sentence as neutral. Every single time I do this, no matter how many exclamation marks I put, it's always neutral for some reason. But if I write something like TypeChat is pretty cool, that's positive.

So, let's take a look at what the code looks like to actually construct this. So, first off, we have this function called createLanguageModel. createLanguageModel actually takes the environment variables that we have in our process, and that's populated by a .env file. So, that .env file, I'm not going to show you the full contents, but you can configure things like the model that you want and some other data. createLanguageModel currently is just there to sort of bootstrap you so you can get an easy model that's accessible, one that loads up either OpenAI models or Azure OpenAI models. Really, a model is a very general concept. You can bring any model that you want. You can bring something that's trained locally. You can bring something that is a completely different service. TypeChat is agnostic to all of that. All that we care about is: can you take a string, make a connection to a language model, and eventually return either a success or a failure with a string. We just care about, can you give us a string from a string? Anyways.
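
A sketch of that model abstraction, based on the description above. The createLanguageModel helper does exist in the TypeChat package; treat the interface shape shown here as approximate rather than the library's exact type.

```ts
import { createLanguageModel } from "typechat";

// Reads the OpenAI / Azure OpenAI settings from environment variables,
// which are typically populated from a .env file.
const model = createLanguageModel(process.env);

// Conceptually, a model is just "prompt string in, success or failure of a
// string out". Anything satisfying a contract like this could be plugged in.
interface MinimalLanguageModel {
  complete(prompt: string): Promise<
    { success: true; data: string } | { success: false; message: string }
  >;
}
```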

So, we create the language model, we infer it from our environment variables and then we actually load up the schema file. This is the actual schema file that we have adjacent to main.ts. Now, there's some subtlety there and I'll talk about that in a second, but notice that this file right here corresponds exactly to this file right here. And then we create what's called a JSON translator. And this thing is going to take the model, take the schema and the name of the type that we want our response to conform to. And this is sort of like the entry point in our types file, right? We might have many types in this file. We may have a whole net or graph of types and we want to make sure that we know the exact one that we're starting off with. And so we're feeding in the name of the type, we're feeding in the actual type that we've loaded up in the program here, and then we have a translator object.
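
Putting that together for the sentiment example, the wiring looks roughly like this. The exact createJsonTranslator signature may differ slightly between TypeChat versions, so check examples/sentiment/src/main.ts in the repo for the real code.

```ts
import fs from "fs";
import path from "path";
import { createJsonTranslator, createLanguageModel } from "typechat";
import { SentimentResponse } from "./sentimentSchema";

const model = createLanguageModel(process.env);

// Load the literal text of the schema file (this is what goes into the
// prompt) and name the entry-point type we expect the response to satisfy.
const schema = fs.readFileSync(path.join(__dirname, "sentimentSchema.ts"), "utf8");
const translator = createJsonTranslator<SentimentResponse>(model, schema, "SentimentResponse");
```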

5. Processing Requests and Examples

Short description:

We have a process requests function that simplifies creating a command line prompt. Each request is run through the translator and eventually the language model. The resulting data is well-typed and strongly typed thanks to TypeScript. The sentiment example is easy to get started with, and we also have a more sophisticated coffee shop example. Depending on the model, it can work consistently well or have occasional issues.

And we also have this processRequests function, which is really just there to make it easy to create a command line prompt, because Node for some reason makes it somewhat difficult. You give it the text that you want to print at every single prompt, and then a callback for every single time you get a user intent or a sentence, right? So this is every line that gets entered at the prompt.

So we run each of the requests through the translator, the translator eventually calls the language model, the language model responds, and then the translator takes that, uses the types, constructs the program, and internally uses TypeScript to check the program to make sure that it has succeeded. And if it has not, it possibly does some retries and maybe backs off a little bit as well. And so if we're not successful, we just error and exit. But if we are, what we end up with is well-typed data. So we get a success of SentimentResponse, our data is a SentimentResponse, and then we have that well-typed sentiment. And because TypeScript is type-checking this thing, we know that this thing is strongly and statically typed. And so that is very easy to get going. Altogether, about 22 lines of code. And there is a little bit of subtlety there, because in our build we actually have to do a little bit of extra stuff. So if you're using something like ts-node, this isn't a problem, but because we are actually looking for the TS input file, since we're actually using the schema file's types, we just copy the schema files over into the output directory. And it's a little bit rudimentary, but it works pretty well, right? And so if you don't want to do this, you can always use something like ts-node. And there are probably easy ways of getting this working with something like Deno or Bun or whatever, right?
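
The request loop described here is roughly the following, assuming the model and translator from the earlier sketch are in scope.

```ts
import { processRequests } from "typechat";

// "😀> " is the interactive prompt text; process.argv[2] optionally names an
// input file of requests, matching how the examples are usually invoked.
processRequests("😀> ", process.argv[2], async (request) => {
  const response = await translator.translate(request);
  if (!response.success) {
    console.log(response.message);
    return;
  }
  // response.data is a well-typed SentimentResponse here.
  console.log(`The sentiment is ${response.data.sentiment}`);
});
```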

So the sentiment example is pretty easy to get going. We also have another example of a coffee shop. So in our coffee shop, we have a slightly more sophisticated schema. I'll close the other files right here. And so this is similar to what I had in the presentation just a little bit ago. Just bear with me for a second. We have a cart, the cart has items, the items are each of these line items, and then, you know, et cetera, et cetera, et cetera. So we can order apple bran muffins, blueberry muffins, bagels, whatever. And so I can actually exit this prompt, go up, go into the coffee shop example, and then I can just run this thing. And I could say one small latte and a bagel please. And so if we look at the output here, we actually get well-formed JSON, and for each of the items, we have one latte drink, it's a short, and a bagel, right? And the quantity is one, the quantity is one. And you know, depending on your model, you'll find that this often works consistently well, or it can have occasional issues. So, you know, let's just try this one. Give me one tall latte, and a banana. And so notice that it actually failed here. It ended up saying, I didn't understand the following, and then listed that thing out.
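
A heavily condensed sketch of the kind of cart schema the coffee shop example uses; the real schema in the repo has many more products and options.

```ts
// coffeeShopSchema.ts (condensed sketch)
export interface Cart {
  items: LineItem[];
}

export interface LineItem {
  product: Product;
  quantity: number;
}

export type Product = LatteDrink | BakeryProduct;

export interface LatteDrink {
  type: "latte";
  // The default size is "tall".
  size?: "short" | "tall" | "grande" | "venti";
}

export interface BakeryProduct {
  type: "bakery";
  name: "apple bran muffin" | "blueberry muffin" | "bagel";
}
```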

6. Schema and Unknown Text

Short description:

When creating schemas, it's good to give them a degree of uncertainty. Unknown text is used when the language model doesn't understand a concept. It allows for more robust applications and provides the ability to preview user requests.

And that's because of how we formed our schema, right? The interesting thing is that when you create these schemas, it's actually good to give them a degree of uncertainty in a lot of cases, right? In cases where a language model can't quite understand the user intent, it's good to have an out. And so the out here, the degree of freedom, whatever you want to call it that we've added, is we have this thing called unknown text. And so whenever the language model doesn't understand a concept, it tends to put things in this unknown text bucket, right?

And so what will happen is as we make our orders, this thing works basically the same way. But what we say is, whenever we encounter an unknown type, then we say, I couldn't understand the following. And then we just run through the whole thing and explain and we elaborate, right? So I can say I want one tall latte and a banana and a gorilla holding the banana. And it's able to translate each of those into an unknown, which is a banana and a gorilla holding the banana, right? So that works pretty well. Generally speaking, if you feel like a user is going to get something wrong, it's actually pretty good to add these degrees of freedom. So saying something like I permit unknown text, or maybe a Boolean that says the order was not clear or something like that can help with your application and you can be a little bit more robust, maybe flag specific things down. And one of the key things about something like TypeChat is because you get it all in JSON, you can always preview exactly what a user was going to get when they made one of these requests. So that's like maybe the easiest part of TypeChat to kind of grok.
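
Sketched against the condensed cart schema above, that "out" amounts to adding something like an UnknownText member to the union of items, so the model has somewhere to put the parts of an order it can't map to real products. (LineItem here refers to the earlier sketch.)

```ts
// Use this type for anything in the order that doesn't match the products above.
export interface UnknownText {
  type: "unknown";
  // The text that wasn't understood.
  text: string;
}

// The cart's items can now be real line items or chunks of unknown text.
export interface Cart {
  items: (LineItem | UnknownText)[];
}
```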

7. Creating TypeChat Programs

Short description:

People often try to plan or script programs with AI, using the basics of TypeChat to create commands. However, the linear sequence of commands lacks the ability to refer back to earlier arguments. To address this, a pseudo-language is created, where the AI responds in JSON. This fake representation is then converted into a real program using TypeScript for type checking, ensuring well-typed programs between each step. This concept of TypeChat programs allows for better flow of data and the ability to refer to individual steps.

One of the things that we also realized was that people are often trying to do some form of planning or scripting or programs with AI, right? Maybe you have an application, you have many, many, many different tasks, and they can all be run one after another in some order. And you can use the basics of TypeChat, like the TypeChat approach that I just showed you, to create commands, right? So you can create objects with a command name and maybe some arguments or properties that follow that could be used as arguments in some way.

And it turns out many, many programs like VS code and others use a form of performing actions that's similar to this, right? There's a command name and then like some properties that follow. And you can also sequence them too. But one of the problems with this is that, well, you only get a linear sequence with this, right? And you don't have a way of referring back to an earlier argument in some cases. And so what you wanna be able to do is also validate one that you're getting the right flow of data, right? That each of these steps actually has the right type of data that it produces that can be used in the next step. But also, you wanna be able to refer to each of the individual steps prior because maybe you need to have some intermediate results before you run an action or something like that.

And so, one question that you might ask is, hey, what if you just generated code and you just took a very similar approach to TypeChat that you had before, right? All you need is types. And so maybe you say, here's my API, generate a program like this. That works well sometimes, but there are a couple of issues, right? First is, we do want TypeChat to be cross-language over time, right? We do want to be able to bring this to other languages like Python or C# or others. But when you say I'm going to create explicit code, you need to be able to also say I want to sandbox this as well. Or maybe I want to be able to have something that, like, interprets the real code in some way. But then you end up creating a very rich interpreter that really has to interpret a lot of JavaScript, or whatever language you choose. And then you end up with a lot of other issues like, okay, well, I have APIs that are async versus sync. So now the AI needs to generate code that knows about that as well. And it turns out when you have the real language, it tries to draw outside the lines a lot. So, if you have JavaScript, it's going to try to take full advantage of the JavaScript that's available to it. That's not always ideal.

So, what we actually found was, you can get the AI to do well enough with just a pseudo-language, in a sense. And you say, actually, just respond in JSON. And the JSON should look like this. And it can piece that together well enough and say, hey, there are these meta-properties where each step can refer back to an earlier step, or specify that it's calling a function, or whatever. And so, this is basically a fake language, or a representation of a language, that's just kind of serialized to JSON. And it works well enough. But where do you get the validation from? How do you know that the inputs to writeFile are actually of the correct type? The trimText call creates a string, and that string can be used as the input to the writeFile call. Well, we can then take the fake representation, the fake program, take that JSON, turn it into a real program, and then use TypeScript to create a program that actually does get type checked. And so, when we do that type checking, then we can validate whether or not each of the inputs and outputs corresponds accordingly. And so, now you do have a well-typed program between each step, and you know that the indices correspond correctly. And so, that's another part of TypeChat: we have this concept of TypeChat programs as well. So, I can really quickly show that as well, because I'm very short on time at this point.
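
For a feel of that serialized pseudo-language, here is an illustrative program written out as a TypeScript value. The property names ("@steps", "@func", "@args", "@ref") are close to what TypeChat uses, but treat this as a sketch rather than a spec.

```ts
const program = {
  "@steps": [
    // Step 0: call trimText on some input string.
    { "@func": "trimText", "@args": ["  hello world  "] },
    // Step 1: write the result of step 0 to a file. "@ref": 0 refers back
    // to the output of the earlier step, which is how data flows between steps.
    { "@func": "writeFile", "@args": ["out.txt", { "@ref": 0 }] },
  ],
};
```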

8. Using a Program Translator for Math Example

Short description:

In this section, we explore the math example and the use of a program translator instead of a JSON translator. The program always expects a specific type called API, which can be exported. Mathematical expressions can be written directly, creating a JSON program that is validated and interpreted. Metaprogramming techniques can be used to ensure type safety. The math example demonstrates the ability to perform complex operations and create steps that can be referenced later. Language models trained on both code and prose perform well in this context. Check us out on GitHub for more information and engage with us on Twitter and other platforms.

But I'm going to go into the math example. That's not where it is. And then I'm going to run the program as well. And my terminal doesn't work so well with this. But let's look at the math schema. The math schema is really just a set of methods for add, subtract, multiply, divide, negate. And then some identity stuff, and then an "I don't know how to handle this", right? That degree of freedom that I mentioned before.
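
The schema being described looks roughly like this; compare with examples/math in the repo for the exact member names.

```ts
// mathSchema.ts (sketch): the program translator targets an exported type
// named API whose methods are the only operations a program may call.
export type API = {
  add(x: number, y: number): number;
  sub(x: number, y: number): number;
  mul(x: number, y: number): number;
  div(x: number, y: number): number;
  neg(x: number): number;
  id(x: number): number;
  // The degree of freedom: use this when a request can't be translated.
  unknown(text: string): number;
};
```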

And in this program we're actually not using a JSON translator. We're using a program translator. And the key idea here is that it always expects a specific type called API. So I have to export a type called API here. And so what I can do is I can actually just write mathematical expressions, and I can say something like add one and 41, right? And so what that does is it actually creates that JSON program from before. This is what we construct to actually validate the program. We actually create this thing. And then we have an interpreter that actually runs it. So when you implement this thing, you effectively create an interpreter in the form of something like a handleCall function, right? And so you have this evaluateJsonProgram function, you provide it a callback, and it just takes a function name and the arguments that get provided to it. And there's a lot of metaprogramming stuff that you can do to make sure that this is all well-typed.
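
Wired up, that looks roughly like the sketch below: a program translator built from the schema text, and an interpreter driven by evaluateJsonProgram, which walks the JSON program's steps and calls your handler for each function name with its already-resolved arguments. createProgramTranslator and evaluateJsonProgram do exist in TypeChat, but the details here are approximate; see examples/math for the real code.

```ts
import fs from "fs";
import path from "path";
import { createLanguageModel, createProgramTranslator, evaluateJsonProgram } from "typechat";

const model = createLanguageModel(process.env);
const schema = fs.readFileSync(path.join(__dirname, "mathSchema.ts"), "utf8");
const translator = createProgramTranslator(model, schema);

// The interpreter: called once per step with the function name and its
// arguments (any "@ref"s have already been resolved to earlier results).
async function handleCall(func: string, args: unknown[]): Promise<unknown> {
  const [x, y] = args as number[];
  switch (func) {
    case "add": return x + y;
    case "sub": return x - y;
    case "mul": return x * y;
    case "div": return x / y;
    case "neg": return -x;
    case "id": return x;
    default: return NaN; // includes "unknown"
  }
}

async function run(request: string) {
  const response = await translator.translate(request);
  if (!response.success) {
    console.log(response.message);
    return;
  }
  console.log("Result:", await evaluateJsonProgram(response.data, handleCall));
}

run("add one and 41, then double it");
```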

So at the end, jumping back to our math example, I can do somewhat sophisticated things. Like I can say, add one and 41 and then double it after, and I have a typo there. In some cases it will inline this, but in other cases, maybe because of how I've said it, it will create a step and the step can then be referenced in one of the other steps. So it says add, and then it says multiply. And it turns out that the language models that have been trained on both code and prose do pretty well here. So that was our demo. Unfortunately I'm short on time, but what we really do encourage people to do is check us out on GitHub. We have discussions, we have issues, we're really looking to hear from you, and you can follow us elsewhere on Twitter or Bluesky or other places as well. So thanks very much for your time, and that's a wrap. Take care.
