JS Character Encodings


Character encodings can be confusing for every developer, providing pitfalls even for the most experienced ones, so a lot of the time we want to end up with something that “just works” without an in-depth understanding of the involved concepts. In this talk, Anna will give an overview of what character encodings are, what the JavaScript language provides to interact with them, and how to avoid the most common mistakes in Node.js and on the Web.


I work at MongoDB, on the developer tools team, so the shell, the GUI, and the VS Code extension for the database, but this talk has absolutely nothing to do with that. Alright, let's jump in. About a month ago I saw this tweet, which got somewhat popular on Twitter, and some people are laughing, you get the joke. Obviously the easiest way to get the length of a string in JavaScript is to spread it into an object, call Object.keys on that object, and then use Array.prototype.reduce to sum up the length of that array. So we all know what the joke is. But let's take a step back. Why are character encodings something that we care about or have to deal with? The typical situation you're in is that you're a software developer writing a program. That program does not exist in isolation; there is something else out there, literally anything but your program, like a file system, the network, other programs, other computers. And obviously you want your software to be able to communicate with those. The default way to communicate anything is to use strings. You can put basically anything in a string; any data you have, you can serialize into a string. So it would be nice if we could talk to these other programs using strings. Unfortunately, that's not how it works. Your program is typically run by an operating system that has no idea what a string is. If it's a JavaScript program, which is going to be the case for many of you, a JavaScript string is something that the JavaScript engine understands, but your operating system has no idea what to do with it. You can't pass it directly to the OS, and that also means you can't pass it to anything else. So the solution people came up with is: you take your string, you assign each character in that string a number, and then you come up with some clever way to convert that sequence of numbers into a sequence of bytes.
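The pipeline being described, characters to numbers to bytes, can be sketched with the standard APIs that come up later in the talk (a minimal illustration, not from the talk's slides):

```javascript
const text = 'hé';

// Step 1: each character is assigned a number (a Unicode code point).
const codePoints = [...text].map((ch) => ch.codePointAt(0));
console.log(codePoints); // [104, 233]  (U+0068, U+00E9)

// Step 2: an encoding turns those numbers into a byte sequence.
// UTF-8 uses one byte for 'h' and two bytes for 'é'.
const bytes = new TextEncoder().encode(text);
console.log(bytes); // Uint8Array [104, 195, 169]

// Decoding walks the same steps in reverse:
console.log(new TextDecoder().decode(bytes)); // 'hé'
```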
And this feels like a very, very basic discussion to have, but I think it's important to have that distinction in mind. When I say string, I mean a sequence of characters, like text. The intermediate representation, which for the most part you don't care about, I'm going to refer to as code points, because that is the language Unicode uses for this. And then your output is a sequence of bytes. Obviously, when you're decoding, you go through these steps in reverse. If you take anything away from this talk, it's that strings and sequences of bytes are different things. So historically, how have people approached this? Back in the 70s, when Americans had not yet discovered that there's something besides America in the world, people came up with a standardized way to assign numbers to characters: ASCII. Those were 128 characters, numbers 0 through 127, and that's enough space for the lowercase and uppercase English alphabet and some special characters, and who needs more than that? Then the next iteration, which was popular around the 90s, I would say: you discover that there are other languages out there besides English, and you say, well, ASCII is 128 characters, so 7 bits, and bytes usually have 8 bits, so we have another 128 characters available. The solution people came up with was: you're probably going to have either Greek text or Slavic text or Arabic text; you're probably not going to mix these. So for each of these you create a character encoding. The ISO 8859 character encodings are like 16 different character encodings, where each of the additional, non-ASCII byte values has a different meaning. But you can't mix them: you can't have a single byte sequence that represents both, say, Greek and Arabic text, and sometimes you might want that. So something that got popular towards the end of the 90s is Unicode.
And Unicode essentially solves that problem by saying: we're not going to stick to single-byte encodings, we're just going to have as many code points as we want. There is a limitation, around 1.1 million code points currently, but we're not close to hitting that. I don't think we're going to get that many emojis, so I think that's okay. What is sometimes relevant for JavaScript is that the first 256 code points match one of these prior encodings, namely ISO 8859-1. That doesn't mean by itself that Unicode is compatible with ASCII, because that's only the code point assignment, not the actual transformation into byte sequences. For that, you have multiple encodings, and the one that we all know and use every day is UTF-8. This one is backwards compatible with ASCII, because the first 128 byte values match ASCII exactly, and it uses all the other bytes to represent characters that don't fit into that range. And then there's UTF-16, which JavaScript people might also care about from time to time, where the idea is closer to two bytes per character. This made a lot of sense when Unicode was first introduced, because back then nobody expected that there might be more than 65,000 characters to care about, so two bytes were a very natural choice. But with things like emoji being introduced, we've stepped outside that range, so some characters have to be represented by pairs of two-byte code units, four bytes in total. So people sometimes say that JavaScript uses UTF-16, and, well, there might be something to that. I have here the output of the unicode command-line utility. If you've never used it, it is a very neat tool for finding out information about individual characters, looking up characters by their code points, all that kind of stuff. Whoever wrote it, I am very thankful. So there is an example of what this looks like in UTF-16; I've highlighted that.
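As a companion to the tool output on the slide, here is how that character looks through JavaScript's standard string APIs (a small sketch; the hamster-face emoji is U+1F439):

```javascript
const hamster = '🐹'; // U+1F439 HAMSTER FACE

// UTF-8: four bytes.
console.log(new TextEncoder().encode(hamster)); // Uint8Array [240, 159, 144, 185]

// Through the string APIs: two 16-bit code units,
// matching the UTF-16 surrogate pair D83D DC39.
console.log(hamster.length);                     // 2
console.log(hamster.charCodeAt(0).toString(16)); // 'd83d'
console.log(hamster.charCodeAt(1).toString(16)); // 'dc39'

// Iterating by code point sees a single character:
console.log([...hamster].length); // 1
```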
And then, what happens when you use Node to print out the length of a string that contains only this single hamster-face character? It says two, even though it's one character. And you can dig further and see that this one character compares equal to a string comprised of two escape sequences, and these escape sequences happen to match exactly how UTF-16 serializes things. So you might say: well, JavaScript uses UTF-16, I'm done. The reality is that UTF-16 is a character encoding, a way of transforming sequences of characters into sequences of bytes. There's no sequence of bytes here. This is not an encoding thing; it just happens to have some similarities. So in some ways, JavaScript lets you interact with strings as if they were stored using UTF-16. Sometimes they might be. But JavaScript engines can use whatever storage they want, and practically speaking, they're not always going to use UTF-16, because if you have ASCII-only text, you don't need it; UTF-16 would waste half the bytes in your storage. And JavaScript engines are made to be very efficient, because people care about that. So, one thing that we did, and this is the only MongoDB work reference that I have here: we had a project last year to improve the startup performance of one of the tools that we maintain. We ship this tool by basically gluing Node together with a webpack bundle of our CLI code. Sounds easy enough, right? webpack has a flag for emitting ASCII-only output from its minifier; it does that by replacing non-ASCII characters with escape sequences. When we did that, the webpack bundle got a bit larger. That's to be expected: escape sequences are longer than the characters they represent. But the overall executable that we shipped got 15% smaller, and that is because we no longer needed to store the data as UTF-16.
We could just pass it to the JavaScript engine as ASCII data. And that actually sped things up by 3.5%, which was a pretty neat, very easy win for a single-line change. So yeah, for example, V8 can use Latin-1 or UTF-16 as the backing store for JavaScript strings; I believe JavaScriptCore can use a UTF-8 backing store. You don't get to see that; you don't get to interact with the underlying storage of strings. So no, JavaScript doesn't simply use UTF-16. Okay, let's go back to the example from the beginning, from that slide from Twitter. Obviously this is what you would use to get the length of a string. But it's right in some ways and not right in others, because this is a single character, and it shouldn't have a length of two. Or maybe it should. Luckily, JavaScript is aware that these things happen, and when you use anything that goes through the JavaScript iterable protocol, like for...of loops or spreading into an array, you can get the proper answer, where “proper” means you actually care about the number of Unicode characters. And if you do this, you're probably going to say: isn't this terribly inefficient, creating a temporary array just to get the length of a string? The answer is obviously yes. You can improve on that a bit by using a loop and not allocating an array, but still, this is several orders of magnitude slower than just doing .length. So what's the story here? You're going to have to pick one of these, think about why you want the length of a string and why that matters, and live with the fact that there's no fast way to get the number of characters from a string in JavaScript. One thing I wanted to mention: really think about why you want the length of a string. What do you want to do with it? Maybe you care about the number of columns something takes up when printed in a terminal, because you want to tab-align things or something. In that case, there's an npm package out there for exactly that.
It does a lot of things that you would never think about, because some characters are invisible, so they don't take up any space at all, all that stuff. There's always an npm package for what you actually want. All right, so let's go back to the basics here. What we want to do in JavaScript is to get from strings to byte sequences. If you're used to Node.js, you might say: I'm just using Buffer, that's how I do things. And that's fine, but I'm not going to cover that, because in my eyes Buffer is very much a legacy API in Node. There are web-standard replacements for a lot of things in the Buffer API, so there's no real reason to use it anymore. Encoding things is easy enough: you create TextEncoder instances, which only let you encode to UTF-8. That is a limitation to some degree, but for the most part, you don't want to use anything else anyway. So, easy enough. For decoding, things get a bit more tricky. If I pass the Uint8Array that I just got as output from the previous step to a TextDecoder, it decodes it again; works perfectly. But the API does have some interesting configurability that you might want to know about. First of all, TextDecoder actually understands multiple character encodings. For the most part you're not going to care about that, but it does, and that can be handy sometimes. Then, as in this example, there's a fatal Boolean option when creating one. The semantics are that you're decoding data, that data may or may not be valid, and you have to handle errors somehow; you have to think about what you do. Two options that are pretty standard are presented here. One is fatal: false, the default, which means invalid input is decoded using replacement characters, like the one on the title slide of this talk, which unfortunately didn't make it into the schedule because somebody thought it was an encoding error. I think that's pretty funny.
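A minimal sketch of the two error-handling modes, using the fatal option from the WHATWG Encoding API (the byte 0xFF is never valid UTF-8):

```javascript
// TextEncoder always produces UTF-8.
const bytes = new TextEncoder().encode('héllo');
console.log(bytes[1].toString(16), bytes[2].toString(16)); // 'c3' 'a9' for 'é'

// Default (fatal: false): invalid sequences become U+FFFD '�'.
const lenient = new TextDecoder('utf-8');
console.log(lenient.decode(new Uint8Array([0x68, 0xff]))); // 'h�'

// fatal: true throws instead, so corruption can't slip through silently.
const strict = new TextDecoder('utf-8', { fatal: true });
try {
  strict.decode(new Uint8Array([0x68, 0xff]));
} catch (err) {
  console.log(err instanceof TypeError); // true
}
```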
But if you use fatal: true, then encoding errors will actually result in an exception when you call decode. Sometimes that's what you want, because you actually want valid input and don't want to silently lose data that might be corrupted. And then there's the stream: true flag, which is best explained by an example. I hope that's big enough on the screen. You have two chunks of data that logically come from the same source, and you want to decode them from UTF-8, and what happens is that you can't, because a character happens to be split across the two chunks. That happens sometimes, for example, when you're doing network I/O. You might not get data chunks from the network that are neatly aligned to your characters, because it's just a byte stream; TCP doesn't care where your chunk boundaries are, it just gives you bytes as they come in. And that is where this flag comes into play. You pass it to every call but the last one when you're decoding a stream of data, and the TextDecoder instance keeps track of which partial characters it has already seen. It has a window of the last bytes it saw; it is smart and remembers what you already passed to it. And this is one of my very, very big pet peeves. People get this wrong all the time in Node, and I get why. This is from the actual official Node.js documentation, and there's a bug in there, pretty much what I just described: you have this common pattern where you define data to be a string, you have a stream, you attach a 'data' listener, and that listener appends each chunk to the data string. There are a lot of implicit details in what that does under the hood. Adding something to a string converts it to a string; the chunk in this case is a Node.js Buffer, and calling toString on a Buffer decodes it from UTF-8 by default.
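A sketch of that failure mode and the stream: true fix (the chunk contents here are made up for illustration; 'é' is 0xC3 0xA9 in UTF-8):

```javascript
// A multi-byte character split across two chunks, as can happen with
// network or file I/O:
const chunks = [
  Buffer.from([0x63, 0x61, 0x66, 0xc3]), // 'caf' + first byte of 'é'
  Buffer.from([0xa9]),                   // second byte of 'é'
];

// The pattern from the docs: toString() decodes each chunk separately,
// so the split 'é' turns into two replacement characters.
let broken = '';
for (const chunk of chunks) broken += chunk.toString('utf8');
console.log(broken); // 'caf��', not 'café'

// TextDecoder with stream: true buffers the trailing partial character:
const decoder = new TextDecoder('utf-8');
let fixed = '';
for (const chunk of chunks) fixed += decoder.decode(chunk, { stream: true });
fixed += decoder.decode(); // final flush
console.log(fixed); // 'café'
```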
That's all implicitly happening here, and it suffers from the problem I just described: chunks might not be neatly aligned to character boundaries. Luckily, this is something that's pretty easy to fix. So let's go to the Node.js documentation and open a pull request. It's a pretty easy one-line fix: Node.js streams have this setEncoding method where you can just tell the stream, hey, decode incoming data using this encoding. And then it's going to do the exact same thing that I just described using TextDecoder, where it keeps track of which partial characters it has already seen. That's a live pull request. And it actually uses the same machinery under the hood in Node.js: TextDecoder and this setEncoding mechanism. Another Node.js bug that I wanted to talk about, one you sometimes see out there and that always makes me want to go, come on. Somebody wrote a hash function here that just does a simple SHA-256 of a string. It takes a string as an argument and returns a hexadecimal string as its output. It does that by creating a crypto API hash object, calling update on it with the string, telling it to interpret the string as 'binary' data, and then calling digest to get the hex result. At a glance, that might not look that bad. But what can actually happen is that you pass different strings to this hash function and get the same result. And that's bad; that's the exact opposite of what hash functions are for. So what happens here? 'binary' is actually a legacy alias for ISO 8859-1 in Node.js. This is the case because long, long ago, before Uint8Array and Buffer were a thing in JavaScript, you still sometimes wanted to deal with binary data. One way you could do that was to use strings and just pretend that the first 256 byte values correspond to the first 256 Unicode code points, which happens to be exactly ISO 8859-1.
And so that was called a binary string. I haven't heard that term used in real-world production usage in 20 years or something, but that's why the alias is there. Sometimes people still pass 'binary' to Node.js APIs because they think it tells Node to interpret something as binary data. It doesn't do that. It's almost always a bug when you pass 'binary' as an encoding to a Node.js API. And especially with crypto APIs, think about what happens: they always work on byte sequences, that's how all crypto primitives are designed. So if you just omit that parameter, it actually does the right thing; it uses UTF-8 by default. So, I'm at the end of my talk. Some things to keep in mind. You are using encodings under the hood, whether you know it or not. We have built some abstractions to make this work as seamlessly as possible, but that doesn't mean you can forget about it. Whenever you convert between sequences of bytes and sequences of characters, you have to think about it. One lesson that I think is not that surprising: why is UTF-8 so popular? Because it's ASCII-compatible. That's the reason. And that's always something to keep in mind when you're building something new: if it's compatible with the existing big players out there, that is the best way to get your stuff adopted. I'm just going to skip this slide because I'm running out of time. But: don't assume that JavaScript is using UTF-16, because it might not be; you don't know what happens under the hood. But also don't pretend that it isn't, because sometimes it acts like it is. And then one final thing: don't just copy code from the docs. It might be wrong. All right, that was me. Thank you, Anna, for this great talk. Now we have one question, but please ask more questions. So the question is, circling back to the trending tweet: what is the best way to find the length of a string in JS?
Well, the best way is to first think: what does the length of a string mean for you? If you care about the number of individual characters, why do you care about that? If you care about the number of JavaScript string elements, that is, UTF-16 code units, why do you care about that? Or, if you use string-width, why do you want the width of a string when you print it to the terminal? Different semantics, different answers. Cool question. So the second question is: if .length returns the real length of a multi-byte character, how does it behave when used in a traditional for loop with array indexing notation? So, if I'm understanding the question right, it's tricky. You are going to have situations where a character is split into what UTF-16 calls a surrogate pair, and if you iterate over a string using the standard for loop with an index, you're going to see these two halves show up separately. This is not something that I included in my talk, but it's something that you generally want to think about; that can happen. If you want to know how to handle it well: you might know that there's a charCodeAt API on strings, but there's also something called codePointAt, and there's a subtle difference for these multi-byte characters, where codePointAt actually gives you the full Unicode code point, taking the element at that index and the next one together in that case. That is a good way to handle it if you run into that situation. But yeah. Nice. The next question is: the collision example is crazy, can you explain what happens from a technical point of view? Do A and Ł have the same byte representation? So what happens is that A is 65 in ASCII, uppercase A, and the Polish uppercase Ł that I used is 65 plus 256.
So when you tell Node.js to use ISO 8859-1 to convert these two characters to bytes, that second character is not representable in that character encoding. And Node.js doesn't throw on that or anything; it just silently truncates the code point of that character. And because truncating means truncating to a single byte, the plus 256 falls away, and you end up with the same byte value for both. Is it true that JavaScript engines actually use ASCII, or Latin-1, I never know how to pronounce that, in the backend most of the time, and auto-convert as soon as you use a character outside of that range? Yeah. So obviously, JavaScript engines do the work to check whether they can represent the characters in their input in the encoding they want to store them in. Again, JavaScript engines are very, very smart about this kind of stuff, so there are a lot of internal string representations. I'm mostly familiar with V8, because I'm a Node person, and different engines might do different things. But in V8, for example, you might end up with situations where you have a string that you created by concatenating two other strings, and it actually stores that as a concatenated representation of those two strings, and one of them might be ASCII and the other might not be. But don't try to outsmart the engine. I mean, that's always good advice for JavaScript. Good advice. So, what do you think about WTF-8? Okay, I'm just going to assume that people in here might not all be familiar with that; if you want to know what it is, then look it up. I think that typically you want standard UTF-8, and you want the standard validation that, for example, TextDecoder gives you, and you should just stick with that, because that's the most standardized thing you can get. WTF-8 is a variant of UTF-8 that handles the code points outside the 65,000 range a bit differently.
Not better, but different. There are use cases for it, but if you don't have a good reason for using it, then don't. Which encoding format is the most recommended right now? UTF-8. That's very simple. Sorry. No, there's a good reason why it's the default encoding for basically every JavaScript API that exists. What is the safest way to truncate a string after 15 characters, adding dot dot dot at the end? Yeah, the safest way is also the most laborious way of doing this, I guess. What I would do, what I have done in the past when running into this problem, is to use the codePointAt API that I mentioned in an earlier question to check whether the 14th element of the string, in that case, is part of a two-code-unit character, and then adjust the index where you cut off, at 14 or 15, depending on that. And then you can just use String.prototype.slice or substring or whatever API you want to use. It's not pretty, but it is correct, and people might otherwise actually notice that you're just cutting off in the middle of an emoji or something. Cool. In old Python versions, we force UTF-8 encoding at the top of the file. Is there a way to force an encoding in JS? To forge? Force. I mean, you can pass an explicit encoding parameter to most JavaScript APIs that do encoding or decoding. TextEncoder, as I mentioned, is one of the exceptions, because it only supports UTF-8, because you're only supposed to use UTF-8 unless you have a very good reason not to. But otherwise, Node.js APIs that do encoding or decoding take an explicit parameter, and other APIs do as well. And what's the fastest way to handle long streams? To do what with? Just do what you would usually do, and then if you're running into performance issues, you can take a look in more detail.
But generally speaking, write idiomatic JavaScript and trust the engine to make smart decisions for you, for the most part. Cool. Nice. The last question would be: what is the string length of this? Yeah, well, how do you define length? I think if you pass this to the string-width package that I mentioned earlier, it might just say four. And the other cases I can't really answer off the top of my head. Well, thank you very much, Anna. This was a great talk. And now...
33 min
14 Apr, 2023
