Unicode Characters and Strings

Brief Description

The ord() function tells us the numeric value of a simple ASCII character. As soon as these encoding ideas came out, it became clear that UTF-8 is the best practice for encoding data moving between systems. In Python 2, you could make a Unicode constant by prefixing u before the quote, and that was a separate type from a regular string.

  • Reading time: 17 minutes
  • Level: very hard



English Text of the Lesson

So we started this entire course printing hello world. I just said print “hello world” and out comes hello world, and it’d be nice if that was super simple. In 1970, it was simple, because there was pretty much one character set. Even in 1970, when I started, we didn’t even have lowercase characters. We just had uppercase characters. And I tell you, we were happy when we just had uppercase characters. You kids these days with your lowercase characters, and numbers, and slashes, and stuff. So, the problem that computers have is that they don’t actually understand letters. What computers understand is numbers, and so we had to come up with a mapping between letters and numbers. And we came up with a mapping, and there have been many mappings historically. The most common mapping of the 1980s is this mapping called ASCII, the American Standard Code for Information Interchange. It says, basically, this number equals this letter. So for example, in hello world, the number for capital H is 72; somebody just decided that capital H was going to be 72. For lowercase e, the number is 101. And newline is 10. So if you are really and truly going to look at what is going on inside the computer, it’s storing these as numbers. But the problem is that there are only 128 of these, which means you can’t fit every character into 0 through 127. And so in the early days, we just kind of dealt with whatever characters were possible. Like I said, when I started you could only do uppercase, you couldn’t even do lowercase. And there is this function, as long as you are dealing with simple values, where you can say, hey, what is the actual value for the letter H? It’s called ord(), which stands for ordinal. What’s the ordinal, the number corresponding to H? That’s 72. What’s the number corresponding to lowercase e? It’s 101. And what’s the number corresponding to newline?
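As a quick check, the ordinals mentioned here come straight out of Python’s built-in ord():

```python
# ord() gives the numeric (ordinal) value of a single character.
print(ord('H'))   # 72
print(ord('e'))   # 101
print(ord('\n'))  # 10 (newline is one character)
```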
That is a 10. Remember, newline is a single character. This also kind of explains why the lowercase letters are all greater than the uppercase letters: their ordinals are larger. Now, there are many character sets, but just for the default, old-school 128 characters that we could represent with ASCII, the uppercase letters have lower ordinals than the lowercase letters. So ‘Hi’ is less than ‘zzz’, all lowercase, and that’s because all uppercase letters are less than all lowercase letters. Actually, it could have been ‘aaa’; that’s what the slide should have said there. Okay, so don’t worry about that, just know that they are all numbers. And in the early days, life was simple. We would store every character in a byte of memory, otherwise known as 8 bits of memory. It’s the same thing when you say you have a 16-gigabyte USB stick: that means there are 16 billion bytes of memory on there, which means we could put 16 billion characters on it in the old days, okay? In the old days, we just had so few characters that we could put one character in a byte. And so the ord() function tells us the numeric value of a simple ASCII character. Like I said, if you take a look at this, the lowercase e is 101, the capital H is 72, and the newline, which is listed here as line feed, is 10. Now, we could represent these in hexadecimal, which is base 16, or octal, which is base 8, or actual binary, which is what’s really going on, which is nothing but 0s and 1s. This is the binary for 10: 0001010. These three are just alternate versions of the same numbers. The numbers go up to 127, and if you look at the binary, you can see this is actually 7 bits of binary, and 127 is all 1s. So it starts at all 0s and goes up to all 1s, and 0s and 1s are what computers always do.
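A minimal sketch of these two points in Python: string comparison follows the ordinals, and the same number can be written out in hex, octal, and binary:

```python
# Uppercase ordinals are smaller than lowercase ordinals in ASCII,
# so a string starting with uppercase compares "less than" a lowercase one.
print('Hi' < 'zzz')               # True: ord('H') is 72, ord('z') is 122

# The newline value 10 shown in hexadecimal, octal, and binary:
print(hex(10), oct(10), bin(10))  # 0xa 0o12 0b1010
```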
And if you go all the way back to the hardware, the little wires and stuff, the wires are carrying 0s and 1s. So, this is what we did. In the 60s and 70s, whatever we were capable of squeezing in, we were just totally happy with. We weren’t going to have anything tricky, and like I said, halfway into my undergraduate career, I started to see lowercase letters and I’m like, that’s really beautiful, lowercase letters. Now, the real world is nothing like this. There are all kinds of characters, and they had to come up with a scheme by which we could map all of those characters. For a while, there were a whole bunch of incompatible ways to represent characters other than ASCII, also known as the Latin character set: Arabic character sets, Asian character sets, each one completely inventing its own way of representing characters. And so you had these situations where Japanese computers pretty much couldn’t talk to American computers or European computers at all. The Japanese computers had their own way of representing characters, and the American computers had their own way of representing characters, and they just couldn’t talk. But then they invented this thing called Unicode. Unicode is a universal code for a huge number of characters from hundreds of different character sets. So instead of saying, sorry, your language from some South Sea island doesn’t fit, it’s okay, we’ve got space in Unicode for that. Unicode has lots and lots of characters, not 128. And so there was a time, like I said, in the 70s and 80s where everyone had something different, and that lasted even into the early 2000s. What happened was, as the Internet came out, it became an important issue to have a way to exchange data.
And so we had to kind of say, well, it’s not sufficient for Japanese computers to talk to Japanese computers and American computers to talk to American computers; Japanese and American computers have to exchange data. So they built these character sets, and there is Unicode, which is sort of this abstraction of all the different possible characters, and there are different ways of representing them inside of computers. There are a couple of simple things that you might think are good ideas that turn out to be not such good ideas, although they are used. So UTF-16, UTF-32, and UTF-8 are basically ways of representing this larger set of characters. The gigantic one, UTF-32, is 32 bits, which is 4 bytes: four times as much data for a single character. That’s quite a lot of data, so you’re dividing the number of characters you can store by four. If this is a 16-gigabyte stick, it can only handle 4 billion characters, right, at 4 bytes per character. So that’s not so efficient. Then there’s a compromise, UTF-16, that has 2 bytes per character, but then you have to pick which characters. UTF-32 can do all the characters; UTF-16 can do lots of character sets. But it turns out that even though you might instinctively think that UTF-32 is better than UTF-16 and UTF-8 is the worst, UTF-8 is actually the best. UTF-8 basically says each character is going to be one, two, three, or four bytes, and there are little marks that tell it when to go from one byte up to four. The nice thing about it is that UTF-8 overlaps with ASCII, right? So if the only characters you’re putting in are from the original ASCII or Latin-1 character set, then UTF-8 and ASCII are literally the same thing. It then uses special byte values that aren’t part of ASCII to indicate flipping from one-byte characters to two-byte, three-byte, or four-byte characters. So it’s variable length, and you can automatically detect it.
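You can see the variable-length behavior directly in Python; the non-ASCII characters below are just illustrative examples:

```python
# ASCII characters stay one byte in UTF-8, so plain ASCII text is valid UTF-8.
print(len('H'.encode('utf-8')))   # 1 byte

# Characters outside ASCII take two, three, or four bytes.
print(len('é'.encode('utf-8')))   # 2 bytes
print(len('日'.encode('utf-8')))  # 3 bytes
print(len('😀'.encode('utf-8')))  # 4 bytes
```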
You can just be reading through a string and say, whoa, I just saw this weird marker character, I must be in UTF-8. And if I’m in UTF-8, then I can sort of expand this and represent all those character sets and all the characters in those character sets. And so what happened was, as you can kind of see from this graph, the graph doesn’t really say much other than the fact that UTF-8 is awesome and getting awesomer, and every other way of representing data is becoming less awesome, right? And this is 2012, so that’s a long time ago. So this is like, UTF-8 rocks. And that’s really because, as soon as these ideas came out, it was really clear that UTF-8 is the best practice for encoding data moving between systems. And that’s why we’re talking about this right now. Finally, with this network stuff, we’re doing sockets, we’re moving data between systems. Your American computer might be talking to a computer in Japan, and you’ve got to know what character set is coming out, right? You might be getting Japanese characters, or other Asian characters, even though everything I’ve shown you so far is non-Japanese characters. So UTF-8 turns out to be the best practice: if you’re moving a file between two systems, or if you’re moving network data between two systems, the world recommends UTF-8. Okay, so if you think about your computer: for the strings that are inside your Python, like x = 'hello world', we really don’t care what their internal representation is. And if there’s a file, usually the Python running on the computer and the file have the same character set. The file might be UTF-8 and it might be UTF-8 inside Python, but we don’t care. You open a file, and that’s why we didn’t have to talk about this when we were opening files. You might someday encounter a file that’s different from your normal character set, but it’s rare, okay.
So files are inside the computer, and strings are inside the computer, but network connections are not inside the computer. And with databases, we’re going to see, they’re not inside of the computer either. This is also something that changed from Python 2 to Python 3. It was actually a big deal, a big thing, and most people think it’s great. I actually think it’s great. Some people are grumpy about it, but I think those are just people who fear change. So, there were two kinds of strings in Python 2. There was a normal old string and a Unicode string. You could see that Python 2 would make a string constant, and that’s type str. And it would make a Unicode constant by prefixing u before the quote, and that’s a separate type. And then you had to convert back and forth between Unicode and strings. What we’ve done in Python 3 is, this is a regular string and this is a Unicode string, but you’ll notice they’re both type str. It means that inside the world of Python, if we’re pulling stuff in you might have to convert it, but inside Python, everything is Unicode. You do not have to worry about it. Every string is kind of the same, whether it has Asian characters or Latin characters or Spanish characters or French characters, it’s just fine. So this simplifies things, but then there are certain things that we are going to have to be responsible for. There is one kind of string that we sort of haven’t used yet, but it becomes important, and it’s present in both Python 2 and Python 3. Remember how I said in the old days, a character and a byte were the same thing? So there’s always been a thing like a byte string, and they do this by prefixing b, and that says this is a string of bytes. And if you look at a byte string in Python 2, and then you look at a regular string in Python 2, they’re both type str. The bytes are the same as strings, and the Unicode is different.
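You can check this at a Python 3 interpreter; the u prefix is still accepted, but it no longer makes a different type:

```python
# Regular strings and Unicode strings are the same type in Python 3...
print(type('hello'))        # <class 'str'>
print(type(u'hello'))       # <class 'str'>
print('hello' == u'hello')  # True

# ...while byte strings are a separate type.
print(type(b'hello'))       # <class 'bytes'>
print('hello' == b'hello')  # False: str and bytes never compare equal
```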
So, those two are the same in Python 2, and these two are different in Python 2: the byte string and the regular string are the same, and the regular string and the Unicode string are different. What happened in Python 3 is that the regular string and the Unicode string are the same, and now the byte string and the regular string are different. Okay, so bytes turn out to be raw, unencoded data. It might be UTF-8, might be UTF-16, might be ASCII; we don’t know what it is, we don’t know what its encoding is. So it turns out that this is the thing we have to manage when dealing with data from the outside. In Python 3, all the strings internally are Unicode. Not UTF-8, not UTF-16, not UTF-32. If you just open a file, it pretty much usually works. If you talk to a network, now we have to understand this. The key thing is, we have to decode this stuff: we have to figure out the character set of the stuff we’re pulling in. The beauty is, because 99% or maybe 100% of the stuff you’re ever going to run across just uses UTF-8, it turns out to be relatively simple. So there is this little decode operation. If you look at this code right here, when we talk to an external resource, we get a byte array back. The socket gives us an array of bytes, which are characters, but they need to be decoded; we don’t know if this is UTF-8 or UTF-16 or ASCII. So there is this function that’s part of byte arrays: data.decode() says, figure this thing out. The nice thing is, you can tell it what character set it is, but by default it assumes UTF-8 or ASCII, dynamically, because ASCII and UTF-8 are compatible with one another. If it’s old data, you’re probably getting ASCII; if it’s newer data, you’re probably getting UTF-8. And literally, it’s a law of diminishing returns: it’s very rare that you get anything other than those two. So you almost never have to tell it what it is, right?
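A small sketch of the decode step; the byte value here is made up to stand in for what a socket’s recv() might hand back:

```python
# Bytes from the outside world are just raw data until they're decoded.
data = b'Hello world'   # stand-in for bytes received from a socket
text = data.decode()    # default encoding is UTF-8, which also covers ASCII
print(type(text))       # <class 'str'>
print(text)             # Hello world
```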
So you just say, decode it. It might be ASCII, it might be UTF-8, but whatever it is, by the time decode is done with it, it’s a string. It’s all Unicode inside. So this is bytes, and this is Unicode: decode goes from bytes to Unicode. And you can also see, when we’re looking at the sending of the data, we’re going to turn it into bytes. So encode takes the string and makes it into bytes. This is going to be bytes, properly encoded in UTF-8. Again, you could have put ‘utf-8’ here, but it just assumes UTF-8. And this command is all ASCII, so encode actually doesn’t change anything, but that’s okay. And then we’re sending the bytes out. So we have to send the stuff out: when we receive it, we decode it, and when we send it, we encode it. Out in the real world is where the UTF-8 is; in here, we just have Unicode. And so before we do the send and after we do the receive, we have to encode and decode this stuff so that it all works out correctly. And you can look at the documentation for both encode and decode. Decode is a method in the bytes class, and you can see that you can tell it the encoding if it’s not UTF-8, because ASCII and UTF-8 aren’t exactly the same thing. The default is UTF-8, which is probably all you’re ever going to use. The same is true for strings: they can be encoded using UTF-8 into a byte array, and then we send that byte array out to the outside world. And it sounds more complex than it is. So after all that, think of it this way: on the way out, we have our internal string, and before we send it, we have to encode it, and then we send it. Getting stuff back, we receive it, and it comes back as bytes. We happen to know it’s UTF-8, or we’re letting it automatically detect UTF-8, and we decode it, and now we have a string. And now internally, inside of Python, we can write files, we can do all kinds of stuff in and out, and it all works together. It’s just that out there, it’s UTF-8.
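The whole round trip can be sketched like this; the string here is a placeholder, not the actual command from the lecture’s socket example:

```python
# On the way out: internal Unicode string -> encode() -> UTF-8 bytes.
outgoing = 'Hello world\n'.encode()  # same as .encode('utf-8')
print(type(outgoing))                # <class 'bytes'>

# On the way back in: received bytes -> decode() -> Unicode string.
incoming = outgoing.decode()
print(incoming == 'Hello world\n')   # True: the round trip is lossless
```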
This is the outside world. And so you kind of have to look at your program and say, okay, when am I talking to the outside world? Well, in this case, it’s when I’m talking to a socket, right? I’m talking to a socket, so I have to know enough to encode and decode as I go in and out of the socket. So it looks kind of weird when you first start seeing all these encodes and decodes, but they actually make sense. They’re sort of a barrier between the outside world and our inside world, so that inside, our data is all completely consistent, and we can mix strings from various sources without regard to the character set of those strings. So now what we’re going to do is rewrite that program. It’s a short program, but we’re going to make it even shorter.
