Yeah, and as much as I understand the article saying there should be an easily accessible method for grapheme count, it’s also kind of mad to put something like this into a stdlib.
Its behaviour will break with each new Unicode standard. And you’d have to upgrade the whole stdlib to keep up-to-date with the newest Unicode standards.
The way UTF-8 works is fixed though, isn’t it? A new Unicode standard should not change that, so as long as the string is UTF-8 encoded, you can determine the character count without needing to have the latest Unicode standard.
Plus in Rust, you can instead use .chars().count() as Rust’s char type is UTF-8 Unicode encoded, thus strings are as well.
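To make the distinction concrete: in Rust, `.chars()` iterates Unicode scalar values (codepoints), not user-perceived characters. A minimal sketch using only the standard library, with "é" written as `e` plus a combining accent:

```rust
fn main() {
    // "é" written as 'e' + U+0301 (combining acute accent):
    // one user-perceived character (grapheme), but two Unicode scalar values.
    let s = "e\u{301}";

    // .chars() yields scalar values, so this counts codepoints, not graphemes.
    assert_eq!(s.chars().count(), 2);

    // .len() counts UTF-8 bytes: 'e' is 1 byte, U+0301 is 2 bytes.
    assert_eq!(s.len(), 3);

    // The grapheme count here would be 1, but std has no grapheme iterator;
    // you'd need e.g. the unicode-segmentation crate for that.
    println!("codepoints: {}, bytes: {}", s.chars().count(), s.len());
}
```

So `.chars().count()` gives 2 for a string most users would call one character long, which is exactly the gap the article is pointing at.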
turns out one should read the article before commenting
No offense, but did you read the article? You should at least read the section “Wouldn’t UTF-32 be easier for everything?” and the following two sections for the context here.
So, everything you’ve said is correct, but it’s irrelevant for the grapheme count.
And you should pretty much never need to know the number of codepoints.
yup, my bad. Frankly I thought grapheme meant something else, rather stupid of me. I think I understand the issue now and agree with you.
No worries, I almost commented here without reading the article, too, and did not really know what graphemes are beforehand either. 🫠
Nope, the article says that what is and is not a grapheme cluster changes between unicode versions each year :)
It might make more sense to expose a standard library API for unicode data provided by (and updated with) the operating system. Something like the time zone database.