@envyandgreed I got the same result. Confused.
or use 'for ... of', which iterates a string by Unicode code point rather than by 16-bit unit
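A minimal sketch of that suggestion (the variable name is just for illustration): the string iterator used by for...of yields whole code points, so a surrogate pair comes back as one character instead of two halves:

var e = "\u{1d452}";   // 𝑒, MATHEMATICAL ITALIC SMALL E, codepoint 0x1d452
e.length               // => 2: the string holds two 16-bit code units
for (var ch of e) {
    console.log(ch);   // logs "𝑒" once: one iteration per code point
}
[...e].length          // => 1: spread uses the same code-point iterator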
JavaScript strings are sequences of unsigned 16-bit values. The most commonly used Unicode
characters (those from the “basic multilingual plane”) have codepoints that fit in
16 bits and can be represented by a single element of a string. Unicode characters whose
codepoints do not fit in 16 bits are encoded following the rules of UTF-16 as a sequence
(known as a “surrogate pair”) of two 16-bit values. This means that a string of
length 2 (two 16-bit values) might represent only a single Unicode character:
var p = "π"; // π is 1 character with 16-bit codepoint 0x03c0 var e = "e"; // e is 1 character with 17-bit codepoint 0x1d452 p.length // => 1: p consists of 1 16-bit element e.length // => 2: UTF-16 encoding of e is 2 16-bit values: "\ud835\udc52"
The various string-manipulation methods defined by JavaScript operate on 16-bit values,
not on characters. They do not treat surrogate pairs specially, perform no normalization
of the string, and do not even ensure that a string is well-formed UTF-16.
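To see that concretely, here is a small sketch using the same 𝑒 from above: indexing and slicing count 16-bit units and will happily split the surrogate pair, while the ES2015 code-point APIs recover the real character:

var e = "\ud835\udc52";        // "𝑒" written as its two surrogate halves
e.charAt(0)                    // => "\ud835": half a surrogate pair, not a character
e.slice(0, 1)                  // => "\ud835": slicing also counts 16-bit units
e.charCodeAt(0).toString(16)   // => "d835": the high surrogate, not 1d452
e.codePointAt(0).toString(16)  // => "1d452": decodes the full surrogate pair
Array.from(e).length           // => 1: Array.from iterates by code point

This is why counting "characters" with .length or indexing with [i] gives surprising results on astral-plane input, and why the for...of suggestion above works.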