
Python returns length of 2 for single Unicode character string

In Python 2.7:

In [2]: utf8_str = '\xf0\x9f\x91\x8d'
In [3]: print(utf8_str)
👍
In [4]: unicode_str = utf8_str.decode('utf-8')
In [5]: print(unicode_str)
👍 
In [6]: unicode_str
Out[6]: u'\U0001f44d'
In [7]: len(unicode_str)
Out[7]: 2

Since unicode_str contains only a single Unicode code point (U+1F44D), why does len(unicode_str) return 2 instead of 1?

Your Python binary was compiled with UCS-2 support (a narrow build), and internally anything outside the BMP (Basic Multilingual Plane) is represented using a surrogate pair.

That means such code points show up as 2 characters when you ask for the length.
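If you can't change the build, a minimal sketch of a workaround (Python 2.7; the helper name code_point_count is made up for illustration, not a standard-library function) is to pair the surrogates up yourself before counting:

# -*- coding: utf-8 -*-
# Count code points by treating a high/low surrogate pair as one character.
# On a wide build the pair alternative simply never matches, so the result
# is the same on either build.
import re

_CODE_POINT = re.compile(u'[\ud800-\udbff][\udc00-\udfff]|.', re.DOTALL)

def code_point_count(text):
    """Return the number of Unicode code points in a unicode string."""
    return len(_CODE_POINT.findall(text))

unicode_str = u'\U0001f44d'
print(len(unicode_str))               # 2 on a narrow build
print(code_point_count(unicode_str))  # 1 on either build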

You'll have to recompile your Python binary to use UCS-4 instead if this matters (./configure --enable-unicode=ucs4 will enable it), or upgrade to Python 3.3 or newer, where Python's Unicode support was overhauled (PEP 393) to use a variable-width Unicode type that switches between ASCII, UCS-2 and UCS-4 as required by the code points contained.
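As a quick illustration (assuming a Python 3.3 or newer interpreter), the same string behaves as you'd expect:

# Python 3.3+ (PEP 393): strings store real code points, so no surrogate
# pairs are visible and len() counts what you expect.
import sys

s = '\U0001f44d'
print(sys.maxunicode)   # always 1114111 (0x10FFFF) on Python 3.3+
print(len(s))           # 1
print(list(s))          # ['👍'] -- a single code point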

On Python versions 2.7 and 3.0 - 3.2, you can detect what kind of build you have by inspecting the sys.maxunicode value; it'll be 2^16-1 == 65535 == 0xFFFF for a narrow UCS-2 build, and 1114111 == 0x10FFFF for a wide UCS-4 build. In Python 3.3 and up it is always set to 1114111.

Demo:

# Narrow build
$ bin/python -c 'import sys; print sys.maxunicode, len(u"\U0001f44d"), list(u"\U0001f44d")'
65535 2 [u'\ud83d', u'\udc4d']
# Wide build
$ python -c 'import sys; print sys.maxunicode, len(u"\U0001f44d"), list(u"\U0001f44d")'
1114111 1 [u'\U0001f44d']
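In code rather than at the command line, a minimal sketch of the same check could look like this (the messages are only illustrative):

import sys

# Report whether this interpreter stores astral code points as surrogate
# pairs (narrow UCS-2 build) or as single characters (wide build / 3.3+).
if sys.maxunicode == 0xFFFF:
    print("narrow (UCS-2) build: code points above U+FFFF use surrogate pairs")
else:
    print("wide (UCS-4) build or Python 3.3+: one code point per character")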
