
Decode bad escape characters in python

So I have a database with a lot of names, and the names have bad characters. For example, a name in a record is "JosÃ© Florés", and I want to clean this up to get "José Florés".

I tried the following

name = "    JosÃ©     Florés "
print(name.encode('iso-8859-1', errors='ignore').decode('utf8', errors='backslashreplace'))

The output mangles the last name, giving '    José     Flor\xe9s '.
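The backslashes appear because `decode(..., errors='backslashreplace')` replaces any byte that is not valid UTF-8 with a literal escape sequence. A small sketch of what happens, assuming the stored name mixes a mojibake first name with a correctly encoded last name (that mixed string is an assumption for illustration):

```python
# Assumed input: first name is UTF-8-read-as-Latin-1 mojibake,
# last name is already correct.
name = "JosÃ© Florés"

# Step 1: Latin-1 encoding maps each character back to a single byte.
raw = name.encode('iso-8859-1', errors='ignore')
print(raw)  # b'Jos\xc3\xa9 Flor\xe9s'

# Step 2: UTF-8 decoding repairs \xc3\xa9 -> é, but the lone \xe9 is
# not valid UTF-8, so backslashreplace turns it into the literal
# four characters '\xe9' instead of raising an error.
fixed = raw.decode('utf8', errors='backslashreplace')
print(fixed)  # José Flor\xe9s
```

So the round trip repairs the word that was mojibake but corrupts the word that was already fine, which is why a blanket encode/decode cannot fix a mixed-encoding database column.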

What is the best way to solve this? The names can have any kind of unicode or hex escape sequences.

ftfy is a Python library that fixes Unicode text broken in various different ways; its fix_text function does the repair.

from ftfy import fix_text

def convert_iso_name_to_string(name):
    result = []

    for word in name.split():
        result.append(fix_text(word))
    return ' '.join(result)

name = "JosÃ© Florés"
assert convert_iso_name_to_string(name) == "José Florés"

Using the fix_text function the names can be standardized, which is an alternative way to solve the problem.
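If installing ftfy is not an option, the same per-word repair can be sketched with only the standard library: re-encode each word to Latin-1 and decode it as UTF-8, keeping the word unchanged when either step fails. This assumes each word is either fully mojibake or fully clean, and the helper name fix_mojibake is hypothetical:

```python
def fix_mojibake(text):
    """Repair words that are UTF-8 bytes mis-decoded as Latin-1.

    Words that fail the round trip are assumed to already be correct
    and are kept as-is. Note: like the ftfy version above, runs of
    whitespace collapse to single spaces because of split()/join().
    """
    fixed = []
    for word in text.split():
        try:
            fixed.append(word.encode('latin-1').decode('utf-8'))
        except (UnicodeEncodeError, UnicodeDecodeError):
            fixed.append(word)
    return ' '.join(fixed)

print(fix_mojibake("JosÃ© Florés"))  # José Florés
```

ftfy is still preferable for real data, since it recognizes many more kinds of breakage (double-encoded text, Windows-1252 mix-ups, HTML entities) than this single Latin-1/UTF-8 round trip.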

We'll start with an example string containing a non-ASCII character (here "é", e-acute):

s = 'Florés'

Now if we reference and print the string, it gives us essentially the same result:

>>> s
'Florés'
>>> print(s)
Florés

In contrast to the same string s in Python 2.x, here s is already a Unicode string: all strings in Python 3.x are automatically Unicode. The visible difference is that s is unchanged from how we instantiated it.
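The str/bytes split in Python 3 can be seen directly: encoding a string produces a separate bytes object, and decoding with the same codec restores the identical string. A minimal illustration:

```python
s = 'Florés'

# Encoding turns the str into bytes; the accented "é" becomes
# the two-byte UTF-8 sequence \xc3\xa9.
b = s.encode('utf-8')
print(type(s).__name__)  # str
print(type(b).__name__)  # bytes
print(b)                 # b'Flor\xc3\xa9s'

# Decoding with the same codec round-trips losslessly.
assert b.decode('utf-8') == s
```

Mojibake is exactly this round trip done with mismatched codecs, e.g. encoding with UTF-8 but decoding with Latin-1.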

You can find more detail in the article Encoding and Decoding Strings.
