
Writing unicode (UTF-8) to a binary file in Python

I'm wondering how to write unicode (utf-8) to a binary file. Here's the background: I've got a 40 byte header (10 ints), and a table with a variable number of triple-int structs. Writing these was cake.
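For context, the fixed-size part described above might look like this with the struct module (a sketch with made-up values; the `<` prefix assumes little-endian, standard-size ints so the header is exactly 40 bytes):

```python
import struct

# 40-byte header: 10 ints of 4 bytes each ('<' = little-endian, standard sizes)
header = struct.pack('<10i', *range(10))

# Variable-length table of triple-int records (values are hypothetical)
records = [(1, 2, 3), (4, 5, 6)]
table = b''.join(struct.pack('<3i', *r) for r in records)

with open('data.bin', 'wb') as f:
    f.write(header)
    f.write(table)
```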

Now, I want to add a bunch of strings to the end of the file.

Writing regular ASCII based strings is easy:

import struct

value = b'ab'
s = struct.Struct('2s')
packed_data = s.pack(value)

I learned how to do this from the struct module documentation ("Interpret strings as packed binary data").

But is there a way to do this for unicode (utf-8) based strings?

Any ideas? Has anyone done this before?

Unicode != UTF-8. UTF-8 is a binary encoding of Unicode, so write the UTF-8 string just as you would an ASCII one. There's no need to pack an encoded string either; it's already "just a bunch of bytes".

# coding: utf8
import struct

text = u'我是美国人。'
encoded_text = text.encode('utf8')

# proof packing is redundant...
fmt = '{0}s'.format(len(encoded_text))
packed_text = struct.pack(fmt, encoded_text)
print(encoded_text == packed_text)  # result: True

So just encode your Unicode strings and append them to the file after writing your packed ints.
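Putting it together, a minimal sketch of the whole file: fixed binary section first, strings appended after. The length prefix before each string is an assumption (the question doesn't specify one), but without some delimiter the strings can't be read back unambiguously:

```python
import struct

strings = [u'我是美国人。', u'hello']

with open('data.bin', 'wb') as f:
    # 40-byte header of 10 ints (placeholder values)
    f.write(struct.pack('<10i', *range(10)))
    for s in strings:
        data = s.encode('utf-8')
        # Assumed convention: 4-byte length prefix, then the UTF-8 bytes
        f.write(struct.pack('<I', len(data)))
        f.write(data)
```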

unicode.encode('utf-8') returns a byte string encoded in UTF-8; just check its length before packing.
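To illustrate why the length check matters: len() of a Unicode string counts characters, while struct formats need byte counts, and the two differ for non-ASCII text. A small sketch:

```python
import struct

text = u'我是美国人。'
data = text.encode('utf-8')

# Character count vs. byte count: each of these CJK characters
# takes 3 bytes in UTF-8, so the counts diverge.
print(len(text))   # 6 characters
print(len(data))   # 18 bytes

# Size the struct format from the encoded bytes, not the string
fmt = '{0}s'.format(len(data))   # '18s'
packed = struct.pack(fmt, data)
```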
