Convert UTF-16 to UTF-8 and remove BOM?
We have a data-entry person who encoded files in UTF-16 on Windows, and we would like to convert them to UTF-8 and remove the BOM. The UTF-8 conversion works, but the BOM is still there. How would I remove it? This is what I currently have:
import os
import codecs

batch_3 = {'src': '/Users/jt/src', 'dest': '/Users/jt/dest/'}
batches = [batch_3]

for b in batches:
    s_files = os.listdir(b['src'])
    for file_name in s_files:
        ff_name = os.path.join(b['src'], file_name)
        if os.path.isfile(ff_name) and ff_name.endswith('.json'):
            print ff_name
            target_file_name = os.path.join(b['dest'], file_name)
            BLOCKSIZE = 1048576
            with codecs.open(ff_name, "r", "utf-16-le") as source_file:
                with codecs.open(target_file_name, "w+", "utf-8") as target_file:
                    while True:
                        contents = source_file.read(BLOCKSIZE)
                        if not contents:
                            break
                        target_file.write(contents)
If I run hexdump -C on the output, I see:
Wed Jan 11$ hexdump -C svy-m-317.json
00000000 ef bb bf 7b 0d 0a 20 20 20 20 22 6e 61 6d 65 22 |...{.. "name"|
00000010 3a 22 53 61 76 6f 72 79 20 4d 61 6c 69 62 75 2d |:"Savory Malibu-|
in the resulting file. How do I remove the BOM?
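For reference, the problem can be reproduced in a few lines (Python 3 shown here; the JSON content is just a stand-in):

```python
# Simulate a file saved on Windows as UTF-16 little-endian with a BOM:
data = b'\xff\xfe' + '{"name": "x"}'.encode('utf-16-le')

# Decoding with "utf-16-le" keeps the BOM as an ordinary character...
as_le = data.decode('utf-16-le')
assert as_le[0] == '\ufeff'
# ...which then re-encodes to the ef bb bf bytes seen in the hexdump above.
assert as_le.encode('utf-8')[:3] == b'\xef\xbb\xbf'

# Decoding with plain "utf-16" consumes the BOM and deduces the byte order:
assert data.decode('utf-16') == '{"name": "x"}'
```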
thx
This is the difference between UTF-16LE and UTF-16:

UTF-16LE is little endian, without a BOM.
UTF-16 is big or little endian, with a BOM.

So when you use UTF-16LE, the BOM is just part of the text. Use UTF-16 instead, so the BOM is automatically removed. The reason UTF-16LE and UTF-16BE exist is so people can carry around "properly-encoded" text without BOMs, which does not apply to you.
Note what happens when you encode using one encoding and decode using the other. (UTF-16 automatically detects UTF-16LE sometimes, not always.)
>>> u'Hello, world'.encode('UTF-16LE')
'H\x00e\x00l\x00l\x00o\x00,\x00 \x00w\x00o\x00r\x00l\x00d\x00'
>>> u'Hello, world'.encode('UTF-16')
'\xff\xfeH\x00e\x00l\x00l\x00o\x00,\x00 \x00w\x00o\x00r\x00l\x00d\x00'
^^^^^^^^ (BOM)
>>> u'Hello, world'.encode('UTF-16LE').decode('UTF-16')
u'Hello, world'
>>> u'Hello, world'.encode('UTF-16').decode('UTF-16LE')
u'\ufeffHello, world'
^^^^ (BOM)
Or you can do this at the shell:
for x in * ; do iconv -f UTF-16 -t UTF-8 <"$x" | dos2unix >"$x.tmp" && mv "$x.tmp" "$x"; done
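The dos2unix step also normalizes the CRLF line endings visible in the hexdump (0d 0a). A rough Python 3 equivalent of that pipeline, as a sketch (the function name is made up):

```python
import pathlib

def utf16_to_utf8(path):
    """Re-encode a UTF-16 file (with BOM) as UTF-8 with LF line endings."""
    p = pathlib.Path(path)
    text = p.read_bytes().decode('utf-16')  # BOM detected and stripped here
    p.write_bytes(text.replace('\r\n', '\n').encode('utf-8'))
```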
Just use str.decode and str.encode:
with open(ff_name, 'rb') as source_file:
    with open(target_file_name, 'w+b') as dest_file:
        contents = source_file.read()
        dest_file.write(contents.decode('utf-16').encode('utf-8'))
str.decode will get rid of the BOM for you (and deduce the endianness).
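The same idea works on bytes objects in Python 3; a quick sanity check (the sample string is arbitrary):

```python
text = 'Hello, world'

# The BOM tells the decoder which byte order was used, and the decoder
# strips it from the result, for either endianness:
le_bytes = b'\xff\xfe' + text.encode('utf-16-le')   # little-endian + BOM
be_bytes = b'\xfe\xff' + text.encode('utf-16-be')   # big-endian + BOM

assert le_bytes.decode('utf-16') == text
assert be_bytes.decode('utf-16') == text
# And the resulting UTF-8 carries no BOM:
assert not le_bytes.decode('utf-16').encode('utf-8').startswith(b'\xef\xbb\xbf')
```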