
Pandas df.to_csv("file.csv" encode="utf-8") still gives trash characters for minus sign

I've read something about a Python 2 limitation with respect to Pandas' to_csv(... etc...). Have I hit it? I'm on Python 2.7.3.

This outputs trash characters for ≥ and − when they appear in strings. Aside from that the export is perfect.

df.to_csv("file.csv", encoding="utf-8") 

Is there any workaround?

df.head() is this:

demography  Adults ≥49 yrs  Adults 18−49 yrs at high risk||  \
state                                                           
Alabama                 32.7                             38.6   
Alaska                  31.2                             33.2   
Arizona                 22.9                             38.8   
Arkansas                31.2                             34.0   
California              29.8                             38.8  

The csv output is this:

state,  Adults ≥49 yrs,   Adults 18−49 yrs at high risk||
0,  Alabama,    32.7,   38.6
1,  Alaska, 31.2,   33.2
2,  Arizona,    22.9,   38.8
3,  Arkansas,31.2,  34
4,  California,29.8, 38.8

The whole code is this:

import pandas
import xlrd
import csv
import json

df = pandas.DataFrame()
dy = pandas.DataFrame()
# first merge all this xls together


workbook = xlrd.open_workbook('csv_merger/vaccoverage.xls')
worksheets = workbook.sheet_names()


for i in range(3, len(worksheets)):
    dy = pandas.io.excel.read_excel(workbook, i, engine='xlrd', index=None)
    df = df.append(dy)

df.index.name = "index"

df.columns = ['demography', 'area','state', 'month', 'rate', 'moe']

#Then just grab month = 'May'

may_mask = df['month'] == "May"
may_df = (df[may_mask])

#then delete some columns we dont need

may_df = may_df.drop('area', 1)
may_df = may_df.drop('month', 1)
may_df = may_df.drop('moe', 1)


print may_df.dtypes #uh oh, it sees 'rate' as type 'object', not 'float'.  Better change that.

may_df['rate'] = may_df['rate'].convert_objects(convert_numeric=True)

print may_df.dtypes #that's better

res = may_df.pivot_table('rate', 'state', 'demography')
print res.head()


#and this is going to spit out an array of Objects, each Object a state containing its demographics
res.reset_index().to_json("thejson.json", orient='records')
#and a .csv for good measure
res.reset_index().to_csv("thecsv.csv", encoding="utf-8")

Your "bad" output is UTF-8 displayed as CP1252.
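The round trip can be sketched in a few lines of stdlib Python (Python 3 syntax; the `u''` literals also work on 2.7): encoding the two problem characters as UTF-8 and then decoding those bytes as CP1252 reproduces exactly the kind of garbage described in the question.

```python
# The two characters from the question's headers.
ge = u"\u2265"      # "greater than or equal" sign
minus = u"\u2212"   # true minus sign (U+2212, not the ASCII hyphen)

# UTF-8 bytes misread as CP1252, the way a Windows editor without
# BOM detection would display them.
garbled = (ge + minus).encode("utf-8").decode("cp1252")
print(garbled)  # â‰¥âˆ’ -- the "trash characters"

# Decoding the same bytes as UTF-8 recovers the original text.
print(garbled.encode("cp1252").decode("utf-8") == ge + minus)  # True
```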

On Windows, many editors assume the default ANSI encoding (CP1252 on US Windows) instead of UTF-8 if there is no byte order mark (BOM) character at the start of the file. While a BOM is meaningless to the UTF-8 encoding, its UTF-8-encoded presence serves as a signature for some programs. For example, Microsoft Office's Excel requires it even on non-Windows OSes. Try:

df.to_csv('file.csv',encoding='utf-8-sig')

That encoder will add the BOM.
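A minimal stdlib sketch of what the `utf-8-sig` codec does (the path and sample row are made up for illustration; with pandas the call is simply `df.to_csv(path, encoding='utf-8-sig')`): the codec prepends the three BOM bytes `EF BB BF`, which is the signature Excel looks for.

```python
import codecs
import io
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "file.csv")

# Write a small CSV through the utf-8-sig codec.
with io.open(path, "w", encoding="utf-8-sig") as f:
    f.write(u"state,Adults \u226549 yrs\nAlabama,32.7\n")

# Read the raw bytes back: the file now starts with the UTF-8 BOM.
with io.open(path, "rb") as f:
    raw = f.read()

print(raw[:3] == codecs.BOM_UTF8)  # True (b'\xef\xbb\xbf')
```

Decoding with `utf-8-sig` (or opening in Excel) strips the BOM again, so the data itself is unchanged.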

encoding='utf-8-sig' does not work for me. Excel now reads the special characters fine, but the Tab separators are gone. However, encoding='utf-16' does work correctly: special characters are OK and the Tab separators survive. This is the solution for me.
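The commenter's fix can be sketched with the stdlib alone (the path and sample row are hypothetical; with pandas this corresponds to `df.to_csv(path, sep='\t', encoding='utf-16')`): a tab-separated row containing the true minus sign round-trips cleanly through UTF-16, which also writes its own BOM.

```python
import io
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "file.csv")
row = u"Alabama\t32.7\t18\u221249 yrs"  # U+2212 is the true minus sign

# Write a tab-separated line as UTF-16 (the codec emits a BOM itself).
with io.open(path, "w", encoding="utf-16") as f:
    f.write(row + u"\n")

# Read it back: tabs and the minus sign both survive.
with io.open(path, "r", encoding="utf-16") as f:
    back = f.read().strip()

print(back == row)  # True
```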
