
df.to_csv structuring the output

I am trying to write output to a csv file, but I am getting a different format than expected.

What do I need to change to get clean output?

Code:

import pandas as pd 
from datetime import datetime
import csv

df = pd.read_csv('one_hour.csv')
df.columns = ['date', 'startTime', 'endTime', 'day', 'count', 'unique']

count_med = df.groupby(['date'])[['count']].median()
unique_med = df.groupby(['date'])[['unique']].median()
date_count = df['date'].nunique()
# print(count_med)
# print(unique_med)

cols = ['date_count', 'count_med', 'unique_med']
outf = pd.DataFrame([[date_count, count_med, unique_med]], columns = cols)
outf.to_csv('date_med.csv', index=False, header=False)

Input: the data file is huge; here are just a few rows.

2004-01-05,21:00:00,22:00:00,Mon,16553,783
2004-01-05,22:00:00,23:00:00,Mon,18944,790
2004-01-05,23:00:00,00:00:00,Mon,17534,750
2004-01-06,00:00:00,01:00:00,Tue,17262,747
2004-01-06,01:00:00,02:00:00,Tue,19072,777
2004-01-06,02:00:00,03:00:00,Tue,18275,785
2004-01-06,03:00:00,04:00:00,Tue,13589,757
2004-01-06,04:00:00,05:00:00,Tue,16053,735
2004-01-06,05:00:00,06:00:00,Tue,11440,636

Output:

63,"              count
date               
2004-01-05  10766.0
2004-01-06  11530.0
2004-01-07  11270.0
2004-01-08  14819.5
2004-01-09  12933.5
2004-01-10  10088.0
2004-01-11  10923.0
2004-02-03  14760.5
...             ...
2004-02-07  10131.5
2004-02-08  11184.0

[63 rows x 1 columns]","            unique
date              
2004-01-05   633.0
2004-01-06   741.0
2004-01-07   752.5
2004-02-03   779.5
...            ...
2004-02-07   643.5

[63 rows x 1 columns]"

But the expected output should not look like this.

Expected output: the values rounded, alongside the date:

2004-01-05,10766,633 
2004-01-06,11530,741
2004-01-07,11270,752
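For context, the original output is messy because `count_med` and `unique_med` are whole DataFrames, so placing them as cell values in a one-row DataFrame makes `to_csv` write their string repr. One minimal fix is to align the two median Series on `date` instead. A sketch, assuming a small inline sample standing in for `one_hour.csv`:

```python
import io
import pandas as pd

# Hypothetical inline sample standing in for one_hour.csv
data = io.StringIO(
    "2004-01-05,21:00:00,22:00:00,Mon,16553,783\n"
    "2004-01-05,22:00:00,23:00:00,Mon,18944,790\n"
    "2004-01-06,00:00:00,01:00:00,Tue,17262,747\n"
    "2004-01-06,01:00:00,02:00:00,Tue,19072,777\n"
)
cols = ['date', 'startTime', 'endTime', 'day', 'count', 'unique']
df = pd.read_csv(data, names=cols)

# Concatenating the two per-date median Series side by side keeps
# 'date' as a shared index, so each row of the CSV is one date.
medians = pd.concat(
    [df.groupby('date')['count'].median(),
     df.groupby('date')['unique'].median()],
    axis=1,
).round().astype(int)
medians.to_csv('date_med.csv', header=False)
```

With this shape, `to_csv` writes one `date,count,unique` line per date instead of embedding DataFrame reprs inside cells.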

Try this:

cols = ['date', 'startTime', 'endTime', 'day', 'count', 'unique']

df = pd.read_csv(fn, header=None, names=cols)

(df.groupby(['date'])[['count', 'unique']]
   .agg({'count': 'median', 'unique': 'median'})
   .round()
   .to_csv('d:/temp/out.csv', header=None))

out.csv:

2004-01-05,764,17044.0
2004-01-06,757,17262.0
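Note that in the `out.csv` above the `unique` and `count` columns came out in the opposite order, because older pandas versions did not guarantee the column order of a dict passed to `.agg`. A sketch using named aggregation (available in pandas 0.25 and later), which pins the output column order explicitly; the inline sample data is a stand-in for the real file:

```python
import io
import pandas as pd

# Hypothetical inline sample standing in for the input file
csv_text = """2004-01-05,21:00:00,22:00:00,Mon,16553,783
2004-01-05,22:00:00,23:00:00,Mon,18944,790
2004-01-06,00:00:00,01:00:00,Tue,17262,747"""

cols = ['date', 'startTime', 'endTime', 'day', 'count', 'unique']
df = pd.read_csv(io.StringIO(csv_text), header=None, names=cols)

# Named aggregation: each output column is declared as
# new_name=(source_column, aggregation), so the order is explicit.
out = (df.groupby('date')
         .agg(count_med=('count', 'median'),
              unique_med=('unique', 'median'))
         .round()
         .astype(int))
print(out)
```

Here `count_med` is guaranteed to come before `unique_med` in the written CSV, regardless of pandas version (0.25+).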

You need:

import pandas as pd
import io

temp=u"""2004-01-05,21:00:00,22:00:00,Mon,16553,783
2004-01-05,22:00:00,23:00:00,Mon,18944,790
2004-01-05,23:00:00,00:00:00,Mon,17534,750
2004-01-06,00:00:00,01:00:00,Tue,17262,747
2004-01-06,01:00:00,02:00:00,Tue,19072,777
2004-01-06,02:00:00,03:00:00,Tue,18275,785
2004-01-06,03:00:00,04:00:00,Tue,13589,757
2004-01-06,04:00:00,05:00:00,Tue,16053,735
2004-01-06,05:00:00,06:00:00,Tue,11440,636"""
# after testing, replace io.StringIO(temp) with the filename
df = pd.read_csv(io.StringIO(temp), parse_dates=[0], names=['date', 'startTime', 'endTime', 'day', 'count', 'unique'])
print(df)

outf = df.groupby('date')[['count', 'unique']].median().round().astype(int)
print(outf)
            count  unique
date                     
2004-01-05  17534     783
2004-01-06  16658     752


outf.to_csv('date_med.csv', header=False)
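As a possible alternative to `round().astype(int)`, `to_csv` can also format the floats at write time via its `float_format` parameter, leaving the DataFrame's dtypes untouched. A minimal sketch with hypothetical median values:

```python
import io
import pandas as pd

# Hypothetical per-date medians, kept as floats
s = pd.Series([10766.0, 11530.0],
              index=['2004-01-05', '2004-01-06'],
              name='count')

# float_format='%.0f' renders each float with no decimal places,
# so the CSV shows whole numbers without casting the data to int.
buf = io.StringIO()
s.to_csv(buf, header=False, float_format='%.0f')
print(buf.getvalue())
```

This only changes the textual output; if downstream code needs integer dtypes, the `astype(int)` approach above is still the way to go.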

Timings:

In [20]: %timeit df.groupby('date')['count', 'unique'].median().round().astype(int)
The slowest run took 4.47 times longer than the fastest. This could mean that an intermediate result is being cached.
100 loops, best of 3: 2.67 ms per loop

In [21]: %timeit df.groupby(['date'])[['count','unique']].agg({'count':'median','unique':'median'}).round().astype(int)
The slowest run took 4.44 times longer than the fastest. This could mean that an intermediate result is being cached.
100 loops, best of 3: 3.64 ms per loop
