Python pandas error while reading and writing csv file
I am not a Python guy, but I have to write something like this occasionally. I wrote this code a few months ago and it served its purpose without any errors. But today, when I needed to use the same script on some updated csv files, it gave me errors that I could not fix on my own. Please see the code below along with the error.
import pandas as pd
#import xlsxwriter
data_df = pd.read_excel("New2020Snap.xlsx")
data_df['MaxDate'] = data_df.groupby(['LeadId', 'LeadStatus'])['CreatedDate'].transform('max')
data_df['MinDate'] = data_df.groupby(['LeadId', 'LeadStatus'])['CreatedDate'].transform('min')
data_df['Difference'] = pd.to_datetime(data_df['MaxDate']) - pd.to_datetime(data_df['MinDate'])
agg_df = data_df.groupby(['LeadId', 'LeadStatus', 'Email']).agg(MaxDate=('CreatedDate', 'max'),
                                                                MinDate=('CreatedDate', 'min')).reset_index()
agg_df['Difference'] = pd.to_datetime(agg_df['MaxDate']) - pd.to_datetime(agg_df['MinDate'])
#data_df.to_json(orient='records')
with pd.ExcelWriter('../out/ComputedReport.xlsx', engine='xlsxwriter') as writer:
    data_df.to_excel(writer, sheet_name='New Computed Data', index=False)
    agg_df.to_excel(writer, sheet_name='Computed Agg Data', index=False)
print(data_df)
Below is the error I get from running the above script.
Traceback (most recent call last):
File "C:\Users\w-s\IdeaProjects\PythonForEverybody\src\pandas_read_opps.py", line 6, in <module>
data_df['MaxDate'] = data_df.groupby(['OpportunityID', 'OpportunityName', 'ToStage'])['CloseDate'].transform('max')
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\generic.py", line 511, in transform
result = getattr(self, func)(*args, **kwargs)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\groupby.py", line 1559, in max
return self._agg_general(
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\groupby.py", line 1017, in _agg_general
result = self.aggregate(lambda x: npfunc(x, axis=self.axis))
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\generic.py", line 255, in aggregate
return self._python_agg_general(
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\groupby.py", line 1094, in _python_agg_general
return self._python_apply_general(f, self._selected_obj)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\groupby.py", line 892, in _python_apply_general
keys, values, mutated = self.grouper.apply(f, data, self.axis)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\ops.py", line 213, in apply
res = f(group)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\groupby.py", line 1062, in <lambda>
f = lambda x: func(x, *args, **kwargs)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\groupby.py", line 1017, in <lambda>
result = self.aggregate(lambda x: npfunc(x, axis=self.axis))
File "<__array_function__ internals>", line 5, in amax
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\core\fromnumeric.py", line 2705, in amax
return _wrapreduction(a, np.maximum, 'max', axis, None, out,
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\core\fromnumeric.py", line 85, in _wrapreduction
return reduction(axis=axis, out=out, **passkwargs)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\generic.py", line 11468, in stat_func
return self._reduce(
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\series.py", line 4248, in _reduce
return op(delegate, skipna=skipna, **kwds)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\nanops.py", line 129, in f
result = alt(values, axis=axis, skipna=skipna, **kwds)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\nanops.py", line 873, in reduction
result = getattr(values, meth)(axis)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\core\_methods.py", line 39, in _amax
return umr_maximum(a, axis, None, out, keepdims, initial, where)
TypeError: '>=' not supported between instances of 'datetime.datetime' and 'str'
Process finished with exit code 1
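The failure can be reproduced on a small scale: read_excel can return an object-dtype column that mixes real datetime objects with plain strings, and the group-wise max then has to compare the two. A minimal sketch with made-up data (the column values are hypothetical, not from the actual sheet):

```python
import datetime
import pandas as pd

# Hypothetical data: an object-dtype column mixing datetime objects with
# plain strings, which is what read_excel can produce from a messy sheet.
df = pd.DataFrame({
    "LeadId": [1, 1, 2],
    "CreatedDate": [datetime.datetime(2020, 1, 5), "2020-01-10", "2020-01-02"],
})

# Comparing a datetime to a str inside the group-wise max raises the same
# TypeError as in the traceback above.
try:
    df.groupby("LeadId")["CreatedDate"].transform("max")
except TypeError as exc:
    print("TypeError:", exc)

# Converting the source column first makes every value a Timestamp, so the
# comparison is well-defined.
df["CreatedDate"] = pd.to_datetime(df["CreatedDate"])
max_per_group = df.groupby("LeadId")["CreatedDate"].transform("max")
print(max_per_group)
```

This is why converting the source date column right after reading the file, rather than the derived columns later, makes the groupby succeed.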
So basically I was working with two separate copies of the same code, each with minor variations. I made the change suggested by Guillaume Ansanay-Alex in the first comment under my question; the answer I will mark as accepted after this edit pointed to the exact line of code. That was the error in the code.
The working copy of my code is below.
import pandas as pd
#import xlsxwriter
data_df = pd.read_excel("OppAvgStageDuration.xlsx")
#suggested by the first comment and answered by the accepted one.
data_df['CloseDate'] = pd.to_datetime(data_df['CloseDate'])
data_df['MaxDate'] = data_df.groupby(['OpportunityID', 'OpportunityName', 'ToStage'])['CloseDate'].transform('max')
data_df['MinDate'] = data_df.groupby(['OpportunityID', 'OpportunityName', 'ToStage'])['CloseDate'].transform('min')
data_df['Difference'] = pd.to_datetime(data_df['MaxDate']) - pd.to_datetime(data_df['MinDate'])
agg_df = data_df.groupby(['OpportunityID', 'OpportunityName', 'ToStage']).agg(MaxDate=('CloseDate', 'max'),
                                                                              MinDate=('CloseDate', 'min')).reset_index()
agg_df['Difference'] = pd.to_datetime(agg_df['MaxDate']) - pd.to_datetime(agg_df['MinDate'])
#data_df.to_json(orient='records')
with pd.ExcelWriter('../out/ComputedReportOpps.xlsx', engine='xlsxwriter') as writer:
    data_df.to_excel(writer, sheet_name='New Computed Data', index=False)
    agg_df.to_excel(writer, sheet_name='Computed Agg Data', index=False)
print(data_df)
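A side note on the Difference column: subtracting two datetime columns yields Timedelta values, which can be awkward to read once written to Excel. For reporting it often helps to project them onto plain numbers. A small sketch with made-up durations standing in for the MaxDate minus MinDate result:

```python
import pandas as pd

# Hypothetical durations standing in for the MaxDate - MinDate column.
diff = pd.Series(pd.to_timedelta(["3 days 12:00:00", "0 days 06:30:00"]))

# The .dt accessor projects Timedeltas onto plain numbers.
days = diff.dt.days                  # whole-day component only
seconds = diff.dt.total_seconds()    # full duration in seconds
print(days.tolist(), seconds.tolist())
```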
Currently you are converting the derived MaxDate and MinDate columns with to_datetime(), but try converting the source CreatedDate column with to_datetime() from the very beginning:
data_df = pd.read_excel("New2020Snap.xlsx")
data_df['CreatedDate'] = pd.to_datetime(data_df['CreatedDate'])
If that does not work, then I think it is what Guillaume commented: the column has mixed formats.
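If the source column really does mix formats, pd.to_datetime may refuse to parse it outright. One common approach, sketched here with made-up values, is errors='coerce', which turns unparseable entries into NaT so the offending rows can be inspected instead of crashing the script:

```python
import pandas as pd

# Hypothetical column with one value no date parser will accept.
raw = pd.Series(["2020-01-05", "2020-01-10", "not a date"])

# errors='coerce' maps unparseable entries to NaT instead of raising.
parsed = pd.to_datetime(raw, errors="coerce")

# The NaT positions point back to the rows that need cleaning.
bad_rows = raw[parsed.isna()]
print(parsed)
print("unparseable:", bad_rows.tolist())
```

Recent pandas (2.0+) also accepts format='mixed' to parse each element independently; on older versions, coercing and inspecting the NaT rows is the safer route.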