This is a continuation of this question. Say I have code that produces the following table:

df.groupby('A')['B'].value_counts().unstack().stack(dropna=False).reset_index(name="Count").set_index(['A', 'B'])
| A  | B  | Count |
|----|----|-------|
| a1 | b1 | 1     |
|    | b2 | 1     |
|    | b3 | NaN   |
| a2 | b1 | 1     |
|    | b2 | NaN   |
|    | b3 | 1     |
The problem is that column B might have many unique values, so the rows belonging to one A group end up far apart from each other. Eventually this should all be stored in an Excel file via df.to_excel(). The proposed solution was to generate one Excel file per A value: instead of a single grouped.xlsx holding the whole pivot table at once, produce A_a1.xlsx, A_a2.xlsx, and so on.
Question: how do you do it?
I have some options in mind, like getting the list of unique A values and doing something like df_loc = df.loc[df['A'] == 'a1'] for each one, but maybe there is a nicer way?
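For context, here is a minimal runnable sketch of the chain from the question, using a small made-up DataFrame that reproduces the table above (the sample data is an assumption, not from the original post):

```python
import pandas as pd

# Hypothetical sample data matching the table in the question.
df = pd.DataFrame({
    'A': ['a1', 'a1', 'a2', 'a2'],
    'B': ['b1', 'b2', 'b1', 'b3'],
})

# Count B values per A group, then round-trip through unstack/stack
# with dropna=False so missing (A, B) combinations appear as NaN.
counts = (
    df.groupby('A')['B'].value_counts()
      .unstack()
      .stack(dropna=False)
      .reset_index(name='Count')
      .set_index(['A', 'B'])
)
print(counts)
```

This yields one row per (A, B) pair, including the pairs that never occur, which is what makes the table grow quickly when B has many unique values.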
If I understand correctly, you're looking for an individual Excel file for each A value? If so, the following should work (path is your output directory):

for i in df['A'].unique():
    df[df['A'] == i].to_excel(path + 'A_' + str(i) + '.xlsx')
You can tweak the path for your needs but this is a pretty easy way to do what you're looking for.
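An alternative that avoids the manual boolean mask is to iterate over the groupby object directly, since it yields (value, sub-frame) pairs. A minimal sketch, with a made-up DataFrame and an assumed output directory path (writing .xlsx requires an Excel engine such as openpyxl):

```python
import pandas as pd

# Hypothetical sample data; 'path' is an assumed output directory.
df = pd.DataFrame({
    'A': ['a1', 'a1', 'a2', 'a2'],
    'B': ['b1', 'b2', 'b1', 'b3'],
})
path = './'

# groupby iterates over (value, sub-frame) pairs, so each group
# is written straight to its own file without re-filtering df.
for value, group in df.groupby('A'):
    group.to_excel(path + 'A_' + str(value) + '.xlsx', index=False)
```

This does one pass over the data instead of one filter per unique value, which matters if A has many distinct values.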