I want to perform analysis on a dataframe. This is my dataframe format.
df_Input = pd.read_excel("/home/cc/Downloads/date.xlsx")
ID | BOOK | Type
-----------------------
1 | ABC | MAR
45 | PQR | TAB
45 | EDF | Fin
1 | DCF | oop
45 | PQR | TAB
I want to find, for each unique ID, the count of unique values and the unique values themselves in each of the other columns. The output should be a dataframe as shown below.
ID | BOOK_count | BOOK_values | Type_count | Type_values
---------------------------------------------------------
1  | 2          | [ABC,DCF]   | 2          | [MAR,oop]
45 | 2          | [PQR,EDF]   | 2          | [Fin,TAB]
I managed to do it, but only with a lot of loops. Thanks in advance.
IIUC, you can use this:
# Select the columns with a list (tuple indexing is deprecated) and name
# each aggregation with a (name, function) tuple to avoid renaming later.
df_out = df.groupby('ID')[['BOOK', 'Type']].agg([('count', 'nunique'), ('values', lambda x: list(set(x)))])
df_out.columns = df_out.columns.map('_'.join)
print(df_out)
Output:

    BOOK_count BOOK_values  Type_count Type_values
ID
1            2  [ABC, DCF]           2  [MAR, oop]
45           2  [EDF, PQR]           2  [TAB, Fin]
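On pandas 0.25+ you can also get flat, exactly-named columns in one step with named aggregation; a minimal sketch, rebuilding the question's sample data inline:

```python
import pandas as pd

# Sample data from the question
df = pd.DataFrame({
    'ID':   [1, 45, 45, 1, 45],
    'BOOK': ['ABC', 'PQR', 'EDF', 'DCF', 'PQR'],
    'Type': ['MAR', 'TAB', 'Fin', 'oop', 'TAB'],
})

# Named aggregation: each keyword becomes one flat output column
df_out = df.groupby('ID').agg(
    BOOK_count=('BOOK', 'nunique'),
    BOOK_values=('BOOK', lambda s: sorted(s.unique())),
    Type_count=('Type', 'nunique'),
    Type_values=('Type', lambda s: sorted(s.unique())),
).reset_index()
print(df_out)
```

Using `sorted(s.unique())` instead of `list(set(x))` also makes the list order deterministic.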
Let's say we have this dataframe:
ID BOOK type
0 1 ABC MAR
1 0 PQR TAB
2 1 EDF Fin
3 0 DCF oop
4 1 PQR TAB
You can use an aggregation dict as follows (nested renaming dicts were removed in pandas 0.25, so each column maps to a list of (name, function) tuples):
aggreg = {
    'BOOK': [
        ('BOOK_COUNT', len),
        ('BOOK_values', lambda r: r.tolist()),
    ],
    'type': [
        ('Type_COUNT', len),
        ('Type_values', lambda r: r.tolist()),
    ],
}
Then, use groupby:
df.groupby('ID').agg(aggreg)
# Output:
BOOK type
BOOK_COUNT BOOK_values Type_COUNT Type_values
ID
0 2 [PQR, DCF] 2 [TAB, oop]
1 3 [ABC, EDF, PQR] 3 [MAR, Fin, TAB]
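If the two-level header is unwanted, it can be flattened by dropping the original column level; a short sketch, assuming the aggreg dict and sample dataframe above:

```python
import pandas as pd

# Sample data from this answer
df = pd.DataFrame({
    'ID':   [1, 0, 1, 0, 1],
    'BOOK': ['ABC', 'PQR', 'EDF', 'DCF', 'PQR'],
    'type': ['MAR', 'TAB', 'Fin', 'oop', 'TAB'],
})

# Each column maps to a list of (name, function) tuples
aggreg = {
    'BOOK': [('BOOK_COUNT', len), ('BOOK_values', lambda r: r.tolist())],
    'type': [('Type_COUNT', len), ('Type_values', lambda r: r.tolist())],
}

out = df.groupby('ID').agg(aggreg)
out.columns = out.columns.droplevel(0)  # drop 'BOOK'/'type', keep the custom names
print(out)
```

Note that `len` counts all rows per group (including duplicates like PQR), whereas `nunique` in the other answer counts distinct values only.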