I have the following dataframe:
import pandas as pd

df = pd.DataFrame({'ItemType': ['Red', 'White', 'Red', 'Blue', 'White', 'White', 'White', 'Green'],
                   'ItemPrice': [10, 11, 12, 13, 14, 15, 16, 17],
                   'ItemID': ['A', 'A', 'B', 'B', 'C', 'C', 'D', 'D']})
I would like to get the records (rows) whose ItemID contains only the "White" ItemType, as a DataFrame.
I have attempted the following solution:
types = ['Red','Blue','Green']
~df.groupby('ItemID')['ItemType'].any().apply(lambda u: u in(types))
But this gives me an incorrect result (D should be False), and in the form of a Series:
A False
B False
C True
D True
Thank you!
You should avoid using apply here, as it is usually quite slow. Instead, assign a flag column before you groupby, and then use all to assert that none of a group's values are in types:
df.assign(flag=~df.ItemType.isin(types)).groupby('ItemID').flag.all()
ItemID
A False
B False
C True
D False
Name: flag, dtype: bool
However, just to demonstrate the logic of the operation, and to show what was incorrect about your approach, here is a working version using apply:
~df.groupby('ItemID').ItemType.apply(lambda x: any(i in types for i in x))
You need to use any inside the lambda, as opposed to calling it on the Series before using apply.
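For reference, here is a self-contained run of that corrected apply version on the sample frame, with the same types list as above:

```python
import pandas as pd

df = pd.DataFrame({'ItemType': ['Red', 'White', 'Red', 'Blue', 'White', 'White', 'White', 'Green'],
                   'ItemPrice': [10, 11, 12, 13, 14, 15, 16, 17],
                   'ItemID': ['A', 'A', 'B', 'B', 'C', 'C', 'D', 'D']})
types = ['Red', 'Blue', 'Green']

# any() runs per group inside the lambda; the resulting boolean Series is then negated
mask = ~df.groupby('ItemID').ItemType.apply(lambda x: any(i in types for i in x))
print(mask)
# ItemID
# A    False
# B    False
# C     True
# D    False
# Name: ItemType, dtype: bool
```

Note that this matches the flag-column result above; it is just slower because the membership test runs in Python per group.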
To access the rows where this condition is met, you may use transform:
df[df.assign(flag=~df.ItemType.isin(types)).groupby('ItemID').flag.transform('all')]
ItemType ItemPrice ItemID
4 White 14 C
5 White 15 C
An alternative method is to calculate an array of non-White ItemID values, then filter your dataframe:
non_whites = df.loc[df['ItemType'].ne('White'), 'ItemID'].unique()
res = df[~df['ItemID'].isin(non_whites)]
print(res)
ItemType ItemPrice ItemID
4 White 14 C
5 White 15 C
You can also use GroupBy, but it's not absolutely necessary.
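For completeness, one such GroupBy-based sketch uses GroupBy.filter to keep only the groups whose ItemType values are all 'White'; it gives the same result as the approaches above, but iterates groups in Python, so it will generally be slower on large frames:

```python
import pandas as pd

df = pd.DataFrame({'ItemType': ['Red', 'White', 'Red', 'Blue', 'White', 'White', 'White', 'Green'],
                   'ItemPrice': [10, 11, 12, 13, 14, 15, 16, 17],
                   'ItemID': ['A', 'A', 'B', 'B', 'C', 'C', 'D', 'D']})

# keep only the groups where every ItemType is 'White'
res = df.groupby('ItemID').filter(lambda g: g['ItemType'].eq('White').all())
print(res)
#   ItemType  ItemPrice ItemID
# 4    White         14      C
# 5    White         15      C
```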