I'm working with some datasets that contain glitches. For instance, a dataset has a numeric column: at a glance every field looks like a number, but the column's dtype is object, and some fields actually hold non-numeric values.
For example:
A dataset has an "Age" column: [23, 34, 54, 33, pp, 27, 43], and its datatype is object.
Notice that it has a string value "pp" mixed in with the numeric values, which is what I'm calling a glitch in the dataset.
My question is: how can I find the rows that contain glitches like "pp"?
Thanks.
You can use pd.to_numeric() with errors='coerce' so that non-numeric values are converted to NaN, then check for NaN with isna(). Finally, use .loc to select the row(s) holding those NaN values, i.e. the non-numeric entries:
df.loc[pd.to_numeric(df['Age'], errors='coerce').isna()]
Demo

import pandas as pd

data = {"Age": [23, 34, 54, 33, 'pp', 27, 43]}
df = pd.DataFrame(data)
df.loc[pd.to_numeric(df['Age'], errors='coerce').isna()]

Output:

  Age
4  pp
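Once the glitch rows are located, you will usually also want to clean them up. A minimal sketch of two common follow-ups (the variable names here are illustrative, not from the answer above): either drop the glitch rows and convert the column to a numeric dtype, or keep all rows and let the glitches become NaN.

```python
import pandas as pd

# Sample column containing a non-numeric "glitch" value (hypothetical data)
df = pd.DataFrame({"Age": [23, 34, 54, 33, 'pp', 27, 43]})

# Coerce non-numeric entries to NaN
numeric = pd.to_numeric(df['Age'], errors='coerce')

# Option 1: drop the glitch rows and cast the column to int
cleaned = df.loc[numeric.notna()].copy()
cleaned['Age'] = cleaned['Age'].astype(int)

# Option 2: keep every row; glitches become NaN (column dtype becomes float)
df['Age'] = numeric
```

Option 1 gives a strictly numeric column; option 2 preserves row alignment with the rest of the DataFrame, which matters if other columns in those rows are still valid.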