I have dates as strings in the format 12/29/2011, and I only need the year, so I wrote this function to extract it. But I get:
"ValueError: cannot convert float NaN to integer". It seems I have NaNs somewhere, and the only solution I can think of is dropping the rows that contain them, but I can't do that because I need the data from the other columns.
def get_year(date):
    year = ''
    try:
        year = date[-4:]
    except TypeError:
        year = str(date)[0:4]
    return (year).astype(int)
The get_year function works when I use this code:
for i in df.index:
    if (not pd.isna(df['yearOpened'][i]) and get_year(df['yearOpened'][i]) > 1955):
        print('something')
I am using .loc and want to know how to skip the NaNs when using .loc:
`df.loc[get_year(df['yearOpened'])]`
You can use Python's built-in datetime library to grab the year from your string with ease.
from datetime import datetime
date = '12/29/2011'
dt = datetime.strptime(date, '%m/%d/%Y')  # create a datetime object from the string
dt.year
Output: 2011
OR
You could use the pandas.to_datetime function, which will handle the NaN values for you.
import pandas as pd
import numpy as np
dates = ['12/29/2011', '12/30/2012', np.nan]
dt = pd.to_datetime(dates)
dt.year
Output: Float64Index([2011.0, 2012.0, nan], dtype='float64')
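To tie this back to the `.loc` part of the question: a minimal sketch (using a hypothetical `yearOpened` column like the one described) of filtering rows where the year is after 1955. NaN entries become NaT under `errors='coerce'`, their year becomes NaN, and NaN compares False against 1955, so those rows are skipped automatically while the other columns are kept.

```python
import pandas as pd
import numpy as np

# hypothetical frame standing in for the asker's data
df = pd.DataFrame({'yearOpened': ['12/29/2011', '6/15/1950', np.nan],
                   'other': [1, 2, 3]})

# errors='coerce' turns unparseable entries (including NaN) into NaT
years = pd.to_datetime(df['yearOpened'], errors='coerce').dt.year

# NaN years compare False, so NaN rows drop out of the mask
opened_after_1955 = df.loc[years > 1955]
```

This keeps the whole row (all columns) for every date that parsed and passed the comparison, with no explicit NaN check needed.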
Edit (in response to comments)
To get a DataFrame of all the bad rows, you can simply index out all the rows that return NaT from the `pd.to_datetime(df['dates'], errors='coerce')` operation.
data = {'dates': ['12/29/2011', '12/30/2012', np.nan, '1/1/9999'],
        'values': [1, 2, 3, 4]}
df = pd.DataFrame(data)
dt = pd.to_datetime(df['dates'], errors='coerce')
bad_rows = df[dt.isna()]  # index out all rows whose date failed to parse (NaT)
bad_rows.to_csv('bad_data.csv')
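Conversely, the complement of that mask keeps only the rows that parsed cleanly, which is what the question asks for. A minimal sketch with the same example data:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'dates': ['12/29/2011', '12/30/2012', np.nan, '1/1/9999'],
                   'values': [1, 2, 3, 4]})

dt = pd.to_datetime(df['dates'], errors='coerce')

# keep only rows whose date parsed successfully; NaN and the
# out-of-range '1/1/9999' both coerce to NaT and are dropped
good_rows = df[dt.notna()]
```

Note that `'1/1/9999'` is beyond the maximum pandas Timestamp, so it also coerces to NaT and is treated as a bad row.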