
How to force Pandas "read_csv" function to keep blank values

I have a CSV containing 1 column and 2302 rows, 8 of which are blank. By blank I mean they are completely empty (i.e. no spaces or anything).

When I read the CSV into a Pandas DataFrame using the Python code below, the resulting "judg_count" was truncated to 2294 rows (i.e. the 8 blank rows were automatically removed) instead of the expected 2302 rows.

judg_count=pd.read_csv('sample_data.csv')

I have tried the multiple approaches below:

judg_count=pd.read_csv('sample_data.csv').fillna(' ')

judg_count=pd.read_csv('sample_data.csv').replace('',np.nan)

judg_count = pd.read_csv('sample_data.csv', na_filter= False)

judg_count = pd.read_csv('sample_data.csv').fillna(value = 0)

Unfortunately, none of them worked: when I inspected the "judg_count" variable it still returned 2294 rows, with the blank rows automatically removed.

My question: is there a way to force Pandas to preserve those blank rows when reading the CSV?

Below is a screenshot of some of the rows in my CSV. Note the blank value at cell #25:

[screenshot: CSV rows, with the blank value at cell #25]

There is a keyword argument you can pass:

judg_count=pd.read_csv('sample_data.csv', skip_blank_lines=False)

You can read more in the docs here: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
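For example, here is a minimal sketch of how that might look, assuming the one-column 'sample_data.csv' file described in the question:

import pandas as pd

# Keep blank lines instead of silently dropping them; they come back as NaN rows.
judg_count = pd.read_csv('sample_data.csv', skip_blank_lines=False)
print(len(judg_count))  # expecting all 2302 rows to be preserved now

# Optionally turn the NaN placeholders into empty strings afterwards.
judg_count = judg_count.fillna('')

This also explains why the earlier attempts with fillna and na_filter=False had no effect: skip_blank_lines defaults to True, so the blank lines are discarded while parsing, before any NA handling can be applied to them.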
