In the following CSV file, some columns contain mixed datatypes, i.e. within the same column some rows are integers and others are strings.
"ip_v", "ittl", "olen", "mss", "OS_type"
"*", 64, 0, "*", "Windows"
4, 64, 0, 1430, "Linux"
"*", "64-", 0, 1460, "MAC-OS"
I read the CSV file into a pandas DataFrame:
df = pd.read_csv("file.csv")
Then I iterate through each row in a for loop and check the type of each value before proceeding further.
However, although the types differ in the CSV file, in Python all the values are read as strings. For example, when I check the type of the "ittl" value in each row, they all come out as str, but I was expecting rows 0 and 1 to be int and row 2 to be str.
Why am I facing this problem? What is going on?
for index, row in df.iterrows():
    print(row['ittl'], type(row['ittl']))
Output:
64 <class 'str'>
64 <class 'str'>
64- <class 'str'>
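The behaviour above can be reproduced with an in-memory copy of the CSV from the question (using `io.StringIO` so the sketch is self-contained; the `skipinitialspace` flag handles the spaces after the commas in the sample):

```python
import io
import pandas as pd

# In-memory copy of the CSV from the question
csv_text = '''"ip_v", "ittl", "olen", "mss", "OS_type"
"*", 64, 0, "*", "Windows"
4, 64, 0, 1430, "Linux"
"*", "64-", 0, 1460, "MAC-OS"
'''

df = pd.read_csv(io.StringIO(csv_text), skipinitialspace=True)

# pandas assigns a single dtype to each column; "64-" cannot be parsed
# as a number, so the whole "ittl" column falls back to the object dtype
# and every value, including the unquoted 64s, is kept as a Python str
print(df["ittl"].dtype)                  # object
print([type(v) for v in df["ittl"]])     # all three values are str
```

This is the answer to the "why": type inference in `read_csv` is per column, not per cell, so one non-numeric value forces the entire column to strings.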
This happens because pandas assigns a single dtype to each column, not to each cell. Since the "ittl" column contains the non-numeric string "64-", the whole column falls back to the object dtype and every value is kept as a string. If you want to make the column an int type, you can do:
df['ittl'] = df['ittl'].astype(int)
Note, however, that astype(int) will raise a ValueError on this particular data, because "64-" cannot be parsed as an integer.