This is probably easy, but I couldn't find it.
I want to copy a large chunk of data within a column where a concat wasn't filled in properly, so NaN values sit below the real values.
Small example:
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'col1': ['a', 'b', 'c', 'd', 'e', 'f', 'g',
                             np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]})
I did this:
df1['col1'][7:14] = df1['col1'][0:7]
It worked fine, although it uses chained indexing.
But what about larger data sets where I don't know the slice boundaries in advance? Is there a built-in function for this?
Try 1) not chaining indexers and 2) passing a NumPy array on assignment (so pandas doesn't align on the index and fill NaN). Note that .loc slicing is label-based and inclusive of both endpoints, so the bounds differ from the position-based slices above:
df1.loc[7:13, 'col1'] = df1.loc[0:6, 'col1'].values
Output:
col1
0 a
1 b
2 c
3 d
4 e
5 f
6 g
7 a
8 b
9 c
10 d
11 e
12 f
13 g
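For larger data sets where the slice boundaries aren't known in advance, one option is to compute them from the data. A minimal sketch, assuming the NaNs form a contiguous tail below the filled values (as in the example) and that the tail should be filled by repeating the valid block:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'col1': ['a', 'b', 'c', 'd', 'e', 'f', 'g'] + [np.nan] * 7})

# Derive the boundaries instead of hard-coding them.
n_valid = df1['col1'].notna().sum()      # number of filled values at the top
tail_len = len(df1) - n_valid            # number of NaN slots below them

# Tile the valid block to cover the tail, then assign positionally.
values = df1['col1'].iloc[:n_valid].to_numpy()
fill = np.tile(values, tail_len // n_valid + 1)[:tail_len]
df1.iloc[n_valid:, df1.columns.get_loc('col1')] = fill
```

This avoids chained indexing and works for any tail length, not just one exactly equal to the valid block.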