How to truncate a column in a Pandas time series data frame so as to remove leading and trailing zeros?
Pandas: count zeros in a time series
I have a daily time series [1980 to present] in which I need to check each daily time step for zeros and systematically remove records. I would ultimately like to vectorize this solution so that I can preprocess these operations before continuing with my analysis. Say I have the dataframe df:
date name elev_exact swe
0 1990-10-30 COTTONWOOD_CREEK 2337.816 0.01524
1 1990-10-30 EMIGRANT_SUMMIT 2252.472 0.00000
2 1990-10-30 PHILLIPS_BENCH 2499.360 0.05334
3 1990-10-30 PINE_CREEK_PASS 2048.256 0.00000
4 1990-10-30 SALT_RIVER_SUMMIT 2328.672 0.00000
5 1990-10-30 SEDGWICK_PEAK 2392.680 0.00000
6 1990-10-30 SHEEP_MTN 2026.920 0.00000
7 1990-10-30 SLUG_CREEK_DIVIDE 2202.180 0.00000
8 1990-10-30 SOMSEN_RANCH 2072.640 0.00000
9 1990-10-30 WILDHORSE_DIVIDE 1978.152 0.00000
10 1990-10-30 WILLOW_CREEK 2462.784 0.01778
11 1991-03-15 COTTONWOOD_CREEK 2337.816 0.41910
12 1991-03-15 EMIGRANT_SUMMIT 2252.472 0.42418
13 1991-03-15 PHILLIPS_BENCH 2499.360 0.52832
14 1991-03-15 PINE_CREEK_PASS 2048.256 0.32258
15 1991-03-15 SALT_RIVER_SUMMIT 2328.672 0.23876
16 1991-03-15 SEDGWICK_PEAK 2392.680 0.39878
17 1991-03-15 SHEEP_MTN 2026.920 0.31242
18 1991-03-15 SLUG_CREEK_DIVIDE 2202.180 0.29464
19 1991-03-15 SOMSEN_RANCH 2072.640 0.29972
20 1991-03-15 WILDHORSE_DIVIDE 1978.152 0.35052
21 1991-03-15 WILLOW_CREEK 2462.784 0.60706
22 1991-10-25 COTTONWOOD_CREEK 2337.816 0.01270
23 1991-10-25 EMIGRANT_SUMMIT 2252.472 0.01016
24 1991-10-25 PHILLIPS_BENCH 2499.360 0.02286
25 1991-10-25 PINE_CREEK_PASS 2048.256 0.00508
26 1991-10-25 SALT_RIVER_SUMMIT 2328.672 0.01016
27 1991-10-25 SEDGWICK_PEAK 2392.680 0.00254
28 1991-10-25 SHEEP_MTN 2026.920 0.00000
29 1991-10-25 SLUG_CREEK_DIVIDE 2202.180 0.00762
30 1991-10-25 SOMSEN_RANCH 2072.640 0.00000
31 1991-10-25 WILDHORSE_DIVIDE 1978.152 0.00508
32 1991-10-25 WILLOW_CREEK 2462.784 0.02032
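For testing, a small frame with the same columns can be built by hand (a sketch; the values are copied from the first rows of the table above):

```python
import pandas as pd

# First four rows of the sample data above, enough to exercise the logic
df = pd.DataFrame({
    'date': ['1990-10-30'] * 4,
    'name': ['COTTONWOOD_CREEK', 'EMIGRANT_SUMMIT',
             'PHILLIPS_BENCH', 'PINE_CREEK_PASS'],
    'elev_exact': [2337.816, 2252.472, 2499.360, 2048.256],
    'swe': [0.01524, 0.00000, 0.05334, 0.00000],
})
```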
The problem is that I want to find days with more than one zero swe measurement and keep only the zero observation with the largest elev_exact. I then need to merge those kept zero records back into df.
Here is a groupby loop that achieves what I want:
result = pd.DataFrame()
for name, group in df.groupby('date'):
    non_zero = group.where(group.swe > 0).dropna()
    if not group.equals(non_zero):
        zeros = group.where(group.swe == 0).dropna()
        zero_kept = zeros.loc[zeros.elev_exact.idxmax()]
        # DataFrame.append was removed in pandas 2.0; pd.concat is the replacement
        out = pd.concat([non_zero, zero_kept.to_frame().T])
        out = out[out.elev_exact >= zero_kept.elev_exact]
        result = pd.concat([result, out])
    else:
        result = pd.concat([result, non_zero])
I don't mind using groupby, but I would like to use it more systematically so that I don't need the inner if-else branch.
Here is how I thought about the problem. First, count the zeros per date:
zero_count = df.groupby('date').apply(lambda x: np.count_nonzero(x == 0))
zero_count = zero_count.where(zero_count > 1).dropna()
Then separate out the dates where zero_count > 1 and, for each of those dates, locate the zero row with the highest elevation:
zero_fix = zero_count.where(zero_count > 1).dropna()
fixes = df[df.date.isin(zero_fix.index)].dropna()
fixes = fixes.loc[fixes[fixes.swe == 0].groupby('date')['elev_exact'].idxmax().to_list()]
Finally, map a per-date elevation threshold (lu_dict, a date-to-elevation lookup built from fixes) back onto df and filter:
df.loc[:, 'threshold'] = df.date.map(lu_dict)
df = df.replace(np.nan, 0)
df = df[df.elev_exact >= df.threshold].drop('threshold', axis=1)
This also works, but the lambda in step 1 is very slow. Is there another way to count the zeros?
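If the slow part is only the zero counting, the Python-level lambda can be replaced by summing a boolean mask per date (a sketch, assuming the same df column names as above):

```python
import pandas as pd

df = pd.DataFrame({
    'date': ['1990-10-30', '1990-10-30', '1990-10-30', '1991-03-15'],
    'elev_exact': [2337.816, 2252.472, 2499.360, 2048.256],
    'swe': [0.01524, 0.00000, 0.00000, 0.32258],
})

# swe == 0 gives a boolean Series; grouping it by date and summing
# counts the zeros per day without calling into Python for each group
zero_count = df['swe'].eq(0).groupby(df['date']).sum()

# dates that need fixing: more than one zero reading that day
zero_fix = zero_count[zero_count > 1]
```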
Expected output:
date name elev_exact swe
2 1990-10-30 PHILLIPS_BENCH 2499.360 0.05334
5 1990-10-30 SEDGWICK_PEAK 2392.680 0.00000
10 1990-10-30 WILLOW_CREEK 2462.784 0.01778
11 1991-03-15 COTTONWOOD_CREEK 2337.816 0.41910
12 1991-03-15 EMIGRANT_SUMMIT 2252.472 0.42418
13 1991-03-15 PHILLIPS_BENCH 2499.360 0.52832
14 1991-03-15 PINE_CREEK_PASS 2048.256 0.32258
15 1991-03-15 SALT_RIVER_SUMMIT 2328.672 0.23876
16 1991-03-15 SEDGWICK_PEAK 2392.680 0.39878
17 1991-03-15 SHEEP_MTN 2026.920 0.31242
18 1991-03-15 SLUG_CREEK_DIVIDE 2202.180 0.29464
19 1991-03-15 SOMSEN_RANCH 2072.640 0.29972
20 1991-03-15 WILDHORSE_DIVIDE 1978.152 0.35052
21 1991-03-15 WILLOW_CREEK 2462.784 0.60706
22 1991-10-25 COTTONWOOD_CREEK 2337.816 0.01270
23 1991-10-25 EMIGRANT_SUMMIT 2252.472 0.01016
24 1991-10-25 PHILLIPS_BENCH 2499.360 0.02286
26 1991-10-25 SALT_RIVER_SUMMIT 2328.672 0.01016
27 1991-10-25 SEDGWICK_PEAK 2392.680 0.00254
29 1991-10-25 SLUG_CREEK_DIVIDE 2202.180 0.00762
30 1991-10-25 SOMSEN_RANCH 2072.640 0.00000
32 1991-10-25 WILLOW_CREEK 2462.784 0.02032
You can try splitting the dataframe into non-zero and zero parts, sorting the zero dataframe by elev_exact descending, and using drop_duplicates with a subset on the date column. Finally, use pd.concat to join the two frames back together and sort:
df_nonzeroes = df[df['swe'].ne(0)]
df_zeroes = df[df['swe'].eq(0)].sort_values('elev_exact', ascending=False).drop_duplicates(subset=['date'])
df_out = pd.concat([df_nonzeroes, df_zeroes]).sort_index()
print(df_out)
Output:
date name elev_exact swe
0 1990-10-30 COTTONWOOD_CREEK 2337.816 0.01524
2 1990-10-30 PHILLIPS_BENCH 2499.360 0.05334
5 1990-10-30 SEDGWICK_PEAK 2392.680 0.00000
10 1990-10-30 WILLOW_CREEK 2462.784 0.01778
11 1991-03-15 COTTONWOOD_CREEK 2337.816 0.41910
12 1991-03-15 EMIGRANT_SUMMIT 2252.472 0.42418
13 1991-03-15 PHILLIPS_BENCH 2499.360 0.52832
14 1991-03-15 PINE_CREEK_PASS 2048.256 0.32258
15 1991-03-15 SALT_RIVER_SUMMIT 2328.672 0.23876
16 1991-03-15 SEDGWICK_PEAK 2392.680 0.39878
17 1991-03-15 SHEEP_MTN 2026.920 0.31242
18 1991-03-15 SLUG_CREEK_DIVIDE 2202.180 0.29464
19 1991-03-15 SOMSEN_RANCH 2072.640 0.29972
20 1991-03-15 WILDHORSE_DIVIDE 1978.152 0.35052
21 1991-03-15 WILLOW_CREEK 2462.784 0.60706
22 1991-10-25 COTTONWOOD_CREEK 2337.816 0.01270
23 1991-10-25 EMIGRANT_SUMMIT 2252.472 0.01016
24 1991-10-25 PHILLIPS_BENCH 2499.360 0.02286
25 1991-10-25 PINE_CREEK_PASS 2048.256 0.00508
26 1991-10-25 SALT_RIVER_SUMMIT 2328.672 0.01016
27 1991-10-25 SEDGWICK_PEAK 2392.680 0.00254
29 1991-10-25 SLUG_CREEK_DIVIDE 2202.180 0.00762
30 1991-10-25 SOMSEN_RANCH 2072.640 0.00000
31 1991-10-25 WILDHORSE_DIVIDE 1978.152 0.00508
32 1991-10-25 WILLOW_CREEK 2462.784 0.02032
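The sort + drop_duplicates step can also be written with groupby and idxmax, which selects, per date, the index of the zero row with the largest elev_exact (a sketch of the same idea, not the answer's exact code):

```python
import pandas as pd

df = pd.DataFrame({
    'date': ['1990-10-30'] * 3,
    'name': ['EMIGRANT_SUMMIT', 'SEDGWICK_PEAK', 'SHEEP_MTN'],
    'elev_exact': [2252.472, 2392.680, 2026.920],
    'swe': [0.0, 0.0, 0.0],
})

zeros = df[df['swe'].eq(0)]
# one row per date: the zero reading at the highest elevation
df_zeroes = zeros.loc[zeros.groupby('date')['elev_exact'].idxmax()]
```

Both forms keep exactly one zero row per date; idxmax avoids the full sort and keeps the original row order without a final sort_index on the zero part.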