Creating new pandas columns based on conditions on existing columns
I have a dataframe as shown below:
import pandas as pd

col1 = ['a','b','c','a','c','a','b','c','a']
col2 = [1,1,0,1,1,0,1,1,0]
df2 = pd.DataFrame(zip(col1, col2), columns=['name', 'count'])
  name  count
0    a      1
1    b      1
2    c      0
3    a      1
4    c      1
5    a      0
6    b      1
7    c      1
8    a      0
I am trying to find, for each element of the 'name' column, the ratio of the number of zeros to the total number of zeros plus ones. First, I aggregated the counts as follows:
for j in df2.name.unique():
    print(j)
    zero_ct = zero_one_frequencies[zero_one_frequencies['name'] == j][0]
    full_ct = zero_one_frequencies[zero_one_frequencies['name'] == j][0] + zero_one_frequencies[zero_one_frequencies['name'] == j][1]
    zero_pb = zero_ct / full_ct
    one_pb = 1 - zero_pb
    print(f"ZERO ratios for {j} = {zero_pb}")
    print(f"One ratios for {j} = {one_pb}")
    print("="*30)
The output looks like this:
a
ZERO ratios for a = 0    0.5
dtype: float64
One ratios for a = 0    0.5
dtype: float64
==============================
b
ZERO ratios for b = 1    0.0
dtype: float64
One ratios for b = 1    1.0
dtype: float64
==============================
c
ZERO ratios for c = 2    0.333333
dtype: float64
One ratios for c = 2    0.666667
dtype: float64
==============================
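Note that zero_one_frequencies is never defined in the question. Judging from the indexing above (a 'name' column plus integer columns 0 and 1, and the row labels 0/1/2 in the printout), it is presumably a name-by-count frequency table; here is a minimal sketch under that assumption, so the snippets in this post can be reproduced:

import pandas as pd

col1 = ['a','b','c','a','c','a','b','c','a']
col2 = [1,1,0,1,1,0,1,1,0]
df2 = pd.DataFrame(zip(col1, col2), columns=['name', 'count'])

# Hypothetical reconstruction: a crosstab of name vs. count,
# with 'name' kept as a regular column
zero_one_frequencies = pd.crosstab(df2['name'], df2['count']).reset_index()
print(zero_one_frequencies)
# count name  0  1
# 0        a  2  2
# 1        b  0  2
# 2        c  1  2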
My goal is to add two new columns to the dataframe, 'name_0' and 'name_1', holding the ratio values for each element of the 'name' column. I tried the following, but it does not give the expected result:
import numpy as np

for j in df2.name.unique():
    print(j)
    zero_ct = zero_one_frequencies[zero_one_frequencies['name'] == j][0]
    full_ct = zero_one_frequencies[zero_one_frequencies['name'] == j][0] + zero_one_frequencies[zero_one_frequencies['name'] == j][1]
    zero_pb = zero_ct / full_ct
    one_pb = 1 - zero_pb
    print(f"ZERO probability for {j} = {zero_pb}")
    print(f"One probability for {j} = {one_pb}")
    print("="*30)
    condition1 = [df2['name'].eq(j) & df2['count'].eq(0)]
    condition2 = [df2['name'].eq(j) & df2['count'].eq(1)]
    choice1 = zero_pb.tolist()
    choice2 = one_pb.tolist()
    print(f'choice1 = {choice1}, choice2 = {choice2}')
    df2["name"+str("_0")] = np.select(condition1, choice1, default=0)
    df2["name"+str("_1")] = np.select(condition2, choice2, default=0)
The columns only end up with the values for name element 'c'. This is to be expected, since the values computed in the last iteration are used to update all rows. Can you help me understand whether there is another way to use np.select effectively?
Expected output:

  name  count    name_0    name_1
0    a      1  0.000000  0.500000
1    b      1  0.000000  1.000000
2    c      0  0.333333  0.000000
3    a      1  0.000000  0.500000
4    c      1  0.000000  0.666667
5    a      0  0.500000  0.000000
6    b      1  0.000000  1.000000
7    c      1  0.000000  0.666667
8    a      0  0.500000  0.000000
I don't have access to the zero_one_frequencies df, so I took the liberty of solving the problem in my own way.
import pandas as pd
import numpy as np

col1 = ['a','b','c','a','c','a','b','c','a']
col2 = [1,1,0,1,1,0,1,1,0]
df2 = pd.DataFrame(zip(col1, col2), columns=['name', 'count'])

# Initialise as floats so the ratio assignments below keep a float dtype
df2["name_0"] = 0.0
df2["name_1"] = 0.0

for name in df2['name'].unique():
    df_name = df2[df2['name'] == name]
    prob_1 = df_name['count'].sum() / df_name.shape[0]  # P(count == 1) for this name
    for count in df2['count'].unique():
        mask = (df2['name'] == name) & (df2['count'] == count)
        # (count + 1) % 2 flips 0 <-> 1: rows with count == 0 get |1 - prob_1| = prob_0,
        # rows with count == 1 get |0 - prob_1| = prob_1
        df2.loc[mask, "name_" + str(count)] = np.abs(((count + 1) % 2) - prob_1)
Output:
  name  count    name_0    name_1
0    a      1  0.000000  0.500000
1    b      1  0.000000  1.000000
2    c      0  0.333333  0.000000
3    a      1  0.000000  0.500000
4    c      1  0.000000  0.666667
5    a      0  0.500000  0.000000
6    b      1  0.000000  1.000000
7    c      1  0.000000  0.666667
8    a      0  0.500000  0.000000
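As an aside (my own sketch, not part of the original attempt): because 'count' only holds 0s and 1s, the per-name mean of 'count' is exactly the ratio of ones, so both loops can be replaced by groupby/transform plus np.where:

import numpy as np

# Per-row P(count == 1) for that row's name, aligned to df2's index
prob_1 = df2.groupby('name')['count'].transform('mean')
df2['name_1'] = np.where(df2['count'].eq(1), prob_1, 0.0)
df2['name_0'] = np.where(df2['count'].eq(0), 1 - prob_1, 0.0)

This produces the same table as above in two vectorized assignments.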
To understand np.select, I suggest reading this article.
The following code fixes the problem. However, I could not find a way to get the same effect using numpy.select.
df2["name"+str("_0")] = 0.0
df2["name"+str("_1")] = 0.0
for j in df2.name.unique():
print(j)
zero_ct = zero_one_frequencies[zero_one_frequencies['name'] == j][0]
full_ct = zero_one_frequencies[zero_one_frequencies['name'] == j][0] + zero_one_frequencies[zero_one_frequencies['name'] == j][1]
zero_pb = zero_ct / full_ct
one_pb = 1 - zero_pb
print(f"ZERO Probablitliy for {j} = {zero_pb.tolist()[0]}")
print(f"One Probablitliy for {j} = {one_pb.tolist()[0]}")
print("="*30)
for idx in df2[df2['name']== j ].index:
print("Index:::", idx)
if df2['count'].iloc[idx] == 0:
df2.at[idx, "name"+str("_0")] = zero_pb.tolist()[0]
print(f'Count for {j} at index {idx} is {a}')
print('printing name_0: ', df2["name"+str("_0")].iloc[idx])
print("*"*30)
elif df2['count'].iloc[idx] == 1:
df2.at[idx, "name"+str("_1")] = one_pb.tolist()[0]
print(f'Count for {j} at index {idx} is {b}')
print('printing name_1: ', df2["name"+str("_1")].iloc[idx])
print("*"*30)