Filter array column in a dataframe based on a given input array -- PySpark
Spark version: 2.3.0
I have a PySpark dataframe with an array column, and I want to filter the elements of each array by applying a string-matching condition. For example, given a dataframe like this:
Array Col
['apple', 'banana', 'orange']
['strawberry', 'raspberry']
['apple', 'pineapple', 'grapes']
I want to keep, in each array, only the elements that contain the string "apple" or start with "app", and so on. How can I achieve this in PySpark?
In Spark 2.4+ you can use a higher-order function:
df.withColumn("Filtered_Col", F.expr("filter(Array_Col, x -> x rlike '^(?i)app')")).show(truncate=False)
+--------------------------+------------+
|Array_Col |Filtered_Col|
+--------------------------+------------+
|[apple, banana, orange] |[apple] |
|[strawberry, raspberry] |[] |
|[apple, pineapple, grapes]|[apple] |
+--------------------------+------------+
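In the `rlike` pattern `'^(?i)app'`, `^` anchors the match at the start of the string and `(?i)` turns on case-insensitive matching. A pure-Python sketch of the same element-level test (the Spark session itself is omitted here):

```python
import re

# Same predicate the higher-order filter applies to each array element:
# a case-insensitive match anchored at the start of the string.
def matches_app(s):
    return re.match("app", s, re.IGNORECASE) is not None

rows = [
    ["apple", "banana", "orange"],
    ["strawberry", "raspberry"],
    ["apple", "pineapple", "grapes"],
]

filtered = [[x for x in row if matches_app(x)] for row in rows]
print(filtered)  # [['apple'], [], ['apple']]
```

Note that "pineapple" is dropped because the match is anchored: it contains "app" but does not start with it.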
For lower versions, you can use a udf instead:
import re
from pyspark.sql import functions as F, types as T

def myf(v):
    l = []
    for i in v:
        # re.match anchors at the start of the string, so '^' is not needed
        if re.match('app', i, re.IGNORECASE):
            l.append(i)
    return l

myudf = F.udf(myf, T.ArrayType(T.StringType()))
df.withColumn("Filtered_Col", myudf("Array_Col")).show()
You can also use exists, a higher-order function that checks whether any element of an array satisfies a predicate (the Python API F.exists is available from Spark 3.1), to keep only the rows whose array contains a matching word.
Another approach is a UDF -
sparkDF = sql.createDataFrame(
    [(['apple', 'banana', 'orange'],),
     (['strawberry', 'raspberry'],),
     (['apple', 'pineapple', 'grapes'],)],
    ['arr_column']
)
sparkDF.show(truncate=False)
+--------------------------+
|arr_column |
+--------------------------+
|[apple, banana, orange] |
|[strawberry, raspberry] |
|[apple, pineapple, grapes]|
+--------------------------+
starts_with_app = lambda s: s.startswith("app")
sparkDF_filtered = sparkDF.filter(F.exists(F.col("arr_column"), starts_with_app))
sparkDF_filtered.show(truncate=False)
+--------------------------+
|arr_column |
+--------------------------+
|[apple, banana, orange] |
|[apple, pineapple, grapes]|
+--------------------------+
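F.exists keeps a row when at least one array element satisfies the predicate, so its row-level behaviour corresponds to Python's any(). A minimal sketch of that semantics outside Spark:

```python
rows = [
    ["apple", "banana", "orange"],
    ["strawberry", "raspberry"],
    ["apple", "pineapple", "grapes"],
]

def starts_with_app(s):
    return s.startswith("app")

# exists(arr_column, pred) keeps a row iff any(pred(x) for x in arr) is True
kept = [row for row in rows if any(starts_with_app(x) for x in row)]
print(kept)  # [['apple', 'banana', 'orange'], ['apple', 'pineapple', 'grapes']]
```

The ["strawberry", "raspberry"] row is dropped because no element starts with "app".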
from pyspark.sql.types import ArrayType, StringType

def filter_string(inp):
    res = []
    for s in inp:
        if s.startswith("app"):
            res += [s]
    if res:
        return res
    else:
        return None

filter_string_udf = F.udf(filter_string, ArrayType(StringType()))

sparkDF_filtered = sparkDF.withColumn('arr_filtered', filter_string_udf(F.col('arr_column')))\
                          .filter(F.col('arr_filtered').isNotNull())
sparkDF_filtered.show(truncate=False)
+--------------------------+------------+
|arr_column |arr_filtered|
+--------------------------+------------+
|[apple, banana, orange] |[apple] |
|[apple, pineapple, grapes]|[apple] |
+--------------------------+------------+
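Returning None when nothing matches is what lets the subsequent isNotNull() filter drop those rows. A quick check of that contract for the plain Python function:

```python
def filter_string(inp):
    # keep only elements starting with "app"; None signals "drop this row"
    res = [s for s in inp if s.startswith("app")]
    return res if res else None

print(filter_string(["apple", "banana", "orange"]))  # ['apple']
print(filter_string(["strawberry", "raspberry"]))    # None
```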