
Looking if a String contains a sub-string across different DataFrames

I have 2 DataFrames:

df_1, with a column id_normalized that contains only letters and digits (==> normalized), and a column id_no_normalized. Example:

 id_normalized   |  id_no_normalized
    -------------|-------------------
    ABC          |  A_B.C
    -------------|-------------------
    ERFD         |  E.R_FD
    -------------|-------------------
    12ZED        |   12_Z.ED

df_2, with a column name that contains only letters and digits (==> normalized, concatenated together).

Example:

name
----------------------------
googleisa12ZEDgoodnavigator
----------------------------
internetABCexplorer
----------------------------

I want to check whether id_normalized (dataset_1) exists within name (dataset_2). If I find it, I take the value of id_no_normalized and store it in a new column of dataset_2.

Expected result:

   name                         |   result
    ----------------------------|----------
    googleisa12ZEDgoodnavigator |  12_Z.ED
    ----------------------------|----------
    internetABCexplorer         |  A_B.C
    ----------------------------|----------

I tried to do it with this code:

df_result = df_2.withColumn("id_no_normalized", df_2.name.contains(df_1.id_normalized))
return df_result.select("name", "id_normalized")

It does not work correctly, because it cannot find id_normalized inside name.

My second solution works only when I limit the output to roughly 300 rows; when I return all the data, it runs for a very long time and never finishes:

from pyspark.sql import functions as F

df_1 = df_1.select("id_no_normalized").drop_duplicates()
df_1 = df_1.withColumn(
    "id_normalized",
    F.regexp_replace(F.col("id_no_normalized"), "[^a-zA-Z0-9]+", ""))
df_2 = df_2.select("name")
extract = F.expr('position(id_normalized IN name)>0')
result = df_1.join(df_2, extract)
return result
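
One thing that often helps here is broadcasting the small side of the join. The sketch below reuses df_1 and df_2 exactly as prepared above and only adds a broadcast() hint and a final column rename; it assumes the de-duplicated id table is small enough to fit on every executor, which is not stated in the question:

from pyspark.sql import functions as F

# Sketch: same containment condition as above, but with the (assumed small)
# id table broadcast, so Spark can run a broadcast nested loop join instead
# of shuffling both sides into a Cartesian product.
extract = F.expr("position(id_normalized IN name) > 0")
result = (
    df_2.join(F.broadcast(df_1), extract)
        .select("name", F.col("id_no_normalized").alias("result"))
)
result.show(truncate=False)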

How can I correct my code to fix this? Thanks.

We can solve this by using a cross join and applying a UDF on the resulting DF, but again we need to make sure it works on large datasets.

from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

data1 = [
 {"id_normalized":"ABC","id_no_normalized":"A_B.C"},
 {"id_normalized":"ERFD","id_no_normalized":"E.R_FD"},
 {"id_normalized":"12ZED","id_no_normalized":"12_Z.ED"}
]

data2 = [
 {"name": "googleisa12ZEDgoodnavigator"},
 {"name": "internetABCexplorer"}
]

df1 = spark.createDataFrame(data1, ["id_no_normalized", "id_normalized"])
df2 = spark.createDataFrame(data2, ["name"])

# Pair every id with every name, then look for the id inside the name.
df3 = df1.crossJoin(df2)
# str.find returns the index of the first match, or -1 when there is none.
search_for_udf = udf(lambda name, id_normalized: name.find(id_normalized), returnType=IntegerType())
df4 = df3.withColumn("contain", search_for_udf(df3["name"], df3["id_normalized"]))
df4.filter(df4["contain"] > -1).show()


>>> df4.filter(df4["contain"] > -1).show()
+----------------+-------------+--------------------+-------+
|id_no_normalized|id_normalized|                name|contain|
+----------------+-------------+--------------------+-------+
|           A_B.C|          ABC| internetABCexplorer|      8|
|         12_Z.ED|        12ZED|googleisa12ZEDgoo...|      9|
+----------------+-------------+--------------------+-------+

I believe there are some Spark techniques to make the cross join efficient; one option is sketched below.
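
For example (just a sketch, assuming df1 stays small enough to broadcast): the same cross join can drop the Python UDF and use the built-in Column.contains instead, which keeps the string matching inside the JVM.

from pyspark.sql import functions as F

# Sketch: broadcast the small id table and use the built-in contains()
# instead of a Python UDF, so rows are never shipped to Python workers.
matched = (
    F.broadcast(df1)
    .crossJoin(df2)
    .filter(F.col("name").contains(F.col("id_normalized")))
)
matched.select("name", "id_no_normalized").show(truncate=False)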
