Pyspark - distinct records based on 2 columns in dataframe

I have two dataframes, df1 and df2.

The df1 data comes from the database, while df2 is the new data I receive from a client. I need to process the new data and perform UPSERTs, depending on whether a record is new or is an existing record that needs to be updated.

Sample data output:

df1 = sqlContext.createDataFrame([("xxx1","81A01","TERR NAME 01","NJ"),("xxx2","81A01","TERR NAME 01","NJ"),("xxx3","81A01","TERR NAME 01","NJ"),("xxx4","81A01","TERR NAME 01","CA"),("xxx5","81A01","TERR NAME 01","ME")], ["zip_code","territory_code","territory_name","state"])
df2 = sqlContext.createDataFrame([("xxx1","81A01","TERR NAME 55","NY"),("xxx2","81A01","TERR NAME 55","NY"),("x103","81A01","TERR NAME 01","NJ")], ["zip_code","territory_code","territory_name","state"])

df1.show()
+--------+--------------+--------------+-----+
|zip_code|territory_code|territory_name|state|
+--------+--------------+--------------+-----+
|    xxx1|         81A01|  TERR NAME 01|   NJ|
|    xxx2|         81A01|  TERR NAME 01|   NJ|
|    xxx3|         81A01|  TERR NAME 01|   NJ|
|    xxx4|         81A01|  TERR NAME 01|   CA|
|    xxx5|         81A01|  TERR NAME 01|   ME|
+--------+--------------+--------------+-----+

# Print out information about this data
df2.show()
+--------+--------------+--------------+-----+
|zip_code|territory_code|territory_name|state|
+--------+--------------+--------------+-----+
|    xxx1|         81A01|  TERR NAME 55|   NY|
|    xxx2|         81A01|  TERR NAME 55|   NY|
|    x103|         81A01|  TERR NAME 01|   NJ|
+--------+--------------+--------------+-----+

Expected result: I need to compare the df2 dataframe against df1. Based on this comparison, create two new datasets: the records to be updated, and the records to be appended/inserted into the database.

If zip_code & territory_code are the same, it is an UPDATE; otherwise it is an INSERT into the database.

For example, the new dataframe output for the INSERTs:

 +--------+--------------+--------------+-----+
 |zip_code|territory_code|territory_name|state|
 +--------+--------------+--------------+-----+
 |    x103|         81A01|  TERR NAME 01|   NJ|
 +--------+--------------+--------------+-----+

New dataframe for the UPDATEs:

+--------+--------------+--------------+-----+
|zip_code|territory_code|territory_name|state|
+--------+--------------+--------------+-----+
|    xxx1|         81A01|  TERR NAME 55|   NY|
|    xxx2|         81A01|  TERR NAME 55|   NY|
+--------+--------------+--------------+-----+

Can someone please help me? I am using AWS Glue.

UPDATE: Solution (using join and subtract)

# df2's columns carry a _new suffix (zip_code_new, territory_code_new, ...), as in the
# reproduced snippet further below, to avoid ambiguous column names after the join.
# A left_anti join keeps the rows of df2 whose key has no match in df1: the new records.
df5 = df2.join(df1, (df2.zip_code_new == df1.zip_code) & (df2.territory_code_new == df1.territory_code), "left_anti")
df5.show()

+------------+------------------+------------------+---------+
|zip_code_new|territory_code_new|territory_name_new|state_new|
+------------+------------------+------------------+---------+
|        x103|             81A01|      TERR NAME 01|       NJ|
+------------+------------------+------------------+---------+

# Everything else in df2 already exists in the database: the records to UPDATE
df4 = df2.subtract(df5)
df4.show()

+------------+------------------+------------------+---------+
|zip_code_new|territory_code_new|territory_name_new|state_new|
+------------+------------------+------------------+---------+
|        xxx1|             81A01|      TERR NAME 55|       NY|
|        xxx2|             81A01|      TERR NAME 55|       NY|
+------------+------------------+------------------+---------+
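For what it's worth, the update set can also be taken straight from a join instead of subtract; a small sketch, assuming the same _new column names:

# Rows of df2 whose key already exists in df1 -> the records to UPDATE.
# left_semi keeps only df2's columns, so no drop is needed afterwards.
updates = df2.join(
    df1,
    (df2.zip_code_new == df1.zip_code) & (df2.territory_code_new == df1.territory_code),
    "left_semi",
)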

For the RDS database updates, I use pymysql / MySQLdb:

import MySQLdb  # or: import pymysql as MySQLdb

db = MySQLdb.connect("xxxx.rds.amazonaws.com", "username", "password", "databasename")
cursor = db.cursor()

#cursor.execute("REPLACE INTO table SELECT * FROM table_stg")
insertQry = "INSERT INTO table VALUES('xxx1','81A01','TERR NAME 55','NY') ON DUPLICATE KEY UPDATE territory_name='TERR NAME 55', state='NY'"
n = cursor.execute(insertQry)
db.commit()
db.close()
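For more than a handful of rows, a parameterized batch statement is safer and faster than building one SQL string per record; a minimal sketch under the same placeholder connection details, assuming df4 holds the update set and the target table has the four columns used here:

import MySQLdb  # or: import pymysql as MySQLdb

db = MySQLdb.connect("xxxx.rds.amazonaws.com", "username", "password", "databasename")
cursor = db.cursor()

upsertQry = """
    INSERT INTO table (zip_code, territory_code, territory_name, state)
    VALUES (%s, %s, %s, %s)
    ON DUPLICATE KEY UPDATE territory_name = VALUES(territory_name),
                            state = VALUES(state)
"""

# collect() pulls everything to the driver, so this suits small update sets only.
rows = [tuple(r) for r in df4.collect()]
cursor.executemany(upsertQry, rows)
db.commit()
db.close()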

Thanks

Here is a sketch of a solution:

  1. Project both frames onto your unique key (zip code and territory).

  2. Compute the intersection and the difference between the two dataframes with the Spark dataframe API. See this link: How to obtain the difference between two DataFrames?

  3. Run updates for the intersection of the keys.

  4. Run inserts for the difference (keys that are in the new dataframe but not in the existing data).

In Scala it looks like this - it should be very similar in Python (a rough PySpark equivalent follows the Scala code below):

import org.apache.spark.sql.SparkSession

case class ZipTerr(zip_code: String, territory_code: String,
    territory_name: String, state: String)

case class Key(zip_code: String, territory_code: String)

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._  // needed for the Dataset encoders used by .map below

// df1 in the question: the data already in the database
val oldData = spark.createDataFrame(List(
  ZipTerr("xxx1", "81A01", "TERR NAME 01", "NJ"),
  ZipTerr("xxx2", "81A01", "TERR NAME 01", "NJ"),
  ZipTerr("xxx3", "81A01", "TERR NAME 01", "NJ"),
  ZipTerr("xxx4", "81A01", "TERR NAME 01", "CA"),
  ZipTerr("xxx5", "81A01", "TERR NAME 01", "ME")
))

// df2 in the question: the new data received from the client
val newData = spark.createDataFrame(List(
  ZipTerr("xxx1", "81A01", "TERR NAME 55", "NY"),
  ZipTerr("xxx2", "81A01", "TERR NAME 55", "NY"),
  ZipTerr("x103", "81A01", "TERR NAME 01", "NJ")
))

// Project both frames onto the unique key
val newKeys = newData.map(z => Key(z.getAs[String]("zip_code"), z.getAs[String]("territory_code")))
val oldKeys = oldData.map(z => Key(z.getAs[String]("zip_code"), z.getAs[String]("territory_code")))

val keysToInsert = newKeys.except(oldKeys)    // keys only in the new data
val keysToUpdate = newKeys.intersect(oldKeys) // keys present in both
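For reference, a rough PySpark equivalent of the sketch above (my own translation, not part of the original answer, using the df1/df2 frames from the question):

# Project both frames onto the unique key (zip_code, territory_code)
key_cols = ["zip_code", "territory_code"]
old_keys = df1.select(*key_cols)   # data already in the database
new_keys = df2.select(*key_cols)   # incoming client data

keys_to_insert = new_keys.subtract(old_keys)    # only in the new data -> INSERT
keys_to_update = new_keys.intersect(old_keys)   # present in both -> UPDATE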

Does this help?

Note: the variable names suggest that you are working with Glue DynamicFrames. However, you are assigning plain Spark dataframes to them with the sqlContext.createDataFrame function.
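If actual Glue DynamicFrames are needed (for instance for Glue sinks), a plain dataframe can be converted; a brief sketch, assuming a GlueContext named glueContext is already set up as in a standard Glue job script:

from awsglue.dynamicframe import DynamicFrame

# Wrap the Spark dataframe in a Glue DynamicFrame, and unwrap it again.
dyf = DynamicFrame.fromDF(df2, glueContext, "df2_dynamic")
df_back = dyf.toDF()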

For clarity, the solution is reproduced here with code snippets:

df1 = sqlContext.createDataFrame([("xxx1","81A01","TERR NAME 01","NJ"),("xxx2","81A01","TERR NAME 01","NJ"),("xxx3","81A01","TERR NAME 01","NJ"),("xxx4","81A01","TERR NAME 01","CA"),("xxx5","81A01","TERR NAME 01","ME")], ["zip_code","territory_code","territory_name","state"])
df2 = sqlContext.createDataFrame([("xxx1","81A01","TERR NAME 55","NY"),("xxx2","81A01","TERR NAME 55","NY"),("x103","81A01","TERR NAME 01","NJ")], ["zip_code_new","territory_code_new","territory_name_new","state_new"])

df1.show()
+--------+--------------+--------------+-----+
|zip_code|territory_code|territory_name|state|
+--------+--------------+--------------+-----+
|    xxx1|         81A01|  TERR NAME 01|   NJ|
|    xxx2|         81A01|  TERR NAME 01|   NJ|
|    xxx3|         81A01|  TERR NAME 01|   NJ|
|    xxx4|         81A01|  TERR NAME 01|   CA|
|    xxx5|         81A01|  TERR NAME 01|   ME|
+--------+--------------+--------------+-----+

# Print out information about this data
df2.show()
+------------+------------------+------------------+---------+
|zip_code_new|territory_code_new|territory_name_new|state_new|
+------------+------------------+------------------+---------+
|        xxx1|             81A01|      TERR NAME 55|       NY|
|        xxx2|             81A01|      TERR NAME 55|       NY|
|        x103|             81A01|      TERR NAME 01|       NJ|
+------------+------------------+------------------+---------+

Get the new records, which can be inserted into MySQL with an "append" operation (a JDBC-writer sketch follows the output below):

# A left_anti join keeps the rows of df2 whose key has no match in df1: the new records
df5 = df2.join(df1, (df2.zip_code_new == df1.zip_code) & (df2.territory_code_new == df1.territory_code), "left_anti")
df5.show()

+------------+------------------+------------------+---------+
|zip_code_new|territory_code_new|territory_name_new|state_new|
+------------+------------------+------------------+---------+
|        x103|             81A01|      TERR NAME 01|       NJ|
+------------+------------------+------------------+---------+
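For the append itself, one option is Spark's built-in JDBC writer rather than hand-written SQL; a minimal sketch with placeholder URL, credentials and table name (the MySQL JDBC driver must be available to the job):

# A minimal sketch of appending df5 via Spark's JDBC writer.
df5.write.jdbc(
    url="jdbc:mysql://xxxx.rds.amazonaws.com:3306/databasename",
    table="table",
    mode="append",
    properties={"user": "username", "password": "password"},
)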

Then get the remaining records, which need to be updated in the MySQL database. If pure Python is needed, we can use arr = df4.collect() and then for r in arr:; otherwise, each record can be processed with a pandas iterator.

# The remaining df2 rows already exist in the database: the records to UPDATE
df4 = df2.subtract(df5)
df4.show()

+------------+------------------+------------------+---------+
|zip_code_new|territory_code_new|territory_name_new|state_new|
+------------+------------------+------------------+---------+
|        xxx1|             81A01|      TERR NAME 55|       NY|
|        xxx2|             81A01|      TERR NAME 55|       NY|
+------------+------------------+------------------+---------+

Hope this helps someone who needs it. Please let me know if there is a better way to iterate over the dataframe in the scenario above. Thanks.
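On the iteration question: instead of collect(), which brings every row to the driver, foreachPartition lets each executor write its own partition; a sketch under the same placeholder connection details and table schema as above:

def upsert_partition(rows):
    # Runs on the executors: one connection per partition (placeholder details as above).
    import MySQLdb
    db = MySQLdb.connect("xxxx.rds.amazonaws.com", "username", "password", "databasename")
    cursor = db.cursor()
    upsertQry = """
        INSERT INTO table (zip_code, territory_code, territory_name, state)
        VALUES (%s, %s, %s, %s)
        ON DUPLICATE KEY UPDATE territory_name = VALUES(territory_name),
                                state = VALUES(state)
    """
    cursor.executemany(upsertQry, [tuple(r) for r in rows])
    db.commit()
    db.close()

df4.foreachPartition(upsert_partition)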
