
Using one table to update another table in Spark

I have two tables (or DataFrames), and I want to use one to update the other. I also know that Spark SQL does not support `update a set a.1 = b.1 from b where a.2 = b.2 and a.update < b.update`. Please suggest how I can achieve this, since a direct UPDATE is not possible in Spark.

table1

+------+----+------+
|number|name|update|
+------+----+------+
|     1|   a| 08-01|
|     2|   b| 08-02|
+------+----+------+

table2

    +------+----+------+
    |number|name|update|
    +------+----+------+
    |     1|  a2| 08-03|
    |     3|   b| 08-02|
    +------+----+------+

I want to get this:

    +------+----+------+
    |number|name|update|
    +------+----+------+
    |     1|  a2| 08-03|
    |     2|   b| 08-02|
    |     3|   b| 08-02|
    +------+----+------+

Is there any other way to do this in Spark?

Using pyspark, you can use `subtract()` to find the `number` values of `table1` that are not present in `table2`, and then take the `unionAll` of `table2` with `table1` filtered down to those missing rows.

# Collect the `number` values that appear in table1 but not in table2
diff = (table1.select('number')
        .subtract(table2.select('number'))
        .rdd.map(lambda x: x[0]).collect())

# Keep all of table2, plus the table1 rows whose number is missing from table2
table2.unionAll(table1[table1.number.isin(diff)]).orderBy('number').show()
+------+----+------+
|number|name|update|
+------+----+------+
|     1|  a2| 08-03|
|     2|   b| 08-02|
|     3|   b| 08-02|
+------+----+------+
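To see the merge logic itself independent of Spark, here is a plain-Python sketch of the same "keep the newest row per key" rule, using the example data above; the tuple layout and variable names are illustrative assumptions, not part of any Spark API.

```python
# Rows are (number, name, update) tuples, mirroring the example tables.
table1 = [(1, "a", "08-01"), (2, "b", "08-02")]
table2 = [(1, "a2", "08-03"), (3, "b", "08-02")]

merged = {}
for number, name, update in table1 + table2:
    # For each `number`, keep the row with the later `update` value.
    if number not in merged or update > merged[number][2]:
        merged[number] = (number, name, update)

result = sorted(merged.values())
print(result)  # [(1, 'a2', '08-03'), (2, 'b', '08-02'), (3, 'b', '08-02')]
```

This reproduces the desired output: row 1 is replaced by table2's newer version, row 2 survives from table1, and row 3 is added from table2. Note the string comparison on `update` only works because the dates share a fixed `MM-dd` format.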
