How to apply a function to a PySpark DataFrame column?
I have a DataFrame that looks like this:
+-----------+-------+-----------------+
|A |B | Num|
+-----------+-------+-----------------+
| BAKEL| BAKEL| 1 341 2323 01415|
| BAKEL| BAKEL| 2 272 7729 00307|
| BAKEL| BAKEL| 2 341 1224 00549|
| BAKEL| BAKEL| 2 341 1200 01194|
| BAKEL| BAKEL|1 845 0112 101159|
+-----------+-------+-----------------+
I want output like this:
+-----------+-------+---------------+
|A |B | Num|
+-----------+-------+---------------+
| BAKEL| BAKEL| 1341232301415|
| BAKEL| BAKEL| 2272772900307|
| BAKEL| BAKEL| 2341122400549|
| BAKEL| BAKEL| 2341120001194|
| BAKEL| BAKEL| 18450112101159|
+-----------+-------+---------------+
where the spaces in the last column's values have been removed.
How can I do this with PySpark?
Use the function regexp_replace() to solve this:
from pyspark.sql import SparkSession
from pyspark.sql.functions import regexp_replace

spark = SparkSession.builder.getOrCreate()

myValues = [('BAKEL', 'BAKEL', '1 341 2323 01415'),
            ('BAKEL', 'BAKEL', '2 272 7729 00307'),
            ('BAKEL', 'BAKEL', '2 341 1224 00549'),
            ('BAKEL', 'BAKEL', '2 341 1200 01194'),
            ('BAKEL', 'BAKEL', '1 845 0112 101159')]
df = spark.createDataFrame(myValues, ['A', 'B', 'Num'])

# Remove all spaces from the Num column
df = df.withColumn('Num', regexp_replace('Num', ' ', ''))
# Convert the string to a long (integral value)
df = df.withColumn('Num', df['Num'].cast('long'))
df.show()
+-----+-----+--------------+
| A| B| Num|
+-----+-----+--------------+
|BAKEL|BAKEL| 1341232301415|
|BAKEL|BAKEL| 2272772900307|
|BAKEL|BAKEL| 2341122400549|
|BAKEL|BAKEL| 2341120001194|
|BAKEL|BAKEL|18450112101159|
+-----+-----+--------------+
df.printSchema()
root
|-- A: string (nullable = true)
|-- B: string (nullable = true)
|-- Num: long (nullable = true)
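For reference, the per-row logic that regexp_replace() followed by the cast to long performs can be sketched in plain Python. This is only an illustration of the transformation, not Spark code; the helper name clean_num is made up here:

```python
import re

def clean_num(raw: str) -> int:
    """Mimic regexp_replace('Num', ' ', '') followed by .cast('long'):
    strip every space, then parse the remaining digits as an integer."""
    return int(re.sub(' ', '', raw))

print(clean_num('1 341 2323 01415'))   # 1341232301415
print(clean_num('1 845 0112 101159'))  # 18450112101159
```

Spark applies exactly this kind of replacement to every row of the column in a distributed fashion, so no Python UDF is needed for a simple substitution like this.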