How do I add a new column to a Spark DataFrame from a function's return value (using PySpark)?
I have a DataFrame produced from SQL:
log = hc.sql("""select
      ip
    , url
    , ymd
    from log""")
and a function that takes an "ip" value from the DataFrame and returns three values:
def get_loc(ip):
    geodata = GeoLocator('SxGeoCity.dat', MODE_BATCH | MODE_MEMORY)
    location = geodata.get_location(ip, detailed=True)
    city_name_en = str(processValue(location['info']['city']['name_en']))
    region_name_en = str(processValue(location['info']['region']['name_en']))
    country_name_en = str(processValue(location['info']['country']['name_en']))
    return [city_name_en, region_name_en, country_name_en]
I can't figure out how to pass each "ip" value to get_loc() and add the returned values to the existing DataFrame as a map column "properties". Using Python 2.7 and PySpark.
I don't know what your get_loc does internally, but you can wrap it in a UDF like this:
from pyspark.sql import functions as f
from pyspark.sql.types import ArrayType, StringType

def get_loc(ip):
    return str(ip).split('.')

rdd = spark.sparkContext.parallelize([(1, '192.168.0.1'), (2, '192.168.0.1')])
df = spark.createDataFrame(rdd, schema=['idx', 'ip'])
My_UDF = f.UserDefinedFunction(get_loc, returnType=ArrayType(StringType()))
df = df.withColumn('loc', My_UDF(df['ip']))
df.show()
df.show()
# output:
+---+-----------+----------------+
|idx| ip| loc|
+---+-----------+----------------+
| 1|192.168.0.1|[192, 168, 0, 1]|
| 2|192.168.0.1|[192, 168, 0, 1]|
+---+-----------+----------------+