
Geoip2's python library doesn't work in pySpark's map function

I'm using geoip2's python library with pySpark to get the geographic locations of some IPs. My code looks like this:

import os

import geoip2.database
from pyspark import SparkFiles

geoDBpath = 'somePath/geoDB/GeoLite2-City.mmdb'
geoPath = os.path.join(geoDBpath)
sc.addFile(geoPath)
reader = geoip2.database.Reader(SparkFiles.get(geoPath))

def ip2city(ip):
    try:
        city = reader.city(ip).city.name
    except:
        city = 'not found'
    return city

I tried

print ip2city("128.101.101.101")

It works. But when I try to do this in rdd.map:

rdd = sc.parallelize([ip1, ip2, ip3, ip3, ...])
print rdd.map(lambda x: ip2city(x))

It reported:

    Traceback (most recent call last):
  File "/home/worker/software/spark/python/pyspark/rdd.py", line 1299, in take
    res = self.context.runJob(self, takeUpToNumLeft, p)
  File "/home/worker/software/spark/python/pyspark/context.py", line 916, in runJob
    port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
  File "/home/worker/software/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/home/worker/software/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/worker/software/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
    command = pickleSer._read_with_length(infile)
  File "/home/worker/software/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
    return self.loads(obj)
  File "/home/worker/software/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
    return pickle.loads(obj)
TypeError: Required argument 'fileno' (pos 1) not found

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Could anyone tell me how to make the ip2city function work in rdd.map()? Thanks!

The problem with your code seems to come from the reader object. It cannot be correctly serialized as a part of a closure and sent to the workers. To deal with this you have to instantiate it on the workers. One way to handle this is to use mapPartitions:

from pyspark import SparkFiles

geoDBpath = 'GeoLite2-City.mmdb'
sc.addFile(geoDBpath)

def partitionIp2city(iter):
    from geoip2 import database

    def ip2city(ip):
        try:
            city = reader.city(ip).city.name
        except:
            city = 'not found'
        return city

    # The Reader is instantiated here, on the worker, once per partition.
    reader = database.Reader(SparkFiles.get(geoDBpath))
    return [ip2city(ip) for ip in iter]

rdd = sc.parallelize(['128.101.101.101', '85.25.43.84'])
rdd.mapPartitions(partitionIp2city).collect()

## ['Minneapolis', None]
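
The serialization failure itself can be reproduced outside of Spark. Here is a minimal sketch, assuming geoip2 is installed and a local copy of GeoLite2-City.mmdb is available; the exact exception and where it surfaces differ between Python versions, but the pickle round trip does not survive either way:

import pickle

import geoip2.database

# A Reader holds an open database (an mmap or file handle), which is why
# pickling it for shipment to the Spark workers fails.
reader = geoip2.database.Reader('GeoLite2-City.mmdb')
try:
    pickle.loads(pickle.dumps(reader))
except Exception as exc:
    print(type(exc).__name__, exc)
reader.close()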

The example from zero323 works. Below is a change that creates the loop for each partition of the RDD and demonstrates the changed loop structure. It also uses yield to return the results to a DataFrame.

from pyspark import SparkFiles

geoDBpath = 'GeoLite2-City.mmdb'
sc.addFile(geoDBpath)
    
def maxmind_ip(ip):
    import geoip2.database
    # One Reader is opened per partition on the worker; "ip" here is the
    # iterator of Rows for that partition, not a single address.
    reader = geoip2.database.Reader(SparkFiles.get(geoDBpath))
    for row in ip:
        try:
            response = reader.city(row.ipaddress)
            ip_lat = str(response.location.latitude)
            ip_long = str(response.location.longitude)
        except:
            #print('Unable to find lat/long for '+ip)
            ip_lat = 'NA'
            ip_long = 'NA'
        #return t.Row('IP_LAT', 'IP_LONG')(ip_lat, ip_long)
        yield [row.ipaddress, ip_lat, ip_long]
    reader.close()
    
ip_maxmind_results = df_actIP_small.rdd.mapPartitions(maxmind_ip).toDF(["ipaddress","IP_LAT","IP_LONG"]) 
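
For reference, a minimal sketch of how the input could be built and the pipeline run end to end. df_actIP_small and its ipaddress column come from the answer above; the SparkSession setup and sample rows here are assumptions for illustration only:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical input: a DataFrame with an "ipaddress" column, which is
# what maxmind_ip() reads from each Row.
df_actIP_small = spark.createDataFrame(
    [("128.101.101.101",), ("85.25.43.84",)],
    ["ipaddress"],
)

ip_maxmind_results = df_actIP_small.rdd.mapPartitions(maxmind_ip).toDF(["ipaddress", "IP_LAT", "IP_LONG"])
ip_maxmind_results.show()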

