
Coding reduceByKey(lambda) in map doesn't work in pySpark

I don't understand why my code doesn't work. The last line is the problem:

import findspark
findspark.init()
from pyspark import SparkConf, SparkContext
from pyspark.sql.types import StringType
from pyspark import SQLContext
conf=SparkConf().setMaster("local").setAppName("mein soft")
sc=SparkContext(conf=conf)
sqlContext=SQLContext(sc)

lines=sc.textFile("File.txt")
#lines.repartition(3)
lines.getNumPartitions()

def lan_map(x):
    if "word1" and "word2" in x:
        return ("Count",(1,1))
    elif "word1" in x:
        return ("Count",("1,0"))
    elif "word2" in x:
        return ("Count",("0,1"))
    else:
        return ("Count",("0,0"))
    
mapfun=lines.map(lan_map)

mapfun.reduceByKey(lambda x, y: (x[0]+y[0], x[1]+y[1])).collect() 

And the error:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
in
      1 #This sums up what we did 3 cells back
----> 2 mapfun.reduceByKey(lambda x, y: (x[0]+y[0], x[1]+y[1])).collect()
      3
      4 #mapfun.reduceByKey(noMeFuncaLambdaAsiQueHagoEsto(mapfun.x, mupfun.y)).collect()
      5 #This directly gives us the count of how many times "Python" appears and how many times "Spark" appears

C:\spark-3.1.2-bin-hadoop3.2\python\pyspark\rdd.py in collect(self)
    947         """
    948         with SCCallSiteSync(self.context) as css:
--> 949             sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
    950         return list(_load_from_socket(sock_info, self._jrdd_deserializer))
    951

C:\spark-3.1.2-bin-hadoop3.2\python\lib\py4j-0.10.9-src.zip\py4j\java_gateway.py in call(self, *args)
   1302
   1303         answer = self.gateway_client.send_command(command)
-> 1304         return_value = get_return_value(
   1305             answer, self.gateway_client, self.target_id, self.name)
   1306

C:\spark-3.1.2-bin-hadoop3.2\python\pyspark\sql\utils.py in deco(*a, **kw)
    109     def deco(*a, **kw):
    110         try:
--> 111             return f(*a, **kw)
    112         except py4j.protocol.Py4JJavaError as e:
    113             converted = convert_exception(e.java_exception)

C:\spark-3.1.2-bin-hadoop3.2\python\lib\py4j-0.10.9-src.zip\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
    324             value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325             if answer[1] == REFERENCE_TYPE:
--> 326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
    328                     format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (LAPTOP-PB7QDPVE executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "C:\spark-3.1.2-bin-hadoop3.2\python\lib\pyspark.zip\pyspark\worker.py", line 604, in main
  File "C:\spark-3.1.2-bin-hadoop3.2\python\lib\pyspark.zip\pyspark\worker.py", line 594, in process
  File "C:\spark-3.1.2-bin-hadoop3.2\python\pyspark\rdd.py", line 2916, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "C:\spark-3.1.2-bin-hadoop3.2\python\pyspark\rdd.py", line 2916, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "C:\spark-3.1.2-bin-hadoop3.2\python\pyspark\rdd.py", line 418, in func
    return f(iterator)
  File "C:\spark-3.1.2-bin-hadoop3.2\python\pyspark\rdd.py", line 2144, in combineLocally
    merger.mergeValues(iterator)
  File "C:\spark-3.1.2-bin-hadoop3.2\python\lib\pyspark.zip\pyspark\shuffle.py", line 242, in mergeValues
    d[k] = comb(d[k], v) if k in d else creator(v)
  File "C:\spark-3.1.2-bin-hadoop3.2\python\pyspark\util.py", line 73, in wrapper
    return f(*args, **kwargs)
  File "", line 2, in
TypeError: unsupported operand type(s) for +: 'int' and 'str'

	at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:517)
	at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:652)
	at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:635)
	at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:470)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:132)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.lang.Thread.run(Unknown Source)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2258)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2206)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1079)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:868)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2261)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:1029)
	at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:180)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  [same Python traceback as above]
TypeError: unsupported operand type(s) for +: 'int' and 'str'
	... 1 more

I feel so lost that I can't even get a single position out of my mapfun. I mean, shouldn't this work:

mapfun[1]

I have also tried it with a function, but I failed even worse:

def fun2(x,y):
    x[0]+y[0]
    x[1]+y[1]
mapfun.reduceByKey(fun2(x,y)).collect()

You are getting the error

TypeError: unsupported operand type(s) for +: 'int' and 'str'

because your tuple values are strings, i.e. ("1,0") instead of (1,0), and Python will not apply the + operator between int and str data types.
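You can reproduce the failure outside of Spark. Note that ("1,0") is not even a tuple: parentheses without a comma are just grouping, so it is the plain string "1,0", and indexing it yields the character "1":

```python
val = ("1,0")
print(type(val).__name__)       # str, not tuple

# So the reducer ends up doing int + str, which raises the same TypeError:
x = (1, 1)       # value produced by the first branch
y = ("1,0")      # value produced by the second branch -> the string "1,0"
try:
    x[0] + y[0]                 # 1 + "1"
except TypeError as e:
    print(e)                    # unsupported operand type(s) for +: 'int' and 'str'
```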

Furthermore, there seems to be a logical error in the comparison in your map function, where you have "word1" and "word2" in x, because this only checks whether "word2" is in x. I would recommend the following rewrite:

def lan_map(x):
    if "word1" in x and "word2" in x:
        return ("Count",(1,1))
    elif "word1" in x:
        return ("Count",(1,0))
    elif "word2" in x:
        return ("Count",(0,1))
    else:
        return ("Count",(0,0))
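The precedence issue is easy to verify in plain Python: "word1" and "word2" in x parses as "word1" and ("word2" in x), and since the non-empty string "word1" is always truthy, the whole expression collapses to just "word2" in x:

```python
x = "a line that only contains word2"

# Parsed as: "word1" and ("word2" in x) -> True, even though word1 is absent
print("word1" and "word2" in x)       # True

# The corrected condition tests both memberships explicitly
print("word1" in x and "word2" in x)  # False
```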

Or, perhaps shorter:

def lan_map(x):
    return ("Count", (
        1 if "word1" in x else 0,
        1 if "word2" in x else 0
    ))
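With integer tuples, the reduceByKey lambda works as intended. As a quick sanity check that needs no Spark cluster, the same fold can be simulated with functools.reduce; the sample lines below are purely illustrative:

```python
from functools import reduce

def lan_map(x):
    return ("Count", (
        1 if "word1" in x else 0,
        1 if "word2" in x else 0
    ))

lines = ["word1 here", "just word2", "word1 and word2", "neither"]
pairs = [lan_map(line) for line in lines]   # mimics lines.map(lan_map)

# mimics reduceByKey: every pair shares the key "Count", so fold the values
total = reduce(lambda x, y: (x[0] + y[0], x[1] + y[1]),
               [v for _, v in pairs])
print(total)   # (2, 2)
```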

Let me know if this works for you.
