java.lang.NoSuchMethodError: org.apache.spark.internal.Logging.$init$

Spark java.lang.NoSuchMethodError
I am running the following UDF, which uses scipy's cosine distance, on Spark on YARN. I first tested it on a sample of 30 records; it ran fine and produced a cosine similarity matrix within 5 seconds.

Here is the code:
```python
def cosineSimilarity(df):
    """Cosine similarity of each document with every other document."""
    from pyspark.sql.functions import udf
    from pyspark.sql.types import DoubleType
    from scipy.spatial import distance

    # None-safe UDF: scipy returns the cosine *distance*, so similarity is 1 - distance
    cosine = udf(
        lambda v1, v2: (
            float(1 - distance.cosine(v1, v2)) if v1 is not None and v2 is not None else None),
        DoubleType())

    # Creating a cross product of the table to get the cosine similarity vectors
    crosstabDF = df.withColumnRenamed('id', 'id_1').withColumnRenamed('w2v_vector', 'w2v_vector_1')\
        .join(df.withColumnRenamed('id', 'id_2').withColumnRenamed('w2v_vector', 'w2v_vector_2'))

    similardocs_df = crosstabDF.withColumn('cosinesim', cosine("w2v_vector_1", "w2v_vector_2"))
    return similardocs_df

#similardocs_df = cosineSimilarity(w2vdf.select('id', 'w2v_vector'))
similardocs_df = cosineSimilarity(w2vdf_sample.select('id', 'w2v_vector'))
```
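As a sanity check outside Spark, the logic inside the UDF's lambda can be exercised locally. The sketch below reimplements the same 1 − cosine-distance computation in plain Python so it runs without scipy or a cluster (the helper name `cosine_similarity` is mine, not from the original code):

```python
import math

def cosine_similarity(v1, v2):
    """1 - cosine distance, mirroring the UDF's lambda (None-safe)."""
    if v1 is None or v2 is None:
        return None
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(b * b for b in v2))
    return dot / (norm1 * norm2)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical vectors -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal vectors -> 0.0
print(cosine_similarity(None, [1.0, 0.0]))        # missing vector -> None
```

If this gives sensible values on a few vectors pulled from `w2v_vector`, the failure on the full dataset is more likely environmental (classpath, memory, shuffle size from the cross join) than a bug in the similarity math.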
I then tried to pass the whole matrix (58K records). It ran for a while and then gave me the error below.

I should mention that once it did run on the full data within 5 minutes. But now it gives me this error on the full data, even though it runs on the sample without any problem.
```
WARN org.spark_project.jetty.servlet.ServletHandler (ServletHandler.java:doHandle(667)) - Error for /jobs/
java.lang.NoSuchMethodError: javax.servlet.http.HttpServletRequest.getDispatcherType()Ljavax/servlet/DispatcherType;
at org.spark_project.jetty.servlets.gzip.AbstractCompressedStream.doCompress(AbstractCompressedStream.java:248)
at org.spark_project.jetty.servlets.gzip.AbstractCompressedStream.checkOut(AbstractCompressedStream.java:354)
at org.spark_project.jetty.servlets.gzip.AbstractCompressedStream.write(AbstractCompressedStream.java:229)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:135)
at java.io.OutputStreamWriter.write(OutputStreamWriter.java:220)
at java.io.PrintWriter.write(PrintWriter.java:456)
at java.io.PrintWriter.write(PrintWriter.java:473)
at java.io.PrintWriter.print(PrintWriter.java:603)
at org.apache.spark.ui.JettyUtils$$anon$2.doGet(JettyUtils.scala:86)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at org.spark_project.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
at org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669)
at org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:164)
at org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.spark_project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.spark_project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.spark_project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.spark_project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.spark_project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.spark_project.jetty.servlets.gzip.GzipHandler.handle(GzipHandler.java:479)
at org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.spark_project.jetty.server.Server.handle(Server.java:499)
at org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.spark_project.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.spark_project.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:744)
2017-02-23 21:01:48,024 WARN org.spark_project.jetty.server.HttpChannel (HttpChannel.java:handle(384)) - /jobs/
```
I also ran into this error in pyspark, and I solved it by adding a jar to the spark-submit command:

```
--jars /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/spark/lib/spark-examples-1.6.0-cdh5.9.0-hadoop2.6.0-cdh5.9.0.jar
```
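For context, a complete spark-submit invocation with that extra jar might look like the sketch below. Only the `--jars` value comes from the answer above; the master, deploy mode, and script name are placeholders for illustration:

```shell
spark-submit \
  --master yarn \
  --deploy-mode client \
  --jars /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/spark/lib/spark-examples-1.6.0-cdh5.9.0-hadoop2.6.0-cdh5.9.0.jar \
  cosine_similarity.py
```

`--jars` distributes the listed jar to the driver and executors and adds it to their classpaths, which is why it can resolve a `NoSuchMethodError` caused by a missing or mismatched servlet dependency at runtime.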