
Adding spark-csv package in PyCharm IDE

I have successfully loaded the spark-csv library in Python standalone mode through

$ --packages com.databricks:spark-csv_2.10:1.4.0

When I run the above command, it creates two folders (jars and cache) at this location:

C:\Users\Mahima\.ivy2

There are two folders inside. One of them contains these jar files: org.apache.commons_commons-csv-1.1.jar, com.univocity_univocity-parsers-1.5.1.jar, com.databricks_spark-csv_2.10-1.4.0.jar

I want to load this library in PyCharm (on Windows 10), which is already set up to run Spark programs, so I added the .ivy2 folder to the project interpreter path. The main error I get is:

An error occurred while calling o22.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org

The full error log is as follows:

16/06/27 12:54:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Traceback (most recent call last):
 File "C:/Users/Mahima/PycharmProjects/wordCount/wordCount.py", line 10, in <module>
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('flight.csv')
File "C:\spark-1.6.1-bin-hadoop2.4\python\pyspark\sql\readwriter.py", line 137, in load
return self._df(self._jreader.load(path))
File "C:\spark-1.6.1-bin-hadoop2.4\python\lib\py4j-0.9-src.zip\py4j\java_gateway.py", line 813, in __call__
File "C:\spark-1.6.1-bin-hadoop2.4\python\pyspark\sql\utils.py", line 45, in deco
return f(*a, **kw)
File "C:\spark-1.6.1-bin-hadoop2.4\python\lib\py4j-0.9-src.zip\py4j\protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o22.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:77)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:102)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:209)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.ClassNotFoundException: com.databricks.spark.csv.DefaultSource
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$4$$anonfun$apply$1.apply(ResolvedDataSource.scala:62)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$4$$anonfun$apply$1.apply(ResolvedDataSource.scala:62)
    at scala.util.Try$.apply(Try.scala:161)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$4.apply(ResolvedDataSource.scala:62)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$4.apply(ResolvedDataSource.scala:62)
    at scala.util.Try.orElse(Try.scala:82)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:62)
    ... 14 more


Process finished with exit code 1

I have already added the jars to the project interpreter path. Where am I going wrong? Please suggest some solutions. Thanks in advance.

The solution is to add an environment variable named "PYSPARK_SUBMIT_ARGS" and set its value to "--packages com.databricks:spark-csv_2.10:1.4.0 pyspark-shell". It will work fine.
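If you would rather keep everything inside the script that PyCharm runs, the same flag can usually be injected from Python itself, as long as it happens before the SparkContext is created. A minimal sketch, assuming Spark 1.6 and the file from the question (the app name is illustrative):

     import os

     # Must be set before the JVM starts; changing it afterwards has no effect.
     os.environ["PYSPARK_SUBMIT_ARGS"] = (
         "--packages com.databricks:spark-csv_2.10:1.4.0 pyspark-shell"
     )

     from pyspark import SparkContext
     from pyspark.sql import SQLContext

     sc = SparkContext(appName="spark-csv-example")  # hypothetical app name
     sqlContext = SQLContext(sc)

     # With the package on the classpath, the data source now resolves.
     df = (sqlContext.read
           .format('com.databricks.spark.csv')
           .options(header='true')
           .load('flight.csv'))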

  • Executing sqlContext.read.format('com.databricks.spark.csv') in the console does not guarantee that the package is actually installed. In fact, the command

     sqlContext.read.format('com.dummy.csv') 

returns no error either

  • You can add the package to your Spark context

     sc.addPyFile("com.databricks_spark-csv_2.10-1.4.0.jar") 
  • You can open a csv file in one line without installing any package (a fuller sketch follows after this list)

     sc.textFile("file.csv").map(lambda line: line.split(",")).toDF()
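For reference, a slightly fuller sketch of that package-free approach on Spark 1.x, assuming the file has a header row; the file name, app name, and column handling here are illustrative, not from the question:

     from pyspark import SparkContext
     from pyspark.sql import SQLContext

     sc = SparkContext(appName="csv-without-package")  # hypothetical app name
     sqlContext = SQLContext(sc)  # needed so RDD.toDF() is available in Spark 1.x

     lines = sc.textFile("flight.csv")
     header = lines.first()  # first line holds the column names
     rows = (lines.filter(lambda l: l != header)   # drop the header row
                  .map(lambda l: l.split(",")))    # naive split: ignores quoted commas

     df = rows.toDF(header.split(","))  # column names taken from the header

Note that this simple split does not handle quoted fields or type inference, which is exactly what the spark-csv package provides.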
