"Python was not found but can be installed" when using spark-submit on Windows
I installed PySpark on Windows following the steps described here, with Spark version 3.1.2, package type "Pre-built for Apache Hadoop 2.7", and Python version 3.9.6.
I wanted to try spark-submit with the wordcount example, so I opened a command prompt in the SPARK_HOME directory and typed the following:
bin\spark-submit examples\src\main\python\wordcount.py README.md
However, I got this message:
Python was not found but can be installed from the Microsoft Store: ms-windows-store://pdp/?productid=9NJ46SX7X90P
I don't know what went wrong. I made sure to add Python to PATH when installing it, and the command bin\pyspark also seems to work fine. I also tried Settings > Apps > App execution aliases and disabled all the python options, but that didn't work either.
Edit: this is the error message I get when I try the App execution aliases approach:
Exception in thread "main" java.io.IOException: Cannot run program "python3": CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessBuilder.start(Unknown Source)
at org.apache.spark.deploy.PythonRunner$.main(PythonRunner.scala:97)
at org.apache.spark.deploy.PythonRunner.main(PythonRunner.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:951)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1039)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1048)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessImpl.create(Native Method)
at java.lang.ProcessImpl.<init>(Unknown Source)
at java.lang.ProcessImpl.start(Unknown Source)
... 15 more
Go to Environment Variables and add a new environment variable named PYSPARK_PYTHON with the value python. You can verify it after starting PySpark: a call like a.take() only works once PySpark is able to detect Python.
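This also matches the stack trace in the edit: Spark tried to launch a program literally named python3, which a standard Windows Python install does not provide. A simplified sketch of the interpreter lookup (the real logic lives in Spark's PythonRunner and also consults Spark configuration keys; this version assumes only the environment variables matter) looks roughly like:

```python
def resolve_pyspark_python(env):
    """Simplified sketch of how Spark picks the Python executable:
    PYSPARK_DRIVER_PYTHON wins, then PYSPARK_PYTHON, and if neither
    is set it falls back to "python3" (which fails on Windows)."""
    return (env.get("PYSPARK_DRIVER_PYTHON")
            or env.get("PYSPARK_PYTHON")
            or "python3")

# With nothing set, Spark attempts to run "python3":
print(resolve_pyspark_python({}))                        # python3
# Setting PYSPARK_PYTHON=python redirects it to python.exe on PATH:
print(resolve_pyspark_python({"PYSPARK_PYTHON": "python"}))  # python
```

So setting PYSPARK_PYTHON to python makes Spark spawn the python executable that is actually on your PATH.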
Alternatively, you can confirm it by running wordcount.py with the command shown in the documentation you mentioned:
C:\spark-3.3.0-bin-hadoop3>bin\spark-submit examples\src\main\python\wordcount.py README.md
This is the output of the above command, counting the words in a file:
22/06/24 11:53:33 INFO SparkContext: Running Spark version 3.3.0
...
guide](https://spark.apache.org/contributing.html): 1
information: 1
get: 1
started: 1
contributing: 1
project.: 1
...
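For reference, the Spark example is essentially a distributed version of the following plain-Python tally (a rough single-machine sketch, not the actual wordcount.py source): split the text on whitespace and count each token.

```python
from collections import Counter

def count_words(text):
    # Rough local equivalent of the Spark wordcount example:
    # split on whitespace and tally occurrences of each token.
    return Counter(text.split())

sample = "get started contributing get"
for word, count in count_words(sample).items():
    print(f"{word}: {count}")
```

Each `word: count` line in the spark-submit output above corresponds to one entry of this tally, computed across the whole README.md.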