"Python was not found but can be installed" when using spark-submit on Windows
I installed PySpark on Windows following the steps described here, with Spark version 3.1.2, package type "Pre-built for Apache Hadoop 2.7", and Python version 3.9.6.
I wanted to try spark-submit with the wordcount example, so I opened a command prompt in the SPARK_HOME directory and typed the following:
bin\spark-submit examples\src\main\python\wordcount.py README.md
However, I got this message:
Python was not found but can be installed from the Microsoft Store: ms-windows-store://pdp/?productid=9NJ46SX7X90P
I don't know what is wrong. I made sure to add Python to PATH during installation, and the command bin\pyspark also seems to work fine. I also tried Settings > Apps > App execution aliases and disabled all the Python options, but that did not help.
Edit: this is the error message I get when I try the app-execution-aliases approach:
Exception in thread "main" java.io.IOException: Cannot run program "python3": CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessBuilder.start(Unknown Source)
at org.apache.spark.deploy.PythonRunner$.main(PythonRunner.scala:97)
at org.apache.spark.deploy.PythonRunner.main(PythonRunner.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:951)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1039)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1048)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessImpl.create(Native Method)
at java.lang.ProcessImpl.<init>(Unknown Source)
at java.lang.ProcessImpl.start(Unknown Source)
... 15 more
Go to the Environment Variables window and add a new environment variable named PYSPARK_PYTHON with the value python.
As you can see after launching PySpark, a.take() only works once PySpark is able to detect Python.
Alternatively, you can confirm the fix by running wordcount.py with the command shown in the documentation you mentioned:
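A quick way to see why the variable matters: spark-submit launches whatever executable PYSPARK_PYTHON names (the stack trace above shows it defaults to "python3", which a standard Windows install does not provide). A minimal sketch, using only the standard library, of the same PATH lookup Windows performs:

```python
import os
import shutil

# spark-submit launches the executable named by PYSPARK_PYTHON
# ("python3" by default here, per the CreateProcess stack trace);
# on a standard Windows install the interpreter is called "python".
exe = os.environ.get("PYSPARK_PYTHON", "python3")
found = shutil.which(exe)  # same kind of PATH search the OS does
print(exe, "->", found)    # None reproduces "error=2, file not found"
```

If this prints None, Spark would fail with the same CreateProcess error; setting PYSPARK_PYTHON=python (or to the full path of python.exe) makes the lookup succeed.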
C:\spark-3.3.0-bin-hadoop3>bin\spark-submit examples\src\main\python\wordcount.py README.md
Here is the output of the above command, i.e. counting the words in a file:
22/06/24 11:53:33 INFO SparkContext: Running Spark version 3.3.0
...
guide](https://spark.apache.org/contributing.html): 1
information: 1
get: 1
started: 1
contributing: 1
project.: 1
...
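For reference, the counts above come from splitting README.md into whitespace-separated tokens and tallying them. A plain-Python sketch of the same counting logic (no Spark needed; the sample text is made up for illustration):

```python
from collections import Counter

# Equivalent of the Spark wordcount example's logic, minus the cluster:
# split the text on whitespace and count each token.
def word_count(text: str) -> Counter:
    return Counter(text.split())

sample = "get started contributing get information"
for word, n in word_count(sample).items():
    print(f"{word}: {n}")
```

Matching output from this sketch against the spark-submit output is a simple sanity check that your Spark/Python setup is working.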