
pyspark : NameError: name 'spark' is not defined

I am copying the pyspark.ml example from the official documentation: http://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.Transformer

data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),(Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
df = spark.createDataFrame(data, ["features"])
kmeans = KMeans(k=2, seed=1)
model = kmeans.fit(df)

However, the example above wouldn't run and gave me the following error:

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-28-aaffcd1239c9> in <module>()
      1 from pyspark import *
      2 data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),(Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
----> 3 df = spark.createDataFrame(data, ["features"])
      4 kmeans = KMeans(k=2, seed=1)
      5 model = kmeans.fit(df)

NameError: name 'spark' is not defined

What additional configuration/variable needs to be set to get the example running?

You can add

from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext('local')
spark = SparkSession(sc)

to the beginning of your code to define a SparkSession; then spark.createDataFrame() should work.
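
Putting that together with the example from the question, a minimal end-to-end sketch could look like the following (assuming Spark 2.x, where Vectors lives in pyspark.ml.linalg; on 1.x it is in pyspark.mllib.linalg):

from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
from pyspark.ml.linalg import Vectors      # Spark 2.x location; pyspark.mllib.linalg on 1.x
from pyspark.ml.clustering import KMeans

sc = SparkContext('local')
spark = SparkSession(sc)

data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),
        (Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
df = spark.createDataFrame(data, ["features"])

kmeans = KMeans(k=2, seed=1)
model = kmeans.fit(df)
print(model.clusterCenters())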

Since you are calling createDataFrame(), you need to do this:

df = sqlContext.createDataFrame(data, ["features"])

instead of this:

df = spark.createDataFrame(data, ["features"])

Here spark is standing in for the sqlContext.


In general, some people have that as sc, so if that didn't work, you could try:

df = sc.createDataFrame(data, ["features"])
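
For reference, when neither spark nor sqlContext is predefined (i.e. you are not in the pyspark shell), the sqlContext used above is normally built from a SparkContext. A minimal sketch in the Spark 1.x style (the sample data and column names here are just for illustration):

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext('local')
sqlContext = SQLContext(sc)

df = sqlContext.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.show()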

The answer by 率怀一 is good and will work the first time. But the second time you run it, it will throw the following exception:

ValueError: Cannot run multiple SparkContexts at once; existing SparkContext(app=pyspark-shell, master=local) created by __init__ at <ipython-input-3-786525f7559f>:10 

There are two ways to avoid it.

1) Use SparkContext.getOrCreate() instead of SparkContext():

from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext.getOrCreate()
spark = SparkSession(sc)

2) Call sc.stop() at the end, or before you start another SparkContext.
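
A minimal sketch of the second option, assuming the context was created with SparkContext('local') as in the earlier answer (the DataFrame here is only a placeholder to show where your work would go):

from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession

sc = SparkContext('local')
spark = SparkSession(sc)

df = spark.createDataFrame([(1,), (2,)], ["value"])
df.show()

sc.stop()   # release the context so a later SparkContext() call won't fail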

If it complains about another open session, do this:

from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext.getOrCreate()
spark = SparkSession(sc)
scraped_data = spark.read.json("/Users/reihaneh/Desktop/nov3_final_tst1/")

If you are using Python, you have to import spark as follows, and it will then create a spark session; but remember that this is an old way of doing it, though it will work.

from pyspark.shell import spark
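
Once imported this way, spark is an already-constructed SparkSession, so the original example should run unchanged. A quick sketch (the sample data and column names are just for illustration; this assumes a local PySpark installation that the shell module can start):

from pyspark.shell import spark   # importing pyspark.shell starts sc and a SparkSession named spark

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.show()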
