
Getting a Null Pointer Exception when I am trying to start PySpark

I am starting pyspark using the following command

./bin/pyspark --master yarn --deploy-mode client --executor-memory 5g

And I get the following error

15/10/14 17:19:15 INFO spark.SparkContext: SparkContext already stopped.
Traceback (most recent call last):
  File "/opt/spark-1.5.1/python/pyspark/shell.py", line 43, in <module>
    sc = SparkContext(pyFiles=add_files)
  File "/opt/spark-1.5.1/python/pyspark/context.py", line 113, in __init__
    conf, jsc, profiler_cls)
  File "/opt/spark-1.5.1/python/pyspark/context.py", line 178, in _do_init
    self._jvm.PythonAccumulatorParam(host, port))
  File "/opt/spark-1.5.1/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 701, in __call__
  File "/opt/spark-1.5.1/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.python.PythonAccumulatorParam.
: java.lang.NullPointerException
        at org.apache.spark.api.python.PythonAccumulatorParam.<init>(PythonRDD.scala:825)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:214)
        at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
        at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Thread.java:745)

For some reason, I am also getting this message

 ERROR cluster.YarnClientSchedulerBackend: Yarn application has already exited with state FINISHED!

And

WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkYarnAM@192.168.1.112:48644] has failed, address is now gated for [5000] ms. Reason: [Disassociated]

And this is probably why the SparkContext is stopping.

I am using Spark 1.5.1 and Hadoop 2.7.1 with Yarn 2.7.

Does anyone know why the Yarn application exits before anything happens?

For additional information, here is my yarn-site.xml

        <property>
                <name>yarn.nodemanager.resource.memory-mb</name>
                <value>26624</value>
        </property>
        <property>
                <name>yarn.scheduler.minimum-allocation-mb</name>
                <value>1024</value>
        </property>
        <property>
                <name>yarn.scheduler.maximum-allocation-mb</name>
                <value>26624</value>
        </property>
        <property>
                <name>yarn.nodemanager.vmem-pmem-ratio</name>
                <value>2.1</value>
        </property>

and here is my mapred-site.xml

    <property>
            <name>mapreduce.map.memory.mb</name>
            <value>2048</value>
    </property>
    <property>
            <name>mapreduce.map.java.opts</name>
            <value>-Xmx1640M</value>
            <description>Heap size for map jobs.</description>
    </property>
    <property>
            <name>mapreduce.reduce.memory.mb</name>
            <value>16384</value>
    </property>
    <property>
            <name>mapreduce.reduce.java.opts</name>
            <value>-Xmx13107M</value>
            <description>Heap size for reduce jobs.</description>
    </property>

I was able to fix this problem by adding

spark.yarn.am.memory 5g

to the spark-defaults.conf file.

I think it was a memory-related issue.

The default value for this parameter is 512m.
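
If you would rather not edit spark-defaults.conf, the same setting should also work when passed with --conf at launch. I have not rechecked this exact command, so treat it as a sketch that simply mirrors the 5g value that worked for me; tune it for your cluster:

    ./bin/pyspark --master yarn --deploy-mode client \
        --executor-memory 5g \
        --conf spark.yarn.am.memory=5g

Note that spark.yarn.am.memory only applies in yarn-client mode; in yarn-cluster mode the application master hosts the driver, so spark.driver.memory is the knob to raise instead.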

I had a somewhat similar problem. When I looked at the Hadoop GUI on port 8088 and clicked the application link in the ID column for my PySpark job, I saw the following error:

Uncaught exception: org.apache…InvalidResourceRequestException Invalid resource request, requested virtual cores < 0, or requested virtual cores > max configured, requestedVirtualCores=8, maxVirtualCores=1

If I changed my script to use --executor-cores 1 instead of my default (--executor-cores 8), then it worked. Now I just need to get the admins to change some Yarn setting to allow more cores, such as yarn.scheduler.maximum-allocation-vcores; see https://stackoverflow.com/a/29789568/215945
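
For completeness, here is a sketch of what the working request looks like (using the asker's launch command as a stand-in for my script) and of the yarn-site.xml property the admins would need to raise; the value 8 below is only an example, not something taken from my cluster:

    ./bin/pyspark --master yarn --deploy-mode client --executor-memory 5g --executor-cores 1

        <property>
                <name>yarn.scheduler.maximum-allocation-vcores</name>
                <value>8</value>
        </property>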
