
Spark-yarn ends with an error exitCode=16, how to solve that?

I am using Apache Spark 2.0.0 and Apache Hadoop 2.6.0, and I am trying to run my Spark application on my Hadoop cluster.

I used the command lines:

bin/spark-submit --class org.JavaWordCount \
    --master yarn \
    --deploy-mode cluster \
    --driver-memory 512m \
    --queue default \
    /opt/JavaWordCount.jar  \
    10

However, YARN ends with the error exitCode=16:

17/01/25 11:05:49 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
17/01/25 11:05:49 INFO impl.ContainerManagementProtocolProxy: Opening proxy : hmaster:59600
17/01/25 11:05:49 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
17/01/25 11:05:49 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 16, (reason: Shutdown hook called before final status was reported.)
17/01/25 11:05:49 INFO storage.DiskBlockManager: Shutdown hook called

I tried to solve this issue with this topic, but it doesn't give a practical answer.

Does anyone know how to solve this issue?

Thanks in advance

I just encountered this issue. The JVM reserves more virtual memory than YARN allows for the container, so the NodeManager's virtual-memory check kills it. Try adding the property

  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>

to the yarn-site.xml on every NodeManager and restarting them. It worked for me.

Refer : https://issues.apache.org/jira/browse/YARN-4714
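To see why disabling the check helps: YARN terminates a container when its virtual memory exceeds the container's physical allocation multiplied by `yarn.nodemanager.vmem-pmem-ratio` (default 2.1). The sketch below is a rough illustration of that arithmetic with made-up memory figures, not actual NodeManager code:

```python
def vmem_limit_mb(container_mem_mb: int, vmem_pmem_ratio: float = 2.1) -> float:
    """Virtual-memory ceiling YARN enforces for a container."""
    return container_mem_mb * vmem_pmem_ratio

def container_killed(vmem_used_mb: float, container_mem_mb: int,
                     vmem_pmem_ratio: float = 2.1) -> bool:
    """True if the vmem check (when enabled) would terminate the container."""
    return vmem_used_mb > vmem_limit_mb(container_mem_mb, vmem_pmem_ratio)

# A 1 GiB ApplicationMaster container may only use 1024 * 2.1 = 2150.4 MB
# of virtual memory before the check trips:
print(vmem_limit_mb(1024))

# JVMs routinely map far more virtual memory than they actually touch,
# e.g. a hypothetical 3 GB of address space, so the container gets killed:
print(container_killed(3072, 1024))
```

An alternative to disabling the check is to give the container more headroom, e.g. passing `--conf spark.yarn.driver.memoryOverhead=1024` (and the corresponding `spark.yarn.executor.memoryOverhead`) to spark-submit, so the allocation better matches the JVM's real footprint.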
