
How to submit a Python wordcount on an HDInsight Spark cluster from Jupyter

I am trying to run a Python wordcount on a Spark HDInsight cluster, and I'm running it from Jupyter. I'm not actually sure this is the right way to do it, but I couldn't find anything helpful about how to submit a standalone Python app on an HDInsight Spark cluster.

The code:

import atexit
from operator import add
from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("yarn-client").setAppName("WC")
sc = SparkContext(conf=conf)
atexit.register(lambda: sc.stop())

input = sc.textFile("wasb:///example/data/gutenberg/davinci.txt")
words = input.flatMap(lambda x: x.split())
wordCount = words.map(lambda x: (str(x),1)).reduceByKey(add)

wordCount.saveAsTextFile("wasb:///example/outputspark")

And the error message I get and don't understand:

ValueError                                Traceback (most recent call last)
<ipython-input-2-8a9d4f2cb5e8> in <module>()
      6 from operator import add
      7 import atexit
----> 8 sc = SparkContext('yarn-client')
      9 
     10 input = sc.textFile("wasb:///example/data/gutenberg/davinci.txt")

/usr/hdp/current/spark-client/python/pyspark/context.pyc in __init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls)
    108         """
    109         self._callsite = first_spark_call() or CallSite(None, None, None)
--> 110         SparkContext._ensure_initialized(self, gateway=gateway)
    111         try:
    112             self._do_init(master, appName, sparkHome, pyFiles, environment, batchSize, serializer,

/usr/hdp/current/spark-client/python/pyspark/context.pyc in _ensure_initialized(cls, instance, gateway)
    248                         " created by %s at %s:%s "
    249                         % (currentAppName, currentMaster,
--> 250                             callsite.function, callsite.file, callsite.linenum))
    251                 else:
    252                     SparkContext._active_spark_context = instance

ValueError: Cannot run multiple SparkContexts at once; existing SparkContext(app=pyspark-shell, master=yarn-client) created by __init__ at <ipython-input-1-86beedbc8a46>:7 

Is it actually possible to run a Python job this way? If yes, the problem seems to be with the SparkContext definition... I tried different ways:

sc = SparkContext('spark://headnodehost:7077', 'pyspark')

and

conf = SparkConf().setMaster("yarn-client").setAppName("WordCount1")
sc = SparkContext(conf=conf)

but with no success. What would be the right way to run the job or configure the SparkContext?

If you are running from a Jupyter notebook, then the Spark context is pre-created for you, and it would be incorrect to create a separate context. To resolve the problem, just remove the lines that create the context and start directly from:

input = sc.textFile("wasb:///example/data/gutenberg/davinci.txt")
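
For completeness, a minimal sketch of the whole job against the pre-created sc, reusing the paths from the question:

from operator import add

# sc is the SparkContext that Jupyter pre-creates on the cluster
data = sc.textFile("wasb:///example/data/gutenberg/davinci.txt")
wordCount = data.flatMap(lambda x: x.split()) \
                .map(lambda x: (x, 1)) \
                .reduceByKey(add)
wordCount.saveAsTextFile("wasb:///example/outputspark")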

If you need to run a standalone program, you can run it from the command line using pyspark, or submit it via the REST API of the Livy server running on the cluster (a sketch of the latter follows).
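
A minimal sketch of submitting a standalone script as a Livy batch job over REST; the cluster name, credentials, and script path below are placeholders, not values from the question:

import json
import requests  # third-party HTTP library, assumed installed

# HDInsight exposes Livy at https://<clustername>.azurehdinsight.net/livy;
# the cluster name and credentials here are hypothetical
url = "https://mycluster.azurehdinsight.net/livy/batches"
payload = {"file": "wasb:///example/app/wordcount.py"}  # script already uploaded to cluster storage

r = requests.post(url,
                  data=json.dumps(payload),
                  headers={"Content-Type": "application/json",
                           "X-Requested-By": "admin"},  # some Livy versions require this header
                  auth=("admin", "password"))           # cluster HTTP login
print(r.json())  # returns the batch id and state, e.g. {"id": 0, "state": "starting"}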

It looks like I can answer my own question. Some changes to the code turned out to be helpful:

conf = SparkConf()
conf.setMaster("yarn-client")
conf.setAppName("pyspark-word-count6")
sc = SparkContext(conf=conf)
atexit.register(lambda: sc.stop())  # stop the context cleanly on exit

data = sc.textFile("wasb:///example/data/gutenberg/davinci.txt")
words = data.flatMap(lambda x: x.split())
# encode to ASCII (dropping non-ASCII characters) to avoid Unicode
# issues in the saved output under Python 2
wordCount = words.map(lambda x: (x.encode('ascii', 'ignore'), 1)).reduceByKey(add)

wordCount.saveAsTextFile("wasb:///output/path")

I just resolved a similar bug in my code and found it was down to the fact that PySpark only allows one active SparkContext object at a time. Once a context has been created, any change and re-run of the code that creates another one will hit that problem and return the same initialisation error. My solution was simply to restart the notebook kernel and rerun my script; it then ran without error.
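
As a side note, newer PySpark releases provide SparkContext.getOrCreate, which sidesteps the error by reusing the active context instead of failing. A minimal sketch, assuming your Spark version has it:

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("yarn-client").setAppName("WC")
# Returns the already-running context if one exists, instead of raising
# "Cannot run multiple SparkContexts at once"
sc = SparkContext.getOrCreate(conf=conf)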
