
IBM Bluemix sc not defined

I am getting the following error when I try to run one of the samples that is given for the Apache Spark service on IBM Bluemix:

NameErrorTraceback (most recent call last)
<ipython-input-5-7de9805c358e> in <module>()
----> 1 set_hadoop_config(credentials_1)

<ipython-input-2-e790e4773aec> in set_hadoop_config(credentials)
      1 def set_hadoop_config(credentials):
      2     prefix = "fs.swift.service." + credentials['name']
----> 3     hconf = sc._jsc.hadoopConfiguration()
      4     hconf.set(prefix + ".auth.url", credentials['auth_url']+'/v3/auth/tokens')
      5     hconf.set(prefix + ".auth.endpoint.prefix", "endpoints")

NameError: global name 'sc' is not defined

I am loading a simple CSV file using the 'Insert to code' option on the data sources palette. However, the credentials that are generated do not include a 'name' attribute.

credentials['name'] is not among the key-value pairs that are generated after I click 'Insert to code'.

I want to know whether there is another way to load the data, or whether this is an IBM Bluemix issue.
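
For reference, this is roughly what I would have to do to make the sample's helper work (a sketch only; 'keystone' is just an arbitrary label I would choose, and the container and file names below are placeholders):

# Sketch: manually add the missing 'name' key to the credentials that
# 'Insert to code' generated. The value is an arbitrary label; it only
# determines the fs.swift.service.<name>.* prefix in the Hadoop config.
credentials_1['name'] = 'keystone'

set_hadoop_config(credentials_1)

# Read the CSV through the swift:// scheme once the config is set.
# '<container>' and 'mydata.csv' are placeholders for the actual
# Object Storage container and file name.
data = sc.textFile("swift://<container>.keystone/mydata.csv")
print(data.take(5))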

You were hit by a Bluemix issue. The sc variable is defined by default and holds a SparkContext. However, if the Spark master is not reachable when the Python notebook kernel starts, you'll notice a delay of several seconds; the kernel then comes up, but sc is undefined. Your question is already two days old (was it missing one of the tags?), so things should have recovered by now. Just give it another try. If it fails, restart the kernel. If you still have no sc, contact Bluemix support about an issue with the Apache Spark service.
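
As a quick check (a sketch only, assuming a standard PySpark notebook kernel), you can test whether sc exists before calling the helper:

# Sketch: check whether the notebook kernel defined 'sc'.
# In a healthy Bluemix Spark notebook 'sc' is predefined; if this
# check fails, restart the kernel rather than building your own context.
try:
    sc  # provided by the kernel when the Spark master is reachable
    print("SparkContext is available, Spark version:", sc.version)
except NameError:
    print("sc is not defined - restart the kernel and try again")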
