PySpark java heap space error
I am new to Spark, using Spark 1.6.1 with two workers, each assigned 1GB of memory and 5 cores, and running this code on a 33MB file.
The code is used to index words in Spark.
from textblob import TextBlob as tb
from textblob_aptagger import PerceptronTagger
import numpy as np
import nltk.data
import Constants
from pyspark import SparkContext, SparkConf
import nltk

TOKENIZER = nltk.data.load('tokenizers/punkt/english.pickle')
TAGGER = PerceptronTagger()  # not in the original snippet, but pos_tag references a global TAGGER

def word_tokenize(x):
    return nltk.word_tokenize(x)

def pos_tag(s):
    global TAGGER
    return TAGGER.tag(s)

def wrap_words(pair):
    '''Associate each word with its global index.'''
    index = pair[0]
    result = []
    for word, tag in pair[1]:
        word = word.lower()
        result.append({"index": index, "word": word, "tag": tag})
        index += 1
    return result

if __name__ == '__main__':
    conf = SparkConf().setMaster(Constants.MASTER_URL).setAppName(Constants.APP_NAME)
    sc = SparkContext(conf=conf)
    data = sc.textFile(Constants.FILE_PATH)
    sent = data.flatMap(word_tokenize).map(pos_tag).map(lambda x: x[0]).glom()
    num_partition = sent.getNumPartitions()
    base = list(np.cumsum(np.array(sent.map(len).collect())))
    base.insert(0, 0)
    base.pop()
    RDD = sc.parallelize(base, num_partition)
    tagged_doc = RDD.zip(sent).map(wrap_words).cache()
For smaller files (< 25MB) the code works fine, but it throws a java heap space error for files larger than 25MB.
How can I resolve this issue, or is there an alternative approach?
That's because of the .collect(). When you transform your RDD into a classic Python variable (or np.array), you lose the distribution: all the data is pulled back to one place, the driver, which is what exhausts the heap.
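An alternative worth considering is `rdd.zipWithIndex()`, which assigns a global index to every element distributedly (it runs one extra job to count partition sizes, but never ships the words themselves to the driver). To make clear what your driver-side code is computing, here is a minimal plain-Python sketch of the cumulative-offset logic, using hypothetical sample data and no Spark:

```python
from itertools import accumulate

# Simulated partitions of (word, tag) pairs, standing in for the glom()'d RDD.
# (Hypothetical sample data, not from the original 33MB file.)
partitions = [
    [("Hello", "UH"), ("World", "NN")],
    [("Spark", "NN"), ("is", "VBZ"), ("fast", "JJ")],
]

# What base = np.cumsum(sent.map(len).collect()) builds on the driver:
# the starting global index of each partition.
lengths = [len(p) for p in partitions]
base = [0] + list(accumulate(lengths))[:-1]  # first partition starts at 0

# The per-partition work done by wrap_words: each word gets a global index.
result = []
for offset, part in zip(base, partitions):
    for i, (word, tag) in enumerate(part):
        result.append({"index": offset + i, "word": word.lower(), "tag": tag})
```

Note that only the per-partition *lengths* are truly needed on the driver; `zipWithIndex()` exploits exactly this, so the full tagged data never has to be collected.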