
Spark RDD: Using collect() on a range() object

I want to convert the numbers 0 to 99 into an RDD.

rd1 = range(1, 100)         # a Python range object covering 1..99
test = sc.parallelize(rd1)  # distribute it across the cluster as an RDD

When I use the collect() function...

print(test.collect())

...I receive the following error message:

PicklingError: Could not pickle object as excessively deep recursion required.

According to the documentation, this is supposed to work. Can you tell me what I'm doing wrong?

Thank you very much.

In case someone else runs into the same problem: I was able to solve it by selecting and executing only the lines I actually wanted to run.

I think other scripts running in parallel in the same session caused the error.
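
For comparison, here is a minimal, self-contained sketch of the same steps, assuming a local PySpark installation. It creates its own SparkContext (the master URL and app name are illustrative) instead of relying on a notebook-provided sc, so nothing else running in the same session can interfere:

# Minimal sketch, assuming pyspark is installed locally.
from pyspark import SparkContext

sc = SparkContext("local[*]", "range-to-rdd")  # illustrative master/app name

rd1 = range(100)            # the numbers 0..99, matching the stated goal
test = sc.parallelize(rd1)  # distribute the range as an RDD

print(test.collect())       # prints [0, 1, 2, ..., 99]

sc.stop()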

