
Spark RDD: Using collect() on a range() object

I want to convert the numbers 0 to 99 into an RDD.

rd1 = range(0, 100)          # the numbers 0 to 99
test = sc.parallelize(rd1)

When I use the collect() function...

print(test.collect())

...I receive the following error message:

PicklingError: Could not pickle object as excessively deep recursion required.

According to the documentation, it's supposed to work. Can you tell me what I'm doing wrong?

Thank you very much.

In case someone else runs into the same problem: I was able to solve it by running only the lines I actually wanted to execute.

I think other scripts that were running in parallel caused the error.
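
For anyone who hits the same PicklingError, here is a minimal, self-contained sketch (assuming a local PySpark installation; the app name is an arbitrary placeholder). Creating a fresh SparkContext in its own interpreter, rather than sharing one with other running scripts, avoids the kind of interference described above:

from pyspark import SparkContext

sc = SparkContext("local[*]", "range-collect-example")  # hypothetical app name

rd1 = range(0, 100)          # the numbers 0 to 99
test = sc.parallelize(rd1)   # distribute them as an RDD

print(test.collect())        # [0, 1, ..., 99]

sc.stop()                    # release the context when done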
