
Python on Mac Terminal - Killed: 9

I wrote the following program in Python (working with a text file of 1000 words and finding all permutations of 4 words) and ran it on the Mac terminal:

from itertools import permutations

with open('wordlist.txt') as f:
    content = f.readlines()
content = [x.rstrip() for x in content] 

g = open("finalwordlist.txt", "a")

g.write('%s' % "\n".join(map("".join, permutations(content,4))))

After a short while, I got the following output from the terminal:

Killed: 9

No output was written to the output text file, so I assume the program terminated before the write() step.

This program worked when I was finding permutations of sizes less than 4 (e.g. 1, 2, 3). Was the Killed: 9 because of the size of all the permutations? Or is it something to do with the Mac terminal environment?

How would I be able to work around this error? Thanks!

Let's see... you've generated a single Python object containing roughly 1000^4 permutations of 4 words each. Average word length in running text is about 4.5 letters, but words in a lexicon tend to be longer -- I'll take a stab that it's more than 25 characters per permutation. This gives you 2.5 * 10^13 bytes in your object.
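The estimate is easy to check numerically; `math.perm` (Python 3.8+) counts ordered selections, and the per-permutation size of 25 bytes is the same rough assumption as above:

```python
from math import perm

n_perms = perm(1000, 4)  # 1000 * 999 * 998 * 997 ordered 4-word selections
print(n_perms)           # 994010994000 -- roughly 10^12 permutations
print(n_perms * 25)      # 24850274850000 bytes, about 2.5 * 10^13
```

So the single joined string is on the order of tens of terabytes before `write` is ever reached.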

How does your RAM allocation do with a single string of 25 terabytes? If you blow your memory limits, what is the error message?

Yes, this is an issue with your Mac environment: Killed: 9 is not an error message from Python. It appears that your system ran out of memory so badly that the OS killed the process before Python ever got the chance to raise its own memory fault.

BTW, the repair is "obvious" -- quit trying to write 25 TB in one call. Nothing in the output requires a single block write. Instead, write the permutations one at a time:

for combo in permutations(content,4):
    g.write("".join(combo) + "\n")
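This works because `itertools.permutations` is a lazy iterator: the loop holds only one tuple in memory at a time instead of materializing the whole result. A minimal sketch with a hypothetical 3-word list:

```python
from itertools import permutations

words = ["cat", "dog", "sun"]         # hypothetical tiny word list
combos = permutations(words, 2)       # lazy: nothing is materialized yet
first = next(combos)                  # one tuple produced on demand
rest = ["".join(c) for c in combos]   # the remaining 5 of the 6 permutations
print(first)                          # ('cat', 'dog')
print(rest)
```

Writing each joined permutation as it is produced keeps memory usage flat regardless of how many permutations there are; only disk space limits the output.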

Notice: technical posts on this site follow the CC BY-SA 4.0 license. If you repost, please credit this site or the original source. For any questions, contact: yoyou2525@163.com.

 