OSError: [Errno 24] Too many open files using Nibabel
I have a Python 3.6 program that uses the nibabel package to analyze medical images in NIfTI format.
import glob
import nibabel as nib
health = [nib.load(pt) for pt in glob.glob(healthdir+'*.nii')] # len = 200
health_data = [h.get_data() for h in health]
The last line raised OSError: [Errno 24] Too many open files. I used the following code and found that the error occurs on the last element.
health_data = []
for i in range(len(health)):
    try:
        health_data.append(health[i].get_data())
    except OSError:
        print(i)  # 199
I have tried searching related questions such as Nibabel: IOError: [Errno 24] Too many open files: . However, it doesn't solve the problem. Also, I would prefer not to raise the limit with ulimit. Thanks!
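For context: the limit this error refers to is the per-process open-file-descriptor limit, which lazy loading can exhaust because each loaded image keeps its file handle open. On Linux/macOS you can inspect the limit from Python with the standard-library resource module (a diagnostic sketch, not part of the original question):

```python
import resource

# Query the per-process open-file limit that Errno 24 refers to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft)  # 200 lazily-opened NIfTI files can exceed a low soft limit such as 256
```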
Not familiar with Nibabel, but try loading one file at a time, copying the data out, and dropping the image object so its file handle can be released (nibabel images are not context managers, so a plain reference drop is used here instead of a with-statement):

health_data = []
for filepath in glob.glob(healthdir + '*.nii'):
    img = nib.load(filepath)
    health_data.append(img.get_data())
    del img  # drop the reference so the underlying file handle can be closed
**NOT TESTED**
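The general pattern behind this answer can be illustrated with plain Python files (a standalone sketch, not nibabel-specific): read the data eagerly, let the handle close immediately, and keep only the in-memory copy instead of holding hundreds of lazy readers open at once.

```python
import os
import tempfile

# Create a few small files to stand in for the NIfTI images.
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(5):
    p = os.path.join(tmpdir, f'{i}.bin')
    with open(p, 'wb') as f:
        f.write(bytes([i]) * 4)
    paths.append(p)

# Read eagerly: each handle is closed as soon as the with-block exits,
# so only one file is ever open at a time, no matter how many files there are.
data = []
for p in paths:
    with open(p, 'rb') as f:
        data.append(f.read())  # the bytes stay in memory after the file closes

print(len(data))  # → 5
```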
You may need to delete the object after using it.
import os
import math

import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np

def show_origin_image(name, s=100, max_limit=None, min_limit=None):
    origin = name
    # Skip hidden files such as .DS_Store
    file_name_list = [each for each in os.listdir(origin) if not each.startswith('.')]
    file_name_list = file_name_list[min_limit:max_limit]
    dimension = 2
    width_num = 6
    height_num = math.ceil(len(file_name_list) / width_num)
    plt.figure(figsize=(15, height_num * 2.8))
    data_list = []
    for n, each in enumerate(file_name_list, 1):
        # keep_file_open=False asks nibabel not to hold the file handle open
        agent = nib.load(os.path.join(origin, each), keep_file_open=False)
        three_d_data = np.asarray(agent.dataobj)
        image = np.take(three_d_data, s, dimension)
        plt.subplot(height_num, width_num, n)
        plt.imshow(image, 'gray')
        plt.axis('off')
        data_list.append(three_d_data)
        # add delete operation!
        del agent
    return data_list
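As a side note on the slicing above: np.take(three_d_data, s, dimension) pulls out the 2-D slice at index s along the given axis, equivalent to three_d_data[:, :, s] when dimension is 2. A toy example:

```python
import numpy as np

vol = np.arange(24).reshape(2, 3, 4)     # a toy 3-D "volume"
sl = np.take(vol, 1, axis=2)             # slice at index 1 along the third axis
print(sl.shape)                          # → (2, 3)
print(np.array_equal(sl, vol[:, :, 1]))  # → True
```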
I had the same problem importing a number of self-generated NIfTI images.
Using nilearn instead of nibabel solved the problem for me.
from nilearn.image import smooth_img
import glob

image_dir = glob.glob(some_path + '*.nii')
images = smooth_img(image_dir, fwhm=None)
image_maps = []
for img in images:
    img_data = img.get_fdata()
    image_maps.append(img_data)
    del img_data
This worked for me with 10,000 images and took around 12 minutes.
smooth_img reads in the NIfTI and applies a smoothing kernel of size fwhm (full width at half maximum... I think). I did this because it works, and I need the smoothing in a different situation in the script anyway. You can also check out nilearn.image.load_img; it should do the same.
Best