
Load 3D NIfTI images and save all slices for axial, coronal, sagittal?

I have some 3D NIfTI datasets of brain MRI scans (FLAIR, T1, T2, ...). The FLAIR scans, for example, are 144x512x512 with a voxel size of 1.1 x 0.5 x 0.5, and I want 2D slices from the axial, coronal and sagittal views, which I use as input for my CNN.

What I want to do: read in .nii files with nibabel, store them as NumPy arrays and save the axial, coronal and sagittal slices as 2D PNGs.

What I tried:

- use the med2image Python library

- wrote my own Python script with nibabel, NumPy and image (roughly sketched below)
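A simplified sketch of that script, assuming Pillow for the PNG writing and a placeholder file name (neither is stated in the original post):

import os
import numpy
import nibabel
from PIL import Image

image = nibabel.load('flair.nii')        # placeholder path to the NIfTI volume
image_array = image.get_fdata()          # e.g. shape (144, 512, 512)

# scale to 0..255 so the slices can be written as 8-bit PNGs
norm = (image_array - image_array.min()) / (image_array.max() - image_array.min())
norm = numpy.uint8(norm * 255)

os.makedirs('slices', exist_ok=True)     # placeholder output folder
for i in range(norm.shape[0]):           # slices along the first array axis
    Image.fromarray(norm[i, :, :]).save(os.path.join('slices', f'slice_{i:03d}.png'))
# slices along the other two axes (norm[:, i, :], norm[:, :, i]) give the remaining views;
# which array axis is axial, coronal or sagittal depends on the orientation in the header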

PROBLEM: the axial and coronal pictures are somehow stretched in one direction; the sagittal ones come out as they should.

I tried to debug the Python script and used Matplotlib to show the array that I get after

import nibabel

image = nibabel.load(inputfile)       # inputfile: path to the .nii volume
image_array = image.get_fdata()       # raw voxel data as a NumPy array

by using, for example:

import matplotlib.pyplot as plt
plt.imshow(image_array[:, :, 250])
plt.show()

and found out that the data is already stretched at that point.

I figured out how to get the desired output with:

header = image.header
sX = header['pixdim'][1]     # voxel size along the first axis (mm)
sY = header['pixdim'][2]     # voxel size along the second axis (mm)
sZ = header['pixdim'][3]     # voxel size along the third axis (mm)
plt.imshow(image_array[:, :, 250], aspect=sX/sZ)

But how can I apply something like "aspect" when saving my image? Or is there a way to load the .nii file with such parameters already applied, so that I get data I can work with directly?
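For illustration, one way to keep the aspect would be to save the Matplotlib figure itself instead of the raw array (this is just a sketch, not part of my script, and the output name is a placeholder), although that writes a rendered figure rather than the pixel data:

fig, ax = plt.subplots()
ax.imshow(image_array[:, :, 250], cmap='gray', aspect=sX/sZ)     # same view as above
ax.axis('off')                                                   # no axes or ticks in the PNG
fig.savefig('slice_250.png', bbox_inches='tight', pad_inches=0)  # placeholder file name
plt.close(fig)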

It looks like the pixel dimensions are not taken into account when nibabel loads the .nii image, and unfortunately I haven't found a way to solve this problem.

I found out that it doesn't make a difference for training my ML model whether the pictures are stretched or not, since I do this kind of rescaling in data augmentation anyway. Opening the NIfTI volumes in Slicer or MRICroGL showed them as expected, since these programs also take the header into account. The predictions were perfectly fine as well (even though the pictures were "stretched" when saved slice-wise).

Still, it annoyed me to look at stretched pictures, so I simply implemented some resizing with cv2:

import os
import numpy
import cv2

IMAGE_WIDTH, IMAGE_HEIGHT = 256, 256   # target slice size (example values)

def saveSlice(img, fname, path):
    img = numpy.uint8(img*255)          # expects img scaled to 0..1
    fout = os.path.join(path, f'{fname}.png')
    img = cv2.resize(img, dsize=(IMAGE_WIDTH, IMAGE_HEIGHT), interpolation=cv2.INTER_LINEAR)
    cv2.imwrite(fout, img)
    print(f'[+] Slice saved: {fout}', end='\r')
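Called in a loop over one array axis of the normalized volume, usage could look roughly like this (the volume name, the normalization and the output folder are my own placeholders, not part of the original post):

vol = image_array                                    # loaded via nibabel as shown above
vol = (vol - vol.min()) / (vol.max() - vol.min())    # scale to 0..1 for saveSlice
outdir = 'slices_axis2'                              # placeholder output folder
os.makedirs(outdir, exist_ok=True)
for i in range(vol.shape[2]):                        # slices along the last array axis
    saveSlice(vol[:, :, i], f'slice_{i:03d}', outdir)
# the other two views come from vol[:, i, :] and vol[i, :, :]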

The results are really good and it works pretty well for me.
