
Best way to make a 3D model out of CT scans with Python?

I want to make a 3D model of a heart from CT scans. I tried Blender, but it didn't work out well: this Python script runs far too slowly. Does anyone know a better way to do this?

import bpy
import os
import pydicom
import numpy as np

path = "./Desktop/EMC/"

files = sorted(os.listdir(path + "/Data/Head/"))  # lexicographic sort; assumes zero-padded filenames

data = np.zeros((245, 512, 512))  # 245 slices of 512x512 pixels

for i in range(len(files)):
    layer = pydicom.dcmread(path + "/Data/Head/" + files[i])  # read DICOM file
    data[i] = layer.pixel_array  # raw slice intensities

for yy in range(245):
    for xx in range(512):
        for zz in range(512):
            print("X: {}, Y: {}, Z: {}".format(xx, yy, zz))

            c = data[yy, xx, zz]

            bpy.ops.mesh.primitive_cube_add(location=(xx / 500, zz / 500, yy / 500))
            bpy.ops.transform.resize(value=(0.001, 0.001, 0.001))

            activeObject = bpy.context.active_object  # the cube just added
            mat = bpy.data.materials.new(name="MaterialName")
            activeObject.data.materials.append(mat)  # add material

            bpy.context.object.active_material.diffuse_color = (c, c, c)  # grey value from the scan

            if zz == 511:        #Join objects and remove doubles
                item='MESH'
                bpy.ops.object.select_all(action='DESELECT')
                bpy.ops.object.select_by_type(type=item)
                bpy.ops.object.join()

                bpy.ops.object.mode_set(mode='EDIT')
                bpy.ops.mesh.remove_doubles()
                bpy.ops.object.mode_set(mode='OBJECT')

print("DONE")

Blender is faster with one object that has one million vertices than with 1,000 objects of 1,000 vertices each. Rather than creating many cube objects and joining them together, use bmesh to add all the cubes within a single mesh object.

Another consideration is the use of operators: each operator call triggers a scene update and redraw. By working directly with the mesh data and then doing one update at the end, you avoid a lot of unneeded updates.

Using random colours instead of CT images, the following script runs in about 10% of the time. I also made the cubes bigger: if the initial cubes are too small, remove doubles can merge more than you want. You can always scale the mesh down after building it.

import bpy
import bmesh
import mathutils
import numpy as np

# replace these two lines with your data filling code
x_size = y_size = z_size = 10
data = np.random.rand(x_size, y_size, z_size)

me = bpy.data.meshes.new("Mesh_new")
scene = bpy.context.scene
obj = bpy.data.objects.new("CT_Scan_new", me)
scene.objects.link(obj)
scene.objects.active = obj
obj.select = True

bm = bmesh.new()

for yy in range(y_size):
    for xx in range(x_size):
        for zz in range(z_size):
            c = data[yy, xx, zz]
            bmesh.ops.create_cube(bm, size=0.1,
                    matrix=mathutils.Matrix.Translation((xx / 10, zz / 10, yy / 10)))
            mat = bpy.data.materials.new(name="MaterialName")
            mat.diffuse_color = (c, c, c)
            obj.data.materials.append(mat)
            mat_idx = len(obj.data.materials)-1
            bm.faces.ensure_lookup_table()
            for i in range(1, 7):
                # assign the new material to the six faces just created
                # (negative indices -1..-6; range(6) with -i would hit faces[0])
                bm.faces[-i].material_index = mat_idx

bmesh.ops.remove_doubles(bm, verts=bm.verts, dist=0.001)
bm.to_mesh(me)
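Two details worth checking before feeding real scans through this: `pixel_array` returns raw intensities (often 12-bit, i.e. up to a few thousand), while `diffuse_color` expects channels in [0, 1]; and creating a fresh material per cube allocates one material per voxel. A sketch of normalizing the volume once and quantizing it to a shared 256-level palette — the random volume here is a stand-in for the DICOM data, and the shapes are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical CT volume; real data would come from the pydicom loop above.
data = rng.integers(0, 4096, size=(8, 8, 8)).astype(np.float64)

# Normalize raw intensities into [0, 1]: diffuse_color expects channels in that range.
norm = (data - data.min()) / (data.max() - data.min())

# Quantize to 256 grey levels so cubes can share one material per level
# instead of allocating a fresh material per cube.
levels = 256
palette_index = np.minimum((norm * levels).astype(int), levels - 1)
```

You would then create the 256 materials up front and set each face's `material_index` from `palette_index`, which keeps the material count bounded regardless of volume size.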

You can do this without ANY Python! Just use Meshroom.
