
Speed up function using cython

I am trying to speed up one of my functions.

def get_scale_local_maximas(cube_coordinates, laplacian_cube):
    """
    Check the provided cube coordinates for scale space local maxima.
    Returns only the points that satisfy the criteria.

    A point is considered to be a local maximum if its value is greater
    than the value of the point on the next scale level and the point
    on the previous scale level. If the tested point is located on the
    first scale level or on the last one, then only one inequality needs
    to hold for the point to be a local scale maximum.

    Parameters
    ----------
    cube_coordinates : (n, 3) ndarray
        A 2d array with each row representing 3 values, ``(y, x, scale_level)``,
        where ``(y, x)`` are the coordinates of the blob and ``scale_level`` is
        the position of the point in scale space.
    laplacian_cube : ndarray of floats
        Laplacian of Gaussian scale space.

    Returns
    -------
    output : (n, 3) ndarray
        The rows of cube_coordinates that satisfy the local maximum
        criteria in scale space.

    Examples
    --------
    >>> one = np.array([[1, 2, 3], [4, 5, 6]])
    >>> two = np.array([[7, 8, 9], [10, 11, 12]])
    >>> three = np.array([[0, 0, 0], [0, 0, 0]])
    >>> check_coords = np.array([[1, 0, 1], [1, 0, 0], [1, 0, 2]])
    >>> lapl_dummy = np.dstack([one, two, three])
    >>> get_scale_local_maximas(check_coords, lapl_dummy)
    array([[1, 0, 1]])
    """
    amount_of_layers = laplacian_cube.shape[2]
    amount_of_points = cube_coordinates.shape[0]

    # Preallocate the index array. Fill it with True; points that fail a
    # comparison below are marked False.
    accepted_points_index = np.ones(amount_of_points, dtype=bool)

    for point_index, interest_point_coords in enumerate(cube_coordinates):
        # Row coordinate
        y_coord = interest_point_coords[0]
        # Column coordinate
        x_coord = interest_point_coords[1]
        # Layer number starting from the smallest sigma
        point_layer = interest_point_coords[2]
        point_response = laplacian_cube[y_coord, x_coord, point_layer]

        # Check the point below the current one
        if point_layer != 0:
            lower_point_response = laplacian_cube[y_coord, x_coord, point_layer-1]
            if lower_point_response >= point_response:
                accepted_points_index[point_index] = False
                continue

        # Check the point above the current one
        if point_layer != (amount_of_layers-1):
            upper_point_response = laplacian_cube[y_coord, x_coord, point_layer+1]
            if upper_point_response >= point_response:
                accepted_points_index[point_index] = False
                continue

    # Return only accepted points
    return cube_coordinates[accepted_points_index]
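For a quick correctness check before optimizing, the docstring example can be run as a standalone script (the function body below is a condensed copy of the pure-Python version above):

```python
import numpy as np

def get_scale_local_maximas(cube_coordinates, laplacian_cube):
    # Condensed copy of the pure-Python function above.
    amount_of_layers = laplacian_cube.shape[2]
    amount_of_points = cube_coordinates.shape[0]
    accepted_points_index = np.ones(amount_of_points, dtype=bool)
    for point_index, (y, x, layer) in enumerate(cube_coordinates):
        response = laplacian_cube[y, x, layer]
        # Reject if the previous scale level is at least as strong
        if layer != 0 and laplacian_cube[y, x, layer - 1] >= response:
            accepted_points_index[point_index] = False
            continue
        # Reject if the next scale level is at least as strong
        if layer != amount_of_layers - 1 and laplacian_cube[y, x, layer + 1] >= response:
            accepted_points_index[point_index] = False
    return cube_coordinates[accepted_points_index]

one = np.array([[1, 2, 3], [4, 5, 6]])
two = np.array([[7, 8, 9], [10, 11, 12]])
three = np.array([[0, 0, 0], [0, 0, 0]])
lapl_dummy = np.dstack([one, two, three])
check_coords = np.array([[1, 0, 1], [1, 0, 0], [1, 0, 2]])
result = get_scale_local_maximas(check_coords, lapl_dummy)
print(result)  # [[1 0 1]]
```

Having this as a script makes it easy to re-run after every optimization attempt.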

Here is my attempt to speed it up using Cython:

# cython: cdivision=True
# cython: boundscheck=False
# cython: nonecheck=False
# cython: wraparound=False
import numpy as np
cimport numpy as cnp

def get_scale_local_maximas(cube_coordinates, cnp.ndarray[cnp.double_t, ndim=3] laplacian_cube):
    """
    Check the provided cube coordinates for scale space local maxima.
    Returns only the points that satisfy the criteria.

    A point is considered to be a local maximum if its value is greater
    than the value of the point on the next scale level and the point
    on the previous scale level. If the tested point is located on the
    first scale level or on the last one, then only one inequality needs
    to hold for the point to be a local scale maximum.

    Parameters
    ----------
    cube_coordinates : (n, 3) ndarray
        A 2d array with each row representing 3 values, ``(y, x, scale_level)``,
        where ``(y, x)`` are the coordinates of the blob and ``scale_level`` is
        the position of the point in scale space.
    laplacian_cube : ndarray of floats
        Laplacian of Gaussian scale space.

    Returns
    -------
    output : (n, 3) ndarray
        The rows of cube_coordinates that satisfy the local maximum
        criteria in scale space.

    Examples
    --------
    >>> one = np.array([[1, 2, 3], [4, 5, 6]])
    >>> two = np.array([[7, 8, 9], [10, 11, 12]])
    >>> three = np.array([[0, 0, 0], [0, 0, 0]])
    >>> check_coords = np.array([[1, 0, 1], [1, 0, 0], [1, 0, 2]])
    >>> lapl_dummy = np.dstack([one, two, three])
    >>> get_scale_local_maximas(check_coords, lapl_dummy)
    array([[1, 0, 1]])
    """
    cdef Py_ssize_t y_coord, x_coord, point_layer, point_index
    cdef cnp.double_t point_response, lower_point_response, upper_point_response
    cdef Py_ssize_t amount_of_layers = laplacian_cube.shape[2]
    cdef Py_ssize_t amount_of_points = cube_coordinates.shape[0]

    # Preallocate the index array. Fill it with True; points that fail a
    # comparison below are marked False.
    accepted_points_index = np.ones(amount_of_points, dtype=bool)

    for point_index in range(amount_of_points):
        interest_point_coords = cube_coordinates[point_index]
        # Row coordinate
        y_coord = interest_point_coords[0]
        # Column coordinate
        x_coord = interest_point_coords[1]
        # Layer number starting from the smallest sigma
        point_layer = interest_point_coords[2]
        point_response = laplacian_cube[y_coord, x_coord, point_layer]

        # Check the point below the current one
        if point_layer != 0:
            lower_point_response = laplacian_cube[y_coord, x_coord, point_layer-1]
            if lower_point_response >= point_response:
                accepted_points_index[point_index] = False
                continue

        # Check the point above the current one
        if point_layer != (amount_of_layers-1):
            upper_point_response = laplacian_cube[y_coord, x_coord, point_layer+1]
            if upper_point_response >= point_response:
                accepted_points_index[point_index] = False
                continue

    # Return only accepted points
    return cube_coordinates[accepted_points_index]
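For completeness, the extension can be built with a minimal setup script (a sketch; the filename `compiled.pyx` is an assumption, adjust it to the actual module name):

```python
# setup.py -- minimal build sketch; "compiled.pyx" is an assumed filename.
# Build in place with: python setup.py build_ext --inplace
from setuptools import setup
from Cython.Build import cythonize
import numpy as np

setup(
    ext_modules=cythonize("compiled.pyx"),
    # numpy headers are needed because of "cimport numpy as cnp"
    include_dirs=[np.get_include()],
)
```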

But I don't see any speed improvement. I also tried replacing cnp.ndarray[cnp.double_t, ndim=3] with a memoryview, cnp.double_t[:, :, ::1], but that only slowed the whole thing down. I would appreciate any hints or corrections to my code. I am relatively new to Cython and may be doing something wrong.

Edit:

I completely rewrote the function in Cython:

def get_scale_local_maximas(cnp.ndarray[cnp.int_t, ndim=2] cube_coordinates, cnp.ndarray[cnp.double_t, ndim=3] laplacian_cube):
    """
    Check the provided cube coordinates for scale space local maxima.
    Returns only the points that satisfy the criteria.

    A point is considered to be a local maximum if its value is greater
    than the value of the point on the next scale level and the point
    on the previous scale level. If the tested point is located on the
    first scale level or on the last one, then only one inequality needs
    to hold for the point to be a local scale maximum.

    Parameters
    ----------
    cube_coordinates : (n, 3) ndarray
        A 2d array with each row representing 3 values, ``(y, x, scale_level)``,
        where ``(y, x)`` are the coordinates of the blob and ``scale_level`` is
        the position of the point in scale space.
    laplacian_cube : ndarray of floats
        Laplacian of Gaussian scale space.

    Returns
    -------
    output : (n, 3) ndarray
        The rows of cube_coordinates that satisfy the local maximum
        criteria in scale space.

    Examples
    --------
    >>> one = np.array([[1, 2, 3], [4, 5, 6]])
    >>> two = np.array([[7, 8, 9], [10, 11, 12]])
    >>> three = np.array([[0, 0, 0], [0, 0, 0]])
    >>> check_coords = np.array([[1, 0, 1], [1, 0, 0], [1, 0, 2]])
    >>> lapl_dummy = np.dstack([one, two, three])
    >>> get_scale_local_maximas(check_coords, lapl_dummy)
    array([[1, 0, 1]])
    """
    cdef Py_ssize_t y_coord, x_coord, point_layer, point_index
    cdef cnp.double_t point_response, lower_point_response, upper_point_response
    cdef Py_ssize_t amount_of_layers = laplacian_cube.shape[2]
    cdef Py_ssize_t amount_of_points = cube_coordinates.shape[0]

    # Preallocate the index array. Fill it with True; points that fail a
    # comparison below are marked False.
    accepted_points_index = np.ones(amount_of_points, dtype=bool)

    for point_index in range(amount_of_points):
        interest_point_coords = cube_coordinates[point_index]
        # Row coordinate
        y_coord = interest_point_coords[0]
        # Column coordinate
        x_coord = interest_point_coords[1]
        # Layer number starting from the smallest sigma
        point_layer = interest_point_coords[2]
        point_response = laplacian_cube[y_coord, x_coord, point_layer]

        # Check the point below the current one
        if point_layer != 0:
            lower_point_response = laplacian_cube[y_coord, x_coord, point_layer-1]
            if lower_point_response >= point_response:
                accepted_points_index[point_index] = False
                continue

        # Check the point above the current one
        if point_layer != (amount_of_layers-1):
            upper_point_response = laplacian_cube[y_coord, x_coord, point_layer+1]
            if upper_point_response >= point_response:
                accepted_points_index[point_index] = False
                continue

    # Return only accepted points
    return cube_coordinates[accepted_points_index]

After that, I ran some benchmarks with my function and the suggested vectorized function:

%timeit compiled.get_scale_local_maximas_np(coords, lapl_dummy)
%timeit compiled.get_scale_local_maximas(coords, lapl_dummy)

%timeit dynamic.get_scale_local_maximas_np(coords, lapl_dummy)
%timeit dynamic.get_scale_local_maximas(coords, lapl_dummy)

10000 loops, best of 3: 101 µs per loop
1000 loops, best of 3: 328 µs per loop
10000 loops, best of 3: 103 µs per loop
1000 loops, best of 3: 1.6 ms per loop

The compiled namespace refers to these two functions compiled with Cython.

The dynamic namespace refers to the plain Python file.

So I conclude that in this case the numpy approach is better.

You can still improve your Python code, because you haven't quite "done 98% of the work in numpy" yet: you are still iterating over the rows of the coordinate array and performing 1-2 checks per row.

You can vectorize it completely using numpy's "fancy indexing" and masks:

def get_scale_local_maximas_full_np(coords, cube):
    x, y, z = [ coords[:, ind] for ind in range(3) ]

    point_responses = cube[x, y, z]
    lowers = point_responses.copy()
    uppers = point_responses.copy()
    not_layer_0 = z > 0
    lower_responses = cube[x[not_layer_0], y[not_layer_0], z[not_layer_0]-1]
    lowers[not_layer_0] = lower_responses  

    not_max_layer = z < (cube.shape[2] - 1)
    upper_responses = cube[x[not_max_layer], y[not_max_layer], z[not_max_layer]+1]
    uppers[not_max_layer] = upper_responses

    lo_check = np.ones(z.shape, dtype=bool)
    lo_check[not_layer_0] = (point_responses > lowers)[not_layer_0]
    hi_check = np.ones(z.shape, dtype=bool)
    hi_check[not_max_layer] = (point_responses > uppers)[not_max_layer]

    return coords[lo_check & hi_check]
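An alternative fully vectorized formulation (my own sketch; the function name is made up, and it assumes the cube has a float dtype) pads the scale axis with -inf, so the edge layers automatically lose every comparison and no boundary masks are needed:

```python
import numpy as np

def get_scale_local_maximas_padded(coords, cube):
    """Vectorized local-maxima filter; `cube` must have a float dtype."""
    # Pad the scale axis with -inf so that the first and last layers
    # compare against a value that can never win; the boundary branches
    # of the loop version then disappear.
    padded = np.pad(cube, ((0, 0), (0, 0), (1, 1)),
                    mode='constant', constant_values=-np.inf)
    y, x, z = coords[:, 0], coords[:, 1], coords[:, 2]
    resp = padded[y, x, z + 1]    # +1 compensates for the left pad
    lower = padded[y, x, z]       # previous scale level
    upper = padded[y, x, z + 2]   # next scale level
    return coords[(resp > lower) & (resp > upper)]

one = np.array([[1, 2, 3], [4, 5, 6]], dtype=float)
two = np.array([[7, 8, 9], [10, 11, 12]], dtype=float)
three = np.zeros((2, 3))
check_coords = np.array([[1, 0, 1], [1, 0, 0], [1, 0, 2]])
result = get_scale_local_maximas_padded(check_coords, np.dstack([one, two, three]))
print(result)  # [[1 0 1]]
```

This trades one copy of the cube (the pad) for simpler masking logic, so whether it is faster than the masked version depends on the cube size relative to the number of coordinates.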

I generated a bigger set of data to test the performance:

lapl_dummy = np.random.rand(100,100,100)
coords = np.random.randint(0, 100, size=(1000, 3))

I get the following timing results:

In [146]: %timeit get_scale_local_maximas_full_np(coords, lapl_dummy)
10000 loops, best of 3: 175 µs per loop

In [147]: %timeit get_scale_local_maximas(coords, lapl_dummy)
100 loops, best of 3: 2.24 ms per loop

But of course be careful with performance testing, because it often depends on the data used.

I have too little experience with Cython to help you on that side.
