
Efficiently calculating grid-based point density in 3d point cloud

I have a 3d point cloud matrix, and I am trying to calculate the largest point density within a smaller volume inside the matrix. I am currently using a 3D grid-histogram system, where I loop through every point in the matrix and increment the value of the corresponding grid cell. Then I can simply take the maximum value of the grid matrix.

I have written code that works, but it is far too slow for what I am trying to do:

import numpy as np

def densityPointCloud(points, gridCount, gridSize):
    hist = np.zeros((gridCount, gridCount, gridCount), np.uint16)

    rndPoints = np.rint(points/gridSize) + int(gridCount/2)
    rndPoints = rndPoints.astype(int)


    for point in rndPoints:
        if np.amax(point) < gridCount and np.amin(point) >= 0:
            hist[point[0]][point[1]][point[2]] += 1

    return hist


cloud = (np.random.rand(100000, 3)*10)-5
histogram = densityPointCloud(cloud , 50, 0.2)
print(np.amax(histogram))

Are there any shortcuts so I can do this more efficiently?

Here's a start:

import numpy as np
import time
from collections import Counter

# if you need the whole histogram object
def dpc2(points, gridCount, gridSize):

    hist = np.zeros((gridCount, gridCount, gridCount), np.uint16)
    rndPoints = np.rint(points/gridSize) + int(gridCount/2)
    rndPoints = rndPoints.astype(int)
    inbounds = np.logical_and(np.amax(rndPoints,axis = 1) < gridCount, np.amin(rndPoints,axis = 1) >= 0)

    for point in rndPoints[inbounds,:]:
        hist[point[0]][point[1]][point[2]] += 1

    return hist

# just care about a max point
def dpc3(points, gridCount, gridSize):

    rndPoints = np.rint(points/gridSize) + int(gridCount/2)
    rndPoints = rndPoints.astype(int)
    inbounds = np.logical_and(np.amax(rndPoints,axis = 1) < gridCount,
        np.amin(rndPoints,axis = 1) >= 0)
    # cheap hashing
    phashes = gridCount*gridCount*rndPoints[inbounds,0] + gridCount*rndPoints[inbounds,1] + rndPoints[inbounds,2]
    max_h, max_v = Counter(phashes).most_common(1)[0]

    max_coord = [(max_h // (gridCount*gridCount)) % gridCount,(max_h // gridCount) % gridCount,max_h % gridCount]
    return (max_coord, max_v)

# TESTING
cloud = (np.random.rand(200000, 3)*10)-5
t1 = time.perf_counter()
hist1 = densityPointCloud(cloud , 50, 0.2)
t2 = time.perf_counter()
hist2 = dpc2(cloud,50,0.2)
t3 = time.perf_counter()
hist3 = dpc3(cloud,50,0.2)
t4 = time.perf_counter()
print(f"task 1: {round(1000*(t2-t1))}ms\ntask 2: {round(1000*(t3-t2))}ms\ntask 3: {round(1000*(t4-t3))}ms")
print(f"max value is {hist3[1]}, achieved at {hist3[0]}")
np.all(np.equal(hist1,hist2)) # check that results are identical
# check for equal max - histogram may be multi-modal so the point won't
# necessarily match
np.unravel_index(np.argmax(hist2, axis=None), hist2.shape)

The idea is to do all the if/and comparisons at once: let numpy do them (efficiently, in C) rather than doing them "manually" in a Python loop. This also lets us iterate only over the points that will actually increment hist.
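Going one step further, the remaining per-point Python loop can be eliminated as well: `np.add.at` performs unbuffered in-place accumulation in C, handling repeated indices correctly (a plain `hist[idx] += 1` would not). A sketch, reusing the same rounding and bounds logic as above:

```python
import numpy as np

def dpc_vectorized(points, gridCount, gridSize):
    """Fully vectorized histogram: no Python-level loop over points."""
    rndPoints = (np.rint(points / gridSize) + gridCount // 2).astype(int)
    inbounds = np.logical_and(rndPoints.max(axis=1) < gridCount,
                              rndPoints.min(axis=1) >= 0)
    idx = rndPoints[inbounds]
    hist = np.zeros((gridCount, gridCount, gridCount), np.uint16)
    # np.add.at accumulates correctly even when the same cell appears many times
    np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    return hist

cloud = (np.random.rand(100000, 3) * 10) - 5
hist = dpc_vectorized(cloud, 50, 0.2)
print(np.amax(hist))
```

This produces the same histogram as the loop-based versions, with the accumulation itself also pushed down into numpy.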

If you expect your cloud to have lots of empty space, you could also consider using a sparse data structure for hist, since memory allocation can become the bottleneck for very large data.
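One simple sparse representation, sketched here, is a Counter keyed by cell coordinates: only occupied cells consume memory, and the dense gridCount³ array is never allocated (the function name is illustrative):

```python
import numpy as np
from collections import Counter

def dpc_sparse(points, gridCount, gridSize):
    """Sparse histogram: only occupied grid cells are stored."""
    rndPoints = (np.rint(points / gridSize) + gridCount // 2).astype(int)
    inbounds = np.logical_and(rndPoints.max(axis=1) < gridCount,
                              rndPoints.min(axis=1) >= 0)
    # map(tuple, ...) turns each row into a hashable cell coordinate
    return Counter(map(tuple, rndPoints[inbounds]))

cloud = (np.random.rand(100000, 3) * 10) - 5
sparse_hist = dpc_sparse(cloud, 50, 0.2)
cell, count = sparse_hist.most_common(1)[0]
print(f"max density {count} at cell {cell}")
```

For a mostly empty cloud this stores one entry per occupied cell instead of gridCount³ values.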

I haven't benchmarked this scientifically, but it seems to run about 2-3x faster (v2) and 6-8x faster (v3)! If you want all the points associated with the max density, it is easy to extract them from the Counter object.
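For instance, every cell that ties for the maximum count can be recovered from the hash Counter built in dpc3 by inverting the same cheap hash; the helper below is an illustrative sketch:

```python
from collections import Counter

def all_max_cells(phashes, gridCount):
    """Return every grid cell whose count equals the maximum
    (the histogram may be multi-modal)."""
    counts = Counter(phashes)
    max_v = max(counts.values())
    # invert h = gridCount^2 * x + gridCount * y + z
    cells = [((h // (gridCount * gridCount)) % gridCount,
              (h // gridCount) % gridCount,
              h % gridCount)
             for h, v in counts.items() if v == max_v]
    return cells, max_v

# hashes for a 4x4x4 grid where cells hashed to 5 and 9 each appear twice
cells, max_v = all_max_cells([5, 5, 9, 9, 3], 4)
print(cells, max_v)
```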
