
How to accelerate the training of an RPN or SSD neural network

We know that in SSD there are three output feature maps. During training, I have to calculate the loss between the output of the SSD and the ground truth at every pixel. How can I perform this pixel-to-pixel operation quickly? Right now I use nested for loops to compute the loss, but it is extremely slow. As another example, suppose there is an output map indicating the probability that an object is present, with a size of 100×100, and I set the threshold to 0.5. How can I find which pixels of the output map have a value above 0.5? Currently I use for loops, like for i in range(100): for y in range(100): if map[i][y] > 0.5: do something, but it is too slow. How can I solve this?

Given the information in your question, you could use the numpy library and work with numpy.ndarray ; you would then be able to perform a wide range of element-wise operations without any explicit loops.
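For example, the per-pixel loss over a whole map can be written as a single array expression. Here is a minimal sketch, assuming hypothetical 100×100 arrays pred and gt standing in for one SSD output map and its ground truth, with a simple squared-error loss as a placeholder for whatever loss you actually use:

```python
import numpy as np

# Hypothetical stand-ins for one output map and its ground truth.
pred = np.random.rand(100, 100)
gt = np.random.rand(100, 100)

# Element-wise squared error over every pixel at once -- no Python loops.
per_pixel_loss = (pred - gt) ** 2

# Reduce to a scalar; use .mean() or another reduction if your loss requires it.
total_loss = per_pixel_loss.sum()
```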

For instance, with map a 2-D numpy.ndarray , map > 0.5 returns a boolean array of the same shape as map , indicating which elements are greater than 0.5. You can then use this mask to restrict your computations to the relevant elements.
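As a minimal sketch of this, assuming a hypothetical 100×100 objectness map called prob_map:

```python
import numpy as np

# Hypothetical 100x100 map of objectness probabilities.
prob_map = np.random.rand(100, 100)

# Boolean mask, same shape as prob_map: True where the value exceeds 0.5.
mask = prob_map > 0.5

# Row and column indices of every pixel above the threshold.
rows, cols = np.nonzero(mask)

# Values of only those pixels, as a 1-D array.
selected = prob_map[mask]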

