
What is the memory layout of the weights and bias of dnn module?

I would like to know the memory layout of the dnn module's weights so that I can port them to another library.

I can access the weights and bias as follows:

cv::Mat weight = input_net.getParam(input_layer_name.c_str(), 0);
cv::Mat bias = input_net.getParam(input_layer_name.c_str(), 1);

If I have a convolution layer with 3 input channels, 64 output filters and a 3x3 kernel, what would the memory layout look like? For such a convolution layer I should have 3*3*3*64 weights and 64 bias values. How can I know the position of each weight and bias in the weight and bias matrices?

More precisely, how can I access the weights shown in the graph (A, C, N)?
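For reference, a minimal sketch of how one could inspect the shape of the returned blob, reusing input_net and input_layer_name from the snippet above (this is only an illustration; it additionally needs <iostream>):

cv::Mat weight = input_net.getParam(input_layer_name.c_str(), 0);
// Print the number of dimensions and the size of each one.
std::cout << "dims = " << weight.dims << ", shape = [";
for (int i = 0; i < weight.dims; ++i)
    std::cout << weight.size[i] << (i + 1 < weight.dims ? ", " : "]\n");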

[image: example weight layout with elements labeled A, C and N]

The weights have a W x H x InCh x OutCh layout, listed from the lowest (fastest-changing) to the highest (slowest-changing) index. Something like:

w[0]: (x1, y1, inc1, outc1)
w[1]: (x2, y1, inc1, outc1)
...
w[n-1]: (xn, y1, inc1, outc1)
w[n]:   (x1, y2, inc1, outc1)
w[n+1]: (x2, y2, inc1, outc1)
...

and further.
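In other words, x (the kernel column) changes fastest, then y (the kernel row), then the input channel, then the output channel; in row-major memory that corresponds to a 4-D blob of shape [outCh, inCh, kH, kW]. A minimal sketch of reading a single weight under that assumption (it also assumes the blob is CV_32F, which is the usual case but should be checked):

// Sketch only: assumes 'weight' is the 4-D CV_32F Mat returned by getParam()
// with shape [outCh, inCh, kH, kW].
float convWeightAt(const cv::Mat& weight, int outc, int inc, int y, int x)
{
    CV_Assert(weight.dims == 4 && weight.type() == CV_32F);
    const int inCh = weight.size[1];
    const int kH   = weight.size[2];
    const int kW   = weight.size[3];
    // x varies fastest, then y, then inc, then outc.
    const size_t idx = ((static_cast<size_t>(outc) * inCh + inc) * kH + y) * kW + x;
    return reinterpret_cast<const float*>(weight.data)[idx];

    // Equivalent, letting OpenCV do the index arithmetic:
    // return weight.at<float>(cv::Vec4i(outc, inc, y, x));
}

The bias blob is typically just one value per output filter, so (assuming it is also CV_32F) the bias of output filter outc would be bias.ptr<float>()[outc].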
