
Converting ROI pooling in PyTorch to an NN layer

I have a model using ROI pooling, for which I am using the following (adapted from here), the non-NN-layer version:

def forward(self, features, rois):
    # features: (N, C, H, W) feature map; rois: (num_rois, 5), each row
    # being (batch_index, x1, y1, x2, y2) in input-image coordinates
    batch_size, num_channels, data_height, data_width = features.size()
    num_rois = rois.size()[0]
    outputs = Variable(torch.zeros(num_rois, num_channels,
                                   self.pooled_height, self.pooled_width)).cuda()

    for roi_ind, roi in enumerate(rois):
        batch_ind = int(roi[0])
        # scale the ROI from image coordinates to feature-map coordinates,
        # rounding to the nearest cell
        roi_start_w, roi_start_h, roi_end_w, roi_end_h = np.round(
            roi[1:].data.cpu().numpy() * self.spatial_scale).astype(int)
        roi_width = max(roi_end_w - roi_start_w + 1, 1)
        roi_height = max(roi_end_h - roi_start_h + 1, 1)
        bin_size_w = float(roi_width) / float(self.pooled_width)
        bin_size_h = float(roi_height) / float(self.pooled_height)

        for ph in range(self.pooled_height):
            # bin edges via floor/ceil, then clamped to the feature map
            hstart = int(np.floor(ph * bin_size_h))
            hend = int(np.ceil((ph + 1) * bin_size_h))
            hstart = min(data_height, max(0, hstart + roi_start_h))
            hend = min(data_height, max(0, hend + roi_start_h))
            for pw in range(self.pooled_width):
                wstart = int(np.floor(pw * bin_size_w))
                wend = int(np.ceil((pw + 1) * bin_size_w))
                wstart = min(data_width, max(0, wstart + roi_start_w))
                wend = min(data_width, max(0, wend + roi_start_w))

                is_empty = (hend <= hstart) or (wend <= wstart)
                if is_empty:
                    outputs[roi_ind, :, ph, pw] = 0
                else:
                    # per-channel max over the bin: reduce H, then W
                    # (the original reduced dims 1 and then 2, which relied
                    # on old PyTorch's keepdim=True default)
                    data = features[batch_ind]
                    outputs[roi_ind, :, ph, pw] = torch.max(
                        torch.max(data[:, hstart:hend, wstart:wend], 1)[0], 1)[0].view(-1)

    return outputs
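A note on the nested torch.max at the end: it takes the per-channel maximum over one pooling bin. On current PyTorch, where reductions drop the reduced axis, the idiom can be checked standalone like this (shapes are made up):

import torch

data = torch.randn(256, 32, 32)      # (C, H, W): feature map for one image
window = data[:, 4:9, 10:15]         # one pooling bin, shape (C, 5, 5)

# reduce over H, then over what is now the last axis (W);
# torch.max along a dim returns (values, indices)
bin_max = window.max(dim=1)[0].max(dim=1)[0]    # shape (C,)

# equivalently, amax reduces both spatial dims at once
assert torch.equal(bin_max, window.amax(dim=(1, 2)))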

I want to convert the PyTorch model to Caffe, so I need to express the above as an NN layer. For that I am using the following (adapted from here):

def forward(self, input, rois):
    output = []
    rois = rois.data.float()
    num_rois = rois.size(0)

    # scale the ROIs from image coordinates to feature-map coordinates
    rois[:, 1:].mul_(self.spatial_scale)
    rois = rois.long()  # note: .long() truncates toward zero
    for i in range(num_rois):
        roi = rois[i]
        im_idx = roi[0]
        # crop the ROI: rows are y1..y2 (roi[2], roi[4]), cols are x1..x2 (roi[1], roi[3])
        im = input.narrow(0, im_idx, 1)[..., roi[2]:(roi[4] + 1), roi[1]:(roi[3] + 1)]
        # adaptive max pooling maps a crop of any size to the fixed output size
        op = nn.functional.adaptive_max_pool2d(input=im, output_size=self.size)
        output.append(op)

    return torch.cat(tuple(output), dim=0)
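The reason the two approaches should agree at all: as far as I can tell, adaptive max pooling picks its bin edges as floor(i * H / output_size) and ceil((i + 1) * H / output_size), the same floor/ceil scheme as the hand-written loop above, so adaptively pooling the cropped ROI reproduces the per-bin maxima. A standalone sanity check of the call (shapes are made up):

import torch
import torch.nn.functional as F

crop = torch.randn(1, 256, 9, 7)              # an ROI crop of arbitrary spatial size
pooled = F.adaptive_max_pool2d(crop, (7, 7))  # fixed output size regardless of input
print(pooled.shape)                           # torch.Size([1, 256, 7, 7])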

The outputs of the two methods above don't match, even though they should be computing the same thing, and I'm stuck. Could anyone point out whether I am making an obvious mistake in the above?

Found the issue: after the multiplication with the spatial scale, the ROI coordinates were being truncated by the .long() cast. I had to call round() before calling long(), like so:

rois = rois.data.float()
num_rois = rois.size(0)

rois[:,1:].mul_(self.spatial_scale)
rois = rois.round().long()  # the fix: round before casting to long
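To see why the rounding matters: the loop version scales coordinates with np.round, while .long() on its own truncates toward zero, so any scaled coordinate with a fractional part of 0.5 or more ends up one feature-map cell off. For example, with spatial_scale = 1/16:

import torch

coords = torch.tensor([159.0, 114.0]) * (1.0 / 16)   # -> tensor([9.9375, 7.1250])
print(coords.long())           # tensor([9, 7])   truncation
print(coords.round().long())   # tensor([10, 7])  matches np.round in the loop version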

Hope this helps someone!
