
Difference between image-processing convolution and Keras Conv2D

The goal is to use custom weights in the first layer of the model so that it fulfils the function of a high-pass filter, i.e. the first Conv2D layer applies the same high-pass filter to the image as the image-processing version.

1. The straightforward solution would be to apply a high-pass filter in image processing, generate a new filtered image, and feed that image to the model. This requires a separate image-processing step, which costs time.

2. Instead, I want to set up a Conv2D layer that high-passes the image itself, using a custom filter as an initializer. The rationale is that the filter and Conv2D both follow convolution rules.

However, the results differ from those of the first solution.
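One potential discrepancy can be ruled out up front: `scipy.ndimage.convolve` performs true convolution (it flips the kernel), whereas Keras `Conv2D` actually computes cross-correlation (no flip). For this particular kernel the 180° flip is the identity, so that difference cannot explain the mismatch. A quick NumPy check:

```python
import numpy as np

kernel55 = np.array([[-1, 2, -2, 2, -1],
                     [2, -6, 8, -6, 2],
                     [-2, 8, -12, 8, -2],
                     [2, -6, 8, -6, 2],
                     [-1, 2, -2, 2, -1]]) / 12

# ndimage.convolve flips the kernel; Conv2D cross-correlates (no flip).
# Flipping both axes is a 180-degree rotation; this kernel is symmetric
# under that rotation, so convolution and cross-correlation coincide.
print(np.array_equal(kernel55, np.flip(kernel55)))  # True
```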

# The image-processing code:
    kernel55 = np.array([[-1, 2, -2, 2, -1], 
                         [2, -6, 8, -6, 2], 
                         [-2, 8, -12, 8, -2], 
                         [2,-6, 8, -6, 2],
                         [-1, 2, -2, 2, -1]])/12
    # load the image, pre-process it, and store it in the data list
    image = cv2.imread('1.pgm', -1)
    image = ndimage.convolve(image, kernel55)
    print(image)

# The first layer of the model:

    def kernel_init(shape):
        kernel = np.zeros(shape)
        kernel[:,:,0,0] = np.array([[-1, 2, -2, 2, -1], 
                             [2, -6, 8, -6, 2], 
                             [-2, 8, -12, 8, -2], 
                             [2,-6, 8, -6, 2],
                             [-1, 2, -2, 2, -1]])/12
        return kernel
    #Build Keras model
    model = Sequential()
    model.add(Conv2D(1, [5,5], kernel_initializer=kernel_init, 
                     input_shape=(256,256,1), padding="same",activation='relu'))
    model.build()

    test_im = cv2.imread('1.pgm', -1)  # load a test image
    test_im = np.expand_dims(np.expand_dims(np.array(test_im), 2), 0)
    out = model.predict(test_im)

The problem is: the image-processing route produces a proper high-passed image, but the Conv2D layer does not give the same result.

I assumed the two results should be the same, or at least similar, but it turns out they are not...

Why, and is there any problem with my reasoning?
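One sanity check worth doing before comparing outputs (pure NumPy, no Keras required): Conv2D hands the initializer a weight shape of `(kernel_h, kernel_w, in_channels, filters)`, so you can call `kernel_init` directly with that shape and confirm the returned weights are what you intend:

```python
import numpy as np

def kernel_init(shape):
    kernel = np.zeros(shape)
    kernel[:, :, 0, 0] = np.array([[-1, 2, -2, 2, -1],
                                   [2, -6, 8, -6, 2],
                                   [-2, 8, -12, 8, -2],
                                   [2, -6, 8, -6, 2],
                                   [-1, 2, -2, 2, -1]]) / 12
    return kernel

# Call the initializer with the shape Conv2D would pass for a single
# 5x5 filter on a 1-channel input.
w = kernel_init((5, 5, 1, 1))
print(w.shape)               # (5, 5, 1, 1)
print(w[:, :, 0, 0].sum())   # 0.0 -- a high-pass kernel sums to zero
```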

Apologies for the incomplete answer, but I've got something that partially works, and some explanation. Here's the code:

import cv2
import numpy as np
import scipy.ndimage as ndimage
from keras.models import Sequential
from keras.layers import Dense, Activation, Conv2D

# Custom initializer for the first Conv2D layer:

def kernel_init(shape):
    kernel = np.zeros(shape)
    kernel[:,:,0,0] = np.array([[-1, 2, -2, 2, -1],
                         [2, -6, 8, -6, 2],
                         [-2, 8, -12, 8, -2],
                         [2,-6, 8, -6, 2],
                         [-1, 2, -2, 2, -1]])
    #kernel = kernel/12
    #print("Here is the kernel")
    #print(kernel)
    #print("That was the kernel")
    return kernel

def main():
    print("starting")
    kernel55 = np.array([[-1, 2, -2, 2, -1],
                         [2, -6, 8, -6, 2],
                         [-2, 8, -12, 8, -2],
                         [2,-6, 8, -6, 2],
                         [-1, 2, -2, 2, -1]])
    # load the image, pre-process it, and store it in the data list
    image = cv2.imread('tiger.bmp',-1)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    myimage = cv2.resize(gray,(256,256))
    print("The image")
    #print(myimage)
    print("That was the image")
    segment = myimage[0:10, 0:10]
    print(segment)

    imgOut = ndimage.convolve(myimage, kernel55)
    #imgOut = imgOut/12
    print(imgOut.shape)
    cv2.imwrite('zzconv.png', imgOut)

    #print(imgOut)
    segment = imgOut[0:10, 0:10]
    print(segment)

    #Build Keras model
    print("And the Keras stuff")
    model = Sequential()
    model.add(Conv2D(1, [5,5], kernel_initializer=kernel_init, input_shape=(256,256,1), padding="same"))
    model.build()

    test_im=myimage
    test_im = test_im.reshape((1, 256, 256, 1))
    print(test_im.shape)
    imgOut2 = model.predict(test_im)
    imgOut2 = imgOut2.reshape(256, 256)
    print(imgOut2.shape)
    #imgOut2 = imgOut2 / 12
    imgOut2[imgOut2 < 0] += 256

    cv2.imwrite('zzconv2.png', imgOut2)

    #print(imgOut2)
    segment = imgOut2[0:10, 0:10]
    print(segment)

Here are the things to note:

  • It's an image: pixels are bytes, so anything bigger than a byte may be truncated, and truncated incorrectly. (Note that I had to remove your "/12" on the kernel; that's also why I've added the "+= 256" adjustment.)
  • You can't assume that the "padded" areas will come out identical. I don't know what values Keras and scipy.ndimage use for padding, but they don't seem to be the same (ndimage defaults to reflecting the border, while Keras's padding="same" zero-pads). Your output images should only be expected to match away from the border; for a 5×5 kernel, a border of two pixels on each side may differ.
  • Check your kernel before you use it. On my system it was being rounded to -1 and 0, presumably due to integer arithmetic. Adding the line "kernel = kernel/12" gave a more correct kernel, but rounding inside the convolution function then spoiled the results again, so I've left the "/12" out.
  • The ReLU was also breaking things, again because of the rounding: any value below zero that Keras wasn't correctly truncating to an unsigned byte was being zeroed out by the activation function.
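The byte-truncation and padding points above can be reproduced without Keras at all. A minimal sketch, using a random uint8 array in place of the image (the zero-padding comparison assumes Keras's padding="same" pads with zeros, which corresponds to `mode='constant'` in ndimage):

```python
import numpy as np
import scipy.ndimage as ndimage

kernel55 = np.array([[-1, 2, -2, 2, -1],
                     [2, -6, 8, -6, 2],
                     [-2, 8, -12, 8, -2],
                     [2, -6, 8, -6, 2],
                     [-1, 2, -2, 2, -1]]) / 12.0

rng = np.random.default_rng(0)
img_u8 = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)

# Bytes: ndimage keeps the input dtype, so convolving the raw uint8 image
# truncates the (often negative) high-pass output; float input does not.
out_u8 = ndimage.convolve(img_u8, kernel55)
out_f32 = ndimage.convolve(img_u8.astype(np.float32), kernel55)
print(out_u8.dtype, out_f32.dtype)

# Padding: ndimage defaults to mode='reflect', while Keras's "same"
# zero-pads; only the interior (2-pixel border for 5x5) can match.
out_zero = ndimage.convolve(img_u8.astype(np.float32), kernel55,
                            mode='constant', cval=0.0)
print(np.allclose(out_f32[2:-2, 2:-2], out_zero[2:-2, 2:-2]))  # True
```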
