
Discrete Wavelet Transform in Java creates white spots in the image

In my Java program, an image is loaded, then transformed using the discrete wavelet transform (DWT), and the resulting coefficients are used as the pixel data of the output image.

The process works fine with natural images: http://imgur.com/Pk3kUs7

However, if I transform, for example, a cartoonish image, white spots appear on dark edges in the approximation subband: http://imgur.com/kLXyBvd

Here is the code for the forward DWT:

// One 2-D decomposition level: each iteration of the outer loop lifts all
// columns, then deinterleaves into subbands and transposes, so the second
// iteration transforms the other dimension.
private int[][] transformPixels(int[][] pixels, int widthHeight) {
    double[][] temp_bank = new double[widthHeight][widthHeight];
    double a1 = -1.586134342;
    double a2 = -0.05298011854;
    double a3 = 0.8829110762;
    double a4 = 0.4435068522;

    // Scale coeff:
    double k1 = 0.81289306611596146; // 1/1.230174104914
    double k2 = 0.61508705245700002;// 1.230174104914/2
    for (int i = 0; i < 2; i++) {
        for (int col = 0; col < widthHeight; col++) {
            // Predict 1
            for (int row = 1; row < widthHeight - 1; row += 2) {
                pixels[row][col] += a1 * (pixels[row - 1][col] + pixels[row + 1][col]);
            }
            pixels[widthHeight - 1][col] += 2 * a1 * pixels[widthHeight - 2][col];

            // Update 1
            for (int row = 2; row < widthHeight; row += 2) {
                pixels[row][col] += a2 * (pixels[row - 1][col] + pixels[row + 1][col]);
            }
            pixels[0][col] += 2 * a2 * pixels[1][col];

            // Predict 2
            for (int row = 1; row < widthHeight - 1; row += 2) {
                pixels[row][col] += a3 * (pixels[row - 1][col] + pixels[row + 1][col]);
            }
            pixels[widthHeight - 1][col] += 2 * a3 * pixels[widthHeight - 2][col];

            // Update 2
            for (int row = 2; row < widthHeight; row += 2) {
                pixels[row][col] += a4 * (pixels[row - 1][col] + pixels[row + 1][col]);
            }
            pixels[0][col] += 2 * a4 * pixels[1][col];
        }

        // Deinterleave into subbands: even rows go to the approximation (top)
        // half, odd rows to the detail (bottom) half; writing to
        // temp_bank[col][...] also transposes the data, so the next iteration
        // of the outer loop processes the other dimension.
        for (int row = 0; row < widthHeight; row++) {
            for (int col = 0; col < widthHeight; col++) {
                if (row % 2 == 0)
                    temp_bank[col][row / 2] = k1 * pixels[row][col];
                else
                    temp_bank[col][row / 2 + widthHeight / 2] = k2 * pixels[row][col];
            }
        }

        // Copy the transposed result back into pixels (truncating to int).
        for (int row = 0; row < widthHeight; row++) {
            for (int col = 0; col < widthHeight; col++) {
                pixels[row][col] = (int) temp_bank[row][col];
            }
        }
    }
    return pixels;
}

This is the DWT with the CDF 9/7 filter banks, implemented using the lifting scheme, similar to the DWT in JPEG 2000.
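
One useful property of the lifting scheme is that it is trivially invertible: undo the scaling, then run the lifting steps in reverse order with subtraction instead of addition. For reference, here is a minimal 1-D sketch of the inverse pass (the helper is my own, reusing the coefficients and boundary handling of the code above; re-interleaving the subband layout and the 2-D bookkeeping are omitted, and since the forward code above rounds to int, reconstruction from it can only be approximate):

// Inverse of one 1-D lifting pass: undo the scaling, then subtract what the
// forward pass added, in reverse order. Assumes x holds interleaved samples
// (even index = approximation, odd index = detail) and x.length is even.
static void inverseLift(double[] x) {
    final double a1 = -1.586134342, a2 = -0.05298011854;
    final double a3 = 0.8829110762, a4 = 0.4435068522;
    final double k1 = 0.81289306611596146, k2 = 0.61508705245700002;
    int n = x.length;
    for (int i = 0; i < n; i++)            // undo scaling
        x[i] /= (i % 2 == 0) ? k1 : k2;
    for (int i = 2; i < n; i += 2)         // undo update 2
        x[i] -= a4 * (x[i - 1] + x[i + 1]);
    x[0] -= 2 * a4 * x[1];
    for (int i = 1; i < n - 1; i += 2)     // undo predict 2
        x[i] -= a3 * (x[i - 1] + x[i + 1]);
    x[n - 1] -= 2 * a3 * x[n - 2];
    for (int i = 2; i < n; i += 2)         // undo update 1
        x[i] -= a2 * (x[i - 1] + x[i + 1]);
    x[0] -= 2 * a2 * x[1];
    for (int i = 1; i < n - 1; i += 2)     // undo predict 1
        x[i] -= a1 * (x[i - 1] + x[i + 1]);
    x[n - 1] -= 2 * a1 * x[n - 2];
}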

The algorithm has two limitations:

  1. Only grayscale data can be processed
  2. The width and height of the image must be equal and a power of two, e.g. 256x256, 512x512, etc. (a check for this is sketched below)
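
Both constraints can be verified up front; a small sketch (the helper name is my own):

// Returns true when the image is square and its side length is a power of two.
static boolean isValidSize(int width, int height) {
    return width == height && width > 0 && (width & (width - 1)) == 0;
}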

Since it might also be that the gray values are calculated incorrectly, here is the rest of the code: loading the image, starting the transformation, converting the RGB values to grayscale, and converting back to RGB:

public BufferedImage openImage() throws InvalidWidthHeightException {
    try {
        int returnVal = fc.showOpenDialog(panel);
        if (returnVal == JFileChooser.APPROVE_OPTION) {
            File file = fc.getSelectedFile();
            BufferedImage temp = ImageIO.read(file);
            if (temp == null)
                return null;
            int checkInt = temp.getWidth();
            boolean isPowerOfTwo = (checkInt & (checkInt - 1)) == 0;
            // Reject images that are not square or whose side length is not a power of two.
            if (checkInt != temp.getHeight() || !isPowerOfTwo)
                throw new InvalidWidthHeightException();
            int widthandHeight = temp.getWidth();
            image = new BufferedImage(widthandHeight, widthandHeight, BufferedImage.TYPE_BYTE_GRAY);
            Graphics g = image.getGraphics();
            g.drawImage(temp, 0, 0, null);
            g.dispose();

            return image;

        }
    } catch (IOException e) {
        System.out.println("Failed to load image!");
    }
    return null;

}

public void transform(int count) {
    int[][] pixels = getGrayValues(image);
    int[][] transformedPixels;
    int width = pixels.length;
    transformedPixels = transformPixels(pixels, width);
    width/=2;

    for (int i = 1; i < count + 1; i++) {
        transformedPixels = transformPixels(transformedPixels, width);
        width/=2;
    }
    width = pixels.length;
    transformedImage = new BufferedImage(width, width, BufferedImage.TYPE_BYTE_GRAY);
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < width; y++) {
            transformedImage.setRGB(x, y, transformToRGB(transformedPixels[x][y]));
        }
    }

}

private int transformToRGB(double d) {
    // Clamp to the displayable 8-bit range, then pack the gray value into
    // all three channels of an opaque ARGB pixel.
    int value = (int) d;
    if (value < 0)
        value = 0;
    if (value > 255)
        value = 255;
    return 0xff << 24 | value << 16 | value << 8 | value;
}

private int[][] getGrayValues(BufferedImage image2) {
    int[][] res = new int[image2.getHeight()][image2.getWidth()];
    int r, g, b;
    for (int i = 0; i < image2.getWidth(); i++) {
        for (int j = 0; j < image2.getHeight(); j++) {
            int value = image2.getRGB(i, j);
            r = (value >> 16) & 0xFF;
            g = (value >> 8) & 0xFF;
            b = (value & 0xFF);
            res[i][j] = (r + g + b) / 3; // simple average as the gray value
        }
    }
    return res;
}

Note: Because the width and height of the image are expected to be equal, I sometimes use the width for the height as well.

EDIT: As suggested by @stuhlo, I have added a clamp on the values of the approximation subband in the forward DWT:

for (int row = 0; row < widthHeight; row++) {
    for (int col = 0; col < widthHeight; col++) {
        if (row % 2 == 0) {
            double value = k1 * pixels[row][col];
            if (value > 255)
                value = 255;
            if (value < 0)
                value = 0;
            temp_bank[col][row / 2] = value;
        } else {
            temp_bank[col][row / 2 + widthHeight / 2] = k2 * pixels[row][col];
        }
    }
}

Unfortunately, the subband for the horizontal details now turns black.

Your problem is caused by the fact that subband samples need more bits to be stored than the samples of the original image.

I would suggest using a bigger data type to store the subband samples and normalizing them back to 8-bit values for display.
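
A minimal sketch of that approach (the helper name is my own): keep the coefficients in a double array instead of casting to int, and min-max normalize them to [0, 255] only when building the displayed image:

// Maps arbitrary coefficient values to displayable 8-bit gray values.
static int[][] normalizeForDisplay(double[][] coeffs) {
    double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
    for (double[] row : coeffs)
        for (double v : row) {
            if (v < min) min = v;
            if (v > max) max = v;
        }
    double range = (max > min) ? (max - min) : 1.0; // guard against a flat image
    int[][] gray = new int[coeffs.length][coeffs[0].length];
    for (int r = 0; r < coeffs.length; r++)
        for (int c = 0; c < coeffs[0].length; c++)
            gray[r][c] = (int) Math.round(255.0 * (coeffs[r][c] - min) / range);
    return gray;
}

In practice you would probably normalize each subband separately, since the approximation and detail subbands have very different value ranges.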
