
Linear-Blurring an Image

I'm trying to blur an image by mapping each pixel to the average of the N pixels to the right of it (in the same row). My iterative solution produces good output, but my linear-algebra solution produces bad output.

From testing, I believe my kernel matrix is correct; and I know the last N rows don't get blurred, but that's fine for now. I'd appreciate any hints or solutions.

[Image: iterative-solution output (good), linear-algebra output (bad)]

[Image: original image]

Here is the failing linear-algebra code:

import numpy as np

def blur(orig_img):
    # turn the image matrix into a vector
    flattened_img = orig_img.flatten()
    L = flattened_img.shape[0]
    N = 3

    # kernel
    kernel = np.zeros((L, L))
    for r, row in enumerate(kernel[0:-N]):
        row[r:r+N] = [round(1/N, 3)]*N
    print(kernel)

    # blur the image
    print('starting blurring')
    blurred_img = np.matmul(kernel, flattened_img)
    blurred_img = blurred_img.reshape(orig_img.shape)
    return blurred_img

The equation I'm modelling is this:

blurred[i] = (1/N) · (x[i] + x[i+1] + … + x[i+N-1])
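For comparison, here is a minimal sketch of the iterative approach that implements this equation directly (the question's actual iterative code isn't shown, so the details here are assumed):

```python
import numpy as np

def blur_iterative(orig_img, N=3):
    # Sketch of the iterative approach: each pixel becomes the mean of
    # itself and the next N-1 pixels in the same row; the last N columns
    # are left untouched, mirroring kernel[0:-N] in the matrix version.
    blurred = orig_img.astype(float).copy()
    rows, cols = orig_img.shape
    for r in range(rows):
        for c in range(cols - N):
            blurred[r, c] = orig_img[r, c:c + N].mean()
    return blurred
```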

One option might be to just use a kernel and a convolution?一种选择可能是只使用内核和卷积?

For example, if we load a grayscale image like so:

import numpy as np
import matplotlib.pyplot as plt

from PIL import Image
from scipy import ndimage

# load a hackish grayscale image (average the RGB channels)
image = np.asarray(Image.open('cup.jpg')).mean(axis=2)
plt.imshow(image)
plt.title('Gray scale image')
plt.show()

[Image: grayscale sample image]

Now one can use a kernel and convolution. For example, to create a filter that acts on a single row and computes the value of the center pixel as the difference between its two horizontal neighbours, one can do the following:

# Create a kernel that takes the difference between horizontally neighboring pixels
k = np.array([[-1,0,1]])
plt.subplot(121)
plt.title('Kernel')
plt.imshow(k)
plt.subplot(122)
plt.title('Output')
plt.imshow(ndimage.convolve(image, k, mode='constant', cval=0.0))
plt.show()

[Image: simple edge detector]
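As a small numeric check (my example, not from the original answer): because convolution flips the kernel before sliding it, each output pixel ends up as its left neighbour minus its right neighbour, with out-of-range pixels taken as 0 under `mode='constant'`:

```python
import numpy as np
from scipy import ndimage

row = np.array([[0., 1., 2., 4.]])
k = np.array([[-1, 0, 1]])

# ndimage.convolve flips k to [1, 0, -1] before correlating, so
# out[i] = row[i-1] - row[i+1], with zero padding at the edges
out = ndimage.convolve(row, k, mode='constant', cval=0.0)
# out is [[-1., -2., -3., 2.]]
```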

Therefore, one can blur an image by mapping each pixel to the average of the N pixels to the right of it by creating the appropriate kernel.

# Create a kernel that takes the average of N pixels to the right
n = 10
k = np.zeros(n * 2)
k[n:] = 1 / n
k = k[np.newaxis, ...]
plt.subplot(121)
plt.title('Kernel')
plt.imshow(k)
plt.subplot(122)
plt.title('Output')

plt.imshow(ndimage.convolve(image, k, mode='constant', cval=0.0))
plt.show()

[Image: blur based on right-hand pixels]
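One caveat worth adding (my note, not part of the original answer): since convolution flips the kernel, the nonzero right half of `k` above actually averages each pixel with its left-hand neighbours; the visual effect is still a horizontal motion blur, just shifted. To average exactly the pixel and the N-1 pixels to its right, as the question defines it, `ndimage.correlate1d` (which does not flip the kernel) with an `origin` offset is one option:

```python
import numpy as np
from scipy import ndimage

x = np.arange(6, dtype=float)   # stand-in for one image row
N = 3
w = np.full(N, 1.0 / N)         # uniform averaging weights

# origin=-(N // 2) shifts the window so out[i] = mean(x[i : i + N]);
# for a 2-D image, pass axis=1 to average along each row
out = ndimage.correlate1d(x, w, origin=-(N // 2), mode='constant', cval=0.0)
# first entries are 1.0, 2.0, 3.0, 4.0 (means of successive triples)
```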

The issue was incorrect usage of cv2.imshow() when displaying the output image: it expects floating-point pixel values to be in [0, 1]. That is handled in the code below (near the bottom):

import cv2
import numpy as np

def blur(orig_img):
    flattened_img = orig_img.flatten()
    L = flattened_img.shape[0]
    N = int(round(0.1 * orig_img.shape[0], 0))

    # mask (A)
    mask = np.zeros((L, L))
    for r, row in enumerate(mask[0:-N]):
        row[r:r+N] = [round(1/N, 2)]*N

    # blurred img = A * flattened_img
    print('starting blurring')
    blurred_img = np.matmul(mask, flattened_img)
    blurred_img = blurred_img.reshape(orig_img.shape)
    cv2.imwrite('blurred_img.png', blurred_img)

    # normalize img to [0, 1] so cv2.imshow() displays it correctly
    blurred_img = (
        blurred_img - blurred_img.min()) / (blurred_img.max() - blurred_img.min())
    return blurred_img
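A tiny sanity check of the banded mask itself (my example, with L and N shrunk so the matrix can be inspected by eye). Note that because the image is flattened first, averaging windows near the end of one image row spill into the start of the next row:

```python
import numpy as np

# Shrunken version of the mask construction above: L = 6, N = 2
L, N = 6, 2
x = np.arange(L, dtype=float)      # stand-in for a flattened image

mask = np.zeros((L, L))
for r in range(L - N):
    mask[r, r:r + N] = 1.0 / N     # row r averages pixels r .. r+N-1

blurred = mask @ x
# blurred is [0.5, 1.5, 2.5, 3.5, 0.0, 0.0]; the last N entries stay 0,
# matching the unblurred tail mentioned in the question
```

An alternative to the min-max normalization is dividing by 255.0, which maps the original 8-bit range to [0, 1] without stretching the image's contrast.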

[Image: amended output]

Thank you to @CrisLuengo for identifying the issue.
