
Motion Blur effect on UIImage on iOS

Is there a way to get a motion blur effect on a UIImage? I tried GPUImage, Filtrr, and iOS Core Image, but all of them only offer regular blurs, not motion blur.

I also tried UIImage-DSP, but its motion blur is barely visible. I need something much stronger.

As I commented on the repository, I just added motion and zoom blurs to GPUImage. These are the GPUImageMotionBlurFilter and GPUImageZoomBlurFilter classes. Here's an example of the zoom blur:

[Image: GPUImage zoom blur example]
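
For reference, here is a minimal sketch of how these two filter classes could be applied to a UIImage from Swift. It assumes GPUImage's imageByFilteringImage: convenience method as bridged to Swift, and the blurSize, blurAngle, and blurCenter values are purely illustrative:

 import GPUImage
 import UIKit

 // A minimal sketch, assuming GPUImage's Objective-C API bridged to Swift.
 func blurredVariants(of inputImage: UIImage) -> (motion: UIImage?, zoom: UIImage?) {
     // Directional motion blur: blurSize scales the spacing between samples,
     // blurAngle sets the direction of the streak in degrees.
     let motionFilter = GPUImageMotionBlurFilter()
     motionFilter.blurSize = 2.5
     motionFilter.blurAngle = 45.0
     let motionResult = motionFilter.image(byFilteringImage: inputImage)

     // Radial zoom blur: samples are offset along lines through blurCenter,
     // given in normalized (0-1) texture coordinates.
     let zoomFilter = GPUImageZoomBlurFilter()
     zoomFilter.blurSize = 1.5
     zoomFilter.blurCenter = CGPoint(x: 0.5, y: 0.5)
     let zoomResult = zoomFilter.image(byFilteringImage: inputImage)

     return (motionResult, zoomResult)
 }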

For the motion blur, I do a 9-hit Gaussian blur in a single direction, using the following vertex and fragment shaders:

Vertex:

 attribute vec4 position;
 attribute vec4 inputTextureCoordinate;

 uniform highp vec2 directionalTexelStep;

 varying vec2 textureCoordinate;
 varying vec2 oneStepBackTextureCoordinate;
 varying vec2 twoStepsBackTextureCoordinate;
 varying vec2 threeStepsBackTextureCoordinate;
 varying vec2 fourStepsBackTextureCoordinate;
 varying vec2 oneStepForwardTextureCoordinate;
 varying vec2 twoStepsForwardTextureCoordinate;
 varying vec2 threeStepsForwardTextureCoordinate;
 varying vec2 fourStepsForwardTextureCoordinate;

 void main()
 {
     gl_Position = position;

     // Precompute all nine sampling coordinates here so they are simply
     // interpolated for the fragment shader, avoiding dependent texture reads.
     textureCoordinate = inputTextureCoordinate.xy;
     oneStepBackTextureCoordinate = inputTextureCoordinate.xy - directionalTexelStep;
     twoStepsBackTextureCoordinate = inputTextureCoordinate.xy - 2.0 * directionalTexelStep;
     threeStepsBackTextureCoordinate = inputTextureCoordinate.xy - 3.0 * directionalTexelStep;
     fourStepsBackTextureCoordinate = inputTextureCoordinate.xy - 4.0 * directionalTexelStep;
     oneStepForwardTextureCoordinate = inputTextureCoordinate.xy + directionalTexelStep;
     twoStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 2.0 * directionalTexelStep;
     threeStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 3.0 * directionalTexelStep;
     fourStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 4.0 * directionalTexelStep;
 }

Fragment:

 precision highp float;

 uniform sampler2D inputImageTexture;

 varying vec2 textureCoordinate;
 varying vec2 oneStepBackTextureCoordinate;
 varying vec2 twoStepsBackTextureCoordinate;
 varying vec2 threeStepsBackTextureCoordinate;
 varying vec2 fourStepsBackTextureCoordinate;
 varying vec2 oneStepForwardTextureCoordinate;
 varying vec2 twoStepsForwardTextureCoordinate;
 varying vec2 threeStepsForwardTextureCoordinate;
 varying vec2 fourStepsForwardTextureCoordinate;

 void main()
 {
     // Nine-tap weighted accumulation; the Gaussian weights sum to 1.0.
     lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18;
     fragmentColor += texture2D(inputImageTexture, oneStepBackTextureCoordinate) * 0.15;
     fragmentColor += texture2D(inputImageTexture, twoStepsBackTextureCoordinate) * 0.12;
     fragmentColor += texture2D(inputImageTexture, threeStepsBackTextureCoordinate) * 0.09;
     fragmentColor += texture2D(inputImageTexture, fourStepsBackTextureCoordinate) * 0.05;
     fragmentColor += texture2D(inputImageTexture, oneStepForwardTextureCoordinate) * 0.15;
     fragmentColor += texture2D(inputImageTexture, twoStepsForwardTextureCoordinate) * 0.12;
     fragmentColor += texture2D(inputImageTexture, threeStepsForwardTextureCoordinate) * 0.09;
     fragmentColor += texture2D(inputImageTexture, fourStepsForwardTextureCoordinate) * 0.05;

     gl_FragColor = fragmentColor;
 }

As an optimization, I calculate the step size between texture samples outside of the fragment shader, using the angle, the blur size, and the image dimensions. That value is then passed into the vertex shader, so I can calculate the texture sampling positions there and interpolate across them in the fragment shader. This avoids dependent texture reads on iOS devices.
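
As a sketch of that CPU-side calculation (this is not the exact GPUImage source; the function name and the exact scaling are assumptions based on the description above):

 import CoreGraphics

 // Hypothetical sketch: derive the per-sample step, in normalized texture
 // coordinates, from the blur angle, blur size, and image dimensions.
 func directionalTexelStep(angleInDegrees: CGFloat,
                           blurSize: CGFloat,
                           imageSize: CGSize) -> CGPoint {
     let radians = angleInDegrees * .pi / 180.0
     // One texel expressed in 0-1 texture coordinates on each axis.
     let texelWidth = 1.0 / imageSize.width
     let texelHeight = 1.0 / imageSize.height
     // Unit direction vector scaled by the blur size, one step per sample.
     return CGPoint(x: cos(radians) * blurSize * texelWidth,
                    y: sin(radians) * blurSize * texelHeight)
 }

The resulting value would be uploaded once as the directionalTexelStep uniform consumed by the vertex shader above.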

The zoom blur is much slower, because I still do these calculations in the fragment shader. There is no doubt a way to optimize this, but I haven't tried yet. The zoom blur uses a 9-hit Gaussian blur where the direction and per-sample offset distance vary with the placement of the pixel relative to the center of the blur.

It uses the following fragment shader (and a standard passthrough vertex shader):

 varying highp vec2 textureCoordinate;

 uniform sampler2D inputImageTexture;

 uniform highp vec2 blurCenter;
 uniform highp float blurSize;

 void main()
 {
     // TODO: Do a more intelligent scaling based on resolution here
     highp vec2 samplingOffset = 1.0/100.0 * (blurCenter - textureCoordinate) * blurSize;

     // Nine taps along the line toward blurCenter; weights sum to 1.0.
     lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate + samplingOffset) * 0.15;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate + (2.0 * samplingOffset)) * 0.12;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate + (3.0 * samplingOffset)) * 0.09;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate + (4.0 * samplingOffset)) * 0.05;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate - samplingOffset) * 0.15;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate - (2.0 * samplingOffset)) * 0.12;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate - (3.0 * samplingOffset)) * 0.09;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate - (4.0 * samplingOffset)) * 0.05;

     gl_FragColor = fragmentColor;
 }

Note that both of these blurs are hardcoded to 9 samples for performance reasons. This means that at larger blur sizes, you'll start to see artifacts from the limited number of samples. For larger blurs, you'll need to run these filters multiple times or extend them to support more Gaussian samples. However, more samples will make rendering much slower, because of the limited texture sampling bandwidth on iOS devices.
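
As an illustrative sketch of the first option, here is how two motion blur passes could be chained in a GPUImage pipeline to approximate a wider blur. The parameter values are placeholders, and the method names assume GPUImage's Objective-C API as bridged to Swift:

 import GPUImage
 import UIKit

 // Sketch: approximate a wider blur by chaining two 9-sample passes,
 // at the cost of a second render pass.
 func doubleMotionBlurred(_ inputImage: UIImage) -> UIImage? {
     let source = GPUImagePicture(image: inputImage)
     let firstPass = GPUImageMotionBlurFilter()
     let secondPass = GPUImageMotionBlurFilter()
     firstPass.blurSize = 2.0
     secondPass.blurSize = 2.0

     // Chain: source -> firstPass -> secondPass.
     source?.addTarget(firstPass)
     firstPass.addTarget(secondPass)

     // Render once and capture the final framebuffer as a UIImage.
     secondPass.useNextFrameForImageCapture()
     source?.processImage()
     return secondPass.imageFromCurrentFramebuffer()
 }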
