GPUImage custom OpenGL ES shader resulting in black image

Working on another OpenGL ES image filter based on this:

uniform sampler2D texture;
uniform float amount;
uniform vec2 texSize;
varying vec2 texCoord;
void main() {
    vec4 color = texture2D(texture, texCoord);
    vec4 orig = color;

    /* High pass filter */
    vec4 highpass = color * 5.0;

    float dx = 1.0 / texSize.x;
    float dy = 1.0 / texSize.y;
    highpass += texture2D(texture, texCoord + vec2(-dx, -dy)) * -0.625;
    highpass += texture2D(texture, texCoord + vec2(dx, -dy)) * -0.625;
    highpass += texture2D(texture, texCoord + vec2(dx, dy)) * -0.625;
    highpass += texture2D(texture, texCoord + vec2(-dx, dy)) * -0.625;
    highpass += texture2D(texture, texCoord + vec2(-dx * 2.0, -dy * 2.0)) * -0.625;
    highpass += texture2D(texture, texCoord + vec2(dx * 2.0, -dy * 2.0)) * -0.625;
    highpass += texture2D(texture, texCoord + vec2(dx * 2.0, dy * 2.0)) * -0.625;
    highpass += texture2D(texture, texCoord + vec2(-dx * 2.0, dy * 2.0)) * -0.625;
    highpass.a = 1.0;

    /* Overlay blend */
    vec3 overlay = vec3(1.0);
    if (highpass.r <= 0.5) {
        overlay.r = 2.0 * color.r * highpass.r;
    } else {
        overlay.r = 1.0 - 2.0 * (1.0 - color.r) * (1.0 - highpass.r);
    }
    if (highpass.g <= 0.5) {
        overlay.g = 2.0 * color.g * highpass.g;
    } else {
        overlay.g = 1.0 - 2.0 * (1.0 - color.g) * (1.0 - highpass.g);
    }
    if (highpass.b <= 0.5) {
        overlay.b = 2.0 * color.b * highpass.b;
    } else {
        overlay.b = 1.0 - 2.0 * (1.0 - color.b) * (1.0 - highpass.b);
    }
    color.rgb = (overlay * 0.8) + (orig.rgb * 0.2);

    /* Desaturated hard light */
    vec3 desaturated = vec3(orig.r + orig.g + orig.b / 3.0);
    if (desaturated.r <= 0.5) {
        color.rgb = 2.0 * color.rgb * desaturated;
    } else {
        color.rgb = vec3(1.0) - vec3(2.0) * (vec3(1.0) - color.rgb) * (vec3(1.0) - desaturated);
    }
    color = (orig * 0.6) + (color * 0.4);

    /* Add back some color */
    float average = (color.r + color.g + color.b) / 3.0;
    color.rgb += (average - color.rgb) * (1.0 - 1.0 / (1.001 - 0.45));

    gl_FragColor = (color * amount) + (orig * (1.0 - amount));
}

Per my question yesterday, I knew to assign precision to each float and vec. This time it compiled fine; however, when I go to apply the filter in GPUImage (e.g. by setting the value of clarity to 0.8), the image goes black. My gut tells me this is related to the texture size, but without knowing how GPUImage handles that, I'm kinda stuck.

Here's my implementation in Objective-C:

.h

#import <GPUImage/GPUImage.h>

@interface GPUImageClarityFilter : GPUImageFilter
{
    GLint clarityUniform;
}

// Gives the image a gritty, surreal contrasty effect
// Value 0 to 1
@property (readwrite, nonatomic) GLfloat clarity;

@end

.m

#import "GPUImageClarityFilter.h"

#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageClarityFragmentShaderString = SHADER_STRING
(
 uniform sampler2D inputImageTexture;
 uniform lowp float clarity;
 uniform highp vec2 textureSize;
 varying highp vec2 textureCoordinate;
 void main() {
     highp vec4 color = texture2D(inputImageTexture, textureCoordinate);
     highp vec4 orig = color;

     /* High pass filter */
     highp vec4 highpass = color * 5.0;

     highp float dx = 1.0 / textureSize.x;
     highp float dy = 1.0 / textureSize.y;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(-dx, -dy)) * -0.625;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(dx, -dy)) * -0.625;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(dx, dy)) * -0.625;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(-dx, dy)) * -0.625;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(-dx * 2.0, -dy * 2.0)) * -0.625;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(dx * 2.0, -dy * 2.0)) * -0.625;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(dx * 2.0, dy * 2.0)) * -0.625;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(-dx * 2.0, dy * 2.0)) * -0.625;
     highpass.a = 1.0;

     /* Overlay blend */
     highp vec3 overlay = vec3(1.0);
     if (highpass.r <= 0.5) {
         overlay.r = 2.0 * color.r * highpass.r;
     } else {
         overlay.r = 1.0 - 2.0 * (1.0 - color.r) * (1.0 - highpass.r);
     }
     if (highpass.g <= 0.5) {
         overlay.g = 2.0 * color.g * highpass.g;
     } else {
         overlay.g = 1.0 - 2.0 * (1.0 - color.g) * (1.0 - highpass.g);
     }
     if (highpass.b <= 0.5) {
         overlay.b = 2.0 * color.b * highpass.b;
     } else {
         overlay.b = 1.0 - 2.0 * (1.0 - color.b) * (1.0 - highpass.b);
     }
     color.rgb = (overlay * 0.8) + (orig.rgb * 0.2);

     /* Desaturated hard light */
     highp vec3 desaturated = vec3(orig.r + orig.g + orig.b / 3.0);
     if (desaturated.r <= 0.5) {
         color.rgb = 2.0 * color.rgb * desaturated;
     } else {
         color.rgb = vec3(1.0) - vec3(2.0) * (vec3(1.0) - color.rgb) * (vec3(1.0) - desaturated);
     }
     color = (orig * 0.6) + (color * 0.4);

     /* Add back some color */
     highp float average = (color.r + color.g + color.b) / 3.0;
     color.rgb += (average - color.rgb) * (1.0 - 1.0 / (1.001 - 0.45));

     gl_FragColor = (color * clarity) + (orig * (1.0 - clarity));
 }
);
#else
NSString *const kGPUImageClarityFragmentShaderString = SHADER_STRING
(
 uniform sampler2D inputImageTexture;
 uniform float clarity;
 uniform vec2 textureSize;
 varying vec2 textureCoordinate;
 void main() {
     vec4 color = texture2D(inputImageTexture, textureCoordinate);
     vec4 orig = color;

     /* High pass filter */
     vec4 highpass = color * 5.0;

     float dx = 1.0 / textureSize.x;
     float dy = 1.0 / textureSize.y;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(-dx, -dy)) * -0.625;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(dx, -dy)) * -0.625;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(dx, dy)) * -0.625;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(-dx, dy)) * -0.625;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(-dx * 2.0, -dy * 2.0)) * -0.625;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(dx * 2.0, -dy * 2.0)) * -0.625;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(dx * 2.0, dy * 2.0)) * -0.625;
     highpass += texture2D(inputImageTexture, textureCoordinate + vec2(-dx * 2.0, dy * 2.0)) * -0.625;
     highpass.a = 1.0;

     /* Overlay blend */
     vec3 overlay = vec3(1.0);
     if (highpass.r <= 0.5) {
         overlay.r = 2.0 * color.r * highpass.r;
     } else {
         overlay.r = 1.0 - 2.0 * (1.0 - color.r) * (1.0 - highpass.r);
     }
     if (highpass.g <= 0.5) {
         overlay.g = 2.0 * color.g * highpass.g;
     } else {
         overlay.g = 1.0 - 2.0 * (1.0 - color.g) * (1.0 - highpass.g);
     }
     if (highpass.b <= 0.5) {
         overlay.b = 2.0 * color.b * highpass.b;
     } else {
         overlay.b = 1.0 - 2.0 * (1.0 - color.b) * (1.0 - highpass.b);
     }
     color.rgb = (overlay * 0.8) + (orig.rgb * 0.2);

     /* Desaturated hard light */
     vec3 desaturated = vec3(orig.r + orig.g + orig.b / 3.0);
     if (desaturated.r <= 0.5) {
         color.rgb = 2.0 * color.rgb * desaturated;
     } else {
         color.rgb = vec3(1.0) - vec3(2.0) * (vec3(1.0) - color.rgb) * (vec3(1.0) - desaturated);
     }
     color = (orig * 0.6) + (color * 0.4);

     /* Add back some color */
     float average = (color.r + color.g + color.b) / 3.0;
     color.rgb += (average - color.rgb) * (1.0 - 1.0 / (1.001 - 0.45));

     gl_FragColor = (color * clarity) + (orig * (1.0 - clarity));
 }
);
#endif

@implementation GPUImageClarityFilter

@synthesize clarity = _clarity;

#pragma mark -
#pragma mark Initialization and teardown

- (id)init;
{
    if (!(self = [super initWithFragmentShaderFromString:kGPUImageClarityFragmentShaderString]))
    {
        return nil;
    }

    clarityUniform = [filterProgram uniformIndex:@"clarity"];
    self.clarity = 0.0;

    return self;
}

#pragma mark -
#pragma mark Accessors

- (void)setClarity:(GLfloat)clarity;
{
    _clarity = clarity;

    [self setFloat:_clarity forUniform:clarityUniform program:filterProgram];
}

@end

One other thing I thought of doing is applying GPUImage's built-in low-pass and high-pass filters, but I get the feeling that would end up being a rather clunky solution.

That's probably due to textureSize not being a standard uniform that GPUImageFilter provides for you. inputImageTexture and textureCoordinate are standard inputs supplied by these filters, and it looks like you're providing the clarity uniform yourself.

Since textureSize isn't set, it will default to 0.0. Your 1.0 / textureSize.x calculation will then divide by zero, which tends to lead to black frames in an iOS fragment shader.

You could either calculate and provide that uniform, or instead take a look at basing your custom filter on GPUImage3x3TextureSamplingFilter. That filter base class passes in the result of 1.0 / textureSize.x as the texelWidth uniform (and the matching texelHeight for the vertical component), so you don't have to calculate this yourself. In fact, it also calculates the texture coordinates of the surrounding 8 pixels, so you can cut out four of the calculations above and convert those to non-dependent texture reads. You'd just need to calculate the remaining four reads based on 2 * texelWidth and 2 * texelHeight.
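For instance, here is a sketch of rebasing the filter on that class. Only the interface changes on the Objective-C side; the fragment shader would then read the base class's texelWidth / texelHeight uniforms and its precomputed varyings (topLeftTextureCoordinate, bottomRightTextureCoordinate, and so on) instead of a textureSize uniform:

#import <GPUImage/GPUImage.h>

// Sketch: inherit from the 3x3 sampling base class instead of GPUImageFilter,
// so texelWidth / texelHeight and the eight neighbor coordinates are supplied for you.
@interface GPUImageClarityFilter : GPUImage3x3TextureSamplingFilter
{
    GLint clarityUniform;
}

// Gives the image a gritty, surreal contrasty effect
// Value 0 to 1
@property (readwrite, nonatomic) GLfloat clarity;

@end

// -init can stay as it is above, since GPUImage3x3TextureSamplingFilter also
// supports initWithFragmentShaderFromString: and wires up texelWidth / texelHeight itself.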

You may in fact be able to break this operation into multiple passes to save on calculations: do a small box blur, then an overlay blend, then the last stage of this filter. That could speed this up further.
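As a rough illustration of that multi-pass structure (not a drop-in replacement), a GPUImageFilterGroup subclass could chain GPUImage's stock GPUImageBoxBlurFilter into a GPUImageOverlayBlendFilter; the final stage would still need its own custom filter, and the blend-input ordering needs verifying:

#import <GPUImage/GPUImage.h>

// Hypothetical multi-pass arrangement: blur the input, then overlay-blend it with the original.
@interface GPUImageClarityGroupFilter : GPUImageFilterGroup
@end

@implementation GPUImageClarityGroupFilter

- (id)init;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    // Pass 1: small box blur standing in for the neighborhood sampling above
    GPUImageBoxBlurFilter *blurFilter = [[GPUImageBoxBlurFilter alloc] init];
    [self addFilter:blurFilter];

    // Pass 2: overlay blend of the original image with the blurred copy
    GPUImageOverlayBlendFilter *blendFilter = [[GPUImageOverlayBlendFilter alloc] init];
    [self addFilter:blendFilter];

    // The group's input feeds both filters; the blur's output becomes the blend's other input.
    // (Verify which texture input ends up as base vs. overlay for the blend.)
    [blurFilter addTarget:blendFilter];

    self.initialFilters = @[blurFilter, blendFilter];
    self.terminalFilter = blendFilter;

    return self;
}

@end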

So, you can override

- (void)setupFilterForSize:(CGSize)filterFrameSize

to set up the width and height factors, the way GPUImageSharpenFilter does.
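As a minimal sketch of that approach for the filter above, assuming a GLint textureSizeUniform ivar is added alongside clarityUniform and looked up in -init with [filterProgram uniformIndex:@"textureSize"], the override could push the frame size into the shader's textureSize uniform using GPUImageFilter's setSize:forUniform:program: helper:

- (void)setupFilterForSize:(CGSize)filterFrameSize;
{
    // Once this runs, textureSize no longer stays at its 0.0 default,
    // so 1.0 / textureSize.x stops dividing by zero.
    // (If rotated input matters, swap width and height when
    // GPUImageRotationSwapsWidthAndHeight(inputRotation) is true, as GPUImageSharpenFilter does.)
    [self setSize:filterFrameSize forUniform:textureSizeUniform program:filterProgram];
}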
