
How to change colour of individual pixel of UIImage/UIImageView

I have a UIImageView to which I have applied the filter:

testImageView.layer.magnificationFilter = kCAFilterNearest;

So that the individual pixels are visible. This UIImageView is within a UIScrollView, and the image itself is 1000x1000. I have used the following code to detect which pixel has been tapped:

I first set up a tap gesture recognizer:

UITapGestureRecognizer *scrollTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(singleTapGestureCaptured:)];
scrollTap.numberOfTapsRequired = 1;
[mainScrollView addGestureRecognizer:scrollTap];

Then I used the location of the tap to work out which pixel of the UIImageView was tapped:

- (void)singleTapGestureCaptured:(UITapGestureRecognizer *)gesture
{
    CGPoint touchPoint = [gesture locationInView:testImageView];

    // note: the y coordinate is normalised by the view's height, not its width
    NSLog(@"%f is X pixel num, %f is Y pixel num; %f is width of imageview", (touchPoint.x/testImageView.bounds.size.width)*1000, (touchPoint.y/testImageView.bounds.size.height)*1000, testImageView.bounds.size.width);

}

I would like to be able to tap a pixel and have its colour change. However, none of the StackOverflow posts I have found have answers that work and aren't outdated. Skilled coders may be able to help me decipher the older posts into something that works, or to produce a simple fix of their own using my code above for detecting which pixel of the UIImageView has been tapped.

All help is appreciated.

Edit for originaluser2:

After following originaluser2's post, the code works perfectly when I run it through his example GitHub project on my physical device. However, when I run the same code in my own app, the image is replaced with white space and I get the following errors:

<Error>: Unsupported pixel description - 3 components, 16 bits-per-component, 64 bits-per-pixel
<Error>: CGBitmapContextCreateWithData: failed to create delegate.
<Error>: CGContextDrawImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
<Error>: CGBitmapContextCreateImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.

The code clearly works, as demonstrated by testing it on my phone, yet the same code produces these issues in my own project. I suspect they are all caused by one or two simple central issues. How can I solve these errors?

You'll want to break this problem up into multiple steps.

  1. Get the coordinates of the touched point in the image coordinate system
  2. Get the x and y position of the pixel to change
  3. Create a bitmap context and replace the given pixel's components with your new color's components.

First of all, to get the coordinates of the touched point in the image coordinate system – you can use a category method that I wrote on UIImageView. This will return a CGAffineTransform that will map a point from view coordinates to image coordinates – depending on the content mode of the view.

@interface UIImageView (PointConversionCatagory)

@property (nonatomic, readonly) CGAffineTransform viewToImageTransform;
@property (nonatomic, readonly) CGAffineTransform imageToViewTransform;

@end

@implementation UIImageView (PointConversionCatagory)

-(CGAffineTransform) viewToImageTransform {

    UIViewContentMode contentMode = self.contentMode;

    // failure conditions. If any of these are met – return the identity transform
    if (!self.image || self.frame.size.width == 0 || self.frame.size.height == 0 ||
        (contentMode != UIViewContentModeScaleToFill && contentMode != UIViewContentModeScaleAspectFill && contentMode != UIViewContentModeScaleAspectFit)) {
        return CGAffineTransformIdentity;
    }

    // the width and height ratios
    CGFloat rWidth = self.image.size.width/self.frame.size.width;
    CGFloat rHeight = self.image.size.height/self.frame.size.height;

    // whether the image will be scaled according to width
    BOOL imageWiderThanView = rWidth > rHeight;

    if (contentMode == UIViewContentModeScaleAspectFit || contentMode == UIViewContentModeScaleAspectFill) {

        // The ratio to scale both the x and y axis by
        CGFloat ratio = ((imageWiderThanView && contentMode == UIViewContentModeScaleAspectFit) || (!imageWiderThanView && contentMode == UIViewContentModeScaleAspectFill)) ? rWidth:rHeight;

        // The x-offset of the inner rect as it gets centered
        CGFloat xOffset = (self.image.size.width-(self.frame.size.width*ratio))*0.5;

        // The y-offset of the inner rect as it gets centered
        CGFloat yOffset = (self.image.size.height-(self.frame.size.height*ratio))*0.5;

        return CGAffineTransformConcat(CGAffineTransformMakeScale(ratio, ratio), CGAffineTransformMakeTranslation(xOffset, yOffset));
    } else {
        return CGAffineTransformMakeScale(rWidth, rHeight);
    }
}

-(CGAffineTransform) imageToViewTransform {
    return CGAffineTransformInvert(self.viewToImageTransform);
}

@end

There's nothing too complicated here, just some extra logic for scale aspect fit/fill, to ensure the centering of the image is taken into account. You could skip this step entirely if you were displaying your image 1:1 on screen.

Next, you'll want to get the x and y position of the pixel to change. This is fairly simple – you just want to use the above category property viewToImageTransform to get the pixel in the image coordinate system, and then use floor to make the values integral.

UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(imageViewWasTapped:)];
tapGesture.numberOfTapsRequired = 1;
[imageView addGestureRecognizer:tapGesture];

...

-(void) imageViewWasTapped:(UIGestureRecognizer*)tapGesture {

    if (!imageView.image) {
        return;
    }

    // get the pixel position
    CGPoint pt = CGPointApplyAffineTransform([tapGesture locationInView:imageView], imageView.viewToImageTransform);
    PixelPosition pixelPos = {(NSInteger)floor(pt.x), (NSInteger)floor(pt.y)};

    // replace image with new image, with the pixel replaced
    imageView.image = [imageView.image imageWithPixel:pixelPos replacedByColor:[UIColor colorWithRed:0 green:1 blue:1 alpha:1.0]];
}

Finally, you'll want to use another of my category methods – imageWithPixel:replacedByColor: – to get your new image with a given pixel replaced by a given color.

/// A simple struct to represent the position of a pixel
struct PixelPosition {
    NSInteger x;
    NSInteger y;
};

typedef struct PixelPosition PixelPosition;

@interface UIImage (UIImagePixelManipulationCatagory)

@end

@implementation UIImage (UIImagePixelManipulationCatagory)

-(UIImage*) imageWithPixel:(PixelPosition)pixelPosition replacedByColor:(UIColor*)color {

    // components of replacement color – in a 255 UInt8 format (fairly standard bitmap format)
    const CGFloat* colorComponents = CGColorGetComponents(color.CGColor);
    UInt8* color255Components = calloc(sizeof(UInt8), 4);
    for (int i = 0; i < 4; i++) color255Components[i] = (UInt8)round(colorComponents[i]*255.0);

    // raw image reference
    CGImageRef rawImage = self.CGImage;

    // image attributes
    size_t width = CGImageGetWidth(rawImage);
    size_t height = CGImageGetHeight(rawImage);
    CGRect rect = {CGPointZero, {width, height}};

    // image format
    size_t bitsPerComponent = 8;
    size_t bytesPerRow = width*4;

    // the bitmap info
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;

    // data pointer – stores an array of the pixel components. For example (r0, b0, g0, a0, r1, g1, b1, a1 .... rn, gn, bn, an)
    UInt8* data = calloc(bytesPerRow, height);

    // get new RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create bitmap context
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);

    // draw image into context (populating the data array while doing so)
    CGContextDrawImage(ctx, rect, rawImage);

    // get the index of the pixel (4 components times the x position plus the y position times the row width)
    NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));

    // set the pixel components to the color components
    data[pixelIndex] = color255Components[0]; // r
    data[pixelIndex+1] = color255Components[1]; // g
    data[pixelIndex+2] = color255Components[2]; // b
    data[pixelIndex+3] = color255Components[3]; // a

    // get image from context
    CGImageRef img = CGBitmapContextCreateImage(ctx);

    // clean up
    free(color255Components);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(data);

    UIImage* returnImage = [UIImage imageWithCGImage:img];
    CGImageRelease(img);

    return returnImage;
}

@end

What this does is first get the components of the color you want to write to one of the pixels, in a 255 UInt8 format. Next, it creates a new bitmap context with the given attributes of your input image.

The important bit of this method is:

// get the index of the pixel (4 components times the x position plus the y position times the row width)
NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));

// set the pixel components to the color components
data[pixelIndex] = color255Components[0]; // r
data[pixelIndex+1] = color255Components[1]; // g
data[pixelIndex+2] = color255Components[2]; // b
data[pixelIndex+3] = color255Components[3]; // a

What this does is get the index of a given pixel (based on the x and y coordinates of the pixel), then uses that index to replace that pixel's component data with the color components of your replacement color.
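That indexing can be isolated into a couple of plain-C helpers (hypothetical names) to make the buffer layout explicit:

```c
#include <stdint.h>
#include <stddef.h>

/* Byte offset of pixel (x, y) in a row-major RGBA buffer: each row is
   width * 4 bytes, each pixel is 4 bytes (r, g, b, a). */
size_t pixelIndex(size_t x, size_t y, size_t width) {
    return 4 * (x + y * width);
}

/* Overwrite one pixel's four components, as in the snippet above. */
void setPixelRGBA(uint8_t *data, size_t x, size_t y, size_t width,
                  uint8_t r, uint8_t g, uint8_t b, uint8_t a) {
    size_t i = pixelIndex(x, y, width);
    data[i]     = r;
    data[i + 1] = g;
    data[i + 2] = b;
    data[i + 3] = a;
}
```

For a 4-pixel-wide buffer, pixel (2, 1) lands at byte offset 4 * (2 + 1 * 4) = 24, and only those four bytes are touched.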

Finally, we get an image out of the bitmap context and perform some cleanup.

Finished Result:



Full Project: https://github.com/hamishknight/Pixel-Color-Changing

You could try something like the following:

UIImage *originalImage = [UIImage imageNamed:@"something"];

CGSize size = originalImage.size;

UIGraphicsBeginImageContext(size);

[originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];

// myColor is an instance of UIColor
[myColor setFill];
UIRectFill(CGRectMake(someX, someY, 1, 1));

UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();

UIGraphicsEndImageContext();
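One caveat if you try this: UIGraphicsBeginImageContext and UIRectFill work in points, so if you opt into a Retina-scale context (e.g. via UIGraphicsBeginImageContextWithOptions with a non-1 scale), a 1x1-point rect covers several pixels. A sketch of the rect you would fill for a single pixel, given the context's scale (hypothetical names, plain C for the arithmetic):

```c
typedef struct { double x, y, w, h; } FillRect;

/* The 1-pixel fill rect, in point coordinates, for pixel (px, py) in a
   context whose backing store has `scale` pixels per point. */
FillRect onePixelRect(long px, long py, double scale) {
    FillRect r = { px / scale, py / scale, 1.0 / scale, 1.0 / scale };
    return r;
}
```

At scale 1 (the default for UIGraphicsBeginImageContext) this reduces to the 1x1 rect used above.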
