
Core Graphics - draw a grayscale image using an array of integers

I am trying to create a UIImage using Core Graphics.

I want to draw an image divided into 4 different grayscale areas/pixels:

    White   Gray
    Gray    Black

So, using Core Graphics, I would like to define an array of 4 int8_t values that corresponds to the desired image:

int8_t data[] = {
    255,   122,
    122,     0,
};

255 is white, 122 is gray, 0 is black.
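
(The array is laid out in row-major order: data[0] is the top-left pixel, data[1] the top-right, data[2] the bottom-left, and data[3] the bottom-right.)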

The best reference for similar code that I could find is here.

That reference deals with an RGB image, so I came up with this code using my own common sense (pardon my Objective-C "French"; it is not my strongest language :)):

- (UIImage *)getImageFromGrayScaleArray {
    
    int width = 2;
    int height = 2;
  
    int8_t data[] = {
        255, 122,
        122, 0,
    };

    CGDataProviderRef provider = CGDataProviderCreateWithData (NULL,
                                                               &data[0],
                                                               width * height,
                                                               NULL);

    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceGray();
    
    CGImageRef imageRef = CGImageCreate (width,
                                         height,
                                         [self bitsPerComponent],
                                         [self bitsPerPixel],
                                         width * [self bytesPerPixel],
                                         colorSpaceRef,
                                         kCGBitmapByteOrderDefault,
                                         provider,
                                         NULL,
                                         NO,
                                         kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    
    return image;
}


- (int)bitsPerPixel {
    return 8 * [self bytesPerPixel];
}

- (int)bytesPerPixel {
    return [self bytesPerComponent] * [self componentsPerPixel];
}

- (int)componentsPerPixel {
    return 1;
}

- (int)bytesPerComponent {
    return 1;
}

- (int)bitsPerComponent {
    return 8 * [self bytesPerComponent];
}

But... this code gives me an entirely black UIImage:

[screenshot: the all-black result]

Can someone please point me to a resource where I can read up and understand how to do such a task? Documentation on Core Graphics seems quite scarce for this kind of task, and all this guessing takes forever :)

You're close...

Grayscale images with alpha need TWO components per pixel: brightness and alpha.

So, with just a couple changes (see the comments):

- (UIImage *)getImageFromGrayScaleArray {
    
    int width = 2;
    int height = 2;
    
    // 1 byte for brightness, 1 byte for alpha (uint8_t, so 255 fits without overflow)
    uint8_t data[] = {
        255, 255,
        122, 255,
        122, 255,
        0, 255,
    };
    
    CGDataProviderRef provider = CGDataProviderCreateWithData (NULL,
                                                               &data[0],
                                                               // size is width * height * bytesPerPixel
                                                               width * height * [self bytesPerPixel],
                                                               NULL);
    
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceGray();
    
    CGImageRef imageRef = CGImageCreate (width,
                                         height,
                                         [self bitsPerComponent],
                                         [self bitsPerPixel],
                                         width * [self bytesPerPixel],
                                         colorSpaceRef,
                                         // use this
                                         kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                         // instead of this
                                         //kCGBitmapByteOrderDefault,
                                         provider,
                                         NULL,
                                         NO,
                                         kCGRenderingIntentDefault);
    
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    
    return image;
}


- (int)bitsPerPixel {
    return 8 * [self bytesPerPixel];
}

- (int)bytesPerPixel {
    return [self bytesPerComponent] * [self componentsPerPixel];
}

- (int)componentsPerPixel {
    return 2;  // 1 byte for brightness, 1 byte for alpha
}

- (int)bytesPerComponent {
    return 1;
}

- (int)bitsPerComponent {
    return 8 * [self bytesPerComponent];
}
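
To make the sizes concrete, here is what those helpers evaluate to for this 2x2 image:

    componentsPerPixel = 2                  // brightness + alpha
    bytesPerComponent  = 1
    bytesPerPixel      = 1 * 2 = 2
    bitsPerComponent   = 8 * 1 = 8
    bitsPerPixel       = 8 * 2 = 16
    bytesPerRow        = width * bytesPerPixel          = 2 * 2 = 4
    buffer size        = width * height * bytesPerPixel = 2 * 2 * 2 = 8 bytes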

Edit -- I think there's an issue with the memory buffer addressing in the code above: the data provider keeps a pointer to the stack-allocated array, which can go out of scope before the image is actually drawn. After some testing, I'm getting inconsistent results.

Give it a try with this modified code:

@interface TestingViewController : UIViewController
@end
@interface TestingViewController ()
@end
@implementation TestingViewController

// CGDataProviderCreateWithData callback to free the pixel data buffer
void freePixelData(void *info, const void *data, size_t size) {
    free((void *)data);
}

- (UIImage*) getImageFromGrayScaleArray:(BOOL)allBlack {
    
    // uint8_t (not int8_t), so the value 255 fits
    uint8_t grayArray[] = {
        255, 122,
        122, 0,
    };
    
    uint8_t blackArray[] = {
        0, 0,
        0, 0,
    };
    
    int width = 2;
    int height = 2;
    
    int imageSizeInPixels = width * height;
    int bytesPerPixel = 2; // 1 byte for brightness, 1 byte for alpha
    unsigned char *pixels = (unsigned char *)malloc(imageSizeInPixels * bytesPerPixel);
    memset(pixels, 255, imageSizeInPixels * bytesPerPixel); // fill all bytes with 255: alpha bytes stay 255, brightness bytes are overwritten below
    
    if (allBlack) {
        for (int i = 0; i < imageSizeInPixels; i++) {
            pixels[i * 2] = blackArray[i]; // writing array of bytes as image brightnesses
        }
    } else {
        for (int i = 0; i < imageSizeInPixels; i++) {
            pixels[i * 2] = grayArray[i]; // writing array of bytes as image brightnesses
        }
    }
    
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceGray();
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                              pixels,
                                                              imageSizeInPixels * bytesPerPixel,
                                                              freePixelData);
    
    CGImageRef imageRef = CGImageCreate(width,
                                        height,
                                        8,
                                        8 * bytesPerPixel,
                                        width * bytesPerPixel,
                                        colorSpaceRef,
                                        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault);
    
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    
    return image;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    
    self.view.backgroundColor = [UIColor systemTealColor];
    
    UIImage *img1 = [self getImageFromGrayScaleArray:NO];
    UIImage *img2 = [self getImageFromGrayScaleArray:YES];
    
    UIImageView *v1 = [UIImageView new];
    UIImageView *v2 = [UIImageView new];
    
    v1.image = img1;
    v1.backgroundColor = [UIColor systemYellowColor];
    v2.image = img2;
    v2.backgroundColor = [UIColor systemYellowColor];
    
    v1.contentMode = UIViewContentModeScaleToFill;
    v2.contentMode = UIViewContentModeScaleToFill;
    
    v1.translatesAutoresizingMaskIntoConstraints = NO;
    [self.view addSubview:v1];
    v2.translatesAutoresizingMaskIntoConstraints = NO;
    [self.view addSubview:v2];
    
    UILayoutGuide *g = [self.view safeAreaLayoutGuide];
    
    [NSLayoutConstraint activateConstraints:@[
        
        [v1.topAnchor constraintEqualToAnchor:g.topAnchor constant:40.0],
        [v1.centerXAnchor constraintEqualToAnchor:g.centerXAnchor],
        [v1.widthAnchor constraintEqualToConstant:200.0],
        [v1.heightAnchor constraintEqualToAnchor:v1.widthAnchor],
        
        [v2.topAnchor constraintEqualToAnchor:v1.bottomAnchor constant:40.0],
        [v2.centerXAnchor constraintEqualToAnchor:self.view.centerXAnchor],
        [v2.widthAnchor constraintEqualToAnchor:v1.widthAnchor],
        [v2.heightAnchor constraintEqualToAnchor:v2.widthAnchor],
        
    ]];
}

@end

We add two 200x200 image views and set the top one's .image to the UIImage returned when using:

    uint8_t grayArray[] = {
        255, 122,
        122, 0,
    };
    

and the bottom image using:

    uint8_t blackArray[] = {
        0, 0,
        0, 0,
    };
    

Output:

[screenshot: the top image view shows the four white/gray/gray/black quadrants; the bottom image view is solid black]
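
As one further variation (my own sketch, not part of the original answer): you can avoid manual buffer ownership entirely by copying the pixel bytes into a CFData and creating the provider with CGDataProviderCreateWithCFData. The provider retains that copy, so no release callback and no dangling stack pointer are possible. The method name grayImageFromBrightness:width:height: is just an illustrative choice:

- (UIImage *)grayImageFromBrightness:(const uint8_t *)brightness width:(int)width height:(int)height {
    int bytesPerPixel = 2; // 1 byte brightness + 1 byte alpha
    size_t bufferSize = (size_t)width * height * bytesPerPixel;
    uint8_t *pixels = (uint8_t *)malloc(bufferSize);
    for (int i = 0; i < width * height; i++) {
        pixels[i * 2]     = brightness[i]; // gray level
        pixels[i * 2 + 1] = 255;           // fully opaque
    }
    
    // CFDataCreate copies the bytes, so the temporary buffer can be freed immediately
    CFDataRef pixelData = CFDataCreate(kCFAllocatorDefault, pixels, (CFIndex)bufferSize);
    free(pixels);
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(pixelData);
    CFRelease(pixelData);
    
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceGray();
    CGImageRef imageRef = CGImageCreate(width,
                                        height,
                                        8,                     // bitsPerComponent
                                        8 * bytesPerPixel,     // bitsPerPixel
                                        width * bytesPerPixel, // bytesPerRow
                                        colorSpaceRef,
                                        kCGImageAlphaPremultipliedLast,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault);
    
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    
    return image;
}

Called with the same 2x2 data as above:

    uint8_t grayArray[] = { 255, 122, 122, 0 };
    UIImage *img = [self grayImageFromBrightness:grayArray width:2 height:2];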
