CALayer Draggable Mask

I have two images of the same scene, the same size, that take up the entire screen. One is blurry, the other is in focus. The desired effect is that the user initially sees the blurry image, and as they drag their finger across the screen from left to right, the part of the image to the left of their finger comes into focus (for example, if they drag only halfway across, the left half of the scene is the focused image while the right half is still blurry).

I'm doing this by subclassing UIImageView with the focused image as its image, then adding a CALayer containing the blurry image with a mask applied, and changing the location of the mask according to touchesBegan/touchesMoved. The problem is that performance is very slow using the approach below, so I'm wondering what I'm doing wrong.

@interface DragMaskImageView : UIImageView
@end    

@implementation DragMaskImageView{
    BOOL userIsTouchingMask;
    CALayer *maskingLayer;
    CALayer *topLayer;
    float horzDistanceOfTouchFromCenter;
}

- (void)awakeFromNib {
    [super awakeFromNib];

    // Blurry image layered on top of the focused image shown by the UIImageView.
    topLayer = [CALayer layer];
    topLayer.contents = (id)[UIImage imageNamed:@"blurryImage.jpg"].CGImage;
    topLayer.frame = CGRectMake(0, 0, 480, 300);
    [self.layer addSublayer:topLayer];

    // Mask whose horizontal position determines how much of the blurry layer is visible.
    maskingLayer = [CALayer layer];
    maskingLayer.contents = (id)[UIImage imageNamed:@"maskImage.png"].CGImage;
    maskingLayer.anchorPoint = CGPointMake(0.0, 0.0);
    maskingLayer.bounds = CGRectMake(0, 0, 480, 300);
    [topLayer setMask:maskingLayer];
}

-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{
    CGPoint touchPoint = [[[event allTouches] anyObject] locationInView:self];

    if (touchPoint.x < maskingLayer.frame.origin.x) {
        NSLog(@"user is touching to the left of mask - disregard");
        userIsTouchingMask = NO;
    } else {
        NSLog(@"user is touching ");
        horzDistanceOfTouchFromCenter = touchPoint.x - maskingLayer.frame.origin.x;
        userIsTouchingMask = YES;
    }    
}

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event{
    if (userIsTouchingMask) {

        CGPoint touchPoint = [[[event allTouches] anyObject] locationInView:self];
        float newMaskX = touchPoint.x - horzDistanceOfTouchFromCenter;

        if (newMaskX < 0) {
            newMaskX = 0;
        }
        if (newMaskX > 480) {
            newMaskX = 480;
        }
        maskingLayer.frame = CGRectMake(newMaskX, 0, 480, 300);
    }   
}

I checked the related thread core animation calayer mask animation performance, but setting shouldRasterize to YES on any of the layers doesn't seem to help the performance problem.
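For reference, the rasterization attempt mentioned above amounts to something like the sketch below. shouldRasterize caches the layer's composited output as a bitmap, which tends not to help here: the mask position changes on every touch event, invalidating the cache and forcing a re-rasterization each frame.

    // Attempted (unsuccessful) fix: cache the masked layer as a bitmap.
    topLayer.shouldRasterize = YES;
    topLayer.rasterizationScale = [UIScreen mainScreen].scale;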

Perhaps the problem is that the layer's implicit animation is causing it to appear sluggish. Turning off the implicit animation should fix the problem:

[CATransaction begin];
[CATransaction setDisableActions:YES];
maskingLayer.frame = CGRectMake(newMaskX, 0, 480, 300);
[CATransaction commit];

This is why @Rizwan's work-around works: it's a way of bypassing the implicit animation.

I ran into the same problem: updating the frame of the mask always lags behind the touch. I found that re-creating the mask with the new frame ensures the mask is always updated instantly.
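A sketch of that work-around, assuming the same maskingLayer/topLayer setup and image names from the question: instead of moving the existing mask, build a fresh CALayer at the new position and swap it in. A newly created layer has no presentation state yet, so setting its frame does not trigger an implicit animation.

    -(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        if (!userIsTouchingMask) return;

        CGPoint touchPoint = [[[event allTouches] anyObject] locationInView:self];
        float newMaskX = touchPoint.x - horzDistanceOfTouchFromCenter;
        newMaskX = MAX(0.0f, MIN(newMaskX, 480.0f));

        // Re-create the mask at the new position rather than moving the old one;
        // a fresh layer has no prior state for Core Animation to animate from.
        CALayer *newMask = [CALayer layer];
        newMask.contents = (id)[UIImage imageNamed:@"maskImage.png"].CGImage;
        newMask.frame = CGRectMake(newMaskX, 0, 480, 300);
        maskingLayer = newMask;
        topLayer.mask = newMask;
    }

Note that this allocates a layer per touch event; the CATransaction approach above avoids that cost, so the re-creation trick is mainly useful if disabling actions is impractical in your setup.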
