My iPhone app is taking up too much memory
I made a gallery app using UICollectionView, but I am getting poor allocation performance, as shown below. I can't find where it goes wrong. Where should I explicitly release objects? Please let me know.
The following code is the part I suspect. In the collection view's data source method:
- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
...
dispatch_async(all_queue, ^{
    ALAssetRepresentation *representation = [asset defaultRepresentation];
    UIImage *image = [UIImage imageWithCGImage:[representation fullResolutionImage]
                                         scale:[representation scale]
                                   orientation:(UIImageOrientation)[representation orientation]];
    NSString *filename = [representation filename];
    NSLog(@"%@", filename);
    NSLog(@"Loaded Image row : %ld", (long)indexPath.row);
    vector<cv::Rect> faces = [ImageUtils findFeature:image minsize:MIN_FACE_SIZE
                                         withCascade:face_cascade];
    Mat imageMat = [ImageUtils cvMatFromUIImage:image];
    for (unsigned int i = 0; i < faces.size(); ++i) {
        rectangle(imageMat, cv::Point(faces[i].x, faces[i].y),
                  cv::Point(faces[i].x + faces[i].width, faces[i].y + faces[i].height),
                  cv::Scalar(0, 255, 255), 5);
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        [faceImage setImage:[ImageUtils UIImageFromCVMat:imageMat]];
        [cell setNeedsDisplay];
    });
});
return cell;
}
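For context on why this pattern is so heavy: `fullResolutionImage` decodes the original camera photo for every cell. A back-of-the-envelope sketch (plain C++; the 3264x2448 dimensions are just an assumed typical 8 MP photo, not taken from the question) of the RAM one decoded RGBA bitmap costs:

```cpp
#include <cstddef>

// Approximate RAM needed to hold one decoded RGBA bitmap:
// 4 bytes per pixel (8 bits x 4 channels), uncompressed in memory.
std::size_t rgbaBytes(std::size_t width, std::size_t height) {
    return width * height * 4;
}

// An assumed 8 MP photo (3264 x 2448) decodes to
// rgbaBytes(3264, 2448) == 31961088 bytes, roughly 30 MB -- and the
// code above then copies it again into a cv::Mat, so even a handful
// of visible cells can cost hundreds of MB.
```

The asset's cached thumbnail (`[asset thumbnail]`) or the representation's `fullScreenImage` are far smaller than `fullResolutionImage`, which is why reducing the resolution is the first thing to try.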
The called method:
+ (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
    // Obtained via a "Get" function, so this reference is not owned
    // here and must not be released.
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,    // Pointer to data
                                                    cols,          // Width of bitmap
                                                    rows,          // Height of bitmap
                                                    8,             // Bits per component
                                                    cvMat.step[0], // Bytes per row
                                                    colorSpace,    // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    return cvMat;
}
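An aside on the `cvMat.step[0]` value passed as bytes-per-row above: for a freshly allocated, continuous `CV_8UC4` matrix it is simply `cols * 4`. A minimal sketch of that invariant (plain C++, no OpenCV needed):

```cpp
#include <cstddef>

// Bytes per row of a tightly packed 8-bit, 4-channel (RGBA) image row --
// what a continuous CV_8UC4 cv::Mat reports as step[0], and what
// CGBitmapContextCreate expects as its bytesPerRow argument.
std::size_t bytesPerRowRGBA(std::size_t cols) {
    return cols * 4; // 1 byte x 4 channels per pixel, no row padding
}
```

If the two values ever disagreed (e.g. for a non-continuous Mat cropped from a larger one), the bitmap context would read the buffer with the wrong stride, so deriving both from the same freshly allocated Mat, as the code above does, is the safe choice.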
Another method:
+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,           // width
                                        cvMat.rows,           // height
                                        8,                    // bits per component
                                        8 * cvMat.elemSize(), // bits per pixel
                                        cvMat.step[0],        // bytesPerRow
                                        colorSpace,           // colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // bitmap info
                                        provider,             // CGDataProviderRef
                                        NULL,                 // decode
                                        false,                // should interpolate
                                        kCGRenderingIntentDefault); // intent
    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
The other method:
+ (cv::vector<cv::Rect>)findFeature:(UIImage *)image minsize:(cv::Size)minSize withCascade:(CascadeClassifier)cascade
{
    vector<cv::Rect> faces;
    Mat frame_gray;
    // Note: this converts the UIImage to a cv::Mat a second time;
    // the caller also builds its own Mat from the same image.
    Mat imageMat = [ImageUtils cvMatFromUIImage:image];
    cvtColor(imageMat, frame_gray, CV_BGRA2GRAY);
    equalizeHist(frame_gray, frame_gray);
    cascade.detectMultiScale(frame_gray, faces, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, minSize);
    frame_gray.release();
    imageMat.release();
    return faces;
}
It's because your UIImage resolution is too high. You have to find a way to reduce its size.
Use dequeueReusableCellWithReuseIdentifier when creating collection view cells.

Also resize the image you are processing; this will definitely reduce the memory usage.
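A concrete way to "resize your image": run face detection on a downscaled copy instead of the full-resolution bitmap. The sketch below is a hypothetical nearest-neighbour downscale over a packed RGBA buffer, written as a plain C++ stand-in for `cv::resize` or a scaled `CGContext` draw, just to show the idea:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical nearest-neighbour downscale of a tightly packed RGBA
// buffer (4 bytes per pixel). Detecting faces on, say, a quarter-size
// copy cuts the working-set memory by ~16x.
std::vector<std::uint8_t> downscaleRGBA(const std::vector<std::uint8_t>& src,
                                        int srcW, int srcH, int dstW, int dstH) {
    std::vector<std::uint8_t> dst(static_cast<std::size_t>(dstW) * dstH * 4);
    for (int y = 0; y < dstH; ++y) {
        int sy = y * srcH / dstH;           // nearest source row
        for (int x = 0; x < dstW; ++x) {
            int sx = x * srcW / dstW;       // nearest source column
            for (int c = 0; c < 4; ++c)     // copy all 4 channels
                dst[(static_cast<std::size_t>(y) * dstW + x) * 4 + c] =
                    src[(static_cast<std::size_t>(sy) * srcW + sx) * 4 + c];
        }
    }
    return dst;
}
```

In the question's code this would mean detecting on the small copy and, if the rectangles are needed on the original, scaling the resulting `cv::Rect` coordinates back up by the same factor.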