https://github.com/czboosj/HJImagesToVideo — I fixed a memory leak in this library myself
https://github.com/Willib/ImagesToVideo — a Swift version of the image handling
From: http://www.itqls.com/index.php?m=Home&c=Article&a=index&id=63
I recently worked on a project for a wireless device that needed to combine scanned images into a video.
The scanned content is drawn with drawRect, so it does not exist as an Image,
while the image-to-video conversion works on CGImage.
The approach is as follows:
Since I need an array of Images, I first have to convert the drawn content into images by snapshotting the view.
Screenshot (the correct way):
- (UIImage *)screenCap
{
    CGSize size = CGSizeMake((int)self.bounds.size.width, (int)self.bounds.size.height);
    // size is the bitmap size; YES means opaque — many blog posts get this
    // backwards, opaque means NOT transparent. The third parameter is the
    // scale factor, taken from the screen so the capture stays sharp.
    UIGraphicsBeginImageContextWithOptions(size, YES, [UIScreen mainScreen].scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.layer renderInContext:context];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    // Note: do NOT call CGContextRelease(context) here. The context returned
    // by UIGraphicsGetCurrentContext() is owned by the context stack, so
    // releasing it is an over-release; UIGraphicsEndImageContext() cleans up.
    UIGraphicsEndImageContext();
    return viewImage;
}
This method returns the current screen contents as an Image.
But if you need a lot of images — say 100 — that amounts to taking 100 screenshots,
and the memory consumption is frightening.
So when calling screenCap, wrap each call in an @autoreleasepool block so the temporary objects are drained promptly.
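A capture loop following that advice might look like the sketch below. This is illustrative only: `scanView`, the frame count of 100, and the `frames` array are hypothetical names, not part of the original code.

```objectivec
// Sketch: capture many frames without the memory spike (names are hypothetical).
NSMutableArray<UIImage *> *frames = [NSMutableArray array];
for (int i = 0; i < 100; i++) {
    @autoreleasepool {
        // The autoreleased temporaries created by screenCap are drained at
        // the end of each pool iteration instead of piling up.
        UIImage *frame = [self.scanView screenCap];
        [frames addObject:frame];
    }
}
```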
Images to video:
- (void)saveVideo:(NSMutableArray *)imageArr withPaths:(NSString *)paths andCallBack:(void (^)(void))callBack
{
    if (!imageArr.count) {
        return;
    }
    // Use one capture to determine the source dimensions.
    UIImage *image = self.screenCap;
    CGSize sizeImage = image.size;
    // H.264 wants dimensions that are multiples of 16, so round down.
    int width  = ((int)(sizeImage.width  / 16) * 16);
    int height = ((int)(sizeImage.height / 16) * 16);
    NSLog(@"%d,%d", width, height);
    CGSize size = CGSizeMake(width, height);

    // Remove any existing file at the target path, then create the writer.
    unlink([paths UTF8String]);
    NSError *error = nil;
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:paths]
                                                           fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];
    NSParameterAssert(videoWriter);
    if (error) {
        NSLog(@"error = %@", [error localizedDescription]);
    }

    NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                     AVVideoWidthKey  : @(width),
                                     AVVideoHeightKey : @(height) };
    AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                         outputSettings:videoSettings];
    NSDictionary *sourcePixelBufferAttributes = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32ARGB) };
    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                         sourcePixelBufferAttributes:sourcePixelBufferAttributes];
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    if ([videoWriter canAddInput:writerInput]) {
        NSLog(@"Can add this input");
    } else {
        NSLog(@"Cannot add this input");
    }
    [videoWriter addInput:writerInput];
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];
    dispatch_queue_t dispatchQueue = dispatch_queue_create("mediaInputQueue", NULL);
    [writerInput requestMediaDataWhenReadyOnQueue:dispatchQueue usingBlock:^{
        NSUInteger fps = 10;
        double numberOfSecondsPerFrame = 0.1;
        // With fps = 10 and 0.1 s per frame, each frame advances the
        // timestamp by 1 tick on a 10-ticks-per-second timescale.
        double frameDuration = fps * numberOfSecondsPerFrame;
        int frameCount = 0;
        for (UIImage *img in imageArr) {
            // Convert the UIImage's CGImage into a pixel buffer.
            CVPixelBufferRef buffer = [self pixelBufferFromCGImage:[img CGImage] size:size];
            BOOL append_ok = NO;
            int j = 0;
            while (!append_ok && j < 30) {
                if (adaptor.assetWriterInput.readyForMoreMediaData) {
                    CMTime frameTime = CMTimeMake((int64_t)(frameCount * frameDuration), (int32_t)fps);
                    append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
                    if (!append_ok) {
                        NSError *error = videoWriter.error;
                        if (error != nil) {
                            NSLog(@"Unresolved error %@, %@.", error, [error userInfo]);
                        }
                    }
                } else {
                    printf("adaptor not ready %d, %d\n", frameCount, j);
                    [NSThread sleepForTimeInterval:0.1];
                }
                j++;
            }
            if (!append_ok) {
                printf("error appending image %d after %d attempts\n", frameCount, j);
            }
            frameCount++;
            // Release the buffer we created, or memory balloons fast.
            CVPixelBufferRelease(buffer);
        }
        // Finish the session. finishWriting is deprecated; use the
        // asynchronous completion handler and invoke the callback there.
        [writerInput markAsFinished];
        [videoWriter finishWritingWithCompletionHandler:^{
            callBack();
        }];
    }];
}
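A call site might look like the sketch below. The output file name, the `scanView` property holding the drawing view, and the `frames` array are assumptions for illustration, not part of the original project.

```objectivec
// Hypothetical call site: write the captured frames to Documents/out.mov.
NSString *dir  = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                     NSUserDomainMask, YES).firstObject;
NSString *path = [dir stringByAppendingPathComponent:@"out.mov"];
[self.scanView saveVideo:self.frames withPaths:path andCallBack:^{
    NSLog(@"video written to %@", path);
}];
```

Note the callback fires on the writer's background queue, so dispatch back to the main queue before touching UI.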
The method below produces a pixel buffer from a CGImage; the video is stitched together from these buffers.
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image size:(CGSize)size
{
    NSDictionary *options = @{ (id)kCVPixelBufferCGImageCompatibilityKey       : @YES,
                               (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    // Draw the CGImage into the pixel buffer's memory. Use the buffer's own
    // bytes-per-row rather than 4 * width: CoreVideo may pad rows for
    // alignment, and assuming 4 * width skews the image when it does.
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace, kCGImageAlphaPremultipliedFirst);
    NSParameterAssert(context);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    // The caller owns the returned buffer and must CVPixelBufferRelease it.
    return pxbuffer;
}
Memory usage spikes here, so be careful to release every buffer you create.