Integrating a Beauty Filter into a Live-Streaming iOS App

Integrating a beauty filter into a live-streaming iOS app: getting the buffer after GPUImage processing

From: http://www.jianshu.com/p/dde412cab8db

I recently needed to add a beauty (skin-smoothing) feature to a live-streaming project. After evaluating quite a few SDKs and open-source options (视决, 涂图, 七牛, 金山云, VideoCore, among others) and weighing cost, visual quality, and how invasive each would be to the project, I settled on BeautifyFaceDemo, a beauty filter implemented on top of GPUImage.

The filter code and the implementation approach are covered in the BeautifyFace GitHub repository and in the author 琨君's posts on Jianshu.

Integrating GPUImageBeautifyFilter and the GPUImage framework

First, integrate GPUImage itself; from what I have seen of the iOS platform, well over 90% of beauty-filter solutions are built on this framework.
The project's original AVCaptureDevice setup needs to be replaced with GPUImageVideoCamera, and the parts GPUImage already implements, such as AVCaptureSession / AVCaptureDeviceInput / AVCaptureVideoDataOutput, can be deleted. You also need to adjust the lifecycle, camera-switching, and portrait/landscape rotation logic so that behavior stays consistent with what the app did before.
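As a rough illustration of that migration (a minimal sketch: rotateCamera, outputImageOrientation, and pauseCameraCapture are part of GPUImageVideoCamera's public API, but how they map onto your own view-controller lifecycle will vary by project):

// Switch between front and back cameras; GPUImageVideoCamera rebuilds
// its internal AVCaptureSession inputs for you.
- (void)switchCamera {
    [self.videoCamera rotateCamera];
}

// Keep the preview upright across interface rotations.
- (void)viewWillTransitionToSize:(CGSize)size
       withTransitionCoordinator:(id<UIViewControllerTransitionCoordinator>)coordinator {
    [super viewWillTransitionToSize:size withTransitionCoordinator:coordinator];
    [coordinator animateAlongsideTransition:nil
                                 completion:^(id<UIViewControllerTransitionCoordinatorContext> ctx) {
        // Read the new interface orientation once the rotation finishes.
        self.videoCamera.outputImageOrientation =
            [UIApplication sharedApplication].statusBarOrientation;
    }];
}

// Pause with the view-controller lifecycle instead of managing
// an AVCaptureSession directly.
- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    [self.videoCamera pauseCameraCapture];
}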

Declare the properties we need

@property (nonatomic, strong) GPUImageVideoCamera *videoCamera;
// The view displayed on screen
@property (nonatomic, strong) GPUImageView *filterView;
// The BeautifyFace beauty filter
@property (nonatomic, strong) GPUImageBeautifyFilter *beautifyFilter;

Then initialize them

self.sessionPreset = AVCaptureSessionPreset1280x720;
self.videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:self.sessionPreset cameraPosition:AVCaptureDevicePositionBack];

self.filterView = [[GPUImageView alloc] init];
[self.view insertSubview:self.filterView atIndex:1]; // frame setup omitted

// I added a custom initializer to GPUImageBeautifyFilter to set the beautification intensity
self.beautifyFilter = [[GPUImageBeautifyFilter alloc] initWithIntensity:0.6];
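The stock BeautifyFace filter ships without an intensity parameter; below is a rough guess at the kind of initializer the author added (initWithIntensity: and the _intensity ivar are assumptions, not upstream API):

// Hypothetical addition to GPUImageBeautifyFilter: a single 0.0-1.0 knob
// controlling how strongly the smoothing/whitening is blended with the source.
- (instancetype)initWithIntensity:(CGFloat)intensity {
    if (self = [self init]) {     // run the filter group's normal setup first
        _intensity = intensity;   // assumed ivar, consumed by the blend stage
    }
    return self;
}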

Wire the beauty filter between the camera and filterView

[self.videoCamera addTarget:self.beautifyFilter];
[self.beautifyFilter addTarget:self.filterView];

Then call startCameraCapture and you should see the filtered preview

[self.videoCamera startCameraCapture];

At this point only the on-screen preview carries the filter effect; a live-streaming app also needs to output a video stream with the beauty effect applied.

Outputting a video stream with the beauty effect

I hit a pitfall right at the start of the integration. The original logic obtained raw frames by implementing the AVCaptureVideoDataOutputSampleBufferDelegate method

- (void) captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;

GPUImageVideoCamera exposes a similar delegate:

@protocol GPUImageVideoCameraDelegate <NSObject>
@optional
- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer;
@end

After switching to it, however, the output stream was still the unfiltered image. A look at the implementation confirmed why: GPUImageVideoCameraDelegate hands back data straight from AVCaptureVideoDataOutputSampleBufferDelegate, before any filter runs. So to output a filtered stream we need GPUImageRawDataOutput.

CGSize outputSize = {720, 1280};
GPUImageRawDataOutput *rawDataOutput = [[GPUImageRawDataOutput alloc] initWithImageSize:CGSizeMake(outputSize.width, outputSize.height) resultsInBGRAFormat:YES];
[self.beautifyFilter addTarget:rawDataOutput];

GPUImageRawDataOutput is effectively beautifyFilter's output sink: inside the block passed to setNewFrameAvailableBlock we can read the frame data with the filter applied

__weak GPUImageRawDataOutput *weakOutput = rawDataOutput;
__weak typeof(self) weakSelf = self;

[rawDataOutput setNewFrameAvailableBlock:^{
    __strong GPUImageRawDataOutput *strongOutput = weakOutput;
    [strongOutput lockFramebufferForReading];

    // Here we can read the frame data with the filter applied
    GLubyte *outputBytes = [strongOutput rawBytesForImage];
    NSInteger bytesPerRow = [strongOutput bytesPerRowInOutput];
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, outputSize.width, outputSize.height, kCVPixelFormatType_32BGRA, outputBytes, bytesPerRow, nil, nil, nil, &pixelBuffer);

    // From here the frame can be hardware-encoded with VideoToolbox and pushed over RTMP
    [weakSelf encodeWithCVPixelBufferRef:pixelBuffer];

    [strongOutput unlockFramebufferAfterReading];
    CFRelease(pixelBuffer);

}];
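encodeWithCVPixelBufferRef: is the project's own method and is not shown in the post. As a rough idea of the VideoToolbox side, here is a minimal sketch (the VTCompressionSession calls are the real API; the frameCount property, the fixed 30 fps timeline, and the RTMP hand-off are assumptions):

#import <VideoToolbox/VideoToolbox.h>

static VTCompressionSessionRef compressionSession;

// Called by VideoToolbox with each encoded H.264 frame; an RTMP layer
// would extract the NAL units from the CMSampleBuffer and send them out.
static void didCompress(void *refcon, void *frameRefcon, OSStatus status,
                        VTEncodeInfoFlags flags, CMSampleBufferRef sampleBuffer) {
    if (status != noErr || sampleBuffer == NULL) return;
    // ... hand NAL units to the RTMP packetizer ...
}

- (void)setupEncoderWithSize:(CGSize)size {
    VTCompressionSessionCreate(kCFAllocatorDefault,
                               (int32_t)size.width, (int32_t)size.height,
                               kCMVideoCodecType_H264, NULL, NULL, NULL,
                               didCompress, (__bridge void *)self,
                               &compressionSession);
    // Real-time mode suits live streaming better than offline quality mode.
    VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_RealTime,
                         kCFBooleanTrue);
    VTCompressionSessionPrepareToEncodeFrames(compressionSession);
}

- (void)encodeWithCVPixelBufferRef:(CVPixelBufferRef)pixelBuffer {
    CMTime pts = CMTimeMake(self.frameCount++, 30); // assumed 30 fps timeline
    VTCompressionSessionEncodeFrame(compressionSession, pixelBuffer, pts,
                                    kCMTimeInvalid, NULL, NULL, NULL);
}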

Remaining issues

Compared against other products, GPUImageBeautifyFilter's skin smoothing looks most similar to Huajiao (花椒): it uses bilateral filtering, whereas Huajiao appears to use a Gaussian blur. Compared with Inke (印客), the whitening effect is middling.

There are also some performance problems:

1. After setNewFrameAvailableBlock is set, many device models sit at exactly 15 fps.
2. On 6s-generation devices the phone runs very hot; the frame rate can reach 30 fps but is unstable.

Update (Aug 13)

  1. On performance: replacing the GPUImageCannyEdgeDetectionFilter used inside the integrated beauty filter (BeautifyFace) with GPUImageSobelEdgeDetectionFilter brings a big improvement, with nearly identical visual results; after extended testing, the 6s no longer showed high-temperature warnings. The swap is trivial, just two class/variable name changes, as sketched below.
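A rough sketch of that swap inside the filter's setup code (the edgeFilter property name is illustrative; both classes are stock GPUImage filters):

// Before: Canny edge detection (multi-pass, expensive on mobile GPUs)
// self.edgeFilter = [[GPUImageCannyEdgeDetectionFilter alloc] init];

// After: Sobel edge detection (single pass, far cheaper, near-identical result here)
self.edgeFilter = [[GPUImageSobelEdgeDetectionFilter alloc] init];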

  2. Sharing a bug: I recently noticed that with beauty enabled, memory was not released when the stream was closed. Analysis showed that the block passed to GPUImageRawDataOutput's setNewFrameAvailableBlock was still retaining self; the fix is to remove the GPUImageRawDataOutput target.

Here is the earlier release code, for reference:

[self.videoCamera stopCameraCapture];
[self.videoCamera removeInputsAndOutputs];
[self.videoCamera removeAllTargets];

I had assumed that calling removeAllTargets on the camera would also release the filter attached to it, along with the filter's output, but the camera does not 'helpfully' remove the filter's own targets, so we need to add:

[self.beautifyFilter removeAllTargets]; // fixes the leak when beauty is enabled

With beauty disabled, the output is attached directly to the camera, so removeAllTargets on the camera is enough;
with beauty enabled, the output hangs off the filter, so both the camera and the filter need removeAllTargets, as in the sketch below.
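Putting it together, the full teardown might look like this (a sketch assembled from the calls shown above):

- (void)tearDownCapture {
    [self.videoCamera stopCameraCapture];
    [self.videoCamera removeInputsAndOutputs];
    [self.videoCamera removeAllTargets];    // detaches beautifyFilter from the camera
    [self.beautifyFilter removeAllTargets]; // detaches rawDataOutput, breaking the retain cycle on self
}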


Objective-C version: converting a sample buffer to an image

// CIContext's -render:toBitmap:rowBytes:bounds:format:colorSpace: is said to
// leak memory on iOS 9.0 due to a system bug, hence this Core Graphics route.
// When capturing video frames with AVFoundation you often need to turn a frame into an image.
+ (CGImageRef)imageFromSampleBufferRef:(CMSampleBufferRef)sampleBufferRef
{
    @autoreleasepool { // optional autorelease pool
        // Get the pixel buffer backing the media sample
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBufferRef);
        // Lock the pixel buffer's base address
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        // Base address of the pixel buffer
        void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
        // Bytes per row of the pixel buffer
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        // Width and height of the pixel buffer
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        // Create a device-dependent RGB color space
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        // Reportedly GPUImage output only works with the gray color space:
        // CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        // Create a bitmap graphics context backed by the sample buffer's data
        CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        // Create a Quartz image from the pixels in the bitmap context
        CGImageRef quartzImage = CGBitmapContextCreateImage(context);
        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        // Release the context and color space
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        // To make a UIImage from the Quartz image:
        // UIImage *image = [UIImage imageWithCGImage:quartzImage];
        // The caller owns the returned image and must release it:
        // CGImageRelease(quartzImage);
        return quartzImage;
    }
}
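A quick usage sketch (the delegate context and previewImageView are assumptions; BuilderVideo is the class name the Swift snippet below refers to; note the explicit CGImageRelease, since the method returns a +1 Core Graphics object):

- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CGImageRef cgImage = [BuilderVideo imageFromSampleBufferRef:sampleBuffer];
    UIImage *frame = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage); // balance CGBitmapContextCreateImage's +1 retain
    dispatch_async(dispatch_get_main_queue(), ^{
        self.previewImageView.image = frame; // previewImageView is illustrative
    });
}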

Swift version: getting the buffer

/// Camera
var videoCamera: GPUImageVideoCamera!
var movieWriter: GPUImageMovieWriter!
var filterGroup: GPUImageFilterGroup!
var gpuImageView: GPUImageView!
var recSize = CGSize(width: 640, height: 480)

/// Set up a test GPUImage capture pipeline
func createGPUImage() {
    filterGroup = GPUImageFilterGroup()
    videoCamera = GPUImageVideoCamera(sessionPreset: AVCaptureSessionPreset640x480, cameraPosition: AVCaptureDevicePosition.back)
    videoCamera.outputImageOrientation = .portrait
    videoCamera.addAudioInputsAndOutputs() // enable audio capture
    videoCamera.horizontallyMirrorRearFacingCamera = false
    videoCamera.horizontallyMirrorFrontFacingCamera = false // mirroring policy
    let filter = GPUImageSepiaFilter()
    videoCamera.addTarget(filterGroup)
    filterGroup.addFilter(filter)
    videoCamera.delegate = self
    gpuImageView = GPUImageView(frame: self.view.bounds)
    self.view.addSubview(gpuImageView)
    // initialFilters and terminalFilter must be set, otherwise the view stays blank
    filterGroup.initialFilters = [filter]
    filterGroup.terminalFilter = filter
    filterGroup.addTarget(gpuImageView)
    let dataoutput = GPUImageRawDataOutput(imageSize: recSize, resultsInBGRAFormat: true)
    filterGroup.addTarget(dataoutput)
    weak var weakDataoutput = dataoutput
    weak var weakSelf = self
    dataoutput?.newFrameAvailableBlock = {
        if let weakDataoutput = weakDataoutput {
            weakDataoutput.lockFramebufferForReading()
            let outbytes = weakDataoutput.rawBytesForImage
            let bytesPerRow = weakDataoutput.bytesPerRowInOutput()
            var pixelBuffer: CVPixelBuffer? = nil
            CVPixelBufferCreateWithBytes(kCFAllocatorDefault, Int(weakSelf!.recSize.width), Int(weakSelf!.recSize.height), kCVPixelFormatType_32BGRA, outbytes!, Int(bytesPerRow), nil, nil, nil, &pixelBuffer)
            // When live streaming, this pixelBuffer is what you would send
            weakDataoutput.unlockFramebufferAfterReading()
            // Swift does not need CFRelease(pixelBuffer); CVPixelBuffer is memory-managed here
        }
    }
    videoCamera.startCapture()
    // DispatchQueue.main.async {
    //     self.startREC()
    //     DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 5) {
    //         self.stopREC()
    //     }
    // }
}

func startREC() {
    let path = NSSearchPathForDirectoriesInDomains(.documentDirectory, FileManager.SearchPathDomainMask.userDomainMask, true)[0] + "/test.mov"
    unlink(path.cString(using: String.Encoding.utf8))
    movieWriter = GPUImageMovieWriter(movieURL: URL(fileURLWithPath: path), size: recSize)
    movieWriter.encodingLiveVideo = true
    videoCamera.audioEncodingTarget = movieWriter
    filterGroup.addTarget(movieWriter)
    movieWriter.startRecording()
}

func stopREC() {
    let path = NSSearchPathForDirectoriesInDomains(.documentDirectory, FileManager.SearchPathDomainMask.userDomainMask, true)[0] + "/test.mov"
    filterGroup.removeAllTargets()
    videoCamera.audioEncodingTarget = nil
    movieWriter.finishRecording()
    UISaveVideoAtPathToSavedPhotosAlbum(path, self, nil, nil)
}

Swift 3 conversion: CMSampleBuffer -> UIImage

func willOutputSampleBuffer(_ sampleBuffer: CMSampleBuffer!) {
    if sampleBuffer != nil {
        // let cimage = BuilderVideo.image(fromSampleBufferRef: sampleBuffer)
        let myPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        let myCIimage = CIImage(cvPixelBuffer: myPixelBuffer!)
        let videoImage = UIImage(ciImage: myCIimage)
        DispatchQueue.main.async {
            self.testImageView.image = videoImage
        }
    }
}


Reference: http://stackoverflow.com/questions/41623186/cmsamplebuffer-from-avcapturevideodataoutput-unexpectedly-found-nil
Author

陈昭

Posted on

2017-05-04

Updated on

2021-12-27
