iOS – Convert CMSampleBufferRef to UIImage using the YUV color space?


Overview: I am using AVCaptureVideoDataOutput and want to convert a CMSampleBufferRef to a UIImage. Many answers cover the same ground, such as "UIImage created from CMSampleBufferRef not displayed in UIImageView?" and "AVCaptureSession with multiple previews".

It works fine if I set the VideoDataOutput color space to BGRA (credit to this answer: CGBitmapContextCreateImage error):

NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[dataOutput setVideoSettings:videoSettings];

Without the video settings above, I receive the following error:

CGBitmapContextCreate: invalid data bytes/row: should be at least 2560 for 8 integer bits/component, 3 components, kCGImageAlphaPremultipliedFirst. <Error>: CGBitmapContextCreateImage: invalid context 0x0

Using BGRA is not a good choice, because of the overhead of converting from YUV (the default AVCaptureSession color space) to BGRA, as Brad and Codo explain in "How to get the Y component from CMSampleBuffer resulted from the AVCaptureSession?".

So, is there a way to convert a CMSampleBufferRef to a UIImage while staying in the YUV color space?

Solution: After a lot of research, and after reading Apple's documentation and Wikipedia, I figured out the answer, and it works perfectly for me. So, for future readers, here is the code to convert a CMSampleBufferRef to a UIImage when the video pixel format is set to kCVPixelFormatType_420YpCbCr8BiPlanarFullRange:
// Create a UIImage from sample buffer data.
// Works only if the pixel format is kCVPixelFormatType_420YpCbCr8BiPlanarFullRange.
- (UIImage *)imageFromSamplePlanerPixelBuffer:(CMSampleBufferRef)sampleBuffer {
    @autoreleasepool {
        // Get the CMSampleBuffer's Core Video image buffer for the media data
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        // Get the base address of plane 0 (the Y / luma plane)
        void *baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        // Get the number of bytes per row for the Y plane
        size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
        // Get the pixel buffer width and height
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        // Create a device-dependent gray color space
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        // Create a bitmap graphics context with the sample buffer data
        CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                     bytesPerRow, colorSpace, kCGImageAlphaNone);
        // Create a Quartz image from the pixel data in the bitmap graphics context
        CGImageRef quartzImage = CGBitmapContextCreateImage(context);
        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        // Free up the context and color space
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        // Create an image object from the Quartz image
        UIImage *image = [UIImage imageWithCGImage:quartzImage];
        // Release the Quartz image
        CGImageRelease(quartzImage);
        return image;
    }
}
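For the method above to produce a valid image, the data output must actually be delivering biplanar full-range YUV frames. A minimal sketch of that configuration, assuming dataOutput is the same AVCaptureVideoDataOutput as in the BGRA snippet earlier:

// Request the biplanar full-range YUV pixel format instead of BGRA,
// avoiding the YUV-to-BGRA conversion overhead mentioned in the question.
NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange];
[dataOutput setVideoSettings:[NSDictionary dictionaryWithObject:value forKey:key]];

With this setting, plane 0 of the pixel buffer is the Y (luma) plane, which is why the method can hand it straight to a grayscale CGBitmapContext.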
Original source: http://outofmemory.cn/web/1104300.html

(0)
打赏 微信扫一扫 微信扫一扫 支付宝扫一扫 支付宝扫一扫
上一篇 2022-05-28
下一篇 2022-05-28

发表评论

登录后才能评论

评论列表(0条)

保存