I'm building an iPhone app that uses AVFoundation, specifically AVCapture, to capture video with the iPhone camera.
I need to overlay a custom image on top of the video feed that gets included in the recording.
So far I have the AVCapture session set up: I can display the feed, access the frames, save a frame as a UIImage, and draw the overlay image on top of it. I then convert this new UIImage into a CVPixelBufferRef. To double-check that the pixel buffer is working, I converted it back to a UIImage, and it still displays the image fine.
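For context, a typical implementation of an imageFromSampleBuffer: helper follows Apple's documented Core Video pattern (Technical Q&A QA1702). A minimal sketch, assuming the video data output is configured to deliver BGRA frames (this is not necessarily the exact code used here):

// Sketch of a typical CMSampleBufferRef -> UIImage conversion,
// following Apple's documented pattern (Technical Q&A QA1702).
// Assumes the output is configured for kCVPixelFormatType_32BGRA.
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a bitmap context that matches the BGRA pixel layout
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}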
The trouble starts when I try to convert the CVPixelBufferRef into a CMSampleBufferRef so I can append it to my capture session's AVAssetWriterInput. The CMSampleBufferRef always comes back NULL when I try to create it.
Here is the - (void)captureOutput function:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    UIImage *botImage = [self imageFromSampleBuffer:sampleBuffer];
    UIImage *wheel = [self imageFromView:wheelView];
    UIImage *finalImage = [self overlaidImage:botImage :wheel];
    //[previewImage setImage:finalImage]; <- works -- the image is being merged into one UIImage

    CVPixelBufferRef pixelBuffer = NULL;
    CGImageRef cgImage = CGImageCreateCopy(finalImage.CGImage);
    CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    int status = CVPixelBufferCreateWithBytes(NULL,
                                              self.view.bounds.size.width,
                                              self.view.bounds.size.height,
                                              kCVPixelFormatType_32BGRA,
                                              (void *)CFDataGetBytePtr(imageData),
                                              CGImageGetBytesPerRow(cgImage),
                                              NULL, NULL, NULL,
                                              &pixelBuffer);
    if (status == 0) {
        OSStatus result = 0;

        CMVideoFormatDescriptionRef videoInfo = NULL;
        result = CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixelBuffer, &videoInfo);
        NSParameterAssert(result == 0 && videoInfo != NULL);

        CMSampleBufferRef myBuffer = NULL;
        CMSampleTimingInfo timing = kCMTimingInfoInvalid;
        result = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
                                                    pixelBuffer, true, NULL, NULL,
                                                    videoInfo, &timing, &myBuffer);
        NSParameterAssert(result == 0 && myBuffer != NULL); // always NULL :S

        NSLog(@"Trying to append");

        if (!CMSampleBufferDataIsReady(myBuffer)) {
            NSLog(@"sampleBuffer data is not ready");
            return;
        }

        if (![assetWriterInput isReadyForMoreMediaData]) {
            NSLog(@"Not ready for data :(");
            return;
        }

        if (![assetWriterInput appendSampleBuffer:myBuffer]) {
            NSLog(@"Failed to append pixel buffer");
        }
    }
}
Another solution I keep hearing about is to use an AVAssetWriterInputPixelBufferAdaptor, which avoids the messy CMSampleBufferRef wrapping. However, I've searched Stack Overflow, the Apple developer forums, and the documentation, and I can't find a clear description or example of how to set it up or use it. If anyone has a working example, please show me, or help me sort out the problem above. I've been working on this non-stop for a week and am at my wit's end.
Let me know if you need any other information.

Thanks in advance,

Michael
Solution

You need an AVAssetWriterInputPixelBufferAdaptor. Here is the code to create it:

// Create the pixel buffer attributes dictionary for the adaptor
NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA],
    (NSString *)kCVPixelBufferPixelFormatTypeKey, nil];

// Create the pixel buffer adaptor
m_pixelsBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
    initWithAssetWriterInput:assetWriterInput
    sourcePixelBufferAttributes:bufferAttributes];
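For completeness, the snippet above assumes assetWriterInput is already attached to a configured AVAssetWriter. A minimal sketch of that setup, where outputURL and the 640x480 H.264 settings are illustrative assumptions rather than anything from the question:

NSError *error = nil;

// Create the asset writer for a QuickTime movie at outputURL (hypothetical)
AVAssetWriter *assetWriter = [[AVAssetWriter alloc] initWithURL:outputURL
                                                       fileType:AVFileTypeQuickTimeMovie
                                                          error:&error];

// Illustrative H.264 output settings; match these to your real frame size
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    AVVideoCodecH264, AVVideoCodecKey,
    [NSNumber numberWithInt:640], AVVideoWidthKey,
    [NSNumber numberWithInt:480], AVVideoHeightKey, nil];

// Create the writer input and attach it to the writer
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput
    assetWriterInputWithMediaType:AVMediaTypeVideo
                   outputSettings:videoSettings];
assetWriterInput.expectsMediaDataInRealTime = YES;
[assetWriter addInput:assetWriterInput];

// Before appending any buffers, start the writer and its session:
// [assetWriter startWriting];
// [assetWriter startSessionAtSourceTime:kCMTimeZero];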
And the code to use it:
// If the input is ready to accept more media data
if (m_pixelsBufferAdaptor.assetWriterInput.readyForMoreMediaData) {
    // Create a pixel buffer from the adaptor's pixel buffer pool
    CVPixelBufferRef pixelsBuffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer(NULL, m_pixelsBufferAdaptor.pixelBufferPool, &pixelsBuffer);

    // Lock the pixel buffer's base address
    CVPixelBufferLockBaseAddress(pixelsBuffer, 0);

    // Write your pixel data into the buffer (in your case, fill it with your finalImage data)
    [self yourFunctionToPutDataInPixelBuffer:CVPixelBufferGetBaseAddress(pixelsBuffer)];

    // Unlock the pixel buffer's base address
    CVPixelBufferUnlockBaseAddress(pixelsBuffer, 0);

    // Append the pixel buffer. Compute currentFrameTime however you need; the
    // simplest approach is to start at zero and add one frame duration (the
    // inverse of your frame rate) each time you write a frame.
    [m_pixelsBufferAdaptor appendPixelBuffer:pixelsBuffer withPresentationTime:currentFrameTime];

    // Release the pixel buffer
    CVPixelBufferRelease(pixelsBuffer);
}
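In this question's case, the fill step could simply render the composited UIImage into the pixel buffer with Core Graphics. A hedged sketch: fillPixelBuffer:withImage: is a hypothetical helper, and since it locks the buffer itself, it would replace the lock/fill/unlock lines above rather than slot into them:

// Hypothetical helper: draw a UIImage into a BGRA CVPixelBufferRef.
- (void)fillPixelBuffer:(CVPixelBufferRef)pixelsBuffer withImage:(UIImage *)finalImage
{
    CVPixelBufferLockBaseAddress(pixelsBuffer, 0);

    // Build a bitmap context over the pixel buffer's memory, matching the
    // BGRA layout requested in the adaptor's sourcePixelBufferAttributes
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelsBuffer),
                                                 CVPixelBufferGetWidth(pixelsBuffer),
                                                 CVPixelBufferGetHeight(pixelsBuffer),
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pixelsBuffer),
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Draw the composited image. Note that Core Graphics uses a bottom-left
    // origin, so the frame may come out vertically flipped depending on how
    // you handle orientation.
    CGContextDrawImage(context,
                       CGRectMake(0, 0,
                                  CVPixelBufferGetWidth(pixelsBuffer),
                                  CVPixelBufferGetHeight(pixelsBuffer)),
                       finalImage.CGImage);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelsBuffer, 0);
}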
And don't forget to release your pixelsBufferAdaptor.