I am trying to create an iPad app called CloudWriter. The concept of the app is drawing the shapes you see in clouds. After downloading the app and launching CloudWriter, the user is presented with a live video background (from the rear camera) with an OpenGL drawing layer on top of it. A user can open the app, point the iPad at clouds in the sky, and draw what they see on the display.
A major feature of the application is for the user to record a video screen capture of what happens on the display during a session. The live video feed and the "drawing" view will become a flattened (merged) video.
Some assumptions and background info about how this currently works:
> Using Apple's AVCamCaptureManager (part of the AVCam sample project) as a foundation for much of the camera-related code.
> Starting the AVCamCapture session with AVCaptureSessionPresetMedium as the preset.
> Begin outputting the camera feed as a background via a videoPreviewLayer.
> Overlaying the live videoPreviewLayer with an OpenGL view that allows "drawing" (finger-painting style). The "drawing" view's background is [UIColor clearColor].
At this point, the idea is that the user can point the iPad 3 camera at some clouds in the sky and draw the shapes they see. This functionality works flawlessly. I start running into performance issues when I attempt to make a "flattened" video screen capture of the user's session. The resulting "flattened" video would have the camera input overlaid with the user's drawing in real time.
A good example of an app with functionality similar to what we're looking for is Board Cam, available in the App Store.
To start the process, a "record" button is visible in the view at all times. When the user taps the record button, the expectation is that the session will be recorded as a "flattened" video screen capture until the record button is tapped again.
When the user taps the "record" button, the following happens in code:
> The AVCaptureSessionPreset is changed from AVCaptureSessionPresetMedium to AVCaptureSessionPresetPhoto, allowing access to
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
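For context, the preset switch and data-output hookup described above might look something like the following. This is only an illustrative sketch: it assumes an existing AVCaptureSession named `session`, and the queue name is made up.

```objc
// Sketch: switch presets and attach a video data output so the delegate
// method above starts receiving sample buffers. Assumes `session` exists.
[session beginConfiguration];
session.sessionPreset = AVCaptureSessionPresetPhoto;

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// BGRA keeps the buffers directly usable by CGBitmapContextCreate
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                   @(kCVPixelFormatType_32BGRA) };
[videoOutput setSampleBufferDelegate:self
                               queue:dispatch_queue_create("com.example.videoqueue", NULL)];
if ([session canAddOutput:videoOutput]) {
    [session addOutput:videoOutput];
}
[session commitConfiguration];
```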
> isRecording is set to YES.
> didOutputSampleBuffer starts receiving data and creates an image from the current video buffer data. It does this by calling
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
> self.currentImage is set to the result.
> The application's root view controller begins overriding drawRect to create a flattened image, used as an individual frame in the final video.
> That frame is written to the flattened video.
To create a flattened image to use as an individual frame, in the root view controller's drawRect function, we grab the last frame received by AVCamCaptureManager's didOutputSampleBuffer code. That is below:
- (void)drawRect:(CGRect)rect
{
    NSDate *start = [NSDate date];
    CGContextRef context = [self createBitmapContextOfSize:self.frame.size];

    // not sure why this is necessary...image renders upside-down and mirrored
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, self.frame.size.height);
    CGContextConcatCTM(context, flipVertical);

    if (isRecording)
        [[self.layer presentationLayer] renderInContext:context];

    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *background = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    UIImage *bottomImage = background;

    if (((AVCamCaptureManager *)self.captureManager).currentImage != nil && isVideoBGActive) {
        UIImage *image = [((AVCamCaptureManager *)self.mainContentScreen.captureManager).currentImage retain];
        CGSize newSize = background.size;
        UIGraphicsBeginImageContext(newSize);

        // Use existing opacity as is
        if (isRecording) {
            if ([self.mainContentScreen isVideoBGActive] && _recording) {
                [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
            }
            // Apply supplied opacity
            [bottomImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)
                          blendMode:kCGBlendModeNormal
                              alpha:1.0];
        }

        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        self.currentScreen = newImage;
        [image release];
    }

    if (isRecording) {
        float millisElapsed = [[NSDate date] timeIntervalSinceDate:startedAt] * 1000.0;
        [self writeVideoFrameAtTime:CMTimeMake((int)millisElapsed, 1000)];
    }

    float processingSeconds = [[NSDate date] timeIntervalSinceDate:start];
    float delayRemaining = (1.0 / self.frameRate) - processingSeconds;

    CGContextRelease(context);

    // redraw at the specified framerate
    [self performSelector:@selector(setNeedsDisplay)
               withObject:nil
               afterDelay:delayRemaining > 0.0 ? delayRemaining : 0.01];
}
createBitmapContextOfSize is below:
- (CGContextRef)createBitmapContextOfSize:(CGSize)size
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace = nil;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);

    colorSpace = CGColorSpaceCreateDeviceRGB();

    if (bitmapData != NULL) {
        free(bitmapData);
    }
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    context = CGBitmapContextCreate(bitmapData,
                                    size.width,
                                    size.height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    CGContextSetAllowsAntialiasing(context, NO);
    if (context == NULL) {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    //CGAffineTransform transform = CGAffineTransformIdentity;
    //transform = CGAffineTransformScale(transform, size.width * .25, size.height * .25);
    //CGAffineTransformScale(transform, 1024, 768);

    CGColorSpaceRelease(colorSpace);
    return context;
}
– (void)captureOutput:didOutputSampleBuffer:fromConnection:
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage from the sample buffer data
    [self imageFromSampleBuffer:sampleBuffer];
}
– (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer is below:
// Create a UIImage from sample buffer data - modified not to return a UIImage *,
// rather store it in self.currentImage
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // unlock the memory, do other stuff, but don't forget:
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // uint8_t *tmp = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    int bytes = CVPixelBufferGetBytesPerRow(imageBuffer);
    // determine number of bytes from height * bytes per row
    //void *baseAddress = malloc(bytes);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    uint8_t *baseAddress = malloc(bytes * height);
    memcpy(baseAddress, CVPixelBufferGetBaseAddress(imageBuffer), bytes * height);
    size_t width = CVPixelBufferGetWidth(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytes, colorSpace,
                                                 kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
    // CGContextScaleCTM(context, 0.25, 0.25); //scale down to size

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(baseAddress);

    self.currentImage = [UIImage imageWithCGImage:quartzImage scale:0.25 orientation:UIImageOrientationUp];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return nil;
}
Finally, I write this to disk using writeVideoFrameAtTime:CMTimeMake, with the code below:
- (void)writeVideoFrameAtTime:(CMTime)time
{
    if (![videoWriterInput isReadyForMoreMediaData]) {
        NSLog(@"Not ready for video data");
    }
    else {
        @synchronized (self) {
            UIImage *newFrame = [self.currentScreen retain];
            CVPixelBufferRef pixelBuffer = NULL;
            CGImageRef cgImage = CGImageCreateCopy([newFrame CGImage]);
            CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));

            if (image == nil) {
                [newFrame release];
                CVPixelBufferRelease(pixelBuffer);
                CGImageRelease(cgImage);
                return;
            }

            int status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                            avAdaptor.pixelBufferPool,
                                                            &pixelBuffer);
            if (status != 0) {
                //Could not get a buffer from the pool
                NSLog(@"Error creating pixel buffer: status=%d", status);
            }

            // set image data into pixel buffer
            CVPixelBufferLockBaseAddress(pixelBuffer, 0);
            uint8_t *destPixels = CVPixelBufferGetBaseAddress(pixelBuffer);
            CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels);
            //XXX: will work if the pixel buffer is contiguous and has the same bytesPerRow as the input data

            if (status == 0) {
                BOOL success = [avAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:time];
                if (!success)
                    NSLog(@"Warning: Unable to write buffer to video");
            }

            //clean up
            [newFrame release];
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
            CVPixelBufferRelease(pixelBuffer);
            CFRelease(image);
            CGImageRelease(cgImage);
        }
    }
}
Once isRecording is set to YES, the performance on an iPad 3 goes from about 20 FPS to maybe 5 FPS. Using Instruments, I am able to see that the following chunk of code (from drawRect:) is what causes the performance to drop to unusable levels.
if (_recording) {
    if ([self.mainContentScreen isVideoBGActive] && _recording) {
        [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    }
    // Apply supplied opacity
    [bottomImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)
                  blendMode:kCGBlendModeNormal
                      alpha:1.0];
}
It's my understanding that, because I'm capturing the full screen, we lose all the benefits that "drawInRect" is supposed to give. Specifically, I'm talking about faster redraws, because in theory we would only be updating a small portion of the display (the CGRect passed in). Again, capturing full screen, I'm not sure drawInRect can provide nearly as much benefit.
To improve performance, I'm thinking that if I were to scale down the image that imageFromSampleBuffer provides, along with the current context of the drawing view, I would see an increase in frame rate. Unfortunately, CoreGraphics.framework isn't something I've worked with in the past, so I don't know that I'll be able to effectively tune performance to an acceptable level.
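For reference, the downscaling idea could be sketched roughly like this. It is a hypothetical tweak to the drawRect flow above, untested against the actual project, and the 0.5 scale factor is an arbitrary example:

```objc
// Sketch: render the flattened frame at half resolution in each dimension,
// cutting the per-frame pixel count to a quarter. Illustrative only.
CGSize fullSize = self.frame.size;
CGSize scaledSize = CGSizeMake(fullSize.width * 0.5, fullSize.height * 0.5);

// Opaque context at scale factor 1.0 keeps the pixel count predictable
UIGraphicsBeginImageContextWithOptions(scaledSize, YES, 1.0);
CGContextRef context = UIGraphicsGetCurrentContext();

// Scaling the CTM makes renderInContext: draw the layer at half size
CGContextScaleCTM(context, 0.5, 0.5);
[[self.layer presentationLayer] renderInContext:context];

UIImage *scaledFrame = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

The video writer's dimensions would need to match the scaled size, since appendPixelBuffer: copies raw bytes with no rescaling.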
Do any Core Graphics gurus have input?
Also, ARC is turned off for some of the code, and the analyzer shows one leak, but I believe it to be a false positive.
Coming soon, CloudWriter™, where the sky's the limit!
Solution: If you want decent recording performance, you're going to need to avoid redrawing things using Core Graphics. Stick to pure OpenGL ES. You say that you already do your finger painting in OpenGL ES, so you should be able to render that into a texture. The live video feed can also be directed to a texture. From there, you can do an overlay blend of the two based on the alpha channel in your finger-painting texture.
This is pretty easy to do using OpenGL ES 2.0 shaders. In fact, my GPUImage open source framework can handle the video capture and blending portions of this (see the FilterShowcase sample application for an example of an image being overlaid on video), if you supply the rendered texture from your drawing code. You'll have to make sure that the painting uses OpenGL ES 2.0, not 1.1, and that it shares the same sharegroup as the GPUImage OpenGL ES context, but I show how to do that in the CubeExample application.
I also handle video recording for you in GPUImage, in a high-performance manner by using texture caches (on iOS 5.0).
You should be able to record 720p video (iPad 2) or 1080p video (iPad 3) at a solid 30 FPS by using something like my framework and keeping everything in OpenGL ES.
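A rough sketch of the GPUImage pipeline described above might look like the following. Treat this as an assumption-laden outline rather than working code: `paintingTexture` is a hypothetical texture handle exposed by the painting view, and the exact class names and wiring should be checked against the framework's headers.

```objc
// Sketch: blend live camera video with a painting texture and record the
// result. Assumes the painting view renders with OpenGL ES 2.0 in the same
// sharegroup as GPUImage, and exposes its render texture. Untested.
GPUImageVideoCamera *camera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
                                        cameraPosition:AVCaptureDevicePositionBack];

// Hypothetical: wrap the painting layer's OpenGL texture as a GPUImage source;
// each time the painting updates, it would need to push a frame downstream.
GPUImageTextureInput *paintingInput =
    [[GPUImageTextureInput alloc] initWithTexture:paintingTexture
                                             size:CGSizeMake(1280.0, 720.0)];

// Composite the painting over the video using its alpha channel
GPUImageAlphaBlendFilter *blend = [[GPUImageAlphaBlendFilter alloc] init];
blend.mix = 1.0;
[camera addTarget:blend];
[paintingInput addTarget:blend];

// Record the blended output to disk
NSURL *movieURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"session.m4v"]];
GPUImageMovieWriter *writer =
    [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL
                                             size:CGSizeMake(1280.0, 720.0)];
[blend addTarget:writer];

[camera startCameraCapture];
[writer startRecording];
```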