ios – Why is my image not updating when I set it from the capture output delegate?

Overview: I want to do something very simple. I want to display the video layer full screen, and once a second update a UIImage with the CMSampleBufferRef captured at that moment. However, I am running into two different problems. The first is that changing:

[connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
[connection setVideoMinFrameDuration:CMTimeMake(1, 1)];

also modifies the video preview layer. I thought it would only change the rate at which AV Foundation delivers frames to the delegate, but it appears to affect the entire session (and it is quite visible). So this makes my preview video update only once per second as well. I suppose I could omit those lines and simply add a timer in the delegate that forwards the CMSampleBufferRef to another method every second to process it, but I don't know whether that is the right approach.

My second problem is that the UIImageView is not updating, or sometimes it updates only once and then never changes again. I am using this method to update it:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    //NSData *jpeg = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    [imageView setImage:image];
    // Add your code here that uses the image.
    NSLog(@"update");
}

I took this from Apple's examples. By watching the "update" log messages I verified that the method is correctly called once per second, but the image does not change at all. Also, is sampleBuffer destroyed automatically, or do I have to release it?

Here are the other two important methods.
viewDidLoad:

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    session = [[AVCaptureSession alloc] init];

    // Add inputs and outputs.
    if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        session.sessionPreset = AVCaptureSessionPreset640x480;
    }
    else {
        // Handle the failure.
        NSLog(@"Cannot set session preset to 640x480");
    }

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
        NSLog(@"Could not create input: %@", error);
    }
    if ([session canAddInput:input]) {
        [session addInput:input];
    }
    else {
        // Handle the failure.
        NSLog(@"Could not add input");
    }

    // DATA OUTPUT
    dataOutput = [[AVCaptureVideoDataOutput alloc] init];
    if ([session canAddOutput:dataOutput]) {
        [session addOutput:dataOutput];

        dataOutput.videoSettings =
            [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                        forKey:(id)kCVPixelBufferPixelFormatTypeKey];

        //dataOutput.minFrameDuration = CMTimeMake(1, 15);
        //dataOutput.minFrameDuration = CMTimeMake(1, 1);

        AVCaptureConnection *connection = [dataOutput connectionWithMediaType:AVMediaTypeVideo];
        [connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
        [connection setVideoMinFrameDuration:CMTimeMake(1, 1)];
    }
    else {
        // Handle the failure.
        NSLog(@"Could not add output");
    }
    // DATA OUTPUT END

    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [dataOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    [captureVideoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
    [captureVideoPreviewLayer setBounds:videolayer.layer.bounds];
    [captureVideoPreviewLayer setPosition:videolayer.layer.position];
    [videolayer.layer addSublayer:captureVideoPreviewLayer];

    [session startRunning];
}
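As an aside, not from the original question: the commented-out dataOutput.minFrameDuration lines above use the older AVCaptureVideoDataOutput API, which was deprecated in favor of the frame-duration setters on AVCaptureConnection used here. A sketch, assuming the iOS 5-era support flags on the connection, of applying the durations defensively:

// Sketch only: check the (assumed) support flags before asking the
// connection for one-frame-per-second delivery.
AVCaptureConnection *connection = [dataOutput connectionWithMediaType:AVMediaTypeVideo];
if (connection.videoMinFrameDurationSupported) {
    [connection setVideoMinFrameDuration:CMTimeMake(1, 1)];
}
if (connection.videoMaxFrameDurationSupported) {
    [connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
}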

Converting the CMSampleBufferRef to a UIImage:

- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
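One point worth noting: this conversion produces correct colors only because viewDidLoad asks the data output for kCVPixelFormatType_32BGRA frames, which is the layout the kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst flags describe. As a small sketch (a hypothetical helper, not in the original post), you could verify the format before building the context:

// Hypothetical helper: returns YES when the buffer really is the 32BGRA
// layout that imageFromSampleBuffer's bitmap-context flags expect.
static BOOL bufferIsBGRA(CVImageBufferRef imageBuffer) {
    return CVPixelBufferGetPixelFormatType(imageBuffer) == kCVPixelFormatType_32BGRA;
}

Calling this at the top of imageFromSampleBuffer and returning nil on a mismatch would catch a misconfigured output early.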

Thanks in advance for any help you can give me.

Solution: from the documentation of the captureOutput:didOutputSampleBuffer:fromConnection: method:

This method is called on the dispatch queue specified by the output’s sampleBufferCallbackQueue property.

This means that if you need to update the UI with the buffer from this method, you have to do it on the main queue, like this:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        [imageView setImage:image];
    });
}
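As an aside regarding the release question from above (not part of the original answer): under Core Foundation ownership rules the delegate does not own sampleBuffer, so there is nothing to release when the buffer is only used inside the callback. Conversely, if you ever needed the buffer itself after the method returns, you would have to retain it explicitly, sketched here:

// Sketch: keep the sample buffer alive past the callback with an explicit
// retain/release pair; without this, AVFoundation may reuse the buffer.
CFRetain(sampleBuffer);
dispatch_async(dispatch_get_main_queue(), ^{
    // ... use sampleBuffer here ...
    CFRelease(sampleBuffer);
});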

Edit: Regarding your first question: I'm not sure I understand it correctly, but if you only want to update the image once per second, you can also keep a "lastImageUpdateTime" value to compare against inside the didOutputSampleBuffer: method; if enough time has passed, update the image there, and otherwise ignore the sample buffer (a sketch follows below).
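A minimal sketch of that suggestion, assuming a hypothetical NSTimeInterval instance variable named lastImageUpdateTime (initialized to 0) and using CACurrentMediaTime() from QuartzCore as the clock:

#import <QuartzCore/QuartzCore.h> // for CACurrentMediaTime()

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSTimeInterval now = CACurrentMediaTime();
    if (now - lastImageUpdateTime < 1.0) {
        return; // less than a second since the last update: skip this buffer
    }
    lastImageUpdateTime = now;

    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        [imageView setImage:image];
    });
}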
