Video capture in iOS using MonoTouch


Overview

I have code that creates, configures, and starts a video capture session in Objective-C without any problem. I ported the sample to C# and MonoTouch 4.0.3 and ran into some issues. Here is the code:

```csharp
void Initialize ()
{
    // Create notifier delegate class
    captureVideoDelegate = new CaptureVideoDelegate(this);

    // Create capture session
    captureSession = new AVCaptureSession();
    captureSession.SessionPreset = AVCaptureSession.Preset640x480;

    // Create capture device
    captureDevice = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Video);

    // Create capture device input
    NSError error;
    captureDeviceInput = new AVCaptureDeviceInput(captureDevice, out error);
    captureSession.AddInput(captureDeviceInput);

    // Create capture device output
    captureVideoOutput = new AVCaptureVideoDataOutput();
    captureSession.AddOutput(captureVideoOutput);
    captureVideoOutput.VideoSettings.PixelFormat = CVPixelFormatType.CV32BGRA;
    captureVideoOutput.MinFrameDuration = new CMTime(1, 30);

    //
    // ISSUE 1
    // In the original Objective-C code I was creating a dispatch_queue_t object, passing it to
    // the setSampleBufferDelegate:queue: message, and it worked. Here I could not find an
    // equivalent to the queue mechanism. (Also not sure if the delegate should be used like this.)
    //
    // captureVideoOutput.SetSampleBufferDelegateQueue(captureVideoDelegate, ???????);

    // Create preview layer
    previewLayer = AVCaptureVideoPreviewLayer.FromSession(captureSession);
    previewLayer.Orientation = AVCaptureVideoOrientation.LandscapeRight;

    //
    // ISSUE 2:
    // Didn't find any VideoGravity-related enumeration in MonoTouch (not sure if a string will work)
    //
    previewLayer.VideoGravity = "AVLayerVideoGravityResizeAspectFill";
    previewLayer.Frame = new RectangleF(0, 0, 1024, 768);
    this.View.Layer.AddSublayer(previewLayer);

    // Start capture session
    captureSession.StartRunning();
}
#endregion

public class CaptureVideoDelegate : AVCaptureVideoDataOutputSampleBufferDelegate
{
    private VirtualDeckViewController mainViewController;

    public CaptureVideoDelegate(VirtualDeckViewController viewController)
    {
        mainViewController = viewController;
    }

    public override void DidOutputSampleBuffer (AVCaptureOutput captureOutput, CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
    {
        // TODO: Implement - see: http://go-mono.com/docs/index.aspx?link=T%3aMonotouch.Foundation.modelattribute
    }
}
```

Issue 1:
I am not sure how to correctly use the delegate in the SetSampleBufferDelegateQueue method. I also could not find an equivalent of the dispatch_queue_t object, which works fine in Objective-C, to pass in as the second parameter.
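For what it's worth, MonoTouch wraps dispatch_queue_t as MonoTouch.CoreFoundation.DispatchQueue, and the accepted answer below pairs it with SetSampleBufferDelegateAndQueue. A minimal sketch of how the commented-out line might be completed (the queue label is arbitrary, and the field names follow the question's code):

```csharp
// A minimal sketch, assuming MonoTouch.CoreFoundation.DispatchQueue is the
// managed wrapper for dispatch_queue_t (the queue label below is arbitrary):
DispatchQueue queue = new DispatchQueue ("com.example.captureQueue");

// Hands the delegate to the output together with the GCD queue its
// DidOutputSampleBuffer callbacks will be invoked on:
captureVideoOutput.SetSampleBufferDelegateAndQueue (captureVideoDelegate, queue);
```

Keeping a reference to the queue (e.g. in a field) avoids it being collected while the session is running.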

Issue 2:
I did not find any VideoGravity enumeration in the MonoTouch libraries, and I am not sure whether passing a string with the constant's value will work.

I have looked everywhere for clues on how to solve this, but found no clear sample. Any sample or information on how to do the same in MonoTouch would be highly appreciated.

Thanks a lot.

Solution

Here is my code. Make good use of it. I just trimmed it down to the important parts: all the initialization is there, as well as the reading of the sample output buffer.

I then have code that processes the CVImageBuffer in a linked custom ObjC library. If you need to process it in MonoTouch, you need to go the extra mile and convert it to a CGImage or UIImage. There is no function for this in MonoTouch (AFAIK), so you have to bind it yourself from plain ObjC. A sample in ObjC is here: how to convert a CVImageBufferRef to UIImage
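As a starting point before resorting to an ObjC binding, the conversion can also be attempted in managed code: lock the pixel buffer, wrap its base address in a CGBitmapContext, and produce a UIImage from it. The sketch below assumes the buffer is CV32BGRA (as configured on the video output) and uses MonoTouch's CoreVideo/CoreGraphics wrappers; the exact member names should be verified against the MonoTouch version in use:

```csharp
// Hedged sketch: convert a BGRA CVImageBuffer to a UIImage in managed code.
// Assumes the sample buffer carries CV32BGRA pixels.
UIImage ImageFromSampleBuffer (CMSampleBuffer sampleBuffer)
{
    using (var pixelBuffer = sampleBuffer.GetImageBuffer () as CVPixelBuffer)
    {
        // Lock the base address so the raw pixel data can be read.
        pixelBuffer.Lock (CVOptionFlags.None);
        try
        {
            using (var colorSpace = CGColorSpace.CreateDeviceRGB ())
            using (var context = new CGBitmapContext (
                pixelBuffer.BaseAddress,
                pixelBuffer.Width,
                pixelBuffer.Height,
                8,                          // bits per component
                pixelBuffer.BytesPerRow,
                colorSpace,
                CGImageAlphaInfo.PremultipliedFirst))
            using (var cgImage = context.ToImage ())
            {
                return UIImage.FromImage (cgImage);
            }
        }
        finally
        {
            pixelBuffer.Unlock (CVOptionFlags.None);
        }
    }
}
```

Note that DidOutputSampleBuffer runs on the GCD queue you registered, so any UIImage handed to the UI should be dispatched back to the main thread.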

```csharp
public void InitCapture ()
{
    try
    {
        // Setup the input
        NSError error = new NSError ();
        captureInput = new AVCaptureDeviceInput (AVCaptureDevice.DefaultDeviceWithMediaType (AVMediaType.Video), out error);

        // Setup the output
        captureOutput = new AVCaptureVideoDataOutput ();
        captureOutput.AlwaysDiscardsLateVideoFrames = true;
        captureOutput.SetSampleBufferDelegateAndQueue (avBufferDelegate, dispatchQueue);
        captureOutput.MinFrameDuration = new CMTime (1, 10);

        // Set the video output to store frames in BGRA (compatible across devices)
        captureOutput.VideoSettings = new AVVideoSettings (CVPixelFormatType.CV32BGRA);

        // Create a capture session
        captureSession = new AVCaptureSession ();
        captureSession.SessionPreset = AVCaptureSession.PresetMedium;
        captureSession.AddInput (captureInput);
        captureSession.AddOutput (captureOutput);

        // Setup the preview layer
        prevLayer = new AVCaptureVideoPreviewLayer (captureSession);
        prevLayer.Frame = liveView.Bounds;
        prevLayer.VideoGravity = "AVLayerVideoGravityResize"; // image may be slightly distorted, but red bar position will be accurate
        liveView.Layer.AddSublayer (prevLayer);

        StartLiveDecoding ();
    }
    catch (Exception ex)
    {
        Console.WriteLine (ex.ToString ());
    }
}

public void DidOutputSampleBuffer (AVCaptureOutput captureOutput, MonoTouch.CoreMedia.CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
{
    Console.WriteLine ("DidOutputSampleBuffer: enter");
    if (isScanning)
    {
        CVImageBuffer imageBuffer = sampleBuffer.GetImageBuffer ();
        Console.WriteLine ("DidOutputSampleBuffer: calling decode");
        // NSLog(@"got image w=%d h=%d bpr=%d", CVPixelBufferGetWidth(imageBuffer), CVPixelBufferGetHeight(imageBuffer), CVPixelBufferGetBytesPerRow(imageBuffer));
        // call the decoder
        DecodeImage (imageBuffer);
    }
    else
    {
        Console.WriteLine ("DidOutputSampleBuffer: not scanning");
    }
    Console.WriteLine ("DidOutputSampleBuffer: quit");
}
```
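The answer's snippet references several fields whose declarations were trimmed out (avBufferDelegate, dispatchQueue, isScanning, liveView, and the capture objects). A hypothetical sketch of how they might be declared and wired up — the names follow the code above, but the delegate class name and setup method are assumptions, not part of the original answer:

```csharp
// Hypothetical supporting declarations for the InitCapture code above.
AVCaptureSession captureSession;
AVCaptureDeviceInput captureInput;
AVCaptureVideoDataOutput captureOutput;
AVCaptureVideoPreviewLayer prevLayer;
AVCaptureVideoDataOutputSampleBufferDelegate avBufferDelegate;  // forwards to DidOutputSampleBuffer
MonoTouch.CoreFoundation.DispatchQueue dispatchQueue;
UIView liveView;          // the view hosting the camera preview
bool isScanning;

void SetupCaptureFields ()
{
    // A serial GCD queue; sample buffers are delivered to the delegate on it.
    dispatchQueue = new MonoTouch.CoreFoundation.DispatchQueue ("captureQueue");
}
```

Keeping these as fields (rather than locals) matters: the session, output, and queue must stay alive for as long as frames are being delivered.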