I convert the sample buffer to an imageRef variable, and if I turn that into a UIImage everything is fine.
But now I want to take that imageRef and change its color values pixel by pixel, in this example to the negative colors (I have to do more complex processing later, so I can't use CIFilters), and when I run the commented-out part it crashes with a bad access.
import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    let captureSession = AVCaptureSession()
    var previewLayer: AVCaptureVideoPreviewLayer?
    var captureDevice: AVCaptureDevice?

    @IBOutlet weak var cameraView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        captureSession.sessionPreset = AVCaptureSessionPresetMedium
        let devices = AVCaptureDevice.devices()

        for device in devices {
            if device.hasMediaType(AVMediaTypeVideo) && device.position == AVCaptureDevicePosition.Back {
                if let device = device as? AVCaptureDevice {
                    captureDevice = device
                    beginSession()
                    break
                }
            }
        }
    }

    func focusTo(value: Float) {
        if let device = captureDevice {
            if device.lockForConfiguration(nil) {
                device.setFocusModeLockedWithLensPosition(value) { (time) in }
                device.unlockForConfiguration()
            }
        }
    }

    override func touchesBegan(touches: NSSet!, withEvent event: UIEvent!) {
        var touchPercent = Float(touches.anyObject().locationInView(view).x / 320)
        focusTo(touchPercent)
    }

    override func touchesMoved(touches: NSSet!, withEvent event: UIEvent!) {
        var touchPercent = Float(touches.anyObject().locationInView(view).x / 320)
        focusTo(touchPercent)
    }

    func beginSession() {
        configureDevice()

        var error: NSError?
        captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &error))
        if error != nil {
            println("error: \(error?.localizedDescription)")
        }

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer?.frame = view.layer.frame
        //view.layer.addSublayer(previewLayer)

        let output = AVCaptureVideoDataOutput()
        let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
        output.setSampleBufferDelegate(self, queue: cameraQueue)
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]

        captureSession.addOutput(output)
        captureSession.startRunning()
    }

    func configureDevice() {
        if let device = captureDevice {
            device.lockForConfiguration(nil)
            device.focusMode = .Locked
            device.unlockForConfiguration()
        }
    }

    // MARK: - AVCaptureVideoDataOutputSampleBufferDelegate

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        CVPixelBufferLockBaseAddress(imageBuffer, 0)

        let baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
        let width = CVPixelBufferGetWidth(imageBuffer)
        let height = CVPixelBufferGetHeight(imageBuffer)
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        var bitmapInfo = CGBitmapInfo.fromRaw(CGImageAlphaInfo.PremultipliedFirst.toRaw())! | CGBitmapInfo.ByteOrder32Little
        let context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, bitmapInfo)
        let imageRef = CGBitmapContextCreateImage(context)

        CVPixelBufferUnlockBaseAddress(imageBuffer, 0)

        let data = CGDataProviderCopyData(CGImageGetDataProvider(imageRef)) as NSData
        let pixels = data.bytes

        var newPixels = UnsafeMutablePointer<UInt8>()

        //for index in stride(from: 0, to: data.length, by: 4) {
            /*newPixels[index] = 255 - pixels[index]
            newPixels[index + 1] = 255 - pixels[index + 1]
            newPixels[index + 2] = 255 - pixels[index + 2]
            newPixels[index + 3] = 255 - pixels[index + 3]*/
        //}

        bitmapInfo = CGImageGetBitmapInfo(imageRef)
        let provider = CGDataProviderCreateWithData(nil, newPixels, UInt(data.length), nil)

        let newImageRef = CGImageCreate(width, height, CGImageGetBitsPerComponent(imageRef), CGImageGetBitsPerPixel(imageRef), bytesPerRow, colorSpace, bitmapInfo, provider, nil, false, kCGRenderingIntentDefault)

        let image = UIImage(CGImage: newImageRef, scale: 1, orientation: .Right)
        dispatch_async(dispatch_get_main_queue()) {
            self.cameraView.image = image
        }
    }
}

Solution: You get the bad access in the pixel manipulation loop because the newPixels UnsafeMutablePointer is initialized with the built-in RawPointer and points to 0x0000 in memory; as far as I can tell it refers to an unallocated piece of memory that you have no permission to write data into.
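To make the failure mode concrete, here is a minimal sketch of the difference between the empty pointer initializer and an explicitly allocated buffer, using the Swift 1.x pointer API the rest of this post is written against; the alloc/dealloc calls are only illustrative, since the rewrite below drops raw pointers altogether.

let count = 640 * 480 * 4          // example buffer size for a BGRA frame

// Crashes: the no-argument initializer yields a null pointer (0x0000),
// so the first write through it is an EXC_BAD_ACCESS.
let badPixels = UnsafeMutablePointer<UInt8>()
// badPixels[0] = 255

// Works: allocate real storage before writing through the pointer
// (and remember to dealloc it when you are done).
let okPixels = UnsafeMutablePointer<UInt8>.alloc(count)
okPixels[0] = 255
okPixels.dealloc(count)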
For a longer explanation, and as a kind of "solution", here are the changes I made.
First, Swift has changed a bit since the OP posted this, so this line has to be modified to use rawValue:
//var bitmapInfo = CGBitmapInfo.fromRaw(CGImageAlphaInfo.PremultipliedFirst.toRaw())! | CGBitmapInfo.ByteOrder32Little
var bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue) | CGBitmapInfo.ByteOrder32Little
The pointers also needed some changes, so I'm posting all of the modified lines (I left the original lines in as comments).
let data = CGDataProviderCopyData(CGImageGetDataProvider(imageRef)) as NSData
//let pixels = data.bytes
let pixels = UnsafePointer<UInt8>(data.bytes)

let imageSize: Int = Int(width) * Int(height) * 4

//var newPixels = UnsafeMutablePointer<UInt8>()
var newPixelArray = [UInt8](count: imageSize, repeatedValue: 0)

for index in stride(from: 0, to: data.length, by: 4) {
    newPixelArray[index] = 255 - pixels[index]
    newPixelArray[index + 1] = 255 - pixels[index + 1]
    newPixelArray[index + 2] = 255 - pixels[index + 2]
    newPixelArray[index + 3] = pixels[index + 3]
}

bitmapInfo = CGImageGetBitmapInfo(imageRef)
//let provider = CGDataProviderCreateWithData(nil, newPixels, UInt(data.length), nil)
let provider = CGDataProviderCreateWithData(nil, &newPixelArray, UInt(data.length), nil)
Some explanation: all of the old pixel bytes have to be read as UInt8, so instead of changing pixels I cast it to an UnsafePointer<UInt8>. Then I created an array for the new pixels, dropped the newPixels pointer, and used the array directly. Finally a pointer to the new array is handed to the provider to create the image. I also removed the modification of the alpha bytes.
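Read together with the kCVPixelFormatType_32BGRA format set on the capture output earlier, the loop body can be read like this (the channel comments are my own annotation):

// Each 4-byte pixel in the 32BGRA buffer is laid out as B, G, R, A.
// Inverting only the first three channels produces the negative while
// leaving transparency untouched:
newPixelArray[index]     = 255 - pixels[index]      // blue
newPixelArray[index + 1] = 255 - pixels[index + 1]  // green
newPixelArray[index + 2] = 255 - pixels[index + 2]  // red
newPixelArray[index + 3] = pixels[index + 3]        // alpha, copied as-is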
After this I was able to get some negative images into my view, but with very poor performance: roughly one image every 10 seconds (on an iPhone 5, running through Xcode). It also took a long time before the first frame was presented in the image view.
The response got somewhat faster when I added captureSession.stopRunning() at the beginning of the didOutputSampleBuffer function and started the session again with captureSession.startRunning() after the processing was done. With that I got close to 1 fps.
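As a sketch, this is roughly where those two calls go; whether startRunning() is called inside the main-queue block after the image is displayed, or right after the pixel loop, is my own choice here:

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    // Stop capturing so new frames don't queue up behind the slow per-pixel work.
    captureSession.stopRunning()

    // ... build `image` from the sample buffer as shown above ...

    dispatch_async(dispatch_get_main_queue()) {
        self.cameraView.image = image
        // Resume capture only once this frame has been handed to the view.
        self.captureSession.startRunning()
    }
}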
Thanks for the interesting challenge!