I'm recording a video with a square overlay that shows which part of the video will be cropped, like this:
Right now I'm filming a sheet of paper. Inside the square there are 4 lines, with about half a line's difference at the top and bottom. I then crop the video with the code I'll post below, but when I play back the result I see this (ignore the background and the green circle):
You can see there are more than four lines: I set it to crop a specific region, but the output includes more than that, even though the rectangle shown over the camera and the rectangle used for cropping are exactly the same.
So my question is: why isn't the cropped area the same size?
This is how I crop and display:
// this is the square on the camera
UIView *view = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height - 80)];
UIImageView *image = [[UIImageView alloc] init];
image.layer.borderColor = [[UIColor whiteColor] CGColor];
image.frame = CGRectMake(self.view.frame.size.width / 2 - 58, 100, 116, 116);
CALayer *imageLayer = image.layer;
[imageLayer setBorderWidth:1];
[view addSubview:image];
[picker setCameraOverlayView:view];

// this is the crop rect
CGRect rect = CGRectMake(self.view.frame.size.width / 2 - 58, 100, 116, 116);
[self applyCropToVideoWithAsset:asset
                         AtRect:rect
                    OnTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(asset.duration.value, 1))
                    ExportToUrl:exportUrl
          ExistingExportSession:exporter
                 WithCompletion:^(BOOL success, NSError *error, NSURL *videoUrl) {
    // here is the player
    AVPlayer *player = [AVPlayer playerWithURL:videoUrl];
    AVPlayerLayer *layer = [AVPlayerLayer playerLayerWithPlayer:player];
    layer.frame = CGRectMake(self.view.frame.size.width / 2 - 58, 100, 116, 116);
}];
Here is the code that performs the cropping:
- (UIImageOrientation)getVideoOrientationFromAsset:(AVAsset *)asset
{
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    CGSize size = [videoTrack naturalSize];
    CGAffineTransform txf = [videoTrack preferredTransform];

    if (size.width == txf.tx && size.height == txf.ty)
        return UIImageOrientationLeft;  // UIInterfaceOrientationLandscapeLeft
    else if (txf.tx == 0 && txf.ty == 0)
        return UIImageOrientationRight; // UIInterfaceOrientationLandscapeRight
    else if (txf.tx == 0 && txf.ty == size.width)
        return UIImageOrientationDown;  // UIInterfaceOrientationPortraitUpsideDown
    else
        return UIImageOrientationUp;    // UIInterfaceOrientationPortrait
}
Here is the rest of the cropping code:
- (AVAssetExportSession *)applyCropToVideoWithAsset:(AVAsset *)asset
                                             AtRect:(CGRect)cropRect
                                        OnTimeRange:(CMTimeRange)cropTimeRange
                                        ExportToUrl:(NSURL *)outputUrl
                              ExistingExportSession:(AVAssetExportSession *)exporter
                                     WithCompletion:(void (^)(BOOL success, NSError *error, NSURL *videoUrl))completion
{
    // create an AVAssetTrack with our asset
    AVAssetTrack *clipVideoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

    // create a video composition and preset some settings
    AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
    videoComposition.frameDuration = CMTimeMake(1, 30);

    CGFloat cropOffX = cropRect.origin.x;
    CGFloat cropOffY = cropRect.origin.y;
    CGFloat cropWidth = cropRect.size.width;
    CGFloat cropHeight = cropRect.size.height;
    videoComposition.renderSize = CGSizeMake(cropWidth, cropHeight);

    // create a video instruction
    AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    instruction.timeRange = cropTimeRange;

    AVMutableVideoCompositionLayerInstruction *transformer =
        [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:clipVideoTrack];

    UIImageOrientation videoOrientation = [self getVideoOrientationFromAsset:asset];

    CGAffineTransform t1 = CGAffineTransformIdentity;
    CGAffineTransform t2 = CGAffineTransformIdentity;

    switch (videoOrientation) {
        case UIImageOrientationUp:
            t1 = CGAffineTransformMakeTranslation(clipVideoTrack.naturalSize.height - cropOffX, 0 - cropOffY);
            t2 = CGAffineTransformRotate(t1, M_PI_2);
            break;
        case UIImageOrientationDown:
            t1 = CGAffineTransformMakeTranslation(0 - cropOffX, clipVideoTrack.naturalSize.width - cropOffY); // not fixed: width is the real height in upside down
            t2 = CGAffineTransformRotate(t1, -M_PI_2);
            break;
        case UIImageOrientationRight:
            t1 = CGAffineTransformMakeTranslation(0 - cropOffX, 0 - cropOffY);
            t2 = CGAffineTransformRotate(t1, 0);
            break;
        case UIImageOrientationLeft:
            t1 = CGAffineTransformMakeTranslation(clipVideoTrack.naturalSize.width - cropOffX, clipVideoTrack.naturalSize.height - cropOffY);
            t2 = CGAffineTransformRotate(t1, M_PI);
            break;
        default:
            NSLog(@"no supported orientation has been found in this video");
            break;
    }

    CGAffineTransform finalTransform = t2;
    [transformer setTransform:finalTransform atTime:kCMTimeZero];

    // add the transformer layer instructions, then add to the video composition
    instruction.layerInstructions = [NSArray arrayWithObject:transformer];
    videoComposition.instructions = [NSArray arrayWithObject:instruction];

    // remove any previous video at that path
    [[NSFileManager defaultManager] removeItemAtURL:outputUrl error:nil];

    if (!exporter) {
        exporter = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality];
    }

    // assign all instructions for the video processing (in this case the transformation for cropping the video)
    exporter.videoComposition = videoComposition;
    exporter.outputFileType = AVFileTypeQuickTimeMovie;

    if (outputUrl) {
        exporter.outputURL = outputUrl;
        [exporter exportAsynchronouslyWithCompletionHandler:^{
            switch ([exporter status]) {
                case AVAssetExportSessionStatusFailed:
                    NSLog(@"crop export failed: %@", [[exporter error] localizedDescription]);
                    if (completion) {
                        dispatch_async(dispatch_get_main_queue(), ^{
                            completion(NO, [exporter error], nil);
                        });
                        return;
                    }
                    break;
                case AVAssetExportSessionStatusCancelled:
                    NSLog(@"crop export cancelled");
                    if (completion) {
                        dispatch_async(dispatch_get_main_queue(), ^{
                            completion(NO, nil, nil);
                        });
                        return;
                    }
                    break;
                default:
                    break;
            }
            if (completion) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    completion(YES, nil, outputUrl);
                });
            }
        }];
    }

    return exporter;
}
So my question is: why is the video region different from the crop/camera region, when I use a square with exactly the same coordinates and size?
Solution: Maybe check this previous question. It looks like it may be similar to what you're running into. The user in that question suggested cropping this way:
CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage], cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
I hope this helps, or at least gives you a start in the right direction.