I'm creating a test application in Swift where I want to stitch multiple videos together from my app's Documents directory using AVMutableComposition.
I've had some success with this: all of my videos get stitched together, and everything shows at the correct size, both portrait and landscape.
My problem, however, is that all of the videos are displayed in the orientation of the last clip in the composition.
I know that to fix this I will need to add a layer instruction for each track, but I can't seem to get it right. The answers I have found render the whole composition so that landscape videos are simply scaled to fit a portrait view, so when I turn my phone on its side to watch the landscape clips, they are still small, because they have been scaled down to portrait size.
This is not the result I am looking for. I want the expected behavior: if a video is landscape, it is shown scaled down in portrait mode, but if the phone is rotated, the landscape video fills the screen (just like viewing a landscape video saved in Photos); likewise, a portrait video is full screen when viewed in portrait and scales down to landscape size when the phone is turned on its side (just like viewing a portrait video in Photos).
In summary, when viewing a composition containing both landscape and portrait videos, I want to be able to view the whole composition with my phone in landscape, with the landscape videos full screen and the portrait videos scaled down, or view the same composition in portrait, with the portrait videos full screen and the landscape videos scaled down.
With all the answers I've found, this is not what happens. They all seem to have very unexpected behavior when importing videos from Photos to add to the composition, and the same random behavior appears when adding videos shot with the front-facing camera (to be clear: with my current implementation, videos imported from the library and "selfie" videos appear at the correct size without these issues).
I'm looking for a way to rotate/scale these videos so that they are always shown in the correct orientation and scale, depending on which way the user is holding their phone.
EDIT: I now know that I can't have both landscape and portrait orientations in a single video, so the result I'm after is to have the final video in landscape orientation. I've figured out how to switch all the orientations and scales so that everything comes out the same way up, but my output is a portrait video. If anyone could help me change this so that my output is landscape, it would be greatly appreciated.
Below is my function to get the instruction for each video:
```swift
func videoTransformForTrack(asset: AVAsset) -> CGAffineTransform {
    var return_value: CGAffineTransform?

    let assetTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let transform = assetTrack.preferredTransform
    let assetInfo = orientationFromTransform(transform)

    var scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.width
    if assetInfo.isPortrait {
        scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.height
        let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
        return_value = CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor)
    } else {
        let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
        var concat = CGAffineTransformConcat(
            CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor),
            CGAffineTransformMakeTranslation(0, UIScreen.mainScreen().bounds.width / 2))
        if assetInfo.orientation == .Down {
            let fixUpsideDown = CGAffineTransformMakeRotation(CGFloat(M_PI))
            let windowBounds = UIScreen.mainScreen().bounds
            let yFix = assetTrack.naturalSize.height + windowBounds.height
            let centerFix = CGAffineTransformMakeTranslation(assetTrack.naturalSize.width, yFix)
            concat = CGAffineTransformConcat(CGAffineTransformConcat(fixUpsideDown, centerFix), scaleFactor)
        }
        return_value = concat
    }
    return return_value!
}
```
The exporter:
```swift
// Create AVMutableComposition to contain all AVMutableComposition tracks
let mix_composition = AVMutableComposition()
var total_time = kCMTimeZero

// Loop over videos and create tracks, keep incrementing total duration
let video_track = mix_composition.addMutableTrackWithMediaType(AVMediaTypeVideo,
    preferredTrackID: CMPersistentTrackID())
var instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: video_track)

for video in videos {
    let shortened_duration = CMTimeSubtract(video.duration, CMTimeMake(1, 10))
    let videoAssetTrack = video.tracksWithMediaType(AVMediaTypeVideo)[0]

    do {
        try video_track.insertTimeRange(CMTimeRangeMake(kCMTimeZero, shortened_duration),
            ofTrack: videoAssetTrack,
            atTime: total_time)
        video_track.preferredTransform = videoAssetTrack.preferredTransform
    } catch _ {
    }

    instruction.setTransform(videoTransformForTrack(video), atTime: total_time)

    // Add video duration to total time
    total_time = CMTimeAdd(total_time, shortened_duration)
}

// Create main instruction for video composition
let main_instruction = AVMutableVideoCompositionInstruction()
main_instruction.timeRange = CMTimeRangeMake(kCMTimeZero, total_time)
main_instruction.layerInstructions = [instruction]

// (This declaration was missing from the original snippet but is needed below.)
let main_composition = AVMutableVideoComposition()
main_composition.instructions = [main_instruction]
main_composition.frameDuration = CMTimeMake(1, 30)
main_composition.renderSize = CGSize(width: UIScreen.mainScreen().bounds.width,
    height: UIScreen.mainScreen().bounds.height)

let exporter = AVAssetExportSession(asset: mix_composition, presetName: AVAssetExportPreset640x480)
exporter!.outputURL = final_url
exporter!.outputFileType = AVFileTypeMPEG4
exporter!.shouldOptimizeForNetworkUse = true
exporter!.videoComposition = main_composition

// 6 - Perform the export
exporter!.exportAsynchronouslyWithCompletionHandler() {
    // Assign return values based on success of export
    dispatch_async(dispatch_get_main_queue(), { () -> Void in
        self.exportDidFinish(exporter!)
    })
}
```
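For the landscape-output part of the question, one option is to fix the composition's render size to landscape dimensions instead of the portrait screen bounds, and scale every clip so its displayed height fills that frame. This is only a sketch, not from the original code: the 1280x720 target and the `fitToHeightTransform` helper are assumptions, and it uses the same Swift 2-era APIs as the snippets above.

```swift
// Sketch (hypothetical helper): render into a fixed landscape frame and
// fit each clip's displayed height to it, centering it horizontally.
let renderSize = CGSize(width: 1280, height: 720)  // assumed landscape target
main_composition.renderSize = renderSize

func fitToHeightTransform(track: AVAssetTrack, isPortrait: Bool) -> CGAffineTransform {
    // A portrait clip's preferredTransform rotates it, so its displayed
    // width/height are the naturalSize's height/width swapped.
    let displayedWidth = isPortrait ? track.naturalSize.height : track.naturalSize.width
    let displayedHeight = isPortrait ? track.naturalSize.width : track.naturalSize.height
    let scale = renderSize.height / displayedHeight
    let scaleFactor = CGAffineTransformMakeScale(scale, scale)
    // Center the scaled clip horizontally in the landscape frame
    let centerX = CGAffineTransformMakeTranslation((renderSize.width - displayedWidth * scale) / 2, 0)
    return CGAffineTransformConcat(CGAffineTransformConcat(track.preferredTransform, scaleFactor), centerX)
}
```

With that in place, `instruction.setTransform(fitToHeightTransform(videoAssetTrack, isPortrait: ...), atTime: total_time)` would replace the portrait-based transform in the loop; landscape clips fill the frame and portrait clips appear pillarboxed in the center.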
Sorry for the long explanation, I just wanted to make sure I was being very clear about my problem, since the other answers haven't worked for me.
Solution: I'm not sure your orientationFromTransform() is giving you the correct orientation. I think you should try modifying it, or try something like:
```swift
extension AVAsset {

    func videoOrientation() -> (orientation: UIInterfaceOrientation, device: AVCaptureDevicePosition) {
        var orientation: UIInterfaceOrientation = .Unknown
        var device: AVCaptureDevicePosition = .Unspecified

        let tracks: [AVAssetTrack] = self.tracksWithMediaType(AVMediaTypeVideo)
        if let videoTrack = tracks.first {
            let t = videoTrack.preferredTransform

            if (t.a == 0 && t.b == 1.0 && t.d == 0) {
                orientation = .Portrait
                if t.c == 1.0 {
                    device = .Front
                } else if t.c == -1.0 {
                    device = .Back
                }
            } else if (t.a == 0 && t.b == -1.0 && t.d == 0) {
                orientation = .PortraitUpsideDown
                if t.c == -1.0 {
                    device = .Front
                } else if t.c == 1.0 {
                    device = .Back
                }
            } else if (t.a == 1.0 && t.b == 0 && t.c == 0) {
                orientation = .LandscapeRight
                if t.d == -1.0 {
                    device = .Front
                } else if t.d == 1.0 {
                    device = .Back
                }
            } else if (t.a == -1.0 && t.b == 0 && t.c == 0) {
                orientation = .LandscapeLeft
                if t.d == 1.0 {
                    device = .Front
                } else if t.d == -1.0 {
                    device = .Back
                }
            }
        }
        return (orientation, device)
    }
}
```
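As a usage sketch (not from the original answer), the extension could drive the per-clip decision inside the stitching loop, where `video` is one of the assets being inserted:

```swift
// Hypothetical usage of videoOrientation() when building layer instructions
let (orientation, device) = video.videoOrientation()
switch orientation {
case .Portrait, .PortraitUpsideDown:
    // Portrait clip: scale it down to fit the render frame
    print("portrait clip, \(device == .Front ? "front" : "back") camera")
default:
    // Landscape clip: let it fill the frame
    print("landscape clip")
}
```

Checking the front/back camera position this way is also a cheap test for the "selfie" clips the question says behave differently, since front-camera footage carries a mirrored preferredTransform.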