iOS – AudioConverter#FillComplexBuffer returns -50 and does not convert anything


I closely followed this Xamarin sample (based on this Apple sample) to convert a LinearPCM file to an AAC file.

The sample works just fine, but when implemented in my project, the FillComplexBuffer method returns error -50 and the InputData event is never fired, so nothing gets converted.

The error only occurs when testing on a device. When testing on the simulator everything goes fine, and in the end I get a properly encoded AAC file.

I tried many things today and I cannot find any difference between my code and the sample code. Do you have any idea where this could come from?

I don't know if this is related to Xamarin, but since the Xamarin sample works fine, it doesn't seem to be.

Here is the relevant part of my code:

protected void Encode(string path)
{
  // In class setup. File at TempWavFilePath has DecodedFormat as format.
  //
  // DecodedFormat = AudioStreamBasicDescription.CreateLinearPCM();
  // AudioStreamBasicDescription encodedFormat = new AudioStreamBasicDescription()
  // {
  //   Format = AudioFormatType.MPEG4AAC,
  //   SampleRate = DecodedFormat.SampleRate,
  //   ChannelsPerFrame = DecodedFormat.ChannelsPerFrame,
  // };
  // AudioStreamBasicDescription.GetFormatInfo (ref encodedFormat);
  // EncodedFormat = encodedFormat;

  // Setup converter
  AudioStreamBasicDescription inputFormat = DecodedFormat;
  AudioStreamBasicDescription outputFormat = EncodedFormat;

  AudioConverterError converterCreateError;
  AudioConverter converter = AudioConverter.Create(inputFormat, outputFormat, out converterCreateError);
  if (converterCreateError != AudioConverterError.None)
  {
    Console.WriteLine("Converter creation error: " + converterCreateError);
  }

  converter.EncodeBitRate = 192000; // AAC 192kbps

  // get the actual formats back from the Audio Converter
  inputFormat = converter.CurrentInputStreamDescription;
  outputFormat = converter.CurrentOutputStreamDescription;

  /*** INPUT ***/

  AudioFile inputFile = AudioFile.OpenRead(NSUrl.FromFilename(TempWavFilePath));

  // init buffer
  const int inputBufferBytesSize = 32768;
  IntPtr inputBufferPtr = Marshal.AllocHGlobal(inputBufferBytesSize);

  // calc number of packets per read
  int inputSizePerPacket = inputFormat.BytesPerPacket;
  int inputBufferPacketSize = inputBufferBytesSize / inputSizePerPacket;

  AudioStreamPacketDescription[] inputPacketDescriptions = null;

  // init position
  long inputFilePosition = 0;

  // define input delegate
  converter.InputData += delegate(ref int numberDataPackets, AudioBuffers data, ref AudioStreamPacketDescription[] dataPacketDescription)
  {
    // how much to read
    if (numberDataPackets > inputBufferPacketSize)
    {
      numberDataPackets = inputBufferPacketSize;
    }

    // read from the file
    int outNumBytes;
    AudioFileError readError = inputFile.ReadPackets(false, out outNumBytes, inputPacketDescriptions, inputFilePosition, ref numberDataPackets, inputBufferPtr);
    if (readError != 0)
    {
      Console.WriteLine("Read error: " + readError);
    }

    // advance input file packet position
    inputFilePosition += numberDataPackets;

    // put the data pointer into the buffer list
    data.SetData(0, inputBufferPtr, outNumBytes);

    // add packet descriptions if required
    if (dataPacketDescription != null)
    {
      if (inputPacketDescriptions != null)
      {
        dataPacketDescription = inputPacketDescriptions;
      }
      else
      {
        dataPacketDescription = null;
      }
    }

    return AudioConverterError.None;
  };

  /*** OUTPUT ***/

  // create the destination file
  var outputFile = AudioFile.Create(NSUrl.FromFilename(path), AudioFileType.M4A, outputFormat, AudioFileFlags.EraseFile);

  // init buffer
  const int outputBufferBytesSize = 32768;
  IntPtr outputBufferPtr = Marshal.AllocHGlobal(outputBufferBytesSize);
  AudioBuffers buffers = new AudioBuffers(1);

  // calc number of packets per write
  int outputSizePerPacket = outputFormat.BytesPerPacket;
  AudioStreamPacketDescription[] outputPacketDescriptions = null;

  if (outputSizePerPacket == 0)
  {
    // if the destination format is VBR, we need to get max size per packet from the converter
    outputSizePerPacket = (int)converter.MaximumOutputPacketSize;

    // allocate memory for the PacketDescription structures describing the layout of each packet
    outputPacketDescriptions = new AudioStreamPacketDescription[outputBufferBytesSize / outputSizePerPacket];
  }
  int outputBufferPacketSize = outputBufferBytesSize / outputSizePerPacket;

  // init position
  long outputFilePosition = 0;

  long totalOutputFrames = 0; // used for debugging

  // write magic cookie if necessary
  if (converter.CompressionMagicCookie != null && converter.CompressionMagicCookie.Length != 0)
  {
    outputFile.MagicCookie = converter.CompressionMagicCookie;
  }

  // loop to convert data
  Console.WriteLine("Converting...");
  while (true)
  {
    // create buffer
    buffers[0] = new AudioBuffer()
    {
      NumberChannels = outputFormat.ChannelsPerFrame,
      DataByteSize = outputBufferBytesSize,
      Data = outputBufferPtr
    };

    int writtenPackets = outputBufferPacketSize;

    // LET'S CONVERT (it's about time...)
    AudioConverterError converterFillError = converter.FillComplexBuffer(ref writtenPackets, buffers, outputPacketDescriptions);
    if (converterFillError != AudioConverterError.None)
    {
      Console.WriteLine("FillComplexBuffer error: " + converterFillError);
    }

    if (writtenPackets == 0) // EOF
    {
      break;
    }

    // write to output file
    int inNumBytes = buffers[0].DataByteSize;

    AudioFileError writeError = outputFile.WritePackets(false, inNumBytes, outputPacketDescriptions, outputFilePosition, ref writtenPackets, outputBufferPtr);
    if (writeError != 0)
    {
      Console.WriteLine("WritePackets error: {0}", writeError);
    }

    // advance output file packet position
    outputFilePosition += writtenPackets;

    if (FlowFormat.FramesPerPacket != 0)
    {
      // the format has constant frames per packet
      totalOutputFrames += (writtenPackets * FlowFormat.FramesPerPacket);
    }
    else
    {
      // variable frames per packet require doing this for each packet (adding up the number of sample frames of data in each packet)
      for (var i = 0; i < writtenPackets; ++i)
      {
        totalOutputFrames += outputPacketDescriptions[i].VariableFramesInPacket;
      }
    }
  }

  // write out any of the leading and trailing frames for compressed formats only
  if (outputFormat.BitsPerChannel == 0)
  {
    Console.WriteLine("Total number of output frames counted: {0}", totalOutputFrames);
    WritePacketTableInfo(converter, outputFile);
  }

  // write the cookie again - sometimes codecs will update cookies at the end of a conversion
  if (converter.CompressionMagicCookie != null && converter.CompressionMagicCookie.Length != 0)
  {
    outputFile.MagicCookie = converter.CompressionMagicCookie;
  }

  // Clean everything
  Marshal.FreeHGlobal(inputBufferPtr);
  Marshal.FreeHGlobal(outputBufferPtr);
  converter.Dispose();
  outputFile.Dispose();

  // Remove temp file
  File.Delete(TempWavFilePath);
}
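For what it's worth, -50 is Core Audio's paramErr (an invalid parameter somewhere), so one thing worth logging right after creating the converter is the formats it actually accepted. Below is a minimal diagnostic sketch, reusing the DecodedFormat and EncodedFormat members from the code above; the exact fields printed are just illustrative.

// Hedged diagnostic sketch: dump what the converter reports, so a bad
// AudioStreamBasicDescription shows up before FillComplexBuffer is ever called.
AudioConverterError createError;
var probe = AudioConverter.Create(DecodedFormat, EncodedFormat, out createError);
Console.WriteLine("Create: {0}", createError);

var inFmt = probe.CurrentInputStreamDescription;
var outFmt = probe.CurrentOutputStreamDescription;
Console.WriteLine("Input:  {0} Hz, {1} ch, {2} bytes/packet",
    inFmt.SampleRate, inFmt.ChannelsPerFrame, inFmt.BytesPerPacket);
Console.WriteLine("Output: {0} Hz, {1} ch, {2} frames/packet",
    outFmt.SampleRate, outFmt.ChannelsPerFrame, outFmt.FramesPerPacket);
probe.Dispose();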

I have already looked at this SO question, but the (not very detailed) C/Obj-C related answer doesn't seem to match my problem.

Thanks!

Solution: I finally found the solution!

I just had to declare the AVAudioSession category before converting the file.

AVAudioSession.SharedInstance().SetCategory(AVAudioSessionCategory.AudioProcessing);
AVAudioSession.SharedInstance().SetActive(true);

Since I also use an AudioQueue to render offline, I actually had to set the category to AVAudioSessionCategory.PlayAndRecord, so that both the offline rendering and the audio conversion work.
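Here is a minimal sketch of the resulting ordering, assuming the Encode method shown in the question; the output path is just a placeholder.

// Configure the shared audio session once, before any of the offline work starts:
// PlayAndRecord covers both the offline AudioQueue render and the AAC encode.
AVAudioSession.SharedInstance().SetCategory(AVAudioSessionCategory.PlayAndRecord);
AVAudioSession.SharedInstance().SetActive(true);

// ... offline AudioQueue rendering happens here ...

// Then run the conversion; the path below is purely illustrative.
Encode("/path/to/output.m4a");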
