iOS – Generating and playing an indefinite, simple audio signal (sine wave)


Overview

I'm looking to build a very simple iOS app with a button that starts and stops an audio signal. The signal is just a sine wave, and it will check my model (an instance variable for volume) throughout playback and change its volume accordingly.

My difficulty has to do with the indefinite nature of the task. I understand how to build tables, fill them with data, respond to button presses, and so on; however, when it comes to just continuing indefinitely (in this case, a sound), I'm a little stuck! Any pointers would be terrific!

Thanks for reading.

Solution

Here's a simple application which will generate a frequency on demand. You haven't specified whether to do iOS or OSX, so I've gone for OSX since it's slightly simpler (no messing with Audio Session categories). If you need iOS, you'll be able to work out the missing bits by looking into the Audio Session category basics and swapping the default output audio unit for the RemoteIO audio unit.
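For what it's worth, here is a rough sketch of those iOS-specific differences (not part of the original answer; it assumes the AVFoundation framework is linked and uses the AVAudioSession API). Everything else in the code below stays the same:

#import <AVFoundation/AVFoundation.h>

// iOS variant sketch: 1. ask for the RemoteIO unit instead of the default output unit.
AudioComponentDescription outputUnitDescription = {
    .componentType         = kAudioUnitType_Output,
    .componentSubType      = kAudioUnitSubType_RemoteIO,   // was kAudioUnitSubType_DefaultOutput
    .componentManufacturer = kAudioUnitManufacturer_Apple
};

// 2. Configure and activate an audio session before starting the unit.
NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:&error];
[[AVAudioSession sharedInstance] setActive:YES error:&error];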

Note that the intention of this is purely to demonstrate some Core Audio / Audio Unit basics. If you want to start getting more complex than this, you should look into the AUGraph API. (Also, in the interest of providing a clean example, I'm not doing any error checking. Always do error checking when dealing with Core Audio.)
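If you do add that error checking, one common pattern is a small helper that aborts with the name of the failing call. This is a sketch of my own, not part of the original answer:

#include <AudioToolbox/AudioToolbox.h>
#include <stdio.h>
#include <stdlib.h>

// Minimal error-checking helper: prints the failed operation and exits.
static void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;
    fprintf(stderr, "Error: %s (%d)\n", operation, (int)error);
    exit(1);
}

// Usage, wrapping the calls from the example below:
//     CheckError(AudioUnitInitialize(outputUnit), "AudioUnitInitialize");
//     CheckError(AudioOutputUnitStart(outputUnit), "AudioOutputUnitStart");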

You'll need to add the AudioToolbox and AudioUnit frameworks to your project to use this code.

#import <AudioToolbox/AudioToolbox.h>

@interface SWAppDelegate : NSObject <NSApplicationDelegate>
{
    AudioUnit outputUnit;
    double renderPhase;
}
@end

@implementation SWAppDelegate

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
//  First, we need to establish which Audio Unit we want.
//  We start with its description, which is:
    AudioComponentDescription outputUnitDescription = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_DefaultOutput,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

//  Next, we get the first (and only) component corresponding to that description
    AudioComponent outputComponent = AudioComponentFindNext(NULL, &outputUnitDescription);

//  Now we can create an instance of that component, which will create an
//  instance of the Audio Unit we're looking for (the default output)
    AudioComponentInstanceNew(outputComponent, &outputUnit);
    AudioUnitInitialize(outputUnit);

//  Next we'll tell the output unit what format our generated audio will
//  be in. Generally speaking, you'll want to stick to sane formats, since
//  the output unit won't accept every single possible stream format.
//  Here, we're specifying floating point samples with a sample rate of
//  44100 Hz in mono (i.e. 1 channel)
    AudioStreamBasicDescription ASBD = {
        .mSampleRate       = 44100,
        .mFormatID         = kAudioFormatLinearPCM,
        .mFormatFlags      = kAudioFormatFlagsNativeFloatPacked,
        .mChannelsPerFrame = 1,
        .mFramesPerPacket  = 1,
        .mBitsPerChannel   = sizeof(Float32) * 8,
        .mBytesPerPacket   = sizeof(Float32),
        .mBytesPerFrame    = sizeof(Float32)
    };

    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input,
                         0,
                         &ASBD,
                         sizeof(ASBD));

//  Next step is to tell our output unit which function we'd like it
//  to call to get audio samples. We'll also pass in a context pointer,
//  which can be a pointer to anything you need to maintain state between
//  render callbacks. We only need to point to a double which represents
//  the current phase of the sine wave we're creating.
    AURenderCallbackStruct callbackInfo = {
        .inputProc       = SineWaveRenderCallback,
        .inputProcRefCon = &renderPhase
    };

    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Global,
                         0,
                         &callbackInfo,
                         sizeof(callbackInfo));

//  Here we're telling the output unit to start requesting audio samples
//  from our render callback. This is the line of code that starts actually
//  sending audio to your speakers.
    AudioOutputUnitStart(outputUnit);
}

// This is our render callback. It will be called very frequently for short
// buffers of audio (512 samples per call on my machine).
OSStatus SineWaveRenderCallback(void * inRefCon,
                                AudioUnitRenderActionFlags * ioActionFlags,
                                const AudioTimeStamp * inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList * ioData)
{
    // inRefCon is the context pointer we passed in earlier when setting the render callback
    double currentPhase = *((double *)inRefCon);
    // ioData is where we're supposed to put the audio samples we've created
    Float32 * outputBuffer = (Float32 *)ioData->mBuffers[0].mData;
    const double frequency = 440.;
    const double phaseStep = (frequency / 44100.) * (M_PI * 2.);

    for(int i = 0; i < inNumberFrames; i++) {
        outputBuffer[i] = sin(currentPhase);
        currentPhase += phaseStep;
    }

    // If we were doing stereo (or more), this would copy our sine wave samples
    // to all of the remaining channels
    for(int i = 1; i < ioData->mNumberBuffers; i++) {
        memcpy(ioData->mBuffers[i].mData, outputBuffer, ioData->mBuffers[i].mDataByteSize);
    }

    // writing the current phase back to inRefCon so we can use it on the next call
    *((double *)inRefCon) = currentPhase;
    return noErr;
}

- (void)applicationWillTerminate:(NSNotification *)notification
{
    AudioOutputUnitStop(outputUnit);
    AudioUnitUninitialize(outputUnit);
    AudioComponentInstanceDispose(outputUnit);
}

@end

You can call AudioOutputUnitStart() and AudioOutputUnitStop() at will to start/stop producing audio. If you want to change the frequency dynamically, you can pass in a pointer to a struct containing both the renderPhase double and another double representing the frequency you want.
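A sketch of that struct-based approach (the names here are hypothetical, not from the original answer):

// Shared state for the render callback; replaces the bare renderPhase double.
typedef struct {
    double phase;       // owned by the render callback
    double frequency;   // written by the main thread, read each buffer
} SineWaveState;

// Pass &state as .inputProcRefCon instead of &renderPhase, then inside the
// callback recompute the phase step every buffer:
//     SineWaveState *state = (SineWaveState *)inRefCon;
//     const double phaseStep = (state->frequency / 44100.) * (M_PI * 2.);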

Be careful in the render callback. It's called from a realtime thread (not from the same thread as your main run loop). Render callbacks are subject to some fairly strict time requirements, which means that there are many things you should not do in your callback, such as the following (a realtime-safe alternative is sketched after the list):

> Allocate memory
> Wait on a mutex
> Read from a file on disk
> Objective-C messaging (yes, seriously.)
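Given those constraints, one realtime-safe way to feed your volume instance variable into the callback is to share a lock-free value rather than take a mutex. This is a sketch under my own naming, using C11 atomics as one option among several:

#include <stdatomic.h>

typedef struct {
    double         phase;    // touched only by the render callback
    _Atomic double volume;   // written by the main thread, read by the callback
} SineWaveState;

// Main thread, e.g. in a slider or button action:
//     atomic_store(&state.volume, newVolume);
// Render callback, scaling each sample:
//     double volume = atomic_load(&state->volume);
//     outputBuffer[i] = volume * sin(currentPhase);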

Note that this isn't the only way to do this. I've only demonstrated it this way since you've tagged this core-audio. If you don't need to change the frequency, you can just use AVAudioPlayer with a premade sound file containing your sine wave.
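That route might look something like this (a sketch; the file name is hypothetical):

#import <AVFoundation/AVFoundation.h>

// Loop a premade sine-wave file indefinitely with AVAudioPlayer.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"sine440" withExtension:@"caf"];
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
player.numberOfLoops = -1;   // -1 means repeat until stopped
player.volume = 0.5;         // can be updated from your model at any time
[player play];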

There's also Novocaine, which hides a lot of this verbosity from you. You could also look into the Audio Queue API, which works fairly similarly to the Core Audio sample I wrote but decouples you from the hardware a little more (i.e. it's less strict about how you behave in your render callback).
