Sound capture with OpenAL on iOS


Overview

I am trying to do sound capture on iOS using OpenAL. (I am writing a cross-platform library, which is why I am avoiding iOS-specific ways of recording sound.)

Out of the box, OpenAL capture does not work, but there is a known workaround: open an output context before starting capture. That solution worked for me on iOS 5.0.

On iOS 5.1.1, however, the workaround only helps for the first sample I try to record. (Before starting capture I switch the AudioSession to PlayAndRecord and open the default output device; after recording the sample I close the device and switch the session back to what it was.) For the second sample, reopening the output context does not help, and no sound is captured.

Is there a known way to handle this?

```objc
// Here's what I do before starting the recording
oldAudioSessionCategory = [audioSession category];
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
[audioSession setActive:YES error:nil];

// We need to have an active context. If there is none, create one.
if (!alcGetCurrentContext()) {
    outputDevice = alcOpenDevice(NULL);
    outputContext = alcCreateContext(outputDevice, NULL);
    alcMakeContextCurrent(outputContext);
}

// Capture itself
inputDevice = alcCaptureOpenDevice(NULL, frequency, FORMAT, bufferSize);
....
alcCaptureCloseDevice(inputDevice);

// Restoring the audio state to whatever it had been before capture
if (outputContext) {
    alcDestroyContext(outputContext);
    alcCloseDevice(outputDevice);
}
[[AVAudioSession sharedInstance] setCategory:oldAudioSessionCategory
                                       error:nil];
```
Solution

Here is the code I use to emulate the capture extension. A few comments:

> Throughout the project, OpenKD is used, e.g., for threading primitives. You will likely need to replace those calls.
> I had to fight latency when starting capture. As a result, I continuously keep reading the sound input and throw the data away when it is not needed. (Such a solution was suggested, e.g., here.) This, in turn, requires catching the going-inactive notification in order to release control of the microphone. You may or may not want to use this kludge.
> Instead of alcGetIntegerv(device, ALC_CAPTURE_SAMPLES, 1, &res), I had to define a separate function, alcGetAvailableSamples.

In short, this code is unlikely to be usable in your project as-is, but hopefully you can
adapt it to your needs.

```objc
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#include <KD/kd.h>
#include <AL/al.h>
#include <AL/alc.h>

#include <AudioToolbox/AudioToolbox.h>
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

#include "KD/kdext.h"

struct InputDeviceData {
    int id;
    KDThreadMutex *mutex;
    AudioUnit audioUnit;
    int nChannels;
    int frequency;
    ALCenum format;
    int sampleSize;
    uint8_t *buf;
    size_t bufSize;        // in bytes
    size_t bufFilledBytes; // in bytes
    bool started;
};

static struct InputDeviceData *cachedInData = NULL;

static OSStatus renderCallback (void                        *inRefCon,
                                AudioUnitRenderActionFlags  *ioActionFlags,
                                const AudioTimeStamp        *inTimeStamp,
                                UInt32                      inBusNumber,
                                UInt32                      inNumberFrames,
                                AudioBufferList             *ioData);
static AudioUnit getAudioUnit();
static void setupNotifications();
static void destroyCachedInData();
static struct InputDeviceData *setupCachedInData(AudioUnit audioUnit, ALCuint frequency, ALCenum format, ALCsizei bufferSizeInSamples);
static struct InputDeviceData *getInputDeviceData(AudioUnit audioUnit, ALCuint frequency, ALCenum format, ALCsizei bufferSizeInSamples);

/** I only have to use NSNotificationCenter instead of CFNotificationCenter
 *  because there is no published name for WillResignActive/WillBecomeActive
 *  notifications in CoreFoundation.
 */
@interface ALCNotificationObserver : NSObject
- (void)onResignActive;
@end

@implementation ALCNotificationObserver
- (void)onResignActive {
    destroyCachedInData();
}
@end

static void setupNotifications() {
    static ALCNotificationObserver *observer = NULL;
    if (!observer) {
        observer = [[ALCNotificationObserver alloc] init];
        [[NSNotificationCenter defaultCenter] addObserver:observer selector:@selector(onResignActive) name:UIApplicationWillResignActiveNotification object:nil];
    }
}

static OSStatus renderCallback (void                        *inRefCon,
                                AudioUnitRenderActionFlags  *ioActionFlags,
                                const AudioTimeStamp        *inTimeStamp,
                                UInt32                      inBusNumber,
                                UInt32                      inNumberFrames,
                                AudioBufferList             *ioData) {
    struct InputDeviceData *inData = (struct InputDeviceData*)inRefCon;

    kdThreadMutexLock(inData->mutex);
    size_t bytesToRender = inNumberFrames * inData->sampleSize;
    if (bytesToRender + inData->bufFilledBytes <= inData->bufSize) {
        OSStatus status;
        AudioBufferList audioBufferList; // 1 buffer is declared inside the structure itself.
        audioBufferList.mNumberBuffers = 1;
        audioBufferList.mBuffers[0].mNumberChannels = inData->nChannels;
        audioBufferList.mBuffers[0].mDataByteSize = bytesToRender;
        audioBufferList.mBuffers[0].mData = inData->buf + inData->bufFilledBytes;
        status = AudioUnitRender(inData->audioUnit,
                                 ioActionFlags,
                                 inTimeStamp,
                                 inBusNumber,
                                 inNumberFrames,
                                 &audioBufferList);
        if (inData->started) {
            inData->bufFilledBytes += bytesToRender;
        }
    } else {
        kdLogFormatMessage("%s: buffer overflow", __FUNCTION__);
    }
    kdThreadMutexUnlock(inData->mutex);

    return 0;
}

static AudioUnit getAudioUnit() {
    static AudioUnit audioUnit = NULL;

    if (!audioUnit) {
        AudioComponentDescription ioUnitDescription;

        ioUnitDescription.componentType         = kAudioUnitType_Output;
        ioUnitDescription.componentSubType      = kAudioUnitSubType_VoiceProcessingIO;
        ioUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
        ioUnitDescription.componentFlags        = 0;
        ioUnitDescription.componentFlagsMask    = 0;

        AudioComponent foundIoUnitReference = AudioComponentFindNext(NULL, &ioUnitDescription);
        AudioComponentInstanceNew(foundIoUnitReference, &audioUnit);

        if (audioUnit == NULL) {
            kdLogMessage("Could not obtain AudioUnit");
        }
    }

    return audioUnit;
}

static void destroyCachedInData() {
    OSStatus status;
    if (cachedInData) {
        status = AudioOutputUnitStop(cachedInData->audioUnit);
        status = AudioUnitUninitialize(cachedInData->audioUnit);
        free(cachedInData->buf);
        kdThreadMutexFree(cachedInData->mutex);
        free(cachedInData);
        cachedInData = NULL;
    }
}

static struct InputDeviceData *setupCachedInData(AudioUnit audioUnit, ALCuint frequency, ALCenum format, ALCsizei bufferSizeInSamples) {
    static int idCount = 0;
    OSStatus status;

    int bytesPerFrame = (format == AL_FORMAT_MONO8)    ? 1 :
                        (format == AL_FORMAT_MONO16)   ? 2 :
                        (format == AL_FORMAT_STEREO8)  ? 2 :
                        (format == AL_FORMAT_STEREO16) ? 4 : -1;
    int channelsPerFrame = (format == AL_FORMAT_MONO8)    ? 1 :
                           (format == AL_FORMAT_MONO16)   ? 1 :
                           (format == AL_FORMAT_STEREO8)  ? 2 :
                           (format == AL_FORMAT_STEREO16) ? 2 : -1;
    int bitsPerChannel = (format == AL_FORMAT_MONO8)    ? 8 :
                         (format == AL_FORMAT_MONO16)   ? 16 :
                         (format == AL_FORMAT_STEREO8)  ? 8 :
                         (format == AL_FORMAT_STEREO16) ? 16 : -1;

    cachedInData = malloc(sizeof(struct InputDeviceData));
    cachedInData->id = ++idCount;
    cachedInData->format = format;
    cachedInData->frequency = frequency;
    cachedInData->mutex = kdThreadMutexCreate(NULL);
    cachedInData->audioUnit = audioUnit;
    cachedInData->nChannels = channelsPerFrame;
    cachedInData->sampleSize = bytesPerFrame;
    cachedInData->bufSize = bufferSizeInSamples * bytesPerFrame;
    cachedInData->buf = malloc(cachedInData->bufSize);
    cachedInData->bufFilledBytes = 0;
    cachedInData->started = false;

    UInt32 enableOutput = 1; // to enable output
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Input,
                                  1,
                                  &enableOutput,
                                  sizeof(enableOutput));

    struct AudioStreamBasicDescription basicDescription;
    basicDescription.mSampleRate = (Float64)frequency;
    basicDescription.mFormatID = kAudioFormatLinearPCM;
    basicDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    basicDescription.mBytesPerPacket = bytesPerFrame;
    basicDescription.mFramesPerPacket = 1;
    basicDescription.mBytesPerFrame = bytesPerFrame;
    basicDescription.mChannelsPerFrame = channelsPerFrame;
    basicDescription.mBitsPerChannel = bitsPerChannel;
    basicDescription.mReserved = 0;

    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat, // property key
                                  kAudioUnitScope_Output,          // scope
                                  1,                               // 1 is output
                                  &basicDescription,
                                  sizeof(basicDescription));       // value

    AURenderCallbackStruct renderCallbackStruct;
    renderCallbackStruct.inputProc = renderCallback;
    renderCallbackStruct.inputProcRefCon = cachedInData;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Global,          // scope
                                  1,                               // 1 is output
                                  &renderCallbackStruct,
                                  sizeof(renderCallbackStruct));   // value

    status = AudioOutputUnitStart(cachedInData->audioUnit);

    return cachedInData;
}

static struct InputDeviceData *getInputDeviceData(AudioUnit audioUnit, ALCuint frequency, ALCenum format, ALCsizei bufferSizeInSamples) {
    if (cachedInData &&
        (cachedInData->frequency != frequency ||
         cachedInData->format != format ||
         cachedInData->bufSize / cachedInData->sampleSize != bufferSizeInSamples)) {
        kdAssert(!cachedInData->started);
        destroyCachedInData();
    }

    if (!cachedInData) {
        setupCachedInData(audioUnit, frequency, format, bufferSizeInSamples);
        setupNotifications();
    }

    return cachedInData;
}

ALC_API ALCdevice* ALC_APIENTRY alcCaptureOpenDevice(const ALCchar *devicename, ALCuint frequency, ALCenum format, ALCsizei buffersizeInSamples) {
    kdAssert(devicename == NULL);

    AudioUnit audioUnit = getAudioUnit();
    struct InputDeviceData *res = getInputDeviceData(audioUnit, frequency, format, buffersizeInSamples);

    return (ALCdevice*)res->id;
}

ALC_API ALCboolean ALC_APIENTRY alcCaptureCloseDevice(ALCdevice *device) {
    alcCaptureStop(device);
    return true;
}

ALC_API void ALC_APIENTRY alcCaptureStart(ALCdevice *device) {
    if (!cachedInData || (int)device != cachedInData->id) {
        // may happen after the app loses and regains active status.
        kdLogFormatMessage("Attempt to start a stale AL capture device");
        return;
    }
    cachedInData->started = true;
}

ALC_API void ALC_APIENTRY alcCaptureStop(ALCdevice *device) {
    if (!cachedInData || (int)device != cachedInData->id) {
        // may happen after the app loses and regains active status.
        kdLogFormatMessage("Attempt to stop a stale AL capture device");
        return;
    }
    cachedInData->started = false;
}

ALC_API ALCint ALC_APIENTRY alcGetAvailableSamples(ALCdevice *device) {
    if (!cachedInData || (int)device != cachedInData->id) {
        // may happen after the app loses and regains active status.
        kdLogFormatMessage("Attempt to get sample count from a stale AL capture device");
        return 0;
    }

    ALCint res;
    kdThreadMutexLock(cachedInData->mutex);
    res = cachedInData->bufFilledBytes / cachedInData->sampleSize;
    kdThreadMutexUnlock(cachedInData->mutex);
    return res;
}

ALC_API void ALC_APIENTRY alcCaptureSamples(ALCdevice *device, ALCvoid *buffer, ALCsizei samples) {
    if (!cachedInData || (int)device != cachedInData->id) {
        // may happen after the app loses and regains active status.
        kdLogFormatMessage("Attempt to get samples from a stale AL capture device");
        return;
    }

    size_t bytesToCapture = samples * cachedInData->sampleSize;
    kdAssert(cachedInData->started);
    kdAssert(bytesToCapture <= cachedInData->bufFilledBytes);

    kdThreadMutexLock(cachedInData->mutex);
    memcpy(buffer, cachedInData->buf, bytesToCapture);
    memmove(cachedInData->buf, cachedInData->buf + bytesToCapture, cachedInData->bufFilledBytes - bytesToCapture);
    cachedInData->bufFilledBytes -= bytesToCapture;
    kdThreadMutexUnlock(cachedInData->mutex);
}
```
Original article: https://outofmemory.cn/web/1051938.html
