Swift – Trying to stream audio from the microphone to another phone via Multipeer Connectivity

I'm trying to stream audio from the microphone to another iPhone via Apple's Multipeer Connectivity framework. For the audio capture and playback I'm using AVAudioEngine (many thanks to Rhythmic Fistman's answer here).

I receive data from the microphone by installing a tap on the input node, which gives me an AVAudioPCMBuffer. I then convert the buffer to a [UInt8] array and stream it to the other phone.

But when I convert the array back to an AVAudioPCMBuffer, I get an EXC_BAD_ACCESS exception, with the crash pointing at the method that converts the byte array back to an AVAudioPCMBuffer.

Here is the code I'm using to convert and stream the input:

input.installTap(onBus: 0, bufferSize: 2048, format: input.inputFormat(forBus: 0), block: {
    (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
    let audioBuffer = self.typetobinary(buffer)
    stream.write(audioBuffer, maxLength: audioBuffer.count)
})
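For context, the stream written to above would typically be the OutputStream returned by MCSession.startStream(withName:toPeer:). A minimal sketch of that setup, assuming session is a connected MCSession and peerID is the receiving peer (both names are assumptions, not from the original post):

// Hypothetical setup of the "voice" output stream used by the tap above.
do {
    let stream = try session.startStream(withName: "voice", toPeer: peerID)
    stream.schedule(in: RunLoop.main, forMode: .defaultRunLoopMode)
    stream.open()
} catch {
    print("Could not start stream: \(error)")
}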

My two functions for converting the data (taken from Martin R's answer here):

func binarytotype<T>(_ value: [UInt8], _: T.Type) -> T {
    return value.withUnsafeBufferPointer {
        UnsafeRawPointer($0.baseAddress!).load(as: T.self)
    }
}

func typetobinary<T>(_ value: T) -> [UInt8] {
    var data = [UInt8](repeating: 0, count: MemoryLayout<T>.size)
    data.withUnsafeMutableBufferPointer {
        UnsafeMutableRawPointer($0.baseAddress!).storeBytes(of: value, as: T.self)
    }
    return data
}

And on the receiving end:

func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {
    if streamName == "voice" {
        stream.schedule(in: RunLoop.current, forMode: .defaultRunLoopMode)
        stream.open()

        var bytes = [UInt8](repeating: 0, count: 8)
        stream.read(&bytes, maxLength: bytes.count)

        let audioBuffer = self.binarytotype(bytes, AVAudioPCMBuffer.self) // Here is where the app crashes

        do {
            try engine.start()
            audioPlayer.scheduleBuffer(audioBuffer, completionHandler: nil)
            audioPlayer.play()
        } catch let error {
            print(error.localizedDescription)
        }
    }
}
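As an aside, a single read of 8 bytes can never carry a whole audio buffer. A more robust receive path would drain whatever the stream currently has, for instance as in this sketch (it assumes whole buffers arrive between reads, which MCSession does not guarantee; a real protocol would frame its messages):

// Hedged sketch: accumulate all currently available bytes from the stream.
func readAllAvailableBytes(from stream: InputStream) -> [UInt8] {
    var payload = [UInt8]()
    var chunk = [UInt8](repeating: 0, count: 4096)
    while stream.hasBytesAvailable {
        let n = stream.read(&chunk, maxLength: chunk.count)
        if n <= 0 { break } // 0 = end of stream, negative = error
        payload.append(contentsOf: chunk[0..<n])
    }
    return payload
}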

The thing is, I can convert the byte array back and forth and play the sound from it before streaming (on the same phone), but I cannot create the AVAudioPCMBuffer on the receiving end. Does anyone know why the conversion fails there? Is this even the right approach?

Any help, thoughts, or input would be much appreciated.

Your AVAudioPCMBuffer serialization/deserialization is wrong.

Swift 3's casting has changed a lot, and it seems to require more copying than Swift 2 did.
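A quick check makes the failure mode visible. The generic functions above only round-trip plain value types; AVAudioPCMBuffer is a class, so MemoryLayout sees just the object reference, and typetobinary serializes a pointer rather than the audio samples. This snippet (mine, not from the original answer; sizes are for 64-bit devices) illustrates it:

// Why the generic round-trip cannot work for a class like AVAudioPCMBuffer:
print(MemoryLayout<AVAudioPCMBuffer>.size) // 8: just the object reference
print(MemoryLayout<Float>.size)            // 4: an actual value type
// typetobinary(buffer) therefore emits 8 bytes of pointer, which is
// meaningless in the receiving phone's address space -> EXC_BAD_ACCESS.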

Here's how you can convert between [UInt8] and AVAudioPCMBuffers:

N.B.: this code assumes mono float data at 44.1 kHz. You might want to change that.

func copyAudioBufferBytes(_ audioBuffer: AVAudioPCMBuffer) -> [UInt8] {
    let srcLeft = audioBuffer.floatChannelData![0]
    let bytesPerFrame = audioBuffer.format.streamDescription.pointee.mBytesPerFrame
    let numBytes = Int(bytesPerFrame * audioBuffer.frameLength)

    // initialize bytes to 0 (how to avoid?)
    var audioByteArray = [UInt8](repeating: 0, count: numBytes)

    // copy data from buffer
    srcLeft.withMemoryRebound(to: UInt8.self, capacity: numBytes) { srcByteData in
        audioByteArray.withUnsafeMutableBufferPointer {
            $0.baseAddress!.initialize(from: srcByteData, count: numBytes)
        }
    }

    return audioByteArray
}

func bytesToAudioBuffer(_ buf: [UInt8]) -> AVAudioPCMBuffer {
    // format assumption! make this part of your protocol?
    let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: true)
    let frameLength = UInt32(buf.count) / fmt.streamDescription.pointee.mBytesPerFrame

    let audioBuffer = AVAudioPCMBuffer(pcmFormat: fmt, frameCapacity: frameLength)
    audioBuffer.frameLength = frameLength

    let dstLeft = audioBuffer.floatChannelData![0]
    // for stereo
    // let dstRight = audioBuffer.floatChannelData![1]

    buf.withUnsafeBufferPointer {
        let src = UnsafeRawPointer($0.baseAddress!).bindMemory(to: Float.self, capacity: Int(frameLength))
        dstLeft.initialize(from: src, count: Int(frameLength))
    }

    return audioBuffer
}
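To tie it together, here is how these helpers might slot into the question's send and receive paths; a sketch under the same mono/44.1 kHz assumption, reusing the readAllAvailableBytes helper sketched earlier (engine, audioPlayer, input, and stream are the question's own variables):

// Sending side: serialize the real sample data instead of the object reference.
input.installTap(onBus: 0, bufferSize: 2048, format: input.inputFormat(forBus: 0)) { buffer, time in
    let bytes = self.copyAudioBufferBytes(buffer)
    stream.write(bytes, maxLength: bytes.count)
}

// Receiving side: rebuild a PCM buffer from the received bytes and play it.
let bytes = readAllAvailableBytes(from: stream)
let audioBuffer = bytesToAudioBuffer(bytes)
audioPlayer.scheduleBuffer(audioBuffer, completionHandler: nil)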
