The samples appear to be interleaved per frame, left channel first. Feeding a signal into the left channel and silence into the right, I get:
result = [0.2776, -0.0002, 0.2732, -0.0002, 0.2688, -0.0001, 0.2643, -0.0003, 0.2599, ...
So, to separate this into per-channel streams, reshape it into a 2D array:
result = np.frombuffer(in_data, dtype=np.float32)  # np.fromstring is deprecated
result = np.reshape(result, (frames_per_buffer, 2))
Now the left channel is result[:, 0] and the right channel is result[:, 1].
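To make the reshape concrete, here is a small standalone sketch with hypothetical sample values (not taken from the capture above):

```python
import numpy as np

# Hypothetical interleaved stereo buffer: [L0, R0, L1, R1, ...]
interleaved = np.array([0.1, -0.5, 0.2, -0.6, 0.3, -0.7], dtype=np.float32)

frames = interleaved.reshape(-1, 2)  # one row per frame: [[L0, R0], [L1, R1], ...]
left = frames[:, 0]                  # [0.1, 0.2, 0.3]
right = frames[:, 1]                 # [-0.5, -0.6, -0.7]
```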
import numpy as np


def depre(in_data, channels):
    """
    Convert a byte stream into a 2D numpy array with shape (chunk_size, channels).

    Samples are interleaved, so for a stereo stream with a left channel of
    [L0, L1, L2, ...] and a right channel of [R0, R1, R2, ...], the input is
    ordered as [L0, R0, L1, R1, ...]
    """
    # TODO: handle data type as parameter, convert between pyaudio/numpy types
    result = np.frombuffer(in_data, dtype=np.float32)  # np.fromstring is deprecated
    chunk_length = len(result) // channels
    assert chunk_length * channels == len(result)
    result = np.reshape(result, (chunk_length, channels))
    return result


def enpre(signal):
    """
    Convert a 2D numpy array into a byte stream for PyAudio.

    Signal should be a numpy array with shape (chunk_size, channels).
    """
    interleaved = signal.flatten()
    # TODO: handle data type as parameter, convert between pyaudio/numpy types
    out_data = interleaved.astype(np.float32).tobytes()  # tostring() is deprecated
    return out_data
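Putting the two directions together, a round trip should reproduce the original signal. The sketch below inlines the same operations the helpers use, so it runs standalone; the test signal (a ramp on the left, silence on the right) is made up for illustration:

```python
import numpy as np

# Build a two-channel test signal: left = ramp, right = silence
left = np.linspace(-1.0, 1.0, 8, dtype=np.float32)
right = np.zeros(8, dtype=np.float32)
signal = np.stack([left, right], axis=1)          # shape (8, 2)

# Encode direction: interleave frame by frame and serialize to bytes
data = signal.flatten().astype(np.float32).tobytes()

# Decode direction: deserialize and deinterleave back into (frames, channels)
restored = np.frombuffer(data, dtype=np.float32).reshape(-1, 2)

assert restored.shape == (8, 2)
assert np.allclose(restored, signal)
```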