# Preface
This chapter shows how to implement a simple voiceprint recognition model with TensorFlow. You should already be familiar with audio classification. The trained model identifies who is speaking, so it can be used in projects that require voice-based verification.
The difference from a plain classifier is that this project uses ArcFace Loss (Additive Angular Margin Loss): the feature vectors and class weights are normalized, and an angular margin m is added to the angle θ between them. An angular margin acts on the angle more directly than a cosine margin, as sketched below.
Environment:
- Python 3.7
- Tensorflow 2.3.0
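The ArcFace idea fits in a few lines of TensorFlow. The following is only a minimal sketch, not the project's actual implementation; the `ArcFaceLoss` layer name and the margin m=0.5 and scale s=30 values are assumptions for illustration:

```python
import tensorflow as tf


class ArcFaceLoss(tf.keras.layers.Layer):
    """Minimal ArcFace sketch: L2-normalize features and class weights,
    add an angular margin m to the ground-truth angle, then scale."""

    def __init__(self, num_classes, margin=0.5, scale=30.0, **kwargs):
        super().__init__(**kwargs)
        self.num_classes = num_classes
        self.margin = margin
        self.scale = scale

    def build(self, input_shape):
        # one weight column per speaker class
        self.w = self.add_weight(name='w',
                                 shape=(input_shape[-1], self.num_classes),
                                 initializer='glorot_uniform',
                                 trainable=True)

    def call(self, embeddings, labels):
        # labels: integer speaker ids; after normalization the dot product is cos(theta)
        x = tf.nn.l2_normalize(embeddings, axis=1)
        w = tf.nn.l2_normalize(self.w, axis=0)
        cos_t = tf.matmul(x, w)                               # (batch, num_classes)
        theta = tf.acos(tf.clip_by_value(cos_t, -1.0 + 1e-7, 1.0 - 1e-7))
        # add the margin m only on the ground-truth class, then rescale
        one_hot = tf.one_hot(labels, depth=self.num_classes)
        logits = self.scale * tf.cos(theta + self.margin * one_hot)
        return tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=one_hot, logits=logits))
```

During training such a layer would be called as `loss = ArcFaceLoss(num_classes=num_speakers)(embeddings, labels)`, where `num_speakers` is the number of speakers in the training list.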
First, generate the data lists for the zhvoice dataset. The script below converts each MP3 clip to WAV, assigns every speaker an integer label, and writes every 200th sample to the test list and the rest to the training list; afterwards it drops entries whose audio cannot be loaded:

```python
import json
import os

from pydub import AudioSegment
from tqdm import tqdm

from utils.reader import load_audio


# Generate the training and test data lists
def get_data_list(infodata_path, list_path, zhvoice_path):
    with open(infodata_path, 'r', encoding='utf-8') as f:
        lines = f.readlines()

    f_train = open(os.path.join(list_path, 'train_list.txt'), 'w')
    f_test = open(os.path.join(list_path, 'test_list.txt'), 'w')

    sound_sum = 0
    speakers = []
    speakers_dict = {}
    for line in tqdm(lines):
        line = json.loads(line.replace('\n', ''))
        duration_ms = line['duration_ms']
        # skip clips shorter than 1.3 seconds
        if duration_ms < 1300:
            continue
        speaker = line['speaker']
        # assign each new speaker the next integer label
        if speaker not in speakers:
            speakers_dict[speaker] = len(speakers)
            speakers.append(speaker)
        label = speakers_dict[speaker]
        sound_path = os.path.join(zhvoice_path, line['index'])
        save_path = "%s.wav" % sound_path[:-4]
        # convert the MP3 to WAV once, then delete the original MP3
        if not os.path.exists(save_path):
            try:
                wav = AudioSegment.from_mp3(sound_path)
                wav.export(save_path, format="wav")
                os.remove(sound_path)
            except Exception as e:
                print('Bad data: %s, error: %s' % (sound_path, e))
                continue
        # every 200th sample goes to the test list, the rest to the train list
        if sound_sum % 200 == 0:
            f_test.write('%s\t%d\n' % (save_path.replace('\\', '/'), label))
        else:
            f_train.write('%s\t%d\n' % (save_path.replace('\\', '/'), label))
        sound_sum += 1

    f_test.close()
    f_train.close()


# Remove list entries whose audio cannot be loaded
def remove_error_audio(data_list_path):
    with open(data_list_path, 'r', encoding='utf-8') as f:
        lines = f.readlines()
    lines1 = []
    for line in tqdm(lines):
        audio_path, _ = line.split('\t')
        try:
            spec_mag = load_audio(audio_path)
            lines1.append(line)
        except Exception as e:
            print(audio_path)
            print(e)
    # rewrite the list keeping only the readable files
    with open(data_list_path, 'w', encoding='utf-8') as f:
        for line in lines1:
            f.write(line)


if __name__ == '__main__':
    get_data_list('dataset/zhvoice/text/infodata.json', 'dataset', 'dataset/zhvoice')
    remove_error_audio('dataset/train_list.txt')
    remove_error_audio('dataset/test_list.txt')
```
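The script imports `load_audio` from `utils.reader`, which is not listed here. A plausible minimal sketch is shown below, assuming librosa and a 257-bin magnitude spectrogram to match the (257, 257, 1) input shape printed in the output later; the parameter values (16 kHz sample rate, n_fft=512, 257 frames) are assumptions, and the project's real implementation may differ:

```python
import librosa
import numpy as np


def load_audio(audio_path, spec_len=257):
    # read the waveform as 16 kHz mono
    wav, sr = librosa.load(audio_path, sr=16000)
    # STFT with n_fft=512 gives 257 frequency bins per frame
    linear = librosa.stft(wav, n_fft=512, win_length=400, hop_length=160)
    mag, _ = librosa.magphase(linear)
    if mag.shape[1] < spec_len:
        # too short: let remove_error_audio drop this entry
        raise Exception('audio too short: %s' % audio_path)
    # crop to a fixed number of frames and normalize
    mag = mag[:, :spec_len]
    mean, std = np.mean(mag), np.std(mag)
    spec_mag = (mag - mean) / (std + 1e-5)
    return spec_mag[:, :, np.newaxis].astype('float32')
```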
The output of the recognition program looks similar to the following:
```
----------- Configuration Arguments -----------
audio_db: audio_db
input_shape: (257, 257, 1)
model_path: models/infer_model.h5
threshold: 0.7
------------------------------------------------
Model: "functional_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
resnet50v2_input (InputLayer [(None, 257, 257, 1)]     0
_________________________________________________________________
resnet50v2 (Functional)      (None, 2048)              23558528
_________________________________________________________________
batch_normalization (BatchNo (None, 2048)              8192
=================================================================
Total params: 23,566,720
Trainable params: 23,517,184
Non-trainable params: 49,536
_________________________________________________________________
Loaded 李达康 audio.
Loaded 沙瑞金 audio.
Select a function, 0 to register audio to the voiceprint database, 1 to run voiceprint recognition: 0
Press Enter to start recording, recording for 3 seconds:
Recording started......
Recording finished!
Enter the name of the user for this audio: 夜雨飘零
Select a function, 0 to register audio to the voiceprint database, 1 to run voiceprint recognition: 1
Press Enter to start recording, recording for 3 seconds:
Recording started......
Recording finished!
The recognized speaker is: 夜雨飘零, similarity: 0.920434
```
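The registration and recognition steps in the log above boil down to comparing a new embedding against the embeddings stored in the voiceprint database. Below is a minimal sketch of the matching step, assuming the registered features are kept in a dict and using the 0.7 threshold from the configuration printed above; the function and variable names are assumptions, not the project's actual code:

```python
import numpy as np


def recognize(feature, audio_db, threshold=0.7):
    """Return the best-matching registered name, or None if below threshold."""
    best_name, best_sim = None, 0.0
    for name, db_feature in audio_db.items():
        # cosine similarity between the query embedding and a registered one
        sim = np.dot(feature, db_feature) / (
            np.linalg.norm(feature) * np.linalg.norm(db_feature))
        if sim > best_sim:
            best_name, best_sim = name, sim
    if best_sim > threshold:
        return best_name, best_sim
    return None, best_sim
```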
Download link:
https://download.csdn.net/download/babyai996/85090063