Google AI Blog: Joint Speech Recognition and Speaker Diarization via Sequence Transduction

Translated by ALLEN · 2019-08-25 10:16:25

This is a collaborative translation.


Being able to recognize "who said what," or speaker diarization, is a key step in understanding audio of human conversations through automated means. For example, in a medical conversation between a doctor and a patient, an affirmative "Yes" in response to the doctor's question "Have you been taking your heart medication regularly?" means something fundamentally different from a rhetorical "Yes?".

Conventional speaker diarization systems work in two stages: the first detects changes in the acoustic spectrum to determine when the speakers in a conversation change; the second identifies the individual speakers across the conversation. This basic multi-stage approach has been in place for almost two decades, and during that time only the speaker change detection component has improved.


With the recent development of a new neural network model, the recurrent neural network transducer (RNN-T), we now have a suitable architecture to improve the performance of speaker diarization and address the limitations of the previous diarization systems we presented recently. As reported in our paper "Joint Speech Recognition and Speaker Diarization via Sequence Transduction," to be presented at Interspeech 2019, we have developed an RNN-T based speaker diarization system and demonstrated a breakthrough in performance, reducing the word diarization error rate from about 20% to 2%, a 10x improvement.


Conventional Speaker Diarization Systems

Conventional speaker diarization systems rely on differences in how people sound to distinguish the speakers in a conversation. While male and female speakers can be identified relatively easily from their pitch using simple acoustic models (e.g., Gaussian mixture models) in a single stage, speaker diarization systems use a multi-stage approach to distinguish speakers with potentially similar pitch. First, based on detected vocal characteristics, a change detection algorithm breaks the conversation into homogeneous segments, each hopefully containing only a single speaker. Then, a deep learning model is used to map each speaker's segments to an embedding vector. Finally, in a clustering stage, these embeddings are grouped together to keep track of the same speaker across the conversation.
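To make the multi-stage structure concrete, here is a minimal Python sketch of such a pipeline. The change detector and the embedding model are hypothetical stand-ins (`detect_change_points`, `embed_segment`), and the clustering step uses scikit-learn purely for illustration; this is not the implementation of any particular production system.

```python
# A minimal sketch of the conventional multi-stage diarization pipeline described above.
# `detect_change_points` and `embed_segment` are hypothetical stand-ins for a
# speaker-change detector and a deep speaker-embedding model.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def diarize(audio, num_speakers, detect_change_points, embed_segment):
    # Stage 1: split the conversation at detected speaker-change points,
    # hoping each segment contains a single speaker.
    boundaries = detect_change_points(audio)          # e.g. [0.0, 3.2, 7.9, ...] seconds
    segments = list(zip(boundaries[:-1], boundaries[1:]))

    # Stage 2: map every segment to a fixed-size embedding vector.
    embeddings = np.stack([embed_segment(audio, start, end)
                           for start, end in segments])

    # Stage 3: cluster the embeddings so the same speaker keeps the same label
    # across the whole conversation. Note the number of speakers must be known.
    labels = AgglomerativeClustering(n_clusters=num_speakers).fit_predict(embeddings)
    return [(start, end, f"spk{label}") for (start, end), label in zip(segments, labels)]
```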

In practice, the speaker diarization system runs in parallel with an automatic speech recognition (ASR) system, and the outputs of the two systems are combined to attribute speaker labels to the recognized words.
A conventional speaker diarization system infers speaker labels in the acoustic domain and then overlays the speaker labels on the words generated by a separate ASR system.
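Combining the two parallel outputs is essentially a time-alignment step. Below is a minimal sketch, assuming the ASR system emits word-level timestamps and the diarization system emits (start, end, speaker) segments as in the sketch above; each word is attributed to the speaker segment with the largest time overlap. The function and data layout are illustrative assumptions.

```python
# A minimal sketch of overlaying diarization labels on ASR output.
# words: list of (word, start, end); speaker_segments: list of (start, end, speaker).
def attribute_words(words, speaker_segments):
    labeled = []
    for word, w_start, w_end in words:
        # Assign each word to the speaker segment with the largest time overlap
        # (the overlap may be negative when there is no overlap at all; max()
        # then simply picks the closest segment).
        best = max(speaker_segments,
                   key=lambda seg: min(w_end, seg[1]) - max(w_start, seg[0]))
        labeled.append((word, best[2]))
    return labeled
```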


This approach, however, has several limitations that have hampered progress in this field. First, the conversation needs to be broken into segments that each contain speech from only one speaker; otherwise, the embedding vectors will not accurately represent the speakers. In practice, however, the change detection algorithm is imperfect, resulting in segments that may contain multiple speakers. Second, the clustering stage requires that the number of speakers be known, and it is particularly sensitive to the accuracy of this input. Third, the system has to make a difficult trade-off between the segment size over which the voice signatures are estimated and the desired model accuracy. The longer the segment, the better the quality of the voice signature, since the model has more information about the speaker. This, however, comes at the risk of attributing short interjections to the wrong speaker, which could have very serious consequences, for example, when processing clinical or financial conversations in which affirmations or negations need to be tracked accurately. Finally, conventional speaker diarization systems have no easy mechanism for taking advantage of linguistic cues, which are particularly prominent in many natural conversations. For instance, "How often have you been taking the medication?" in a clinical conversation is most likely uttered by the medical provider, not the patient. Likewise, "When should we turn in the homework?" is most likely uttered by a student, not a teacher. Linguistic cues also signal a high probability of a change in speaker turn, for example, after a question.


There are a few exceptions to the conventional speaker diarization system, and one such exception was reported in our recent blog post. In that work, the hidden states of the recurrent neural network (RNN) tracked the speakers, circumventing the weakness of the clustering stage. The work reported in this post takes a different approach and incorporates linguistic cues, as well.

An Integrated Speech Recognition and Speaker Diarization System

We developed a novel and simple model that not only combines acoustic and linguistic cues seamlessly, but also combines speaker diarization and speech recognition into one system. The integrated model does not degrade the speech recognition performance significantly compared to an equivalent recognition only system.


The key insight in our work was to recognize that the RNN-T architecture is well-suited to integrate acoustic and linguistic cues. The RNN-T model consists of three different networks: (1) a transcription network (or encoder) that maps the acoustic frames to a latent representation, (2) a prediction network that predicts the next target label given the previous target labels, and (3) a joint network that combines the output of the previous two networks and generates a probability distribution over the set of output labels at that time step. Note, there is a feedback loop in the architecture (diagram below) where previously recognized words are fed back as input, and this allows the RNN-T model to incorporate linguistic cues, such as the end of a question.
An integrated speech recognition and speaker diarization system where the system jointly infers who spoke when and what.
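To make the three-network structure concrete, here is a minimal, illustrative TensorFlow/Keras sketch of an RNN-T-style model. Layer types, sizes, and the way the joint network combines the two representations (a simple additive combination) are assumptions for illustration, not the exact configuration used in the paper.

```python
# An illustrative sketch of the three RNN-T components described above.
import tensorflow as tf

vocab_size, hidden = 4096, 640   # assumed sizes, for illustration only

# (1) Transcription network (encoder): acoustic frames -> latent representation.
encoder = tf.keras.Sequential([
    tf.keras.layers.LSTM(hidden, return_sequences=True),
    tf.keras.layers.LSTM(hidden, return_sequences=True),
])

# (2) Prediction network: previously emitted labels -> next-label context.
prediction = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, hidden),
    tf.keras.layers.LSTM(hidden, return_sequences=True),
])

# (3) Joint network: combines encoder frame t and prediction state u into a
# distribution over output labels (words, speaker-role tags, and blank).
class JointNetwork(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()
        self.proj = tf.keras.layers.Dense(hidden, activation="tanh")
        self.out = tf.keras.layers.Dense(vocab_size)

    def call(self, enc, pred):
        # Broadcast over (T acoustic frames) x (U label positions).
        enc = enc[:, :, tf.newaxis, :]    # [B, T, 1, H]
        pred = pred[:, tf.newaxis, :, :]  # [B, 1, U, H]
        return self.out(self.proj(enc + pred))  # [B, T, U, vocab]
```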

Training the RNN-T model on accelerators like graphical processing units (GPU) or tensor processing units (TPU) is non-trivial as computation of the loss function requires running the forward-backward algorithm, which includes all possible alignments of the input and the output sequences. This issue was addressed recently in a TPU friendly implementation of the forward-backward algorithm, which recasts the problem as a sequence of matrix multiplications. We also took advantage of an efficient implementation of the RNN-T loss in TensorFlow that allowed quick iterations of model development and trained a very deep network.
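For reference, the quantity the loss computes can be written as a small forward recursion over a T x U lattice of frame/label positions. The NumPy sketch below shows the forward half of the forward-backward algorithm as an explicit double loop for clarity; the TPU-friendly implementation mentioned above recasts this same recursion as a sequence of matrix multiplications, which this sketch does not attempt to reproduce.

```python
# A minimal NumPy sketch of the forward recursion underlying the RNN-T loss.
import numpy as np

def rnnt_forward_log_prob(log_p_blank, log_p_label):
    """log_p_blank[t, u]: log-prob of emitting blank at frame t, label position u.
    log_p_label[t, u]: log-prob of emitting the (u+1)-th target label there.
    Returns the total log-probability of the target sequence (its negative is the loss)."""
    T, U1 = log_p_blank.shape            # U1 = number of target labels + 1
    log_alpha = np.full((T, U1), -np.inf)
    log_alpha[0, 0] = 0.0
    for t in range(T):
        for u in range(U1):
            if t > 0:                    # arrive by consuming a frame (blank)
                log_alpha[t, u] = np.logaddexp(
                    log_alpha[t, u], log_alpha[t - 1, u] + log_p_blank[t - 1, u])
            if u > 0:                    # arrive by emitting the next label
                log_alpha[t, u] = np.logaddexp(
                    log_alpha[t, u], log_alpha[t, u - 1] + log_p_label[t, u - 1])
    # Final blank to terminate after the last frame and the last label.
    return log_alpha[T - 1, U1 - 1] + log_p_blank[T - 1, U1 - 1]
```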


The integrated model can be trained just like a speech recognition system. The reference transcripts for training contain words spoken by a speaker followed by a tag that defines the role of the speaker. For example, “When is the homework due?” ≺student≻, “I expect you to turn them in tomorrow before class,” ≺teacher≻. Once the model is trained with examples of audio and corresponding reference transcripts, a user can feed in the recording of the conversation and expect to see an output in a similar form. Our analyses show that improvements from the RNN-T system impact all categories of errors, including short speaker turns, splitting at the word boundaries, incorrect speaker assignment in the presence of overlapping speech, and poor audio quality. Moreover, the RNN-T system exhibited consistent performance across conversations, with substantially lower variance in average error rate per conversation compared to the conventional system.
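As an illustration of this training-target format, the snippet below builds a reference transcript in which each turn's words are followed by a speaker-role tag, and shows how a decoded transcript in the same form could be split back into speaker turns. The ASCII tag spelling and the parsing helper are assumptions for illustration only, not the paper's exact format.

```python
# Illustrative example of a role-tagged reference transcript and a simple parser.
reference = ("when is the homework due <student> "
             "i expect you to turn them in tomorrow before class <teacher>")

def split_turns(transcript, roles=("<student>", "<teacher>")):
    """Group decoded words into (speaker_role, words) turns."""
    turns, current = [], []
    for token in transcript.split():
        if token in roles:
            turns.append((token.strip("<>"), " ".join(current)))
            current = []
        else:
            current.append(token)
    return turns

print(split_turns(reference))
# [('student', 'when is the homework due'),
#  ('teacher', 'i expect you to turn them in tomorrow before class')]
```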
A comparison of errors committed by the conventional system vs. the RNN-T system, as categorized by human annotators.


Furthermore, this integrated model can predict other labels necessary for generating more reader-friendly ASR transcripts. For example, we have been able to successfully improve our transcripts with punctuation and capitalization symbols using the appropriately matched training data. Our outputs have lower punctuation and capitalization errors than our previous models that were separately trained and added as a post-processing step after ASR.

This model has now become a standard component in our project on understanding medical conversations and is also being adopted more widely in our non-medical speech services.

Acknowledgements

We would like to thank Hagen Soltau without whose contributions this work would not have been possible. This work was performed in collaboration with Google Brain and Speech teams.
