Integrating articulatory information in deep learning-based text-to-speech synthesis

Beiming Cao, Myungjong Kim, Jan Van Santen, Ted Mau, Jun Wang

Research output: Contribution to journal › Article

5 Citations (Scopus)

Abstract

Articulatory information has been shown to be effective in improving the performance of hidden Markov model (HMM)-based text-to-speech (TTS) synthesis. Recently, deep learning-based TTS has outperformed HMM-based approaches. However, articulatory information has rarely been integrated into deep learning-based TTS. This paper investigated the effectiveness of integrating articulatory movement data into deep learning-based TTS. The integration of articulatory information was achieved in two ways: (1) direct integration, where articulatory and acoustic features were the output of a deep neural network (DNN), and (2) direct integration plus forward-mapping, where the output articulatory features were mapped to acoustic features by an additional DNN; these forward-mapped acoustic features were then combined with the output acoustic features to produce the final acoustic features. Articulatory (tongue and lip) and acoustic data collected from male and female speakers were used in the experiment. Both objective measures and subjective judgments by human listeners showed that the approaches integrating articulatory information outperformed the baseline approach (without articulatory information) in terms of naturalness and speaker voice identity (voice similarity).
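
The abstract outlines two ways of wiring articulatory features into a DNN-based TTS acoustic model. The sketch below is not the authors' implementation; the layer sizes, feature dimensions, and the equal-weight combination are illustrative assumptions. It shows, under those assumptions, how (1) joint prediction of acoustic and articulatory features and (2) the additional articulatory-to-acoustic forward-mapping network could be combined in PyTorch.

# Minimal sketch (not the paper's code) of the two integration schemes.
# All dimensions and the 0.5/0.5 mixing weight are assumed for illustration.
import torch
import torch.nn as nn

LING_DIM, ACOUSTIC_DIM, ARTIC_DIM = 300, 60, 12  # assumed feature sizes

class DirectIntegrationDNN(nn.Module):
    """(1) Direct integration: one DNN predicts acoustic and articulatory
    features jointly from linguistic (text-derived) input features."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(LING_DIM, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
        )
        self.acoustic_head = nn.Linear(512, ACOUSTIC_DIM)
        self.artic_head = nn.Linear(512, ARTIC_DIM)

    def forward(self, ling):
        h = self.trunk(ling)
        return self.acoustic_head(h), self.artic_head(h)

class ForwardMappingDNN(nn.Module):
    """Additional DNN mapping articulatory features to acoustic features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ARTIC_DIM, 256), nn.ReLU(),
            nn.Linear(256, ACOUSTIC_DIM),
        )

    def forward(self, artic):
        return self.net(artic)

def synthesize_frame(ling, tts_dnn, fwd_dnn, alpha=0.5):
    """(2) Direct integration plus forward-mapping: combine the directly
    predicted acoustic features with those mapped from the predicted
    articulatory features (alpha is an assumed mixing weight)."""
    acoustic_direct, artic_pred = tts_dnn(ling)
    acoustic_mapped = fwd_dnn(artic_pred)
    return alpha * acoustic_direct + (1 - alpha) * acoustic_mapped

# Usage on a dummy batch of frame-level linguistic features
if __name__ == "__main__":
    tts_dnn, fwd_dnn = DirectIntegrationDNN(), ForwardMappingDNN()
    ling = torch.randn(8, LING_DIM)
    acoustic = synthesize_frame(ling, tts_dnn, fwd_dnn)
    print(acoustic.shape)  # torch.Size([8, 60])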

Original language: English (US)
Pages (from-to): 254-258
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2017-August
DOI: 10.21437/Interspeech.2017-1762
State: Published - Jan 1 2017

Keywords

  • articulatory data
  • deep learning
  • deep neural network
  • text-to-speech synthesis

ASJC Scopus subject areas

  • Language and Linguistics
  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Modeling and Simulation

Cite this

Integrating articulatory information in deep learning-based text-to-speech synthesis. / Cao, Beiming; Kim, Myungjong; Van Santen, Jan; Mau, Ted; Wang, Jun.

In: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, Vol. 2017-August, 01.01.2017, p. 254-258.

Research output: Contribution to journal › Article

@article{3722ca82118e402cb6edc6a1e798bb31,
title = "Integrating articulatory information in deep learning-based text-To-speech synthesis",
abstract = "Articulatory information has been shown to be effective in improving the performance of hidden Markov model (HMM)-based text-To-speech (TTS) synthesis. Recently, deep learningbased TTS has outperformed HMM-based approaches. However, articulatory information has rarely been integrated in deep learning-based TTS. This paper investigated the effectiveness of integrating articulatory movement data to deep learning-based TTS. The integration of articulatory information was achieved in two ways: (1) direct integration, where articulatory and acoustic features were the output of a deep neural network (DNN), and (2) direct integration plus forward-mapping, where the output articulatory features were mapped to acoustic features by an additional DNN; These forward-mapped acoustic features were then combined with the output acoustic features to produce the final acoustic features. Articulatory (tongue and lip) and acoustic data collected from male and female speakers were used in the experiment. Both objective measures and subjective judgment by human listeners showed the approaches integrated articulatory information outperformed the baseline approach (without using articulatory information) in terms of naturalness and speaker voice identity (voice similarity).",
keywords = "articulatory data, deep learning, deep neural network, Text-To-speech synthesis",
author = "Beiming Cao and Myungjong Kim and {Van Santen}, Jan and Ted Mau and Jun Wang",
year = "2017",
month = "1",
day = "1",
doi = "10.21437/Interspeech.2017-1762",
language = "English (US)",
volume = "2017-August",
pages = "254--258",
journal = "Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH",
issn = "2308-457X",

}

TY - JOUR

T1 - Integrating articulatory information in deep learning-based text-to-speech synthesis

AU - Cao, Beiming

AU - Kim, Myungjong

AU - Van Santen, Jan

AU - Mau, Ted

AU - Wang, Jun

PY - 2017/1/1

Y1 - 2017/1/1

N2 - Articulatory information has been shown to be effective in improving the performance of hidden Markov model (HMM)-based text-to-speech (TTS) synthesis. Recently, deep learning-based TTS has outperformed HMM-based approaches. However, articulatory information has rarely been integrated into deep learning-based TTS. This paper investigated the effectiveness of integrating articulatory movement data into deep learning-based TTS. The integration of articulatory information was achieved in two ways: (1) direct integration, where articulatory and acoustic features were the output of a deep neural network (DNN), and (2) direct integration plus forward-mapping, where the output articulatory features were mapped to acoustic features by an additional DNN; these forward-mapped acoustic features were then combined with the output acoustic features to produce the final acoustic features. Articulatory (tongue and lip) and acoustic data collected from male and female speakers were used in the experiment. Both objective measures and subjective judgments by human listeners showed that the approaches integrating articulatory information outperformed the baseline approach (without articulatory information) in terms of naturalness and speaker voice identity (voice similarity).

AB - Articulatory information has been shown to be effective in improving the performance of hidden Markov model (HMM)-based text-to-speech (TTS) synthesis. Recently, deep learning-based TTS has outperformed HMM-based approaches. However, articulatory information has rarely been integrated into deep learning-based TTS. This paper investigated the effectiveness of integrating articulatory movement data into deep learning-based TTS. The integration of articulatory information was achieved in two ways: (1) direct integration, where articulatory and acoustic features were the output of a deep neural network (DNN), and (2) direct integration plus forward-mapping, where the output articulatory features were mapped to acoustic features by an additional DNN; these forward-mapped acoustic features were then combined with the output acoustic features to produce the final acoustic features. Articulatory (tongue and lip) and acoustic data collected from male and female speakers were used in the experiment. Both objective measures and subjective judgments by human listeners showed that the approaches integrating articulatory information outperformed the baseline approach (without articulatory information) in terms of naturalness and speaker voice identity (voice similarity).

KW - articulatory data

KW - deep learning

KW - deep neural network

KW - text-to-speech synthesis

UR - http://www.scopus.com/inward/record.url?scp=85039167284&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85039167284&partnerID=8YFLogxK

U2 - 10.21437/Interspeech.2017-1762

DO - 10.21437/Interspeech.2017-1762

M3 - Article

AN - SCOPUS:85039167284

VL - 2017-August

SP - 254

EP - 258

JO - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH

JF - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH

SN - 2308-457X

ER -