Improving the accuracy of automated cleft speech evaluation

James R Seaward, Rami Robert Hallac, Megan Vucovich, Blaike Dumas, Cortney Van'T Slot, Caitlin Lentz, Julie Cook, Alex A Kane

Research output: Contribution to journal › Article

Abstract

An automated cleft speech evaluator, available globally, has the potential to dramatically improve quality of life for children born with a cleft palate, as well as to eliminate bias in outcome collaboration between cleft centers in the developed world. Our automated cleft speech evaluator interprets resonance and articulatory cleft speech errors to distinguish between normal speech, velopharyngeal dysfunction, and articulatory speech errors. This article describes a significant update in the accuracy of our evaluator. Speech samples from our Craniofacial Team clinic were recorded and rated independently by two experienced speech pathologists: 60 patients were used to train the evaluator, which was then tested on the 13 subsequent patients. All sounds from 6 of the CAPS-A-AM sentences were used to train the system. The inter-speech-pathologist agreement rate was 79%. Our cleft speech evaluator achieved 85% agreement with the combined speech pathologist rating, compared with 65% agreement using the previous training model. This automated cleft speech evaluator demonstrates good accuracy despite low training numbers. We anticipate that as the training samples increase, its accuracy will match that of human listeners.
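The abstract's 79%, 85%, and 65% figures are agreement rates between raters (or between the evaluator and the combined rater judgment). The paper's exact formula is not given here; as a hedged illustration only, the simplest such metric is raw percent agreement over per-patient labels, which can be sketched as:

```python
# Illustrative sketch (not taken from the paper): raw percent agreement
# between two raters over per-patient category labels. The label names
# below ("normal", "VPD", "articulation") are hypothetical stand-ins for
# the three categories the evaluator distinguishes.

def percent_agreement(ratings_a, ratings_b):
    """Fraction of cases on which two raters assign the same label."""
    if len(ratings_a) != len(ratings_b):
        raise ValueError("rating lists must be the same length")
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# Hypothetical example: two speech pathologists rating five patients.
slp_1 = ["normal", "VPD", "articulation", "normal", "VPD"]
slp_2 = ["normal", "VPD", "normal", "normal", "VPD"]
print(f"{percent_agreement(slp_1, slp_2):.0%}")  # 4 of 5 match -> 80%
```

Note that raw percent agreement does not correct for chance agreement; studies of this kind sometimes report a chance-corrected statistic such as Cohen's kappa instead, but the abstract does not specify which was used.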

Original language: English (US)
Journal: Journal of Cranio-Maxillofacial Surgery
DOI: 10.1016/j.jcms.2018.09.014
State: Accepted/In press - Jan 1 2018

Keywords

  • Automated speech evaluator
  • Cleft palate
  • Cleft speech
  • Speech recognition software
  • Velopharyngeal insufficiency

ASJC Scopus subject areas

  • Surgery
  • Oral Surgery
  • Otorhinolaryngology

Cite this

@article{2fcc23d37395419284b48b0863ad6c11,
title = "Improving the accuracy of automated cleft speech evaluation",
abstract = "An automated cleft speech evaluator, available globally, has the potential to dramatically improve quality of life for children born with a cleft palate, as well as eliminating bias for outcome collaboration between cleft centers in the developed world. Our automated cleft speech evaluator interprets resonance and articulatory cleft speech errors to distinguish between normal speech, velopharyngeal dysfunction and articulatory speech errors. This article describes a significant update in the efficiency of our evaluator. Speech samples from our Craniofacial Team clinic were recorded and rated independently by two experienced speech pathologists: 60 patients were used to train the evaluator, and the evaluator was tested on the 13 subsequent patients. All sounds from 6 of the CAPS-A-AM sentences were used to train the system. The inter-speech pathologist agreement rate was 79{\%}. Our cleft speech evaluator achieved 85{\%} agreement with the combined speech pathologist rating, compared with 65{\%} agreement using the previous training model. This automated cleft speech evaluator demonstrates good accuracy despite low training numbers. We anticipate that as the training samples increase, the accuracy will match human listeners.",
keywords = "Automated speech evaluator, Cleft palate, Cleft speech, Speech recognition software, Velopharyngeal insufficiency",
author = "Seaward, {James R} and Hallac, {Rami Robert} and Megan Vucovich and Blaike Dumas and {Van'T Slot}, Cortney and Caitlin Lentz and Julie Cook and Kane, {Alex A}",
year = "2018",
month = "1",
day = "1",
doi = "10.1016/j.jcms.2018.09.014",
language = "English (US)",
journal = "Journal of Cranio-Maxillo-Facial Surgery",
issn = "1010-5182",
publisher = "Churchill Livingstone",
}

TY - JOUR
T1 - Improving the accuracy of automated cleft speech evaluation
AU - Seaward, James R
AU - Hallac, Rami Robert
AU - Vucovich, Megan
AU - Dumas, Blaike
AU - Van'T Slot, Cortney
AU - Lentz, Caitlin
AU - Cook, Julie
AU - Kane, Alex A
PY - 2018/1/1
Y1 - 2018/1/1
N2 - An automated cleft speech evaluator, available globally, has the potential to dramatically improve quality of life for children born with a cleft palate, as well as eliminating bias for outcome collaboration between cleft centers in the developed world. Our automated cleft speech evaluator interprets resonance and articulatory cleft speech errors to distinguish between normal speech, velopharyngeal dysfunction and articulatory speech errors. This article describes a significant update in the efficiency of our evaluator. Speech samples from our Craniofacial Team clinic were recorded and rated independently by two experienced speech pathologists: 60 patients were used to train the evaluator, and the evaluator was tested on the 13 subsequent patients. All sounds from 6 of the CAPS-A-AM sentences were used to train the system. The inter-speech pathologist agreement rate was 79%. Our cleft speech evaluator achieved 85% agreement with the combined speech pathologist rating, compared with 65% agreement using the previous training model. This automated cleft speech evaluator demonstrates good accuracy despite low training numbers. We anticipate that as the training samples increase, the accuracy will match human listeners.
AB - An automated cleft speech evaluator, available globally, has the potential to dramatically improve quality of life for children born with a cleft palate, as well as eliminating bias for outcome collaboration between cleft centers in the developed world. Our automated cleft speech evaluator interprets resonance and articulatory cleft speech errors to distinguish between normal speech, velopharyngeal dysfunction and articulatory speech errors. This article describes a significant update in the efficiency of our evaluator. Speech samples from our Craniofacial Team clinic were recorded and rated independently by two experienced speech pathologists: 60 patients were used to train the evaluator, and the evaluator was tested on the 13 subsequent patients. All sounds from 6 of the CAPS-A-AM sentences were used to train the system. The inter-speech pathologist agreement rate was 79%. Our cleft speech evaluator achieved 85% agreement with the combined speech pathologist rating, compared with 65% agreement using the previous training model. This automated cleft speech evaluator demonstrates good accuracy despite low training numbers. We anticipate that as the training samples increase, the accuracy will match human listeners.
KW - Automated speech evaluator
KW - Cleft palate
KW - Cleft speech
KW - Speech recognition software
KW - Velopharyngeal insufficiency
UR - http://www.scopus.com/inward/record.url?scp=85056234699&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85056234699&partnerID=8YFLogxK
U2 - 10.1016/j.jcms.2018.09.014
DO - 10.1016/j.jcms.2018.09.014
M3 - Article
C2 - 30420149
AN - SCOPUS:85056234699
JO - Journal of Cranio-Maxillo-Facial Surgery
JF - Journal of Cranio-Maxillo-Facial Surgery
SN - 1010-5182
ER -