Improving the accuracy of automated cleft speech evaluation

James R Seaward, Rami Robert Hallac, Megan Vucovich, Blaike Dumas, Cortney Van'T Slot, Caitlin Lentz, Julie Cook, Alex A Kane

Research output: Contribution to journal › Article

Abstract

An automated cleft speech evaluator, available globally, has the potential to dramatically improve quality of life for children born with a cleft palate, as well as to eliminate bias in outcome collaboration between cleft centers in the developed world. Our automated cleft speech evaluator interprets resonance and articulatory cleft speech errors to distinguish between normal speech, velopharyngeal dysfunction, and articulatory speech errors. This article describes a significant update to the efficiency of our evaluator. Speech samples from our Craniofacial Team clinic were recorded and rated independently by two experienced speech pathologists: 60 patients were used to train the evaluator, which was then tested on the 13 subsequent patients. All sounds from 6 of the CAPS-A-AM sentences were used to train the system. The inter-speech-pathologist agreement rate was 79%. Our cleft speech evaluator achieved 85% agreement with the combined speech pathologist rating, compared with 65% agreement using the previous training model. This automated cleft speech evaluator demonstrates good accuracy despite low training numbers. We anticipate that as the number of training samples increases, its accuracy will match that of human listeners.
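The agreement figures reported above (79% between pathologists, 85% evaluator vs. combined rating) are percent-agreement statistics. A minimal sketch of how such a rate can be computed is shown below; the class labels and sample ratings are illustrative assumptions, not the study's actual data or method.

```python
# Hypothetical sketch of a percent-agreement calculation between two raters.
# The labels and ratings below are illustrative only.

def percent_agreement(ratings_a, ratings_b):
    """Fraction of samples on which two raters assign the same label."""
    assert len(ratings_a) == len(ratings_b)
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# Example with the evaluator's three classes:
# "normal" speech, "vpd" (velopharyngeal dysfunction), "artic" (articulatory error)
slp_ratings = ["normal", "vpd", "artic", "normal", "vpd"]
evaluator_output = ["normal", "vpd", "artic", "vpd", "vpd"]
print(f"{percent_agreement(slp_ratings, evaluator_output):.0%}")  # → 80%
```

Simple percent agreement, unlike chance-corrected measures such as Cohen's kappa, does not account for agreement expected by chance, which matters when comparing rater pairs across studies.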

Original language: English (US)
Journal: Journal of Cranio-Maxillofacial Surgery
DOIs
State: Accepted/In press - Jan 1 2018

Keywords

  • Automated speech evaluator
  • Cleft palate
  • Cleft speech
  • Speech recognition software
  • Velopharyngeal insufficiency

ASJC Scopus subject areas

  • Surgery
  • Oral Surgery
  • Otorhinolaryngology
