TY - JOUR
T1 - Psychometric properties of patient-facing ehealth evaluation measures
T2 - Systematic review and analysis
AU - Wakefield, Bonnie J.
AU - Turvey, Carolyn L.
AU - Nazi, Kim M.
AU - Holman, John E.
AU - Hogan, Timothy P.
AU - Shimada, Stephanie L.
AU - Kennedy, Diana R.
N1 - Funding Information:
The work reported here was supported by the Department of Veterans Affairs Health Services Research & Development Quality Enhancement Research Initiative grant #RRP12-496 and a Career Development Award (CDA 10-210) (Shimada). Assistance was provided by Amy Blevins, University of Iowa Hardin Health Sciences Library, who assisted with the article search; Thomas Houston, MD, for review and input on project design; and Ashley McBurney for project assistance. Study sponsors provided funding, but had no role in the design or conduct of the study; sponsors did not review or approve the manuscript prior to submission. The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs.
PY - 2017/10
Y1 - 2017/10
N2 - Background: Significant resources are being invested into eHealth technology to improve health care. Few resources have focused on evaluating the impact of use on patient outcomes. A standardized set of metrics used across health systems and research will enable aggregation of data to inform improved implementation, clinical practice, and ultimately health outcomes associated with use of patient-facing eHealth technologies. Objective: The objective of this project was to conduct a systematic review to (1) identify existing instruments for eHealth research and implementation evaluation from the patient’s point of view, (2) characterize measurement components, and (3) assess psychometrics. Methods: Concepts from existing models and published studies of technology use and adoption were identified and used to inform a search strategy. Search terms were broadly categorized as platforms (eg, email), measurement (eg, survey), function/information use (eg, self-management), health care occupations (eg, nurse), and eHealth/telemedicine (eg, mHealth). A computerized database search was conducted through June 2014. Included articles (1) described development of an instrument, (2) used an instrument that could be traced back to its original publication, or (3) modified an instrument; in addition, included articles (4) had full text available in English and (5) focused on the patient perspective on technology, including patient preferences and satisfaction, engagement with technology, usability, competency and fluency with technology, computer literacy, and trust in and acceptance of technology. The review was limited to instruments that reported at least one psychometric property. Excluded were investigator-developed measures, disease-specific assessments delivered via technology or telephone (eg, a cancer-coping measure delivered via computer survey), and measures focused primarily on clinician use (eg, the electronic health record). Results: The search strategy yielded 47,320 articles. Following elimination of duplicates and non-English language publications (n=14,550) and books (n=27), another 31,647 articles were excluded through review of titles. Following a review of the abstracts of the remaining 1096 articles, 68 were retained for full-text review. Of these, 16 described an instrument and 6 used an instrument; 1 instrument was drawn from the GEM database, resulting in 23 articles for inclusion. None included a complete psychometric evaluation. The most frequently assessed property was internal consistency (21/23, 91%). Testing for aspects of validity ranged from 48% (11/23) to 78% (18/23). Approximately half (13/23, 57%) reported how to score the instrument. Only 6 (26%) assessed the readability of the instrument for end users, although all the measures rely on self-report. Conclusions: Although most measures identified in this review were published after the year 2000, rapidly changing technology makes instrument development challenging. Platform-agnostic measures need to be developed that focus on concepts important for use of any type of eHealth innovation. At present, there are important gaps in the availability of psychometrically sound measures to evaluate eHealth technologies.
KW - Computers
KW - Evaluation
KW - Psychometrics
KW - Technology
KW - Telemedicine
KW - Use-effectiveness
UR - http://www.scopus.com/inward/record.url?scp=85042774751&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85042774751&partnerID=8YFLogxK
U2 - 10.2196/JMIR.7638
DO - 10.2196/JMIR.7638
M3 - Review article
C2 - 29021128
AN - SCOPUS:85042774751
VL - 19
JO - Journal of Medical Internet Research
JF - Journal of Medical Internet Research
SN - 1439-4456
IS - 10
M1 - e346
ER -