A patient handover involves the transfer of information, responsibility, and authority in a healthcare setting. Structured handovers are critical for effective communication between care providers, and poor handovers can contribute to serious medical errors. Therefore, training health profession students to perform a structured handover effectively is a core component of their education and prepares them for clinical practice. This paper describes a study conducted to establish inter-rater reliability for a new assessment tool for evaluating learners performing handovers in a simulated setting. The tool focuses on critical items related to the handover content, process, and language present in high-quality, structured handovers. The handover simulation, part of a course called Transitions to Clerkship, was recorded for 64 groups of learners. Of these 64 recorded handovers, 30 videos were selected through a randomized block design for grading by four raters who were trained to use the tool. A two-way random-effects model was used to calculate the Intraclass Correlation Coefficient (ICC) for inter-rater reliability. The ICCs for absolute agreement and consistency were 0.507 and 0.617, respectively, suggesting a fair to good level of reliability in the context of this study. The paper concludes with a list of potential factors contributing to these reliability scores.
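To make the reported statistics concrete, the sketch below computes single-rater ICCs for absolute agreement and consistency under a two-way random-effects model, following the standard Shrout-Fleiss/McGraw-Wong definitions. This is an illustrative pure-Python implementation, not the authors' actual analysis code; the example data are hypothetical, not the study's ratings.

```python
def icc_two_way_random(ratings):
    """Single-rater ICCs under a two-way random-effects model.

    ratings: one row per subject (e.g., recorded handover), one column
    per rater. Returns (absolute_agreement, consistency), i.e., the
    McGraw-Wong ICC(A,1) and ICC(C,1) estimates.
    Illustrative sketch only; a real analysis would use a statistics package.
    """
    n = len(ratings)            # number of subjects
    k = len(ratings[0])         # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

    # Mean squares from a two-way ANOVA without replication
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_cols = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    ms_err = sum(
        (ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
        for i in range(n) for j in range(k)
    ) / ((n - 1) * (k - 1))

    # Absolute agreement penalizes systematic rater differences;
    # consistency does not.
    absolute = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
    consistency = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
    return absolute, consistency


# Hypothetical data: rater 2 scores every subject exactly 1 point higher,
# so consistency is perfect but absolute agreement is not.
a, c = icc_two_way_random([[1, 2], [2, 3], [3, 4]])
```

The gap between the two coefficients in this toy example mirrors the study's finding that consistency (0.617) exceeded absolute agreement (0.507): systematic differences in rater leniency lower absolute agreement even when raters rank learners the same way.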