Inter-Rater Reliability of the Mentor Behavioral Interaction Rubric
Published in | The chronicle of mentoring & coaching Vol. 7; no. SI16; p. 466 |
---|---|
Main Authors | |
Format | Journal Article |
Language | English |
Published | United States, 01.11.2023 |
Summary: | An objective assessment of a mentor's behavioral skills is needed to evaluate the effectiveness of mentor training interventions in academic settings. The Mentor Behavioral Interaction (MBI) Rubric is a newly developed, content-valid, observational measure of a mentor's behavioral skill during single-episode interactions with a mentee. The purpose of this study was to assess the inter-rater reliability (IRR) of the MBI Rubric when applied to video-recorded mentor-mentee interactions. Three of a pool of four faculty raters with expertise in mentor training synchronously rated 26 videos of mentor-mentee interactions using structured guidelines. The MBI Rubric includes six items (Part 1), each rated on a 3- or 4-point scale, and ten yes/no items (Part 2) that characterize the content of the interaction. After initial individual ratings were completed, the three raters met, reviewed disagreements, and reached final item scores by consensus or majority vote. Mean total Part 1 scores ranged from 1.42 to 2.69. IRRs ranged from 0.67 (Part 1) to 0.83 (Part 2). No training effects were observed: inter-rater standard deviations did not decrease (i.e., show less variability) over time. Rater effects were observed in initial individual scoring, with a significant difference between one rater and the other three on Part 1 individual scores but no rater effects for Part 2 scores. Initial individual scores tended to be lower than the final scores for both Parts 1 and 2. The MBI Rubric is the first observational measure to assess single episodes of video-recorded mentor-mentee interactions and has now demonstrated inter-rater reliability in addition to content validity. It may be used alongside other instruments to measure the efficacy of mentor training. Limitations include possible ceiling effects and resource-intensive administration in terms of rater expertise and time. Future work will assess the Rubric's responsiveness to change in mentor skill and its construct validity. |
---|---|
ISSN: | 2372-9848 |
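
The abstract reports IRR values of 0.67 (Part 1) and 0.83 (Part 2) but does not state which agreement statistic was used. As a hedged illustration only, and not the authors' method, the sketch below computes Fleiss' kappa, one common choice when several raters score categorical items such as the Part 2 yes/no ratings. The 26-video, 3-rater dimensions follow the abstract; the simulated data, the `fleiss_kappa` helper, and the choice of statistic are assumptions made for this example.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for a (subjects x categories) matrix of rating counts.

    counts[i, j] = number of raters who assigned category j to subject i.
    Assumes every subject is rated by the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n_subjects, _ = counts.shape
    n_raters = counts[0].sum()
    # Per-subject observed agreement, averaged over subjects.
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement from marginal category proportions.
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)
    p_e = np.square(p_j).sum()
    return float((p_bar - p_e) / (1.0 - p_e))

# Hypothetical example: one Part 2 yes/no item, 26 videos, 3 raters per video.
rng = np.random.default_rng(0)
yes_votes = rng.integers(0, 4, size=26)               # simulated "yes" counts, 0..3
counts = np.column_stack([yes_votes, 3 - yes_votes])  # columns: yes, no
print(f"Fleiss' kappa: {fleiss_kappa(counts):.2f}")
```

For the ordinal Part 1 items, an intraclass correlation coefficient would be a more typical choice; that analysis would use a raters-by-subjects score table rather than the category-count matrix shown here.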