Research Article
Online Publication Date: Jan 20, 2020

Predicting Quality of Clinical Performance From Cardiology Fellowship Applications

Page Range: 258–264

Variables in cardiology fellowship applications have not been objectively analyzed against applicants' subsequent clinical performance. We investigated possible correlations in a retrospective cohort study of 65 cardiology fellows at the Mayo Clinic (Rochester, Minn) who began 2 years of clinical training from July 2007 through July 2013. Application variables included the strength of comparative statements in recommendation letters and the authors' academic ranks, membership status in the Alpha Omega Alpha Honor Medical Society, awards earned, volunteer activities, United States Medical Licensing Examination (USMLE) scores, advanced degrees, publications, and completion of a residency program ranked in the top 6 in the United States. The outcome was clinical performance as measured by a mean of faculty evaluation scores during clinical training.

The overall mean evaluation score was 4.07 ± 0.18 (scale, 1–5). After multivariable analysis, evaluation scores were associated with Alpha Omega Alpha designation (β=0.13; 95% CI, 0.01–0.25; P=0.03), residency program reputation (β=0.13; 95% CI, 0.05–0.21; P=0.004), and strength of comparative statements in recommendation letters (β=0.08; 95% CI, 0.01–0.15; P=0.02), particularly in letters from residency program directors (β=0.05; 95% CI, 0.01–0.08; P=0.009).

Objective factors to consider in the cardiology fellowship application include Alpha Omega Alpha membership, residency program reputation, and comparative statements from residency program directors.

The quality of patient care that academic cardiology practices provide depends on recruiting excellent fellows. Fellows often become faculty members at the institutions where they train and thus influence the future of academic cardiology. The recruitment process is crucial but subjective, often because selection committees lack strong objective evidence for evaluating candidates.

Investigators have studied some aspects of application data to predict subsequent performance. Low Medical College Admission Test scores have correlated with adverse disciplinary action by state licensing boards,1 medical school clerkship grades, and United States Medical Licensing Examination (USMLE) scores.2–4 The USMLE scores appear to correlate with performance during surgical internship.5 Medical school grades have predicted internship performance in various specialties.6,7 Among internal medicine residency candidates, positive statements in recommendation letters comparing applicants with their peers are associated with professionalism during internship.8 Useful data are available to guide candidate selection for general surgery and surgical subspecialty programs9–11; however, whether these findings can be extended to other subspecialty fellowships is unclear. Given the limited research into best educational practices in cardiology12 and the lack of data to predict clinical performance among applicants for cardiology fellowships, improving cardiology training necessitates collecting data specific to cardiology fellowship applicants.13,14

To determine whether data collected from standard cardiology fellowship applications can be used to predict clinical performance during training, we conducted a retrospective analysis of cardiac fellows from our institution.

Study Population and Methods

We conducted a retrospective cohort study of 7 classes of cardiology fellows who began 2 years of core clinical training from July 2007 through July 2013 at the Mayo Clinic Cardiovascular Diseases Fellowship program (Rochester, Minn), an accredited general cardiovascular program. Annually, 8 or 9 fellows entered through the National Resident Matching Program, and 1 to 3 fellows joined through research pathways sponsored by the National Institutes of Health or our institution's Clinical Investigator Program. All fellows completed 2 years of required clinical rotations in conjunction with research or subspecialty training.

All fellows who entered the program during the study period were eligible for inclusion. All data were internal, confidential, and deidentified for analytical purposes. Data on rejected applicants were excluded. This study received an exemption from the Mayo Clinic Institutional Review Board.

Data Collection

From application files, we extracted data on variables that fellowship programs typically use to evaluate candidates (Table I).15,16

TABLE I. Independent Variables

Some medical schools have no Alpha Omega Alpha Honor Medical Society (AΩA) chapter, so AΩA membership was a 3-level variable (yes, no, and “not offered”).

We analyzed 2 variables regarding the authors of each candidate's recommendation letters. First, we assigned a numerical value to the academic rank of each letter's author8: full professor, 5; associate professor, 4; assistant professor, 3; instructor, 2; community physician, 1; and unknown or not stated, 0. When multiple authors signed a letter, we recorded the senior author's rank. We averaged these scores to determine a mean academic rank of the authors for each candidate. Next, we recorded whether each letter was written by the candidate's residency program director or by someone who had a different title.
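As a minimal sketch of this coding scheme (the study's coding was done by hand; the function names here are ours, and we interpret "senior author" as the highest-ranked signer):

    from statistics import mean

    # Numerical coding of letter authors' academic ranks, per the scheme above.
    ACADEMIC_RANK = {
        "full professor": 5,
        "associate professor": 4,
        "assistant professor": 3,
        "instructor": 2,
        "community physician": 1,
        "unknown": 0,  # rank unknown or not stated in the letter
    }

    def letter_rank(signer_ranks: list[str]) -> int:
        """Score one letter; with multiple signers, use the senior author's
        rank (interpreted here as the highest-ranked signer)."""
        return max(ACADEMIC_RANK.get(rank, 0) for rank in signer_ranks)

    def mean_author_rank(letters: list[list[str]]) -> float:
        """Average the letter scores into one mean academic rank per candidate."""
        return mean(letter_rank(ranks) for ranks in letters)

    # Example: three letters, one co-signed by a full and an associate professor
    mean_author_rank([["full professor", "associate professor"],
                      ["assistant professor"],
                      ["unknown"]])  # (5 + 3 + 0) / 3 = 2.67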

We also determined the strongest comparative statement in each letter, defined as a phrase that directly compared a candidate with peers.8 Highly positive statements, such as "Dr. X is the best resident with whom I have worked," received a rating of "most enthusiasm" and a score of 3. Statements such as "Dr. Y is among the best residents with whom I have worked" were rated "moderate enthusiasm" and scored 2. Statements such as "Dr. Z performs at or above the level of his or her peers" were rated "neutral enthusiasm" and scored 1. Letters without a comparative statement scored 0. Quotations embedded within letters were excluded from analysis because they were not the writer's direct observations. We averaged the scores across each candidate's letters.
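A similar sketch for the comparative-statement scale; in the study, the enthusiasm ratings were assigned by human reviewers, so this code merely encodes the published scale (the names are ours):

    from statistics import mean

    # Scores for the strongest comparative statement found in each letter.
    ENTHUSIASM_SCORE = {
        "most": 3,      # e.g., "the best resident with whom I have worked"
        "moderate": 2,  # e.g., "among the best residents..."
        "neutral": 1,   # e.g., "performs at or above the level of ... peers"
        "none": 0,      # no comparative statement in the letter
    }

    def mean_statement_score(letter_ratings: list[str]) -> float:
        """Average the strongest-statement scores across a candidate's letters."""
        return mean(ENTHUSIASM_SCORE[r] for r in letter_ratings)

    mean_statement_score(["most", "none", "moderate"])  # (3 + 0 + 2) / 3 = 1.67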

To maximize agreement, 3 authors (MWC, TJB, and KWK) reviewed letters, discussed ratings, and reached consensus. Because one author (MWC) was himself a fellow in the study cohort, the other authors reviewed his letters, which were excluded from the initial calibration process.

Performance Ratings

Our primary outcome was clinical performance, measured as a composite of all clinical evaluations during the first 2 years of clinical cardiology training. After each clinical rotation, multiple faculty members rated each fellow on multiple variables (scale, 1–5). Ten variables were universal across rotations (Table II). Each evaluator's ratings were averaged into a single evaluation score, and all evaluation scores that a fellow received were then averaged into one composite score.
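The composite outcome is thus a two-stage mean: ratings are averaged within each evaluation, and evaluation scores are then averaged across a fellow's rotations. A minimal sketch under that reading (names and data structures are ours):

    from statistics import mean

    def evaluation_score(item_ratings: list[int]) -> float:
        """One evaluator's post-rotation item ratings (scale 1-5), averaged
        into a single evaluation score."""
        return mean(item_ratings)

    def composite_score(evaluations: list[list[int]]) -> float:
        """All evaluation scores a fellow received over the 2 clinical years,
        averaged into one composite outcome score."""
        return mean(evaluation_score(e) for e in evaluations)

    composite_score([[4, 4, 5, 4], [3, 4, 4, 4]])  # (4.25 + 3.75) / 2 = 4.0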

TABLE II. Core Variables for Evaluating Clinical Performance

The variables studied have been validated.17,18 Professionalism, task completion, and commitment to one's education correlate with residents' performance at our institution.8,19,20 For internal validation, we calculated the Cronbach α across all raters for the 10 universal variables.
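For reference, Cronbach α for k items with item-score variances σᵢ² and total-score variance σ_X² takes the standard form:

    \alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma_i^{2}}{\sigma_X^{2}} \right)

Values approaching 1 indicate that the items behave as a single, highly internally consistent scale.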

Statistical Analysis

Continuous variables were reported as mean ± SD or as median and interquartile range; categorical variables were reported as number and percentage. Relationships between predictor and outcome variables were analyzed with use of simple and multiple linear regression. Variables with P <0.1 on univariable regression entered the multiple-regression model and were then removed in turn, least significant first, until all remaining variables had P ≤0.05. Results of linear regression were reported as β coefficients with 95% CIs. Regression R2 values were used to compare the strength of association between different variables and models. Pairwise associations between multiple-regression variables were evaluated for collinearity. P <0.05 was considered statistically significant. Analyses were conducted with use of SAS version 9.4 (SAS Institute Inc.).
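The screening and elimination procedure can be summarized algorithmically. The study used SAS 9.4; this Python/statsmodels sketch, including every name in it, is our illustration of the described procedure, not the authors' code:

    import pandas as pd
    import statsmodels.api as sm

    def univariable_screen(df: pd.DataFrame, outcome: str) -> list[str]:
        """Keep predictors with P < 0.1 on simple linear regression."""
        keep = []
        for var in df.columns.drop(outcome):
            fit = sm.OLS(df[outcome], sm.add_constant(df[var])).fit()
            if fit.pvalues[var] < 0.1:
                keep.append(var)
        return keep

    def backward_eliminate(df: pd.DataFrame, outcome: str, predictors: list[str]):
        """Refit the multiple regression, dropping the least significant
        predictor in turn, until every remaining predictor has P <= 0.05."""
        current = list(predictors)
        while current:
            fit = sm.OLS(df[outcome], sm.add_constant(df[current])).fit()
            worst = fit.pvalues.drop("const").idxmax()
            if fit.pvalues[worst] <= 0.05:
                return fit  # betas in fit.params; 95% CIs via fit.conf_int()
            current.remove(worst)
        return None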

Results

We found 67 fellows eligible for our study but excluded 2 because of incomplete applications. The remaining 65 fellows (mean age, 32 ± 4 yr; 41 men) had completed 18 different residency programs. Of the 43 fellows whose residency program was ranked in the top 6 by the Doximity Residency Navigator,16 40 had completed the Mayo Clinic Internal Medicine Residency (Table III).

TABLE III. Distribution of Independent Variables

Of the 255 letters of recommendation, 62 (24%) had been signed by training program directors (Table IV). Three applications had no program director's letter. Three study investigators (MWC, TJB, and KWK) scored 12 letters on 3 candidates; initial agreement was satisfactory (W=0.95). Differences were reconciled, and the process was repeated for 16 letters on 4 candidates (W=0.96). The remaining 227 letters (on 58 candidates) were divided among the reviewers, and each letter received one score.

TABLE IV. Distribution of Findings in Recommendation Letters

The mean strength of statements comparing the candidate with peers was 1.0 ± 1.2. Of the 255 letters, 131 (51%) contained no comparative statement, including 28 of 62 program director letters (45%) and 103 of 193 letters from other writers (53%). Two applications had no comparative statements. The mean academic rank of letter authors was 3.6 ± 1.8, and full or associate professors authored 173 letters.

In total, 142 faculty completed 4,494 fellow evaluations. Of 27,941 evaluation items, 23 scored a 1 (0.1%), 293 scored 2 (1%), 4,009 scored 3 (14.3%), 16,830 scored 4 (60.2%), and 6,786 scored 5 (24.3%) (mean, 4.07 ± 0.18). Scores of the 10 universal evaluation items had a Cronbach α of 0.98.
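As a consistency check, the weighted mean implied by this item-level distribution can be computed directly:

    \frac{23(1) + 293(2) + 4009(3) + 16830(4) + 6786(5)}{27941} = \frac{113886}{27941} \approx 4.08

which agrees with the reported mean of 4.07 (the small difference presumably reflects averaging at the fellow level rather than the item level).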

Univariable analysis revealed that AΩA membership, completion of a top-6 residency program, and strong comparative statements (particularly from program directors) were significantly associated with the primary outcome (Table V). The lack of association between academic rank and evaluation scores persisted after excluding the 16% of letters in which no academic rank was specified.

TABLE V. Univariable Analysis of Independent Variables and Evaluation Scores

AΩA Status

Among the 25 fellows who did not complete a Mayo Clinic residency, 11 had had no AΩA opportunity. Three of the remaining 14 were AΩA members (mean evaluation score, 4.14 ± 0.12), and 11 were not (mean score, 3.98 ± 0.17) (P=0.03).

Multivariable analysis revealed that AΩA status, training at a top-6 residency program, and strong comparative statements were independently associated with evaluation scores (Table VI). Fellows who had not achieved AΩA membership where it was available had independently lower scores than did members and fellows without AΩA opportunity. The association between strong comparative statements and the primary outcome remained significant when only letters from program directors were included. Separate analyses revealed no collinearity between independent variables in the multivariable analysis.

TABLE VI. Multivariable Analysis of Independent Variables and Evaluation Scores

Discussion

Our findings of factors objectively associated with clinical performance during cardiology fellowship have implications for selecting candidates for fellowship programs.

Research productivity during residency has been associated with future research productivity.21 However, trainees who focus heavily on research may neglect clinical obligations.22 In a small study of internal medicine residents, no positive correlation was found between prior scholarly productivity and clinical performance.23 In contrast, results from a large study of residents at our institution identified a positive association.24 The absence of a similar finding in the current study may be related to small sample size, so future investigations should involve a larger cohort of cardiology fellows and multi-institutional designs.

Our finding of no association between clinical performance and advanced degrees, awards, volunteer activities, or standardized test scores does not mean that these factors are unimportant; for example, USMLE scores have been associated with performance in later training5,25 and with later performance on standardized tests.11,26,27 The time between USMLE testing and cardiology fellowship (often ≥3 yr) and the relatively narrow range of USMLE scores may have restricted our observations, and we did not compare USMLE scores with performance on the cardiology in-training examination.28 Results from other specialties suggest that USMLE scores can predict performance on in-training or board examinations,26,27,29 so similar studies of cardiology fellows are warranted.

Completing a top-6 internal medicine residency program predicted strong clinical performance.16 The methodology underlying the Doximity rankings is similar to that of the annual physician survey for the U.S. News & World Report rankings of best hospitals.30 Although perhaps not subject to rigorous academic scrutiny, the Doximity rankings are widely available and may influence candidates' perception of programs.31–33 In our study, the relationship between residency program and performance during cardiology fellowship may validate the Doximity reputation scores. Possible associations between reputation rankings and recognized criteria for the quality of graduate medical education programs should be investigated.

The association between AΩA status and performance is somewhat surprising, suggesting that early academic excellence can predict later performance. In our study, the 30 cardiology fellowship applicants without AΩA opportunity (24 from international medical schools and 6 from Mayo Medical School) performed better than did applicants who failed to achieve membership at schools with AΩA chapters. (Of note, some reputable medical schools decline AΩA participation.)

An association between AΩA status and residency program possibly confounds the association between AΩA status and fellowship performance. However, among fellows who did not complete the Mayo Clinic residency, the difference in evaluation scores between AΩA and non-AΩA members suggests that AΩA designation predicts performance independently of residency program. Larger studies are warranted.

The association between favorable comparative statements and subsequent clinical performance replicates our earlier findings that recommendation letters predict professionalism scores among first-year internal medicine residents.8 Of note, the association in the current study pertains to global clinical performance variables related to professionalism (Table II), supporting the power of observation-based assessments in predicting subsequent professional behaviors.

The association between comparative statements and performance was strongest for letters signed by residency program directors (Table VI). Although program directors may have relatively little direct contact with residents, their letters may contain information from (or may actually have been written by) associate directors who can judge relative performance on the basis of multiple meaningful clinical observations over time.34 In contrast, a writer's academic rank mattered little, implying that a writer's academic reputation counts for less than the writer's relationship with the resident.

Fellowship candidates need not tailor their applications to match the predictors identified in our study. Although our findings add objectivity to application review, other variables influence fellowship performance, and selection committees should continue to evaluate applications on their own merits.

Study Limitations

Our single-institution study evaluated relatively few fellows. Nevertheless, the Mayo Clinic Cardiovascular Diseases Fellowship is one of the largest in the United States, making it suitable for this research in terms of sample size and representation of a typical program. Furthermore, our independent variables are used by other cardiology training programs when selecting candidates. Our findings should be validated in multi-institutional studies that include a diverse group of cardiology trainees and fellowship programs.

Forty of our 65 fellows had completed the Mayo Clinic Internal Medicine Residency Program (6th in the 2015 Doximity rankings) and may have excelled during fellowship because of familiarity with our institution. Our findings may therefore pertain most to cardiology fellowship programs that match many internal candidates.

We applied the 2015 Doximity rankings to fellows who began our program from 2007 through 2013. Because substantial year-to-year variability in the rankings is unlikely and historical rankings were not available, we think that applying these Doximity data to the period of our study was appropriate.

Our primary outcome, the composite of all clinical evaluations during the first 2 years of cardiology training, has not been formally validated; however, previous assessments with the same instrument items identified valid content, internal structure, and relations to other variables.8,18,19 Furthermore, the current study's outcome had strong internal consistency.17

Finally, our study was limited by the lack of suitable studies for comparison with other groups of cardiology trainees.12

Conclusions

To our knowledge, our investigation is the first to predict educational performance outcomes in cardiology fellows by using fellowship application data. We found that comparative statements in recommendation letters, membership in the AΩA Honor Medical Society, and completion of a top-6 residency program were associated with clinical performance in a large academic cardiology fellowship program. Fellowship selection committees may consider these variables when evaluating candidates for their programs.

References

1. Papadakis MA, Teherani A, Banach MA, Knettler TR, Rattner SL, Stern DT, et al. Disciplinary action by medical boards and prior behavior in medical school. N Engl J Med 2005;353(25):2673-82.

2. Huff KL, Koenig JA, Treptau MM, Sireci SG. Validity of MCAT scores for predicting clerkship performance of medical students grouped by sex and ethnicity. Acad Med 1999;74(10 Suppl):S41-4.

3. Julian ER. Validity of the Medical College Admission Test for predicting medical school performance. Acad Med 2005;80(10):910-7.

4. Silver B, Hodgson CS. Evaluating GPAs and MCAT scores as predictors of NBME I and clerkship performances based on students' data from one undergraduate institution. Acad Med 1997;72(5):394-6.

5. Andriole DA, Jeffe DB, Whelan AJ. What predicts surgical internship performance? Am J Surg 2004;188(2):161-4.

6. Hamdy H, Prasad K, Anderson MB, Scherpbier A, Williams R, Zwierstra R, Cuddihy H. BEME systematic review: predictive values of measurements obtained in medical schools and future performance in medical practice. Med Teach 2006;28(2):103-16.

7. Markert RJ. The relationship of academic measures in medical school to performance after graduation. Acad Med 1993;68(2 Suppl):S31-4.

8. Cullen MW, Reed DA, Halvorsen AJ, Wittich CM, Kreuziger LM, Keddis MT, et al. Selection criteria for internal medicine residency applicants and professionalism ratings during internship. Mayo Clin Proc 2011;86(3):197-202.

9. Selber JC, Tong W, Koshy J, Ibrahim A, Liu J, Butler C. Correlation between trainee candidate selection criteria and subsequent performance. J Am Coll Surg 2014;219(5):951-7.

10. Sharp C, Plank A, Dove J, Woll N, Hunsinger M, Morgan A, et al. The predictive value of application variables on the global rating of applicants to a general surgery residency program. J Surg Educ 2015;72(1):148-55.

11. Grewal SG, Yeung LS, Brandes SB. Predictors of success in a urology residency program. J Surg Educ 2013;70(1):138-43.

12. Allred C, Berlacher K, Aggarwal S, Auseon AJ. Mind the gap: representation of medical education in cardiology-related articles and journals. J Grad Med Educ 2016;8(3):341-5.

13. Halperin JL, Williams ES, Fuster V. COCATS 4 introduction. J Am Coll Cardiol 2015;65(17):1724-33.

14. Sinha SS, Julien HM, Krim SR, Ijioma NN, Baron SJ, Rock AJ, et al. COCATS 4: securing the future of cardiovascular medicine [published erratum appears in J Am Coll Cardiol 2015;65(24):2677]. J Am Coll Cardiol 2015;65(17):1907-14.

15. Alpha Omega Alpha Honor Society. Alpha Omega Alpha chapters. Available from: https://alphaomegaalpha.org/chapters.html [cited 2016 Jun 6].

16. Doximity. Residency navigator. Available from: https://residency.doximity.com/programs?residency_specialty_id=39 [cited 2016 Jun 6].

17. Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: theory and application. Am J Med 2006;119(2):166.e7-16.

18. Beckman TJ, Mandrekar JN, Engstler GJ, Ficalora RD. Determining reliability of clinical assessment scores in real time. Teach Learn Med 2009;21(3):188-94.

19. Reed DA, West CP, Mueller PS, Ficalora RD, Engstler GJ, Beckman TJ. Behaviors of highly professional resident physicians. JAMA 2008;300(11):1326-33.

20. Beckman TJ, Reed DA, Shanafelt TD, West CP. Resident physician well-being and assessments of their knowledge and clinical performance. J Gen Intern Med 2012;27(3):325-30.

21. Yang G, Zaid UB, Erickson BA, Blaschko SD, Carroll PR, Breyer BN. Urology resident publication output and its relationship to future academic achievement. J Urol 2011;185(2):642-6.

22. Schott NJ, Emerick TD, Metro DG, Sakai T. The cost of resident scholarly activity and its effect on resident clinical experience. Anesth Analg 2013;117(5):1211-6.

23. Cavalcanti RB, Detsky AS. Publishing history does not correlate with clinical performance among internal medicine residents. Med Educ 2010;44(5):468-74.

24. Seaburg LA, Wang AT, West CP, Reed DA, Halvorsen AJ, Engstler G, et al. Associations between resident physicians' publications and clinical performance during residency training. BMC Med Educ 2016;16:22.

25. Greenburg DL, Durning SJ, Cohen DL, Cruess D, Jackson JL. Identifying medical students likely to exhibit poor professionalism and knowledge during internship. J Gen Intern Med 2007;22(12):1711-7.

26. Spurlock DR Jr, Holden C, Hartranft T. Using United States Medical Licensing Examination (USMLE) examination results to predict later in-training examination performance among general surgery residents. J Surg Educ 2010;67(6):452-6.

27. Thundiyil JG, Modica RF, Silvestri S, Papa L. Do United States Medical Licensing Examination (USMLE) scores predict in-training test performance for emergency medicine residents? J Emerg Med 2010;38(1):65-9.

28. Kuvin JT, Soto A, Foster L, Dent J, Kates AM, Polk DM, et al. The cardiovascular in-training examination: development, implementation, results, and future directions. J Am Coll Cardiol 2015;65(12):1218-28.

29. Kay C, Jackson JL, Frank M. The relationship between internal medicine residency graduate performance on the ABIM certifying examination, yearly in-service training examinations, and the USMLE Step 1 Examination. Acad Med 2015;90(1):100-4.

30. U.S. News & World Report. U.S. News hospital rankings and ratings. Available from: https://health.usnews.com/best-hospitals [cited 2016 Dec 12].

31. Wilson AB, Torbeck LJ, Dunnington GL. Ranking surgical residency programs: reputation survey or outcomes measures? J Surg Educ 2015;72(6):e243-50.

32. Rolston AM, Hartley SE, Khandelwal S, Christner JG, Cheng DF, Caty RM, Santen SA. Effect of Doximity residency rankings on residency applicants' program choices. West J Emerg Med 2015;16(6):889-93.

33. Peterson WJ, Hopson LR, Khandelwal S, White M, Gallahue FE, Burkhardt J, et al. Impact of Doximity residency rankings on emergency medicine applicant rank lists. West J Emerg Med 2016;17(3):350-4.

34. Durning SJ, Cation LJ, Markert RJ, Pangaro LN. Assessing the reliability and validity of the mini-clinical evaluation exercise for internal medicine residency training. Acad Med 2002;77(9):900-4.

Copyright: © 2020 by the Texas Heart® Institute, Houston

Contributor Notes

Address for reprints: Michael W. Cullen, MD, Department of Cardiovascular Medicine, Mayo Clinic, 200 First St. SW, Rochester, MN 55905. E-mail: cullen.michael@mayo.edu

This work was funded in part by the 2016 Mayo Clinic Endowment for Education Research Award.