The NRT Annual Meeting at the end of September 2018 was an opportunity for NRT sites across the United States to showcase their programmatic developments and advances in scientific knowledge. Lessons learned from the assessment and evaluation of NRT programs were presented in two sessions during the two-day meeting. Presentations ranged from descriptions of frameworks for the evaluation process to methods for visualizing evaluation findings.
Each presenter described the information and insight produced by evaluation activities. The range of methods and tools used across the sites reflected the priorities and novelty of the programs as well as the expertise of the participants: hierarchical models, mixed methods, interviews, rubrics, competency models, peer evaluation, social network analysis, and mental modeling. In every case, the evaluation activities produced information to drive NRT programs toward their goals. A brief description of each presentation follows.
Elijah Carter, University of Georgia, described a hierarchical data use model used to navigate the transition between formative and summative evaluation. Often the formative components of an evaluation are extensive and data-rich, which makes them resource-intensive. Framing the kind of information collected in parallel with the development of the program allows programs to understand and prepare for data decision points over time.
Gemma Jiang, Clemson University, discussed the use of social network analysis to understand the impact of program activities on social connections. Social network analysis produced maps used to visualize relationships within a program and to provide feedback to the program and to students.
Kate McCleary, University of Wisconsin-Madison, discussed the need for an ongoing process of formative evaluation. The role of formative evaluation was reframed as integrated into all stages of a program’s development: from pre-implementation through post-implementation, formative evaluation continues to contribute to program learning.
Glenda Kelly, Duke University, presented her work with trainees and content experts to develop learner-centered rubrics. Students created customized rubrics in which they defined their goals and tracked their progress over time. Through this work, authentic, common student goals were articulated that can be used to guide the instructional priorities of the program.
Rebecca Jordan, Rutgers University, described the use of mental models to track students’ changing transdisciplinary ideas over time. As students learn about use-inspired science, their conceptual models of a problem change. Throughout the program, students engaged in generating models of their understanding, providing insight into their developing research skills.
Ingrid Guerra Lopez, Wayne State University, outlined an extensive process to develop and validate common competencies across 12 academic perspectives. The implementation reflected the program participants’ commitment to communication and collaboration across academic boundaries. The process also revealed the parallel purposes of evaluation activities to guide student, faculty, and program development.
Dawn Culpepper and Colin Philips, University of Maryland, College Park, described their work together to understand scholarly identity. This study is an exemplar of the feedback loop and close collaboration between evaluators, faculty, and students undertaken to understand how graduate students learn and how a program can support them.
Cheryl Schwab, University of California, Berkeley, presented the results of an Evaluation Survey distributed to the PIs of the NRT sites. The results started a conversation around common practices and challenges across the NRT programs. Those common threads will be further delineated and discussed on the NRT Evaluators website.