Common challenges across NRT sites were evident in attendee conversations during the evaluation and assessment sessions and in the plenary sessions of the NRT Annual Meeting 2018. Gleaned from participants' questions and comments, three categories of challenges for NRT programs are discussed in this blog: framing the role of evaluation, selecting the approach for evaluation, and facilitating the dissemination of evaluation products. Although discussed here as challenges, each also provides an opportunity to enhance the use of evaluation in NRT programs.
Frame. Simply asking the question, what is the role of evaluation for NRT programs, opens the door to many responses beyond formative and summative feedback. The question does not capture the full complexity of program evaluation and graduate education. One complexity touched upon at the NRT meeting was the different possible combinations of people contributing to the evaluation at each site, with each individual shaping the role of evaluation. This particular complexity makes it tricky to compare the implementation and products of evaluation across sites. Formally framing the role of evaluation within a given site helps to elucidate the contributions of the different perspectives and priorities of the people involved. The mix of personnel, from external evaluators and internal evaluators to program coordinators, principal investigators, faculty, and graduate students, determines how evaluation is framed. It is often a challenge for programs to fully frame the role of evaluation because it depends on the unique contributions of individuals and their interactions. Although the combination of people contributing to the evaluation of a program is often fluid or undetermined in the beginning stages, establishing an initial frame of perspectives and priorities supports a more nuanced response to the question of what the role of evaluation is.
Select. Selecting the approach for program evaluation is constrained by many factors, including the overarching context of graduate education. The list below (not exhaustive) presents common factors considered across NRT sites when selecting an approach to program evaluation. Each generates its own set of questions and concerns about what works or doesn't work for evaluating graduate programs.
- Institutional human subjects review – navigating the process and guidelines within an institution.
- Small sample sizes – drawing evidence from the small numbers of participants in graduate programs.
- Tradeoffs of different methods – considering (for example) the benefits of qualitative methods versus quantitative methods.
- Level of faculty and stakeholder involvement – assessing the amount of time and training required for participants to contribute to the evaluation design and the interpretation of evaluation results.
- Recruiting comparison groups – motivating students or faculty outside the program to participate in the evaluation.
- Cohort effects – accounting for the differences in the composition of each year of entering students.
- Hawthorne effect – changing behavior because participants are aware of the evaluation.
- Burden on students (and faculty) – requiring time and energy to provide evaluation data.
Facilitate. There are many mechanisms to disseminate evaluation findings. Dissemination happens informally and formally in various ways at the site level, across sites, and to the broader community. The NRT annual meeting provided an opportunity to share and discuss what is happening at other sites. It was also an opportunity for evaluators to meet and build relationships across sites, provide critical feedback, and create connections to enhance evaluation. Several ideas emerged during the NRT annual meeting that could further facilitate communication of evaluation results and could be directly integrated into the meeting agenda.

First, during the poster session of the annual meeting, a conversation with other attendees produced the suggestion of posters dedicated to evaluation findings. These posters would be of interest to everyone because evaluation is one of the common threads across all sites. In another conversation, someone suggested an alternative to a poster session: a "rubric fair" to share the assessment and evaluation tools that sites are using. Both ideas are informal yet structured forums for interaction.

Second, one of the key suggestions given to new PIs during their orientation is to plan for dissemination. For evaluation, what are the specific evaluation deliverables for the program? Who is responsible? When are the deliverables due? Following up on the plan for dissemination at subsequent annual meetings, in the form of a working session focused on dissemination of products, would keep the plan current and offer an opportunity for peer feedback.

Finally, the meeting offered the opportunity to help build an evaluation community beyond the meeting itself. Part of this effort is to connect individuals working on evaluation at the different NRT sites to the NRT evaluator website. Ideas swirled about ways to curate and share materials on the website.
An interim step is to solicit posts from the NRT evaluation community for this site to expand what we know about each other's interests and work.
Each NRT site must frame the role of evaluation, select the evaluation approach, and facilitate the dissemination of evaluation products. Collectively, the NRT sites can add their experience and knowledge to addressing these issues. Further, these challenges connect the programs and provide a common forum to discuss evaluation and graduate education in the STEM fields.
Please consider sending in a blog post to firstname.lastname@example.org