NRT Programs: A Place for Faculty Development

The Data Science for the 21st Century (DS421) program at UC Berkeley was awarded an NSF Research Traineeship (NRT) grant in 2015. This was the first cycle of funding for the new NRT Program, and the priority research theme for the solicitation was Data-Enabled Science and Engineering (DESE). During the development and writing of the DS421 proposal, the faculty were invested in the prospect of learning from each other and from the selected graduate student trainees. An underlying goal for many of the faculty was to gain and strengthen their own data science skills at a time when the analysis of large data sets from different aspects of human-natural systems was becoming the norm in addressing issues related to environmental change. With common data science needs, as well as overlapping research questions across water resource management, regional land use, and the impact of economic and climate factors on agriculture, the faculty wrote the NRT proposal to create a space that supports learning data science tools to drive research.

During the implementation of the DS421 program, three types of faculty learning occurred: 1) sharing disciplinary knowledge, 2) developing leadership skills, and 3) navigating institutional boundaries. Although the three types of learning are intertwined, each exemplifies a unique way in which NRT programs can support faculty development.

Disciplinary Knowledge

The DS421 program brought together faculty with deep disciplinary knowledge from nine different departments across UC Berkeley. Each home department provided a different working experience for the faculty, which in turn influenced their approach to science. The participating departments may or may not consider themselves inherently interdisciplinary or to have an interdisciplinary culture, yet the DS421 program offered the opportunity for faculty to share their disciplinary knowledge with people outside their departments. In the case of the NRT, cross-disciplinary conversations targeted the knowledge needed to address a research question, so the flow of knowledge was often uneven. This is fundamentally different from simply swapping information; it is offering ideas for others to consider for their own questions. For faculty, the co-creation of curriculum, co-teaching, project review, and student advising provided places to contribute their knowledge and ask questions outside their departments. These opportunities were not structured specifically for faculty development, but faculty involvement in these activities contributed to an intellectual hub around common interests.

Leadership Skills

Along with deep disciplinary knowledge, the faculty recognized the need to integrate data science training into graduate education. The development of leadership skills started with the process of bringing faculty together from different departments to write the graduate training proposal. The faculty who were involved were selected for their interest in the intellectual space and their willingness to collaborate outside their departments, a commitment that required them to contribute their time to the common good. The proposal was successfully funded, giving faculty insight into what was possible with cross-departmental collaboration. At the same time, faculty found supportive colleagues and advocates for their work. Further, the administration of the program provided an ongoing example of how to gather support for an idea among faculty, support faculty needs, and maintain the momentum to get things done.

Navigating Institutional Boundaries

The NRT program allows faculty to propose structures that intersect the borders of institutional entities and extend the space in which faculty work. In the space created by DS421, faculty combined varying perspectives on graduate training and approaches to doing research with the common need for data science skills. Participation increased faculty members' institutional knowledge of how other departments work, how the university works, and what the opportunities and challenges are within and across both. This institutional knowledge allows faculty to better pursue the seeds of ideas and build relationships outside the current structures of the institution. Faculty become advocates for their own careers and conversant in university politics, opening doors for themselves and for their students.

NRT programs drive faculty to investigate new avenues of knowledge, develop into leaders, and transcend institutional structures. These are added benefits of NRT programs, which are typically focused on student learning.


Admission into NRT Programs: Summative and Formative Questions

The admission process for NRT programs at different institutions is similar, and members of admission committees often traverse the same steps. The evaluation questions regarding admissions are usually posed in summative terms, for example, the total number of applicants, the number of students admitted, and the respective percentages of female and underrepresented minority students. Formatively, questions can be asked throughout the admission process at the decision points that influence who ends up in the program. In the first year of program implementation, there is a learning curve on how to administer and implement an admission process, which is often patterned after prior experiences of program members. This includes navigating admission into the institution, selecting students into the program, and allocating the NRT scholarships. Asking formative questions to evaluate the impact of the different steps of the admission process contributes to clarifying the goals of the process, improving the fit of students in the program, and supporting the program's success.

Admission into the Institution

Before a program can admit a student into an NRT program, the student must apply to and be enrolled in the institution. Many programs set goals for the number of incoming students they would like to admit; other programs focus on admitting students at later stages of their Ph.D. journey. One of the biggest challenges to admission into NRT programs for incoming students is coordinating the due dates for submission of applications with the timelines for incoming students to commit to the institution and then to the program. The location of a program within an academic institution may provide the program members with more or less visibility into admission to the institution. In general, to be admitted into an institution a student must achieve a minimum GPA set by the institution and submit the required application and associated letters of recommendation. Depending upon the institution and the affiliated departments or groups, there may also be minimum GRE score requirements. For incoming students, the focus of evaluation questions is on outreach: How did the applicants learn about the program? Who applied from targeted recruiting efforts, and were they admitted into the institution? Was recruiting and information dissemination too narrow or too broad? How can recruiting and information dissemination be improved to target qualified applicants?

Selection of Students for the Program

Ultimately, admission into the institution predetermines the pool of applicants that can be selected for the program. The NRT admission process can consider two kinds of students for admission into the program: incoming students and students currently enrolled at the institution. The review of program applications, as alluded to, is often done before admission into the institution is ensured for incoming students. Some programs conduct admissions in parallel with the institution's admission process, while other programs employ a rolling admissions model. The implementation and timing of different admissions models impact the characteristics of the students who apply. The selection of students into the program is often done by a committee, which can be composed of faculty members, program directors, student representatives, and advisory board members. How much faculty service and staff time is required to implement the admission process? Who makes the final admission decisions? Further, what are the different criteria members are using to select students?

Core to the admission process are the selection criteria used to admit students into the program. Criteria are applied to the individual's interests and experience, as well as to the attainment of group characteristics. The requested set of application materials reflects the priorities of the program. Written statements or letters of intent often ask applicants to describe research interests and how their educational or employment background relates to the program. Letters of recommendation are often required. Students may have to commit to completing a specified set of tasks (e.g., two courses, an internship, writing a chapter). What are reviewers looking for in applications? What is the contribution of each piece of the application to the selection process? How are different experiences or letters of recommendation weighted?

Group characteristics as selection criteria are often applied as a second filter. Programs often want a balance of disciplines (e.g., biology, engineering, policy) around a common research interest. The selection committee may also consider the representation across departments or groups involved in the program, or the number of students from one faculty member or department. The selection criteria may address gender or minority inequities in targeted departments or fields of research. The admission process may include a faculty interview to assess the student's research interests or a group interview to assess how the student interacts with others. Across individual and group criteria, the criteria used should be explicit to the selection committee. How are the criteria prioritized? Are the criteria applied consistently? Are the criteria relevant and fair?

Allocation of Funding

All NRT programs have at least a year of funding to allocate to a set number of the students admitted into the program. The program model can be one or two years in length. In the case of a two-year program, students may be eligible for funding in the first or second year of the program. The program may have specific rules about funding, for example, "we only fund students in their second year of the program." Alternatively, if the program is longer than one year and funding is available the second year, the current fellows may be considered for funding in the second year. The availability of other funding sources (e.g., department scholarships, NSF GRFP, etc.) increases the complexity of deciding when, and to whom, to award funding. In addition, funding can be used as an incentive for recruiting students to the institution and for enrollment in the program. How does funding impact recruitment and retention? What is the enrollment and retention of unfunded students? What is the impact of different requirements for funded and unfunded students?

Formative questions at each step contribute to improving the admission process and to achieving the program goals. At every step, returning to the question "what are the goals of the admission process?" will clarify those goals and help target the kinds of questions that need to be asked.

Communication Workshop Debrief: What works?

The workshop debrief was conducted with the two facilitators and the program evaluator to review the evaluation findings and plan for the next workshop. The questions driving the conversation were: What were the goals of the workshop? What went well? And what did not? Four themes emerged in the conversation about what went well, and these themes will help guide future programming: prepare for flexible programming, identify the implicit value of communication approaches, maintain engagement, and provide informal feedback.

Flexible Program

The goals of the communication workshop were broad: participants will be able to 1) create the narrative for their message, 2) prepare to communicate effectively, and 3) use tools to create visuals. To achieve these goals there is a range of experiences and activities that could be proposed to develop the necessary set of skills. For the first workshop, the selection of activities and programming was influenced by one facilitator's experience conducting communication workshops with groups in both academic and industry settings. With a wide range of potential participants in the workshop, the facilitators needed a toolbox of activities and topics to draw from and the flexibility to adapt. This was the third year that the DS421 communication workshop was implemented with the same curriculum framework and facilitators, and we have learned what expertise and insight facilitators bring to a session. Each session was anchored to a topic (e.g., story development or visual experiment). Based on how the participants responded to a topic, the facilitators were prepared to expand it or pivot to a new one. A flexible program allowed many paths to the same goals each year, but this approach requires the curation of materials for the toolbox and skilled facilitators to select and drive activities.

Implicit Value

One of the biggest challenges in the delivery of the communication workshop is navigating the implicit value attendees place on different forms of communicating their work. Traditionally, writing a journal article is perceived as having high value whereas public outreach is perceived as having low value. Yet the present need for scientists to communicate their knowledge to a larger audience has increased the value of nontraditional forms of communicating science. The students enter the workshop with different levels of communication expertise and experience, both in academia and in industry. In addition, different disciplines and fields vary in their emphasis on communication training and skill development (e.g., presentations, teaching positions, group projects, media use). Even if students see the value of communicating their work to a larger audience, what is viewed as the appropriate approach may vary. We can define the extreme approaches as communicating science with solid facts versus communicating science with artistry. Combining facts and artistry to communicate a scientific message causes confusion for many students with research backgrounds; the students are uncomfortable telling stories with human characters and using the first person. This is not surprising given that in many fields the use of personal pronouns is still unacceptable in scientific writing. It is crucial to know how students value different forms of communication as well as what communication approach is appropriate for their work. Given this insight, different activities can be implemented in the workshop to bring everyone to a common understanding of the purpose of communication. It draws from a basic premise of communication training: know your audience.

Engagement

From the results of observations and surveys, students who are clearly engaged in the activities of a workshop session report a higher impact. Engagement is easier to observe during improvisation sessions and when sessions have an interactive component. Engagement is also contagious: if people around you are engaged, you are more likely to be engaged, and when other participants are not engaged it is harder for the participants who are motivated to engage. According to one facilitator, the ideal number of students in a workshop is 15. This allows everyone to participate and interact with the facilitator. When the number of participants increases, there are more opportunities to disengage. With smaller numbers, the facilitator can personalize the content and have deeper conversations about the experience and difficulties of communication. Integrating activities that require active contribution from participants can help to maintain engagement. For example, many of the improv activities that teach story structure (e.g., string of pearls) require that everyone in the room is listening and ready to contribute. Further, personalizing the activity, whether to a context the participant works in or to the specific skills the participant needs to work on, directly connects the activity to their work. Engagement depends on many external factors, but activities that are interactive and personalized draw in participants and support engagement.

Informal Feedback, Peer and Expert

Feedback is a product of communication. The feedback provided during the workshop was informal in the sense that it was not formally structured or written down but was verbal and matched to the objectives of the activity. During the improvisation sessions participants could try different approaches to communicating and were provided immediate verbal feedback. The interactive nature of the improv activities was an opportunity to provide and receive feedback in a way that is not possible in traditional graduate courses. Another example of informal feedback was during the journalism session when participants were interviewed in front of the class. The interviewee was then offered feedback from the expert facilitators. Although this kind of feedback requires time during programming and the implementation of interactive activities, the direct and immediate feedback was invaluable to learning.

The four themes from the conversation represent strengths of the program as well as avenues to enhance future programming.

Recruiting is a Year-Long Effort

The spotlight on recruiting in the admission process for NRT programs often shines brightest in January. Program applications are due and the number of applications is tallied. Although programs reach out to qualified students and entice them to apply throughout the year, January is a time when evaluation questions reemerge about what worked and what needs to be done to strengthen recruitment efforts.

All NRT programs share the National Science Foundation's (NSF) focus on increasing equity and diversity in STEM disciplines. Each NRT program has specific recruitment goals reflecting the priorities of the program and the needs of the institution. To achieve these goals, NRT programs recruit at the individual, program, and institutional levels. I have outlined below some of the common approaches graduate programs use to recruit students and spread the word about the training opportunities in their programs.

Recruiting approaches used at the individual level to disseminate information about the program include:

  • Send targeted emails
    • personal contacts
    • professional contacts
  • Word of mouth
    • current participants
  • Create website links and forms for inquiries about the program
  • Follow up on prospective students’ email inquiries

At the program level recruiting and dissemination of information builds upon the individual approaches:

  • Email administrators and instructors in related programs or in industry
    • academic programs (e.g., undergraduates, masters)
    • industry partners
  • Use mailing lists
    • departments
    • specialist groups (e.g., Ecolog, CoastGSI, etc.)
    • federal or state funded programs (e.g., Sea Grant programs, etc.)
  • Attend and advertise at discipline-specific conferences
    • participating students
    • faculty and staff
  • Attend minority-focused conferences (e.g., SACNAS, ABRCMS)
  • Schedule information sessions or presentations in related courses
  • Post online descriptions of the training program and faculty research

The institutional level offers existing infrastructure to contact prospective students and disseminate information:

  • Campus media outlets (e.g., announcements, articles, etc.)
  • Post on related department websites
  • Include the program on graduate student association fellowship and scholarship lists
  • Notify campus programs and diversity initiatives (e.g., Graduate Diversity Center)
  • Contact discipline-specific scholarship programs (e.g., Biology Scholars Programs)
  • Utilize university and college networks (e.g., UCs and California State Universities)

This list of recruiting approaches requires curation. Each approach takes time and resources to employ, especially if there are mechanisms to monitor who is contacted (i.e., interest, qualification, and diversity). Explicit recruiting priorities for the program can be used to select a set of recruiting approaches as well as to measure their impact on the applicant pool. The spotlight on the impact of recruiting shines now, but from a program perspective recruiting is a year-long effort. This effort is improved by frequent and critical review throughout the year.

More to read:

 

Posselt, Julie R. Inside graduate admissions: Merit, diversity, and faculty gatekeeping. Cambridge, MA: Harvard University Press, 2016.

 

NRT Annual Meeting 2018: Evaluation Challenges Continued

Common challenges across NRT sites were evident in attendee conversations during the evaluation and assessment sessions and the plenary sessions of the NRT Annual Meeting 2018. Gleaned from participants' questions and comments, three categories of challenges for NRT programs are discussed in this blog: framing the role of evaluation, selecting the approach for evaluation, and facilitating the dissemination of evaluation products. Although discussed as challenges, each provides an opportunity to enhance the use of evaluation in NRT programs.

Frame. Simply asking the question, what is the role of evaluation for NRT programs, opens the door to many responses beyond formative and summative feedback. The question does not specify the full complexity of program evaluation and graduate education. One complexity touched upon in the NRT meeting was the different possible combinations of people contributing to the evaluation at each site, with each individual contributing to the role of evaluation. This particular complexity makes it tricky to compare the implementation and products of evaluation across sites. The formal process of framing the role of evaluation within a given site helps to elucidate the contribution of the different perspectives and priorities of the people involved. The personnel included, from external evaluators, internal evaluators, program coordinators, principal investigators, and faculty to graduate students, determine how evaluation is framed. It is often a challenge for programs to fully frame the role of evaluation because it is determined by the unique contributions of individuals and their interactions. Although the combination of people contributing to the evaluation of a program is often fluid or undetermined at the beginning stages of a program, establishing an initial frame of perspectives and priorities adds to a more nuanced response to the question of what the role of evaluation is.

Select. Selecting the approach for program evaluation is constrained by many factors, including the overarching context of graduate education. The list below (not exhaustive) contains common factors considered across NRT sites when selecting an approach to program evaluation. Each generates its own set of questions and concerns about what works or doesn't work for evaluating graduate programs.

  • Institutional human subjects review – navigating the process and guidelines within an institution.
  • Small sample sizes – drawing evidence from the small numbers of participants in graduate programs.
  • Tradeoffs of different methods – considering (for example) the benefits of qualitative methods versus quantitative methods.
  • Level of faculty and stakeholder involvement – assessing the amount of time and training required for participants to contribute to the evaluation design and interpretation of evaluation results.
  • Recruiting comparison groups – motivating students or faculty outside the program to participate in the evaluation.
  • Cohort effects – accounting for the differences in the composition of each year of entering students.
  • Hawthorne effect – changing behavior because participants are aware of the evaluation.
  • Burden on students (and faculty) – requiring time and energy to provide evaluation data.

Facilitate. There are many mechanisms to disseminate evaluation findings. Dissemination happens informally and formally in various ways at the site level, across sites, and to the broader community. The NRT annual meeting provided an opportunity to share and discuss what is happening at other sites. It was also an opportunity for evaluators to meet and build relationships across sites, provide critical feedback, and create connections to enhance evaluation. Some ideas emerged during the NRT annual meeting that could further facilitate communication of evaluation results and could be directly integrated into the meeting agenda. First, during the poster session of the annual meeting, in a conversation with other attendees, it was proposed that there could be posters for evaluation findings. These posters would be of interest to everyone because evaluation is one of the common threads across all sites. In another conversation someone suggested, as an alternative to a poster session, a "rubric fair" to share the assessment and evaluation tools that sites are using. Both ideas are informal yet structured forums to interact. Second, one of the key suggestions given to new PIs during their orientation is to plan for dissemination. For evaluation, what are the specific evaluation deliverables for the program? Who is responsible? When are the deliverables due? Following up on the plan for dissemination during subsequent annual meetings, in the form of a working session focused on dissemination of products, would keep the plan current and offer an opportunity for peer feedback. Finally, the meeting offered the opportunity to help build an evaluation community beyond the meeting. Part of this effort is to connect individuals working on evaluation at the different NRT sites through the NRT evaluator website. Ideas swirled for ways to curate and share materials on the website. An interim step is to solicit posts from the NRT evaluation community for this site to expand what we know about each other's interests and work.

Each NRT site must frame the role of evaluation, select the evaluation approach, and facilitate dissemination of evaluation products. Collectively, the NRT sites can add their experience and knowledge to addressing these issues. Further, the challenges connect the programs and provide a common forum to discuss evaluation and graduate education in the STEM fields.

Please consider sending in a blog post to cjhschwab@berkeley.edu

NRT Annual Meeting 2018: Assessment and Evaluation Breakout

The NRT Annual Meeting at the end of September 2018 was an opportunity for the NRT sites across the United States to showcase their expanded program knowledge and advances in scientific knowledge. The lessons learned from assessment and evaluation of NRT programs were presented in two sessions during the two-day meeting. Presentations ranged from descriptions of frameworks for the evaluation process to methods for visualizing evaluation findings.

Each presenter described information and insight produced by evaluation activities. The range of methods and tools used across the sites reflected the priorities and novelty of the programs as well as the expertise of the participants: hierarchical models, mixed methods, interviews, rubrics, competency models, peer evaluation, social network analysis, and mental modeling. Regardless of method, the evaluation activities produced information to drive NRT programs toward their goals. A brief description of each presentation is below.

Elijah Carter, University of Georgia, described a hierarchical data use model used to navigate the transition between formative and summative evaluation. Often the formative components of an evaluation are extensive and data rich, which is resource consuming. Framing the kind of information collected in parallel to the development of the program allows programs to understand and prepare for data decision points across time.

Gemma Jiang, Clemson University, discussed the use of social network analysis to understand the impact of program activities on social connections. Social network analysis created maps used to visualize relationships within a program and to provide feedback to the program and to students.
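As a rough illustration of how such relationship maps can be generated (this is not the Clemson team's actual pipeline), the sketch below builds a small co-participation network with the Python networkx library and ranks trainees by a simple centrality measure; the trainee names and activities are hypothetical.

```python
import networkx as nx

# Hypothetical sketch: link trainees who took part in the same program activity,
# then use degree centrality as a simple indicator of how connected each trainee is.
# Names and activities are made up for illustration.
activities = {
    "data science course": ["ana", "ben", "chris"],
    "summer project": ["ben", "dana"],
    "communication workshop": ["ana", "dana", "eli"],
}

G = nx.Graph()
for activity, members in activities.items():
    # Connect every pair of trainees who shared this activity.
    for i, a in enumerate(members):
        for b in members[i + 1:]:
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

# List trainees from most to least connected.
for trainee, centrality in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{trainee}: {centrality:.2f}")
```

A network like this can be drawn with nx.draw to produce the kind of relationship map described above, or summarized numerically as feedback to the program and to students.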

Kate McCleary, University of Wisconsin-Madison, discussed the need for an ongoing process of formative evaluation. The role of formative evaluation was reframed as being integrated into all stages of a program's development; from pre-implementation through post-implementation, formative evaluation continues to contribute to program learning.

Glenda Kelly, Duke University, presented her work with trainees and content experts to develop learner-centered rubrics. Students created customized rubrics in which they defined their goals and tracked their progress across time. Through this work, authentic, common student goals were articulated that can be used to guide the instructional priorities of the program.

Rebecca Jordan, Rutgers University, described the use of mental models to track students’ changing transdisciplinary ideas over time. As students learn about use-inspired science their conceptual models of a problem change. Throughout the program students engaged in generating models of their understanding, providing insight into their developing research skills.

Ingrid Guerra Lopez, Wayne State University, outlined an extensive process to develop and validate common competencies across 12 academic perspectives. The implementation reflected the program participants' commitment to communication and collaboration across academic boundaries. The process also revealed the parallel purposes of evaluation activities: to guide student, faculty, and program development.

Dawn Culpepper and Colin Phillips, University of Maryland, College Park, described their work together to understand scholarly identity. This study is an exemplar of the feedback loop and close collaboration among evaluators, faculty, and students undertaken to understand how graduate students learn and how a program can support them.

Cheryl Schwab, University of California, Berkeley, presented the results of an evaluation survey distributed to the PIs of the NRT sites. The results started a conversation around common practices and challenges across the NRT programs. Those common threads will be further delineated and discussed on the NRT Evaluators website.

 

Evaluation Relationships and Issues I

During the NRT Annual Meeting on September 27, 2018, I presented the results of an Evaluation Relationships and Issues survey conducted to gather information across NRT programs about their experience with evaluation. I described the common threads of evaluation across NRT programs, the results of the survey, and opportunities to build community. In this blog I begin by highlighting the common threads and the relative impact of evaluation issues.

The common threads of evaluation practices across NRT programs stem from the commonalities of who, what, how and when, evident in the logic models produced to guide the programs.

[Figure: logic model commonalities – who, what, how, and when]

The main WHO are NRT graduate students, non-NRT graduate students, masters students, faculty, and staff. Within the institution, the members of lab groups, institutes, and departments contribute to the program. Many programs also involve different types of partners, from industry to government, mentors and advisors, outside workshop and career-building providers, and internal and external evaluators. WHAT is the core of the program: specified outcomes usually consisting of content, research, and communication or career-building skills. HOW the program intends to achieve the outcomes is a set of elements or activities implemented in the program (i.e., courses, internships, workshops, mentors). WHEN is a series of training activities building over time. Each question reveals commonalities between programs that are opportunities for connections.

The survey link was sent via email to the PIs of funded NRT programs with a request for them to complete the survey and to forward the survey link to key members of their NRT program engaged in evaluation.  A total of 59 people filled out the survey.

The survey asked about each person's level of engagement in the evaluation, their connections, and the extent to which issues impacted the evaluation of the program. I start with the latter: the interpretation of the responses to the prompt "Rate the extent to which the following issues currently impact your work to evaluate your NRT program." A total of 20 evaluators, 29 principal investigators, and 10 program coordinators responded to the Likert scale. The table below lists the issues ordered using an item response rating scale model, from issues that impact the work of evaluation to a "great extent" to issues that impact it "not at all." I have included the raw counts across the five categories of the Likert scale so you can get a sense of how the issues were ordered.

Impact on Evaluation Issues (ordered from greatest to least impact)
Counts by response category: Great Extent / Moderate Extent / Somewhat / Very Little / Not at All

Engaging time-constrained program participants in all aspects of evaluation: 11 / 17 / 17 / 11 / 2
Understanding what constitutes success in the program: 12 / 18 / 11 / 15 / 2
Analyzing small sample sizes: 15 / 11 / 18 / 8 / 6
Creating "authentic" performance measures: 7 / 19 / 19 / 10 / 3
Defining the scope and content of the program: 12 / 12 / 17 / 9 / 8
Balancing funding, personnel and resources to do evaluation work: 9 / 11 / 14 / 16 / 8
Supporting the use of evaluation results: 7 / 11 / 16 / 17 / 6
Assimilating the evaluation process into the culture of the program: 6 / 9 / 20 / 15 / 8
Generating generalizable evaluation findings: 5 / 13 / 15 / 11 / 12
Accounting for the variability in student background and integration into the NRT program: 6 / 12 / 14 / 14 / 12
Finding comparison groups: 5 / 10 / 18 / 13 / 11
Adapting to a changing program: 4 / 10 / 18 / 18 / 8
Increasing trainees' motivation to participate in evaluation: 5 / 8 / 20 / 16 / 9
Reconciling the priorities across program stakeholders: 4 / 7 / 20 / 22 / 5
Communicating evaluation findings succinctly: 4 / 12 / 12 / 18 / 12
Managing attrition and low response rates: 6 / 8 / 9 / 24 / 11
Disseminating evaluation findings outside the program: 5 / 8 / 15 / 16 / 14
Opportunities to discuss and share evaluation findings: 2 / 6 / 21 / 15 / 14
Maintaining engagement of outside stakeholders (e.g., industry partners, etc.): 1 / 10 / 13 / 20 / 13
Navigating changes in the evaluation team: 4 / 4 / 6 / 11 / 33
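For readers curious about how such an ordering can be approximated, the sketch below computes a simple weighted-mean impact score from the raw counts in the table. It is a simplified stand-in for, not a reproduction of, the item response rating scale model used in the analysis, and only a few issues are included for brevity.

```python
# Simplified sketch: order issues by a weighted-mean impact score computed from
# the raw counts above. This is a rough stand-in for the item response rating
# scale model used in the actual analysis; only three issues are shown.
counts = {
    "Engaging time-constrained program participants": [11, 17, 17, 11, 2],
    "Understanding what constitutes success in the program": [12, 18, 11, 15, 2],
    "Navigating changes in the evaluation team": [4, 4, 6, 11, 33],
}

# Score the five response categories from 4 (Great Extent) down to 0 (Not at All).
scores = [4, 3, 2, 1, 0]

def mean_impact(row):
    """Average impact score across all respondents for one issue."""
    return sum(s * n for s, n in zip(scores, row)) / sum(row)

# Print issues from greatest to least impact.
for issue, row in sorted(counts.items(), key=lambda kv: mean_impact(kv[1]), reverse=True):
    print(f"{mean_impact(row):.2f}  {issue}")
```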

These findings help us as an NRT community to target issues that may need extra attention. They also provide a basis for asking NRT sites how they have addressed the most pressing issues and for sharing that insight.

To be continued…

 

This material is based upon work supported by the National Science Foundation under grant no. DGE-1450053

Learning What is Valuable in Graduate Education at UC Berkeley

On May 1, 2017, the Data Science for the 21st Century (DS421) NRT training program hosted a symposium to celebrate the graduation of the first cohort of trainees. The day’s agenda included discussions of the changing environment on the UC Berkeley campus for data science and interdisciplinary research education and what we have learned from the implementation of the DS421 program. Five themes framed the program evaluation results as “what is valuable in graduate education”: heterogeneity, opportunity, practice, question, and people. Each theme was supported by multiple sources of evaluation data. The themes help to ground the training outcomes and guide the discussion of ways to improve the next iteration of the program elements.

The DS421 program was founded on building students’ 1) knowledge of concepts, empirical methods, and analytic tools from environmental science, social science, and statistics, 2) ability to conduct interdisciplinary research, and 3) development of skills to effectively communicate results to diverse audiences. The program evaluation activities target the assessment of the student outcomes, providing feedback and support for the development and improvement of the program elements. The driving questions of the evaluation activities were:

–How do the DS421 training elements contribute to student and faculty attainment of training outcomes?

–How can we scaffold students’ development of interdisciplinary research skills?

Evaluation data was collected from multiple sources through self-report surveys, observations, interviews, and focus groups. In the review of the evaluation data five themes emerged across sources and methods.

Heterogeneity

The graduate students admitted into the DS421 program come from 8 departments across 5 schools. Students vary in their prior knowledge and experiences, as well as in their ongoing experience. Once in the program, their experiences continue to differ as they navigate varying departmental requirements, courses to take, papers to complete, teaching assistantships to fulfill, and qualification exam timelines. Some students are admitted to graduate school with an advisor, and others do not select an advisor until their second year. There is also varying membership in departmental cohorts, laboratory groups, and research seminars, depending upon the school, the advisor, and the research the student is interested in. Needless to say, the students are heterogeneous when they enter the program and continue on divergent paths during the program.

Yet students apply to the DS421 program to acquire a foundation of data science skills and to engage in interdisciplinary dialogue about their research. These common goals tie the DS421 community together. Each individual brings a perspective and experience that is unique and welcome. A student described his/her biggest takeaway from the colloquium as, "The opinions of my classmates are pretty diverse!", highlighting the exposure to new perspectives and approaches to inquiry. The program provides a rich environment for learning that extends students' knowledge across disciplines.

Opportunity

The graduate students admitted into the DS421 program are provided the opportunity to discuss their ideas and to present their research to people outside their discipline. One student said the program allowed him/her to "[get] outside the box" of departmental course requirements. The program provided alternative courses and a new set of peers from different disciplines. Another student wrote, "It has been helpful to see what problems other people in different disciplines are grappling with and what tools they are using." Each element of the program was designed to give students the opportunity to immerse themselves in research outside their discipline. For the first cohort of the DS421 program, the final project of the Reproducible Data Science course involved working in mixed-discipline groups, bringing students with different expertise and interests together to address one question. Within these opportunities students are learning about, and building networks into, areas of research that are not traditionally discussed in their departmental programs.

Practice

Graduate students don’t explicitly say they are practicing research but they actually are; practicing to think and present research questions, talk with an audience, collaborate with others, and apply tools. The elements of the DS421 program have practice built-in. The colloquium brought students in a cohort together to practice “how to communicate across disciplines.” The communication workshop provided the space for students to try, fail, and succeed at communicating to an audience outside their discipline. From practicing the delivery of a jargon free message to “practicing answering tough questions before an audience,” students were asked to apply the skills and techniques they learned again and again. In the Impacts, Methods and Solutions course students practiced proposal writing. Students posed and wrote about different ideas throughout the semester. A student wrote, “It’s been useful to write frequent proposals, and to workshop them in class.” The Reproducible Data Science course was designed as a series of modules to practice “the collaborative workflow process using Github”, each module adding new tools and different content. Opportunities to practice skills in a supportive environment allows students to try new skills and test out new ideas.

Question

The DS421 program faculty press students to "expose [them]selves to the types of questions in other disciplines." The program elements push students beyond exposure into engagement with, and integration of, the way other people think about problems. One student in the program wrote that he/she was able "to think about a statistical methods question …in a new and interesting way by talking with students who do not go to the same usual methods as economists usually go to." Another student wrote that the colloquium "… has been a great introduction into the way people from other disciplines think about the same questions I do." Understanding the nuances of research questions across disciplines reveals to students new ways of framing, and new methods for addressing, their own research questions.

People

At the symposium, a faculty member stated that the program supported the "…building [of] trust, [by] talking across boundaries." In order for students to cross disciplinary boundaries, the people from both sides need to be brought together for a common purpose in a supportive environment. The DS421 program does this by striving to balance the participants' interests, experiences, and goals. One of the primary activities students report participating in is discussions of topics with peers and mentors. If their peers and mentors cross disciplinary boundaries, the discussions are fundamentally different than if they did not. Building these relationships increases students' access to interdisciplinary discussions and ultimately to opportunities. The summer research program requires students to work with graduate students and faculty from different disciplines, further extending the boundaries of single disciplines.

Next: How can we use these themes to inform and increase the success of the DS421 program?

 

Metrics & Rubrics Working Group Update

Kate McCleary (University of Wisconsin-Madison), Daniel Bugler (WestEd), Cindy Char (Char Associates), Stephanie Hall (University of Maryland), Glenda Kelly (Duke University), Mary Losch (University of Northern Iowa), Leticia Oseguera (Penn State University), & David Reider (Education Design).

On Monday, October 10, 2016, members of the metrics and rubrics working group held a teleconference to update each other and get feedback on different tools and instruments being created and used around shared skills and competencies which are being assessed across project sites. The skills and competencies that the group discussed include agency, communication, cross-disciplinarity, entrepreneurship, and student engagement. The purpose of this blog post is to share with the NSF-NRT evaluator community that met in May 2016 key updates from our conversation, and to encourage continued collaboration across sites in the development and implementation of evaluation measures.

Agency: Agency is a point of interest for the University of Maryland's NSF-NRT evaluation. KerryAnn O'Meara and Stephanie Hall see agency as a key characteristic in the development of interdisciplinary researchers. Loosely defined as "the strategic actions that students might take towards a goal," the UMD evaluation team is seeking to understand how the graduate students in the language science community develop ownership of their training program. O'Meara and Hall see mentorship and multiple pathways through programs as contributing to agency. They are interested in learning if other programs are looking at agency, and if so, what is being used to capture trainee agency. Other members of the working group see the development of agency as a component of some of their programs, through decision-making and the trainees' identification of coursework to fulfill their career goals.

Communication: Measuring different components of communication cuts across many of the evaluations being carried out by the working group. A central focus for our working group was how to use rubrics as components of our evaluation plans. Glenda Kelly, internal evaluator with Duke University, shared a rubric on how to assess trainee elevator speeches, "Scoring Elevator Speech for Public Audiences." The rubric was used as part of the Duke NSF-NRT two-week boot camp, one week of which featured training in team science and professional skills. Trainees participated in a science communication training, "Message, Social Media and the Perfect Elevator Speech," facilitated by faculty from the Duke Initiative for Science and Society. Trainees were presented the rubric on how to assess elevator speeches at the beginning of the workshop and used it as a guide in developing their elevator pitches. Graduate trainees then presented their elevator speeches to the group a few days later and used the rubric as a guide in providing informal feedback. The rubric served as a useful tool for trainees in developing their elevator pitches and providing formative feedback on each other's presentations.

Cross-Disciplinary Skills: Kate McCleary, evaluator for the University of Wisconsin-Madison, created a cross-disciplinary presentation rubric to be used during a weekly seminar where trainees present their research to peers and core faculty from four disciplines. McCleary used data from individual interviews with faculty and graduate trainees to define cross-disciplinarity within the context of the NSF-NRT project at the University of Wisconsin-Madison, and used literature to further explore the competencies developed through cross-disciplinary collaborations (Hughes, Muñoz, & Tanner, 2015; Boix Mansilla, 2005). The rubric was also turned into a checklist to provide different options for assessing trainee presentations. The rubric format was based on the AAC&U VALUE Rubrics, which are useful tools for assessing sixteen competencies. Relevant competencies and rubrics for the NSF-NRT grants include: creative thinking, critical thinking, ethical reasoning, inquiry and analysis, integrative learning, oral communication, problem solving, teamwork, and written communication.

Entrepreneurship: Leticia Oseguera, evaluator for Penn State University, spent time investigating the literature around entrepreneurship. Working with the NSF-NRT team, she contributed to an annotated bibliography on entrepreneurship, and this competency will be the focus of one professional development program hosted for graduate trainees. A few books that stood out on entrepreneurship include Fundamentals for becoming a successful entrepreneur: From business idea to launch and management by M. Brännback & A. Carsrud (2016) and University startups and spin-offs: Guide for entrepreneurs in academia by M. Stagards (2014). Oseguera shared that the Penn State team has shifted more to investigating cross-disciplinary and interdisciplinary knowledge, and finding ways to objectively assess them.

Student Engagement: The NSF-NRT team at Iowa State University is interested in assessing student engagement. Mary Losch, evaluator from the University of Northern Iowa, is looking at existing metrics and measures to assess student engagement. Losch's current work on student engagement draws from two key articles: "Assessment of student engagement in higher education: A synthesis of literature and assessment tools" by B.J. Mandernach (2015) and "Processes involving perceived instructional support, task value, and engagement in graduate education" by G.C. Marchand & A.P. Gutierrez (2016). In her work with the NSF-NRT team at Iowa State, she is seeking to clarify what aspect of student engagement they are looking to measure (i.e., behavioral, cognitive, or affective engagement) and to determine where in the program the assessment of student engagement best aligns.

The rubrics and metrics working group plans to continue meeting once a semester. We value the opportunity to share ideas and support one another in the development of meaningful evaluation measures.

References:

Boix Mansilla, V. (2005). Assessing student work at disciplinary crossroads. Change, January/February, 14-21.

Brännback, M. & Carsrud, A. (2016). Fundamentals for becoming a successful entrepreneur: From business idea to launch and management. Old Tappan, NJ: Pearson Education Inc.

Hughes, P.C., Muñoz, J.S., & Tanner, M.N. (Eds.). (2015). Perspectives in interdisciplinary and integrative studies. Lubbock, TX: Texas Tech University Press.

Mandernach, B.J. (2015). Assessment of student engagement in higher education: A synthesis of literature and assessment tools. International Journal of Learning, Teaching and Educational Research, 12(2), 1-14.

Marchand, G.C. & Gutierrez, A.P. (2016). Processes involving perceived instructional support, task value, and engagement in graduate education. The Journal of Experimental Education, http://dx.doi.org/10.1080/00220973.2015.1107522.

Stagards, M. (2014). University startups and spin-offs: Guide for entrepreneurs in academia. New York: Apress.

Welcome New NRT Sites and Evaluators

The National Science Foundation (NSF) announced awardees for the second cohort of the NSF Research Traineeship (NRT) program. On October 3rd, Laura Regassa, Tara Smith, and Swatee Naik, NRT Program Directors, convened PIs from 18 funded sites at NSF's Arlington, VA campus to discuss project commonalities and shared challenges. Invited members of current NRT program sites offered insight into navigating the first year of funding: Michelle Paulsen (Northwestern University) on program logistics and administration; Cheryl Schwab (UC-Berkeley) on program evaluation; Lorenzo Ciannelli (Oregon State University) on resources and models; and Colin Phillips (U of Maryland) on dissemination, sustainability, and scalability of the model. In addition, Earnestine Easter, Program Director, Education Core Research (ECR), discussed NSF's commitment to funding opportunities to broaden participation in graduate education. Although each NRT site has an evaluation plan written into its proposal, there are common elements and barriers across sites. The NRT PIs had questions on a range of evaluation topics, from navigating the human subjects approval process to integrating evaluation findings over time. We will address these questions by sharing how other sites are defining and tackling these common issues.

What is the difference between internal and external evaluation? How do we benefit from each type of evaluation?

How can we better specify what we want to know about our programs?

How can we use data that is already being collected?

How can we quantitatively measure success? How can we qualitatively measure success? How can we integrate the two kinds of information?

How can we engage faculty in the development, implementation, and use of assessment and evaluation?

How can we engage students in the development, implementation, and use of assessment and evaluation?

How can we use evaluation data to make changes in our program?

How can we share evaluation results and resources across sites?

Does one of these questions resonate with you? If you have comments or would like to submit a blog post addressing an evaluation topic, contact cjhschwab@berkeley.edu.

We look forward to hearing from you!

[Image: NSF building in Ballston, Arlington, Virginia]