Small Program Assessment Metrics

Intended audience & scope for this module

This module is designed as an introduction for non-education faculty researchers who are planning to incorporate undergraduate students into research proposals within their disciplines, but who may not have extensive experience with assessing undergraduate programming.  It is not designed for faculty researchers who are applying for grants where education research is the primary focus, such as NSF IUSE grants (which require far more comprehensive education evaluation processes).

Undergraduates can engage in research programming through numerous mechanisms, including: student employment funded by the grant; mentoring by faculty or graduate student researchers; research engagement through UNM courses; undergraduate research fellowships; volunteering on a research project; or engaging with community partners.  Engaging undergraduate students on a research grant can strengthen proposal sections such as “Broadening Participation” and “Broader Impacts.”

Assessing program impact takes many forms.  Two important assessment mechanisms include (1) formal assessment conducted by professional (often contracted) external evaluators; and (2) internal assessment conducted and reported by grant personnel.  While the former approach is important, and may in fact be required by your grant’s funder, this guide will focus on the latter.

Creating goals for your undergraduate engagements

One of the first steps in assessing program impact is to create goals and objectives for your undergraduate student engagements.  These should be connected to your department’s Academic Program Review where possible.  In addition, you can link to UNM General Education Essential Skills, or to the UNM5.  In some instances, it may strengthen your proposal to connect your programming to UNM2040, UNM’s strategic plan.

Here are a few questions to consider in building your goals and objectives:

  • What are the research goals of your grant proposal? Which parts of your research are accessible or understandable to undergraduates, and at what levels (freshman, sophomore, junior, senior)?
  • How can undergraduate students contribute to and/or learn from your research?
  • How important is undergraduate student engagement to the competitiveness of your proposal?
  • How important is it to align your undergraduate engagements with department or institutional priorities (for instance, UNM2040)?
  • What student engagement mechanisms do you have access to (e.g., student employment, academic courses, community partnerships)?
  • Specifically, what do you want students to LEARN from your undergraduate engagements, and how do you want this to influence their future opportunities or behaviors?

The impact metrics you choose should align closely with your goals and objectives.  For instance, if you want to influence student decisions to persist in their degree program through baccalaureate graduation, then you may design your grant’s educational activities specifically to improve undergraduate retention in your academic discipline.  Consequently, degree persistence, next semester retention, and graduation rates will be important impact metrics, but eventual graduate program enrollment may be less so.  Examples of impact metrics are provided below in greater detail.

Example: Professor A is conducting research in her discipline.  She applies to NSF for research funding, and opts to include undergraduate students in her Broader Impacts plans.  She proposes to improve STEM education, specifically for low-income women in her discipline.  Professor A works with her Co-PIs to develop a scaffolded approach for more effectively recruiting, teaching and mentoring low-income women in the context of the proposed research project.  Her project team plans to offer a special topics course designed specifically for first- and second-year students, and develops or adopts teaching and mentoring approaches shown to be effective for low-income and female undergraduate students.  Her team articulates specific learning goals for the course, and develops assessment mechanisms accordingly.  Professor A also plans to use this course as a recruiting pool for student employees hired to help on her research.  Through this course, Professor A plans to improve content mastery (measured by cognitive assessments of learning in the course), degree progress (measured by students following the degree pathway without repeating or missing important courses), degree persistence (measured by students not switching out of the major), UNM retention (measured by students returning each subsequent semester until graduation), and graduation rates (measured by students graduating with the expected degree), and to increase the number of low-income female students hired on grants in her department (measured by the number of students hired after completing the course).

Selecting student populations

Some grant programs may require that you collect and report comparative student data. Others may only require that you report on the students you serve. For instance, if you are developing an undergraduate fellowship that engages 20 Hispanic students in your research, with the intent to improve graduation rates among Hispanic students in your discipline, your funder may be interested in some combination of:

  • How do your 20 fellows perform on selected benchmarks and goals (e.g., next semester retention, next semester degree persistence, GPA in their major, and graduation rate)?
  • In relation to the same metrics, how do your 20 fellows perform compared to other Hispanic students in your academic discipline who are not participating as fellows?
  • How do your 20 fellows perform compared to all of the other students in your academic discipline who are not participating as fellows?
  • How do your 20 fellows perform compared to a subset of other students, selected based on their intersectional similarities to your cohort of fellows (for instance, state of residence, race/ethnicity, gender, income level, rural high school status)?  A minimal selection sketch follows this list.
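
To make the last comparison concrete, here is a minimal sketch of how a matched comparison group might be pulled, assuming you have already received a deidentified student extract.  The file name and columns (major_code, fellow_flag, ethnicity, residency) are hypothetical, not an actual UNM data layout:

    import pandas as pd

    # Hypothetical deidentified extract: one row per student.
    students = pd.read_csv("deidentified_students.csv")

    in_major = students["major_code"] == "BIOL"          # your discipline
    not_fellow = ~students["fellow_flag"].astype(bool)   # exclude fellows

    # Broad comparison: everyone else in the major.
    broad_comparison = students[in_major & not_fellow]

    # Narrow comparison: non-fellows matched on attributes shared with
    # the fellows cohort (substitute your own matching variables).
    matched_comparison = students[
        in_major
        & not_fellow
        & (students["ethnicity"] == "Hispanic")
        & (students["residency"] == "NM")
    ]
    print(len(broad_comparison), len(matched_comparison))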

The scope of your impact comparisons should be driven by your grant’s Request for Proposals (RFP).  While it may seem tempting to add numerous comparison populations to every proposal, this is not necessarily a good idea.  The more comparison students you fold into your proposal, the more time you and/or your grant colleagues will spend collecting, organizing and analyzing that data.  If that work will not add to your proposal’s competitiveness, then your grant reviewer may see it as a drain on your grant’s resources.

Here are a few questions to consider in selecting your student populations:

  • How important is education research to your grant proposal’s evaluation criteria? For instance, are you required to have an educational researcher among the co-PIs? If so, then you will likely want to build in sophisticated impact comparisons. Conversely, if your education efforts are included in a “broader impacts” section of a non-education research proposal, then you may need fewer comparison populations (or possibly none).
  • If you need to include comparison populations, which students are most important to understanding impact on your population? For instance, should you compare to students of similar gender, race/ethnicity, professional goals, major, or class standing (freshman, sophomore, junior, senior, graduate)?
  • Will you need to compare UNM student metrics to other similar institutions? If so, the Office of Institutional Analytics has identified UNM Peer Institutions based on similarities in mission, enrollment size and diversity, and program offerings.
  • What personnel resources do you have available to help collect, organize and analyze the comparison data? Requesting and securing confidential student data is by design a deliberate process. UNM has many systems in place to keep student records secure and confidential, and working through those processes takes time. You should never expect to ask for and receive data within a few weeks, so plan far ahead.
  • Will you need to measure impact variables before and after your program is implemented? If so, you will need to establish your baseline data (see below).

Example (continued): In the example above, Professor A is primarily focused on undergraduate students who are (1) female, (2) low-income, AND (3) admitted as majors or pre-majors in her academic department.  Since she is looking to improve outcomes for this population, she is also interested in comparing to students who are admitted as majors or pre-majors in her academic department but who are NOT both female and low-income.  By comparing these two populations, she can determine how well her program is improving outcomes for her target students compared to other students.

Collecting baseline data

In some instances, you may need baseline data for your proposal. For instance, if you are planning to improve degree persistence rates by 50% in five years, then you will need to know your current degree persistence rates. Depending on your chosen metric, this data may not be readily available. The process for accessing this data is likely to require that you do the following:

Define your data.  Work with student data experts to develop a definition for your data (for instance, how are you defining degree persistence?  Are you interested only in students who started in the degree as freshmen, or in tracking any student who is admitted to your degree?  Are you interested only in students for whom your degree is their first major, or also in students double-majoring?)  You may also want to review the UNM OIA Data Dictionary.
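
One way to pressure-test a definition before requesting data is to write it down as an explicit rule.  Here is a minimal sketch in Python; the record layout and the rule itself are illustrative only, not an official UNM definition:

    def persisted(term_majors, major_code):
        """One possible rule: a student 'persists' in a major if,
        after first declaring it, every subsequent enrolled term
        remains in that major."""
        if major_code not in term_majors:
            return False  # never entered the major
        first = term_majors.index(major_code)
        return all(m == major_code for m in term_majors[first:])

    # Entered the major in term 2 and stayed: persists.
    print(persisted(["UNDL", "BIOL", "BIOL", "BIOL"], "BIOL"))  # True
    # Switched out after one term: does not persist.
    print(persisted(["BIOL", "CHEM", "CHEM"], "BIOL"))          # False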

Request the data.  Student data can be requested in numerous formats, including (1) identifiable student data, with associated student names, (2) deidentified student data, where student names and other identifiers have been removed, and (3) aggregated student data, where you do not see individual student information, but rather a numerical report of how many students fit your specified variables.  The process for securing access to the first two data types is much more involved than for the third.

Baseline data can come from various sources. 

  • Aggregated student data comes from the UNM Office of Institutional Analytics. An example of aggregated data would be success rates for UNM’s highest-enrollment courses.  The quickest baseline data to collect comes from the UNM Fact Book, Official Enrollment Reports and the Common Data Set.  If you need additional aggregated data, you can also utilize the OIA Data Request Form.  Please plan ahead.  Customized data requests require extra time to complete.
  • Student-level data comes only from the UNM Division of Enrollment Management. An example of student-level data would be a list of students enrolled in your degree program, possibly including their GPA in the courses required for their major.  The Division of Enrollment Management is tasked with securing this confidential student data and ensuring that only those individuals with a legitimate and necessary purpose have access to it. Use the UNM Data Request Form at the Office of the Registrar to request student-level data. Again, plan far ahead.

Baseline data can also be collected as part of your grant activities.  For instance, the UNM ECURE program (funded by NSF) seeks to understand the impact of undergraduate research pedagogy on student perceptions.  To accomplish this, ECURE administers a survey to students enrolled in ECURE sections at the beginning of the semester, and again at the end of the semester.  This data allows ECURE to measure the change in student perceptions over the course of the semester.  OIA provides links to resources that can help you craft your surveys.  Before collecting any survey data, be sure to check in with the Institutional Review Board (IRB) to determine whether you need to submit an IRB proposal (see below).
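
As a sketch of how pre/post change might be tabulated once survey responses are in hand (this is not the ECURE instrument or analysis; the file and column names are hypothetical):

    import pandas as pd

    pre = pd.read_csv("survey_pre.csv")    # columns: respondent_id, q1, q2, ...
    post = pd.read_csv("survey_post.csv")

    # Keep only students who answered both administrations.
    paired = pre.merge(post, on="respondent_id", suffixes=("_pre", "_post"))

    # Average change on item q1 across paired respondents.
    change = paired["q1_post"] - paired["q1_pre"]
    print(f"n={len(paired)}, mean change={change.mean():.2f}, sd={change.std():.2f}")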

Example (continued): In addition to comparing her target students to other students in her department, Professor A is also interested in change over time.  She will want to determine whether her new course is changing outcomes for her target population.  To do this, she will need to collect baseline data from before her course is offered.  Referring back to her desired outcomes, she will want to collect prior data for her target population on degree progress, degree persistence, UNM retention and graduation rates.  Professor A has determined that she does not need to access individual student records for this purpose, and will be satisfied with aggregated data.  Using the OIA Data Request form (see above), she requests this information from the Office of Institutional Analytics.

Submitting a proposal to the Institutional Review Board (IRB)

Educational research falls into the category of Human Subjects Research. If you are not familiar with this type of research, the UNM Institutional Review Board (IRB) is here to help.  Check out the IRB FAQs. Securing IRB approval requires trainings and proposal submissions. You may request an IRB Consultation to help you through the process.  In most instances, you will need an IRB review BEFORE collecting any individual student data (including baseline or comparison data).  Researchers should plan on a minimum of 30 business days for the review process (from the time of submission until a determination is made by the IRB).

Identifying impact metrics

The following list is not comprehensive. Rather, it is designed to provide a quick overview of important metrics that may be available to you.   

Educational Records.  Aggregated data can be requested from OIA; student-level data can be requested from Enrollment Management.  A sketch for computing one of these metrics follows the list.

  • Next semester retention
  • Freshman retention
  • Degree persistence
  • Graduation rates
  • Time to graduation
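
For instance, here is a minimal sketch of how next semester retention might be computed from a student-level extract.  The field names, term codes and retention definition shown are hypothetical; confirm definitions with OIA or Enrollment Management before reporting:

    import pandas as pd

    # Hypothetical extract: one row per student per enrolled term.
    enrollments = pd.read_csv("enrollments.csv")  # columns: student_id, term

    fall = set(enrollments.loc[enrollments["term"] == "2024FA", "student_id"])
    spring = set(enrollments.loc[enrollments["term"] == "2025SP", "student_id"])

    # Next semester retention: share of fall students enrolled again in spring.
    print(f"Next semester retention: {len(fall & spring) / len(fall):.1%}")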

Undergraduate Engagement Metrics.  This data is not centrally or consistently collected at UNM.  You will most likely need to collect this data for your students through surveys, interviews or forms.

  • Future engagement in specified undergraduate enrichment programs (e.g., internships, being mentored)
  • Participation in a future type of undergraduate engagement (e.g., research project, professional development)
  • Participation in teaching others while undergraduates (e.g., teaching assistant, PLF, tutor, future mentoring of others)
  • Participation in future undergraduate scholarship (e.g., publication, presentation, exhibit, portfolio creation)
  • Student recognition through formal awards (internal and external)

Undergraduate Perceptions Metrics.  While some UNM programs have developed surveys to assess these and other metrics relative to their activities, you may need to develop, adapt or adopt your own mechanisms to collect this information.  The development of surveys is complicated and requires time, testing and specialized knowledge.  If possible, it is preferable to utilize or adapt surveys that have already been piloted, tested, implemented and published.  If you are interested in undergraduate research survey questions, please contact the UNM ECURE program to request a copy of the current undergraduate research student survey. When collecting student perceptions, you can measure (1) perceptions at the end of your program, or (2) changes in perception from the beginning to the end of your program.  The former requires only that you collect the data at the end of the student’s engagement with your program, while the latter requires that you collect the data at the beginning of their engagement as well.

  • Student efficacy & agency (i.e., do students identify more as researchers as a result of participating in your program?)
  • Student interest & commitment (i.e., do students feel more committed to pursuing a graduate degree as a result of participating in your program?)
  • Perceptions of program effectiveness (i.e., do students feel that your program contributed to their staying in college?)

Graduate Progression & Engagement Metrics.  Some, but not all, of this data is centrally or consistently collected at UNM.  You may need to collect this data for your students through surveys, interviews or forms. If your programming primarily includes graduate students, be sure to check out the UNM Graduate Student Success Metrics.  For students who attend graduate school at other institutions, you may not be able to collect this information consistently or comprehensively.

  • Graduate school application/acceptance/enrollment
  • Participation in teaching others while graduate students (e.g., TA, GA, mentoring of others)
  • Participation in future graduate scholarship (e.g., publication, presentation, exhibit)

Employment Metrics.  This data is not centrally or consistently collected at UNM, especially over a long period of time.  You will need to collect this data for your students through surveys, interviews or forms.  Be aware that this is difficult information to collect, and relies on busy former students (who are often occupied with establishing their new careers) responding to your inquiries.  Response rates are low, and you may need to assign a team member to follow up with students who do not reply to your requests.

  • Future/sustained employment in the profession

Program Impact Portfolio.  A portfolio is a collection of participant artifacts that demonstrate what participants have learned, and/or reflections on how that learning changed them. While portfolios can be powerful measures of program impact, they can also be time-consuming to collect, curate and publish.  All student products are the confidential intellectual property of the student creators. You should always collect student permission (in writing) before publishing or sharing any of these products with the public, either in full or excerpted. Program portfolios may also contain program artifacts, designed to show reviewers or colleagues how you achieved your program goals.  Share these artifacts at the discretion of your program’s leadership.

  • Student artifacts (e.g., posting student projects online)
  • Program artifacts (e.g., program outcomes, schedule of events, recruitment emails)
  • Faculty artifacts (e.g., faculty testimonials, faculty interviews)

Program Participation Metrics.  Most grant funders will want to know (at a minimum) how many students participated in your programming.  A simple tallying sketch follows the list below.

  • Number of students introduced to program/topic/discipline/etc.;
  • Number of students who apply/participate/complete;
  • Number of students from emphasis populations who apply/participate/complete;
  • Number of students hired/mentored on project;
  • Number of students who demonstrate competency relative to key skills or learning outcomes;
  • Number of faculty or staff trained to engage in program, or implement program activities.
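
As noted above, counts like these can be tallied from a simple program roster.  A minimal sketch, assuming a hypothetical roster format with 0/1 flag columns:

    import pandas as pd

    roster = pd.read_csv("program_roster.csv")
    # Assumed columns: student_id, plus 0/1 flags named applied,
    # participated, completed, and emphasis_population.

    report = {
        "applied": int(roster["applied"].sum()),
        "participated": int(roster["participated"].sum()),
        "completed": int(roster["completed"].sum()),
        "emphasis completed": int(
            (roster["emphasis_population"] & roster["completed"]).sum()
        ),
    }
    print(report)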

Additional resources

Here are a few additional resources that you may find helpful.

Questions? 

If you would like feedback on planning and/or assessing educational initiatives in your grant proposal, or referrals to assessment experts at UNM, please feel free to contact Tim Schroeder, URAD Director, at timschroeder@unm.edu.