Once the grade submission deadline for a semester has passed, ASER generates reports and securely shares the electronic files with department and program SRTI coordinators, who then distribute reports to instructors. The files include:
- SRTI Section Report: Item frequencies and means for each participating course section and comparison group means at the program/department, school/college, and campus level.
- Section Open Ended Responses: Student responses to the four open-ended items on the SRTI Form.
- Departmental Summary Report: Responses for all course sections summarized at the department and/or program level.
- Data Files: Excel files of raw student responses and summary measures (means, standard deviations, and credible intervals).
ASER (then OAPA) enlisted the statistical expertise of Jeff Starns, Associate Professor, Psychological and Brain Sciences, to develop a statistically appropriate method for deriving an interval around each mean course rating on the SRTI Section Report. Following statistical tests of the method and feedback from groups of faculty and former and current departmental chairs (both those with statistical expertise and those without), we implemented this additional reporting method beginning in fall 2018.
Credible Interval Description
To help provide context for interpreting SRTI ratings, page two of the SRTI Section Report has always contained SRTI item means for the course as well as comparison group means controlling for class size and course level (undergraduate or graduate) at the department, school/college, and campus level. The report also includes an interval measurement around each course mean called a “credible interval”.
Conceptually similar to a confidence interval, the 90% credible interval is based on a different set of statistical assumptions that are more appropriate for SRTI data than traditional confidence interval construction. Observed differences in SRTI means may be attributed to any number of factors in addition to those related to teaching practices or the quality and content of instruction. Other factors to consider include those intrinsic to the group of students taking a course in any given semester. For example, students may differ in their level of interest in course content, in their response to an instructor as a person, or in their general practices for using rating scales (e.g., some students may give a rating of 5 for an instructor who is better than average, while others may give a rating of 5 only if an instructor is one of the best they have ever had). The credible interval provides a range of values in which the “true” mean rating for an SRTI item is likely to lie. In this case, the true mean is the one an instructor would get if the influence of this student-level variation could be removed. Additional details on the construction and interpretation of the credible interval can be found in the following report: Interpreting Ratings in Context: The Credible Interval (university login required).
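The linked report documents the actual construction. Purely as an illustration of the general idea, the sketch below computes a 90% credible interval under a textbook conjugate-normal model, in which a course's observed mean is shrunk toward a comparison-group mean; all parameter values and function names here are hypothetical assumptions, not the method ASER uses.

```python
import math

def credible_interval_90(ratings, prior_mean, prior_sd, rating_sd):
    """Illustrative 90% credible interval for a course's "true" mean rating.

    Assumes a normal likelihood with a known student-level SD (rating_sd)
    and a normal prior centered on a comparison-group mean (prior_mean).
    This is a generic conjugate-normal sketch, NOT ASER's actual method.
    """
    n = len(ratings)
    sample_mean = sum(ratings) / n
    prior_prec = 1.0 / prior_sd ** 2   # precision contributed by the prior
    data_prec = n / rating_sd ** 2     # precision contributed by the data
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean + data_prec * sample_mean) / post_prec
    post_sd = math.sqrt(1.0 / post_prec)
    z = 1.645  # bounds the central 90% of a normal distribution
    return post_mean - z * post_sd, post_mean + z * post_sd

# Example: 20 ratings averaging 4.5, shrunk toward a group mean of 4.0
lo, hi = credible_interval_90([4, 5, 5, 4] * 5, prior_mean=4.0,
                              prior_sd=0.5, rating_sd=1.0)
```

Note how the interval's center lands between the section's own mean and the comparison-group mean: with few respondents the interval widens and the center moves toward the group, which is one way student-level noise can be discounted.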
The SRTI Section Report includes item frequencies and means for each participating course section in a program or department. To help put the results into context, the report also includes comparison group means at the program/department, school/college, and campus level.
Report Header
At the top of each page is the instructor name, subject, catalog, section, and class number. The header also indicates how many students responded, the total enrollment for the course, and the response rate. If the response rate is below 50 percent, a warning is printed.
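The response-rate figure and warning reduce to simple arithmetic. A minimal sketch (the 50 percent threshold is from the text above; the function name and output format are hypothetical):

```python
def response_rate_line(responses, enrolled):
    """Format a header-style response-rate figure with the low-rate warning."""
    rate = responses / enrolled * 100
    line = f"{responses} of {enrolled} students responded ({rate:.0f}%)"
    if rate < 50:  # threshold stated in the report description
        line += " -- WARNING: response rate below 50 percent"
    return line
```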
Item Frequencies (Page 1)
SRTI diagnostic and global item frequencies are reported on Page 1. In addition, frequencies are displayed for items gathering student feedback on the classroom space, as well as on student effort, attendance, and hours spent working outside of class.
Item Means (Page 2)
On the second page, we report the course section item means, followed by an interval measurement around each mean called a “credible interval”. Conceptually similar to a confidence interval, the 90% credible interval is based on a different set of statistical assumptions that are more appropriate for SRTI data than traditional confidence interval construction.
Next, comparison group means are displayed at the program/department, school/college, and campus level for all courses in the same class level (undergraduate or graduate) and enrollment category as the rated section (e.g., undergraduate sections with 120 or more enrolled). Comparison group means are reported only if there are 10 or more sections.
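The comparison-group rule above (same class level and enrollment category, reported only with 10 or more sections) can be sketched as follows; the data layout and field names are hypothetical assumptions for illustration:

```python
def comparison_group_mean(sections, level, enroll_cat, min_sections=10):
    """Mean rating over peer sections, or None if there are too few peers.

    `sections` is a hypothetical list of dicts such as
    {"level": "UG", "enroll_cat": "120+", "mean": 4.2}.
    """
    peers = [s["mean"] for s in sections
             if s["level"] == level and s["enroll_cat"] == enroll_cat]
    if len(peers) < min_sections:  # suppression rule from the report
        return None
    return sum(peers) / len(peers)
```

The suppression rule keeps a small handful of sections from standing in as a "typical" comparison group.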
Finally, frequencies are displayed for student characteristics that research shows may have a slight effect on student ratings, including elective vs. required status, class level distribution, and expected grade.
The departmental summary report is designed to give you a more detailed look at SRTI results at the department or program level. We report results separately for undergraduate and graduate sections and further break down results by four categories of course enrollment.
Undergraduate Sections (Pages 1 and 2)
Item frequencies for all undergraduate course sections are reported on Page 1. Frequencies for additional items are also reported, including elective vs. required status, class level distribution, and expected grade.
The second page includes item means for all undergraduate courses by four categories of course enrollment: fewer than 25 enrolled, 25–59 enrolled, 60–119 enrolled, and 120 or more enrolled. Means are only reported for enrollment categories with three or more sections and are displayed in a bar chart at the bottom of the page.
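The four enrollment categories above amount to a simple bucketing of section size; a sketch (the cutoffs are from the text, the labels are abbreviated):

```python
def enrollment_category(enrolled):
    """Bucket a section into the four enrollment categories used in the
    departmental summary report."""
    if enrolled < 25:
        return "fewer than 25"
    elif enrolled < 60:
        return "25-59"
    elif enrolled < 120:
        return "60-119"
    return "120 or more"
```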
Graduate Sections (Pages 3 and 4)
Results for all graduate course sections are presented in the same format as those for undergraduate sections.
All Sections (Page 5)
On the final page, item means are reported for three groups: all sections combined, all undergraduate sections, and all graduate sections. To help you easily identify large differences in these item means, we have included a bar chart.