Something to think about: “Good methodological choices
will produce results useful for program enhancement, and poor methodological
choices will be detrimental to that process”
Remember: “There’s no one ‘true way’
to measure or assess different student abilities or learning outcomes.”
A variety of assessment tools are used to assess student learning
at the course, program and institutional levels. Some are locally developed
for a particular program or institution. Others are standardized instruments
that have been used nationally. A quick review of the strengths and
weaknesses of each approach is charted below (excerpted from unpublished
material of J. Haworth):
Commercial, standardized norm-referenced tests
Examples: ACT Collegiate Assessment of Academic Proficiency (CAAP); ETS Academic Profile
Strengths:
- quick and easy to administer
- reference groups provided
Weaknesses:
- sacrifice specificity for generality
- test low-level skills; very seldom test higher-order skills of analysis, synthesis, or evaluation
- test student recall of general information but fail to assess students’ actual abilities to work with and demonstrate knowledge over time
- provide little or no substantive feedback to students in specific areas
Primary uses:
- useful for group-level performance and external comparisons
- not useful for individual student or program evaluation

Criterion-referenced exams
Examples: ETS Academic Profile I
Strengths:
- easy to administer
- compare student performance to a pre-determined standard
Weaknesses:
- no reference groups provided
- multiple-choice exams emphasize recall over mastery and application
- can be time-consuming
Primary uses:
- useful for assessing how students change over time
- useful for formative and summative evaluations

Locally developed instruments
Examples: comprehensive examinations (senior assessments, gen ed exams, etc.)
Strengths:
- use specific criteria for assessing student performance in relation to course and program goals
- more accurately assess what is taught
- provide feedback to faculty and students
Weaknesses:
- can be difficult to develop initially
- may be costly in faculty time
- can be unreliable
- provide no reference group outside the institution
- tend to test that which is easily testable and to avoid higher-order thinking skills and application of knowledge
Primary uses:
- most useful for course and program evaluation
- must be supplemented for external validity

Common final exams
Strengths:
- same advantages as locally developed instruments, plus the opportunity for more in-depth assessment and testing of higher-order thinking skills
- provide students with timely feedback
Primary uses:
- particularly useful for assessing the highest program priorities
In addition to the above, a program may wish to use some form of competency-based
assessment. Two such methods are appraisal of student performance and performance
on a simulated activity. In the K-12 sector, performance appraisals
are sometimes referred to as “authentic assessment.” With
performance appraisals, students are expected to comprehend, connect,
and apply knowledge in a tangible way.
Simulations approximate performance appraisals in that students demonstrate
their tangible understanding in an artificial setting. For example,
business students who have completed a given set of business courses may
be asked to plan, manage, and analyze financial portfolios in a simulated
Wall Street environment.
Both of these approaches are valuable in providing valid measures of
skill development, but they may require considerable faculty and student time.
Careful design of these approaches is necessary to minimize
problems of reliability and validity.
Before deciding on which tools or approaches to use, it is important
to have your purpose in mind. Key questions to be addressed as you think
about the purpose include:
- What learning outcomes do you want to assess, at what levels?
- Which students will be participating in this process?
- How will you convey the results of this assessment to the participating students?
- Who else is your audience for the results?
- How will the results be used to inform decisions about actions taken
to improve student learning outcomes? Who will document these actions?
- How does this assessment tie into existing assessments?
- How often will this assessment be used? Will it occur annually,
each semester, or every other year?
- Who will be involved in collecting, analyzing and storing the data
and documenting the results?
- How much of the budget can be devoted to this effort?
- What existing instruments can serve your purpose?
- Is pilot-testing possible? (recommended for locally developed instruments)
- Finally, given the costs, time, and expertise needed to administer
the instruments, which appear more feasible?
A good assessment plan typically includes the use of a variety of
assessment approaches and tools. Examples of actual tools and other
materials will be covered in the workshops sponsored by the Center for
Teaching and Learning.