Fair Assessment Practices: Giving Students Equitable Opportunities to Demonstrate Learning
I am a terrible bowler. On a good night, I break 100. (For those of you who have never bowled, the highest possible score is 300 and a score below 100 is plain awful.) This is a source of great frustration for me. I've taken a bowling class, so I know how I'm supposed to stand and move, hold the ball and release it. Yet despite my best efforts to make my arms and legs move the same way every time, the ball only rarely rolls where it's supposed to. Why, I wonder, can't my mind make my body perform the way I want it to, every time I roll the ball?
If we can't always control our bodily movements, we certainly can't always control the far more complex workings of our minds. However carefully an assessment is designed, a student's performance on it is bound to vary from one occasion to the next.
A colleague who's a chemist throws up his hands at all this. Having obtained controlled results in a laboratory, he finds assessment so full of imprecision that, he says, we can never have confidence in our findings. But to me this is what makes assessment so fascinating. The answers aren't there in black and white; we have, instead, a puzzle. We gather clues here and there, and from them try to infer an answer to one of the most important questions that educators face: What have our students truly learned?
Seven Steps to Fair Assessment
If we are to draw reasonably good conclusions about what our students have learned, it is imperative that we make our assessments, and our uses of the results, as fair as possible.
1. Have clearly stated learning outcomes and share them with your students, so they know what you expect from them. Help them understand what your most important goals are. Give them a list of the concepts and skills to be covered on the midterm and the rubric you will use to assess their research project.
2. Match your assessment to what you teach and vice versa. If you expect your students to demonstrate good writing skills, don't assume that they've entered your course or program with those skills already developed. Explain how you define good writing, and help students develop their skills.
3. Use many different measures and many different kinds of measures. One of the most troubling trends in education today is the increased use of a high-stakes assessment (often a standardized multiple-choice test) as the sole or primary factor in a significant decision, such as passing a course, graduating, or becoming certified. Given all we know about the inaccuracies of any assessment, how can we say with confidence that someone scoring, say, a 90 is competent and someone scoring an 89 is not? An assessment score should not dictate decisions to us; we should make them, based on our professional judgment as educators, after taking into consideration information from a broad variety of assessments.
Using "many different measures" doesn't mean giving your students eight multiple-choice tests instead of just a midterm and final. We know now that students learn and demonstrate their learning in many different ways. Some learn best by reading and writing, others through collaboration with peers, others through listening, creating a schema or design, or hands-on practice. There is evidence that learning styles may vary by culture (McIntyre, 1996), as different ways of thinking are valued in different cultures (Gonzalez, 1996). Because all assessments favor some learning styles over others, it's important to give students a variety of ways to demonstrate what they've learned.
4. Help students learn how to do the assessment task. My assignments for student projects can run three single-spaced pages, and I also distribute copies of good projects from past classes. This may seem like overkill, but the quality of my students' work is far higher than when I provided less support.
Students with poor test-taking skills may need your help in preparing for a high-stakes examination; low achievers and those from disadvantaged backgrounds are particularly likely to benefit (Scruggs & Mastropieri, 1995). Performance-based assessments are not necessarily more equitable than tests; disadvantaged students are likely to have been taught through rote memorization, drill, and practice (Badger, 1999). Computer-based assessments, meanwhile, penalize students who have had little opportunity to learn and work with computers (Russell & Haney, 2000).
5. Engage and encourage your students. The performance of "field-dependent" students, those who tend to think more holistically than analytically, is greatly influenced by faculty expressions of confidence in their ability (Anderson, 1988). Positive contact with faculty may help students of non-European cultures, in particular, achieve their full potential (Fleming, 1998).
6. Interpret assessment results appropriately. There are several approaches to interpreting assessment results; choose those most appropriate for the decision you will be making. One common approach is to compare students against their peers. While this may be an appropriate frame of reference for choosing students for a football team or an honor society, there's often little justification for, say, denying an A to a student solely because 11 percent of the class did better. Often it's more appropriate to base a judgment on a standard: Did the student present compelling evidence? summarize accurately? make justifiable inferences? This standards-based approach is particularly appropriate when the student must meet certain criteria in order to progress to the next course or be certified.
If the course or program is for enrichment and not part of a sequence, it may be appropriate to consider growth as well. Does the student who once hated medieval art now love it, even though she can't always remember names and dates? Does another student, once incapable of writing a coherent argument, now do so passably, even if his performance is not yet up to your usual standards?
7. Evaluate the outcomes of your assessments. If your students don't do well on a particular assessment, ask them why. Sometimes your question or prompt isn't clear; sometimes you may find that you simply didn't teach a concept well. Revise your assessment tools, your pedagogy, or both, and your assessments are bound to be fairer the next time that you use them.
Spreading the Word
Much of this thinking has been with us for decades, yet it is still not being implemented by many faculty and administrators at many institutions. Our challenge, then, is to make the fair and appropriate use of assessments ubiquitous. What can we do to achieve this end?
As we continue our search for fairness in assessment, we may well be embarking on the most exhilarating stage of our journey. New tools such as rubrics, computer simulations, electronic portfolios, and Richard Haswell's minimal marking system (1983) are giving us exciting, feasible alternatives to traditional paper-and-pencil tests. The individually tailored assessments that seem hopelessly impractical now may soon become a reality. In a generation, maybe less, it's possible that we will see a true revolution in how we assess student learning, with assessments that are fairer for all . . . but only if we all work toward making that possible.
When this article was written, Linda Suskie was director of AAHE's Assessment Forum, and assistant to the president for special projects at Millersville University of Pennsylvania.
Several organizations have developed statements that include references to fair assessment practices. Some are available online:
Code of Fair Testing Practices in Education, by the Joint Committee on Testing Practices, National Council on Measurement in Education
Code of Professional Responsibilities in Educational Measurement, by the National Council on Measurement in Education
Leadership Statement of Nine Principles on Equity in Educational Testing and Assessment, by the first National Symposium on Equity and Educational Testing, North Central Regional Educational Laboratory
Nine Principles of Good Practice for Assessing Student Learning, by the American Association for Higher Education
Writing Assessment: A Position Statement, by the Conference on College Composition and Communication
References

American Association for Higher Education. (1996, July 25). Nine principles of good practice for assessing student learning [Online].
Anderson, J. A. (1988). Cognitive styles and multicultural populations. Journal of Teacher Education, 39(1), 2-9.
Badger, E. (1999). Finding one's voice: A model for more equitable assessment. In A. L.
Nettles & M. T. Nettles (Eds.), Measuring up: Challenges minorities face in educational assessment (pp. 53-69). Boston: Kluwer.
Conference on College Composition and Communication. (1995). Writing assessment: A position statement [Online]. Available: http://www.ncte.org/ccc/12/sub/state6.html
Fleming, J. (1998). Correlates of the SAT in minority engineering students:
Gonzalez, V. (1996). Do you believe in intelligence? Sociocultural dimensions of intelligence assessment in majority and minority students. Educational Horizons, 75, 45-52.
Haswell, R. H. (1983). Minimal marking. College English, 45, 600-604.
Joint Committee on Standards for Educational Evaluation. (1994). The program evaluation standards: How to assess evaluations of educational programs (2nd ed.). Thousand Oaks, CA: Sage.
Joint Committee on Testing Practices. (1988). Code of fair testing practices in education [Online]. Available: http://ericae.net/code.txt
Lam, T. C. M. (1995). Fairness in performance assessment: ERIC digest [Online]. Available: http://ericae.net/db/edo/ED391982.htm (ERIC Document Reproduction Service No. ED 391 982)
Linn, R. L. (1999). Validity standards and principles on equity in educational testing and assessment. In A. L. Nettles & M. T. Nettles, (Eds.), Measuring up: Challenges minorities face in educational assessment (pp. 13-31). Boston: Kluwer.
McCabe, D. L., & Pavela, G. (1997, December). The principled pursuit of academic integrity. AAHE Bulletin, 50(4), 11-12.
McIntyre, T. (1996). Does the way we teach create behavior disorders in culturally different students? Education and Treatment of Children, 19(3), 354-370.
National Council on Measurement in Education. (1995). Code of professional
responsibilities in educational measurement [Online].
Russell, M., & Haney, W. (2000, March 28). Bridging the gap between testing and technology in schools. Education Policy Analysis Archives [Online serial], 8(19). Available: http://epaa.asu.edu/epaa/v8n19.html
Scruggs, T. E., & Mastropieri, M. A. (1995). Teaching test-taking skills: Helping students show what they know. Cambridge, MA: Brookline Books.
This article was reprinted from the May AAHE Bulletin (American Association for Higher Education) and published in Adventures in Assessment, Volume 14 (Spring 2002), SABES/World Education, Boston, MA, Copyright 2002.
Funding support for the publication of this document on the Web provided in part by the Ohio State Literacy Resource Center as part of the LINCS Assessment Special Collection.