Creating Meaningful Performance Assessments
This document has been retired from the active collection of the ERIC
Clearinghouse on Disabilities and Gifted Education. It contains
references or resources that may no longer be valid or up to date.
The ERIC Clearinghouse on Disabilities and Gifted Education (ERIC EC)
ERIC EC Digest #E531
Author: Stephen N. Elliott
Performance assessment is a viable alternative to norm-referenced
tests. Teachers can use performance assessment to obtain a much
richer and more complete picture of what students know and are able to do.
Defining Performance Assessment
Defined by the U.S. Congress, Office of Technology Assessment (OTA)
(1992), as testing methods that require students to create an answer
or product that demonstrates their knowledge and skills, performance
assessment can take many forms, including:
- * Conducting experiments.
- * Writing extended essays.
- * Doing mathematical computations.
Performance assessment is best understood as a continuum of assessment
formats ranging from the simplest student-constructed responses to
comprehensive demonstrations or collections of work over time.
Performance assessments measure what is taught in the curriculum.
Whatever the format, their common features involve:
- Students' construction rather than selection of a response.
- Direct observation of student behavior on tasks resembling
those commonly required for functioning in the world outside school.
- Illumination of students' learning and thinking processes
along with their answers (OTA, 1992).
Two terms are central to performance assessment:
- Performance: A student's active generation of a response that
is observable either directly or indirectly via a permanent product.
- Authentic: The nature of the task and the context in which the
assessment occurs are relevant and represent "real world" problems or issues.
How Do You Address Validity in Performance Assessments?
The validity of an assessment depends on the degree to which the
interpretations and uses of assessment results are supported by
empirical evidence and logical analysis. According to Baker and her
associates (1993), there are five internal characteristics that valid
performance assessments should exhibit:
- Have meaning for students and teachers and motivate high performance.
- Require the demonstration of complex cognition, applicable to
important problem areas.
- Exemplify current standards of content or subject matter quality.
- Minimize the effects of ancillary skills that are irrelevant
to the focus of assessment.
- Possess explicit standards for rating or judgment.
When considering the validity of a performance test, it is important
to first consider how the test or instrument "behaves" given the
content covered. Questions should be asked such as:
- * How does this test relate to other measures of a similar construct?
- * Can the measure predict future performances?
- * Does the assessment adequately cover the content domain?
It is also important to review the intended effects of using the
assessment instrument. Questions about the use of a test typically
focus on the test's ability to reliably differentiate individuals into
groups and guide the methods teachers use to teach the subject matter
covered by the test.
A word of caution: Unintended uses of assessments can have harmful
effects. To prevent the misuse of assessments, the following
questions should be considered:
- * Does use of the instrument result in discriminatory practices
against various groups of individuals?
- * Is it used to evaluate others (e.g., parents or teachers) who
are not directly assessed by the test?
Providing Evidence for the Reliability and Validity of Performance Assessments
The technical qualities and scoring procedures of performance
assessments must meet high standards for reliability and validity.
To ensure that sufficient evidence exists for a measure, the following
four issues should be addressed:
- Assessment as a Curriculum Event. Externally mandated
assessments that bear little, if any, resemblance to subject area
domain and pedagogy cannot provide a valid or reliable indication of
what a student knows and is able to do. The assessment should reflect
what is taught and how it is taught.
Making an assessment a curriculum event means reconceptualizing it as
a series of theoretically and practically coherent learning
activities that are structured in such a way that they lead to a
single predetermined end. When planning for assessment as a
curriculum event, the following factors should be considered:
- * The content of the instrument.
- * The length of activities required to complete the assessment.
- * The type of activities required to complete the assessment.
- * The number of items in the assessment instrument.
- * The scoring rubric.
- Task Content Alignment with Curriculum. Content alignment
between what is tested and what is taught is essential. What is
taught should be linked to valued outcomes for students.
- Scoring and Subsequent Communications with Consumers. In
large scale assessment systems, the scoring and interpretation of
performance assessment instruments is akin to a criterion-referenced
approach to testing. A student's performance is evaluated by a
trained rater who compares the student's responses to multitrait
descriptions of performances and then gives the student a single
number corresponding to the description that best characterizes the
performance. Students are compared directly to scoring criteria and
only indirectly to each other.
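The criterion-referenced scoring described above can be illustrated with a small sketch. The rubric levels, trait names, and function below are hypothetical, invented for illustration; the digest describes the general process (a rater matches a performance against multitrait descriptions and assigns the single best-fitting score), not a specific rubric.

```python
# Illustrative sketch of criterion-referenced rubric scoring.
# RUBRIC levels and trait names are hypothetical, not from the digest.
# Each level lists the traits a performance must show to earn that score.
RUBRIC = {
    4: {"accurate", "complete", "well_organized", "original_insight"},
    3: {"accurate", "complete", "well_organized"},
    2: {"accurate", "complete"},
    1: {"accurate"},
}

def score_performance(observed_traits):
    """Return the highest rubric level whose traits are all present."""
    for level in sorted(RUBRIC, reverse=True):
        if RUBRIC[level] <= observed_traits:  # subset test against the description
            return level
    return 0  # performance matches no level's description

# Students are compared to the criteria, not to each other:
print(score_performance({"accurate", "complete", "well_organized"}))  # 3
print(score_performance({"well_organized"}))                          # 0
```

Note that two students with identical traits always receive the same score, which is what distinguishes this approach from norm-referenced ranking.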
In the classroom, every student needs feedback when the purpose of
performance assessment is diagnosis and monitoring of student
progress. Students can be shown how to assess their own performances if:
- The scoring criteria are well articulated.
- Teachers are comfortable with having students share in their
own evaluation process.
- Linking and Comparing Results Over Time. Linking is a generic
term that includes a variety of approaches to making results of one
assessment comparable to those of another. Two appropriate and
manageable approaches to linking in performance assessment include:
- * Statistical Moderation. This approach is used to compare
performances across content areas for groups of students who have
taken a test at the same point in time.
- * Social Moderation. This is a judgmental approach that is
built on consensus of raters. The comparability of scores assigned
depends substantially on the development of consensus among raters.
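The statistical moderation the digest names can take several forms; a minimal sketch of one common form, linear rescaling so a group's scores on one assessment match the mean and spread of a reference assessment, is shown below. The data and function name are hypothetical, for illustration only.

```python
# Illustrative sketch of statistical moderation (hypothetical data; the
# digest names the approach but not a formula). One common form rescales
# a group's scores on one assessment so their mean and spread match a
# reference assessment taken by the same group at the same time.
from statistics import mean, pstdev

def moderate(scores, reference):
    """Linearly rescale `scores` to the mean and SD of `reference`."""
    m, s = mean(scores), pstdev(scores)
    rm, rs = mean(reference), pstdev(reference)
    return [rm + (x - m) * rs / s for x in scores]

writing = [60, 70, 80, 90]   # group's scores on a writing task
math_ref = [50, 60, 70, 80]  # same group's scores on a reference task
print(moderate(writing, math_ref))  # → [50.0, 60.0, 70.0, 80.0]
```

After moderation the two score distributions share the same group-level center and spread, which is what makes cross-content comparisons of groups defensible.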
How Can Teachers Influence Students' Performances?
Performance assessment is a promising method that is achievable in
the classroom, where teachers can use the data it yields to guide
instruction. Performance assessment should interact with the
instruction that precedes and follows it.
When using performance assessments, students' performances can be
positively influenced by:
- Selecting assessment tasks that are clearly aligned or
connected to what has been taught.
- Sharing the scoring criteria for the assessment task with
students prior to working on the task.
- Providing students with clear statements of standards and/or
several models of acceptable performances before they attempt a task.
- Encouraging students to complete self-assessments of their performances.
- Interpreting students' performances by comparing them to
standards that are developmentally appropriate, as well as to other
students' performances.
References
Baker, E. L., O'Neill, H. F., Jr., & Linn, R. L. (1993). Policy and
validity prospects for performance-based assessments. American
Psychologist, 48, 1210-1218.
U.S. Congress, Office of Technology Assessment. (1992, February).
Testing in American schools: Asking the right questions.
(OTA-SET-519). Washington, DC: U.S. Government Printing Office.
Derived from Elliott, S. N. (1994). Creating Meaningful Performance
Assessments: Fundamental Concepts. Reston, VA: The Council for
Exceptional Children. Product #P5059.
ERIC Digests are in the public domain and may be freely reproduced
and disseminated.
This publication was prepared with funding from the Office of Educational
Research and Improvement, U.S. Department of Education, under contract no.
RR93002005. The opinions expressed in this report do not necessarily reflect
the positions or policies of OERI or the Department of Education.