Clearinghouse on Disabilities and Gifted Education (ERIC EC)
ERIC EC Minibib EB15
Updated July 2003
Citations with an ED number (ERIC Document; for example, ED123456) are available in
microfiche collections at more than 1,000 locations worldwide; to find the ERIC
Resource Collection nearest you, point your web browser to: http://ericae.net/derc.htm.
Documents can also be ordered for a fee through the ERIC Document Reproduction
Service (EDRS): http://edrs.com/ or 1-800-443-ERIC.
Journal articles (for example, EJ999999) are available for a
fee from the originating journal (check your local college or public library),
through interlibrary loan services, or from article reproduction services such as:
Infotrieve: 800.422.4633, www4.infotrieve.com; or ingenta: 800.296.2221, www.ingenta.com.
Campbell, P. C. and Campbell, C. R. (1995). Instructional Teaming, Part A: Skills for Planning Instruction. Trainee Workbook. Building Inclusive Schools, Module 3. The Kansas Project for the Utilization of Full Inclusion Innovations for Students with Severe Disabilities. ERIC Document Reproduction Service (EDRS), ED391313. 100p.
This manual presents the trainee's workbook and the trainer's guidelines for the third of six modules in a teacher inservice series developed to promote the unified effort of both regular and special education personnel in understanding and applying nationally recognized practices to implement fully inclusive education for students with diverse learning abilities and disabilities. Module 3 is on skills for planning instruction in instructional teams. The trainee workbook is in the form of 44 transparency masters and 4 activities which address performance assessment, steps in designing performance assessments, examples of real-life performances and products, the Individualized Education Program as a performance assessment, establishing performance objectives, task analysis, determining performance benchmarks, performance evaluation, and discrepancy analysis in designing instruction. The manual for trainers offers specific objectives and suggested comments keyed to each of the transparencies, covering restructuring of assessment and designing of performance assessment. A pre/posttest is also included.
Cizek, G. J. (1991). Confusion Effusion: A Rejoinder to Wiggins. Phi Delta Kappan, 73(2), 150-53. EJ432762.
This rejoinder to Grant Wiggins on performance assessment suggests that true educational reform will undoubtedly be evidenced by something more substantial than pocket folders bulging with student work. Labeling performance tests "authentic" does not ensure their validity, reliability, or incorruptibility. Such tests are neither replacements nor cure-alls for other assessment shortcomings.
Dalton, B. and Others. (1995). Revealing Competence: Fourth-Grade Students with and without Learning Disabilities Show What They Know on Paper-and-Pencil and Hands-On Performance Assessments. Learning Disabilities Research and Practice, 10(4), 198-214.
This study evaluated 4 types of assessment with 74 fourth-grade students with and without learning disabilities (LD) in general education classrooms. Students with and without LD performed more strongly on the hands-on tasks than the paper-and-pencil measures; the facilitative effect of hands-on assessment was greater for students with LD and low-achieving students.
Day, V. P. and Skidmore, M. L. (1996). Linking Performance Assessment & Curricular Goals. TEACHING Exceptional Children, 29(1), 59-64.
This article explains how elementary teachers can link performance-based assessments and student self-assessments to the curriculum for improved learning. It provides graphic organizers, rubrics, and examples of student writings, and discusses ways to connect assessment and instruction, benefits for students (feedback and instruction), and benefits for teachers (promoting collaboration).
Denner, P. R., Salzman, S. A., and Harris, L. B. (2002). Teacher Work Sample Assessment: An Accountability Method That Moves beyond Teacher Testing to the Impact of Teacher Performance on Student Learning. ERIC Document Reproduction Service (EDRS), ED463285. 143p.
This paper shows how a mid-sized teacher education institution proactively developed a performance assessment method, Teacher Work Samples (TWSs), addressing the school's efforts to use TWSs to obtain evidence of the impact of teacher performance on student learning. The paper examines challenges faced in developing and implementing TWS assessments and how the school uses TWSs to hold graduates accountable for program and state standards. It also presents evidence supporting the validity and reliability of using TWSs for high-stakes assessment and program accountability. Evidence comes from over 400 work samples collected during the 2000-02 academic year. Candidates completed two TWSs in conjunction with two internships taken during the teacher education program. Overall, TWS assessments met the elements of Crocker's (1997) content representativeness (frequency, importance or criticality, and realism). The TWS measured state standards targeted in the work sample. Ratings found using the analytic scoring rubric were sufficient for making judgments regarding candidates' TWS performance. TWS performance remained constant from students' first to second internship experiences. Determination of the quality of assessment evidence was problematic. The study found that evidence of the impacts of candidate performance on student learning must be embedded within the context of the quality of the assessment evidence. Teacher Work Sample Index of Student Learning Assessment (published by the College of Education, Idaho State University) is appended.
Elliott, S. N. (1998). Performance Assessment of Students' Achievement: Research and Practice. Learning Disabilities Research and Practice, 13(4), 233-241.
Examines the fundamental characteristics of and reviews empirical research on performance assessment of diverse groups of students, including those with mild disabilities. Discussion of the technical qualities of performance assessment and barriers to its advancement leads to the conclusion that performance assessment should play a supplementary role in evaluation of students with significant learning problems.
Elliott, S. N. (1991). Authentic Assessment: An Introduction to a Neobehavioral Approach to Classroom Assessment. School Psychology Quarterly, 6, 273-309.
Mini-Series on Authentic Assessment. This article introduces a mini-series on authentic assessment and describes a variety of procedures including portfolios, exhibitions, performances, and self-assessment, which are discussed in relation to behavioral assessment and the practice of school psychology. Other articles in the series include "Authentic Assessment: Principles, Practices, and Issues" by Douglas A. Archbald, "Authentic Assessment: Straw Man or Prescription for Progress?" by Sandra Christenson, "Authentic Assessment and Content Validity" by Randy W. Kamphaus, and "Alternative Psychometrics for Authentic Assessment?" by Frank Gresham.
Elliott, S. N. and Fuchs, L. S. (1997). The Utility of Curriculum-Based Measurement and Performance Assessment as Alternatives to Traditional Intelligence and Achievement Tests. School Psychology Review, 26(2), 224-233.
Curriculum-based measurement and performance assessments can provide valuable data for making special-education eligibility decisions. Reviews applied research on these assessment approaches and discusses the practical context of treatment validation and decisions about instructional services for students with diverse academic needs.
Fuchs, L. S. and Others. (1999). Mathematics Performance Assessment in the Classroom: Effects on Teacher Planning and Student Problem Solving. American Educational Research Journal, 36(3), 609-46.
Examined the effects of classroom-based performance assessment-driven instruction (PA) with 16 teachers randomly assigned to PA and no-PA conditions. Results show PA-driven instruction can increase teachers' understanding of what PA is and how it might be used to improve mathematics instructional decisions.
Haigh, J. A. (1996). Maryland School Performance Program. Outcomes, Standards, & High-Stakes Accountability: Perspectives from Maryland and Kentucky. ERIC Document Reproduction Service (EDRS), ED394256. 191p.
This document presents a collection of materials on school performance in Maryland, especially as demonstrated in the Maryland School Performance Assessment Program (MSPAP) and the Independence Mastery Assessment Program (IMAP) for some special needs students. The MSPAP is a testing program administered to third, fifth, and eighth grade students to measure the performance of Maryland schools in three ways: how well students solve problems cooperatively and individually, how well students apply what they have learned to real world problems, and how well students can relate and use knowledge from different subject areas. IMAP assesses the progress of schools and programs for students with severe cognitive developmental disabilities toward achieving performance standards. Among materials included are: bulletins and fact sheets, a summary of MSPAP principles, the MSPAP test structure, statewide results on the MSPAP, disaggregated MSPAP data, a sample MSPAP calendar, a list of regular/special education areas assessed, suggested accommodations on the MSPAP for special needs students, IMAP components, an IMAP profile, IMAP domains and outcomes, the IMAP sequence, an excerpt from the IMAP user manual, an excerpt from the IMAP scoring instructions, a parent survey, a performance task list, IMAP content comparison charts, first year pilot results of IMAP training, and the process of developing authentic performance tasks. Appended are a sample MSPAP performance task (for grade 8 mathematics), a sample IMAP performance task (for leisure skills of a 17-year-old), and a sample IMAP scoring rubric.
Herman, J. L. (1992). What Research Tells Us about Good Assessment. Educational Leadership, 49(8), 74-78. EJ444324.
This article appeared in a special issue on performance assessment. It summarizes research supporting current beliefs in testing, identifies good assessment qualities, and reviews the current knowledge of test design. Standardized tests negatively affect academic program quality. Alternative assessments must be judged by their validity, reliability, consequences, fairness, generalizability, cognitive complexity, content quality, coverage, meaningfulness, and cost effectiveness.
Kirst, M. W. (1991). Interview on Assessment Issues with Lorrie Shepard. Educational Researcher, 20(2), 21-23, 27. EJ423944.
Discusses the movement toward authentic assessment, also called direct or performance assessment, as an alternative to multiple-choice, standardized, norm-referenced testing. Authentic testing involves assessment tasks that are real instances of extended criterion performances rather than proxies of actual learning goals. Questions use of assessments to leverage educational reform.
Maeroff, G. I. (1991). Assessing Alternative Assessment. Phi Delta Kappan, 73(4), 272-81. EJ435781.
For all its attractiveness, alternative assessment is fraught with complications and difficulties, as Rhode Island's experience shows. Although alternative assessment can be systematic, there are no easy ways to rate large numbers of performance-based tasks, portfolios, interviews, exhibits, or essays. Some standardization is necessary, and assessment must be aligned with instruction.
Mehrens, W. A. (1991). Using Performance Assessment for Accountability Purposes: Some Problems. ERIC Document Reproduction Service (EDRS), ED333008.
Abridged from a paper presented at the Annual Meeting of the American Educational Research Association. Problems with performance assessment and multiple-choice tests are outlined with reference to the literature on accountability. Purposes for performance assessment include integrating assessments with instruction, supplementing traditional examinations for licensure decisions, and other accountability purposes. Reasons for the popularity of performance assessment as compared to multiple-choice testing are described, and a 52-item list of references is included.
O'Neil, J. (1992). Putting Performance Assessment to the Test. Educational Leadership, 49(8), 14-19. EJ444307.
This article appeared in a special issue on performance assessment. The desire for students to graduate with more than basic skills has fueled interest in performance assessment methods such as essay writing, group science experiments, or portfolio preparation. Officials in Vermont, California, Kentucky, Maryland, and other states are betting that performance assessments may prove as powerful a classroom influence as paper-and-pencil testing used to be.
Soodak, L. C. (2000). Performance Assessments and Students with Learning Problems: Promising Practice or Reform Rhetoric? Reading and Writing Quarterly: Overcoming Learning Difficulties, 16(3), 257-280.
Argues that the goal of enhancing fairness and equity in the evaluation of students with learning difficulties will not be realized by merely introducing performance assessments. Suggests need for development of an empirical base to guide implementation and to reevaluate prevailing beliefs about student failure and the role of assessment in determining eligibility for services.
Tindal, G. and Marston, D. (1996). Technical Adequacy of Alternative Reading Measures as Performance Assessments. Exceptionality, 6(4), 201-230.
Two studies focused on concurrent validity and instructional decision making in using alternative reading measures with elementary regular and special education students. The first study examined the relation of oral reading performance to several other reading measures; the second study investigated teacher decision making, using quantitative and qualitative outcomes reflecting individual student change in reading fluency and prosody.
Wiggins, G. (1991). A Response to Cizek. Phi Delta Kappan, 72(9), 700-703. EJ425524.
Responding to Gregory Cizek's critique of the "faddishness" of direct assessment methods, this article urges a more constructive debate about the pressing issues of costs versus benefits, the place of face validity in test design, the differing needs in assessment data reporting, and assessment methods that actually improve school performance.
Woodward, J., Monroe, K., and Baxter, J. (2001). Enhancing Student Achievement on Performance Assessments in Mathematics. Learning Disability Quarterly, 24(1), 33-46.
A study examined the combined effects of classwide instruction on performance assessment tasks and problem-solving instruction delivered through ad hoc tutoring in mathematics on six students with learning disabilities. The two interventions led to demonstrable differences over time when compared to students who did not receive this kind of instruction.
Worthen, B. R. (1993). Is Your School Ready for Alternative Assessment? Phi Delta Kappan, 74(6), 455-456. EJ457278.
This article appeared in a special issue on performance assessment. It lists 10 conditions essential to a school's readiness to explore alternative assessment methods, including desire for better assessment information, insufficiency of current assessment method, staff and parent openness to innovation, conceptual clarity, assessment "literacy," clarity about desired student outcomes, unsuitability of present curriculum to traditional objective testing, and preexisting alternative assessment examples.
copyright © 1998
ERIC Clearinghouse on Disabilities and Gifted Education