Design Principles for Instructionally Relevant Assessment Systems
The decisions states make about what their assessments look like and what kind of information they produce inevitably shape instruction. Since the No Child Left Behind Act of 2001 ushered in an era of test-based accountability for schools, state assessments have been governed by a set of design decisions that emphasize easily generated, easily compared scores, even when those scores are superficial proxies for the rich performance expectations that state standards set for student learning. This approach makes sense if state assessments play a narrow, siloed role: raising a red flag about school performance and triggering a cascade of follow-up actions. But while that role may be consistent with how designers intend assessments to be used, it has had unfortunate and unintended consequences for teaching and learning.
Many states want to construct assessment systems more deliberately to achieve their goals. By leveraging our understanding of how various groups use information from state assessments, we can design assessments and systems that have a net positive impact on instruction. We can do so if assessment designers:
- Define what shifts from current instructional practices should be incentivized.
- Ensure that state assessments are designed and communicated such that the most natural way to mirror the state assessment in local practice (e.g., interim assessments, classroom assessment resources) is through activities that reflect research on how students develop disciplinary knowledge and practice.
- Recognize that what happens in the classroom is not limited to interactions among teachers, students, and the content of instruction. What and how students learn is shaped by decisions made by educators and leaders throughout the system. Indeed, many of the instructionally relevant decisions that state assessments are most likely to influence lie outside day-to-day teacher–student interactions. By focusing on the ways state assessments most powerfully shape instruction, states can ensure that assessments influence instruction positively without extending into purposes that large-scale external assessments are not well suited to address.
- Provide teachers and leaders with information that offers a significant perceived value-add over other kinds of information they already receive through their classroom, school, and district instructional and assessment practices and resources.
Design Principles for Instructionally Relevant Assessment Systems
Based on evidence from assessment system design and implementation, as well as lessons learned working alongside various states, a set of design principles emerges that should govern assessments intended to support teaching and learning. These principles are designed to:
- Build upon current conceptions of alignment to standards.
- Focus on the most discerning features of assessment system design: those features most likely to distinguish between systems that lead to positive shifts in instruction and those that have a neutral or negative impact on teaching and learning, while still allowing for a range of ways states could enact these principles.
- Triangulate among the most important instructional shifts; the key users; and the specific, evidence-based behaviors we want to influence.
- Walk the line between aspirational and achievable. It is unlikely that any state’s current large-scale assessment program meets all of these design principles, but it is eminently conceivable that states could make different design decisions right now to bring their assessments into better alignment with instructionally impactful goals.
Instructionally relevant assessment systems are intentionally designed to incorporate the following six principles:
- Authentic. Assessments highlight and center the key concepts, modes of inquiry, and ways of learning in the discipline.
- Curriculum-Anchored. Assessments are designed such that high-quality curriculum better prepares students for success on the assessment, the assessment incentivizes the adoption and use of high-quality curriculum, and the assessment supports implementation of high-quality materials.
- Educative. Assessments build educator and student understanding of and experience with high-quality teaching and learning in the discipline.
- Developmental and Asset-Oriented. Assessments recognize what students do know and can do, and surface progress relative to individual student performance and along appropriate learning progressions.
- Reflective of and Responsive to Learners. Assessments follow principles of universal design and cultural responsiveness to ensure that each learner is supported in making their thinking visible.
- Useful for Informing Decisions That Impact Instruction. Assessments are designed to produce relevant information at appropriate times to support decision-making.
By centering features of assessments that support better student learning experiences, stronger teacher practice, and more effective system-level supports and decision-making, we can create assessment systems that have a net positive impact on instruction. The design principles detailed in this report reflect ambitious but achievable goals for assessment systems, and large-scale systems, including states as well as national and international programs, are already on the path to making this work a reality. As systems move forward, keeping “positive instructional impact” as the North Star and centering decisions on the specific shifts from the current state of teaching and learning that assessments should support can help system designers make the best decisions within their local contexts.
Design Principles for Instructionally Relevant Assessment Systems by Aneesha Badrinarayan is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.
This research was supported by the Carnegie Corporation of New York, Chan Zuckerberg Initiative, William and Flora Hewlett Foundation, and Walton Family Foundation. The Heising-Simons Foundation, Raikes Foundation, Sandler Foundation, Skyline Foundation, and MacKenzie Scott provided additional core operating support for LPI. The ideas voiced here are those of the authors and not those of our funders.