Design Principles for Instructionally Relevant Assessment Systems

Summary
The design and implementation of state assessment systems inevitably shape instruction. A growing number of states are seeking to design their state assessment systems to account intentionally for this relationship and lead to net positive impacts on teaching and learning. This brief describes six design principles that emerge as foundational for instructionally relevant assessment systems. Drawn from work alongside state leaders, assessment and curriculum designers, and teachers and leaders across several states, these design principles offer specific focus areas for assessment system design and development that can transform state assessments into tools for instructional good.
The report on which this brief is based can be found here.
The decisions states make regarding what their assessments look like and what kind of information they produce inevitably shape instruction. Since No Child Left Behind ushered in an era of testing-based accountability for schools, state assessments have been governed by a set of design decisions that emphasize easily generated, easily compared scores—even when these assessments are somewhat superficial proxies for the rich performances that state standards set for student learning. This makes sense if state assessments play a narrow, siloed role: raising a red flag about school performance and triggering a cascade of follow-up actions.
While this might be consistent with how designers intend for assessments to be used, it has had unfortunate and unintended consequences for teaching and learning. Narrow assessments can constrain the kinds of learning experiences students engage in so that instruction better “matches” the test, preventing many students from accessing rich and relevant learning experiences across domains. For example, many students report that rather than reading full books or other complete texts and engaging in evidence-based practices such as developing rich content understanding to support their comprehension, their reading instruction focuses on decontextualized skills and short reading passages, to better align with what they are expected to do on a test.
Many states want to construct assessment systems more deliberately to achieve their goals. By leveraging our understanding of how various groups use information from state assessments, we can design assessments and systems that have a net positive impact on instruction. We can do so by designing assessments that:
- define the instructional shifts from current practice that assessments should be designed to incentivize and drive;
- ensure that state assessments are designed and communicated such that the most proximate logical way to “match” the state assessment in local practice (e.g., interim assessments, classroom assessment resources) mirrors activities that reflect research on how students develop disciplinary knowledge and practice;
- recognize that what happens in the classroom is shaped by decisions made by educators and leaders throughout the system and is not limited to interactions among teachers, students, and the content of instruction; and
- provide teachers and leaders with information that offers a significant perceived value-add over other kinds of information they already receive through their classroom, school, and district instructional and assessment practices and resources.
Design Principles for Instructionally Impactful Assessment Systems
Based on evidence from assessment system design and implementation, as well as lessons learned working alongside and across states, a set of design principles emerges that governs assessments intended to support teaching and learning. These principles are designed to:
- build upon current conceptions of alignment to standards;
- focus on the most discerning features of assessment system design—that is, the features most likely to distinguish systems that lead to positive shifts in instruction from those with neutral or negative impacts on teaching and learning—while allowing for a range of ways that states could enact these principles;
- triangulate among the most important instructional shifts, the key users, and the specific, evidence-based behaviors to influence; and
- walk the line between aspirational and doable.
It is unlikely that any state’s current large-scale assessment program fulfills all of these design principles, but it is eminently feasible for states to make different design decisions right now that bring their assessments into better alignment with instructionally impactful goals. Instructionally relevant assessment systems are intentionally designed to incorporate six principles (see Table 1).
Conclusion
By centering features of assessments that support better student learning experiences, teacher practice, and systemic supports and decision-making, we can create assessment systems that have a “net positive” impact on instruction. The design principles detailed here reflect ambitious but achievable goals for our assessment systems—and large-scale systems, including states as well as national and international programs, are already on the path to making this work a reality. As systems move forward, keeping “positive instructional impact” as the North Star and centering decisions on the specific instructional shifts from the current state of teaching and learning that assessments should support can help system designers make the best decisions within their local contexts for better assessments.
Design Principles for Instructionally Relevant Assessment Systems (brief) by Aneesha Badrinarayan is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.
This research was supported by the Carnegie Corporation of New York, Chan Zuckerberg Initiative, William and Flora Hewlett Foundation, and Walton Family Foundation. The Heising-Simons Foundation, Raikes Foundation, Sandler Foundation, Skyline Foundation, and MacKenzie Scott provided additional core operating support for LPI. The ideas voiced here are those of the authors and not those of our funders.