
In Part 1, I reviewed a brief history of large-scale assessment since No Child Left Behind (NCLB) became law in 2001 and the struggle to provide both accountability and positive individual student impact through standardized testing. In Part 2, I will look at the next generation of large-scale assessment and describe a model that summarizes the key components of assessment programs that deliver on the full promise of student assessment, even at a state or consortium level.

Looking forward into 2016, there are a number of ideas and indicators that point to the intended direction of large-scale assessment. For example, there is a growing movement advocating for test designs that spread summative assessments across multiple, shorter periods throughout the school year. There is also increasing interest in performance tasks that deeply gauge student expertise across several subjects at once. What’s lacking, however, is a cohesive way to describe the optimal combination of these directions into a single holistic description of what third-generation large-scale assessment should deliver.

Best practices for large-scale assessment feature several key components:

  1. Test design that changes summative assessments from one lengthy test at the end of the year to multiple, shorter assessments throughout the school year, leveraging fixed forms, computer-adaptive testing, and portfolio work.
  2. Item types that include not just performance tasks as currently leveraged but in-depth tasks that span multiple subjects.
  3. An ecosystem of technologies that are tightly integrated amongst themselves and with state and local systems.
  4. Dynamic data visualizations that present student results in intuitive, graphical formats, allowing educators to instantly recognize and understand key performance levels, trends, and necessary actions.
  5. Services offered by states or vendors that provide customized, flexible service teams to districts and schools — including pervasive, embedded professional learning on how to interpret and apply insights from summative assessments.

Questar has combined each of these best practices into a model called the Third Evolution of Large-Scale Assessment (TELSA). This model was created by combining Questar’s almost 40 years of experience in large-scale assessment with leading market trends and research. Assessment programs that meet the requirements of the TELSA model offer both accountability and instructional benefits to their constituency of states, districts, teachers, students, and parents.

The model has three parts, each of which includes a number of specific requirements for a third-generation program:

  1. Design: test design and creation
  2. Delivery: reliability and flexibility of test delivery
  3. Difference: actionable data and services to support change

Design

The test design section has three requirements:

  • State specific. The test should be designed according to a state’s specific educational needs, costs, and time constraints, including support for states’ growth model initiatives.
  • Authentic and distributed. The program should incorporate leading-edge test design that distributes multi-subject performance tasks and other authentic assessments across the entire school year.
  • Local involvement. The test design process should involve local agencies and teachers to generate in-state employment and drive local involvement.

Delivery

The delivery of a third-generation assessment should meet requirements in three areas:

  • Reliable and secure. Online assessments should be delivered by highly reliable systems that include a multi-tiered approach to data security and stability. They should be delivered by a vendor with a history of reliability and quality who can provide not just reactive but proactive help desk and support services.
  • Flexible and accessible. Assessments should accommodate virtually any environment, including the full range of student accommodations required for IMS APIP certification; open item type standards as defined by IMS QTI specifications; options for online, paper/pencil, and mixed testing; and open standards to enable integration of student data and results across state and local student systems.
  • Assessment of higher-order learning. Assessment programs should support third-generation test design, including all common technology-enhanced items (TEIs) in addition to multi-subject performance tasks; the option and ability for hand-scored constructed-response items and performance tasks; and options for fixed-form and adaptive test formats.

Difference

Complementing “Design” and “Delivery,” the third component, “Difference,” is arguably the most important. How a third-generation assessment program makes a difference at the state and local levels comes down to three key requirements:

  • Comparable. Test results should be linkable to third-party assessments such as the ACT and SAT to provide broad comparability for college and career readiness.
  • Visual and integrated. Student results should be presented to educators and parents with visualized, interactive data. Large-scale assessment results should be integrated with local assessments and systems to support prediction and mid-year instructional corrections.
  • Services oriented. Service teams from states, consortia, and vendors should be flexible and customized to each situation. The vendor or state should provide pervasive, embedded professional learning to instruct teachers and district staff on best practices for applying summative results to effect change in curriculum and instruction.

Assessment programs that meet the requirements of these three areas — Design, Delivery, and Difference — will provide a balance of accountability and student improvement, which is the core goal of all large-scale assessments. More information about the TELSA model for third-generation assessment programs can be found here, and in Part 3, I will explore Questar’s TELSA-compliant solutions.