Anyone familiar with the educational assessment industry knows that there are two primary methods of scoring student responses:

  1. Machine (automated) scoring
  2. Human (hand) scoring

Both methods range from simple to elaborate in how they are configured to determine a final raw score for each response a student submits. Machine scoring is extremely reliable and consistent, and usually produces results very quickly for a high volume of student responses. It has proven efficient and accurate for item types ranging from simple multiple choice to gridded response to technology-enhanced items.

Human scoring is a different animal altogether, and requires great care to implement properly in order to achieve reliable, predictable outcomes. For example, a robust, high-performing data model is one key foundational element of an effective human scoring system. The data model should support fast retrieval and submission of data, enforce the workflow prescribed in the rubrics, and give end users maximum configurability over the many facets of customization that drive the scoring workflow. Human scoring software must also scale to the sheer volume of responses from a large user base that need to be viewed and scored, and it must be efficient enough for scorers to get their work done. Ninety-eight percent of all scoring activity happens at the base level: a scorer viewing a response, judging its content, and capturing a score.
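To make the data-model point concrete, here is a minimal sketch of the entities such a system might revolve around. All names (`RubricItem`, `Response`, `ScoringQueue`) are illustrative assumptions, not any real product's schema; a production system would add scorer identity, audit trails, and persistence.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class RubricItem:
    """A scorable item whose rubric defines the allowed score range."""
    item_id: str
    min_score: int
    max_score: int

@dataclass
class Response:
    """One student response awaiting a score."""
    response_id: str
    item_id: str
    content_ref: str          # pointer to stored text/audio/video content
    score: Optional[int] = None

class ScoringQueue:
    """Hands unscored responses to scorers and records their judgments."""

    def __init__(self, rubric_items):
        self.rubrics = {r.item_id: r for r in rubric_items}
        self.responses = {}

    def add(self, response):
        self.responses[response.response_id] = response

    def next_unscored(self):
        # Fast retrieval: return the first response still needing a score.
        for r in self.responses.values():
            if r.score is None:
                return r
        return None

    def submit_score(self, response_id, score):
        # Enforce the rubric's range at submission time.
        r = self.responses[response_id]
        rubric = self.rubrics[r.item_id]
        if not (rubric.min_score <= score <= rubric.max_score):
            raise ValueError(
                f"score {score} outside rubric range "
                f"[{rubric.min_score}, {rubric.max_score}]")
        r.score = score
```

The key design choice illustrated here is that the rubric, not the user interface, is the source of truth for what scores are legal, so validation happens once at the data layer no matter how the score was entered.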

In addition, because scorers do this work at a computer screen, a well-designed scoring system must provide the ability to:

  1. Read, view, listen to, or watch the response in a manner that allows zooming, minimal scrolling, and predictable delivery of content.
  2. Maximize screen real estate to easily show the student’s response.
  3. Easily and intuitively enter scores.
  4. Access item content information like rubrics, scoring guides, stem descriptors, and more.
  5. Stay in the scoring workflow with minimal clicking.

Whether scoring software is designed for a controlled, centralized facility or built to serve the needs of distributed scoring, these basic guidelines will help ensure successful scoring by people, for students. Every response from every student is important, and a smoother interface experience will ultimately allow human scorers to do their very best work in scoring student assessments.