ACT Provides Explanation For Low Essay Scores

Over the past several weeks, Summit has heard from a number of families concerned about ACT essay scores. In general, ACT essay scores from this past fall seem low. In the most noteworthy cases, students received ACT composite scores in the 30s (quite strong) alongside essay scores in the low 20s (about average). As it turns out, families and educators across the U.S. have noted this trend and are expressing concern.

Apparently, the feedback has been so loud and frequent that ACT decided to issue a 12-page research paper* explaining the new scoring system. We will provide some context for these issues, summarize key points of the ACT’s research paper, and offer some insights and conclusions.

 

 

A Brief History Of The ACT Essay

Before September 2015, ACT graded essays on a 2-12 scale that had no connection to the familiar 1-36 scale used for the rest of the ACT. This meant that the previous essay score could not be readily compared to the composite results. Beginning in 2015, the ACT significantly changed both the format of the essay and the scoring method. The changes to the content of the essay task are not the focus of the ACT report. Instead, the report, like the recent criticism, focuses on the scoring of the essay.

The new ACT essay scoring system continues to rely on two scorers providing raw scores on several dimensions of writing (development, analysis, organization, etc.). These initial raw scores are translated into the ACT’s familiar 1-36 scale. However, the initial administrations of the revised essay have yielded scores that are not well aligned with the scales for the multiple-choice parts of the test.

 

 

Key Points Addressed In ACT Research Letter

The lower scores are not imagined. Writing scores are notably lower than ACT composite scores: 3.2 points lower on average. Students with high composite scores were especially likely to see wide gaps between their composite and essay scores.

According to ACT, the problem is in our interpretation, and not the scores themselves. First, the report tells us that a score in one section does not mean the same thing as that score in another section. For example, if a student receives a score of 30 in English on the April ACT, that score reflects the same level of ability as a 30 on any ACT English test taken by any student at any time. However, a 30 in English does not reflect precisely the same level of ability as does a 30 in Math, Reading, Science, or Writing. In fact, the 30 in Writing reflects a markedly higher level of accomplishment.

ACT provides this chart to illustrate the point:

[ACT Essay - Table 1]

The scales differ for each section of the ACT. For example, the top 5% of students scored at least a 32 in English but only about a 27 on the Writing test. The ACT report repeatedly makes the point that if you want to rank students’ performances across subjects, you need to look at percentiles, not scaled scores. The problem with this argument is that families, educators, and colleges rarely rely on percentiles; they rely instead on the ACT’s familiar 1-36 scale, expecting it to mean the same thing for every section of the test.
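For readers who want to see the mechanics of the percentile comparison, here is a minimal sketch in Python. The lookup tables are hypothetical placeholders, not ACT’s published norms; the only figures taken from the discussion above are the 32-in-English and 27-in-Writing scores landing near the 95th percentile.

```python
# Minimal sketch: compare section scores by percentile rather than by the raw
# 1-36 scaled score. The percentile values below are hypothetical placeholders;
# only the "32 English / 27 Writing near the 95th percentile" pairing reflects
# the figures cited above.

ENGLISH_PERCENTILES = {28: 88, 30: 92, 32: 95, 34: 98, 36: 99}
WRITING_PERCENTILES = {23: 85, 25: 90, 27: 95, 29: 98, 31: 99}


def compare_by_percentile(english_score: int, writing_score: int) -> str:
    """Report the national rank each scaled score represents."""
    eng_pct = ENGLISH_PERCENTILES[english_score]
    wri_pct = WRITING_PERCENTILES[writing_score]
    return (f"English {english_score} sits near the {eng_pct}th percentile; "
            f"Writing {writing_score} sits near the {wri_pct}th percentile")


# A 32 in English and a 27 in Writing land at roughly the same rank, even
# though the scaled scores are five points apart.
print(compare_by_percentile(32, 27))
```

The point of the sketch is simply that rank, not the scaled number, is what makes two section scores comparable.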

 

 

Key Points Left Unaddressed

Though the ACT’s explanation is helpful, it is also woefully incomplete.

ACT sends a mixed message about the significance of its scaled scores. On the one hand, ACT tells us: “The ACT score scales of 1-36 are a well-established and trusted source of information and can be used to monitor longitudinal trends and comparisons of different groups.” Here, ACT is absolutely correct. Colleges publish median ACT scores to show where they rank on the academic pecking order, and students and families rely on this information when building their prospective list of colleges. On the other hand, the ACT discourages the reliance on scaled scores, recommending the use of percentiles instead. Also, the wide discrepancy between Writing scaled scores and Composite scores makes it more difficult for families and admission offices alike to assess the meaning of a particular scaled score.

ACT does not give any reason why a particular raw score on the essay should translate to a specific scaled score. It is not entirely clear why the Writing section’s scaled scores and percentiles are so different from those of the other sections, especially since the ACT could have structured the Writing scale to align with them. Fortunately, there are likely to be positive changes made for future tests. For example, the ACT acknowledges that its scorers will improve as they get used to the new format: “…as raters become more familiar and experienced in scoring with the new domain-based rubrics, these issues will be mitigated.” Perhaps the ACT will also tweak future Writing scales.

The ACT report does not confront the core question: why were the ACT Writing scores consistently lower? It is fair to suggest that percentiles, rather than scaled scores, are better suited to ranking students across subjects, but that only raises the question of why the Writing scale is skewed in the first place. Similarly, ACT’s lengthy discussion of standard error of measurement (SEM) focuses on a statistic that quantifies how reliable a particular score is; it does not address why Writing scores are scaled lower than the scores on every other section of the test.
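As background on what SEM does measure, here is a minimal sketch using the standard formula from classical test theory, SEM = SD * sqrt(1 - reliability), together with the confidence band it puts around an observed score. The input numbers are hypothetical, not ACT’s published statistics.

```python
import math


def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """Classical test theory: SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)


def score_band(observed: float, sem: float, z: float = 1.0) -> tuple[float, float]:
    """Range in which the 'true' score likely falls (z = 1 gives roughly a 68% band)."""
    return observed - z * sem, observed + z * sem


# Hypothetical inputs for illustration only -- not ACT's published figures.
sem = standard_error_of_measurement(sd=5.0, reliability=0.85)
low, high = score_band(observed=27.0, sem=sem)
print(f"SEM is about {sem:.1f} points, so a reported 27 likely reflects "
      f"a true score between {low:.0f} and {high:.0f}.")
```

In other words, SEM speaks to how much an individual score might bounce around on a retake; it says nothing about where the Writing scale sits relative to the other sections, which is the question families are actually asking.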

 

 

Conclusions and Recommendations

ACT’s response to this issue demonstrates an appropriate level of concern. The organization has hinted that it will provide essay scorers with additional training and will perhaps revisit how to correlate raw scores to scaled scores. We anticipate that ACT will continue to respond in such a way as to minimize the frustration and concern of students and families.

As we have recently discussed in the context of the SAT essay, it is important to keep the ACT Writing score in perspective. While Summit recommends that students complete the essay as part of the ACT, the essay remains optional and does not factor into the student’s composite score. It is unclear how colleges will use the Writing score, if at all. We also urge families to reach out to college admission offices directly to determine how much weight the ACT essay carries in admission decisions.

For a fee, ACT will rescore an essay at a family’s request. We have anecdotally heard that this can result in a score that better aligns with the student’s abilities. While we don’t have any hard evidence to back this up, it is an option for students who feel their scores are far below their testing skill, even after accounting for ACT’s skewed scale.

As always, Summit prides itself on being a resource for families and educators, and we are committed to keeping students and families up-to-date with the latest information available on this issue. We would be happy to help you interpret ACT Writing scores, regardless of the scales ACT decides to use. Please do not hesitate to contact us.
