Essay-Grading Software Seen as Time-Saving Tool

Teachers are turning to essay-grading software to critique student writing, but critics point out serious flaws in the technology.

Jeff Pence knows the best way for his 7th grade English students to improve their writing is to do more of it. But with 140 students, it would take him at least two weeks to grade a batch of their essays.

So the Canton, Ga., middle school teacher uses an online, automated essay-scoring program that allows students to get feedback on their writing before handing in their work.

“It doesn’t tell them what to do, but it points out where issues may exist,” said Mr. Pence, who says the Pearson WriteToLearn program engages the students much like a game.

With the technology, he has been able to assign an essay a week and individualize instruction efficiently. “I feel it is pretty accurate,” Mr. Pence said. “Is it perfect? No. But when I reach that 67th essay, I’m not real accurate, either. As a team, we are pretty good.”

With the push for students to become better writers and meet the new Common Core State Standards, teachers are looking for new tools to help out. Pearson, which is based in London and New York City, is one of several companies upgrading its technology in this space, also known as artificial intelligence, AI, or machine-reading. New assessments designed to evaluate deeper learning and move beyond multiple-choice answers are also fueling the demand for software to help automate the scoring of open-ended questions.

Critics contend the software doesn’t do much more than count words and therefore can’t replace human readers, so researchers are working hard to improve the software algorithms and counter the naysayers.

While the technology has been developed primarily by companies in proprietary settings, there has been a new focus on improving it through open-source platforms. New players in the market, such as the startup venture LightSide and edX, the nonprofit enterprise started by Harvard University and the Massachusetts Institute of Technology, are openly sharing their research. Last year, the William and Flora Hewlett Foundation sponsored an open-source competition to spur innovation in automated writing assessment that attracted commercial vendors and teams of scientists from around the world. (The Hewlett Foundation supports coverage of “deeper learning” issues in Education Week.)

“We are seeing a lot of collaboration among competitors and individuals,” said Michelle Barrett, the director of research systems and analysis for CTB/McGraw-Hill, which produces the Writing Roadmap for use in grades 3-12. “This unprecedented collaboration is encouraging a lot of discussion and transparency.”

Mark D. Shermis, an education professor at the University of Akron, in Ohio, who supervised the Hewlett contest, said the meeting of top public and commercial researchers, along with input from many different fields, may help boost the performance of the technology. The recommendation from the Hewlett trials is that the automated software be used as a “second reader” to monitor the human readers’ performance or to provide additional information about writing, Mr. Shermis said.

“The technology cannot do everything, and nobody is claiming it can,” he said. “But it is a technology that has a promising future.”

The initial attempts at automated essay scoring go back to the early 1970s, but there wasn’t much progress made until the 1990s, with the advent of the Internet and the capacity to store data on hard-disk drives, Mr. Shermis said. More recently, improvements have been made in the technology’s capacity to evaluate language, grammar, mechanics, and style; detect plagiarism; and offer quantitative and qualitative feedback.

The computer programs assign grades to writing samples, sometimes on a scale of 1 to 6, in a number of areas, from word choice to organization. Some products give feedback to help students improve their writing. Others can grade short answers for content. The technology can be used in various ways on formative exercises or summative tests to save time and money.

The Educational Testing Service first used its e-rater automated-scoring engine for a high-stakes exam in 1999, for the Graduate Management Admission Test, or GMAT, according to David Williamson, a senior research director for assessment innovation at the Princeton, N.J.-based company. It also uses the technology in its Criterion Online Writing Evaluation Service for grades 4-12.

Over the years, the capabilities have changed substantially, evolving from simple rule-based coding to more sophisticated software systems. And statistical techniques from computational linguistics, natural language processing, and machine learning have helped develop better methods for identifying certain patterns in writing.

But challenges remain in coming up with a universal definition of good writing, and in training a computer to understand nuances such as “voice.”

Over time, with larger sets of data, experts can identify more nuanced aspects of writing and improve the technology, said Mr. Williamson, who is encouraged by the new era of openness about the research.

“It is a hot topic,” he said. “There are a lot of researchers in academia and industry looking into this, and that is a good thing.”

High-Stakes Testing

Along with using the technology to improve writing in the classroom, West Virginia employs automated software in its statewide annual reading language arts assessments for grades 3-11. The state has worked with CTB/McGraw-Hill to customize its product and train the engine, using several thousand papers it has collected, to score the students’ writing according to a specific prompt.

“We are confident the scoring is quite accurate,” said Sandra Foster, the lead coordinator of assessment and accountability in the West Virginia education office, who acknowledged facing skepticism from teachers at first. But many were won over, she said, after a comparability study showed that a trained teacher paired with the scoring engine was more accurate than two trained teachers. Training involved a few hours in how to apply the writing rubric. Plus, writing scores have gone up since the technology was implemented.

Automated essay scoring is also used on the ACT Compass exams for community college placement, the new Pearson General Educational Development tests for a high school equivalency diploma, and other summative tests. But it has not yet been embraced by the College Board for the SAT or by the rival ACT college-entrance exam.

The two consortia delivering the new assessments under the Common Core State Standards are reviewing machine-grading but have not committed to it.

Jeffrey Nellhaus, the director of policy, research, and design for the Partnership for Assessment of Readiness for College and Careers, or PARCC, wants to know if the technology will be a good fit for its assessment, so the consortium will be conducting a study based on writing from the first field test to see how the scoring engine performs.

Likewise, Tony Alpert, the chief operating officer for the Smarter Balanced Assessment Consortium, said his consortium will evaluate the technology carefully.

With his new company LightSide, in Pittsburgh, owner Elijah Mayfield said his data-driven approach to automated writing assessment sets it apart from other products on the market.

“What we are trying to do is build a system that, instead of correcting errors, finds the strongest and weakest sections of the writing and where to improve,” he said. “It is acting more as a revisionist than a textbook.”

The new software, which will be available on an open-source platform, is being piloted this spring in districts in Pennsylvania and New York.

In higher education, edX has just introduced automated software to grade open-response questions, for use by teachers and professors through its free online courses. “One of the challenges in the past was that the code and algorithms were not public. They were seen as black magic,” said company President Anant Agarwal, noting the technology is in an experimental stage. “With edX, we put the code into open source where you can see how it is done, to help us improve it.”

Still, critics of essay-grading software, such as Les Perelman, want academic researchers to have broader access to vendors’ products to judge their merit. Now retired, the former director of the MIT Writing Across the Curriculum program has studied some of the devices and managed to get a high score from one with an essay of gibberish.

“My main concern is that it doesn’t work,” he said. While the technology has some limited use in grading short answers for content, it relies too much on counting words, and reading an essay requires a deeper level of analysis best done by a person, contended Mr. Perelman.