Dependency parsing of learner English

Yan Huang, Akira Murakami, Theodora Alexopoulou, Anna Korhonen

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

Current syntactic annotation of large-scale learner corpora mainly relies on "standard parsers" trained on native-language data. Understanding how these parsers perform on learner data is important for downstream research and applications involving learner language. This study evaluates the performance of multiple standard probabilistic parsers on learner English. Our contributions are threefold. Firstly, we demonstrate that the common practice of constructing a gold standard by manually correcting the pre-annotation of a single parser can introduce bias into parser evaluation, and we propose an alternative annotation method that controls for this bias. Secondly, we quantify the influence of learner errors on parsing errors and identify the learner errors that affect parsing the most. Finally, we compare the performance of the parsers on learner English and native English. Our results have useful implications for how to select a standard parser for learner English.
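The evaluation described in the abstract rests on standard dependency-parsing accuracy metrics. As a minimal sketch (not code from the paper), the snippet below shows how unlabeled and labeled attachment scores (UAS/LAS) are typically computed against a gold standard; the token representation and the toy learner sentence are illustrative assumptions.

```python
def attachment_scores(gold, pred):
    """Compute UAS and LAS for one sentence.

    gold and pred are lists of (head_index, dependency_label) tuples,
    one per token, where head index 0 denotes the root.
    """
    assert len(gold) == len(pred)
    n = len(gold)
    # UAS: fraction of tokens with the correct head.
    uas = sum(gh == ph for (gh, _), (ph, _) in zip(gold, pred)) / n
    # LAS: fraction of tokens with the correct head AND label.
    las = sum(g == p for g, p in zip(gold, pred)) / n
    return uas, las

# Toy learner sentence "I has a books": the parser attaches every
# head correctly but mislabels the object relation.
gold = [(2, "nsubj"), (0, "root"), (4, "det"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (4, "det"), (2, "obl")]
print(attachment_scores(gold, pred))  # (1.0, 0.75)
```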
Original language: English
Pages (from-to): 28-57
Number of pages: 27
Journal: International Journal of Corpus Linguistics
Volume: 23
Issue number: 1
Early online date: 31 May 2018
DOIs
Publication status: Published - 2018

Keywords

  • dependency parsing
  • parsing accuracy
  • learner error
  • learner English
  • annotation bias
