What Do Graded Decisions Tell Us about Verb Uses

Authors

CINKOVÁ Silvie, KREJČOVÁ Ema, VERNEROVÁ Anna, BAISA Vít

Year of publication 2016
Type Article in Proceedings
Conference Proceedings of the XVII EURALEX International Congress
MU Faculty or unit

Faculty of Informatics

Field Informatics
Keywords Word Sense Disambiguation; usage patterns; computational lexicography; graded decisions; Likert scales; Corpus Pattern Analysis; Pattern Dictionary of English Verbs; regular polysemy; coercion
Description We work with 1450 concordances of 29 English verbs (50 concordances per lemma) and their corresponding entries in the Pattern Dictionary of English Verbs (PDEV). Three human annotators, working independently but in parallel, judged how well each lexical unit of the corresponding PDEV entry matches the given concordance. Thereafter they selected one best-fitting lexical unit for each concordance – while the former setup allowed for ties (equally good matches), the latter did not. We measure the interannotator agreement/correlation in both setups and show that our results are not worse (in fact, slightly better) than in an already published graded-decision annotation performed on a traditional dictionary. We also manually examine the cases where several PDEV lexical units were classified as good matches and how this affected the interannotator agreement in the best-fit setup. The main causes of overlap between lexical units include semantic coercion and regular polysemy, as well as occasionally insufficient abstraction from regular syntactic alternations, and, finally, arguments defined as optional and scattered across different lexical units despite not being mutually exclusive.
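The two setups call for different agreement measures: graded Likert-scale judgments lend themselves to rank correlation between annotator pairs, while the single best-fit choice is a categorical label suited to kappa-style statistics. The following minimal Python sketch illustrates this distinction; the annotator names, scores, lexical-unit labels, and data layout are invented for illustration and are not the authors' actual code or data.

from itertools import combinations

from scipy.stats import spearmanr               # rank correlation for graded scores
from sklearn.metrics import cohen_kappa_score   # agreement for categorical best-fit labels

# Graded setup (hypothetical data): each annotator rates every
# (concordance, lexical unit) pair on a Likert scale.
graded = {
    "A1": [5, 1, 3, 4, 2, 5],
    "A2": [4, 1, 3, 5, 2, 5],
    "A3": [5, 2, 3, 4, 1, 4],
}

# Best-fit setup (hypothetical data): each annotator picks exactly one
# lexical unit per concordance, so ties are not possible.
best_fit = {
    "A1": ["lu1", "lu3", "lu2", "lu1", "lu4", "lu1"],
    "A2": ["lu1", "lu3", "lu2", "lu2", "lu4", "lu1"],
    "A3": ["lu1", "lu3", "lu2", "lu1", "lu4", "lu2"],
}

# Pairwise comparison of the three annotators in both setups.
for a, b in combinations(sorted(graded), 2):
    rho, _ = spearmanr(graded[a], graded[b])
    kappa = cohen_kappa_score(best_fit[a], best_fit[b])
    print(f"{a} vs {b}: Spearman rho = {rho:.2f}, Cohen's kappa = {kappa:.2f}")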
