Lemmatization is the process of determining the base or dictionary form (lemma) of a given surface form. Especially for languages with rich morphology, it is important to normalize words into their base forms in order to better support, for example, search engines and linguistic studies. The main difficulties in lemmatization arise from previously unseen words encountered at inference time, and from ambiguous surface forms that can be inflected variants of several different base forms depending on the context.

Source: Universal Lemmatizer: A Sequence to Sequence Model for Lemmatizing Universal Dependencies Treebanks
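The ambiguity and unseen-word problems can be illustrated with a simple dictionary-based lemmatizer. The sketch below uses NLTK's WordNetLemmatizer as an example tool (not the sequence-to-sequence model from the cited paper): the same surface form "leaves" lemmatizes to "leaf" or "leave" depending on its part of speech, and words absent from the WordNet lexicon are returned unchanged.

```python
import nltk
from nltk.stem import WordNetLemmatizer

# The WordNet data must be available locally; download it once if needed.
nltk.download("wordnet", quiet=True)

lemmatizer = WordNetLemmatizer()

# The same surface form can map to different lemmas depending on context:
# as a noun, "leaves" is an inflected form of "leaf";
# as a verb, it is an inflected form of "leave".
print(lemmatizer.lemmatize("leaves", pos="n"))  # leaf
print(lemmatizer.lemmatize("leaves", pos="v"))  # leave

# Words not found in the lexicon are returned unchanged, illustrating
# the unseen-word problem mentioned above ("blorbs" is a made-up word).
print(lemmatizer.lemmatize("blorbs", pos="n"))  # blorbs
```

A dictionary lookup like this cannot generalize to out-of-vocabulary words and needs part-of-speech information to resolve ambiguity, which is why contextual, character-level sequence-to-sequence lemmatizers such as the one in the cited paper are used instead.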