These leaderboards are used to track progress in Definition Extraction. No benchmarks are currently available.
Use these libraries to find Definition Extraction models and implementations. No subtasks are listed.
This work applies several pretrained language models (BERT, XLNet, RoBERTa, SciBERT, and ALBERT) to each of the three subtasks of the DeftEval competition, and explores a multi-task architecture trained to jointly predict the outputs of the second and third subtasks.
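The joint prediction idea above can be sketched as a shared encoder feeding two task-specific heads. This is a minimal illustrative sketch, not the paper's actual code: a toy projection stands in for the pretrained language model, and all dimensions and weight names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained encoder (in the paper's setting this
# would be BERT/RoBERTa/etc. producing contextual token vectors).
def encode(x, W_shared):
    return np.tanh(x @ W_shared)

def multitask_forward(x, W_shared, W_tag, W_rel):
    """One shared representation feeds two heads, so subtask 2
    (token tagging) and subtask 3 (relation labels) are predicted jointly."""
    h = encode(x, W_shared)
    tag_logits = h @ W_tag   # per-token tag scores (subtask 2)
    rel_logits = h @ W_rel   # per-token relation scores (subtask 3)
    return tag_logits, rel_logits

# Illustrative dimensions: 8-dim inputs, 16-dim hidden, 5 tags, 3 relations.
d_in, d_h, n_tags, n_rels = 8, 16, 5, 3
W_shared = rng.normal(size=(d_in, d_h))
W_tag = rng.normal(size=(d_h, n_tags))
W_rel = rng.normal(size=(d_h, n_rels))

tokens = rng.normal(size=(10, d_in))   # 10 token vectors
tags, rels = multitask_forward(tokens, W_shared, W_tag, W_rel)
print(tags.shape, rels.shape)          # (10, 5) (10, 3)
```

Sharing the encoder is what lets the two subtasks benefit from each other: gradients from both heads update the same representation.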
This paper presents Globalex: Lexicographic Resources for Human Language Technology, a full-day workshop at LREC 2016 in Portorož, Slovenia, which highlights the need for lexicographic resources in human language technology.
This work proposes a novel model for definition extraction (DE) that performs the two tasks simultaneously in a single framework to benefit from their inter-dependencies, and presents a multi-task learning framework that employs graph convolutional neural networks to predict the dependency paths between terms and definitions.
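The graph-convolutional component mentioned above can be illustrated with a single normalized GCN layer over a dependency adjacency matrix. This is a generic sketch of the technique, not the paper's implementation; the toy tree, feature sizes, and weight names are assumptions.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution step: each token averages itself and its
    dependency neighbours (self-loops added), then a linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)    # per-node degree for normalisation
    return np.maximum(0.0, (A_hat / deg) @ H @ W)

# Toy dependency tree over 4 tokens: edges (0-1), (1-2), (1-3), made symmetric.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (1, 3)]:
    A[i, j] = A[j, i] = 1.0

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 6))   # token features (e.g. encoder outputs)
W = rng.normal(size=(6, 6))
H2 = gcn_layer(H, A, W)
print(H2.shape)               # (4, 6)
```

Stacking such layers lets information flow along dependency paths, which is what lets the model reason about how a term connects to its definition.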
This work explores the performance of Bidirectional Encoder Representations from Transformers (BERT) on definition extraction and proposes a joint model of BERT and a Text-Level Graph Convolutional Network to incorporate syntactic dependencies into the model.
This paper describes the submissions to the DeftEval shared task (SemEval-2020 Task 6), evaluated on an English textbook corpus, and gives a detailed account of a system for the joint extraction of definition concepts and the relations among them.
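The concept-extraction half of such a joint system is typically framed as BIO sequence labeling over tokens. The decoder below is a generic sketch of that framing, not the paper's code; the tag labels (`Term`, `Def`) and the example sentence are illustrative assumptions.

```python
def bio_spans(tokens, tags):
    """Group BIO tags into labelled spans (e.g. Term / Def), mirroring
    the concept-extraction step of a joint definition-extraction system."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):   # trailing "O" flushes the last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:            # close any open span
                spans.append((label, " ".join(tokens[start:i])))
                start, label = None, None
            if tag.startswith("B-"):         # open a new span
                start, label = i, tag[2:]
        # an "I-" tag simply continues the open span
    return spans

toks = "A prototype is an early sample of a product".split()
tags = ["O", "B-Term", "O", "B-Def", "I-Def", "I-Def", "I-Def", "I-Def", "I-Def"]
print(bio_spans(toks, tags))
# [('Term', 'prototype'), ('Def', 'an early sample of a product')]
```

A relation classifier would then operate on pairs of the recovered spans, e.g. linking each Term to its Def.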