Temporal relation extraction systems aim to identify and classify the temporal relation between a pair of entities mentioned in a text. For instance, in the sentence "Bob sent a message to Alice while she was leaving her birthday party.", one can infer that the actions "sent" and "leaving" stand in a temporal relation that can be described as "simultaneous".
(Image credit: Papersgraph)
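The example above can be framed as pair classification: given a text and two event mentions, predict a label from a fixed relation inventory. Below is a minimal sketch of that setup; the TimeML-style label set and the marker-based input format are illustrative assumptions, not any specific published system.

```python
# Minimal sketch of the pair-classification setup described above. The
# TimeML-style label set and marker format are illustrative assumptions.
from dataclasses import dataclass

LABELS = ["BEFORE", "AFTER", "SIMULTANEOUS", "VAGUE"]

@dataclass
class TemporalExample:
    text: str
    event1: str  # surface form of the first event mention
    event2: str  # surface form of the second event mention

def mark_events(ex: TemporalExample) -> str:
    """Wrap both event mentions in markers so a classifier can locate them."""
    marked = ex.text.replace(ex.event1, f"<e1>{ex.event1}</e1>", 1)
    return marked.replace(ex.event2, f"<e2>{ex.event2}</e2>", 1)

ex = TemporalExample(
    text="Bob sent a message to Alice while she was leaving her birthday party.",
    event1="sent",
    event2="leaving",
)
print(mark_events(ex))  # classifier input; the gold label here is SIMULTANEOUS
```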
These leaderboards are used to track progress in Temporal Relation Extraction.
No benchmarks available.
Use these libraries to find Temporal Relation Extraction models and implementations.
Experimental results on nine real-life datasets show that LTSF-Linear surprisingly outperforms existing sophisticated Transformer-based LTSF models in all cases, and often by a large margin.
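At its core, LTSF-Linear forecasts the next horizon steps as a single linear projection of the lookback window, applied per channel, with no attention. A minimal sketch of that idea; the window sizes are illustrative, not the paper's exact settings.

```python
# Minimal sketch of the LTSF-Linear idea: one linear layer maps the
# lookback window to the forecast horizon, per channel, with no attention.
import torch
import torch.nn as nn

class LTSFLinear(nn.Module):
    def __init__(self, lookback: int, horizon: int):
        super().__init__()
        self.proj = nn.Linear(lookback, horizon)  # a single weight matrix

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback, channels) -> (batch, horizon, channels)
        return self.proj(x.transpose(1, 2)).transpose(1, 2)

model = LTSFLinear(lookback=336, horizon=96)       # illustrative window sizes
print(model(torch.randn(8, 336, 7)).shape)         # torch.Size([8, 96, 7])
```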
The TempEval-3 task is described, incorporating a three-part task structure covering event, temporal expression and temporal relation extraction; a larger dataset; and single overall task quality scores.
It is demonstrated that a pre-trained Transformer model is able to transfer from the weakly labeled examples to human-annotated benchmarks in both zero-shot and few-shot settings, and that the masking scheme is important in improving generalization.
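The summary does not spell out the masking scheme; one plausible illustration, assuming it masks event triggers in the weakly labeled text so the model must rely on surrounding context rather than memorized trigger pairs:

```python
# Hypothetical event-masking scheme for weakly labeled examples; the
# placeholder token and data format are assumptions for illustration.
def mask_events(text: str, triggers: list[str], placeholder: str = "[MASK]") -> str:
    for t in triggers:
        text = text.replace(t, placeholder, 1)
    return text

print(mask_events(
    "Bob sent a message to Alice while she was leaving her birthday party.",
    ["sent", "leaving"],
))
# Bob [MASK] a message to Alice while she was [MASK] her birthday party.
```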
A novel method, Clinical Temporal ReLation Extraction with Probabilistic Soft Logic Regularization and Global Inference (CTRL-PG), tackles the problem at the document level and significantly outperforms baseline methods for temporal relation extraction.
CATENA is presented: a sieve-based system that performs temporal and causal relation extraction and classification from English texts, exploiting the interaction between the temporal and the causal models.
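A sieve-based design chains ordered components: each labels only the pairs it is confident about and defers the rest to the next component. The toy rules below illustrate the control flow, not CATENA's actual sieves.

```python
# Toy sketch of a sieve architecture; the rules are illustrative only.
def signal_sieve(pair):
    # explicit temporal signal words decide the relation
    if "while" in pair["context"]:
        return "SIMULTANEOUS"
    return None

def timestamp_sieve(pair):
    # normalized timestamps decide the relation when both are known
    t1, t2 = pair.get("t1"), pair.get("t2")
    if t1 is not None and t2 is not None:
        return "BEFORE" if t1 < t2 else "AFTER" if t1 > t2 else "SIMULTANEOUS"
    return None

def statistical_backoff(pair):
    return "VAGUE"  # stand-in for a learned classifier handling the remainder

SIEVES = [signal_sieve, timestamp_sieve, statistical_backoff]

def classify(pair):
    for sieve in SIEVES:
        label = sieve(pair)
        if label is not None:
            return label

print(classify({"context": "sent ... while ... leaving"}))  # SIMULTANEOUS
```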
This study employs a structured perceptron with integer linear programming constraints for document-level inference during training and prediction, exploiting relational properties of temporality and learning the relations globally at the document level.
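Document-level inference here means choosing a joint labeling of all event pairs that respects properties such as transitivity. A toy sketch, with exhaustive search over a three-event document standing in for a real ILP solver, and illustrative local scores:

```python
# Toy global inference with a transitivity constraint: pick the
# highest-scoring joint labeling of all pairs such that BEFORE/AFTER
# chains compose consistently. Scores are illustrative numbers.
from itertools import product

LABELS = ["BEFORE", "AFTER"]
PAIRS = [("A", "B"), ("B", "C"), ("A", "C")]

scores = {
    ("A", "B"): {"BEFORE": 2.0, "AFTER": 0.1},
    ("B", "C"): {"BEFORE": 1.5, "AFTER": 0.2},
    ("A", "C"): {"BEFORE": 0.3, "AFTER": 1.0},  # locally prefers AFTER...
}

def consistent(assign):
    rel = dict(zip(PAIRS, assign))
    # A before B and B before C implies A before C (and dually for AFTER)
    for label in LABELS:
        if rel[("A", "B")] == label and rel[("B", "C")] == label:
            if rel[("A", "C")] != label:
                return False
    return True

best = max(
    (a for a in product(LABELS, repeat=len(PAIRS)) if consistent(a)),
    key=lambda a: sum(scores[p][l] for p, l in zip(PAIRS, a)),
)
print(dict(zip(PAIRS, best)))  # global inference flips (A, C) to BEFORE
```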
This work extends the classification model's task loss with an unsupervised auxiliary loss on the word-embedding level of the model, ensuring that the learned word representations contain both task-specific features and more general features learned from the unsupervised loss component.
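The combined objective is simply the task loss plus a weighted unsupervised term computed from the embeddings. A minimal PyTorch sketch; the skip-gram-style auxiliary head and the weight value are assumptions, not the paper's exact objective:

```python
# Sketch of a joint objective: supervised task loss plus an unsupervised
# auxiliary loss at the word-embedding level. The auxiliary head and the
# trade-off weight are illustrative assumptions.
import torch
import torch.nn as nn

vocab, dim, n_labels = 1000, 64, 4
embed = nn.Embedding(vocab, dim)
classifier = nn.Linear(dim, n_labels)
aux_head = nn.Linear(dim, vocab)          # predicts context words from embeddings
loss_fn = nn.CrossEntropyLoss()
aux_weight = 0.1                          # illustrative trade-off hyperparameter

tokens = torch.randint(0, vocab, (8, 20))    # a batch of token ids
labels = torch.randint(0, n_labels, (8,))    # temporal relation labels
context = torch.randint(0, vocab, (8, 20))   # neighboring words (aux targets)

emb = embed(tokens)                               # (8, 20, dim)
task_loss = loss_fn(classifier(emb.mean(dim=1)), labels)
aux_logits = aux_head(emb)                        # (8, 20, vocab)
aux_loss = loss_fn(aux_logits.reshape(-1, vocab), context.reshape(-1))

loss = task_loss + aux_weight * aux_loss          # the extended task loss
loss.backward()
```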
Experimental results on three high-quality event temporal relation datasets demonstrate that, when combined with pre-trained contextualized embeddings, the proposed model achieves significantly better performance than the state-of-the-art methods on all three datasets.
This work proposes a framework that enhances a deep neural network with distributional constraints constructed from probabilistic domain knowledge, solves the constrained inference problem via Lagrangian Relaxation, and applies it to end-to-end event temporal relation extraction tasks.
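Lagrangian Relaxation moves a hard constraint into the objective with a multiplier that is raised whenever the constraint is violated. A toy sketch, assuming an illustrative distributional constraint (the predicted BEFORE rate should not exceed a prior of 0.3) and omitting the supervised task loss for brevity:

```python
# Toy dual-ascent sketch of Lagrangian Relaxation for a distributional
# constraint. The prior value and the stand-in "model outputs" are
# illustrative assumptions, not the paper's actual constraints.
import torch

logits = torch.randn(100, 2, requires_grad=True)  # stand-in model outputs
target_rate = 0.3                                  # prior: at most 30% BEFORE
lam = torch.zeros(1)                               # Lagrange multiplier
opt = torch.optim.SGD([logits], lr=0.1)

for _ in range(200):
    probs = torch.softmax(logits, dim=-1)
    violation = probs[:, 0].mean() - target_rate   # positive when infeasible
    loss = lam.item() * violation                  # relaxed constraint term
    # (the supervised task loss is omitted to keep the sketch short)
    opt.zero_grad()
    loss.backward()
    opt.step()
    lam = torch.clamp(lam + 0.5 * violation.detach(), min=0.0)  # dual ascent

# the mean BEFORE probability has moved from ~0.5 toward the 0.3 prior
print(float(torch.softmax(logits, dim=-1)[:, 0].mean()))
```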
EventPlus, the first comprehensive temporal event understanding pipeline, provides a convenient tool for users to quickly obtain annotations about events and their temporal information for any user-provided document.