The sentence ordering task deals with finding the correct order of the sentences in a randomly ordered paragraph.
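As a minimal sketch of the task setup (the helper name and the toy paragraph are illustrative, not taken from any of the papers below): the input is a shuffled list of sentences, and a model must predict the permutation that restores the original order.

```python
import random

def make_ordering_example(paragraph, seed=0):
    """Shuffle a paragraph's sentences and keep the gold order as labels.

    Hypothetical helper illustrating the task: a model is trained to
    recover `gold_order` from `shuffled`.
    """
    rng = random.Random(seed)
    indices = list(range(len(paragraph)))
    rng.shuffle(indices)
    shuffled = [paragraph[i] for i in indices]
    # gold_order[j] = position in `shuffled` of the sentence that belongs j-th
    gold_order = [indices.index(j) for j in range(len(paragraph))]
    return shuffled, gold_order

paragraph = ["He woke up.", "He made coffee.", "He left for work."]
shuffled, gold = make_ordering_example(paragraph)
restored = [shuffled[i] for i in gold]
assert restored == paragraph
```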
This work proposes an end-to-end unsupervised deep learning approach based on the set-to-sequence framework to model the structure of coherent texts, and shows that useful text representations can be obtained by learning to order sentences.
This paper proposes InsertGNN, a simple yet effective model that represents the problem as a graph and adopts a Graph Neural Network (GNN) to learn the connections between sentences, achieving an accuracy of 70%, rivaling average human test scores.
A novel deep coherence model (DCM) uses a convolutional neural network architecture to capture text coherence and can be easily trained in an end-to-end fashion.
This paper presents a method that partially shuffles the training data between epochs, which makes each batch random while keeping most of the sentence ordering intact, and achieves new state-of-the-art results on word-level language modeling on both the Penn Treebank and WikiText-2 datasets.
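The partial-shuffling idea can be sketched as follows; the adjacent-swap procedure and the `swap_fraction` parameter are assumptions made for illustration, not the paper's exact algorithm:

```python
import random

def partial_shuffle(items, swap_fraction=0.1, rng=None):
    """Swap a small fraction of adjacent items, keeping most of the order.

    Illustrative sketch of partial shuffling: each epoch gets a slightly
    different ordering, but the text remains mostly coherent.
    """
    rng = rng or random.Random()
    items = list(items)
    n_swaps = max(1, int(len(items) * swap_fraction))
    for _ in range(n_swaps):
        i = rng.randrange(len(items) - 1)
        items[i], items[i + 1] = items[i + 1], items[i]
    return items
```

Because only a handful of adjacent pairs are swapped, successive epochs see fresh batch boundaries without destroying local context.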
A novel and flexible graph-based neural sentence ordering model adopts graph recurrent networks (Zhang et al., ACL 2018) to accurately learn semantic representations of the sentences, and outperforms existing state-of-the-art systems on several benchmark datasets.
It is shown that next-sentence prediction (NSP) is detrimental to training due to its context splitting and shallow semantic signal, and it is demonstrated that using multiple tasks in a multi-task pre-training framework provides better results than using any single auxiliary task.
This work devises a new approach based on multi-granular orders between sentences, which form multiple constraint graphs; these are encoded by Graph Isomorphism Networks and fused into sentence representations, and the final order is determined using the order-enhanced sentence representations.
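To make the notion of ordering constraints concrete, here is a simplified, non-learned stand-in: pairwise "a before b" constraints form a directed graph, and a consistent order can be read off with a topological sort (Kahn's algorithm). The actual model encodes such constraint graphs with Graph Isomorphism Networks rather than sorting them directly.

```python
from collections import defaultdict, deque

def order_from_constraints(n, before_pairs):
    """Derive an order over n sentences from pairwise (a, b) constraints
    meaning 'sentence a comes before sentence b', via topological sort."""
    adj = defaultdict(list)
    indegree = [0] * n
    for a, b in before_pairs:
        adj[a].append(b)
        indegree[b] += 1
    # Start from sentences with no predecessors, peel them off in order.
    queue = deque(i for i in range(n) if indegree[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return order

# Constraints: 0 before 1, 0 before 2, 2 before 1
print(order_from_constraints(3, [(0, 1), (0, 2), (2, 1)]))  # [0, 2, 1]
```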
Reorder-BART (Re-BART) is presented, which leverages a pre-trained Transformer-based model to identify a coherent order for a given set of shuffled sentences, and achieves state-of-the-art performance across 7 datasets in Perfect Match Ratio (PMR) and Kendall's tau.
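The two metrics named above are straightforward to compute. A minimal reference implementation, assuming orders are given as permutations of sentence indices (the function names are mine, not from the paper):

```python
def perfect_match_ratio(predicted_orders, gold_orders):
    """Fraction of paragraphs whose predicted order exactly matches the gold order."""
    exact = sum(p == g for p, g in zip(predicted_orders, gold_orders))
    return exact / len(gold_orders)

def kendall_tau(pred, gold):
    """Kendall's tau between a predicted and a gold ordering of the same items:
    1 - 2 * (discordant pairs) / (total pairs). Ranges from -1 to 1."""
    n = len(gold)
    pos = {s: i for i, s in enumerate(pred)}
    discordant = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if pos[gold[i]] > pos[gold[j]]
    )
    total = n * (n - 1) // 2
    return 1 - 2 * discordant / total
```

PMR rewards only fully correct orderings, while Kendall's tau gives partial credit for orders that are mostly right, which is why papers typically report both.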
STaCK, a framework based on graph neural networks and temporal commonsense knowledge, is introduced to model global information and predict the relative order of sentences; results are reported on five different datasets.