Zero-shot slot filling aims to identify slot values for target domains or slot types that have no labeled training data, typically by transferring knowledge from annotated source domains or from external resources such as slot descriptions, example values, or retrieved passages.
The MASSIVE dataset, a Multilingual Amazon SLU resource package (SLURP) for Slot-filling, Intent classification, and Virtual-assistant Evaluation, is presented along with modeling results on XLM-R and mT5, including exact-match accuracy, intent-classification accuracy, and slot-filling F1 score.
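As a minimal sketch, the dataset can be loaded from the Hugging Face Hub (assuming the published dataset id AmazonScience/massive and the en-US configuration; field names follow the dataset card):

```python
# Minimal sketch: loading the MASSIVE dataset from the Hugging Face Hub.
# Assumes the dataset id "AmazonScience/massive" and the "en-US" config;
# swap in the locale you need.
from datasets import load_dataset

massive = load_dataset("AmazonScience/massive", "en-US", split="train")

example = massive[0]
print(example["utt"])        # raw utterance
print(example["intent"])     # intent label id
print(example["annot_utt"])  # utterance with inline slot annotations
```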
Several strategies are described to improve the retriever and the generator of RAG so that it becomes a better slot filler; the resulting system reached the top-1 position on the KILT leaderboard for both the T-REx and zsRE datasets by a large margin.
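For orientation, here is a hedged sketch of querying an off-the-shelf RAG checkpoint as a slot filler; the "subject [SEP] relation" query format follows the KILT slot-filling convention, and the facebook/rag-token-nq checkpoint with a dummy retrieval index is only a stand-in, not the paper's improved retriever or generator:

```python
# Minimal sketch: querying a vanilla RAG checkpoint as a slot filler.
# The dummy retrieval index is for illustration only.
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained(
    "facebook/rag-token-nq", retriever=retriever
)

inputs = tokenizer("Albert Einstein [SEP] place of birth", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"], num_beams=4)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```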
A novel approach to zero-shot slot filling extends dense passage retrieval with hard negatives and robust training procedures for retrieval-augmented generation models, and demonstrates the robustness of the system by showing its domain-adaptation capability on a new slot-filling variant of the TACRED dataset.
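A minimal sketch of the contrastive objective behind dense passage retrieval with in-batch plus mined hard negatives (the encoders are omitted and the random embeddings below are placeholders, not the paper's models):

```python
# Minimal sketch: DPR-style contrastive loss with in-batch negatives plus
# one mined hard negative per query.
import torch
import torch.nn.functional as F

def dpr_loss(q_emb, pos_emb, hard_neg_emb):
    """q_emb, pos_emb, hard_neg_emb: [batch, dim] tensors."""
    # Candidates: all positives (in-batch negatives for other queries)
    # followed by all hard negatives.
    candidates = torch.cat([pos_emb, hard_neg_emb], dim=0)  # [2*batch, dim]
    scores = q_emb @ candidates.t()                          # [batch, 2*batch]
    # The correct passage for query i is positive i.
    targets = torch.arange(q_emb.size(0), device=q_emb.device)
    return F.cross_entropy(scores, targets)

# Toy usage with random embeddings standing in for encoder outputs.
q = F.normalize(torch.randn(4, 768), dim=-1)
p = F.normalize(torch.randn(4, 768), dim=-1)
n = F.normalize(torch.randn(4, 768), dim=-1)
print(dpr_loss(q, p, n).item())
```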
This work proposes utilizing both the slot description and a small number of examples of slot values, which may be easily available, to learn semantic representations of slots which are transferable across domains and robust to misaligned schemas.
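One way to picture this idea is the hedged sketch below, which builds a slot representation from a description plus a few example values and scores a candidate span by cosine similarity; the sentence-transformers checkpoint and the simple averaging are illustrative assumptions, not the paper's exact method:

```python
# Minimal sketch: transferable slot representation from a slot description
# plus a few example values, then scoring a candidate span.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

description = "the city the user wants to depart from"
example_values = ["Boston", "San Francisco", "Berlin"]

# Slot representation: mean of the description and example-value embeddings.
slot_emb = encoder.encode([description] + example_values).mean(axis=0)

candidate_span = "New York"
span_emb = encoder.encode([candidate_span])[0]

score = np.dot(slot_emb, span_emb) / (
    np.linalg.norm(slot_emb) * np.linalg.norm(span_emb)
)
print(f"similarity({candidate_span!r}, departure_city) = {score:.3f}")
```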
A novel approach based on prototypical contrastive learning with a dynamic label confusion strategy is proposed for zero-shot slot filling, establishing label dependence between the source domains and the target domain on the fly.
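A minimal sketch of the prototypical side of such an approach (the dynamic label-confusion strategy is omitted, and using simple means of token embeddings as prototypes is an assumption for illustration):

```python
# Minimal sketch: prototypical classification of utterance tokens.
# Each slot label gets a prototype (mean of its token embeddings); tokens
# are pulled toward their prototype via a softmax over negative distances.
import torch
import torch.nn.functional as F

def build_prototypes(token_emb, token_labels, num_labels):
    """token_emb: [n, dim]; token_labels: [n] ints in [0, num_labels)."""
    return torch.stack([
        token_emb[token_labels == c].mean(dim=0) for c in range(num_labels)
    ])                                               # [num_labels, dim]

def proto_loss(token_emb, token_labels, protos):
    # Negative squared Euclidean distance serves as the logit per label.
    dists = torch.cdist(token_emb, protos) ** 2      # [n, num_labels]
    return F.cross_entropy(-dists, token_labels)

# Toy usage with random embeddings standing in for a BERT-style encoder.
emb = torch.randn(20, 64)
labels = torch.arange(20) % 4                        # every label occurs
protos = build_prototypes(emb, labels, num_labels=4)
print(proto_loss(emb, labels, protos).item())
```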
GenSF (Generative Slot Filling) leverages a generative pre-trained open-domain dialog model for slot filling and achieves state-of-the-art results on two slot-filling datasets, with strong gains in few-shot and zero-shot settings.
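A hedged sketch of the general idea of casting slot filling as generation with a pre-trained seq2seq model (the checkpoint and prompt wording below are illustrative stand-ins; GenSF itself adapts a pre-trained open-domain dialog model rather than the instruction-tuned model used here):

```python
# Minimal sketch: slot filling as text generation with a pre-trained
# sequence-to-sequence model. Checkpoint and prompt are illustrative.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

utterance = "book me a table for four at an italian place tomorrow evening"
prompt = f'Dialog: "{utterance}"\nQuestion: what cuisine does the user want?'

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```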
This work proposes a single multi-task BERT-based model that jointly solves the three DST tasks of intent prediction, requested-slot prediction, and slot filling; it also proposes an efficient and parsimonious encoding of the dialogue history and service schemata that is shown to further improve performance.
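A minimal sketch of a shared encoder with three task heads (the head shapes, [CLS] pooling, and checkpoint are assumptions for illustration; the paper's dialogue-history and schema encoding is not reproduced here):

```python
# Minimal sketch: one BERT encoder shared by three DST heads
# (intent prediction, requested-slot prediction, slot filling/tagging).
import torch.nn as nn
from transformers import AutoModel

class JointDSTModel(nn.Module):
    def __init__(self, num_intents, num_slots, num_tags,
                 name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        hidden = self.encoder.config.hidden_size
        self.intent_head = nn.Linear(hidden, num_intents)    # utterance-level
        self.requested_head = nn.Linear(hidden, num_slots)   # multi-label
        self.tagging_head = nn.Linear(hidden, num_tags)      # token-level BIO

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]                 # [CLS] token
        return {
            "intent_logits": self.intent_head(pooled),
            "requested_logits": self.requested_head(pooled),
            "tag_logits": self.tagging_head(out.last_hidden_state),
        }
```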
This work proposes Re2G, which combines neural initial retrieval and reranking with BART-based sequence-to-sequence generation, and introduces a novel variation of knowledge distillation to train the initial retriever, reranker, and generator using only ground truth on the target sequence output.
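The sketch below illustrates the retrieve-then-rerank-then-generate shape of such a pipeline; all checkpoints are generic stand-ins, and the joint training with knowledge distillation described in the paper is not reproduced:

```python
# Minimal sketch of a retrieve -> rerank -> generate pipeline.
from sentence_transformers import SentenceTransformer, CrossEncoder, util
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

query = "Albert Einstein [SEP] place of birth"
passages = [
    "Albert Einstein was born in Ulm, in the Kingdom of Wuerttemberg.",
    "Einstein developed the theory of relativity.",
    "Ulm is a city on the river Danube.",
]

# 1) Dense initial retrieval.
retriever = SentenceTransformer("all-MiniLM-L6-v2")
hits = util.semantic_search(
    retriever.encode(query, convert_to_tensor=True),
    retriever.encode(passages, convert_to_tensor=True),
    top_k=2,
)[0]

# 2) Cross-encoder reranking of the retrieved passages.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
ranked = sorted(
    hits,
    key=lambda h: reranker.predict([(query, passages[h["corpus_id"]])])[0],
    reverse=True,
)
context = passages[ranked[0]["corpus_id"]]

# 3) Sequence-to-sequence generation conditioned on query + best passage.
name = "google/flan-t5-small"
tok = AutoTokenizer.from_pretrained(name)
gen = AutoModelForSeq2SeqLM.from_pretrained(name)
inputs = tok(f"question: {query} context: {context}", return_tensors="pt")
out = gen.generate(**inputs, max_new_tokens=8)
print(tok.decode(out[0], skip_special_tokens=True))
```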
A coarse-to-fine-grained contrastive learning scheme based on Gaussian-distributed embeddings is proposed to learn generalized deep semantic relations between utterance tokens by optimizing inter- and intra-token distribution distances.
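As a hedged illustration of the Gaussian-embedding idea, each token can be represented as a diagonal Gaussian and compared with a closed-form divergence; the choice of KL divergence as the distance is an assumption for this sketch, not necessarily the paper's measure:

```python
# Minimal sketch: representing each token as a diagonal Gaussian (mean, var)
# and measuring token-to-token distance with the closed-form KL divergence.
import torch
import torch.nn.functional as F

def gaussian_kl(mu1, var1, mu2, var2):
    """KL( N(mu1, diag(var1)) || N(mu2, diag(var2)) ); all shapes [dim]."""
    return 0.5 * torch.sum(
        torch.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0
    )

# Toy usage: two token "distributions", e.g. produced by mean and
# softplus-activated variance heads on top of an encoder.
mu_a, var_a = torch.randn(64), F.softplus(torch.randn(64))
mu_b, var_b = torch.randn(64), F.softplus(torch.randn(64))
print(gaussian_kl(mu_a, var_a, mu_b, var_b).item())
```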
A cascade-style joint learning framework coupled with context-aware soft-label representations and slot-level contrastive representation learning is proposed to mitigate the data- and label-shift problems effectively; experiments demonstrate the superiority of the approach over a series of competitive baselines.
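A minimal sketch of slot-level contrastive representation learning (a generic supervised-contrastive loss over token embeddings grouped by slot label; the cascade architecture and context-aware soft labels from the paper are not reproduced):

```python
# Minimal sketch: supervised contrastive loss over token embeddings,
# treating tokens that share a slot label as positives for one another.
import torch
import torch.nn.functional as F

def slot_supcon_loss(emb, labels, temperature=0.1):
    """emb: [n, dim] token embeddings; labels: [n] slot label ids."""
    emb = F.normalize(emb, dim=-1)
    sim = emb @ emb.t() / temperature                     # [n, n]
    n = emb.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=emb.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # Log-softmax over all other tokens, then average over positives.
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = F.log_softmax(sim, dim=1)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    return -(log_prob * pos_mask).sum(dim=1).div(pos_counts).mean()

# Toy usage with random embeddings and four slot labels.
emb = torch.randn(16, 64)
labels = torch.arange(16) % 4
print(slot_supcon_loss(emb, labels).item())
```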