3260 papers • 126 benchmarks • 313 datasets
Semantic role labeling aims to model the predicate-argument structure of a sentence and is often described as answering "Who did what to whom". BIO notation is typically used for semantic role labeling. Example:

Housing  starts  are  expected  to  quicken  a       bit     from    August’s  pace
B-ARG1   I-ARG1  O    O         O   V        B-ARG2  I-ARG2  B-ARG3  I-ARG3    I-ARG3
(Image credit: Papersgraph)
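The BIO scheme above can be decoded back into labeled argument spans with a few lines of code. Below is a minimal, self-contained sketch (not tied to any particular SRL library) that groups the example's tags into (role, text) pairs; `bio_to_spans` is an illustrative helper name, not an API from the papers listed here.

```python
def bio_to_spans(tokens, tags):
    """Group BIO tags into (role, text) spans; 'V' marks the predicate."""
    spans = []
    current_role, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-") or tag == "V":
            # A new span begins: close any open span first.
            if current_role:
                spans.append((current_role, " ".join(current_tokens)))
            current_role = tag[2:] if tag.startswith("B-") else "V"
            current_tokens = [token]
        elif tag.startswith("I-") and current_role == tag[2:]:
            # Continuation of the current span.
            current_tokens.append(token)
        else:
            # "O" (or an inconsistent I- tag) closes any open span.
            if current_role:
                spans.append((current_role, " ".join(current_tokens)))
            current_role, current_tokens = None, []
    if current_role:
        spans.append((current_role, " ".join(current_tokens)))
    return spans

tokens = "Housing starts are expected to quicken a bit from August’s pace".split()
tags = ["B-ARG1", "I-ARG1", "O", "O", "O", "V",
        "B-ARG2", "I-ARG2", "B-ARG3", "I-ARG3", "I-ARG3"]
print(bio_to_spans(tokens, tags))
# → [('ARG1', 'Housing starts'), ('V', 'quicken'),
#    ('ARG2', 'a bit'), ('ARG3', 'from August’s pace')]
```

This recovers the predicate ("quicken") and its three arguments from the running example.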
These leaderboards are used to track progress in semantic role labeling.
Use these libraries to find semantic role labeling models and implementations.
A new type of deep contextualized word representation is introduced that models both complex characteristics of word use and how these uses vary across linguistic contexts, allowing downstream models to combine different types of semi-supervision signals.

A transition-based parser for AMR that parses sentences left-to-right in linear time is described; it is shown to be competitive with the state of the art on the LDC2015E86 dataset and to outperform state-of-the-art parsers at recovering named entities and handling polarity.
A new large-scale corpus of Question-Answer driven Semantic Role Labeling (QA-SRL) annotations and the first high-quality QA-SRL parser are presented, along with neural models for two QA-SRL subtasks: detecting argument spans for a predicate and generating questions to label the semantic relationship.
This paper provides the simple insight that a great variety of tasks can be represented in a single unified format consisting of labeling spans and relations between spans, so that a single task-independent model can be used across different tasks.
This work proposes to inject corpus-level constraints for calibrating existing structured prediction models and design an algorithm based on Lagrangian relaxation for collective inference to reduce the magnitude of bias amplification in multilabel object classification and visual semantic role labeling.
This work is the first to successfully apply BERT in this manner for relation extraction and semantic role labeling, and its models provide strong baselines for future research.
It is found that a number of probing tests show a significant positive correlation with the downstream tasks, especially for morphologically rich languages, and that these tests can be used to explore word embeddings or black-box neural models for linguistic cues in a multilingual setting.
A unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling is proposed.