3260 papers • 126 benchmarks • 313 datasets
Structured Prediction is an area of machine learning focusing on representations of spaces with combinatorial structure, and on algorithms for inference and parameter estimation over those structures. Core methods include both tractable exact approaches, such as dynamic programming and spanning-tree algorithms, and heuristic techniques, such as linear programming relaxations and greedy search. Source: Torch-Struct: Deep Structured Prediction Library
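As a minimal illustration of the dynamic-programming side, here is a pure-Python Viterbi sketch for a first-order chain model. The scores are toy values and the function names are hypothetical; libraries such as Torch-Struct implement vectorized versions of this idea.

```python
def viterbi(emit, trans):
    """Highest-scoring label sequence for a first-order chain.

    emit[t][y]  : score of label y at position t
    trans[p][y] : score of transitioning from label p to label y
    """
    T, K = len(emit), len(emit[0])
    score = [emit[0][:]]            # best score of any path ending in each label
    back = []                       # backpointers for path recovery
    for t in range(1, T):
        row, ptr = [], []
        for y in range(K):
            best = max(range(K), key=lambda p: score[-1][p] + trans[p][y])
            row.append(score[-1][best] + trans[best][y] + emit[t][y])
            ptr.append(best)
        score.append(row)
        back.append(ptr)
    # Trace back the argmax path from the best final label.
    y = max(range(K), key=lambda p: score[-1][p])
    path = [y]
    for ptr in reversed(back):
        y = ptr[y]
        path.append(y)
    return list(reversed(path))
```

The same table-filling pattern underlies exact inference in linear-chain CRFs; only the semiring (max vs. sum) changes.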
These leaderboards are used to track progress in Structured Prediction
Use these libraries to find Structured Prediction models and implementations
This work designs a sequential architecture of convolutional networks that operate directly on the belief maps produced by previous stages, yielding increasingly refined estimates of part locations without explicit graphical-model-style inference, for structured prediction tasks such as articulated pose estimation.
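The stage-wise refinement idea can be sketched abstractly as follows; the function names and the toy smoothing stage are hypothetical stand-ins, not the paper's network:

```python
def stacked_refinement(features, init_beliefs, stages):
    """Run a cascade in which each stage consumes the image features
    together with the previous stage's belief maps and emits refined
    belief maps; later stages can correct earlier mistakes."""
    beliefs = init_beliefs
    for stage in stages:
        beliefs = stage(features, beliefs)
    return beliefs

# Toy stage: pull each belief halfway toward the feature evidence.
toy_stage = lambda f, b: [0.5 * (fi + bi) for fi, bi in zip(f, b)]
```

Repeated application of such a stage moves the beliefs monotonically toward the evidence, which is the intuition behind replacing explicit message passing with learned refinement.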
A simple change to the common loss functions used for multi-modal embeddings is introduced, inspired by hard negative mining, the use of hard negatives in structured prediction, and ranking loss functions; it yields significant gains in retrieval performance.
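The hard-negative idea can be sketched in pure Python over a hypothetical similarity matrix (the real loss operates on learned image and caption embeddings; all names here are illustrative):

```python
def max_violation_loss(sim, margin=0.2):
    """Hinge ranking loss using only the hardest negatives.

    sim[i][j]: similarity between image i and caption j, with matching
    pairs on the diagonal. Instead of summing the hinge over all
    negatives in the batch, keep only the single hardest negative in
    each direction for every positive pair.
    """
    n = len(sim)
    loss = 0.0
    for i in range(n):
        pos = sim[i][i]
        hard_cap = max(sim[i][j] for j in range(n) if j != i)  # hardest caption for image i
        hard_img = max(sim[j][i] for j in range(n) if j != i)  # hardest image for caption i
        loss += max(0.0, margin + hard_cap - pos)
        loss += max(0.0, margin + hard_img - pos)
    return loss
```

When every positive outscores all negatives by the margin, the loss is zero; otherwise only the worst violators contribute gradient.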
Concrete random variables, continuous relaxations of discrete random variables, are introduced as a new family of distributions with closed-form densities and a simple reparameterization; the effectiveness of Concrete relaxations on density estimation and structured prediction tasks with neural networks is demonstrated.
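The reparameterization can be sketched directly: add independent Gumbel noise to the logits and apply a tempered softmax (a minimal stdlib version; deep learning frameworks provide batched equivalents):

```python
import math
import random

def concrete_sample(logits, temperature=0.5):
    """Sample from the Concrete (Gumbel-softmax) relaxation.

    As temperature -> 0 the sample approaches a one-hot draw from the
    underlying categorical distribution; at higher temperatures the
    sample is a smooth point on the simplex, so gradients flow through.
    """
    gumbels = [-math.log(-math.log(random.random())) for _ in logits]
    scores = [(l + g) / temperature for l, g in zip(logits, gumbels)]
    m = max(scores)                      # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

The output always lies on the probability simplex, which is what makes the relaxation usable inside backpropagation.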
This work presents an ANN architecture that combines the effectiveness of typical ANN models at classifying sentences in isolation with the strength of structured prediction, and it outperforms state-of-the-art results on two different datasets for sequential sentence classification in medical abstracts.
Iterated Dilated Convolutional Neural Networks (ID-CNNs) are proposed; they have better capacity than traditional CNNs for large-context structured prediction and are more accurate than Bi-LSTM-CRFs while attaining 8x faster test-time speeds.
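The core mechanism is the dilated convolution, sketched here in one dimension (toy kernel and input, not the paper's architecture): stacking layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth.

```python
def dilated_conv1d(x, kernel, dilation=1):
    """Valid 1-D convolution with `dilation - 1` skipped positions
    between kernel taps, so a small kernel can cover wide context."""
    k = len(kernel)
    span = (k - 1) * dilation          # input positions covered by one window
    out = []
    for t in range(len(x) - span):
        out.append(sum(kernel[i] * x[t + i * dilation] for i in range(k)))
    return out
```

With dilation 1 this reduces to an ordinary valid convolution; with dilation 2 each tap skips one input position.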
This paper presents Expert Iteration (ExIt), a novel reinforcement learning algorithm which decomposes the problem into separate planning and generalisation tasks, and shows that ExIt outperforms REINFORCE for training a neural network to play the board game Hex, and the final tree search agent, trained tabula rasa, defeats MoHex 1.0.
Memory Augmented Policy Optimization (MAPO) is presented, a simple and novel way to leverage a memory buffer of promising trajectories to reduce the variance of the policy gradient estimate; it improves the sample efficiency and robustness of policy gradient methods, especially on tasks with sparse rewards.
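The variance-reduction trick can be sketched on a toy enumerable trajectory space (hypothetical names; here the outside-buffer term is enumerated exactly, whereas in practice it is sampled):

```python
def mapo_decomposition(probs, returns, in_buffer):
    """Split E[R] into an exact, zero-variance sum over the memory
    buffer of promising trajectories plus a reweighted expectation
    over the remaining trajectories."""
    idx = range(len(probs))
    buf = [i for i in idx if in_buffer[i]]
    out = [i for i in idx if not in_buffer[i]]
    exact = sum(probs[i] * returns[i] for i in buf)   # computed exactly
    p_out = sum(probs[i] for i in out)                # leftover probability mass
    if p_out == 0.0:
        return exact
    # Expectation under the renormalized outside-buffer distribution
    # (in MAPO this term is estimated by sampling, not enumeration).
    e_out = sum(probs[i] / p_out * returns[i] for i in out)
    return exact + p_out * e_out
```

The decomposition is unbiased: it recovers the full expectation exactly, while the sampled part now carries only the leftover probability mass.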
This paper proposes a new iterative algorithm that trains a stationary deterministic policy and can be seen as a no-regret algorithm in an online learning setting; the new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.
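The dataset-aggregation loop at the heart of this family of methods can be sketched with a toy expert and a trivial memorizing learner (all components below are hypothetical placeholders):

```python
def dagger(expert, rollout, train, n_iters=5):
    """DAgger-style aggregation: roll out the *current* policy to
    collect states, label those states with the expert's actions,
    grow one aggregate dataset, and retrain on it each iteration."""
    data = []                                   # aggregated (state, expert action) pairs
    policy = train(data)
    for _ in range(n_iters):
        states = rollout(policy)                # states visited under current policy
        data.extend((s, expert(s)) for s in states)
        policy = train(data)                    # supervised learning on the aggregate
    return policy

# Toy instantiation: integer states, a threshold expert, and a
# learner that simply memorizes the aggregated expert labels.
expert = lambda s: 1 if s >= 0 else 0
rollout = lambda policy: [-2, -1, 0, 1, 2]
train = lambda data: (lambda s, t=dict(data): t.get(s, 0))
```

Training on states the learner itself visits is what distinguishes this loop from ordinary behavioral cloning, which only ever sees expert states.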
An algorithm is proposed that takes full advantage of training batches by lifting the vector of pairwise distances within the batch to the matrix of pairwise distances, enabling it to learn state-of-the-art feature embeddings by optimizing a novel structured prediction objective on the lifted problem.
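A pure-Python sketch of the lifted objective on a precomputed distance matrix (toy inputs; the real method differentiates through the embedding network producing these distances):

```python
import math

def lifted_struct_loss(dist, labels, margin=1.0):
    """Lifted structured loss over a batch's pairwise distance matrix.

    For every positive pair (i, j), all negatives of both i and j in
    the batch contribute through a smooth log-sum-exp term, rather
    than a single sampled negative per pair.
    """
    n = len(labels)
    total, n_pos = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] != labels[j]:
                continue                         # only positive pairs anchor a term
            n_pos += 1
            neg = sum(math.exp(margin - dist[i][k])
                      for k in range(n) if labels[k] != labels[i])
            neg += sum(math.exp(margin - dist[j][k])
                       for k in range(n) if labels[k] != labels[j])
            J = math.log(neg) + dist[i][j]       # smooth max-violation surrogate
            total += max(0.0, J) ** 2
    return total / (2 * n_pos) if n_pos else 0.0
```

The log-sum-exp acts as a smooth upper bound on the hardest-negative hinge, which keeps the objective differentiable.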
This paper presents the input convex neural network (ICNN) architecture: scalar-valued (potentially deep) neural networks with constraints on the network parameters such that the output of the network is a convex function of (some of) the inputs.
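A minimal pure-Python sketch of the convexity mechanism, with hand-picked hypothetical weights (this illustrates the parameter constraint, not the paper's training procedure):

```python
def icnn(x, params):
    """Tiny input convex neural network: each layer computes
    z = relu(Wx @ x + clip(Wz, min=0) @ z_prev + b). Keeping the
    z-to-z weights non-negative, with a convex non-decreasing
    activation, makes the scalar output convex in x."""
    relu = lambda v: [max(0.0, u) for u in v]
    z = None
    for Wx, Wz, b in params:
        pre = [sum(w * xi for w, xi in zip(rx, x)) + bi
               for rx, bi in zip(Wx, b)]
        if z is not None:
            pre = [p + sum(max(0.0, w) * zi for w, zi in zip(rz, z))
                   for p, rz in zip(pre, Wz)]
        z = relu(pre)
    return sum(z)   # a sum of convex non-negative units is convex

# Hand-picked weights realizing f(x) = |x|, a convex function.
params = [
    ([[1.0], [-1.0]], None, [0.0, 0.0]),   # first layer: no z input yet
    ([[0.0]], [[1.0, 1.0]], [0.0]),        # second layer sums the two hinges
]
```

Because the output is convex in the input, inference over inputs (e.g. minimizing the network over x) becomes a convex optimization problem.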