Text Style Transfer is the task of controlling certain attributes of generated text, such as its sentiment or formality. State-of-the-art methods fall into two main types, depending on whether they operate on parallel or non-parallel data. Methods for parallel data are typically supervised, using a neural sequence-to-sequence model with an encoder-decoder architecture. Methods for non-parallel data are usually unsupervised, relying on Disentanglement, Prototype Editing, or Pseudo-Parallel Corpus Construction. A popular benchmark for this task is the Yelp Review Dataset, and models are typically evaluated with Sentiment Accuracy, BLEU, and perplexity (PPL).
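As a rough illustration of this evaluation protocol, here is a minimal Python sketch of two of the metrics: Sentiment Accuracy computed with an external classifier (assumed given) and BLEU computed against the source sentence with NLTK; PPL would additionally require a pretrained language model. The function names and arguments are illustrative, not from any particular codebase.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(source_tokens, output_tokens):
    # BLEU of the transferred sentence against its source: a common
    # content-preservation proxy when no parallel reference exists.
    smooth = SmoothingFunction().method1
    return sentence_bleu([source_tokens], output_tokens,
                         smoothing_function=smooth)

def sentiment_accuracy(classify, outputs, target_label):
    # Fraction of transferred sentences that an external sentiment
    # classifier (assumed given) assigns to the target sentiment.
    preds = [classify(sent) for sent in outputs]
    return sum(p == target_label for p in preds) / len(preds)
```

For instance, `self_bleu("the food was great".split(), "the food was awful".split())` measures how much of the source content survives the transfer.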
These leaderboards are used to track progress in Text Style Transfer
Use these libraries to find Text Style Transfer models and implementations
This paper proposes a method that leverages refined alignment of latent representations to perform style transfer from non-parallel text, and demonstrates the effectiveness of this cross-alignment method on three tasks: sentiment modification, decipherment of word substitution ciphers, and recovery of word order.
A deep generative model for unsupervised text style transfer that unifies previously proposed non-generative techniques and demonstrates the effectiveness of the method on a wide range of unsupervised style transfer tasks, including sentiment transfer, formality transfer, word decipherment, author imitation, and related-language translation.
The Style Transformer is proposed, which makes no assumption about the latent representation of the source sentence and leverages the attention mechanism of the Transformer to achieve better style transfer and better content preservation.
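A minimal sketch of the conditioning idea in PyTorch, with hypothetical layer sizes: the target style enters the model as an extra embedding prepended to the source tokens, so attention can condition on it without any disentangled latent representation. This illustrates the mechanism, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class StyleConditionedTransformer(nn.Module):
    # Hypothetical sizes; illustrates style conditioning, not the
    # published Style Transformer architecture.
    def __init__(self, vocab_size, num_styles, d_model=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.style_emb = nn.Embedding(num_styles, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids, style_ids):
        # Prepend the target-style embedding as an extra "token" so
        # every attention layer can condition on the requested style.
        style = self.style_emb(style_ids).unsqueeze(1)        # (B, 1, d)
        src = torch.cat([style, self.tok_emb(src_ids)], dim=1)
        hidden = self.transformer(src, self.tok_emb(tgt_ids))
        return self.out(hidden)                               # (B, T, V)
```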
A latent representation of the input sentence, grounded in a language translation model, is learned in order to better preserve the meaning of the sentence while reducing its stylistic properties, and adversarial generation techniques are used to make the output match the desired style.
A simple yet effective approach is proposed that incorporates auxiliary multi-task and adversarial objectives for style prediction and bag-of-words prediction, respectively; this disentangled latent representation learning can be applied to style transfer on non-parallel corpora.
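One common realization of this kind of objective combines a multi-task style-prediction loss on the style space with an adversarial loss on the content space via gradient reversal. The sketch below follows that pattern with hypothetical classifier heads; the paper's exact heads (e.g., bag-of-words prediction) may differ.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; flips the gradient sign in the
    # backward pass, so the encoder is trained to fool the adversary.
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def disentanglement_loss(recon_loss, style_z, content_z, labels,
                         style_head, adv_head):
    # Multi-task objective: the style space should predict the label.
    multi_task = F.cross_entropy(style_head(style_z), labels)
    # Adversarial objective: the content space should NOT predict it;
    # gradient reversal pushes style information out of content_z.
    adversarial = F.cross_entropy(
        adv_head(GradReverse.apply(content_z)), labels)
    return recon_loss + multi_task + adversarial
```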
It is shown that the disentanglement condition is neither necessary nor always met in practice, even with domain-adversarial training that explicitly aims at learning disentangled representations, and a new model is proposed in which this condition is replaced with a simpler mechanism based on back-translation.
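A hedged sketch of the back-translation mechanism, with illustrative names: a forward model rewrites the sentence into the target style, a backward model maps the result back, and the reconstruction loss supervises the transfer without requiring disentangled representations.

```python
def back_translation_loss(fwd, bwd, x, src_style, tgt_style, seq_loss):
    # fwd and bwd are hypothetical seq2seq models: fwd rewrites x into
    # the target style, bwd translates the result back to the source
    # style. Detaching the pseudo-translation is one common choice so
    # only the backward model is supervised at this step.
    y_pseudo = fwd(x, tgt_style)
    x_rec = bwd(y_pseudo.detach(), src_style)
    return seq_loss(x_rec, x)
```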
This work proposes a simpler approach, Iterative Matching and Translation (IMaT), which constructs a pseudo-parallel corpus by aligning a subset of semantically similar sentences from the source and the target corpora and iteratively improves the learned transfer function by refining imperfections in the alignment.
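A minimal NumPy sketch of the matching step, assuming precomputed sentence embeddings for both corpora; the similarity threshold is illustrative.

```python
import numpy as np

def build_pseudo_parallel(src_emb, tgt_emb, threshold=0.8):
    # Normalize rows so the dot product is cosine similarity.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                    # (n_src, n_tgt) similarities
    nearest = sim.argmax(axis=1)         # best target for each source
    confident = sim.max(axis=1) >= threshold
    # Keep only confident pairs as the pseudo-parallel corpus.
    return [(i, int(j)) for i, j in enumerate(nearest) if confident[i]]
```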
This work augments adversarial autoencoders with a denoising objective where original sentences are reconstructed from perturbed versions (referred to as DAAE) and proves that this simple modification guides the latent space geometry of the resulting model by encouraging the encoder to map similar texts to similar latent representations.
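The denoising objective requires a perturbation function; below is a hedged sketch of one common choice, random word dropout plus a small local shuffle, with illustrative parameter values rather than the paper's settings.

```python
import random

def perturb(tokens, p_drop=0.1, k_shuffle=3):
    # Random word dropout, keeping at least one token.
    kept = [t for t in tokens if random.random() > p_drop] or tokens[:1]
    # Local shuffle: each token moves at most ~k_shuffle positions.
    keys = [i + random.uniform(0, k_shuffle) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept))]
```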
This work proposes two novel evaluation metrics that measure two aspects of style transfer, transfer strength and content preservation, and shows that the proposed content preservation metric is highly correlated with human judgments.
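A minimal sketch of how these two metrics are often instantiated, assuming a pretrained style classifier (for transfer strength) and precomputed sentence embeddings (for content preservation via cosine similarity):

```python
import numpy as np

def transfer_strength(predicted_styles, target_style):
    # Fraction of outputs a style classifier (assumed given) assigns
    # to the target style.
    return float(np.mean(np.asarray(predicted_styles) == target_style))

def content_preservation(src_emb, out_emb):
    # Mean cosine similarity between source and output sentence
    # embeddings: one common instantiation of the metric.
    s = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    o = out_emb / np.linalg.norm(out_emb, axis=1, keepdims=True)
    return float(np.mean(np.sum(s * o, axis=1)))
```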