Article commenting poses new challenges for machines, as it involves multiple cognitive abilities: understanding the given article, formulating opinions and arguments, and organizing natural language for expression.
(Image credit: Papersgraph)
These leaderboards are used to track progress in Comment Generation.
No benchmarks available.
Use these libraries to find Comment Generation models and implementations.
No datasets available.
No subtasks available.
InCoder is introduced, a unified generative model that can perform program synthesis (via left-to-right generation) as well as editing (via infilling); the ability to condition on bidirectional context substantially improves performance on editing tasks, while the model still performs comparably on standard program synthesis benchmarks.
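The infilling setup described above can be illustrated with a small prompt-construction sketch. The sentinel token string and the two-sentinel layout below are assumptions for illustration, not InCoder's confirmed API; the idea is only that the model sees the code before and after a masked span, then generates the missing span.

```python
# Hedged sketch of an infilling prompt, assuming a sentinel-based format:
# the model conditions on prefix + sentinel + suffix, and a trailing
# sentinel cues it to generate the masked span left-to-right.
SENTINEL = "<|mask:0|>"  # illustrative token name, not necessarily InCoder's

def make_infill_prompt(prefix: str, suffix: str) -> str:
    """Build a prompt where the span between prefix and suffix is masked.

    The model would read the full prompt and emit the missing code after
    the final sentinel, conditioning on both sides of the gap.
    """
    return prefix + SENTINEL + suffix + SENTINEL

# Example: ask the model to fill in the body of a function.
prompt = make_infill_prompt("def add(a, b):\n    ", "\n")
```

This bidirectional conditioning is what distinguishes infilling from plain left-to-right generation, where only the prefix is visible.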
A new attention module called Code Attention is proposed to translate code into comments; it exploits domain features of code snippets, such as symbols and identifiers, and outperforms existing approaches on both BLEU and METEOR.
This research proposes CodeReviewer, a pre-trained model that utilizes four pre-training tasks tailored specifically for the code review scenario, and establishes a high-quality benchmark dataset, built from the collected data, covering three code review tasks.
Experimental results show that CoNT clearly outperforms the conventional training framework on all ten benchmarks by a convincing margin, and achieves a new state of the art on summarization, code comment generation (without external data), and data-to-text generation.
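A common ingredient of contrastive training frameworks like the one above is a sequence-level ranking loss: candidate outputs that are closer to the reference (e.g., by BLEU) should receive higher model scores. The sketch below is a generic pairwise margin loss under that assumption, not CoNT's exact objective.

```python
def margin_ranking_loss(scores, quality, margin=0.1):
    """Generic pairwise margin loss over candidate sequences.

    scores:  model scores for each candidate (higher = preferred by model)
    quality: reference-based quality for each candidate (e.g., BLEU)

    For every ordered pair where candidate i is better than candidate j,
    penalize the model if it does not score i above j by at least `margin`.
    """
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if quality[i] > quality[j]:
                loss += max(0.0, margin - (scores[i] - scores[j]))
                pairs += 1
    return loss / max(pairs, 1)

# If the model already ranks the better candidate higher by a wide margin,
# the loss is zero; if the ranking is inverted, the loss grows.
```

In practice the candidates are typically drawn from the model's own beam or sampled outputs, so the loss teaches the model to discriminate among its plausible generations rather than only to imitate the reference.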
This work proposes a new task, cross-modal automatic commenting (CMAC), which aims to make comments by integrating contents from multiple modalities, and presents an effective co-attention model to capture the dependency between textual and visual information.
Experimental results show that the proposed Personalized Comment Generation Network (PCGN), which combines user feature embeddings with a gated memory and attends to user descriptions to model the personality of users, can generate natural, human-like, and personalized comments.
Malcom, an end-to-end adversarial comment generation framework, is developed that can successfully mislead five of the latest neural detection models to always output targeted real and fake news labels.
This work proposes to use the existing comments of similar source code, retrieved with an open-source search engine, as exemplars to guide the comment generation process, and demonstrates that this model significantly outperforms state-of-the-art methods.
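The retrieval step in an exemplar-guided approach like the one above can be sketched with a simple lexical similarity search: find the most similar code snippet in a corpus and reuse its comment as an exemplar. The Jaccard-over-tokens measure and the function names here are illustrative choices, not the paper's actual retrieval method (which uses a search engine).

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two code strings."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def retrieve_exemplar(query_code: str, corpus):
    """Return the comment paired with the most similar code in the corpus.

    corpus: list of (code, comment) pairs. The retrieved comment would
    then be fed to the generator as an exemplar alongside the query code.
    """
    best_code, best_comment = max(corpus, key=lambda p: jaccard(query_code, p[0]))
    return best_comment

# Example corpus of (code, comment) pairs:
corpus = [
    ("def add(a, b): return a + b", "Add two numbers."),
    ("def read_file(path): return open(path).read()", "Read a file."),
]
```

A production system would use an inverted index or a dedicated search engine rather than a linear scan, but the exemplar-as-guidance idea is the same.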
Experiments on the market comment generation task show that exploiting contrastive examples improves the capability of generating sentences with better lexical choice, without degrading the fluency.