3260 papers • 126 benchmarks • 313 datasets
Code Documentation Generation is a supervised task in which a code function is given as input and the model generates the documentation for that function. Description from: CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing
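For concreteness, the input/output setup can be sketched with an off-the-shelf code-summarization checkpoint; the model name below (`Salesforce/codet5-base-multi-sum`) is an assumption for illustration and is not referenced on this page.

```python
# Minimal sketch of the code -> documentation setup: the model takes a code
# function as input and generates a natural-language description of it.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Salesforce/codet5-base-multi-sum"  # assumed public checkpoint, for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

code = "def add(a, b):\n    return a + b"                # input: a code function
inputs = tokenizer(code, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))  # generated documentation
```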
These leaderboards are used to track progress in Code Documentation Generation
Use these libraries to find Code Documentation Generation models and implementations
This work develops CodeBERT with a Transformer-based neural architecture and trains it with a hybrid objective function that incorporates the pre-training task of replaced token detection, i.e., detecting plausible alternative tokens sampled from generators.
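A toy sketch of the replaced-token-detection idea (label which tokens a generator has swapped in) might look as follows; the tiny discriminator and shapes are illustrative only, not CodeBERT's actual architecture.

```python
# Toy sketch of a replaced-token-detection (RTD) objective: corrupt a fraction of
# token positions, then train a per-token binary classifier to spot the replacements.
import torch
import torch.nn as nn

vocab_size, hidden = 1000, 64
discriminator = nn.Sequential(
    nn.Embedding(vocab_size, hidden),
    nn.Linear(hidden, 1),                                 # per-token logit: replaced or not
)

tokens = torch.randint(0, vocab_size, (2, 16))            # original token ids
replaced_mask = torch.rand(2, 16) < 0.15                  # positions a generator would corrupt
corrupted = tokens.clone()
corrupted[replaced_mask] = torch.randint(0, vocab_size, (int(replaced_mask.sum()),))

logits = discriminator(corrupted).squeeze(-1)             # (batch, seq_len)
loss = nn.functional.binary_cross_entropy_with_logits(logits, replaced_mask.float())
loss.backward()
```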
A new model (HAConvGNN) is proposed that uses a hierarchical attention mechanism to consider the relevant code cells and the relevant code token information when generating documentation in computational notebooks.
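The hierarchical attention idea (token-level attention within each code cell, then cell-level attention across the notebook) can be sketched roughly as below; this is a simplified illustration, not the HAConvGNN architecture itself.

```python
# Illustrative two-level attention: attend over tokens inside each cell to build
# cell vectors, then attend over the cell vectors to build a notebook context vector.
import torch
import torch.nn as nn

class HierarchicalAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.token_query = nn.Linear(dim, 1)   # scores tokens within a cell
        self.cell_query = nn.Linear(dim, 1)    # scores cells within a notebook

    def forward(self, cells):                  # cells: (num_cells, num_tokens, dim)
        token_weights = torch.softmax(self.token_query(cells), dim=1)
        cell_vectors = (token_weights * cells).sum(dim=1)        # (num_cells, dim)
        cell_weights = torch.softmax(self.cell_query(cell_vectors), dim=0)
        return (cell_weights * cell_vectors).sum(dim=0)          # context vector (dim,)

context = HierarchicalAttention(dim=32)(torch.randn(4, 10, 32))
```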
This work evaluates the memorization and generalization tendencies of neural code intelligence models through a case study across several benchmarks and model families, leveraging established approaches from other fields that use DNNs, such as introducing targeted noise into the training dataset.
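One way to realize such a targeted-noise probe is to corrupt the targets of a fixed subset of training examples and compare fit on that subset against clean held-out performance; the helper below is a hypothetical sketch, with dataset fields and the `train`/`evaluate` routines left as placeholders.

```python
# Sketch of a noise-injection probe: corrupt the documentation targets of a small,
# fixed subset of training examples, then check whether the model still fits them
# (memorization) while also measuring accuracy on clean held-out data (generalization).
import random

def inject_targeted_noise(examples, fraction=0.1, seed=0):
    rng = random.Random(seed)
    noisy = [dict(ex) for ex in examples]
    targets = [ex["doc"] for ex in examples]
    flagged = rng.sample(range(len(noisy)), int(fraction * len(noisy)))
    for i in flagged:
        noisy[i]["doc"] = rng.choice(targets)   # replace the doc with an unrelated one
        noisy[i]["is_noisy"] = True
    return noisy, flagged

# model = train(noisy_train)                     # hypothetical training routine
# memorization = evaluate(model, noisy_subset)   # fit on the corrupted examples
# generalization = evaluate(model, clean_test)   # accuracy on clean held-out data
```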
This work assembles available foundation models, such as CodeBERT and GPT-2, into a single model named AdaMo, and utilizes Gaussian noise as a simulation of contextual information to optimize the latent representation.
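The Gaussian-noise idea can be illustrated by perturbing latent representations during training; the toy encoder below is an assumption for illustration and not the AdaMo pipeline.

```python
# Minimal sketch: add Gaussian noise to latent representations at training time,
# standing in for unavailable contextual information.
import torch
import torch.nn as nn

class NoisyLatentEncoder(nn.Module):
    def __init__(self, dim=64, noise_std=0.1):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.noise_std = noise_std

    def forward(self, hidden):                   # hidden: (batch, seq, dim)
        latent = self.proj(hidden)
        if self.training:                        # perturb only during training
            latent = latent + torch.randn_like(latent) * self.noise_std
        return latent

encoder = NoisyLatentEncoder()
out = encoder(torch.randn(2, 8, 64))
```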
RepoAgent, an open-source framework powered by large language models and aimed at proactively generating, maintaining, and updating code documentation, is introduced, and it is shown to excel at generating high-quality repository-level documentation.
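At a high level, repository-level documentation generation of this kind can be approximated by walking a repository, extracting function sources, and prompting an LLM for each; the sketch below uses a hypothetical `generate_with_llm` callable and is not RepoAgent's actual implementation.

```python
# Rough sketch: collect every function in a Python repository and ask an LLM to
# draft documentation for each one, keyed by file path and function name.
import ast
from pathlib import Path

def collect_functions(repo_root):
    for path in Path(repo_root).rglob("*.py"):
        source = path.read_text(encoding="utf-8")
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                yield path, node.name, ast.get_source_segment(source, node)

def document_repository(repo_root, generate_with_llm):   # generate_with_llm is a placeholder
    docs = {}
    for path, name, code in collect_functions(repo_root):
        prompt = f"Write documentation for this function:\n\n{code}"
        docs[f"{path}:{name}"] = generate_with_llm(prompt)
    return docs
```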