Code Documentation Generation is a supervised task in which a code function is the input to the model and the model generates documentation for this function.
Description from: CodeTrans: Towards Cracking the Language of Silicone's Code Through Self-Supervised Deep Learning and High Performance Computing
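As a concrete illustration of the input/output format, the minimal sketch below feeds one Python function to a pretrained sequence-to-sequence model from the Hugging Face Hub and prints the generated documentation; the exact checkpoint name is an assumption, and any code-summarization checkpoint would work the same way.

```python
# Minimal sketch: generate documentation for one Python function with a
# pretrained seq2seq model. The checkpoint name is an assumption; swap in
# any code-documentation / code-summarization model from the Hub.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL = "SEBIS/code_trans_t5_base_code_documentation_generation_python"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

code = '''def add(a, b):
    return a + b'''

inputs = tokenizer(code, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=48)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```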
These leaderboards are used to track progress in Code Documentation Generation.
Use these libraries to find Code Documentation Generation models and implementations.
This work develops CodeBERT with a Transformer-based neural architecture and trains it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which detects plausible alternative tokens sampled from generators.
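To make the replaced-token-detection objective concrete, the toy PyTorch sketch below (illustrative only, not the CodeBERT implementation; all sizes and the 15% replacement rate are assumptions) samples replacements for a subset of token positions from a small generator and trains a discriminator to flag which positions were replaced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy replaced-token-detection step (illustrative only, not CodeBERT's code).
VOCAB, DIM, SEQ, BATCH = 1000, 64, 16, 8

embed = nn.Embedding(VOCAB, DIM)
generator_head = nn.Linear(DIM, VOCAB)      # proposes plausible replacement tokens
discriminator = nn.Sequential(              # predicts "was this token replaced?"
    nn.Linear(DIM, DIM), nn.GELU(), nn.Linear(DIM, 1)
)

tokens = torch.randint(0, VOCAB, (BATCH, SEQ))

# 1) Sample replacements from the generator at ~15% of positions.
with torch.no_grad():
    logits = generator_head(embed(tokens))                  # (B, S, V)
    sampled = torch.distributions.Categorical(logits=logits).sample()
mask = torch.rand(BATCH, SEQ) < 0.15
corrupted = torch.where(mask, sampled, tokens)

# 2) The discriminator is trained to detect which positions were replaced.
is_replaced = (corrupted != tokens).float()
pred = discriminator(embed(corrupted)).squeeze(-1)          # (B, S)
loss = F.binary_cross_entropy_with_logits(pred, is_replaced)
loss.backward()
print(f"RTD loss: {loss.item():.4f}")
```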
A new model, HAConvGNN, is proposed that uses a hierarchical attention mechanism to consider the relevant code cells and the relevant code tokens when generating documentation in computational notebooks.
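A minimal sketch of the two-level idea, assuming plain dot-product attention and toy dimensions rather than the authors' HAConvGNN architecture: attend over tokens within each cell, then over the resulting cell summaries.

```python
import torch
import torch.nn.functional as F

# Hierarchical attention sketch: token-level attention inside each cell,
# then cell-level attention over the per-cell summaries.
CELLS, TOKENS, DIM = 4, 12, 32

cell_tokens = torch.randn(CELLS, TOKENS, DIM)   # token embeddings per code cell
decoder_state = torch.randn(DIM)                # current decoder query

# 1) Token-level attention within each cell -> one summary vector per cell.
token_scores = cell_tokens @ decoder_state                               # (CELLS, TOKENS)
token_weights = F.softmax(token_scores, dim=-1)
cell_summaries = (token_weights.unsqueeze(-1) * cell_tokens).sum(dim=1)  # (CELLS, DIM)

# 2) Cell-level attention over the summaries -> a single context vector.
cell_scores = cell_summaries @ decoder_state                             # (CELLS,)
cell_weights = F.softmax(cell_scores, dim=-1)
context = (cell_weights.unsqueeze(-1) * cell_summaries).sum(dim=0)       # (DIM,)
print(context.shape)  # torch.Size([32])
```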
This work evaluates the memorization and generalization tendencies in neural code intelligence models through a case study across several benchmarks and model families by leveraging established approaches from other fields that use DNNs, such as introducing targeted noise into the training dataset.
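The targeted-noise probe mentioned above can be sketched generically: corrupt the labels of a fixed fraction of training examples and observe whether the model fits them anyway. The snippet below shows only the label-corruption step, with assumed dataset shapes rather than the study's exact protocol.

```python
import numpy as np

# Sketch: corrupt the labels of a fixed fraction of training examples, then
# compare how training vs. held-out accuracy evolves. A model that fits the
# corrupted subset perfectly while generalization stalls is memorizing.
rng = np.random.default_rng(0)

n_train, n_classes, noise_rate = 10_000, 10, 0.2
labels = rng.integers(0, n_classes, size=n_train)

noisy_idx = rng.choice(n_train, size=int(noise_rate * n_train), replace=False)
noisy_labels = labels.copy()
# Shift by a nonzero offset so every corrupted label is guaranteed to be wrong.
noisy_labels[noisy_idx] = (labels[noisy_idx]
                           + rng.integers(1, n_classes, size=noisy_idx.size)) % n_classes

assert (noisy_labels[noisy_idx] != labels[noisy_idx]).all()
print(f"Corrupted {noisy_idx.size} of {n_train} labels "
      f"({(noisy_labels != labels).mean():.1%} differ)")
```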
Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive language models (Transformer-XL, XLNet) and two auto-encoder models (Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences (22- and 112-times the entire English Wikipedia). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs) and one TPU Pod (V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3=76-84, 8-states: Q8=65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs.
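The downstream recipe described, extracting per-residue embeddings from a pretrained protein LM and feeding them to a small predictor, looks roughly like the sketch below; the checkpoint name and the space-separated residue format are assumptions based on the publicly released ProtTrans models, not details from the abstract.

```python
# Sketch: extract per-residue embeddings from a pretrained protein LM and use
# them as features for a downstream predictor (e.g. secondary structure).
# Checkpoint name and input formatting are assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL = "Rostlab/prot_bert"  # assumed public ProtTrans checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL, do_lower_case=False)
model = AutoModel.from_pretrained(MODEL)

sequence = "M K T A Y I A K Q R"   # residues separated by spaces
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state  # (1, seq_len, hidden)

print(embeddings.shape)  # per-residue features for a downstream classifier
```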
This work assembles available foundation models, such as CodeBERT and GPT-2, into a single model named AdaMo, and uses Gaussian noise to simulate contextual information when optimizing the latent representation.
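At its simplest, using Gaussian noise to simulate contextual information amounts to perturbing the latent representation during adaptation; the sketch below shows that single step with assumed dimensions and noise scale, not the AdaMo implementation.

```python
import torch

# Sketch: perturb latent representations with Gaussian noise during training
# so the decoder sees slightly varied "contexts" for the same code snippet.
def add_context_noise(latent: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Return the latent with zero-mean Gaussian noise added (training-time only)."""
    return latent + sigma * torch.randn_like(latent)

latent = torch.randn(8, 768)          # e.g. pooled encoder states for a batch
noisy_latent = add_context_noise(latent)
print((noisy_latent - latent).std())  # roughly sigma
```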
RepoAgent, an open-source framework powered by large language models that proactively generates, maintains, and updates code documentation, is introduced; experiments show that RepoAgent excels at generating high-quality repository-level documentation.
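At a high level, a repository-level documentation agent walks the project tree, extracts each undocumented function, and asks an LLM to draft its documentation. The sketch below covers only the traversal and extraction half using the standard library; ask_llm_for_docstring is a hypothetical placeholder, not RepoAgent's API.

```python
import ast
from pathlib import Path

# Sketch of the repository-traversal half of an LLM documentation agent.
# ask_llm_for_docstring is a hypothetical placeholder for the model call.
def ask_llm_for_docstring(source: str) -> str:
    return "TODO: generated documentation"      # replace with a real LLM call

def undocumented_functions(repo_root: str):
    """Yield (file, function name, source) for every function lacking a docstring."""
    for path in Path(repo_root).rglob("*.py"):
        source_text = path.read_text(encoding="utf-8")
        tree = ast.parse(source_text)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) \
                    and ast.get_docstring(node) is None:
                yield path, node.name, ast.get_source_segment(source_text, node)

for path, name, source in undocumented_functions("."):
    print(f"{path}:{name} -> {ask_llm_for_docstring(source)}")
```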