Continual forgetting requires selected information to be continually removed from a pre-trained model while preserving the rest of its knowledge.
(Image credit: Papersgraph)
These leaderboards are used to track progress in Continual Forgetting.
No benchmarks available.
Use these libraries to find Continual Forgetting models and implementations.
No datasets available.
No subtasks available.
Group Sparse LoRA (GS-LoRA) is effective, parameter-efficient, data-efficient, and easy to implement; it is demonstrated that GS-LoRA manages to forget specific classes with minimal impact on the remaining classes.
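The core idea of GS-LoRA can be illustrated with a minimal sketch: fine-tune small low-rank (LoRA) adapters on a frozen model while applying a group-lasso penalty that treats each adapter pair as one group, so adapters not needed for forgetting are driven entirely to zero. This is an illustrative PyTorch sketch, not the authors' implementation; the class and function names (`LoRALinear`, `group_sparse_penalty`) and the hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank (LoRA) update:
    W_eff = W_frozen + B @ A."""
    def __init__(self, in_features: int, out_features: int, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # pre-trained weights stay fixed
        self.base.bias.requires_grad_(False)
        # Standard LoRA init: A small random, B zero, so delta_W starts at 0.
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.A.t() @ self.B.t()

def group_sparse_penalty(lora_modules, alpha: float = 0.01) -> torch.Tensor:
    """Group-lasso regularizer: one group per LoRA pair (A, B).
    The sum of per-group Frobenius norms encourages whole adapters
    to collapse to zero, keeping the forgetting update sparse."""
    penalty = torch.zeros(())
    for m in lora_modules:
        penalty = penalty + torch.sqrt(
            (m.A ** 2).sum() + (m.B ** 2).sum() + 1e-12
        )
    return alpha * penalty
```

In training, the penalty would simply be added to the forgetting loss (e.g. a loss that pushes the model's outputs on the to-be-forgotten classes away from their labels), so that only the few adapters actually needed for forgetting remain nonzero.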