Automatic extraction of clinical named entities, such as clinical problems, treatments, tests, and anatomical parts, from clinical notes.
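Systems for this task typically tag each token with a BIO label and then group the tags into entity spans. A minimal sketch of that grouping step, using hypothetical tag names in the problem/treatment/test style of the 2010 i2b2/VA challenge:

```python
# Hypothetical example: turning BIO tags from a clinical NER model
# into (entity_type, start, end) spans; end index is exclusive.

def bio_to_spans(tags):
    """Collect (label, start, end) spans from a BIO tag sequence."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):           # a new entity begins here
            if start is not None:
                spans.append((label, start, i))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and start is not None and label == tag[2:]:
            continue                       # entity continues
        else:                              # "O" or an inconsistent I- tag
            if start is not None:
                spans.append((label, start, i))
            start, label = None, None
    if start is not None:                  # entity runs to end of sequence
        spans.append((label, start, len(tags)))
    return spans

tokens = ["Patient", "denies", "chest", "pain", "after", "aspirin"]
tags   = ["O", "O", "B-problem", "I-problem", "O", "B-treatment"]
print(bio_to_spans(tags))  # [('problem', 2, 4), ('treatment', 5, 6)]
```

The tokens, tag set, and sentence are illustrative, not drawn from any dataset above; an inconsistent `I-` tag is treated as closing the current span, which is one common convention.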
These leaderboards are used to track progress in clinical concept extraction.
Use these libraries to find clinical concept extraction models and implementations.
This work proposes CharacterBERT, a new variant of BERT that drops the wordpiece system altogether and instead uses a Character-CNN module to represent entire words by consulting their characters. It shows that this new model improves the performance of BERT on a variety of medical domain tasks while producing robust, word-level, open-vocabulary representations.
This paper proposes an alternative, streamlined approach: a recurrent neural network (a bidirectional LSTM with CRF decoding) initialized with general-purpose, off-the-shelf word embeddings, which outperforms all recent methods and ranks closely to the best submission from the original 2010 i2b2/VA challenge.
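In a BiLSTM-CRF tagger, the CRF layer decodes the best tag sequence with the Viterbi algorithm over per-token emission scores and tag-transition scores. A minimal pure-Python sketch of that decoding step, with an invented tag set and scores (the BiLSTM that would produce the emissions is assumed, not shown):

```python
# Illustrative Viterbi decoding as used by the CRF layer of a BiLSTM-CRF.
# All scores below are made up for the example, not from any trained model.

def viterbi(emissions, transitions, tags):
    """Return the highest-scoring tag sequence.
    emissions: list of {tag: score} per token;
    transitions: {(prev_tag, tag): score}."""
    scores = dict(emissions[0])  # best score of any path ending in each tag
    backptr = []                 # per step, the best previous tag for each tag
    for emit in emissions[1:]:
        step, new_scores = {}, {}
        for tag in tags:
            best_prev = max(tags, key=lambda p: scores[p] + transitions[(p, tag)])
            step[tag] = best_prev
            new_scores[tag] = scores[best_prev] + transitions[(best_prev, tag)] + emit[tag]
        scores, backptr = new_scores, backptr + [step]
    best = max(tags, key=scores.get)       # trace back from the best final tag
    path = [best]
    for step in reversed(backptr):
        path.append(step[path[-1]])
    return list(reversed(path))

tags = ["O", "B", "I"]
trans = {(p, t): 0.0 for p in tags for t in tags}
trans[("O", "I")] = -10.0  # forbid entering I directly from O
emissions = [
    {"O": 2.0, "B": 1.8, "I": 0.0},
    {"O": 0.0, "B": 0.0, "I": 2.0},
    {"O": 1.0, "B": 0.0, "I": 0.0},
]
print(viterbi(emissions, trans, tags))  # ['B', 'I', 'O']
```

Note that the decoder picks `B` at the first token even though `O` has the slightly higher emission there: the transition penalty on `O → I` makes the globally best path open the entity with `B`, which is exactly the structured consistency a CRF adds over per-token argmax.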
This work presents a state-of-the-art system for drug name recognition (DNR) and clinical concept extraction (CCE) that avoids conventional, time-consuming feature engineering and outperforms all previously proposed systems with a bidirectional LSTM-CRF model.
This work shows that by combining off-the-shelf contextual embeddings (ELMo) with static word2vec embeddings trained on a small in-domain corpus built from the task data, it manages to reach, and sometimes outperform, representations learned from a large corpus in the medical domain.
A clinical text mining system that improves on previous efforts in three ways, and can recognize over 100 different entity types including social determinants of health, anatomy, risk factors, and adverse events in addition to other commonly used clinical and biomedical entities.
The results highlight the importance of specialized language models, such as CLIN-X, for concept extraction in non-standard domains, but also show that the task-agnostic model architecture is robust across the tested tasks and languages so that domain- or task-specific adaptations are not required.