
This research investigates knowledge distillation as a way to create smaller, more efficient BERT models for clinical information extraction. Large language models (LLMs) handle this task well, but their computational cost is high. The study uses LLMs together with medical ontologies to train distilled BERT models that recognize medications, diseases, and symptoms in clinical notes.
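To make the distillation setup concrete, here is a minimal sketch of token-level knowledge distillation for clinical NER. It is not the paper's implementation: the DistilBERT student, the BIO tag set, the temperature/alpha values, and the placeholder teacher distributions are all illustrative assumptions; in the study's pipeline the soft labels would come from the LLM-plus-ontology annotations rather than the uniform placeholder used here.

```python
# Sketch of token-level knowledge distillation for clinical NER.
# Assumptions (not from the paper): a DistilBERT student, teacher soft
# labels produced offline by an LLM + ontology pipeline, a simple BIO
# tag set, and illustrative temperature/alpha hyperparameters.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForTokenClassification

LABELS = ["O", "B-MED", "I-MED", "B-DIS", "I-DIS", "B-SYM", "I-SYM"]  # hypothetical tags
TEMPERATURE = 2.0   # softens the distributions during distillation
ALPHA = 0.5         # weight between soft (KL) and hard (CE) losses

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
student = AutoModelForTokenClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(LABELS)
)

def distillation_loss(student_logits, teacher_probs, hard_labels, mask):
    """KL divergence against teacher soft labels plus cross-entropy on hard labels.

    student_logits: (batch, seq, num_labels) raw student scores
    teacher_probs:  (batch, seq, num_labels) soft labels (here: placeholder;
                    in practice, teacher scores softened at the same temperature)
    hard_labels:    (batch, seq) tag ids, -100 = ignore
    mask:           (batch, seq) 1 for real tokens, 0 for padding
    """
    log_probs = F.log_softmax(student_logits / TEMPERATURE, dim=-1)
    # Per-token KL, averaged over unmasked tokens; T^2 rescales gradients as usual.
    kl = F.kl_div(log_probs, teacher_probs, reduction="none").sum(-1)
    soft_loss = (kl * mask).sum() / mask.sum() * TEMPERATURE ** 2
    hard_loss = F.cross_entropy(
        student_logits.reshape(-1, len(LABELS)),
        hard_labels.reshape(-1),
        ignore_index=-100,
    )
    return ALPHA * soft_loss + (1 - ALPHA) * hard_loss

# Toy forward/backward pass with fabricated teacher labels, for shape-checking only.
enc = tokenizer(["Patient started metformin for type 2 diabetes."],
                return_tensors="pt", truncation=True)
logits = student(**enc).logits
seq_len = logits.shape[1]
teacher = torch.full((1, seq_len, len(LABELS)), 1.0 / len(LABELS))  # placeholder soft labels
hard = torch.zeros(1, seq_len, dtype=torch.long)                    # placeholder "O" tags
loss = distillation_loss(logits, teacher, hard, enc["attention_mask"])
loss.backward()
```

The design choice worth noting is the two-term loss: the KL term transfers the teacher's uncertainty over entity types (useful when LLM annotations are noisy), while the cross-entropy term keeps the student anchored to the ontology-verified hard tags.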