
This research investigates using knowledge distillation to create smaller, more efficient BERT models for clinical information extraction. Large language models (LLMs) perform well on this task, but their computational cost is high. The study uses LLMs together with medical ontologies to train distilled BERT models that recognize medications, diseases, and symptoms in clinical notes.
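For readers who want a concrete picture of the setup, below is a minimal sketch of hard-label distillation for clinical entity recognition, assuming the standard Hugging Face transformers API. The LLM teacher is represented by pre-generated pseudo-labels (the hypothetical `llm_tags` list); in the actual study the annotations would come from an LLM prompted with clinical notes and ontology definitions. All variable names and the example sentence are illustrative, not from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Entity types distilled from the teacher, in a BIO tagging scheme:
# medications (MED), diseases (DIS), and symptoms (SYM).
LABELS = ["O", "B-MED", "I-MED", "B-DIS", "I-DIS", "B-SYM", "I-SYM"]
label2id = {label: i for i, label in enumerate(LABELS)}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
student = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

# Hypothetical teacher output: one BIO tag per word, produced by an LLM.
note = ["Patient", "reports", "headache", "after", "starting", "lisinopril", "."]
llm_tags = ["O", "O", "B-SYM", "O", "O", "B-MED", "O"]

enc = tokenizer(note, is_split_into_words=True, return_tensors="pt")

# Align word-level pseudo-labels to BERT subword tokens; positions set to
# -100 are excluded from the cross-entropy loss.
word_ids = enc.word_ids(0)
labels, prev = [], None
for wid in word_ids:
    if wid is None:
        labels.append(-100)                 # [CLS], [SEP], padding
    elif wid != prev:
        labels.append(label2id[llm_tags[wid]])
    else:
        labels.append(-100)                 # label only the first subword
    prev = wid

# One training step: the student fits the teacher's pseudo-labels.
optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)
student.train()
out = student(**enc, labels=torch.tensor([labels]))
out.loss.backward()
optimizer.step()
```

This "hard-label" variant (training the student on teacher-generated annotations rather than soft logits) is one common way to distill an LLM into a compact token classifier; the paper's exact training objective may differ.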
Version: 20241125