Improving EHR Usability with Machine Learning
A new system helps clinicians find important notes faster using machine learning.
― 9 min read
Table of Contents
- Importance of Electronic Health Records
- Analyzing EHR Note-Writing Sessions
- User Study with Clinicians
- General Insights on Machine Learning in Healthcare
- Related Work
- The Documentation Process in the Emergency Department
- Insights from Data Analysis
- Reading Patterns Among Clinicians
- Modeling Proactive Information Retrieval
- Features Used in the Model
- Evaluation of Model Performance
- Feedback from Clinicians
- Conclusion
- Future Directions
- Original Source
Healthcare professionals spend a great deal of time looking through patient notes and entering information into electronic health records (EHRs), a burden that contributes to stress and burnout among doctors and nurses. To help with this issue, we are looking into how to use machine learning to find important information more easily during the documentation process.
By using logs from EHRs, we built a system that can suggest which patient notes are important to read at a specific moment. This is especially useful in busy settings like emergency departments, where doctors have to make quick decisions.
In our study, we found that our system can predict with high accuracy which notes a doctor is likely to read while writing a new note. In addition, feedback from practicing clinicians indicates that our approach can help them find important information faster.
Importance of Electronic Health Records
EHRs are a key part of tracking a patient's medical history. They contain both organized data and written notes from healthcare providers. This information is used throughout the process of making medical decisions. During patient visits, doctors look for information for various reasons, such as understanding a new patient, refreshing their memory about a current patient, or finding specific details to aid in diagnosing a condition.
However, finding the necessary information in EHRs can take a lot of time because there is so much data. A lot of this important information can only be found in written notes, which are often long and complex. The documentation requirements and the amount of information have become so overwhelming that doctors may spend more time using EHRs than with their patients. This can lead to exhaustion and reduced job satisfaction.
Our research focuses on how to improve the documentation process by better understanding how doctors read and write notes in the EHR system. By analyzing thousands of note-taking sessions in the emergency department, we aim to create a system that can suggest relevant information as the clinician writes their note.
Analyzing EHR Note-Writing Sessions
We looked at data from numerous note-writing sessions in the emergency department. Previous studies have mainly focused on the general actions doctors take in the EHR, such as reviewing records and entering orders. Our work goes deeper, examining how doctors read and write notes together to find patterns in the way they gather information.
We have developed a framework that allows for dynamic suggestions of information as the context changes. Our system predicts which notes may be helpful based on the note being written. We also applied machine learning techniques to help our system actively retrieve useful information to assist with the writing process.
In our experiments, our system predicted which notes would be read in a single note-writing session with an AUC of 0.963.
User Study with Clinicians
To validate our approach, we conducted a study where real clinicians used our framework. We found that it indeed made it easier for them to find important information quickly. This suggests that our methods could be useful in other healthcare settings and across different types of data, like lab results and imaging.
General Insights on Machine Learning in Healthcare
Our research emphasizes the importance of real-time information retrieval in healthcare. While we specifically focused on retrieving information from unstructured notes, this concept can apply to other types of data within electronic health records. There is an opportunity to create proactive systems that can help healthcare workers locate relevant information automatically.
Additionally, with advancements in language models, we have the chance to improve the generation and analysis of clinical notes. We need to explore the workflows and information needs that lead to the final documentation instead of treating these notes as unchanging documents.
Related Work
Over the years, many machine learning techniques have been developed to help healthcare professionals extract and summarize information from free-text EHR notes. For instance, some methods focus on structuring data or summarizing key points from notes. Others use document embeddings to find relevant codes for diagnoses based on existing documents.
However, most of this research looks at static documents rather than how information needs change during the writing process. Our work is different because we explore how to use machine learning to find important notes dynamically as a clinician creates a new note.
Audit Logs in EHRs
EHRs also generate audit logs, which record granular information about user activities within the system. Initially designed for access control, these logs have proven valuable for understanding how clinicians use EHRs. They can show how frequently different actions are performed and help redesign user interfaces to improve workflows.
However, audit logs alone may miss important context, so combining them with other analytical methods can provide a fuller picture. Our study not only examines patterns of retrieval and documentation but also the content of the notes and how predictive algorithms can be designed to improve the process.
Proactive Information Retrieval
Given that healthcare professionals often have high information needs during documentation, various efforts have been made to enhance their ability to find relevant data dynamically. Some existing systems use natural language processing to summarize patient histories, but they do not tailor information to the current clinical context.
Our approach differs because we aim to actively retrieve important unstructured notes while the clinician is writing. This proactive stance allows clinicians to benefit from timely access to relevant information during the patient care process.
The Documentation Process in the Emergency Department
The documentation that occurs after a patient presents to the emergency department is critical. Clinicians frequently see many patients in a short span of time, which requires them to quickly gather and process information from various historical notes.
Doctors often communicate with colleagues to coordinate care, which adds another layer of complexity to their tasks. The fast-paced environment of an emergency department creates distinct patterns of information retrieval and documentation that differ from typical writing scenarios.
To analyze this process, we developed a dataset that captures the activities surrounding note retrieval and writing. We gathered logs from a large urban hospital, recording detailed reading and writing activities over several weeks.
Insights from Data Analysis
By looking at both what clinicians read and what they write during the same sessions, we can gain insights into their information-gathering behaviors. For instance, we can analyze how much of the written notes comes from the notes that were read, which helps us understand how doctors synthesize information from various sources.
Our findings so far show that there is often a significant overlap between the text of read notes and the final written notes. This indicates that many clinicians rely on past notes to inform their current writing, which emphasizes the need for efficient access to relevant information.
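One simple way to quantify this kind of overlap is the fraction of a written note's tokens that also appear in a previously read note. The sketch below is a hypothetical illustration of that idea, not the paper's actual analysis, which may use a more sophisticated text-matching method; the example notes are invented.

```python
def token_overlap(read_note: str, written_note: str) -> float:
    """Fraction of the written note's tokens that also appear in the read note."""
    read_tokens = set(read_note.lower().split())
    written_tokens = written_note.lower().split()
    if not written_tokens:
        return 0.0
    shared = sum(1 for token in written_tokens if token in read_tokens)
    return shared / len(written_tokens)

# Hypothetical example: a prior note read during the session,
# and the note the clinician is currently writing.
prior = "patient presented with chest pain troponin negative discharged stable"
current = "chest pain resolved troponin negative patient stable for discharge"
print(round(token_overlap(prior, current), 2))
```

A high overlap score for a session supports the observation that clinicians draw heavily on past notes when composing new ones.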
Reading Patterns Among Clinicians
To better understand how clinicians retrieve information, we randomly selected a few patient cases to analyze reading patterns. We observed that multiple team members often read the same notes, especially the most recent ones. This points to a strong reliance on recent documentation for making care decisions.
Each clinician may have different needs depending on their specialty and role within the care team. For example, a resident may only need the latest notes, while a specialist may dive deeper into older documents to gain comprehensive insight into a patient's medical history.
Modeling Proactive Information Retrieval
Our model aims to assist clinicians in finding relevant information quickly as they document patient interactions. During the process, a doctor is under pressure to collect all needed facts in a very short timeframe. They must read multiple documents while treating patients and coordinating with other team members.
To address this, we framed the task of proactive information retrieval as a classification problem. The model predicts whether a source document should be retrieved based on the current context of the note being written.
Our approach continuously updates the set of documents relevant to the clinician's writing, taking into account the latest information available. This leads to more accurate suggestions during each subsequent session.
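Concretely, framing retrieval as classification means each (writing context, candidate note) pair becomes one example, with the audit log supplying the label: was this note actually read during the session? The sketch below shows that framing under assumed data structures; the field names (`note_id`, `author_role`, `age_hours`) are hypothetical, not the paper's schema.

```python
from dataclasses import dataclass

@dataclass
class SourceNote:
    note_id: str
    author_role: str   # role of the note's author, e.g. nurse or physician
    age_hours: float   # time since the note was created

def make_examples(candidate_notes, read_ids):
    """Turn each candidate note into a (features, label) training example.

    The label comes from the audit log: 1 if the clinician actually read
    the note during this writing session, 0 otherwise.
    """
    examples = []
    for note in candidate_notes:
        features = {"author_role": note.author_role, "age_hours": note.age_hours}
        label = 1 if note.note_id in read_ids else 0
        examples.append((features, label))
    return examples

notes = [SourceNote("n1", "nurse", 2.0), SourceNote("n2", "physician", 48.0)]
print(make_examples(notes, read_ids={"n1"}))
```

Any standard binary classifier can then be trained on such examples, and its predicted probability used to rank candidate notes for the current session.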
Features Used in the Model
The model incorporates various features, including patient information captured during triage, document creation time, and metadata about the source documents. By using these features, we can offer accurate predictions of which notes are most likely to be relevant.
The chief complaint of the patient, along with the clinician's role, helps provide the necessary context for each note. Features related to the document's creation time and how frequently it has been read also contribute to improving the model's accuracy.
We represent the text of both the source documents and the written notes using a bag-of-words approach. This allows us to capture the presence of words and phrases that may signal the relevance of specific documents.
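As a minimal sketch of the bag-of-words idea, each text becomes a vector of word counts over a fixed vocabulary; the tiny vocabulary below is invented for illustration, and the paper's actual feature pipeline is likely richer.

```python
def bag_of_words(text, vocabulary):
    """Count how often each vocabulary word appears in the text."""
    counts = dict.fromkeys(vocabulary, 0)
    for token in text.lower().split():
        if token in counts:
            counts[token] += 1
    return [counts[word] for word in vocabulary]

# Hypothetical clinical vocabulary and note fragment.
vocab = ["chest", "pain", "fracture"]
print(bag_of_words("chest pain worse with chest movement", vocab))
```

Vectors like these, concatenated with the metadata features above (creation time, author role, read frequency), form the model's input for each candidate document.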
Evaluation of Model Performance
To assess the effectiveness of our model, we used several performance metrics commonly applied in classification tasks, such as precision, recall, F1 score, and area under the curve (AUC). We paid particular attention to AUC, as it indicates how well the model differentiates between relevant and irrelevant documents.
Additionally, we considered information retrieval metrics that measure how well the model surfaces relevant documents for the next writing session. These metrics help us determine if the machine learning approach meets the practical needs of clinicians.
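For readers unfamiliar with these metrics, the sketch below implements them from scratch on toy labels and scores; in practice one would use a library such as scikit-learn, and the numbers here are illustrative, not results from the paper.

```python
def precision_recall_f1(y_true, y_pred):
    """Standard binary-classification metrics from 0/1 labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def auc(y_true, scores):
    """ROC AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen relevant document is scored above a randomly chosen
    irrelevant one (ties count half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 0, 1, 0]
print(precision_recall_f1(y_true, [1, 1, 1, 0]))
print(auc(y_true, [0.9, 0.4, 0.7, 0.2]))
```

An AUC near 1.0 means relevant documents are almost always ranked above irrelevant ones, which is why it is a natural headline metric for a retrieval-as-classification system.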
Feedback from Clinicians
To further validate our findings, we conducted a review of patient charts with clinicians from different specialties. Each clinician provided feedback about the notes they chose to read and their reasoning behind those choices. This helped us frame the relevance of the notes in terms of their clinical utility.
The results indicated that our model performed well in surfacing relevant notes that clinicians deemed important for patient care. In many cases, the most highly ranked documents aligned with the clinicians' needs for quick access to pertinent information.
Conclusion
As healthcare data continues to grow, it is essential for clinicians to efficiently retrieve and process this information to provide optimal care. Our work shows how machine learning can contribute to creating proactive systems that help in finding relevant information swiftly.
By developing a dynamic information retrieval framework using EHR audit logs, we have taken a significant step toward improving the documentation process. This framework has the potential to be expanded and utilized in various healthcare settings, ultimately benefiting clinicians and patients alike.
Future Directions
Looking ahead, we aim to explore different settings within healthcare to understand how information retrieval needs vary. We plan to evaluate and refine our predictive model through practical deployment, gathering real-time feedback from clinicians during patient interactions.
With ongoing advancements in technology, we are optimistic that our work will lead to better tools for healthcare professionals, helping them focus more on patient care rather than administrative tasks.
Title: Conceptualizing Machine Learning for Dynamic Information Retrieval of Electronic Health Record Notes
Abstract: The large amount of time clinicians spend sifting through patient notes and documenting in electronic health records (EHRs) is a leading cause of clinician burnout. By proactively and dynamically retrieving relevant notes during the documentation process, we can reduce the effort required to find relevant patient history. In this work, we conceptualize the use of EHR audit logs for machine learning as a source of supervision of note relevance in a specific clinical context, at a particular point in time. Our evaluation focuses on the dynamic retrieval in the emergency department, a high acuity setting with unique patterns of information retrieval and note writing. We show that our methods can achieve an AUC of 0.963 for predicting which notes will be read in an individual note writing session. We additionally conduct a user study with several clinicians and find that our framework can help clinicians retrieve relevant information more efficiently. Demonstrating that our framework and methods can perform well in this demanding setting is a promising proof of concept that they will translate to other clinical settings and data modalities (e.g., labs, medications, imaging).
Authors: Sharon Jiang, Shannon Shen, Monica Agrawal, Barbara Lam, Nicholas Kurtzman, Steven Horng, David Karger, David Sontag
Last Update: 2023-08-09 00:00:00
Language: English
Source URL: https://arxiv.org/abs/2308.08494
Source PDF: https://arxiv.org/pdf/2308.08494
Licence: https://creativecommons.org/licenses/by/4.0/
Changes: This summary was created with assistance from AI and may have inaccuracies. For accurate information, please refer to the original source documents linked here.
Thank you to arxiv for use of its open access interoperability.