Rebecca Jones on recent trends in patient safety data and the effect of AI on events and analysis


Rebecca (Becky) Jones, M.B.A., B.S.N., R.N., is the Director of Data Science & Research for the Patient Safety Authority, an independent non-regulatory state agency in Pennsylvania charged with improving the quality of health care by collecting and analyzing patient safety information, advising facilities through publication, education, and collaboration, and issuing recommendations for improvement. Jones oversees the analysis of data from the Pennsylvania Patient Safety Reporting System, the largest patient safety database of its kind in the country. She talks with Patient Safety Beat about the significance of increased event reports, trends seen in recent data, and the role of artificial intelligence in patient safety.



Patient Safety Beat: The total number of events reported to the Patient Safety Authority increased in 2024, reaching a new high of more than 300,000 reports from facilities across Pennsylvania, including hospitals, ambulatory surgery centers, abortion facilities, and birthing centers. How do you interpret that increase?

Becky Jones: First, it’s important to understand that an increase in the number of reports doesn't necessarily mean more events are happening in practice. More often, it’s an indication that more events are being recognized and reported. At PSA, we work closely with facilities to strengthen reporting practices. Our regional patient safety advisors, who are part of our outreach and education team, serve as a direct link to each facility and support a wide range of patient safety needs, including reporting.  

While there are always multiple factors involved, we see the increase in reports as one indication that our work is helping to strengthen reporting practices. When there is an increase or other notable shift in reporting, it’s often a sign that people have a better understanding of what should be reported or that the overall culture around safety and reporting is improving. 


Patient Safety Beat: Are some categories of patient safety events showing improvement or increased risk of harm?  

Becky Jones: It’s hard to say definitively whether patient safety is improving or worsening in a certain category just by looking at reporting numbers. Reporting is complex, and there’s always some subjectivity in how events are categorized or how harm is assessed. I can say, however, that certain categories of events continue to pose challenges to patient safety. It's probably not surprising to hear that those categories include medication errors and falls. Despite considerable efforts to improve safety in these areas, we still see these reports coming in, with familiar underlying issues, contributing factors, and root causes occurring over and over. It's not that people aren't working hard to make things safer, but some issues are complex and have been difficult to overcome with standard approaches.  


Patient Safety Beat: Pennsylvania collects data on events that do not cause harm, which is unusual among state reporting requirements. What has been your experience with near-miss data?

Becky Jones: The term “near miss” has several definitions, so I should clarify that. In Pennsylvania, we use two main classifications of events: incidents and serious events. A serious event is an occurrence that results in death or causes an injury that's unanticipated by the patient and requires additional health care services. Incidents are events that don’t cause an unanticipated injury requiring care beyond first aid. A near miss is an incident that is intercepted or corrected before it reaches the patient.  

Incident data is a required part of reporting in Pennsylvania, and it’s incredibly valuable because it provides insights we would not get from reports of serious events alone. It is important to understand that most events that don’t cause harm could easily have done so had they occurred under slightly different circumstances. The same underlying event can have a different outcome depending on patient factors, environmental or temporal issues, and many other circumstances. About 4% of our reports are serious events; the other 96% are incidents. Seeing those incidents gives us a much broader view of what’s going on.

If we only had reports of serious events, we’d know about the things that went wrong and resulted in harm, but we wouldn’t know about all the times something similar went wrong but didn’t cause harm to a patient. Since we receive reports of both harm and no-harm events, we get a fuller view of what’s being reported overall, which helps us understand the scale of an issue within the data. For example, we recently conducted research on falls that occur on the day of discharge. Having reports of both incidents and serious events allowed us to uncover a threat that had not been identified in any previous research. We found a statistically significant difference in the proportion of serious events among falls that occurred on the day of discharge compared to falls at other times during hospitalization: falls surrounding discharge were more than two and a half times more likely to result in serious injury. If we didn’t have the incident data in addition to the serious events, we would have had no way of identifying this very important finding.
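To make the shape of that comparison concrete, here is a minimal sketch in Python of the kind of proportion test described above. The counts are invented for illustration and are not the Authority’s data; the point is simply that serious-event and incident counts together are what make the comparison possible.

```python
# Hypothetical sketch of comparing the proportion of serious events
# among discharge-day falls vs. falls at other times.
# All counts below are invented for illustration only.
from scipy.stats import chi2_contingency

discharge_day = [30, 970]    # [serious events, incidents] - hypothetical
other_times = [120, 9880]    # [serious events, incidents] - hypothetical

# 2x2 contingency table: rows = outcome, columns = timing of fall.
table = [
    [discharge_day[0], other_times[0]],   # serious events
    [discharge_day[1], other_times[1]],   # incidents (no serious harm)
]
chi2, p_value, dof, expected = chi2_contingency(table)

rate_discharge = discharge_day[0] / sum(discharge_day)
rate_other = other_times[0] / sum(other_times)

print(f"serious-event rate, discharge-day falls: {rate_discharge:.1%}")
print(f"serious-event rate, other falls:         {rate_other:.1%}")
print(f"relative risk: {rate_discharge / rate_other:.1f}x  (p = {p_value:.2g})")
```

With these made-up counts the relative risk works out to 2.5x, echoing the magnitude Jones describes; without the incident denominators, neither rate could be computed at all.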


Patient Safety Beat: Do you see any signs that the increased role of artificial intelligence in patient care is causing patient safety events?  

Becky Jones: We’re keeping a close eye on that. Initially, there was a lot of fear about the effects of AI on patient safety, and I think there are still questions about its impact on care and the risks it may pose. So far, we haven’t received many reports involving AI. Interestingly, in the reports we have received, AI is more often described as helping to identify an event early rather than as the cause of the event itself. For example, we’ve received several reports describing a radiologist missing a fracture on the initial read, which is then identified by AI. In those cases, AI is supporting a fallible human being by identifying a problem before it causes harm.

The involvement of AI introduces new complexities in reporting. For instance, if a clinician disregards a recommendation from an AI-based decision support tool because it’s not clinically appropriate, should that be considered an event? What if the recommendation wasn’t ideal but not necessarily wrong? It’s not always black and white. Perhaps we don't get those reports because many clinicians see these situations as an expected part of working with AI.  

We recently published a brief article highlighting what we’ve seen so far in reports involving AI and offering guidance to our reporting facilities so we know when AI was involved in a reported event. This will help as we continue to monitor the topic.


Patient Safety Beat: How is your team currently using AI in your analysis of patient safety events, and what potential do you see for it in the future? 

Becky Jones: In the past, we used traditional machine learning techniques to train and test models against a known standard. Over the past few years, we’ve been using some open-source AI models that run on our own systems, which has been a good way to begin exploring how these tools might support our work as we consider options for the future. One example is topic modeling, which we’ve used to identify themes across different reports. It shows us one way the data could be organized, which can spark new ideas we then refine through manual review. Having that structure up front often helps us work more efficiently and can enhance the quality of what we produce, even though the real value still comes from our own analysis.
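Topic modeling of the kind described here can be sketched in a few lines. The example below uses scikit-learn’s latent Dirichlet allocation on a handful of invented event narratives; it illustrates the general technique, not the Authority’s actual pipeline or data.

```python
# Minimal topic-modeling sketch: surface candidate themes across
# free-text event narratives. Sample narratives are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reports = [
    "Patient fell while walking to the bathroom without assistance.",
    "Wrong dose of heparin administered after the order was misread.",
    "Patient found on the floor beside the bed, no injury observed.",
    "Pharmacy dispensed the incorrect concentration of insulin.",
]

# Turn narratives into a term-count matrix.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(reports)

# Fit a small LDA model; the number of topics is a tuning choice.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Show the top terms for each discovered topic as a starting point
# for the kind of manual review described above.
terms = vectorizer.get_feature_names_out()
for idx, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {idx}: {', '.join(top)}")
```

In practice the themes such a model proposes are only a starting point; as Jones notes, the structure is then refined through manual review.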

We know we’ve only scratched the surface, so we’re also looking into options for secure use of interactive large language models. A simple but powerful example of how large language models could enhance our current work is by going beyond keyword-based queries. Right now, if we’re looking for events that involve something more abstract, like workarounds, it’s hard to predict every way someone might describe it in a report. Since large language models understand concepts, they could help identify relevant reports we might otherwise miss, while also potentially reducing false positives. That’s just one example, but it points to something much bigger, which is how AI might help us see more in our data, more quickly and more clearly. Human expertise and oversight will always be essential, but AI has the potential to take our work to a whole new level that we can only imagine at this point.
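Concept-based retrieval of the sort Jones imagines can be approximated today with sentence embeddings rather than a full interactive large language model. The sketch below is a hedged illustration of that idea; the model name and report texts are assumptions for demonstration, not the Authority’s tooling.

```python
# Sketch of concept-based retrieval: rank reports by semantic
# similarity to an abstract concept, not by keyword match.
# Model choice and report texts are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

reports = [
    "Nurse taped over the alarm sensor so it would stop triggering.",
    "Patient fell while walking to the bathroom unassisted.",
    "Staff bypassed the barcode scan because the armband was unreadable.",
]

# The concept we care about; note that none of the reports use the word.
query = "staff workaround of a safety control"

query_vec = model.encode(query, convert_to_tensor=True)
report_vecs = model.encode(reports, convert_to_tensor=True)

# Cosine similarity between the concept and each narrative.
scores = util.cos_sim(query_vec, report_vecs)[0]
for score, text in sorted(zip(scores.tolist(), reports), reverse=True):
    print(f"{score:.2f}  {text}")
```

The appeal of this approach is exactly what Jones describes: the word "workaround" never appears in the narratives, yet conceptually related reports can still surface near the top of the ranking.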