The Duplicate Disconnect
Originally published by Just Associates
Reality and Perception Rarely Match When It Comes to Duplicate Rates
What is your hospital’s duplicate record rate?
It’s a straightforward question. But it’s one that many healthcare organizations cannot answer, and when they can, the answer is typically understated.
This perception-and-reality disconnect isn’t due to naiveté or denial on the part of health information management (HIM) directors. The culprit is a lack of standardization: variances in the formulas used to calculate the rate, in the algorithms used to identify duplicates, and in how duplicates are reported. As a result, establishing an accurate, industry-accepted duplicate rate is a significant challenge for even the most diligent HIM manager.
Lack of Agreement on the Calculations
When measuring duplicates, organizations calculate different metrics, and even within a single metric the components of the calculation vary.
For instance, many organizations report their duplicate rate as the number of duplicates in the MPI divided by the number of patients in the MPI database. This calculation actually reports the “existence,” or “historical,” rate in the MPI and does not reflect what is currently being created. An organization may have cleaned up its MPI a year ago, so the existence rate appears low. However, if the creation rate is high and no attention is given to keeping the MPI clean, the organization may have a false sense of security about the accuracy of its MPI.
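To make the distinction concrete, here is a minimal sketch of the existence-rate calculation in Python; the record counts are illustrative assumptions, not benchmarks.

```python
# Existence ("historical") rate: confirmed duplicates currently in the
# MPI divided by total records in the MPI database.
# Both counts below are hypothetical.
duplicate_records_in_mpi = 12_400
total_records_in_mpi = 1_150_000

existence_rate = duplicate_records_in_mpi / total_records_in_mpi
print(f"Existence rate: {existence_rate:.2%}")  # ~1.08%
```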
Further compounding the problem, when measuring the “creation” rate, opinions differ on whether the denominator should be the number of registrations that occurred in the time period being measured or the number of new medical record numbers (MRNs) created. Proponents of registrations argue that this denominator counts every opportunity for a duplicate to be created. Proponents of new MRNs counter that, because a duplicate is by definition a newly created record, new MRNs more directly measure the events that can actually produce one.
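The denominator debate is easy to see in a short sketch; both computations below use the same hypothetical month of activity, and all counts are invented for illustration.

```python
# Creation rate for one measurement period, computed with each
# proposed denominator. All counts are hypothetical.
new_duplicates_created = 180
registrations_in_period = 25_000  # every registration is an opportunity
new_mrns_created = 4_200          # only a new MRN can be a duplicate

print(f"Per registration: {new_duplicates_created / registrations_in_period:.2%}")  # 0.72%
print(f"Per new MRN:      {new_duplicates_created / new_mrns_created:.2%}")         # 4.29%
```

The same month of activity yields a rate nearly six times higher under the second denominator, which is exactly why cross-facility comparisons fall apart without a standard.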
All of these variances mean that, when it comes to duplicate rates, the industry is comparing apples to oranges.
Lack of Standardized Algorithms
Many facilities simply rely on the algorithms that come standard with their HIT systems to identify potential duplicates, and these typically fall into the basic or intermediate categories of sophistication. Stronger, more advanced algorithms do a better job of identifying duplicates. Thus, the rates reported by hospitals using basic or intermediate algorithms will differ markedly from, and be significantly understated relative to, those of hospitals using advanced algorithms.
This is simply because basic and intermediate algorithms lack the “error tolerance” to detect larger or multiple discrepancies. An undetected duplicate is an unreported duplicate, so detection gaps translate directly into understated rates.
For example, an intermediate algorithm may look only at the name, date of birth, Social Security Number and gender to determine potential duplicates. Meanwhile, a more sophisticated algorithm will also compare middle name, address, telephone numbers and previous names.
What’s more, some systems do not include algorithms to determine the likelihood of a duplicate. Instead, they will flag any records that contain exact matches of, for example, first and last name—all of which must be manually validated.
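As a rough illustration of the gap in error tolerance, the sketch below contrasts an exact-match check on first and last name with a weighted comparison across additional fields. The field list, weights, and similarity measure are assumptions chosen for demonstration, not any vendor’s actual algorithm.

```python
from difflib import SequenceMatcher

# Hypothetical patient records; field names are illustrative.
rec_a = {"first": "Katherine", "last": "OBrien", "dob": "1984-03-12",
         "ssn": "123-45-6789", "middle": "A", "phone": "555-0142"}
rec_b = {"first": "Kathrine", "last": "O'Brien", "dob": "1984-03-12",
         "ssn": "123-45-6789", "middle": "A", "phone": "555-0142"}

def exact_match_flag(a, b):
    # Basic approach: flag only when first and last name match exactly.
    # Misses typos, transpositions, and name variants entirely.
    return a["first"] == b["first"] and a["last"] == b["last"]

def weighted_score(a, b):
    # Sketch of a more tolerant approach: compare more fields and
    # accept small string differences via a similarity ratio.
    weights = {"first": 2, "last": 2, "dob": 3, "ssn": 4, "middle": 1, "phone": 1}
    score = sum(w * SequenceMatcher(None, a[f], b[f]).ratio()
                for f, w in weights.items())
    return score / sum(weights.values())  # 1.0 = identical on every field

print(exact_match_flag(rec_a, rec_b))         # False: a typo hides the duplicate
print(f"{weighted_score(rec_a, rec_b):.2f}")  # ~0.98: likely the same patient
```

Even this toy scoring function surfaces a pair that exact matching misses entirely, which is the practical difference between basic and advanced algorithms.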
Grabbing the Spotlight
Despite the challenges of establishing an accurate duplicate rate, HIM professionals have long pushed to make improving patient matching a priority. The implications of high duplicate rates are significant, ranging from patient safety risks to compliance with meaningful use requirements. Indeed, the success of health information exchange organizations depends on the exchange of clean, duplicate-free patient data.
As a result, duplicate rates are finally getting serious attention outside the HIM department. The Office of the National Coordinator for Health Information Technology (ONC) has made patient matching a primary focus, and HIMSS and the American Hospital Association have acknowledged its importance as well.
It is encouraging that more organizations now recognize the importance of accurate patient matching. Standardizing the approach to calculating potential duplicates will produce more accurate metrics and enable more hospitals to dedicate the resources needed for proper remediation and algorithm improvement.
Detection Is the First Step. Remediation and Mitigating the Risk of Duplicate Creation Are the Imperative Next Steps.