Report Name
Quality EVE Performance by Provider
About This Report
This report shows information about EVE performance at the provider level. It segments match rate performance across the primary abstractor and the overreader, so managers can monitor the efficacy of the Evidence Validation Engine (EVE) by provider.
Service Type
Full Service, Platform, Client Clinical Only
Intended User
Manager / Technical User
Best For
Monitoring EVE performance by provider on a weekly basis.
What Information Can I Filter?
Projects, Address, NPI, Response Date
What do the Data Items Mean?
Data Item | Description
Total Providers | Count of all providers from the returned query
Total Members | Count of all members from the returned query
Total Measures | Count of all measures from the returned query
Total Open Gaps | Total count of all gaps sent to EVE
Total EVE Match | Total count of gaps with an EVE match from the returned query
Total EVE Partial Match | Total count of gaps with an EVE partial match from the returned query
Total EVE No Match | Total count of gaps with an EVE no match from the returned query
Average EVE Match | % of the EVE match count against match, no match, and partial match from the returned query
Average EVE Partial Match | % of the EVE partial match count against match, no match, and partial match from the returned query
Average EVE No Match | % of the EVE no match count against match, no match, and partial match from the returned query
NLP Overall Accuracy Rate | % of total match and partial match against all EVE results
Total Chases Reviewed and Submitted by OR | Count of chases that are past the Overread stage
Total Gaps Reviewed | Total count of all EVE results reviewed by an overreader
Total Gaps Not Reviewed | Total count of all EVE results not yet reviewed by an overreader
Avg Gaps Reviewed | % of gap results reviewed by an overreader against total EVE results
Total Gaps Overreader Agrees | Total count of gaps with overreader acceptance = Agree from the returned query
Total Gaps Overreader Disagrees | Total count of gaps with overreader acceptance = Disagree from the returned query
NLP Overreader Accuracy Rate | % of gaps the overreader agrees with against all gaps reviewed by an overreader
Table | Shows a detailed view of how EVE Overread performs by provider
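As a minimal sketch of how the percentage metrics above derive from the count metrics, the calculation can be expressed as follows; the counts used here are hypothetical example values, not real report data:

```python
# Hypothetical gap counts from a returned query (illustrative values only)
eve_match = 120          # Total EVE Match
eve_partial_match = 30   # Total EVE Partial Match
eve_no_match = 50        # Total EVE No Match

# All match, partial match, and no match results combined
total_results = eve_match + eve_partial_match + eve_no_match

# Average EVE Match / Partial Match / No Match: each count as a % of the
# combined match, no match, and partial match results
avg_eve_match = eve_match * 100 / total_results                  # 60.0
avg_eve_partial_match = eve_partial_match * 100 / total_results  # 15.0
avg_eve_no_match = eve_no_match * 100 / total_results            # 25.0

# NLP Overall Accuracy Rate: match + partial match as a % of all EVE results
nlp_overall_accuracy = (eve_match + eve_partial_match) * 100 / total_results  # 75.0
```

The three averages always sum to 100%, since every gap sent to EVE falls into exactly one of the match, partial match, or no match buckets.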