Talking about performance metrics

This page describes how to define performance measures (metrics) for hospital procedures in a way that is clear both to programmers (for implementation) and to statisticians (for analysis and interpretation).

The key is precision and structure. You need a consistent way to define each metric. Here’s a framework and vocabulary:

I. Core Components of a Metric Definition:

Every performance measure, regardless of type, should have these elements clearly defined:

  1. Metric Name / Identifier:

    • Description: A unique, concise, and descriptive name. Often uses CamelCase or snake_case for programming contexts.
    • Example: SurgicalSiteInfectionRate, ThirtyDayReadmissionCount, PatientReportedOutcomeMeasure_PainScoreCategory.
    • Vocabulary: Identifier, Metric ID, Measure Name.
  2. Description / Definition:

    • Description: A clear, human-readable explanation of what the metric represents and why it's important. Briefly state the clinical or operational significance.
    • Example: "Measures the rate of surgical site infections (SSIs) occurring within 30 days post-procedure for specific surgical types, indicating potential gaps in infection control protocols."
    • Vocabulary: Definition, Description, Purpose, Rationale.
  3. Measure Type / Data Type:

    • Description: The fundamental nature of the measurement's value. This is crucial for storage, calculation, and statistical analysis.
    • Examples & Vocabulary:
      • Categorical (Nominal): Represents distinct groups with no inherent order. (Data types: String, Enum, Factor). E.g., ProcedureOutcome with values {"Success", "Complication", "Failure"}.
      • Categorical (Ordinal): Represents distinct groups with a meaningful order. (Data types: Ordered Factor, Enum, Integer mapping). E.g., PatientSatisfaction with values {"Very Satisfied", "Satisfied", "Neutral", "Dissatisfied", "Very Dissatisfied"} or mapped to {5, 4, 3, 2, 1}.
      • Count: A non-negative integer representing the number of occurrences of an event. (Data type: Integer, Non-negative Integer). E.g., NumberOfFalls, CLABSICount.
      • Rate / Proportion / Ratio: A value derived from dividing one quantity (numerator) by another (denominator), often expressed as a percentage or per X events. (Data type: Float, Double, Decimal, Percentage). E.g., InfectionRate (0.015 or 1.5%), ReadmissionRate.
      • Continuous: A measurement that can take any value within a range (often with decimal points). (Data type: Float, Double, Decimal). E.g., LengthOfStayInDays, ProcedureDurationMinutes.
      • Boolean: Represents a binary state. (Data type: Boolean, Bit). E.g., AntibioticAdministeredOnTime (True/False).
  4. Unit of Measurement:

    • Description: The scale or unit for the measured value. Essential for interpretation, especially for counts, continuous, and rate measures.
    • Example: Infections per 1000 procedures, Days, Minutes, Patients, Events, Percentage (%).
    • Vocabulary: Unit, Scale.
  5. Numerator Definition (For Rates/Proportions/Ratios):

    • Description: Precise definition of the event or condition being counted in the numerator. Include specific inclusion and exclusion criteria.
    • Example (for SSI Rate): "Number of patients undergoing [Specific Procedure Codes] between [Start Date] and [End Date] who develop a surgical site infection (defined by [CDC Criteria/ICD Codes]) within 30 days of the procedure."
    • Vocabulary: Numerator Criteria, Inclusion Criteria (Numerator), Exclusion Criteria (Numerator).
  6. Denominator Definition (For Rates/Proportions/Ratios):

    • Description: Precise definition of the population at risk or the total group being considered. Include specific inclusion and exclusion criteria.
    • Example (for SSI Rate): "Total number of patients undergoing [Specific Procedure Codes] between [Start Date] and [End Date]."
    • Vocabulary: Denominator Criteria, Population at Risk, Base Population, Inclusion Criteria (Denominator), Exclusion Criteria (Denominator).
  7. Calculation Formula (If applicable):

    • Description: The exact mathematical formula used.
    • Example (for SSI Rate per 1000): (Numerator / Denominator) * 1000
    • Vocabulary: Formula, Calculation Logic, Algorithm.
  8. Data Source(s):

    • Description: Where the raw data originates. This informs data quality and collection processes.
    • Example: Electronic Health Record (EHR) system (specify modules/tables if possible), Billing Data (ICD/CPT codes), Patient Surveys, Manual Chart Abstraction, National Databases (e.g., NSQIP).
    • Vocabulary: Source System, Data Origin.
  9. Measurement Period / Frequency:

    • Description: How often the metric is calculated and reported.
    • Example: Monthly, Quarterly, Annually, Per Procedure.
    • Vocabulary: Reporting Frequency, Calculation Interval, Aggregation Period.
  10. Desired Direction / Interpretation:

    • Description: Specifies whether a higher or lower value is considered better performance.
    • Example: Lower is Better (for infection rates, mortality), Higher is Better (for patient satisfaction, certain compliance measures).
    • Vocabulary: Performance Goal Direction, Desired Trend, Interpretation Key.
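
For implementation, the ten components above map naturally onto one structured record per metric in a data dictionary. Below is a minimal sketch, assuming a Python setting; the MetricDefinition class and its field names are illustrative only and not part of any existing scaffold API.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class MeasureType(Enum):
    """Component 3: the fundamental nature of the measured value."""
    NOMINAL = "categorical_nominal"
    ORDINAL = "categorical_ordinal"
    COUNT = "count"
    RATE = "rate"
    CONTINUOUS = "continuous"
    BOOLEAN = "boolean"


@dataclass
class MetricDefinition:
    """One data-dictionary entry: components 1-10 for a single metric."""
    metric_id: str                          # 1. Metric Name / Identifier
    description: str                        # 2. Description / Definition
    measure_type: MeasureType               # 3. Measure Type / Data Type
    unit: str                               # 4. Unit of Measurement
    numerator_def: Optional[str] = None     # 5. Numerator (rates/proportions only)
    denominator_def: Optional[str] = None   # 6. Denominator (rates/proportions only)
    formula: Optional[str] = None           # 7. Calculation Formula
    data_sources: List[str] = field(default_factory=list)  # 8. Data Source(s)
    measurement_period: str = "Monthly"     # 9. Measurement Period / Frequency
    lower_is_better: bool = True            # 10. Desired Direction / Interpretation
```

A metric such as Example 1 in Section III would then be stored as a single MetricDefinition instance.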

II. Describing Comparison to a Standard:

When comparing a metric to a fixed standard, add these components:

  1. Standard / Benchmark / Target Value:

    • Description: The specific value against which the metric is compared.
    • Example: 2.0 (for an SSI rate per 1000), 95% (for a compliance measure), 0 (for never events like wrong-site surgery).
    • Vocabulary: Standard, Benchmark, Target, Threshold, Goal.
  2. Source of Standard:

    • Description: Where does this standard come from?
    • Example: National benchmark (e.g., CMS, NHSN), Regulatory requirement, Internal quality goal, Literature-based standard.
    • Vocabulary: Standard Source, Benchmark Origin.
  3. Comparison Logic:

    • Description: How the comparison is performed.
    • Example: Metric Value <= Standard, Metric Value >= Standard, Metric Value == Standard.
    • Vocabulary: Comparison Operator, Evaluation Rule.
  4. Comparison Outcome / Classification:

    • Description: The result of the comparison, often a categorical status.
    • Example: {"Met", "Not Met"}, {"Below Target", "At Target", "Above Target"}, {"Compliant", "Non-Compliant"}.
    • Vocabulary: Performance Status, Compliance Status, Achievement Level. This can itself be considered a derived categorical metric.
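
These comparison components can be captured the same way: a standard value, a comparison operator, and a categorical outcome. The sketch below (hypothetical names, Python) applies the stated comparison logic and returns the performance status:

```python
import operator

# The documented comparison operators, mapped to concrete functions.
COMPARISONS = {"<=": operator.le, ">=": operator.ge, "==": operator.eq,
               "<": operator.lt, ">": operator.gt}


def evaluate_against_standard(metric_value: float, standard_value: float,
                              comparison: str, met_label: str = "Met",
                              not_met_label: str = "Not Met") -> str:
    """Apply the comparison logic and return the categorical performance status."""
    met = COMPARISONS[comparison](metric_value, standard_value)
    return met_label if met else not_met_label


# Example: an SSI rate of 1.8 per 1000 against a standard of 1.5 (lower is better).
status = evaluate_against_standard(1.8, 1.5, "<=")   # -> "Not Met"
```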

III. Example Metrics Using the Framework:

Example 1: Surgical Site Infection Rate (Rate)

  • Metric Name: ProcedureX_SSI_Rate_Per1000
  • Description: Rate of surgical site infections within 30 days following Procedure X, per 1000 procedures performed. Monitors post-operative infection control effectiveness.
  • Measure Type: Rate (Float/Decimal)
  • Unit: Infections per 1000 procedures
  • Numerator Def: Count of patients undergoing Procedure X (CPT codes: A, B, C) within the reporting period who are diagnosed with an SSI (ICD-10 codes: X, Y, Z) attributed to that procedure within 30 days of the index date. Exclude patients with a pre-existing infection at the surgical site.
  • Denominator Def: Total count of Procedure X (CPT codes: A, B, C) performed within the reporting period. Exclude procedures on patients < 18 years old.
  • Calculation Formula: (Count of SSIs / Total Procedures) * 1000
  • Data Source(s): EHR (Procedure Logs, Diagnosis Codes), Infection Control Surveillance Data.
  • Measurement Period: Quarterly
  • Desired Direction: Lower is Better
  • Standard Value: 1.5
  • Source of Standard: Internal Quality Goal based on National Benchmark percentile.
  • Comparison Logic: Metric Value <= Standard Value
  • Comparison Outcome: {"Met Standard", "Not Met"}
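
A worked illustration of this definition with hypothetical quarterly counts (not real data), following the formula and comparison logic above:

```python
# Hypothetical quarter: 12 qualifying SSIs among 6,500 Procedure X cases.
ssi_count, procedure_count = 12, 6500
ssi_rate_per_1000 = (ssi_count / procedure_count) * 1000              # ~1.85
status = "Met Standard" if ssi_rate_per_1000 <= 1.5 else "Not Met"    # -> "Not Met"
```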

Example 2: 30-Day Unplanned Readmission (Count)

  • Metric Name: ProcedureY_Readmission_30Day_Count
  • Description: Count of patients readmitted unexpectedly to this or another acute care hospital within 30 days of discharge after undergoing Procedure Y. Indicates potential issues with discharge planning, patient education, or unresolved post-op complications.
  • Measure Type: Count (Integer)
  • Unit: Patients
  • Numerator Def: N/A (Implicit in the count definition)
  • Denominator Def: N/A (a denominator is often added later to convert the count into a rate)
  • Calculation Formula: N/A (Direct Count)
  • Data Source(s): EHR (Admission/Discharge/Transfer data), Claims Data (for external readmissions).
  • Measurement Period: Monthly
  • Desired Direction: Lower is Better
  • Standard Value: 5 (e.g., an absolute count threshold for investigation)
  • Source of Standard: Internal Operational Trigger Point.
  • Comparison Logic: Metric Value > Standard Value (triggers review)
  • Comparison Outcome: {"Below Trigger", "Trigger Exceeded"}
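
A minimal sketch of the count and the trigger check, assuming discharge and readmission dates have already been linked per patient (hypothetical data, Python):

```python
from datetime import date

# Hypothetical linked records per Procedure Y discharge:
# (discharge date, date of first unplanned readmission, or None if none occurred).
episodes = [
    (date(2024, 3, 1), date(2024, 3, 20)),   # day 19  -> counts
    (date(2024, 3, 5), None),                # no readmission
    (date(2024, 3, 10), date(2024, 4, 25)),  # day 46  -> outside the 30-day window
]

readmission_count = sum(
    1 for discharged, readmitted in episodes
    if readmitted is not None and (readmitted - discharged).days <= 30
)

needs_review = readmission_count > 5   # Comparison Logic: Metric Value > Standard Value
```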

Example 3: Timely Prophylactic Antibiotic Administration (Categorical/Boolean derived)

  • Metric Name: PreOpAntibiotic_Timeliness_Compliance
  • Description: Assesses whether prophylactic antibiotics were administered within the recommended window (e.g., 60 minutes) prior to surgical incision for applicable procedures.
  • Measure Type: Boolean at the case level (True/False), aggregated to a Proportion/Percentage for reporting.
  • Unit: Percentage (%) of compliant cases
  • Numerator Def (for %): Count of eligible surgical cases where prophylactic antibiotic was administered within the defined window (e.g., incision time - administration time <= 60 minutes and >= 0).
  • Denominator Def (for %): Total count of eligible surgical cases requiring prophylactic antibiotics as per protocol [Specify procedures/conditions].
  • Calculation Formula: (Compliant Cases / Total Eligible Cases) * 100
  • Data Source(s): EHR (Medication Administration Record - MAR, Operating Room Logs).
  • Measurement Period: Monthly
  • Desired Direction: Higher is Better
  • Standard Value: 98.0
  • Source of Standard: Regulatory/Accreditation Body Requirement (e.g., Joint Commission).
  • Comparison Logic: Metric Value >= Standard Value
  • Comparison Outcome: {"Compliant", "Non-Compliant"}
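
A sketch of the per-case Boolean derivation and the monthly aggregation, assuming incision and administration timestamps are available for each eligible case (hypothetical field names and data):

```python
from datetime import datetime


def case_is_compliant(administration_time: datetime, incision_time: datetime,
                      window_minutes: int = 60) -> bool:
    """Per-case Boolean: antibiotic given within the window before incision."""
    minutes_before_incision = (incision_time - administration_time).total_seconds() / 60
    return 0 <= minutes_before_incision <= window_minutes


# Aggregate the per-case Booleans to the reported percentage.
case_flags = [True, True, False, True]                      # hypothetical month of eligible cases
compliance_pct = 100 * sum(case_flags) / len(case_flags)    # 75.0 (%)
meets_standard = compliance_pct >= 98.0                     # Comparison Logic: >= Standard Value
```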

Key Considerations for Programmers & Statisticians:

  • Data Dictionary: Maintain this structured information in a formal data dictionary or metadata repository.
  • Data Types: Be precise about data types (integer vs float, precision for floats).
  • Null Handling: Define how missing data (e.g., missing denominator, missing component data) is handled in calculations.
  • Granularity: Define the level at which the metric is calculated (patient-level, procedure-level, monthly aggregate, ward-level, etc.).
  • Risk Adjustment: Statisticians will want to know if/how the metric is adjusted for patient risk factors (e.g., age, comorbidities) to allow fairer comparisons. This adds complexity but is often necessary.
  • Distribution: Statisticians will consider the underlying statistical distribution (e.g., Binomial for proportions, Poisson for counts) for analysis and modeling.
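
For example, a statistician might model the antibiotic-timeliness measure as binomial and test an observed month against the 98% standard with an exact binomial test. A rough sketch, assuming SciPy is available and using hypothetical counts:

```python
from scipy.stats import binomtest

# Hypothetical month: 612 compliant cases out of 640 eligible (about 95.6%).
result = binomtest(k=612, n=640, p=0.98, alternative="less")
print(result.pvalue)                                  # small p-value: evidence compliance < 98%
ci = result.proportion_ci(confidence_level=0.95)      # exact (Clopper-Pearson) interval
print(ci.low, ci.high)
```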

By using this structured vocabulary and framework, you can ensure your performance measures are well-defined, consistently understood, correctly implemented in software, and appropriately analyzed statistically.