GUIDE TO THE HEALTH FACILITY DATA QUALITY REPORT CARD


Introduction

No health data are perfect, and there is no single definition of data quality. Data from any source are subject to limitations such as missing values, bias, measurement error, and human error in data entry and computation. Data quality assessment is therefore needed to understand how much confidence can be placed in the health data presented. In particular, it is important to know the reliability of national coverage estimates and other estimates derived from HIS data that are generated for health sector reviews, as these often form the basis for annual monitoring.

There is, however, no one definition of data quality that is used consistently across institutions. Data quality is a multi-dimensional construct, and overall data quality is a function of each of its dimensions. If data quality is a latent construct defined as a function of its different dimensions, then the assessment of data quality entails an assessment of each of those dimensions. Different tools assess different dimensions of data quality.¹ Some tools focus on assessing the status of the system that is producing the data, based on questions about, for example, the presence of trained staff, forms and electronic media, procedures, adherence to definitions, and the ways in which data quality is assessed. Other tools focus on assessing the quality of the actual data generated by the system. The first step is a desk review of data quality. A desk review will not necessarily identify the underlying causes of inaccurate data, but it will identify problems of data completeness, accuracy and external consistency.

Health facility data are a critical input into assessing national progress and performance on an annual basis, and they provide the basis for subnational/district performance assessment. WHO proposes the Health Facility Data Quality Report Card (DQRC), a methodology that examines certain dimensions of data quality through a desk review of available data and a data verification component. The aim of the DQRC is to ensure systematic assessment of the completeness and the internal and external consistency of the reported data or computed statistics, and to determine whether there are any data quality problems that need to be addressed.

The desk review component of the DQRC is conducted using the WHO Data Quality Assessment (DQA) Tool, an Excel-based tool that reviews the quality of data generated by a health facility-based information system for four key indicators: antenatal care first visit (ANC1), health facility deliveries, diphtheria-pertussis-tetanus third dose (DTP3) and outpatient department (OPD) visits. Through analysis of these four standard tracer indicators, the tool quantifies problems of data completeness, accuracy and external consistency, and thus provides valuable information on whether health facility data are fit for purpose to support planning and annual monitoring.

Data verification refers to the assessment of reporting 'correctness', that is, comparing health facility source documents to HIS reported data to determine the proportion of the reported numbers that can be verified from the source documents. It checks whether the information contained in the source documents has been transmitted correctly to the next higher level of reporting, for each level of reporting, from the health facility level to the national level. It is recommended to implement data verification with the annual health facility survey (Service Availability and Readiness Assessment, SARA) on a representative sample of health facilities to obtain a national-level estimate of the verification factor for the health information system.

¹ Different frameworks and different dimensions of data quality will be discussed in a data quality assessment guideline document that is under development. In this document, we will only focus on two of the dimensions of data quality that are assessed by the Data Quality Report Card.

The DQRC examines four dimensions of data quality: 1) completeness of reporting; 2) internal consistency of reported data; 3) external consistency of population data; and 4) external consistency of coverage rates. There are two levels of assessment for the indicators in the DQRC: 1) an assessment of each indicator at the national level; and 2) the performance of sub-national units, mostly districts or provinces/regions, on the selected indicator. The indicator definitions change depending on whether they are evaluated nationally or sub-nationally.

The DQRC has been designed primarily to be examined at the national level. However, certain provinces/states/regions in a country might want to look only at their own performance; for example, a province might want to examine data quality in its districts. This is possible in the DQRC: any reference to the word national can be replaced with a sub-national unit.²

For each of the four dimensions a small set of indicators is used. The indicators can, with adaptations, be applied to most indicators that can be derived from health facility data. Data quality problems are usually systemic and are not specific to any one program area. For instance, there may be a group of districts or facilities that do not report at all or that report poorly, or the denominators of the coverage indicators, based on population projections, may be systematically off. Even if it is not possible to do an exhaustive data quality analysis of all the key indicators in the national health strategy, conducting a data quality analysis as described below can indicate potential problems in multiple program areas. The focus in this manual is on maternal and child health indicators.

The DQRC should be generated on an annual basis to evaluate the quality of the data to be used for annual reviews. The following overview lists the indicators of the DQRC; a detailed explanation of each indicator is given in the subsequent sections.


² This should, however, be done cautiously. For external comparison, survey aggregation levels are usually only at the state/province/regional level. If a province wants to look at within-province data quality performance, it will not be able to make external comparisons of its facility data with survey data if the survey aggregation level from the most recent population-based survey is only available at the province level.

Overview of DQRC indicators. For each indicator, the national profile gives the definition and formula applied at the national level, and the sub-national profile gives the application of the indicator at the sub-national unit level.

Completeness of reporting

Completeness of sub-national unit reporting
- National profile: % of monthly sub-national unit (such as district) reports received for a specified period of time (usually one year).
  Formula: Total # of sub-national unit reports received / Total # of expected sub-national unit reports
- Sub-national profile: Number and % of sub-national units that had less than 80% completeness of monthly reporting for a specified period of time (usually one year).
  Formula: # of sub-national units with less than 80% reporting completeness / Total # of sub-national units in the country

Completeness of facility reporting
- National profile: % of expected monthly facility reports received for a specified period of time (usually one year).
  Formula: Total # of facility reports received nationally / Total # of expected facility reports nationally
- Sub-national profile: Number and % of sub-national units with monthly facility reporting rates below 80% for a specified period of time (usually one year).
  Formula: # of sub-national units with monthly facility reporting rates less than 80% / Total # of sub-national units in the country

Completeness of indicator data (zero/missing values)
- National profile: % of monthly sub-national unit reports that are NOT zero/missing (average of 4 indicators: ANC1, Deliveries, DTP3, OPD).
  Formula: Total # of zero/missing values for all sub-national units for the reporting year for ANC1 + Deliveries + DTP3 + OPD / (Total # of sub-national units X 12 X 4)
- Sub-national profile: Number and % of sub-national units with at least 20% zero/missing values (average of 4 indicators: ANC1, Deliveries, DTP3, OPD).
  Formula: # of sub-national units with more than 20% zero and missing values for all four indicators (ANC1 + Deliveries + DTP3 + OPD) combined* / Total # of sub-national units in the country

*A sub-national unit has more than 20% missing/zero values for the four indicators combined when the following is greater than 20%: (# of zero/missing values in ANC1 + Deliveries + DTP3 + OPD) / (Total # of expected values for the reported year for each indicator X 4)

Internal consistency of reported data

Extreme outliers
- National profile: % of monthly sub-national unit values that are extreme outliers (at least 3 standard deviations (SD) from the mean) -- average of 4 indicators (ANC1, Deliveries, DTP3, OPD).
  Formula: Total number of extreme outliers for ANC1 + Deliveries + DTP3 + OPD / (Total # of sub-national units X 12 expected monthly reported values per sub-national unit for 1 indicator X 4 indicators)
- Sub-national profile: Number and % of sub-national units in which even one of the monthly sub-national unit values in any of the four indicators is an extreme outlier (±3 SD from the sub-national unit mean).
  Formula: # of sub-national units with at least one extreme outlier in ANC1, Deliveries, DTP3 or OPD / Total # of sub-national units in the country

Moderate outliers
- National profile: % of sub-national unit values that are moderate outliers (between ±2 and 3 SD from the mean) -- average of 4 indicators (ANC1, Deliveries, DTP3, OPD).
  Formula: Total number of moderate outliers for ANC1 + Deliveries + DTP3 + OPD / (Total # of sub-national units X 12 expected monthly reported values per sub-national unit for 1 indicator X 4 indicators)
- Sub-national profile: Number and % of sub-national units in which 5% or more of the monthly sub-national unit values for all four indicators combined are moderate outliers (between ±2 and 3 SD from the sub-national unit mean).
  Formula: Total # of sub-national units with 5% or more moderate outliers for all four indicators combined* / Total # of sub-national units in the country
  *A sub-national unit has 5% or more moderate outliers for the four indicators combined when the following is greater than or equal to 5%: (# of moderate outliers in ANC1 + Deliveries + DTP3 + OPD) / (Total # of expected values for the reported year for each indicator X 4)

Consistency over time
- National profile: Number of events for the current year divided by the mean of the preceding 3 years (average for ANC1, Deliveries, DTP3, OPD).
  Formula: Sum of the ratios of the total number of events for the current year to the mean number of events for up to 3 preceding years for ANC1 + Deliveries + DTP3 + OPD / 4
- Sub-national profile: Number and % of sub-national units with at least 33% difference between the national ratio of the current year to the mean of the preceding 3 years and the sub-national unit ratio of the current year to the mean of the preceding 3 years.
  Formula: # of sub-national units whose ratio is more than ±33% different from the national ratio / Total # of sub-national units in the country

Internal consistency between indicators
- National profile: THERE IS NO NATIONAL LEVEL CONSISTENCY CHECK FOR THIS INDICATOR.
- Sub-national profile: Number and % of sub-national units whose ratio of DTP1 to ANC1 from HMIS data is at least 33% different from the national HMIS ratio of DTP1 to ANC1.
  Formula: # of sub-national units whose DTP1 to ANC1 ratio is more than ±33% different from the national ratio for DTP1 to ANC1 / Total # of sub-national units in the country

Consistency between DTP1 and DTP3
- National profile: Number of DTP3 immunizations divided by the number of DTP1 immunizations (should be less than 1).
  Formula: Total # of national DTP3 immunization doses recorded / Total # of national DTP1 immunization doses recorded
- Sub-national profile: Number and % of sub-national units with the number of DTP3 immunizations over 2% higher than DTP1 immunizations.
  Formula: # of sub-national units with DTP3 doses that were over 2% higher than DTP1 doses / Total # of sub-national units in the country

Verification of reporting consistency through facility survey
- National profile: % of agreement between data in sampled facility records and national records for the same facilities.
  Formula: # of recounted events / # of reported events = verification factor. The verification can be done for any indicator; if done for more than one indicator, the verification factors can be averaged.
- Sub-national profile: THIS INDICATOR CANNOT CURRENTLY BE CALCULATED IN THIS TOOL.

External consistency of population data

Consistency of population projection (UN)
- National profile: Population projection of the number of live births from the Bureau of Statistics divided by the UN projection of live births.
  Formula: Population projection of live births from the Bureau of Statistics / Population projection of live births from the UN Population Division
- Sub-national profile: THIS INDICATOR IS NOT CALCULATED SUB-NATIONALLY.

Consistency of denominator (estimated number of pregnant women)
- National profile: Ratio of the official denominator estimate for pregnant women divided by an alternative denominator for the number of pregnant women (derived by dividing ANC1 total events from the HMIS by the ANC1 coverage rate estimated from the most recent population-based survey).
  Formula: Official denominator for pregnant women / Alternative denominator for pregnant women*
  *Calculated by: ANC1 total events from HMIS / ANC1 coverage rate from the most recent population-based survey
- Sub-national profile: Number and % of aggregation units used for the most recent population-based survey (such as a province/state/region) whose official estimates of the total number of pregnant women and alternative estimates of pregnant women are at least 33% different from each other.
  Formula: # of provinces/states/regions whose official estimate for pregnant women is more than ±33% different from the alternative estimate for pregnant women / Total number of provinces/states/regions

Consistency of denominator (estimated number of children under 1 year)
- National profile: Ratio of the official denominator for the number of children under 1 year divided by an alternative denominator for children under 1 year of age (derived by dividing DTP1 total events from the HMIS by the DTP1 coverage rate estimated from the most recent population-based survey).
  Formula: Official denominator for children under 1 / Alternative denominator for children under 1*
  *Calculated by: DTP1 total events from HMIS / DTP1 coverage rate from the most recent population-based survey
- Sub-national profile: Number and % of aggregation units used for the most recent population-based survey (such as a province/state/region) whose official under-1 estimates and alternative under-1 estimates are at least 33% different from each other.
  Formula: # of provinces/states/regions whose official estimate for children under 1 is more than ±33% different from the alternative estimate for children under 1 / Total number of provinces/states/regions

External comparison

External comparison of ANC1
- National profile: ANC1 coverage rate based on facility reports divided by the survey coverage rate.
  Formula: ANC1 coverage rate from facility data / ANC1 coverage rate from the most recent population-based survey
- Sub-national profile: Number and % of aggregation units used for the most recent population-based survey (such as a province/state/region) whose ANC1 facility-based coverage rates and survey coverage rates are at least 33% different from each other.
  Formula: # of provinces/states/regions whose ANC1 facility coverage rate is more than ±33% different from the ANC1 coverage rate from the most recent population-based survey / Total number of provinces/states/regions

External comparison of deliveries
- National profile: Delivery coverage rate based on facility reports divided by the survey coverage rate.
  Formula: Delivery coverage rate from facility data / Delivery coverage rate from the most recent population-based survey
- Sub-national profile: Number and % of aggregation units used for the most recent population-based survey (such as a province/state/region) whose facility-based coverage rates for deliveries and survey coverage rates for deliveries are at least 33% different from each other.
  Formula: # of provinces/states/regions whose delivery facility coverage rate is more than ±33% different from the delivery coverage rate from the most recent population-based survey / Total number of provinces/states/regions

External comparison of DTP3
- National profile: DTP3 coverage rate based on facility reports divided by the survey coverage rate.
  Formula: DTP3 coverage rate from facility data / DTP3 coverage rate from the most recent population-based survey
- Sub-national profile: Number and % of aggregation units used for the most recent population-based survey (such as a province/state/region) whose DTP3 facility-based coverage rates and survey coverage rates are at least 33% different from each other.
  Formula: # of provinces/states/regions whose DTP3 facility coverage rate is more than ±33% different from the DTP3 coverage rate from the most recent population-based survey / Total number of provinces/states/regions

Data Quality Report Card Indicators

This section describes in detail the indicators that comprise the DQRC and how they are calculated. The DQRC is a method that can be applied by anyone without any specific tool. However, to make it easier to calculate the indicators included in the DQRC, there is an accompanying Excel-based data quality assessment tool that provides results for the DQRC indicators based on the data that are entered.
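As a rough illustration of how the recurring DQRC calculations could be scripted outside the Excel tool, the following Python sketch defines small helpers for the three operations that recur throughout the indicators (completeness rates, percentage differences against a national reference, and counting flagged units). The function names and structure are illustrative assumptions, not part of the WHO tool.

```python
# Minimal sketch (not part of the WHO DQA tool) of three calculations that
# recur throughout the DQRC indicators.

def completeness_rate(received: int, expected: int) -> float:
    """% of expected reports (or data values) actually received."""
    return 100.0 * received / expected

def pct_difference(unit_value: float, reference: float) -> float:
    """Absolute % difference of a sub-national value from a reference
    (e.g. national) value; several indicators flag units beyond 33%."""
    return abs(unit_value - reference) / reference * 100.0

def share_flagged(flags) -> tuple:
    """Number and % of units flagged by a check."""
    flags = list(flags)
    return sum(flags), 100.0 * sum(flags) / len(flags)

# Example: a district ratio of 1.6 versus a national ratio of 1.3
print(round(pct_difference(1.6, 1.3), 1))   # 23.1
```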

Dimension 1: Completeness of reporting

The purpose of this dimension is to examine whether all entities that are supposed to report are actually reporting. The indicators in this dimension cover completeness of reporting at the health facility level, completeness of reporting at levels higher than a health facility (usually the first administrative unit level, such as the district), and the completeness of the data elements in a report (i.e. the presence of missing data).

1a. Completeness of administrative unit reporting - In many countries health facilities send their monthly reports to the next reporting administrative unit, e.g. a district. These administrative units compile the information and forward it to the next level of reporting. All administrative units have a reporting schedule that they are supposed to adhere to. This indicator assesses whether the administrative unit is reporting according to schedule. For this indicator the reporting rates of facilities within the administrative boundaries do not matter; only the reporting rate of the actual administrative unit counts. This provides an indication of the health office's performance in compiling and submitting its monthly reports on a timely basis. In many countries where the HMIS is now web-based from the health facility onwards, this indicator is redundant.

Definition: Completeness of district/provincial reporting (%) is defined as the number of district/provincial monthly reports received on time divided by the total number of reports expected for a specified time period (usually one year). A completeness rate of 100% indicates that all units reported on time.

At the national level, this is a straightforward calculation according to the definition above. At the subnational level (e.g. district or province), a completeness rate is computed for each administrative unit over the specified time period (usually one year). Units that have a completeness rate below 80% are considered to have poor reporting. The percentage of units that have a completeness rate below 80% is shown in the summary table of results.

Example: At the national level, if the country has 10 districts, the expected number of reports would be 120 (10 reports/month X 12 months). The actual number of reports received was 97 (shown in Table 1). The completeness rate would therefore be 97/120 = 81%.

At the subnational level, suppose there are ten districts that are expected to report monthly. Table 1 shows an example of monthly reporting by ten districts over the span of twelve months. Five of the 10 districts (50%) have a district reporting completeness rate of less than 80%. The summary of the results is shown in Table 2.

Table 1: District reporting example. Districts with poor reporting (i.e. completeness rate below 80%) are shown in red.

              Reports received (of 12)   Completeness rate
District 1              9                      75%
District 2             12                     100%
District 3             12                     100%
District 4             10                      83%
District 5             11                      92%
District 6              9                      75%
District 7              7                      58%
District 8             12                     100%
District 9              7                      58%
District 10             8                      67%
National               97                      81%

Reports received nationally by month (1-12): 10, 8, 6, 8, 7, 10, 8, 8, 9, 9, 8, 6

Table 2: Example summary results.

National district monthly reporting completeness rate: 81%
Number (%) of districts with completeness rate below 80%: 5 (50%)
Districts with completeness rate below 80%: District 1, District 6, District 7, District 9, District 10
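To make the 1a calculation concrete, here is a small Python sketch (illustrative only; the district counts are the example values above, and the data structure is not the Excel tool's format) that computes the national completeness rate and flags districts below 80%.

```python
# Sketch of the Dimension 1a calculation, using the example data above.
# Each district maps to the number of monthly reports received out of 12.
reports_received = {
    "District 1": 9,  "District 2": 12, "District 3": 12, "District 4": 10,
    "District 5": 11, "District 6": 9,  "District 7": 7,  "District 8": 12,
    "District 9": 7,  "District 10": 8,
}
EXPECTED_PER_DISTRICT = 12  # monthly reports per year

total_received = sum(reports_received.values())                  # 97
total_expected = EXPECTED_PER_DISTRICT * len(reports_received)   # 120
national_rate = 100 * total_received / total_expected            # ~81%

poor = [d for d, n in reports_received.items()
        if 100 * n / EXPECTED_PER_DISTRICT < 80]

print(f"National completeness: {national_rate:.0f}%")
print(f"Districts below 80%: {len(poor)} ({100 * len(poor) / len(reports_received):.0f}%) -> {', '.join(poor)}")
```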

1b. Completeness of facility reporting - All facilities are expected to submit reports on key service outputs on a pre-determined schedule; in most countries this schedule is monthly. The best-case scenario would include reporting from all public facilities, private facilities, facilities run by non-governmental organizations, faith-based organizations, etc. However, in most developing countries only the public health facilities, and sometimes facilities run by non-governmental organizations and faith-based organizations, report into the health management information system (HMIS). It is critical to know the facility reporting completeness rate to make an informed interpretation of key indicators: if facility reporting completeness is less than 100%, one has only partial and incomplete information on health indicators. The total expected reports would include all facilities that are supposed to report to the HMIS.

Definition: Completeness of facility reporting (%) is defined as the number of reports received, according to schedule, from all health facilities nationally, divided by the total expected reports from all facilities that are supposed to report to the HMIS for a specified time period (usually one year). The numerator is the actual number of facility reports submitted and the denominator is the total number of facility reports expected.

At the national level, this is a straightforward calculation according to the definition above. A completeness rate of 100% indicates that all units reported on time. At the subnational level (e.g. district or province), a facility reporting completeness rate is computed for each administrative unit over the specified time period (usually one year): the total number of facility reports received is divided by the total number of facility reports expected for each administrative unit. Subnational units that have facility reporting rates below 80% within their administrative boundaries are considered to have poor reporting.

Example: At the national level, if a country has 1,000 facilities that report to the HMIS, the total number of expected reports for one year would be 1,000 X 12 = 12,000 reports. At the end of the year only 10,164 reports have been received (shown in Table 1 below). Completeness of facility reporting = 10,164/12,000 = 85%. At the subnational level, facility reporting rates within each of the 10 districts are examined. Districts that have less than 80% facility reporting completeness are shown in red. Three out of 10 districts (30%) have facility reporting rates of less than 80%. A summary of results is shown in Table 2.

Table 1: Facility reporting rate within districts. Districts with a facility reporting rate of less than 80% are shown in red.

             Total # of   Expected reports          Actual # of reports     Facility completeness
             facilities   (Total facilities X 12)   received in 12 months   rate (%)
District 1      100            1200                      1200                  100%
District 2      150            1800                      1140                   63%
District 3       50             600                       554                   92%
District 4       80             960                       960                  100%
District 5      120            1440                      1080                   75%
District 6      170            2040                      1920                   94%
District 7      130            1560                      1270                   81%
District 8      100            1200                      1200                  100%
District 9       40             480                       240                   50%
District 10      60             720                       600                   83%
National       1000           12000                     10164                   85%

Table 2: Example summary results.

National facility reporting completeness rate: 85%
Number (%) of districts with facility reporting completeness rate below 80%: 3 (30%)
Districts with completeness rate below 80%: District 2, District 5, District 9
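A short Python sketch of the 1b calculation follows (illustrative only, using the example districts above; the structure of the input data is an assumption, not the tool's format).

```python
# Sketch of Dimension 1b: facility reporting completeness per district.
# (number of facilities, reports received over 12 months) per district, from Table 1.
districts = {
    "District 1": (100, 1200), "District 2": (150, 1140), "District 3": (50, 554),
    "District 4": (80, 960),   "District 5": (120, 1080), "District 6": (170, 1920),
    "District 7": (130, 1270), "District 8": (100, 1200), "District 9": (40, 240),
    "District 10": (60, 600),
}

expected = {d: n_fac * 12 for d, (n_fac, _) in districts.items()}
received = {d: got for d, (_, got) in districts.items()}

national_rate = 100 * sum(received.values()) / sum(expected.values())   # ~85%
poor = [d for d in districts if 100 * received[d] / expected[d] < 80]

print(f"National facility completeness: {national_rate:.0f}%")
print(f"Districts below 80%: {len(poor)} ({100 * len(poor) / len(districts):.0f}%) -> {poor}")
```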

1c. Completeness of indicator data (zero/missing values) - Completeness of indicator data measures the extent to which facility and district reports include all reportable events. Missing data should be clearly differentiated from true zero values in district and facility reports: a true zero value indicates that no reportable events occurred in the specified reporting period, while a missing value indicates that reportable events occurred but were not actually reported. In many HMIS reports, missing entries are assigned a value of 0, making it impossible to distinguish a true zero value (no events occurred) from a missing value (events occurred but were not reported). Since it is difficult to differentiate between the two, both criteria have been combined into one indicator. Completeness of indicator data can be examined at a more aggregate administrative unit level, such as the district or province, or at the facility level. The preferred level of analysis is zero/missing values at the facility level, but if data are only available at a more aggregate level, zero/missing values can be examined at that level.

Definition: Completeness of indicator data (zero/missing values) (%) is defined as the average percentage of monthly values for antenatal care first visit (ANC1), deliveries, diphtheria-pertussis-tetanus third dose (DTP3) and outpatient department (OPD) visits combined that are not zero or missing for the specified time period (usually one year). That is, the indicator is calculated by subtracting the percentage of values that are zero or missing from 100%.

At the national level this indicator is as defined above - an average percentage across the four indicators. At the subnational level (e.g. district or province), it is the percentage of administrative units in which more than 20% of the monthly values of the four indicators combined are zero or missing. This percentage is calculated by summing all the zero/missing values within an administrative unit, across the four indicators, for a specified period of time and dividing by the total number of expected values, across the four indicators, for the administrative unit for the same period.

Example: The example below shows the percentage of zero/missing values for ANC1. When examining monthly district-level data for ANC1 over a period of one year, there are, nationally, 21 occasions on which district-level data show zero or missing values.
- The numerator, 21, is the national total of district-level zero/missing values for ANC1.
- The denominator is the total expected number of values. With 10 districts and 12 expected monthly values per district for ANC1, the total expected values nationally are 120.
- The total % of zero/missing values nationally for ANC1 is 17.5% (21/120). However, since we are calculating values that are not zero/missing, the indicator is 100% - 17.5% = 82.5%.

At the subnational level, Table 1 shows that 4 out of 10 districts (40%) have more than 20% zero/missing values for ANC1 within their districts.

Table 1: Zero/missing values by district for ANC1. Districts with 20% or more of their values zero or missing are marked in red.

              Total zero/missing values   % zero/missing
District 1              2                      17%
District 2              0                       0%
District 3              0                       0%
District 4              1                       8%
District 5              1                       8%
District 6              3                      25%
District 7              5                      42%
District 8              0                       0%
District 9              5                      42%
District 10             4                      33%
National               21                    17.5%

Zero/missing values nationally by month (1-12): 0, 2, 4, 2, 3, 0, 2, 1, 1, 0, 2, 4

A similar calculation is done for the other three indicators (deliveries, DTP3 and OPD). The zero/missing values for deliveries, DTP3 and OPD are 30, 5 and 10, respectively. At the national level, the indicator is calculated by averaging zero/missing values across the four indicators:
- The numerator is the total number of zero/missing values for the four indicators: 21 + 30 + 5 + 10 = 66.
- The denominator is the total expected number of values for the four indicators. With 10 districts and 48 expected monthly values per district for the four indicators (ANC1, deliveries, DTP3, OPD), the total expected values nationally are 480.
- The total % of zero/missing values nationally for the four indicators is 13.75% (66/480). However, since we are calculating values that are not zero/missing, the indicator is 100% - 13.75% = 86.25%.

At the subnational level, the number of districts with more than 20% zero/missing values for ANC1, deliveries, DTP3 and OPD is 4, 5, 1 and 2, respectively. The average number of districts with more than 20% zero/missing values across the four indicators is (4 + 5 + 1 + 2)/4 = 3. Therefore, 30% (3/10) of the districts have more than 20% zero/missing values.


Example (continued): Subnationally, the indicator is calculated by summing missing/zero values across all four indicators. Table 2 below gives an example. The steps for calculating the indicator are:
a) Total number of missing/zero values - Sum the total number of missing/zero values across the four indicators for each district. For example, District 4 has missing/zero values in three of the indicators, with a total of 11 missing/zero values.
b) Total expected monthly reported values - The total number of expected monthly reported values for each district for one year for each indicator is 12. That is, one expects to see 12 monthly values for ANC1, 12 for deliveries, 12 for DTP3 and 12 for OPD. The total number of expected monthly values for the four indicators is 12 + 12 + 12 + 12 = 48.
c) Calculate the % of missing/zero values - For each district, divide the total number of missing/zero values for the four indicators by the total number of expected monthly reported values for the four indicators. For District 4 (as seen in Table 2 below), the total number of missing/zero values is 11 and the total number of expected monthly reported values is 48, so the percentage of missing/zero values is 11/48 = 23%.

Three districts out of 10 have 20% or more of the reported monthly values across the four indicators missing or zero.

Table 2: Zero/missing values by district across the four indicators. Districts with 20% or more of their values missing/zero are highlighted in red.

             ANC1   Deliveries   DTP3   OPD   Total # of zero/   % zero/
                                              missing values     missing
District 1     2        4          0     0          6              13%
District 2     0        0          0     0          0               0%
District 3     0        3          0     1          4               8%
District 4     1        7          0     3         11              23%
District 5     1        4          0     0          5              10%
District 6     3        0          0     0          3               6%
District 7     5        7          5     5         22              46%
District 8     0        0          0     0          0               0%
District 9     5        5          0     1         11              23%
District 10    4        0          0     0          4               8%

Summary results at the national and subnational level are shown in Table 3.

Table 3: Example summary results.

% of monthly values that are not zero/missing (average of ANC1, deliveries, DTP3, OPD): 86.25%
Number (%) of districts with 20% or more values zero/missing for the four indicators combined: 3 (30%)
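As an illustration (not part of the WHO tool), the following Python sketch computes both the national 1c indicator and the subnational flag from per-district counts of zero/missing values per indicator, using the example figures above; only three districts are listed to keep the sketch short.

```python
# Sketch of Dimension 1c: completeness of indicator data (zero/missing values).
# zero_missing[district] = counts of zero/missing monthly values per indicator.
zero_missing = {
    "District 1": {"ANC1": 2, "Deliveries": 4, "DTP3": 0, "OPD": 0},
    "District 4": {"ANC1": 1, "Deliveries": 7, "DTP3": 0, "OPD": 3},
    "District 7": {"ANC1": 5, "Deliveries": 7, "DTP3": 5, "OPD": 5},
    # ... remaining districts omitted for brevity
}
N_DISTRICTS = 10          # total districts in the example
EXPECTED = 12 * 4         # 12 months x 4 indicators per district

totals = {d: sum(v.values()) for d, v in zero_missing.items()}

# National indicator: % of values that are NOT zero/missing.
grand_total = 66          # sum over all 10 districts in the example
national = 100 - 100 * grand_total / (N_DISTRICTS * EXPECTED)   # 86.25%

# Subnational flag: districts with more than 20% zero/missing values combined.
flagged = [d for d, t in totals.items() if 100 * t / EXPECTED > 20]

print(f"National completeness of indicator data: {national:.2f}%")
print(f"Flagged districts: {flagged}")   # District 4 and District 7 among those shown
```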

Dimension 2: Internal consistency of reported data

This dimension looks at the accuracy and reliability of the category of data that are classified as numerators (or event data) when calculating coverage indicators. The indicators within this dimension examine outliers (more than 2 and/or 3 standard deviations from the annual average), compare events (numerator information) of similar indicators to assess their congruence, examine trends over time, and compare source data from health facility registers with the data actually reported in the HMIS.

2a. Accuracy of event reporting: outliers in the current year - Reported data most often follow a pattern: reported event data over a period of time (such as monthly data) usually look very similar to each other, without very large variations in the numbers. So when significant variations do happen, it is important to examine them to confirm whether they are legitimate or whether there is a data quality issue. If the reported values follow a normal distribution, 68% of all values fall within 1 standard deviation (SD) of the mean, 95% within 2 SD, and about 99.7% within 3 SD.

This indicator examines values that are outliers. Outliers are classified as moderate or extreme. A moderate outlier is any single reported value in a specified period that is between 2 and 3 SD from the average value for the same reporting period. An extreme outlier is any single reported value in a specified period that is more than 3 SD from the average value for the same reporting period. The cut-offs for outliers have been purposefully kept wide to ensure that small variations in values due to cyclical or other programmatic reasons are not mistakenly captured as outliers. Based on these definitions, less than 5% of the values should be moderate outliers and less than 1% should be extreme outliers.

Definition: Accuracy of event reporting (%), moderate outliers, is defined as the average % of reported monthly data for the four indicators (ANC1, deliveries, DTP3 and OPD) that are moderate outliers (between 2 and 3 SD from the average value of the indicator) for a specified reporting period (usually one year). Accuracy of event reporting (%), extreme outliers, is defined as the average % of reported monthly data for the four indicators that are extreme outliers (at least 3 SD from the mean) for a specified time period (usually one year).

At the national level this indicator is as defined above. Moderate outliers for the four indicators are summed and divided by the expected number of values for all the indicators. If the period of analysis is one year, the total number of expected values for one indicator is (total number of administrative units of analysis X 12), and for the four indicators combined it is (total number of administrative units of analysis X 12 X 4). A similar calculation is done for extreme outliers.

Moderate outliers: At the subnational level (e.g. district or province), the indicator is the percentage of administrative units in which more than 5% of the combined monthly values of the four indicators are moderate outliers (between ±2-3 SD from the administrative unit mean).
This percentage is calculated by summing all the moderate outliers within an administrative unit for the four indicators for a specified period of time and dividing by the total number of expected values for the administrative unit for the same specified period of time.

Extreme outliers: At the subnational level (e.g. district or province), the indicator is the percentage of administrative units in which even one of the monthly administrative unit values in any of the four indicators is an extreme outlier (±3 SD from the administrative unit mean). This percentage is calculated by dividing the total number of administrative units with extreme outliers for the specified time period by the total number of administrative units. Some administrative units might have an extreme outlier in more than one of the indicators; however, they are only counted once here.

Example: First, we will examine outliers for ANC1. Table 1 below shows moderate outliers for ANC1. There are 8 moderate outliers for ANC1, highlighted in red; eight of the districts have at least one monthly ANC1 value that is a moderate outlier.

Table 1: Monthly ANC1 values by district. Values in red are moderate outliers.

      Month:    1     2     3     4     5     6     7     8     9    10    11    12
District 1   2543  2482  2492  2574  3012  2709  3019  2750  3127  2841  2725  2103
District 2   1547  1340  1403  1593  2161  1729  1646  1642  1355  1581  1412  1410
District 3    776   541   515   527   857   782   735   694   687   628   596   543
District 4   1184  1118  1195  1228  1472  1324  1322  1305  1160  1178  1084  1112
District 5   1956  1773  1768  2062  2997  2056  1839  1842  2028  2002  2032  1904
District 6    819   788   832   802   999   596   672   792   933  1134   810   789
District 7    781  1199   981   963   818   897   853   736  2208  2734  1323  1229
District 8   1382  1379  1134  1378  1417  1302  1415  1169  1369  1184  1207  1079
District 9   1992  1751  1658  1823  3306  2692  2300  2218  2026  2003  1752  1753
District 10  3114  2931  2956  4637  6288  4340  3788  3939  3708  4035  3738  3606
National        0     0     0     0     5     0     0     0     0     2     0     1

Nationally, this indicator is the average percentage of values that are moderate outliers across the four indicators.
- The numerator is the sum of moderate outliers across the four indicators for all administrative units. If the total numbers of ANC1, delivery, DTP3 and OPD moderate outliers in the 10 districts for one year are 8, 5, 7 and 2, respectively, the numerator is the sum of these values (8 + 5 + 7 + 2 = 22 total moderate outliers).
- The denominator is the sum of the total number of expected reported values for all four indicators for all the administrative units. It is calculated by multiplying the total number of units (in the selected administrative unit level) by the expected number of reported values for one indicator for one administrative unit, by 4 (to cover the four indicators). In this example the denominator is: 10 districts X 12 expected monthly reported values per district for one indicator X 4 indicators = 480 total expected reported values.
The average percentage of reported values that are moderate outliers is therefore (22/480) X 100 ≈ 4.6%.

Subnationally, the indicator is calculated by summing moderate outliers across all four indicators. Table 2 below gives an example. The steps for calculating the indicator are:
a) Total number of moderate outliers - Sum the total number of moderate outliers across the four indicators for each district. For example, District 2 has moderate outliers in two of the indicators, with a total of 2 moderate outliers.

b) Total expected monthly reported values - The total number of expected monthly reported values for each district for one year for each indicator is 12. That is, one expects to see 12 monthly values for ANC1, 12 for deliveries, 12 for DTP3 and 12 for OPD. The total number of expected monthly values for the four indicators is 12 + 12 + 12 + 12 = 48.
c) Calculate the % of moderate outliers - For each district, divide the total number of moderate outliers for the four indicators by the total number of expected monthly reported values for the four indicators. For District 2 (as seen in Table 2 below), the total number of moderate outliers is 2 and the total number of expected monthly reported values is 48, so the percentage of moderate outliers is 2/48 = 4%.

Three districts out of 10 have 5% or more of the reported monthly values across the four indicators that are moderate outliers.

Table 2: Moderate outliers by district across the four indicators. Districts with 5% or more of values that are moderate outliers are highlighted in red.

             ANC1   Deliveries   DTP3   OPD   Total # of outliers   % of values
District 1     1        1          1     0            3                 6%
District 2     1        0          1     0            2                 4%
District 3     0        0          0     1            0                 0%
District 4     1        0          1     0            2                 4%
District 5     1        1          1     0            3                 6%
District 6     1        0          1     0            2                 4%
District 7     1        1          1     0            3                 6%
District 8     0        1          0     0            1                 2%
District 9     1        1          0     1            2                 4%
District 10    1        0          1     0            2                 4%

Table 3 gives the summary national and sub-national results.

Table 3: Example summary results.

% of district monthly values that are moderate outliers (between ±2-3 SD from the district mean), average of 4 indicators (ANC1, Deliveries, DTP3, OPD): 4.6%
Number and % of districts in which 5% or more of the monthly district values for all four indicators combined are moderate outliers (between ±2-3 SD from the district mean): 3 (30%) - Districts 1, 5 and 7

The national calculation for extreme outliers is done in the same way as for moderate outliers. For the subnational calculation of extreme outliers, a district is flagged if even one of its indicators has an extreme outlier. So if District 3 has 1 extreme outlier for OPD and no other district has an extreme outlier for any of the indicators, the number and percentage of districts with extreme outliers across the four indicators is 1/10 = 10%.
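The outlier classification above can be expressed in a few lines of Python. The sketch below is illustrative only; the standard deviation convention (population SD rather than sample SD) is an assumption, since the guide does not specify which is used.

```python
# Sketch of Dimension 2a: flag moderate (2-3 SD) and extreme (>3 SD) outliers
# among one administrative unit's 12 monthly values.
from statistics import mean, pstdev  # population SD assumed; the guide does not specify

def classify_outliers(monthly_values):
    """Return (moderate, extreme) lists of (month, value) outliers."""
    avg = mean(monthly_values)
    sd = pstdev(monthly_values)
    moderate, extreme = [], []
    for month, value in enumerate(monthly_values, start=1):
        if sd == 0:
            continue  # no variation, nothing can be an outlier
        deviation = abs(value - avg)
        if deviation > 3 * sd:
            extreme.append((month, value))
        elif deviation >= 2 * sd:
            moderate.append((month, value))
    return moderate, extreme

# Example: District 2's monthly ANC1 values from Table 1 above.
district_2 = [1547, 1340, 1403, 1593, 2161, 1729, 1646, 1642, 1355, 1581, 1412, 1410]
mod, ext = classify_outliers(district_2)
print("Moderate outliers:", mod)   # month 5 (2161) is between 2 and 3 SD from the mean
print("Extreme outliers:", ext)    # none
```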

2b. Consistency over time - This indicator shows the consistency of the values for key indicators in the most recent year compared with the mean value of the same indicator for the previous three years combined. Differences in values are expected from one year to the next; however, very large differences warrant further scrutiny. While large differences usually suggest some type of reporting error, it is also possible that the introduction of a new intervention contributed to a large percentage increase in indicator values from one year to the next.

Definition: Consistency over time (%) is defined as the average ratio of events/service outputs for the current year of analysis to the mean events/service outputs of up to three preceding years, for ANC1, deliveries, DTP3 and OPD.

At the national level this indicator is as defined above - an average of the ratios of the four indicators. Sub-nationally, this indicator looks at the percentage of administrative units in the selected administrative level of analysis with at least a 33% difference between their ratio and the national ratio across the four indicators. The purpose of the sub-national indicator is to see how different an administrative unit value is from the national value. National values can often mask intra-country differences; a large difference between an administrative unit and the national value shows an administrative unit that is performing very differently (has a very different trend) from the nation as a whole. If the performance of the unit is better than the nation's, it would be useful to examine the possible factors contributing to the improved performance; similarly, factors contributing to poor performance should be examined if the administrative unit is performing poorly compared to the nation.

Example: First, we will examine consistency over time for ANC1.
National total for ANC1 for 2011 = 355,000
National total for ANC1 for 2010 = 300,000
National total for ANC1 for 2009 = 288,000
National total for ANC1 for 2008 = 260,000
Mean of 2008, 2009 and 2010 = (260,000 + 288,000 + 300,000)/3 = 848,000/3 ≈ 282,667
Ratio of the current year to the mean of the past three years for ANC1 = 355,000/282,667 ≈ 1.26

The national ratios for the other three indicators are calculated similarly. They are 1.34, 0.99 and 1.34 for deliveries, DTP3 and OPD, respectively. The national consistency over time indicator is calculated as the average of the ratios of the four indicators: (1.26 + 1.34 + 0.99 + 1.34)/4 ≈ 1.23. The average ratio of 1.23 shows that there is an overall 23% increase in service outputs for 2011 compared with the average service outputs for the preceding three years across the four indicators.

Table 1 below shows how this indicator is presented sub-nationally. For example, District 2 has an ANC1 ratio of 1.1. The national ratio comparing ANC1 of the current year to the preceding three years is 1.26. The % difference between the District 2 ratio and the national ratio is:


% difference = |District ratio - National ratio| / National ratio X 100

The percentage difference between the district ratio and the national ratio for ANC1 for District 2 is less than 33%. However, there is a difference of approximately 64% between District 3's OPD ratio and the national OPD ratio. To calculate this indicator sub-nationally, all administrative units whose ratios differ from the country's ratio by 33% or more are counted. In the example below, only District 3 has a difference greater than 33%. Therefore, 1 out of 10 districts (10%) has a ratio that is more than 33% different from the national ratio. However, if any one administrative unit differs by more than 33% on more than one indicator, it is still only counted once. For example, if District 3 also had its delivery ratio more than 33% different from the national delivery ratio, the subnational indicator value would still be 1 out of 10 districts (10%); as it is the same district, it is not counted twice. However, if District 10 had a more than 33% difference in its ANC1 ratio compared to the national ratio, there would be 2 separate districts (2/10 or 20%) with greater than a 33% difference between their district and national ratios.

Table 1: Consistency trend: comparison of district ratios to national ratios. A more than 33% difference between the district and national ratio is highlighted in red.

             ANC1              Deliveries        DTP3              OPD
             ratio   % diff    ratio   % diff    ratio   % diff    ratio   % diff
District 1    1.3     0.0%      1.7    21.4%      1.4     7.7%      0.9    18.2%
District 2    1.1    15.4%      1.3     7.1%      1.1    15.4%      0.9    18.2%
District 3    1.6    23.1%      1.7    21.4%      1.5    15.4%      1.8    63.6%
District 4    1.1    15.4%      1.5     7.1%      1.2     7.7%      1.3    18.2%
District 5    1.2     7.7%      1.2    14.3%      1.3     0.0%      1.0     9.1%
District 6    1.5    15.4%      1.6    14.3%      1.5    15.4%      1.3    18.2%
District 7    1.5    15.4%      1.0    28.6%      1.2     7.7%      0.9    18.2%
District 8    1.2     7.7%      1.4     0.0%      1.1    15.4%      1.0     9.1%
District 9    1.1    15.4%      1.5     7.1%      1.3     0.0%      1.0     9.1%
District 10   1.5    15.4%      1.4     0.0%      1.3     0.0%      1.3    18.2%
Nationally    1.30              1.4               1.3               1.1

ratio = ratio of the current year to the mean of the preceding 3 years; % diff = % difference from the national ratio for that indicator.

Table 3 gives the summary results.

Table 3: Example summary results.

Average ratio of events/service outputs for the current year to the mean events/service outputs for the three preceding years (ANC1, Deliveries, DTP3 and OPD): 1.23
Number (%) of districts with at least 33% difference between their ratio and the national ratio across the four indicators: 1 (10%) - District 3
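A small Python sketch of the 2b calculation is shown below (illustrative only; the input numbers are the national example above, and the helper names are not from the tool).

```python
# Sketch of Dimension 2b: consistency over time.
# Annual national ANC1 totals: current year first, then up to 3 preceding years.
anc1_totals = [355_000, 300_000, 288_000, 260_000]

def current_to_past_ratio(series):
    """Ratio of the current year to the mean of up to 3 preceding years."""
    current, *previous = series
    return current / (sum(previous) / len(previous))

anc1_ratio = current_to_past_ratio(anc1_totals)              # ~1.26
# The deliveries, DTP3 and OPD ratios in the example are 1.34, 0.99 and 1.34.
national_indicator = (anc1_ratio + 1.34 + 0.99 + 1.34) / 4   # ~1.23
print(round(anc1_ratio, 2), round(national_indicator, 2))

# Sub-national flag: a district is counted if any of its ratios differs from
# the corresponding national ratio by 33% or more.
def flagged(district_ratio, national_ratio, threshold=0.33):
    return abs(district_ratio - national_ratio) / national_ratio >= threshold

print(flagged(1.8, 1.1))   # District 3's OPD ratio in Table 1 -> True
```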

2c. Internal consistency between indicators - Certain indicators follow similar patterns of behavior in health care in developing countries, such as ANC1 and DTP1. These indicators typically have high coverage, as they are usually the points of entry into the health system for pregnant women and children, respectively. Most pregnant women who seek care during their pregnancy have at least one ANC visit, and most children who seek care in the first year of life have at least one visit to a health facility; we would also expect women who seek care during pregnancy to seek care for their children after birth.

Definition: Internal consistency between indicators is the number (%) of administrative units whose DTP1 to ANC1 ratio is more than 33% different from the national DTP1 to ANC1 ratio.

This indicator is not calculated at the national level. At the sub-national level, the indicator looks at the percentage of administrative units with at least a 33% difference between their DTP1 to ANC1 ratio and the national ratio for the same indicators. We want to identify sub-national units whose relationship between ANC1 and DTP1 is markedly different from the national relationship between ANC1 and DTP1.

Example: Suppose District 1 has a DTP1-ANC1 ratio of 1.08. The national DTP1-ANC1 ratio is 0.88. The percentage difference between the District 1 ratio and the national ratio is computed as follows:

% difference = |District ratio - National ratio| / National ratio X 100 = |1.08 - 0.88| / 0.88 ≈ 23%

Since the percentage difference between the District 1 ratio and the national ratio is less than 33%, it is not flagged. Please see Table 1 below for examples of districts that have a greater than 33% difference between their DTP1-ANC1 ratio and the national DTP1-ANC1 ratio. Table 1: % difference between the district and national ratios. Districts with more than a 33% difference between the district and national ratios are shown in red.

             District ratio   National ratio   % difference
District 1        1.08             0.88             23%
District 2        0.77             0.88             13%
District 3        0.98             0.88             11%
District 4        0.74             0.88             16%
District 5        0.83             0.88              6%
District 6        0.93             0.88              6%
District 7        1.20             0.88             36%
District 8        0.80             0.88              9%
District 9        0.55             0.88             38%
District 10       0.87             0.88              1%

The total number of districts that have more than 33% difference from the national ratio is 2 out of a total of 10 districts, ie. 20%. Table 2 shows the summary results. Table 2: DTP1-ANC1 consistency example summary results.

Year National ANC1-DTP1 consistency Districts with DTP1/ANC1 ratio 33% above national ratio (DTP1 too high) Districts with DTP1/ANC1 ratio 33% below national ratio (DTP1 too low)

0.87 1 (10%) District 7 1 (10%) District 9
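The 2c comparison could be scripted as follows (an illustrative sketch; the district figures are the example ratios from Table 1 above).

```python
# Sketch of Dimension 2c: DTP1-to-ANC1 ratio consistency across districts.
district_ratios = {
    "District 1": 1.08, "District 2": 0.77, "District 3": 0.98, "District 4": 0.74,
    "District 5": 0.83, "District 6": 0.93, "District 7": 1.20, "District 8": 0.80,
    "District 9": 0.55, "District 10": 0.87,
}
NATIONAL_RATIO = 0.88  # national DTP1/ANC1 ratio from the example

too_high = [d for d, r in district_ratios.items()
            if (r - NATIONAL_RATIO) / NATIONAL_RATIO > 0.33]
too_low = [d for d, r in district_ratios.items()
           if (NATIONAL_RATIO - r) / NATIONAL_RATIO > 0.33]

print("DTP1 too high:", too_high)   # ['District 7']
print("DTP1 too low:", too_low)     # ['District 9']
```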

2d. Consistency between DTP1 and DTP3 - DTP1 is the first dose and DTP3 the third dose in the DTP vaccination schedule; the first dose always precedes the third. While it is theoretically possible for the number of DTP third doses to be slightly higher than the number of first doses, for example in administrative units with a lot of in-migration or because of differences in cohort size, it is unlikely to happen systematically. Thus, if DTP3 numbers are higher than DTP1 numbers, this usually indicates a data quality problem.

Definition: Consistency between DTP1 and DTP3 is defined as the total number of DTP3 doses administered divided by the total number of DTP1 doses administered. One would expect the ratio to be 1 or below (the number of DTP1 doses should be equal to or greater than the number of DTP3 doses). At the national level, this indicator is the ratio of the total number of DTP3 doses administered to the total number of DTP1 doses administered. At the sub-national level, the indicator shows the percentage of administrative units whose DTP3 immunization numbers are more than 2% higher than their DTP1 numbers.

Example:

National level: If the total number of national DTP3 doses is 305,000 and the total number of national DTP1 doses is 312,000, the ratio of DTP3 to DTP1 is 305,000/312,000 ≈ 0.98.

Subnational level: If District 3 has DTP1 = 7,682 and DTP3 = 6,978, the percentage difference is (DTP3 - DTP1)/DTP3 = (6,978 - 7,682)/6,978 ≈ -10%. Table 1 below shows the percentage difference between DTP1 and DTP3 for all the districts.

Table 1: % difference between DTP1 and DTP3. The district where DTP3 is more than 2% higher than DTP1 is highlighted in red.

              DTP1     DTP3    % difference
District 1   20995    18080       -16%
District 2   18923    16422       -15%
District 3    7682     6978       -10%
District 4   15669    14151       -11%
District 5    9577    12663        24%
District 6   20233    19960        -1%
District 7   11402     9291       -23%
District 8   12520    10461       -20%
District 9   15984    13930       -15%
District 10  18214    15491       -18%

Table 2 shows the summary results.

Table 2: DTP1-DTP3 consistency example summary results.

National DTP3-to-DTP1 ratio: 0.98
Number (%) of districts where DTP3 is more than 2% greater than DTP1: 1 (10%) - District 5
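A brief Python sketch of the 2d check follows (illustrative only; the data are the example districts above).

```python
# Sketch of Dimension 2d: consistency between DTP1 and DTP3.
doses = {  # district: (DTP1, DTP3)
    "District 1": (20995, 18080), "District 2": (18923, 16422),
    "District 3": (7682, 6978),   "District 4": (15669, 14151),
    "District 5": (9577, 12663),  "District 6": (20233, 19960),
    "District 7": (11402, 9291),  "District 8": (12520, 10461),
    "District 9": (15984, 13930), "District 10": (18214, 15491),
}

national_ratio = 305_000 / 312_000   # DTP3 / DTP1 from the national example, ~0.98

# Flag districts where DTP3 exceeds DTP1 by more than 2%.
flagged = [d for d, (dtp1, dtp3) in doses.items() if dtp3 > 1.02 * dtp1]

print(f"National DTP3/DTP1 ratio: {national_ratio:.2f}")
print("Districts with DTP3 > 2% above DTP1:", flagged)   # ['District 5']
```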

2e. Verification of reporting consistency through facility survey - Data verification refers to the assessment of reporting 'correctness', that is, comparing health facility source documents to HIS reported data to determine the proportion of the reported numbers that can be verified from the source documents. It checks whether the information contained in the source documents has been transmitted correctly to the next higher level of reporting, for each level of reporting, from the health facility level to the national level. This allows the identification of systematic errors that occur in the reporting of data, and of problem districts or health facilities that consistently make errors.

Data verification provides a quantitative measure (the "verification factor") of the proportion of reported events that can be verified using source documents. The verification factor is a summary indicator of the quality of data reporting which measures the extent to which reported results match verified results. It is the ratio of the number of recounted events from source documents to the number of reported events from the reporting forms. The data for this indicator need to be collected separately through additional data collection. Currently, the GAVI DQA and DQS and the Global Fund DQA and RDQA allow the calculation of a verification factor; however, the verification factor calculated from these activities is most often not representative at the national level.

As an example, if a hospital reported 1,794 as the total number of deliveries for the first quarter but a recount from the source documents for the same period showed the total number of deliveries to be 1,695, the verification factor would be 1,695/1,794 ≈ 0.94 (the recounted deliveries divided by the reported deliveries). This calculation is shown for one hospital; when calculating the verification factor for a sample of health facilities, the overall verification factor would use weights (unless a simple random sampling methodology was used) to adjust for differences between the sample and the sampling frame.

A national verification factor of less than one indicates over-reporting: more events were reported than could be verified from the source documents. There are many possible reasons for over-reporting, such as errors in computation when aggregating data and incomplete source documents. Similarly, a verification factor greater than one indicates under-reporting. The verification factor is useful as an indicator of data quality because it is a quantitative measure summarizing the reliability of the reporting system.

Dimension 3: External consistency of population data

The purpose of this dimension is to compare two different sources of population estimates (whose values are calculated differently) to see the level of congruence between the two sources. If the two data sources are discrepant, coverage estimates for a program area can be very different depending on the source used. The higher the level of consistency between denominators from different sources, the more confidence can be placed in the accuracy of the population projections.

3a. Consistency with United Nations (UN) population projection - Denominators are often cited as the leading problem when coverage rates derived from the HMIS are very different from coverage rates derived from household surveys. If denominators from two different sources are very different, this could potentially indicate a problem.
For this indicator, we can compare the denominator (total population of interest) used for one of the four indicators included in the Report Card to the UN population projections. Denominators used to calculate rates and ratios are usually derived from the census or the civil registration system. Denominators from the census are usually population projections based on estimates of natural growth and migration. The most common denominator used for calculating ANC rates and delivery rates is the total number of live births in a specified period of time; for immunization it is the total number of surviving infants (total live births adjusted for infant mortality); and for OPD it is the total population. Comparable denominators available from UN projections are births and total population. The user can compare either births or total population figures from their country projections to the UN projections, based on availability.

Definition: Consistency with United Nations (UN) population projection is defined as the ratio of the official country projection for live births for the year of interest divided by the UN official projection for live births for the same year. This indicator is not calculated at the subnational level; at the national level it is as defined above.

Example: If the official live birth estimate for the year of analysis is 255,000 and the UN projection is 200,000, the ratio of the country population projection to the UN population projection is 255,000/200,000 ≈ 1.28. This ratio shows that the country population projection for live births is higher than the UN population projection for the same year.

3b.1. Consistency of denominator (for pregnant women) – Population denominators used to calculate coverage rates from facility data (usually projections from the national bureau of statistics) can be compared to denominators derived from alternative data sources. This indicator compares the existing estimate of the number of pregnant women to an alternate estimate derived from survey data. Two pieces of information are necessary to create this alternate denominator estimate: (1) an estimate of coverage for a specific intervention at the national and sub-national levels (if available) from survey data; and (2) numerator estimates for the indicator from facility data.

One condition needs to be met to calculate a robust alternate denominator: confidence in the quality of the numerator data for the selected indicator, for example:
1. no more than 20% of the data are missing or have zero values;
2. no more than 5% of the values are outliers.
An additional preferred characteristic of the indicator is a high level of coverage and relatively little variability across the country. Examples of indicators that usually fit these criteria are the first antenatal visit or a first vaccination (BCG or DTP1).

Definition: Consistency of denominator (for pregnant women) is defined as the ratio of the official denominator estimate for pregnant women divided by an alternative denominator for the number of pregnant women (calculated by dividing total ANC1 events from the HMIS by the ANC1 coverage rate estimated from the most recent population-based survey). At the national level this indicator is as defined above: a ratio of two denominators.

At the subnational level (e.g. district or province), the ratio of denominators is calculated for each administrative unit. Any administrative unit with at least a 33% difference between the two denominators is flagged, and the number and percentage of administrative units with at least a 33% difference is calculated. This comparison is only possible if survey coverage estimates for the indicator are available at the same administrative level. For example, if the administrative unit of analysis is the district but survey coverage rates for the indicator are not available at district level, this sub-national comparison will not be possible at district level; however, if province- or regional-level survey data are available, the comparison can be done at the province level.

Example: Table 1 below provides an example of how this indicator is calculated at the national and subnational levels.

Table 1: Comparison of official versus alternate denominators, flagging districts with a discrepancy of at least 33% between the two (marked with *)

District       ANC1 visits   Survey coverage   Official denominator   Alternate denominator   Ratio (official/alternate)   % difference
District 1     27,825        95.0%             26,557                 29,289                  0.91                         9%
District 2     18,161        97.3%             19,896                 18,665                  1.07                         7%
District 3 *   9,470         90.4%             6,790                  10,476                  0.65                         35%
District 4     14,351        94.9%             18,788                 15,122                  1.24                         24%
District 5     13,240        80.3%             13,784                 16,488                  0.84                         16%
District 6     24,639        97.6%             26,412                 25,245                  1.05                         5%
District 7     11,422        85.9%             13,533                 13,297                  1.02                         2%
District 8     14,276        97.6%             12,923                 14,627                  0.88                         12%
District 9 *   15,318        91.9%             11,117                 16,668                  0.67                         33%
District 10    20,555        78.8%             22,477                 26,085                  0.86                         14%
National       169,257       93%               172,277                181,997                 0.9466                       5%

* At least 33% difference between the official and alternate denominators.

At the national level:
- The total number of ANC1 visits from the health facility data for the year 2011 is 169,257.
- Coverage for ANC1 from the most recent population-based survey is 93%.
Alternate ANC denominator = 169,257 / 0.93 = 181,997.
The reason for calculating an alternate denominator in this way is that, if the numerator value from the facility data is accurate and the survey estimate is reliable, the given numerator of 169,257 women making a first ANC visit should be 93% of the population of pregnant women. Dividing the numerator by 93% therefore gives an alternate population denominator to which the official denominator can be compared.
- The official denominator used for calculating the ANC1 coverage rate is 172,277.

The ratio of the two denominators: 172,277 / 181,997 = 0.95.

Example (cont.): If the ratio is 1, the two denominators are exactly the same. If the ratio is greater than 1, the official denominator is higher than the alternate denominator, which means that the survey coverage rate is higher than the HMIS coverage rate. If the ratio is less than 1, the alternate denominator is higher than the official denominator, which means that the HMIS coverage rate is higher than the survey coverage rate. The ratio of 0.95 shows that the two denominator values are fairly similar to each other, with approximately a 5% difference between the two values.

At the subnational level, the ratio of denominators is calculated for each administrative unit. Districts with at least a 33% difference between their two denominators are flagged. In Table 1 above, District 3 and District 9 have at least a 33% difference between their two denominators. Table 2 below shows the summary results.

Table 2: Official versus alternate denominator consistency, example summary results for 2011
National ANC1 denominator consistency ratio: 0.946
Districts with a consistency ratio of 0.67 or below (official denominator lower, so the HMIS coverage rate is higher than the survey rate): 2 (20%) – District 3, District 9
Districts with a consistency ratio of 1.33 or above (official denominator higher, so the survey coverage rate is higher than the HMIS rate): 0 (0%)
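The sketch below reproduces the arithmetic behind Tables 1 and 2 for the national totals and for District 3. It is only an illustration of the calculation, not the DQA tool itself, and the function names are ours:

# Python sketch: denominator consistency ratio and district flagging,
# using figures from Table 1 above.
def alternate_denominator(anc1_events, survey_coverage):
    # Alternate denominator = HMIS ANC1 events / survey ANC1 coverage
    return anc1_events / survey_coverage

def consistency_ratio(official_denominator, anc1_events, survey_coverage):
    # Ratio of the official to the alternate denominator
    return official_denominator / alternate_denominator(anc1_events, survey_coverage)

def flagged(ratio, threshold=0.33):
    # Flag administrative units with at least a 33% difference between the two denominators
    return abs(1 - ratio) >= threshold

# National level: 169,257 ANC1 visits, 93% survey coverage, official denominator 172,277
national = consistency_ratio(172_277, 169_257, 0.93)
print(round(national, 2), flagged(national))    # about 0.95, not flagged

# District 3: 9,470 ANC1 visits, 90.4% survey coverage, official denominator 6,790
district3 = consistency_ratio(6_790, 9_470, 0.904)
print(round(district3, 2), flagged(district3))  # about 0.65, flagged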

4b. External comparison of deliveries – This indicator is calculated in exactly the same way as the previous two, but for deliveries. Deliveries often have a much larger private-sector component; if the private sector does not report through the HMIS, the comparison should focus on the public-sector data from the survey.

Definition: External comparison of deliveries is defined as the coverage rate for deliveries based on the facility reports divided by the coverage rate for deliveries based on household survey data. At the national level this indicator is as defined above: a ratio of the two delivery coverage rates (HMIS and survey) at the national level.

At the subnational level (e.g. district or province), the ratio of the delivery coverage rates (HMIS and survey) is calculated for each administrative unit. Any administrative unit with at least a 33% difference between the two coverage rates is flagged, and the number and percentage of administrative units with at least a 33% difference is calculated. This comparison is only possible if survey coverage estimates for the indicator are available at the same administrative level. For example, if the administrative unit of analysis is the district but survey coverage rates for the indicator are not available at district level, this sub-national comparison will not be possible at district level; however, if province- or regional-level survey data are available, the comparison can be done at the province level.

Example: Please see the example for 4a above, which compares HMIS coverage rates to survey coverage rates for ANC1. The comparison of the delivery HMIS and survey coverage rates is done in exactly the same way.

4c. External comparison of DTP3 – The comparison of the DTP3 coverage rate from facility data to the DTP3 coverage rate from survey data is calculated in exactly the same way as the comparison of ANC1 rates from facility data to survey-based coverage rates.

Definition: External comparison of DTP3 is defined as the coverage rate for DTP3 based on the facility reports divided by the coverage rate for DTP3 based on household survey data. At the national level this indicator is as defined above: a ratio of the two DTP3 coverage rates (HMIS and survey) at the national level.

At the subnational level (e.g. district or province), the ratio of the DTP3 coverage rates (HMIS and survey) is calculated for each administrative unit. Any administrative unit with at least a 33% difference between the two coverage rates is flagged, and the number and percentage of administrative units with at least a 33% difference is calculated. This comparison is only possible if survey coverage estimates for the indicator are available at the same administrative level. For example, if the administrative unit of analysis is the district but survey coverage rates for the indicator are not available at district level, this sub-national comparison will not be possible at district level; however, if province- or regional-level survey data are available, the comparison can be done at the province level.

Example: Please see the example for 4a above, which compares HMIS coverage rates to survey coverage rates for ANC1. The comparison of the DTP3 HMIS and survey coverage rates is done in exactly the same way.
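The same calculation pattern applies to the external comparisons in 4a, 4b and 4c. A minimal sketch, assuming hypothetical district coverage figures (the values and function names are illustrative only, not part of the DQRC tool):

# Python sketch: external comparison of an HMIS-based coverage rate with a
# survey-based coverage rate, using the same +/-33% flagging rule.
def external_consistency_ratio(hmis_coverage, survey_coverage):
    # Ratio > 1: HMIS-based coverage is higher; ratio < 1: survey coverage is higher
    return hmis_coverage / survey_coverage

def flagged(ratio, threshold=0.33):
    return abs(1 - ratio) >= threshold

# Hypothetical district: facility-based DTP3 coverage of 98%, survey coverage of 87%
ratio = external_consistency_ratio(0.98, 0.87)
print(round(ratio, 2), flagged(ratio))  # 1.13, not flagged

# Hypothetical district with a large discrepancy: 125% vs 85%
ratio = external_consistency_ratio(1.25, 0.85)
print(round(ratio, 2), flagged(ratio))  # 1.47, flagged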
