Families USA: The Voice for Health Care Consumers
    



A Guide to Monitoring Medicaid Managed Care

Chapter 5: 
Health Care Utilization and Quality

Managed care at its best can bring about more appropriate utilization of services: greater use of primary care, fewer non-emergency visits to emergency rooms, and reduced hospital admissions for controllable conditions. At its worst, managed care may underserve its members and provide poor quality care. To evaluate whether managed care enrollees receive appropriate, high-quality care, policymakers rely on several types of data regarding health care service use:

  • Encounter data record the health care services that are provided to each individual patient. 
  • Utilization data show what services are provided to a population. States may calculate utilization data by aggregating encounter data. 
  • Clinical performance measures, such as those in the Health Plan Employer Data and Information Set (HEDIS®), compare care received by people with given health conditions to care recommended under practice guidelines. A few measures, such as low birth-weight, provide information about the health outcomes of a population, but, for the most part, performance measures provide information about the process of care. 
  • Measures of consumer satisfaction, such as those collected through the Consumer Assessment of Health Plans (CAHPS®), show how consumers rate the quality of their health care, whether they are able to communicate with their health providers, and whether they are readily able to obtain the health care services they need. 
  • Early and Periodic Screening, Diagnosis, and Treatment (EPSDT) data show whether children receive check-ups, health tests, immunizations, and treatment.

Unfortunately, because little data have been collected about the quality of services under fee-for-service Medicaid, advocates cannot readily evaluate how managed care alters health care services. One of the few types of health care services data that states collect under both fee-for-service and managed care programs concerns Early and Periodic Screening, Diagnosis, and Treatment (EPSDT). Medicaid programs must provide check-ups to children and treatment when problems are identified. State Medicaid agencies report to the federal government on the proportion of eligible children who receive these EPSDT services. While not perfect, EPSDT data can help advocates track changes in the quality of preventive care to children.

CASE STUDIES: CONNECTICUT AND TEXAS
How Have Advocates Examined Utilization and Encounter Data?

The Children's Health Council in Connecticut identified inadequate utilization of dental care as a long-standing problem for children enrolled in Medicaid. In 1998, the Connecticut Children's Health Project used encounter data to determine the scope of the problem for children enrolled in Medicaid managed care. They gathered encounter data from managed care plans for a sample of children who were continuously enrolled in Medicaid for one year. If the data showed that a child had received one of fourteen preventive dental services, the child was counted as having received preventive dental care during the period. The study showed that two-thirds of children received no preventive dental care during the year. Utilization rates were lowest among adolescents, and rates varied by race/ethnicity, county, and managed care plan. Three managed care plans had particularly low utilization rates.

**********

In 1999, the Consumers Union Southwest Regional Office examined two publicly available reports calling attention to gaps in Texas's oversight of service use under Medicaid managed care. In one, the Texas Department of Health reported to the state legislature that emergency room and hospital visits were reduced under managed care and that average hospital stays were shorter. In another report, an external quality review organization, Texas Health Quality Alliance (THQA), found that utilization data furnished by plans were incomplete and extremely unreliable. When THQA asked plans for the underlying data used to calculate utilization rates, some plans could not supply them, and THQA could not replicate the reported rates from the data that other plans did provide. Under managed care, hospitals could be paid on a per diem or a per diagnosis basis, and administrative records about "encounters" did not match encounters shown through patient medical records. Consumers Union wrote, "Wherever Texas Department of Health evaluates changes in the use of services due to managed care and bases its information on Utilization Management reports, this data cannot be verified and does not present an accurate picture of actual services rendered." Consumers Union's report helped spur the Texas legislature to commission a fuller evaluation of the effects of Medicaid managed care.

Sources: Connecticut Children's Health Project, "Utilization of Preventive Dental Services By Children Enrolled in Medicaid Managed Care," Hartford, CT, March 31, 1998; Consumers Union Southwest Regional Office, Looking Back at the Promises of Medicaid Managed Care (Austin, TX: Consumers Union, April, 1999); information from Lisa McGiffert, Consumers Union, January 2000.

Encounter Data

a. What are encounter data?

An encounter is a record of a single "face-to-face" delivery of health service to a patient on a given date.* Encounter data document the patient's diagnoses and all of the services a provider rendered to the patient during the visit. Encounter data should reflect services such as lab work, transportation, and personal care in addition to services performed in a physician's office or hospital. In managed care, unlike in traditional fee-for-service, one cannot look at individual billing records to analyze what care was given. Managed care organizations (and sometimes their subcontracting providers) are reimbursed via a monthly premium for each member; they are not paid for individual services. In order to collect encounter data, managed care organizations must require their providers to submit records of the services they perform. These may take the form of "shadow claims" that are similar to fee-for-service billing records.

b. Why are encounter data important?

Encounter data can identify patterns of service use by individual beneficiaries, patterns of care for persons with particular diagnoses, and patterns of services provided by individual plans and providers. States and managed care organizations aggregate encounter data in order to calculate utilization data and clinical performance measures, described below. If encounter data are inaccurate or incomplete, efforts to evaluate service use and clinical performance will be undermined.

c. What are the problems with encounter data?

For a number of reasons, states cannot assume that the encounter data reported by managed care plans are completely accurate. Experts generally acknowledge that when health plans enter Medicaid managed care, there is a delay before administrative problems in collecting and reporting the data are resolved. Providers in a health plan's network may not send their data in on time, or the data may be inaccurate or incomplete. Since providers are not paid on the basis of individual services, they have little incentive to devote time to ensuring that this type of record-keeping is accurate. 

Providers may also have different ideas about how to report the data, particularly if Medicaid agencies and managed care organizations have not clearly defined how different medical procedures should be coded. For example, in one state, some providers coded EPSDT exams as "preventive medicine" while others coded the exams as "evaluation and management." Until the state rectified these coding problems, its tallies of EPSDT services were incomplete.1 Similarly, states may not clearly define what to record as an "encounter," so one plan's single encounter may be another plan's multiple encounters.

Consider this example: Two physicians from two different HMOs perform the same number of obstetrics services in a given year. A physician from HMO A only reports the delivery, not information about prenatal visits. Meanwhile, a physician at HMO B reports the delivery as well as each prenatal visit. Though both physicians have performed the same number of services, the physician from HMO B will have reported many more encounters.

Finally, there may be problems with computer systems or with the way computers in managed care plans link to Medicaid agency computer systems that confound encounter data analysis. 

States can improve the accuracy and completeness of encounter data by issuing clear instructions about encounter data collection, holding managed care plans accountable for the provision of encounter data through contract provisions and sanctions, working to resolve computer linkage problems, and periodically verifying the accuracy of encounter data. Medical records audits can validate the reported encounter data by comparing them to the information in the medical records.

d. What are the federal and state requirements for collecting and reporting encounter data?

Under federal law, Medicaid managed care contractors must collect and report encounter data. The Health Insurance Portability and Accountability Act (HIPAA) will standardize the manner in which all health insurers (Medicaid, Medicare, and commercial) code and submit electronic data, including encounter data, by 2002.  

A Nationwide Study of Medicaid Managed Care Contracts, by Sara Rosenbaum, et al.,  provides a good starting point for researching your state's reporting requirements for Medicaid managed care encounter data. The study indicates which states sanction plans for incomplete or inaccurate data submissions. This study is updated regularly online. State Medicaid agencies may also provide detailed reporting requirements through procedural guides or other administrative memoranda to managed care plans.

Utilization Data

a. What are utilization data?

While encounter data measure the services provided to each individual enrollee, utilization data measure the quantity of health care services provided to a population. Health care services include visits to the doctor, hospital stays, emergency room visits, laboratory tests, etc. States and managed care plans may aggregate encounter data submitted by managed care plans in order to compute utilization data, or they may have other medical or administrative records that enable them to quantify services provided to a population.

b. Why analyze utilization data?

One premise of managed care is that it will promote use of primary and preventive services. Policymakers have hoped that managed care would also reduce inappropriate emergency room use. Utilization data show the amounts of primary, specialty, acute, and emergency care services actually delivered by managed care plans. The federal government and the states use Medicaid managed care utilization data to monitor whether plans provide appropriate care, to compare service use under managed care and fee-for-service Medicaid, and to help set future capitation rates. 

All managed care organizations use certain techniques to control costs. One common technique is "utilization management and review." This generally refers to several practices used by managed care organizations to control members' use of health care services, such as using gatekeepers, instituting pre-authorization procedures, and relying on standards or protocols to determine when care will be approved or denied. Some of the impact of the policies instituted as a result of utilization management and review practices can be examined through an analysis of utilization data.

Utilization data can:

1) Help to show whether changes in the health care delivery system result in changes in Medicaid beneficiaries' use of health services. For example, when people enroll in managed care, do they really use more primary care and make fewer visits to hospital emergency rooms? Does the addition of care coordinators in managed care plans lead to an increase in enrollees' use of outpatient mental health services? 

2) Identify plans that are providing too little care (as well as those providing adequate care); 

3) Identify plans with management problems; 

4) Reveal patterns of care over time within a plan or within the state's health care industry; 

5) Reveal problems with the way data are reported by health care plans; and 

6) Indicate differences in treatment of Medicaid enrollees as compared to other enrollees.

CASE STUDY: COLORADO
State Auditor Questions Low Utilization of Mental Health Services

In 1998, the Colorado Office of the State Auditor used utilization data in an analysis of the cost-effectiveness of Colorado's Medicaid Mental Health program. The Auditor found that before capitation, the percentage of Medicaid beneficiaries receiving mental health services was increasing by over 6 percent a year; after capitation, the percentage served declined by about 1 percent a year. Between 1992 and 1997, under capitation, services per person decreased from 48 services to 36 services. During this time period, costs per beneficiary increased. The Auditor concluded that Colorado was "now paying more to provide fewer mental health services" and issued a series of recommendations for analyzing payment rates, ensuring the cost-effectiveness of the program, and establishing performance-based payments to providers.

Source: "Capitated Medicaid Mental Health Program Costs State More Than Fee-for-Service," BNA's Health Care Policy Report, November 16, 1998.

c. What are the federal requirements for reporting utilization data?

Under federal law and regulations, state Medicaid agencies must have procedures in place to safeguard against unnecessary utilization of services and to assure quality care. To detect patterns of inappropriate care, impartial, professional personnel must screen admissions to hospitals and other institutions, and states must examine sample data on admissions and length of hospital stays. States that mandate enrollment of Medicaid beneficiaries into managed care must have procedures, including procedures for the collection and review of data, to monitor the appropriateness of care furnished to enrollees.

HCFA has further specified requirements for utilization data in some states that use Section 1115 waivers or Section 1915(b) waivers to mandate enrollment in Medicaid managed care. Through terms and conditions on waivers, HCFA has required states to provide data such as the number of primary care provider encounters, number of referrals to specialists, number of hospital discharges/1000 enrollees, number of paid hospital days/1000 enrollees, and number of emergency room approvals and denials.

d. What are the state requirements for reporting utilization data?

State requirements for reporting utilization data may be found in:

1) Medicaid managed care contracts; 

2) State laws and regulations governing Medicaid managed care; and 

3) Laws and regulations governing all licensed managed care plans.

Through their Medicaid managed care contracts, states commonly require data on utilization of physician visits (primary care, specialty care) and inpatient hospital services (number of discharges, average length of stay, emergency room visits, etc.). Some states collect more detailed information, such as data on the use of various types of surgical procedures, use of various types of specialty visits, numbers of prescription drugs dispensed, etc.

To research your state's requirements for Medicaid managed care utilization data in relation to other states' requirements, see A Nationwide Study of Medicaid Managed Care Contracts, by Sara Rosenbaum, et al. Also ask your state Medicaid agency whether it has administrative memoranda or procedural guides that detail reporting requirements for managed care plans. Some states require plans to follow HEDIS® guidelines, described later in this chapter, for reporting utilization data.

To find out whether all licensed managed care plans (Medicaid, Medicare, and commercial) provide utilization data in your state, you must look at licensure laws and regulations and contact your state insurance commissioner. Licensed HMOs may need to report data to a state Department of Corporations, Department of Health, or another agency that oversees managed care. While data provided for all managed care plans may not be as detailed as that provided for Medicaid managed care, they may enable you to compare the utilization patterns of Medicaid beneficiaries and privately insured managed care enrollees. In some states, the primary role of the insurance commissioner is to oversee the solvency of managed care plans. Find out whether the utilization data provided to the insurance commissioner is accurate (for example, has it been audited?), comparable across plans, and detailed enough for your purposes.

e. What are the problems with utilization data?

Utilization data that are based on aggregated encounter data will be inaccurate if the underlying encounter data are flawed. Find out how your state collects various measures of utilization. You might find that some measures are more accurate than others because of collection methods. For example, counts of hospital stays might be more accurate than counts of physician visits due to differences in the ways hospitals and physicians are paid and maintain administrative data. The most accurate comparison of utilization data should take into account the differences in case mix among the plans (e.g., differences in the age, gender, and health status of those enrolled in each plan). Where the analysis of the utilization data shows significant differences among plans, differences among the populations served by the plans may be part of the reason (e.g., an older, sicker population will reasonably have more encounters with the health plan).

f. How do you examine utilization data?

In many cases, state agencies report utilization data that they have already analyzed. That is, the Medicaid agency determines the average rate of use of services by Medicaid managed care enrollees or the average number of hospital days per 1000 members. The state may use this information to compare hospital use under managed care and fee-for-service or to report utilization trends over time. Advocates' task may be to highlight findings of a state report for the general public or to tell the public if the state does not seem to be drawing valid conclusions from the data. For example, if outpatient mental health services decline and hospitalizations for mental health increase under a behavioral health organization, advocates might sound an alarm that the behavioral health organization is not effectively managing care. If a state boasts that managed care has appropriately shortened hospital stays and reduced admissions, advocates may want to ascertain that this change was indeed attributable to managed care, rather than an overall trend that is mirrored in fee-for-service, and that the state relied on accurate data to make this claim. Look for analyses of utilization data in:

  • External quality review reports; 
  • Reports from the Medicaid agency to the state legislature; 
  • Regular reports that the Medicaid agency might generate for its internal use, other state agencies, HCFA, or its Medical Advisory Committee; and 
  • Evaluations of Section 1115 or Section 1915(b) waivers.

If your state does not provide data on utilization rates but does furnish aggregate data on encounters (total number of physician visits, hospital admissions, etc. provided by a managed care plan), you can calculate utilization rates as follows:

Step One: Take the aggregate utilization figure for the plan and population you are examining. For example, in Plan A, Medicaid enrollees had 120,000 physician encounters in a year.

Step Two: Divide this figure by the number of Medicaid enrollees in the plan to get an average utilization rate. (You can repeat this analysis for commercially insured enrollees.)

Divide aggregate utilization figures by the average monthly Medicaid enrollment (rather than the number of Medicaid enrollees in the plan at one particular time), since enrollment figures fluctuate throughout the year. To get average Medicaid enrollment, add up the number of Medicaid beneficiaries who were enrolled in the plan each month of the year, and then divide this total by 12.

Example: An average of 20,000 Medicaid beneficiaries were enrolled in Plan A each month. Since these enrollees had 120,000 physician encounters in the year, the utilization rate was six physician visits per member per year.

For most services, utilization rates are expressed as per member averages. For hospital services, utilization rates are usually expressed as visits per 1000 members.

Step Three: If you are trying to compare utilization rates among plans, look for rates that fall far outside the norm among all the plans that you are examining. Weighted averages, which take into account the differences in the total number of enrollees per plan, will help you to do this. A standard average gives a very small plan with only a few members the same weight as a large plan with many members. A weighted average offsets this bias by counting the utilization rate of a small plan in proportion to its size relative to the other plans. See Chapter Appendix 1 for instructions on computing weighted averages.
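The arithmetic in Steps One through Three can be sketched in a few lines of code. This is a minimal illustration, not a prescribed method: all plan names and figures below are hypothetical, except that Plan A reuses the 20,000-member, 120,000-encounter example from the text.

```python
# Sketch of the utilization-rate arithmetic described above.
# Plan names and figures are hypothetical illustrations.

# Monthly Medicaid enrollment for Plan A (12 months)
monthly_enrollment = [19500, 19800, 20100, 20200, 20000, 19900,
                      20100, 20300, 20000, 19800, 20100, 20200]

# Average monthly enrollment = sum of monthly counts / 12
avg_enrollment = sum(monthly_enrollment) / 12

# Aggregate encounters reported for the year (Step One)
physician_encounters = 120_000

# Step Two: per-member utilization rate
rate_plan_a = physician_encounters / avg_enrollment
print(f"Plan A: {rate_plan_a:.1f} physician visits per member per year")

# Step Three: a weighted average across plans counts each plan's
# rate in proportion to its enrollment, so a 5,000-member plan
# does not pull the average as hard as a 35,000-member plan.
plans = {
    "Plan A": (avg_enrollment, rate_plan_a),
    "Plan B": (5_000, 4.2),   # hypothetical small plan
    "Plan C": (35_000, 5.5),  # hypothetical large plan
}
total_members = sum(members for members, _ in plans.values())
weighted_avg = sum(members * rate
                   for members, rate in plans.values()) / total_members
print(f"Weighted average across plans: {weighted_avg:.1f} visits per member")
```

Note that an unweighted average of the three rates (6.0, 4.2, and 5.5) would overstate the influence of the small Plan B; the enrollment-weighted figure is the one to compare individual plans against.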

Interpreting Utilization Data

a. What can utilization data tell you?

Differences in utilization rates between plans may indicate a number of things: for example, differences in health status of the enrolled populations, differences in length of enrollment, differences or errors in data reporting, and over-utilization or underutilization of services. Utilization rates that differ significantly from the average should be investigated further. Looking at utilization rates is a comparative process, and a good analysis requires you to investigate the potential reasons for the differences.

b. Utilization data can signal the following:

1) Plans may be providing too little care.

Unusually high or low rates may signal that a plan is providing an inappropriate level of service. Since most managed care organizations assume financial risk for the cost of care, they have an incentive to reduce care because the payment per patient remains the same, regardless of the level of care. Advocates should monitor managed care organizations by watching out for plans with unusually low utilization rates.

Once you identify plans with low utilization rates, take the analysis a step further. Does the low utilization indicate a problem for enrollees? The factors to consider include:

  • Whether the differences are statistically significant.
  • Whether differences in the population served by a plan may account for differences in utilization. Is the plan's population younger on average than the others to which you are comparing it? Younger adults tend to use fewer health care services than older adults. On the other hand, pregnant women and infants use more health care services than healthy populations of other age groups. Is the plan's population healthier than others in the comparison group? If other plans have higher rates of members with chronic illnesses, their utilization rates should be higher.
  • Whether low utilization rates reflect access problems for a population. For example, does the plan serve a more rural population or does it have a higher percentage of members for whom English is not a primary language? A low utilization rate among such plans may indicate possible access problems. These differences in enrollee population should be noted for further investigation.
  • Whether differences emerge in utilization rates between Medicaid and commercial enrollees. Do the Medicaid enrollees have significantly different utilization rates?
  • If you are looking at differences in the average length of stay or hospitalization rates, whether the plan has a dramatically more successful prevention and follow-up program that may account for lower utilization rates.

2) Plans may have management problems.

Since it is in a plan's financial self-interest to correct overly high utilization rates, this is not generally an area that advocates need to investigate further. On the other hand, plans with management problems may present other concerns: for example, plans that allow unreasonable or fraudulent claims to slip through may be doing little to oversee or improve the quality of care.

3) Patterns of care may have changed over time or within the state's health care industry.

A reduction in utilization rates over time may be significant. Whether the pattern is found in an individual plan or is part of a statewide trend, changes in utilization over time may indicate changes in medical practice that are beneficial to consumers, or they may indicate that plans are simply providing fewer services. For example, a decline in hospital stays might result from consumers getting better preventive and ambulatory care, or it might be a result of problems encountered when accessing hospital services.

4) Significant data reporting problems.

Effective monitoring requires reliable data. Advocates can push agencies with oversight responsibilities to improve the quality of available data by publishing analyses that highlight data deficiencies. The report by Consumers Union Southwest Regional Office, cited in a box at the beginning of this chapter, is an example of such an analysis.

5) Differences in treatment of Medicaid enrollees as compared to other enrollees.

Differences in utilization rates may indicate problems in access for Medicaid enrollees within the managed care organization in question. For example, suppose privately insured children in a particular managed care organization receive more dental care than children with Medicaid coverage. Perhaps few dentists accept Medicaid, or perhaps families do not know that their children's dental care is covered. Advocates might turn to data on access (see Chapter 4) to further explore reasons for low utilization.

c. What can you do with the data?

You can use the data that you have analyzed in several ways. Consumer education is a key part of the managed care advocacy effort. As hospitals and managed care organizations begin to release so-called quality-of-care report cards, information on how to interpret the utilization data will be an important tool to assist consumers in making informed managed care decisions. As managed care evolves, data from the utilization analyses can be used to support policy advocacy work with the state or managed care organizations.

Clinical Performance Measures

A performance measure is a tool to determine whether health care providers and health plans rendered appropriate services to patients. Clinical performance measures are based on practice guidelines. That is, for many health conditions, scientific research shows that a given course of treatment is effective. Performance measures show the percentage of people with a health condition who received a service (or set of services) in accordance with a practice guideline. Performance measures also show the proportion of a population that received recommended preventive treatment. For example, a performance measure might calculate the proportion of infants who received recommended immunizations or the proportion of heart attack patients who received beta-blocker treatment. Scores on clinical performance measures are often reflected in external quality reviews and in plans' internal quality studies.

To calculate a performance measure, reviewers use data about managed care plan members' ages or diagnoses to draw a sample of like members. Reviewers can examine actual medical records or use the encounter data compiled by managed care plans to determine the proportion of members who received appropriate treatment. Health care researchers have developed many performance measures. Over 1,000 are catalogued in a database called CONQUEST, maintained by the Agency for Healthcare Research and Quality. The most widely used set of clinical performance measures for managed care plans is contained in the Health Plan Employer Data and Information Set (HEDIS®) "effectiveness of care" domain.

CASE STUDY: MARYLAND
Medicaid HMOs Fail Audit of Performance

All of Maryland's health maintenance organizations failed the first external evaluation of care to Medicaid beneficiaries under the state's mandatory managed care program, HealthChoices. In an audit covering the period from July to December 1997, none of the nine HMOs in HealthChoices passed the tests for adequacy of care for diabetics and for new enrollees. Only two HMOs met the standards for prenatal care. Substance-abuse screening was virtually nonexistent. In an interview with the Washington Post, Maryland Health Secretary Martin Wasserman said, "It doesn't alarm me. The managed care organizations know they had learning curve problems, and the study substantiated that. But we're in a much different place today. . . . We will use this study to point out to them that you can't continue to have these kinds of findings." Record-keeping problems contributed to the poor scores: HMOs were unable to furnish about 1,000 of the 6,000 medical records requested by the external reviewer.

Source: Avram Goldstein, "Maryland Medicaid HMOs Fail Audit of Service," Washington Post, Wednesday, January 13, 1999, page B 8. 

As described in Chapter 2, HEDIS® is a set of standardized performance measures for managed care plans, developed by the National Committee for Quality Assurance (NCQA). The effectiveness of care domain contains measures of clinical performance. HEDIS® explains how each measure is calculated. A "childhood immunization status" measure calculates, among children who turned two years old during the reporting year and were continuously enrolled in the plan for 12 months, the percentage who received specified immunizations. (For a list of HEDIS® measures, see Chapter 2 Appendix.) Plans that serve both publicly and privately insured enrollees are supposed to calculate each measure separately for their privately insured members, Medicaid members, and Medicare members. 
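The immunization measure just described reduces to a numerator-over-denominator calculation: identify the eligible children, then count how many received the recommended care. The sketch below uses invented member records, and the field names are illustrative simplifications, not HEDIS® specifications; a real calculation would draw on plan enrollment files and encounter or medical-record data.

```python
# Sketch of a HEDIS-style "childhood immunization status" rate.
# Member records are hypothetical; fields are simplified stand-ins
# for enrollment and immunization history data.

members = [
    {"turned_two": True,  "months_enrolled": 12, "immunized": True},
    {"turned_two": True,  "months_enrolled": 12, "immunized": False},
    {"turned_two": True,  "months_enrolled": 7,  "immunized": True},
    # ^ excluded: not continuously enrolled for 12 months
    {"turned_two": False, "months_enrolled": 12, "immunized": True},
    # ^ excluded: did not turn two during the reporting year
    {"turned_two": True,  "months_enrolled": 12, "immunized": True},
]

# Denominator: children who turned two during the reporting year
# and were continuously enrolled in the plan for 12 months
denominator = [m for m in members
               if m["turned_two"] and m["months_enrolled"] == 12]

# Numerator: those in the denominator who received the
# specified immunizations
numerator = [m for m in denominator if m["immunized"]]

rate = 100 * len(numerator) / len(denominator)
print(f"Childhood immunization status: {rate:.0f}%")  # prints 67%
```

The exclusions matter: a child who was immunized but enrolled for only part of the year drops out of the denominator entirely, which is why (as noted below) short enrollment periods among Medicaid beneficiaries limit what these measures can capture.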

Keep several caveats in mind when you look at HEDIS® data. First, unaudited data reported by plans may not be accurate or complete. Check to see whether plans in your state report audited data, which are more reliable. Second, especially when they are new to managed care, plans have a limited capacity to collect data and report HEDIS® data. States and plans often select only some HEDIS® measures for reporting. Third, measures may not yet have been developed for a population you want to study. Very few measures, for example, capture information about the quality of care for persons with disabilities. Furthermore, most measures look at the process of care (did patients hospitalized for mental illness receive follow-up care?) rather than health outcomes (did patients' health improve after treatment?). Fourth, many measures are only relevant when members have remained in a health plan for a period of time. Medicaid beneficiaries frequently stay in health plans for short periods of time. Fifth, while the data might be useful in comparing plans with each other, HEDIS® cannot tell you how well plans should perform. 

Advocates should urge their states to consider not only average performance but also optimal performance in setting standards for Medicaid managed care plans. Thus, states should consider setting standards based on health goals such as those in "Healthy People 2010," the national health objectives, and should consider setting certain standards based on levels that the highest-performing plans were able to achieve. On the other hand, advocates and states should ensure that standards and benchmarks are reasonable for plans that serve high-risk populations. For example, plans that serve many high-risk mothers will likely show high percentages of low birth-weight deliveries. NCQA's website provides information about commercial plans' scores on selected HEDIS® measures by region (Final HEDIS® Effectiveness of Care Benchmarks and Thresholds for Accreditation '99). In February 2000, for the first time, the American Public Human Services Association and the Commonwealth Fund provided national data on Medicaid HEDIS® scores. Eighteen states and 112 managed care plans contributed data about their 1997 HEDIS® scores. The results are published in Lee Partridge and Carrie Szlyk, National Medicaid HEDIS® Database/Benchmark Project: Pilot Year Experience and Benchmark Results (New York, NY: Commonwealth Fund, 2000).

Consumers' Assessment of Care

Consumer surveys can provide information on consumers' own assessment of the quality of care they receive. The Consumer Assessment of Health Plans (CAHPS®) is a survey instrument developed for national use. The National Committee for Quality Assurance includes CAHPS® as the HEDIS® "Satisfaction" domain measurement tool, but CAHPS® can also be used by plans that do not report HEDIS®. CAHPS® questionnaires are publicly available on the Agency for Healthcare Research and Quality website. CAHPS® includes questions both about consumers' experiences accessing care and about their ratings of the quality of care provided by their doctors, nurses, and health plans. For example, it asks respondents to rate their doctors, nurses, and specialists; to grade their providers on how well they listen, respect what patients have to say, and explain things; and to rate them on their helpfulness generally. In a supplement for persons with chronic conditions, CAHPS® inquires about how well the provider understands the effect of chronic problems on daily life; whether the consumer has been involved as much as desired in decisions affecting health care; and whether the consumer was able to obtain any therapy and special medical equipment from his or her health plan. One CAHPS® questionnaire asks adults about their own experiences with care. A second CAHPS® questionnaire asks adults about their child's health care. For instance, parents are asked if they have received reminders for their children's check-ups, if their children have gone to the doctor for check-ups or "for shots or drops," and if they got an appointment for their child's first visit as soon as they wanted.

A National CAHPS® Benchmarking Database enables researchers to compare consumers' assessments of one plan to consumers' assessments of plans nationally. In 1999, Medicaid agencies, employers, and health plans submitted data covering 500 plans to the national database. 

If your state does not use CAHPS®, find out whether the state or contracting managed care organizations survey consumers using another instrument, and whether the survey contains questions about the quality of care. As you analyze the data, remember that the raw percentage of satisfied consumers is not very meaningful. Most people are satisfied with their health care, but most are healthy and use services relatively infrequently. Information about whether healthy people obtain preventive services, whether unhealthy people get the services they need, and about specific problems encountered is more meaningful in assessing and improving the quality of care.
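Why a raw satisfaction percentage can mislead is easy to see with a small sketch. The numbers below are purely hypothetical: an overall figure dominated by healthy, low-utilization members can mask serious dissatisfaction among the minority who use the most care.

```python
# Hypothetical survey results illustrating why an overall satisfaction
# percentage can hide problems among members who actually use services.
groups = {
    "healthy, low utilization": {"members": 900, "satisfied": 810},
    "chronically ill, high utilization": {"members": 100, "satisfied": 40},
}

total = sum(g["members"] for g in groups.values())
satisfied = sum(g["satisfied"] for g in groups.values())
print(f"Overall satisfaction: {satisfied / total:.0%}")  # prints 85%

# Broken out by subgroup, the picture changes sharply:
for name, g in groups.items():
    print(f"  {name}: {g['satisfied'] / g['members']:.0%}")  # 90% vs. 40%
```

Here the plan looks fine overall (85 percent satisfied), yet only 40 percent of its chronically ill members are satisfied, which is exactly the kind of problem the aggregate number conceals.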

Early And Periodic Screening, Diagnosis, And Treatment (EPSDT)

Many areas of health care have no firm standards about what services should be provided, what utilization rates are appropriate, or what clinical performance scores are attainable. For children covered by Medicaid, however, there are standards. Under federal law and regulations, every state Medicaid program must provide Early and Periodic Screening, Diagnosis, and Treatment (EPSDT) for children under age 21. Children and youth must be provided immunizations and regularly scheduled health assessments, including complete physical exams, health and developmental history, laboratory tests such as blood lead level tests, and health education. They must receive periodic vision and hearing testing and ongoing dental care. State Medicaid programs must follow a nationally developed schedule for children's immunizations. For other health screening, states develop periodicity schedules (showing the ages and frequency at which children should receive various health tests) in consultation with the medical community. When problems with a child's physical or mental health or development are identified through screening, Medicaid must cover necessary treatment to correct or ameliorate the problem.

HCFA sets goals for participation in the EPSDT program. Since 1995, the national goal has been for every state to screen 80 percent of eligible children and youth. Advocates should hold Medicaid agencies and their contracting managed care plans accountable for attaining or surpassing that goal.

a. What data are reported to the federal government?

Federal law requires states to report annually the following information to HCFA: (1) the number of children provided screening; (2) the number of children referred for corrective treatment; (3) the number of children receiving dental services; and (4) the state's results in attaining EPSDT participation goals. 

States report their annual EPSDT participation to HCFA on "Form HCFA-416." On it, states list the number of children covered by Medicaid in various age groups and, based on the state's periodicity schedule and the length of time each child was enrolled in Medicaid, compute the number of screenings that the child population should have received. This number is compared with the number of screenings actually performed to arrive at a "screening ratio." The number of children who received at least one screen divided by the number of children who should have received at least one screen is the "participant ratio." States also report on the number of children referred for corrective treatment; the number receiving preventive dental services; the number receiving dental treatment services; the number enrolled in managed care; and the number of blood lead tests. 
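The two ratios described above are simple proportions, and advocates can compute them from published 416 figures. The sketch below uses hypothetical counts (the "expected" figures would come from the state's periodicity schedule and each child's length of Medicaid enrollment, as the form instructions describe).

```python
# Sketch of the two ratios reported on Form HCFA-416, with hypothetical counts.

def screening_ratio(screens_performed: int, screens_expected: int) -> float:
    """Total screens actually delivered vs. total screens the child
    population should have received under the periodicity schedule."""
    return screens_performed / screens_expected

def participant_ratio(children_screened: int, children_due_screen: int) -> float:
    """Children who received at least one screen vs. children who should
    have received at least one screen."""
    return children_screened / children_due_screen

# Hypothetical example: 40,000 screens delivered out of 100,000 expected,
# and 30,000 of 60,000 eligible children received at least one screen.
print(f"Screening ratio: {screening_ratio(40_000, 100_000):.0%}")     # 40%
print(f"Participant ratio: {participant_ratio(30_000, 60_000):.0%}")  # 50%
```

Note that the two ratios answer different questions: a state can post a respectable screening ratio by screening some children repeatedly while a large share of children receive no screens at all, which is why the participant ratio matters for judging progress toward the 80 percent participation goal.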

The 416 provides valuable information that advocates can use to track their state's progress in reaching participation goals over time. The data, however, do have several problems. First, the 416 was revised in 1999, so earlier data may not be comparable. Second, variations in state periodicity schedules limit comparisons among states. Third, for children in managed care plans, data will only be reliable if managed care plans have submitted accurate data to the state.

b. Other data on managed care plans' provision of EPSDT services

Advocates may find information about managed care plans' provision of EPSDT services in three other sources:

  • Managed care plans' regular reports to the state, 
  • Plans' internal quality studies, and
  • External quality reviews.

States may require contracting Medicaid managed care plans to submit data regularly regarding the EPSDT services that they provide. Advocates can ask their state Medicaid agencies what data are required and how copies may be obtained.

Plans may internally study their performance on well-child visits, immunizations, or EPSDT generally as part of a quality improvement effort. Advocates can ask the plans what studies are underway, what interventions the plan is undertaking to improve its services to children, and whether the plans will make their studies available to the public. 

External quality review organizations may assess the quality of managed care plans' EPSDT services as part of their annual review. From the external quality review organizations' "scope of work," advocates should be able to find out whether any aspects of EPSDT services are being reviewed and what methodology reviewers are using to determine whether children received EPSDT services. For example, do reviewers rely on encounter data submitted by the plans or HEDIS® measures calculated by the plans, or do the reviewers examine medical charts? If they rely on encounter data, how do they verify the data's completeness and accuracy? Advocates can request external review organizations' scope of work and findings from their Medicaid agencies. 

Several HEDIS® measures address aspects of EPSDT compliance. For example, they measure the percentage of children in various age groups who received well-child visits and the percentage who received immunizations. Plans may use either administrative data or an actual review of medical records to calculate HEDIS® scores. When plans base their HEDIS® scores on administrative data regarding patient encounters, an auditor can review a sample of medical records to verify the accuracy of the data.
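The administrative-data approach and its audit step can be sketched as follows. The record layout and names here are hypothetical, intended only to show the logic: the rate is computed from encounter records, and a random sample of charts is pulled to check those records against what was actually documented.

```python
import random

# Hypothetical encounter-based records: child ID -> whether administrative
# data show a qualifying well-child visit. In practice these would come
# from the plan's claims/encounter system.
encounters = {
    "child-001": True,
    "child-002": False,
    "child-003": True,
    "child-004": True,
    "child-005": False,
}

# HEDIS-style rate from administrative data: share of eligible children
# for whom a qualifying visit was recorded.
rate = sum(encounters.values()) / len(encounters)
print(f"Well-child visit rate (administrative data): {rate:.0%}")  # 60%

# Audit step: draw a random sample of records for medical-chart review,
# to verify that the encounter data are complete and accurate.
random.seed(0)  # fixed seed so the sample is reproducible
audit_sample = random.sample(sorted(encounters), k=2)
print("Charts pulled for audit:", audit_sample)
```

If chart review finds visits the encounter data missed (or vice versa), the plan's reported rate cannot be taken at face value, which is the concern raised above about relying on plan-submitted data.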

HEDIS® measures do not, however, assess whether the well-child visits included all of the required components of an EPSDT exam (comprehensive health and development history, an unclothed physical exam, age-appropriate immunizations, laboratory tests, and health education and anticipatory guidance). They do not assess whether vision, hearing, dental screening, and other services were provided according to the state's periodicity schedule or whether appropriate services were provided to children enrolled in the plan for only a short time. Advocates might want to suggest that external reviews and internal plan studies address the completeness of screening services and follow-up treatment.

CASE STUDY: 
WASHINGTON STATE How Have Advocates Analyzed EPSDT Data?

In 1998, the Children's Alliance in Washington State issued a report, How Children Fare in Medicaid Managed Care, which drew on state-sponsored studies of EPSDT compliance. One of these studies, conducted by an external quality review organization under contract with Washington State, showed that children on Medicaid managed care received very few of the required well-child visits. Based on a review of medical records, the study showed that on average only 18 percent of infants enrolled in Medicaid managed care plans received at least four out of six well-child visits. Providers performed only 15 percent of the expected number of well-child visits for three- to six-year-olds and only 9 percent of the expected number of well visits for adolescents. Often, when children did receive well-child visits, health screening was not comprehensive. 

To understand why children were not getting proper screening, the Children's Alliance turned to information from providers and consumers. Pediatricians in the leadership of the state chapter of the American Academy of Pediatrics said that they did not have phone numbers and addresses to contact many of the patients assigned to them and, therefore, could not remind them that they needed well-child visits. As to why exams were often incomplete, the pediatricians said that some providers performed the same screening for all children, regardless of whether or not they had Medicaid, and that this screening often did not include the vision, hearing, and mental health or chemical dependency checks required under EPSDT.

Washington State gathered some information about consumer experiences through the 1997 Consumer Assessment of Health Plans (CAHPS®). Two-thirds of parents and caretakers of children under age two said they received reminders about well-child exams and immunizations. However, the Children's Alliance pointed out, CAHPS® did not gather information for other age groups and did not ask parents about their awareness of EPSDT requirements. Consumer focus groups conducted a few years earlier showed that consumers were not aware that Medicaid covered well-child checks.

Finally, the Children's Alliance reported, Washington State's problems in collecting encounter data resulted in incomplete reporting of EPSDT statistics to HCFA and made Washington appear to have among the lowest rates of EPSDT compliance in the country. The Children's Alliance noted that improving encounter data is an essential part of improving accountability for EPSDT services at the provider level, because encounter data can identify patients who are not getting proper EPSDT services from their providers.

Source: The Children's Alliance, How Children Fare in Medicaid Managed Care: A Report on Washington State's Healthy Options and BHP+ Programs (Seattle, WA: The Children's Alliance, December 1998).

Endnotes

1  Connecticut Children's Health Project, Quarterly Report to The Children's Health Council on EPSDT On-Time Visit Rates: Third Quarter 1997 (Hartford, CT: Connecticut Children's Health Project, 1998).

2  Social Security Act § 1903 (m) (2) (A).

3  HCFA, Medicaid HIPAA Plus, November 1999.

4  Sara Rosenbaum, et al., Negotiating The New Health System: A Nationwide Study of Medicaid Managed Care Contracts, Third Edition (Washington, DC: George Washington University Center for Health Services Research and Policy, 1999).

5  Social Security Act § 1902(a)(33)(A); Social Security Act § 1932(c).

6  Letter to Paul Offner, Commissioner on Health Care Finance, Washington, D.C. from Rachel Block, Director, Medicaid Managed Care Team, HCFA, Baltimore, MD, outlining terms and conditions on the District's 1915(b) waiver, March 19, 1997.

7  Sara Rosenbaum, et al., op. cit., vol. 2, pp. 5-216-5-440.

8  Kaiser Family Foundation, National Survey on Consumer Experiences With Health Plans (Menlo Park, CA: Kaiser Family Foundation, June 2000).

9  Social Security Act § 1905(r); 42 CFR 441.50-441.62.

10  Jane Perkins and Kristi Olson, Medicaid Services for Children: Federal Revisions to Reporting Form Raise Many Questions (Chapel Hill, NC: National Health Law Program, October 16, 1999).
