MASS HIV TESTING:
A DISASTER IN THE MAKING
By Christine Johnson
Zenger's August 1996
The U.S. Food and Drug Administration (FDA) recently approved the sale of HIV
antibody test kits for home use. One of the stated purposes for selling a home test kit is to
pick up a large percentage of people at risk for AIDS who refuse to get tested in a clinic,
but indicate they would do so in a home setting. Since anyone will soon be able to walk
into a drug store and buy one of these kits, the privacy and anonymity they afford makes
it inevitable that the kits will also be used by thousands, or even millions, of people in the
general population who are not at risk for AIDS.
Sometimes referred to as the "worried well", people not at risk have something
else to worry about: Bayes' Law. Bayes' Law is simply a principle of statistical analysis
that states the following: When you use a test in a population with a very low incidence
of the disease you are testing for, you will get huge numbers of false-positives. This is
true for any test for any disease.
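The arithmetic behind Bayes' Law can be sketched in a few lines of Python (the prevalence and specificity figures plugged in below are illustrative, not measurements from any particular kit):

```python
# A minimal sketch of Bayes' Law applied to screening.
# The prevalence and specificity figures below are illustrative only.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Fraction of positive results that are true-positives."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Even a 99.9-percent-specific test fares poorly at very low prevalence:
ppv = positive_predictive_value(prevalence=0.00006,   # 6 in 100,000
                                sensitivity=1.0,      # assume no missed cases
                                specificity=0.999)
print(f"{ppv:.1%}")  # prints "5.7%" -- most positives are false
```

Holding the specificity fixed and lowering the prevalence drives this value toward zero, which is the whole point of the argument.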
Every scientist researching HIV testing knows this and every public policy
maker should know it. This includes the American Medical Association, which
on June 27 approved a recommendation for the mandatory testing of all pregnant
women. With regard to screening low-risk populations, Xin M. Tu of the Harvard School of
Public Health estimated that 90 percent of positive tests are in fact false.(1) Although this
may not be of much concern in certain situations, such as screening donated blood
(where positive units are simply discarded), he notes, "Falsely labeling individuals
applying for marriage licenses, pregnant women, health care workers and patients
admitted to the hospital as carrying the virus is certainly irresponsible and can have
an enormous psychological and social impact on the individuals."
How does Bayes' Law work? It all depends on what degree of infection actually
exists in the population being tested. The lower the level of infection, the higher the
false-positive rate. This is true even if the specificity of the test remains constant. (The
specificity of a test indicates how often uninfected people will correctly test negative;
the remainder test positive -- a false-positive.) So a test with 99.9 percent specificity
will perform worse and worse as the
prevalence of infection in the tested population gets lower and lower. And, as stated by
Theresa Germanson of the University of Virginia, "At some point of extremely low
disease prevalence, it is expected that the positive predictive value (how often a positive
result will be true positive) of even the most powerful assay series will
deteriorate to a substandard level of performance." (2)
The 1993 estimated prevalence of HIV infection in the general population is
about 1 in 17,000. This is based on a recent Centers for Disease Control (CDC)
publication in which they published a graph indicating an infection rate of .006 percent
among 1993 blood donors. So, out of 100,000 people, six would be infected. The
remainder -- 99,994 -- are not infected and should test negative.
However, the home test kit (called "Confide") claims a specificity of 99.95
percent, and thus it will correctly identify uninfected people 99.95 percent of the time.
The rest of the time (0.05 percent) it will test false-positive. Out of every 100,000 tests
performed on those not at risk, 50 people (99,994 x 0.0005) would test false-positive for
every six who were actually infected! And if the specificity of the test slips even slightly,
only down to 99.90 percent (which is virtually guaranteed to happen under the less than
ideal conditions of public labs), there would be 100 false-positives for every six true-positives.
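The arithmetic of the two scenarios just described can be checked directly (a sketch using the article's own figures):

```python
# Checking the arithmetic for 100,000 people tested at the stated
# 0.006 percent prevalence (6 infected, 99,994 uninfected).

population = 100_000
infected = round(population * 0.00006)   # 6 true-positives
uninfected = population - infected       # 99,994 who should test negative

for specificity in (0.9995, 0.9990):     # Confide's claim, then a slight slip
    false_positives = uninfected * (1 - specificity)
    print(f"{specificity:.2%} specificity: about {false_positives:.0f} "
          f"false-positives for every {infected} true-positives")
```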
But does this only apply to screening tests? Isn't the whole process perfectly
accurate when you add the Western Blot to the algorithm? In fact, what matters in this
mathematical analysis is not whether you "confirm" a test with the Western Blot, but
what the specificity of the full testing sequence is (think of it as the combined specificity
of two ELISAs plus a Western Blot). The CDC has estimated the specificity of the testing
sequence as 99 to 99.8 percent. Giving them the benefit of the doubt between the two
numbers, what will happen in a 0.006 percent prevalence population if tested with three
sequential tests yielding specificity of 99.8 percent? There will be 200 false-
positives for every six true-positives.
The idea that it is possible to perform widespread testing in low-prevalence
populations and obtain any degree of accuracy is currently based on a highly publicized
study performed by Donald Burke for the U.S. military in 1988. (3) He used a multi-step
testing sequence and obtained a specificity of 99.999 percent. Without getting into
weightier issues such as whether any HIV antibody test has ever been properly
authenticated against a virus-isolation gold standard, let us for a moment accept
Burke's findings at face value.
First of all, Burke's military labs required an extraordinarily high level of quality
control, a level not normally found in the public labs where a person's specimen would
ordinarily be sent. Burke himself testified before a House of Representatives
subcommittee that many laboratories performed too poorly to be considered for the
military contract to analyze blood samples. (4) Over a two year period, 19 labs had
applied for the contract to test Army applicants and personnel for HIV. Ten out of 19 (53
percent) on at least one occasion could not analyze test samples to a level of 95 percent
accuracy, and were therefore rejected.
A more thorough critique of Burke's methodology can be found elsewhere. (5)
Suffice it to say that tests performed at a 99.999 percent specificity have been called
"utopian" (6) and "unusual in clinical medicine".(7) This means
that Burke's figures are too good to be true.
Even if they were true, one can see from the above that Burke's testing sequence would
still produce one false-positive for every six true-positives in the general population. In
other words there would still be a 14 percent chance of getting a false-positive.
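Taking Burke's 99.999 percent figure at face value, the 14 percent chance works out as follows (a sketch, using the same illustrative 100,000-person population as above):

```python
# At a claimed 99.999 percent specificity, 99,994 uninfected people
# yield about one false-positive, against six true-positives.

uninfected = 99_994
infected = 6
false_positives = uninfected * (1 - 0.99999)   # about 1 per 100,000 tested
share_false = false_positives / (false_positives + infected)
print(f"{share_false:.0%}")  # prints "14%" -- odds a positive is false
```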
Any testing scheme, whether mandatory or voluntary, that involves large numbers
of people from low-prevalence populations is a disaster in the making.
Specificity:                  99%      99.9%    99.99%
Number of false-positives:    1,000    100      10
Number of true-positives:     6        6        6
Positive predictive value:    0.6%     5.7%     37.5%
The table shows the relationship between the specificity of a diagnostic test and its positive
predictive value (the percentage of positive test results that are true-positives) for a population
in which only a low percentage (0.006 percent) of people are actually infected.
Christine Johnson is an alternative AIDS activist and lay researcher with Health
Education AIDS Liaison (H.E.A.L.) in Los Angeles.
1) Tu, X.; Litvak, E.; Pagano, M.; 1992. Issues in Human Immunodeficiency Virus (HIV) screening
programs. Am J. Epi. 136:244-245.
2) Germanson, T., 1989. Screening for HIV: Can we afford the confusion of the false positive rate? J. Clin.
3) Burke, D.; Brundage, J.; Redfield, R.; et al., 1988. Measurement of the false positive rate in a screening
program for human immunodeficiency virus infections. NEJM. 319:961-964.
4) Barnes, Deborah, 1987. New questions about AIDS test accuracy. Science. 238:884-885.
5) Papadopulos-Eleopulos, E.; Turner, V.; Papadimitriou, J., 1993. Is a positive Western Blot proof of HIV
infection? Bio/Technology. 11:696-707.
6) Griner, P.; Mayewski, R.; Mushlin, A.; et al., 1981. Selection and interpretation of diagnostic tests and
procedures. Ann. Int. Med. 94:559-563.
7) Meyer, K.; Pauker, S., 1987. Screening for HIV: Can we afford the false positive rate? NEJM. 317:238-