How does prevalence affect the predictive value of a test?

7th Jan 2021

Apart from sensitivity and specificity, what other indicators of test validity do we need to know in order to interpret test results correctly? Clinicians are usually confronted with test results that are either positive or negative. So the question is: what does a positive or negative test result tell you? This is where the predictive value comes in.

 

Case Study 1—How do we calculate predictive values?

Let's consider an example. We have a population of 1000 individuals: 200 are diseased and 800 are not. Of the 200 diseased, 160 correctly test positive. Of the 800 non-diseased, 720 correctly test negative. So, we have 160 true positives and 720 true negatives. Accordingly, we have 80 false positives and 40 false negatives.


Figure 1. An example to illustrate predictive value. Out of 1000 individuals, 200 are diseased and 800 are not. 160 out of 200 test positive, while 720 out of 800 test negative. So, there are 40 false negatives and 80 false positives. 

How should we interpret a positive result?

Well, positive predictive value, or PPV, is the percentage of truly diseased people out of those who test positive. So, 160 true positives divided by all 240 positives, times 100, gives us a positive predictive value of 67%. In other words, this means that 67% of all individuals testing positive are truly diseased.


Figure 2. Calculate the positive predictive value (PPV) using the number of truly diseased people who tested positive divided by all the people who tested positive and multiplying by 100. 
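
If you like to check such numbers in code, here is a minimal Python sketch of the PPV calculation, using the hypothetical counts from Figure 1:

# Hypothetical counts from Figure 1.
true_positives = 160   # diseased people who test positive
false_positives = 80   # non-diseased people who test positive

# PPV = true positives / all positives, expressed as a percentage.
ppv = true_positives / (true_positives + false_positives) * 100
print(f"PPV = {ppv:.0f}%")  # prints: PPV = 67%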

How should we interpret a negative result?

Well, as you might've guessed, there's also a negative predictive value, or NPV. NPV is the percentage of truly non-diseased people out of those who tested negative. From our example, it's calculated by taking 720 true negatives divided by all 760 people who tested negative, times 100, to get an NPV of 95%. So, 95% of all people who test negative are truly disease-free.


Figure 3. Calculate the negative predictive value (NPV) using the number of truly non-diseased people who tested negative divided by all the people who tested negative and multiplying by 100.
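
The same kind of Python sketch verifies the negative predictive value from the example counts:

# Hypothetical counts from Figure 1.
true_negatives = 720   # non-diseased people who test negative
false_negatives = 40   # diseased people who test negative

# NPV = true negatives / all negatives, expressed as a percentage.
npv = true_negatives / (true_negatives + false_negatives) * 100
print(f"NPV = {npv:.0f}%")  # prints: NPV = 95%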

 


 

Is the predictive value always the same?

Positive and negative predictive values are actually much more helpful than sensitivity and specificity when a clinician has to interpret a result. Essentially, we want to know the probability of disease given a positive or negative test result. Yet we often see different specialists interpret the same lab values in very different ways.

And these specialists should interpret the same values differently, because their patient populations differ in one major way. That major way has to do with prevalence.

 

Case Study 2—How does prevalence limit the use of predictive values?

Let's look at another example. Take a population of 1000 individuals, a test with a sensitivity and specificity of 90%, and a disease prevalence of 5%, so 50 people have the disease and 950 don't. With a sensitivity and specificity of 90%, the test will correctly pick up 90% of the diseased (45 individuals) and 90% of the non-diseased (855 individuals). The remaining 5 diseased individuals will be classified as false negatives and the remaining 95 non-diseased as false positives.


Figure 4. Within a population of 1000 people, a disease with a 5% prevalence will impact 50 people. A test with 90% sensitivity and 90% specificity will correctly identify 45 people with the disease, and 855 without the disease. This will leave 5 false negatives, and 95 false-positive results. 

So overall, we have 140 people who test positive. The positive predictive value is 45 divided by 140, times 100, equaling 32%—very weak. In this population, the test is useless. 


Figure 5. For a disease prevalence of 5%, using a test with sensitivity and specificity of 90% in this population of individuals generates a positive predictive value of 32%. 
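
To see where these numbers come from, here is a short Python sketch that derives the PPV from sensitivity, specificity, and prevalence. The function name ppv_from_rates is just an illustrative choice, not an established formula name:

def ppv_from_rates(sensitivity, specificity, prevalence, population=1000):
    # Split the population according to prevalence.
    diseased = population * prevalence
    non_diseased = population - diseased
    # Sensitivity gives the true positives; (1 - specificity) gives the false positives.
    true_positives = diseased * sensitivity
    false_positives = non_diseased * (1 - specificity)
    return true_positives / (true_positives + false_positives) * 100

# Case study 2: 90% sensitivity and specificity, 5% prevalence.
print(f"PPV = {ppv_from_rates(0.90, 0.90, 0.05):.0f}%")  # prints: PPV = 32%

Note that the population size cancels out of the ratio, so the PPV depends only on sensitivity, specificity, and prevalence.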

 

Case Study 3—How does prevalence support the use of predictive values?

For this same population of 1000 people, now let's say the disease prevalence is 20%, so 200 individuals are diseased and 800 are non-diseased. Since the sensitivity and specificity are both 90%, the test will correctly identify 90% of the diseased and 90% of the non-diseased. So now, 260 people test positive: 180 true positives and 80 false positives. The PPV is 180 divided by 260, times 100, equaling 69%. So, in this population, the same test is much more useful.


Figure 6. A test with 90% sensitivity and specificity, when used in a population of 1000 individuals with a disease prevalence of 20%, has a positive predictive value (PPV) of 69%. This is much higher than the 32% obtained with a disease prevalence of 5%.

Disease prevalence in a population is therefore crucial when estimating the probability of disease based on a positive or negative test result.
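
To make that dependence explicit, this sketch sweeps the same hypothetical 90%-sensitivity, 90%-specificity test across a range of prevalences:

sensitivity, specificity = 0.90, 0.90

for prevalence in (0.01, 0.05, 0.10, 0.20, 0.50):
    # Per-person rates; the population size cancels out.
    tp_rate = prevalence * sensitivity
    fp_rate = (1 - prevalence) * (1 - specificity)
    ppv = tp_rate / (tp_rate + fp_rate) * 100
    print(f"prevalence {prevalence:4.0%} -> PPV {ppv:.0f}%")

The 5% and 20% rows reproduce the 32% and 69% from the two case studies above, and at a prevalence of 1% the PPV of this otherwise decent test drops to about 8%.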

 

Case Study 4—A final clinical example

Let me tell you a story that has to do with this concept. I once saw a very fit and athletic pilot who was dismissed from flying because a routine ECG showed a left bundle branch block. Every other test, including the echo, was unremarkable. The aviation authorities had issued a directive arguing that left bundle branch block is associated with an increased risk of latent or future cardiomyopathy, and that pilots with a left bundle branch block should therefore be dismissed from service.

When I looked at the paper that served as the basis for this directive, I saw that it was carried out in patients who were either hospitalized or seen in an outpatient cardiology clinic. So, it was done in a population with a much higher prevalence of cardiomyopathy than the private practice where the pilot got the ECG. 

This makes the directive highly problematic: it is based on indicators that may not be transferable to the population of pilots, because the positive predictive value of left bundle branch block for the diagnosis of future cardiomyopathy is probably too low to be useful in this population!

 

That’s it for now. If you want to improve your understanding of key concepts in medicine and sharpen your clinical skills, make sure to register for a free trial account, which will give you access to free videos and downloads. We’ll help you make the right decisions for yourself and your patients.