How does specificity affect the predictive value of a test?
Here's an example. Let's take a population of 1000 individuals. Let's say the prevalence is 20%, so 200 have the disease and 800 don't. The test sensitivity is 70% and the specificity is also 70%. So out of the 200 diseased, 70%, or 140 people, will be picked up by the test, whereas 60 people will be missed. Similarly, 70% of the 800 non-diseased individuals, or 560 people, will correctly test negative, whereas 240 will be falsely classified as diseased. Overall, we have 380 folks who tested positive—140 true positives and 240 false positives.
Figure 1. Within a population of 1000 people, a disease with a 20% prevalence will impact 200 people. A test with 70% sensitivity and 70% specificity will correctly identify 140 people with the disease, and 560 without the disease. This will leave 60 false negatives, and 240 false-positive results.
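The counts in Figure 1 can be checked with a few lines of plain Python (a minimal sketch; the variable names are my own):

```python
# Breaking a population into the four cells of the 2x2 table.
population = 1000
prevalence = 0.20    # 20% of people have the disease
sensitivity = 0.70   # fraction of diseased who test positive
specificity = 0.70   # fraction of non-diseased who test negative

diseased = round(population * prevalence)           # 200
non_diseased = population - diseased                # 800
true_positives = round(diseased * sensitivity)      # 140
false_negatives = diseased - true_positives         # 60
true_negatives = round(non_diseased * specificity)  # 560
false_positives = non_diseased - true_negatives     # 240

print(true_positives, false_negatives, true_negatives, false_positives)
# → 140 60 560 240
```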
Now let's calculate the PPV. That's 140 true positives divided by 380 who tested positive, times 100, equals 37%—pretty bad.
Figure 2. For a disease prevalence of 20%, using a test with sensitivity and specificity of 70% in this population of individuals generates a positive predictive value, or PPV, of 37%.
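The PPV arithmetic can be written as a small helper (a sketch; the function name is my own):

```python
def ppv_percent(true_positives, false_positives):
    """Positive predictive value: the share of positive tests that are true positives."""
    return 100 * true_positives / (true_positives + false_positives)

# 140 true positives out of 380 total positives.
print(round(ppv_percent(140, 240)))  # → 37
```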
What happens if we use a more sensitive test?
Now let's see what happens to the positive predictive value if we change the sensitivity to 90%. Now out of the 200 diseased, we're going to pick up 180 people and we'll miss 20. Nothing changes with the non-diseased individuals, since specificity stays the same at 70%. So overall, we end up with 420 people who test positive.
The PPV is calculated as 180 divided by 420, times 100, which equals 43%. That's not much of an improvement over our initial 37%.
Figure 3. For a disease prevalence of 20%, using a test with a sensitivity of 90% and a specificity of 70% in this population of individuals generates a positive predictive value (PPV) of 43%.
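Redoing the same calculation with the higher sensitivity (a sketch, reusing the counts from the example above):

```python
# Sensitivity raised to 90%; specificity stays at 70%.
diseased, non_diseased = 200, 800
true_positives = round(diseased * 0.90)                       # 180
false_positives = non_diseased - round(non_diseased * 0.70)   # 240

ppv = 100 * true_positives / (true_positives + false_positives)
print(round(ppv))  # → 43
```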
What happens if we use a more specific test?
Now, let's see what happens if we take our initial numbers and change the specificity from 70 to 90%. In this case, the number of diseased individuals and their results stay the same since sensitivity is left unchanged at 70%. Now we're correctly diagnosing 90% of non-diseased, or 720 people, and we're going to get 80 false positives.
So overall, there are 220 people who test positive. The positive predictive value is 140 divided by 220, times 100, which equals 64%. So much better than our initial 37%!
Figure 4. For a disease prevalence of 20%, using a test with a sensitivity of 70% and a specificity of 90% in this population of individuals generates a positive predictive value of 64%.
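And the same check for the higher-specificity scenario (a sketch, again reusing the counts from the first example):

```python
# Specificity raised to 90%; sensitivity stays at 70%.
diseased, non_diseased = 200, 800
true_positives = round(diseased * 0.70)                       # 140
false_positives = non_diseased - round(non_diseased * 0.90)   # 80

ppv = 100 * true_positives / (true_positives + false_positives)
print(round(ppv))  # → 64
```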
Why does specificity have so much more influence on the positive predictive value than sensitivity?
Well, because whenever the prevalence is below 50%—as it is for most diseases—there are many more people in the non-diseased group than in the diseased group. Therefore, a 1% change in the number of non-diseased individuals correctly identified as negative, or the specificity, has a much bigger effect than a 1% change in the number of diseased individuals that correctly test positive, or the sensitivity.
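One way to see this numerically: the PPV can be written directly in terms of prevalence, sensitivity, and specificity (a sketch; this formula just restates the counts from the examples above as population fractions):

```python
def ppv(prevalence, sensitivity, specificity):
    # Expected fractions of the population in the positive-test cells.
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return 100 * true_pos / (true_pos + false_pos)

# A 1-point bump in specificity moves the PPV more than the same bump in sensitivity.
print(round(ppv(0.20, 0.71, 0.70), 1))  # → 37.2  (sensitivity +1 point)
print(round(ppv(0.20, 0.70, 0.71), 1))  # → 37.6  (specificity +1 point)
```

The gap widens as specificity keeps rising, which is why the jump to 90% specificity took the PPV all the way to 64%.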
That’s it for now. If you want to improve your understanding of key concepts in medicine, and improve your clinical skills, make sure to register for a free trial account, which will give you access to free videos and downloads. We’ll help you make the right decisions for yourself and your patients.