‘Medical Principles: Part 2’
CAERS Substack Article #53
In the last article I stated that doctors take a history and then examine a patient in order to develop a differential diagnosis. We use tests to help us decide which of the many possible diagnoses in the differential is the correct one, so that we can then develop a treatment plan. Simply doing lots of random tests with no specific plan, referred to as a ‘shotgun’ approach, is not only unhelpful, it can lead us in the wrong direction and even harm the patient. How do we avoid that?
There are few tests that tell us the entire story; every test has limitations, strengths and weaknesses. As a general rule, the easier, cheaper and safer the test, the less information it gives. Often we start off with these kinds of tests, because the tests that provide the most information are usually more complicated, more expensive and, most importantly, riskier. So we have to weigh all of these factors when ordering a test; injuring a patient with a complex test when a safer test could have provided the same information is not wise. When looking for bowel cancer, a simple test that looks for microscopic blood in your bowel movement (stool) is very safe and cheap (though aesthetically unpleasant) and is a good place to start, whereas a colonoscopy could puncture your bowel in a way that requires immediate surgery. That is why it is so important to pick the right tests in the right sequence in order to safely and efficiently whittle the differential diagnosis down to the final diagnosis that best explains the patient’s symptoms. Once we order the best test, then what?
The first question we must ask ourselves when obtaining a test result, especially a result we were not expecting, is this: does the result explain the patient’s problem? In other words, is this an incidental finding or is there a correlation, and if the latter, does it imply causation? It is common to find abnormal test results that are irrelevant to the patient’s problem, and we refer to these as ‘incidental-omas’. As well, the human body is complex, and not all bodies are identical, which means that there are lots of variations of normal; some people are left-hand dominant, for example. In addition, it is easy to wrongly assume that the test abnormality caused the patient’s symptoms when it merely correlates. Ascribing a patient’s problem to incidental-omas, variations of normal or non-causative correlation can do harm if we are not careful.
Which leads to the second question: what kind of test is this? There are screening tests and ‘gold standard’ tests (referred to as ‘diagnostic’). Screening tests in general are cheap, easy and safe, while the gold standard ones tend to be the opposite. Screening tests are usually very sensitive; they pick up anything that looks abnormal because we don’t want to miss anything. However, they are not very specific; the abnormalities they pick up often have a very broad range of causes. The stool test mentioned above is a screening test for bowel cancer that will detect blood from any cause, such as hemorrhoids. Colonoscopy is the gold standard that will weed out the non-cancerous causes of blood in bowel movements from the cancerous ones. It has a very high specificity—it can distinguish, or specify, bowel cancer from hemorrhoids.
We want screening tests to be very sensitive because we don’t want to miss any bowel cancers, for example; we say they have high sensitivity. The problem is that they are non-specific, since they also identify blood from hemorrhoids; we say they have low specificity. In other words, it’s often a trade-off: sensitivity versus specificity.
Imagine that you are fishing in a lake that has an equal number of sunfish and trout; all of the sunfish are under two pounds, whereas half of the trout are under two pounds and half are over. Suppose that you only want to catch trout. If you use a net that captures fish of any size, half of your catch will be sunfish and half will be trout; the net will be very sensitive (it will not miss any trout), but it will not be very specific (half of the fish caught are sunfish). But if you use a net that will only capture fish over two pounds, it will not be as sensitive (it will miss the smaller trout) but it will be very specific for trout (no sunfish will be caught).
Sensitivity and specificity are important because they help to determine the percentage of false positives: times when a test says something is positive (a trout) when it isn’t (a sunfish). (The opposite, a false negative, is when the test says it’s not a trout when in fact it is.) However, the number of false positive tests also depends on the prevalence of the thing you are looking for. If only a small percentage of the fish in the lake are trout, say 10% rather than 50%, then with a net that captures fish of any size, 90% of the fish brought up are going to be sunfish, not 50% as above, so the false positive rate will be much higher (90% vs. 50%).
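To make that arithmetic concrete, here is a minimal sketch of the lake example in Python. The 50% and 10% trout fractions and the two nets come from the example above; the function and parameter names are just labels of mine, not standard terms.

```python
def sunfish_share_of_catch(trout_fraction, catch_rate_trout, catch_rate_sunfish):
    """Of the fish in the net, what fraction are sunfish (false positives)?"""
    trout_caught = trout_fraction * catch_rate_trout
    sunfish_caught = (1.0 - trout_fraction) * catch_rate_sunfish
    return sunfish_caught / (trout_caught + sunfish_caught)

# The all-sizes net catches every fish (100% of trout, 100% of sunfish):
print(sunfish_share_of_catch(0.50, 1.0, 1.0))  # 0.5: half the catch is sunfish
print(sunfish_share_of_catch(0.10, 1.0, 1.0))  # 0.9: 90% of the catch is sunfish

# The two-pound net misses half the trout but catches no sunfish at all:
print(sunfish_share_of_catch(0.10, 0.5, 0.0))  # 0.0: very specific, less sensitive
```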
So, in order to understand the accuracy of a test, we must know the sensitivity and specificity of the test, as well as the actual prevalence of the disease we are looking for. This is highly relevant to COVID. The PCR test is a screening test that can be run at very high levels of sensitivity, which means that it is unlikely to miss cases, but the false positive rate will be higher, especially if the prevalence of the disease in that population at that time is low.
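Sensitivity, specificity and prevalence combine in the same way for any screening test. The sketch below shows the general calculation; the 99% sensitivity, 95% specificity and the 0.5% and 20% prevalence figures are illustrative assumptions chosen as round numbers, not measured values for PCR or any other real test.

```python
def false_share_of_positives(sensitivity, specificity, prevalence):
    """Among all positive results, what fraction are false positives?"""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return false_positives / (true_positives + false_positives)

# Illustrative assumptions only: a very sensitive screening test (99%) with
# decent specificity (95%), used where the disease is rare versus common.
print(false_share_of_positives(0.99, 0.95, 0.005))  # ~0.91 at 0.5% prevalence
print(false_share_of_positives(0.99, 0.95, 0.20))   # ~0.17 at 20% prevalence
```

With these assumed numbers, roughly nine out of ten positive results are false when the disease is rare, but fewer than one in five are false when it is common, even though the test itself has not changed.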
How well has this been explained to, and understood by, the public? A lot of decisions that have had major consequences in our lives have been based on the PCR screening test. But the gold standard test for viruses is a viral culture. How many people with a positive screening test had a viral culture done to confirm that it was a true positive and not a false positive? We would not do surgery or suggest chemotherapy for bowel cancer based solely on a stool test; we would always do the diagnostic gold standard test, a colonoscopy, first. In addition, how many people had a blood test done afterward to demonstrate the development of antibodies, which would help to confirm that the PCR test was a true positive and the patient actually had COVID?
Tests always need to be planned with a purpose in mind and then properly interpreted; are they screening tests or gold standard diagnostic tests? Doing tests willy-nilly is not a wise way to practice medicine. What do you think? Have you had concerns about the testing for COVID?
In my next article, we will explore more principles that could help to clarify what may appear to be the mysterious world of medical science.
J. Barry Engelhardt MD (retired) MHSc (bioethics)
CAERS Health Intake Facilitator