Draft

238  PGR Beyond Se And Sp

238.1 Summary

  • PGR – Beyond Sensitivity and Specificity
  • A 55F nurse presents with sepsis, chest pain, and a cavitating RUL infiltrate on CXR.
  • CME:
  • Answer
  • A history of risk (which is, therefore, a history of probability and consequence)
  • “In the beginner’s mind there are many possibilities, but in the expert’s mind there are few” – Suzuki Roshi
  • Chesterton’s Fence
  • What proportion of CAP is caused by MRSA?
  • How good am I at predicting who has MRSA pneumonia?

238.2 Slide outline

238.2.1 Slide 1

  • PGR – Beyond Sensitivity and Specificity
  • Brian Locke

### Slide 2

  • A 55F nurse presents with sepsis, chest pain, and a cavitating RUL infiltrate on CXR.
  • Blood cultures are drawn in the ER. She receives vancomycin, cefepime, and azithromycin.
  • Sputum cultures are also collected. An MRSA nasal swab is obtained and is negative. Influenza A is positive.
  • She says she’s been sick for a week, thought she was doing better, then worsened.
  • Pulmonary is consulted 12h after admission. A CT is recommended and confirms what looks to be a necrotizing pneumonia, with no evidence of a lesion, some ipsilateral LAD, and no underlying lung disease.
  • Do you advise the team to stop vancomycin?

### Slide 3

  • CME:
  • Question: your intern says “the MRSA nasal swab is good at ruling out MRSA pneumonia.” You are feeling extra pedantic today, insisting on only the most truthful statements. Which of the following is true?
  • The MRSA nasal swab is good at ruling out disease because it has a high specificity
  • The MRSA nasal swab is good at ruling out disease because it has a high sensitivity
  • The MRSA nasal swab is not a good test, but it can still be useful.
  • Ugh. Sensitivity and specificity? These metrics are confusing.

### Slide 4

  • Answer
  • Question: your intern says “the MRSA nasal swab is good at ruling out MRSA pneumonia.” You are feeling extra pedantic today, insisting on only the most truthful statements. Which of the following is true?
  • The MRSA nasal swab is good at ruling out disease because it has a high specificity
  • The MRSA nasal swab is good at ruling out disease because it has a high sensitivity
  • The MRSA nasal swab is not a good test, but it can still be useful: when MRSA pneumonia is unlikely, but not so unlikely that we can ignore it.
  • Ugh. Sensitivity and specificity? These metrics are confusing.

### Slide 5

  • A history of risk (which is, therefore, a history of probability and consequence)
  • Let’s take an analogous approach and build up “test characteristics” from the start

### Slide 6

  • “In the beginner’s mind there are many possibilities, but in the expert’s mind there are few” – Suzuki Roshi
  • In clinical medicine, we would all think a lot more clearly if we used likelihood ratios rather than sensitivity and specificity (or NPV and PPV)
  • Likelihood ratios actually mean what we incorrectly take Se and Sp to mean
  • Se and Sp are tools fit for an epidemiologic purpose, which is not the purpose we use them for
  • PPV and NPV are not flexible enough for use in the diagnostic process.

### Slide 7

  • Chesterton’s Fence
  • Say you come across a fence that is blocking the way to where you’re planning on walking. You think, “I should take down this fence, because it’s in the way.” However, before taking down the fence, one should understand why the fence was put there in the first place.
  • Chesterton’s Fence: the principle that reforms should not be made until the reasoning behind the existing state of affairs is understood.
  • Why do we use sensitivity and specificity to summarize how ‘good’ tests are? Why not just ‘accuracy’ (proportion correct)?

### Slide 8

  • What proportion of CAP is caused by MRSA?
  • ChatGPT text

### Slide 9

  • What proportion of CAP is caused by MRSA?
  • ~3.7%
  • https://erj.ersjournals.com/content/34/5/1148

### Slide 10

  • How good am I at predicting who has MRSA pneumonia?

### Slide 11

  • How good am I at predicting who has MRSA pneumonia?

### Slide 12

  • How good am I at predicting who has MRSA pneumonia?

### Slide 13

  • How good am I at predicting who has MRSA pneumonia?

### Slide 14

  • How good am I at predicting who has MRSA pneumonia?

### Slide 15

  • How good am I at predicting who has MRSA pneumonia?

### Slide 16

  • How good am I at predicting who has MRSA pneumonia?
  • Thinking it was going to be another 32 patients before there was an MRSA pneumonia is called the “Gambler’s Fallacy”

### Slide 17

  • How good am I at predicting who has MRSA pneumonia?
  • Dumb Brian is 96.1% accurate!
  • (What does it feel like to be 96.1% accurate?)

### Slide 18

  • Enter Sensitivity and Specificity:
  • WW2: RADAR (radio detection and ranging) invented
  • RADAR Receiver Operators: Is this blip a flock of geese or a bomber?
  • Like MRSA pneumonia, this is an imbalanced outcome: there are many more blips that are geese than blips that are bombers.
  • Solution:
  • In cases where it’s a bomber, what portion do we get correct? (Sensitivity)
  • In cases where it’s not a bomber, what portion do we get correct? (Specificity)

### Slide 19

  • Early medical diagnostics:
  • https://www.acpjournals.org/doi/abs/10.7326/M20-5028?cookieSet1

### Slide 20

  • How good am I at predicting who has MRSA pneumonia?
  • Overall, Dumb Brian is 96.1% accurate!
  • In cases where it IS MRSA: Dumb Brian is 0% accurate (0% Se)
  • In cases where it is not MRSA: Dumb Brian is 100% accurate (100% Sp)

### Slide 21

  • SpIN & SnOUT: INCORRECT AND MISLEADING
  • Specific tests rule diagnoses in; Sensitive tests rule diagnoses out
  • Dumb Brian is 100% specific (probability of a correct answer when the disease is not present), yet is of no use for ruling diagnoses in or out.

### Slide 22

  • How good am I at predicting who has MRSA pneumonia? Coin Flips
  • Overall, Coin Flipping Dumb Brian is 49.8% accurate!
  • In cases where it IS MRSA: Dumb Brian is 54.3% accurate (Se)
  • In cases where it is not MRSA: Dumb Brian is 49.7% accurate (Sp)

### Slide 23

  • How good am I at predicting who has MRSA pneumonia? Dice: 1–4 not MRSA, 5–6 MRSA
  • Overall, Dice-Rolling Dumb Brian is 64.8% accurate!
  • In cases where it IS MRSA: Dumb Brian is 36% accurate (Se)
  • In cases where it is not MRSA: Dumb Brian is 65.9% accurate (Sp)

### Slide 24

  • What is the pattern here?

| Strategy             | Any relation to outcome? | Accuracy | Se    | Sp    |
|----------------------|--------------------------|----------|-------|-------|
| Always “no”          | No                       | 96.1%    | 0%    | 100%  |
| Die: 1–4 no, 5–6 yes | No                       | 64.8%    | 36%   | 65.9% |
| (Fair) coin flip     | No                       | 49.8%    | 54.3% | 49.7% |

  • Rule of 100: a test where sensitivity and specificity sum to 100 is worthless (it does not change the likelihood that disease is present) – see the simulation sketch below
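The pattern is easy to reproduce by simulation. A minimal sketch (assuming the ~3.7% prevalence quoted earlier; exact percentages wobble from run to run, which is presumably why the slide’s figures differ slightly from the theoretical values):

```python
import random

random.seed(0)
N = 100_000
PREVALENCE = 0.037  # ~3.7% of CAP caused by MRSA, per the ERJ estimate above

has_mrsa = [random.random() < PREVALENCE for _ in range(N)]

strategies = {
    "always no":       lambda: False,
    "die (5-6 = yes)": lambda: random.randint(1, 6) >= 5,
    "fair coin":       lambda: random.random() < 0.5,
}

for name, guess in strategies.items():
    preds = [guess() for _ in range(N)]
    tp = sum(p and d for p, d in zip(preds, has_mrsa))
    tn = sum((not p) and (not d) for p, d in zip(preds, has_mrsa))
    fp = sum(p and (not d) for p, d in zip(preds, has_mrsa))
    fn = sum((not p) and d for p, d in zip(preds, has_mrsa))
    acc = (tp + tn) / N
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    print(f"{name:16s} accuracy={acc:.1%}  Se={se:.1%}  Sp={sp:.1%}")
```

Note that the always-no strategy earns the best accuracy of the three despite carrying no information at all.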
### Slide 25

  • Summary:
  • Accuracy is a poor measure of performance when the outcome is uncommon
  • “Classification of imbalanced data”: the number of “yes” cases is nowhere near the number of “no” cases
  • Even common diagnoses are often far from 50:50
  • Sensitivity and Specificity are used to solve this problem
  • but they do not mean what we take them to mean (utility in ruling diagnoses out or in)

### Slide 26

  • This applies to tests, but also personal performance.
  • We shouldn’t use “accuracy” in the context of excluding rare diagnoses
  • It takes much better testing (or sharper reasoning) to make a rarer diagnosis
  • Thus you ought to be more skeptical of claims that one has been made (Aberegg and Callahan 2021)
  • It is more impressive to have correctly identified something that is rare.
  • Brier Score: see the sketch below
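For reference, since the bullet is left unfinished: the Brier score is the mean squared error between forecast probabilities and the observed 0/1 outcomes, and unlike accuracy it rewards honest probabilities. A minimal sketch (the forecasts and outcomes here are invented):

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and observed 0/1 outcomes.

    Lower is better. A constant 50% forecast scores 0.25; always forecasting
    the base rate scores (base rate) * (1 - base rate).
    """
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes = [0, 0, 0, 1, 0]  # 1 = MRSA pneumonia
print(brier_score([0.037] * 5, outcomes))                      # base-rate forecaster
print(brier_score([0.01, 0.02, 0.01, 0.80, 0.05], outcomes))   # informative forecaster
```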
### Slide 27

  • Why do Se and Sp ≠ usefulness for ruling in and out?
  • Conditional probability:
  • the probability of event B occurring IF event A occurs.
  • P(B | A)
  • If event A does not occur, the conditional probability has no relevance.
  • Sensitivity: conditional probability of a correct test IF the patient has disease
  • Specificity: conditional probability of a correct test IF the patient does not have disease
  • Which applies to our patient with pneumonia?

### Slide 28

  • Backwards probability:
  • A conditional probability that depends on an event that can’t be known (yet) at the time of use.
  • A p-value is another example of this: IF the null hypothesis were true, what would be the chance of seeing this result (or one more extreme)?
  • Of people who have the disease, how many have a positive test result? Backward probability
  • Of people who have a positive test result, how many have the disease? Forward probability (what we want – see the sketch below)
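A natural-frequency sketch makes the gap between the two directions concrete. The Se/Sp below are invented round numbers (not the published swab characteristics); even with both high, the forward probability of disease given a positive test stays modest at low prevalence:

```python
# Natural-frequency illustration of backward vs. forward probability.
# Se/Sp are hypothetical round numbers, NOT published MRSA swab characteristics.
prevalence = 0.037   # pretest probability (~3.7% of CAP, as above)
se, sp = 0.90, 0.95  # assumed test characteristics

n = 100_000
diseased = prevalence * n
healthy = n - diseased

true_pos = se * diseased          # backward world: condition on disease status
false_pos = (1 - sp) * healthy

ppv = true_pos / (true_pos + false_pos)  # forward: condition on the test result
print(f"P(test+ | disease) = {se:.0%}, but P(disease | test+) = {ppv:.1%}")
```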
### Slide 29

  • Does it matter?
  • We (clinicians and researchers) empirically can’t interpret these correctly
  • Gerd Gigerenzer: the majority of MDs surveyed get this wrong
  • They think it’s the chance of disease given the result
  • P-value interpretations
  • “The chance this finding is true given the result P<0.05” ≠ “the chance you’d get this result if the null hypothesis is true”

### Slide 30

  • TODO: No text extracted from this slide.

### Slide 31

  • So when ARE sensitivity and specificity the right tool?
  • You are a researcher trying to construct the best MRSA nasal swab possible. You want to tune the threshold for which cycle count of DNA counts as positive.
  • Your aim is to choose the right threshold to balance the number of false positives vs false negatives.
  • Or, you want to understand the FP and FN rates if a known test is applied to a population
  • Or, you were a receiver operator interested in weighing FN and FP rates.
  • Conditioning on unknowable information: ameliorated by repeating over many trials.
  • Choose a threshold to balance the two: Receiver Operating Characteristic (ROC) curve analysis (toy example below)
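A toy version of that tuning exercise, with invented cycle counts. The convention that lower cycle counts read as positive is an assumption here; flip the comparisons if the assay reports the reverse:

```python
# Sketch of ROC-style threshold tuning: sweep a cutoff over a continuous
# readout (e.g., PCR cycle count) and tabulate Se/Sp at each choice.
# The data below are made up for illustration.
cases    = [18, 22, 24, 25, 28, 30]           # cycle counts WITH disease
controls = [27, 30, 31, 33, 35, 36, 38, 40]   # cycle counts WITHOUT disease

for cutoff in range(20, 42, 4):
    se = sum(c <= cutoff for c in cases) / len(cases)      # lower count = positive
    sp = sum(c > cutoff for c in controls) / len(controls)
    print(f"cutoff <= {cutoff}: Se={se:.0%}, Sp={sp:.0%}")
```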
### Slide 32

  • TODO: No text extracted from this slide.

### Slide 33

  • What about +/- predictive value?
  • Conditional probability of +disease given +test
  • NOT a backward probability (which is good)
  • But depends on the prevalence in ‘comparable’ cases. (which is bad)
  • Often, there is no data on ‘comparable’ cases
  • A major draw of sensitivity and specificity is that they don’t depend on prevalence
  • PPV/NPV are good at a population level, but will mislead in individuals who have risks that differ from the population norm.

### Slide 34

  • What’s the alternative?
  • Likelihood ratios (same paradigm: classifying whether patients have a disease or not)
  • A likelihood ratio is a forward probability, but depends on an estimate of how probable the diagnosis is beforehand, rather than on the population prevalence (see the sketch below)
  • Or their extension (which can handle continuous variables): logistic regression [ ] discussed here https://pubmed.ncbi.nlm.nih.gov/15800301/
  • Risk modeling (paradigm shift: to predicting the risk of patient outcomes or predicting treatment responsiveness)
  • e.g., HLD is no longer a diagnosis; 10-year ASCVD risk superseded it.
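Mechanically, this is just the odds form of Bayes’ theorem: convert the pretest probability to odds, multiply by the LR, convert back. A minimal sketch; the 10% pretest probability and the 0.1 negative LR are placeholders, not published values:

```python
# How a likelihood ratio updates a pretest probability (odds form of Bayes).
def posttest_prob(pretest_prob: float, lr: float) -> float:
    """Convert probability -> odds, multiply by LR, convert back to probability."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Our patient: suppose risk factors put the pretest probability of MRSA at 10%,
# and the negative swab carries a (hypothetical) negative LR of 0.1:
print(posttest_prob(0.10, 0.1))   # ~0.011 -> about 1%
```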
### Slide 35

  • The likelihood principle (not Bayesian statistics)

### Slide 36

  • Computation of the likelihood ratio (sketched below)
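The standard computation, sketched in code (these are the usual definitions of +LR and −LR in terms of Se and Sp):

```python
# Likelihood ratios from Se and Sp.
def likelihood_ratios(se: float, sp: float) -> tuple[float, float]:
    lr_pos = se / (1 - sp)   # P(test+ | disease) / P(test+ | no disease)
    lr_neg = (1 - se) / sp   # P(test- | disease) / P(test- | no disease)
    return lr_pos, lr_neg

print(likelihood_ratios(0.85, 0.75))   # -> (3.4, 0.2)
```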
### Slide 37

  • Why are LRs better? Proper scoring rules
  • “Q: What is a proper accuracy scoring rule?
  • A: When applied to predicting categorical outcomes, a proper probability accuracy scoring rule is a measure that is optimized when the predicted probabilities are the true outcome probabilities. Examples of proper accuracy scores include the Brier score, the logarithmic probability score, and the log-likelihood from a correct statistical model. Examples of improper scoring rules, i.e., rules that are optimized by a bogus model, are proportion classified correctly, sensitivity, specificity, precision, recall, and the c-index (area under the receiver operating characteristic curve).”

### Slide 38

  • If you’re going to dichotomize results and disease states, the LR is a much better indicator of the strength of a result.
  • (You can generate LRs for various levels of a continuous variable, which is better.)

### Slide 39

  • ROC graphs: Se, Sp vs LR
  • [ROC plot residue: arrows labeled +Sp, +Se, +LR; axis label “how often positive”]
  • LR represents how far from the line of coin-flips a test is.
  • Further from the line is always better.
  • LR: proper scoring rule
  • Higher or more leftward is NOT always better
  • Se, Sp: not a proper scoring rule

### Slide 40

  • Test for comprehension (answers checked in the sketch below)
  • a) Se 85, Sp 75 -> +LR 3.4, -LR 0.20
  • b) Se 90, Sp 50 -> +LR 1.8, -LR 0.20
  • c) Se 95, Sp 25 -> +LR 1.27, -LR 0.20
  • https://calculator.testingwisely.com/playground
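The three examples can be checked mechanically with the same formulas; note that all three tests share the same −LR of 0.20 even though their Se/Sp profiles look quite different, which is the point of the exercise:

```python
# Verify the comprehension examples with the LR formulas from the slide above.
for se, sp in [(0.85, 0.75), (0.90, 0.50), (0.95, 0.25)]:
    print(f"Se {se:.0%}, Sp {sp:.0%}: "
          f"+LR = {se / (1 - sp):.2f}, -LR = {(1 - se) / sp:.2f}")
```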
### Slide 41

  • A 55F nurse presents with sepsis, chest pain, and a cavitating RUL infiltrate on CXR.
  • Blood cultures are drawn in the ER. She receives vancomycin, cefepime, and azithromycin.
  • Sputum cultures are also collected. An MRSA nasal swab is obtained and is negative. Influenza A is positive.
  • She says she’s been sick for a week, thought she was doing better, then worsened.
  • Pulmonary is consulted 12h after admission. A CT is recommended and confirms what looks to be a necrotizing pneumonia, with no evidence of a lesion, some ipsilateral LAD, and no underlying lung disease.
  • Do you advise the team to stop vancomycin?

### Slide 42

  • CME:
  • Question: your intern says “the MRSA nasal swab is good at ruling out MRSA pneumonia.” You are feeling extra pedantic today, insisting on only the most truthful statements. Which of the following is true?
  • The MRSA nasal swab is good at ruling out disease because it has a high specificity
  • The MRSA nasal swab is good at ruling out disease because it has a high sensitivity
  • The MRSA nasal swab is not a good test, but it can still be useful.
  • Ugh. Sensitivity and specificity? These metrics are confusing.

### Slide 43

  • Answer
  • Question: your intern says “the MRSA nasal swab is good at ruling out MRSA pneumonia.” You are feeling extra pedantic today, insisting on only the most truthful statements. Which of the following is true?
  • The MRSA nasal swab is good at ruling out disease because it has a high specificity
  • The MRSA nasal swab is good at ruling out disease because it has a high sensitivity
  • The MRSA nasal swab is not a good test, but it can still be useful: when MRSA pneumonia is unlikely, but not so unlikely that we can ignore it.
  • Ugh. Sensitivity and specificity? These metrics are confusing.

### Slide 44

  • SRMA of MRSA nasal swab. Also VA study?
  • Do they make errors about the distribution of covariates?
  • https://pubmed.ncbi.nlm.nih.gov/29340593/

### Slide 45

  • Returning to our case:
  • Healthcare exposure and a cavitary component are both risk factors for MRSA pneumonia

### Slide 46

  • Say you take my advice…
  • If you don’t stop the vancomycin and it ends up not being MRSA (as it usually won’t be), will you question whether I was right?

### Slide 47

  • 2 common problems with Se and Sp estimates in practice:
  • Spectrum Bias
  • Case-control design

### Slide 48

  • TODO: No text extracted from this slide.

### Slide 49

  • Case-control studies for diagnostic tests
  • Take two groups of patients: healthy controls and patients with (clear cases of) the disease. Then calculate the 2x2 table from these.
  • This excludes many patients who would be more prone to false positives and false negatives:
  • People with milder disease (sensitivity increases if there is any comorbidity that increases severity in the population)
  • Consider: the EKG stress test is more sensitive in older patients, men, and those w/ more vessel dz
  • People with alternative conditions
  • When we use tests, we ALWAYS use them where either the patient is ill with the disease in question, or they are ill with another disease. (Healthy isn’t a comparator.)

### Slide 50

  • Test your learning:
  • Can the MRSA nasal swab rule out MRSA in sinus infections? Is it a better or worse test for MRSA in the sinuses, and why?

### Slide 51

  • Summary Points
  • Some tests are very informative while others are not: we need to represent this somehow.
  • Accuracy is misleading in imbalanced data, which is most of medicine.
  • Se/Sp fix this and are useful for epidemiologists, but they cause confusion because they are backward probabilities.
  • PPV & NPV are useful at a population level, but patients who differ from the population average are not well represented.
  • We can represent the informativeness of diagnostic tests in a way that matches what we intend to communicate
  • Use likelihood ratios to represent test characteristics in clinical care.

### Slide 52

  • Other slides

### Slide 53

  • What proportion of patients with CAP who have MRSA nasal swabs ordered have a positive result?

### Slide 54

  • Roadmap: Problems with Se and Sp
  • They are conditional probabilities that are conditional on something you don’t know
  • They make continuous variables into dichotomous ones, and thus discard information
  • They do not describe the ability of a test to rule out (sensitivity) and rule in (specificity), like people think they do
  • They are often derived from case-control studies, which over-estimate the usefulness of tests
  • Therefore, they are unnatural and confusing ways to describe the usefulness of tests. It is only a historical coincidence that we use them.

### Slide 55

  • MRSA review

### Slide 56

  • Good discussion; review for framing
  • https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models

### Slide 57

  • Abductive reasoning: [ ] take this from other paper

### Slide 58

  • https://www.bmj.com/content/358/bmj.j4071?ijkey7569ba3edc8f57718804938da71b1cadd2390a6d&keytype2tfipsecsha
  • This one argues that test characteristics can only really be understood in the context of a differential.

### Slide 59

  • Se and Sp are NOT proper scoring rules
  • Se vs Sp: finding the optimal cutoff requires weighing the harm of a FP vs a FN – for this, Se and Sp are useful. However, we are not doing this – we are interested in the amount of information added. … Maximizing LR and an analysis of FP and FN might give different results?

### Slide 60

  • Diagnosticity argument:
  • Do not use sensitivity, specificity, NPV, or PPV to talk about the “goodness” or usefulness of a test in individual patient care: it is misleading, and the only reason we do it is an idiosyncrasy of how things were initially conceptualized.

### Slide 61

  • Argument:
  • Imagine that a patient has a 10% chance of having a disease.
  • (Either: if you saw a patient with the same relevant characteristics 10 times, on average 1 would have the disease – the frequentist perspective. Or: given what I know, there is a 10% chance that this patient is, in fact, diseased – the Bayesian perspective.)

### Slide 62

  • Out of the following tests (varying Se and Sp), which would you like best?
  • All the same
  • Thus, Se (or Sp) alone does not tell us what we want to know.
  • Additionally, while NPV and PPV discriminate OK (meaning, a better PPV or a better NPV does index a better test), they are miscalibrated – your estimate will be wrong if your patient has any characteristics that make their risk different from the population average.

### Slide 63

  • Thus: use likelihood ratios.
  • Then go into Harrell’s arguments.
  • https://hbiostat.org/blog/post/addvalue/

### Slide 64

  • —> dive into specificity/sensitivity; the idea of diagnostic tests as very simple models -> critique of the analysis approach from that lens.
  • (Dichotomizing = further simplification and loss of information)
  • (Considering a result in isolation - without collinear information - is not)

### Slide 65

  • Why can’t we just say how accurate a test is?
  • Accuracy does not tell us in what way we are wrong.
  • Accuracy = (TP + TN) / (TP + FP + TN + FN): “how often does the test get it right?”
  • If positives are much less common than negatives, always saying “no disease” will be highly accurate (called “class imbalance”)
  • Why did we end up with sensitivity and specificity in the first place?
  • —> Accuracy of a test: shortcomings.
  • Class imbalance…. Epi textbook

### Slide 66

  • ?Reference class problem - study patients “like the patient” - but there are an infinite number of ways you could do this.
  • Ability of a test to add information to other tests
  • (Brief detour into added-value metrics like the net reclassification index)
  • Assessment of a diagnostic test involves consideration of 3 components of effectiveness: accuracy, clinical utility, and patient benefit:
  • To be effective, a test must be accurate, which is determined by sensitivity (the true-positive rate) and specificity (the true-negative rate) [REFORMULATED AS LIKELIHOOD RATIOS].
  • A diagnostic test with clinical utility is one that has a measurable net positive effect on a patient’s clinical outcome.
  • —> can avoid unnecessary exposure.
  • A diagnostic test with patient benefit is one that positively affects a patient’s health or well-being in some way, even if the test result does not directly influence clinical treatment decisions or prognosis.
  • —> comparison to MRSA-based de-escalation? Probably depends a lot on what

### Slide 67

  • Why did we even develop this convention in the first place?
  • Why did we even adopt this convention?
  • https://mlu-explain.github.io/roc-auc/
  • ‘ROC curves were first employed during World War 2 to analyze radar signals: After missing the Japanese aircraft that carried out the attack on Pearl Harbor, the US wanted their radar receiver operators to better identify aircraft from signal noise (e.g. clouds). The operator’s ability to identify as many true positives as possible while minimizing false positives was named the Receiver Operating Characteristic, and the curve analyzing their predictive abilities was called the ROC Curve. Today, ROC curves are used in a number of contexts, including clinical settings (to assess the diagnostic accuracy of a test) and machine learning (the focus of this article).’
  • The AUC is the probability that the model will rank a randomly chosen positive example more highly than a randomly chosen negative example
  • if the model always predicts a positive sample is more likely to have a positive label than a negative sample, then it will have an AUC of 1. This strategy also provides a very easy method to estimate the AUC: simply tally up the proportion of correctly ranked positive-negative pairs!
  • In cases when you desire well-calibrated probability outputs (i.e. when you care about the true likelihood of an event, and not just model performance), ROC/AUC will give us no such information. Additionally, the AUC metric treats all classification errors equally, ignoring the potential consequences of ignoring one type over another.
  • http://bedside-rounds.org/episode-63-signals/
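The “tally up correctly ranked pairs” estimate of the AUC described above is easy to demonstrate. A minimal sketch with invented scores (ties get half credit, the usual convention):

```python
# AUC as the proportion of correctly ranked positive-negative pairs.
scores = [0.9, 0.8, 0.35, 0.3, 0.2]   # model outputs (invented)
labels = [1,   0,   1,    0,   0]     # 1 = positive class

pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]

pairs = [(p, n) for p in pos for n in neg]
auc = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs) / len(pairs)
print(auc)  # 0.833... for this toy data
```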
### Slide 68

  • “A test result is merely a number that is open to interpretation. The idea that a test is either true or false took seed in “signal-detection-theory” in the 1940’s in radar and psychometrics. A signal on a radar screen may be an enemy, or a flock of birds, and training in radar detection aimed to increase the true detections of enemy while reducing a retaliatory barrage at our feathered friends (falsely detected as enemy). In diagnostic testing, values for the test result above the cut-off value are positive (enemy), and, below, negative (birds). However, as in radar detection, some signals are stronger than others and the ability to determine true from false depends on the strength of the signal.”

### Slide 69

  • Bayesian vs Likelihoodist vs Frequentist statistical formulations - https://plato.stanford.edu/entries/probability-interpret/

### Slide 70

  • Statist. Med. 2000; 19:1783–1792. “The (In)Validity of Sensitivity and Specificity”

### Slide 71

  • ‘Type 1. The first is the ‘overall’ likelihood ratio, which is the frequency of a finding in those with a diagnosis divided by the frequency of the same finding in those without the diagnosis (also called the ‘sensitivity’ of a test divided by its ‘false positive rate’, i.e. 1 minus the specificity).
  • Type 2. The second type is the ‘differential’ likelihood ratio, which is the frequency of the finding in those with a diagnosis divided by the frequency of the same finding in a rival differential diagnostic possibility. This involves one ‘sensitivity’ being divided by another ‘sensitivity’.
  • It is the second type—the differential likelihood ratio that allows a finding to be assessed for use in diagnosis by probable elimination. This ratio needs to be as high as possible; if it is infinity, then it shows that the rival diagnosis is impossible. When a likelihood ratio is used in calculations, it is the rival diagnosis’s sensitivity that is put on top, so that the likelihood ratio of infinity becomes a likelihood ratio of zero.
  • https://academic.oup.com/book/31795/chapter/266181309?loginfalse

### Slide 72

  • Take-aways
  • For decisions about testing on individual patients, sensitivity and specificity are the wrong test characteristics to use: they are confusing to apply (reference class problem, backward probabilities) and are not proper scoring rules for their common use case.
  • Positive and negative predictive value are also inappropriate as they imply all patients are equally likely to have the disease.
  • Likelihood ratios address all the above issues: they are easier to understand and they mean what we think they mean.
  • They are not perfect (dichotomization)

### Slide 73

  • Review approach in: Tell me the odds,
  • https://a.co/6ksOrXe
  • https://www.amazon.com/gp/product/B01AZXQY1K
  • http://www.fairlynerdy.com/an-intuitive-guide-to-bayes-theorem/

### Slide 74

  • Se, Sp are not the right operating characteristics for clinical use
  • Confusion matrix:

|              | Test (+)            | Test (-)            |
|--------------|---------------------|---------------------|
| Ref. Std (+) | True Positive (TP)  | False Negative (FN) |
| Ref. Std (-) | False Positive (FP) | True Negative (TN)  |

  • Se = TP / (TP + FN): in those w/ dz, what portion does the test get right?
  • Sp = TN / (TN + FP): in those w/o dz, what portion does the test get right?
  • In practice, we do not know which row we are in.
  • PPV = TP / (TP + FP): in those w/ a (+) test, what portion does the test get right?
  • NPV = TN / (TN + FN): in those w/ a (-) test, what portion does the test get right?
  • In practice, we DO know which column we are in. (See the sketch below.)
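A small function that computes all four summaries from the same 2x2 counts makes the row-versus-column conditioning explicit (the counts below are made up):

```python
# All four 2x2 summaries from the confusion matrix above.
def test_characteristics(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    return {
        "Se":  tp / (tp + fn),   # condition on the row: disease present
        "Sp":  tn / (tn + fp),   # condition on the row: disease absent
        "PPV": tp / (tp + fp),   # condition on the column: test positive
        "NPV": tn / (tn + fn),   # condition on the column: test negative
    }

print(test_characteristics(tp=30, fp=20, fn=10, tn=940))
```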
### Slide 75

  • Relationship to “information added”
  • Conditional probability:
  • Pr([Test+] given [has dz]) = Pr([Test+] and [has dz]) / Pr([has dz])
  • Bayes’ theorem:
  • Pr([has dz] given [Test+]) = Pr([has dz]) × Pr([Test+] given [has dz]) / Pr([Test+])
  • We want: conditional probability the patient has the disease given a positive test / conditional probability the patient doesn’t have the disease given a positive test
  • Plug in to Bayes and solve: Pr([Test+]) drops out, leaving you with a likelihood ratio and the Pr([has dz]), aka your pretest probability.
    • Likelihood ratio = { Pr([Test+] and [has dz]) / Pr([has dz]) } over { Pr([Test+] and [no dz]) / Pr([no dz]) }
  • +LR = Se / (1 - Sp)
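Written out, the algebra the slide walks through is the odds form of Bayes’ theorem:

```latex
% Odds form of Bayes' theorem: post-test odds = LR+ x pre-test odds
\frac{\Pr(D \mid T^{+})}{\Pr(\bar{D} \mid T^{+})}
  = \underbrace{\frac{\Pr(T^{+} \mid D)}{\Pr(T^{+} \mid \bar{D})}}_{LR^{+} \,=\, \frac{Se}{1-Sp}}
    \times
    \underbrace{\frac{\Pr(D)}{\Pr(\bar{D})}}_{\text{pre-test odds}}
```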
### Slide 76

  • Sp 80%, Se 20% – a good confirmatory test?
  • Rule of 100: if Se + Sp = 100, then LR = 1. No information is added. (Checked below.)
  • Spectrum of utility
  • Do one of these with the pie charts?
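Checking that example against the formulas above:

```python
# Sp 80%, Se 20%: Se + Sp = 100, so the test carries no information.
se, sp = 0.20, 0.80
print(se / (1 - sp))   # +LR = 0.20 / 0.20 = 1.0
print((1 - se) / sp)   # -LR = 0.80 / 0.80 = 1.0
```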
### Slide 77

  • TODO: No text extracted from this slide.

238.3 Learning objectives

  • PGR – Beyond Sensitivity and Specificity
  • A 55F nurse presents with sepsis, chest pain, and a cavitating RUL infiltrate on CXR.
  • CME:
  • Answer
  • A history of risk (which is, therefore, a history of probability and consequence)

238.4 Bottom line / summary

  • PGR – Beyond Sensitivity and Specificity
  • A 55F nurse presents with sepsis, chest pain, and a cavitating RUL infiltrate on CXR.
  • CME:
  • Answer
  • A history of risk (which is, therefore, a history of probability and consequence)

238.5 Approach

  1. TODO: Outline the initial assessment or decision point.
  2. TODO: Outline the next diagnostic or management step.
  3. TODO: Outline follow-up or escalation criteria.

238.6 Red flags / when to escalate

  • TODO: List red flags that require urgent escalation.

238.7 Common pitfalls

  • TODO: Capture common errors or missed steps.

238.8 References

TODO: Add landmark references or guideline citations.

238.9 Slides and assets

238.10 Source materials