Troponins
- Troponin → Troponin C, Trop I, Trop T; I & T are “heart specific”
- STEMI & NSTEMI
- ECG
- Tombstoning
- Reciprocal changes
- Serial ECG and serial troponin…
- HEART score / HEART pathway
- Current literature says the cath lab increases mortality for NSTEMIs (i.e., cath lab = STEMI only)
- Addie alluded to it with the “door to needle time” comment, but before there was the cath lab there were fibrinolytic agents
- Myocarditis? (Can cause troponin bumps that aren’t indicative of MI)
- Myocarditis due to..? coxsackie, hep c, hiv, chagas
- Pericarditis can also bump troponin → more on that in cardio episode 5
- Algorithm study Franke mentioned → https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5613667/
Regarding high sensitivity troponin assays and endurance sports, here is a paper from 2019 showing troponin leaks in more than 100 runners who were prospectively enrolled and planned to take part in a marathon. “These results demonstrate that marathon running is associated with an asymptomatic cTnT rise for all runners, and this rise is significantly correlated to baseline cTnT levels, in addition, marathon runners with pre-existing cardiac pathology or who collapse at the finish line do not exhibit an increased cTnT rise compared to healthy runners.”
- Leckie T, et al. “High-sensitivity troponin T in marathon runners, marathon runners with heart disease and collapsed marathon runners.” Scand J Med Sci Sports. 2019. https://www.ncbi.nlm.nih.gov/pubmed/30664255
Angina
- “In 1959, remarks on ‘A variant form of angina pectoris’ by Dr. Myron Prinzmetal (1908-1987) appeared as the first article distinguishing it as a separate entity from the classic Heberden’s angina described in 1772, a distinct syndrome with pain provoked with increased cardiac work and relieved by rest or the administration of nitroglycerin. In Prinzmetal’s original report of 32 cases of variant angina, the pain associated with transient ST-segment elevation came on with the subject at rest or during ordinary activity but was not brought on by exercise.” https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4166862/
Franke asks → Does unstable angina really exist? (His written commentary below)
When we think about angina, we usually think of stable and unstable angina. There are a few other types (Prinzmetal etc.), but these are more rare.
Generally speaking angina is due to an imbalance of myocardial oxygen supply and demand, and as a result you get chest pain. Stable angina occurs with exertion. Essentially as the demand increases, the supply cannot compensate fully. This is typically the result of some amount of atherosclerosis preventing sufficient blood flow. This does not cause death or damage to myocardial tissue, which is a key point.
Unstable angina is a similar chest pain (lack of sufficient perfusion) that occurs at rest or at random. This is the result of disruption of an atherosclerotic plaque causing partial thrombosis, leading to myocardial ischemia. This is distinguished from an NSTEMI by the absence of a troponin rise.
Our troponin tests have gotten way more sensitive (the smallest amount of troponin that can reliably be detected with a commercially available tool is 0.2 pg/mL, which is 2×10^-10 mg/mL). This intuitively means that even relatively mild amounts of ischemia (that cause the slightest amount of tissue death) should produce a detectable increase in troponin. To quantify the increased sensitivity in troponin levels that we can detect: from 1995 to 2007, the limit of troponin detection fell from 0.5 ng/mL to 0.006 ng/mL (~99% decrease), and the more recent tests are now sensing as little as 0.0002 ng/mL (a further ~97% decrease).
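A quick back-of-the-envelope check of the detection-limit numbers quoted above (the specific values are the ones from the text, not from any one assay's spec sheet):

```python
# Sanity-check the troponin detection-limit arithmetic quoted above.

# Unit conversion: 0.2 pg/mL expressed in mg/mL (1 pg = 1e-9 mg)
pg_per_ml = 0.2
mg_per_ml = pg_per_ml * 1e-9
print(f"{pg_per_ml} pg/mL = {mg_per_ml:.1e} mg/mL")  # 2.0e-10 mg/mL

def pct_decrease(old, new):
    """Fractional decrease from old to new, as a percentage."""
    return (old - new) / old * 100

# Limit of detection: 1995 -> 2007 -> recent high-sensitivity assays
print(f"0.5 -> 0.006 ng/mL: {pct_decrease(0.5, 0.006):.1f}% decrease")        # 98.8% (~99%)
print(f"0.006 -> 0.0002 ng/mL: {pct_decrease(0.006, 0.0002):.1f}% decrease")  # 96.7% (~97%)
```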
As a result of this, recent studies have shown that a lot of people who are having “unstable angina” are actually having very mild NSTEMIs, as there is some troponin leakage (which means there was some tissue death). It seems likely, given what we have seen so far with our more sensitive troponin tests, that as these get even more sensitive, we will eventually phase out “unstable angina” as a diagnosis.
Sources:
- http://circ.ahajournals.org/content/106/14/1893
- http://circ.ahajournals.org/content/127/24/2452
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2721709/
- Other point that came up when I was researching this : When you have a STEMI, do the cells undergo apoptosis or necrosis?
- What is the difference between these two? Apoptosis is cell mediated, and “organized” whereas necrosis is more external and damaging.
- In STEMIs it appears as though both are actually occurring, though the proportion of each is unclear.
Clinical Pearl
LVAD, how do you take the blood pressure of a patient with one?
- Find the MAP with doppler (details on how to do that here)
Fun fact, I recently learned that the automated BP cuffs in the hospital aren’t directly reading systolic and diastolic numbers. Rather, they are finding the MAP and then using proprietary algorithms to infer the systolic & diastolic numbers. Here is an editorial suggesting that these may not be incredibly accurate. (My own personal experience with primary care clinics using automated wrist cuffs is that they are always ~20 or more points high on the systolic, though I had suspected this was a function of their being cheapo devices and also being on the wrist.)
Heterotopic Heart Transplantation
IMAGE SOURCE: Pratticò, et al. “Worsening Dyspnea in a Man With 2 Hearts.” Ann Emerg Med. 2012. doi:10.1016/j.annemergmed.2011.11.039
Vocab
Temporizing.
- “Therefore, hyperventilation is recommended only as a temporizing measure for the reduction of elevated ICP and should not be used for routine management or prophylaxis.” — Rosen’s Emergency Medicine: Concepts and Clinical Practice
- [In typhlitis] “Percutaneous drainage of large abscesses (>5 cm in diameter) can be temporizing in acutely ill patients.” — Infectious Disease Essentials
- “Phlebotomy of 2–3 units of blood is also an effective temporizing treatment modality.” — Emergency Medicine: Avoiding the Pitfalls and Improving the Outcomes
Step1 Q
Addie’s editorial note: we didn’t have a Step 1 Q written in time for recording. Possibly we will find a way to test this concept later, as there are multiple causes of it, but the topic I had wanted to probe was von Willebrand factor deficiency secondary to LVAD. You see, VWF normally floats around in the blood in a rolled-up form, sort of like a much smaller and far less delicious cinnamon roll. If it unfurls, this can expose an interior domain that leaves it vulnerable to cleavage by ADAMTS13. Unfurling can be caused by high shear forces, such as those seen in aortic stenosis and potentially with LVAD.
The key point here is that while VWF dyscrasia is usually a genetic condition, it can also be secondary to certain physiology.
Here is a citation from Blood from 2016 → https://www.ncbi.nlm.nih.gov/pubmed/27143258
EBM → Bayesian thinking and BNP
Bayes was a statistician, minister, and philosopher. He is best known now for Bayes’ Theorem, which was uncovered in his notes after his death. Bayes’ Theorem, aka “Bayesian thinking,” basically says:
You have some idea how likely something is. Then you learn some new information, and this new information allows you to update your understanding of the likelihood.
For example → An American patient has a chronic cough. How likely are they to have lung cancer? Probably you have decided “not very likely.” If we update the clinical picture to include the information that the person is a pack-and-a-half-a-day smoker, does that change your estimation? I expect it does. This is basic Bayesian thinking. You were probably asking yourself about the proportion of chronic coughs that turn out to be lung cancer, and then later trying to integrate some intuition about the RR or OR of cancer for persons with 1.5 pack/day smoking habits (vs non-smokers).
Bayesian thinking is important in the clinical environment because we have a tendency to think of laboratory tests and CT scanners as magic truth devices. Each of these has important test characteristics, and many of these tests have such useful characteristics that in very many cases a positive or negative result gives you a clear answer, which is why we tend to think of them as somewhat infallible. But many tests don’t have such clearcut statistics. And even for the tests with phenomenal characteristics, if the probability of a positive (before the test) is phenomenally low, don’t be fooled into accepting a positive result at face value. (Likewise, if the probability of a positive is quite high before the test, in many cases you will not want to blindly accept a ‘negative’ result.)
Basically, the baserate (aka prevalence) is important for understanding the predictive value of the test.
To look at this subject using a bit of an absurd example: if only 1% of your patient population has a disease of interest, and you use a coin flip to “test” who is positive and negative, then a negative test result means a ~99% chance that a given patient is a true negative. Having a ~99% NPV sounds great! Of course we know that this is a worthless test, since we declared the test was a coin flip, and we have a strong intuition about how useful coin flipping is (and indeed the sensitivity and specificity shown in the image below bear this out). Nonetheless, if you neglected to consider the baserate when you were told about a fancy new test with a ~99% negative predictive value, you could easily be led into thinking this is an infallible test, rather than a crappy test being used in the setting of a rare finding. This is important in emergency medicine because many of the things we consider ‘do not miss’ diagnoses are, even among people who come to the ED, not all that common.
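The coin-flip example is easy to verify numerically. A minimal sketch, assuming the 1% prevalence from the text and a “test” whose sensitivity and specificity are both 50% (which is what a fair coin gives you):

```python
# NPV of a coin-flip "test" in a population with 1% disease prevalence.
prevalence = 0.01
sens = 0.5   # a coin flip "detects" half of the true positives
spec = 0.5   # ...and correctly "clears" half of the true negatives

# Take an arbitrary cohort of 10,000 patients:
n = 10_000
true_neg  = spec * (1 - prevalence) * n   # healthy, coin says negative  -> 4950
false_neg = (1 - sens) * prevalence * n   # diseased, coin says negative ->   50

npv = true_neg / (true_neg + false_neg)
print(f"NPV = {npv:.1%}")  # 99.0% -- driven entirely by the base rate
```

The “great” NPV falls straight out of the rarity of the disease, not out of any property of the test.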
A key to operationalizing Bayesian thinking is the likelihood ratio (“LR”), which, mathematically, is the ratio of true positive rate (“TPR”) to false positive rate (“FPR”). That is → [TPR/FPR] → which is the same as: [SENSITIVITY / (1−SPECIFICITY)] for the LR+ (for the LR−, it is [(1−SENSITIVITY) / SPECIFICITY]). (More formulas here.) Maybe you vaguely remember LR+ and LR− from your EBM / population health course in MS1. What most people take away, if anything, is that LR+ over 10 and LR− under 0.1 are the thresholds for really clinically useful tests, and any test with an LR+ below 10 or an LR− above 0.1 must be looked at carefully. What is powerful about knowing the LR is that you can use Bayesian thinking (ie your pretest probability factored with the LR) to arrive at a post-test probability.
Imagine a disease with a very low prevalence, like HIV in the US population (~0.3%). Now imagine the test for that disease has very good sensitivity and specificity. In the case of HIV rapid tests meant for point of care, this is something like 99.6% and 99.7% [citation]. You can use Fagan’s nomogram to find 0.3% on the right hand column, draw a straight line through the middle column’s value for LR and arrive at the left hand column (which gives you the posterior probability). Remember, as mentioned in the show, every other time I have seen this nomogram it has been the mirror image of above → you start on the left, go through the middle, and end at the right (which makes more sense for folks who read left-to-right).
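Putting the HIV example through the math rather than the nomogram. A sketch assuming the prevalence and test characteristics quoted above (~0.3% prevalence, 99.6% sensitivity, 99.7% specificity); the LR machinery is just odds-times-ratio:

```python
# Post-test probability via likelihood ratios: post-odds = pre-odds * LR.
prevalence = 0.003  # ~0.3% US HIV prevalence (pretest probability)
sens = 0.996
spec = 0.997

lr_pos = sens / (1 - spec)               # TPR / FPR -> ~332, well above the LR+ > 10 threshold
pre_odds = prevalence / (1 - prevalence) # convert pretest probability to odds
post_odds = pre_odds * lr_pos
post_prob = post_odds / (1 + post_odds)  # convert back to probability

print(f"LR+ = {lr_pos:.0f}")
print(f"Post-test probability after a positive = {post_prob:.1%}")  # ~50%
```

Despite a test with phenomenal characteristics, a positive result only gets you to roughly even odds, which is exactly the base-rate point above (and why a positive screen gets confirmatory testing).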
Regarding BNP being a ‘poor’ test because of its low predictive value:
“BNP Is Not a Value-Added Routine Test in the Emergency Department.” Annals of Emergency Medicine. 2009. https://www.annemergmed.com/article/S0196-0644(08)01856-8/pdf
“B-Type Natriuretic Peptide Testing, Clinical Outcomes, and Health Services Use in Emergency Department Patients With Dyspnea: A Randomized Trial.” Annals of Internal Medicine. 2009. https://annals.org/aim/article-abstract/744379/b-type-natriuretic-peptide-testing-clinical-outcomes-health-services-use
— Episode credits —
Hosted by Addie, Kim, and Alex.
Audio production and editing by Addie.
Show notes written by Addie (with a little help from Alex).