Pain Reduction by Distraction

Here’s the paper (“The Effectiveness of Distraction as Procedural Pain Management Technique in Pediatric Oncology Patients: A Meta-analysis and Systematic Review”) that we talked about regarding gate theory and distraction in pain reduction.

At the beginning of our discussion we watched a video of a pediatrician with amazing kiddo-distracting skills. Here’s a short article on him — he runs a solo practice outside DC. It’s a quick and worthwhile read to get a sense of a practice model that might appeal to some of y’all. We didn’t watch this video in the episode, but it’s another great one along the same lines.

We also talked about a commercial-slash-consumer device that goes on the appendage of a kiddo and uses buzzing and temperature sensations to reduce injection pain. Here’s the YouTube video we watched of a young person named Tenley who is super brave and has some pretty admirable injection skills.

News Article

Bernie Sanders was in the news recently, drawing attention to the $375,000 cost of a medicine that used to be free. How was it free? Well, that’s an interesting and capitalistically heartwarming story of a small pharmaceutical company.

Early Childhood Radiation (and Later Cognition)

We looked at this 2004 Swedish study in the BMJ → “Effect of low doses of ionising radiation in infancy on cognitive function in adulthood: Swedish population based cohort study” → This is a really interesting paper for several reasons. First, it has some pretty remarkable (and scary) findings, particularly given how often CT scans are performed on young children. The design of the study looks reasonably good to us, but at the same time, it is retrospective. Given the nature of what it’s investigating, as well as the timeline involved and many other logistics, it would be incredibly difficult to undertake a prospective study of this sort. Sometimes retrospective looks at natural experiments are the best we can get for certain situations.

Which brings us to the second really interesting aspect: the total lack of attention paid to this study. I spent some time poking around in other scholarly works that cited this one (there are 349 citations for it, so I skipped plenty that seemed irrelevant). It’s cited in several textbooks, but merely as an afterthought (it’s used to support some brief statement, like “radiation can have insidious effects on later life cognitive abilities,” with no additional discussion of its veracity). I was hoping to find something, anything really, that took a serious look at its findings, whether in support or in opposition. The almost total silence is in itself interesting, particularly because this was published in the BMJ, which is not exactly an obscure journal.

The Swedish radiation study is the most modern study of its kind that I’m aware of. The only studies that even roughly fit into the same category are this one from 1982, a pair of studies from 1966 and 1978 (the latter a follow-up), and finally a number of studies of radiation effects in survivors of the atomic bombings in Japan. The data I have seen suggest that fetuses exposed to that radiation did not show the same effects the Swedish study suggested. While it’s possible that being in utero would lead to different effects than being an infant, one might expect those effects to be magnified, not reduced. We also briefly touched on the idea of cross validation in model/hypothesis testing.
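Since cross validation came up only briefly, here’s a minimal sketch of what k-fold cross validation looks like in practice, just to make the idea concrete. The model and dataset below are placeholders (nothing from the radiation literature); the point is only that each fold of the data gets scored by a model that never saw it during fitting.

```python
# Minimal k-fold cross-validation sketch (illustrative placeholders only).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in dataset; in real use this would be your study data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000)

# 5-fold CV: fit on 4/5 of the data, score on the held-out 1/5, rotate.
scores = cross_val_score(model, X, y, cv=5)
print(f"fold accuracies: {scores.round(3)}, mean: {scores.mean():.3f}")
```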

We also talked about the long history of unethical medical experimentation that has taken place in the United States. The Plutonium Files was mentioned (here’s a story on its writing that gives a synopsis), as well as Acres of Skin and Medical Apartheid. A few books not solely devoted to medical experimentation, but worth mentioning are Dying While Black, Killing the Black Body, and An American Health Dilemma. Here’s the JAMA review of An American Health Dilemma. Here’s a recent PBS Newshour segment with Dorothy Roberts, author of Killing The Black Body:

I also mentioned a video on avoiding sexual assault that is shown to new inmates (as part of PREA, the Prison Rape Elimination Act). I recalled this being a Pennsylvania prison system video, but it turns out it was Alabama. That said, Pennsylvania has its own PREA program, but the videos that might be used (their policy gives detention centers the option of using videos) are not available online, as far as I can find. Anyway, here is the link to the Alabama video, which I should warn you is likely to leave your heart feeling heavy, no matter how much you brace yourself for its subject matter.

Vocab

Our vocab word for the week was ‘ditzel’ → your guess is as good as ours on this one.

Clinical Pearl

We talked about Virchow’s node, as well as the test-taking strategy of interpreting painless lymphadenopathy differently from painful lymphadenopathy (the former being a hint at malignancy). Getting away from test taking and entering the real world, this paper [pdf] looked at the ability of family physicians to refer malignant cases within 4 weeks [aka: sensitivity]. It found sensitivity was 80-90%. Meanwhile, specificity [not referring benign cases] was 91-98%. Posterior probability (i.e., having malignancy if referred) was 11%. Another study found 17.5% of patients referred would have a malignancy. Lymphadenopathy of Virchow’s node (the supraclavicular node) increased the likelihood of malignancy by 50 percent [see: pdf link]. A worked version of the sensitivity/specificity/posterior math is sketched below.
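To make the relationship between sensitivity, specificity, and posterior probability concrete, here’s a minimal worked sketch of the Bayes calculation. The prevalence figure is an assumption chosen purely for illustration (it isn’t a number from the paper); the takeaway is just that even good sensitivity and specificity yield a modest posterior probability when malignancy is rare.

```python
# Posterior probability (positive predictive value) via Bayes' theorem.
# Sensitivity/specificity are mid-range values from the paper above;
# the 1% prevalence is a hypothetical value for illustration only.

def posterior_probability(sensitivity, specificity, prevalence):
    """P(malignancy | referred), i.e., the positive predictive value."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# With sensitivity 85% and specificity 93% (both within the reported
# ranges) and an assumed 1% prevalence, the posterior lands near the
# reported 11%.
print(f"{posterior_probability(0.85, 0.93, 0.01):.3f}")  # ~0.109
```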

EBM

As a bit of foreshadowing to the Great Belated DOAC Discussion that we’ve delayed for a couple of episodes now but finally tackled (see below!), we discussed the 2014 Annals of Internal Medicine article, “Reporting discrepancies between the ClinicalTrials.gov results database and peer-reviewed publications.” The results from the abstract speak for themselves:

“Of 110 trials with results, most were industry-sponsored, parallel-design drug studies. The most common inconsistency was the number of secondary outcome measures reported (80%). Sixteen trials (15%) reported the primary outcome description inconsistently, and 22 (20%) reported the primary outcome value inconsistently. Thirty-eight trials inconsistently reported the number of individuals with a serious adverse event (SAE); of these, 33 (87%) reported more SAEs in ClinicalTrials.gov. Among the 84 trials that reported SAEs in ClinicalTrials.gov, 11 publications did not mention SAEs, 5 reported them as zero or not occurring, and 21 reported a different number of SAEs. Among 29 trials that reported deaths in ClinicalTrials.gov, 28% differed from the matched publication.”

As mentioned in the podcast, this is hugely problematic because journals supposedly require trials to be registered and to follow best practices as a precondition for publication … but it seems they often don’t follow through on enforcing these requirements. And shifting the burden of enforcement to the peer reviewers (who are unpaid volunteers) is not a solution that is likely to work. Having sifted through the clinical trials database records for the 5 studies we’ll discuss below in the DOAC write-up, I can tell you that it is actually a fair amount of work to correlate what is said in the publication with what is in the various historical records. Each of the studies below had 5 or 10 records, each many pages long, that had to be looked at carefully and then compared to each other (a toy version of that comparison is sketched below). And in the end, some of those studies broke NEJM rules that have been in place since 2005, yet were still published. Top journals like NEJM make a lot of money publishing work completed by others and reviewed by volunteers. Running a journal isn’t easy, and has costs of course, and one of those costs should probably be the expense of enforcing their own editorial policies.
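To give a flavor of that correlation work, here’s a minimal sketch of the kind of comparison involved, diffing the declared outcomes between two versions of a registry record with Python’s difflib. The outcome text below is entirely made up for illustration; in reality you’re doing this by eye across 5-10 archived records per trial, each many pages long.

```python
# Sketch: diffing declared outcomes across two archived versions of a
# trial's registry record. All outcome text here is hypothetical.
import difflib

version_a = [
    "Primary: composite of stroke or systemic embolism",
    "Secondary: all-cause mortality",
]
version_b = [
    "Primary: composite of stroke or non-CNS systemic embolism",
    "Secondary: major bleeding events",
]

for line in difflib.unified_diff(version_a, version_b,
                                 fromfile="record v1",
                                 tofile="record v2",
                                 lineterm=""):
    print(line)
```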

The Great Belated DOAC Discussion

Okay, it took us a few weeks to find time to make it happen, but we’ve finally gotten to the discussion of direct oral anticoagulants: what evidence got them on the market, what issues might exist with that evidence, and what the state of things looks like for the reversal agent that is coming to an emergency department near you any day now. On the podcast I mentioned a JAMA study that looked at drugs approved by the FDA from 2005-2012 and the characteristics of the trials associated with those drugs; that study is here.

Apixaban

ARISTOTLE trial, published in NEJM (clinical trial #NCT00412984). The major bulletpoints are below, but one focus of our discussion was fraud that occurred in the trial at some of its clinical sites in China, and the delays this created in the drug being approved.

  • 18,201 patients with atrial fibrillation and at least one additional risk factor for stroke
  • 1034 clinical sites in 39 countries
  • Non-inferiority design
  • The primary outcome stated in their publication was “ischemic or hemorrhagic stroke or systemic embolism.” (The clinical trials database largely agrees with this, although it should be pointed out that the word “primary” connotes something singular, while the use of the word “or” twice makes it plain that they have three ‘primary’ outcomes. This is not a logical use of primary, but of course many, many studies are guilty of this offense to logic.)
    • The last record in their clinical trials record has a total of 19 primary and secondary outcomes. Not to get too pedantic, but the first outcome is primary, the second is secondary, the third is tertiary, and so on … up to their decennonary (19th) outcome. A 19th outcome cannot be ‘secondary.’
    • In the historical trials record you can see they revised the study arms several times. I don’t know what to make of this, but it does not raise my confidence in their reported results, especially in a study where the FDA used the words “fraud” and “data” in the same sentence when reviewing the results.
    • They didn’t define ‘major bleeding’ until after the trial was done. All of this should be decided before the trial starts, but particularly in a trial of a drug related to bleeding, you need to define something like this from the outset.

Regarding fraud in the ARISTOTLE study, here’s what an FDA reviewer had to say:

“As discussed earlier, there is an ever evolving unknown rate of medication errors and unknown effect of the manual manipulations of the IVRS data on randomization. The apparent lack of the Applicant’s knowledge of the issues make the reviewers uneasy about the monitoring and conduct of the trial and the potential implications on important endpoints. This information developed late during the review cycle, and the issues have not been resolved to our satisfaction. The Applicant does not appear to fully understand their data (answers to our questions went through multiple iterations, are not fully explained, and have only instigated more questions), the errors and the impact on safety. Moreover, the Applicant’s medication error dataset appears to have errors in it whereby the data do not match the CRF and there are dates after database lock. This carelessness in cleaning the data adds to our skepticism of their responses to our medication error questions and makes us question their study conduct. The timing of the study conduct issues was late in the review cycle and much of the primary safety analyses were complete, so the review presented in this section essential ignores the medication errors. Sensitivity analyses for the fraud in China are presented in Sec 3.1.2.”

Rivaroxaban

ROCKET AF trial, published in NEJM (clinical trial #NCT00403767). Here is another article, published a year earlier by the study designers, that also discusses the ‘rationale and design of the study.’

  • 14,264 patients with non-valvular atrial fibrillation who were at increased risk for stroke
  • 1178 participating sites in 45 countries
  • Non-inferiority design
  • The primary outcome stated in their publication was “the composite of stroke (ischemic or hemorrhagic) and systemic embolism” 
    • The clinical trials dot gov historical database shows numerous issues here. We will overlook for a moment the fact that a ‘primary’ efficacy outcome that is a composite is, in fact, not primary, and is a way of hedging to inflate results, particularly in a non-inferiority design, which sets the bar essentially as low as it can get (see the sketch after the timeline below). When one combs through the records for this trial, one sees that although the trial was registered before it actually began, the primary and secondary outcomes of interest were not declared before initiating the trial. The whole point of these things is that they must be declared a priori. This is egregious if you consider the funding and expertise involved. It becomes more egregious when you go through each record and see that not only were no outcomes declared until six months into the trial, but the primary and secondary outcomes were continually tweaked over the ensuing years. This fact alone should technically disqualify the study from publication in any ICMJE journal (“…registration of clinical trials in a public trials registry at or before the time of first patient enrollment as a condition of consideration for publication” [1]). The NEJM is one such journal, and according to their own editorial policy, at least as it presently stands, this would not qualify for publication. “The ICMJE and, therefore, NEJM require investigators to register trials in acceptable clinical trial registries before the onset of patient enrollment. Manuscripts describing primary results of nonregistered trials will be turned away prior to peer review.” [2]
      • Nov 2006: study registered at clinical trials dot gov
      • Dec 2006: study begins
      • June 2007: one primary (composite) and two secondary outcomes (one of which is a composite) first registered
      • Dec 2007: new primary composite outcome substituted, stroke or non-CNS embolism added as second primary outcome (which itself is two outcomes), additional series of adverse events added as secondary outcome, all cause mortality removed as secondary outcome
      • June 2009: study ends
      • Oct 2010: more juggling of outcomes (I’m going to stop transcribing each granular change, but you’re welcome to look for yourself)
      • Feb 2011: more juggling of outcomes
      • June 2012: outcomes moved to results section (normal) … their final results report three primary outcomes, each of which is a composite. There are seven secondary outcomes, each of which is a composite or “its components” (which is a further lowering of the bar).
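As flagged above, here’s a minimal sketch of why composite endpoints lower the bar. For a time-to-event trial, the number of events needed to test a target hazard ratio is roughly fixed (Schoenfeld’s approximation), so bundling outcomes raises the event rate and shrinks the enrollment/follow-up required. Every number below (the component event rates, the non-inferiority margin) is hypothetical and chosen purely for illustration, and the components are assumed independent.

```python
# Sketch: composite endpoints raise event rates, which (all else equal)
# reduces the patient-years needed to accumulate a fixed number of events.
# All rates and the margin are hypothetical; components assumed independent.
from math import log

def events_needed(hr_margin, z_alpha=1.96, z_beta=0.84):
    """Schoenfeld approximation: events needed for 1:1 allocation,
    two-sided alpha = 0.05, power = 80%."""
    return 4 * (z_alpha + z_beta) ** 2 / log(hr_margin) ** 2

def composite_rate(rates):
    """P(at least one component event) per patient-year, assuming
    independent components."""
    p_none = 1.0
    for r in rates:
        p_none *= 1 - r
    return 1 - p_none

stroke, embolism = 0.017, 0.004          # hypothetical annual event rates
d = events_needed(1.40)                  # hypothetical non-inferiority margin
for label, rate in [("stroke alone", stroke),
                    ("composite   ", composite_rate([stroke, embolism]))]:
    print(f"{label}: ~{d / rate:,.0f} patient-years to reach ~{d:.0f} events")
```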

On top of all of the issues bulletpointed above, a few years after the trial finished a BMJ investigation revealed that the point-of-care INR testing device the study relied upon had a design flaw that produced erroneous readings for warfarin users:

“Alere—the device manufacturer—had received 18,924 reports of malfunctions, including 14 serious injuries. The company confirmed to The BMJ that the fault went back to 2002, before the ROCKET-AF trial started.”

“Back in September 2015, The BMJ asked the investigators named in the NEJM paper about the recall. They included researchers from Bayer, Johnson and Johnson, and the Duke Clinical Research Institute, which carried out the trial on behalf of the drug companies.”

“None of the authors responded, but a spokesperson for Johnson and Johnson contacted The BMJ to say that they were ‘unaware of this recall’ and they took the journal’s concerns ‘seriously.’ But it took months of probing by The BMJ before the companies, world drug regulators, and [the] Duke [researchers who completed the study] began to investigate the problem in earnest.”

Here’s a quote from a New York Times article covering the controversy:

“Questions about the trial have been stirring since last fall, when Johnson & Johnson and Bayer, which sells Xarelto overseas, notified regulators that the device that was used in the trial had been recalled in 2014 because it was understating patients’ risk of bleeding. The device, the INRatio sold by Alere, was used in the trial to help doctors gauge whether patients were getting the right dose of warfarin. The trial compared the number of strokes and bleeding events experienced by patients taking Xarelto to those of patients who were given warfarin.”

In 2016 the study authors published a re-analysis of their data. It was an internal re-analysis, but included, “Two physicians who were unaware of study group assignments [and who] reviewed baseline medical history and adverse events identified during trial follow-up for all patients enrolled in the trial for any of the conditions cited in the recall.”

Another news article notes: “More than 19,000 lawsuits against the drug have been filed in the federal litigation underway in the Eastern District of Louisiana … plaintiffs allege that the Bayer and Johnson & Johnson downplayed the drug’s risks and wrongly marketed it as an improvement over a decades-old blood thinner called warfarin.”

In December 2017, “a jury in the Philadelphia Court of Common Pleas awarded US$28 million, including US$1.8 million in actual damages and US$26 million in punitive damages, to a woman who suffered a severe gastrointestinal bleed a little over a year after she was prescribed Xarelto.”

Andexanet alfa

Tradename: Andexxa; antidote for apixaban and rivaroxaban. A quote found in what is essentially a press release from the manufacturer reads, “In the U.S. alone in 2016, there were about 117,000 hospital admissions attributable to factor Xa inhibitor-related bleeding and nearly 2,000 bleeding-related deaths per month.” These data aren’t supported with a citation (but they may well be accurate). Assuming they are, that speaks to the need for such an antidote. The data on whether FFP/PCC work adequately are nonexistent. In our discussion of andexanet alfa I mentioned that I’m aware of a couple of publications related to its studies:

  • The first publication on andexanet alfa looked promising in 2015, but it should be noted that it has some major caveats.
    • It’s actually two studies, with an aggregate of 145 “healthy volunteers.” This is spectrum bias — it’s looking at a different population than the one this antidote would be used on.
    • Additionally, it’s a tiny number of people. In one of the two studies, for example, the intervention arm had 24 participants and the placebo arm had 8. This is a very small number, and so we should be careful here. That said, the fact that the manufacturer powered the study in this way speaks to their belief that this drug would have a demonstrable effect! Now, let’s move on to what that effect was…
    • The outcome of interest was a laboratory surrogate marker that (given our biological understanding) should precipitate a temporary reversal of the DOAC. The outcome of interest was not time to hemostasis (they chose a disease-oriented rather than a patient-oriented outcome). Additionally, the outcome(s) of interest mentioned in their clinical trials registration make no mention of the timeframe to reversal. In the NEJM paper the timeframe is discussed in the context of what the results show. While in this case I don’t think the timeframe aspect is a huge problem (physicians would use the drug whether the results were for 5 minutes or 25 minutes), it should be noted that not specifying a timeframe is a violation of the 2016 FDA final rule (see this NEJM article); proper registration has been a precondition for publication in NEJM since 2005, yet astute readers will notice that this study took place after 2005 and was in fact published in NEJM.
    • Study timeline:
      • March 2014: study begins
      • July 2014: outcome measures first declared (no clinical trials dot gov records for these trials existed at all before this)
      • May 2015: study ends
      • Nov 2015: NEJM article published
      • August 2018: outcome measures updated … in a way that doesn’t appear totally egregious, but does now lack any mention of the 43 day timeframe that was part of all their original outcome measures. In fairness, the NEJM paper reports the 43 day timeframe.
    • The timeline is a little problematic in that the third trial (see next bulletpoint) begins not long after the outcome measures are posted for the first two trials (and while those trials are ongoing). Scientific integrity demands that they aren’t peeking at the data from the two ongoing trials and using it to design the third. But even without peeking, there are many ways that experimenters can inadvertently become unblinded (for example: participants who have an outcome of interest also complain of side effects that are obviously not from a placebo). Particularly when there is a 3:1 ratio of intervention-to-placebo in your existing trials, it would be surprising if you didn’t have an intuitive sense of whether things are going as expected. Because drug companies are at liberty to run many, many trials and only publish the favorable ones, one can imagine some particularly motivated companies might have several trials overlapping, using experience from each to inform the next, in hopes of maximizing the probability of success while minimizing time to completion.
  • The second publication came in 2016, and also had some major caveats.
    • On the plus side, hemostasis was now the marker of interest, which is much more patient-oriented.
    • This is not a randomized trial. It’s open label, with only an intervention arm, so there is no blinding.
    • While I appreciate the patient-oriented outcome of interest being front and center, achievement of this outcome was an “adjudicated judgment” … in short, a committee decides if they think hemostasis was excellent, good, or poor.
      • This excellent / good / poor trichotomy has a list of definitions only available in a supplementary appendix (where they are unlikely to be seen) and includes things like hemostasis being “good” (in visible bleeding) if fewer than 2 units of plasma / coag factors (and excluding pRBCs) are given and hemostasis is achieved at the 1-4 hour mark. That’s a pretty subjective definition. Good or excellent were the marks they aimed to achieve.
    • The trial included 67 patients when published, although it was ongoing at that time (their safety power calculations require 250 patients).
    • Their outcome of interest was updated slightly (at clinical trials dot gov) during the trial, with wording changing from 24 hours to 12 hours. Additionally, the definitions reported in their NEJM supplement are more liberal than those in their clinical trials dot gov record. For example, they define acute major bleeding in their records as a drop in hemoglobin of 2 g/dL or a baseline of 8 g/dL; in their NEJM report they include this, but also the following: “(or an investigator’s opinion that the hemoglobin level would fall to 8 g per deciliter or less).” I don’t think this is terrible, since if anything it would make them look better to officially declare looser definitions and then report in press that they had strict definitions.
    • As of February 2019 their clinical trials dot gov data does not include any results or analysis. Their product is already approved by the FDA and is being used in hospitals across the United States. It is hard to imagine that investigators take the trials records seriously when journals and the FDA do not seem to care if they accurately report their data there, update their outcomes in a post hoc fashion, or simply omit data entirely.
    • 12/67 patients had a thrombotic complication by 30 days. It’s hard to know what this means (see the interval sketch after this list), because…
    • To reiterate — there is no comparison group. There’s no way to know if this worked better than any other solution (FFP, PCC, etc).
  • This JAMA medical letter discusses the fact that the FDA has approved andexanet alfa for fast track availability in the US, on the basis of the trials discussed above — no proper RCT comparing to FFP or PCC (etc) in a sick population. “The FDA is requiring the manufacturer to conduct a randomized, controlled trial comparing the new product with usual care, which could include prothrombin complex concentrates, in patients taking factor Xa inhibitors who have active bleeding.” The drug will also include a “boxed warning about the risk of thromboembolic, ischemic, and cardiac events, including sudden death, in patients treated with andexanet alfa. In the interim analysis of ANNEXA-4, 11% of patients had a thrombotic event and 12% died within 30 days after administration of the drug.”
  • As mentioned in the podcast, pricing is $50,000. Compare that to the $4500 cost for PCC, or the $500 cost for FFP. I sourced the $50,000 figure from an article published in The Medical Letter, where it is noted to be the wholesale price. Here is the complete citation included in that article for this claim: “Approximate WAC for the high-dose regimen (bolus dose plus one 120-minute infusion). WAC = wholesaler acquisition cost or manufacturer’s published price to wholesalers; WAC represents a published catalogue or list price and may not represent an actual transactional price. Source: AnalySource® Monthly. June 5, 2018. Reprinted with permission by First Databank, Inc. All rights reserved. ©2018. www.fdbhealth.com/policies/drug-pricing-policy.”
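As flagged in the 12/67 bullet above, a quick way to see how little that figure pins down on its own is to put a confidence interval around it. A minimal sketch (the 12/67 count is from the paper; the interval is just standard Wilson math via statsmodels):

```python
# Wilson 95% CI around the 12/67 thrombotic-complication figure.
# Even a tight interval couldn't tell us whether this beats FFP/PCC,
# because there is no comparison arm; the interval just shows how
# imprecise 12/67 is on its own.
from statsmodels.stats.proportion import proportion_confint

low, high = proportion_confint(count=12, nobs=67, alpha=0.05, method="wilson")
print(f"12/67 = {12/67:.1%}, 95% CI {low:.1%} to {high:.1%}")
# -> roughly 10.5% to 29%: nearly a threefold range of plausible rates.
```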

Andexanet alfa update → The ‘full study report’ for the 2016 paper was just published in NEJM and we will briefly discuss it on the next episode (whether it tells us anything the 2016 report didn’t, since it now has 254 patients in the efficacy analysis rather than 67). I will update the bulletpoints above after that so that all of this info can live in one easy-to-access place for y’all.

— Episode credits —

Hosted by Addie, Kim, and Alex. Audio production and editing by Addie. Record player scratch by Mike Koenig. Railway sound by Satish Madiwale. Franke’s French airhorn should be under fair use but for good measure I recreated it and overdubbed using a recording of a single horn that is within the public domain. Theme music compositions (Too Cool, and Laserpack) by Kevin MacLeod of incompetech.com, licensed under Creative Commons: By Attribution 3.0 License.