• WalrusDragonOnABike [they/them]
    10 months ago

    Sure, but alpha is an arbitrary choice. A p-value of .051 isn’t magically different from .049; they’re essentially equally statistically significant. The reported p-value was 0.05. Smaller p-values are better, but it’s a continuum, not a series of buckets. Also, if you are testing whether care is better than no care, a one-tailed test should be done, so the p-value would be 0.025.
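
    A rough sketch of the one-tailed point, assuming a symmetric test statistic (a z or t; the study’s actual test isn’t specified here): when the observed effect is in the hypothesized direction, the one-tailed p-value is simply half the two-tailed one, so a reported 0.05 becomes 0.025.

        # Sketch only: halving a two-tailed p-value of 0.05 for a symmetric
        # test statistic. Not the study's actual analysis or data.
        from scipy.stats import norm

        p_two_tailed = 0.05             # the reported two-tailed p-value
        z = norm.isf(p_two_tailed / 2)  # the |z| that yields that two-tailed p
        p_one_tailed = norm.sf(z)       # upper-tail probability of the same z

        print(f"z = {z:.3f}")                         # ~1.960
        print(f"one-tailed p = {p_one_tailed:.3f}")   # 0.025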

    But the bigger problem is that they controlled indirectly for the outcome, so any difference is being minimized. And given that they conveniently leave out the valid model for this section specifically, it almost seems like the “mistake” was intentional. At least one author has a history of public transphobia and clearly had their conclusion before any analysis started. There’s lies, damn lies, and then there’s statistics. If you have enough variables, you can find specific things where you simply do not have enough data to get your p-value small enough. Given that everyone in the study was at least referred to a gender clinic (meaning they’re in a good enough place that they feel able to do that and their parents aren’t able to interfere), it’s going to be a group with an already low suicide rate compared to the overall transgender population; they only have 7 data points divided between the two GR groups. It would take a massive difference for any effect to be statistically significant, and they still had to manipulate the data to bring the p-value up to 0.05.
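
    To make the “only 7 data points” point concrete, a quick sketch with made-up group sizes (1,000 per group; the study’s real group sizes aren’t used here). With Fisher’s exact test, only the most lopsided possible split of 7 deaths between two groups comes out below 0.05.

        # Sketch with hypothetical group sizes, not the study's data: with only
        # 7 deaths in total, even a very uneven split between the two groups is
        # rarely statistically significant.
        from scipy.stats import fisher_exact

        n_per_group = 1000   # made-up group sizes
        total_events = 7

        for events_a in range(total_events + 1):
            events_b = total_events - events_a
            table = [[events_a, n_per_group - events_a],
                     [events_b, n_per_group - events_b]]
            _, p = fisher_exact(table, alternative="two-sided")
            print(f"{events_a} vs {events_b}: p = {p:.3f}")

        # Only the 0-vs-7 and 7-vs-0 splits fall below 0.05 (~0.016);
        # even a 1-vs-6 split is ~0.125.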

    Or I guess one could argue that death by suicide is caused by a high number of psychiatric treatment contacts, as the study’s authors seem to be implying. Not saying you are arguing that; just pointing out how ridiculous a statement the study makes. Technically it only claims correlation, but if you want to assume they’re not just transphobic or totally incompetent, then that’s the interpretation that makes the most sense.

    • @feedum_sneedson
      10 months ago

      Perhaps researcher bias is an issue; I don’t know. Statistics are frequently abused and manipulated, but also frequently disregarded when the data “feels” significant, even in academic papers! It’s been a long time since I formally studied statistics, but more recently I’ve been shocked by how casually they are glossed over in higher education. Consult a statistician. … has anybody ever done this?

      It’s fraught, really: evidence-based medicine is our best tool, but when the subject is so emotionally (and increasingly politically) charged, is there anybody researching this who doesn’t have a bias? I genuinely doubt it. In fact… my hypothesis is that there are no unbiased researchers on this. Which would possibly be the null hypothesis.

      • WalrusDragonOnABike [they/them]
        10 months ago

        Consult a statistician. … has anybody ever done this?

        Definitely not in academia. Agreed that academic papers are regularly published by people who know nothing about statistics, but who threw some numbers from 3 trials into some package and reported what it told them, without any understanding of what they’re doing beyond “p-value below certain thresholds, so I put *, **, or *** in a column”. And I doubt journals make sure to get someone with a statistics background to double-check things for the basic sciences.

        I’d just expect better in the medical field, where statistics are much more essential and the results are applied in a way that directly has a large impact on people’s QoL.

        Agreed that there are no unbiased researchers. But you can be biased and still not make obvious exclusions to fit your story. And if they want their story to be “psychiatric care causes suicide,” they shouldn’t bury that central claim deep in the paper; it should be explicit in the title, or at least the abstract.