I think they should never be used.

  • @[email protected]
    link
    fedilink
    13
    8 months ago

    No experiment, no proof. But, taken with a grain of salt, a good survey can be better than pure speculation where an experiment is impossible or unethical. On the other hand, experiments can prove something, but depending on how reduced or artificial the context is, they may not prove as much as you hope, either. Science is just difficult in general.

    • @JokklMaster
      link
      6
      8 months ago

      Exactly. Luckily I’m in a field where true experiments are possible, but I have many colleagues who can’t ethically run true experiments. It’s surveys or nothing for the most part. They have very advanced statistics to account for the lack of control in their research.

      • @[email protected]
        link
        fedilink
        4
        8 months ago

        And even if you can carry out a proper experiment, it might be useful to see if there’s already a survey on the same topic. If there is, you can use that data to design your experiment, and hopefully you’ll be able to take important variables into account.

    • @merari42
      link
      3
      7 months ago

      where true experiments are possible, but I have many colleagues who can’t ethically run true experiments. It’s surveys or nothing for the most part. They have very advanced statistics to account for the lack of control in their research.

      There is a whole discipline on causal inference with observational data that is more than a hundred years old (e.g. John Snow doing a diff-in-diff strategy). Usually, it boils down to not having to control for every detail, but to getting plausibly exogenous variation in your treatment, either from a policy implemented in only one group (state), a regulatory threshold, or other “natural experiments”. Social scientists typically need to rely on such replacements for true experiments. Having a good survey is only the first step before you even think about how you could get at the effects of interest; looking at correlations in a survey is usually just a first descriptive pass to find interesting patterns.

      Survey design itself is a whole different problem. There you also run experiments and try to figure out how non-response and wrong answers work. For example, there are surveys in Scandinavia, the Netherlands, France and Germany that can easily be linked to social security records (or even individual credit card data in the Danish case) to validate answers or to directly use high-quality administrative data.
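      To make the diff-in-diff idea concrete, here is a minimal sketch in Python. The numbers are made up purely for illustration (they don’t come from any of the surveys above): the estimate is just the treated group’s before/after change minus the control group’s.

      ```python
      # Difference-in-differences on made-up survey means (hypothetical numbers).
      # Outcome: a group's average outcome before and after a policy change
      # that only the "treated" state adopted.
      means = {
          ("treated", "pre"): 52.0,
          ("treated", "post"): 58.0,
          ("control", "pre"): 50.0,
          ("control", "post"): 53.0,
      }

      # Trend in the treated group minus trend in the control group:
      # under the parallel-trends assumption this isolates the policy effect.
      did = (means[("treated", "post")] - means[("treated", "pre")]) \
          - (means[("control", "post")] - means[("control", "pre")])

      print(f"Diff-in-diff estimate of the policy effect: {did:+.1f}")  # +3.0
      ```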

  • aviationeast
    link
    12
    8 months ago

    I think you should generate a survey to see what people think…

  • @[email protected]
    link
    fedilink
    11
    edit-2
    8 months ago

    I think this is more so a misunderstanding - surveys on their own, in raw form, are not science.

    There’s all kinds of bs that can come up like:

    • selection bias
    • response bias
    • general recollection errors/noise (especially for scary or traumatic experiences - there’s a bunch of papers on this behavior)

    But data scientists can account for these by looking at things like sample selection (randomly selected so as to represent the nation/region/etc.), pilot runs, transparency (fucking huge dude, tell everyone and anyone exactly what you did so we can help point out bullshit), and stuff like adjusting for non-responses.

    Non-responses are basically the idea that some people simply don’t give enough of a fuck to do the survey. Think about a survey your Human Resources team at work might send out - people who fuckin hate working there and don’t see it changing anytime soon might not respond, which means there would be fewer people expressing their distaste, which leads to a false narrative: that people like working there.
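    Here’s a rough, purely hypothetical illustration in Python of that HR-survey effect, and of the kind of reweighting adjustment mentioned above (the response rates and scores are invented for the example; real adjustments have to estimate them from auxiliary data):

    ```python
    # Hypothetical workplace survey: 1000 employees, half happy, half unhappy.
    # Unhappy people respond far less often, so the raw respondent average
    # overstates satisfaction.
    happy, unhappy = 500, 500
    rate_happy, rate_unhappy = 0.6, 0.2    # assumed response rates
    score_happy, score_unhappy = 8.0, 3.0  # satisfaction scores (0-10 scale)

    resp_happy = happy * rate_happy        # 300 responses
    resp_unhappy = unhappy * rate_unhappy  # 100 responses

    naive = (resp_happy * score_happy + resp_unhappy * score_unhappy) \
          / (resp_happy + resp_unhappy)

    # Weight each response by the inverse of its group's response rate, so the
    # 100 unhappy responses stand in for all 500 unhappy employees.
    w_happy, w_unhappy = 1 / rate_happy, 1 / rate_unhappy
    weighted = (resp_happy * w_happy * score_happy + resp_unhappy * w_unhappy * score_unhappy) \
             / (resp_happy * w_happy + resp_unhappy * w_unhappy)

    print(f"naive respondent average: {naive:.2f}")    # 6.75 -- looks rosier than reality
    print(f"reweighted average:       {weighted:.2f}") # 5.50 -- the true company-wide mean
    ```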

    Hope this makes sense! Stay curious!!

    PS/EDIT: Check out the SAGE method for data science for some more info! (There’s probably a YouTube vid instead of the book if you’d prefer I’m sure!)

  • dual_sport_dork 🐧🗡️
    link
    7
    edit-2
    8 months ago

    I deal with the fallout of this, or something closely related to it, frequently in my industry.

    Manufacturers think focus groups represent the needs and opinions of the general public. What they categorically fail to realize is that what focus groups actually represent is, in fact, the types of people who attend focus groups.

    The kind of people who respond to surveys are the kinds of busybodies who respond to surveys. Not an actual vertical cross-section of the populace.

  • @Paraponera_clavata
    link
    6
    8 months ago

    If you look into the other methods, they’re also filled with flaws and biases.

  • @Paragone
    link
    4
    8 months ago

    Shoddy use of them is normal, that is true.

    Don’t toss out the baby with the bath water, tho, eh?

    Training people in critical-thinking, & having quality standards for doing surveys, would help our world more than removing a method of discovery would.

    _ /\ _