Is there any formal way of quantifying potential flaws or risk, and of ensuring there's a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure, or some form of risk assessment?

Experience tells me I need to be extra careful around certain things - user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.

But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?

  • @phoneymouse

    It’s always fun to hear management pushing code coverage. On its own it’s a fairly useless metric: it’s easy to get high coverage without actually testing anything. I’ve seen unit tests that consist of nothing more than starting the whole program and letting it run, without asserting anything or checking any outputs.
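
    For instance, the difference might look something like this (a minimal sketch; `generate_report` is a hypothetical function standing in for real production code):

    ```python
    def generate_report(data):
        """Hypothetical production function under test."""
        return ", ".join(str(x) for x in data)

    def test_generate_report_runs():
        # Exercises the code path, so line coverage goes up...
        generate_report([1, 2, 3])
        # ...but there is no assertion, so a broken implementation still "passes".

    def test_generate_report_output():
        # Same coverage, but this version actually verifies the behaviour.
        assert generate_report([1, 2, 3]) == "1, 2, 3"
    ```

    A coverage tool scores both tests identically, which is why coverage alone says little about test quality.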