Saturday, February 8, 2014

Variation in test terminology

When I observe conversations about testing with other engineers, I notice that latent disagreements about the meaning of testing terminology break the conversation. Latent, because people think they're talking about the same thing, but they are not.

Here are some examples:

"Integration Test" / "Integrated Test" - is there a difference?

  • I have tested Thing 1 and Thing 2 in isolation; now test that they work together.
  • A test for Thing 1, but Thing 2 is along for the ride.
  • I test the entire system, end to end.
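
To make the first two meanings concrete, here's a minimal sketch in Python; Thing1, Thing2, and their interface are invented for illustration. The first test isolates Thing 1 behind a test double; the second lets the real Thing 2 come along:

    class Thing2:
        def lookup(self, key):
            return {"a": 1}.get(key, 0)

    class Thing1:
        def __init__(self, thing2):
            self.thing2 = thing2

        def double_lookup(self, key):
            return self.thing2.lookup(key) * 2

    class FakeThing2:
        """A test double standing in for Thing2."""
        def lookup(self, key):
            return 10

    def test_thing1_in_isolation():
        # Thing 1 alone; Thing 2 is replaced by a double.
        assert Thing1(FakeThing2()).double_lookup("a") == 20

    def test_thing1_with_real_thing2():
        # The real Thing 2 participates. Whether we call this a test of
        # the collaboration or a test of Thing 1 is exactly the ambiguity.
        assert Thing1(Thing2()).double_lookup("a") == 2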


"Unit Test"

  • Any test written by developers
  • A test that talks to the code directly (any size of "unit")
  • A test of a small portion of the code
  • A test of a single class, in isolation
  • A test of a single, tiny behavior
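
A single example can satisfy most of these definitions at once, which is part of why the disagreement stays latent. A minimal sketch in Python; the function under test is invented for illustration:

    def leading_zeros(s):
        return len(s) - len(s.lstrip("0"))

    def test_counts_leading_zeros():
        assert leading_zeros("007") == 2

    def test_string_without_zeros_has_none():
        assert leading_zeros("42") == 0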


TDD

  • A practice where developers write automated tests
  • Tests get written before code
  • We look to tests for feedback on our code design
  • I follow a RED/GREEN/REFACTOR workflow
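
The narrowest reading, RED/GREEN/REFACTOR, is a concrete loop. Here's a sketch in Python, with an invented slug function as the example; the comments mark the phases:

    # RED: write a failing test first (slug doesn't exist yet).
    def test_slug_replaces_spaces_with_hyphens():
        assert slug("hello world") == "hello-world"

    # GREEN: write the least code that makes the test pass.
    def slug(title):
        return title.replace(" ", "-")

    # REFACTOR: with the test passing, clean up names and structure,
    # rerunning the test after each change.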

UI Test

  • I test my business rules by manipulating the UI
  • I test my program as a whole by manipulating the UI (assume the components are already tested)
  • I test my UI
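
The first two meanings drive the whole program through the UI and differ only in what you think you're testing. A sketch using Selenium's Python bindings; the URL and element ids are invented:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_checkout_totals_the_order():
        driver = webdriver.Firefox()
        try:
            driver.get("https://shop.example.test/checkout")
            driver.find_element(By.ID, "quantity").send_keys("3")
            driver.find_element(By.ID, "add-to-cart").click()
            # Meaning 1 reads this as a business-rule check (price * quantity);
            # meaning 2 reads it as a whole-program smoke test.
            assert driver.find_element(By.ID, "total").text == "$30.00"
        finally:
            driver.quit()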



Why unit tests?

In conversations around unit testing, I regularly hear someone assert the real reason we write unit tests. Sometimes the conversation goes like this:

Alice: "We should write unit tests."
Bob: "No. We can already achieve result X through some other approach." or "No. Unit tests fail to accomplish Y, which is important."
Alice: "But Z is the real reason we write unit tests."

It would help me to have a list of supposed reasons for unit tests, so I'll catalog them here.

Note that I'm not asserting specific definitions of "unit", "test", and "unit test", although varying definitions here are a big part of the problem.

Correctness: proof that my code does what I intended it to do.

Detractors: There are many bugs that unit tests won't catch, e.g. integration bugs, security bugs, timing bugs. Also, the mindset that created a bug is the same mindset that creates the test, so we can't rely on unit tests to catch all the bugs.

Proponents: Developers often fail at basic correctness, leaving testers the tedious work of finding easy bugs. By using unit tests to ensure basic correctness, even imperfectly, testers can do the interesting and important work we need them to do.

Regression: proof that my changes didn't break something else.

Proponents: Much time and energy go to fixing regressions, which could be eliminated if we had tests.

Detractors: The same arguments as for correctness.

Refactoring: I can safely refactor without introducing regressions.

Proponents: Refactoring is key to the long-term well-being of the code base, and to the mental health of the programmers. Having unit tests in place makes that safe.

Detractors: The obvious arguments from above, plus the burden of dealing with failing tests whenever you change something that should be innocuous. Do you fix 100 tests, or just throw them away?
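
That burden often comes from tests pinned to implementation details rather than behavior. A sketch of the difference in Python; RegistrationService and its methods are invented for illustration:

    from unittest.mock import MagicMock

    class RegistrationService:
        def _validate(self, email):
            return "@" in email

        def register(self, email):
            return self._validate(email)

    def test_register_calls_validate():
        # Overspecified: renaming or inlining _validate breaks this test
        # even though the observable behavior is unchanged.
        service = RegistrationService()
        service._validate = MagicMock(return_value=True)
        service.register("ada@example.test")
        service._validate.assert_called_once()

    def test_register_accepts_valid_email():
        # Behavior-level: survives refactoring of the internals.
        assert RegistrationService().register("ada@example.test") is True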

Design: unit tests help me reduce coupling & increase cohesion.

Detractors: I don't see that happening.

Proponents: When a test is hard to write, that's design feedback telling you to refactor your code. When the test is short, clear, easy to write, easy to read, has a good name, and runs fast, you know your code is in good shape.

Note that it can take a lot of practice to develop the skills required here.
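
Here's a sketch of what that feedback can look like in Python; the names are invented. The first version is hard to test because its dependency is hidden; listening to that pain leads to the second:

    import datetime

    class HardToTestReportJob:
        def run(self):
            # Hidden dependency on the real clock: a test of this class
            # gets a different answer every day.
            return f"Report for {datetime.date.today()}"

    class ReportJob:
        def __init__(self, clock):
            self.clock = clock  # dependency injected, so tests control it

        def run(self):
            return f"Report for {self.clock.today()}"

    class FixedClock:
        def today(self):
            return datetime.date(2014, 2, 8)

    def test_report_is_dated():
        # Short, clear, and fast: the design feedback loop closing.
        assert ReportJob(FixedClock()).run() == "Report for 2014-02-08"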

Efficiency: unit tests affect the time it takes to get the job done.

Detractors: Time spent writing unit tests could be time spent creating customer value. Also, writing these unit tests takes a long time.

Proponents: Highly skilled unit testers write less code because they have less redundancy and they only implement as much functionality as the tests require. They also spend less time fixing bugs, and can afford to perform root cause analysis on every bug.