Edsger W. Dijkstra once said that program testing can show the presence of bugs, but never their absence. Was he right? When that sentence was written I don’t think there was much work on automated testing (context matters).
Testing doesn’t show you the absence of all errors. But it does show you the absence of the errors for which you have written tests. There is, therefore, at least some value in tests.
My first job in the UK as a developer was for a printing company. We were using an internal framework. Whenever there was an error I was told to replicate it in the most concise way possible. The time spent getting to that minimal piece of code always paid off, because many times, when I thought I knew why the error happened, it was in fact a red herring.
Reducing the example to the smallest possible piece keeps you from getting lost in details and paths that have nothing to do with the error.
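To make that concrete, here is a sketch of what a minimal reproduction might look like. The scenario, function, and values are all hypothetical, in Python purely for illustration: a bug reported as "invoices print the wrong month" reduced to the one call that misbehaves.

```python
from datetime import date

def short_month(d: date) -> str:
    # Buggy: %m is the zero-padded month *number*, not the month name.
    return d.strftime("%m")

# The smallest possible reproduction of the report:
print(short_month(date(2024, 3, 1)))  # expected "Mar", actually prints "03"
```

Two lines of setup, one call, one wrong output. Nothing about invoices, printers, or the framework is left to distract you.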
When there is no testing
When have you ever written a piece of code that worked perfectly the first time around and never broke? Maybe very simple code. Otherwise, it is very difficult. It doesn’t matter how good your other practices are, how easy your code is to read, or how much help you get from the compiler. Especially around business logic (although a good type system can help).
There is never no testing. Either you test your system, or your client is testing your system for you in production.
That is not a good situation, hopefully we can all agree. So something has to be done. Enter the idea of the dedicated QA team to test the system. The code will come back to you, because the QA team will have found something. But it comes back after you have already started new work, which means you have to switch your attention back to something that was already “done”.
Of course, doing that through manual tests means that, as the system grows, the tests take longer and longer, and the feedback takes longer and longer to come back. And it is a repetitive process. Inefficient.
Enter automation, which increases the speed of testing the system.
But most of that automation is code. Why are devs not writing and running those tests themselves?
There is an additional problem. Those tests, even automated, are slow, as they hit the whole system, and they need a lot of setup to test even the smallest change. They are terribly inefficient.
The whole process is terribly inefficient.
Because I write a bit of Clojure, I would like to quickly talk about the REPL. With the REPL you can run your code repeatedly as you develop, checking that it does what you think it is supposed to do, quickly changing and adapting. But repeating those checks every time you go back and change something is a lot of work, which makes them as inefficient as the manual tests we discussed above (which is what they are). The REPL is superb for exploratory coding, though.
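A sketch of the kind of quick check you would type at a REPL, shown here in Python (whose REPL works the same way) with a hypothetical helper function; each call is the sort of expression you type, eyeball, and then adjust:

```python
def slugify(title: str) -> str:
    # Hypothetical helper being explored interactively.
    return title.strip().lower().replace(" ", "-")

# Run, look at the result, tweak the function, run again:
print(slugify("Hello World"))        # "hello-world"
print(slugify("  Trailing space "))  # "trailing-space"
```

The feedback is instant while you are exploring; the cost only appears later, when every future change means re-typing the same checks by hand.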
Testing is not easy
Adding automated testing wherever possible massively reduces the inefficiency. Automated tests are code, and therefore their setup should be part of the normal work of the programmers.
Once you do that, possibilities open up, like running those tests as part of CI, or having a runner/watcher on your machine that reruns the tests whenever you change the code.
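A minimal sketch of what such a test looks like, using the standard library's unittest module; the function and values are hypothetical:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(42.5, 0), 42.5)
```

Something like `python -m unittest` runs these locally, and the same command works in a CI job or under a file watcher, so the feedback arrives seconds after the change instead of days.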
But this automated testing is not easy. It is code. And if you don’t have discipline and knowledge you end up creating brittle tests, or tests that are not easily modified. Tests need the same care as the rest of your code.
Uses of testing
Testing has multiple uses. The most basic one is that tests detect errors and stop them from reaching production.
But they also serve as documentation of what the code actually does. This is something I take advantage of, especially when using OSS libraries: I tend to go to the test suite to better understand how the code is used and what I can expect it to do.
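For example, tests with descriptive names can tell you how a function behaves without you ever opening its implementation. This is a hedged sketch with a hypothetical `parse_version` function, not any particular library's API:

```python
def parse_version(text: str) -> tuple[int, int, int]:
    """Parse a semantic version string into a (major, minor, patch) tuple."""
    major, minor, patch = text.lstrip("v").split(".")
    return int(major), int(minor), int(patch)

# Reading just the test names and assertions documents the behaviour:
def test_accepts_a_leading_v_prefix():
    assert parse_version("v1.2.3") == (1, 2, 3)

def test_returns_a_tuple_of_three_integers():
    assert parse_version("10.0.1") == (10, 0, 1)
```

From the tests alone you learn that a leading "v" is accepted and that the result is a tuple of integers, exactly the questions you would have asked the documentation.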
Tests also allow you to refactor your code knowing that if you break the known behaviour they will let you know.
TDD (and other options like BDD) can be used to drive your work. If you write the tests before you write the code, you know what your objective is, and you know when you are done. They also help in creating code that is easy to modify.
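The rhythm can be sketched as follows, with a deliberately trivial, hypothetical example: the test is written first (and fails, since nothing exists yet), and then just enough code is written to make it pass.

```python
# Step 1: the test, written before any implementation exists.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2: the simplest implementation that makes the test pass.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

test_fizzbuzz()  # passes once the implementation above is in place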
But, again, testing is not easy, and doing TDD is not easy either. I will point you to this magnificent (and funny) post by Samir Talwar: TDD in 3 easy steps
Levels of testing
Have you heard of the testing pyramid? It was originally described by Mike Cohn in his book Succeeding With Agile. The pyramid indicates that the largest number of tests in your system should be unit tests, then fewer integration/service tests, then fewer still UI tests. Finally you have manual tests, whose number will depend on the type of code you are creating. For back-end code, the number of manual tests should approach zero, while on the front end you will end up having more. The testing pyramid reduces inefficiencies in the testing process.
Who takes care of tests?
Code is the responsibility of developers. The design of those tests is where QA/QE people shine, creating or describing the scenarios that we want to cover, especially for the larger-scale, e2e tests. And they take care of all exploratory testing and manual tests. But, like everything else, it should be a collaborative effort, with constant communication, trying to reduce setup time and increase the frequency of the feedback loop to deliver the software.
At the moment, my experience has shown me that automated tests are a necessity, and that using a good TDD approach leads to better code. That doesn’t mean it will always hold true. Maybe at some point there will be a language that allows you to do without them. Or we will discover a different way that completely removes regression errors.
But they do require discipline and constant supervision.