A Philosophy of Testing 1: Introduction

Mark "Justin" Waks
Nov 10, 2021

The Golden Rule of Programming

I’ve been programming for the better part of forty years, in several dozen different languages. Those environments vary a lot, and many things have changed, but more than you might think has stayed the same.

One of the things that has never changed has gradually turned into my Golden Rule of Programming:

Writing code is easy.
Maintaining code is hard.
Therefore, first and foremost, focus on making your code maintainable!

Perhaps the most important innovation that has arisen over those years has been automated testing — the notion that you can and should have programs that test your programs. Most people at least pay lip service to the idea of test automation, and some people go kind of overboard on it. It is possible to overdo it, yes, if your testing is so rigid that it impedes your ability to evolve your code.

But done right, test automation can make your code far more maintainable, and improve your ability to quickly release improvements and bug fixes, by giving you the confidence to refactor and enhance while knowing that you haven’t broken your previous progress. It’s essential for building serious, scalable, long-lived software.

What We’re Going To Talk About

This is the beginning of an in-depth series called A Philosophy of Testing. Note the “A” there: it’s specifically not The one true way to do things. There’s lots of room to disagree here, but I hope you’ll read through this and think about how it might fit into your environment.

I’m going to be talking about a fairly specific (but common) situation: the quick, run-them-all-the-time tests for service-oriented applications written in Scala. By “service-oriented”, I mean applications that are mainly exposed via APIs, including both monoliths and microservices. These tend to call other dependencies, such as databases and other external services — that’s not essential, but we’ll be keeping that in mind as a consideration.

I’m not going to be covering (at least initially):

  • Languages other than Scala
  • Testing the UIs that use these APIs
  • Testing libraries
  • Integration testing of multiple services
  • End-to-end testing of an entire enterprise-grade system

Some of the thoughts here apply to those situations, but a key message here is that testing isn’t one-size-fits-all: you need to design your test approach around the code that you are testing.

It is common for a production-grade system to have several test suites, including scenario tests for the backend, UI tests for the frontend, and end-to-end release tests. For now, we’re just going to focus on the first of those.

For the style of testing I’m advocating here, I’m going to use the term “scenario testing”. While it runs at the same time as traditional unit tests, I’m not calling it “unit testing”, because that usually means a style of testing that I don’t recommend for this sort of problem. (I’ll have a later article that goes into why not.)

These scenario tests have a pretty specific style:

  • Mainly test the entire service as a whole, by calling the API entry points
  • Create stubs/emulators (usually not mocks per se) for the external dependencies, or run in-memory versions of them
  • Instrument the heck out of the whole thing
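To make that style concrete, here is a minimal sketch in Scala (all names hypothetical, not from any real service): the external dependency sits behind a trait, the test swaps in an in-memory stub rather than mocking individual calls, and the scenario drives the service through its public entry point.

```scala
// Hypothetical dependency: the real one would talk to a database.
trait UserStore {
  def lookup(id: Int): Option[String]
}

// Stub/emulator: a simple in-memory stand-in for the external dependency,
// rather than a mock that scripts individual call expectations.
final class InMemoryUserStore(data: Map[Int, String]) extends UserStore {
  def lookup(id: Int): Option[String] = data.get(id)
}

// The service under test, exercised only through its API entry point.
final class GreetingService(store: UserStore) {
  def greet(id: Int): String =
    store.lookup(id).fold("Unknown user")(name => s"Hello, $name!")
}

object ScenarioTestSketch {
  def main(args: Array[String]): Unit = {
    val service = new GreetingService(new InMemoryUserStore(Map(1 -> "Ada")))

    // Each scenario calls the whole service, not its internal pieces.
    assert(service.greet(1) == "Hello, Ada!")
    assert(service.greet(2) == "Unknown user")
    println("ok")
  }
}
```

The point of the sketch is the shape, not the details: the test exercises the service as a whole, and the only thing substituted is the boundary to the outside world.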

I’ll go into detail about this approach in subsequent articles.

Test Automation is Every Developer’s Job

Let’s start off the philosophy with a strong statement that some folks will nod along with and others will disagree with:

Every developer, as part of their everyday work, should be building test automation.

That isn’t to say that all test automation should come from the developers — there is a lot of value in having dedicated QA engineers who are driving the overall test process, especially the integrated end-to-end tests. (Which you should also have, separately from the tests that this series is about.) And good QA engineers are trained in how to think about tests, so it can be valuable to have them help to spec the tickets, even if they don’t write the code.

But the original sin of the software industry is to think of QA as separate from development: that developers build stuff, throw it over the wall to QA, and don’t take responsibility for finding problems. This results in disconnects that cause no end of bugs, ranging from the minor to the fatal. You, the developer, are often closest to the problem, and often have the clearest idea of what needs testing. (Not least, you’re likely to know where the edge cases are.)

So instead, as part of your coding, you should be writing tests that show your code works, illustrate the use cases you have coded for, and can be run on an ongoing basis to prove that the code still works as intended. These tests should be fast enough to run constantly — sometimes dozens of times a day as part of your regular development — and while they may not test every possible use case upfront, they should be enough to let you code with courage, knowing that you probably haven’t broken things.

This takes time, and here I’m speaking to both developers and managers. It should be considered entirely normal for a developer to spend 20–25% of their coding time on test automation. That’s not “overhead”, that’s an essential part of writing code that is demonstrably reliable. It’s an upfront expense, but saves you a vast amount of tsuris (up to and including reputational damage due to downtime) down the line, and can speed up your cycle long-term, by reducing the need for manual QA, even eliminating it in many cases.

It also results in a great deal of code: in my experience, my test code is often larger than the code under test, and it’s not unusual for my tests to be 2–3 times the size of the application. There is nothing wrong with that — there are often a lot of scenarios that need testing.

That said, this shouldn’t just be boilerplate that you are churning out mindlessly. Which gets to another key point:

Test code is code.

That may sound obvious, but people often don’t treat it that way. We spend a lot of time and effort writing good code for our applications, refactoring around DRY (don’t repeat yourself) principles and making things elegant and maintainable — and then allow our test harnesses to be boilerplate-filled monstrosities.

There is no excuse for that: the same principles of good code style apply even more to test code, since it is often much larger than the main system, with patterns repeated frequently. Factoring your test harnesses well takes time and care, and you will find that they need to evolve as your system grows. (In particular, watch out for repeated patterns, and lift them into shared functions.) This pays off in the long run, though, with test code that is crisp, clear, readable, low in boilerplate and easier to maintain.
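As a tiny illustration of that lifting (the function and helper names here are made up for the example), compare repeating the same setup-and-assert boilerplate in every test with expressing the pattern once as a shared function:

```scala
object HarnessSketch {
  // Hypothetical function under test.
  def slugify(title: String): String =
    title.trim.toLowerCase
      .replaceAll("[^a-z0-9]+", "-")
      .stripPrefix("-")
      .stripSuffix("-")

  // Shared helper: the repeated check pattern lives in one place,
  // with a failure message that names the offending input.
  def checkSlug(input: String, expected: String): Unit = {
    val actual = slugify(input)
    assert(actual == expected, s"slugify($input) produced $actual, expected $expected")
  }

  def main(args: Array[String]): Unit = {
    // Each case is now a crisp one-liner instead of copy-pasted boilerplate.
    checkSlug("Hello, World!", "hello-world")
    checkSlug("  Scala   Testing  ", "scala-testing")
    checkSlug("Already-a-slug", "already-a-slug")
    println("ok")
  }
}
```

The same instinct that makes you extract a helper in application code applies here: once three tests share a pattern, the pattern deserves a name.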


To finish off this introduction, I’d like to provide a general principle for you to consider:

Don’t just test by rote. Test mindfully.

It is far too common for tests to be treated as a checkbox item, as completely formulaic. This is easy — but it’s also not very useful. It is part of the problem with traditional unit tests: too many of them are just useless checkboxes.

Instead, I recommend thinking of your tests as science. Each test should be a hypothesis that something is true (that it will work as expected). Don’t just auto-generate those hypotheses — think about what is interesting to demonstrate here, that actually matters, and which isn’t trivially obvious.
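As a small sketch of the difference (the discount function is hypothetical), compare a rote check that merely restates the implementation with a hypothesis about an edge case that actually matters:

```scala
object HypothesisSketch {
  // Hypothetical function under test: 10% off orders of 100 or more.
  def discount(total: BigDecimal): BigDecimal =
    if (total >= 100) total * BigDecimal("0.9") else total

  def main(args: Array[String]): Unit = {
    // Rote checkbox: restates the obvious, demonstrates little.
    assert(discount(BigDecimal(50)) == BigDecimal(50))

    // Hypothesis: the discount kicks in exactly at the threshold,
    // and not a cent below it — the case that isn't trivially obvious.
    assert(discount(BigDecimal(100)) == BigDecimal(90))
    assert(discount(BigDecimal("99.99")) == BigDecimal("99.99"))
    println("ok")
  }
}
```

The second pair of assertions is where the science happens: it pins down the boundary behavior, which is exactly where regressions tend to hide.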

This is every bit as important as all of the technical details: correctly choosing what to test will determine whether your tests are valuable, and whether they help you show, during enhancements and maintenance, that things are still working. That’s the name of the game here — to make your systems more maintainable, by helping you prove, on a constant basis, that things are still pretty much working.

The Series

This is going to be a fairly deep dive, with a lot of things to talk about, which I’ll work through in the articles to come. (Plus perhaps some more down the line.)

I hope you will find it interesting food for thought in your own projects.



Mark "Justin" Waks

Lifelong programmer and software architect, specializing in online social tools and (nowadays) Scala. Architect of Querki (“leading the small data revolution”).