I've recently been thinking about the role of testing. Having primarily worked on smaller projects, I've never really had to write tests. Working at Dynasty, with a relatively young codebase, has given me the chance to develop my own opinions about testing.
This post is a bit of an extension to last week's post about tour types. The important context is that a new feature had to be shipped quickly due to the current pandemic. Because of the speed at which we wanted to push out the feature, we didn't have time to write tests. The feature was already (lightly) in use before we started to write tests for it. I'd like to share some realizations I've made, with the caveat that this was my first task at Dynasty and that I lack experience with "proper" testing.
As stated, we were able to push out a feature that seemed to work without writing any tests. The caveat is that we're rolling it out slowly, to customers that really want it, because we don't want to affect too many customers if there is a major bug in the new feature. With proper tests, we could have potentially pushed it out faster to all customers. We did QA the feature to make sure it functioned correctly in the likely use case (and that errors were caught), but no code was written to test the new feature.
This allowed us to move fast and ship code, but forced us to ship with caution. Part of the reason for the speed is that skipping tests allowed us to forgo thinking about the design of the implementation. As stated in the previous post, I realized midway that there was a possible design flaw in the implemented system. While it ended up not being an issue in this case, it easily could have been. Writing tests forces you to think about how something should be implemented, because it forces you to think about how something should be tested.
As it turns out, there are many types of tests that you can write. Dynasty's in-house terminology differs from what I've found online, but I think a reasonable quote sums it up: "It's not important what you call it, but what it does". I think the proper way to explain the testing strategies is:
If the implementation should be tested with a unit test, then it's most likely a smaller functional add-on. This means that most of the logical changes should be concentrated in a couple of files, if not just a single file.
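To make this concrete, here's a minimal sketch of what I mean by a "smaller functional add-on": all the logic lives in one function, so a unit test can cover it directly. The function and its name are hypothetical, not from the actual codebase.

```python
# Hypothetical add-on: all the logic is concentrated in one function,
# so a direct unit test is enough to cover it.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


def test_apply_discount():
    # The unit test exercises the function in isolation, with no need
    # to touch the rest of the codebase.
    assert apply_discount(100.0, 25.0) == 75.0
    assert apply_discount(80.0, 0.0) == 80.0


test_apply_discount()
```

Because the change is this self-contained, the test doubles as a precise statement of what the add-on is supposed to do.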
If the implementation should be tested with an integration or acceptance test, then you're changing how your users flow through the codebase. The feature is most likely interacting with many parts of the codebase. Not only does this mean that the current integration or acceptance tests need to be modified or extended, but, depending on the feature, the current design of a system is potentially in question as well. For example: this test made this assumption; does it still hold after the feature is implemented?
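A sketch of that last point, with hypothetical names (`sign_up`, `Account` are illustrative, not from the actual Dynasty codebase): an acceptance-style test can quietly encode an assumption about the user flow, and a new feature can invalidate it.

```python
# Hypothetical acceptance-style test that bakes in an assumption
# about the signup flow.
from dataclasses import dataclass, field


@dataclass
class Account:
    email: str
    features: list = field(default_factory=list)


def sign_up(email: str) -> Account:
    # Baseline flow: new accounts start with no features enabled.
    return Account(email=email)


def test_new_accounts_start_empty():
    # Assumption encoded here: signup enables nothing by default.
    # A new feature that auto-enrolls users (say, a free trial) would
    # break this test -- and force the question of whether the design
    # behind the assumption still holds.
    account = sign_up("user@example.com")
    assert account.features == []


test_new_accounts_start_empty()
```

When a feature breaks a test like this, the failure isn't just a red build; it's a prompt to re-examine the design decision the test was written around.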
Writing tests before or during the implementation of a feature helps you sort out the scope of the feature.
I think because testing forces you to think about the structure of your codebase, it's natural to view tests as a form of documentation. Thinking back, while looking at the data models and walking through the code helped me understand how the codebase at Dynasty worked, walking through the acceptance or integration tests would have been a solid way to learn as well.
Although I need more experience, I'm leaning towards believing that documentation should be updated while writing integration/acceptance tests. The code itself provides valuable information about how the system should work, but written context explaining the system is important for understanding the higher-level design choices of a codebase. These system-level tests are inherently slower to write, so documentation updates might as well be bundled in to maintain an up-to-date knowledge base.
My concrete conclusions are that acceptance tests should be defined and written before a feature is developed, while unit and "UI" tests are good to use during development. An integration test could serve as either a unit or an acceptance test; it's a type of implementation rather than a separate category.
As it turns out, because of the urgency of this particular issue, it was correct to forgo writing tests so that our customers got the new feature as soon as possible. There is merit to breaking my assumptions; as always, it depends on the situation.