Challenge
Testing is a core and constant part of running a product.
In my time running/growing/managing SaaS platforms, I've noticed I -- somewhat surprisingly -- really enjoy testing. In fact, my #004 post was about reporting bugs (and, thus, testing).
Over the last year or so, I've put more and more in place to ensure testing not only happens but happens consistently and its results are acted upon. I am confident any SaaS business without a half-decent testing structure is doomed to fail (or go slower than its competitors, which is essentially the same thing).
These last couple of sprints have put my initial processes to the test, for a few reasons.
We ran three high-impact epics in parallel. I know, boo.
One of these epics was a major refactor of several key parts of our codebase.
I noticed cracks in my original plan: it completely ignored parts of the codebase that, to my untrained eye, seemed unrelated to the feature we were testing but were, in fact, very much impacted.
I had to step up my (and my team's) game. Here are the decisions I made.
Decisions
1. Create a generic test plan
First things first: create a generic testing plan.
So far, we've been testing on a feature-by-feature basis. Nothing wrong with that, but when you're working with a complex app you're bound to miss something.
This plan's philosophy is simple. What would you test if you had to tell your boss with 95% certainty that "everything is working fine"?
The 95% is important here. I don't want my team (or me) to spend a billion hours testing every button every single time. It just needs to be a thorough whip-through of the app.
The plan I put together covers 13 sections of our app (with sub-tasks per section), making it thorough but not intrusive. Going through it diligently takes me two to three hours.
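To make the idea concrete, here's a minimal sketch of how a plan like this could be represented and tracked. The section names, sub-tasks, and `progress` helper are all invented for illustration; they are not our actual plan.

```python
# Hypothetical sketch of a generic test plan: sections with sub-tasks,
# each sub-task ticked off as it's verified. All names are illustrative.
test_plan = {
    "Authentication": ["Sign up", "Log in", "Password reset"],
    "Billing": ["Upgrade plan", "Invoice download"],
    "Dashboard": ["Widgets load", "Filters apply"],
    # ...the real plan has 13 sections
}

def progress(results: dict[str, set[str]]) -> float:
    """Fraction of sub-tasks verified so far."""
    total = sum(len(tasks) for tasks in test_plan.values())
    done = sum(len(results.get(section, set())) for section in test_plan)
    return done / total

# After a testing session:
results = {"Authentication": {"Sign up", "Log in"}, "Billing": {"Upgrade plan"}}
print(f"{progress(results):.0%}")  # prints "43%" (3 of 7 sub-tasks verified)
```

Even a structure this simple makes it obvious what "95% certainty" covers and what it skips.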
2. User stories + acceptance criteria = ❤️ testing ❤️
Part of what slows you down when testing initially is that you don't really know what good looks like. I recently took on the resource-intensive (but worth it) task of creating product requirement documents (PRDs) for every key part of our platform.
Along with this work, I am also creating user stories and acceptance criteria for each user story. With these three pieces, we've got the complete puzzle:
PRDs -- Tell us why the feature was built.
User stories -- Tell us what the feature needs to do.
Acceptance criteria (AC) -- Tell us whether the feature is complete.
This little (not so little) process gives us an additional benefit. A process-cherry on our testing-cake, if you will.
We can now use these stories & ACs as a blueprint to test these parts of the application.
Say we introduced a new feature in part A of the platform. Now, instead of going through the 2-3hrs in-depth test of the entire platform using my generic test, I can whip out part A's acceptance criteria and get crackin'.
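To show how an AC can double as a test blueprint, here's a hedged sketch built around a made-up "export report" feature. The criteria, the `export_report` stand-in, and its behaviour are all invented for illustration; in practice these start as a manual checklist, but the same structure maps directly onto automated tests later.

```python
# Hypothetical acceptance criteria for an invented "export report" feature,
# written as simple executable checks. Everything here is illustrative.

def export_report(fmt: str) -> dict:
    """Stand-in for the real feature under test (illustrative only)."""
    if fmt not in ("csv", "pdf"):
        raise ValueError(f"unsupported format: {fmt}")
    return {"format": fmt, "rows": 42}

# AC1: the report exports in both supported formats.
for fmt in ("csv", "pdf"):
    assert export_report(fmt)["format"] == fmt

# AC2: an unsupported format fails loudly instead of silently.
try:
    export_report("xlsx")
except ValueError:
    pass
else:
    raise AssertionError("AC2 failed: bad format was accepted")

print("All acceptance criteria pass")
```

The point isn't the code itself: it's that a well-written AC already reads like a test case, so the targeted test practically writes itself.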
3. Report bugs on existing tickets (duh)
This was brought up by one of our developers.
So far, we had been raising individual bug tickets whilst testing. This introduced a disconnect between the work in the current sprint and the bug reports. It also removed context for the developers when solving the bugs later on.
Instead, our dev recommended we report bugs as comments right on the original tickets. A simple idea, yet we hadn't thought of it until now.
This significantly improved the speed and efficiency of our testing processes.
4. Involve the dev team
We've essentially got two processes here.
The first, the generic test plan, is the big one. Going through it would take several hours and produce an extremely thorough output.
The second is a more targeted test. Faster, but you risk missing dependency bugs.
A new feature rolls out. Which one do you pick?
I don't think there's a single right answer here. Except maybe both. My decision is to include the dev team in this conversation. They will be able to tell me:
Whether the change is minor or major (targeted vs. generic plan).
Which parts of the codebase are affected by their work. This information helps me prioritise my tests, particularly when I'm working through the generic plan.
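The two questions above can be sketched as a tiny decision helper. The inputs and rules are made up purely to illustrate the flow; in reality this is a conversation with the dev team, not a function call.

```python
# Hypothetical decision helper: pick a test strategy based on the dev
# team's assessment of a change. Inputs and rules are illustrative.

def pick_test_strategy(is_major: bool, affected_areas: list[str]) -> dict:
    if is_major:
        # Major change: run the full generic plan, but hit the areas
        # the devs flagged first.
        return {"plan": "generic", "priority": affected_areas}
    # Minor change: targeted test against the affected areas' ACs only.
    return {"plan": "targeted", "priority": affected_areas}

print(pick_test_strategy(True, ["billing", "reports"]))
# {'plan': 'generic', 'priority': ['billing', 'reports']}
```

Either way, the dev team's input decides both which plan to run and where to start.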