Alex’s Newsletter


#010 - Creating a sensible testing plan

Huntin' dem bugs down as early as possible

Alex Debecker
Feb 21, 2022

Challenge

Testing is a core and constant part of running a product.

In my time running/growing/managing SaaS platforms, I've noticed I -- somewhat surprisingly -- really enjoy testing. In fact, my #004 post was about reporting bugs (and, thus, testing).


Over the last year or so, I've put more and more in place to ensure testing not only happens but happens consistently and its results are acted upon. I am confident any SaaS business without a half-decent testing structure is doomed to fail (or go slower than its competitors, which is essentially the same thing).

These last couple of sprints have put my initial processes to the test, for a few reasons.

  • We ran three high-impact epics in parallel. I know, boo.

  • One of these epics was a major refactor of several key parts of our codebase.

  • I noticed cracks in my original plan; namely, it completely ignored parts of the codebase that, to my untrained eye, seemed unrelated to the new feature we were testing but were, in fact, very much impacted.

I had to step up my (and my team's) game. Here are the decisions I made.

Decisions

1. Create a generic test plan

First things first: create a generic testing plan.

So far, we've been testing on a feature-by-feature basis. Nothing wrong with that, but when you're working with a complex app, you're bound to miss something.

This plan's philosophy is simple. What would you test if you had to tell your boss with 95% certainty that "everything is working fine"?

The 95% is important here. I don't want my team (or me) to spend a billion hours testing every button every single time. It just needs to be a thorough whip-through of the app.

The plan I put together covers 13 sections of our app (with sub-tasks per section) for a thorough but not intrusive run. It would take me two to three hours to go through it diligently.
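
If it helps to picture it, here's a minimal sketch (in Python, with invented section names and sub-tasks, not our actual plan) of how a generic plan like this can live as a simple checklist you print before a testing session:

# generic_test_plan.py -- a minimal sketch of a section-based test plan.
# The sections and sub-tasks below are invented placeholders.

GENERIC_PLAN = {
    "Authentication": ["Sign up", "Log in / log out", "Password reset"],
    "Billing": ["Upgrade plan", "Downgrade plan", "Invoice history"],
    "Reporting": ["Generate a report", "Export to CSV", "Scheduled emails"],
    # ...the real plan has 13 sections...
}


def print_checklist(plan):
    """Print the plan as a checklist to tick off during a testing session."""
    for section, tasks in plan.items():
        print(f"\n=== {section} ===")
        for task in tasks:
            print(f"[ ] {task}")


if __name__ == "__main__":
    print_checklist(GENERIC_PLAN)

Whether it lives in code, a spreadsheet, or a doc doesn't matter much; the point is that the sections are written down once and reused every time.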

2. User stories + acceptance criteria = ❤️ testing ❤️

Part of what slows you down when you start testing is that you don't really know what good looks like. I recently took on the resource-intensive (but worth it) task of creating product alignment documents (PADs) for every key part of our platform.

Along with this work, I am also creating user stories and acceptance criteria for each user story. With these three pieces, we've got the complete puzzle:

  1. PADs -- Tell us why the feature was built.

  2. User stories -- Tell us what the feature needs to do.

  3. Acceptance criteria (AC) -- Tell us whether the feature is complete.

This little (not so little) process gives us an additional benefit. A process-cherry on our testing-cake, if you will.

We can now use these stories & ACs as a blueprint to test these parts of the application.

Say we introduced a new feature in part A of the platform. Now, instead of going through the 2-3hr in-depth test of the entire platform using my generic plan, I can whip out part A's acceptance criteria and get crackin'.
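
To make that concrete, here's a rough sketch of how part A's acceptance criteria could drive a targeted test run. It's Python with pytest, and the feature, criteria, and FakeApp client are all invented for illustration, not our actual suite:

# test_part_a.py -- a sketch of acceptance criteria used as a test blueprint.
# "Part A", its criteria, and FakeApp are invented for illustration.
import pytest


class FakeApp:
    """Stand-in for a real client/driver against a test environment."""

    def __init__(self):
        self._projects = []

    def create_project(self, name, deadline):
        self._projects.append(name)
        return name

    def list_projects(self):
        return list(self._projects)


# Each acceptance criterion becomes one parametrised test case.
ACCEPTANCE_CRITERIA = [
    ("A project can be created with a name and a deadline",
     lambda app: app.create_project("Demo", "2022-03-01") is not None),
    ("The project list shows newly created projects",
     lambda app: app.create_project("Demo", "2022-03-01")
     and "Demo" in app.list_projects()),
]


@pytest.fixture
def app():
    return FakeApp()


@pytest.mark.parametrize("criterion,check", ACCEPTANCE_CRITERIA)
def test_acceptance_criterion(app, criterion, check):
    # A failing check points straight at the acceptance criterion it breaks.
    assert check(app), f"Acceptance criterion not met: {criterion}"

Run it with pytest and a red test tells you exactly which criterion broke, without touching the rest of the platform.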

3. Report bugs on existing tickets (duh)

This was brought up by one of our developers. 

So far, we had been raising individual bug tickets whilst testing. This introduced a disconnect between the work in the current sprint and the bug reports. It also removed context for the developers when solving the bugs later on.

Instead, our dev recommended we comment bugs right on the actual tickets. A simple idea, yet we hadn't thought of it until now.

This significantly improved our testing processes.

4. Involve the dev team

We've essentially got two processes here.

The first, the generic test plan, is the big one. Going through it would take several hours and produce an extremely thorough output. 

The second is a more targeted test. Faster, but you risk missing out on some dependency bugs.

A new feature rolls out. Which one do you pick?

I don't think there's an answer here. Except maybe both. My decision here is to include the dev team in this conversation. They will be able to tell me:

  • Whether the change is minor or major (targeted vs. generic plan).

  • Which parts of the codebase are affected by their work. This information will help me prioritise my tests, particularly if I'm working through the generic test plan.
