Book Review: A Practical Guide to Testing in DevOps by Katrina Clokie

I just finished reading Katrina Clokie’s new book, A Practical Guide to Testing in DevOps, and I wanted to do a quick write-up for anyone considering picking up the e-book.

Do it.

The book is a great primer on DevOps practices and concepts but unlike most books on the subject, it’s written from the tester’s perspective. Hallelujah! I’ve been waiting for a book just like this. The material is easy to digest while still being chock-full of quotes, research, and real industry examples that are easily referenced thanks to the clickable links in the e-book format. After reading this book, I feel like I have a good handle on what to expect as a tester as my responsibilities change in the DevOps shift.

I especially liked that she gave several examples of exercises to run with your team – such as test retrospectives and risk workshops – to help nail down expectations around tester responsibilities. I plan to run both of these with my team in the near future.

Whether your company has already made the DevOps shift or plans to in the future, or you just want a look at what’s happening in the industry right now, I cannot recommend this book enough.



On teaching developers to test


Recently, I’ve begun to teach my developers to help test our stories within the sprint. Crazy, right?


So why would I ever want to teach developers how to test? Developers develop, and testers test. That’s the way of the world. But because other duties started taking up a significant portion of my time, I was no longer able to be available for my team as much as I’d like. To prevent testing from becoming a bottleneck, and in an attempt to cross-train and become more Agile, the team decided that the developers would begin picking up some of the testing work in each sprint.

This wasn’t a decision we made lightly. There is a significant stigma against developers testing their own code and we knew we’d need to be conscious of that. We also had to overcome the idea that testing belongs to the tester – the developers were already writing unit and integration automation but exploratory testing was a whole new beast.

We discussed the potential ramifications – how would the quality of our work be affected? Could these senior developers, but junior testers, provide real value in the scope of testing?


The first thing we decided was that no developer would test their own code without pairing with another developer. Bringing in the pair testing technique helped each developer challenge their assumptions and biases and do more thorough testing. When I’m available, I also pair/mob with the developers during test sessions.

To help my developers get started, I leaned heavily on Katrina Clokie’s “Testing for Non-Testers”. I also gave them a presentation on using heuristics in testing. After each story was completed, we talked about the testing that was done and how it could be improved in the next iteration.

For each story, I spend time with the developers at the whiteboard. We mindmap the story and highlight the areas of risk that need the most testing. We come up with questions about the story that will need to be answered during the test sessions and prioritize our plan. This is the same test planning I would normally do on my own and then review with the developers, but now we do it together as a group from the beginning.

The developers follow the test plan and make notes in the story about what they tested and what they found – whether it be the code functioning as expected, an issue they discovered, or just a behavior they didn’t expect. If you’re a tester, this all probably sounds very familiar – this is the same general outline I use for my own testing notes.

Finally, I review their testing notes, do any further testing I feel is necessary, and report our findings to the Product Owner so they can sign off on acceptance.


One challenge has been getting the developers to “test outside the box”. In the beginning, they often just tested that the code did what they expected it to do – basically just acceptance testing. As a result, we had an issue where our service was outputting a checkbox from the UI as a string instead of the boolean it should have been. As a team, we discussed “building the right thing” vs “building the thing right” and how to question if what we built genuinely serves our customer’s needs.
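The checkbox bug above is the kind of defect a lightweight type check on the service’s output can surface early, even before deeper exploratory testing. As a rough sketch (the payload shape and field names here are hypothetical, not our actual service):

```python
# Sketch: verify that fields in a JSON response have the expected Python types.
import json

def check_field_types(payload: dict, expected: dict) -> list:
    """Return a list of (field, expected_type, actual_type) mismatches."""
    mismatches = []
    for field, expected_type in expected.items():
        value = payload.get(field)
        if not isinstance(value, expected_type):
            mismatches.append((field, expected_type.__name__, type(value).__name__))
    return mismatches

# The buggy response serialized the checkbox value as a string, not a boolean:
buggy = json.loads('{"name": "widget", "enabled": "true"}')
print(check_field_types(buggy, {"name": str, "enabled": bool}))
# → [('enabled', 'bool', 'str')]
```

A check like this only tells you the contract was violated, not whether the contract itself is right – that still takes a human asking whether a boolean genuinely serves the consumer of the API.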

We also struggled with “over-testing” some areas. In some cases, the developers spent significant time testing areas that were already well covered by automation, or running several variations of a test that had a low chance of returning unique results. When working within a sprint, all time is precious and must be spent wisely, and these tests offered diminishing returns. So we discussed weighing the value a test provides before running it – what information am I expecting to learn from this test? What is the risk that this test will behave differently than I expect?

This experiment also required a lot of trust and vulnerability on all sides. The team as a whole needed to be able to trust that the developers could do a sufficient job at testing, a role they were unused to performing. And because of their inexperience, the developers had to be open to trying a new role where they don’t have their usual level of expertise. I also had to be vulnerable enough to let go of feeling like I owned testing on my team.


This has been an ongoing, growing, and changing process. Just like how we iterate on our code, we’ve been iterating on our testing. Yes, some things have been missed but many were caught during the review part of the process. Do I think that the developers are skilled testers? No, they are still developers who are doing testing. But making them into testers was never the goal. The goal was to get enough testing on our stories to be able to get them accepted and into production. In that light, I’d call this a success.

One thing to note is that as much as I’ve been teaching the developers about test techniques and heuristics and other testing concepts, I have also been learning. The developers approach testing in a way very different than how I would do it and it’s opened my eyes to new ways of testing. This, too, has been a success.

Autonomy – Mastery – Purpose: What Motivates a Tester

By now, I think most people have heard of the Autonomy-Mastery-Purpose (AMP) model of motivation put forth by Dan Pink in his book Drive. He’s also given a TED talk on the subject. In case you haven’t, here is a quick overview: studies show that the traditional incentive method (a higher salary or bonuses) only produces better performance on purely mechanical tasks. In tasks that require cognitive skill, it can actually decrease performance. Dan Pink suggests that what motivates people in roles that require creative or critical-thinking skills is a trifecta of qualities called Autonomy, Mastery, and Purpose.

There’s no question that testing, as a profession, requires cognitive skills. From test design to the communication of results, every step of the way demands critical thinking and careful, deliberate cognition to perform well. A recent talk at the Quality Jam 2017 conference got me thinking about how I define AMP in my role as a tester. For each of the three qualities, I’ve listed some ways I see them manifested in my role.


Autonomy: “the desire to direct our own lives”

⇒Control over your tools – In my experience as a tester, I’ve very much valued being able to select the tools that I feel best meet the needs of my project. Being forced to use a tool that lacks key features or, sometimes worse, is unnecessarily bulky, makes the day seem longer and lackluster. Of course, many companies will want a level of consistency and will ask all the testers to use the same set of tools. In these situations I try to become a part of a feedback loop so that I can help suggest new ways to use the tools available, or even propose adoption of new tools.

⇒Control over your process – Not every project will need the same process. Being able to adjust how I work with my team makes me more energized and more productive. From Session Based Test Management to Mob Testing, I’ve been lucky to have the freedom to experiment with different testing methods. While some experiments have succeeded, it’s important to note that this must also include the freedom to try something new and fail. Even our failures can help us learn and inform our future testing efforts.
⇒Control over your development – No, that doesn’t say over your developers. This isn’t about code, but your development as a tester. Do you have the opportunities and support to learn and grow? Are you encouraged to learn skills even if they aren’t immediately or obviously useful to your current work? Being able to direct my own growth helps keep me always looking outward for new or interesting ideas on how to improve myself and my work.



Mastery: “the urge to get better and better at something that matters”

⇒Building and improving skills – James Bach proposed that there are “Seven Kinds of Testers” that he has identified during his time coaching others. I feel like I naturally fall into a mix of the Administrative, Emotional, and Social tester categories. Knowing your own strengths can be a big boon when it comes to self-improvement. You can choose to play to those strengths or use them as a way to identify the gaps in your skills. While I am a firm believer in the benefits of specialization, I also prefer to build at least a baseline knowledge in as many areas as possible. I’ve recently been working to improve my technical and analytical skills. I recommend the Ministry of Testing Masterclass “Multiplying the Odds” by Fiona Charles as a jumping-off point for ideas on specific skillsets to learn or improve.

⇒Feeling like your input is valued – I think this facet can take several forms in the workplace. Being invited to planning or refinement meetings, influencing the development strategy by asking critical questions, and informing stakeholders of potential value or risk are all areas where a tester could feel like their voice was treasured – or trashed. Knowing that others want to hear what I have to say makes me feel like they see me as a subject matter expert – or at least as subject matter experienced. This can include both testing and domain knowledge.


Purpose: “the yearning to do what we do in the service of something larger than ourselves”

⇒Serving the Customer – Some people work in industries that match their personal interests or activist passions, but everyone works in an industry with customers. No matter what you do, the software you help produce is making someone’s life better somehow. It’s very fulfilling to know that I’m improving someone’s experience. I find that keeping this in mind makes my job brighter and more personally satisfying. There is a real sense of pride when I know I assisted in building a quality product.

⇒Pushing technology/design boundaries – Sometimes I’m motivated simply by the idea of getting to work with a new technology or design idea. The first time I worked on a responsive design website I thought it was just SO cool. Getting to explore new frameworks or UX concepts brings both purpose and a new potential area of mastery to my role.

There are numerous ways that the AMP motivation model can be applied to the tester role. Having worked in environments with and without AMP, I do believe that it makes a difference in the level of my performance. Being self-directed, feeling competent, and understanding the bigger picture of what the product I work on provides helps me push myself harder and do better work. However, I do feel like this model is missing a factor – passion. Simply loving what you do makes getting up to go to work each day easier. If you think of any other factors or AMP facets, please mention them in a comment below!


Quality Jam 2017 Recap

This post was co-authored by Margie Kehr

—-Link to presentation recordings—-


I was really excited to get to attend the 2017 Quality Jam. The schedule boasted speakers like Keith Klain and Michael Bolton (no, not either one of those Michael Boltons) so I knew it was going to be informative and fun. Plus a REAL LIFE DISNEY IMAGINEER TESTER! I attended several talks on topics including automation, agile testing, testing careers, building testing teams, and rapid software testing. Here are a few of the points from the talks that stuck with me.

**“Modern Apps Need Modern Development Practices” by Jeff Hammond** – Jeff talked about how the current software development industry really needs a different kind of mindset or culture to succeed. He highlighted four qualities/practices that he sees as necessary for an organization to flourish in today’s development world.

  1. Speed and recovery valued over perfection and control
  2. A collaborative learning culture
  3. Automation that enables the creativity of the testing team
  4. A goal of building data driven experiences for the users

While I felt proud to know that we have been working towards those four points at my workplace, there was another section of the talk I found especially inspiring. Almost everyone has heard of the Autonomy-Mastery-Purpose model of job satisfaction, but as Jeff spoke about it, I really began to wonder what that looks like for a tester. Is it any different than for a developer? What model do I use to determine if I have Mastery in my role?

**“Evolve or Die” by Mike Cooper** – This talk was about building and growing your career as a tester. I really liked how he spoke about taking personal responsibility for your career growth by setting your own career goals and figuring out what skills you need to get there. He emphasized not neglecting the soft skills, which is something I feel we all often do as we focus our learning efforts on new tools and techniques.

Mike also talked about the impact testers have on their organizations. He said “it’s QA who leads ATDD, BDD, and CI/CD adoption.” I hear so many testers in the community talking about how these things will be the “death of testing”, but if that’s true then why are testers leading the charge? I think that this contradiction was really what was at the heart of his talk – testers who evolve and grow themselves will always have a place in the software development field.

**“How to Get Automation in Your Definition of Done” by Angie Jones** – “Automation is feedback. Ask if you really want this feedback and how often you want it before you automate.” I’ve seen and heard a variation of this quote in almost every automation article or talk I’ve encountered, but I feel like it can’t be said enough. Other good advice from the talk included getting others involved when developing your automation plan: product owners to provide usage analytics and customer pain points, testers to help build scenarios, and developers to build an initial contract between the development and the automation. Finally, I really liked how she advocated an incremental approach to automation. Building “just enough” automation to support the rest of the development process prevents wasted effort – but you can only do that well with the collaboration of the whole development team. This talk was just further proof that quality is something everyone on the team owns, not just the testers! I asked her how she prefers to handle test data and she directed me to “Four Test Data Strategies” by Paul Merrill. It was an excellent and informative read/listen!

**“Debugging Your Test Team” by Keith Klain** – I felt like Keith knew the questions I would have after Jeff Hammond’s talk about mastery and purpose for testers and designed his talk to revolve around how to answer them. He identified three statements that are signs that someone doesn’t have AMP in their role.

  • “I don’t think deeply about my work.”
  • “I don’t trust my team.”
  • “I don’t like testing.”

I found all three of these very powerful – I feel like all the amazing testers I’ve seen have been critical thinkers who built solid relationships with their team and really loved what they did. Seeing that idea boiled down into those three sentences was a serious ah-ha moment. As a follow-up inspired by this talk, I began thinking critically about our own work: what are our testing values at my company? Are we all striving to uphold those values? What could we do to improve? Because as Keith said, “No one needs permission to do their job better.”


There were a number of great sessions at the Quality Jam conference, all providing new testing perspectives and insights. Aside from the highlights that Jasmin mentioned above, here are a few more takeaways for me…

Keynote: Josh Clark & Chuck Bryant | A History of Testing (And Things that Failed Spectacularly)

Josh and Chuck, from the award-winning podcast “Stuff You Should Know”, provided a hilarious account of products that failed miserably in the market because testing was either incomplete or overlooked altogether. They talked about iSmell, developed by DigiScents in 2001, which cost $20 million in research & development to produce the prototype – a peripheral device that connected to a PC via USB, designed to emit smells of the internet. However, none of that research involved user testing, market surveys, or proving consumer demand – it turns out no one wanted to smell the internet! Another product Josh and Chuck mentioned was the E.T. Atari game, whose designer was asked to complete development within a very tight five-week deadline for the 1982 holiday season. This of course left no room for testing, and the game ended up quite buggy (one bug let the player fall into a pitfall loop) and ultimately a very poor gameplay experience. These examples make you wonder how successful these products could have been if the user testing phase had been more prominent. I believe iSmell would never have made it to market, and the E.T. game could have been legendary (and not buried, literally).

Keynote: Michael Bolton | A Ridiculously Rapid Introduction to Rapid Software Testing

For anyone who has heard of Michael Bolton, James Bach, or Rapid Software Testing (RST) and wants to learn more – I would suggest watching Michael’s super-duper-rapid 50-minute intro-to-RST conference session. Michael notes that testing investigates risk and that testers should focus on two main questions: Is there a problem? And are we okay with this? He goes on to suggest that every tester should have their own model (be it HTSM or something else), and that testing should be exploratory, not confirmatory. Machines can only check, while humans can do ALL THE THINGS (learn, explore, collaborate, infer, question, observe, study, etc.). The main thing we should remember as testers is to be suspicious of the things we perceive and conceive while we test, and to always question our assumptions.

And with that, I will leave you with a quote from Michael Bolton:

“Testing is like the tale of Captain Jack Sparrow, pirate so brave on the seven seas…”

Oh wait, wrong guy… here’s the right quote from the right guy:

“Exploratory Testing is not so much a thing that you do; it’s far more a way that you think.”


Welcome to my blog…

My very first blog! My very first blog post! This is all very exciting to me so please forgive me if I overuse exclamation points!

My mentors, Connor Roberts and Brian Kurtz, often pushed me to start writing, but I always felt like I didn’t have anything worthwhile to say. But now, as I creep up on a decade of working in software testing, I finally feel like I have some things I want to share. I hope some of what I share is useful to someone. Or at least interesting.