The Testday Brand

Over the last few months I’ve been surveying people who’ve participated in testdays. The purpose of this effort is to develop an understanding of the current “brand” that testdays present. I’ve come to realize that our goal to “re-invigorate” the testdays program was based on the assumption that testdays were both well known and misunderstood. I wanted to cast that assumption aside and start building a new plan, one that includes developing a positive brand.

The survey itself was quite successful as I received over 200 responses, 10x what I normally get out of my surveys. I suspect this was because I kept it short, under a minute to complete; something I will keep in mind for the future.

Who Shared the Most?

[Chart: testday-responses]

When looking at who responded most, the majority were unaffiliated with QA (53%). Of the 47% who were affiliated with QA, nearly two thirds were volunteers.

How do they see themselves?

[Chart: testday-mozillians]

When looking at how respondents self-identified, the only people who did not self-identify as Mozillians were those who identified as volunteers. In terms of vouching, people affiliated with QA seem to have a higher proportion of vouched Mozillians than those unaffiliated with QA. This tells me that we need to be doing a better job of converting new contributors into Mozillians and active contributors into vouched Mozillians.

What do they know about Testdays?

[Chart: testday-familiarity]

When looking at how familiar people are with the concept of testdays, people affiliated with QA are the most aware while people outside of QA are the least aware. No group is 100% familiar with testdays, which tells me we need to do a better job of educating people about them.

What do they think about Testdays?

[Chart: testday-keywords]

Most respondents identified testdays with some sort of activity (30%), a negative feeling (22%), a community aspect (15%), or a specific product (15%). Positive characteristics were lowest on the list (4%). This was probably the most telling question I asked, as it really helps me see the current state of the testday brand, and not just in the responses I received. Reading between the lines, looking for what is not there, I can see that testdays relate poorly to anything outside the scope of blackbox testing on Firefox (e.g. automation, services, Web QA, security QA, etc.).

Where do I go from here?

1. We need to diversify the testday brand so that it is about more than testing Firefox, expanding it to enable testing across all areas in need.

2. We need to address some of the negative brand associations by making activities more understandable and relevant, by holding shorter events more frequently, and by rewarding contributions (even work that doesn’t net a bug).

3. We need to teach people that testdays are about more than just testing. Things like writing tests, writing new documentation, updating and translating existing documentation, and mentoring newcomers are all part of what testdays can enable.

4. Once we’ve identified the brand we want to put forward, we need to do a much better job of frequently educating and re-educating people about testdays and the value they provide.

5. We need to enable testdays to facilitate converting newcomers into Mozillians and active contributors into vouched Mozillians.

My immediate next step is to integrate the lessons I’ve learned here into a plan of action for rebranding testdays. Rest assured I am going to continue to push my peers on this, to be an advocate for improving the ways we collaborate, and to continually revisit the brand to make sure we aren’t losing sight of reality.

I’d like to end with a thank you to everyone who took the time to respond to my survey. As always, please leave a comment below if you have any interesting insights or questions.

Thank you!

Report on Recognition

Earlier this year I embarked on a journey to investigate how we could improve participation in the Mozilla QA community. I was growing concerned that we were failing in one key component of a vibrant community: recognition.

Before I could begin, I needed to better understand Mozilla QA’s recognition story. To this end I published a survey to collect anonymous feedback, and today I’d like to share some of that feedback.

Profile of Participants

The first question I asked was intended to profile the respondents in terms of how long they’d been involved with Mozilla and whether they were still contributing.

[Chart: recognition-activity]

This revealed that we have a larger proportion of contributors who’ve been involved for more than a couple of years. I think this indicates we need to be doing a better job of developing long-term relationships with new contributors.

[Chart: recognition-teams]

When asked which projects contributors identified with, 100% of respondents identified as volunteers with the Firefox QA team. The remaining teams break down fairly evenly between 11% and 33%. I think this indicates most people are contributing to more than one team, and that teams at the lower end of the scale have an excellent opportunity for growth.

Recognizing Recognition

The rest of the questions were focused more on evaluating the forms of recognition we’ve employed in the past.

[Chart: recognition-forms]

When looking at how we’ve recognized contributors, it’s good to see that everyone is being recognized in some form or another, in many cases receiving multiple forms of recognition. However, I suspect the results are somewhat skewed (i.e. people who haven’t been recognized are probably long gone and did not respond to the survey). In spite of that, it appears that seemingly simple things, like being thanked in a meeting, are well below what I’d expect to see.

[Chart: recognition-likelihood]

When looking at the impact of being recognized, it seems that more people found recognition to be nice than found it to be a motivation for continuing to contribute. 44% found recognition to be either ineffective or very ineffective while 33% found it to be either effective or very effective. This could point to a couple of different factors: either our forms of recognition are not compelling, or people are motivated by the work itself. I don’t have a good answer here so it’s probably worth following up.

What did we learn?

When all is said and done, here is what I learned from this survey.

1. We need to be focused on building long-term relationships: helping people through their first year and making sure they don’t get lost over the long term.

2. Most people are contributing to multiple projects. We should have a framework in place that facilitates contribution (and recognition of contribution) across QA. Teams with less participation can then scale more quickly.

3. We need to be more proactive in our recognition, especially in its simplest form. There is literally no excuse for not thanking someone for work done.

4. People like to be thanked for their work but it isn’t necessarily a definitive motivator for participation. We need to learn more about what drives individuals and make sure we provide them whatever they need to stay motivated.

5. Recognition is not as well “baked in” to QA as it is with other teams; we should work with these teams to improve recognition within QA and across Mozilla.

6. Contributors find testing to be difficult due to inadequate description of how to test. In some cases, people spend considerable amounts of time and energy figuring out what and how to test, presenting a huge hurdle to newcomers in particular. We should make sure contribution opportunities are clearly documented so that anyone can get involved.

7. We should be engaging with Mozilla Reps to build a better, more regional network of QA contributors, beginning with giving local leaders the opportunity to lead.

Next Steps

In closing, I’d like to thank everyone who took the time to share their feedback. The survey remains open if you missed the opportunity. I’m hoping this blog post will help kickstart a conversation about improving recognition of contributions to Mozilla QA, and in particular about making progress on the lessons learned above.

As always, I welcome comments and questions. Feel free to leave a comment below.

Cheers!

Ninety Days with DOM

Last quarter marked a fairly significant change in my career at Mozilla. I spent most of the quarter adjusting to multiple re-orgs which left me as the sole QA engineer on the DOM team. Fortunately, as the quarter wraps up I feel like I’ve begun to adjust to my new role and started to make an impact.

Engineering Impact

My main objective this quarter was to improve the flow of DOM bugs in Bugzilla by developing and documenting some QA processes. A big part of that work was determining how I was going to measure impact, so I decided the simplest way to do that was to take the queries I would be working with and plot the data in Google Docs.

The solution was fairly primitive and lacked the ability to feed into a dashboard in any meaningful way, but as a proof of concept it was good enough. I established a baseline using the week-by-week numbers going back a couple of years. What follows is a crude representation of these figures and how the first quarter of 2015 compares to the now three years of history I’ve recorded.
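For anyone curious what that tallying looks like in practice, below is a rough sketch of the kind of weekly count involved, using the public Bugzilla REST API. The product, component, and keyword values are assumptions standing in for my actual saved searches, but the shape is the same: count unresolved DOM bugs carrying the regression or crash keyword, once a week, and log the numbers.

```python
# Sketch only: weekly counts of unresolved DOM regressions and crashes
# via the Bugzilla REST API. The product/component/keyword values are
# assumptions, not the exact saved searches referenced in the post.
import requests

BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"

def count_unresolved(keyword):
    """Count open Core :: DOM bugs carrying the given keyword."""
    params = {
        "product": "Core",
        "component": "DOM",
        "keywords": keyword,
        "resolution": "---",     # "---" means the bug is still unresolved
        "include_fields": "id",  # ids are enough; we only want a count
        "limit": 0,              # 0 asks Bugzilla for all matching bugs
    }
    response = requests.get(BUGZILLA, params=params, timeout=60)
    response.raise_for_status()
    return len(response.json().get("bugs", []))

if __name__ == "__main__":
    for keyword in ("regression", "crash"):
        print(keyword, count_unresolved(keyword))
```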

Volume of unresolved Regressions & Crashes
[Chart: dom.regressions-vs-crashes.unresolved.alltime.2015q1] Regressions +55%, Crashes +188% since 2012

Year-over-Year trend in Regressions and Crashes
[Chart: dom.regressions-vs-crashes.unresolved.annual.2015q1] Regressions +9%, Crashes +68% compared to same time last year

Regressions and Crashes in First Quarters
[Chart: dom.regressions-vs-crashes.unresolved.quarterly.2015q1] Regressions -0.6%, Crashes +19% compared to previous 1st Quarters

Resolution Rate of Regressions and Crashes
[Chart: dom.regressions-vs-crashes.fixrate.2015q1] 90% of Regressions resolved (+2.5%), 80% of Crashes resolved (-7.0%)

Change in Resolution Rate compared to total Volume
[Chart: dom.regressions-vs-crashes.volume.2015q1] Regression resolution +2.5%, Crash resolution -6.9%, Total volume +68%

I know that’s a lot of data to digest, but I believe these figures show that embedding QA with the DOM team is having some initial success.

It’s important to acknowledge the DOM team for maintaining a very high resolution rate (90% for regressions, 80% for crashes) in the face of aggressive gains in total bug volume (68% in three years). They have done this largely on their own with minimal assistance from QA over the years, giving us a solid foundation from which we could build.
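Roughly speaking, the resolution rate here is just the share of all bugs in a category that have been resolved; a trivial sketch of that arithmetic, with made-up counts:

```python
# Made-up figures, only to show the arithmetic behind a "resolution rate".
def resolution_rate(resolved, unresolved):
    return resolved / (resolved + unresolved)

print(f"{resolution_rate(900, 100):.0%}")  # 90%, e.g. regressions
print(f"{resolution_rate(800, 200):.0%}")  # 80%, e.g. crashes
```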

For DOM regressions I focused on making existing bug reports actionable, with less focus on filing new regression bugs. This has been a two-part effort: the first part focused on finding regression windows for known regression bugs, the second on converting unconfirmed bugs into actionable regression reports. I believe this is why we see a marginal increase in the regression resolution rate (+0.4% last quarter).

For DOM crashes I focused on filing previously unreported crashes (basically anything above a 1% report threshold). Naturally this has led to an increase in reports but has also led to some crashes being fixed that wouldn’t have been otherwise. Overall the crash resolution rate declined by 2.6% last quarter but I believe this should ultimately lead to a more stable product in the future.
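To make the 1% threshold concrete, here is a small sketch of how one might flag which crash signatures are worth filing, given (signature, report count) pairs pulled from crash report data; the helper name and sample values are invented for illustration.

```python
# Flag crash signatures accounting for more than 1% of all reports in a
# period. The sample signatures and counts below are purely illustrative.
def signatures_worth_filing(signature_counts, threshold=0.01):
    total = sum(count for _, count in signature_counts)
    return [sig for sig, count in signature_counts if count / total > threshold]

sample = [
    ("OOM | small", 1200),
    ("hypothetical::dom::Signature", 40),
    ("another::hypothetical::Frame", 8),
]
print(signatures_worth_filing(sample))  # -> the first two signatures
```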

The Older Gets Older

The final chart below plots the median age of unresolved DOM bugs week over week, which currently sits at 542 days: an increase of 4.8% this past quarter and 241% since January 1, 2012. I include it here not as a visualization of impact but as a general curiosity.

Median Age of Unresolved DOM Bugs
[Chart: dom.regressions-vs-crashes.fixrate.2015q1] Median age is 542 days, +4.8% last quarter, +241% since 2012

I have not yet figured out what this means in terms of overall quality or whether it’s something we need to address. I suspect recently reported bugs tend to get fixed sooner since they tend to be more immediately visible than older bugs, a fact that is likely common to most, if not all, components in Bugzilla. It might be interesting to see how this breaks down in terms of the age of the bugs being fixed.
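For reference, the median-age number itself is easy to reproduce from the creation timestamps the Bugzilla REST API returns for each open bug; a minimal sketch, assuming ISO-8601 creation_time values like the API provides (the timestamps below are illustrative only):

```python
# Median age, in days, of a set of still-open bugs given their creation
# timestamps (ISO-8601 strings, as returned by the Bugzilla REST API).
from datetime import datetime, timezone
from statistics import median

def median_age_days(creation_times, now=None):
    now = now or datetime.now(timezone.utc)
    ages = [
        (now - datetime.fromisoformat(ts.replace("Z", "+00:00"))).days
        for ts in creation_times
    ]
    return median(ages)

print(median_age_days(["2012-03-01T10:00:00Z", "2013-11-20T08:30:00Z", "2015-01-05T12:00:00Z"]))
```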

What’s Next

My plan for the second quarter is to identify a subset of these metrics to take out of Google Docs and convert into a proof-of-concept dashboard. I’m hoping my peers on the DOM team can help me identify at least a couple that would be both interesting and useful. If it works out, I’d like to aim to expand this to more Bugzilla components later in the year so more people can benefit.

If you share my interest and have any insights please leave a comment below.

As always, thank you for reading.

[UPDATE: I decided to quickly mock up a chart showing the age breakdown of bugs fixed this past quarter. As you can see below, younger bugs account for a much greater proportion of the bugs being fixed, perhaps unsurprisingly.]

[Chart: age breakdown of bugs fixed in Q1 2015]