Visualizing Crash Data in Bugzilla

Since joining the Platform Graphics team as a QA engineer several months ago, I’ve dabbled in visualizing Graphics crash data using the Socorro SuperSearch API and the MetricsGraphics.js visualization library.

After I gained a better understanding of the API and MG.js, I set up a GitHub repo as a sandbox to play around with visualizing different types of crash data. Some of these experiments include a tool to visualize top crash signatures for vendor/device/driver combinations, a tool to compare crash rates between Firefox and Fennec, a tool to track crashes from the graphics startup test, and a tool to track crashes with a WebGL context.

Top crash dashboard for our most common device/driver combination (Intel HD 4000)
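For the curious, all of these tools boil down to variations on one SuperSearch query. The sketch below shows the general shape of such a query in Python; the parameter names and response format reflect my reading of the SuperSearch API documentation, and the signature is a placeholder, so treat this as illustrative rather than authoritative.

```python
# Minimal sketch: daily crash counts for a signature via Socorro SuperSearch.
# Assumes the public crash-stats endpoint; the signature is a placeholder.
import requests

SUPERSEARCH = "https://crash-stats.mozilla.org/api/SuperSearch/"

def daily_crash_counts(signature, start_date):
    """Return {date: count} of crash reports matching `signature`."""
    params = {
        "signature": "=" + signature,    # "=" prefix asks for an exact match
        "date": ">=" + start_date,       # e.g. "2016-01-01"
        "_histogram.date": "signature",  # bucket matching reports by day
        "_results_number": 0,            # aggregations only, no report bodies
    }
    response = requests.get(SUPERSEARCH, params=params)
    response.raise_for_status()
    buckets = response.json()["facets"]["histogram_date"]
    return {bucket["term"]: bucket["count"] for bucket in buckets}

for day, count in sorted(daily_crash_counts("OOM | small", "2016-01-01").items()):
    print(day, count)
```

From there, MetricsGraphics.js only needs the resulting date/count pairs to draw a chart.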

Fast forward to June: I had the opportunity to present some of this work at the Mozilla All-hands in London. As a result of this presentation I had a fruitful conversation with Benoit Girard, a fellow engineer on the Graphics team. We talked about integrating my visualization tool with Bugzilla by way of a Bugzilla Tweaks add-on; this would both improve the functionality of Bugzilla and raise awareness of my tool. To my surprise this was actually pretty easy, and I had a working prototype within 24 hours.

Since then I’ve iterated a few times, fixing some bugs based on reviews from the AMO Editors team. With version 0.3 I am satisfied enough to publicize it as an experimental add-on.

Bugzilla Socorro Lens (working title) appends a small snippet to the Crash Signatures field of bug reports, visualizing 365 days’ worth of aggregate crash data for the signatures in the bug. With BSL installed it becomes more immediately evident when a crash started being reported, if/when it was fixed, how the crash is trending, and whether it is spiking; all without having to manually search Socorro.

Socorro snippet integration

Of course, if you want to see the data in Socorro you can. Simply click a data point on the visualization and a new tab will open in Socorro showing the crash reports for that date. This is particularly useful when you want to see what may be driving a spike.
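If you’re wondering how a click could map to a Socorro page: a data point only needs its signature and date to reconstruct a search URL. Here’s a rough sketch, assuming the crash-stats search page accepts the same signature/date filters as the API.

```python
# Sketch: build a Socorro search URL for one day's crash reports.
# Assumes the crash-stats /search/ page accepts SuperSearch-style filters.
from datetime import date, timedelta
from urllib.parse import urlencode

def socorro_search_url(signature, day):
    """Link to crash reports matching `signature` on a given day."""
    params = [
        ("signature", "=" + signature),
        ("date", ">=" + day.isoformat()),                       # start of day
        ("date", "<" + (day + timedelta(days=1)).isoformat()),  # start of next day
    ]
    return "https://crash-stats.mozilla.org/search/?" + urlencode(params)

print(socorro_search_url("OOM | small", date(2016, 6, 15)))
```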

At the moment BSL is an experimental add-on. I share it with you today to see if it’s useful and to collect feedback. If you encounter a bug or have a feature request, I invite you to file an issue in my GitHub repo. Since this project is a learning experience for me as much as a productivity exercise, I am not accepting pull requests at this time. I welcome your feedback and look forward to improving my coding skills by resolving your issues.

You can get the add-on from

[Update] Nicholas Nethercote informed me of an issue where the chart won’t display if you have the “Experimental Interface” enabled in Bugzilla. I have filed an issue in my GitHub repo and will take a look at this soon. In the meantime, you’ll have to use the default Bugzilla interface to make use of this add-on. Sorry for the inconvenience.

Reducing the NVIDIA Blacklist

We recently relaxed our graphics blocklist for users with NVIDIA hardware using certain older graphics drivers. The original blocklist entry blocked all versions less than v8.17.11.8265 due to stability issues with NVIDIA 182.65 and earlier. However, we recently learned that the first two numbers in the version string indicate the platform version, while the latter numbers encode the actual driver version. As a result we were inadvertently blocking newer drivers on older platforms.
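To make the convention concrete, here’s a small sketch of the conversion as I understand it: the marketing driver version is encoded in the last five digits of the final two fields of the DirectX version string.

```python
# Sketch: recover the NVIDIA driver version from a Windows DirectX
# version string. Convention (as I understand it): take the last five
# digits of the final two fields and split them as XXX.XX.
def nvidia_driver_version(dx_version):
    fields = dx_version.split(".")
    digits = (fields[2] + fields[3])[-5:]  # e.g. "11" + "8265" -> "18265"
    return "{}.{}".format(int(digits[:3]), digits[3:])

assert nvidia_driver_version("8.17.11.8265") == "182.65"  # our old cutoff
assert nvidia_driver_version("9.18.13.2049") == "320.49"  # a more recent driver
```

This is why a version comparison against the full string goes wrong: a newer driver on an older platform can have a numerically smaller string than an older driver on a newer platform.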

We have since opened up the blocklist for versions newer than (Win XP) and (Vista/Win7) via bug 1284322, effectively drivers released after mid-2009. This change currently exists only on Nightly, but we expect it to ride the trains unless some critical regression is discovered.

If you are triaging bugs and user feedback, or are engaging with users on social media, please keep an eye out for users with NVIDIA hardware. If a user does have NVIDIA hardware, have them check the Graphics section of about:support to confirm whether they are using a driver version that was previously blocked. If they are, try to help them update to the most recent driver version. If the issue persists, have them disable hardware acceleration to see if the issue goes away.

The same goes if you are a user experiencing quality issues (crashes, hangs, black screening, checkerboarding, etc) on NVIDIA hardware with these drivers. Please make sure your drivers are up to date.

In either case, if the issue persists please file a bug so we can investigate what is happening.

Feel free to email me if you have any questions.

Thank you for your help!

London Calling

I’m happy to share that I will be hosting my first ever All-hands session, Graphics Stability – Tools and Measurements (12:30pm on Tuesday in the Hilton Metropole), at the upcoming Mozilla All-hands in London. The session is intended to be a conversation about taking a more collaborative approach to data-driven decision-making as it pertains to improving Graphics stability.

I will begin by presenting how the Graphics team is using data to tackle the graphics stability problem, reflecting on the problem at hand and the various approaches we’ve taken to date. My hope is that this serves as a catalyst for lively discussion for the remainder of the session, resulting in a plan for more effective data-driven decision-making through collaboration.

I am extending an invitation to those outside the Graphics team, to draw on a diverse range of backgrounds and expertise. As someone with a background in QA, data analysis is an interesting diversion (some may call it an evolution) in my career — it’s something I just fell into after a fairly lengthy and difficult transitional period. While I’ve learned a lot recently, I am an amateur data scientist at best and could certainly benefit from more developed expertise.

I hope you’ll consider being a part of this conversation with the Graphics team. It should prove to be both educational and insightful. If you cannot make it, not to worry, I will be blogging more on this subject after I return from London.

Feel free to reach out to me if you have questions.

9 Years

Today I pause to reflect.

9 years ago last Saturday was my first day at Mozilla as an intern. It was also the first time I set foot in Silicon Valley, the first time I set foot in California, even the first time I set foot on the west coast of North America. It was a day of many emotions: anticipation, excitement, fear, and pride.

Taking big risks is not something that comes easy to me. Deciding to hop on a plane and travel to California not knowing what my future held. Deciding only a few months earlier to take a risk with my career and venture in to open source software. Deciding just two years before to risk leaving a stable career in the Canadian military to earn a software development degree at Seneca College. These are things that took a lot of personal courage, and I suspect much courage for my parents as well.

Mozilla has changed a lot since that early Spring day in 2007 but so have I. Through all the things I’ve experienced. The ups and downs. The re-orgs and pivots. The flamewars. The burnout. The one thing that hasn’t changed is the people of Mozilla — they are my rock.

They provided guidance and mentorship when I needed it most. They helped me learn from my mistakes, celebrate my victories, and saved me from burning out on more than one occasion. In short, I would not be here today without them.

Some of my best personal relationships are those I’ve developed at Mozilla. I hold the highest of respect for these people and while many have moved on I still think of them frequently. I strive to be what they were to me: a mentor in work and a mentor in life.

If you’re reading this and we’ve interacted at all in the past, I write this for you. Every experience we’ve shared, every discussion we’ve had, it changes us. This I believe fundamentally.

When people ask me why I work for Mozilla (and why I’ll continue to for the foreseeable future), the answer is simple: the people. To me they are like family, nothing else matters.


The Testday Brand

Over the last few months I’ve been surveying people who’ve participated in testdays. The purpose of this effort is to develop an understanding of the current “brand” that testdays present. I’ve come to realize that our goal to “re-invigorate” the testdays program was based on the assumptions that testdays were both well-known and misunderstood. I wanted to cast aside those assumptions and make progress on a new plan, one which includes developing a positive brand.

The survey itself was quite successful as I received over 200 responses, 10x what I normally get out of my surveys. I suspect this was because I kept it short, under a minute to complete; something I will keep in mind for the future.

Who Shared the Most?


When looking at who responded, the majority were unaffiliated with QA (53%). Of the 47% who were affiliated with QA, nearly two-thirds were volunteers.

How do they see themselves?


When looking at how respondents self-identified, the only people who did not self-identify as Mozillians were volunteers. In terms of vouching, people affiliated with QA seem to have a higher proportion of vouched Mozillians than those unaffiliated with QA. This tells me that we need to do a better job of converting new contributors into Mozillians and active contributors into vouched Mozillians.

What do they know about Testdays?


When looking at how familiar people are with the concept of testdays, people affiliated with QA are most aware while people outside of QA are least aware. No group was 100% familiar with testdays, which tells me we need to do a better job of educating people about testdays.

What do they think about Testdays?


Most respondents identified testdays with some sort of activity (30%), a negative feeling (22%), a community aspect (15%), or a specific product (15%). Positive characteristics were lowest on the list (4%). This was probably the most telling question I asked, as it really helps me see the current state of the testdays brand. Reading between the lines, looking for what is not there, I can see testdays relate poorly to anything outside the scope of blackbox testing on Firefox (e.g. automation, services, Web QA, security QA, etc.).

Where do I go from here?

1. We need to diversify the testday brand to be about more than testing Firefox, expanding it to enable testing across all areas in need.

2. We need to address some of the negative brand associations by making activities more understandable and relevant, by having shorter events more frequently, and by rewarding contributions (even work that doesn’t net a bug).

3. We need to teach people that testdays are about more than just testing. Things like writing tests, writing new documentation, updating and translating existing documentation, and mentoring newcomers are all part of what testdays can enable.

4. Once we’ve identified the brand we want to put forward, we need to do a much better job of frequently educating and re-educating people about testdays and the value they provide.

5. We need to enable testdays to facilitate converting newcomers into Mozillians and active contributors into vouched Mozillians.

My immediate next step is to have the lessons I’ve learned here integrated into a plan of action to rebrand testdays. Rest assured I am going to continue to push my peers on this, to be an advocate for improving the ways we collaborate, and to continually revisit the brand to make sure we aren’t losing sight of reality.

I’d like to end with a thank you to everyone who took the time to respond to my survey. As always, please leave a comment below if you have any interesting insights or questions.

Thank you!

Report on Recognition

Earlier this year I embarked on a journey to investigate how we could improve participation in the Mozilla QA community. I was growing concerned that we were failing in one key component of a vibrant community: recognition.

Before I could begin, I needed to better understand Mozilla QA’s recognition story. To this end I published a survey to get anonymous feedback, and today I’d like to share some of it.

Profile of Participants

The first question I asked was intended to profile the respondents in terms of how long they’d been involved with Mozilla and whether they were still contributing.

This revealed that a larger proportion of contributors have been involved for more than a couple of years. I think this indicates we need to do a better job of developing long-term relationships with new contributors.

When asked which projects they identified with, 100% of respondents identified as volunteers with the Firefox QA team. The remaining teams break down fairly evenly between 11% and 33%. I think this indicates most people are contributing to more than one team, and that teams at the lower end of the scale have an excellent opportunity for growth.

Recognizing Recognition

The rest of the questions were focused more on evaluating the forms of recognition we’ve employed in the past.


When looking at how we’ve recognized contributors, it’s good to see that everyone is being recognized in some form or another, in many cases receiving multiple forms of recognition. However, I suspect the results are somewhat skewed (i.e. people who haven’t been recognized are probably long gone and did not respond to the survey). In spite of that, it appears that seemingly simple things, like being thanked in a meeting, are well below what I’d expect to see.

When looking at the impact of being recognized, it seems recognition is nice to receive but not necessarily a motivation for continuing to contribute. 44% found recognition to be either ineffective or very ineffective while 33% found it to be either effective or very effective. This could point to a couple of different factors: either our forms of recognition are not compelling, or people are motivated by the work itself. I don’t have a good answer here so it’s probably worth following up.

What did we learn?

When all is said and done, here is what I learned from this survey.

1. We need to focus on building long-term relationships: helping people through their first year and making sure they don’t get lost in the long term.

2. Most people are contributing to multiple projects. We should have a framework in place that facilitates contribution (and recognition of contribution) across QA. Teams with less participation can then scale more quickly.

3. We need to be more proactive in our recognition, especially in its simplest form. There is literally no excuse for not thanking someone for work done.

4. People like to be thanked for their work but it isn’t necessarily a definitive motivator for participation. We need to learn more about what drives individuals and make sure we provide them whatever they need to stay motivated.

5. Recognition is not as well “baked in” to QA as it is with other teams — we should work with these teams to improve recognition within QA and across Mozilla.

6. Contributors find testing to be difficult due to inadequate descriptions of how to test. In some cases, people spend considerable amounts of time and energy figuring out what and how to test, presenting a huge hurdle to newcomers in particular. We should make sure contribution opportunities are clearly documented so that anyone can get involved.

7. We should be engaging with Mozilla Reps to build a better, more regional network of QA contributors, beginning with giving local leaders the opportunity to lead.

Next Steps

In closing, I’d like to thank everyone who took the time to share their feedback. The survey remains open if you missed the opportunity. I’m hoping this blog post will help kickstart a conversation about improving recognition of contributions to Mozilla QA and, in particular, about making progress on the lessons learned above.

As always, I welcome comments and questions. Feel free to leave a comment below.


Ninety Days with DOM

Last quarter marked a fairly significant change in my career at Mozilla. I spent most of the quarter adjusting to multiple re-orgs which left me as the sole QA engineer on the DOM team. Fortunately, as the quarter wraps up I feel like I’ve begun to adjust to my new role and started to make an impact.

Engineering Impact

My main objective this quarter was to improve the flow of DOM bugs in Bugzilla by developing and documenting some QA processes. A big part of that work was determining how I was going to measure impact, and I decided the simplest way to do that was to take the queries I would be working with and plot the data in Google Docs.

The solution was fairly primitive and lacked the ability to feed into a dashboard in any meaningful way, but as a proof of concept it was good enough. I established a baseline using the week-by-week numbers going back a couple of years. What follows is a crude representation of these figures and how the first quarter of 2015 compares to the three years of history I’ve now recorded.
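Before the charts, here is roughly how such numbers can be pulled. This is a sketch against the Bugzilla REST API; the product/component pair and keywords are illustrative and do not reproduce my exact queries.

```python
# Sketch: count unresolved Core::DOM bugs by keyword via the Bugzilla
# REST API. Illustrative only; my tracked queries are more involved.
import requests

BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"

def count_open_bugs(keyword):
    params = {
        "product": "Core",
        "component": "DOM",      # illustrative component
        "keywords": keyword,     # e.g. "regression" or "crash"
        "resolution": "---",     # "---" matches unresolved bugs
        "include_fields": "id",  # ids are enough; we only want a count
        "limit": 0,              # ask for no cap on results
    }
    response = requests.get(BUGZILLA, params=params)
    response.raise_for_status()
    return len(response.json()["bugs"])

print("open regressions:", count_open_bugs("regression"))
print("open crashes:", count_open_bugs("crash"))
```

Run weekly and appended to a spreadsheet, a query like this yields exactly the kind of week-by-week series charted below.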

Volume of unresolved Regressions & Crashes
Regressions +55%, Crashes +188% since 2012

Year-over-Year trend in Regressions and Crashes
Regressions +9%, Crashes +68% compared to the same time last year

Regressions and Crashes in First Quarters
Regressions -0.6%, Crashes +19% compared to previous first quarters

Resolution Rate of Regressions and Crashes
90% of Regressions resolved (+2.5%), 80% of Crashes resolved (-7.0%)

Change in Resolution Rate compared to total Volume
Regression resolution +2.5%, Crash resolution -6.9%, Total volume +68%

I know that’s a lot of data to digest, but I believe it shows that embedding QA with the DOM team is having some initial success.

It’s important to acknowledge the DOM team for maintaining a very high resolution rate (90% for regressions, 80% for crashes) in the face of aggressive gains in total bug volume (68% in three years). They have done this largely on their own with minimal assistance from QA over the years, giving us a solid foundation from which we could build.

For DOM regressions I focused on making existing bug reports actionable, with less focus on filing new regression bugs; this has been a two-part effort. The first part focused on finding regression windows for known regression bugs, the second on converting unconfirmed bugs into actionable regression reports. I believe this is why we see a marginal increase in the regression resolution rate (+0.4% last quarter).

For DOM crashes I focused on filing previously unreported crashes (basically anything above a 1% report threshold). Naturally this has led to an increase in reports but has also led to some crashes being fixed that wouldn’t have been otherwise. Overall the crash resolution rate declined by 2.6% last quarter but I believe this should ultimately lead to a more stable product in the future.

The Older Gets Older

The final chart below plots the median age of unresolved DOM bugs week over week, which currently sits at 542 days: an increase of 4.8% this past quarter and 241% since January 1, 2012. I include it here not as a visualization of impact but as a general curiosity.

Median Age of Unresolved DOM Bugs
Median age is 542 days, +4.8% last quarter, +241% since 2012

I have not yet figured out what this means in terms of overall quality or whether it’s something we need to address. I suspect recently reported bugs tend to get fixed sooner since they tend to be more immediately visible than older bugs, a fact that is likely common to most, if not all, components in Bugzilla. It might be interesting to see how this breaks down in terms of the age of the bugs being fixed.
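For anyone wanting to reproduce the median-age number, it falls out directly from the creation_time field Bugzilla returns for each bug; a minimal sketch:

```python
# Sketch: median age in days of a set of bugs, given the ISO-8601
# creation_time strings the Bugzilla REST API returns.
from datetime import datetime, timezone
from statistics import median

def median_age_days(creation_times, now=None):
    now = now or datetime.now(timezone.utc)
    ages = []
    for stamp in creation_times:
        created = datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%SZ")
        ages.append((now - created.replace(tzinfo=timezone.utc)).days)
    return median(ages)

# e.g. feed it the creation_time of every unresolved DOM bug
print(median_age_days(["2013-10-02T17:25:00Z", "2015-01-15T09:00:00Z"]))
```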

What’s Next

My plan for the second quarter is to identify a subset of these charts to take out of Google Docs and convert into a proof-of-concept dashboard. I’m hoping my peers on the DOM team can help me identify at least a couple that would be both interesting and useful. If it works out, I’d like to expand this to more Bugzilla components later in the year so more people can benefit.

If you share my interest and have any insights please leave a comment below.

As always, thank you for reading.

[UPDATE: I decided to quickly mock up a chart showing the age breakdown of bugs fixed this past quarter. As you can see below, younger bugs account for a much greater proportion of the bugs being fixed, perhaps expectedly.]

Age breakdown of bugs fixed in Q1 2015

New Beginnings

…or trying to adapt to the inevitability of change.

Change is a reality of life common to all things; we all must adapt to change or risk obsolescence. I try to look at change as a defining moment, an opportunity to reflect, to learn, and to make an impact. It is in these moments that I reflect on the road I’ve traveled and attempt to gain clarity of the road ahead. This is where I find myself today.

How Did I Get Here?

In my younger days I wasted years in college jumping from program to program before eventually dropping out. I obviously did not know what I wanted to do with my life and I wasn’t going to spend thousands of dollars while I figured it out. This led to a frank and difficult discussion with my parents about my future which resulted in me enlisting in the Canadian military. As it happens, this provided me the space I needed to think about what I wanted to do going forward, who I wanted to be.

I served for three years before moving back to Ontario to pursue a degree in software development at the college I had left previously. I had chosen a path toward working in the software industry. I had come to terms with the reality that I would likely end up working on some proprietary code that I didn’t entirely care for, but it would pay the bills and I would be happier than I was as a soldier.

After a couple of years following this path I met David Humphrey, a man who would change my life by introducing me to the world of open source software development. On a whim, I attended his crash-course, sacrificing my mid-semester week off. It was here that I discovered a passion for contributing to an open source project.

Up until this point I was pretty ignorant about open source. I had been using Linux for a couple of years but I didn’t identify it as “open source”; it was merely a free-as-in-beer alternative to Windows. At this point I hadn’t even heard of Mozilla Firefox. It was David who opened my eyes to this world: a world of continuous learning and collaboration, contributing to a freer and more open web. I quickly realized that choosing this path was about more than a job opportunity, more than a career; I was committing myself to a world view and to my part in shaping it.

Over the last eight years I have continued to follow this path, from volunteering nights at school, through internships, a contract position, and finally full-time employment in 2010.

Change is a way of life at Mozilla

Since I began my days at Mozilla I have always been part of the same team. Over the years I have seen my team change dramatically but it has always felt like home.

We started as a small team of specialists working as a cohesive unit on a single product. Over time Mozilla’s product offering grew and so did the team, eventually leading to multiple sub-teams being formed. As time moved on and demands grew, we were segmented into specialized teams embedded on different products. We were becoming more siloed but it still felt like we were all part of the QA machine.

This carried on for a couple of years but I began to feel my connection to people I no longer worked with weaken. As this feeling of disconnectedness grew, my passion for what I was working on decreased. Eventually I felt like I was just going through the motions. I was demoralized and drifting.

This all changed for me again last year when Clint Talbert, our newly appointed Director and a mentor of mine since the beginning, developed a vision for tearing down those silos. It appeared as though we were going to get back to what made us great: a connected group of specialists. I felt nostalgic for a brief moment. Unfortunately this would not come to pass.

Moving into 2015 our team began to change again. After “losing” the B2G QA folks to the B2G team in 2014, we “lost” the Web and Services QA folks to the Cloud Services team. Sure the people were still here but it felt like my connection to those people was severed. It then became a waiting game, an inevitability that this trend would continue, as it did this week.

The Road Ahead

Recently I’ve had to come to terms with the reality of some departures from Mozilla. People I’ve held dear, and sought mentorship from, for many years have decided to move on as they open new chapters in their lives. I have seen many people come and go over the years, but the more recent departures have been difficult to swallow. I know they are moving on to do great things and I’m extremely happy for them, but I’ll also miss them intensely.

Over the years I’ve gone from reviewing add-ons to testing features to driving releases to leading the quality program for the launch of Firefox Hello. I’ve grown a lot over the years and the close relationships I’ve held with my peers are the reason for my success.

Starting this week I am no longer part of a centralized QA team; I am now the sole QA member of the DOM engineering team. While this is likely one of the more disruptive and challenging changes I’ve ever experienced, it’s also exciting to me.

Overcoming the Challenge

As I reflect on this entire experience I become more aware of my growth and the opportunity that has been presented. It is an opportunity to learn, to develop new bonds, to impact Mozilla’s mission in new and exciting ways. I will remain passionate and engaged as long as this opportunity exists. However, this change does not come without risk.

The greatest risk to Mozilla is if we are unable to maintain our camaraderie, to share our experiences, to openly discuss our challenges, to encourage participation, and to visualize the broader quality picture. We need to strengthen our bonds, even as we go our separate ways. The QA team meeting will become ever more important as we become more decentralized, and I hope that it continues.

Looking Back, Looking Forward

I’ve experienced a lot of change in my life and it never gets any less scary. I can’t help but fear reaching another “drifting point”. However, I’ve also learned that change is inevitable and that I reach my greatest potential by adapting to it, not fighting it.

I’m entering a new chapter in my life as a Mozillian and I’m excited for the road ahead.

Improving Recognition

I’ve been hearing lately that Mozilla QA’s recognition story kind of sucks with some people going completely unrecognized for their efforts. Frankly, this is embarrassing!

Some groups have had mild success attempting to rectify this problem but not all groups share in this success. Some of us are still struggling to retain contributors due to lack of recognition; a problem which becomes harder to solve as QA becomes more decentralized.

As much as it pains me to admit it, the Testdays program is one of these areas. I’ve blogged, emailed, and tweeted about this but despite my complaining, things really haven’t improved. It’s time for me to take some meaningful action.

We need to get a better understanding of our recognition story if we’re ever to improve it. We need to understand what we’re doing well (or not) and what people value so that we can try to bridge the gaps. I have some general ideas but I’d like to get feedback from as many voices as possible and not move forward based on personal assumptions.

I want to hear from you. Whether you currently contribute or have in the past. Whether you’ve written code, run some tests, filed some bugs, or are still learning. I want to hear from everyone.

Look, I’m here admitting we can do better but I can’t do that without your help. So please, help me.



Firefox 27 Bug Statistics

I’m writing today to present the bug statistics for Firefox 27. My apologies for the tardiness of this blog post; too many things have gotten in my way recently. I try to get these posts out at the end of life of the respective Firefox version, as that allows me to present statistics across the entire life-cycle of a Firefox version. For Firefox 27, this should have coincided with Firefox 28’s release a few weeks ago. Again, my apologies for getting this out later than usual.

The first story I want to tell is about the high-level breakdown of all tracked bugs in this release. As you can see below there was a marked drop in the total bug volume in Firefox 27. Perhaps unsurprisingly, this allowed us to focus a bit more, which resulted in a smaller number of unresolved and unconfirmed bugs being shipped in this release. The numbers are still much higher than we would like, but it will be a small victory for the overall quality of Firefox if these numbers continue to trend downward.


The second story I want to tell is about the percentage of incoming bugs confirmed. This is typically an indication of the effectiveness of our incoming bug triage practices. As the volume of incoming bugs decreases we like to see the number of confirmed bugs increase. Unfortunately we have been trending in the opposite direction for some time. Previously I had attributed this to the ever-increasing volume of bugs, but I can no longer rely on that excuse. Looking forward to Firefox 28, I can say that we’ve made remarkable improvement in this area in an effort to reverse this trend. I’ll share more on that in a few weeks.


The third story I’d like to share is that of when fixes landed for Firefox 27. In the following chart I’ve plotted the average timeline for the past few releases along with Firefox 27’s timeline. In general we expect to see a steadily increasing curve through the Nightly cycle, trailing off as we proceed through Aurora and Beta, with spikes in the first half of those cycles.

Firefox 27 appeared to be trending higher than average as we approached the end of each cycle. While these numbers are not completely out of control, they do put a bit of extra strain on QA. After all, the later a fix lands, the less time we have to test it. Ultimately this creates risk to the quality of the product we ship, but as long as we recognize that, we can try to plan for it accordingly.


The fourth story I want to tell is about the number of bugs reopened. We typically reopen a bug when something is fundamentally flawed with the initial implementation and/or a patch needs to be backed out. Even in cases where a regression is found, we tend to leave the bug closed and deal with the regression in its own bug report. As such, a high volume of bugs being reopened is usually indicative of a release that saw much churn and may point to quality issues in the release.

Unfortunately Firefox 27 continues the story of many of the versions before it and represents a marginal increase in the number of bugs reopened. Of course, the other side of this story may be that testing was more effective. It’s hard to say concretely just looking at the bug numbers.


The fifth story I want to tell is one of stability. The following chart shows the number of topcrash bugs reported against Firefox 27 as compared to previous releases. For those unaware, topcrash bugs are those crashes which show up most frequently in the wild and present the greatest risk to quality and security for our users. The unfortunate story for Firefox 27 is that we’ve seen an end to the downward trend that started with Firefox 25 and continued with Firefox 26. The volume of topcrashes puts Firefox 27 in the same ballpark as the rash of point-releases we saw in Firefox’s teens.

Of course, there are two sides to every story. The other side of this may very well be that we got better at reporting stability issues, resulting in a higher volume of known bugs. It’s hard to say for sure.


The final story I want to tell today is about the percentage of regressions reported post-release. As we hone our processes, bring on more engineers, and get assistance from more contributors, we’ve been getting better at finding and fixing regressions. Inevitably, more code landing in a release increases the potential for regressions, which naturally leads to an increase in the total number of regressions reported. Firefox 27 was no different, so I thought I’d look at regressions a little differently this time around.

The following chart shows the ratio of regressions reported before release to regressions reported after release. A release with a high volume of post-release regressions is a failure from a QA perspective because it means many bugs slipped through our fingers. I wouldn’t expect the number of post-release regressions to ever reach zero, but we need to strive to always be better.

Firefox 27 represents a huge victory on this front. We saw a huge drop in the number of Firefox 27 regressions reported post-release. For months we’ve sought to improve our triage processes, engage more with developers, and work harder to involve volunteers in our day to day efforts. It’s nice to see these efforts finally paying off.


That’s Firefox 27, in a nutshell, from a QA perspective. I think it’s useful to be able to reflect on the bug numbers and see what kind of an impact our efforts are having on the product. I really do enjoy visualizing the data and talking about our “victories”, but it’s just as interesting seeing what the data tells us about where we may have failed. I believe that learning from failures has far more impact than building on successes and acts as a great motivator. What we want to avoid are crippling failures. I think Firefox 27 is a nice iterative step forward.