Report on Recognition

Earlier this year I embarked on a journey to investigate how we could improve participation in the Mozilla QA community. I was growing concerned that we were failing in one key component of a vibrant community: recognition.

Before I could begin, I needed to better understand Mozilla QA’s recognition story. To this end I published a survey to gather anonymous feedback, and today I’d like to share some of that feedback.

Profile of Participants

The first question I asked was intended to profile the respondents in terms of how long they’d been involved with Mozilla and whether they were still contributing.

recognition-activity
This revealed that we have a larger proportion of contributors who’ve been involved for more than a couple of years. I think this indicates we need to do a better job of developing long-term relationships with new contributors.

recognition-teams
When asked which projects contributors identified with, 100% of respondents identified as volunteers with the Firefox QA team. The remaining teams break down fairly evenly between 11% and 33%. I think this indicates most people are contributing to more than one team, and that teams at the lower end of the scale have an excellent opportunity for growth.

Recognizing Recognition

The rest of the questions were focused more on evaluating the forms of recognition we’ve employed in the past.

recognition-forms

When looking at how we’ve recognized contributors it’s good to see that everyone is being recognized in some form or another, in many cases receiving multiple forms of recognition. However, I suspect the results are somewhat skewed (i.e. people who haven’t been recognized are probably long gone and did not respond to the survey). In spite of that, it appears that seemingly simple things, like being thanked in a meeting, are well below what I’d expect to see.

recognition-likelihood
When looking at the impact of being recognized, it seems that more people found recognition to be nice but not necessarily a motivation for continuing to contribute. 44% found recognition to be either ineffective or very ineffective while 33% found it to be either effective or very effective. This could point to a couple of different factors: either our forms of recognition are not compelling, or people are motivated by the work itself. I don’t have a good answer here so it’s probably worth following up.

What did we learn?

When all is said and done, here is what I learned from this survey.

1. We need to focus on building long-term relationships: helping people through their first year and making sure they don’t get lost over the long term.

2. Most people are contributing to multiple projects. We should have a framework in place that facilitates contribution (and recognition of contribution) across QA. Teams with less participation can then scale more quickly.

3. We need to be more proactive in our recognition, especially in its simplest form. There is literally no excuse for not thanking someone for work done.

4. People like to be thanked for their work but it isn’t necessarily a definitive motivator for participation. We need to learn more about what drives individuals and make sure we provide them whatever they need to stay motivated.

5. Recognition is not as well “baked-in” to QA as it is with other teams — we should work with these teams to improve recognition within QA and across Mozilla.

6. Contributors find testing to be difficult due to inadequate description of how to test. In some cases, people spend considerable amounts of time and energy figuring out what and how to test, presenting a huge hurdle to newcomers in particular. We should make sure contribution opportunities are clearly documented so that anyone can get involved.

7. We should be engaging with Mozilla Reps to build a better, more regional network of QA contributors, beginning with giving local leaders the opportunity to lead.

Next Steps

In closing, I’d like to thank everyone who took the time to share their feedback. The survey remains open if you missed the opportunity. I’m hoping this blog post will help kickstart a conversation about improving recognition of contributions to Mozilla QA and, in particular, about making progress on some of the lessons learned above.

As always, I welcome comments and questions. Feel free to leave a comment below.

Cheers!

Ninety Days with DOM

Last quarter marked a fairly significant change in my career at Mozilla. I spent most of the quarter adjusting to multiple re-orgs which left me as the sole QA engineer on the DOM team. Fortunately, as the quarter wraps up I feel like I’ve begun to adjust to my new role and started to make an impact.

Engineering Impact

My main objective this quarter was to improve the flow of DOM bugs in Bugzilla by developing and documenting some QA processes. A big part of that work was determining how I was going to measure impact, and I decided the simplest way to do that was to take the queries I was going to be working with and plot the data in Google Docs.
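For the curious, here is a rough sketch of the kind of query feeding that spreadsheet. It counts unresolved DOM regressions and crashes through the Bugzilla REST API; the product, component, and keyword terms are illustrative assumptions, not my actual saved searches.

```python
# Sketch: count unresolved DOM regressions and crashes via the Bugzilla REST API.
# The product/component/keyword terms are assumptions for illustration only.
import requests

BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"

def count_open_dom_bugs(keyword):
    # Quicksearch returns only open (unresolved) bugs by default.
    query = f"product:Core component:DOM keywords:{keyword}"
    resp = requests.get(BUGZILLA, params={
        "quicksearch": query,
        "include_fields": "id",
        "limit": 0,  # 0 means no limit
    })
    resp.raise_for_status()
    return len(resp.json()["bugs"])

if __name__ == "__main__":
    print("Open DOM regressions:", count_open_dom_bugs("regression"))
    print("Open DOM crashes:", count_open_dom_bugs("crash"))
```

Running something like this once a week and pasting the counts into a spreadsheet is roughly all my “dashboard” amounted to.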

The solution was fairly primitive and lacked the ability to feed into a dashboard in any meaningful way, but as a proof of concept it was good enough. I established a baseline using the week-by-week numbers going back a couple of years. What follows is a crude representation of these figures and how the first quarter of 2015 compares to the now three years of history I’ve recorded.

Volume of unresolved Regressions & Crashes
dom.regressions-vs-crashes.unresolved.alltime.2015q1
Regressions +55%, Crashes +188% since 2012

Year-over-Year trend in Regressions and Crashes
dom.regressions-vs-crashes.unresolved.annual.2015q1
Regressions +9%, Crashes +68% compared to same time last year.

Regressions and Crashes in First Quarters
dom.regressions-vs-crashes.unresolved.quarterly.2015q1
Regressions -0.6%, Crashes +19% compared to previous 1st Quarters

Resolution Rate of Regressions and Crashes
dom.regressions-vs-crashes.fixrate.2015q1
90% of Regressions resolved (+2.5%), 80% of Crashes resolved (-7.0%)

Change in Resolution Rate compared to total Volume
dom.regressions-vs-crashes.volume.2015q1
Regression resolution +2.5%, Crash resolution -6.9%, Total volume +68%

I know that’s a lot of data to digest, but I believe it shows that embedding QA with the DOM team is having some initial success.

It’s important to acknowledge the DOM team for maintaining a very high resolution rate (90% for regressions, 80% for crashes) in the face of aggressive gains in total bug volume (68% in three years). They have done this largely on their own with minimal assistance from QA over the years, giving us a solid foundation from which we could build.

For DOM regressions I focused on making existing bug reports actionable rather than on filing new regression bugs. This has been a two-part effort: first, finding regression windows for known regression bugs; second, converting unconfirmed bugs into actionable regression reports. I believe this is why we see a marginal increase in the regression resolution rate (+0.4% last quarter).

For DOM crashes I focused on filing previously unreported crashes (basically anything above a 1% report threshold). Naturally this has led to an increase in reports but has also led to some crashes being fixed that wouldn’t have been otherwise. Overall the crash resolution rate declined by 2.6% last quarter but I believe this should ultimately lead to a more stable product in the future.

The Older Gets Older

The final chart below plots the median age of unresolved DOM bugs week over week, which currently sits at 542 days; an increase of 4.8% this past quarter and 241% since January 1, 2012. I include it here not as a visualization of impact but as a general curiosity.

Median Age of Unresolved DOM Bugs
dom.regressions-vs-crashes.fixrate.2015q1
Median age is 542 days, +4.8% last quarter, +241% since 2012

I have not yet figured out what this means in terms of overall quality or whether it’s something we need to address. I suspect recently reported bugs tend to get fixed sooner since they tend to be more immediately visible than older bugs, a fact that is likely common to most, if not all, components in Bugzilla. It might be interesting to see how this breaks down in terms of the age of the bugs being fixed.
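For anyone curious how that median-age figure can be derived, here is a minimal sketch of the calculation, again assuming an illustrative query rather than my actual saved search.

```python
# Sketch: median age (in days) of unresolved DOM bugs, computed from their
# creation times. The query terms are assumptions for illustration only.
from datetime import datetime, timezone
from statistics import median
import requests

BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"

def median_open_bug_age_days():
    resp = requests.get(BUGZILLA, params={
        "quicksearch": "product:Core component:DOM",  # open bugs by default
        "include_fields": "creation_time",
        "limit": 0,
    })
    resp.raise_for_status()
    now = datetime.now(timezone.utc)
    ages = []
    for bug in resp.json()["bugs"]:
        created = datetime.strptime(bug["creation_time"], "%Y-%m-%dT%H:%M:%SZ")
        ages.append((now - created.replace(tzinfo=timezone.utc)).days)
    return median(ages)

if __name__ == "__main__":
    print("Median age of unresolved DOM bugs:", median_open_bug_age_days(), "days")
```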

What’s Next

My plan for the second quarter is to identify a subset of these metrics to take outside of Google Docs and convert into a proof-of-concept dashboard. I’m hoping my peers on the DOM team can help me identify at least a couple that would be both interesting and useful. If it works out, I’d like to aim for expanding this to more Bugzilla components later in the year so more people can benefit.

If you share my interest and have any insights please leave a comment below.

As always, thank you for reading.

[UPDATE: I decided to quickly mock up a chart showing the age breakdown of bugs fixed this past quarter. As you can see below, younger bugs account for a much greater proportion of the bugs being fixed, perhaps expectedly.]

Screen Shot 2015-04-06 at 3.56.33 PM

New Beginnings

…or trying to adapt to the inevitability of change.


Change is a reality of life common to all things; we all must adapt to change or risk obsolescence. I try to look at change as a defining moment, an opportunity to reflect, to learn, and to make an impact. It is in these moments that I reflect on the road I’ve traveled and attempt to gain clarity of the road ahead. This is where I find myself today.

How Did I Get Here?

In my younger days I wasted years in college jumping from program to program before eventually dropping out. I obviously did not know what I wanted to do with my life and I wasn’t going to spend thousands of dollars while I figured it out. This led to a frank and difficult discussion with my parents about my future which resulted in me enlisting in the Canadian military. As it happens, this provided me the space I needed to think about what I wanted to do going forward, who I wanted to be.

I served for three years before moving back to Ontario to pursue a degree in software development at the college I left previously. I had chosen a path toward working in the software industry. I had come to terms with a reality that I would likely end up working on some proprietary code that I didn’t entirely care for, but that would pay the bills and I would be happier than I was as a soldier.

After a couple of years following this path I met David Humphrey, a man who would change my life by introducing me to the world of open source software development. On a whim, I attended his crash-course, sacrificing my mid-semester week off. It was here that I discovered a passion for contributing to an open source project.

Up until this point I was pretty ignorant about open source. I had been using Linux for a couple years but I didn’t identify it as “open source”; it was merely a free as in beer alternative to Windows. At this point I hadn’t even heard of Mozilla Firefox. It was David who opened my eyes to this world; a world of continuous learning and collaboration, contributing to a freer and more open web. I quickly realized that choosing this path was about more than a job opportunity, more than a career; I was committing myself to a world view and to my part in shaping it.

Over the last eight years I have continued to follow this path, from volunteering nights at school, through internships, a contract position, and finally full-time employment in 2010.

Change is a way of life at Mozilla

Since I began my days at Mozilla I have always been part of the same team. Over the years I have seen my team change dramatically but it has always felt like home.

We started as a small team of specialists working as a cohesive unit on a single product. Over time Mozilla’s product offering grew and so did the team, eventually leading to multiple sub-teams being formed. As time moved on and demands grew, we were segmented into specialized teams embedded on different products. We were becoming more siloed but it still felt like we were all part of the QA machine.

This carried on for a couple of years but I began to feel my connection to people I no longer worked with weaken. As this feeling of disconnectedness grew, my passion for what I was working on decreased. Eventually I felt like I was just going through the motions. I was demoralized and drifting.

This all changed for me again last year when Clint Talbert, our newly appointed Director and a mentor of mine since the beginning, developed a vision for tearing down those silos. It appeared as though we were going to get back to what made us great: a connected group of specialists. I felt nostalgic for a brief moment. Unfortunately this would not come to pass.

Moving into 2015 our team began to change again. After “losing” the B2G QA folks to the B2G team in 2014, we “lost” the Web and Services QA folks to the Cloud Services team. Sure the people were still here but it felt like my connection to those people was severed. It then became a waiting game, an inevitability that this trend would continue, as it did this week.

The Road Ahead

Recently I’ve had to come to terms with the reality of some departures from Mozilla. People I’ve held dear, and sought mentorship from, for many years have decided to move on as they open new chapters in their lives. I have seen many people come and go over the years but the more recent departures have been difficult to swallow. I know they are moving on to do great things and I’m extremely happy for them, but I’ll also miss them intensely.

Over the years I’ve gone from reviewing add-ons to testing features to driving releases to leading the quality program for the launch of Firefox Hello. I’ve grown a lot over the years and the close relationships I’ve held with my peers are the reason for my success.

Starting this week I am no longer part of a centralized QA team; I am now the sole QA member of the DOM engineering team. While this is likely one of the more disruptive and challenging changes I’ve ever experienced, it’s also exciting to me.

Overcoming the Challenge

As I reflect on this entire experience I become more aware of my growth and the opportunity that has been presented. It is an opportunity to learn, to develop new bonds, to impact Mozilla’s mission in new and exciting ways. I will remain passionate and engaged as long as this opportunity exists. However, this change does not come without risk.

The greatest risk to Mozilla is if we are unable to maintain our camaraderie, to share our experiences, to openly discuss our challenges, to engage participation, and to visualize the broader quality picture. We need to strengthen our bonds, even as we go our separate ways. The QA team meeting will become ever more important as we become more decentralized and I hope that it continues.

Looking Back, Looking Forward

I’ve experienced a lot of change in my life and it never gets any less scary. I can’t help but fear reaching another “drifting point”. However, I’ve also learned that change is inevitable and that I reach my greatest potential by adapting to it, not fighting it.

I’m entering a new chapter in my life as a Mozillian and I’m excited for the road ahead.

Improving Recognition

I’ve been hearing lately that Mozilla QA’s recognition story kind of sucks with some people going completely unrecognized for their efforts. Frankly, this is embarrassing!

Some groups have had mild success attempting to rectify this problem but not all groups share in this success. Some of us are still struggling to retain contributors due to lack of recognition; a problem which becomes harder to solve as QA becomes more decentralized.

As much as it pains me to admit it, the Testdays program is one of these areas. I’ve blogged, emailed, and tweeted about this but despite my complaining, things really haven’t improved. It’s time for me to take some meaningful action.

We need to get a better understanding of our recognition story if we’re ever to improve it. We need to understand what we’re doing well (or not) and what people value so that we can try to bridge the gaps. I have some general ideas but I’d like to get feedback from as many voices as possible and not move forward based on personal assumptions.

I want to hear from you. Whether you currently contribute or have in the past. Whether you’ve written code, run some tests, filed some bugs, or if you’re still learning. I want to hear from everyone.

Look, I’m here admitting we can do better but I can’t do that without your help. So please, help me.


Firefox 27 Bug Statistics

I’m writing today to present the bug statistics for Firefox 27. My apologies for the tardiness of this blog post; too many things have got in my way recently. I try to get these posts out at the end of life of the respective Firefox version as that allows me to present the statistics across the entire life-cycle of a Firefox version. For Firefox 27, this should have coincided with Firefox 28’s release a few weeks ago. Again, my apologies for getting this out later than usual.

The first story I want to tell is about the high-level breakdown of all tracked bugs in this release. As you can see below there was a marked drop in the total bug volume in Firefox 27. Perhaps unsurprisingly this allowed us to focus a bit more, which resulted in a smaller number of unresolved and unconfirmed bugs being shipped in this release. The numbers are still much higher than we would like but it is a small victory for the overall quality of Firefox if these numbers continue to trend downward.

Firefox27_TotalBugs

The second story I want to tell is about the percentage of incoming bugs confirmed. This is typically an indication of the effectiveness of our incoming bug triage practices. As the volume of incoming bugs decreases we like to see the number of confirmed bugs increase. Unfortunately we have been trending in the opposite direction for some time. Previously I had attributed this to the ever-increasing volume of bugs but I can no longer rely on this excuse. Looking forward to Firefox 28 I can say that we’ve made remarkable improvement in this area in an effort to reverse this trend. I’ll share more on that in a few weeks.

Firefox27_Confirmed

The third story I’d like to share is that of when fixes landed for Firefox 27. In the following chart I’ve plotted the average timeline for the past few releases along with Firefox 27’s timeline. In general we expect to see an ever-increasing curve through the Nightly cycle, trailing off as we proceed through Aurora and Beta, with spikes in the first half of those cycles.

Firefox 27 appeared to be trending higher than average as we approached the end of each cycle. While these numbers are not completely out of control it does put a bit of extra strain on QA. After all, the later a fix lands, the less time we have to test it. Ultimately this creates risk to the quality of the product we ship, but as long as we recognize that we can try to plan for it accordingly.

Firefox27_Fixes-by-Date

The fourth story I want to tell is about the number of bugs reopened. We typically reopen a bug when something is fundamentally flawed with the initial implementation and/or if a patch needs to be backed out. Even in cases where a regression is found, we tend to leave the bug closed and deal with the regression in its own bug report. As such, a high volume of bugs being reopened is usually indicative of a release that saw much churn and may point to quality issues in release.

Unfortunately Firefox 27 continues the story of many of the versions before it and represents a marginal increase in the number of bugs reopened. Of course, the other side of this story may be that testing was more effective. It’s hard to say concretely just looking at the bug numbers.

Firefox27_Reopened

The fifth story I want to tell is one of stability. The following chart shows the number of topcrash bugs reported against Firefox 27 as compared to previous releases. For those unaware, topcrash bugs are the crashes which show up most frequently in the wild and present the greatest risk to quality and security for our users. The unfortunate story for Firefox 27 is that we’ve seen an end to the downward trend that started with Firefox 25 and continued with Firefox 26. The volume of topcrashes puts Firefox 27 in the same ballpark as the rash of point-releases we saw in Firefox’s teens.

Of course there are two sides to every story. The other side of this may very well be that we got better at reporting stability issues and that resulted in a higher volume of known bugs. It’s hard to say for sure.

Firefox27_Topcrashes

The final story I want to tell today is about the percentage of regressions reported post-release. As we hone our processes, bring on more engineers, and get assistance from more contributors, we’ve been getting better at finding and fixing regressions. It’s inevitable that more code landing in a release increases the potential for regressions. Naturally this leads to an increase in the total number of regressions reported. Firefox 27 was no different so I thought I’d look at regressions a little differently this time around.

The following chart shows the ratio of regressions reported before release to regressions reported after release. A release with a high volume of post-release regressions is a failure from a QA perspective because it means many bugs slipped through our fingers. I wouldn’t expect the number of post-release regressions to ever be 0 but we need to strive to always be better.
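To make the metric concrete, here is a minimal sketch of the pre- versus post-release split: each regression bug is bucketed by comparing its creation date to the release date. The dates below are placeholders, not Firefox 27’s actual figures.

```python
# Sketch: classify regression bugs as pre- or post-release by creation date.
# The release date and sample dates below are placeholders for illustration.
from datetime import date

def regression_split(creation_dates, release_date):
    pre = sum(1 for d in creation_dates if d < release_date)
    post = len(creation_dates) - pre
    return pre, post

if __name__ == "__main__":
    release = date(2014, 2, 4)  # approximate Firefox 27 release date
    sample = [date(2013, 11, 20), date(2014, 1, 15), date(2014, 3, 2)]
    pre, post = regression_split(sample, release)
    print(f"pre-release: {pre}, post-release: {post}, "
          f"post-release share: {post / (pre + post):.0%}")
```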

Firefox 27 represents a huge victory on this front. We saw a huge drop in the number of Firefox 27 regressions reported post-release. For months we’ve sought to improve our triage processes, engage more with developers, and work harder to involve volunteers in our day to day efforts. It’s nice to see these efforts finally paying off.

Firefox27_Regressions

That’s Firefox 27, in a nutshell, from a QA perspective. I think it’s useful to be able to reflect on the bug numbers and see what kind of an impact our efforts are having on the product. I really do enjoy visualizing the data and talking about our “victories”, but it’s just as interesting seeing what the data is telling us about where we may have failed. I believe that learning from failures has far more impact than building on successes and acts as a great motivator. What we want to avoid are crippling failures. I think Firefox 27 is a nice iterative step forward.

Google RMA, or how I finally got to use Firefox OS on Wind Mobile

The last 24 hours have really been quite an adventure in debugging. It all started last week when I decided to order a Nexus 5 from Google. It arrived yesterday, on time, and I couldn’t wait to get home to unbox it. Soon after unboxing my new Nexus 5 I would discover something was not well.

After setting up my Google account and syncing all my data, I did what I usually do and tried out the camera. This did not go very well. I was immediately presented with a “Camera could not connect” error. After rebooting a couple of times the error persisted.

I then went to the internet to research my problem and got the usual advice: clear the cache, force quit any unnecessary apps, or do a factory reset. Try as I might, all of these efforts would fail. I actually tried a factory reset three times and that’s where things got weirder.

On the third factory reset I decided to opt out of syncing my data and just try the camera with a completely stock install. However, this time the camera icon was completely missing. It was absent from my home screen and the app drawer. It was absent from the Gallery app. The only way I was able to get the Camera app to launch was to select the camera button on my lock screen.

Now that I finally got to the Camera app I noticed it had defaulted to the front camera, so naturally I tried to switch to the rear. However when I tried this, the icon to switch cameras was completely missing. I tried some third party camera apps but they would just crash on startup.

After a couple hours jumping through these hoops between factory resets I was about to give in. I gave it one last-ditch effort and flashed the phone with Google’s stock Android 4.4 factory image. It took me about another hour between getting my environment set up and getting the image flashed to the phone. However the result was the same: missing camera icons and crashing all over the place.

It was now past 1am and I had been at this for hours. I finally gave in and called up Google. They promptly sent me an RMA tag and I shipped the phone back to them this morning for a full refund. And so began the next day of my adventure.

I was now at a point where I had to decide what I wanted to do. Was I going to order another Nexus 5 and trust that one would be fine or would I save myself the hassle and just dig out an old Android phone I had lying around?

I remembered that I still had a Nexus S which was perfectly fine, albeit getting a bit slow. After a bit of research on MDN I decided to try flashing the Nexus S to use B2G. I had never successfully flashed any phone to B2G before and I thought yesterday’s events might have been pushing me toward this moment.

I followed the documentation, checked out the source code, sat through the lengthy config and build process (this took about 2 hours), and pushed the bits to my phone. I then swapped in my SIM card and crossed my fingers. It worked! It seemed like magic, but it worked. I can again do all the things I want to: make phone calls, take pictures, check email, and tweet to my heart’s content; all on a phone powered by the web.

I have to say the process was fairly painless (apart from the hours spent troubleshooting the Nexus 5). The only problem I encountered was a small hiccup in the config.sh process. Fortunately, I was able to work around this pretty easily thanks to Bugzilla. I can’t help but recognize my success was largely due to the excellent documentation provided by Mozilla and the work of developers, testers, and contributors alike who came before me.

I’ve found the process to be pretty rewarding. I built B2G, which I’ve never succeeded at before; I flashed my phone, which I’ve never succeeded at before; and I feel like I learned something new today.

I’ve been waiting a long time to be able to test B2G 1.4 on Wind Mobile, and now I can. Sure I’m sleep deprived, and sure it’s not an “official” Firefox OS phone, but that does not diminish the victory for me; not one bit.


Firefox 26, A Retrospective in Quality (Part II)

A few days ago I wrote a post detailing a qualitative analysis of Firefox 26 using statistics from Bugzilla. In it I talked about regressions and the volume speaking to a “potential failure, something that was missed, not accounted for, or unable to be tested either by the engineer or by QA”. I’d like to modify that a little by incorporating post-release regressions.

Certainly one would expect the volume of regressions in pre-release to increase as QA, Engineers, or volunteers find and report more regressions. I realize now that simply measuring the volume of regressions might not be a clear indication of quality or a breakdown in process. Perhaps I painted this a bit too broadly.

I’ve just retooled this metric to take a look at pre-release versus post-release regression volume. I think looking at regressions in this way is a bit more telling. After all, a pre-release regression is something we technically knew about before release whereas a post-release regression is something that became known after we released. A high volume of post-release regressions would therefore imply a lower quality release and an opportunity for us to improve.

Just as a reminder, here is the chart comparing all regressions from a few days ago:

chart_regressions

Here is the new chart incorporating regressions found post-release:

chart_regressions_2014-02-03

As you can see the volume of post-release regressions is fairly significant. Perhaps unsurprisingly, chemspills seem to correlate with periods of more post-release regressions. Speaking of Firefox 26 specifically, it continues a downward trend, marking the fourth release with fewer post-release regressions and fewer regressions overall.

Anyway, that’s all I wanted to call out for this release. I will be back in six weeks to talk about how things shaped up for Firefox 27. I’m hoping we can continue to see improvements through iterating on our process and working closer with other teams.


Firefox 26, A Retrospective in Quality

[Edit: @ttaubert informed me the charts weren’t loading so I’ve uploaded new images instead of linking directly to my document]

The release of Firefox 27 is imminently upon us; next week will mark the end of life for Firefox 26. As such I thought it’d be a good time to look back on Firefox 26 from a Quality Assurance (QA) perspective. It’s kind of hard to measure the impact QA has on a per-release basis and whether our strategies are working. Currently, the best data source we have to go on is statistics from Bugzilla. It may not be foolproof but I don’t think that necessarily devalues the assumptions I’m about to make; particularly when put in the context of data going back to Firefox 5.

Before I begin, let me state that this data is not indicative of any one team’s successes or failures. In large part this data is limited to the scope of bugs flagged with various values of the status-firefox26 flag. This means that there are a large number of bugs that are not being counted (not every bug has the flag accurately set, or set at all) and the flag itself is not necessarily indicative of any one product. For example, this flag could be applied to Desktop, Android, Metro, or some combination, with no easy way to statistically separate them. Of course one could with some more detailed Bugzilla querying and reporting, but I’ve personally not yet reached a point where I’m able or willing to dig that deep.
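For anyone who wants to poke at the same data, here is a rough sketch of the kind of query involved. On bugzilla.mozilla.org the status-firefox26 flag is backed by the custom field cf_status_firefox26; the f1/o1/v1 parameters below are the ones buglist.cgi uses, and I’m assuming the REST search endpoint accepts them in the same way.

```python
# Sketch: count bugs by their status-firefox26 flag value. The f1/o1/v1
# advanced-search parameters mirror buglist.cgi; passing them through the
# REST search endpoint is an assumption for illustration purposes.
import requests

BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"

def count_by_status_firefox26(value):
    resp = requests.get(BUGZILLA, params={
        "f1": "cf_status_firefox26",  # custom field behind status-firefox26
        "o1": "equals",
        "v1": value,                  # e.g. "affected", "fixed", "verified"
        "include_fields": "id",
        "limit": 0,
    })
    resp.raise_for_status()
    return len(resp.json()["bugs"])

if __name__ == "__main__":
    for value in ("affected", "fixed", "verified", "wontfix"):
        print(value, count_by_status_firefox26(value))
```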

Unconfirmed Bugs

Unconfirmed bugs are an indication of our ability to deal with the steady flow of incoming bug reports. For the most part these bugs are reported by our users (bugs filed by trusted Bugzilla accounts are automatically bumped up to NEW). The goal is to be able to get down to 0 UNCONFIRMED bugs before we release a product.

chart_incoming_bugs

What this data tells us is that we’ve held pretty steady over the months, in spite of the ever increasing volume, but things have slipped somewhat in Firefox 26. In raw terms, Firefox 26 was released with 412 of the 785 reported bugs being confirmed. The net result is a 52% confirmation rate of new bug reports.

However, if we look at these numbers in the historical context it tells us that we’ve slipped by 10% in confirmation rate in Firefox 26 while the volume of incoming bugs has increased by 64%. A part of me sees this as acceptable but a large part of me sees a huge opportunity for improvement.

The lesson here is that we need to put more focus on ensuring day to day attention is paid to incoming bugs, particularly since many of them could end up being serious regressions.

Regressions

Regressions are those bugs which are a direct result of some other bug being resolved. Usually this is caused by an unforeseen consequence of a particular code change. Regressions are not always immediately known and can exist in hiding until a third-party (hardware, software, plug-in, website, etc) makes a change that exposes a Firefox regression. These bugs tend to be harder to investigate as we need to track down the fix which ultimately caused the regression. If we’re lucky the offending change was a recent one as we’ll have builds at our disposal. However, in rare cases there are regressions that go far enough back that we don’t have the builds needed to test. This makes it a much more involved process as we have to begin bisecting changesets and rebuilding Firefox.

The number of regressions speaks to a potential failure, something that was missed, not accounted for, or unable to be tested either by the engineer or by QA. In a perfect world a patch would be tested taking into account all potential edge cases. This is just not feasible in reality due to the time and resources it would take to cover all known edge cases; and that’s to say nothing of the unknown edge cases. But that’s how open source works: we release software we think is ready, users report issues, and we fix them as fast and as thoroughly as we reasonably can.

In the case of Firefox 26 we’ve seen a continued trend of a reduction in known regressions. I think this is due to QA taking a more focused approach to feature testing and bug verifications. Starting with Firefox 24 we brought on a third QA release driver (the person responsible for coordinating testing of, and ultimately signing off on, a release) and shifted toward a more surgical approach to bug testing. In other words we are trying to spend more time doing deeper and exploratory testing of the bug fixes which are more risky. We are also continuing to hone our processes and work more closely with developers and release managers. I think these efforts are paying off.

chart_regressions

The numbers certainly seem to support this theory. Firefox 26 saw a reduction in regressions of 20% compared to Firefox 25, 25% compared to Firefox 24, and 57% compared to Firefox 17 (our worst release for regressions).

Stability

Stability bugs are a reflection of how frequently our users are encountering crashes. In Bugzilla these are indicated using the crash keyword. The most serious crashes are given the topcrash designation. Shipping more crash bugs in a particular release does not necessarily translate to more crashes per user, but it is indicative of more scenarios under which a user may crash. Many of these bugs are considered a subset of the regression bugs discussed earlier as they mostly come about through changes we have made or that a third party has exposed.

In Firefox 26 we again see a downward trend in the number of crash bugs known to affect the release. I believe this speaks to the success of educating more people in the skills necessary to participate in reviewing the data in the crash-stats dashboard, converting that into bugs, and getting the necessary testing done so developers can fix the issues. Before Firefox 24 was on the train, desktop QA really only had one or two people doing day-to-day triage of the dashboards and crash bugs. Now we have four people looking at the data and escalating bug reports on a daily basis.

chart_crashes

The numbers above indicate that Firefox 26 saw 12% fewer crash bugs than Firefox 25, 41% fewer than Firefox 24, and 59% fewer than Firefox 15, our most unstable release.

Reopened

Reopened bugs are those bugs which developers have landed a fix for but the issue was proven not to have been resolved in testing. These bugs are a little bit different than regressions in that the core functionality the patch was meant to address still remains at issue (resulting in the bug being reopened), whereas a regression is typically indicative of an unforeseen edge case or user experience (resulting in a new bug being filed to block the parent bug).

That said, a high volume of reopened bugs is not necessarily an indication of poor QA; in fact it could mean the exact opposite. You might expect there to be a higher volume of reopened bugs if QA was doing their due diligence and found many bugs needing follow up work. However, this could also be an indication of feature work (and by consequence a release) that is of higher risk to regression.

chart_reopened_bugs

As you can see with Firefox 26 we’re still pretty high in terms of the number of reopened bugs. We’re only 5% better than Firefox 23, our worst release in terms of reopened bugs. I believe this to be a side-effect of us doing more focused feature testing as indicated earlier. However it could also be indicative of problems somewhere else in the chain. I think it warrants looking at more closely and is something I will be raising in future discussions with my peers.

Uplifts vs Landings

Landings are those changes which land on mozilla-central (our development branch) and ride the trains up to release following the 6-week cadence. Uplifts are those fixes which are deemed either important enough or low enough risk that it warrants releasing the fix earlier. In these cases the fix is landed on the mozilla-aurora and/or mozilla-beta branches. In discussions I had last year with my peers I raised concerns about the volume of uplifts happening and the lack of transparency in the selection criteria. Since then QA has been working closer with Release Management to address these concerns.

chart_fixed_by_date

I don’t yet have a complete picture to compare Firefox 26 to a broad set of historical data (I’ve only plotted data back to Firefox 24 so far). However, I think the data I’ve collected so far shows that Firefox 26 seemed far more “controlled” than previous releases. For one, the volume of uplifts to Beta was 29% lower than for Firefox 25 and there were 42% fewer uplifts across the entire Firefox 26 cycle compared to Firefox 25. It was also good to see that uplifts trailed off significantly as we moved through the Aurora cycle into Beta.

However, this does show a recent history of crash-landings (fixes landing very late in a cycle) around the Nightly -> Aurora merge. The problem there is that something that lands on the last day of a Nightly cycle does not get the benefit of Nightly user feedback, nor does it get much, if any, time for QA to verify the fix before it is uplifted. This is something else I believe needs to be mitigated in the future if we are to move to higher quality releases.

Tracked Bugs

The final metric I’d like to share with you today is purely to showcase the volume of bugs. In particular, the volume of bugs we deal with on a release-by-release basis has grown significantly over time. Again, I preface these numbers with the fact that these are only those bugs which have a status flag set. There are likely thousands of bugs that are not counted here because they don’t have a status flag set. Part of the reason for that is that the process for tracking bugs is not always strictly followed; this is particularly true as we go farther back in history.

chart_tracked_bugs

As you can see above, the trend of ever increasing bug volume continues. Firefox 26 saw a 21% increase in tracked bugs over Firefox 25, and a 183% increase since we started tracking this data.

Firefox 26 In Retrospective

So there we have it. I think Firefox 26 was an incremental success over its predecessors. In spite of an ever-increasing volume of bugs to triage we shipped fewer regressions and fewer crashes than previous releases. This is not just a QA success but is also a success borne of other teams like development and release management. It speaks to the success of implementing smarter strategies and stronger collaboration. We still have a lot of room to improve, in particular focusing more on incoming bugs, dealing with crash-landings in a responsible way, and investigating the root cause of bugs that are reopened.

If we continue to identify problems, have open discussions, and make small iterative changes, I’m confident that we’ll be able to look back on 2014 as a year of success.

I will be back in six weeks to report on Firefox 27.

My Philippine Adventure

A few weeks ago I had the opportunity to embark on a two week vacation to the Philippines with my boyfriend, Genesis. His family being from the Philippines, it was a natural first destination in Asia for me. As luck would have it the adventure started well before we were due to depart.

Our original itinerary included a week in the northern island of Luzon and a week on the southern island of Palawan. Each would offer us a different experience: Luzon with an exploration of Philippine civilization and Palawan with its natural beauty. Having booked our vacation back in September we had no idea what the future held. Unfortunately the Philippines would have to endure horrible devastation at the hands of Typhoon Yolanda (aka Haiyan), Palawan being directly in her path. While I’m confident they will recover and be stronger than ever, my sympathies go out to the people affected.

Not wanting these events to impact my vacation I decided to turn the week in Palawan into a week exploring various locales around Luzon. As luck would have it Luzon was unaffected by the storm, giving me the opportunity to explore the wonderful history and culture of the Philippines.

Day 1: Departure

It was Friday, our bags were packed, and we were on our way to the airport. I was so excited for this trip. Arriving at the airport, balikbayan box in tow, we discovered our flight was delayed. We’d have to wait until 1:30 in the morning to board but I was not deterred. Nothing was going to diminish my excitement for this vacation.

Sunrise over the Pacific Ocean

Some time later, after a couple movies and a few hours of sleep, I was pleasantly rewarded with an amazing sunrise. We were a couple hours the other side of the international date line and inching our way closer to the Philippines.

When we landed we were greeted by some of Genesis’ family who had hired a van for the remainder of our trip. Stepping out of the airport in Manila at about 7am local time I recall feeling like I had stepped into the midday sun of a Canadian summer.

Market in Manila

After clearing the airport traffic we stopped at a market for some breakfast. We bought some fresh red snapper, crab, shrimp, and squid which they cooked for us. It was an amazing breakfast having spent 14 hours on a plane, and looking forward to another 8 hours to travel by van.

Later that evening we arrived in Solano, Genesis’ hometown. Coincidentally it was his mother’s birthday so we were welcomed with a bit of a feast. After eating and a visit with more of the family it was time to turn in.

Day 2: Bayombong

Our second day in the Philippines was spent exploring the provincial capital, Bayombong; but first a traditional Ilocano breakfast complete with dried fish, banana, veggies, rice, and pandesal (fresh from the trike vendor).

Saint Dominic Church

Following breakfast we spent the bulk of the morning exploring the market in Solano, getting fresh food for that night’s supper. It was a bit overwhelming at first, so many people and so much food, but I soon settled in to my surroundings. After the market we traveled into the provincial capital, Bayombong, visiting the local museum and Saint Dominic Cathedral.

Day 3: Waterfalls and Parrots

On the third day Genesis, his brother Adonis, and I ventured out to visit a nearby waterfall. Traveling by trike, one of the more common forms of transport in Nueva Vizcaya, we headed up a dirt path into the hills near Solano. After about 30 minutes along the rocky, muddy path we reached the trail head. From here it was a short hike across a rickety suspended foot bridge to a waterfall view.



Unfortunately we weren’t able to hike any further since a recent rain made the rocks far too slippery to attempt climbing. On the plus side this hike gave us a bit of an appetite. Upon returning back to Solano, we picked up Genesis’ mother and his aunt to go out for lunch (yes, more food). I thoroughly enjoyed the presentation at this restaurant; the carrot parrot was a nice touch.

Carrot Parrot

Day 4: Banaue

On the fourth day we traveled to Banaue, a town famous for its rice terraces. This trip gave me the opportunity to travel by another popular form of transportation in the Philippines, the Jeepney: often wildly decorated World War 2-era transport vehicles. It was not the most comfortable form of transportation but it was extremely affordable, only costing a few dollars per trip.

Jeepney to Banaue

After a few hours winding our way slowly up the road into the mountains we arrived in Banaue. Dismounting the Jeepney we walked around the village and up to the view point of the rice terraces. Unfortunately we arrived following the harvest and were welcomed by a thick mountain rain, cutting our trip to Banaue a bit shorter than expected. Even still I found the view stunning and well worth the trip.

Town of Banaue
Banaue Rice Terraces

Day 6: Road to Baguio

The sixth day marked the end of our visit to the Cagayan Valley; today we were off to the mountain city of Baguio. The road to Baguio is one of the most scenic roads I’ve ever had the pleasure of traveling. As you leave the Cagayan Valley, the road winds westward up into the mountains, reaching elevations of 1400 meters. Along the way we came across Ambuklao Dam, a hydroelectric facility on the Agno river in Benguet province. Having been on the road for a few hours we decided to stop, stretch our legs, and take in the view.

Ambuklao Dam

After the break we continued on our way. The road continued to snake ever higher until we finally reached the city of Baguio. Much to my surprise the climate here was completely different than what I’d expected from the Philippines; 23C and minimal humidity; a stark contrast to the 35C and plenty of humidity we experienced days before back in Solano.

Mines Park Lookout, Baguio City
Baguio City

After checking in to our hotel we went to Wright Park for a picnic and walked around the tourist attractions in the area. Unfortunately after a couple hours we would have to say goodbye to Genesis’ family. It was time for them to head home and for Genesis and me to carry on with our vacation. The departure was a bit emotional but I’m grateful I was able to share some of my introduction to the Philippines with his family.

Day 7: Exploring Baguio

On the seventh day, being our only full day in Baguio, we wanted to explore as much of it as possible. The first stop was to the botanical garden where we got acquainted with some of the native plants, artwork, and peoples.

Baguio Botanical Garden

Of course the trip wouldn’t be complete without a visit to the SM shopping mall and a stop at Jollibee. As luck would have it, it started to rain soon after we finished up at the botanical garden. The mall provided the perfect distraction to wait out the rain. Once it let up we went to visit the Baguio museum where we got to learn about the city’s history and culture.

Baguio Museum

Following the museum we continued on to Burnham Park, named for the American architect (Daniel Burnham) who designed several buildings in Baguio as part of the Philippine Commission in the early 1900s. We strolled around the park for a while before heading back to the mall for some more shopping and a movie before we turned in for the evening.

Burnham Park

Day 8: The Road to Laoag

Following a good night’s rest and a traditional Baguio breakfast of dried fish, rice, and egg it was time to pack our bags and catch a bus to Laoag. The ride itself would take us west through the mountains to the coast and then north to Laoag through several (very old) cities and towns. As with our drive to Baguio, the view going down the other side of the mountains was absolutely stunning.

Mountains west of Baguio

 

After a couple hours of our bus winding down the mountain we were nearing the coast. At this point the road veered north, worming its way along the Philippine coast, teasing us with sights of the sea several times along the way. It would be several hours before we reached the historic city of Vigan. Unfortunately for us this was just a stop along the way. The bus would stop here for a few minutes to transfer some passengers before continuing to our destination of Laoag.

Gateway to Vigan

A couple hours later we arrived on the outskirts of the city of Laoag. From here it was a short ride on a trike through the countryside to the coast.

Fort Ilocandia

As luck would have it we arrived at our destination, Fort Ilocandia, just in time to watch the sun set.

Sunset at Fort Ilocandia

After a quick stroll around the grounds of Fort Ilocandia it was time to turn in for the night.

Fort Ilocandia Fountain at Night

Day 9: Laoag City

We had a pretty lazy start to our ninth day in the Philippines. We slept in, had breakfast, went for a long stroll along the beach, had lunch, and then went for a swim. Following that we decided to spend the afternoon exploring Laoag city. It was a short ride from the hotel on the coast to the city.

Laoag Bell Tower

 

One of the main tourist attractions in Laoag is the Sinking Bell Tower. The tower is believed to have been built in 1612 by the Augustinians and leans slightly to the north. Its location is fairly central to other historical architecture like St William’s Cathedral, the Ilocos Norte capitol building, and the court house. I can’t help but be reminded while visiting these places that the Philippines is a country with deep roots in Catholicism and they’ve done well preserving their roots.

After spending a couple hours exploring Laoag it was time to head back to Fort Ilocandia where we were treated to another amazing sunset.

Sunset at Fort Ilocandia

Day 10: Relaxing at Fort Ilocandia

On our tenth day in the Philippines we decided to stay close to home and just relax; I think we needed a bit of a break from all the travel. After breakfast we went down to the beach and watched some local fishermen bringing in their nets they had set out the previous evening.



Following this we went for a swim in the ocean. I was really quite surprised with how warm the water was. This wasn’t my first time swimming in an ocean (I swam in the waters off Prince Edward Island when I was a kid), nor was it my first time in the Pacific (I took a dip in Hawaii last year), but it was my first time in the South China Sea and it was the warmest natural body of water I have experienced to date. We spent a couple hours playing in the waves, trying and failing to keep from swallowing the water. Once we achieved optimum sodium levels we decided it was time for lunch.



On our way to lunch we discovered Fort Ilocandia was home to a small zoo, and we decided this would be a good way to spend the afternoon. The zoo was home to several animals, not least of which were crocodiles, monkeys, and a pair of ostriches.

After a walk with the animals we wanted to explore more of what Fort Ilocandia had to offer. Unfortunately some of the more exciting activities (like snorkeling and hot air ballooning) were only open to groups of four or more. Being off-season we basically had the hotel to ourselves so these activities were not accessible to us. It’s a bit regrettable that we weren’t able to enjoy these activities but perhaps we’ll have better luck, or come with a larger group, next time.

The rest of the day was spent relaxing by the water.

Day 11: Paoay City

Our eleventh day in the Philippines we decided to get back to exploring. Following breakfast we hired ourselves a trike with the goal of seeing Paoay City. The main attraction in Paoay, like many other cities in Ilocos Norte, was an opportunity to take a step back in time and witness some centuries old architecture. In the case of Paoay this would mean a visit to St. Augustine’s Cathedral, a church built in 1710.

St Augustine Cathedral, Paoay
St Augustine Cathedral, Paoay

After strolling around the grounds of St Augustine’s cathedral we were escorted by our trike driver to the Ferdinand E. Marcos Presidential Center. The visit itself was a bit somber as I would learn this to be the final resting place of Ferdinand E Marcos, former president of the Philippines. That aside, it was interesting to see the way homes used to be built in the Philippines; an interesting mixture of cement, local wood, and windows made from translucent capiz shells.

Ferdinand E Marcos Presidential Center

Our next stop on the journey was a bit more exciting. Our trike driver took us back up the coast to a place somewhat off the beaten path. It was here that I would enjoy probably my most thrilling experience on this trip, riding in the back of a 4×4 across the Paoay Dunes.



It was quite windy that day, but as our vehicle danced playfully across the dunes, sand blasting in our face, holding on for fear of being catapulted, I found myself forgetting about all the worries in my life. I could think of nothing else but the fun I was having. I was living in the moment.

Paoay Dunes

After our joy ride was over, and a few minutes to calm ourselves down, we were off to our next destination. We drove back out to the main road, following it around Paoay lake for several kilometers until we reached the far side. Here we reached a rather large house which I would soon learn was one of the many homes of the Marcos family. This home in particular was set up as a museum, not only of the former president’s life but of the people of the Ilocos region.

Malacañang of the Norte

 

We wandered the grounds of this amazingly beautiful property for quite a while. As luck would have it, upon leaving we met a local man who knew the area well and offered to be our tour guide. We decided to hire him for the next, and what would be our last, two days in Ilocos Norte. Upon our return to Fort Ilocandia that evening we were gifted another amazing sunset.

Sunset over South China Sea

Day 12: Pagudpud

On our twelfth day we ventured north along the coast toward Pagudpud. The driver we met the day before escorted us the entire way, showing us some sights we may have missed if we had tried to go it alone. Our first stop was a salt mill just off the road to Pagudpud.

Salt Milling

It was interesting to see how they made the salt, something I had perhaps taken for granted before visiting the Philippines. After farming and milling the rice grain, the waste product is the grain casing. Instead of throwing out this casing they use it as fuel for fire. The fire is used to heat salt water from the ocean to its boiling point. As the water boils, the sodium content distills into salt crystals which are then farmed out of the water. It was really interesting to watch this process unfold before my eyes.

The next stop along the way was a very old lighthouse perched on top of a rocky hill just east of the coast. The lighthouse was constructed and first lit in 1892 and still functions today, marking the northwesternmost point of the Philippines.

Cape Bojeador Lighthouse

We continued our journey up the coast to the town of Burgos to visit the Kapurpurawan rock formation, a limestone monument sculpted by the elements over thousands of years.

Kapurpurawan Rock Formation

 

It was a little bit of a hike to get down the hillside to visit the rocks but it was well worth the journey. The geology was unlike anything I had seen in the Philippines and I found myself, for the second time, challenging my assumptions of this being a country of beaches and palm trees. The first time of course was seeing pine trees and experiencing temperate weather in Baguio. The hike also presented me with an opportunity to make a new friend.



I found this little guy trying to cross the pathway. After capturing this shot I helped him safely to the other side.

Further up the coast we stopped by a wind farm. It was somewhat unexpected but unsurprising at the same time, as the coast in these parts was really quite windy.

Philippine Wind Farm

 

It was rather curious to see a country without the financial capabilities of Canada embracing this technology.

We carried on to our destination, soon arriving in Pagudpud. As we drove further along the coast, the coastline turned to mountains and we found ourselves in a bit of rain, but that did not deter us.

Welcome sign in Pagudpud

 

The road continued and eventually led us back out to the coast where we came across a sleepy little resort. We decided this would be a good place to stop for some food before heading back down to Laoag for the evening. We walked into a small roadside restaurant where a woman cooked us up some fresh tuna and prawns.

Pagudpud Beach

 

After a healthy dose of seafood we made our way back “home” to rest up for our final day in Ilocos Norte. Our driver, Mario, would return in the morning.

Day 13: Vigan

Our thirteenth day in the Philippines was also our last day in Ilocos Norte. Since we didn’t get to see much of Vigan on our first way through we decided we wanted to see it before we left. The trip to Vigan was a few hours round trip so we had plenty of time to fit it in before flying to Manila later that evening.

Old City of Vigan

We arrived in Vigan shortly before lunch and spent much of the mid-day walking around this old city. I was yet again amazed to see the preservation of their rich history and ways of life. Many of the buildings, built hundreds of years ago, are still standing despite decades of pummeling by nature. Many of the people are still practicing crafts using the traditional techniques taught to them by their ancestors.

Pottery Factory in Vigan
Filipino making a Vase

I felt so grateful that I was able to see this city on foot and to meet the people of Vigan. Unfortunately I was unable to stay for long as we had to eat then return to Laoag so we could catch our flight to Manila.

Much later that evening we had landed safely in Manila and after quite some time in Manila’s infamous traffic we checked into our hotel.

Makati

We were only in Manila for two nights before we had to head home. I found Manila to be quite hectic in contrast to the much more relaxing lifestyle I had experienced throughout my journey around Luzon. We spent much of our time walking to and from restaurants and shopping malls. Much of my experience in Manila reminded me of why I left Toronto (too busy, too noisy, too smoggy).

Don’t get me wrong though. I enjoyed my time in Manila. It was just different, and a bit disorienting.

That said, I still look at my time in Manila as a positive experience. After all, it gave me the opportunity to visit with Adonis (Genesis’ brother) and I was able to experience a Filipino hilot massage, complete with a hot banana leaf across my back.

Back to Reality

This whole vacation has been an amazing journey. It has been a feast for the eyes and the mind; and at times just a feast. It was relaxing, busy, enlightening, and rejuvenating. I feel like I was able to have an authentic Philippine experience with a dash of frivolity. Perhaps the best part was being able to share some of that with Genesis and his family.

In the end my time in the Philippines was successful. I came back to my life in Vancouver re-energized and enlightened, something I hope to achieve in all my vacations.

I’m already looking forward to returning to the Philippines someday, perhaps next year. There is so much more this country has to offer and I can’t wait to experience it.



If you want to see more pictures of my trip, I’ve posted a photo album on trovebox.

Firefox 25 Bug Stats

We hit another milestone this week. After 24 weeks, 12 Betas, and 3 RCs, Firefox 25 was tested, signed off, and shipped to the general public. Since Firefox 10 I’ve been collecting data using the Bugzilla status flags in an effort to determine what impact our policies and efforts are having on the quality of the product. I’ve decided it would be good of me to move that project over to my blog. For one, it makes this data a little bit more discoverable. For another, this will give me an excuse to do something I don’t have a good track record of following through on: blogging.

The Data

Before I go into the numbers let me state that I have no background in Metrics or Statistics whatsoever. I will not, nor can I, draw any conclusions about the data. I am merely presenting it here to you, the community, in a way I feel to be interesting. Feel free to comment on this or any related posts if you have any suggestions about how I may improve my methodology or make more accurate, informed conclusions.

With that, here are the numbers for Firefox 25. Anything in green is a success that we need to continue to build on. Anything in red is an area that needs greater attention as we move forward.

  • 382 verified fixes (1% improvement)
  • 649 unverified fixes (14% improvement)
  • 439 unfixed bugs (14% improvement)
  • 245 wontfix bugs (10% improvement)
  • 129 unconfirmed bugs (34% degradation)

We’ve made continuous improvement to our processes around fix verification. This includes focusing more on high priority bugs in the pushlogs, verifying fixes earlier in the cycle whenever possible, documenting our processes, and trying to involve more volunteers. This tells a story that I’ve seen repeated over the last few releases. The addition of a third QA release lead (Tracy Walker) has allowed us to scale somewhat in the last couple of cycles (we are now spending more time on Aurora than we were before). I suspect the move to two Betas per week has also contributed here in that it’s allowed us to do more focused testing on fewer changes more frequently. Unfortunately we aren’t putting enough effort into unconfirmed bug triage, something I hope to improve upon as I switch focus to Firefox 28 next week.

The Data Visualized

Now I’d like to share the visualization of this data. Please keep in mind that I’m not drawing any conclusions here, I’m merely visualizing the data in ways I believe to be interesting. I hope you do too.

Firefox25-Bug-Breakdown
Breakdown of bugs by status per Firefox version
Number of fixes landed in each branch for each Firefox version
Percentage of fixes verified prior to release for each Firefox version
Active instances of Firefox compared to unconfirmed bugs shipped each release
Active instances of Firefox compared to number of fixes verified in that release

More to Come

That’s Firefox 25 from my perspective. I hope you have found this interesting. I will continue to share this data and update the visualizations every six weeks. You are encouraged to provide me feedback and ask questions about the data, the comparisons I’ve visualized, and my methodology.

Until next release, enjoy.