A Developer's Guide to Quality Assurance


React Native



Why did we need Quality Assurance?

Let’s onboard you onto our project first! We’re a team of 3 developers, one UX designer, one scrum master and a product owner, building a wonderful yet complex React Native application by rigorously applying Scrum methodologies. Without digging into the project’s specifics, our product is destined for the Malaysian market, and a big offshore consulting firm is in charge of Quality Assurance (QA) testing. By QA testing, I mean black-box testing the application against a set of requirements.

Every 3 to 4 weeks, after a new release was published, we would receive a batch of defects from QA to correct. The unpredictable nature of these defects (When would we receive them? How many were legitimate? How complex were they?) sometimes made it difficult to plan the sprints ahead. Not only did we get the QA Testers’ defects too late, but as the application grew bigger and bigger, regressions crept in, feeding our QA Testers with even more bugs.

As a result, at every QA testing round our image deteriorated. Our client was losing confidence in our work, and started doubting our development practices and our ability to deliver a robust application.

What is a QA Developer?

Let’s get back to our example. Our client was unsatisfied with the quality of our work, yet none of our Scrum team members were QA Testers. When developing a ticket, it is the developer’s role to test his or her own feature or fix against the Definition of Done. The Product Owner then goes through the ticket again to validate it once and for all. Nowhere along the chain is there a QA Tester to test the isolated feature and its impact on existing ones, nor did we automate end-to-end integration tests on our application. The client explicitly asked for someone specifically dedicated to QA.

Yet no one at BAM had QA Testing expertise. This is where I came in as both a QA Tester and a Developer for a month or so.

My role was twofold:

1. Find defects before our client’s QA Testers did, which involved QA testing skills.
2. Correct them before they were shipped in a release. Not only did I need to put out the fire on our existing issues, but more importantly our client wanted me to take a critical look at the application’s architecture in order to identify common root causes of defects and architectural deficiencies. The idea was not to provide quick fixes but to refactor some of the buggy modules more deeply.

Quality Assurance Testing

The first part of my learnings was purely QA-oriented. Let’s focus on that before looking at the developer side of things (bug fixes).

Here is a QA testing workflow that worked for me, illustrated with the Airbnb application:

1. Draw the tree of paths

Draw the tree of paths a user can take from a well-defined starting point. The purpose of the tree is to be as exhaustive as possible when writing your test plan. Since we just downloaded the Airbnb application, our starting point is the Welcome screen. I use Google Drawings for its simplicity and ease of access. (~2h)
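The same tree can also be captured as data, which makes it easy to enumerate branches mechanically. Here is a minimal sketch; the screen names are hypothetical, not Airbnb's actual ones:

```javascript
// A minimal path tree for a sign-up flow (screen names are made up).
const pathTree = {
  screen: 'Welcome',
  children: [
    {
      screen: 'SignUpWithEmail',
      children: [{ screen: 'AcceptTerms', children: [] }],
    },
    { screen: 'SignUpWithGoogle', children: [] },
    { screen: 'SignUpWithFacebook', children: [] },
  ],
};

// Each root-to-leaf path is one branch of the tree, i.e. one test case.
function enumeratePaths(node, prefix = []) {
  const path = [...prefix, node.screen];
  if (node.children.length === 0) return [path];
  return node.children.flatMap((child) => enumeratePaths(child, path));
}

console.log(enumeratePaths(pathTree));
```

Counting the paths this way also tells you up front how many test cases an exhaustive plan would require.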



2. Write a test plan.

A test plan is a roadmap of your tests. Every branch of your tree makes up a test case. You then break down every test case into a series of actions to be performed and their expected results. In our example, there are 9 ways to create an account by accepting the Terms & Conditions, aside from the Google, Weibo & Facebook options. Below is an example of a very simple test plan for our previous example. You need to answer a set of questions: What prerequisites do I need to run the tests? What device am I going to run them on? These are followed by a series of actions to be performed and expected results. (~3h)
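A test case from such a plan could look like the following sketch. The steps and screen names are hypothetical, but the shape (prerequisites, device, actions paired with expected results) is the important part:

```javascript
// A hypothetical test case from the sign-up test plan.
const testCase = {
  id: 'TC-01',
  title: 'Create an account with email, accepting Terms & Conditions',
  prerequisites: ['App freshly installed', 'No account logged in'],
  device: 'iPhone 8, iOS 11',
  steps: [
    { action: 'Tap "Sign up with email" on the Welcome screen',
      expected: 'The email sign-up form is displayed' },
    { action: 'Fill in the form and accept the Terms & Conditions',
      expected: 'The "Create account" button becomes enabled' },
    { action: 'Tap "Create account"',
      expected: 'The user lands on the home screen, logged in' },
  ],
};
```

Keeping each step as an action/expected pair leaves no ambiguity about what "pass" means when someone else executes the plan.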



3. Include boundary tests in your test plan.

Suppose there is a rule saying you cannot create more than 5 different accounts with the same first & last name; don’t forget to include it in your test plan.
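Boundary tests sit on both sides of the limit: the last allowed value and the first rejected one. Here is a sketch of that hypothetical "5 accounts per name" rule and the two checks a test plan should exercise:

```javascript
// Hypothetical rule: at most 5 accounts per first/last name pair.
const MAX_ACCOUNTS_PER_NAME = 5;

function canCreateAccount(existingAccounts, firstName, lastName) {
  const sameName = existingAccounts.filter(
    (a) => a.firstName === firstName && a.lastName === lastName
  );
  return sameName.length < MAX_ACCOUNTS_PER_NAME;
}

// 4 existing accounts: creating a 5th one is still allowed.
const accounts = Array.from({ length: 4 }, () => ({
  firstName: 'Ada',
  lastName: 'Lovelace',
}));
console.log(canCreateAccount(accounts, 'Ada', 'Lovelace')); // true

// 5 existing accounts: a 6th one must be rejected.
accounts.push({ firstName: 'Ada', lastName: 'Lovelace' });
console.log(canCreateAccount(accounts, 'Ada', 'Lovelace')); // false
```

Testing only well inside or well outside the limit would miss an off-by-one bug right at the boundary.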

4. Execute the test plan

Execute the test plan: try out the different account-creation tracks by allowing or refusing the “Notifications” and “Sync contacts” permissions, and run the boundary tests (~1h per device). If you have a lot of devices, don’t execute every test case on every device or you’ll spend your nights doing so. If you’ve already created an account answering the authorization pop-ups for Notifications and Sync Contacts on iOS, you can run another test case covering the 2 pop-ups on Android instead.

5. Isolate component tests.

For low-order components requiring user input, perform unexpected actions. In an email input field, for example, you would try to enter an incorrectly formatted email. No need for a test plan; it’s up to the tester’s common sense and experience to run those tests while executing the test cases of the test plan.
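For the email field example, a tester would throw a handful of malformed inputs at whatever validator the component uses. A minimal sketch, assuming a deliberately simple hypothetical validator:

```javascript
// A deliberately simple email validator, as a low-order input component
// might use: one "@", a non-empty local part, and a dotted domain.
function isValidEmail(input) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input.trim());
}

// The kind of malformed inputs a tester tries against the field:
const malformedInputs = [
  'plainaddress',      // no "@" at all
  'missing@domain',    // no dot in the domain
  '@no-local.com',     // empty local part
  'spaces in@mail.com' // whitespace inside the address
];
malformedInputs.forEach((input) => {
  console.log(`${input} -> ${isValidEmail(input)}`); // all false
});
```

The component should reject every one of these gracefully, with a clear error message rather than a crash or a silent failure.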

6. Monkey test your app!

Here is a non exhaustive list of tricks to keep in mind:
    - Go offline
    - Press buttons several times in a row, quickly
    - Open & close the keyboard several times
    - Open background applications and reopen your application (multitasking on your phone)
    - Go from portrait to landscape
    - Enter special characters in search inputs
    - Play with sliders
    - Scroll rapidly and repeatedly
    - Focus on the back-end API calls: go offline, kill the app, and go back and forth around those. This is where it helps to have a developer’s vision and understand the ins and outs of the application.
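The "press buttons several times quickly" trick is a classic source of duplicate API calls. One possible guard, sketched here with a hypothetical `submitOrder` call, is to ignore taps while a request is already in flight:

```javascript
// Wrap an async handler so repeated rapid taps only trigger it once.
function makeSingleFlight(fn) {
  let inFlight = false;
  return async (...args) => {
    if (inFlight) return null; // ignore taps while a call is running
    inFlight = true;
    try {
      return await fn(...args);
    } finally {
      inFlight = false; // allow the next tap once the call settles
    }
  };
}

// Hypothetical submit handler with a simulated network delay.
let calls = 0;
const submitOrder = makeSingleFlight(async () => {
  calls += 1;
  await new Promise((resolve) => setTimeout(resolve, 50));
  return 'ok';
});
```

With this wrapper, a monkey tester hammering the submit button produces a single order instead of several.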

Bug reporting

For every defect you identify, create an instance in your favorite bug-reporting tool, such as Jira. It goes without saying that you should first check with the product owner and the development team that it is indeed a defect and not a feature or a normal part of the development cycle. I found that when tracking bugs it is important to specify their severity. Yet in the software industry there is no consensus on bug severity.


For the purpose of the article let us focus on 3 categories:

1. Critical

A critical functionality or data is not working. There is no workaround. It can cause serious financial and business loss.

2. Major

A major deviation from the business requirements. A workaround exists: either the feature works, yet not as expected, or it does not work but is not essential to the application.

3. Minor 

A minor deviation from the business requirements or a UX/UI (Cosmetic) defect.


I also added an Extra Feature category. A defect is an extra feature when there is a discrepancy between the developers' user stories and the QA Testers' requirements.

Below is a graph highlighting the number of defects returned to us by the client’s QA Testers and handled in a given sprint. There are 2 peaks, one after each release. Our product owner received the defects from the QA Testers and tried to fit them into the sprint with varying degrees of priority, usually in line with the bugs’ severities.



We started doing QA in sprint 18. We can see that from there on, the overall number of defects returned to us by the client decreased, along with their severity. There was a major client QA testing phase around sprint 21.

Below is a graph representing what was done by the QA developer of our team per sprint, as of sprint 18. We notice a decrease in the severity of the defects corrected.

We can also see 3 consecutive phases more or less intertwined:

1. Putting out the fire on critical issues
2. Refactoring/Rethinking the architecture of the application.
3. Implementing automatic end-to-end integration tests to detect regressions on already mature features.



Identify root causes

Rather than treating bug tickets one by one, I’ve found that you can often gather several defects under a common root cause. To do so, spend 10 to 15 minutes on each defect writing down your assumptions about the origin of the problem and a rough draft of the technical solution if you have one. You can then group defects under the same root cause.

I’ve found that this had 2 benefits:

1. Tackling defects by root cause proved to be more efficient and produced more robust solutions for the future.
2. It provided a critical overview of the application’s architecture, pinpointing modules that required deeper refactoring.


Below are examples of common root causes I have identified in our project:

1. Transitioning from orders to subscriptions. Our application would send an order to the back end, which after a series of checks would turn it into a subscription. Yet when sending an order via the application, there was no way to know which subscription your order was linked to, and no way to track the status of an order. We would just poll our subscriptions for 20 seconds, hoping to find a new one. In 99% of cases this was fine. Yet when an order failed or got stuck in progress, we could not track it and give appropriate feedback to the user.
2. Error management was not homogeneous across the application and was simply missing in some cases. When going offline and performing actions, there was no feedback to the user, or the feedback was not always appropriate.
3. Buggy modules were responsible for several defects across the application. We were using redux-form, and in some rare cases deleting components containing a redux-form would throw an error, triggering a white screen for the user.
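For the first root cause, the robust fix is to poll the order itself (assuming the back end returns an order ID and exposes its status) rather than blindly polling the subscription list. A sketch of that idea, where `fetchOrderStatus` is a hypothetical API call, not our actual one:

```javascript
// Poll a specific order until it resolves, so failed or stuck orders
// can produce real user feedback instead of a silent timeout.
async function waitForOrder(orderId, fetchOrderStatus, { retries = 10, delayMs = 2000 } = {}) {
  for (let attempt = 0; attempt < retries; attempt += 1) {
    const status = await fetchOrderStatus(orderId);
    if (status === 'CONFIRMED') return 'CONFIRMED';
    if (status === 'FAILED') return 'FAILED'; // surface the failure to the user
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return 'TIMED_OUT'; // still pending: tell the user instead of guessing
}
```

Each outcome maps to a distinct message in the UI, which is exactly what the blind 20-second subscription poll could not provide.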

Final Tips

1. As part of the continuous improvement process, don’t forget to share what you’ve learned from bug fixing with the team. It’s important not to reproduce the same mistakes!
2. Test on a stable environment, as close to the production environment as possible. Our team kept developing new features, with a back end that would sometimes restart; it was not always easy to test rigorously.
3. Stay in touch with the development team and give visibility to the product owner. As both a developer and a QA Tester, I was taken out of the team’s velocity and worked in kanban on important defects, which took me up to 8 days when refactoring important modules. Things you can do:

    - Write a QA daily in addition to the daily mail the rest of the team sends
    - Have your QA tickets follow the classic Scrum flow from the Today Backlog to Doing to To Be Validated by the Product Owner, just like the other team developers

4. Prioritize your test plan. In our case there were 14 different contexts in which you could create an account. If you were to test the rest of the features with all 14 accounts, either you have an army of testers or you’re spending your nights doing so! Prioritize by popularity: it might seem obvious, but you want to test the most commonly used features first.
5. Spend some time on error management at the beginning of the project to establish standards and a strategy you can stick to throughout the whole project (it isn’t a lot of work, yet it brings a sense of robustness to your application).

Stay tuned for an upcoming article on end-to-end integration tests with Detox :)