
Ask HN: At what point in mobile app development is QA asked to test? - brayhite
What indicates to the engineers or product manager that a mobile app project, whether it be a feature or a new app altogether, is ready for the (manual testing) QA team to begin?

I understand that some believe a QA team should work in tandem with devs, but in our small company, QA is spread across iOS, Android, desktop web, and mobile web testing, and is made up of two people. So, where time is limited for QA, when do devs/PMs feel the time is appropriate for QA to begin exploratory testing of builds?
======
wingerlang
We do testing when a ticket is moved to the "testing" column in JIRA. Then we
do another sweep once all features are ready.

------
gls2ro
Here is my opinion on your situation: you probably already have a mindset of
testing as being reactive (meaning it starts after something is built). It
might help in the long term (with some effort in the short term) to change
this and involve the testing team as early as possible in your development
effort. As time passes, this will reduce the number of bugs and the need for
retesting and regression, and thus in the end leave more time for the
testers.

Aside from this big change, here is some more practical advice:

The Testing Team _can start_ as soon as they can execute the mobile app
(either through a simulator/emulator or directly on the device).

In order to minimize the effort of the testing team, here are some things
that I think might help:

1\. The purpose of the testing team should be: a) _covering all
functionalities_ and b) _discovering as many bugs as possible_.

2\. To achieve the first point, the testing team should define a series of
risks they want to cover. This can be done quickly and efficiently if they
have two types of knowledge:

(a) what the application does (and here I think they can leverage what they
learned from the other platforms), and

(b) the types of bugs specific to the platform they are testing.

With these in mind, they can write a list of risks of what might go wrong in
the app, even before testing starts, using the knowledge they have about the
product features.

With this list and with an app ready to be installed on the device they can
and should start testing.

I should say three more things:

1) Optimising the testing effort by limiting different variables related to
the testing process (i.e. starting as late as possible, doing it with few
people...) should be a decision always balanced against risk. I'm not saying
you should not do it. But when making such a decision, one should assess what
is at risk in terms of probability and impact.

2) Testing is an activity that can also be done by development. So you can -
and, based on many best practices, should - write unit tests as much as
possible. Unit tests written by the development team let the testing team
focus more on exploring functionality, putting themselves in the shoes of the
end-user, and discovering what might go wrong from that perspective.

3) When constrained by time pressure, one _strategic focus_ of the testing
team should be to _cover more with less effort_. That might mean using more
tools, learning more best practices, building skills that buy time (for
example, typing faster), buying faster machines, or reducing the amount of
documentation written.

Hope this helps in some ways.

edit: formatting

~~~
brayhite
Those are some very valid points. In your opinion, does the ease with which
defects and/or bugs are found matter? In an environment that I'd hesitate to
call agile, but where we try to be as hands-on across the board as possible,
should engineers/devs be expected to do very basic and rudimentary testing of
a feature outside of a unit test before saying it's ready for QA?

~~~
gls2ro
> In your opinion, does the ease with which defects and/or bugs are found
> matter?

I'm not sure I completely understood your question. But if you are referring
to the two testers discovering a lot of bugs very quickly, then the fix for
this is for the developer to do unit testing and integration testing, or
basic feature/request testing (depending on your architecture).

> Should engineers/devs be expected to do very basic and rudimentary testing
> of a feature outside of a unit test before saying it's ready for QA?

First, about unit testing: this type of testing usually assures a developer
that the classes or modules or functions _work as the developer expected_.
That is very different from verifying that the feature _works as the client
or end-user expects_.

Here is an example: with unit testing you will probably verify that a class
that exports PDFs does export them, and that no errors occur during the
export. It might also test that, if a footer is defined, it is included at
the bottom of the page.

But when looking at system/acceptance testing (or feature testing), one
should check that the information that brings value to the end-user is
present, or perhaps that the page matches the color scheme of the branding.
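The distinction can be sketched with a toy example. The `PdfExporter` class below is hypothetical, invented purely to contrast the two levels of checking; a string stands in for real PDF output.

```python
class PdfExporter:
    """Hypothetical exporter; a string stands in for real PDF bytes."""

    def __init__(self, footer=None):
        self.footer = footer

    def export(self, body):
        if not body.strip():
            raise ValueError("nothing to export")
        return body + ("\n---\n" + self.footer if self.footer else "")


doc = PdfExporter(footer="Page 1").export("Quarterly report")

# Unit-level check: the class works as the *developer* expected --
# exporting raises no error and the footer lands at the bottom.
assert doc.endswith("Page 1")

# Feature-level check: closer to the *end-user's* expectation -- the
# information that actually matters to them is present in the output.
assert "Quarterly report" in doc
```

Both assertions pass, but only the second one says anything about whether the feature delivers value; real acceptance testing works at that second level, against the running app rather than a class.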

So basically yes, they should do more than unit testing. Or, if you are in an
agile environment, you can do some kind of "pair testing" (not sure if this
concept exists) where a developer and a tester work together to write tests.

Regarding the second part of the question, I think you are looking for some
kind of Definition of Ready for QA.

And here is one simple checklist:

1) I finished writing code

2) I finished writing some unit tests (you can choose a level of code
coverage for unit testing depending on how much time you want to invest in
it).

3) I opened the application _at least once_ and followed the story to see
whether, from the perspective of the end-user, I implemented it correctly (if
it has acceptance criteria, then I follow those).

Of course, step 3 can also be automated by writing some kind of automated
acceptance test (or at least a basic functional scenario).
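An automated version of step 3 might look like the sketch below. Everything here is hypothetical: the "login" story, its acceptance criteria, and the `FakeApp` object, which stands in for a real UI driver such as Appium, Espresso, or XCUITest.

```python
class FakeApp:
    """Stand-in for a real UI-automation driver; state is simplified."""

    def __init__(self):
        self.screen = "login"
        self.valid = False

    def type_credentials(self, user, password):
        # Invented demo credentials, purely for illustration.
        self.valid = (user == "demo" and password == "secret")

    def tap_sign_in(self):
        self.screen = "home" if self.valid else "login"


def test_user_can_sign_in():
    # Given the app starts on the login screen
    app = FakeApp()
    # When the user enters valid credentials and taps "Sign in"
    app.type_credentials("demo", "secret")
    app.tap_sign_in()
    # Then the home screen is shown (the story's acceptance criterion)
    assert app.screen == "home"


test_user_can_sign_in()
```

The point is not the tooling but the shape: one scenario per story, phrased in given/when/then terms that mirror the acceptance criteria, so "I followed the story at least once" becomes something a CI run can repeat.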

