

Robot With Long Finger Wants to Touch Your iPhone Apps - hugs
http://www.wired.com/wiredenterprise/2013/08/tapster/

======
perze
My non-robot hands have 10 fingers and can execute up to 10-point tactile-
dependent tests, but that's not really the point of this article.

First, there seems to be a very uninformed notion that being able to "program"
makes you a better tester. Maybe, but that really depends on your context.
What's important is the tester's mindset and how you can bake that mindset
into the surrounding engineering culture.

Second, speed. Agile isn't about speed. Agile is about people and how those
people interact with each other to produce better "things". One of the places
where bugs are introduced in any development cycle is when developers write
code. The faster your "speed", the faster you introduce "bugs". The ability to
test "fast" is nice, but you have to test intelligently to get the most bang
for your buck, and you need a human for that.

Third, manual testing being referred to as the "lowest form of life on the dev
cycle" is not a reflection on the tester but on how bad the organization's
hiring methods and engineering processes are. How you employ a tool or a
person does not define and limit what testing is for. Cum hoc ergo propter
hoc.

None of this takes away from Jason's accomplishments in the test-automation
tool space, or from what he's done to revolutionize and simplify certain tasks
that would otherwise be really tedious and even annoying to perform over and
over again. I thank him for that.

But when people who don't seem to understand what testing is for make summary
judgements and/or conclusions about the future of people with brains in
software testing, I agree wholeheartedly with Keith Klain as to where they can
put that robot's finger
(https://twitter.com/KeithKlain/status/362941136207233025).

Peace.

------
vsComputer
There's an implied dichotomy in this article between "manual testers" and
"testers who know how to program" which is unnecessary, in my opinion. There's
a whole spectrum of technical skill that's being ignored. All software testers
are using computers to do their work, and they are all using tools to make
them more effective.

I think this tool sounds really interesting and would allow a tester to
generate lots of interesting tests. It could conceivably also speed them up,
which is great, and it could allow for automated regression testing of touch
features, which is also great. That doesn't really mean anything with regard
to who is using the tool. You don't have to become a full-time developer to
learn to use Selenium for tasks where it makes sense, and I assume that this
tool is the same way. If it's not, then a collaboration between somebody who
has those automation skills and the "manual testers" could still create a
wrapper that allows the testers to use the tool effectively.
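To make that wrapper idea concrete, here is a minimal sketch in Python. Every name in it is made up for illustration (the real Tapster or Appium-style API will differ); the point is only the shape of the collaboration: an automation-skilled person writes the thin wrapper once, and testers drive it with plain-language calls.

```python
# Hypothetical sketch only -- TapsterClient is a stand-in that records
# commands instead of driving a real robot or device.

class TapsterClient:
    """Stand-in for a device-automation backend; just logs what it's told."""
    def __init__(self):
        self.log = []

    def tap(self, x, y):
        self.log.append(("tap", x, y))

    def swipe(self, x1, y1, x2, y2):
        self.log.append(("swipe", x1, y1, x2, y2))


class TouchTestWrapper:
    """Plain-language interface a tester can use without automation skills."""
    def __init__(self, client, named_targets):
        self.client = client
        self.targets = named_targets  # name -> (x, y) screen coordinates

    def tap_on(self, name):
        x, y = self.targets[name]
        self.client.tap(x, y)

    def swipe_between(self, start, end):
        x1, y1 = self.targets[start]
        x2, y2 = self.targets[end]
        self.client.swipe(x1, y1, x2, y2)


client = TapsterClient()
app = TouchTestWrapper(client, {"login": (100, 400), "submit": (100, 600)})
app.tap_on("login")
app.swipe_between("login", "submit")
print(client.log)
```

The tester never touches coordinates or device plumbing; they only name the things on screen they want poked.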

------
vsComputer
"Catch’s flagship product, Enterprise Tester, actually automates this part. It
takes specs created by analysts and software architects and automatically
generates test plans. These can then be handed over to manual testers to run."
<--- this is how they test software in Hell.

~~~
hugs
I agree. That is hell. There's good automation and bad automation. Bad
automation is just automating the creation of more busy work for people. Good
automation does the boring, tedious things for people -- so they can relax and
go back to focusing on the things that make them happy.

------
philk10
If only the robot could do all the thinking and observing that a skilled
manual tester can do whilst testing an app...

I'm all in favour of automating the boring, repetitive parts and using
automation for things humans can't do, but AI ain't here yet.

~~~
hugs
I totally agree. The way I see it, there are two types of manual testing that
often get conflated in conversation:

      1) Make sure everything that worked yesterday is still working today.
      2) Check out this new feature. Let us know what makes sense, and what doesn't.

Automation is great for scenario one and (still) terrible for scenario two.
Some expert testers prefer to call scenario one "checking". They don't want
their high-skill work (which it is) to be confused with low-skill button
clicking.
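Scenario one has a very mechanical shape, which is exactly why it automates well. A minimal Python sketch of that kind of "check" (the `format_price` function is a made-up stand-in for any behavior that worked yesterday):

```python
# Minimal sketch of scenario-one "checking": verify that yesterday's
# behavior still holds today. The app code here is a stand-in.

def format_price(cents):
    """Pretend this is the app code that worked yesterday."""
    return "${:,.2f}".format(cents / 100)

# Each entry encodes a behavior that already worked; re-running them
# requires no judgment, which is what makes this automatable.
REGRESSION_CHECKS = [
    (0, "$0.00"),
    (199, "$1.99"),
    (123456, "$1,234.56"),
]

def run_checks():
    failures = [(inp, want, format_price(inp))
                for inp, want in REGRESSION_CHECKS
                if format_price(inp) != want]
    return failures  # empty list means everything still works

print(run_checks())  # -> []
```

Scenario two has no table of known-good answers to compare against, which is why a human brain still has to do it.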

