Show HN: Usetrace – Record and Play Highly Maintainable UI Tests – For Web Devs (usetrace.com)
8 points by eeheino on April 24, 2015 | 1 comment



Even recognizing my bias as an automation engineer, I don't know if I think this is a long-term solution for most companies.

For one thing, the point in the FAQ regarding the inflexibility of keyword/flowchart-based testing is pretty apt. You can do a lot more with a UI testing DSL like Selenium or Watir than with something like this. They wave that off as 'you probably don't have the skills available,' but the thing is that people who do have the skills around Selenium or other UI frameworks aren't -that- rare or expensive.
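
To make that concrete, here's roughly what a trivial scenario looks like in Selenium's Python bindings (the URL and element ids below are made up, not anything from Usetrace's docs). The point is that once it's code, loops, helpers, and real assertions come for free:

    # minimal Selenium sketch -- URL and element ids are hypothetical
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "email").send_keys("test@example.com")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()
    # a real assertion, not just "did the playback match the recording"
    assert "Dashboard" in driver.title
    driver.quit()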

Further, I personally haven't found this kind of testing to necessarily be easier than coding, at least for any even mildly interesting interface. The main skills around UI testing aren't really about knowing what fields to point at and what to click; they're more about:

* How and what do you reliably verify? What's really invariant in a session and what's not?

* How do you write code so that your tests are robust?

They say these are maintainable, but record/playback tests typically aren't very resistant to minor flow, identifier, or container/ownership changes like new divs. Neither are coded tests, necessarily, but they're usually a little easier to abstract for that stuff and generally a lot easier to debug and fix when things go off the rails.

And having or consulting a skilled automation engineer will go a long way toward showing you how to write your app code so that stays true.
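
By way of illustration, the usual abstraction is something like a page object, where selectors live in one place and the app exposes stable test hooks. This is just a sketch; the data-test attributes here are an assumption about the app, not something every app has:

    from selenium.webdriver.common.by import By

    class LoginPage:
        # selectors live in one place; a renamed id or a new wrapper div
        # becomes a one-line fix instead of a broken recording
        EMAIL = (By.CSS_SELECTOR, "[data-test=email]")       # hypothetical test hooks
        PASSWORD = (By.CSS_SELECTOR, "[data-test=password]")
        SUBMIT = (By.CSS_SELECTOR, "[data-test=submit]")

        def __init__(self, driver):
            self.driver = driver

        def log_in(self, email, password):
            self.driver.find_element(*self.EMAIL).send_keys(email)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()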

* How do you deal with timing issues?

Knowing how to do synchronization effectively (wait for this before doing that) is more than just putting in the sync request; it's knowing what to sync on. Sometimes that's not something a record/playback system can see visually, so these sorts of systems usually end up recording hard delays, but those aren't very resistant to variations in network timing. You also have to separate the delays that only exist because the person recording paused to think from the delays that are actually necessary.

Hard delays also make your tests run slower even in the best or normal case, and these folks charge per minute.
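
For comparison, a coded explicit wait syncs on the condition itself rather than on a recorded pause (the .results-loaded selector here is made up):

    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    def wait_for_results(driver):
        # instead of a recorded hard delay like time.sleep(5), sync on the
        # condition itself: returns as soon as the element shows up and only
        # pays the full timeout when something is actually wrong
        WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.CSS_SELECTOR, ".results-loaded"))
        )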

* How do you always start from a known state and deal with test fixtures (starting data)?

This isn't trivial if they're pointing at your production app and it saves any kind of state. It may or may not be trivial pointing at a test instance.
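
What that tends to look like in practice is seeding known data out of band before the UI ever gets touched. Here's a rough pytest/requests sketch; the endpoints are hypothetical:

    import pytest
    import requests

    @pytest.fixture
    def seeded_account():
        # create known starting data through a backend API before the UI test runs
        resp = requests.post("https://staging.example.com/api/accounts",
                             json={"name": "ui-test-account"})
        account = resp.json()
        yield account
        # and remove it afterwards so the next run starts from the same place
        requests.delete("https://staging.example.com/api/accounts/%s" % account["id"])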

* How do you recover if a test has a problem?

This is easier on a web app because you can generally go back to a home page, but if you have to clean up half-done data or operations it's not necessarily trivial either.
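
The usual trick is to make getting back to a known place unconditional, something along these lines (the home page URL is made up):

    def run_scenario(driver, steps):
        # run the scenario, but no matter what breaks, get back to a known page
        try:
            for step in steps:
                step(driver)
        finally:
            driver.get("https://staging.example.com/")  # hypothetical home page
            # any half-done records would get cleaned up here, usually via an API call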

And finally, with a system like this:

* How do you work around its limitations?

That's a skillset in itself. It's one of the reasons I don't like this sort of thing much. What you learn doing standard UI testing is pretty portable. What you learn figuring out the quirks of a keyword/flowchart system is generally not.

The last consideration is:

* How do you review the test for correctness?

This is a hell of a lot easier looking at code than it generally is going through a one-keyword/node-at-a-time flowchart with a crapload of dropdowns.

Now, to be fair:

* When you're at minimum viable product and it's not doing much, this could be an OK bootstrap test system. There are local record/playback tools with a better growth curve, but they probably do have a bigger ramp.

* If your app is completely deterministic and your tests actually need no logic whatsoever, record/playback can be pretty solid.

* If cleaning up sessions is as simple as deleting a user or something, some of the concerns around fixtures go away.

* Though there's testing dogma that says never use record/playback, it's BS (like most dogma). Sometimes it actually is cheaper to re-record tests than to fix them. It mostly depends on how much massaging you had to do after recording, and on how many tests you have.

* And on that subject, just 10-20 good user-scenario UI tests can do a -ton- for basic risk mitigation even on a complex codebase. Most companies actually overspec UI suites considerably. The majority of tests should be coded at unit/component level, or at least hermetic (not end-to-end, no external servers). End-to-end UI tests are for your most important scenarios, not for every edge case--they're too fragile and maintenance-heavy.

But ultimately, I think companies will usually be better served by investing in a longer-term and more flexible solution than this for their test coding, and by going to something more like SauceLabs or some other hosted execution service for the automated runs.

Honestly, if Usetrace would just separate the coding part from the running part and offer the keyword/flow-style speccing as a "value add" for people who don't want to do traditional scripts, I'd be more likely to say this could be a pretty cool solution. But right now there's no ramp from "super-limited test" to "flexible test," and that's a real paint-yourself-into-a-corner kind of problem.



