Protractor (http://www.protractortest.org) is a good wrapper for Selenium: despite the name, you can totally use it with non-Angular pages (I'm a Preact person myself). It's convenient because it handles installing and updating webdrivers (which was especially handy on Windows), and seems well-maintained. And if you're using only Chrome and Firefox, it even has a direct connection mode to skip having to run a Selenium server (example: https://github.com/wildpeaks/threejs-examples-screenshots).
Otherwise if you just want to run tests in a real browser environment but don't want to test in multiple browsers, you could run them in NW.js, Electron, SlimerJS, or PhantomJS.
With some (actually, a lot of) projects, the people who can code have to spend 100% of their time coding features. Tools like Selenium IDE strive to offer a 'good enough' solution when you're trying to automate the work of manual testers, or to get the less technical members of your team involved in UI testing. And tools like Screenster turn 'good enough' into a working solution.
Visual regression testing is, imho, another case for tools like Screenster (http://bit.ly/2kXEEFV), even though I know that, technically, you have some of this functionality with Protractor (http://bit.ly/2kune0t).
This is what I do for web, mobile web and mobile apps (Appium). It takes some time to set up the proper layout, but once it's done it's very easy to move the frameworks around for the various platforms. Scripting out the tests also lets me use our API for more than just UI testing - functional testing as well.
1 - http://www.guru99.com/page-object-model-pom-page-factory-in-...
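To make the page-object layout from that link concrete, here is a minimal sketch. `LoginPage`, the locator tuples, and the field IDs are all illustrative, not from any real app; `driver` stands for any object exposing Selenium's `find_element(by, value)` interface:

```python
class LoginPage:
    # All locator knowledge lives in one place, so tests that
    # call login() don't break when the markup changes.
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("css selector", "button[type=submit]")

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```

A test then reads `LoginPage(driver).login("alice", "hunter2")` and never mentions a selector, which is what makes the framework easy to move between platforms.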
The advantage of visual/screenshot-driven automation tools like Screenster.io or Kantu.io is twofold:
1. It allows non-developers to join the testing team and create automated web tests.
2. Even if you are a developer by training, tools like NW.js or PhantomJS still have a significant learning curve. I'd argue that developer time is typically better spent on the product than on coding complex test cases.
I mean, if you want to do end-to-end UI testing, you'd probably want manual testers to cover a huge part of the routine. And tools like Screenster help you automate the process and use it for UI regression testing.
1. Substitute writing code with a visual editor. This is, essentially, what Screenster does.
2. Make scripting abstract enough to a point where it starts to resemble plain English. That's, in a nutshell, the approach of Cucumber.
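To make the Cucumber approach concrete, a feature file reads like plain English. This scenario and its step names are invented for illustration:

```gherkin
Feature: Login
  Scenario: Successful login
    Given I am on the login page
    When I enter valid credentials
    And I click the submit button
    Then I should see my dashboard
```

Each `Given`/`When`/`Then` line is wired to a small step-definition function in code, so non-technical team members can compose new scenarios out of existing steps.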
Each of the two has its pros and cons. In my opinion, the key strength of tools like Screenster in this context is how they address visual/CSS regression testing (specifically, things like text or image formatting issues, layout issues, and other things that typically get broken by CSS updates).
For this reason, I much prefer test libraries that read more like BDD cases without actually trying to invent a new DSL for me to maintain on top.
If that's an acceptable tradeoff for having anyone be able to create new test cases though, it would probably be okay.
I've been playing with it lately. Here is a starter project: https://github.com/jumasheff/nightwatch-cucumber-e2e-testing...
I like my web app acceptance tests to be code, readable and maintainable, not UI/OCR thingies... but I've seen so many bugs like "this element is visible and clickable, but Selenium says it's not, because the 3d-css animation + dom-manipulation voodoo of the tested app got selenium completely confused"... and I'm not even mentioning web code with events-on-SVG-nodes or webgl here...
Any non-standard and animation-heavy web-app/page will utterly confuse Selenium to the point that you end up testing by injecting js into the browser (so now you can have bugs in tests only, or tests that pass despite bugs) mingled with test-runner code and heuristic timeouts everywhere...
(I know the "real solution" is "use a frontend framework and write front-end-unit-tests instead", but usually you're writing the acceptance tests for something that already exists).
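For what it's worth, the injected-JS escape hatch described above usually boils down to something like this. It's a sketch built on Selenium's real `execute_script` API, but the helper name is mine:

```python
def force_click(driver, css_selector):
    """Click an element via injected JS, bypassing Selenium's
    visibility/clickability checks. A workaround for confused
    drivers, with exactly the downside mentioned above: the
    test can now pass even when a real user couldn't click.
    `driver` is any Selenium WebDriver instance."""
    driver.execute_script(
        "document.querySelector(arguments[0]).click();",
        css_selector)
```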
I find that screenshot-based test cases are very easy to read and maintain - even and especially for the non-coders on your team. New button? Just take a new screenshot. Everyone that can use Snagit understands that ;)
I see this as somewhat analogous to the abstractions provided by an ORM. It won't cover every edge case, but that doesn't mean you should toss it out entirely - just bypass the abstraction in the (minority of) cases where you need to.
If you need to bypass the abstraction too much of the time, that indicates something is a bit wrong with your front-end code, just as having a database where you could almost never use an ORM would indicate something fundamentally wrong with your data model.
Expose all data through a clean and documented API. If screen-reader accessibility matters, someone will eventually pay you to develop an alternate interface using the same API, or an open-source one will be developed at some point...
The solution to accessibility problems is usually: (1) expose your data through an API; (2) wait to be paid to develop a truly accessible alternate "light" UI (if that ever happens), but don't cripple the "main" UI (well, actually more like "have it the way the customer wants it" ;) heavy animations and fx are "crippling" in my personal view too...).
(And apart from general and professional ethics, local laws might disagree with the assumption that you're free to ignore users for that reason.)
Eh, GUI tests definitely have their place, although they can be far less specific than your unit tests. They're pretty great for quick and regular confidence checks.
Anyway, TestCafe says it doesn't use Selenium, but I haven't tried that yet: https://testcafe.devexpress.com/
For a more visually oriented IDE Sikuli seems to be a good start but last time I checked there was no easy way to automate its use for CI.
Some lessons learned that we're solving with the product:
- creating basic tests must be doable in 1 minute
- testing is a team effort (testers&devs) - must have collaboration tools built-in
- must be extendable with code (js)
- sync/timing issues must be handled automatically (when possible)
- UIs will change - generate locators and algorithms that add robustness
- debugging must be speedy: must be able to debug test steps live with the browser context (html5 remote connection to the browser)
- randomness is needed - need an easy way to introduce random data (like random emails)
- testing emails must be as easy as testing a web page - inbox must be an integrated feature
- must handle frames transparently - user does not care if the element is inside a frame or not
- must be pluggable to CI
- must have good, clear reports
- some tests will be flaky - must be a way to keep builds stable & green
- no installations / plugins, just a browser
- fast to update tests when UI changes - the system must do global replacements when it sees duplicate stuff
- duplicate test step/code will cause a major maintenance burden - must have reusable/parameterizable components
- people will want to reuse functional tests for stress testing and monitoring - allow that too
- people need to check for dynamic content - must have a quick and codeless way to use variables
- people want to have quick builds - tool must offer parallel testing
- people will want to integrate to their own systems - must offer APIs
- html pages are responsive - must have support for different screen sizes
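Some of these points are trivial to script yourself when a tool falls short - e.g., introducing random test data such as random emails. A minimal sketch; the function name and default domain are made up:

```python
import random
import string

def random_email(domain="example.com"):
    # Ten random lowercase/digit characters give ~3.7e15
    # possible addresses, so collisions across test runs
    # are extremely unlikely.
    local = "".join(
        random.choices(string.ascii_lowercase + string.digits, k=10))
    return "test+{}@{}".format(local, domain)
```

The `test+...` prefix keeps the addresses easy to filter out of a shared inbox.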
But I'll be happy to re-evaluate the tool.
I'd add that it's nice to have an "eject button" path to run the tests on local infrastructure. Also, it should be possible to run tests against localhost.
Even at the "starter - 20 builds / wk" tier, I'm not sure that would cover any team I've ever worked with for an entire day...
Anyway, I appreciate the transparency of "yes, this is our product" as well as inclusion of competitors. Doesn't make it seem as though they're the only other game in town.
Most importantly, their analysis tells me they're watching their competition. Too many companies say "oh they stink" wrt their competitors and take a "we know best" approach.
Being open to watching competitors means knowing when they do something smart and adopting the good parts. I appreciate that in marketing.
I didn't really intend to pick on Selenium. Actually, that's the exact wording from the title of the article I used as my reference, so I thought it would be fair to leave it the way it is.
But I agree, Selenium is one of the many cases illustrating the huge impact open-source projects have on technology, and we must give it credit for that.
Time is not free so in the end you have to compare the total cost and productivity and choose the best tool for yourself.
The advantage is that test-runner code mingles seamlessly with injected-into-browser JS. The problem is... complexity, and twilight-zone-class bugs in the testing tools themselves: there comes a point where the logic of the acceptance tests is so convoluted, and they are so undebuggable and unmaintainable, that fixing and extending them eats the time of the best and most expensive developers, instead of being a great task for juniors...
Eager to try helium next, hope it sucks less :)
Edit+: don't get me wrong, CodeceptJS is amazing compared to the madness of selenium-webdriver/webdriver.io used "raw", with promises and callbacks, so kudos to its devs. But it's far from optimal... though this may be because Selenium itself is far from optimal.
The reason a lot of people still use it, is because it's very easy to create a test; you basically record the things you want to test and Selenium IDE generates the test for you, without requiring any programming knowledge.
( Source: I work for a company where people upload their Selenium IDE tests to run them on a large grid of browsers in the cloud, https://testingbot.com )
The Selenium IDE I like using at this point is Oxygen. It's Atom with a Selenium recorder built in. It records and gives you code with details about the XPath and CSS Selectors provided in comments above each recorded action. It's the best tool I've found for quickly getting off of record-playback dependency and into working with code. The Oxygen IDE provides intellisense and real debugging. It strikes the right balance of allowing you to record when you need to, but then getting out of the way and providing you with a real IDE so you can work with the code.
I did a blog post not too long ago about some of its quirks on Windows...
You can set the implicit wait setting. This will work in most cases, but it will make tests slow when you need to test negative conditions. The behavior around implicit waits also varies by driver implementation, which can be annoying. In general it's better to use explicit waits.
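An explicit wait is just a polling loop scoped to one condition; Selenium's `WebDriverWait` does roughly this. A stdlib-only sketch of the idea, not Selenium's actual implementation:

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll a zero-argument condition until it returns a truthy
    value (which is then returned) or the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(
                "condition not met within {:.1f}s".format(timeout))
        time.sleep(poll)
```

The point versus implicit waits: only the lookups you wrap pay the waiting cost, so a negative check ("this element must NOT appear") can use a short timeout instead of the global one.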
> Selenium IDE only works with Firefox and may not work for sites or web apps that have slightly different DOM structure in Chrome, Safari, and IE.
Selenium IDE has WebDriver playback. If you don't like that, you can generate WebDriver scripts in C#, Java, Python, or Ruby and run them against a Selenium server.
Obviously you can do everything with WebDriver; Screenster uses it too. The point is that you get a lot of benefits out of the box when you use a higher-level platform like Screenster vs. hand-coding against a low-level API like WebDriver.
There's some description of Screenster features here http://screenster.io/selenium-alternatives-for-testing-autom...
A while back, I combined Java's Robot class with Selenium and a simple scripting language to create Scripted Selenium.
Never mind that I'd be seriously worried that a lot of these small time players are out of business the next year and I'll end up with useless tests (of course this is better if the software itself is just helping recording tests and it all ends up as Selenium scripts in some human-readable language).
> Finally, Sahi, RFT, and UFT are expensive enterprise solutions that require local installation, certain servers, runtime licenses and lots of config. Being difficult to learn, hard to maintain, and costly, none of these solutions beats the open-source and free Selenium. End of story.
Sahi is open source and is hosted on SourceForge: https://sourceforge.net/projects/sahi/files/
I think the future lies with platforms like Screenster that have recording, collaboration, built-in browser execution, visual verification and lots of other magic, and not with low-level frameworks like Sikuli.
But both have their place :-)
Plus all the data that you record is tangled in the generated code making it hard to locate or change.
We created Gauge (http://getgauge.io/ - it's open source) to solve this problem by making it easier to author tests in your favorite tool, without a recorder.
The tests can be authored in Markdown and wired up to reusable Selenium code.
One thing we've done in the past is to generate a large background load with BWMG and then run a few PhantomJS/Selenium tests in a smaller test rig. It's a lot easier to get the load that way vs. trying to swarm Phantom or Selenium.
In modern web pages that have to work well with touchscreens, mouse-over is generally de-emphasized anyway.