
Show HN: Wdio – Docker setup for WebdriverIO - blueimp
Hi HN,

recently there have been a lot of discussions about Chrome's dominance on the Web leading to many websites being broken for alternative browsers.

I think one of the reasons for this is that browser automation is hard, and manual testing is often limited to the browsers developers themselves use.

I've distilled my best practices for browser test automation into the following project and hope it helps other developers test their projects with more browsers:

https://github.com/blueimp/wdio

It's a Docker setup for WebdriverIO with automatic screenshots, image diffing and screen recording support for containerized versions of Chrome and Firefox.

It also includes WebDriver configurations to test an app running in Docker with Safari Desktop, Safari Mobile and Chrome Mobile via Appium, and Internet Explorer and Microsoft Edge in a Windows 10 virtual machine.
======
janpot
wrt "Chrome's dominance in test automation", I'd like to point out that the
puppeteer team is working on Firefox support.

[https://www.npmjs.com/package/puppeteer-firefox](https://www.npmjs.com/package/puppeteer-firefox)
[https://aslushnikov.github.io/ispuppeteerfirefoxready/](https://aslushnikov.github.io/ispuppeteerfirefoxready/)

~~~
blueimp
I think puppeteer is an interesting project, but right now it's Chrome-only
and therefore pretty much useless for cross-browser testing. Even with support
for Firefox, it would still lack support for Safari Desktop, Safari Mobile,
Internet Explorer and Microsoft Edge. It also won't allow you to run the same
tests against real devices.

I think Puppeteer is likely the superior choice for any browser-automation
task apart from cross-browser testing. But if you want to make sure that your
website is working for your users, a framework that uses the standardized W3C
WebDriver API is the far better choice, unless you only want to support
Chrome.
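To illustrate the point about the standardized API: with a WebDriver-based framework like WebdriverIO, targeting more browsers is largely declarative. Here is a minimal, hypothetical config fragment (the browser list is an assumption, not taken from the project):

```javascript
// Sketch of a wdio.conf.js fragment: because every browser driver speaks
// the same W3C WebDriver protocol, running a suite cross-browser is
// mostly a matter of listing one capability entry per browser.
const config = {
  capabilities: [
    { browserName: 'chrome' },
    { browserName: 'firefox' },
    { browserName: 'safari' },
  ],
};

module.exports = config;
```

Each capability entry is handed to the matching driver, so the same test code runs unchanged against each browser.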

------
Vinnl
I've been running WebdriverIO in Docker for a while now, as follows:

On my CI server (using GitLab CI), I run a Node Docker image, and connect the
Selenium Firefox [1] or Selenium Chrome [2] Docker image to it. I then install
WebdriverIO, and tell it to find Selenium on that container's hostname.

This works, but is a little bit brittle, and I've had to pin the Selenium
image versions because something broke at a certain point and it didn't seem
worth it to fix it yet.

Which is to say: I'd very much be in the market for using the containerised
versions of Chrome and Firefox, if there were instructions for doing so in CI
- the primary use case for browser automation, in my opinion.

I realise that this might not be your intended usage, but figured I'd provide
this feedback just in case it is.

[1] [https://hub.docker.com/r/selenium/standalone-firefox/](https://hub.docker.com/r/selenium/standalone-firefox/)

[2] [https://hub.docker.com/r/selenium/standalone-chrome/](https://hub.docker.com/r/selenium/standalone-chrome/)

~~~
blueimp
Hey Vinnl,

you can definitely use this project and the containerized versions of
Chrome/Firefox on CI - in fact that's its primary use case.

The way this project is set up is to use the chromedriver/geckodriver servers
directly, without the Selenium Java server.

My recommendation for anyone using this in a production CI system is to fork
the wdio, chromedriver, geckodriver and underlying basedriver repos and set up
your own Docker automated build for them.

For a given GitLab repository, you would add this project as a folder and
modify the provided docker-compose.yml to replace the example app with the
application files from your repository.

Please let me know if I can help you with additional instructions.

~~~
Vinnl
Ah, but for me the main appeal would be to point it to an image, and then not
having to maintain it myself - which is what I'd be doing with forking, and
which is basically what I'm already doing, but with more work.

So for example, in my GitLab CI config, I've simply added the following lines
to my job configuration:

    
    
      services:
        - name: selenium/standalone-firefox:3.13
          alias: selenium
    

I can then simply tell my wdio config that Selenium is running at `selenium`
(i.e. `wdio --host=selenium`), and it will work.
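For reference, the matching WebdriverIO side of that setup could look roughly like this (a sketch; the `host`/`port` option names follow older WebdriverIO releases, newer ones use `hostname`):

```javascript
// Hypothetical wdio.conf.js fragment pointing WebdriverIO at the GitLab
// CI service alias instead of localhost. Port and path are the Selenium
// server defaults.
const config = {
  host: 'selenium', // GitLab CI service alias
  port: 4444,       // default Selenium server port
  path: '/wd/hub',  // default Selenium server endpoint
  capabilities: [{ browserName: 'firefox' }],
};

module.exports = config;
```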

However, the setup is somewhat brittle, doesn't work with the latest versions
of Selenium (and I can't be arsed to fix it), and I think it still starts an X
server. If, instead, I could simply point it to an image that runs headless
Firefox, is maintained, and intended for use with wdio, then that would be an
excellent time saver.

When I have to fork, however, the hurdle to start using this is a lot higher,
and the savings of not using the Selenium Java server are not really worth the
additional effort.

~~~
blueimp
Well, you can use the provided images without forking, and they both support
running Chrome/Firefox headless without X.

But since I'm building this in my personal time, there's no professional
support nor a guarantee that it won't break, so I would still recommend
forking it.

------
dplgk
I've tried various automated browser tools and they all were flaky (e.g.
randomly hanging while waiting for an element to appear when the element is
already there). Is this a bug in my testing code, or are all these types of
tools actually that unreliable?

~~~
blueimp
I'd say it's definitely hard to write cross-browser automated tests that are
not flaky.

Some of that is due to unreliable implementations of the WebDriver API (or
the previous Selenium JSON Wire Protocol) in the different drivers.

Another part is that the API is asynchronous by nature, which might make it
harder to reason about - although with WebdriverIO you can actually write your
tests in a synchronous style, or use async/await with modern Node.js versions.

Regarding the error you described - waiting for an element to appear when the
element is already there: this might also be due to the element being outside
of the viewport, e.g. the browser will not be able to click on it until you
scroll it into view.
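A defensive pattern that rules out the off-screen case might look like this (a sketch using WebdriverIO's async element API; the helper name and selector are illustrative):

```javascript
// Hypothetical helper: wait for the element to exist, scroll it into
// the viewport, then click - so a click can't fail just because the
// element is rendered off-screen.
async function safeClick(browser, selector) {
  const element = await browser.$(selector); // look up the element
  await element.waitForExist();              // it is in the DOM
  await element.scrollIntoView();            // bring it on-screen
  await element.click();                     // now the click can land
}

module.exports = safeClick;
```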

------
ioseph
I may be out of the loop but one of the reasons I went with a different
library to WDIO was the ability to remotely test any browser without having
drivers installed on the device. Has this changed with Selenium?

~~~
blueimp
Any framework (including WebdriverIO) that uses the W3C WebDriver API or the
older Selenium JSON Wire Protocol requires the appropriate driver for each
browser.

In my opinion that's not a disadvantage, since the WebDriver API is a W3C
standard and there are official drivers for each browser, implemented by the
browser vendors themselves (with the exception of the IEDriver, which is
implemented by the Selenium project, as far as I know).

Unless you use a browser automation API built into the browser (like
WebDriver or Puppeteer's protocol), the only alternative is to inject the test
code via JavaScript, which might pose problems with the Content-Security-Policy
directive and often requires the tested site to run in an iframe, which poses
additional problems.

------
aboutruby
I'm mostly used to Capybara / Selenium (mostly chromedriver) and have a
reasonable level of JavaScript/Node, but I'm really not sure how I would use
this project beyond setting up the different browsers. Am I supposed to load
those configs with WebdriverIO using `remote`?

~~~
blueimp
Hey aboutruby, the way to use this project is the following:

1. Check out the repo

2. Follow the README to set up the different browsers

3. Run the tests against the included sample app

4. Replace the sample app with your own app.

That last part depends very much on your own app. It's easiest if your app
can already be run via docker-compose; then you would simply replace the
example container with your own container set. Otherwise, you could point the
baseUrl in the wdio.conf file to your host machine (e.g. using
`host.docker.internal`) and run your app on your host.
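For the host-machine variant, the change could be as small as this (a sketch; the port is an assumption, and `host.docker.internal` resolves out of the box on Docker for Mac/Windows, while Linux may need extra configuration):

```javascript
// Hypothetical wdio.conf.js fragment: point the tests at an app served
// on the Docker host instead of the bundled example container.
const config = {
  baseUrl: 'http://host.docker.internal:8080', // port is illustrative
};

module.exports = config;
```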

------
fulafel
The docker-compose file raises the question: is there some handy way of
versioning the used Docker images in the docker-compose.yml, like pip / npm
have? You would usually want to control for browser versions in this kind of
testing, no?

~~~
blueimp
While Docker definitely supports tagging versions, I've decided to not tag the
example images for now.

The main reason for this is that it would be very difficult to properly
express what a version stands for, e.g. there are multiple changing parts in
the chromedriver image:

1. The Chrome version

2. The Chromedriver version (although this is tied to the Chrome version)

3. The Docker image configuration

I will try to keep changes to the chromedriver/geckodriver image
configurations to a minimum, but can't guarantee it.

Another reason not to tag those images is that Chrome/Firefox use a rolling
release system, making Chrome/Firefox versions less meaningful, as usually the
latest version is the most important to test.

My recommendation for anyone using those images for production CI
infrastructure is to fork the repos and set up your own Docker automated
builds.
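For anyone who does need reproducible browser versions, a fork makes pinning straightforward (a hypothetical docker-compose fragment; the image name and tag are placeholders, not published tags):

```yaml
# Illustrative docker-compose.yml fragment: a fork can publish tagged
# images and pin them here instead of tracking "latest".
services:
  chromedriver:
    image: yourfork/chromedriver:74.0 # pinned instead of "latest"
```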

