I hope drawing a clear boundary like that reduces some of the entitled nonsense that can happen around open-source projects.
Case in point: Rust's actix-web.
Regardless of where you stand on the soundness telenovela, it boggles the mind that so many consumers of the FLOSS project felt entitled to demand access to a well-maintained project for free, and complained that even forking it wasn't a valid option, because they expected someone else to spend their personal time for those consumers' personal benefit.
I'm always on the lookout for browser automation using Python so excited to try this framework. I was a big fan of https://github.com/miyakogi/pyppeteer but it doesn't seem to be maintained anymore.
There is another project by Microsoft called Playwright, though it's JS-only for now. They have no plans for Python bindings at the moment: https://github.com/microsoft/playwright/issues/1043
It's all about ROI, and ROI is huge for simple high-level smoke tests.
Honestly, though, 75% of automation errors are timing missteps -- automation dispatching events at a specific millisecond that a human never would.
One of the things that really struck me as I was reading through examples and such was how common it seems to be to have test code like "Click x; Wait 5 seconds; Click y; Type 'abc'; Click Z". My experience with unit testing tells me that as tests break 'randomly' (because something occasionally takes a few milliseconds too long to load) those delays get inserted, and eventually you end up with slow tests that are harder to maintain and still flaky.
In my project, we were installing on a variety of OSes and versions, and based on what dependencies had to be downloaded+installed it could take anywhere from a few seconds to a couple minutes. I ended up writing a lot of helper functions like `WaitForFile(filename, absoluteTimeout)` that basically run a retry loop, checking every couple seconds (to allow fast-as-possible pass/fail), but frankly was surprised that this was non-standard -- I kind of expected any "integration test assertion library" to be chock full of helpers like this.
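For what it's worth, a `WaitForFile`-style helper is only a few lines. Here's a sketch in Python (the name, signature, and two-second default poll interval are my guesses at the original, not its actual code):

```python
import os
import time

def wait_for_file(filename, absolute_timeout, poll_interval=2.0):
    """Retry loop: return True as soon as the file exists,
    False once the absolute timeout has elapsed."""
    deadline = time.monotonic() + absolute_timeout
    while True:
        if os.path.exists(filename):
            return True
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return False
        # Sleep no longer than the time left, to pass/fail as fast as possible.
        time.sleep(min(poll_interval, remaining))
```

The same shape works for any slow external condition (a port opening, a service answering): check, short sleep, repeat until a hard deadline.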
I think there's a lot of ground to be covered in improving integration testing in general, and especially bringing in some more modern software design and engineering principles. Unit and integration tests are still code after all -- just because it doesn't ship with the product doesn't mean it shouldn't be maintained at the same quality.
The Java bindings have support for this in the form of FluentWait#until http://javadox.com/org.seleniumhq.selenium/selenium-support/...
Using explicit waits for expected conditions (instead of static or implicit waits) helps make the tests more robust. It's a bit more work though.
It's been working pretty well for me so far.
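The explicit-wait pattern is easy to reproduce outside Selenium, too. A minimal, library-agnostic sketch in Python (the names `until` and `poll` are mine, loosely modeled on FluentWait#until, not any real binding's API):

```python
import time

def until(condition, timeout, poll=0.5):
    """Poll `condition` until it returns a truthy value.

    Returns that value, or raises TimeoutError once `timeout` seconds
    have passed -- i.e. wait for a state change, not for a fixed delay."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(min(poll, max(0.0, deadline - time.monotonic())))
```

With Selenium that might look like `until(lambda: driver.find_elements(By.ID, "results"), timeout=10)` in place of a static `sleep(5)`: it returns as soon as the elements appear, and only burns the full ten seconds when the page genuinely never gets there.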
That's the first mistake.
You should always wait for some kind of state change; otherwise you're just building in fragility.
That's 20 years out of date.
But I worked on a project where testing with Selenium/Cypress was an important requirement, so the whole codebase was designed to be testable, and this made all the difference.
Once you start being careful and designing your code to be automatically testable (which is mostly just applying good practice anyway), it yields very good results and catches a lot of regressions/bugs.
Unit tests, on the other hand, were letting a lot of problems pass through...
And that seems to be part of the point of this project. From the description:
> In Selenium, you need to use HTML IDs, XPaths and CSS selectors to identify web page elements. Helium on the other hand lets you refer to elements by their user-visible labels.
The example script uses a bunch of user-visible labels, not classes, ids, ancestors, etc. It is things like "click('Sign in')" which you would tell a user you were walking through an interface.
Global XPath-style traversal is fragile, unless you're plugged in enough that the dev team lets you know when they break it.
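To make the contrast concrete, here's a toy illustration (not Helium's actual implementation) of why label-based lookup survives markup churn that selector-based lookup does not; the page structure and names below are invented:

```python
def find_by_label(elements, label):
    """Return the first element whose user-visible text matches `label`."""
    for el in elements:
        if el.get("text") == label:
            return el
    raise LookupError(f"no element labelled {label!r}")

# A pretend DOM: the ids are auto-generated and change between builds,
# but the text a user actually sees stays stable.
page = [
    {"tag": "input",  "id": "fld-93ab", "text": "Username"},
    {"tag": "button", "id": "btn-17cd", "text": "Sign in"},
]

# Selector style: click("#btn-17cd")  -- breaks when the id changes.
# Label style:    click("Sign in")    -- reads like the user story.
button = find_by_label(page, "Sign in")
```

A test written in the label style describes the interface the way you'd describe it to a user, which is exactly the framing the quoted description is going for.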
Now (years later) we have a subset of the original tests to ensure the most crucial processes are regression tested.
My experience is that it's a lot harder for these tests to find the right balance and depth than for unit or even other integration tests.
As already said, it's good to consistently have ids on relevant elements, as otherwise the test code complexity grows significantly.
Life is definitely much easier if the devs set up their pages to be automation friendly. I view this as a prerequisite; otherwise the tests can become nightmarishly complex.
A few UI tests on top of the pyramid do help.
We did move from Selenium to Cypress because of flakiness in Selenium, and because most browsers run the same engine under the hood (Cypress is Chrome-only).
The Edge support kinda came for free thanks to its switch to the Blink engine, but the FF support had been under development for a long time.
Animations and rendering races are sticking points too. Usually it's possible to put in some strategic waits to get consistent results.
Lastly, getting them running in parallel can be tough, but the payoff is usually worth it once the number of tests grows beyond a few dozen.
"Spinner that disappears" isn't terrible, but isn't great either.
"Everything looks the same, and then some results load, and more results might load at some point, maybe" is sadly not that rare.
The maintenance-to-issues ratio remains very high, but the kinds of issues these tests can catch weigh heavily in their favor.
At any rate there are of course issues that can be caught in GUI tests that cannot be caught in unit tests.
Great little tool!
Tools like this will perhaps be useful for automated testing, accessibility, etc.
If you're interested: https://stackoverflow.com/questions/33225947/can-a-website-d...
A lot of the Selenium API is quite low-level (speaking of the Java variety). It would be wonderful if this and similar efforts ended up in the core Selenium project, keeping the low-level options but providing an easier-to-use layer on top.
The video shows a script running itself.
One advantage of wrapping WebDriver, rather than using something like the Chrome DevTools Protocol, is that WebDriver has an interface specified by a W3C standard, and can be implemented for any browser. The Chrome DevTools Protocol (obviously) only works with Chromium-based browsers.
"WebDriver can be used with all major browsers. Automate real user interactions in Firefox, Safari, Edge, Chrome, Internet Explorer and more!" 
(Edit: zabil removed the link to his project.)
I'm glad someone wrote what I had thought :D