
How we test the TeamCity UI - el_duderino
https://blog.jetbrains.com/teamcity/2020/06/teamcity-ui-how-do-we-test-it/
======
ed_balls
Great post about testing; it's pretty much all you need to develop a robust
app.

BUT TeamCity's UX is the worst of all the CI systems I have used, from the
first version of Jenkins to managed stuff like CircleCI. People hate it so
much that someone at work added a Slack emoji :teamshitty: for it.

~~~
cglong
Having never used TeamCity (but being intrigued), what's so bad about it?

~~~
snuxoll
As somebody who does like TeamCity - it can be a bit clunky to navigate
around. It’s much like Zabbix in that once you figure it out it’s not a
problem, but the learning curve for anything beyond basic use can be steep.

I pretty much use Gitlab CI these days for anything outside my day job though,
and the stuff at work is all Azure Pipelines (via Azure DevOps) now.

------
drinchev
The snapshots part is something that I completely disagree with.

> Here, we change the engine type from turbofan to propfan. Just to test how
> it works. Since the new engine no longer matches our snapshot, the test
> fails. We’ve got a report, and our engineers are on their way to investigate
> the problem.

I would be really happy to know that the onboard computer has tests that
check whether the engine being used is compatible before it takes off.

IMO, tests in general should be as close as possible to the real use case
(kudos for the screenshot testing) and as far as possible from the
implementation. What happens with snapshots is that you are bound to the
implementation. Maybe I have logic somewhere in my code that does
s/propfan/turbofan/. Would a user be interested in that, or in the
functionality itself?

We ditched all of Jest's snapshot assertions in our codebase a couple of
months ago, because of code-coverage abuse and the false sense of "being
safe". It was a good decision that helped us get closer to "what the user
sees" rather than "what the markup should be".
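To make the implementation-coupling point concrete, here is a hypothetical illustration (not from the article or any real codebase): two renderers with identical user-visible behavior but different markup. A stored-snapshot comparison breaks on the refactor; a "what the user sees" check does not.

```javascript
// Original implementation.
function renderEngineLabel(engine) {
  return `<span class="engine">${engine}</span>`;
}

// Refactored implementation: same visible text, different markup.
function renderEngineLabelV2(engine) {
  return `<div><span data-testid="engine">${engine}</span></div>`;
}

// Crude approximation of "what the user sees": strip the tags.
function visibleText(html) {
  return html.replace(/<[^>]+>/g, '');
}

// Snapshot recorded against the original implementation.
const storedSnapshot = renderEngineLabel('turbofan');

// Snapshot-style check: fails after the refactor, though nothing
// user-visible changed.
const snapshotStillMatches = renderEngineLabelV2('turbofan') === storedSnapshot;

// Behavior-style check: still passes.
const userStillSeesEngine = visibleText(renderEngineLabelV2('turbofan')) === 'turbofan';
```

The snapshot test reports a failure for a pure refactor, which is exactly the "bound to the implementation" cost described above.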

~~~
bickeringyokel
Am I misunderstanding, or are snapshots basically just schema validation?

~~~
simoncarter
It's more akin to file diffing. Developers are notified of any differences in
snapshots, and the developer can then either accept the new snapshot or
investigate and fix.

------
hogFeast
For E2E testing, is the browser still totally necessary? I feel the React
rendering test libraries (React Testing Library/Enzyme) manage quite well. I
have used Selenium and that ilk before, but you get odd failures every now
and then (i.e. browser problems, but also resource problems when you run
parallel tests).

Also, I am sure this will be pointed out several times, but snapshot testing
doesn't seem like a great idea. Your test is wrapped totally around your
implementation, and it is kind of unclear what you are testing: change
something, run the test, the snapshot fails because of the UI update, update
the snapshot, it passes... if you introduce some kind of unexpected bug with
that change, how do you know? It works if you are testing for the same
result, but once you start making changes to the snapshot, you lose all
coverage... tests are just as important then too.

Tbf, you usually end up wrapped around your implementation anyway (i.e.
checking to see if there is a button that has certain text) but I feel that
snapshots are a bit shortcut-y and give false confidence.

Surely the point of UI tests is to really test the functionality from the
point of view of the user...I don't think snapshots achieve this (and I have
always got in trouble when I strayed away from this principle...personally).

~~~
yen223
Tests are a trade - you spend some dev time writing and maintaining tests, and
in return you might save some time and money down the line, if the test
prevents bugs from doing damage. Tests aren't always good. There are tests
that are so difficult to write and maintain, that they basically never repay
their dev cost.

All that is to say, I like snapshot tests. Snapshot tests can be created and
updated automatically, basically making their dev cost close to 0. They are so
cheap that they don't actually have to catch a lot of bugs for them to be a
net positive.

------
kevsim
Anyone have experience with successfully doing UI tests (especially screenshot
tests) in a very young software product? We try to keep our "plumbing" code
and backend code well tested, but the UI is changing all of the time and I'm
quite worried it'd be a nightmare to maintain UI tests at this stage.

~~~
hugs
(Selenium project founder here.) Test maintenance cost is a sane thing to be
worried about! Early on, there can be lots of code thrashing and UI changes.
Better to keep the UI tests to a minimum at that stage - have them act more
like smoke alarms. You only need a few alarms in the right places to tell you
that _something_ is wrong. Other general advice is that in a young product,
it's often more important to make sure you're building the right thing.
("Right" is subjective, but could be defined as "has users" or "makes money".)
Having a well-tested product that has no users and/or makes no money is an
even bigger nightmare than UI test maintenance. Once you know you've built the
"right" thing (hopefully sooner than later), you can afford to invest more in
all kinds of test automation (not just UI tests).

------
brianmcd
> For example, a huge amount of UI problems we catch belong to the Screenshot
> testing stage. Fewer problems belong to the Linters / Unit / Render tests.
> That doesn’t make those tests meaningless. On the contrary, it could mean
> that we work in these areas well enough to prevent tons of issues.

They don't say how they measure "issues" in their chart, but if it's through
CI failures, it seems likely that "Linters / Unit / Render tests" catch fewer
issues because developers run them locally before pushing code.

------
tunesmith
I'm curious what kind of practices people have put in place to minimize the
need for UI tests. I personally believe a UI test should only be written as a
last resort, if it's impossible to test through other, more backend-ish
means. When you combine concurrent test runs with shared mutable state, you
get flaky, unreliable tests, and that's almost impossible to avoid with UI
tests. So it seems the better solution is to restructure your code so that
unit tests can get as close to the UI layer as possible. But how far can you
actually push this? I'm currently sad because my team is in the middle of a
push to integrate Cypress into their React frontend, and I'm dreading a
future where we have a billion QA-written UI tests that are redundant with
our unit test coverage, and flaky besides.

~~~
bickeringyokel
A billion tests most certainly won't be useful. Good tests target very
specific things that are critical to your product. In my experience UI tests
are only slightly more flaky than other testing methods as long as they are
written well.

------
valuearb
As a mobile developer I read this eagerly looking for any useful ideas to
bring UI testing into my client projects. I’m digesting still but so far, no.

I’m intrigued by screenshot testing but can it work across all screen sizes,
from iPhone 6 to XR to iPad, and work in all orientations?

Otherwise, my problems are that our designer made tap targets too small in a
couple of places without anyone noticing. Or that something isn't perfectly
aligned, or a line separator isn't long enough, etc. These always seem to
require a human to use the app and say: I don't like this, or this could be
better.

~~~
benologist
I have been experimenting with this and the possibilities are very exciting.

[https://userdashboard.github.io/dashboard-sitemap](https://userdashboard.github.io/dashboard-sitemap)

These screenshots are generated from the test suite for all my UI tests. I
have Puppeteer navigate a series of steps and save each screenshot, resizing
to mobile resolutions. This is done in Chrome; Puppeteer also supports
Firefox. The screenshots are then integrated with my documentation, both as
sitemaps like the one above and to demonstrate usage:

[https://userdashboard.github.io/administrator/reset-codes](https://userdashboard.github.io/administrator/reset-codes)

They are generated from a simple sequence of steps added to my tests, which
can run with or without saving/generating the screenshots:

    req1.screenshots = [
        { hover: '#administrator-menu-container' },
        { click: '/administrator' },
        { click: '/administrator/accounts' },
        { click: `/administrator/account?accountid=${user.account.accountid}` },
        { click: `/administrator/account-reset-codes?accountid=${user.account.accountid}` }
    ]

The code for parsing those steps into Puppeteer actions is available here:
[https://github.com/userdashboard/dashboard/blob/master/test-...](https://github.com/userdashboard/dashboard/blob/master/test-helper-puppeteer.js)
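The core replay loop could look something like the sketch below. This is my reading of the approach, not the linked helper's actual code; the `page` argument is anything with `hover(selector)`, `click(selector)`, and `screenshot(options)`, i.e. a real Puppeteer `Page` or a stub.

```javascript
// Replay a list of step objects like { hover: selector } or { click: path },
// optionally saving a screenshot after each step.
async function replaySteps(page, steps, screenshotDir) {
  for (const [i, step] of steps.entries()) {
    if (step.hover) {
      await page.hover(step.hover);
    } else if (step.click) {
      // Click the link whose href matches the step's path.
      await page.click(`a[href="${step.click}"]`);
    }
    if (screenshotDir) {
      await page.screenshot({ path: `${screenshotDir}/step-${i}.png` });
    }
  }
}
```

Because the steps are plain data, the same list drives both the functional test run (no `screenshotDir`) and the screenshot-generating run.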

In addition to being user-friendly for documentation, there is tremendous QA
opportunity in being able to observe every page at every resolution. It
forced better code everywhere too: you cannot forget to link to a page, have
some part of the navigated route broken, or have anything going wrong
anywhere.

I am finalizing localization so in the next week or so the documentation will
also be able to switch between any combination of language and device.

Note: some of my documentation/screenshots haven't been generated yet.

~~~
valuearb
I'm a native mobile app developer; I need something that works with Swift on iOS.

------
mister_hn
I'd be more interested in a post about how they test the IntelliJ UI.

------
FlashBlaze
The _Screenshots Diff_ section was amazing! Learned a ton from this section
alone as I wasn't even aware you could do something like this for UI testing.

------
rawoke083600
Good post.. wish they would start with what TeamCity is...

------
iFire
Does anyone know how to do UI screenshot testing for games and for C++ GUIs?

------
Kenji
Meanwhile, the TeamCity classic UI has frequent freezes that lock up the
entire browser tab. They claimed they fixed it, but it's still happening in
the latest release. Nice blog post, but fix your shit please.

