
TDD in CSS – fad or reality? - sirkarthik
Do you test-drive your CSS? What has your experience with it been? Or was test-driving CSS an experiment of the past?
======
sirkarthik
My primary reason against tests for CSS is that CSS is not about business
logic. The other is the way CSS works: the last rule applied overrides the
previous one. Unit-testing that? The order in which CSS styling is applied
depends on the order of inclusion, correct? Now write another test for that?
All these thoughts made me laugh at the idea of testing CSS back then. All
that said, I was still curious to hear about the positive and negative
experiences of those who tried it.
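The "last rule wins" point can be sketched in a few lines. This is a toy illustration only, not a real CSS engine: it ignores specificity, `!important`, and inheritance, and just walks rules in source order.

```javascript
// Toy sketch of cascade order: among rules of equal specificity,
// the one that appears later in source order wins.
function resolve(rules, property) {
  let winner;
  for (const rule of rules) {
    if (property in rule.declarations) {
      winner = rule.declarations[property]; // later rules override earlier ones
    }
  }
  return winner;
}

const stylesheet = [
  { selector: '.btn', declarations: { color: 'red' } },
  { selector: '.btn', declarations: { color: 'blue' } },
];

console.log(resolve(stylesheet, 'color')); // 'blue' -- the later rule wins
```

A test of such a function would have to pin down the inclusion order itself, which is exactly the brittleness the comment is pointing at.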

------
borplk
How would you even "test" CSS?

Look for particular expectations being met in the browser?

As with a lot of the general TDD stuff, I'd say it's a bad idea.

It gives you nothing extra of value.

Instead you end up multiplying the number of times you declare/define/write
something.

I consider this kind of stuff feel-good busy work. You want to brag about your
CSS test coverage (?!), but you're just wasting time and picking up
complexity.

Tests at this level of granularity are just not worth it.

What's next, TDD your TDD?

~~~
mbrock
Here are some kinds of claims I would like to be able to verify on my
stylesheets, whether through testing, fuzzing, proving, or magically intuiting
via some proprietary black box APIs for $99/month:

- The number of columns in the article is 1 on mobile.

- On desktop, the sidebar is to the left of the article.

- There is no horizontal scrolling.

- All paragraphs have a line height equal to the baseline.

- All elements are vertically aligned on the baseline grid (the "vertical
rhythm" property).

- All sidebar icons are 1rem squares.

I want to take high level desired properties, specify them as assertions, and
then use that to continuously check that the assumptions still hold,
especially as I change the code, or switch devices, etc.

We seem to _need_ this for CSS styling, because as soon as your website or app
reaches a threshold of complexity -- a very low threshold, in my experience --
it starts to happen very easily that a seemingly innocent new stylesheet rule
has some detrimental "action at a distance"... it's right there in the name,
"cascading".

You might be thinking that you can get around this problem through proper
abstraction, and I'm interested to hear about that too -- anything that can
structurally improve stylesheet quality is urgently necessary. I note that CSS
variables are still experimental technology, and that from what I've seen,
various CSS preprocessing technologies mostly increase the need for
verification (maybe because they tend to encourage even more complex forms of
cascading).

In practice, for most projects, I would end up conceding that we can't test
our CSS beyond things like screenshot diffing (which is a pretty nice
technique), just because it would probably require some "original research"
into specifying stylesheets.

Maybe Google or Facebook could sponsor it. They're both producing frontend
development resources because they want web developers to succeed in making
cool stuff that they can link to (and surveil).

Is there an academic clique somewhere with an interest in formal methods for
stylesheets? Or a startup?

It seems interdisciplinary enough to be cool, right?

------
claudiulodro
IMO CSS doesn't really need unit tests because you are QA-ing it as you
develop it. You can't really write CSS without looking at the page and making
sure it looks good. If you're already looking at the page to make sure it
looks good, what's the point of writing tests to do the same thing?

------
seanwilson
I've found testing tools that take screenshots very useful. Once you're happy
with the site, you take screenshots at various screen widths and after you
make changes you can view visual diffs of the new screenshots to see if you
broke anything.
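The core of that diffing idea is simple to sketch: compare two screenshots pixel by pixel and flag the page if too many pixels changed. The images here are flat arrays of grayscale values for illustration; real tools (Wraith, Percy, pixelmatch) work on full RGBA bitmaps and do smarter anti-aliasing handling.

```javascript
// Sketch of screenshot diffing: fraction of pixels that changed.
function diffFraction(before, after) {
  if (before.length !== after.length) return 1; // size change counts as a full diff
  let changed = 0;
  for (let i = 0; i < before.length; i++) {
    if (before[i] !== after[i]) changed++;
  }
  return changed / before.length;
}

const baseline = [0, 0, 255, 255];
const current  = [0, 0, 255, 128]; // one of four pixels changed

const THRESHOLD = 0.05; // fail the check if more than 5% of pixels differ
console.log(diffFraction(baseline, current) > THRESHOLD); // true -> would fail
```

A per-breakpoint baseline set (one screenshot per screen width) makes the same check catch responsive regressions.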

~~~
erichurkman
We've had good luck with Percy. It took a few lines of code to have our suite
drop HTML plus static assets to them, and they take care of the screenshots.
The snapshots are captured via our normal test suite, so we didn't have to
write any new tests.

It shows up as a check on every merge request we have from develop->master,
and any visual differences get a manual check. It works great, even on large
refactors!

~~~
seanwilson
Do you have to pay to use that? If so, does that restrict you? It sounds
like something you'd want to self-host. I've used Wraith before, which works
OK, but its bundled version of PhantomJS is pretty old.

~~~
erichurkman
Yeah, we pay. And no, it doesn't restrict us. The benefit is that they take
care of rendering: our test suite only has to send DOM (HTML) snapshots with
static assets. We, in turn, get a consistent rendering environment that can
rapidly render and diff hundreds of screenshots in a minute or two. Crucially,
it doesn't add much time to our test suite, since our side is lightweight.

------
roryisok
Never even heard of anyone using TDD for CSS. How does that work?

~~~
another-dave
See frameworks like:

* Galen: [http://slides.com/netzartist-de/galen-tdd-css](http://slides.com/netzartist-de/galen-tdd-css)

* Quixote: [https://github.com/jamesshore/quixote](https://github.com/jamesshore/quixote)

It does sound interesting to me (well, testing with good coverage of
appropriate elements, rather than 'TDD' per se), but I've never had a chance
to get it running on a project. It feels like the type of thing you'd need to
do from the beginning, if at all.

Could be useful especially around 'baseline' things on the site that you're
not expecting to change, e.g. font-sizes, block-level padding, grid
alignment, brand colours, etc.

EDIT:

Just to add: I think the basic premise of this type of visual testing is
that you're testing imperatively (i.e. "when I'm on the homepage, the
navigation should be flush to the top of the screen and should have a 100px
gap between it and the primary header") vs using something like Wraith, where
you're testing declaratively (here is a screenshot of the homepage; the actual
homepage should look exactly like this, ±5% variance).

It's roughly equivalent to integration tests (how does my nav 'integrate with'
the window, the header, etc.) vs functional end-to-end tests (does the
homepage as a whole work). Depending on your point of view, that can either
give great savings in debugging (as you have more specificity about what
change caused your funky layout) or be a complete waste of effort. The answer
is probably somewhere in between :)

I would be interested to hear if anyone is using a CSS test framework with
Cucumber/BDD-style tests. As the Galen slides above mention, your designer is
probably subconsciously checking things on the page, without being exact about
it. If we could distill that checking into precise BDD tests and let our
test-framework building blocks grow over time, could you build up a suite of
reusable test statements that designers could write themselves?

