Hacker News

I wonder, and this is speaking from virtually no experience in the area, if it would be possible to use machine learning to infer all the myriad layout rules (etc.), instead of actually writing to spec



I'm fairly familiar with ML, and I'd say that's a definitive no.

Implementing a layout spec is exactly the kind of thing that is "easy for computers, hard for humans". ML is for things that are "hard for computers, easy for humans" (like telling dogs apart from cats, or transcribing speech).


My favorite example is differentiating between blueberry muffins and chihuahuas. :)


You could probably get something that works pretty well for most common cases but wouldn't match the spec's behavior in more complex edge cases.


I'd bet even that would only work with some or all of: (1) a LOT of training data, (2) a LOT of preprocessing, and (3) less famous architectures (perhaps recursive neural nets).

I say so because (among other reasons), in general, current popular ML architectures (like transformers) count processing hierarchical data among their weaknesses. For example, there's a theoretical paper proving that self-attention blocks (which are central to transformers) cannot solve arbitrarily nested negation (i.e., resolve not(not(not(true))) to false). In practice as well, we often see language models having trouble dealing with naturally occurring double negation in language, etc. But CSS/HTML is, I think, very hierarchical.
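To make the contrast concrete: resolving arbitrarily nested negation is a one-line recursion for an ordinary program, which is exactly the "easy for computers" side of the divide. A minimal sketch (names are hypothetical, not from any paper):

```python
def eval_negation(expr):
    """Evaluate a nested-negation expression.

    An expression is either a bool, or a tuple ("not", inner_expr).
    Plain recursion handles any nesting depth -- the operation that,
    per the theoretical result mentioned above, a fixed self-attention
    block cannot perform for arbitrary depth.
    """
    if isinstance(expr, bool):
        return expr
    op, inner = expr
    assert op == "not"
    return not eval_negation(inner)

# not(not(not(true))) resolves to False at any depth:
nested = ("not", ("not", ("not", True)))
print(eval_negation(nested))  # False
```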


if you want to completely give up any agency over actually being able to fix concrete rendering problems


There was a paper on HN recently about auto-inferring semantics of x86 instructions — although it used an SMT solver (I think) rather than ML.

https://cs.stanford.edu/people/eschkufz/docs/pldi_16.pdf
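As a toy illustration of the general idea (inferring an operation's semantics from observed input/output behavior), here is a brute-force sketch; the paper itself uses guided search with SMT-based verification, not this naive enumeration, and the candidate set here is hypothetical:

```python
import operator

# Hypothetical candidate semantics for a mystery two-operand instruction.
CANDIDATES = {
    "add": operator.add,
    "sub": operator.sub,
    "and": operator.and_,
    "or":  operator.or_,
    "xor": operator.xor,
}

def infer(io_pairs):
    """Return names of candidate ops consistent with all ((a, b), out) pairs."""
    return [name for name, fn in CANDIDATES.items()
            if all(fn(a, b) == out for (a, b), out in io_pairs)]

# Observed behavior of the mystery instruction:
examples = [((3, 5), 6), ((0xF, 0x1), 0xE), ((7, 7), 0)]
print(infer(examples))  # ['xor']
```

The real problem is much harder because x86 instructions touch flags, memory, and wide bit-vector state, which is why a solver-backed approach is needed rather than enumeration over a toy table.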


No



