
The limit is 600 KB; I'll have to publicly document that. Only 3% of GitHub's one million CSV files go over that limit.

As you might imagine, shoving that many rows into the DOM kills any browser.




I'm actually kinda surprised that the number is that high.

Part of me wants to make a StumbleUpon-style tool that grabs a random CSV from GitHub. I feel like there's probably some really interesting data waiting to be explored.

-----


Please do it!

-----


I'm tempted to say you should leave an option to enable it and report it to the browser teams as a performance challenge – it's the kind of thing that should work better than it does.

I noticed that more than half the time in Chrome isn't HTML parsing but rather layout recalculation - I suspect you'd be able to avoid much of that if you could set some sort of min/max width on the columns and give the containing table overflow-x: scroll or hidden. Perhaps set a fixed width when the server sees the file is large, and/or by measuring the rows while rendering the template?
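
For illustration only, a rough TypeScript sketch of that idea – the .csv-container/.csv-table selectors are made up and this isn't anyone's actual code:

    // Hypothetical selectors; the point is to constrain layout up front
    // so the browser doesn't reflow the whole page as rows stream in.
    const container = document.querySelector<HTMLElement>('.csv-container');
    const table = document.querySelector<HTMLTableElement>('.csv-table');

    if (container && table) {
      container.style.overflowX = 'scroll'; // or 'hidden'
      for (const th of table.querySelectorAll<HTMLTableCellElement>('th')) {
        th.style.minWidth = '5em'; // arbitrary example value
      }
    }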

-----


I believe you would have to set hard widths to avoid expensive reflows (layout recalculation). Min/max widths are much more expensive, particularly in a very tall table.

-----


That's likely - I wasn't sure whether anyone has implemented the optimization of stopping as soon as you've maxed out a fixed-width table, but you could certainly do something similar by hand when rendering the page and set fixed widths based on the ratio of column sizes.
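
As a sketch of what that could look like (hypothetical, assuming the CSV is already parsed into rows of strings):

    // Sample the first rows, find the longest cell per column, and turn
    // those lengths into percentage widths. With table-layout: fixed the
    // browser never has to revisit column widths as more rows arrive.
    function columnWidths(rows: string[][], sampleSize = 100): number[] {
      const maxLen: number[] = [];
      for (const row of rows.slice(0, sampleSize)) {
        row.forEach((cell, i) => {
          maxLen[i] = Math.max(maxLen[i] ?? 1, cell.length);
        });
      }
      const total = maxLen.reduce((a, b) => a + b, 0);
      return maxLen.map((len) => (len / total) * 100);
    }

    function applyWidths(table: HTMLTableElement, widths: number[]): void {
      const colgroup = document.createElement('colgroup');
      for (const w of widths) {
        const col = document.createElement('col');
        col.style.width = `${w}%`;
        colgroup.appendChild(col);
      }
      table.prepend(colgroup);
      table.style.tableLayout = 'fixed';
    }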

-----


In similarly large tables, I've found that if you shove about 1000 rows into the DOM at a time inside a setTimeout(0) loop, most browsers behave quite OK.
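
Roughly like this (a sketch, not anyone's production code; the rows and tbody are assumed to exist already):

    // Append ~1000 rows per chunk and yield back to the browser with
    // setTimeout(0) between chunks so the page stays responsive.
    function appendInChunks(
      tbody: HTMLTableSectionElement,
      rows: string[][],
      chunkSize = 1000,
    ): void {
      let index = 0;

      function appendChunk(): void {
        const fragment = document.createDocumentFragment();
        for (const row of rows.slice(index, index + chunkSize)) {
          const tr = document.createElement('tr');
          for (const cell of row) {
            const td = document.createElement('td');
            td.textContent = cell;
            tr.appendChild(td);
          }
          fragment.appendChild(tr);
        }
        tbody.appendChild(fragment);
        index += chunkSize;
        if (index < rows.length) {
          setTimeout(appendChunk, 0); // let the browser paint and handle input
        }
      }

      appendChunk();
    }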

-----



