
It's also funny to see these techniques being reinvented and called optimisations, when perhaps 30 years ago they would not even be considered optimisations but the only possible way to do it, because anything else would be ridiculously slow to the point of being unworkable.

For example, the find and replace package uses markers to highlight search results, and if you ran a search for the letter e in a large file, the excessive time spent updating thousands of markers on every keystroke was intolerable.

This suggests they also have a huge constant factor, in addition to the algorithmic problem: "thousands" shouldn't be something a modern machine chokes on if updating a position is as simple as adding 1 to its offset after a character is inserted. On a CPU that executes on the order of one billion instructions per second (a reasonable order-of-magnitude estimate, and still several times slower than real CPUs today), adding 1 to 10,000 variables should take well under a millisecond, nothing a user could perceive.
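To make that concrete, here is a minimal sketch (hypothetical, not the editor's actual code) of the naive update, assuming markers are stored as absolute character offsets in a flat array:

    // Shift every marker at or after the insertion point.
    // One compare and at most one add per marker.
    function shiftMarkers(offsets: number[], insertPos: number, insertLen: number): void {
      for (let i = 0; i < offsets.length; i++) {
        if (offsets[i] >= insertPos) {
          offsets[i] += insertLen;
        }
      }
    }

    // 10,000 markers means ~10,000 compares/adds: far below a millisecond.
    const offsets = Array.from({ length: 10_000 }, (_, i) => i * 5);
    const t0 = performance.now();
    shiftMarkers(offsets, 0, 1); // simulate typing one character at the start
    console.log(`${(performance.now() - t0).toFixed(3)} ms`);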

Going from 800ms to 50ms is a great improvement, but I think that's still several orders of magnitude slower than what the machine can really do.

> This suggests they also have a huge constant factor, in addition to the algorithmic problem: "thousands" shouldn't be something a modern machine chokes on if updating a position is as simple as adding 1 to its offset after a character is inserted.

As I mentioned in the article, a big part of the cost was updating a separate interval storage structure as the markers were updated, since the absolute-positions approach left us no way to defer that work.
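For readers wondering what deferral buys you, here is a sketch of one common way to get it (an illustrative assumption, not the article's actual data structure): log edits as (position, delta) pairs in O(1) per keystroke, and resolve a marker's absolute offset lazily, only when it is actually read:

    interface Edit { pos: number; delta: number; }

    class DeferredMarkers {
      private edits: Edit[] = [];
      constructor(private offsets: number[]) {}

      // O(1) per keystroke: just remember the edit.
      recordInsert(pos: number, len: number): void {
        this.edits.push({ pos, delta: len });
      }

      // Pay the update cost only for markers actually queried,
      // e.g. the handful currently on screen.
      resolve(index: number): number {
        let off = this.offsets[index];
        for (const e of this.edits) {
          if (off >= e.pos) off += e.delta;
        }
        return off;
      }
    }

With something like this, the bulk update (and any downstream interval-structure maintenance) can be flushed in one batch per frame rather than once per keystroke; storing positions relative to the nodes of a balanced tree achieves a similar effect.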
