Fast Software, the Best Software (craigmod.com)
470 points by Tomte on July 24, 2019 | 283 comments



The author's point about Google Maps being bloated and slow is 100% true. I use that app every day, and I recalled today that when I first started using Google Maps in 2009 on my good old Nokia E63, it was fast and so great. Then, over the last few years, the app's design keeps changing (at least on Android), it is hard to keep up with these design changes, and worst of all... every design change makes the app even slower. :(


In the past 18 months I've noticed Google Maps' navigation prompts being announced just slightly too late for comfort; e.g. I'm often nearly into an intersection at 30-40 mph when my phone tells me to turn onto the cross street. In that same 18-month timeframe I've moved to a new city, so I'm acutely aware of this lag.

I chalk this up to Google Maps' overall bloat and sluggishness on my 2-year-old phone.


Would you happen to be playing audio through Bluetooth? My car has a delay for non-call Bluetooth audio which causes that delay for me and gets really annoying. I will often take the phone off car Bluetooth just for that reason.


On my phone it doesn't even get the audio prompts out when I'm playing music from the phone through bluetooth.

The music just cuts out for a few seconds, but there's no navigation audio.

I've turned off navigation prompts for that reason, and just check the "next turn" thing at the top every now and then.


What's even more bizarre is that for me, when using Bluetooth audio I get part of the prompts, but other parts are missing. If I remember correctly, it leaves out important words like "left" and "right". It works fine as soon as I turn off Bluetooth.


I get the prompt starting, then the volume on the music is lowered.


> Would you happen to be playing audio through Bluetooth?

No, straight from the phone.

Last weekend I had a friend in the car using apple maps on a newish iPhone, and every prompt from his phone was around a second earlier than from my phone. At 35 mph, one second is 50 feet (15 meters). That's enough to turn a safe maneuver into an unsafe one.


Add to that the times when the only directional advice you do get is "Go north-east on <street name>". It is not very useful if you're biking or otherwise not able to see the screen.


Not a regular Google user, but you could try downloading the offline map for your city. Might improve responsiveness?

https://duckduckgo.com/?q=google+maps+download+offline+maps


No. In my experience, the first thing that Google Maps does is send your location to Google, run an auction for local advertisers, then download a half dozen images of local "featured" businesses. On my old phone, Maps was snappy enough before they added that antifeature, and things went downhill from there. When I retired that phone last year, it could take over a minute just to show a map (often devoid of street names).

Now, I actually don't have a mapping app on my phone. It takes a little planning ahead, but I'm happier to do without.


For me at least it doesn't. Even just typing into the search field lags sometimes, so that I have to stop a second to let the characters appear. Searching for a restaurant, clicking on it, and reading the information can take a whole 10 seconds and be so frustrating.

I have used Google Maps for many years already, but it just became so slow in the last year or so. Maybe they just don't support non-high-end phones anymore.


It's also slow on fast phones and feels like the UI thread is being blocked by something for several hundreds of milliseconds. I stopped using Google Maps because of this. So annoying.


Do you know a good alternative?


I don't have a good alternative for points of interest (restaurants, ...), but for maps and routing I use https://www.locusmap.eu (with offline Locus Maps, Android only). For car navigation Waze, ironically an app from google, is also nice and has full congestion avoidance.


Same experience on a few years old lower end samsung phone. Rubbish.


Might want to try a dedicated GPS. Not too expensive, and they have nice features like talking you into the correct lane.

Also, no lag (especially weird lags) and no interruptions from notifications, phone calls and similar.


Apple maps works offline while giving accurate timely route information.


They've been absolutely wrecking Google Maps's performance and UX on Android since around ~1.5-2 years ago.

Switch back to version 9.42.3, or possibly a couple releases later if you need the "Favorites" and "Want to go" features in addition to just "Starred". It's pretty snappy and a breath of fresh air.


Related: they released an update for GMail a while ago that bricked the app and made it unable to open certain inboxes (I had 3 accounts on my phone; one would trigger this bug as soon as I switched to it, while the others did not).

The only fix was reverting to the stock version, and it was AMAZING. So smooth. I had completely forgotten how fast it was while they boiled this frog's water over the years with their updates.


Tried, after force stopping and disabling (which reverts it to factory version, 9.72.2) on my Verizon Galaxy S7. Says "App not installed." when trying to install APK after running through the install process. =(


You'll need root for this. You should be able to use one of those system app removal tools, or Titanium Backup if you have it (you might be able to convert it to a user app first). You might need a reboot after you do that before you can install another version. At least that's what I did on my end.


I recently had to use my 8 year old Nexus S phone for a few weeks, and Google Maps wouldn't even load. Chrome and Firefox could barely load most web pages, though the older Android Browser could limp along on enough pages to be useful (HN was the only site that actually felt usable). The phone's got half a gig of RAM, and didn't seem terrible when I stopped using it full time four years ago, but four more years of software updates have completely destroyed the usability of almost every app, while barely improving functionality at all.


And it doesn't stop at phones. I kid you not, the main benefit of upgrading my early 2009 Mac Pro GPU was having a fully functioning Google Maps experience again.


Interesting. On older Tesla cars, people complained about the maps and the older MCU1 CPU.

The Tesla maps are based on Google Maps.

I heard there were two ways to speed up maps:

1) turn off traffic, which was significantly slowing things down

2) switch from vector maps to satellite maps. Apparently the map tiles of the vector maps require costly rendering for display, but the satellite maps just required a blit.

#2 may also explain #1, I'll bet traffic requires rendering too.


Install "Google Maps Go". It is another product by Google without all the bloat and is much much faster.


As a counterpoint, Google Maps on my iPhone 3G in 2008 was not reliable at all for directions. Sure it was better than printing out MapQuest on sheets of paper, but Google Maps has gotten significantly more reliable over time.


It is still pretty fast for me. How fast is your connection? Latency?

I agree on the annoyance with "design changes" sometimes. Last weekend, having the map always oriented to the North was driving me nuts. They used to have a compass with an N to toggle it. While I was driving, it took me 15 minutes to find the option under all the advanced options.

But that said, I think Google is in general pretty good at trying to reduce the "featuritis" in the app and simplify the UI.


The maps load pretty quickly for me (satellite data on the desktop is another story, for some reason). My problem is with the "featuritis", which I humbly disagree is getting better. I spend so much time fighting with the UI to try to just see a map:

-The app always loads with an "Explore {your_location}" card that takes half the screen. I am usually at my suburban house, near which there is nothing worth exploring. Even when I'm somewhere else, my primary use of the app is to get directions to an _already determined_ destination. The card is useless 95% of the time, but obscures half of the map, which is what I want to see and is so important as to be the name of the application.

-If I so much as brush the screen with more than one finger, it enters its 3D map mode. Like the "Explore" card, this seems to be screaming "look what we can do!", but it ignores what I want to do, which is navigate using the bird's-eye paradigm that has been used by every other map since the dawn of cartography.

-If I search for a place and tap a result, I get a full-screen card with information about that place, but no map. I just searched for "pizza" and tapped the first result and got a large picture of a pizza. I already knew what a pizza looks like, but I still don't know where the restaurant is. If I swipe the card down and then tap the map to try to get a better view of the restaurant's location, I lose the location indicator on the restaurant and the map re-centers on my current location.

I could go on, but I feel like an old man shouting into the void.


I am on the Google Fi network with ~35 Mbps of download speed on LTE. The overall speed of interactions within the app is slow. For example: when I click on Route Options to select "Avoid tolls", I always end up clicking on something else because the UI lag is almost always there on interactions. And new features such as "Start AR" never open for me; the app crashes on that feature.


Weird. I have 10 Mbps at the moment and I have a 3-year-old phone. I tried exactly what you said with Avoid Tolls. Every interaction has absolutely no delay for me.

I do not have "Start AR" at the moment so I cannot test that.


I wonder what "pretty fast" means for you. I have a 1gbit fiber connection and a beastly computer, and everything lags in gmaps, in both firefox and chrome (on a machine that can run Witcher 3 at 120fps in ultra)


I remember after the first big design overhaul it got significantly slower. I think we all just got used to the slowness since then. And some of us got faster computers and stopped noticing as much.


> How fast is your connection?

200 Mbps


I like Google Maps. It's definitely not slow for me; the only thing that is slow is the network, the JavaScript is not slow at all. And I recently discovered their 3D feature, which really astonishes me.

Google Mail is quite fast for me as well... And I'm not even using some crazy fast computer, just old PC with Core i5.


Subjectively, others view Gmail as slow and you view it as fast. Let's make it objective.

On a dual-core CPU from 2014 with an SSD, running Linux, I opened first Firefox 70 and then Chrome 75.

In both cases I was logged into my google account. In both cases I opened and then closed gmail to ensure what could be cached was cached.

In each I then entered mail.google.com and measured the time between hitting enter and seeing what looked like a usable interface. Let's compare that to other local ways of accessing email.

Running mu find maildir:/Gmail/INBOX took 9 ms; this is easy to measure as it happens in the terminal.

Creating an emacsclient frame takes about 150 ms, added to the mu query which takes about 9. Logically there is some time required to render the view, but it's quite small and challenging to measure, so let's say approx. 200 ms, or 1/5th of a second.

Loading the old-school plain HTML interface to Gmail took approx. 1.5 seconds.

It took 10 seconds in either browser to load the "modern" JavaScript Gmail view.

For reference, this is about 7 times slower than the HTML view, or 50 times slower than mu4e.
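
If anyone wants to reproduce the local half of this, here is a minimal sketch in Python (assuming mu is installed and your maildir is at that path; the browser numbers above were just hand-timed, so only the mu side is scripted). It includes process startup, so it will read a bit higher than mu's own 9 ms:

  import subprocess
  import time

  start = time.perf_counter()
  # Same query as above; output is discarded so printing doesn't skew the timing.
  subprocess.run(
      ["mu", "find", "maildir:/Gmail/INBOX"],
      stdout=subprocess.DEVNULL,
      check=True,
  )
  print(f"mu query + process startup: {(time.perf_counter() - start) * 1000:.0f} ms")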

Reading

https://www.pubnub.com/blog/how-fast-is-realtime-human-perce...

> A response time of 100ms is perceived as instantaneous. Response times of 1 second or less are fast enough for users to feel they are interacting freely with the information. Response times greater than 10 seconds completely lose the user’s attention.

This is why the slow-loading Gmail interface has a loading screen: so it is perceptibly doing something rather than appearing to be frozen. After this, the interface is fast enough in the context of web apps, wherein the user expects a small delay between actions, but not as quick as a desktop app, which can trivially be perceptibly instant.

Interestingly, the old interface, despite using vastly less RAM and loading much, much quicker initially, appears to, say, load a message slightly slower than the slower-loading interface. So the optimum web interface would appear to be the slower-loading, more sophisticated one, if you don't mind keeping that browser tab pinned and always running.

A desktop app is still, on the whole, the superior option.

Is your gmail experience objectively different?


Sure. I'm running Core i5 7600 / 16 GB RAM / Samsung SSD 750 EVO / GeForce GTX 1060. It takes around 3-4 seconds to open for the first time and it takes around 2-3 seconds to open for the second time. For both Chrome and Firefox. I don't consider it slow. After it opened, every interaction is almost instant. I recorded video to check exact timestamps, so you can check it for yourself: https://youtu.be/F7LPAIe0jhw

I don't know how you could expect ms-level response times. I have a 70 ms ping to mail.google.com. It's just not possible for a web app to have a 9 ms response time, unless you're living at the data center. But as a web app, Gmail is amazing and pretty fast. I don't know why it's slow for other people; I'm not a Google engineer and I don't have performance insights. I'm just telling you that it's far from slow for me. Definitely not 10 seconds.


Your CPU is probably 4-6x faster than mine and not running in a mode optimized for battery life. You have twice as much RAM, and a faster SSD to boot.

It's slow for others because a huge range of machines exist and tons have slower cpu, storage, or especially network.

Worldwide, the majority are probably using worse machines, especially in countries poorer than ours.


> Google Mail is quite fast for me as well... And I'm not even using some crazy fast computer, just old PC with Core i5.

If you want to see fast mail, go load the basic html version of Gmail. I have it bookmarked because it loads as fast as you think it should.

Regular google mail loads so painfully slow in comparison, they needed a splash screen.


It does not load painfully slow for me. 2-4 seconds to load and that's about it. I'm considering it acceptable.


> just old PC with Core i5.

with possibly 8 gigs of RAM?

That's still WAY faster than most machines being used in the wild.

Sorry, your SV bubble is showing.


One benefit of performing at the speed of the human hand not explicitly discussed here is the ability for users to _compose_ their own features (and to love doing so!).

If I need to wait even a second for a page or app to respond per click, I am a lot less likely to _invent_ in your app. I won't stretch the boundaries, I won't do things that take more than 4 or 5 clicks. The app will _fit_ me less.

I will call up and ask for a feature that is a 6-7 click combination, if I am paying.

And if you build that feature, now you will likely have a slower product overall.

And also a slower product development org. Because more features mean more product managers. And more product managers mean more coordination meetings. And more coordination meetings mean... well you get the picture.

Love this post!


That’s a great point. UIs are similar to APIs in that respect; if you can provide features that are really simple and fast and composable, users can be incredibly creative about finding new ways to plug them together.

We too often worry about ticking off a checklist of use cases, and not enough about providing a flexible toolkit that can handle unanticipated use cases.


The creativity with which I see people "compose" Excel actions together 100% supports your theory.


I sometimes watch analysts at my firm use excel. It’s pure magic in the right hands, absolutely sublime.


In the words of one of my coworkers: Excel is the most abused tool known to man.


Gmail is now crazy slow for me. Used to be nice and snappy, I dread even clicking a button in there now. I even pay for it with GSuite.

> I sigh — actually sigh — whenever I have to open Photoshop

Yep, quad-core laptop with fast SSD and I dread opening it. I remember being an early adopter with an SSD and I just used to open PS to see how fast it opened. They took that feeling away.


Use the "basic HTML" version and make it your default view. On mobile it's even worth putting up with its more desktopy appearance to avoid the bloated crap that is modern Gmail.

The site takes so f*$%ing long to load you should have no problem clicking the "Slow? Try the Basic HTML version" link, maybe with 2-3 tries to figure out where it is (it's a small link). Then you should see an option to set it as your default.

Bonus: you can actually leave it open in a tab without eating three figure MB of memory and sometimes slowing down your machine doing god-knows-what in the background. Only downside is no notifications for new mail, but 1) that assumes you consider that a downside, and 2) that's what actual, native email clients are for.


Here's a direct link to go to the basic HTML version: http://mail.google.com/mail/h/


And the mobile-web version is even better imho : https://mail.google.com/mail/mu/mp/


Wrt your downside, I haven't tried this myself, but I can't imagine it'd be too difficult to write an extension to check for new emails every x minutes - if there's a button to refresh your mailbox you can just put a call to the button's click event[0] inside a setTimeout, and if there's no button just refresh the page (you will probably need logic to make sure you aren't actively interacting with it). But most likely a quick 30-minute project if you've made an extension before (and only an hour or so if not).

[0]: https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement...


Gmail takes a good five seconds to load the basic UI on my i7 laptop (and then some more time to load messages), but for some reason the Basic HTML link in the corner doesn't work. I've logged in a few times this week and clicked the link each time, and the browser confirms the click with the focus rectangle appearing, but it still loads the slow version of Gmail. This is in Firefox as well as Falkon (WebKit-based).

My company uses Outlook/Exchange and it also has a webapp; the website is pretty feature-rich and has a UI very similar to native Outlook, but it is blazing fast. The folks at Google are doing something very wrong to make it this slow.


It's not just you. Gmail has become so slow to load for me that I wonder if they've deliberately sabotaged it on Firefox. Amazing how it's 2019 and web apps somehow seem to be less usable than they were ten years ago. I miss fast desktop apps.


If a chunk of the time, effort and money that went into making web UI so nice and easy to work in could be invested into making true desktop apps equally as nice (no, electron doesn’t count) then I think we’d be in a great place.


Don't despair, it's slow as hell on Chrome too.


It looks like Google's own audit agrees with you: https://imgur.com/a/mROAbvh

Chrome 75.0.3770.142 with no extensions active.


But this is getting so ridiculous I find it hard to believe. Aren't people at Google supposed to be some of the smartest in the world? Don't they see what they did to their flagship product? Are they completely blind?


Is GMail their flagship product? I would say "Search" is their flagship product (and I don't know if it's getting slower.)

I've never used the commercial tools that their actual customers (advertisers) use; I wonder if they are more polished in terms of speed.


I thought 'Fastmail' was a silly name until I logged back into Gmail.

And then waited for it to load.


I just checked, and Fastmail loaded in around 3 seconds, while Gmail loaded in less than 2 seconds.


I work on Fastmail’s webmail.

Did you have a warm cache for either? I find that Fastmail’s cold load is generally a fair bit faster than Gmail’s cold load, and Fastmail’s warm load is consistently faster than Gmail’s warm load; but Fastmail’s cold load can be slower than Gmail’s warm load.

I’m looking forward to working on service worker code loading and offline support in Fastmail; I expect us to get warm loading to under one second on a typical desktop machine, regardless of location (because it will require no network calls).

Either way, once the code has loaded, Fastmail is consistently snappier than Gmail. And uses far less memory (in Firefox, 10–15MB instead of >150MB). Offline mode will improve snappiness even further even if you don’t opt into actual offline sync, by virtue of persisting its data record cache. But that’s not implemented yet.


After a few reloads, load times are comparable, something around 1-2 seconds for both. Fastmail feels a little bit faster, but the difference is tiny. Gmail eats 205 MB and Fastmail eats 60 MB according to Chrome Task Manager, so yeah, the difference in RAM consumption is huge. I have 16 GB of RAM, so it does not bother me that much, but I could imagine that it could cause some memory-constrained computers to slow down significantly.


I used the Fastmail site today after a long time to change some setting and noticed the UI had been updated. Really liking the new look, great job to you and your team!

I was interested in what front-end library you guys were using so I did a bit of searching and found a twitter post about OvertureJS. I know this is kind of random, but I'd love to understand how you guys came to the decision to build a framework from scratch instead of relying on Angular/React. What kind of technical limitations did they have that you guys wished to tackle with Overture?


This was long before I started; but the simplest answer is that there were no good options meeting our requirements when Overture was developed, which is back starting around 2010. (Overture has been the foundation for our webmail since late 2011 when that interface was released as beta.) Also do note that Overture does a lot more than just “Angular” or “React”, which I am not inclined here to expound; you’d need quite a few more libraries to equal it—a surprising number of which genuinely don’t exist in open source, so you’d still be writing a lot from scratch. Overture is definitely slanted towards being a web app development library (yet without being a heavyweight and strongly-opinionated-in-UI-matters beast like ExtJS).

When you care about performance, there’s still a lot to be said for controlling the full stack.


Thanks for taking the time to respond!


Thanks for your work on Fastmail. In my experience it definitely lives up to its name. I love how instantaneous everything feels.


I used to click the button for the HTML version in the bottom right corner... but now the button doesn't respond because the app is not fully loaded yet, which is funny.

I now have a bookmark for the non-JS version.


Too much javascript.

And they are probably optimizing for First meaningful paint (FMP) over Time to interactive (TTI).

Strange for a company that is putting this out: https://developers.google.com/web/fundamentals/performance/u...


I'm not running anything abnormal, have a 2018 Thinkpad with a decent connection (20Mbps). GMail loads like I'm on Win95, which is the first OS I've ever used.

It's 2019, the app is built by one of the leading tech companies, I have a modern device with good connection (not the absolute best I could get, but still OK). Why does it take like 5 seconds to load GMail? I would expect it to be ~1s, considering it's built by freaking Google. I'd expect some mega-optimizations going into this thing and to be amazed that they managed to squeeze everything out of a simple mail app. I'd like to imagine myself thinking I'd never be able to build THAT thing, yet it's not the case.

And it's not just the loading part - interactions even after first load feel sluggish. If you try the HTML version, it feels more responsive even just navigating the inbox. It could be psychological, but even in that case it would suggest that they need to improve the first load to make me feel like the app is fast in general.


I'm on Google Fiber, of all things, and it's slow as hell. I use basic HTML instead, much faster. As you note, even interactions are faster which, you know, was the whole point of AJAX back in the day but I guess by the time you mediate that through a bunch of bloated serialization formats and bullshit abstraction atop the DOM it's not fast anymore. Basic HTML's very snappy.

I (at least internally) swear any time I open Google Maps, either by accident or when I can't avoid using it. Same for Google office suite webapps, which I avoid as much as possible.


I had to observe this during a meeting where someone was giving a presentation:

oh yeah just remember to mention this thing, let me open my mail quick, ... Gmail loading... and the whole room fills with malaise.


Why aren’t they making it mega lightning fast? Not because they can’t, but because the market is willing to put up with the sluggishness.

I treat it just like how I treat the massive phone screen sizes and lack of stick shift cars in the US auto market. Consumers just don’t care about the same things I do.


https://mail.google.com/mail/mu/mp/

The mobile web version. Works really well, even on a desktop!


But again... besides the first load every now and then, it is pretty fast for me.

After I disabled some extensions, the Lighthouse metrics improved, but not dramatically.

Are you using a privacy extension that keeps clearing the browser cache? Did you try in a private window? Is it still annoying on the second load?


>besides the first load every now and then

I don't keep the tabs opened. I close Gmail when done reading or responding to email (I check my inbox every 2-3 hours.) I even close the browser very frequently during the day.


You're not the user story that they optimize for. Try leaving the Gmail tab open for a while, you'll quickly get to reap the speed benefits too.


What I found really eye-opening was looking at the about:memory output for gmail in Firefox. There's more memory used for the various chat-related iframes than for the actual gmail bits themselves. And I suspect the chat widget is not the thing people really want out of gmail.


Professional tools can get away with slow cold starts, since they’re likely to be left open all day. If the individual editing operations were slow, that would be another thing.


There is easily almost a second of lag when navigating the UI (selecting and deleting emails) using the keyboard on a MBP. :(


If you don't need any of the fancy stuff, the basic HTML view is snappy and loads incredibly fast for me.


Photoshop 3.0 was the sweet spot for me. A reasonable set of functionality coupled with good speed.


Check out photopea.com, a free web-based Photoshop clone.


I've been taking this philosophy to heart the past few years. Working in the corporate world, I find users' acceptance level to be in the tens of seconds/minutes/HOURS?! for queries and reports.

Projects I've worked on have focused on being as minimal and lightning fast as I can possibly make them. If I add a new feature/query, and it's not instant, I don't ship it until it is.

I don't know where or when this became the norm, but oftentimes the difference between hours and near instant is a matter of a simple query optimization. Waiting more than a few seconds for something to run is infuriating to me.


The other side of the coin:

* Why waste development time on something that is only executed once a month, or once a quarter, or even once a year? If you don't need to iterate on the reports, waiting a while for something that is almost never done is fine

* Often simply adding a new index can reduce a query from hours to seconds. But why bloat your DB with an index and potentially reduce insert performance for something that's almost never needed?


> But why bloat your DB with an index and potentially reduce insert performance for something that's almost never needed?

Oh, for crying out loud: will you please stop trotting out this tired nonsense.

This may have been a problem sometimes in the 80s and 90s. With any good modern RDBMS, say SQL Server 2017 (but you can pretty much pick any you like the look of[1]), and a half-competent developer and/or DBA, insert performance is almost never a problem when adding an index.

You generally need some extra storage space for the index, but so what?

I have seen, with surprising regularity over the years, tables with no indexes at all where adding a single clustered index (which is the table itself) can massively improve performance. In this scenario no additional storage is required.

[1] I pick SQL Server because I know Microsoft started seriously addressing insert performance, which had been a problem with SQL Server 6.5 and 7, in SQL Server 2000. Nowadays, if you're half-competent, there really is no issue worth talking about. If you're not it is, of course, still possible to do crazy things that don't perform well. All truly powerful programming languages, database systems, et al., will cheerfully hand you the rope you need to hang yourself.
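
For anyone who hasn't seen the effect firsthand, here's a toy sketch of the read-side win using SQLite rather than SQL Server (the principle carries over); the table, column names and row counts are made up for illustration:

  import sqlite3
  import time

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
  conn.executemany(
      "INSERT INTO orders VALUES (?, ?, ?)",
      ((i, i % 10_000, i * 0.01) for i in range(1_000_000)),
  )

  def timed_sum():
      start = time.perf_counter()
      conn.execute("SELECT SUM(total) FROM orders WHERE customer_id = 4242").fetchone()
      return (time.perf_counter() - start) * 1000

  print(f"no index:   {timed_sum():.1f} ms")   # full table scan
  conn.execute("CREATE INDEX ix_orders_customer ON orders (customer_id)")
  print(f"with index: {timed_sum():.1f} ms")   # index seek

The second query typically comes back orders of magnitude faster, at the cost of a modest amount of extra storage for the index.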


Claiming they have no impact besides storage is also disingenuous. The more indexes you have, the more noticeably slow inserts become, at least on some of the tables I've worked with. Of course, this was an inherent issue with the table design (a massive table with lots of columns and a lot of indexes), but sometimes you can only work with the legacy system you have.

In an ideal world with a sanely (not over the top or inadequately) normalised database I'd agree with you. But in my experience there have been times where adding an index for a rarely run query was absolutely the wrong decision.


And that's fine, but you obviously did some investigation: what irks me is people trotting out this line about insert performance with no investigation.

Of course, if you over-index a table, there can be some impact but in the general case that I've seen over the years that's seldom the problem.


Oh okay, I can agree with you there. But I think as a general rule, if something is done less than once a day and it's already a heavily indexed table you can probably skip the index unless it's a massive or potentially massive table. In that case it may be worth exploring alternatives.


It depends on how much data you have in the table you're adding the index to and how much traffic you're dealing with. I deal with this every day (SQL 2017) and I assure you that adding an index to a large table has noticeable overhead in a heavily used system.


For sure, but my beef here (as every time I hear this) is that it's taking an exception and turning it into a fear-mongering boogie-man story that paralyses people into inaction when they (often) have serious performance bottlenecks in their database that need to be addressed. So often with performance it's down to the basics not being followed, and often this comes down to the lack of an appropriate index.


Instead of carefully investigating which fields should have an index (all fields used for joins), the brute-force strategy is to index them all :P


Clustered indexes fix that perfectly.


How? You can only have one (on SQL Server anyway). Maybe I don't understand your point.


I haven't been close to the DB world for a few years, but didn't they start requiring tables to have a clustered index sometime around SQL 2012?


Not as far as I know. We all joined the company in late 2017 and found a couple of absolutely gigantic heaps in our main database as recently as the beginning of 2018 (running on SQL Server 2012 for several years at that time). Adding clustered indexes here had a transformational impact on performance.


The third side of the coin:

* How many processes aren't executed more often because they are inconvenient? How much less efficient does this make the org?

* How much is index fear driven by 20th century hardware cost?

Anecdata:

In a prior job, I worked at a nonprofit that partially shares a CRM codebase with another ~30 peers. I cut query runtime 10x with a few indexes. I cut query write-time similarly by writing a few denormalized SQL views. The cost on CRUD transactions was marginal.

These queries drove mailings, marketing efforts, financial reporting, PowerBI dashboards, etc. Our peer organizations that used the out-of-the-box query tool favored SSRS reports for the same functionality, which meant that reporting was less flexible and more expensive to maintain. I can't prove causality but I believe query ease-of-use made a big difference.


Likewise, I recently had a "discussion" with a colleague and their contention was that they had too many writes for an SSD to be reasonable (they don't)... However, still complaining about performance in their app.

Every time you want 10x the users, you need to make compromises in design to reach that goal... 100 users is easy, 1000-10000 you can hardware your way out. More than that, your design better be decent.


So many of the performance problems I see with database writes come from developers not understanding implicit transactions; they think transactions equate to more work and don't realize how much processing and disk writing can be saved by not having to update indexes on every insert. I've lost count of the number of times I've seen transactions reinvented with staging tables, too. I suspect this contributes to a lot of the fear about indexes creating performance problems.

IME if you've got less than 100GB of data (maybe more on SSD) then correctly using indexes, joins and transactions will solve 90+% of performance problems and most things will be quite snappy without much tuning.
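
To make the implicit-transaction point concrete, a rough sketch with SQLite as a stand-in (table name and row count are invented; the gap is far bigger on a server doing a durable commit per statement):

  import os
  import sqlite3
  import tempfile
  import time

  fd, path = tempfile.mkstemp(suffix=".db")
  os.close(fd)
  conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode
  conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")
  rows = [(i, "x" * 100) for i in range(1_000)]

  start = time.perf_counter()
  for row in rows:  # each INSERT is its own implicit transaction (own flush to disk)
      conn.execute("INSERT INTO events VALUES (?, ?)", row)
  print(f"one commit per row: {time.perf_counter() - start:.2f} s")

  start = time.perf_counter()
  conn.execute("BEGIN")  # one explicit transaction for the whole batch
  conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
  conn.execute("COMMIT")
  print(f"single transaction: {time.perf_counter() - start:.2f} s")

  conn.close()
  os.remove(path)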


On the other other side:

If a process takes a day to run, it won't be run daily. Period. It'll be run weekly or monthly or quarterly. But there may be value in running it more frequently, even continuously. But the present time cost makes that undesirable, and it's not perceived as adding value. Workarounds and alternatives are found, maybe. Or business units act on half-baked, likely out-of-date, information.


Often, things are run infrequently because they are slow and inconvenient. Improving performance gives the user more freedom to design their workflow.


Today my teammate and I debugged a service that was querying a database. Many queries were failing, and clients had started to notice, because the query would take 45 seconds and the API gateway would shut off the connection after 30 seconds. We did a tiny bit of debugging, added a couple of indices, and now the query takes <1 second. It was 2 hours of work, tops.

We couldn’t understand why on earth the team who made this database hadn’t bothered to make any considerations for read queries, but I guess worrying about insert speed is more important than basic usability.


It's dreadfully common to not understand database performance. Junior engineers I can understand, they're given a task and hack it together and don't understand why it's slow, but at least it works. But I've seen some pretty shitty work from "senior" engineers too. They know how to put a product together but they shy away from the database besides creating the table itself.


It's so pretentious to assume that your "developer time" is so much more valuable than the time of your users. It's plain selfish.

As you say, it's often as simple as adding an index, which has negligible downsides in 2019. It's a matter of caring about your craft and taking 5 extra minutes to do something right.


It wouldn't be a waste of time if it was just done right the first time. Generally these are queries running much more frequently than once a year, because that's how I get involved with them.

Sure, there are always outlier consequences, but more often than not, these messes are the result of non-technical people diving into the world of databases through Excel and Access.


"done right" is a nice phrase to toss around, but unless you're also looking at companion requirements then it's completely subjective.

if it works but slowly, then it was almost certainly done correctly by my book. you've implied as such - otherwise, you'd be making it done correctly first and foremost.

"done fast" more often than not involves non-measured details from parties often hovering outside the orbit of the customer, and you can bet trying to hit both targets leads to delays in shipping v1, which appears to work.

i praise those who ship software that works correctly, understanding that requirements, deadlines, and project timelines are completely lost when the source code is read on its own.


I don't see how any of that makes sense in the context of database queries. Slow running queries are done correctly? Google has it all wrong then. I can't believe we're even arguing about the merits of low latency. If I can return the same data in 1/1000th of the time, what exactly is the issue?


you said you work in a corporate context, right?

let's take an example that's not too far-fetched: legal comes and says that every month, you need to generate a report of some sort to comply with some regulation that corporations over a certain size must comply with.

you sit down and run v1 of the software which works but takes 24 hours to run. is your first instinct to get infuriated, as you said, even though from a requirements/company perspective, this is completely "done right"?

"the issue" is that changing software involves risks, which may be acceptable to you, but may not be acceptable to e.g. legal. they couldn't care less if it took 29 days to run or 29 ms. what they require - again, this is the requirement - is a monthly report generated, correctly.

and yeah, 99 times out of 100 you change the SQL correctly and it runs in 1/1000 of the time the first time. then for whatever reason it messes up one month and legal asks "what the F was this guy doing mucking around with this software which worked 'right'"?


I'll give you some real corporate context. A scheduled task, pulling from a 65 GB table with hundreds of millions of rows, and no indexes. It runs for hours and completes with accurate data. During that time, it also saturated a 10 Gbit interface (seriously) and consumed half of the IOPS on one of our NetApp controllers. Now multiply that by 2x, 10x, 1000x for all of the other junk running in the wild. Slow, accurate, and impacting the rest of the organization.


A competent and responsible programmer should know how good, on a scale from cheaply done prototype to provably optimal, the 24-hour report is. In the first case, rewriting for better performance is part of the first implementation, not a risky change.


i.e., premature optimization is the root of all evil.

The example you give is sooo right. And understanding the many levels of compromise between code speed, quality, legal implications, customer needs, management needs, cost, deadline, maintainability, technological choices made elsewhere, etc. is part of the job.

(Personally, I have some priorities: meeting the deadline comes first (descoping included), management at the customer side comes next, then end users, and, in the end, code speed.)


> i.e; premature optimization is root of all evil

I'm gonna repost a chart I made previously[0]:

  Spectrum of performance:
  LO |---*-------*--------*------------*-------| HI
         ^       ^        ^            ^
         |       |        |            |_root of all evil if premature
         |       |        |_you should be here
         |       |_you can be here if you don't do stupid things
         |_you are here
Point being, people tend to invoke this cliche way too early. It's true that mucking with working software is a potentially risky thing, and in corporate context may require approval, but it's also kind of the job you're hired to do as a software engineer, and it's especially important if that extra efficiency buys company value.

--

[0] - https://news.ycombinator.com/item?id=20389856


> premature optimization is root of all evil

Aka the first refuge of the intellectually lazy.


When you add time constraints, budget constraints, human constraints, believe me, declining optimization effort is not an act of laziness

(and the one who answers you has spent thousands of hours optimizing assembly routines for 3D engines, optimizing SQL queries to get the max out of some server, optimizing network traffic to optimize parallel computations,...)


I disagree. It's sure not a cliché. I've often seen programmers gold-plating their solutions. As a programmer myself I understand that very well. But most of the time, a half-baked solution will unlock many things that are more important than the code. With some prototype-level code, you can already test ideas, show things to a customer, start integrating with others, etc. And afterwards, you have the opportunity to decide if optimizing for speed/space is a worthy trade-off, or if optimizing at all is important.

Now, this won't work for every kind of industry. Right now I'm in the business/government stuff. There, prototyping is much more important than speed of code. When I was in the gaming industry, a lack of speed was most of the time a technical debt. But even then, having my code working was much more important to the overall team effort than my code being fast.

So instead of your chart, I prefer : 1/ Code is working, 2/ Code is correct 3/ check for other priorities 4/ optimize as needed.

As for what you're expected to do and company value, my experience is that not so many people understand the link between actual code and company value (esp. in the top management where I sit regularly). You'd be surprised to see how much a deadline is more important than a fully working/optimized program ('cos, for example, the deadline is a trigger for an enormous change in the organization you work for, although we know the lack of speed in the code will have a negative impact on dozens of end users).

So, well, it depends :-) but still, a word of caution sounds right to me :-)


Yeah when someone said that to me as an excuse for an indexless query on my team, it took a lot to remain calm and help them learn. I'd rather change it to: "excessive premature optimisation is the root of all evil".

If you're making a query that lots of users will run regularly, that needs an index. Period. Lots of the time there'll already be one, but that doesn't excuse you for not confirming and adding it if it's not there.


You'd probably be happier with the full quote:

"Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%." ~Donald Knuth (1974)


Yeah that makes a lot more sense. Although I think there's something to be said for optimisation habits when such habits don't add work or complexity.


There is. On the diagram I posted above, this is how you reach the "you can be here if you don't do stupid things" point. It's all simple things that you can do better by default by learning a bit about the language, runtime and libraries you use, and by caring at least a little about not being wasteful. They have little to no impact on readability, and arguably yield simpler code at times.


> it's especially important if that extra efficiency buys company value.

If you've figured out that this is the case, the optimization is no longer premature.


I can't recall the last time I heard someone say "premature optimization is root of all evil" as a sanity check to someone who was overdoing things. In fact it may have been more than 20 years ago.

For one, there are usually better arguments around code comprehension.

Pretty much the only time I hear this statement is as a response to criticism. That should raise some questions about the motivations of the speaker. Quite often it comes off as a dodge, and why and what are they dodging? Additionally, the whole quote is often at odds with the point the speaker is trying to make. It's not small inefficiencies, it's factors of 5, or 10, or an extra 6 months before we run out of headroom on something.


The issue is that it takes time and effort by good people to get there. No one prevents you from writing fast queries at BigCo. Mostly they end up being slow because the person writing them doesn't know any better, and there is no motivation for BigCo. to push back and have the developer spend time learning optimizations.

Nothing is free. Developers with more skills tend to cost more. BigCo. probably has some, but they are probably tasked with something else that BigCo. finds of greater value.


There is also your own gained knowledge or a fresh outlook if you look at the problem again but as you said these things take time.


I think of users: when queries are slow and async, it creates a burden on their department to manage the state of running operations. Technically that's not my business, but I think about it anyway.


But you have similar concerns with which you can empathize.

Builds that take 5 minutes are a lower cognitive burden than builds that take 35, or two hours. If this is a worthwhile goal for your team, why shouldn't other teams want the same things?


I attended a talk by Michael Bender last year on B^epsilon trees and how when users say they have a query speed problem, most of the time it's an insertion speed problem for just this reason. He founded a company with two other professors and they apparently achieved some impressive performance numbers with their database (TokuDB).


To keep in mind: https://xkcd.com/1205/


Now multiply all that saved time out by thousands of potential end users.


I disagree, this may sound a bit heartless, but it's the engineer's salary that should be measured here.

The hard part is translating frustrated users into financial cost (e.g. lost revenue when users are leaving for competitors and such) which in turn can be translated into hourly wage which then can be used to calculate if optimisation is worth it. And then compare to the lost opportunity of making other optimisations or improvements.

Obviously this is near impossible so the best you can do is make an educated guess.


Yes, if you are a money robot that communicates with money and can only reason about monetary amounts.

Obviously you are, in fact, a meat robot, and if you'd care you could do better than "make an educated guess".


The evil is that overtime products add more and more features. When a feature is added and it slows down the app only 2%, they think it is no biggie.

But over time they ship 40 features that way... and now the app takes 220% more time.

It just slowly happens over time, and that is why they may not notice it.

Products need to have performance tests with time limits that you do not want to compromise over time.


I was confused about those "overtime products". But what you were saying is written "over time", or "as time progresses"


> I don't know where or when this became norm

I assume people who aren't as driven in their careers wouldn't want to mess with something that gives them hour-long breaks "waiting" on queries. I know I wouldn't :)


if you had shorter work weeks you would feel like you're wasting your time yourself, too.


Well, that's the thing, right? All that freed-up IO time can now be used for other queries!


Some queries can only be fast by essentially having a materialised view.


And MVs are generally the best way to separate support for infrequent reports or ETL jobs from support for common CRUD operations.
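
SQLite has no native materialised views, but the idea is easy to sketch with a plain summary table that the reporting/ETL job rebuilds on its own schedule (table and column names here are invented):

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
  conn.executemany(
      "INSERT INTO sales VALUES (?, ?)",
      [("north", 10.0), ("north", 5.0), ("south", 7.5)],
  )

  def refresh_sales_summary(conn):
      # Rebuild the precomputed aggregate; report queries hit this small
      # table instead of scanning the transactional one.
      conn.execute("DROP TABLE IF EXISTS sales_summary")
      conn.execute(
          "CREATE TABLE sales_summary AS "
          "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"
      )

  refresh_sales_summary(conn)  # e.g. run nightly, after the ETL load
  print(conn.execute("SELECT * FROM sales_summary ORDER BY region").fetchall())
  # [('north', 15.0), ('south', 7.5)]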


Because a business report typically doesn't need to be run quickly, it needs to be accurate.


I don't believe in that tradeoff, at least from my experience with "business reports".

When programs take a long time to run, they also take a long time to debug. Your edit-run-test cycle is longer.

In my experience, programs that are unnecessarily slow are often also unnecessarily buggy. The main reason is that developers don't want to touch them. They rot in the background while the underlying data changes.

I'm talking about reports that take hours to run and often don't have any tests. It's pretty easy to get 10x improvement on such programs.

This doesn't conflict with the notion that most programs (particularly business reports) don't need to be optimized at all. They just need to be written with some basic understanding of the problem and the underlying platform. Tests go a long way too.


>In my experience, programs that are unnecessarily slow are often also unnecessarily buggy. The main reason is that developers don't want to touch them. They rot in the background while the underlying data changes.

This is 100% true in my experience.


Sure, it needs to be accurate, but why can't it be quick too? Unless it's truly cost-prohibitive, quick is always a better user experience.


While I'm a strong advocate of speedy software, fast doesn't always beat reliable. The points raised in the comments above are valid: if a legal team needs a report once a month and it takes a day to return the results but is super accurate, that's almost always preferred over something quick but less reliable. There are a lot of comments on this thread, but a lot of them lack context: yes, maybe it takes ages to run a query, but what's behind the scenes? Does it do some fancy-schmancy, mission-critical calculations, or simply return results from a database with 10K records?


> Does it do some fancy-schmancy, mission-critical calculations, or simply return results from a database with 10K records?

Or does it do something stupid, like manipulating a fixed but large amount of data in a linked list, looking up in a linked list instead of a hash table, and/or passing lots of stuff up and down 20 layers of dependency-injected abstractions?

In my (arguably relatively brief) career, I've seen multiple cases where the core limit on product performance was something stupid that could be trivially identified with a profiler and could yield overall 10x boost of performance. Despite complaints from users, nobody really bothered to fix it.


This type of attitude is what scares me.


Your type of attitude scares me.

Developers making up arbitrary requirements based on what they want has caused me many a headache.


Sorry, being fast is an arbitrary requirement? That's the entire premise of this article and discussion. The point being that most ARE NOT taking it into consideration and making it one.


> Sorry, being fast is an arbitrary requirement?

Yes. In all cases "being fast" is an arbitrary requirement, and a project lead or product manager should when hearing this requirement, interject and ask for a definition of "fast".

There are a thousand things at any company that could be done faster than they are. I could throw in eager caching or reindex data into aggregated documents in order to optimize that once-monthly report that takes an hour to generate, or I could automate the other once-monthly report that a group of folks assembles manually for each client. Which one is more valuable to work on? I could rewrite the server in Rust to get a 10ms response time speed up or I could work on the feature that will allow our sales team to be finally able to say "Yes, we do that" to the folks who previously turned us down due to feature parity. I don't live in a world where time is free, or where I can work on all of these things at the same time and still be done in the time it takes to do just one of them.

And all that additional logic and caching that could be added to improve the performance of some reports will be an increase of surface area for bugs and data discrepancies, and bring upon the team one of those hardest problems in computer science[1].

[1]:https://www.martinfowler.com/bliki/TwoHardThings.html


I was replying specifically to you wondering why what you consider to be poor performance became the norm.

Generally, as long as the feature is acceptably fast, making it faster is likely a waste of time in most circumstances, especially when it comes to things like reports. Also, it may come at the detriment of other considerations, e.g. cost, accuracy, code quality, etc.

I used to work on high-performance JavaScript, and typically the things done in the name of performance make the code a lot harder to read and modify; this pushes up the development cost of a feature quite substantially. I haven't seen the stats myself, but I doubt it brings in enough extra revenue to justify the cost.

I am obviously not saying you shouldn't care about it, but it has to be taken with consideration of the bigger picture.


Unfortunately, people have contempt for programming in native APIs like *gasp* Win32. I doubt universities are even producing people capable of programming proficiently in anything except Java and Javascript - which would explain why the desktop has turned into a hellscape of Electron "apps"...

https://medium.com/commitlog/electron-is-cancer-b066108e6c32


There certainly are universities out there with proper CS courses teaching way more than Java and JS, so your point is a bit moot. But that doesn't take away that I also have the impression that universities alone do not produce people capable of programming proficiently in anything at all, especially not when it comes to writing complete programs of anything more than very small scale. They simply can't, because that takes experience in writing actual software. Whether people get that experience before they were even in uni or after doesn't really matter, it seems.

All anecdotal, but I've seen a bit too many people with CS degrees who were clever enough, had all the knowledge, could produce smart algorithms and everything you want. But the code would just work, and that's that. Like: a build process that only works on their machine, a git history like it was written by a child or non-existent, code impossible to integrate into other projects, trainwreck-like, violating anything you can think of, and so on. Is that bad? Well, yeah, but it's also pretty normal, and it's not always problematic: teach them some more in an actual team and the really good ones will be better than you in no time.


University is not coding school, either. Students have to learn about a large set of topics, which precludes large projects for the most part. Some universities counter that by adding mandatory labs that have students work in teams over longer time scales. But even that is just enough to give them the bare basics of proper work experience. They will have to get the rest elsewhere.

Also, writing code that barely works and barely gets its job done is a skill ;). Ideally, it means that the person knows how to get something done with minimal effort when that is all that is needed. I noticed that I started to lose that when the projects I worked on became larger and had to meet higher demands. I tend to do stupid things like writing one-off scripts "properly" when simple hacks would get me there faster.


Computer science != application development. There's a bit of overlap there but they're different worlds. Most universities are teaching theories and fundamentals, not platform APIs.


I don't think universities should teach OS specific APIs in CS or SE curriculums. The student should learn those by himself if he wants to. The university should only explain the concepts.

If I use free software, I will probably prefer to interact with the X11 API or GTK. If I want to develop commercial software I probably want to learn Windows APIs, etc. Forcing everyone into a specific OS is not a good way to teach IMO.


I would gladly write Win32 apps if companies would request and pay as good as JS.


I wonder how much of it is simply down to Win32 being a horrible API to work with? It's so verbose and has nothing in the way of layout helpers, would people be less scared if it were more like Gtk or Qt?


which would explain why the desktop has turned into a hellscape of Electron "apps"...

The low (initial) cost of them already explains it too well.


Universities shouldn't teach any particular language at all. Maybe show several and explain what each one is better at with examples, but I wouldn't like a university that teaches coding for Win32 over Java either.

* My experience in university: I had to code in assembly language, C, C++ (to learn about memory allocation, pointers, garbage collection - f*ck valgrind - ), Smalltalk / Pharo (for real object orientation), Python / MATLAB (for math things), and Java (for algorithms).

edit: Add Smalltalk/Pharo


University courses must require students to actually write code as part of their assignments. I've worked at universities where the program structure allowed students to finish with little to no coding skills, and the lower-tier students from that program were abysmally bad.

A university CS program should force the students to deal with many languages and - importantly - their concepts. A single implementation language for all courses is certainly too little. Show them what it means to dereference a pointer in C, how to write a loop and a function call in assembler, what Smalltalk objects can do by passing messages instead of calling methods, etc. But I think that having a main language in a CS program is a good idea because it allows the teachers to pose more interesting implementation exercises if the students don't have to spend time learning yet another language while trying to solve a problem.


I loved using Winamp over say Windows Media Player in the early days of MP3s.

You could throw hundreds or thousands of tracks into your playlist and could scroll incredibly quickly through them, even on relatively slow PCs of the day. In an age before Spotify, when having a vast collection was appealing, all my friends could scan each other's playlists and immediately start playing a song.

The speed of the software really did help discover new music. Scroll, double-click, scroll, double-click, scroll, double-click.


Same experience today with foobar2000. Faster than any webapp could ever be. And, by the way, to have a vast collection is still just as appealing.


Collections are still very appealing as Spotify doesn't have a large enough catalog to cover my normal playlist, let alone my extended collection.


Foobar2000 is the bee's knees


It really whips teh llama's ass...


The v5.666 build (Dec 2013) works just fine even now. Never makes an Internet connection, blazingly fast.

I personally love the keyboard shortcuts for finding music, queuing it, pausing, playing, etc.; very intuitive.


5.8 beta is available for download on http://winamp.com/


Oh my... I thought it was dead ages ago. I don't really listen to music too often nowadays, apart from an odd evening with YouTube, but I do miss the speed Winamp used to operate at. Blazing fast, even on some crappy Pentium PC with a little bit of RAM...


It really whips the llama's ass!


Winamp played my files without glitches too, but made my PC barely usable at nearly 100% CPU load (AMD K5-133, playing 128 kbps stereo MP3s).

On the other hand: mpg123 on Linux played the same files with less than 50% cpu load! ;)


Time to upgrade to that 300mhz proc


This comes up in human factors/UX, and it has some implications in both directions.

The observation is that a series of closely timed delays are experienced as a single, longer delay - or interaction - by the user.

The upshot of this is that a site where all pages are cheap but there are more of them may be more pleasant for the user than a small number of kitchen sink pages (finding stuff with the scroll bar is also slower). However, very very few of my employers have been willing to sign onto this.

Everybody wants their stuff on page 1. Page 2 at the least. Only losers have their information on Page 4. And if your stuff in on page 20... think of the shame. Can you even face your family for the holidays anymore?

What's really fucked up about this is that people consider carousels to equate to being on the front page. There are websites where I could visit literally every single page on the site in less time than it takes for the goddamned carousel to finish doing its thing. The inmates truly are running the asylum right now. Which is why I've done the Homer Simpson in the Hedge maneuver into backend work.


I’ve been a design consultant for well over a decade now, and the vast majority of times that I’ve seen a carousel in production, it has been primarily a political tool. It’s a way to make everyone be front and center.

Of course, given the affordances of carousels, it means that no one is actually in the spotlight.

It’s been relatively easy to get rid of when it’s more of an “accident” than a political tool (or the result of the CEO/PM saying “can you make it more dynamic”), but when it’s tied to internal politics, it’s much harder to get rid of.


Backend is best end.


Not for long. The js people have discovered Node.


One important reason I value speed in software is that it leaves me less time to get distracted. If I'm expecting something to take a few seconds, often that's enough wait time to prompt me to check my phone or reddit or something. Best case scenario I've just turned a 3 second delay into a 10 second delay. Worst case and frequent scenario, because I'm ADHD as fuck, is that I get lost browsing.

An area where I think speed is neglected is when people are thinking about what to build. Something I see mentioned occasionally by people here, on reddit, and by friends: They'd like to work on a side project but don't have any ideas for something useful to build. Just take something that's otherwise already useful but slow and make it not slow! These days you can't swing a dead cat without hitting a web app that doesn't need to be a web app, or a desktop app built as a web app. There's no shortage of options.


I very much agree with your first part, and slow software makes me physically uncomfortable sometimes. But I think you overestimate your last point. Can you name examples of software that could (partly) be recreated faster as a side project?


Not the OP, but two things have recently come to mind: 1. A calendar view with hours/minutes on macOS. You have the current date at the top, or the Calendar app. I want a shortcut to access this app/view without leaving full-screen mode. 2. A scratchpad on the status bar for macOS. Evernote has it, but I don't use that app anymore. Just a simple scratchpad, same as the calendar view, accessible via shortcut, without leaving full screen. I often build an app in Xcode and want to note something to myself, but making comments in Xcode would mess up the build process. And I don't want to leave full-screen mode to go to the Notes app - slow, distracting, and I have to create a new note in the correct folder.


I totally relate. I am a college prof and most of my grading is online. I have noticed myself procrastinating while grading because after grading each paper it takes a few seconds for the web page to load the next student's assignment. In those 20 seconds or so I will switch tabs and get distracted. So grading that should take less than an hour ends up taking 3-4 hours.


Speed is important. But what can be more important for the end user is the perception of speed.

A decade or so ago I was looking into reporting tools. Crystal Reports was getting expensive and I was looking for an alt. Some of them were faster at generating a several hundred page report but CR did something the rest didn't do. Instead of making me wait for the entire report to generate (several minutes) it would allow me to view the pages as they were generated. This meant I could read the first few pages while the rest were still in process. This made CR feel many times faster than the others.
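
That trick generalizes: produce output incrementally instead of all at once. A rough sketch of the difference in plain Python (generate_page is a made-up stand-in for the real per-page work):

    import time

    def generate_page(n):
        time.sleep(0.5)                      # stand-in for real per-page rendering work
        return f"--- page {n} ---"

    # Batch style: the user stares at nothing until every page is done.
    def render_batch(num_pages):
        return [generate_page(n) for n in range(1, num_pages + 1)]

    # Streaming style: the first page is readable after ~0.5 s,
    # even though the total time is exactly the same.
    def render_streaming(num_pages):
        for n in range(1, num_pages + 1):
            yield generate_page(n)

    for page in render_streaming(5):
        print(page)                          # the reader can start here immediately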


I think the important part is responsiveness.

I once made an app with an export feature. It was pretty slow (I benchmarked it against other tools and my tool was an order of magnitude slower).

But it was extremely responsive. The status bar updated frequently and was accurate, and a cancel button responded immediately.

People in reviews wrote that my app has great performance! Nobody complained that an export would take a minute! I did not expect that.

But then I realized that perception of speed / responsiveness is more important than actual speed. My app reacted instantly to user input, and never lagged, even while processing an export. I even wasted a lot of CPU cycles updating the UI to show status. But the result was an app that felt fast, and people really liked it.
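
In case it helps anyone, the pattern itself is nothing fancy; a stripped-down sketch in plain Python (not my actual app's code; no UI toolkit, names made up): the export runs on a worker thread, reports progress often, and checks a cancel flag so the cancel button takes effect right away.

    import threading
    import time

    cancel = threading.Event()
    progress = {"done": 0, "total": 0}

    def export_items(items):
        progress["total"] = len(items)
        for i, item in enumerate(items, start=1):
            if cancel.is_set():              # cancel responds within one item
                return
            time.sleep(0.01)                 # stand-in for the slow per-item work
            progress["done"] = i             # the UI thread polls this for the bar

    worker = threading.Thread(target=export_items, args=(list(range(500)),))
    worker.start()
    while worker.is_alive():                 # stand-in for the UI event loop
        print(f"\rExporting {progress['done']}/{progress['total']}", end="")
        time.sleep(0.1)
    print("\nCancelled" if cancel.is_set() else "\nDone")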


> A typewriter is an excellent tool because, even though it’s slow in a relative sense, every aspect of the machine itself operates as quickly as the user can move.

For some reason, coffee machines with a built-in reservoir for coffee beans annoy me a lot more than the ones with a filter. For one with a filter, I know I have to put some coffee in it and fill the reservoir. I can determine the strength of the brew by this as well. The ones with built-in reservoirs should be able to give you a cup of coffee, but it is incredibly annoying when you have to suddenly refill the reservoir.

While the total work you do is less, the machine is unpredictable. It feels like the machine now controls you, instead of the other way around.


So true. It is infuriating to know how much faster computers have become over the last 20 years and to see how software manages to feel as slow as 20 years ago.


Actually, it can be slower today than 20 years ago: https://danluu.com/input-lag/


Well, as long as even well-funded companies come up with euphemisms like "business imperatives", "strategic direction", "quick execution" etc. to peddle sub-par JavaScript software, things are not going to change.

What makes it worse is that developers keep harping on CPU/RAM being cheap and software being plenty fast on newer hardware, not only to justify bad software but to promote it.


>"quick execution"

I wonder where they are seeing this "quick execution". I for one am unable to see it as hard as I try to look.


I remember Windows 95 on my Pentium 4 being lightning fast. Like, I've barely finished clicking the icon and the program is already up and running.

With Windows 10 and my i7, I usually expect a loading screen or two before I can use what I just clicked on (esp. Microsoft Office).


It might be nostalgia but ...

I believe the peak of user experience was Windows 98 SE, when USB actually worked.

I miss how you could hear the computer loading stuff when the HD made the arm-moving sound. When it took time to open "My Computer" you could hear the floppy drive looking for a floppy, and the delay was fine because the reason was obvious.

I also miss the snappiness of Win32 GUIs just starting up in a heartbeat. No animations to hide bloat. If it crashed there was a blue screen, and pushing the power button actually shut down the PC instantly.

Oh, and spamming ctrl-alt-delete resets the computer.


Windows 95 runs on a 386 with 4 MB of RAM. No wonder it flies on a Pentium 4.


And don't forget to count in SSDs. Back in the day the HDDs were really slow.


Code bloat is such a strange phenomenon, the way it almost inevitably grows over time. I’ve recently been noticing this with the Android emulator.

I created two emulator instances, API levels 21 and 28 (Android 5 and 9 respectively), identical emulated hardware. The API 21 VM is much faster than the 28 VM. It’s perfectly usable on my Mac where the newer OS is unbearably sluggish.

Why is that? With every Android release, there’s a song and dance about the work being done to reduce jank (“Project Butter” etc.), so why does it keep getting slower?

Google have lots of amazing engineers, the world’s best datacenters and a vast pile of cash, so if they can’t win the fight against bloat, what chance do the rest of us have?


I may be wrong on this, as I've only worked on one software team, not at Google, but at least on my team I notice that the intense pressure to deliver new features, combined with new product managers being evaluated on their ability to work with developers to ship fast, results in performance becoming a second-class citizen unless there is a very senior, very technical owner who ruthlessly shuts down features and projects to protect it.

In short, I feel it's more an org problem.


I basically agree, but at the same time, companies compete hard over speed and benchmarks, so it’s not like performance isn’t on their radar at all. And yet the bloat problem exists. Maybe they’re focusing too hard on low-level benchmarks (JS execution, 60 FPS rendering, etc) and ignoring overall app performance.


The quest for performance is often also a cause of bloat: pushing something into a background thread now requires the UI thread to be aware that the background thread may not have the results yet - a lot of extra ifs around accesses to the data model. A fancy fast algorithm that solves a problem at larger scales faster is often more complex and performs slower with smaller inputs. So while people have added 5k or 10k LOC to the project to make it fast for the occasional large input, performance for the smallest possible inputs may have dropped by an order of magnitude or so.

Then there are the unavoidable features. Accessibility and localization are in that category. Suddenly you need to do extra work every step of the way. Look up translations, format strings with locale dependent rules, build a model for screen readers to process, alternate UI themes and layouts for vision impaired people... death by 1000 paper cuts, but also too important to ignore.
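
To make the "extra ifs" point concrete, a tiny sketch in plain Python (Model and render are made-up names): once the expensive work moves to a background thread, every reader of the data model grows a "not ready yet" branch, plus locking.

    import threading

    class Model:
        def __init__(self):
            self._lock = threading.Lock()
            self._result = None              # None means "not computed yet"

        def compute_in_background(self, data):
            def work():
                value = sum(data)            # stand-in for the expensive part
                with self._lock:
                    self._result = value
            threading.Thread(target=work, daemon=True).start()

        def result(self):
            with self._lock:
                return self._result

    def render(model):
        value = model.result()
        if value is None:                    # the extra branch every consumer now needs
            print("Calculating...")
        else:
            print(f"Total: {value}")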


>what chance do the rest of us have?

A huge and very easy opportunity. Current software is not just unoptimized, it is anti-optimized up to the neck. Any idiot could do far better than the current "state of the art" if they were raised to prioritize the right things and to recognize useless layers and cruft for what they are.


That’s been true for a long time, though. A steady progression since the dawn of computing. If there really is a golden opportunity for somebody to sweep in with vastly faster software, why has nobody done this yet? (Or have they?)


This article really speaks to me. It’s well written, and written like a rock collector talking about each one of his rocks in his rock collection, only the rocks are apps of different speeds.

There was a related thing that I heard from a previous CTO that changed my outlook on how to write web apps: "Users shouldn’t be able to tell that they’re using a web application. A web application should feel native. That is, it should load instantly."


6 years ago I joined a Swiss company that was running its entire operations on a custom-built solution on top of Lotus Notes. Boy did we hate it... Every time you tried to open something you got this progress icon that used to take anywhere from a few seconds to half a minute. I must give credit to the designers in terms of functionality - it did all we needed (I did borrow some design solutions for my next job), but the lack of speed!!!! I even made an analysis of lost productivity and presented it to my manager... Then Americans bought the company and eventually ditched that crap altogether :)


The pain with lotus notes is real. I had to work with it for 3+ years and was blown away that someone was able to make an email client that took literal minutes to perform any given task. I always think of it whenever I see any IBM announcement for new tech.


The real takeaway is that despite all its sluggishness, IBM probably made billions out of it... This serves as a brilliant example to a lot of technical people out here that the things they deem important may not be the same as what the end client thinks they are. Apart from this, my first and hopefully last experience of Lotus Notes could easily be expanded into a pretty long article about design, software vendors and product development... It used to take about a year to add a button to the page layout...


May I ask what some of the design solutions you picked up were?


I spent a year making a replacement for our internal notes system composed as a custom Lotus Notes application. It feels so good to be free of it now.


I do not understand apps that add animations that can't be fully turned off, like OS X and others. The designers of such apps are clearly delusional in thinking that I want my user experience slowed down the millions of times I'm going to use a feature like changing spaces in OS X. That's not an exaggeration. Same thing with unskippable cut scenes in video games. Or anything like that that's not essential and is going to happen thousands or millions of times, or takes longer than a split second. These designers, like most designers, are designing for eye candy, not usability. They should be fired for such stupid and human-unfriendly decisions. That's how bad such UX is. Especially Apple's, which makes their spaces completely unusable.


Users (desperately): "We need fast and correct software!"

Most companies: "Here, have this shitty UI redesign that makes our product worse and 50x slower!"


> That said, Sublime Text has — in my experience — only gotten faster. I love software that does this: Software that unbloats over time. This should be the goal of all software. The longer it’s around, the more elegant it should become. Smooth over like a river stone.

This ^ times a million


This is why I hate Python. The default for Python is slowness and an incompatibility with the multicore world we live in. What does it sacrifice these two first-class citizens for? Shaving a few seconds off by not writing types, which has also proven to be fundamentally incompatible with team-written software. Now we all get to write slow CLI tools and write type annotations anyway.

I would love to see people put efficiency at the top of every requirement list. With languages like Rust you don't even have to sacrifice productivity or safety to get it.


I somewhat agree with your point, however I really believe that Python makes it possible to write FASTER and better software because it focuses on developer productivity. Just read the YouTube story.


Python is pretty speedy for its intended use case of flexible scripting, though it certainly can bog down in large projects or with lots of for loops.

Python puts programmer read/write efficiency at the top of the list.


Then why isn’t there a function to flatten a list in Python?

I’m not sure what Python’s priorities are, besides majorly sucking.

I can write software in Scala or OCaml or Racket in less than half the time of Python, while also being 10x faster.

Python needs to die. It’s the most worthless language I’ve ever been forced to use.


> Then why isn’t there a function to flatten a list in Python?

There's a function to flatten an arbitrary iterable into a flattened iterable and a function to turn any iterable into a list.

> I can write software in Scala or OCaml or Racket in less than half the time of Python, while also being 10x faster.

Many people, on many types of problems, find the reverse of the first relationship.


What tangible advantages does Python have compared to Racket? The only thing I can think of are the high quality data science libraries.


> why isn’t there a function to flatten a list in Python?

After a decade of writing and maintaining Python, I've never needed to flatten a list.

> can write software in Scala

I have seen projects in Scala and can't make sense of the types or data flow.

When I show non Python programmers a snippet of Python they have a good chance of getting the sense of it.


Flatten a list in Python using a list comprehension:

    flat_list = [item for sub_list in orig_list for item in sub_list]

Using itertools:

    import itertools
    flat_list = list(itertools.chain.from_iterable(orig_list))


Yeah, of course you can flatten a list. It seems like Python programmers enjoy writing the same code over and over again.

Would it really be bad if I could just do lst.flatten()? Or flatten(lst)?
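
(You can of course define it once yourself; a rough sketch, where the name flatten is my own rather than anything built in:)

    import itertools

    def flatten(nested):
        # Flattens one level of nesting: [[1, 2], [3]] -> [1, 2, 3]
        return list(itertools.chain.from_iterable(nested))

    flat = flatten([[1, 2], [3], [4, 5]])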


There's a case to be made that flattening a list is not a common enough operation to justify a built-in keyword or even a method on the standard library list class. The Pythonic way seems to be that if you can write something with a simple and clear one-liner then it's better and clearer to have people use that one-liner than another built-in method that people would have to go look up the implementation of.


I think you're feeding a troll here. Anyone with time enough in engineering or programming can recognize and respect these sorts of tradeoffs without trash-talking their least favorite tool.


I mean, you don’t believe that, do you? Maybe we should all be writing our own sorting algorithms as well.

I kind of assumed that DRY was a universally accepted principle in software engineering.

But I guess the Pythonic way is about copy and pasted code, no abstractions, wasting people’s time, egregious language inconsistencies, no multi-threading, and terrible performance.

I’ve probably written more Python than any other language, and honestly, using it feels bad. It feels outdated and useless.


A lot of people are unaware of how insanely fast computers are. If you can do a computational task in a year, a computer should be able to do it well within a second.

At a previous employer, we had a test that took about 45 minutes, which most people just accepted as a fact of life. Unfortunately, we had to use this test a lot. After I optimized it (in my own time, because the team lead didn't think it had priority), it took 5 freaking seconds. Probably still far from optimal, but way more manageable.
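
For anyone facing something similar, the boring first step is usually just profiling; a minimal sketch with Python's built-in profiler (slow_test is a made-up stand-in, not the actual test):

    import cProfile
    import pstats

    def slow_test():
        total = 0
        for i in range(5_000):               # stand-in for the real 45-minute workload
            total += sum(range(i))
        return total

    cProfile.run("slow_test()", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)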


Should there be a leaderboard for interaction speed? This might encourage employees to petition for speedups internally.

“We’re on track to be the slowest productivity app on iOS! Let’s prioritize those responsiveness fixes!”, etc.


The worst thing about developing slow software is that it has a broken feedback loop. I suppose that affects other qualities than speed, too.


This is pretty much the one thing keeping me from writing some long diatribe about how performance doesn't matter for a large amount of software.

I've got a tool that analyzes ...lots... of data, and takes about 2 hours to run. This delay completely does not matter at all, when it's working. But man it's brutal when I have to debug some problem.


Does anyone know of any other lists of fast software?


The list at suckless.org is generally OK. Not all the software on that list is fast, but it is generally of focused quality.


The irony is that suckless.org itself seems to suck. I couldn't find the list of software. Can't be bothered reading walls of text that seem to tell you nothing useful.


Not sure how it looks on mobile, but the list is right across the top, with sublists under core and tools.



yeah, I worked it out in the end, but the UX design is so bad there is no way I'm taking their recommendations for software seriously. Given the upvotes my original comment is getting, it seems others are having the same issue.


If you think that UX is bad, then I am sorry to state you are one of the reasons why the web is so bloated and slow in the first place.


Sorry for trying to be helpful.


You might be interested to know that (some of) the people behind suckless.org appear to be literally Nazis, with things like Fackelmärsche (https://suckless.org/conferences/2017/, see also the comments in this thread: https://lobste.rs/s/nf3xgg/i_am_leaving_llvm#c_yoghmo) and hostnames like "Wolfsschanze" (https://twitter.com/pid_eins/status/1113738766471057408).


If you're after Linux applications, look at the ones bundled by the lightweight distros.

Arbitrary personal favourite: the Xfe file manager

https://en.wikipedia.org/wiki/Xfe


I find it helpful in terms of UI to bin things into fast and slow. I do this across the board, so in development, some things will just be slow and trying to optimize seconds away is pointless. I also do this in the UI, so that long-running operations indicate that they will take a long time, and there is usually a visible queue feature so users can see what operations are running, how long they have taken so far and the expected completion time. An example is raster operations on large raster datasets. They just take a long time on the standard business laptop hardware that my users typically have. Image saturation analysis or reprojecting lidar data with standard algorithms can be an hours-long process if the dataset is a few GB.

But the process of queuing that operation up should be lightning fast, and the context menu to cancel a running operation or check its status should be equally so.


When it comes to writing software: when you can write something "faster" in the same time it takes to write something slow, you should always write it the "faster" way. It's obvious, but... not really.

For example, when it comes to web development, today most people still choose solutions that by default always diff the whole (virtual) DOM to decide what to change in the real DOM, and on the server side build a ton of objects to render HTML and destroy them after rendering, for each request (e.g. React). You can argue that the React ecosystem will save you time writing code because you can find a lot of good reusable components. Yes, but as your code grows, moving to a different framework (e.g. Svelte), when and if necessary, is going to be a pretty large job.

You need to pick the right metric. The main one I always look at is ROI. Usually I take "user satisfaction and delight" as the "return" and the time I need to develop the feature that way as the "investment". The second parameter is usually the risk involved.

If writing it "faster" takes 10% more time and is safe, it may still be worth going the "faster" way. But when writing something "faster" starts taking 200% more time and adds risk, you want to be very careful. Is it really worth spending so much time on it?

Fast enough is fast enough.

You do not have to make everything the fastest you can. If a page takes 10 secs to load, what is the advantage for the user if you make it 10x faster? It is a huge win.

What if it loads in 1sec and you make it load in 100ms? Still a win.

But now what if the page takes 100ms to load and you make it 10x faster? Not a lot of people will notice the difference.

And if you have a page that loads in 10ms and you can make it 10x faster, nobody will even notice it.

So if you like to optimize, go look for places where "10x faster" actually matter.

It is more complicated in practice, but keeping these principles in mind will save you a lot of time to work on what really needs to be done.


> You do not have to make everything the fastest you can. If a page takes 10 secs to load, what is the advantage for the user if you make it 10x faster?

The point is not to make it 10x or 100x or 1000x faster, it's to make it so that it is never slower than a monitor frame (e.g. less than 7-8 ms nowadays)


I haven't used Mac OS for a while

> macOS dialogs for closing an unsaved file have shifted from “Don’t Save, Cancel, Save” to “Delete, Cancel, Save.”

Does that mean if I write some essay, save it, then make some changes and close the application, it will ask this? This would imply it would delete the whole file from disk instead of just discarding the changes I made since saving. I really hope I am wrong here and there is a different dialog for this case, otherwise this is just very horrible.


The "Delete, Cancel, Save" is when you close a new document without having saved it. In your scenario, it's basically saving as it goes. The document will just close, without a dialog, but your changes will be saved.

macOS is trying to do away with the concept of "unsaved changes" altogether.


Thanks, that makes sense then for that dialog. Doesn't seem as bad as the author makes it out to be. But in general, I'm not really a fan of this implicit saving stuff, but maybe I'm just getting old and unable to adapt. :)


As a long-time Mac user (long enough to remember OpenDoc!) I massively dislike it. It’s becoming difficult to open a document without accidentally making some trivial change, which is then auto-saved. If the doc happens to be in Dropbox, you see the change notification and wonder, what did I do?? I just wanted to look at it!


The big problem is that it's very inconsistent across the OS. Only certain apps adhere to auto-saving (and thereby this modal).


This is strictly for the case where the file has never been saved, and doesn’t exist on disk.

Cancel always cancels the action, allowing you to continue working with it. Save saves, obviously. “Don’t Save” means “continue the user action, which was closing the window, which closes the unsaved file, deleting the contents in memory”. “Delete” does a better job of capturing the consequences of the action here.


> “Delete” does a better job of capturing the consequences of the action here.

Not for me. If I haven't explicitly saved my document, there is no copy to delete. "Don't save" (which I am more accustomed to after using Macs FOR THIRTY YEARS) is what I expect.


I believe the "delete" option is for a new file that has never been saved, the thinking being if you are going to leave an empty file by choosing "don't save", the better choice is to just delete the file ... I guess?


One of the reasons I always get annoyed when I work in windows: Programs take forever, I need the mouse for everything, and god, the installation times.

I recently started working at a company where we use Linux workstations. I work on a machine where I don't have sudo rights. Yesterday I asked a co-worker if he could install a package for me. From the time of me asking and the time that the package was installed on all the workstations, there was about 10 seconds. Nice.


Great post. On the topic, the Bloomberg terminal is a dinosaur and is slow to start from what I remember but email and most of the features are pretty much instant in their response. There is plenty of resistance from customers to newer features which tend to be more ‘modern’ looking and feature-full but have noticeable latency. To give an example, there is a feature to price mortgage backed bonds, the 15 year old (maybe older?) version loads instantly, takes command line arguments, on the other hand the new version takes maybe a few seconds to load and matches 98% of the old functionality plus 30-50% of new capabilities. The new one is basically universally hated.


One app that I've always found refreshingly fast - Trello.

Sure, it has way less features than other products, but I wouldn't be surprised if Jira's inherent slowness is what turns off a lot of people.


I'm so irritated by Jira's slowness that I'm actively evaluating alternatives. The leading contender is something I'd never heard of before - Clubhouse.io. It's missing a few things, but it's fast and slick and the difference is compelling enough to make me think the limitations are worth it for the speed boost. I'd love to hear others' experiences if anyone's used it extensively?


I totally agree with this, Clubhouse is slick and fast. I enjoy using it everyday :)


I used notational velocity for many years. It was great because of its brutal focus on note taking / finding and it was fast indeed. 10/10 this message needs to be spread!


Performance matters! It is important to find the source of the slowness - for example, I discovered some CSS animations are useless and very slow. The Vuetify material button felt very slow on touch until I removed that useless material ink-spreading effect, which IIRC even had to modify the DOM after touching it to work. I still believe JS apps can work fast, if only developers cared more about speed than all those useless animations and transitions...


Could someone please recommend a fast video editor?

I have been on the lookout for one for quite some time now. I'm looking for something a little more user-friendly than say FFmpeg, but all the very feature-light and also feature-rich alternatives seem incredibly clunky and slow, even on my fast machine.

Thus far I've tried Shotcut, Premiere and Sony Vegas. I would be very grateful for any suggestions!


Have you tried Blender for video editing?


There's Kdenlive, Olive, OpenShot...


Avidemux is great for simple things.


Nobody likes slow software, and we know that it's bad, but we keep writing it. How can this possibly be turned around?


Not sure how to fix business, but for B2C getting rid of the ability to monetize user data would help. Right now businesses go: well we're gonna have to vacuum up all their data anyway, and our price is free, so who gives a shit if we write it in slow and bloated HTML+CSS+JS? All that matters is that we're on every platform yesterday and sucking up that sweet sweet personal information.


I once worked with an ortho imagery rectification program that was so wonderfully designed I described the hours long task of rectification as a "ballet for my hands"

There's something just so wonderful about elegant, fast interfaces.


> Close to the metal craft.

Loved this phrase.


Any Windows recommendations for something with the speed and elegance of nvALT?


There's a less beautiful Python/Tk remake called nvpy


Speed is important for me too. BUT only for the software that I use daily. I could find faster alternatives for some software that I use. But what's the point when I rarely use them.


If you daily use software that you use rarely, there may be a point.


I am sorry but I don't understand you. There are two categories of software I use. One that I use daily and the other I rarely use. Speed is only important for me in the first category. I hope that clears things up.


The point is that rare software may get used every day ... just not the same particular piece of rare software.

Or put another way:

“Scientists have calculated that the chances of something so patently absurd actually existing are millions to one. But magicians have calculated that million-to-one chances crop up nine times out of ten.” ― Terry Pratchett, Mort


A non sarcastic tl;dr: it’s not that fast software is better, but rather that slow software feels worse. I don’t know that anybody could disagree with the sentiment of how using something feels: everybody likes things to be snappy and responsive, like the app is getting out of the way and letting you get on with your work.



Not all optimization is premature.


Have to do a crazy zoom in to see the street names...


The real reason, in the vast majority of cases, why most of our software seems slow is not that someone used JavaScript instead of Win32 C or something else programming-language related. The real driver of slowness is the network requests for everything.

You could build your software in assembly but if each action has to hit a rest endpoint, the network request will utterly dwarf anything else happening locally.

So as more and more software moves to the browser as web apps, this slowness is unavoidable.

Consider how much quicker and more responsive Thunderbird feels compared to Gmail, a web app.
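
If you want rough numbers for that, a crude sketch (example.com is just a placeholder; the actual figures depend entirely on your network):

    import time
    import urllib.request

    data = {"inbox": ["mail 1", "mail 2"]}

    t0 = time.perf_counter()
    _ = data["inbox"]                        # local, in-memory access
    local_ms = (time.perf_counter() - t0) * 1000

    t0 = time.perf_counter()
    urllib.request.urlopen("https://example.com", timeout=10).read()
    network_ms = (time.perf_counter() - t0) * 1000   # one HTTP round trip

    print(f"local: {local_ms:.4f} ms, network: {network_ms:.1f} ms")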


Well, if your problem is the network stack, then don't use it. My word processor doesn't need to be Frankenstein's monster with parts running in my totally abused hypertext viewer and others running on some servers distributed across half the globe, where most of the work they are performing is making sure that they are talking to each other properly. Same goes for almost everything else that is a totally ridiculous "browser-based" abomination.

OK, let's turn this into less of a rant: if you can do things in a native desktop app, you're almost always better off than with a browser based solution.

If you need networking, then you need to put up with latency and throughput. That is just unavoidable. What is avoidable is the overhead of unsuitable protocols. HTTP is perfect for one-off stateless requests for documents from a remote server. That is the whole point. But people crammed haphazard session tracking into it (cookies) and wonky authentication, then decided that having open sessions on the server is not something you do and layered an extra layer of statelessness on top of that, while all the complex browser-based applications out there are actually incredibly stateful on the client and server. A native client that uses a custom tailor-made stateful protocol to talk to a server that also keeps session state would be much faster and more efficient than the shaky Jenga tower of components that are cobbled together to form the current generation of browser-based software.


> if you can do things in a native desktop app, you're almost always better off than with a browser based solution

The reason fewer and fewer people are doing this has more to do with funding. If I were to build software today, I'd instantly go with web apps so that I can monetize them as much as I want.

Web app software has zero issues with piracy and a built-in model of regular cash flow, which everyone wants.

Find a way to fund desktop apps and make them as easy to build as web apps, and they'll be in vogue again.


In other words: companies and individual developers alike prioritize their business needs over providing value to the users. I could stomach this excuse if people were at least honest about it.

And still, it's not applicable everywhere. E.g. if you're targeting business customers, the ones that can afford to pay you are also the ones that would be in deep trouble with the regulators if they pirated your software.

But that's just part of the problem. Unfortunately, with business customers, another big reason against desktop applications is that web apps can trivially work around the increasingly arcane and arguably bullshit approval and security policies corporations tend to have around installing new software. Though that seems to be changing; I've recently heard of companies using deep packet inspection to apply similar policies to SaaS webapps.


>> In other words: companies and individual developers alike prioritize their business needs over providing value to the users.

I disagree. I would say that web apps are popular because they provide more value. If/when users/companies demand native apps, developers build that instead.

Web apps are easier for users to manage (no installation, no upgrading versions) and instantly cross-platform. Installation is not trivial for a non-technical person, nor an IT manager monitoring and upgrading thousands of PCs and tablets. The natural revenue models fit because the value is recurring -- the subscription means you constantly have installation, upgrades, and data taken care of as a service.


And I disagree with you; asking for users to "demand" native apps is a cop-out, because no matter how loudly users scream, nobody cares. Voting with your wallet doesn't work in a non-commodity market; the supplier is in control, and you can only take it or leave it.

> Web apps are easier for users to manage (no installation, no upgrading versions) and instantly cross-platform.

That's true, with a caveat that automated, forced updates are not a universal good - for both companies and individuals they're a source of risk and frequent frustrations.

> Installation is not trivial for a non-technical person, nor an IT manager monitoring and upgrading thousands of PCs and tablets.

This was mostly solved a good decade ago. Hello screen [Next>] accept the TOS without reading [Next>] leave default settings [Next>] uncheck the sneaky toolbar some morally deficient people put in [Next>] wait for install to finish [Done]. Sysadmins had a way to batch-install software in a non-interactive way. And these days, even Windows has a package manager that allows scripted installations.

> The natural revenue models fit because the value is recurring -- the subscription means you constantly have installation, upgrades, and data taken care of as a service.

Disagree. Installation is a one-time service, updates are as often undesired as they're not, and "data taken care of as a service" is bundling in something that should stay separate, in a sneaky attempt to lock the user in and ensure a recurring revenue stream. The case is simple: businesses like recurring revenue; everything else is either facilitating or attempting to justify it.

Turning products into services is one of the most annoying and anticonsumer trends of the current age. I get that business customers like it because of accounting reasons, but it's becoming a problem for everyone else. Next thing you know, you'll have to sign a TOS to use your hairdryer as a service.


Well, let’s take one of the problem apps: Slack. It is slow because it eats a lot of memory and makes my computer slow. You are right that networking makes apps slow, but not to the point they feel slow now. If I have a background thread getting Slack messages and nothing came in, why is the entire thing slow as molasses? I know most apps simply do a 1-1 page-to-web-API request, and if the web API request is slow they give up (usually with some stupid message that they cannot connect / reconnect), but why would that be slow? You can do that blistering fast with Win32. And yet my Slack is usually stuck and sluggish.

There is a lot of laziness going on: the relationship between a slow web backend and a slow frontend is not as direct as you describe. Most apps could work fine with stale data for most uses; they just don't, because then the developer has to think about it, and that costs money/time and takes talent.


Slack just fixed this. Try updating: https://slack.engineering/rebuilding-slack-on-the-desktop-30...

Slack originally designed the app for one workspace. As users' expectations grew to include multiple workspaces, they retrofitted by spawning a new thread for each workspace. They consciously traded off tech debt for growth. Now that they've grown, they have invested resources to pay off the debt. And everything is one thread again.


Slack is a great example. My first thought was Jira. It’s great software in terms of functionality, but it’s so unbelievably slow that I tend not to use it at all. We have a quite powerful server for Jira and just five users. It should work out fine, but I’ve encountered waiting times of a minute just to load the start screen. It feels like you’re running along fine all the time, and then as soon as you need to do something in Jira you’re trapped in quicksand.


Jira, also YouTrack... Why are these systems so slow :( They've had almost two decades to optimize.


> The real reason in vast majority of cases, why most of our software seems slow is not because someone used Javascript instead of Win32 C or something else programming language related.

On the PS3, I bought way less stuff in their store than I might have otherwise, because it was so damn slow, crashed so much (out of memory, felt like, but might have been something else), lost state all the time, and so on. It definitely felt like they were using a web view of some kind, probably with scripting enabled and in use, not just HTML+CSS for layouts & formatting or something. I'm sure that made publishing and adding rich content features ("just have a JS dev do it!") convenient. It also cut the amount of stuff I bought on there by at least half. There's no way I was alone in that—it was so, so bad.

Even on the much faster PS4 their store is unpleasant, though still barely usable for now, until they bloat it with more JS as we approach the PS5.

[EDIT] Point being, it wasn't the network requests ruining everything in that case; it was memory- and processor-hogging webtech on a machine that couldn't handle it, yet which should never have felt slow, going by specs, unless someone had screwed up big time (say, by putting webtech on it).


The PS3 should be bored out of its mind running a shop front consisting of a few text labels and images. It is a kind of machine where one would think that it takes deliberate effort to write a simple UI that is actually slow.


Funny thing is, the Wii U, which was still less powerful than the PS3, had a delightful shop. It wasn’t necessarily fast, but it didn’t shit itself loading a JPEG, AND it had charming music and animations!


Point is valid, but that's not the real reason in "the vast majority of cases."

The context here is that of desktop software, and the vast majority of desktop software is still not networked at its core. It is slow exactly because programmers used extra blubber layers to simplify their lives, not because of network API latency.


>used extra blubber layers to simplify their life

"simplify" lmao

Or rather to create an artificial market and justify their pay. More deeply, it is a symptom of our economic system, which forces people to do such things to make a living.


More importantly, it's spending 99% of the time on self-inflicted non-problems that come with using the web as an application platform. For example, that anyone can open any URL at any time... that alone leads to so many problems related to security (the user isn't supposed to be able to see/edit this part) and flow (this page shouldn't be visited before filling in a form on this other page, and if we visit it anyway it errors out; what if the user deletes their session; what if the network is lost, which can happen literally any time in the application's life; etc.). And then of course there's the matter of anything that would be a trip to RAM or the local hard disk becoming a networked trip to a distant server.


I'd say that disk comes a close second, if not first. Overtaxed slow disks with low memory make for a terrible desktop experience.

Once your memory usage climbs and your OS starts paging/swapping things to disk, seemingly trivial operations that page in new code will take forever.


Point is, most normal computer usage absolutely should NOT require a 10TB SSD and 256GB of RAM. It didn't a few years ago, while providing the same or more functionality, so why does it suddenly require that these days?


Because a web app is the only way you can monetize desktop-app-like functionality in 2019.

Linux desktop toolkits and dev environments utterly suck. You'll have to develop 3 different codebases for Win, Mac and Linux. Worse still, you'll have to reinvent app update technology if you go with Qt, or you'll be tied into a different updating technology and still have to deal with piracy problems.

Or you could go with web-apps:

-> Easy to develop code

-> Can't be pirated

-> Built-in model of regular payments

-> One codebase for all platforms

It's not even a close fight.


The network stack by itself is incredibly fast. I am talking about the Linux TCP/IP stack here, but I suppose that other vendors do just as well if not better.

I wrote a simple web server in C and I was impressed by how tight my code had to be to reach the limits of the OS.

Even the internet is fast. Just look at fighting games. The simple fact they are playable over the internet is quite a feat. Fighting games require reaction times at the limit of what humans are capable of, and some moves require a 1/60s accuracy.

The problem is that there are so many layers of abstraction. You have the OS, JS engine, layout engine, frameworks, etc... Networking is just one of the layers.



