Interesting article but the author seems trapped in an antiquated “compute time is more expensive than developer time” mindset. For most startups today the EC2 budget is negligible compared to the dev team budget. If you’re successful enough that the compute cost matters that’s a good thing and you can deal with it then.
Optimizing for problems you don’t yet have just keeps you from launching and getting successful enough to actually care about your compute costs.
This is an economically rational decision. But it is also a bad one.
It's a good way to build non-scalable applications. Because if the application scales, then at some point the computer's time will become more expensive than the developer's time. Of course, that cost is an economic externality for the development shop, so why should they care?
Edit: I am not sure the word "scale" is obvious. There is Google-like scaling, where we run the software on many machines in house. But there is also Microsoft-like scaling, where many users run the software. Collectively, those users have to pay the cost and waste the energy.
No one's saying "let's do stuff the stupid way!" - they're saying hey, let's focus on getting users before fleshing out the technical details of the what-if-we-actually-make-it scenario. Or, as the classic saying puts it, don't put the cart before the horse.
Yes exactly! The point is you don't build a scalable app until you have a scalable business idea. The first iterations of a startup are about testing different ideas, not about building sustainable architecture.
Unless you are building a technical work of art, it's all about finding the right product.
Some decisions need to be made early on. You may be fine with MySQL, but you need to think about decoupling different parts of the process and to think about the implications of delays on the interfaces from the start, even if they aren't there.
Most likely you'll never reach Google scale, but, you'll be happy you did that as the application grows more complex and you don't have to test every part of it for each tiny change.
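To make the decoupling point concrete, here's a minimal sketch (TypeScript, with hypothetical names) of hiding the storage layer behind an interface so that queues, caches, or a slower remote store can be swapped in later without touching the business logic:

    // Hypothetical illustration: callers depend on an interface, not on MySQL directly.
    interface OrderStore {
      save(order: { id: string; total: number }): Promise<void>;
      find(id: string): Promise<{ id: string; total: number } | null>;
    }

    // Today: a thin wrapper over a single MySQL instance (queries elided).
    class MySqlOrderStore implements OrderStore {
      async save(order: { id: string; total: number }): Promise<void> {
        // INSERT ... (one fast round trip while the app is small)
      }
      async find(id: string): Promise<{ id: string; total: number } | null> {
        return null; // SELECT ... placeholder
      }
    }

    // Business logic only knows about OrderStore, so a cached, queued, or
    // remote implementation with higher latency can be dropped in later.
    async function checkout(store: OrderStore, id: string, total: number) {
      await store.save({ id, total });
    }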
This is a relatively narrow scenario. Some applications "already made it". I'm helping build an IIoT solution that can process a really ludicrous number of measurements per second. If the code is optimal, we have a pretty positive impact on the company's bottom line. If it's not, we'll become famous for being the first group to unwillingly enter Top500 territory.
Processing efficiency is still extremely important in the embedded world. Don't think the embedded market is small; practically every product you buy has software in it.
When you make, for example, a million units of a product, every single byte and every cycle counts, as there is a large multiplier to take into account.
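To put rough (hypothetical) numbers on that multiplier: an extra 4 KB of flash per unit across 1,000,000 units is about 4 GB of flash paid for across the fleet, and an extra 1 ms of CPU per wake-up, at 1,000 wake-ups per device per day, works out to roughly 1,000,000 CPU-seconds - about 11.5 device-days of compute - burned every single day.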
You can program efficiently in JS. Just maybe drop the fat framework that dynamically checks each and every one of your assigned variables to update other hidden functions (and maybe the DOM), and avoid dragging things through N functions for "encapsulation" reasons and other inefficient applications of OOP principles.
what I want to say is:
cough javascript developers cough
In terms of performance, bad code is bad code. There's nothing about OOP that inherently results in non-performant code. Like all things, one needs to know what they're doing and, more importantly, what their compiler/runtime is doing.
For example, anyone that has been using OO in a non-GC'ed language for a reasonable amount of time will know that it's bad news to constantly create and free small objects due to the MM overhead. However, I constantly encounter JS-heavy web sites/apps that a) become slower as you use them and use an inordinate amount of memory, and b) really thrash the GC because they simply don't pay attention to allocations at all and constantly use constructs that result in one-off allocations that then need to be recycled.
The fact is that the browser and the DOM is a UI layer, and UI frameworks fit perfectly when implemented using OO architecture.
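As a hedged illustration of the allocation point (hypothetical code, tied to no particular framework): both functions below compute the same thing, but the first creates fresh closures and temporary arrays on every call, so a per-frame caller steadily feeds the GC, while the second allocates nothing at all.

    // Allocation-heavy: every call builds two temporary arrays plus closures,
    // all of which become garbage immediately.
    function sumOfSquaresWasteful(values: number[]): number {
      return values
        .map(v => v * v)       // new array + closure
        .filter(v => v > 0)    // another new array
        .reduce((a, b) => a + b, 0);
    }

    // Allocation-light: same result, no temporaries, nothing for the GC to collect.
    function sumOfSquaresLean(values: number[]): number {
      let total = 0;
      for (let i = 0; i < values.length; i++) {
        const v = values[i];
        if (v !== 0) total += v * v;
      }
      return total;
    }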
It's not that simple. My impression - I'm not very knowledgeable about this - is that JS and others got to be around as fast as unmanaged C code (under the best conditions and with a lot of voodoo), but they are still behind in terms of memory footprint. For embedded applications, where you may be forced to count every KB of memory and every CPU cycle, the JITted systems are simply not a good fit right now.
You certainly can write a JavaScript implementation which would work in an embedded environment (just found one: http://www.espruino.com/ - it actually looks pretty nice! I wonder how it compares with Arduino?), but, when coughing JavaScript, it most often means "JavaScript as currently implemented in the four most popular implementations" or similar.
So, while it's true that you can write JS code that's thousands of times more efficient than some bad JavaScript, it's also true that even the good JS is not going to be fast enough for some domains.
This (among other cases) is where AOT-compiled, GC-less languages (or languages with special implementations of those features) come in. And even then, there are applications where even the cheap, mostly-compile-time abstractions of such languages prove too clunky and you need to drop down to assembly (bootloaders, demos, parts of OSes or language implementations).
So, while you can write efficient JS code, it's not going to be efficient enough for many cases.
Your assessment is certainly fair, although I would argue that, looking at how V8 essentially gets execution time down to just twice that of a comparable C program (as opposed to an order of magnitude or two slower), it is worth the convenience it brings to the table for devs (though C++11 and later catch up quite a bit).
Back in the day some devs said JavaScript devs weren't real programmers but script kiddies. I wonder how much of this came from wanting legitimacy? They were like, ok let's see how Java devs do it. Next thing you know the JS devs are also using 75 layers of abstraction.
There are also many multipliers that apply to the web. Every page can be loaded by millions or billions of people, more than once, and usually while sharing resources with other tabs. It all adds up quite quickly. It's like saying gum wrappers don't matter while there's a mountain of garbage growing in the background; maybe not enough people care, but it still matters even though they don't care.
Well said, but embedded developers already know this and behave as if their compute resources are scarce. I read the author as speaking to developers generally, which I think was his intent.
If web developers would be so kind as to consider my compute resources as scarce that would tremendously please me. This sort of thinking is important, you just need to be selective about when and where you apply it.
Since browser vendors like telemetry, there could be an optional plugin to monitor website consumption of endpoint resources, similar to weather sensors that submit to a central public database. Measure and track the consumption of the top 100 JS scripts globally, to motivate improvement.
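A rough sketch of what such a plugin could measure from inside a page (the Long Tasks API and Chrome's non-standard performance.memory are real; the extension and where it would report the data are hypothetical):

    // Hypothetical in-page probe for a resource-telemetry extension (TypeScript).
    // The Long Tasks API reports main-thread blocks longer than 50 ms.
    const longTasks = new PerformanceObserver(list => {
      for (const entry of list.getEntries()) {
        // A real extension would batch these and submit them to a central database.
        console.log(`long task on ${location.host}: ${entry.duration.toFixed(0)} ms`);
      }
    });
    longTasks.observe({ entryTypes: ['longtask'] });

    // Non-standard, Chrome-only heap figures; guard before reading them.
    const mem = (performance as any).memory;
    if (mem) {
      console.log(`JS heap used: ${(mem.usedJSHeapSize / 1e6).toFixed(1)} MB`);
    }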
I want this as a paid service. For example: hey there, person using Chrome, install this extension and let random people run tests on their websites; for the time and resources used you will be paid n, OK?
Then imagine having access to real computers on real networks hitting your site over whatever time horizon you want. It would be great. If this could somehow be done on mobile that would be great too (maybe in a push-notification way: "1x test ready to run, click here to start", the app opens, does its thing, then closes).
For that matter, app stores could profile apps. Apple already has an on-device list of apps which consume the most battery. They could normalize the metrics to account for variables like display time and network traffic.
I don't know if it counts as embedded, but about a month ago we bought a new TV (Sony xbr65x900e) and the built in software is unbelievably bad. Our three year old Roku has better software than this thing.
I wish Sony sold a non-smart version of this TV because other than the software, it's pretty nice.
How can you not use it? IIRC, you have to go through the setup steps for the TV to work.
Even if you aren't actively using it, there are things running in the background that crash. I was watching football yesterday and I kept getting "The Samba Service has Stopped" messages. The only option in the box was OK to acknowledge it. What am I supposed to do with that? It kept coming back so I googled it and found how to disable notifications for that app and that eventually stopped the problem.
I also found the Samba service watches what you watch so that it can target ads better.
The parent comment (which I didn't write) seems to be getting upvoted and downvoted repeatedly, bouncing in and out of gray. Could one of the downvoters explain your objection? (Genuinely curious, looking to learn.)
I don't have the ability to downvote, but I can explain why I might have downvoted.
The comment is repeating a saying, it's not original. I have heard it a ton of times. It's really an entire category of arguments which can be called "premature optimization".
While some optimization is premature, the saying gets repeated in a wildly overly-broad way. It rests on an assumption that is only approximately true in some web-development cases: that bad performance can be traded for more compute power without other negative side effects. That is very untrue for most practical systems in web development. Most compute-heavy decisions do both things - increase the compute resources you need and degrade the user experience. If you're super tiny and working on super simple systems, then there's a certain envelope of request response times where it won't matter whether you are faster or slower. So there are some cases where it is a valid argument, but to me these seem incredibly specific to a certain stage of a startup. Beyond that, what matters is prioritization, not complete disregard for performance concerns.
In a non-startup world (established business), imagine if you went to the boss and said "If you give me 3 or 4 weeks for optimization I believe I can cut our Azure bill by 15%, year over year". You'd get his attention pretty quickly.
I've made just the opposite argument. If you give me three weeks and a few EC2 instances, I can set up an integration environment that runs some automated tests, and we can cut the amount we spend on QA by x%.
Small tip from an ex-QAer: don't worry, once the easy cases are automated, they'll have plenty of work still to do. At least the good ones will. They'll start doing more extreme exploratory testing, setting up combinatorial testing, fuzz testing...
A good QA analyst is like any other good employee - they always have something else they'd like to test.
We are really QA-heavy right now with non-FTEs. I'm not saying they don't bring value, but the types of things they are QAing now are both new functionality and regression testing. I can't think of any good reason that regression testing shouldn't be completely automated.
Yes, nowadays it is very easy to end up with zillions of VMs (and services) running in the cloud without even a minimally organized plan. In the recent past you had to trigger purchase orders to buy hardware. I'm not saying the past was better, but flexibility has a high price without coordination. Imagine thousands of developers launching new VMs with a single click.
...but giving the design a bit more thought to end up with something which either is more performant or can be easily optimised at a later stage does pay off in the long term.
I am usually in favor of imports over writing my own code, but the honest answer depends on a number of other factors. I don't think there is a "best practice" in this area.
Does the imported function do exactly what you need? Not quite - maybe write your own.
Is it a complex function that will require a lot of testing? Yes - an import is probably best.
Are you likely to use other parts of the imported library in the future? Yes - then an import may be the better choice.
Will it look good on your CV? Let's face it, everyone does it.
Yes, because the primary cost to the company is the cost of developer time, not the download bandwidth cost or the power cost for increased utilization level of the user’s CPU.
The craftsman cares that they can accomplish the task without needing a large library. The business cares that it can get to market quickly and profitably at minimal cost. "I prefer your competitor because their code was lovingly hand crafted instead of being shipped quickly with the features I need," said no customer, ever (witness that we all use compilers rather than hand-coding the machine code in an assembler... all the same arguments were made against compiled code back in the day, and compiled code was the right answer, then and now).
This is actually a problem that will fix itself in time. Soon the customer won't be able to run your quickly shipped product because their computer is not powerful enough to run more than 5 Electron apps. If we take a browser, a text editor, and chat software as a given, only two more Electron apps will be able to compete for the customer's remaining resources.
At which point the customer blames Microsoft for making Windows too slow. The chain of events between your app and their computer's speed is sufficiently nebulous in the mind of your customers that its impact on your success will objectively be zero. I get that you hate Electron apps, and I understand why, but the reality is Slack and VS Code and others have been wildly successful. You might not use them, but millions of others do, and they were fast and cheap to develop.
We are completely on the same page there, though I would say "care about the risk-adjusted total cost of ownership." Early on, the business doesn't know if the product will succeed, so the risk is high and any developer time/dollars invested are a disproportionately high risk-adjusted TCO. Once the product is successful, the operational costs dominate and the risk factor of time/dollars spent on dev time to optimize the code falls, justifying time spent to improve efficiency across many measures - but those things don't kick in until there is some measure of success in the market. Until then, the risk-adjusted TCO of dev time is very, very high and needs to be aggressively minimized to achieve that minimum risk-adjusted total cost of ownership.
The point is that cloud computing costs are small compared to labor costs, so it's a waste of time to make a 0.005% cost optimization that takes a week of work.
I would also love to see some examples of a 2G (!) library that people are casually importing. Where have you had this problem?
Well, strictly speaking, developers are also customers for a programming-tools company, and there the tidiness of the code is important. I'm sure there are many other domains where writing your software well is actually encouraged by economic factors :)
I really like the point about considering cloud computation costs. It would be great if individual developers could get feedback like, "This patch caused X% CPU cost per request, resulting in Y% monthly cost increase to keep projected usage below alert trigger levels." I also like the note about the headless abstraction, although I think it could gain some strength by talking about AWS Lambda or the actor model.
But I think the dedication to writing perfect code without executing it is misguided. It's 2018 - we have interactive debuggers, excellent profiling tools, and unit tests. Most developers have a computer with 4+ cores and 8 GB+ of memory. It would be foolish not to take advantage of that.
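That per-patch feedback is mostly arithmetic once per-request CPU time is measured; a toy sketch of the calculation (all names and the cost-per-CPU-second figure are hypothetical):

    // Toy estimate: map a patch's per-request CPU regression to a monthly bill delta.
    function monthlyCostDelta(
      cpuSecBefore: number,       // CPU-seconds per request before the patch
      cpuSecAfter: number,        // CPU-seconds per request after the patch
      requestsPerMonth: number,
      dollarsPerCpuSecond: number // derived from your instance pricing; made up here
    ): number {
      const extraCpuSeconds = (cpuSecAfter - cpuSecBefore) * requestsPerMonth;
      return extraCpuSeconds * dollarsPerCpuSecond;
    }

    // Example: +5 ms CPU per request at 100M requests/month and $0.00001 per
    // CPU-second is 500,000 extra CPU-seconds, i.e. about $5/month; at 10B
    // requests/month the same regression costs roughly $500/month.
    const delta = monthlyCostDelta(0.020, 0.025, 100_000_000, 0.00001);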
One thing I hate about the modern disregard for resource consumption is that I can't tell whether a given app or program is merely sloppily inefficient, or if it's malware-laden. (Too often, it's probably both.)
Unlike the stand-alone, isolated mainframe era, our applications today are interconnected.
In an academic environment, perhaps the author didn't experience intersystem dependencies. In production environments, however, there was arguably more interdependency with (and, therefore, risk to) other systems and users.
Take a manufacturing environment. Shop orders take in inventory information, labor detail, assembly progress, facilities and supplies usage, etc., any or all of which can involve independent systems. In turn, each production step can create information that needs to go back to each system.
Any error or change in those inputs and outputs could force a rerun of all systems downstream of the first error. This is especially noteworthy to the guy/gal being called in at 3AM to unravel such hairballs.
I'd daresay that most interconnection with external systems in the modern mobile environment is primarily used for social media, tracking and other privacy suckage. (Ghostery output can be quite surprising, for example.)
> I'd daresay that most interconnection with external systems in the modern mobile environment is primarily used for social media, tracking and other privacy suckage. (Ghostery output can be quite surprising, for example.)
The web has become a very frequent path for machine-to-machine communications through APIs, besides other, more direct pathways for internet traffic on ports other than 80/443.
Absolutely, 100%, you should optimise for your resources. In a lot of scenarios those resources aren't CPU, disk, memory, and bandwidth; instead they are developer time. But certainly, if computer hardware becomes more of a bottleneck (cost, time, etc.) than manpower, then optimising for hardware makes sense. Like most things in life, it isn't binary, and I believe most developers already embrace this: making compromises in order to meet deadlines, writing maintainable and debuggable code, writing code to help with tomorrow's tasks, learning new techniques and attempting to apply coding ideas that may help in future tasks, making code run optimally enough for the given scenario, etc.
This website fully pegs one CPU core on my laptop (Firefox 57.0.4 on Ubuntu 17.10, with uBlock and Ghostery enabled). Apparently these lessons haven't reached their web design team.
Edit: investigating closer, this even happens with JS disabled, and also in Chrome (with less load though).
I'm not seeing anything like that, but you got my interest.
The page is only using 14 MB of memory, which is a bit higher than some pages, but it's about 4 MB of JS source and another 4 MB of objects held in memory. It isn't desirable, but it shouldn't be causing any issues.
And though the page does load fairly quickly for me, a glance over what it does during that load makes me suspect I know your issue: Styles were recalculated more than 75 times, with the repaint happening more than 35 times.
There's also a huge peak in the middle of the JS being executed. The culprit [0] has a tightly packed for-each loop: three nested for-eaches, each one containing a lambda, which contains two or three more lambdas. It's not performant code. That particular script is also 500 KB more of the same kind of code. (Might also point out that the form handler on that page is an even bigger script.)
So: I'd think it's the repaint from too many styles coming in overriding each other, but it might just be a reliance on a large library, which doesn't seem to be well thought out.
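For what it's worth, the repeated style recalculation usually comes from a pattern like the hypothetical snippet below: interleaving DOM writes with DOM reads forces a synchronous recalculation per element, whereas batching reads before writes lets the browser coalesce the work.

    // Hypothetical illustration of forced style/layout recalculation.

    // Bad: write a style, then immediately read a layout property; the browser
    // must recalculate styles and layout on every single iteration.
    function thrash(items: HTMLElement[]) {
      for (const el of items) {
        el.style.width = '120px';
        console.log(el.offsetHeight); // read right after a write => forced reflow
      }
    }

    // Better: do all the reads first, then all the writes, so the browser can
    // fold everything into one recalculation and one repaint.
    function batched(items: HTMLElement[]) {
      const heights = items.map(el => el.offsetHeight); // reads
      for (const el of items) {
        el.style.width = '120px';                       // writes
      }
      console.log(heights);
    }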
Experientially, learning to code on constrained systems and with constrained tools does (re)enforce a lot more pre-checks before execution, because the cost of failure to your process is higher.
A lot of the language around "why functional programming" goes to the same place. Strongly typed, functional solutions expose problems during code development, which avoids runtime problems caused by sloppy thinking.
I also think simpler is good. So, coding disciplines which favour simple techniques (within limits; this is not one-dimensional) are good. If you have to exploit a very complex mechanism, the old-world, old-school method was to look in the NAG library for a well-written solution (presuming numeric problems, which predominated then) - and now we do much the same: look for a solution in your language's compatible runtime-library space, which is a well-trodden path.
Not sure that basing this on experience of "hem hem" university usage of mainframes (and presumably the previous gen at that) is actually that relevant to real world usage at the time.
Even back then it would be impossible for a serious mainframe program to be written and expected to work first time.
I disagree with most of this article, and I have ~25 years experience. No, I didn't program mainframes, but I'm tired, as are younger programmers, of even older programmers trying to say that the way they did programming is still "better".
The writer isn't entirely wrong, but the simple fact is that software isn't written the same way anymore. Stop trying to force antiquated methods down younger people's throats. The way I wrote programs 20 years ago is inherently different from the way that I wrote programs 10 years ago is inherently different from the way I write programs today.
15-20 years ago, we didn't write tests. We had QA that wrote our tests for us. We tested the code as well as we could (I became pretty damn good at testing my own code) and then we threw it over a wall to QA. Today, we have zero QA and I write tests for my own code.
10-15 years ago, you aimed for 0 defects, especially for enterprise code, because your enterprise customers couldn't afford downtime. Today, in a SaaS environment, you care about defects, but you have a global set of customers, and you roll your code out slowly and watch metrics.
I have a friend in growth at Facebook and his manager got mad at him because he was focusing too much time on testing his own code. Apparently he's supposed to leave that to external QA, and you can always fix the code later. On some growth teams, code quality and maintainability don't matter, all that matters is getting customer growth with new features as quickly as possible. Is that inherently wrong? No, it's a different way of doing business. 10 years ago there was no such thing as a growth team.
The way software is used is different, and the way software is developed is different. Mainframe methodologies, while interesting to read about, are not relevant. Things like "optimize upfront" are nonsense to me, especially in a global context. You iterate on your features quickly, including optimization. You couldn't do that in mainframe computing, but these days I deploy to production 10 times a day, and depending on how I deploy, I can see problems fairly quickly and iterate without affecting most of my users. That's definitely not a paradigm that you would see back then, when you would have to schedule time, etc.
I did do mainframe programming and I do webapps and phone apps these days, so I've kept up.
It's gotta be both approaches (yours and the article's), but the real problem is that the demand for programmers is so high and the barrier to entry is so low that quality has suffered: the quality of the libraries, build systems, documentation, designs, interfaces, all of it.
You can't magically drag all modern programmers through the mud using line editors and 16 bit processors for 2 years to learn everything the painful way.
I honestly don't see a way out. We're in the eternal September of software development and it's all downhill from here unless we make some commitment to raising the barrier to entry and decreasing the incentives, making it hard again.
"the real problem is the demand for programmers is so high and the barrier to entry is so low that the quality has suffered"
Interestingly I see a whole world of difference in quality between Python and Perl libraries compared to JavaScript libraries. OK not all Python libraries are perfect and not all JS libraries are shit, but in general the backend stuff seems of a far higher quality.
There's always a "first" language - the first programming language that's learned and taught. It was QBasic in the 90s, for instance, then Visual Basic, Java, PHP for a bit, then Ruby - and it's certainly JavaScript now.
The first languages, during their reign as first languages are always derided. When something snatches the crown from JavaScript (it'll happen however inconceivable this is), I'm sure things will settle and it won't be so bad any more.
It's like how at the end of its life, most of the people still using AOL instant messenger were respectable computer experts. Same idea.
It's a skill, not a fact. Skills are learned tacitly through experience, such as riding a bicycle: you have to ride to learn. The problem is that everything has changed so much.
When I started, compiling took serious time (hours sometimes). So you were much more careful about making mistakes. Compilers also had bugs, as did linkers, debuggers, you had to know how to spot these things and when to question your code and when to question your tools.
Operating system containment and protection was more an intention than a reality and it was relatively easy to lock up or crash a machine through faulty code. Workstation uptimes of weeks was seen as impressive. These days, things are so stable that "uptime" just tells me when my last power outage was.
When we released software it was on physical media, which was either mailed to people in physical boxes or running on physical machines that were shipped out. Not making mistakes was much more important in that situation since you couldn't just deploy an update without a lot of cost and ceremony.
It's all changed so fundamentally; I'd be open to having an instruction course where people have to target some vintage machine (which we'd likely have to virtualize) and have them just deal with it for 6 months. I don't know how many signups you'd get though.
What’s the point, though, in learning about a vintage machine? It’s fun for hobbyists but it’s not useful for real life. That’s the point. The industry has changed and old dinosaurs, which I consider myself a part of, have to adapt.
For example I hate dependency injection. I despise it, I think it’s stupid. But my company does this, so I do it. Many other companies are doing it. I adapt or die.
With Amazon, Google, et al. cloud computing we are back to pay-per-cycle, so at least some of it makes sense if you are, for example, deploying millions of CPUs.
Software development is still in its infancy. It will slow down eventually and stabilize.
The last two decades were a madness of inventions, with computing power doubling almost every 3 years, new languages and paradigms invented, new tools, the internet, the web, phones, giant displays, small displays with touch. We surely won't get that much change in the next two decades.
People have been beating this drum as long as I can remember. Nobody wants to have to have those pesky engineers around to actually do the work - they want to drag and drop some components around, provide an incredibly fuzzy description of what it should do, wave their finger in the air, and voila, working software materializes.
This is one of those "Next year, in Jerusalem" ideas that is perpetually 20 years away from reality.
> On some growth teams, code quality and maintainability don't matter, all that matters is getting customer growth with new features as quickly as possible. Is that inherently wrong?
Yes. That "growth team" just added a bunch of inscrutable garbage to your code base, perhaps hoping that someone would clean it up later. Of course no one ever will, since they're too busy "getting customer growth with new features as quickly as possible."
My current job is converting a growth project into a sustainable and maintainable service. The code is the worst production code I've ever seen in 25 years. I'm shocked it works. Everyone who worked on it was a fresh grad, and it's shit code for the most part.
But is it inherently wrong? No. It introduced a new feature quickly, a lot more quickly than I ever could have. The code I produce is maintainable and relatively bug-free, but I couldn't have gotten it up and running as quickly as these kids did.
Also, the business decided this is what they wanted to do: invest a small amount of money to see if the feature works, and then, if it does, pass it on to more senior programmers who turn it into a real service. If it doesn't stick, throw it away.
It's not the best, and I would never employ it, but it's one strategy, and it works if you care more about growth than efficiency.
Facebook doesn't make money by writing working software. They don't lose money when the software doesn't work as intended. It's a terrible example for reliability.
"On some growth teams, code quality and maintainability don't matter, all that matters is getting customer growth with new features as quickly as possible. "
Optimizing for problems you don’t yet have just keeps you from launching and getting successful enough to actually care about your compute costs.