We’re approaching the limits of computer power – we need new programmers (theguardian.com)
227 points by notinventedhear 12 days ago | 254 comments





The major problem areas are those where it's economically "best" to do the computationally inefficient thing.

The obvious example is the quick-to-build MVP, but many of the bigger problems come from platform conflicts. Because we have at least five different actively uncooperating operating system platforms, it's hard to build portable native apps - so people build Electron apps instead. We also use the web browser as a competitive battleground; due to coordination problems only one programming language and UI model is possible, although another is creeping in via WebAssembly.

Then there's the ongoing War On Native Apps. Every platform holder would love to take a 30% cut of the profits and veto which applications can run on the platform. We're left with Windows (non-app-store) and, sort of, macOS (although watch out for notarisation turning into a veto in the future). And sadly this locked-down model has very real benefits in malware prevention. Systems which run arbitrary code get exploited.

Beyond that there's cryptocurrency, where finding a less-efficient algorithm is a design goal to maximise the energy wasted, in order to impose a global rate limit on "minting" virtual tokens.


Don't forget the benefits of portability and the need to hire people as factors contributing to the use of higher level, generally less efficient languages: a few days back I read a description of C as a "hard to find skill these days".

There are plenty of people with real skills in C working in embedded. It might even be easier to find C than C++ developers.

I challenge you to find a C “boot camp.”

In fact doesn’t this point to a gap in the marketplace? Where are my “IOT ALL THE THINGS / 5G / Edge” bootcamps? Where are the “leetcode” challenges that talk about proper sampling rates for an 8-bit A/D converter, or implementing a closed loop PID in a 16-bit architecture?
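
For anyone curious what that second kind of exercise looks like in practice, here's a minimal sketch of a fixed-point PID step that could run on a 16-bit MCU; the Q8.8 scaling, the gains and the clamp range are made-up illustration values, not something taken from any particular course or product:

    /* Minimal fixed-point PID step for a 16-bit target (illustrative only).
       Gains are Q8.8 fixed point; all values here are arbitrary examples. */
    #include <stdint.h>

    typedef struct {
        int16_t kp, ki, kd;   /* Q8.8 gains */
        int32_t integral;     /* accumulated error (no anti-windup here) */
        int16_t prev_error;
    } pid16;

    int16_t pid_step(pid16 *p, int16_t setpoint, int16_t measured)
    {
        int16_t error = setpoint - measured;
        int16_t deriv = error - p->prev_error;
        p->prev_error = error;
        p->integral  += error;

        /* 32-bit intermediates avoid overflow; >>8 removes the Q8.8 scale */
        int32_t out = ((int32_t)p->kp * error
                     + (int32_t)p->ki * p->integral
                     + (int32_t)p->kd * deriv) >> 8;

        /* clamp to the signed 16-bit range of a typical PWM/DAC register */
        if (out >  32767) out =  32767;
        if (out < -32768) out = -32768;
        return (int16_t)out;
    }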

I suspect that that's what the grandparent comment is getting at: there's so much talk about the former and so little about the latter, even though every computer engineer graduating from an ABET-accredited institution has these skills.


> I challenge you to find a C “boot camp.”

C isn't a language that lends itself to bootcamp-style learning.

With Javascript, you can get something on-screen in a few minutes, and even if you make mistakes, you will normally see something. It's a more forgiving environment.

With C, a small error prevents compilation altogether, and it's going to be a relatively long time before you're ready to progress past the "printing text to the console" stage.

C is flatly harder to learn, and unless you're the kind of person who likes mental challenge, it's less rewarding than Javascript. It isn't the kind of thing you tackle because you need hirable job skills by the end of the month.

There are still some excellent C tutorials out there (for example, I think Handmade Hero's[0] intro to C is good, and Handmade Hero itself gets you to the "shiny colors on the screen" stage very quickly), but HH has a different mentality than a bootcamp. HH is about learning, exploring, breaking things, and figuring them out on the fly. A bootcamp is about gathering the minimum knowledge necessary to be productive as quickly as possible.

[0]: https://www.youtube.com/watch?v=F3ntGDm6hOs


A bootcamp is about gathering the minimum knowledge necessary to be productive as quickly as possible.

...and that nicely sums up the problem with software today.


That's a sunny view of the software of yesteryear.

That's what the market wants. Too many buyers who don't care about what's under the hood

> it's less rewarding than Javascript.

This varies with goals, attitudes, background, bias, etc. Besides, if you know a little C, you can livecode over at glslsandbox or shadertoy and be immediately rewarded.

> It isn't the kind of thing you tackle because you need hirable job skills by the end of the month.

No, not really. This also roughly says "JS doesn't necessarily require lots of experience" which is not much of a plus, as someone already pointed out.


The C "boot camp" is called K&R (plus a plain Linux development environment). Plenty of devs have gotten started with just that, and in around the same time.

The problem with K&R is that it is useless for building actual, working C applications. Even being up to date with the language's spec means very little, as getting every different library up and running on each platform is a fairly high barrier, much higher than on other platforms.

The companion Kernighan and Pike is very good on building full Unix command line utilities. But yes, the platform learning process of getting to hello world can be remarkably hard.

I never thought of "The UNIX Programming Environment" and "The C Programming Language" as companion books. Thanks for opening my eyes to that.

Looking at the typical salaries in embedded, it can't be that hard to find ...

Plenty of people make Electron apps aiming to target only a single platform; it just happens to be so easy to add the others that most do. The real problem is that native platforms suck to code for or have too high a barrier to entry.

It often seems the main target is the web, then Electron steps in to provide an “app”. Slack really comes to mind here.

For that I already have a browser installed.

I just realized that this situation hasn't improved much in twenty years. But the problems are really commercial (or maybe "political") not technical.

Then there's the ongoing War On Native Apps.

I'm no prophet, but I did predict then that the browser would have eaten all the business applications space by now. It was just obvious. Colleagues objected that the web could not match rich native programs.


Well it can't, but it doesn't matter in the way your colleagues thought it would.

Excel is a much better product than Google Sheets, but having the better product doesn't mean having the winning product.


Sometimes there's one feature that's so useful it justifies the lack of many others. Google Docs is a terrible word processor compared to Word or even Pages, but being able to edit a document at the same time as someone else (with full features and little friction; IIRC Pages doesn't support Track Changes while collaborating) is so useful that I end up using Google Docs anyway.

What's most annoying to me is that the "online-ness"/"cloudiness" people (myself included) associate with web apps isn't really dependent on the front end being web based. You should be able to build collaboration and similar features just as easily with native apps as with web tech. On mobile the lines are blurrier, but on desktop there seems to be a division between offline native apps and online web apps, with very few bridging the gap with native online apps.

Your comment also showcases how IMO a lot of software companies don't compete on tech. They compete on UX [1].

When I first came on HN and learned about YC's motto (build something users love) this idea was reaffirmed.

[1] Google optimizes for a collaborative quick spreadsheet program (handy for consumers), and as other comments say, Microsoft focuses on pro spreadsheet use (e.g. finance).


I won't say that they compete on UX; it's more that they address different needs from different user groups. Some users need collaboration, others need niche math functions.

We switched from google docs to Dropbox paper. It has one very useful feature which would prevent me from moving back: tracking todos with name and date. Every day I get an email which lists all upcoming todo deadlines across all paper documents. Super convenient way to track your todos.

How is Google Docs a terrible word processor? Do you mean that it lacks the advanced but rarely used features of Word?

Yep, WordArt...

Honestly though, I do think Word captured a really nice standard feature set. And Docs does a darn fine job of matching that set one to one. The image placement and handling can be a little wonky at times (at least the last time that I used it), but that's what one gets for trying to handle it all in HTML/JavaScript/canvas. For what it does, it's a mighty fine product.


Conspicuously missing for me:

- Always-visible word count (added recently, but missing for nearly a decade)

- Custom text styles—you can modify the existing ones, but not create new ones with new names


I do believe a browser app can do pretty much the same as a native one, but I agree that the important bit is they didn't see the big picture: mostly free tech, no installation, safer, low maintenance, truly client-server out of the box, etc.

Actually the web now is 100X more beautiful and responsive than at that time. I mean what you can do with an intranet server, not the radioactive media monstrosities.

I'm not really a spreadsheet person, but I can believe Excel is better than Sheets. Is web vs native the reason, though?


There are browser-based versions of Excel and probably every other component in Office. It's always clunkier in my opinion, and loses some feature or other depending on what it is. The added layer of security also adds annoyances (like getting constantly kicked out after a period of inactivity) and new bugs.

To the user I'd say it's a trade-off that gains you little or nothing and loses a lot over native apps. The benefits of switching to browser and cloud based apps go to the organization you work for and the software companies selling the products.


The web is the ultimate "just write code" platform there ever was, literally everything that's not your PWA and API is handled by someone else in the chain.

Excel is still king in finance and much of other demanding cases.

Google Sheets ate the lower end, though; it's a bit like iOS vs Android.


> Excel is a much better product than Google Sheets, but having the better product doesn't mean having the winning product.

Much better product? Sheets takes literally seconds to download and install and runs on all your devices. Also it automatically syncs your data between devices and sharing data with other people is as easy as sharing a website. These are very important features in my view and makes Sheets into a better product than Excel. A power user might have different opinions, but to me writing sheet.new in my browser is just so much more convenient.


It's been possible for multiple people to edit an Excel file simultaneously for a long time. Since 2017 (at least) you can use OneDrive as the file location, so you get all those syncing and sharing benefits you mentioned. The newer Click-to-Run installer takes about 2 minutes to get the app to a usable state, and if that's just too long, there's always https://office.live.com/start/Excel.aspx

Excel also has a web client that can be downloaded in seconds and has more features than Google Sheets.

I didn't know that, pretty neat. One feature it has is that it picks my local language, and I see no option to change it to English like I have everywhere else, so I still won't use it. I really dislike internationalization efforts; they are often so bad that they make everything a lot harder to use for non-Americans than if they just got the same page as Americans.

Too many features.

Why are you comparing Sheets to a desktop app when the comparable product is Excel 365? (Which handily blows Sheets out of the water.)

> this situation hasn't improved much in twenty years

It's gotten much worse. Now you have iOS, Android, Windows, Mac, web, and Linux(?). In 2000, you had Windows. You couldn't do anything interesting on mobile, Web 2.0 (cringe) wasn't a thing, and Mac's market share was about 3%.


It's pretty weird to me to try to picture Microsoft's monopoly as a better situation than the current one?

I think the point they were trying to make is that in a world with platform monopoly, just making your app for that platform is a logical path for a developer

Thankfully not everyone is jumping on the Electron bandwagon.

Flutter looks promising for solving a lot of the platform conflict problems.

Except then you have to use Dart, or call into Dart from some other language. There are many people who dislike Dart or otherwise prefer to use other languages.

I really feel like Flutter would have taken off so much more if Google had just used Typescript instead of using it to push Dart.

Any multi-platform framework will inevitably hit the hurdle of supporting only the lowest common denominator. For many applications this is fine, but for the sake of consistency I hope native applications will still live on for a while.

Having worked with React Native and Xamarin: wherever platform specific problems come up, there is some sort of "escape hatch" you can use to tailor to it.

Still room for improvement but it's not so restrictive about the common denominator.


Sadly the flutter team seems more focused on mobile and web than desktop.

It’s a bummer but the reasons are obvious. Mobile is the reason for Flutter’s existence; if they don’t win there, there’s no win anywhere else.

Ok, but web?

Google is a web company (at least in their heads) and Dart was supposed to replace JavaScript (lol). They have used GWT for their projects since forever, and now many of those projects use Dart and Angular (AdSense for sure, and many other very important products for them, $$$), so it's important that they capture the web side of things with Flutter: Google's own PMs and VPs are the most important clients for ensuring Flutter's future.

Please note I didn't mention anything technical; it's pure product management and strategy, and that's one of the reasons I'm optimistic about Flutter's future.


Good point.

Honestly I seriously doubt Flutter will ever be popular for web. It recreates everything already included in browsers like DOM, CSS, text editing, etc. There is already too much bloat with modern JS apps.


Now this is horror.

Electron anecdote: I joked with a coworker that they had left their "out of office" status icon on Slack in order to work in peace.

Turns out that it had already been removed, but Slack was still displaying it.

⌘ + R (the refresh page shortcut) solved it. Electron might help devs get something out quickly, but all these layers have a cost.


Client and server side state sync is a hard problem regardless of whether your app is native. A native app wouldn't automatically handle this.

Some IRC clients would change the nick of the user if the user toggles that they are AFK.

for some reason pretty much every time I have such an issue, command + r solves it, must be my luck

>Beyond that there's cryptocurrency, where finding a less-efficient algorithm is a design goal to maximise the energy wasted, in order to impose a global rate limit on "minting" virtual tokens.

I don't disagree with the gist of this, but your technical description verges on nonsense. I'm questioning if you're serious.

>...finding a less-efficient algorithm is a design goal...

At no point is anyone searching for an algorithm. Most mining algorithms were chosen at random or for novelty; Bitcoin uses double SHA-256, Litecoin uses scrypt, Primecoin searches for primes.

>...maximise the energy wasted...

Energy is wasted during mining in order to maximize security. The waste is a side effect.

>...in order to impose a global rate limit...

This is plain false.

>..."minting"...

It's called "mining". I wouldn't complain if this wasn't in quotes.

The whitepaper is only nine pages, but nobody seems to read it. https://bitcoin.org/bitcoin.pdf


I actually prefer "minting" as it is a more accurate name for the activity. Usually governments mint coins. No one mines a coin whole from the ground. The quotes show they know it is an odd usage. I mine cryptocurrency. I "mint" new-type coins.

>The quotes show they know it is an odd usage.

Using scare quotes to mean that nobody else says something strikes me as odd, but you're probably correct.


Imagining you were doing so, what would you do to denote an intentional odd usage? (sic) is used if the originator is incorrect.

The word "mint" really takes the meaning out of the word "mine". True though, minting is part of the economics included in mining.

Usually governments mint coins, but no government (or centralized entity) currently operates a legitimate network that matches up with the same properties as bitcoin.

I might have used an asterisk, maybe? :)


Governments need to mine or turn to mining companies to get the raw material that makes their coins. This is what you are doing with cryptocurrency. You look for the bits that make a coin valid and then the network mints the coin.

That's not true. The coins that advertise ASIC resistance have chosen algorithms which are deliberately stubborn to optimize with HW. In other words, inefficiency (=> high resource consumption) is an explicit goal.

I did forget about ASIC resistant coins. I think my general point still stands.

You're making a logical leap between intentional GPU/CPU coins and inefficiency as an explicit design goal. GPU/CPU coin developers are most likely true believers in a distributed security model. They could also own GPU farms or botnets. I highly doubt developers design cryptocurrencies while dreaming of squandering global resources.


If it's a real consequence of their actions that they're aware of, they're complicit in the continuance of that effect.

I have read the paper; unsurprisingly, it doesn't address any of the subsequent developments in the field.

>> finding a less-efficient algorithm is a design goal

The Bitcoin paper doesn't actually specify a particular algorithm at all; it just says "such as":

> To implement a distributed timestamp server on a peer-to-peer basis, we will need to use a proof-of-work system similar to Adam Back's Hashcash [6], rather than newspaper or Usenet posts. The proof-of-work involves scanning for a value that when hashed, such as with SHA-256, the hash begins with a number of zero bits.

The Hashcash paper uses the term "minting".

>>...in order to impose a global rate limit..

> To compensate for increasing hardware speed and varying interest in running nodes over time, the proof-of-work difficulty is determined by a moving average targeting an average number of blocks per hour. If they're generated too fast, the difficulty increases.

ie to limit the global rate of block generation. Which is what makes it useful as a global distributed timestamp server.

>> maximise the energy wasted

> The steady addition of a constant amount of new coins is analogous to gold miners expending resources to add gold to circulation. In our case, it is CPU time and electricity that is expended.

As everyone noticed fairly early on, like gold mining, this creates a means of expending energy to produce something which can be sold. Just as it's economically advantageous to burn down rainforest, it's economically advantageous to perform a trillion SHA operations and throw away the results of almost all of them.
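
To make the quoted "scanning for a value that when hashed ... begins with a number of zero bits" step concrete, here's a toy hashcash-style loop. It uses OpenSSL's SHA256 and a simple leading-zero count, which is a simplification of Bitcoin's actual double-SHA-256 over an 80-byte block header compared against a target:

    /* Toy proof-of-work: find a nonce so that SHA-256(data || nonce) starts
       with `difficulty` zero bits. Illustrative only; build with -lcrypto. */
    #include <openssl/sha.h>
    #include <stdint.h>
    #include <stdio.h>

    static int leading_zero_bits(const unsigned char *h)
    {
        int bits = 0;
        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++) {
            if (h[i] == 0) { bits += 8; continue; }
            unsigned char b = h[i];
            while (!(b & 0x80)) { bits++; b <<= 1; }  /* count zeros in byte */
            break;
        }
        return bits;
    }

    int main(void)
    {
        const char *data = "block contents";
        unsigned char buf[256], hash[SHA256_DIGEST_LENGTH];
        int difficulty = 20;                 /* ~1M hashes on average */

        for (uint64_t nonce = 0; ; nonce++) {
            int len = snprintf((char *)buf, sizeof buf, "%s%llu",
                               data, (unsigned long long)nonce);
            SHA256(buf, (size_t)len, hash);
            if (leading_zero_bits(hash) >= difficulty) {
                printf("found nonce %llu\n", (unsigned long long)nonce);
                return 0;
            }
        }
    }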


One of the best 2-hour practical courses I had was just: write the fastest square matrix multiplication. You could use any language, any algorithm, just no libraries. The target was a 32-core CPU server (this was ~10 years ago). At 5000x5000 all the Java and Python attempts were running out of memory. In C, we tried some OpenMP and some optimized algorithms, but in the end the best trick was to flip one of the matrices so that memory could always be prefetched. Out of curiosity another student tried the GNU Scientific Library; it turned out to be ~100 times faster. My takeaway was: find the right tool for the job!
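
For the curious, a rough sketch of that "flip one of the matrices" trick, assuming row-major doubles: B is transposed once up front so the inner loop walks both operands sequentially (this is an illustration, not the course solution or a tuned kernel):

    /* Naive multiply with a transposed B (row-major). Transposing B up front
       turns the strided column walk into a sequential one the prefetcher
       likes. Build with -fopenmp to enable the parallel loop. */
    #include <stdlib.h>

    void matmul_bt(const double *A, const double *B, double *C, int n)
    {
        double *Bt = malloc((size_t)n * n * sizeof *Bt);
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                Bt[j * n + i] = B[i * n + j];          /* one-off transpose */

        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double sum = 0.0;
                for (int k = 0; k < n; k++)
                    sum += A[i * n + k] * Bt[j * n + k]; /* both sequential */
                C[i * n + j] = sum;
            }
        free(Bt);
    }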

A fun read on cloud scale vs optimized code is this recent article comparing ClickHouse and ScyllaDB (https://www.altinity.com/blog/2020/1/1/clickhouse-cost-effic...)


Yeah, I wouldn't be surprised if the majority of code performing large matrix multiplications these days was written in Python and executed on GPUs by libraries like Tensorflow and PyTorch. With the right abstractions, programmers can be "lazy" and still get great performance.

Matrix multiplication is usually done by a platform-specific BLAS library (BLAS is an API, there are multiple implementations, e.g. Intel MKL, OpenBLAS, cuBLAS). There are some other linear algebra APIs/libraries, but this is what's used the most.

Most of the numerical code that cares about performance for linear algebra uses this API and links an appropriate implementation.
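
For reference, going through BLAS is typically a single call; here's a minimal sketch against the CBLAS C interface (any implementation such as OpenBLAS or MKL can be linked in):

    /* C = alpha*A*B + beta*C via CBLAS, row-major, square n x n matrices. */
    #include <cblas.h>

    void matmul_blas(const double *A, const double *B, double *C, int n)
    {
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n,          /* M, N, K */
                    1.0, A, n,        /* alpha, A, lda */
                    B, n,             /* B, ldb */
                    0.0, C, n);       /* beta, C, ldc */
    }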


The 'written in Python' you speak of is actually Fortran under the hood.

Reminds me of the dude who managed to parse terabytes with awk instead of whatever Spark-like product was trendy.


Leaving aside the optimisations, I assume you are doing N^3 multiplication, whereas Strassen's algorithm with complexity N^2.81, or even the Coppersmith–Winograd algorithm with complexity N^2.37 (with a larger constant), would do better on a 5000x5000 square matrix.

A double array takes O(1) more space in Java vs. C, so running out of memory wouldn't be a problem.

Unfortunately big-O algorithmic analysis doesn't translate cleanly to the real world due to constants and other inefficiencies.

It does in this case. Champtar says it's something about the size of the matrix that makes it run out of space. The overhead of the double array is already paid for.

I just don't buy this. I cut my teeth as a HPC programmer working with C and writing no-lock algorithms. There will always be a need for that, but realistically the vast majority of software being developed is simply not performance-critical. It's designed to work at human speed.

Advances in language, compiler, and runtime implementations will continue to keep up with any growth in the need for performant applications for the foreseeable future, despite the looming collapse of Moore's Law.


> It's designed to work at human speed.

It would be great if most applications worked at human speed. Instead we have web applications taking 5 seconds to load what is basically 3 records from a small database.


...or "instant"(!) messaging applications taking gigabytes of memory and a full CPU core, and yet still can barely keep up with how fast a human can type.

I've often complained out loud with coworkers, while waiting for some horrible webapp to do its thing: "This computer can execute over a billion instructions every second. How many instructions does it take to render some formatted text!?!?"

Related: https://news.ycombinator.com/item?id=16001407


By the looks of it, 15e9.

Software latency is a hard target to optimize; throughput is much easier. While throughput is reasonably easy to optimize for, for latency you will have to fight against each abstraction layer in your code. And that includes layers bolted onto your OS and hardware.


Most of these applications spend a few hundred milliseconds loading the database records, and an additional 4.5 seconds loading the 20 different trackers + advertisements on their website.

I reckon humans can actually go MUCH faster than what their software allows today. I often feel frustrated by software on my various devices that makes me wait around for non-network operations, even though I often pay a premium for top-of-the-line devices. Those little micro-frictions really mess with my mood.

Depends what you mean by human speed. What's faster, a lower tech "human" operation like looking up a word in a physical dictionary (assuming one's handy), or looking it up on dictionary.com, assuming dictionary.com takes 5s to load?

That is human speed though (better than human speed, even). For most human tasks that need to be done, 5ms versus 5s doesn't really matter.

Consider also that spending an hour at the DMV for them to update a database entry or two is also human speed.


What? 100ms is "human speed". When doing anything interactive the difference between 25ms and 5s is monumental. Even just for pressing confirmation buttons 5s is slow enough that you need some substantially faster reacting visual confirmation (loading animation or whatever) to satisfy humans.

5ms versus 5s is the difference between being tempted to check or think about something else or not. Multiply that option over and over with a repetitive task and anyone but your most disciplined monk is going to find themselves getting side tracked regularly.

No, it really really matters. Small things add up, when you're forced to do them multiple times every day.

> taking 5 seconds to load

I want to live in your alternate reality, because in ours anything under 45 seconds is a miracle.


5 seconds is probably fast enough to deliver the content to 80% of the people who need it. If that application has an acceptable bounce rate even with a load time of 5 seconds, then that might be the minimum acceptable human time.

What about battery life?

If you prefer, call it carbon footprint. Python has a huge carbon footprint. We should get rid of slow languages for environmental reasons.


But then we'd have to pour in more manpower, which involves more commuting, more upkeep (AC, lunch), etc.

This will happen at a certain point anyway. The current fashion- and inertia-driven nature of IT is not sustainable in the long term. Tons of money are poured on very questionable projects, all the time.

Plus we have a lot of pretty awesome languages that are mature enough and are serving very different niches (so their union can cover everything in IT) like Rust, Erlang/Elixir, Zig, OCaml (which can be transpiled to two JS variants, BuckleScript and ReasonML), TypeScript, and probably 20+ others.

Not to derail the thread but the dependency on very slow and hard-to-debug dynamic languages like Ruby and Python is getting out of hand.

Statements like "But it's easier to find devs for Python and Ruby than it is for Rust and Elixir" might be statistically correct now but that means nothing. People change technologies as market demands change so I am absolutely not worried about displaced programmers. There's almost no such thing as displaced programmers either, 99% of all my acquaintances just learned the new tech their employer wanted from them and moved on to the next stable paycheck.


Only if you are convinced there is a fundamental need for more manpower for code written in faster languages. For me personally, Crystal was the language that convinced me that great dev UX and productivity is possible in compiled languages. As far as I'm concerned, it even beats Ruby in both. YMMV.

Or just stop using python going forward

So... are you saying people are going to have more kids because we stop using Python?

For equal numbers of humans, all those energy/environmental costs you mention are going to be there regardless of which programming language is used...


This has to happen if we use a lot of manpower inefficient tools, and still want to keep our current pace of R&D.

remote

Unless you live in a cold climate and use electric heat? Endlessly gzipping /dev/random would simply cause your electric heater to run less frequently.

Even if the software only needs to respond at a certain speed, scale will quickly make you either pay through the nose for better hardware or optimize the software so that it can respond in a small fraction of the original time.

The trick, as always, is finding balance between paying for hardware and paying developers.


But that is the case now too, and in my experience it has swung to paying through the nose for hardware in general; as more or less a sidetrack I take on projects where I optimise (mostly online) systems. Example: a few weeks ago a startup asked me to check out their setup as they were spending almost $30k/mo on AWS. I spent a few days optimising and now they are down to less than $10k. With some more work it will be a few thousand dollars; there is still so much wrong. But that is less low-hanging fruit so it will be a lot more expensive. Still well worth it imho.

People really bought into the ‘people are more expensive than hardware’ as an excuse to get screwed like this. For $5k in human cost, these guys (and their investors) now save 200k/year in hosting. And this is not an isolated story; I am working on another one at this very moment. Programmers have become so incredibly sloppy with the ‘autoscaling’ and ‘serverless’ cloud ‘revolution’.


I don’t know if you feel this way, but my complaint about the more hyped cloud services is that not only can they be expensive (fine) but the promised time-savings and simplicity of operating the system often doesn’t really materialize either, except in restricted circumstances that you don’t appreciate in advance and only find out later after you’ve already committed.

If it really did save time and were simpler, some companies would (quite reasonably) be willing to pay a premium for that - time is money and all that. In reality it seems like people often end up with the worst of both worlds - it’s expensive, complicated, still needs a huge staff to maintain, and doesn’t even work that well.


Well it is better than making even trivial architectures with actual hardware (I have some pictures of me hauling servers over xmas a long time ago; that was cheap (in monthlies and hardware, not in hours!) but I would and do pay a premium for that). Otherwise I do agree somewhat; most overbearing systems can be done much simpler but we are all preparing (and thus paying) for eventualities that most likely will never occur or will not influence the bottom line.

Tech like AWS Lambda (of which I like the theoretical idea) is meant to remedy the issues with complexity, for a premium. But that premium makes, personally, my eyes water. I cannot see any high volume operation justifying going live with it. Are there big examples of those? And how is it justified vs the alternatives (which are, besides some programmer+admin time and scalability, far more efficient)?


There are some significant high volume cases. We work with companies doing billions of Lambda invocations per month and realising large cost savings. Lambda itself is usually the smallest part of the bill, as one of the advantages of building serverless applications is that you shift the responsibility of certain execution to specially designed managed services as opposed to code consuming CPU cycles; for example API Gateway takes over routing, S3 takes over file system calls, etc. A large portion of the savings organisations see, though, is in time to production, as well as the overhead of managing servers and container clusters, which is a lot more costly than you might think. Especially in the environment we are in now, where qualified DevOps talent is hard to come by and at a premium. Sure, a developer can take some time to try and learn how to put together some infrastructure, but that's time taken away from adding direct value to business needs, not to mention the fallout when things go pear-shaped later because it turns out a few hours of Googling doesn't turn someone into a DevOps expert.

You definitely know what you are doing then; I see mostly the negative cases... The abuses of things for which they are not made etc. Thanks for the insight!

> as well as the overhead of managing servers and container clusters which is a lot more costly than you might think

A lot of people underestimate that, in my experience; I see a lot of people who find it cool setting these up (also, a large number are not doing this scripted but via the web interface). My current case has a myriad of VPCs, container clusters, load balancers, auto scaling, etc., and it looks really impressive, but it's very costly and their dev (who was also devops) disappeared as he buckled under the stress. Also, none of that is needed in this case (not saying there are not many cases where it is needed!).

Anyway I will experiment more with Lambda; I think I'm tainted by the very costly abuse cases I had to move to normal linux environments to make affordable for the startup.


Thanks for sharing. I am aware there is no small amount of cases where the cloud offerings do save money in total.

But to be fair, for most projects the complexity that Amazon's services carry with them is absolutely not justified. Sure I can learn to work with 10-20 Amazon services but even me as a senior guy who knows his way around pretty much anything you throw at him, that's precious time spent not helping the direct business needs but basically making sure the house won't collapse.

And a lot of smaller companies like to merge the "programmer" and "DevOps" titles into one person because of course, that means one paycheck and not two. And as you said, they get angry that you can't become a pro sysadmin in an afternoon.

I suppose I am just trying to say yet again that many companies reach for BigCorp tools when they really ought to be fine with 2-3 DigitalOcean droplets and 1 dedicated DB droplet, plus 1 extra for backups.


But it does save an enormous amount of time. We have numerous customers using tools like the Serverless Framework to help put together sophisticated systems in days that would have traditionally taken months. I've experienced it myself personally and worked with multiple customers who see the same thing.

It's also not just the initial time saving. After implementation, infrastructure maintenance is almost non-existent because the services are all managed for you, and you can focus on providing direct value instead of worrying about whether your infrastructure can meet your needs.


> paying through the nose for hardware in general

You also have to consider that there are limits to how parallel an application can be - Amdahl's Law - at some point even throwing hardware at a scaling issue has its limits.

Of course, there's also a truism that the team who implemented the first pass won't have to support (financially or as a developer) the software when it no longer scales.


No, Amdahl's law is (roughly speaking) a limit to how parallel an algorithm can be. Applications (in the sense of web apps) generally have the potential to scale via Gustafson's law, but we are (IMO) largely held back by frameworks and old ways of programming. https://en.wikipedia.org/wiki/Gustafson's_law
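
For intuition, here are the two formulas side by side with a made-up 5% serial fraction, just to show the shapes (Amdahl flattens out near 1/s; Gustafson keeps growing as the problem scales with the node count):

    /* Amdahl: fixed problem size, serial fraction s caps speedup at 1/s.
       Gustafson: problem grows with N, so the serial part shrinks relative
       to the whole. The 5% serial fraction is an arbitrary example. */
    #include <stdio.h>

    int main(void)
    {
        double s = 0.05;                     /* serial fraction */
        for (int N = 1; N <= 1024; N *= 4) {
            double amdahl    = 1.0 / (s + (1.0 - s) / N);
            double gustafson = N - s * (N - 1);
            printf("N=%4d  Amdahl=%6.2f  Gustafson=%7.2f\n",
                   N, amdahl, gustafson);
        }
        return 0;
    }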

So long as an application needs to share state between worker processes (database, redis cluster, etc.), Amdahl’s law still applies. There are very few modern applications that can truly scale linearly.

You're confusing something. Fully consistent databases are primarily limited by the speed of communication, because they need to replicate writes and queries to all nodes and wait for a response even if a node is on the other side of the planet. Unless your CPU is extremely slow (clock frequencies of a few kilohertz), the speed of light is a significantly more important limit. This is actually a use case where modern CPUs are more than fast enough and we don't need a significant improvement in processing speed. Faster storage and networks are welcome though.

As a guy using a 10-core Xeon workstation with 64GB 2666Mhz DDR4 ECC RAM and a NVMe SSD capable of 2.8 GB/s read and write... I have to tell you that I only partially agree.

I've noticed my compilation speeds got dramatically better (compared to a MacBook Pro and an old-ish i7-3770 desktop PC). And it can handle even the sluggishness of Slack just fine without you noticing a lag, which I view as a huge achievement.

However, one thing my very detailed system monitors are telling me every day is -- 99% of all software we use every day is not parallel enough. So I have this amazingly powerful CPU that only (1) Git garbage collection, (2) PostgreSQL restoring a big backup, (3) Rust compiler and (4) [partially] Elixir compiler can saturate to its full potential.

I'd say that if everybody buys the new AMD Threadrippers and PCIe 4.0 motherboards, RAMs, SSDs and GPUs, we'd all be collectively fine for like 10 years.

The software, however, badly needs more parallel processing baked into it.


Share consistent state. Eventually consistent models (most web apps) are often generally okay.

(NOTE: I don't disagree with you, I am more like paraphrasing you and adding my take.)

In practice most software is light years away from this theoretical limit of "can't be anymore parallelised". And I fully agree that throwing hardware at a problem indeed has limits, although they are financial and not technical IMO.

As mentioned in another comment down this tree of comments, my 10-core Xeon workstation almost never has its cores saturated yet I have to sit through 5 seconds to 2 minutes of scripted tasks that can relatively easy be parallelised -- yet they aren't.

And let's not even mention how my NVMe SSD's lifetime saturation was 50% of its read/write limit...

There's a lot that can be improved still before we have to concern ourselves with how much more we can parallelise stuff. That's like worrying when will the Star Trek reality come to happen.


You're quite right that there's plenty to optimize. It's not that there isn't money in optimizing. It's that there's often not _enough_ money in optimizing to rise to the level of the top N priorities for a business.

Agreed, until you raise it at the right level at the right time. People do not find me for nothing... Usually after the initial launch euphoria dies down and someone looks at the books and asks why such a large % of the expenditure goes there. People start looking around online and see things like ‘our application serves 200k requests/day with one 50$/mo server’ and compare that with their 30k/mo setup barely serving 50k/day and start poking around. It is usually apples and pears, but more often than not there are massive issues. Most of them I would consider beginner issues but they are not made by beginners; many senior programmers I meet simply do not know about normal forms, proper types (all are stringy), proper indexes, O(n^2) etc; they trust cloud scaling to solve it all. And it does! But it costs...

And of course, there is a limit to what you want to spend even if it might make some profit long term. You need to be able to find programmers to maintain things, etc., as well. If I needed something handling massive traffic while handling real business logic but not allowed to cost more than a few bucks in hosting, I would use something like [0]. But that would be silly for maintenance reasons alone. Does anyone know a modern (well maintained, I mean really) equivalent though? I played around with this a long time ago and it is incredibly efficient.

[0] http://datadraw.sourceforge.net/ (github; https://github.com/waywardgeek/datadraw as sourceforge seems down)

Edit: maybe I answered that last question by finding the GitHub version; it seems waywardgeek maintains it at least enough to keep it running.


> Does anyone know a modern (well maintained I mean really) equivalent though? I played around with this a long time ago and it is incredibly efficient.

https://diesel.rs ? Maybe https://tql.antoyo.xyz/ if you care more about ease of use.


Datadraw is not an ORM; it is more comparable to a statically compiled Redis. So it is far less flexible, but it is very efficient/fast.

One of the purposes of Datadraw is for instance to build SQL databases on top of.


> almost 30k$/mo

That's like, a couple full-time developers, AIUI? Maybe even less than that. Perhaps the people who say "people are more expensive than hardware" have a point - at least in the Bay Area. Or you can move to the Rust Belt if you'd like a change.


Sure, but my point was that they cut that bill by 20k PER month by giving me 5k one off... They gave me 10k runway to poke around but 5k was enough to fix it; it was simply that bad to start with. The low hanging fruit in most systems I see is really trivial to fix; they just have no one to do it... I bet other people here have seen that before when thrown into an existing project (and I read Spolsky at an impressionable point in my career, so I am usually the one against rewriting the whole thing outright).

What you’re saying is that there were a handful of bottlenecks that you caught immediately or were found with some simple profiling, right? Not that they made the mistake of writing their app in Python instead of assembly, as the article seems to imply is now necessary.

> there were a handful of bottlenecks that you caught immediately

Exactly. I was responding mostly to the point that most CTOs/management believe you should just let hardware handle it while programmers should just deliver as fast as they can. He says it is always a balance; you cannot pay for optimized assembly when writing a CRUD application, but I claim we completely swung to the other side of the spectrum. For instance, a financial company I did work for had no database indices besides the primary key and left AWS to scale that for them.

And then we are not even talking about Mongo (this was MySQL); Mongo is completely abused as it is famous for 'scaling' and 'no effort setup', so a lot of people don't think about performance or structure in any way; people just dump data in it and query it in diabolical ways. They just trust the software/hardware to fix that for them. I recently tried to migrate a large one to MySQL, but it is pure hell because of its dynamic nature; the structure completely changed over time while the data from all the past is still in there; fields appeared, changed content type, etc., and nothing is structured or documented. With hundreds of GBs of that and no way to be sure things were actually imported correctly, I gave up. They are still paying through the nose; I fixed some indexing in their setup (I am by no means a Mongo expert but some things are universal when you think about data structures, performance and data management) which made some difference, but MySQL or PostgreSQL would've saved them a lot of money in my opinion. Ah well; at least the development of the system was cheap...


But if they had hired you at the beginning, you wouldn't have been able to save this much money, and it's those savings that actually justify your fee. I think they made the right decision, depending on how long they were burning cash at that rate.

seems like you deserve more of a cut than that.

Well, the premise going in, after a quick (very quick) review of the system, was: 'I will check what I can do in 5 days at $10k; I believe I can help, but if I cannot, you lose $10k. If I can help you in less time, you only pay for that time.' I do not think I can move that to some other deal with that premise. Maybe if I said 'I will do this for 50% of the money you save in the 12 months after I am done' that would work, but this is a side thing which I do because I like optimizing things; if I sell it in another way, it's not bound to time, which will make it a timesink and a risk. It is a choice.

I am curious how you even find a side job like that.

I am definitely a spiritual brother of yours because I love optimising things. But I am very unsure how I would even start a side career with that premise.

Any advice?


> Any advice?

Spend a lot of time with funded startups. Meetups, conferences, etc. They will be happy to talk about this. But also online; you need to 'dox' nicks sometimes, but when you see quite broad questions on Slack/Reddit about performance of systems and you find out this is some (tech) (co-)founder, you can ask them if you can help. I do no-cure-no-pay if the system is an MVP and CRUD; I do no-cure-still-pay if the system is larger and already live. That is not because I want to blackmail the company (and if I like the idea you can give me a % as well instead, all fun and games), but usually because 'wanting to help' is punished when it's 'free', as in no good deed will go unpunished. I did no-cure-no-pay with optimising (and other services on) live systems in the past, but as soon as I touch it, people blame me for all kinds of data loss (while I'm very careful and absolutely always make (offsite) backups) and other misery. So basically what I do is connect with (co-)founders who are in a jam; when they don't have production data yet, I will go no-cure-no-pay; when they have production data they need to keep, I will explore, but if I cannot do anything (for that price, mind you; there is always something to do), I still get paid.

There are probably literally 1m projects in this world at any time, and growing, that have serious issues, that are burning money, and that will crash (all the time, or sooner or later) and need help. For instance, I know of a large state-owned postal/courier tracking system that crashes under load every 48 hours. We tried to help them but they are fine just rebooting (manually!). Fine, that happens too.


What sort of waste do you tend to see more of, if you do this regularly? Is it the case that people are aware of the cost and “don’t care”, or is it a surprising/hidden cost?

There are 2 types: 1) they know the costs and thought it would scale infinitely with money, but it doesn't (crashes, hangs, etc.); 2) they knew it would cost more to scale, but they did not expect it to go up as fast as it does with more traffic (it's not linear).

That is how people reason about it but we could argue it is the balance between who pays for the hardware and who pays for the software.

In other words, if you have a program with millions of users, you make something that performs well enough for people to pay for it. While each wasted CPU cycle then becomes millions of cycles, you never get billed for those, so it doesn't matter to you.

I wonder if an ecosystem is possible where software providers have to pay for ALL resources consumed. It sounds ridiculous, but having any transaction going on would make monetizing software a lot easier.

It would for the most part boil down to billing the end user for the data stored, the cpu cycles and the bandwidth consumed. A perfectly competitive vehicle. Want to invest in growing your user base? Pay part of their fees and undercut the competition.

It would make it more logical if they didn't own the device. The hardware can scale with usage. You just replace the desktop, console or phone with one better fit for their consumption.


> the vast majority of software being developed is simply not performance-critical

Programmers keep saying this, and users keep complaining about slow software.


the vast majority of software being developed is simply not performance-critical. It's designed to work at human speed

But what does that even mean? A 3GHz quad-core can do 12 billion things per second, yet I still regularly experience lag keeping up with typing or mouse movements, scrolling webpages, redrawing windows... the actual interactive experience has gotten much worse since the 90s.


>but realistically the vast majority of software being developed is simply not performance-critical. It's designed to work at human speed.

I learned this by greatly improving a scheduling system algorithm that could schedule 10-12 related (to each other) medical procedures while accounting for 47 dynamic rules (existing appointments, outage blocks, usage blocks, zero patient waiting, procedure split times, etc), bringing it from the existing algorithm's 13 seconds down to sub-second. You know what? It didn't matter. That was our speed test scenario (the most realistically complex one a customer had).

The customer was fine with 13 seconds because it was so much faster than doing it by hand and these customers were paying hundreds of thousands of dollars for the licenses. Because of this, the improved algorithm was never implemented. It was a neat algorithm though.

Absolute maximum performance has its place, it's just not every place.


I have a few PCs running Windows 10 that are older than 5 years. As long as they have SSDs and you're not gaming, they're still plenty fast, even for modern websites.

I have a macbook that's 8 years old with an ssd and 16gb of RAM. Only struggles with gaming on the integrated graphics, and the battery life has always been abysmal with the thirsty 35w i5 cpu.

Usually poorly performant code needs optimisation through a change of approach or mindset. It is the way we are thinking about the problem that is lowering performance. Not necessarily the hardware itself.

I've seen locking brought forward as a critical limit. Long discussions about new hardware and adding nodes and all sorts of expenditure required. We need a larger kubernetes. More GPUs!

I've also been in the situation where we switched to a plain Redis queue (LPOP, RPUSH) scheme and got a 10x improvement just by lowering message overhead. A lot of the very complex solutions require so much processing power overhead simply because they involve wading through gigabytes. Better alternative solutions involve fewer gigabytes. Same hardware, different mindset. Not even talking about assembly language or other forms of optimisation being required. Just a different philosophy and a different methodology.
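
A minimal sketch of that kind of plain Redis list queue using hiredis; the queue name and payload are made up, and a real consumer would likely use BLPOP and proper error handling:

    /* Plain Redis list as a work queue via hiredis: producers RPUSH,
       consumers LPOP (or BLPOP to block). Names/payloads are illustrative. */
    #include <hiredis/hiredis.h>
    #include <stdio.h>

    int main(void)
    {
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (!c || c->err) { fprintf(stderr, "connect failed\n"); return 1; }

        /* producer side */
        redisReply *r = redisCommand(c, "RPUSH jobs %s", "{\"id\":42}");
        freeReplyObject(r);

        /* consumer side */
        r = redisCommand(c, "LPOP jobs");
        if (r->type == REDIS_REPLY_STRING)
            printf("got job: %s\n", r->str);
        freeReplyObject(r);

        redisFree(c);
        return 0;
    }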

Perhaps we need programmers with the mental flexibility to run experiments and be open to alternatives. (Spoiler: we've already got plenty of these people.)


We have a good number of them indeed, but nobody wants to pay them to fix most of the IT landscape. Ironic, right?

Or we can't get them past the HR hiring policies that eliminate all candidates. Been through that myself. I even got multiple tech leads to sit the testing and watched them fail. They were already on the team yet would not be able to get on the team. Absurd but true.

Contracting is such a strange world. I've drifted so far into it I've lost the ability to see how salary based people even get work. All I can do is keep the door open for as many people as possible. Sometimes I need to actually assert the door into existence. This was something I didn't know was possible until recently.


Can you tell me more? Sounds quite humorous. And quite usual...

Every few years something like this gets written. I remember similar things being written in 2004-2005 before the Core 2 line of processors came out.

There are still improvements being made to the current tech, and new takes on it that aren't yet incorporated in the current crop of consumer processors.

Also I happen to think that what makes a computer fast is the removal of bottlenecks in the hardware. You can take quite an old machine (I have a Core 2 Quad machine under my desk) slap in an SSD and suddenly it doesn't feel much slower than my Ryzen 3 machine.


except now it has actually been true for years. Clock rates aren't increasing. Advances in performance have come only from things that are tricky for developers to efficiently leverage (cache, SIMD, more cores). We need developers who understand these new low-level details as much now as we needed that kind of developer in the past.
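
As a small example of the kind of thing that no longer comes for free from clock speed, here's a sketch of summing floats with AVX intrinsics; it assumes the length is a multiple of 8 and the CPU supports AVX, and a decent compiler may well auto-vectorize the scalar version anyway:

    /* Sum an array with 256-bit AVX intrinsics. Assumes n is a multiple of 8
       and the CPU supports AVX; build with -mavx. Illustrative only. */
    #include <immintrin.h>
    #include <stddef.h>

    float sum_avx(const float *x, size_t n)
    {
        __m256 acc = _mm256_setzero_ps();
        for (size_t i = 0; i < n; i += 8)
            acc = _mm256_add_ps(acc, _mm256_loadu_ps(x + i));

        float lanes[8];
        _mm256_storeu_ps(lanes, acc);
        float sum = 0.0f;
        for (int i = 0; i < 8; i++)   /* horizontal reduction of the lanes */
            sum += lanes[i];
        return sum;
    }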

> except now it has actually been true for years

Sure it is true, and it definitely isn't just a tech journo writing a quick piece to get some clicks... I am quite cynical these days.

There hadn't been any competition in the desktop CPU space for years, until 2019.

Also clock rates haven't increased since the mid-2000s (there were 5ghz P4 chips). Clock rates being an indication of speed stopped being a thing back then when I could buy a "slower" clocked Athlon XP chip that was comparable to a P4 with a faster clock.

Also more stuff is getting offloaded from the CPU to custom chips (usually the GPU).

> We need developers who understand these new low level details as much now as we needed that kind of developer in the past.

I suspect that compilers and languages will keep getting better. I work with .NET stuff and the performance increase from a rewrite to .NET Core is ridiculous.


Minor correction about desktop CPUs: Ryzen 1 came out in Q1/Q2 2017. Initial problems were mostly solved by 2018.

> Sure it is true. It isn't a tech journo writing a quick piece to get some clicks. I am quite cynical these days.

It might be such a piece and it could still be true.


Most things we do now aren't inherently more computationally demanding than what we did 20 years ago.

I first heard this when I was in school in the mid nineties.

We're mostly fighting Murphy's Law, not Moore's law. As said below, most problems are so far from being compute/$ or otherwise technically limited and far more about organizational / political issues putting vast inefficiencies into these systems and priorities that fund their creation.

Definitely. Honestly, I'd be excited if hardware stopped progressing. Ever-better hardware and ever-shifting platforms cover up a multitude of organizational sins. There's much less incentive to write good code given the rate at which code gets thrown out, and given what people are willing to spend on their AWS bills before asking if something could perhaps be improved.

Yep, agreed. But to be fair, hardware will stop progressing pretty soon IMO. PCIe 4.0 backbone is quite strong and a lot of companies, when buying workstations or servers based on it, won't move on from it for quite a while. Or so I hope.

From the article:

>In a lecture in 1997, Nathan Myhrvold, who was once Bill Gates’s chief technology officer, set out his Four Laws of Software. 1: software is like a gas – it expands to fill its container. 2: software grows until it is limited by Moore’s law. 3: software growth makes Moore’s law possible – people buy new hardware because the software requires it. And, finally, 4: software is only limited by human ambition and expectation.

Codified anti-recycling.


I don't think this article is seeing the whole picture. The author talks about how programmers used to have to cram a program into 16KB of RAM (or ROM) and it had to be efficient. But that came at a huge cost. Reading 6502 Assembly with variables that could only have up to 6 characters for their names, and were all global was a huge pain in the ass!

We have great optimization tools freely available these days, and when necessary they are used. We also have great standard libraries with most languages that make it fairly easy to choose the right types of containers and other data structures. (You can still screw it up if you want, though.)

As soon as it becomes economically necessary to write more efficient code, we will be tasked with that. I work on professional software and we do a hell of a lot of optimization. Some of it is hard, but a lot of it could be done by regular programmers if they were taught how to use the tools.


The renewed interest in C++ and other compiled languages is an indication of the need to get more efficient. Programmer skill will become more important in the future. But they won't be today's skills. I expect that programming in the future will be more about getting the AI to do what you want rather than writing code directly.

I’m already an AI you can pay to get a computer to do what you “want”; the problem is ‘what you want’ is so poorly specified, there’s no way to turn it into an actionable set of steps!

This comment has to be the most insightful I have ever read. It is true on so many levels and captures perfectly the mentality that some senior decision makers have about AI.

Other dev: "I wish I could just tell an AI to do $implementation for me", me: "You know what our boss just did?". ;-)

Agreed. Hopefully a fusion between Prolog and Haskell comes along - a modern declarative language to end all languages. Let it decide what's best for cache locality and profile your performance. Let it choose the best data structures based on your constraints - at compile time and maybe at runtime.

FYI, a fusion between Prolog and Haskell already exists: it's called Mercury. It's a statically compiled language with decent performance characteristics (at least, in its category).

Mercury was pretty good last I tried and a very interesting language at that; I like Prolog and did quite a lot of work in it in the 90s (mostly research). I like how clean it looks (that's from Prolog) and the performance is great due to the fact that you give quite a lot of info to the compiler when you are programming (see for instance [0]). It has many backends it compiles to (Java, .NET, Erlang, native; not sure how well they are all maintained). It is a shame that not more people are working on the language and that not more people use it; I think there was not enough hype surrounding it. I gave up as it could not produce native code for ARM, a platform I always need for everything I do (for many years already), so to take a language seriously as something to dive in, it needs to have ARM support. Not sure what the state of that is now; this is many years ago.

[0] https://www.mercurylang.org/information/doc-latest/mercury_r...


As a general comment and talking about myself exclusively, it's a new kind of fatigue that I would call "oh look, another programming language".

There are quite a few very interesting and solid languages out there. One example of something that's not nearly well enough utilised or widely used is OCaml -- although in its case the lack of true multicore CPU support definitely cripples interest. But the language has an amazing type system that catches a _TON_ of errors (maybe even more than Rust's compiler, not entirely sure). And its compiler is just lightning-fast, fastest I've ever seen in fact. And it is a multi-paradigm language (OOP and functional, and a lot of interesting typing constructs on top, half of which I don't even understand). Etc.

Not advocating for OCaml by the way (I work with Elixir and am looking to get better at Rust lately). It's just an example demonstrating that, again, there are a good number of very solid languages and runtimes out there, but we programmers are either (a) stuck in our tribes or (b) so damn busy we can't look beyond the tech we do our daily job with -- and then a lot of excellent tech gets left in the dust. :(

Mercury might be one of these tech pieces. And it's definitely not the only one.


Interestingly (and I did not know this), Mercury is actually one year older than OCaml. Both are fine languages that are underused and under-appreciated; OCaml did much better (PR-wise, so to say) than Mercury, but both should, imho, have more market share.

Mercury came about when its original author started wondering about a mix between Haskell and Prolog.


Identifying the right characteristics in data, and creating properly tagged corpuses of data that correspond with the right characteristics is no less work than writing code.

Not to mention manually written algorithms are, in many cases, more accurate than ML heuristics (for a terrible yet relevant example in the finance industry, identifying the correct sum of a set of numbers).


>The renewed interest in C++ and other compiled languages is an indication of the need to get more efficient.

I kind of disagree with this based on intuition alone. Most developers, professional developers, are using web tech (the JS stack in particular - Node and hyped front-end frameworks). Yet we're seeing "interest" in compiled languages such as Rust, despite almost nobody using it professionally and almost nobody doing much with it outside of simple proofs of concept.

To me it points toward a developing sense of insecurity in modern professional developers: that simply being a JS dev isn't really programming/development and they have to "prove" themselves with lower-level tech.

Something that indicates that, for me, is in StackOverflow's 2019 survey [0] the most used tech was JS and that which surrounds it, followed by Python and other easy-to-get-going-well-supported tech. Yet the "Most Loved" was Rust.

I could be wrong, and I'm open to being wrong, but intuitively I don't believe the interest in performant technologies is a reaction to the sheer bloat we've seen, particularly on the web front.

>Programmer skill will become more important in the future.

My prediction on this, not so AI-specific, is that developing and deploying web tech will continue to become easier and easier, meaning it'll take fewer people to do it. Sure, work may arise from developing countries/economies to offset a drop in demand in the developed world, but maybe not.

Combined with a potential bubble burst in tech, I think those relying on web dev for a living could be in trouble in the coming decade.

I don't foresee much in terms of companies trying to optimize operational costs by instructing their devs to write their code more efficiently with memory/performance in mind to reduce operating costs, and thus spur a push toward jumping on compiled languages. If anything, cloud computing will continue to get cheaper and cheaper as the big 3 continue to try and absorb as much marketshare as possible.

[0] https://insights.stackoverflow.com/survey/2019#overview


At first we optimized languages for memory and CPU use, because there was so little of it.

When we could, we optimized for ease of writing code, and it led to bloated and slow systems. This is the current status of things.

We are optimizing again for performance, but we want to have our cake and eat it, we want both the performance and the ease of writing code. And the reduction of bugs in the end result.

This change takes time, perhaps decades. Longer than bubbles and market growth. It takes time because we are curious, and we want to test all possibilities. We want fast games, easy abstractions, zero bugs, the whole package.

But it will happen, at some point. Rust, Go, D: maybe one of these languages will replace JavaScript, or maybe it will be a totally new language.


The following is not a particularly deep thought, and that is why it is stated simply:

The way to maintain the easy abstractions, or even increase the level of abstraction while still increasing performance, is to ditch the proliferation of massive general-purpose programming languages and adopt specific DSLs that fit the problem domain. I feel like the language-oriented programming paradigm that Felleisen of Racket champions is certainly in this spirit, but I would like to see a language core that is specifically tailored to performance concerns.

Not that this is important to the general concept, but this is the current focus of my language design efforts, hoping for a first public showing in April 2020.


Rust is used for far more than proofs of concept, and is deployed at some of the largest tech companies in some of their key products.

Not disagreeing with that.

Although, in re-reading the above, I see I have made a grave mistake, contrasting most used with most loved. SO's "most loved" is measured by "of those professionals using this language, how many responded that they love using it", so in the case of Rust it's 83.5% of the 3% using it professionally.

In retrospect I think my point would be better made by pointing at the difference between the technologies being used professionally and the technologies "most wanted", where Go and Rust are in the top 20% for wanted but only in the bottom 40% (Go) and bottom 20% (Rust) for professional use.

I'll leave the original intact for posterity but what's cited is done so erroneously.


Yeah, it’s tough; the “most loved” thing is interesting. Its name kinda indicates a scope that’s different from what it’s measuring, but I’m also not sure what else I’d call it.

I don't think the new interest in compiled language reflects anything other than ongoing cyclical changes in fashion. Neither "compiled" nor "interpreted" language families (the terms are vague and the same language can fall in both categories) has a slam dunk performance advantage over the other.

> I expect that programming in the future will be more about getting the AI to do what you want rather than writing code directly.

This is the clear endgame. The question is how long it takes to get there.


It’s a small semantic nitpick, but I think that you have to qualify the domain of conversation in stating: ‘neither “compiled” nor “interpreted” language families ... has a slam dunk performance advantage over the other.’

If this is universally qualified, where are the scientific HPC simulations written in python, AAA video games written in Haskell, and fin tech trading apps written in Lisp?

I am not stating that interpreted languages can never produce acceptable results; it’s just more nuanced than a forall-type proposition.

Addendum: I am aware C# could sort of be ‘interpreted’ and Unity is C#, so there is at least some evidence in the game category, but I’d quibble over the best-in-class C#/Unity game being considered 100% C#.


Unity is also quite far from being known for the best performance.

It seems to me that if we can merge https://github.com/Syniurge/Calypso into LDC, we can get Dlang: a language that is easier to understand and compiles faster than C++.

Using the data we already have would be a big gain. I think that's AI's biggest contribution. I've seen a lot of time and complexity go into improving functionality, only for it to be beaten by someone who wrote code to track which actions were taken and adjust based on frequency of use. The first option is needed if you have no data and cannot gather it; the second is great because it can adjust itself over time.

This is where I see the SRE (site reliability engineers) role. The developers making changes are put into a position where they measure the cost impact of a decision.

It's these feedback loops, and the practices they instill, that I believe we need. New programmers can help break the mold, but without good feedback they'll fall into the same traps.


What about Rust or D? Or even Golang?

I'm hopeful about Crystal filling this niche in the future.

Why not both?

There’s also the environmental factor nobody takes into account. Fewer CPU cycles means less energy and fewer emissions. When a piece of software is used by millions or even billions, that must be significant.

There is actually a paper about this topic: "Energy Efficiency across Programming Languages"

Thanks I'll check it out!

Why do we need more computer power? I haven't upgraded my laptop since 2009 (well, I've replaced its HDD with an SSD 2 years ago and it made a huge difference) and I'm okay. Some people insist on photorealistic 3D graphics in the games they play, I agree that's cool but wouldn't say that's anything close to important.

> Why do we need more computer power?

A lot of the latest revolutions (good or bad, that's up to the reader) in crunching huge data, ML, ever more realistic simulations, etc. come from ever faster machines. If that growth stops, the article suggests we do something that was (and still is in some circles) normal in the 70s-80s with home computers and consoles: because you could not upgrade the hardware and almost nothing was compatible with the next generation (which is the most common reason the IBM PC won), you optimised the software to get everything from the existing hardware you had on your desk. And people are still doing that.

One of my personal miracle examples: my first love was the MSX computer, a 3.58MHz Z80 machine with (in my case) 128KB of RAM. This machine could do nice games for the time and some business applications. Many years later, that same physical hardware (I still have my first one) can do this [0]. Obviously the hardware was always capable of it, but it took many years (decades) for programmers to figure out how to get every ounce of performance and memory utilization out of these things and push them beyond what anyone thought possible.

If the improvements in performance stagnate, there is a lot of room for getting the most out of existing hardware. I would think, though, that in the case of modern hardware, the geniuses who get to that point will have some language that compiles to this optimised optimum instead of having to hand-code and optimise applications like the SymbOS guy did.

[0] http://www.symbos.de/


do you ever compile code? at work I have a machine with an i7-7700 (4C/8T), 32GB of RAM, and an SSD. it still takes about 45 minutes to do a full build of the project I work on, which can easily be triggered by modifying any of the important header files. if I had to do my job on your laptop from 2009, I'd never get anything done.

That is a choice of software tooling. You are literally grinding through gigabytes of data. If your tool didn't require so much data processing, it would be faster. This may or may not be something you can improve yourself. Often there are workarounds.

Case in point: a matrix library I used to use needed to do a full row/column pass on every lookup. We put a layer in between it and our code and reduced the lookups required by 30%. We were processing the same amount of data and getting the same results but requiring far less time. That layer also reduced memory requirements, so we could process larger datasets faster with the same hardware. That's just one example.
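
To make the idea concrete, here is a minimal Python sketch of that kind of in-between layer; full_pass_lookup is a hypothetical stand-in for the library call that scanned a whole row/column every time:

    import functools

    def full_pass_lookup(matrix, row, col):
        # Hypothetical stand-in for the library call that walked a
        # whole row/column on every single lookup.
        return matrix[row][col]

    def make_cached_lookup(matrix):
        # The in-between layer: repeated lookups of the same cell hit a
        # cache instead of triggering another full pass in the library.
        @functools.lru_cache(maxsize=None)
        def lookup(row, col):
            return full_pass_lookup(matrix, row, col)
        return lookup

Same inputs, same results; only the redundant passes disappear.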

Your choice of CPU and other hardware isn't always the limiting factor. Even the language choice has an impact. Some languages/solutions require more data processing overhead than others to get the same final result.

Even the way your program's Makefile or module composition is structured can affect compile performance. I remember a code generator we included that had to regenerate a massive amount of code on each run because its input files had changed. We improved it by a ridiculous amount simply by hashing its inputs and comparing the hashes before running the generator. Simply not running that code generator every time sped up the build significantly: 30-minute build times reduced by 5-10 minutes, on the same hardware. And the regeneration was easily triggered by a trivial file change.
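
Roughly, the hash gate looked like this (a Python sketch; "run-codegen.sh" is a hypothetical stand-in for whatever generator your build invokes):

    import hashlib
    import pathlib
    import subprocess

    def inputs_digest(input_paths):
        # One fingerprint over all of the generator's input files.
        h = hashlib.sha256()
        for p in sorted(input_paths):
            h.update(pathlib.Path(p).read_bytes())
        return h.hexdigest()

    def maybe_run_codegen(input_paths, stamp_file="codegen.stamp"):
        digest = inputs_digest(input_paths)
        stamp = pathlib.Path(stamp_file)
        if stamp.exists() and stamp.read_text() == digest:
            return  # inputs unchanged: skip the expensive generator entirely
        subprocess.run(["./run-codegen.sh"], check=True)  # hypothetical generator
        stamp.write_text(digest)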


I understand your point, I think. C++ has an inefficient build model, and over time projects can end up with very suboptimal build systems. It's definitely worth spending time to pick the low-hanging fruit like in your example, or, if possible, to choose a language that builds faster.

still, even twenty minutes is a long time to wait and see if your latest change actually works. in the foreseeable future, there will be complex projects that take a long time to build. you will eventually have to touch things that everything else depends on and recompile most of the code. people that work on these projects can always benefit from faster desktop-class hardware.


That's just picking the lowest hanging fruit which is quite common sense. Not to diminish your achievement but if tomorrow, say, I figure Rust's compiler is slow, there's nothing I can do [in the foreseeable future].

Having much better hardware in these cases helps you be actually productive and not twiddle your thumbs waiting for the compiler.


Came here to say exactly this. I’m waiting for Amazon to deliver the new CPU cooler I ordered today so my i9-9900k can run flat out for longer when building, after I spent so much of yesterday waiting 20 minutes for another build when my attempted fix didn’t pan out. (Though that's probably more of a reflection on how painful C++ as a language is if you have to touch a common header file.)

obviously the whole "textually including header files" model introduces a lot of overhead in compilation, but as I understand it, optimizations are inherently expensive. if you want to do stuff like inlining across module boundaries, full rebuilds are inevitable when you touch commonly used code. I think it's reasonable to expect that compiling optimized builds of complex code will always hunger for more compute power.

No, indeed I don't. It has been a long time since I started actively avoiding C and C++ because they take too long to compile. That said, I used to compile reasonable amounts of C# until recently and it wasn't a problem. Nowadays I mostly use write-and-run scripting languages like Python and build-free vanilla JavaScript.

It’s critical for a lot of scientific and industrial applications. Too many things just can’t be done without a supercomputer.

I disagree. I did my master's on computational fluid dynamics (CFD) and I'd say that a large fraction of supercomputer use (in fluid dynamics at least) is wasted. Mostly because people take naive approaches and end up computing the wrong thing, set up their simulation poorly, reinvent the wheel, use HPC on something that can be computed by hand, etc. If they read more of the literature they'd have a more solid grasp on things and would use the software much more efficiently when they do use it.

My philosophy at the moment is to use HPC only when I've exhausted other possibilities. I think many people jump to HPC prematurely. The simpler approaches are so much cheaper that I think it's usually worthwhile. I'm skeptical of the argument that it's cheaper to use HPC than it is to use more efficient methods in this case, because the more efficient methods are often something like a few days spent reading to find the right equation or existing experimental data vs. at least that much setting up a simulation and longer to run it.

Edit: Bill Rider has a bunch of blog posts that make similar points:

https://wjrider.wordpress.com/2016/06/27/we-have-already-los...

https://wjrider.wordpress.com/2015/12/25/the-unfortunate-myt...

https://wjrider.wordpress.com/2016/05/04/hpc-is-just-a-tool-...

https://wjrider.wordpress.com/2016/11/17/a-single-massive-ca...

https://wjrider.wordpress.com/2014/02/28/why-algorithms-and-...


Many moons ago, when Beowulf clusters were still new, I remember a project where I was given a month's worth of then-new cluster time to spend. Due to a delay we had to wait a few months before we could use it. One weekend I was playing around with some junk computers I'd assembled for a LAN party. Long story short, I tried out a portion of the project on that LAN, got a useful result, and then we ran the rest of the project on that overall system, joining in more machines as we went. We completed the project using no special machines, often just idle machines added and removed upon their availability. Along the way we rewrote the entire project code as a set of modules, then used Python to orchestrate them.

None of this is extraordinary now, but the result was that we cut the budget requirements and processing times several times over just by using code improvements. A lot of the time, the head-on solution just needed optimisation. Sideways improvements, such as small optimisations, also helped. The more exotic equipment is still useful, but it's an accelerator.

One last thing: we received a LOT of grief and criticism for our approach. There was peer pressure to use particular solution types even though they were wildly inappropriate. We had funding pulled, or threatened to be pulled, by some of our backers. One lesson we learnt: don't underestimate how vested certain interests are in the use of various toolkits. "Use this or else!" is the not-so-subtle threat.

I'm so glad to not rely on only academic work now.


This is _so_ true! I remember a few years ago interviewing for a position where the largest supercomputer in the area was being used to run Perl programs. Of course the place I ended up working was running Python at supercomputer scale and running out of I/O bandwidth because of all the naive Python runtime startup crud.

This may be true for some fields, but it's certainly not true for all fields. I'd be surprised if there are still orders-of-magnitude improvements to be discovered for BLAST, for example, and that's a foundational component of modern biology.

I should have spoken more generally to OP's point though. What I was really hoping to get at is that there are applications other than games that require non-trivial amounts of compute, and speeding them up would make meaningful differences in their users' lives.


Can these areas really be optimized on the code level? Aren't serious computation algorithms already implemented the best way known to computer science and optimized to the hardware architecture? I agree most of the software is bloated but I don't know if scientific computation software in particular is.

And by the way I believe the software code is not the only place which could be made more efficient. What if we removed all the legacy stuff from the x86 architecture - wouldn't it become more efficient? What if we designed a new CPU with a particular modern programming language and advanced high-level computer science concepts in mind - wouldn't it make writing efficient code easier?

Also, what are the actual tasks we need so much number-crunching power for, besides things of questionable value like face recognition, deep-fake, passwords cracking and algorithmic trading?


Computational fluid dynamics software rarely is implemented in the best way known to computer science and optimized to the hardware architecture. There's a compromise between development and run time here. Plus, how to optimize the software is an active area of research that often combines both computer and physical knowledge.

How many 'serious computation algorithms' are written in FORTRAN 77? They may be "implemented the best way known to computer science and optimized to the hardware architecture", but they're also a full 40 years (and counting) out of date.

Virtual reality and augmented reality

I'm doing VR on a 6 year old i7 and it runs perfectly fine. Improvements come almost entirely from updating the gpu

There won't be many GPU upgrades left if the manufacturing processes don't improve. GPUs are much closer to the theoretical performance you can squeeze out of a piece of silicon than CPUs.

I doubt it. My new gpu released in 2019 is 3x as powerful as my 2013 gpu but the new CPUs are mostly only improved in core counts.

Rendering is also an insanely parallelizable task. In the worst case we can always slap two of the same GPU on one card and get them to render half the screen each, or for VR, one GPU per eye.


What the future holds is hard to grasp. The piece shared with me yesterday was "we'll spend the next decade removing features, at no loss to functionality".

One of the biggest pieces of bloat I've seen is doing the same thing in multiple places, with the new feature not being an improvement over the old workflow in 90% of cases; the efficiency gained in the 10% was lost in the other 90%.


> the piece shared with me yesterday was "we'll spend the next decade removing features...

Sounds like an interesting read; do you mind sharing a link (or submitting it to HN)?


What will be probably most interesting to watch: the collision between hardware constraints and ever-increasing complexity of standards like Unicode and HTTP.

Is there any progress, or path to progress, toward making competitive 3D CPUs/ASICs?

I understand that 3D has thermal issues but couldn't this be prevented by increasing (dead) dark silicon and maybe water cooling inside the 3D chip?

Not directly comparable, but brains are the state of the art of computing and they are three-dimensional.


The stock photo shows voltage regulator circuitry beneath the CPU, most likely SMD capacitors. I wonder if the author thinks these are the parts he is writing about?

Those are the decoupling capacitors under the CPU socket. A very odd choice of photograph for this article, but perhaps an ironic hint that we need programmers who also know more about the hardware...

At the least, the editor does, as that photo is captioned, "Only so many transistors can fit on a silicon chip."

If you look at the volume of software that needs to be produced, and at the trend to include software in more products, and at the entrepreneurial imperative that risk capital is the most expensive resource, it looks very unlikely that handcrafted machine instructions will play a greater role in the future.

Cloud computing and SaaS have extended the deadline for coming up with an answer to "What comes after Moore's Law." But it is much more likely to not be based on every coder learning what us olds learned 40 years ago. Instead, optimization is more likely to get automated. Even what we call "architecture" will become automated. People don't scale well, and the problem is larger than un-automated people can solve.


I don’t think that handcrafted machine instructions are what is necessary. Even switching from languages like Ruby or JS (Node) to languages like Go or Elixir yields tremendous efficiency improvements.

Beyond that, developers being conscientious of what they send over the wire, and being just a bit critical of what the framework or ORM produces also can yield substantial gains.

I say this as a “DevOps” guy who is responsible for budget at a mid-size startup, where we’re hitting scale where this becomes important. We save about 8 production cores per service that we convert from Rails to Go. Devs lose some convenience, yes, but they’re still happy with the language, and they’re far from writing hyper-optimized, close to the metal code.
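
To make the "be critical of what the ORM produces" point concrete, here is a hedged illustration of the classic N+1 pattern versus a single join, written against Python's built-in sqlite3 (the tables are made up for the example):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    """)

    # What a careless ORM loop often turns into: one query per user (N+1 round trips).
    for user_id, _name in db.execute("SELECT id, name FROM users").fetchall():
        db.execute("SELECT title FROM posts WHERE user_id = ?", (user_id,)).fetchall()

    # What you want the ORM to emit instead: one query, one round trip.
    rows = db.execute("""
        SELECT users.name, posts.title
        FROM users JOIN posts ON posts.user_id = users.id
    """).fetchall()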


You mentioned it yourself earlier in your comment, but IMO going from Rails to Go is a bit weird. Rails to Phoenix (Elixir) has turned out to be much easier and more productive for many devs.

Elixir itself is an almost completely stay-out-of-your-way language as well -- meaning that if your request takes 10ms to do everything it needs, it's almost guaranteed that 9.95ms of those 10 are spent in the DB and in receiving the request and sending the response; Elixir itself barely takes any CPU resources.

I worked with a lot of languages, Go/JS/Ruby/PHP/Elixir included. Elixir so far has hit the best balance between programmer productivity and DevOps happiness. (Although I can't deny that the single binary outputs of Go and Rust are definitely the ideal pieces to maintain from a sysadmin's perspective.)


I was going to say that the more performant language thing was likely to disappoint. As you point out, database access is going to be roughly constant. But Rails -> Go would be an exception.

Well, yeah. Still, Rails is much slower than Phoenix by the mere fact that its templating and ORM facilities are extremely inefficient.

It's not that Ruby is 100x slower than Elixir (of course it's not). It's just that Rails is so inefficient compared to Phoenix.

Sinatra, Phoenix, Rocket.rs, and a ton of others are specially crafted to stay out of your way and utilise the CPU as little as possible. And yep, as we both agree, in these cases the 3rd-party I/O is the bottleneck.


Although I guess we could consider more performant, but still easy to use, languages to be a form of automation.

Rust, [partially] OCaml and [partially] Elixir come to mind. Elixir is much slower than those two, but for the value it brings to the table it is quite fast.

Out of everything I worked with in the last 15 years I'd heartily recommend Rust for uber-performant-yet-mostly-easy-to-use language.


Yeah, it's really annoying when an IS vendor says their solution needs 16GB of RAM for every computer when it's all just basic stuff like dashboards, graphs, tables, etc. Even modern PC games don't require that much.

There were several good points both for and against the article in this comment section. I was pleasantly surprised, usually the threads caused by posts like this turn into "static typing vs dynamic typing" or "functional vs object oriented" flamewars.

As for my own opinion: yes, optimization is key, but we gotta remember not to make it premature. Take advantage of the fast hardware to actually create something; once we know that the something is viable, let's refactor and optimize.


Literally every experienced programmer would like to do this. But when you get to that last stage the shot-callers are like "nah, it's fine" and you never get to the optimisation.

I've seen many products die simply because customers get frustrated with laggy or buggy experience and leave.

By the time the businessmen wake up, it's usually too late.


Which means that Business Analysts need to save the world by proving to the shot-callers things similar to what Amazon found (a few ms of lag in the site load caused $$$ of revenue loss). The ever improving Observability stack combined with strong analytics on the client-side can make this possible. Perhaps regulation around Climate Effects (or carbon taxes on inefficient software) might also bring about an industry-wide change of attitudes (and incentives).

We also have a mantra against optimization until you know you need it. It seems too cost- and time-prohibitive to put these things on the programmer to maintain, and we need to develop tools to help optimize our code. Maybe the next generation of optimization techniques will be runtime instead of compile time. We already have DBs with optimizers, so maybe there will be programming languages with optimizers?

We are getting worse and worse latency, though not bad throughput, but the real issue is complexity. We just keep on piling more crap on top of the old crap.

I'm lucky at work we write lots of stuff to avoid the tell/mound, but hello! where is the rest of the industry on this?

[You can use our stuff if you like, it is all public. Let's rebuild together.]


“The only consequence of the powerful hardware I see,” wrote one, “is that programmers write more and more bloated software on it. They become lazier, because the hardware is fast they do not try to learn algorithms nor to optimize their code… this is crazy!”

This is remarkably accurate for games as well. Insurgency: Sandstorm, for example. I was full of hope when I learned it was being developed in Unreal Engine, which supports large-scale combat much better than Insurgency's Source engine. Unfortunately, when it came out it performed much worse than its predecessor. Working with these engines has become so easy you don't really have to 'think' anymore and can just keep throwing stuff in.


This is a topic that really interests me, but I couldn't read the article -- either a paywall, ad-wall, or some other reader-hostile blocker incongruent with the foundation of the Internet prevents usability. Ah well. I'll join the conversation regardless.

For all the programmers out there -- _how do we do this?_ I came into programming through Matlab and Python in economics and data science. I don't have formal training in software engineering. I know some C, some Fortran, and have a journeyman's understanding of how my tools interact with the hardware they run on.

Where can I learn how to be extremely efficient and to always treat my operating environment as resource constrained? Am I correct in seeing that the rise of point-and-click cloud configuration hell-sites like AWS is masking the problem by distributing the inefficiency? (Sorry if unrelated; I spent hours debugging Amazon Glue code last night and it struck me as related.)

In other words -- how can we tell what is the path forward?


The days of everything being hand-optimized assembly are behind us. It still has its niche, but for anything outside hot inner loops or extremely frequently called functions (like malloc), straightforward C++ will be just as fast.

Meaning there's no point in optimizing an expensive function if 99% of your program's memory and run time is spent in a different function.

This means the absolute most important skill to writing efficient software is not assembly language skills, but profiling so you know where to focus your efforts in the first place.
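
For instance, a minimal Python sketch of that workflow (the two functions are hypothetical; the point is that the profiler, not intuition, decides where to spend effort):

    import cProfile
    import pstats

    def scary_but_rare():
        # Looks expensive, but is called exactly once.
        return sum(i * i for i in range(1_000_000))

    def boring_but_hot():
        # Looks trivial, but is called constantly.
        return [i * 2 for i in range(200)]

    def main():
        scary_but_rare()
        for _ in range(50_000):
            boring_but_hot()

    profiler = cProfile.Profile()
    profiler.runcall(main)
    # Sorted by cumulative time, the "boring" function dominates the run,
    # so that's where optimisation effort should actually go.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)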


> Meaning there's no point in optimizing an expensive function if 99% of your program's memory and run time is spent in a different function.

Maybe there's no business point in optimizing those. But I feel this line of thinking got us into the current mess to begin with. Everybody is either like "we can't afford to optimize" (blatant lie at least 80% of the time btw) or "nah, not my effing job".

Plus that philosophy only really works when your business is fighting for survival in its initial stages. After you stabilize a little and have some runway you absolutely definitely should invest in technical excellence because it also lends itself pretty well to preventing laggy and/or buggy user experience (and those can bleed your subscriber numbers).


Honestly? I used to jabber on about this with regards to the still-distant future of actual nanotechnology ... we need to find the guys who wrote video games for arcades in the 1980s and press them for their secrets before so many brilliant tricks are lost to time. They did so much with so little!

My guess is that we will slowly approach this wall and spend a lot of time trying for incremental gains, trying to avoid the inevitable, which would be the design of new chipsets with new instructions, sets of new languages explicitly designed to take advantage of the new hardware, and then tons of advances in compiler theory and technology. On top of it, very tight protocols designed for specific use.

I think we have layers upon layers of inefficiency, each using what was at hand. All reasonable things to do, in the short term, based on the pressures of business. But at the end of the day we're still transmitting video over HTTP, of all things. Sure, we did it! But you can't tell me that it is efficient or even within the original scope of the protocol's concept.

Naturally, I think the whole thing would run about a trillion dollars and take armies of geniuses, but it would at least be feasible, just ... it would require a lot of will. And money.


The secrets of 1980s video game programmers?

1) hardware that doesn't change. One C64 is just like every other C64 out there. You knew what the hardware was and since it doesn't change, you can start exploiting undefined behavior because it will just work [1].

2) The problem domain doesn't change---once a program is done, it's done and any bugs left in are usually not fixed [2]. The problem domain was fixed because:

3) The software was limited in scope. When you only have 64K RAM (at best---a lot of machines had less; 48K, 32K, 16K were common sizes) you couldn't have complex software, and a lot of what we take for granted these days wasn't possible. A program like Rogue, which originally ran on minicomputers (with way more resources than the 8-bit computers of the 1980s), was still simple compared to what is possible today (it eventually became NetHack, which wouldn't even run on the minicomputers of the 1980s, and it's still a text-based game).

4) The entire program is nothing but optimizations, which make the resulting source code both hard to follow and reuse. There are techniques that no longer make sense (embedding instructions inside instructions to save a byte) or can make the code slower (self-modifying code causing the instruction cache to be flushed) and make it hard to debug.

5) Almost forgot---you're writing everything in assembly. It's not hard, just tedious. That's because at the time, compilers weren't good enough on 8-bit computers, and depending upon the CPU, a high level language might not even be a good match (thinking of C on the 6502---horrible idea).

[1] Of course, except when it doesn't. A game that hits the C64 hard on a PAL based machine may not work properly on a NTSC based machine because the timing is different.

[2] Bug fixes for video games starting happening in the 1990s with the rise of PC gaming. Of course, PCs didn't have fixed hardware.

EDIT: Add point #5.


I have this theory that a lot of corporations know this but they don't want to be the pioneers who volunteer their money and man-hours, only for their competitors to then reap the fruits of their labour for free.

I can't prove it, but I intuitively feel there's a lot of spite out there. Many people are unhappy with the status quo but are also unhappy with the idea of sacrificing their resources for everybody else -- who will likely not only be ungrateful; they might try and pull an Oracle or Amazon and sue the creators over the rights to their own labour.

Things really do seem stuck in this giant tug of war game lately.


The path forward is to be economical with hardware resources. I always try to imagine a physical character performing the task I'm trying to code. How far does the imaginary character need to travel? How many trips do they need to make? Is everything they do absolutely necessary? If they delegate work, is their sub-contractor efficient?

There isn't a single place to learn how to be efficient; it is better to start being extremely curious about how things actually work. A scary number of people I've met do not even attempt to learn how the library functions they use actually work.


Not sure what you are getting at here.

> I always try to imagine a physical character performing a task that i'm trying to code. How far does imaginary character needs to travel, how many trips do they need to make.

Dude, that's why we have optimising compilers. Functional programming is demonstrably less efficient on our imperative/mutable CPU architectures but a lot of compilers are extremely smart and turn those higher-level FP languages into very decently efficient machine code that's not much worse than what GCC for C++ produces. Especially compilers like those of OCaml and Haskell are famous for this. They shrunk the gap between FP and the languages that are closer to the metal. They shrunk that gap by a lot and even if they are not 100% there, I'm seeing results that make me think they are 75% - 85% there.

We need languages that rid us of endlessly thinking about minutiae and we must start assembling bigger LEGO constructs in our heads if we want anything in IT to actually get unstuck and start progressing again. (Of course, this paragraph doesn't apply to kernel and driver authors. They have to micro-optimise everything they can on the lowest level they can. That's a given.)

> Scary number of people I've met do not even attempt to learn how a library functions they use actually work.

I couldn't care less. How a library function works is an implementation detail. I only need to know what it does. That's why it's a 3rd-party library, after all. The creator might notice a hot path during stress tests and optimise that implementation detail into an entirely different algorithm and/or data structure. And boom, your code that relies on an implementation quirk you weren't supposed to look at in the first place is now slow or even buggy.


The fundamental tradeoff is between control and abstraction. Better control typically means going closer to machine/operational semantics, better abstraction typically means going to denotational semantics.

Compilers are what mediate between these two domains, but tend to become more bloated as they have to accommodate both more diverse hardware and more numerous languages.

This helps the working programmer ignore the problem of writing good code but only for so long. It only delays the inevitable as the returns from clever compilation can't go on forever, and in fact these returns become more volatile as hardware architectures become more complex (typically through more cores or extra caches, incurring synchronization costs). Thus for maximum performance through binaries one would have to practice tweaking compiler settings which just creates another layer of abstraction and defeats the point of having this step automated for you.

Programmer training in particular needs to become both more comprehensive and more specialized. More comprehensive means knowing how each layer of abstraction gets built up from the most common machines (like x86). More specialized means filtering out a lot of people who were trained-for-the-tool and facilitating more cross collaboration between those that can program in a domain but not program for performance. This might mean better methodologies for prototyping across domains or experimentation with organizational structures to complement such methodologies.

Functional algebraic programming as a paradigm still seems somewhat underrated to me as a way of cross-cutting conceptual boundaries and getting programmers refocused on how their code is interpreted from the point of denotation. But it comes at great risk from continuing the trend towards more redundant abstraction which is responsible for bloatware.

At that point it seems that knowing how these problems are solved without classes, types and libraries, or at least how classes, types and libraries resolve the complexities of doing it with the native capabilities of the operating environment (and recursing down to the point of maximal control), might be a big improvement, as it means reversing the greater-abstraction trend.

By these criteria, languages like OCaml and Rust seem to make the cut. A lot of good ideas from these languages seem to seep into the design of others. But the white whale is browser programming/web programming, as the browser has become the de facto endpoint for universal application deployment. WASM may or may not fix this. But then we just get to compilers again.

This talk did the most for developing my point of view here: https://www.youtube.com/watch?v=443UNeGrFoM Choice quotes include "If you're going to program, really program, and learn to implement everything yourself" and "At first you want ease, but in the end, all you'll want is control."

Or just take up another field. We probably need more farmers and doctors than programmers now.


> Under these discretions languages like OCaml and Rust seem to make the cut.

I absolutely agree! I am gradually learning both and I am just getting so angry that I didn't know about OCaml like 10-15 years ago. :( I was just so damn busy surviving and being depressed for a heckton of [dumb] reasons for 15 years. And then I woke up.

Now I am just a regular web CRUD idiot dev who, even though he was very clever and inventive and creative in the past, nowadays seems to get pissed at small details like configuring web frameworks (even though I am still much better than a lot of others, I dare say -- proven with practice... or so I like to think). And now I have to work against the negative inertia of my last 15 years and learn the truly valuable concepts and how they are implemented in those two extremely efficient, if a bit quirky in syntax, languages.

But it seems every time somebody says "let's just keep these N languages and kill everything else", no discussion is possible... And I feel we really must only keep a few languages/runtimes around and scrap everything else.


I like the statement saying that software is only limited by human imagination. Meanwhile it is also the case that better hardware brings more possibilities to what we can do.

When I read articles similar to this one, I can't avoid asking myself how universities can take a more integrated approach to the disciplines related to software engineering and computer science. I know that we can't learn all the stuff that's going around, but some standard organization should be put on the table. I felt this lack of "low level" preparedness many times while studying.

How many years was it where bridges were made only out of wood? I feel engineers before us had the foresight to see the possibilities but lacked the tools and understanding.

I fear that only when people realized the economy needed bridges that could support large mammals crossing at one time did they really engineer bridges to carry that weight. I think the same metaphor could be applied to computing.


Yeah, most people only get creative when they absolutely must, and not one minute earlier.

Finally, a time for Electron and JVM to go away.

Right now, most organizations (and developers) are focused on developer speed/productivity. As compute resources plateau, some developers will be required to focus on compute efficiency and speed. There will always be a limit to how fast you want your CRUD web app to run vs how much you want to spend on developers, though.

You are correct on the face of it. I am simply observing that the pendulum has been at one extreme end for a long time now: businesses always optimise for minimum time to deliver a new product and then pay very hefty consulting fees to fix the mess that could have been easily avoided in the first place (by making the project's development time 20% longer) -- which I am willing to bet my balls would not have been fatal for the business in, like, 90% of the cases.

For things to go well and optimally, the pendulum should never be on the extremes. Sure, you guys are in a hurry. OK. But I must protect my name and your interests and must do a good job as well. Don't make me emulate a bunch of clueless Indians, please. Just go hire them.

Businessmen aren't very good at compromises when it comes to techies. I am still coming to terms with that fact and to this day I cannot explain its origins and reasoning well.


You don't need "new" programmers. Just dust off some "old" programmers who are now merely 40-50 years old. There's plenty of life still in us, and we can tell a pointer from a hole in the ground.

We are too expensive for them, it seems.

Title should be changed to needing old programmers, as many comments have hit upon.

It seems like there might be a pretty straight up tradeoff between difficulty of developing software and quantity of software produced. So the more we attempt to optimize at a lower level, the more time it takes to develop, and the less software someone can make and maintain. So, given that - would you rather lose 30-40% of the apps you use and like, but the rest are faster? Or keep using everything you have now?

There are exceptions to this, as with everything, but it's not as easy as this article makes it sound, i.e. "Just make faster stuff dummy!" There's always a cost.


That's just not true. In fact, we're losing 30%-40% of the software we use and like every single day simply because people write utter crap and then pointlessly rewrite it over and over. If we placed more focus on having sane development practices and good computer assists for developers, we'd ultimately find it easier to develop software and maintain it over time, such that there'd be little or no need to throw stuff away altogether.

We have cheap ubiquitous personal computers that can display streaming high definition video, with audio streams and subtitles, on a virtual screen inside a virtual reality - while multitasking and running other programs in the background. The hardware's plenty good enough for daily use.

The problem is that software practices have gotten so bad that a simple text messenger or email client uses at least as many resources as that program that's streaming HD video within a virtual reality, just to send or receive a few bytes of text now and then.

I'd be ok with losing 30-40% of the overbloated apps, because then they could be replaced with apps that don't need 2GB of dependencies to left-pad a string. We've really gone overboard on the "code reuse is great" and "don't reinvent the wheel" to the point that every program tries to include as much as possible of all code ever written and every wheel ever designed.


> would you rather lose 30-40% of the apps you use and like, but the rest are faster? Or keep using everything you have now?

Dude, I agree to lose at least 80% of them, most are useless and with bad UX on top of that. Even worse: they are distracting.

At some point hiring the programmers to pour software by the kilogram becomes a visible problem -- when the businessmen wake up to the fact that the amortised cost of a job sloppily done (say, over the course of the next 2 years) is much higher than investing 20-30% more upfront. That's what the article is arguing for, IMO.


The article read to me like one of those posted every 6-8 months with random thoughts someone had one morning: reminiscing about the old days, comparing oranges to sports cars, with complete disregard for the fact that, as time marches on, things change, people (customers/users) want more, and convenience is prioritized.

I'll also reminisce a bit: back in the 2000s, my 266MHz, 64MB, 4.1GB-HDD PC would let me install a 2GB full-featured third-person adventure game (Legacy of Kain: Soul Reaver, for example) worth nearly double-digit hours of play; currently it takes 2x that disk space to install a basic platformer giving 1-2h of fun. Every new game lags to hell on a new PC because I opted for a 1-year-old GFX card. I can view a PDF nicely with SumatraPDF, yet Adobe Acrobat Reader takes 3-digit MB to offer the same feature and 5x more time to start. I could use IRC in the 2000s while Slack takes all of my RAM. A website back in the day would be a few kB; I mean, people here frequently compare HN with Jira, or note how funny it is that Netflix has to spend engineering effort to improve time-to-first-render on its landing page, which is static!

Those are facts, but the comparisons aren't so good: Soul Reaver vs Assassin's Creed is a bad idea, because back then people didn't mind if grass was just a flat texture or the hero looked like walking cubes. SumatraPDF can open a PDF, but Adobe Reader gives me annotation, form filling, signing etc. NFS2 was just racing; NFS: Heat players demand customizing exhaust gas color. The Netflix home page loads more images combined than "back in the day" and must adapt to big and small screens so it looks great everywhere. Jira lets me drag-n-drop a ticket, while it took 3x the time to update the same ticket back in the day across several form refreshes. HN is the simplest CRUD; it just lets me vote and post basic text, and heck, it delegated search to Algolia (a different service)! The features Slack offers would require 5-7 extra different services if I were to use IRC.

But those kinds of reality don't get posts upvoted, so instead they are always rants about why WhatsApp needs more resources than the SMS app when both let me send text to someone else.

Anyway, things change over time. In the 2000s, my PC would lag if I opened MS Word while Windows Media Player was playing some HD video, or a game would crash if I tabbed out of it to check something. But now I have 20+ tabs open that live-update stock tickers and have text infested with hundreds of advert-monitoring things, while a tiny window plays the current news in a corner, while I'm typing away happily in the IntelliJ IDE and have an ML model training in the background. Now I can also record an HD version of my gameplay and tab out too. I think, in the future, complex development will take place in the cloud; we'll probably have high-speed internet everywhere and online IDEs or similar, so everything happens in the cloud. Similar to how a 4GB HDD cost a fortune in the 2000s but the same price gets me 100x the capacity now, cloud resources will improve while prices go down. :)


I agree that we the people have the tendency to look with rose-tinted glasses at the past.

However, saying that things are just fine today is not strictly true. You are mostly correct but there's a lot of room for improvement and some ceilings are starting to get hit (people regularly complain that Docker pre-allocates 64GB on their 128GB SSD MacBooks, or that Slack just kills their MacBook Air they only use for messaging during travels). And still nobody seems to care and then people like you come along and say "don't complain, things were actually much worse before".

...Well, duh? Of course they were.

But things aren't that much roses and sunshine as you seem to make them look. Not everybody has ultrabooks or professional workstations. I know like 50 programmers that are quite happy to use MacBook Pros from 2013 to 2015. Those machines are still very adequate today yet it's no fun when Slack and Docker together can take away a very solid chunk of their resources -- for reasons not very well defined (Docker for example could have just preallocated 16GB or even 8GB; make the damn files grow with time, damn it!).

---

TL;DR -- Sure, things weren't that good in the past, yeah. But the situation today is quite far from perfect... and you seem to imply things are fine, which I disagree with.

(BTW: thanks for the nostalgia trip mentioning Legacy of Kain! They'll remain my most favourite games until my death.)


Functional programming is something to watch and learn. It can help take advantage of multi-core single machines and distributed computing alike because it is thread safe due to using immutable variables and the mathematics behind pure functions. Compared to OOP, no locking, concurrency, or race conditions to worry about if used correctly.

Functional programming helps immensely, but I don't think you are describing it quite right. You cannot do distributed systems without concurrency. Even if you don't have low-level synchronization failures, you still need to watch out for determinism. Fortunately we have the math for that (usually order theory).

I make this point as someone whose job is Haskell. Too many people expect awesome magic sauce and basically write the same old imperative stuff in functional programming languages: not in the small but in the large. There's still plenty of benefit of using a good language for that, but you won't get zomg auto-parallelism.


Even coding in Erlang/Elixir (inside the BEAM VM in general), where parallelism/concurrency is a 99% solved problem and works as you expect, people still manage to fuck it up by trying their damnedest to invent mutable data structures, or by inadvertently making 1000 tasks wait on one green thread (when they absolutely shouldn't and there are much better solutions).

It's quite comical and sad to watch at the same time.

I agree with the article's title: we really need a new breed of programmers.


I meant that it enables concurrency and parallelism without having to worry so much about the mechanics, which helps take advantage of multiple cores as described in the article. Immutable data structures and pure functions avoid data corruption when two or more threads are working on the same data. OOP requires a lot of code to get the same result, true?

I'm new to FP myself, and it seems like, if done wisely, it simplifies multi-threaded, parallel processing quite a bit.
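
A tiny Python sketch of that point (not FP in the strict sense, but it shows why pure functions make parallelism easy; score is a hypothetical function that touches no shared state, so no locks are needed):

    from concurrent.futures import ProcessPoolExecutor

    def score(record):
        # Pure function: reads only its argument and writes no shared state,
        # so any number of copies can run in parallel without locks.
        return sum(x * x for x in record)

    records = [list(range(n, n + 100)) for n in range(1_000)]

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(score, records))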


I would check out https://github.com/reflex-frp/reflex which is truly a godsend for concurrency but actually uses loads of mutation internally.

Haskell helps loads here, but the mechanisms are a lot more complex and nuanced than the circa-2000 ideology you were describing.

