The obvious example is the quick-to-build MVP, but many of the bigger problems come from platform conflicts. Because we have at least five different actively uncooperating operating system platforms, it's hard to build portable native apps - so people build electron apps instead. We also use the web browser as a competitive battleground; due to coordination problems only one programming language and UI model is possible, although another is creeping in via webassembly.
Then there's the ongoing War On Native Apps. Every platform holder would love to take the 30% cut of the profits and veto which applications can run on the platform. We're left with Windows (non-app-store) and sort of MacOS (although watch out for notarisation turning into a veto in the future). And sadly this has very real benefits in malware prevention. Systems which run arbitrary code get exploited.
Beyond that there's cryptocurrency, where finding a less-efficient algorithm is a design goal to maximise the energy wasted, in order to impose a global rate limit on "minting" virtual tokens.
In fact doesn’t this point to a gap in the marketplace? Where are my “IOT ALL THE THINGS / 5G / Edge” bootcamps? Where are the “leetcode” challenges that talk about proper sampling rates for an 8-bit A/D converter, or implementing a closed-loop PID on a 16-bit architecture?
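For what it's worth, that second exercise is small enough to sketch. Below is an illustrative take on a single PID update using only the integer math a 16-bit MCU can do; the Q8.8 gains, ranges, and setpoint are made-up values, not tuned for any real plant.

```python
# One closed-loop PID step using integer-only arithmetic, as might run on a
# 16-bit MCU. Gains are in Q8.8 fixed point (value * 256). All constants here
# are illustrative placeholders, not tuned for any real system.
KP, KI, KD = 512, 32, 128   # i.e. 2.0, 0.125, 0.5 in Q8.8

def pid_step(setpoint, measured, state):
    """One controller update; state = (integral, prev_error)."""
    integral, prev_error = state
    error = setpoint - measured
    integral += error
    derivative = error - prev_error
    # Multiply in a wider register, then shift back down by the Q8.8 scale.
    out = (KP * error + KI * integral + KD * derivative) >> 8
    # Saturate to a signed 16-bit actuator range.
    out = max(-32768, min(32767, out))
    return out, (integral, error)

out, state = pid_step(1000, 900, (0, 0))
```

The shift-by-8 is where the fixed-point scaling lives; on a real MCU the integral would also need anti-windup clamping, omitted here for brevity.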
I suspect that’s what the grandparent comment is getting at: there’s so much talk about the former and so little about the latter, even though every computer engineer graduating from an ABET-accredited institution has these skills.
C isn't a language that lends itself to bootcamp-style learning.
With C, a small error prevents compilation at all, and it's going to be a relatively long time before you're ready to progress past the "printing text to the console" stage.
There are still some excellent C tutorials out there (for example, I think Handmade Hero's intro to C is good, and Handmade Hero itself gets you to the "shiny colors on the screen" stage very quickly), but HH has a different mentality than a bootcamp. HH is about learning, exploring, breaking things, and figuring them out on the fly. A bootcamp is about gathering the minimum knowledge necessary to be productive as quickly as possible.
...and that nicely sums up the problem with software today.
This varies with goals, attitudes, background, bias, etc. Besides, if you know a little C, you can livecode over at glslsandbox or shadertoy and be immediately rewarded.
> It isn't the kind of thing you tackle because you need hirable job skills by the end of the month.
No, not really. This also roughly says "JS doesn't necessarily require lots of experience" which is not much of a plus, as someone already pointed out.
> Then there's the ongoing War On Native Apps.
I'm no prophet, but I did predict then that the browser would have eaten all the business applications space by now. It was just obvious. Colleagues objected that the web could not match rich native programs.
Excel is a much better product than Google Sheets, but having the better product doesn't mean having the winning product.
When I first came on HN and learned about YC's motto (build something users love) this idea was reaffirmed.
 Google optimizes for a collaborative quick spreadsheet program (handy for consumers), and as other comments say, Microsoft focuses on pro spreadsheet use (e.g. finance).
- Always-visible word count (added recently, but missing for nearly a decade)
- Custom text styles—you can modify the existing ones, but not create new ones with new names
Actually the web now is 100X more beautiful and responsive than at that time. I mean what you can do with an intranet server, not the radioactive media monstrosities.
Not really a spreadsheet person, I can believe Excel is better than Sheets. But is web vs native the reason?
To the user I'd say it's a trade-off that gains you little or nothing and loses a lot compared to native apps. The benefits of switching to browser- and cloud-based apps go to the organization you work for and the software companies selling the products.
Google Sheets ate the lower end, though; it's a bit like iOS vs Android.
Much better product? Sheets takes literally seconds to download and install and runs on all your devices. Also it automatically syncs your data between devices and sharing data with other people is as easy as sharing a website. These are very important features in my view and makes Sheets into a better product than Excel. A power user might have different opinions, but to me writing sheet.new in my browser is just so much more convenient.
It's gotten much worse. Now you have iOS, Android, Windows, Mac, web, and Linux(?). In 2000, you had Windows. You couldn't do anything interesting on mobile, Web 2.0 (cringe) wasn't a thing, and Mac's market share was about 3%.
Still room for improvement but it's not so restrictive about the common denominator.
Please note I didn't mention anything technical; it's pure product management and strategy, and that's one of the reasons I'm optimistic about Flutter's future.
Honestly I seriously doubt Flutter will ever be popular for web. It recreates everything already included in browsers like DOM, CSS, text editing, etc. There is already too much bloat with modern JS apps.
Turns out that it was already removed but slack was still displaying it.
⌘ + R (refresh page shortcut) solved it.
Electron might help devs get something out quickly, but all these layers have a cost.
I don't disagree with the gist of this, but your technical description verges on nonsense. I'm questioning whether you're serious.
>...finding a less-efficient algorithm is a design goal...
At no point is anyone searching for an algorithm. Most mining algorithms were chosen at random or for novelty; Bitcoin uses double SHA-256, Litecoin uses scrypt, Primecoin searches for primes.
>...maximise the energy wasted...
Energy is wasted during mining in order to maximize security. The waste is a side effect.
>...in order to impose a global rate limit...
This is plain false.
It's called "mining". I wouldn't complain if this wasn't in quotes.
The whitepaper is only nine pages, but nobody seems to read it.
Using scare quotes to mean that nobody else says something strikes me as odd, but you're probably correct.
Usually governments mint coins, but no government (or other centralized entity) currently operates a legitimate network with the same properties as Bitcoin.
I might have used an asterisk, maybe? :)
You're making a logical leap between intentional GPU/CPU coins and inefficiency as an explicit design goal. GPU/CPU coin developers are most likely true believers in a distributed security model. They could also own GPU farms or botnets. I highly doubt developers design cryptocurrencies while dreaming of squandering global resources.
>> finding a less-efficient algorithm is a design goal
The Bitcoin paper doesn't actually mandate a specific algorithm at all; it just says "such as":
> To implement a distributed timestamp server on a peer-to-peer basis, we will need to use a proof-of-work system similar to Adam Back's Hashcash, rather than newspaper or Usenet posts. The proof-of-work involves scanning for a value that when hashed, such as with SHA-256, the hash begins with a number of zero bits.
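The "scanning" the paper describes is short enough to sketch directly. This is a toy illustration (not Bitcoin's real block or target format): double SHA-256 over a header plus nonce, repeated until the digest has enough leading zero bits.

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        # Count the zero bits at the top of the first nonzero byte, then stop.
        bits += 8 - byte.bit_length()
        break
    return bits

def mine(header: bytes, difficulty_bits: int) -> int:
    """Scan nonces until double-SHA256(header || nonce) has enough zero bits."""
    nonce = 0
    while True:
        data = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if leading_zero_bits(digest) >= difficulty_bits:
            return nonce
        nonce += 1

# A toy difficulty of 12 bits succeeds after a few thousand attempts on average.
nonce = mine(b"example block header", 12)
```

Note the asymmetry the scheme relies on: finding the nonce takes thousands of hashes, but anyone can verify it with one.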
The Hashcash paper uses the term "minting".
>>...in order to impose a global rate limit..
> To compensate for increasing hardware speed and varying interest in running nodes over time, the proof-of-work difficulty is determined by a moving average targeting an average number of blocks per hour. If they're generated too fast, the difficulty increases.
i.e. to limit the global rate of block generation, which is what makes it useful as a global distributed timestamp server.
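That retargeting rule amounts to a proportional correction. A simplified sketch, using Bitcoin's actual window and spacing constants (2016 blocks, 600 seconds) but none of its integer or consensus details:

```python
def retarget(old_difficulty: float, actual_seconds: float,
             blocks: int = 2016, target_spacing: int = 600) -> float:
    """Scale difficulty so the next window takes roughly the target time.

    If blocks came in faster than intended, difficulty rises. Bitcoin clamps
    the adjustment to a factor of 4 in either direction; that clamp is
    reproduced here, but the real implementation works in integer arithmetic.
    """
    expected_seconds = blocks * target_spacing
    ratio = expected_seconds / actual_seconds
    return old_difficulty * max(0.25, min(4.0, ratio))

# Blocks generated twice as fast as intended -> difficulty doubles.
new_diff = retarget(1.0, 2016 * 600 / 2)
```

This is what imposes the global rate limit: no matter how much hardware joins, block production is steered back toward the same average spacing.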
>> maximise the energy wasted
> The steady addition of a constant amount of new coins is analogous to gold miners expending resources to add gold to circulation. In our case, it is CPU time and electricity that is expended.
As everyone noticed fairly early on, like gold mining, this creates a means of expending energy to produce something which can be sold. Just as it's economically advantageous to burn down rainforest, it's economically advantageous to perform a trillion SHA operations and throw away the results of almost all of them.
A fun read on cloud scale vs optimized code is this recent article comparing ClickHouse and ScyllaDB (https://www.altinity.com/blog/2020/1/1/clickhouse-cost-effic...)
Most of the numerical code that cares about performance for linear algebra uses this API and links an appropriate implementation.
Advances in language, compiler, and runtime implementations will continue to keep up with any growth in the need for performant applications for the foreseeable future, despite the looming collapse of Moore's Law.
It would be great if most applications worked at human speed. Instead we have web applications taking 5 seconds to load what is basically 3 records from a small database.
I've often complained out loud with coworkers, while waiting for some horrible webapp to do its thing: "This computer can execute over a billion instructions every second. How many instructions does it take to render some formatted text!?!?"
Software latency is a hard target to optimize; throughput is much easier. For latency you have to fight against each abstraction layer in your code. And that includes layers bolted onto your OS and hardware.
Consider also that spending an hour at the DMV for them to update a database entry or two is also human speed.
I want to live in your alternate reality, because in ours anything under 45 seconds is a miracle.
If you prefer, call it carbon footprint. Python has a huge carbon footprint. We should get rid of slow languages for environmental reasons.
Plus we have a lot of pretty awesome languages that are mature enough and are serving very different niches (so their union can cover everything in IT) like Rust, Erlang/Elixir, Zig, OCaml (which can be transpiled to two JS variants, BuckleScript and ReasonML), TypeScript, and probably 20+ others.
Not to derail the thread but the dependency on very slow and hard-to-debug dynamic languages like Ruby and Python is getting out of hand.
Statements like "But it's easier to find devs for Python and Ruby than it is for Rust and Elixir" might be statistically correct now but that means nothing. People change technologies as market demands change so I am absolutely not worried about displaced programmers. There's almost no such thing as displaced programmers either, 99% of all my acquaintances just learned the new tech their employer wanted from them and moved on to the next stable paycheck.
For equal numbers of humans, all those energy/environmental costs you mention are going to be there regardless of which programming language is used...
The trick, as always, is finding balance between paying for hardware and paying developers.
People really bought into the ‘people are more expensive than hardware’ as an excuse to get screwed like this. For $5k in human cost, these guys (and their investors) now save 200k/year in hosting. And this is not an isolated story; I am working on another one at this very moment. Programmers have become so incredibly sloppy with the ‘autoscaling’ and ‘serverless’ cloud ‘revolution’.
If it really did save time and were simpler, some companies would (quite reasonably) be willing to pay a premium for that - time is money and all that. In reality it seems like people often end up with the worst of both worlds - it’s expensive, complicated, still needs a huge staff to maintain, and doesn’t even work that well.
Tech like AWS Lambda (of which I like the theoretical idea) is meant to remedy the complexity issues, for a premium. But that premium, personally, makes my eyes water. I cannot see any high-volume operation justifying going live with it. Are there big examples of those? And how is it justified vs the alternatives, which are (besides some programmer+admin time and scalability) far more efficient?
> as well as the overhead of managing servers and container clusters which is a lot more costly than you might think
A lot of people underestimate that, in my experience; I see a lot of people who find it cool to set these up (and a large number are not doing this scripted, but via the web interface). My current case has a myriad of VPCs, container clusters, load balancers, auto scaling, etc., and it looks really impressive, but it's very costly, and their dev (who was also the devops) disappeared after buckling under the stress. Also, none of it is needed in this case (not saying there aren't many cases where it is needed!).
Anyway I will experiment more with Lambda; I think I'm tainted by the very costly abuse cases I had to move to normal linux environments to make affordable for the startup.
But to be fair, for most projects the complexity that Amazon's services carry with them is absolutely not justified. Sure I can learn to work with 10-20 Amazon services but even me as a senior guy who knows his way around pretty much anything you throw at him, that's precious time spent not helping the direct business needs but basically making sure the house won't collapse.
And a lot of smaller companies like to merge the "programmer" and "DevOps" titles into one person because of course, that means one paycheck and not two. And as you said, they get angry that you can't become a pro sysadmin in an afternoon.
I suppose I am just trying to say yet again that many companies reach for BigCorp tools when they really ought to be fine with 2-3 DigitalOcean droplets and 1 dedicated DB droplet, plus 1 extra for backups.
It's also not just the initial time saving. After implementation, infrastructure maintenance is almost non-existent because the services are all managed for you, and you can focus on providing direct value instead of worrying about whether your infrastructure can meet your needs.
You also have to consider that there are limits to how parallel an application can be - Amdahl's Law - so at some point even throwing hardware at a scaling issue has its limits.
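Amdahl's Law itself is a one-liner, and it's worth seeing how harsh the ceiling is even for mostly-parallel code:

```python
def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Maximum speedup when only part of the work can be parallelised."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# Even with 95% of the work parallelisable, 64 cores give less than 16x,
# and the ceiling as workers go to infinity is 1/0.05 = 20x.
speedup_64 = amdahl_speedup(0.95, 64)
```

The serial 5% dominates long before you run out of hardware to throw at the problem.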
Of course, there's also a truism that the team who implemented the first pass won't have to support (financially or as a developer) the software when it no longer scales.
I've noticed my compilation speeds got dramatically better (compared to a MacBook Pro and an old-ish i7-3770 desktop PC). And it can handle even the sluggishness of Slack just fine without you noticing a lag, which I view as a huge achievement.
However, one thing my very detailed system monitors are telling me every day is -- 99% of all software we use every day is not parallel enough. So I have this amazingly powerful CPU that only (1) Git garbage collection, (2) PostgreSQL restoring a big backup, (3) Rust compiler and (4) [partially] Elixir compiler can saturate to its full potential.
I'd say that if everybody buys the new AMD Threadrippers and PCIe 4.0 motherboards, RAMs, SSDs and GPUs, we'd all be collectively fine for like 10 years.
The software however, it badly needs more parallel processing baked in it.
In practice most software is light years away from the theoretical limit of "can't be parallelised any further". And I fully agree that throwing hardware at a problem has limits, although they are financial rather than technical, IMO.
As mentioned in another comment down this tree of comments, my 10-core Xeon workstation almost never has its cores saturated yet I have to sit through 5 seconds to 2 minutes of scripted tasks that can relatively easy be parallelised -- yet they aren't.
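As an illustration of how little code that kind of parallelisation can take with a modern standard library (the task names and sleep times below are stand-ins for real build or script steps):

```python
# Running independent script steps concurrently instead of one after another.
# The tasks here just sleep as a stand-in for compile/lint/test work.
import concurrent.futures
import time

def task(name: str, seconds: float) -> str:
    time.sleep(seconds)   # placeholder for real work
    return name

steps = [("lint", 0.2), ("unit tests", 0.2), ("docs", 0.2), ("assets", 0.2)]

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as pool:
    done = list(pool.map(lambda s: task(*s), steps))
elapsed = time.perf_counter() - start
# Serially this would take ~0.8s; in parallel it finishes in roughly the
# duration of the longest single step.
```

Threads are fine here because the stand-in work releases the GIL (as real subprocess or I/O-bound steps would); CPU-bound Python work would want `ProcessPoolExecutor` instead.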
And let's not even mention how my NVMe SSD's lifetime saturation was 50% of its read/write limit...
There's a lot that can be improved still before we have to concern ourselves with how much more we can parallelise stuff. That's like worrying when will the Star Trek reality come to happen.
http://datadraw.sourceforge.net/ (GitHub mirror: https://github.com/waywardgeek/datadraw, as SourceForge seems down)
Edit: maybe I answered that last question myself by finding a GitHub version; it seems waywardgeek maintains it at least enough to keep it running.
https://diesel.rs ? Maybe https://tql.antoyo.xyz/ if you care more about ease of use.
One of the purposes of Datadraw is, for instance, to build SQL databases on top of it.
That's like, a couple full-time developers, AIUI? Maybe even less than that. Perhaps the people who say "people are more expensive than hardware" have a point - at least in the Bay Area. Or you can move to the Rust Belt if you'd like a change.
Exactly. I was responding mostly to the point that most CTOs/management believe you should just let hardware handle it while programmers deliver as fast as they can. He says it is always a balance; you cannot pay for optimized assembly when writing a CRUD application, but I claim we've swung completely to the other end of the spectrum.

For instance, a financial company I did work for had no database indices besides the primary key and left AWS to scale that for them. And then we are not even talking about Mongo (this was MySQL); Mongo is completely abused because it is famous for 'scaling' and 'no-effort setup', so a lot of people don't think about performance or structure at all. They just dump data in it and query it in diabolical ways, trusting the software/hardware to fix that for them.

I recently tried to migrate a large one to MySQL, but it is pure hell because of its dynamic nature; the structure changed completely over time while data from all the past is still in there; fields appeared, changed content type, etc., and nothing is structured or documented. With 100s of GBs of that and no way to be sure things were actually imported correctly, I gave up. They are still paying through the nose; I fixed some indexing in their setup (I am by no means a Mongo expert, but some things are universal when you think about data structures, performance and data management), which made some difference, but MySQL or PostgreSQL would've saved them a lot of money in my opinion. Ah well; at least the development of the system was cheap...
I am definitely a spiritual brother of yours because I love optimising things. But I am very unsure how I would even start a side career on that premise.
Spend a lot of time with funded startups: meetups, conferences, etc. They will be happy to talk about this. But also online; you need to 'dox' nicks sometimes, but when you see quite broad questions on Slack/Reddit about the performance of systems and you find out the asker is some (tech) (co-)founder, you can offer to help.

I do no-cure-no-pay if the system is an MVP and CRUD; I do no-cure-still-pay if the system is larger and already live. That is not because I want to blackmail the company (and if I like the idea you can give me a % instead, all fun and games), but usually because 'wanting to help' is punished when it's 'free', as in no good deed goes unpunished. I did no-cure-no-pay optimising (and other services on) live systems in the past, but as soon as I touched anything, people blamed me for all kinds of data loss (while I'm very careful and always make (offsite) backups) and other misery. So basically I connect with (co-)founders who are in a jam: when they don't have production data yet, I go no-cure-no-pay; when they have production data they need to keep, I will explore, but if I cannot do anything (for that price, mind you; there is always something to do), I still get paid.
There are probably literally a million projects in this world at any time, and growing, that have serious issues, are burning money, and will crash (all the time, or sooner or later) and need help. For instance, I know of a large state-owned postal/courier tracking system that crashes under load every 48 hours. We tried to help them, but they are fine just rebooting (manually!). Fine, that happens too.
I.O.W., if you have a program with millions of users, you make something that performs well enough for people to pay for it. Each CPU cycle wasted then becomes millions of cycles, but since you never get billed for those, it doesn't matter to you.
I wonder if an ecosystem is possible where software providers have to pay for ALL resources consumed. It sounds ridiculous, but having any transaction going on would make monetizing software a lot easier.
It would for the most part boil down to billing the end user for the data stored, the cpu cycles and the bandwidth consumed. A perfectly competitive vehicle. Want to invest in growing your user base? Pay part of their fees and undercut the competition.
It would make it more logical if they didn't own the device. The hardware can scale with usage. You just replace the desktop, console or phone with one better fit for their consumption.
Programmers keep saying this, and users keep complaining about slow software.
But what does that even mean? A 3Ghz quad-core can do 12 billion things per second yet I still regularly experience lags keeping up with typing or mouse movements, scrolling webpages, redrawing windows... the actual interactive experience has gotten much worse since the 90s.
I learned this by greatly improving a scheduling system algorithm that could schedule 10-12 related (to each other) medical procedures while accounting for 47 dynamic rules (existing appointments, outage blocks, usage blocks, zero patient waiting, procedure split times, etc.) in under a second, down from the existing algorithm's 13 seconds. You know what? It didn't matter. That was our speed-test scenario (the most realistically complex one a customer had).
The customer was fine with 13 seconds because it was so much faster than doing it by hand and these customers were paying hundreds of thousands of dollars for the licenses. Because of this, the improved algorithm was never implemented. It was a neat algorithm though.
Absolute maximum performance has its place, it's just not every place.
I've seen locking brought forward as a critical limit. Long discussions about new hardware and adding nodes and all sorts of expenditure required. We need a larger kubernetes. More GPUs!
I've also been in the situation where we switched to a plain redis queue (LPOP, RPUSH) scheme and gotten 10x the improvement just by lowering message overhead. A lot of the very complex solutions require so much processing power overhead simply because they involve wading through gigabytes. Better alternative solutions involve less gigabytes. Same hardware, different mindset. Not even talking about assembly language or other forms of optimisation being required. Just different philosophy and different methodology.
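The pattern described is just a list used as a queue. Here is a minimal in-memory stand-in for the two Redis calls (with redis-py these would be `r.rpush(key, item)` / `r.lpop(key)`), so the sketch runs without a server; the point is how little machinery the pattern needs:

```python
from collections import deque

class MiniQueue:
    """In-memory stand-in for a Redis list used as a work queue:
    RPUSH to enqueue at the tail, LPOP to dequeue from the head."""
    def __init__(self):
        self._items = deque()

    def rpush(self, item: bytes) -> None:
        self._items.append(item)

    def lpop(self):
        # Redis returns nil on an empty list; mirror that with None.
        return self._items.popleft() if self._items else None

q = MiniQueue()
for msg in (b"job-1", b"job-2", b"job-3"):
    q.rpush(msg)
first = q.lpop()
```

Swapping a heavyweight broker for this shape of queue is exactly where the "less gigabytes, less overhead" win came from in the comment above.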
Perhaps we need programmers with the mental flexibility to run experiments and be open to alternatives. (Spoiler: we've already got plenty of these people.)
Contracting is such a strange world. I've drifted so far into it I've lost the ability to see how salary based people even get work. All I can do is keep the door open for as many people as possible. Sometimes I need to actually assert the door into existence. This was something I didn't know was possible until recently.
There are still improvements being made to the current tech, and new takes on it that aren't yet incorporated in the current crop of consumer processors.
Also I happen to think that what makes a computer fast is the removal of bottlenecks in the hardware. You can take quite an old machine (I have a Core 2 Quad machine under my desk) slap in an SSD and suddenly it doesn't feel much slower than my Ryzen 3 machine.
Sure it is true. It isn't a tech journo writing a quick piece to get some clicks, and I am quite cynical these days.
There hadn't been any competition in the desktop CPU space for years, until 2019.
Also, clock rates haven't increased much since the mid-2000s (the P4 was already pushing toward 4GHz). Clock rate stopped being an indication of speed back then, when I could buy a "slower"-clocked Athlon XP chip that was comparable to a higher-clocked P4.
Also more stuff is getting offloaded from the CPU to custom chips (usually the GPU).
> We need developers who understand these new low level details as much now as we needed that kind of developer in the past.
I suspect that compilers and languages will get better. I work with .NET, and the performance increase from a rewrite to .NET Core is ridiculous.
It might be such a piece and it could still be true.
>In a lecture in 1997, Nathan Myhrvold, who was once Bill Gates’s chief technology officer, set out his Four Laws of Software. 1: software is like a gas – it expands to fill its container. 2: software grows until it is limited by Moore’s law. 3: software growth makes Moore’s law possible – people buy new hardware because the software requires it. And, finally, 4: software is only limited by human ambition and expectation.
We have great optimization tools freely available these days, and when necessary they are used. We also have great standard libraries with most languages that make it fairly easy to choose the right types of containers and other data structures. (You can still screw it up if you want, though.)
As soon as it becomes economically necessary to write more efficient code, we will be tasked with that. I work on professional software and we do a hell of a lot of optimization. Some of it is hard, but a lot of it could be done by regular programmers if they were taught how to use the tools.
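A small example of the "right container" point: the same membership question asked of a list and a set, where only the data structure changes. The list scans linearly; the set hashes.

```python
import timeit

# Membership tests over the same data: a list scans every element in the
# worst case, while a set does a hash lookup.
n = 100_000
as_list = list(range(n))
as_set = set(as_list)

t_list = timeit.timeit(lambda: (n - 1) in as_list, number=100)
t_set = timeit.timeit(lambda: (n - 1) in as_set, number=100)
# The set lookup is typically orders of magnitude faster here, and the gap
# widens as n grows.
```

This is the kind of fix a regular programmer can make once they know to profile; no exotic optimization knowledge required.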
There are quite a few very interesting and solid languages out there. One example of something that's not nearly well enough utilised or widely used is OCaml -- although in its case the lack of true multicore support definitely cripples interest. But the language has an amazing type system that catches a _TON_ of errors (maybe even more than Rust's compiler, not entirely sure). Its compiler is lightning-fast, the fastest I've ever seen in fact. And it is a multi-paradigm language (OOP and functional, with a lot of interesting typing constructs on top, half of which I don't even understand). Etc.
Not advocating for OCaml by the way (I work with Elixir and am looking to get better at Rust lately). It's just an example demonstrating that, again, there are a good amount of very solid languages and runtimes out there but we the programmers are so busy either (a) belonging to tribes or (b) being so damn busy we can't look beyond the tech we do our daily job with -- and then a lot of excellent tech gets left in the dust. :(
Mercury might be one of these tech pieces. And it's definitely not the only one.
Mercury came about as its original author wondered about a mix between Haskell and Prolog.
Not to mention manually written algorithms are, in many cases, more accurate than ML heuristics (for a terrible yet relevant example in the finance industry, identifying the correct sum of a set of numbers).
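The finance example is easy to reproduce: naive binary floating point cannot even sum ten dimes exactly, while an exact decimal type can. A minimal sketch:

```python
from decimal import Decimal

amounts = ["0.10"] * 10   # ten payments of ten cents

float_total = sum(float(a) for a in amounts)
exact_total = sum(Decimal(a) for a in amounts)

# Binary floating point cannot represent 0.10 exactly, so the float total
# drifts slightly below 1.0; Decimal sums the amounts exactly.
```

An exact, manually written summation is trivially "more accurate" than any heuristic here, which is the point: some problems have correct answers, not approximations.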
I kind of disagree with this based on intuition alone. Most developers, professional developers, are using web tech (the JS stack in particular - Node and hyped front-end frameworks). Yet we're seeing "interest" in compiled languages such as Rust, despite almost nobody using it professionally and almost nobody doing much with it beyond simple proofs of concept.
To me it points toward a developing sense of insecurity in modern professional developers that simply being a JS dev isn't really programming/development and they've to "prove" themselves with lower level tech.
Something that indicates that, for me, is that in StackOverflow's 2019 survey the most used tech was JS and what surrounds it, followed by Python and other easy-to-get-going, well-supported tech. Yet the "Most Loved" was Rust.
I could be wrong, and I'm open to being, but intuitively I don't believe the interest in performant technologies is a reaction to the sheer bloat we've seen, particularly on the web front.
>Programmer skill will become more important in the future.
My prediction on this, not so AI specific, is that developing and deploying web-tech will continue to become easier and easier, meaning it'll take less people to do it. Sure, work may arise from developing countries/economies to supplant a drop in demand for it in the developed world, but maybe not.
Combined with a potential bubble burst in tech, I think those relying on web dev for a living could be in trouble in the coming decade.
I don't foresee much in terms of companies trying to optimize operational costs by instructing their devs to write their code more efficiently with memory/performance in mind to reduce operating costs, and thus spur a push toward jumping on compiled languages. If anything, cloud computing will continue to get cheaper and cheaper as the big 3 continue to try and absorb as much marketshare as possible.
When we could, we optimized for ease of writing code, and it led to bloated and slow systems. This is the current status of things.
We are optimizing again for performance, but we want to have our cake and eat it, we want both the performance and the ease of writing code. And the reduction of bugs in the end result.
This change takes time, perhaps decades. Longer than bubbles and market growth. It takes time because we are curious, and we want to test all possibilities. We want fast games, easy abstractions, zero bugs, the whole package.
The way to maintain the easy abstractions, or even increase the level of abstraction while still increasing performance, is to ditch the proliferation of massive general-purpose programming languages and adopt specific DSLs that fit the problem domain. I feel like the language-oriented programming paradigm that Felleisen of Racket champions is certainly in this spirit, but I would like to see a language core that is specifically tailored to performance concerns.
Not that this is important to the general concept, but this is the current focus of my language design efforts, hoping for a first public showing in April 2020.
Although, in re-reading the above, I made a grave mistake in contrasting most used with most loved. SO's "most loved" is measured by "of those professionally using this language, how many responded that they love using it", so in the case of Rust it's 83.5% of the ~3% using it professionally.
In retrospect I think my point would be better made by pointing at the difference between the technologies being used professionally and the technologies "most wanted", where Go and Rust are in the top 20% of wanted but the bottom 40% for Go and bottom 20% for Rust.
I'll leave the original intact for posterity but what's cited is done so erroneously.
> I expect that programming in the future will be more about getting the AI to do what you want rather than writing code directly.
This is the clear endgame. The question is how long it takes to get there.
If this is universally quantified, where are the scientific HPC simulations written in Python, AAA video games written in Haskell, and fintech trading apps written in Lisp?
I am not claiming that interpreted languages can never produce acceptable results; it's just more nuanced than a forall-type proposition.
Addendum: I am aware C# could sort of be ‘interpreted’ and Unity is C#, so there is at least some evidence in the game category, but I’d quibble over the best-in-class C#/Unity game being considered 100% C#.
This is where I see the SRE (site reliability engineering) role. Developers making changes are put into a position where they measure the cost impact of a decision.
It's these feedback loops, and the practices they instill, that I believe we need. New programmers can help break the mold, but without good feedback they'll fall into the same traps.
A lot of the latest revolutions (good or bad, that's up to the reader) - crunching huge data, ML, ever more realistic simulations, etc. - come from ever-faster machines. If that growth stops, the article suggests we do something that was (and still is in some circles) normal in the 70s-80s with home computers and consoles: because you could not upgrade the hardware, and almost nothing was compatible with the next generation (which is the most common reason the IBM PC won), you optimised the software to get everything from the existing hardware on your desk. And people are still doing that.
One of my personal miracle examples: my first love was the MSX computer, a Z80 machine at 3.58MHz with (in my case) 128KB of RAM. This machine could do nice games for the time and some business applications. Many years later, that same physical hardware (I still have my first one) can do this. Obviously the hardware was always capable of it, but it took many years (decades) for programmers to figure out how to get every ounce of performance and memory utilisation out of these things and push them beyond what anyone thought possible.
If the improvements in performance stagnate, there is a lot of room for getting the most out of existing hardware. In the case of modern hardware, though, I would expect the geniuses who get to that point to build a language that compiles to this optimised optimum, rather than hand-coding and optimising applications the way the SymbOS guy did.
Case in point: a matrix library I used to use needed to do a full row/column pass on each lookup. We put a layer between it and our code and reduced the lookups required by 30%. We were processing the same amount of data and getting the same results, but in far less time. That layer also reduced memory requirements, so we could process larger datasets faster on the same hardware. That's just one example.
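A minimal sketch of that kind of interposing layer, in Python. The names here (`MatrixLib`, `get_row`) are hypothetical stand-ins for the real library, which the comment doesn't name; the point is only that repeat lookups hit a small cache instead of triggering another full row pass.

```python
class MatrixLib:
    """Stand-in for the slow library: every lookup walks the full row."""
    def __init__(self, data):
        self.data = data
        self.lookups = 0  # count how often the expensive path runs

    def get_row(self, i):
        self.lookups += 1
        return list(self.data[i])  # full row pass on every call

class CachedMatrix:
    """The interposing layer: identical interface, cached results."""
    def __init__(self, lib):
        self.lib = lib
        self.cache = {}

    def get_row(self, i):
        if i not in self.cache:
            self.cache[i] = self.lib.get_row(i)
        return self.cache[i]

lib = MatrixLib([[1, 2], [3, 4]])
m = CachedMatrix(lib)
for _ in range(10):
    m.get_row(0)  # only the first call reaches the library
print(lib.lookups)  # 1
```

Because the layer exposes the same interface, the calling code doesn't change at all; only the number of expensive library calls does.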
Your choice of CPU and other hardware isn't always the limiting factor. Even the language choice has an impact. Some languages/solutions require more data processing overhead than others to get the same final result.
Even your program's Makefile or module composition can affect compile performance. I remember a code generator we included that regenerated a massive amount of code on each run because its input files appeared changed. We improved it by a ridiculous amount simply by hashing its inputs and comparing the hashes before running the generator. Simply not running that code generator every time sped up the build significantly: 30 minute build times reduced by 5-10 minutes, on the same hardware. And it was easily triggered by a trivial file change.
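A rough sketch of that hash-and-compare trick in Python. The file names and the `generate` callback are illustrative (the comment doesn't describe the actual build system); the idea is just to stamp a digest of the inputs and skip the expensive step when it matches.

```python
import hashlib
import pathlib
import tempfile

def inputs_digest(paths):
    """Hash all generator inputs into one digest."""
    h = hashlib.sha256()
    for p in sorted(str(p) for p in paths):
        h.update(pathlib.Path(p).read_bytes())
    return h.hexdigest()

def maybe_generate(inputs, stamp_file, generate):
    """Run the (expensive) generator only when the inputs changed."""
    digest = inputs_digest(inputs)
    stamp = pathlib.Path(stamp_file)
    if stamp.exists() and stamp.read_text() == digest:
        return False  # inputs unchanged: skip the generator
    generate()
    stamp.write_text(digest)
    return True

# Demo with a throwaway directory and a counting stand-in generator.
with tempfile.TemporaryDirectory() as d:
    src = pathlib.Path(d) / "schema.txt"
    src.write_text("v1")
    stamp = pathlib.Path(d) / ".gen.stamp"
    ran = []
    maybe_generate([src], stamp, lambda: ran.append(1))  # runs: no stamp yet
    maybe_generate([src], stamp, lambda: ran.append(1))  # skipped: unchanged
    src.write_text("v2")
    maybe_generate([src], stamp, lambda: ran.append(1))  # runs: input changed
    print(len(ran))  # 2
```

In a Makefile-driven build the same effect can be had by making the generator's output depend on a stamp file rather than on the generator rule itself.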
Still, even twenty minutes is a long time to wait and see if your latest change actually works. In the foreseeable future, there will be complex projects that take a long time to build. You will eventually have to touch things that everything else depends on and recompile most of the code. People who work on these projects can always benefit from faster desktop-class hardware.
Having much better hardware in these cases helps you be actually productive and not twiddle your thumbs waiting for the compiler.
My philosophy at the moment is to use HPC only when I've exhausted other possibilities. I think many people jump to HPC prematurely. The simpler approaches are so much cheaper that I think it's usually worthwhile. I'm skeptical of the argument that it's cheaper to use HPC than it is to use more efficient methods in this case, because the more efficient methods are often something like a few days spent reading to find the right equation or existing experimental data vs. at least that much setting up a simulation and longer to run it.
Edit: Bill Rider has a bunch of blog posts that make similar points:
None of this is extraordinary now, but the result was that we cut budget requirements and processing times several times over using code improvements alone. A lot of the time, the head-on solution just needed optimisation. Sideways improvements, such as small optimisations, helped too. The more exotic equipment is still useful, but it's an accelerator.
One last thing: we received a LOT of grief and criticism for our approach. There was peer pressure to use particular solution types even though they were wildly inappropriate. We had funding pulled, or threatened to be pulled, by some of our backers. One lesson we learnt: don't underestimate how vested certain interests are in the use of various toolkits. "Use this or else!" is the not-so-subtle threat.
I'm so glad to not rely on only academic work now.
I should have spoken more generally to OP's point though. What I was really hoping to get at is that there are applications other than games that require non-trivial amounts of compute, and speeding them up would make meaningful differences in their users' lives.
And by the way I believe the software code is not the only place which could be made more efficient. What if we removed all the legacy stuff from the x86 architecture - wouldn't it become more efficient? What if we designed a new CPU with a particular modern programming language and advanced high-level computer science concepts in mind - wouldn't it make writing efficient code easier?
Also, what are the actual tasks we need so much number-crunching power for, besides things of questionable value like face recognition, deep-fake, passwords cracking and algorithmic trading?
Rendering is also an insanely parallelisable task. In the worst case we can always slap two of the same GPU on one card and have them each render half the screen, or, for VR, one GPU per eye.
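As a toy illustration of why split-frame rendering parallelises so cleanly (plain Python threads standing in for GPUs here; real split-frame rendering happens at the driver level, not in application code), each half of the scanlines can be rendered with no knowledge of the other half:

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 8, 4

def render_rows(rows):
    # Stand-in for real shading work: one value per pixel,
    # computed independently of every other row.
    return [[(x + y) % 2 for x in range(WIDTH)] for y in rows]

top = range(0, HEIGHT // 2)
bottom = range(HEIGHT // 2, HEIGHT)

# Two workers, one half of the frame each; join the halves afterwards.
with ThreadPoolExecutor(max_workers=2) as pool:
    half_a, half_b = pool.map(render_rows, [top, bottom])
frame = half_a + half_b
print(len(frame))  # 4
```

There's no shared state between the two workers, which is exactly the property that makes "one GPU per eye" practical.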
One of the biggest pieces of bloat I've seen is doing the same thing in multiple places. When the new feature isn't an improvement over the old workflow in 90% of cases, the efficiency gained in the other 10% is lost in those 90%.
Sounds like an interesting read; do you mind sharing a link (or submitting it to HN)?
I understand that 3D has thermal issues but couldn't this be prevented by increasing (dead) dark silicon and maybe water cooling inside the 3D chip?
Not directly comparable, but brains are the state of the art of computing, and they are three-dimensional.
Cloud computing and SaaS have extended the deadline for coming up with an answer to "What comes after Moore's Law." But it is much more likely to not be based on every coder learning what us olds learned 40 years ago. Instead, optimization is more likely to get automated. Even what we call "architecture" will become automated. People don't scale well, and the problem is larger than un-automated people can solve.
Beyond that, developers being conscientious of what they send over the wire, and being just a bit critical of what the framework or ORM produces also can yield substantial gains.
I say this as a “DevOps” guy who is responsible for budget at a mid-size startup, where we’re hitting scale where this becomes important. We save about 8 production cores per service that we convert from Rails to Go. Devs lose some convenience, yes, but they’re still happy with the language, and they’re far from writing hyper-optimized, close to the metal code.
Elixir itself is an almost completely stay-out-of-your-way language as well -- meaning that if your request takes 10ms to do everything it needs, it's almost guaranteed that 9.95ms of those 10 are spent in the DB and in receiving the request and sending the response; Elixir itself takes almost no CPU resources.
I worked with a lot of languages, Go/JS/Ruby/PHP/Elixir included. Elixir so far has hit the best balance between programmer productivity and DevOps happiness. (Although I can't deny that the single binary outputs of Go and Rust are definitely the ideal pieces to maintain from a sysadmin's perspective.)
It's not that Ruby is 100x slower than Elixir (of course it's not). It's just that Rails is so inefficient compared to Phoenix.
Sinatra, Phoenix, Rocket.rs, and a ton of others are specially crafted to stay out of your way and use as little CPU as possible. And yep, as we both agree, in these cases the 3rd-party I/O is the bottleneck.
Out of everything I worked with in the last 15 years I'd heartily recommend Rust for uber-performant-yet-mostly-easy-to-use language.
As for my own opinion: yes, optimization is key, but we gotta remember not to make it premature. Take advantage of the fast hardware to actually create something; once we know that the something is viable, let's refactor and optimize.
I've seen many products die simply because customers get frustrated with laggy or buggy experience and leave.
By the time the businessmen wake up, it's usually too late.
I'm lucky at work we write lots of stuff to avoid the tell/mound, but hello! where is the rest of the industry on this?
[You can use our stuff if you like, it is all public. Let's rebuild together.]
This is remarkably accurate for games as well. Insurgency: Sandstorm, for example. I was full of hope when I learned it was being developed in Unreal Engine, which supports large-scale combat much better than Insurgency's Source engine. Unfortunately, when it came out it performed much worse than its predecessor. Working with these engines has become so easy you don't really have to 'think' anymore and can just keep throwing stuff in.
For all the programmers out there -- _how do we do this?_ I came into programming through Matlab and Python in Economics and Data Science. I don't have formal training in software engineering. I know some C, some Fortran, and have a journeyman's understanding of how my tools interact with the hardware they run on.
Where can I learn to be extremely efficient and treat my operating environment as always resource-constrained? Am I correct in seeing the rise of point-and-click cloud configuration hell-sites like AWS as masking the problem by distributing inefficiency? (Sorry if unrelated; I spent hours debugging Amazon Glue code last night and it struck me as related.)
In other words -- how can we tell what is the path forward?
Meaning there's no point in optimizing an expensive function if 99% of your program's memory and run time is spent in a different function.
This means the absolute most important skill to writing efficient software is not assembly language skills, but profiling so you know where to focus your efforts in the first place.
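For instance, Python's stdlib `cProfile` makes that measurement cheap. In this sketch a function called fifty times barely registers, while a function called once dominates, which is exactly the kind of surprise profiling exists to surface:

```python
import cProfile
import io
import pstats

def cheap():
    # Called 50 times, but each call is trivial.
    return sum(range(100))

def expensive():
    # Called once, yet this is the real hot spot.
    return sum(i * i for i in range(500_000))

def main():
    for _ in range(50):
        cheap()
    expensive()

profiler = cProfile.Profile()
profiler.runcall(main)

report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(5)
print(report.getvalue())  # 'expensive' sits near the top despite one call
```

Guessing would point at `cheap` (fifty calls!); the profile points at `expensive`. Optimise where the report says, not where intuition does.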
Maybe there's no business point in optimizing those. But I feel this line of thinking got us into the current mess to begin with. Everybody is either like "we can't afford to optimize" (blatant lie at least 80% of the time btw) or "nah, not my effing job".
Plus that philosophy only really works when your business is fighting for survival in its initial stages. After you stabilize a little and have some runway you absolutely definitely should invest in technical excellence because it also lends itself pretty well to preventing laggy and/or buggy user experience (and those can bleed your subscriber numbers).
My guess is that we will slowly approach this wall and spend a lot of time trying for incremental gains, trying to avoid the inevitable, which would be the design of new chipsets with new instructions, sets of new languages explicitly designed to take advantage of the new hardware, and then tons of advances in compiler theory and technology. On top of it, very tight protocols designed for specific use.
I think we have layers upon layers of inefficiency, each using what was at hand. All reasonable things to do, in the short-term, based on the pressures of business. But in the end of the day we're still transmitting video over HTTP, of all things. Sure, we did it! But you can't tell me that it is efficient or even within the original scope of the protocol's concept.
Naturally, I think the whole thing would run about a trillion dollars and take armies of geniuses, but it would at least be feasible, just ... it would require a lot of will. And money.
1) Hardware that doesn't change. One C64 is just like every other C64 out there. You knew what the hardware was, and since it doesn't change, you can start exploiting undefined behavior because it will just work.
2) The problem domain doesn't change---once a program is done, it's done, and any bugs left in are usually not fixed. The problem domain was fixed because:
3) The software was limited in scope. When you only have 64K of RAM (at best---a lot of machines had less; 48K, 32K, and 16K were common sizes) you couldn't have complex software, and a lot of what we take for granted these days wasn't possible. A program like Rogue, which originally ran on minicomputers (with way more resources than the 8-bit computers of the 1980s), was still simple compared to what is possible today (it eventually became NetHack, which wouldn't even run on the minicomputers of the 1980s, and it's still a text-based game).
4) The entire program is nothing but optimizations, which make the resulting source code hard both to follow and to reuse. There are techniques that no longer make sense (embedding instructions inside instructions to save a byte) or that can make the code slower (self-modifying code causing the instruction cache to be flushed), and they make it hard to debug.
5) Almost forgot---you're writing everything in assembly. It's not hard, just tedious. That's because at the time, compilers weren't good enough on 8-bit computers, and depending upon the CPU, a high-level language might not even be a good match (thinking of C on the 6502---horrible idea).
Of course, except when it doesn't. A game that hits the C64 hard on a PAL-based machine may not work properly on an NTSC-based machine because the timing is different.
Bug fixes for video games started happening in the 1990s with the rise of PC gaming. Of course, PCs didn't have fixed hardware.
EDIT: Add point #5.
I can't prove it, but I intuitively feel there's a lot of spite out there. Many people are unhappy with the status quo but are also unhappy with the idea of sacrificing their resources for everybody else -- and the beneficiaries will likely not only be ungrateful; they might try to pull an Oracle or an Amazon and sue the creators over the rights to their own labour.
Things really do seem stuck in this giant tug of war game lately.
There isn't a single place to learn how to be efficient; it's better to start by being extremely curious about how things actually work. A scary number of people I've met do not even attempt to learn how the library functions they use actually work.
> I always try to imagine a physical character performing a task that i'm trying to code. How far does imaginary character needs to travel, how many trips do they need to make.
Dude, that's why we have optimising compilers. Functional programming is demonstrably less efficient on our imperative/mutable CPU architectures, but a lot of compilers are extremely smart and turn those higher-level FP languages into very decently efficient machine code that's not much worse than what GCC produces for C++. The OCaml and Haskell compilers in particular are famous for this. They have shrunk the gap between FP and the languages closer to the metal by a lot, and even if they're not 100% there, I'm seeing results that make me think they're 75-85% there.
We need languages that rid us of endlessly thinking about minutiae and we must start assembling bigger LEGO constructs in our heads if we want anything in IT to actually get unstuck and start progressing again. (Of course, this paragraph doesn't apply to kernel and driver authors. They have to micro-optimise everything they can on the lowest level they can. That's a given.)
> Scary number of people I've met do not even attempt to learn how a library functions they use actually work.
I couldn't care less. How a library function works is an implementation detail; I only need to know what it does. That's why it's a 3rd-party library, after all. The creator might notice a hot path during stress tests and swap that implementation detail for an entirely different algorithm and/or data structure. And boom: your code, which optimised for an implementation quirk you weren't supposed to look at in the first place, is now slow or even buggy.
Compilers are what mediate between these two domains, but tend to become more bloated as they have to accommodate both more diverse hardware and more numerous languages.
This helps the working programmer ignore the problem of writing good code but only for so long. It only delays the inevitable as the returns from clever compilation can't go on forever, and in fact these returns become more volatile as hardware architectures become more complex (typically through more cores or extra caches, incurring synchronization costs). Thus for maximum performance through binaries one would have to practice tweaking compiler settings which just creates another layer of abstraction and defeats the point of having this step automated for you.
Programmer training in particular needs to become both more comprehensive and more specialized. More comprehensive means knowing how each layer of abstraction gets built up from the most common machines (like x86). More specialized means filtering out a lot of people who were trained-for-the-tool and facilitating more cross collaboration between those that can program in a domain but not program for performance. This might mean better methodologies for prototyping across domains or experimentation with organizational structures to complement such methodologies.
Functional algebraic programming as a paradigm still seems somewhat underrated to me as a way of cross-cutting conceptual boundaries and getting programmers refocused on how their code is interpreted from the point of denotation. But it comes at great risk from continuing the trend towards more redundant abstraction which is responsible for bloatware.
At that point it seems that knowing how these problems are solved without classes, types, and libraries, or at least how classes, types, and libraries resolve the complexities of just doing it with the native capabilities of the operating environment (recursing down to the point of maximal control), might be a big improvement, as it means reversing the greater-abstraction trend.
By these criteria, languages like OCaml and Rust seem to make the cut. A lot of good ideas from these languages seem to seep into the design of others. But the white whale is browser/web programming, as the browser has become the de facto endpoint for universal application deployment. WASM may or may not fix this. But then we just get to compilers again.
This talk did the most for developing my point of view here: https://www.youtube.com/watch?v=443UNeGrFoM
Choice quotes include "If you're going to program, really program, and learn to implement everything yourself" and "At first you want ease, but in the end, all you'll want is control."
Or just take up another field. We probably need more farmers and doctors than programmers now.
I absolutely agree! I am gradually learning both and I am just getting so angry that I didn't know about OCaml like 10-15 years ago. :( I was just so damn busy surviving and being depressed for a heckton of [dumb] reasons for 15 years. And then I woke up.
Now I am just a regular web CRUD idiot dev who, even though he was very clever and inventive and creative in the past, nowadays seems to get pissed at small details like configuring web frameworks (even though I am still much better than a lot of others, I dare say -- proven with practice... or so I like to think). And now I have to work against the negative inertia of my last 15 years and learn the truly valuable concepts and how they are implemented in those two extremely efficient, if a bit quirky in syntax, languages.
But it seems every time somebody says "let's just keep these N languages and kill everything else", no discussion is possible... And I feel we really must only keep a few languages/runtimes around and scrap everything else.
I fear that only when people realized the economy needed large mammals to cross a bridge at one time did they really engineer bridges to support that weight. I think the same metaphor applies to computing.
For things to go well and optimally, the pendulum should never be at the extremes. Sure, you guys are in a hurry. OK. But I must protect my name and your interests and do a good job as well. Don't make me emulate a bunch of clueless contractors, please. Just go hire them.
Businessmen aren't very good at compromises when it comes to techies. I am still coming to terms with that fact and to this day I cannot explain its origins and reasoning well.
There are exceptions to this, as with everything, but it's not as easy as this article makes it sound, i.e. "Just make faster stuff dummy!" There's always a cost.
The problem is that software practices have gotten so bad that a simple text messenger or email client uses at least as many resources as that program that's streaming HD video within a virtual reality, just to send or receive a few bytes of text now and then.
I'd be ok with losing 30-40% of the overbloated apps, because then they could be replaced with apps that don't need 2GB of dependencies to left-pad a string. We've really gone overboard on the "code reuse is great" and "don't reinvent the wheel" to the point that every program tries to include as much as possible of all code ever written and every wheel ever designed.
Dude, I agree to lose at least 80% of them, most are useless and with bad UX on top of that. Even worse: they are distracting.
At some point hiring the programmers to pour software by the kilogram becomes a visible problem -- when the businessmen wake up to the fact that the amortised cost of a job sloppily done (say, over the course of the next 2 years) is much higher than investing 20-30% more upfront. That's what the article is arguing for, IMO.
I'll also reminisce a bit: back in the 2000s, my 266MHz, 64MB, 4.1GB-HDD PC would let me install a 2GB full-featured third-person adventure game (Legacy of Kain: Soul Reaver, for example) worth nearly double-digit hours of play; today it takes 2x the disk space to install a basic platformer giving 1-2 hours of fun. Every new game lags to hell on a new PC because I opted for a 1-year-old GFX card. I can view a PDF nicely with SumatraPDF, yet Adobe Acrobat Reader takes 3-digit MBs to offer the same feature and 5x more time to start. I could use IRC in the 2000s, while Slack takes all the RAM I have available. A website back in the day would be a few kB; people here frequently compare HN with Jira, or note how funny it is that Netflix has to spend engineering effort improving time-to-first-render on its landing page, which is static!
Those are facts, but they're not good comparisons: Soul Reaver vs Assassin's Creed is a bad idea, because back then people didn't mind if the grass was just a flat texture or the hero looked like walking cubes. SumatraPDF can open a PDF, but Adobe Reader gives me annotation, form filling, signing, etc. NFS2 was just racing; NFS: Heat players demand customising exhaust gas colour. The Netflix home page loads more images combined than anything "back in the day" and must adapt to big and small screens so it looks great everywhere. Jira lets me drag-and-drop a ticket, while back in the day it took 3x the time to update the same ticket across several form refreshes. HN is the simplest CRUD; it just lets me vote and post basic text -- heck, it delegated search to Algolia (a different service)! The features Slack offers would require 5-7 extra services if I were to use IRC.
But those kinds of realities don't get posts upvoted, so instead it's always ranting about why WhatsApp needs more resources than the SMS app when both let me send text to someone else.
Anyway, things change over time. In the 2000s, my PC would lag if I opened MS Word while Windows Media Player played an HD video, and a game would crash if I tabbed out of it to check something. But now I have 20+ tabs open with live-updating stock tickers and text infested with hundreds of ad-tracking things, while a tiny window plays the current news in the corner as I type away happily in the IntelliJ IDE, with an ML model training in the background. Now I can also record an HD version of my gameplay and tab out, too. I think complex development will eventually move to the cloud; we'll probably have high-speed internet everywhere and online IDEs or similar, so everything happens in the cloud. Just as a 4GB HDD cost a fortune in the 2000s but the same price gets me 100x the capacity now, cloud resources will improve while prices go down. :)
However, saying that things are just fine today is not strictly true. You are mostly correct but there's a lot of room for improvement and some ceilings are starting to get hit (people regularly complain that Docker pre-allocates 64GB on their 128GB SSD MacBooks, or that Slack just kills their MacBook Air they only use for messaging during travels). And still nobody seems to care and then people like you come along and say "don't complain, things were actually much worse before".
...Well, duh? Of course they were.
But things aren't as rosy as you make them look. Not everybody has an ultrabook or a professional workstation. I know some 50 programmers who are quite happy using MacBook Pros from 2013-2015. Those machines are still very adequate today, yet it's no fun when Slack and Docker together take away a very solid chunk of their resources -- for reasons not very well justified (Docker, for example, could have just preallocated 16GB or even 8GB and made the files grow over time, damn it!).
TL;DR -- Sure, things weren't that good in the past, yeah. But the situation today is quite far from perfect... and you seem to imply things are fine, which I disagree with.
(BTW: thanks for the nostalgia trip mentioning Legacy of Kain! They'll remain my most favourite games until my death.)
I make this point as someone whose job is Haskell. Too many people expect awesome magic sauce and basically write the same old imperative stuff in functional programming languages: not in the small but in the large. There's still plenty of benefit of using a good language for that, but you won't get zomg auto-parallelism.
It's quite comical and sad to watch at the same time.
I agree with the article's title: we really need a new breed of programmers.
I'm new to FP myself, and it seems like, done wisely, it simplifies multithreaded, parallel processing quite a bit.
Haskell helps loads here, but the mechanisms are a lot more complex and nuanced than the circa-2000 ideology you were describing.