
We’re approaching the limits of computer power – we need new programmers - notinventedhear
https://www.theguardian.com/commentisfree/2020/jan/11/we-are-approaching-the-limits-of-computer-power-we-need-new-programmers-now
======
pjc50
The major problem areas are those where it's economically "best" to do the
computationally inefficient thing.

The obvious example is the quick-to-build MVP, but many of the bigger problems
come from platform conflicts. Because we have at least five different actively
uncooperating operating system platforms, it's hard to build portable native
apps - so people build electron apps instead. We also use the web browser as a
competitive battleground; due to coordination problems only one programming
language and UI model is possible, although another is creeping in via
webassembly.

Then there's the ongoing War On Native Apps. Every platform holder would love
to take the 30% cut of the profits and veto which applications can run on the
platform. We're left with Windows (non-app-store) and sort of MacOS (although
watch out for notarisation turning into a veto in the future). And sadly this
has very real benefits in malware prevention. Systems which run arbitrary code
get exploited.

Beyond that there's cryptocurrency, where finding a less-efficient algorithm
is a design goal to maximise the energy wasted, in order to impose a global
rate limit on "minting" virtual tokens.

~~~
michaelbrave
Flutter looks promising for solving a lot of the platform conflict problems.

~~~
pitaj
Except then you have to use Dart, or call into Dart from some other language.
There are many people who dislike Dart or otherwise prefer to use other
languages.

~~~
GordonS
I really feel like Flutter would have taken off so much more if Google had
just used TypeScript instead of using it to push Dart.

------
champtar
One of the best 2h practical courses I ever had was simply: write the fastest
square matrix multiplication you can. You could use any language, any
algorithm, just no libraries. The target was a 32-core CPU server (this was
~10 years ago). At 5000x5000, all the Java and Python attempts ran out of
memory. In C, we tried some OpenMP and some optimized algorithms, but in the
end the best trick was to flip one of the matrices so that memory could
always be prefetched. Out of curiosity another student tried the GNU
Scientific Library, and it turned out to be ~100 times faster. My takeaway:
find the right tool for the job!
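
For the curious, here is a minimal sketch of that "flip" trick (my
reconstruction, not the course's actual code): transposing one operand up
front lets the hot inner loop read both matrices with stride 1, which keeps
the hardware prefetcher fed.

    #include <stdlib.h>

    /* c = a * b for n-by-n row-major matrices. b is transposed once
     * (O(n^2)) so the O(n^3) loop below walks both inputs sequentially
     * instead of striding through b column-wise and missing cache. */
    void matmul_flipped(const double *a, const double *b, double *c, size_t n)
    {
        double *bt = malloc(n * n * sizeof *bt);
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < n; j++)
                bt[j * n + i] = b[i * n + j];

        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < n; j++) {
                double sum = 0.0;
                for (size_t k = 0; k < n; k++)   /* stride-1 on both */
                    sum += a[i * n + k] * bt[j * n + k];
                c[i * n + j] = sum;
            }
        free(bt);
    }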

A fun read on cloud scale vs optimized code is this recent article comparing
ClickHouse and ScyllaDB ([https://www.altinity.com/blog/2020/1/1/clickhouse-
cost-effic...](https://www.altinity.com/blog/2020/1/1/clickhouse-cost-
efficiency-in-action-analyzing-500-billion-rows-on-an-intel-nuc))

~~~
jonas21
Yeah, I wouldn't be surprised if the majority of code performing large matrix
multiplications these days is written in Python and executed on GPUs by
libraries like TensorFlow and PyTorch. With the right abstractions,
programmers can be "lazy" and still get great performance.

~~~
srg0
Matrix multiplication is usually done by a platform-specific BLAS library
(BLAS is an API, there are multiple implementations, e.g. Intel MKL, OpenBLAS,
cuBLAS). There are some other linear algebra APIs/libraries, but this is
what's used the most.

Most of the numerical code that cares about performance for linear algebra
uses this API and links an appropriate implementation.
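
As a concrete illustration (a minimal sketch assuming a CBLAS-style header,
as shipped by OpenBLAS or MKL), the whole multiplication collapses into one
call, and swapping implementations is just a matter of linking a different
library:

    #include <cblas.h>  /* link with -lopenblas, -lmkl_rt, etc. */

    /* C = 1.0 * A * B + 0.0 * C for n-by-n row-major double matrices. */
    void matmul_blas(const double *a, const double *b, double *c, int n)
    {
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n,       /* M, N, K       */
                    1.0, a, n,     /* alpha, A, lda */
                    b, n,          /*        B, ldb */
                    0.0, c, n);    /* beta,  C, ldc */
    }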

------
ledauphin
I just don't buy this. I cut my teeth as an HPC programmer working in C and
writing lock-free algorithms. There will always be a need for that, but
realistically the vast majority of software being developed is simply not
performance-critical. It's designed to work at human speed.

Advances in language, compiler, and runtime implementations will continue to
keep up with any growth in the need for performant applications for the
foreseeable future, despite the looming collapse of Moore's Law.

~~~
majewsky
> It's designed to work at human speed.

It would be great if most applications worked at human speed. Instead we have
web applications taking 5 seconds to load what is basically 3 records from a
small database.

~~~
ac29
That is human speed though (better than human speed, even). For most human
tasks that need to be done, 5ms versus 5s doesn't really matter.

Consider also that spending an hour at the DMV for them to update a database
entry or two is also human speed.

~~~
wongarsu
What? 100ms is "human speed". When doing anything interactive, the difference
between 25ms and 5s is monumental. Even just for pressing confirmation
buttons, 5s is slow enough that you need some substantially faster-reacting
visual confirmation (a loading animation or whatever) to satisfy humans.

------
rs23296008n1
Usually poorly performant code needs optimisation through a change of approach
or mindset. It is the _way we are thinking about the problem_ that is lowering
performance. Not necessarily the hardware itself.

I've seen locking brought forward as a critical limit. Long discussions about
new hardware and adding nodes and all sorts of expenditure required. We need a
larger kubernetes. More GPUs!

I've also been in the situation where we switched to a plain redis queue
(LPOP, RPUSH) scheme and got a 10x improvement just by lowering message
overhead. A lot of the very complex solutions require so much processing
power simply because they involve wading through gigabytes. Better
alternative solutions involve fewer gigabytes. Same hardware, different
mindset. We're not even talking about assembly language or other forms of
optimisation being required. Just a different philosophy and a different
methodology.
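
A minimal sketch of that kind of plain queue using the hiredis client (the
queue name and message are made up for illustration):

    #include <stdio.h>
    #include <hiredis/hiredis.h>

    int main(void)
    {
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (c == NULL || c->err) return 1;

        /* Producer: push a job onto the tail of the list. */
        redisReply *r = redisCommand(c, "RPUSH jobs %s", "resize:image42.png");
        freeReplyObject(r);

        /* Consumer: pop from the head; a nil reply means the queue is empty. */
        r = redisCommand(c, "LPOP jobs");
        if (r->type == REDIS_REPLY_STRING)
            printf("got job: %s\n", r->str);
        freeReplyObject(r);

        redisFree(c);
        return 0;
    }

No broker, no schema registry, no serialization framework in the hot path --
which is exactly where the message overhead went.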

Perhaps we need programmers with the mental flexibility to run experiments and
be open to alternatives. (Spoiler: we've already got plenty of these people.)

~~~
pdimitar
We do have a good number of them indeed, but nobody wants to pay them to fix
most of the IT landscape. Ironic, right?

~~~
rs23296008n1
Or we can't get them past the HR hiring policies that eliminate all
candidates. Been through that myself. I even got multiple tech leads to sit
the tests and watched them fail. _They were already on the team, yet they
would not have been able to get onto the team._ Absurd but true.

Contracting is such a strange world. I've drifted so far into it I've lost the
ability to see how salary based people even get work. All I can do is keep the
door open for as many people as possible. Sometimes I need to actually assert
the door into existence. This was something I didn't know was possible until
recently.

~~~
pdimitar
Can you tell me more? Sounds quite humorous. And quite usual...

------
uncle_j
Every few years something like this gets written. I remember similar things
being written in 2004-2005 before the Core 2 line of processors came out.

There are still improvements being made to the current tech, and new takes on
it that aren't yet incorporated in the current crop of consumer processors.

Also I happen to think that what makes a computer fast is the removal of
bottlenecks in the hardware. You can take quite an old machine (I have a Core
2 Quad machine under my desk) slap in an SSD and suddenly it doesn't feel much
slower than my Ryzen 3 machine.

~~~
gameswithgo
except now it has actually been true for years. clock rates aren't increasing.
advances in performance have been only from things that are tricky for
developers to efficiently leverage (cache, simd, more cores). We need
developers who understand these new low level details as much now as we needed
that kind of developer in the past.

~~~
uncle_j
> except now it has actually been true for years

Sure, it is true. It isn't just a tech journo writing a quick piece to get
some clicks. I am quite cynical these days.

There hasn't been any competition in the Desktop CPU space for years until
2019.

Also, clock rates haven't increased since the mid-2000s (there were 5GHz P4
chips). Clock rate stopped being an indication of speed back then, when I
could buy a "slower"-clocked Athlon XP chip that was comparable to a P4 with
a faster clock.

Also more stuff is getting offloaded from the CPU to custom chips (usually the
GPU).

> We need developers who understand these new low level details as much now as
> we needed that kind of developer in the past.

I suspect that we will get better compilers and languages. I work with .NET
stuff, and the performance increase from a rewrite to .NET Core is ridiculous.

~~~
ahartmetz
Minor correction about desktop CPUs: Ryzen 1 came out in Q1/Q2 2017. Initial
problems were mostly solved by 2018.

------
nickpinkston
We're mostly fighting Murphy's Law, not Moore's Law. As others have said in
this thread, most problems are nowhere near being compute/$ or otherwise
technically limited; they are far more about organizational/political issues
putting vast inefficiencies into these systems and into the priorities that
fund their creation.

~~~
wpietri
Definitely. Honestly, I'd be excited if hardware stopped progressing. Ever-
better hardware and ever-shifting platforms cover up a multitude of
organizational sins. There's much less incentive to write good code given the
rate at which code gets thrown out, and given what people are willing to spend
on their AWS bills before asking if something could perhaps be improved.

~~~
pdimitar
Yep, agreed. But to be fair, hardware will stop progressing pretty soon, IMO.
The PCIe 4.0 backbone is quite strong, and a lot of companies that buy
workstations or servers based on it won't move on from it for quite a while.
Or so I hope.

------
thewebcount
I don't think this article is seeing the whole picture. The author talks about
how programmers used to have to cram a program into 16KB of RAM (or ROM), and
how it had to be efficient. But that came at a huge cost. Reading 6502
assembly, with variables whose names could be at most 6 characters and which
were all global, was a huge pain in the ass!

We have great optimization tools freely available these days, and when
necessary they are used. We also have great standard libraries with most
languages that make it fairly easy to choose the right types of containers and
other data structures. (You can still screw it up if you want, though.)

As soon as it becomes economically necessary to write more efficient code, we
will be tasked with that. I work on professional software and we do a hell of
a lot of optimization. Some of it is hard, but a lot of it could be done by
regular programmers if they were taught how to use the tools.

------
sumanthvepa
The renewed interest in C++ and other compiled languages is an indication of
the need to get more efficient. Programmer skill will become more important in
the future. But they won't be today's skills. I expect that programming in the
future will be more about getting the AI to do what you want rather than
writing code directly.

~~~
thechao
I’m _already_ an AI you can pay to get a computer to do what you “want”; the
problem is ‘what you want’ is so poorly specified, there’s no way to turn it
into an actionable set of steps!

~~~
meztez
This comment has to be the most insightful I have ever read. It is true on so
many levels and captures perfectly the mentality that some senior decision
makers have about AI.

------
pier25
There’s also the environmental factor nobody takes into account. Fewer CPU
cycles means fewer emissions. When a piece of software is used by millions or
even billions, that must be significant.

~~~
xaedes
There is actually a paper about this topic: "Energy Efficiency across
Programming Languages"

~~~
pier25
Thanks I'll check it out!

------
qwerty456127
Why do we need more computer power? I haven't upgraded my laptop since 2009
(well, I've replaced its HDD with an SSD 2 years ago and it made a huge
difference) and I'm okay. Some people insist on photorealistic 3D graphics in
the games they play, I agree that's cool but wouldn't say that's anything
close to important.

~~~
leetcrew
do you ever compile code? at work I have a machine with an i7-7700 (4C/8T),
32GB of RAM, and an SSD. it still takes about 45 minutes to do a full build of
the project I work on, which can easily be triggered by modifying any of the
important header files. if I had to do my job on your laptop from 2009, I'd
never get anything done.

~~~
rs23296008n1
That is the choice of software tool. You are literally grinding through
gigabytes of data. If your tool didn't require so much data processing, it
would be faster. This may or may not be something you can improve yourself.
Often there are workarounds.

Case in point: a matrix library I used to use needed to do a full row/column
pass each time. We put a layer between it and our code, which reduced the
lookups required by 30%. We were processing the same amount of data and
getting the same results, but in far less time. That layer also reduced
memory requirements, so we could process larger datasets faster with the same
hardware. That's just one example.

Your choice of CPU and other hardware isn't always the limiting factor. Even
the language choice has an impact. Some languages/solutions require more data
processing overhead than others to get the same final result.

Even the way your program's Makefile or module composition is structured can
have an effect on compile performance. I remember a code generator we included
that had to regenerate a massive amount of code on each run because its input
files had changed, which a trivial file change could easily trigger. We
improved it by a ridiculous amount simply by hashing its inputs and comparing
the hashes before running the generator. Simply not running the generator
every time sped up the build significantly: 30-minute build times reduced by
5-10 minutes. Same hardware.
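
The whole idea fits in a few lines. A hedged sketch (FNV-1a for brevity; the
file names and generator command are made up, and a real project would hash
every input):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* FNV-1a over a file's bytes: cheap, good enough for change detection. */
    static uint64_t fnv1a_file(const char *path)
    {
        uint64_t h = 0xcbf29ce484222325ULL;
        FILE *f = fopen(path, "rb");
        if (!f) return 0;
        for (int ch; (ch = fgetc(f)) != EOF; ) {
            h ^= (uint64_t)ch;
            h *= 0x100000001b3ULL;
        }
        fclose(f);
        return h;
    }

    int main(void)
    {
        uint64_t now = fnv1a_file("codegen.input");

        uint64_t last = 0;
        FILE *s = fopen(".codegen.hash", "r");
        if (s) { fscanf(s, "%" SCNx64, &last); fclose(s); }

        if (now != last) {
            /* Inputs changed: run the (hypothetical) generator, save the hash. */
            if (system("./generate_code codegen.input") != 0) return 1;
            s = fopen(".codegen.hash", "w");
            fprintf(s, "%" PRIx64 "\n", now);
            fclose(s);
        } /* else: skip the expensive regeneration entirely */
        return 0;
    }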

~~~
leetcrew
I understand your point, I think. c++ has an inefficient compilation model,
and over time projects can end up with very suboptimal build setups. it's
definitely worth spending time to pick the low-hanging fruit like in your
example, or if possible, to choose a language that builds faster.

still, even twenty minutes is a long time to wait and see if your latest
change actually works. in the foreseeable future, there will be complex
projects that take a long time to build. you will eventually have to touch
things that everything else depends on and recompile most of the code. people
that work on these projects can always benefit from faster desktop-class
hardware.

------
Forge36
What the future holds is hard to grasp. The piece shared with me yesterday
was: "we'll spend the next decade removing features, at no loss to
functionality."

One of the biggest pieces of bloat I've seen is doing the same thing in
multiple places, with the new feature not being an improvement over the old
workflow in 90% of cases; the efficiency gained in the 10% was lost in the
other 90%.

~~~
wool_gather
> the piece shared with me yesterday was "we'll spend the next decade removing
> features...

Sounds like an interesting read; do you mind sharing a link (or submitting it
to HN)?

------
rini17
What will be probably most interesting to watch: the collision between
hardware constraints and ever-increasing complexity of standards like Unicode
and HTTP.

------
The_rationalist
Is there any progress, or path to progress, toward making competitive 3D
CPUs/ASICs?

I understand that 3D has thermal issues, but couldn't this be mitigated by
adding (dead) dark silicon and maybe water cooling inside the 3D chip?

Not directly comparable, but brains are the state of the art of computing,
and they are three-dimensional.

------
LargoLasskhyfv
The stock photo shows voltage regulator circuitry beneath the CPU, most likely
SMD capacitors. I wonder if the author thinks _these_ are the parts he is
writing about?

~~~
userbinator
Those are the decoupling capacitors under the CPU socket. A very odd choice of
photograph for this article, but perhaps an ironic hint that we need
programmers who also know more about the hardware...

------
Zigurd
If you look at the volume of software that needs to be produced, and at the
trend to include software in more products, and at the entrepreneurial
imperative that risk capital is the most expensive resource, it looks very
unlikely that handcrafted machine instructions will play a greater role in the
future.

Cloud computing and SaaS have extended the deadline for coming up with an
answer to "What comes after Moore's Law." But it is much more likely to not be
based on every coder learning what us olds learned 40 years ago. Instead,
optimization is more likely to get automated. Even what we call "architecture"
will become automated. People don't scale well, and the problem is larger than
un-automated people can solve.

~~~
thedevelopnik
I don’t think that handcrafted machine instructions are what is necessary.
Even switching from languages like Ruby or JS (Node) to languages like Go or
Elixir yields tremendous efficiency improvements.

Beyond that, developers being conscientious about what they send over the
wire, and just a bit critical of what the framework or ORM produces, can also
yield substantial gains.

I say this as a “DevOps” guy who is responsible for budget at a mid-size
startup, where we’re hitting scale where this becomes important. We save about
8 production cores per service that we convert from Rails to Go. Devs lose
some convenience, yes, but they’re still happy with the language, and they’re
far from writing hyper-optimized, close to the metal code.

~~~
pdimitar
You mentioned it yourself early in your comment, but IMO going from Rails to
Go is a bit weird. Rails to Phoenix (Elixir) has turned out to be much easier
and more productive for many devs.

Elixir itself is an almost completely stay-out-of-your-way language as well --
meaning that if your request takes 10ms to do everything it needs, it's
almost guaranteed that 9.95ms of those 10 are spent in the DB and in receiving
the request and sending the response; Elixir itself takes almost no CPU
resources.

I worked with a lot of languages, Go/JS/Ruby/PHP/Elixir included. Elixir so
far has hit the best balance between programmer productivity and DevOps
happiness. (Although I can't deny that the single binary outputs of Go and
Rust are definitely the ideal pieces to maintain from a sysadmin's
perspective.)

~~~
Zigurd
I was going to say that the more performant language thing was likely to
disappoint. As you point out, database access is going to be roughly constant.
But Rails -> Go would be an exception.

~~~
pdimitar
Well, yeah. Still, Rails is much slower than Phoenix by the mere fact that its
templating and ORM facilities are extremely inefficient.

It's not that Ruby is 100x slower than Elixir (of course it's not). It's just
that Rails is so inefficient compared to Phoenix.

Sinatra, Phoenix, Rocket.rs, and a ton of others are specially crafted to stay
out of your way and use the CPU as little as possible. And yep, as we both
agree, in these cases the 3rd-party I/O is the bottleneck.

------
lousken
Yeah, it's really annoying when an IS vendor says their solution needs 16GB of
RAM on every computer when it's all just basic stuff like dashboards, graphs,
tables, etc. Even modern PC games don't require that much.

------
luord
There were several good points both for and against the article in this
comment section. I was pleasantly surprised; usually the threads caused by
posts like this turn into "static typing vs dynamic typing" or "functional vs
object-oriented" flamewars.

As for my own opinion: yes, optimization is key, but we gotta remember not to
make it premature. Take advantage of the fast hardware to actually create
something; once we know that the something is viable, let's refactor and
optimize.

~~~
pdimitar
Literally every experienced programmer would like to do this. But when you get
to that last stage the shot-callers are like "nah, it's fine" and you never
get to the optimisation.

I've seen many products die simply because customers get frustrated with laggy
or buggy experience and leave.

By the time the businessmen wake up, it's usually too late.

~~~
codeisawesome
Which means that Business Analysts need to save the world by proving to the
shot-callers things similar to what Amazon found (a few ms of lag in the site
load caused $$$ of revenue loss). The ever improving Observability stack
combined with strong analytics on the client-side can make this possible.
Perhaps regulation around Climate Effects (or carbon taxes on inefficient
software) might also bring about an industry-wide change of attitudes (and
incentives).

~~~
pdimitar
Trouble is, most businessmen I've worked with would give you a blank stare if
you told them they need a business analyst.

------
mjpuser
We also have a mantra against optimizing until you know you need to. It seems
too cost- and time-prohibitive to put these things on the programmer to
maintain; we need to develop tools that help optimize our code. Maybe the next
generation of optimization techniques will run at runtime instead of compile
time. We already have DBs with query optimizers, so maybe there will be
programming languages with optimizers?

------
Ericson2314
We are getting worse and worse latency, though throughput is not bad, but the
real issue is complexity. We just keep piling more crap on top of the old
crap.

I'm lucky that at work we write lots of stuff to avoid the tell/mound, but
hello! Where is the rest of the industry on this?

[You can use our stuff if you like, it is all public. Let's rebuild together.]

------
Priem19
“The only consequence of the powerful hardware I see,” wrote one, “is that
programmers write more and more bloated software on it. They become lazier,
because the hardware is fast they do not try to learn algorithms nor to
optimize their code… this is crazy!”

This is remarkably accurate for games as well. Take Insurgency: Sandstorm, for
example. I was full of hope when I learned it was being developed in Unreal
Engine, which supports large-scale combat much better than Insurgency's
Source engine. Unfortunately, when it came out it performed much worse than
its predecessor. Working with these engines has become so easy that you don't
really have to 'think' anymore and can just keep throwing stuff in.

------
tomrod
This is a topic that really interests me, but I couldn't read the article --
either a paywall, ad-wall, or some other reader-hostile blocker incongruent
with the foundation of the Internet prevents usability. Ah well. I'll join the
conversation regardless.

For all the programmers out there -- _how do we do this?_ I came into
programming through Matlab and Python in economics and data science. I don't
have formal training in software engineering. I know some C, some Fortran,
and have a journeyman's understanding of how my tools interact with the
hardware they run on.

Where can I learn how to be extremely efficient and to always treat my
operating environment as resource-constrained? Am I correct in seeing that the
rise of point-and-click cloud configuration hell-sites like AWS is masking the
problem by distributing inefficiently? (Sorry if unrelated; I spent hours
debugging AWS Glue code last night and it struck me as related.)

In other words -- how can we tell what is the path forward?

~~~
at_a_remove
Honestly? I used to jabber on about this with regard to the still-distant
future of actual nanotechnology ... we need to find the guys who wrote
video games for arcades in the 1980s and press them for their secrets before
so many brilliant tricks are lost to time. They did so much with so little!

My guess is that we will slowly approach this wall and spend a lot of time
trying for incremental gains, trying to avoid the inevitable, which would be
the design of new chipsets with new instructions, sets of new languages
explicitly designed to take advantage of the new hardware, and then tons of
advances in compiler theory and technology. On top of it, very tight protocols
designed for specific use.

I think we have layers upon layers of inefficiency, each using what was at
hand. All reasonable things to do, in the short term, under the pressures of
business. But at the end of the day we're still transmitting video over HTTP,
of all things. Sure, we did it! But you can't tell me that it is efficient,
or even within the original scope of the protocol's concept.

Naturally, I think the whole thing would run about a trillion dollars and take
armies of geniuses, but it would at least be feasible, just ... it would
require a lot of will. And money.

~~~
spc476
The secrets of 1980s video game programmers?

1) Hardware that doesn't change. One C64 is just like every other C64 out
there. You knew what the hardware was, and since it doesn't change, you can
start exploiting undefined behavior because it will just work [1].

2) The problem domain doesn't change---once a program is done, it's done, and
any bugs left in are usually not fixed [2]. The problem domain was fixed
because:

3) The software was limited in scope. When you only have 64K of RAM (at
best---a lot of machines had less; 48K, 32K, and 16K were common sizes), you
couldn't have complex software, and a lot of what we take for granted these
days wasn't possible. A program like Rogue, which originally ran on
minicomputers (with way more resources than the 8-bit computers of the
1980s), was still simple compared to what is possible today (it eventually
became NetHack, which wouldn't even run on the minicomputers of the 1980s,
and it's _still_ a text-based game).

4) The entire program is _nothing_ but optimizations, which makes the
resulting source code both hard to follow and hard to reuse. There are
techniques that no longer make sense (embedding instructions inside
instructions to save a byte) or that can make the code slower (self-modifying
code causing the instruction cache to be flushed), and they make the code
hard to debug.

5) Almost forgot---you're writing everything in assembly. It's not hard, just
tedious. That's because at the time, compilers weren't good enough on 8-bit
computers, and depending upon the CPU, a high level language might not even be
a good match (thinking of C on the 6502---horrible idea).

[1] Of course, except when it doesn't. A game that hits the C64 _hard_ on a
PAL-based machine may not work properly on an NTSC-based machine because the
timing is different.

[2] Bug fixes for video games started happening in the 1990s with the rise of
PC gaming. Of course, PCs didn't have fixed hardware.

EDIT: Add point #5.

------
jingfire
I like the statement saying that software is only limited by human
imagination. Meanwhile it is also the case that better hardware brings more
possibilities to what we can do.

------
stebann
When I read articles similar to this one, I can't avoid asking myself how
universities could take a more integrated approach to the disciplines related
to software engineering and computer science. I know we can't learn all the
stuff that's out there, but some standard organization should be put on the
table. While studying, I felt this lack of "low level" preparation many times.

------
kraig911
For how many years were bridges made only out of wood? I feel the engineers
before us had the foresight to see the possibilities but lacked the tools and
understanding.

I fear it was only when people realized the economy needed to support large
mammals crossing a bridge at one time that they really engineered bridges to
support that weight. I think the same metaphor could be applied to computing.

~~~
pdimitar
Yeah, most people only get creative when they absolutely must, and not one
minute earlier.

------
xvilka
Finally, a time for Electron and JVM to go away.

------
JustSomeNobody
Right now, most organizations (and developers) are focused on developer
speed/productivity. As compute resources plateau, some developers will be
required to focus on compute efficiency and speed. There will always be a
limit to how fast you want your CRUD web app to run vs how much you want to
spend on developers, though.

~~~
pdimitar
You are correct at the outset. I am simply observing that the pendulum has
been at one extreme for a long time now -- businesses _always_ optimise for
minimum time to deliver a new product and then pay very hefty consulting fees
to fix the mess that could have been easily avoided in the first place (by
making the project's development time 20% longer) -- which I am willing to bet
my balls would not have been fatal for the business in, like, 90% of the cases.

For things to go well and optimally, the pendulum should never be on the
extremes. Sure, you guys are in a hurry. OK. But I must protect my name _and_
your interests and must do a good job as well. Don't make me emulate a bunch
of clueless Indians, please. Just go hire them.

Businessmen aren't very good at compromises when it comes to techies. I am
still coming to terms with that fact and to this day I cannot explain its
origins and reasoning well.

------
m0zg
You don't need "new" programmers. Just dust off some "old" programmers who are
now merely 40-50 years old. There's plenty of life still in us, and we can
tell a pointer from a hole in the ground.

~~~
axilmar
We are too expensive for them, it seems.

------
a_ranom_dev
The title should be changed to say we need old programmers, as many comments
have pointed out.

------
matt2000
It seems like there might be a pretty straight-up tradeoff between the
difficulty of developing software and the quantity of software produced. The
more we attempt to optimize at a lower level, the more time development
takes, and the less software someone can make and maintain. So, given that:
would you rather lose 30-40% of the apps you use and like, but have the rest
be faster? Or keep using everything you have now?

There are exceptions to this, as with everything, but it's not as easy as this
article makes it sound, i.e. "just make faster stuff, dummy!" There's always a
cost.

~~~
zozbot234
That's just not true. In fact, we're losing 30%-40% of the software we use and
like _every single day_ simply because people write utter crap and then
pointlessly rewrite it over and over. If we placed more focus on having sane
development practices and good computer assists for developers, we'd
ultimately find it easier to develop software _and maintain it over time_,
such that there'd be little or no need to throw stuff away altogether.

------
n_ary
The article reads to me like one of those posted every 6-8 months with random
thoughts someone had one morning reminiscing about the old days, with oranges
compared to sports cars and a complete disregard for the fact that, as time
marches on, things change, people (customers/users) want more, and
convenience is prioritized.

I'll also reminisce a bit: back in the 2000s, my 266MHz, 64MB, 4.1GB-HDD PC
would let me install a 2GB full-featured third-person adventure game (Legacy
of Kain: Soul Reaver, for example) worth nearly double-digit hours of play;
currently it takes 2x that disk space to install a basic platformer giving
1-2h of fun. Every new game lags to hell on a new PC because I opted for a
1-year-old GFX card. I can view a PDF nicely with SumatraPDF, yet Adobe
Acrobat Reader takes 3-digit MB to offer the same feature and 5x more time to
start. I could use IRC in the 2000s, while Slack takes all of my available
RAM. A website back in the day would be a few kB. I mean, people here
frequently compare HN with Jira, or note how funny it is that Netflix has to
spend engineering effort to improve time-to-first-render on its landing page,
which is static!

Those facts aren't the whole story, though: Soul Reaver vs Assassin's Creed is
a bad comparison, because back then people didn't mind if grass was just a
flat texture or the hero looked like walking cubes. SumatraPDF can open a
PDF, but Adobe Reader gives me annotation, form filling, signing, etc. NFS2
was just racing; NFS: Heat players demand customizable exhaust gas color. The
Netflix home page loads more images combined than anything "back in the day"
and must adapt to big and small screens so it looks great everywhere. Jira
lets me drag-and-drop a ticket, while updating the same ticket back in the
day took 3x the time across several form refreshes. HN is the simplest CRUD;
it just lets me vote and post basic text -- heck, it delegated search to
Algolia (a different service)! The features Slack offers would require 5-7
extra services if I were to use IRC.

But those kinds of realities don't get posts upvoted, so instead posts are
always rants about why WhatsApp needs more resources than the SMS app when
both let me send text to someone else.

Anyway, things change over time. In the 2000s, my PC would lag if I opened MS
Word while Windows Media Player was playing an HD video, or a game would
crash if I tabbed out of it to check something. But now I have 20+ tabs open
that live-update stock tickers, with text infested with hundreds of
ad-tracking things, while a tiny window plays the current news in a corner
and I am typing away happily in the IntelliJ IDE with an ML model training in
the background. Now I can also record an HD version of my gameplay and tab
out too. I think complex development will, in the future, take place in the
cloud; we'll probably have high-speed internet everywhere and online IDEs or
similar, so everything happens in the cloud. Similar to how a 4GB HDD cost a
fortune in the 2000s but the same price gets me 100x the capacity now, cloud
resources will improve while prices go down. :)

~~~
pdimitar
I agree that we the people have the tendency to look with rose-tinted glasses
at the past.

However, saying that things are just fine today is not strictly true. You are
mostly correct, but there's a lot of room for improvement, and some ceilings
are starting to get hit (people regularly complain that Docker pre-allocates
64GB on their 128GB-SSD MacBooks, or that Slack just kills the MacBook Air
they only use for messaging while travelling). And still nobody seems to
care, and then people like you come along and say "don't complain, things
were actually much worse before".

...Well, duh? Of course they were.

But things aren't as rosy and sunny as you seem to make them look. Not
everybody has ultrabooks or professional workstations. I know like 50
programmers who are quite happy to use MacBook Pros from 2013 to 2015. Those
machines are still _very_ adequate today, yet it's no fun when Slack and
Docker together can take away a very solid chunk of their resources -- for
reasons not very well justified (Docker, for example, could have just
preallocated 16GB or even 8GB; make the damn files grow with time, damn it!).

\---

TL;DR -- Sure, things weren't that good in the past, yeah. But the situation
today is quite far from perfect... and you seem to imply things are fine,
which I disagree with.

 _(BTW: thanks for the nostalgia trip mentioning Legacy of Kain! They'll
remain my favourite games until my death.)_

------
jlj
Functional programming is something to watch and learn. It can help take
advantage of multi-core single machines and distributed computing alike,
because immutable data and the mathematics behind pure functions make it
thread-safe. Compared to OOP, there is no locking and there are no race
conditions to worry about, if it's used correctly.

~~~
Ericson2314
Functional programming helps immensely, but I don't think you are describing
it quite right. You cannot do distributed systems without concurrency. Even if
you don't have low-level synchronization failures, you still need to watch out
for determinism. Fortunately we have the math for that (usually order theory).

I make this point as someone whose job is Haskell. Too many people expect
awesome magic sauce and basically write the same old imperative stuff in
functional programming languages: not in the small but in the large. There's
still plenty of benefit in using a good language for that, but you won't get
zomg auto-parallelism.

~~~
jlj
I meant that it enables concurrency and parallelism without having to worry so
much about the mechanics, which helps take advantage of multiple cores as
described in the article. Immutable data structures and pure functions avoid
data corruption when two or more threads are working on the same data. OOP
requires a lot of code to get the same result, true?

I'm new to FP myself, and it seems like, done wisely, it simplifies
multi-threaded, parallel processing quite a bit.

~~~
Ericson2314
I would check out [https://github.com/reflex-frp/reflex](https://github.com/reflex-frp/reflex),
which is truly a godsend for concurrency but actually uses loads of mutation
internally.

Haskell helps loads here, but the mechanisms are a lot more complex and
nuanced than the circa-2000 ideology you were describing.

------
peterwwillis
Does the sad realization occur to anyone else that if all software were open
instead of proprietary, we would probably already have the most optimized,
most efficient, most advanced software? Instead, most of it's proprietary, so
we've been reinventing the wheel for decades.

I remember how well and how fast software worked 20 years ago. Today I have to
reboot my telephone to make a call.

~~~
mkl
This doesn't make sense to me. There's lots of open source software that's
widely used, but most of it is not the "most optimized, most efficient, most
advanced".

I think you are looking at the past with rose-tinted glasses. The software I
remember from 20 years ago was generally slow, clunky, unstable, and often
didn't work very well.

