
A RAM Edition of Dirty Coding Tricks - edgarvm
https://www.gamasutra.com/view/news/310660/Memory_Matters_A_special_RAM_edition_of_Dirty_Coding_Tricks.php
======
mattnewport
A pretty common trick that was part of game programmer lore back in the PS2 /
Xbox era was to have a large static array hidden away in some code file
somewhere. When days before shipping you couldn't quite fit the release build
in to memory this allowed a heroic programmer to miraculously 'find' a few
hundred extra kilobytes of memory by reducing the size of the array by just
enough to fit.

There was a less common variant of this for finding some extra performance by
reducing the iterations of a loop doing no-ops somewhere in the main game
loop.
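A sketch of what such a hidden reserve might have looked like (the names and size here are made up for illustration; the real thing was just an ordinary static array buried in some unrelated source file):

```c
/* Illustrative sketch of the "hidden reserve" trick, not from any
 * real game. Shrink RESERVE_KB days before shipping to miraculously
 * 'find' a few hundred kilobytes. */
#define RESERVE_KB 256

static volatile char s_reserve[RESERVE_KB * 1024];

/* Referenced once from somewhere so the linker can't discard it. */
void touch_reserve(void)
{
    s_reserve[0] = 1;
}
```

The no-op loop variant worked the same way: a spin loop with a tunable iteration count somewhere in the main loop, reduced when a few milliseconds of performance needed to be 'found'.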

~~~
bastawhiz
Sadly this usually works only once, especially if it's one person doing it.

I used to work on the performance team at a Bay Area company. One of the
things we did to keep our JavaScript bundle sizes under control was introduce
a "ratchet". There was a threshold, enforced by CI, that you couldn't let the
bundle size exceed without getting in touch with us first and figuring
something out. [0]

This worked wonders for a while, until a few different teams were starting new
feature development. At that point, the ratchet was forcing teams to pause
their feature development to do cleanup work, which made the PMs very unhappy.
Engineers got salty because they would remove dead code, only to find that
another engineer had gobbled up the space they'd freed before they could land
their own commit.

Engineers started working around the ratchet by hoarding dead code and
disguising it to look "not dead" so that it could be easily removed later when
a few dozen kilobytes were needed.

There were many things that weren't good at this company, and the culture
around ownership of the shared codebase was definitely one of them. I'd like
to think that there are plenty of teams that don't have this problem, but I'm
inclined to think that it's human nature to subvert these sorts of things by
default.

[0] This was necessary because of the volume of tech debt. Teams/engineers had
a bad habit of building new things, then not cleaning up the stuff the new
things replaced. At one point, we estimated that over a third of the
JavaScript was dead code. Some teams had gotten to the point where the
codebase contained >2 versions of their product, while only one of them was
physically accessible to users.

~~~
mattnewport
Back in the era where this was somewhat common on games it only had to work
once. In those days you burned a master for a console game and once it shipped
that game was done. There were no zero day patches (or any other kind of
patches) and no updates to the game once it shipped.

Often it would be the lead programmer on the game who put this array in and it
wouldn't necessarily be known about by everyone. It wouldn't have worked well
if people were constantly grabbing bits of memory from it during development.
It worked because it was a 'secret' and its use was reserved for shipping.

------
k__
"When finishing a level, the game would reboot the console and restart itself
with a command-line argument(the name of the level to start) ... into the next
level. Voila, the perfect(?) way to clear all memory between levels."

Well, that's basically how most websites work, lol

~~~
evincarofautumn
As long as the process is acceptably fast, nuking & restarting (with an
appropriate amount of isolation) is a fine approach to many things that would
be harder or less efficient if done the “right” way. Memory pools and “let it
fail” architecture in Erlang come to mind; and there’s a practice sometimes
seen in Forth, where you ensure that the codebase itself stays small enough
that it can always be easily rewritten, for example if requirements change
enough that incremental development would be harder.
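The memory-pool version of "nuke and restart" can be sketched as a linear arena (a minimal illustration with invented sizes and names, not from any particular engine): allocation is a pointer bump, and "clearing all memory between levels" is a single reset.

```c
#include <stddef.h>

/* All per-level allocations come from one block; "freeing" everything
 * between levels is one pointer reset -- the in-process analogue of
 * rebooting the console. */
#define ARENA_SIZE (4 * 1024 * 1024)

static unsigned char arena[ARENA_SIZE];
static size_t arena_used = 0;

void *arena_alloc(size_t n)
{
    n = (n + 7) & ~(size_t)7;        /* round up to 8-byte alignment */
    if (arena_used + n > ARENA_SIZE)
        return NULL;                 /* out of level memory */
    void *p = arena + arena_used;
    arena_used += n;
    return p;
}

void arena_reset(void)               /* "reboot" between levels */
{
    arena_used = 0;
}
```

Nothing is individually freed, so there is no fragmentation and no leak tracking: the whole level's memory vanishes at once.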

~~~
k__
I do it like that on mobile too.

New screen? Okay, let's throw everything away.

As long as performance is okay, I don't keep stuff around in memory.

------
russellbeattie
Whenever I think, "Thank goodness the limited memory days are behind us", it
pops up again and again. Sure, you can buy a new iMac Pro with 128GB of
RAM(!!) and smartphones regularly have 8GB available, but the increasingly
popular IoT devices and smart consumer hardware (like streaming media boxes,
etc.) try to limit the BOM cost and thus limit memory as much as possible.
Tiny memory leaks become an issue, or random crashes from wonky media codec
implementations, etc.

I think the skills (and hacks) that used to be useful only to game developers
and OEMs are now going to be needed by a much wider audience of devs.

~~~
TeMPOraL
> _Whenever I think, "Thank goodness the limited memory days are behind us",
> it pops up again and again. Sure, you can buy a new iMac Pro with 128GB of
> RAM(!!) and smartphones regularly have 8GB available_

I think you're underestimating the accretion of software bloat. Do you
remember the days when you did _exactly the same things_, exactly as fast,
on 1GB machines? 512MB machines?

As long as devs don't give a fuck (er, make the "professional decision" of
optimizing for dev time over product quality), the days of limited memory are
not going to be behind us.

~~~
FRex
It feels like that dev time cost is instead being crowdfunded with the
energy, disk, CPU and RAM that every user has to pay with, and eventually
that cost is paid by Earth itself. An app might be cheap or free, but in the
long run an app costing a dollar or two more per install would be cheaper for
the planet and for each user. I was using a 2011 laptop with only an HDD, 2
gigs of RAM and no discrete GPU until it finally broke last year, so it
especially irks me when I see this handwaving of "computers are cheap",
_especially_ from Western millennials or developers from SF. The fact that
well-off people who change machines every few years can even dare to call
less well-off people with older and shittier hardware "entitled" for wanting
performant, snappy software just like they got 5-10 years ago when they
bought their machine is baffling. Not everyone needs a crazy new machine:
writers, reviewers, sales people, admins, etc. Case in point - G.R.R. Martin
uses a DOS machine to write -
[https://www.youtube.com/watch?v=X5REM-3nWHg](https://www.youtube.com/watch?v=X5REM-3nWHg)
.

E.g. Slack and Atom got absolutely lambasted for performance, sluggishness
and resource use (while VS Code was applauded, so it's clearly not an
Electron-specific thing), despite being made by companies valued in billions
and based in the most expensive region of the world, one of them even being a
paid product.

Or a game with pixel art graphics (I do like it, and I understand that
particular indie dev optimizing for time with such a niche product, so I
don't want to name names here) and gameplay only as deep as some better Flash
games from the mid-2000s requires several GBs of RAM as its minimal system
requirements (for comparison, Doom 3's recommended, not even minimal, was 512
MB in 2003), and disk space, etc.

Or when a graphically simple 2D game requires a 64-bit OS (despite seemingly
using no 64-bit features), a non-integrated GPU (and not because of some lack
of OpenGL features but due to poor optimization), and runs at 30 FPS on an
integrated Intel that has 0 problems with Minecraft at a really far draw
distance. And it attempts to load hundreds of files (all of the game assets
for an entire 4-10 hour long VN) at boot, taking 30 seconds on an HDD. They
could be loaded incrementally (loading only what is needed right now and
everything else in the background, even dumbly and fully into RAM as it does
now) or packed into SQLite or a ZIP to avoid so much FS access, but no -
hundreds of files are opened at game boot, and there are tons of XML assets
with 0 compression or minification. But instead the solution to performance
woes (in gaming especially, but through things like Electron it's seeping
into the mainstream) is apparently to "git gud", "stop being a poor pleb" and
get a new GPU (apparently a GTX 950M is a potato-level GPU now and only an
idiot would play games on it in 2017) or an SSD, so that the developer
doesn't have to bother with the tiniest of optimizations.

That 2D game loading all assets, wanting a 64-bit CPU and a non-integrated
GPU, all for no good reason, was Tokyo Dark by the way, and due to the way
the developers carry themselves I have 0 problem name dropping them. I made
an entire video about that game; the disk and GPU part is at 15:15:
[https://www.youtube.com/watch?v=sCXwgPJGLIE](https://www.youtube.com/watch?v=sCXwgPJGLIE)

It feels like what was done with Crash Bandicoot is some interstellar Death
Star level technology in comparison to what some developers do: not even
bothering to pack files to reduce FS chatter, or load smartly, or compress
textual assets. They probably had it developed on an SSD, it loaded fast
enough for them, it's done and ready for shipping, duh! Just gotta write a
hype text about how extensively we tested it and how much effort we put into
making it!

I realize I sound like an ass that's ranting and I am writing too lengthily
(I did think about writing articles instead of lengthy HN comments like this
one, so if someone is interested feel free to hit me up), but some of this
stuff just blows my mind in ways I didn't know existed.

It's not even optimization for dev time, as Python could feasibly be, but
sometimes outright waste or lack of basic care. E.g. Slack was apparently
launching a full-blown browser per organization until recently (or something
like that), completely needlessly; now that part is out. At the same time
they had this crazy involved (and cute, because it's 2017 and things must be
cute) error page: [https://slack.com/asdsad](https://slack.com/asdsad) . Or
that semi-notorious reply article from a guy using the Unix CLI instead of
hip BigData(tm) tools to analyze a relatively small amount of data (yes, the
guy is rubbing it in a bit too much when he brings out mawk):
[https://aadrake.com/command-line-tools-can-be-235x-faster-th...](https://aadrake.com/command-line-tools-can-be-235x-faster-than-your-hadoop-cluster.html)

That lack of care is evident in other areas too. In security it manifests as
these SQL injections, IoT botnets, outdated software pwns and
plaintext/unsalted-SHA1 password debacles. Afterwards it gets justified with
"state attack, China or Russia probably" or handwaved away with "we store
passwords in plaintext to send them to the user via email when he forgets
them" (an actual explanation I read once..) or "we innovated so fast to
deliver a SUPERB customer experience that we didn't focus on security" (while
'security' in that case would amount to closing an admin port on an IoT
appliance, for example..). In general software we also get stuff like that
TP-Link repeater (recently on HN) that needlessly queries NTP every 5
seconds, squandering hundreds of megs of transfer per month and basically
DDoSing the NTP servers.

It's like this entire mentality that good stuff is too hard or too
complicated or too expensive to do (like that Chess guy and his "clever
multi-threaded application"), while Pareto is very much in effect, and even
as little as not opening a hundred files at once at game boot, or reading the
dense man/info pages, or thinking for 20 minutes about the problem at hand,
or doing back-of-the-napkin math could make a big difference. 10 or 20
minutes or hours of dev time per year is not a big enough reason to squander
resources so badly. There is an expression in Polish that seems really apt
for developers who "optimize" their time to that degree: korona ci z głowy
nie spadnie (the crown won't fall off your head, basically meaning that
exerting a little effort towards something isn't too much to reasonably
ask/expect of you).

I recall a similar event when someone wanted to stress test something on a
webserver and had a file with a few million URLs in it. He did "while read
line; do curl $line; done" in bash, and it brought his local machine to its
knees, probably due to the rapid process creation and destruction. I gave him
an xargs with -P and -n to launch a single curl per 100 URLs instead, and it
ran no problem, and this time the webserver we were testing was on its knees
on my much weaker laptop (the weakest in the company actually, since I wasn't
a programmer and didn't need a strong one), as intended. I'm actually guilty
of overengineering myself, since my first try was a Python 3 + requests +
grequests script, and only weeks later, after I forgot where I put the script
and didn't want to rewrite it, did I run that xargs version (a very Taco
Bell-esque solution actually -
[https://news.ycombinator.com/item?id=10829512](https://news.ycombinator.com/item?id=10829512)
).. And that's an anecdote, but it feels like people (actual 'professionals'
making a paid product and working in $billion+ corps) ship stuff as bad as
the original one-curl-per-URL script as if it's not a big deal, and then it
gets justified with some handwaving: "focus on features and not performance
and security", "no one is gonna hack a toaster for anything", "computers are
fast and cheap", "optimizing for dev time", etc.
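The fix from the anecdote above, roughly (the URL list is made up, and echo stands in for curl so the sketch runs without a network):

```shell
# Naive version (roughly what brought the machine to its knees):
# one curl process forked per URL.
#   while read -r line; do curl -s "$line"; done < urls.txt

# xargs version: -n 100 passes up to 100 URLs per curl invocation,
# -P 8 runs up to 8 invocations in parallel, amortizing process
# creation cost. 'echo' stands in for curl here.
printf 'http://a.example\nhttp://b.example\nhttp://c.example\n' |
    xargs -n 100 -P 8 echo curl -s
```

The win comes entirely from batching: millions of fork/execs collapse into a few tens of thousands.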

It's a typical high volume, low margin situation: like Steve Jobs once argued
during the development of the original Mac, improving a boot time by even a
few seconds saves lives, because so many people will use the Mac so often
that the savings will add up to a few lifetimes.

~~~
finchisko
Overall I mostly agree with you. However, I doubt efficient programming would
only add $1-2 per app in development costs. For better code, you need better
and more programmers, and more time and money. And excellent programmers
don't grow on trees; there is a limited number of them, so they're really
hard to get (even if you have money).

If you're a company owner, which path will you take? 1\. Adding features less
frequently, costly development, more people needed, but highly efficient
code. 2\. Frequent feature updates, cheaper development, fewer people needed,
but a shitty code base.

Even if you're brave enough to go for 1, there will always be a competitor
with the 2\. attitude that will crush you into oblivion.

In the case of game development, there is the Duke Nukem Forever example.
They tried to perfect it, changed the game engine twice, but the release took
them so long that the game looked dated anyway.

~~~
FRex
How much time and cost do most of the things I listed add? I mean really.

Building a 32-bit exe of a game that uses no 64-bit features, packing assets
up to avoid FS chatter, loading lazily, closing ports on an IoT appliance,
not abusing NTP like TP-Link does, not pasting raw user input into an SQL
query, having a dedicated security team that monitors all tech deployed in
the company 24/7 for outdated versions of software?

These things are absolutely basic; most are one-time efforts and the others
completely achievable. None of them requires any degree of excellence. This
is not about excellent code, this is technology 101. There are trade-offs to
be made, like IDEs in Java vs. native ones on look and feel, features,
start-up speed, snappiness, etc., but there is no trade-off in a situation
where a program does less stuff, does it in a worse way, and does it slower
while taking more resources.

Look at the amounts of money Equifax operates with and how sensitive the
information they handle is, and try to tell me again with a straight face
that skimping on security and running outdated software was all okay because
if they'd done better they'd have been crushed by costs and competition into
oblivion. And now there are already articles pointing at China, with evidence
as flimsy as "a Chinese security blog reported the vulnerability the day
after it was patched by Apache, and a week later Equifax got hacked".

Or explain to me what and why TP-Link is doing with its repeaters querying
NTP every 5 seconds (which actually takes more development effort than doing
nothing would).

Or the recent failures of Apple, like the password being stored in the hint
field, which got deployed despite their (supposedly) stellar QA and the
polish that justifies the high price of their products.

This fail talk all reminds me of yet another crazy negligent story. There was
(and still is) an online shop in Poland that once was doing some
"adjustments" on a world-facing machine (that was supposedly not available
from the internet due to high traffic causing the hosting provider to take
it offline... I don't get it, the language and concept described are murky).
Someone accidentally removed index.php (by renaming it to inedx.php), and the
web server had file listing enabled, so what was shown was the webroot file
listing. In it there was a textual backup of the entire DB, containing real
names, phone numbers, delivery addresses, plaintext passwords and email
addresses. It was of course accessible to the web server, so all that
separated you from the data of 65 thousand people was a single click... The
company of course bullshitted and gave a 20% sale to everyone affected, after
lying for 4 days and saying they had "experts working on it"... They are also
quoted as saying that "users agree that all their data is public when they
sign up" (about real names, phone numbers, addresses, etc., despite the fact
that their terms and conditions said all data is used only for order
processing and never made available to anyone..), but that part is murky and
might have been a hoax. I'm not aware of anyone going to jail over this, and
the shop is evidently still open for business. Here's an article (I do not
have an English one) if you're interested:
[https://niebezpiecznik.pl/post/kupiles-papierosa-przez-inter...](https://niebezpiecznik.pl/post/kupiles-papierosa-przez-internet-na-e-dym-pl-zmien-haslo/)

Tell me that stories like these are not absolutely surreal and that you'd
never do as badly personally (I mean really - all it would take is to visit
the website you just edited, see if it's okay, and notice the file listing,
the lack of index.php, etc.). I'd not believe such a multi-layered fail story
(file listing on, removing index.php, plaintext passwords, DB dump in the web
root and accessible, the way they didn't do responsible disclosure, etc.) if
someone told me, it's too outlandish, but it's also - evidently - true.

A university teacher would have crushed _me_ into oblivion if for homework I
had submitted a web app vulnerable to SQL injection because "no one will
guess to do that and it's illegal anyway", one that stored plaintext
passwords as a "reminder feature". But I would just not submit something that
bad in the first place, and as you can see I am not coy and can stand my
ground if I think something is right. But in the real world both happen, and
then people scream China.

Even just recently someone had a laugh here in the comments under the Mirai
story about how it was assumed (as always..) to have been China, Russia,
North Korea, etc., and then it turned out to just be a few really smart
Minecraft kids plus millions of devices with Swiss cheese security out in the
world.

Duke Nukem Forever is a very special case of development hell; it doesn't
exonerate games that don't even care. I have played games on my old laptop
with no real GPU, including Unity3D ones: it's not the tool, it's how it's
used. Today I can't play a 2D VN I paid for on an integrated Intel GPU, and
that's somehow okay.

I've already spent too much time replying to you and the "hurr durr we cna't
all use cppluspluz!" gentleman/madam below. I won't be reading any more
replies here; if I didn't convince you then nothing will (short of getting
burned yourself by some company leaking your data in a dumb way - hopefully
not).

------
warent

      When finishing a level, the game would reboot the console
      and restart itself with a command-line argument
      (the name of the level to start) ... into the next level. 
      Voila, the perfect(?) way to clear all memory between 
      levels.
    

OH MY

At 22 years old, this is one of those moments where I'm in awe of the strange
issues and workarounds that existed merely ~5-10 years ago which I'll probably
never have to deal with. Very funny!

~~~
wging
You'll run into today's equivalent soon enough :)

~~~
bigiain
Now I'm looking forward to the first website that decides it needs to restart
Chrome between page changes for me...

~~~
Gaelan
As a web dev, I've often at least considered refreshing the page in an SPA due
to some bug with client-side state—I guess that's the modern-day equivalent.

------
tomalpha
Problem solving within a constrained system. It’s fun stuff, and I always
find it amazing how adversity, and particularly time pressure, can bring out
the ingenuity in people.

I guess wartime inventions are possibly an extreme example (google Hobart’s
Funnies some time for some good clean ingenuity), but shipping console games
to a deadline seems to have a similar effect.

------
Const-me
RAM usage still matters a lot. Not only because of consoles, but also because
RAM usage often translates to storage bandwidth and CPU-GPU bandwidth.

Some of the tricks still apply today. For example, modern GPUs support all
kinds of weird texture formats. Here’s a link for PC:
[https://msdn.microsoft.com/en-us/library/windows/desktop/hh3...](https://msdn.microsoft.com/en-us/library/windows/desktop/hh308955\(v=vs.85\).aspx)
Other platforms have conceptually similar stuff.
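For a sense of the savings such formats buy, a small back-of-the-envelope sketch (BC1/DXT1 stores each 4x4 texel block in 8 bytes; the function names are mine, and dimensions are assumed to be multiples of 4):

```c
/* Rough memory math showing why "weird" GPU texture formats matter:
 * BC1/DXT1 packs a 4x4 texel block into 8 bytes (0.5 bytes per texel)
 * vs. 4 bytes per texel for uncompressed RGBA8. */
unsigned rgba8_bytes(unsigned w, unsigned h) { return w * h * 4; }
unsigned bc1_bytes(unsigned w, unsigned h)   { return (w / 4) * (h / 4) * 8; }

/* A 1024x1024 texture: 4 MB as RGBA8, 512 KB as BC1 -- an 8x saving
 * in RAM, storage bandwidth, and CPU-GPU transfer alike. */
```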

------
torgard
> Ultimately Crash fit into the PS1's memory with 4 bytes to spare. Yes, 4
> bytes out of 2097152. Good times.

Wow. I am absolutely _blown away_ by this.

~~~
bryanlarsen
I've worked on a couple of projects that only had a few bits of free ROM left.
Not really all that surprising, you stop optimizing for space once it fits.

One of the projects was heavily squeezed to make it fit, the other didn't need
much squeezing, but you couldn't tell the difference by looking at the free
space.

------
kaushiks
Here's another interesting story from the DOS days:

[https://blogs.msdn.microsoft.com/larryosterman/2004/11/08/ho...](https://blogs.msdn.microsoft.com/larryosterman/2004/11/08/how-did-we-make-the-dos-redirector-take-up-only-256-bytes-of-memory/)

------
fnl
Makes me wonder why Rust isn't popular with game devs. Wouldn't Rust's borrow
checker and compile-time resource management make most of the described
memory-reclaiming issues obsolete?

~~~
meheleventyone
Rust is definitely being looked at. The reason there are no big projects is
that it’s still so new. For example, the ability to write custom allocators
was still going through the RFC process last I looked. Further, it takes many
man-years of effort to write an AAA-level game engine, so there is quite a
lot of inertia preventing a move to something new. We’re at the phase where
people are experimenting within a relatively immature ecosystem. We’re
probably a year or two away from a big project using Rust for part of its
development process, and at least four or five from a large project written
from scratch. There are quite a few hobbyist and open source game projects,
though.

~~~
twic
Chucklefish are working on some kind of medium-sized RPG:

[http://www.pcgamer.com/new-details-and-screens-from-stardew-...](http://www.pcgamer.com/new-details-and-screens-from-stardew-valley-publishers-magic-school-rpg/)

In Rust:

[https://www.reddit.com/r/rust/comments/78bowa/hey_this_is_ky...](https://www.reddit.com/r/rust/comments/78bowa/hey_this_is_kyren_from_chucklefish_we_make_and/)

~~~
meheleventyone
Awesome, it’ll be good for there to be a commercial game project out there
written in Rust when this gets released.

------
throwmeaway32
This is giving me fond memories of things I had to do or had heard about from
colleagues:-

\- To improve game loading speed from CD - load a level on PC from the hard
disk, log all filenames loaded to a txt file, and then use that to order the
files when writing the final CD.

\- Load all files into the PS1 devkit's memory, write out all memory in a
binary blob to the hard disk, then burn that memory image to CD to use for
fast level loading (just load it with a single fread(..)).

\- Have separate executables for different levels which had different
features, to save memory.

\- Write a small block allocator to make <256-byte allocations quicker and
more efficient.

\- Find a tiny piece of memory in the PS2 IOP chip which doesn't get wiped on
a devkit reboot (for some reason) and use that as 'scratch' space to write
log messages to, in order to track down a hard-to-repro crash that rebooted
the kit.

\- Change the colour of the TV's border to track down a race condition that
only existed on burnt disks, when we had no debugger. The border colour
setting code was quick enough not to affect the race condition, so we chose
some places in the code to arbitrarily set certain colours, burned the disk,
and tested it; when it crashed we saw what colour the border was, then put
some more colours in the likely areas, re-burned the disk, and so on (so
basically binary searching the code areas using border colours).

\- Use compiler optimisation settings for 'size' instead of 'speed', as the
smaller executable code size meant you stayed in the ICache more, which
actually made the code quicker than compiling for 'speed', which resulted in
generally larger code.

\- Burn a master CD image for publisher, get the game ID code wrong, open up
the disk image file in a hex editor and manually edit it rather than go
through the whole build process again.

\- Have no build machine (the gold master got made off whatever code the
lead's machine had).

\- Use SourceSafe (no atomic checkins....)

\- Use a few batch files and a directory share for 'source control' of art
assets.

\- Have values in config files we gave to game designers which did nothing
(this was accidental, but they swore changing them made a difference to the
game).

\- Have an advertising deal with a company to have a special cheat code in
the game to unlock some stuff; the code that handles this ships with a bug
that means you have to enter the cheat code incorrectly to get it to
work... so tell the company that 'Your code was too easy so we made it
harder'.

\- Have a developer write code like this, as he swore that passing an extra
parameter would have slowed the game down (pseudo code, but the original was
in C):

    DoStuff(int val)
    {
        foo* bar;

        if (val < 10)
        {
            bar = gStuff[val];
        }
        else
        {
            bar = (foo*)val;
        }
    }
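A free-list version of the small block allocator mentioned in the list above might look something like this (block size, pool size, and all names are illustrative guesses, not the original code):

```c
#include <stddef.h>

/* Fixed-size 256-byte blocks carved from one static slab and recycled
 * through an intrusive free list, so small allocations skip the
 * general-purpose heap entirely. */
#define BLOCK_SIZE 256
#define NUM_BLOCKS 64

typedef union block {
    union block *next;          /* valid only while the block is free */
    unsigned char bytes[BLOCK_SIZE];
} block;

static block slab[NUM_BLOCKS];
static block *free_list = NULL;
static int initialized = 0;

void *small_alloc(void)
{
    if (!initialized) {         /* thread the slab into a free list once */
        for (int i = 0; i < NUM_BLOCKS - 1; i++)
            slab[i].next = &slab[i + 1];
        slab[NUM_BLOCKS - 1].next = NULL;
        free_list = &slab[0];
        initialized = 1;
    }
    if (!free_list)
        return NULL;            /* pool exhausted */
    block *b = free_list;
    free_list = b->next;
    return b;
}

void small_free(void *p)
{
    block *b = (block *)p;
    b->next = free_list;        /* push back onto the free list */
    free_list = b;
}
```

Both alloc and free are a couple of pointer moves, with no headers, no searching, and no fragmentation, which is exactly why these were worth hand-rolling on the PS1/PS2.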

