
Moved ~/.local/share/steam. Ran steam. It deleted everything owned by user - stkhlm
https://github.com/ValveSoftware/steam-for-linux/issues/3671
======
Chris_Newton
This seems like yet another good example of why robust application-level
access control would be a helpful thing to build into modern operating
systems, in addition to the typical user-based controls. This may have been
both a rookie mistake and a regrettable failure of code review processes, but
in any case it simply shouldn’t be possible for an application running on a
modern system to wipe out all user data without warning in such a sweeping
way.

I have often made this argument in the context of sandboxing communications
software like browsers and e-mail clients, where it is relatively unusual to
need access to local files except for their own data. In that context,
restricting access to other parts of the filesystem unless explicitly approved
would be a useful defence against security vulnerabilities being exploited by
data from remote sources. It’s hard to encrypt someone’s data and hold it for
ransom or to upload sensitive documents if your malware-infected process gets
killed the moment it starts poking around where it has no business being.

More generally, I see no reason that we shouldn’t limit applications’ access
to any system by default, following the basic security principle of least
privilege. We have useful access control lists based on concepts of ownership
by users and groups and reserving different parts of the filesystem for
different people. Why can’t we also have something analogous where different
files or other system resources are only accessible to applications that have
been approved for that access?

~~~
andrewfong
Isn't this the basic idea behind the sandbox in OS X?

I think OS X (and mobile app development in general) shows both that this is
great in theory and a net improvement over not having it, but that there are
some common pitfalls to address.

First, there are a handful of apps where this model doesn't work so well --
e.g. text editors, FTP clients, etc. So you're inconveniencing quite a few
legit apps which need broader access.

Second, as a corollary of the first, that means you're going to have a lot of
apps that legitimately need to ask users to approve broader access. And as the
number of apps asking for approval goes up, the more likely users are to
simply ignore the warning and approve all. This is especially problematic
since we can't assume the average user is a good judge of which apps need
which access.

Edit: One way of reducing user acceptance fatigue might be to introduce greater
granularity into the requested permissions and then tier the permissions
requested -- e.g. commonly asked vs. uncommon. E.g. an app may legitimately
need permission to write to any file in your home directory, but it's highly
unlikely they'll need permission to write to more than X number of files per
second. Or at least they shouldn't be able to do so without the OS throwing up
lots of warnings outside of the app.

~~~
ejdyksen
Sandboxed applications in OS X can read/write to arbitrary locations if they
use the system Open/Save dialogs to ask the user about those files (after
opting into sandboxing, of course). See here [1].

For files and folders the user cares and knows about (documents, projects,
etc), this shouldn't be a problem. For files the user doesn't care about
(caches, configuration), you can just leave them in your sandboxed container.

[1]
[https://developer.apple.com/library/mac/documentation/Securi...](https://developer.apple.com/library/mac/documentation/Security/Conceptual/AppSandboxDesignGuide/AppSandboxInDepth/AppSandboxInDepth.html#//apple_ref/doc/uid/TP40011183-CH3-SW17)

~~~
quotemstr
Lots of applications (like Emacs and vim) don't use the system file dialogs
though. It'd be nice to preserve old-fashioned file access for them.

~~~
chopin
But these are applications for knowledgeable users, I think, which is not the
type of user the GGP is talking about. For those it might be OK to ask for
permission, or to grant it automatically if you are root or sudo'ed. And many
use cases could get by with silently allowing a write if the file has
previously been opened by the same application.

~~~
quotemstr
I will not use a system that pops up some kind of UAC-like approval dialog for
every call Emacs makes to open(2).

~~~
wlesieutre
The OS X sandbox doesn't work that way. If you do "Open" and pick the root of
your hard drive one time, the application gets and keeps access to the entire
drive.

Disk scanning programs like DaisyDisk from the app store have to make you do
this before they can get any information about disk usage.

------
akamaka
Here's the offending shell script code:

    
    
      # figure out the absolute path to the script being run a bit
      # non-obvious, the ${0%/*} pulls the path out of $0, cd's into the
      # specified directory, then uses $PWD to figure out where that
      # directory lives - and all this in a subshell, so we don't affect
      # $PWD
      STEAMROOT="$(cd "${0%/*}" && echo $PWD)"
      [...]
      # Scary!
      rm -rf "$STEAMROOT/"*
    

The programmer knew the danger and did nothing but write the "Scary!" comment.
Sad, but all-too-familiar.
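For contrast, a defensive version of the same cleanup might look like the sketch below. This is illustrative only, not Valve's actual fix; the function name and the specific checks are invented.

```shell
#!/bin/bash
# Illustrative sketch: validate the computed root before doing
# anything destructive with it.
steam_cleanup() {
    local root="$1"
    # Refuse obviously dangerous values: empty string or "/".
    if [ -z "$root" ] || [ "$root" = "/" ]; then
        echo "refusing to remove files: root is '$root'" >&2
        return 1
    fi
    # ${root:?} aborts the expansion itself if root is somehow empty,
    # so even without the check above this can't expand to "/*".
    rm -rf "${root:?}/"*
}
```

The `${root:?}` expansion is a second, independent guard: even if the explicit check were deleted, an empty variable would abort the command rather than silently expand to `/*`.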

~~~
enneff
Gotta question why they used -f.

~~~
unwind
That's normal, but why the trailing slash?! That's just pointless, and almost
looks like an explicit deathtrap.

Without the slash, an empty variable would result in a command line of "rm
-rf" which would simply fail due to the missing argument.

There is absolutely no need for having a trailing slash; it's not as if "rm
-rf foo" and "rm -rf foo/" can ever mean two different things, there can be
only one "foo" in the file system after all.

Very interesting way of introducing an epic fail with a single character, that
really looks harmless.

~~~
riquito
> it's not as if "rf -rf foo" and "rm -rf foo/" can ever mean two different
> things

That's true, but the original code was akin to "rm -rf foo/*", and that's
different, since it removes the contents of the directory while preserving the
directory itself.
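The difference is easy to see with a throwaway directory:

```shell
#!/bin/bash
# "rm -rf dir/*" empties the directory but keeps it;
# "rm -rf dir" removes the directory itself as well.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b"

rm -rf "$dir"/*        # contents gone...
ls -A "$dir"           # ...but the (now empty) directory survives

rm -rf "$dir"          # this removes the directory itself too
```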

------
preconsider
To those name-calling the author of the script:

The product/update is hyped and the release date is set in stone. Tensions are
high and your boss has already let you know that you're on thin ice and not
delivering on the project goals.

A last-minute showstopper bug comes in, caused by file leaks. Everyone is
scrambling, and the file belongs to you so it's on you to fix it alone. There
is no time for code review, and delaying isn't an option (so says management).
"I'm afraid if we keep seeing these delays in your components, we might have
to consider rehiring your position".

The rm -rf works -- it's a little bit scary, but it works. You write a test
case, and everything passes. Still, you add the "Scary!" comment for good
measure. You have two more bugs to fix today and you'll be lucky if you're
home by midnight to see your wife or kids. You've been stuck in the office and
haven't seen them in days.

Are you an "idiot", "talentless" engineer that "deserves to have his software
engineering license permanently revoked"? How do you know this wasn't the
genesis of this line of code?

~~~
wpietri
Yes, that person is still an unprofessional idiot.

If a doctor accidentally removes the wrong organ because administrators have
overscheduled him, "whoopsie, not my fault" is not the appropriate answer. The
same applies to engineers working on bridges. Professionals take
responsibility for their working conditions.

There is an enormous shortage of programmers right now. Anybody shipping stuff
that is bad or dangerous is choosing that. If we drop our professional
standards the moment a boss makes a sadface, then we're not professionals.

~~~
bulatb
It's hard to be professional if no one wants or values it but you. The word
your manager would use is "obstinate."

If they have not internalized the consequences of the risks they're asking
their subordinates to take, they'll weigh what look like vague misgivings
about "mumble, should be better, dangerous, blah blah" against the better
understood risk of their bonus disappearing if the product doesn't ship on
time.

Even if _you_ choose to sacrifice yourself, your reputation, and your future
prospects--again, if almost no employer in your industry would value what you
call professionalism over short-term profits--someone else will ship the code
you wouldn't.

That isn't a defense of anything; it's just a fact. Taboos (e.g. against bad
code) don't work if they're not shared by the majority.

~~~
wpietri
Sure, you can tell yourself that, and it will remain true. Or you can act like
a professional and seek out places that value that. I have, and know others
who do. I don't think we've sacrificed anything.

------
BoppreH
A while ago I made a small program to cache Steam's grid images and search
missing ones
([https://github.com/boppreh/steamgrid](https://github.com/boppreh/steamgrid)).
More of an experiment in programming in Go, really, but it works and makes
Steam a little better.

When I tried on Linux, it threw a permission error. Turns out Steam installs
the folder "~/.local/share/Steam/userdata/[userid]/config/grid" without the
executable permission bit. Without this bit no one can create files in there,
Steam included, and the custom images feature gets broken.

I reported the problem, saying they should fix their installer, and got a
"have you tried reinstalling it?" spiel. When I said I did, and manually
changing the permission fixes the problem, so it must really be it, they
closed the ticket with "I am glad that the issue is resolved".

This was a ticket at support.steampowered.com, because I didn't know Valve had
a github account. I would open an issue there, but I don't have a linux
installation to test again and this sort of misunderstanding burns a lot of
goodwill.

~~~
NamTaf
Valve's customer support is known to be pretty awful. This is all too familiar
an issue.

To be fair to them, when the six-sigma of your customer service is
15-year-olds asking to be unbanned 'cause they totally didn't use hacks, your
CS probably gets a bit desensitised.

------
Xylemon
Something like this with Steam happened to my friend not too long ago. It was
very saddening because he literally lost years of files (including personal
projects) and salvaged what he could. That was with the Steam Beta and I
caught Steam doing this myself (after he told me what happened). I was lucky
to stop the script and switched out of the beta. At the time he reported this
to Valve themselves and said they were "investigating the issue and knew of
it". Seems to still be here, _sigh_.

I know the moral here is to keep your files backed up, but come on, this is a
ridiculous issue Valve still hasn't fixed.

~~~
justizin
That is quite frustrating, and consumer vendors should be mindful of creating
life-changing experiences.

Also: backups. I know it sounds cliche, but look, if it has a mechanical hard
drive, the manufacturer could have slightly mis-calibrated one of the
mechanical assemblies, and this could have happened because the nature of
digital storage is that it is essentially ephemeral.

Protect yourself from things outside your control. You don't need the most
sophisticated solution, just an external usb drive.

~~~
Chattered
But if that external usb drive is mounted at the time (as in the case of the
user this thread is about), then all data on that drive will be deleted.

For this reason, the recommended way to use things like rsnapshot is to have
your backup directories owned by root and with permissions masked to something
like rwxr--r--. If you then want to read your backups easily, you do things
like mount it under NFS as read-only.

~~~
Latty
A (RAID6) fileserver running ZFS with filesystem-level snapshotting. It's
really, really good.

~~~
cmurf
Btrfs snapshots, and then cp --reflink from the snapshot to the current tree
whatever files or directories are missing; i.e. no need to rollback to a
snapshot.

------
kazinator

       rm -rf "$STEAMROOT/"*
    

This is why serious bash scripts use

    
    
       set -u # trap uses of unset variables
    

Won't help with deliberately blank ones, of course.

Scripting languages in which all variables are defined if you so much as
breathe their names are such a scourge ...
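For the deliberately-blank case, the `${VAR:?}` form of parameter expansion covers what `set -u` misses: it aborts on empty as well as unset. A small sketch (note that in a non-interactive shell a failed `:?` expansion exits the shell, hence the subshell):

```shell
#!/bin/bash
# ${VAR:?message} fails the expansion if VAR is unset OR empty,
# unlike `set -u`, which only traps unset variables.
STEAMROOT=""

# The expansion fails before rm ever runs, so nothing is deleted.
# Wrap it in a subshell so the failure doesn't kill the whole script.
( rm -rf "${STEAMROOT:?refusing to rm}/"* ) 2>/dev/null || echo "aborted safely"
```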

I did this once in a build script. It wiped out all of /usr/lib. Of course, it
was running as root! That machine was saved by a sysadmin who had a similar
installation; we copied some libs from his machine and everything was cool.

~~~
jzwinck
While you're at it:

    
    
        set -e # exit on unchecked failure
    

That way you don't trudge forward through an untested code path after a
failure, you stop there and then.

~~~
oblio
I use this holy trinity in most of my scripts:

    
    
        set -o nounset  # set -u
        set -o errexit  # set -e
        set -o pipefail # I'm not sure this has a short form.
    

I use the long forms for the option names because they're self documenting.

If anyone has more suggestions, I'm all ears.

~~~
drothlis
David A. Wheeler recommends setting IFS=$'\n\t' to handle spaces in filenames
& variables.

[http://www.dwheeler.com/essays/filenames-in-
shell.html](http://www.dwheeler.com/essays/filenames-in-shell.html)
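The failure mode Wheeler's advice defends against is word splitting on spaces, which a couple of lines can demonstrate:

```shell
#!/bin/bash
# With the default IFS (space, tab, newline), an unquoted expansion
# of a name containing a space splits into two words.
name="my file.txt"

set -- $name                      # deliberately unquoted to show the split
echo "default IFS: $# words"      # 2 words

IFS=$'\n\t'                       # Wheeler's suggestion: newline/tab only
set -- $name
echo "strict IFS: $# words"       # 1 word
```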

~~~
kazinator
All those rules add up to: "avoid writing anything complicated or general
purpose in the shell language that is intended to be used by any user over any
set of files; stick to handling known input materials which are closely
associated with the script."

------
tomaskafka
Another beautiful example of developer/user priority inversion.

All system architects ever:

1) System data are sacred, we must build a secure system of privileges
disallowing anyone to touch them

2) User data are completely disposable, any user's program can delete
anything.

All users ever:

1) What? I can reinstall that thing in 20 minutes, there's like 100 million
copies worldwide of these files.

2) These are my unique and precious files worth years of work, no one can
touch them without my permission!

------
stevewilhelm
Many of the comments mention that this should have been caught in code
review. I suspect they don't perform code reviews.

Makes me wonder: is there a tool, system, or service for auditing how many
'pairs of eyes' have reviewed a given line of code? This would be hard to
determine, but could be useful. I am envisioning a heatmap bar or overlay that
indicates the number of reviews a line of code has received.

~~~
noarchy
It is trivial to find systems where you can assign specific people to review
code, of course. Plenty exist. But to guarantee that they truly _read_ a line
of code, rather than skimmed/scrolled through it? Even if they paused their
scrolling on that line, it doesn't mean that they really read it, or did so
with proper understanding of what it was doing. Short of placing an actual
comment or question for that line (demonstrating some real interaction), it
doesn't seem like an easy problem to tackle.

~~~
skeoh
I think there's merit in the idea -- not tracking what a code reviewer reads
but more of tracking how many times each line of code has been _included_ in a
review, by being modified or maybe within so many lines of a change (like how
diffs show X lines of surrounding context). The idea would be that code
changes _near_ a buggy line would be more likely to draw attention to that
bug, and perhaps lines with less attention would be more likely to harbor
hidden bugs.

~~~
jacalata
I suspect it would work out the other way round - code in a frequently
modified section would be more likely to hold bugs because the
requirements/understanding of that code is changing more frequently.

------
jtokoph
The biggest lesson here is that backing up your files is extremely important.
Both local backups and remote backups.

I like the 3-2-1 rule:

    
    
      At least three copies,
      In two different formats,
      with one of those copies off-site.
    

Software is written by humans, who will undoubtedly miss a corner case and
fail to think of every possible environment.

~~~
gizmo686
>In two different formats

What does this mean?

~~~
mdturnerphys
Probably different media, e.g. an external hard drive and DVDs.

------
Bahamut
It should be noted, as listed in that issue thread, this is apparently also
present on Windows (same bug in two different shell scripts!).

------
lifeisstillgood
This is a danger of me-ware being used before it becomes software. Software
assumes and defends against others using and running it. Me-ware makes no
assumptions because me is the only one running it.

The transition from me-ware to software is a hard one - and usually it's how we
get terrible reputations as an industry. Basically it's a prototype till it's
burned enough beta users.

Edit OMG - that is actually Steam from valve - I take it back - this is
supposed to be software.

~~~
chadzawistowski
Though it turned out to be irrelevant, I really enjoyed your spiel about "me-
ware". The distinction between it and finished software is too easy to forget.

~~~
lifeisstillgood
I'm pretty sure I stole it off the guy who wrote Source Vault SCM, but his
name escapes me.

------
colanderman

        # Scary!
        rm -rf "$STEAMROOT/"*
    

Anybody who writes a line like this deserves their software engineer license
revoked. This isn't the first time I've seen shit like this (I've seen it in
scripts expected to be run as root, no less); it makes my blood boil.

Seriously. "xargs rm -df -- < MANIFEST" is not that hard.

EDIT: I shouldn't be so harsh, if it weren't for the comment admitting knowing
how poor an idea this line is.
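One sketch of what that manifest-driven approach could look like. The layout and file names here are hypothetical, and `xargs -d` is a GNU extension:

```shell
#!/bin/bash
set -u
# Hypothetical manifest-driven cleanup. Installer side: record every
# file it creates, one path per line.
root=$(mktemp -d)
touch "$root/steam.cfg" "$root/launcher"
printf '%s\n' "$root/steam.cfg" "$root/launcher" > "$root/MANIFEST"

# Uninstaller side: remove exactly the listed files, nothing else.
#   -d '\n'  split on newlines only, so paths with spaces survive (GNU xargs)
#   --       stop option parsing, so a path starting with "-" isn't a flag
xargs -d '\n' rm -f -- < "$root/MANIFEST"
```

Even if a variable above were empty, the worst case is deleting the files named in the manifest, not everything under an arbitrary directory.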

~~~
irishcoffee
The programmer committed a cardinal sin to be sure. But, so did everyone on
the code review that let it slide.

~~~
colanderman
You're right, but I don't retract my statement, because we know the dev _knew_
how dangerous this line was thanks to his "Scary!" comment.

~~~
click170
I don't wholeheartedly disagree with you, but I think it's grayer than that.

Maybe there were time constraints. Maybe the coder explained that this code
was dangerous to his boss but was not granted the time to fix it.

I think we agree that said code should never have been written, but there are
any number of circumstances that place the blame squarely on management. If he
explained the dangers of doing it that way but wasn't granted time to fix it
(or no manifest was kept), there's little that johnny coder can do outside of
their own time.

None of us write perfect code the first time, and we all had to start
somewhere. What's important is how far you've come and what you've learned. I
think.

~~~
colanderman
You are right. Consider my original assertion (which is past its edit window)
amended to read "engineer (or manager who prevented an engineer from fixing)".

------
pjc50
For those of you worried about important files, chattr +i is a useful
defence. There's no easy way of applying it automatically, though.

Long ago I had a kernel hack that would kill any process that attempted to
delete a canary file. Worked OK but no chance of it ever going mainstream.

~~~
Zancarius
Reminds me of a shell trick I saw many years ago for short circuiting
accidental 'rm -rf's by issuing a 'touch -- -i' in a sensitive location. In
bash (and others), the glob operator inadvertently feeds the '-i' (now a file)
into rm as an argument which then interprets it as its "interactive" flag,
causing it to prompt for continued removal.
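The trick, spelled out (GNU rm's last-flag-wins behavior between -f and -i is what makes it work; the directory here is a throwaway for demonstration):

```shell
#!/bin/bash
# Create a file literally named "-i" in a directory you want to protect.
# A later "rm -rf *" there expands to "rm -rf -i file1 file2 ...";
# the later -i overrides -f, so rm prompts before deleting anything.
cd "$(mktemp -d)"        # throwaway directory for the demonstration
touch -- -i              # "--" stops touch from parsing -i as an option
touch z1 z2

rm -rf * < /dev/null || true   # the prompt gets EOF, so nothing is removed
ls                             # z1 and z2 are still here
```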

~~~
jjnoakes
Which only helps "rm -rf _". It does nothing for "rm -rf /anything" or "rm -rf
/anything/_" or "rm -rf /*" or any other way of spelling doom.

~~~
Zancarius
Which is why I mentioned the glob operator! Speaking of which, if you _did_
touch '/-i', it would catch

    
    
      rm -rf /*
    

Ironically, I was thinking of adding that _specific_ directories would not be
caught (obviously), but I figured that would be understood implicitly since
most people ought to know what <asterisk> actually does in the shell. And if
they don't...

Edit: I just noticed that the glob operator in my first comment didn't show
up, because it was eaten by markdown. Incidentally, so was the asterisk in
your post! _That_ might be the source of your confusion. In that case, I
should specify such a trick only works with:

    
    
      rm -rf *
      rm -rf /*
      rm -rf ~/*
    

Or similar. _Not_ specific files. But, again, I appeal to the importance of
understanding what the glob operator _actually_ does!

As an aside, the context of this post is a mistake in steam.sh which may
essentially do:

    
    
      rm -rf /*
    

So, the discussion implicitly has nothing to do with exact paths. :)

------
leni536
Well, I'm on the guy's side, but this is worth noting:

 _Including my 3tb external drive I back everything up to that was mounted
under /media._

Maybe it was just unfortunate and the drive just happened to be mounted, or
maybe the "backup" is always online. If it's the latter, it's a really bad
idea: if your computer is compromised, you risk all of your backups. A proper
backup should protect your data in these situations.

------
bcantrill
Wow, an awful bug -- and brings back memories of a very similar bug that we
had back in the late 1990s at Sun. Operating system patches on Solaris were
added with a program called patchadd(1M), which, as it turns out, was actually
a horrific shell script, and had a line that did this:

    
    
      rm -rf $1/$2
    

Under certain kinds of bad input, the function that had this line would be
called without any arguments -- and this (like the bug here) would turn into
"rm -rf /".

This horrible, horrible bug lay in wait, until one day the compiler group
shipped a patch that looked, felt and smelled like an OS patch that one would
add with patchadd(1M) -- but it was in fact a tarball that needed to be
applied with tar(1). One of the first systems administrators to download this
patch (naturally) tried to apply it with patchadd(1M), and fell into the error
case above. She had applied this on her local workstation before attempting it
anywhere else, and as her machine started to rumble, she naturally assumed
that the patch was busily being applied, and stepped away for a cup of coffee.
You can only imagine the feeling that she must have had when she returned to a
system to find that patchadd(1M) was complaining about not being able to
remove certain device nodes and, most peculiarly, not being able to remove
remote filesystems (!). Yes, "rm -rf /" will destroy your entire network if
you let it -- and you can only imagine the administrator's reaction as it
dawned on her that this was blowing away her system.

Back at Sun, we were obviously horrified to hear of this. We fixed the bug
(though the engineer who introduced it did try for about a second and a half
to defend it), and then had a broader discussion: why the hell does the system
allow itself to be blown away with "rm -rf /"?! A self-destruct button really
doesn't make sense, especially when it could so easily be mistakenly pressed
by a shell script.

So we resolved to make "rm -rf /" error out, and we were getting the wheels
turning on this when our representative to the standards bodies got wind of
our effort. He pointed out that we couldn't simply do this -- that if the user
asked for a recursive remove of the root directory, that's what we had to do.
It's a tribute to the engineer who picked this up that he refused to be
daunted by this, and he read the standard very closely. The standard says a
couple of key things:

1. If an rm(1) implies the removal of multiple files, the order of that
removal is undefined.

2. If an rm(1) implies the removal of multiple files, and the removal of one
of those files fails, the behavior with respect to the other files is
undefined (that is, maybe they're removed, maybe they're not -- maybe the
whole command fails).

3. It's always illegal to remove the current directory.

You might be able to imagine where we went with this: because "rm -rf /"
always implies a removal of the current directory which will always fail, we
"defined" our implementation to attempt this removal "first" and fail the
entire operation if (when) it "failed".

The net of it is that "rm -rf /" fails explicitly on Solaris and its modern
derivatives (illumos, SmartOS, OmniOS, etc.):

    
    
      # uname -a
      SunOS headnode 5.11 joyent_20150113T200918Z i86pc i386 i86pc
      # rm -rf /
      rm of / is not allowed
    

May every OS everywhere make the same improvement!

~~~
ghaff
>A self-destruct button really doesn't make sense

Bryan--you mean Star Trek didn't get it right? Well, at least they didn't
allow shell scripts.

It's an interesting philosophical question, though: at what point do you
decide that the user truly can't want to do this, even though they've said
that they do?

~~~
bhaak
rm -rf is not a self-destruct button. It's just a "take everything out of the
closets, rip off the labels, and throw it on a heap".

A real self-destruct button would ensure you couldn't recover the data
anymore.

So at least try "dd if=/dev/random of=/dev/sda" or, more elegantly when
available, throw away the decryption key.

------
fsaintjacques
Note to all, prepend all your bash script with

"set -o errexit -o nounset -o pipefail"

It'll save you headaches.

~~~
mod
Can you at least explain what this is doing, outside of saving me headaches?

~~~
Anderkent
-errexit: exit the script when a command fails

-nounset: fail when referencing an unset variable

-pipefail: fail when any command in a pipeline fails, not just the last one

The last option is unfortunately harder to use, since some programs misbehave
in pipelines.
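A two-line illustration of what pipefail changes:

```shell
#!/bin/bash
# By default a pipeline's status is the status of its LAST command,
# so a failure earlier in the pipe is silently swallowed.
false | true
echo "without pipefail: $?"    # 0 -- the failure of `false` is hidden

set -o pipefail
false | true
echo "with pipefail: $?"       # 1 -- the failure is surfaced
```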

------
nodesocket
What is this magic doing?

    
    
       $(cd "${0%/*}")

~~~
DangerousPie
$0 is the path of the script. ${0%/*} takes $0, deletes the shortest
suffix matching /* (the filename), and returns the rest, which is the
directory containing the script. So this changes the working directory to the
directory the script is located in.
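The `%` and `##` expansions are easy to see in isolation:

```shell
#!/bin/bash
# ${var%pattern} strips the shortest suffix matching pattern;
# ${var##pattern} strips the longest prefix matching pattern.
path="/home/user/steam/steam.sh"

echo "${path%/*}"     # /home/user/steam   (poor man's dirname)
echo "${path##*/}"    # steam.sh           (poor man's basename)
```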

~~~
barrkel
$0 is not the path of the script. $0 is the path passed to the shell used to
execute the script. If run via the #! line, it will contain a path of some
kind. But if it's passed directly using "bash myscript.sh", then it won't.

And yes, dirname is a way out of this. I'd do this:

    
    
        "$(cd "$(dirname "$0")"; pwd)"
    

if I wanted the path to the script. I would also sanity-check the path by
testing for the existence of some files or directories that are expected to
exist under it, before trying to delete it all.
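The sanity check described above could look like this sketch; treating `steam.sh` as the landmark file is an assumption for illustration:

```shell
#!/bin/bash
# Sketch: resolve the script's directory robustly, then verify it
# actually looks like the install we expect before deleting anything.
resolve_and_check() {
    local script="$1"
    local root
    root="$(cd "$(dirname "$script")" && pwd)" || return 1
    # Landmark check: a real install should contain this file.
    [ -f "$root/steam.sh" ] || { echo "'$root' doesn't look right" >&2; return 1; }
    echo "$root"
}
```

Only if `resolve_and_check` succeeds would the caller go on to run the destructive `rm`.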

------
mtsmithhn
The sad, sad part of all this is that Half-Life 1 had a similar bug in its
Windows installer and would wipe your Program Files if you weren't careful:
[http://arstechnica.com/civis/viewtopic.php?t=479484](http://arstechnica.com/civis/viewtopic.php?t=479484)

~~~
DCoder
Myth II had a similar bug, too: [http://minimaxir.com/2013/06/working-as-
intended/](http://minimaxir.com/2013/06/working-as-intended/)

------
rndn
I don't want to blame anyone, but maybe there should be foolproof default
security measures that prevent something like this from happening. For
example, rm -rf called on a home, documents, music, or photos directory could
require an additional confirmation, perhaps through a GUI.

~~~
SolarNet
There are: they're called users, groups, and file permissions. Applications
like Steam should really be running under a separate user so they can't write
to personal files (and maybe just have read permissions). But of course,
proper application isolation and file permissions are something few people get
right on their personal machines, let alone know about.

Window managers don't make it any easier, and I put a lot of the blame on them
for not making it easy to configure applications to start under different
users.

~~~
Arnavion
Steam shouldn't run as its own user. It's a user-level process, not a system
process. It needs to have user-specific things (install directory, save games,
etc.) that need to be accessible to the person using it. Separating processes
into users is only one method of sandboxing, and not appropriate in this case.
Sandboxing via mechanisms like SELinux is the correct solution.

One of the users in the Github thread even mentions how SELinux prevented the
same thing from happening on his machine.

~~~
SolarNet
Yes you are of course correct about SELinux!

I actually separated Steam into a sandboxed "steam" user account. But maybe
that's because I learned Unix on BSD and never picked up SELinux or how to use
it (and it isn't obvious from a desktop user-accounts perspective). I should
probably check that out.

------
izietto
It reminds me of the Bumblebee Project bug: [https://github.com/MrMEEE/bumblebee-
Old-and-abbandoned/issue...](https://github.com/MrMEEE/bumblebee-Old-and-
abbandoned/issues/123)

Spoiler: due to a forgotten space the entire /usr folder was deleted

------
Alupis
I've done something like this before, with my own build scripts. Except I was
running as root (a requirement for some parts of the build).

Part of the scripts installed a bunch of files into what was supposed to be a
fakeroot, however I did not have bash's 'set -u' configured and an incorrectly
spelled path variable was null, meaning something like: "${FAKEROOT}/etc" was
translated into "/etc". Before I realized it, it had clobbered most of my /etc
directory.

When the build failed, I was puzzled. I only noticed there was an issue when I
opened a new shell and instead of seeing "myuser@host ~]#" I got
"noname@unknown ~]#". Uh oh...

Needless to say, I now do my development of those scripts from within a VM.
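With bash's nounset option (which the comment says was missing), the misspelled variable aborts the script before any path is built. The variable names below are illustrative:

```shell
#!/bin/bash
set -u
FAKEROOT=/tmp/build-root   # the intended variable

# The typo'd name FAKE_ROOT is unset, so under nounset the expansion
# itself fails with "unbound variable" and no path like "/etc" is ever
# produced. The subshell keeps the demo script alive to report it.
( echo "target: $FAKE_ROOT/etc" ) || echo "caught the typo"
```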

------
iopq
Since this kind of thing keeps happening, isn't there a need for a safer tool
than shell scripts? Maybe with a little bit more safety around null/empty
variables and not as stringly typed?

~~~
rdc12
Tools that are safer than shell scripts exist, but that doesn't mean end users
have them installed, and it doesn't mean people will choose to use them either.

~~~
iopq
Python?

------
deeviant
Reminds me of a bug with the Myth II uninstaller immortalized by the following
Penny Arcade cartoon:

[http://www.penny-arcade.com/comic/1999/01/06](http://www.penny-
arcade.com/comic/1999/01/06)

Basically, they called delete tree. If the user installed or moved the game to
a different location, say, the _root_, it would delete-tree away somebody's
whole computer. Fun times.

------
im3w1l
Just wanted to remind everyone that deleted files can be recovered until their
space is reclaimed for something else.

So if you notice something like this happening, shut down the computer ASAP
so the files can't be overwritten. Plug the drive into another computer but do
_not_ mount it. Instead, run a file recovery program on it.

For an SSD it becomes murkier though, what with their trimming and automatic
garbage collection.

~~~
nodata
Yeah if you have time to manually guess the filenames on half a million files.
Great.

~~~
im3w1l
$ extundelete /dev/sda{n} --restore-all

------
winslow
We all make mistakes when coding. However, knowing an engineer at Valve did
this in a way makes me feel a little bit better about my abilities as a
software engineer. At the end of the day we are all human, and it makes
something like working at Valve/Google/Big Name Corp a little less daunting.

------
natrius
When are we moving to the mobile security model on the desktop? I love knowing
that nothing I install on my phone can ever do anything like this, or access
data that belongs to another program in general.

------
keruspe
Pretty sure shellcheck (
[http://www.shellcheck.net/](http://www.shellcheck.net/) ) has a lot to say
about this faulty script...

------
lago
This is clearly programmer error, but it's multiplied by a random variable:
the Degree of Misery of the programming language -- in this case, bash.

------
blueskin_
This is an amateur mistake. Makes me wonder what horrors lurk inside Steam
itself...

------
orblivion
Thanks for reminding me to unmount my backup drive.

------
codemac
Steam in docker, anyone?

------
douche
Why are we still running bash scripts? The only thing worse might be DOS batch
files.

It's not like Python isn't available on every major Linux distro. It's a
little harder to ensure it's on Windows, but when Steam is installing every
point release of the Visual C++ runtime that has ever existed on my system,
why not bundle Python in there too?

~~~
mst
And even if it isn't, modern perl would do as well as python for this case,
with no language wars required.

~~~
douche
Exactly. There are better options that have more sane string and path
handling. Much as I grumble about using PowerShell on Windows, at least you've
got the .NET framework underneath you that you can drop into relatively
easily.

------
orbitingpluto
Horrible life lesson. Saved by my own laziness.

------
chj
This could have been a quick hack; maybe the dev just forgot to put it on the
issue list.

------
mofle
I've written a short guide on how you should safeguard `rm`:
[https://github.com/sindresorhus/guides/blob/master/how-
not-t...](https://github.com/sindresorhus/guides/blob/master/how-not-to-rm-
yourself.md#safeguard-rm)

------
eastbayjake
If you're curious about who the "Scary!" guy is, someone pointed to where the
code was checked in:
[https://github.com/lrusak/steam_latest/commit/21cc14158c171f...](https://github.com/lrusak/steam_latest/commit/21cc14158c171f5912b04b83abf41205eb804b31)

~~~
rcxdude
That looks like an unofficial repo. I don't think the guy who committed that
actually wrote it.

