Yes `yes no` (michaeltang.me)
200 points by tylermenezes on March 13, 2013 | 83 comments



The explanation is completely incorrect. The outer invocation of `yes` is never actually started. The reason is that the inner invocation is being used as an argument to the outer one, which means bash needs to capture the entire output of the inner invocation before it can even start the outer one.

Instead, all you're seeing is bash thrashing as it tries desperately to capture all the output of `yes no` in-memory.

I expect you'd see the exact same behavior with echo `yes no`.
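
You can watch the same mechanism safely by bounding the inner command (the `head` is my addition here):

    $ echo `yes no | head -3`
    no no no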

Now if you really want to screw over your machine, the classic bash fork bomb is

     :(){ :|:& };:


I've always preferred this blunter command for trashing VMs I was going to delete anyway:

    dd if=/dev/random of=/dev/sda
Depending on how long it runs, you end up with a more or less screwed computer. It's funny to see how the machine falls over when you invoke a simple 'cd' command after this.


It's not so funny when you do this accidentally, on a 2 TB disk full of user data, on a live production server, at 2AM on a Friday night...

(The server was part of a MogileFS cluster so there were multiple copies of the data online. There was no data loss, not even any downtime. Still, it was scary as hell, and I spent all Saturday in the data center restoring the box.)


How did you end up doing this accidentally?


I've seen stranger: a redirect gone the wrong way

    $ foo < bar  # runs command `foo`, reading from `bar` on stdin
    $ foo > bar  # runs command `foo`, writing stdout to `bar`
Writing `>` instead of `<` has resulted in many a blowup at 3 AM


I did this with a SQL DELETE whose WHERE clause was meant to isolate a set of records but instead matched all of them, while trying to test a production installation, at 3am (YES). I had to buy 3 DB admins lunch the following day because they came in to restore multiple customer tables. The DELETEs were triggered across numerous tables and platforms (AS/400 and SQL Server). The walk of shame to tell my director was not my most shining moment. I'd like to go back in time and slap myself just moments before. Running commands on live production data. Silly programmer.


At least with MySQL's tools, and thanks to ssh's willingness to work as a pipe, it's very easy to clone a database locally for fucking about when you're trying to do something like this:

  ssh remotehost.example.com mysqldump -udbuser proddatabase | mysql -uroot testdatabase
(N.B. watch out for mysqldump options that may lock your production tables during the dump.)
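
For InnoDB tables, --single-transaction is the usual way to dump without locking; a sketch using the same hypothetical hosts and database names:

    ssh remotehost.example.com mysqldump --single-transaction -udbuser proddatabase | mysql -uroot testdatabase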


My friend did this...

  rm * -i 
The system response was "-i not found" or something like that.


I once discovered that apparently

  rm bob*
has completely different behavior from

  rm bob *
The above line of code successfully fixed a bug I'd been trying to find for weeks in source code in the same directory. When I rewrote it from scratch, the bug was gone.


This is certainly different: `bob*` matches all files whose names start with "bob", while `bob *` names a file called "bob" plus every file in the directory.
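
A harmless way to preview what the shell will actually hand to rm is echo:

    $ touch bob bob1 other
    $ echo bob*
    bob bob1
    $ echo bob *
    bob bob bob1 other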


Thank you, Donny.


rm -i is your friend


I find rm -i to be unusable because it is so annoying when you are deleting more than a couple files. It would be nice if rm -I (notice the capital) would first determine all of the files to remove, print them out and then ask if you want to delete all of those files. Instead it just says "rm: remove all arguments?" which is clearly much less annoying than having to type y for every file, but it is also mostly useless.
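
Something like this shell function approximates the behavior you want (a sketch; the name rmi is mine):

    # list exactly what would be removed, confirm once, then remove
    rmi() {
        ls -d -- "$@" || return
        read -p "remove all of the above? [y/N] " ans
        [ "$ans" = y ] && rm -- "$@"
    }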


Zsh almost does this when using rm * :

  rm /tmp/*
  zsh: sure you want to delete all the files in /tmp [yn]?


protip: always put -i first.

I avert the problem by having rm run interactively by default:

    alias rm='rm -i'
    alias cp='cp -i'
    alias mv='mv -i'

http://aniggler.tumblr.com/post/44530262158/the-first-thing-...


Until one day, lulled into a false sense of security, you find yourself on a system without these aliases…


I usually don't alias the command itself; that's bad form. It limits what you can do, and it screws you the day you're in an environment without your aliases and forget they're missing. Live and learn.
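
A common compromise is to alias a distinct name, so your muscle memory never depends on rm itself being safe (the name is my choice):

    alias del='rm -i'   # deliberate, separate name; plain rm stays dangerous everywhere, as expected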


Exactly -- I was benchmarking a disk that was having a performance problem, and I did something like "dd if=/dev/zero of=/dev/sda" instead of "of=/testfile".


That doesn't sound so bad. /dev/random is slow, you have plenty of time to catch it. It probably won't even reach the first partition. urandom on the other hand...


Depends on your system. On BSD, /dev/random is /dev/urandom.


/dev/zero is plenty fast... :-/


Aahhh. In the old days, we used this command just to mess with our workmates:

  while :
  do
     clear > /dev/tty
     sleep 5
  done


We wrote our own shell when I was at university (it was completely offensive, not at all obedient, and generally no help whatsoever) and chsh'ed people to it if they left a terminal open :)

(for ref, we had to do very naughty things to SunOS 4 to add it to /etc/shells)


/usr/games/worms is the best shell ever.


Never thought of that - great suggestion :)


Back when `/dev/mem` was a thing in default kernels (now you need a special config, or a module), I enjoyed `cat /dev/urandom > /dev/mem`. That would usually bork things pretty quickly, so then I would set random offsets with `dd`. Fun times.



Or try this -- http://git.zx2c4.com/memory-hemlock/tree/slow-death.c

It picks a random device on your system -- RAM, video, BIOS, etc. -- and writes a random number of random bytes into it, then sleeps for a random amount of time.

Last one standing lives.


What's /dev/sda?


On Linux, the hard disk. This command, dd, copies data in bulk from /dev/random (a special device on *nix that outputs random bytes) to /dev/sda (your hard disk). That means it starts to overwrite your disk with trash, rendering the system unusable.


I know what `dd` is. I just don't use Linux, so I'm not particularly familiar with the Linux-specific /dev entries. Thanks though.


Most probably the storage device that is your hard drive.


Probably your main drive.


The device file applications can use to access the first hard drive on most Linux systems. It provides raw access without filesystems or partitions.

/dev/sda1 is the first partition on that drive, /dev/sda2 the second, and so on. /dev/sdb is the next hard drive, /dev/sdc the one after that. Beyond /dev/sdz, the naming scheme apparently depends on the hardware driver in use: going from /dev/sdz to /dev/sdaa is what happens with the default SATA and SCSI drivers, up to /dev/sdzzz, at which point you apparently run into problems. [1]

http://rwmj.wordpress.com/2011/01/09/how-are-linux-drives-na...

[1] http://kerneltrap.org/mailarchive/linux-scsi/2010/9/20/68866...
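
For example, on a box with one disk holding two partitions plus a second, unpartitioned disk, you'd see something like:

    $ ls /dev/sd*
    /dev/sda  /dev/sda1  /dev/sda2  /dev/sdb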


> Now if you really want to screw over your machine, the classic bash fork bomb is `:(){ :|:& };:`

Actually, you're wrong. At least on a Mac (IIRC) it caps the number of processes per user, so fork bombs just fill the console with errors. I'm sure you can get around it, but that kind of defeats its simplicity.

EDIT: I mean, don't get me wrong, it still really bogs your computer down, but you can still kill the parent bash process in a few seconds.


Crashed my machine running Linux Mint. I couldn't Google it, so I had to try it. I don't know what I expected.

Can someone explain how it works?
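
Renamed so you can read it: `:` is defined as a function that pipes itself into a backgrounded copy of itself, then gets called once. Every call spawns two more copies, so the process count explodes exponentially. A sketch:

    bomb() {           # :()  -- define a function named ':'
        bomb | bomb &  # :|:& -- the body: call itself twice, piped, in the background
    }                  # };
    bomb               # :    -- finally, invoke it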



Be careful running commands that you don't understand! I've been bitten by that before...


I don't know about Linux (and I really should in this case, as it's my 'field'), but at least on a Mac you can see and restrict the number of processes a user can spawn with `ulimit -u`:

    zooey:~ duane$ ulimit -u
    709


Depends on how much RAM you have. If your machine swaps everything out to disk before it hits the process limit, then you're pretty screwed anyway.


It's interesting that bash doesn't figure out the system's maximum command line length and either error out or truncate if any single argument exceeds that length.


It would have to know the context; those limits don't apply for

    s=`foo`
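
Right -- the limit in question is the kernel's execve() argument limit, which only bites when the result is passed to an external command. You can query it (the value varies by system):

    $ getconf ARG_MAX
    2097152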


`yes` is an excellent tool for simulating CPU load:

    CORES=4
    # note: {1..$CORES} doesn't expand in bash, and '&;' is a syntax error
    for i in $(seq "$CORES"); do yes > /dev/null & done


I'm proud that I actually found a bug in yes over a decade ago. Yes should exit when whatever it is talking to exits, but under some circumstances doesn't and instead goes into an infinite loop.

http://permalink.gmane.org/gmane.comp.gnu.sh-utils.bugs/48


`yes` will do the same thing. It's just accumulating all the output from yes inside your shell. The yes command is actually pretty crappy for making fork bombs.


It's a memory bomb, not a fork bomb.


I just killed the shell that it was running in. By then it was consuming one CPU core, and about 2 GB of RAM.

Then I tried again, in a subshell, and let it run...

  [user@machine] ~ % zsh
  [user@machine] ~ % yes `yes no`
  zsh: fatal error: out of heap memory
  [user@machine] ~ %     

  [user@machine] ~ % bash
  user@machine:~$ yes `yes no`
  bash: xrealloc: ../bash/subst.c:5184: cannot allocate 18446744071562067968 bytes (4297060352 bytes allocated)


I'm surprised bash wasn't able to allocate 16 exabytes, though I will accept it's a fairly large step up from having 4GB allocated. But surely your machine had heaps of memory?


Linux will, by default, let you allocate as much memory as you want; it doesn't return failure based on available memory. Instead, memory is only actually mapped when you first write to it, and if none is available at that point the kernel triggers the OOM killer, which attempts to kill misbehaving processes.

The 4 GB figure is the maximum memory that can be allocated to a process (this is configurable): http://en.wikipedia.org/wiki/OOM_Killer
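
(The overcommit behavior is tunable; on Linux the knob is /proc/sys/vm/overcommit_memory:)

    $ cat /proc/sys/vm/overcommit_memory   # 0 = heuristic (default), 1 = always overcommit, 2 = never
    0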


Try this: yes 'c=(╱ ╲);printf ${c[RANDOM%2]}'|bash

It doesn't do any harm, promise.


Video of what it does (BASIC version): http://www.youtube.com/watch?v=m9joBLOZVEo


To do it without using yes:

    while [[ 1 ]]; do c=(╱ ╲);printf ${c[RANDOM%2]}; done
It's a cheesy maze generator.


    tr '\000-\377' '[/*128]\\' </dev/urandom


That's pretty awesome! (For the sceptics, it indeed does no harm and can be interrupted at will)


> and can be interrupted at will

Unless you're running tmux, obviously...


Haha, good one. Here's another harmless command you can try. It's fun!

sudo rm -rf /


I wouldn't joke about that. Someone may actually try it and lose a lot of time and data. Yes, most HN readers will get your 'joke', but possibly not all.


It'd teach them not to run random commands they found on the internet, what rm does, and to make backups. Three valuable lessons.

That is, if it would actually work: modern rm implementations have a special case for /. Read the manpage and/or try this instead:

rm -rf --no-preserve-root /


It's at least disabled by default in the latest versions of Ubuntu and other distros, but regardless, paul_f shouldn't have posted it without any disclaimer.


If you blindly run code you find on the Internet without understanding what it does, you deserve all you get.


In case anyone didn't know, this command would recursively remove all files and directories on the entire filesystem.


No it [most likely] wouldn't. Read the manpage.


To clarify for anyone not familiar with rm: running `rm /` is such a stupid thing to do that rm has a special case where it won't let you delete "/". To override this behavior, you need the flag "--no-preserve-root".


This is an interesting illustration of the GNU coding standards:

http://www.gnu.org/prep/standards/html_node/Semantics.html

(as others have noted, the program actually doing something here is bash: it attempts to dynamically allocate as much memory as it can to store the output of 'yes no'. Hopefully the author discovers ulimit.)
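
For instance, capping the shell's virtual memory before experimenting makes the expansion fail fast instead of thrashing; a sketch:

    ulimit -v 1048576   # limit this shell to ~1 GiB of address space (units are KB)
    yes `yes no`        # the substitution now hits the limit quickly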


In school my friends and I used to have little "hacking" competitions -- who could do the most damage to the other's computer using only an SSH session.

An old favorite of mine was the elegant: `yes > no` [1]

[1] I'm fairly certain I made this up... though it's obviously trivially easy to "discover" on your own.


My friends and I would play a similar game in DOS: the goal was to mess up the other's machine as much as possible, but in a reversible fashion.

Every so often I'd come to my machine and just have a blinking cursor on a blank screen and have to figure out how to get back to a working machine.


This thread is one of the best I've seen on HN. 1001 ways to screw yourself in bash. Love it. My contribution is a message from Mr. Odus himself -- also a Reverend.

Don't do this unless you want a bad day :)

  `printf "\ fr- mr odus"|rev`


Sorry, but

  sudo rm -rf \
...doesn't do anything.

rev(1) doesn't reverse your slashes. :D


Interesting read. Another use I have found for "yes" is separating terminal output. Since "clear" doesn't delete your terminal scrollback, it can be easy to scroll up and get confused about what you are looking at. This is especially useful when compiling test programs: separating the build output with "yes" means you're not hunting down compiler errors/warnings that you have already fixed :) You can even be descriptive with your yes call, e.g. "yes fixed x checking to see if y is still broken".
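
Concretely, between runs it's something like this, Ctrl-C'd after a screenful:

    $ yes '---- fixed x; checking whether y is still broken ----'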


I was a sysop for the University of Florida CS department once upon a time. It was always fun when students first learned about fork(). This post reminded me of that.


Interestingly enough, the lxterm in which I run echo `yes no` or yes `yes no` dies after allocating somewhere around 4 GB of RAM. I would expect this on a 32-bit or 32-bit PAE kernel, but I don't understand why it happens on a 64-bit kernel.

EDIT: Got the output by running bash in bash:

  bash: xrealloc: ../bash/subst.c:5184: cannot allocate 18446744071562067968 bytes (4296822784 bytes allocated)

So now I'm wondering why 18446744071562067968 bytes is the next logical step after 4GB.


It isn't -- it's just a typical 32-bit signed integer overflow that gets sign-extended to create a 64-bit unsigned integer. The previous size was probably just under 2 GB for this string. My guess is that “4296822784 bytes allocated” (which is already a little over 4 GB) refers to all the heap memory allocated so far, not just for this one string, which was actually slightly under 2 GB long.
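
The arithmetic checks out: 18446744071562067968 is exactly 0x80000000 (i.e. -2^31 as a 32-bit value) sign-extended to 64 bits and printed as unsigned:

    2^64 - 2^31 = 18446744073709551616 - 2147483648
                = 18446744071562067968  (= 0xFFFFFFFF80000000)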

Bash is full of bugs like this; e.g. on a 64-bit system try doing echo $[263/-1].


    $ bash --version
    GNU bash, version 4.2.37(1)-release (x86_64-pc-linux-gnu)
    ...
    $ echo $[263/-1]
    -263
What is it "supposed" (failure case) to do?


Bit by HN formatting.

  $ echo $[2**63/-1]
  Floating point exception
Bash crashes out entirely because it doesn't check the operands are safe. See also http://kqueue.org/blog/2012/12/31/idiv-dos/


2^32 = 4294967296

2^64 = 18446744073709551616


Looking at this blog, I started to say to myself, "Wow, the Svbtle network is getting worse and worse contributors," and then I realized that this blog just _looks_ a lot like a Svbtle blog. Then I wiped my forehead, mostly because I expect not to read things that actually have to tell me that the thing that makes Unix different from MS-DOS (uh, what?) is "the terminal".


The effect is interesting. I ran it and, sure enough, started running out of memory. However, simply killing the yes process didn't stop it. I had to kill the bash process in which I had typed the command.


Probably yes was just waiting for the write buffer to clear so it could write again, while bash was busy asking the system for more memory, and the system was busy swapping everything out to make room.


"say" is cool.

    cat /usr/share/dict/words | perl -MList::Util=shuffle -e 'print shuffle(<STDIN>);' | head -n 5 | say -r 150


Here's something similar that is cumbersome to stop, but not so disastrous:

    yes 'yes yes&' | sh


For some reason I keep thinking of the halting problem.


Actually, this is an example of where the halting problem can be trivially solved through static analysis.

Finding all of the programs that are isomorphic to this one when given certain input, now... that's the problem.


Improv teaches us to say "yes, and" to opportunities.

Note that "yes, and" is very different from "yes, but."

Source: _Improv Wisdom_.



