Patch runs ed, and ed can run anything (rachelbythebay.com)
490 points by akerl_ 10 months ago | 147 comments



I will admit that people consider it a little surprising, and non-hackers will be loudly booed if they rely on this behavior, but ed finds its way into lots of utilities like this. It's a remnant of a bygone era when people wanted their tools to have this kind of power.

But the author is absolutely right: nowadays we should prefer red. Still, a small amount of shame if you're applying untrusted patches.
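For anyone who hasn't met it: red is ed with the shell-escape command disabled. A tiny sketch, guarded since red isn't installed everywhere (marker.txt is a made-up name):

```shell
rm -f marker.txt
if command -v red >/dev/null 2>&1; then
    # red is restricted ed: the ! shell-escape command is refused,
    # so this script cannot create marker.txt.
    printf '!touch marker.txt\nq\n' | red -s 2>/dev/null || true
fi
# Either way, no shell command ran:
test ! -f marker.txt && echo 'no shell escape'
```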


The early computing era (I'd say until the early '90s) definitely had something special in the way so much relied on trust, both trusting the user and trusting the wider community. Today everything is so locked down, paranoid and anxiety-inducing that computing has become increasingly stressful and unfun.

I'm also reminded of an RMS writing (I think it was a letter of some sort) campaigning that denying computer lab users root access was oppressive. A bit sad that I can't actually find it now.

And of course the classic (if misused like I'm doing here) Franklin quote "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."


It's easy to trust the user and the wider community when your typical user is "another student at UCLA" and the worst they can steal from you is yesterday's experiment log.

Put another way, it's easy to trust a user if you and they both know that they can be tracked down and face disciplinary action if their behavior is deemed unacceptable.


Moreover: it's easy to trust each other if all you're doing is scientific research, games and pranks. But one day we woke up and found half of the world relying on computing in everything, from business to healthcare to national security. With regular people and regular day-to-day stuff comes regular crime.


Perhaps it's the difference between the experimentation and commercialization phases of some technology.

Having a culture of trust replaced by one of distrust.


> I'm also reminded by a RMS writing (I think it was letter of some sort) campaigning that denying computer lab users root access was oppressive.

There's a reference that may be related to this in the book "Hackers: Heroes of the Computer Revolution" that talks about how Stallman despised passwords and worked to promote more open access to the systems.

https://imgur.com/a/SZuNc


Back in the 1980s everyone had root access in the Minix lab at school. It was kind of necessary to do the assignments for operating systems class.


Hmm... I am slightly surprised by this. Can anyone reconcile it with his stance on privacy / surveillance?


At the time, computers were not something everyone used in their daily lives. They were something that was shared. In that context, an administrator denying users of the computer access to pretty important functionality (imagine you couldn't run a debugger on your laptop, or couldn't get root) is anti-user behaviour. Passwords were (at the time) a way to restrict users from being able to access the machine without the administrator's permission, since an administrator could remove your login entry -- even if you were physically present at the computer, you couldn't do anything.

RMS has said multiple times that technology doesn't change our values, because our values are far too ingrained to be changed by something as trivial as technology. But different technologies change the outcomes that we judge using our values. The proliferation of the internet and data sharing changes whether or not passwords are a good thing (since they are now used to protect user data from other people, rather than to block users from accessing a system they should have the right to access).



2047? I wish. Thanks to Apple and Google, everyone today has a computer they can't get the root password to!


The world's ungrateful. Apple made the root password blank in High Sierra, and everyone started screaming about a terrible security bug ;)


I think you're thinking of this: https://unix.stackexchange.com/questions/4460/why-is-debian-... (see the last answer). And yes, I see the irony of going through Stack Exchange for this.


That sounds correct. Although I feel like there might have been an expanded version of this story somewhere, this is definitely where I remember it from.

For posterity, this is a bit more canonical source for the piece:

https://ftp.gnu.org/old-gnu/Manuals/coreutils-4.5.4/html_nod...

> Why GNU su does not support the `wheel' group

> (This section is by Richard Stallman.)

> Sometimes a few of the users try to hold total power over all the rest. For example, in 1984, a few users at the MIT AI lab decided to seize power by changing the operator password on the Twenex system and keeping it secret from everyone else. (I was able to thwart this coup and give power back to the users by patching the kernel, but I wouldn't know how to do that in Unix.)

> However, occasionally the rulers do tell someone. Under the usual su mechanism, once someone learns the root password who sympathizes with the ordinary users, he or she can tell the rest. The "wheel group" feature would make this impossible, and thus cement the power of the rulers.

> I'm on the side of the masses, not that of the rulers. If you are used to supporting the bosses and sysadmins in whatever they do, you might find this idea strange at first.


> Today everything is so locked down, paranoid and anxiety inducing

because computing is now globally connected, not limited to small computer labs


Knowing "why" doesn't make it feel any better. If anything, it makes it worse, knowing both that all the paranoia is justified and that our systems are not really yet anywhere near something you could comfortably call "secure".


I'm seeing more and more people these days reference the Franklin quote in the context of computing. I've used it myself, and I believe it started gaining in popularity shortly after the rise of Secure Boot. A sign of the times indeed.

That said, I'm all for preventing remote attacks; it's when people start thinking about locking the actual user out of the computer he/she owns that I get offended.


This bums me out, too. Back in the day, you didn't have to treat all programs and data as hostile, because they weren't. Nowadays the GNU linker whines about calling gets(), but that wasn't a problem, because you knew that a 32-character stack-allocated buffer was big enough to answer a yes-or-no question.


Yes, but "Back in the day" ended with the Morris Worm at the latest, and that's coming up on its 30th anniversary today.


That's fair -- the Morris Worm was an early demonstration that user-hostile software could easily make everyone's life worse. It still sucks that, as modern coders, most of our lives will be spent either producing or defending against hostile software.


Because the world was a nicer place back then, or because the computers were far less useful and thus mirrored the world far less?


The latter. As long as it was a bunch of nerds doing nerdy things, nobody cared. As soon as computers started processing things relevant to regular people, the regular people happened, and with them all the incentives for hostile behaviour.


Indeed. I was being overly rhetorical :)


Ironic, considering he writes from an offline PC these days and does not own a mobile phone, AFAIK.


His reasons for doing that aren't related to security, but rather wanting Free Software purity.

Also I don't think his PC is "offline," but rather he does weird things like reading web pages via a script that scrapes the text and emails it to him.


That's not that weird. In fact, I remember there used to be a service called rss2email or something like that.


I've actually emailed him about the topic of free software mobile phones. He is against them entirely, even if one was completely free software, because by design they track you using cell towers. He said that if he needs to use a phone he uses land-lines.

I can pull up the email if you like. He's a pretty reasonable guy if you ask him a question, and is surprisingly responsive.


"By 1982, Stallman's aversion to passwords and secrecy had become so well known that users outside the AI Laboratory were using his account as a stepping stone to the ARPAnet, the research-funded computer network that would serve as a foundation for today's Internet.

...

"[When] passwords first appeared at the MIT AI Lab I [decided] to follow my belief that there should be no passwords," Stallman would later say. "Because I don't believe that it's really desirable to have security on a computer, I shouldn't be willing to help uphold the security regime.""

Source:

http://www.oreilly.com/openbook/freedom/ch07.html


@zokier "Today everything is so locked down"

I highly doubt that. Most users use Windows and don't even have antivirus/firewall software installed on their computers. Networking hardware is set to defaults. I would argue more computers and servers are unprotected, because we have more computers and users online.


Windows comes with an antivirus and a firewall preinstalled these days. The antivirus re-enables itself after a few hours if you disable it too.


It won't help you; here is one example: http://www.hackingarticles.in/set-bypass-outbound-rule-windo...


I don't think it's so much that people wanted patch to be able to run arbitrary code, but rather that writing your tools to be very flexible allows you to get the functionality you need with very little code. The first version of ctags was written as a shell script that invoked ed to do the heavy lifting, for example. It was a small fraction of the size of the C rewrite of ctags in later Unixes.

So that's how you end up with m4 being Turing-complete, being able to put shell commands in anything, that sort of thing.
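In that ctags spirit, here's a toy sketch of driving ed from a script (file names made up; guarded since ed isn't always installed):

```shell
printf 'def foo\ndef bar\n' > funcs.txt

if command -v ed >/dev/null 2>&1; then
    # Drive ed non-interactively, the way early shell tools did:
    # g/re/ runs the substitution on every matching line, w saves.
    ed -s funcs.txt <<'EOF'
g/^def /s//function /
w
EOF
fi

cat funcs.txt
```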


>It's a remnant of a bygone era where people wanted their tools to have this kind of power.

You should still want your tools to have this power.

It's amazing what you can do with tools built like this, and how _easy_ it is. People use the same motivation for making microservices ("do one thing well"); they just don't think about it for command-line utilities and treat shell scripting like a second-class citizen.


The problem is not the ability to "do one thing well", but rather that the power means the tool ends up doing a lot of other things ... not so well.

Patch does a good job of patching. There is no reason why it should also have the job of being able to run arbitrary scripts.

FWIW, perl was what convinced me that the previous Unix-y ideal of "do one thing well" was overhyped.


I personally don't interpret "do one thing, and do it well" to mean "do one small thing, and do it well"; the "one thing" could be as big as a programming language, or a display server, or whatever. Perl's "one thing" would be being a scripting language; it should be a scripting language, and it should be the best scripting language it can be. It just shouldn't also do a bunch of other stuff, like also being an RDBMS.

That philosophy could also be applied recursively; Perl should do just one thing; it should be a good scripting language. The Perl JSON module should just do one thing; it should be a good JSON encoder and decoder. That doesn't mean it should be simple; the JSON module should have as much complexity as necessary to make it a good and useful tool.

I do recognize that my interpretation isn't a great _definition_, because you can construe most things to be UNIX-y if you just expand what "one thing" it should be good at enough, but I do think it's useful to keep in mind when designing something.

That said, I have never written Perl (other than a very tiny amount of Perl 6), and don't know exactly what about it convinced you, but I wouldn't mind a discussion about it if you clarify.


The "do one thing, and do it well" phrase is really misleading; the Unix philosophy cannot be distilled down to a single phrase, and this one only makes sense if you already understand the ideas. (Calling it a "philosophy" is kind of misleading too, because it doesn't expect everything to follow it. It is only an ideal for program design that you should strive for, not bind yourself to, and you can ignore it when that makes sense. A good example would be something that does a lot of things in order to let other programs follow the Unix philosophy better -- like an OS kernel.)

The main idea behind the "do one thing" is really composability: by doing one thing, your program can be used to provide functionality for other programs, to form applications. Patch uses ed because it needs to edit text and ed provides the text-editing functionality, but ed can also be used interactively, or from a shell script, or by some other program. This is because ed doesn't care how it is called or where the commands come from. This isn't enabled by its simplicity, but by its design to be composable: it would be a trivial difference in terms of complexity (and indeed there were similar programs on other systems that did exactly that) to ask the user for a filename upon launching it. But that would only make it useful as an interactive program (which is also why the Unix philosophy warns against captive user interfaces).

Ironically, Microsoft's COM and OLE followed that idea better than anything on the graphical side of Unix ever did (except X11 itself, but few programs ever took advantage of it - the best case would be applications meant to be "swallowed" inside docks - docklets - and panels, but those are very task-specific). However, there is also something to be said for being simple, and COM is anything but simple.


So come up with command line switches to restrict behavior when necessary.

>the power means the tool ends up doing a lot of other things ... not so well.

What I find myself wanting over and over is to do things with software that the developers didn't intend. Maybe using a hammer to mash potatoes is a bad idea, but I don't want the hammer company to make it impossible. Not because I want to shoot myself in the foot, but because preventing me from shooting myself in the foot prevents me from doing a lot of other good things.


> Patch does a good job of patching. There is no reason why it should also have the job of being able to run arbitrary scripts.

It (patch) is not 'run[ning] arbitrary scripts' (ignoring the fact that the actual 'diff' output file is itself a 'script' in a broad sense).

Early versions of diff used to output 'ed' command scripts that would instruct 'ed' how to change one file into another file.

Rather than re-implement all of 'ed' inside of patch, patch was simply written to call 'ed' over a pipe and feed it the input "diff" (in 'ed' format), letting 'ed' handle interpreting the 'ed' script. This in and of itself makes sense. All the code for interpreting 'ed' scripts is in 'ed', and enhancements to 'ed' automatically become available to patch and anyone else that calls 'ed', without having to keep multiple code bases synchronized. Note that some of this usage may very well have been developed before shared libraries appeared, so there would have been no easy way to share the same 'ed' library among multiple tools.

It just so happens that one of ed's commands is a command to execute an arbitrary program. So a carefully crafted diff file can instruct ed to run a program of your choosing.
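Concretely, a sketch with made-up file names (invoking ed directly here stands in for the pipe patch would set up, and the block is guarded since ed isn't installed everywhere):

```shell
printf 'original line\n' > target.txt

# An ed-format "diff" whose first line is ed's shell-escape command.
cat > evil.patch <<'EOF'
!date > pwned.txt
1c
patched line
.
w
EOF

if command -v ed >/dev/null 2>&1; then
    # patch would feed this to ed over a pipe; the ! line runs
    # an arbitrary program (here, harmlessly, date) as a side effect.
    ed -s target.txt < evil.patch
fi
```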


Patch should have the ability to determine what should be patched and where, then hand off the job of doing that to any other UNIX tool.


I'm not sure if I'm reading your comment correctly, but I don't think Perl was ever intended to fit the UNIX philosophy. What with TIMTOWTDY being a central tenet.


I thought the acronym ended with an "I", TIMTOWTDI, "there is more than one way to do it".

But I don't know if this is totally anti-unix; there's something more subtle, unix systems try to have a "proper layering" of tools, starting with high-level stuff like compilers and languages, sitting atop utility programs and shell, which ultimately sit on an OS kernel that offers its own similar layering of complexity, with simple and clean primitives at the very bottom (files, sockets, etc).

Working within this ecosystem of OS/libs/tools, you can compose any particular solution in a wide variety of ways, there's usually more than one "right way" to solve a problem. But the pieces you build your solution upon are generally more and more specialized/single-purpose as you move down the stack.


You're right. I don't know how my brain accepted that Y at the end "There's more than one way to do yoga"? lol


Perl was specifically written as a rebellion against the Unix Way: unifying all those little tools into one giant language.


OpenBSD's patch handles ed-style diffs internally now, and it is also not permitted to execute other programs, thanks to pledge.

https://marc.info/?l=openbsd-cvs&m=144498099601083&w=2


Patch knows that ed is a fine choice. Ed is generous enough to flag errors, yet prudent enough not to overwhelm the novice with verbosity.

In case anyone has not been exposed to the reference, it's perhaps invoking the famous `ed, man !man ed` page. https://www.gnu.org/fun/jokes/ed-msg.html


Crap! I knew I forgot something. I forgot to reference "eat flaming death". Oh well, good catch!


We used that phrase in some Tradewar games during the BBS Era.


geez, towards the end that starts reading like my bottle of Dr Bronner's Soap


This is bad and should be fixed, but there are fairly few circumstances where it actually creates a new vulnerability. The majority of uses of patch are applied to source code by someone who's going to end up running that code anyways, so applying patches you haven't read closely from sources you don't trust is already unsafe.


> The majority of uses of patch are applied to source code by someone who's going to end up running that code anyways, so applying patches you haven't read closely from sources you don't trust is already unsafe.

I think you are overlooking a case that I suspect is common: applying a patch without reading the patch source and then using your source code control system to review what the patch did.

That lets you review that patch using whatever tools you normally use for reviewing code changes, which are often much nicer than reading a raw ed script.


If possible, you should use your VCS's native tool for applying patches, not patch(1). Otherwise, you risk that the patch will mess with the repository internals.

For example, this patch compromises git repositories when applied with patch(1):

  --- a/.git/config
  +++ b/.git/config
  @@ -0,0 +1,2 @@
  +[core]
  +	pager = cowsay
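A sketch of checking that advice (assumes git is installed; "victim" is a throwaway repo name, and the claim that git apply refuses .git/ paths is my understanding, not something from the thread):

```shell
git init -q victim 2>/dev/null || true

cat > evil.patch <<'EOF'
--- a/.git/config
+++ b/.git/config
@@ -0,0 +1,2 @@
+[core]
+	pager = cowsay
EOF

# patch(1) would follow the path straight into the repo internals;
# git apply should refuse it instead of writing into .git/.
(cd victim && git apply ../evil.patch) 2>/dev/null || echo 'patch refused'

grep -q cowsay victim/.git/config || echo 'config untouched'
```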


Huh. I was going to say how git-apply(1) probably uses patch(1) internally after doing some preprocessing on the patches it's given, but, on checking it, it seems it doesn't.

Using strings(1) on the git executable turns up "%s/patch", which made me suspect the executable path was interpolated, but `git diff ...@~ | strace -fe trace=execve git apply` on an arbitrary repo turned up nothing, and I couldn't find "%s/patch" in git's source.



It seems like it's interpolated into something like .git/sequencer/patch, then.


Assuming git patch doesn't simply exec patch. (It doesn't, but half the comments here are people saying they want all their tools to run all the other tools because unix.)


  ______________________
 < that looks dangerous >
  ----------------------
         \   ^__^
          \  (oo)\_______
             (__)\       )\/\
                 ||----w |
                 ||     ||


Did git write its own custom implementation of patch from scratch? Or does it use `patch` internally?


They have their own implementation.


The main place where this would be an issue is if you have a server that is not going to build or run code, but applies patches to source and outputs the result.

So a code review website could be affected.


That advice has been passed around for a good long time, probably almost as long as patch itself.


Sure, except you could (for example) steal Linus's private ssh keys with this.


You could build the binary on your machine and run it inside a VM.


The patch could also modify the Makefile (or similar) to run arbitrary commands when the build is started.


To be honest, Qubes seems like a better and better idea all the time.


I don’t have ed installed (not a conscious decision), which prevents this from working:

  % patch < evil.patch
  sh: 1: ed: not found
  patch: ed FAILED

patch works just fine for me, though, so ed is not required.


Patch applies files created with diff. Diff has 4 different output formats. One of those formats is basically a batch script with ed instructions.

For patch to work correctly you have to allow executing ed commands (internally or by spawning ed), but for security reasons you had better not let ed execute yet another program.
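A quick way to see the formats side by side (file names made up; `|| true` because diff exits non-zero when the files differ):

```shell
printf 'one\ntwo\n' > old.txt
printf 'one\n2\n'   > new.txt

echo '== normal ==';  diff    old.txt new.txt || true
echo '== context =='; diff -c old.txt new.txt || true
echo '== unified =='; diff -u old.txt new.txt || true
echo '== ed script =='
diff -e old.txt new.txt || true   # literally ed commands: "2c", the new text, "."
```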


What distribution?

(I don't think I've ever used a system that didn't have ed installed by default.)


Arch Linux doesn't contain ed in their base package group.


Debian doesn't install ed by default, although it did in the past: https://bugs.debian.org/416585


Arch and Debian and Alpine Linux, to my knowledge.

Which is sad, because POSIX technically requires ed to be installed, so strictly speaking those distros aren't fully POSIX compliant.


My install of Gentoo Linux doesn't have ed.

It is available in the packages, of course.


My arch/ubuntu/debian/rh hosts don't have it preinstalled. It's installed by default in an aLinux instance, though.



Off topic, and forgive my naivety & ignorance, but I really do not understand the highly specific nature of this Principle / Law, as it is on wiki:

"Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can."

... was Mail just the example at the time? And is this basically just a generic reference to feature/scope creep? I just don't get the highly specific inclusion of "reading mail" as something that all programs expand towards.


It's a joke about feature creep. Of course, most programs don't and shouldn't read mail.

The choice of this particular feature was not accidental. JWZ was a developer of Netscape Mail:

https://www.jwz.org/hacks/


Copy/paste the link, do not click.


Why?


It detects HN as a referer, and redirects to an unrelated image.


Oh weird. I didn't get the redirect the first time I had clicked the link.


Try it and you'll see why. It's not bad.


These days it would be browsing and displaying web pages, I suspect: "Every program attempts to expand until it can browse the web. Those programs which cannot so expand are replaced by ones which can."


“every program attempts to expand until it includes a full copy of chromium...”


Haha. "Every program attempts to expand until it can compile and run a copy of itself"


Mail was particularly prone to this because, in the days when mail was usually locally delivered to the UNIX system you'd also use for running interactive programs, adding the ability for a program to read mail was just a fairly simple matter of parsing the user's mbox file.

Most terminal mode IRC clients will at least tell you when you have new (locally-delivered) mail, for example.


It's a bit of both. I think the joke was mostly driven by Emacs, which - due to how incredibly flexible the Elisp interpreter is - started gaining plugins to do the most absurd things. There is, for example, a full "web browser" mode, a full-blown email client, even an implementation of the Snake game.


The funny thing is, Emacs does a lot of those things better than the dedicated applications. It seems that anything that involves text or textual representations can be done better in Emacs, and every feature you import composes nicely and consistently with the others, leading to more speed and more power.


You can have my nyan cat modeline when you pry it from my cold, dead fingers.


You're welcome! :).


Haha, I'll confess that as a Vim user I was a bit jealous when I saw that. The Vimscript implementations are subpar.


Synecdoche. Mail stands in for the general class of popular features.


Eep. Seems like patch should be running red, not ed!

Amazing how an ancient vuln can still be found hidden in plain sight.


Does anyone know if the GNU tools, the coreutils, have been through security audits and fuzzing? They're the most used tools on the planet, I'd say, and relying on the '90s "many eyes make all bugs shallow" doesn't seem to cut it anymore...


I'm the GNU coreutils maintainer and have fuzzed them extensively. For example a recent bad bug in TZ handling was found using AFL and fixed by: http://git.sv.gnu.org/gitweb/?p=gnulib.git;a=commitdiff;h=94...

We put a lot of effort into the test suite which makes it easy for others to test various experimental security checkers. This has been detailed in the "third party testing" section at: http://www.pixelbeat.org/docs/coreutils-testing.html


Wow, pretty awesome to see such a QA pipeline in open source projects. I'm used to this in expensive R&D pipelines in the telco space [0]. Is there a reason you're not using oss-fuzz [1]?

I recently did a lot of work using AFL-fast[2] (poking mostly Perl & Lua and crappy IoT products). My experience is that AFL-fast yielded far better results (in a fraction of the time) when compared to AFL.

[0] http://www.syssec-project.eu/m/page-media/3/johansson_tfuzz_...

[1] https://github.com/google/oss-fuzz

[2] https://github.com/mboehme/aflfast


aflfast is mentioned in my second link and was the most used fuzzing implementation to find bugs in coreutils.

We've had a quick look at using oss-fuzz, which will need a bit of work since it's more suited to libraries rather than standalone utils.


My bad, another commenter pointed out that GNU Patch is not part of coreutils. The GNU umbrella is quite wide these days...


The «Fuzz Revisited: A Re-examination of the Reliability of UNIX Utilities and Services» paper in 1995 reported on exactly that. It made a bit of a splash at the time because the GNU command-line tools did noticeably better than the commercial ones.

https://minds.wisconsin.edu/bitstream/handle/1793/59964/TR12...


Could be, but that was 23 years ago. That report would now be graduating from college :)


Google has an ongoing project for fuzzing popular open source software: https://testing.googleblog.com/2016/12/announcing-oss-fuzz-c...

Looks like at least _some_ GNU tools are covered: https://github.com/google/oss-fuzz/tree/master/projects/gnut...


Audits and fuzzing don't really make sense if it's a documented feature. Also the many eyeballs principle seems to hold given that the Debian folks found the bug.

Having said that, a lot of the core command line tools (not just the GNU coreutils) that are run against untrusted input could certainly be improved. I'd prefer the OpenBSD method though - if it can't exec then I don't need to worry about the bugs the auditor/fuzzer didn't find.


The earliest GNU Patch I could find was 2.5.4, from 1999: https://ftp.gnu.org/pub/gnu/patch/

And that’s 2.5.4; 1.0.0 could have been released in 1985 for all we know.

I guess we have many eyeballs, yet not enough still :)


For the avoidance of doubt, GNU patch is not part of coreutils.


FreeBSD's patch runs red, a change we picked up from DragonFly BSD. That said OpenBSD's change to internally handle ed-style diffs is the best approach and I expect FreeBSD will migrate to that.


Similarly, I recall once being denied root on a host while trying to help out with something, and then asking if I could at least have sudo less to look at things... which was granted. >:D (Then I fixed the problem.)


Always nice to know more about the system than those nominally in charge of the system.


"patch will attempt to determine the type of the diff listing, unless over-ruled by a -c, -e, -n, or -u option. Context diffs (old-style, new-style, and unified) and normal diffs are applied directly by the patch program itself, whereas ed diffs are simply fed to the ed(1) editor via a pipe."

According to this, context diffs are not sent to ed.

Is the author suggesting that patch can be fooled to interpret a context diff as an ed diff?

There's a file called pch.c with an excessive amount of parsing and "intuit" functions like intuit_diff_type().

patch has anthropomorphised progress and error messages and tries to "guess".

However, I am only a dumb end user. I should not question what I do not understand. It's all safe, I'm sure.


Does this mean a "git pull" is vulnerable to this, or does git not rely on patch?


I believe git uses its own implementation for applying deltas sucked down by `git pull`

    https://github.com/git/git/blob/master/patch-delta.c
There's also `git apply` that applies user supplied patchfiles but if I'm reading this correctly it also uses an internal implementation:

    https://github.com/git/git/blob/master/apply.c


git's delta pack format, which is used on the wire, is not based on patch. This would only affect "git apply" and derivatives like "git am".


Does `git apply` actually farm out to /usr/bin/patch? I kind of assumed it reimplemented the patching itself.


You're correct, git has its own patch implementation (plus, it's a bit stricter than the patch program, because it doesn't have to deal with various patch formats--only "unified" patches).



So how long will it be before we have a Monero miner implementation in ed?


Why does patch call ed? What did anyone ever actually use this for?


The `patch` program is expected to be able to apply any patchfile syntax created by `diff`. Once upon a time, the default behavior of diff was to spit out an `ed` script (nowadays it's behind the `-e` flag). So, to apply that kind of patchfile, it invokes `ed`.
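For flavor, the pre-patch workflow that format was designed for (a sketch with made-up file names; guarded since ed isn't always present):

```shell
printf 'alpha\nbeta\n' > old.txt
printf 'alpha\nBETA\n' > new.txt

# diff -e emits ed commands but no write command, so append one.
diff -e old.txt new.txt > changes.ed || true
printf 'w\n' >> changes.ed

if command -v ed >/dev/null 2>&1; then
    # Replay the edits: old.txt is rewritten to match new.txt.
    ed -s old.txt < changes.ed
fi
```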


Edit: I'm replacing my somewhat vague reply with a more correct answer:

https://en.wikipedia.org/wiki/Diff_utility#Edit_script


Unix principle of small tools chained together.


Text editors are shells, remember that if your program executes a text editor.


Here's the upstream bug report: https://savannah.gnu.org/bugs/index.php?53566


This hole needs a fancy name. Patchowned? Pwneded?


I was hoping "bangpatch" (see URL). It's descriptive, and it's also a sly reference to the old-school days of UUCP.


PatchEd


No, just no. Naming vulns is a theme that needs to go away.


Agreed. Not every mundane bug or misfeature needs its own website or 20-page analysis. Fix things and move on to the next thing that needs fixing. There is an endless supply of broken things.


To be fair, I can appreciate the technical analysis of a security issue. The marketing is useless and just plain stupid.


I disagree; it's the only way to make the public aware of the issue and get the corporate hierarchy to expedite OS patching. Not that it's necessary in this case.


The Patch Snatch Batch


This is, of course, a vulnerability. But let's not get too crazy. Once you are applying a patch, either you trust the patch or you don't.

If you blindly apply patches, you will be at risk as soon as you run or try to compile the patched code. This attack is just a bit faster, because it happens as soon as you apply the patch.


Patch isn't only used for source code. I wouldn't have expected any risk of malicious code execution if I was patching documentation.


Even for source, you may want to review the patch after applying it.

For example, dpkg-source applies patches when you unpack a source package. I don't think anybody expects code execution when unpacking stuff, even when that stuff is untrusted.


Suppose you're applying patches to something other than software?


What if you're trying to apply a patch to a .txt document?


My reaction whenever I read things like the OP: https://blog.codinghorror.com/assets/images/codinghorror-app...


The whole (satirical) website https://holeybeep.ninja/ is really great.


Real programmers use ed.


Disagree :)

  $ cat >a.out
EDIT: maybe not

https://xkcd.com/378/


Does this mean that it would be dangerous to try to patch patch?


This is where the Unix philosophy fails. Patch should be editing files itself, not via an external program.


This isn't the Unix philosophy at work. This is embedding a DSL. Microsoft Office has the same exact problems, does it not? No Unix philosophy there.


Torvalds' argument against microkernels: 'But, performance!'.

Result: QubesOS

...


So, who is this Rachel, and why has her blog suddenly exploded on HN? Both old and new post have been filling the front page for about 2 weeks.


The blog has been appearing on HN for many years. I assume she just started publishing more lately.


Yep. Getting out of a 'real job' released a whole bunch of cycles and there are a bunch of stories waiting to be told. I don't submit 'em ... I just write 'em.


I would also like to know the answer to this question. I enjoy the posts, generally, but I don't see why the sudden surge; there aren't even any ads (which is what I'd usually suspect)


Popularity breeds popularity: user sees link on HN, browses around on the site, finds something else interesting, also submits it to HN, etc.


Try filtering around the discussions, she did mention having more time for writing recently.


> This came up as part of the discussion on the "beep exploit" yesterday. I found it buried in the HN /new queue as a simple link to the Debian bug tracker.

I was expecting this post to go through the roof and was rather surprised it gained zero traction, lol. Nobody using the GNU toolchain anymore on HN or it just got drowned out idk ... https://news.ycombinator.com/item?id=16766577



