On Linux, 'less' can probably get you owned (seclists.org)
302 points by adamnemecek 888 days ago | 130 comments

I'm also not sure if the automation actually scratches any real itch - I doubt that people try to run 'less' on CD images or 'ar' archives when knowingly working with files of that sort.

This is a trend not uncommon in GNU software -- features added by someone who at some point thought it was a good idea, but probably didn't even bother using them much beyond an initial test to see that they are somewhat working. Most users likely think of 'less' as nothing more than a bidirectional version of 'more', and not as the "file viewer that attempts to do everything" that it seems to actually be. It's also a little reminiscent of ShellShock.

I'll be in the other camp. At least for directories, I like whatever magic distros do so that I get a file listing when using less on a directory, and can then less an actual file.

Features are nice - and ultimately in this scenario you'd just get owned 5 mins later when your archiving tool would get hacked directly, instead of through less.

The regular user won't use less to check the binary content for malware anyway...

lesspipe is not a GNU project...

Yes, the responsibility here lies squarely with the distros that set LESSOPEN and LESSCLOSE.
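
A quick way to see whether your environment is affected is to inspect those variables. This is just a sketch using standard shell; LESSOPEN and LESSCLOSE are the actual variables less consults to decide whether to run an input preprocessor (e.g. lesspipe) on every file you view:

```shell
# Show whether a preprocessor is wired up (Debian/Ubuntu set this via
# lesspipe; Red Hat derivatives via lesspipe.sh):
echo "LESSOPEN=${LESSOPEN:-<unset>}"
echo "LESSCLOSE=${LESSCLOSE:-<unset>}"

# Opt out for the current session by unsetting both:
unset LESSOPEN LESSCLOSE
echo "LESSOPEN=${LESSOPEN:-<unset>}"   # now prints: LESSOPEN=<unset>
```

You can also disable it for a single invocation with `LESSOPEN= less file`, which is handy when viewing a file you don't trust.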

This sounds more like a problem with gluing a bunch of "stuff" together with shell scripts that are not well thought out than a problem with less or GNU in general having untested features.

To be fair, the usefulness of a tool is often proportional to your power to also do really dumb things with it.

See: hdparm

Oh, looks like it's been updated; the man page no longer describes the --fwdownload flag as "HAS NEVER BEEN PROVEN TO WORK and will probably destroy both the drive and all data on it".

Haha! I really like this as a quote.

This is probably a good reminder of something else: using selinux/apparmor/tomoyo/... can save you from many situations where you'd be exploited otherwise. For example as a response to this you can set a policy on lesspipe and all its children so that they cannot access the internet or write outside of temp directories.
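
As a sketch of what that could look like with AppArmor (the profile path, abstraction names, and rules here are illustrative, not a tested policy), a profile for lesspipe might deny all network access and restrict writes to temp directories:

```
# /etc/apparmor.d/usr.bin.lesspipe -- illustrative sketch, not a tested policy
#include <tunables/global>

/usr/bin/lesspipe {
  #include <abstractions/base>

  # no sockets of any kind
  deny network,

  # read anything, write only under /tmp
  /** r,
  owner /tmp/** rw,

  # helpers (tar, ar, isoinfo, ...) run confined under this same profile
  /{usr/,}bin/* rix,
}
```

The `rix` rule is what makes the confinement cover "all its children": spawned helpers inherit the profile rather than escaping it.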

Whatever library lesspipe uses, you're safe as long as the output terminal and your kernel are safe.

This is a good point, and I fully endorse the sentiment, but please remember that for basically all modern distros shipping SELinux as default (not counting android), the default is for users to be unconfined.

This means that in a lot of cases, SELinux won't save you from this, unless you've decided to confine your users (which is by no means impossible - I do so - but requires a bit more knowledge of SELinux than "usual" SELinux tasks)
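
For reference, on Fedora/RHEL-style systems the switch to confined users is done with semanage. This is a sketch, not a recipe to run blindly; a wrong login mapping can lock you out, so check your distro's documentation first:

```
# Map default Linux logins to the confined user_u SELinux user
# instead of unconfined_u:
semanage login -m -s user_u -r s0 __default__

# Inspect the current mappings:
semanage login -l
```

You'd typically keep a separate, still-unconfined admin mapping until you've verified the confined setup works for your users.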

> basically all modern distros shipping SELinux as default.

Not on Arch Linux or Gentoo, and apparently not on Debian.

I also remember that the default settings were not sane on older versions of Fedora (this may have since changed): sshd would get killed by selinux because of some "violation". I never managed to understand what it was, and since sshd is kind of critical, I disabled SELinux so I could be sure not to lose access to that machine.

The added problem regarding SELinux is that most users have no idea how it works, and there's no "short" explanation for it. Even after reading the entire Wikipedia article, I'm left scratching my head wondering how it works or how one would use it.

OTOH, file modes, for example, are explained in a single paragraph and are extremely easy to understand.

This should honestly be part of every modern Linux system. It would go a long way toward creating the kind of reasonably secure system people are looking for.

I would envision this to be useful if there were a pre-installed local database of common use cases for each utility, and the user invoking the utility could interactively decide whether to allow specific exceptions, with a big fat warning.

Then, it doesn't matter whether someone exploits your tool, they're only ever allowed to utilize the tool for its intended use cases.

Yet when Apple makes a sandbox for apps -- which is basically this design, yet usable by people who don't understand software construction or libs -- people shriek. A sandbox is a security boon for the vast majority of people. And I think this is the only way to get reasonable security; we've been banging on this "reduce bugs" plan for at least a decade with much less progress than one would hope to show for it.

People shriek because Apple isn't giving the device owner any option to get around the security if they want to.

Apple aren't using that model just for security, but also so they can tightly control the platform and restrict what the device owner can do on it.

on my mac:

   apple -> system preferences
   security + privacy
   General tab
   Under "allow apps downloaded from:"
   * Anywhere
As for iOS, if you don't believe that users would click "yes" to whatever dialogue was required to make an app run, you simply don't interact with users. Allowing users to opt-out defeats the purpose. For example, my SO's mother regularly gets her Gmail hacked because she will respond to any Google-skinned input box asking for her password. The only way to secure the majority of users is to not allow them, through their actions, to un-secure themselves.

Why is it ok on mac? Because many computer users are still dependent on non-appstore distributed software. Particularly developers. Nobody has evolved that dependency on ios, so it's not necessary.

> As for iOS, if you don't believe that users would click "yes" to whatever dialogue was required to make an app run, you simply don't interact with users. Allowing users to opt-out defeats the purpose.

It does not require a "click here to get owned" level of workaround. I'm perfectly fine with needing to boot into the equivalent of single-user mode and entering some security key. But if I want to break the security model on a device I own, I should be able to.

> Why is it ok on mac? Because many computer users are still dependent on non-appstore distributed software. Particularly developers. Nobody has evolved that dependency on ios, so it's not necessary.

That is circular reasoning. The "dependency" doesn't exist on iOS because Apple tries very hard to prevent non-appstore distribution being a viable avenue for app developers.

> People shriek because Apple isn't giving the device owner any option to get around the security if they want to.

You are misinformed. Right click -> Open instead of double click overrides my current signing requirements and executes the app anyway.

Where do I right click on my iPhone?

I was talking about iOS. I've never heard anyone "shriek" about this issue in regard to OS X.

I think that Apple is, on the whole, doing the right thing. But it's unfortunate that these three things (kind of) coincide:

* The Mac App Store's business model, i.e. no upgrade pricing, no trial versions; content-wise, nothing that you wouldn't sell in Disneyland

* Paying $99/yr for a developer account, even for free apps

* App sandboxing

And even then, this still doesn't mean that all apps on the App Store are safe. Because you used to be able to submit unsandboxed apps; as far as I know, they have never been removed? Sigh...

But at least many of the currently un-sandboxed apps are moving to the browser (MS Office & Skype being my biggest worries).

I totally agree.

But here's the thing: if you want a secure platform usable by the majority of people, the best choice (by a lot) is ios, followed by osx. The app store is certainly imperfect, but hopefully that will get tightened up over the next couple years.

Relatives who've been hacked ask for how not to get hacked, and my best answer is the above: use ios, particularly for banking. For a laptop, use osx and buy programs either from well known brands (adobe, microsoft) or buy app-store apps.

It's ironic that even Android now has SELinux enforced by default, yet most Linux distros do not, even though they have had support for it for a decade.

The reason for this is that SELinux is a nightmare to configure; lots of people's expectations break when dealing with it, and they usually don't know how to fix things without just disabling SELinux entirely. On Android, people's expectations have been set from the beginning to run into security restrictions.

This is what people were saying not too long ago on Windows when we'd complain that the apps didn't work if the user running them wasn't a local admin. Now you'd be hard pressed to find non-legacy software that doesn't work as a non-admin. In the commercial world, pressure can be brought down from on high - executives and IT purchasers discriminating against shitty software. What about FOSS? If the community has a 'meh' attitude with SELinux then things won't be easily SELinux compatible because devs don't give a shit. The same way most devs don't give a shit about most security related topics.

I'm really worried that Linux has become a liability. The lax security practices of the culture it's part of are obvious. What now? Do we migrate to OpenBSD? I've been considering this at work. It just seems that the culture surrounding OpenBSD takes security more seriously.

The idea that we can just put an application on the internet sounds dangerous to me. If it isn't hardened by something like AppArmor/SELinux and also behind some kind of application-level firewall (or at least Snort), then we're just asking for trouble. Something has to change. The status quo is failing us. How many high-priority patches have there been in the FOSS world this year?

Sadly, I worry more about my linux machines than my windows machines.

That's much, much less true than it used to be. These days there are great tools for managing and modifying SELinux policy, and much better reporting of what was denied, when and why, and how to fix it.

The "SELinux is too hard for mortals" thing is a relic from the early days of its inclusion in Linux, and turning this attitude around is important.

I disagree. The tools are awful and terribly slow to execute at best. Then you occasionally get shafted with a relabel.

Can you be more specific? There are lots of different tools available - there might be some better options than what you're using. And while tools that change things (e.g. setting booleans, changing context rules, etc.) are unlikely to be instant in the near future, as the policy needs to be re-compiled, this has been improved, particularly for booleans. You can also batch updates together, which is a much nicer experience if you're trying to set a bunch of things at once, e.g.:

    semanage -i <( echo -e "boolean -m --on httpd_use_nfs\n boolean -m --on httpd_use_sasl")

That said, "this admin command I rarely use takes 30s to run" (30s seems to be about the average on RHEL 6/7) is to me an odd reason to try to avoid an important security feature.

I don't recall the last time I had to do a full relabel on a production system. Not saying it hasn't happened, but I can't recall an instance.

Overall, I spend less time managing SELinux (and that includes the custom policies I maintain) than I do managing IPTables. It's really not the nightmare it's made out to be.

Depends how well integrated it is into the system. I tried to add any MAC to my Arch and discovered that:

1. Doing everything from scratch for Tomoyo, including plugging it into the grub config and creating your own policies, is easier than even getting proper tools for SELinux.

2. Tomoyo behaves like AppArmor (policies on names, not inodes), but can do more (ioctls, devices, more granular network controls).

3. Nobody uses Tomoyo, and I really don't get why...

Yeah, it's going to be an uphill struggle on any distro that it isn't already well integrated into - there's a lot of effort that goes into making SELinux in fedora/RHEL/etc JustWork(tm). Duplicating this on a distro without tight SELinux integration is going to be hard - but I tend to feel that if you use arch, linux from scratch or other similar distro, it's not really fair to blame SELinux ;)

It's not that it's too hard for mortals, but that it provides a level of resistance people are not used to, and don't know how to deal with, and where the easiest solution is to turn it off.

Whereas on Android, people expect the barrier, and developers don't expect to be able to disable it.

What "most Linux distros" are you talking about?

Ubuntu has Mandatory Access Control in the form of AppArmor (https://wiki.ubuntu.com/AppArmor). It's switched on with a set of policies on the desktop (do sudo apparmor_status to see) and on the server.

On the mobile side a lot of work has been done to confine individual applications in a manner similar to Android. That's future work that could go into mainstream Linux eventually.

On the traditional server side, both Red Hat and SUSE have MAC on by default - for them it's SELinux, I believe.

You can't opt-out of Apple's sandbox.

Apparently you can opt-out at some level.

I’m running Yosemite 10.10.1; according to Activity Monitor, Dropbox, VimR and Terminal aren’t sandboxed.

Of course, many low-level processes aren’t sandboxed either.

If only we could abandon c and c++. That would make most security bugs go away. Source: Mozilla

The problem described in this post is beyond just a buffer overrun. It's about pulling in huge numbers of libraries for a few conveniences.

Rust can't save you from that; pulling in libraries makes it way too easy to increase the surface area of your code by several orders of magnitude. Even assuming rust offers an order of magnitude better security than C/C++, it still can't keep up. Sure, it mitigates things a little, but only until the next whiz-bang library gets linked in (even if it's written in rust, it will have some flaws).

In other words, the security of a program is based on the absolute number of bugs in it; but a programming language can't do much more than reduce the ratio of bugs to program size.

The OS is in a much better position to address the problem.

Even including a standard C/C++ library can contribute unknown bugs. I have a prototype system on which the software crashed about once every 3 days until I updated the C library.

It would make most security bugs caused by unsafe-by-default memory management go away, and those do account for a high percentage of security bugs, but logic bugs also cause security issues. See for instance SQL Injection attacks.

Yes, but I can't help but imagine how an OS written in Rust would look. Probably a lot better than what we have today. A combination of Rust and D would probably be capable of doing everything C and C++ can, and much better.

Let the language wars commence!

In reality, there is nothing wrong with C++, particularly not modern C++ (since C++11), as the approaches laid out in Stroustrup's book discourage writing C++ in the old-fashioned "let's throw pointers around!" approach.

As with ANY language, if you write it badly, bad things will happen. The language choice doesn't mean bug-free code.

> In reality, there is nothing wrong with C++, particularly not modern C++ (since C++11), as the approaches laid out in Stroustrup's book discourage writing C++ in the old-fashioned "let's throw pointers around!" approach.

So, in essence this would mean to many that Rust's safety focus happens too late for it to be a big enough benefit to justify giving up the investment in modern C++? As in, memory safety is a solved problem with modern C++?

Yes, modern C++ is far safer than the old style of passing pointers all over the place and wondering who was deleting them (not that I ever did that - you could pass const pointers to stop others deleting them anyway; it was only a problem if you didn't obey const-correctness and make certain classes "container" classes).

The addition of move semantics and the use of references everywhere make pointers unnecessary for the most part. You can use STL containers to put your items in, so you shouldn't see raw "new" or "delete" operations in your own code very much; this is particularly true where you define your own move operators and move constructors.

It's a difficult habit to break though!

See Stroustrup's "The C++ Programming Language, Fourth Edition" (the blue book), section 3.3.3 "Resource Management" and "A Container", where in this early part of the book Stroustrup explicitly directs readers to avoid "naked" new and delete operations and to "use resource handles and RAII to manage resources".

See std::move to force moving where it isn't clear that you're moving things.

Further reading: http://en.cppreference.com/w/cpp/language/move_operator http://stroustrup.com/C++11FAQ.html#default2

The sad reality that I came to realize from this year's CppCon is that most C++ developers out there will just keep on writing pre-C++98 code, due to multiple reasons like style guides, company policy, old code bases and so on.

Only greenfield projects can benefit from modern C++, and there are very few out there.

After reading the updated version of Effective C++, I doubt the average C++ developer will be able to deal with a mixed codebase of pre-C++98, C++98, C++03, C++11 and C++14.

This is very true. I am writing new software for my dayjob but I am stuck with VC2010 on Windows. Despite having Xcode under Mac OS, I have to write old-fashioned C++ as VC2010 doesn't support anything new!

But it helps to learn the new stuff and apply it within my own projects as best I can.

> See for instance SQL Injection attacks.

Or more recently and widely publicised, ShellShock.

> ...but logic bugs also cause security issues. See for instance SQL Injection attacks.

Though it seems that some languages, or at least how they are used, are more disciplined when it comes to treating things like SQL queries/commands; that they don't treat them as plain text, and instead assign some kind of structure/typing to the query command. Or don't they? I don't really know how SQL injections are handled in such languages, but I do know that some like to be more principled when it comes to SQL queries/commands in the source code text; like using macros to ensure that an SQL query doesn't have a typo in it. Maybe that kind of practice extends to validating user input.

Strings are a big security hole that doesn't get nearly enough attention.

Conceptually there are a ton of different string types. File names, path names, SQL query templates, SQL query strings, URLs, URL query parameters, command line arguments, full command lines, human-readable text.... Yet just about everything gloms them together into one "string" type. Even APIs that allow for structured construction of SQL queries tend to rely on the programmer not to put arbitrary data in the template bits. These really should all be completely separate types requiring conversion.

"people shriek"

Can you expand on that?

there's a search function on hn; most combinations of osx, sandbox, app, apple will yield results

just one example: https://news.ycombinator.com/item?id=3191021

Absolutely agree! It's even more true for closed-source software like Skype - being able to prevent it from accessing data it doesn't need to.

I know there are many alternatives to Skype, but there are people that have to use it from time to time for professional reasons.

I started doing this for Skype on Ubuntu 14.04 - but have to admit that I quickly ran out of steam. Need to pick up where I ended.

Isn't it impossible to secure Skype because of its use of X11? I've found it impossible to run Skype under untrusted X11 (so that it doesn't get full control over your X display), and running it in a nested X server is probably too slow for video. (Running it in a VM didn't work for me either because of audio and video performance. I eventually decided I couldn't use Skype except on a real machine dedicated to running it.)

If you have a smartphone you could use that as your Skype machine. I have a cheap Chinese Android phone bought a year ago and it can take video calls easily. The point being that in most cases you don't have much choice but to give up security on your phone anyway, so you may as well put all the spyware in one place.

I agree that it is impossible to completely secure Skype because of X11. And I need as well for some calls the screen sharing feature. So for me this is a wanted feature.

But I would appreciate it if I knew that Skype is _not_ able to access my confidential files through the file system. As a bonus it would be great if I could also forbid Skype from accessing the local network and just allow it to talk through the default gateway.

I then would start Skype on the laptop only when I have to speak with people. During these teleconferences I anyway have to clean my screen. After the call I can stop the skype process but keep being reachable through skype on phone.

Would this scenario be possible or is there still a security risk for my data?

It is my understanding that once an app has access to full X11, it can do everything you can do with your keyboard and mouse[1]. Like open terminals, cat files, it won't even have to parse the result from the screen if xsel or a text editor is installed since it will be able to copy their content from the clipboard. Or it could run wget or netcat or similar to download a binary that will connect to a server to upload your files.

I suppose that normally you're able to see these activities. But the app could observe periods of inactivity so as to wait until it thinks you're away, then quickly open a terminal, enter the command to wget then start the binary in the background and close the terminal again; that might be like 1/10 second of flickering. Or it might even be possible for it to act similar to a window manager and open the window where it wants, like off-screen or behind other windows, so you wouldn't see anything.

Since it directs the window manager or panel program to open the terminal instead of starting the binary directly, any SELinux properties that say that commands exec'ed by Skype don't have the permission to do stuff will not trigger. You can't stop the panel from opening programs of course unless you're fine with preventing yourself from doing the same. (I don't think SELinux is able to track which X connection leads a panel app to start a binary; that would require it to be able to track code execution and understanding the logic of the program, which seems too complex to be feasible. Cooperation between panel and SELinux might solve this. But then this seems more like a cat&mouse game, will you not have missed other ways in which a program can be started? What about terminals that you opened, they would have to cooperate as well. In the end what you need is a replacement for X11.)

[1] It is possible to inject keyboard and mouse events into the X11 event delivery system, although it might be possible for programs to differentiate between those and real keyboard/mouse events (I know that the events are tagged as artificial or some such, but can programs ignore them without ill effects, and are they doing it?). Here my knowledge is getting spotty, so perhaps it is still possible to get a (somewhat?) more secure situation than what I've depicted above. You'll have to ask someone who knows better. In any case it seems to me that entities that are willing and able to use Skype to break into people's computers will also have way more funding than you to find any obscure ways in which X11 can be used to break out. X11 (at least its normal, trusted mode of operation (and as I said, Skype won't run in untrusted mode)) isn't designed to provide security.

Thanks for your long post. That is really scary! I will still try to get Skype constrained with apparmor. At least I will know that it will not siphon the data through the file interface.

I am wondering if Wayland will be more secure by design. I hope so, since it is highly awaited as something that will save us from X11.

Denying applications the ability to send faked input events is one of the design goals of Wayland. This makes some things more difficult, e.g. virtual keyboards.


You can run Skype inside a docker container: https://wiki.archlinux.org/index.php/skype#Docker

"It is possible to use Skype in a safe Docker container. The program is then run through an SSH tunnel with X11 forwarding and sound is heard through PulseAudio's Network Server."


1. how is video speed through the ssh tunnel, then? Unsolved, I guess.

2. as I said, Skype won't do untrusted X11 (at least the version I tried didn't). This means that you need to tunnel it with trusted X11 access. Which means, no security (it will log your passwords and open and use X terminals if it wants to). Which makes the whole exercise moot.

Other suggestions on this page are suffering from the same flaw. It seems all the writers of that page have never heard that X11 doesn't provide security. Also, they advise to use xhost. In the end you're worse off when following the recipes on this page than you were before.

I think this approach would still fail if the program is not behaving as expected by the user.

For instance, I didn't know until reading this article that 'less' on ubuntu went through lesspipe, but the people maintaining the tools know that. If for some legitimate use case lesspipe needed access to the internet, the local database could allow that behavior without any warning, but as a user I would still be dumbfounded that a 'less' command ends up generating network connections.

This research lcamtuf has been doing with AFL is really important.

One thing it is proving (exactly as a lot of people expected) is that we don't have any idea where security bugs (think the next Heartbleed or Shellshock) are going to show up, we have no idea how good the software out there is (meaning: it is bad), and most of the time we don't even know what's running on our own boxes.

If these basic things we use hundreds of times a day (less, strings) have huge flaws, we have a lot of work ahead of us.

They will never go away while people insist on using C and its derivatives to write userspace code.

This is covered in the fuzzing project's FAQ https://fuzzing-project.org/faq.html

"if you start a new software project it is a good idea to avoid C right from the start. however ... most major software components we use today [are] written in C. Rewriting things from scratch is hard, compared to that finding bugs with fuzzing is the low hanging fruit."

While something like Go has good performance, writing low-level tools in interpreted languages (Python, Ruby, etc) means you would always have the performance overhead of firing up the interpreter for everything. A hypothetical non-C system would feel sluggish compared to what we have now.

Who said anything about interpreted languages?

Modula-2, Modula-3, Go, D, Haskell, OCaml, Oberon, Ada, Extended Pascal, Object Pascal, C#, Java, ...

The list of languages with native code compilers is quite big, no need for interpreters.

The problem with fuzzing is that, like the use of safer languages, it just won't get adopted unless forced down the throat of developers.

This is why Google, Microsoft, and now Apple are adopting an "our way or the highway" stance on memory-safe languages for their platforms. After all, proprietary APIs would already be enough for platform lock-in.

I do agree no one is going to rewrite the huge amount of code out there. However, it is already good enough if new code gets written in something else.

It's not just about native vs interpreted though. Most of those languages use garbage collection for memory allocation which is still going to be an unacceptable performance hit for most of the domains where C and C++ are still in use.

Unless one is writing device drivers, kernel modules, signal processing algorithms or anything else that needs to fit in 16ms, there is hardly any reason to use C or C++.

Languages like Object Pascal, Modula-2 and Ada that don't use garbage collection and many that do, like Modula-3 provide much more control about value types and GC behavior than what Java and C# are known for.

Finally many of the garbage collection jitter issues are caused by developers clueless about writing GC friendly algorithms or simply how to use a profiler.

What about Rust from Mozilla?

Shellshock didn't have anything to do with C. gotofail looked like an SCM merging bug, which could have happened in any language. Security issues in the Rails world are often caused by libraries that do more than they should, just like less does (YAML comes to mind).

Yes, it would be nice to get rid of one class of errors. But proper sandboxing would get rid of them all.

Yeah those are the famous ones, but what's much more likely is that you're not going to necessarily be working on a project with as wide a footprint as any of those, and you're probably not going to create a bug that's quite as harmful as any of those either, but you're still going to cause memory-management problems for your project if you start out writing C.

No one's saying it's impossible to write memory-safe C code, it's just exponentially harder, and when you're not 100% sure of your use case and exact performance needs, you're going to spend a lot more time than you want to making C do only what you want and nothing else.

No. Even a perfectly sandboxed Apache could be subverted to send modified (malicious) data to other http clients. Sandboxes are great, but just because something's sandboxed doesn't mean all security issues are now reduced to a DoS.

While true, out of bounds errors, pointer misuse and undefined behaviors are the top exploits of any C code base, removing them would already be a big improvement in security.

You're absolutely right. Actually, this bug doesn't have anything to do with C either. You could probably get less to delegate to some vulnerable shell script utilities. You just have to find one case of "rm -- $INPUT" where someone left out the --, and put in an -rf as input.
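
As a minimal sketch of that class of bug (plain POSIX shell; the directory and "attacker input" names are made up for illustration):

```shell
#!/bin/sh
# Sketch of the classic "rm $INPUT" bug: without "--" and without quoting,
# word-splitting turns an attacker-controlled file name into flags.
set -u
dir=$(mktemp -d)
cd "$dir"

mkdir precious
INPUT="-rf precious"                  # "file name" chosen by an attacker

rm $INPUT 2>/dev/null || true         # vulnerable: unquoted, no "--"
[ -d precious ] && echo "intact" || echo "gone"    # prints: gone

mkdir precious
rm -- "$INPUT" 2>/dev/null || true    # safe: quoted and options terminated
[ -d precious ] && echo "intact" || echo "gone"    # prints: intact
```

Note that both fixes matter: `rm -- $INPUT` (unquoted) would still split the input into two words and delete `precious`, while quoting without `--` leaves the input able to masquerade as an option.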

It's good to keep spreading awareness and AFL is cool. As for proving new things... fuzzing has been actively used for security research for 15+ years, and by now it's common knowledge that C/C++ programs are rarely safe against untrusted input.

Last month, it was "file", which turned out to have a parser for executables which can be exploited via a buffer overflow.

Probably the best exploit in this line is crafting JPEG files which cause buffer overflows in forensic tools and take over the machine being used for forensics.

We need an effort to convert Linux userspace tools likely to be invoked as root or during installs from C/C++ to something with subscript checking.

"strings" is also vulnerable: http://lcamtuf.blogspot.com/2014/10/psa-dont-run-strings-on-...

Inspecting files is apparently hazardous.

No, writing general file-handling code in C is hazardous. And beyond that, mixing domains, especially for human convenience, is dangerous, as parsing gets complicated and leads to the unexpected.

Rust can solve the first problem. A sense of elegance or just good taste can solve the second.

Any language with a native code compiler can be used for 90% of the stuff C is still used for, not only Rust.

I think due to the rise of VM-based runtimes and the loss of investment in alternative languages (e.g. Modula-2...), many developers created the myth that C and C++ are the only languages with native code compilers.

This problem with LESSOPEN would exist even if less was written in Rust. The same goes for the strings command.

All the more reason to use memory safe languages as far up the stack as you can go.

> Rust can solve the first problem.

I sure hope so. But the language needs more mileage on it to be confident that the checking works.

You're right; it was "strings" last month, not "file".

The first time I accidentally ran "less" on a directory and it piped some version of "ls" into itself, I was mildly annoyed. The thing's supposed to page a text file on a terminal. Since then, I've had to think twice before invoking it to avoid this "helpful" behavior, and I'm not surprised that it came back to bite people.

> The first time I accidentally ran "less" on a directory and it piped some version of "ls" into itself, I was mildly annoyed.

    % less .
    . is a directory
    % less imadir
    imadir is a directory
My point is, that behavior is a configuration issue on your end, not inherent to the less codebase.

> My point is, that behavior is a configuration issue on your end, ...

On my end, and on every "end" where I ever log in. "less" works as a simple pager on my own system, but now I need to remember to possibly fix it everywhere else. "You can fix it in the config file" is not an excuse for broken-by-design software.

> ... not inherent to the less codebase.

Well yes, it kind of is. "less" used to be a better replacement for "more." Now it's something else, which should have a different name. Should "cat" be changed to automatically uncompress files and concatenate archives?

> Well yes, it kind of is. "less" used to be a better replacement for "more." Now it's something else, which should have a different name. Should "cat" be changed to automatically uncompress files and concatenate archives?

"less" already means "pager with all the bells and whistles". If you just want a simple pager, "more" is still there. How many fine gradations do you need?

Back in the old, old days, we all started using 'less' because 'more' did not have the ability to scroll backwards through a file. Nor was it buffering pipe input to enable similar functionality. It seems there has been a bit of feature creep in the ensuing 25 years...

Yup, because apparently staying the same for 25 years is the new normal. Would you discard more too if it added the ability to page back in a document?

No, but if the maintainers added the ability to play MP3s and file my taxes, that is a cause for concern.

You could replace your less with the busybox version of less, which does not support LESSOPEN.

I believe that is the point of the linked email. Some distributions have less symlinked to something else (e.g. the lesspipe script) for convenience. Plain 'ol less isn't that much of a problem.

My understanding is that the less codebase has the hooks (LESSOPEN and LESSCLOSE), and it's configurable (by presence or absence of the variables) whether they're used.

You may also want to try the 'most' command, which is like less but more advanced.

More advanced is exactly the problem here.

If you use a simpler program like 'more' or 'pg', you won't have these kinds of vulnerabilities.

The vulnerability is not actually in the less command. Did you overlook that?

Using a simpler program is useless when that's not the problem in the first place. Using more instead gains you nothing except a less useful program.

The vulnerability is in less-the-complete-system. More doesn't even have the ability to pipe its input through random shell scripts that call exploitable programs, AFAIK.

The problem is parsing complex file formats in unsafe programming languages, and any system which does that will be vulnerable.

We need a less that does less than less today but more than more

This reminds me of an old program called list.com that could open anything as a text file. Any size whatsoever, any line size, etc.

I suppose this would be the more straightforward and expected behaviour:

    less: somedir: is a directory

This was also mentioned in one of the pages of The Fuzzing Project, linked on HN just a short while ago as of this comment: https://fuzzing-project.org/background.html

I was idly wondering about "less" a month ago. https://news.ycombinator.com/item?id=8506813 It looks like my answer is "yeah, people are looking at that."

That is bad default behaviour on Ubuntu's (and CentOS's?) part. I have confirmed this is not the case in Debian.

I think this is one side effect of how we develop software: we try to cram every feature that would be nice to have into every program.

LESSOPEN and LESSPIPE provide something that was already achievable by manual means. But automation is king, so it's nice to have the feature implemented in the software itself.

If we could just stop and move on once software does what it's intended for, as smoothly as possible, many of these issues would cease to exist.

Am I missing something? Is lesspipe run as root? What could you execute via lesspipe that you couldn't from the command line?

> Ultimately, I think that there's an expectation that running less on a downloaded file won't lead to RCE

When you use less, you do not expect any code to be executed, and this article shows how this might be the case (through unexpected invocations of other utilities which may have bugs in them).
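
To see the mechanism concretely, here's a harmless sketch (assuming a stock GNU less is installed; the echo stands in for a real lesspipe script):

```shell
# When LESSOPEN starts with "|", less pages the command's output instead
# of the file itself, substituting the filename for %s.
printf 'raw contents\n' > /tmp/demo.txt

# Harmless stand-in for lesspipe; real setups run a script that shells
# out to tar, pdftotext, catdoc, etc. depending on the file type.
LESSOPEN='|echo PREPROCESSED: %s' less /tmp/demo.txt | cat
# shows the echo's output ("PREPROCESSED: /tmp/demo.txt"), not the file
```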

There are many ways to go from normal shell to root shell after that.

And there is no need to get root to do serious damage, rm -r ~/* comes to mind.

There's not a firm list, but certain commands should always be safe to run against hostile input.

If I type "make" on a project I downloaded, arbitrary stuff gets run. I accept that as part of make.

If I sha256sum a hostile file, that should never lead to RCE.

If you open a PDF, that should never lead to RCE, but that's not the world we live in. Better for tools not to open PDFs when you don't expect that to happen.

It depends what's hiding in the word "open."

If I open a PDF in a full-featured reader, I'm slightly ill at ease. If I sha256sum it or wc it, I should be (read: ought to be, in any sane world) perfectly at ease. No matter how complicated the data structures in it, they shouldn't affect those programs.

As per the link, I'm talking about less "opening" a PDF by shelling out to pdftotext. "Hiding" is a good word for the behavior. :)

So what's the point of root? To keep me out but let malicious people in?

>Is lesspipe run as root?

Not normally

>What could you execute via lesspipe that you couldn't from the command line?

Nothing, but running lesspipe on some file shouldn't lead to a remote shell being opened to some guy in Russia.

Imagine the case where someone sends you a malicious file and you open it with less.

I checked what I have on my Ubuntu 12.04.

    $ env | grep LESS
    LESSOPEN=| /usr/share/source-highlight/src-hilite-lesspipe.sh %s

Safe, as long as source-highlight isn't buggy.

I also checked my .bashrc and found this:

    # make less more friendly for non-text input files, see lesspipe(1)
    # NO! I don't want this!
    # [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

So yes, lesspipe was the default, and for some reason I commented it out. I vaguely remember being annoyed about less showing me something different from the actual binary content of the files.
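
For context on what that eval'd line does: run with no file argument, Debian's lesspipe just prints the assignments that wire itself in (a sketch from memory; exact paths may differ per distro):

```shell
# lesspipe with no arguments emits shell code meant for eval, roughly:
#
#   export LESSOPEN="| /usr/bin/lesspipe %s";
#   export LESSCLOSE="/usr/bin/lesspipe %s %s";
#
# so the .bashrc line just sets those two variables, and commenting it
# out (or running `unset LESSOPEN LESSCLOSE`) disables the hooks.
lesspipe 2>/dev/null || true
```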

> Many Linux distributions ship with the 'less' command automagically interfaced to 'lesspipe'-type scripts, usually invoked via LESSOPEN. This is certainly the case for CentOS and Ubuntu.

I just ran less on a dir in Ubuntu Trusty (latest LTS) and got the expected "<dir> is a directory" message.

Less really is more


  $ sha256 /usr/bin/less /usr/bin/more
  SHA256 (/usr/bin/less) = 9d517bcdf5bb22e756337ee450657de4d4edba779f7c17acfcaf7ba71f76bf59
  SHA256 (/usr/bin/more) = 9d517bcdf5bb22e756337ee450657de4d4edba779f7c17acfcaf7ba71f76bf59

The `more` command is often provided by `util-linux` package.

Ubuntu 12.04:

    dpkg -S /bin/more /bin/less

    util-linux: /bin/more
    less: /bin/less
CentOS 5.10:

    rpm -qf /bin/more /usr/bin/less


Using Ubuntu 12.04:

    md5sum /bin/less /bin/more
    4be69e915a4b4514d3d06ee07dd56a68  /bin/less
    11fc84952b1f31a91b5b6ec29740e497  /bin/more

Or is it just my system?

/bin/more is in the util-linux package.

Less can pretend to be more if run that way, but debian also does ship the real more command.

Should this show when I run env or not?

  $ env | grep ^LESS
  LESSOPEN=||/usr/bin/lesspipe.sh %s

  $ cat /etc/redhat-release 
  Red Hat Enterprise Linux Server release 6.6 (Santiago)

That's why

  alias less=/usr/share/vim/macros/less.sh

I used to do that with vimpager (a better implementation of this shell script). Unfortunately this cannot work in general, since it doesn't process data as it arrives; you have to wait for the input to end. So it's only good for fast & short commands.

Sure, but I have a separate alias for streams. alias -g even, zsh-style.

but also vim. and irssi. and gpg. and a bunch of other day to day linux programs nobody bothered to review very thoroughly.

Just in case anybody still believes it's just Java, Flash, OpenSSL and Bash that suffer bad vulns (oh oh oh).

Using cat in terminal can get you owned.

rm /bin/less

Problem solved.

Any binary utility that I haven't used in a 6 month period can get lost. The problem is that there are probably a hundred or so more issues like this hiding in /bin/* and /usr/bin/* and wherever else executables are hiding.

Is there a way to retrofit 'can shell out' as a capability flag not unlike the regular access permission bits?

You probably did use less in the past 6 months if you used Linux in the past 6 months. less is usually the default pager for things like 'man', 'git log', etc.

But as pointed out in another comment the problem is not with 'less', but with some distros configuring it to use 'lesspipe' as preprocessor.

Or just unset LESSOPEN ?

I checked on Debian Jessie, and the /usr/bin/lesspipe script runs entirely off the file extension, so there is no issue with less itself. If someone sends me, say, a malicious doc file, I would have to type "less blort.doc" to get owned by catdoc. The only time I would ever type that is if I knew that less would invoke catdoc, that I actually had catdoc installed on the machine, and that for some reason I wanted to use catdoc to look at a doc file.
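
The dispatch described above is essentially one big case statement on the filename. A hypothetical, stripped-down sketch of a lesspipe-style preprocessor (the real script handles far more types):

```shell
# Hypothetical, simplified lesspipe: pick a converter by extension.
# Each branch is attack surface -- a bug in catdoc becomes reachable
# the moment someone types "less file.doc" with this hook installed.
preview() {
    case "$1" in
        *.tar.gz|*.tgz) tar tzf "$1" ;;   # list archive contents
        *.doc)          catdoc "$1" ;;    # Word to text (if installed)
        *)              cat "$1" ;;       # fall through: raw file
    esac
}

# Demo in a scratch directory:
cd "$(mktemp -d)"
printf 'hello\n' > notes.txt
touch inner.txt && tar czf backup.tar.gz inner.txt

preview notes.txt        # just cats the file
preview backup.tar.gz    # runs tar against the (possibly hostile) archive
```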

Less only installs a mailcap entry for "text/*". A mail reader that could not handle plain text itself would not be much of a mail reader.

That also means that it is kind of stupid to have less display non-text things. Still not a real security issue.

User continence works great until someone figures out how to get lesspipe called automatically...

'I would never run bash manually on unsanitized input from HTTP requests!' [years pass] /shellshocked!

Pretty sure that shellshock doesn't involve any manual invocation of bash....

You're right, it doesn't. Think a little harder.

Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | DMCA | Apply to YC | Contact