This is a trend not uncommon in GNU software -- features added by someone who at some point thought it was a good idea, but probably didn't even bother using them much beyond an initial test to see that they are somewhat working. Most users likely think of 'less' as nothing more than a bidirectional version of 'more', and not as the "file viewer that attempts to do everything" that it seems to actually be. It's also a little reminiscent of ShellShock.
Features are nice - and ultimately in this scenario you'd just get owned 5 mins later when your archiving tool would get hacked directly, instead of through less.
The regular user won't use less to check the binary content for malware anyway...
Oh, looks like it's been updated, the man page no longer describes the -fwdownload flag as "HAS NEVER BEEN PROVEN TO WORK and will probably destroy both the drive and all data on it"
Whatever library is used by lesspipe - you're safe as long as the output terminal and your kernel are safe.
This means that in a lot of cases, SELinux won't save you from this, unless you've decided to confine your users (which is by no means impossible - I do so - but requires a bit more knowledge of SELinux than "usual" SELinux tasks)
Not on Arch Linux, not on Gentoo, and apparently not on Debian either.
I also remember that the default settings were not sane on older versions of Fedora (this may have since changed): sshd would get killed by SELinux because of some "violation". I never managed to understand what it was, and since sshd is kind of critical, I disabled SELinux so I could be sure not to lose access to that machine.
The added problem regarding SELinux is that most users have no idea how it works, and there's no "short" explanation for it. Even after reading the entire Wikipedia article, I'm left scratching my head wondering how it works or how one would use it.
OTOH, file modes, for example, are explained in a single paragraph and are extremely easy to understand.
I would envision this to be useful if there were a pre-installed local database of common use cases for each utility, and the user invoking the utility could interactively decide whether to allow specific exceptions, with a big fat warning.
Then, it doesn't matter whether someone exploits your tool, they're only ever allowed to utilize the tool for its intended use cases.
Apple aren't using that model just for security, but also so they can tightly control the platform and restrict what the device owner can do on it.
apple -> system preferences
security + privacy
Under "allow apps downloaded from:"
Why is it ok on mac? Because many computer users are still dependent on non-appstore distributed software. Particularly developers. Nobody has evolved that dependency on ios, so it's not necessary.
It does not require "Click here to get owned" level of work around. I'm perfectly fine with needing to boot into the equivalent of single user mode and entering some security key. But if I want to break the security model on a device I own I should be able to.
> Why is it ok on mac? Because many computer users are still dependent on non-appstore distributed software. Particularly developers. Nobody has evolved that dependency on ios, so it's not necessary.
That is circular reasoning. The "dependency" doesn't exist on iOS because Apple tries very hard to prevent non-appstore distribution being a viable avenue for app developers.
You are misinformed. Right click -> Open instead of double click overrides my current signing requirements and executes the app anyway.
* The Mac App Store's business model, i.e. no upgrade pricing, no trial versions; content-wise, nothing that you wouldn't sell in Disneyland
* Paying $99/yr for a developer account, even for free apps
* App sandboxing
And even then, this still doesn't mean that all apps on the App Store are safe. Because you used to be able to submit unsandboxed apps; as far as I know, they have never been removed? Sigh...
But at least many of the currently un-sandboxed apps are moving to the browser (MS Office & Skype being my biggest worries).
But here's the thing: if you want a secure platform usable by the majority of people, the best choice (by a lot) is ios, followed by osx. The app store is certainly imperfect, but hopefully that will get tightened up over the next couple years.
Relatives who've been hacked ask for how not to get hacked, and my best answer is the above: use ios, particularly for banking. For a laptop, use osx and buy programs either from well known brands (adobe, microsoft) or buy app-store apps.
I'm really worried that Linux has become a liability. The lax security practices of the culture it's part of are obvious. What now? Do we migrate to OpenBSD? I've been considering this at work. It just seems that the culture surrounding OpenBSD takes security more seriously.
The idea that we can just put an application on the internet sounds dangerous to me. If it isn't hardened by something like AppArmor/SELinux and also behind some kind of application-level firewall (or at least Snort), then we're just asking for trouble. Something has to change. The status quo is now failing us. How many high-priority patches have there been in the FOSS world this year?
Sadly, I worry more about my linux machines than my windows machines.
The "SELinux is too hard for mortals" thing is a relic from the early days of its inclusion in Linux, and turning this attitude around is important.
I don't recall the last time I had to do a full relabel on a production system. Not saying it hasn't happened, but I can't recall an instance.
Overall, I spend less time managing SELinux (and that includes the custom policies I maintain) than I do managing IPTables. It's really not the nightmare it's made out to be.
Whereas on Android, people expect the barrier, and developers don't expect to be able to disable it.
Ubuntu has Mandatory Access Control in the form of AppArmor (https://wiki.ubuntu.com/AppArmor). It's switched on with a set of policies on the desktop (do sudo apparmor_status to see) and on the server.
On the mobile side a lot of work has been done to confine individual applications in a manner similar to Android. That's future work that could go into mainstream Linux eventually.
On the traditional server side both RedHat and SUSE have MAC on by default - for them SELinux I believe.
I’m running Yosemite 10.10.1; according to Activity Monitor, Dropbox, VimR and Terminal aren’t sandboxed.
Of course, many low-level processes aren’t sandboxed either.
Rust can't save you from that; pulling in libraries makes it way too easy to increase the surface area of your code by several orders of magnitude. Even assuming rust offers an order of magnitude better security than C/C++, it still can't keep up. Sure, it mitigates things a little, but only until the next whiz-bang library gets linked in (even if it's written in rust, it will have some flaws).
In other words, the security of a program is based on the absolute number of bugs in it; but a programming language can't do much more than reduce the ratio of bugs to program size.
The OS is in a much better position to address the problem.
In reality, there is nothing wrong with C++, particularly not modern C++ (since C++11), as the approaches laid out in Stroustrup's book discourage writing C++ in the old-fashioned "let's throw pointers around!" style.
As with ANY language, if you write it badly, bad things will happen. The language choice doesn't mean bug-free code.
So, in essence this would mean to many that Rust's safety focus happens too late for it to be a big enough benefit to justify giving up the investment in modern C++? As in, memory safety is a solved problem with modern C++?
The addition of move semantics and using references everywhere makes pointers unnecessary for the most part.
You can put your items into STL containers, so you shouldn't see raw "new" or "delete" operations in your own code very much; this is particularly true once you define your own move constructors and move assignment operators.
It's a difficult habit to break though!
See Stroustrup's "The C++ Programming Language, Fourth Edition" (the blue book), section 3.3.3 "Resource Management" and 3.2.1.2 "A Container", where in this early part of the book Stroustrup explicitly directs to avoid "naked" new and delete operations and to "use resource handles and RAII to manage resources".
See std::move to force moving where it isn't clear that you're moving things.
Only greenfield projects can benefit from modern C++, and there are very few of those out there.
After reading the updated version of Effective C++, I doubt the average C++ developer will be able to deal with a mixed codebase of pre-C++98, C++98, C++03, C++11 and C++14.
But it helps to learn the new stuff and apply it within my own projects as best I can.
Or more recently and widely publicised, ShellShock.
Though it seems that some languages, or at least the way they are used, are more disciplined when it comes to treating things like SQL queries/commands: they don't treat them as plain text, and instead assign some kind of structure/typing to the query. Or don't they? I don't really know how SQL injections are handled in such languages, but I do know that some like to be more principled about SQL queries/commands in the source text, e.g. using macros to ensure that an SQL query doesn't have a typo in it. Maybe that kind of practice extends to validating user input.
Conceptually there are a ton of different string types. File names, path names, SQL query templates, SQL query strings, URLs, URL query parameters, command line arguments, full command lines, human-readable text.... Yet just about everything gloms them together into one "string" type. Even APIs that allow for structured construction of SQL queries tend to rely on the programmer not to put arbitrary data in the template bits. These really should all be completely separate types requiring conversion.
Can you expand on that?
just one example: https://news.ycombinator.com/item?id=3191021
I know there are many alternatives to Skype, but there are people that have to use it from time to time for professional reasons.
I started doing this for Skype on Ubuntu 14.04 - but have to admit that I quickly ran out of steam. Need to pick up where I ended.
But I would appreciate if I knew that Skype is _not_ able to access my confidential files through the file system. As a bonus it would be great if I could as well forbid Skype to access the local network and just allow it to talk through the default gateway.
I would then start Skype on the laptop only when I have to speak with people. During these teleconferences I have to clean my screen anyway. After the call I can stop the Skype process but keep being reachable through Skype on the phone.
Would this scenario be possible or is there still a security risk for my data?
I suppose that normally you're able to see these activities. But the app could observe periods of inactivity so as to wait until it thinks you're away, then quickly open a terminal, enter the command to wget then start the binary in the background and close the terminal again; that might be like 1/10 second of flickering. Or it might even be possible for it to act similar to a window manager and open the window where it wants, like off-screen or behind other windows, so you wouldn't see anything.
Since it directs the window manager or panel program to open the terminal instead of starting the binary directly, any SELinux properties that say that commands exec'ed by Skype don't have the permission to do stuff will not trigger. You can't stop the panel from opening programs of course unless you're fine with preventing yourself from doing the same. (I don't think SELinux is able to track which X connection leads a panel app to start a binary; that would require it to be able to track code execution and understanding the logic of the program, which seems too complex to be feasible. Cooperation between panel and SELinux might solve this. But then this seems more like a cat&mouse game, will you not have missed other ways in which a program can be started? What about terminals that you opened, they would have to cooperate as well. In the end what you need is a replacement for X11.)
It is possible to inject keyboard and mouse events into the X11 event delivery system, although it might be possible for programs to differentiate between those and real keyboard/mouse events (I know that the events are tagged as artificial or some such, but can programs ignore them without ill effects, and do they?). Here my knowledge is getting spotty, so perhaps it is still possible to get a (somewhat?) more secure situation than what I've depicted above. You'll have to ask someone who knows better. In any case it seems to me that entities that are willing and able to use Skype to break into people's computers will also have way more funding than you to find any obscure ways in which X11 can be used to break out. X11 (at least its normal, trusted mode of operation (and as I said, Skype won't run in untrusted mode)) isn't designed to provide security.
I am wondering if Wayland will be more secure by design. I hope, since it is highly awaited as something that will save us from X11.
1. how is video speed through the ssh tunnel, then? Unsolved, I guess.
2. as I said, Skype won't do untrusted X11 (at least the version I tried didn't). This means that you need to tunnel it with trusted X11 access. Which means, no security (it will log your passwords and open and use X terminals if it wants to). Which makes the whole exercise moot.
Other suggestions on this page are suffering from the same flaw. It seems all the writers of that page have never heard that X11 doesn't provide security. Also, they advise to use xhost. In the end you're worse off when following the recipes on this page than you were before.
For instance, I didn't know until reading this article that 'less' on ubuntu went through lesspipe, but the people maintaining the tools know that. If for some legitimate use case lesspipe needed access to the internet, the local database could allow that behavior without any warning, but as a user I would still be dumbfounded that a 'less' command ends up generating network connections.
One thing it is proving (exactly as a lot of people expected): we don't have any idea where security bugs (think the next Heartbleed or Shellshock) are going to show up, we have no idea how good the software out there is (meaning: it is bad), and most of the time we don't even know what's running on our own boxes.
If these basic things we use hundreds of times a day (less, strings) have huge flaws, we have a lot of work ahead of us.
"if you start a new software project it is a good idea to avoid C right from the start. however ... most major software components we use today [are] written in C. Rewriting things from scratch is hard, compared to that finding bugs with fuzzing is the low hanging fruit."
While something like Go has good performance, writing low-level tools in interpreted languages (Python, Ruby, etc) means you would always have the performance overhead of firing up the interpreter for everything. A hypothetical non-C system would feel sluggish compared to what we have now.
Modula-2, Modula-3, Go, D, Haskell, OCaml, Oberon, Ada, Extended Pascal, Object Pascal, C#, Java, ...
The list of languages with native code compilers is quite big, no need for interpreters.
The problem with fuzzing is that, like the use of safer languages, it just won't get adopted unless forced down the throat of developers.
This is why Google, Microsoft and now Apple are adopting an "our way or the highway" approach with memory-safe languages for their platforms. After all, proprietary APIs would already be enough for platform lock-in.
I do agree no one is going to rewrite the huge amount of code out there. However, it is already good enough if new code gets written in something else.
Languages like Object Pascal, Modula-2 and Ada, which don't use garbage collection, and many that do, like Modula-3, provide much more control over value types and GC behavior than what Java and C# are known for.
Finally, many of the garbage collection jitter issues are caused by developers who are clueless about writing GC-friendly algorithms, or simply about how to use a profiler.
Yes, it would be nice to get rid of one class of errors. But proper sandboxing would get rid of them all.
No one's saying it's impossible to write memory-safe C code, it's just exponentially harder, and when you're not 100% sure of your use case and exact performance needs, you're going to spend a lot more time than you want to making C do only what you want and nothing else.
Probably the best exploit in this line is crafting JPEG files which cause buffer overflows in forensic tools and take over the machine being used for forensics.
We need an effort to convert Linux userspace tools likely to be invoked as root or during installs from C/C++ to something with subscript checking.
Inspecting files is apparently hazardous.
Rust can solve the first problem. A sense of elegance or just good taste can solve the second.
I think that due to the rise of VM-based runtimes and the loss of investment in alternative languages (e.g. Modula-2...), many developers created the myth that C and C++ are the only languages with native code compilers.
I sure hope so. But the language needs more mileage on it to be confident that the checking works.
% less .
. is a directory
% less imadir
imadir is a directory
On my end, and on every "end" where I ever log in. "less" works as a simple pager on my own system, but now I need to remember to possibly fix it everywhere else. "You can fix it in the config file" is not an excuse for broken-by-design software.
> ... not inherent to the less codebase.
Well yes, it kind of is. "less" used to be a better replacement for "more." Now it's something else, which should have a different name. Should "cat" be changed to automatically uncompress files and concatenate archives?
"less" already means "pager with all the bells and whistles". If you just want a simple pager, "more" is still there. How many fine gradations do you need?
If you use a simpler program like 'more' or 'pg', you won't have these kinds of vulnerabilities.
Using a simpler program is useless when that's not the problem in the first place. Using more instead gains you nothing except a less useful program.
The problem is parsing complex file formats in unsafe programming languages, and any system which does that will be vulnerable.
This reminds me of an old program called list.com that could open anything as a text file. Any size whatsoever, any line size, etc.
less: somedir: is a directory
LESSOPEN or LESSPIPE is a feature whose effect is already achievable by manual means. But automation is king, so it's nice to have it implemented in the software.
If we could just stop and move on once software does what it is intended for as smoothly as possible, many of these issues would cease to exist.
When you use less, you do not expect any code to be executed, and this article shows how this might be the case (through unexpected invocations of other utilities which may have bugs in them).
There are many ways to go from normal shell to root shell after that.
If I type "make" on a project I downloaded, arbitrary stuff gets run. I accept that as part of make.
If I sha256sum a hostile file, that should never lead to RCE.
If I open a PDF in a full-featured reader, I'm slightly ill at ease. If I sha256sum it or wc it, I should be (read: ought to be, in any sane world) perfectly at ease. No matter how complicated the data structures in it, they shouldn't affect those programs.
>What could you execute via lesspipe that you couldn't from the command line?
Nothing, but running lesspipe on some file shouldn't lead to remote shell being opened to some guy in Russia.
$ env|grep LESS
LESSOPEN=| /usr/share/source-highlight/src-hilite-lesspipe.sh %s
Safe, as long as source-highlight isn't buggy.
I also checked my .bashrc and found this
# make less more friendly for non-text input files, see lesspipe(1)
# NO! I don't want this!
# [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"
So yes, lesspipe was the default, and for some reason I commented it out. I vaguely remember being annoyed about less showing me something different from the actual binary content of the files.
I just ran less on a dir in Ubuntu Trusty (latest LTS) and got the expected "<dir> is a directory" message.
$ sha256 /usr/bin/less /usr/bin/more
SHA256 (/usr/bin/less) = 9d517bcdf5bb22e756337ee450657de4d4edba779f7c17acfcaf7ba71f76bf59
SHA256 (/usr/bin/more) = 9d517bcdf5bb22e756337ee450657de4d4edba779f7c17acfcaf7ba71f76bf59
dpkg -S /bin/more /bin/less
rpm -qf /bin/more /usr/bin/less
md5sum /bin/less /bin/more
Less can pretend to be more if run that way, but debian also does ship the real more command.
$ env | grep ^LESS
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.6 (Santiago)
Just in case anybody still believes it's just Java, Flash, OpenSSL and Bash that suffer bad vulns (oh oh oh).
Any binary utility that I haven't used in a 6 month period can get lost. The problem is that there are probably a hundred or so more issues like this hiding in /bin/* and /usr/bin/* and wherever else executables are hiding.
Is there a way to retrofit 'can shell out' as a capability flag not unlike the regular access permission bits?
But as pointed out in another comment the problem is not with 'less', but with some distros configuring it to use 'lesspipe' as preprocessor.
Less only installs a mailcap entry for "text/*". A mail reader that could not handle plain text itself would not be much of a mail reader.
That also means that it is kind of stupid to have less display non-text things. Still not a real security issue.
'I would never run bash manually on unsanitized input from HTTP requests!' [years pass] /shellshocked!