cbsmith's comments | Hacker News

> The article discusses the use of software to catch rouge employee's before they can do any harm

It's amazing how much harm you can do with a bit of blush. ;-)

Sorry, couldn't resist poking fun at "rouge", even though it is a perfectly understandable typo.


This article is from 2013. Why is it getting attention now?

This web page is from this month, not 2013. They published a paper on the architecture of Hyperion in 2013, but it never received a lot of attention: it appeared out of the blue and was published at a 'minor' conference. Since most people thought Disney was still using RenderMan, they never realized that this architecture was actually being used in a production renderer. A few months ago FXGuide published an article on Hyperion, and in the past few months there have been a few talks by Disney on it, but this is the first public article by Disney specifically about their renderer. Besides, there aren't a whole lot of discussions on Hacker News about offline rendering, so this article is a good opportunity to discuss graphics.

Thanks. I had heard about it last year, and so thought it was all old news.

The boost::variant example seems quite painful. I'd use boost::hana instead.

You'd be surprised. With an allocator and collector that are aware of real-time constraints, GC can actually be a pretty huge advantage for achieving low latency.

GC is essentially never an advantage for low latency, but it is not incompatible with it either. Things like Metronome (IBM's real-time collector) can give you extremely well-defined latencies.

It's fairly moot for hard real-time programs though, as those typically completely eschew dynamic allocation (malloc can have unpredictable time too).


> GC is essentially never an advantage for low latency

I can't really agree with that statement. One way to get to lower latency is to avoid locks and rely on lock-free algorithms.

Many of those are much easier to implement if you can rely on a GC, because the GC solves the reclamation problem: an object can be unlinked from the lock-free data structure while some thread still holds a reference to it, so you can't free it immediately. There are ways around this, e.g. RCU or hazard pointers, but mostly it's easier with a GC.
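To make that concrete, here's the classic reclamation hazard in a Treiber-style lock-free stack, sketched in C11 (names are illustrative, not from any particular library):

    #include <stdatomic.h>
    #include <stdlib.h>

    /* Illustrative Treiber-stack node. */
    struct node {
        struct node *next;
        int value;
    };

    static _Atomic(struct node *) top;

    int pop(int *out)
    {
        struct node *old = atomic_load(&top);
        /* On CAS failure 'old' is refreshed with the current top, so we retry. */
        while (old != NULL &&
               !atomic_compare_exchange_weak(&top, &old, old->next))
            ;
        if (old == NULL)
            return 0;               /* stack was empty */
        *out = old->value;
        /* free(old) here would be unsafe: another thread may have already
         * loaded 'old' as its candidate top and is about to read old->next.
         * With a GC the node simply lives until no thread can reach it;
         * without one you need hazard pointers, RCU, epochs, etc. to know
         * when freeing is safe. */
        return 1;
    }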


Do you have an example? I'm not super familiar with lock-free structures; when I've worked on low-latency things there has been a need to quantify the worst-case timing, which rules out most of the lock-free options.

In a latency sensitive system, you want to minimize how much time you spend allocating and deallocating memory during performance critical moments. GC gives you a great way to leave those operations as trivial as possible (increment a pointer to allocate, noop to deallocate) during performance critical moments, and clean up/organize the memory later when outside the time critical window.
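A toy sketch of what that looks like (a hypothetical nursery, not any specific collector):

    #include <stddef.h>
    #include <stdint.h>

    /* Toy bump allocator: allocation is a pointer increment, deallocation is
     * a no-op. A real GC adds object headers, write barriers, etc. */
    static uint8_t nursery[1 << 20];   /* region reserved up front */
    static size_t  bump;               /* offset of the next free byte */

    void *fast_alloc(size_t n)
    {
        n = (n + 15) & ~(size_t)15;    /* keep 16-byte alignment */
        if (bump + n > sizeof nursery)
            return NULL;               /* a real GC would trigger a collection here */
        void *p = nursery + bump;
        bump += n;
        return p;
    }

    /* Outside the time-critical window, the collector evacuates survivors and
     * resets the nursery. */
    void nursery_reset(void) { bump = 0; }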

Similarly, it makes it easier to amortise costs across multiple allocations/deallocations.

GC does have a bad rep in the hard real-time world because, in the worst-case scenario, a poorly timed GC creates all kinds of trouble, which is why I mentioned that it helps if the allocator/deallocator is aware of hard real-time commitments.


> In a latency sensitive system, you want to minimize how much time you spend allocating and deallocating memory during performance critical moments. GC gives you a great way to leave those operations as trivial as possible (increment a pointer to allocate, noop to deallocate) during performance critical moments, and clean up/organize the memory later when outside the time critical window.

This only works if you enter a critical section with sufficient free heap. You could have just malloc()ed that space ahead of time if you weren't using a GC, so I don't see an improvement, just a convenience.

> Similarly, it makes it easier to amortise costs across multiple allocations/deallocations.

Amortizing costs is often the opposite of what you want to do to minimize latency; with hard real-time you care more about the worst case than the average case, and amortizing only helps the average case (often at the expense of the worst case).

> GC does have a bad rep in the hard real-time world because, in the worst-case scenario, a poorly timed GC creates all kinds of trouble, which is why I mentioned that it helps if the allocator/deallocator is aware of hard real-time commitments.

Yes, and GC can be made fully compatible with hard real-time systems; any incremental GC can be made fixed-cost with very little effort. It's somewhat moot since most hard real-time systems also want to never run out of heap, and the easiest way to do that is to never heap allocate after initialization, so most hard real-time systems don't use malloc() either.
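The "never heap allocate after initialization" pattern typically looks like a fixed pool with a free list, e.g. (illustrative, single-threaded sketch):

    #include <stddef.h>

    /* All storage is reserved at startup; acquire/release are O(1) list
     * operations, so the worst case is bounded and malloc() never runs in
     * steady state. Real code would add synchronization or per-thread pools. */
    struct message {
        struct message *next_free;
        char payload[128];
    };

    #define POOL_SIZE 1024
    static struct message pool[POOL_SIZE];
    static struct message *free_list;

    void pool_init(void)
    {
        for (size_t i = 0; i < POOL_SIZE - 1; i++)
            pool[i].next_free = &pool[i + 1];
        pool[POOL_SIZE - 1].next_free = NULL;
        free_list = &pool[0];
    }

    struct message *msg_acquire(void)
    {
        struct message *m = free_list;
        if (m != NULL)
            free_list = m->next_free;
        return m;                      /* NULL means the static budget is exhausted */
    }

    void msg_release(struct message *m)
    {
        m->next_free = free_list;
        free_list = m;
    }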


It might make it easier, no? I'm working on a perf-sensitive program now. It's written in C (mainly for performance). It's spending about 25% of CPU time in free/malloc. Yikes.

This happened because it has an event dispatcher where each event has a bunch of associated name/value pairs. Even though most of the names are fixed ("SourceIP", "SourceProfile", "SessionUuid", etc.), the event system ends up strdup'ing all of them, each time. With GC we could simply ignore this: all the constant string names would just end up in a high gen, and the dynamic stuff would get cleaned in gen0, with no additional code. (As-is, I'm looking at a fairly heavy rewrite, affecting thousands of call sites.)


So what's the reason for strdup'ing vs. having const names that never get freed? Also, it sounds like you could use ints/enums to represent the keys and provide string-conversion utility functions. Anyway, spending 25% in malloc/free is just poor code, but you already know that. This really isn't about GC :).
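Roughly like this, for the fixed names (a hypothetical sketch, not the poster's actual code):

    /* Enum keys plus a lookup table: no strdup, no free, O(1) conversion. */
    enum event_key {
        KEY_SOURCE_IP,
        KEY_SOURCE_PROFILE,
        KEY_SESSION_UUID,
        KEY_COUNT
    };

    static const char *const key_names[KEY_COUNT] = {
        [KEY_SOURCE_IP]      = "SourceIP",
        [KEY_SOURCE_PROFILE] = "SourceProfile",
        [KEY_SESSION_UUID]   = "SessionUuid",
    };

    /* The event stores only the enum; the value's lifetime is handled separately. */
    struct event_field {
        enum event_key key;
        const char    *value;
    };

    const char *key_to_string(enum event_key k)
    {
        return key_names[k];
    }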

Gen0 or young GC still involves a safepoint, a few trips to the kernel scheduler, trashes the CPU instruction and data caches, possibly causes other promotions to the tenured space (with knock-on effects later), etc. It's no panacea when those tens/hundreds of millis are important.


'Cause not all of the strings are const; some are created dynamically. Third parties add to these event names at runtime, so we don't know them ahead of time. An int-string registry would work at runtime, except for the dynamic names.

I was just pointing out that GC can "help", by reducing complexity and enabling a team that otherwise might get mired in details to deliver something OK.


> My zlib implementation for example is consistently 25% faster than the reference version, despite me simply "hand compiling" it straight from the C source.

Yeah, but that is a very CPU-bound processing pipeline. You would expect that to maximize the impact of any inefficiencies in the compiler.

That you can hand-tune for better performance is conceivable. That you can get a 2x win over some pretty well-tuned code suggests that something larger is at work than simply tuning lots of little things.


Yeah, I'm not seeing the point of doing that. You might as well create the key, encrypt it with the passphrase as per usual, and then post it online for all to see.

It's effectively the same thing as generating the key from the passphrase, except that you also throw a nice giant product of two primes into the mix.


Well, I don't know a lot of the math and crypto, but I would probably seed an RNG from the stretched key and then pull large primes out just as ssh-keygen would. You're right, though -- given my "requirements", it's not much more of a stretch to assume a convenient place to store a normal, encrypted key. It can be public, since I already based my security on the password strength and some obscurity. I wouldn't use either of these ideas for any serious production server.

For the deterministic key, you could "backdoor" ssh-keygen's RNG through something like LD_PRELOAD... at that point it's probably just piping a couple of pre-made shell utils together, which could be more portable and simpler than rolling your own key-export code.
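A minimal sketch of the LD_PRELOAD idea; whether ssh-keygen's entropy actually flows through getrandom() depends on how it was built (it may use OpenSSL's RAND or its own arc4random), so this only illustrates the interposition technique:

    /* fakerandom.c: build with `cc -shared -fPIC fakerandom.c -o fakerandom.so`
     * and run as  LD_PRELOAD=./fakerandom.so ssh-keygen -t ed25519 -f demo_key  */
    #include <stddef.h>
    #include <sys/types.h>

    static unsigned char counter;

    ssize_t getrandom(void *buf, size_t buflen, unsigned int flags)
    {
        (void)flags;
        /* Deterministic and utterly insecure filler; a real version would use a
         * keyed PRF over the stretched passphrase. */
        unsigned char *p = buf;
        for (size_t i = 0; i < buflen; i++)
            p[i] = counter++;
        return (ssize_t)buflen;
    }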

Actually, this strikes me as potentially useful and secure in some niche cases where you don't trust your RNG and/or storage. Use diceware and write your long key down, derive anew for each use. Maybe it's silly, but I would consider something like that (at least the initial deriving part) if I were very paranoid or making a long-lived, deeply-deployed key.

Here's a deterministic Ed25519 SSH key generator that takes a 32-byte seed: https://github.com/mithrandi/ssh-key-generator
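For reference, the general shape of that kind of derivation (stretch a passphrase into a 32-byte seed, then build the Ed25519 keypair from the seed), sketched with libsodium; this is not necessarily how the linked tool works internally, and the raw keypair would still need to be wrapped in OpenSSH's key format:

    /* Build with: cc derive.c -lsodium */
    #include <sodium.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        if (sodium_init() < 0)
            return 1;

        const char *passphrase = "correct horse battery staple";
        /* A fixed salt keeps the derivation deterministic, which also means the
         * passphrase had better be very strong. */
        unsigned char salt[crypto_pwhash_SALTBYTES] = { 0 };

        unsigned char seed[crypto_sign_SEEDBYTES];
        if (crypto_pwhash(seed, sizeof seed,
                          passphrase, strlen(passphrase),
                          salt,
                          crypto_pwhash_OPSLIMIT_MODERATE,
                          crypto_pwhash_MEMLIMIT_MODERATE,
                          crypto_pwhash_ALG_DEFAULT) != 0)
            return 1;   /* not enough memory for Argon2 */

        unsigned char pk[crypto_sign_PUBLICKEYBYTES];
        unsigned char sk[crypto_sign_SECRETKEYBYTES];
        crypto_sign_seed_keypair(pk, sk, seed);

        printf("derived a %zu-byte public key\n", sizeof pk);
        return 0;
    }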


Hmm... how about I reframe my point this way...

Let's say you have this deterministic SSH key generator. However, instead of your approach to using it, you took the resulting key and XOR'd it with a passphraseless private key, and then posted the XOR'd file up to the internet for all to see.
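In other words, the construction is just a two-way XOR split (sketch):

    #include <stddef.h>

    /* public_share = derived_key XOR real_key.  Recovering real_key from the
     * public share requires regenerating derived_key, so the public file helps
     * an attacker not at all; but it also adds nothing beyond what the
     * deterministic generator already gives you. */
    void xor_bytes(unsigned char *out,
                   const unsigned char *a, const unsigned char *b, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = a[i] ^ b[i];
    }

    /* publish: xor_bytes(public_share, derived_key, real_key,     n);
     * recover: xor_bytes(real_key,     derived_key, public_share, n); */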

Now imagine what it would take for someone to crack it. They'd basically have to figure out what your deterministic key was. The publicly available file really wouldn't be of any help in that unless they already knew the underlying passphraseless key they were trying to hack.

Now explain to me how the "deterministic key + XOR" makes it in any way more difficult to crack than just using "ssh-keygen -o -a [some reasonably high number]".

It's not. If anything, it's less difficult. The deterministic key generator is essentially how most stream cipher designs work (well, aside from those derived from block ciphers), and generally offers weaker protections than block encryption.

So, rather than go through all that pain, just block encrypt your private key well and upload it to the cloud. ;-)


> I'd also recommend still requiring a sudo password on the other end and sending auth events to a auditing server.

You should be auditing everything already, and adding in sudo now adds another attack vector. You also now have two different accounts that can be manipulated to compromise a system.


If your threat model is someone stealing and cracking your private key, you really ought to be more worried that they are going to use their time machine to go back in time and kill your mother.

Both actions happen at the same time: someone phishes you, captures your input, and now they have the password as well. Once they've got a shell on your box, IP address restrictions are bypassed, as is port knocking. This is completely plausible; time machines are not. I recently read a (private) incident report where half a company had their devices spear-phished in order to gain access to a single internal server.

Oh, someone capturing your private key's passphrase is something to be genuinely worried about.

Someone brute forcing said passphrase? Yeah no.


I keep trying to explain to people that if anything this adds complexity to the security problem...

Can you explain this to me?

We do it at work for accountability purposes, but I'm not sure I'm clear on the security implications.


From an accountability standpoint, if each person logs in to the root account directly with a distinct public/private key pair, you can still have full individual accountability.

I'd think the complexity side of the sudo strategy is self-evident, so perhaps I'm not understanding the part that needs explaining.


I don't find 'sudo' to be complex at all. If in your situation I would have had the ability to log in as root, in my situation I will be in the 'wheel' group. It seems very straightforward to me.

I think you misunderstand my point. You have increased the complexity of securing the system. You now have each user's login shells, all of the joys of what those login shells touch, the sudo program and its configuration, all added to the attack surface.

For accountability the openssh server already logs the key fingerprint.

The main security implications of sudo are a false sense of security and the risks related to having a password on your account and typing it in all the time (how often have you spilled it into a bash history?).

Sudo can't provide a dependable audit trail because it is trivially circumvented ('sudo bash'). It doesn't protect you from local-to-root exploits either.

It doesn't even protect you from yourself but rather makes your critical commands more complicated and error prone (shell globbing/escaping, pipes and redirects, etc.).


Fair enough (wrt accountability).

What is the false sense of security created by sudo? You don't have to have a password on your account to use it. You can use NOPASSWD or you can use pam_ssh_agent_auth to verify ssh keys if you are very paranoid. (Users still should not log into the servers with passwords.)

sudo doesn't provide a dependable audit trail, but neither does anything else. If you are root, why not just fix the logs? To fix this, you must use snoopy to log commands to syslog, then forward syslog to a different computer. This makes both 'sudo' and root's ssh keys as accountable as one could ask for, I think. (As I could ask for, anyway.)

It protects you from yourself in the sense that you are meant to think about what you type if it begins with "sudo." If you are running as your own user, it's much harder for a stray 'rm' to bring down the system. You might clobber your own files, but then you just restore from your backup.


> What is the false sense of security created by sudo? You don't have to have a password on your account to use it.

You seem to think sudo adds security even when you don't use a sudo password. That's precisely the false sense it creates.

> snoopy

On vanilla Linux there is no dependable audit trail after someone becomes root, period.

> that you are meant to think about what you type if it begins with "sudo."

Because you don't think when you type into a root shell?

The mythical "stray rm" is a strawman. I've never heard of one happening in the real world. If you're that careless then sudo won't save you either.

What I do see often is people spending minutes on trial & error because sudo turns even trivial commands into a minefield when a pipe/redirect, the shell environment, globbing or a loop gets involved.


> sudo doesn't provide a dependable audit trail, but neither does anything else. If you are root, why not just fix the logs? To fix this, you must use snoopy to log commands to syslog, then syslog to a different computer.

Wait, you just said "neither does anything else", and then gave an example of how to create an audit trail. ;-)

In truth, you should use auditd or similar systems to really have a proper audit trail.

You're absolutely right that sudo doesn't make an audit trail any worse. It just doesn't make it any better either. It does, however, create a bunch more ways for someone to hack your system.

> It protects you from yourself in the sense that you are meant to think about what you type if it begins with "sudo."

How about you write a shell script on your own system, call it "sudo", and have it do "ssh root@admin.system $*"? You see what I mean? Maybe you are used to using sudo, but there is no reason the "oh noes, now I need to be careful" mode has to be a privilege-escalation command on the host you are administering. In fact, it shouldn't be. It really should come before you have logged in.


> Minifying your JS and CSS files is a very good practice as it's not only secure, but also is compact.

If you think you gain much in terms of compactness, you might not understand how the subsequent gzip compression works. ;-) There may well still be a gain, but it won't be significant.
