fuzzbang's comments

That's because there is more than one program providing this sort of information. MAPP is for AV and other security software companies. There is also CIPP (http://www.microsoft.com/security/cipp/) and another one for "Defence", which covers government, military and intelligence agencies (apparently).


It is not that difficult to prove that a hidden volume exists. In TrueCrypt's implementation of hidden volumes, the "hidden" partition is allocated entirely at the end of the visible partition. If you have a 20G TC volume with a 4G hidden volume, the file system in the non-hidden volume will never allocate a block beyond 16G. This shows up as a very anomalous file system layout at the block level. Simple visualization of the block allocations will show a clear delineation where the hidden partition starts. The TC implementation of hidden volumes definitely does not provide robust plausible deniability.

The police forensics investigators know to look for this already. It is in their recommended best practices for how to handle TrueCrypt volumes.
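As a rough sketch of the kind of check I mean (assuming an ext-family cover filesystem mounted at /mnt/cover, and using filefrag(8), which reports filesystem blocks, typically 4 KiB):

  # Find the highest physical block any file on the cover volume touches.
  find /mnt/cover -xdev -type f -print0 |
    xargs -0 filefrag -v 2>/dev/null |
    awk '$1 ~ /^[0-9]+:$/ { end = $5 + 0; if (end > max) max = end }
         END { printf "highest allocated block: %d\n", max }'
If that number times the block size stays well short of the container size no matter how much you write to the cover volume, that is exactly the anomaly investigators key on.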

The safest way to use a TrueCrypt hidden volume is:

  * Create the largest regular volume that you can. 
  * Create the smallest hidden volume that you can.
  * Never mount the hidden volume as "protected".
The idea is that your sparsely populated cover volume won't create enough block allocations to have an obvious "end", and additionally, that those blocks will have a low likelihood of being allocated inside your hidden volume and overwriting your secret data.


I'm pretty sure this isn't true. You cannot prove the hidden volume exists. In fact, hidden volumes are completely useless without plausible deniability. The file system of the outer volume will happily overwrite your hidden volume if you tell it to. The point is that you know it's there, so you intentionally don't write more than 16G to your 20G volume. But the "unused" space looks just like random data, so you can't prove there is anything meaningful there.


Assuming you don't have it protected, you need to be more cautious than that, as the file system is not simply a long stream of bytes.


Very true, my numbers were just an illustrative example given the context set up by the comment I replied to.


Which you apparently didn't read thoroughly. I clearly state that the secure way to use TrueCrypt is to never mount the hidden volume in protected mode. That will enable the scenario you describe. I even state that the reason you want to use it the way I suggest is to minimize the amount of hidden data that is overwritten.


I think the point of TrueCrypt is simply "plausible deniability". They can suspect with a very high degree of confidence that there's a hidden volume, but how do they prove it?


They can prove it sufficiently to force you to hand over your password. File systems have a particular behaviour: they allocate blocks in certain ways. Most notably, they use all of the space available to them equally (or pseudorandomly, anyway). If a file system never allocates any data in the last N bytes, where N is a very large number, that is an indication that the file system is treating the volume as if it were N bytes smaller. Since this behaviour is the signature of a hidden volume in a TrueCrypt container, that is "proof". It will be sufficient proof for a court of law.

Essentially you are arguing that the file system implementation exhibited implausible behavior (it allocated only from the first N% of bytes), and that TrueCrypt exhibited implausible behavior ("ok, normally that would mean a hidden volume, but not in this case!").

All of which is to say that TrueCrypt's implementation of hidden volumes (as typically used by end users) does not actually provide plausible deniability.


Just because a filesystem isn't using all the space available in a partition does not mean that the rest of the space is being used by something else. Imagine I run `newfs -s 2097152 sd1p` where sd1p is actually 4194304 sectors. Now imagine sd1 is a softraid volume with a CRYPTO discipline. There's no way you can prove that the extra 2097152 sectors aren't being used, but there's also no way you can prove that they are.
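Concretely, the setup I have in mind would look something like this (OpenBSD; the device names are just examples):

  # Attach the CRYPTO discipline volume; say it shows up as sd1.
  bioctl -c C -l /dev/sd0a softraid0
  # Filesystem covering only half of the 4194304-sector volume.
  newfs -s 2097152 sd1p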


That's certainly true. But it's also (from an investigation point of view) more suspicious than a filesystem covering the whole hard disk but only filled to 20%. That will always be a problem as long as the "visible" filesystem just maps blocks 1..N directly onto encrypted blocks 1+k..N+k (with k being a constant offset), as is currently the case e.g. in Linux LUKS (I assume the CRYPTO discipline in BSD is similar).

The proper solution would most likely be to integrate a kind of block mapping into the encryption software, which allocates randomly distributed blocks from the encrypted disk whenever a filesystem begins to write to the blocks of a volume. This randomization algorithm would be aware of all currently active "hidden partitions", but due to the randomness, no pattern would emerge from which to draw conclusions about the existence of other partitions.
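Purely as a toy illustration of the idea (this is not how LUKS or TrueCrypt behave today), the mapping itself could be as dumb as a shuffled table:

  # Toy sketch: a random logical->physical block map for a container of
  # 1048576 blocks. Line N+1 holds the physical block backing logical block N.
  shuf -i 0-1048575 > blockmap.txt
  logical=42
  physical=$(sed -n "$((logical + 1))p" blockmap.txt)   # where logical block 42 really lives
In a real implementation the map would have to be stored and encrypted alongside the volume header, with each hidden volume drawing its blocks from the same shared pool.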


"More suspicious" is meaningless. If you can't prove - with incontrovertible evidence and beyond any reasonable doubt - that there's something there, then there's plausible deniability.


Plausible deniability won't protect you from the "rubber hose" of a contempt charge.


It would be really nice if TrueCrypt implemented a feature whereby you could have a special password that renders the secret partition unusable, perhaps by rendering your existing password/key worthless.


Something like this wouldn't really protect you from law enforcement. They perform forensic disk duplication before mucking around with a drive. If you provide a fake password to TrueCrypt and it starts overwriting things, it would be pretty obvious to anyone investigating the drive what's going on.


I'm not sure how such a feature would work or how useful it would be, for that matter. Maybe the TrueCrypt binary would attempt to decrypt the first X bytes of the partition under the "coercion" password and then check if it matches some known signature. If so, flip a bit in each encrypted block to scramble it.

Problem: forensics people can use a write-blocking adapter on the original disk and simply make copies to try out the decryption. So the feature sounds both irritating to implement and (worse) likely to give a false sense of security to a novice.


http://www.truecrypt.org/docs/?s=hidden-volume is probably more what you want.


> they use all of the space available to them equally (or, pseudorandomly anyway)

This is simply false for many (if not most) filesystems, which preferentially write to blocks near the beginning of the disk. For spinning disks, random distribution of blocks would kill performance.


No, it is not false. The important thing with a spinning disk is locality of reference. You want the blocks which store the file content to be as close together as possible, to minimize the head seek times. This means you want as long a chain of contiguous blocks as possible. This does not mean that you want all those blocks to be at the beginning of the disk. In fact, the exact opposite. You want to start that chain at a random location so you are more likely to have a large number of contiguous unallocated blocks. See the implementation of HFS+ Extents, or Ext4, or UFS for examples of how this works.


A) You have forgotten basic physics. The beginning of the disk is faster. Locality is desirable but is not and has never been the only thing that matters.

B) You have just named three uncommon filesystems that few people will ever use in the first place, much less with TrueCrypt.


A) You have forgotten basic physics. The beginning of the disk is faster. Locality is desirable but is not and has never been the only thing that matters.

If you haven't actually looked at the block allocation patterns of common filesystems, then you can't say conclusively that fuzzbang is incorrect. Arguments from first principles (e.g. "basic physics") cannot override empirical evidence.

Further, locality of reference will have a much, much bigger influence on spinning disk I/O throughput than location at the front of the disk. The difference between the outer rim and inner rim might be 120MB/s to 70MB/s, so reading a contiguous 20MB file will take 1.7x as long if it's stored at the inner rim (286ms vs 167ms). However, if that 20MB file is stored in 100 200KB fragments, and seeking to each fragment takes 4ms, your reading time will be dominated by the seek time due to fragmentation (686ms vs 567ms, or a mere 1.2x difference).

Based on my experience I'm inclined to accept fuzzbang's description of block allocation strategies. It used to be common wisdom that you could defragment a volume by copying all the data off, formatting it, then copying the data back on. I did this once with an NTFS volume (using ntfs-3g), then checked the resulting data in the Windows disk defragmenter. The data was primarily located around the center of the volume, with numerous gaps. Filesystems leave gaps to allow room for files to expand.

B) You have just named three uncommon filesystems that few people will ever use in the first place, much less with TrueCrypt.

"Commonness" for the purposes of forensics is a much lower bar than for market analysis. I'd also wager that, servers included, there are at least as many ext2/ext3/ext4 volumes on the planet as NTFS volumes.


> If you haven't actually looked at the block allocation patterns of common filesystems

I have.

> you can't say conclusively that fuzzbang is incorrect

And I can.

I'm aware of the degree of difference in speed. It is sufficient that it is standard practice for filesystems to be restricted to the first 1/4-1/2 of a spinning disk in performance-sensitive applications. Or at least it was; in the last few years we've become more likely to just use SSDs or keep everything in RAM.

> if that 20MB file is stored in 100 200KB fragments

Thank you for assuming I don't even have the knowledge of a typical computer user, it greatly increases the likelihood I'll not waste further time with you. Raises it, in fact, to 100%.


> If you haven't actually looked at the block allocation patterns of common filesystems

I have.

And? What distribution of block allocation did you observe on said filesystems? Does it contradict the original supposition that filesystems spread out allocations to prevent fragmentation, thus possibly overwriting hidden data at the end of a partition?

It is sufficient that it is standard practice for filesystems to be restricted to the first 1/4-1/2 of a spinning disk in performance-sensitive applications.

This has as much to do with seek times as with sequential read speed. A drive with 8ms average seek times might average 2ms if you only make the heads travel 25% of the width of the platter.

The fact that you have to restrict the filesystem to the beginning of the disk suggests that filesystems don't do this automatically.

Thank you for assuming I don't even have the knowledge of a typical computer user, it greatly increases the likelihood I'll not waste further time with you. Raises it, in fact, to 100%.

I'm not sure how you got that impression. I was just providing numbers to complete the example. There's no need to become defensive, and if you find that you might be wrong, saying, "That's a fair point, I'll have to do some more research," goes a lot further than continuing to beat a dead horse.

Maybe it's the fact that this thread is on an article related to law, politics, and morality that is causing needless argumentation.


HFS+ is not exactly uncommon


As a full-time Mac user for 8 years, and a Linux user for much of the decade before that, I am acutely aware of how common many filesystems, including HFS+, actually are.

If OS X ever breaks 10% (and still uses HFS+ at that time), I'll reconsider my judgement of its commonality.


I'm writing this from a computer with two partitions: one HFS+ running OSX and one Ext4 running Linux. These are both the default options when installing OSX (Mountain Lion) and Ubuntu 12.04, respectively:

https://www.dropbox.com/s/x3cluigwaog8sad/gparted.png


Of course you are. And if you took a poll of HN users, you might even find OS X + Linux near a majority. That has nothing at all to do with what filesystems the vast majority of people are using, much less what they'd be using with TrueCrypt.


It's over 7%; I still wouldn't call that uncommon or say that few people use it.

http://en.m.wikipedia.org/wiki/Usage_share_of_operating_syst...


That's how it does hidden volumes, huh? I always thought it was something sneakier, like flagging a handful of files that exist in the regular volume and building its hidden volume out of them.


Well ... a normal PC can have up to 10-12 TB of storage these days. It is more than normal for 1 or 2 to be left empty. You just create the hidden volume there.


Excellent. You read my directions on how to use TrueCrypt correctly! :)


Or they aren't about to kill the same bug in MobileSafari, since it is worth exponentially more there.

https://twitter.com/i0n1c/status/309585202810867712


WebKit code execution against Chrome is also likely to work (in modified form, but same basic exploit) against desktop or mobile Safari. Desktop Safari sandbox escape is likely to be completely different from MobileSafari sandbox escape. And in all three cases, the sandbox escape is the harder part.

So that logic does not explain to me why people are going after Chrome but not Safari.

I honestly don't know why it is. In particular, I don't have specific reason to believe Mac Safari's sandbox is more bulletproof than Windows Chrome's, but I guess Safari has the advantage of not being exposed to Windows kernel bugs.


Yeah, the WebKit exploit will work effectively unmodified on Safari. And the sandbox escape used against Chrome on Windows was a kernel bug in attack surface that can't be turned off from user-space (or really at all on Win7). Also, they softened the target quite a bit by using 32-bit Win7 for the contest, rather than 64-bit Win8 (or even 64-bit Win7).

As for why no one's targeting Safari, I think it's simple market forces at play. The iOS exploit market is established and pays very well, while the core vulnerabilities, expertise, and techniques are all shared with Safari on Mac OSX. And since Safari isn't a soft target (in no small part due to Abhishek's mass slaughter of WebKit security bugs and our bounty program), $65k just doesn't compete with the real-world exploit market.


Getting sandbox escapes from Mac Safari and iOS Safari requires completely different exploits. The code execution stage of a complete exploit could be shared, but it could also be shared with Chrome. So you'd think the same argument about iOS Safari exploit market value would apply either way.

My theory is that not much research has been done yet on breaking the WebProcess sandbox. Which makes me sad.


>Getting sandbox escapes from Mac Safari and iOS Safari requires completely different exploits.

You're focusing too narrowly on the sandbox itself. You have to consider the whole stack, and all of the surface exposed from within the sandbox. Consider the Chrome sandbox escape from yesterday, which didn't use anything specific to Chrome. It targeted part of the Windows stack that's guaranteed to be exposed to every process on the system.


While there are definitely benefits to using a VPN, they do not provide anonymity. They provide privacy, which is not the same thing.

"No one is going to go to jail for you". If a VPN provider is legally required to log your activity or face jail time, guess what? you're getting logged! To assume otherwise is just asking for trouble.

All of this is better addressed in this slide deck:

http://www.slideshare.net/grugq/opsec-for-hackers


Actually, @fygrave put together the document (using scigen and a draft from jonathan), so both of them should get credit.


You should clean the data from the user before passing it to the shell. There is a trivial remote command execution vulnerability in the URL ("echo 'GET /;$(cat /etc/passwd)'|nc ..."). I assume there are more.


There are likely many MANY more :-) I suppose I should cook up a sanitising method. Fortunately, stackoverflow to the rescue! http://stackoverflow.com/questions/89609/in-a-bash-script-ho...
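Probably something simple like a whitelist rather than trying to escape the bad stuff. A minimal sketch (the variable name is just a placeholder for whatever comes in off the request):

  # Whitelist approach: drop anything a hostname/path shouldn't contain
  # before the value ever gets near the shell.
  clean=$(printf '%s' "$user_input" | tr -cd 'A-Za-z0-9._:/-')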


We have basic filtering now :-)


You shouldn't have to sanitize if you quote your variables properly. In general, the rules for "$foo" are "parse first, expand afterwards" while those for $foo are "parse, expand, parse again", so

  foo="hello    world"
  echo $foo # prints "hello world" [2 arguments]
  echo "$foo" # prints "hello    world" [1 argument]
Note that you can't trivially inject a command this way, though you can inject arbitrary arguments:

  foo="; ls"
  echo $foo # prints "; ls"
I would recommend never using test ([) with more than three arguments. Once you start doing -a and -o, then people can inject confusing values for variables, making it impossible to parse:

  foo='!'
  bar='2'
  [ "$foo" = 1 -a "$bar" = 2 ] # -bash: [: too many arguments
Since test is just an ordinary command that receives a flat list of arguments, it can't tell the difference between your data and its operators. Instead, you should do something like this:

  foo='!'
  bar='2'
  [ "$foo" = 1 ] && [ "$bar" = 2 ]
(This also has the advantage of being way more legible.)


You can also solve that problem with double square brackets:

  [[ "$foo" = 1 && "$bar" = 2 ]]


Really, we should be using double square brackets. We're not aiming for POSIX compliance and double square solves a lot of problems.
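For instance, the '!' case that blew up plain test above parses cleanly, since [[ is handled by the shell itself rather than passed to a command as arguments:

  foo='!'
  bar='2'
  [[ "$foo" = 1 && "$bar" = 2 ]]   # no "too many arguments"; it simply evaluates to false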


You have to be more specific than that. These days information security is a huge field. Network security is a misnomer, since almost no one works on "network security", which is about protocols, not applications/operating systems.

I'd suggest being more specific about what sort of project you're interested in. Will you be doing original research on the size of botnets (you and everyone else in the world)? Maybe write a tool for something; I'd suggest writing a real webapp security assessment tool. I hate doing web app assessments. Another thing that would be really useful would be collaborative information sharing during a pen test (I've put a lot of thought into this one and could give you more pointers)...

If you want to play it safe, just write another fuzzer. Everyone writes fuzzers. Or you could write some VoIP security tools.


Where is it announced? In particular, how does one find out where / when it is to show up?


The Londoner is larger and doesn't get as packed as quickly (avoiding peak times, of course). The Bull's Head is a superior pub, no doubt, but is not great for a largish meeting.

At any rate, anywhere will do. When is this scheduled for? I'm figuring on heading to the islands later this week.


I only ask that the venue have simple directions for country bumpkins like myself.


The Londoner is at the corner of Sukhumvit 33. Easy to get to. The Bull's Head is down Suk 33/1, very easy to get to by BTS. The Phrom Phong station exit is just in front of the mouth of the soi.

I don't know where the Langsuang Starbucks is, never been there.


The Londoner brewpub on suk soi 33. Brewed beer > brewed coffee.

