
That is risky, since without a partition table, some operating systems and disk management tools will treat the disk as empty, making it easy to accidentally overwrite data.

> 100% of the capacity, with no wasted sectors.

You will never have that. SSDs have a large amount of reserved space, and even on HDDs, there are some reserved tracks for defect management.




By "some operating systems and disk management tools" you mean MS Windows and Windows tools.

Obviously, I do not use unpartitioned SSDs/HDDs with Windows. On the other hand, with Linux and *BSD systems they work perfectly fine, regardless of whether they are internal or removable.

For interchange with Windows, I use only USB drives or SSDs that are partitioned and formatted as exFAT. On the unpartitioned SSDs/HDDs I use file systems like XFS, UFS or ZFS, which cannot be used with Windows anyway.

Any SSD/HDD that uses non-Windows file systems should never be inserted into a Windows computer, even when it is partitioned. When an SSD/HDD is partitioned, one may hope that Windows will not alter a partition marked as type 0x83 (Linux), but Windows might still destroy the partition table and the boot sector of a FAT partition. It happens frequently that a bootable Linux USB drive is damaged when it is inserted into a Windows computer, so that the boot loader must be reinstalled. So partitioning a USB drive or SSD does not protect it from Windows.

>> 100% of the capacity, with no wasted sectors.

> You will never have that.

I thought it was obvious that I meant 100% of the capacity available to the user. There is no way to access the extra storage used by the drive controller, and no reason to want to, because it serves a different purpose than storing user data, so your "correction" is pointless.


Also, some dumb firmware may write to such disks; ASRock boards were reported in the past to do that.

EFI+boot partitions usually take less than 2 GB of space, and can be made as small as 200 MB total, while mainstream disk capacity is hundreds of GB nowadays.

This "loss of useful space" is immaterial in most cases. Maybe if you have something like a 2 GB drive from the 1990s that you want to use (why?), then it makes sense to avoid shaving 1 GB off it. But that is more work, as you have to buy, prepare and manage the USB drive.


There are more reasons to set it up more or less like that.

Think of an expensive, super fast but fairly small SSD, and some cheap big mass storage (maybe even on spinning rust) alongside it.

You'll likely try to use the expensive SSD as efficiently as possible. Every GB counts if you have "only", say, 0.5 TB.

A boot partition on such expensive and small (but fast) media is pure waste.

Also, this kind of setup doesn't seem so uncommon; I can say that I've done something similar myself. :-)

There are even more reasons. It makes things even simpler and less error-prone:

The argument that you can swap disks more easily was mentioned already. But that's not all one gains.

SSDs are very prone to wearing out much quicker and losing at least half of their performance when you mess up the data alignment on them. In the case of FDE with partitions (maybe even on top of LVM), the alignment issue isn't trivial. It's quite easy to mess up the alignment by mistake. You can read a lot of docs, try to find out details about the chips on your SSD, do calculations, yada yada, or you just encrypt the raw device and use the whole disk without partitions. That's considerably simpler; nothing can go wrong.
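
Roughly like this, by the way (just a sketch, untested; the device name is a placeholder):

    # LUKS directly on the raw device, no partition table at all
    cryptsetup luksFormat /dev/nvme0n1
    cryptsetup open /dev/nvme0n1 cryptroot
    mkfs.xfs /dev/mapper/cryptroot

The file system then starts right after the LUKS header, with one layer less to align.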


> Every GB counts if you have "only", say, 0.5 TB.

It doesn't. You can make the overhead partition take 200 MB. That's an immaterial fraction of 0.5 TB. You ain't gonna see the impact of this loss. Additionally, by partitioning the drive, you protect it from dumb programs that like to create partition tables.
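
For example (sketch only; /dev/sdX is a placeholder):

    # small EFI system partition, rest of the disk for the OS
    sgdisk -n 1:0:+200M -t 1:ef00 /dev/sdX
    sgdisk -n 2:0:0     -t 2:8300 /dev/sdX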

Yes, there are reasons for not partitioning your OS disk, like full disk encryption. But it is more work.

> when you mess up the data alignment on them. In the case of FDE with partitions (maybe even on top of LVM), the alignment issue isn't trivial.

This sounds interesting. What are these alignment issues? Why do you think they are present on a disk with partitions (I never had those issues), and why do you think they are not present on a disk without partitions (maybe they are, due to compression/encryption)?


If you would use only one partition anyway (because boot is elsewhere), having no partitions at all is not more but less work.

Alignment issues are only really relevant in the case of SSDs. The FS blocks need to align with the "physical" blocks of the chips used. (Actually these are also "only logical blocks", presented to you by the SSD controller, but at least this is fully transparent.) If the alignment is messed up, the SSD needs to touch at least 2 "physical" blocks (as presented by the controller to the OS) when accessing a single FS block. This doubles the wear and halves the performance. (At least; in really unhappy scenarios this can even triple the access effort.)
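
You can at least check what the controller reports to the kernel (a sketch; the device name nvme0n1 is a placeholder):

    cat /sys/block/nvme0n1/queue/physical_block_size
    cat /sys/block/nvme0n1/queue/optimal_io_size
    cat /sys/block/nvme0n1/nvme0n1p1/start   # partition start, in 512-byte sectors
    # aligned if start * 512 is a multiple of the sizes above

But as said, these values are only what the controller claims, not necessarily the real chip geometry.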

Where exactly an FS block starts and ends in relation to the underlying "physical" block(s) depends on all the "headers" that sit "in front" of the FS blocks (logically sometimes "a layer up", but even then "physically" this only means "in front"). Partition tables are headers. LUKS headers are obviously also headers that need to be taken into account. LVM headers (and blocks, groups, volumes) are one more layer to consider.
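
Each layer will tell you where its data actually starts if you ask (sketch; device names are placeholders):

    cryptsetup luksDump /dev/sdX2 | grep -i offset   # LUKS data segment offset
    pvs -o +pe_start                                 # LVM: where the first PE begins

Adding these up, layer by layer, is exactly the bookkeeping I meant.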

To make things more fun, like I said, the "physical" blocks are only an abstraction presented by the controller. In some cases their size is configurable through the SSD controller firmware. (But this shouldn't be done without looking at the chips themselves.) The more interesting part is: the "physical" blocks can have "funny" sizes… (something with a factor of 3, for example). Documentation on this is frankly sparse…

The usual tools just assume some values that "work most of the time". But this whole problem area is actually quite new. Older versions of all the related tools didn't know anything about SSD block alignment. (Like I said, they still don't know anything for sure; there is no way to know without looking at the docs and specs of the concrete device, but now at least they try to guess some "safe values", with a large margin.)

If you use partitions you'll end up with those "funny" few-MiB offsets, which you have seen for sure. (If you don't use offsets, it's very likely that the alignment is wrong.)

Without partitions the other storage layers are much easier to align. You don't need to waste a few MiBs around your partitions, and you especially don't need to remember (and maybe even recalculate) this stuff when changing something.

Not many people know about this whole dance, as misalignment isn't a fatal problem. It will just kill your SSD much quicker and halve the performance (at least). But SSDs are so fast that most people would not notice without doing benchmarks… (Benchmarking the different storage layers is actually the only way to test whether you got the alignment right.)
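
For example with fio against each layer in turn (a sketch; the mapping name is a placeholder, and write benchmarks on a raw device would destroy data, so this one only reads):

    fio --name=align-test --filename=/dev/mapper/cryptroot \
        --rw=randread --bs=4k --direct=1 --ioengine=libaio \
        --runtime=30 --time_based

If one layer is noticeably slower than the layer below it, misalignment is a prime suspect.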

If you don't look into this yourself, you can only pray that all the tools used were aware of these issues and guessed values that happen to work properly with your hardware. But if you created partitions without the "safe" offsets (usually by setting values yourself and not letting the tool choose its "best guess"), the alignment is quite likely wrong.

I came across this issue because I was wondering why Windows' fdisk always added seemingly "random" offsets around partitions it created. It turns out it's a safety measure. Newer Unix tools will do the same when using the proposed defaults.

TL;DR: If you don't create a partition table on an NVM device, you can just start your block layer directly on block zero and don't have to care about much, as long as you also set the logical block size of that layer to the exact same value as the (probably firmware-configurable) "physical" block size of the device. If you have a (GPT) partition table in front (which is, by the way, of varying size, to make things even more funny), you need to add "safety offsets" to your partitions. Otherwise you're torturing your NVM device, resulting in severely crippled performance and lifetime.
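
With LUKS2 you can pin that explicitly (a sketch, assuming a device with 4K "physical" blocks; the device name is a placeholder):

    # match the encryption sector size to the device's block size
    cryptsetup luksFormat --type luks2 --sector-size 4096 /dev/nvme0n1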

I hope further details are now easy to google in case anybody would like to know more about this issue.

---

> Additionally, by partitioning the drive, you protect it from dumb programs who like to create partition tables.

The better protection would be to keep drives far away from operating systems and their tools that are known to randomly shred data… ;-)


Thanks for the effort, but this is not very convincing. Is there any documented case where physical blocks have a size in bytes that is not some power of 2? I suspect that if such a device exists, it is quite rare. Blocks of size 512 B, 4K or 8K are the most common case, and correct alignment is completely taken care of by the 1 MiB offset which is standard and default in fdisk and similar tools on Linux. You mention "random" offsets with newer Unix tools; I have never encountered this. Any examples?
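
To spell out the arithmetic:

    echo $((2048 * 512))       # 1048576 bytes = 1 MiB default start
    echo $((1048576 % 4096))   # 0 -> aligned for 4K blocks
    echo $((1048576 % 8192))   # 0 -> aligned for 8K blocks too

Any power-of-2 block size up to 1 MiB divides that offset evenly.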


> Thanks for the effort, but this is not very convincing.

I've written this to shed light on the alignment issue, as I was under the impression that this would be something completely new to you. ("This sounds interesting. What are these alignment issues?")

> Is there any documented case where physical blocks have size that in bytes is not some power of 2?

Yes, there are examples online. I did not make this up!

It was in fact a major WTF when I came across it…

> I suspect if that exists, it is quite a rare device.

Yep, that's for sure.

Also, the documentation is very sparse on this, like already mentioned.

I think it was the early triple-level-cell chips that had such crazy layouts. (Did not look it up again; maybe this was only a temporary quirk, but maybe it still exists, no clue.)

> Blocks of size 512B, 4K, 8K are the most common case, and correct alignment is completely taken care of by the 1MiB offset which is standard and default in fdisk and similar tools on Linux.

Well, it depends.

These thingies I've read about, with a factor of 3 in their block size, would need at least a 1.5 MiB offset… (and the default 1 MiB offset would torture them to a quicker death, but most people would likely never find out).
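
For such a hypothetical 1.5 MiB (3 x 512 KiB) block size the arithmetic would be:

    echo $((1048576 % 1572864))   # 1048576 -> the default 1 MiB start is misaligned
    echo $((1572864 % 1572864))   # 0 -> a 1.5 MiB offset would be aligned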

There are devices with much bigger (optimal) block sizes, I think in the MiB ballpark (I don't remember the details off the top of my head, would need to look it up again myself). In such cases the 1 MiB would also not suffice.

Those devices usually ship in some compatibility mode in the factory settings, with much smaller blocks than optimal for maximal performance and least wear. You need to tell the firmware explicitly to switch the block size to get the best results (which of course cannot be done after the fact without shredding all data on the device).
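
With NVMe that looks roughly like this (a sketch using nvme-cli; the device name is a placeholder, and the format step shreds all data):

    nvme id-ns -H /dev/nvme0n1 | grep 'LBA Format'   # list the supported block sizes
    nvme format /dev/nvme0n1 --lbaf=1                # switch to another LBA format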

Also, it's not only the offsets around the partitions. You need to take the block sizes into account also for the block layers "inside" the partitions. Which was actually my point: this makes things more complicated than strictly needed.

> You mention "random" offsets with newer Unix tools - I have never encountered this. Any examples?

By "random" I meant that the offsets appear seemingly random when you don't know the underlying issue. It's not only the one offset after the partition table. Depending on how large the partitions are, there may or may not be additional offsets around the partitions themselves.

Of course all this is not rocket science. We're talking about simple calculations. But that's just one more thing to be aware of.

My conclusion from that journey back then was: just screw it, and don't add partitions to the mix if you don't strictly need them. One thing less to care about!

For example, the laptop I'm writing this on has two NVM devices. The bigger and faster one is used as the (encrypted) root FS, without any partitions on it; the other, smaller and slower one carries the EFI partition and an (encrypted) data partition. If I had partitions on the root disk, this would not give me any advantages, just additional stuff to think about. So why should I do that? OTOH I need an EFI partition to boot, so I have created one on the other disk. I think this is a very pragmatic solution. Just don't add anything that you don't need. <insert Saint-Exupéry quote about perfection here>


Alright, that makes sense.



