The main benefits of metric (actually SI) are:
- One unit for each dimension, e.g. all distances are in metres.
- The conversion factor between derived units is 1, e.g. 1 Nm = 1 J, 1 Pa = 1 N/m^2, etc.
We often use prefixes like "centi" or "kilo", but they're just multipliers rather than separate units (SI treats them as decimal multiples and submultiples of the same unit).
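That "conversion factor of 1" property can be checked mechanically by tracking each quantity as a vector of base-unit exponents. A minimal sketch (the helper names are illustrative, not from any library):

```python
# Represent a quantity's dimensions as exponents of (kg, m, s).
def dims(kg=0, m=0, s=0):
    return (kg, m, s)

joule  = dims(kg=1, m=2, s=-2)   # J  = kg m^2/s^2
newton = dims(kg=1, m=1, s=-2)   # N  = kg m/s^2
metre  = dims(m=1)
pascal = dims(kg=1, m=-1, s=-2)  # Pa = kg/(m s^2)

def mul(a, b):  # multiplying quantities adds exponents
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):  # dividing quantities subtracts exponents
    return tuple(x - y for x, y in zip(a, b))

assert mul(newton, metre) == joule               # 1 N*m = 1 J
assert div(newton, mul(metre, metre)) == pascal  # 1 N/m^2 = 1 Pa
```

The named units fall out of pure exponent arithmetic, with no conversion constants anywhere.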
The main problems with metric are:
- Still some redundancy, e.g. litres, tonnes, hours, etc. These are avoided by SI.
- The base unit of mass is confusingly called the "kilogram". Hence any conversion factor involving grams will get a 1/1000, since the gram is 1/1000 of the base unit.
I wrote about this at http://chriswarbo.net/blog/2020-05-22-metric_red_herring.htm...
Now it is just "This piece is 340mm, I need to subtract 9mm and find the midpoint: 165.5mm": relatively easy peasy.
If you "normalize everything" you get the metric system (up to a scaling factor): all distances are expressed in metres, all energies in Joules, all forces in Newtons, all pressures in Pascals, etc. Note that the latter are just naming conventions (for kg m^2/s^2, kg m/s^2 and kg/(m s^2), respectively).
This can lead to some unwieldy numbers, e.g. "turn left after four million 1/64 inches", so naming conventions are often used, like "mega" for "million".
Probably if I was a better woodworker I would have a solution for this without switching to metric.
Nobody who grew up with metric uses such measurements.
Outside of the US, metric measurements are usually to the millimeter (add decimal places for serious CNC etc work), not to some odd fractions.
The equivalent math is what they said: (340mm-9mm)/2. No weird fractions.
I hate converting between 3/8ths and 7/32nds all the time. It's such a stupid system, and I miss metric.
The choice of base 10 units is also completely arbitrary. So we have ten fingers? So what? That's not useful. Metric takes the view that everyone will work with arbitrary real number measurements, but we don't. We naturally use fractions.

The fact you can't evenly divide a metric unit in three is terrible. If you look at standard sizes for, say, kitchen units in Europe, they actually are multiples of 6. A unit is 600mm and therefore divisible by 2, 3, 4, 5, 6, 8, 10 and 12.

Wouldn't it be so much neater if we could just write 100mm instead of 600mm? We know humans can remember more than ten symbols because our alphabet is larger. Sigh... I suppose having one standard is better than a few competing standards or no standard at all.
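For what it's worth, the divisibility claim above checks out; a throwaway sketch:

```python
def divisors(n):
    """All positive integers that divide n evenly."""
    return [d for d in range(1, n + 1) if n % d == 0]

# 600 is divisible by every factor the comment lists...
for d in (2, 3, 4, 5, 6, 8, 10, 12):
    assert 600 % d == 0

# ...while a power of ten like 100 misses the thirds and twelfths:
assert 3 not in divisors(100)
assert 12 not in divisors(100)
```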
Useful mnemonic, but completely arbitrary. If you cook/bake regularly, you know the standard amounts used. I know the amounts needed for a bread that fits into my rising basket, starting with 450g white flour and 50g rye flour, 350g water and so on. If you decide to go more advanced, you use baker's percentages anyway, which of course map neatly to a base-10 system.
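Baker's percentages express every ingredient relative to total flour weight (flour = 100%), so they really are just base-10 ratios; a small sketch using the amounts above:

```python
# Amounts from the comment's bread recipe, in grams.
recipe_g = {"white flour": 450, "rye flour": 50, "water": 350}

# Baker's percentages: each ingredient as a % of total flour weight.
total_flour = recipe_g["white flour"] + recipe_g["rye flour"]
percentages = {name: 100 * grams / total_flour
               for name, grams in recipe_g.items()}

assert total_flour == 500
assert percentages["water"] == 70.0  # i.e. 70% hydration
```

Scaling the recipe to any rising basket is then just multiplying the percentages by a new flour weight.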
> "The fact you can't evenly divide a metric unit in three is terrible."
The elegance of metric is that everything divides neatly into 10ths and powers of 10, which makes it simple to convert measurements from one scale to another, for whichever level of precision you require. Meters work for farming, centimeters work for houses, millimeters for woodworking, micrometers for machining.
Just as the common misconception of a 2x4" (really 1.5x3.5") becoming a bizarre 3.81x8.89cm is unrealistic and based on misconceptions, so is the idea of people somehow being unable to divide a metric measurement into thirds. Your pencil line is going to be thicker than any inaccuracy anyway, but the precision (repeatability) is going to be the same no matter which measuring system you use, as long as your measuring device is accurate. If you want a more accurate measurement and cut, you use a more accurate measuring device and scale.
I grew up learning and using metric for everything. Imperial may as well be an alien measurement from another planet, to me an inch is just an old-fashioned way of saying 2.54cm, and eyeballing a cm has literally never been an issue.
Regarding the kitchen cabinets, if you were to redefine 100mm as "the width of a kitchen cabinet", you would be optimizing for one specific use case over others. The entire point of a measurements system is that it is universal, repeatable and logical, not based on arbitrary units. 600mm is a perfectly good measurement, no harder to remember than 100mm or 20 kilometers or 250 grams.
Who needs easy fifths, other than musicians?
I had no previous experience with imperial measurements; my country has been metric for over a hundred years. But the majority of information on woodworking on the internet or in print comes from English speaking countries that use imperial units in woodworking almost exclusively.
So instead of trying to translate everything to metric I bought an imperial tape measure and a set of rulers. Since then I've switched most of my woodworking measurements to imperial, because it's just very convenient and makes much more sense when building items that will be used by humans. Chairs, tables, cabinets, chests, even guitars.
Walking is usually measured in pairs, many music systems in fours, many cooking measures in thirds, many art techniques in combinations of thirds and powers of two.
It’s also worth pointing out that, as far as I know, efforts to decimalize time have failed.
Sure, the French Republican Calendar "failed". But it "failed" not so much because it was base-10. Napoleon abolished it because of 2 main flaws:
(a) The decree that formalized the calendar contradicted itself when it came to determining leap years. (b) The beginning of the year/era was based on commemorating a historical event rather than observing the beginning of Winter or Spring, which would make adoption of the calendar hard.
Then again, traditional Chinese timekeeping was/is partly base-10. That system developed in its own right, separately from the systems that emerged throughout Europe.
Time keeping is a "hard problem" anyway. As a historian, dating a historic document can be a daunting task when you're confronted with different calendars.
I don't understand this at all, but I want to. Is it that we use pairs of steps sometimes (single paces seem just as useful)? Walk out and back? Walk in groups of two?
It probably helped that I controlled the constraints so I could use nice whole numbers.
You certainly might have a point. I might just be noticing the "romantic" feeling for how clean it all seemed to be.
I have no skin in this game - I grew up in England and America and literally don't care - just pointing out how weird the "2 by 4" terminology is.
The advantage I'm talking about is what comes from having 12 inches in a foot. If you're only using inches and not feet then there's obviously no advantage from that.
It's actually illuminating to trace the linguistic history:
- The 'hour' used to be the base unit for time
- Smaller intervals were called 'small parts', where a 'small part' is 1/60 of an hour (the sexagesimal equivalent to a 'decihour')
- Even smaller intervals were called 'second small parts', defined as 1/60 of a 'small part'
- Even smaller intervals were called 'third small parts', defined as 1/60 of a 'second small part'
- And so on
This was done by Latin speakers, whose phrase for 'small part' is "pars minuta". Hence 1/60 of an hour is a "pars minuta" ('minute' in English), and 1/60 of a pars minuta is a "pars minuta secunda" ('second' in English).
Metric/SI units take the "second" to be their base unit, and give us shorthand multipliers like "kilo". Tracing this back, we find that a kilosecond is 1000/3600 of an hour! Likewise we don't tend to use thirds or fourths anymore, instead using multipliers like "milli" and "nano".
I'm not aware of any shorthand multipliers for sexagesimal. In my blog post linked above I suggest "prota" for x60 and "defter" for x3600, so we could use "protasecond" instead of minute and "deftersecond" instead of hour (I also suggest abbreviating second to "sec", since it's no longer used as an ordinal number ("second small")).
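The chain of 1/60 subdivisions described above is easy to write out exactly; a small sketch using exact fractions (the variable names follow the historical terms):

```python
from fractions import Fraction as F

HOUR = F(1)  # work in hours

minute = HOUR / 60    # "pars minuta"
second = minute / 60  # "pars minuta secunda"
third  = second / 60  # the historical "third", rarely used now

kilosecond = 1000 * second

assert second == F(1, 3600)
assert kilosecond == F(1000, 3600)       # a kilosecond is 1000/3600 of an hour
assert kilosecond / minute == F(50, 3)   # i.e. 16 2/3 minutes
```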
The last thing I learned before this was the word coolth.
It just so happens that some break points are usefully close (but not quite the same as) base 10 break points.
2^10 = 1024 (1 KB) ≈ 1000
Given this 2.4% bonus over base 10, and the ability to cleanly store the corresponding base-10 quantity with room to spare, the natural thing to do is to favor the larger, native unit.
For me, KB will _always_ be 1024 bytes. In the context of computers this is what makes sense.
Similarly I only ever want those to be the binary (base 2, also implied by bytes) base sizes for all other units of storage, data transfer speed, etc. The marketing fixation on providing less value by focusing on the non-native to the system human sizes that are smaller is the error.
Counter-argument to anyone arguing otherwise. What is the size of the smallest addressable unit in any of the block based device storage you care about? Hard disks / flash? (512, 4096, or possibly some very large power of 2 for SMR). CDs/DVDs? (2048 bytes)
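The 2.4% gap at the kilo level, and how it compounds at larger prefixes, can be tabulated in a few lines (a sketch, assuming the usual 2^10n vs 10^3n definitions):

```python
# Gap between binary and decimal interpretations of each prefix.
for i, prefix in enumerate(["K", "M", "G", "T"], start=1):
    binary  = 1024 ** i
    decimal = 1000 ** i
    gap_pct = 100 * (binary - decimal) / decimal
    print(f"{prefix}: 2^{10*i} = {binary} vs 10^{3*i} = {decimal} (+{gap_pct:.1f}%)")
```

The gap grows from about 2.4% at kilo to roughly 10% at tera, which is why the ambiguity matters more for large drives than it ever did for floppies.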
The argument is pretty straightforward. Kilo, Mega, Giga, etc. are already well defined prefixes and it is simply confusing if we start to redefine them in some context.
I agree that hardware manufacturers are clearly taking advantage of this confusion to inflate their marketed memory sizes, but again, they are using the proper meaning of the prefixes. Maybe the solution would be to require them to use KiB, MiB or GiB, but certainly not to change the meaning of kilo.
#1 (256 Computer Science Mega Bytes) is the size of the minimum write zone for 'zoned storage' according to the Wikipedia page. https://en.wikipedia.org/wiki/Shingled_magnetic_recording#Pr...
Edit + Update: I recalled incorrectly, 128 CSMB "Sony's patent proposes going way beyond 32kB chunks to using 128MB chunks for the FTL" https://www.anandtech.com/show/15848/storage-matters-xbox-ps...
2^10B = 1024B = 1KB ~ 1kB = 1000B
as I understand it. And for MB or Mb the accuracy is not really interesting.
Edit: It should in fact be 1kB ~ 1000B
E.g. 1kg is not exactly 1000g, that would be 1.000kg.
But 1KB is meant to be exactly 1024B.
Offhand, and at a glance, none of the uppercase abbreviations correspond to a lowercase SI prefix. This is probably to make incorrect usage easier to detect.
In Computer Science / programming generally, the convention has been that an uppercase B indicates bytes, while a lowercase indicates bits.
Modern computing systems frequently address integer numbers of size 64b, 32b, 16b, 8b (8B, 4B, 2B, 1B respectively).
1 Gb (SI G bit, since if discussing bits telecom stuff is implied; I argue that this too is incorrect, but it's ephemeral, not storage. Still, I can't send less than a byte.)
1 GB (CS G Byte) storage somewhere, memory. Invariably a base 2 power since otherwise packing and transcription to accommodate re-packing would be a power-hungry nightmare.
The prefix kilo is lower case k. Hecto and deca are also lower case abbreviations.
Really? Why? I would have assumed that 1kg is in fact exactly 1000g, and that the trailing zeros in 1.000kg indicated a rounding to three decimal places: i.e. 999.5g <= 1.000kg < 1000.5g
Also, things started to go downhill pretty early on: 1.44MB floppy disks were 1440KB where 1KB was 1024.
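The "1.44MB" floppy is a nice illustration of the mixed convention: 1440 × 1024 bytes matches neither a purely decimal nor a purely binary megabyte. A quick check:

```python
floppy = 1440 * 1024  # "1.44MB" as actually defined: 1,474,560 bytes

decimal_mb = 1.44 * 1000**2  # a purely decimal 1.44 MB
binary_mb  = 1.44 * 1024**2  # a purely binary 1.44 MiB

assert floppy != int(decimal_mb) and floppy != int(binary_mb)
# so the floppy "megabyte" is really the mixed unit 1000 * 1024:
assert floppy == 144 * 1000 * 1024 // 100
```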
Ebdit: whoebver downbvotes tgbis canb hear mbe sbpeakinbg it
A lot of people don't like how they sound, and would like for the standard prefixes to have one meaning, but the reality of the state of the world is such that you will create ambiguity if you don't use the binary prefixes.
My particular set of anecdata would strongly suggest it's not a very successful standard so far. That "nearest power of two to decimal kilo/mega/giga" is a weird concept in itself doesn't help, and the chosen prefixes are completely non-obvious and alien to me and apparently others as well. I don't think it's going to take hold anytime soon.
Besides, if ambiguity is all there is to worry about, just use 2^30 bytes. No ambiguity at all (I believe) – but it may be a lot less effective in actual communication in lots of situations.
(SI: the International System of Units, abbreviated from the French Système international (d'unités) - https://en.wikipedia.org/wiki/International_System_of_Units)
((International System of Units) Mega) Bytes
-- Edit, addition
Additionally, I'd be willing to prefix my preferred units some variation of:
Base Two M B (BTMB)
Computer Science M B (CSMB)
Sure, the kilo-, mega- etc prefixes might literally mean 10^x but as long as one is raised with the understanding that to do so is overly literal, then it’s never really a problem. base = 2 if context == computer else 10
Kibi and mebi feel like this generation’s centripetal force. Another popular and unsolicited explanation I get a lot is about equality vs equity. The distinction might be real, but after being corrected for the nth time, one feels like one is being corrected more for the sake of it than for any real reason.
The manufacturers of memory and hard drives seem to have always had a problem with that. They’ve preferred to use KB=1000 bytes exactly, MB=10^6 bytes, GB=10^9 bytes, etc. for as long as I’ve been buying computers, which is long enough to remember when computers didn’t come with MBs.
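This is why a drive marketed with decimal prefixes shows up smaller in an OS that reports binary units; a hypothetical illustration with a "500 GB" drive:

```python
advertised_gb = 500                     # manufacturer: 500 * 10^9 bytes
bytes_total   = advertised_gb * 1000**3

reported_gib = bytes_total / 1024**3    # an OS reporting in GiB
print(f"{advertised_gb} GB advertised -> {reported_gib:.2f} GiB reported")

# Same number of bytes, two labels roughly 7% apart at the giga level.
assert 465 < reported_gib < 466
```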
I can handle the base2 / base10 problem by using context. I don’t need a clumsy new prefix.
If someone on the street asked you how much memory your computer has, how would you answer? “Seven gigs”? “Seven gibs”? “Seven gibibytes”?
I feel numbers are a much more powerful abstraction than any particular unit, so when there is friction like this between conventions, I prefer to favor the number. Giga is 10⁹ and gibi is 2³⁰ whatever the actual unit behind it.
The underlying storage cell arrangement is still (typically) 4096 bytes in size. (or 65536 (64KB) for many embedded devices).
The device probably has about 10% of its real storage capacity dedicated to ECC and other 'administrative' uses for provisioning the user-facing storage.
The talk of weird sounding binary prefixes and long scales reminds me of the British long scale for large numbers, which has different names for what we (short-scale users) call ‘billion’ and ‘quadrillion’: “milliard” and “billiard” (not to be confused with pool.) https://en.wikipedia.org/wiki/Long_and_short_scales
I think you can perceive the different factor relationship between the dimensions of timber, fittings and spacing. Although I'm not entirely certain that I'm not picking up other cultural habits in Australian and American building styles. Should I ever ascend the throne, I shall have an imperial palace (naturally) and a metric one built to the same plan to verify my intuition
edit: Words are hard
my two favourite moments in a documentary are #1 in 'the smashing machine' when you glimpse the boot of the bad girlfriend in the corner, and feel before you know that everything is about to go bad
and #2 in the documentary 'Jean Nouvel: Reflections' about the architect of the same name, where you see him at work, in a dark cozy restaurant at a table surrounded by staff/devotees swirling a full glass of red wine, and he's looping loose enigmatic squiggles on wine stained napkins which he then passes out, a sacred relic, one to each person at the table.
And off they go and make the buildings that bear his name.
This implies those Roman Numerals we learned in school are wrong.
Until 1400s or 1800s?
EDIT: For English
It was only recently that I learned the spread of Arabic numerals was so recent.
Curiously, the abacus was in use in Europe for centuries before written Arabic numerals were. So, the concept of a placeholder zero was in practical use long before it got a name.
It is claimed that our numbers are big-endian because they were written in Arabic, embedded in right-to-left text - so, in reading order, little-endian. But we picked up the Arabic visual representation without that meaning. All the nuisance of right-justifying numbers to line them up by place (now automated, but it used to be a huge hassle on mechanical typewriters) could have been avoided by adopting the same endianness relative to our text.
But, I don't know what the original Indian form was, either the digit order or the text. It predated the advent of Arabic usage in India by some centuries.
And at one time the British spoke German? Fascinating. I always thought the single biggest language influence on British English was French (because William the Conqueror), and that pre-Conquest indigenous British was a mix of Celtic, Roman, and Norse from the Vikings that's more-or-less unique. How'd they end up speaking German?
"England" is named after the Angles, one of the Germanic tribes that showed up and settled down in England after Roman rule ended. They brought their own proto-Germanic language with them and displaced the local languages- and to some extent people too. French was a comparatively small addition, it contributed a lot of vocabulary and spelling and so on, less so grammar. The Norman invasion was not a lot of actual people- enough to staff up a new aristocratic class speaking French, not enough to displace the local language or people so much.
A bunch of French words were grafted on recently (not in 1066 but over the following couple of centuries), but none of the grammar came with it.
As for German-speaking people in England specifically: well, George I and George II spoke German (and French), not English. George III was the first Hanoverian king to speak English. Not surprisingly, he was mad :-)
> How'd they end up speaking German?
You've probably heard of Anglo Saxons? After the Roman withdrawal came the Anglo Saxon invasion from Germany and Old German became the predominant language of England, even the name comes from these invaders (Angles -> Angle Land -> England). After that the Vikings and French added their layers to varying degrees, Celtic languages remained dominant in Ireland, Scotland and Wales until much later.
You mean Old English or Anglo-Saxon. Also, they came not just from what became northern Germany but also from what is now Denmark.
German grammar is pretty easy to pick up for English speakers. Lots of sentences are constructed the same way in German and English.