Long Hundred (wikipedia.org)
97 points by luu 8 months ago | 106 comments



120 is divisible by a lot of numbers, which is convenient for many calculations. There's a similar reasoning behind 24 hour days, 60 minute hours, and 360 degrees in a circle.

https://en.wikipedia.org/wiki/Highly_composite_number

https://en.wikipedia.org/wiki/Sexagesimal
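
To make the "lots of divisors" point concrete, here's a quick Python sketch (nothing fancy) counting divisors:

    # Count divisors to see why 120 and 360 are such convenient bases.
    def divisors(n):
        return [d for d in range(1, n + 1) if n % d == 0]

    for n in [100, 120, 360]:
        print(n, len(divisors(n)))  # 100 -> 9, 120 -> 16, 360 -> 24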


I laughed a lot at imperial until I did a lot of woodworking. 12 is evenly divisible by 1, 2, 3, 4, 6, and 12. What a great number. Now I only laugh a little.


Base-10 is rather incidental to metric: we can have a "kilofoot" just as easily as "a dozen metres".

The main benefits of metric (actually SI) are:

- One unit for each dimension, e.g. all distances are in metres.

- The conversion factor between dimensions is 1, e.g. 1 N·m = 1 J, 1 Pa = 1 N/m^2, etc.

We often use prefixes like "centi" or "kilo", but they're just multipliers rather than separate units (SI calls them "prefixes").

The main problems with metric are:

- Still some redundancy, e.g. litres, tonnes, hours, etc. These are avoided by SI.

- The base unit of mass is confusingly called the "kilogram". Hence any conversion factor involving grams will get a 1/1000, since the gram is 1/1000 of the base unit.

I wrote about this at http://chriswarbo.net/blog/2020-05-22-metric_red_herring.htm...


I'm a lifelong imperial-ist but switched to metric after getting into woodworking. It was too tempting to do math in my head like "This piece is 13 & 3/8ths, I need to subtract 13/32nds and then find the midpoint of that..." and then screw it all up at some point.

Now it is just "This piece is 340mm, I need to subtract 9mm and find the midpoint: 116mm": relatively easy peasy.


You might consider switching back to imperial... (165.5mm)


That's not a problem with imperial (13+3/8 mm and 13/32 mm have the same problem); it's a problem with fractions (especially 'proper' fractions, which should never be used for anything ever, since they combine all the downsides of both decimals and rational numbers, with no benefit). Normalize everything to, say, 1/64 inch units and "This piece is 856/64in, I need to subtract 26/64in and then find the midpoint of that: (856-26)/2 = 415, so it's 415/64in." works fine.
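
For what it's worth, Python's exact rational type makes this normalize-and-halve arithmetic trivial to check; a minimal sketch using the numbers above:

    from fractions import Fraction

    # 13 3/8 in, minus 13/32 in, then halved; everything stays exact.
    piece = Fraction(13) + Fraction(3, 8)  # 856/64 in
    cut = Fraction(13, 32)                 # 26/64 in
    print((piece - cut) / 2)               # 415/64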


> Normalize everything to, say, 1/64 inch units

If you "normalize everything" you get the metric system (up to a scaling factor): all distances are expressed in metres, all energies in Joules, all forces in Newtons, all pressures in Pascals, etc. Note that the latter are just naming conventions (for kgm^2/s^2, kgm/s^2 and kg/ms^2, respectively).

This can lead to some unwieldy numbers, e.g. "turn left after four million 1/64 inches", so naming conventions are often used, like "mega" for "million".


True, but 1/2^n fractions are highly encouraged only with imperial in my experience, and my problem comes in that my different measuring instruments give me different levels of precision, ranging from 1/8" if I'm using a big measuring stick, to 1/16" on my tape measure, to 1/32" on my embedded measure, to 1/128" on my calipers. And then there are times when you are given measurements in tenths of an inch which feels natural for most (13.2" long), and that gets ugly...

Probably if I was a better woodworker I would have a solution for this without switching to metric.


Are the conversions to normalised 'proper' fractions and back to conventional improper fractions easier to do in your head, or should one just use mm?


> 13+3/8 mm and 13/32 mm have the same problem

Nobody who grew up with metric uses such measurements.

Outside of the US, metric measurements are usually to the millimeter (add decimal places for serious CNC etc work), not to some odd fractions.

The equivalent math is what they said: (340mm-9mm)/2. No weird fractions.


Er, yes, that was my point: the problem is with the fractions (and especially though not exclusively the 'proper' fractions), not with the choice of metric vs imperial for units.


Imperial explicitly endorses the use of all the weird fractions, and measuring tenths of an inch with a measuring tape is simply not a thing that would ever happen in the US.

I hate converting between 3/8ths and 7/32nds all the time. It's such a stupid system, and I miss metric.


Having grown up in the UK I was taught metric for everything, but there is plenty of exposure to imperial from older generations. The one thing I've come to realise about imperial is, it's much more "human". Centimetres, kilograms etc. seem completely arbitrary and unrelatable. Even people my age who learnt metric can eyeball "about an inch", but "about a centimetre"? It's too small to be useful, and being smaller means your error percentage is way higher. Similarly, some recipes are expressed really neatly in imperial. Shortcrust pastry to fill a 9-inch pan? 5 ounces of flour and 4 ounces of butter. Good luck trying to remember the metric equivalent. Same with Celsius vs Fahrenheit. The former puts landmarks at the freezing and boiling of water. OK, freezing is useful, but who actually cares about the boiling point of water? It boils when it boils. Fahrenheit instead gives you a scale from "bloody cold" to "bloody hot".

The choice of base 10 units is also completely arbitrary. So we have ten fingers? So what? That's not useful. Metric takes the view that everyone will work with arbitrary real number measurements, but we don't. We naturally use fractions. The fact you can't evenly divide a metric unit in three is terrible. If you look at standard sizes for, say, kitchen units in Europe, they actually are multiples of 6. A unit is 600mm and therefore divisible by 2, 3, 4, 5, 6, 8, 10 and 12. Wouldn't it be so much neater if we could just write 100mm instead of 600mm? We know humans can remember more than ten symbols because our alphabet is larger. Sigh... I suppose having one standard is better than a few competing standards or no standard at all.


> "Shortcrust pastry to fill a 9-inch pan? 5 ounces of flour and 4 ounces of butter"

Useful mnemonic, but completely arbitrary. If you cook/bake regularly, you know the standard amounts used. I know the amounts needed for a bread that fits into my rising basket, starting with 450g white flour and 50g rye flour, 350g water and so on. If you decide to go more advanced, you use baker's percentages anyway, which of course map neatly to a base-10 system.
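
For anyone unfamiliar, a baker's percentage is just each ingredient's weight as a fraction of total flour weight; a tiny sketch with the amounts above:

    # Baker's percentages: everything relative to total flour weight.
    flour = 450 + 50  # white + rye, grams
    water = 350
    print(f"hydration: {water / flour:.0%}")  # hydration: 70%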

> "The fact you can't evenly divide a metric unit in three is terrible."

The elegance of metric is that everything divides neatly into 10ths and powers of 10, which makes it simple to convert measurements from one scale to another, for whichever level of precision you require. Meters work for farming, centimeters work for houses, millimeters for woodworking, micrometers for machining.

Just as the idea of a 2x4" (really 1.5x3.5") becoming a bizarre 3.81x8.89cm is an unrealistic misconception, so is the idea of people somehow being unable to divide a metric measurement into thirds. Your pencil line is going to be thicker than any inaccuracy anyway, but the precision (repeatability) is going to be the same no matter which measuring system you use, as long as your measuring device is accurate. If you want a more accurate measurement and cut, you use a more accurate measuring device and scale.

I grew up learning and using metric for everything. Imperial may as well be an alien measurement from another planet, to me an inch is just an old-fashioned way of saying 2.54cm, and eyeballing a cm has literally never been an issue.

Regarding the kitchen cabinets, if you were to redefine 100mm as "the width of a kitchen cabinet", you would be optimizing for one specific use case over others. The entire point of a measurements system is that it is universal, repeatable and logical, not based on arbitrary units. 600mm is a perfectly good measurement, no harder to remember than 100mm or 20 kilometers or 250 grams.


If humans had six fingers per hand as we should, we'd use a base-12 system with the same scaling properties and divisibility into halves, thirds, quarters, sixths, and twelfths.

Who needs easy fifths, other than musicians?


On reading this I thought, "hmm, I can eyeball a centimetre just fine." And then I realized that I do that by eyeballing an inch and taking a bit less than half of it.


A centimetre is about the width of my pinky finger. And, an inch is about the width of my thumb.


That is exactly my experience with imperial in woodworking!

I had no previous experience with imperial measurements; my country has been metric for over a hundred years. But the majority of information on woodworking, on the internet or in print, comes from English-speaking countries that use imperial units in woodworking almost exclusively. So instead of trying to translate everything to metric I bought an imperial tape measure and a set of rulers. Since then I've switched most of my woodworking measurements to imperial, because it's just very convenient and makes much more sense when building items that will be used by humans. Chairs, tables, cabinets, chests, even guitars.


On the other hand, in metalworking and precision machining, decimals are used again - a common resolution is 1/1000th of an inch (0.001"), or a "thou".


In the US we call that a "mil". I have not encountered "thou".


I’m all for metric for standardization but I do think something is lost with 10-based systems.

Walking is usually measured in pairs, many music systems in fours, many cooking measures in thirds, many art techniques in combinations of thirds and powers of two.

It’s also worth pointing out that as far as I know efforts to 10-base time have failed.


> It’s also worth pointing out that as far as I know efforts to 10-base time have failed.

Sure, the French Republican Calendar "failed". But it "failed" less because it was base-10 than because of 2 main flaws, for which Napoleon abolished it:

(a) The decree that formalized the calendar contradicted itself when it came to determining leap years. (b) The beginning of the year/era was based on commemorating a historical event rather than observing the beginning of Winter or Spring, which made adoption of the calendar hard.

https://en.wikipedia.org/wiki/French_Republican_calendar

Then again, traditional Chinese timekeeping was/is partly base-10. That system developed in its own right, separately from the systems that emerged throughout Europe.

https://en.wikipedia.org/wiki/Traditional_Chinese_timekeepin...

Time keeping is a "hard problem" anyway. As a historian, dating a historic document can be a daunting task when you're confronted with different calendars.


Complete tangent, but I am currently listening to the excellent "Age of Napoleon" podcast, and the host suggests that the abolition of the calendar by Napoleon was (a) basically ratifying what was already widely done in society and (b) a symbolic gesture to assuage conservatives and Catholics.


> Walking is usually measured in pairs

I don't understand this at all, but I want to. Is it that we use pairs of steps sometimes (single paces seem just as useful)? Walk out and back? Walk in groups of two?


I hear this a lot as an abstract statement but can you give a real world example of where you personally have used that fact? I've struggled to think of any way it would be useful. Dividing by 2 is useful for finding a center, but obviously you can do that just as easily in metric.


Built a deck and was doing a ton of dividing to break my 8' and 12' boards down to the hundreds of smaller pieces I needed for stairs, railings, etc.

It probably helped that I controlled the constraints so I could use nice whole numbers.

You certainly might have a point. I might just be noticing the "romantic" feeling for how clean it all seemed to be.


Can you be more specific? I can't imagine needing to cut a longer piece into a known number of smaller pieces, rather than smaller pieces of a known size. Even if you do, won't the nice numbers be lost by having to account for kerf width? I've also built a deck and choosing the quantity of material to buy probably involved some dividing to see how many pieces I can get from a big one. But I would add a rough estimate for kerf width and length error. Dividing metric materials is often easy because they're sized in easily divisible numbers like 2.4, 3.6, or 4.8 m.
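
If it helps, the kerf bookkeeping is one line of arithmetic; a hedged sketch (the 3mm blade width here is an assumption, not a standard):

    # Cutting a board into n equal pieces loses (n - 1) kerf widths.
    def piece_length(board_mm, n, kerf_mm=3.0):  # 3mm kerf is assumed
        return (board_mm - (n - 1) * kerf_mm) / n

    print(piece_length(2400, 3))  # 798.0mm per piece from a 2.4m board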


Woodworking could be a mix of divisibility and industry specific product sizing. Dividing a 2" by 4" into pieces seems easier than dividing a 50.8 mm × 101.6 mm.


Except a 2"x4" is actually a 1.5"x3.5"

[edit] I have no skin in this game - I grew up in England and America and literally don't care - just pointing out how weird the "2 by 4" terminology is


My understanding was that those were meant to be the dimensions prior to planing . . .


Agreed, but that's a heck of a margin to plane off.


The board is 2"x4" before drying; after drying it will usually have cupped and twisted slightly, so the planing really is bringing the sides back into plane, which takes more removal than if it were just to smooth them.


You mean the advantage is in the fact that materials come in convenient imperial measurements? In metric countries, they come in convenient metric sizes too, so that's no real advantage when everyone else cooperates. Obviously, nobody uses 50.8 x 101.6 mm timber unless they've converted from inches, so without inches, it wouldn't exist.

The advantage I'm talking about is what comes from having 12 inches in a foot. If you're only using inches and not feet then there's obviously no advantage from that.


I'm developing a big-pixel retro platformer game. Rather than the traditional power-of-two tile sizes, I ended up going for 36x36 tiles. It's much easier to draw repeating patterns when tiles can be evenly divided into halves, thirds, quarters, sixths, and so on.


It would fit neatly with the hardware of the 1960s, which was often 36-bit.


Seems like the best of all worlds would be a duodecimal (base-12) system of units, along with duodecimal notation and two extra digits to notate it. But I can't see us switching any time soon!
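
For the curious, converting to duodecimal is a few lines; this sketch uses X for ten and E for eleven, one common convention (the dozenal societies have their own glyphs):

    DIGITS = "0123456789XE"  # X = ten, E = eleven (one convention)

    def to_dozenal(n):
        out = ""
        while n:
            n, r = divmod(n, 12)
            out = DIGITS[r] + out
        return out or "0"

    for n in [12, 144, 600]:
        print(n, to_dozenal(n))  # 12 -> 10, 144 -> 100, 600 -> 420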


Nah we should go metric on time. 20 new-hours a day, 100 new-minutes per new-hour.


The existing system is perfectly rational, although using base-60 ( https://en.wikipedia.org/wiki/Sexagesimal ) rather than base-10 (decimal). We can use the ideas of metric without base-10 (see http://chriswarbo.net/blog/2020-05-22-metric_red_herring.htm... ); and I would argue that base-60 is actually better, since it has many factors (see http://chriswarbo.net/blog/2020-05-22-improving_our_units.ht... ).

It's actually illuminating to trace the linguistic history:

- The 'hour' used to be the base unit for time

- Smaller intervals were called 'small parts', where a 'small part' is 1/60 of an hour (the sexagesimal equivalent to a 'decihour')

- Even smaller intervals were called 'second small parts', defined as 1/60 of a 'small part'

- Even smaller intervals were called 'third small parts', defined as 1/60 of a 'second small part'

- And so on

This was done by Latin speakers, whose phrase for 'small part' is "pars minuta". Hence 1/60 of an hour is a "pars minuta" ('minute' in English), and 1/60 of a pars minuta is a "pars minuta secunda" ('second' in English).

Metric/SI units take the "second" to be their base unit, and give us shorthand multipliers like "kilo". Tracing this back, we find that a kilosecond is 1000/3600 of an hour! Likewise we don't tend to use thirds or fourths anymore, instead using multipliers like "milli" and "nano".

I'm not aware of any shorthand multipliers for sexagesimal. In my blog post linked above I suggest “prota” for x60 and “defter” for x3600, so we could use "protasecond" instead of minute and "deftersecond" instead of hour. (I also suggest abbreviating second to "sec", since it's no longer used as an ordinal number ("second small").)
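
The sexagesimal chain above is just repeated divmod by 60; a small sketch:

    # Peel a raw count of seconds into 60-based places:
    # hours ("deftersecond"), minutes ("protasecond"), seconds.
    def to_sexagesimal(total_seconds):
        minutes, seconds = divmod(total_seconds, 60)
        hours, minutes = divmod(minutes, 60)
        return hours, minutes, seconds

    print(to_sexagesimal(5025))  # (1, 23, 45)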


Bring back the French Revolutionary clock!


And I thought time libraries were a mess already...


This is the kind of thing I love to use in conversation with my girlfriend because it annoys her. Basically anything that sounds off, or odd but is technically correct.

The last thing I learned before this was the word coolth.


To throw further confusion into it, both hundred[1] and centum[2] came from the same Proto-Indo-European word *ḱm̥tóm.

1. https://en.wiktionary.org/wiki/hundred#Etymology

2. https://en.wiktionary.org/wiki/centum#Latin


It’s a shame we didn’t go with “long megabyte,” etc. instead of the god-awful binary unit prefixes.


The native base of our computing systems is base-2. Engineering prefixes for this base should thus also be based on natively divisible break points.

It just so happens that some break points are usefully close (but not quite the same as) base 10 break points.

2^10 = 1024 ~~ (1 KB) ~~ 1000

Given this 2.4% bonus over base 10, and that the native unit "fails upwards" by being able to cleanly store the corresponding base-10 quantity, the natural thing to do is to favor the larger, native unit.

For me, KB will _always_ be 1024 bytes. In the context of computers this is what makes sense.

Similarly, I only ever want those to be the binary (base-2, also implied by bytes) sizes for all other units of storage, data transfer speed, etc. The marketing fixation on the smaller, human-oriented sizes that are not native to the system, and thus provide less value, is the error.

Counter-argument to anyone arguing otherwise. What is the size of the smallest addressable unit in any of the block based device storage you care about? Hard disks / flash? (512, 4096, or possibly some very large power of 2 for SMR). CDs/DVDs? (2048 bytes)
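
The two conventions side by side, as a minimal sketch (just the arithmetic, no claim about which is "right"):

    # Format a byte count under decimal (1000) or binary (1024) prefixes.
    def fmt(n, base, units):
        x = float(n)
        for u in units:
            if x < base:
                return f"{x:.2f} {u}"
            x /= base
        return f"{x:.2f} {units[-1]}"

    n = 500_000_000_000  # a "500 GB" drive
    print(fmt(n, 1000, ["B", "kB", "MB", "GB", "TB"]))      # 500.00 GB
    print(fmt(n, 1024, ["B", "KiB", "MiB", "GiB", "TiB"]))  # 465.66 GiB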


> Counter-argument to anyone arguing otherwise.

The argument is pretty straightforward. Kilo, Mega, Giga, etc. are already well defined prefixes and it is simply confusing if we start to redefine them in some context.

I agree that hardware manufacturers are clearly taking advantage of this confusion to inflate their marketed memory sizes, but again, they are using the proper meaning of the prefixes. Maybe the solution would be to require them to use KiB, MiB or GiB, but certainly not to change the meaning of kilo.


Sure, but megabytes and terabytes are just arbitrary numbers. There’s nothing where they correspond to a meaningful device limitation. KB should be 2^10 bytes and MB should be 1e6 bytes. There’s no reason to link them.


2^28 bytes / 256 MB / CSMB (#1) is the zone size of current SMR spinning rust drives. The flash zone size used in more modern consoles (e.g. PS5 / Xbox whatever) might similarly be 128 MB / CSMB.

#1 (256 Computer Science Mega Bytes) is the size of the minimum write zone for 'zoned storage' according to the Wikipedia page. https://en.wikipedia.org/wiki/Shingled_magnetic_recording#Pr...

Edit + Update: I recalled incorrectly, 128 CSMB "Sony's patent proposes going way beyond 32kB chunks to using 128MB chunks for the FTL" https://www.anandtech.com/show/15848/storage-matters-xbox-ps...


> 2^10 = 1024 ~~ (1 KB) ~~ 1000

2^10B = 1024B = 1KB ~ 1kB = 1000B

as I understand it. And for MB or Mb the accuracy is not really interesting.

Edit: It should in fact be 1kB ~ 1000B

E.g. 1kg is not exactly 1000g, that would be 1.000kg.

But 1KB is meant to be exactly 1024B.


Base-10 engineering notation uses SI prefixes. For these, a set of letters exists, with upper case indicating positive exponents (large numbers) and lower case indicating negative exponents (small parts of one unit): https://en.wikipedia.org/wiki/International_System_of_Units#...

Offhand, and at a glance, none of the upper case correspond to a lower-case prefix. This is probably to detect incorrect usage.

In Computer Science / programming generally, the convention has been that an uppercase B indicates bytes, while a lowercase indicates bits.

Modern computing systems frequently address integer numbers of size 64b, 32b, 16b, 8b (8B, 4B, 2B, 1B respectively).

Examples:

1 Gb (SI G bit, since if discussing bits, telecom stuff is implied; I argue that this too is incorrect, but it's ephemeral, not storage. Still, I can't send less than a byte.)

1 GB (CS G Byte) storage somewhere, memory. Invariably a base 2 power since otherwise packing and transcription to accommodate re-packing would be a power-hungry nightmare.


> upper-case indicating positive exponents (large numbers) and lower-case indicating negative exponents

The prefix kilo is lower case k. Hecto and deca are also lower case abbreviations.


It’s highly context dependent if 1GB means exactly 2^30 bytes or another number in the neighborhood of 1GB.


> 1kg is not exactly 1000g, that would be 1.000kg

Really? Why? I would have assumed that 1kg is in fact exactly 1000g, and that the trailing zeros in 1.000kg indicated a rounding to three decimal places: i.e. 999.5g <= 1.000kg < 1000.5g


I may have used ‘exactly’ sloppily here. What I mean is: when the context doesn’t say otherwise, one should assume accuracy or precision as rounding to the least significant digit given. I.e. by default 1kg may indicate a mass between 500g and 1,499g. If the accuracy is about 10g you should write 1.00kg.


It's 2.4% for 1KB, but it's 10% for 1TB.

Also, things started to go downhill pretty early on: 1.44MB floppy disks were 1440KB, where 1KB was 1024 bytes.
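
The drift per prefix step is easy to eyeball with a quick sketch:

    # Relative gap between binary and decimal prefixes grows per step.
    for p in range(1, 5):
        print(f"{1024**p / 1000**p - 1:.1%}")  # 2.4%, 4.9%, 7.4%, 10.0%

    # The floppy's mongrel unit: 1440 * 1024 bytes, which is neither
    # 1.44e6 nor 1.44 * 2**20.
    print(1440 * 1024)  # 1474560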


Mibi you just don't like them because they are new? You should gibi them a chance.


I reabd thbis wholbe combment wib mby toumbge imb mby teebth

Ebdit: whoebver downbvotes tgbis canb hear mbe sbpeakinbg it


Thou shalt not speak ebcdic!


In my opinion the most important factor by far is avoiding ambiguity. This means sucking up to the binary prefixes and using them in every scenario possible.

A lot of people don't like how they sound, and would like for the standard prefixes to have one meaning, but the reality of the state of the world is such that you will create ambiguity if you don't use the binary prefixes.


In my experience, they're just as likely to be read as a weird typo of mega/giga as not, and I've seen SSDs advertised using the wrong prefixes a lot, so there's another layer of ambiguity we wouldn't have if those prefixes had been picked with relative laypeople in mind. Even people who should know better use 1024 for a kilobyte, and I usually add (1024^3) or something like that if ambiguity is a potential issue. I've been burnt before by trusting that gibi will be understood.

My particular set of anecdata would strongly suggest it's not a very successful standard so far. That "nearest power of two to decimal kilo/mega/giga" is a weird concept in itself doesn't help, and the chosen prefixes are completely non-obvious and alien to me and apparently others as well. I don't think it's going to take hold anytime soon.

Besides, if ambiguity is all there is to worry about, just use 2^30 bytes. No ambiguity at all (I believe) – but it may be a lot less effective in actual communication in lots of situations.


Preferably "megabyte" and "short megabyte", because that will annoy HDD manufacturers more.


Maybe the prefix and separation: SI M B

(SI here means the International System of Units, abbreviated from the French Système international (d'unités): https://en.wikipedia.org/wiki/International_System_of_Units)

((International System of Units) Mega) Bytes

-- Edit, addition

Additionally, I'd be willing to prefix my preferred units some variation of:

Base Two M B (BTMB)

Computer Science M B (CSMB)


Does anyone actually have a problem with kB meaning 1024 bytes and MB meaning 1<<20 bytes, etc?

Sure, the kilo-, mega- etc prefixes might literally mean 10^x but as long as one is raised with the understanding that to do so is overly literal, then it’s never really a problem. base = 2 if context == computer else 10

Kibi and mebi feel like this generation’s centripetal force. Another popular and unsolicited explanation I get a lot is about equality vs equity. The distinction might be real, but after being corrected for the nth time it feels more like I am being corrected for the sake of it than for any real reason.


> Does anyone actually have a problem with kB meaning 1024 bytes and MB meaning 1<<20 bytes, etc?

The manufacturers of memory and hard drives seem to have always had a problem with that. They’ve preferred to use KB=1000 bytes exactly, MB=10^6 bytes, GB=10^9 bytes, etc. for as long as I’ve been buying computers, which is long enough to remember when computers didn’t come with MBs.


People don't use it like you say. My SSD that says "500 GB" on the label is actually only 500e9 bytes or 465 gibibytes. If I see a file size listed somewhere, I never know which version of the unit it's using so I tend to think in terms of bytes anywhere that the difference matters.


Context is everything and I’m fine with that.

I can handle the base2 / base10 problem by using context. I don’t need a clumsy new prefix.

If someone on the street asked you how much memory your computer has, how would you answer? “Seven gigs”? “Seven gibs”? “Seven gibibytes”?


Perhaps you have a clearer picture of when GB is a power of ten and when it's a power of 2. Now I think of it, maybe it's only hard drives that use powers of ten? Hmm, what about SD cards? Internet bandwidth? Anything that might be labeled for marketing purposes? To me it's still a blur.


> I can handle the base2 / base10 problem by using context. I don’t need a clumsy new prefix.

I feel numbers are a much more powerful abstraction than any particular unit, so when there is friction like this between conventions, I prefer to favor the number. Giga is 10⁹ and gibi is 2³⁰, whatever the actual unit behind it.


The shrinkage is all just Sales and Marketing ignoring engineering sensibilities in the quest for a bigger number on the box.

The underlying storage cell arrangement is still (typically) 4096 bytes in size. (or 65536 (64KB) for many embedded devices).

The device probably has about 10% of its real storage capacity dedicated to ECC and other 'administrative' uses for provisioning the user-facing storage.


The reason doesn't matter. At the end of the day, the prefixes are used inconsistently, so you can't just treat GB as meaning GiB whenever it's to do with computers, and the units have become a confused muddle.


What’s bad about the binary prefixes? Does it help if you think about how it’s designed to be “MEga BInary” and “GIga BInary”? Or is that the problem?

The talk of weird sounding binary prefixes and long scales reminds me of the British long scale for large numbers that has different names for what we know as ‘million’ and ‘billion’: “milliard” and “billiard” (not to be confused with pool.) https://en.wikipedia.org/wiki/Long_and_short_scales


Knuth once suggested "large megabyte" (and so on) with a wicked abbreviation "MMB" (for "both binary-ness and large-ness").



This is the basis for customary nail sizes, as it's (for example) how many 6d nails you could get for 6 pennies in medieval England. https://en.wikipedia.org/wiki/Penny_(unit)


I'm of the opinion - though not 100% confident - that you can tell the difference between houses built with metric and imperial materials.

I think you can perceive the different factor relationships between the dimensions of timber, fittings and spacing. Although I'm not entirely certain that I'm not picking up other cultural habits in Australian and American building styles. Should I ever ascend the throne, I shall have an imperial palace (naturally) and a metric one built to the same plan to verify my intuition.


With CAD being what it is, if you have them built to the same plan the engineers will just convert with fairly good precision. What you need is a metric architect and an imperial architect to do plans based on the drawing of a pre-numerate child or (better) a tasteful adult from a culture without measurement units who won't have internalized the visual languages of metric or imperial ratios.

edit: Words are hard


> pre-numerate child or (better) a tasteful adult

my two favourite moments in a documentary are #1 in 'The Smashing Machine', when you glimpse the boot of the bad girlfriend in the corner, and feel before you know that everything is about to go bad

and #2 in the documentary 'Jean Nouvel: Reflections' about the architect of the same name, where you see him at work, in a dark cozy restaurant at a table surrounded by staff/devotees swirling a full glass of red wine, and he's looping loose enigmatic squiggles on wine stained napkins which he then passes out, a sacred relic, one to each person at the table. And off they go and make the buildings that bear his name.


This is relevant in my field (Waste Management) as there's the long ton, the short ton, and long and short hundredweight (CWT). All different and confusing!


Tangentially, this reminds me of this question about the names of the decades: https://ell.stackexchange.com/questions/107437/there-are-70s...


> Within the original Latin text, the numeral c. is used for a value of 120: Et quodlibet c. continet vi. xx. ("And each such 'hundred' contains six twenties.")[2]

This implies those Roman Numerals we learned in school are wrong.


We have a lot of place names in Scandinavia and the former Danish Viking colonies, and apparently in Pennsylvania and New Jersey, reminiscent of this. A "hundred" is a geographical entity that can deliver a long hundred men to the armed forces.


> prior to the 15th century

Until 1400s or 1800s?


There are also long billions and trillions, still in common use: https://en.wikipedia.org/wiki/Billion


I think I would debate “in common use”, but “was in use until very recently” seems reasonable

EDIT: For English


Maybe you are restricting the discussion to American and British English, which I am not very qualified to argue about, but they are in common use in my native Finnish and many other European languages.


Yeah in most common forms of English you will likely go your entire life without ever hearing a single person use the long scale forms. Many of the words in other languages that used to translate to the (long) "million", "billion", etc. now actually translate to the esoteric "milliard", "billiard" in English.



> The reckoning by long hundreds waned as Arabic numerals ... spread throughout Europe during and after the 14th century

It is only recently that I learned the spread of Arabic numerals was so recent


Given the barriers to communication back then, it feels like it didn't take them all that long to make their way to Europe. The numeral system seems to have been codified in India by 700, then extended by the Arabs to decimals and fractions by 900, then began making its way into Europe by 1000. The callout to the 14th century might just be because that's shortly before the printing press was developed, which caused the dissemination of knowledge to really start kicking off in general.


The spread of decimal numerals in the 14th century had less to do with the printing press and more to do with growing commerce and increasingly powerful states -- both of which necessitated performing arithmetic on larger quantities.


The explanation I read was that accounting ledgers were the first big market for printed books after the Bible, and since they were printed ruled with columns, numbers had to fit in the columns, and so became decimal. The notion that commerce was small before 1300 does not seem to me to withstand scrutiny.

Curiously, the abacus was in use in Europe for centuries before written arabic numerals were used. So, the concept of place-zero was in practical use long before it got a name.

It is claimed that our numbers are big-endian because they were written, in Arabic, right-to-left embedded in right-to-left text, making them little-endian there. But we picked up the Arabic visual representation without the meaning. All the nuisance of right-justifying numbers to line them up by place (now automated, but it used to be a huge hassle on mechanical typewriters) could have been avoided by adopting the same endianness relative to our text.

But, I don't know what the original Indian form was, either the digit order or the text. It predated the advent of Arabic usage in India by some centuries.


So wait. The English once used C to mean what we now notate as "120" and the Romans used C to mean "100"? Wouldn't that have been super confusing when the English sent trade ships to mainland Europe? Especially in the days when England was a Roman colony?

And at one time the British spoke German? Fascinating. I always thought the single biggest language influence on British English was French (because William the Conqueror), and that the pre-Conquest indigenous British language was a mix of Celtic, Roman, and Norse from the Vikings that's more-or-less unique. How'd they end up speaking German?


As I understand it, the various bits of medieval England could barely agree on units among each other.

"England" is named after the Angles, one of the Germanic tribes that showed up and settled down in England after Roman rule ended. They brought their own proto-Germanic language with them and displaced the local languages- and to some extent people too. French was a comparatively small addition, it contributed a lot of vocabulary and spelling and so on, less so grammar. The Norman invasion was not a lot of actual people- enough to staff up a new aristocratic class speaking French, not enough to displace the local language or people so much.


Well, maybe not so much the language. What we call 'swear words' are Saxon(?) words for ordinary things, demonized because they are not the sanitized 'correct' Norman words, e.g. fuck vs sexual intercourse, shit vs defecate, and so on. That's pretty much displacing language? Certainly the language got changed permanently.


English is a Germanic language (as are German, Swedish, Dutch, etc). The largest components of its grammar are similar to other Germanic languages, as is a lot of vocabulary (butter, swine, cow, deer....).

A bunch of French words were grafted on recently (not in 1066 but over the following couple of centuries), but none of the grammar came with it.

As for German-speaking people in England specifically: well, George I and George II spoke German (and French), not English. George III was the first Hanoverian king to speak English. Not surprisingly, he was mad :-)


A fairly decent rule of thumb is that most of the short words in English come from Germanic roots, and most of the long words come from French. Obviously there are exceptions to this rule.


The Romans weren't super consistent either, at least when it came to the size of their military units: https://en.wikipedia.org/wiki/Centurion

> How'd they end up speaking German?

You've probably heard of Anglo Saxons? After the Roman withdrawal came the Anglo Saxon invasion from Germany and Old German became the predominant language of England, even the name comes from these invaders (Angles -> Angle Land -> England). After that the Vikings and French added their layers to varying degrees, Celtic languages remained dominant in Ireland, Scotland and Wales until much later.


> Old German

You mean Old English or Anglo-Saxon. Also, they came not just from what became northern Germany but also from what is now Denmark.


Probably the same way Americans and English can trade despite their gallons being different. Or the Australians and the Americans despite their dollar being different. It was only last century that for the English, a billion was 1e12 while it was 1e9 for Americans. I'm sure it was a pain but humans can manage details like that.


Who did William conquer? The Anglo-Saxons. If you know where Anglia and Saxony are, well, it's easy to see what happened.

German grammar is pretty easy to pick up for English speakers. Lots of sentences are constructed the same way in German and English.


And we glom words together the same way, just maybe less gratuitously.


Close enough to 1 gross (dozen dozens) I guess?


I'm not so sure. 144 has been a special number for a good long time.


So a dozen times ten?


Reminds me of the ambiguity of “gigabyte”


kilobyte, kibibyte.


That’s what she said?



