For anyone else confused by this, this is not a new calendar proposal. This is a fix for a bug in the RK808 real time clock (RTC) where November has 31 days.
And this is why, tempting as it is sometimes, you shouldn't use sarcasm in comments, commits, etc., that will be read by people who might not realise that you are joking. Not even subtle, well-crafted sarcasm.
[Edit] Not sarcasm, but when I was a young programmer I worked with a colleague on a system that detected short circuits in analogue circuit boards. Many of the variables in our code used the names of short people (Ronnie, Corbett, etc.). We thought this hilarious. Our boss didn't, and gave us a stern lecture on the possible maintenance/legal ramifications.
I don't agree. The one line summary should be enough to contextualize the rest. It isn't necessary to suck the joy out of funny commit messages to help the few people who don't bother to read the summary.
Of course, giving variables funny names is probably a worse idea, since variables will often have to convey meaning on their own. But that isn't true of the funny paragraph in this commit.
Wow, I was wondering what the reasoning was for why a calendar with 31 days in November would be better aligned with the motions of the universe, but that patch didn't have it. Only this comment made me realize that they were being sarcastic.
> Similarly, in A.D. 2013 Rockchip hardware engineers found that the new Gregorian calendar still contained flaws, and that the month of November should be counted up to 31 days instead.
Not sure what you mean, the patch notes definitely make it sound like the Rockchip engineers did it intentionally...
"... just like more than 300 years went by before the last Protestant nation implemented Greg's proposal ..."
This is not correct. The last Protestant nation to adopt the Gregorian calendar was Sweden. The adoption was stepwise (including some oddities) and was complete in 1753 (less than 200 years). The Swiss canton of Grisons (which was half Protestant and half Catholic) adopted it in 1811. It was the Orthodox countries that took more than 300 years to adopt it. The last one was Greece in 1923.
Ever wondered why the Calendar components in Microsoft's WinForms framework don't go further back than 1753? -- It was the first year in which the British Empire fully used the Gregorian calendar (the transition was in 1752). Protestant software ...
Technically the Serbian Orthodox Church still hasn't adopted the Gregorian calendar, so their religious holidays continue to gradually fall further out of sync with the rest of the world. Eventually they'll have Christmas in July.
I didn't want to make things too complicated, so I focused on the adoption by nation states in my original post. The use of the Julian calendar in today's Serbian Orthodox Church that you mentioned is a special case of a broader movement; some other Orthodox Churches use it as well. In a sense, the only territory that AFAIK uses the Julian calendar today is the Monastic Republic of Mount Athos[1], which is an autonomous self-governed territory inside Greece.
Both the "Rockchip engineers found that the Gregorian calendar contained flaws" in the title, and the "in A.D. 2013 Rockchip hardware engineers found that the new Gregorian calendar still contained flaws, and that the month of November should be counted up to 31 days instead" in the post are sarcasm.
Apparently the Rockchip chip has a bug, where November has 31 days. And you need to have this workaround in software.
Probably has to do with hardware and firmware roots. Building an RTC that is programmable with year, month, day, hour, minute, second, without multiplication and division, is much, much easier if you store the date/time as year, month, day, hour, minute, second. You just need some simple comparisons whenever there is a rollover, as in the sketch below.
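For illustration, a minimal sketch (mine, not Rockchip's actual logic) of an RTC that ticks with nothing but increments and compares against a days-in-month table. It also shows how a single bad table entry would produce exactly the RK808 bug:

    /* Hypothetical calendar-RTC rollover logic: no multiply or divide,
     * just chained counters and compares against a lookup table. */
    #include <stdint.h>

    static const uint8_t days_in_month[12] = {
        31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 /* Jan..Dec */
        /* a typo at index 10 (November = 31) reproduces the RK808 bug */
    };

    struct rtc_state { uint8_t sec, min, hour, day, month; uint16_t year; };

    static int is_leap(uint16_t y) {
        return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
    }

    void rtc_tick(struct rtc_state *t) {            /* called once per second */
        if (++t->sec < 60) return;
        t->sec = 0;
        if (++t->min < 60) return;
        t->min = 0;
        if (++t->hour < 24) return;
        t->hour = 0;
        uint8_t dim = days_in_month[t->month - 1];  /* month is 1-based */
        if (t->month == 2 && is_leap(t->year)) dim = 29;
        if (++t->day <= dim) return;
        t->day = 1;
        if (++t->month <= 12) return;
        t->month = 1;
        t->year++;
    }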
The real WTF is that something as fuzzy as the calendar was done in hardware. Our time-keeping methods are notoriously fickle, the only stable thing is how long a second takes, and so that's the only thing we should have in hardware.
Nah, the length of a second is fixed as precisely 9,192,631,770 periods of the radiation from the ground-state hyperfine transition of caesium-133.
The leap second concept again comes from the calendar. All of those higher-level concepts can come from software.
The hardware just needs a clock that tells us how many (normal) seconds have passed since someone shoved a battery into the thing, and how many seconds have passed since the computer was booted. That's all. Well, also assuming it's approximate, unless the computer has caesium in it.
Most pain in system design comes from poor judgments of where to put what. Separation of responsibilities can be non-obvious at first.
Short of decapping and studying RK808 silicon, there is probably no way to tell with certainty whether this is a hardware bug, or a software bug, in what could easily turn out to be ROM for an internal 8051 core.
I know auxiliary micros are everywhere these days, but there is very little chance an RTC is implemented this way. RTCs are by design extremely simple, low power CMOS logic since they need to run off of a button cell for a decade. It's just a bunch of counters chained together.
The chip in question is a power-management IC, with the RTC being just one of its functions. Others are (programmable) power sequencing and a suspend/resume state machine, all accessible via the same I2C interface. Here is the datasheet:
Yes, and the RTC portion is going to be part of a separate always-on power domain and designed to consume single-digit microamps of power (5uA for this one). It's not going to involve a microcontroller.
That doesn't really make much sense as a design. If they were going to return calendar dates, they'd count in calendar dates. I can't think of a reason why someone would implement something like that, it's just not how these things are designed.
Hardware - Cannot be changed after manufacture
Firmware - Possible, but abnormal, to change (maybe a limited number of times, risky)
Software - Easiest to update, often during normal system lifetime
Wikipedia's definition disagrees, though, since it counts ROM as firmware (I agree EPROM is firmware, since it can be modified with more extreme measures), but I'd suggest that is less correct than the categorization I outlined, as I would like to include FPGAs, etc.
The cartridges are hardware; the game within is software. Although yes, sometimes there's more to it: some cartridges, like the Super FX (Star Fox) or SA1 (Super Mario RPG) SNES cartridges, include specific hardware accelerators.
The point GP is making is that the game within the cartridge is immutable.
But I actually think cartridges are a great demonstration of how the hardware-software divide isn't obvious. Byuu used to write about the importance of capturing a cartridge's mapping. The chips in different cartridges are also connected differently, and if you don't capture that information, the game can't be fully recreated. Emulators either have to guess or use a database of known titles.
The cartridge would indeed be Hardware, possibly with some bit of Firmware or a Software storage area for game saves. Technically it might be entirely Firmware if the manufacturers decided to just use a single EEPROM chip for the whole thing, even if during normal operation the software on the chip only ever re-programmed a single page.
HDL is still code and can produce hardware or software.
The ASIC is 100% hardware because the logic is embedded directly into physical components. The same logic running on an FPGA is software although conversationally one might talk about it as being closer to hardware.
It most likely is. Built in RTCs need to operate off battery backup without the main processor running. A general purpose support microcontroller would burn more energy than a hardware calendar state machine.
Mm-hmm. Story time! Let me tell you about my favorite chip bug that was resolved with firmware. :-)
My first assignments out of college were on firmware for complex enterprise-class servers. I wrote power/thermal/clock control code, and was quite proud of doing integration tests like overclocking or underclocking a 256-core computer, and having it not crash. :-)
We got a bug, though, that the system would crash on first boot when it had been left unpowered in a thermal chamber overnight at the minimum ambient operating temperature. This simulated it sitting in a loading dock or warehouse before getting "rolled into the lab" for install. We all assumed condensation was the problem, but were able to disprove it by controlling the humidity in the chamber. Someone thought that some capacitors or the power supply weren't producing a clean signal, but we disproved that too. Interestingly, the system would boot up just fine if the engineers immediately shut it down and restarted it. If we tried to repeat the test on the same day, it'd pass. It only failed if we waited overnight.
Looking closer at the crash, the cause was a thermal limit being tripped. The hardware was detecting an overtemp on one of the main processors. This was strange, because the test was a "cold start" test, and it wasn't triggering any warnings. The problem processor also changed each day/night we repeated the test. We were actually forced to wait ~8 hours between test windows because timing was inexplicably tied to reproducing the failure.
I checked everything, confirmed that we'd programmed the correct warning and limit values, and while talking it out with the hardware engineer and explaining my logic, my hardware colleague who wrote the VHDL realized what was going on.
The thermal sensor we were using had a margin of error that turned out to be +/- 10 degrees C. When the system was left at 5 degrees C overnight and started, the sensor's first readings might be 5 C +/- 10... and for an 8-bit unsigned register value, would result in an underflow, immediately tripping the hardware checker which was implemented as an 8-bit unsigned comparison.
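To make the failure concrete, here's a hedged sketch; the 8-bit register width is from the story, but the limit value and the exact mask are my assumptions:

    /* Sketch of the cold-start failure: a sub-zero reading wraps around
     * in an 8-bit unsigned register and trips the overtemp comparison. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        const uint8_t overtemp_limit = 105;     /* assumed trip point, deg C */
        int true_temp = 5, sensor_error = -10;  /* 5 C ambient, worst-case error */
        uint8_t reading = (uint8_t)(true_temp + sensor_error); /* -5 wraps to 251 */

        /* The hardware checker: a plain 8-bit unsigned comparison. */
        if (reading >= overtemp_limit)
            printf("hardware overtemp trip at \"%u C\"\n", reading);

        /* The firmware workaround's core idea: a reading with the two
         * high-order bits set (>= 192) is implausible, so treat it as a
         * glitch and ignore it instead of shutting down. */
        if ((reading & 0xC0) == 0xC0)
            printf("firmware: ignoring implausible reading %u\n", reading);
        return 0;
    }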
Rather than change the VHDL, we turned off the built-in thermal protection, which thankfully had its own dedicated mode bits, and implemented a firmware protection scheme where my code (which could mask off the high-order bit) would check the temp and set warning/error bits. If the two high-order bits were set, we'd ignore the sensor value and just log an informational event. If the sensor didn't report an in-range value after a certain amount of time, we'd report that the sensor was bad and fall back to a "failsafe" operating mode, iirc. It wasn't as fast as the hardware, so we had to add more margin, but it saved having to re-spin the processor.
That code's still running in a bunch of machines out in the world today. :-)
I should also say that all of the above discusses registers and logic that are outside of the architected state of the computer that normal software has no access to.
This is a cryptic / sarcastic commit message throwing shade at Rockchip because their chip had a hardware bug counting 31 days in November.
Looks like it's caused confusion just 5 years later. Commit messages ideally shouldn't be like this; they're meant to be read by other humans in the future.
> Looks like it's caused confusion just 5 years later.
No. The commit message is perfectly clear: the first line (the one highlighted in bold, which was also the subject line of the original mail) summarizes exactly what this is about; the first paragraph, although containing some mild sarcasm, is IMO perfectly clear about what the exact problem is; and the second part explains how it is solved.
The only thing that is causing confusion is the headline on HN, by ripping a single sentence completely out of context.
EDIT: The title has now been changed to something less clickbaity. At the time I wrote this comment, it was "in 2013 Rockchip hardware engineers found that the new Gregorian calendar still contained flaws"
Time and dates are awful enough without having to deal with hardware feeding you bogus data. I could imagine that if you were unaware of this, you could really drive yourself insane thinking your code isn't right.
Some RTCs do work with a simple counter like that. This calendar nonsense is basically cargo cult from the early IBM PC days. But hardware engineers rarely talk to software engineers, and these IP blocks were designed decades ago and people don't want to touch them...
...and the IBM PC RTC chip (Motorola MC146818) cargo-culted that format from the HP 98035 clock module. The latter was literally a wristwatch chip (TI AC5954N), surrounded by logic/a microcontroller that programmatically "pressed" buttons and "read back" display digits. CuriousMarc did a video about it: https://www.youtube.com/watch?v=pr6HTiWrMmk
Okay, but if we're looking at Linux kernel workarounds, then we have a much bigger system than that, so this is not really anything that should ever exist, right?
Here’s a quick example that is mentioned in the rationale of the commit: hardware that is offline and powered down for a significant amount of time. At boot, without an RTC (real time clock), it will have no idea of the current date or time. Most PCs have an RTC that will keep time for a while; if the time is wrong at power up, it’s not too big of a deal to fix it. For an embedded system, though, resetting the time could be a really big deal, especially if it is headless and had no continuous internet connection.
Sadly some do! The best kind of RTC in hardware is a second counter that the OS can program as it pleases. This design choice offers the least chance to mess it up. Yet, for some reason so many companies invent over-complicated RTC hardware blocks that count days, weeks, time zones, moon phases, and who the hell knows what else. Most of them suffer from myopic design (I've seen chips with y2k issues in hardware) or outright bugs (as seen in TFA).
You know what most drivers do to those? They use them as second counters in the simplest way the driver writer could imagine.
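For example, something like this sketch, where rtc_read_seconds() is a made-up placeholder for the actual register read:

    /* The "simplest imaginable" driver approach: treat the RTC as a bare
     * seconds counter and do all calendar math in software. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    static uint32_t rtc_read_seconds(void) {
        return 1700000000u; /* stand-in for reading the counter over I2C */
    }

    int main(void) {
        time_t now = (time_t)rtc_read_seconds();
        struct tm utc;
        gmtime_r(&now, &utc); /* the calendar lives in software, not silicon */
        printf("%04d-%02d-%02d %02d:%02d:%02d UTC\n",
               utc.tm_year + 1900, utc.tm_mon + 1, utc.tm_mday,
               utc.tm_hour, utc.tm_min, utc.tm_sec);
        return 0;
    }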
And yet, more and more chips keep being made with over-complex RTCs.
What's a good knee-jerk "use this" recommendation that's inexpensive and likely to exist for a long time?
Popularity and/or intuitiveness would be nice bonuses, but I'd value functionally correct/sane behavior more highly.
(I'm guessing this question belies my extreme ignorance of the field :), and that the RTCs in even eg basic Arduino etc designs are probably already reasonably decent.)
Edit: Just noticed the sibling comment above about RTCs obviating divide/multiply logic. Much clicks into place now.
You don't buy RTCs. You buy a big SoC with a 5000 page datasheet, of which 3-10 pages are the RTC, and you take what they give you. There is no room for the RTC type/design to be part of the part selection process for a SoC, which is why they can be vaguely crappy and nobody cares or can do anything about it.
Edit: it's a power management IC for a SoC, not the SoC itself, but you often don't get much choice of those either since SoCs tend to be designed to pair with a specific PMIC. Whether the RTC is in the SoC or PMIC (or both) depends on the design.
What a terrible calendar we currently have. One guy offered this one: 13 × 28 = 364. In that case we have 13 months of 28 days each: exactly 4 weeks per month. Days 365 and 366 are non-calendar holidays.
13 months improves some things, but it does not work well for 4 seasons.
There are two definitions of seasons. In the simpler one, they start March 1, June 1, September 1, and December 1. (In the other, they are truer to astronomy and start about 20 days later.)
With 13 months in a year, that would be a lot messier.
Since there is symmetry to the year, it makes sense to divide it into a number of pieces that works well with that symmetry. 12 pieces makes sense because it's a multiple of 4.
If you look at the length of the day as a function of the day of the year, it resembles a sine wave. You wouldn't create a unit of angular measurement where one full circle is 13 units. (You'd have trig identities like sin(x+13/4)=cos(x)!) There would be similar disadvantages if you divided a year into 13 pieces.
You'd still need leap days then, though. Otherwise, within 150 years, June, or whatever the 6th month is, will have drifted from summer to winter (or vice versa).
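(Rough numbers: a tropical year is about 365.2425 days, so a fixed 364-day year drifts by about 1.24 days per year; over 150 years that's roughly 186 days, i.e. about half a year.)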
Another issue is that while 28 is neatly divisible, 13 is not. What is half a year? When do you make up the numbers for Q3? Etc. It introduces so many issues that the net benefit is hardly there.
Except that people can still be born and die and check in and out of hospitals on those days, so you have to be able to record those days, so they immediately get back onto the calendar but in a far uglier and more hacky way than the entire system that you just tried to replace.
That's what we have: September is month 7, October is month 8, November is month 9, December is month 10, February is the last month, and February 29 is an intercalary day. Someone just fucked up the offset at some point.
Pretty sure this explains the origins of the seven day week:
By the seventh day God had finished the work he had been doing; so on the seventh day he rested from all his work. Then God blessed the seventh day and made it holy, because on it he rested from all the work of creating that he had done.
The most disturbing thing about this bug is that an RTC which can count days of the month, and even leap years, correctly is not at all new leading-edge technology. The pattern is simple enough that you can write a single expression to calculate it, which could be compiled down to a similarly compact arrangement of gates, but I'm guessing someone went the "lame" way with an array of entries -- and typo'd the November entry. Then no one noticed or tested it until it was far too late.
With the exception of February and an inversion after July, the pattern alternates binarily between 30 and 31. Thus, assuming zero-based months, the month length reduces to a one-line expression; see the sketch below.
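A minimal sketch in C (my formulation; February handled separately): the low bit of (m+1) gives the 31/30 alternation, and bit 3 of (m+1) flips the pattern from August onward.

    /* One-expression month length for zero-based month m (0 = Jan). */
    #include <assert.h>

    static int days_in_month(int m, int leap) {
        if (m == 1)                      /* February */
            return 28 + leap;
        return 30 + (((m + 1) & 1) ^ ((m + 1) >> 3));
    }

    int main(void) {
        static const int ref[12] = {31,28,31,30,31,30,31,31,30,31,30,31};
        for (int m = 0; m < 12; m++)     /* check against the usual table */
            assert(days_in_month(m, 0) == ref[m]);
        return 0;
    }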
Had they used constrained-random testing (the property-based testing/QuickCheck of the HW world) on the HDL for their design, this likely wouldn't have gotten past even the initial review phase.
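For illustration, a software stand-in for what that might look like (a real testbench would drive the HDL, e.g. with SystemVerilog constrained-random stimulus; the buggy November entry below is hypothetical, mimicking the RK808 flaw):

    /* Constrained-random check: compare a device-under-test month-length
     * model against a trusted reference over random (year, month) draws. */
    #include <stdio.h>
    #include <stdlib.h>

    static int leap(int y) { return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0; }

    static int dut_days(int y, int m) {  /* m = 1..12; November typo'd to 31 */
        static const int d[12] = {31,28,31,30,31,30,31,31,30,31,31,31};
        return d[m - 1] + (m == 2 ? leap(y) : 0);
    }

    static int ref_days(int y, int m) {  /* trusted reference model */
        static const int d[12] = {31,28,31,30,31,30,31,31,30,31,30,31};
        return d[m - 1] + (m == 2 ? leap(y) : 0);
    }

    int main(void) {
        srand(1); /* fixed seed for reproducible counterexamples */
        for (int i = 0; i < 100000; i++) {
            int y = 1900 + rand() % 300;   /* constrained input ranges */
            int m = 1 + rand() % 12;
            if (dut_days(y, m) != ref_days(y, m)) {
                printf("MISMATCH at %04d-%02d: dut=%d ref=%d\n",
                       y, m, dut_days(y, m), ref_days(y, m));
                return 1;
            }
        }
        puts("all draws matched");
        return 0;
    }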