That's a good analogy (as long as you mean "HTML" as an umbrella term covering the HTML spec and associated specs like CSS and Custom Elements).
• AMP is a subset of HTML.
• Any normal HTML engine can, therefore, render any AMP file, without needing to know about the AMP spec.
• It's also possible to make an engine that specialises in rendering AMP (either exclusively, or by switching to a faster mode when it detects the file is just AMP).
• Even in normal HTML engines, AMP files will generally be fast because they avoid 'slow' parts of HTML.
Where the analogy falls short:
• It's easy to write AMP by hand, but asm.js is really only meant to be generated by a compiler.
• Obviously, what is meant by 'fast' is very different in each case. In HTML:AMP, it's about things like reducing network usage, and avoiding layout thrashing by requiring up-front declarations of image dimensions, etc. In JS:asm.js, it's about making compiler optimisations possible so code can execute faster.
Note: AMP does create extra elements, like `<amp-img>`, but these are legit uses of the Custom Elements spec. You can see these elements rendering correctly in Chrome. 
AMPHTML whitelists a handful of HTML tags, but adds a bunch of its own custom elements, like <amp-instagram>, <amp-img>, and <amp-ad> on top. It's neither a subset nor a superset of HTML in the common understanding of those terms.
Technologically it's not a walled garden, but "business-wise" - for lack of a better word - it is definitely a way to improve the content consumption experience on Facebook, not for the general Internet at large.
But it's not, really. It's HTML + a mandatory web components polyfill and standard library loaded from a remote origin. If I just rendered the HTML, I wouldn't see any images or videos, because they're hidden behind custom <amp-img> and <amp-video> tags.
I guess the upside is that I also wouldn't see content in the <amp-ad> or <amp-pixel> elements.
I wouldn't disagree with AMP being "just web technologies," but it's sure as hell not "just HTML."
Thanks! Let me know if you have any problems or questions. Re: the implementation, take a look at the task that fetches new comments: http://git.io/vOGT3.
reddit's primary 'new comments' endpoint lets you fetch at most the 100 newest comments. But you have to account for times when the previously stored 'newest' comment is more than 100 comments in the past, i.e. you have a gap because there were too many new comments or another request took longer than expected.
The endpoints themselves are simple, but the task of making sure you don't miss any comments or fall behind is tricky.
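A minimal sketch of that gap-detection logic (function and field names are mine for illustration, not reddit's actual API):

```python
def split_new_comments(fetched, last_seen_id):
    """Split a newest-first batch from a 'new comments' endpoint.

    Returns (new_comments, gap). `gap` is True when the previously seen
    newest comment has already fallen out of the fetched window, meaning
    some comments may have been missed and need backfilling elsewhere.
    """
    new_comments = []
    for comment in fetched:
        if comment["id"] == last_seen_id:
            # We overlapped the previous poll, so nothing was missed.
            return new_comments, False
        new_comments.append(comment)
    # Never saw the old newest comment: more than a full window of
    # comments may have arrived since the last request.
    return new_comments, True
```

When `gap` comes back True, the task would then have to page backwards through another endpoint until it reconnects with comments it has already seen.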
"This is why you see so many popups and banners on mobile websites
that try to get you to download apps. It is also why so many mobile
websites are broken."
Huh, is that surprising? They break their mobile sites and expect people to still use them more than apps? This is self-fulfilling: if you make a sucky website, people will NOT use it. Make a wonderful mobile site and people will not care about your apps.
The problem that I see is that once the big 'popular' readers are gone, it's possible that publishers decide that RSS is a feature not worth having/maintaining. That's a choice that has already been made by platforms like Twitter.
A lot of newer blog platforms also did not prioritize RSS.
So, as much as I agree with the statement, I'm worried that in the end, we'll all lose.
Finally, Feedly's strategy here is probably one of the best ways to make sure that publishers stop publishing RSS feeds.