
The "WTF" label is not about correctness, it's about surprise/intuition. It doesn't really matter to me whether it's mathematically or IEEE correct. It's strange and inconsistent.

Yes, a number and its negation added together are supposed to be 0, but... a number is never supposed to change (sign or magnitude) when you add 0 to it, and here it does... so... it's a strange corner case that I think defies intuition.

Given the two incompatible intuitions, I think far more people are likely to think "anything + 0 === anything" than to think "anything + (-anything) === +0". So that's why I marked it as a WTF.


> number is never supposed to change (sign or magnitude) when you add 0 to it

In the “real numbers”, zero doesn’t have a sign at all, and -0 and 0 mean precisely the same thing. Floating point is an approximation which needs to make some choices about edge cases, for the sake of practical uses (for instance it is useful to distinguish negative underflow from positive underflow, so there is an extra -0 value included).

The behavior that 0 - 0 or -0 + 0 produces 0 as output is not an unreasonable choice (it is what I would expect, as someone with a decent amount of mathematical experience). I would not expect very many people to have the “intuition” that -0 + 0 or 0 - 0 should produce -0 as a result, assuming they had any intuition at all about what should happen in this edge case.


In JS:

  -0 + 0;   // 0
  -0 + (-0);  // -0
  -0 - 0;  // -0
I claim that the `-0 + 0` case is the strange inconsistent one, so that's the reason for my WTF label.

----

Consider the counter-argument, that it's intuitive/correct because in math `X + (-X) = 0`:

  3 + (-3);   // 0
  -3 + (3);  // 0
  -0 + (0);   // 0
  0 + (-0);  // 0
It's true that this characteristic by itself is preserved, but here's where it falls apart:

  X + (-X) = 0    // -->
  X = 0 - (-X)   // -->
  X = 0 + X
That final statement should be true for all X, but as demonstrated above, it's not true for -0.
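
For concreteness, here's that identity checked directly in a console (`Object.is` distinguishes -0 from 0, where `===` does not):

  Object.is(3, 0 + 3);      // true
  Object.is(-3, 0 + (-3));  // true
  Object.is(-0, 0 + (-0));  // false -- the identity breaks for -0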


-0 is truly a mistake on the part of the IEEE committee. You can get it by dividing by -infinity; it's supposed to indicate "zero approached from the left" in this case, but it's not consistent: sqrt(-0) will give you -0 in most implementations.
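
For illustration, in a JS console:

  1 / -Infinity;   // -0
  Math.sqrt(-0);   // -0 (IEEE 754 specifies that the square root of -0 is -0)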


1/–∞ == –0 seems like obviously the correct behavior in the context of IEEE floats.

I think if someone is careful it should be possible to make an implementation of complex square root on top of IEEE floats such that √(–a² + 0i) == ai, whereas √(–a² – 0i) == –ai, representing the two sides of a branch cut.
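
A hypothetical sketch of that idea in JS (the function name and the {re, im} shape are made up, purely to illustrate using the sign of the imaginary zero to pick the branch):

  // assumes re <= 0 and im is +0 or -0 (illustrative only)
  function csqrtOfNegativeReal(re, im) {
    var root = Math.sqrt(-re);
    var belowCut = Object.is(im, -0);   // on the -0i side of the branch cut
    return { re: 0, im: belowCut ? -root : root };
  }

  csqrtOfNegativeReal(-9, 0);    // { re: 0, im: 3 }   i.e.  3i
  csqrtOfNegativeReal(-9, -0);   // { re: 0, im: -3 }  i.e. -3i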

Yes, √–0 should be 0. File a bug against whatever implementation returned –0 for that one.


IEEE 754 says √–0 is –0 though.


Ah really? What is the purpose of that?


Not sure. Via https://stackoverflow.com/questions/19236117/what-numerical-... I found https://people.freebsd.org/~das/kahan86branch.pdf, but that's a more complex read than I'm prepared for right now ;)


That paper is about implementing branch cuts in a complex-valued function (e.g. complex square root), and doesn’t discuss real square roots.

In the context of that paper it seems to me that √(–0 – 0i) should be 0 – 0i and √(–0 + 0i) should be 0 + 0i, but under no circumstances should the result be –0 ± 0i, which is on the wrong branch.

The obvious extension to real-valued square root would be √(–0) == +0.


(self post)


Just FTR: this is not accidental or lazy on my part. It's highly intentional. But I can certainly appreciate that it frustrates some learners.

There's a principle called "Cognitive Load Theory" [1] which I believe all teachers have to take seriously. Specifically, the "extraneous cognitive load". It's why I choose to introduce concepts with foo/bar style first.

I am concerned by the observations I've made over many years of teaching (JS) that many people tend to get distracted by problem domains and miss out on the underlying concepts. For example, if I am teaching about `this` but I use an example like a login system, it's far too easy for someone to get distracted by what they know -- or worse, what they don't know -- about how login systems work.

It's an overload on the cognitive side because the learner (reader) is having to juggle not only the new concepts but also their knowledge/opinions/baggage about the problem domain.

Cognitive overload is so "dangerous" not just because you might overflow someone's capacity and they stop getting anything out of the teaching, but because it has the tendency to become subtractive and actually cause them to lose what they already knew or had learned about the concept.

So... my approach in teaching, which is also reflected in the books, is to default to teaching a concept first without a problem domain, using generics like foo/bar, and THEN once I feel someone has the concepts, later, you can reinforce those with practical applications in problem domains.

IOW, I'd teach the abstract principles of `this` first, then the abstract principles of the Prototype system, then I'd start to show some examples like a login system that show how you can put those abstract concepts together into a real thing.
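
To make that concrete, a foo/bar-style snippet of the kind I mean (purely illustrative, not taken from the books) keeps the attention on how `this` gets bound at the call-site rather than on any problem domain:

  function foo() {
    console.log(this.bar);
  }

  var obj1 = { bar: "obj1", foo: foo };
  var obj2 = { bar: "obj2" };

  obj1.foo();       // "obj1" -- implicit binding from the call-site
  foo.call(obj2);   // "obj2" -- explicit binding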

Anyway, I understand it doesn't meet your preference, but I just wanted to explain the reasoning behind it from my side.

[1] https://en.wikipedia.org/wiki/Cognitive_load


First I'd like to thank you for your work. I've learned a lot from you and your teachings.

When it comes to "foo bar" examples, as a learner, I think it causes MORE cognitive load not less.

I think this is because it's hard to make connections between meaningless words like foo, bar, and baz.

Conversely, most of the "aha" moments come from your ELI5-like explanations of concepts.

The concept of a "promise" made sense to me when you used the example of buying a hamburger at a fast food restaurant in your workshop on asynchronous JavaScript.


I agree with this; the foo/bar/baz stuff is abstract and means nothing. That means the learner has to associate something they don't know with nothing, which is really hard. An analogy to something the learner already knows is way easier to grasp.


Instead of getting lost in the jungle, the 'foo'/'bar' style helps you understand the concept. To those who say it's not an effective way: what if a learner is coming from a non-web-dev background, e.g. a mathematical one, and wishes to use JS to create complex algorithms, geometries, interfaces, etc.? Won't they, on seeing 'login' code, wish it were about something of their interest? After all, JS is not only used to create typical sites. What if the learner was going to use JS on IoT devices, robots, etc.? So stereotyping the code with particular 'web-dev' use cases is not useful (at least not at the beginning). The 'foo'/'bar' style helps focus on what is really important to learn at first.

It's like -- 'a human without makeup is the most real'


TL;DR: When teaching nuclear physics, do not place a bike shed on the diagram.


First, I think it's disingenuous for an author to revel only in the positive reviews and "just ignore" the negative. An author should take all feedback and do the best with it he/she can. I read all reviews, and try to find anything I can in there to get better. That's why I'm sending this reply.

You probably think I should just "ignore the trolls" and you probably also think that anyone's negative expression is "freedom of speech" and that it shouldn't be countered.

I don't see it that way. The public ratings on my books are how a lot of people figure out if the books are worth looking at. No matter how legitimate or crazy a negative review may be, the one star hurts my overall rating exactly the same.

Furthermore, if I let a negative review go unanswered, I've lost an opportunity to show other readers a different perspective, and I've also lost the opportunity to (perhaps) engage in productive discussion that helps either that reviewer, or myself, or both, get better. Yes, this has actually happened before on a couple of occasions.

I don't think I'm reacting poorly by trying to find useful stuff even in the negative reviews. And I also don't think it's wrong to point out that unhelpful reviews are unhelpful.

I'm sorry this comes across as off-putting. But it's because I care so deeply about improving JS education through these books.


The irony of your comment is glorious. Thank you.


I'm excited about the `SharedArrayBuffer` addition, but quite meh on `Atomics.wait()` and `Atomics.wake()`.

I think CSP's channel-based message control is a far better fit here, especially since CSP can quite naturally be modeled inside generators and thus have only local-blocking.

That means the silliness of "the main thread of a web page is not allowed to call Atomics.wait" becomes moot, because the main thread can do `yield CSP.take(..)` and not block the main UI thread, but still simply locally wait for an atomic operation to hand it data at completion.
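
A rough sketch of that pattern (the `csp.go(..)` / `csp.take(..)` names here are generic CSP-style placeholders, not the actual API of the project linked below):

  // ch is a channel bridged to a worker (see the project link below)
  csp.go(function *main() {
    while (true) {
      // parks only this generator; the UI thread stays responsive
      var msg = yield csp.take(ch);
      handleMessage(msg);   // hypothetical handler
    }
  });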

I already have a project that implements a bridge for CSP semantics from main UI thread to other threads, including adapters for web workers, remote web socket servers, node processes, etc: https://github.com/getify/remote-csp-channel

What's exciting, for the web workers part in particular, is the ability to wire in SharedArrayBuffer so the data interchange across those boundaries is extremely cheap, while still maintaining the CSP take/put semantics for atomic-operation control.
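
The shared-memory side is standard API; a quick sketch (the `worker` variable is assumed to be an existing Worker instance):

  // both sides see the same memory; postMessage shares it rather than copying
  var sab = new SharedArrayBuffer(1024);
  var view = new Int32Array(sab);
  worker.postMessage(sab);
  Atomics.store(view, 0, 42);   // visible to the worker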


self submission


I wrote a six-book, 1,100-page series in Markdown (all on github). I write all my blog posts in Markdown.

I'm sorry, but I just can't sympathize with any of the complaints leveled against Markdown. They are so minor that, compared to the massive adoption of the common bits of Markdown, there's no chance I'd ever choose to write anything in any other markup just to avoid those small annoyances.

I don't need custom extensibility when I can just add HTML. HTML is pretty good at markup, and it enhances the Markdown on those rare occasions when what's provided seems too limited. The very last thing I'd want to do is literally write code as an extension to my markup language. The only code in my Markdown is inside nice tidy code blocks, for display purposes only.

I also use JSON for configuration, even though many have tried to convince the community that YAML or some other format is better. I pre-process my JSON to strip it of "non-standard" comments, and I'm quite happy that I chose the by-far de facto standard there instead of going off the beaten path.


Markdown is fine for books, but not for technical docs.

Tech docs have lots of cross-references and other similar elements, and their implementation in all the Markdown “standards” is really bad.

reStructuredText is much, much better.


I didn't mention it in my comment, but I also use Markdown for all my technical documentation (READMEs) on my dozens of github OSS projects. I have lots of cross-references, and I use normal links to named section anchors (which I sometimes insert manually via HTML).
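
For example (illustrative, not copied from any particular README), a cross-reference plus a manually inserted anchor looks roughly like:

  <!-- elsewhere in the doc -->
  See the [Configuration](#configuration) section below.
  <!-- the target heading, with a manual anchor -->
  <a name="configuration"></a>
  ## Configuration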

It's not perfect or even ideal, by any means, but the other benefits of Markdown are more than enough to make up for the slight annoyances here.

I don't know about reStructuredText, but I can say that none of the examples in the OP convinced me that I'm missing anything of import.


To your second point: if you use a YAML parser, you can continue to consume and emit your JSON files and get comments for free. Plus, when you get to a use case JSON doesn't support, you can continue to use the same parser for the one-off.
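
For instance, with the js-yaml package (one common JS YAML parser; just an example, any YAML 1.2 parser should do), a JSON document with # comments added parses fine, since YAML 1.2 treats JSON as essentially a subset:

  // sketch; yaml.load(..) parses a YAML (or JSON) string
  const yaml = require("js-yaml");
  const cfg = yaml.load('{ "port": 8080 }  # comments welcome');
  // cfg => { port: 8080 }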


YAML claims to not be markup, but it clearly is a markup for data (significant whitespace/indentation, etc). When I want to do markup, I use Markdown.

When I want data serialization, I use JSON. It's incredibly easy to strip comments (and whitespace, for that matter) from extended JSON before transmitting and/or parsing. I wrote `JSON.minify(..)` for that years ago, and it's literally never been a problem for me since.
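
Usage looks roughly like this (sketch; `rawConfigText` stands in for whatever text you read from the config file):

  // JSON.minify(str) returns the string with comments/whitespace stripped
  var config = JSON.parse(JSON.minify(rawConfigText));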


YAML is markup in the same way that XML and JSON are markup. All three are methods of serializing data into text documents.

It also offers several advantages over JSON: it can store and sanely represent relational data, it has support for comments built into all YAML parsers, and it can represent data structures that don't fall into the list/hash-map paradigm.


Curious how you cross-referenced across the books? Did you do listings of APIs?

Markdown is great for text content, not for formal documentation.


The books were intended as source material for publishing, so I didn't embed any crosslinks in the documents.

However, I always use markdown for all my OSS project documentation (READMEs, etc), and I do lots and lots of cross-referencing in that context. I use simple markdown links to named-section anchors.

It's a tiny bit more manual than I'd like, but it's never stopped me from doing technical documentation effectively.


markdown can indeed work "good enough" for books.

assuming you're using one of the extended flavors.

but once it comes to the repurposing of your text, switching to another flavor (for many good reasons) will cause _many_ undesirable wrinkles to appear, with a lot of them being rather difficult to find.

you might think this is a problem you'll never face. i hope, for your sake, that you are right about that.

but don't say that you weren't warned.

because the experience is unhappy, and tedious.

and not what you'd expect from a "plain-text" format.

the _idea_ for markdown (which, by the way, should be credited to ian feldman, with a follow-on by dean allen) is spot-on. but the gruber realization is short-sighted. (and, alas, stubbornly so, for absolutely no good reason.)


(self post)


self submission


disclosure: self submission

