Yes, some overloaded octopus methods fall back on Object. What helps the SpiderMonkey type-inference-driven JIT is online live-data profiling, as Marijn notes. This may be the crucial difference.
However, new algorithms such as CFA2 promise more precision even without runtime feedback.
And I suggest you are missing the bigger picture: TypeScript, Dart, et al., require (unsound) type annotations, a tax on all programmers, in hope of gaining better tooling of the kind you work on.
Is this a good trade? Users will vote with their fingers provided the tools show up. In big orgs (Google, where Closure is still used to preprocess JS) they may, but in general, no.
Refactoring operations such as renaming just aren't frequent enough, from what I hear, to motivate JS devs to swallow type annotations.
In many cases types can be inferred. ML can infer almost all types in a program (though its algorithm requires a language without subtyping). Haskell also has very good type inference, extended with type classes rather than subtyping (you declare very few types). Both have strong static type systems and don't tax developers by making them declare every type. The algorithms Haskell uses are complicated, but they can be implemented.
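The same point can be sketched in TypeScript, whose checker also infers most local types; the `sum` function and sample values below are invented purely for illustration:

```typescript
// Hypothetical sketch: most local types are inferred, so annotations
// cluster at module boundaries rather than on every declaration.
const nums = [1, 2, 3];               // inferred as number[]
const doubled = nums.map(n => n * 2); // inferred as number[]; n is number

// Only the function signature typically needs an explicit annotation:
function sum(xs: number[]): number {
  // acc and x are both inferred as number from the signature
  return xs.reduce((acc, x) => acc + x, 0);
}

console.log(sum(doubled)); // 12
```

This is local inference at work, not full Hindley-Milner: the signature of `sum` still has to be written, but its body and callers carry no annotations at all.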
We worked in the context of ES4 on gradual typing -- not just inference (which, as you imply, is fragile in the Hindley-Milner style) -- to cope with the dynamic code loading inherent in the client side of the Web. Gradual typing is a research program, nowhere near ready for prime time.
Unsound systems such as TypeScript and Dart are good for warnings, but nothing is guaranteed at runtime.
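One concrete instance of that unsoundness is TypeScript's covariant arrays: the checker accepts the sketch below (the `Animal`/`Dog` names are invented for illustration), yet the static type ends up lying about the runtime value.

```typescript
// Hypothetical sketch: the TypeScript checker accepts this program,
// but the value that ends up in `dogs` is not actually a Dog.
interface Animal { name: string }
interface Dog extends Animal { bark(): void }

const dogs: Dog[] = [];
const animals: Animal[] = dogs; // allowed: arrays are covariant (unsound)
animals.push({ name: "cat" });  // statically fine; this Animal has no bark

const d = dogs[0];              // static type Dog; runtime value lacks bark
console.log(typeof d.bark);     // "undefined" -- the annotation lied
// d.bark();                    // would throw TypeError at runtime
```

The type system reports no error here; only running the program reveals the mismatch, which is exactly the "warnings, not guarantees" trade-off.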
A more modular approach such as Typed Racket could work, but again: research, and TR requires modules and contracts of a Scheme-ish kind. JS is only just getting modules in ES6.
Anyway, your points of reference were more practical systems such as Java and .NET, but these do require too much annotation, even with 'var' in C#. Or so JS developers tell me.