If you've only used TypeScript, then you haven't really seen what gradual typing can do. In Typed Racket, you get actual guarantees about your types, and a type system that works for almost all existing untyped code. Of course, it might be more work to make it type check, since you have to get it right. ;)
The same is true of other systems like GradualTalk, and some, such as Typed Clojure, have much more expressive type systems than TypeScript, even if they don't guarantee soundness in the presence of untyped code.
The designers of TypeScript went for simplicity first. This has real advantages, but also disadvantages, which you seem to be running into.
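To make "actual guarantees" concrete, here's a minimal sketch (module and function names are made up) of what Typed Racket does at the typed/untyped boundary: the type you assert for an untyped import is compiled into a runtime contract, so untyped code can't silently hand typed code a value of the wrong type.

```racket
#lang racket

;; An untyped module that lies about nothing -- it just returns a string.
(module untyped racket
  (provide f)
  (define (f x) "oops"))

;; A typed module that *asserts* a type for f. The assertion is wrong,
;; but the program still type checks -- the type becomes a contract.
(module typed typed/racket
  (require/typed (submod ".." untyped)
                 [f (-> Integer Integer)])
  (provide g)
  (: g (-> Integer Integer))
  (define (g n) (f n)))

(require (submod "." typed))

;; The lie is caught at the boundary, blaming the untyped module,
;; instead of a string flowing silently into typed code.
(with-handlers ([exn:fail:contract?
                 (lambda (e) (displayln "caught contract violation at the boundary"))])
  (g 1))
```

TypeScript, by contrast, erases its types at compile time, so the same lie just produces a wrong value at runtime.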
(Typed) Racket has been on my radar, though there have been some worries, e.g. the performance issues when crossing the typed/unityped boundary. Hopefully those can be fixed.
I know it's my own failing, but I've also had a lot of trouble with the incredible amount of parens and symbols in TR/TC -- I'm used to the near-math notation of Haskell dammit! :). It's just one of those things -- I have actually written non-trivial code in Scheme, but I've been spoiled by Haskell. Intellectually, I deeply appreciate the value of homoiconicity, but sadly I just can't get past the aesthetics.
EDIT: FWIW, if I were to go with gradual typing, I think TR would probably be close to ideal. It's an amazingly well thought-through language in terms of module system, integration of the macro system with modules, the syntax-parse macro system, etc. Kudos!
This comment is very confusing. You spend a lot of time complaining about how Racket is overly focused on functional programming, leading to slower data structures. But the queue data structure you refer to is an imperative queue with O(1) operations for everything. Also, it isn't a priority queue.
The Rosetta Code example for priority queues in Racket is also built on a mutable heap data structure. It does have more complex time bounds, but that's because of priorities, not immutability.
Finally, the Rosetta Code example for queues in Racket explicitly uses `mcons` (which you claim is somehow taboo).
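For reference, here's a small sketch (not the Rosetta Code version) of the kind of imperative FIFO queue being discussed, built directly on mutable pairs, with O(1) enqueue and dequeue:

```racket
#lang racket

;; head/tail point at mcons cells; #f means empty.
(struct queue (head tail) #:mutable)

(define (make-empty-queue) (queue #f #f))

;; O(1): link a fresh cell after the tail.
(define (enqueue! q v)
  (define cell (mcons v '()))
  (cond [(queue-tail q)
         (set-mcdr! (queue-tail q) cell)
         (set-queue-tail! q cell)]
        [else
         (set-queue-head! q cell)
         (set-queue-tail! q cell)]))

;; O(1): pop the head cell.
(define (dequeue! q)
  (define cell (queue-head q))
  (unless cell (error 'dequeue! "empty queue"))
  (cond [(null? (mcdr cell))          ; last element: queue becomes empty
         (set-queue-head! q #f)
         (set-queue-tail! q #f)]
        [else
         (set-queue-head! q (mcdr cell))])
  (mcar cell))
```

Nothing about Racket prevents writing this; mutation just isn't the default idiom.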
That it takes the intimacy with the code base of a regular repo contributor to find it suggests how deeply buried 'mlist is. Or to put it another way: Racket the language does not provide 'mlist. Racket the ecosystem does, but deliberately makes it hard to use (by obfuscation) on the moral ground that mutating a list ought to incur the pain of iterative 'mcons-ing... and its associated pretty-printing.
For those of us who were around when the default changed from mutable to immutable cons cells, the rule is well known: just prepend an "m". I agree that the link in section 4.10 could be improved, but calling it "obfuscation" and "deliberate" is taking it too far. File a bug report, and I bet it will be fixed.
'mcons is part of 'racket/base. 'mlist is part of a library that carries two layers of warnings, is not linked from the Racket Guide or the Racket Reference, and lives under the 90th of 96 links on the Documentation page. It's lower in the great chain of being than "Unstable: May Change Without Warning".
(cons 1 (cons 2 (cons 3 '()))) is commonly required in the Racket family of languages for purely pedagogical purposes. That it is the only pattern for 'mcons provided by 'racket is a deliberate statement of community values in regard to mutable lists.
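For the record, here's what the two levels of support look like side by side; the only piece 'racket/base itself gives you is the hand-rolled 'mcons chain, while the convenience constructor lives in the compatibility collection:

```racket
#lang racket

;; The convenience constructor is buried in the compatibility collection.
(require compatibility/mlist)

;; The pedagogical pattern, in mutable form -- the only way
;; racket/base itself offers to build a mutable list:
(define by-hand (mcons 1 (mcons 2 (mcons 3 '()))))

;; The same list via the compatibility library:
(define convenient (mlist 1 2 3))
```

The library also provides 'mlength, 'mappend, 'mreverse, and friends, so the functionality exists; the disagreement here is about how discoverable it is.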
As for filing a bug report: the bug is that the documentation for the Racket language mentions mutable lists and includes a hyperlink at all. Because mutable lists are not part of the language, the proper fix is removing the reference, at least insofar as consistency with the rest of the documentation goes, since other libraries are not cross-linked from the Guide and Reference. Linking "mutable lists" to 'compatibility/mlist would just replace one bug with another... unless leaving 'mlist out of 'racket/base was itself a bug. That that is the case is unlikely.
It turns out that fexprs, which is what Kernel provides, are widely considered a bad idea in the Lisp community, and were abandoned in favor of macros for a reason. The basic problem is that you can never tell whether your function call is really a function call or a macro application, until you run the program, and therefore you can't reason about your code at all. Incidentally, this also means that your compiler can't reason about your code, and thus optimization is basically impossible.
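To make that argument concrete, here's a hypothetical Lisp snippet showing why a compiler is stuck when any call might be an fexpr application:

```scheme
;; With fexprs in the language, this call cannot be compiled ahead of time:
(f (+ 1 2))
;; If f is an ordinary function, it receives 3.
;; If f is an fexpr, it must receive the unevaluated list (+ 1 2).
;; The compiler cannot constant-fold (+ 1 2) without knowing which
;; case holds -- and in general it can't know until runtime.
```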
The fexprs provided by Kernel are not the same as the ones in traditional Lisps that were abandoned: Shutt has developed a theory for reasoning about them, the vau calculus, which can also express the lambda calculus. He describes pure and impure versions of the vau calculus in his dissertation.
First, Kernel's operatives improve on traditional fexprs by being lexically scoped, where the original Lisps were dynamically scoped. The other main feature is that an operative also gains access to the dynamic environment of its call site, which is passed to it implicitly when it is invoked.
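A tiny Kernel sketch (untested, written from memory of the Kernel report; $and2 is a made-up name) of an operative using that implicitly passed environment:

```scheme
;; A short-circuiting "and" as an operative. The operands a and b
;; arrive unevaluated; env is the caller's dynamic environment,
;; bound implicitly by $vau. We evaluate b only if a is true.
($define! $and2
  ($vau (a b) env
    ($if (eval a env)
         (eval b env)
         #f)))

;; The second operand is never evaluated, so no error is raised:
($and2 #f (undefined-variable))
```

No quoting or macro expansion is involved; the operative simply decides, at runtime, what to evaluate and in which environment.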
You're right that, because of this, they do not allow us to reason inductively about code the way the lambda calculus does, but it's a trade-off for incredible flexibility. I think we can build alternative models to reason about them for purposes such as optimisation. (Alternatively, one can have operatives behave as compilers: feed in type-tagged expressions, let the compiler-operatives perform full type checking, then output an applicative that strips the type information.) I see the possibilities as endless, because the model is extremely abstract.
Kernel chooses not to enforce a static separation of operatives and applicatives, for simplicity; instead, a convention-based approach is used whereby operatives are prefixed with "$" (such as $vau, $lambda, $define!, $provide!). One can also check at runtime with the operative? and applicative? predicates.
However, it would be perfectly possible to create a compiler that knows this difference, since the forms `($define! ($vau ...))` and `($define! ($lambda ...))` that distinguish them could be given special treatment in an alternative representation of this model. You still would not be able to tell whether an applicative uses operatives internally, but that's the point of encapsulation: implementation details are meant to be hidden. (Even at the cost of performance, which often doesn't matter.)
Even if Kernel is not as widely practical as lambda-calculus based languages - Shutt's dissertation is definitely worth a read, and shouldn't be discarded because of a perception of bad fexprs from 40 years ago.