Hacker News
Types will be part of Ruby 3 stdlib source (twitter.com)
387 points by darkdimius 37 days ago | 210 comments



Very cool! I didn't think this would happen, as Matz has expressed disinterest in adding type annotations. However, keeping an open mind and reconsidering one's positions are the hallmarks of a great leader :D

I worked on a summer project to add type annotations to Ruby. I didn't get very far, since I ran into some challenges with the internals of the parser and the parser library, Ripper. I'm extremely interested in seeing how the Ruby team designs the type system. It'll be gradual, of course, but it'll also be interesting to see what adaptations they'll have to make to accommodate existing code. JavaScript relied on a lot of stringly typed code, so TypeScript added string literal types. Perhaps Ruby's dynamic, block-oriented style could lead to some interesting decisions in the type system.

Not to mention, the types will most likely be reified as per Ruby's philosophy.

Super excited for this. Between the JIT and types, Ruby could definitely see a renaissance in the near future.


Indeed, Sorbet does have literal types for strings and Ruby symbols. We're still figuring out the details and converging on a common type system for Ruby 3, but we've found them super useful, as you rightly point out!

And +1 on the Ruby renaissance! Super excited about all the exciting things that are currently being built!


I can't wait for Sorbet's open sourcing! Ngl I tried decompiling the wasm binary just for fun. Not that it ended up being readable haha


We're currently looking for beta users! Reach out to us at sorbet@stripe.com. If you describe your team & codebase in the email, it will help us figure out which cohort to include you in.


I honestly think that, more than Matz reconsidering his own opinions, it probably turned out that having types is instrumental to enabling performance improvements.

Keep in mind, Ruby development is headed towards a goal that the dev team has called "3x3", as in: Ruby 3 aims to be three times faster than the current Ruby implementation.


My recollection is that 3x3 is a goal to be 3x faster than Ruby 2.0; presumably many of those gains have already been realized, so best not to depend on tripling _current_ performance.


Yeah, the NES emulator that's one of the benchmarks is about 1.8x faster so far, so roughly another 1.7x to go on top of that.


>it probably turned out that having types is an instrumental thing to enable performance improvements.

I was disappointed to find out that adding more types in Perl6 actually slows down performance.

I wonder what the differences are that adding types in one language speeds it up, while adding types in another language slows it down.


It depends on what the type annotations do. I'm not sure how Perl 6 does it, but in Python, for example, type annotations are completely ignored at runtime, so they have no impact. We'll see how much, and for what, Ruby 3 actually wants to use the type information. Sorbet on its own is unlikely to affect runtime either.


Going to keep praying for type/performance optimizations in Python so we can all get past the "python is slow" thing.

Async python is an absolute joy to develop with.


Care to elaborate on what type of work you're doing and which libraries you're using?


Libraries:

aiohttp: web framework

aiopg: async postgres driver with SQLAlchemy support

asyncssh: async ssh library with SFTP capability

I generally work on CRUD microservices to automate some steps of a business workflow - activating/registering a resource with our vendors, generating and updating pricing, picking up new files off an FTP site and processing.


Type checking happens at runtime, although if the static optimizer can figure out at compile time that a certain call will never work, it will throw a compile-time error.


On the other hand, if the static optimizer can figure out that a certain call will always work at compile time, it can remove the runtime check for that part.


Could you elaborate on how you got to the conclusion that adding types in Perl 6 slows things down? They shouldn't, unless you create types that actually run Perl 6 code during type checking. Which is usually not the case.


>Could you elaborate on how you got to the conclusion that adding types in Perl 6 slows things down?

I was playing around with adding types to everything in my program, creating a kind of little Haskell-style script. I was sad when it ran slower than it did without the types. Someone informed me that that is the expected outcome, because (as you said) type checking is done at run time.


The thing is that even if you do not specify a type, you've implicitly specified the `Any` type. And type checking (which always happens at runtime, whether or not you've explicitly specified any types) will be done against that.

So I'm very curious as to what code exposed a slowdown after explicit types were added.


What is the rationale for adding types to a language that will still retain all the performance penalties of dynamic typing, given the need to interact with untyped data?


The story didn't start as 'add types to Ruby'. It started with someone having a codebase in the hundreds of thousands of lines of Ruby, dedicated to financial software, and the costs of keeping that codebase safe: in those situations, you can go as far as to evaluate how much each bug deployed to production cost you.

Quite a few large companies have found themselves in this situation: Very large codebases in a programming language without types stop being fast to develop in. Then you get to either rewrite everything, with the well documented risks, or start doing all kinds of other things to make programming safer, like banning certain parts of the language, until eventually dedicating a team to improve the language is the most cost effective way to go.

In this case, I am also pretty certain that the interaction with data started having informal types a while ago too.

What I find really interesting here is that what starts as a library to help a single company handle the subset of Ruby they were using in the first place now aims to be good enough for general purpose Ruby outside of said company. It's one thing to have problems with an experimental, home-made thing, and just get support via slack, but adding this to the language has a far higher barrier. This is also probably the reason it's not OSS yet: The code that is enough for production use in Stripe's approach to Ruby might not be the greatest in a random codebase with different opinions on how many dynamic methods you want to have.

So it's not that a team decides to add types to Ruby instead of just picking a language that already has types: it's solving a private problem and, a while later, realizing that the solution is accidentally very close to being good enough for the language at large.


A lot of great insight in this comment.

The only difference is that Stripe foresaw the problems and has been working on productivity for quite a while, with a dedicated group of people who help our engineers by building tools and abstractions. For example, https://youtu.be/lKMOETQAdzs was done by the same org a couple of years ago.


The rationale is that types are there to check correctness (types as proofs), not to improve performance.

Speed is not the first and foremost benefit of types. Type checking is (and other stuff that comes with that, like better completions, self-documenting code, etc).


Who's to say we couldn't use the types to make the runtime faster in the future?

One of the reasons why Sorbet does runtime checking[1] in addition to static checking is so that we can know that signatures are accurate, even when a typed method is called from untyped code.

If the signatures are accurate, a future project could take advantage of methods' signatures to make decisions about how the code should actually be run. If the signatures lie, then any runtime optimization made using the types would only be overhead, because the runtime would have to abort the optimization and fall back to just running the interpreter.

[1]: https://sorbet.org/docs/runtime
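To make the runtime-checking part concrete, here's a toy sketch in plain Ruby. This is not Sorbet's actual API or internals, just the general idea: the declared signature wraps the method, so mismatches are caught on every call, even when the call site is untyped.

```ruby
# Toy sketch of a runtime-checked signature (NOT Sorbet's real
# implementation): wrap a method so its argument and return types are
# verified on every call, even from untyped callers.
module ToySig
  def sig(method_name, params:, returns:)
    original = instance_method(method_name)
    define_method(method_name) do |*args|
      args.zip(params).each do |arg, type|
        raise TypeError, "expected #{type}, got #{arg.class}" unless arg.is_a?(type)
      end
      result = original.bind(self).call(*args)
      unless result.is_a?(returns)
        raise TypeError, "expected #{returns} return, got #{result.class}"
      end
      result
    end
  end
end

class Adder
  extend ToySig

  def add(a, b)
    a + b
  end
  sig :add, params: [Integer, Integer], returns: Integer
end

Adder.new.add(1, 2)     # => 3
# Adder.new.add(1, "2") # raises TypeError even though the caller is untyped
```

Because the check runs on every call, it's also exactly the kind of check a future runtime could trust (or elide) once it can prove the types statically.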


Well, I am skeptical about performance improvements due to type annotations as well. Other languages have similar systems and didn't get faster.

Dart had gradual types but didn't enforce them at runtime because of performance. The PyPy devs don't believe that type annotations help them for performance (http://doc.pypy.org/en/latest/faq.html#would-type-annotation...). Also there is no JS engine that uses TypeScript annotations so far to improve performance.

Types are usually on the wrong boundary: e.g. Integer doesn't state whether that value fits into a register or is a Bignum.

Also: Aren't some type checks quite expensive? So more expensive than a simple map/class/shape check? E.g. passing an array from untyped code to a signature with something like `Array<Integer>`. Wouldn't a runtime that verifies signatures have to check all elements in the array to be actual Integers?
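For a sense of the cost: a shallow check is a single class comparison, while a deep check of `Array<Integer>` has to visit every element. A plain-Ruby illustration (no particular type checker assumed):

```ruby
# Shallow check: a single class comparison, O(1) regardless of size.
def shallow_check(arr)
  arr.is_a?(Array)
end

# Deep check: verifying Array<Integer> means inspecting every element,
# so the cost is O(n) in the array's length.
def deep_check(arr)
  arr.is_a?(Array) && arr.all? { |x| x.is_a?(Integer) }
end

ints = Array.new(100_000) { |i| i }
shallow_check(ints)            # => true, one comparison
deep_check(ints)               # => true, but 100,000 comparisons
shallow_check(ints + ["oops"]) # => true, the shallow check misses the String
deep_check(ints + ["oops"])    # => false, only the deep check catches it
```

This is presumably why a runtime verifying signatures would prefer to check only the collection's class at call boundaries and leave element-level guarantees to the static side.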


It's because PyPy relies on traced runtime statistics for optimizations via inlining. There's another approach where you translate your typed program into a lower-level target language and compile it into a native binary. See https://github.com/mypyc/mypyc and https://github.com/cython/cython/wiki/Python-Typing-Proposal


Dart has long changed.


True, that's why I wrote "Dart had".


I work on an alternative Ruby implementation, and it looks like I'll be able to take these type definitions and use them to add extra type constraints to my intermediate representation very easily - just insert a type constraining node around each expression that's annotated with a type. It'll remove extra guards and increase performance, so should definitely be an option.


Thank you, Chris, for all your work on Truffle! Looking forward to using it.


I programmed C back in my high school years, 14+ years ago. Today I mostly use JS because it is the cash crop of the industry, and it made me quite some money when I was away from the electronics business (my main occupation) for a year after running into trouble with Canadian visas.

To me, it feels that there is a very thick wall in between high level languages and something with raw data access like C, C++, and D. You either completely throw out every convenience feature, or go all in on them.

In C, a lot of data access turns into single digit number of load/store or register access instructions. It is easy to see that it is close to impossible to add fancy data access functionality on top of that without going from single cycles to kilocycles.

I was once told "when you try improving a programming language's performance, it eventually turns into C"

P.S. on JIT: it is not a given that a JIT'd language will automatically be faster than a well-written interpreter on a modern CPU. One of the early tricks for making fast interpreters was to keep as much of the interpreter in cache, and data in registers, as possible, to benefit from the more or less linear execution flow of unoptimised code, in contrast to the unpredictable flow of JIT-generated executable code. Today, with 16MB caches, I think that benefit will be even bigger.


> I was once told "when you try improving a programming language's performance, it eventually turns into C"

Which is kind of ironic, given how bad the code generated by C compilers was during the mid-80s versus other mainframe languages.


>To me, it feels that there is a very thick wall in between high level languages and something with raw data access like C, C++, and D.

That is why we need something that offers 80% of the speed of C, 80% of the simplicity/expressiveness of JavaScript/Ruby, and 80% of the ease of long-term maintenance of a functional PL like OCaml.

I actually think Java will one day evolve very close to that goal.


Types are the means of "improving language performance" without turning into C. It's all about encoding invariants for the optimizer.

I doubt a competent JIT is ever slower than a competent interpreter, but it may not be that much faster or worth the workload.

It depends on the size of the primitives. An array language could be close to 1:1, while for CPU-level instructions you will struggle to reach 1/6 of JITted perf.


You forget this is Ruby, which will not have a competent JIT. It has such a bad JIT, with enormous overhead, that only very special cases will be faster; most cases are slower and will wait on locks or be racy.


With types you get more compile-time checks: safer code, better documentation, and the possibility of improving runtime performance and FFI. With a guaranteed int you don't need to check for Bignum overflows, and you can avoid all runtime type checks. A typed FFI struct can be used as-is for the FFI, as raw data; strings are guaranteed to be 0-delimited.

In certain basic blocks, typed ints or floats can be unboxed if they do not escape. This is what made PHP 7 2x faster. The stack will get much leaner, simple arithmetic ops can be inlined using native variants, and ops on typed vars cannot be overridden.


Optimizing is possible even in those cases. A JIT usually runs a function hundreds of times to collect type data before attempting to optimize the function. Types can be used to pre-fill that type data. The JIT can then optimize immediately, but still bailout if the wrong types show up in the future. I wouldn't think such an approach would yield huge benefits overall (the most used code will be optimizing pretty quickly anyway), but on server apps, it could speed up edge-case behaviors a bit.

Another feature of even optional types is creating uniformity to allow JIT optimization. A great real-world example of this is Typescript or ReasonML. It's converted to JS, but still winds up faster on average. The JS JITs have multiple tiers of optimization. Changing data types and function signatures are the biggest performance killers. If you can ensure a list is always strings or numbers, then the optimizer can reach the top tier of optimization. When lots of people work together on untyped languages, there tend to be small changes in the signatures and structures that drop you out of that top optimization level. Even partial types are useful for preventing this.

Related to that is the potential for runtime type warnings. Even though the types aren't used by the JIT, it should be possible to give a warning message if the received types don't match up. That could be a huge assistance in finding where a bug is located.
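A minimal sketch of that last idea in plain Ruby (a hypothetical helper, not any existing library): check the declared types at runtime but only warn on mismatch, so program behavior is unchanged while the logs point at the bug.

```ruby
# Hypothetical helper: warn (rather than raise) when arguments don't
# match the declared types, leaving program behavior unchanged.
def check_types!(declared, args)
  args.zip(declared).each_with_index do |(arg, type), i|
    unless arg.is_a?(type)
      warn "type mismatch: argument #{i} expected #{type}, got #{arg.class} (#{arg.inspect})"
    end
  end
end

def price_in_cents(amount, currency)
  check_types!([Numeric, String], [amount, currency])
  (amount * 100).round
end

price_in_cents(19.99, "USD") # => 1999, silent
price_in_cents(19.99, :usd)  # => 1999, but logs a warning pointing at the bug
```

The warn-only mode trades safety for zero behavior change, which is exactly what you'd want when first turning runtime checks on in production.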


Readability most likely. Type checking tends to also reduce basic bugs from mismatched inputs as well.


So that you can gradually add types?

Now that Ruby has an actual JIT compiler, it could benefit from typing to optimize code further. And a gradual migration process will help people speed up parts of their code, unless they mess it up like Python, where abstractions are costly.


Fascinating to see the circle turn further back towards strong / static typing.

One of the major things that has kept me using Groovy over the last 10 years was the reluctance to leave optional/gradual typing behind. Now nearly every major dynamic language has given in and introduced types, so it seems this idea of hybrid dynamic/typed languages is fully mainstream. The problem, of course, is that they are all built on a legacy of untyped code, not to mention giant communities of people with no culture or habit of typing their code. So it's not clear to me that any amount of added language features can actually compensate for that.


I get what you mean, but dynamic typing is a feature, just like static typing is.

Some languages are better from a static POV, and offer some auto features. Some languages are better from a dynamic POV and offer some hinting feature.

You don't want to type your code to do data exploration and analysis, but you may want to extend the original project later to something bigger and move on to types.

There is no such thing as the perfect language for everything anyway. Plus, it's very good that some languages integrate features unnatural to them, for the cases where you want to go beyond their initial best-case scenario. It won't be perfect, but I don't need perfect, I need pragmatic.

The world of programming is vast, the pool of programmers very heterogeneous, and the constraints are super diverse.


> There is no such thing as the perfect language for everything anyway.

People tend to forget this all too easily. For example, most of the static type discussions of the past 10 years have taken place on a website built in a dynamic programming language: http://lambda-the-ultimate.org/, which afaik is built on Drupal (i.e. PHP).


To elaborate, I think people don't care what medium they have their discussions in as long as it works.

Are you not going to use StackOverflow because it is written in C# using Microsoft servers instead of Go on Linux?

Did you not use Twitter originally because it was built on Rails?

10 years ago... what CMS was popular in a statically typed language? Hell, today... what statically typed CMS is popular?

But if that still matters to someone, PHP now supports types!


I think it's interesting to see mostly-static languages implement dynamic features too. For example, C# is mostly statically-typed. But a while back, they introduced the `dynamic` variable type, which makes that variable actually dynamically typed. Once a variable is dynamically typed, you can assign anything to it, call any function on it with any arguments and put its return into any statically-typed variable. It all gets type checked at runtime and blows up then if what you called doesn't actually match any functions on that object.

You could theoretically do all of that before anyways with clever use of reflection, but this makes the compiler create all of that extra code for you from what looks like normal code.


When Apache Groovy had dynamic types only, it was marketed as a complement to Java, but when Groovy 2 came along with static typing added, its backers started pitching it as a replacement for Java, even targeting Android. This endeavor was ultimately unsuccessful, though, and the programmer who wrote the static typing enhancements recently pulled out of Groovy's project management group at Apache. Groovy doesn't run on Android anywhere I know of, and converting large swaths of dynamic code to static using @CompileStatic doesn't work -- you need to repeatedly compile the code and add manual type conversions all over the place until it all runs OK.

Best use Groovy for dynamically typed scripty stuff only, and a JVM language built with static typing from the ground up for building the actual systems, such as Java, Scala, or Kotlin.


I expect a split between very strictly coded and strongly typed everywhere libraries and shared components, and looser high level scripts mix and matching when convenient.

That could be the best of both worlds.


We're collaborating with @yukihiro_matz, @mametter, @soutaro and Jeff Foster to make sure that types are not disruptive to Ruby. Thus, types are optional. The intention is to deliver value for unmodified Ruby programs. Hear more from Matz at https://youtu.be/cmOt9HhszCI?t=2148


So, something along the lines of https://github.com/soutaro/steep but without the annotations in the original source code, because Matz said "no annotations". That's nice because it doesn't pollute the code. I use Ruby because of Rails and because I don't have to write types. I can use many other languages if I want to write them.


As long as types can be required to be explicit where ambiguous (e.g., TypeScript) in the file itself (via a magic comment or similar), I'm all for it. I am happy to declare types for external calls if I need to.

I have said for awhile that "Ruby with types" would be my favorite language to work in. I recently returned to Ruby briefly and had to integrate with a poorly-documented API. I spent more time digging through third-party code trying to figure out what certain parameters were supposed to be than writing the program itself.


I haven't used it, but have you looked at the Crystal language? I think the idea is basically "statically typed Ruby".


I've heard of it and looked at it a bit, but haven't had a chance to use it yet!


Could Elixir be a good alternative? It has typespecs: https://elixir-lang.org/getting-started/typespecs-and-behavi...


Learning from the clojure.spec success story, what might make/break Sorbet is its runtime capabilities, aka reification.

* Can I emit typed REST API docs out of sorbet types?

* Can I coerce HTTP params out of sorbet types?

* Can I emit ActiveRecord columns? ActiveModel validations?

* Can I emit generative tests?

You can do all of those (and whatever else you imagine) with clojure.spec in a DRY manner, i.e. types are defined once, and reused in a variety of contexts.

As a Rails dev, I would greatly value all of those, particularly because they're practical things directly related to my webdev activity. Ensuring the type safety of the codebase is great, but it's also implicitly exercised by an adequate test suite.


This is HUGE! Ruby's pace of development continues to impress. It's always been an impressively practical language and keeps getting better.


It's an interesting turn of event that Ruby, Python and JavaScript are all getting types.

Meanwhile, I've gotten myself more and more into Clojure, which, now that other dynamic languages seem to be moving closer to types, sits in a niche of its own: Clojure is moving further away from them.

It'll be interesting to see what happens at both extremes and in the happy middles.


>Clojure is moving further away from types

What about clojure.spec?


I feel Spec is a part of that "move away from types" which I was talking about. It's an approach to software documentation, specification and verification that is at the other end of the spectrum from types.

Clojure seems to have doubled down on dynamism and runtime constructs, away from static types. It seems to have made the bet that better software (fewer defects, cheaper to maintain and extend, more targeted to the users' needs) is better achieved through:

* Simpler primitives

* Immutability

* Interactive development

* Higher-level constructs

* Data-driven DSLs

* Generative testing

* Contract specifications

* Data specifications

Which are all very good ideas, but they're non traditional compared to formal static type systems and proofs.

There used to be more drive behind these ideas in the past; Common Lisp and Eiffel embody a lot of them, but miss on others. So Clojure is like a new take, trying to fit all these ideas of interactive, dynamic, safe languages together anew.

And I just find it interesting, because it is counter current. As others have pivoted back to static types, Clojure went all in on dynamism.

Time will be the true test, and I'm looking forward from the learnings in all directions.


It's also an interesting case of the benefits of language extensibility. Spec is just a library. And its spiritual predecessor, Prismatic/Plumatic Schema, was also just a library. This enabled the makers of spec to learn from a body of usage when creating it, with an eye toward making it a standard feature: first as the spec.alpha library and hopefully now in its final form.

An always-on static type system could not develop this way.


Yes, clojure.spec enjoys a great momentum and acceptance in the community. It's being used for a great deal of use cases from safety to generative testing to HTTP params coercion.


Why not a standard Common Lisp instead? Or even Scheme/Racket?


Well, I'm not seeing as much activity on CL. But for me, it's mostly a practical reason. I can easily use Clojure or sprinkle some around in most enterprise context because it runs in a symbiosis with existing platforms like the JVM, the CLR, and the various Javascript VMs. So it's much more usable in my day to day.

I also feel that CL and Racket have embraced types a lot more. Doesn't CL have a fair bit of static typing already? And with Racket, Typed Racket has pretty much pioneered the concept of gradual typing now being applied to JS, Python and Ruby.

I know Racket also explored contracts, and has a lot of great ideas. But I feel overall it's missing the: "and we dog food it all on real business use cases in production" aspect that Clojure has.

And for CL, it doesn't seem to have as much in terms of contracts, data DSLs, immutability, simpler primitives, etc. It feels more like a traditional mutable, OOP, dynamic language. It has nailed down the interactive development part though. I don't want to put it down as I'm interested to try more of CL, but overall, it just doesn't seem as active or opinionated anymore. If anything, CL seems to lack any form of opinion, and goes more for the: we just add all features of every other language. Which is a quality on its own, but not driving the discussions forward either.


CL has static typing and such in at least three important senses that aren't usually called out explicitly in these discussions. First, you can annotate types and some compilers (notably the popular SBCL) can warn you, at compile time, about issues around using the wrong types, undefined vars, unused vars, wrong arguments passed to function, etc. I've read somewhere that's all based on Kaplan-Ullman type inference developed in the '80s, along with an example implementation for CL: http://home.pipeline.com/~hbaker1/TInference.html This is the sense of using types to help you write more correct software. It's obviously not as rigorous as Java/Haskell/Shen and not everything can be done at compile time. On the other hand CL has very flexible type definitions, so you get possibilities like "integers 2,3,4,5" as a type rather than limited to "all integers" or custom types like "is keyword :a or :b or :c".

Second, compilers can use type information (or inlining hints, etc.) to compile efficient machine code, which you can inspect with the built-in function 'DISASSEMBLE.

Third, types are used in the CLOS system to support efficient multiple dispatch for multimethods, and the rest of the CLOS and MOP machinery that makes it a very "non-traditional" (despite CLOS being first to ANSI standardize) OOP system with a lot more power than other fashionable languages provide. I'm basically in agreement with the title of http://www.smashcompany.com/technology/object-oriented-progr... with the caveats that Lisp is different and an exception (even if not perfect, there's an unfortunate mismatch in methods not allowing any type for dispatching on, though you can work around it in a similar fashion to Clojure's multimethods of dispatching on a runtime value) and that carefully designed Java can make the forced OOP tolerable.

CL is also super dynamic and lets you redefine basically everything so none of this is truly "static", and that's why compilers will warn rather than error, and runtimes while developing will preserve your state and drop you in a debugger instead of destroying state, printing a stacktrace, and giving up, because you can fix it and recompile that little bit or rerun after defining a missing variable. CL's conditions and restarts system has yet to be convincingly cloned by other languages.

Contracts, DSLs, immutable data structures, simple primitives, and other things (some not present in any other languages) are available in CL... My own reading has found that a lot of them have been there or in the predecessors to the CL standardization or in things built on top since for a long time (some well before I was born) and explored by big production business applications, not just academic exercises... Some of course are more modern transplants as they haven't gotten popular until recently. But for the things that were there already, in a sense the discussions have already been done and may help explain the lack of driving force for them now. In other languages I see them driving themselves close to where CL already is more often than driving to a completely new place (but then CL provides and lets you go there too, or at least somewhere close). You're free to use these things in CL, or not; you're right that it's not an opinionated language and I'd argue never was. Fortunately there's enough capability for modularization that we can have different opinions (e.g. the meaning of syntax like [Click me] in a UI component) and still trivially share code. It's a shame that Clojure code and CL code can't be trivially shared.


I understand why PHP started to add support for type annotations as the hype around type annotations (Dart, Flow and Typescript) still was quite strong a few years ago.

By now I think it is quite obvious that type annotations aren’t as helpful as initially expected and that a library approach seems more pragmatic and more powerful. See Clojure + Spec.

The thing is, dynamically typed languages with type annotations tend to no longer feel like dynamically typed languages, as the annotations and the tooling spread and spread and spread. It's not easy to put up boundaries.


Types are massively helpful with JavaScript. I’ll never write untyped JS again if I can help it. Switching to typescript has done wonders for my productivity and code quality.


Agreed. Unsurprisingly a lot of libraries are being rewritten in TS, including some high profile ones. I've been writing JS for a decade and TypeScript was a game changer for me.


Tell me a few... to me, types were useful for hinting in the IDE, but VS Code already gives good hints.


Types are not just for autocompleting, but also for making illegal states unrepresentable[0].

For example, let's say you have a Question model with two variants: MultipleChoice and ShortAnswer. In TypeScript you can model it like this:

    type MultipleChoice = {
      mode: 'mc'
      body: string
      choices: string[]
      expectedAnswer: number
    }
    type ShortAnswer = {
      mode: 'sa'
      body: string
      exampleAnswer: string
    }

    type Question = MultipleChoice | ShortAnswer
TypeScript's compiler will then enforce data structure consistency across your entire codebase. For example, if you were rendering a question in React:

    type Props = {
      question: Question
    }
    const MyComponent = (props: Props) => {
      props.question.body // Ok, since all questions have a body
      props.question.choices // Type error, since only MC has choices

      if (props.question.mode === 'mc') {
        props.question.choices // Ok now, since we checked the mode of question
      }
    }
You can also use these types to force certain code to always be correct. For example, if you wanted to display a human-readable version of a Question's mode, you could write:

    const prettyQuestionType: Record<Question["mode"], string> = {
      mc: 'Multiple Choice',
      sa: 'Short Answer',
    }
and now TypeScript will force prettyQuestionType to contain keys for all modes. That includes when you add a new Question mode later.

Once you learn how to lean on the type checker, you think less about such details, and your mind becomes freer to think at a higher level, increasing your overall productivity. There is a learning curve though, so be aware.

[0] https://fsharpforfunandprofit.com/posts/designing-with-types...


This indeed looks useful. So you can give multiple types by using '|'? And if there is ambiguity, the compiler will let you know... mind blown.


> I understand why PHP started to add support for type annotations as the hype around type annotations (Dart, Flow and Typescript) still was quite strong a few years ago.

PHP started adding type hinting (aka specifying types for function arguments) in 5.0.0, back in 2004. Dart didn't exist until 2011, TypeScript until 2012, and Flow (I assume you mean the FB tool) didn't exist until 2014, as best I can tell.

>By now I think it is quite obvious that type annotations aren’t as helpful as initially expected

My only take away from this is that you obviously haven't used PHP's type system.


Is it obvious?

There seems to be a never ending cycle of new languages that are dynamically typed because it is easy for small codebases, which then become popular, get large codebases and then realise that static types are actually a really good idea.

Python, Dart, Ruby, JavaScript (via Typescript), etc...


The biggest problem for me with gradual typing is code clutter. My favourite languages are Clojure and Ruby precisely because they reduce code clutter. What I would prefer, if we are to have types, is for the signatures to go in a companion file. I've never understood why types have to be inlined. A good IDE can easily provide the signature in a mouseover or something similar.


The way to reduce clutter in strongly, statically typed languages is to use strong, robust type inference.

For example, Java is pretty terrible at type inference (still) and you have to annotate types almost everywhere (Java 8 had a very tepid improvement on that front.)

But languages like Haskell and Rust are very good at type inference, and you almost never actually need to specify the types.

It's still good Haskell style to always annotate the type sigs of top-level functions. Why? Because they serve as more than just hints to the compiler: they are part (and a very important part!) of the documentation. That is why they're in-line. A function like

    zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
tells you what it does in its type signature.


There's nothing lost by putting the sig in a companion file and leaving it to your editor/IDE to provide a popup.

Java 10 and 11 introduced real type inference, at least for local variables (`var` in Java 10) and lambda parameters (Java 11).


> There's nothing lost by putting the sig in a companion file and leaving it to your editor/IDE to provide a popup.

I don't want to go back to having to keep C header file in sync. Your IDE can hide that information from your as well, if you don't want to see it all the time.


It's been a while since I wrote Ocaml, but IIRC the compiler yells at you if your mli files are out-of-date, for whatever that's worth. So keeping them synced isn't really an issue.


”There's nothing lost by putting the sig in a companion file and leaving it to your editor/IDE to provide a popup”

It requires every (1) editor and IDE on the planet to add code for doing that, which means every (1) programming language on the planet needs a library for parsing such companion files, for the benefit of ??????

Except for historical corner cases such as original java with its repeated type annotations that make code with types tedious to read, I wouldn’t know what that benefit would be.

(1) that ‘every’ is a bit of hyperbole, but essentially true.


Types in a companion file?! Ever written C/C++ with "companion" header files? That is clutter, my friend.

Half of the documentation will be in the header file and the other half in the implementation file and you will have to edit two files for every tiny change you make. No thanks. Types are part of the code and should be as close to the code as possible to reduce any possible source of friction while editing.


what's cool is that you can write your types in another directory / another repo, e.g. https://github.com/sorbet/sorbet-typed


Nice! Reminds me of crystal[0], the LLVM-compiled ruby-alike language.

0: https://crystal-lang.org


Except crystal's type system seems much more capable & powerful.


The trend of adding type annotations to dynamically typed languages is now unstoppable. I wonder if some more exotic features (eg. side effects handling, monads or dependent types) will ever become mainstream in the future.


It's hardly unstoppable its been there for literally decades. Common lisp had it for a very long time for example and has a few compiler implementations that are really quite sophisticated.

The problem is that most popular dynamic languages are really quite terrible. They have atrocious runtime environments and usually quite limiting language semantics.


It was not mainstream back then.

I agree that most popular dynamic languages are quite terrible. But, honestly, I think the real problem is not the particular implementations, but the whole idea of dynamic typing. At first it did make sense, but now that compiler writers have figured out "cheap" and general type inference, I don't see the point anymore.

However, I use Python on a daily basis because I have no decent alternative for the libraries I use.


You could argue that something that has been around for decades is unstoppable :)


I don't use Ruby day-to-day other than a few small tools, but why not focus efforts on evolving Crystal [1] to make it more suited for rapid web development? It already has a powerful type system and incredible performance, and should be an easy transition for rubyists.

[1] https://crystal-lang.org/


Because it is about Rails and tons of useful gems which would need to be ported 1:1 to Crystal, plus keep compatibility with CRuby for some time. Too much effort which nobody would pay for.


I agree with this sentiment. I like the type checking in Crystal, and it is pretty much the newer, younger brother to Ruby. I don't see the issue of leaving Ruby pretty much 'as is' so that legacy code does not break, and focus on making Crystal a much better evolution of Ruby.


Why should the people who built and maintain Ruby focus their efforts on a different language?


You can ask the creator himself: https://github.com/mruby/mruby


Interesting. I thought the Ruby community generally prefers shorter code, e.g. `to_s` instead of `to_string`, and yet that type signature is very verbose: `sig {params(x: Integer).returns(String)}`
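For illustration, here is that shape in use, with a hypothetical no-op `sig` standing in for the real sorbet-runtime gem so the snippet is self-contained (in real Sorbet, the block is recorded and checked):

```ruby
# Toy stand-in only: a fake `sig` that ignores its block, so this runs
# without sorbet-runtime. The call-site shape matches the comment above.
module T
  module Sig
    def sig(&blk); end # real Sorbet records and verifies this block
  end
end

class Example
  extend T::Sig

  sig { params(x: Integer).returns(String) }
  def stringify(x)
    x.to_s
  end
end

Example.new.stringify(42) # => "42"
```

Verbose, yes, but since the signature is plain Ruby it sits next to the method rather than replacing its name.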


The type signature in (secure) ruby [1] - an alternative ruby (subset) with optional type annotations - is `sig Integer => String` or `sig [Integer] => [String]`. Since the sig is just ruby you can create an alias for types, e.g. I = Integer, S = String, and then use `sig I => S`, for example. [1]: https://github.com/s6ruby


Integer and String are the actual classes:

    “foo”.class # => String


There is something similar (but more powerful) for Smalltalk: it collects the types as you run the code and then uses them to improve refactorings.

Check it out. https://github.com/hernanwilkinson/LiveTyping


Why do these efforts have to move into Ruby proper? Why can't sorbet or steep stay their own thing, and if it solves your Stripe-like-codebase problems, great. What I don't see a lot of here (or in general these days) is advocacy for the advantages of dynamic typing. And if you're objective, there most certainly are, even if they're not worth the disadvantages, or don't surpass the advantages of static typing for you, personally.

But Ruby used to advocate for them, and it's definitely what drew me in. I find it disappointing that we're moving away from that. More and more, it seems we’re attempting to make Ruby all things to all people. Which eventually makes it the right thing for no one.


How do optional typing annotations break ruby/Python/... for you?


Well, I think there's a bit of mandatory-ness that comes with adding it to Ruby itself. Sounds like the standard library is going to ship with rbi files defined, for instance. Plus, tools for generating rbi files. On some level, it's an endorsement to do things this way, right? And that's before it (potentially) becomes a community practice to do so.

If it's not, why not leave these solutions in gems?

Btw, I don't think static typing alone is Ruby becoming all things to all people. In recent history, it's also aliasing `Enumerable#filter` to `Enumerable#select`, numbered block arguments, a shorthand special notation for `Object#method` -- it feels like a trend of "hey these other languages do this, we should too". I'm not convinced that's always the case.


Partnering with Ruby Core is a bit dubious for a project which is still closed source.

Why the privacy? Are programmers too dumb to understand something is a beta?

What if in the end adoption is marginal and Ruby Core's time was wasted?

Best adoption is organic, not hyped up.


From the slides that are linked in the tweet: https://sorbet.run/talks/RubyKaigi2019/#/45


Ruby already had types, no? This is about static typing.


If you're going for precision, then probably: type-annotations. The runtime doesn't change with sorbet. All the verification is via an external tool. So there's no static typing - your code can still violate the rules.


My thought too. Interesting what the definition of "static" will turn out to be in the context of something so inherently dynamic as Ruby.


FYI: For an alternative ruby (subset) with type annotations today see sruby, that is, secure ruby - https://github.com/s6ruby


I recommend watching "Ruby3: What's Missing?", a presentation Matz gave earlier this month: https://www.youtube.com/watch?v=cmOt9HhszCI

This might be misleading. That is, jump to around the 29 minute mark where he talks about the type profiler and .rbi file stuff.


As a user of Homebrew, I just wonder if Ruby's ever going to have performance.


homebrew's performance is mostly network (git / http / https) and compilation times when needed.

also for some reason homebrew really likes to update its index all the time (I think it got tamed in the newest version), but setting HOMEBREW_NO_AUTO_UPDATE to 1 helps a lot.


With HOMEBREW_NO_AUTO_UPDATE and HOMEBREW_NO_GITHUB_API, it still gobbles up a core for a full minute to find a substring in a list of strings. Even if it looks through directories for this, I can't imagine how this task would perform so badly. It's not system cpu time primarily.


Does homebrew still download sources and compile them? I thought it moved to only downloading binaries.


It is no worse than Python, but with the 3x3 initiative the main implementation will be a lot faster than today, which will never happen to Python unless the current lead goes 180 degrees against what Guido always claimed.


By the way, could you please summarize as to what biggest improvements you expect from Ruby that are against Python's policy?


> It is no worse than python

Eeeeeeeh…


Python and Ruby have about the same speed for most common tasks (or at least are in the same ballpark), i.e. dirt slow once you leave the comfort of the fast parts of the runtime that are written in C.

In reality this is fast enough for most tasks.


The thing is, I do some Python programming for money, and I'm having a hard time imagining what the Brew team did to make `brew search` and its other parts so slow. I'd probably have to compare strings byte by byte in Python code for that.

Might have to learn me some Ruby just to figure out this mystery.


The slowness of brew isn't Ruby's fault. It doesn't keep a local cache of the taps, but instead searches for taps using an API that interfaces with GitHub, searching local taps, remote taps, then blacklisted taps, and then probably something more. It is limited by network speed, not by string searching.

Edit: explained better here: https://github.com/Homebrew/brew/issues/3056#issuecomment-32...


It's not limited by network speed because I have HOMEBREW_NO_GITHUB_API and HOMEBREW_NO_AUTO_UPDATE enabled. It's not like I just today stumbled into the problem.

Brew's not even consuming much system cpu time during `brew search`, despite hogging a core for a full minute.

Even just `brew help` takes almost 10 sec cold.


Are you using JRuby or something? 10 seconds is too long.

> time brew help

> brew help 0.55s user 0.26s system 96% cpu 0.849 total


Regular Ruby 2.5.1. I do have a throttled CPU due to no battery in the MacBook, but I still have Python and other stuff to compare, and Brew (or Ruby) definitely does something wrong on my machine. It's hugely CPU-bound without a particular reason for being so.


Weird. I'd guess Brew because MRI Ruby tends to have pretty good boot times.


Yes and no. You need to basically fork the compiler and invent a new type of ruby that sacrifices certain things in favor of performance.


FYI: The RubyKaigi 2019 Progress Report on Ruby 3 Talk Slides have more (from the source) info. See the slides titled "Static Analysis" [1]

Ruby 3 static analysis will have four items:

1. Type signature format

2. Level-1 type checking tool

3. Type signature profiling / prototyping tool

4. Level-2 type checking tools

and so on. [1]: https://docs.google.com/presentation/d/1z_5JT0-MJySGn6UGrtda...


My concern about all of this is that it might lead to basically two ruby communities; Rails and Rails devs will mostly keep writing type free code (dhh has always indicated he's not a fan of types), but a lot of other rubyists will gradually introduce types into their code. This could create two different ecosystems with different gems, best practices, blogs etc etc etc. We will see how it plays out but I'm quite conflicted about this one. The good thing is that it's optional.


It’s not in the tweet but we specifically covered that it’s possible to type check Rails in our talk actually:

- https://sorbet.run/talks/RubyKaigi2019/#/53

- https://sorbet.run/talks/RubyKaigi2019/#/55

So I don’t think that the divide will be at Rails. And more than that, I think there will be very little divide at all. Sorbet is designed to be gradual, so it works 100% fine with untyped code:

https://sorbet.org/docs/gradual


Hi thanks for this! Could you shed more light on the last part where they show parts of Rails, Gitlab etc are already typed? How is this possible?


Sorbet has multiple Strictness Levels[1]. The two most relevant ones are `typed: false` and `typed: true`. `typed: false` is the default level and at this level only errors related to constants are reported, like this one:

https://sorbet.run/talks/RubyKaigi2019/#/14

But we'd like to catch more than just errors related to constants, like those related to missing methods, or calling a method with the wrong arguments. Errors like these are only reported in files marked `typed: true`:

https://sorbet.run/talks/RubyKaigi2019/#/19

Sorbet doesn't need to have method signatures to know whether a method exists at all, or what arity that method has.

But more than that, Sorbet ships with type definitions for the standard library. So you don't even need to start annotating your methods to type check the body of your methods, because most of the body of your methods are calling standard library things (arrays, hashes, map, each, etc.).

The statistics in those slides are sharing "out of the box, what's the highest strictness level that could be added to a file without introducing errors?" Ideally an entire project would be made `typed: true`, but Sorbet can be adopted gradually, so a project can exist in a partial state of typedness. We wanted to see how painful it would be to adopt Sorbet in a handful of large, open source Rails projects, and it turned out to be not that bad.

[1]: https://sorbet.org/docs/static#file-level-granularity-strict...
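As a concrete sketch (class and method names invented here), a file opted into `typed: true` via the magic comment, where Sorbet can flag a misspelled method call statically even with no sigs, while plain Ruby would only raise NoMethodError at runtime:

```ruby
# typed: true
# Sketch: the sigil above opts this file into Sorbet's `typed: true`
# level, per the strictness docs. Everything below is an invented example.
class Greeter
  def hello
    "hello"
  end
end

g = Greeter.new
g.hello    # fine: the method exists
# g.helo   # typo: Sorbet reports "method does not exist" statically;
#          # plain Ruby would raise NoMethodError only when this runs
```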


Can you comment something about the possibility of Ruby getting faster because of Sorbet ? Can it be combined with DragonRuby or other compilers that would produce more optimized Ruby programs?


I’m curious which projects you used to try this out on?


I really like python's approach, which is to provide syntactic support for type annotations, but have them treated as pure comments by the language runtime. That way, type checkers can check your code if you like, but no one is forced to use typed code if they don't want to, even if they are using a "typed" library.


Yes, it is optional until it gets hard to find a job with typeless Ruby. And indeed it will split the Ruby community, just as with JavaScript. The bad thing is that static typing in dynamic languages is HOT, which means if you don't move over to the typed camp you will look old and stupid.


If anything, dynamically typed languages (without special tooling) are for super smart people or people who are lying to themselves/others about the limitations of the human mind.

But I personally wouldn't hire someone who maintained that dynamic typing produced as good results and was reasonable for an even mid-sized project. They've either never had a long running project or they've never dealt with a big enough code base at that point, or they're simply being dishonest or lack self awareness. None of those are good signals. Not having worked on a project that goes on for long enough is fine, but having opinions on software maintenance in that case is foolish.


perl6 and python3 were my first thought. I think this is great though, but maybe they should change the naming of this new version completely so that they can make a clean cut. Rather have less backward compatibility and clean design as opposed to forcing a square peg in a round hole


typescript, es6, es5....


Great for Ruby (OP was arguably the most important compiler dev behind Martin Odersky on the Dotty/Scala 3 project), types for the win.


Why is everything moving to types?


Generally no one uses dynamic typing for the abilities it gives you. Do you declare string variables to later assign them to numbers? Do you dynamically add new functions and properties to objects? Do you ever really need the flexibility that dynamic typing is giving you?

If not, then why are you using a dynamically typed language? If you're not using its abilities then it doesn't sound like the right tool for the job.

It's like a cost/benefit analysis where none of the benefits you're using, and the cost is the total inability to validate, refactor, and navigate your code base.
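To make those abilities concrete, here is a plain-Ruby sketch (names invented) of exactly the things the questions above describe:

```ruby
# Reassigning one variable across types: nothing stops this in Ruby.
x = "hello"  # x holds a String
x = 42       # ...and now an Integer

# Adding a method to a single object at runtime.
class Widget; end

w = Widget.new
def w.describe
  "a widget method defined on the fly"
end

w.describe                        # => "a widget method defined on the fly"
Widget.new.respond_to?(:describe) # => false; only `w` gained the method
```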


> Do you declare string variables to later assign them to numbers? Do you dynamically add new functions and properties to objects? Do you ever really need the flexibility that dynamic typing is giving you?

Yes, yes, and no. I do most of my work in languages that prevent the first two, but when I do have access to this kind of runtime trickery I do use it when useful.


> Do you declare string variables to later assign them to numbers? Do you dynamically add new functions and properties to objects? Do you ever really need the flexibility that dynamic typing is giving you?

The issue is not "Do you purposefully do those things?", but rather "Do you have a call stack where you can't guarantee it won't happen by accident?" Type checking is not relevant when you know what will happen and want the dynamic/duck-typing behaviour.

Another issue is: I'd use a different framework which doesn't use Ruby, but this was the most productive framework at the time the codebase was started, and nobody will port that many lines of code to a non-dynamic language now. So the best course of action is to validate the current code is not overly-dynamic.


I think most rubyists do benefit from dynamic types. How easy would it be to build RSpec and Rails in Java? The whole dependency injection thing in Spring is in part a by-product of types making it way harder to test things. That's just one example.


Can you give a concrete code example you are talking about? What is your problem with DI with spring? Why do you feel it is a problem with static type checker?


DI for testability adds complexity to code and reduces readability. In Ruby it's almost always unnecessary to use DI, because in Ruby you can stub at runtime.

In other statically typed languages like Rust the type system itself eliminates the need for a lot of these tests, but at the cost of the mental overhead of expressing your logic in a way which will satisfy the type system.
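A minimal sketch of that runtime stubbing (class names invented), replacing a method on a single instance for a test instead of injecting a dependency through the constructor:

```ruby
# Production class talks to the real clock.
class Clock
  def now
    Time.now
  end
end

clock = Clock.new
# In a test, stub #now on just this instance; no DI plumbing needed.
def clock.now
  Time.new(2019, 4, 1)
end

clock.now.year # => 2019
```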


What jashmatthews said mostly. It adds bloat and isn't very readable. It's another "ceremony" that together with types, interfaces, generics etc increases lines of code. I can see the benefits for huge projects, but I don't like this way by default out of the box.


I believe it depends on the use case. If you notice, Stripe and Coinbase are among the first companies to use the type system. They are both dealing with financial systems and numbers in general, where having types would help a lot in catching errors and bugs earlier. I've worked on financial systems before in a dynamic language, JavaScript, and from my experience there were cases where a value entered as a string (in a text field) then needed to be passed around as an integer. Type systems would help catch bugs here or in similar situations.


Because it's easier to deal with your CI telling you that you made a mistake before a deployment, than with Rollbar telling you that you're losing money due to a stupid bug in a case you forgot to test for.


It does seem like there is a fad around moving to types, mostly because of typescript's current popularity.

Will be interesting to see how ruby handles types vs duck typing etc 10 years from now, when the new best practices have been figured out.


There is benefit to future proofing code with basic static analysis for type checking.

But it is always an incomplete solution because a) old code needs to be retrofitted, see TypeScript's way of defining type maps for vanilla JS or b) more commonly you keep the code around that's using unsafe types, effectively passing void*|Object|"choose your poison" around.


It's a sad state of affairs. Every programming language is just copying the 'next cool' feature from another language. Duck typing, deconstruction, functional stream-like constructs, you name it. I guess this ends when every language feature is copied and we get X omni languages with Y omni SDKs all having the same features with different syntax. The thing is that I only really need 1 omni language, not a dozen of them, so I feel that all the feature stealing in the end will be detrimental to all but the best supported omni language.

I guess some companies started fast with Ruby/Python and similar and instead of rewriting to static/typed languages they pushed forward features that would allow them to just continue where they left off at the expense of having a more concise problem oriented programming language that's good for solving specific problems.


Not really, what many seem to keep missing is that programming languages are products like anything else.

One buys into eco-systems, not language features bullet point list.

And there isn't something like an universal eco-system for any kind of business case, hence multiple languages.


Harder to make bugs. Better documentation. Typed code is easier to optimize - just look at Crystal's performance. Better IDE/text editor tools. Simpler deploy/distribution - just copy a binary file. I really wish there were optional types to prevent nil errors at compile time.


Because TypeScript has proven that a type system can be helpful without being clunky and annoying.


I thought the ML family of languages showed that long ago? I guess TypeScript popularized the notion.


The ML family has "type inference", which means the compiler figures out the type even if it's not explicitly written into the code. However, the language is still statically typed - an int will not turn into a string and vice versa (ex: "1").

In JavaScript and Ruby, the underlying types can change depending on where the code is in execution - a variable holding a 1 can turn into a "1" and back (implicit type conversion - try 3 * "3"). This leads to a whole class of bugs not possible in a statically typed codebase where explicit conversion needs to happen - I have no hard data, but I remember debugging this type of stuff far too often and far too many times when I could've spent my time better elsewhere. (but I actually like ruby a lot!)

Type checking is not the same as being statically vs. dynamically typed!


> a variable holding a 1 can turn into a "1" and back

This is true of Javascript, but not of Ruby.

  irb(main):001:0> 3 * "3"
  TypeError: String can't be coerced into Fixnum
People commonly conflate dynamic typing with weak typing, Ruby has the former, but not the latter (with some explicit exceptions, e.g. to_ary and friends).

That's not to say you can't still end up with some interesting problems though -- if we just slightly change your example:

  a = 3
  b = 3
  # later...
  a = "oops"
  product = a * b
  # product is now "oopsoopsoops"
But this isn't due to automatic "weak types" style coercion -- just that Ruby lets you build a repeated string by multiplying a string by a number.


The alternative view is that those so-called 'dynamically typed' or 'untyped' languages should really be called 'monotyped languages' since all variables and expressions have the same type: a giant union of all possibilities.

See https://news.ycombinator.com/item?id=8206562


I don’t know but I hate it. Not sure if I’m in the minority but it sure feels like it. In an ideal world, I feel that types are something that should be dealt with at the IDE level. In fact, there are so many things that can be done at that level, but no one has really been brave enough to do so, I suppose.


So, what, everyone standardizes around an IDE then? You really think that's gonna unite the vim and emacs camps?

I'm personally tired of staring at variables trying to figure out what they're supposed to be, then having to dive into source to see how it's used. C/C++/C# solved that problem, why are we still dealing with it?


C had some typing, but I'm not going to call it "solved" until "numberOfHats = distanceInPixels + weightInKg" is considered a compile-time error due to the three "int" values being incompatible; but "numberOfHats = aliceHatCount + bobHatCount" is acceptable.

How does nobody(?) support this yet?

Python supports some parts: you can subclass int, and you get all of the int methods like addition and subtraction for free, but "distanceInKm + distanceInKm" gives you an int instead of a distanceInKm; and "distanceInKm + distanceInMiles" gives you an int instead of an error.

Rust also has partial support but from the other end: distanceInMiles and distanceInKm can be two distinct subclasses of int, and adding them together is a compile time error. But also adding distanceInMiles with distanceInMiles is a compiler error, because these are basically "completely new classes" rather than "subclasses of int", and so you have to implement add / subtract / stringify / etc for yourself for every type D: (I'm fairly new to Rust so if there is a shortcut there that I'm missing please do point it out)
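For comparison, the same idea sketched in plain Ruby (class names invented): wrapper ("newtype"-style) classes that allow same-unit arithmetic but refuse to mix units, though the error surfaces at runtime rather than compile time:

```ruby
# Hypothetical unit wrapper: Km values add to Km values, anything else
# raises. A static checker could move this error to compile time.
class Km
  attr_reader :value

  def initialize(value)
    @value = value
  end

  def +(other)
    raise TypeError, "can't add #{other.class} to Km" unless other.is_a?(Km)
    Km.new(value + other.value)
  end
end

(Km.new(3) + Km.new(4)).value # => 7
# Km.new(3) + 4               # => TypeError (at runtime, not compile time)
```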


Every language that has generics supports this via phantom type variables that can encode extra information only in the type system alongside some other type, or with a specific newtype keyword that effectively does the same:

    newtype Pixel = Pixel Int
    newtype Em = Em Int

    pixelWidthToEm :: Pixel -> Em
    pixelWidthToEm (Pixel px) = Em px
You can try to call `pixelWidthToEm` with anything other than pixels and it won't work.

More dynamically, with an open type variable that only exists in the type system:

    data User a =
      User { name :: String, socialSecurityNumber :: String }

    data LogSafe
    data LogUnsafe

    logUser :: User LogSafe -> IO ()
    logUser = undefined

    makeUserLogSafe :: User LogUnsafe -> User LogSafe
    makeUserLogSafe = undefined
We can never log the user unless the user is deemed LogSafe and we make functions that produce log safe users that you have to call before hand, in order to make sure that sensitive data isn't printed to logs.

These are things that have been around for a long time in almost every type system, but people's general lack of interest in using type systems to help them conspires to keep them in the dark.

Here's how you can create a number type distinct from other number types in TypeScript:

    type DistanceInPixels = number & { readonly __newtype__: "DistanceInPixels" }
And a type alias that allows you to create them:

    export type Newtype<T, Tag extends string> = T & { readonly __newtype__: Tag }
    type Pixels = Newtype<number, "Pixels">


I get similar concerns with functions that take multiple strings - how do I make sure I didn't swap the bucket with the key? I've seen enums used here, as well as "wrapper" classes.

In any case, to answer the question of "How does nobody(?) support this yet?", have you heard of https://frinklang.org/ ? It's not a useful tool for most codebases I work on but it's an interesting idea.


You could do this in C++ by storing the unit of measure with the measurement value and then performing unit conversion in overloaded math operators.


(They won’t be subclasses in Rust, Rust doesn’t have classes nor inheritance. They’d be “newtypes”, a struct with one member.)


They do; the language is called F#.


It rarely happens to me that I stare at a variable and have to wonder what type it is. And yes, an IDE like RubyMine is becoming crazy good at autocomplete and method lookup. I think the experience of developing on RubyMine isn't that far behind IntelliJ nowadays. Not everyone has to use the same IDE; the vim or emacs guys will have to find equivalent tools.


Algol solved the problem.


What do you mean by types being dealt with at the IDE level?

Depending on your type system, a well-typed program can eg run faster, because the compiler / interpreter can elide certain runtime safety checks that would be necessary in untyped code.

If your type system is crazy enough, you can even track the runtime complexity of your program at the type level, including whether your program runs in finite time. See eg Dhall (https://dhall-lang.org/) whose type systems only allows programs running in finite time.


I think what GP means is that the IDE for a theoretical programming language could automatically infer types, so you'd never have to write them explicitly. It might even not show you types as code at all by default, overlaying/adding this info only on request.

Generally, there is this huge disconnect between how code is expressed as text and how it is handled in as a graph structure inside the tooling. It is soon time to move beyond simple text files for code, I believe.


ML family languages (like Haskell) had proper type inference for decades now. And yes, integrating that with your editor/IDE is a good idea.

Depending on what you want to do, you might also want to start with the types and have the computer figure out the implementation.

> It might even not show you types as code at all and by default and overlay/add this info only on request.

Type annotations are often great documentation, and there's a lot of practical knowledge in the ML communities about which type annotations to show in the source to help with understanding and debugging, and which ones to leave out as clutter.

I wrote 'proper type inference' above, because it's much more powerful than the watered down version Go and C++ give you. See eg https://news.ycombinator.com/item?id=8447280 for a Rust example.


In an ideal world, humans write bug-free code ;)

But honestly, if you're asking your IDE to do it, that means you're asking your IDE to do static analysis of your code - and type-checking in a lot of ways is just another static analysis technique. And for a lot of us (myself included) we prefer to catch as many of these bugs as possible using static analysis, instead of waiting for someone to get paged when it causes an outage.

Yes, there's a trade-off, and types can be obnoxious (Java imports being probably one of the worst offenders, C++11 introduced `auto` for a reason), but that's the cost we pay.


Ruby being so dynamic means that without type annotations you cannot infer the type (or types) of a variable statically, so you basically have to eval the program, which in Ruby means running the whole codebase.

So, please, be brave and evaluate a 100kloc codebase+deps that may contain a top level `rm -rf ~/`
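A small illustration (names invented) of why evaluation is needed: these accessor methods only exist after the class body has executed, so a purely static tool sees nothing to infer without running the code:

```ruby
# Methods conjured at class-load time via metaprogramming. Until this
# loop runs, `host`, `host=`, `port`, and `port=` don't exist anywhere.
class Settings
  %w[host port].each do |name|
    define_method(name) { (@data ||= {})[name] }
    define_method("#{name}=") { |v| (@data ||= {})[name] = v }
  end
end

s = Settings.new
s.host = "example.com"
s.host # => "example.com"
```

Rails uses tricks like this pervasively (e.g. attribute accessors derived from database columns), which is part of what makes static analysis of idiomatic Ruby hard.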


By IDE level, you mean compile level. Types are at compile level.


No, types can be at compile level. They can also be checked without compiling any time you like. Or automatically. By an IDE, for example.


Almost all good IDEs are essentially interactive compilers. The lines have blurred during the last two decades. For example, QtCreator and XCode use clang to provide code annotations in the editor. Eclipse is built around ECJ, its own Java compiler, which exists mostly to provide information back to the editor and refactoring tools (the editor maintains a complete bidirectional mapping between code as text and code as AST at all times). Code generation is almost only a byproduct there.


I come from Assembly and C, now working in Javascript. One of the reasons I made the move to JS is dynamic typing, getting rid of that administrative pain and now being able to create stuff much faster. Even in large JS apps I hardly ever have type related bugs at all, and when I have one I fix it mostly within minutes, don't need an entirely different language and ecosystem for that.

Now that the JS fanboys have discovered Typescript and moved to it (from Coffeescript, via Babel ESxx), they apparently think that they can write beautiful and bugfree software just because of static type checking! Let them please move to C++ or whatever statically typed language and shoot themselves in the foot by making all those mistakes that have nothing to do with static type checking at all! Oh, and of course hitting the wall because they are missing their precious 'any' keyword!

I totally agree that type checking for dynamic languages should be done in the IDE and tooling. But static typing in the dynamic language world is a hype at the moment, so we'll have to go through a wave of static type checking frenzy. For my work I look at horrible code bases, perfectly typed and strictly formatted by tslint.


I started using TypeScript back when it was 0.8, before it even had generics. Does that make me a fanboy? I have a project with about 45k SLOC of TypeScript (using Knockout.js for presentation). There is really no way I would maintain that same project without types.

> For my work I look at horrible code bases, perfectly typed and strictly formatted by tslint.

There is no language that can stop people from producing horrible code.


> There is really no way I would maintain that same project without types.

That's bold. Do you think no developer would be able to manage it without TS? In that case you must be a fanboy!

And honestly, are you not using 'any'? And do you think your app cannot crash because of a type error at runtime? And do you trust all the third party libraries you are using that they always provide you with consistent types, also during runtime? I ask this because most TS proponents live in some kind of dream.


I'm sure some developers would manage such a project without TS. Good for them. I wouldn't maintain it that way because:

1. I don't have the mental capacity to keep every single function's argument/return shape in my head, or to manually check it every time I make a change. Unit tests can't deliver 100% code coverage in practice.

2. Nor do I want to perform refactorings with stone age tools like s/setFoo/setBar/g. Setting up type information lets my IDE understand which calls to .push() deal with a native Array and which ones deal with my own class, so it can rename the latter ones when I ask. I can also use tools like "Find References" and avoid false positives.

3. I'm not a one-man-band. My coworkers need to deal with this project too, and new developers need to be introduced to it from time to time, and types serve as documentation and guard rails for them much better than jsdoc or regular comments. (This also serves as a significant barrier against using anything more esoteric like Elm, because nobody around would be familiar with it. TypeScript adds just enough syntax on top of regular JS to keep JS users comfortable.)

---

I do use `any` (and `unknown`), I have no doubts that an edge case can crash my app because I didn't validate something, and I never expect third-party code to work flawlessly whether it has types or not. Rejecting TS completely because "but run time loopholes" is throwing the baby out with the bathwater (or to put it in a more hyperbolic way, being an anti-vaxxer because vaccines are not 100% safe from side-effects). TS and types are an additional safety net/force multiplier†, not a silver bullet. (That said, what is a silver bullet? Because I'd sure like one.)

---

† Only applies to a project that has passed its initial rapid prototyping phase. During wild prototyping rides, types can indeed slow you down. But that's really the same debate as RDBMS vs NoSQL.


C/C++ doesn't really have much static typing to speak of. I don't think you are realizing the power of real statically type languages, such as Haskell, OCaml, or Scala.


Are you thinking of the way ints and chars (and floating-point types!) inter-convert? That is a weakness or convenience, but otherwise the typing is pretty strong.


Not much experience with C++ I suppose.


FYI: You can add the missing Bool type today :-), use the safebool library / gem - https://github.com/s6ruby/safebool


Are TrueClass/FalseClass being united under one class for 3?


It looks like they've conflated type with class. If so, that's the antithesis of duck typing. The impedance mismatch with Ruby seems to me an overwhelming contraindication.


> It looks like they've conflated type with class.

Classes are types by default, but you can define non-class types as well: https://sorbet.org/docs/abstract


Not sure you can call this conflation. From what I've read that was a deliberate, thought-out decision, and a nominal type system is as valid a choice as a structural one, and generally better understood in research and industry.

Having said that as far as I understand, type support in Ruby 3 will not prescribe which type checker is used and what limitations exist. Some of the mentioned projects are structural and I think even Sorbet might add support for it at some point.


Well, classes are types in a language where everything is an object. It's the same in Smalltalk, no?


That is exactly the category error I am calling out.

In a duck-typed language, type is defined by the willingness of a message receiver to receive that message. Class, inheritance, composition are all means to achieve this, but the type of an object is determined by its signature, not its ancestor chain.
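The distinction can be sketched in a few lines (`Duck`, `Robot`, and `make_noise` are made-up names): the two classes below share no ancestor beyond Object, yet both satisfy the same implicit "type".

```ruby
class Duck
  def quack
    "quack"
  end
end

class Robot
  def quack
    "beep"
  end
end

# make_noise's "type" requirement is only that its argument responds to
# #quack -- class and ancestry are irrelevant.
def make_noise(thing)
  thing.quack
end

make_noise(Duck.new)   # => "quack"
make_noise(Robot.new)  # => "beep"
```

A purely nominal checker has to name a common class or module here; a structural one can express "anything with #quack" directly.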


Care to explain the difference for the sake of the dimwitted such as myself?


Do you have any reference to any documentation of how they implemented types?


The docs say "Every Ruby class and module doubles as a type in Sorbet" and it was explicitly described as a nominal type system in a talk at Strange Loop 2018.


All of these beneficial refinements are meaningless if performance optimization in the runtime isn't made more of a priority.


Instead of:

  sig {params(name: String).returns(Integer)}
... why not simply:

  sig {name: String, returns: Integer}


Ruby itself has zero changes from sorbet, so all sorbet syntax has to be valid Ruby. `sig` is implemented as a library.

In this case, your example is not valid syntax, which violates this rule. Not that I personally could tell you why the parser makes a distinction here, but it's at least part of the reason :)

  irb(main):010:0> foo {a: "b"}
  SyntaxError: (irb):10: syntax error, unexpected ':', expecting '}'
  foo {a: "b"}
       ^
  (irb):10: syntax error, unexpected '}', expecting end-of-input
  foo {a: "b"}
            ^
   from /Users/bhuga/.rbenv/versions/2.4/bin/irb:11:in `<main>'
  irb(main):011:0> foo {params(a: "b")}
  NoMethodError: undefined method `foo' for main:Object
   from (irb):11
   from /Users/bhuga/.rbenv/versions/2.4/bin/irb:11:in `<main>'
  irb(main):012:0> 

The `sig` syntax has gone through multiple iterations; within the boundaries of Ruby syntax this is the best we've had.


The parser thinks that's a block not a hash.
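The ambiguity is easy to reproduce (`foo` here is a hypothetical method): braces immediately after a method call parse as a block, so an explicit hash argument needs parentheses or the keyword-style form.

```ruby
def foo(opts)
  opts
end

# foo {a: "b"}       # SyntaxError: the braces parse as a block, not a hash
foo({a: "b"})        # => {:a=>"b"} -- explicit hash literal works
foo(a: "b")          # => {:a=>"b"} -- keyword-style call also works
```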


Right, so why not implement the sig as a block and keep the syntax concise - the Ruby Way.


The sigs are implemented as blocks.

We had them as hashes for a while, but it meant that code in all sigs was loaded as the code was loaded, even if runtime typechecking was disabled. We were forced to load all constants in any signature in a file, an effect which cascades quickly. It had a big impact on the dev-edit-test loop.

For example, if we're testing `method1` on `Foo`, but `method2` has a sig that references `Bar`, we'd have to load `Bar` to run a test against `method1`.

Now sigs are blocks and lazy, and we pay that load penalty the first time the method is called and a typecheck is performed.
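A toy sketch of the lazy-block idea (this is NOT Sorbet's actual implementation; `ToySig`, `resolve_sig`, etc. are made-up names): `sig` just stores the block, and nothing inside it is evaluated until the signature is first needed, so constants referenced in sigs don't have to be loaded at require time.

```ruby
module ToySig
  def sig(&block)
    @pending_sig = block
  end

  # Module hook: fires when a method is defined, letting us attach the
  # most recent sig block to that method -- stored, NOT evaluated yet.
  def method_added(name)
    return unless @pending_sig
    sigs[name], @pending_sig = @pending_sig, nil
  end

  def sigs
    @sigs ||= {}
  end

  # Evaluate the block on first use, then cache the result.
  def resolve_sig(name)
    sigs[name] = sigs[name].call if sigs[name].is_a?(Proc)
    sigs[name]
  end
end

class Example
  extend ToySig

  sig { { params: [String], returns: Integer } }
  def length_of(s)
    s.length
  end
end

# The block's body runs only here, on first resolution:
Example.resolve_sig(:length_of)  # => { params: [String], returns: Integer }
```

If `String` or `Integer` were instead some heavyweight autoloaded constant, nothing would force it to load until the first typecheck.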


Then you'd need an extra set of delimiters, e.g.:

  sig {{name: String, returns: Integer}}


But that would make the hash braces redundant so you could just use parenthesis.


That's a block returning a Hash; see bhuga's sibling comment where he notes that they're using blocks to lazy load the constants in the type definition, which may seem silly for e.g. Integer, but consider e.g. some high-dependency Rails model which requires auto-loading 10,000 other classes.


Instead of:

   sig {params(name: String).returns(Integer)}
... why not simply:

   sig [String]=>[Integer]
Yes, that's just ruby - see https://github.com/s6ruby/ruby-to-michelson/blob/master/cont... for example for live running code (in secure ruby) :-)


At the very least, sigs need to be in blocks so they can be lazy and not require that all constants in any sig be loaded at require time.

It was a design decision that all type annotation arguments be named as opposed to positional. As one example why, it makes the error messages better. You can always say "You're missing a type declaration for parameter 'foo'" as opposed to "You have four positional arguments and 3 types".

We could probably still bikeshed our annotations inside the `sig { ... }`. I'm not sure we'd make constants with unicode like BigMap‹Address→Account› for generics, though, how do you even type that? :)


For the "do not require" behavior (so the type annotations / signatures can be lazy) I would use / recommend a language pragma and not a block. Learn more about language pragmas (they work kind of like a pre-processor) :-) - https://github.com/s6ruby/pragmas I think you already have made up your own "magic comments" / language pragmas, e.g. # type: true and so on.

> I'm not sure we'd make constants with unicode like BigMap‹Address→Account› for generics, though, how do you even type that? :)

I see you managed to type it! What's your secret? :-). By the way, you can use the alternate ASCII-style e.g. BigMap.of(Address=>Account).


Looking through the online docs. Here's another instead of:

   sig {params(new_value: T.nilable(Integer)).void}
... why not simply:

   sig Integer?                  # or
   sig [Option.of(Integer)]=>[]  # longest form in sruby
   sig [Integer?]=>[]            # same as Integer?


Ruby has named parameters, right? In theory a signature needs the parameter name, so a simple String-in, Integer-out form wouldn't be sufficient for all cases, I think.


How would you write the type of a function that takes a parameter called `returns`?


Simple - make 'returns' a reserved word.


This is super cool. I would expect the same support for Rails after Ruby 3 release


small discussion that was marked as dupe https://news.ycombinator.com/item?id=19696669


I hope Python 4 goes this route.


When will Ruby 3.0 be released? Christmas 2022?


Not necessarily. There are certain things the core team wants to add to Ruby 3.0 and once all goals are reached, the next release will be called 3.0 (so we might possibly see Ruby 2.8, 2.9, 2.10 etc before).

The earliest possible date (and somewhere I read it's a probable one, but I can't find it right now) is Christmas 2020.


2019?


2019 is 2.7. GP is counting as if version numbers were decimals (2.8 in 2020, 2.9 in 2021, 3.0 in 2022 when it could just as well be 2.10)


[flagged]


Ruby inferior to Javascript and Python? What are you smoking? Python's BDFL begrudgingly added lambda to Python but amputated it to single-line expressions because he didn't want to "encourage" functional programming. By contrast Ruby is an artistically-curated blend of the best of Smalltalk, Lisp and Perl fully embracing functional programming. No contest.


Types will be optional right? Otherwise I am gonna have to jump ship sadly.




