A great idea. Now, everyone who learns this stuff: show some restraint!
The drawback of a powerful type system is that you can very easily get yourself into a type complexity mudhole. Nothing is worse than trying to call a method where a simple `Foo` object would do, but instead the signature carries a 60-character type-level description of `Foo`'s capabilities.
My coding philosophy is centered around simple interfaces.
I think of power sockets and plugs.
The simpler the socket/plug design, the easier it is to plug in.
It's easier to connect a European plug which has 2 round pins than it is to connect a UK plug which has 3 rectangular pins at different angles.
You can imagine how difficult it would be to connect a plug with 10 pins; it would be difficult to get the alignment right and you would have to push hard and fiddle quite a bit to get it all the way in.
If you can get a module to do the same thing with a simpler interface, then that's generally a better module; it's typically a sign of good separation of concerns. Complex interfaces are often a sign that the module encourages micromanagement of its internal state; a leaky abstraction.
A module should be trusted to do its job. The only reason a module would provide complex interfaces is to provide flexibility... But modules don't need to provide flexibility because the whole point of a module is that it can be easily replaced with other modules when requirements change.
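For example, a minimal sketch of the difference (hypothetical uploader module, just to illustrate the idea):

    // A "two-pin" interface: the module is trusted to do its job.
    interface Uploader {
      upload(file: Blob): Promise<string>; // resolves to the stored file's URL
    }

    // A "ten-pin" interface: callers are invited to micromanage internal state.
    interface LeakyUploader {
      openConnection(): void;
      setChunkSize(bytes: number): void;
      setRetryCount(n: number): void;
      writeChunk(chunk: Uint8Array): void;
      finalize(): Promise<string>;
      closeConnection(): void;
    }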
The thing with electrical plugs is that they should be designed around safety first, rather than convenience. And the U.K. plug is a lot more safety focused than many other plug standards.
The advantage of the U.K. plug is that the live pins are physically blocked and only released when the Earth pin is present. This is why the Earth pin is slightly longer on U.K. plugs, and why insulated devices have a plastic Earth pin rather than no pin at all. Because of this, you cannot jam things into the socket (either accidentally or intentionally) without the Earth pin, which makes the plug much safer.
I’ve found U.K. plugs to be much more secure inside the socket too. US plugs often come away from the wall when there is a little bit of weight or tension on the plug. U.K. plugs require a great deal more pressure to come loose from the socket.
If I were to bring this back to types I’d say one needs to evaluate what the requirements are: safety or convenience.
When I was a kid, before we had Lego, we had some Soviet alternative called a "constructor" or something. Everything was made from metal, screws, etc. Obviously, as a kid, one of the first "hello world" things you build is a "plug" that you can insert into those holes in the wall. My older brother was lucky when he did it, as the fuse in the house tripped. My younger brother did the same about a decade later, but holding the two metal pins in his hand. He was also lucky that my dad was just passing through the corridor and pushed him away. He got away with just burned skin on his fingers and a bit of a shock.
Regular Meccano could be played with this way too, albeit I don't know if the pieces are shaped right to fit into a plug socket. But I'd wager that's more by accident than by design.
And now try the Schuko, which improves on all the metrics you named (except that the polarity isn't fixed; that's its one theoretical disadvantage), but also significantly improves usability (you can plug it in either way, the plug goes in much more easily, and it stays in against much more force).
It’s a good design but I disagree that it improves on all the safety features. For starters, the child-proof safety shutters are still only optional in some regions.
> except the polarity isn't fixed, that's its one theoretical disadvantage
Polarity isn’t fixed on any mains sockets. That’s why the A in AC stands for “alternating”
And yet there is a big difference between the hot and neutral conductors in a 120V outlet in the US. The neutral remains close to ground potential, and is actually bonded to earth at the breaker panel.
I’ll admit that my understanding of these things is rather superficial. I might understand more than the average person but that’s not exactly a high bar to set.
However I always understood potential to be different to polarity. And that AC (which, to my knowledge, all electric grids globally carry) is the literal oscillation of polarity. What am I missing/misunderstanding from the GPs post?
English isn’t my native language, so I mixed up those two terms.
But that’s what I meant: the UK and US plugs (as well as Switzerland's, I think?) theoretically have one pin always be hot and one always be neutral.
With Schuko, you can reverse the plug, and it'll still work, which is on the one hand awesome when you've got a tight space and want more plugs to fit, but can also require higher costs, as you've always got to switch both wires instead of just switching the hot one (although this is best practices everywhere, as you can never know how well the electrician followed specs when wiring your apartment 90 years ago).
The transformer on the pole delivers 240V rms center tapped, with the center tap being connected to ground at the pole. The center tap wire is actually uninsulated, and the two hots are wrapped around it for support. The center tap is connected to the neutral bus at the breaker panel.
Half the breakers are connected to one hot, and half are connected to the other hot. For 120V you wire hot/neutral to the receptacle, while for 240V you wire hot/hot. (Plus ground, of course.)
In the EU, receptacles are wired hot/hot, and there is no neutral conductor.
Thanks for the EU detail! Are you saying the typical house in the EU has three phase power delivered to it? All three phases?
Here in the US, where split-phase is the residential standard, a house with three phase is quite rare. The HV lines running on poles in a neighborhood are mostly single phase, at least in rural areas like mine.
> Thanks for the EU detail! Are you saying the typical house in the EU has three phase power delivered to it? All three phases?
Yes! And many electrical devices rely on it, though sometimes fallback to regular 230V single-phase at 32A is possible, e.g. for stoves.
And considering a typical stove runs at 11-15kW and a typical electric water heater between 15kW to 25kW, you'll need it as otherwise you'll need far higher amps than is reasonable.
Honestly, only due to the Technology Connections video did I realize that the US does not use three-phase power in most homes, which was genuinely surprising.
Although I use a propane range, I'm wired for a 240V 30A electric one. That's only 7.2 kW. My water heater is also propane.
I have a friend with a Bridgeport vertical mill in his garage/workshop. He had to build a single phase to three phase converter, so he could run its three phase motors.
It's true that EU plugs of some devices can be a bit loose; especially 2-pin variants where the wall socket is not indented or the indent is taller than the plug.
USB-C would be a nice looking single-argument function... but parameterized with every possible generic template meta-programming feature in the language.
It's even better than it looks in Europe: there are actually 3 contact points, but only 2 protrude, which allows for 2 easily pluggable positions (rotate 180°). The ground contact is positioned twice for that matter.
Only France has a variation around that to my knowledge, that is still compatible across Europe.
And no one notices and just plugs in and out without thinking twice about it.
The EU has a huge variety of plugs and sockets; what people often call the EU plug is a variation of the Schuko[0] design. The standard EU plug is the Europlug[1], which is compatible with most (but not all) sockets in Europe and only handles low amperage.
AFAIK Italy is the major outlier in having widespread sockets[2] that do not accept the Europlug
A tiny bit more complex: the German/European plug has the ground contacts at the sides and so goes in both ways. The French/Belgian plug instead has a ground pin that goes from the socket into the plug (so the opposite of the other two). That means you'll generally be able to combine both, and most products have a plug that fits both varieties. But there are some products (especially Chinese and American ones) that miss this diversity and either fit only one of the two, or come in separate local varieties for each, when it would be perfectly feasible to produce a single version that fits both.
This is so true. I've been thinking recently that in the same way that "use boring technology" is a pretty well-known concept, so should "use boring types" enter the collective consciousness. Type operators are exciting and flashy, but I found that using them too much leads to brittle and confusing types. Saving them for a last resort tends to be the right strategy. Often there's an extremely dumb way to write your types that works just as well - maybe even better :)
Agreed! Boring interface definitions are my rule for typing.
I see way too many folks trying to use `Omit<…>` and `Partial<…>`, creating absolute typing monstrosities. It feels like typing duct tape, and it’s impossible to read the type definition when it’s generated in a tooltip.
Although, to be fair, lack of visibility in the tooltip seems like a problem with the tools, not with types. Many times I’ve wished I could tell TS to expand the next level of a type signature.
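For illustration, a sketch with a hypothetical `User` type:

    interface User { id: string; name: string; email: string; createdAt: Date }

    // "Clever": the actual shape is only discoverable by expanding tooltips.
    type UserDraft = Partial<Omit<User, "id" | "createdAt">> & Pick<User, "id">;

    // "Boring": a few repeated fields, but readable at a glance.
    interface BoringUserDraft {
      id: string;
      name?: string;
      email?: string;
    }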
If you only use "simple types" then you often get implicit assumptions about the "values". For instance, note that "web services" basically means you have a very simple interface defined by the HTTP protocol.
But what is actually inside the HTTP-payloads can then have many constraints on them which are not declared anywhere. For instance your code might assume the payload is JSON with several required fields in it.
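A minimal sketch of making such a constraint explicit (hypothetical payload shape):

    interface UpdatePayload {
      id: string;
      amount: number;
    }

    // The HTTP interface stays simple; the implicit assumption about the JSON
    // body becomes one explicit, testable check.
    function isUpdatePayload(body: unknown): body is UpdatePayload {
      return (
        typeof body === "object" && body !== null &&
        typeof (body as any).id === "string" &&
        typeof (body as any).amount === "number"
      );
    }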
For that you have the "closed for modification but open for extension" principle.
If the server is new and fresh, yeah it's ok to assume the payload of a request will be a JSON object with some required fields, but leave the parameter there in case someone decides they will start sending XML payloads to it
"Duplication is better than the wrong abstraction"
This is where a lot of developers go overboard - not just in type systems, but in general. They are so afraid of duplication, they over-generalize and end up in a quagmire of unreadable overly complicated code.
Some duplication is easy. It's just code volume, and volume shouldn't be as scary as complexity.
The focus should be readability of code while respecting abstractions and design patterns.
I prefer some duplicate lines over having to go back and forth over some source files only because some developers think that less code is better code.
The code we create should be made for humans to read, not for machines, and especially not to brag about how clever our code is.
Second this. By the end of a progressive multi-year TS migration at my last company, we were refactoring `HTTPRequest<GetRequestBody<IncrementUpdate>>` back into its JS ancestor `HTTPGetRequest`.
You maybe just lack experience with more complex types. Which is totally expected, since you probably spent all your time learning to program the runtime, but not the typesystem - unless you come from one of those rare languages that have a similarly powerful typesystem.
But just like you probably struggled with and overcame many things before, it will be the same now. It's just that you can opt out of the typesystem in typescript whereas you are forced to learn how to deal with the runtime.
But if you make it, your development experience will change drastically. The time might be very well spent.
I’ve used C++, C# and Java. Do these count as languages with complex types? I also used typescript a lot and have plenty of more complex types using generics and dynamic generation of types based on other types.
The problem with typescript is that the types are very often wrong or needlessly complicated.
Having separate type definitions from the library is stupid. Having types missing from @types/node is stupid. Many libraries lie about their types.
You can act like typescript is some kind of magic miracle that will save you time, but in reality it is just riddled with many small time-consuming stupidities. Typescript is just a polished turd: shiny on the outside, but shit within, and this is by design because it needs to work with JS and other poorly typed libraries and code.
If I haven’t convinced you TS is a polished turd, just wait until you find out how to import a modern nodejs module in a typescript project. Hint: you have to import a .js file, even though your file is called .ts.
The tsconfig has so many options that every project is different and a lot of code isn’t interchangeable and can break if copied from one project to another.
Want to convert some TS code to JS without checking types? No can do! Ts-node can do it, but tsc cannot.
None of these count as a language with complex types in my opinion. C++ has templates but that's not to be confused with its typesystem.
I'm not saying typescript's typesystem is perfect, and I'm definitely not saying that most people use it correctly. But at least it has great potential, compared to e.g. Java and C#, which still fail to let me describe basic data types and operations in the typesystem.
No need to show any restraint. It's much more fun to explore and burn yourself once into understanding how much of this power you need. Or twice. Or as many times as it takes until it gets more fun to be pragmatic.
Ah, the "smoke a whole pack of cigarettes" method of teaching software dev.
I kinda agree - often no amount of "this is a bad idea" will teach as well as just letting someone make the mistake and actually experience the consequences.
The only problem is that hard-to-maintain code often does not cause any problems until you write a critical mass of it and end up trying to develop enough non-trivial extra features on top of it.
I wouldn't say it's a matter of fun but often a matter of necessity in modern software development. Less experienced people can often have inflated egos and will refuse to listen to any advice from anyone else. Letting them fuck something up themselves (not too badly) after you've explained why it's a bad idea can be a hard but good lesson.
The types in your code are just as much a design decision as any other aspect of it. It's not a matter of restraint; it's a matter of doing things the correct way.
I don't believe in "correct" when it comes to software dev. There's 1000 solutions to any given problem.
It's a matter of choosing a solution that is clear, easy to understand, and easy to maintain. There are nearly limitless solutions that can fit that definition.
Restraint comes into play because devs tend to "treat every problem like a nail when they have a hammer". When devs learn new concepts, they often look for places to use that concept even when it's a bad fit.
An example of this is excessive use of inheritance when simpler types fit better. Many of us have dealt with the greenhorn that creates a giant inheritance tree or generic mess after they first learn that "neat" concept.
I definitely know what you mean. There is usually a type that is "just right" in terms of rigidity and flexibility.
The reasons OP encourage restraint might be the mental overhead of understanding what's "correct", as well as needing to rely on not only yourself but other people to be correct.
Sometimes simple is faster and harder to screw up.
Less is more is also important for writing performant code. JS engines care about types in the form of ‘shapes’, which is the V8 term for a specific structural layout and maps pretty neatly to TS structural types for obvious reasons. Simple types make performance issues like megamorphic inline cache issues much harder to create. If you see type spaghetti, it’s a good hint that performance issues may be lurking, depending on actual runtime usage. And if your runtime usage is actually simple, then you don’t particularly need the flexibility the type spaghetti provides.
Also, typescript doesn't always infer things thoroughly with generics and deeply nested types, so I’ve ended up avoiding them when possible. Compiler performance also explodes if you have too many unions and the like. Beware! There are definitely some popular libraries out there that went overboard and nuke compiler performance for minimal gain.
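A rough sketch of the shape idea (not tied to any particular codebase):

    interface Point { x: number; y: number }

    function length(p: Point): number {
      return Math.hypot(p.x, p.y);
    }

    // Objects created with the same keys in the same order share one hidden
    // class, so this call site stays monomorphic:
    length({ x: 1, y: 2 });
    length({ x: 3, y: 4 });

    // Extra or reordered keys produce different shapes and push the inline
    // cache toward megamorphic:
    length({ y: 2, x: 1, z: 5 } as Point);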
I just finished writing a mess of very complicated types that can parse an open API spec, provide the request types (body & params) as input, and restrict the return to the appropriate response type... I hooked this higher order function into our API and immediately found 30-40 different places where the implementation was not aligned with the spec.
The devs now have guardrails in place to make sure they follow the spec...
Advanced types are invaluable when you are writing a framework or library... But in every day implementation, I agree they should be used sparingly.
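Not the actual implementation, but a rough sketch of how such a guardrail can look (hypothetical endpoint map and names):

    interface Endpoints {
      "GET /users": { params: { id: string }; response: { id: string; name: string } };
      "POST /users": { params: { name: string }; response: { id: string } };
    }

    // The handler's parameter and return types are derived from one spec object,
    // so an implementation that drifts from the spec fails to compile.
    function defineHandler<K extends keyof Endpoints>(
      route: K,
      handler: (params: Endpoints[K]["params"]) => Endpoints[K]["response"]
    ) {
      return { route, handler };
    }

    // Compile error if the return shape doesn't match the spec for "POST /users":
    defineHandler("POST /users", ({ name }) => ({ id: name.toLowerCase() }));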
I have 7 years of ts experience and I'll still 'as any' a reduce function from time to time
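i.e. the occasional escape hatch like this (just a sketch):

    interface Item { id: string }

    declare const items: Item[];

    const byId = items.reduce((acc, item) => {
      acc[item.id] = item;
      return acc;
    }, {} as any); // precisely typing the accumulator wasn't worth it today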
If your `Foo` object has 60 potential character definitions... that sounds (smells) like it would be a code smell of something wrong upstream. Perhaps it's time to refactor the function.
Ignoring the 60 different character definitions isn't going to make the problem that you have 60 possible variants go away just because you didn't type it.
> Over the years, the type system of TypeScript has grown from basic type annotations to a large and complex programming language.
Give someone (particularly a developer) the opportunity to build something complicated and undoubtedly they will. So now you have two problems, the complicated program that actually does some hopefully useful work, and another complicated program on top of it that fills your head and slows you down when trying to fix the first complicated program. You may say 'ah yes, but the second complicated program validates the first!'. Not really, it just makes things more complicated. Almost all bugs are logic bugs or inconsistent state bugs (thanks OOP!), almost none are type bugs.
However, static analysis of existing code (in Javascript), without having to write a single extra character, may well have great value in indicating correctness.
Edit:
> TypeScript's type system is a full-fledged programming language in itself!
Run! Run as fast as you can! Note that this 'full-fledged programming language' doesn't actually do anything (to their credit they admit this later on)
Edit2:
> [...] is a type-level unit test. It won't type-check until you find the correct solution.
> I sometimes use @ts-expect-error comments when I want to check that an invalid input is rejected by the type-checker. @ts-expect-error only type-checks if the next line does not!
What new level of hell are we exploring now??
I am genuinely afraid and I'm only halfway through this thing. What's next? A meta type level language to check that our type checking checks??
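For reference, the patterns quoted above look roughly like this:

    // A "type-level unit test": this only compiles if the two types match.
    type Equal<A, B> = [A] extends [B] ? ([B] extends [A] ? true : false) : false;
    type Expect<T extends true> = T;
    type _test = Expect<Equal<ReturnType<() => number>, number>>;

    // And @ts-expect-error only passes if the next line really is a type error.
    declare function onlyStrings(x: string): void;
    // @ts-expect-error -- a number must be rejected
    onlyStrings(42);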
Hard disagree. For me, the proof is in the pudding. When writing regular Javascript, I need to run and debug my code often as I'm developing it to make sure all intermediate states make sense and that there's no edge cases I've missed. With Typescript I can often write code for hours without even starting it up, and when I finally do run it, it usually works correctly right out of the box.
I recently started working on a project with typescript for the first time. I have been astounded by how often my code works the first time I try it because so much is caught by the typing system. I know what I want my code to do, how it should do it, and that it does work - my errors are almost all from mistyping things.
Massive productivity boost, and I have a kind of confidence in my code that I never have had before, not having used a strongly typed language before.
The core functionality of Typescript - adding basic type hints to JS - is definitely better than writing plain Javascript, but the type system feature set is way beyond its peak and deep in diminishing-returns territory. I realize that this complexity may be required to gradually convert a 'worst case' Javascript code base into 'correct' Typescript, but IMHO most new Typescript code should limit itself to a simple subset of the type system.
I've been coding C++ most of my life, and I must say, TS is starting to look more and more like C++ (which definitely isn't a good thing because it lures programmers into complexity).
It’s certainly nice that you know your code will still run when any of your values are null or undefined, and you are forced to deal with that scenario.
There are a depressingly large number of people who just don't get static typing. I think a lot of them use very basic editor setups - think Vim or Notepad++ so they don't see half of the benefits.
The mental gymnastics I have to engage in to work on large JS projects without TypeScript is unbearable. I have to switch between the two often and it’s night and day.
Have you worked with typescript? Adding strict types to JavaScript fixes pretty much all the complaints I used to have when working with frontend services/clients.
Typescript isn’t a “now you have two problems” any more than types in any other language are.
Maybe in a controlled environment Typescript can produce benefits - however, much of my argument is that our environments are typically uncontrolled, so let's not give the monsters any more magic than we have to.
I’m inclined to say that any magic applied here is a win. Making things complicated with Typescript is simply hard enough that those people that’d mess things up in the first place wouldn’t even consider trying.
Writing simple types with Typescript is simple, and improves any Javascript code.
Writing complex types (inference, generics, inheritance) is really punishing. And people that don’t truly know what they’re doing won’t even try (or at least in my experience, I’ve never seen them try).
It's the dreaded "I know enough to be dangerous" type that you have to worry about; the people who don't actually know all that much but enough to make a mess of things. I was brought in to help for a while on a project where they had compile times up at 4 or so minutes, if I remember correctly. The solution in the end really was just "Write dumber types", because their types were completely overboard and written just well enough to work.
A big bulk of their compile time came from things that could've been checked in much more efficient (for the compiler) ways.
I would never urge anyone to not use statically typed languages, mind you, I think people just need to be a bit more pragmatic. Sometimes I find it unfortunate that TypeScript provides the facilities it does while still not having solved the basic ergonomics of types (more consistent inference, etc.). Having types that generate types creates problems that I honestly don't experience anywhere else and I would rather that people just not in general, but that's something you can fix with rules.
An ironic part of all of this is that Haskell's type system is a lot easier to use and use well than TypeScript's, in the end, which is especially funny considering all the talk of pragmatism.
> Almost all bugs are logic bugs or inconsistent state bugs (thanks OOP!), almost none are type bugs.
Most, possibly all, inconsistent state bugs and many logic bugs are type bugs with a sufficiently-expressive type system properly used. That's why type systems have progressed from basic systems evolved from ones whose main purpose was laying out memory rather than correctness to more elaborate systems.
> Almost all bugs are logic bugs or inconsistent state bugs (thanks OOP!), almost none are type bugs.
Many bugs of these classes can be avoided with a sufficiently expressive type system. There’s a reason that Haskell programmers say if it compiles, it probably works correctly.
Ah but Haskell was built that way from the beginning, and has important invariants (pure functions etc) that make it possible to produce inherently sound programs. Of course none of that will help you make the right inherently sound program.
> Almost all bugs are logic bugs or inconsistent state bugs (thanks OOP!), almost none are type bugs.
With a sufficiently powerful type system (and typescript is basically the only non-functional language that makes the cut here) these aren't all that distinct. But even in codebases that don't take advantage of that power, this has not been my experience. I recently converted about ten thousand lines of legacy javascript to typescript at work, and discovered several hundred type errors in the process. State bugs also slip through pretty often, but we almost always catch pure business logic errors at code review.
Complexity or not, it's incredibly useful. To me the development experience is just superior, you feel more in control. No one likes building the app only to see "Cannot read properties of undefined" and then needlessly scratching head what's wrong. I have done a large refactor recently and it was such a breeze, with TS pointing out pretty much everything that needs to be fixed to complete it. Coding new features and sometimes it just works after the first build. TS is obviously the inevitable future since it offloads the stuff that the computer can do better and frees up the mental energy for the more creative stuff. The code is also more readable, when the types mean something there's less need for describing function parameters for instance, since you have a complete type definition of what that param represents and any comments made on those types are re-usable across functions.
A simple trick with plain JavaScript is to give all arguments default values. That gives a pretty good insight for anybody reading the code as to what "type" of arguments the function expects.
If you test-call such a function without arguments you will then know what kinds of values you can expect it to return.
The argument default values can not be inner functions, but they can be any function that is in scope. Or if you are using classes, it could be a reference to any method of 'this'.
Then add some asserts inside the function to express how the result relates to the arguments. No complicated higher-order type-definitions needed to basically make it clear what you can expect from a function. Add a comment to make it even clearer.
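A sketch of the trick (hypothetical function):

    function formatPrice(amount = 0, currency = "USD", round = Math.round) {
      // The defaults already hint at the expected "types"; asserts make the
      // relationship between arguments and result explicit.
      console.assert(typeof amount === "number", "amount must be a number");
      console.assert(typeof currency === "string", "currency must be a string");
      return `${round(amount * 100) / 100} ${currency}`;
    }

    formatPrice();        // test-call with no arguments -> "0 USD"
    formatPrice(3.14159); // "3.14 USD"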
> That gives a pretty good insight for anybody reading the code as to what "type" of arguments the function expects.
> If you test-call such a function without arguments you will then know what kinds of values you can expect it to return.
Only true if you're using very simple types, i.e. number and string. But "string" is pretty close to "any" and doesn't give you much info. If my function only expects two or three possible strings, it should be typed to only take those ones.
Comments are not a solution for much of anything, btw, and they only "work" if you read them. How many comments are in your node_modules folder?
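To the point about a function that only expects two or three possible strings - a sketch:

    type Mode = "append" | "overwrite" | "skip";

    declare function write(path: string, mode: Mode): void;

    write("out.txt", "overwrite"); // fine
    // @ts-expect-error -- "truncate" is not one of the accepted strings
    write("out.txt", "truncate");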
> Comments are not a solution for much of anything,
What are Unix man-pages but comments about the APIs they describe? Are you saying you would prefer to replace them with the TypeScript type-language?
As I see it, TypeScript is a solution to the problem of how to describe a function: what it does, what it expects from its arguments and what it returns. That information can often be clearly and simply expressed with a comment, rather than with a complicated type-declaration. And type-language only describes the syntactic behavior of a module, not its semantics.
When type-declarations become more complicated than the code they are describing I think we're at a point of diminishing returns.
The other purpose of type-declarations is to catch errors. But if a declaration is very complicated how can we be sure there's no errors in it?
> What are Unix man-pages but comments about the APIs they describe?
Documentation. Obviously not the same thing as inline comments in the code. You can generate documentation using comments (i.e. jsdoc), but comments are the weakest form of guidance for other developers. Types don't replace documentation, but are part of the same goal: making code easier to consume.
> When type-declarations become more complicated than the code they are describing
Do you find this happening to you often? JavaScript is a very permissive language, and most JS devs learn to write code in a way that is difficult to type. That's a part of the learning curve of the language. Part of using TS well, is realizing that complicated types are a smell for complicated behavior, and modeling your data in a conceptually simple way.
It's not just about labeling everything string or number, but making it impossible to use the code the wrong way.
> But if a declaration is very complicated how can we be sure there's no errors in it?
Simple. You test them. Same as any other code. How do you test your comments?
Sometimes, but as a last resort. I'd prefer types, tests and readable code over comments, and if I feel the need to leave a comment, I usually see that as a code smell.
Agree on the OOP part. Most devs making the transition from Java or C# to TypeScript are using the tool to port what they learned and apply it to a JS project, and projects like Angular encourage this by enabling experimental stuff like decorators, while missing the real point behind TS: the type-checking system, which is, to be fair, really awesome.
Yes, there is a huge difference between type checking a minimally type-annotated program, and dreaming up a rich type system just because you could (I should know, I used to be a Java dev).
You're giving yourself away by making that comparison - your experience as a Java dev gives you minimal insight into the expressiveness or utility of TypeScript's very different, much more powerful type system.
It's not about the expressiveness or utility - it is about types you write (bad) vs types the type checker generates and maybe shows you in an IDE (good).
The point being that people should write actual code, and should not write type code, simply because given an opportunity to write things, people will indeed write things, whether actually helpful or (more likely) not.
I currently have to occasionally contribute to a TypeScript codebase at work. I appreciate how much better it is than Javascript. When I write code as an outsider (Java and Go developer), I feel like I use the type system in sensible and readable ways. When I look at the code written by the native TypeScript experts in my company it is a bewildering, abstract, unreadable morass. I have no idea what's going on half the time.
compose is a higher-order function. In the first step it accepts a function that converts 3 values (V1 to V3) into a single value (T1), plus a series of conversion functions that convert this single value into another value (T1 into T2, T2 into T3 and so on until T6). Using these functions it produces a new function that converts the combination of V1, V2 and V3 into a T6.
I don't know ramda, but I assume this is only part of the type definition of compose and this is just the longest part of it. I think compose is written in such a way that it can accept as many conversion functions as you want, and this is just the longest variant that is encoded in the types.
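Simplified (and not ramda's actual definition), the shape being described is something like:

    // Rightmost function consumes the original arguments; each function to the
    // left consumes the previous result. Shown here with only two follow-up steps.
    declare function compose<V1, V2, V3, T1, T2, T3>(
      fn2: (x: T2) => T3,
      fn1: (x: T1) => T2,
      fn0: (v1: V1, v2: V2, v3: V3) => T1
    ): (v1: V1, v2: V2, v3: V3) => T3;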
But even at its simplest variant, with just one or two functions passed, what the V0 or T1 do is pretty confusing. I thiiiiiiiiink it's trying to ensure the return type of one function is correctly passed as the input type of the subsequent function, and so on and so forth, but I don't really know.
It indeed ensures on a type level that this series of functions can work as a series, so every subsequent function can accept the return type of the previous one.
The latest version seems to have abandoned the typing of the initial input, which makes the types a little simpler.
My point however is that the types you point out here are actually not particularly complex. They are just long, with lots of inputs for the generics (and the syntax may be confusing). The types in the original article are much more complicated, with conditional type inference etc.
Imho it’s kind of pointless to judge how simple a type definition looks. The definition is there to make sure the interface is correctly typed. Edit: and sometimes a simple/beautiful interface requires complex types..
When I'm using someone else's code, though, I need to be able to understand what it's expecting and returning... isn't that the point of a typing system? It's not just an internal unit test, but a signal to other developers of how the function is supposed to work (especially if it's exported and intended for reuse).
Quite often I get a function that's working correctly but typed incorrectly (including in someone else's typescript definitions), and sometimes I can correct them but other times I can't even read the original author's intent...
And edit: It's not that types have to be simple, but that complex types (especially) should be readable, as in you can follow the complexity step by step, line by line.
I feel like that definition is the TypeScript equivalent of "callback hell" or similar. It almost looks minified or obfuscated, or just written to be super terse instead of clear. I don't really know which it is, because I can't even begin to read it...
I'm not a TypeScript expert by any stretch, but I've been using it 40 hours a week for the last year and I SHOULD at least be able to start to read it... but nope. And I come across examples like that multiple times a week. It's just a really bizarre syntax, unlike any of the other languages I've ever used. I think it's like that because they had to hack it on top of Javascript, vs a language being strongly-typed from the getgo.
> I think it's like that because they had to hack it on top of Javascript
This is the source of most of typescript's flaws, but the mediocre type syntax was a deliberate choice: it's all erased at compile time, so javascript imposes no constraints. My guess is that it's just because many of the original typescript devs were on the C# language team.
What compose() does is not the point here, it's that the type definition is totally unreadable. I was trying to figure out what compose was supposed to return (the function or the value), in that case, and I still don't really know.
Another random example from Axios:
<T = V>(onFulfilled?: (value: V) => T | Promise<T>, onRejected?: (error: any) => any): number;
Or eslint:
type Prepend<Tuple extends any[], Addend> = ((_: Addend, ..._1: Tuple) => any) extends (..._: infer Result) => any
? Result
: never;
Here's another real example from today... I was trying to figure out how to type "the name of this type's key has to be one of the following strings in this enum, but the type doesn't need to have all the keys". Here's a Stack link with the right answer: https://stackoverflow.com/a/59213781, but it wasn't easy to figure out. At first I thought it would be `[key in Partial<MyEnum>]`, but nope. Maybe optional? `[key in MyEnum]?` kind of works but fails in an new way (see the Stack for details). The correct way to do it is apparently `Partial<Record<MyEnum, unknown>>`, which I NEVER would have been able to figure out. Why the record? Why the unknown? Who knows..?
Don't get me wrong, I love TypeScript for the simpler use cases, and a lot of it IS that, thankfully. But the more complex compositions, especially in popular third-party libs? I've given up lol.
The use of single-letter keywords (K, T, V, P, R, etc.) combined with confusing re-use of punctuation (<> and : and () and []) that mean subtly different things depending on where they're used, on top of how JS already uses them, makes it even more so. Sometimes I wish TypeScript were more verbose and opted for longer, clearer constructs rather than stacked shorthands...
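(Spelling out that `Partial<Record<…>>` answer with a hypothetical enum:)

    enum Color { Red = "red", Green = "green", Blue = "blue" }

    // Record<Color, string> would require every key to be present; wrapping it
    // in Partial<> makes each key optional. The second parameter of Record is
    // just the value type (the Stack Overflow answer happened to use unknown).
    const labels: Partial<Record<Color, string>> = {
      [Color.Red]: "warm",
      // Green and Blue can be left out
    };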
> I was trying to figure out what compose was supposed to return (the function or the value), in that case, and I still don't really know.
It returns a function. The one that's equivalent to applying the arguments in reverse order. I think that this signature is pretty clear for anyone experienced with a statically typed language with generics and higher order functions.
On the other hand, I have no idea why a compose function that takes exactly 6 arguments, the last of which is a function that takes 3 arguments, would be a desirable abstraction. But I don't think static typing is necessarily to blame for this -- this just looks like a clunky function that has a clunky type.
> I think that this signature is pretty clear for anyone experienced with a statically typed language with generics and higher order functions.
Sounds like I have stuff to study!
Maybe ramda was an extreme example (with or without typescript, it was so hard to read that our dev team decided to just remove it altogether and replace it with more verbose but easier to read vanillaJS code or equivalent lodash functions). But I come across difficult TypeScript examples nearly every day of my work, where I feel like I'm reading an obfuscated leetcode challenge instead of the straightforward business logic in the rest of the codebase.
Once I finally understand a complex type, my usual reaction is, "That's it? That's all that was trying to express?" It's just an arcane syntax to me. Sounds like learning about generics and higher-order functions in statically typed languages would be a good starting place... thanks!
From what you are writing, you really lack the basics here. Learning on the job is fine, but sometimes it's worth spending dedicated time to learn the foundations, or at least getting someone on board who can teach them.
I suggest trying to get your boss to sponsor this, since you need it for the job. It will also make your dev experience so much more fun!
For sure. I am no TypeScript expert, and for the most part I don't really need to be... most of our types are pretty straightforward, arrays of strings and such.
Learning is great, but this job, like many others, doesn't really have a well-defined system for training, documentation, professional development, or anything like that. Either I learn on the job as it happens or I don't learn at all. There's always too much to build, with constantly shifting priorities defined and redefined by higher-ups who don't know or care what TypeScript is. Sure, I can push back on that to some degree and beg for a resource or two, but even that is difficult, and there's always so much else that's even more pressing to learn. And web dev by its nature is a broad and shallow career anyway, especially on the frontend... by the time you begin to master something, it's already obsolete lol.
TypeScript looks like it'll have some staying power, so I'm happy to learn it as I go. But over time, I've learned to stop chasing perfection and to just go for "Will this survive long enough until the rewrite in a year or two? If so, good enough...". I've never known a job like this where code survives longer than 2 or 3 years before someone, either a dev or a manager, wants to rewrite it from scratch.
A lot of our existing codebase was written by contractors who had a lot of experience, but little desire to document anything or comment anywhere. Our current generation of (relatively junior) devs inherited that, has trouble with a lot of it, and ends up rewriting large overengineered swaths in simpler patterns as we go. A complete rewrite is already planned. And so the cycle continues :)
In general, the barrier to entry for JS/web dev is pretty low, and so there are a lot of low-to-med skilled devs like me in the industry. I think, philosophically, I lean against writing code that is overly "clever" rather than readable. Similarly with types. If a typing becomes complex enough that it's not really readable, I'd rather just leave a clear comment as to what the intent is and then move on, knowing that the code itself -- much less its typing -- is unlikely to survive long anyway. At the end of the day, IMO, it doesn't make sense to have types that are more complicated than the code itself... if correctness is important but the typing is complicated, I try to break down the code itself, add comments, add unit tests, add documentation, etc. rather than try to coerce TypeScript into sentience.
Is that the most "correct" way? Probably not. But it sure makes things easier to read in PRs rather than telling everyone, "Well, you need to learn advanced TypeScript if you want to read my contribs."
I think I see where you come from. And for libraries and even frameworks I would agree.
But typescript's advanced features are more like advanced SQL. Sure, you'll learn some chunk specific to your database, but the majority will be transferable. And just like SQL, it most likely won't become outdated knowledge in the next decades.
So I still think it will be worth it. If not for the company, then at least for yourself. :)
That's a good point. Like even if TypeScript goes away, some other language will still use generics and higher order functions. I can see the value in that!
My 2 cents: I think the fact that you're having trouble with the compose signature is something you should rectify and that would transfer well to other languages. I'm however not sure the same holds true for more advanced features of typescript.
I'm only a casual user, so take this with more than a grain of salt, but for me typescript occupies quite a weird point in statically typed language space: on the one hand, typescript's type system is enormously complex (and also quite expressive). Part, but I don't think all, of this comes from being retrofitted onto an untyped language and its ecosystem (so e.g. sum-types tend to be implicit rather than tagged, and in addition to discriminated unions, there is support for complex sub-typing from the OO heritage). Most statically typed languages have no direct equivalent for many of typescript's more advanced features (e.g. partial types, although Scala and OCaml have related constructs, in OCaml's case e.g. polymorphic variants).
But on the other hand it's surprisingly awkward to get what I would consider one of the most basic and beneficial features of a sane statically typed language, namely exhaustiveness checks -- so most people don't even bother. In fact there is not even an agreed upon idiom (just google "exhaustiveness check typescript", all the answers will look spuriously different). The basic pattern is that you want a helper function like so:
function assertUnreachable(_value: never): never {
throw new Error("Statement should be unreachable");
}
and then for any switch(foo) you add a default: assertUnreachable(foo) (sketched below). I can't really fathom why there isn't a better way to express this (the ability is clearly there, but it's un-ergonomic). But if you want something that transfers well, I'd probably de-emphasize the fancy stuff typescript offers unless needed for acceptable JS interop and concentrate more on thinking about exhaustiveness and making undesirable states unrepresentable.
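A sketch of that usage (hypothetical Shape type, reusing the assertUnreachable helper above):

    type Shape =
      | { kind: "circle"; radius: number }
      | { kind: "square"; side: number };

    function area(shape: Shape): number {
      switch (shape.kind) {
        case "circle": return Math.PI * shape.radius ** 2;
        case "square": return shape.side ** 2;
        // If a new variant is added to Shape and not handled above, `shape` is
        // no longer narrowed to `never` here and this stops compiling.
        default: return assertUnreachable(shape);
      }
    }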
This reply is pretty good. As a Scala developer, many people have told me that a lot of features of the Scala compiler are niche and will go away. I never thought so - and now a good part of them has appeared in Typescript and some in Rust. Python is also pushing forward with optional typing.
Therefore I think the chance is high that even if there are currently not too many languages with advanced features (generics are not advanced btw), we will see more and more of them in the future.
Well, you are likely taking examples from library internals.
Libraries exist, in part, to encapsulate high complexity.
There's likely accompanying documentation for the examples you provided.
In some other languages you have similar stuff, with the added complexity of concurrency and memory management related types. If you are struggling with TypeScript, let me tell you about a whole new world of pain called C++.
That's why I think every programmer should learn C++.
I've been professionally employed as a software developer since 1998 and I am currently curled up in a ball in the corner rocking slowly back and forth after seeing that.
Personally, I think the equivalent code would look bad in most languages, even with more verbose argument/type names and comments (which people seem to overlook often) - it's just that we try to make our applications do a bit too much.
Just look at some of other examples in the sibling comments!
Might as well ask here. On our teams, we have the occasional developer that is insistent on using Typescript in an OO fashion. This has always struck me as square peg round hole. Even though I come from an OO background, Typescript strict settings really seem to push me in a direction of using interfaces and types for type signatures, and almost never classes, subclasses, instantiated objects. I don't have a very good answer for "yeah, but what about dependency injection"? though. Any thoughts from anyone?
>I don't have a very good answer for "yeah, but what about dependency injection"? though. Any thoughts from anyone?
There is no "dependency injection" in a functional world, take this opportunity to show your colleague how FP makes their life easier. It's just a function.
Instead of a class, implementing an interface, created by a factory, requiring a constructor, all you need is a function.
Anything that was previously a "dependency" in OO terms is now an argument to your function. If you want to "inject" that dependency you simply partially apply your function, the result is then of course a function with that "dependency" "injected" which can then be used as usual. In JavaScript there's even a nifty built-in prototype method on every function called `Function.prototype.bind` which allows you to do the partial application to create the "dependency injected" function!
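A minimal sketch of that style (hypothetical names):

    type SendEmail = (to: string, body: string) => Promise<void>;

    // The "dependency" is just the first argument...
    async function notifyUser(send: SendEmail, userId: string): Promise<void> {
      await send(`${userId}@example.com`, "Hello!");
    }

    declare const smtpSend: SendEmail;

    // ...and "injection" is partial application:
    const notify = (userId: string) => notifyUser(smtpSend, userId);
    // or equivalently: const notify = notifyUser.bind(null, smtpSend);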
I think the problem with DI is less about how dependencies are passed in and more about how there are usually only two implementations: the real one and a test one. The test one is a mock/stub based on assumed behaviours of the real thing. Obviously, such an approach is essential in some situations, but in general it is mostly bad.
The problem with that is that eventually you want to have the dependencies automatically injected (like how an IoC container would be used in a typical OOP application).
Sure, there's solutions for this in the FP world, but in my experience they tend to have their own drawbacks. Admittedly, I've only ever used TS on the front-end (with no DI), so I've never really looked at what FP-style libraries exist for this.
You could model that with one function taking that dependency and returning a new one with it included. Then you just use the one that has it included instead of the one that doesn't.
Are you seeing any decorator or OOP style in React? Yet plenty of dependencies are automatically injected without the OOP jargon. If you want to see a pure JS example of auto DI, go check Angular 1.5's dependency system, or Vue.js.
I totally agree there are other good/better options (as evidenced by react and many others), but I don't think this is an entirely fair comparison. The main "DI" alternatives in React are hooks, context, and imports, none of which could really replace traditional IoC containers on the backend without some modifications.
I think the main thing that makes automatic DI easy with OOP is the clear separation between dependencies (constructor parameters), and method parameters. Admittedly, this is totally possible with FP, but requires some good conventions and doesn't seem to be nearly as popular.
I also think most OOP langs have terrible syntax for constructors, which makes it look clunkier than it really is. Primary constructors (e.g. kotlin) make this not much more verbose than the FP alternative.
> Angular 1.5 dep system
Yes it's vanilla JS, but I doubt there's many FP people that would call that functional. The examples I saw are all just using JS functions as "discount classes", which definitely aren't pure or functional.
Because in typical back-end software, the dependency tree can get very big very fast. Having an IoC container means devs only have to declare direct dependencies for each service, rather than constructing the entire tree (and figure out the necessary ordering, etc).
I get this question sometimes from a developer new to my team asking if it’s ok to add OOP code since most of the existing code is just functions.
My view on that is that it’s ok to use OOP and define classes if you are really defining an OOP-style object. Back in the 90s it was taught that an object has identity, state and behaviour. So if you don’t have all three, it’s not really an object in the OOP style.
Looking at it through this lens helps make it clearer when you should add classes or just stick to functions and closures.
Indeed, if you want to use emitDecoratorMetadata for automatic dependency injection, you should use classes. If the library itself takes advantage (again likely due to decorators) of classes e.g. https://typegraphql.com/docs/getting-started.html then yes, classes are again a fine choice.
The general answer is that they're useful when the type also needs to have a run-time representation (and metadata). Otherwise, not really.
I keep our stack mostly functional and that’s how it was done before me on our current project.
A few objects contain state, like say a DB connection/client or a RequestContext you pass down through your request handler middlewares. Those are an OOP class with an interface definition.
Everything else is just functions and closures. We also generate interface objects from our GraphQL types but that’s not a real OOP type, it’s just an interface.
If you keep to that structure, you’ll largely avoid the whole polymorphism OOP type hierarchy hell and all the dangers that come with it.
As for DI (dependency injection), that’s honestly just a fancy form of passing parameters down through function calls. Technically, the RequestContext I mentioned before is a “ball of mud” provider pattern DI code smell. So maybe down the road we will use DI to create more constrained context scopes.
If I do go that route for DI, I would likely strongly follow a CQRS style class pattern to inject objects and keep them nicely named and organized. Would also fit nicely pattern wise with the existing function + closures architecture.
But yeah, overall, stick to functions and closures, use OOP style classes sparingly and you’ll get the best of all worlds.
If you got your first taste of TypeScript from Angular and have a full-stack background in C#/Java, the class-based style will make you feel right at home.
React seems to oscillate between the 2 styles.
My recent work in Svelte tends to favor functions and types.
IMO the biggest benefit of classes is the code organization it brings. Have you ever seen a “util” class or folder? That’s what tends to happen to a code base without strong cohesion. It becomes hard to find anything.
With typescript/js you have modules as a pretty good substitute.
What I love about typescript is that you can mix the two.
Mostly classless module based with the occasional class (logger with a constructor to pass in the current module name for example) seems to be what I like most now. Use what makes sense.
I successfully used functional programming with DI and it was quite pleasant!
Because DI is just “give me the dependencies I need when I declare I need it” you can use simple classes as scopes similar to CQRS patterns and continue doing functional programming from there.
It’s quite neat how you can interchange between the two and have it work rather nicely.
Technically, you could even do the same thing with closures and avoid OOP style classes all together even.
DI lives on, it just looks a little bit different than the constructor injection we’re used to seeing in OOP.
As I am not familiar with TS, I am gonna use F# as an example:
let sort (iterator: 'b -> 'a list) (collector: 'a list -> 'c) (comparator: 'a -> 'a -> bool) (collection: 'b) : 'c =
...
I added the type annotations to hopefully make it clear. The iterator is a helper to convert some arbitrary collection to a list, with the collector turning it back into a collection type again (not necessarily the same.)
For example, the iterator could map from a tree to a list, and the collector then to an array. Or if you already have a list and want a list back, you could pass in the identity function for those.
One call could be the following:
sort id id (>) somelist
Hope it isn't too unreadable.
Edit: Adjusted the order of the arguments, as the original order wouldn't work too well with partial application.
Some native JS constructs are class-based (or constructed using the new keyword). Promises, for example [0]. Nothing wrong with the odd class here and there.
Though I would be wary of introducing patterns and paradigms that make sense in a different language when Typescript offers an ultimately simpler solution. Working against the grain helps nobody. Goes for both OOP and FP, really.
ES modules, functions, and well designed TS models get you 95% of the way.
This. Creating classes to wrap dependencies is a pattern only needed because of language limitations. With JS/TS, you can mock at the import statement level, so no need to twist your code to abstract away importing.
Also, even if you didn't want to mock that way, you can get dependency injection with functions just by taking a parameter for a dependency. If dependency injection is the only reason you have to use a class, you probably shouldn't use a class.
Request-scoped DI (as seen in ASP.NET MVC) is great on the backend for servicing requests. You can ask for e.g. a class representing the current user information to be injected anywhere, or to keep track of request-associated state like opentelemetry spans, or a transaction, etc. The alternative is to pass the user information class or transaction to all other services, which can be annoying
It's rarely seen in the ecosystem as a solution, unfortunately (everyone is passing all arguments all the time), but it's one of the rare places where this is still useful. I've had bad experience with the alternative (continuation-local storage) and it's not nearly as elegant.
IoC is a nice paradigm that I tried using with a couple different libraries in JS, but they all felt like they had missing features and sometimes were annoying to run tests with (depending on how they implemented the IoC container within their frameworks). The work Microsoft puts into their web frameworks to make them so cohesive with the rest of their ecosystem libraries is sometimes underrated.
I think the word "interface" means something conceptually different in different languages.
in Java it means "you should implement this contract"
in Typescript it means "this data type has this particular shape. It may have methods, too".
in Go it means "I'd like users of this code to implement these methods" (client interfaces).
In all cases you have to work differently with them. It's not even about OOP, I think, to the point where I'm not sure now if the keyword 'interface' is part of OOP at all.
We use Classes extensively in our code because it is an Electron app that interfaces with hardware. OO Typescript is incredibly useful, as we have clearly defined objects and inheritance schemes that would be a nightmare in native JS. The syntactic sugar of TS classes delivers an enormously powerful verification system.
I like classes but I'm in the minority. They are just syntactic sugar over functions, but I like the explicitness of them. If a closure works for you, that's great but there isn't an objective reason to use one over the other.
Interfaces are late bound, which, in conjunction with the concept of object identity and state, is OO. Subclassing is merely one specific approach to code reuse within the OO paradigm.
I know the universe at large has moved away from eclipse, but I loved their rich tooltips where you had nice structured representation (not just a blob of text from lsp) and could click through and navigate the type hierarchy.
Those hacks work, but in practice I wouldn't classify them as "good". You end up having to look at a ton of types, including in library code like React etc., that can be quite complex. Having to stop and try to wrap your head around those on the fly is terrible ergonomics.
You might also be able to leverage typeof in conjunction with Id<T>. Like if you have some parameter x with a complex type you can create a temporary variable of type Id<typeof x> to avoid looking up any additional types. Totally agree it should be supported out of the box, however.
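Presumably the `Id<T>` referred to is the common "expand one level" helper, something like:

    // Mapping over T forces the tooltip to show the computed properties
    // instead of the original alias chain.
    type Id<T> = { [K in keyof T]: T[K] } & {};

    declare const x: Partial<Record<"a" | "b", number>>;
    type Expanded = Id<typeof x>; // hovers as { a?: number; b?: number }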
Although, as a Haskell developer, I am curious what type system TS is using (System F? Intuitionist? etc) and what limitations one can expect. Aside from the syntax of TS being what it is, what are the trade-offs and limitations?
I was under the impression (and this was years ago -- things are probably different now?) that TS's type system wasn't sound (in the mathematical logic sense).
I don't believe it is using any type system described outside of the TypeScript compiler. The goal of the system is to accurately describe real-world JavaScript programs and semantics, not to introduce a formalization that existed elsewhere and impose it on JS.
As a consequence, it has aspects of structural types, dependent types, type narrowing, and myriad other features that exist solely to model real-world JavaScript.
Non-soundness is sort of a feature, it lets you force your way through and just say "trust me, this is a Thing" when it's just hard (or impossible) to make TypeScript see that. In practice, you can write large code bases where you only need to do this every 1000 lines or so. Not ideal, but better than no typing.
You can make "believe_me" assertions, which is incredibly useful when writing advanced (metaprogramming-heavy) library code. The idea is to contain and heavily test the small "unsafe" library part and isolate it from the rest of the code, then enjoy the advanced type transformations and checks in the "normal" application code.
For example, an SQL query builder library may internally do unchecked assertions about the type of the result row that a query transformation would produce (e.g. group_by), however assuming that part is correct, all application code using the query builder's group_by method would benefit from the correct row types being produced by `group_by` which can then be matched against the rest of the application code.
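A rough sketch of that pattern (all names hypothetical, nothing from a real library): the single unchecked assertion lives inside the library function, and callers only ever see the precise computed row type.

type GroupedRow<R, K extends keyof R> = Pick<R, K> & { count: number }

function groupBy<R, K extends keyof R>(rows: R[], key: K): GroupedRow<R, K>[] {
  const counts = new Map<R[K], number>()
  for (const row of rows) {
    counts.set(row[key], (counts.get(row[key]) ?? 0) + 1)
  }
  // the "believe_me" moment: one contained assertion, covered by tests,
  // so that every caller gets a precisely typed result row
  return [...counts.entries()].map(
    ([value, count]) => ({ [key]: value, count } as unknown as GroupedRow<R, K>)
  )
}

const totals = groupBy([{ city: "Oslo", amount: 1 }], "city")
// totals[0].city is string, totals[0].count is number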
Any isn’t required. The go-to example of unsoundness is the Cat[] ref that you alias as an Animal[], append a Dog to, then map over the original ref calling ‘meow()’ on each entry.
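Spelled out, the classic example looks like this -- it type-checks and then throws at runtime:

class Animal { name = "generic" }
class Cat extends Animal { meow() { return "meow" } }
class Dog extends Animal { bark() { return "woof" } }

const cats: Cat[] = [new Cat()]
const animals: Animal[] = cats   // allowed: arrays are covariant in TS
animals.push(new Dog())          // also allowed -- but now cats[1] is a Dog
cats.map(c => c.meow())          // type-checks, explodes at runtime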
JavaScript arrays are heterogeneous, and TypeScript permits covariance here because the underlying language lacks any mechanism to freeze shared references to the array and prevent it.
TypeScript could restrict this to be through an operation that creates a new array (e.g.: via .slice()), however that would impose a performance deficit versus vanilla JavaScript. It's not acceptable to the TypeScript project to impose a cost on what would, in JS, just be array assignment or argument passing.
It would be a neat idea to allow a "strictCovariance" mode to allow covariance only with readonly arrays - I think that might solve the issue? i.e.: I can cast "Cat[]" to a "readonly Animal[]", but not to a mutable "Animal[]".
One thing I struggled with a lot until I got it is that TypeScript types and JavaScript code live in totally separate universes, and you cannot cross from the type world to JavaScript values because the types are erased when transpiling -- meaning they can't leave any trace.
This means that it's impossible to write this function:
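For instance, something in this spirit (a hypothetical signature, purely for illustration):

// hypothetical: a function that would need T to survive to runtime
declare function propertyNames<T>(): Array<keyof T>

// propertyNames<{ a: number; b: string }>() "should" return ["a", "b"],
// but no implementation can exist -- the type information is erased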
Put another way, you can't do reflection with TypeScript.
You can write that function in C++ templates, and I naively assumed that it's possible in TypeScript too, since from my observations TypeScript allows complex typing to be expressed easier in general than C++.
I think that’s actually a positive feature of TypeScript -- a useful limitation. Reflection is generally a bad idea and it’s good to be forced to do without it.
It is a bit annoying sometimes that you can’t have overloaded functions with different types, but in that case you can usually just give the overloads different names, and usually that’s better for readability anyway. (Or if you really want to, write one function and use JS reflection to do the overloading manually) (but you really don’t!)
You’re right, I was just thinking about dynamic reflection.
My concern is with fiddly runtime stuff like “instanceof” -- I find that can go wrong in surprising ways. Better to just trust values to implement the interface they say they implement rather than trying to forcibly cast them.
Wanted to note that you can do function signature overloading in TypeScript -- you just have to have a single function at the bottom that encapsulates all the different signatures and then branches its logic dynamically based on the values it's given: https://stackoverflow.com/questions/13212625/typescript-func...
I actually think this is a super cool and elegant way to do overloading
I’m just thinking of separate functions like this, which you can do in many languages:
add(n: number)
add(s: string)
In TypeScript (and also in C and Objective-C) you need to give them different names:
addNumber(n: number)
addString(s: string)
But see also brundolf’s reply -- if you have a single function that happens to take multiple overloads, TS does let you declare each overload; but it still needs a single implementation (likely with some runtime dispatching) in that case. I haven’t used that much myself, but maybe I should give it a go!
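Roughly what that looks like with a made-up `add` (the overload signatures are what callers see; the single implementation dispatches on the runtime value):

// two overload signatures visible to callers...
function add(n: number): number
function add(s: string): string
// ...one implementation that branches at runtime
function add(value: number | string): number | string {
  return typeof value === "number" ? value + 1 : value + "!"
}

add(1)      // typed as number
add("hi")   // typed as string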
You might be interested in DeepKit[0]. In short, it enables introspection/reflection of typescript types at runtime, and builds off of that to do super interesting things like an ORM, an API framework, ... etc.
Thanks, their @deepkit/type is exactly what I would need, but it seems they do that by a TypeScript plugin, and I'm in an esbuild setup which completely bypasses TypeScript.
But I will check if maybe I can use DeepKit to auto-generate files with the reflection info I need as a separate build step.
That boundary is why I have a hard time taking TypeScript seriously. A type system that doesn't participate in code generation is what... just for linting and documentation basically? Is that what we have become? Is that all people think a type system is good for?
Worse there is one value that is both a user-definable TypeScript type and a JS value.
> That boundary is why I have a hard time taking TypeScript seriously. A type system that doesn't participate in code generation is what
How is being able to check code correctness at compile-time even close to "just linting and documentation basically"? This has to be a bad faith argument
> Worse there is one value that is both a user-definable TypeScript type and a JS value.
What does this even mean? I don't think you understand TypeScript.
I agree that TypeScript, in and of itself, doesn't change the way the code executes. This is self-evident.
But the idea that that makes it simply "fancy documentation" is hilarious. I have never seen documentation that can tell you at compile time how your code will behave; it's fundamentally stupid to argue that static type checking is in any way comparable to documentation.
Seems like a good start, but a lot of interesting things promised in the introduction don't really exist in the content so far.
Maybe finishing one of the more advanced chapters would be enough to lure people who are more experienced to check back on progress / pay / whatever you want traffic for.
> And I don't know how to express the correct solution (i.e. where we actually assert that A and B are object types).
You can do this:
function merge<
  A extends Record<string, unknown>,
  B extends Record<string, unknown>
>(a: A, b: B): A & B {
  return { ...a, ...b }
}
const result = merge({ a: 1 }, { b: 2 })
The correct type is quite complex and depends on whether or not `exactOptionalPropertyTypes` is enabled.
EDIT: I think this is correct for when `exactOptionalPropertyTypes` is off.
type OptionalKeys<T extends { [key in symbol | string | number]?: unknown }> =
  { [K in keyof T]: {} extends Pick<T, K> ? K : never }[keyof T]
function merge<
  A extends { [K in symbol | string | number]?: unknown },
  B extends { [K in symbol | string | number]?: unknown },
>(a: A, b: B): {
  [K in Exclude<keyof B, keyof A>]: B[K]
} & {
  [K in Exclude<keyof A, keyof B>]: A[K]
} & {
  [K in keyof A & keyof B]: K extends OptionalKeys<B> ? A[K] | Exclude<B[K], undefined> : B[K];
} {
  // the implementation can't be verified against the computed return type,
  // so it falls back to an assertion
  return { ...a, ...b } as any
}
That's for when `exactOptionalPropertyTypes` is enabled. With it disabled, you'd replace `Exclude<B[K], undefined>` with `B[K]`.
As to whether this is a good idea. Ah... it's not :P
Wow yeah it gets way too complex if you want to track the types of properties within the objects too! If that is the case, then I would just prefer to do this instead as it is much simpler:
type Value = { a: string }
const result = merge<Value, Value>({ a: "abc" }, { a: "fdsfsd" })
Avoid the Object and {} types, as they mean 'any non-nullish value'.
This is a point of confusion for many developers, who think it means 'any object type'.
Linters for TypeScript recommend using `Record<string, any>` instead of `object`, since using the `object` type is misleading and can make it harder to use as intended.
Because you only want to merge two objects whose keys are strings. "object" is represented as Record<any, any>, which would mean you could use any type as a key. Here is an example:
function merge<
  A extends object,
  B extends object
>(a: A, b: B): A & B {
  return { ...a, ...b }
}
const result = merge(() => {}, () => {}) // should fail!
const anotherResult = merge([1, 2], [3, 4]) // should fail!
Yeah, TypeScript gets funky around the boundary of "things that can only be objects", because JavaScript itself gets funky around "what is an object"
Technically TypeScript "object types" only describe properties of some value. And in JavaScript... arrays have properties, and primitives have properties. Arrays even have a prototype object and can be indexed with string keys. So... {} doesn't actually mean "any object", it means "any value"
At its boundaries, TypeScript has blind-spots that can't realistically be made totally sound. So the best way to think of it is as a 90% solution to type-safety (which is still very helpful!)
I'm always impressed at how much the type system in TS is capable of. The provided examples remind me of what I'd expect in something like Rust; it brings me joy that we can do this sort of stuff in our frontend code and tooling these days.
This is fantastic! I've often had to advise coworkers to read documentation for OCaml or Rust to learn idiomatic, functional, statically typed programming. It's great to see a Typescript specific resource with exercises.
my HTML blog only has 30 lines of JS. i didn't need a framework and i certainly don't need types. grumble grumble web should just be simple html javascript grumble serverside render in my pet language grumble /s
This looks fantastic, not just for people learning Typescript, but I'd think it would be useful (when completed) as an introduction to generics and type-level thinking etc for lots of newcomers to those areas.
Types are here to help but with this power comes great responsibility.
The deeper an input reaches into your library/method, the more it makes sense to put a well-defined type contract on it.
Exhaustive type definitions may show that the author doesn't have an understanding of the required interface, abstractions, or patterns to use.
I've used C# professionally for 20 years and I love the type system. It helps you state the contract of the types and functions you expect, yet you can overcomplicate things if you really want to. C# makes it "hard" to create method type definitions inline, so you use interfaces to express the contract instead. This helps avoid the inline type definitions you see in TypeScript.
My approach is to use types in TypeScript the way I'm used to in C#.
One of the worst things about Next.js, Remix, etc. is their file-system-driven routes. I really wish these frameworks would stop relying so much on hidden magic. Conventions are good, but it's peculiar that those conventions aren't expressed in code.
Previously, I wrote my route definitions with types for both path params and query params in one file, and used TypeScript to enforce that the equivalent back-end definitions (async loaders etc.) and front-end definitions (React components) were kept in sync.
When I first implemented this in a previous project, I found many instances where routes were expecting query params but they were being dropped off (e.g. post login redirects).
Supporting things like parameters for nested routes certainly means the TS types themselves are non-trivial, but they're the kind of thing you write (and document) once and benefit from daily.
Examples of stuff that can and should be 100% type checked:
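For instance, building links for a route with path and query params -- a minimal sketch with made-up route names and no particular framework:

// hypothetical sketch: params for each route live next to the path definition
const routes = {
  user: (p: { userId: string }, q: { tab?: "posts" | "likes" } = {}) =>
    `/users/${encodeURIComponent(p.userId)}` + (q.tab ? `?tab=${q.tab}` : ""),
} as const

const href = routes.user({ userId: "42" }, { tab: "posts" })
// routes.user({})                 -- error: userId is missing
// routes.user({ userId: 42 })     -- error: number is not a string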
Looks like remix-routes is using string literals, which can be a pain for refactoring and jumping to usages etc. But it's certainly better than nothing. I'll give it a go. Thanks!
This seems like a great time to bring up "Why Static Languages Suffer From Complexity"[1], which explains the "statics-dynamics biformity" that leads to languages like TypeScript that are actually two languages: the runtime one and the compile-time type-system one.
Really interesting articles, will definitely have a look tonight.
After so many years of JS programming, moving to a company that uses TS extensively (at a huge scale) feels life-changing. You don't even know the effect until you use it daily. Even so, daily usage of TypeScript in a large web application doesn't seem to reach its full potential. I feel like library creators and maintainers use it more fully in the definitions they create (e.g. the Redux Toolkit types are mind-blowing).
Thanks for creating this lesson, it will definitely teach me a lot
Looks good. Given the course is unfinished, it's a shame the only way to get updates is via a third party corporate web site ("Twitter"). RSS would be the obvious way to go.
I was agreeing with you, and adding that even if you have a Twitter account, you would still be likely to miss an update to the tutorial among the author's other Twitter posts.
This is great. Can you please create an email list where we can sign up to be notified when new chapters are available? If I follow you on Twitter, I will invariably miss any announcements.
Just came here to say that this is really nice. Fantastic way to play with Typescript Types even for people with decent knowledge of Typescript as I would consider myself.
Speaking of types, what are your thoughts on fp-ts if you've used it? It brings functional programming concepts like in Haskell such as monads into TypeScript.
haven’t used it myself but other teams at the company I work for have tried with mixed results.
It’s very opinionated about the way you structure your code and basically makes anything that's not fully fp-ts hard to integrate, and it's also quite hard for general JS people to wrap their heads around.
It’s been designed by FP people for FP people and if there are some on your team who are not fully on board or are just starting to learn FP - expect lots of friction.
At my company it was mostly scala coders and “cats” lovers (category theory stuff lib for scala) mixed in with regular nodejs devs and I could sense a lot of animosity around fp-ts and its use.
But on a more practical note, the more they converted their codebase to fp-ts, the more they reported massive compile-time slowness. Like it would start to take minutes to compile their relatively isolated and straightforward services.
From what I gathered, if you want to go fp-ts there's just too much friction, and you're much better off picking a language designed from the bottom up for that: Scala / OCaml / Elixir / etc.
To be honest, once I became comfortable enough with the more advanced TS features, I found you can write plain old JavaScript in a very functional style, and that's actually pretty great, especially if you throw date-fns, lodash/fp or Ramda into the mix; it remains largely approachable to people outside of FP and you can easily integrate external libs.
That sounds about what I've expected. Frankly in a TS codebase with many other devs that are not versed in FP, I wouldn't want to bring in a pure FP library because it, like you said, needs everyone to understand the "meta-language" of FP so to speak, such as how monads work, not having raw side effects, mutation etc.
Ramda et al. seem like a good compromise. Looking through its docs, though, doesn't JS already have a lot of this stuff covered? e.g. filter, map, reduce, etc. What new stuff does it bring that covers, say, 90% of common use cases?
Well currying is a big one, and also functions like flow/pipe (in lodash/fp) mimic piping in FP languages which allows for very nice expressions of business logic for modifying data.
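For example, a small sketch with lodash/fp (assuming the usual @types/lodash setup; flow's typings can be lossy, as noted below):

import { flow, filter, map, sortBy } from "lodash/fp"

type Order = { customer: string; total: number; paid: boolean }

// reads top-to-bottom like a pipeline of business rules
const topPaidCustomers = flow(
  filter((o: Order) => o.paid),
  sortBy((o: Order) => -o.total),
  map((o: Order) => o.customer)
)

topPaidCustomers([{ customer: "Ada", total: 10, paid: true }])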
You can sometimes lose the type though, so I prefer to do it with off-the-shelf filter/map/reduce even if it's a bit unsightly.
I personally reach for lodash/fp for more specific expressions like orderBy or groupBy. There are some very nice well documented and powerful primitives there.
On one team a guy got so enamored with the functional style that he went ahead and rewrote mountains of logic in lodash/fp. It turned out quite hard to maintain for the rest of the team, so there's also the danger of overdoing it for everyone else.
I haven't used fp-ts directly, but I use an adjacent package that declares fp-ts as a peer dependency: io-ts. I've used it almost exclusively for easier type management during deserialization. In vanilla TypeScript I would have defined an interface and a user-defined type guard to handle deserialization:
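Roughly like this, with a hypothetical message shape standing in for the real one:

interface ChatMessage {
  id: string
  body: string
}

// hand-written guard: duplicates the interface, easy to let drift
function isChatMessage(value: unknown): value is ChatMessage {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Record<string, unknown>).id === "string" &&
    typeof (value as Record<string, unknown>).body === "string"
  )
}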
but user-defined type guards basically duplicate the interface, are prone to error, and can be very verbose. io-ts solves this by creating a run-time schema from which build-time types can be inferred, giving you both an interface and an automatically generated type guard:
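With io-ts it looks roughly like this (same hypothetical shape):

import * as t from "io-ts"

// the runtime schema is the single source of truth
const ChatMessage = t.type({
  id: t.string,
  body: t.string,
})

// the build-time type is inferred from it
type ChatMessage = t.TypeOf<typeof ChatMessage>

// ChatMessage.is(value) is the generated guard;
// ChatMessage.decode(value) returns an Either with detailed errors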
Very nifty for my client/server monorepo using Yarn workspaces where the client and server message types are basically just a union of interfaces (of various complexity) defined in io-ts. Then I can just:
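Something along these lines, with hypothetical codecs standing in for the real message types:

import * as t from "io-ts"
import { isRight } from "fp-ts/Either"

const Ping = t.type({ kind: t.literal("ping") })
const Chat = t.type({ kind: t.literal("chat"), body: t.string })

// shared between the client and server packages in the workspace
const ServerMessage = t.union([Ping, Chat])
type ServerMessage = t.TypeOf<typeof ServerMessage>

function onMessage(raw: unknown) {
  const decoded = ServerMessage.decode(raw)
  if (isRight(decoded)) {
    // decoded.right is a fully typed ServerMessage; narrow on `kind` as usual
  }
}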
fp-ts [0] and the accompanying io-ts [1] are very well designed and neatly implemented. I'd consider them the reference for all things FP in Typescript.
In practice though I find that they don't mesh well with the language and ecosystem at large. Using them in a React/Vue/Whatever app will catch you at every step, as neither the language nor the frameworks have these principles at their core. It takes a lot of effort to escape from their gravitational pull. Using Zod [2] for your parsing needs and strict TS settings for the rest feel more natural.
It could work in a framework-agnostic backend or logic-heavy codebase where the majority of the devs are in to FP and really want / have to use Typescript.
If you stick to a few useful types like `Option` and `Either`/`TaskEither` you can get a lot of value out of it when writing server side code, particularly when combined with `io-ts` for safely parsing data.
If you go all in and use every utility it provides to write super succinct FP code, it can get pretty unreadable.
Similar to the paid tool https://www.executeprogram.com/ , which goes much more in-depth with TypeScript's type system (as well as with other languages).
reminds me of C++ templated types being used for similar... except in this case there is no performance advantage from removing run-time logic by force.
This kind of stuff is often confusing when working with teams. Using simple, dumb constructs is always the better option when you can.
Yeah but how is Turing completeness directly relevant to that? Article doesn't seem to explain, just says "Turing Complete" in the title. Again, so what? I suppose it's somewhat indirectly vaguely reassuring?
Hi n
I have a question. I will keep it as simple and short as possible. I joined a small team working on an internal invoicing tool. The backend is Spring. The front-end is ExtJS, used in (to me) a very peculiar way: it emulates Java classes, with Ext.define declarations using FQN names, e.g. "com.projectName.ds.Board.ui.extjs" (as a string, casing important).
Then in the code the class is instantiated by its FQN used as an identifier, e.g.: var Board = new com.projectName.ds.Board.ui.extjs();
There are also a lot of FQNs with short namespaces; some are associated with business short names and others, like Dc, Ds, Frame, belong to the code architecture domain (data controller, data store, a frame on the screen). How could I use TypeScript to improve the developer experience here? I'm from the React world; I've programmed for 4 years only in TypeScript, React, Node and Mongo. Thanks!
Less is more.